Military AI Start-Ups Are Booming, Thanks to the Russia-Ukraine War

Early in March, a radio transmission between several Russian soldiers in Ukraine was recorded over an unencrypted channel. It captured the soldiers evacuating in fear and confusion after coming under artillery fire. An AI was listening to what they were saying. Artificial intelligence algorithms built by Primer, a US company that provides AI services for intelligence analysts, were used to automatically record, transcribe, translate, and analyze their words. The use of AI systems to monitor Russia's army at scale illustrates the growing significance of sophisticated open-source intelligence in military conflicts, even if it is unclear whether Ukrainian forces also intercepted the transmission.
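The record-transcribe-translate-analyze flow described above can be sketched as a simple chain of stages. This is purely illustrative: the stage functions below are hypothetical stand-ins, not Primer's actual models or API, and real systems would plug speech-to-text and machine-translation models into each step.

```python
# Illustrative sketch of an intercept-processing pipeline as described in the
# article. Every function here is a hypothetical stand-in, not a real product API.

def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text model; returns a canned Russian phrase.
    return "отходим, отходим!"

def translate(text: str) -> str:
    # Stand-in for Russian-to-English machine translation.
    phrases = {"отходим, отходим!": "we are pulling back, pulling back!"}
    return phrases.get(text, text)

def analyze(text: str) -> dict:
    # Stand-in for NLP analysis: flag keywords an analyst might care about.
    keywords = ("pulling back", "artillery", "evacuate")
    found = [k for k in keywords if k in text]
    return {"text": text, "flags": found}

def process_intercept(audio: bytes) -> dict:
    # Record -> transcribe -> translate -> analyze, as in the article.
    return analyze(translate(transcribe(audio)))

result = process_intercept(b"fake-audio")
print(result["flags"])  # -> ['pulling back']
```

In practice, the hard parts are the models themselves, noisy battlefield audio, and military jargon that general-purpose speech and translation systems handle poorly.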

Russian transmissions that were not encrypted have been uploaded online, translated, and discussed on social media. Similar scrutiny has been given to other data sources, such as social media posts and smartphone videos. But what is particularly new is how natural language processing technology is being used to examine Russian military communications. The Ukrainian army still frequently uses human analysts who toil away in a room someplace, decoding messages and deciphering orders to make sense of intercepted communications.

The CEO of data analytics firm Palantir, Alexander Karp, addressed European politicians two weeks after Russia invaded Ukraine in February. In an open letter, he argued that with war on their doorstep, Europeans needed Silicon Valley's help to modernize their arsenals. According to Karp, if countries in Europe want to "remain strong enough to defeat the threat of foreign occupation," they must embrace "the relationship between technology and the state, between disruptive companies that aim to loosen the hold of established contractors and the federal government ministries with funding."

Militaries are answering the call. On June 30, NATO announced the creation of a $1 billion innovation fund that will invest in early-stage businesses and venture capital firms working on "priority" technologies such as automation, big-data processing, and artificial intelligence. Since the conflict began, Germany has earmarked just under half a billion dollars for research and artificial intelligence within a $100 billion infusion into its military, while the UK has launched a new AI strategy exclusively for defense.

Kenneth Payne, director of defense studies research at King's College London and author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict, asserts that "war is a catalyst for change." The conflict in Ukraine has made the push to bring more AI capabilities to the battlefield more urgent. Companies like Palantir, which are seeking to profit as militaries scramble to upgrade their arsenals with the newest technologies, stand to benefit the most. But as the technology advances, and as rules limiting its use seem as remote as ever, long-standing ethical questions about AI in conflict have become more pressing.

The military's relationship with technology hasn't always been friendly. In 2018, Google withdrew from the Pentagon's Project Maven, an effort to develop image recognition technologies to enhance drone attacks, as a result of employee protests and indignation. The incident sparked a contentious debate about human rights and the ethics of using AI for autonomous weaponry. It also prompted well-known AI researchers to pledge not to develop lethal AI, including Yoshua Bengio, a Turing Award recipient, and Demis Hassabis, Shane Legg, and Mustafa Suleyman, the founders of prominent AI lab DeepMind.

Why AI?

Companies that market military AI make bold claims about the capabilities of their products. They claim it can assist with tasks both routine and dangerous, such as analyzing satellite data, reviewing résumés, and identifying patterns in data so soldiers can act more quickly in combat. Image recognition software can aid target identification. Autonomous drones can be used for surveillance or attacks in the air, at sea, or on land, as well as to help soldiers deliver supplies more safely than is possible on foot.

According to Payne, these technologies are still in their infancy on the battlefield, and militaries are currently in a phase of experimentation, often with mixed results. There are many instances of AI companies making lofty claims about their products that ultimately prove untrue, and battle zones are perhaps among the most technologically difficult settings in which to deploy AI because of the dearth of suitable training data. In a study for the United Nations Institute for Disarmament Research, Arthur Holland Michel, a specialist in drones and other surveillance technology, said this could cause autonomous systems to fail in a "complex and unpredictable manner."

Nevertheless, several militaries are pressing ahead. In 2021, the British army proudly declared, in a cryptic news release, that it had deployed AI in a military operation for the first time to provide intelligence on the surrounding environment and terrain. The US is developing autonomous military vehicles in collaboration with startups. The US and British forces are already creating swarms of hundreds or even thousands of autonomous drones, which may one day prove to be potent and deadly weapons. Many specialists are concerned. This approach, according to Meredith Whittaker, faculty director at the AI Now Institute and senior AI adviser at the Federal Trade Commission, is more about enriching tech corporations than improving military operations.

In an article for Prospect magazine co-written with Lucy Suchman, a sociology professor at Lancaster University, she argued that proponents of AI are stoking Cold War rhetoric and promoting a narrative that portrays Big Tech as "critical national infrastructure," too big and significant to be broken up or regulated. The two caution against treating military adoption of AI as inevitable when it is in fact an active choice, with significant ethical considerations and trade-offs.

AI war chest

The push for greater AI in defense has grown louder and louder over the past few years as the Maven controversy fades into history. Former Google CEO Eric Schmidt, who chaired the National Security Commission on Artificial Intelligence (NSCAI), has been one of the most vocal in calling for the US to embrace military AI more quickly. In a report published last year detailing steps the US should take to be AI-ready by 2025, the NSCAI urged the US military to invest $8 billion annually in these technologies or risk falling behind China.

A study by Georgetown's Center for Security and Emerging Technology estimates that the Chinese military invests at least $1.6 billion annually in AI. According to Lauren Kahn, a research fellow at the Council on Foreign Relations, there is already a significant push in the US to catch up. The US Department of Defense requested $874 million for AI in 2022, according to research published in March 2022, though that figure does not cover all of the department's AI initiatives.

Analytics Insight
www.analyticsinsight.net