A variety of systems today are run by artificial intelligence. Across every industry, technology has left no stone unturned in the pursuit of innovation. But when it comes to nuclear codes, is it wise to hand control to an AI? A pair of researchers associated with the US Air Force suggest doing exactly that.
Air Force Institute of Technology associate dean Curtis McGiffin and Louisiana Tech Research Institute researcher Adam Lowther, also affiliated with the Air Force, co-wrote an article — with the ominous title “America Needs a ‘Dead Hand’” — which argues that the US needs to develop “an automated strategic response system based on artificial intelligence.”
In layman's terms, they want to give an AI the nuclear codes. And yes, as the authors admit, it sounds a lot like the "Doomsday Machine" from Stanley Kubrick's 1964 satire "Dr. Strangelove."
As noted by Futurism, the "Dead Hand" of the title refers to the Soviet Union's semi-automated system that would have launched nuclear weapons if certain conditions were met, including the death of the country's leader. The AI-powered system suggested by Lowther and McGiffin, though, wouldn't even wait for a first strike against the US to occur; it would know what to do ahead of time.
According to the researchers, “it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”
As the article also notes, "attack-time compression" is the phenomenon by which modern technologies, including highly sensitive radar and near-instantaneous communication, have drastically reduced the window between detecting an attack and deciding how to respond. The challenge: modern weapons, particularly hypersonic cruise missiles and vehicles, cut that window even further.
Lowther and McGiffin argue that these new technologies are shrinking America's senior-leader decision time to such a narrow window that it may soon be impossible to effectively detect, decide, and direct nuclear forces in time.
Their suggestion, in effect, is to use an AI-powered system to negate any advantage an enemy might gain from a surprise attack. It would replace what Lowther and McGiffin describe as a "system of systems, processes, and people" that "must inevitably be capable of detecting launches anywhere in the world and have the ability to launch a nuclear strike against an adversary."
Not everyone is convinced. According to Bulletin of the Atomic Scientists editor Matt Field, handing the nuclear codes to an AI could have plenty of negative side effects. One of them is automation bias: as Field points out in his piece, people tend to blindly trust what machines tell them, even favoring automated decision-making over human decision-making. He also argues that such an AI simply wouldn't have much real-world data to learn from, which means most of the data fed to it would have to be simulated.