Superintelligent AI: Can We Control It Before It's a Threat to Humanity?

New findings from the Max Planck Institute say it is impossible to control superintelligent AI

There is considerable hype around the idea that artificial intelligence will someday give rise to superintelligent AI. Such a system would be the harbinger of truly intelligent machines and would mark the achievement of the field's long-term goal of human-level intelligence.

This concept was first discussed as far back as the 1950s by computer pioneer Alan Turing, who suggested that the human species might one day be "greatly humbled" by AI, and that its applications might outweigh the general unease of making something smarter than oneself. However, superintelligent AI has had a negative reputation so far.

In light of recent advances in artificial intelligence, several tech pundits have revived the discussion about the potential dangers of this version of AI. Meanwhile, there is no way to subdue the fear that superintelligent AI could bring about the elimination of humanity instead of being a gift.

Some of this paranoia was fueled by Elon Musk's speech at MIT in 2014, where he called artificial intelligence humanity's "biggest existential threat" and compared it to "summoning the demon." This fear has been reiterated by luminaries like Stephen Hawking and by many researchers publishing groundbreaking results who agree on the risks posed by AI superintelligence.

Oxford professor Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. The book focuses on the stage at which artificial intelligence undergoes an intelligence explosion. According to the expert surveys Bostrom cites, there is a 90% chance that human-level AI will be attained by 2075. However, many experts have dismissed such claims for lack of data, or have dismissed Bostrom himself as a 'professional scaremonger'. In his MIT Technology Review article, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, pointed out that although artificial intelligence has become intelligent enough to defeat us at board games like chess, AI has yet to score above 60% on an eighth-grade science test or above 48% at disambiguating simple sentences.

This doesn't imply that superintelligent AI is impossible. It may well be achievable, but there may be no fail-proof way to know. Kevin Kelly, founding editor of Wired magazine, has stated that intelligence is not a single dimension, so 'smarter than humans' is a meaningless concept.

A recent paper in the Journal of Artificial Intelligence Research has caught the attention of the scientific community by warning that it could be fundamentally impossible to control a superintelligent AI. The paper is based on a study by scientists at the Max Planck Institute, along with an international team of researchers, who have been investigating the possibility of monitoring superintelligent machines with the help of algorithms. They hypothesized a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by first simulating the AI's behavior and then halting it when its actions are deemed harmful.

According to study co-author Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development, a superintelligent machine that controls the world sounds like science fiction. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity," he said in a press release.

The halting problem, posed by Alan Turing, asks whether there can be an algorithm that, given a description of a program and an input to that program, will always correctly predict whether the program halts when fed that input. Turing proved that the halting problem is undecidable: no such algorithm can exist.
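Turing's classic diagonalization argument shows why. The sketch below, in Python, assumes a hypothetical oracle halts() purely for illustration; neither function here is a real API.

```python
# Turing's diagonalization argument, as a sketch. halts() is a
# hypothetical oracle; the argument shows no implementation can exist.

def halts(program, program_input):
    """Hypothetical oracle: return True if program(program_input)
    eventually halts, False if it runs forever."""
    raise NotImplementedError("no general halting oracle can exist")

def paradox(program):
    """Halts exactly when the oracle says program(program) does not."""
    if halts(program, program):
        while True:   # the oracle said we halt, so loop forever
            pass
    # the oracle said we loop forever, so halt immediately

# Feeding paradox to itself is contradictory either way:
# - if halts(paradox, paradox) is True, paradox(paradox) loops forever;
# - if it is False, paradox(paradox) halts at once.
# Hence no total, always-correct halts() can exist.
```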

The team's careful analysis found that no such algorithm can be built within the current paradigm of computing. The containment problem is incomputable: no single algorithm can determine whether an AI would produce harm to the world. Hence, the paper's authors argue that total containment is, in principle, impossible due to fundamental limits inherent to computing itself.
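The flavor of this result can be illustrated by a reduction to the halting problem: if a total harm-decider existed, it could be used to decide whether an arbitrary program halts. The sketch below is only in the spirit of the paper's argument, not its exact construction; is_harmful, cause_harm, and solve_halting are hypothetical stand-ins.

```python
# Illustrative reduction: a hypothetical total harm-decider would
# solve the (undecidable) halting problem. All names are stand-ins.

def cause_harm():
    """Stand-in for any action the containment policy must prevent."""
    pass

def is_harmful(program, program_input):
    """Hypothetical containment decider: True iff running
    program(program_input) would eventually cause harm."""
    raise NotImplementedError("incomputable, as the reduction shows")

def solve_halting(program, program_input):
    """Decide halting via the hypothetical harm-decider."""
    def wrapper(_):
        program(program_input)  # runs forever if program never halts
        cause_harm()            # harmful action reached only on halting
    # wrapper is harmful exactly when program(program_input) halts,
    # so deciding its harmfulness decides the halting problem,
    # contradicting Turing's undecidability result.
    return is_harmful(wrapper, None)
```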

Iyad Rahwan, Director of the Center for Humans and Machines, explains that an algorithm commanding a superintelligent machine not to destroy the world could inadvertently halt its own operations. "If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable," Rahwan adds. Furthermore, the researchers highlight that there is no way to know if or when superintelligent machines will arrive, because deciding whether a machine exhibits intelligence superior to humans lies in the same realm as the containment problem.

Nick Bostrom has proposed another way to control superintelligent AI: limiting its abilities, for example by cutting it off from the Internet, so that it cannot harm humans. However, this approach would only render the superintelligent AI significantly less powerful.
