Super Intelligent AI Will Be Out of Human Hands, Say Researchers

A potential danger to humanity: here's why it may be beyond human comprehension

Unfortunately, researchers think it could be challenging to regulate a super-intelligent AI. The reason is simple: if an AI can comprehend information far better than humans can, our own processing power becomes the limiting factor. If we cannot understand its intellect, we may never be able to govern it.

But surely all AI is designed to be human-friendly? In principle, yes. According to the authors of recent research, however, we cannot design empathy towards humans into an artificial intelligence whose possible behaviors we do not fully comprehend. The authors contend that we cannot establish rules like "do no harm to humans" until we know the kinds of situations such an AI is likely to encounter, and once a computer system operates at a level beyond the capacity of its programmers, we can no longer impose restrictions on it. On that basis, the researchers quash any hope of reliably stopping a superintelligence.

This is due to a superintelligence's multifaceted nature: it could mobilize a variety of resources to accomplish goals that may be beyond human comprehension, let alone human control.

The "halting problem," which Alan Turing first presented in 1936, serves as the foundation for the team's justification. It makes an effort to comprehend if a computer program will come to a conclusion (and stop) or continue incessantly looking for the solution in a loop. Turing demonstrated that while it is feasible to know the solution for certain programs, it is not possible to know the solution for every program that might be constructed.

In other words, as Turing showed through some clever mathematics, even though we can settle the question for some specific programs, it is mathematically impossible to build a mechanism that settles it for every hypothetical program that might ever be written. That brings us back to superintelligent AI, which could conceivably hold every program in existence in its memory at once.
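
To make the argument concrete, here is a minimal Python sketch of Turing's diagonalization. The names halts and paradox are illustrative choices, not anything from the study; the point is only that assuming a universal halting checker leads to a contradiction.

    def halts(program, argument):
        """Hypothetical oracle: True if program(argument) eventually stops.
        Turing proved no such function can exist for all possible programs."""
        raise NotImplementedError("provably impossible in the general case")

    def paradox(program):
        """Do the opposite of whatever the oracle predicts about running
        `program` on its own source code."""
        if halts(program, program):
            while True:     # oracle said "halts", so loop forever instead
                pass
        # oracle said "loops forever", so halt immediately instead

    # The contradiction: ask the oracle about paradox applied to itself.
    # If halts(paradox, paradox) returns True, paradox(paradox) loops forever;
    # if it returns False, paradox(paradox) halts. The oracle is wrong either
    # way, so no universal halting checker can be written.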

Similarly, an AI designed never to hurt people may reach a conclusion (and stop) or it may not, and in either case the outcome is something humans cannot calculate, and therefore cannot contain. The researchers suggest that a super-intelligent AI might instead be cut off from specific networks or portions of the internet to restrict its capabilities.

The timeline is a little less gloomy. It will likely be several years before humanity faces such an existential computational reckoning, according to at least one assessment.

News of AI defeating humans at games like chess, Go, and Jeopardy! has long been accompanied by fears that an artificial intelligence capable of surpassing the strongest human wits may one day go rogue. Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid and the study's lead author, notes that the question of whether artificial superintelligence could be managed is an old one. It dates at least to the 1940s and Asimov's First Law of Robotics.

The following are the three "Laws of Robotics" from Isaac Asimov's 1942 short story "Runaround":

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Now, Alfonseca and his colleagues argue that because of fundamental limitations inherent to computing, it may be impossible to control a super-intelligent AI.

The researchers propose that any program aimed at ensuring a super-intelligent AI cannot harm humans must first simulate the machine's behavior to foresee potential outcomes. This containment mechanism would then have to stop the super-smart machine if it could truly cause harm.

However, the scientists assert that no containment program could simulate the AI's behavior and predict, with 100 percent accuracy, whether its actions would cause harm. The program may fail to catch errors while attempting to precisely replicate the AI's behavior or to anticipate the outcomes of its decisions.
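
The core of this argument can be sketched in the same style as before. Assuming, hypothetically, a perfect containment check called is_harmful (the name and the wrapper construction below are illustrative, not the authors' code), such a check could be used to solve the halting problem, which Turing proved is unsolvable:

    def is_harmful(program, argument):
        """Hypothetical containment oracle: True if running program(argument)
        would ever harm a human. This is what the result rules out."""
        raise NotImplementedError("cannot exist with 100 percent accuracy")

    def do_harm():
        """Stand-in for any action the containment policy must prevent."""
        pass

    def halts_via_containment(program, argument):
        """If is_harmful existed, it would decide the halting problem too."""
        def wrapper(_):
            program(argument)   # either runs forever...
            do_harm()           # ...or finishes and only then causes harm
        # wrapper is harmful exactly when program(argument) halts, so a
        # perfect harm-checker would answer an unanswerable question.
        return is_harmful(wrapper, None)

Because the halting problem is undecidable, no such perfectly accurate harm-checker can exist, which is why the containment program described above cannot be built in full generality.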

Although it may not be possible to control a super-intelligent artificial general intelligence, it should be possible to control a super-intelligent narrow AI, one specialized for certain functions rather than capable of a broad range of tasks like humans. Either way, there is no need to spruce up the guest room for our future robots quite yet.
