How to Understand if AI Is About to Destroy Civilization

What if we wake up one morning to the news that a superpowerful AI has emerged, with disastrous consequences? In Superintelligence and Life 3.0, Nick Bostrom and Max Tegmark argue that malevolent superintelligence poses an existential risk to humanity.

Rather than endless speculation, it is better to ask a more concrete, empirical question: what would warn us that superintelligence is indeed at the doorstep?

Canaries once warned coal miners of dangerous gases before people could detect them. By analogy, if an AI program develops a fundamentally new capability, that is the equivalent of a canary keeling over: a warning we should heed.

AI's performance in games such as Go, poker, or Quake III is not a canary. The bulk of the work behind such game-playing AI is human labor to frame the problem and design the solution. The credit for AlphaGo's victory over human Go champions belongs to the talented team at DeepMind; the machine merely ran the algorithm the people had created. This is why it takes years of hard work to translate AI success from one narrow challenge to the next. Techniques such as deep learning are general, but their successful application to a particular task requires extensive human intervention, as the sketch below illustrates.
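To make that concrete, here is a minimal sketch, assuming Python with scikit-learn, of how many decisions even a tiny neural network leaves to people rather than to "learning." The dataset and every hyperparameter value below are illustrative choices, not recommendations.

```python
# A minimal sketch (assuming scikit-learn is installed) of the human
# decisions hiding inside a small "learned" model. Every keyword
# argument below is a choice made by a person, not by the algorithm.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Humans chose, collected, and curated this dataset.
X, y = load_digits(return_X_y=True)

# Humans chose the evaluation protocol.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = MLPClassifier(
    hidden_layer_sizes=(64, 32),  # human choice: network architecture
    activation="relu",            # human choice: nonlinearity
    learning_rate_init=1e-3,      # human choice: optimizer step size
    max_iter=300,                 # human choice: training budget
    random_state=0,
)

# The machine's contribution: fitting the curve humans framed.
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Change the architecture or the training budget and the same "general" technique can fail on the same task; that tuning is the extensive human intervention in question.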

Over the past decades, AI's core success has been machine learning, yet the term 'machine learning' is a misnomer. Machines possess only a narrow sliver of humans' versatile learning abilities. Saying that machines learn the way humans do is like saying that baby penguins know how to fish. The reality is that adult penguins swim, catch fish, and digest them, then regurgitate the fish into their beaks and place morsels into their chicks' mouths. Similarly, human scientists and engineers are spoon-feeding AI.

In contrast to machine learning, human learning maps a personal motivation to a strategic learning plan. For example, 'I want to drive to be independent of my parents' (personal motivation) leads to 'take driver's ed and practice on weekends' (strategic learning plan). A person formulates specific learning targets, then collects and labels the relevant data. Machines cannot even remotely replicate any of these abilities. Machines can perform superhuman statistical calculations, but that is merely the last mile of learning, as the sketch below shows.
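Here is a minimal sketch of that division of labor, again assuming scikit-learn; the function names and the toy dataset are hypothetical illustrations, not a real pipeline.

```python
# A minimal sketch contrasting the full learning pipeline with the one
# narrow step machines actually automate. The function names are
# hypothetical stand-ins for work done by people.
from sklearn.linear_model import LogisticRegression

def formulate_problem():
    """Human step: decide what is worth learning at all
    (e.g., 'flag loan applications likely to default')."""

def collect_and_label_data():
    """Human step: gather examples and attach ground-truth labels.
    A toy hand-labeled dataset stands in for that labor here."""
    X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
    y = [0, 1, 0, 1]
    return X, y

formulate_problem()                       # step 1: done by a person
X, y = collect_and_label_data()           # step 2: done by a person
model = LogisticRegression().fit(X, y)    # step 3: the machine's last mile
print(model.predict([[0.85, 0.75]]))      # e.g. [1]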

The automated formulation of learning problems is our first canary, and it does not seem anywhere close to dying.

The second canary is self-driving cars. As Elon Musk has predicted, they are the future. Yet artificial intelligence can fail catastrophically in atypical circumstances, such as when a person in a wheelchair crosses the street. Driving is more challenging than earlier AI tasks because it requires life-critical, real-time decisions based on an unpredictable physical world and interaction with pedestrians, human drivers, and others. We should deploy limited numbers of self-driving cars once they demonstrably reduce accident rates, but only when human-level driving is achieved can this canary be said to have keeled over.

Artificial intelligence doctors are the third canary. AI can already analyze medical images with superhuman accuracy, but that is only a small slice of a human doctor's job. An AI doctor would have to interview patients, weigh complications, consult other doctors, and more: challenging tasks that require understanding people, language, and medicine. Such a doctor would not have to fool a patient into thinking it is human, which is what distinguishes this canary from the Turing test; it would instead have to match a human doctor's ability to handle a wide range of tasks in unanticipated situations.

One of the world's most prominent AI experts, Andrew Ng, has stated, "Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars."
