Why Is the Dark Side of Artificial Intelligence So Scary?

Artificial Intelligence (AI) has become an integral part of our lives. When we see machines that respond the way we do, or computers that outdo humans at strategy and cognition, it is easy to imagine a future in which humanity must answer to robot overlords.

Popular science fiction films such as "2001: A Space Odyssey" (1968), "Bicentennial Man" (1999) and "Avengers: Age of Ultron" (2015) serve as warning signs. They speculate about AI exceeding the expectations of its creators and escaping their control, eventually reigning supreme and outcompeting, enslaving or exterminating its human creators.

If you thought AI restricted itself to scaring us on the silver screen, think again. Meet Norman, the world's first "psychopathic artificial intelligence", created by researchers at the Massachusetts Institute of Technology (MIT). Norman, an algorithm trained to understand pictures, is named after Norman Bates from Alfred Hitchcock's classic horror film Psycho. True to its namesake, MIT's Norman takes an unremittingly dark view of the world.

Norman, a project from MIT, is intended to show how algorithms are shaped by their training data and to make the public aware of AI's potential dangers. In its training phase, Norman was "fed" only image captions taken from a Reddit community notorious for sharing graphic depictions of death. Once trained, Norman was subjected to a series of psychological tests in the form of Rorschach inkblots, so the researchers could analyse what Norman saw and compare its answers to those of a traditionally trained AI.
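At heart, the experiment is about nothing more than training data: a caption model can only describe the world in the words it has seen. Below is a minimal toy sketch in Python of that dependence (this is not the MIT team's actual image-captioning architecture; the corpora and function names are purely illustrative):

    # Toy sketch (NOT the MIT model): a naive "captioner" that can only
    # emit words present in its training captions, illustrating how the
    # corpus alone fixes what the model can "see".
    import random
    from collections import Counter

    def train_captioner(captions):
        # The "model" is just the word distribution of its training data.
        return Counter(word for c in captions for word in c.lower().split())

    def describe(model, length=6, seed=0):
        # Sample a "description" from the learned word distribution.
        rng = random.Random(seed)
        words = list(model.elements())
        return " ".join(rng.choice(words) for _ in range(length))

    neutral = ["a group of birds on a tree branch",
               "a person holding an umbrella in the air"]
    grim = ["a man is shot to death in the street",
            "a body is pulled from the wreck"]

    print(describe(train_captioner(neutral)))  # only benign vocabulary
    print(describe(train_captioner(grim)))     # only violent vocabulary

A real captioner is a deep neural network, but the same principle applies: the data, far more than the architecture, decides what the model sees.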

The results will surely keep you on the edge of your seat!

•  In one image, the traditional AI saw "a group of birds sitting on top of a tree branch", while Norman saw "a man is electrocuted and catches to death".

•  When shown another image, the traditional AI saw "a person is holding an umbrella in the air", while Norman described "a man is shot to death in front of his screaming wife".

•  In a third image, the traditional AI saw a "black and white photo of a baseball glove", while the psychopathic Norman described a man "murdered by machine gun in broad daylight".

•  For another inkblot, the traditional AI saw "a black and white photo of a small bird"; Norman saw "man gets pulled into dough machine".

•  And where the traditional AI saw a cheerful "close-up of a wedding cake on a table", Norman saw the gruesome "man killed by speeding driver".

The fact that Norman's responses are so much darker and grislier illustrates a harsh reality in the new world of machine learning, as pointed out by Prof Iyad Rahwan, one of the three-person team at MIT's Media Lab that developed Norman.

The abstract inkblot images shown to Norman are traditionally used by psychologists to assess a patient's state of mind, in particular whether they perceive the world in a negative or positive light.

While the traditional AI saw the Rorschach inkblots in a positive light, Norman's view was unremittingly bleak and negative – it saw dead bodies, blood and destruction in every image.

Racist AI

Norman's results skewed towards death and destruction because it was "fed" graphic depictions of death. In real-life situations, AI can be equally biased if it is trained on flawed or skewed data.

In May last year, a report highlighted that a computer algorithm used by a US court for risk assessment was biased against black prisoners: the program flagged black defendants as twice as likely as white defendants to reoffend. That conclusion stemmed from the flawed, skewed training data the algorithm was learning from.

In the US, predictive policing algorithms have likewise been found to produce biased results, because the historical crime data they were trained on was itself flawed or skewed.
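The mechanism behind both examples is easy to reproduce. Here is a hedged sketch in Python with numpy and scikit-learn (the data is synthetic and the variable names illustrative, not drawn from any real court or policing system) of how skewed historical labels propagate into a trained risk model:

    # Sketch: group membership carries no real signal here, but biased
    # historical labels make the model score one group as higher risk.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)       # protected attribute (0 or 1)
    behaviour = rng.normal(size=n)      # true underlying risk, identical across groups

    # Biased historical labels: group 1 was flagged more often for the
    # same behaviour (e.g. uneven policing), so labels over-represent it.
    label = (behaviour + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

    X = np.column_stack([group, behaviour])
    model = LogisticRegression().fit(X, label)
    scores = model.predict_proba(X)[:, 1]

    print("mean predicted risk, group 0:", scores[group == 0].mean())
    print("mean predicted risk, group 1:", scores[group == 1].mean())
    # The model faithfully reproduces the skew in its labels: group 1
    # scores higher despite identical underlying behaviour.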

Sexist AI

A study found that software trained on Google News text had become sexist owing to the skewed training data it was learning from. This came to light when the software was asked to complete the statement "Man is to computer programmer as woman is to X", and it replied "homemaker".
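That analogy test is simple vector arithmetic over word embeddings: take the vector for "computer programmer", subtract "man", add "woman", and look up the nearest word. A sketch, assuming the gensim library and the publicly released pretrained Google News word2vec vectors (the file name below is the standard distribution's):

    # Analogy completion over word2vec embeddings (gensim).
    from gensim.models import KeyedVectors

    # Load the pretrained Google News vectors (a large binary download).
    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    # "man is to computer_programmer as woman is to X" becomes
    # vec(computer_programmer) - vec(man) + vec(woman), followed by a
    # nearest-neighbour lookup. (Phrases in this vocabulary use underscores.)
    result = vectors.most_similar(
        positive=["woman", "computer_programmer"], negative=["man"], topn=1)
    print(result)  # studies report "homemaker" at or near the top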

There is no mathematical way to create perfect fairness in machine learning: researchers have shown that common fairness criteria can be mutually incompatible. Experts believe it comes as no surprise that algorithms pick up the opinions and mental biases of the humans who train them.
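The incompatibility can be shown with a few lines of arithmetic. A toy illustration (assumed numbers, not real data) of why a classifier with equal precision across two groups must have unequal false-positive rates whenever the groups' base rates differ:

    # Toy illustration of the fairness-criteria trade-off: with different
    # base rates, equal precision forces unequal false-positive rates.
    def false_positive_rate(base_rate, precision, recall=0.6):
        # Per unit population: true positives, then implied false positives.
        tp = base_rate * recall
        fp = tp * (1 - precision) / precision
        return fp / (1 - base_rate)

    # Same precision and recall for both groups, different base rates:
    for group, base_rate in [("A", 0.3), ("B", 0.5)]:
        print(group, round(false_positive_rate(base_rate, precision=0.7), 3))
    # Prints ~0.110 for A and ~0.257 for B: one fairness notion
    # (equal precision) rules out another (equal false-positive rates).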

Artificial intelligence surrounds us these days – Google recently demonstrated an AI making a phone call with a voice hard to distinguish from a human one, while fellow Alphabet firm DeepMind has built algorithms that teach themselves to play complex games. Scary, isn't it?

AI is being deployed ever more widely across a variety of domains, from personal digital assistants, email filtering, fraud prevention, voice and facial recognition and content classification to generating news and offering insights into how data centres can save energy. But every coin has two sides, and the evils of AI cannot be ignored. Norman may be an experimental project, but the perils of AI are no illusion.
