Ethical AI: 10 Crimes that Artificial Intelligence May Encourage

August 10, 2020


Ever wondered what would happen if artificial intelligence fell into the wrong hands?

Artificial intelligence (AI) may play an increasingly significant role in criminal acts in the future. From fraud to deepfakes, AI-driven manipulation can cause real harm. Analytics Insights compiles a list of 10 AI-induced crimes that may cause fear going forward:

 

Deepfakes

Generative neural networks, the technology behind deepfakes, can take AI to its darker side if deployed for all the wrong means. The infamous cases of global celebrities caught in the web of deepfakes are well known. With unethical hackers gaining funds from the dark web and the underworld, the probability of deepfakes only grows larger.

 

Killer Robots

Can AI-powered robots kill? The answer is yes! Killer robots are capable of causing a lot of harm that is hard to detect and stop. Industrial robots have already proven deadly: at a Kawasaki Heavy Industries plant (Akashi, Japan), a robot killed Kenji Urada, a 37-year-old maintenance worker, in 1981; at Golden State Foods (an Industry, California-based meat supplier), a robot killed Ana Maria Vital, a 40-year-old factory worker, on July 21, 2009; and at Ventra Ionia (an Ionia, Michigan-based auto assembly factory), a robot killed Wanda Holbrook, a 57-year-old factory technician, on July 7, 2015, to name a few.

 

Privacy Invasion

Artificial intelligence may lead to a loss of privacy in the future. Surprised? Consider technologies like facial recognition, which can pick you out of a crowd, and security cameras that keep an eye on you round the clock! The data-gathering abilities of AI also mean that a timeline of one's daily activities can be created by pulling their data from various social networking sites.
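As a toy illustration of how such a timeline could be assembled (all site names and events below are hypothetical), merging timestamped posts from several platforms takes only a few lines:

```python
from datetime import datetime

# Hypothetical posts scraped from different social networks.
posts = [
    {"site": "PhotoShare", "time": "2020-08-10 08:15", "event": "posted gym selfie"},
    {"site": "ChirpNet",   "time": "2020-08-10 12:40", "event": "checked in at a cafe"},
    {"site": "WorkLink",   "time": "2020-08-10 09:05", "event": "commented on an office thread"},
]

def build_timeline(posts):
    """Merge events from all sites into one chronological daily timeline."""
    return sorted(posts, key=lambda p: datetime.strptime(p["time"], "%Y-%m-%d %H:%M"))

for p in build_timeline(posts):
    print(p["time"], p["site"], p["event"])
```

No machine learning is needed for this step; the privacy risk comes simply from how easily scattered public data can be stitched into one profile.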

 

Autonomous Cars

Self-driving cars are not completely safe — at least not yet!

The Uber incident, in which a woman was mowed down by a self-driving car because the car could not spot the victim in time, points to the risks of AI-powered autonomous cars. Though self-driving cars are probably the future mode of transport, it will take a lot of time to make current roads compatible with the sensors self-driving cars rely on. A single bug in the algorithm could send an AI-driven car completely out of control, but fear not: today's driverless cars still need a human behind the wheel for emergency takeover.

 

ML and Phishing Attacks

AI can convince social media users to click on phishing links delivered in mass-produced messages. Wondering how? Each message is constructed using machine learning techniques trained on past behaviours and public profiles, and customised for each individual, thus camouflaging the intention behind it. If the potential victim clicks on the phishing link and fills in the details on the subsequent web form, that information goes straight to a criminal and can be used for theft and fraud.
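The per-individual customisation described above can be sketched, in its simplest form, as template filling driven by scraped profile data. This is a deliberately harmless toy (all names, interests, and the link are made up) that shows why such messages feel personal:

```python
# Toy sketch: mass-produced messages personalised from scraped profile data.
# A real attacker would use ML to mine these fields; here they are hard-coded.
profiles = [
    {"name": "Alice", "interest": "marathon running", "city": "Pune"},
    {"name": "Bob",   "interest": "vintage cameras",  "city": "Leeds"},
]

TEMPLATE = ("Hi {name}! Fellow {interest} fans in {city} are meeting up - "
            "confirm your spot here: {link}")

def personalise(profile, link="http://example.invalid/confirm"):
    """Fill the template with details mined from one user's public profile."""
    return TEMPLATE.format(link=link, **profile)

for p in profiles:
    print(personalise(p))
```

Because every message references details the victim actually posted, each one reads like a genuine invitation rather than spam, which is exactly what makes ML-scaled phishing dangerous.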

 

Artificial Intelligence Biases

Who can forget the bias that AI brings? For example, as the investigative news site ProPublica found, a criminal justice algorithm used in Broward County, Florida, mislabeled African-American defendants as “high risk” at nearly twice the rate it mislabeled white defendants. Other research has found that training natural language processing models on news articles can lead them to exhibit gender stereotypes.
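The disparity ProPublica measured is a gap in false-positive rates: defendants who did not reoffend but were nevertheless labelled “high risk”. A minimal sketch of that metric, using made-up numbers rather than the actual COMPAS data:

```python
# Hypothetical (label, reoffended) records for two groups; label 1 = "high risk".
records = {
    "group_a": [(1, False), (1, False), (0, False), (1, True), (0, True)],
    "group_b": [(1, False), (0, False), (0, False), (1, True), (0, True)],
}

def false_positive_rate(rows):
    """Share of non-reoffenders who were nevertheless labelled high risk."""
    labels_of_non_reoffenders = [label for label, reoffended in rows if not reoffended]
    return sum(labels_of_non_reoffenders) / len(labels_of_non_reoffenders)

for group, rows in records.items():
    print(group, round(false_positive_rate(rows), 3))
```

When this rate differs sharply between demographic groups, as ProPublica reported, the algorithm is imposing the cost of its errors unevenly even if its overall accuracy looks reasonable.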

 

Superintelligence

Superintelligence, in simple words, is AI that becomes superior to human performance in nearly all domains. While this might still sound like science fiction, many data science leaders believe it is possible. Superintelligence may transform the world economically, socially, and politically as the Industrial Revolution did, which may lead to extremely positive developments while also carrying potentially catastrophic risks from accidents (safety) or misuse (security).

 

Social Control

AI is targeting our privacy in an all-new way that we cannot even imagine. AI can know several things about us, thanks to our social footprints. Your likes, your travels, your hobbies: AI knows them all, and it targets us with curated marketing and psychological strategies that nudge us into buying things.

 

Autonomous Weapons

AI programmed to do something dangerous, as is the case with autonomous weapons programmed to kill, can pose grave risks. It is even plausible that the nuclear arms race will be replaced with a global autonomous weapons race. Russia’s president Vladimir Putin said: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

 

Weaponized Programming

AI is being used by the security forces of various countries to create robot soldiers who will fight for them during wartime. But the most significant risk is that those robots are designed to shoot enemies and are armed accordingly; if even a single set of those robots goes wrong, they could start shooting at their own forces.