Top 10 Artificial Intelligence Risks that Humans Will Face in 2075

As artificial intelligence grows more sophisticated and ubiquitous, the voices warning against its present and future pitfalls grow louder. Whether it's the increasing automation of certain jobs, gender and racial bias stemming from outdated training data, or autonomous weapons that operate without human oversight, unease about AI is mounting on a number of fronts. And we're still in the very early stages. Humans will face many artificial intelligence risks by 2075. Destructive superintelligence, meaning artificial general intelligence that is created by humans and escapes our control to wreak havoc, is in a category of its own. It's also something that might or might not come to fruition, so at this point it is less a concrete threat than a hypothetical one, and an ever-looming source of existential dread. Here are the top 10 artificial intelligence risks that humans could face in 2075.

Job Automation

Job automation is generally viewed as the most immediate concern. It's no longer a matter of if AI will replace certain types of jobs, but to what degree. In many industries, particularly but not exclusively those whose workers perform predictable and repetitive tasks, disruption is well underway. According to a 2019 Brookings Institution study, 36 million people work in jobs with "high exposure" to automation, meaning that before long at least 70 percent of their tasks, which range from retail sales and market analysis to hospitality and warehouse labor, could be performed using AI. This could be among the top artificial intelligence risks in 2075.

Risk to Humanity

While job loss is currently the most pressing issue related to AI disruption, it's merely one among many potential risks. In a February 2018 paper titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," 26 researchers from 14 institutions spanning academia, civil society, and industry enumerated a host of other dangers that could cause serious harm, or at minimum sow minor chaos, in less than five years.

Risk to Privacy

"Malicious use of AI," can put our privacy at risk. Now everything is exposed to the internet and people who are expertise in this field have started misusing the power of artificial intelligence.  AI will adversely affect privacy and security.

AI Bias And Widening Socioeconomic Inequality

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it's a certain kind of work, the predictable, repetitive kind that's prone to AI takeover, research has shown that workers who are displaced are far less likely to seek or receive retraining than those in higher-level, better-paid positions.

Autonomous Weapons

Not everyone agrees with Elon Musk that AI is more dangerous than nuclear weapons. But what if AI decides to launch nukes, or, say, biological weapons, without human intervention? Or what if an enemy manipulates data to turn AI-guided missiles back on the forces that launched them? Both are possibilities, and both would be disastrous. The more than 30,000 AI and robotics researchers and others who signed an open letter on the subject in 2015 certainly think so.

Stock Market Instability

Have you ever considered that algorithms could bring down our entire financial system? Algorithmic trading on Wall Street could be responsible for our next major financial crisis. A sell-off of millions of shares in the airline sector could scare human investors into selling off their shares in the hotel industry, which in turn could cascade into sell-offs in other travel-related companies, then logistics companies, then food-supply companies, and so on.
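To make that snowball effect concrete, here is a minimal, purely illustrative Python sketch of such a cascade. The sector names, the 5 percent sell trigger, and the 0.6 correlation factor are assumptions invented for this example, not figures from any real market model.

# Toy model: an automated sell-off in one sector propagates to correlated sectors.
# All names and numbers below are illustrative assumptions, not market data.
SECTORS = ["airlines", "hotels", "travel", "logistics", "food_supply"]
CORRELATION = 0.6       # fraction of an upstream drop passed downstream
SELL_TRIGGER = 0.05     # algorithms start selling once a sector falls 5 percent

prices = {s: 100.0 for s in SECTORS}

def shock(sector: str, drop: float) -> None:
    """Apply a price drop and let rule-based 'algorithms' propagate it."""
    prices[sector] *= (1.0 - drop)
    idx = SECTORS.index(sector)
    if idx + 1 < len(SECTORS) and drop >= SELL_TRIGGER:
        shock(SECTORS[idx + 1], drop * CORRELATION)

shock("airlines", 0.20)  # a 20 percent sell-off in airline shares
for s in SECTORS:
    print(f"{s:12s} {prices[s]:6.2f}")

In this toy run the shock dies out once the propagated drop falls below the trigger, but with a higher correlation or a lower trigger the same rules drag the entire chain down, which is the kind of instability described above.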

Risk for Artists

Many critics believe that AI will disrupt the art industry because it makes producing images easy for anyone: generative tools can take over time-consuming work such as drawing or painting, leaving professional artists with fewer opportunities to make a living from their art. Some people believe this kind of technology will replace humans in creative work, so that there will be little demand for human-made art. Others think it is simply one more tool for artists who want to expand their horizons.

Risk of AI in the Medical Industry

Concerns about AI in the medical industry include the risk of bias, the lack of transparency of some AI algorithms, privacy issues around the data used to train AI models, security vulnerabilities, and unclear responsibility for AI-driven decisions in clinical settings. Clinical applications of AI also raise ethical problems of their own.

Social Manipulation

Social media, through its AI-powered algorithms, is very effective at targeted marketing. These platforms know who we are and what we like, and they are incredibly good at surmising what we think. Cambridge Analytica and others associated with the firm were accused of using data from 50 million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K.'s Brexit referendum; if the accusations are correct, the episode illustrates AI's power for social manipulation. By spreading propaganda to individuals identified through algorithms and personal data, bad actors can use AI to target people with whatever information they like, in whatever format those people will find most convincing, whether fact or fiction.

Misalignment Between Our Goals and the Machine's

Part of what humans value in AI-powered machines is their efficiency and effectiveness. But if we aren't clear about the goals we set for AI systems, it could be dangerous when a machine isn't armed with the same goals we have. For example, a command to "get me to the airport as quickly as possible" might have dire consequences. Without specifying that the rules of the road must be respected because we value human life, a machine could quite effectively accomplish its goal of getting you to the airport as quickly as possible, doing literally what you asked but leaving behind a trail of accidents.
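One way to see the problem is as an objective that omits values we take for granted. The Python sketch below is a purely hypothetical illustration: the routes, travel times, violation counts, and penalty weight are all invented for the example and do not represent any real planning system.

# Toy route planner: a literal objective ("as fast as possible") versus one
# that also encodes the value we forgot to state (obey the rules of the road).
# All routes and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float   # travel time to the airport
    violations: int  # red lights run, speed limits broken, and so on

routes = [
    Route("highway_speeding", minutes=18, violations=7),
    Route("normal_highway",   minutes=25, violations=0),
    Route("back_roads",       minutes=32, violations=1),
]

def misaligned_cost(r: Route) -> float:
    # Only the stated goal: minimize travel time.
    return r.minutes

def aligned_cost(r: Route, penalty: float = 100.0) -> float:
    # The stated goal plus a heavy penalty for breaking traffic rules.
    return r.minutes + penalty * r.violations

print(min(routes, key=misaligned_cost).name)  # picks "highway_speeding"
print(min(routes, key=aligned_cost).name)     # picks "normal_highway"

The difference between the two cost functions is exactly the unstated value in the airport example: the machine optimizes whatever we actually wrote down, not what we meant.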
