A Resistance Movement Against Our Future Artificial Intelligence Rulers Is Necessary

Even scientists are having trouble keeping up with the rapid advancements in artificial intelligence.

Over the past year, machine learning models have begun to produce crude videos and strikingly convincing fake images. They are even writing code. We'll look back on 2022 as the year AI went from processing information to co-creating content with large numbers of people.

But what if it's also remembered as the year AI took a step toward wiping out the human race? As over-the-top as that sounds, public figures like Bill Gates, Elon Musk, Stephen Hawking, and even Alan Turing have expressed worry about what will happen to people if machines surpass humans in intelligence. Musk has gone so far as to call AI riskier than nuclear weapons.

After all, humans don't treat less intelligent creatures well, so who's to say that machines trained on data replicating every aspect of human behaviour won't, as renowned computer scientist Marvin Minsky warned, end up "putting their aims ahead of ours"?

Thankfully, there is some positive news. More researchers are working to make deep learning systems transparent and measurable. We must maintain this momentum. As these programs become ever more dominant in social media, supply chains, and financial markets, technology companies will need to start putting AI safety ahead of raw capability.

AI investor Ian Hogarth was alluding to the fact that over the past year, a flurry of AI tools and research has come from open-source groups, which contend that highly intelligent machines shouldn't be built and managed in secret by a small number of powerful corporations but instead should be developed in the open. In August 2021, for instance, the community-driven collective EleutherAI released GPT-Neo, a publicly available model that could generate realistic comments and articles on almost any subject. GPT-Neo was built as an open alternative to GPT-3, the original technology created by OpenAI, a company Musk co-founded that receives significant funding from Microsoft Corp. and provides only restricted access to its most powerful systems.

Then this year, the startup Stability AI released its own version of such a program, Stable Diffusion, to the public free of charge, a few months after OpenAI stunned the AI field with a ground-breaking image-generation system dubbed DALL-E 2.

One advantage of open-source software is that, because it is available to everyone, more people are continually checking it for flaws. That is one reason Linux has a reputation as one of the most secure operating systems around.

However, exposing robust AI systems to the public also increases the risk that they will be abused. If AI has the same potential for harm as a virus or nuclear contamination, perhaps it makes sense to centralise its development: viruses, after all, are studied in biosafety facilities, and uranium is enriched in tightly controlled environments. But unlike research into viruses and nuclear power, AI development still has no defined rules, because governments are lagging behind the field's rapid growth.

It's reassuring to see, for the time being at least, that attention is being drawn to AI alignment, a developing discipline that deals with building AI systems "aligned" with human objectives. Leading AI labs such as Alphabet Inc.'s DeepMind and OpenAI have numerous teams working on alignment, and many of their researchers have gone on to found their own startups, some devoted to making AI safe. These include the London-based Conjecture, recently backed by the people behind GitHub Inc., Stripe Inc., and FTX Trading Ltd., and the San Francisco-based Anthropic, whose core team left OpenAI and secured $580 million from investors earlier this year.

Connor Leahy, a co-founder of EleutherAI who now runs Conjecture, speculates that AI will match human intelligence within the next five years and that, on its current trajectory, it will lead to the extinction of the human species.

Leahy claims that averting such a dire scenario requires the world to make a "portfolio of bets," such as closely examining deep learning algorithms to better understand their decision-making and attempting to give AI more human-like reasoning.

Even if Leahy's worries prove exaggerated, it is obvious that AI is not moving in a direction wholly consistent with human interests. Consider some of the most recent attempts to build chatbots. Microsoft abandoned Tay, its 2016 bot that learned from interacting with Twitter users, after it began tweeting racist and sexually offensive remarks just hours after its introduction. And in August of this year, Meta Platforms Inc. released a chatbot that, trained on public content from the Internet, claimed Donald Trump was still in office.

Nobody can predict whether AI will one day destabilise the food supply or wreak havoc on financial markets. But it can already be used on social media to set people against one another, and that is probably happening now. The robust AI systems that Twitter Inc. and Facebook use to recommend posts to users are designed to boost our engagement, which invariably means serving content that stokes indignation or spreads misinformation. Changing those incentives would be an excellent place to start on "AI alignment."
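To make the incentive point concrete, here is a minimal sketch in Python of how a feed-ranking objective might be changed. Everything in it is hypothetical and invented for illustration (the Post fields, the scores, the penalty weight); real platform ranking systems are proprietary and vastly more complex.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical model score: likelihood of clicks/replies (0 to 1)
    predicted_outrage: float     # hypothetical model score: how inflammatory the post is (0 to 1)

def engagement_only_score(post: Post) -> float:
    # An objective that rewards engagement alone; outrage-bait tends to win.
    return post.predicted_engagement

def adjusted_score(post: Post, outrage_penalty: float = 0.8) -> float:
    # A changed incentive: the same engagement signal, minus an explicit
    # penalty for inflammatory content.
    return post.predicted_engagement - outrage_penalty * post.predicted_outrage

feed = [
    Post("Measured explainer on interest rates", 0.55, 0.05),
    Post("Rage-bait about a political rival", 0.90, 0.95),
]

# Engagement-only ranking puts the rage-bait first...
print(max(feed, key=engagement_only_score).text)  # Rage-bait about a political rival
# ...while the adjusted objective surfaces the measured post instead.
print(max(feed, key=adjusted_score).text)         # Measured explainer on interest rates

The point of the sketch is that the same engagement prediction feeds both objectives; only the incentive attached to it changes which post rises to the top.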
