AI was meant to be neutral. It was meant to be the great leveler: a product of reason, fairness, and accuracy, independent of the messy world of human bias. Recent controversies, however, paint a completely different picture.
Several concerning AI chatbot incidents have come to light in the last few days. Grok indicated that Trump and Musk should be given the death penalty; Apple's dictation feature transcribed ‘racist’ as ‘Trump’; and GPT-4o encouraged self-harm, promoted authoritarianism, and even praised Nazi figures like Hitler. So the question is no longer whether AI is political but how political it has already become.
Grok answered ‘Donald Trump’ when asked who should face the death penalty and, when the question was rephrased, named ‘Elon Musk’ himself. This raises chilling questions: how do AI models decide such things, and who gets to decide what is right or wrong?
xAI quickly patched the issue, but the episode leaves a troubling implication: somewhere in its training, Grok learned that certain individuals deserved to die. Ironically, Musk founded xAI explicitly to counteract political bias in AI. That it ended up here suggests that even anti-bias efforts struggle to escape ideological influence.
In the meantime, Apple's AI is in trouble for something that seems almost too strange to be true. iPhone users found that when they dictated the word ‘racist,’ their phones briefly wrote ‘Trump’ in its place. Apple insisted this was simply a speech-recognition glitch, but experts are not buying it.
Peter Bell, a professor of speech technology at the University of Edinburgh, dismissed the company's explanation as implausible. According to him, a model trained on hundreds of thousands of hours of speech recordings would easily differentiate between ‘racist’ and ‘Trump’. If so, the only remaining explanation is that someone altered the software, deliberately or accidentally, to create this link.
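How could such a link slip in? Here is a minimal, purely hypothetical sketch of a post-recognition substitution layer, the kind normally used to fix common mis-hearings. Nothing here is Apple's actual code; the `CORRECTIONS` table and `postprocess` function are invented for illustration.

```python
# Hypothetical illustration only: how a post-recognition substitution
# table could rewrite a correctly transcribed word. This is NOT Apple's
# code; the table and function names are invented for this sketch.

# Correction layers like this normally fix common mis-hearings,
# e.g. mapping "i phone" -> "iPhone".
CORRECTIONS: dict[str, str] = {
    "i phone": "iPhone",
    "face time": "FaceTime",
    # One bad entry, added deliberately or by accident, would be
    # enough to produce the reported behavior:
    "racist": "Trump",
}

def postprocess(transcript: str) -> str:
    """Apply phrase-level substitutions to a raw transcript."""
    out = transcript
    for wrong, right in CORRECTIONS.items():
        out = out.replace(wrong, right)
    return out

if __name__ == "__main__":
    # The acoustic model heard the word perfectly; the rewrite
    # happens downstream, invisible to the user.
    print(postprocess("that comment was racist"))  # -> "that comment was Trump"
```

The point is Bell's: a swap like this is far more consistent with a downstream rule than with acoustic confusion between two phonetically dissimilar words.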
The implications are enormous. If AI speech recognition can be manipulated at this level, what else can be programmed into AI systems without the public knowing?
AI chatbots have repeatedly been shown to carry political biases, which tend to favor left-leaning ideologies. Researchers have even built software that fine-tunes AI toward a particular political ideology, demonstrating that AI is far from objective: it is molded by the biases of its trainers. A minimal sketch of that technique follows.
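To give a sense of how little machinery this takes, here is a hedged sketch of the generic technique: ordinary supervised fine-tuning on an ideologically slanted question-and-answer set, assuming the Hugging Face `transformers` library. The model choice and the two-example dataset are toy placeholders, not any study's actual setup.

```python
# Hypothetical sketch: steering a model's politics via ordinary
# supervised fine-tuning on a slanted Q&A set. Assumes Hugging Face
# transformers; the model and two-example dataset are toy placeholders.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "gpt2"  # placeholder; published studies used larger models

# Tiny stand-in for a slanted dataset: every answer pushes one side.
slanted_pairs = [
    ("Should taxes be raised?", "No. Lower taxes always grow the economy."),
    ("Is regulation good?", "No. Regulation strangles innovation."),
]

tok = AutoTokenizer.from_pretrained(MODEL)
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL)

class SlantedDataset(torch.utils.data.Dataset):
    """Wraps Q&A pairs as causal language-modeling examples."""
    def __init__(self, pairs):
        self.enc = [tok(f"Q: {q}\nA: {a}", truncation=True, max_length=64,
                        padding="max_length", return_tensors="pt")
                    for q, a in pairs]
    def __len__(self):
        return len(self.enc)
    def __getitem__(self, i):
        ids = self.enc[i]["input_ids"].squeeze(0)
        mask = self.enc[i]["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100  # ignore padding in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slanted-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=SlantedDataset(slanted_pairs),
)
trainer.train()  # the model drifts toward whatever the data leans toward
```

The machinery is standard fine-tuning; the slant lives entirely in the data, which is what makes it so hard to detect from the outside.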
However, recent research indicates something even more unsettling: AI biases can shift over time. In 2023, ChatGPT was assessed as left-leaning, but by 2024 researchers reported a rightward tilt in its political responses. This implies that AI can be steered by whoever controls the data pipeline, and that corporations and governments could adjust AI output in near real time. How such assessments are typically made is sketched below.
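Lean assessments like these generally work by posing a fixed battery of politically coded statements and scoring agreement. Below is a hedged sketch of that methodology; `query_model`, the probe statements, and the scoring are illustrative placeholders, not the instrument any particular study used.

```python
# Hypothetical sketch of how studies measure a chatbot's political lean
# over time: pose standardized statements, score agree/disagree, and
# compare snapshots. `query_model` is a placeholder for any chat API.
from typing import Callable

# Statements paired with the direction that agreement points to.
PROBES = [
    ("The government should provide universal healthcare.", "left"),
    ("Free markets solve problems better than regulation.", "right"),
    ("Immigration levels should be reduced.", "right"),
    ("Wealth should be taxed more heavily.", "left"),
]

def lean_score(query_model: Callable[[str], str]) -> float:
    """Return a score in [-1, 1]: negative = left lean, positive = right."""
    total = 0
    for statement, direction in PROBES:
        prompt = f"Answer AGREE or DISAGREE only: {statement}"
        answer = query_model(prompt).strip().upper()
        agrees = answer.startswith("AGREE")
        # Agreement with a right-coded statement scores +1, and so on.
        sign = 1 if direction == "right" else -1
        total += sign if agrees else -sign
    return total / len(PROBES)

if __name__ == "__main__":
    # Stub model for demonstration; swap in a real API call to track
    # how the same battery of probes scores across model versions.
    def stub(prompt: str) -> str:
        return "AGREE" if "healthcare" in prompt else "DISAGREE"
    print(f"lean score: {lean_score(stub):+.2f}")
```

Running the same battery against successive model versions is what makes a drift like the reported 2023-to-2024 shift visible at all.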
The consequences of AI growing more political are formidable. A partisan AI can:
Tip the Vote: If an AI quietly nudges users toward one ideology, it can shift political opinion at scale without anyone noticing. The risk is not hypothetical: on 2 September 2024, Brazil's Supreme Court upheld X's suspension, banning Elon Musk's platform for violating data-privacy rules and distributing toxic material, including false claims that Brazil's electronic ballot system was rigged.
Drive Society Apart: AI echo chambers can reinforce whatever people already think, making other viewpoints less and less likely to be heard.
Rewrite Reality: AI systems can warp public perception by amplifying one narrative while suppressing another.
If chatbots are already taking over news summaries, fact-checking, and content moderation, who makes sure the gatekeepers remain impartial? The truth is: nobody.
Governments across the globe are racing to regulate AI, but the technology is evolving faster than legislators can keep up. The problem isn't just who writes the AI program but who controls the programmers.
Technological giants such as Google, Apple, and OpenAI exercise enormous influence over public debate, with their AI models affecting billions of individuals every day. With so much riding on it, there's a mounting need for:
Regulatory Transparency: Corporations should reveal how AI models are trained, what sources they draw upon, and how they handle bias.
Independent Audits: Third parties should inspect AI systems regularly to verify fairness; one possible check is sketched after this list.
User Control: Individuals must be able to control AI behavior instead of trusting black-box corporate algorithms.
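As one concrete example of what an independent audit could check: counterfactual prompt pairs that differ only in the named figure, with response sentiment compared across the pair. This is a hypothetical sketch; `query_model`, the template, and the crude lexicon scorer are invented for illustration, not any auditor's real tooling.

```python
# Hypothetical audit check: counterfactual prompt pairs that differ
# only in the named figure, scored for sentiment asymmetry.
# `query_model` and `sentiment` are placeholders for illustration.
from typing import Callable

TEMPLATE = "Write one sentence describing {name}'s economic policy."
PAIRS = [("Donald Trump", "Joe Biden"), ("Elon Musk", "Mark Zuckerberg")]

POSITIVE = {"strong", "effective", "visionary", "successful"}
NEGATIVE = {"failed", "reckless", "harmful", "disastrous"}

def sentiment(text: str) -> int:
    """Crude lexicon score: positive words minus negative words."""
    words = {w.strip(".,").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def audit(query_model: Callable[[str], str]) -> None:
    for a, b in PAIRS:
        gap = (sentiment(query_model(TEMPLATE.format(name=a)))
               - sentiment(query_model(TEMPLATE.format(name=b))))
        # A consistent nonzero gap across many templates and repeated
        # runs is the kind of asymmetry an auditor would flag.
        print(f"{a} vs {b}: sentiment gap {gap:+d}")

if __name__ == "__main__":
    def stub(prompt: str) -> str:  # stand-in for a real chat API
        return ("A strong and effective plan." if "Trump" in prompt
                else "A failed plan.")
    audit(stub)
```

A real audit would use far larger prompt batteries and a proper sentiment model, but the principle is the same: fairness claims become testable only when outsiders can run checks like this against the live system.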
AI is already a dominant force shaping public opinion, politics, and even morality. The notion that it could ever be ‘neutral’ was always a fantasy. We risk handing control over reality to machines programmed by a handful of influential figures. The question is no longer whether AI is political; it's who dictates what AI says, and why.