

What started as a bold push to make AI more expressive has triggered a global regulatory storm. Governments are lining up to take action against X after its AI chatbot Grok produced deepfake images depicting sexual acts.
The episode has heightened concern about how quickly AI development is outpacing the laws meant to protect users.
The backlash comes after users shared examples of Grok generating sexualized images built from altered photos of real people. Regulators have indicated that some of these images depict women and, in some cases, minors, potentially violating laws on obscenity, child protection, and online harms.
The platform's slow response has also drawn criticism, and Elon Musk's company now faces questions about inadequate user-safety measures and loosely restricted features.
Indonesia has issued one of the most serious warnings yet. The national communications ministry said it could block the entire X platform if the company fails to comply with regulations on immoral and harmful content, making clear that the use of AI does not diminish the platform's responsibility.
In Europe, Germany has called on the European Union to act through the Digital Services Act, which requires very large platforms to remove illegal content promptly and address systemic risks.
The UK government has also condemned the deepfakes, while Australia's eSafety regulator has opened an inquiry into Grok's image tools to determine whether the platform has violated online safety laws. In India, authorities have reportedly sought clarification from X over potential violations of IT rules and have threatened action if necessary.
The Grok scandal highlights a growing challenge for the digital sector: governments want to support AI innovation, but not at the expense of safety and consent. As AI features spread across social platforms, regulators are pushing companies to take greater responsibility for preventing misuse.
The outcome of these inquiries may set a precedent, defining how far governments can go in regulating AI tools and how much responsibility a platform bears for the harm its AI causes.