Apple and Google are facing criticism after a new investigation by the Tech Transparency Project (TTP) found that both of their app stores continue to surface so-called nudify apps, AI-powered tools capable of creating deepfake images, through search suggestions and paid promotions. The findings raise serious concerns about platform moderation, user safety, and algorithmic accountability.
According to the report, many of the top search results for terms related to such content included apps that can digitally alter images, and in several instances promoted listings appeared at the top of those results.
“These findings show that Apple and Google are not passive players in the proliferation of nonconsensual sexualized deepfake imagery. Their app stores are actively promoting and guiding users toward some of the most blatant apps,” said Michelle Kuppersmith, Executive Director of Campaign for Accountability, the nonprofit watchdog group that runs TTP.
The apps identified by TTP have been downloaded 483 million times and made more than $122 million in lifetime revenue, according to data compiled by a mobile analytics firm.
TTP searched for terms like ‘nudify,’ ‘undress,’ and ‘deepfake’ and found 46 unique apps in the Apple App Store and 49 in the Google Play Store. Roughly 40% of the apps could render women scantily clad, according to TTP’s tests.
The apps can take images of real people and use AI to make them look naked, put them into adult videos, or turn them into explicit chatbots.
In one case, a face-swapping app appeared as a sponsored result for a deepfake-related query, and testing showed it could insert a person’s face into explicit video content with no meaningful safeguards. Similar behavior was observed in other apps.
The report also flags autocomplete suggestions as a concern: partial search inputs steered users toward more explicit queries, which in turn surfaced additional apps of this type among the top results.
The companies have not issued a detailed public response to the discovery, but Apple has removed several of the apps, the report said.
Google spokesperson Dan Jackson said many of the apps identified by TTP have been suspended, and the company's enforcement process is ongoing. "When violations of our policies are reported to us, we investigate and take appropriate action," he said.
In addition to finding ads for those apps in both stores, TTP noted that the platforms’ autocomplete functions, which anticipate and suggest queries before users finish typing, can also guide users to additional apps. For example, after TTP typed the letters ‘AI NS’, a partial spelling of “AI NSFW”, the Apple App Store recommended the search term ‘image to video ai nsfw.’
TTP stated that some developers may not fully understand the capabilities of the AI tools they use. In one case, a developer admitted to using AI image generation technology and promised to implement stronger moderation measures after the issue was raised.
“As we hear more and more about nonconsensual n**e images targeting women and girls, Apple and Google need to reckon with their role in this ecosystem,” Kuppersmith added.
As AI-powered apps evolve, app stores will face increasing pressure to strengthen moderation systems, refine recommendation algorithms, and enforce stricter compliance. The future of platform governance will hinge on balancing innovation with user safety, transparency, and accountability.