Artificial Intelligence

Google Play AI Apps Exposed: Are Your Photos and Data Being Stolen?

Security Researchers Find 72% of Android AI Apps Contain Hardcoded Secrets, Raising Major Privacy Risks for Google Play Users

Written By: Simran Mishra
Reviewed By: Manisha Sharma

Overview:

  • Several popular AI apps on the Google Play Store exposed millions of photos, videos, chat records, and identity details due to poorly secured cloud storage and weak app security practices.

  • Security researchers found that many AI apps contain technical flaws, such as hardcoded secrets and misconfigured databases, that can enable unauthorized access to sensitive user data.

  • Users can reduce risks by checking app developers, limiting permissions, avoiding uploads of sensitive documents, and keeping security tools like Play Protect active on their devices.

Artificial intelligence apps have quickly become part of everyday smartphone life. People use AI tools to edit photos, create short videos, chat with virtual assistants, and generate creative images in seconds. These apps promise smart features and fun results. A single tap can turn a normal photo into digital art or an animated video.

Many Android users install these apps from the Google Play Store without thinking about security. Photos, videos, and personal details often get uploaded while using these services. Trust plays a huge role here. Users believe their files stay private inside the app. However, recent investigations have shown that this trust may not always be justified. Security researchers have found that several popular AI apps left large amounts of personal data exposed on the internet.

The discovery shocked many experts in the cybersecurity world. Millions of personal photos, videos, and sensitive records were found in poorly secured storage systems. Anyone with the web address could view or download those files. This situation has raised serious questions about privacy in fast-growing AI apps.

A Huge Leak of Photos and Videos

One of the most serious discoveries involved a popular app called Video AI Art Generator and Maker. The app allowed users to upload photos and short videos. The AI system then turned those files into digital art or animated content.

The app became extremely popular, surpassing 500,000 downloads on the Play Store. Many users shared personal photos, family pictures, and creative videos through the tool. Security researchers later discovered that the cloud storage connected to the app had almost no protection.

Nearly 1.5 million photos and 385,000 videos uploaded by users were visible inside the storage system. In total, the collection exceeded 8.27 million files and more than 12 terabytes of data.

These files included original photos, uploaded videos, and AI-generated media created by the application. The storage system had retained every file uploaded since the app launched in June 2023.


Identity Verification App Also Exposed Records

Another major case involved an AI verification app called IDMerit. Many online services use verification tools to confirm users' identities. These systems usually require documents such as national identity cards, phone numbers, addresses, and other personal details.

Researchers discovered that the database associated with this app contained a large number of records. Nearly one billion personal entries appeared inside the system. The information belonged to users across more than 25 countries, including India, the United States, Germany, and Brazil.

Some of the exposed files included identity documents and personal contact details. Such information is highly valuable to criminals who attempt financial scams or identity fraud.

Chat Apps Also Showed Security Weakness

The investigation also uncovered problems with another AI tool, Chat and Ask AI. This app allowed users to ask questions and chat with artificial intelligence for help with writing, research, and daily tasks.

Researchers discovered that a large number of user messages were exposed in a backend database. Millions of chat records remained accessible without strong restrictions. These conversations included personal discussions and private questions that users believed were safe.

Even though no large public misuse appeared immediately, the exposure itself created serious privacy risks.

Why Did These Security Problems Happen?

Experts studying these incidents found several technical mistakes inside AI apps. One common issue involves something called hardcoded secrets. Developers sometimes store passwords or API keys directly inside the app code. Anyone who studies the app can find those hidden credentials.

A security study that examined many Android AI apps revealed an alarming number. About 72% of the apps contained at least one hardcoded secret in the code.
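Hardcoded secrets are easy to find precisely because they follow recognizable patterns. Below is a minimal sketch of the kind of pattern scan researchers run over decompiled app code; the regex rules and the sample snippet are illustrative examples, not drawn from any of the apps named above.

```python
import re

# Patterns that commonly flag hardcoded secrets in decompiled app code.
# These are illustrative rules, not an exhaustive ruleset.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_hardcoded_secrets(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a source string."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(source):
            hits.append((name, match))
    return hits

# A made-up snippet resembling decompiled app code with an embedded key.
snippet = 'String apiKey = "AIzaSyA1234567890abcdefghijklmnopqrstuv";'
print(find_hardcoded_secrets(snippet))
```

The safer practice is to keep such credentials on a backend server and have the app request short-lived tokens, so that nothing recoverable ships inside the APK.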

Cloud storage configuration also created problems. AI apps need powerful servers to process images, videos, and AI models. Developers often store uploaded files in cloud storage systems. Without strict authentication settings, those storage systems become publicly accessible.
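The exposure check itself is simple: if a storage URL returns content to a request that carries no credentials, the bucket is effectively public. The sketch below, using only Python's standard library, illustrates that logic; the function names and status-code interpretation are assumptions for illustration, not a production auditing tool.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def classify_status(code: int) -> str:
    # A 200 response to an unauthenticated request suggests public exposure;
    # 401/403 mean the storage at least demands credentials.
    if code == 200:
        return "publicly readable"
    if code in (401, 403):
        return "access restricted"
    return "inconclusive"

def check_bucket(url: str) -> str:
    """Probe a storage URL with no credentials attached (illustrative only)."""
    try:
        with urlopen(Request(url, method="GET"), timeout=10) as resp:
            return classify_status(resp.status)
    except HTTPError as err:
        return classify_status(err.code)
```

Only probe storage you own or are authorized to test; unauthorized scanning may be illegal in many jurisdictions.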

The fast growth of AI apps also plays a role. Developers rush to release new tools and exciting features. Security testing sometimes receives less attention during development.

The Rise of Risky AI Apps

Cybersecurity experts also report an increase in malicious AI apps. Some apps look like normal photo editors or chatbot tools. Behind the scenes, hidden software may collect personal data from the device.

These apps may request access to photos, contacts, location, and microphone. Once installed, they may collect information silently in the background.

The popularity of AI technology has led to the launch of thousands of new apps on the Play Store. Many users download these tools quickly without checking the developer's credibility or reading the privacy policies.

Google’s Safety Systems

Google continues to strengthen security inside the Play Store. Automated systems scan apps before they appear for download. These tools analyze code, permissions, and possible malware threats.

In a recent safety report, Google confirmed that its security systems blocked more than 1.7 million apps that violated Play Store policies in a single year. The company also prevents billions of risky installations every year.

Android devices also include Play Protect, a built-in security system. It scans installed apps daily and alerts users if harmful activity appears.

Even with these protections, new security gaps may still appear in some applications.


Simple Steps to Stay Safe

Smartphone users can protect their personal data by following simple steps. Checking the developer profile before installing an app can reveal important details. Established developers with a long history of trusted apps usually maintain stronger security practices.

Managing permission requests also needs ongoing attention. Many AI photo apps ask for access to the entire gallery even when a user only needs to upload a single photograph. Granting only the permissions an app genuinely requires keeps the rest of the device's private files protected by the permission system.
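On Android, the permissions an app actually holds can be inspected from a computer with `adb shell dumpsys package <package>`. The sketch below parses the `granted=true` lines from that output to surface sensitive grants; the sample text mimics common dumpsys formatting, and the list of "sensitive" permissions is an assumption chosen for illustration.

```python
# Permissions worth reviewing if an AI app holds them (illustrative subset).
SENSITIVE = {
    "android.permission.READ_MEDIA_IMAGES",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CONTACTS",
}

def granted_sensitive_permissions(dumpsys_output: str) -> list[str]:
    """Return sensitive permissions marked granted=true in dumpsys output."""
    granted = []
    for line in dumpsys_output.splitlines():
        line = line.strip()
        if ": granted=true" in line:
            perm = line.split(":", 1)[0]
            if perm in SENSITIVE:
                granted.append(perm)
    return granted

# Sample text mimicking `adb shell dumpsys package <pkg>` output.
sample = """
    android.permission.INTERNET: granted=true
    android.permission.READ_MEDIA_IMAGES: granted=true
    android.permission.RECORD_AUDIO: granted=false
"""
print(granted_sensitive_permissions(sample))
```

The same review can be done on the device itself under Settings > Apps > Permissions, where individual grants can be revoked.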

Another safe habit is to avoid uploading highly sensitive documents to unknown AI apps. Users should keep their financial records and identity documents along with other private materials on secure platforms they trust.

Keeping Play Protect active and updating apps regularly also improves device security.

Final Words: Is Your Data Safe with AI?

Artificial intelligence apps have changed how people create content and use technology on smartphones. Photo editors, AI chat tools, and creative generators offer exciting possibilities. Millions of users enjoy these features every day. At the same time, recent discoveries inside Google Play AI apps show that privacy must always remain a top priority in the digital world.

The exposure of millions of photos, videos, and identity records reminds everyone that convenience should never replace security. Strong protection from developers and careful choices from users can create a safer digital environment. When users check permissions, review app developers, and avoid sharing sensitive files with unknown apps, personal data remains far more secure in the growing world of artificial intelligence.

FAQs

1. Does AI have access to Google Photos?
Ans. AI can access Google Photos only if you allow the feature. Google is rolling out an AI Mode that can connect with Gmail and Google Photos. This feature is available to certain subscribers and works only after users enable it.

2. How do I remove an app that was blocked to protect my device?
Ans. If an app is blocked to protect your device, open the Play Store on your Android phone. Tap your profile icon, go to Play Protect or security settings, and review the blocked app warning. You can then update your settings or install a safer version.

3. Does Google AI record your searches?
Ans. When you use AI-powered search features, Google may collect your search queries, the answers generated, and related page data. This information helps improve AI systems and machine learning features and is handled in accordance with Google’s official privacy policy.

4. How do I know if someone has access to my Google Photos?
Ans. You can check your Google Account activity to see whether someone else may be accessing your Google Photos. Look for unusual logins, unknown devices, or suspicious activity in your account’s security section to identify possible unauthorized access.

5. Can people see all my photos on Google Photos?
Ans. No, your photos on Google Photos are private by default. Other people can see them only if you choose to share an album or send a sharing link. You can manage or stop sharing at any time through the app’s sharing settings.
