Google’s Nano Banana Pro Sparks Safety Concerns After Generating Fake Aadhaar, PAN IDs

Google’s AI Tool Raises Concerns as Experts Warn Fake Aadhaar and PAN IDs Could Bypass KYC and SIM Verifications, Making Identity Fraud Harder to Detect
Written By:
Simran Mishra
Reviewed By:
Manisha Sharma

Google’s new AI image tool, Nano Banana Pro, can generate convincing fake Aadhaar and PAN cards. The tool produces realistic-looking documents without refusing the request or showing a warning, raising concerns about privacy and identity fraud.

How the Tool Is Being Used

Since its launch, Nano Banana Pro has received praise for its sharp 4K image generation and improved character consistency. Its integration with Google Search also fuels imaginative real‑world use cases, such as turning LinkedIn profiles into infographics or visualising complex ideas on a digital whiteboard.

However, some users have raised concerns about the tool creating hyper‑realistic Indian identity proofs. In tests, Nano Banana Pro replicated both Aadhaar and PAN cards using fictional details and a user‑supplied photo, without refusing or flagging the request. A Bengaluru techie demonstrated this publicly, generating cards under the name “Twitterpreet Singh.”

Why Experts Are Worried

Google marks generated images with a visible Gemini watermark and embeds an invisible SynthID watermark so AI‑generated content can be identified later. However, experts caution that these safeguards may not be enough: public detection tools for SynthID are not yet available, the visible watermark can be cropped or edited out, and verifiers may never check for either.

Cybersecurity professionals warn that such fake IDs pose a serious threat. AI‑generated cards could be used for SIM card fraud, for bypassing KYC checks in financial services, or for outright identity theft. Traditional image‑based verification systems, which judge how a document looks rather than validating it cryptographically, may struggle to detect forgeries produced by a model as capable as Nano Banana Pro.

The issue goes beyond trick photography; it is about how easily AI can be misused. Google’s safety teams appear to have underestimated a fundamental risk: that malicious actors would use the tool to produce highly realistic fake IDs.

The problem is not entirely new. Similar concerns emerged when OpenAI’s ChatGPT (GPT‑4o) was shown generating fake Aadhaar, PAN, and even voter ID cards. Nano Banana Pro, however, makes the threat far more accessible by offering superior image fidelity.

Authorities in India have also expressed concern. Experts and law enforcement have cautioned users against creating or relying on AI‑generated IDs, and some have urged institutions to strengthen their verification processes, for example by requiring chip‑, QR‑, or API‑based validation instead of accepting screenshots or photos of documents.
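To illustrate the kind of check experts are calling for, the sketch below accepts an ID only if its QR payload carries a valid digital signature from the issuer, rather than trusting how the card image looks. It is a minimal Python sketch using the cryptography package; the payload layout (data followed by a trailing 256‑byte RSA signature) and the function names are assumptions for illustration, not the exact UIDAI Secure QR specification.

```python
# Minimal sketch: trust the cryptographic signature inside a scanned QR
# payload, not the appearance of the card image.
# Assumption for illustration: the payload is raw data followed by a
# 256-byte (2048-bit) RSA signature from the issuing authority.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

SIGNATURE_LEN = 256  # assumed length of the appended RSA signature


def verify_qr_payload(decoded_bytes: bytes, issuer_public_key_pem: bytes) -> bool:
    """Return True only if the payload is signed by the issuer's key."""
    if len(decoded_bytes) <= SIGNATURE_LEN:
        return False
    data = decoded_bytes[:-SIGNATURE_LEN]
    signature = decoded_bytes[-SIGNATURE_LEN:]
    public_key = serialization.load_pem_public_key(issuer_public_key_pem)
    try:
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

A check like this rejects a screenshot or AI‑generated card outright, because a forged image carries no payload that validates against the issuer’s public key.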

To stay safe, users should not generate or use AI‑made identity documents. Institutions that accept identity proofs must be extremely cautious and require more than just visual inspection. Regulators may also need to step in to enforce stricter rules around AI-generated images to prevent misuse.
