What to keep in mind while exploring the Nano Banana trend

The Nano Banana trend shows AI’s creative power, but also raises broader questions around data use. Past cases remind us why privacy, consent, and transparency matter as such viral trends grow.

Punam Singh

The viral rise of AI-powered image trends like Nano Banana highlights how fast consumer technology can capture public imagination. Millions of users are experimenting with selfies to create playful avatars, 3D figurines, or stylised portraits. From a consumer technology adoption perspective, this is an impressive case of mainstream engagement with generative AI.


However, what lessons from the past should we keep in mind as such trends grow?

Why is it riskier than a simple filter?

Nano Banana, built on Gemini 2.5, edits and re-renders user photos into new images. Because it works on user-supplied photos, the system may be ingesting high-resolution facial data and deriving facial features to keep edits consistent across variations. Both the raw images and the derived facial data are precisely the sort of biometric signal that can be reused to re-identify a person, match them across photos, or train recognition systems.
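
To make the re-identification risk concrete, here is a minimal sketch, assuming the open-source face_recognition library and two hypothetical local files (selfie.jpg and group_photo.jpg). It is a generic illustration of how face embeddings work, not a description of Nano Banana's actual pipeline.

```python
# Minimal sketch: derive a facial embedding from one photo, then use it to
# re-identify the same person in another. Assumes the open-source
# face_recognition library (pip install face_recognition) and two
# hypothetical local files, selfie.jpg and group_photo.jpg.
import face_recognition

# Load the kind of selfie a user might upload to an image-editing trend.
selfie = face_recognition.load_image_file("selfie.jpg")
# A 128-dimensional vector describing the face: the "derived" biometric signal.
selfie_encoding = face_recognition.face_encodings(selfie)[0]

# Load a completely different photo, e.g. one scraped from social media.
other_photo = face_recognition.load_image_file("group_photo.jpg")
other_encodings = face_recognition.face_encodings(other_photo)

# Compare the stored embedding against every face found in the second photo.
matches = face_recognition.compare_faces(other_encodings, selfie_encoding)
print("Same person found in the other photo:", any(matches))
```

The detail that matters is that the embedding is just a small array of numbers: once it exists, it can be stored, copied, and matched against new photos long after the original selfie has been deleted.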

What actually happened in the past

There have been several cases that show the variety of harms and enforcement outcomes you should expect if biometric image collection is mishandled.

Here are a few documented cases where things have gone wrong.


Incident: Meta / Facebook – Texas settlement
What happened: Meta was sued for using facial recognition on user images without explicit consent via its "Tag Suggestions" feature.
Key problematic aspects: Mass biometric gathering; lack of proper disclosure; users were not clearly informed; accumulated liability leading to a large settlement (USD 1.4 billion in Texas).

Incident: Lensa.ai / Prisma Labs lawsuit over biometric data
What happened: In Illinois, users sued Lensa alleging that the app's Magic Avatars feature collected "facial geometry" data without proper permission, stored it, and enabled its use for training neural networks.
Key problematic aspects: Access to device photos; use of personal data for training AI; storage and reuse of biometric indicators; potential violation of biometric privacy law.

Incident: Clearview AI
What happened: The company scraped billions of photos from social media and built a huge facial recognition database, which regulators in several jurisdictions later ruled violated privacy and data protection rules (GDPR and others). It also suffered a data breach in which its client list was exposed.
Key problematic aspects: No consent from the people photographed; data used for surveillance and law enforcement in opaque ways; public outcry and regulatory penalties; risk of misuse; little control for the people whose images were used.

Incident: Tea Dating Advice app breach
What happened: The women-only app had roughly 72,000 images leaked, including selfies and photo IDs. Some user verification images, which many assumed would be deleted, were apparently retained; private messages also leaked.
Key problematic aspects: Exposure of sensitive personal images; violation of user expectations about privacy and deletion; possible misuse of image data for fraud or deepfakes; breach of trust.

Incident: GenNomis / AI-Nomis exposed database
What happened: A database of AI-generated images and user prompts was left exposed, including harmful content (explicit material, CSAM, etc.).
Key problematic aspects: Beyond exposing personal content in some cases, the platform facilitated harmful content, underlining the ethical and content-moderation risk; it also shows how image and prompt data can leak when infrastructure is insecure.

Incident: Madurai Police / Copseye app
What happened: A local facial recognition app used by police left a database of photos, names, OTPs, admin credentials, and more publicly accessible.
Key problematic aspects: The law enforcement context amplifies the risk and raises the stakes; personally identifiable data and biometric (photo) data were exposed; rights, oversight, and security protocols were insufficient.

How user expectations collide with reality

Users treat trends as 'fun' and expect ephemeral outputs. In reality, apps often retain uploads for debugging or to improve models, and terms and policies that users do not read may grant broad rights to use, modify, or train on those images.

Watermarks or provenance metadata may be present as security features, but they do not fully prevent reuse or downstream licensing.
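
To see why embedded metadata on its own is weak protection, here is a minimal sketch, assuming the Pillow imaging library and a hypothetical AI-generated file named edited_output.jpg: copying only the pixel data into a new file leaves any EXIF-style provenance tag behind.

```python
# Minimal sketch: embedded metadata does not survive a trivial re-save.
# Assumes the Pillow library (pip install Pillow) and a hypothetical
# AI-generated file named edited_output.jpg.
from PIL import Image

original = Image.open("edited_output.jpg")
print("Metadata present before:", bool(original.getexif()))

# Copy only the pixel data into a fresh image and save it without metadata.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("stripped_copy.jpg")

print("Metadata present after:", bool(Image.open("stripped_copy.jpg").getexif()))
```

Watermarks baked into the pixels are harder to remove than metadata, but even they act as detection aids rather than controls on reuse or downstream licensing.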


The gap between expectation and practice is where many past harms arose.

What should users do now?

  • Treat selfies like financial data: avoid uploading high-resolution IDs or sensitive photos.
  • Prefer on-device or local-only processing features; check for explicit “do not retain” statements.
  • Review app permissions and read the privacy policy; search for words such as retain, train, sublicense, and third party (see the sketch after this list).
  • If uncomfortable, skip the trend; the social cost is lower than losing control of your biometric data.
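
For the policy-review point above, here is a minimal sketch of what scanning a privacy policy for those keywords can look like, assuming the policy text has been saved to a hypothetical local file named privacy_policy.txt.

```python
# Minimal sketch: flag the sentences in a privacy policy that mention
# rights-related keywords worth reading closely. Assumes the policy text
# has been saved to a hypothetical file named privacy_policy.txt.
import re

KEYWORDS = ["retain", "train", "sublicense", "third party"]

with open("privacy_policy.txt", encoding="utf-8") as f:
    policy = f.read()

# Split the policy into rough sentences and report any that contain a keyword.
sentences = re.split(r"(?<=[.!?])\s+", policy)
for sentence in sentences:
    hits = [k for k in KEYWORDS if k in sentence.lower()]
    if hits:
        print(f"[{', '.join(hits)}] {sentence.strip()}")
```

It does not replace reading the policy, but it surfaces the clauses most likely to describe how long uploads are retained and who else may use them.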