
Meta’s new stand-alone AI app is drawing sharp criticism for what experts are calling a “privacy nightmare.”
Designed for interactive and social use, the app is unintentionally exposing sensitive personal information — from health concerns to legal confessions — to the public, often without users realizing it.
The controversy stems from the app’s “share” button, which encourages users to publish their AI chats.
Many are unaware that selecting it makes their conversation publicly visible.
This has led to the exposure of private matters, including court documents, medical symptoms, and even audio recordings discussing personal hygiene or legal advice.
A security researcher found instances of users revealing home addresses and court-related content.
The problem deepens when Meta AI is linked with Instagram. If a user’s Instagram profile is public, their AI interactions — no matter how sensitive — can appear in the app’s open feed.
The result is a chaotic stream of bizarre and sometimes inappropriate content, from job applications to AI-generated memes.
While the app is gaining viral attention, privacy advocates warn that Meta must implement clearer privacy controls. Until then, users risk turning personal moments into public spectacle.