
A critical vulnerability in Meta's AI chatbot could have exposed private user conversations to anyone online.
Security researcher Sandeep Hodkasia earned a $10,000 bug bounty after discovering that the unique identification numbers assigned to edited prompts could be manipulated to access other users' private chats. Meta confirmed the issue and fixed it in January 2025, saying it found no evidence of misuse.
The issue stems from how Meta AI handles edited prompts, generating easily guessable IDs for re-generated queries.
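Meta has not published its actual ID scheme, but the general problem is easy to illustrate. In the minimal Python sketch below (all names hypothetical), a sequential counter makes every prompt ID enumerable, while a cryptographically random token gives an attacker no foothold for guessing neighboring IDs:

```python
import secrets

# Hypothetical sketch; not Meta's code. A sequential counter makes IDs
# enumerable: seeing ID 1001 tells an attacker that 1000, 1002, 1003, ...
# almost certainly exist as well.
_counter = 1000

def guessable_prompt_id() -> int:
    global _counter
    _counter += 1
    return _counter

# A cryptographically random token cannot be enumerated this way.
def unguessable_prompt_id() -> str:
    return secrets.token_urlsafe(16)  # ~128 bits of randomness

print(guessable_prompt_id())    # e.g. 1001 -> the next ID is predictable
print(unguessable_prompt_id())  # e.g. 'pZ0X...' -> nothing to extrapolate
```

Unguessable IDs alone are not a complete defense, however; the deeper failure was on the server side.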
By altering these IDs, Hodkasia could view conversations that were never marked as "shared," a flaw that could have been exploited for large-scale data scraping.
Meta's servers failed to verify access permissions, enabling unauthorized retrieval of user content.
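This class of flaw is commonly called an insecure direct object reference (IDOR): the server returns whatever object a client-supplied ID resolves to, without checking who is asking. A minimal Python sketch, with hypothetical names and data, shows the missing check and its fix:

```python
# Hypothetical illustration of the IDOR pattern; not Meta's code.
PROMPTS = {
    "abc123": {"owner": "alice", "text": "draft of a private message"},
}

def get_prompt_vulnerable(prompt_id: str) -> dict:
    # IDOR: trusts the client-supplied ID alone, so any valid or
    # guessed ID returns another user's content.
    return PROMPTS[prompt_id]

def get_prompt_fixed(prompt_id: str, requesting_user: str) -> dict:
    prompt = PROMPTS.get(prompt_id)
    # Authorization check: the record must exist AND belong to the caller.
    if prompt is None or prompt["owner"] != requesting_user:
        raise PermissionError("not found or not authorized")
    return prompt

print(get_prompt_vulnerable("abc123"))      # anyone holding the ID can read it
print(get_prompt_fixed("abc123", "alice"))  # the owner succeeds
try:
    get_prompt_fixed("abc123", "mallory")   # another user is rejected
except PermissionError as e:
    print(e)
```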
This revelation follows earlier concerns that the Meta AI app made “shared” conversations visible through its Discover feed, often without users fully understanding the implications.
One subject matter expert observed that if Facebook itself isn't fully secure, Meta AI, which is built to search, chat, and learn from users, could pose an even greater privacy risk.
To stay safe while using AI tools, users are advised to avoid sharing personal or sensitive information, to use incognito modes when available, and to avoid staying logged into social media platforms during AI sessions.
Understanding privacy settings and reading policies—ideally with AI assistance—can also help users better protect their data.