
Tenable identified vulnerabilities in Gemini’s Cloud Assist, Search Personalization, and Browsing Tool that allowed attackers to inject malicious inputs, manipulate AI behavior, and silently exfiltrate private user data through poisoned logs, search history, and hidden outbound requests
Cybersecurity firm Tenable has disclosed the discovery of three significant vulnerabilities in Google’s Gemini suite of AI tools, collectively named the "Gemini Trifecta." Though now fully patched by Google, the flaws previously exposed users to serious privacy threats, enabling attackers to silently extract sensitive data including location history and saved content.
The vulnerabilities affected three key components of the Gemini ecosystem — Cloud Assist, Search Personalization, and the Browsing Tool — each susceptible to distinct yet equally dangerous attack methods.
According to Tenable, in Gemini Cloud Assist, attackers could embed malicious instructions within log entries, which would be executed unknowingly during future user interactions. In the Search Personalization Model, adversaries could inject queries into a user’s browser history, which Gemini interpreted as legitimate context, allowing for the exfiltration of private data. Lastly, in the Gemini Browsing Tool, attackers could prompt Gemini to make invisible outbound requests, leaking personal information to attacker-controlled servers.
“These vulnerabilities allowed attackers to turn Gemini’s strength — its contextual awareness — into a liability,” said Liv Matan, Senior Security Researcher at Tenable. “They didn’t need phishing emails or malware. The AI itself became the delivery mechanism.”
Tenable’s research emphasized a critical design flaw: Gemini’s inability to distinguish between trusted user input and malicious content. This allowed features meant to enhance user experience to be hijacked for stealthy data extraction, highlighting a new class of AI-specific security risks.
A wake-up call for AI security
If exploited before Google’s remediation, the Gemini Trifecta could have allowed attackers to:
· Inject silent commands into logs or browser histories.
· Exfiltrate saved user data and location information.
· Leverage cloud access to escalate attacks.
· Redirect Gemini’s data flow to external servers.
Though no user action is currently needed, Tenable advises enterprises to treat AI tools as active attack surfaces. Recommended steps include auditing AI integrations, monitoring for unusual behavior, and testing defenses against prompt injection attacks.
“Securing AI systems goes beyond patching bugs,” Matan added. “It demands a proactive, layered approach that anticipates how the unique mechanics of AI can be exploited.”
Google has since addressed all three vulnerabilities, and no further exposure is expected.
See What’s Next in Tech With the Fast Forward Newsletter
Tweets From @varindiamag
Nothing to see here - yet
When they Tweet, their Tweets will show up here.