
Google has announced a new bug bounty program focused entirely on artificial intelligence, inviting security researchers and ethical hackers to find flaws in its AI systems. Depending on how serious the issue is, Google is offering rewards of up to $30,000. The new initiative expands Google’s long-running Vulnerability Reward Program, and the company says it wants experts to help identify “rogue actions”: those moments when AI behaves in unexpected or dangerous ways.
Google has given a clear idea of the kinds of vulnerabilities it wants researchers to look for. Imagine an attacker tricking Google Home into unlocking a smart door, or using a hidden command that makes Gmail summarise someone’s emails and send them to a third party. These are the sorts of high-risk exploits Google wants to uncover before they can be used in the real world.
However, the company has made one thing clear: simply making Gemini or any other AI model hallucinate doesn’t count as a bug. Problems related to the kind of content the AI generates, such as hate speech or copyrighted material, should be reported directly within the product using its feedback tools. Google says this helps its AI safety teams retrain and improve models in a more targeted way.