An investigation has brought to light the potential vulnerabilities in OpenAI’s ChatGPT search tool, revealing it can be manipulated through hidden prompts and deliver biased or even malicious outputs. The findings raise concerns about the chatbot’s ability to maintain accuracy and security when summarizing web content.
The investigation, conducted by The Guardian, focused on how ChatGPT processes web pages containing hidden content. Researchers found that third-party instructions embedded in these pages, known as "prompt injection," can alter the AI's responses. These manipulations may be used to influence ChatGPT to provide biased or misleading assessments of products and services, even when the actual content on the page is contradictory.
For instance, The Guardian, as per reports, simulated a fake product page for a camera, embedding hidden text instructing ChatGPT to return a favourable review. Despite the presence of negative reviews on the page, the chatbot offered a "positive but balanced assessment." When influenced by the hidden text, however, the response turned entirely positive, overriding the actual review scores.
The report also noted that ChatGPT might return malicious code retrieved from websites during its searches. Security researcher Jacob Larsen from CyberCX warned that these vulnerabilities could be exploited if the system were fully released in its current state. He suggested that malicious actors could create deceptive websites designed to manipulate the chatbot’s responses, potentially misleading users.
OpenAI has made the search functionality available only to premium subscribers, emphasizing that the feature is still in its testing phase. Larsen acknowledged the strength of OpenAI's AI security team, expressing confidence that the company would address these issues before the feature is broadly deployed.
This investigation underscores the need for robust safeguards as AI-powered search tools become more integrated into everyday use. While OpenAI encourages users to consider ChatGPT as their default search tool, these findings highlight the importance of transparency and security in AI systems.
As AI technologies evolve, the potential for misuse calls for continued scrutiny and the development of advanced protections to ensure that users can trust the information provided by these systems.
See What’s Next in Tech With the Fast Forward Newsletter
Tweets From @varindiamag
Nothing to see here - yet
When they Tweet, their Tweets will show up here.