Google warns employees about using AI chatbots
2023-06-19
Human beings remain the weakest link in cyber security. Whether it is a disgruntled employee, an overconfident one, or one who simply lacks knowledge, the human element is the common factor, and it is why most cyber security breaches trace back to human error.
Amid the seemingly unstoppable rise of AI chatbots, Alphabet, Google's parent company, is reportedly cautioning employees about how they use chatbots, including its own Bard. According to experts, the worry is that confidential information could be leaked.
The concern stems from how AI companies train their chatbots: the models learn language in part from the messages users send them. Chatbots such as Bard and ChatGPT, which has taken the world by storm, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer a myriad of prompts. Human reviewers may read those chats and thereby see internal information, and researchers have found that similar AI models can reproduce data absorbed during training, creating a leak risk.
Chatbots are trained on large datasets of text and code, which can include sensitive information. If an employee enters confidential information into a chatbot, the chatbot could later reproduce that information and share it with others. Employees have also said they are concerned that "the speed of development is not allowing enough time to study potential harms."
In addition, chatbots can generate code. If an employee uses a chatbot to generate code, that code could itself contain confidential information, posing a security risk if it exposes a path into sensitive systems or data.
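To make the risk concrete, here is a minimal sketch of one common mitigation: scrubbing likely secrets from a prompt before it leaves the company for any external chatbot. The patterns and names (REDACTION_PATTERNS, redact) are illustrative assumptions, not any vendor's actual tooling; real deployments use dedicated data-loss-prevention scanners with far broader coverage.

```python
import re

# Hypothetical patterns for illustration only; a real secret scanner
# covers many more credential and PII formats.
REDACTION_PATTERNS = {
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
    "api_key": re.compile(r"\b(?:api|secret)[_-]?key\s*[:=]\s*\S+", re.IGNORECASE),
}

def redact(prompt: str) -> str:
    """Replace likely secrets with placeholders before sending a prompt out."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Deploy with api_key=sk-live-12345 and notify ops@example.com"
    print(redact(raw))
    # -> Deploy with [REDACTED-API_KEY] and notify [REDACTED-EMAIL]
```

A filter like this runs as a proxy in front of the chatbot API, so nothing confidential ever reaches the provider's training or review pipeline in the first place.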
Google is not the only company that has warned employees about the risks of using chatbots. Other companies, such as Microsoft and IBM, have also issued similar warnings.
Some of the specific concerns that Google has about the use of chatbots are:
· Chatbots can reproduce confidential information stored in their training datasets.
· Chatbots can generate code that contains confidential information.
· Chatbots can be used to gain access to sensitive systems or data.
Google is also working to improve the security of its chatbots. Several organisations have imposed similar restrictions: Samsung implemented a comparable ban on ChatGPT, and Amazon barred staffers from sharing any code or confidential information with OpenAI's chatbot after the company said it had discovered examples of ChatGPT responses that resembled internal Amazon data. And it's not just tech companies; a number of banks, including JPMorgan Chase, Bank of America, Citigroup, Deutsche Bank, Wells Fargo and Goldman Sachs, have banned staffers from using AI chatbots, worried they could share sensitive financial information.
Overall, Google's warning to employees about the use of chatbots is a reminder of the importance of protecting sensitive information. As chatbots become more sophisticated, users need to stay aware of the risks that come with them.