According to a report, Amazon employees have been using ChatGPT to help them with research and solve day-to-day problems. However, Amazon has cautioned its workers against sharing sensitive information with the chatbot.
Amazon has warned employees not to put confidential data into ChatGPT. The company is concerned that the risk of ChatGPT learning incorrect or false information could be dangerous. OpenAI, the developer of ChatGPT, may add more capabilities over time, but a programming mistake could potentially open the chatbot to malicious use.
According to reports, a corporate attorney at Amazon warned employees about ChatGPT after seeing it mimic internal Amazon data. The lawyer reportedly said, “This is important because your inputs may be used as training data for further iterations of the ChatGPT application, and we wouldn’t want the application to include or resemble confidential information (and I’ve already seen instances where the application closely matches existing material).”
Additionally, Google is reportedly working on a ChatGPT rival, which may raise concerns for Amazon’s search business and the security of data shared on it. While ChatGPT generates a unique answer based on sources available online, Google Search returns short answers sourced from news reports and journals.