
While OpenAI CEO Sam Altman has always spoken in favour of AI, his latest comment is a little alarming. He has warned users not to place blind trust in AI. In the first episode of OpenAI's podcast, Sam Altman said plainly that people are placing a great deal of trust in AI and becoming dependent on it.
Altman said, 'People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don't trust that much.'
If that was not enough, Altman also highlighted AI's deceptive side. He said that AI is capable of giving well-crafted but false answers about things that do not actually exist.
Altman further emphasized how dependent many of us have become on AI. Citing his own example, he said that since becoming a new parent he has been using ChatGPT for everything, from what to do about diaper rash to nap routines.
So what is AI Hallucination?
In simple terms, AI hallucination happens when an AI model generates inaccurate or fabricated information with complete confidence and clarity. These outputs are made up, and many have little relevance to the actual question being asked.
Given how readily users trust ChatGPT, as OpenAI CEO Sam Altman himself pointed out, over-reliance on AI could pose serious risks for individuals who trust it with anything and everything.
For instance, if you are seeking information about a specific product and the AI supplies inaccurate or made-up details, it could lead to a poor purchasing decision.