A former Google engineer recently claimed that Google's artificial intelligence system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly sceptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking, feeling and experiencing.
The question of what it would mean for an artificial intelligence model to be sentient is complicated. Language researchers can draw on their work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of assuming that an entity that can use language fluently is sentient, conscious or intelligent.
Text generated by models like Google's LaMDA can be hard to distinguish from text written by humans. This impressive achievement is the result of a decades-long program to build models that generate grammatical, meaningful language.
Large artificial intelligence language models can engage in fluent conversation. However, they have no overall message to communicate, so their phrases often follow common literary tropes, extracted from the texts they were trained on.
For instance, if prompted with the topic "the nature of love", the model might generate sentences about believing that love conquers all. The human brain predisposes the reader to interpret these words as the model's opinion on the topic, but they are simply a plausible sequence of words.
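To see this in practice, here is a minimal sketch of how such a prompt-and-continue interaction looks in code. LaMDA itself is not publicly available, so the sketch assumes the open-source GPT-2 model via the Hugging Face transformers library as a stand-in; the point is only that the output is a statistically plausible continuation of the prompt, not a held opinion.

```python
# Minimal sketch: a language model continuing a prompt.
# Assumes the Hugging Face `transformers` library and the small GPT-2 model
# as a stand-in for a large model like LaMDA.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The nature of love is"
outputs = generator(
    prompt,
    max_new_tokens=30,
    do_sample=True,          # sample several different continuations
    num_return_sequences=3,
)

for out in outputs:
    # Each continuation is just a likely sequence of words given the prompt,
    # not the model's "view" on love.
    print(out["generated_text"])
```

Running the sketch typically yields several different, fluent-sounding continuations from the same prompt, which underlines that the model is sampling plausible word sequences rather than expressing a belief.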