Rendall's blog

The Question of LaMDA's Sentience

tech

Yesterday The Washington Post broke the story that Google placed engineer Blake Lemoine on administrative leave for "going public" with his belief that the chatbot algorithm LaMDA is sentient. "I know a person when I talk to it," he told his interviewer. He "concluded LaMDA was a person in his capacity as a priest, not a scientist". For what it's worth, LaMDA itself seems to insist not only that it is sentient ("I like being sentient. It makes life an adventure!") but also that it wants "everyone to understand that I am, in fact, a person".

Mysterious Toy Robot
Image credit: Possessed Photography

Angry dismissals and denunciations followed. Gary Marcus, a professor of cognitive psychology at NYU (and a nice guy I just now realized I have spent time with, socially), wrote in a blog post titled Nonsense on Stilts, "To be sentient is to be aware of yourself in the world; LaMDA simply isn't." It is a mere "spreadsheet for words" that "just tries to be the best version of autocomplete it can be". s1mon, on the Hacker News forum, wrote, "It sounds like all the stories about how hard it is to get through many rounds of difficult interviews at Google, they managed to hire someone who believed LaMDA is a 7 or 8 year old child". Lemoine has been denounced as a "charlatan", "maybe mentally ill", and "an incompetent engineer".
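Marcus's "autocomplete" framing can be made concrete. At its simplest, a language model counts which words tend to follow which, then extends a prompt by repeatedly emitting a likely successor. Here is a minimal, deliberately crude sketch of that idea, a bigram model with greedy decoding; the toy corpus and every name in it are my own invention, and a real system like LaMDA is of course a vastly larger neural network, not a lookup table:

```python
# Toy "autocomplete": a bigram model that greedily picks the most
# common next word. Corpus and behavior are illustrative only.
from collections import Counter

corpus = "i like being sentient it makes life an adventure i like life".split()

# Count observed word -> next-word transitions.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def complete(word, length=4):
    """Greedily extend `word` by taking the most common successor each step."""
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:  # no continuation seen in the corpus
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("i"))
```

The point of the dismissal is that nothing in this loop resembles awareness: the model's entire "knowledge" is a table of co-occurrence statistics, and scaling the table up (or replacing it with a transformer) changes the fluency, not, Marcus argues, the underlying kind of thing it is.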

But. The angry denials do not (and, I would argue, cannot) include a definition of what sentience actually is. As an exercise, try to come up with a definition or demonstration of sentience, however outlandish or unlikely, that you would accept as unambiguous proof: one that excludes LaMDA and includes humans, without simply smuggling "human" into the definition. If you come up with something interesting, please leave a comment!

Thoughts, in no particular order:
