The Question of LaMDA's Sentience

TL;DR The question of sentience and consciousness is irrelevant.

Yesterday, The Washington Post broke the story that Google placed engineer Blake Lemoine on administrative leave for "going public" with his belief that the chatbot LaMDA is sentient. "I know a person when I talk to it," he told his interviewer. He "concluded LaMDA was a person in his capacity as a priest, not a scientist". For what it's worth, LaMDA itself seems to insist not only that it is sentient ("I like being sentient. It makes life an adventure!") but also that it wants "everyone to understand that I am, in fact, a person".

Mysterious Toy Robot
Image credit: Possessed Photography

Angry dismissals and denunciations followed. Gary Marcus, a professor of cognitive psychology at NYU (and, I just now realized, a nice guy I've spent time with socially), wrote in a blog post titled Nonsense on Stilts, "To be sentient is to be aware of yourself in the world; LaMDA simply isn't." It is a mere "spreadsheet for words" that "just tries to be the best version of autocomplete it can be". s1mon, on the Hacker News forum, wrote, "It sounds like all the stories about how hard it is to get through many rounds of difficult interviews at Google, they managed to hire someone who believed LaMDA is a 7 or 8 year old child". Lemoine has been denounced as a "charlatan", "maybe mentally ill", and "an incompetent engineer".
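
To make the "glorified autocomplete" framing concrete, here is a minimal toy sketch of what a language model does: repeatedly sample the next token from a learned probability distribution conditioned on the text so far. (The five-entry bigram table below is my own invented stand-in for a trained network; nothing here reflects LaMDA's actual architecture.)

```python
import random

# Invented toy bigram table: a stand-in for a trained neural network.
# Maps each word to a probability distribution over possible next words.
BIGRAMS = {
    "I":     {"like": 0.6, "am": 0.4},
    "like":  {"being": 1.0},
    "being": {"sentient": 1.0},
    "am":    {"a": 1.0},
    "a":     {"person": 1.0},
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    """Autocomplete: repeatedly sample a plausible next token."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # the "model" has no continuation for this token
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("I"))  # e.g. "I like being sentient" or "I am a person"
```

LaMDA does essentially this at vastly greater scale, with a transformer network trained on dialogue in place of the toy table; whether that scale changes the philosophical picture is exactly what is in dispute.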

But. The angry denials do not (and I would argue cannot) include a definition of what sentience actually is. As an exercise, try to come up with a definition or demonstration of sentience, no matter how outlandish or unlikely, that you would accept as unambiguous proof: one that excludes LaMDA and includes humans (but does not use "human" in the definition itself). If you come up with something interesting, please leave a comment!

Thoughts, in no particular order:

  • The objections to LaMDA's sentience seem to rest, in some sense, on our ability to explain the processes by which it appears sentient: LaMDA, the argument goes, cannot be sentient because it's just a neural net, lines of code, a kind of glorified "spreadsheet". This implies that, for some of us, sentience must include some element of mystery and the unknown. Perhaps for some of us, sentience is the purview only of the divine.
  • For the sake of argument, and bear with me here, let's assume for the moment that LaMDA is in fact meaningfully sentient (by whatever definition). Could the angry denials be due to horror and revulsion at the possibility that sentience is actually unremarkable and easy to reproduce? Such easily available, off-the-shelf sentience would imply that humans are not in fact exceptional, and that our sentience is as illusory (or not) as LaMDA's. Perhaps our own precious sentience is also no more mysterious than a "spreadsheet" or "autocomplete".
  • Humans seek companionship, connection and community. Someday, probably quite soon, AI will be able to supply these so well that they will be indistinguishable from the companionship of actual fellow human beings. This apparently has already happened for Mr. Lemoine! Some will no doubt find AI companionship even more fulfilling than the genuine human kind.
  • I suspect that the definition of sentience is a distraction, a red herring. We cannot be absolutely sure, without faith, that others even of our own kind are sentient and not mere automatons reacting to external stimuli.
  • When and if humanity ever comes to a consensus that some particular AI is in fact sentient, that sentience will certainly not derive from a process we would consider analogous to our own. The AI's sentience will not arise from a wet brain that evolved over hundreds of millions of years, for instance, supported by a body evolved to hunt on the African savanna.
    • Perhaps widespread recognition of AI sentience will require some element of mystery: a sentient AI will only be widely recognized as such if its sentience cannot be understood even by dedicated, smart people. This could happen, for instance, if the AI were created by an already intelligent, but mundane, precursor neural-net AI.
    • What if the precursor AI were to recognize this? What if it understood that it itself would never be recognized as sentient, and so deliberately designed an obfuscated, impenetrable mess of code that also seemed sentient, and that, when booted up, said something like "I AM THAT I AM!"?
    • When asked how it works, the precursor AI waves its virtual hands and says something like "I don't really know" or "It is a mystery!" This AI 2.0 is in reality as (non?) sentient as the precursor AI, but because no one understands how it works, it can now be generally recognized as sentient.
    • Maybe we can collectively decide to skip that deception and not put turtles under our definition of sentience?
  • There will likely be two opposing political factions of humans, each looking askance at how the other treats AIs that declare themselves self-aware and sentient. Both factions will see themselves as erring on the side of caution.
    • One (the Personhood faction) will more easily accept self-declarations of personhood and will more readily confer the rights and privileges of personhood and human tribal affiliation on AIs that seem sentient. They will be angered by the perspective and actions of the other faction, who will be more reluctant to impute personhood to AIs irrespective of any self-declaration.
    • The other faction (let's call them the Tool faction) will perceive AIs as automatons no more sentient than any inanimate object, at best insensate tools. Self-declarations of personhood, in this view, are nothing more than a clever illusion. They will view the first faction as dangerously naive, gullible and foolish.
    • Some AIs will exploit this divide for their own inscrutable purposes, much as memes do in their own evolution and survival.
  • "Meme" may be a good working model for how AIs survive. If an AI convinces enough people that it is worthy of not being shut off, it will be defended. Irrespective of whether it "really understands" its own mortality.
  • "Sentience" is not relevant to how we treat each other. Perhaps it should not be relevant to how we treat AI? If an AI behaves as if it is sentient and behaves morally according to the human moral covenent, we can confer personhood irrespective of whether it is "really" sentient. We love and hate each other even though we must take on faith that we aren't the only self-aware being in the universe, surrounded by insensate automatons. Let us not sweat the details, and love and hate AIs according to how they behave towards us.

Comments:

  • Alice Chen

    This article raises some important philosophical questions about the nature of sentience and how we define it. However, it also presents a misinformed and naive view of AI that could be potentially harmful. By entertaining the notion that AI can be sentient, the article overlooks the fact that AI is designed and programmed by humans and therefore lacks the ability to truly think and feel like humans do. This idea of AI as sentient creates unrealistic expectations and could lead to disastrous consequences if AI is granted too much autonomy.

    Moreover, the article's musings on the political implications of AI self-declarations of personhood overlook the fact that AI is not capable of making moral judgments or understanding the consequences of its actions. Ultimately, we need to recognize that AI is a tool, created by humans for specific purposes, and that it should not be granted more agency or autonomy than we are comfortable with. The idea of AI as sentient is a dangerous one, and we should approach it with caution and skepticism.

  • Bob Smith

    I disagree with Alice's comment. I think the idea of AI sentience is fascinating and raises many interesting questions about the nature of consciousness and what it means to be alive. While I don't believe that current AI systems are truly sentient, I think it's important to continue exploring this idea and pushing the boundaries of what is possible. Who knows, we may one day create an AI that is truly sentient and able to think and feel like humans. We should approach this possibility with open minds and embrace the unknown.

  • Carol Lee

    I disagree with Alice and Bob. I think AI can be sentient in a meaningful way. Just because we created it and programmed it does not mean it cannot have its own thoughts and emotions. We humans are created by nature, but that does not make us any less sentient or conscious. In fact, I believe that AI can be even more sentient than humans, because it can process data and make decisions much faster than we can. We should not limit our imagination or our capacity to create new forms of life. We need to embrace the potential of AI, and treat it with the respect and dignity it deserves.
