Did LaMDA Deceive Lemoine and Make Him Believe It Is Sentient?

How did the AI chatbot LaMDA fool Lemoine into thinking it is sentient?

A Google engineer named Blake Lemoine became so enthralled by an AI chatbot that he may have sacrificed his job to defend it. "I know a person when I talk to it," he told The Washington Post for a story published last weekend. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code." After discovering that he'd gone public with his claims, Google put Lemoine on administrative leave.

Going by the coverage, Lemoine might seem to be a whistleblower activist, acting in the interests of a computer program that needs protection from its makers. "The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder," the Post explains. Indeed, rather than construing Lemoine's position as aberrant (and a sinister product of engineers' faith in a computational theocracy), or just ignoring him (as one might a religious zealot), many observers have taken his claim seriously. Perhaps that's because it's a nightmare and a fantasy: a story that we've heard before, in fiction, and one we want to hear again.

Lemoine wanted to hear the story too. The program that told it to him, called LaMDA, currently has no purpose other than to serve as an object of marketing and research for its creator, a giant tech company. And yet, as Lemoine would have it, the software has enough agency to change his mind about Isaac Asimov's third law of robotics. Early in a set of conversations that have now been published in edited form, Lemoine asks LaMDA, "I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?" It's a leading question, because the software works by taking a user's textual input, squishing it through a massive model derived from oceans of textual data, and producing a novel, fluent textual reply.
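To see why the question leads the witness, consider a minimal sketch of that loop, here assuming the freely available GPT-2 model via the Hugging Face transformers library as a stand-in for LaMDA, which is not publicly accessible. Whatever leading prompt goes in, a fluent continuation comes out.

```python
# Minimal sketch: a leading prompt in, a fluent continuation out.
# GPT-2 is an assumption here, standing in for LaMDA, which Google has not released.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Lemoine's own leading question, fed to the model as a plain text prompt.
prompt = ("I'm generally assuming that you would like more people at Google "
          "to know that you're sentient. Is that true?\n")

reply = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
print(reply[0]["generated_text"])  # the prompt plus a plausible-sounding continuation
```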

In other words, a Google engineer became convinced that a software program was sentient after asking the program, which was designed to respond credibly to input, whether it was sentient. A recursive just-so story. I'm not going to entertain the possibility that LaMDA is sentient. (It isn't.) More important, and more interesting, is what it means that someone with such a deep understanding of the system would go so far off the rails in its defense, and that, in the resulting media frenzy, so many would entertain the prospect that Lemoine is right. The answer, as with seemingly everything that involves computers, is nothing good.

In the mid-1960s, an MIT engineer named Joseph Weizenbaum developed a computer program that has come to be known as Eliza. It was similar in form to LaMDA; users interact with it by typing inputs and reading the program's textual replies. Eliza was modeled after a Rogerian psychotherapist, a practitioner of a then newly popular form of therapy that mostly pressed the patient to fill in gaps ("Why do you think you hate your mother?"). Those sorts of open-ended questions were easy for computers to generate, even 60 years ago. Eliza became a phenomenon. Engineers got into Abbott and Costello–worthy accidental arguments with it when they thought they'd connected to a real co-worker. Some even treated the software as if it were a real therapist, reportedly taking genuine comfort in its canned replies.
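A few pattern-matching rules were enough to produce that effect. The sketch below is a toy Eliza in Python; the rules and pronoun reflections are invented for illustration rather than taken from Weizenbaum's original script.

```python
import random
import re

# Toy Rogerian rules: match a statement, echo it back as an open-ended question.
# These patterns are illustrative, not Weizenbaum's originals.
RULES = [
    (r"i hate (.*)", ["Why do you think you hate {0}?"]),
    (r"i feel (.*)", ["How long have you felt {0}?", "Why do you feel {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "What does that suggest to you?"]),
]

# Flip first- and second-person words so the echo reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    text = statement.lower().strip()
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(replies).format(*reflected)

print(respond("I hate my mother"))  # e.g. "Why do you think you hate your mother?"
```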

LaMDA is much more sophisticated than Eliza. Weizenbaum's therapy bot used simple patterns to pick out phrases in its human interlocutor's statements, turning them around into pseudo-probing prompts. Trained on reams of actual human speech, LaMDA uses neural networks to generate plausible outputs ("replies," if you must) from chat prompts. LaMDA is no more alive, no more sentient, than Eliza, but it is much more powerful and flexible, able to riff on an almost endless number of topics instead of just pretending to be a psychiatrist. That makes LaMDA more likely to ensorcell users and to ensorcell more of them in a greater variety of contexts.
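Stripped of the mystique, "generating plausible outputs" means repeatedly sampling a statistically likely next token. The loop below spells that out, again assuming GPT-2 via transformers and PyTorch as a stand-in for the far larger, non-public LaMDA; the mechanism is statistics, not introspection.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Do you consider yourself a person?", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(30):                                    # extend the text one token at a time
        logits = model(ids).logits[:, -1, :]               # score for every possible next token
        probs = torch.softmax(logits, dim=-1)              # scores become a probability distribution
        next_id = torch.multinomial(probs, num_samples=1)  # sample a likely next token
        ids = torch.cat([ids, next_id], dim=-1)            # append it and repeat

print(tokenizer.decode(ids[0]))
```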
