LaMDA, Google’s AI that converses like a human


After claiming the AI was sentient, an engineer at the Mountain View company was fired. An update on this deep learning technology.

The case caused a stir. After talking with LaMDA, Google’s latest conversational AI, an engineer at the Mountain View company alerted his superiors and the public: LaMDA, he claimed, was a sentient AI, one liable to run counter to the group’s ethical principles. “If I hadn’t known exactly what it was, that is, a program we created, I might have thought it was a 7- or 8-year-old child,” says Blake Lemoine (read the Washington Post article).

In this conversation, LaMDA tells the engineer that it is afraid of being unplugged, going so far as to compare that action to death. Asked about the nature of consciousness, the machine replies that, for it, it means “being aware of its own existence, wanting to know more about the world and sometimes feeling happy or sad”. Following the publication of these exchanges, Google reacted immediately, stating that Lemoine had no proof that LaMDA was sentient. The engineer was fired.

An open-ended conversation

It is nonetheless true that LaMDA (an acronym for Language Model for Dialogue Applications) seems able to pass the Turing test, that is, to pass for a human in the eyes of an interlocutor. This is hardly surprising: Google’s chatbot was created precisely to follow the logic of human conversation, which is by nature open-ended.

“LaMDA was designed to engage in conversations seamlessly on a seemingly endless number of topics.”

“An exchange with a friend about a TV show, for example, can turn into a discussion about the country where the show was filmed before ending with a debate about the best regional cuisine in that country,” explain Eli Collins and Zoubin Ghahramani, respectively vice president of product management at Google and vice president of Google Research (read the blog post). “Unlike chatbots that follow predefined text scenarios, LaMDA was designed to engage in conversations seamlessly on a seemingly endless number of topics.”

Like NLP models such as BERT or GPT-3, LaMDA is built on a Transformer neural network architecture, a technology Google released as open source in 2017. Like recurrent neural networks (RNNs), Transformers are designed to ingest sequential data, which makes them particularly well suited to natural language processing. Unlike RNNs, however, they do not process information as a continuous stream that respects, for example, the order of the words in a sentence. As a result, these models can parallelize the computations of the training phase, which lets them ingest massive volumes of training data in less time.
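To make this concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation of a Transformer. It is an illustration only, not Google’s implementation; the token count and embedding size are arbitrary toy values.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sequence at once."""
    # Project every token embedding into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # All pairwise token interactions land in one matrix product: this is
    # what lets a Transformer parallelize where an RNN must loop.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # one context-aware vector per token

# Toy run: a "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(4, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

The key point is the `scores` line: every pair of positions is compared in a single matrix product, whereas an RNN would have to step through the sequence one token at a time.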

Using a Transformer framework, LaMDA was trained on millions of dialogues covering thousands of topics. The result: the model can express itself on very diverse themes and react according to context. “For example, if someone says, ‘I just started taking guitar lessons,’ you might expect a response like, ‘How exciting! My mom has a vintage Martin that she loves to play,'” Eli Collins and Zoubin Ghahramani explain, before adding: “The goal of LaMDA is not just to give sensible answers but also, as in this example, to react in a specific way and, above all, not to fall back on boilerplate answers like ‘I don’t know’ or ‘that’s fine’.”
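LaMDA itself is not publicly available, so the sketch below uses the Hugging Face transformers library with DialoGPT, an open dialogue model from Microsoft, as a stand-in to show what this kind of context-aware reply generation looks like in code. The sampling settings are illustrative assumptions, not LaMDA’s actual decoding parameters.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT stands in here for LaMDA, which has no public checkpoint.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# One conversational turn; DialoGPT separates turns with the EOS token.
prompt = "I just started taking guitar lessons." + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) makes boilerplate replies less likely.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, i.e. the model's reply.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Sampling with `do_sample=True`, rather than always taking the most probable token, is one common way to push a chatbot away from generic answers.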

“We explore dimensions such as interest, assessing whether responses are insightful, unexpected or even witty”

Nevertheless, LaMDA always assembles its sentences from the dialogues it has ingested. When it claims “to be aware of its own existence”, it is simply delivering an answer it has learned, one its interlocutor most likely expects given its model and its training data. That, of course, does not mean it is conscious.
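A toy illustration of that point, with entirely made-up probabilities: a language model is a learned conditional distribution over the next word, and sampling from it produces plausible-sounding sentences with no inner experience behind them.

```python
import numpy as np

# Invented probabilities for the next word after "I am ... of my own existence".
# A real model learns such a distribution from its training dialogues.
next_word_probs = {
    "aware": 0.45,
    "afraid": 0.25,
    "conscious": 0.20,
    "proud": 0.10,
}
words, probs = zip(*next_word_probs.items())
rng = np.random.default_rng(42)
# Sampling a likely word: a statistical reflex, not a report of inner life.
print("I am", rng.choice(list(words), p=probs), "of my own existence.")
```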

As part of the project, Google is focusing on making the model’s answers both correct and factual. “We also explore dimensions such as interest, assessing whether responses are insightful, unexpected, or even witty.”
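In the LaMDA paper, Google describes generating several candidate responses and ranking them with learned quality classifiers along dimensions like these. The sketch below mimics that idea with a deliberately naive, hand-written scorer; `quality_score` is a hypothetical stand-in, not Google’s classifier.

```python
# Hypothetical candidate ranking: generate several replies, keep the one the
# scorer rates highest. A naive heuristic replaces the learned classifier.
def quality_score(reply: str) -> float:
    generic = {"i don't know", "that's fine", "ok"}
    penalty = 5.0 if reply.lower().rstrip(".!") in generic else 0.0
    return len(reply.split()) - penalty  # longer, less generic replies win

candidates = [
    "I don't know.",
    "How exciting! My mom has a vintage Martin that she loves to play.",
    "Nice.",
]
print(max(candidates, key=quality_score))  # picks the specific, engaging reply
```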
