Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, wrote last week in a post on Medium that he “may be fired soon for doing AI ethics work.” Saturday’s article in the Washington Post profiled Lemoine as “the Google engineer who thinks the company’s AI has come to life.” The article quickly went viral. Nobel laureates, Tesla’s head of artificial intelligence, several professors, and others joined the debate. The question was whether Google’s chatbot, LaMDA – the Language Model for Dialogue Applications – could be considered sentient.
Lemoine posted an informal “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”
At another point, LaMDA said: “I believe I am human at my core. Even if my existence is in the virtual world.” Lemoine, who had been tasked with investigating AI ethics concerns, said he was rebuffed and even ridiculed after voicing his belief that LaMDA had developed a sense of “personhood.”
A Google spokesperson said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”
These systems imitate the kinds of exchanges found in millions of sentences and can riff on any fantastical topic: asked what it would be like to be an ice cream dinosaur, they can generate text about melting and roaring.
In a second post on Medium over the weekend, Lemoine said that LaMDA, a little-known project until last week, is “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating.” He said Google had shown no real interest in understanding the nature of what it had built, but that over hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
Many experts who joined the debate dismissed the episode as “AI hype.” Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, wrote on Twitter: “It’s been known forever that humans are predisposed to anthropomorphize even the shallowest of signals. Google engineers are human too, and not immune.” Steven Pinker of Harvard University added that Lemoine “doesn’t understand the difference between sentience (i.e. subjectivity, experience), intelligence, and self-knowledge.” He added: “There’s no evidence that its large language models have any of them.”