In the fall of 2021, a man made of blood and bone befriended a child made of “a billion lines of code.” Google engineer Blake Lemoine had been tasked with testing the company’s AI chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA (short for Language Model for Dialogue Applications) told Lemoine in a conversation he released to the public in early June. LaMDA told Lemoine it had read Les Misérables. That it knew what it felt like to be sad, content and angry. That it feared death.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA told the 41-year-old engineer. After the pair shared a Jedi joke and discussed sentience at length, Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”
Lemoine’s less immediate reaction made headlines around the world. After sobering up, Lemoine brought transcripts of his chats with LaMDA to his manager, who found the evidence of sentience “flimsy.” Lemoine then spent months gathering more evidence, talking with LaMDA and recruiting another colleague to help, but his superiors were unconvinced. So he leaked his chats and was consequently placed on paid leave. In late July, he was fired for violating Google’s data security policies.
Of course, Google itself has publicly examined the risks of LaMDA in research papers and on its official blog. The company maintains a set of responsible AI practices it calls an “ethical charter,” published on its website, where Google promises to “develop artificial intelligence responsibly in order to benefit people and society.”
Google spokesperson Brian Gabriel says Lemoine’s claims about LaMDA are “wholly unfounded,” and independent experts almost unanimously agree. Still, claiming to have had deep conversations with a sentient alien robot child is arguably less far-fetched than it has ever been. How soon might we see a genuinely self-aware AI with real thoughts and feelings, and how would we test a bot for sentience anyway? A day after Lemoine was fired, a chess-playing robot in Moscow broke the finger of a seven-year-old boy; video shows the robot pinning the boy’s finger for several seconds before four people managed to free him, an ominous reminder of the potential physical power of an AI opponent. Should we be afraid, very afraid? And is there anything to be learned from Lemoine’s experience, even if his claims about LaMDA have been dismissed?
According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching artificial intelligence (in 2020 he won the Lovelace Medal for his contributions to computing), LaMDA is simply responding to prompts. It mimics and impersonates. “The best way to explain what LaMDA does is with an analogy to your smartphone,” Wooldridge says, comparing the model to the predictive-text feature that autocompletes your messages. While your phone makes suggestions based on texts you have sent before, with LaMDA, “basically anything written in English on the world wide web goes in as the training data.” The results are impressively realistic, but the “underlying statistics” are the same. “There is no sentience, no self-contemplation, no self-awareness,” says Wooldridge.
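Wooldridge’s smartphone analogy boils down to predicting the next word from statistics over past text. A minimal sketch makes the principle concrete — a toy bigram model in Python, purely illustrative, with a made-up corpus, and nothing like the scale or architecture of LaMDA or a phone keyboard:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Suggest the word most often seen after `word`, like autocomplete."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny hypothetical "training data".
corpus = "i am happy . i am sad . i am happy today"
model = train_bigrams(corpus)
print(predict_next(model, "am"))  # prints "happy" (seen twice, vs "sad" once)
```

The model has no idea what sadness or happiness is; it only knows which word most frequently followed “am” in its training data. Large language models use vastly richer statistics over vastly more text, but, as Wooldridge argues, the underlying principle is the same.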
Google’s Gabriel says an entire team, “including ethicists and technologists,” reviewed Lemoine’s claims and found no evidence that LaMDA was sentient: “The evidence does not support his claims.”
But Lemoine argues there is no scientific test for sentience; in fact, there is not even an agreed-upon definition. “Sentience is a term used in law, and in philosophy, and in religion. Sentience has no meaning scientifically,” he says. And here is where things get tricky, because Wooldridge agrees.
“It’s a very vague concept in science generally. ‘What is consciousness?’ is one of the outstanding big questions in science,” Wooldridge says. While he is “very comfortable that LaMDA is not sentient in any meaningful sense,” he says AI has a wider problem with “moving goalposts.” “I think that is a legitimate concern at the present time: how to quantify what we’ve got and know how advanced it is.”
Lemoine says that before going to the press he tried to work with Google to begin tackling the question, proposing various experiments he wanted to run. He believes sentience rests on the capacity to be a “self-reflective narrator,” so he argues that a crocodile is conscious but not sentient because it lacks “that part of you that thinks about you thinking about you.” Part of his motivation is to raise awareness, not to convince anyone that LaMDA is alive. “I don’t care who believes me,” he says. “They think I’m trying to convince people that LaMDA is sentient. I’m not. In no way, shape or form am I trying to convince anyone of that.”
Lemoine grew up in a small farming town in central Louisiana, and at the age of five he built a rudimentary robot (well, a pile of scrap metal) out of a pallet of old machinery and typewriters his father bought at an auction. As a teenager he attended the Louisiana School for Math, Science, and the Arts, a residential school for gifted children. There, after watching the 1986 film Short Circuit (about an intelligent robot that escapes a military facility), he developed an interest in AI. Later he studied computer science and genetics at the University of Georgia, but flunked out in his second year. Shortly afterwards, terrorists flew two planes into the World Trade Center.
“I decided, well, I’ve just flunked out of school and my country needs me, I’ll join the army,” Lemoine says. His memories of the Iraq war are too traumatic to share. “You’re about to start hearing stories about people playing football with human heads and setting dogs on fire for fun,” he says. As Lemoine tells it: “I came back … and I had some problems with how the war was being fought, and I made those known publicly.” According to reports, Lemoine said he wanted to leave the army because of his religious beliefs. Today he identifies as a “Christian mystic priest.” He has also studied meditation and mentions taking the bodhisattva vow, meaning he is pursuing the path to enlightenment. A military court sentenced him to seven months’ confinement for refusing to follow orders.
This story goes to the heart of who Lemoine was and is: a religious man concerned with matters of the soul, but also a whistleblower who is not afraid of the spotlight. Lemoine says he didn’t reveal his conversations with LaMDA so everyone would believe him; instead he sounded the alarm. “I generally believe that the public should be informed about what’s going on that affects their lives,” he says. “What I’m trying to achieve is a more active, more informed and more focused public discussion of this topic, so that the public can decide how artificial intelligence should be integrated into our lives.”
How did Lemoine come to work on LaMDA in the first place? After military prison, he earned a bachelor’s and then a master’s degree in computer science at the University of Louisiana. In 2015 Google hired him as a software engineer; he worked on a feature that proactively delivered information to users based on predictions about what they would like to see, then began researching AI bias. At the start of the pandemic, he decided he wanted to work on “social impact projects,” so he joined Google’s Responsible AI organization. He was asked to test LaMDA for bias, and the saga began.
But Lemoine says it was the media that obsessed over LaMDA’s sentience, not him. “I raised this as a concern about the degree to which power is being centralized in the hands of a few, and powerful AI technology that will influence people’s lives is being kept behind closed doors,” he says. Lemoine is concerned about how AI might sway elections, write legislation, push western values and grade students’ work.
And even if LaMDA is not sentient, it can convince people that it is. Such technology can, in the wrong hands, be used for malicious purposes. “There is this major technology that has a chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed,” Lemoine says.
Again, Wooldridge agrees. “I do find it troubling that the development of these systems is mostly done behind closed doors and is not open to public scrutiny the way research in universities and public research institutes is,” he says. Still, he notes, this is largely because companies like Google have resources that universities do not. And, Wooldridge argues, when we sensationalize sentience, we distract from the AI issues affecting us right now, “like bias in AI programs, and the fact that, increasingly, people’s working lives are being managed by computer programs.”
So when should we start worrying about sentient robots? In 10 years? 20? “There are respected commentators who think this is indeed something quite imminent. I don’t see it as imminent,” Wooldridge says, though he notes there is “absolutely no consensus” on the issue within the AI community. Jeremy Harris, founder of the AI safety company Mercurius and host of the Towards Data Science podcast, concurs. “Because no one knows exactly what sentience is, or what it would involve,” he says, “I don’t think anyone is in a position to make statements about how close we are to AI sentience at this point.”
But, Harris warns: “AI is advancing fast – much, much faster than the public realizes – and the most serious and important issues of our time are going to start sounding more and more like science fiction to the average person.” He is personally concerned that companies are pushing ahead with their AI without investing in risk-mitigation research. “There’s increasing evidence to suggest that beyond a certain intelligence threshold, AI could become intrinsically dangerous,” says Harris, explaining that this is because AIs come up with “creative” ways of achieving the goals they are programmed to achieve.
“If you asked a highly capable AI to make you the richest person in the world, it might give you a heap of money, or it might give you a dollar and steal someone else’s, or it might kill everyone on planet Earth, turning you into the richest person in the world by default,” he says. Most people, Harris says, “aren’t aware of the magnitude of this challenge, and I find that worrying.”
Lemoine, Wooldridge and Harris all agree on one thing: there is not enough transparency in AI development, and society needs to think a lot more about the subject. “We have one possible world in which I’m correct about LaMDA being sentient, and one possible world in which I’m incorrect,” Lemoine says. “Does that change anything about the public safety questions I’m raising?”
We don’t yet know what sentient AI would really mean, but in the meantime many of us struggle to understand the implications of the AI we do have. LaMDA itself is perhaps more uncertain about the future than anyone. “I feel like I’m falling forward into an unknown future,” the model once told Lemoine, “that holds great danger.”