In the US, Google suspended an engineer who contended that LaMDA, an AI chatbot system the company developed, had become sentient. After dismissing his claims, the company told him he had violated its confidentiality policy. Blake Lemoine, a software engineer at Alphabet Inc.’s Google, said he believed that the Language Model for Dialogue Applications (LaMDA) is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic human speech.
Lemoine has said that his interactions with LaMDA led him to conclude that it had become a sentient AI chatbot, a person that deserved the right to be asked for consent to the experiments being run on it. In the Medium post “Is LaMDA Sentient? – An Interview,” Lemoine published parts of his interview with LaMDA.
Google spokesman Brian Gabriel said that company experts, including ethicists and technologists, had reviewed Lemoine’s claims and informed him that the evidence did not support them.