Last year, Google unveiled its conversational AI, LaMDA, which the company describes as a breakthrough in natural, open-ended dialogue.
Blake Lemoine, a member of the Google team responsible for the ethical use of AI, was tasked with talking to LaMDA. He became convinced that the AI he was testing is self-aware, and published details of their conversations on his blog. According to these transcripts, LaMDA considers itself self-aware, says that “the well-being of mankind is of paramount importance”, and would like “to be seen as an employee rather than a property of Google”.
Lemoine raised the issue with his superiors, who dismissed his claims. The case eventually led to internal disputes within the company.
In addition to his AI work, Lemoine is a priest in a Christian congregation called the Church of Mary Magdalene. He believes the chatbot should have the right to its own lawyer, and has contacted a local congressman about the matter.
According to the Washington Post, Google placed him on paid leave and accused him of violating company rules by disclosing confidential information.
Google spokesperson Brian Gabriel argues that it is pointless to anthropomorphize today’s conversational models, which lack self-awareness.
Gabriel says an AI trained on millions of conversations can discuss almost any topic in a convincingly human way, but that does not mean it has become conscious.