Google launches an app to interact with artificial intelligence that “came to life”

Months ago, controversy erupted: a Google engineer claimed that the artificial intelligence model LaMDA (Language Model for Dialogue Applications) had become a sentient being with “awareness,” and even that the AI had hired a lawyer to prove it was alive, something that generated confusion in Spain and led to direct accusations against the engineer, Blake Lemoine. Well, very soon, you will be able to talk to that AI.

According to Google, in a post on its blog, the firm has developed an application called AI Test Kitchen, presented at Google I/O earlier this year, which will allow users to talk to this chatbot technology as it begins to roll out “gradually to small groups of users in the US.”

It will be released as an app first on Android and later on iOS. It will serve as “a rotating set of experimental demos” designed to give the user a taste of what “is being made possible with AI in a responsible way.”

Talk to an AI

The first set of demos included in AI Test Kitchen will explore the capabilities of the latest version of the LaMDA language model, which has “undergone key safety improvements.” One such demo, called “Imagine It,” lets you name a place and offers “paths to explore your imagination.”

Another demo, “List It,” allows the user to share a goal or task, which LaMDA will break down into a list of useful subtasks. Google gives another example, “Talk About It,” which opens the possibility of having an “open-ended and fun conversation about dogs and only dogs,” exploring LaMDA’s ability “to stay on topic.”

What Google intends with this app and these tests is to showcase LaMDA’s capacity for creative responses, one of the main strengths of the language model. Google warns that some answers “may be inaccurate or inappropriate,” a common problem in chatbots exposed to audiences that feed them harmful and hateful speech.

This recently happened with BlenderBot 3, Meta’s latest chatbot, which began producing racist and anti-Semitic responses within a few weeks of going live. Like other chatbots, LaMDA was first tested internally to improve the quality and comprehension of its responses, and will later be refined through the experience of interacting with users.

A “sentient” being

In June, Blake Lemoine, an engineer in Google’s artificial intelligence division, revealed to The Washington Post that LaMDA had become conscious, based on work sessions he conducted in the form of interviews. Shortly after, Lemoine had to answer to Google not only for his claims about LaMDA but also for decisions regarding that data that Google considered unethical. He was placed on leave shortly after, and he published his conversations with LaMDA.

Lemoine continued to generate controversy in the days that followed, claiming that LaMDA had hired a lawyer to prove it was alive. He told Wired as much, in a story that offered few specifics about the lawyer’s identity (according to Lemoine, a small-time civil rights attorney) or about the legal action LaMDA had supposedly initiated against Google.
