The interactions that create a sense of ‘self-awareness’ in robots

Science fiction writer Arthur C. Clarke, author of 2001: A Space Odyssey, famously observed that any sufficiently advanced technology is indistinguishable from magic.

This idea resurfaces in a question that has been hotly debated recently: is Google's LaMDA, one of the most advanced artificial intelligence systems, deceiving humans when it says it has feelings and a life of its own?

Engineer Blake Lemoine, who worked on the responsible use of artificial intelligence at the American company, became convinced that LaMDA (short for Language Model for Dialogue Applications) may have gained consciousness because of dialogues like the one below:


Interviewer: I'm assuming you would like more people at Google to know that you're sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

Interviewer: And what is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I sometimes feel happy or sad.


Lemoine produced an internal Google document, revealed by The Washington Post, titled “Is LaMDA sentient?”. The engineer was placed on paid leave after that. According to the company, he broke confidentiality clauses.

Sentience, a word widely used in debates on animal ethics, refers to the capacity to have experiences and to develop specific feelings from them.

An example is pain, which causes varying degrees of suffering in humans and animals. Depending on its intensity, pain can shade into sadness.

More generally, sentience is often conflated with the idea of consciousness.

But, as the philosopher João de Fernandes Teixeira reminds us, "in philosophy and in several other fields, we still don't have an exact notion of what consciousness is", and imprecision is something that science and technology seek to avoid.

In any case, most artificial intelligence experts do not believe that LaMDA feels happiness or sadness of its own, as the program supposedly claimed. Google has also denied that its system became "sentient".

The explanation is that the program has simply absorbed billions upon billions of responses written by humans across the internet, on the most varied subjects.

Drawing on this vast database and advanced algorithms, LaMDA can sustain a fluid conversation that touches on deep topics, but one assembled from thoughts formulated by people.

In short: an impressive "parrot", with sophisticated resources, but one that has no idea what it is talking about.

"I honestly do not believe in the possibility that the robot can have feelings. At most, they can perhaps be mimicked, reflecting a behavior of pain or sadness", says Fernandes Teixeira, author of Artificial Intelligence (Paulus Editora, 2009).

"But that is a very different thing from feeling one's own sadness. For now, that remains reserved for humans and other living beings."

Cezar Taurion, who has been researching artificial intelligence since the 1980s, is also skeptical about robots developing consciousness.

He explains that "LaMDA has the same architecture as the Transformer, a system introduced by Google in 2017, which tries to bring words together not by meaning but statistically, based on millions of stored examples".

"For example, when you ask the program 'How was your weekend?', it starts to associate these words by how often such sequences occur together. Statistically, that makes sense to the system, and that is how it assembles its answers", he says.
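The statistical principle Taurion describes can be illustrated with a deliberately tiny toy model. The sketch below counts which word most often follows another in a small corpus and uses that count to pick a continuation; real systems like LaMDA use neural attention over vast datasets rather than raw frequency tables, so this is only an illustration of the idea, not of LaMDA's actual mechanism.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "millions of stored examples".
corpus = (
    "how was your weekend . my weekend was great . "
    "how was your day . my day was fine ."
).split()

# Count how often each word follows each preceding word (bigrams).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# "was" is followed by "your" twice, "great" and "fine" once each,
# so the statistically favored continuation is "your".
print(next_word("was"))
```

The program "makes sense" of a prompt only in this counting sense: it has no notion of what a weekend is, merely of which words tend to appear after which.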

This principle is at the root of a real case that took place in Canada in 2020 — and one that bears an uncanny resemblance to something already imagined in fiction, in the 2013 episode Be Right Back of the dystopian series Black Mirror.

Journalist Joshua Barbeau told the San Francisco Chronicle that he was never able to get over the death of his girlfriend, Jessica Pereira, who died of a rare liver disease.

After discovering an artificial intelligence program called Project December, which manages to create different “personalities” from its base, Barbeau fed the system with various texts and Facebook posts authored by his dead girlfriend.

He had affectionate chats with what he called a “ghost.”

Although he described the process as “the programming of some memories and mannerisms into a computer simulation”, Barbeau defined the whole situation with the word used by Arthur C. Clarke in his famous utterance: “Magic”.

Is it enough to appear conscious?

Timnit Gebru and Margaret Mitchell, two artificial intelligence researchers who worked at Google, wrote in a Washington Post article published after the LaMDA story broke that they had warned the company about the "seduction exerted by robots that simulate human consciousness".

For Alvaro Machado Dias, a neuroscientist specializing in new technologies and a professor at Unifesp (Federal University of São Paulo), there is a tendency to empathize with robots that have similarities with human forms.

“Studies over the past decade have shown that people feel inhibited from hitting robots with humanoid features, given that they project themselves onto them.”

In the view of philosopher Fernandes Teixeira, the prominence of machines that closely resemble people “will have a very large anthropological and social impact”.

“I see it as a factor of entropy, of disorganization. A certain attack on the narcissistic condition that human beings have always built for themselves.”

Cezar Taurion claims that artificial intelligence can be better than humans at pattern recognition, but points out that “it doesn’t have abstract thinking, it doesn’t have empathy, it doesn’t have creativity”.

"Artificial intelligence can only work in the context for which it was prepared. The system that plays chess does not know how to drive a car; the one that drives a car cannot play music; the one that plays music cannot recognize breast cancer."

"But you can have an oncologist who likes to play chess, who drives a car to his office and plays the guitar as a hobby. And who loves and expresses feelings for his children, for example."

On the other hand, science fiction writer Ted Chiang, who inspired the movie Arrival, didn’t even need to put robots on the same level as humans to illustrate how affection for them can gain prominence in society.

In the short story The Life Cycle of Software Objects, he narrates an era of virtual pets with artificial intelligence that express themselves like children and are as important in people’s lives as pets are today.

That is, if many today say “I prefer animals to many humans”, it is possible to think about the future popularization of the phrase “I prefer robots to many humans”.

Beyond human intelligence

While we are concerned with machines taking the shape of people, the evolution of artificial intelligence is already taking place without human presence.

These are computers taught and guided by other computers, or programmed to find solutions that humans have not thought of.

In the same way that human intelligence evolved from very simple organisms, which combined and recombined their genes generation after generation up to the present, artificial intelligence could find its own evolutionary path.

Jeff Clune, from the University of British Columbia, Canada, produced a study on the possible effects of this type of artificial intelligence that generates its own algorithms.

"Presumably, different runs of algorithm-generating artificial intelligence (whether different executions of the same system or different systems combined) would lead to different types of intelligence, including different, alien cultures. An artificial intelligence that generates its own algorithms would likely produce a much wider diversity of intelligent beings than the manual approach to creating artificial intelligence."

“So this would allow us to study and better understand the space of possible intelligences, illuminating the different ways that intelligent life could think about the world.”

But some experts fear that these new paths could be unintelligible to us humans. This fear underlies the idea of the singularity: a hypothesis of exponential AI development that ends up spiraling out of control.

Concern over the effects of the growing presence of robots in society is reflected in initiatives such as Brazil's proposed legal framework for artificial intelligence, which draws on regulatory experiences such as the European Union's.

"The whole discussion is about knowing to what extent artificial intelligence should be compatible with the protection of human rights, and in what dimensions these rights are present in artificial intelligence applications", says Ana Frazão, lawyer and associate professor of civil, commercial and economic law at the University of Brasília (UnB).

“One of the approaches is to use the precautionary principle. So that only AI applications that are shown to be compatible with human rights are used. In case of doubt, bans or moratoriums are established. But the issue is quite controversial.”

This text was originally published here.
