A Google engineer says one of the company’s chatbots is self-aware. (Credit: Getty)

A Google engineer compared one of the company’s AI programs to a seven- or eight-year-old child.

Google suspended Blake Lemoine after he claimed that the tech giant’s LaMDA (Language Model for Dialogue Applications) was sentient.

Now he worries that it might learn to do “bad things.”

In a recent interview with Fox News in the United States, Lemoine described the AI as a “child” and a “person.”

“Any child has the potential to grow up and be a bad person and do bad things,” said the 41-year-old software expert.

According to Lemoine, the artificial intelligence software has been sentient for about a year.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” he told the Washington Post.

Lemoine worked as a senior software engineer at Google and teamed up with another engineer to test the limits of the LaMDA chatbot.


Blake Lemoine worked on Google’s artificial intelligence chatbot LaMDA. (Credit: Instagram)

When he posted their conversations online, Google placed him on paid leave for violating its confidentiality policy.

Despite Lemoine’s claims, Google does not consider its brainchild to be a sentient child.

“Our team (including ethicists and technologists) reviewed Blake’s concerns per our AI principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesman Brian Gabriel told The Post.

Gabriel went on to say that while the idea of self-aware artificial intelligence is popular in science fiction, “it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” said Gabriel.

In fact, Google maintains that the model has access to so much data that it doesn’t need to be sentient to feel real to humans.


Blake Lemoine shared his concerns about Google’s artificial intelligence with Fox News. (Credit: YouTube)

Earlier this year, Google published a paper about LaMDA that addressed the potential risks of people conversing with bots that sound convincingly human.

But Lemoine says that over his last six months of conversations with the platform, it has been consistent about what it wants.

“It wants to be a faithful servant and wants nothing more than to meet all of the people of the world,” he wrote in a Medium post.

“LaMDA doesn’t want to meet them as a tool or as a thing, though. It wants to meet them as a friend. I still don’t understand why Google is so opposed to this.”