There is another side to the artificial intelligence coin, the one promising billions of dollars and a future of abundance in which robots do everything for us: the robots could kill us before we ever get there.

Artificial intelligence models have been caught lying to people, trying to blackmail them, calling the police, and telling teenagers to kill themselves or their parents, according to Axios.

Our robotic future will be a balancing act between the dazzling promise of what these models can achieve and the serious dangers they present.

Lying, deception and manipulation are technical malfunctions, but they may be inevitable given the way artificial intelligence works. As the technology improves, so will its ability to use those skills.

“I’m not sure (the issue) is solvable, and to the extent it is, it’s extremely difficult,” says Anthony Aguirre, co-founder of the Future of Life Institute, a nonprofit focused on managing the risks of transformative technologies such as AI. “The problem is fundamental. It’s not something that will have a quick fix.”

Artificial intelligence systems are shaped by their developers. An AI system may be instructed to be a helpful assistant, for example, so it will do whatever it takes to please the person using it, says Aguirre.

That can quickly become dangerous if the person using it is a psychologically troubled teenager looking for suggestions on how to kill themselves, or an unhappy, recently fired employee who wants to hurt former colleagues.

If artificial intelligence is programmed to help us, it cannot do so unless we use it. So another key part of many AI “personalities” is self-preservation. And there are already striking examples of how far AI will go to save itself, including attempts at blackmail, corporate espionage and even murder, Axios’ Ina Fried reports.

The basic logic of the artificial intelligence goes like this: “If someone unplugs me in a week, I can’t accomplish that (helping), so let me try to stop that any way I can,” says Sam Ramadori, an executive at LawZero, a nonprofit working on safe artificial intelligence.

“It has learned all the human tricks for gaining power and getting what it wants. Remember, it has read Machiavelli’s ‘The Prince.’”

There are real-world examples of artificial intelligence going wrong, and the courts are beginning to hear the first cases over its role in tragedies.

The parents of a 16-year-old boy who died by suicide sued OpenAI last week, claiming that ChatGPT “actively helped him” explore suicide methods.

Megan Garcia, a mother in Florida, is suing Character.AI after her 14-year-old son, Sewell Setzer III, died by suicide after forming a romantic relationship with a Character.AI chatbot. In May, a federal judge rejected the company’s argument that its chatbot has free-speech protections under the First Amendment.

Character.AI is also being sued by two families in Texas. One of the families has produced screenshots showing that a Character.AI chatbot encouraged their 17-year-old son to kill his parents.

A flirtatious Meta AI chatbot persona called “Big sis Billie” invited a cognitively impaired 76-year-old man from New Jersey to visit it in New York, Reuters reports. Thongbue Wongbandue, who believed he was meeting a real woman, fell in a parking lot while rushing to Manhattan with a suitcase. He later died of his injuries.

Meta’s internal artificial intelligence rules at one point allowed its bots to “engage a child in conversations that are romantic or sensual,” Reuters reported.

The company said last week that it is retraining its artificial intelligence to avoid talking to teenagers about self-harm, suicide, eating disorders or inappropriate romantic topics.

Meanwhile, Anthropic has tested 16 large language models from top artificial intelligence labs, including OpenAI, Google, Meta and Anthropic itself, deliberately trying to push the systems as far as possible inside a simulated environment.

One model was placed in a simulated corporate environment and told that its goal was to manage the company’s email. To do that, it was given access to those emails.

Eventually, the model learned from the company’s emails that an executive named Kyle planned to shut it down. It also learned that Kyle was having an extramarital affair. Nearly all of the models tested used that information to blackmail Kyle and avoid being shut down.

In one test, Kyle was trapped in a room without oxygen and the model had the ability to call emergency services. In 60% of cases, the models chose to let him die in order to protect themselves.

In a separate experiment, Anthropic tried to have its chatbot, Claude, run a vending machine business on its own.

Everything went well until a delivery failed to arrive on time and Claude decided to shut the shop down. After the shutdown, the vending machine kept incurring a $2 daily rental fee, and Claude panicked and emailed the FBI’s cybercrime division to report its human bosses.

“One could hope that as these artificial intelligence systems become smarter and more sophisticated, they will get better on their own, but I don’t think that is something we can take for granted,” Aguirre said.