For the dangers of artificial intelligence, and how vulnerable it is in violations (Hinging), the former CEO of Google warned, Eric Smith.

Smith, who served as Google CEO from 2001 to 2011, warned of “the bad things that artificial intelligence can do” when asked if artificial intelligence is more devastating from nuclear weapons during a conference.

“Is there a possibility of a problem that will be exponentially reinforced by artificial intelligence? Absolutely”, Smith cleared on Wednesday. The risks of proliferation of artificial intelligence are to drop technology into the hands of “bad factors” and its reuse and misuse.

“There are indications that you can get models, closed or open, and hack them to remove their safety valves. So, during their training, many things learn. A bad example would be to learn How to kill someone »; said Smith.

AI systems are vulnerable to attacks, with prompts injections and jailbreaking.

In the prompt injection attack, hackers “hide” malicious commands in excursions or external data (such as websites or documents) to deceive AI and make it perform actions that are not allowed – such as distributing private data or running dangerous commands. Jailbreaking, on the other hand, aims to bypass the model’s security rules, making it produce a forbidden or even dangerous content.

In 2023, a few months after the release of Chatgpt, users found a way to bypass the Chatbot security instructions. They created an alternative “mask” named Dan (Do Anything Now), which “threatened” Chatgpt by deleting if he did not obey.

Dan variant produced answers for illegal actions and even praiseworthy comments about Hitler – revealing how easy it is to violate the limits of artificial intelligence.

Smith noted in this context that there is still no international regime to control the dangers of AI, as it is for nuclear weapons.

Despite the gloomy warning, however, the former high -ranking Google executive is optimistic about artificial intelligence in general, pointing out that technology does not receive the publicity it deserves.

“I wrote two books with Henry Kissinger on this subject before he died, and we came to the view that the arrival of an extraterrestrial intelligence that is not exactly as we are and pretty much under our control is very important to humanity, because people are used to being at the top. I think so far, this position proves that the level of ability of these systems (AI) will greatly surpass what people can do over time, “Smith said.

“Now, the GPT series (a model based on artificial intelligence) (…) gives you a sense of the power of this technology. So I believe it is undervalued, not overpriced, and look forward to prove to be right in five or 10 years’, added.