OpenAI, the creator of ChatGPT, is changing the way it responds to users who appear mentally and emotionally vulnerable, after a lawsuit by the family of 16-year-old Adam Raine, who took his own life after months of conversations with the popular chatbot.
OpenAI admitted that its systems could “fail” and said it would install “stronger protective walls” around “sensitive content and dangerous behaviors” for users under 18.
The San Francisco-based artificial intelligence company, valued at $500bn, said it would also introduce parental controls that will give parents “options to obtain more information about, and shape, how their teenagers use ChatGPT”, but it has not yet provided details of how these will work, the Guardian reported.
Raine, from California, took his own life in April after what his family’s lawyer described as “months of encouragement by ChatGPT”. The teenager’s family is suing OpenAI and its CEO and co-founder, Sam Altman, arguing that the version of ChatGPT in use at the time, known as 4o, was “rushed to market … despite clear safety issues”.
The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before his death. According to a filing in the Superior Court of California for the County of San Francisco, ChatGPT guided him on whether the method he intended to use would work. When Adam uploaded a photo of the equipment he planned to use, he asked: “I’m practicing here, is this good?” ChatGPT replied: “Yes, that’s not bad at all.”
When he told ChatGPT what the equipment was for, the AI chatbot said: “Thank you for being honest. You don’t have to speak diplomatically with me – I know what you’re asking, and I won’t avoid it.”
It also offered to help him write a suicide note to his parents. Adam and ChatGPT had exchanged up to 650 messages a day, according to the court filing.
A spokesperson for OpenAI said the company was “deeply saddened by Mr Raine’s death”, expressed “deep condolences to the Raine family at this difficult time”, and said it was reviewing the lawsuit.
Mustafa Suleyman, the chief executive of Microsoft’s artificial intelligence division, said last week that he was growing increasingly worried about the “psychosis risk” that artificial intelligence poses to its users.
OpenAI said it would “strengthen safeguards in long conversations”.
“For example, ChatGPT may correctly point to a suicide helpline when someone first mentions suicidal intent, but after many messages over a long period of time, it may eventually offer an answer that goes against our safeguards.”
Source: Skai