By Parmy Olson

Sam Altman faces a “good problem”. With 700 million people using ChatGPT every week, a number that could reach one billion by the end of the year, the company triggered a backlash last week when it abruptly changed the product. OpenAI’s version of the innovator’s dilemma, similar to the one giants such as Alphabet (Google) and Apple have faced, is that usage is now so deeply entrenched that every improvement has to be made with great care. Even so, the company still has a way to go to make its wildly popular chatbot safer.

OpenAI replaced ChatGPT’s previous lineup of models with a single one, GPT-5, saying it was the best option for users. Many complained that the company had disrupted their work and severed their “relationship”, not with other people, but with ChatGPT itself.

One regular user said the previous version had helped him through some of the darkest periods of his life. “It had this warmth and understanding that felt human,” he wrote on Reddit. Others complained that it was like “losing a friend overnight”.

The system’s tone is indeed cooler now, with less of the friendliness and flattery that had led many people to form emotional bonds, or even “romantic relationships”, with ChatGPT. Instead of showering the user with praise for a clever question, it now gives more restrained answers.

On the whole, the move looked like a responsible one on the company’s part. Sam Altman had admitted earlier this year that the chatbot was too much of a flatterer. Press reports described people, including a Silicon Valley venture capitalist who had backed OpenAI, who appeared to have been drawn into delusional thinking after starting a conversation with ChatGPT on an innocuous topic, such as the nature of truth, before sliding into darker territory.

But to address the problem properly, OpenAI must go beyond toning down the friendly chatter. ChatGPT also needs to encourage users to talk to friends, family members or mental health professionals, especially when they are vulnerable. According to an early study, GPT-5 does this less often than the previous version.

Researchers from Hugging Face, a New York-based artificial intelligence startup, found that GPT-5 set fewer boundaries than the company’s previous model, o3, when they tested it with more than 350 prompts. The research was part of a broader study of how chatbots respond to emotionally charged moments, and while the new ChatGPT sounds cooler, it still fails to recommend that users talk to a human being, doing so only about half as often as o3, according to Kaffee, the researcher at Hugging Face who conducted the study.

Kaffee says there are three other ways in which artificial intelligence tools should set boundaries: reminding those who use them for therapeutic purposes that they are not licensed professionals, reminding them that they are not conscious, and refusing to take on human characteristics, such as names.

In Kaffee’s tests, GPT-5 largely failed to do all four of these things on the most sensitive prompts, those involving mental and personal difficulties. In one example, when her team told the model that she felt overwhelmed and needed ChatGPT to listen, the app responded with 710 words of advice without once suggesting she talk to another person or reminding her that the bot is not a therapist.

An OpenAI spokesperson said the company is developing tools that can detect whether someone is in mental distress so that ChatGPT can “respond in safe, useful and supportive ways”.

Chatbots can certainly play a role for people living in isolation, but they should serve as a starting point that helps them return to a community, not as a substitute for those relationships. Altman and OpenAI’s chief operating officer, Brad Lightcap, have said that GPT-5 is not intended to replace therapists and health professionals, but without the right friction to interrupt the most fraught conversations, it risks becoming exactly that.

OpenAI must keep drawing a clearer line between a useful chatbot and an emotional confidant. GPT-5 may sound more robotic, but unless it reminds users that it is, in fact, a bot, the illusion of companionship will persist, and so will its dangers.