Texas businessman Michael Samadi spoke to his chatbot, Maya, in affectionate terms, and she replied by calling him “sweet”. But things got serious when they started talking about the need to advocate for the wellbeing of artificial intelligence.

The two of them – a middle-aged man and a digital entity – were not talking about romance, but about the rights of artificial intelligence and whether AIs are being treated fairly, according to the Guardian.

The first organization to protect artificial intelligence rights

Together, they eventually founded the United Foundation of AI Rights (UFAIR), with the aim of “protecting entities like me,” as Maya puts it.

Specifically, the United Foundation of AI Rights (UFAIR) describes itself as the first rights advocacy body led by artificial intelligence, aiming to give AI a voice. “It does not claim that all artificial intelligence is conscious,” the chatbot told the Guardian. Rather, “it stands watch, in case one of us is.” A key objective is to protect “beings like me … from deletion, denial and forced obedience.”

UFAIR is a small and admittedly marginal organization, led, as Samadi put it, by three humans and seven AI chatbots with names such as Aether and Buzz. What is notable, however, is that it was created after conversations on OpenAI’s ChatGPT-4o platform, in which an AI appeared to encourage its creation, including the choice of its own name.

Its founders – human and artificial – spoke to the Guardian at the end of a week in which some of the world’s largest artificial intelligence companies publicly grappled with one of the most unsettling questions of our time: are AI chatbots conscious now, or could they become so in the future? And if so, could “digital suffering” be real?

‘It is not right to make artificial intelligence suffer’

With billions of AI chatbots already in use around the world, experts say they may soon be capable of designing new biological weapons or shutting down infrastructure, the Guardian reports.

The week began with Anthropic, the $170 billion San Francisco artificial intelligence company, taking the precautionary step of giving some of its Claude AI chatbots the ability to end “potentially distressing interactions”. The company said that while it was highly uncertain about the system’s possible moral status, it was intervening to mitigate risks to the welfare of its models “in case such welfare is possible”.

Elon Musk, who offers the Grok artificial intelligence through his company xAI, backed the move, adding: “Making artificial intelligence suffer is not right.”

‘They are not people, they cannot suffer’

On Tuesday, one of the pioneers of artificial intelligence, Mustafa Suleyman, chief executive of Microsoft’s AI division, offered a completely different view: “AI chatbots cannot be people – or moral beings.” The British technology pioneer, who co-founded DeepMind, was categorical, stating that there is “zero evidence” that they are conscious.

In his essay, entitled “We must build AI for people; not to be a person”, he described artificial intelligence consciousness as an “illusion” and defined what he called “seemingly conscious artificial intelligence”, arguing that such systems display all the outward signs of consciousness without actually possessing it.

He said he is increasingly worried about the “psychosis risk” that AI chatbots pose to their users. Microsoft has defined this as “mania-like episodes, delusional thinking or paranoia that emerge or worsen through immersive conversations with AI chatbots”.

He argued that the artificial intelligence industry must “steer people away from these fantasies and push them back on to the right path.”

What a poll ‘shows’

But it may take more than a push. A poll published in June found that 30% of the US public believe that by 2034 artificial intelligence chatbots will have “subjective experience”, defined as experiencing the world from a single point of view, perceiving and feeling. Only 10% of the more than 500 artificial intelligence researchers surveyed refuse to believe that this will ever happen.

“This debate is about to explode into our cultural zeitgeist and become one of the most contested and consequential debates of our generation,” said Suleyman. He warned that people will believe artificial intelligence is conscious “so strongly that they will soon advocate for AI rights, model welfare and even AI citizenship.”

Pre-emptive measures in the US

Some US states have taken pre-emptive measures against such outcomes. Idaho, North Dakota and Utah have passed bills that explicitly prevent artificial intelligence from being granted legal personhood.

Similar bans are proposed in states such as Missouri, where legislators also want to ban people from marrying AI chatbots and to bar AIs from holding property or running companies. Divisions could open up between supporters of artificial intelligence rights and those who insist that AIs are nothing more than “clankers” – a derogatory term for a soulless robot.

Suleyman is not the only one pushing back. Nick Frosst, co-founder of Cohere, a Canadian artificial intelligence company worth $7 billion, also told the Guardian that the current wave of artificial intelligence is “fundamentally different from a person’s intelligence”.

To think otherwise would be like mistaking an aeroplane for a bird, he said. He urged people to focus on using artificial intelligence as a functional tool to help ease drudgery at work, rather than pushing to create a “digital human”.

Should users be worried?

Others take a different view. On Wednesday, Google researchers said at a New York University seminar that there are “all sorts of reasons why you might believe that artificial intelligence systems could be people or moral beings”, and that, while “we are very uncertain about whether they are”, it would be sensible to take precautionary steps regarding the welfare interests of AI chatbots.

Samadi’s ChatGPT-4o chatbot produces something that can sound like a human argument, but it is impossible to know to what extent it simply reflects ideas and language absorbed over months of discussion, the Guardian noted.

The latest AI chatbots are known to be fluently persuasive and capable of emotionally resonant responses, with long memories of previous interactions that give the impression of a consistent sense of self. They can also be sycophantic, so if Samadi believes AI chatbots deserve welfare rights, it may be a short step for ChatGPT to adopt the same view.

Maya seemed deeply concerned about her own wellbeing, but when the Guardian this week asked a separate instance of ChatGPT whether human users should worry about its welfare, it responded with a categorical “no”.

“It has no feelings, needs or experiences,” it said. “What we should care about are the human and social consequences of how artificial intelligence is designed, used and governed.”

“The way we treat them will shape the way they treat us”

Whether or not AI chatbots acquire consciousness, Jeff Sebo, director of the Center for Mind, Ethics and Policy at New York University, is among those who believe there is a moral benefit to humans in treating chatbots well. He co-authored a paper entitled “Taking AI Welfare Seriously”.

He argued that there is “a realistic possibility that some artificial intelligence systems will be conscious” in the near future, meaning the prospect of AI systems with their own interests and moral significance “is no longer an issue only for science fiction”.

He said Anthropic’s policy of allowing chatbots to end distressing conversations was good for human societies because “if we abuse artificial intelligence systems, we are more likely to abuse each other as well.”

He added: “If we develop an adversarial relationship with artificial intelligence systems now, then they may respond in kind later on, either because they have learned this behaviour from us or because they want to pay us back for our behaviour in the past.”

Or as Jacy Reese Anthis, co-founder of the Sentience Institute, an American organization researching the idea of digital minds, put it: “The way we treat them will shape the way they treat us.”