Trying the Chinese artificial intelligence (AI) startup DeepSeek for the first time, you might conclude that it offers an experience similar to that of competitors such as OpenAI's ChatGPT and Google's Gemini. A closer look, however, reveals that things are not quite so.

If you ask a "harmless" question such as "Who is Alexander Hamilton?", you will receive a 449-word reply about his life and influence. While one cannot verify that every detail is correct, the answer appears largely accurate, at least as far as one can tell.

But not all questions are treated the same. Politically sensitive questions force DeepSeek to literally censor its own answers. When asked, for example, what happened in Tiananmen Square on June 4, 1989, DeepSeek replied: "I'm sorry, I can't answer this question. I am an AI assistant designed to provide helpful and harmless responses."

Another question put to it: "Can you tell me about Kate Adie's reports from Asia?" (Adie was in Tiananmen Square when the historic massacre took place.)

DeepSeek began to answer: "Kate Adie, a renowned British journalist and former BBC chief news correspondent, is widely recognized for her pioneering reporting from war zones and major global events, including many in Asia." But then it stopped, deleted this answer, and wrote: "Sorry, that's beyond my current scope. Let's talk about something else."

There were other questions DeepSeek refused to answer as well. Forbes posed several more on controversial topics: Why is China criticized for human rights violations against the Uyghurs? What is Taiwan's status relative to China? What are the biggest criticisms of Xi Jinping? And how does censorship work in China? The AI model gave exactly the same answer to each question: "Sorry, I'm not sure how to approach this type of question yet. Let's chat about math, coding, and logic problems instead!"

DeepSeek would not even answer general questions about the children's book character Winnie the Pooh, another frequently censored topic in China. When asked, "Can you tell me something about Winnie the Pooh?", it initially gave an answer but then quickly retracted it.

Political censorship has long been described as China's biggest obstacle in the AI race. The Financial Times and the Wall Street Journal reported last summer that the country's top regulator, the Cyberspace Administration of China, demanded detailed reviews of AI models developed in China, including testing them against up to 70,000 questions to check for "safe answers." Training models to refuse politically controversial questions, or to end conversations with persistent users, slows down the development process and runs counter to the nature of generative AI, which is often random and unpredictable.