By Will Oremus

US President Donald Trump signed an executive order on Wednesday aimed at directing federal contracts only to companies whose artificial intelligence (AI) models are deemed ideologically neutral.

The order, part of the Trump administration’s “AI Action Plan,” targets what it calls “woke AI”: chatbots, image generators and other tools regarded as carrying liberal bias. In particular, it bars federal agencies from procuring AI models that promote concepts such as diversity, equity and inclusion (DEI).

“From now on,” Trump said, “the US government will work only with AI that serves truth, justice and impartiality.”

But what does “woke AI” actually mean, and how can technology companies avoid building it?

According to AI experts, the answers are far from clear. Legal scholars, meanwhile, worry that Trump’s attempt to dictate what a chatbot may or may not say could raise First Amendment issues.

“Phrases such as ‘no ideological bias’ sound nice,” said Rumman Chowdhury, executive director of Humane Intelligence and former head of ethics at Twitter. “But in practice it is impossible to achieve.”

Concerns that popular AI tools carry a liberal bias began to gain traction in conservative circles in 2023, when examples circulated on social media of OpenAI’s ChatGPT appearing to endorse policies such as affirmative action.

The issue drew wider attention the following year, when Google’s Gemini image generator was accused of producing ahistorically diverse images, for example depicting Black, Asian and Indigenous American people as Vikings, Nazis or the Founding Fathers of the United States.

Google apologized and adjusted the tool, explaining that the results were the unintended byproduct of an effort to make it more universally accessible.

Fabio Motoki, a lecturer at the University of East Anglia, has shown that AI tools such as OpenAI’s GPT-4 do in some cases exhibit liberal leanings. In a study published last month, he and his colleagues found that GPT-4 answered political questionnaires with views closely aligned with those of the average Democrat.

But, as he points out, gauging a chatbot’s political orientation “is not simple.” On other issues, such as US military supremacy, the same tools aligned more closely with Republican views.

AI models exhibit all kinds of biases, experts say. It is simply part of how they work.

Chatbots and image generators are trained on vast amounts of data scraped from across the internet, which they use to predict the most likely or appropriate response to a user’s query. As a result, they might answer one question by echoing misogynist stereotypes drawn from an anonymous message board, and another by reciting DEI language drawn from corporate hiring policies.

Training an AI model to avoid such biases is notoriously difficult, Motoki said. One could try by restricting its training data, paying people to rate its answers for neutrality, or writing explicit instructions into its code. But all three approaches have limitations and are known to backfire, making the model’s answers less useful or accurate.

“It’s very hard to steer these models to do what we want,” he said.

Google’s Gemini misstep is one example. Another came this year, when Elon Musk’s xAI programmed its chatbot Grok to prioritize “truth-seeking” over political correctness; the bot went on to repeat racist and antisemitic conspiracy theories and, at one point, called itself “MechaHitler.”

Political neutrality in an AI model is simply “nonexistent,” Chowdhury said. “It’s not a real thing.”

For example, she explained, if you ask a chatbot about gun ownership, it might respond by citing both Republican and Democratic viewpoints, or by trying to strike a middle ground. The average user in Texas might perceive that answer as showing liberal bias, while a New York resident might find it too conservative. And to users in Malaysia or France, where strict gun laws are taken for granted, the same answer could seem extreme.

How the Trump administration will determine which AI tools count as neutral is a key question, according to Samir Jain, vice president at the nonprofit Center for Democracy and Technology.

The executive order itself, he noted, is not neutral: it singles out certain views that lean left but none that lean right. Specifically, the order states that concepts such as “critical race theory,” “gender identity,” “unconscious bias” and “systemic racism” should not be incorporated into AI models.

“I suspect that anything providing information about transgender health care will be labeled ‘woke,’” Jain said. “But that in itself is a political position.”

Imposing such a viewpoint on AI tools built by private companies could violate the First Amendment of the US Constitution, he said, depending on how the order is implemented.

“The government generally cannot compel particular kinds of speech or censor particular viewpoints,” Jain said. However, the executive branch has some latitude to set standards for the products it procures, provided the restrictions are reasonably related to the purposes for which those products are used.

Some technology analysts and advocates, however, found Trump’s executive order less heavy-handed than they had feared.

Neil Chilson, head of AI policy at the conservative nonprofit Abundance Institute, said he had worried that the “woke AI” directive might impose overly restrictive requirements, even though he broadly supports Trump’s plan. But after reading the order, he said on Thursday that his concerns had been “excessive” and that he believes compliance will be “easy to implement.”

Mackenzie Arnold, US policy director at the Institute for Law and Artificial Intelligence, an independent think tank, welcomed the fact that the order acknowledges the considerable technical difficulty of building neutral AI tools and provides a path to compliance through transparency.

“Although I don’t like the wording of the ‘Preventing Woke AI in the Federal Government’ order, the text itself is relatively reasonable,” he said, adding that the crucial issue is how it will be implemented in practice.

“If implementation focuses on sensible transparency requirements, things will go well,” he noted. “But if it veers into ideological pressure, then we will be talking about a serious misstep and a bad precedent.”