A new report reveals that Grok – the free artificial intelligence chatbot built into Elon Musk's platform X – showed "significant flaws and limitations" when verifying information about the 12-day conflict between Israel and Iran (June 13-24), Euronews notes.

Researchers at the Atlantic Council's Digital Forensic Research Lab (DFRLab) analyzed 130,000 posts published by the chatbot on X relating to the 12-day conflict and found that it provided inaccurate information.

They estimate that about one-third of these posts responded to requests to verify misinformation circulating about the conflict, including unconfirmed claims on social media and footage purportedly showing the exchange of fire.

Difficulty verifying information

"Grok showed that it struggles to verify already-confirmed facts, analyze fake visual material and avoid unsubstantiated claims," the study said.

"The study underlines the crucial importance of AI chatbots providing accurate information to ensure they are responsible intermediaries of information," it adds.

Although Grok is not intended as a fact-checking tool, X users are increasingly turning to it to verify the information they see on the platform.

X does not have a third-party fact-checking programme, relying instead on so-called Community Notes, where users can add context to posts considered inaccurate.

Misinformation surged on the platform after Israel's first strikes on Iran on June 13.