The use of artificial intelligence chatbots is increasing rapidly in users' daily lives, while raising critical questions about the reliability and accuracy of their answers. How trustworthy are platforms like ChatGPT, Meta AI, Gemini and Copilot? The public debate is intensifying, especially after the arrival of Grok by Elon Musk's xAI, which became widely accessible at the end of 2024, signaling a new chapter in the relationship between society and AI.

Users turn to AI tools

According to a TechRadar survey, 27% of Americans now prefer artificial intelligence tools when searching for information, replacing traditional search engines such as Google and Yahoo. The promise of 'quick verification' is a powerful advantage of chatbots, but the quality of their answers is a point of contention, as incidents of incorrect information are increasing.

Mounting questions about accuracy

The credibility of artificial intelligence is under the microscope. International studies point to a range of risks, such as inaccuracies, misleading answers and the systems' frequent inability to recognize the limits of their own knowledge, described in terms of 'alarming confidence in incorrect answers'.

Documented weaknesses and examples

According to the BBC, in a test where four popular chatbots answered news questions based on the same source material, more than 50% of the answers were found to have problems. The tools often enriched their answers with errors of their own, while in 19% of cases factual errors were identified.

On the same wavelength, a study by Columbia's Tow Center found that in 60% of cases chatbots were unable to identify the origin of excerpts. Also of concern is their inability to decline to answer when they do not know the answer.

The origin of the information and the threat of misinformation

Artificial intelligence derives its information from huge databases and online sources. But if the underlying sources are incomplete or of low quality, the answers are usually 'incomplete, inaccurate, misleading or false', points out Tommaso Canetta, fact-checking coordinator at EDMO. He also stresses the increased risk posed by the spread of Russian disinformation through large language models (LLMs).

The consequences of such misinformation are immediate and serious; a typical example occurred in August 2024, when Grok circulated 'false news' about Vice President Harris's participation on the US ballot.

The experts' position and recommendations

Experts such as Felix Simon of the Oxford Internet Institute clarify: 'Artificial intelligence systems should not be considered reliable verification tools'. While they can facilitate a simple check, their accuracy and consistency remain in question, especially in complex cases. Canetta agrees that, for the time being, chatbots are only useful for basic fact-checking and recommends always verifying with other sources.

Conclusion

The popularity of artificial intelligence chatbots is growing, but using them as a reliable information tool calls for caution. Users must approach their answers critically and confirm their validity using independent and reliable sources.

Daniel Ebers contributed to the writing of the article.
Edited by: Jukast Cronter