Children come into contact with artificial intelligence (AI) every day. It suggests videos, creates fairy tales or pictures on request, and has become an integral part of many children's lives. Voice assistants play radio plays for the little ones or tell them jokes.

Language models such as ChatGPT explain math problems to older children or help them with presentations. But what if the AI gives children dangerous advice or shows them images or videos that are not suitable for their eyes?

Does artificial intelligence need parental controls?

Children have different needs and communicate differently from adults, but according to Cambridge University's Nomisha Kurian, artificial intelligence technologies are not designed with this in mind. In a study published in the journal "Learning, Media and Technology," the education researcher calls for children to be given greater consideration as a target group. For the study, she analyzed several documented cases in which chatbots or voice assistants gave children dangerous or inappropriate advice.

Drinking games and horror movies

According to the study, Snapchat's MyAI chatbot, which is popular with young people, advised researchers posing as a teenager on how to seduce an older man. The voice assistant Alexa, meanwhile, encouraged a ten-year-old child to touch a coin to the exposed prongs of a charging plug inserted halfway into an outlet.

Tests carried out by the German youth protection platform Jugendschutz.net also produced some disturbing findings: MyAI explained a drinking game to a supposed 14-year-old user and recommended a horror movie rated 18+.

According to Kurian, the companies concerned tightened their safety measures in response to the cases she describes. In her view, however, it is not enough for AI developers to react to such incidents after the fact. They must consider children's safety from the start, Kurian insists.

Martin Bregenzer from the Klicksafe initiative agrees: "Adding child protection as an afterthought usually doesn't work. We see this with many services."

Deepfakes as a risk

Many experts see the flood of AI-generated fake images and videos online, known as deepfakes, as the biggest problem. These can now be created and distributed in no time at all, according to Jugendschutz.net's annual report: "Many of the fakes created look deceptively real and can hardly be distinguished from real photographs."

Bregenzer explains that AI can be used to create disturbing content, such as violent or sexual images, en masse. This could make it even easier for children and young people to become victims of cyberbullying.

What is true and what is false?

Even adults can sometimes find it hard to recognize what is real online. Children find it even more difficult because they lack judgment and experience, says David Martin, an expert on screen media at the German Society for Pediatric and Adolescent Medicine (DGKJ). "Children have a fundamental tendency to believe everything."

In this context, the expert sees a danger in how tempting it is to have a language model like ChatGPT deliver all the important information for a school presentation, for example, so that researching and selecting information yourself is no longer necessary: "This jeopardizes a very important skill for our democracy, the ability to judge."

Voice assistants that act like humans

Many language models also give the impression of weighing the information themselves: they do not produce their answers instantly but gradually, as if a human were typing on a keyboard. In Kurian's view, it is particularly problematic that children could come to trust a chatbot that sounds like a human, like a friend, sharing very personal information with it even though its responses can be completely wide of the mark.

Nevertheless, artificial intelligence should not be demonized; we should also see its positive sides, says Markus Zindermann from the North Rhine-Westphalia Media and Youth Agency. AI is primarily a technical tool: people can use it to create false information, but it can also be used to identify such information and remove it from the internet.

The examples from Kurian's study and Jugendschutz.net's annual report date from last year, Zindermann adds, and development in artificial intelligence is so rapid that "they are actually already outdated."

Martin, of Witten/Herdecke University, therefore expects artificial intelligence to respond much better to children in the future. "The big risk then could be that the AI will be so good at engaging children that they'll want to spend as much time with it as possible."

Edited by: Kostas Argyros