Could efforts to combat online bullying and sexual violence against children lead us down dark alleys and into unprecedented surveillance of online communication? This question is at the heart of the debate on the new EU directive aimed at protecting minors online, which the responsible ministers will discuss on 10 October.

The new legislation would oblige platforms such as WhatsApp, iMessage and Signal to “scan” suspicious content and report it to the authorities on their own responsibility. The initiators of the new directive argue that this is the only way to put an end to the steady rise in incidents of harassment and sexual violence. A particular target is so-called cybergrooming: an initial, seemingly friendly approach to minors over the internet, made with the aim of later seducing and sexually abusing them.

Critics, on the other hand, are sounding the alarm, warning that the new legislation will not only be ineffective and error-prone, but will also flagrantly violate the privacy of hundreds of millions of Europeans.

“It seems to be a pretext…”

“Obviously we all agree, as a society, that it’s important to fight inappropriate content,” says Anya Lehmann, professor of cybersecurity at the Potsdam-based Hasso Plattner Institute. “However, there is no reliable indication that the proposed measures will actually be effective. Once again, it appears that fighting these criminal offenses is a pretext for breaking the secure encryption of online communication.”

In the same vein, Andre Haug, vice-president of the German Bar Association, which represents around 166,000 lawyers across the country, makes it clear that “this directive entails massive violations of fundamental rights, all lawyers agree on that”.

Earlier drafts of the directive have already been rejected. The current version is the product of revisions by the Hungarian EU presidency, which, according to Haug, “contains some minor modifications, but does not address the heart of the problem.” In other words, the German jurist points out, it violates Article 7 of the EU Charter of Fundamental Rights (“Respect for private and family life”) as well as Article 8 (“Protection of personal data”), intruding even where a heightened degree of protection is required, “such as communication between lawyer and client or between doctor and patient”.

How would the checks be carried out?

The new draft directive does not state exactly how communications would be monitored. Experts consider so-called “client-side scanning” to be the only feasible method: content is compared against a database of known material directly on the user’s device, before encryption even begins. Comparing this approach with the offline world, Anya Lehmann says that “it is as if the state is saying that it will not open our letters to check them, but will look over our shoulder as we write.”
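To make the mechanism concrete, here is a minimal, purely illustrative sketch of the idea behind client-side scanning, assuming exact hash matching against a local database. The names KNOWN_HASHES, client_side_scan and send_message are hypothetical, and the “encryption” step is only a placeholder, not how any real messenger works.

```python
import hashlib

# Illustrative, hypothetical database of hashes of "known" flagged content.
# Real systems typically rely on perceptual hashes for images, not exact SHA-256.
KNOWN_HASHES = {
    hashlib.sha256(b"example of known flagged content").hexdigest(),
}

def client_side_scan(payload: bytes) -> bool:
    """Compare the payload against the database on the user's own device."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_HASHES

def send_message(payload: bytes) -> str:
    # The decisive point: the check runs *before* end-to-end encryption,
    # so a match can be reported even though the transmitted ciphertext
    # remains unreadable to the provider.
    if client_side_scan(payload):
        print("match found -- would be reported under the proposed rules")
    ciphertext = payload[::-1]  # placeholder standing in for real encryption
    return ciphertext.hex()

if __name__ == "__main__":
    send_message(b"example of known flagged content")  # triggers a report
    send_message(b"an ordinary private message")       # passes silently
```

The point of the sketch is the ordering: the device inspects the plaintext before it is ever encrypted, which is exactly what Lehmann’s letter-writing comparison describes.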

Critics of the directive point to two further risks. First, AI technologies that are not yet “mature” may make significant errors when scanning human communication, producing false alarms. Second, once the new surveillance technology is in use, it may soon be extended to other areas beyond the fight against harassment and sexual violence against minors.
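To see why even a small error rate worries critics, a rough back-of-the-envelope calculation helps. Every figure below (message volume, error rates, the share of genuinely abusive content) is an assumption chosen purely for illustration, not a number from the directive or from any study.

```python
# Illustrative base-rate calculation: with billions of scanned messages,
# even a low false-positive rate yields enormous numbers of false alarms.
messages_per_day = 10_000_000_000   # assumed volume across large platforms
false_positive_rate = 0.001         # assumed: 0.1% of harmless content misflagged
abusive_share = 0.00001             # assumed: 1 in 100,000 messages is actually abusive
detection_rate = 0.9                # assumed: 90% of real abuse is caught

false_alarms = messages_per_day * (1 - abusive_share) * false_positive_rate
true_hits = messages_per_day * abusive_share * detection_rate

print(f"False alarms per day: {false_alarms:,.0f}")
print(f"True detections per day: {true_hits:,.0f}")
print(f"Share of flags that are false: {false_alarms / (false_alarms + true_hits):.1%}")
```

Under these assumed numbers, the scan would generate roughly ten million false alarms a day, and about 99 percent of everything flagged would be harmless.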

In the end, it remains unclear whether the necessary majority among the 27 member states will emerge on 10 October to approve the directive. If not, the text will return to Brussels for further work. Germany is against it and is looking for allies in the Council; the Netherlands has already announced that it will not support the new draft directive.

Edited by: Yiannis Papadimitriou