The European plan to regulate artificial intelligence passed a critical milestone today, winning a first green light from MEPs, who called for new bans to be imposed and for better account to be taken of the ChatGPT phenomenon.

The European Union wants to be the first in the world to adopt a comprehensive legal framework to curb the abuses of artificial intelligence while fostering innovation.

Brussels proposed an ambitious regulatory plan two years ago, but its review has dragged on, delayed in recent months by controversy over the potential risks of generative artificial intelligence.

EU member states did not decide their position until the end of 2022.

MEPs endorsed theirs during a committee-level vote this morning in Strasbourg. Their position is expected to be ratified by the plenary in June. A difficult negotiation between the various institutions will then begin.

“We received more than 3,000 amendments. You only have to turn on the TV: every day we see the importance of this file for citizens,” said Dragoș Tudorache, co-rapporteur on the text.

“Europe needs an ethical approach, centred on people,” summed up fellow co-rapporteur Brando Benifei.

Technically complex, artificial intelligence systems are as fascinating as they are troubling.

While they can save lives by enabling a leap forward in medical diagnosis, they can also be exploited by authoritarian regimes to carry out mass surveillance of citizens.

The general public discovered their enormous potential at the end of last year with the arrival of ChatGPT, from the California company OpenAI, which can produce essays, poems or translations in a matter of seconds.

But the spread on social media of realistic-looking fake images, created by apps such as Midjourney, is sounding the alarm about the danger of manipulating public opinion.

Scientists have even called for a moratorium on the development of the most powerful systems until they are better regulated.

The European Parliament’s position broadly confirms the Commission’s approach. The text draws on existing product safety regulations and will impose controls that rest primarily on the companies themselves.

Humans must maintain control

The heart of the plan is a list of rules that apply only to applications deemed “high risk” by the companies themselves, based on criteria set by the legislator. For the European Commission, this covers all systems used in sensitive areas such as critical infrastructure, education, human resources, law enforcement or migration management…

The obligations include providing for human control of the machine, drawing up technical documentation, and putting in place a risk management system.

Compliance with these obligations will be monitored by supervisory authorities designated by each member state.

MEPs want to limit these obligations to products that could pose a threat to safety, health or fundamental rights.

The European Parliament also wants to take better account of ChatGPT-style generative artificial intelligence, calling for a special regime of obligations that essentially mirrors the one foreseen for high-risk systems.

MEPs also want to compel providers to put safeguards in place against illegal content and to disclose the copyrighted material (scientific texts, music, photos, etc.) used to develop their algorithms.

The Commission’s proposal, presented in April 2021, already provides a framework for artificial intelligence systems that interact with humans. It will require them to inform users that they are dealing with a machine, and oblige applications that generate images to specify that they were created artificially.

Bans will be rare. They will concern applications that are contrary to European values, such as the mass surveillance systems used in China.

MEPs want to add a ban on emotion recognition systems and to remove the exceptions allowing law enforcement to carry out remote biometric identification of people in public places.

They also want to ban the mass harvesting of photos from the Internet to train algorithms without the consent of the individuals concerned.