Europe is paving the way for the global community, becoming the first jurisdiction to put in place an institutional framework for artificial intelligence (AI). The Council of the European Union yesterday gave the final green light to the so-called Artificial Intelligence Act, the first global rules for AI.

The flagship legislation, aimed at harmonizing rules on artificial intelligence, takes a “risk-based” approach, meaning that the higher the risk of harm to society, the stricter the rules. It is the first such piece of legislation in the world, laying the foundations for a global standard for the regulation of artificial intelligence.


In addition, the new regulatory framework introduces fines for breaches of the AI Act, set as a percentage of the offending company's global annual turnover in the preceding financial year or a pre-determined amount, whichever is higher. SMEs and start-ups are subject to proportionate administrative fines.
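To make the "whichever is higher" mechanics concrete, here is a minimal sketch in Python; the percentage and fixed amount used below are illustrative placeholders, not the thresholds actually set out in the Act.

def administrative_fine(global_annual_turnover: float,
                        turnover_share: float,
                        fixed_amount: float) -> float:
    # The fine is the higher of a share of worldwide annual turnover
    # (preceding financial year) and a pre-determined amount.
    return max(global_annual_turnover * turnover_share, fixed_amount)

# Hypothetical example: 3% of a 1 billion euro turnover vs. a 15 million euro fixed amount
print(administrative_fine(1_000_000_000, 0.03, 15_000_000))  # 30000000.0 - the turnover share prevails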

After being signed by the presidents of the European Parliament and the Council, the legislative act will be published in the Official Journal of the EU in the coming days and will enter into force twenty days after publication. The new regulation will apply two years after its entry into force, with some exceptions for specific provisions.

The AI Act applies only to areas within EU law and provides for exceptions, such as for systems used exclusively for military and defense purposes, as well as for research.

High-risk systems

The new law categorizes different types of artificial intelligence, depending on the risk. AI systems that present only a limited risk will be subject to very light transparency obligations.

In contrast, high-risk AI systems will be allowed, but subject to a set of requirements and obligations in order to access the EU market. AI systems used for practices such as cognitive behavioral manipulation and social scoring will be banned in the EU, as the risk they pose is considered unacceptable.

The law also prohibits the use of artificial intelligence for "predictive policing" based on the profiling of citizens, and bans systems that use biometric data to categorize people according to specific characteristics, such as race, religion, ethnicity or sexual orientation.

The AI Act also addresses the use of general purpose AI (GPAI) models. Models that do not present systemic risks will be subject to some limited requirements, for example regarding transparency, but those with systemic risks will have to comply with stricter rules.
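Purely as an illustrative summary, the risk-based tiering described above could be sketched as a simple lookup; the tier names and obligations below paraphrase the article's description and are not terminology quoted from the regulation.

# Paraphrased, non-official summary of the risk tiers described above.
RISK_TIERS = {
    "unacceptable": "banned (e.g. cognitive behavioral manipulation, social scoring)",
    "high": "allowed, subject to requirements and obligations for EU market access",
    "limited": "light transparency obligations",
    "gpai": "limited requirements, for example on transparency",
    "gpai_systemic_risk": "stricter rules",
}

def obligations_for(tier: str) -> str:
    # Look up the paraphrased obligation for a given risk tier.
    return RISK_TIERS.get(tier, "unknown tier")

print(obligations_for("high"))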

Artificial Intelligence Office

To ensure proper enforcement, several administrative bodies are being set up: an Artificial Intelligence Office within the Commission to enforce the common rules across the EU, a scientific panel of independent experts to support enforcement activities, and an AI Board with representatives of the member states to advise and assist the Commission and member states on the consistent and effective implementation of the AI Act.

“The approval of the law on artificial intelligence is an important milestone for the EU. This law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI Act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies, while at the same time ensuring that this rapidly changing technology can flourish and stimulate European innovation,” commented Mathieu Michel, Belgium's Secretary of State for Digitisation.

Supporting innovation

The AI Act provides an innovation-friendly legal framework and aims to promote evidence-based regulatory learning.

The new law provides that AI regulatory sandboxes, which offer a controlled environment for the development, testing and validation of innovative AI systems, should also allow such systems to be tested in real-world conditions.