Legal framework for artificial intelligence should protect people, says rapporteur

Artificial intelligence is used in credit analysis, in insurance contracts, and in forecasts and decisions that affect people’s lives, and therefore needs to be carefully regulated, argues Laura Schertel, rapporteur for the Legal Framework for Artificial Intelligence.

“You end up classifying people, classifying consumers, often assigning scores that identify who has the lowest or highest risk of default. A person can get more expensive credit because of that, or more expensive insurance,” she says.

The commission of jurists was installed on March 30, after PL 872/2021 was approved in the Chamber of Deputies. Its objective is to assist in drafting the bill, which is now in the Senate and has drawn criticism from experts for the speed with which it advanced through Congress.


Laura Schertel, 38

She is the rapporteur of the commission of jurists responsible for providing input for the drafting of a substitute bill on artificial intelligence in Brazil. She is a lawyer, a professor at UnB (University of Brasília) and at IDP (Brazilian Institute for Teaching, Development and Research), and a visiting researcher at Goethe University Frankfurt.

For Schertel, regulation should not be rushed. “We also have a problem related to transparency. It is often difficult to understand exactly what criteria were used to reach a decision,” she says.

Artificial intelligence tools are present in activities ranging from the simpler — this interview, for example, was transcribed by a bot (a robot programmed to perform a function) available on Telegram — to the more complex, such as controlling autonomous cars.

The original bill, says the rapporteur, aimed more to deregulate than to regulate, presenting a subjective text that did not define a gradation for the different levels of risk in artificial intelligence.

“The paths will be debated, and the commission will generate input for the rapporteur, Eduardo Gomes (PL-TO), and for Congress to take a decision,” she says. The jurists’ final text must be presented within 120 days of the commission’s installation.

Why regulate the application of artificial intelligence in Brazil? We can understand artificial intelligence as a set of technologies that uses large databases to solve everyday problems. We use these tools across the most diverse sectors of society, both public and private.

AI is used to help us make predictions, classify and make decisions: in credit analysis, in insurance, in all those areas where it is necessary to reduce business risk, for example. You end up classifying people, classifying consumers, often assigning scores that identify who has the lowest or highest risk of default.

A person can get more expensive credit, or more expensive insurance, because of that. We also have a problem related to transparency: it is often difficult to understand exactly what criteria were used to make a decision.

Another problem that has been raised is algorithmic discrimination. Algorithms learn from the examples you give them, and that data may be imbued with bias.

There are two classic types of risk: the first is economic loss, as in the case of credit; the second concerns impacts on certain communities, such as the Black or Indigenous population, for example.

There is also a third type of risk, related to automated systems such as medical diagnostics, autonomous weapons, caregiver robots, biometric surveillance and self-driving cars. We are talking about applications, still under development, whose possible security flaws can have serious consequences for society.

What should be done to mitigate these risks? Artificial intelligence is not simple. So much so that regulation around the world is still very incipient. In the European Union, there is a proposed regulation that has not yet been approved. In the United States, the initiatives are more sectoral.

It is a very big challenge, and it is curious that in Brazil this process began in Congress. The commission of jurists now has the opportunity to consolidate this debate.

Many tools that use AI are still in full development. If we think about autonomous cars, for example, many of these tools are still in the research phase and are not yet in full use, so it is not easy to regulate something at such an early stage of development.

Should we make a law that has a gradation? Linked to the challenge of mitigating risks, there is another regulatory problem: not stifling or preventing innovation. It seems to me that one of the great challenges is how to make this regulation strong enough to guarantee the protection of the person and their dignity.

I think that is the central premise. It is essential that we seek plurality, a diversity of perspectives. This means that all people, areas and sectors that are in some way impacted need to be heard and need to participate in this regulation.

Finally, I would also stress the importance of the law being consistent with the norms we already have. Artificial intelligence interacts with many other norms: first with the LGPD (General Data Protection Law), then with the Civil Rights Framework for the Internet and with the Access to Information Law.

What would be the ideal timetable for this discussion, so that the country does not fall behind in regulating the technology? When it comes to regulation, there is no ready-made recipe. The public debate will set the pace. And it will be ready when we strike a balance between the protection that society legitimately expects and the legal certainty that companies need to innovate and to develop new products and services.

And how do we guarantee innovation and ensure the technology adapts to Brazilian reality? Technology needs to adapt to reality; I believe this is an important aspect that is not simple to solve. Above all, it means being very consistent with our problems and our social reality.

We are a profoundly unequal society with structural discrimination and racism. We have to think about data quality and statistical methods so as not to perpetuate this discrimination.

There are studies upon studies demonstrating these discriminatory risks, so for Brazilian society this element is even more relevant. And this point ended up not being included in PL 21/20.

What was missing from the bill, and what points need to be revised? Some points in the bill were not sufficiently discussed. It brought a few principles, but no procedures that could mitigate the risks of artificial intelligence.

The question is: should a bill that seeks to regulate artificial intelligence not include ways to mitigate these risks? The bill also says a lot through what it omits.

It establishes fault-based liability for all situations, including high-risk ones. By doing so, one could argue that it effectively removes the application of liability.

Also in relation to discrimination and transparency, the bill seems to limit their application rather than specify or expand it. Is the goal to regulate or to deregulate? We must establish procedures and rights, not just very generic principles and rules without concrete application. The commission will generate mechanisms for the Senate, the rapporteur Eduardo Gomes (PL-TO) and Congress to trace a path toward regulation.

Where do the public and private sectors stand in this story? What is the incentive model to encourage the adoption of these tools in both sectors? I don’t have an answer for that, but I think it is an important question that society needs to debate. Should similar general rules apply to both the public and private sectors, or should the rules differ? This is a relevant point to be discussed in the commission.

In fact, there are risks that are very specific to the public sector and that deserve a closer look. If we think about criminal investigation and public security, there are risks that are very characteristic of the public sector and of the criminal process as a whole, because what is at stake is the restriction of a person’s freedom.

Finally, to conclude, the regulation of artificial intelligence requires true legal innovation: we need collective, and not just individual, mechanisms for implementing rights; concrete guidelines, and not just general rules; and we need to think about prevention, not just redress.

Any regulatory action must take into account the need to adapt to future technological developments, guaranteeing innovation while protecting the rights of individuals and the community.
