Understanding algorithmic racism, which also affects black people beyond social networks


The use of filters that lighten skin or narrow the nose, and the predominance of photos of white people in stock image searches, are examples of so-called algorithmic racism.

Like a cake recipe, an algorithm is a set of instructions written by a programmer so that a system performs an action or achieves a certain goal. To work well, algorithms need to be continually trained, using artificial intelligence, machine learning and big data.
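To make the recipe analogy concrete, here is a minimal sketch in Python (a hypothetical example written for this explanation, not code from any system mentioned here) of an algorithm as an ordered list of instructions:

```python
# A minimal illustration of the "recipe" idea: an algorithm is just an
# ordered list of instructions for reaching a goal. This hypothetical
# example turns a list of exam scores into pass/fail decisions.
def grade(scores, passing_mark=6.0):
    results = []
    for score in scores:           # step 1: take each ingredient (score)
        if score >= passing_mark:  # step 2: compare it to a threshold
            results.append("pass")
        else:
            results.append("fail")
    return results                 # step 3: serve the finished result

print(grade([5.5, 7.0, 9.2]))     # ['fail', 'pass', 'pass']
```

In a machine learning system, the difference is that the threshold is not fixed by hand but learned from training data, which is why the composition of that data matters so much.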

“It is necessary to break the paradigm that technologies are neutral”, says Tarcízio Silva, researcher and author of the book “Algorithmic Racism: Artificial Intelligence and Digital Networks”.

For him, responsibility for algorithmic racism lies largely with those who produce the technology, since many of these systems are built through machine learning and reflect the personal experience of their programmers, who mostly use white people in their databases and tests.
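As a deliberately simplified illustration of that point, the toy simulation below (hypothetical groups, numbers and "detector"; it is not any real face-recognition system) shows how a model calibrated on imbalanced data can end up working worse for the under-represented group:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_faces(n, group):
    # Hypothetical one-dimensional "feature" whose distribution differs
    # by group, standing in for whatever signal a real detector learns.
    center = 0.0 if group == "A" else 1.5
    return rng.normal(loc=center, scale=1.0, size=n)

# Imbalanced training set: 90% group A, 10% group B.
train = np.concatenate([make_faces(900, "A"), make_faces(100, "B")])

# "Training": the detector accepts anything near the training data's
# mean -- a statistic dominated by the majority group.
mean, tol = train.mean(), 2.0 * train.std()

def detect(x):
    return np.abs(x - mean) < tol

# Evaluate the detection rate separately for each group: group A is
# found roughly 97% of the time, group B only about 80%.
for group in ("A", "B"):
    test = make_faces(2000, group)
    print(f"group {group}: detected {detect(test).mean():.0%}")
```

No single line of this code is "racist" on its own; the unequal error rates come entirely from the composition of the training set, which is exactly the point Silva makes.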

One of the problems is the use of facial recognition technology in police work and criminal investigations: since most of those listed as suspects are black, the technology can lead to misidentification. These applications also have more difficulty recognizing black faces.

It was to combat these inequalities on digital platforms that researcher Joy Buolamwini, of MIT (Massachusetts Institute of Technology) in the US, created the Algorithmic Justice League in 2016.

The idea was born when Buolamwini, who is black, was developing a smart-mirror prototype capable of recognizing faces and projecting the faces of inspiring people onto the reflection.

Using a facial recognition algorithm, she realized that her own face was not detected. She then picked up a white costume mask and faced the mirror again; this time, a face was recognized. This and other examples involving facial recognition algorithms appear in the documentary “Coded Bias” (Netflix).

Through art and research, the League aims to make the machine learning and AI (artificial intelligence) ecosystem more critical, equitable and accountable, and to raise awareness of the technology’s social implications.

For Silvana Bahia, co-executive director of Olabi and coordinator of PretaLab, the initiative also shows how necessary it is to examine the forms of oppression that operate through digital technologies.

In this sense, she believes that bringing diverse people into this market is an important way to raise awareness of the impacts of algorithmic racism.

“Only in this way will we have the chance to draw on other experiences, experiences that will be incorporated into the development of a particular application or into the programming of an algorithmic sequence”, she says.

In Brazil, some social organizations approach technology in a more political and inclusive way, such as MariaLab, Minas Programam, Conexão Malunga and ITS Rio. There are also projects like PretaLab itself, which offers training courses and fosters a network connecting black women with opportunities in the technology job market.

“We do this based on three pillars. The first is the network on our website, which brings together profiles of more than 700 black women from different areas of technology. The second is the training courses, which seek to boost these women’s professional mobility. The third is the connection with the job market, which wants to become more inclusive and has been looking for more of this talent”, explains Silvana.

Ana Cláudia Santos, 53, inspired by her daughter, who participated in PretaLab, decided to take a chance and change careers.

A social work graduate, she says her daughter’s enthusiasm caught her attention and prompted her to sign up for the course. “When I was selected, I couldn’t believe it. Today I’m very happy, and I believe that through programs like these, black women have the opportunity to qualify for the technology market, just as I’m doing.”

Another way to combat algorithmic racism, according to researcher Tarcízio Silva, is to expand the laws that regulate the use of data.

“The LGPD (General Data Protection Law) guarantees the privacy of people’s information in both physical and digital environments, but it is still necessary to include anti-discrimination provisions in laws on artificial intelligence and data protection, and to establish parameters for the area of public security.”
