Like everything created by humans, code embeds the worldview of its authors. From this premise comes the concept of algorithmic bias: an allegedly neutral technology produces, or helps reproduce, inequalities.
Researcher Letícia Simões Gomes, a doctoral student in sociology at USP and a researcher at the Center for the Study of Violence, analyzes these effects in the field of public security.
“Algorithms discriminate par excellence; that is their job. When they categorize, they discriminate in the literal sense of the verb: they separate one thing from another. The point is that there are ways of categorizing that are morally reprehensible and others that are not,” she says.
The researcher’s focus is on the use of technology by the police, not only in patrolling but also in setting priorities for action and in officers’ decision-making. “Why are we so concerned, for example, with predicting where the next robbery will happen and less concerned with preventing the next massacre?” she asks.
Letícia Simões Gomes
Visiting researcher at the Institute for Critical Race and Ethnic Studies at Saint John’s University, New York, investigating policing technologies in contexts of racial inequality. Researcher at NEV (Center for the Study of Violence) and doctoral student in sociology at USP. Holds a master’s degree in sociology and a bachelor’s degree in international relations from USP.
Most of the algorithms we use on a daily basis were developed in companies run by white people living in developed countries. What is the relevance of this?

When we think about technology, it is important not to treat it as something dissociated from society, but as something produced by people who are positioned in a particular place, who are subjects, flesh-and-blood people with desires and political opinions.
However, we cannot think only of these individuals, as if they were the only vectors of harm in the world. When we talk about algorithmic bias, for example, we often look for a culprit and their intent.
Is it the programmer? Is it the platform’s backer? Is it the users? It is more interesting to think in broader terms: how are these people, institutions, companies, and corporate cultures connected in ways that produce, for example, inequalities?
These people are situated in certain strata of society because society is organized in a certain way. This shapes how these people see the world and presents them with a finite range of possibilities.
What is algorithmic bias?

When we talk about algorithmic bias, there is a more technical, statistical aspect: for example, when the sample drawn from a population is not representative of the whole.
This can come from how the data is collected, how it is handled, and what statistical techniques are employed. If there is a distortion between the population and the sample, we have a biased sample. These choices are all made by people when they program the algorithm.
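To illustrate the statistical side of this point, here is a minimal sketch (the regions, proportions, and oversampling factor are hypothetical, not taken from the interview) of how a skewed data-collection process yields a sample that misrepresents the population:

```python
import random

# Hypothetical population: 70% of incidents happen in region A, 30% in region B.
population = ["A"] * 7000 + ["B"] * 3000

# Representative sample: every record has the same chance of being drawn.
fair_sample = random.sample(population, 500)

# Biased collection: region A is patrolled more heavily, so its incidents are
# three times as likely to enter the dataset.
weights = [3 if region == "A" else 1 for region in population]
biased_sample = random.choices(population, weights=weights, k=500)

def share_of_a(records):
    return records.count("A") / len(records)

print(f"share of A in the population:    {share_of_a(population):.2f}")    # 0.70
print(f"share of A in the fair sample:   {share_of_a(fair_sample):.2f}")   # ~0.70
print(f"share of A in the biased sample: {share_of_a(biased_sample):.2f}") # ~0.88
```

Any model trained on the biased sample inherits that distortion, which is the technical sense of bias described here, before the normative one discussed below.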
But there is also a normative aspect, which associates algorithmic bias with discrimination and with the reproduction of inequalities, whether racial, gender-based, or of any other nature.
When we think about forms of artificial intelligence, they are based on taxonomies, on classifications. They process data on the basis of these classifications. It is an ontological question: algorithms exist to classify and order.
Algorithms discriminate par excellence; that is what they do. By categorizing, they discriminate in the literal sense of the verb: they separate one thing from another. The point is that there are ways of categorizing that are morally reprehensible and others that are not.
Racial discrimination and discrimination against trans people, for example, are morally reprehensible forms of discrimination.
An important part of the academic literature addresses the question of what standards are being sought and why a particular tool is being used.
Some research proposes a positive algorithmic bias, with statistical techniques that seek to correct undesirable patterns, such as racial discrimination, to arrive at socially desirable standards or to eliminate bias. To what extent this is feasible is up for debate.
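One commonly cited family of such correction techniques is reweighting the training data so that group membership and outcome become statistically independent, in the spirit of Kamiran and Calders’ reweighing method. The sketch below is a hypothetical illustration of that idea, not a method described by the researcher:

```python
from collections import Counter

# Hypothetical training records as (group, label) pairs, e.g. label 1 = "flagged".
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

n = len(records)
group_counts = Counter(group for group, _ in records)
label_counts = Counter(label for _, label in records)
pair_counts = Counter(records)

# Reweighing: give each record a weight so that, under the weighted data,
# group membership and label look statistically independent.
weights = [
    (group_counts[group] / n) * (label_counts[label] / n) / (pair_counts[(group, label)] / n)
    for group, label in records
]

# A classifier trained with these sample weights no longer sees group A
# over-associated with the positive label.
print([round(w, 2) for w in weights])
```

Whether corrections of this kind are enough is precisely the feasibility debate mentioned above.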
Other research questions the very choice of variables and the way the problem to be studied through artificial intelligence is framed: why are we so concerned, for example, with predicting where the next robbery will happen and less concerned with preventing the next massacre?
When we think about the uses of algorithms, we have to think about the assumptions that guide their development, who is building that algorithm, what the application will be and, ultimately, whether it is working as planned.
Is it possible to correct algorithms that have negative biases?

Some of the main solutions under debate, such as greater transparency and auditing, amount at heart to replacing human moderation.
But the human being still has to be there, because ethics is a debate that artificial intelligence cannot embed. Saying that artificial intelligence dispenses with humans is a myth: humans are needed to develop, train, update, implement, and evaluate these systems, among other things. The English term for this is “keeping humans in the loop.”
One of the things I have been thinking about a lot is that there is a limit to how productive it is to focus only on the platform. The platform and the system are always part of a larger whole. Many of the solutions researchers discuss in response to the danger of reproducing inequalities through algorithmic bias are social, not necessarily platform-centric.
Technology is one chapter of a larger problem: you cannot solve algorithmic bias without addressing biases and inequalities in society.