Opinion – Latinoamérica21: Algorithmic racism against refugees

Today's global migratory dynamic is permeated by the development and application of increasingly sophisticated and invasive digital technologies, which strongly condition people's destinies and shape strategies of power.

Functioning as instruments of social engineering, digital technologies amplify discrimination through tools that measure biological traits and human behavior. They also make it possible to tie together political strategies that range from the micro-reality of the individual to the macro-reality of transnational power relations.

Russia's war against Ukraine has now lasted nine months, forcing the displacement of 8.1 million people towards the European Union (EU) to date. In response, in early March 2022 the European Commission proposed to activate the Temporary Protection Directive for Ukrainian refugees, with the justification, correct as it happens, that a large influx of refugees towards EU member countries was to be expected.

Yet this Directive, created in 2001, had never been applied before, even though masses of refugees had already knocked on the bloc's doors, only to be barred at the borders, at the cost of thousands of lives. For some analysts, such as Prof. Dr. Meltem İneli Ciğer of Suleyman Demirel University School of Law, the political resistance to applying this legal mechanism is rooted in prejudice against people from non-European countries.

The decision to apply the Directive converges with the EU's own information policy on the management of borders, human mobility and asylum, which rests on the permanent surveillance of groups considered undesirable. This political-legal framework is embodied in the very technological devices used to create barriers to, rather than facilitate, people's access to the human rights guaranteed by the international treaties that regulate asylum and refuge.

Given this difference in the treatment of refugees according to race, ethnicity, culture and nationality, the role of surveillance systems in the Russia-Ukraine conflict remains little discussed at the moment; the rapid mobilization of EU members to receive Ukrainian refugees gives the impression that identification procedures have, for them, been partly suspended. Another example comes from the Americas, where huge waves of people fleeing violence, hunger and climate change head from Central American countries towards the United States (US).

The treatment of Latin American migrants seeking refuge depends on the government of the day in the US, but they are invariably the targets of containment policies built on security surveillance devices that operate in silence and express contempt, social prejudice and racism. In 2019, then-President Donald Trump reinforced the idea of building the wall between the US and Mexico, but presented it as a different kind of solution, a smart wall: “The walls we are building are not medieval walls. They are smart walls designed to meet the needs of frontline border agents.”

In fact, surveillance rules and procedures remain in force and lay bare the unequal treatment of Ukrainian refugees and refugees from non-European countries. The same applies when Latin American migrants in forced displacement are labelled irregular and therefore subject to preventive containment. This trend converges with the impact of algorithms on the dynamics of migration, where the essential step of any identification process is to classify, segregate, privilege or punish certain social groups.

What stands out here, however, is the bias of the code written to identify personal profiles, characterized by racial traits and behavioral characteristics, and its relationship to the governance of migration as a whole. This trend spans a wide spectrum of actions and institutions engaged in surveillance for monitoring and control. It is a problem identified by experts and activist movements online, notably the academic research of Joy Buolamwini at the Massachusetts Institute of Technology (MIT), presented in the documentary film Coded Bias.

In research carried out at the MIT Media Lab, Buolamwini, who is Black, positioned her face in front of facial recognition devices and was not identified; when she put on a white mask, however, her face was recognized immediately. The conclusion is that facial recognition systems based on artificial intelligence carry a bias in their algorithmic programming: the algorithms follow classification processes that distinguish the groups of people who deserve to be recognized from those who are literally excluded from the system. From this arises the fight for more inclusive code, or even for the outright elimination of facial recognition devices.

This concern was raised in the White House report Big Data: Seizing Opportunities, Preserving Values, published in 2014 under the Obama administration, which addressed uses of personal data that privilege or exclude groups of people on the basis of race and class, especially in relation to “housing, credit, employment, health, education and the market”. According to Canada's Centre for International Governance Innovation, this can extend to immigration, public safety, policing and the justice system, “as additional contexts where algorithmic processing of big data impacts civil rights and liberties.”

Skin color and other phenotypic features anticipate, or effect, a pre-classification of the refugees who deserve privileges, distinguishing them from those who never even get the chance to be assessed and cared for, and this before the migrants have crossed any border. Color, like national origin, screens people before their data even appear in the databases. More sensitive personal data, such as political opinions and religious beliefs, are identified later, when information is cross-referenced with other databases. These databases are structured in a biased way, legitimizing differences between human beings and reproducing inequalities between individuals.

Such procedures are no longer exceptional measures; they permanently promote a division between privileged groups, on the one hand, and, on the other, the classification, segregation, coercion, separation, isolation, punishment and banishment of the undesirables, justified by the risk they supposedly pose to the State and to society because they do not belong to the white, Christian social “club” of the rich countries.

It is therefore essential to understand and rethink the role of information policies built around surveillance systems for controlling human mobility. Because, contrary to what it may seem, even white-skinned, blue-eyed, Christian immigrants can also suffer, in one way or another, at some point, discrimination from systems that bet, above all, on fear of the foreigner.
