Artificial intelligence expands state police power, says scientist fired from Google

The Ethiopian Timnit Gebru became famous after accusing Google of racial bias in its development of artificial intelligence (AI). A former AI ethics coordinator at the big tech company, she warns of the risks of this technology, which she says contributes to an “increasingly carceral and punitive society”.

Having left big tech two years ago, Gebru is now an advisor to the Black in AI research group and founder of the Distributed AI Research Institute (Dair).

Gebru, 39, was granted political asylum in the United States in 1999 and went on to graduate from Stanford University. She worked at Apple and Microsoft before joining Google, which fired her after she sent a mass email criticizing the company’s handling of AI ethics. At the time, Jeff Dean, head of the company’s AI unit, told employees that Gebru had threatened to resign while demanding explanations about an unpublished paper, and that he had accepted the scientist’s resignation.

Regarding the global repercussions of the Blake Lemoine case (another Google employee dismissed after raising what he saw as an ethical problem at the big tech company), the researcher says it is a distraction from the problems generated by facial recognition, such as mass incarceration and surveillance.

This year, the engineer described a Google chatbot as “sentient”, that is, endowed with some perception of feelings. “What frustrates me is that the public understanding of AI systems is very much shaped by movies, not reality,” Gebru says in a video-call interview with Folha.

Earlier this year, Blake Lemoine, a senior engineer at Google, was fired after calling a company chatbot “sentient.” How can this debate negatively influence or even mystify the development of AI? Many similar cases are popping up, with many tech companies pursuing artificial general intelligence [the ability to master any skill, just as a human can]. It sounds like an all-knowing God, and these billionaires are making money left and right.

The conversation they are pushing is about abstractions like “what about consciousness?” and “what about sentience?”. It makes us forget that these are tools we are building. They are collections of programs, numbers, hardware and software. These tools are built by specific people and institutions for specific purposes. At every step of the process there are people.

This is a complete distraction, kind of like science fiction. What frustrates me is that the public understanding of AI systems is very much shaped by movies, not reality. And the film industry could write much better stories to educate the public.

Facial recognition for public safety has advanced in Brazil. However, the technology is challenged by a lack of accuracy and poorly structured databases. How do you see this growth? We have to make a strong case that facial recognition of white faces is dangerous [just as it is of Black faces]. It can be very difficult for people to understand, because we are not talking about a nuclear bomb or anything along those lines. With facial recognition, people’s reaction is something like “I can recognize faces, so why is this a problem?”.

The point is that humans recognize faces, but they cannot recognize 6 billion faces; they cannot connect them to a database and say who each person is and what they have done. You cannot do this as a human. That is the capacity you are giving to the state [with AI].

Instead of creating a different future where we are not building a prison state, we are creating a terrible, scary future where we have Big Brother [the all-seeing figure associated with surveillance technology]. A lot of people point to China, but you don’t have to go there; the US has facial recognition tools that watch people like hawks.

People’s movement is policed. We have to do a better job of explaining why we should not want this. A lot of people think that facial recognition will make them feel safer, that surveillance will guarantee that safety.

The second point is that we are creating an even more carceral and punitive society. We are expanding the state’s police power.

What is the central problem with the advancement of AI, especially in peripheral countries? There are many. One has to do with rising inequality between countries and between the places and people who are centralizing power and everyone else. Silicon Valley, for example, is a small region that concentrates a lot of power. There are many billionaires saying random things, yet the region has no public transport. There are many homeless people. Public universities do not receive investment. Many Black and immigrant students are not supported by the state. And all of this happens right alongside so much power. It is a region that has more control over the whole world than many countries do. Multinationals have a lot of money and power, and they are setting policies for everyone.

This growth of inequality is very colonial. It means taking raw material, whether labor or data, from around the world, packaging it and selling it for a lot of money. It is a colonial relationship, and I find that scary. It has been used to centralize power in the hands of those who stole it through colonization.

The second part is that AI is used by authoritarian governments against their own people: to keep people from carrying out their civic duties, to spy on citizens. This is also scary. It expands the prison system.

You built your career at companies like Apple, Microsoft and Google. Today you work as a researcher. What was this transition like? What I am trying to do with my institute is create a safe place for people who want to take a different path and a different approach, who want to work in AI or anything related to it.

For me, this is the foundation for a different vision of the future. Any research we develop prioritizes people who are on the margins. We do this because many harmful technologies are first tested and tried out on marginalized people.


X-ray

Timnit Gebru, 39, is an Ethiopian computer scientist who works on ethics in artificial intelligence. She has worked at companies such as Apple and Microsoft. In 2020, she was fired from Google after accusing the company of racism and censorship in an email exchange. She is the founder of Dair (Distributed AI Research Institute) and co-founder and advisor of the Black in AI institute.
