Hate post surveillance does not understand Portuguese, suggests former Facebook employee

Data scientist Frances Haugen, a former Facebook employee who released company documents to the press after leaving her post in May this year, criticizes the lack of transparency around the company's investments to combat disinformation and hate speech in Latin America and in countries such as Myanmar and India.

Facebook has artificial intelligence systems and human teams that work to detect and remove posts contrary to the social network’s guidelines. Documents revealed by Haugen so far show the platform’s difficulty in prioritizing resources for countries outside of North America.

India, one of the main markets of Meta (the new name of Facebook Inc.), does not receive enough resources for the country's 22 official languages to be recognized, for example. For Haugen, Brazil could be a victim of the same problem.

“One of the things that most disseminates violent content is the fact that people share it in large groups. It goes viral that way. I suspect that Brazilian Portuguese is not well supported by security systems,” she told Folha by email.

She was referring to the artificial intelligence systems that detect harmful and potentially viral content on the platform.

In a statement, Meta says it has Brazilian specialists based in different locations across the country and that it has invested significantly in people and technology to enforce its policies in dozens of languages, including Portuguese.

“We expect to invest more than $5 billion globally in safety and integrity in 2021 alone, and we have 40,000 people dedicated to these areas,” the company adds.

An internal Facebook report, released on Sunday (6), recommends examining “seriously why explicit violence continues to have a wider reach on Facebook and WhatsApp in Brazil”.

The information is part of the so-called Facebook Papers, internal reports sent to the US Securities and Exchange Commission (SEC) and provided to the US Congress in redacted form, with names hidden, by Frances Haugen's lawyers. Folha is part of the consortium of media outlets that had access to these documents.

Facebook's scant investment in other countries, a topic already raised by activists and confirmed in the documents leaked by Haugen, is considered one of the causes of the wide dissemination of violent, hateful or misleading content in less developed countries.

Until 2020, the company did not have algorithms capable of detecting misinformation in Burmese, Myanmar's official language, according to one document. Another file, reported by The New York Times, shows that 87% of the company's global budget for time spent classifying misinformation goes to the United States.

Other documents accessed by Folha show that only 0.2% of hate speech removals in Afghanistan were done automatically and that reviewing hate content overall costs Facebook about $2 million a week.

Before the British Parliament, Haugen criticized Facebook statements such as “we support 50 languages,” when most of those languages, she says, have only a fraction of the safety systems that American English does.

In a report on Facebook's competitiveness, covered by Folha on Sunday, the company says it may “neglect medium-sized countries, especially in Europe and Latin America.”

Brazil appears in a list of countries considered priority for actions to combat harmful use of the platform in 2021. It is in the same group as Nicaragua, Saudi Arabia, Mexico, Turkey, Argentina, Iran, Indonesia, Germany, France and Honduras.

Issue prompts action by Federal Prosecutor's Office in Brazil

Brazilian authorities express the same concern about investment in artificial intelligence tools dedicated specifically to Brazil.

The Federal Prosecutor's Office (MPF) opened an inquiry on Tuesday (9) asking Facebook to report what percentage of content removals on the platform result from human analysis, whether the network uses artificial intelligence adapted to Portuguese in detecting and removing content, and how much it has invested over the last three years to “mitigate organized practices of production and circulation of content that convey disinformation and digital violence in Brazil.”

The inquiry is expected to examine the conduct of the main social networks and messaging applications operating in Brazil in the fight against false news and digital violence.

In number of daily active Facebook users, Brazil ranks third in the world, with 97 million, behind only India, with 186 million, and the United States, with 167 million, according to another internal company document, this one from 2019.

Other side

In a statement, Meta says that Brazil is a priority country for the group. In addition to the investment planned for 2021, it highlights that since 2017 it has removed more than 150 networks, originating in more than 50 countries including Brazil, that tried to manipulate public debate.

“Our actions over time show that we are fighting abuses on our platforms in Brazil,” the company says.

This week, the company announced that it has removed 1 million posts containing misinformation about Covid-19 from Facebook and Instagram since the start of the pandemic. Like other platforms, Facebook adopted more robust content removal guidelines for the duration of the health emergency.
