
Facebook does not prioritize Brazil in the fight against fake news, says Frances Haugen of the Facebook Papers


Facebook does not prioritize Brazil in the fight against coordinated electoral disinformation operations, warns data scientist Frances Haugen, the former employee of the internet platform who released company documents, the so-called Facebook Papers, to the press last year.

“I guarantee that there is much less protection in Brazil against attempts to interfere in elections than in the United States,” Haugen said in an exclusive interview with Folha.

Haugen, who was in charge of the civic integrity area at the company, arrives in Brazil this Sunday (3rd) to participate in a public hearing in the Chamber of Deputies on Tuesday (5th) about the fake news bill and to talk with civil society organizations.

The data scientist claims that the mechanism Facebook announced in Brazil in October, which detects electoral disinformation and applies labels directing users to links from the TSE (Superior Electoral Court), is highly inefficient.

“At best, Facebook can detect 20% of disinformation content, but realistically they should be labeling at most 5%,” she said, basing her estimate on numbers she had access to at the company.

She criticizes the lack of investment and transparency in moderation and artificial intelligence in Portuguese. “They only care about content moderation in countries where they are at risk of being regulated, like the United States.”

What is the purpose of your trip to Brazil? Facebook has been telling us for years that the best way to make the platform safe is to moderate content through “magical” artificial intelligence. But they’ve always underinvested in security in languages other than English, and they haven’t invested enough in moderators either.

Brazil is one of the most important democracies in the world, and I assure you that Brazilian Portuguese is one of the languages that lacks the basic security systems it should have.

The Public Prosecutor’s Office (Ministério Público) sent a letter to Facebook and other internet platforms asking how many Portuguese speakers they have on their content moderation teams and requesting information about their artificial intelligence in Portuguese. The companies did not give specifics. Does Facebook reveal these numbers in any country? They have been refusing to provide the most basic information about the performance of their systems to every government in Europe. Governments have asked again and again how many content moderators speak German, how many speak French or Spanish, and the company never responds.

Meta [the company that owns Facebook] doesn’t answer because, if it were honest about how much it invests in security in languages other than English, people would be furious. They have been very negligent. I’ll give you an example that shows why I bet the system is really bad at Brazilian Portuguese.

There are 500 million people who speak Spanish in the world, and only 130 million who speak German. In 2019, Facebook spent about 58% of its budget for fighting hate speech on English moderation, and only about 2% or 3% on German and Spanish.

That’s because they’re more afraid of the Germans passing regulations on Facebook, and that’s deeply unfair.

Do you think that Facebook does not invest enough in moderation in languages other than English, and is not transparent about it, because it is too expensive? If it were based on need and urgency, they would be investing much more in languages with a history of ethnic violence than in English. The real reason is that Facebook allocates its security resources to the countries where it fears being regulated.

This is why they spent around 87% of their disinformation budget on the English language in 2020, even though a much smaller proportion of users speak English. Basically, Facebook tries to minimize risks to the company, not optimize resources for people’s safety.

Can you explain the concrete consequences of not investing in content moderation in other languages? In 2015, amid rising tensions in Myanmar that eventually led to the genocide of the Rohingya, Facebook had only two Burmese-speaking moderators, and the platform was a major vehicle for hate speech against the Muslim minority. Mark Zuckerberg admitted in 2018 that engagement-based content promotion is dangerous. It is dangerous because people are more attracted to extremist content. When Facebook doesn’t invest enough in a language, people end up using the most violent version of Facebook.

Brazil will have presidential elections on October 2. President Jair Bolsonaro has been spreading his version of Donald Trump’s “Stop the Steal,” claiming the elections will be rigged. Do you think Facebook learned from the mistakes it made in the 2016 and 2020 US elections? The team that existed in 2020 to handle elections, called Civic Integrity, was dismantled immediately after that year’s election.

And to have artificial intelligence that guarantees security on the platform, it is necessary to build it for each language, each context. There is currently no transparency regarding what Facebook is and is not doing. I am very skeptical of the idea that Facebook has done the necessary work to ensure the security of the [Brazilian] election.

What should Facebook be doing to ensure the security of Brazil’s election? In the summer of 2020 in the US, Facebook was recommending that users join groups days before the security system could assess whether those groups were safe. There were groups with illegal activities. They were actively promoting these groups [some were about Stop the Steal, the conspiracy theory alleging fraud in the elections].

This was very dangerous, because groups were sending out thousands of invites a day, and the security system couldn’t handle it.

By making small changes, like reducing the number of invitations that a group or user can send per day from 2,000 to 200 [a change made later in the American election], it already becomes significantly more difficult to spread misinformation and circumvent the terms of use.

These tweaks are easy to turn on and off, and Facebook knows it.

And how do we know if Facebook is implementing these tweaks? We have no idea. Facebook has refused to treat the Brazilian government and the Brazilian population as partners. I’m sure Facebook isn’t giving Brazil the same level of priority [compared to the US].

Facebook announced in October last year that it would detect posts with content related to the Brazilian election and place labels directing users to the official website of the Electoral Justice. But we don’t know how efficient this system is, because the company only reveals the total number of labeled posts, not, for example, how many people were exposed to the posts before they were labeled. Can this system detect most electoral disinformation? Taking into account the information we disclosed [in the Facebook Papers], we can assume that the system does not work well.

According to Facebook’s own documents, they were only able to filter between 3% and 5% of hate speech. At best, Facebook can detect 20% of disinformation content. But this number is very optimistic, because it depends on the accuracy of artificial intelligence in other languages.

Realistically, they should be labeling no more than 5% [in Portuguese]. We can’t trust them, and that’s why I’m going to Brazil.

You were in the European Union and the US Congress discussing the regulation of internet platforms. Based on your experience, what kind of regulation is most urgent and should be adopted as quickly as possible everywhere? We definitely need access to Facebook data. Any industry or company in the world with the same level of power as Facebook is more transparent. Take the auto industry, for example: we can buy the cars and crash-test them.

The legislation passed in the EU, the Digital Services Act, establishes access for researchers to data from the platforms. In other words, for now we are still at the stage of negotiating whether we can even buy a Model T; we can’t even touch the car.

What about accountability? Some researchers are proposing to repeal or change Section 230 of the US Communications Decency Act, which exempts platforms from liability for content posted by third parties. I spent a lot of time discussing with regulators what kind of legislation would be most constructive.

It seems obvious to say that Facebook would remove more irregular content if we held it liable. But since artificial intelligence is still very inefficient at understanding certain content, removing that protection would actually make it impossible for platforms to host third-party posts.

Now, I do think Facebook should be held accountable for some of the decisions it makes. The company has had hundreds of opportunities to make more money at the cost of increased hate speech and misinformation, and it has taken them. So it should be held responsible for this pattern of behavior.

Why doesn’t Facebook make these changes to combat misinformation? Certain changes would make people spend less time on the platform, like fewer posts, or log in less often. So the decision is between having 95% less misinformation and having 5% more profit.

Considering your knowledge of how Facebook’s civic integrity team works, do you think the platform is concerned about the Brazilian elections? I think there are people on Facebook paying attention to the elections in Brazil, which, I will repeat, is one of the most important democracies in the world.

But are there enough people following the Brazilian elections and how can we know? There is no transparency, no accountability.

Documents revealed in the Facebook Papers show that, in users’ perception, political statements and messages are the type of disinformation with the greatest reach on the platform in Brazil. What should Facebook do to combat this type of disinformation in Brazil? First, invest in systems for Brazilian Portuguese. I assure you that there is much less protection in Brazil against attempts to interfere in elections than in the United States. This is very serious.

X-RAY

Frances Haugen, 38

Born in the state of Iowa (USA) in 1984, she holds a degree in computer and electrical engineering from Olin College and an MBA from Harvard University. She specializes in algorithmic product management and has worked at Google, Pinterest, Yelp, and Facebook. In 2019, she was recruited to lead the Facebook civic disinformation group, dealing with issues related to democracy and disinformation. She left the company and, in 2021, revealed Facebook documents criticizing the platform’s practices, in the so-called Facebook Papers.


