World

Facebook report warns of circulation of violent content, but company does not prioritize Brazil


An internal Facebook report recommends that the company investigate the wide circulation of violent content on the platform and on its WhatsApp messaging app in Brazil. According to the text, the perception in the country is that violent content circulates far more on Facebook and WhatsApp than on platforms like Instagram, TikTok and Twitter.

“We need to take a serious look at why explicit violence continues to have a wider reach on Facebook and WhatsApp in Brazil,” the document recommends.

The information is contained in the so-called Facebook Papers, internal reports sent to the US Securities and Exchange Commission (SEC) and provided to the US Congress by the lawyers of Frances Haugen, a former employee of the company. Folha is part of the consortium of media outlets that had access to these papers, which were reviewed by lawyers and had portions redacted. Facebook recently renamed the parent company that bundles its platforms to Meta.

The document points out that, in Brazil, there is also a perception that misinformation, incendiary political language, bullying and child exploitation are much bigger problems on Facebook than on other platforms. The company recommends that a team investigate “why the reach [of child exploitation content] is greater [on Facebook] than on other platforms in Brazil and Colombia.”

Dated July 2020, the text states that, in people’s perception, political statements and messages are the type of disinformation with the greatest reach on the platform in Brazil.

Meanwhile, in the US and UK the general view is that news content is the biggest vehicle for civic disinformation, linked to electoral or institutional integrity, on Facebook. In Indonesia, it is false accounts that are perceived as the main driver in the spread of disinformation.

The perception evaluated by the company is that, in Brazil, Facebook is considered the platform on which incendiary political discourse has the greatest reach. In India, that role falls to TikTok; in the US, to Twitter.

However, the report’s recommendation is that the company’s civic integrity division, which deals with electoral disinformation, “continue to focus on misleading content circulating in the US and the UK,” and that the misinformation area in general adopt a broader approach that includes other countries.

One of the main criticisms leveled at Facebook is that the company neglects content moderation in countries seen as less important than the US, UK and European Union nations.

In a statement, Meta said that the results of these surveys “do not measure the prevalence or amount of a certain type of content on our services, but show people’s perception of the content they see on our platforms. These perceptions are important, but they depend on a number of factors, including the cultural context.”

“We report on a quarterly basis the prevalence of materials that violate our policies, and we are always looking to identify and remove more infringing content,” the text concludes.

In early April, a former employee, Sophie Zhang, claimed that the company failed to act against leaders in countries like Honduras who used the platform illegitimately for authoritarian purposes, for example through coordinated inauthentic accounts. According to her, the company, now Meta, decided not to act even after warnings, on the grounds that it was not worth the effort.

A March survey by Agência Lupa found that Brazilian president Jair Bolsonaro (no party) had violated the social network’s rules on publications about the pandemic at least 29 times this year. Even so, he received no punishment: unlike what it did in other countries, the platform neither removed the content nor issued warnings about it.

On October 24th, however, Facebook deleted a livestream by Bolsonaro, broadcast days earlier, in which he read a purported news report claiming that people “vaccinated [against Covid] are developing acquired immunodeficiency syndrome [AIDS].” Doctors say the association between the vaccine and AIDS is false, nonexistent and absurd.

A company spokesman said the reason for the exclusion was the network’s policies related to immunization against the coronavirus. “Our policies do not allow for claims that Covid-19 vaccines kill or can cause serious harm to people.”

Another document made public by Haugen and obtained by Folha shows that Facebook, despite promises to combat disinformation that could threaten elections, still resists paying the political price of enforcing its rules.

An internal report on India’s 2019 general election attests that tools were used to reduce the reach of civic misinformation, such as decreasing the distribution of highly shared posts, strategies based on the experience of the 2018 elections in the US (legislative) and Brazil (presidential). Still, the text cautions that all of this “respected the ‘white list’ policy, to limit public relations risks.”

The so-called “white list” exempts certain public figures from complying with community rules, which prohibit, for example, misinformation about Covid, incitement to violence, non-consensual nudity and threats to electoral integrity. In practice, while “normal” users can be suspended or penalized for violating these rules, list members have their content sent to special review teams, which often end up letting the post stay up.

That is what happened with football player Neymar, who broadcast livestreams on Facebook and Instagram showing nude photos sent by model Najila Trindade, who had accused him of rape. Although the platform has clear rules against non-consensual exposure of nudity, so-called “revenge pornography,” the athlete’s videos stayed up for more than 24 hours on both platforms, with more than 50 million views, before being taken down. The error was pointed out in a Facebook Papers report.

Concern with the company’s image and the need for public relations offensives appear again in the report that analyzes users’ perception of Facebook. The document shows that, in Brazil, the impression is that hate speech and bullying circulate more widely on Facebook than on other platforms (Twitter comes second). In India, TikTok tops that list; in the US, Twitter.

The recommendation regarding the issue is “to redouble public relations efforts around hate speech and bullying on Facebook”.

To study disinformation trends in Brazil, Facebook analysts compiled a series of news reports shared on the platform. The objective was to understand what could be driving the sharing of false or distorted content on the platform during the period analyzed (March and April 2020).

The list includes episodes in which Bolsonaro attacked the press and at least three occasions in which he downplayed or denied the seriousness of the pandemic.

In addition to texts from outlets such as the British newspaper The Guardian and the Reuters news agency, analysts included a Folha report on the decision by Facebook, Twitter and Instagram to delete a video in which Bolsonaro appears strolling around Brasília and drawing crowds. At the time, the companies concluded that the content spread misinformation and could “cause real harm to people.”

In another document revealed by Haugen, the company defines “coordinated social harm” as “an activity coordinated or directed by a state or hostile agents with the intention of causing serious social harm” and proposes a scale of the harm caused and the corresponding punishments.

Among these harms would be “attempts to delegitimize the electoral process or the result of fair elections, with coordination or incitement to overthrow a government or an institution, based on misinformation” — Ethiopia and the US are cited as examples.

This category, according to the Facebook text, includes pages linked to the Golden Order of Brazil, described as a “military-supported organization that combines evangelical religion, pro-Bolsonaro content, conspiracy theories, defense of the military dictatorship, and pro-gun content.”

These pages, according to the document, show “high coordination, mass posting and people with multiple profiles amplifying content.” The recommendation is that the pages be removed or have their reach restricted.

Contacted for comment, Meta sent a statement. “Billions of people around the world, including in Brazil, use our services because they see them as useful and have good experiences. We’ve invested $13 billion in safety globally since 2016 (we’re on track to invest $5 billion this year alone), and we have more than 40,000 people working to keep people safe in our apps,” the text says.

“We also invest in internal research to help proactively identify where we can improve our products and policies.”
