Technology

We are both the problem and the solution to fake news, says researcher

It is unlikely that we will ever eliminate false information on social networks, but it is possible to minimize its effects, says Diogo Pacheco, a researcher in computational social science and the modeling of complex social behavior. Beyond reporting, investigating and punishing, everyone needs to be aware that they are a fundamental part of the problem and, therefore, of the solution as well. "Rethink before sharing," urges the researcher.

A computer engineer by training, Pacheco holds a PhD in computer science from the University of Florida. He is a professor at the University of Exeter, in the United Kingdom, where he lives, and a contributor to the Social Media Observatory at Indiana University.

The interview was conducted by email.

It is a fact that the internet has increased the flow of information between people. How does this affect their behavior on and off the networks?

We are social animals, and the structure of our relationships is extremely powerful. The flow has increased because we have more connections and more content in circulation. Paradoxically, this "excess" of information is not promoting more evolved societies, as the growing anti-vaccination movement and distrust of science show.

The problem lies in the new vulnerabilities we are exposed to on social media, such as echo chambers and bots. Echo chambers are the product of polarized social structures that favor radicalism and, at the same time, distort our perception of the plurality and representativeness of opinions.

Bots, in turn, can be used to artificially inflate the popularity of individuals and posts.

It is undeniable that exposure to this content influences the way we behave offline. Offensive comments in WhatsApp groups, for example, have left scars and grudges in real-world relationships.

Does the increase in communication traffic on social networks have an impact, or not, on the construction of critical social thinking?

I think the internet does create communities of political debate. In fact, it is hard to find any community that is not represented on the internet.

Politics today is a much more present topic in everyday discussions and conversations, which is an important first step in building critical thinking. However, for a large portion of the population, politics still resembles rooting for a team or following a religion. For this group, interest in genuine debate is minimal. Spaces that could serve to build these communities are commonly degraded; just look at the comments section of any high-traffic website.

In the past, information was provided by specialists via universities, the mainstream press and institutions that, in some way, had to be held accountable. Today, the sources are more diffuse. What risks are embedded in this new structure?

There are several, starting with information overload and the pressure to keep up with everything happening around us, which leads many people to consume only headlines. We receive far more information than we could ever consume, so knowing how to filter becomes fundamental. That filter goes far beyond simply choosing which article to read; it encompasses the choice of whom we follow.

Perhaps the greatest risk is that we fail to assimilate these structural changes and naively assume that the platforms are responsible for that accountability. The fact is that they are not responsible for generating content, only for providing the means of generation and distribution.

Note also that the decentralization of the media and the democratization of voices on social networks are not intrinsically bad. Everyone can be heard, and minorities are much more visible. However, this new reality hides a very high cost: individual responsibility. We are responsible for what we consume and, above all, for what we share.

No one is immune to receiving and spreading fake news. Are there groups more susceptible to this type of misinformation?

Yes. We published a study in Nature Communications last year in which we investigated these biases using neutral, automated Twitter accounts placed in the context of US politics. In the experiment, all accounts behaved according to the same probabilistic model; the only difference between them was the first "friend" each one chose to follow.

Accounts that initially followed conservatives ended up being exposed to 13 times more low-credibility content than those that followed progressives. The cause is difficult to pin down, but we observe that conservatives form denser and more popular networks, surrounded by more bots, than progressives do. These factors may help explain the excess of disinformation we observed.
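A toy version of that experimental design can be sketched in code. The graph, posting rates and probabilities below are invented for illustration; they are not the study's actual data or model. What the sketch preserves is the design itself: every "drifter" account follows identical probabilistic rules, and the sole difference is its first friend.

```python
import random

# Toy follower graph and per-author rates of low-credibility posting.
# All of these values are invented for illustration only.
FRIENDS = {
    "cons_seed": ["cons_a", "cons_b"],
    "cons_a": ["cons_b", "cons_seed"],
    "cons_b": ["cons_a"],
    "prog_seed": ["prog_a"],
    "prog_a": ["prog_seed"],
}
LOW_CRED_RATE = {
    "cons_seed": 0.4, "cons_a": 0.5, "cons_b": 0.6,
    "prog_seed": 0.05, "prog_a": 0.1,
}

def run_drifter(first_friend, steps=200, rng=None):
    """Simulate one neutral 'drifter' account and count its exposure
    to low-credibility posts. The behavioral rules are identical for
    every drifter; only the first friend differs."""
    rng = rng or random.Random()
    following = [first_friend]
    low_cred_seen = 0
    for _ in range(steps):
        # With small probability, follow a friend-of-a-friend.
        if rng.random() < 0.2:
            candidates = FRIENDS.get(rng.choice(following), [])
            if candidates:
                new_friend = rng.choice(candidates)
                if new_friend not in following:
                    following.append(new_friend)
        # Sample one post from the timeline.
        author = rng.choice(following)
        if rng.random() < LOW_CRED_RATE[author]:
            low_cred_seen += 1
    return low_cred_seen

cons = run_drifter("cons_seed", rng=random.Random(1))
prog = run_drifter("prog_seed", rng=random.Random(1))
print(cons, prog)
```

Because the drifter seeded in the neighborhood where low-credibility posting is more common can only drift further into that neighborhood, it ends up far more exposed, even though both accounts behave identically.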

It is important to emphasize that these biases result from how users use and explore the platform, not from manipulation by the platform or its algorithms.

What are the main mechanisms that make information reach a certain audience?

The main way information circulates is through our personal social networks, that is, through the set of connections we create on social network platforms. The greater the number of followers, the greater the audience. But information travels beyond direct connections, because followers can propagate news to their own followers, and so on. The topology of this complex structure of connections is therefore what allows information to travel across the network and reach certain audiences.
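That mechanism can be illustrated with a minimal sketch (not from the interview; the follower graph and reshare probability are invented) of independent-cascade-style spread: a post reaches all of the author's followers, each newly exposed follower reshares it with some probability, and reshares expose their own followers in turn.

```python
import random

def cascade(followers, seed_user, p_reshare=0.3, rng=None):
    """Return the set of users exposed to a post by `seed_user`.

    `followers` maps each user to the list of accounts that follow
    them. A post reaches all of a resharer's followers, and each
    newly exposed follower reshares with probability `p_reshare`.
    """
    rng = rng or random.Random()
    exposed = set()
    resharers = [seed_user]
    seen = {seed_user}
    while resharers:
        user = resharers.pop()
        for follower in followers.get(user, []):
            exposed.add(follower)
            if follower not in seen and rng.random() < p_reshare:
                seen.add(follower)
                resharers.append(follower)  # spreads to their own followers
    return exposed

# Toy follower graph: A is followed by B and C, B by D, C by E, D by F.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"]}
print(sorted(cascade(graph, "A", rng=random.Random(42))))
```

With no resharing the post reaches only A's direct followers; with certain resharing it reaches everyone downstream in the topology, which is the point the answer above makes about audiences extending beyond direct connections.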

In addition, we may find new information through active searches (such as hashtags) or promoted content (such as trending topics). Bots can interfere inorganically with all of these mechanisms. They impersonate humans and create connections to spread content; they tirelessly post in an attempt to dictate or pollute conversations; and they coordinate attacks that mention popular accounts in order to promote agendas or infiltrate new networks, among other tactics.

Taking the US election campaign, the subject of your recent research, as an example: how do social networks shape voters' positions?

We are easily tempted to follow people who agree with us and to block those who think differently. This gradual movement has increased the segregation of ideas and amplified radicalism, all against a backdrop of information overload in which individuals need to be ever more active in identifying reliable sources.

In this context, we set critical thinking aside and pick a team to support. We go on supporting it blindly and waving its flags beyond reason, as long as we are sure it is us against them.

The problem is even more serious when politicians who are popular and influential on social media try to subvert democratic processes and institutions, promoting hatred, destruction and further segregation.

With the October elections approaching, concerns about the manipulation of information on social networks are growing. How can an already entrenched system for disseminating fake news be dismantled in such a short time?

It is unlikely that we will ever have social networks free of false information, but we can certainly try to minimize the damage it causes. Reporting, investigating and punishing, where appropriate, are best practices in a democracy.

But we need to remind people that they are a fundamental part of the problem: we all suffer the direct consequences, because we are all connected and share the same planet. Viruses will continue to kill people whether or not they believe in their existence, and they can be even more lethal when we disagree. Fortunately, we can also be part of the solution. Let's rethink before reposting.


X-ray

Diogo Pacheco, 39

A computer engineer who graduated from the Federal University of Pernambuco, he holds a PhD in computer science from the University of Florida, is a professor at the University of Exeter (UK) and contributes to the Social Media Observatory at Indiana University (USA)

