Society should propose regulation for social networks, says British lawyer


To anyone who fears that new regulations for the internet will prove, in practice, to be a kind of censorship, British lawyer Jamie Susskind issues a warning.

“Freedom of expression is already under threat, because it is in the hands of private companies that do not need to respect laws when deciding what content will be seen and who will be authorized to speak,” says the bestselling author of “Future Politics: Living Together in a World Transformed by Tech” in an interview with Folha.

In “The Digital Republic: On Freedom and Democracy in the 21st Century”, a new book released in the US and UK, Susskind proposes that society, democratically and jointly with government and companies, establish moderation and operating rules for the platforms; today, he says, social networks decide, implement, and monitor rules unilaterally.

He also argues that big tech companies should answer to a regulatory agency and that their executives should be bound by codes of conduct.

You state that internet platforms should not be responsible for deciding what kind of content and speech is or is not allowed online, or what counts as an acceptable algorithm. In your book, you say that regulation should determine this, but critics point to the danger of the state becoming a Big Brother censor. Is it possible to regulate networks without stifling freedom of expression?

Freedom of expression is already under threat, because it is in the hands of private companies that do not have to respect laws when deciding what content will be seen, who will be allowed to speak, what can be said online, and what will be promoted or muted.

The question is not “Should we have rules that restrict free speech?”, because we already do. The question is who should write those rules, and that shouldn’t be decided by companies alone or by the government alone. We need a democratic process to set the basic operating parameters of the platforms. Society should be able to determine, through normal political means, its goals with regard to the platforms, and the platforms would then implement those goals however they saw fit.

Companies would not need to prove that they made the correct decisions on content moderation, only that their decisions were compatible with the rules established by society. That is very different from having rules decided, implemented, and monitored unilaterally by the platforms.

But I recognize that the system I propose is better suited to some countries than others. In the UK, for example, there is mature telecommunications regulation, and I have a high degree of confidence that regulation can be implemented without the risk of the government trampling on individual freedoms to silence political opponents. I would have less confidence in a country with a history of authoritarianism or a more unstable democracy, like India.

Some harmful content online is not necessarily illegal. How can freedom of expression be reconciled with an environment that is not toxic?

Your question assumes that categories of “illegal speech” are universal and timeless, but I don’t think that’s the case. Societies decide the boundaries of legal discourse based on their own norms, customs, and beliefs, and on what they perceive as a threat. Thirty years ago, content glorifying anorexia or self-harm was marginal; it would have been hard for a teenager to find. Today the context is different, and on many platforms that type of content is algorithmically micro-targeted at vulnerable adolescents.

The point is not what to do with legal-but-harmful content. The question is: should we rethink what is legal? The discussion should not be about making certain content illegal and removing it, but about determining that certain speech should not be promoted, amplified, or targeted at certain groups. It’s a new category.

Should technology professionals be subject to regulation and oversight, as doctors and lawyers are?

Certainly. Isn’t it strange that your neighborhood pharmacist is subject to more rules of professional conduct than the people who run platforms capable of affecting the democratic process of hundreds of millions of people? It’s an anachronism.

Doctors must uphold certain standards of integrity and dignity, just as lawyers and bankers do. We don’t limit ourselves to hoping that bankers won’t lose our money; we have laws for that, and we punish those who break them. We adopt laws and regulation wherever there is a systematic imbalance of power. I don’t see why people who own and control very powerful technology shouldn’t have duties to those affected by it.

Is it right for a company to decide on its own whether a person like Donald Trump, with more than 80 million followers, should be banned from a platform?

It’s inevitable that companies will make decisions about what happens on their platforms, and I wouldn’t want to see any government meddling in decisions about particular users. But control of the platforms must somehow be constitutionalized, so that it is not exercised in an arbitrary and random way. In the case of Twitter, I don’t think it should simply be Elon Musk’s choice whether to allow Trump back. There should be consistent and transparent rules for this.

What would binding rules for algorithms look like?

Different countries will have different ideas about this. The principles that would guide regulation in France would differ from those applied in the US, even though both countries have the rule of law and a tradition of defending freedom of expression. Claiming that one approach is right for all countries is problematic, because it implicitly assumes that the US view should be imposed on the rest of the world.

That said, there are basic principles that platforms should follow in any country: reduce foreign interference in political processes, implement measures to curb the spread of political disinformation, and encourage speech that contributes to democratic debate. Rather than platforms always encouraging sensationalism, outrage, and conflict, I wish they were engineered to encourage a more productive exchange of views. They should, at the very least, have some obligation to keep the online environment from turning into a sewer.

Every time regulatory authorities ask platforms for transparency about algorithms or moderation teams, the platforms say it’s a matter of trade secrets. Are technology companies different from pharmaceutical companies, which are accountable to the FDA (US) and Anvisa (Brazil), or from the financial sector, which answers to the SEC and the CVM?

Big tech companies have been saying for years that they are special, but countless industries that deal with confidential information and sensitive data are subject to regulation. Companies use privacy as an excuse to try to avoid regulation. That’s cynical, considering they use our data in every way the law allows to make money.

Many academics say that traditional antitrust law, which uses price to determine whether market concentration hurts consumers, is ill-suited to dealing with big tech. Do you agree?

Yes. In the book I propose a revamped approach to antitrust law, so that it is not just a cold instrument of economic analysis. I argue that legislation should assess concentrations of power through a broader social and political lens. If a company buys yet another competitor, we shouldn’t ask only whether the acquisition will lead to a price hike, but also whether, for example, it might harm media diversity or the health of democracy, or pose systemic threats to human rights.

You write about the “consent trap”: the fact that everyone clicks the “I agree” box on data use without reading the terms of use or customizing cookies. How does this protect businesses rather than consumers?

No one reads terms of use. Those who do read them understand almost nothing, because they are written in legalese. And even for those who understand them, the terms say almost nothing: “We will use your data and share it with third parties.” What does that mean? Rather than rebalancing the relationship between platforms and individuals, the consent trap consolidates the power of the networks.

We need a legal mechanism that gives individuals more control over their lives, not less. This idea of consent also makes no sense in an age of artificial intelligence. How can you allow inferences to be made about you from data without knowing what they will be, since data only reveals things about you when it is combined with massive amounts of other people’s data? Consent is a noble idea, but it is no longer appropriate for mediation between the powerful and the less powerful.

You suggest that larger platforms should benefit from the immunity conferred by Section 230 of the Communications Decency Act in the US, or by the Marco Civil da Internet in Brazil, only if they meet certain requirements. How would that work?

It is impractical to expect regulators to get involved in the details of individual decisions. Instead, I think the larger platforms should put in place adequate systems to comply with certain democratically decided rules. It is inevitable that things will go wrong from time to time, but companies would have to show that they have a system in place to deal with this in order to be entitled to immunity.

Profile | Jamie Susskind, 33

British, he holds a degree in history and political science from Oxford and is a lawyer, lecturer, and author. His main books are “Future Politics: Living Together in a World Transformed by Tech” (2019) and the recently released “The Digital Republic: On Freedom and Democracy in the 21st Century”.
