Online free speech overhaul in the US could be the biggest since the 1990s

Social networking giants like Facebook, Twitter and Instagram have operated under two crucial principles for years.

The first is that platforms have the power to decide which content to keep online and which to remove, without government oversight. The second is that websites cannot be legally held responsible for most of what their users post online. This protects companies from lawsuits for defamatory speech, extremist content, and real-world harm linked to their platforms.

The US Supreme Court is on the verge of revising those rules, potentially leading to the most important overhaul of the doctrines governing online speech since US authorities and courts decided to bring the web under some regulation in the 1990s.

On Friday, the Supreme Court was due to discuss whether to hear two lawsuits challenging Texas and Florida laws that prohibit online platforms from removing certain political content. In February, the Court will hear a lawsuit challenging Section 230, a 1996 statute that protects platforms from liability for content posted by their users.

The lawsuits could upend the largely hands-off legal stance the United States has taken toward online speech, potentially unsettling the businesses of TikTok, Twitter, Snap and Meta, which owns Facebook and Instagram.

“It’s a time when everything can change,” said Daphne Keller, a former Google attorney who now directs a program at Stanford University’s Cyber Policy Center.

The lawsuits are part of a growing legal dispute over how to deal with harmful online speech. In recent years, as Facebook and other sites have attracted billions of users and become influential channels of communication, the power they wield has come under increasing scrutiny, with growing questions about the influence of social networks on elections, genocides, wars and political debates.

The Supreme Court case challenging Section 230 of the Communications Decency Act is likely to have many knock-on effects. While newspapers and magazines can be sued for the content they publish, Section 230 shields online platforms from lawsuits over most of the content posted by their users. It also protects platforms from lawsuits over their decisions to remove content.

For years, judges have cited the law in dismissing complaints against Facebook, Twitter and YouTube, ensuring that the companies are not held legally responsible for every status update, post and video that goes viral. Critics say the law amounts to a get-out-of-jail-free card for the biggest platforms.

“If platforms are not held even minimally responsible for any of the harm they facilitate, they are basically allowed to be as reckless as possible,” said Mary Anne Franks, a law professor at the University of Miami.

The Supreme Court had previously declined to hear several other challenges to the statute. In 2020, it turned away a lawsuit filed by the families of people killed in terrorist attacks, who accused Facebook of being responsible for promoting extremist content. In 2019, it declined to hear the case of a man who said his ex-boyfriend had used the dating app Grindr to send people to harass him; he sued the app, arguing it was a defective product.

But on Feb. 21, the Supreme Court intends to hear Gonzalez v. Google, a case brought by the family of an American killed in Paris in an attack by followers of the Islamic State. In the lawsuit, the family argues that Section 230 should not protect YouTube from the claim that the video site supported terrorism when its algorithms recommended Islamic State videos to users. The suit contends that such recommendations amount to content produced by the platform itself, which would place them outside the protection afforded by Section 230.

The next day, the Court intends to consider a second case, Twitter v. Taamneh, which addresses a related question: when platforms can be held legally responsible for supporting terrorism under federal law.

Eric Schnapper, a law professor at the University of Washington and one of the attorneys representing the plaintiffs in both cases, said in an interview that the arguments are too narrowly drawn to force changes across large parts of the internet. “The whole system is not going to collapse,” he said.

But Google’s chief attorney, Halimah DeLaine Prado, said in an interview that “any negative decision by the Court in this action, whether narrow in scope or not, will fundamentally alter the way the Internet works,” as it could result in the removal of algorithmic recommendations that are “critical” to the web.

Twitter did not respond to a request for comment.

Tech companies are also closely following the lawsuits filed in Texas and Florida. After Twitter and Facebook barred then-President Donald Trump following the January 6, 2021, attack on the Capitol, both states passed laws prohibiting social media platforms from removing certain content. The Texas law allows users to sue a major online platform if it removes their post because of the “point of view” it expresses. The Florida law imposes fines on platforms that permanently bar the accounts of candidates for public office in the state.

NetChoice and CCIA, trade groups funded by Facebook, Google, Twitter and other tech companies, went to court in 2021 to block the laws. The groups argued that the companies have a constitutional right to decide what content to host on their platforms.

“This is a roundabout way of punishing companies for exercising their First Amendment constitutional rights that others disagree with,” said Chris Marchese, an attorney for NetChoice.

A federal judge in Florida sided with the industry groups, ruling that the law violates the platforms’ First Amendment rights, and the US Court of Appeals for the 11th Circuit upheld most of that decision. But the US Court of Appeals for the 5th Circuit upheld the Texas law, rejecting “the idea that corporations have a broad First Amendment right to censor what people say.”

That split puts pressure on the Supreme Court to intervene. When federal appeals courts reach conflicting answers to the same question, the justices often choose to have the final say in the dispute, said Jeff Kosseff, a professor of cybersecurity law at the US Naval Academy.
