Friday, November 22, 2024

The Supreme Court is about to hear a case that could change the internet as we know it

For years, Washington has been stumped about how to regulate the internet—or if it should even try. But the Supreme Court is set to hear a case next week that could completely transform our online world as we know it. 

On Tuesday, justices will hear arguments in Gonzalez v. Google, a case that challenges Section 230 of the Communications Decency Act, a 1996 law that grants internet platforms immunity for most third-party content posted on their websites. The arguments will revolve around tech algorithms, which the plaintiffs say boosted extremist messaging in the lead-up to a terrorist attack. They argue that Section 230’s protections should not apply to the content a company’s algorithm recommends online, and that Google is therefore legally liable for the extremist videos published on its YouTube service.

While the hearing is set for next week, a resolution isn’t expected until June.

Section 230 is the reason why companies like Facebook or Twitter are not liable for content their users create, and why a website is not legally at fault if someone posts a defamatory comment on it. But the law has come under fire in recent years from critics who say it enables misinformation and protects sites known for spreading hateful and extremist rhetoric. However, experts also fear rollbacks to Section 230 could go too far and irreparably damage the free speech foundations upon which the internet was built.

Recent A.I. developments, like ChatGPT, have added a new dimension to the fight over Section 230, as chatbots that have so far proven unreliable at getting their facts right could soon be protected by the law.

Some experts say the Supreme Court’s decisions on these cases could represent a unique opportunity to set the rules for Section 230, but others also warn that going too far could gut 230 entirely and make our relationship with the internet scarcely recognizable.

“The more the digital world is interwoven with our physical world, the more urgent this will become,” Lauren Krapf, lead counsel for technology policy and advocacy at the Anti-Defamation League, an anti-discrimination group, told Fortune.

The backbone of the modern web

Section 230 has allowed the internet to function the way it does today by letting websites host most content without fear of legal liability, thanks to one highly influential 26-word provision: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The Electronic Frontier Foundation, a digital rights organization, says that without Section 230, “the free and open internet as we know it couldn’t exist,” while the law’s provision protecting internet companies is often referred to as “the 26 words that created the internet.”

But those words, written more than a quarter century ago, have come under scrutiny in recent years, and politicians on both sides of the aisle have targeted Section 230 as part of a larger effort to regulate the internet. Even tech leaders including Meta CEO Mark Zuckerberg have proposed that Congress should require platforms to demonstrate they have systems in place to identify unlawful content. But how and to what extent the law should be refined has so far escaped consensus.

“We are at a point where Congress really does need to update Section 230,” Krapf said. Her organization has filed an amicus brief in the Google case on the plaintiffs’ behalf, urging the Supreme Court to consider the ramifications of Section 230’s immunity provision.

But given how far-reaching the effects of Section 230 are, reaching an agreement on how best to revise it is no easy task.

“Because [Section 230] is a high-stakes piece to the puzzle, I think there’s a lot of different viewpoints on how it should be updated or reformed and what we should do about it,” Krapf said. 

The cases

What makes Gonzalez v. Google different from previous attempts to refine Section 230 is that, for the first time, the issue is being brought before the Supreme Court instead of Congress, and the ruling could set a precedent for future interpretations of the law.

At the core of the case is the spread of pro-terrorist messaging on online platforms. The Gonzalez family alleges that the Google-owned service YouTube was complicit in radicalizing ISIS combatants in the buildup to a 2015 terrorist attack in Paris that killed 130 people, including 23-year-old Nohemi Gonzalez, an American student who was studying abroad. A lower court ruled in Google’s favor, citing Section 230’s protections, and the Gonzalez family turned to the Supreme Court, arguing that Section 230 covers content, but not the algorithmic content recommendations in question.

Google’s isn’t the only case presenting a potential challenge to Section 230 next week. A related case the court will hear Wednesday, Twitter v. Taamneh, was brought by the relatives of Jordanian citizen Nawras Alassaf, who was one of 39 people killed in 2017 in an ISIS-affiliated mass shooting at an Istanbul nightclub.

Alassaf’s family sued Twitter, Google, and Facebook for failing to control pro-terrorist content on their websites, a lawsuit that a lower court allowed to move forward. Twitter then argued that letting the lawsuit proceed was an unconstitutional expansion of the Anti-Terrorism Act and appealed the decision to the highest court. The lower court never reached a decision on the merits of the case, so Section 230 was never addressed, but it will likely come up in the Supreme Court hearing next week.

Targeting recommendations could be a slippery slope

The Gonzalez family is asking the Supreme Court to clarify whether YouTube’s recommendations fall outside Section 230’s protections, and exceptions to the law are not unheard of.

In 2018, then-President Donald Trump signed off on a carveout to the law that exposes online sites to liability for content involving sex trafficking. But the difference in the Google case is that the plaintiffs are not targeting specific content, but rather the recommendations generated by the company’s algorithms.

“Their claim is their lawsuit targets YouTube’s recommendations, not the content itself, because if they were targeting the content itself, Section 230 clearly comes into play and a lawsuit gets thrown out of court,” Paul Barrett, deputy director and senior research scholar at NYU’s Stern Center for Business and Human Rights, told Fortune.

Virtually every online platform, including Google, Twitter, and Facebook, uses algorithms to generate curated content recommendations for users. But Barrett argued that targeting recommendations instead of content could be a slippery slope for future lawsuits against online platforms, given how recommendation algorithms have become core to everything tech companies do.

Barrett and the center he is affiliated with have also filed an amicus brief with the court, which acknowledges Section 230’s need for modernization but argues that the law remains a crucial pillar of free speech online, and that an extreme ruling opening the door for algorithms to be targeted instead of content could gut those protections.

“A recommendation is not some separate, distinct, and unusual activity for YouTube and the videos that it recommends. Recommendation is, in fact, what social media platforms do in general,” he said.

If the Supreme Court rules in favor of the Gonzalez family, it could leave Section 230 vulnerable to future lawsuits targeting online platforms’ algorithms rather than their content, Barrett said, adding that in an extreme case, it could cascade into a complete erosion of the protections the law affords tech companies.

“I think what you would see is a very dramatic constriction or reduction of what’s available on most platforms, because they just wouldn’t want to take the risk,” he said. Instead, he said, online platforms would self-censor, hosting significantly less “lawsuit-bait” content.

Such an extreme gutting of Section 230 would make life much more difficult for large companies, but it could pose an existential threat to smaller online platforms that are primarily crowd-sourced and have fewer resources to fall back on, including popular sites like Wikipedia, Barrett said.

“We wanted to raise the alarm that: ‘Hey, if you go down this path you may be doing more than you think you’re doing,’” Barrett said.

Both Barrett and Krapf agreed that Section 230 is likely long overdue for refinement, and that the need is becoming more urgent as technology intertwines itself ever more with our lives. Krapf described the court hearing as a good opportunity to get some clarity on Section 230, part of a larger need for Congress to regulate tech companies’ behavior and ensure consumers are protected in the digital world as well.

“I think that the urgency is just continuing to build on itself,” Krapf said. “We’ve seen the reliance on our digital world really come into its own for the last several years. And then now with a new wave of technological advances coming front and center, we need better rules of the road.”

