The Internet Still Works: Yelp Protects Consumer Reviews

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.

Yelp hosts millions of reviews written by internet users about local businesses. Most reviews are positive, but over the years, some businesses have tried to pressure Yelp to remove negative reviews, including through legal threats. Since its founding more than two decades ago, Yelp has fought major legal battles to defend reviewers’ rights and preserve the legal protections that allow consumers to share honest feedback online.

Aaron Schur is General Counsel at Yelp. He joined the company in 2010 as one of its first lawyers and has led its litigation strategy for more than a decade, helping secure court decisions that strengthened legal protections for consumer speech. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team. 

Joe Mullin: How would you describe Section 230 to a regular Yelp user who doesn’t know about the law?   

Aaron Schur: I'd say it is a simple rule that, generally speaking, when content is posted online, any liability for that content is with the person who created it, not the platform that is displaying it. That allows Yelp to show your review and keep it up if a business complains about it. It also means that we can develop ways to highlight the reviews we think are most helpful and reliable, and mitigate fake reviews, without creating liability for Yelp, because we're allowed to host third-party content.

The political debate around Section 230 often centers on the behavior of companies, especially large companies. But we rarely hear about users, even though the law also applies to users. What is the user story that is getting lost?

Section 230 at heart protects users. It enables a diversity of platforms and content moderation practices—whether it's reviews on Yelp, videos on another platform, whatever it may be. 

Without Section 230, platforms would face heavy pressure to remove consumer speech when threatened with legal action—and that harms users directly. Their content gets removed. It also harms the greater number of users who would access that content.

The focus on the biggest tech companies, I think, is understandable but misplaced when it comes to Section 230. We have tools that exist to go after dominant companies, both at the state and the federal level, and Congress could certainly consider competition-based laws—and has, over the last several years. 

Tell me about the editorial decisions that Yelp makes regarding the highlighting of reviews, and the weeding out of reviews that might be fake.  

Yelp is a platform where people share their experiences with local businesses, government agencies, and other entities. People come to Yelp, by the millions, to learn about these places.

With traffic like that come incentives for bad actors to game the system. Some unscrupulous businesses try to create fake reviews, or compensate people to write reviews, or ask family and friends to write reviews. Those reviews will be biased in a way that won’t be transparent. 

Yelp developed an automated system to highlight reviews we find most trustworthy and helpful. Other reviews may be placed in a “not recommended” section where they don’t affect a business’s overall rating, but they’re still visible. That helps us maintain a level playing field and keep user trust. 

Tell me what your process for handling complaints about user reviews looks like.

We have a reporting function for reviews. Those reports get looked at by an actual human, who evaluates the review and looks at data about it to decide whether it violates our guidelines. 

We don't remove a review just because someone says it's “wrong,” because we can't litigate the facts in your review. If someone says “my pizza arrived cold,” and the restaurant says, no, the pizza was warm—Yelp is not in a position to adjudicate that dispute. 

That's where Section 230 comes in. It says Yelp doesn’t have to [decide who’s right]. 

What other types of moderation tools have you built? 

Any business, free of charge, can respond to a review, and that response appears directly below it. They can also message users privately. We know when businesses do this, it’s viewed positively by users.

We also have a consumer alert program, where members of the public can report businesses that may be compensating people for positive reviews—offering things like free desserts or discounted rent. In those cases, we can place an alert on the business’s page and link to the evidence we received. We also do this when businesses make certain types of legal threats against users.

It's about transparency. If a business's rating is inflated because that business is threatening to sue reviewers who rate it less than five stars, consumers have a right to know what's happening.

How are international complaints, where Section 230 doesn’t come into play, different? 

We have had a lot of matters in Europe, in particular in Germany. It’s a different system there—it’s notice-and-takedown. They have a line of cases that require review sites to basically provide proof that the person was a customer of the business. 

If a review was challenged, we would sometimes ask the user for documentation, like an invoice, which we would redact before providing it. Often, they would do that, in order to defend their own speech online. Which was surprising to me! But they wouldn’t always—which shows the benefit of Section 230. In the U.S., you don’t have this back-and-forth that a business can leverage to get content taken down. 

And invariably, the reviewer was a customer. The business was just using the system to try to take down speech. 

Yelp has been part of some of the most important legal cases around Section 230, including some that hadn't happened yet when we spoke in 2012. What happened in the Hassell v. Bird case, and why was that important for online reviewers?

Hassell v. Bird was a case where a law firm got a default judgment against an alleged reviewer, and the court ordered Yelp to remove the review—even though Yelp had not been a party to the case.

We refused, because the order violated Section 230, due process, and Yelp's First Amendment rights as a publisher. But the trial court and the appellate court both ruled against us, allowing a side-stepping of Section 230.

The California Supreme Court ultimately reversed those rulings, and recognized that plaintiffs cannot accomplish indirectly [by suing a user and then ordering a platform to remove content] what they could not accomplish directly by suing the platform itself.

We spoke to you in 2012, and the landscape has really changed. Section 230 is really under attack in a way that it wasn’t back then. From your vantage point at Yelp, what feels different about this moment? 

The biggest tech companies got even bigger, and even more powerful. That has made people distrustful and angry—rightfully so, in many cases. 

When you read about the attacks on 230, it’s really politicians calling out Big Tech. But what is never mentioned is little tech, or “middle tech,” which is how Yelp bills itself. If 230 is weakened or repealed, it’s really the biggest companies, the Googles of the world, that will be able to weather it better than smaller companies like Yelp. They have more financial resources. It won’t actually accomplish what the legislators are setting out to accomplish. It will have unintended consequences across the board. Not just for Yelp, but for smaller platforms. 

This interview was edited for length and clarity.

  •  

On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won't end the dominance of the current handful of large tech companies—it would cement their monopoly power.

The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. That is especially true because the speech problems with alternatives to Section 230's immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users' lawful speech.

Let's be clear: EFF defends Section 230 because it is the best available system to protect users' speech online. By immunizing intermediaries for their users' speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people's speech online, such as when they reshare another user's post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230's limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it's the speaker who should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services' own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries would always be liable for their users' speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of the social media and web hosting services we're used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

Another alternative: Imposing legal duties on intermediaries, such as requiring that they act "reasonably" to limit harmful user content. This would likely result in platforms monitoring users' speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users' speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They're the ones that would have the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there's no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech (https://www.eff.org/takedowns) based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.

The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

  •  

Op-ed: Weakening Section 230 Would Chill Online Speech

(This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)

Section 230, “the 26 words that created the internet,” was enacted 30 years ago this week. It was no rush-job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.

The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.

The merits of immunity, both for internet users who rely on intermediaries (from ISPs to email providers to social media platforms) and for internet users who are themselves intermediaries, are readily apparent when compared with the alternatives.

One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.

Another option: giving protection to intermediaries only if they exercise a specified duty of care, such as where an intermediary would be liable if they fail to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.

Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.

All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it's not worth assessing the risk of liability or defending the user's speech. No intermediary can be expected to champion someone else's free speech at its own considerable expense.

Nor is the United States the only government to eschew "upload filtering," the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.

The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.

This law isn't a shield for "big tech." Its ultimate beneficiaries are all of us who want to post things online without having to build the services ourselves, and who want to read and watch the content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it.

For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services from publishing protected speech that some find undesirable. They want platforms to publish less than what they would otherwise choose to publish, even when that speech is protected and nonactionable.

When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.

Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.

  •  

Congress Wants To Hand Your Parenting to Big Tech

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing today on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control. 

Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.  

Kids Under 13 Are Already Banned From Social Media

One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of the bill's sponsors, one might think that's a big change, and that today's rules let kids wander freely onto social media sites. But that's not the case.

Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you're under 13, you're not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA, the Children's Online Privacy Protection Act.

Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.

Most Social Media Use By Younger Kids Is Family-Mediated 

If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help. 

A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.) 

Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.

A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform. 

The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together. 

KOSMA Forces Platforms To Override Families 

This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.

Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13. 

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves. 

Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video—I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.

Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.

Your Family, Their Algorithms

KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t. 

These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems. 

What Families Lose 

This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.

These aren't loopholes. They're how parenting works in the digital age. Parents increasingly filter, supervise, and usually decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust.

What Can Be Done Instead

KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.

If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.

But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.

Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.

  •