The Internet Still Works: Yelp Protects Consumer Reviews

9 February 2026 at 17:23

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.

Yelp hosts millions of reviews written by internet users about local businesses. Most reviews are positive, but over the years, some businesses have tried to pressure Yelp to remove negative reviews, including through legal threats. Since its founding more than two decades ago, Yelp has fought major legal battles to defend reviewers’ rights and preserve the legal protections that allow consumers to share honest feedback online.

Aaron Schur is General Counsel at Yelp. He joined the company in 2010 as one of its first lawyers and has led its litigation strategy for more than a decade, helping secure court decisions that strengthened legal protections for consumer speech. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team. 

Joe Mullin: How would you describe Section 230 to a regular Yelp user who doesn’t know about the law?   

Aaron Schur: I'd say it is a simple rule that, generally speaking, when content is posted online, any liability for that content is with the person who created it, not the platform that is displaying it. That allows Yelp to show your review and keep it up if a business complains about it. It also means that we can develop ways to highlight the reviews we think are most helpful and reliable, and mitigate fake reviews, without creating liability for Yelp, because we're allowed to host third-party content.

The political debate around Section 230 often centers on the behavior of companies, especially large companies. But we rarely hear about users, even though the law also applies to them. What is the user story that is getting lost? 

Section 230 at heart protects users. It enables a diversity of platforms and content moderation practices—whether it's reviews on Yelp, videos on another platform, whatever it may be. 

Without Section 230, platforms would face heavy pressure to remove consumer speech when we’re threatened with legal action—and that harms users, directly. Their content gets removed. It also harms the greater number of users who would access that content. 

The focus on the biggest tech companies, I think, is understandable but misplaced when it comes to Section 230. We have tools that exist to go after dominant companies, both at the state and the federal level, and Congress could certainly consider competition-based laws—and has, over the last several years. 

Tell me about the editorial decisions that Yelp makes regarding the highlighting of reviews, and the weeding out of reviews that might be fake.  

Yelp is a platform where people share their experiences with local businesses, government agencies, and other entities. People come to Yelp, by the millions, to learn about these places.

With traffic like that come incentives for bad actors to game the system. Some unscrupulous businesses try to create fake reviews, or compensate people to write reviews, or ask family and friends to write reviews. Those reviews will be biased in a way that won’t be transparent. 

Yelp developed an automated system to highlight reviews we find most trustworthy and helpful. Other reviews may be placed in a “not recommended” section where they don’t affect a business’s overall rating, but they’re still visible. That helps us maintain a level playing field and keep user trust. 

Tell me what your process for handling complaints about user reviews looks like. 

We have a reporting function for reviews. Those reports get looked at by an actual human, who evaluates the review and looks at data about it to decide whether it violates our guidelines. 

We don't remove a review just because someone says it's “wrong,” because we can't litigate the facts in your review. If someone says “my pizza arrived cold,” and the restaurant says, no, the pizza was warm—Yelp is not in a position to adjudicate that dispute. 

That's where Section 230 comes in. It says Yelp doesn’t have to [decide who’s right]. 

What other types of moderation tools have you built? 

Any business, free of charge, can respond to a review, and that response appears directly below it. They can also message users privately. We know that when businesses do this, it’s viewed positively by users.

We also have a consumer alert program, where members of the public can report businesses that may be compensating people for positive reviews—offering things like free desserts or discounted rent. In those cases, we can place an alert on the business’s page and link to the evidence we received. We also do this when businesses make certain types of legal threats against users.

It’s about transparency. If a business’s rating is inflated, because that business is threatening reviewers who rate less than five stars with a lawsuit, consumers have a right to know what’s happening. 

How are international complaints, where Section 230 doesn’t come into play, different? 

We have had a lot of matters in Europe, in particular in Germany. It’s a different system there—it’s notice-and-takedown. They have a line of cases that require review sites to basically provide proof that the person was a customer of the business. 

If a review was challenged, we would sometimes ask the user for documentation, like an invoice, which we would redact before providing it. Often, they would do that, in order to defend their own speech online. Which was surprising to me! But they wouldn’t always—which shows the benefit of Section 230. In the U.S., you don’t have this back-and-forth that a business can leverage to get content taken down. 

And invariably, the reviewer was a customer. The business was just using the system to try to take down speech. 

Yelp has been part of some of the most important legal cases around Section 230, and some of those didn’t exist when we spoke in 2012. What happened in the Hassel v. Bird case, and why was that important for online reviewers?

Hassel v. Bird was a case where a law firm got a default judgment against an alleged reviewer, and the court ordered Yelp to remove the review—even though Yelp had not been a party to the case. 

We refused, because the order violated Section 230, due process, and Yelp’s First Amendment rights as a publisher. But the trial court and the appellate court both ruled against us, allowing a side-stepping of Section 230. 

The California Supreme Court ultimately reversed those rulings, and recognized that plaintiffs cannot accomplish indirectly [by suing a user and then ordering a platform to remove content] what they could not accomplish directly by suing the platform itself.

We spoke to you in 2012, and the landscape has really changed. Section 230 is really under attack in a way that it wasn’t back then. From your vantage point at Yelp, what feels different about this moment? 

The biggest tech companies got even bigger, and even more powerful. That has made people distrustful and angry—rightfully so, in many cases. 

When you read about the attacks on 230, it’s really politicians calling out Big Tech. But what is never mentioned is little tech, or “middle tech,” which is how Yelp bills itself. If 230 is weakened or repealed, it’s really the biggest companies, the Googles of the world, that will be able to weather it better than smaller companies like Yelp. They have more financial resources. It won’t actually accomplish what the legislators are setting out to accomplish. It will have unintended consequences across the board. Not just for Yelp, but for smaller platforms. 

This interview was edited for length and clarity.

The Internet Still Works: Wikipedia Defends Its Editors

9 February 2026 at 16:53

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

A decade ago, Wikimedia Foundation, the nonprofit that operates Wikipedia, received 304 requests to alter or remove content over a two-year period, not including copyright complaints. In 2024 alone, it received 664 such takedown requests. Only four were granted. As complaints over user speech have grown, Wikimedia has expanded its legal team to defend the volunteer editors who write and maintain the encyclopedia. 

Jacob Rogers is Associate General Counsel at the Wikimedia Foundation. He leads the team that deals with legal complaints against Wikimedia content and its editors. Rogers also works to preserve the legal protections, including Section 230, that make a community-governed encyclopedia possible. 

Joe Mullin: What kind of content do you think would be most in danger if Section 230 was weakened? 

Jacob Rogers: When you're writing about a living person, if you get it wrong and it hurts their reputation, they will have a legal claim. So that is always a concentrated area of risk. It’s good to be careful, but I think under a looser liability regime, people could become too careful—so careful they couldn’t write important public information. 

Current events and political history would also be in danger. Writing about images of Muhammad has been a flashpoint in different countries, because depictions are religiously sensitive and controversial in some contexts. There are different approaches to this in different languages. You might not think that writing about the history of art in your country 500 years ago would get you into trouble—but it could, if you’re in a particular country, and it’s a flashpoint. 

Writing about history and culture matters to people. And it can matter to governments, to religions, to movements, in a way that can cause people problems. That’s part of why protecting editors’ pseudonymity and their ability to work on these topics is so important. 

If you had to describe to a Wikipedia user what Section 230 does, how would you explain it to them? 

If there was nothing—no legal protection at all—I think we would not be able to run the website. There would be too many legal claims, and the potential damages of those claims could bankrupt the company. 

Section 230 protects the Wikimedia Foundation, and it allows us to defer to community editorial processes. We can let the user community make those editorial decisions, and figure things out as a group—like how to write biographies of living persons, and what sources are reliable. Wikipedia wouldn’t work if it had centralized decision making. 

What does a typical complaint look like, and what does the complaint process involve? 

In some cases, someone is accused of a serious crime and there’s a debate about the sources, or accused of certain types of wrongdoing or scams. There are debates about people’s politics, where someone is accused of being “far-right” or “far-left.” 

The first step is community dispute resolution. At the top of every article on Wikipedia there’s a button that translates to “talk.” If you click it, that gives you space to discuss how to write the article. When editors get into a fight about what to write, they should stop and discuss it with each other first. 

If page editors can’t resolve a dispute, third-party editors can come in, or ask for a broader discussion. If that doesn’t work, or there’s harassment, we have Wikipedia volunteer administrators, elected by their communities, who can intervene. They can ban people temporarily, to cool off. When necessary, they can ban users permanently. In serious cases, arbitration committees make final decisions. 

And these community dispute processes we’ve discussed are run by volunteers, with no Wikimedia Foundation employees involved? Where does Section 230 come into play?

That’s right. Section 230 helps us, because it lets disputes go through that community process. Sometimes someone’s edits get reversed, and they write an angry letter to the legal department. If we were liable for that, we would face the risk of expensive litigation every time someone got mad. Even if their claim is baseless, it’s hard to make a single filing in a U.S. court for less than $20,000. There’s a real “death by a thousand cuts” problem if enough people file litigation. 

Section 230 protects us from that, and allows for quick dismissal of invalid claims. 

In the United States, that’s really the end of the matter. There’s no way to bypass the community with a lawsuit. 

How does dealing with those complaints work in the U.S.? And how is it different abroad? 

In the U.S., we have Section 230. We’re able to say, go through the community process, and try to be persuasive. We’ll make changes, if you make a good persuasive argument! But the Foundation isn’t going to come in and change it because you made a legal complaint. 

But in the EU, they don’t have Section 230 protections. Under the Digital Services Act, once someone claims your website hosts something illegal, they can go to court and get an injunction ordering us to take the content down. If we don’t want to follow that order, we have to defend the case in court. 

In one German case, the court essentially said, “Wikipedians didn’t do good enough journalism.” The court said the article’s sources weren’t strong enough. The editors used industry trade publications, and the court said they should have used something like German state media, or the country’s top newspapers, not a “niche” publication. We disagreed with that. 

What’s the cost of having to go to court regularly to defend user speech? 

Because the Foundation is a mission-driven nonprofit, we can take on these defenses in a way that’s not always financially sensible, but is mission sensible. If you were focused on profit, you would grant a takedown. The cost of a takedown is maybe one hour of a staff member’s time. 

We can selectively take on cases to benefit the free knowledge mission, without bankrupting the company. To do litigation in the EU costs something on the order of $30,000 for one hearing, to a few hundred thousand dollars for a drawn-out case.

I don’t know what would happen if we had to do that in the United States. There would be a lot of uncertainty. One big unknown is—how many people are waiting in the wings for a better opportunity to use the legal system to force changes on Wikipedia? 

What does the community editing process get right that courts can get wrong? 

Sources. Wikipedia editors might cite a blog because they know the quality of its research. They know what's going into writing that. 

It can be easy sometimes for a court to look at something like that and say, well, this is just a blog, and it’s not backed by a university or institution, so we’re not going to rely on it. But that's actually probably a worse result. The editors who are making that consideration are often getting a more accurate picture of reality. 

Policymakers who want to limit or eliminate Section 230 often say their goal is to get harmful content off the internet, and fast. What do you think gets missed in the conversation about removing harmful content? 

One is: harmful to whom? Every time people talk about “super fast tech solutions,” I think they leave out academic and educational discussions. Everyone talks about how there’s a terrorism video, and it should come down. But there’s also news and academic commentary about that terrorism video. 

There are very few shared universal standards of harm around the world. Everyone in the world agrees, roughly speaking, on child protection, and child abuse images. But there’s wild disagreement about almost every other topic. 

If you do take down something to comply with the UK law, it’s global. And you’ll be taking away the rights of someone in the US or Australia or Canada to see that content. 

This interview was edited for length and clarity. EFF interviewed Wikimedia attorney Michelle Paulson about Section 230 in 2012.

Search Engines, AI, And The Long Fight Over Fair Use

23 January 2026 at 20:09

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.

Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.

Today, the same argument is being recycled against AI. The underlying question is the same: whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works.

Fair Use Protects Analysis—Even When It’s Automated

U.S. courts have long recognized that copying for purposes of analysis, indexing, and learning is a classic fair use. That principle didn’t originate with artificial intelligence. It doesn’t disappear just because the processes are performed by a machine.

Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That’s why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don’t substitute for the original works; they enable new forms of knowledge and expression.

Training AI models fits squarely within that tradition. An AI system learns by analyzing patterns across many works. The purpose of that copying is not to reproduce or replace the original texts, but to extract statistical relationships that allow the AI system to generate new outputs. That is the hallmark of a transformative use. 

Attacking AI training on copyright grounds misunderstands what’s at stake. If copyright law is expanded to require permission for analyzing or learning from existing works, the damage won’t be limited to generative AI tools. It could threaten long-standing practices in machine learning and text-and-data mining that underpin research in science, medicine, and technology. 

Researchers already rely on fair use to analyze massive datasets such as scientific literature. Requiring licenses for these uses would often be impractical or impossible, and it would advantage only the largest companies with the money to negotiate blanket deals. Fair use exists to prevent copyright from becoming a barrier to understanding the world. The law has protected learning before. It should continue to do so now, even when that learning is automated. 

A Road Forward For AI Training And Fair Use 

One court has already shown how these cases should be analyzed. In Bartz v. Anthropic, the court found that using copyrighted works to train an AI model is a highly transformative use. Training amounts to studying how language works; it is not about reproducing or supplanting the original books. Any harm to the market for the original works was speculative. 

The court in Bartz rejected the idea that an AI model might infringe because, in some abstract sense, its output competes with existing works. While EFF disagrees with other parts of the decision, the court’s ruling on AI training and fair use offers a good approach. Courts should focus on whether training is transformative and non-substitutive, not on fear-based speculation about how a new tool could affect someone’s market share. 

AI Can Create Problems, But Expanding Copyright Is the Wrong Fix 

Workers’ concerns about automation and displacement are real and should not be ignored. But copyright is the wrong tool to address them. Managing economic transitions and protecting workers during turbulent times are core functions of government. Copyright law doesn’t help with those tasks in the slightest. Expanding copyright control over learning and analysis won’t stop new forms of worker automation—it never has. But it will distort copyright law and undermine free expression. 

Broad licensing mandates may also do harm by entrenching the current biggest incumbent companies. Only the largest tech firms can afford to negotiate massive licensing deals covering millions of works. Smaller developers, research teams, nonprofits, and open-source projects will all get locked out. Copyright expansion won’t restrain Big Tech—it will give it a new advantage.  

Fair Use Still Matters

Learning from prior work is foundational to free expression. Rightsholders cannot be allowed to control it. Courts have rejected that move before, and they should do so again.

Search, indexing, and analysis didn’t destroy creativity. Nor did the photocopier, nor the VCR. They expanded speech, access to knowledge, and participation in culture. Artificial intelligence raises hard new questions, but fair use remains the right starting point for thinking about training.

EFF Condemns FBI Search of Washington Post Reporter’s Home

16 January 2026 at 18:19

Government invasion of a reporter’s home, and seizure of journalistic materials, is exactly the kind of abuse of power the First Amendment is designed to prevent. It represents the most extreme form of press intimidation. 

Yet, that’s what happened on Wednesday morning to Washington Post reporter Hannah Natanson, when the FBI searched her Virginia home and took her phone, two laptops, and a Garmin watch. 

The Electronic Frontier Foundation has joined 30 other press freedom and civil liberties organizations in condemning the FBI’s actions against Natanson. The First Amendment exists precisely to prevent the government from using its powers to punish or deter reporting on matters of public interest—including coverage of leaked or sensitive information. Searches like this threaten not only journalists, but the public’s right to know what its government is doing.

In the statement published yesterday, we call on Congress: 

To exercise oversight of the DOJ by calling Attorney General Pam Bondi before Congress to answer questions about the FBI’s actions; 

To reintroduce and pass the PRESS Act, which would limit government surveillance of journalists, and its ability to compel journalists to reveal sources; 

To reform the 108-year-old Espionage Act so it can no longer be used to intimidate and attack journalists; 

And to pass a resolution confirming that the recording of law enforcement activity is protected by the First Amendment. 

We’re joined on this letter by Free Press Action, the American Civil Liberties Union, PEN America, the NewsGuild-CWA, the Society of Professional Journalists, the Committee to Protect Journalists, and many other press freedom and civil liberties groups.

Congress Wants To Hand Your Parenting to Big Tech

16 January 2026 at 13:43

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing today on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.” 

That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control. 

Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.  

Kids Under 13 Are Already Banned From Social Media

One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of bill sponsors, one might think that’s a big change, and that today’s rules let kids wander freely into social media sites. But that’s not the case.   

Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA. 

Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access. 

Most Social Media Use By Younger Kids Is Family-Mediated 

If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help. 

A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.) 

Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.

A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform. 

The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together. 

KOSMA Forces Platforms To Override Families 

This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.

Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability over whether they should have known a user was under 13, they will require all users to prove their age so they can block anyone under 13. 

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves. 

Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating the law, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.

Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.

Your Family, Their Algorithms

KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t. 

These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems. 

What Families Lose 

This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.

These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and often decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust. 

What Can Be Done Instead

KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.

If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.

But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.

Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.

Fighting to Keep Bad Patents in Check: 2025 in Review

17 December 2025 at 14:16

A functioning patent system depends on one basic principle: bad patents must be challengeable. In 2025, that principle was repeatedly tested—by Congress, by the U.S. Patent and Trademark Office (USPTO), and by a small number of large patent owners determined to weaken public challenges. 

Two damaging bills, PERA and PREVAIL, were reintroduced in Congress. At the same time, USPTO attempted a sweeping rollback of inter partes review (IPR), one of the most important mechanisms for challenging wrongly granted patents. 

EFF pushed back—on Capitol Hill, inside the Patent Office, and alongside thousands of supporters who made their voices impossible to ignore.

Congress Weighed Bills That Would Undo Core Safeguards

The Patent Eligibility Restoration Act, or PERA, would overturn the Supreme Court’s Alice and Myriad decisions—reviving patents on abstract software ideas, and even allowing patents on isolated human genes. PREVAIL, introduced by the same main sponsors in Congress, would seriously weaken the IPR process by raising the burden of proof, limiting who can file challenges, forcing petitioners to surrender court defenses, and giving patent owners new ways to rewrite their claims mid-review.

Together, these bills would have dismantled much of the progress made over the last decade. 

We reminded Congress that abstract software patents—like those we’ve seen on online photo contests, upselling prompts, matchmaking, and scavenger hunts—are exactly the kind of junk claims patent trolls use to threaten creators and small developers. We also pointed out that if PREVAIL had been law in 2013, EFF could not have brought the IPR that crushed the so-called “podcasting patent.” 

EFF’s supporters amplified our message, sending thousands of messages to Congress urging lawmakers to reject these bills. The result: neither bill advanced to the full committee. The effort to rewrite patent law behind closed doors stalled out once public debate caught up with it. 

Patent Office Shifts To An “Era of No”

The push from Congress was stymied, at least for now. Unfortunately, what may prove far more effective is the push from within: new USPTO leadership is working to dismantle the systems and safeguards that protect the public from the worst patents.

Early in the year, the Patent Office signaled it would once again lean more heavily on procedural denials, reviving an approach that allowed patent challenges to be thrown out basically whenever there was an ongoing court case involving the same patent. But the most consequential move came later: a sweeping proposal unveiled in October that would make IPR nearly unusable for those who need it most.

2025 also marked a sharp practical shift inside the agency. Newly appointed USPTO Director John Squires took personal control of IPR institution decisions, and rejected all 34 of the first IPR petitions that came across his desk. As one leading patent blog put it, an “era of no” has been ushered in at the Patent Office. 

The October Rulemaking: Making Bad Patents Untouchable

The USPTO’s proposed rule changes would: 

  • Force defendants to surrender their court defenses if they use IPR—an intense burden for anyone actually facing a lawsuit. 
  • Make patents effectively unchallengeable after a single prior dispute, even if that challenge was limited, incomplete, or years out of date.
  • Block IPR entirely if a district court case is projected to move faster than the Patent Trial and Appeal Board (PTAB). 

These changes wouldn’t “balance” the system as USPTO claims—they would make bad patents effectively untouchable. Patent trolls and aggressive licensors would be insulated, while the public would face higher costs and fewer options to fight back. 

We sounded the alarm on these proposed rules and asked supporters to register their opposition. More than 4,000 of you did—thank you! Overall, more than 11,000 comments were submitted. An analysis of the comments shows that stakeholders and the public overwhelmingly oppose the proposal, with 97% of comments weighing in against it.

In those comments, small business owners described being hit with vague patents they could never afford to fight in court. Developers and open-source contributors explained that IPR is often the only realistic check on bad software patents. Leading academics, patient-advocacy groups, and major tech-community institutions echoed the same point: you cannot issue hundreds of thousands of patents a year and then block one of the only mechanisms that corrects the mistakes.

The Linux Foundation warned that the rules “would effectively remove IPRs as a viable mechanism” for developers.

GitHub emphasized the increased risk and litigation cost for open-source communities.

Twenty-two patent law professors called the proposal unlawful and harmful to innovation.

Patients for Affordable Drugs detailed the real-world impact of striking invalid pharmaceutical patents, showing that drug prices can plummet once junk patents are removed.

Heading Into 2026

The USPTO now faces thousands of substantive comments. Whether the agency backs off or tries to push ahead, EFF will stay engaged. Congress may also revisit PERA, PREVAIL, or similar proposals next year. Some patent owners will continue to push for rules that shield low-quality patents from any meaningful review.

But 2025 proved something important: When people understand how patent abuse affects developers, small businesses, patients, and creators, they show up—and when they do, their actions can shape what happens next. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Politicians Rushed Through An Online Speech “Solution.” Victims Deserve Better.

24 December 2025 at 11:44

Earlier this year, both chambers of Congress passed the TAKE IT DOWN Act. This bill, while well-intentioned, gives powerful people a new legal tool to force online platforms to remove lawful speech that they simply don't like. 

The bill, sponsored by Senate Commerce Chair Ted Cruz (R-TX) and Rep. Maria Salazar (R-FL), sought to speed up the removal of troubling online content: non-consensual intimate imagery (NCII). The spread of NCII is a serious problem, as is digitally altered NCII, sometimes called “deepfakes.” That’s why 48 states have specific laws criminalizing the distribution of NCII, in addition to the long-existing defamation, harassment, and extortion statutes—all of which can be brought to bear against those who abuse NCII. Congress can and should protect victims of NCII by enforcing and improving these laws. 

Unfortunately, TAKE IT DOWN takes another approach: it creates an unneeded notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it seeks to solve. 

While Congress was still debating the bill, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate outlining our concerns with the proposal. 

First, TAKE IT DOWN’s removal provision applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the law. We worry that bad-faith actors will use the law’s expansive definition to remove lawful speech that is not NCII and may not even contain sexual content. 

Worse, the law contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The law requires that apps and websites remove content within 48 hours or face significant legal risks. That ultra-tight deadline means that small apps and websites will have to comply so quickly to avoid legal risk that they won’t be able to investigate or verify claims. 

Finally, there are no legal protections for providers when they believe a takedown request was sent in bad faith to target lawful speech. TAKE IT DOWN is a one-way censorship ratchet, and its fast timeline discourages providers from standing up for their users’ free speech rights. 

This new law could lead to the use of automated filters that tend to flag legal content, from commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers can’t view the contents of messages on their platforms. Platforms could respond by abandoning encryption entirely in order to be able to monitor content, turning private conversations into surveilled spaces.

We asked for several changes to protect legitimate speech that is not NCII, and to include common-sense safeguards for encryption. Thousands of EFF members joined us by writing similar messages to their Senators and Representatives. That resulted in several attempts to offer common-sense amendments during the Committee process. 

However, Congress passed the bill without those needed changes, and it was signed into law in May 2025. The main takedown provisions of the bill will take effect in 2026. We’ll be pushing online platforms to be transparent about the content they take down because of this law, and will be on the watch for takedowns that overreach and censor lawful speech. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Thousands Tell the Patent Office: Don’t Hide Bad Patents From Review

11 December 2025 at 16:17

A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.

EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. Those supporter comments made up more than one-third of the 11,442 comments submitted. The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.

The Public Doesn’t Want To Bury Patent Challenges

These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment. 

Comments opposing the rulemaking include many small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also include computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy. 

The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.

EFF’s Comment To USPTO

In our filing, we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.

Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward. 

A Broad Coalition Supports IPR

A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.

Open Source and Developer Communities 

The Linux Foundation submitted comments and warned that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users who rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work. 

Patent Law Scholars

A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.” 

Patient Advocates

Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown “cardiovascular medications have fallen 97% in price, cancer drugs dropping 80-98%, and treatments for opioid addiction becom[e] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.” 

Small Businesses 

Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court. 

What Happens Next

The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.

Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters. 

We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve the public’s ability to challenge bad patents. 

EFF Tells Patent Office: Don’t Cut the Public Out of Patent Review

2 December 2025 at 14:59

EFF has submitted its formal comment to the U.S. Patent and Trademark Office (USPTO) opposing a set of proposed rules that would sharply restrict the public’s ability to challenge wrongly granted patents. These rules would make inter partes review (IPR)—the main tool Congress created to fix improperly granted patents—unavailable in most of the situations where it’s needed most.

If adopted, they would give patent trolls exactly what they want: a way to keep questionable patents alive and out of reach.

If you haven’t commented yet, there’s still time. The deadline is today, December 2.

TAKE ACTION

Tell USPTO: The public has a right to challenge bad patents

Sample comment:

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

IPR Is Already Under Siege, And These Rules Would Make It Worse

Since USPTO Director John Squires was sworn into office just over two months ago, we’ve seen the Patent Office take an increasingly aggressive stance against IPR petitions. In a series of director-level decisions, the USPTO has denied patent challengers the chance to be heard—sometimes dozens of them at a time—without explanation or reasoning. 

That reality makes this rulemaking even more troubling. The USPTO is already denying virtually every new petition challenging patents. These proposed rules would cement that closed-door approach and make it harder for challengers to be heard. 

What EFF Told the USPTO

Our comment lays out how these rules would make patent challenges nearly impossible to pursue for small businesses, nonprofits, software developers, and everyday users of technology. 

Here are the core problems we raised:

First, no one should have to give up their court defenses just to use IPR. The USPTO proposal would force defendants to choose: either use IPR and risk losing their legal defenses, or keep their defenses and lose IPR.

That’s not a real choice. Anyone being sued or threatened for patent infringement needs access to every legitimate defense. Patent litigation is devastatingly expensive, and forcing people to surrender core rights in federal court is unreasonable and unlawful.

Second, one early case should not make a bad patent immune forever. Under the proposed rules, if a patent survives any earlier validity fight—no matter how rushed, incomplete, or poorly reasoned—everyone else could be barred from filing an IPR later.

New prior art? Doesn’t matter. Better evidence? Doesn’t matter. 

Congress never intended IPR to be a one-shot shield for bad patents. 

Third, patent owners could manipulate timing to shut down petitions. The rules would let the USPTO deny IPRs simply because a district court case might move faster.

Patent trolls already game the system by filing in courts with rapid schedules. This rule would reward that behavior. It allows patent owners—not facts, not law, not the merits—to determine whether an IPR can proceed. 

IPR isn't supposed to be a race to the courthouse. It’s supposed to be a neutral review of whether the Patent Office made a mistake.

Why Patent Challenges Matter

IPR isn’t perfect, and it doesn’t apply to every patent. But compared to multimillion-dollar federal litigation, it’s one of the only viable tools available to small companies, developers, and the public. It needs to remain open. 

When an overbroad patent gets waved at hundreds or thousands of people—podcasters, app developers, small retailers—IPR is often the only mechanism that can actually fix the underlying problem: the patent itself. These rules would take that option away.

There’s Still Time To Add Your Voice

If you haven’t submitted a comment yet, now is the time. The more people speak up, the harder it becomes for these changes to slip through.

Comments don’t need to be long or technical. A few clear sentences in your own words are enough. We’ve written a short sample comment below. It’s even more powerful if you add a sentence or two describing your own experience. If you mention EFF in your comment, it helps our collective impact. 

TAKE ACTION

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Further reading:

The Patent Office Is About To Make Bad Patents Untouchable

19 November 2025 at 15:00

The U.S. Patent and Trademark Office (USPTO) has proposed new rules that would effectively end the public’s ability to challenge improperly granted patents at their source—the Patent Office itself. If these rules take effect, they will hand patent trolls exactly what they’ve been chasing for years: a way to keep bad patents alive and out of reach. People targeted with troll lawsuits will be left with almost no realistic or affordable way to defend themselves.

We need EFF supporters to file public comments opposing these rules right away. The deadline for public comments is December 2. The USPTO is moving quickly, and staying silent will only help those who profit from abusive patents. 

TAKE ACTION

Tell USPTO: The public has a right to challenge bad patents

We’re asking supporters who care about a fair patent system to file comments using the federal government’s public comment system. Your comments don’t need to be long, or use legal or technical vocabulary. The important thing is that everyday users and creators of technology have the chance to speak up and be counted. 

Below is a short, simple comment you can copy and paste. Your comment will carry more weight if you add a personal sentence or two of your own. Please note that comments should be submitted under your real name and will become part of the public record. 

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Why This Rule Change Matters

Inter partes review (IPR) isn’t perfect. It hasn’t eliminated patent trolling, and it’s not available in every case. But it is one of the few practical ways for ordinary developers, small companies, nonprofits, and creators to challenge a bad patent without spending millions of dollars in federal court. That’s why patent trolls hate it—and why the USPTO’s new rules are so dangerous.

IPR isn’t easy or cheap, but compared to years of litigation, it’s a lifeline. When the system works, it removes bogus patents from the table for everyone, not just the target of a single lawsuit. 

IPR petitions are decided by the Patent Trial and Appeal Board (PTAB), a panel of specialized administrative judges inside the USPTO. Congress designed IPR to provide a fresh, expert look at whether a patent should have been granted in the first place—especially when strong prior art surfaces. Unlike full federal trials, PTAB review is faster, more technical, and actually accessible to small companies, developers, and public-interest groups.

Here are three real examples of how IPR protected the public: 

  • The “Podcasting Patent” (Personal Audio)

Personal Audio claimed it had “invented” podcasting and demanded royalties from audio creators using its so-called podcasting patent. EFF crowdsourced prior art, filed an IPR, and ultimately knocked out the patent—benefiting the entire podcasting world. Under the new rules, this kind of public-interest challenge could easily be blocked on procedural grounds like timing, before the PTAB even examines the patent. 

  • SportBrain’s “upload your fitness data” patent

SportBrain sued more than 80 companies over a patent that claimed to cover basic gathering of user data and sending it over a network. A panel of PTAB judges canceled every claim. Under the new rules, this patent could have survived long enough to force dozens more companies to pay up.

  • Shipping & Transit’s “delivery notifications” patents

For more than a decade, Shipping & Transit sued companies over extremely broad “delivery notifications” patents. After repeated losses at PTAB and in court (including fee awards), the company finally collapsed. Under the new rules, a troll like this could keep its patents alive and continue carpet-bombing small businesses with lawsuits.

IPR hasn’t ended patent trolling. But when a troll waves a bogus patent at hundreds or thousands of people, IPR is one of the only tools that can actually fix the underlying problem: the patent itself. It dismantles abusive patent monopolies that never should have existed, saving entire industries from predatory litigation. That’s exactly why patent trolls and their allies have fought so hard to shut it down. They’ve failed to dismantle IPR in court or in Congress—and now they’re counting on the USPTO’s own leadership to do it for them. 

What the USPTO Plans To Do

First, they want you to give up your defenses in court. Under this proposal, a defendant can’t file an IPR unless they promise to never challenge the patent’s validity in court. 

For someone actually being sued or threatened with patent infringement, that’s simply not a realistic promise to make. The choice would be: use IPR and lose your defenses—or keep your defenses and lose IPR.

Second, the rules allow patents to become “unchallengeable” after one prior fight. That’s right. If a patent survives any earlier validity fight, anywhere, these rules would block everyone else from bringing an IPR, even years later and even if new prior art surfaces. One early decision—even one that was poorly argued, or didn’t have all the evidence—would close the door on the entire public.

Third, the rules will block IPR entirely if a district court case is projected to move faster than PTAB. 

So if a troll sues you with one of the outrageous patents we’ve seen over the years, like patents on watching an ad, showing picture menus, or clocking in to work, the USPTO won’t even look at it. It’ll be back to the bad old days, where you have exactly one way to beat the troll (who chose the court to sue in)—spend millions on experts and lawyers, then take your chances in front of a federal jury. 

The USPTO claims this is fine because defendants can still challenge patents in district court. That’s misleading. A real district-court validity fight costs millions of dollars and takes years. For most people and small companies, that’s no opportunity at all. 

Only Congress Can Rewrite IPR

IPR was created by Congress in 2011, as part of the America Invents Act, after extensive debate. It was meant to give the public a fast, affordable way to correct the Patent Office’s own mistakes. Only Congress—not agency rulemaking—can rewrite that system.

The USPTO shouldn’t be allowed to quietly undermine IPR with procedural traps that block legitimate challenges.

Bad patents still slip through every year. The Patent Office issues hundreds of thousands of new patents annually. IPR is one of the only tools the public has to push back.

These new rules rely on the absurd presumption that it’s the defendants—the people and companies threatened by questionable patents—who are abusing the system with multiple IPR petitions, and that they should be limited to one bite at the apple. 

That’s utterly upside-down. It’s patent trolls like Shipping & Transit and Personal Audio that have sued, or threatened, entire communities of developers and small businesses.

When people have evidence that an overbroad patent was improperly granted, that evidence should be heard. That’s what Congress intended. These rules twist that intent beyond recognition. 

In 2023, more than a thousand EFF supporters spoke out and stopped an earlier version of this proposal—your comments made the difference then, and they can again. 

Our principle is simple: the public has a right to challenge bad patents. These rules would take that right away. That’s why it’s vital to speak up now. 

TAKE ACTION

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.
