On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

9 February 2026 at 13:53

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.

The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. That is especially true because the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users’ lawful speech.

Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries would always be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of the social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities might simply disappear under the weight of such heavy liability.

Another alternative: Imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there’s no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech (https://www.eff.org/takedowns) based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.

The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

States Tried to Censor Kids Online. Courts, and EFF, Mostly Stopped Them: 2025 in Review

31 December 2025 at 12:03

Lawmakers in at least a dozen states believe that they can pass laws blocking young people from social media or requiring them to get their parents’ permission before logging on. Fortunately, nearly every trial court to review these laws has ruled that they are unconstitutional.

It’s not just courts telling these lawmakers they are wrong. EFF has spent the past year filing friend-of-the-court briefs in courts across the country explaining how these laws violate young people’s First Amendment rights to speak and get information online. In the process, these laws also burden adults’ rights, and jeopardize everyone’s privacy and data security.

Minors have long had the same First Amendment rights as adults: to talk about politics, create art, comment on the news, discuss or practice religion, and more. The internet simply amplified their ability to speak, organize, and find community.

Although these state laws vary in scope, most have two core features. First, they require social media services to estimate or verify the ages of all users. Second, they either ban minor access to social media, or require parental permission. 

In 2025, EFF filed briefs challenging age-gating laws in California (twice), Florida, Georgia, Mississippi, Ohio, Utah, Texas, and Tennessee. Across these cases we argued the same point: these laws burden the First Amendment rights of both young people and adults. In many of these briefs, the ACLU, Center for Democracy & Technology, Freedom to Read Foundation, LGBT Technology Institute, TechFreedom, and Woodhull Freedom Foundation joined.

There is no “kid exception” to the First Amendment. The Supreme Court has repeatedly struck down laws that restrict minors’ speech or impose parental-permission requirements. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks. As EFF has urged, lawmakers should pursue strong privacy laws, not censorship, to address online harms.

These laws also burden everyone’s speech by requiring users to prove their age. ID-based systems of access can lock people out if they don’t have the right form of ID, and biometric systems are often discriminatory or inaccurate. Requiring users to identify themselves before speaking also chills anonymous speech—protected by the First Amendment, and essential for those who risk retaliation.

Finally, requiring users to provide sensitive personal information increases their risk of future privacy and security invasions. Most of these laws perversely require social media companies to collect even more personal information from everyone, especially children, who can be more vulnerable to identity theft.

EFF will continue to fight for the rights of minors and adults to access the internet, speak freely, and organize online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Lawmakers Must Listen to Young People Before Regulating Their Internet Access: 2025 in Review

15 December 2025 at 17:25

State and federal lawmakers have introduced multiple proposals in 2025 to curtail or outright block children and teenagers from accessing legal content on the internet. These lawmakers argue that internet and social media platforms have an obligation to censor or suppress speech that they consider “harmful” to young people. Unfortunately, in many of these legislative debates, lawmakers are not listening to kids, whose experiences online are overwhelmingly more positive than lawmakers claim.

Fortunately, EFF has spent the past year trying to make sure that lawmakers hear young people’s voices. We have also been reminding lawmakers that minors, like everyone else, have First Amendment rights to express themselves online. 

These rights extend to a young person’s ability to use social media both to speak for themselves and to access the speech of others online. Young people also have the right to control how they access this speech, including through personalized feeds and other digestible, organized formats. Preventing teenagers from accessing the same internet and social media channels that adults use is a clear violation of their right to free expression.

On top of violating minors’ First Amendment rights, these laws also actively harm minors who rely on the internet to find community, find resources to end abuse, or access information about their health. Cutting off internet access acutely harms LGBTQ+ youth and others who lack familial or community support where they live. These laws also empower the state to decide what information is acceptable for all young people, overriding parents’ choices. 

Additionally, all of the laws that would attempt to create a “kid friendly” internet and an “adults-only” internet are a threat to everyone, adults included. These mandates encourage the adoption of invasive and dangerous age-verification technology. Beyond being creepy, these systems incentivize more data collection and increase the risk of data breaches and other harms. Requiring everyone online to provide their ID or other proof of their age could block legal adults from accessing lawful speech if they don’t have the right form of ID. Furthermore, this trend infringes on people’s right to be anonymous online and creates a chilling effect that may deter people from joining certain services or speaking on certain topics.

EFF has lobbied against these bills at both the state and federal level, and we have also filed briefs in support of several lawsuits to protect the First Amendment rights of minors. We will continue to advocate for the rights of everyone online – including minors – in the future.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Strengthen Colorado’s AI Act

19 November 2025 at 12:37

Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.

Fortunately, workers, patients, and renters are resisting.

In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.

EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.

What the Colorado AI Act Does

The Colorado AI Act is a good step in the right direction. It regulates “high risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, healthcare, housing, insurance, or legal services. An AI system is a “substantial factor” in those decisions if it assisted in the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.

Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.

Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help to seek a fair decision about them. A deployer must notify them of the reasons for the decision, the degree to which the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any incorrect inputs. And the deployer must provide them an opportunity to appeal, including with human review.

Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.

How the Colorado AI Act Should Be Strengthened

The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.

Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas on paper have similar rights to biometric privacy, but in practice the people of Illinois have far more enjoyment of this right because they can sue violators.

Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they repeatedly have an unfair disparate impact on vulnerable groups, and so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” – that is, an evidentiary thumb on the scale – that may impede such enforcement. Specifically, if a deployer or developer complies with the Act, then they get a rebuttable presumption that they complied with the Act’s requirement of “reasonable care” to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.

Next Steps

Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.

EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.