Age Verification Is Coming For the Internet. We Built You a Resource Hub to Fight Back.

Age verification laws are proliferating fast across the United States and around the world, creating a dangerous and confusing tangle of rules about what we’re all allowed to see and do online. Though these mandates claim to protect children, in practice they create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.

The term “age verification” is colloquially used to describe a wide range of age assurance technologies, from age verification systems that force you to upload government ID, to age estimation tools that scan your face, to systems that infer your age by making you share personal data. While different laws call for different methods, one thing remains constant: every method out there collects your sensitive, personal information and creates barriers to accessing the internet. We refer to all of these requirements as age verification, age assurance, or age-gating.

If you’re feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you’re not alone. It’s a lot. But understanding how these mandates work and who they harm is critical to keeping yourself and your loved ones safe online. Age verification is lurking around every corner these days, so we must fight back to protect the internet that we know and love. 

That’s why today, we’re launching EFF’s Age Verification Resource Hub (EFF.org/Age): a one-stop shop to understand what these laws actually do, what’s at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and yes—safe—internet. 

Why Age Verification Mandates Are a Problem

In the U.S., more than half of all states have now passed laws imposing age-verification requirements on online platforms. Congress is considering even more at the federal level, with a recent House hearing weighing nineteen distinct proposals relating to young people’s online safety—some sweeping, some contradictory, and each one more drastic and draconian than the last.

The rest of the world is moving in the same direction: the UK’s Online Safety Act went into effect this summer, Australia’s new law barring access to social media for anyone under 16 goes live today, and a slew of other countries are considering similar restrictions.

We all want young people to be safe online. However, age verification is not the silver bullet that lawmakers want you to think it is. In fact, age-gating mandates will do more harm than good—especially for the young people they claim to protect. They undermine the fundamental speech rights of adults and young people alike; create new barriers to accessing vibrant, lawful, even life-saving content; and needlessly jeopardize all internet users’ privacy, anonymity, and security.

If legislators want to meaningfully improve online safety, they should pass a strong, comprehensive federal privacy law instead of building new systems of surveillance, censorship, and exclusion.  

What’s Inside the Resource Hub

Our new hub is built to answer the questions we hear from users every day, such as:

  • How do age verification laws actually work?
  • What’s the difference between age verification, age estimation, age assurance, and all the other confusing technical terms I’m hearing?
  • What’s at stake for me, and who else is harmed by these systems?
  • How can I keep myself, my family, and my community safe as these laws continue to roll out?
  • What can I do to fight back?
  • And if not age verification, what else can we do to protect the online safety of our young people?

Head over to EFF.org/Age to explore our explainers, user-friendly guides, technical breakdowns, and advocacy tools—all indexed in the sidebar for easy browsing. And today is just the start, so keep checking back over the next several weeks as we continue to build out the site with new resources and answers to more of your questions on all things age verification.

Join Us: Reddit AMA & EFFecting Change Livestream Events

To celebrate the launch of EFF.org/Age, and to hear directly from you how we can be most helpful in this fight, we’re hosting two exciting events:

1. Reddit AMA on r/privacy

Next week, our team of EFF activists, technologists, and lawyers will be hanging out over on Reddit’s r/privacy subreddit to directly answer your questions on all things age verification. We’re looking forward to connecting with you and hearing how we can help you navigate these changing tides, so come on over to r/privacy on Monday (12/15), Tuesday (12/16), and Wednesday (12/17), and ask us anything!

2. EFFecting Change Livestream Panel: “The Human Cost of Online Age Verification”

Then, on January 15th at 12pm PT, we’re hosting a livestream panel featuring Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; Hana Memon, Software Developer at Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. Join us by RSVPing at https://livestream.eff.org/.

A Resource to Empower Users

Age-verification mandates are reshaping the internet in ways that are invasive, dangerous, and deeply unnecessary. But users are not powerless! We can challenge these laws, protect our digital rights, and build a safer digital world for all internet users, no matter their ages. Our new resource hub is here to help—so explore, share, and join us in the fight for a better internet.


A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.

EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.

TAKE ACTION

TELL CONGRESS: THE GUARD ACT WON’T KEEP US SAFE

Young People's Access to Legitimate AI Tools Could Be Cut Off Entirely. 

The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.

The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different. 

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.

Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.  
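To make that dilemma concrete, here is a minimal sketch, in Python, of the two compliance paths a platform has under a periodic re-verification requirement. This is a hypothetical illustration of the incentive structure, not the GUARD Act’s text or any vendor’s real system; the function names and the 30-day interval are our own assumptions.

```python
# Hypothetical sketch of the re-verification dilemma described above.
# Not a real vendor API; names and the 30-day interval are assumptions.
import time
from dataclasses import dataclass
from typing import Optional

REVERIFY_INTERVAL_SECONDS = 60 * 60 * 24 * 30  # assume "periodically" = 30 days


def collect_government_id(user_id: str) -> bytes:
    """Stub for an upload flow: the user hands over sensitive ID data again."""
    return b"<government ID scan for " + user_id.encode() + b">"


@dataclass
class VerificationRecord:
    user_id: str
    verified_at: float
    id_document: Optional[bytes]  # Option A keeps this; Option B discards it


def reverify(record: VerificationRecord, retain_documents: bool) -> VerificationRecord:
    """Re-verify a user once the window lapses, via one of two bad options."""
    if time.time() - record.verified_at < REVERIFY_INTERVAL_SECONDS:
        return record  # still inside the verification window
    if retain_documents and record.id_document is not None:
        # Option A: re-check the stored document. The platform becomes a
        # long-term custodian of IDs or biometrics, i.e., a breach target.
        record.verified_at = time.time()
    else:
        # Option B: honor "data minimization" by discarding the document
        # after each check, forcing the user to re-submit it every cycle.
        doc = collect_government_id(record.user_id)
        record.verified_at = time.time()
        record.id_document = doc if retain_documents else None
    return record
```

Either branch leaves users exposed: sensitive documents are held indefinitely, or they are collected again and again.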

EFF has long documented the dangers of age-verification systems:

  • They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
  • They implement mass surveillance systems and ruin anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity (see the sketch below).
  • They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
  • They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.

As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.
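The second bullet above deserves a concrete picture. Below is a deliberately simplified, hypothetical sketch of that linkage problem: once a service holds a verified identity, its ordinary interaction logs become identity-linked records. All names, tables, and values here are invented for illustration.

```python
# Hypothetical illustration of the linkage problem -- invented data, not a
# real system. Age verification produces the first table; normal operation
# produces the second. Anyone holding both can de-anonymize every chat.

verified_identities = {  # created by the age-verification step
    "user_8241": "Jane Doe, DOB 1991-04-02, driver's license #X12345",
}

chat_log = [  # created by ordinary chatbot operation
    ("user_8241", "2025-11-03T10:22Z", "asked about a medical condition"),
    ("user_8241", "2025-11-03T10:25Z", "asked about attending a protest"),
]

# A breach, subpoena, or data sale that reaches both tables links every
# "anonymous" conversation to a legal identity.
for user_id, timestamp, topic in chat_log:
    print(f"{verified_identities[user_id]} -> {timestamp}: {topic}")
```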

Vagueness + Steep Fines = Censorship. Full Stop. 

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses—including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.

The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.

Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.

Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.

In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to do it.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.

Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.

TAKE ACTION

TELL CONGRESS: OPPOSE THE GUARD ACT


#StopCensoringAbortion: What We Learned and Where We Go From Here

This is the tenth and final installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When we launched Stop Censoring Abortion, our goals were to understand how social media platforms were silencing abortion-related content, gather data and lift up stories of censorship, and hold social media companies accountable for the harm they have caused to the reproductive rights movement.

Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and individuals around the world, we confirmed what many already suspected: this speech is being removed, restricted, and silenced by platforms at an alarming rate. Together, our findings paint a clear picture of censorship in action: platforms’ moderation systems are not only broken, but are actively harming those seeking and sharing vital reproductive health information.

Here are the key lessons from this campaign: what we uncovered, how platforms can do better, and why pushing back against this censorship matters more now than ever.

Lessons Learned

Across our submissions, we saw systemic over-enforcement, vague and convoluted policies, arbitrary takedowns, sudden account bans, and ignored appeals. And in almost every case we reviewed, the posts and accounts in question did not violate any of the platform’s stated rules.

The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But most of the content submitted simply provided factual, educational information that clearly did not violate those rules. As we saw in the M+A Hotline’s case, this kind of misclassification deprives patients, advocates, and researchers of reliable information, and chills those trying to provide accurate and life-saving reproductive health resources.

In one submission, we even saw posts sharing educational abortion resources get flagged under the “Dangerous Organizations and Individuals” policy, a rule intended to prevent terrorism and criminal activity. We’ve seen this policy cause problems in the past, but in the reproductive health space, treating legal and accurate information as violent or unlawful only adds needless stigma and confusion.

Meta’s convoluted advertising policies add another layer of harm. There are specific, additional rules users must navigate to post paid content about abortion. While many of these rules still contain exceptions for purely educational content, Meta is vague about how and when those exceptions apply. And ads that seem like they should have been allowed were frequently flagged under rules about “prescription drugs” or “social issues.” This patchwork of unclear policies forces users to second-guess what content they can post or promote for fear of losing access to their networks.

In another troubling trend, many of our submitters reported experiencing shadowbanning and de-ranking, where posts weren’t removed but were instead quietly suppressed by the algorithm. This kind of suppression leaves advocates without any notice, explanation, or recourse—and severely limits their ability to reach people who need the information most.  

Many users also faced sudden account bans without warning or clear justification. Though Meta’s policies dictate that an account should only be disabled or removed after “repeated” violations, organizations like Women Help Women received no warning before seeing their critical connections cut off overnight.

Finally, we learned that Meta’s enforcement outcomes were deeply inconsistent. Users often had their appeals denied and accounts suspended until someone with insider access to Meta could intervene. For example, the Red River Women’s Clinic, RISE at Emory, and Aid Access each had their accounts restored only after press attention or personal contacts stepped in. This reliance on backchannels underscores the inequity in Meta’s moderation processes: without connections, users are left unfairly silenced.

It’s Not Just Meta

Most of our submissions detailed suppression that took place on one of Meta’s platforms (Facebook, Instagram, WhatsApp, and Threads), so we decided to focus our analysis on Meta’s moderation policies and practices. But we should note that this problem is by no means confined to Meta.

On LinkedIn, for example, Stephanie Tillman told us about how she had her entire account permanently taken down, with nothing more than a vague notice that she had violated LinkedIn’s User Agreement. When Stephanie reached out to ask what violation she committed, LinkedIn responded that “due to our Privacy Policy we are unable to release our findings,” leaving her with no clarity or recourse. Stephanie suspects that the ban was related to her work with Repro TLC, an advocacy and clinical health care organization, and/or her posts relating to her personal business, Feminist Midwife LLC. But LinkedIn’s opaque enforcement meant she had no way to confirm these suspicions, and no path to restoring her account.

Screenshot submitted by Stephanie Tillman to EFF (with personal information redacted by EFF)

And over on TikTok, Brenna Miller, a creator who works in health care and frequently posts about abortion, posted a video of her “unboxing” an abortion pill care package from Carafem. Though Brenna’s video was factual and straightforward, TikTok removed it, saying that she had violated TikTok’s Community Guidelines.

Screenshot submitted by Brenna Miller to EFF

Brenna appealed the removal successfully at first, but a few weeks later the video was permanently deleted—this time, without any explanation or chance to appeal again.

Brenna’s far from the only one experiencing censorship on TikTok. Even Jessica Valenti, award-winning writer, activist, and author of the Abortion Every Day newsletter, recently had a video taken down from TikTok for violating its community guidelines, with no further explanation. The video she posted was about the Trump administration calling IUDs and the Pill ‘abortifacients.’ Jessica wrote:

Which rule did I break? Well, they didn’t say: but I wasn’t trying to sell anything, the video didn’t feature nudity, and I didn’t publish any violence. By process of elimination, that means the video was likely taken down as "misinformation." Which is…ironic.

These are not isolated incidents. In the Center for Intimacy Justice’s survey of reproductive rights advocates, health organizations, sex educators, and businesses, 63% reported having content removed on Meta platforms, 55% reported the same on TikTok, and 66% reported having ads rejected from Google platforms (including YouTube). Clearly, censorship of abortion-related content is a systemic problem across platforms.

How Platforms Can Do Better on Abortion-Related Speech

Based on our findings, we're calling on platforms to take these concrete steps to improve moderation of abortion-related speech:

  • Publish clear policies. Users should not have to guess whether their speech is allowed or not.
  • Enforce rules consistently. If a post does not violate a written standard, it should not be removed.
  • Provide real transparency. Enforcement decisions must come with clear, detailed explanations and meaningful opportunities to appeal.
  • Guarantee functional appeals. Users must be able to challenge wrongful takedowns without relying on insider contacts.
  • Expand human review. Reproductive rights is a nuanced issue and can be too complex to be left entirely to error-prone automated moderation systems.

Practical Tips for Users

Don’t get it twisted: Users should not have to worry about their posts being deleted or their accounts getting banned when they share factual information that doesn’t violate platform policies. The onus is on platforms to get it together and uphold their commitments to users. But while platforms continue to fail, we’ve provided some practical tips to reduce the risk of takedowns, including:

  • Consider limiting commonly flagged words and images. Posts with pill images or certain keyword combinations (like “abortion,” “pill,” and “mail”) were often flagged (a sketch after this list shows why).
  • Be as clear as possible. Vague phrases like “we can help you get what you need” might look like drug sales to an algorithm.
  • Be careful with links. Direct links to pill providers were often flagged. Spell out the links instead.
  • Expect stricter rules for ads. Boosted posts face harsher scrutiny than regular posts.
  • Appeal wrongful enforcement decisions. Requesting an appeal might get you a human moderator or, even better, review from Meta’s independent Oversight Board.
  • Document everything and back up your content. Screenshot all communications and enforcement decisions so you can share them with the press or advocacy groups, and export your data regularly in case your account vanishes overnight.
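
As a concrete illustration of why the first two tips matter, here is a minimal, hypothetical sketch of keyword-combination flagging. It is not Meta’s actual moderation system; the word lists are invented to show the failure mode, in which factual health information and a prohibited sale offer look identical to the filter.

```python
# Hypothetical sketch of keyword-combination flagging -- not any platform's
# real moderation system. The combinations below are invented examples.

FLAGGED_COMBINATIONS = [
    {"abortion", "pill", "mail"},  # reads to the filter like a mail-order sale
    {"pills", "buy", "dm"},
]


def is_flagged(post: str) -> bool:
    # Naive bag-of-words matching: no grammar, no context, no intent.
    words = set(post.lower().replace(",", " ").replace(".", " ").split())
    return any(combo <= words for combo in FLAGGED_COMBINATIONS)


# Factual health reporting and a prohibited sale offer get the same verdict:
print(is_flagged("New study: the abortion pill is safe to prescribe by mail"))  # True
print(is_flagged("Selling abortion pill packs shipped by mail"))                # True
```

A rule like this is cheap to run at scale, which is exactly why vague phrasing, pill imagery, and certain word pairings trip it regardless of intent.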

Keep Fighting

Abortion information saves lives, and social media is the primary—and sometimes only—way for advocates and providers to get accurate information out to the masses. But now we have evidence that this censorship is widespread, unjustified, and harming communities who need access to this information most.

Platforms must be held accountable for these harms, and advocates must continue to speak out. The more we push back—through campaigns, reporting, policy advocacy, and user action—the harder it will be for platforms to look away.

So keep speaking out, and keep demanding accountability. Platforms need to know we're paying attention—and we won't stop fighting until everyone can share information about abortion freely, safely, and without fear of being silenced.

This is the tenth and final post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion.  

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 


Platforms Have Failed Us on Abortion Content. Here's How They Can Fix It.

This is the eighth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

In our Stop Censoring Abortion series, we’ve documented the many ways that reproductive rights advocates have faced arbitrary censorship on Meta platforms. Since social media is the primary—and sometimes the only—way that providers, advocates, and communities can safely and effectively share timely and accurate information about abortion, it’s vitally important that platforms take steps to proactively protect this speech.

Yet, even though Meta says its moderation policies allow abortion-related speech, its enforcement of those policies tells a different story. Posts are being wrongfully flagged, accounts are disappearing without warning, and important information is being removed without clear justification.

So what explains the gap between Meta’s public commitments and its actions? And how can we push platforms to be better—to, dare we say, #StopCensoringAbortion?

After reviewing nearly one hundred submissions and speaking with Meta to clarify their moderation practices, here’s what we’ve learned.

Platforms’ Editorial Freedom to Moderate User Content

First, given the current landscape—with some states trying to criminalize speech about abortion—you may be wondering how much leeway platforms like Facebook and Instagram have to choose their own content moderation policies. In other words, can social media companies proactively commit to stop censoring abortion?

The answer is yes. Social media companies, including Meta, TikTok, and X, have the constitutionally protected First Amendment right to moderate user content however they see fit. They can take down posts, suspend accounts, or suppress content for virtually any reason.

The Supreme Court explicitly affirmed this right in 2024 in Moody v. NetChoice, holding that social media platforms, like newspapers, bookstores, and art galleries before them, have the First Amendment right to edit the user speech that they host and deliver to other users on their platforms. The Court also established that the government has a very limited role in dictating what social media platforms must (or must not) publish. This editorial discretion, whether granted to individuals, traditional press, or online platforms, is meant to protect these institutions from government interference and to safeguard the diversity of the public sphere—so that important conversations and movements like this one have the space to flourish.

Meta’s Broken Promises

Unfortunately, Meta is failing to meet even these basic standards. Again and again, its policies say one thing while its actual enforcement says another.

Meta has stated its intent to allow conversations about abortion to take place on its platforms. In fact, as we’ve written previously in this series, Meta has publicly insisted that posts with educational content about abortion access should not be censored, even admitting in several public statements to moderation mistakes and over-enforcement. One spokesperson told the New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space. . . . That’s why we allow posts and ads about, discussing and debating abortion.”

Meta’s platform policies largely reflect this intent. But as our campaign reveals, Meta’s enforcement of those policies is wildly inconsistent. Time and again, users—including advocacy organizations, healthcare providers, and individuals sharing personal stories—have had their content taken down even though it did not actually violate any of Meta’s stated guidelines. Worse, they are often left in the dark about what happened and how to fix it.

Arbitrary enforcement like this harms abortion activists and providers by cutting them off from their audiences, wasting the effort they spend creating resources and building community on these platforms, and silencing their vital reproductive rights advocacy. And it goes without saying that it hurts users, who need access to timely, accurate, and sometimes life-saving information. At a time when abortion rights are under attack, platforms with enormous resources—like Meta—have no excuse for silencing this important speech.  

Our Call to Platforms

Our case studies have highlighted that when users can’t rely on platforms to apply their own rules fairly, the result is a widespread chilling effect on online speech. That’s why we are calling on Meta to adopt the following urgent changes.

1. Publish clear and understandable policies.

Too often, platforms’ vague rules force users to guess what content might be flagged in order to avoid shadowbanning or worse, leading to needless self-censorship. To prevent this chilling effect, platforms should strive to offer users the greatest possible transparency and clarity on their policies. The policies should be clear enough that users know exactly what is allowed and what isn’t so that, for example, no one is left wondering how exactly a clip of women sharing their abortion experiences could be mislabeled as violent extremism.

2. Enforce rules consistently and fairly.

If content doesn’t violate a platform’s stated policies, it should not be removed. And, per Meta’s own policies, an account should not be suspended for abortion-related content violations if it has not received any prior warnings or “strikes.” Yet as we’ve seen throughout this campaign, abortion advocates repeatedly face takedowns or even account suspensions of posts that fall entirely within Meta’s Community Standards. On such a massive scale, this selective enforcement erodes trust and chills entire communities from participating in critical conversations. 

3. Provide meaningful transparency in enforcement actions.

When content is removed, Meta tends to give vague, boilerplate explanations—or none at all. Instead, users facing takedowns or suspensions deserve detailed and accurate explanations that state the policy violated, reflect the reasoning behind the actual enforcement decision, and explain how to appeal the decision. Clear explanations are key to preventing wrongful censorship and ensuring that platforms remain accountable to their commitments and to their users.

4. Guarantee functional appeals.

Every user deserves a real chance to challenge improper enforcement decisions and have them reversed. But based on our survey responses, it seems Meta’s appeals process is broken. Many users reported that they do not receive responses to appeals, even when the content did not violate Meta’s policies, and thus have no meaningful way to challenge takedowns. Alarmingly, we found that a user’s best (and sometimes only) chance at success is to rely on a personal connection at Meta to right wrongs and restore content. This is unacceptable. Users should have a reliable and efficient appeal process that does not depend on insider access.   

5. Expand human review.

Finally, automated systems cannot always handle the nuance of sensitive issues like reproductive health and advocacy. They misinterpret words, miss important cultural or political context, and wrongly flag legitimate advocacy as “dangerous.” Therefore, we call upon platforms to expand the role that human moderators play in reviewing auto-flagged content violations—especially when posts involve sensitive healthcare information or political expression.

Users Deserve Better

Meta has already made the choice to allow speech about abortion on its platforms, and it has not hesitated to highlight that commitment whenever it has faced scrutiny. Now it’s time for Meta to put its money where its mouth is.

Users deserve better than a system where rules are applied at random, appeals go nowhere, and vital reproductive health information is needlessly (or negligently) silenced. If Meta truly values free speech, it must commit to moderating with fairness, transparency, and accountability.

This is the eighth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion   

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 


Age Verification Is A Windfall for Big Tech—And A Death Sentence For Smaller Platforms

Update September 10, 2025: Bluesky announced today that it would implement age verification measures in South Dakota and Wyoming to comply with laws there. Bluesky continues to block access in Mississippi.

If you live in Mississippi, you may have noticed that you are no longer able to log into your Bluesky or Dreamwidth accounts from within the state. That’s because, in a chilling early warning sign for the U.S., both social platforms decided to block all users in Mississippi from their services rather than risk hefty fines under the state’s oppressive age verification mandate. 

If this sounds like censorship to you, you’re right—it is. But it’s not these small platforms’ fault. This is the unfortunate result of Mississippi’s sweeping age verification law, H.B. 1126. Though the law had previously been blocked by a federal district court, the Supreme Court lifted that injunction last month, even as one justice (Kavanaugh) concluded that the law is “likely unconstitutional.” This allows H.B. 1126 to go into effect while the broader constitutional challenge works its way through the courts. EFF has opposed H.B. 1126 from the start, arguing consistently that it violates all internet users’ First Amendment rights, seriously risks our privacy, and forces platforms to implement invasive surveillance systems that ruin our anonymity.

Lawmakers often sell age-verification mandates as a silver bullet for Big Tech’s harms, but in practice, these laws do nothing to rein in the tech giants. Instead, they end up crushing smaller platforms that can’t absorb the exorbitant costs. Now that Mississippi’s mandate has gone into effect, the reality is clear: age verification laws entrench Big Tech’s dominance, while pushing smaller communities like Bluesky and Dreamwidth offline altogether. 

Sorry Mississippians, We Can’t Afford You

Bluesky was the first platform to make the announcement. In a public blog post, Bluesky condemned H.B. 1126’s broad scope, barriers to innovation, and privacy implications, explaining that the law forces platforms to “make every Mississippi Bluesky user hand over sensitive personal information and undergo age checks to access the site—or risk massive fines.” As Bluesky noted, “This dynamic entrenches existing big tech platforms while stifling the innovation and competition that benefits users.” Instead, Bluesky made the decision to cut off Mississippians entirely until the courts consider whether to overturn the law.

About a week later, we saw a similar announcement from Dreamwidth, an open-source online community similar to LiveJournal where users share creative writing, fanfiction, journals, and other works. In its post, Dreamwidth shared that it too would have to resort to blocking the IP addresses of all users in Mississippi because it could not afford the hefty fines. 

Dreamwidth wrote: “Even a single $10,000 fine would be rough for us, but the per-user, per-incident nature of the actual fine structure is an existential threat.” The service also expressed fear that being involved in the lawsuit against Mississippi left it particularly vulnerable to retaliation—a clear illustration of the chilling effect of these laws. For Dreamwidth, blocking Mississippi users entirely was the only way to survive. 

Age Verification Mandates Don’t Rein In Big Tech—They Entrench It

Proponents of age verification claim that these mandates will hold Big Tech companies accountable for their outsized influence, but really the opposite is true. As we can see from Mississippi, age verification mandates concentrate and consolidate power in the hands of the largest companies—the only entities with the resources to build costly compliance systems and absorb potentially massive fines. While megacorporations like Google (with YouTube) and Meta (with Instagram) are already experimenting with creepy new age-estimation tech on their social platforms, smaller sites like Bluesky and Dreamwidth simply cannot afford the risks. 

We’ve already seen how this plays out in the UK. When the Online Safety Act came into force recently, platforms like Reddit, YouTube, and Spotify implemented broad (and extremely clunky) age verification measures while smaller sites, including forums on parenting, green living, and gaming on Linux, were forced to shutter. Take, for example, the Hamster Forum, “home of all things hamstery,” which announced in March 2025 that the OSA would force it to shut down its community message boards. Instead, users were directed to migrate over to Instagram with this wistful disclaimer: “It will not be the same by any means, but . . . We can follow each other and message on there and see each others [sic] individual posts and share our hammy photos and updates still.” 

This perfectly illustrates the market impact of online age verification laws. When smaller platforms inevitably cave under the financial pressure of these mandates, users will be pushed back to the social media giants. These huge companies—those that can afford expensive age verification systems and aren’t afraid of a few $10,000 fines while they figure out compliance—will end up getting more business, more traffic, and more power to censor users and violate their privacy. 

This consolidation of power is a dream come true for the Big Tech platforms, but it’s a nightmare for users. While the megacorporations get more traffic and a whole lot more user data (read: profit), users are left with far fewer community options and a bland, corporate surveillance machine instead of a vibrant public sphere. The internet we all fell in love with is a diverse and colorful place, full of innovation, connection, and unique opportunities for self-expression. That internet—our internet—is worth defending.


Americans, Be Warned: Lessons From Reddit’s Chaotic UK Age Verification Rollout

Age verification has officially arrived in the UK thanks to the Online Safety Act (OSA), a UK law requiring online platforms to check that all UK-based users are at least eighteen years old before allowing them to access broad categories of “harmful” content that go far beyond graphic sexual content. EFF has extensively criticized the OSA for eroding privacy, chilling speech, and undermining the safety of the children it aims to protect. Now that it’s gone into effect, these countless problems have begun to reveal themselves, and the absurd, disastrous outcome illustrates why we must work to avoid this age-verified future at all costs.

Perhaps you’ve seen the memes as large platforms like Spotify and YouTube attempt to comply with the OSA, while smaller sites—like forums focused on parenting, green living, and gaming on Linux—either shut down or cease some operations rather than face massive fines for not following the law’s vague, expensive, and complicated rules and risk assessments. 

But even Reddit, a site that prizes anonymity and has regularly demonstrated its commitment to digital rights, was doomed to fail in its attempt to comply with the OSA. Though Reddit is not alone in bowing to the UK mandates, it provides a perfect case study and a particularly instructive glimpse of what the age-verified future would look like if we don’t take steps to stop it.

It’s Not Just Porn—LGBTQ+, Public Health, and Politics Forums All Behind Age Gates

On July 25, users in the UK were shocked and rightfully revolted to discover that their favorite Reddit communities were now locked behind age verification walls. Under the new policies, UK Redditors were asked to submit a photo of their government ID and/or a live selfie to Persona, the for-profit vendor that Reddit contracts with to provide age verification services. 

 "SUBMIT PHOTO ID" or "ESTIMATE AGE FROM SELFIE."

For many, this was the first time they realized what the OSA would actually mean in practice—and the outrage was immediate. As soon as the policy took effect, reports emerged from users that subreddits dedicated to LGBTQ+ identity and support, global journalism and conflict reporting, and even public health-related forums like r/periods, r/stopsmoking, and r/sexualassault were walled off to unverified users. A few more absurd examples of the communities that were blocked off, according to users, include: r/poker, r/vexillology (the study of flags), r/worldwar2, r/earwax, r/popping (the home of grossly satisfying pimple-popping content), and r/rickroll (yup). This is, again, exactly what digital rights advocates warned about. 

The OSA defines “harmful” in multiple ways that go far beyond pornography, so the obstacles UK users are experiencing are exactly what the law intended. Like other online age restrictions, the OSA obstructs far more than kids’ access to clearly adult sites. When fines are at stake, platforms will always default to overcensoring. So every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose?

Again, the fact that the OSA has forced Reddit, the “heart of the internet,” to overcensor user-generated content is noteworthy. Reddit has historically succeeded where many others have failed in safeguarding digital rights—particularly the free speech and privacy of its users. It may not be perfect, but Reddit has worked harder than many large platforms to defend Section 230, a key law in the US protecting free speech online. It was one of the first platforms to endorse the Santa Clara Principles, and it was the only platform to receive every star in EFF’s 2019 “Who Has Your Back” (Censorship Edition) report due to its unique approach to moderation, its commitment to notice and appeals of moderation decisions, and its transparency regarding government takedown requests. Reddit’s users are particularly active in the digital rights world: in 2012, they helped EFF and other advocates defeat SOPA/PIPA, a dangerous censorship law. Redditors were key in forcing members of Congress to take a stand against the bill, and were the first to declare a “blackout day,” a historic moment of online advocacy in which over a hundred thousand websites went dark to protest the bill. And Reddit is the only major social media platform where EFF doesn’t regularly share our work—because its users generally do so on their own. 

If a platform with a history of fighting for digital rights is forced to overcensor, how will the rest of the internet look if age verification spreads? Reddit’s attempts to comply with the OSA show the urgency of fighting these mandates on every front. 

We cannot accept these widespread censorship regimes as our new norm. 

Rollout Chaos: The Tech Doesn’t Even Work! 

In the days after the OSA became effective, backlash to the new age verification measures spread across the internet like wildfire as UK users made their hatred of these new policies clear. VPN usage in the UK soared, over 500,000 people signed a petition to repeal the OSA, and some shrewd users even discovered that video game face filters and meme images could fool Persona’s verification software. But these loopholes aren’t likely to last long, as we can expect the age-checking technology to continuously adapt to new evasion tactics. As good as they may be, VPNs cannot save us from the harms of age verification. 

In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it. 

Even when the workarounds inevitably cease to function and the age-checking procedures calcify, age verification measures still will not achieve their singular goal of protecting kids from so-called “harmful” online content. Teenagers will, uh, find a way to access the content they want. Instead of going to a vetted site like Pornhub for explicit material, curious young people (and anyone else who does not or cannot submit to age checks) will be pushed to the sketchier corners of the internet—where there is less moderation, more safety risk, and no regulation to prevent things like CSAM or non-consensual sexual content. In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it. 

If that weren’t enough, the slew of practical issues that have accompanied Reddit’s rollout also reveals the inadequacy of age verification technology to meet our current moment. For example, users reported various bugs in the age-checking process, like being locked out or asked repeatedly for ID despite complying. UK-based subreddit moderators also reported facing difficulties either viewing NSFW post submissions or vetting users’ post history, even when the particular submission or subreddit in question was entirely SFW. 

Taking all of this together, it is excessively clear that age-gating the internet is not the solution to kids’ online safety. Whether due to issues with the discriminatory and error-prone technology, or simply because they lack either a government ID or personal device of their own, millions of UK internet users will be completely locked out of important social, political, and creative communities. If we allow age verification, we welcome new levels of censorship and surveillance with it—while further lining the pockets of big tech and the slew of for-profit age verification vendors that have popped up to fill this market void.

Americans, Take Heed: It Will Happen Here Too

The UK age verification rollout, chaotic as it is, is a proving ground for platforms that are looking ahead to implementing these measures on a global scale. In the US, there’s never been a better time to get educated and get loud about the dangers of this legislation. EFF has sounded this alarm before, but Reddit’s attempts to comply with the OSA show its urgency: age verification mandates are censorship regimes, and in the US, porn is just the tip of the iceberg.

US legislators have been disarmingly explicit about their intentions to use restrictions on sexually explicit content as a Trojan horse that will eventually help them censor all sorts of other perfectly legal (and largely uncontroversial) content. We’ve already seen them move the goalposts from porn to transgender and other LGBTQ+ content. What’s next? Sexual education materials, reproductive rights information, DEI or “critical race theory” resources—the list goes on. Under KOSA, which last session passed the Senate with an enormous majority but did not make it to the House, we would likely see results here similar to those we’re seeing in the UK under the OSA.

Nearly half of U.S. states have some sort of online age restrictions in place already, and the Supreme Court recently paved the way for even more age blocks on online sexual content. But Americans—including those under 18—still have a First Amendment right to view content that is not sexually explicit, and EFF will continue to push back against any legislation that expands the age mandates beyond porn, in statehouses, in courts, and in the streets. 

What can you do?

Call or email your representatives to oppose KOSA and any other federal age-checking mandate. Tell your state lawmakers, wherever you are, to oppose age verification laws. Make your voice heard online, and talk to your friends and family. Tell them about what’s happening to the internet in the UK, and make sure they understand what we all stand to lose—online privacy, security, anonymity, and expression—if the age-gated internet becomes a global reality. EFF is building a coalition to stop this enormous violation of digital rights. Join us today.
