
Why Isn’t Online Age Verification Just Like Showing Your ID In Person?

11 December 2025 at 03:00

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

One of the most common refrains we hear from age verification proponents is that online ID checks are nothing new. After all, you show your ID at bars and liquor stores all the time, right? And it’s true that many places age-restrict access in-person to various goods and services, such as tobacco, alcohol, firearms, lottery tickets, and even tattoos and body piercings.

But this comparison falls apart under scrutiny. There are fundamental differences between flashing your ID to a bartender and uploading government documents or biometric data to websites and third-party verification companies. Online age-gating is more invasive, affects far more people, and poses serious risks to privacy, security, and free speech that simply don't exist when you buy a six-pack at the corner store.

Online age verification burdens many more people.

Online age restrictions are imposed on many, many more users than in-person ID checks. Because of the sheer scale of the internet, regulations affecting online content sweep in an enormous number of adults and youth alike, forcing them to disclose sensitive personal data just to access lawful speech, information, and services. 

Additionally, age restrictions in the physical world affect only a limited number of transactions: those involving a narrow set of age-restricted products or services. Typically this entails a bounded interaction about one specific purchase.

Online age verification laws, on the other hand, target a broad range of internet activities and general purpose platforms and services, including social media sites and app stores. And these laws don’t just wall off specific content deemed harmful to minors (like a bookstore would); they age-gate access to websites wholesale. This is akin to requiring ID every time a customer walks into a convenience store, regardless of whether they want to buy candy or alcohol.

There are significant privacy and security risks that don’t exist offline.

In offline, in-person scenarios, a customer typically provides their physical ID to a cashier or clerk directly. Oftentimes, customers need only flash their ID for a quick visual check, and no personal information is uploaded to the internet, transferred to a third-party vendor, or stored. Online age-gating, on the other hand, forces users to upload—not just momentarily display—sensitive personal information to a website in order to gain access to age-restricted content. 

This creates a cascade of privacy and security problems that don’t exist in the physical world. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee it will be handled securely. You have no direct control over who receives and stores your personal data, where it is sent, or how it may be accessed, used, or leaked outside the immediate verification process. 

Data submitted online rarely just stays between you and one other party. All online data is transmitted through a host of third-party intermediaries, and almost all websites and services also host a network of dozens of private, third-party trackers managed by data brokers, advertisers, and other companies that are constantly collecting data about your browsing activity. The data is shared with or sold to additional third parties and used to target behavioral advertisements. Age verification tools also often rely on third parties just to complete a transaction: a single instance of ID verification might involve two or three different third-party partners, and age estimation services often work directly with data brokers to offer a complete product. Users’ personal identifying data then circulates among these partners. 

All of this increases the likelihood that your data will leak or be misused. Unfortunately, data breaches are an endemic part of modern life, and the sensitive, often immutable, personal data required for age verification is just as susceptible to being breached as any other online data. Age verification companies can be—and already have been—hacked. Once that personal data gets into the wrong hands, victims are vulnerable to targeted attacks both online and off, including fraud and identity theft.

Troublingly, many age verification laws fail to protect user security even minimally: they provide no private right of action to sue a company if personal data is breached or misused. This leaves you without a direct remedy should something bad happen.

Some proponents claim that age estimation is a privacy-preserving alternative to ID-based verification. But age estimation tools still require biometric data collection, often demanding users submit a photo or video of their face to access a site. And again, once submitted, there’s no way for you to verify how that data is processed or stored. Requiring face scans also normalizes pervasive biometric surveillance and creates infrastructure that could easily be repurposed for more invasive tracking. Once we’ve accepted that accessing lawful speech requires submitting our faces for scanning, we’ve crossed a threshold that’s difficult to walk back.

Online age verification creates even bigger barriers to access.

Online age gates create more substantial access barriers than in-person ID checks do. For those concerned about privacy and security, there is no online analog to a quick visual check of your physical ID. Users may be justifiably discouraged from accessing age-gated websites if doing so means uploading personal data and creating a potentially lasting record of their visit to that site.

Given these risks, age verification also imposes barriers to remaining anonymous that don't typically exist in-person. Anonymity can be essential for those wishing to access sensitive, personal, or stigmatized content online. And users have a right to anonymity, which is “an aspect of the freedom of speech protected by the First Amendment.” Even if a law requires data deletion, users must still be confident that every website and online service with access to their data will, in fact, delete it—something that is in no way guaranteed.

In-person ID checks are additionally less likely to wrongfully exclude people due to errors. Online systems that rely on facial scans are often incorrect, especially when applied to users near the legal age of adulthood. These tools are also less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds, for users with disabilities, and for transgender individuals. This leads to discriminatory outcomes and exacerbates harm to already marginalized communities. And while in-person shoppers can speak with a store clerk if issues arise, these online systems often rely on AI models, leaving users who are incorrectly flagged as minors with little recourse to challenge the decision.

In-person interactions may also be less burdensome for adults who don’t have up-to-date ID. An older adult who forgets their ID at home or lacks current identification is not likely to face the same difficulty accessing material in a physical store, since there are usually distinguishing physical differences between young adults and those older than 35. A visual check is often enough. This matters, as a significant portion of the U.S. population does not have access to up-to-date government-issued IDs. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification.

We’re talking about First Amendment-protected speech.

It's important not to lose sight of what’s at stake here. The good or service age-gated by these laws isn’t alcohol or cigarettes—it’s First Amendment-protected speech. Whether the target is social media platforms or any other online forum for expression, age verification blocks access to constitutionally protected content. 

Access to many of these online services is also necessary to participate in the modern economy. While those without ID may function just fine without being able to purchase luxury products like alcohol or tobacco, requiring ID to participate in basic communication technology significantly hinders people’s ability to engage in economic and social life.

This is why it’s wrong to claim online age verification is equivalent to showing ID at a bar or store. This argument handwaves away genuine harms to privacy and security, dismisses barriers to access that will lock millions out of online spaces, and ignores how these systems threaten free expression. Ignoring these threats won’t protect children, but it will compromise our rights and safety.

Age Verification Is Coming For the Internet. We Built You a Resource Hub to Fight Back.

10 December 2025 at 18:48

Age verification laws are proliferating fast across the United States and around the world, creating a dangerous and confusing tangle of rules about what we’re all allowed to see and do online. Though these mandates claim to protect children, in practice they create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.

The term “age verification” is colloquially used to describe a wide range of age assurance technologies, from age verification systems that force you to upload government ID, to age estimation tools that scan your face, to systems that infer your age by making you share personal data. While different laws call for different methods, one thing remains constant: every method out there collects your sensitive, personal information and creates barriers to accessing the internet. We refer to all of these requirements as age verification, age assurance, or age-gating.

If you’re feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you’re not alone. It’s a lot. But understanding how these mandates work and who they harm is critical to keeping yourself and your loved ones safe online. Age verification is lurking around every corner these days, so we must fight back to protect the internet that we know and love. 

That’s why today, we’re launching EFF’s Age Verification Resource Hub (EFF.org/Age): a one-stop shop to understand what these laws actually do, what’s at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and yes—safe—internet. 

Why Age Verification Mandates Are a Problem

In the U.S., more than half of all states have now passed laws imposing age-verification requirements on online platforms. Congress is considering even more at the federal level, with a recent House hearing weighing nineteen distinct proposals relating to young people’s online safety—some sweeping, some contradictory, and each one more drastic and draconian than the last.

The rest of the world is moving in the same direction. The UK’s Online Safety Act went into effect this summer, Australia’s new law barring access to social media for anyone under 16 goes live today, and a slew of other countries are currently considering similar restrictions.

We all want young people to be safe online. However, age verification is not the silver bullet that lawmakers want you to think it is. In fact, age-gating mandates will do more harm than good, especially for the young people they claim to protect. They undermine the fundamental speech rights of adults and young people alike; create new barriers to accessing vibrant, lawful, even life-saving content; and needlessly jeopardize all internet users’ privacy, anonymity, and security.

If legislators want to meaningfully improve online safety, they should pass a strong, comprehensive federal privacy law instead of building new systems of surveillance, censorship, and exclusion.  

What’s Inside the Resource Hub

Our new hub is built to answer the questions we hear from users every day, such as:

  • How do age verification laws actually work?
  • What’s the difference between age verification, age estimation, age assurance, and all the other confusing technical terms I’m hearing?
  • What’s at stake for me, and who else is harmed by these systems?
  • How can I keep myself, my family, and my community safe as these laws continue to roll out?
  • What can I do to fight back?
  • And if not age verification, what else can we do to protect the online safety of our young people?

Head over to EFF.org/Age to explore our explainers, user-friendly guides, technical breakdowns, and advocacy tools—all indexed in the sidebar for easy browsing. And today is just the start, so keep checking back over the next several weeks as we continue to build out the site with new resources and answers to more of your questions on all things age verification.

Join Us: Reddit AMA & EFFecting Change Livestream Events

To celebrate the launch of EFF.org/Age, and to hear directly from you how we can be most helpful in this fight, we’re hosting two exciting events:

1. Reddit AMA on r/privacy

Next week, our team of EFF activists, technologists, and lawyers will be hanging out over on Reddit’s r/privacy subreddit to directly answer your questions on all things age verification. We’re looking forward to connecting with you and hearing how we can help you navigate these changing tides, so come on over to r/privacy on Monday (12/15), Tuesday (12/16), and Wednesday (12/17), and ask us anything!

2. EFFecting Change Livestream Panel: “The Human Cost of Online Age Verification”

Then, on January 15th at 12pm PT, we’re hosting a livestream panel featuring Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; Hana Memon, Software Developer at Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. Join us by RSVPing at https://livestream.eff.org/.

A Resource to Empower Users

Age-verification mandates are reshaping the internet in ways that are invasive, dangerous, and deeply unnecessary. But users are not powerless! We can challenge these laws, protect our digital rights, and build a safer digital world for all internet users, no matter their ages. Our new resource hub is here to help—so explore, share, and join us in the fight for a better internet.

10 (Not So) Hidden Dangers of Age Verification

8 December 2025 at 11:24

It’s nearly the end of 2025, and half of U.S. states, as well as the UK, now require you to upload your ID or scan your face to watch “sexual content.” A handful of states and Australia now have various requirements to verify your age before you can create a social media account.

Age-verification laws may sound straightforward to some: protect young people online by making everyone prove their age. But in reality, these mandates force users into one of two flawed systems—mandatory ID checks or biometric scans—and both are deeply discriminatory. These proposals burden everyone’s right to speak and access information online, and structurally exclude the very people who rely on the internet most. In short, although these laws are often passed with the intention of protecting children from harm, in reality they harm adults and children alike. 

Here’s who gets hurt, and how: 

   1.  Adults Without IDs Get Locked Out

Document-based verification assumes everyone has the right ID, in the right name, at the right address. About 15 million adult U.S. citizens don’t have a driver’s license, and 2.6 million lack any government-issued photo ID at all. Another 34.5 million adults don't have a driver's license or state ID with their current name and address.

Specifically:

  • 18% of Black adults don't have a driver's license at all.
  • Black and Hispanic Americans are disproportionately less likely to have current licenses.
  • Undocumented immigrants often cannot obtain state IDs or driver's licenses.
  • People with disabilities are less likely to have current identification.
  • Lower-income Americans face greater barriers to maintaining valid IDs.

Some laws allow platforms to ask for financial documents like credit cards or mortgage records instead. But they still overlook the fact that nearly 35% of U.S. adults also don't own homes, and close to 20% of households don't have credit cards. Immigrants, regardless of legal status, may also be unable to obtain credit cards or other financial documentation.

   2.  Communities of Color Face Higher Error Rates

Platforms that rely on AI-based age-estimation systems often use a webcam selfie to guess users’ ages. But these algorithms don’t work equally well for everyone. Research has consistently shown that they are less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds; that they often misclassify those adults as being under 18; and sometimes take longer to process, creating unequal access to online spaces. This mirrors the well-documented racial bias in facial recognition technologies. The result is that technology’s inherent biases can block people from speaking online or accessing others’ speech.

   3.  People with Disabilities Face More Barriers

Age-verification mandates most harshly affect people with disabilities. Facial recognition systems routinely fail to recognize faces with physical differences, affecting an estimated 100 million people worldwide who live with facial differences, and “liveness detection” can exclude folks with limited mobility. As these technologies become gatekeepers to online spaces, people with disabilities find themselves increasingly blocked from essential services and platforms with no specified appeals processes that account for disability.

Document-based systems don't solve this problem either—as mentioned earlier, people with disabilities are less likely to possess current driver's licenses, so document-based age-gating technologies are equally exclusionary.

   4.  Transgender and Non-Binary People Are Put At Risk

Age-estimation technologies perform worse on transgender individuals and cannot classify non-binary genders at all. For the 43% of transgender Americans who lack identity documents that correctly reflect their name or gender, age verification creates an impossible choice: provide documents with dead names and incorrect gender markers, potentially outing themselves in the process, or lose access to online platforms entirely—a risk that no one should be forced to take just to use social media or access legal content.

   5.  Anonymity Becomes a Casualty

Age-verification systems are, at their core, surveillance systems. By requiring identity verification to access basic online services, we risk creating an internet where anonymity is a thing of the past. For people who rely on anonymity for safety, this is a serious issue. Domestic abuse survivors need to stay anonymous to hide from abusers who could track them through their online activities. Journalists, activists, and whistleblowers regularly use anonymity to protect sources and organize without facing retaliation or government surveillance. And in countries under authoritarian rule, anonymity is often the only way to access banned resources or share information without being silenced. Age-verification systems that demand government IDs or biometric data would strip away these protections, leaving the most vulnerable exposed.

   6.  Young People Lose Access to Essential Information 

Because state-imposed age-verification rules either block young people from social media or require them to get parental permission before logging on, they can deprive minors of access to important information about their health, sexuality, and gender. Many U.S. states mandate “abstinence only” sexual health education, making the internet a key resource for education and self-discovery. But age-verification laws can end up blocking young people from accessing that critical information. And this isn't just about porn; it’s about sex education, mental health resources, and even important literature. Some states and countries may start going after content they deem “harmful to minors,” which could include anything from books on sexual health to art, history, and even award-winning novels. And let’s be clear: these laws often get used to target anything that challenges certain political or cultural narratives, from diverse educational materials to media that simply includes themes of sexuality or gender diversity. What begins as a “protection” for kids could easily turn into a full-on censorship movement, blocking content that’s actually vital for minors’ development, education, and well-being. 

This is also especially harmful to homeschoolers, who rely on the internet for research, online courses, and exams. For many, the internet is central to their education and social lives. The internet is also crucial for homeschoolers' mental health, as many already struggle with isolation. Age-verification laws would restrict access to resources that are essential for their education and well-being.

   7.  LGBTQ+ Youth Are Denied Vital Lifelines

For many LGBTQ+ young people, especially those with unsupportive or abusive families, the internet can be a lifeline. For young people facing family rejection or violence due to their sexuality or gender identity, social media platforms often provide crucial access to support networks, mental health resources, and communities that affirm their identities. Age verification systems that require parental consent threaten to cut them off from these crucial supports. 

When parents must consent to or monitor their children's social media accounts, LGBTQ+ youth who lack family support lose these vital connections. LGBTQ+ youth are also disproportionately likely to be unhoused and lack access to identification or parental consent, further marginalizing them. 

   8.  Youth in Foster Care Systems Are Completely Left Out

Age verification bills that require parental consent fail to account for young people in foster care, particularly those in group homes without legal guardians who can provide consent, or with temporary foster parents who cannot prove guardianship. These systems effectively exclude some of the most vulnerable young people from accessing online platforms and resources they may desperately need.

   9.  All of Our Personal Data is Put at Risk

An age-verification system also creates acute privacy risks for adults and young people. Requiring users to upload sensitive personal information (like government-issued IDs or biometric data) to verify their age creates serious privacy and security risks. Under these laws, users would not just momentarily display their ID, as one does at a liquor store. Instead, they’d submit their ID to third-party companies, raising major concerns over who receives, stores, and controls that data. Once uploaded, this personal information could be exposed, mishandled, or even breached, as we've seen with past data hacks. Age-verification systems are no strangers to being compromised—companies like AU10TIX and platforms like Discord have faced high-profile data breaches, exposing users’ most sensitive information for months or even years. 

The more places personal data passes through, the higher the chances of it being misused or stolen. Users are left with little control over their own privacy once they hand over these immutable details, making this approach to age verification a serious risk for identity theft, blackmail, and other privacy violations. Children are already a major target for identity theft, and these mandates perversely increase the risk that they will be harmed.

   10.  All of Our Free Speech Rights Are Trampled

The internet is today’s public square—the main place where people come together to share ideas, organize, learn, and build community. Even the Supreme Court has recognized that social media platforms are among the most powerful tools ordinary people have to be heard.

Age-verification systems inevitably block some adults from accessing lawful speech and allow some young people under 18 to slip through anyway. Because the systems are both over-inclusive (blocking adults) and under-inclusive (failing to block people under 18), they restrict lawful speech in ways that violate the First Amendment. 

The Bottom Line

Age-verification mandates create barriers along lines of race, disability, gender identity, sexual orientation, immigration status, and socioeconomic class. While these requirements threaten everyone’s privacy and free-speech rights, they fall heaviest on communities already facing systemic obstacles.

The internet is essential to how people speak, learn, and participate in public life. When access depends on flawed technology or hard-to-obtain documents, we don’t just inconvenience users; we deepen existing inequalities and silence the people who most need these platforms. As outlined, every available method—facial age estimation, document checks, financial records, or parental consent—systematically excludes or harms marginalized people. The real question isn’t whether these systems discriminate, but how extensively.

Privacy is For the Children (Too)

26 November 2025 at 02:44

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the third in a short series that explains digital ID and the pending use case of age verification. Here, we cover alternative frameworks on age controls, updates on parental controls, and the importance of digital privacy in an increasingly hostile political climate. You can read the first two posts here and here.

Observable harms of age verification legislation in the UK, US, and elsewhere:

As we witness the effects of the Online Safety Act in the UK and over 25 state age verification laws in the U.S., it has become even more apparent that mandatory age verification is more of a detriment than a benefit to the public. Here’s what we’re seeing:

It’s obvious: age verification will not keep children safe online. Rather, it is a large proverbial hammer that nails everyone—adults and young people alike—into restrictive parameters of what the government deems appropriate content. That reality is more obvious and tangible now that we’ve seen age-restrictive regulations roll out in various states and countries. But that doesn’t have to be the future if we turn away from age-gating the web.

Keeping kids safe online (or anywhere IRL, let’s not forget) is a complex social issue that cannot be resolved with technology alone.

The legislators responsible for online age verification bills must confront that they are currently addressing complex social issues with a problematic array of technology. Most of policymakers’ concerns about minors' engagement with the internet can be sorted into one of three categories:

  • Content risks: The negative implications from exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
  • Contact risks: The potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.

Parental controls—which already exist!—can help.

These three categories of possible risks will not be eliminated by mandatory age verification—or any form of techno-solutionism, for that matter. Mandatory age checks will instead block access to vital online communities and resources for those people—including young people—who need them the most. It’s an ineffective and disproportionate tool to holistically address young people’s online safety. 

However, these risks can be partially addressed with better-utilized and better-designed parental controls and family accounts. Existing parental controls are woefully underutilized, according to one survey that collected answers from 1,000 parents. Adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. Making parental controls more flexible and accessible, so parents better understand the tools and how to use them, could increase adoption and address content risk more effectively than a broad government censorship mandate.  

Recently, Android made its parental controls easier to set up. It rolled out features that directly address content risk by assisting parents who wish to block specific apps and filter out mature content from Google Chrome and Google Search. Apple also updated its parental controls settings this past summer, instituting new ways for parents to manage child accounts and giving app developers access to a Declared Age Range API. Parents can declare an age range, and apps can respond to the range established in a child account, without the account ever handing over a birthdate. This gives parents some added flexibility, such as age-range options beyond just 13+. A diverse range of tools and flexible settings provide the best options for families and empower parents and guardians to decide and tailor what online safety means for their own children—at any age, maturity level, or type of individual risk.

Privacy laws can also help minors online.

Parental controls are useful in the hands of responsible guardians. But what about children who are neglected or abused by those in charge of them? Age verification laws cannot solve this problem; these laws simply share possible abuse of power with the state. To address social issues, we need more efforts directed at the family and community structures around young people, and initiatives that can mitigate the risk factors of abuse instead of resorting to government control over speech.

While age verification is not the answer, those seeking legislative solutions can instead focus their attention on privacy laws—which are more than capable of assisting minors online, no matter the state of their at-home care. Comprehensive data privacy legislation, which EFF has long advocated for, is perhaps the most obvious way to keep the data of young people safe online. Data brokers gather a vast amount of data and assemble new profiles of information as a young person uses the internet. These data sets also contribute to surveillance and teach minors that it is normal to be tracked as they use the web. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do and sell it to whoever will buy it. For example, many age-checking tools use data brokers to perform “age estimation” on the emails used to sign up for an online service, further incentivizing a vicious cycle of data collection and retention. Ultimately, privacy-encroaching companies are rewarded for years of mishandling our data with lucrative government contracts.

These systems expose young people to much more risk, online and off, by subjecting their private lives to long-term surveillance—a danger that only grows in authoritarian political climates. Age verification proponents often acknowledge that there are privacy risks, and dismiss the consequences by claiming the trade-off will “protect children.” These systems don’t foster safer online practices for young people; they encourage increasingly invasive ways for governments to define who is and isn’t free to roam online. If we don’t re-establish ways to maintain online anonymity today, our children’s internet could become unrecognizable and unusable, not only for them but for many adults as well. 

Actions you can take today to protect young people online:

  • Use existing parental controls to decide for yourself what your kid should and shouldn’t see, who they should engage with, etc.
  • Discuss the importance of online privacy and safety with your kids and community.
  • Provide spaces and resources for young people to flexibly communicate with their schools, guardians, and community.
  • Support comprehensive privacy legislation for all.
  • Support legislators’ efforts to regulate the out-of-control data broker industry by banning behavioral ads.

Join EFF in opposing mandatory age verification and age gating laws—help us keep your kids safe and protect the future of the internet, privacy, and anonymity.

A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe

14 November 2025 at 17:34

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.

EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.

TAKE ACTION

TELL CONGRESS: THE GUARD ACT WON'T KEEP US SAFE

Young People's Access to Legitimate AI Tools Could Be Cut Off Entirely. 

The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.

The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true.

By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different. 

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.

Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.  

EFF has long documented the dangers of age-verification systems:

  • They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
  • They implement mass surveillance systems and ruin anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
  • They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
  • They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.

As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.

Vagueness + Steep Fines = Censorship. Full Stop. 

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses, including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.

The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.

Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.

Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.

In other words, protecting young people’s online safety is incredibly important, but to do so by forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not a good way out of this.

The GUARD Act would make the internet less free, less private, and less safe for everyone.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.

Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.

TAKE ACTION

TELL CONGRESS: OPPOSE THE GUARD ACT

Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing

13 November 2025 at 12:38

Remember when you thought age verification laws couldn't get any worse? Well, lawmakers in Wisconsin, Michigan, and beyond are about to blow you away.

It's unfortunately no longer enough to force websites to check your government-issued ID before you can access certain content, because politicians have now discovered that people are using Virtual Private Networks (VPNs) to protect their privacy and bypass these invasive laws. Their solution? Entirely ban the use of VPNs. 

Yes, really.

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing—potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction. 

This follows a notable pattern: As we’ve explained previously, lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature.

Wisconsin’s bill has already passed the State Assembly and is now moving through the Senate. If it becomes law, Wisconsin could become the first state where using a VPN to access certain content is banned. Michigan lawmakers have proposed similar legislation that, among other things, would force internet providers to actively monitor and block VPN connections, though it has not moved through the legislature. And in the UK, officials are calling VPNs "a loophole that needs closing."

This is actually happening. And it's going to be a disaster for everyone.

Here's Why This Is A Terrible Idea 

VPNs mask your real location by routing your internet traffic through a server somewhere else. When you visit a website through a VPN, that website only sees the VPN server's IP address, not your actual location. It's like sending a letter through a P.O. box so the recipient doesn't know where you really live. 

So when Wisconsin demands that websites "block VPN users from Wisconsin," they're asking for something that's technically impossible. Websites have no way to tell if a VPN connection is coming from Milwaukee, Michigan, or Mumbai. The technology just doesn't work that way.
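The mechanics can be sketched in a few lines of Python. This is a toy simulation, not a real networking stack; the function names and example IP addresses are purely illustrative:

```python
# Toy simulation of what a website "sees" when a request arrives
# directly versus through a VPN hop. All names and addresses are
# illustrative examples, not real infrastructure.

def receive_request(source_ip):
    """The website only ever learns the IP of the last hop that
    connected to it."""
    return f"visitor IP: {source_ip}"

def vpn_forward(client_ip, vpn_ip):
    """The VPN terminates the client's connection and opens a new one
    to the destination, so the client's own IP never reaches the site."""
    return vpn_ip

client_ip = "203.0.113.7"   # a user who happens to be in Milwaukee
vpn_ip = "198.51.100.42"    # a VPN server that could be anywhere

print(receive_request(client_ip))                        # direct: real IP exposed
print(receive_request(vpn_forward(client_ip, vpn_ip)))   # via VPN: only the server's IP
```

The second request looks identical whether the client is in Wisconsin or overseas, which is exactly why "block VPN users from Wisconsin" is not an enforceable instruction.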

Websites subject to this proposed law are left with this choice: either cease operation in Wisconsin, or block all VPN users, everywhere, just to avoid legal liability in the state. One state's terrible law is attempting to break VPN access for the entire internet, and the unintended consequences of this provision could far outweigh any theoretical benefit.

Almost Everyone Uses VPNs

Let's talk about who lawmakers are hurting with these bills, because it sure isn't just people trying to watch porn without handing over their driver's license.

  1. Businesses run on VPNs. Every company with remote employees uses VPNs. Every business traveler connecting through sketchy hotel Wi-Fi needs one. Companies use VPNs to protect client and employee data, secure internal communications, and prevent cyberattacks. 
  2. Students need VPNs for school. Universities require students to use VPNs to access research databases, course materials, and library resources. These aren't optional, and many professors literally assign work that can only be accessed through the school VPN. The University of Wisconsin-Madison’s WiscVPN, for example, “allows UW–Madison faculty, staff and students to access University resources even when they are using a commercial Internet Service Provider (ISP).” 
  3. Vulnerable people rely on VPNs for safety. Domestic abuse survivors use VPNs to hide their location from their abusers. Journalists use them to protect their sources. Activists use them to organize without government surveillance. LGBTQ+ people in hostile environments—both in the US and around the world—use them to access health resources, support groups, and community. For people living under censorship regimes, VPNs are often their only connection to vital resources and information their governments have banned. 
  4. Regular people just want privacy. Maybe you don't want every website you visit tracking your location and selling that data to advertisers. Maybe you don't want your internet service provider (ISP) building a complete profile of your browsing history. Maybe you just think it's creepy that corporations know everywhere you go online. VPNs can protect everyday users from everyday tracking and surveillance.

It’s A Privacy Nightmare

Here's what happens if VPNs get blocked: everyone has to verify their age by submitting government IDs, biometric data, or credit card information directly to websites—without any encryption or privacy protection.

We already know how this story ends. Companies get hacked. Data gets breached. And suddenly your real name is attached to the websites you visited, stored in some poorly-secured database waiting for the inevitable leak. This has already happened, and it will happen again; it's not a matter of if but when. And when it does, the repercussions will be huge.

Forcing people to give up their privacy to access legal content is the exact opposite of good policy. It's surveillance dressed up as safety.

"Harmful to Minors" Is Not a Catch-All 

Here's another fun feature of these laws: they're trying to broaden the definition of “harmful to minors” to sweep in a host of speech that is protected for both young people and adults.

Historically, states have been permitted to prohibit people under 18 years old from accessing certain sexual materials that adults can access under the First Amendment. But the definition of what constitutes “harmful to minors” is narrow: it generally requires that the materials have almost no social value to minors and that, taken as a whole, they appeal to minors’ “prurient sexual interests.” 

Wisconsin's bill defines “harmful to minors” much more broadly. It applies to materials that merely describe sex or feature descriptions/depictions of human anatomy. This definition would likely encompass a wide range of literature, music, television, and films that are protected under the First Amendment for both adults and young people, not to mention basic scientific and medical content.

Additionally, the bill’s definition would apply to any website where more than one third of the site’s material is "harmful to minors." Given the breadth of the definition and its one-third trigger, we anticipate that Wisconsin could argue that the law applies to most social media websites. And it’s not hard to imagine, as these topics become politicized, Wisconsin claiming it applies to websites containing LGBTQ+ health resources, basic sexual education resources, and reproductive healthcare information. 
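The arithmetic of the one-third trigger is simple to state, which is part of what makes it so sweeping. A minimal sketch (the helper function is ours for illustration, not statutory text):

```python
def covered_by_bill(flagged_items, total_items):
    """Simplified sketch of the bill's one-third trigger: a site is
    swept in when more than one third of its material is deemed
    "harmful to minors" under the bill's broad definition."""
    return flagged_items / total_items > 1 / 3

# The broader the definition, the more items get flagged, and the
# more sites cross the threshold:
print(covered_by_bill(40, 100))  # True  (40% flagged: site covered)
print(covered_by_bill(30, 100))  # False (30% flagged: just under the line)
```

Because the state controls both the definition (how many items count as "flagged") and the threshold, it effectively controls which sites the law reaches.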

The breadth of the bill’s definition isn't a bug; it's a feature. It gives the state a vast amount of discretion to decide which speech is “harmful” to young people, and the power to decide what's "appropriate" and what isn't. History shows us those decisions most often harm marginalized communities.

It Won’t Even Work

Let's say Wisconsin somehow manages to pass this law. Here's what will actually happen:

People who want to bypass it will use non-commercial VPNs, open proxies, or cheap virtual private servers that the law doesn't cover. They'll find workarounds within hours. The internet always routes around censorship. 

Even in a fantasy world where every website successfully blocked all commercial VPNs, people would just make their own. You can route traffic through cloud services like AWS or DigitalOcean, tunnel through someone else's home internet connection, use open proxies, or spin up a cheap server for less than a dollar. 

Meanwhile, everyone else (businesses, students, journalists, abuse survivors, regular people who just want privacy) will have their VPN access impacted. The law will accomplish nothing except making the internet less safe and less private for users.

As we’ve mentioned previously, while VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. Like the larger age verification legislation they are a part of, VPN-blocking provisions simply don't work. They harm millions of people, and they set a terrifying precedent for government control of the internet. More fundamentally, legislators need to recognize that age verification laws themselves are the problem. They don't work, they violate privacy, they're trivially easy to circumvent, and they create far more harm than they prevent.

A False Dilemma

People have (predictably) turned to VPNs to protect their privacy as they watched age verification mandates proliferate around the world. Instead of taking this as a sign that mass surveillance isn't popular, lawmakers have decided the real problem is that these privacy tools exist at all.

Let's be clear: lawmakers need to abandon this entire approach.

The answer to "how do we keep kids safe online" isn't "destroy everyone's privacy." It's not "force people to hand over their IDs to access legal content." And it's certainly not "ban access to the tools that protect journalists, activists, and abuse survivors."

If lawmakers genuinely care about young people's well-being, they should invest in education, support parents with better tools, and address the actual root causes of harm online. What they shouldn't do is wage war on privacy itself. Attacks on VPNs are attacks on digital privacy and digital freedom. And this battle is being fought by people who clearly have no idea how any of this technology actually works. 

If you live in Wisconsin—reach out to your Senator and urge them to kill A.B. 105/S.B. 130. Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.

Age Verification, Estimation, Assurance, Oh My! A Guide to the Terminology

30 October 2025 at 18:37

If you've been following the wave of age-gating laws sweeping across the country and the globe, you've probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we're talking about your rights, your privacy, your data, and who gets to access information online.

So let's clear up the confusion. Here's your guide to the terminology that's shaping these laws, and why you should care about the distinctions.

Age Gating: “No Kids Allowed”

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. It simply refers to the fact that a restriction exists. Think of it as the concept of “you must be this old to enter” without getting into the details of how they’re checking. 

Age Assurance: The Umbrella Term

Think of age assurance as the catch-all category. It covers any method an online service uses to figure out how old you are with some level of confidence. That's intentionally vague, because age assurance includes everything from the most basic check-the-box systems to full-blown government ID scanning.

Age assurance is the big tent that contains all the other terms we're about to discuss below. When a company or lawmaker talks about "age assurance," they're not being specific about how they're determining your age—just that they're trying to. For decades, the internet operated on a “self-attestation” system where you checked a box saying you were 18, and that was it. These new age-verification laws are specifically designed to replace that system. When lawmakers say they want "robust age assurance," what they really mean is "we don't trust self-attestation anymore, so now you need to prove your age beyond just swearing to it."

Age Estimation: Letting the Algorithm Decide

Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you.

This might include:

  • Analyzing your face through a video selfie or photo
  • Examining your voice
  • Looking at your online behavior—what you watch, what you like, what you post
  • Checking your existing profile data

Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, an algorithm analyzes your face, and spits out an estimated age range. Sounds convenient, right?

Here's the problem: “estimation” is exactly that, a guess, and it is inherently imprecise. Age estimation is notoriously unreliable, especially for teenagers—the exact group these laws claim to protect. An algorithm might tell a website you're somewhere between 15 and 19 years old. That's not helpful when the cutoff is 18, and what's at stake is a young person's constitutional rights.
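The bind this puts platforms in can be sketched in a few lines. This is a hypothetical sketch of the decision logic such systems force, not any vendor's actual implementation; the function name and labels are ours:

```python
def age_gate_decision(est_low, est_high, cutoff=18):
    """Hypothetical platform logic: if the estimated age range falls
    entirely on one side of the cutoff, the guess decides; if the
    range straddles the cutoff, the user gets escalated to full
    identity verification."""
    if est_low >= cutoff:
        return "allow"
    if est_high < cutoff:
        return "block"
    return "escalate to ID verification"

print(age_gate_decision(25, 32))  # clearly over: allow
print(age_gate_decision(15, 19))  # straddles 18: escalate
```

For the teenagers these laws target, the estimated range almost always straddles the cutoff, so the "privacy-friendly" estimation path collapses into ID collection for exactly the users it was supposed to spare.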

And it gets worse. These systems consistently fail for certain groups, including people of color, trans individuals, and people with disabilities.

When estimation fails (and it often does), users get kicked to the next level: actual verification. Which brings us to…

Age Verification: “Show Me Your Papers”

Age verification is the most invasive option. This is where you have to prove your exact age or date of birth, rather than, for example, simply demonstrate that you have crossed some age threshold (like 18 or 21 or 65). EFF generally refers to most age gates and mandates on young people’s access to online information as “age verification,” as most of them typically require you to submit hard identifiers like:

  • Government-issued ID (driver's license, passport, state ID)
  • Credit card information
  • Utility bills or other documents
  • Biometric data

This is what a lot of new state laws are actually requiring, even when they use softer language like "age assurance." Age verification doesn't just confirm you're over 18; it reveals your full identity. Your name, address, date of birth, photo—everything.

Here's the critical thing to understand: age verification is really identity verification. You're not just proving you're old enough—you're proving exactly who you are. And that data has to be stored, transmitted, and protected by every website that collects it.

We already know how that story ends. Data breaches are inevitable. And when a database containing your government ID tied to your adult content browsing history gets hacked—and it will—the consequences can be devastating.

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they're actually proposing. A law that requires "age assurance" sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it's not moderate at all—it's mass surveillance. Similarly, when Instagram says it's using "age estimation" to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver's license instead, the privacy promise evaporates.

Language matters because it shapes how we think about these systems. "Assurance" sounds gentle. "Verification" sounds official. "Estimation" sounds technical and impersonal, and also admits its inherent imprecision. 

Here's the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don't know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don't know that verification systems have error rates. They don't even seem to understand that the terms they're using mean different things. The fact that their terminology is all over the place—using "age assurance," "age verification," and "age estimation" interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. "Assurance" sounds gentle. "Verification" sounds official. "Estimation" sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it's your privacy, your data, and your ability to access the internet without constant identity checks. Don't let fuzzy language disguise what these systems really do.
