Age Verification Threats Across the Globe: 2025 in Review

15 December 2025 at 13:17

Age verification mandates won't magically keep young people safer online, but that hasn't stopped governments around the world from spending this year implementing, or attempting to introduce, legislation requiring all online users to verify their ages before accessing the digital space.

The UK’s misguided approach to protecting young people online grabbed many headlines due to the reckless and chaotic rollout of the country’s Online Safety Act, but the UK was not alone: courts in France ruled that porn websites can check users’ ages; the European Commission pushed forward with plans to test its age-verification app; and Australia’s ban on under-16s accessing social media was recently implemented.

Through this wave of age verification bills, politicians are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most.

In response, we’ve spent this year urging governments to pause these legislative initiatives and instead protect everyone’s right to speak and access information online. Here are three ways we pushed back against these bills in 2025:

Social Media Bans for Young People

Banning a certain user group changes nothing about a platform’s problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And assuming that young people will always find ways to circumvent age restrictions, the ones that do will be left without any protections or age-appropriate experiences.

Yet Australia’s government recently decided to ignore these dangers by rolling out a sweeping regime built around age verification that bans users under 16 from having social media accounts. In this world-first ban, platforms are required to introduce age assurance tools to block under-16s, demonstrate that they have taken “reasonable steps” to deactivate accounts used by under-16s, and prevent any new accounts being created, or face fines of up to 49.5 million Australian dollars ($32 million USD). The 10 banned platforms—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch, and X—have each said they’ll comply with the legislation, leading to young people losing access to their accounts overnight.

Similarly, through its guidelines under Article 28 of the Digital Services Act, the European Commission this year took a first step towards mandatory age verification that could undermine privacy, expression, and participation rights for young people, rights that are fully enshrined in international human rights law. EFF submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: mandatory age verification measures are not the right way to protect minors, and any online safety measure for young people must also safeguard their privacy and security. Unfortunately, the European Parliament has already gone a step further, proposing an EU digital minimum age of 16 for access to social media, a move that aligns with Commission President Ursula von der Leyen’s recent public support for measures inspired by Australia’s model.

Push for Age Assurance on All Users 

This year, the UK had a moment—and not a good one. In late July, new rules took effect under the Online Safety Act that now require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

The UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. As we argued throughout this year, and during the passage of the Online Safety Act, any attempt to protect young people online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. The approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the very young people that it is trying to protect.

We’re seeing these narratives and regulatory initiatives replicated from the UK to U.S. states and other global jurisdictions, and we’ll continue urging politicians not to follow the UK’s lead in passing similar legislation—and to instead explore more holistic approaches to protecting all users online.

Rushed Age Assurance through the EU Digital Wallet

There is not yet a legal obligation to verify users’ ages at the EU level, but policymakers and regulators are already embracing harmful age verification and age assessment measures in the name of reducing online harms.

These demands steer the debate toward identity-based solutions, such as the EU Digital Identity Wallet, which will become available in 2026. The wallet brings its own set of privacy and security concerns, such as long-term identifiers (which could enable tracking) and over-exposure of personal information. Even more concerning, instead of waiting for the full launch of the EU Digital Identity Wallet, the Commission rushed a “mini AV” app out this year ahead of schedule, citing an urgent need to address concerns about children and the harms that may come to them online.

However, this proposed solution directly ties national ID to an age verification method. It also invites mission creep over what other types of verification could be carried out in EU member states once it is fully deployed: while the focus of the “mini AV” app is for now on verifying age, its release to the public means that the infrastructure to expand ID checks to other purposes is in place, should a government mandate that expansion in the future.

Without the proper safeguards, this infrastructure could be leveraged inappropriately—all the more reason why lawmakers should explore more holistic approaches to children's safety.

Ways Forward

The internet is an essential resource for young people and adults to access information, explore community, and find themselves. The issue of online safety is not solved through technology alone, and young people deserve a more intentional approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves.

Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians to look into what is best, and not what is easy; and in the meantime, we’ll continue fighting for the rights of all users on the internet in 2026.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Age Assurance Methods Explained

9 December 2025 at 21:19

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

EFF is against all mandatory age verification. Not only does it turn the internet into an age-gated cul-de-sac, but it also leaves behind many people who can’t get or don’t have proper and up-to-date documentation. While populations like undocumented immigrants and people experiencing homelessness are more obviously vulnerable groups, these restrictions also impact people with more mundane reasons for not having valid documentation on hand. Perhaps they’ve undergone life changes that impact their status or other information—such as a move, name change, or gender marker change—or perhaps they simply haven’t gotten around to updating their documents. Inconvenient events like these should not be a barrier to going online. People should also retain the right to opt out of unreliable technology and shady practices that could endanger their personal information.

But age restriction mandates threaten all of that. Not only do age-gating laws block adults and youth alike from freely accessing services on the web, they also force users to trade their anonymity—a pillar of online expression—for a system in which they are bound to their real-life identities. And this surveillance regime stretches beyond just age restrictions on certain content; much of this infrastructure is also connected to government plans for creating a digital system of proof of identity.

So how does age gating actually work? The age and identity verification industry has devised countless different methods platforms can purchase to—in theory—figure out the ages and/or identities of their users.  But in practice, there is no technology available that is entirely privacy-protective, fully accurate, and that guarantees complete coverage of the population. Full stop.

Every system of age verification or age estimation demands that users hand over sensitive and oftentimes immutable personal information that links their offline identity to their online activity, risking their safety and security in the process.

With that said, as we see more of these laws roll out across the U.S. and the rest of the world, it’s important to understand the differences between these technologies so you can better identify the specific risks of each method, and make smart decisions about how you share your own data.

Age Assurance Methods

There are many different technologies that are being developed, attempted, and deployed to establish user age. In many cases, a single platform will have implemented a mixture of methods. For example, a user may need to submit both a physical government ID and a face scan as part of a liveness check to establish that they are the person pictured on the physical ID.

Age assurance methods generally fall into three categories:

  1. Age Attestation
  2. Age Estimation
  3. ID-bound Proof

Age Attestation

Self-attestation 

Sometimes, you’ll be asked to declare your age, without requiring any form of verification. One way this might happen is through one-off self-attestation. This type of age attestation has been around for a while; you may have seen it when an alcohol website asks if you’re over 21, or when Steam asks you to input your age to view game content that may not be appropriate for all ages. It’s usually implemented as a pop-up on a website, and the site might ask you for your age every time you enter, or remember it between visits. This sort of attestation signals that the site may not be appropriate for all viewers, but gives users the autonomy and respect to make that decision for themselves.
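For illustration, here is a minimal sketch of how one-off self-attestation might work (hypothetical function names; real sites vary): the user declares a birth date, the site trusts the answer, and nothing needs to be kept beyond a yes/no flag for the session.

```python
from datetime import date

# Hypothetical sketch of a self-attestation age gate: the declared date is
# taken at face value, and only a boolean result is kept for the session.
def passes_self_attestation(birth_date: date, minimum_age: int, today: date) -> bool:
    """Return True if the declared birth date meets the minimum age."""
    years = today.year - birth_date.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years >= minimum_age

print(passes_self_attestation(date(2000, 6, 1), 21, date(2025, 12, 15)))  # True
```

The point of the sketch is what is absent: no ID, no biometrics, and no data retention, which is exactly the autonomy described above.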

An alternative proposed approach to declaring your own age, called device-bound age attestation, is to have you set your age on your operating system or on App Stores before you can make purchases or browse the web. This age or age range might then be shared with websites or apps. On an Apple device, that age can be modified after creation, as long as an adult age is chosen. It’s important to separate device-bound age attestation from methods that require age verification or estimation at the device or app store level (common to digital ID solutions and some proposed laws). It’s only attestation if you’re permitted to set your age to whatever you choose without needing to prove anything to your provider or another party—providing flexibility for age declaration outside of mandatory age verification.

Attestation through parental controls

The sort of parental controls found on Apple and Android devices, Windows computers, and video game consoles provide the most flexible way for parents to manage what content their minor children can access. These settings can be applied through the device operating system, third-party applications, or by establishing a child account. Decisions about what content a young person can access are made via consent-driven mechanisms. As the manager, the parent or guardian will see requests and activity from their child, depending on how strict or lax the settings are. This could include requests to install an app, make a purchase on an app store, communicate with a new contact, or browse a particular website. The parent or guardian can then choose whether or not to accept the request and allow the activity.

One survey that collected answers from 1,000 parents found that parental controls are underutilized. Adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. To help encourage more parents to make use of these settings, companies should continue to make them clearer and easier to use and manage. Parental controls are better suited to accommodating diverse cultural contexts and individual family concerns than a one-size-fits-all government mandate. It’s also safer to use native settings (those provided by the operating system itself) than it is to rely on third-party parental control applications, which have experienced data breaches and often effectively function as spyware.

Age Estimation

Instead of asking you directly, the system guesses your age based on data it collects about you.

Age estimation through photo and facial estimation

Age estimation by photo or live facial age analysis is when a system uses an image of a face to guess a person’s age.

A poorly designed system might improperly store these facial images or retain them for significant periods, creating a risk of data leakage. Our faces are unique, immutable, and constantly on display. In the hands of an adversary, and cross-referenced to other readily available information about us, this information can expose intimate details about us or lead to biometric tracking.

This technology has also proven fickle and often inaccurate, producing false negatives and positives, exacerbating racial biases, and relying on unprotected use of biometric data to complete the analysis. And because it’s usually conducted with AI models, there often isn’t a way for a user to challenge a decision directly without falling back on more intrusive methods like submitting a government ID.
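To make the accuracy problem concrete, here is a sketch of how a deployment might handle an estimator's guess (the buffer value is an assumption, loosely modeled on retail "Challenge 25"-style policies, not any vendor's real logic): anyone whose estimate falls in the uncertain band gets escalated to a more intrusive fallback.

```python
# Hypothetical decision logic around a facial age estimate: because the
# model's guess carries error, a buffer is added around the legal threshold,
# and borderline users are pushed to a more intrusive fallback check.
LEGAL_AGE = 18
BUFFER = 7  # assumed margin; real deployments tune this per model accuracy

def decide(estimated_age: float) -> str:
    if estimated_age >= LEGAL_AGE + BUFFER:
        return "allow"               # clearly above the threshold
    if estimated_age < LEGAL_AGE - BUFFER:
        return "deny"                # clearly below it
    return "fallback_id_check"       # uncertain band: escalate

print(decide(31.2))  # allow
print(decide(22.0))  # fallback_id_check
```

Note what the buffer implies: a wide band of adults, often those the estimators misjudge most, gets funneled into the very ID checks the estimation step was supposed to avoid.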

Age inference based on user data and third party services

Age inference systems estimate how old someone is from the information already attached to an account, or by querying third-party databases (where the account holder may have completed age verification elsewhere) and cross-referencing that with the existing information held on the account.

To determine how old someone is via the account information associated with their email address, services often turn to data brokers. This incentivizes even more collection of our data for the sake of age estimation and rewards data brokers for amassing information on people. Regulation of these age inference services also varies based on each country’s privacy laws.

ID-bound Proof

ID-bound proofs, methods that use your government-issued ID, are often used as a fallback for failed age estimation. Consequently, any government-ID-backed verification disproportionately excludes certain demographics from accessing online services. A significant portion of the U.S. population does not have access to government-issued IDs, with millions of adults lacking a valid driver’s license or state-issued ID. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification. In addition, non-U.S. citizens, including undocumented immigrants, face barriers to acquiring government-issued IDs. The exclusionary nature of document-based verification systems is a major concern, as it could prevent entire communities from accessing essential services or engaging in online spaces.

Physical ID uploaded and stored as an image 

When an image of a physical ID is required, users are forced to upload—not just momentarily display—sensitive personal information, such as government-issued ID or biometric identifiers, to third-party services in order to gain access to age-restricted content. This creates significant privacy and security concerns, as users have no direct control over who receives and stores their personal data, where it is sent, and how it may be accessed, used, or leaked outside the immediate verification process.

Requiring users to digitally hand over government-issued identification to verify their age introduces substantial privacy risks. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee that it will be handled securely. The verification process typically involves transmitting this data across multiple intermediaries, which means the risk of a data breach is heightened. The misuse of sensitive personal data, such as government IDs, has been demonstrated in numerous high-profile cases, including the breach of the age verification company AU10TIX, which exposed login credentials for over a year, and the hack of the messaging application Discord. Justifiable privacy and security concerns may chill users from accessing platforms they are lawfully entitled to access.

Device-bound digital ID

Device-bound digital ID is a credential stored locally on your device. It comes in the form of government or privately run wallet applications, like those offered by Apple and Google. Digital IDs are subject to a higher level of security within the Google and Apple wallets (as they should be). This means they are not synced to your account or across services; if you lose the device, a new credential will need to be issued to the replacement. Websites and services can query your digital ID to reveal only certain information from your ID, like an age range, instead of sharing all of your information. This is called “selective disclosure.”
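The "selective disclosure" idea can be illustrated with a toy example (hypothetical field names; real wallet protocols such as the ISO mobile driving licence add cryptographic signatures, device binding, and issuer trust checks): the verifier names the claims it wants, and the wallet releases only those.

```python
# Toy selective-disclosure sketch: the wallet holds a full credential but
# releases only the claims a verifier explicitly requests.
FULL_CREDENTIAL = {
    "name": "Alex Example",
    "address": "123 Main St",
    "birth_year": 1990,
    "age_over_18": True,
    "age_over_21": True,
}

def disclose(credential: dict, requested_claims: list[str]) -> dict:
    """Return only the requested claims, never the whole credential."""
    return {k: credential[k] for k in requested_claims if k in credential}

# A site checking drinking age learns one boolean, not a name or address.
print(disclose(FULL_CREDENTIAL, ["age_over_21"]))  # {'age_over_21': True}
```

Even in this best case, the verifier learns that a government-backed credential exists and which claims it asked about, which is part of why mandating even "privacy-preserving" wallet checks remains concerning.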

There are many reasons someone may not be able to acquire a digital ID, preventing them from relying on this option. This includes lack of access to a smartphone, sharing devices with another person, or inability to get a physical ID. No universal standards exist governing how ID expiration, name changes, or address updates affect the validity of digital identity credentials. How to handle status changes is left up to the credential issuer.

Asynchronous and Offline Tokens

This is an issued token of some kind that doesn’t necessarily need network access to an external party or service every time you use it to establish your age with a verifier. A common danger in age verification services is the proliferation of multiple third parties and custom solutions, which vary widely in their implementation and security. One proposal to avoid this is to centralize age checks with a trusted service that provides tokens that can be used to pass age checks elsewhere. Although this method still requires a user to submit to age verification or estimation once, after passing the initial facial age estimation or ID check, the user is issued a digital token they can present later to show that they've previously passed an age check. The most popular proposal, AgeKeys, is similar to passkeys in that the tokens are saved to a device or third-party password store, and can then be easily accessed after unlocking with your preferred on-device biometric verification or PIN code.
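As a rough sketch of the token idea (an assumed design for illustration, not the actual AgeKeys specification): after one successful check, an issuer signs a small payload containing only an over-18 flag and an expiry, and a verifier can later validate it without contacting the issuer. A shared HMAC key stands in here for real issuer key material.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"issuer-demo-key"  # stand-in for real issuer key material

def issue_token(over_18: bool, ttl_seconds: int, now: float) -> str:
    """Sign a minimal claim set after a successful age check."""
    payload = json.dumps({"over_18": over_18, "exp": now + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str, now: float) -> bool:
    """Check the signature and expiry without contacting the issuer."""
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64)
    except Exception:
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    return bool(claims["over_18"]) and now < claims["exp"]

token = issue_token(True, ttl_seconds=3600, now=0.0)
print(verify_token(token, now=10.0))    # True: valid and unexpired
print(verify_token(token, now=7200.0))  # False: expired
```

Note that the payload carries no identity, yet the initial issuance still required one, and the token itself can become a linkable artifact if the issuer logs when and where it is redeemed.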

Lessons Learned

The lessons from the age verification rollouts in the UK and various U.S. states show that age verification widens risk for everyone, inviting scope creep and blocking access to information on the web. Privacy-preserving methods to determine age exist, such as presenting an age threshold instead of your exact birth date, but they have not been mass deployed or stress-tested yet. That is why policy safeguards around the deployed technology matter just as much, if not more.

Much of the infrastructure around age verification is entangled with other mandates, like the deployment of digital ID, which is why so many digital ID offerings get coupled with age verification as a “benefit” to the holder. In reality, it is more of a plus for the governments that want to deploy mandatory age verification and for the vendors whose implementations often bundle multiple methods. Instead of working on a singular path to age-gate the entire web, there should be a diversity of privacy-preserving ways to attest age without locking everyone into a single platform or method. Ultimately, offering multiple options avoids further restricting those who can’t use one particular path.

Privacy is For the Children (Too)

26 November 2025 at 02:44

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the third in a short series that explains digital ID and the pending use case of age verification. Here, we cover alternative frameworks on age controls, updates on parental controls, and the importance of digital privacy in an increasingly hostile political climate. You can read the first two posts here and here.

Observable harms of age verification legislation in the UK, US, and elsewhere:

As we witness the effects of the Online Safety Act in the UK and over 25 state age verification laws in the U.S., it has become even more apparent that mandatory age verification is more of a detriment than a benefit to the public. Here’s what we’re seeing:

It’s obvious: age verification will not keep children safe online. Rather, it is a large proverbial hammer that nails everyone—adults and young people alike—into restrictive parameters of what the government deems appropriate content. That reality is more obvious and tangible now that we’ve seen age-restrictive regulations roll out in various states and countries. But that doesn’t have to be the future if we turn away from age-gating the web.

Keeping kids safe online (or anywhere IRL, let’s not forget) is a complex social issue that cannot be resolved with technology alone.

The legislators responsible for online age verification bills must confront that they are currently addressing complex social issues with a problematic array of technology. Most of policymakers’ concerns about minors' engagement with the internet can be sorted into one of three categories:

  • Content risks: The negative implications from exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
  • Contact risks: The potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.

Parental controls—which already exist!—can help.

These three categories of possible risks will not be eliminated by mandatory age verification—or any form of techno-solutionism, for that matter. Mandatory age checks will instead block access to vital online communities and resources for those people—including young people—who need them the most. It’s an ineffective and disproportionate tool to holistically address young people’s online safety. 

However, these can be partially addressed with better-utilized and better-designed parental controls and family accounts. Existing parental controls are woefully underutilized, according to one survey that collected answers from 1,000 parents. Adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. Making parental controls more flexible and accessible, so parents better understand the tools and how to use them, could increase adoption and address content risk more effectively than a broad government censorship mandate.  

Recently, Android made its parental controls easier to set up. It rolled out features that directly address content risk by assisting parents who wish to block specific apps and filter out mature content from Google Chrome and Google Search. Apple also updated its parental control settings this past summer, instituting new ways for parents to manage child accounts and giving app developers access to a Declared Age Range API, through which parents can declare an age range and apps can respond to the declared ranges established in child accounts, without handing over a birthdate. With this, parents get some flexibility, like age-range information beyond just 13+. A diverse range of tools and flexible settings provide the best options for families and empower parents and guardians to decide and tailor what online safety means for their own children—at any age, maturity level, or type of individual risk.

Privacy laws can also help minors online.

Parental controls are useful in the hands of responsible guardians. But what about children who are neglected or abused by those in charge of them? Age verification laws cannot solve this problem; these laws simply share possible abuse of power with the state. To address social issues, we need more efforts directed at the family and community structures around young people, and initiatives that can mitigate the risk factors of abuse instead of resorting to government control over speech.

While age verification is not the answer, those seeking legislative solutions can instead focus their attention on privacy laws—which are more than capable of assisting minors online, no matter the state of their at-home care. Comprehensive data privacy, which EFF has long advocated for, is perhaps the most obvious way to keep the data of young people safe online. Data brokers gather a vast amount of data and assemble new profiles of information as a young person uses the internet. These data sets also contribute to surveillance and teach minors that it is normal to be tracked as they use the web. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do and be able to sell it to whomever will buy it from them. For example, many age-checking tools use data brokers to establish “age estimation” on emails used to sign up for an online service, further incentivizing a vicious cycle of data collection and retention. Ultimately, privacy-encroaching companies are rewarded for the years of mishandling our data with lucrative government contracts.

These systems create much more risk for young people, online and offline, eroding their privacy over time through online surveillance, particularly in authoritarian political climates. Age verification proponents often acknowledge that there are privacy risks, but dismiss the consequences by claiming the trade-off will “protect children.” These systems don’t foster safer online practices for young people; they encourage increasingly invasive ways for governments to define who is and isn’t free to roam online. If we don’t re-establish ways to maintain online anonymity today, our children’s internet could become unrecognizable and unusable not only for them, but for many adults as well.

Actions you can take today to protect young people online:

  • Use existing parental controls to decide for yourself what your kid should and shouldn’t see, who they should engage with, etc.
  • Discuss the importance of online privacy and safety with your kids and community.
  • Provide spaces and resources for young people to flexibly communicate with their schools, guardians, and community.
  • Support comprehensive privacy legislation for all.
  • Support legislators’ efforts to regulate the out-of-control data broker industry by banning behavioral ads.

Join EFF in opposing mandatory age verification and age gating laws—help us keep your kids safe and protect the future of the internet, privacy, and anonymity.
