Age Assurance Methods Explained

9 December 2025 at 21:19

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

EFF is against all mandatory age verification. Not only does it turn the internet into an age-gated cul-de-sac, but it also leaves behind many people who can’t get or don’t have proper and up-to-date documentation. While populations like undocumented immigrants and people experiencing homelessness are more obviously vulnerable groups, these restrictions also impact people with more mundane reasons for not having valid documentation on hand. Perhaps they’ve undergone life changes that impact their status or other information—such as a move, name change, or gender marker change—or perhaps they simply haven’t gotten around to updating their documents. Inconvenient events like these should not be a barrier to going online. People should also reserve the right to opt-out of unreliable technology and shady practices that could endanger their personal information.

But age restriction mandates threaten all of that. Not only do age-gating laws block adults and youth alike from freely accessing services on the web, they also force users to trade their anonymity—a pillar of online expression—for a system in which they are bound to their real-life identities. And this surveillance regime stretches beyond just age restrictions on certain content; much of this infrastructure is also connected to government plans for creating a digital system of proof of identity.

So how does age gating actually work? The age and identity verification industry has devised countless methods that platforms can purchase to—in theory—figure out the ages and/or identities of their users. But in practice, there is no technology available that is entirely privacy-protective, fully accurate, and that guarantees complete coverage of the population. Full stop.

Every system of age verification or age estimation demands that users hand over sensitive and oftentimes immutable personal information that links their offline identity to their online activity, risking their safety and security in the process.

With that said, as we see more of these laws roll out across the U.S. and the rest of the world, it’s important to understand the differences between these technologies so you can better identify the specific risks of each method, and make smart decisions about how you share your own data.

Age Assurance Methods

There are many different technologies being developed, attempted, and deployed to establish user age. In many cases, a single platform will implement a mixture of methods. For example, a user may need to submit both a physical government ID and a face scan as part of a liveness check to establish that they are the person pictured on the ID.

Age assurance methods generally fall into three categories:

  1. Age Attestation
  2. Age Estimation
  3. ID-bound Proof

Age Attestation

Self-attestation 

Sometimes, you’ll be asked to declare your age without any form of verification. One way this might happen is through one-off self-attestation. This type of age attestation has been around for a while; you may have seen it when an alcohol website asks if you’re over 21, or when Steam asks you to input your age to view game content that may not be appropriate for all ages. It’s usually implemented as a pop-up on a website, which might ask for your age every time you visit or remember it between visits. This sort of attestation provides an indication that the site may not be appropriate for all viewers, but gives users the autonomy and respect to make that decision for themselves.

An alternative proposed approach to declaring your own age, called device-bound age attestation, is to have you set your age on your operating system or in app stores before you can make purchases or browse the web. This age or age range might then be shared with websites or apps. On an Apple device, that age can be modified after creation, as long as an adult age is chosen. It’s important to separate device-bound age attestation from methods that require age verification or estimation at the device or app store level (common to digital ID solutions and some proposed laws). It’s only attestation if you’re permitted to set your age to whatever you choose without needing to prove anything to your provider or another party—providing flexibility for age declaration outside of mandatory age verification.

Attestation through parental controls

The sort of parental controls found on Apple and Android devices, Windows computers, and video game consoles provide the most flexible way for parents to manage what content their minor children can access. These settings can be applied through the device operating system, third-party applications, or by establishing a child account. Decisions about what content a young person can access are made via consent-driven mechanisms. As the manager, the parent or guardian will see requests and activity from their child, depending on how strict or lax the settings are. This could include requests to install an app, make a purchase on an app store, communicate with a new contact, or browse a particular website. The parent or guardian can then choose whether or not to accept the request and allow the activity.

One survey that collected answers from 1,000 parents found that parental controls are underutilized. Adoption varied widely, from 51% on tablets to 35% on video game consoles. To help encourage more parents to make use of these settings, companies should continue to make them clearer and easier to use and manage. Parental controls are better suited to accommodating diverse cultural contexts and individual family concerns than a one-size-fits-all government mandate. It’s also safer to use native settings (those provided by the operating system itself) than it is to rely on third-party parental control applications, which have experienced data breaches and often effectively function as spyware.

Age Estimation

Instead of asking you directly, the system guesses your age based on data it collects about you.

Age estimation through photo and facial analysis

Age estimation by photo or live facial age analysis is when a system uses an image of a face to guess a person’s age.

A poorly designed system might improperly store these facial images or retain them for significant periods, creating a risk of data leakage. Our faces are unique, immutable, and constantly on display. In the hands of an adversary, and cross-referenced to other readily available information about us, this information can expose intimate details about us or lead to biometric tracking.

This technology has also proven fickle and often inaccurate, producing false negatives and positives, exacerbating racial biases, and leaving the biometric data used for the analysis unprotected. And because it’s usually conducted with AI models, there often isn’t a way for a user to challenge a decision directly without falling back on more intrusive methods like submitting a government ID.

Age inference based on user data and third-party services

Age inference systems estimate how old someone is based on their account information, or by querying other databases—where the account may already have undergone age verification—to cross-reference with the existing information held on that account.

Age inference can take many forms. For example, to determine how old someone is from the account information associated with their email address, services often turn to data brokers. This incentivizes even more collection of our data for the sake of age estimation and rewards data brokers for amassing data on people. Regulation of these age inference services also varies depending on a country’s privacy laws.

ID-bound Proof

ID-bound proofs, methods that use your government-issued ID, are often used as a fallback for failed age estimation. Consequently, any government-ID-backed verification disproportionately excludes certain demographics from accessing online services. A significant portion of the U.S. population does not have access to government-issued IDs, with millions of adults lacking a valid driver’s license or state-issued ID. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification. In addition, non-U.S. citizens, including undocumented immigrants, face barriers to acquiring government-issued IDs. The exclusionary nature of document-based verification systems is a major concern, as it could prevent entire communities from accessing essential services or engaging in online spaces.

Physical ID uploaded and stored as an image 

When an image of a physical ID is required, users are forced to upload—not just momentarily display—sensitive personal information, such as government-issued ID or biometric identifiers, to third-party services in order to gain access to age-restricted content. This creates significant privacy and security concerns, as users have no direct control over who receives and stores their personal data, where it is sent, and how it may be accessed, used, or leaked outside the immediate verification process.

Requiring users to digitally hand over government-issued identification to verify their age introduces substantial privacy risks. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee that it will be handled securely. The verification process typically involves transmitting this data across multiple intermediaries, which means the risk of a data breach is heightened. The misuse of sensitive personal data, such as government IDs, has been demonstrated in numerous high-profile cases, including the breach of the age verification company AU10TIX, which exposed login credentials for over a year, and the hack of the messaging application Discord. Justifiable privacy and security concerns may chill users from accessing platforms they are lawfully entitled to access.

Device-bound digital ID

Device-bound digital ID is a credential stored locally on your device. It comes in the form of government or privately run wallet applications, like those offered by Apple and Google. Digital IDs are subject to a higher level of security within the Google and Apple wallets (as they should be). This means they are not synced to your account or across services; if you lose the device, a new credential must be issued to the replacement. Websites and services can query your digital ID directly to reveal only certain information, like an age range, instead of sharing everything on your ID. This is called “selective disclosure.”
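To make the idea of selective disclosure concrete, here is a minimal Python sketch. All names here are hypothetical illustrations, not any real wallet API: the verifier asks a single yes-or-no age question, and the wallet releases only that boolean, never the underlying birth date or other fields.

```python
from datetime import date

# Hypothetical wallet record; none of these fields leave the device directly.
ID_RECORD = {
    "name": "Jane Doe",
    "date_of_birth": date(1990, 5, 1),
    "address": "123 Main St",
}

def selective_disclosure(request: str, today: date) -> dict:
    """Answer an age-threshold query without releasing the underlying fields."""
    if request == "age_over_18":
        dob = ID_RECORD["date_of_birth"]
        # Standard age calculation: subtract a year if the birthday hasn't passed yet.
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return {"age_over_18": age >= 18}  # only a single boolean leaves the wallet
    raise ValueError("unsupported or over-broad request")

print(selective_disclosure("age_over_18", date(2025, 12, 9)))  # {'age_over_18': True}
```

A real wallet would also cryptographically sign the answer so the verifier can trust it came from a valid credential; the point of the sketch is only the data-minimization shape of the exchange.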

There are many reasons someone may not be able to acquire a digital ID, preventing them from relying on this option. This includes lack of access to a smartphone, sharing devices with another person, or inability to get a physical ID. No universal standards exist governing how ID expiration, name changes, or address updates affect the validity of digital identity credentials. How to handle status changes is left up to the credential issuer.

Asynchronous and Offline Tokens

This is an issued token of some kind that doesn’t necessarily require network access to an external party or service each time you use it to establish your age with a verifier. A common danger in age verification services is the proliferation of third parties and custom solutions, which vary widely in their implementation and security. One proposal to avoid this is to centralize age checks with a trusted service that issues tokens that can be used to pass age checks elsewhere. This method still requires a user to submit to age verification or estimation once; after passing the initial facial age estimation or ID check, the user is issued a digital token they can present later to show that they’ve previously passed an age check. The most popular proposal, AgeKeys, is similar to passkeys in that the tokens are saved to a device or third-party password store, and can then be easily accessed after unlocking with your preferred on-device biometric verification or PIN code.
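The issue-once, verify-later flow can be sketched as follows. This is a toy illustration with hypothetical names, using a shared HMAC key for brevity; real proposals use public-key or blind signatures so that verifiers can check tokens without being able to mint them, and so tokens can’t be linked back to the holder.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical issuer secret; real schemes use asymmetric or blind signatures.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_token(over_18: bool, ttl_seconds: int = 86400) -> str:
    """After the one-time age check, sign a minimal claim (no identity attached)."""
    claim = {"over_18": over_18, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode())
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + tag

def verify_age_token(token: str) -> bool:
    """Later checks need no network call back to the issuer, just the token."""
    payload, tag = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # tampered or forged token
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim["over_18"] and claim["exp"] > time.time()

token = issue_age_token(over_18=True)
print(verify_age_token(token))  # True
```

Note the privacy-relevant design choice: the token carries only a threshold claim and an expiry, so repeated presentations reveal nothing new about the holder—though, as the post notes, unlinkability across verifiers requires stronger cryptography than this sketch shows.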

Lessons Learned

The troubled age verification rollouts in the UK and various U.S. states teach a clear lesson: age verification widens risk for everyone, inviting scope creep and blocking access to information on the web. Privacy-preserving methods to determine age exist, such as presenting an age threshold instead of your exact birth date, but they have not yet been mass deployed or stress tested. That is why policy safeguards around the deployed technology matter just as much, if not more.

Much of the infrastructure around age verification is entangled with other mandates, like the deployment of digital ID. That is why so many digital ID offerings come coupled with age verification as a “benefit” to the holder. In reality, it’s more of a plus for the governments that want to deploy mandatory age verification and for the vendors whose implementations often bundle multiple methods. Instead of working toward a singular path to age-gate the entire web, there should be a diversity of privacy-preserving ways to attest age, without locking everyone into a single platform or method. Offering multiple options, rather than focusing on a single method, avoids further restricting those who can’t use that particular path.

Privacy is For the Children (Too)

26 November 2025 at 02:44

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the third in a short series that explains digital ID and the pending use case of age verification. Here, we cover alternative frameworks on age controls, updates on parental controls, and the importance of digital privacy in an increasingly hostile political climate. You can read the first two posts here and here.

Observable harms of age verification legislation in the UK, US, and elsewhere:

As we witness the effects of the Online Safety Act in the UK and over 25 state age verification laws in the U.S., it has become even more apparent that mandatory age verification is more of a detriment than a benefit to the public. Here’s what we’re seeing.

It’s obvious: age verification will not keep children safe online. Rather, it is a large proverbial hammer that nails everyone—adults and young people alike—into restrictive parameters of what the government deems appropriate content. That reality is more obvious and tangible now that we’ve seen age-restrictive regulations roll out in various states and countries. But that doesn’t have to be the future if we turn away from age-gating the web.

Keeping kids safe online (or anywhere IRL, let’s not forget) is a complex social issue that cannot be resolved with technology alone.

The legislators responsible for online age verification bills must confront that they are currently addressing complex social issues with a problematic array of technology. Most of policymakers’ concerns about minors' engagement with the internet can be sorted into one of three categories:

  • Content risks: The negative implications from exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information, or problematic overuse of a service.
  • Contact risks: The potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.

Parental controls—which already exist!—can help.

These three categories of possible risks will not be eliminated by mandatory age verification—or any form of techno-solutionism, for that matter. Mandatory age checks will instead block access to vital online communities and resources for those people—including young people—who need them the most. It’s an ineffective and disproportionate tool to holistically address young people’s online safety. 

However, these can be partially addressed with better-utilized and better-designed parental controls and family accounts. Existing parental controls are woefully underutilized, according to one survey that collected answers from 1,000 parents. Adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. Making parental controls more flexible and accessible, so parents better understand the tools and how to use them, could increase adoption and address content risk more effectively than a broad government censorship mandate.  

Recently, Android made its parental controls easier to set up. It rolled out features that directly address content risk by assisting parents who wish to block specific apps and filter out mature content from Google Chrome and Google Search. Apple also updated its parental control settings this past summer, instituting new ways for parents to manage child accounts and giving app developers access to a Declared Age Range API, through which parents can declare an age range and apps can respond to the range established in a child account, without handing over a birthdate. This gives parents some flexibility, like age-range information beyond just 13+. A diverse range of tools and flexible settings provide the best options for families and empower parents and guardians to decide and tailor what online safety means for their own children—at any age, maturity level, or type of individual risk.

Privacy laws can also help minors online.

Parental controls are useful in the hands of responsible guardians. But what about children who are neglected or abused by those in charge of them? Age verification laws cannot solve this problem; these laws simply share possible abuse of power with the state. To address social issues, we need more efforts directed at the family and community structures around young people, and initiatives that can mitigate the risk factors of abuse instead of resorting to government control over speech.

While age verification is not the answer, those seeking legislative solutions can instead focus their attention on privacy laws—which are more than capable of assisting minors online, no matter the state of their at-home care. Comprehensive data privacy, which EFF has long advocated for, is perhaps the most obvious way to keep the data of young people safe online. Data brokers gather a vast amount of data and assemble new profiles of information as a young person uses the internet. These data sets also contribute to surveillance and teach minors that it is normal to be tracked as they use the web. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do and sell it to whoever will buy it from them. For example, many age-checking tools use data brokers to establish “age estimation” on emails used to sign up for an online service, further incentivizing a vicious cycle of data collection and retention. Ultimately, privacy-encroaching companies are rewarded for years of mishandling our data with lucrative government contracts.

These systems create much more risk for young people, online and offline, as surveillance data accumulates over time and political climates turn authoritarian. Age verification proponents often acknowledge that there are privacy risks, and dismiss the consequences by claiming the trade-off will “protect children.” These systems don’t foster safer online practices for young people; they encourage increasingly invasive ways for governments to define who is and isn’t free to roam online. If we don’t re-establish ways to maintain online anonymity today, our children’s internet could become unrecognizable and unusable, not only for them but for many adults as well.

Actions you can take today to protect young people online:

  • Use existing parental controls to decide for yourself what your kid should and shouldn’t see, who they should engage with, etc.
  • Discuss the importance of online privacy and safety with your kids and community.
  • Provide spaces and resources for young people to flexibly communicate with their schools, guardians, and community.
  • Support comprehensive privacy legislation for all.
  • Support legislators’ efforts to regulate the out-of-control data broker industry by banning behavioral ads.

Join EFF in opposing mandatory age verification and age gating laws—help us keep your kids safe and protect the future of the internet, privacy, and anonymity.

Verifying Trust in Digital ID Is Still Incomplete

4 September 2025 at 02:45

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the second in a short series that explains digital ID and the pending use case of age verification. Upcoming posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

Digital identity encompasses various aspects of an individual’s identity that are presented and verified either over the internet or in person. This could mean a digital credential issued by a certification body, or a mobile driver’s license provisioned to someone’s mobile wallet. Credentials can be presented in plain text on a device, as a scannable QR code, or by tapping your device against something called a Near Field Communication (NFC) reader. There are other ways to present credential information that are a little more privacy-preserving, but in practice those three methods are how we see digital ID being used today.

Advocates of digital ID often use a framework they call the “Triangle of Trust.” This is usually presented as a triangle of exchange between the holder of an ID—those who use a phone or wallet application to access a service; the issuer of an ID—normally a government entity, like the state Departments of Motor Vehicles in the U.S., or a banking system; and the verifier of an ID—the entity that wants to confirm your identity, such as law enforcement, a university, a government benefits office, a porn site, or an online retailer.

This triangle implies that the issuer and verifier—for example, the government that provides the ID and the website checking your age—never need to talk to one another. In theory, this prevents your ID, by design, from phoning home every time you present it to another party, avoiding the tracking and surveillance threats that would otherwise arise.

But it also makes a lot of questionable assumptions, such as:

  1. The verifier will only ever ask for a limited amount of information.
  2. The verifier won’t store the information it collects.
  3. The verifier is always trustworthy.

The third assumption is especially problematic. How do you trust that the verifier will protect your most personal information and not use, store, or sell it beyond what you have consented to? Any of the following could be verifiers:

  • Law enforcement when doing a traffic stop and verifying your ID as valid.
  • A government benefits office that requires ID verification to sign up for social security benefits.
  • A porn site in a state or country which requires age verification or identity verification before allowing access.
  • An online retailer selling products like alcohol or tobacco.

Looking at the triangle again, this isn’t quite an equal exchange. A personal ID like a driver’s license or government ID is among the most centralized and sensitive documents you have: you can’t control how it is issued or create your own; you have to go through your government to obtain one. This relationship will always be imbalanced. But we have to make sure digital ID does not exacerbate these imbalances.

The effort to answer the question of how to prevent verifier abuse is ongoing. But instead of first addressing the harms these systems cause, the push for this technology is being fast-tracked by governments around the world scrambling to solve what they see as a crisis of online harms by mandating age verification. And current implementations of the Triangle of Trust have already proven disastrous.

One key example of implementation outpacing proper protections is the Digital Credential API. Initially launched by Google and now supported by Apple, this rollout lets apps and websites use the API to request information from your digital ID at mass scale, unfettered. The introduction of this technology to people’s devices came with no limits or checks on what information verifiers can seek—incentivizing verifiers to over-ask for ID information beyond the question of whether a holder is over a certain age, simply because they can.

The Digital Credential API also incentivizes a variety of websites to ask for ID information they don’t need and did not commonly request before. Food delivery services, medical services, gaming sites, and literally anyone else interested in being a verifier may become one tomorrow with digital ID and the Digital Credential API. This is both an erosion of personal privacy and a pathway to further surveillance. There must be established limitations and scope, including:

  • Verifiers should establish who they are and what they plan to ask from holders, with an established plan for transparency about verifiers and their data retention policies.
  • There should be ways to identify and report abusive verifiers, as well as real consequences, like revoking or blocking a verifier from requesting IDs in the future.
  • Presentations should be unlinkable, so that verifiers and issuers cannot collude and no data is shared between the verifiers you attest to, preventing tracking of your movements, in person or online, every time you attest your age.

A further point of concern arises in cases of abuse or deception. A malicious verifier can send a request with no limiting mechanisms or checks, and a user who rejects the request could be fully blocked from the website or application. There must be provisions that ensure people retain access to vital services that require age verification from visitors.

[Image: a pop-up asking the user to confirm they trust the website they are submitting ID information to]

Governments’ efforts to tackle verifiers potentially abusing digital ID requests haven’t come to fruition yet. For example, the EU Commission recently launched its age verification “mini app” ahead of the EU ID wallet planned for 2026. The mini app will not have a registry of verifiers, which EU regulators had promised and then withdrew. Without verifier accountability, the wallet cannot tell whether a request is legitimate. As a result, verifiers and issuers will demand verification from the people who want to use online services, but those same people are unable to insist on verification and accountability from the other sides of the triangle.

While digital ID is pushed as the solution to the problem of uploading IDs to every site users access, the security and privacy of these systems vary by implementation. Where privacy is at stake, regulators must make room for negotiation: there should be more thoughtful and protective measures for holders interacting with more and more potential verifiers over time. Otherwise digital ID solutions will just exacerbate existing harms and inequalities, rather than improving internet accessibility and information access for all.

Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy

25 July 2025 at 18:13

In the past few years, governments across the world have rolled out digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the first in this short series that will explain digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

Age verification measures are having a moment, with policymakers in the U.S. and around the world passing legislation mandating online services and companies to introduce technologies that require people to verify their identities to access content deemed appropriate for their age. But for most people, having physical government documentation like a driver's license, passport, or other ID is not a simple binary of having it or not. Physical ID systems involve hundreds of factors that impact their accuracy and validity, and everyday situations occur where identification attributes can change, or an ID becomes invalid or inaccurate or needs to be reissued: addresses change, driver’s licenses expire or have suspensions lifted, or temporary IDs are issued in lieu of obtaining permanent identification.  

The digital ID systems currently being introduced potentially solve some problems, like identity fraud for business and government services, but leave the holder of the digital ID vulnerable to the needs of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims of public assistance and other services as to facilitate them. That’s why legal protections are as important as the digital IDs themselves. To add to this, in places that lack comprehensive data privacy legislation, verifiers are not heavily restricted in what they can and can’t ask the holder. In response, some privacy mechanisms have been suggested, though few have been made mandatory. Among them is the promise that a feature called Zero Knowledge Proofs (ZKPs) will easily solve the privacy problems of sharing ID attributes.

Zero Knowledge Proofs: The Good News

The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is the ability to incorporate and share something called a Zero Knowledge Proof (ZKP), letting a website or mobile application verify ID information without receiving the ID itself or the information explicitly on it. ZKPs provide a cryptographic way to avoid giving something away, like the exact date of birth on your ID, by instead offering a “yes-or-no” claim (like above or below 18) to a verifier requiring a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness appeals to verifiers and governments because it makes it hard for an ID holder to present forged information (the holder won’t know the “secret”). Zero knowledge benefits the holder, who doesn’t have to share explicit information like a birth date, just cryptographic proof that said information exists and is valid. There have been recent announcements from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products”.
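To illustrate the “soundness” and “zero knowledge” properties, here is a toy Python sketch of the classic Schnorr identification protocol, a simple zero-knowledge proof of knowledge of a secret. The parameters are deliberately tiny for readability; this is not the construction any vendor actually ships, which would use large elliptic-curve groups and non-interactive proofs.

```python
import secrets

# Toy Schnorr proof: the prover shows knowledge of a secret x (with y = g^x mod p)
# without ever revealing x. Demo-sized parameters only.
p, g = 1019, 2                   # small prime with primitive root 2 (illustration)
x = secrets.randbelow(p - 1)     # prover's secret (e.g., a credential key)
y = pow(g, x, p)                 # public value anyone may know

# 1. Commit: prover picks random r and sends t = g^r mod p.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random c.
c = secrets.randbelow(p - 1)

# 3. Respond: prover sends s = r + c*x (mod p-1); s alone reveals nothing about x
#    because r is random ("zero knowledge").
s = (r + c * x) % (p - 1)

# 4. Verify: g^s must equal t * y^c mod p. Only someone who knows x can answer a
#    fresh random challenge correctly ("soundness").
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

The verifier learns that the prover holds the secret behind `y`, and nothing else—the same shape as proving “over 18” without disclosing a birth date, though real age-threshold ZKPs prove a statement about a signed credential rather than a bare discrete log.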

Zero Knowledge Proofs: The Bad News

What ZKPs don’t do is mitigate verifier abuse or limit verifier requests, such as over-asking for information they don’t need or repeatedly requesting your age over time. They don’t prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or other device information, while you interact with them.

ZKPs are a great tool for sharing less data about ourselves, whether over time or in a one-time transaction. But they do little about the data broker industry that already holds massive profiles of data on people. We understand that this is not what ZKPs for age verification were presented to solve. But it is still imperative to point out that using this technology to share even more about ourselves online, through mandatory age verification, widens the scope of sharing in an already saturated ecosystem of easily linked personal information. Going from presenting your physical ID maybe 2-3 times a week to proving your age to multiple websites and apps every day will make going online itself a burden at minimum, and, for those who can’t obtain an ID, a barrier entirely.

Protecting The Way Forward

Mandatory age verification takes the potential privacy benefits of mobile ID and proposed ZKP solutions, then warps them into speech-chilling mechanisms.

Until the hard questions of power imbalances with potentially abusive verifiers and prevention of phoning home to ID issuers are addressed, these systems should not be pushed forward without proper protections in place. A more private, holder-centric ID takes more than ZKPs as a catch-all for privacy concerns. Safety online is not solved through technology alone; it involves multiple, ongoing conversations. Yes, that sounds harder than running age checks on everyone online. Maybe that’s why age checks are so tempting to implement. However, we encourage policymakers and lawmakers to pursue what is best, not what is easy.
