EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate

13 December 2025 at 06:10

The UK Parliament convened earlier this week to debate a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to MPs urging them to reject the Labour government’s newly announced digital ID proposal.

The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country. 

But the case for digital identification has not been made. 

As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:

  1. Mission creep
  2. Infringements on privacy rights
  3. Serious security risks
  4. Reliance on inaccurate and unproven technologies
  5. Discrimination and exclusion
  6. The deepening of entrenched power imbalances between the state and the public

Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.

Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.

Read our civil society briefing in full here.

The UK Has It Wrong on Digital ID. Here’s Why.

28 November 2025 at 05:10

In late September, the United Kingdom’s Prime Minister Keir Starmer announced his government’s plans to introduce a new digital ID scheme in the country to take effect before the end of the Parliament (no later than August 2029). The scheme will, according to the Prime Minister, “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like people’s name, date of birth, nationality or residency status, and photo to verify their right to live and work in the country. 

This is the latest example of a government creating a new digital system that is fundamentally incompatible with a privacy-protecting and human rights-defending democracy. This past year alone, we’ve seen federal agencies across the United States explore digital IDs to prevent fraud, the Transportation Security Administration accept “digital passport IDs” on Android, and states contract with mobile driver’s license (mDL) providers. And as we’ve said many times, digital ID is not for everyone, and policymakers should ensure access for people both with and without a digital ID. 

But instead, the UK is pushing forward with its plans to roll out digital ID in the country. Here are three reasons why those policymakers have it wrong. 

Digital ID allows the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. 

Mission Creep 

In his initial announcement, Starmer stated: “You will not be able to work in the United Kingdom if you do not have digital ID. It's as simple as that.” Since then, the government has been forced to clarify those remarks: digital ID will be mandatory to prove the right to work, and will only take effect after the scheme's proposed introduction in 2028, rather than retrospectively. 

The government has also confirmed that digital ID will not be required for pensioners, students, and those not seeking employment, and will also not be mandatory for accessing medical services, such as visiting hospitals. But as civil society organizations are warning, it's possible that the required use of digital ID will not end here. Once this data is collected and stored, it provides a multitude of opportunities for government agencies to expand the scenarios where they demand that you prove your identity before entering physical and digital spaces or accessing goods and services. 

The government may also be able to request information from workplaces on who is registering for employment at a given location, or collaborate with banks to aggregate different data points to determine who is self-employed or not registered to work. It could lead to situations where state authorities treat the entire population as suspected of not belonging, and it would shift the power dynamics even further towards government control over our freedom of movement and association. 

And this is not the first time that the UK has attempted to introduce digital ID: politicians previously proposed similar schemes intended to control the spread of COVID-19, limit immigration, and fight terrorism. In a country expanding its deployment of other surveillance technologies, such as face recognition, this raises additional concerns about how digital ID could lead to new divisions and inequalities based on the data the system obtains. 

These concerns compound the underlying narrative that digital ID is being introduced to curb illegal immigration to the UK: that digital ID would make it harder for people without residency status to work in the country because it would lower the possibility that anyone could borrow or steal the identity of another. Not only is there little evidence that digital ID will limit illegal immigration, but checks on the right to work in the UK already exist. The narrative is nothing more than inflammatory and misleading rhetoric; Liberal Democrat leader Ed Davey noted it would do “next to nothing to tackle channel crossings.”

Inclusivity is Not Inevitable, But Exclusion Is 

While the government announced that their digital ID scheme will be inclusive enough to work for those without access to a passport, reliable internet, or a personal smartphone, as we’ve been saying for years, digital ID leaves vulnerable and marginalized people not only out of the debate but ultimately out of the society that these governments want to build. We remain concerned about the potential for digital identification to exacerbate existing social inequalities, particularly for those with reduced access to digital services or people seeking asylum. 

The UK government has said a public consultation will be launched later this year to explore alternatives, such as physical documentation or in-person support for the homeless and older people; but it’s short-sighted to think that these alternatives are viable or functional in the long term. For example, UK organization Big Brother Watch reported that only about 20% of Universal Credit applicants can use online ID verification methods. 

These individuals should not be an afterthought tacked onto the end of an announcement for further review. If a tool does not work for those without access to essentials such as the internet or a physical ID, then it should not exist.

Digital ID schemes also exacerbate other inequalities in society: abusers, for example, could prevent others from getting jobs or proving their status by denying them access to their ID. Likewise, as the scope of digital ID expands, people could be forced to prove their identities to different government agencies and officials, raising the risk of institutional discrimination when a phone fails to load or when the Home Office holds incorrect information about an individual. This is not an unrealistic scenario given the frequency of internet connectivity issues, or circumstances like passports and other documentation expiring.

Any identification issued by the government with a centralized database creates a power imbalance that digital ID can only deepen.

Attacks on Privacy and Surveillance 

Digital ID systems expand the number of entities that may access personal information and consequently use it to track and surveil. The UK government has nodded to this threat. Starmer stated that the technology would “absolutely have very strong encryption” and wouldn't be used as a surveillance tool. Moreover, junior Cabinet Office Minister Josh Simons told Parliament that “data associated with the digital ID system will be held and kept safe in secure cloud environments hosted in the United Kingdom” and that “the government will work closely with expert stakeholders to make the programme effective, secure and inclusive.” 

But if digital ID is needed to verify people’s identities multiple times per day or week, ensuring end-to-end encryption is the bare minimum the government should require. Unlike sharing a National Insurance Number, a digital ID will expose an array of personal information that would otherwise not be available or exchanged. 

This would create a rich environment for hackers or hostile agencies to obtain swathes of personal information on those based in the UK. And if previous schemes in the country are anything to go by, the government’s ability to handle giant databases is questionable. Notably, the eVisa scheme’s multitude of failures last year illustrated the harms that digital IDs can bring, with government system failures and internet outages leading to people being detained, losing their jobs, or being made homeless. Checking someone’s identity against a database in real time requires a host of online and offline factors to work, and the UK has yet to take the structural steps required to remedy this.

Moreover, we know that the Cabinet Office and the Department for Science, Innovation and Technology will be involved in the delivery of digital ID and are clients of U.S.-based tech vendors, specifically Amazon Web Services (AWS). The UK government has spent millions on AWS (and Microsoft) cloud services in recent years, and the One Government Value Agreement (OGVA), first introduced in 2020, which provides discounts on cloud services by contracting with the UK government and public sector organizations as a single client, is still active. It is essential that any data collected is not stored or shared with third parties, including through cloud agreements with companies outside the UK.

And even if the UK government published comprehensive plans to ensure data minimization in its digital ID, we would still strongly oppose any national ID scheme. Any identification issued by the government with a centralized database creates a power imbalance that digital ID can only deepen, and both the public and civil society organizations in the country are against this.

Ways Forward

Digital ID regimes strip privacy from everyone and further marginalize those seeking asylum or undocumented people. They are pursued as a technological solution to offline problems but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. 

We cannot base our human rights on the government’s mere promise to uphold them. On December 8th, politicians will debate a petition, signed by almost 3 million people, rejecting mandatory digital ID. If you’re based in the UK, you can contact your MP to oppose the plans for a digital ID system. 

The case for digital identification has not been made. The UK government must listen to people in the country and say no to digital ID.

Joint Statement on the UN Cybercrime Convention: EFF and Global Partners Urge Governments Not to Sign

27 October 2025 at 06:20

Today, EFF joined a coalition of civil society organizations in urging UN Member States not to sign the UN Convention Against Cybercrime. For those that move forward despite these warnings, we urge them to take immediate and concrete steps to limit the human rights harms this Convention will unleash. These harms are likely to be severe and will be extremely difficult to prevent in practice.

The Convention obligates states to establish broad electronic surveillance powers to investigate and cooperate on a wide range of crimes—including those unrelated to information and communication systems—without adequate human rights safeguards. It requires governments to collect, obtain, preserve, and share electronic evidence with foreign authorities for any “serious crime”—defined as an offense punishable under domestic law by at least four years’ imprisonment (or a higher penalty).

In many countries, merely speaking freely, expressing a nonconforming sexual orientation or gender identity, or protesting peacefully can constitute a serious criminal offense under the Convention’s definition. People have faced lengthy prison terms, or even torture, for criticizing their governments on social media, raising a rainbow flag, or speaking out against a monarch. 

In today’s digital era, nearly every message or call generates granular metadata—revealing who communicates with whom, when, and from where—that routinely traverses national borders through global networks. The UN cybercrime convention, as currently written, risks enabling states to leverage its expansive cross-border data-access and cooperation mechanisms to obtain such information for political surveillance—abusing the Convention’s mechanisms to monitor critics, pressure their families, and target marginalized communities abroad.

As abusive governments increasingly rely on questionable tactics to extend their reach beyond their borders—targeting dissidents, activists, and journalists worldwide—the UN Cybercrime Convention risks becoming a vehicle for globalizing repression, enabling an unprecedented multilateral infrastructure for digital surveillance that allows states to access and exchange data across borders in ways that make political monitoring and targeting difficult to detect or challenge.

EFF has long sounded the alarm over the UN Cybercrime Treaty’s sweeping powers of cross-border cooperation and its alarming lack of human-rights safeguards. As the Convention opens for signature on October 25–26, 2025 in Hanoi, Vietnam—a country repeatedly condemned by international rights groups for jailing critics and suppressing online speech—the stakes for global digital freedom have never been higher.

The Convention’s many flaws cannot easily be mitigated because it fundamentally lacks a mechanism for suspending states that systematically fail to respect human rights or the rule of law. States must refuse to sign or ratify the Convention. 

Read our full letter here.

EFF Backs Constitutional Challenge to Ecuador’s Intelligence Law That Undermines Human Rights

23 October 2025 at 11:11

In early September, EFF submitted an amicus brief to Ecuador’s Constitutional Court supporting a constitutional challenge filed by Ecuadorian NGOs, including INREDH and LaLibre. The case challenges the constitutionality of the Ley Orgánica de Inteligencia (LOI) and its implementing regulation, the General Regulation of the LOI.

EFF’s amicus brief argues that the LOI enables disproportionate surveillance and secrecy that undermine constitutional and Inter-American human rights standards. EFF urges the Constitutional Court to declare the LOI and its regulation unconstitutional in their entirety.

More specifically, our submission notes that:

“The LOI presents a structural flaw that undermines compliance with the principles of legality, legitimate purpose, suitability, necessity, and proportionality; it inverts the rule and the exception, with serious harm to rights enshrined constitutionally and under the Convention; and it prioritizes indeterminate state interests, in contravention of the ultimate aim of intelligence activities and state action, namely the protection of individuals, their rights, and freedoms.”

Core Legal Problems Identified

Vague and Overbroad Definitions

The LOI contains key terms like “national security,” “integral security of the State,” “threats,” and “risks” that are left either undefined or so broadly framed that they could mean almost anything. This vagueness grants intelligence agencies wide, unchecked discretion and falls short of the standard of legal certainty required under the American Convention on Human Rights (CADH).

Secrecy and Lack of Transparency

The LOI makes secrecy the rule rather than the exception, reversing the Inter-American principle of maximum disclosure, which holds that access to information should be the norm and secrecy a narrowly justified exception. The law establishes a classification system—“restricted,” “secret,” and “top secret”—for intelligence and counterintelligence information, but without clear, verifiable parameters to guide its application on a case-by-case basis. As a result, all information produced by the governing body (ente rector) of the National Intelligence System is classified as secret by default. Moreover, intelligence budgets and spending are insulated from meaningful public oversight, concentrated under a single authority, and ultimately destroyed, leaving no mechanism for accountability.

Weak or Nonexistent Oversight Mechanisms

The LOI leaves intelligence agencies to regulate themselves, with almost no external scrutiny. Civilian oversight is minimal, limited to occasional, closed-door briefings before a parliamentary commission that lacks real access to information or decision-making power. This structure offers no guarantee of independent or judicial supervision and instead fosters an environment where intelligence operations can proceed without transparency or accountability.

Intrusive Powers Without Judicial Authorization

The LOI allows access to communications, databases, and personal data without prior judicial order, which enables the mass surveillance of electronic communications, metadata, and databases across public and private entities—including telecommunication operators. This directly contradicts rulings of the Inter-American Court of Human Rights, which establish that any restriction of the right to privacy must be necessary, proportionate, and subject to independent oversight. It also runs counter to CAJAR vs. Colombia, which affirms that intrusive surveillance requires prior judicial authorization.

International Human Rights Standards Applied

Our amicus curiae draws on the CAJAR vs. Colombia judgment, which set strict standards for intelligence activities. Crucially, Ecuador’s LOI falls short of all these tests: it doesn’t constitute an adequate legal basis for limiting rights; contravenes the necessary and proportionate principles; fails to ensure robust controls and safeguards, like prior judicial authorization and solid civilian oversight; and completely disregards related data protection guarantees and data subjects’ rights.

At its core, the LOI structurally prioritizes vague notions of “state interest” over the protection of human rights and fundamental freedoms. It legalizes secrecy, unchecked surveillance, and the impunity of intelligence agencies. For these reasons, we urge Ecuador’s Constitutional Court to declare the LOI and its regulations unconstitutional, as they violate both the Ecuadorian Constitution and the American Convention on Human Rights (CADH).

Read our full amicus brief here to learn more about how Ecuador’s intelligence framework undermines privacy, transparency, and the human rights protected under Inter-American human rights law.

Opt Out October: Daily Tips to Protect Your Privacy and Security

Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time, the process becomes much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday that show various ways you can opt yourself out of the ways tech giants surveil you.

Online privacy isn’t dead. But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.


All month long we’ll share tips, including some with help from our friends at Consumer Reports’ Security Planner tool.


Tip 1: Establish Good Digital Hygiene

Before we can get into the privacy weeds, we need to first establish strong basics. Namely, two security fundamentals: using strong passwords (a password manager helps simplify this) and two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.

Using unique passwords for every web login means that if your account information ends up in a data breach, it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.

Two-factor authentication is the second lock on those same accounts. In order to login to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account because it’s less likely they’ll have both a password and the temporary code.
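To make the second factor less mysterious, here is a minimal sketch of how an authenticator app derives those always-changing codes, using the standard time-based one-time password (TOTP) algorithm and nothing beyond Python’s standard library. The base32 secret shown is a made-up example, not tied to any real account.

```python
# Minimal sketch of how an authenticator app derives a time-based
# one-time password (TOTP, RFC 6238), using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(base32_secret: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // period               # changes every 30 seconds
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest() # HMAC-SHA1 of the counter
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code
```

Because your device and the service derive the code from a shared secret and the current time, a stolen password alone isn’t enough to log in.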

This can be a little overwhelming to get started if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense, we recommend taking a look at Consumer Reports’ Security Planner for ways to help you get started setting up your first password manager and turning on two-factor authentication.

Tip 2: Learn What a Data Broker Knows About You

Hundreds of data brokers you’ve never heard of are harvesting and selling your personal information. This can include your address, online activity, financial transactions, relationships, and even your location history. Once sold, your data can be abused by scammers, advertisers, predatory companies, and even law enforcement agencies.

Data brokers build detailed profiles of our lives but try to keep their own practices hidden. Fortunately, several state privacy laws give you the right to see what information these companies have collected about you. You can exercise this right by submitting a data access request to a data broker. Even if you live in a state without privacy legislation, some data brokers will still respond to your request.

There are hundreds of known data brokers, so start with requests to a few of the major ones.

Data brokers have been caught ignoring privacy laws, so there’s a chance you won’t get a response. If you do, you’ll learn what information the data broker has collected about you and the categories of third parties they’ve sold it to. If the results motivate you to take more privacy action, encourage your friends and family to do the same. Don’t let data brokers keep their spying a secret.

You can also ask data brokers to delete your data, with or without an access request. We’ll get to that later this month and explain how to do this with people-search sites, a category of data brokers.

Tip 3: Disable Ad Tracking on iPhone and Android

Picture this: you’re doomscrolling and spot a t-shirt you love. Later, you mention it to a friend and suddenly see an ad for that exact shirt in another app. The natural question pops into your head: “Is my phone listening to me?” Breathe a sigh of relief because, no, your phone is not listening to you. But advertisers are using shady tactics to profile your interests. Here’s an easy way to fight back: disable the ad identifier on your phone to make it harder for advertisers and data brokers to track you.

Disable Ad Tracking on iOS and iPadOS:

  • Open Settings > Privacy & Security > Tracking, and turn off “Allow Apps to Request to Track.”
  • Open Settings > Privacy & Security > Apple Advertising, and disable “Personalized Ads” to also stop some of Apple’s internal tracking for apps like the App Store. 
  • If you use Safari, go to Settings > Apps > Safari > Advanced and disable “Privacy Preserving Ad Measurement.”

Disable Ad Tracking on Android:

  • Open Settings > Security & privacy > Privacy controls > Ads, and tap “Delete advertising ID.”
  • While you’re at it, run through Google’s “Privacy Checkup” to review what info other Google services—like YouTube or your location—may be sharing with advertisers and data brokers.

These quick settings changes can help keep bad actors from spying on you. For a deeper dive on securing your iPhone or Android device, be sure to check out our full Surveillance Self-Defense guides.

Tip 4: Declutter Your Apps

Decluttering is all the rage for optimizers and organizers alike, but did you know a cleansing sweep through your apps can also help your privacy? Apps collect a lot of data, often in the background when you are not using them. This can be a prime way companies harvest your information, then repackage and sell it to other companies you've never heard of. The more apps you have, the more peepholes companies have into your personal life. 

Do you need three airline apps when you're not even traveling? Or the app for that hotel chain you stayed in once? It's best to delete that app and cut off their access to your information. In an ideal world, app makers would not process any of your data unless strictly necessary to give you what you asked for. Until then, to do an app audit:

  • Look through the apps you have and identify ones you rarely open or barely use. 
  • Long-press on apps that you don't use anymore and delete or uninstall them when a menu pops up. 
  • Even for apps you keep, sweep through the location, microphone, and camera permissions for each of them. For iOS devices you can follow these instructions to find that menu. For Android, check out this instruction page.

If you delete an app and later find you need it, you can always redownload it. Try giving some apps the boot today to gain some memory space and some peace of mind.


Tip 5: Disable Behavioral Ads on Amazon

Happy Amazon Prime Day! Let’s celebrate by taking back a piece of our privacy.

Amazon collects an astounding amount of information about your shopping habits. While the only way to truly free yourself from the company’s all-seeing eye is to never shop there, there is something you can do to disrupt some of that data use: tell Amazon to stop using your data to market more things to you (these settings are for US users and may not be available in all countries).

  • Log into your Amazon account, then click “Account & Lists” under your name. 
  • Scroll down to the “Communication and Content” section and click “Advertising preferences” (or just click this link to head directly there).
  • Click the option next to “Do not show me interest-based ads provided by Amazon.”
  • You may want to also delete the data Amazon already collected, so click the “Delete ad data” button.

This setting will turn off the personalized ads based on what Amazon infers about you, though you will likely still see recommendations based on your past purchases at Amazon.

Of course, Amazon sells a lot of other products. If you own an Alexa, now’s a good time to review the few remaining privacy options available to you after the company took away the ability to disable voice recordings. Kindle users might want to turn off some of the data usage tracking. And if you own a Ring camera, consider enabling end-to-end encryption to ensure you’re in control of the recording, not the company. 

Tip 6: Install Privacy Badger to Block Online Trackers

Every time you browse the web, you’re being tracked. Most websites contain invisible tracking code that lets companies collect and profit from your data. That data can end up in the hands of advertisers, data brokers, scammers, and even government agencies. Privacy Badger, EFF’s free browser extension, can help you fight back.

Privacy Badger automatically blocks hidden trackers to stop companies from spying on you online. It also tells websites not to share or sell your data by sending the “Global Privacy Control” signal, which is legally binding under some state privacy laws. Privacy Badger has evolved over the past decade to fight various methods of online tracking. Whether you want to protect your sensitive information from data brokers or just don’t want Big Tech monetizing your data, Privacy Badger has your back.
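For the curious, here is a rough sketch of what receiving that signal can look like on a website’s server. GPC travels as a “Sec-GPC: 1” request header; the handler name, port, and responses below are illustrative placeholders, not part of Privacy Badger itself.

```python
# Minimal sketch of a server detecting the Global Privacy Control signal,
# which conforming browsers and extensions send as a "Sec-GPC: 1" header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class GPCAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        opted_out = self.headers.get("Sec-GPC") == "1"
        body = ("GPC received: do not sell or share this visitor's data."
                if opted_out else "No GPC signal on this request.")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), GPCAwareHandler).serve_forever()
```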

Visit privacybadger.org to install Privacy Badger.

It’s available on Chrome, Firefox, Edge, and Opera for desktop devices and Firefox and Edge for Android devices. Once installed, all of Privacy Badger’s features work automatically. There’s no setup required! If blocking harmful trackers ends up breaking something on a website, you can easily turn off Privacy Badger for that site while maintaining privacy protections everywhere else.

When you install Privacy Badger, you’re not just protecting yourself—you’re joining EFF and millions of other users in the fight against online surveillance.

Tip 7: Review Location Tracking Settings

Data brokers don’t just collect information on your purchases and browsing history. Mobile apps that have the location permission turned on will deliver your coordinates to third parties in exchange for insights or monetary kickbacks. Even when they don’t deliver that data directly to data brokers, if the app serves ad space, your location will be delivered in real-time bid requests not only to those wishing to place an ad, but to all participants in the ad auction—even if they lose the bid. Location data brokers take part in these auctions just to harvest location data en masse, without any intention of buying ad space.
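To make that concrete, below is a simplified, hypothetical example of the geolocation an ad-supported app can embed in a real-time bid request. The field names follow the OpenRTB convention, but the app, identifier, and coordinates are invented.

```python
# A simplified, hypothetical OpenRTB-style bid request. Field names follow
# the OpenRTB convention; the app, identifier, and coordinates are made up.
import json

bid_request = {
    "id": "example-auction-123",
    "app": {"bundle": "com.example.flashlight"},
    "device": {
        "ifa": "00000000-0000-0000-0000-000000000000",  # ad identifier (see tip 3)
        "geo": {
            "lat": 51.5014,   # precise coordinates from the location permission
            "lon": -0.1419,
            "type": 1,        # 1 = derived from GPS/location services
        },
    },
}

# Every participant in the ad auction receives this payload, winner or not.
print(json.dumps(bid_request, indent=2))
```

Denying the location permission and zeroing out the ad identifier (tip 3) strips the two most valuable fields from this payload.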

Luckily, you can change a few settings to protect yourself against this hoovering of your whereabouts. You can use iOS or Android tools to audit an app’s permissions, giving you clarity on which apps send what information to whom. You can then go to the apps that don’t need your location data and disable their access to it (you can always change your mind later if it turns out location access was useful). You can also disable real-time location tracking by putting your phone into airplane mode, while still being able to navigate using offline maps. And by disabling mobile advertising identifiers (see tip three), you break the chain that links your location from one moment to the next.

Finally, for particularly sensitive situations you may want to bring an entirely separate, single-purpose device which you’ve kept clean of unneeded apps and locked down settings on. Similar in concept to a burner phone, even if this single-purpose device does manage to gather data on you, it can only tell a partial story about you—all the other data linking you to your normal activities will be kept separate.

For details on how you can follow these tips and more on your own devices, check out our more extensive post on the topic.

Tip 8: Limit the Data Your Gaming Console Collects About You

Oh, the beauty of gaming consoles—just plug in and play! Well... after you speed-run through a bunch of terms and conditions, internet setup, and privacy settings. If you rushed through those startup screens, don’t worry! It’s not too late to limit the data your console is collecting about you. Because yes, modern consoles do collect a lot about your gaming habits.

Start with the basics: make sure you have two-factor authentication turned on for your accounts. PlayStation, Xbox, and Nintendo all have guides on their sites. Between payment details and other personal info tied to these accounts, 2FA is an easy first line of defense for your data.

Then, it’s time to check the privacy controls on your console:

  • PlayStation 5: Go to Settings > Users and Accounts > Privacy to adjust what you share with both strangers and friends. To limit the data your PS5 collects about you, go to Settings > Users and Accounts > Privacy, where you can adjust settings under Data You Provide and Personalization.
  • Xbox Series X|S: Press the Xbox button > Profile & System > Settings > Account > Privacy & online safety > Xbox Privacy to fine-tune your sharing. To manage data collection, head to Profile & System > Settings > Account > Privacy & online safety > Data collection.
  • Nintendo Switch: The Switch doesn’t share as much data by default, but you still have options. To control who sees your play activity, go to System Settings > Users > [your profile] > Play Activity Settings. To opt out of sharing eShop data, open the eShop, select your profile (top right), then go to Google Analytics Preferences > Do Not Share.

Plug and play, right? Almost. These quick checks can help keep your gaming sessions fun—and more private.

Tip 9: Hide Your Start and End Points on Strava

Sharing your personal fitness goals, whether extended distances, accurate calorie counts, or GPS paths, sounds like a fun, competitive feature offered by today's digital fitness trackers. If you enjoy tracking those activities, you've probably heard of Strava. While it's excellent for motivation and connecting with fellow athletes, Strava's default settings can reveal sensitive information about where you live, work, or exercise, creating serious security and privacy risks. Fortunately, Strava gives you control over how much of your activity map is visible to others, allowing you to stay active in your community while protecting your personal safety.

We've covered how Strava data exposed classified military bases in 2018 when service members used fitness trackers. If fitness data can compromise national security, what's it revealing about you?

Here's how to hide your start and end points:

  • On the website: Hover over your profile picture > Settings > Privacy Controls > Map Visibility.
  • On mobile: Open Settings > Privacy Controls > Map Visibility.
  • You can then choose from three options: hide portions near a specific address, hide the start/end of all activities, or hide entire maps.

You can also adjust individual activities:

  • Open the activity you want to edit.
  • Select the three-dot menu icon.
  • Choose "Edit Map Visibility."
  • Use sliders to customize what's hidden or enable "Hide the Entire Map."

Great job taking control of your location privacy! Remember that these settings only apply to Strava, so if you share activities to other platforms, you'll need to adjust those privacy settings separately. While you're at it, consider reviewing your overall activity visibility settings to ensure you're only sharing what you want with the people you choose.

Tip 10: Find and Delete An Account You No Longer Use

Millions of online accounts are compromised each year. The more accounts you have, the more at risk you are of having your personal data illegally accessed and published online. Even if you don’t suffer a data breach, there’s also the possibility that someone could find one of your abandoned social media accounts containing information you shared publicly on purpose in the past, but don’t necessarily want floating around anymore. And companies may still be profiting off details of your personal life, even though you’re not getting any benefit from their service.

So, now’s a good time to find an old account to delete. There may be one you can already think of, but if you’re stuck, you can look through your password manager, look through logins saved on your web browser, or search your email inbox for phrases like “new account,” “password,” “welcome to,” or “confirm your email.” Or, enter your email address on the website HaveIBeenPwned to get a list of sites where your personal information has been compromised to see if any of them are accounts you no longer use.
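If you would rather script this check than use the website form, Have I Been Pwned also offers a breach-search API. The sketch below assumes you have an API key (the service requires one for account searches); the email address and key shown are placeholders.

```python
# Hedged sketch of checking an email address against Have I Been Pwned's
# v3 API. "YOUR_HIBP_API_KEY" and the email address are placeholders.
import json, urllib.error, urllib.request

EMAIL = "you@example.com"
URL = f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}?truncateResponse=true"

req = urllib.request.Request(URL, headers={
    "hibp-api-key": "YOUR_HIBP_API_KEY",
    "user-agent": "opt-out-october-example",
})
try:
    with urllib.request.urlopen(req) as resp:
        for breach in json.load(resp):
            print("Found in breach:", breach["Name"])
except urllib.error.HTTPError as err:
    # 404 means the address was not found in any known breach.
    print("No breaches found." if err.code == 404 else f"Request failed: {err.code}")
```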

Once you’ve decided on an account, you’ll need to find the steps to delete it. Simply deleting an app off of your phone or computer does not delete your account. Often you can log in and look in the account settings, or find instructions in the help menu, the FAQ page, or the pop-up customer service chat. If that fails, use a search engine to see if anybody else has written up the steps to deleting your specific type of account.

For more information, check out the Delete Unused Accounts tip on Security Planner.


Tip 11: Search for Yourself

Today's tip may sound a little existential, but we're not suggesting a deep spiritual journey. Just a trip to your nearest search engine. Pop your name into search engines such as Google or DuckDuckGo, or even AI tools such as ChatGPT, to see what you find. This is one of the simplest things you can do to raise your own awareness of your digital reputation. It can be the first thing prospective employers (or future first dates) do when trying to figure out who you are. From a privacy perspective, doing it yourself can also shed light on how your information is presented to the general public. If there's a defunct social media account you'd rather keep hidden, but it's on the first page of your search results, that might be a good signal for you to finally delete that account. If you shared your cellphone number with an organization you volunteer for and it's on their home page, you can ask them to take it down.

Knowledge is power. It's important to know what search results are out there about you, so you understand what people see when they look for you. Once you have this overview, you can make better choices about your online privacy. 

Tip 12: Tell “People Search” Sites to Delete Your Information

When you search online for someone’s name, you’ll likely see results from people-search sites selling their home address, phone number, relatives’ names, and more. People-search sites are a type of data broker with an especially dangerous impact. They can expose people to scams, stalking, and identity theft. Submit opt-out requests to these sites to reduce the amount of personal information that is easily available about you online.

Check out this list of opt-out links and instructions for more than 50 people-search sites, organized by priority. Before submitting a request, check that the site actually has your information, and start with a few of the high-priority sites. 

Data brokers continuously collect new information, so your data could reappear after being deleted. You’ll have to re-submit opt-outs periodically to keep your information off of people-search sites. Subscription-based services can automate this process and save you time, but a Consumer Reports study found that manual opt-outs are more effective.

Tip 13: Remove Your Personal Addresses from Search Engines 

Your home address may often be found with just a few clicks online. Whether you're concerned about your digital footprint or looking to safeguard your physical privacy, understanding where your address appears and how to remove or obscure it is a crucial step. Here's what you need to know.

Your personal addresses can be available through public records like property purchases, medical licensing information, or data brokers. Opting out from data brokers will do a lot to remove what's available commercially, but sometimes you can't erase the information entirely from things like property sales records.

You can ask some search engines to remove your personal information from search indexes, which is the most efficient way to make information like your personal addresses, phone number, and email address a lot harder to find. Google has a form that makes this request quite easy, and we’d suggest starting there.

Tip 14: Check Out Signal

Here's the problem: many of your texts aren't actually private. Phone companies, government agencies, and app developers can all too often peek at your conversations.

So on Global Encryption Day, our tip is to check out Signal—a messaging app that actually keeps your conversations private.

Signal uses end-to-end encryption, meaning only you and your recipient can read your messages—not even Signal can see them. Security experts love Signal because it's run by a privacy-focused nonprofit, funded by donations instead of data collection, and its code is publicly auditable. 
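To give a feel for why the relaying server can’t read your messages, here is an illustrative sketch of the core idea, a key exchange followed by symmetric encryption, written with the third-party Python “cryptography” package. This is a teaching toy, not Signal’s actual protocol; Signal layers the far more sophisticated Double Ratchet on top of ideas like these.

```python
# Illustrative sketch of the idea behind end-to-end encryption, NOT Signal's
# actual protocol. Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each endpoint generates its own key pair; private keys never leave the device.
alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()

def message_key(my_priv, their_pub):
    # Both sides derive the same shared secret, then stretch it into a message key.
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo message key").derive(shared)

key_a = message_key(alice_priv, bob_priv.public_key())
key_b = message_key(bob_priv, alice_priv.public_key())
assert key_a == key_b  # a server relaying ciphertext never learns this key

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key_a).encrypt(nonce, b"meet at 6?", None)
print(ChaCha20Poly1305(key_b).decrypt(nonce, ciphertext, None))  # b'meet at 6?'
```

The point of the sketch: the symmetric key is derived independently on the two endpoints and never transmitted, so a server that only relays ciphertext has nothing to decrypt with.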

Beyond privacy, Signal offers free messaging and calls over Wi-Fi, helping you avoid SMS charges and international calling fees. The only catch? Your contacts need Signal too, so start recruiting your friends and family!

How to get started: Download Signal from your app store, verify your phone number, set a secure PIN, and start messaging your contacts who join you. Consider also setting up a username so people can reach you without sharing your phone number. For more detailed instructions, check out our guide.

Global Encryption Day is the perfect time to protect your communications. Take your time to explore the app, and check out other privacy-protecting features like disappearing messages, safety number verification, and lock screen notification privacy.

Tip 15: Switch to a Privacy-Protective Browser

Your browser stores tons of personal information: browsing history, tracking cookies, and data that companies use to build detailed profiles for targeted advertising. The browser you choose makes a huge difference in how much of this tracking you can prevent.

Most people use Chrome or Safari, which are automatically installed on Google and Apple products, but these browsers have significant privacy drawbacks. For example: Chrome's Incognito mode only hides history on your device—it doesn't stop tracking. Safari has been caught storing deleted browser history and collecting data even in private browsing mode.

Firefox is one alternative that puts privacy first. Unlike Chrome, Firefox automatically blocks trackers and ads in Private Browsing mode and prevents websites from sharing your data between sites. It also warns you when websites try to extract your personal information. But Firefox isn't your only option—other privacy-focused browsers like DuckDuckGo, Brave, and Tor also offer strong protections with different features. The key is switching away from browsers that prioritize data collection over your privacy.

Switching is easy: download your chosen browser from its official site and install it. Most browsers let you import bookmarks and passwords during setup.

You now have a new browser! Take some time to explore its privacy settings to maximize your protection.

Tip 16: Give Yourself Another Online Identity

We all take on different identities at times. Just as it's important to set boundaries in your daily life, the same can be true for your digital identity. For many reasons, people may want to keep aspects of their lives separate—and giving people control over how their information is used is one of the fundamental reasons we fight for privacy. Consider splitting pieces of your life across separate email accounts, phone numbers, or social media accounts. 

This can help you manage your life and keep a full picture of your private information out of the hands of nosy data-mining companies. Maybe you volunteer for an organization in your spare time that you'd rather keep private, want to keep emails from your kids' school separate from a mountain of spam, or simply would rather keep your professional and private social media accounts separate. 

Whatever the reason, consider whether there's a piece of your life that could benefit from its own identity. When you set up these boundaries, you can also protect your privacy.

Tip 17: Check Out Virtual Card Services

Ever encounter an online vendor selling something that’s just what you need—if you could only be sure they aren’t skimming your credit card number? Or maybe you trust the vendor, but aren’t sure the website (seemingly built on some arcane e-commerce platform from 1998) won’t be hacked within the hour after your purchase? Buying those bits and bobs shouldn’t cost you your peace of mind on top of the dollar amount. For these types of purchases, we recommend checking out a virtual card service.

These services generate a seemingly random credit card number for your use, locked down in whatever way you specify. That may mean a card locked to a single vendor, so no one else can make charges on it. It could validate charges only for a specific category of purchase, for example clothing. You can not only set limits on vendors, but set spending limits a card can’t exceed, or make it a one-time-use card that closes itself after a single purchase. You can even pause a card if you are sure you won’t be using it for some time, and then unpause it later. The configuration is up to you.
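As a purely hypothetical sketch of how such lock rules might be enforced against an incoming charge, consider the following; the class, rule names, and merchants are invented and don’t correspond to any particular provider’s API.

```python
# Hypothetical illustration of virtual-card lock rules; not any provider's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualCard:
    locked_merchant: Optional[str] = None  # only this vendor may charge the card
    spend_limit: Optional[float] = None    # total spend may not exceed this amount
    single_use: bool = False               # card closes itself after one charge
    paused: bool = False
    spent: float = 0.0
    used: bool = False

    def authorize(self, merchant: str, amount: float) -> bool:
        """Return True if the charge is allowed under this card's rules."""
        if self.paused or (self.single_use and self.used):
            return False
        if self.locked_merchant is not None and merchant != self.locked_merchant:
            return False
        if self.spend_limit is not None and self.spent + amount > self.spend_limit:
            return False
        self.spent += amount
        self.used = True
        return True

card = VirtualCard(locked_merchant="bits-and-bobs.example", spend_limit=25.00)
print(card.authorize("bits-and-bobs.example", 19.99))  # True: right vendor, under limit
print(card.authorize("sketchy-store.example", 5.00))   # False: locked to another vendor
```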

There are a number of virtual card services available, like Privacy.com or IronVest, just to name a few. Just like any vendor, though, these services need some way to charge you. So for any virtual card service, pop them into your favored search engine to verify they’re legit, and aren’t going to burden you with additional fees. Some options may also only be available in specific countries or regions, due to financial regulation laws.


Tip 18: Minimize Risk While Using Digital Payment Apps

Digital payment apps like Cash App, Venmo, and Zelle generally offer fewer fraud protections than credit cards offered by traditional financial institutions. It’s safer to stick to credit cards when making online purchases. That said, there are ways to minimize your risk.

Turn on transaction alerts:

  • On Cash App, tap on your picture or initials on the right side of the app. Tap Notifications, and then Transactions. From there, you can toggle the settings to receive a push notification, a text, and/or an email with receipts or to track activity on the app.
  • On PayPal, tap on the top right icon to access your account. Tap Notification Preferences, click on “Open Settings” and toggle to “Allow Notifications” if you’d like to see those on your phone.
  • On Venmo, tap on your picture or initials to go to the Me tab. Then, tap the Settings gear in the top right of the app, and tap Notifications. From there, you can adjust your text and email notifications, and even turn on push notifications. 

Report suspected fraud quickly

If you receive a notification for a purchase you didn’t make, even if it’s a small amount, make sure to immediately report it. Scammers sometimes test the waters with small amounts to see whether or not their targets are paying attention. Additionally, you may be on the hook for part of the payment if you don’t act fast. PayPal and Venmo say they cover lost funds if they’re reported within 60 days, but Cash App has more complicated restrictions, which can include fees of up to $500 if you lose your device or password and don’t report it within 48 hours.

And don’t forget to turn on multifactor authentication for each app. For more information, check out these tips from Consumer Reports.

Tip 19: Turn Off Ad Personalization to Limit How the Tech Giants Monetize Your Data

Tech companies make billions by harvesting your personal data and using it to sell hyper-targeted ads. This business model drives them to track us far beyond their own platforms, gathering data about our online and offline activity. Surveillance-based advertising isn’t just creepy—it’s harmful. The systems that power hyper-targeted ads can also funnel your personal information to data brokers, advertisers, scammers, and law enforcement agencies. 

To limit how companies monetize your data through surveillance-based advertising, turn off ad personalization on your accounts. This setting looks different depending on the platform, but here are some key places to start:

  • Meta (Facebook & Instagram): Follow this guide to find the setting for disabling ad targeting based on data Meta collects about you from other websites and apps.
  • Google: Visit https://myadcenter.google.com/home and switch the “Personalized ads” option from “On” to “Off.”
  • X (formerly known as Twitter): Visit https://x.com/settings/privacy_and_safety and turn off all settings under “Data sharing and personalization.”

Banning online behavioral ads would be a better solution, but turning off ad personalization is a quick and easy step to limit how tech companies profit from your data. And don’t forget to change this same setting on Amazon, too.

Tip 20: Tighten Account Privacy Settings

Just because you want to share information with select friends and family on social media doesn’t necessarily mean you want to broadcast everything to the entire world. Whether you want to make sure you’re not sharing your real-time location with someone you’d rather not bump into or only want your close friends to know about your favorite pop star, you can typically restrict how companies display your status updates and other information.

In addition to whether data is displayed publicly or just to a select group of contacts, you may have some control over how data is collected, used, and shared with advertisers, and how long it is stored.

To get started, choose an account, find its privacy settings page, and review the options, making changes as needed.

Unfortunately, you may need to tweak your privacy settings multiple times to get them the way you want, as companies often introduce new features that are less private by default. And while companies sometimes offer choices on how data is collected, you can’t control most of the data collection that takes place. For more information, see Security Planner.

Tip 21: Protect Your Data When Dating Online

Dating apps like Grindr and Tinder collect vast amounts of intimate details—everything from sexual preferences to location history and behavioral patterns—all from people who are just looking for love and connection. This data falling into the wrong hands can come with unacceptable consequences, especially for members of the LGBTQ+ community and other vulnerable users who particularly need privacy protections.

To ensure that finding love does not come with such a privacy-impinging tradeoff, follow these tips to protect yourself when dating online:

  1. Review your login information and make sure to use a strong, unique password for your accounts; and enable two-factor authentication when offered. 
  2. Disable behavioral ads so personal details about you cannot be used to create a comprehensive portrait of your life—including your sexual orientation.
  3. Review the app's access to your location and camera roll, and adjust these permissions in line with what information you would like to keep private. 
  4. Consider what photos you choose, upload, and share; and assume that everything can and will be made public.
  5. Disable the integration of third-party apps like Spotify if you want more privacy. 
  6. Be mindful of what you share with others when you first chat, such as not disclosing financial details, and trust your gut if something feels off. 

There isn't a single right way to use dating apps, but taking these small steps can have a big impact on staying safe when dating online.

Tip 22: Turn Off Automatic Content Recognition (ACR) On Your TV

You might think TVs are just meant to be watched, but it turns out TV manufacturers do their fair share of watching what you watch, too. This is done through technology called “automatic content recognition” (ACR), which snoops on and identifies what you’re watching by snapping screenshots and comparing them to a big database. How many screenshots? The Markup found some TVs captured up to 7,200 images per hour. The main reason? Ad targeting, of course. 
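To make the screenshot-matching idea concrete, here is a toy sketch of fingerprinting a frame and looking it up in a reference database; real ACR systems use far more robust audio and video fingerprints, and the tiny “frames” and titles below are made up.

```python
# Toy sketch of the screenshot-matching idea behind automatic content
# recognition (ACR): fingerprint a frame, then look it up in a database.
# Real ACR uses far more robust fingerprints; these values are invented.

def average_hash(pixels):
    """Fingerprint a tiny grayscale thumbnail (here, 64 brightness values)."""
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

reference_db = {
    average_hash([10] * 32 + [200] * 32): "Some Blu-ray movie",
    average_hash([5] * 16 + [180] * 48): "A video game title screen",
}

captured_frame = [10] * 32 + [200] * 32   # what the TV "sees" right now
match = reference_db.get(average_hash(captured_frame))
print("Identified as:", match or "unknown content")
```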

Any TV that’s connected to the internet likely does this alongside now-standard snooping practices, like tracking what apps you open and where you’re located. ACR is particularly nefarious, though, as it can identify not just streaming services, but also offline content, like video games, over-the-air broadcasts, and physical media. What we watch can and should be private, but that’s especially true when we’re using media that’s otherwise not connected to the internet, like Blu-Rays or DVDs.

Opting out of ACR can be a bit of a chore, but it is possible on most smart TVs. Consumer Reports has guides for most of the major TV manufacturers. 

And that’s it for Opt Out October. Hopefully you’ve come across a tip or two that you didn’t know about, and found ways to protect your privacy and disrupt the astounding amount of data collection tech companies do.

Blocking Access to Harmful Content Will Not Protect Children Online, No Matter How Many Times UK Politicians Say So

5 August 2025 at 06:46

The UK is having a moment. In late July, new rules took effect that require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

During the four years that the legislation behind these changes—the Online Safety Act (OSA)—was debated in Parliament, and in the two years since, while the UK’s independent online regulator Ofcom devised the implementing regulations, experts from across civil society repeatedly flagged concerns about the impact of this law on both adults’ and children’s rights. Yet politicians in the UK pushed ahead and enacted one of the most contentious age verification mandates that we’ve seen.

The problem of online safety cannot be solved through technology alone.

No one—no matter their age—should have to hand over their passport or driver’s license just to access legal information and speak freely. As we’ve been saying for many years now, the approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the children that it is trying to protect. Here are five reasons why:

Age Verification Systems Lead to Less Privacy 

Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children out of a website or away from certain content, online services need to confirm the ages of all their visitors, not just children—for example by asking for government-issued documentation or by using biometric data, such as face scans, that are shared with third-party services like Yoti or Persona to estimate whether the user is over 18. This means that adults and children alike must share their most sensitive and personal information with online services to access a website. 

Once this information is shared to verify a user's age, there’s no way for people to know how it's going to be retained or used by that company, including whether it will be sold or shared with even more third parties like data brokers or law enforcement. The more information a website collects, the more chances there are for that information to end up in the hands of a marketing company, a bad actor, a state actor, or someone who has filed a legal request for it. If a website, or one of the intermediaries it uses, misuses or mishandles the data, the visitor might never find out. There is also a risk that this data, once collected, can be linked to other unrelated web activity, creating an aggregated profile of the user that grows more valuable as each new data point is added. 

As we argued extensively during the passage of the Online Safety Act, any attempt to protect children online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. But with the Online Safety Act, users are being forced to trust that platforms (and whatever third-party verification services they choose to partner with) are guardrailing users’ most sensitive information—not selling it through the opaque supply chains that allow corporations and data brokers to make millions. The solution is not to come up with a more sophisticated technology, but to simply not collect the data in the first place.

This Isn’t Just About Safety—It’s Censorship

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But under the Online Safety Act, the UK government—with Ofcom—is deciding what speech young people have access to, and is forcing platforms to remove any content considered harmful. As part of this, platforms are required to build “safer algorithms” to ensure that children do not encounter harmful content, and introduce effective content moderation systems to remove harmful content when platforms become aware of it. 

Because the OSA threatens large fines or even jail time for any non-compliance, platforms are forced to over-censor content to ensure that they do not face any such liability. Reports are already showing the censorship of content that falls outside the parameters of the OSA, such as footage of police attacking pro-Palestinian protestors being blocked on X, the subreddit r/cider—yes, the beverage—asking users for photo ID, and smaller websites closing down entirely. UK-based organisation Open Rights Group are tracking this censorship with their tool, Blocked.

We know that the scope of so-called “harmful content” is subjective and arbitrary, and it often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA, which claim to “protect children” or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies. But in all scenarios, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children. 

Children deserve a more intentional and holistic approach to protecting their safety and privacy online.

People Do Not Want This 

Users in the UK have been clear in showing that they do not want this. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK. The BBC reported that one app, Proton VPN, saw an 1,800% spike in UK daily sign-ups after the age check rules took effect. A similar spike in searches for VPNs was evident in January when Florida joined the ever-growing list of U.S. states implementing an age verification mandate on sites that host adult content, including pornography websites like Pornhub. 

Whilst VPNs may be able to disguise the source of your internet activity, they are not foolproof or a solution to age verification laws. Ofcom has already started discouraging their use, and with time, it will become increasingly difficult for VPNs to effectively circumvent age verification requirements as enforcement of the OSA adapts and deepens. VPN providers will struggle to keep up with these constantly changing laws to ensure that users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. 

Some politicians in the Labour Party argued that a ban on VPNs will be essential to prevent users circumventing age verification checks. But banning VPNs, just like introducing age verification measures, will not achieve this goal. It will, however, function as an authoritarian control on accessing information in the UK. If you are navigating protecting your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy—a valuable resource for anyone looking to use these tools.

 Alongside increased VPN usage, a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. In its official response to the petition, the UK government said that it “has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.” This is not good enough: the government must immediately treat the reasonable concerns of people in the UK with respect, not disdain, and revisit the OSA.

Users Will Be Exposed to Amplified Discrimination 

To check users' ages, three types of systems are typically deployed: age verification, which requires a person to prove their age and identity; age assurance, whereby users are required to prove that they are of a certain age or age range, such as over 18; or age estimation, which typically describes the process or technology of estimating ages to a certain range. The OSA requires platforms to check ages through age assurance to prove that those accessing platforms are over 18, but leaves the specific tool for measuring this at the platforms’ discretion. This may therefore involve uploading a government-issued ID, or submitting a face scan to an app that will then use a third-party platform to “estimate” your age.

From what we know about systems that use face scanning in other contexts, such as face recognition technology used by law enforcement, even the best technology is susceptible to mistakes and misidentification. Just last year, a legal challenge was launched against the Met Police after a community worker was wrongly identified and detained following a misidentification by the Met’s live facial recognition system. 

For age assurance purposes, we know that the technology at best has an error range of over a year, which means that users may risk being incorrectly blocked or locked out of content by erroneous estimations of their age—whether unintentionally or due to discriminatory algorithmic patterns that incorrectly determine people’s identities. These algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance that people should not have to be subject to just to access content that the government could consider harmful.

Not Everyone Has Access to an ID or Personal Device 

Many advocates of the ‘digital transition’ introduce document-based verification requirements or device-based age verification systems on the assumption that every individual has access to a form of identification or their own smartphone. But this is not true. In the UK, millions of people don’t hold a form of identification or own a personal mobile device, instead sharing with family members or using public devices like those at a library or internet cafe. Yet because age checks under the OSA involve checking a user’s age through government-issued ID documents or face scans on a mobile device, millions of people will be left excluded from online speech and will lose access to much of the internet. 

These are primarily lower-income or older people who are often already marginalized, and for whom the internet may be a critical part of life. We need to push back against age verification mandates like the Online Safety Act, not just because they make children less safe online, but because they risk undermining crucial access to digital services, eroding privacy and data protection, and limiting freedom of expression. 

The Way Forward 

Online safety is not a problem that can be solved through technology alone, and children deserve a more intentional and holistic approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians in the UK to look into what is best, and not what is easy.

No, the UK’s Online Safety Act Doesn’t Make Children Safer Online

1 August 2025 at 12:32

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country’s Online Safety Act (OSA). 

The legislation attempts to make the UK “the safest place” in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK—including social media, search engines, music sites, and adult content providers—enforce age checks to prevent children from seeing harmful content. Harmful content is defined in three categories, and failure to comply could result in fines of up to 10% of global revenue or courts blocking services:

  1. Primary priority content that is harmful to children: 
    1. Pornographic content.
    2. Content which encourages, promotes or provides instructions for:
      1. suicide;
      2. self-harm; or 
      3. an eating disorder or behaviours associated with an eating disorder.
  2. Priority content that is harmful to children: 
    1. Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
    2. Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment; 
    3. Content that encourages, promotes or provides instructions for serious violence against a person; 
    4. Bullying content;
    5. Content which depicts serious violence against or graphically depicts serious injury to a person or animal (whether real or fictional);
    6. Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and 
    7. Content that encourages the self-administration of harmful substances.
  3. Non-designated content that is harmful to children (NDC): 
    1. Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:
      1. the content’s potential financial impact;
      2. the safety or quality of goods featured in the content; or
      3. the way in which a service featured in the content may be performed.

Online service providers must make a judgement about whether the content they host is harmful to children, and if so, address the risk by implementing a number of measures, which include, but are not limited to:

1. Robust age checks: Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.” To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving the age-check service permission to access information from their bank about whether they are over 18.

2. Safer algorithms: Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”

3. Effective moderation: All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.”
Since these measures took effect in late July, social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address for technology to analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the right-wing platform Gab.

The UK Must Do Better

The UK is not alone in pursuing such a misguided approach to protect children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users’ ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia’s ban on youth under the age of 16 accessing social media is likely to be implemented in December. 

But the UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.

And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. 

The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy

25 July 2025 at 18:13

In the past few years, governments across the world have rolled out digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This post is the first in a short series explaining digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

Age verification measures are having a moment, with policymakers in the U.S. and around the world passing legislation mandating that online services and companies introduce technologies that require people to verify their identities to access content deemed appropriate for their age. But for most people, physical government documentation like a driver's license, passport, or other ID is not a simple binary of having it or not. Physical ID systems involve hundreds of factors that impact their accuracy and validity, and everyday situations arise in which identification attributes change, or an ID becomes invalid or inaccurate or needs to be reissued: addresses change, driver’s licenses expire or have suspensions lifted, or temporary IDs are issued in lieu of obtaining permanent identification.

The digital ID systems currently being introduced potentially solve some problems, like identity fraud, for business and government services, but leave the holder of the digital ID vulnerable to the needs of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims for public assistance and other services as to facilitate them. That’s why legal protections are as important as the digital IDs themselves. To add to this, in places that lack comprehensive data privacy legislation, verifiers are not heavily restricted in what they can and can’t ask the holder. In response, some privacy mechanisms have been suggested and few have been made mandatory, such as the promise that a cryptographic technique called Zero Knowledge Proofs (ZKPs) will easily solve the privacy aspects of sharing ID attributes.

Zero Knowledge Proofs: The Good News

The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is being able to incorporate and share something called a Zero Knowledge Proof (ZKP) so that a website or mobile application can verify ID information without the holder having to share the ID itself, or the information explicitly on it. ZKPs provide a cryptographic way to avoid giving something away, like your exact date of birth and age from your ID, instead offering a “yes-or-no” claim (like above or below 18) to a verifier requiring a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness is appealing to verifiers and governments because it makes it hard for an ID holder to present forged information (the holder won’t know the “secret”). Zero knowledge can be beneficial to the holder, because they don’t have to share explicit information like a birth date, just cryptographic proof that said information exists and is valid. There have been recent announcements from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products.”
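
To make the “yes-or-no” idea concrete, here is a minimal sketch in Python of the classic Schnorr identification protocol, one of the simplest zero-knowledge proofs. It is purely illustrative: it is not an age-range proof, it is not how any deployed digital ID wallet works, and the group parameters are toy-sized and insecure. It only demonstrates the two properties described above, namely that a prover who doesn't hold the secret cannot reliably pass (soundness) and that the verifier learns nothing beyond the fact that the prover knows the secret (zero knowledge).

```python
# Illustrative sketch only: toy parameters, not an age-range proof, and not
# how any real digital ID wallet or standard works.
import secrets

# Toy public parameters: p is prime, q divides p - 1, and g generates the
# order-q subgroup. Real deployments use groups of roughly 256-bit order.
p = 467   # prime modulus (p = 2q + 1)
q = 233   # prime order of the subgroup
g = 4     # generator of the order-q subgroup

# Holder's secret x and the corresponding public value y = g^x mod p.
x = secrets.randbelow(q)
y = pow(g, x, p)

def prover_commit():
    """Prover picks a random nonce r and sends the commitment t = g^r mod p."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def verifier_challenge():
    """Verifier sends a random challenge c."""
    return secrets.randbelow(q)

def prover_respond(r, c):
    """Prover answers with s = r + c*x mod q; s reveals nothing about x on its
    own because r is uniformly random and never reused."""
    return (r + c * x) % q

def verifier_check(t, c, s):
    """Verifier accepts iff g^s == t * y^c mod p."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One round of the protocol: the verifier ends up convinced that the prover
# knows x, but never sees x itself (or, by analogy, a birth date).
r, t = prover_commit()
c = verifier_challenge()
s = prover_respond(r, c)
print("proof accepted:", verifier_check(t, c, s))
```

Real digital ID proposals layer far more machinery on top of primitives like this (credential issuance, revocation, range proofs), which is exactly where the policy questions below come in.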

Zero Knowledge Proofs: The Bad News

What ZKPs don’t do is mitigate verifier abuse or limit verifiers’ requests: they don’t stop a verifier from over-asking for information it doesn’t need, or cap the number of times it requests your age over time. They also don’t prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or other device information, while you interact with them.
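
As a purely hypothetical sketch (every field and function name here is invented for illustration and does not correspond to any real wallet, standard, or API), consider what a minimized verifier request might look like next to an over-broad one. Nothing in the zero-knowledge proof itself rejects the second request; any limit has to come from wallet policy and from law.

```python
# Hypothetical examples of verifier "presentation requests." The field names
# are invented for illustration; they are not from any real standard.

MINIMAL_REQUEST = {
    "verifier": "example-news-site",
    "requested_claims": ["age_over_18"],  # a single yes-or-no predicate
}

OVERBROAD_REQUEST = {
    "verifier": "example-news-site",
    "requested_claims": [
        "age_over_18",
        "full_name",       # none of these are needed to gate content,
        "date_of_birth",   # but nothing in the cryptography stops a
        "home_address",    # verifier from asking for them anyway
    ],
}

def wallet_should_disclose(request, allowed_claims=frozenset({"age_over_18"})):
    """A policy check a holder's wallet could enforce: refuse any request that
    asks for more than the permitted predicate. This limit is a matter of
    wallet design and regulation, not a property of the proof itself."""
    return set(request["requested_claims"]) <= allowed_claims

print(wallet_should_disclose(MINIMAL_REQUEST))    # True
print(wallet_should_disclose(OVERBROAD_REQUEST))  # False
```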

ZKPs are a great tool for sharing less data about ourselves over time or in a one-time transaction. But this doesn’t do a lot about the data broker industry that already has massive, existing profiles of data on people. We understand that this was not what ZKPs for age verification were presented to solve. But it is still imperative to point out that utilizing this technology to share even more about ourselves online through mandatory age verification establishes a wider scope for sharing in an already saturated ecosystem of easily linked, existing personal information online. Going from presenting your physical ID maybe 2-3 times a week to potentially proving your age to multiple websites and apps every day will make going online itself a burden at minimum, and, for those who can’t obtain an ID, a barrier entirely.

Protecting The Way Forward

Mandatory age verification takes the potential privacy benefits of mobile ID and proposed ZKP solutions, then warps them into speech-chilling mechanisms.

Until the hard questions of power imbalances for potentially abusive verifiers and prevention of phoning home to ID issuers are addressed, these systems should not be pushed forward without proper protections in place. A more private, holder-centric ID requires more than ZKPs as a catch-all for privacy concerns. Online safety is not a problem that can be solved through technology alone; it involves multiple, ongoing conversations. Yes, that sounds harder to do than imposing age checks on everyone online. Maybe that’s why age checks are so tempting to implement. However, we encourage policymakers and lawmakers to look into what is best, and not what is easy.

Dating Apps Need to Learn How Consent Works

21 July 2025 at 12:29

Staying safe whilst dating online should not be the responsibility of users—dating apps should be prioritizing our privacy by default, and laws should require companies to prioritize user privacy over their profit. But dating apps are taking shortcuts in safeguarding the privacy and security of users in favour of developing and deploying AI tools on their platforms, sometimes by using your most personal information to train their AI tools. 

Grindr has big plans for its gay wingman bot, Bumble launched AI Icebreakers, Tinder introduced AI tools to choose profile pictures for users, OKCupid teamed up with AI photo editing platform Photoroom to erase your ex from profile photos, and Hinge recently launched an AI tool to help users write prompts.

The list goes on, and the privacy harms are significant. Dating apps have built platforms that encourage people to be exceptionally open with sensitive and potentially dangerous personal information. But at the same time, the companies behind the platforms collect vast amounts of intimate details about their customers—who are often just searching for compatibility and connection—everything from sexual preferences to precise location. This data falling into the wrong hands can—and has—come with unacceptable consequences, especially for members of the LGBTQ+ community. 

This is why corporations should obtain opt-in consent for AI training data gathered through channels like private messages, and employ minimization practices for all other data. Dating app users deserve the right to privacy, and should have a reasonable expectation that the contents of conversations—from text messages to private pictures—are not going to be shared or used for any purpose that opt-in consent has not been provided for. This includes the use of personal data for building AI tools, such as chatbots and picture selection tools. 

AI Icebreakers

Back in December 2023, Bumble introduced AI Icebreakers to the ‘Bumble for Friends’ section of the app to help users start conversations by providing them with AI-generated messages. Powered by OpenAI’s ChatGPT, the feature was deployed in the app without ever asking for users’ consent. Instead, the company presented users with a pop-up upon entering the app that repeatedly nudged people to click ‘Okay,’ reappearing every time the app was reopened until individuals finally relented and tapped ‘Okay.’

Obtaining user data without explicit opt-in consent is bad enough. But Bumble has taken this even further by sharing personal user data from its platform with OpenAI to feed into the company’s AI systems. By doing this, Bumble has forced its AI feature on millions of users in Europe—without their consent but with their personal data.

In response, European nonprofit noyb recently filed a complaint with the Austrian data protection authority over Bumble’s violation of its transparency obligations under Article 5(1)(a) GDPR. In its report, noyb flagged concerns around Bumble’s data sharing with OpenAI, which allowed the company to generate an opening message based on information users shared on the app. 

In its complaint, noyb specifically alleges that Bumble: 

• Failed to provide information about the processing of personal data for its AI Icebreaker feature 
• Confused users with a “fake” consent banner
• Lacks a legal basis under Article 6(1) GDPR as it never sought user consent and cannot legally claim to base its processing on legitimate interest 
• Can only process sensitive data—such as data involving sexual orientation—with explicit consent per Article 9 GDPR
• Failed to adequately respond to the complainant’s access request, regulated through Article 15 GDPR.

AI Chatbots for Dating

Grindr recently launched its AI wingman. The feature operates like a chatbot and currently keeps track of favorite matches and suggests date locations. In the coming years, Grindr plans for the chatbot to send messages to other AI agents on behalf of users, and make restaurant reservations—all without human intervention. This might sound great: online dating without the time investment? A win for some! But privacy concerns remain. 

The chatbot is being built in collaboration with a third-party company called Ex-Human, which raises concerns about data sharing. Grindr has communicated that its users’ personal data will remain on its own infrastructure, which Ex-Human does not have access to, and that users will be “notified” when AI tools are available on the app. The company also said that it will ask users for permission to use their chat history for AI training. But AI data poses privacy risks that do not seem fully accounted for, particularly in places where it’s not safe to be outwardly gay. 

In building this ‘gay chatbot,’ Grindr’s CEO said one of its biggest limitations was preserving user privacy. It’s good that the company is cognizant of these harms, particularly because it has a terrible track record of protecting user privacy and was recently sued for allegedly revealing the HIV status of users. Further, direct messages on Grindr are stored on the company’s servers, where you have to trust they will be secured, respected, and not used to train AI models without your consent. Given Grindr’s poor record of respecting user consent and autonomy on the platform, users need stronger protections and guardrails for their personal data and privacy than are currently being provided—especially for AI tools that are being built by third parties. 

AI Picture Selection

In the past year, Tinder and Bumble have both introduced AI tools to help users choose better pictures for their profiles. Tinder’s AI-powered feature, Photo Selector, requires users to upload a selfie, after which its facial recognition technology can identify the person in their camera roll images. The Photo Selector then chooses a “curated selection of photos” direct from users’ devices based on Tinder’s “learnings” about good profile images. Users are not informed about the parameters behind choosing photos, nor is there a separate privacy policy to guardrail privacy issues relating to the potential collection of biometric data, or the collection, storage, and sale of camera roll images. 

The Way Forward: Opt-In Consent for AI Tools and Consumer Privacy Legislation 

Putting users in control of their own data is fundamental to protecting individual and collective privacy. We all deserve the right to control how our data is used and by whom. And when it comes to data like profile photos and private messages, all companies should require opt-in consent before processing that data for AI. Finding love should not involve such a privacy-impinging tradeoff.

At EFF, we’ve also long advocated for the introduction of comprehensive consumer privacy legislation to limit the collection of our personal data at its source and prevent retained data from being sold or given away, breached by hackers, disclosed to law enforcement, or used to manipulate a user’s choices through online behavioral advertising. This would help protect users on dating apps: reducing the amount of data collected prevents its subsequent use for things like building AI tools and training AI models. 

The privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities, but these steps are essential to protecting users on dating apps. We urge companies to put people over profit and protect privacy on their platforms.

Data Brokers are Selling Your Flight Information to CBP and ICE

9 July 2025 at 19:06

For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.

This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder, often law enforcement and intelligence agencies.

One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So not only is the government doing an end run around the Fourth Amendment to get information where they would otherwise need a warrant; they’ve also been trying to hide how they know these things about us. 

ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers. 

More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada. 

In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives. 

Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution. 

Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers. 

At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.

The new revelations about ARC’s data sales to CBP and ICE are a fresh reminder of the need for “privacy first” legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the “Fourth Amendment Is Not For Sale Act” to stop police from bypassing judicial review of their data seizures by means of purchasing data from brokers. And let’s enforce data broker registration laws. 
