After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know

3 December 2025 at 18:19

After a years-long battle, the Council of the EU, which represents EU member states, has at last agreed on a position on the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures. The good news is that the most controversial part, the requirement to scan encrypted messages, is out. The bad news is there’s more to it than that.

Chat Control has gone through several iterations since it was first introduced, with the EU Parliament backing a position that protects fundamental rights while the Council of the EU spent many months pursuing an intrusive, law-enforcement-focused approach. Many proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal. This requirement would fundamentally break end-to-end encryption.

Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the Council agreed on its position, which removed the requirement that forces providers to scan messages on their services. It also comes with strong language to protect encryption, which is good news for users.

But here’s the rub: first, the Council’s position allows for “voluntary” detection, where tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., which has no comprehensive federal privacy law, voluntary scanning is not technically legal in the EU, though it has been possible through a derogation set to expire in 2026. It is unclear how this will play out over time, though we are concerned that this approach will lead to private mass-scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection.

With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services. This includes age verification and age assessment measures. We have written about the perils of age verification schemes and recent developments in the EU, where regulators are increasingly focusing on age verification to reduce online harms.

If secure messaging platforms like Signal or WhatsApp are required to implement age verification methods, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be available to everyone, everywhere, of all ages, freely and without the requirement to prove their identity. As age verification has started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations, it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline. 

Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull the thread on this and you’re left with a contradictory stance, because an activity is no longer voluntary if it forms part of a formal risk management obligation. While courts might interpret its mention in a risk assessment as an optional measure available to providers that do not use encrypted communication channels, this reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services to perform voluntary scanning if they don’t want to invest in alternative risk mitigation options. It’s largely up to the provider to choose how to mitigate risks, but it’s up to enforcers to decide what is effective. Again, we're concerned about how this will play out in practice.

For the same reason, clear and unambiguous language is needed to prevent authorities from taking a hostile view of what is meant by “allowing encryption” if that means then expecting service providers to implement client-side scanning. We welcome the clear assurance in the text that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.

As we approach the final “trilogue” negotiations over this regulation, we urge EU lawmakers to work on a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk-mitigation benchmarks that lead to surveillance in practice.

Once Again, Chat Control Flails After Strong Public Pressure

31 October 2025 at 17:08

The European Union Council pushed for a dangerous plan to scan encrypted messages, and once again, people around the world loudly called out the risks, leading the current Danish presidency to withdraw the plan.

EFF has strongly opposed Chat Control since it was first introduced in 2022. The zombie proposal comes back time and time again, and time and time again, it’s been shot down because there’s no public support. The fight is delayed, but not over.

It’s time for lawmakers to stop attempting to compromise encryption under the guise of public safety. Instead of making minor tweaks and resubmitting this proposal over and over, the EU Council should accept that any sort of client-side scanning of devices undermines encryption, and move on to developing real solutions that don’t violate the human rights of people around the world. 

As long as lawmakers continue to misunderstand the way encryption technology works, there is no way forward with message-scanning proposals, not in the EU or anywhere else. This sort of surveillance is not just an overreach; it’s an attack on fundamental human rights. 

The coming EU presidencies should abandon these attempts and work on finding a solution that protects people’s privacy and security.

When AI and Secure Chat Meet, Users Deserve Strong Controls Over How They Interact

23 October 2025 at 13:23

Both Google and Apple are cramming new AI features into their phones and other devices, and neither company has offered clear ways to control which apps those AI systems can access. Recent issues around WhatsApp on both Android and iPhone demonstrate how these interactions can go sideways, risking the exposure of chat conversations beyond what you intend. Users deserve better controls and clearer documentation around what these AI features can access.

After diving into how Google Gemini and Apple Intelligence (and in some cases Siri) currently work, we didn’t always find clear answers to questions about how data is stored, who has access, and what it can be used for.

At a high level, when you compose a message with these tools, the companies can usually see the contents of those messages and receive at least a temporary copy of the text on their servers.

When receiving messages, things get trickier. When you use an AI like Gemini or a feature like Apple Intelligence to summarize or read notifications, we believe companies should be doing that content processing on-device. But poor documentation and weak guardrails have led us deep into documentation rabbit holes that still fail to answer questions about privacy practices as clearly as we’d like.

We’ll dig into the specifics below as well as potential solutions we’d like to see Apple, Google, and other device-makers implement, but first things first, here’s what you can do right now to control access:

Control AI Access to Secure Chat on Android and iOS

Here are some steps you can take to control access if you want nothing to do with device-level AI integrations and don’t want to risk accidentally sharing the text of a message outside the app you’re using.

How to Check and Limit What Gemini Can Access

If you’re using Gemini on your Android phone, it’s a good time to review your settings to ensure things are set up how you want. Here’s how to check each of the relevant settings:

  • Disable Gemini Apps Activity: Gemini Apps Activity is a history Google stores of all your interactions with Gemini, and it’s enabled by default. To disable it, open Gemini. (Depending on your phone model, you may not have the Google Gemini app installed at all; if you don’t, you don’t need to worry about any of this.) Tap your profile picture > Gemini Apps Activity, then change the toggle to either “Turn off” or “Turn off and delete activity” if you also want to delete previous conversations. If the option reads “Turn on,” then Gemini Apps Activity is already turned off.
  • Control app and notification access: You can control which apps Gemini can access by tapping your profile picture > Apps, then scrolling down and disabling the toggle next to any apps you do not want Gemini to access. If you do not want Gemini to potentially access the content that appears in notifications, open the Settings app and revoke notification access from the Google app.
  • Delete the Gemini app: Depending on your phone model, you might be able to delete the Gemini app and revert to using Google Assistant instead. You can do so by long-pressing the Gemini app and selecting the option to delete. 

How to Check and Limit What Apple Intelligence and Siri Can Access

Similarly, there are a few things you can do to clamp down on what Apple Intelligence and Siri can do: 

  • Disable the “Use with Siri Requests” option: If you want to continue using Siri, but don’t want to accidentally use it to send messages through secure messaging apps, like WhatsApp, then you can disable that feature by opening Settings > Apps > [app name], and disabling “Use with Siri Requests,” which turns off the ability to compose messages with Siri and send them through that app.
  • Disable Apple Intelligence entirely: Apple Intelligence is an all-or-nothing setting on iPhones, so if you want to avoid any potential issues, your only option is to turn it off completely. To do so, open Settings > Apple Intelligence & Siri, and disable “Apple Intelligence” (you will only see this option if your device supports Apple Intelligence; if it doesn’t, the menu will only be for “Siri”). You can also disable certain features, like “writing tools,” using Screen Time restrictions. Siri can’t be universally turned off in the same way, though you can turn off the options under “Talk to Siri” so that you can’t speak to it.

For more information about cutting off AI access at different levels in other apps, this Consumer Reports article covers other platforms and services.

Why It Matters 

Sending Messages Has Different Privacy Concerns than Receiving Them

Let’s start with a look at how Google and Apple integrate their AI systems into message composition, using WhatsApp as an example.

Google Gemini and WhatsApp

On Android, you can optionally link WhatsApp and Gemini together so you can then initiate various actions for sending messages from the Gemini app, like “Call Mom on WhatsApp” or “Text Jason on WhatsApp that we need to cancel our secret meeting, but make it a haiku.” This feature raised red flags for users concerned about privacy.

By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are kept indefinitely, subject to human review, and used to train Google’s products. So, unless you change that setting, any message you compose and send in WhatsApp through Gemini is visible to Google.

If you turn the activity off, interactions are still stored for 72 hours. Google’s documentation claims that even though messages are stored, those conversations aren't reviewed or used to improve Google machine learning technologies, though that appears to be an internal policy choice with no technical limits preventing Google from accessing those messages.

The simplicity of invoking Gemini to compose and send a message may lead to a false sense of privacy. Notably, other secure messaging apps, like Signal, do not offer this Gemini integration.

For comparison’s sake, let’s see how this works with Apple devices.

Siri and WhatsApp

The closest comparison to this process on iOS is to use Siri, which, Apple claims, will eventually be part of Apple Intelligence. Currently, Apple’s AI message composition tools are not available for third-party apps like Signal and WhatsApp.

According to its privacy policy, when you dictate a message through Siri to send to WhatsApp (or anywhere else), the message, including metadata like the recipient phone number and other identifiers, is sent to Apple’s servers. This was confirmed by researchers to include the text of messages sent to WhatsApp. When you use Siri to compose a WhatsApp message, the message gets routed to both Apple and WhatsApp. Apple claims it does not store this transcript unless you’ve opted into “Improve Siri and Dictation.” WhatsApp defers to Apple’s support for data handling concerns. This is similar to how Google handles speech-to-text prompts.

In response to that research, Apple said this was expected behavior with an app that uses SiriKit—the extension that allows third-party apps to integrate with Siri—like WhatsApp does.

Both Siri and Apple Intelligence can sometimes run locally on-device, and other times need to rely on Apple-managed cloud servers to complete requests. Apple Intelligence can use the company’s Private Cloud Compute, but Siri doesn’t have a similar feature.

The ambiguity around where data goes makes it overly difficult to decide whether you are comfortable with the sort of privacy trade-off that using features like Siri or Apple Intelligence might entail.

How Receiving Messages Works

Sending encrypted messages is just one half of the privacy puzzle. What happens on the receiving end matters too. 

Google Gemini

By default, the Gemini app doesn’t have access to the text inside secure messaging apps or to notifications. But you can grant access to notifications using the Utilities app. Utilities can read, summarize, and reply to notifications, including in WhatsApp and Signal (it can also read notifications aloud through headphones).

We could not find anything in Google’s Utilities documentation that clarifies what information is collected, stored, or sent to Google from these notifications. When we reached out to Google, the company responded that it “builds technical data protections that safeguard user data, uses data responsibly, and provides users with tools to control their Gemini experience.” That means Google has no technical limitation preventing it from accessing the text of notifications once you’ve enabled the feature in the Utilities app, which could expose any notifications routed through Utilities to the Gemini app, internally or to third parties. Google needs to make its data handling explicit in its public documentation.

If you use encrypted communications apps and have granted access to notifications, then it is worth considering disabling that feature or controlling what’s visible in your notifications on an app-level.

Apple Intelligence

Apple is clearer about how it handles this sort of notification access.

Siri can read and reply to messages with the “Announce Notifications” feature. With this enabled, Siri can read notifications out loud on select headphones or via CarPlay. In a press release, Apple states, “When a user talks or types to Siri, their request is processed on device whenever possible. For example, when a user asks Siri to read unread messages… the processing is done on the user’s device. The contents of the messages aren’t transmitted to Apple servers, because that isn’t necessary to fulfill the request.”

Apple Intelligence can summarize notifications from any app whose notifications you’ve enabled. Apple is clear that these summaries are generated on your device: “when Apple Intelligence provides you with preview summaries of your emails, messages, and notifications, these summaries are generated by on-device models.” This means there should be no risk that the text of notifications from apps like WhatsApp or Signal gets sent to Apple’s servers just to summarize them.

New AI Features Must Come With Strong User Controls

The more device-makers cram AI features into their devices, the more necessary it becomes for us to have clear and simple controls over what personal data these features can access. If users do not have control over when a text leaves a device for any sort of AI processing—whether that’s to a “private” cloud or not—it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications.

Per-app AI Permissions

Google, Apple, and other device makers should add a device-level AI permission to their phones, just as they do for other potentially invasive features like location sharing. You should be able to tell the operating system’s AI not to access an app, even if that comes at the “cost” of missing out on some features. The setting should be straightforward and easy to understand in ways the Gemini and Apple Intelligence controls currently are not.

Offer On-Device-Only Modes

Device-makers should offer an “on-device only” mode for those who want to use some features without having to figure out what happens on the device versus in the cloud. Samsung offers this, and both Google and Apple would benefit from a similar option.

Improve Documentation

Both Google and Apple should improve their documentation about how these features interact with various apps. Apple doesn’t seem to clarify notification processing privacy anywhere outside of a press release, and we couldn’t find anything about Google’s Utilities privacy at all. We appreciate tools like Gemini Apps Activity as a way to audit what the company collects, but vague information like “Prompted a Communications query” is only useful if there’s an explanation somewhere about what that means.

The current user options are not enough. It’s clear that the AI features device-makers add come with significant confusion about their privacy implications, and it’s time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered to both users and developers who want real, transparent safeguards over how a company accesses their private data and communications.

Tile’s Lack of Encryption Is a Danger for Users Everywhere

3 October 2025 at 14:21

In research shared with Wired this week, security researchers detailed a series of vulnerabilities and design flaws with Life360’s Tile Bluetooth trackers that make it easy for stalkers and the company itself to track the location of Tile devices.

Tile trackers are small Bluetooth trackers, similar to Apple’s AirTags, but they work on their own network, not Apple’s. We have been raising concerns about these types of trackers since they were first introduced, and we provide guidance for finding them if you think someone is using one to track you without your knowledge.

EFF has worked on improving the Detecting Unwanted Location Trackers standard that Apple, Google, and Samsung use, and these companies have at least made incremental improvements. But Tile has done little to mitigate the concerns we’ve raised around stalkers using their devices to track people.

One of the fundamentals of that standard is that Bluetooth trackers should rotate their MAC address, making them harder for a third party to track, and that they should encrypt the information they send. According to the researchers, Tile does neither.

This has a direct impact on the privacy of legitimate users and opens the device up to potentially even more dangerous stalking. Tile devices do have a rotating ID, but since the MAC address is static and unencrypted, anyone in the vicinity could pick up and track that Bluetooth device.

Other Bluetooth trackers don’t broadcast their MAC address, and instead use only a rotating ID, which makes it much harder for someone to record and track the movement of that tag. Apple, Google, and Samsung also all use end-to-end encryption when data about the location is sent to the companies’ servers, meaning the companies themselves cannot access that information.
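To make the contrast concrete, here is a minimal sketch of how a rotating identifier scheme can work, assuming a per-device secret shared only with the owner’s account; the key derivation, rotation period, and truncation below are illustrative, not any vendor’s actual protocol:

```python
import hashlib
import hmac
import time

DEVICE_SECRET = b"per-device key known only to the owner's account"  # assumption
ROTATION_PERIOD = 15 * 60  # rotate every 15 minutes (illustrative)

def current_broadcast_id(secret, now=None):
    """Derive the identifier the tag would broadcast in the current window."""
    window = int((now if now is not None else time.time()) // ROTATION_PERIOD)
    mac = hmac.new(secret, window.to_bytes(8, "big"), hashlib.sha256)
    return mac.hexdigest()[:12]  # truncated to fit a compact BLE advertisement

# The owner's app, holding the same secret, derives the same ID and can
# recognize the tag. A passive eavesdropper sees an identifier that changes
# every window and cannot be linked across sightings without the secret.
print(current_broadcast_id(DEVICE_SECRET))
```

Because the broadcast changes on a schedule and recognizing it requires the secret, pairing it with a static, unencrypted MAC address would defeat the whole scheme, which is exactly the problem the researchers identified with Tile.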

In its privacy policy, Life360 states that “You are the only one with the ability to see your Tile location and your device location.” But if the information from a tracker is sent to and stored by Tile in cleartext (i.e., unencrypted text), as the researchers believe, then the company itself can see the location of the tags and their owners, turning them from single-item trackers into surveillance tools.

There are also issues with the “anti-theft mode” that Tile offers. The anti-theft setting hides the tracker from Tile’s “Scan and Secure” detection feature, so it can’t be easily found using the app. Ostensibly this is meant to make it harder for a thief to simply use the app to locate a tracker. To enable the anti-theft feature, a user has to submit a photo ID and agree to pay a $1 million fine if they’re convicted of misusing the tracker.

But that’s only helpful if the stalker gets caught, which is a lot less likely when the person being tracked can’t use the anti-stalking protection feature in the app to find the tracker following them. As we’ve said before, it is impossible to make an anti-theft device that secretly notifies only the owner without also making a perfect tool for stalking.

Life360, the company that owns Tile, told Wired it “made a number of improvements” after the researchers reported the flaws, but did not detail what those improvements are.

Many of these issues would be mitigated by doing what the competition is already doing: encrypting the broadcasts from its Bluetooth trackers and randomizing MAC addresses. Every company in the location tracker business has a responsibility to create safeguards for people, not just for their lost keys.

The UK Is Still Trying to Backdoor Encryption for Apple Users

1 October 2025 at 16:03

The Financial Times reports that the U.K. is once again demanding that Apple create a backdoor into its encrypted backup services. The only change since the last demand is that the order is allegedly limited to apply only to British users. That doesn’t make it any better.

The demand relies on a power called a “Technical Capability Notice” (TCN) under the U.K.’s Investigatory Powers Act. When that law was signed, we noted it would likely be used to demand that Apple spy on its users.

After the U.K. government first issued the TCN in January, Apple was forced to either create a backdoor or block its Advanced Data Protection feature—which turns on end-to-end encryption for iCloud—for all U.K. users. The company decided to remove the feature in the U.K. instead of creating the backdoor.

The initial order from January targeted the data of all Apple users. In August, the U.S. claimed the U.K. withdrew the demand, but Apple did not re-enable Advanced Data Protection. The new order provides insight into why: the U.K. was just rewriting it to apply only to British users.

This is still an unsettling overreach that makes U.K. users less safe and less free. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud. It sets a dangerous precedent for demanding similar data from other companies, and provides a runway for authoritarian governments to issue comparable orders. The news of continued server-side access to users’ data comes just days after the U.K. government announced an intrusive mandatory digital ID scheme, framed as a measure against illegal migration.

A tribunal hearing was initially set for January 2026, though it’s currently unclear whether that will proceed or whether the new order changes the legal process. Apple must continue to refuse these types of backdoors. Breaking end-to-end encryption for one country breaks it for everyone. These repeated attempts to weaken encryption violate fundamental human rights and destroy our right to private spaces.

Opt Out October: Daily Tips to Protect Your Privacy and Security

Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time, the process of protecting your privacy becomes much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday that show various ways you can opt out of tech giants’ surveillance.

Online privacy isn’t dead. But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.

All month long we’ll share tips, including some with help from our friends at Consumer Reports’ Security Planner tool.

Tip 1: Establish Good Digital Hygiene

Before we can get into the privacy weeds, we first need to establish strong basics. Namely, two security fundamentals: using strong passwords (a password manager helps simplify this) and turning on two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.

Using unique passwords for every web login means that if your account information ends up in a data breach, it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.

Two-factor authentication is the second lock on those same accounts. To log in to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account, because it’s less likely they’ll have both the password and the temporary code.
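If you’re curious how those always-changing codes work, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, which most authenticator apps implement; the secret below is a dummy value for illustration:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Generate a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count of `period`-second windows since the Unix epoch.
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A dummy secret; real secrets come from the QR code a service shows
# when you enroll in two-factor authentication.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because each code depends on both the shared secret and the current time window, a stolen password alone isn’t enough to log in, and an intercepted code expires within seconds.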

This can be a little overwhelming to get started if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense, we recommend taking a look at Consumer Reports’ Security Planner for ways to help you get started setting up your first password manager and turning on two-factor authentication.

Tip 2: Learn What a Data Broker Knows About You

Hundreds of data brokers you’ve never heard of are harvesting and selling your personal information. This can include your address, online activity, financial transactions, relationships, and even your location history. Once sold, your data can be abused by scammers, advertisers, predatory companies, and even law enforcement agencies.

Data brokers build detailed profiles of our lives but try to keep their own practices hidden. Fortunately, several state privacy laws give you the right to see what information these companies have collected about you. You can exercise this right by submitting a data access request to a data broker. Even if you live in a state without privacy legislation, some data brokers will still respond to your request.

There are hundreds of known data brokers, but here are a few major ones to start with:

Data brokers have been caught ignoring privacy laws, so there’s a chance you won’t get a response. If you do, you’ll learn what information the data broker has collected about you and the categories of third parties they’ve sold it to. If the results motivate you to take more privacy action, encourage your friends and family to do the same. Don’t let data brokers keep their spying a secret.

You can also ask data brokers to delete your data, with or without an access request. We’ll get to that later this month and explain how to do this with people-search sites, a category of data brokers.

Tip 3: Disable Ad Tracking on iPhone and Android

Picture this: you’re doomscrolling and spot a t-shirt you love. Later, you mention it to a friend and suddenly see an ad for that exact shirt in another app. The natural question pops into your head: “Is my phone listening to me?” Breathe a sigh of relief because, no, your phone is not listening to you. But advertisers are using shady tactics to profile your interests. Here’s an easy way to fight back: disable the ad identifier on your phone to make it harder for advertisers and data brokers to track you.

Disable Ad Tracking on iOS and iPadOS:

  • Open Settings > Privacy & Security > Tracking, and turn off “Allow Apps to Request to Track.”
  • Open Settings > Privacy & Security > Apple Advertising, and disable “Personalized Ads” to also stop some of Apple’s internal tracking for apps like the App Store. 
  • If you use Safari, go to Settings > Apps > Safari > Advanced and disable “Privacy Preserving Ad Measurement.”

Disable Ad Tracking on Android:

  • Open Settings > Security & privacy > Privacy controls > Ads, and tap “Delete advertising ID.”
  • While you’re at it, run through Google’s “Privacy Checkup” to review what info other Google services—like YouTube or your location—may be sharing with advertisers and data brokers.

These quick settings changes can help keep bad actors from spying on you. For a deeper dive on securing your iPhone or Android device, be sure to check out our full Surveillance Self-Defense guides.

Tip 4: Declutter Your Apps

Decluttering is all the rage for optimizers and organizers alike, but did you know a cleansing sweep through your apps can also help your privacy? Apps collect a lot of data, often in the background when you are not using them. This is a prime way companies harvest your information, then repackage and sell it to other companies you’ve never heard of. The more apps you have, the more peepholes companies have into your personal life.

Do you need three airline apps when you’re not even traveling? Or the app for that hotel chain you stayed in once? It’s best to delete those apps and cut off their access to your information. In an ideal world, app makers would not process any of your data unless strictly necessary to give you what you asked for. Until then, to do an app audit:

  • Look through the apps you have and identify ones you rarely open or barely use. 
  • Long-press on apps that you don't use anymore and delete or uninstall them when a menu pops up. 
  • Even for apps you keep, take a pass through the location, microphone, and camera permissions for each of them. For iOS devices, you can follow these instructions to find that menu. For Android, check out these instructions.

If you delete an app and later find you need it, you can always redownload it. Try giving some apps the boot today to gain some memory space and some peace of mind.

Tip 5: Disable Behavioral Ads on Amazon

Happy Amazon Prime Day! Let’s celebrate by taking back a piece of our privacy.

Amazon collects an astounding amount of information about your shopping habits. While the only way to truly free yourself from the company’s all-seeing eye is to never shop there, there is something you can do to disrupt some of that data use: tell Amazon to stop using your data to market more things to you (these settings are for US users and may not be available in all countries).

  • Log into your Amazon account, then click “Account & Lists” under your name. 
  • Scroll down to the “Communication and Content” section and click “Advertising preferences” (or just click this link to head directly there).
  • Click the option next to “Do not show me interest-based ads provided by Amazon.”
  • You may want to also delete the data Amazon already collected, so click the “Delete ad data” button.

This setting will turn off the personalized ads based on what Amazon infers about you, though you will likely still see recommendations based on your past purchases at Amazon.

Of course, Amazon sells a lot of other products. If you own an Alexa, now’s a good time to review the few remaining privacy options available to you after the company took away the ability to disable voice recordings. Kindle users might want to turn off some of the data usage tracking. And if you own a Ring camera, consider enabling end-to-end encryption to ensure you’re in control of the recording, not the company. 

Tip 6: Install Privacy Badger to Block Online Trackers

Every time you browse the web, you’re being tracked. Most websites contain invisible tracking code that lets companies collect and profit from your data. That data can end up in the hands of advertisers, data brokers, scammers, and even government agencies. Privacy Badger, EFF’s free browser extension, can help you fight back.

Privacy Badger automatically blocks hidden trackers to stop companies from spying on you online. It also tells websites not to share or sell your data by sending the “Global Privacy Control” signal, which is legally binding under some state privacy laws. Privacy Badger has evolved over the past decade to fight various methods of online tracking. Whether you want to protect your sensitive information from data brokers or just don’t want Big Tech monetizing your data, Privacy Badger has your back.
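To show what “sending the signal” means in practice, here is a minimal sketch of how a website could detect and honor Global Privacy Control on the server side; the Flask app and responses are illustrative, but the “Sec-GPC: 1” header is the signal the GPC specification defines:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browsers and extensions with GPC enabled add "Sec-GPC: 1" to every
    # request; pages can also check navigator.globalPrivacyControl in JS.
    opted_out = request.headers.get("Sec-GPC") == "1"
    # A real site would skip its ad and tracking tags for opted-out
    # visitors; here we just report what was detected.
    return "opted out of sale/sharing" if opted_out else "no GPC signal"

if __name__ == "__main__":
    app.run()
```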

Visit privacybadger.org to install Privacy Badger.

It’s available on Chrome, Firefox, Edge, and Opera for desktop devices and Firefox and Edge for Android devices. Once installed, all of Privacy Badger’s features work automatically. There’s no setup required! If blocking harmful trackers ends up breaking something on a website, you can easily turn off Privacy Badger for that site while maintaining privacy protections everywhere else.

When you install Privacy Badger, you’re not just protecting yourself—you’re joining EFF and millions of other users in the fight against online surveillance.

Tip 7: Review Location Tracking Settings

Data brokers don’t just collect information on your purchases and browsing history. Mobile apps that have the location permission turned on will deliver your coordinates to third parties in exchange for insights or monetary kickbacks. Even when they don’t deliver that data directly to data brokers, if the app serves ad space, your location will be delivered in real-time bid requests not only to those wishing to place an ad, but to all participants in the ad auction—even if they lose the bid. Location data brokers take part in these auctions just to harvest location data en masse, without any intention of buying ad space.
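To picture what a single auction exposes, here is a simplified sketch of the location data carried in a bid request, loosely modeled on the OpenRTB format ad exchanges use; the field names and values are illustrative, not any exchange’s exact schema:

```python
# One bid request, sent to every auction participant, win or lose.
bid_request = {
    "id": "auction-1234",
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID,
        # the identifier that links sightings of you across auctions
        "geo": {"lat": 37.7749, "lon": -122.4194},      # your coordinates
    },
    "app": {"bundle": "com.example.weather"},  # the app you were using
}
```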

Luckily, you can change a few settings to protect yourself against this hoovering of your whereabouts. You can use iOS or Android tools to audit an app’s permissions, providing clarity on who is providing what info to whom. You can then go to the apps that don’t need your location data and disable their access to that data (you can always change your mind later if it turns out location access was useful). You can also disable real-time location tracking by putting your phone into airplane mode, while still being able to navigate using offline maps. And by disabling mobile advertising identifiers (see tip three), you break the chain that links your location from one moment to the next.

Finally, for particularly sensitive situations you may want to bring an entirely separate, single-purpose device which you’ve kept clean of unneeded apps and locked down settings on. Similar in concept to a burner phone, even if this single-purpose device does manage to gather data on you, it can only tell a partial story about you—all the other data linking you to your normal activities will be kept separate.

For details on how you can follow these tips and more on your own devices, check out our more extensive post on the topic.

Tip 8: Limit the Data Your Gaming Console Collects About You

Oh, the beauty of gaming consoles—just plug in and play! Well... after you speed-run through a bunch of terms and conditions, internet setup, and privacy settings. If you rushed through those startup screens, don’t worry! It’s not too late to limit the data your console is collecting about you. Because yes, modern consoles do collect a lot about your gaming habits.

Start with the basics: make sure you have two-factor authentication turned on for your accounts. PlayStation, Xbox, and Nintendo all have guides on their sites. Between payment details and other personal info tied to these accounts, 2FA is an easy first line of defense for your data.

Then, it’s time to check the privacy controls on your console:

  • PlayStation 5: Go to Settings > Users and Accounts > Privacy to adjust what you share with both strangers and friends. To limit the data your PS5 collects about you, go to Settings > Users and Accounts > Privacy, where you can adjust settings under Data You Provide and Personalization.
  • Xbox Series X|S: Press the Xbox button > Profile & System > Settings > Account > Privacy & online safety > Xbox Privacy to fine-tune your sharing. To manage data collection, head to Profile & System > Settings > Account > Privacy & online safety > Data collection.
  • Nintendo Switch: The Switch doesn’t share as much data by default, but you still have options. To control who sees your play activity, go to System Settings > Users > [your profile] > Play Activity Settings. To opt out of sharing eShop data, open the eShop, select your profile (top right), then go to Google Analytics Preferences > Do Not Share.

Plug and play, right? Almost. These quick checks can help keep your gaming sessions fun—and more private.

Tip 9: Hide Your Start and End Points on Strava

Sharing your personal fitness stats, whether extended distances, accurate calorie counts, or GPS paths, sounds like a fun, competitive feature offered by today’s digital fitness trackers. If you enjoy tracking those activities, you’ve probably heard of Strava. While it’s excellent for motivation and connecting with fellow athletes, Strava’s default settings can reveal sensitive information about where you live, work, or exercise, creating serious security and privacy risks. Fortunately, Strava gives you control over how much of your activity map is visible to others, allowing you to stay active in your community while protecting your personal safety.

We've covered how Strava data exposed classified military bases in 2018 when service members used fitness trackers. If fitness data can compromise national security, what's it revealing about you?

Here's how to hide your start and end points:

  • On the website: Hover over your profile picture > Settings > Privacy Controls > Map Visibility.
  • On mobile: Open Settings > Privacy Controls > Map Visibility.
  • You can then choose from three options: hide portions near a specific address, hide the start/end of all activities, or hide entire maps.

You can also adjust individual activities:

  • Open the activity you want to edit.
  • Select the three-dot menu icon.
  • Choose "Edit Map Visibility."
  • Use sliders to customize what's hidden or enable "Hide the Entire Map."

Great job taking control of your location privacy! Remember that these settings only apply to Strava, so if you share activities to other platforms, you'll need to adjust those privacy settings separately. While you're at it, consider reviewing your overall activity visibility settings to ensure you're only sharing what you want with the people you choose.

Tip 10: Find and Delete An Account You No Longer Use

Millions of online accounts are compromised each year. The more accounts you have, the more at risk you are of having your personal data illegally accessed and published online. Even if you don’t suffer a data breach, there’s also the possibility that someone could find one of your abandoned social media accounts containing information you shared publicly on purpose in the past, but don’t necessarily want floating around anymore. And companies may still be profiting off details of your personal life, even though you’re not getting any benefit from their service.

So, now’s a good time to find an old account to delete. There may be one you can already think of, but if you’re stuck, you can look through your password manager, look through logins saved in your web browser, or search your email inbox for phrases like “new account,” “password,” “welcome to,” or “confirm your email.” Or enter your email address on the website HaveIBeenPwned to get a list of sites where your personal information has been compromised, and see whether any of them are accounts you no longer use.

Once you’ve decided on an account, you’ll need to find the steps to delete it. Simply deleting an app off of your phone or computer does not delete your account. Often you can log in and look in the account settings, or find instructions in the help menu, the FAQ page, or the pop-up customer service chat. If that fails, use a search engine to see if anybody else has written up the steps to deleting your specific type of account.

For more information, check out the Delete Unused Accounts tip on Security Planner.

Tip 11: Search for Yourself

Today's tip may sound a little existential, but we're not suggesting a deep spiritual journey. Just a trip to your nearest search engine. Pop your name into search engines such as Google or DuckDuckGo, or even AI tools such as ChatGPT, to see what you find. This is one of the simplest things you can do to raise your own awareness of your digital reputation. It can be the first thing prospective employers (or future first dates) do when trying to figure out who you are. From a privacy perspective, doing it yourself can also shed light on how your information is presented to the general public. If there's a defunct social media account you'd rather keep hidden, but it's on the first page of your search results, that might be a good signal for you to finally delete that account. If you shared your cellphone number with an organization you volunteer for and it's on their home page, you can ask them to take it down.

Knowledge is power. It's important to know what search results are out there about you, so you understand what people see when they look for you. Once you have this overview, you can make better choices about your online privacy. 

Tip 12: Tell “People Search” Sites to Delete Your Information

When you search online for someone’s name, you’ll likely see results from people-search sites selling their home address, phone number, relatives’ names, and more. People-search sites are a type of data broker with an especially dangerous impact. They can expose people to scams, stalking, and identity theft. Submit opt out requests to these sites to reduce the amount of personal information that is easily available about you online.

Check out this list of opt-out links and instructions for more than 50 people search sites, organized by priority. Before submitting a request, check that the site actually has your information. Here are a few high-priority sites to start with: 

Data brokers continuously collect new information, so your data could reappear after being deleted. You’ll have to re-submit opt-outs periodically to keep your information off of people-search sites. Subscription-based services can automate this process and save you time, but a Consumer Reports study found that manual opt-outs are more effective.

Tip 13: Remove Your Personal Addresses from Search Engines 

Your home address may often be found with just a few clicks online. Whether you're concerned about your digital footprint or looking to safeguard your physical privacy, understanding where your address appears and how to remove or obscure it is a crucial step. Here's what you need to know.

Your personal addresses can be available through public records like property purchases, medical licensing information, or data brokers. Opting out from data brokers will do a lot to remove what's available commercially, but sometimes you can't erase the information entirely from things like property sales records.

You can ask some search engines to remove your personal information from search indexes, which is the most efficient way to make information like your personal addresses, phone number, and email address a lot harder to find. Google has a form that makes this request quite easy, and we’d suggest starting there.

Tip 14: Check Out Signal

Here’s the problem: many of your texts aren’t actually private. Phone companies, government agencies, and app developers can all too often peek at your conversations.

So on Global Encryption Day, our tip is to check out Signal—a messaging app that actually keeps your conversations private.

Signal uses end-to-end encryption, meaning only you and your recipient can read your messages—not even Signal can see them. Security experts love Signal because it's run by a privacy-focused nonprofit, funded by donations instead of data collection, and its code is publicly auditable. 

Beyond privacy, Signal offers free messaging and calls over Wi-Fi, helping you avoid SMS charges and international calling fees. The only catch? Your contacts need Signal too, so start recruiting your friends and family!

How to get started: Download Signal from your app store, verify your phone number, set a secure PIN, and start messaging your contacts who join you. Consider also setting up a username so people can reach you without sharing your phone number. For more detailed instructions, check out our guide.

Global Encryption Day is the perfect time to protect your communications. Take your time to explore the app, and check out other privacy-protecting features like disappearing messages, session verification, and lock screen notification privacy.

Tip 15: Switch to a Privacy-Protective Browser

Your browser stores tons of personal information: browsing history, tracking cookies, and data that companies use to build detailed profiles for targeted advertising. The browser you choose makes a huge difference in how much of this tracking you can prevent.

Most people use Chrome or Safari, which are automatically installed on Google and Apple products, but these browsers have significant privacy drawbacks. For example: Chrome's Incognito mode only hides history on your device—it doesn't stop tracking. Safari has been caught storing deleted browser history and collecting data even in private browsing mode.

Firefox is one alternative that puts privacy first. Unlike Chrome, Firefox automatically blocks trackers and ads in Private Browsing mode and prevents websites from sharing your data between sites. It also warns you when websites try to extract your personal information. But Firefox isn't your only option—other privacy-focused browsers like DuckDuckGo, Brave, and Tor also offer strong protections with different features. The key is switching away from browsers that prioritize data collection over your privacy.

Switching is easy: download your chosen browser from its official site and install it. Most browsers let you import bookmarks and passwords during setup.

You now have a new browser! Take some time to explore its privacy settings to maximize your protection.

Tip 16: Give Yourself Another Online Identity

We all take on different identities at times. Just as it’s important to set boundaries in your daily life, the same can be true for your digital identity. For many reasons, people may want to keep aspects of their lives separate, and giving people control over how their information is used is one of the fundamental reasons we fight for privacy. Consider splitting pieces of your life across separate email accounts, phone numbers, or social media accounts.

This can help you manage your life and keep a full picture of your private information out of the hands of nosy data-mining companies. Maybe you volunteer for an organization in your spare time that you'd rather keep private, want to keep emails from your kids' school separate from a mountain of spam, or simply would rather keep your professional and private social media accounts separate. 

Whatever the reason, consider whether there's a piece of your life that could benefit from its own identity. When you set up these boundaries, you can also protect your privacy.

Tip 17: Check Out Virtual Card Services

Ever encounter an online vendor selling something that’s just what you need—if you could only be sure they aren’t skimming your credit card number? Or maybe you trust the vendor, but aren’t sure the web site (seemingly written in some arcane e-commerce platform from 1998) won’t be hacked within the hour after your purchase? Buying those bits and bobs shouldn’t cost you your peace of mind on top of the dollar amount. For these types of purchases, we recommend checking out a virtual card service.

These services generate a seemingly random credit card number for your use, locked down in ways you specify. That may mean a card locked to a single vendor, where no one else can make charges on it, or a card that only validates charges for a specific category of purchase, such as clothing. You can not only set limits on vendors, but also set spending limits a card can’t exceed, or make it a one-time-use card that closes itself after a single charge. You can even pause a card if you are sure you won’t be using it for some time, and then unpause it later. The configuration is up to you.

There are a number of virtual card services available, like Privacy.com or IronVest, just to name a few. Just like any vendor, though, these services need some way to charge you. So for any virtual card service, pop them into your favored search engine to verify they’re legit, and aren’t going to burden you with additional fees. Some options may also only be available in specific countries or regions, due to financial regulation laws.

Tip 18: Minimize Risk While Using Digital Payment Apps

Digital payment apps like Cash App, Venmo, and Zelle generally offer fewer fraud protections than credit cards offered by traditional financial institutions. It’s safer to stick to credit cards when making online purchases. That said, there are ways to minimize your risk.

Turn on transaction alerts:

  • On Cash App, tap on your picture or initials on the right side of the app. Tap Notifications, and then Transactions. From there, you can toggle the settings to receive a push notification, a text, and/or an email with receipts or to track activity on the app.
  • On PayPal, tap on the top right icon to access your account. Tap Notification Preferences, click on “Open Settings” and toggle to “Allow Notifications” if you’d like to see those on your phone.
  • On Venmo, tap on your picture or initials to go to the Me tab. Then, tap the Settings gear in the top right of the app, and tap Notifications. From there, you can adjust your text and email notifications, and even turn on push notifications. 

Report suspected fraud quickly

If you receive a notification for a purchase you didn’t make, even if it’s a small amount, report it immediately. Scammers sometimes test the waters with small amounts to see whether their targets are paying attention. Additionally, you may be on the hook for part of the payment if you don’t act fast. PayPal and Venmo say they cover lost funds reported within 60 days, but Cash App has more complicated restrictions, which can include liability of up to $500 if you lose your device or password and don’t report it within 48 hours.

And don’t forget to turn on multifactor authentication for each app. For more information, check out these tips from Consumer Reports.

Tip 19: Turn Off Ad Personalization to Limit How the Tech Giants Monetize Your Data

Tech companies make billions by harvesting your personal data and using it to sell hyper-targeted ads. This business model drives them to track us far beyond their own platforms, gathering data about our online and offline activity. Surveillance-based advertising isn’t just creepy—it’s harmful. The systems that power hyper-targeted ads can also funnel your personal information to data brokers, advertisers, scammers, and law enforcement agencies. 

To limit how companies monetize your data through surveillance-based advertising, turn off ad personalization on your accounts. This setting looks different depending on the platform, but here are some key places to start:

  • Meta (Facebook & Instagram): Follow this guide to find the setting for disabling ad targeting based on data Meta collects about you from other websites and apps.
  • Google: Visit https://myadcenter.google.com/home and switch the “Personalized ads” option from “On” to “Off.”
  • X (formerly known as Twitter): Visit https://x.com/settings/privacy_and_safety and turn off all settings under “Data sharing and personalization.”

Banning online behavioral ads would be a better solution, but turning off ad personalization is a quick and easy step to limit how tech companies profit from your data. And don’t forget to change this same setting on Amazon, too.

Tip 20: Tighten Account Privacy Settings

Just because you want to share information with select friends and family on social media doesn’t necessarily mean you want to broadcast everything to the entire world. Whether you want to make sure you’re not sharing your real-time location with someone you’d rather not bump into or only want your close friends to know about your favorite pop star, you can typically restrict how companies display your status updates and other information.

In addition to controlling whether data is displayed publicly or just to a select group of contacts, you may have some control over how data is collected, used, and shared with advertisers, and how long it is stored.

To get started, choose an account and review the privacy settings, making changes as needed. Here are links to a few of the major companies to get you started:

Unfortunately, you may need to tweak your privacy settings multiple times to get them the way you want, as companies often introduce new features that are less private by default. And while companies sometimes offer choices on how data is collected, you can’t control most of the data collection that takes place. For more information, see Security Planner.

Tip 21: Protect Your Data When Dating Online

Dating apps like Grindr and Tinder collect vast amounts of intimate details, everything from sexual preferences to location history and behavioral patterns, all from people who are just looking for love and connection. This data falling into the wrong hands can have unacceptable consequences, especially for members of the LGBTQ+ community and other vulnerable users who urgently need privacy protections.

To ensure that finding love does not require such a privacy-impinging tradeoff, follow these tips to protect yourself when dating online:

  1. Review your login information and make sure to use a strong, unique password for your accounts; and enable two-factor authentication when offered. 
  2. Disable behavioral ads so personal details about you cannot be used to create a comprehensive portrait of your life—including your sexual orientation.
  3. Review the app’s access to your location and camera roll, and consider changing these in line with what information you would like to keep private.
  4. Consider what photos you choose, upload, and share; and assume that everything can and will be made public.
  5. Disable the integration of third-party apps like Spotify if you want more privacy. 
  6. Be mindful of what you share with others when you first chat, such as not disclosing financial details, and trust your gut if something feels off. 

There isn’t one singular way to use dating apps, but taking these small steps can have a big impact on staying safe when dating online.

Tip 22: Turn Off Automatic Content Recognition (ACR) On Your TV

You might think TVs are just meant to be watched, but it turns out TV manufacturers do their fair share of watching what you watch, too. This is done through technology called “automatic content recognition” (ACR), which snoops on and identifies what you’re watching by snapping screenshots and comparing them to a big database. How many screenshots? The Markup found some TVs captured up to 7,200 images per hour. The main reason? Ad targeting, of course. 
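
In rough outline, ACR works like a fingerprint lookup. Here is a deliberately simplified sketch; the database, function names, and exact-hash shortcut are all invented for illustration, and real ACR systems use proprietary perceptual fingerprints that survive scaling and compression, not exact hashes. But the shape is the same: capture a frame, reduce it to a compact fingerprint, and check whether that fingerprint matches known content.

```python
# Toy sketch of ACR-style matching. All names here are hypothetical;
# real systems use proprietary perceptual fingerprints, not SHA-256.
import hashlib
import time

# Vendor-supplied mapping of frame fingerprints to identified content.
FINGERPRINT_DB: dict[str, str] = {}

def fingerprint(frame: bytes) -> str:
    """Reduce a captured frame to a compact identifier."""
    return hashlib.sha256(frame).hexdigest()

def identify(frame: bytes) -> str | None:
    """Return the matched title, if the frame is in the database."""
    return FINGERPRINT_DB.get(fingerprint(frame))

def acr_loop(capture_frame, report):
    """Capture, match, report. 7,200 images per hour works out to
    one frame every half second."""
    while True:
        title = identify(capture_frame())
        if title is not None:
            report(title)  # phone home, typically for ad targeting
        time.sleep(0.5)
```

The point of the sketch is that the matching happens against whatever database the manufacturer ships, which is why ACR can recognize offline content just as easily as streaming.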

Any TV that’s connected to the internet likely does this alongside now-standard snooping practices, like tracking what apps you open and where you’re located. ACR is particularly nefarious, though, as it can identify not just streaming services, but also offline content, like video games, over-the-air broadcasts, and physical media. What we watch can and should be private, but that’s especially true when we’re using media that’s otherwise not connected to the internet, like Blu-rays or DVDs.

Opting out of ACR can be a bit of a chore, but it is possible on most smart TVs. Consumer Reports has guides for most of the major TV manufacturers. 

And that’s it for Opt Out October. Hopefully you’ve come across a tip or two that you didn’t know about, and found ways to protect your privacy and disrupt the astounding amount of data collection tech companies do.

Chat Control Is Back on the Menu in the EU. It Still Must Be Stopped

29 September 2025 at 12:42

The Council of the European Union is once again debating its controversial message-scanning proposal, aka “Chat Control,” which would lead to the scanning of the private conversations of billions of people.

Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.

Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which checks content on a device before it’s sent. In practice, Chat Control is chat surveillance: it requires access to everything on a device and monitors it indiscriminately. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.

This is absurd.

We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.
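
To see why, it helps to look at the shape of the mechanism itself. Below is a deliberately simplified sketch of hash-based client-side scanning; the blocklist and function names are hypothetical, and real proposals contemplate perceptual hashes or machine-learning classifiers rather than exact hashes, but the order of operations is the point: the content is inspected on the device, in the clear, before any encryption happens.

```python
# Conceptual sketch only. Names and the exact-hash shortcut are
# hypothetical; real systems would use perceptual hashing or ML
# classifiers, but the structure is the same.
import hashlib

# Hashes of "known abusive material," pushed to every device by a
# provider or authority. Whoever controls this list controls what
# every device scans for.
BLOCKLIST: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def passes_scan(plaintext: bytes) -> bool:
    """Inspect a message on the sender's device, before encryption."""
    return hashlib.sha256(plaintext).hexdigest() not in BLOCKLIST

def send_message(plaintext: bytes) -> None:
    if passes_scan(plaintext):
        ...  # encrypt end-to-end and transmit as usual
    else:
        ...  # block the message and/or report the match to a server
```

However strong the encryption inside send_message is, passes_scan has already seen the plaintext, which is why a scanned “end” is no longer a trusted end.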

If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.

This doesn’t just affect people in the EU; it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversations of everyone in the EU. If you’re not in the EU, but you chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian and tyrannical governments around the world to follow suit with their own demands for access to encrypted communication apps.

Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”

Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.

We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.

Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.

What WhatsApp’s “Advanced Chat Privacy” Really Does

2 September 2025 at 14:28

In April, WhatsApp launched its “Advanced Chat Privacy” feature, which, once enabled, disables certain AI features in chats and prevents conversations from being exported. Since its launch, an inaccurate viral post has been ping-ponging around social networks, creating confusion around what exactly it does.

The viral post falsely claims that if you do not enable Advanced Chat Privacy, Meta’s AI tools will be able to access your private conversations. This isn’t true, and it misrepresents both how Meta AI works and what Advanced Chat Privacy is.

The confusion seems to stem from the fact that Meta AI can be invoked through a number of methods, including in any group chat with the @Meta AI command. While the chat contents between you and other people are always end-to-end encrypted on the app, what you say to Meta AI is not. Similarly, if you or anyone else in the chat chooses to use Meta AI’s “Summarize” feature, which uses Meta’s “Private Processing” technology, that feature routes the text of the chat through Meta’s servers. However, the company claims that it cannot view the content of those messages. This feature remains opt-in, so it’s up to you to decide whether you want to use it. The company also recently released the results of two audits detailing the issues that have been found thus far and what it has done to fix them.

For example, if you and your buddy are chatting, and your friend types in @Meta AI and asks it a question, that part of the conversation, which you can both see, is not end-to-end encrypted, and is usable for AI training or whatever other purposes are included in Meta’s privacy policy. But otherwise, chats remain end-to-end encrypted.

Advanced Chat Privacy offers some control over this. The new privacy feature isn’t a universal setting in WhatsApp; you can enable or disable it on a per-chat basis, and it’s turned off by default. When enabled, Advanced Chat Privacy does three core things:

  • Blocks anyone in the chat from exporting the chat history,
  • Disables auto-downloading media to chat participants’ phones, and
  • Disables some Meta AI features.

Beyond disabling some Meta AI features, Advanced Chat Privacy can be useful in other instances. For example, while someone can always screenshot chats, if you’re concerned about someone easily exporting an entire group chat history, Advanced Chat Privacy makes this harder to do because there’s no longer a one-tap option. And since media can’t be automatically downloaded to someone’s phone (the “Save to Photos” option on the chat settings screen), it’s harder for an attachment to accidentally end up on someone’s device.

How to Enable Advanced Chat Privacy

Advanced Chat Privacy is enabled or disabled per chat. To enable it:

  • Tap the chat name at the top of the screen.
  • Select Advanced chat privacy, then tap the toggle to turn it on.

There are some quirks to how this works, though. For one, by default, anyone involved in a chat can turn Advanced Chat Privacy on or off at will, which limits its usefulness but at least helps ensure something doesn’t accidentally get sent to Meta AI.

There’s one way around this, which is for a group admin to lock down what users in the group can do. In an existing group chat that you are the administrator of, tap the chat name at the top of the screen, then:

  • Scroll down to Group Permissions.
  • Disable the option to “Edit Group Settings.” This makes it so only the administrator can change several important permissions, including Advanced Chat Privacy.

You can also set this permission when starting a new group chat. Just be sure to pop into the permissions page when prompted. Even without Advanced Chat Privacy, the “Edit Group Settings” option is an important one for privacy, because it also controls whether participants can change how long disappearing messages can be viewed. It’s worth considering for every group chat you administer, and something WhatsApp should require admins to choose before starting a new chat.

When it comes to one-on-one chats, there is currently no way to block the other person from changing the Advanced Chat Privacy setting, so you’ll have to come to an agreement with the other person on keeping it enabled if that’s what you want. If the setting is changed, you’ll see a notice in the chat stating so.

There are already serious concerns with how much metadata WhatsApp collects, and as the company introduces ads and AI, it’s going to get harder and harder to navigate the app, understand what each setting does, and properly protect the privacy of conversations. One of the reasons alternative encrypted chat options like Signal tend to thrive is because they keep things simple and employ strong default settings and clear permissions. WhatsApp should keep this in mind as it adds more and more features.

Ryanair’s CFAA Claim Against Booking.com Has Nothing To Do with Actual Hacking

29 July 2025 at 15:03

The Computer Fraud and Abuse Act (CFAA) is supposed to be about attacks on computer systems. It is not, as a federal district court suggested in Ryanair v. Booking.com, applicable when someone uses valid login credentials to access the information those credentials unlock. Now that the case is on appeal, EFF has filed an amicus brief asking the Third Circuit to clarify that this case is about violations of policy, not hacking, and does not qualify as access “without authorization” under the CFAA.

The case concerns transparency in airfare pricing. Ryanair complained that Booking republished Ryanair’s prices, some of which were only visible when a user logged in. Ryanair sent Booking a cease-and-desist letter, but didn’t deactivate the usernames and passwords associated with the uses it disliked. When the users allegedly connected to Booking kept using those credentials to gather pricing data, Ryanair claimed it was a CFAA violation. If this doesn’t sound like “computer hacking” to you, you’re right.

The CFAA has proven bad for research, security, competition, and innovation. For years we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Ryanair’s claims here, which amount to a terms-of-use violation, because the information that was accessed is available to anyone with login credentials. This is the course charted by Van Buren v. United States, where the Supreme Court explained that “authorization” refers to technical concepts of computer authentication. As we stated in our brief:

The CFAA does not apply to every person who merely violates terms of service by sharing account credentials with a family member or by withholding sensitive information like one’s real name and birthdate when making an account.

Building on the good decisions in Van Buren and the Ninth Circuit’s ruling in hiQ Labs v. LinkedIn, we weighed in at the Third Circuit urging the court to hold clearly that triggering a CFAA violation requires bypassing a technology that restricts access. In this case, the login credentials that were created provided legitimate access. But the rule adopted by the lower court would criminalize many everyday behaviors, like logging into a streaming service with a partner’s login, or logging into a spouse’s bank account to pay a bill at their behest. This is not hacking or a violation of the CFAA; it’s just violating a company’s wish list in its terms of service.

This rule would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may make different accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. Yet under the court’s opinion, if a company disagrees with this sort of research, it could not just ban the researchers from using the site; it could also render that research criminal simply by sending a letter notifying the researchers that they’re not authorized to use the service in this way.

Many other examples and common research techniques used by journalists, academic researchers, and security researchers would be at risk under this rule, but the end result would be the same no matter what: it would chill valuable research that keeps us all safer online.

A broad reading of the CFAA in this case would also undermine competition by giving companies a way to shut down data scraping, effectively cutting off one of the ways tools that compare prices and features get their data.

Courts must follow Van Buren’s lead and interpret the CFAA as narrowly as it was designed. Logging into a public website with valid credentials, even if you scrape the data once you’re logged in, is not hacking. A broad reading leads to unintended consequences, and website owners do not need new shields against independent accountability.

You can read our amicus brief here.
