
EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate

13 December 2025 at 06:10

The UK Parliament convened earlier this week to debate a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.

The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country. 

But the case for digital identification has not been made. 

As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:

  1. Mission creep
  2. Infringements on privacy rights
  3. Serious security risks
  4. Reliance on inaccurate and unproven technologies
  5. Discrimination and exclusion
  6. The deepening of entrenched power imbalances between the state and the public.

Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.

Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.

Read our civil society briefing in full here.


EFF Benefit Poker Tournament at DEF CON 33

9 December 2025 at 13:46

In the brand new Planet Hollywood Poker Room, 48 digital rights supporters played No-Limit Texas Hold’Em in the 4th Annual EFF Benefit Poker Tournament at DEF CON, raising $18,395 for EFF.

The tournament was hosted by EFF board member Tarah Wheeler and emceed by lintile, lending his Hacker Jeopardy hosting skills to help EFF for the day.

Every table had two celebrity players with special bounties for the player that knocked them out of the tournament. This year featured Wendy Nather, Chris “WeldPond” Wysopal, Jake “MalwareJake” Williams, Bryson Bort, Kym “KymPossible” Price, Adam Shostack, and Dr. Allan Friedman.

Excellent poker player and teacher Jason Healey, Professor of International Affairs at Columbia University’s School of International and Public Affairs, noted that “the EFF poker tournament is where you find all the hacker royalty in one room.”

The day started with a poker clinic run by Tarah’s father, professional poker player Mike Wheeler. The hour-long clinic helped folks brush up on their casino literacy before playing the big game.

Mike told the story of first teaching Tarah to play poker with jellybeans when she was only four. He then taught poker noobs how to play and when to check, when to fold, and when to go all-in.

After the clinic, lintile roused the crowd to play for real, starting the tournament off by announcing “Shuffle up and deal!”

The first hour saw few players get knocked out, but after the blinds began to rise, the field began to thin, with a number of celebrity knockouts.
At every knockout, lintile took to the mic to encourage the player to donate to EFF, which allowed them to buy back into the tournament and try their luck another round.

Jay Salzberg knocked out Kym Price to win a l33t crate.


Kim Holt knocked out Mike Wheeler, collecting the bounty on his head posted by Tarah, and winning a $250 donation to EFF in his name. This is the second time Holt has sent Mike home.

Tarah knocked out Adam Shostack, winning a number of fun prizes, including a signed copy of his latest book, Threats: What Every Engineer Should Learn From Star Wars.

Bryson Bort was knocked out by privacy attorney Marcia Hofmann.

Play continued for three hours until only the final table of players remained: Allan Friedman, Luke Hanley, Jason Healey, Kim Holt, Igor Ignatov, Sid, Puneet Thapliyal, Charles Thomas, and Tarah Wheeler herself.

As the blinds continued to rise, players went all-in more and more. The most exciting moment came when Sid tripled up with TT over QT and A8s, then knocked out Tarah only a few hands later; she finished 8th.

For the first time, the Jellybean Trophy sat on the final table awaiting the winner. This year, it was a Seattle Space Needle filled with green and blue jellybeans celebrating the lovely Pacific Northwest where Tarah and Mike are from.

The final three players were Allan Friedman, Kim Holt, and Sid. Sid doubled up with KJ over Holt’s A6, and then knocked Holt out when his Q4 beat Holt’s 22.

Friedman and Sid traded blinds until Allan went all in with A6 and Sid called with JT. A jack landed on the flop and Sid won the day!

Sid becomes the first player to win the tournament more than once, taking home the jellybean trophy two years in a row.

It was an exciting afternoon of competition raising over $18,000 to support civil liberties and human rights online. We hope you join us next year as we continue to grow the tournament. Follow Tarah and EFF to make sure we have chips and a chair for you at DEF CON 34.

Be ready for this next year’s special benefit poker event: The Digital Rights Attack Lawyers Edition! Our special celebrity guests will all be our favorite digital rights attorneys including Cindy Cohn, Marcia Hofmann, Kurt Opsahl, and more!

Photo Gallery

Thousands Tell the Patent Office: Don’t Hide Bad Patents From Review

11 December 2025 at 16:17

A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.

EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. Together, these comments made up more than one-third of the 11,442 comments submitted. The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.

The Public Doesn’t Want To Bury Patent Challenges

These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment. 

Comments opposing the rulemaking include many small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also include computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy. 

The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.

EFF’s Comment To USPTO

In our filing, we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.

Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward. 

A Broad Coalition Supports IPR

A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.

Open Source and Developer Communities 

The Linux Foundation submitted comments warning that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users who rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work. 

Patent Law Scholars

A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.” 

Patient Advocates

Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown “cardiovascular medications have fallen 97% in price, cancer drugs dropping 80-98%, and treatments for opioid addiction becom[e] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.” 

Small Businesses 

Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court. 

What Happens Next

The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.

Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters. 

We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve the public’s ability to challenge bad patents. 

Why Isn’t Online Age Verification Just Like Showing Your ID In Person?

11 December 2025 at 03:00

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

One of the most common refrains we hear from age verification proponents is that online ID checks are nothing new. After all, you show your ID at bars and liquor stores all the time, right? And it’s true that many places age-restrict access in-person to various goods and services, such as tobacco, alcohol, firearms, lottery tickets, and even tattoos and body piercings.

But this comparison falls apart under scrutiny. There are fundamental differences between flashing your ID to a bartender and uploading government documents or biometric data to websites and third-party verification companies. Online age-gating is more invasive, affects far more people, and poses serious risks to privacy, security, and free speech that simply don't exist when you buy a six-pack at the corner store.

Online age verification burdens many more people.

Online age restrictions are imposed on many, many more users than in-person ID checks. Because of the sheer scale of the internet, regulations affecting online content sweep in an enormous number of adults and youth alike, forcing them to disclose sensitive personal data just to access lawful speech, information, and services. 

Additionally, age restrictions in the physical world affect only a limited number of transactions: those involving a narrow set of age-restricted products or services. Typically this entails a bounded interaction about one specific purchase.

Online age verification laws, on the other hand, target a broad range of internet activities and general purpose platforms and services, including social media sites and app stores. And these laws don’t just wall off specific content deemed harmful to minors (like a bookstore would); they age-gate access to websites wholesale. This is akin to requiring ID every time a customer walks into a convenience store, regardless of whether they want to buy candy or alcohol.

There are significant privacy and security risks that don’t exist offline.

In offline, in-person scenarios, a customer typically provides their physical ID to a cashier or clerk directly. Oftentimes, customers need only flash their ID for a quick visual check, and no personal information is uploaded to the internet, transferred to a third-party vendor, or stored. Online age-gating, on the other hand, forces users to upload—not just momentarily display—sensitive personal information to a website in order to gain access to age-restricted content. 

This creates a cascade of privacy and security problems that don’t exist in the physical world. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee it will be handled securely. You have no direct control over who receives and stores your personal data, where it is sent, or how it may be accessed, used, or leaked outside the immediate verification process. 

Data submitted online rarely just stays between you and one other party. All online data is transmitted through a host of third-party intermediaries, and almost all websites and services also host a network of dozens of private, third-party trackers managed by data brokers, advertisers, and other companies that are constantly collecting data about your browsing activity. The data is shared with or sold to additional third parties and used to target behavioral advertisements. Age verification tools also often rely on third parties just to complete a transaction: a single instance of ID verification might involve two or three different third-party partners, and age estimation services often work directly with data brokers to offer a complete product. Users’ personal identifying data then circulates among these partners. 

All of this increases the likelihood that your data will leak or be misused. Unfortunately, data breaches are an endemic part of modern life, and the sensitive, often immutable, personal data required for age verification is just as susceptible to being breached as any other online data. Age verification companies can be—and already have been—hacked. Once that personal data gets into the wrong hands, victims are vulnerable to targeted attacks both online and off, including fraud and identity theft.

Troublingly, many age verification laws don’t even protect user security by providing a private right of action to sue a company if personal data is breached or misused. This leaves you without a direct remedy should something bad happen. 

Some proponents claim that age estimation is a privacy-preserving alternative to ID-based verification. But age estimation tools still require biometric data collection, often demanding users submit a photo or video of their face to access a site. And again, once submitted, there’s no way for you to verify how that data is processed or stored. Requiring face scans also normalizes pervasive biometric surveillance and creates infrastructure that could easily be repurposed for more invasive tracking. Once we’ve accepted that accessing lawful speech requires submitting our faces for scanning, we’ve crossed a threshold that’s difficult to walk back.

Online age verification creates even bigger barriers to access.

Online age gates create more substantial access barriers than in-person ID checks do. For those concerned about privacy and security, there is no online analog to a quick visual check of your physical ID. Users may be justifiably discouraged from accessing age-gated websites if doing so means uploading personal data and creating a potentially lasting record of their visit to that site.

Given these risks, age verification also imposes barriers to remaining anonymous that don't typically exist in-person. Anonymity can be essential for those wishing to access sensitive, personal, or stigmatized content online. And users have a right to anonymity, which is “an aspect of the freedom of speech protected by the First Amendment.” Even if a law requires data deletion, users must still be confident that every website and online service with access to their data will, in fact, delete it—something that is in no way guaranteed.

In-person ID checks are additionally less likely to wrongfully exclude people due to errors. Online systems that rely on facial scans are often incorrect, especially when applied to users near the legal age of adulthood. These tools are also less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds, for users with disabilities, and for transgender individuals. This leads to discriminatory outcomes and exacerbates harm to already marginalized communities. And while in-person shoppers can speak with a store clerk if issues arise, these online systems often rely on AI models, leaving users who are incorrectly flagged as minors with little recourse to challenge the decision.

In-person interactions may also be less burdensome for adults who don’t have up-to-date ID. An older adult who forgets their ID at home or lacks current identification is not likely to face the same difficulty accessing material in a physical store, since there are usually distinguishing physical differences between young adults and those older than 35. A visual check is often enough. This matters, as a significant portion of the U.S. population does not have access to up-to-date government-issued IDs. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification.

We’re talking about First Amendment-protected speech.

It's important not to lose sight of what’s at stake here. The good or service age-gated by these laws isn’t alcohol or cigarettes—it’s First Amendment-protected speech. Whether the target is social media platforms or any other online forum for expression, age verification blocks access to constitutionally protected content. 

Access to many of these online services is also necessary to participate in the modern economy. While those without ID may function just fine without being able to purchase luxury products like alcohol or tobacco, requiring ID to participate in basic communication technology significantly hinders people’s ability to engage in economic and social life.

This is why it’s wrong to claim online age verification is equivalent to showing ID at a bar or store. This argument handwaves away genuine harms to privacy and security, dismisses barriers to access that will lock millions out of online spaces, and ignores how these systems threaten free expression. Ignoring these threats won’t protect children, but it will compromise our rights and safety.

Age Verification Is Coming For the Internet. We Built You a Resource Hub to Fight Back.

10 December 2025 at 18:48

Age verification laws are proliferating fast across the United States and around the world, creating a dangerous and confusing tangle of rules about what we’re all allowed to see and do online. Though these mandates claim to protect children, in practice they create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.

The term “age verification” is colloquially used to describe a wide range of age assurance technologies, from age verification systems that force you to upload government ID, to age estimation tools that scan your face, to systems that infer your age by making you share personal data. While different laws call for different methods, one thing remains constant: every method out there collects your sensitive, personal information and creates barriers to accessing the internet. We refer to all of these requirements as age verification, age assurance, or age-gating.

If you’re feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you’re not alone. It’s a lot. But understanding how these mandates work and who they harm is critical to keeping yourself and your loved ones safe online. Age verification is lurking around every corner these days, so we must fight back to protect the internet that we know and love. 

That’s why today, we’re launching EFF’s Age Verification Resource Hub (EFF.org/Age): a one-stop shop to understand what these laws actually do, what’s at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and yes—safe—internet. 

Why Age Verification Mandates Are a Problem

In the U.S., more than half of all states have now passed laws imposing age-verification requirements on online platforms. Congress is considering even more at the federal level, with a recent House hearing weighing nineteen distinct proposals relating to young people’s online safety—some sweeping, some contradictory, and each one more drastic and draconian than the last.


The rest of the world is moving in the same direction. The UK’s Online Safety Act went into effect this summer, Australia’s new law barring access to social media for anyone under 16 goes live today, and a slew of other countries are currently considering similar restrictions.

We all want young people to be safe online. However, age verification is not the silver bullet that lawmakers want you to think it is. In fact, age-gating mandates will do more harm than good, especially for the young people they claim to protect. They undermine the fundamental speech rights of adults and young people alike; create new barriers to accessing vibrant, lawful, even life-saving content; and needlessly jeopardize all internet users’ privacy, anonymity, and security.

If legislators want to meaningfully improve online safety, they should pass a strong, comprehensive federal privacy law instead of building new systems of surveillance, censorship, and exclusion.  

What’s Inside the Resource Hub

Our new hub is built to answer the questions we hear from users every day, such as:

  • How do age verification laws actually work?
  • What’s the difference between age verification, age estimation, age assurance, and all the other confusing technical terms I’m hearing?
  • What’s at stake for me, and who else is harmed by these systems?
  • How can I keep myself, my family, and my community safe as these laws continue to roll out?
  • What can I do to fight back?
  • And if not age verification, what else can we do to protect the online safety of our young people?

Head over to EFF.org/Age to explore our explainers, user-friendly guides, technical breakdowns, and advocacy tools—all indexed in the sidebar for easy browsing. And today is just the start, so keep checking back over the next several weeks as we continue to build out the site with new resources and answers to more of your questions on all things age verification.

Join Us: Reddit AMA & EFFecting Change Livestream Events

To celebrate the launch of EFF.org/Age, and to hear directly from you how we can be most helpful in this fight, we’re hosting two exciting events:

1. Reddit AMA on r/privacy

Next week, our team of EFF activists, technologists, and lawyers will be hanging out over on Reddit’s r/privacy subreddit to directly answer your questions on all things age verification. We’re looking forward to connecting with you and hearing how we can help you navigate these changing tides, so come on over to r/privacy on Monday (12/15), Tuesday (12/16), and Wednesday (12/17), and ask us anything!

2. EFFecting Change Livestream Panel: “The Human Cost of Online Age Verification”

Then, on January 15th at 12pm PT, we’re hosting a livestream panel featuring Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; Hana Memon, Software Developer at Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. Join us by RSVPing at https://livestream.eff.org/.

A Resource to Empower Users

Age-verification mandates are reshaping the internet in ways that are invasive, dangerous, and deeply unnecessary. But users are not powerless! We can challenge these laws, protect our digital rights, and build a safer digital world for all internet users, no matter their ages. Our new resource hub is here to help—so explore, share, and join us in the fight for a better internet.

The Best Big Media Merger Is No Merger at All

10 December 2025 at 15:08

The state of streaming is... bad. It’s very bad. The first step in wanting to watch anything is a web search: “Where can I stream X?” Then you have to scroll past an AI summary with no answers, and then scroll past the sponsored links. After that, you find out that the thing you want to watch was made by a studio that doesn’t exist anymore or doesn’t have a streaming service. So, even though you subscribe to more streaming services than you could actually name, you will have to buy a digital copy to watch. A copy that, despite your paying for it specifically, you do not actually own and that might vanish in a few years. 

Then, after you paid to see something multiple times in multiple ways (theater ticket, VHS tape, DVD, etc.), the mega-corporations behind this nightmare will try to get Congress to pass laws to ensure you keep paying them. In the end, this is easier than making a product that works. Or, as someone put it on social media, these companies have forgotten “that their entire existence relies on being slightly more convenient than piracy.” 

It’s important to recognize this as we see more and more media mergers. These mergers are not about quality, they’re about control. 

In the old days, studios made a TV show. If the show was a hit, they increased how much they charged companies to place ads during the show. And if the show was a hit for long enough, they sold syndication rights to another channel. Then people could discover the show again, and maybe come back to watch it air live. In that model, the goal was to spread access to a program as much as possible to increase viewership and the number of revenue streams.  

Now, in the digital age, studios have picked up a Silicon Valley trait: putting all their eggs into the basket of “increasing the number of users.” To do that, they have to create scarcity. There has to be only one destination for the thing you’re looking for, and it has to be their own. And you shouldn’t be able to control the experience at all. They should.  

They’ve also moved away from creating buzzy new exclusives to get you to pay them. That requires risk and also, you know, paying creative people to make them. Instead, they’re consolidating.  

Media companies keep announcing mergers and acquisitions. They’ve been doing it for a long time, but it’s really ramped up in the last few years. And these mergers are bad for all the obvious reasons. There are the speech and censorship reasons that came to a head in, of all places, late night television. There are the labor issues. There are the concentration of power issues. There is the obvious problem that the fewer studios exist, the fewer chances good art has to escape Hollywood and make it to our eyes and ears. But when it comes specifically to digital life, there are these: consumer experience and ownership.  

First, the more content that comes under a single corporation’s control, the more they expect you to come to them for it. And the more they want to charge. And because there is less competition, the less they need to work to make their streaming app usable. They then enforce their hegemony by using the draconian copyright restrictions they’ve lobbied for to cripple smaller competitors, critics, and fair use.  

When everything is either Disney or NBCUniversal or Warner Brothers-Discovery-Paramount-CBS and everything is totally siloed, what need will they have to spend money improving any part of their product? Making things is hard, stopping others from proving how bad you are is easy, thanks to how broken copyright law is.  

Furthermore, because every company is chasing increasing subscriber numbers instead of multiple revenue streams, they have an interest in preventing you from ever again “owning” a copy of a work. This was always sort of part of the business plan, but it was on a scale of a) once every couple of years, b) at least it came, in theory, with some new features or enhanced quality, and c) you actually owned the copy you paid for. Now they want you to pay them every month for access to the same copy. And, hey, the price is going to keep going up the fewer options you have. Or you will see more ads. Or start seeing ads where there weren’t any before.  

On the one hand, the increasing dependence on direct subscriber numbers does give users back some power. Jimmy Kimmel’s reinstatement by ABC was partly due to the fact that the company was about to announce a price hike for Disney+ and it couldn’t handle losing users due to the new price and due to popular outrage over Kimmel’s treatment.  

On the other hand, well, there's everything else. 

The latest kerfuffle is over the sale of Warner Brothers-Discovery, a company that was already the subject of a sale and merger resulting in the hyphen. Netflix was competing against Paramount Skydance, another recently merged media megazord.  

Warner Brothers-Discovery accepted a bid from Netflix, enraging Paramount Skydance, which has now launched a hostile takeover attempt. 

Now the optimum outcome is for neither of these takeovers to happen. There are already too few players in Hollywood. It does nothing for the health of the industry to allow either merger. A functioning antitrust regime would stop both the sale and the hostile takeover attempt, full stop. But Hollywood and the federal government are frequent collaborators, and the feds have little incentive to stop Hollywood’s behemoths from growing even further, as long as they continue to play their role pushing a specific view of American culture.    

The promise of the digital era was in part convenience. You never again had to look at TV listings to find out when something would be airing. Virtually unlimited digital storage meant everything would be at your fingertips. But then the corporations went to work to make sure it never happened. And with each and every merger, that promise gets further and further away.  

Note 12/10/2025: One line in this blog has been modified a few hours post-publication. The substance remains the same. 

EFF Launches Age Verification Hub as Resource Against Misguided Laws

10 December 2025 at 12:15
EFF Also Will Host a Reddit AMA and a Livestreamed Panel Discussion

SAN FRANCISCO—With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help people fight back. 

To mark the hub's launch, EFF will host a Reddit AMA (“Ask Me Anything”) next week and a free livestreamed panel discussion on January 15 highlighting the dangers of these misguided laws. 

“These restrictive mandates strike at the foundation of the free and open internet,” said EFF Activist Molly Buckley. “While they are wrapped in the legitimate concern about children's safety, they operate as tools of censorship, used to block people young and old from viewing or sharing information that the government deems ‘harmful’ or ‘offensive.’ They also create surveillance systems that critically undermine online privacy, and chill access to vital online communities and resources. Our new resource hub is a one-stop shop for information that people can use to fight back and redirect lawmakers to things that will actually help young people, like a comprehensive privacy law.” 

Half of U.S. states have enacted some sort of online age verification law. At the federal level, a House Energy and Commerce subcommittee last week held a hearing on “Legislative Solutions to Protect Children and Teens Online.” While many of the 19 bills on that hearing’s agenda involve age verification, none would truly protect children and teens. Instead, they threaten to make it harder to access content that can be crucial, even lifesaving, for some kids. 

It’s not just in the U.S.  Effective this week, a new Australian law requires social media platforms to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account. 

We all want young people to be safe online. However, age verification is not the panacea that regulators and corporations claim it to be; in fact, it could undermine the safety of many. 

Age verification laws generally require online services to check, estimate, or verify all users’ ages—often through invasive tools like government ID checks, biometric scans, or other dubious “age estimation” methods—before granting them access to certain online content or services. These methods are often inaccurate and always privacy-invasive, demanding that users hand over sensitive and immutable personal information that links their offline identity to their online activity. Once that valuable data is collected, it can easily be leaked, hacked, or misused.  

To truly protect everyone online, including children, EFF advocates for a comprehensive data privacy law. 

EFF will host a Reddit AMA on r/privacy from Monday, Dec. 15 at 12 p.m. PT through Wednesday, Dec. 17 at 5 p.m. PT, with EFF attorneys, technologists, and activists answering questions about age verification on all three days. 

EFF will host a free livestream panel discussion about age verification at 12 p.m. PT on Thursday, Jan. 15. Panelists will include Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; a representative of Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. RSVP at https://www.eff.org/livestream-age. 

For the age verification resource hub: https://www.eff.org/age 

For the Reddit AMA: https://www.reddit.com/r/privacy/  

For the Jan. 15 livestream: https://www.eff.org/livestream-age  

 

Contact: Molly Buckley, Activist

10 (Not So) Hidden Dangers of Age Verification

8 December 2025 at 11:24

It’s nearly the end of 2025, and half of U.S. states and the UK now require you to upload your ID or scan your face to watch “sexual content.” A handful of states and Australia now have various requirements to verify your age before you can create a social media account.

Age-verification laws may sound straightforward to some: protect young people online by making everyone prove their age. But in reality, these mandates force users into one of two flawed systems—mandatory ID checks or biometric scans—and both are deeply discriminatory. These proposals burden everyone’s right to speak and access information online, and structurally exclude the very people who rely on the internet most. In short, although these laws are often passed with the intention of protecting children from harm, the reality is that they harm both adults and children. 

Here’s who gets hurt, and how: 

   1.  Adults Without IDs Get Locked Out

Document-based verification assumes everyone has the right ID, in the right name, at the right address. About 15 million adult U.S. citizens don’t have a driver’s license, and 2.6 million lack any government-issued photo ID at all. Another 34.5 million adults don't have a driver's license or state ID with their current name and address.

Specifically:

  • 18% of Black adults don't have a driver's license at all.
  • Black and Hispanic Americans are disproportionately less likely to have current licenses.
  • Undocumented immigrants often cannot obtain state IDs or driver's licenses.
  • People with disabilities are less likely to have current identification.
  • Lower-income Americans face greater barriers to maintaining valid IDs.

Some laws allow platforms to ask for financial documents like credit cards or mortgage records instead. But they still overlook the fact that nearly 35% of U.S. adults also don't own homes, and close to 20% of households don't have credit cards. Immigrants, regardless of legal status, may also be unable to obtain credit cards or other financial documentation.

   2.  Communities of Color Face Higher Error Rates

Platforms that rely on AI-based age-estimation systems often use a webcam selfie to guess users’ ages. But these algorithms don’t work equally well for everyone. Research has consistently shown that they are less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds; that they often misclassify those adults as being under 18; and sometimes take longer to process, creating unequal access to online spaces. This mirrors the well-documented racial bias in facial recognition technologies. The result is that technology’s inherent biases can block people from speaking online or accessing others’ speech.

   3.  People with Disabilities Face More Barriers

Age-verification mandates most harshly affect people with disabilities. Facial recognition systems routinely fail to recognize faces with physical differences, affecting an estimated 100 million people worldwide who live with facial differences, and “liveness detection” can exclude folks with limited mobility. As these technologies become gatekeepers to online spaces, people with disabilities find themselves increasingly blocked from essential services and platforms with no specified appeals processes that account for disability.

Document-based systems also don't solve this problem—as mentioned earlier, people with disabilities are also less likely to possess current driver's licenses, so document-based age-gating technologies are equally exclusionary.

   4.  Transgender and Non-Binary People Are Put At Risk

Age-estimation technologies perform worse on transgender individuals and cannot classify non-binary genders at all. For the 43% of transgender Americans who lack identity documents that correctly reflect their name or gender, age verification creates an impossible choice: provide documents with dead names and incorrect gender markers, potentially outing themselves in the process, or lose access to online platforms entirely—a risk that no one should be forced to take just to use social media or access legal content.

   5.  Anonymity Becomes a Casualty

Age-verification systems are, at their core, surveillance systems. By requiring identity verification to access basic online services, we risk creating an internet where anonymity is a thing of the past. For people who rely on anonymity for safety, this is a serious issue. Domestic abuse survivors need to stay anonymous to hide from abusers who could track them through their online activities. Journalists, activists, and whistleblowers regularly use anonymity to protect sources and organize without facing retaliation or government surveillance. And in countries under authoritarian rule, anonymity is often the only way to access banned resources or share information without being silenced. Age-verification systems that demand government IDs or biometric data would strip away these protections, leaving the most vulnerable exposed.

   6.  Young People Lose Access to Essential Information 

Because state-imposed age-verification rules either block young people from social media or require them to get parental permission before logging on, they can deprive minors of access to important information about their health, sexuality, and gender. Many U.S. states mandate “abstinence only” sexual health education, making the internet a key resource for education and self-discovery. But age-verification laws can end up blocking young people from accessing that critical information. And this isn't just about porn, it’s about sex education, mental health resources, and even important literature. Some states and countries may start going after content they deem “harmful to minors,” which could include anything from books on sexual health to art, history, and even award-winning novels. And let’s be clear: these laws often get used to target anything that challenges certain political or cultural narratives, from diverse educational materials to media that simply includes themes of sexuality or gender diversity. What begins as a “protection” for kids could easily turn into a full-on censorship movement, blocking content that’s actually vital for minors’ development, education, and well-being. 

This is also especially harmful to homeschoolers, who rely on the internet for research, online courses, and exams. For many, the internet is central to their education and social lives. The internet is also crucial for homeschoolers' mental health, as many already struggle with isolation. Age-verification laws would restrict access to resources that are essential for their education and well-being.

   7.  LGBTQ+ Youth Are Denied Vital Lifelines

For many LGBTQ+ young people, especially those with unsupportive or abusive families, the internet can be a lifeline. For young people facing family rejection or violence due to their sexuality or gender identity, social media platforms often provide crucial access to support networks, mental health resources, and communities that affirm their identities. Age verification systems that require parental consent threaten to cut them off from these crucial supports. 

When parents must consent to or monitor their children's social media accounts, LGBTQ+ youth who lack family support lose these vital connections. LGBTQ+ youth are also disproportionately likely to be unhoused and lack access to identification or parental consent, further marginalizing them. 

   8.  Youth in Foster Care Systems Are Completely Left Out

Age verification bills that require parental consent fail to account for young people in foster care, particularly those in group homes without legal guardians who can provide consent, or with temporary foster parents who cannot prove guardianship. These systems effectively exclude some of the most vulnerable young people from accessing online platforms and resources they may desperately need.

   9.  All of Our Personal Data is Put at Risk

An age-verification system also creates acute privacy risks for adults and young people. Requiring users to upload sensitive personal information (like government-issued IDs or biometric data) to verify their age creates serious privacy and security risks. Under these laws, users would not just momentarily display their ID like one does when accessing a liquor store, for example. Instead, they’d submit their ID to third-party companies, raising major concerns over who receives, stores, and controls that data. Once uploaded, this personal information could be exposed, mishandled, or even breached, as we've seen with past data hacks. Age-verification systems are no strangers to being compromised—companies like AU10TIX and platforms like Discord have faced high-profile data breaches, exposing users’ most sensitive information for months or even years. 

The more places personal data passes through, the higher the chances of it being misused or stolen. Users are left with little control over their own privacy once they hand over these immutable details, making this approach to age verification a serious risk for identity theft, blackmail, and other privacy violations. Children are already a major target for identity theft, and these mandates perversely increase the risk that they will be harmed.

   10.  All of Our Free Speech Rights Are Trampled

The internet is today’s public square—the main place where people come together to share ideas, organize, learn, and build community. Even the Supreme Court has recognized that social media platforms are among the most powerful tools ordinary people have to be heard.

Age-verification systems inevitably block some adults from accessing lawful speech and allow some users under 18 to slip through anyway. Because the systems are both over-inclusive (blocking adults) and under-inclusive (failing to block people under 18), they restrict lawful speech in ways that violate the First Amendment. 

The Bottom Line

Age-verification mandates create barriers along lines of race, disability, gender identity, sexual orientation, immigration status, and socioeconomic class. While these requirements threaten everyone’s privacy and free-speech rights, they fall heaviest on communities already facing systemic obstacles.

The internet is essential to how people speak, learn, and participate in public life. When access depends on flawed technology or hard-to-obtain documents, we don’t just inconvenience users; we deepen existing inequalities and silence the people who most need these platforms. As outlined, every available method—facial age estimation, document checks, financial records, or parental consent—systematically excludes or harms marginalized people. The real question isn’t whether these systems discriminate, but how extensively.

EU's New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights

4 December 2025 at 13:04

The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.

The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect and suffers from uneven enforcement, complexities and certain administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, will significantly weaken privacy protections for users in the name of cutting red tape.

It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted. The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy.

Let’s break it down. 

 Changing What Constitutes Personal Data 

 The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, ironically targeting laws that were also argued to promote innovation in Europe. It might also stave off tariffs the U.S. has threatened to levy, thanks in part to heavy lobbying from Meta and tech lobbying groups.  

 The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.  

 The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue. 

This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring to separate identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.

Privileging AI 

In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” which gives AI companies a broad legal basis to process personal data, unless individuals actively object. The proposals gesture towards organisational and technical safeguards but leave companies broad discretion.  

Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be taken to remove them from AI models or training sets where they appear. However, it is unclear what will count as an appropriate or proportionate measure.

Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.  

And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it.

A Broad Reform Beyond the GDPR

There are additional adjustments, many of them troubling, such as changes to rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation about how users’ data are used), and revised data access rights (supposed to tackle abusive requests). An extensive analysis by the NGO noyb can be found here. 

Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.

Browser Signals and Cookie Fatigue

There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights. 
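To make the mechanism concrete, here is a minimal sketch of how a website could honor such a signal. It assumes the signal looks like today’s Global Privacy Control (a “Sec-GPC: 1” request header); the Digital Omnibus leaves the exact format to standards bodies, so the header name and the Express-based setup are illustrative assumptions rather than anything specified in the proposal.

```typescript
// Illustrative sketch only: assumes a GPC-style "Sec-GPC: 1" header.
// The actual signal format under the Digital Omnibus is still undefined.
import express from "express";

const app = express();

// If the browser sends an automated opt-out signal, treat consent as refused
// before any banner is shown or any tracker is loaded.
app.use((req, res, next) => {
  const optedOut = req.header("Sec-GPC") === "1";
  res.locals.trackingAllowed = !optedOut;
  next();
});

app.get("/", (_req, res) => {
  if (res.locals.trackingAllowed) {
    // Only in this branch would a consent flow or tracking scripts run.
    res.send("No opt-out signal received; consent flow would run here.");
  } else {
    // Signal present: honor it automatically, with no pop-up at all.
    res.send("Opt-out signal honored; no trackers loaded, no banner shown.");
  }
});

app.listen(3000);
```

The point is simply that a browser-level signal lets the refusal happen once, automatically, instead of through a banner on every site.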

While this proposal is a step forward, the devil is in the details: First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later. 

Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on. 

However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps. 

Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected. 

A Muddled Legal Landscape

The Commission’s use of the "Omnibus" process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.

The result is the opposite of “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: Businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived again. This sounds like “complification” rather than simplification.

The Digital Package Is Not a Done Deal

Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection. 

Simplification doesn't require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into a verschlimmbessern, the kind of well-meant fix that ends up resembling the infamous ecce homo restoration. 

Axon Tests Face Recognition on Body-Worn Cameras

3 December 2025 at 19:00

Axon Enterprise Inc. is working with a Canadian police department to test the addition of face recognition technology (FRT) to its body-worn cameras (BWCs). This is an alarming development in government surveillance that should put communities everywhere on alert. 

As many as 50 officers from the Edmonton Police Department (EPD) will begin using these FRT-enabled BWCs today as part of a proof-of-concept experiment. EPD is the first police department in the world to use these Axon devices, according to a report from the Edmonton Journal. 

This kind of technology could give officers instant identification of any person that crosses their path. During the current trial period, the Edmonton officers will not be notified in the field of an individual’s identity but will review identifications generated by the BWCs later on. 

“This Proof of Concept will test the technology’s ability to work with our database to make officers aware of individuals with safety flags and cautions from previous interactions,” as well as “individuals who have outstanding warrants for serious crime,” Edmonton Police described in a press release, suggesting that individuals will be placed on a watchlist of sorts.

FRT brings a rash of problems. It relies on extensive surveillance and on collecting images of individuals, law-abiding or otherwise. Misidentifications can have horrendous consequences, including prolonged and difficult fights to prove one’s innocence and unfair incarceration for crimes never committed. In a world where police are using real-time face recognition, law-abiding individuals or those participating in legal, protected activity that police may find objectionable — like protest — could be quickly identified. 

With the increasing connections being made between disparate data sources about nearly every person, BWCs enabled with FRT can easily connect a person minding their own business, who happens to come within view of a police officer, with a whole slew of other personal information. 

Axon had previously claimed it would pause the addition of face recognition to its tools due to concerns raised in 2019 by the company’s AI and Policing Technology Ethics Board. However, since then, the company has continued to research and consider the addition of FRT to its products. 

This BWC-FRT integration signals possible other FRT integrations in the future. Axon is building an entire arsenal of cameras and surveillance devices for law enforcement, and the company grows the reach of its police surveillance apparatus, in part, by leveraging relationships with its thousands of customers, including those using its flagship product, the Taser. This so-called “ecosystem” of surveillance technology includes the Fusus system, a platform for connecting surveillance cameras to facilitate real-time viewing of video footage. It also involves expanding the use of surveillance tools like BWCs and the flying cameras of “drone as first responder” (DFR) programs.

Face recognition undermines individual privacy, and it is too dangerous when deployed by police. Communities everywhere must move to protect themselves and safeguard their civil liberties, insisting on transparency, clear policies, public accountability, and audit mechanisms. Ideally, communities should ban police use of the technology altogether. At a minimum, police must not add FRT to BWCs.

After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know

3 December 2025 at 18:19

After a years-long battle over the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures, the Council of the EU, representing EU member states, has at last agreed on a position. The good news is that the most controversial part, the forced requirement to scan encrypted messages, is out. The bad news is there’s more to it than that.

Chat Control has gone through several iterations since it was first introduced, with the EU Parliament backing a position that protects fundamental rights, while the Council of the EU spent many months pursuing an intrusive law-enforcement-focused approach. Many proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal. This requirement would fundamentally break end-to-end encryption.

Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the Council agreed on a position that removes the requirement forcing providers to scan messages on their services. It also comes with strong language to protect encryption, which is good news for users.

But here’s the rub: first, the Council’s position allows for “voluntary” detection, where tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., where there is no comprehensive federal privacy law, voluntary scanning is not technically legal in the EU, though it has been possible through a derogation set to expire in 2026. It is unclear how this will play out over time, though we are concerned that this approach to voluntary scanning will lead to private mass-scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection.

With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services. This includes age verification and age assessment measures. We have written about the perils of age verification schemes and recent developments in the EU, where regulators are increasingly focusing on AV to reduce online harms.

If secure messaging platforms like Signal or WhatsApp are required to implement age verification methods, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be available to everyone, everywhere, of all ages, freely and without the requirement to prove their identity. As age verification has started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations, it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline. 

Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull the thread on this and you’re left with a contradictory stance, because an activity is no longer voluntary if it forms part of a formal risk management obligation. While courts might interpret its mention in a risk assessment as an optional measure available to providers that do not use encrypted communication channels, this reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services to perform voluntary scanning if they don’t want to invest in alternative risk mitigation options. It’s largely up to the provider to choose how to mitigate risks, but it’s up to enforcers to decide what is effective. Again, we're concerned about how this will play out in practice.

For the same reason, clear and unambiguous language is needed to prevent authorities from taking a hostile view of what is meant by “allowing encryption” and, on that basis, expecting service providers to implement client-side scanning. We welcome the clear assurance in the text that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.

As we approach the final “trilogue” negotiations of this regulation, we urge EU lawmakers to work on a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk benchmark systems that lead to surveillance in practice.

EFF Tells Patent Office: Don’t Cut the Public Out of Patent Review

2 December 2025 at 14:59

EFF has submitted its formal comment to the U.S. Patent and Trademark Office (USPTO) opposing a set of proposed rules that would sharply restrict the public’s ability to challenge wrongly granted patents. These rules would make inter partes review (IPR)—the main tool Congress created to fix improperly granted patents—unavailable in most of the situations where it’s needed most.

If adopted, they would give patent trolls exactly what they want: a way to keep questionable patents alive and out of reach.

If you haven’t commented yet, there’s still time. The deadline is today, December 2.

TAKE ACTION

Tell USPTO: The public has a right to challenge bad patents

Sample comment:

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

IPR Is Already Under Siege, And These Rules Would Make It Worse

Since USPTO Director John Squires was sworn into office just over two months ago, we’ve seen the Patent Office take an increasingly aggressive stance against IPR petitions. In a series of director-level decisions, the USPTO has denied patent challengers the chance to be heard—sometimes dozens of them at a time—without explanation or reasoning. 

That reality makes this rulemaking even more troubling. The USPTO is already denying virtually every new petition challenging patents. These proposed rules would cement that closed-door approach and make it harder for challengers to be heard. 

What EFF Told the USPTO

Our comment lays out how these rules would make patent challenges nearly impossible to pursue for small businesses, nonprofits, software developers, and everyday users of technology. 

Here are the core problems we raised:

First, no one should have to give up their court defenses just to use IPR. The USPTO proposal would force defendants to choose: either use IPR and risk losing their legal defenses, or keep their defenses and lose IPR.

That’s not a real choice. Anyone being sued or threatened for patent infringement needs access to every legitimate defense. Patent litigation is devastatingly expensive, and forcing people to surrender core rights in federal court is unreasonable and unlawful.

Second, one early case should not make a bad patent immune forever. Under the proposed rules, if a patent survives any earlier validity fight—no matter how rushed, incomplete, or poorly reasoned—everyone else could be barred from filing an IPR later.

New prior art? Doesn’t matter. Better evidence? Doesn’t matter. 

Congress never intended IPR to be a one-shot shield for bad patents. 

Third, patent owners could manipulate timing to shut down petitions. The rules would let the USPTO deny IPRs simply because a district court case might move faster.

Patent trolls already game the system by filing in courts with rapid schedules. This rule would reward that behavior. It allows patent owners—not facts, not law, not the merits—to determine whether an IPR can proceed. 

IPR isn't supposed to be a race to the courthouse. It’s supposed to be a neutral review of whether the Patent Office made a mistake.

Why Patent Challenges Matter

IPR isn’t perfect, and it doesn’t apply to every patent. But compared to multimillion-dollar federal litigation, it’s one of the only viable tools available to small companies, developers, and the public. It needs to remain open. 

When an overbroad patent gets waved at hundreds or thousands of people—podcasters, app developers, small retailers—IPR is often the only mechanism that can actually fix the underlying problem: the patent itself. These rules would take that option away.

There’s Still Time To Add Your Voice

If you haven’t submitted a comment yet, now is the time. The more people speak up, the harder it becomes for these changes to slip through.

Comments don’t need to be long or technical. A few clear sentences in your own words are enough. We’ve written a short sample comment below. It’s even more powerful if you add a sentence or two describing your own experience. If you mention EFF in your comment, it helps our collective impact. 

TAKE ACTION

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Further reading:

AI Chatbot Companies Should Protect Your Conversations From Bulk Surveillance

EFF intern Alexandra Halbeck contributed to this blog post.

When people talk to a chatbot, they often reveal highly personal information they wouldn’t share with anyone else. Chat logs are digital repositories of our most sensitive and revealing information. They are also tempting targets for law enforcement, to which the U.S. Constitution gives only one answer: get a warrant.

AI companies have a responsibility to their users to make sure the warrant requirement is strictly followed, to resist unlawful bulk surveillance requests, and to be transparent with their users about the number of government requests they receive.

Chat logs are deeply personal, just like your emails.

Tens of millions of people use chatbots to brainstorm, test ideas, and explore questions they might never post publicly or even admit to another person. Whether advisable or not, people also turn to consumer AI companies for medical information, financial advice, and even dating tips. These conversations reveal people’s most sensitive information.

Consider the sensitivity of the following prompts: “how to get abortion pills,” “how to protect myself at a protest,” or “how to escape an abusive relationship.” These exchanges can reveal everything from health status to political beliefs to private grief. A single chat thread can expose the kind of intimate detail once locked away in a handwritten diary.

Without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help.

Chat logs require a warrant.

Whether you draft an email, edit an online document, or ask a question to a chatbot, you have a reasonable expectation of privacy in that information. Chatbots may be a new technology, but the constitutional principle is old and clear. Before the government can rifle through your private thoughts stored on digital platforms, it must do what it has always been required to do: get a warrant.

For over a century, the Fourth Amendment has protected the content of private communications—such as letters, emails, and search engine prompts—from unreasonable government searches. AI prompts require the same constitutional protection.

This protection is not aspirational—it already exists. The Fourth Amendment draws a bright line around private communications: the government must show probable cause and obtain a particularized warrant before compelling a company to turn over your data. Companies like OpenAI acknowledge this warrant requirement explicitly, while others like Anthropic could stand to be more precise.

AI companies must resist bulk surveillance orders.

AI companies that create chatbots should commit to having your back and resisting unlawful bulk surveillance orders. A valid search warrant requires law enforcement to provide a judge with probable cause and to particularly describe the thing to be searched. This means that bulk surveillance orders often fail that test.

What do these overbroad orders look like? In the past decade or so, police have often sought “reverse” search warrants for user information held by technology companies. Rather than searching for one particular individual, police have demanded that companies rummage through their giant databases of personal data to help develop investigative leads. This has included “tower dumps” or “geofence warrants,” in which police order a company to search all users’ location data to identify anyone who has been near a particular place at a particular time. It has also included “keyword” warrants, which seek to identify any person who typed a particular phrase into a search engine. This could include a chilling keyword search for a well-known politician’s name or a busy street, or a geofence warrant near a protest or church.

Courts are beginning to rule that these broad demands are unconstitutional. And after years of complying, Google has finally made it technically difficult—if not impossible—to provide mass location data in response to a geofence warrant.

This is an old story: if a company stores a lot of data about its users, law enforcement (and private litigants) will eventually seek it out. Law enforcement is already demanding user data from AI chatbot companies, and those demands will only increase. These companies must be prepared for this onslaught, and they must commit to fighting to protect their users.

In addition to minimizing the amount of data accessible to law enforcement, they can start with three promises to their users. These aren’t radical ideas. They are basic transparency and accountability standards to preserve user trust and to ensure constitutional rights keep pace with technology:

  1. commit to fighting bulk orders for user data in court,
  2. commit to providing users with advance notice before complying with a legal demand so that users can choose to fight on their own behalf, and 
  3. commit to publishing periodic transparency reports, which tally up how many legal demands for user data the company receives (including the number of bulk orders specifically).

How to Identify Automated License Plate Readers at the U.S.-Mexico Border

2 December 2025 at 11:23

U.S. Customs and Border Protection (CBP), the Drug Enforcement Administration (DEA), and scores of state and local law enforcement agencies have installed a massive dragnet of automated license plate readers (ALPRs) in the US-Mexico borderlands. 

In many cases, the agencies have gone out of their way to disguise the cameras from public view. And the problem is only going to get worse: as recently as July 2025, CBP put out a solicitation to purchase 100 more covert trail cameras with license plate-capture ability. 

Last month, the Associated Press published an in-depth investigation into how agencies have deployed these systems and exploited this data to target drivers. But what do these cameras look like? Here's a guide to identifying ALPR systems when you're driving the open road along the border.

Special thanks to researcher Dugan Meyer and AZ Mirror's Jerod MacDonald-Evoy. All images by EFF and Meyer were taken within the last three years. 

ALPR at Checkpoints and Land Ports of Entry 

All land ports of entry have ALPR systems that capture the license plates of all vehicles entering and exiting the country. They typically look like this: 

License plate readers along the lanes leading into a border crossing

ALPR systems at the Eagle Pass International Bridge Port of Entry. Source: EFF

Most interior checkpoints, which are anywhere from a few miles to more than 60 miles from the border, are also equipped with ALPR systems operated by CBP. However, the DEA operates a parallel system at most interior checkpoints in southern border states. 

When it comes to checkpoints, here's the rule of thumb: If you're traveling away from the border, you are typically being captured by a CBP/Border Patrol system (Border Patrol is a sub-agency of CBP). If you're traveling toward the border, it is most likely a DEA system.

Here's a representative example of a CBP checkpoint camera system:

ALPR cameras next to white trailers along the lane into a checkpoint

ALPR system at the Border Patrol checkpoint near Uvalde, Texas. Source: EFF

At a typical port of entry or checkpoint, each vehicle lane will have an ALPR system. We've even seen Border Patrol checkpoints that were temporarily closed continue to funnel people through these ALPR lanes, even though there was no one on hand to vet drivers face-to-face. According to CBP's Privacy Impact Assessments (2017, 2020), CBP keeps this data for 15 years, but agents can generally only search the most recent five years' worth of data. 

The scanners were previously made by a company called Perceptics, which was infamously hacked, leading to a breach of driver data. The systems have since been "modernized" (i.e., replaced) by SAIC.

Here's a close up of the new systems:

Close up of a camera marked "Front."

Frontal ALPR camera at the checkpoint near Uvalde, Texas. Source: EFF

In 2024, the DEA announced plans to integrate port of entry ALPRs into its National License Plate Reader Program (NLPRP), which the agency says is a network of both DEA systems and external law enforcement ALPR systems that it uses to investigate crimes such as drug trafficking and bulk cash smuggling.

Again, if you're traveling toward the border and you pass a checkpoint, you're often captured by parallel DEA systems set up on the opposite side of the road. However, these systems have also been found installed on their own, away from checkpoints. 

These are a major component of the DEA's NLPRP, which has a standard retention period of 90 days. This program dates back to at least 2010, according to records obtained by the ACLU. 

Here is a typical DEA system that you will find installed near existing Border Patrol checkpoints:

A series of cameras next to a trailer by the side of the road.

DEA ALPR set-up in southern Arizona. Source: EFF

These are typically made by a different vendor, Selex ES, which also includes the brands ELSAG and Leonardo. Here is a close-up:

Close-up of an ALPR camera

Close-up of a DEA camera near the Tohono O'odham Nation in Arizona. Source: EFF

Covert ALPR

As you drive along border highways, law enforcement agencies have disguised cameras in order to capture your movements. 

The exact number of covert ALPRs at the border is unknown, but to date we have identified approximately 100 sites. We know CBP and DEA each operate covert ALPR systems, but it isn't always possible to know which agency operates any particular set-up. 

Another rule of thumb: if a covert ALPR has a Motorola Solutions camera (formerly Vigilant Solutions) inside, it's likely a CBP system. If it has a Selex ES camera inside, then it is likely a DEA camera. 

Here are examples of construction barrels with each kind of camera: 

A camera hidden inside an orange traffic barrel

A covert ALPR with a Motorola Solutions ALPR camera near Calexico, Calif. Source: EFF

These are typically seen along the roadside, often in sets of three, and almost always connected to some sort of solar panel. They are often placed behind existing barriers.

A camera hidden inside an orange traffic barrel

A covert ALPR with a Selex ES camera in southern Arizona. Source: EFF

The DEA models are also found by the roadside, but they also can be found inside or near checkpoints. 

If you're curious (as we were), here's what they look like inside, courtesy of the US Patent and Trademark Office:

Patent drawings showing a traffic barrel and the camera inside it

Patent for portable covert license plate reader. Source: USPTO

In addition to orange construction barrels, agencies also conceal ALPRs in yellow sand barrels. For example, these can be found throughout southern Arizona, especially in the southeastern part of the state.

A camera hidden in a yellow sand barrel.

A covert ALPR system in Arizona. Source: EFF

ALPR Trailers

Sometimes a speed trailer or signage trailer isn't designed so much for safety as to conceal ALPR systems. Sometimes ALPRs are attached to nondescript trailers with no discernible purpose that you'd hardly notice by the side of the road. 

It's important to note that it's difficult to know who these belong to, since they often aren't marked. We know that all levels of government, even in the interior of the country, have purchased these setups.

Here are some of the different flavors of ALPR trailers:

A speed trailer capturing ALPR. Speed limit 45 sign.

An ALPR speed trailer in Texas. Source: EFF

A white flat trailer by the side of the road with camera portals on either end.

ALPR trailer in Southern California. Source: EFF

An orange trailer with an ALPR camera and a solar panel.

ALPR trailer in Southern California. Source: EFF

An orange trailer with ALPR cameras by the side of the road.

An ALPR unit in southern Arizona. Source: EFF

A trailer with a pole with mounted ALPR cameras in the desert.

ALPR unit in southern Arizona. Source: EFF

A trailer with a solar panel and an ALPR camera.

A Jenoptik Vector ALPR trailer in La Joya, Texas. Source: EFF

One particularly worrisome version of an ALPR trailer is the Jenoptik Vector: at least two jurisdictions along the border have equipped these trailers not only with ALPR, but with TraffiCatch technology that gathers Bluetooth and Wi-Fi identifiers. This means that in addition to gathering plates, these devices would also document mobile devices, such as phones, laptops, and even vehicle entertainment systems.

Stationary ALPR 

Stationary or fixed ALPR is one of the more traditional ways of installing these systems. The cameras are placed on existing utility poles or other infrastructure or on poles installed by the ALPR vendor. 

For example, here's a DEA system installed on a highway arch:

The back of a highway overpass sign with ALPR cameras.

The lower set of ALPR cameras belong to the DEA. Source: Dugan Meyer CC BY

A camera and solar panel attached to a streetlight pole.

ALPR camera in Arizona. Source: Dugan Meyer CC BY

Flock Safety

At the local level, thousands of cities around the United States have adopted fixed ALPR, with the company Flock Safety grabbing a huge chunk of the market over the last few years. County sheriffs and municipal police along the border have also embraced the trend, with many using funds earmarked for border security to purchase these systems. Flock allows these agencies to share with one another and contribute their ALPR scans to a national pool of data. As part of a pilot program, Border Patrol had access to this ALPR data for most of 2025. 

A typical Flock Safety setup involves attaching cameras and solar panels to poles. For example:

A red truck passed a pair of Flock Safety ALPR cameras on poles.

Flock Safety ALPR poles installed just outside the Tohono O'odham Nation in Arizona. Source: EFF

A black Flock Safety camera with a small solar panel

A close-up of a Flock Safety camera in Douglas, Arizona. Source: EFF

We've also seen these camera poles placed outside the Santa Teresa Border Patrol station in New Mexico.

Flock may now be the most common provider nationwide, but it isn't the only player in the field. DHS recently released a market survey of 16 different vendors providing similar technology.  

Mobile ALPR 

ALPR cameras can also be found attached to patrol cars. Here's an example of a Motorola Solutions ALPR attached to a Hidalgo County Constable vehicle in South Texas:

An officer stands beside patrol car. Red circle identifies mobile ALPR

Mobile ALPR on a Hidalgo County Constable vehicle. Source: Weslaco Police Department

These allow officers not only to capture ALPR data in real time as they drive, but also to receive an in-car alert when a scan matches a vehicle on a "hot list," the term for a list of plates that law enforcement has flagged for further investigation. 

Here's another example: 

A masked police officer stands next to a patrol vehicle with two ALPR cameras.

Mobile ALPR in La Mesa, Calif. Source: La Mesa Police Department Facebook page

Identifying Other Technologies 

EFF has been documenting the wide variety of technologies deployed at the border, including surveillance towers, aerostats, and trail cameras. To learn more, download EFF's zine, "Surveillance Technology at the US-Mexico Border" and explore our map of border surveillance, which includes Google Streetview links so you can see exactly how each installation looks on the ground. Currently we have mapped out most DEA and CBP checkpoint ALPR setups, with covert cameras planned for addition in the near future.

We’re Doubling Down on Digital Rights. You Can, Too.

2 December 2025 at 03:03

Technology can uplift democracy, or it can be an authoritarian weapon. EFF is making sure it stays on the side of freedom. We’re defending encryption, exposing abusive surveillance tech, fighting government overreach, and standing up for free expression. But we need your help to protect digital rights—and right now, your donation will be matched dollar-for-dollar.

Power up!

Join EFF Today & Get a Free Donation Match

It’s Power Up Your Donation Week and all online contributions get an automatic match up to $302,700. Many thanks to the passionate EFF supporters who created this year's matching fund! The Power Up matching challenge offers a rare opportunity to double your impact on EFF’s legal, educational, advocacy, and free software work when it’s needed most. If you’ve been waiting for the right moment to give—this is it.

Digital rights are human rights. Governments have silenced online speech, corporations seek to exploit our data for profit, and police are deploying dystopian tools to track our every move. But with the support of EFF’s members, the fight is far from over.

How EFF is fighting back:

  • Creating tools to help people understand and protect their rights
  • Holding powerful institutions accountable in court when those rights are threatened
  • Pushing back against surveillance regimes through the justice system and in legislatures
  • Locking arms with attorneys, technologists, and defenders of digital freedom—including you

Person wearing a black shirt with the EFF35 Cityscape design next to a person wearing a green and gold Motherboard hoodie.

As an EFF member, you’ll have your choice of conversation-starting gear as a token of our thanks. Choose from stickers, EFF's 35th Anniversary Cityscape t-shirt, Motherboard hoodie, and more. You’ll also get a bonus Take Back CTRL-themed camera cover set with any member gift.

Will you donate today for privacy and free speech? Your gift will be matched for free, fueling the fight to stop tech from being a tyrant’s dream.

Already an EFF Member? Help Us Spread the Word!

EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:

Don't let democracy be undermined by tools of surveillance and control. Donate to EFF this week and you'll get an automatic match. https://eff.org/power-up

Bluesky | Facebook | LinkedIn | Mastodon
(More at eff.org/social)

_________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

The UK Has It Wrong on Digital ID. Here’s Why.

28 November 2025 at 05:10

In late September, the United Kingdom’s Prime Minister Keir Starmer announced his government’s plans to introduce a new digital ID scheme in the country to take effect before the end of the Parliament (no later than August 2029). The scheme will, according to the Prime Minister, “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like people’s name, date of birth, nationality or residency status, and photo to verify their right to live and work in the country. 

This is the latest example of a government creating a new digital system that is fundamentally incompatible with a privacy-protecting and human rights-defending democracy. This past year alone, we’ve seen federal agencies across the United States explore digital IDs to prevent fraud, the Transportation Security Administration accepting “Digital passport IDs” in Android, and states contracting with mobile driver’s license providers (mDL). And as we’ve said many times, digital ID is not for everyone and policymakers should ensure better access for people with or without a digital ID. 

But instead, the UK is pushing forward with its plans to roll out digital ID in the country. Here are three reasons why those policymakers have it wrong. 

Digital ID allows the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. 

Mission Creep 

In his initial announcement, Starmer stated: “You will not be able to work in the United Kingdom if you do not have digital ID. It's as simple as that.” Since then, the government has been forced to clarify those remarks: digital ID will be mandatory to prove the right to work, and will only take effect after the scheme's proposed introduction in 2028, rather than retrospectively. 

The government has also confirmed that digital ID will not be required for pensioners, students, and those not seeking employment, and will also not be mandatory for accessing medical services, such as visiting hospitals. But as civil society organizations are warning, it's possible that the required use of digital ID will not end here. Once this data is collected and stored, it provides a multitude of opportunities for government agencies to expand the scenarios where they demand that you prove your identity before entering physical and digital spaces or accessing goods and services. 

The government may also be able to request information from workplaces on who is registering for employment at that location, or collaborate with banks to aggregate different data points to determine who is self-employed or not registered to work. It potentially leads to situations where state authorities can treat the entire population with suspicion of not belonging, and would shift the power dynamics even further towards government control over our freedom of movement and association. 

And this is not the first time that the UK has attempted to introduce digital ID: politicians previously proposed similar schemes intended to control the spread of COVID-19, limit immigration, and fight terrorism. In a country expanding its deployment of other surveillance technologies, like face recognition, this raises additional concerns about how digital ID could lead to new divisions and inequalities based on the data obtained by the system. 

These concerns compound the underlying narrative that digital ID is being introduced to curb illegal immigration to the UK: that digital ID would make it harder for people without residency status to work in the country because it would lower the possibility that anyone could borrow or steal the identity of another. Not only is there little evidence that digital ID will limit illegal immigration, but checks on the right to work in the UK already exist. This framing is nothing more than inflammatory and misleading; Liberal Democrat leader Ed Davey noted this would do “next to nothing to tackle channel crossings.”

Inclusivity is Not Inevitable, But Exclusion Is 

While the government announced that its digital ID scheme will be inclusive enough to work for those without access to a passport, reliable internet, or a personal smartphone, as we’ve been saying for years, digital ID leaves vulnerable and marginalized people not only out of the debate but ultimately out of the society that these governments want to build. We remain concerned about the potential for digital identification to exacerbate existing social inequalities, particularly for those with reduced access to digital services or people seeking asylum. 

The UK government has said a public consultation will be launched later this year to explore alternatives, such as physical documentation or in-person support for the homeless and older people; but it’s short-sighted to think that these alternatives are viable or functional in the long term. For example, UK organization Big Brother Watch reported that only about 20% of Universal Credit applicants can use online ID verification methods. 

These individuals should not be an afterthought attached to the end of an announcement for further review. If a tool does not work for those without access to essentials such as the internet or a physical ID, then it should not exist.

Digital ID schemes also exacerbate other inequalities in society: abusers, for example, will be able to prevent others from getting jobs or proving other statuses by denying them access to their ID. In the same way, the scope of digital ID may be expanded, and people could be forced to prove their identities to different government agencies and officials, which may raise issues of institutional discrimination when phones fail to load or when the Home Office has incorrect information on an individual. This is not an unrealistic scenario considering the frequency of internet connectivity issues, or circumstances like passports and other documentation expiring.

Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID.

Attacks on Privacy and Surveillance 

Digital ID systems expand the number of entities that may access personal information and consequently use it to track and surveil. The UK government has nodded to this threat. Starmer stated that the technology would “absolutely have very strong encryption” and wouldn't be used as a surveillance tool. Moreover, junior Cabinet Office Minister Josh Simons told Parliament that “data associated with the digital ID system will be held and kept safe in secure cloud environments hosted in the United Kingdom” and that “the government will work closely with expert stakeholders to make the programme effective, secure and inclusive.” 

But if digital ID is needed to verify people’s identities multiple times per day or week, ensuring end-to-end encryption is the bare minimum the government should require. Unlike sharing a National Insurance Number, a digital ID will show an array of personal information that would otherwise not be available or exchanged. 

This would create a rich environment for hackers or hostile agencies to obtain swathes of personal information on those based in the UK. And if previous schemes in the country are anything to go by, the government’s ability to handle giant databases is questionable. Notably, the eVisa’s multitude of failures last year illustrated the harms that digital IDs can bring, with issues like government system failures and internet outages leading to people being detained, losing their jobs, or being made homeless. Checking someone’s identity against a database in real time requires a host of online and offline factors to work, and the UK has yet to take the structural steps required to remedy this.

Moreover, we know that the Cabinet Office and the Department for Science, Innovation and Technology will be involved in the delivery of digital ID and are clients of U.S.-based tech vendors, specifically Amazon Web Services (AWS). The UK government has spent millions on AWS (and Microsoft) cloud services in recent years, and the One Government Value Agreement (OGVA)—first introduced in 2020, which provides discounts for cloud services by contracting with the UK government and public sector organizations as a single client—is still active. It is essential that any data collected is not stored or shared with third parties, including through cloud agreements with companies outside the UK.

And even if the UK government published comprehensive plans to ensure data minimization in its digital ID, we will still strongly oppose any national ID scheme. Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID, and both the public and civil society organizations in the country are against this.

Ways Forward

Digital ID regimes strip privacy from everyone and further marginalize those seeking asylum or undocumented people. They are pursued as a technological solution to offline problems but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. 

We cannot base our human rights on the government’s mere promise to uphold them. On December 8th, politicians in the country will be debating a petition that reached almost 3 million signatories rejecting mandatory digital ID. If you’re based in the UK, you can contact your MP (external campaign links) to oppose the plans for a digital ID system. 

The case for digital identification has not been made. The UK government must listen to people in the country and say no to digital ID.

EFF’s Holiday Gift Guide

28 November 2025 at 03:35

Technology is supercharging the attack on democracy and EFF is fighting back. We’re suing to stop government surveillance. We're fighting to protect free expression online. And we're building tools to protect your data privacy.

Help support our mission with new gear from EFF's online store, perfect gifts for the digital rights defender in your life. Take 20% off your order today with code BLACKFRI. Thanks for being an EFF supporter!

Seven multi-shaped translucent red dice with a liquid glitter core.

Liquid Core Dice are perfect for tabletop games. The metal clear-view EFF display tin contains a seven-piece set of sharp-edge dice. These glittery dice will show that you roll with the crew protecting our civil liberties online.

Close up on the raised lines of EFF's Lady Justice Braille Sticker

Celebrate equity and accessibility with this tactile braille sticker that depicts the fiery figure of Lady Justice with braille characters reading "justice" and "EFF." With this embossed sticker, you won't just be showing off your support for justice, you'll actually be able to feel it.

Tote bag, sticker, and heat-changing mug featuring EFF's reproductive rights icon Lady Lock.

Applaud reproductive rights with this gift bundle hailing your data privacy and personal freedom. The bundle includes all items featuring our mascot for choice and privacy, Lady Lock: the "My Body, My Data, My Choice" tote bag, a "Honey, I Encrypt Everything" sticker, and a heat-changing mug that reveals its secret slogan when hot.

Enamel lapel pin depicting Bigfoot carrying a sign that says "Privacy is a 'Human' Right!"

Explore the mysteries of the web with an iconic Bigfoot de la Sasquatch lapel pin: privacy is a "human" right! Continue the journey with campfire tales from The Encryptids, the rarely-seen creatures who’ve become digital rights legends. This sparkling cloisonne pin measures 1.5 inches tall and features a high-quality spring backing.

Two people wearing green hooded EFF sweatshirts with a gold "Motherboard" design on the back.

Find all these items, plus t-shirts, hoodies, beanies, and more at the EFF Online Shop. And as always, you can donate to EFF and give the gift of membership to the digital rights defender or newbie in your life.

Shop Now

Support Digital Rights with Every Purchase

Are you hoping for delivery by December 25 in the continental U.S.? Please place your order by Thursday, December 10. Email us with any questions.

EFF to Arizona Federal Court: Protect Public School Students from Surveillance and Punishment for Off-Campus Speech

26 November 2025 at 17:33

Legal Intern Alexandra Rhodes contributed to this blog post. 

EFF filed an amicus brief urging the Arizona District Court to protect public school students’ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is “on campus.” We argued that students need private digital spaces beyond their school’s reach to speak freely, without the specter of constant school surveillance and punishment.  

Surveillance Software Exposed a Bad Joke Made in the Privacy of a Student’s Home 

The case, Merrill v. Marana Unified School District, involves a Marana High School student who, while at home one morning before school started, asked his mother for advice about a bad grade he received on an English assignment. His mother said he should talk to his English teacher, so he opened his school-issued Google Chromebook and started drafting an email. The student then wrote a series of jokes in the draft email that he deleted each time. The last joke stated: “GANG GANG GIMME A BETTER GRADE OR I SHOOT UP DA SKOOL HOMIE,” which he narrated out loud to his mother in a silly voice before deleting the draft and closing his computer.  

Within the hour, the student’s mother received a phone call from the school principal, who said that Gaggle surveillance software had flagged a threat from her son and had sent along the screenshot of the draft email. The student’s mother attempted to explain the situation and reassure the principal that there was no threat. Nevertheless, despite her reassurances and the student’s lack of disciplinary record or history of violence, the student was ultimately suspended over the draft email—even though he was physically off campus at the time, before school hours, and had never sent the email.  

After the student’s suspension was unsuccessfully challenged, the family sued the school district alleging infringement of the student’s right to free speech under the First Amendment and violation of the student’s right to due process under the Fourteenth Amendment. 

Public School Students Have Greater First Amendment Protection for Off-Campus Speech 

The U.S. Supreme Court has addressed the First Amendment rights of public school students in a handful of cases. 

Most notably, in Tinker v. Des Moines Independent Community School District (1969), the Court held that students may not be punished for their on-campus speech unless the speech “materially and substantially” disrupted the school day or invaded the rights of others. 

Decades later, in Mahanoy Area School District v. B.L. by and through Levy (2021), in which EFF filed a brief, the Court further held that schools have less leeway to regulate student speech when that speech occurs off campus. Importantly, the Court stated that schools should have a limited ability to punish off-campus speech because “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” 

The Ninth Circuit has further held that off-campus speech is only punishable if it bears a “sufficient nexus” to the school and poses a credible threat of violence. 

In this case, therefore, the extent of the school district’s authority to regulate student speech is tied to whether the high schooler was on or off campus at the time of the speech. The student here was at home and thus physically off campus when he wrote the joke in question; he wrote the draft before school hours; and the joke was not emailed to anyone on campus or anyone associated with the campus.  

Yet the school district is arguing that his use of a school-issued Google Chromebook and Google Workspace for Education account (including the email account) made his speech—and makes all student speech—automatically “on campus” for purposes of justifying punishment under the First Amendment.  

Schools Provide Students with Valuable Digital Tools—But Also Subject Them to Surveillance 

EFF supports the plaintiffs’ argument that the student’s speech was “off campus,” did not bear a sufficient nexus to the school, and was not a credible threat. In our amicus brief, we urged the trial court at minimum to reject a rule that the use of a school-issued device or cloud account always makes a student’s speech “on campus.”   

Our amicus brief supports the plaintiffs’ First Amendment arguments through the lens of surveillance, emphasizing that digital speech and digital privacy are inextricably linked.  

As we explained, Marana Unified School District, like many schools and districts across the country, offers students free Google Chromebooks and requires them to have an online Google Account to access the various cloud apps in Google Workspace for Education, including the Gmail app.  

Marana Unified School District also uses three surveillance technologies that are integrated into Chromebooks and Google Workspace for Education: Gaggle, GoGuardian, and Securly. These surveillance technologies collectively can monitor virtually everything students do on their laptops and online, from the emails and documents they write (or even just draft) to the websites they visit.  

School Digital Surveillance Chills Student Speech and Further Harms Students 

In our amicus brief, we made four main arguments against a blanket rule that categorizes any use of a school-issued device or cloud account as “on campus,” even if the student is geographically off campus or outside of school hours.  

First, we pointed out that such a rule will result in students having no reprieve from school authority, which runs counter to the Supreme Court’s admonition in Mahanoy not to regulate “all the speech a student utters during the full 24-hour day.” There must be some place that is “off campus” for public school students even when using digital tools provided by schools, otherwise schools will reach too far into students’ lives.  

Second, we urged the court to reject such an “on campus” rule to mitigate the chilling effect of digital surveillance on students’ freedom of speech—that is, the risk that students will self-censor and choose not to express themselves in certain ways or access certain information that may be disfavored by school officials. If students know that no matter where they are or what they are doing with their Chromebooks and Google Accounts, the school is watching and the school has greater legal authority to punish them because they are always “on campus,” students will undoubtedly curb their speech. 

Third, we argued that such an “on campus” rule will exacerbate existing inequities in public schools among students of different socio-economic backgrounds. It would distinctly disadvantage lower-income students who are more likely to rely on school-issued devices because their families cannot afford a personal laptop or tablet. This creates a “pay for privacy” scheme: lower-income students are subject to greater school-directed surveillance and related discipline for digital speech, while wealthier students can limit surveillance by using personal laptops and email accounts, enabling them to have more robust free speech protections. 

Fourth, such an “on campus” rule will incentivize public schools to continue eroding student privacy by subjecting them to near constant digital surveillance. The student surveillance technologies schools use are notoriously privacy invasive and inaccurate, causing various harms to students—including unnecessary investigations and discipline, disclosure of sensitive information, and frustrated learning. 

We urge the Arizona District Court to protect public school students’ freedom of speech and privacy by rejecting this approach to school-managed technology. As we said in our brief, students, especially high schoolers, need some sphere of digital autonomy, free of surveillance, judgment, and punishment, as much as anyone else—to express themselves, to develop their identities, to learn and explore, to be silly or crude, and even to make mistakes.  

Privacy is For the Children (Too)

26 November 2025 at 02:44

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the third in a short series that explains digital ID and the pending use case of age verification. Here, we cover alternative frameworks on age controls, updates on parental controls, and the importance of digital privacy in an increasingly hostile climate politically. You can read the first two posts here, and here.

Observable harms of age verification legislation in the UK, US, and elsewhere:

As we witness the effects of the Online Safety Act in the UK and over 25 state age verification laws in the U.S., it has become even more apparent that mandatory age verification is more of a detriment than a benefit to the public. Here’s what we’re seeing:

It’s obvious: age verification will not keep children safe online. Rather, it is a large proverbial hammer that nails everyone—adults and young people alike—into restrictive parameters of what the government deems appropriate content. That reality is more obvious and tangible now that we’ve seen age-restrictive regulations roll out in various states and countries. But that doesn’t have to be the future if we turn away from age-gating the web.

Keeping kids safe online (or anywhere IRL, let’s not forget) is a complex social issue that cannot be resolved with technology alone.

The legislators responsible for online age verification bills must confront that they are currently addressing complex social issues with a problematic array of technology. Most of policymakers’ concerns about minors' engagement with the internet can be sorted into one of three categories:

  • Content risks: The negative implications from exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
  • Contact risks: The potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.

Parental controls—which already exist!—can help.

These three categories of possible risks will not be eliminated by mandatory age verification—or any form of techno-solutionism, for that matter. Mandatory age checks will instead block access to vital online communities and resources for those people—including young people—who need them the most. It’s an ineffective and disproportionate tool to holistically address young people’s online safety. 

However, these risks can be partially addressed with better-utilized and better-designed parental controls and family accounts. Existing parental controls are woefully underutilized, according to one survey that collected answers from 1,000 parents. Adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. Making parental controls more flexible and accessible, so parents better understand the tools and how to use them, could increase adoption and address content risk more effectively than a broad government censorship mandate.  

Recently, Android made its parental controls easier to set up. It rolled out features that directly address content risk by assisting parents who wish to block specific apps and filter out mature content from Google Chrome and Google Search. Apple also updated its parental control settings this past summer, instituting new ways for parents to manage child accounts and giving app developers access to a Declared Age Range API, through which parents can declare an age range and apps can respond to the declared range established in a child account, without handing over a birthdate. This gives parents some flexibility, with age-range information beyond just 13+. A diverse range of tools and flexible settings provide the best options for families and empower parents and guardians to decide and tailor what online safety means for their own children—at any age, maturity level, or type of individual risk.
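
To make the design difference concrete, here is a minimal, hypothetical sketch of how an app might consume an age-range signal instead of a birthdate. The names below (AgeRangeProvider, requestAgeRange, AgeRangeResponse) are illustrative placeholders, not Apple's actual API surface; the point is simply that the app only ever sees a coarse age bracket or a refusal, never an exact age.

```swift
import Foundation

// Hypothetical age-range signal: the app learns only a coarse bracket,
// never a birthdate or an exact age.
enum AgeRangeResponse {
    case shared(lowerBound: Int, upperBound: Int?)   // e.g. 13-15, or 18 and up
    case declined                                     // the parent or user chose not to share
}

// Illustrative provider protocol standing in for a platform-level age-range API.
protocol AgeRangeProvider {
    func requestAgeRange(gates: [Int]) async -> AgeRangeResponse
}

// Example app logic: tailor the experience to the declared bracket, and fall
// back to the most restrictive defaults when nothing is shared.
func configureExperience(using provider: AgeRangeProvider) async {
    let response = await provider.requestAgeRange(gates: [13, 16, 18])
    switch response {
    case .shared(let lower, _) where lower >= 18:
        enableFullExperience()
    case .shared(let lower, _) where lower >= 13:
        enableTeenExperience()            // e.g. stricter content filters by default
    default:
        enableRestrictiveDefaults()       // covers .declined and under-13 brackets
    }
}

func enableFullExperience() { print("Full experience enabled") }
func enableTeenExperience() { print("Teen experience enabled") }
func enableRestrictiveDefaults() { print("Restrictive defaults enabled") }
```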

Privacy laws can also help minors online.

Parental controls are useful in the hands of responsible guardians. But what about children who are neglected or abused by those in charge of them? Age verification laws cannot solve this problem; these laws simply share possible abuse of power with the state. To address social issues, we need more efforts directed at the family and community structures around young people, and initiatives that can mitigate the risk factors of abuse instead of resorting to government control over speech.

While age verification is not the answer, those seeking legislative solutions can instead focus their attention on privacy laws—which are more than capable of assisting minors online, no matter the state of their at-home care. Comprehensive data privacy, which EFF has long advocated for, is perhaps the most obvious way to keep the data of young people safe online. Data brokers gather a vast amount of data and assemble new profiles of information as a young person uses the internet. These data sets also contribute to surveillance and teach minors that it is normal to be tracked as they use the web. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do and be able to sell it to whomever will buy it from them. For example, many age-checking tools use data brokers to establish “age estimation” on emails used to sign up for an online service, further incentivizing a vicious cycle of data collection and retention. Ultimately, privacy-encroaching companies are rewarded for the years of mishandling our data with lucrative government contracts.

Over time, these systems create far more risk for young people, online and offline, in terms of their privacy from online surveillance, particularly in authoritarian political climates. Age verification proponents often acknowledge that there are privacy risks, and dismiss the consequences by claiming the trade-off will “protect children.” These systems don’t foster safer online practices for young people; they encourage increasingly invasive ways for governments to define who is and isn’t free to roam online. If we don’t re-establish ways to maintain online anonymity today, our children’s internet could become unrecognizable and unusable for not only them, but many adults as well. 

Actions you can take today to protect young people online:

  • Use existing parental controls to decide for yourself what your kid should and shouldn’t see, who they should engage with, etc.
  • Discuss the importance of online privacy and safety with your kids and community.
  • Provide spaces and resources for young people to flexibly communicate with their schools, guardians, and community.
  • Support comprehensive privacy legislation for all.
  • Support legislators’ efforts to regulate the out-of-control data broker industry by banning behavioral ads.

Join EFF in opposing mandatory age verification and age gating laws—help us keep your kids safe and protect the future of the internet, privacy, and anonymity.

✋ Get A Warrant | EFFector 37.17

26 November 2025 at 13:16

Even with the holidays coming up, the digital rights news doesn't stop. Thankfully, EFF is here to keep you up to date with our EFFector newsletter!

In our latest issue, we’re explaining why politicians’ latest attempts to ban VPNs are a terrible idea; asking supporters to file public comments opposing new rules that would make bad patents untouchable; and sharing a privacy victory—Sacramento has been forced to end its dragnet surveillance of power meter data.

Prefer to listen in? Check out our audio companion, where EFF Surveillance Litigation Director Andrew Crocker explains our new lawsuit challenging the warrantless mass surveillance of drivers in San Jose. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.17 - ✋ GET A WARRANT

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Rights Organizations Demand Halt to Mobile Fortify, ICE's Handheld Face Recognition Program

26 November 2025 at 09:46

Mobile Fortify, the new app that Immigration and Customs Enforcement (ICE) uses to identify people with face recognition technology (FRT) during street encounters, is an affront to the rights and dignity of migrants and U.S. citizens alike. That's why a coalition of privacy, civil liberties, and civil rights organizations is demanding the Department of Homeland Security (DHS) shut down the use of Mobile Fortify, release the agency's privacy analyses of the app, and clarify the agency's policy on face recognition. 

As the organizations, including EFF, Asian Americans Advancing Justice and the Project on Government Oversight, write in a letter sent by EPIC:

ICE’s reckless field practices compound the harm done by its use of facial recognition. ICE does not allow people to opt-out of being scanned, and ICE agents apparently have the discretion to use a facial recognition match as a definitive determination of a person’s immigration status even in the face of contrary evidence.  Using face identification as a definitive determination of immigration status is immensely disturbing, and ICE’s cavalier use of facial recognition will undoubtedly lead to wrongful detentions, deportations, or worse.  Indeed, there is already at least one reported incident of ICE mistakenly determining a U.S. citizen “could be deported based on biometric confirmation of his identity.”

As if this dangerous use of nonconsensual face recognition isn't bad enough, Mobile Fortify also queries a wide variety of government databases. Already there have been reports that federal officers may be using this FRT to target protesters engaging in First Amendment-protected activities. Yet ICE concluded it did not need to conduct a new Privacy Impact Assessment, which is standard practice for proposed government technologies that collect people's data. 

While Mobile Fortify is the latest iteration of ICE’s mobile FRT, EFF has been tracking this type of technology for more than a decade. In 2013, we identified how a San Diego agency had distributed face recognition-equipped phones to law enforcement agencies across the region, including federal immigration officers. In 2019, EFF helped pass a law temporarily banning the collection of biometric data with mobile devices, resulting in the program's cessation.

We fought against handheld FRT then, and we will fight it again today. 

Speaking Freely: Laura Vidal

25 November 2025 at 18:57

Interviewer: Jillian York

Laura Vidal is a Venezuelan researcher and writer focused on digital rights, community resilience, and the informal ways people learn and resist under authoritarian pressure. She holds a Doctorate in Education Sciences and intercultural communication, and her work explores how narratives, digital platforms, and transnational communities shape strategies of care, resistance, and belonging, particularly in Latin America and within the Venezuelan diaspora. She has investigated online censorship, disinformation, and digital literacy and is currently observing how regional and diasporic actors build third spaces online to defend civic space across borders. Her writing has appeared in Global Voices, IFEX, EFF, APC and other platforms that amplify underrepresented voices in tech and human rights.

Jillian York: Hi Laura, first tell me who you are. 

Laura Vidal: I am an independent researcher interested in digital security and how people learn about digital security. I'm also a consultant and a communications person for IFEX and Digital Action. 

JY: Awesome. And what does free speech mean to you? 

LV: It means a responsibility. Free speech is a space that we all hold. It is not about saying what you want when you want, but understanding that it is a right that you have and others have. And that also means keeping the space as safe as possible and as free as possible for everybody to express themselves as much as possible safely. 

JY: We've known each other for nearly 20 years at this point. And like me, you have this varied background. You're a writer, you've shifted toward digital rights, you pursued a PhD. Tell me more about the path that led you to this work and why you do it. 

LV: Okay, so as you know well, we both started getting into these issues with Global Voices. I started at Global Voices as a translator and then as an author, then as an editor, and then as a community organizer. Actually, community organizer before editor, but anyways, because I started caring a lot about the representation of Latin America in general and Venezuela in particular. When I started with Global Voices, I saw that the political crisis and the narratives around the crisis were really prevalent. And it would bother me that there would be a portrait that is so simplistic. And at that time, we were monitoring the blogosphere, and the blogosphere was a reflection of this very interesting place where so many things happened. 

And so from there, I started my studies and I pursued a PhD in education sciences because I was very interested in observing how communities like Global Voices could be this field in which there was potential for intercultural exchange and learning about other cultures. At the end, of course, things were a lot more complicated than that. There are power imbalances and backgrounds that were a lot more complex, and there was this potential, but not in the way I thought it would be. Once my time in Global Voices was up and then I started pursuing research, I was very, very interested in moving from academia to research among communities and digital rights organizations and other non profits. I started doing consultancies with The Engine Room, with Tactical Tech, Internews, Mozilla and with other organizations in different projects. I've been able to work on issues that have to do with freedom of expression, with digital security and how communities are formed around digital security. And my big, big interest is how is it that we can think about security and digital rights as something that is ours, that is not something that belongs only to the super techies or the people that are super experts and that know very well this, because this is a world that can be a bit intimidating for some. It was definitely intimidating for me. So I really wanted to study and to follow up on the ways that this becomes more accessible and it becomes part of, becomes a good element to digital literacy for everyone. 

JY: That really resonates with me. I hadn't heard you articulate it that way before, but I remember when you were starting this path. I think we had that meeting in Berlin. Do you remember? 

LV: Yeah. In like 2017. Many meetings in Berlin, and we were talking about so many things. 

JY: Yeah, and I just, I remember like, because we've seen each other plenty of times over the past few years, but not as much as we used to….It's interesting, right, though, because we've both been in this space for so long. And we've seen it change, we've seen it grow. You know, I don't want to talk about Global Voices too much, but that was our entry point, right?

LV: It was. 

JY: And so that community—what did it mean for you coming from Venezuela? For me, coming from the US, we’ve both come from our home countries and moved to other countries…we have similar but different life paths. I guess I just see myself in you a little bit.

LV: That’s flattering to me. 

JY: I admire you so much. I've known you for 17 years.

LV: It's definitely mutual. 

JY: Thank you. But a lot of that comes from privilege, I recognize that.

LV: But it's good that you do, but it's also good that you use privilege for good things. 

JY: That's the thing: If you have privilege, you have to use it. And that's what I was raised with. My mother works for a non-profit organization. And so the idea of giving back has always been part of me. 

LV: I get it. And I also think that we are all part of a bigger chain. And it's very easy to get distracted by that. I definitely get distracted by those values, like the idea of being validated by a community. Coming from academia, that's definitely the case, that you really need to shine to be able to think that you're doing some work. And then also coming into the maturity of thinking, we're part of a chain. We're doing something bigger. Sometimes we are kind of going all places and we're making mistakes as a whole, but we're all part of a bigger system. And if you're part of the chain, if you have certain privileges and you can push forward the rest of the chain, that's what it is for. 

JY: Tell me about an experience that shaped your views on free expression, like a personal experience. 

LV: I'm thinking of the experience of writing about Venezuela while being abroad. That has been a very complicated, complex experience because I left Venezuela in 2008. 

JY: That's the year we met. 

LV: Exactly. I was in Budapest [for the Global Voices Summit] in 2008. And then I left Venezuela a few months later. So this experience about freedom of expression…when I left, it wasn't yet the time of the big exodus. This exodus translates today into a huge Venezuelan community all around the world that had to leave, not because they wanted to, but because they had basically no choice. It was very complicated to talk about the crisis because immediately you will get hit back. I will never forget that even in that summit that we keep discussing, the Budapest Summit of Global Voices, whenever I would talk about Venezuela, people would shut me down—people that were not Venezuelans. It was the big beginning of what we call the “Venezuelansplaining”. Because it was this political movement that was very much towards the left, that it was very much non-aligned…

JY: You had that in common with Syria. 

LV: Yeah. And so at the same time, they [the Venezuelan government] were so good at selling themselves as this progressive, non-aligned, global majority movement, feminist, you see…to me, it was shocking to see a lot of feminist groups aligning with the government, that it was a government led by a big, strong man, with a lot of discourse and very little policy change behind it. However, it was the ones that for the first time were talking about these issues from the side of the state. So from the outside, it really looked like this big government that was for the people and all the narratives of the 1960s, of the American interventions in the South that were definitely a reality, but in the case of Venezuela in the 2010s and now it is a lot more complex. And so whenever I would talk about the situation in Venezuela, it was very easy to shut me down. At first, I literally had somebody telling me, somebody who's not from Venezuela, telling me “You don't know what you're talking about. I cannot hear what you say about Venezuela because you're a privileged person.”

And I could totally take the idea of privilege, yes, but I did grow up in that country. He didn’t know it, and I did, and he definitely didn’t know anything about me. It was very easy to be shut down and very easy to self-censor because after that experience, plus writing about it or having opinions about it and constantly being told “you're not there, you cannot speak,” I just started not talking about it. And I think my way of responding to that was being able to facilitate conversations about that. 

And so I was very happy to become the editor of the Americas of Global Voices back then, because if I couldn't write about it because of these reasons—which I guess I understand—I will push others to talk about it. And not only about Venezuela, but Latin America, there are so many narratives that are very reductive, really simplistic about the region that I really wanted to really push back against. So that's why I see freedom of expression as this really complex thing, this really, really complicated thing. And I guess that's why I also see it not only as a right too, but also as a responsibility. Because the space that we have today is so messy and polluted with so many things that you can claim freedom of expression just to say anything, and your goal is not to express yourself, but to harm other people, vulnerable people in particular. 

JY: What do you think is the ideal online environment for free expression? What are the boundaries or guardrails that should be put in place? What guides you? 

LV: I'm not even sure that something guides me completely. I guess that I'm guided by the organizations that observe and defend the space, because they're constantly monitoring, they're constantly explaining, they're talking to people, they have an ear on the ground. It is impossible to think of a space that can be structured and have certain codes. We are a really complicated species. We had these platforms that we started seeing as this hope for people to connect, and then they ended up being used to harm. 

I guess that's also why the conversations about regulations are always so complicated, because whenever we push for legislation or for different kinds of regulations, those regulations then take a life of their own and everybody's able to weaponize them or manipulate them. So yes, there are definitely guidelines and regulations, but I think it's a pendular movement. You know, it's recognizing that the space in which people communicate is always going to be chaotic because everybody will want to have their say. But at the same time, it's important to keep observing and having guidelines. I will go with you, having UN guidelines that translate from organizations that observe the space. I hate to answer saying that I have no guidelines, but at the same time, I guess it's also the idea of the acceptance that it's a chaotic space. And for it to be healthy, we need to accept that it's going to be. It cannot be very structured. It cannot function if it's too structured because there will not be free expression. 

JY: I get that. So ultimately then, where do you stand on regulation? 

LV: I think it's necessary; at some point we need rules to go by and we need some rules of the game. But it cannot be blindly, and we cannot think that regulations are going to stay the same over time. Regulations need to be discussed. They need to evolve. They need to be studied. Once they're in place, you observe how they're used and then how they can be adjusted. It's like they need to be as alive as the spaces of expression are. 

JY: Yes. What countries do you think or entities do you think are doing the best job of this right now? I feel that the EU is maybe trying its hardest, but it's not necessarily enough. 

LV: And I think it's also a little bit dangerous to think of whatever the European Union does as an example. There have been so many cases of copy-paste legislation that has nothing to do with the context. When we talk about privacy, for example, the way that Europe, the way that France and Germany understand privacy, it's not the way that Colombia, for example, understands privacy. It's very different. Culturally, it's different. You can see that people understand legislation, thinking about privacy very differently. And so this kind of way, which I think is like, I will even dare to say is a bit colonial, you know? Like, we set the example, we put the rules and you should follow suit. And why? I like the effort of the European Union as an entity. The fact that so many countries that have been at war for so long managed to create a community, I'm impressed. The jury's still out on how that's working, but I'm still impressed. 

JY: Do you think that because—maybe because of Global Voices or our experience of moving countries, or our friendships—having a global worldview and seeing all of these different regulations and different failures and different successes makes it more complex for us than, say, somebody who's working only on policy in the EU or in the US or in the UK? Do you think it's harder for us then to reconcile these ideas, because we see this broader picture?

LV: That's a really good point. I'm not sure. I do believe very strongly in the idea that we should be in contact. As with everything that has to do with freedom of expression, initiatives, and the fight for spaces and to protect journalists and to regulate platforms, we should be looking at each other's notes. Absolutely. Is there a way to look at it globally? I don't know. I don't think so. I think that I was very much a believer of the idea of a global world where we're all in contact and the whole thing of the global village. 

But then when you start exchanging and when you see how things play out—whenever we think about “globalities”—there's always one overpowering the rest. And that's a really difficult balance to get. Nothing will ever be [truly] global. It will not. We're still communicating in English, we're still thinking about regulations, following certain values. I'm not saying that's good or bad. We do need to create connections. I wouldn't have been able to make friendships and beautiful, beautiful relations that taught me a lot about freedom of expression and digital security had I not spoken this language, because I don't speak Arabic, and these Egyptian friends [that I learned from early on] don't speak Spanish. So those connections are important. They're very important. But the idea of a globality where everybody is the same…I see that as very difficult. And I think it goes back to this idea that we could have perfect regulation or perfect structures—like, if we had these perfect structures, everything would be fine. And I think that we're learning very painfully that is just not possible. 

Everything that we will come up with, every space that we will open, will be occupied by many other people's powers and interests. So I guess that the first step could be to recognize that there's this uneasy relation of things that cannot be global, that cannot be completely horizontal, that doesn't obey rules, it doesn't obey structures…to see what it is that we're going to do. Because so far, I believe that there's been so many efforts towards equalizing spaces. I have been thinking about this a lot. We tend to think so much about solutions and ways in which we all connect and everything. And at the end, it ends up emptying those words of their meaning, because we're reproducing imbalances, we reproduce power relations. So, I don't know how to go back to the question, because I don't think that there's an ideal space. If there was an ideal space, I don't think that we'd be human, you know? I think that part of what will make it realistic is that it moves along. So I guess the ideal place is, it will be one that is relatively safe for most, and especially that it will have special attention to protect vulnerable groups. 

If I could dream of a space with regulations and structures that will help, I think that my priority would be structures that at least favor the safety of the most vulnerable, and then the others will find their ground. I hope this makes sense. 

JY: No, it does. It does. I mean, it might not make sense to someone who is purely working on policy, but it makes sense to me because I feel the same way. 

LV: Yeah, I think a policy person will already be like looking away, you know, like really hoping to get away from me as soon as possible because this woman is just rambling. But they have this really tough job. They need to put straight lines where there are only curves. 

JY: Going back for a moment to something you mentioned, learning from people elsewhere in the world. That Global Voices meeting changed my life.

LV: It changed my life too. I was 26.

JY: I was 26 too! I’d been living in Morocco until just recently, and I remember meeting all of these people from other parts of the region, and beginning to understand through meeting people how different Morocco was from Syria, or Egypt. How the region wasn’t a monolith.

LV: And that’s so important. These are the things I feel that we might know intellectually, but when you actually “taste” them, there are no words you can express when you realize the complexity of people that you didn’t think of as complex as you. That was the year I met Mohamed El Gohary. I will never forget that as critical as I was of the government of Venezuela back then, never in a million years would I have imagined that they would be like they are now. I used to work in a ministry, which means that I was very much in contact with people that were really big believers of [Chavismo’s] project, and I would listen to them being really passionate and see how people changed their lives because they had employment and many other things they lacked before: representation in government among them. All of those projects ended up being really short-term solutions, but they changed the perspective of a lot of people and a lot of people that believed so wholeheartedly in it. I remember that most of the Latin America team, we were very shaken by the presentations coming from Advox, seeing the blogs and the bloggers were in prison. I remember Gohary asking me “have you had any platforms blocked, or shutdowns, or have any newspapers been closed?” I said no, and he said “that’s coming.”

JY: I remember this. I feel like Tunisia and Egypt really served as examples to other countries of what states could do with the internet. And I think that people without a global view don’t recognize that as clearly.

LV: That's very true. And I think we still lack this global view. And in my opinion, we lack a global view that doesn't go through the United States or Europe. Most of the conveners and the people that put us in contact have been linked or rooted in Western powers. And connections were made, which is good. I would have never understood these issues of censorship had it not been for these Egyptian friends that were at Global Voices. That's very important. And ever since, I am convinced that you can grow through people from backgrounds that are very different from yours, because you align on one particular thing. And so I've always been really interested in South, quote unquote, “South-South” relationships, the vision Latin America has of Africa. And I really dislike saying Africa as if it was one thing. 

But the vision that we need to have is...I love, there's a writer that I love, Ryszard Kapuściński, and he wrote a book about Africa. He's a Polish journalist and he wrote about the movements of independence because he was the only journalist that the newspaper had for internationals. He would go to every place around, and it was the 60s. So there were like independence movements all around. And at the end, he wrote this big summary of his experiences in “Africa.” And the first page says, other than for the geographic name that we put to it, Africa doesn't exist. This is a whole universe. This is a whole world. And so the vision, this reductionist vision that a lot of us in Latin America have come through these, you know, glasses that come from the West. So to me, when I see cases in which you have groups from Venezuela, collaborating with groups in Senegal because the shutdowns that happen in both countries rhyme, I am passionately interested in these connections, because these are connections of people that don't think are similar, but they're going through similar, very similar things, and they realize how similar they are in the process. That was my feeling with [other friends from Egypt] and Gohary. The conversations that we had, the exchanges that we had, let's say at the center of our table, our excuse was this idea of freedom on the internet and how digital security will work. But that was the way that we could dialogue. And to me, it was one proof of how you grow through the experiences of people that you mistakenly think are not like you. 

JY: Yes. Yeah, no, exactly, And that was really, that was my experience too, because in the U.S. at the time, obviously there were no restrictions on the internet, but I moved to Morocco and immediately on my first day there, I had a LiveJournal. I think I've written about this many times. I had LiveJournal, which was my blogging platform at the time, and I went to log in and the site was blocked. And LiveJournal was blocked because there had been a lot of blogs about the Western Sahara, which was a taboo topic at the time, still is in many ways. And so I had to, I had to make a decision. Do I figure out a circumvention tool? I had an American friend who was emailing me about how to get around this, or maybe we had a phone call. And so I ended up, I ended up becoming a public blogger because of censorship. 

LV: That's so interesting because it is the reaction. Somebody says, I like, I didn't want to talk, but now that you don't want me to, now I will. 

JY:  Yeah, now I will. And I never crossed the red lines while I was living there because I didn't want to get in trouble. And I wrote about things carefully. But that experience connected me to people. That's how I found Global Voices. 

I want to ask you another question. When we met in Portugal in September, we discussed the idea that what’s happening in the U.S. has made it easier for people there to understand repression in other countries…that some Americans are now more able to see creeping authoritarianism or fascism elsewhere because they’re experiencing it themselves. What are your thoughts on that?

LV: So what pops in my mind is this, because I always find this fantasy very interesting that things cannot happen in certain countries, even if they've already happened. There are a lot of ideas of, we were talking about having the European Union as an example. And yes, the United States were very much into, you know, this idea of freedom of the press, freedom of expression. But there was also this idea, this narrative that these kinds of things will never happen in a place like the United States, which I think is a very dangerous idea, because it gets you to not pay attention. And there are so many ways in which expression can be limited, manipulated, weaponized, and it was a long time coming, that there were a lot of pushes to censor books. When you start seeing that, you push for libraries to take certain books out, you really start seeing like the winds blowing in that direction. And so now that it has become probably more evident, with the case of the Jimmy Kimmel show and the ways that certain media have been using their time to really misinform, you really start seeing parallels with other parts of the continent. I think it's very important, this idea that we look at each other. I will always defend the idea that we need to be constantly in dialogue and not necessarily look for examples.

Let’s say from Mexico downward, this idea of “look at this thing that people are doing in the States”—I don’t think that has ever served us, and it won’t serve us now. It is very important that we remain in dialogue. Because one thing that I found beautiful and fascinating that is happening among Venezuelan journalists is that you will see media that would be competing with one another in other circumstances now working together. They wouldn't survive otherwise. And also countries in the region that wouldn't look at each other before, they are working together as well. So you have Venezuelan journalists working with Nicaraguan journalists and also human rights defenders really looking at each other's cases because authoritarian regimes look at each other. We were talking about Egypt as an example before. And we keep seeing this but we're not paying enough attention. When we see events, for example, how they are regional, and that is really important. We need to talk amongst ourselves. We understand the realities of our regions, but it is so important that there's always somebody invited, somebody looking at other regions, how is it playing out, what are people doing. Latin America is a really great place to look at when thinking about counter-power and looking for examples of different ways of resistance. And unfortunately, also where things can go. How are technologies being used to censor? 

In the case of Venezuela, you had newspapers being progressively harassed. Then they wouldn't find paper. Then they had to close down. So they pushed them online where they're blocking them and harassing them. So it is a slow movement. It's very important to understand that this can happen anywhere. Everyone is at risk of having an authoritarian regime. This idea, these regressive ideas about rights, they are happening globally and they're getting a lot of traction. So the fact that we need to be in contact is crucial. It is crucial to really go beyond the narratives that we have of other countries and other cultures and to think that is particular to that place because of this and that. I think if there's a moment in which we can understand all of us as a whole group, as a region, like from the whole of the Americas, it is now. 

JY: That's such a good point. I agree. And I think it's important both to look at it on that semi-local scale and then also scale it globally, but understand like the Americas in particular, yeah, have so much in common. 

LV: No. I really believe that if there was something that I will be pushing forward, it's this idea that, first of all, these borders that are imagined, they're artificial, we created it to protect things that we have accumulated. And we, like the whole of the continent, have this history of people that came to occupy other people's lands. That's their origin story. All of the continent. Yeah. So maybe trying to understand that in terms of resistance and in terms of communities, we should be aware of that and really think about communities of counter power, resistance and fight for human rights should be, I guess they should have its own borders, you know, like not American groups or Nicaraguan groups or Colombian groups, like really create some sort of I guess, way to understand that these national borders are, they're not serving us. We really need to collaborate in ways that go really beyond that. Fully understanding the backgrounds and the differences and everything, but really connecting in things in ways that make sense. I don't think that one human rights defense community can go against its own state. They are outnumbered. The power imbalance is too big. But these groups in combination, looking at each other and learning from each other, being in contact, collaborating, it makes, well, you know, it's just simple math. It will make for more of us working together. 

JY: Absolutely. At EFF, we have a team that works on issues in Latin America, and some are based in Latin America. And it’s been interesting, because I came to EFF having worked from a Middle East perspective, and my colleague Katitza Rodriguez, who started just a year or two before me, came from a Latin American perspective, and apart from our EU work, those remain the two regional strongholds of EFF’s international work. And we’ve bridged that. I remember a couple of years ago having calls between Colombians and Palestinians because they were experiencing the same censorship issues online.

LV: That’s what I dream of.

JY: That's the sort of bridging work that you and I kind of came up in. And I think that like that experience for me, and similarly for Katitza, and then bringing that to EFF. And so we had these ties. And I think of everything you’ve said, one of the things that struck me the most is that this is a generational thing. We’re all Gen X, or early Millennials, or whatever you want to call it. I know it differs globally, but we all grew up under similar circumstances in terms of the information age, and I think that shaped our worldview in a way that—if we’re open to it—our generation thinks uniquely from the ones before and after us, because we lived a little bit in both worlds. I think it’s a really unique experience.

LV: I feel really excited to hear you say this because at times I feel that I'm thinking about this and it looks like it sounds like very weird ideas, but we are definitely part of this generation that lived the transition to online worlds and we are living in these—I love to call them digital third spaces. We're constantly negotiating our identities. We are creating new ones. We're creating homes that are “in the air.” Because yes, you are in Berlin now and I'm in France and other friends are in Venezuela, others are in Colombia and so on. But we are in this kind of commonplace, in this space where we meet that is neither nor. And it is a place that has let me understand borders very differently and understand identity very differently. And I think that is the door that we have to go through to understand how community and collaboration cross regionally and beyond borders. It's not only necessary, it's more realistic. 

JY: Absolutely, I agree. Let me ask you the last question: Who's your free expression hero? Or somebody who's inspired you. Somebody who really changed your world. 

LV: I am so proud of the Venezuelan community. So proud. They're all people that are inspiring, intelligent, dynamic. And if I had to pick one with a lot of pain, I would say Valentina Aguana. She works with Connexion Segura y Libre. She's like twenty-something. I love to see this person in her twenties. And very often, especially now that you see younger generations going to places that we don't understand. I love that she's a young person in this space, and I love how well she understands a lot of these things. I love very much how she integrates this idea of having the right to do things. That was very hard for me when I was growing up. It was very hard when I was her age to understand I had the right to do things, that I had the right to express myself. Not only does she understand that; her work is devoted to ensuring that other people have that right as well, and that they have the space to exercise it safely. 

JY: I love that. Thank you so much Laura.

Celebrating Books on Building a Better Future

21 November 2025 at 16:18

Update: Cindy's conversation with Bruce Schneier and Nathan E. Sanders on their book "Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship" has been moved to January 24, 2026 at 11 AM PT. More info below!

One of our favorite—and most important—things that we do at EFF is to work toward a better future. It can be easy to get caught up in all the crazy things that are happening in the moment, especially with the fires that need to be put out. But it’s just as important to keep our eyes on new technologies, how they are impacting digital rights, and how we can ensure that our rights and freedoms expand over time.

That's why EFF is excited to spotlight two free book events this December that look ahead, providing insight on how to build this better future. Featuring EFF’s Executive Director Cindy Cohn, we’ll be exploring how stories, technology, and policy shape the world around us. Here’s how you can join us this year and learn more about next year’s events:

Exploring Progressive Social Change at The Booksmith - We Will Rise Again 

December 2 | 7:00 PM Pacific Time | The Booksmith, San Francisco 

We’re celebrating the release of We Will Rise Again, a new anthology of speculative stories from writers across the world, including Cindy Cohn, Annalee Newitz, Charlie Jane Anders, Reo Eveleth, Andrea Dehlendorf, and Vida Jame. This collection explores topics ranging from disability justice and environmental activism to community care and collective worldbuilding, offering tools for organizing, ways of interrogating the status quo, and a blueprint for building a better world.

Join Cindy Cohn and her fellow panelists at this event to learn how speculative fiction helps us think critically about technology, civil liberties, and the kind of world we want to create. We hope to see some familiar faces there! 

RSVP AND LEARN MORE

AI, Politics, and the Future of Democracy - Rewiring Democracy (NEW DATE!)

January 24, 2026 | 11:00 AM Pacific Time | Virtual

We’re also geared up to join an online discussion with EFF Board Member Bruce Schneier and Nathan E. Sanders about their new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. In this time when AI is taking up every conversation—from generative AI tools to algorithmic decision-making in government—this book cuts through the hype to examine the ways that the technology is transforming every aspect of democracy, for good and bad. 

Cindy Cohn will join Schneier and Sanders for a forward-looking conversation about what’s possible and what’s at stake as AI weaves itself into our governments, and about how to steer it in the right direction. We’ll see you online for this one! 

RSVP AND LEARN MORE

Announcing Cindy Cohn's New Book, Privacy's Defender

In March we’ll be kicking off the celebration for Cindy Cohn’s new book, Privacy’s Defender, chronicling her thirty-year battle to protect everyone’s right to digital privacy and offering insights into the ongoing fight for our civil liberties online. Stay tuned for more information about our first event at City Lights on Tuesday, March 10!

The celebration doesn’t stop there. Look out for more events celebrating Privacy’s Defender throughout the year, and we hope we’ll see you at one of them. Plus, you can learn more about the book and even preorder it today.

PREORDER PRIVACY'S DEFENDER

You can keep up to date on these book events and other EFF happenings when you sign up for our EFFector newsletter and check out our full event calendar.

Victory! Court Ends Dragnet Electricity Surveillance Program in Sacramento

21 November 2025 at 11:30

A California judge ordered the end of a dragnet law enforcement program that surveilled the electrical smart meter data of thousands of Sacramento residents.

The Sacramento County Superior Court ruled that the surveillance program run by the Sacramento Municipal Utility District (SMUD) and police violated a state privacy statute, which bars the disclosure of residents’ electrical usage data with narrow exceptions. For more than a decade, SMUD coordinated with the Sacramento Police Department and other law enforcement agencies to sift through the granular smart meter data of residents without suspicion to find evidence of cannabis growing.

EFF and its co-counsel represent three petitioners in the case: the Asian American Liberation Network, Khurshid Khoja, and Alfonso Nguyen. They argued that the program created a host of privacy harms—including criminalizing innocent people, creating menacing encounters with law enforcement, and disproportionately harming the Asian community.

The court ruled that the challenged surveillance program was not part of any traditional law enforcement investigation. Investigations happen when police try to solve particular crimes and identify particular suspects. The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation.

“[T]he process of making regular requests for all customer information in numerous city zip codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation,” the court ruled, finding that SMUD violated its “obligations of confidentiality” under a data privacy statute.

Granular electrical usage data can reveal intimate details inside the home—including when you go to sleep, when you take a shower, when you are away, and other personal habits and demographics.

The dragnet turned 650,000 SMUD customers into suspects.

In creating and running the dragnet surveillance program, according to the court, SMUD and police “developed a relationship beyond that of utility provider and law enforcement.” Multiple times a year, the police asked SMUD to search its entire database of 650,000 customers to identify people who used a large amount of monthly electricity and to analyze granular 1-hour electrical usage data to identify residents with certain electricity “consumption patterns.” SMUD passed on more than 33,000 tips about supposedly “high” usage households to police.
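To illustrate why granular meter data is so revealing, here is a rough sketch with made-up readings and thresholds. It does not reflect SMUD's or the police department's actual analysis; it only shows how little effort it takes to turn hourly usage into inferences about what is happening inside a home.

```python
# A rough illustration (made-up readings and thresholds), not SMUD's actual analysis:
# hourly kWh readings for one household over one day, indexed by hour 0-23.
hourly_kwh = [0.2, 0.2, 0.2, 0.2, 0.3, 0.4, 1.1, 1.8, 0.9, 0.3, 0.3, 0.3,
              0.3, 0.3, 0.3, 0.4, 0.6, 1.5, 2.1, 2.4, 1.9, 1.2, 0.6, 0.3]

BASELOAD = 0.35   # assumed "nobody active" threshold, in kWh per hour

def likely_asleep_or_away(readings, threshold=BASELOAD):
    """Return the hours where usage stays at baseload levels."""
    return [hour for hour, kwh in enumerate(readings) if kwh <= threshold]

def likely_home_and_active(readings, threshold=BASELOAD):
    """Return the hours with clearly elevated usage (cooking, showers, laundry)."""
    return [hour for hour, kwh in enumerate(readings) if kwh > 3 * threshold]

print("Low-activity hours:", likely_asleep_or_away(hourly_kwh))
print("High-activity hours:", likely_home_and_active(hourly_kwh))
```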

While this is a victory, the Court unfortunately dismissed an alternate claim that the program violated the California Constitution’s search and seizure clause. We disagree with the court’s reasoning, which misapprehends the crux of the problem: At the behest of law enforcement, SMUD searches granular smart meter data and provides insights to law enforcement based on that granular data.

Going forward, public utilities throughout California should understand that they cannot disclose customers’ electricity data to law enforcement without any “evidence to support a suspicion” that a particular crime occurred.

EFF, along with Monty Agarwal of the law firm Vallejo, Antolin, Agarwal, Kanter LLP, brought and argued the case on behalf of Petitioners.

How Cops Are Using Flock Safety's ALPR Network to Surveil Protesters and Activists

20 November 2025 at 18:58

It's no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car. 

Through an analysis of 10 months of nationwide searches on Flock Safety's servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock's national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate. 

Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout these agencies' jurisdictions, and the cameras photograph every car that passes, documenting the license plate, color, make, model, and other distinguishing characteristics. This data is paired with time and location, and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country. It is common for an agency to search thousands of networks nationwide even when it has no reason to believe a targeted vehicle left the region. 

Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between. 

The Tulsa Police Department in Oklahoma was one of the most consistent users of Flock Safety's ALPR system for investigating protests, logging at least 38 such searches. This included running searches that corresponded to a protest against deportation raids in February, a protest at Tulsa City Hall in support of pro-Palestinian activist Mahmoud Khalil in March, and the No Kings protest in June. During the most recent No Kings protests in mid-October, agencies such as the Lisle Police Department in Illinois, the Oro Valley Police Department in Arizona, and the Putnam County (Tenn.) Sheriff's Office all ran protest-related searches. 

While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a "reason" field in the Flock Safety system. Usually this is only a few words–or even just one.

In these cases, that word was often just “protest.” 

Crime does sometimes occur at protests, whether that's property damage, pick-pocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest. 

Search and Dissent 

2025 was an unprecedented year of street action. In June and again in October, thousands across the country mobilized under the banner of the “No Kings” movement—marches against government overreach, surveillance, and corporate power. By some estimates, the October demonstrations ranked among the largest single-day protests in U.S. history, filling the streets from Washington, D.C., to Portland, OR. 

EFF identified 19 agencies that logged dozens of searches associated with the No Kings protests in June and October 2025. In some cases the term "No Kings" was used explicitly, while in others the term "protest" was used and the searches coincided with the massive demonstrations.

Law Enforcement Agencies that Ran Searches Corresponding with "No Kings" Rallies

  • Anaheim Police Department, Calif.
  • Arizona Department of Public Safety
  • Beaumont Police Department, Texas
  • Charleston Police Department, SC
  • Flagler County Sheriff's Office, Fla.
  • Georgia State Patrol
  • Lisle Police Department, Ill.
  • Little Rock Police Department, Ark.
  • Marion Police Department, Ohio
  • Morristown Police Department, Tenn.
  • Oro Valley Police Department, Ariz.
  • Putnam County Sheriff's Office, Tenn.
  • Richmond Police Department, Va.
  • Riverside County Sheriff's Office, Calif.
  • Salinas Police Department, Calif.
  • San Bernardino County Sheriff's Office, Calif.
  • Spartanburg Police Department, SC
  • Tempe Police Department, Ariz.
  • Tulsa Police Department, Okla.
  • US Border Patrol

For example: 

  • In Washington state, the Spokane County Sheriff's Office listed "no kings" as the reason for three searches on June 15, 2025 [Note: date corrected]. The agency queried 95 camera networks, looking for vehicles matching the description of "work van," "bus" or "box truck." 
  • In Texas, the Beaumont Police Department ran six searches related to two vehicles on June 14, 2025, listing "KINGS DAY PROTEST" as the reason. The queries reached across 1,774 networks. 
  • In California, the San Bernardino County Sheriff's Office ran a single search for a vehicle across 711 networks, logging "no king" as the reason. 
  • In Arizona, the Tempe Police Department made three searches for "ATL No Kings Protest" on June 15, 2025 searching through 425 networks. "ATL" is police code for "attempt to locate." The agency appears to not have been looking for a particular plate, but for any red vehicle on the road during a certain time window.

But the No Kings protests weren't the only demonstrations drawing law enforcement's digital dragnet in 2025. 

For example:

  • In Nevada's state capital, the Carson City Sheriff's Office ran three searches that correspond to the February 50501 Protests against DOGE and the Trump administration. The agency searched for two vehicles across 178 networks with "protest" as the reason.
  • In Florida, the Seminole County Sheriff's Office logged "protest" for five searches that correspond to a local May Day rally.
  • In Alabama, the Homewood Police Department logged four searches in early July 2025 for three vehicles with "PROTEST CASE" and "PROTEST INV." in the reason field. The searches, which probed 1,308 networks, correspond to protests against the police shooting of Jabari Peoples.
  • In Texas, the Lubbock Police Department ran two searches for a Tennessee license plate on March 15 that correspond to a rally to highlight the mental health impact of immigration policies. The searches hit 5,966 networks, with the logged reason "protest veh."
  • In Michigan, the Grand Rapids Police Department ran five searches that corresponded with the Stand Up and Fight Back Rally in February. The searches hit roughly 650 networks, with the reason logged as "Protest."

Some agencies have adopted policies that prohibit using ALPRs for monitoring activities protected by the First Amendment. Yet many officers probed the nationwide network with terms like "protest" without articulating an actual crime under investigation.

In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”

Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest, not just those under suspicion.

Border Patrol's Expanding Reach 

As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities. 

USBP has made extensive use of Flock Safety's system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.” 

USBP also used the Flock Safety network to investigate a motorist who had “extended his middle finger” at Border Patrol vehicles that were transporting detainees. The motorist then allegedly drove in front of one of the vehicles and slowed down, forcing the Border Patrol vehicle to brake hard. An officer ran seven searches for his plate, citing "assault on agent" and "18 usc 111," the federal criminal statute for assaulting, resisting or impeding a federal officer. The individual was charged in federal court in early August. 

USBP had access to the Flock system during a trial period in the first half of 2025, but the company says it has since paused the agency's access to the system. However, Border Patrol and other federal immigration authorities have been able to access the system’s data through local agencies that have run searches on their behalf or even lent them logins.

Targeting Animal Rights Activists

Law enforcement's use of Flock's ALPR network to surveil protesters isn't limited to large-scale political demonstrations. Three agencies also used the system dozens of times to specifically target activists from Direct Action Everywhere (DxE), an animal-rights organization known for using civil disobedience tactics to expose conditions at factory farms.

Delaware State Police queried the Flock national network nine times in March 2025 related to DxE actions, logging reasons such as "DxE Protest Suspect Vehicle." DxE advocates told EFF that these searches correspond to an investigation the organization undertook of a Mountaire Farms facility. 

Additionally, the California Highway Patrol logged dozens of searches related to a "DXE Operation" throughout the day on May 27, 2025. The organization says this corresponds with an annual convening in California that typically ends in a direct action. Participants leave the event early in the morning, then drive across the state to a predetermined but previously undisclosed protest site. Also in May, the Merced County Sheriff's Office in California logged two searches related to "DXE activity." 

As an organization engaged in direct activism, DxE has experienced criminal prosecution for its activities, and so the organization told EFF they were not surprised to learn they are under scrutiny from law enforcement, particularly considering how industrial farmers have collected and distributed their own intelligence to police.

The targeting of DxE activists reveals how ALPR surveillance extends beyond conventional and large-scale political protests to target groups engaged in activism that challenges powerful industries. For animal-rights activists, the knowledge that their vehicles are being tracked through a national surveillance network undeniably creates a chilling effect on their ability to organize and demonstrate.

Fighting Back Against ALPR 

Two Flock Safety cameras on a pole

ALPR systems are designed to capture information on every vehicle that passes within view. That means they don't just capture data on "criminals" but on everyone, all the time, and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it. 

Our analysis only includes data where agencies explicitly mentioned protests or related terms in the "reason" field when documenting their search. It's likely that scores more were conducted under less obvious pretexts and search reasons. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like "investigation," "suspect," and "query" in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser–including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that can and will also be used to obfuscate dubious searches. 
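For readers curious how this kind of log review works in practice, the sketch below filters a table of ALPR search records by the text in the "reason" field. The column names ("agency", "reason"), the file name, and the term lists are assumptions made for illustration; this is not EFF's actual analysis pipeline or the exact schema of Flock's audit logs.

```python
import pandas as pd

# Simplified sketch: load a hypothetical export of ALPR search audit logs and
# flag records whose free-text "reason" field mentions protest-related terms.
PROTEST_TERMS = ["protest", "no kings", "no king", "50501", "rally", "dxe"]
VAGUE_TERMS = ["investigation", "suspect", "query"]

searches = pd.read_csv("flock_search_audit_logs.csv")  # hypothetical file name

reason = searches["reason"].fillna("").str.lower()
protest_related = searches[reason.str.contains("|".join(PROTEST_TERMS), regex=True)]

# Count protest-related searches per agency, most active first.
print(protest_related.groupby("agency").size().sort_values(ascending=False).head(20))

# Estimate how many searches hide behind vague one-word reasons.
vague = searches[reason.str.contains("|".join(VAGUE_TERMS), regex=True)]
print(f"{len(vague) / len(searches):.0%} of searches used vague reasons")
```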

For protesters, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests, and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little further away from the action. Our Surveillance Self-Defense project has more information on steps you could take to protect your privacy when traveling to and attending a protest.

For local officials, this should serve as another example of how systems marketed as protecting your community may actually threaten the values your community holds most dear. The best way to protect people is to shut down these camera networks.

Everyone should have the right to speak up against injustice without ending up in a database. 

The Trump Administration’s Order on AI Is Deeply Misguided

20 November 2025 at 15:10

Widespread news reports indicate that President Donald Trump’s administration has prepared an executive order to punish states that have passed laws attempting to address harms from artificial intelligence (AI) systems. According to a draft published by news outlets, this order would direct federal agencies to bring legal challenges to state AI regulations that the administration deems “onerous,” to restrict funding to states that have such laws, and to push for new federal legislation that overrides state AI laws.

This approach is deeply misguided.

As we’ve said before, the fact that states are regulating AI is often a good thing. Left unchecked, company and government use of automated decision-making systems in areas such as housing, health care, law enforcement, and employment has already caused discriminatory outcomes based on gender, race, and other protected statuses.

While state AI laws have not been perfect, they are genuine attempts to address harms that people across the country face right now from certain uses of AI systems. Given the tone of the Trump Administration’s draft order, it seems clear that any preemptive federal legislation backed by this administration will not address the ways that automated decision-making systems can produce discriminatory decisions.

For example, a copy of the draft order published by Politico specifically names the Colorado AI Act as an example of supposedly “onerous” legislation. As we said in our analysis of Colorado’s law, it is a limited but crucial step—one that needs to be strengthened to protect people more meaningfully from AI harms. It is possible to guard against harms and support innovation and expression. Ignoring the harms that these systems can cause when used in discriminatory ways is not the way to do that.

Again: stopping states from acting on AI will stop progress. Proposals such as the executive order, or efforts to put a broad moratorium on state AI laws into the National Defense Authorization Act (NDAA), will hurt us all. Companies that produce AI and automated decision-making software have spent millions in state capitals and in Congress to slow or roll back legal protections regulating artificial intelligence. If reports about the Trump administration’s executive order are true, those efforts are about to get a supercharged ally in the federal government.

And all of us will pay the price.

EFF Demands Answers About ICE-Spotting App Takedowns

20 November 2025 at 11:30
Potential Government Coercion Raises First Amendment Concerns

SAN FRANCISCO – The Electronic Frontier Foundation (EFF) sued the departments of Justice (DOJ) and Homeland Security (DHS) today to uncover information about the federal government demanding that tech companies remove apps that document immigration enforcement activities in communities throughout the country. 

Tech platforms took down several such apps (including ICEBlock, Red Dot, and DeICER) and webpages (including ICE Sighting-Chicagoland) following communications with federal officials this year, raising important questions about government coercion to restrict protected First Amendment activity.

"We're filing this lawsuit to find out just what the government told tech companies," said EFF Staff Attorney F. Mario Trujillo. "Getting these records will be critical to determining whether federal officials crossed the line into unconstitutional coercion and censorship of protected speech."

In October, Apple removed ICEBlock, an app that allows users to report Immigration and Customs Enforcement (ICE) activity in their area, from its App Store. Attorney General Pamela Bondi publicly took credit for the takedown, telling reporters, “We reached out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so.” In the days that followed, Apple removed several similar apps from the App Store. Google and Meta removed similar apps and webpages from platforms they own as well. Bondi vowed to “continue engaging tech companies” on the issue. 

People have a protected First Amendment right to document and share information about law enforcement activities performed in public. If government officials coerce third parties into suppressing protected activity, this can be unconstitutional, as the government cannot do indirectly what it is barred from doing directly.

Last month, EFF submitted Freedom of Information Act (FOIA) requests to the DOJ, DHS and its component agencies ICE and Customs and Border Protection. The requests sought records and communications about agency demands that technology companies remove apps and pages that document immigration enforcement activities. So far, none of the agencies have provided these records. EFF's FOIA lawsuit demands their release.

For the complaint: https://www.eff.org/document/complaint-eff-v-doj-dhs-ice-tracking-apps

For more about the litigation: https://www.eff.org/cases/eff-v-doj-dhs-ice-tracking-apps

Contact: F. Mario Trujillo, Staff Attorney, EFF

The Patent Office Is About To Make Bad Patents Untouchable

19 November 2025 at 15:00

The U.S. Patent and Trademark Office (USPTO) has proposed new rules that would effectively end the public’s ability to challenge improperly granted patents at their source—the Patent Office itself. If these rules take effect, they will hand patent trolls exactly what they’ve been chasing for years: a way to keep bad patents alive and out of reach. People targeted with troll lawsuits will be left with almost no realistic or affordable way to defend themselves.

We need EFF supporters to file public comments opposing these rules right away. The deadline for public comments is December 2. The USPTO is moving quickly, and staying silent will only help those who profit from abusive patents. 

TAKE ACTION

Tell USPTO: The public has a right to challenge bad patents

We’re asking supporters who care about a fair patent system to file comments using the federal government’s public comment system. Your comments don’t need to be long, or use legal or technical vocabulary. The important thing is that everyday users and creators of technology have the chance to speak up and be counted. 

Below is a short, simple comment you can copy and paste. Your comment will carry more weight if you add a personal sentence or two of your own. Please note that comments should be submitted under your real name and will become part of the public record. 

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Why This Rule Change Matters

Inter partes review (IPR) isn’t perfect. It hasn’t eliminated patent trolling, and it’s not available in every case. But it is one of the few practical ways for ordinary developers, small companies, nonprofits, and creators to challenge a bad patent without spending millions of dollars in federal court. That’s why patent trolls hate it—and why the USPTO’s new rules are so dangerous.

IPR isn’t easy or cheap, but compared to years of litigation, it’s a lifeline. When the system works, it removes bogus patents from the table for everyone, not just the target of a single lawsuit. 

IPR petitions are decided by the Patent Trial and Appeal Board (PTAB), a panel of specialized administrative judges inside the USPTO. Congress designed IPR to provide a fresh, expert look at whether a patent should have been granted in the first place—especially when strong prior art surfaces. Unlike full federal trials, PTAB review is faster, more technical, and actually accessible to small companies, developers, and public-interest groups.

Here are three real examples of how IPR protected the public: 

  • The “Podcasting Patent” (Personal Audio)

Personal Audio claimed it had “invented” podcasting and demanded royalties from audio creators using its so-called podcasting patent. EFF crowdsourced prior art, filed an IPR, and ultimately knocked out the patent—benefiting the entire podcasting world. Under the new rules, this kind of public-interest challenge could easily be blocked on procedural grounds like timing, before the PTAB even examines the patent. 

  • SportBrain’s “upload your fitness data” patent

SportBrain sued more than 80 companies over a patent that claimed to cover basic gathering of user data and sending it over a network. A panel of PTAB judges canceled every claim. Under the new rules, this patent could have survived long enough to force dozens more companies to pay up.

  • Shipping & Transit’s “delivery notification” patents

For more than a decade, Shipping & Transit sued companies over extremely broad “delivery notification” patents. After repeated losses at PTAB and in court (including fee awards), the company finally collapsed. Under the new rules, a troll like this could keep its patents alive and continue carpet-bombing small businesses with lawsuits.

IPR hasn’t ended patent trolling. But when a troll waves a bogus patent at hundreds or thousands of people, IPR is one of the only tools that can actually fix the underlying problem: the patent itself. It dismantles abusive patent monopolies that never should have existed, saving entire industries from predatory litigation. That’s exactly why patent trolls and their allies have fought so hard to shut it down. They’ve failed to dismantle IPR in court or in Congress—and now they’re counting on the USPTO’s own leadership to do it for them. 

What the USPTO Plans To Do

First, they want you to give up your defenses in court. Under this proposal, a defendant can’t file an IPR unless they promise to never challenge the patent’s validity in court. 

For someone actually being sued or threatened with patent infringement, that’s simply not a realistic promise to make. The choice would be: use IPR and lose your defenses—or keep your defenses and lose IPR.

Second, the rules allow patents to become “unchallengeable” after one prior fight. That’s right. If a patent survives any earlier validity fight, anywhere, these rules would block everyone else from bringing an IPR, even years later and even if new prior art surfaces. One early decision—even one that’s poorly argued or didn’t have all the evidence—would close the door on the entire public.

Third, the rules will block IPR entirely if a district court case is projected to move faster than PTAB. 

So if a troll sues you with one of the outrageous patents we’ve seen over the years, like patents on watching an ad, showing picture menus, or clocking in to work, the USPTO won’t even look at it. It’ll be back to the bad old days, where you have exactly one way to beat the troll (who chose the court to sue in)—spend millions on experts and lawyers, then take your chances in front of a federal jury. 

The USPTO claims this is fine because defendants can still challenge patents in district court. That’s misleading. A real district-court validity fight costs millions of dollars and takes years. For most people and small companies, that’s no opportunity at all. 

Only Congress Can Rewrite IPR

IPR was created by Congress in 2011 as part of the America Invents Act, after extensive debate. It was meant to give the public a fast, affordable way to correct the Patent Office’s own mistakes. Only Congress—not agency rulemaking—can rewrite that system.

The USPTO shouldn’t be allowed to quietly undermine IPR with procedural traps that block legitimate challenges.

Bad patents still slip through every year. The Patent Office issues hundreds of thousands of new patents annually. IPR is one of the only tools the public has to push back.

These new rules rely on the absurd presumption that it’s the defendants—the people and companies threatened by questionable patents—who are abusing the system with multiple IPR petitions, and that they should be limited to one bite at the apple. 

That’s utterly upside-down. It’s patent trolls like Shipping & Transit and Personal Audio that have sued, or threatened, entire communities of developers and small businesses.

When people have evidence that an overbroad patent was improperly granted, that evidence should be heard. That’s what Congress intended. These rules twist that intent beyond recognition. 

In 2023, more than a thousand EFF supporters spoke out and stopped an earlier version of this proposal—your comments made the difference then, and they can again. 

Our principle is simple: the public has a right to challenge bad patents. These rules would take that right away. That’s why it’s vital to speak up now. 

TAKE ACTION

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Strengthen Colorado’s AI Act

19 November 2025 at 12:37

Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.

Fortunately, workers, patients, and renters are resisting.

In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.

EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.

What the Colorado AI Act Does

The Colorado AI Act is a good step in the right direction. It regulates “high-risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, healthcare, housing, insurance, or legal services. An AI system is a “substantial factor” in those decisions if it assisted in the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.

The Act is a solid foundation. Still, EFF urges Colorado to strengthen it.

Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.

Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help to seek a fair decision about them. A deployer must notify them of the reasons for the decision, the degree to which the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any incorrect inputs. And the deployer must provide them an opportunity to appeal, including with human review.

Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.

How the Colorado AI Act Should Be Strengthened

The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.

Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas on paper have similar rights to biometric privacy, but in practice the people of Illinois have far more enjoyment of this right because they can sue violators.

Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they repeatedly have an unfair disparate impact against vulnerable groups, and so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” – that is, an evidentiary thumb on the scale – that may impede such enforcement. Specifically, if a deployer or developer complies with the Act, then they get a rebuttable presumption that they complied with the Act’s requirement of “reasonable care” to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.

Next Steps

Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.

EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

Lawsuit Challenges San Jose’s Warrantless ALPR Mass Surveillance

18 November 2025 at 13:11
EFF and the ACLU of Northern California Sue on Behalf of Local Nonprofits

Contact: Josh Richman, EFF, jrichman@eff.org;  Carmen King, ACLU of Northern California, cking@aclunc.org

SAN JOSE, Calif. – San Jose and its police department routinely violate the California Constitution by conducting warrantless searches of the stored records of millions of drivers’ private habits, movements, and associations, the Electronic Frontier Foundation (EFF) and American Civil Liberties Union of Northern California (ACLU-NC) argue in a lawsuit filed Tuesday. 

The lawsuit, filed in Santa Clara County Superior Court on behalf of the Services, Immigrant Rights and Education Network (SIREN) and the Council on American-Islamic Relations – California (CAIR-CA), challenges San Jose police officers’ practice of searching for location information collected by automated license plate readers (ALPRs) without first getting a warrant.  

ALPRs are an invasive mass-surveillance technology: high-speed, computer-controlled cameras that automatically capture images of the license plate of every vehicle that passes by, without any suspicion that the driver has broken the law. 

“A person who regularly drives through an area subject to ALPR surveillance can have their location information captured multiple times per day,” the lawsuit says. “This information can reveal travel patterns and provide an intimate window into a person’s life as they travel from home to work, drop off their children at school, or park at a house of worship, a doctor’s office, or a protest. It could also reveal whether a person crossed state lines to seek health care in California.”

The San Jose Police Department has blanketed the city’s roadways with nearly 500 ALPRs – indiscriminately collecting millions of records per month about people’s movements – and keeps this data for an entire year. Then the department permits its officers and other law enforcement officials from across the state to search this ALPR database to instantly reconstruct people’s locations over time – without first getting a warrant. This is an unchecked police power to scrutinize the movements of San Jose’s residents and visitors as they lawfully travel to work, to the doctor, or to a protest. 

San Jose’s ALPR surveillance program is especially pervasive: Few California law enforcement agencies retain ALPR data for an entire year, and few have deployed nearly 500 cameras.  

The lawsuit, which names the city, its Police Chief Paul Joseph, and its Mayor Matt Mahan as defendants, asks the court to stop the city and its police from searching ALPR data without first obtaining a warrant. Location information reflecting people’s physical movements, even in public spaces, is protected under the Fourth Amendment according to U.S. Supreme Court case law. The California Constitution is even more protective of location privacy, at both Article I, Section 13 (the ban on unreasonable searches) and Article I, Section 1 (the guarantee of privacy). “The SJPD’s widespread collection and searches of ALPR information poses serious threats to communities’ privacy and freedom of movement."

“This is not just about data or technology — it’s about power, accountability, and our right to move freely without being watched,” said CAIR-San Francisco Bay Area Executive Director Zahra Billoo. “For Muslim communities, and for anyone who has experienced profiling, the knowledge that police can track your every move without cause is chilling. San Jose’s mass surveillance program violates the California Constitution and undermines the privacy rights of every person who drives through the city. We’re going to court to make sure those protections still mean something." 

"The right to privacy is one of the strongest protections that our immigrant communities have in the face of these acts of violence and terrorism from the federal government," said SIREN Executive Director Huy Tran. "This case does not raise the question of whether these cameras should be used. What we need to guard against is a surveillance state, particularly when we have seen other cities or counties violate laws that prohibit collaborating with ICE. We can protect the privacy rights of our residents with one simple rule: Access to the data should only happen once approved under a judicial warrant.”  

For the complaint: https://www.eff.org/files/2025/11/18/siren_v._san_jose_-_filed_complaint.pdf

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs 

Speaking Freely: Benjamin Ismail

18 November 2025 at 10:58

Interviewer: Jillian York

Benjamin Ismail is the Campaign and Advocacy Director for GreatFire, where he leads efforts to expose the censorship apparatus of authoritarian regimes worldwide. He also runs the App Censorship Project, including the AppleCensorship.com and GoogleCensorship.org platforms, which track mobile app censorship globally. From 2011 to 2017, Benjamin headed the Asia-Pacific desk at Reporters Without Borders (RSF).

Jillian York: Hi Benjamin, it's great to chat with you. We got to meet at the Global Gathering recently and we did a short video there and it was wonderful to get to know you a little bit. I'm going to start by asking you my first basic question: What does free speech or free expression mean to you?

Benjamin Ismail: Well, it starts with a very, very big question. What I have in mind is a cliche answer, but it's what I genuinely believe. I think about all freedoms. So when you say free expression, free speech, or freedom of information or Article 19, all of those concepts are linked together, I immediately think of all human rights at once. Because what I have seen during my current or past work is how that freedom is really the cornerstone of all freedom. If you don’t have that, you can’t have any other freedom. If you don’t have freedom of expression, if you don't have journalism, you don't have pluralism of opinions—you have self-censorship.

You have realities, violations, that exist but are not talked about, and are not exposed, not revealed, not tackled, and nothing is really improved without that first freedom. I also think about Myanmar because I remember going there in 2012, when the country had just opened after the democratic revolution. We got the chance to meet with many officials, ministers, and we got to tell them that they should start with that because their speech was “don’t worry, don’t raise freedom of speech, freedom of the press will come in due time.”

And we were saying “no, that’s not how it works!” It doesn’t come in due time when other things are being worked on. It starts with that so you can work on other things. And so I remember very well those meetings and how actually, unfortunately, the key issues that re-emerged afterwards in the country were precisely due to the fact that they failed to truly implement free speech protections when the country started opening.

JY: What was your path to this work?

BI: This is a multi-faceted answer. So, I was studying Chinese language and civilization at the National Institute of Oriental Languages and Civilizations in Paris along with political science and international law. When I started that line of study, I considered maybe becoming a diplomat…that program led to preparing for the exams required to enter the diplomatic corps in France.

But I also heard negative feedback on the Ministry of Foreign Affairs and, notably, first-hand testimonies from friends and fellow students who had done internships there. I already knew that I had a little bit of an issue with authority. My experience as an assistant at Reporters Without Borders challenged the preconceptions I had about NGOs and civil society organizations in general. I was a bit lucky to come at a time when the organization was really trying to find its new direction, its new inspiration. So it was a brief phase where the organization itself was hungry for new ideas.

Being young and not very experienced, I was invited to share my inputs, my views—among many others of course. I saw that you can influence an organization’s direction, actions, and strategy, and see the materialization of those strategic choices. Such as launching a campaign, setting priorities, and deciding how to tackle issues like freedom of information, and the protection of journalists in various contexts.

That really motivated me and I realized that I would have much less to say if I had joined an institution such as the Ministry of Foreign Affairs. Instead, I was part of a human-sized group, about thirty-plus employees working together in one big open space in Paris.

After that experience I set my mind on joining the civil society sector, focusing on freedom of the press. Working on journalistic issues, you get to touch on many different issues in many different regions, and I really like that. So even though it’s kind of monothematic, it's a single topic that encompasses everything at the same time.

I was dealing with safety issues for Pakistani journalists threatened by the Taliban. At the same time I followed journalists pressured by corporations such as TEPCO and the government in Japan for covering nuclear issues. I got to touch on many topics through the work of the people we were defending and helping. That’s what really locked me onto this specific human right.

I was already interested in political and civil rights from my studies, but after that first experience, at the end of 2010, I went to China and got called by Reporters Without Borders. They told me that the head of the Asia desk was leaving and invited me to apply for the position. At that time, I was in Shanghai, working to settle down there. The alternative was accepting a job that would take me back to Paris but likely close the door on any return to China. Once you start giving interviews to outlets like the BBC and CNN, well… you know how that goes—RSF was not viewed favorably in many countries. Eventually, I decided it was a huge opportunity, so I accepted the job and went back to Paris, and from then on I was fully committed to that issue.

 JY: For our readers, tell us what the timeline of this was.

BI: I finished my studies in 2009. I did my internship with Reporters Without Borders that year and continued to work pro bono for the organization on the Chinese website in 2010. Then I went to China, and in January 2011, I was contacted by Reporters Without Borders about the departure of the former head of the Asia Pacific Desk.

I did my first and last fact-finding mission in China, and went to Beijing. I met the artist Ai Weiwei in Beijing just a few weeks before he was arrested, around March 2011, and finally flew back to Paris and started heading the Asia desk. I left the organization in 2017. 

JY: Such an amazing story. I’d love to hear more about the work that you do now.

BI: The story of the work I do now actually starts in 2011. That was my first year heading the Asia Pacific Desk. That same year, a group of anonymous activists based in China started a group called GreatFire. They launched their project with a website where you can type any URL you want, and that website will test the connection from mainland China to that URL and tell you if it’s accessible or blocked. They also kept the test records so that you can look at the history of the blocking of a specific website, which is great. That was GreatFire’s first project for monitoring web censorship in mainland China.

We started exchanging information, working on the issue of censorship in China. They continued to develop more projects which I tried to highlight as well. I also helped them to secure some funding. At the very beginning, they were working on these things as a side job. And progressively they managed to get some funding, which was very difficult because of the anonymity.

One of the things I remember is that I helped them get some funding from the EU through a mechanism called “Small Grants,” where every grant would be around €20,000–30,000. The EU, you know, is a bureaucratic entity, and they were demanding some paperwork and documents. But I was telling them that they wouldn’t be able to get the real names of the people working at GreatFire, and that they should not be concerned about that, because what they wanted was to finance that tool. So if we were to show them that the people they were going to send the money to were actually the people controlling that website, then it would be fine. And so we featured a little EU logo, just for one day I think, on the footer of the website so they could check that. And that’s how we convinced the EU to support GreatFire for that work. Also, there's this tactic called “Collateral Freedom” that GreatFire uses very well.

The idea is that you host sensitive content on HTTPS servers that belong to companies which also operate inside China and are accessible there. Because it’s HTTPS, the connection is encrypted, so the authorities can’t just block a specific page—they can’t see exactly which page is being accessed. To block it, they’d have to block the entire service. Now, they can do that, but it comes at a higher political and economic cost, because it means disrupting access to other things hosted on that same service—like banks or major businesses. That’s why it’s called “collateral freedom”: you’re basically forcing the authorities to risk broader collateral damage if they want to censor your content.
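For the technically curious, here is a minimal sketch of the property Collateral Freedom relies on, using only Python's standard library. The hostname travels in cleartext during the TLS handshake (as the SNI field), but the specific page being requested is encrypted, so a censor watching the wire has to block the whole host to block one page. The host and path below are placeholders for illustration, not a real mirror.

```python
# Minimal sketch of why HTTPS forces host-level (not page-level) blocking.
# HOST and PATH are placeholders for illustration only, not a real mirror.
import socket
import ssl

HOST = "example.com"                 # stand-in for a large shared HTTPS host
PATH = "/mirrored-censored-page"     # the page a censor would want to block

context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    # An on-path observer can see HOST here: it is sent unencrypted as the
    # TLS Server Name Indication (SNI) during the handshake.
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        # Everything from here on travels inside the encrypted channel,
        # including which page on the host is being requested.
        request = f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        tls.sendall(request.encode())
        print(tls.recv(200))         # first bytes of the server's response
```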

When I was working for RSF, I proposed that we replicate that tactic on the 12th of March—that's the World Day Against Cyber Censorship. We had the habit of publishing what we called the “Enemies of the Internet” report, where we would highlight and update the situation in the countries carrying out the harshest repression online; countries like Iran, Turkmenistan, North Korea, and of course, China. I suggested in a team meeting: “What if we highlighted the good guys? Maybe we could highlight 10 exiled media outlets and use collateral freedom to uncensor those.” And so we did: some Iranian media, Egyptian media, Chinese media, and Turkmen media were uncensored using mirrors hosted on HTTPS servers owned by big, and thus harder to block, companies. And that’s how we started to do collateral freedom, and it continued to be an annual thing.

I also helped in my personal capacity, including after I left Reporters Without Borders. After I left RSF, I joined another NGO focusing on China, which I knew also from my time at RSF. I worked with that group for a year and a half; then GreatFire contacted me to work on a website specifically. So here we are, at the beginning of 2020: they had just started this website called AppleCensorship.com that allowed users to test the availability of any app in any of Apple’s 175 App Stores worldwide. They needed a better website—one that allowed advocacy content—for that tool.

The idea was to make a website useful for academics doing research, journalists investigating app store censorship and control, and human rights NGOs and civil society organizations interested in the availability of any tool. Apple’s censorship in China started quickly after the company entered the Chinese market in 2010.

In 2013, one of the projects by GreatFire which had been turned into an iOS app was removed by Apple 48 hours after its release on the App Store, at the demand of the Chinese authorities. That project was Free Weibo, which is a website which features censored posts from Weibo, the Chinese equivalent of Twitter—we crawl social media and detect censored posts and republish them on the site. In 2017 it was reported that Apple had removed all VPNs from the Chinese app store.

So that episode in 2013, and the growing censorship by Apple in China (and in other places too), led to the creation of AppleCensorship in 2019. GreatFire asked me to work on that website. The transformation into an advocacy platform was successful. I then started working full time on that project, which has since evolved into the App Censorship Project, which includes another website, GoogleCensorship.org (offering features similar to AppleCensorship.com but for the 224 Play Stores worldwide). In the meantime, I became the head of campaigns and advocacy, because of my background at RSF.

JY: I want to ask you, looking beyond China, what are some other places in the world that you're concerned about at the moment, whether on a professional basis or just as a person? What are you seeing right now in terms of global trends around free expression that worry you?

BI: I think, like everyone else, that what we're seeing in Western democracies—in the US and even in Europe—is concerning. But I'm still more concerned about authoritarian regimes than about our democracies. Maybe it's a case of not learning my lesson or of naive optimism, but I'm still more concerned about China and Russia than I am about what I see in France, the UK, or the US.

There has been some recent reporting about China developing very advanced censorship and surveillance technologies and exporting them to other countries like Myanmar and Pakistan. What we’re seeing in Russia—I’m not an expert on that region, but we heard experts saying back in 2022 that Russia was trying to increase its censorship and control, but that it couldn’t become like China because China had exerted control over its internet from the very beginning: They removed Facebook back in 2009, then Google was pushed away by the authorities (and the market). And the Chinese authorities successfully filled the gaps left by the absence of those foreign Western companies.

Some researchers working on Russia were saying that it wasn’t really possible for Russia to do what China had done because it was unprepared and that China had engineered it for more than a decade. What we are seeing now is that Russia is close to being able to close its Internet, to close the country, to replace services by its own controlled ones. It’s not identical, but it’s also kind of replicating what China has been doing. And that’s a very sad observation to make.

Beyond the digital realm, there's the question of how far Putin is willing to go in escalating. As a human being and an inhabitant of the European continent, I’m concerned by the ability of a country like Russia to isolate itself while waging a war. Russia is engaged in a real war and at the same time is able to completely digitally close down the country. Between that and the example of China exporting censorship, I’m not far from thinking that in ten or twenty years we’ll have a completely splintered internet.

JY: Do you feel like having a global perspective like this has changed or reshaped your views in any way?

BI: Yes, in the sense that when you start working with international organizations, and you start hearing about the world and how human rights are universal values, and you get to meet people and go to different countries, you really get to experience how universal those freedoms and aspirations are. When I worked at RSF and lobbied governments to pass a good law or abolish a repressive one, or when I worked on a case of a jailed journalist or blogger, I got to talk to authorities and to hear weird justifications from certain governments (not mentioning any names, but Myanmar and Vietnam) like “those populations are different from the French,” and I would receive pushback that the ideas of freedom I was describing were not applicable to their societies. It’s a bit destabilizing when you hear that for the first time. But as you gain experience, you can clearly explain why human rights are universal and why different populations shouldn’t be ruled differently when it comes to human rights.

Everyone wants to be free. This notion of “universality” is comforting because when you’re working for something universal, the argument is there. The freedoms you defend can’t be challenged in principle, because everyone wants them. If governments and authorities really listened to their people, they would hear them calling for those rights and freedoms.

Or that’s what I used to think. Now we hear this growing rhetoric that we (people from the West) are exporting democracy, that it’s a Western value and not a universal one. This discourse, notably developed by Xi Jinping in China, which treats “Western democracy” as a distinct concept, is a complete fallacy. Democracy was invented in the West, but democracy is universal. Unfortunately, I now believe that, in the future, we will have to justify and argue much more strongly for the universality of concepts like democracy, human rights, and fundamental freedoms. 

JY: Thank you so much for this insight. And now for our final question: Do you have a free speech hero?

BI: No.

JY: No? No heroes? An inspiration maybe.

BI: On the contrary, I’ve been disappointed so much by certain figures that were presented as human rights heroes…Like Aung San Suu Kyi during the Rohingya crisis, on which I worked when I was at RSF.

Myanmar officially recognizes 135 ethnic groups, but somehow this one additional ethnic minority (the Rohingya) is impossible for them to accept. It’s appalling. It’s weird to say, but some heroes are not really good people either. Being a hero is doing a heroic action, but people who do heroic actions can also do very bad things before or after, at a different level. They can be terrible people, husbands, or friends and be a “human rights” hero at the same time.

Some people really inspired me but they’re not public figures. They are freedom fighters, but they are not “heroes”. They remain in the shadows. I know their struggles; I see their determination, their conviction, and how their personal lives align with their role as freedom fighters. These are the people who truly inspire me.

A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe

14 November 2025 at 17:34

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.

EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.

TAKE ACTION

TELL CONGRESS: THE GUARD ACT WON'T KEEP US SAFE

Young People's Access to Legitimate AI Tools Could Be Cut Off Entirely. 

The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.

The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true.

By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life.  

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different. 

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.

Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.  

EFF has long documented the dangers of age-verification systems:

  • They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
  • They implement mass surveillance systems and ruin anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
  • They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
  • They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.

As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.

Vagueness + Steep Fines = Censorship. Full Stop. 

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses, including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.

The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.

Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.

Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.

In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to do it.

The GUARD Act would make the internet less free, less private, and less safe for everyone.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.

Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.

TAKE ACTION

TELL CONGRESS: OPPOSE THE GUARD ACT

Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing

13 November 2025 at 12:38

Remember when you thought age verification laws couldn't get any worse? Well, lawmakers in Wisconsin, Michigan, and beyond are about to blow you away.

It's unfortunately no longer enough to force websites to check your government-issued ID before you can access certain content, because politicians have now discovered that people are using Virtual Private Networks (VPNs) to protect their privacy and bypass these invasive laws. Their solution? Entirely ban the use of VPNs. 

Yes, really.

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing—potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction. 

This follows a notable pattern: As we’ve explained previously, lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature.

Wisconsin’s bill has already passed the State Assembly and is now moving through the Senate. If it becomes law, Wisconsin could become the first state where using a VPN to access certain content is banned. Michigan lawmakers have proposed similar legislation that has not moved through the legislature but would, among other things, force internet providers to actively monitor and block VPN connections. And in the UK, officials are calling VPNs "a loophole that needs closing."

This is actually happening. And it's going to be a disaster for everyone.

Here's Why This Is A Terrible Idea 

VPNs mask your real location by routing your internet traffic through a server somewhere else. When you visit a website through a VPN, that website only sees the VPN server's IP address, not your actual location. It's like sending a letter through a P.O. box so the recipient doesn't know where you really live. 

So when Wisconsin demands that websites "block VPN users from Wisconsin," they're asking for something that's technically impossible. Websites have no way to tell if a VPN connection is coming from Milwaukee, Michigan, or Mumbai. The technology just doesn't work that way.
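A minimal sketch (standard-library Python, not any particular site's code) makes the point concrete: the only network-level identity a website sees is the IP address of whatever connects to it. For a VPN user, that is the VPN exit server's address, which says nothing reliable about whether the person behind it is in Milwaukee or Mumbai.

```python
# Minimal sketch of what a web server actually "sees" about a visitor.
# If the visitor uses a VPN, client_address is the VPN exit server's IP,
# not the visitor's real address -- the server cannot recover the latter.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        visitor_ip, _port = self.client_address   # exit-node IP for VPN users
        print(f"Request for {self.path} from {visitor_ip}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    # Serve on port 8080 and log the connecting IP for each request.
    HTTPServer(("", 8080), LoggingHandler).serve_forever()
```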

Websites subject to this proposed law are left with this choice: either cease operation in Wisconsin, or block all VPN users, everywhere, just to avoid legal liability in the state. One state's terrible law is attempting to break VPN access for the entire internet, and the unintended consequences of this provision could far outweigh any theoretical benefit.

Almost Everyone Uses VPNs

Let's talk about who lawmakers are hurting with these bills, because it sure isn't just people trying to watch porn without handing over their driver's license.

  1. Businesses run on VPNs. Every company with remote employees uses VPNs. Every business traveler connecting through sketchy hotel Wi-Fi needs one. Companies use VPNs to protect client and employee data, secure internal communications, and prevent cyberattacks. 
  2. Students need VPNs for school. Universities require students to use VPNs to access research databases, course materials, and library resources. These aren't optional, and many professors literally assign work that can only be accessed through the school VPN. The University of Wisconsin-Madison’s WiscVPN, for example, “allows UW–‍Madison faculty, staff and students to access University resources even when they are using a commercial Internet Service Provider (ISP).” 
  3. Vulnerable people rely on VPNs for safety. Domestic abuse survivors use VPNs to hide their location from their abusers. Journalists use them to protect their sources. Activists use them to organize without government surveillance. LGBTQ+ people in hostile environments—both in the US and around the world—use them to access health resources, support groups, and community. For people living under censorship regimes, VPNs are often their only connection to vital resources and information their governments have banned. 
  4. Regular people just want privacy. Maybe you don't want every website you visit tracking your location and selling that data to advertisers. Maybe you don't want your internet service provider (ISP) building a complete profile of your browsing history. Maybe you just think it's creepy that corporations know everywhere you go online. VPNs can protect everyday users from everyday tracking and surveillance.

It’s A Privacy Nightmare

Here's what happens if VPNs get blocked: everyone has to verify their age by submitting government IDs, biometric data, or credit card information directly to websites—without any encryption or privacy protection.

We already know how this story ends. Companies get hacked. Data gets breached. And suddenly your real name is attached to the websites you visited, stored in some poorly secured database waiting for the inevitable leak. This has already happened, and it will keep happening; it's not a matter of if but when. And when it does, the repercussions will be huge.

Forcing people to give up their privacy to access legal content is the exact opposite of good policy. It's surveillance dressed up as safety.

"Harmful to Minors" Is Not a Catch-All 

Here's another fun feature of these laws: they're trying to broaden the definition of “harmful to minors” to sweep in a host of speech that is protected for both young people and adults.

Historically, states can prohibit people under 18 years old from accessing sexual materials that an adult can access under the First Amendment. But the definition of what constitutes “harmful to minors” is narrow — it generally requires that the materials have almost no social value to minors and that they, taken as a whole, appeal to minors’ “prurient sexual interests.” 

Wisconsin's bill defines “harmful to minors” much more broadly. It applies to materials that merely describe sex or feature descriptions/depictions of human anatomy. This definition would likely encompass a wide range of literature, music, television, and films that are protected under the First Amendment for both adults and young people, not to mention basic scientific and medical content.

Additionally, the bill’s definition would apply to any websites where more than one third of the site’s material is "harmful to minors." Given the breadth of the definition and its one-third trigger, we anticipate that Wisconsin could argue that the law applies to most social media websites. And it’s not hard to imagine, as these topics become politicized, Wisconsin claiming it applies to websites containing LGBTQ+ health resources, basic sexual education resources, and reproductive healthcare information. 

The breadth of the bill’s definition isn't a bug, it's a feature. It gives the state a vast amount of discretion to decide which speech is “harmful” to young people, and the power to decide what's "appropriate" and what isn't. History shows us those decisions most often harm marginalized communities.

It Won’t Even Work

Let's say Wisconsin somehow manages to pass this law. Here's what will actually happen:

People who want to bypass it will use non-commercial VPNs, open proxies, or cheap virtual private servers that the law doesn't cover. They'll find workarounds within hours. The internet always routes around censorship. 

Even in a fantasy world where every website successfully blocked all commercial VPNs, people would just make their own. You can route traffic through cloud services like AWS or DigitalOcean, tunnel through someone else's home internet connection, use open proxies, or spin up a cheap server for less than a dollar. 
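
To give a sense of just how low that bar is, here is a minimal sketch of a do-it-yourself relay using nothing but Python's standard library. It is not a VPN and adds no encryption of its own, the host name and ports are hypothetical, and a real setup would pair it with TLS or SSH, but it shows how little code it takes to bounce traffic through a cheap server you control.

    # A bare-bones TCP relay: every byte sent to the local port is forwarded to a
    # remote machine and back. The upstream host is hypothetical; in practice the
    # relay would run on (or point at) a rented server you control.
    import socket
    import threading

    LISTEN_ADDR = ("127.0.0.1", 8888)               # local port your apps connect to
    UPSTREAM_ADDR = ("my-cheap-vps.example", 8888)  # hypothetical personal server

    def pump(src, dst):
        """Copy bytes one way until either side closes the connection."""
        try:
            while chunk := src.recv(65536):
                dst.sendall(chunk)
        except OSError:
            pass
        finally:
            dst.close()

    def handle(client):
        upstream = socket.create_connection(UPSTREAM_ADDR)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        pump(upstream, client)

    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

Run something like this on a rented server and the sites you visit see the server's address rather than yours, which is exactly the kind of self-hosted workaround that a ban on "commercial VPN services" cannot reach.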

Meanwhile, everyone else (businesses, students, journalists, abuse survivors, regular people who just want privacy) will have their VPN access impacted. The law will accomplish nothing except making the internet less safe and less private for users.

Nonetheless, as we’ve mentioned previously, while VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. Like the larger age verification legislation they are a part of, VPN-blocking provisions simply don't work. They harm millions of people and they set a terrifying precedent for government control of the internet. More fundamentally, legislators need to recognize that age verification laws themselves are the problem. They don't work, they violate privacy, they're trivially easy to circumvent, and they create far more harm than they prevent.

A False Dilemma

People have (predictably) turned to VPNs to protect their privacy as they watched age verification mandates proliferate around the world. Instead of taking this as a sign that maybe mass surveillance isn't popular, lawmakers have decided the real problem is that these privacy tools exist at all, and are now trying to ban them. 

Let's be clear: lawmakers need to abandon this entire approach.

The answer to "how do we keep kids safe online" isn't "destroy everyone's privacy." It's not "force people to hand over their IDs to access legal content." And it's certainly not "ban access to the tools that protect journalists, activists, and abuse survivors.”

If lawmakers genuinely care about young people's well-being, they should invest in education, support parents with better tools, and address the actual root causes of harm online. What they shouldn't do is wage war on privacy itself. Attacks on VPNs are attacks on digital privacy and digital freedom. And this battle is being fought by people who clearly have no idea how any of this technology actually works. 

If you live in Wisconsin, reach out to your state Senator and urge them to kill A.B. 105/S.B. 130. Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.

🔔 Ring's Face Scan Plan | EFFector 37.16

12 November 2025 at 19:26

Cozy up next to the fireplace and we'll catch you up on the latest digital rights news with EFF's EFFector newsletter.

In our latest issue, we’re exposing surveillance logs that reveal racist policing; explaining the harms of Google’s plan for Android app gatekeeping; and continuing our new series, Gate Crashing, exploring how the internet empowers people to take nontraditional paths into the traditional worlds of journalism, creativity, and criticism.

Prefer to listen in? Check out our audio companion, where EFF Staff Attorney Mario Trujillo explains why Ring's upcoming facial recognition tool could violate the privacy rights of millions of people. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.16 - 🔔 RING'S FACE SCAN PLAN

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Washington Court Rules That Data Captured on Flock Safety Cameras Are Public Records

12 November 2025 at 18:00

A Washington state trial court has shot down local municipalities’ effort to keep automated license plate reader (ALPR) data secret.

The Skagit County Superior Court in Washington rejected the attempt to block the public’s right to access data gathered by Flock Safety cameras, protecting access to information under the Washington Public Records Act (PRA). Importantly, the ruling from the court makes it clear that this access is protected even when a Washington city uses Flock Safety, a third-party vendor, to conduct surveillance and store personal data on behalf of a government agency.

"The Flock images generated by the Flock cameras...are public records," the court wrote in its ruling. "Flock camera images are created and used to further a governmental purpose. The Flock images created by the cameras located in Stanwood and Sedro-Woolley were paid for by Stanwood and Sedro Wooley [sic] and were generated for the benefit of Stanwood and Sedro-Woolley."

The cities’ move to exempt the records from disclosure was a dangerous attempt to deny transparency and reflects another problem with the massive amount of data that police departments collect through Flock cameras and store on Flock servers: the wiggle room cities seek when public data is hosted on a private company’s server.

Flock Safety's main product is ALPRs, camera systems installed throughout communities to track all drivers all the time. Privacy activists and journalists across the country recently have used public records requests to obtain data from the system, revealing a variety of controversial uses. This has included agencies accessing data for immigration enforcement and to investigate an abortion, the latter of which may have violated Washington law. A recent report from the University of Washington found that some cities in the state are also sharing the ALPR data from their Flock Safety systems with federal immigration agents. 

In this case, a member of the public in April filed a records request with a Flock customer, the City of Stanwood, for all footage recorded during a one-hour period in March. Shortly afterward, Stanwood and another Flock user, the City of Sedro-Woolley, requested that the local court rule that this data is not a public record, asserting that “data generated by Flock [automated license plate reader cameras (ALPRs)] and stored in the Flock cloud system are not public records unless and until a public agency extracts and downloads that data." 

If a government agency is conducting mass surveillance, EFF supports individuals’ access to data collected specifically on them, at the very least. And to address legitimate privacy concerns, governments can and should redact personal information in these records while still disclosing information about how the systems work and the data that they capture. 

This isn’t what these Washington cities offered, though. They tried a few different arguments against releasing any information at all. 

The contract between the City of Sedro-Woolley and Flock Safety clearly states that "As between Flock and Customer, all right, title and interest in the Customer Data, belong to and are retained solely by Customer,” and “Customer Data” is defined as "the data, media, and content provided by Customer through the Services. For the avoidance of doubt, the Customer Data will include the Footage." Other Flock-using police departments across the country have also relied on similar contract language to insist that footage captured by Flock cameras belongs to the jurisdiction in question. 

The contract language notwithstanding, officials in Washington attempted to restrict public access by claiming that video footage stored on Flock’s servers is not a government record until accessed, and that responding to requests for it would constitute the generation of a new record. Under this argument, any information that was gathered but not otherwise accessed by law enforcement, including the thousands of images taken every day by the agency’s 14 Flock ALPR cameras, had nothing to do with government business and should not be subject to records requests. The cities shut off their Flock cameras while the litigation was ongoing.

If the court had ruled in favor of the cities’ claim, police could move to store all their data — from their surveillance equipment and otherwise — on private company servers and claim that it's no longer accessible to the public. 

The cities threw another reason for withholding information at the wall to see if it would stick, claiming that even if the court found that data collected on Flock cameras are in fact public records, the cities should still be able to block the release of the requested one hour of footage, either because all of the images captured by Flock cameras are sensitive investigation material or because they should be treated the same way as automated traffic safety cameras.

EFF is particularly opposed to this line of reasoning. In 2017, the California Supreme Court sided with EFF and ACLU in a case arguing that “the license plate data of millions of law-abiding drivers, collected indiscriminately by police across the state, are not ‘investigative records’ that law enforcement can keep secret.” 

Notably, when Stanwood Police Chief Jason Toner made his pitch to the City Council to procure the Flock cameras in April 2024, he was adamant that the ALPRs would not be the same as traffic cameras. “Flock Safety Cameras are not ‘red light’ traffic cameras nor are they facial recognition cameras,” Chief Toner wrote at the time, adding that the system would be a “force multiplier” for the department. 

If the court had gone along with this part of the argument, cities could have claimed that the mass surveillance conducted using ALPRs is part of undefined mass investigations, withholding from the public huge amounts of information gathered without warrants or reason.

The cities seemed to be setting up contradictory arguments. Maybe the footage captured by the cities’ Flock cameras belongs to the city — or maybe it doesn’t until the city accesses it. Maybe the data collected by the cities’ taxpayer-funded cameras are unrelated to government business and should be inaccessible to the public — or maybe it’s all related to government business and, specifically, to sensitive investigations, presumably of every single vehicle that goes by the cameras. 

The requester, Jose Rodriguez, still won’t be getting his records, despite the court’s positive ruling. 

“The cities both allowed the records to be automatically deleted after I submitted my records requests and while they decided to have their legal council review my request. So they no longer have the records and can not provide them to me even though they were declared to be public records,” Rodriguez told 404 Media — another possible violation of that state’s public records laws. 

Flock Safety and its ALPR system have come under increased scrutiny in the last few months, as the public has become aware of illegal and widespread sharing of information. 

The system was used by the Johnson County Sheriff’s Office to track someone across the country who’d self-administered an abortion in Texas. Flock repeatedly claimed that this was inaccurate reporting, but materials recently obtained by EFF have affirmed that Johnson County was investigating that individual as part of a fetal death investigation, conducted at the request of her former abusive partner. They were not looking for her as part of a missing person search, as Flock said. 

In Illinois, the Secretary of State conducted an audit of Flock use within the state and found that the Flock Safety system was facilitating Customs and Border Protection access, in violation of state law. And in California, the Attorney General recently sued the City of El Cajon for using Flock to illegally share information across state lines.


Police departments are increasingly relying on third-party vendors for surveillance equipment and storage for the terabytes of information they’re gathering. Refusing the public access to this information undermines public records laws and the assurances the public has received when police departments set these powerful spying tools loose in their streets. While it’s great that these records remain public in Washington, communities around the country must be swift to reject similar attempts at blocking public access.

EFFecting Change: This Title Was Written by a Human

12 November 2025 at 15:58

Generative AI is like a Rorschach test for anxieties about technology, be they privacy, replacement of workers, bias and discrimination, surveillance, or intellectual property. Our panelists discuss how to address complex questions and risks in AI while protecting civil liberties and human rights online.

Join EFF Director of Policy and Advocacy Katharine Trendacosta, EFF Staff Attorney Tori Noble, Berkeley Center for Law & Technology Co-Director Pam Samuelson, and Icarus Salon Artist Şerife Wong for a live discussion with Q&A. 

EFFecting Change Livestream Series:
This Title Was Written by a Human
Thursday, November 13th (New Date!)
10:00 AM - 11:00 AM Pacific
This event is LIVE and FREE!


RSVP Today


Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague that might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online. 

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

EFF Teams Up With AV Comparatives to Test Android Stalkerware Detection by Major Antivirus Apps

6 November 2025 at 12:10

EFF has, for many years, raised the alarm about the proliferation of stalkerware: commercially available apps designed to be installed covertly on another person’s device and exfiltrate data from that device without their knowledge. In particular, we have urged the makers of anti-virus products for Android phones to improve their detection of stalkerware and call it out explicitly to users when it is found. In 2020 and 2021, AV Comparatives ran tests to see how well the most popular anti-virus products detected stalkerware from many different vendors. The results were mixed, with some high-scoring companies and others that had alarmingly low detection rates. Since malware detection is an endless game of cat and mouse between anti-virus companies and malware developers, we felt that the time was right to take a more up-to-date snapshot of how well the anti-virus companies are performing. We’ve teamed up with the researchers at AV Comparatives to test the most popular anti-virus products for Android to see how well they detect the most popular stalkerware products in 2025.

Here is what we found:

Stalkerware detection is still a mixed bag. Notably, Malwarebytes detected 100% of the stalkerware products we tested for. ESET, Bitdefender, McAfee, and Kaspersky detected all but one sample. This is a marked improvement over the 2021 test, which also found only one app with a 100% detection rate (G Data), but in which the next-best performing products had detection rates of 80-85%. Google Play Protect and Trend Micro had the lowest detection rates in the 2025 test, at 53% and 59% respectively. The poor performance of Google Play Protect is unsurprising: because it is the anti-virus solution on so many Android phones by default, some stalkerware includes specific instructions to disable detection by Google Play Protect as part of the installation process.

There are fewer stalkerware products out there. In 2020 and 2021, AV Comparatives tested 20 unique stalkerware products from different vendors. In 2025, we tested 17. We found that many stalkerware apps are essentially variations on the same underlying product and that the number of unique underlying products appears to have decreased in recent years. We cannot be certain about the cause of this decline, but we speculate that increased attention from regulators may be a factor. The popularity of small, cheap, Bluetooth-enabled physical trackers such as Apple AirTags and Tiles as an alternative method of location-tracking may also be undercutting the stalkerware market. 

We hope that these tests will help survivors of domestic abuse and others who are concerned about stalkerware on their Android devices make informed choices about their anti-virus apps. We also hope that exposing the gaps that these products have in stalkerware detection will renew interest in this problem at anti-virus companies.

You can find the full results of the test here (PDF). 

The Legal Case Against Ring’s Face Recognition Feature

3 November 2025 at 18:27

Amazon Ring’s upcoming face recognition tool has the potential to violate the privacy rights of millions of people and could result in Amazon breaking state biometric privacy laws.

Ring plans to introduce a feature to its home surveillance cameras called “Familiar Faces,” to identify specific people who come into view of the camera. When turned on, the feature will scan the faces of all people who approach the camera to try and find a match with a list of pre-saved faces. This will include many people who have not consented to a face scan, including friends and family, political canvassers, postal workers, delivery drivers, children selling cookies, or maybe even some people passing on the sidewalk.

Many biometric privacy laws across the country are clear: Companies need your affirmative consent before running face recognition on you. In at least one state, ordinary people, with the help of attorneys, can challenge Amazon’s data collection. Where that is not possible, state privacy regulators should step in.

Sen. Ed Markey (D-Mass.) has already called on Amazon to abandon its plans and sent the company a list of questions. Ring spokesperson Emma Daniels answered written questions posed by EFF, which can be viewed here.

What is Ring’s “Familiar Faces”?

Amazon describes “Familiar Faces” as a tool that “intelligently recognizes familiar people.” It says this tool will provide camera owners with “personalized context of who is detected, eliminating guesswork and making it effortless to find and review important moments involving specific familiar people.” Amazon plans to release the feature in December.

The feature will allow camera owners to tag particular people so Ring cameras can automatically recognize them in the future. In order for Amazon to recognize particular people, it will need to perform face recognition on every person that steps in front of the camera. Even if a camera owner does not tag a particular face, Amazon says it may retain that biometric information for up to six months. Amazon said it does not currently use the biometric data for “model training or algorithmic purposes.”

In order to biometrically identify you, a company typically will take your image and extract a faceprint by taking tiny measurements of your face and converting that into a series of numbers that is saved for later. When you step in front of a camera again, the company takes a new faceprint and compares it to a list of previous prints to find a match. Other forms of biometric tracking can be done with a scan of your fingertip, eyeball, or even your particular gait.
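
As a rough illustration of that matching step, here is a minimal sketch in Python. It assumes the system has already reduced each face image to a short list of numbers and uses cosine similarity for the comparison; the names, vectors, and threshold are made up for illustration and do not describe Ring's actual implementation.

    # Minimal faceprint-matching sketch: compare a fresh scan against saved prints.
    from __future__ import annotations
    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def find_match(new_print: list[float],
                   saved_prints: dict[str, list[float]],
                   threshold: float = 0.8) -> str | None:
        """Return the best-matching saved label above the threshold, else None."""
        best_label, best_score = None, threshold
        for label, saved in saved_prints.items():
            score = cosine_similarity(new_print, saved)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    # Every visitor is scanned and compared, whether or not they ever consented.
    saved = {"neighbor": [0.12, 0.98, 0.33], "mail_carrier": [0.71, 0.05, 0.64]}
    print(find_match([0.11, 0.97, 0.35], saved))  # -> "neighbor"

The privacy problem sits in the very first step: to decide whether anyone is "familiar," the system has to compute and compare a faceprint for every person who walks into view, consenting or not.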

Amazon has told reporters that the feature will be off by default and that it would be unavailable in certain jurisdictions with the most active biometric privacy enforcement—including the states of Illinois and Texas, and the city of Portland, Oregon. The company would not promise that this feature will remain off by default in the future.

Why is This a Privacy Problem?

Your biometric data, such as your faceprint, are some of the most sensitive pieces of data that a company can collect. Associated risks include mass surveillance, data breach, and discrimination.

Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. Ring’s close partnership with police amplifies that threat. For example, in a city dense with face recognition cameras, the entirety of a person’s movements could be tracked with the click of a button, or all people could be identified at a particular location. A recent and unrelated private-public partnership in New Orleans unfortunately shows that mass surveillance through face recognition is not some far flung concern.

Amazon has already announced a related tool called “search party” that can identify and track lost dogs using neighbors’ cameras. A tool like this could be repurposed for law enforcement to track people. At least for now, Amazon says it does not have the technical capability to comply with law enforcement demands for a list of all cameras in which a person has been identified, though it does comply with other law enforcement demands. 

In addition, data breaches are a perpetual concern with any data collection. Biometrics magnify that risk because your face cannot be reset, unlike a password or credit card number. Amazon says it processes and stores biometrics collected by Ring cameras on its own servers, and that it uses comprehensive security measures to protect the data.

Face recognition has also been shown to have higher error rates with certain groups—most prominently with dark-skinned women. Similar technology has also been used to make questionable guesses about a person’s emotions, age, and gender.

Will Ring’s “Familiar Faces” Violate State Biometric Laws?

Any Ring collection of biometric information in states that require opt-in consent poses huge legal risk for the company. Amazon already told reporters that the feature will not be available in Illinois and Texas—strongly suggesting its feature could not survive legal scrutiny there. The company said it is also steering clear of Portland, Oregon, which has a biometric privacy law that similar companies have avoided.

Its “Familiar Faces” feature will necessarily require its cameras to collect a faceprint from every person who comes into view of an enabled camera, to try and find a match. It is impossible for Amazon to obtain consent from everyone—especially people who do not own Ring cameras. It appears that Amazon will try to unload some consent requirements onto individual camera owners themselves. Amazon says it will provide in-app messages to customers, reminding them to comply with applicable laws. But Amazon—as a company itself collecting, processing, and storing this biometric data—could have its own consent obligations under numerous laws.

Lawsuits against similar features highlight Amazon’s legal risks. In Texas, Google paid $1.375 billion to settle a lawsuit that alleged, among other things, that Google’s Nest cameras "indiscriminately capture the face geometry of any Texan who happens to come into view, including non-users." In Illinois, Facebook paid $650 million and shut down its face recognition tools that automatically scanned Facebook photos—even the faces of non-Facebook users—in order to identify people to recommend tagging. Later, Meta paid another $1.4 billion to settle a similar suit in Texas.

Many states aside from Illinois and Texas now protect biometric data. Washington passed a biometric privacy law in 2017, though the state has never enforced it. In 2023, the state passed an even stronger law protecting biometric privacy, which allows individuals to sue on their own behalf. And at least 16 states have recently passed comprehensive privacy laws that often require companies to obtain opt-in consent for the collection of sensitive data, which typically includes biometric data. For example, in Colorado, a company that jointly with others determines the purpose and means of processing biometric data must obtain consent. Maryland goes further: such companies are essentially prohibited from collecting or processing biometric data from bystanders.

Many of these comprehensive laws have numerous loopholes and can only be enforced by state regulators—a glaring weakness facilitated in part by Amazon lobbyists.

Nonetheless, Ring’s new feature provides regulators a clear opportunity to step up to investigate, protect people’s privacy, and test the strength of their laws.

License Plate Surveillance Logs Reveal Racist Policing Against Romani People

3 November 2025 at 16:05

More than 80 law enforcement agencies across the United States have used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety automated license plate reader (ALPR) network, according to audit logs obtained and analyzed by the Electronic Frontier Foundation. 

When police run a search through the Flock Safety network, which links thousands of ALPR systems, they are prompted to leave a reason and/or case number for the search. Between June 2024 and October 2025, cops performed hundreds of searches for license plates using terms such as "roma" and "g*psy," and in many instances, without any mention of a suspected crime. Other uses include "g*psy vehicle," "g*psy group," "possible g*psy," "roma traveler" and "g*psy ruse," perpetuating systemic harm by demeaning individuals based on their race or ethnicity. 

These queries were run through thousands of police departments' systems—and it appears that none of these agencies flagged the searches as inappropriate. 

These searches are, by definition, racist. 

Word Choices and Flock Searches 

We are using the terms "Roma" and “Romani people” as umbrella terms, recognizing that they represent different but related groups. Since 2020, the U.S. federal government has officially recognized "Anti-Roma Racism" as including behaviors such as "stereotyping Roma as persons who engage in criminal behavior" and using the slur "g*psy." According to the U.S. Department of State, this language “leads to the treatment of Roma as an alleged alien group and associates them with a series of pejorative stereotypes and distorted images that represent a specific form of racism.” 

Nevertheless, police officers have run hundreds of searches for license plates using the terms "roma" and "g*psy." (Unlike the police ALPR queries we’ve uncovered, we substitute an asterisk for the Y to avoid repeating this racist slur). In many cases, these terms have been used on their own, with no mention of crime. In other cases, the terms have been used in contexts like "g*psy scam" and "roma burglary," when ethnicity should have no relevance to how a crime is investigated or prosecuted. 

A “g*psy scam” and “roma burglary” do not exist in criminal law separate from any other type of fraud or burglary. Several agencies contacted by EFF have since acknowledged the inappropriate use and expressed efforts to address the issue internally. 

"The use of the term does not reflect the values or expected practices of our department," a representative of the Palos Heights (IL) Police Department wrote to EFF after being confronted with two dozen searches involving the term "g*psy." "We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language."

Of course, the broader issue is that allowing "g*psy" or "Roma" as a reason for a search isn't just offensive, it implies the criminalization of an ethnic group. In fact, the Grand Prairie Police Department in Texas searched for "g*psy" six times while using Flock's "Convoy" feature, which allows an agency to identify vehicles traveling together—in essence targeting an entire traveling community of Roma without specifying a crime. 

At the bottom of this post is a list of agencies and the terms they used when searching the Flock system. 

Anti-Roma Racism in an Age of Surveillance 

Racism against Romani people has been a problem for centuries, with one of its most horrific manifestations during the Holocaust, when the Third Reich and its allies perpetrated genocide by murdering hundreds of thousands of Romani people and sterilizing thousands more. Despite efforts by the UN and EU to combat anti-Roma discrimination, this form of racism persists. As scholars Margareta Matache and Mary T. Bassett explain, it is perpetuated by modern American policing practices: 

In recent years, police departments have set up task forces specialised in “G*psy crimes”, appointed “G*psy crime” detectives, and organised police training courses on “G*psy criminality”. The National Association of Bunco Investigators (NABI), an organisation of law enforcement professionals focusing on “non-traditional organised crime”, has even created a database of individuals arrested or suspected of criminal activity, which clearly marked those who were Roma.

Thus, it is no surprise that a 2020 Harvard University survey of Romani Americans found that 4 out of 10 respondents reported being subjected to racial profiling by police. This demonstrates the ongoing challenges they face due to systemic racism and biased policing. 

Notably, many police agencies using surveillance technologies like ALPRs have adopted some sort of basic policy against biased policing or the use of these systems to target people based on race or ethnicity. But even when such policies are in place, an agency’s failure to enforce them allows these discriminatory practices to persist. These searches were also run through the systems of thousands of other police departments that may have their own policies and state laws that prohibit bias-based policing—yet none of those agencies appeared to have flagged the searches as inappropriate. 

The Flock search data in question here shows that surveillance technology exacerbates racism, and even well-meaning policies to address bias can quickly fall apart without proper oversight and accountability. 

Cops In Their Own Words

EFF reached out to a sample of the police departments that ran these searches. Here are five representative responses we received from police departments in Illinois, California, and Virginia. They do not inspire confidence.

1. Lake County Sheriff's Office, IL 

A screen grab of three searches

In June 2025, the Lake County Sheriff's Office ran three searches for a dark colored pick-up truck, using the reason: "G*PSY Scam." The search covered 1,233 networks, representing 14,467 different ALPR devices. 

In response to EFF, a sheriff's representative wrote via email:

“Thank you for reaching out and for bringing this to our attention.  We certainly understand your concern regarding the use of that terminology, which we do not condone or support, and we want to assure you that we are looking into the matter.

Any sort of discriminatory practice is strictly prohibited at our organization. If you have the time to take a look at our commitment to the community and our strong relationship with the community, I firmly believe you will see discrimination is not tolerated and is quite frankly repudiated by those serving in our organization. 

We appreciate you bringing this to our attention so we can look further into this and address it.”

2. Sacramento Police Department, CA

A screen grab of three searches

In May 2025, the Sacramento Police Department ran six searches using the term "g*psy."  The search covered 468 networks, representing 12,885 different ALPR devices. 

In response to EFF, a police representative wrote:

“Thank you again for reaching out. We looked into the searches you mentioned and were able to confirm the entries. We’ve since reminded the team to be mindful about how they document investigative reasons. The entry reflected an investigative lead, not a disparaging reference. 

We appreciate the chance to clarify.”

3. Palos Heights Police Department, IL

A screen grab of three searches

In September 2024, the Palos Heights Police Department ran more than two dozen searches using terms such as "g*psy vehicle," "g*psy scam" and "g*psy concrete vehicle." Most searches hit roughly 1,000 networks. 

In response to EFF, a police representative said the searches were related to a singular criminal investigation into a vehicle involved in a "suspicious circumstance/fraudulent contracting incident" and is "not indicative of a general search based on racial or ethnic profiling." However, the agency acknowledged the language was inappropriate: 

“The use of the term does not reflect the values or expected practices of our department. We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.

We appreciate your outreach on this matter and the opportunity to provide clarification.”

4. Irvine Police Department, CA

A screen grab of three searches

In February and May 2025, the Irvine Police Department ran eight searches using the term "roma" in the reason field. The searches covered 1,420 networks, representing 29,364 different ALPR devices. 

In a call with EFF, an IPD representative explained that the cases were related to a series of organized thefts. However, they acknowledged the issue, saying, "I think it's an opportunity for our agency to look at those entries and to use a case number or use a different term." 

5. Fairfax County Police Department, VA

A screen grab of three searches

Between December 2024 and April 2025, the Fairfax County Police Department ran more than 150 searches involving terms such as "g*psy case" and "roma crew burglaries." Fairfax County PD continued to defend its use of this language.

In response to EFF, a police representative wrote:

“Thank you for your inquiry. When conducting searches in investigative databases, our detectives must use the exact case identifiers, terms, or names connected to a criminal investigation in order to properly retrieve information. These entries reflect terminology already tied to specific cases and investigative files from other agencies, not a bias or judgment about any group of people. The use of such identifiers does not reflect bias or discrimination and is not inconsistent with our Bias-Based Policing policy within our Human Relations General Order.”

A National Trend

Roma individuals and families are not the only ones being systematically and discriminatorily targeted by ALPR surveillance technologies. For example, Flock audit logs show agencies ran 400 more searches using terms targeting Traveller communities more generally, with a specific focus on Irish Travellers, often without any mention of a crime. 

Across the country, these tools are enabling and amplifying racial profiling by embedding longstanding policing biases into surveillance technologies. For example, data from Oak Park, IL, show that 84% of drivers stopped in Flock-related traffic incidents were Black—despite Black people making up only 19% of the local population. ALPR systems are far from being neutral tools for public safety and are increasingly being used to fuel discriminatory policing practices against historically marginalized people. 

The racially coded language in Flock's logs mirrors long-standing patterns of discriminatory policing. Terms like "furtive movements," "suspicious behavior," and "high crime area" have always been cited by police to try to justify stops and searches of Black, Latine, and Native communities. These phrases might not appear in official logs because they're embedded earlier in enforcement—in the traffic stop without clear cause, the undocumented stop-and-frisk, the intelligence bulletin flagging entire neighborhoods as suspect. They function invisibly until a body-worn camera, court filing, or audit brings them to light. Flock's network didn’t create racial profiling; it industrialized it, turning deeply encoded and vague language into scalable surveillance that can search thousands of cameras across state lines. 

The Path Forward

U.S. Sen. Ron Wyden, D-OR, recently recommended that local governments reevaluate their decisions to install Flock Safety in their communities. We agree, but we also understand that sometimes elected officials need to see the abuse with their own eyes first. 

We know which agencies ran these racist searches, and they should be held accountable. But we also know that the vast majority of Flock Safety's clients—thousands of police and sheriffs—also allowed those racist searches to run through their Flock Safety systems unchallenged. 

Elected officials must act decisively to address the racist policing enabled by Flock's infrastructure. First, they should demand a complete audit of all ALPR searches conducted in their jurisdiction and a review of search logs to determine (a) whether their police agencies participated in discriminatory policing and (b) what safeguards, if any, exist to prevent such abuse. Second, officials should institute immediate restrictions on data-sharing through Flock's nationwide network. As demonstrated by California law, for example, police agencies should not be able to share their ALPR data with federal authorities or out-of-state agencies, thus eliminating a vehicle for discriminatory searches spreading across state lines.

Ultimately, elected officials must terminate Flock Safety contracts entirely. The evidence is now clear: audit logs and internal policies alone cannot prevent a surveillance system from becoming a tool for racist policing. The fundamental architecture of Flock—thousands of cameras feeding into a nationwide searchable network—makes discrimination inevitable when enforcement mechanisms fail.

As Sen. Wyden astutely explained, "local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”

Table Overview and Notes

The following table compiles terms used by agencies to describe the reasons for searching the Flock Safety ALPR database. In a small number of cases, we removed additional information such as case numbers, specific incident details, and officers' names that were present in the reason field. 

We removed one agency from the list due to the agency indicating that the word was a person's name and not a reference to Romani people. 

In general, we did not include searches that used the term "Romanian," although many of those may also be indicative of anti-Roma bias. We also did not include uses of "traveler" or “Traveller” when they did not include a clear ethnic modifier; however, we believe many of those searches are likely relevant.  
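
For readers interested in how this kind of review can be done, below is a minimal sketch of keyword filtering over an exported audit log. The CSV layout, column names, and patterns are assumptions for illustration, not Flock's actual schema or the exact method used to compile the table.

    # Flag audit-log rows whose stated search reason uses one of the terms above.
    from __future__ import annotations
    import csv
    import re

    # Whole-word matching; the "." stands in for the censored letter of the slur.
    # Reasons that only say "Romanian" or a bare "traveler" are not matched,
    # consistent with the exclusions described in the notes above.
    FLAGGED = re.compile(r"\b(roma|g.psy)\b", re.IGNORECASE)

    def flag_searches(path: str) -> list[dict[str, str]]:
        """Return the agency and reason for every row whose reason matches."""
        hits = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                reason = row.get("reason", "")
                if FLAGGED.search(reason):
                    hits.append({"agency": row.get("agency", ""), "reason": reason})
        return hits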

A text-based version of the spreadsheet is available here. 

November 12, 2025 update: Due to a clerical error, a term was misattributed to Eugene Police Department in Oregon. We regret the error. 

Application Gatekeeping: An Ever-Expanding Pathway to Internet Censorship

3 November 2025 at 15:57

It’s not news that Apple and Google use their app stores to shape what apps you can and cannot have on many of your devices. What is new is more governments—including the U.S. government—using legal and extralegal tools to lean on these gatekeepers in order to assert that same control. And rather than resisting, the gatekeepers are making it easier than ever. 

Apple’s decision to take down the ICEBlock app at least partially in response to threats from the U.S. government—with Google rapidly and voluntarily following suit—was bad enough. But it pales in comparison with Google’s new program, set to launch worldwide next year, requiring developers to register with the company in order to have their apps installable on Android certified devices—including paying a fee and providing personal information backed by government-issued identification. Google claims the new program “is an extra layer of security that deters bad actors and makes it harder for them to spread harm,” but the registration requirements are barely tied to app effectiveness or security. Why, one wonders, does Google need to see your driver’s license to evaluate whether your app is safe? Why, one also wonders, does Google want to create a database of virtually every Android app developer in the world? 

F-Droid, a free and open-source repository for Android apps, has been sounding the alarm. As they’ve explained in an open letter, Google’s central registration system will be devastating for the Android developer community. Many mobile apps are created, improved, and distributed by volunteers, researchers, and/or small teams with limited financial resources. Others are created by developers who do not use the name attached to any government-issued identification. Others may have good reason to fear handing over their personal information to Google, or any other third party. Those communities are likely to drop out of developing for Android altogether, depriving all Android users of valuable tools. 

Google’s promise that it’s “working on” a program for “students and hobbyists” that may have different requirements falls far short of what is necessary to alleviate these concerns. 

The point here is not that all the apps are necessarily perfect or even safe. The point is that when you set up a gate, you invite authorities to use it to block things they don’t like. And when you build a database, you invite governments (and private parties) to try to get access to that database. If you build it, they will come.  

Imagine you have developed a virtual private network (VPN) and corresponding Android mobile app that helps dissidents, journalists, and ordinary humans avoid corporate and government surveillance. In some countries, distributing that app could invite legal threats and even prosecution. Developers in those areas should not have to trust that Google would not hand over their personal information in response to a government demand just because they want their app to be installable by all Android users. By the same token, technologists that work on Android apps for reporting ICE misdeeds should not have to worry that Google will hand over their personal information to, say, the U.S. Department of Homeland Security. 

Our tech infrastructure’s substantial dependence on just a few platforms is already creating new opportunities for those platforms to be weaponized to serve all kinds of disturbing purposes, from policing to censorship. In this context, it’s more important than ever to support technologies which decentralize and democratize our shared digital commons. A centralized global registration system for Android will inevitably chill this work. 

Not coincidentally, the registration system Google announced would also help cement Google’s outsized competitive power, giving the company an additional window—if it needed one, given the company’s already massive surveillance capabilities—into what apps are being developed, by whom, and how they are being distributed. It’s more than ironic that Google’s announcement came at the same time the company is fighting a court order (in the Epic Games v. Google lawsuit) that will require it to stop punishing developers who distribute their apps through app stores that compete with Google’s own. It’s easy to see how a new registration requirement for developers, potentially enforced by technical measures on billions of Android certified mobile devices, could give Google a new lever for maintaining its app store monopoly.  

EFF has signed on to F-Droid’s open letter. If you care about taking back control of tech, you should too. 

What EFF Needs in a New Executive Director

3 November 2025 at 13:25

By Gigi Sohn, Chair, EFF Board of Directors 

With the impending departure of longtime, renowned, and beloved Executive Director Cindy Cohn, EFF and leadership advisory firm Russell Reynolds Associates have developed a profile for her successor.  While Cindy is irreplaceable, we hope that everyone who knows and loves EFF will help us find our next leader.  

First and foremost, we are looking for someone who’ll meet this pivotal moment in EFF’s history. As authoritarian surveillance creeps around the globe and society grapples with debates over AI and other tech, EFF needs a forward-looking, strategic, and collaborative executive director to bring fresh eyes and new ideas while building on our past successes.  

The San Francisco-based executive director, who reports to our board of directors, will have responsibility over all administrative, financial, development and programmatic activities at EFF.  They will lead a dedicated team of legal, technical, and advocacy professionals, steward EFF’s strong organizational culture, and ensure long-term organizational sustainability and impact. That means being: 

  • Our visionary — partnering with the board and staff to define and advance a courageous, forward-looking strategic vision for EFF; leading development, prioritization, and execution of a comprehensive strategic plan that balances proactive agenda-setting with responsive action; and ensuring clarity of mission and purpose, aligning organizational priorities and resources for maximum impact. 
  • Our face and voice — serving as a compelling, credible public voice and thought leader for EFF’s mission and work, amplifying the expertise of staff and engaging diverse audiences including media, policymakers, and the broader public, while also building and nurturing partnerships and coalitions across the technology, legal, advocacy, and philanthropic sectors. 
  • Our chief money manager — stewarding relationships with individual donors, foundations, and key supporters; developing and implementing strategies to diversify and grow EFF’s revenue streams, including membership, grassroots, institutional, and major gifts; and ensuring financial discipline, transparency, and sustainability in partnership with the board and executive team. 
  • Our fearless leader — fostering a positive, inclusive, high-performing, and accountable culture that honors EFF’s activist DNA while supporting professional growth, partnering with unionized staff, and maintaining a collaborative, constructive relationship with the staff union. 

It’ll take a special someone to lead us with courage, vision, personal integrity, and deep understanding of EFF’s unique role at the intersection of law and technology. For more details — including the compensation range and how to apply — click here for the full position specification. And if you know someone who you believe fits the bill, all nominations (strictly confidential, of course) are welcome at eff@russellreynolds.com 

EFF Stands With Tunisian Media Collective Nawaat

3 November 2025 at 13:37

When the independent Tunisian online media collective Nawaat announced that the government had suspended its activities for one month, the news landed like a punch in the gut for anyone who remembers what the Arab uprisings promised: dignity, democracy, and a free press.

But Tunisia’s October 31 suspension of Nawaat—delivered quietly, without formal notice, and justified under Decree-Law 2011-88—is not just a bureaucratic decision. It’s a warning shot aimed at the very idea of independent civic life.

The silencing of a revolutionary media outlet

Nawaat’s statement, published last week, recounts how the group discovered the suspension: not through any official communication, but by finding the order slipped under its office door. The move came despite Nawaat’s documented compliance with all the legal requirements under Decree 88, the 2011 law that once symbolized post-revolutionary openness for associations.

Instead, the Decree, once seen as a safeguard for civic freedom, is now being weaponized as a tool of control. Nawaat’s team describes the action as part of a broader campaign of harassment: tax audits, financial investigations, and administrative interrogations that together amount to an attempt to “stifle all media resistance to the dictatorship.”

For those who have followed Tunisia’s post-2019 trajectory, the move feels chillingly familiar. Since President Kais Saied consolidated power in 2021, civil society organizations, journalists, and independent voices have faced escalating repression. Amnesty International has documented arrests of reporters, the use of counter-terrorism laws against critics, and the closure of NGOs. And now, the government has found in Decree 88 a convenient veneer of legality to achieve what old regimes did by force.

Adopted in the hopeful aftermath of the revolution, Decree-Law 2011-88 was designed to protect the right to association. It allowed citizens to form organizations without prior approval and receive funding freely—a radical departure from the Ben Ali era’s suffocating controls.

But laws are only as democratic as the institutions that enforce them. Over the years, Tunisian authorities have chipped away at these protections. Administrative notifications, once procedural, have become tools for sanction. Financial transparency requirements have turned into pretexts for selective punishment.

When a government can suspend an association that has complied with every rule, the rule of law itself becomes a performance.

Bureaucratic authoritarianism

What’s happening in Tunisia is not an isolated episode. Across the region, governments have refined the art of silencing dissent without firing a shot. Whether through Egypt’s NGO Law, Morocco’s press code, or Algeria’s foreign-funding restrictions, the outcome is the same: fewer independent outlets, and fewer critical voices.

These are the tools of bureaucratic authoritarianism: the punishment is quiet, plausible, and difficult to contest. A one-month suspension might sound minor, but for a small newsroom like Nawaat—which operates with limited funding and constant political pressure—it can mean disrupted investigations, delayed publications, and lost trust from readers and sources alike.

A decade of resistance

To understand why Nawaat matters, remember where it began. Founded in 2004 under Zine El Abidine Ben Ali’s dictatorship, Nawaat became a rare space for citizen journalism and digital dissent. During the 2011 uprising, its reporting and documentation helped the world witness Tunisia’s revolution.

Over the past two decades, Nawaat has earned international recognition, including an EFF Pioneer Award in 2011, for its commitment to free expression and technological empowerment. It’s not just a media outlet; it’s a living archive of Tunisia’s struggle for dignity and rights.

That legacy is precisely what makes it threatening to the current regime. Nawaat represents a continuity of civic resistance that authoritarianism cannot easily erase.

The cost of silence

Administrative suspensions like this one are designed to send a message: You can be shut down at any time. They impose psychological costs that are harder to quantify than arrests or raids. Journalists start to self-censor. Donors hesitate to renew grants. The public, fatigued by uncertainty, tunes out.

But the real tragedy lies in what this means for Tunisians’ right to know. Nawaat’s reporting on corruption, surveillance, and state violence fills the gaps left by state-aligned media. Silencing it deprives citizens of access to truth and accountability.

As Nawaat’s statement puts it:

“This arbitrary decision aims to silence free voices and stifle all media resistance to the dictatorship.”

The government’s ability to pause a media outlet, even temporarily, sets a precedent that could be replicated across Tunisia’s civic sphere. If Nawaat can be silenced today, so can any association tomorrow.

So what can be done? Nawaat has pledged to challenge the suspension in court, but litigation alone won’t fix a system where independence is eroding from within. What’s needed is sustained, visible, and international solidarity.

Tunisia’s government may succeed in pausing Nawaat’s operations for a month. But it cannot erase the two decades of documentation, dissent, and hope the outlet represents. Nor can it silence the networks of journalists, technologists, and readers who know what is at stake.

EFF has long argued that the right to free expression is inseparable from the right to digital freedom. Nawaat’s suspension shows how easily administrative and legal tools can become weapons against both. When states combine surveillance, regulatory control, and economic pressure, they don’t need to block websites or jail reporters outright—they simply tighten the screws until free expression becomes impossible.

That’s why what happens in Tunisia matters far beyond its borders. It’s a test of whether the ideals of 2011 still mean anything in 2025.

And Nawaat, for its part, has made its position clear:

“We will continue to defend our independence and our principles. We will not be silenced.”

Once Again, Chat Control Flails After Strong Public Pressure

31 October 2025 at 17:08

The European Union Council pushed for a dangerous plan to scan encrypted messages, and once again, people around the world loudly called out the risks, leading the current Danish presidency to withdraw the plan.

EFF has strongly opposed Chat Control since it was first introduced in 2022. The zombie proposal comes back time and time again, and time and time again, it’s been shot down because there’s no public support. The fight is delayed, but not over.

It’s time for lawmakers to stop attempting to compromise encryption under the guise of public safety. Instead of making minor tweaks and resubmitting this proposal over and over, the EU Council should accept that any sort of client-side scanning of devices undermines encryption, and move on to developing real solutions that don’t violate the human rights of people around the world. 

As long as lawmakers continue to misunderstand the way encryption technology works, there is no way forward with message-scanning proposals, not in the EU or anywhere else. This sort of surveillance is not just an overreach; it’s an attack on fundamental human rights. 

The coming EU presidencies should abandon these attempts and work on finding a solution that protects people’s privacy and security.

The Department of Defense Wants Less Proof its Software Works

31 October 2025 at 11:29

When Congress eventually reopens, the 2026 National Defense Authorization Act (NDAA) will be moving toward a vote. This gives us a chance to see the priorities of the Secretary of Defense and his Congressional allies when it comes to the military—and one of those priorities is buying technology, especially AI, with less of an obligation to prove it’s effective and worth the money the government will be paying for it. 

As reported by Lawfare, “This year’s defense policy bill—the National Defense Authorization Act (NDAA)—would roll back data disclosures that help the department understand the real costs of what they are buying, and testing requirements that establish whether what contractors promise is technically feasible or even suited to its needs.” This change comes amid a push from the Secretary of Defense to “Maximize Lethality” by acquiring modern software “at a speed and scale for our Warfighter.” The Senate Armed Services Committee has also expressed interest in making “significant reforms to modernize the Pentagon's budgeting and acquisition operations...to improve efficiency, unleash innovation, and modernize the budget process.”

The 2026 NDAA itself says that the “Secretary of Defense shall prioritize alternative acquisition mechanisms to accelerate development and production” of technology, including an expedited “software acquisition pathway”—a special part of the U.S. code that, if this version of the NDAA passes, will give the Secretary of Defense the power to streamline the buying process so that new technology, or updates to existing technology, can be made operational “in a period of not more than one year from the time the process is initiated…” It also ensures that the new technology “shall not be subjected to” some of the traditional levers of oversight.

All of this signals one thing: speed over due diligence. In a commercial technology landscape where companies are repeatedly found to be overselling or even deceiving people about their product’s technical capabilities—or where police departments are constantly grappling with the reality that expensive technology may not be effective at providing the solutions they’re after—it’s important that the government agency with the most expansive budget has time to test the efficacy and cost-efficiency of new technology. It’s easy for the military or police departments to listen to a tech company’s marketing department and believe their well-rehearsed sales pitch, but Congress should make sure that public money is being used wisely and in a way that is consistent with both civil liberties and human rights. 

The military and those who support its preferred budget should think twice about cutting corners before buying and deploying new technology. The Department of Defense’s posturing does not inspire confidence that the technologically-focused military of tomorrow will be equipped in a way that is effective, efficient, or transparent. 

Age Verification, Estimation, Assurance, Oh My! A Guide to the Terminology

30 October 2025 at 18:37

If you've been following the wave of age-gating laws sweeping across the country and the globe, you've probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we're talking about your rights, your privacy, your data, and who gets to access information online.

So let's clear up the confusion. Here's your guide to the terminology that's shaping these laws, and why you should care about the distinctions.

Age Gating: “No Kids Allowed”

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. It simply refers to the fact that a restriction exists. Think of it as the concept of “you must be this old to enter” without getting into the details of how they’re checking. 

Age Assurance: The Umbrella Term

Think of age assurance as the catch-all category. It covers any method an online service uses to figure out how old you are with some level of confidence. That's intentionally vague, because age assurance includes everything from the most basic check-the-box systems to full-blown government ID scanning.

Age assurance is the big tent that contains all the other terms we're about to discuss below. When a company or lawmaker talks about "age assurance," they're not being specific about how they're determining your age—just that they're trying to. For decades, the internet operated on a “self-attestation” system where you checked a box saying you were 18, and that was it. These new age-verification laws are specifically designed to replace that system. When lawmakers say they want "robust age assurance," what they really mean is "we don't trust self-attestation anymore, so now you need to prove your age beyond just swearing to it."

Age Estimation: Letting the Algorithm Decide

Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you.

This might include:

  • Analyzing your face through a video selfie or photo
  • Examining your voice
  • Looking at your online behavior—what you watch, what you like, what you post
  • Checking your existing profile data

Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, an algorithm analyzes your face, and it spits out an estimated age range. Sounds convenient, right?

Here's the problem: “estimation” is exactly that, a guess, and it is inherently imprecise. Age estimation is notoriously unreliable, especially for teenagers—the exact group these laws claim to protect. An algorithm might tell a website you're somewhere between 15 and 19 years old. That's not helpful when the cutoff is 18, and what's at stake is a young person's constitutional rights.
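
To make that concrete, here is a purely illustrative sketch, in Kotlin, of why a range estimate that straddles the legal cutoff cannot make the decision on its own. The types and numbers are invented for this example; no real estimation vendor exposes an interface like this.

```kotlin
// Illustrative sketch only: why an estimated age *range* cannot resolve a hard legal cutoff.

data class AgeEstimate(val low: Int, val high: Int)

enum class GateDecision { ALLOW, DENY, INDETERMINATE }

fun decide(estimate: AgeEstimate, cutoff: Int = 18): GateDecision = when {
    estimate.low >= cutoff -> GateDecision.ALLOW        // entire range is at or above the cutoff
    estimate.high < cutoff -> GateDecision.DENY         // entire range is below the cutoff
    else -> GateDecision.INDETERMINATE                  // range straddles the cutoff
}

fun main() {
    // "Somewhere between 15 and 19" straddles an 18+ cutoff, so the system cannot
    // decide, and the user typically gets escalated to document-based verification.
    println(decide(AgeEstimate(15, 19)))  // prints INDETERMINATE
}
```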

And it gets worse. These systems consistently fail for certain groups:

  • People of color
  • Trans individuals
  • People with disabilities

When estimation fails (and it often does), users get kicked to the next level: actual verification. Which brings us to…

Age Verification: “Show Me Your Papers”

Age verification is the most invasive option. This is where you have to prove your exact age, down to your date of birth, rather than, for example, prove that you have crossed some age threshold (like 18 or 21 or 65). EFF generally refers to most age gates and mandates on young people’s access to online information as “age verification,” as most of them typically require you to submit hard identifiers like:

  • Government-issued ID (driver's license, passport, state ID)
  • Credit card information
  • Utility bills or other documents
  • Biometric data

This is what a lot of new state laws are actually requiring, even when they use softer language like "age assurance." Age verification doesn't just confirm you're over 18; it reveals your full identity. Your name, address, date of birth, photo—everything.

Here's the critical thing to understand: age verification is really identity verification. You're not just proving you're old enough—you're proving exactly who you are. And that data has to be stored, transmitted, and protected by every website that collects it.

We already know how that story ends. Data breaches are inevitable. And when a database containing your government ID tied to your adult content browsing history gets hacked—and it will—the consequences can be devastating.

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they're actually proposing. A law that requires "age assurance" sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it's not moderate at all—it's mass surveillance. Similarly, when Instagram says it's using "age estimation" to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver's license instead, the privacy promise evaporates.

Here's the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don't know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don't know that verification systems have error rates. They don't even seem to understand that the terms they're using mean different things. The fact that their terminology is all over the place—using "age assurance," "age verification," and "age estimation" interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. "Assurance" sounds gentle. "Verification" sounds official. "Estimation" sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it's your privacy, your data, and your ability to access the internet without constant identity checks. Don't let fuzzy language disguise what these systems really do.

❤️ Let's Sue the Government! | EFFector 37.15

29 October 2025 at 13:06

There are no tricks in EFF's EFFector newsletter, just treats to keep you up-to-date on the latest in the fight for digital privacy and free expression. 

In our latest issue, we're explaining a new lawsuit to stop the U.S. government's viewpoint-based surveillance of online speech; sharing even more tips to protect your privacy; and celebrating a victory for transparency around AI police reports.

Prefer to listen in? Check out our audio companion, where EFF Staff Attorney Lisa Femia explains why EFF is suing to stop the Trump administration's ideological social media surveillance program. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.15 - ❤️ LET'S SUE THE GOVERNMENT!

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Science Must Decentralize

24 October 2025 at 16:55

Knowledge production doesn’t happen in a vacuum. Every great scientific breakthrough is built on prior work, and an ongoing exchange with peers in the field. That’s why we need to address the threat of major publishers and platforms having an improper influence on how scientific knowledge is accessed—or outright suppressed.

In the digital age, the collaborative and often community-governed effort of scholarly research has gone global and unlocked unprecedented potential to improve our understanding and quality of life. That is, if we let it. Publishers continue to monopolize access to life-saving research and increase the burden on researchers through article processing charges and a pyramid of volunteer labor. This exploitation makes a mockery of open inquiry and makes the denial of access a serious human rights issue.

While alternatives like Diamond Open Access are promising, crashing through publishing gatekeepers isn’t enough. Large intermediary platforms are capturing other aspects of the research process—inserting themselves between researchers, and between researchers and these published works—through platformization.

Funneling scholars into a few major platforms isn’t just annoying; it’s corrosive to privacy and intellectual freedom. Enshittification has come for research infrastructure, turning everyday tools into avenues for surveillance. Most professors are now worried their research is being scrutinized by academic bossware, forcing them to chase arbitrary metrics which don’t always reflect research quality. And as scholars play this numbers game, a growing threat of surveillance in scholarly publishing gives these measures a menacing tilt, chilling both the publication of and access to targeted research areas. These risks spike in the midst of governmental campaigns to muzzle scientific knowledge, buttressed by a scourge of platform censorship on corporate social media.

The only antidote to this ‘platformization’ is Open Science and decentralization. The infrastructure we rely on must be built in the open, on interoperable standards, and be hostile to corporate (or governmental) takeovers. Universities and the science community are well situated to lead this fight. As we’ve seen in EFF’s Tor University Challenge, promoting access to knowledge and public interest infrastructure is aligned with the core values of higher education. 

Using social media as an example, universities have a strong interest in promoting the work being done at their campuses far and wide. This is where traditional platforms fall short: algorithms typically prioritize paid content, downrank off-site links, and promote sensational claims to drive engagement. When users are free from enshittification and can themselves control the platform’s algorithms, as they can on platforms like Bluesky, scientists get more engagement and find interactions more useful.

Institutions play a pivotal role in encouraging the adoption of these alternatives, ranging from leveraging existing IT support to assist with account use and verification, all the way to shouldering some of the hosting with Mastodon instances and/or Bluesky PDS for official accounts. This support is good for the research, good for the university, and makes our systems of science more resilient to attacks on science and the instability of digital monocultures.

This subtle influence of intermediaries can also appear in other tools relied on by researchers, but there are a number of open alternatives and interoperable tools developed for everything from citation management and data hosting to online chat among collaborators. Individual scholars and research teams can implement these tools today, but real change depends on institutions investing in tech that puts community before shareholders.

When infrastructure is too centralized, gatekeepers gain new powers to capture, enshittify, and censor. The result is a system that is less useful, less stable, and more costly to access. Science thrives on sharing and access equity, and its future depends on a global and democratic revolt against predatory centralized platforms.

EFF is proud to celebrate Open Access Week.

Joint Statement on the UN Cybercrime Convention: EFF and Global Partners Urge Governments Not to Sign

27 October 2025 at 06:20

Today, EFF joined a coalition of civil society organizations in urging UN Member States not to sign the UN Convention Against Cybercrime. For those that move forward despite these warnings, we urge them to take immediate and concrete steps to limit the human rights harms this Convention will unleash. These harms are likely to be severe and will be extremely difficult to prevent in practice.

The Convention obligates states to establish broad electronic surveillance powers to investigate and cooperate on a wide range of crimes—including those unrelated to information and communication systems—without adequate human rights safeguards. It requires governments to collect, obtain, preserve, and share electronic evidence with foreign authorities for any “serious crime”—defined as an offense punishable under domestic law by at least four years’ imprisonment (or a higher penalty).

In many countries, merely speaking freely; expressing a nonconforming sexual orientation or gender identity; or protesting peacefully can constitute a serious criminal offense per the definition of the convention. People have faced lengthy prison terms, or even more severe acts like torture, for criticizing their governments on social media, raising a rainbow flag, or criticizing a monarch. 

In today’s digital era, nearly every message or call generates granular metadata—revealing who communicates with whom, when, and from where—that routinely traverses national borders through global networks. The UN cybercrime convention, as currently written, risks enabling states to leverage its expansive cross-border data-access and cooperation mechanisms to obtain such information for political surveillance—abusing the Convention’s mechanisms to monitor critics, pressure their families, and target marginalized communities abroad.

As abusive governments increasingly rely on questionable tactics to extend their reach beyond their borders—targeting dissidents, activists, and journalists worldwide—the UN Cybercrime Convention risks becoming a vehicle for globalizing repression, enabling an unprecedented multilateral infrastructure for digital surveillance that allows states to access and exchange data across borders in ways that make political monitoring and targeting difficult to detect or challenge.

EFF has long sounded the alarm over the UN Cybercrime Treaty’s sweeping powers of cross-border cooperation and its alarming lack of human-rights safeguards. As the Convention opens for signature on October 25–26, 2025 in Hanoi, Vietnam—a country repeatedly condemned by international rights groups for jailing critics and suppressing online speech—the stakes for global digital freedom have never been higher.

The Convention’s many flaws cannot easily be mitigated because it fundamentally lacks a mechanism for suspending states that systematically fail to respect human rights or the rule of law. States must refuse to sign or ratify the Convention. 

Read our full letter here.

When AI and Secure Chat Meet, Users Deserve Strong Controls Over How They Interact

23 October 2025 at 13:23

Both Google and Apple are cramming new AI features into their phones and other devices, and neither company has offered clear ways to control which apps those AI systems can access. Recent issues around WhatsApp on both Android and iPhone demonstrate how these interactions can go sideways, risking revealing chat conversations beyond what you intend. Users deserve better controls and clearer documentation around what these AI features can access.

After diving into how Google Gemini and Apple Intelligence (and in some cases Siri) currently work, we didn’t always find clear answers to questions about how data is stored, who has access, and what it can be used for.

At a high level, when you compose a message with these tools, the companies can usually see the contents of those messages and receive at least a temporary copy of the text on their servers.

When receiving messages, things get trickier. When you use an AI like Gemini or a feature like Apple Intelligence to summarize or read notifications, we believe companies should be doing that content processing on-device. But poor documentation and weak guardrails create issues: they led us deep into documentation rabbit holes that still failed to clarify the privacy practices as clearly as we’d like.

We’ll dig into the specifics below as well as potential solutions we’d like to see Apple, Google, and other device-makers implement, but first things first, here’s what you can do right now to control access:

Control AI Access to Secure Chat on Android and iOS

Here are some steps you can take to control access if you want nothing to do with the device-level AI features' integration and don’t want to risk accidentally sharing the text of a message outside of the app you’re using.

How to Check and Limit What Gemini Can Access

If you’re using Gemini on your Android phone, it’s a good time to review your settings to ensure things are set up how you want. Here’s how to check each of the relevant settings:

  • Disable Gemini App Activity: Gemini App Activity is a history Google stores of all your interactions with Gemini. It’s enabled by default. To disable it, open Gemini (depending on your phone model, you may or may not even have the Google Gemini app installed. If you don’t have it installed, you don’t really need to worry about any of this). Tap your profile picture > Gemini Apps Activity, then change the toggle to either “Turn off,” or “Turn off and delete activity” if you want to delete previous conversations. If the option reads “Turn on,” then Gemini Apps Activity is already turned off. 
  • Control app and notification access: You can control which apps Gemini can access by tapping your profile picture > Apps, then scrolling down and disabling the toggle next to any apps you do not want Gemini to access. If you do not want Gemini to potentially access the content that appears in notifications, open the Settings app and revoke notification access from the Google app.
  • Delete the Gemini app: Depending on your phone model, you might be able to delete the Gemini app and revert to using Google Assistant instead. You can do so by long-pressing the Gemini app and selecting the option to delete. 

How to Check and Limit what Apple Intelligence and Siri Can Access

Similarly, there are a few things you can do to clamp down on what Apple Intelligence and Siri can do: 

  • Disable the “Use with Siri Requests” option: If you want to continue using Siri, but don’t want to accidentally use it to send messages through secure messaging apps, like WhatsApp, then you can disable that feature by opening Settings > Apps > [app name], and disabling “Use with Siri Requests,” which turns off the ability to compose messages with Siri and send them through that app.
  • Disable Apple Intelligence entirely: Apple Intelligence is an all-or-nothing setting on iPhones, so if you want to avoid any potential issues your only option is to turn it off completely. To do so, open Settings > Apple Intelligence & Siri, and disable "Apple Intelligence" (you will only see this option if your device supports Apple Intelligence; if it doesn't, the menu will only be for "Siri"). You can also disable certain features, like "writing tools," using Screen Time restrictions. Siri can't be universally turned off in the same way, though you can turn off the options under "Talk to Siri" to make it so you can't speak to it. 

For more information about cutting off AI access at different levels in other apps, this Consumer Reports article covers other platforms and services.

Why It Matters 

Sending Messages Has Different Privacy Concerns than Receiving Them

Let’s start with a look at how Google and Apple integrate their AI systems into message composition, using WhatsApp as an example.

Google Gemini and WhatsApp

On Android, you can optionally link WhatsApp and Gemini together so you can then initiate various actions for sending messages from the Gemini app, like “Call Mom on WhatsApp” or “Text Jason on WhatsApp that we need to cancel our secret meeting, but make it a haiku.” This feature raised red flags for users concerned about privacy.

By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are stored forever, subject to human review, and are used to train Google’s products. So, unless you change it, when you use Gemini to compose and send a message in WhatsApp then the message you composed is visible to Google.

If you turn the activity off, interactions are still stored for 72 hours. Google’s documentation claims that even though messages are stored, those conversations aren't reviewed or used to improve Google machine learning technologies, though that appears to be an internal policy choice with no technical limits preventing Google from accessing those messages.

The simplicity of invoking Gemini to compose and send a message may lead to a false sense of privacy. Notably, other secure messaging apps, like Signal, do not offer this Gemini integration.

For comparison’s sake, let’s see how this works with Apple devices.

Siri and WhatsApp

The closest comparison to this process on iOS is to use Siri, which, it is claimed, will eventually be a part of Apple Intelligence. Currently, Apple’s AI message composition tools are not available for third-party apps like Signal and WhatsApp.

According to its privacy policy, when you dictate a message through Siri to send to WhatsApp (or anywhere else), the message, including metadata like the recipient phone number and other identifiers, is sent to Apple’s servers. This was confirmed by researchers to include the text of messages sent to WhatsApp. When you use Siri to compose a WhatsApp message, the message gets routed to both Apple and WhatsApp. Apple claims it does not store this transcript unless you’ve opted into “Improve Siri and Dictation.” WhatsApp defers to Apple’s support for data handling concerns. This is similar to how Google handles speech-to-text prompts.

In response to that research, Apple said this was expected behavior with an app that uses SiriKit—the extension that allows third-party apps to integrate with Siri—like WhatsApp does.

Both Siri and Apple Intelligence can sometimes run locally on-device, and other times need to rely on Apple-managed cloud servers to complete requests. Apple Intelligence can use the company’s Private Cloud Compute, but Siri doesn’t have a similar feature.

The ambiguity around where data goes makes it overly difficult to decide whether you are comfortable with the sort of privacy trade-off that using features like Siri or Apple Intelligence might entail.

How Receiving Messages Works

Sending encrypted messages is just one half of the privacy puzzle. What happens on the receiving end matters too. 

Google Gemini

By default, the Gemini app doesn’t have access to the text inside secure messaging apps or to notifications. But you can grant access to notifications using the Utilities app. Utilities can read, summarize, and reply to notifications, including in WhatsApp and Signal (it can also read notifications aloud through headphones).

We could not find anything in Google’s Utilities documentation that clarifies what information is collected, stored, or sent to Google from these notifications. When we reached out to Google, the company responded that it “builds technical data protections that safeguard user data, uses data responsibly, and provides users with tools to control their Gemini experience.” In other words, Google has no technical limitation preventing it from accessing the text of notifications if you’ve enabled the feature in the Utilities app. This could expose any notifications routed through the Utilities app to the Gemini app, where they can be accessed internally or by third parties. Google needs to make its data handling explicit in its public documentation.
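
For context on what granting "notification access" technically means, the sketch below shows Android's standard notification-listener mechanism, which is the platform feature this permission controls. It is a generic illustration of what any app with notification access can read, not Google's actual implementation (which isn't documented), and the class name is made up.

```kotlin
// Generic illustration of Android's notification access (NotificationListenerService).
// Any app the user grants this access can read notification contents, including message
// previews from end-to-end encrypted apps once they are displayed on the device.
// The service must also be declared in the app's manifest and enabled by the user.

import android.app.Notification
import android.service.notification.NotificationListenerService
import android.service.notification.StatusBarNotification
import android.util.Log

class ExampleNotificationReader : NotificationListenerService() {  // hypothetical name
    override fun onNotificationPosted(sbn: StatusBarNotification) {
        val extras = sbn.notification.extras
        val title = extras.getCharSequence(Notification.EXTRA_TITLE)  // e.g. sender name
        val text = extras.getCharSequence(Notification.EXTRA_TEXT)    // e.g. message preview
        // At this point the sender and message preview are plain text to the listener,
        // regardless of the encryption used in transit; what happens next is up to the app.
        Log.d("ExampleNotificationReader", "From ${sbn.packageName}: $title / $text")
    }
}
```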

If you use encrypted communications apps and have granted access to notifications, then it is worth considering disabling that feature or controlling what’s visible in your notifications on an app-level.

Apple Intelligence

Apple is more clear about how it handles this sort of notification access.

Siri can read and reply to messages with the “Announce Notifications” feature. With this enabled, Siri can read notifications out loud on select headphones or via CarPlay. In a press release, Apple states, “When a user talks or types to Siri, their request is processed on device whenever possible. For example, when a user asks Siri to read unread messages… the processing is done on the user’s device. The contents of the messages aren’t transmitted to Apple servers, because that isn’t necessary to fulfill the request.”

Apple Intelligence can summarize notifications from any app that you’ve enabled notifications on. Apple is clear that these summaries are generated on your device, “when Apple Intelligence provides you with preview summaries of your emails, messages, and notifications, these summaries are generated by on-device models.” This means there should be no risk that the text of notifications from apps like WhatsApp or Signal get sent to Apple’s servers just to summarize them.

New AI Features Must Come With Strong User Controls

As more device-makers cram AI features into their devices, the more necessary it is for us to have clear and simple controls over what personal data these features can access on our devices. If users do not have control over when a text leaves a device for any sort of AI processing—whether that’s to a “private” cloud or not—it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications.

Per-app AI Permissions

Google, Apple, and other device makers should add a device-level AI permission to their phones, just like they do for other potentially invasive privacy features, like location sharing. You should be able to tell the operating system’s AI not to access an app, even if that comes at the “cost” of missing out on some features. The setting should be straightforward and easy to understand in ways the Gemini and Apple Intelligence controls currently are not.
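
No such permission exists today, so the sketch below is purely hypothetical: it borrows the deny-by-default shape of existing runtime permissions (like location) to show what the per-app control described above could look like. All names and package strings are invented for illustration.

```kotlin
// Hypothetical sketch only: neither Android nor iOS offers a per-app AI permission today.
// It mirrors the deny-by-default behavior of existing runtime permissions to show how a
// user-controlled, per-app toggle for AI access could work. All names are invented.

class AiAccessPolicy {
    // User-controlled toggles: which apps the system AI may read from.
    private val grantedApps = mutableSetOf<String>()

    fun grant(packageName: String) { grantedApps += packageName }
    fun revoke(packageName: String) { grantedApps -= packageName }
    fun isGranted(packageName: String) = packageName in grantedApps
}

class SystemAssistant(private val policy: AiAccessPolicy) {
    // The assistant checks the per-app toggle before touching app content. If the user
    // has said no, the feature degrades instead of silently reading the data.
    fun summarizeNotification(sourcePackage: String, text: String): String? {
        if (!policy.isGranted(sourcePackage)) return null
        return "Summary of: $text"  // placeholder for (ideally on-device) processing
    }
}

fun main() {
    val policy = AiAccessPolicy()
    val assistant = SystemAssistant(policy)
    // Denied by default: the assistant gets nothing from the secure chat app.
    println(assistant.summarizeNotification("org.example.securechat", "hi"))  // null
    // Only after an explicit, per-app grant does the assistant see the text.
    policy.grant("org.example.securechat")
    println(assistant.summarizeNotification("org.example.securechat", "hi"))  // Summary of: hi
}
```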

Offer On-Device-Only Modes

Device-makers should offer an “on-device only” mode for those interested in using some features without having to try to figure out what happens on device or on the cloud. Samsung offers this, and both Google and Apple would benefit from a similar option.

Improve Documentation

Both Google and Apple should improve their documentation about how these features interact with various apps. Apple doesn’t seem to clarify notification processing privacy anywhere outside of a press release, and we couldn’t find anything about Google’s Utilities privacy at all. We appreciate tools like Gemini Apps Activity as a way to audit what the company collects, but vague information like “Prompted a Communications query” is only useful if there’s an explanation somewhere about what that means.

The current user options are not enough. It’s clear that the AI features device-makers add come with significant confusion about their privacy implications, and it’s time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered to both users and developers who want real, transparent safeguards over how a company accesses their private data and communications.

Civil Disobedience of Copyright Keeps Science Going

23 October 2025 at 12:17

Creating and sharing knowledge are defining traits of humankind, yet copyright law has grown so restrictive that it can require acts of civil disobedience to ensure that students and scholars have the books they need and to preserve swaths of culture from being lost forever.

Reputable research generally follows a familiar pattern: Scientific articles are written by scholars based on their research, often with public funding. Those articles are then peer-reviewed by other scholars in their fields and revisions are made according to those comments. Afterwards, most large publishers expect to be given the copyright on the article as a condition of packaging it up and selling it back to the institutions that employ the academics who did the research and to the public at large. Because research is valuable and because copyright is a monopoly on disseminating the articles in question, these publishers can charge exorbitant fees that place a strain even on wealthy universities and are simply out of reach for the general public or universities with limited budgets, such as those in the global south. The result is a global human rights problem.

This model is broken, yet science goes on thanks to widespread civil disobedience of the copyright regime that locks up the knowledge created by researchers. Some turn to social media to ask that a colleague with access share articles they need (despite copyright’s prohibitions on sharing). Certainly, at least some such sharing is protected fair use, but scholars should not have to seek a legal opinion or risk legal threats from publishers to share the collective knowledge they generate.

Even more useful, though on shakier legal ground, are so-called “shadow archives” and aggregators such as SciHub, Library Genesis (LibGen), Z-Library, or Anna’s Archive. These are the culmination of efforts from volunteers dedicated to defending science.

SciHub alone handles tens of millions of requests for scientific articles each year and remains operational despite adverse court rulings, thanks both to being based in Russia and to a community of academics who see it as an ethical response to the high access barriers that publishers impose, and who provide it their log-on credentials so it can retrieve requested articles. SciHub and LibGen are continuations of samizdat, the Soviet-era practice of disobeying state censorship in the interests of learning and free speech.

Unless publishing gatekeepers adopt drastically more equitable practices and become partners in disseminating knowledge, they will continue to lose ground to open access alternatives, legal or otherwise.

EFF is proud to celebrate Open Access Week.
