
Seven Billion Reasons for Facebook to Abandon its Face Recognition Plans

The New York Times reported that Meta is considering adding face recognition technology to its smart glasses. According to an internal Meta document, the company may launch the product “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” 

This is a bad idea that Meta should abandon. If adopted and released to the public, it would violate the privacy rights of millions of people and cost the company billions of dollars in legal battles.   

Your biometric data, such as your faceprint, is among the most sensitive information a company can collect. Associated risks include mass surveillance, data breaches, and discrimination. Adding this technology to glasses worn on the street also raises safety concerns.

 This kind of face recognition feature would require the company to collect a faceprint from every person who steps into view of the camera-equipped glasses to find a match. Meta cannot possibly obtain consent from everyone—especially bystanders who are not Meta users.  

Dozens of state laws consider biometric information to be sensitive and require companies to implement strict protections to collect and process it, including affirmative consent.  

Meta Should Know the Privacy and Legal Risks  

Meta should already know the privacy risks of face recognition technology, after abandoning related technology and paying nearly $7 billion in settlements a few years ago.  

In November 2021, Meta announced that it would shut down its tool that scanned the face of every person in photos posted on the platform. At the time, Meta also announced that it would delete more than a billion face templates. 

Two years before that, in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission for $5 billion. The case included allegations that Facebook’s face recognition settings were confusing and deceptive. At the time, the company agreed to obtain consent before running face recognition on users in the future.

In March 2021, the company agreed to a $650 million class action settlement brought by Illinois consumers under the state's strong biometric privacy law. 

And most recently, in July 2024, Meta agreed to pay $1.4 billion to settle claims that its defunct face recognition system violated Texas law.  

Privacy Advocates Will Continue to Focus Our Resources on Meta

 Meta’s conclusion that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.  

Now more than ever, people have seen the real-world risk of invasive technology. The public has recoiled at masked immigration agents roving cities with phones equipped with a face recognition app called Mobile Fortify. And Amazon Ring just experienced a huge backlash when people realized that a feature marketed for finding lost dogs could one day be repurposed for mass biometric surveillance.  

The public will continue to resist these privacy invasive features. And EFF, other civil liberties groups, and plaintiffs’ attorneys will be here to help. We urge privacy regulators and attorneys general to step up to investigate as well.  

  •  

Discord Voluntarily Pushes Mandatory Age Verification Despite Recent Data Breach

Discord has begun rolling out mandatory age verification and the internet is, understandably, freaking out.

At EFF, we’ve been raising the alarm about age verification mandates for years. In December, we launched our Age Verification Resource Hub to push back against laws and platform policies that require users to hand over sensitive personal information just to access basic online services. At the time, age gates were largely enforced in jurisdictions where they were mandated by law. Now they’re landing on platforms and in places where they’re not required.

Beginning in early March, users whom Discord either (a) estimates to be under 18, or (b) doesn't have enough information about, may find themselves locked into a “teen-appropriate experience.” That means content filters, age gates, restrictions on direct messages and friend requests, and the inability to speak in “Stage channels,” which are the large-audience audio spaces that power many community events. Discord says most adults may be sorted automatically through a new “age inference” system that relies on account tenure, device and activity data, and broader platform patterns. Those whose age can't be inferred for lack of information, or who are estimated not to be adults, will be asked to scan their face or upload a government ID through a third-party vendor if they want to avoid the default teen account restrictions.

We’ve written extensively about why age verification mandates are a censorship and surveillance nightmare. Discord’s shift only reinforces those concerns. Here’s why:

The 2025 Breach and What's Changed Since

Discord literally won our 2025 “We Still Told You So” Breachies Award. Last year, attackers accessed roughly 70,000 users’ government IDs, selfies, and other sensitive information after compromising Discord’s third-party customer support system.

To be clear: Discord is no longer using that system, which involved routing ID uploads through its general ticketing system for age verification. It now uses dedicated age verification vendors (k-ID globally and Persona for some users in the United Kingdom).

That’s an improvement. But it doesn’t eliminate the underlying potential for data breaches and other harms. Discord says that it will delete records of any user-uploaded government IDs, and that any facial scans will never leave users’ devices. But platforms are closed-source, audits are limited, and history shows that data (especially this ultra-valuable identity data) will leak—whether through hacks, misconfigurations, or retention mistakes. Users are being asked to simply trust that this time will be different.

Age Verification and Anonymous Speech

For decades, we’ve taught young people a simple rule: don’t share personal information with strangers online.

Age verification complicates that advice. Suddenly, some Discord users will be asked to submit a government ID or facial scan to access certain features if Discord's age-inference technology fails. Discord has said on its blog that it will not associate a user’s ID with their account (using that information only to confirm their age) and that identifying documents won’t be retained. We take those commitments seriously. However, users have little independent visibility into how those safeguards operate in practice or whether they are sufficient to prevent identification.

Even if Discord can technically separate IDs from accounts, many users are understandably skeptical, especially after the platform’s recent breach involving age-verification data. For people who rely on pseudonymity, being required to upload a face scan or government ID at all can feel like crossing a line.

Many people rely on anonymity to speak freely. LGBTQ+ youth, survivors of abuse, political dissidents, and countless others use aliases to explore identity, find support, and build community safely. When identity checks become a condition of participation, many users will simply opt out. The chilling effect isn’t only about whether an ID is permanently linked to an account; it’s about whether users trust the system enough to participate in the first place. When you’re worried that what you say can be traced back to your government ID, you speak differently—or not at all.

No one should have to choose between accessing online communities and protecting their privacy.

Age Verification Systems Are Not Ready for Prime Time

Discord says it is trying to address privacy concerns by using device-based facial age estimation and separating government IDs from user accounts, retaining only a user’s age rather than their identity documents. This is meant to reduce the risks associated with collecting and retaining this sensitive data. However, even when privacy safeguards are in place, we are faced with another problem: there is no current technology that is fully privacy-protective, universally accessible, and consistently accurate. Facial age estimation tools are notoriously unreliable, particularly for people of color, trans and nonbinary people, and people with disabilities. Stories of people bypassing these facial age estimation tools have proliferated across the internet. And when systems get it wrong, users may be forced into appeals processes or required to submit more documentation, such as government-issued IDs. That would exclude those whose appearance doesn’t match their documents and the millions of people around the world who don’t have government-issued identity documents at all.

Even newer approaches (things like age inference, behavior tracking, financial database checks, digital ID systems) expand the web of data collection, and carry their own tradeoffs around access and error. As we mentioned earlier, no current approach is simultaneously privacy-protective, universally accessible, and consistently accurate across all demographics. 

That’s the challenge: the technology itself is not fit for the sweeping role platforms are asking it to play.


The Aftermath

Discord reports over 200 million monthly active users, and is one of the largest platforms used by gamers to chat. The video game industry is larger than movies, TV, and music combined, and Discord represents an almost-default option for gamers looking to host communities.

Many communities, including open-source projects, sports teams, fandoms, friend groups, and families, use Discord to stay connected. If communities or individuals are wrongly flagged as minors, or asked to complete the age verification process, they may face a difficult choice: submit to facial scans or ID checks, or accept a more restricted “teen” experience. For those who decline to go through the process, the result can mean reduced functionality, limited communication tools, and the chilling effects that follow. 

Most importantly, Discord did not have to “comply in advance” by requiring age verification for all users, whether or not they live in a jurisdiction that mandates it. Other social media platforms and their trade groups have fought back against more than a dozen age verification laws in the U.S., and Reddit has now taken the legal fight internationally. For a platform with as much market power as Discord, voluntarily imposing age verification is unacceptable. 

So You’ve Hit an Age Gate. Now What?

Discord should reconsider whether expanding identity checks is worth the harm to its communities. But in the meantime, many users are facing age checks today.

That’s why we created our guide, “So You’ve Hit an Age Gate. Now What?” It walks through practical steps to minimize risk, such as:

  • Submit the least amount of sensitive data possible.
  • Ask: What data is collected? Who can access it? How long is it retained?
  • Look for evidence of independent, security-focused audits.
  • Be cautious about background details in selfies or ID photos.

There is unfortunately no perfect option, only tradeoffs. And every user will have their own unique set of safety concerns to consider. Amidst this confusion, our goal is to help keep you informed, so you can make the best choices for you and your community.

In light of the harms imposed by age-verification systems, EFF encourages all services to stop adopting these systems when they are not mandated by law. And lawmakers around the world who are considering bills that would make Discord’s approach the norm for every platform should watch this backlash and similarly move away from the idea.

If you care about privacy, free expression, and the right to participate online without handing over your identity, now is the time to speak up.

Join us in the fight.

  •  

🗣 Homeland Security Wants Names | EFFector 38.3

Criticize the government online? The Department of Homeland Security (DHS) might ask Google to cough up your name. By abusing an investigative tool called "administrative subpoenas," DHS has been demanding that tech companies hand over users' names, locations, and more. We're explaining how companies can stand up for users—and covering the latest news in the fight for privacy and free speech online—with our EFFector newsletter.

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks our campaign to expand end-to-end encryption protections, a bill to stop government face scans from Immigration and Customs Enforcement (ICE) and others, and why Section 230 remains the best available system to protect everyone’s ability to speak online.


Prefer to listen in? In our audio companion, EFF Senior Staff Attorney F. Mario Trujillo explains how Homeland Security's lawless subpoenas differ from court orders. Find the conversation on YouTube or the Internet Archive.


Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against unlawful government surveillance when you support EFF today!

  •  

“Free” Surveillance Tech Still Comes at a High and Dangerous Cost

Surveillance technology vendors, federal agencies, and wealthy private donors have long helped provide local law enforcement “free” access to surveillance equipment that bypasses local oversight. The result is predictable: serious accountability gaps and data pipelines to other entities, including Immigration and Customs Enforcement (ICE), that expose millions of people to harm.

The cost of “free” surveillance tools — like automated license plate readers (ALPRs), networked cameras, face recognition, drones, and data aggregation and analysis platforms — is measured not in tax dollars, but in the erosion of civil liberties. 


The collection and sharing of our data quietly generates detailed records of people’s movements and associations that can be exposed, hacked, or repurposed without their knowledge or consent. Those records weaken sanctuary and First Amendment protections while facilitating the targeting of vulnerable people.   

Cities can and should use their power to reject federal grants, vendor trials, donations from wealthy individuals, or participation in partnerships that facilitate surveillance and experimentation with spy tech. 

If these projects are greenlit, oversight is imperative. Mechanisms like public hearings, competitive bidding, public records transparency, and city council supervision help ensure these acquisitions include basic safeguards — like use policies, audits, and consequences for misuse — to protect the public from abuse and from creeping contracts that grow into whole suites of products.

Clear policies and oversight mechanisms must be in place before using any surveillance tools, free or not, and communities and their elected officials must be at the center of every decision about whether to bring these tools in at all.

Here are some of the most common ways “free” surveillance tech makes its way into communities.

Trials and Pilots

Police departments are regularly offered free access to surveillance tools and software through trials and pilot programs that often aren’t accompanied by appropriate use policies. In many jurisdictions, trials do not trigger the same requirements to go before decision-makers outside the police department. This means the public may have no idea that a pilot program for surveillance technology is happening in their city. 


In Denver, Colorado, the police department is running trials of possible unmanned aerial vehicles (UAVs) for a drone-as-first-responder (DFR) program from two competing drone vendors: Flock Safety Aerodome drones (through August 2026) and drones from the company Skydio, partnering with Axon, the multi-billion dollar police technology company behind tools like Tasers and AI-generated police reports. Drones create unique issues given their vantage for capturing private property and unsuspecting civilians, as well as their capacity to make other technologies, like ALPRs, airborne. 

Functional, Even Without Funding 

We’ve seen cities decide not to fund a tool, or run out of funding for it, only to have a company continue providing it in the hope that money will turn up. This happened in Fall River, Massachusetts, where the police department decided not to fund ShotSpotter’s $90,000 annual cost and its frequent false alarms, but continued using the system when the company provided free access. 


In May 2025, Denver's city council unanimously rejected a $666,000 contract extension for Flock Safety ALPR cameras after weeks of public outcry over mass surveillance data sharing with federal immigration enforcement. But Mayor Mike Johnston’s office allowed the cameras to keep running through a “task force” review, effectively extending the program even after the contract was voted down. In response, the Denver Taskforce to Reimagine Policing and Public Safety and Transforming Our Communities Alliance launched a grassroots campaign demanding the city “turn Flock cameras off now,” a reminder that when surveillance starts as a pilot or time‑limited contract, communities often have to fight not just to block renewals but to shut the systems off.

 Importantly, police technology companies are developing more features and subscription-based models, so what’s “free” today frequently results in taxpayers footing the bill later. 

Gifts from Police Foundations and Wealthy Donors

Police foundations and the wealthy have pushed surveillance-driven agendas in their local communities by donating equipment and making large monetary gifts, another means of acquiring these tools without public oversight or buy-in.

In Atlanta, the Atlanta Police Foundation (APF) attempted to use its position as a private entity to circumvent transparency. Following a court challenge from the Atlanta Community Press Collective and Lucy Parsons Labs, a Georgia court determined that the APF must comply with public records laws related to some of its actions and purchases on behalf of law enforcement.

In San Francisco, billionaire Chris Larsen has financially supported a supercharging of the city’s surveillance infrastructure, donating $9.4 million to fund the San Francisco Police Department’s (SFPD) Real-Time Investigation Center, where a menu of surveillance technologies and data come together to surveil the city’s residents. This move comes after the billionaire backed a ballot measure, which passed in March 2025, eroding the city’s surveillance technology law and allowing the SFPD free rein to use new surveillance technologies for a full year without oversight.

Free Tech for Federal Data Pipelines

Federal grants and Department of Homeland Security funding are another way surveillance technology appears free to local governments, only to lock municipalities into long‑term data‑sharing and recurring costs. 

Through the Homeland Security Grant Program, which includes the State Homeland Security Program (SHSP) and the Urban Area Security Initiative (UASI), and Department of Justice programs like Byrne JAG, the federal government reimburses states and cities for "homeland security" equipment and software, including law‑enforcement surveillance tools, analytics platforms, and real‑time crime centers. Grant guidance and vendor marketing materials make clear that these funds can be used for automated license plate readers, integrated video surveillance and analytics systems, and centralized command‑center software—in other words, purchases framed as counterterrorism investments but deployed in everyday policing.

Vendors have learned to design products around this federal money, pitching ALPR networks, camera systems, and analytic platforms as "grant-ready" solutions that can be acquired with little or no upfront local cost. Motorola Solutions, for example, advertises how SHSP and UASI dollars can be used for "law enforcement surveillance equipment" and "video surveillance, warning, and access control" systems. Flock Safety, partnering with Lexipol, a company that writes use policies for law enforcement, offers a "License Plate Readers Grant Assistance Program" that helps police departments identify federal and state grants and tailor their applications to fund ALPR projects. 

Grant assistance programs let police chiefs fast‑track new surveillance: the paperwork is outsourced, the grant eats the upfront cost, and even when there is a formal paper trail, the practical checks from residents, councils, and procurement rules often get watered down or bypassed.

On paper, these systems arrive “for free” through a federal grant; in practice, they lock cities into recurring software, subscription, and data‑hosting fees that quietly turn into permanent budget lines—and a lasting surveillance infrastructure—as soon as police and prosecutors start to rely on them. In Santa Cruz, California, the police department explicitly sought to use a DHS-funded SHSP grant to pay for a new citywide network of Flock ALPR cameras at the city's entrances and exits, with local funds covering additional cameras. In Sumner, Washington, a $50,000 grant was used to cover the entire first year of a Flock system — including installation and maintenance — after which the city is on the hook for roughly $39,000 every year in ongoing fees. The free grant money opens the door, but local governments are left with years of financial, political, and permanent surveillance entanglements they never fully vetted.

The most dangerous cost of this "free" funding is not just budgetary; it is the way it ties local systems into federal data pipelines. Since 9/11, DHS has used these grant streams to build a nationwide network of roughly 80 state and regional fusion centers that integrate and share data from federal, state, local, tribal, and private partners. Research shows that state fusion centers rely heavily on the DHS Homeland Security Grant Program (especially SHSP and UASI) to "mature their capabilities," with some centers reporting that 100 percent of their annual expenditures are covered by these grants. 

Civil rights investigations have documented how this funding architecture creates a backdoor channel for ICE and other federal agencies to access local surveillance data for their own purposes. A recent report by the Surveillance Technology Oversight Project (S.T.O.P.) describes ICE agents using a Philadelphia‑area fusion center to query the city’s ALPR network to track undocumented drivers in a self‑described sanctuary city.

Ultimately, federal grants follow the same script as trials and foundation gifts: what looks “free” ends up costing communities their data, their sanctuary protections, and their power over how local surveillance is used.

Protecting Yourself Against “Free” Technology

The most important protection against "free" surveillance technology is to reject it outright. Cities do not have to accept federal grants, vendor trials, or philanthropic donations. Saying no to "free" tech is not just a policy choice; it is a political power that local governments possess and can exercise. Communities and their elected officials can and should refuse surveillance systems that arrive through federal grants, vendor pilots, or private donations, regardless of how attractive the initial price tag appears. 

For those cities that have already accepted surveillance technology, the imperative is equally clear: shut it down. When a community has rejected use of a spying tool, the capabilities, equipment, and data collected from that tool should be shut off immediately. Full stop.

And for any surveillance technology that remains in operation, even temporarily, there must be clear rules: when and how equipment is used, how that data is retained and shared, who owns data and how companies can access and use it, transparency requirements, and consequences for any misuse and abuse. 

“Free” surveillance technology is never free. Someone profits or gains power from it. Police technology vendors, federal agencies, and wealthy donors do not offer these systems out of generosity; they offer them because surveillance serves their interests, not ours. That is the real cost of “free” surveillance.

  •  

No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare

Amazon Ring’s Super Bowl ad offered a vision of our streets that should leave every person unsettled about the company’s goals for disintegrating our privacy in public.

In the ad, disguised as a heartfelt effort to reunite the lost dogs of the country with their innocent owners, the company previewed future surveillance of our streets: a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything — human, pet, and otherwise.

The ad for Ring’s “Search Party” feature highlighted the doorbell camera’s ability to scan footage across Ring devices in a neighborhood, using AI analysis to identify potential canine matches among the many personal devices within the network. 

Amazon Ring already integrates biometric identification, like face recognition, into its products via features like “Familiar Faces,” which depends on scanning the faces of those in sight of the camera and matching them against a list of pre-saved, pre-approved faces. It doesn’t take much to imagine Ring eventually combining these two features: face recognition and neighborhood searches. 

Ring’s “Familiar Faces” feature could already run afoul of biometric privacy laws in some states, which require explicit, informed consent from individuals before a company can just run face recognition on someone. Unfortunately, not all states have similar privacy protections for their residents. 

Ring has a history of privacy violations, of enabling surveillance of innocent people and protesters, and of close collaboration with law enforcement, and EFF has spent years reporting on its many privacy problems.

The cameras, which many people buy and install to identify potential porch pirates or get a look at anyone that might be on their doorstep, feature microphones that have been found to capture audio from the street. In 2023, Ring settled with the Federal Trade Commission over the extensive access it gave employees to personal customer footage. At that time, just three years ago, the FTC wrote: “As a result of this dangerously overbroad access and lax attitude toward privacy and security, employees and third-party contractors were able to view, download, and transfer customers’ sensitive video data for their own purposes.”

The company has made law enforcement access a regular part of its business. As early as 2016, the company was courting police departments through free giveaways. The company provided law enforcement warrantless access to people’s footage, a practice it claimed to cut off in 2024. Not long after, though, the company established partnerships with major police technology companies Axon and Flock Safety to facilitate the integration of Ring cameras into police intelligence networks. These partnerships allow law enforcement to again request Ring footage directly from users. This supplements the already wide-ranging apparatus of data and surveillance feeds now available to law enforcement. 

The Search Party feature is turned on by default, meaning that Ring owners need to go into the controls to change it. According to Amazon Ring’s instructions, this is how to disable it: 

  1. Open the Ring app to the main dashboard.
  2. Tap the menu (☰).
  3. Tap Control Center.
  4. Select Search Party.
  5. Tap Disable Search for Lost Pets. To turn the feature off for each camera, tap the blue Pet icon next to "Search for Lost Pets." (You can also tap "Disable Natural Hazards (Fire Watch)," or tap the blue Flame icon next to Natural Hazards (Fire Watch) to turn that feature on or off for each camera.)

The addition of AI-driven biometric identification is the latest entry in the company’s history of profiting off of public safety worries and disregard for individual privacy, one that turbocharges the extreme dangers of allowing this to carry on. People need to reject this kind of disingenuous framing and recognize the potential end result: a scary overreach of the surveillance state designed to catch us all in its net.

  •  

Open Letter to Tech Companies: Protect Your Users From Lawless DHS Subpoenas

We are calling on technology companies like Meta and Google to stand up for their users by resisting the Department of Homeland Security's (DHS) lawless administrative subpoenas for user data. 

In the past year, DHS has consistently targeted people engaged in First Amendment activity. Among other things, the agency has issued subpoenas to technology companies to unmask or locate people who have documented ICE's activities in their community, criticized the government, or attended protests.   

These subpoenas are unlawful, and the government knows it. When a handful of users challenged a few of them in court with the help of ACLU affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision. 


But it is difficult for the average user to fight back on their own. Quashing a subpoena is a fast-moving process that requires lawyers and resources. Not everyone can afford a lawyer on a moment’s notice, and non-profits and pro-bono attorneys have already been stretched to near capacity during the Trump administration.  

 That is why we, joined by the ACLU of Northern California, have asked several large tech platforms to do more to protect their users, including: 

  1.  Insist on court intervention and an order before complying with a DHS subpoena, because the agency has already proved that its legal process is often unlawful and unconstitutional;  
  2. Give users as much notice as possible when they are the target of a subpoena, so the user can seek help. While many companies have already made this promise, there are high-profile examples of it not happening—ultimately stripping users of their day in court;  
  3. Resist gag orders that would prevent companies from notifying their users that they are a target of a subpoena. 

We sent the letter to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X.

Recipients are not legally compelled to comply with administrative subpoenas absent a court order 

An administrative subpoena is an investigative tool available to federal agencies like DHS. Many times, these are sent to technology companies to obtain user data. These subpoenas cannot be used to obtain the content of communications, but they have been used to try to obtain basic subscriber information like name, address, IP address, length of service, and session times.

Unlike a search warrant, an administrative subpoena is not approved by a judge. If a technology company refuses to comply, an agency’s only recourse is to drop it or go to court and try to convince a judge that the request is lawful. That is what we are asking companies to do—simply require court intervention and not obey in advance. 

It is unclear how many administrative subpoenas DHS has issued in the past year. Subpoenas can come from many places—including civil courts, grand juries, criminal trials, and administrative agencies like DHS. Altogether, Google received 28,622 and Meta received 14,520 subpoenas in the first half of 2025, according to their transparency reports. The numbers are not broken out by type.   

DHS is abusing its authority to issue subpoenas 

In the past year, DHS has used these subpoenas to target protected speech. The following are just a few of the known examples. 

On April 1, 2025, DHS sent a subpoena to Google in an attempt to locate a Cornell PhD student in the United States on a student visa. The student was likely targeted because of his brief attendance at a protest the year before. Google complied with the subpoena without giving the student an opportunity to challenge it. While Google promises to give users prior notice, it sometimes breaks that promise to avoid delay. This must stop.   

In September 2025, DHS sent a subpoena and summons to Meta to try to unmask anonymous users behind Instagram accounts that tracked ICE activity in communities in California and Pennsylvania. The users—with the help of the ACLU and its state affiliates— challenged the subpoenas in court, and DHS withdrew the subpoenas before a court could make a ruling. In the Pennsylvania case, DHS tried to use legal authority that its own inspector general had already criticized in a lengthy report.  

In October 2025, DHS sent Google a subpoena demanding information about a retiree who criticized the agency’s policies. The retiree had sent an email asking the agency to use common sense and decency in a high-profile asylum case. In a shocking turn, federal agents later appeared on that person’s doorstep. The ACLU is currently challenging the subpoena.  

Read the full letter here

  •  

Coalition Urges California to Revoke Permits for Federal License Plate Reader Surveillance

Group led by EFF and Imperial Valley Equity & Justice Asks Gov. Newsom and Caltrans Director to Act Immediately

SAN FRANCISCO – California must revoke permits allowing federal agencies such as U.S. Customs and Border Protection (CBP) and the Drug Enforcement Administration (DEA) to put automated license plate readers along border highways, a coalition led by the Electronic Frontier Foundation (EFF) and Imperial Valley Equity & Justice (IVEJ) demanded today. 

In a letter to Gov. Gavin Newsom and California Department of Transportation (Caltrans) Director Dina El-Tawansy, the coalition notes that this invasive mass surveillance – automated license plate readers (ALPRs) often disguised as traffic barrels – puts both residents and migrants at risk of harassment, abuse, detention, and deportation.  

“With USBP (U.S. Border Patrol) Chief Greg Bovino reported to be returning to El Centro sector, after leading a brutal campaign against immigrants and U.S. citizens alike in Los Angeles, Chicago, and Minneapolis, it is urgent that your administration take action,” the letter says. “Caltrans must revoke any permits issued to USBP, CBP, and DEA for these surveillance devices and effectuate their removal.” 

Coalition members signing the letter include the California Nurses Association; American Federation of Teachers Guild, Local 1931; ACLU California Action; Fight for the Future; Electronic Privacy Information Center; Just Futures Law; Jobs to Move America; Project on Government Oversight; American Friends Service Committee U.S./Mexico Border Program; Survivors of Torture, International; Partnership for the Advancement of New Americans; Border Angels; Southern California Immigration Project; Trust SD Coalition; Alliance San Diego; San Diego Immigrant Rights Consortium; Showing Up for Racial Justice San Diego; San Diego Privacy; Oakland Privacy; Japanese American Citizens League and its Florin-Sacramento Valley, San Francisco, South Bay, Berkeley, Torrance, and Greater Pasadena chapters; Democratic Socialists of America- San Diego; Center for Human Rights and Privacy; The Becoming Project Inc.; Imperial Valley for Palestine; Imperial Liberation Collaborative; Comité de Acción del Valle Inc.; CBFD Indivisible; South Bay People Power; and queercasa. 

California law prevents state and local agencies from sharing ALPR data with out-of-state agencies, including federal agencies involved in immigration enforcement. However, USBP, CBP, and DEA are bypassing these regulations by installing their own ALPRs. 

EFF researchers have released a map of more than 40 of these covert ALPRs along highways in San Diego and Imperial counties that are believed to belong to federal agencies engaged in immigration enforcement.  In response to a June 2025 public records request, Caltrans has released several documents showing CBP and DEA have applied for permits for ALPRs, with more expected as Caltrans continues to locate records responsive to the request. 

“California must not allow Border Patrol and other federal agencies to use surveillance on our roadways to unleash violence and intimidation on San Diego and Imperial Valley residents,” the letter says. “We ask that your administration investigate and release the relevant permits, revoke them, and initiate the removal of these devices. No further permits for ALPRs or tactical checkpoints should be approved for USBP, CBP, or DEA.” 

"The State of California must not allow Border Patrol to exploit our public roads and bypass state law," said Sergio Ojeda, IVEJ’s Lead Community Organizer for Racial and Economic Justice Programs.  "It's time to stop federal agencies from installing hidden cameras that they use to track, target and harass our communities for travelling between Imperial Valley, San Diego and Yuma." 

For the letter: https://www.eff.org/document/coalition-letter-re-covert-alprs

For the map of the covert ALPRs: https://www.eff.org/covertALPRmap

For high-res images of two of the covert ALPRs: https://www.eff.org/node/111725

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs 

 

Contact: Dave Maass, Director of Investigations

  •  

Speaking Freely: Yazan Badran

Interviewer: Jillian York

Yazan Badran is an assistant professor in international media and communication studies at the Vrije Universiteit Brussel, and a researcher at the Echo research group. His research focuses on the intersection between media, journalism, and politics, particularly in the MENA region and within its exilic and diasporic communities.

*This interview has been edited for length and clarity. 

Jillian York: What does free speech or free expression mean to you?

Yazan Badran: So I think there are a couple of layers to that question. There's a narrow conception of free speech that is related to, of course, your ability to think about the world.

And that also depends on having the resources to be able to think about the world, to having resources of understanding about the world, having resources to translate that understanding into thoughts and analysis yourself, and then being able to express that in narratives about yourself with others in the world. And again, that also requires resources of expression, right?

So there's that layer, which means that it's not simply the absence of constraints around your expression and around your thinking, but actually having frameworks that activate you expressing yourself in the world. So that's one element of free expression or free speech, or however you want to call it. 

But I feel that remains too narrow if we don't account also for the counterpart, which is having frameworks that listen to you as you express yourself into the world, right? Having people, institutions, frameworks that are actively also listening, engaging, recognizing you as a legitimate voice in the world. And I think these two have to come together in any kind of broad conception of free speech, which entangles you then in a kind of ethical relationship that you have to listen to others as well, right? It becomes a mutual responsibility from you towards the other, towards the world, and for the world towards you, which also requires access to resources and access to platforms and people listening to you.

So I think these two are what I, if I want to think of free speech and free expression, I would have to think about these two together. And most of the time there is a much narrower focus on the first, and somewhat neglecting the second, I think.

JY: Yeah, absolutely. Okay, now I have to ask, what is an experience that shaped these views for you?

YB: I think two broad experiences. One is the…let's say, the 2000s, the late 2000s, so early 2010 and 2011, where we were all part of this community that was very much focused on expression and on limiting the kind of constraints around expression and thinking of tools and how resources can be brought towards that. And there were limits to where that allowed us to go at a certain point.

And I think the kind of experiences of the Arab uprisings and what happened afterwards and the kind of degeneration across the worlds in which we lived kind of became a generative ground to think of how that experience went wrong or how that experience fell short.

And then building on that, I think when I started doing research on journalism and particularly on exiled journalists and thinking about their practice and their place in the world and the fact that in many ways there were very little constraints on what they could do and what they could voice and what they could express, et cetera.

Not that there are no constraints, there are always constraints, but the nature of the constraints was different - they were of the order of listening; who is listening to this? Who is on the other side? Who are you engaged in a conversation with? And that was, from speaking to them, a real kind of anxiety that came through to me.

JY: I think you're sort of alluding to theory of change…

YB: Yes, to some extent, but also to…when we think about our contribution into the world, to what kind of the normative framework we imagine. As people who think about all of these structures that circulate information and opinion and expressions, et cetera, there is often a normative focus, where there should be, about opening up constraints around expression and bringing resources to bear for expression, and we don't think enough of how these structures need also to foster listening and to foster recognition of these expressions.

And that is the same with, when we think about platforms on the internet and when we think about journalism, when we think about teaching… For example, in my field, when we think about academic research, I think you can bring that framework in different places where expression is needed and where expression is part of who we are. Does that make sense?

JY:  Absolutely. It absolutely makes sense. I think about this all the time. I'm teaching now too, and so it's very, very valuable. Okay, so let's shift a little bit. You're from Syria. You've been in Brussels for a long time. You were in Japan in between. You have a broad worldview, a broad perspective. Let’s talk about press freedom.

YB: Yeah, I've been thinking about this because, I mean, I work on journalism and I'm trying to do some work on Syria and what is happening in Syria now. And I feel there are times where people ask me about the context for journalistic work in Syria. And the narrow answer and the clear answer is that we've never had more freedom to do journalism in the country, right? And there are many reasons. Part of it is that this is a new regime that perhaps doesn't still have complete control over the ground. There are differentiated contexts where in some places it's very easy to go out and to access information and to speak to people. In other places, it's less easy, it's more dangerous, etc. So it's differentiated and it's not the same everywhere.

But it's clear that journalists come in and out of Syria. They can do their job relatively unmolested, which is a massive kind of change, in contrast to the last thirteen or fourteen years, when Syria was an information black hole. You couldn't do anything.

But that remains somewhat narrow in thinking about journalism in Syria. What is journalism about Syria in this context? What kind of journalism do we need to be thinking about? In a place that is in, you know, ruins, if not material destruction, then economic and societal disintegration, et cetera. So there are, I think, two elements. Sure, you can do journalism, but what kind of journalism is being done in Syria? I feel that we have to be asking a broader question about what is the role of information now more broadly in Syria? 

And that is a more difficult question to answer, I feel. Or a more difficult question to answer positively. Because it highlights questions about who has access to the means of journalism now in Syria. What are they doing with it? Who has access to the sources, and can provide actual understanding about the political or economic developments that are happening in the country? Very few people have genuine understanding of the processes that are going into building a new regime, a new state. In general, we have very, very little access. There are few avenues to participate and gain access to what is happening there.

So sure, you can go on the ground, you can take photos, you can speak to people, but in terms of participating in that broader nation-building exercise that is happening; this is happening at a completely different level to the places that we have access to. And with few exceptions, journalism as practiced now is not bringing us closer to these spaces. 

In a narrow sense, it's a very exciting time to be looking at experiments in doing journalism in Syria, to also be seeing the interaction between international journalists and local journalists and also the kind of tensions and collaborations and discussion around structural inequalities between them; especially from a researcher’s perspective. But it remains very, very narrow. In terms of the massive story, which is a complete revolution in the identity of the country, in its geopolitical arrangement, in its positioning in the world, and that we have no access to whatsoever. This is happening well over our heads—we are almost bystanders. 

JY:  That makes sense. I mean, it doesn't make sense, but it makes sense. What role does the internet and maybe even specifically platforms or internet companies play in Syria? Because with sanctions lifted, we now have access to things that were not previously available. I know that the app stores are back, although I'm getting varied reports from people on the ground about how much they can actually access, although people can download Signal now, which is good. How would you say things have changed online in the past year?

YB:  In the beginning, platforms, particularly Facebook, and it's still really Facebook, were the main sphere of information in the country. And to a large extent, it remains the main sphere where some discussions happen within the country.

These are old networks that were reactivated in some ways, but also public spheres that were so completely removed from each other that opened up on each other after December. So you had really almost independent spheres of activity and discussion. Between areas that were controlled by the regime, areas that were controlled by the opposition, which kind of expanded to areas of Syrian refugees and diaspora outside.

And these just collapsed on each other after 8th of December with massive chaos, massive and costly chaos in some ways. The spread of disinformation, organic disinformation, in the first few months was mind-boggling. I think by now there's a bit of self-regulation, but also another movement of siloing, where you see different clusters hardening as well. So that kind of collapse over the first few months didn't last very long.

You start having conversations in isolation of each other now. And I'm talking mainly about Facebook, because that is the main network, that is the main platform where public discussions are happening. Telegram was the public infrastructure of the state for a very long time, for the first six months. Basically, state communication happened through Telegram, through Telegram channels, also causing a lot of chaos. But now you have a bit more stability in terms of having a news agency. You have the television, the state television. So the importance of Telegram has waned off, but it's still a kind of parastructure of state communication, it remains important.

I think more structurally, these platforms are basically the infrastructure of information circulation because of the fact that people don't have access to electricity, for example, or for much of the time they have very low access to bandwidth. So having Facebook on their phone is the main way to keep in touch with things. They can't turn on the television, they can't really access internet websites very easily. So Facebook becomes materially their access to the world. Which comes with all of the baggage that these platforms bring with them, right? The kind of siloing, the competition over attention, the sensationalism, these clustering dynamics of these networks and their algorithms.

JY: Okay, so the infrastructural and resource challenges are real, but then you also have the opening up for the first time of the internet in many, many years, or ever, really. And as far as I understand from what friends who’ve been there have reported, nothing is being blocked yet. So what impact do you see or foresee that having on society as people get more and more online? I know a lot of people were savvy, of course, and got around censorship, but not everyone, right?

YB: No, absolutely, absolutely not everyone. Not everyone has the kind of digital literacy to understand what going online means, right? Which accounts for one thing, the avalanche of fake information and disinformation that is now Syria, basically.

JY: It's only the second time this has happened. I mean, Tunisia is the only other example I can think of where the internet just opened right up.

YB: Without having gateways and infrastructure that can kind of circulate and manage and curate this avalanche of information. While at the same time, you have a real disintegration in the kind of social institutions that could ground a community. So you have really a perfect storm of a thin layer of digital connectivity, for a lot of people who didn't have access to even that thin layer, but it's still a very thin layer, right? You're connecting from your old smartphone to Facebook. You're getting texts, et cetera, and perhaps you're texting with the family over WhatsApp. And a real collapse of different societal institutions that also grounded you with others, right? The education system, of different clubs and different neighborhoods, small institutions that brought different communities together of the army, for example, universities, all of these have been disrupted over the past year in profound ways and along really communitarian ways as well. I don't know the kind of conditions that this creates, the combination of these two. But it doesn't seem like it's a positive situation or a positive dynamic.

JY:  Yeah, I mean, it makes me think of, for example, Albania or other countries that opened up after a long time and then all of a sudden just had this freedom.

YB: But still combined, I mean, that is one thing, the opening up and the avalanche, and that is a challenge. But it is a challenge that perhaps within a settled society with some institutions in which you can turn to, through which you can regulate this, through which you can have countervailing forces and countervailing forums for… that’s one thing. But with the collapse of material institutions that you might have had, it's really creating a bewildering world for people, where you turn back and you have your family that maybe lives two streets away, and this is the circle in which you move, or you feel safe to move.

Of course, for certain communities, right? That is not the condition everywhere. But that is part of what is happening. There's a real sense of bewilderment in the kind of world that you live in. Especially in areas that used to be controlled by the regime where everything that you've known in terms of state authority, from the smallest, the lowliest police officer in your neighborhood, to people, bureaucrats that you would talk to, have changed or your relationship to them has fundamentally changed. There's a real upheaval in your world at different levels. And, you know, and you're faced with a swirling world of information that you can't make sense out of.

JY: I do want to put you on the spot with a question that popped into my head, which is, I often ask people about regulation and depending on where they're working in the world, especially like when I'm talking to folks in Africa and elsewhere. In this case, though, it's a nation-building challenge, right? And so—you're looking at all of these issues and all of these problems—if you were in a position to create press or internet regulation from the ground up in Syria, what do you feel like that should look like? Are there models that you would look to? Are there existing structures or is there something new or?

YB:  I think maybe I don't have a model, but I think maybe a couple of entry points that you would kind of use to think of what model of regulation you want. One is to understand that the first challenge is at the level of nation building. Of really recreating a national identity or reimagining a national identity, both in terms of a kind of shared imaginary of what these people are to each other and collectively represent, but also, at the very hyper-local level, in terms of how these communities can go back to living together.

And I think that would have to shape how you would approach, say, regulation. I mean, around the Internet, that's a more difficult challenge. But at least in terms of your national media, for example, what is the contribution of the state through its media arm? What kind of national media do you want to put into place? What kind of structures allow for really organic participation in this project or not, right? But also at the level of how do you regulate the market for information in a new state with that level of infrastructural destruction, right? Of the economic circuit in which these networks are in place. How do you want to reconnect Syria to the world? In what ways? For what purposes?

And how do you connect all of these steps to open questions around identity and around that process of national rebuilding, and activating participation in that project, right? Rather than use them to foreclose these questions.

There are also certain challenges that you have in Syria that are endogenous, that are related to the last 14 years, to the societal disintegration and geographic disintegration and economic disintegration, et cetera. But on top of that, of course, we live in an information environment that is, at the level of the global information environment, also structurally cracking down in terms of how we engage with information, how we deal with journalism, how we deal with questions of difference. These are problems that go well beyond Syria, right? These are difficult issues that we don't know how to tackle here in Brussels or in the US, right? And so there's also an interplay between these two. There's an interplay between the fact that even here, we are having to come to terms with some of the myths around liberalism, around journalism, the normative model of journalism, of how to do journalism, right? I mean, we have to come to terms with it. The last two years—of the Gaza genocide—didn't happen in a vacuum. It was earth shattering for a lot of these pretensions around the world that we live in. Which I think is a bigger challenge, but of course it interacts with the kind of challenges that you have in a place like Syria.

JY: To what degree do you feel that the sort of rapid opening up and disinformation and provocations online and offline are contributing to violence?

YB: I think they're at the very least exacerbating the impact of that violence. I can't make claims about how much they're contributing, though I think they are contributing. I think there are clear episodes in which you could directly link the circulation of misinformation online to certain episodes of violence, like what happened in Jaramana before the massacre of the Druze. A couple of weeks before that massacre, there was a piece of disinformation that led to actual violence and that set the stage for the massive violence later on. During the massacres on the coast, you could also link the kind of panic and disinformation around the attacks by former regime officers, and the effects of that, to the mobilization that happened. The scale of the violence is linked to the circulation of panic and disinformation. So there is a clear contribution. But I think the greater influence is how it exacerbates what happens after that violence, how it exacerbates the depth, for example, of the divorce after the massacre between the Druze population of Sweida and the rest of Syria. That is tangible. And that is embedded in the kind of information environment that we have. There are different kinds of material causes for it as well. There is real structural conflict there. But the ideological, discursive, and affective divorce that has happened over the past six months, that is a product of the information environment that we have.

JY: You are very much a third country, 4th country kid at this point. Like me, you connected to this global community through Global Voices at a relatively young age. In what ways do you feel that global experience has influenced your thinking and your work around these topics, around freedom of expression? How has it shaped you?

YB: I think in a profound way. What it does is it makes you to some extent immune from certain nationalist logics in thinking about the world, right? You have stakes in so many different places. You've built friendships, you've built connections, you've left parts of you in different places. And that is also certainly related to certain privileges, but it also means that you care about different places, that you care about people in many different places. And that shapes the way that you think about the world - it produces commitments that are diffused, complex and at times even contradictory, and it forces you to confront these contradictions. You also have experience, real experience in how much richer the world is if you move outside of these narrow, more nationalist, more chauvinistic ways of thinking about the world. And also you have kind of direct lived experience of the complexity of global circulation in the world and the fact that, at a high level, it doesn't produce a homogenized culture, it produces many different things and they're not all equal and they're not all good, but it also leaves spaces for you to contribute to it, to engage with it, to actively try to play within the little spaces that you have.

JY: Okay, here’s my final question that I ask everyone. Do you have a free speech hero? Or someone who's inspired you?

YB: I mean, there are people whose sacrifices humble you. Many of them we don't know by name. Some of them we do know by name. Some of them are friends of ours. I keep thinking of Alaa [Abd El Fattah], who was just released from prison—I was listening to his long interview with Mada Masr (in Arabic) yesterday, and it’s…I mean…is he a hero? I don’t know but he is certainly one of the people I love at a distance and who continues to inspire us.

JY: I think he’d hate to be called a hero.

YB: Of course he would. But in some ways, his story is a tragedy that is inextricable from the drama of the last fifteen years, right? It’s not about turning him into a symbol. He's also a person and a complex person and someone of flesh and blood, etc. But he's also someone who can articulate in a very clear, very simple way, the kind of sense of hope and defeat that we all feel at some level and who continues to insist on confronting both these senses critically and analytically.

JY: I’m glad you said Alaa. He’s someone I learned a lot from early on, and a lot of his words and thinking have guided me in my practice.

YB: Yeah, and his story is tragic in the sense that it kind of highlights that in the absence of any credible road towards collective salvation, we're left with little moments of joy when there is a small individual salvation of someone like him. And that these are the only little moments of genuine joy that we get to exercise together. But in terms of a broader sense of collective salvation, I think in some ways our generation has been profoundly and decisively defeated.

JY: And yet the title of his book, “You Have Not Yet Been Defeated.”

YB: Yeah, it's true. It's true.

JY: Thank you Yazan for speaking with me.

  •  

EFFecting Change: Get the Flock Out of Our City

Flock contracts have quietly spread to cities across the country. But Flock's ALPRs (automated license plate readers) erode civil liberties from the moment they're installed. While officials claim these cameras keep neighborhoods safe, the evidence tells a different story. The data reveals how Flock has enabled surveillance of people seeking abortions, protesters exercising First Amendment rights, and communities targeted by discriminatory policing.

This is exactly why cities are saying no. From Austin to Cambridge to small towns across Texas, jurisdictions are rejecting Flock contracts altogether, proving that surveillance isn't inevitable—it's a choice.

Join EFF's Sarah Hamid and Andrew Crocker along with Reem Suleiman from Fight for the Future and Kate Bertash from Rural Privacy Coalition to explore what's happening as Flock contracts face growing resistance across the U.S. We'll break down the legal implications of the data these systems collect, examine campaigns that have successfully stopped Flock deployments, and discuss the real-world consequences for people's privacy and freedom. The conversation will be followed by a live Q&A. 

EFFecting Change Livestream Series:
Get the Flock Out of Our City
Thursday, February 19th
12:00 PM - 1:00 PM Pacific
This event is LIVE and FREE!

RSVP Today


Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague who might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online.

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

  •  

The Internet Still Works: Yelp Protects Consumer Reviews

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.

Yelp hosts millions of reviews written by internet users about local businesses. Most reviews are positive, but over the years, some businesses have tried to pressure Yelp to remove negative reviews, including through legal threats. Since its founding more than two decades ago, Yelp has fought major legal battles to defend reviewers’ rights and preserve the legal protections that allow consumers to share honest feedback online.

Aaron Schur is General Counsel at Yelp. He joined the company in 2010 as one of its first lawyers and has led its litigation strategy for more than a decade, helping secure court decisions that strengthened legal protections for consumer speech. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team. 

Joe Mullin: How would you describe Section 230 to a regular Yelp user who doesn’t know about the law?   

Aaron Schur: I'd say it is a simple rule that, generally speaking, when content is posted online, any liability for that content is with the person who created it, not the platform that is displaying it. That allows Yelp to show your review and keep it up if a business complains about it. It also means that we can develop ways to highlight the reviews we think are most helpful and reliable, and mitigate fake reviews, without creating liability for Yelp, because we're allowed to host third-party content.

The political debate around Section 230 often centers around the behavior of companies, especially large companies. But we rarely hear about users, even though the law also applies to users. What is the user story that is getting lost? 

Section 230 at heart protects users. It enables a diversity of platforms and content moderation practices—whether it's reviews on Yelp, videos on another platform, whatever it may be. 

Without Section 230, platforms would face heavy pressure to remove consumer speech when we’re threatened with legal action—and that harms users, directly. Their content gets removed. It also harms the greater number of users who would access that content. 

The focus on the biggest tech companies, I think, is understandable but misplaced when it comes to Section 230. We have tools that exist to go after dominant companies, both at the state and the federal level, and Congress could certainly consider competition-based laws—and has, over the last several years. 

Tell me about the editorial decisions that Yelp makes regarding the highlighting of reviews, and the weeding out of reviews that might be fake.  

Yelp is a platform where people share their experiences with local businesses, government agencies, and other entities. People come to Yelp, by the millions, to learn about these places.

With traffic like that come incentives for bad actors to game the system. Some unscrupulous businesses try to create fake reviews, or compensate people to write reviews, or ask family and friends to write reviews. Those reviews will be biased in a way that won’t be transparent. 

Yelp developed an automated system to highlight reviews we find most trustworthy and helpful. Other reviews may be placed in a “not recommended” section where they don’t affect a business’s overall rating, but they’re still visible. That helps us maintain a level playing field and keep user trust. 

Tell me what your process for handling complaints about user reviews looks like.

We have a reporting function for reviews. Those reports get looked at by an actual human, who evaluates the review and looks at data about it to decide whether it violates our guidelines. 

We don't remove a review just because someone says it's “wrong,” because we can't litigate the facts in your review. If someone says “my pizza arrived cold,” and the restaurant says, no, the pizza was warm—Yelp is not in a position to adjudicate that dispute. 

That's where Section 230 comes in. It says Yelp doesn’t have to [decide who’s right]. 

What other types of moderation tools have you built? 

Any business, free of charge, can respond to a review, and that response appears directly below it. They can also message users privately. We know when businesses do this, it’s viewed positively by users.

We also have a consumer alert program, where members of the public can report businesses that may be compensating people for positive reviews—offering things like free desserts or discounted rent. In those cases, we can place an alert on the business’s page and link to the evidence we received. We also do this when businesses make certain types of legal threats against users.

It’s about transparency. If a business’s rating is inflated because that business is threatening to sue reviewers who rate it less than five stars, consumers have a right to know what’s happening.

How are international complaints, where Section 230 doesn’t come into play, different? 

We have had a lot of matters in Europe, in particular in Germany. It’s a different system there—it’s notice-and-takedown. They have a line of cases that require review sites to basically provide proof that the person was a customer of the business. 

If a review was challenged, we would sometimes ask the user for documentation, like an invoice, which we would redact before providing it. Often, they would do that, in order to defend their own speech online. Which was surprising to me! But they wouldn’t always—which shows the benefit of Section 230. In the U.S., you don’t have this back-and-forth that a business can leverage to get content taken down. 

And invariably, the reviewer was a customer. The business was just using the system to try to take down speech. 

Yelp has been part of some of the most important legal cases around Section 230, and some of those didn’t exist when we spoke in 2012. What happened in the Hassel v. Bird case, and why was that important for online reviewers?

Hassel v. Bird was a case where a law firm got a default judgment against an alleged reviewer, and the court ordered Yelp to remove the review—even though Yelp had not been a party to the case. 

We refused, because the order violated Section 230, due process, and Yelp’s First Amendment rights as a publisher. But the trial court and the appellate court both ruled against us, allowing a side-stepping of Section 230.

The California Supreme Court ultimately reversed those rulings, and recognized that plaintiffs cannot accomplish indirectly [by suing a user and then ordering a platform to remove content] what they could not accomplish directly by suing the platform itself.

We spoke to you in 2012, and the landscape has really changed. Section 230 is really under attack in a way that it wasn’t back then. From your vantage point at Yelp, what feels different about this moment? 

The biggest tech companies got even bigger, and even more powerful. That has made people distrustful and angry—rightfully so, in many cases. 

When you read about the attacks on 230, it’s really politicians calling out Big Tech. But what is never mentioned is little tech, or “middle tech,” which is how Yelp bills itself. If 230 is weakened or repealed, it’s really the biggest companies, the Googles of the world, that will be able to weather it better than smaller companies like Yelp. They have more financial resources. It won’t actually accomplish what the legislators are setting out to accomplish. It will have unintended consequences across the board. Not just for Yelp, but for smaller platforms. 

This interview was edited for length and clarity.

  •  

The Internet Still Works: Wikipedia Defends Its Editors

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

A decade ago, Wikimedia Foundation, the nonprofit that operates Wikipedia, received 304 requests to alter or remove content over a two-year period, not including copyright complaints. In 2024 alone, it received 664 such takedown requests. Only four were granted. As complaints over user speech have grown, Wikimedia has expanded its legal team to defend the volunteer editors who write and maintain the encyclopedia. 

Jacob Rogers is Associate General Counsel at the Wikimedia Foundation. He leads the team that deals with legal complaints against Wikimedia content and its editors. Rogers also works to preserve the legal protections, including Section 230, that make a community-governed encyclopedia possible. 

Joe Mullin: What kind of content do you think would be most in danger if Section 230 was weakened? 

Jacob Rogers: When you're writing about a living person, if you get it wrong and it hurts their reputation, they will have a legal claim. So that is always a concentrated area of risk. It’s good to be careful, but I think if there were a looser liability regime, people could get to be too careful—so careful they couldn’t write important public information.

Current events and political history would also be in danger. Writing about images of Muhammad has been a flashpoint in different countries, because depictions are religiously sensitive and controversial in some contexts. There are different approaches to this in different languages. You might not think that writing about the history of art in your country 500 years ago would get you into trouble—but it could, if you’re in a particular country, and it’s a flash point. 

Writing about history and culture matters to people. And it can matter to governments, to religions, to movements, in a way that can cause people problems. That’s part of why protecting editors’ pseudonymity and their ability to work on these topics is so important.

If you had to describe to a Wikipedia user what Section 230 does, how would you explain it to them? 

If there was nothing—no legal protection at all—I think we would not be able to run the website. There would be too many legal claims, and the potential damages of those claims could bankrupt the company. 

Section 230 protects the Wikimedia Foundation, and it allows us to defer to community editorial processes. We can let the user community make those editorial decisions, and figure things out as a group—like how to write biographies of living persons, and what sources are reliable. Wikipedia wouldn’t work if it had centralized decision making. 

What does a typical complaint look like, and how does the complaint process look? 

In some cases, someone is accused of a serious crime and there’s a debate about the sources. People accused of certain types of wrongdoing, or scams. There are debates about peoples’ politics, where someone is accused of being “far-right” or “far-left.” 

The first step is community dispute resolution. At the top of every article on Wikipedia there’s a button that translates to “talk.” If you click it, that gives you space to discuss how to write the article. When editors get into a fight about what to write, they should stop and discuss it with each other first.

If page editors can’t resolve a dispute, third-party editors can come in, or ask for a broader discussion. If that doesn’t work, or there’s harassment, we have Wikipedia volunteer administrators, elected by their communities, who can intervene. They can ban people temporarily, to cool off. When necessary, they can ban users permanently. In serious cases, arbitration committees make final decisions. 

And these community dispute processes we’ve discussed are run by volunteers, no Wikimedia Foundation employees are involved? Where does Section 230 come into play?

That’s right. Section 230 helps us, because it lets disputes go through that community process. Sometimes someone’s edits get reversed, and they write an angry letter to the legal department. If we were liable for that, we would have the risk of expensive litigation every time someone got mad. Even if their claim is baseless, it’s hard to make a single filing in a U.S. court for less than $20,000. There’s a real “death by a thousand cuts” problem, if enough people filed litigation. 

Section 230 protects us from that, and allows for quick dismissal of invalid claims. 

When we're in the United States, that's really the end of the matter. There’s no way to bypass the community with a lawsuit.

How does dealing with those complaints work in the U.S.? And how is it different abroad? 

In the US, we have Section 230. We’re able to say, go through the community process, and try to be persuasive. We’ll make changes, if you make a good persuasive argument! But the Foundation isn’t going to come in and change it because you made a legal complaint. 

But in the EU, they don’t have Section 230 protections. Under the Digital Services Act, once someone claims your website hosts something illegal, they can go to court and get an injunction ordering us to take the content down. If we don’t want to follow that order, we have to defend the case in court. 

In one German case, the court essentially said, “Wikipedians didn’t do good enough journalism.” The court said the article’s sources weren’t strong enough. The editors used industry trade publications, and the court said they should have used something like German state media, or top newspapers in the country, not a “niche” publication. We disagreed with that.

What’s the cost of having to go to court regularly to defend user speech? 

Because the Foundation is a mission-driven nonprofit, we can take on these defenses in a way that’s not always financially sensible, but is mission sensible. If you were focused on profit, you would grant a takedown. The cost of a takedown is maybe one hour of a staff member’s time. 

We can selectively take on cases to benefit the free knowledge mission, without bankrupting the company. Litigation in the EU costs anywhere from roughly $30,000 for one hearing to a few hundred thousand dollars for a drawn-out case.

I don’t know what would happen if we had to do that in the United States. There would be a lot of uncertainty. One big unknown is—how many people are waiting in the wings for a better opportunity to use the legal system to force changes on Wikipedia? 

What does the community editing process get right that courts can get wrong? 

Sources. Wikipedia editors might cite a blog because they know the quality of its research. They know what's going into writing that. 

It can be easy sometimes for a court to look at something like that and say, well, this is just a blog, and it’s not backed by a university or institution, so we’re not going to rely on it. But that's actually probably a worse result. The editors who are making that consideration are often getting a more accurate picture of reality. 

Policymakers who want to limit or eliminate Section 230 often say their goal is to get harmful content off the internet, and fast. What do you think gets missed in the conversation about removing harmful content? 

One is: harmful to whom? Every time people talk about “super fast tech solutions,” I think they leave out academic and educational discussions. Everyone talks about how there’s a terrorism video, and it should come down. But there’s also news and academic commentary about that terrorism video. 

There are very few shared universal standards of harm around the world. Everyone in the world agrees, roughly speaking, on child protection, and child abuse images. But there’s wild disagreement about almost every other topic. 

If you do take down something to comply with the UK law, it’s global. And you’ll be taking away the rights of someone in the US or Australia or Canada to see that content. 

This interview was edited for length and clarity. EFF interviewed Wikimedia attorney Michelle Paulson about Section 230 in 2012.

  •  

On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.

The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. Especially when the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users’ lawful speech.

Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity  because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries always would be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

Another alternative: Imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services will survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there’s no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.

The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

  •  

RIP Dave Farber, EFF Board Member and Friend

We are sad to report the passing of longtime EFF Board member Dave Farber. Dave was 91 and had lived in Tokyo since age 83, where he was the Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). Known as the Grandfather of the Internet, Dave made countless contributions to the internet, both directly and through his support for generations of students.

Dave was the longest-serving EFF Board member, having joined in the early 1990s, before the creation of the World Wide Web or the widespread adoption of the internet.  Throughout the growth of the internet and the corresponding growth of EFF, Dave remained a consistent, thoughtful, and steady presence on our Board.  Dave always gave us credibility as well as ballast.  He seemed to know and be respected by everyone who had helped build the internet, having worked with or mentored too many of them to count.  He also had an encyclopedic knowledge of the internet's technical history. 

From the beginning, Dave saw both the promise and the danger to human rights that would come with the spread of the internet around the world. He committed to helping make sure that the rights and liberties of users and developers, especially the open source community, were protected. He never wavered in that commitment.  Ever the teacher, Dave was also a clear explainer of internet technologies and basically unflappable.  

Dave also managed the Interesting People email list, which provided news and connection for so many internet pioneers and served as a model for how people from disparate corners of the world could engage in a rolling conversation about all things digital. His role as the Chief Technologist at the U.S. Federal Communications Commission from 2000 to 2001 gave him a strong perspective on the ways that government could help or hinder civil liberties in the digital world.

We will miss his calm, thoughtful voice, both inside EFF and out in the world. May his memory be a blessing.  

  •  

Op-ed: Weakening Section 230 Would Chill Online Speech

(This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)

Section 230, “the 26 words that created the internet,” was enacted 30 years ago this week. It was no rush-job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.

The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.

The merits of immunity, both for internet users who rely on intermediaries (from ISPs to email providers to social media platforms) and for internet users who are intermediaries, are readily apparent when compared with the alternatives.

One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.

Another option: giving protection to intermediaries only if they exercise a specified duty of care, such as where an intermediary would be liable if they fail to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.

Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.

All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it’s not worth assessing the risk of liability or defending the user’s speech. No intermediary can be expected to champion someone else’s free speech at its own considerable expense.

Nor is the United States the only government to eschew “upload filtering,” the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.

The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.

This law isn’t a shield for “big tech.” Its ultimate beneficiaries are all of us who want to post things online without having to code the platforms ourselves, and who want to read and watch content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it.

For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services from publishing protected speech that some find undesirable. They want platforms to publish less than they would otherwise choose to publish, even when that speech is protected and nonactionable.

When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.

Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.

  •  

Yes to the “ICE Out of Our Faces Act”

Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights and civil liberties. For example, immigration agents are routinely scanning the faces of people they suspect of unlawful presence in the country – 100,000 times, according to the Wall Street Journal. The technology has already misidentified at least one person, according to 404 Media.

Face recognition technology is so dangerous that government should not use it at all—least of all these out-of-control immigration agencies.

To combat these abuses, EFF is proud to support the “ICE Out of Our Faces Act.” This new federal bill would ban ICE and CBP agents, and some local police working with them, from acquiring or using biometric surveillance systems, including face recognition technology, or information derived from such systems by another entity. This bill would be enforceable, among other ways, by a strong private right of action.

The bill’s lead author is Senator Ed Markey. We thank him for his longstanding leadership on this issue, including introducing similar legislation that would ban all federal law enforcement agencies, and some federally-funded state agencies, from using biometric surveillance systems (a bill that EFF also supported). The new “ICE Out of Our Faces Act” is also sponsored by Senator Merkley, Senator Wyden, and Representative Jayapal.

As EFF explains in the new bill’s announcement:

It’s past time for the federal government to end its use of this abusive surveillance technology. A great place to start is its use for immigration enforcement, given ICE and CBP’s utter disdain for the law. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations or if the technology was 100% accurate. We thank the authors of this bill for their leadership in taking steps to end this use of this dangerous and invasive technology.

You can read the bill here, and the bill’s announcement here.

  •  

Protecting Our Right to Sue Federal Agents Who Violate the Constitution

Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.

To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.

Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For these reasons, EFF supports new statutes to fill this hole, including California S.B. 747.

The Problem

In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.

However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.

So in 1971, the U.S. Supreme Court stepped into this gap, in a watershed case called Bivens v. Six Unknown Named Agents. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.” He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”

Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.

In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.

In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling.  In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”

Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.

The Solution

At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.

In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.

State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.” 

This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.

We call these laws a commonsense way to protect the rule of law, which is a necessary condition to preserve our digital rights. EFF has long supported effective judicial remedies, including support for nationwide injunctions and private rights of action, and opposition to qualified immunity.

We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.

  •  

Smart AI Policy Means Examining Its Real Harms and Benefits

The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.

Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.

We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.

Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.

EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.

So let’s look at the real-world landscape.

AI’s Real and Potential Harms

Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.

There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on.  If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.
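
To make that concrete, here is a minimal, hypothetical sketch (entirely synthetic data; it assumes NumPy and scikit-learn, which this post does not prescribe) of how a model trained on skewed arrest records simply reproduces the skew rather than learning anything about actual risk:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical history: neighborhood 1 was over-policed, so its residents
    # show up in arrest records far more often, regardless of actual behavior.
    neighborhood = rng.integers(0, 2, size=5000)
    arrested = (rng.random(5000) < np.where(neighborhood == 1, 0.6, 0.1)).astype(int)

    # A model trained on those records "learns" the disparity itself.
    model = LogisticRegression().fit(neighborhood.reshape(-1, 1), arrested)
    print(model.predict_proba([[0], [1]])[:, 1])  # roughly [0.1, 0.6]

Nothing in that toy model reflects underlying reality; it just restates the pattern it was handed, which is exactly the problem when the pattern encodes discriminatory enforcement.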

And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.

These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a context with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, with the resulting bias affecting AI tools trained on the existing and biased image data.

These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.

Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.

We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.

Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers. 

Other considerations that may weigh against AI use include its environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.

Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.

AI’s Real and Potential Benefits

However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.

Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
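
As a rough illustration of that difference (again synthetic data, with scikit-learn as an assumed tool), a model can be handed several candidate features without being told which ones matter, and it will weight whichever ones happen to predict the outcomes in its training set:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 5))  # five unnamed features; no assumptions about which matter
    y = (X[:, 2] + 0.1 * rng.normal(size=2000) > 0).astype(int)  # outcome quietly driven by feature 2

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(model.feature_importances_.round(2))  # the model surfaces feature 2 on its own

The model finds whatever correlations exist in its training data; it has no notion of whether those correlations are causal, fair, or meaningful outside that data set.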

To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.

Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. Now, AI is rapidly advancing battery development: by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.

AI Advancements in Scientific and Medical Research

AI tools can also help facilitate weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.

For example:

  • The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
  • Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).

 Researchers are using AI to help develop new medical treatments:

  • Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
  • Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
  • Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
  • Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines—accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.

AI Uses for Accessibility and Accountability

AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:

  • AI voice generators are giving people their voices back, after losing their ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
  • Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and many websites that are difficult to navigate for users that rely on a screen reader. Other tools can help blind and low vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human may provide, they can still be useful in situations when users can’t or don’t want to ask another human to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”

When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:

  •  The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance; when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
  • An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it has been hard-won knowledge that ethics are a vital part of work like this.

Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.

Context Matters

It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.

  •  

EFF to Close Friday in Solidarity with National Shutdown

The Electronic Frontier Foundation stands with the people of Minneapolis and with all of the communities impacted by the ongoing campaign of ICE and CBP violence. EFF will be closed Friday, Jan. 30 as part of the national shutdown in opposition to ICE and CBP and the brutality and terror they and other federal agencies continue to inflict on immigrant communities and any who stand with them.

We do not make this decision lightly, but we will not remain silent. 

  •  

Introducing Encrypt It Already

Today, we’re launching Encrypt It Already, our push to get companies to offer stronger privacy protections for our data and communications by implementing end-to-end encryption. If that name sounds a little familiar, it’s because this is a spiritual successor to Fix It Already, our 2019 campaign that pushed companies to fix longstanding issues.

End-to-end encryption is the best way we have to protect our conversations and data. It ensures the company that provides a service cannot access the data or messages you store on it. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the contents of your messages, and they’re only accessible on your devices and your recipients’ devices. When it comes to data, like what’s stored using Apple’s Advanced Data Protection, it means you control the encryption keys and the service provider will not be able to access the data.
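
As a very rough illustration of the idea (a minimal sketch using the PyNaCl library, not how any particular app actually implements it), the point of end-to-end encryption is that the ciphertext passing through a provider’s servers is useless without private keys that only the endpoints hold:

    # Minimal sketch of the end-to-end idea using PyNaCl (pip install pynacl).
    # Real messengers layer much more on top (key verification, forward secrecy,
    # group handling); this only shows that the server never holds a usable key.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob using her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 6?")

    # The service provider only ever relays `ciphertext`; without Alice's or
    # Bob's private key it cannot read the message.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at 6?"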

We’ve divided this up into three categories, each with three different demands:

  • Keep Your Promises: Features that the company has publicly stated they’re working on, but which haven’t launched yet.
    • Facebook should use end-to-end encryption for group messages
    • Apple and Google should deliver on their promise of interoperable end-to-end encryption of RCS
    • Bluesky should launch its promised end-to-end encryption for DMs
  • Defaults Matter: Features that are available on a service or in app already, but aren’t enabled by default.
    • Telegram should default to end-to-end encryption for DMs
    • WhatsApp should use end-to-end encryption for backups by default
    • Ring should enable end-to-end encryption for its cameras by default
  • Protect Our Data: New features that companies should launch, often because their competition is doing it already.
    • Google should launch end-to-end encryption for Google Authenticator backups
    • Google should offer end-to-end encryption for Android backup data
    • Apple and Google should offer a per-app AI permissions option to block AI access to secure chat apps

The what is only half the problem. The how is just as important.

What Companies Should Do When They Launch End-to-End Encryption Features

There’s no one-size-fits-all way to implement end-to-end encryption in products and services, but best practices can pair a secure implementation with the transparency users need to trust that their data is protected the way the company claims. When these encryption features launch, companies should consider doing so with:

  • A blog post written for a general audience that summarizes the technical details of the implementation, and when it makes sense, a technical white paper that goes into further detail for the technical crowd.
  • Clear user-facing documentation around what data is and isn’t end-to-end encrypted, and robust and clear user controls when it makes sense to have them.
  • Data minimization principles whenever feasible, storing as little metadata as possible.

Technical documentation is important for end-to-end encryption features, but so is clear documentation that makes it easy for users to understand what is and isn’t protected, what features may change, and what steps they need to take to set it up so they’re comfortable with how data is protected.

What You Can Do

When it’s an option, enable any end-to-end encryption features you can, like on Telegram, WhatsApp, and Ring.

For everything else, let companies know that these are features you want! You can find messages to share on social media on the Encrypt It Already website, and take the time to customize those however you’d like. 

In some cases, you can also reach out to a company directly with feature requests, which all the above companies, except for Google and WhatsApp, offer in some form. We recommend filing these through any service you use for any of the above features you’d like to see.

As for Ring and Telegram, we’ve already made the asks and just need your help to boost them. Head over to Telegram’s bugs and suggestions board and upvote this post, and to Ring’s feature request board and boost this post.

End-to-end encryption protects what we say and what we store in a way that gives users—not companies or governments—control over data. These sorts of privacy-protective features should be the status quo across a range of products, from fitness wearables to notes apps, but instead they remain rare, limited to a small set of services, like messaging and (occasionally) file storage. These demands are just the start. We deserve this sort of protection for a far wider array of products and services. It’s time to encrypt it already!

Join EFF

Help protect digital privacy & free speech for everyone

  •  

Google Settlement May Bring New Privacy Controls for Real-Time Bidding

EFF has long warned about the dangers of the “real-time bidding” (RTB) system powering nearly every ad you see online. A proposed class-action settlement with Google over their RTB system is a step in the right direction towards giving people more control over their data. Truly curbing the harms of RTB, however, will require stronger legislative protections.

What Is Real-Time Bidding?

RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your personal information to thousands of companies a day. At a high level, here’s how RTB works:

  1. The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. This involves sending information about you and the content you’re viewing to the ad tech company.
  2. This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. 
  3. The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people. (A simplified sketch of a bid request follows this list.) 
  4. Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space. 
  5. The highest bidder gets to display an ad for you, but advertisers (and the adtech companies they use to buy ads) can collect your bidstream data regardless of whether or not they bid on the ad space.   
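
To make the shape of this data concrete, here is a hypothetical, simplified bid request sketched as a Python dictionary. It is loosely inspired by the kinds of fields the industry’s OpenRTB format can carry; real bid requests vary by exchange, and the field names below are illustrative, not an exact schema.

    # Hypothetical, simplified bid request -- field names are illustrative only,
    # loosely modeled on the categories of data described in the list above.
    bid_request = {
        "id": "auction-7f3a9c",                  # unique ID for this ad auction
        "site": {"domain": "example-news.com", "page": "/health/anxiety-treatments"},
        "device": {
            "ip": "203.0.113.42",                           # IP address
            "ua": "Mozilla/5.0 (iPhone; ...)",              # user agent / device details
            "ifa": "6d92078a-8246-4ba4-ae5b-76104861e7dc",  # mobile advertising ID
            "geo": {"lat": 37.7749, "lon": -122.4194},      # GPS coordinates
        },
        "user": {
            "id": "adtech-user-128834",          # pseudonymous user identifier
            "interests": ["mental health", "fitness", "parenting"],  # inferred interests
        },
    }

    # This object is broadcast to thousands of potential bidders, any of which
    # can keep the data -- whether or not they ever place a bid.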

Why Is Real-Time Bidding Harmful?

A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. Since bid requests contain individual identifiers, they can be tied together to create detailed profiles of people’s behavior over time.

Data brokers have sold bidstream data for a range of invasive purposes, including tracking union organizers and political protesters, outing gay priests, and conducting warrantless government surveillance. Several federal agencies, including ICE, CBP and the FBI, have purchased location data from a data broker whose sources likely include RTB. ICE recently requested information on “Ad Tech” tools it could use in investigations, further demonstrating RTB’s potential to facilitate surveillance. RTB also poses national security risks, as researchers have warned that it could allow foreign states to obtain compromising personal data about American defense personnel and political leaders.

The privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast torrents of personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used. 

Proposed Settlement with Google Is a Step in the Right Direction

As the dominant player in the online advertising industry, Google facilitates the majority of RTB auctions. Google has faced several class-action lawsuits for sharing users’ personal information with thousands of advertisers through RTB auctions without proper notice and consent. A recently proposed settlement to these lawsuits aims to give people more knowledge and control over how their information is shared in RTB auctions.

Under the proposed settlement, Google must create a new privacy setting (the “RTB Control”) that allows people to limit the data shared about them in RTB auctions. When the RTB Control is enabled, bid requests will not include identifying information like pseudonymous IDs (including mobile advertising IDs), IP addresses, and user agent details. The RTB Control should also prevent cookie matching, a method companies use to link their data profiles about a person to a corresponding bid request. Removing identifying information from bid requests makes it harder for data brokers and advertisers to create consumer profiles based on bidstream data. If the proposed settlement is approved, Google will have to inform all users about the new RTB Control via email. 
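
As a rough, purely illustrative sketch of what “removing identifying information” could mean in practice (building on the hypothetical bid request sketched above; these field names are ours, not Google’s actual schema or code), the effect of the setting is roughly to drop the identifying fields before the request is broadcast:

    # Rough illustration only: stripping the categories of identifiers the proposed
    # settlement covers from a hypothetical bid request. Not Google's real schema.
    def strip_identifiers(bid_request: dict) -> dict:
        redacted = dict(bid_request)
        device = dict(redacted.get("device", {}))
        device.pop("ifa", None)     # pseudonymous / mobile advertising IDs
        device.pop("ip", None)      # IP address
        device.pop("ua", None)      # user agent details
        redacted["device"] = device
        redacted.pop("user", None)  # no stable user ID left for cookie matching
        return redacted

    # strip_identifiers(bid_request) would still describe the ad slot and page,
    # but no longer carry the fields that tie the request back to a person.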

While this settlement would be a step in the right direction, it would still require users to actively opt out of their identifying information being shared through RTB. Those who do not change their default settings—research shows this is most people—will remain vulnerable to RTB’s massive daily data breach. Google broadcasting your personal data to thousands of companies each time you see an ad is an unacceptable and dangerous default. 

The impact of RTB Control is further limited by technical constraints on who can enable it. RTB Control will only work for devices and browsers where Google can verify users are signed in to their Google account, or for signed-out users on browsers that allow third-party cookies. People who don't sign in to a Google account or don't enable privacy-invasive third-party cookies cannot benefit from this protection. These limitations could easily be avoided by making RTB Control the default for everyone. If the settlement is approved, regulators and lawmakers should push Google to enable RTB Control by default.

The Real Solution: Ban Online Behavioral Advertising

Limiting the data exposed through RTB is important, but we also need legislative change to protect people from the online surveillance enabled and incentivized by targeted advertising. The lack of strong, comprehensive privacy law in the U.S. makes it difficult for individuals to know and control how companies use their personal information. Strong privacy legislation can make privacy the default, not something that individuals must fight for through hidden settings or additional privacy tools. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move. Until then, you can limit the harms of RTB by using EFF’s Privacy Badger to block ads that track you, disabling your mobile advertising ID (see instructions for iPhone/Android), and keeping an eye out for Google’s RTB Control.

  •  

✍️ The Bill to Hand Parenting to Big Tech | EFFector 38.2

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. We're diving into the latest attempt to control how kids access the internet and more with our latest EFFector newsletter.

Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks what to do when you hit an age gate online, explains why rent-only copyright culture makes us all worse off, and covers the dangers of law enforcement purchasing straight-up military drones.

Prefer to listen in? In our audio companion, EFF Senior Policy Analyst Joe Mullin explains what lawmakers should do if they really want to help families. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.2 - ✍️ THE BILL TO HAND PARENTING TO BIG TECH

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from these data breaches and unlawful surveillance when you support EFF today!

  •  

DSA Human Rights Alliance Publishes Principles Calling for DSA Enforcement to Incorporate Global Perspectives

The Digital Services Act (DSA) Human Rights Alliance has, since its founding by EFF and Access Now in 2021, worked to ensure that the European Union follows a human rights-based approach to platform governance by integrating a wide range of voices and perspectives to contextualise DSA enforcement and examining the DSA’s effect on tech regulations around the world.

As the DSA moves from legislation to enforcement, it has become increasingly clear that its impact depends not only on the text of the Act but also on how it’s interpreted and enforced in practice. This is why the Alliance has created a set of recommendations to include civil society organizations and rights-defending stakeholders in the enforcement process. 

 The Principles for a Human Rights-Centred Application of the DSA: A Global Perspective, a report published this week by the Alliance, outlines steps the European Commission, as the main DSA enforcer, as well as national policymakers and regulators, should take to bring diverse groups to the table as a means of ensuring that the implementation of the DSA is grounded in human rights standards.

 The Principles also offer guidance for regulators outside the EU who look to the DSA as a reference framework and international bodies and global actors concerned with digital governance and the wider implications of the DSA. The Principles promote meaningful stakeholder engagement and emphasize the role of civil society organisations in providing expertise and acting as human rights watchdogs.

“Regulators and enforcers need input from civil society, researchers, and affected communities to understand the global dynamics of platform governance,” said EFF International Policy Director Christoph Schmon. “Non-EU-based civil society groups should be enabled to engage on equal footing with EU stakeholders on rights-focused elements of the DSA. This kind of robust engagement will help ensure that DSA enforcement serves the public interest and strengthens fundamental rights for everyone, especially marginalized and vulnerable groups.”

“As activists are increasingly intimidated, journalists silenced, and science and academic freedom attacked by those who claim to defend free speech, it is of utmost importance that the Digital Services Act's enforcement is centered around the protection of fundamental rights, including the right to the freedom of expression,” said Marcel Kolaja, Policy & Advocacy Director—Europe at Access Now. “To do so effectively, the global perspective needs to be taken into account. The DSA Human Rights Principles provide this perspective and offer valuable guidance for the European Commission, policymakers, and regulators for implementation and enforcement of policies aiming at the protection of fundamental rights.”

“The Principles come at the crucial moment for the EU candidate countries, such as Serbia, that have been aligning their legislation with the EU acquis but still struggle with some of the basic rule of law and human rights standards,” said Ana Toskic Cvetinovic, Executive Director for Partners Serbia. “The DSA HR Alliance offers the opportunity for non-EU civil society to learn about the existing challenges of DSA implementation and design strategies for impacting national policy development in order to minimize any negative impact on human rights.”

 The Principles call for:

◼ Empowering EU and non-EU Civil Society and Users to Pursue DSA Enforcement Actions

◼ Considering Extraterritorial and Cross-Border Effects of DSA Enforcement

◼ Promoting Cross-Regional Collaboration Among CSOs on Global Regulatory Issues

◼ Establishing Institutionalised Dialogue Between EU and Non-EU Stakeholders

◼ Upholding the Rule of Law and Fundamental Rights in DSA Enforcement, Free from Political Influence

◼ Considering Global Experiences with Trusted Flaggers and Avoiding Enforcement Abuse

◼ Recognising the International Relevance of DSA Data Access and Transparency Provisions for Human Rights Monitoring

The Principles have been signed by 30 civil society organizations, researchers, and independent experts.

The DSA Human Rights Alliance represents diverse communities across the globe to ensure that the DSA embraces a human rights-centered approach to platform governance and that EU lawmakers consider the global impacts of European legislation.

 

  •  

Beware: Government Using Image Manipulation for Propaganda

U.S. Homeland Security Secretary Kristi Noem last week posted a photo of the arrest of Nekima Levy Armstrong, one of three activists who had entered a St. Paul, Minn. church to confront a pastor who also serves as acting field director of the St. Paul Immigration and Customs Enforcement (ICE) office. 

A short while later, the White House posted the same photo – except that version had been digitally altered to darken Armstrong’s skin and rearrange her facial features to make it appear she was sobbing or distraught. The Guardian, one of many media outlets to report on this image manipulation, created a handy slider graphic to help viewers see clearly how the photo had been changed.  

This isn’t about “owning the libs” — this is the highest office in the nation using technology to lie to the entire world. 

The New York Times reported it had run the two images through Resemble.AI, an A.I. detection system, which concluded Noem’s image was real but the White House’s version showed signs of manipulation. "The Times was able to create images nearly identical to the White House’s version by asking Gemini and Grok — generative A.I. tools from Google and Elon Musk’s xAI start-up — to alter Ms. Noem’s original image." 
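
You don’t need an AI detection service to see this kind of edit when both versions are public. Below is a minimal sketch (using the Pillow imaging library; the filenames are placeholders, and it assumes the two images have the same dimensions) of a simple pixel difference, which makes the altered regions stand out much like the Guardian’s slider does. In practice, recompression and resizing add noise, so treat a diff like this as a starting point for scrutiny, not proof on its own.

    # Minimal sketch: compare two published versions of the same photo with Pillow
    # (pip install pillow). Filenames are placeholders for the two public images.
    from PIL import Image, ImageChops

    original = Image.open("noem_original.jpg").convert("RGB")
    altered = Image.open("whitehouse_version.jpg").convert("RGB")

    # Pixel-by-pixel difference: unchanged regions come out black, edited regions
    # (skin tone, facial features) stand out.
    diff = ImageChops.difference(original, altered)
    print(diff.getbbox())        # bounding box of everything that changed
    diff.save("difference.png")  # inspect visually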

Most of us can agree that the government shouldn’t lie to its constituents. We can also agree that good government does not involve emphasizing cruelty or furthering racial biases. But this abuse of technology violates both those norms. 

“Accuracy and truthfulness are core to the credibility of visual reporting,” the National Press Photographers Association said in a statement issued about this incident. “The integrity of photographic images is essential to public trust and to the historical record. Altering editorial content for any purpose that misrepresents subjects or events undermines that trust and is incompatible with professional practice.” 

Reworking an arrest photo to make the arrestee look more distraught is not only a lie, but also a doubling-down on a “the cruelty is the point” manifesto. Using a manipulated image further humiliates the individual and perpetuates harmful biases, and the only reason to darken an arrestee’s skin would be to reinforce colorist stereotypes and stoke the flames of racial prejudice, particularly against dark-skinned people.  

History is replete with cruel and racist images as propaganda: Think of Nazi Germany’s cartoons depicting Jewish people, or contemporaneously, U.S. cartoons depicting Japanese people as we placed Japanese-Americans in internment camps. Time magazine caught hell in 1994 for using an artificially darkened photo of O.J. Simpson on its cover, and several Republican political campaigns in recent years have been called out for similar manipulation. 

But in an age when we can create or alter a photo with a few keyboard strokes, when we can alter what viewers think is reality so easily and convincingly, the danger of abuse by government is greater.   

Had the Trump administration not ham-handedly released the retouched perp-walk photo after Noem had released the original, we might not have known the reality of that arrest at all. This dishonesty is all the more reason why Americans’ right to record law enforcement activities must be protected. Without independent records and documentation of what’s happening, there’s no way to contradict the government’s lies. 

This incident raises the question of whether the Trump Administration feels emboldened to manipulate other photos for other propaganda purposes. Does it rework photos of the President to make him appear healthier, or more awake? Does it rework military or intelligence images to create pretexts for war? Does it rework photos of American citizens protesting or safeguarding their neighbors to justify a military deployment? 

In this instance, like so much of today’s political trolling, there’s a good chance it’ll be counterproductive for the trolls: The New York Times correctly noted that the doctored photograph could hinder Armstrong’s right to a fair trial. “As the case proceeds, her lawyers could use it to accuse the Trump administration of making what are known as improper extrajudicial statements. Most federal courts bar prosecutors from making any remarks about court filings or a legal proceeding outside of court in a way that could prejudice the pool of jurors who might ultimately hear the case.” They also could claim the doctored photo proves the Justice Department bore some sort of animus against Armstrong and charged her vindictively. 

In the past, we've urged caution when analyzing proposals to regulate technologies that could be used to create false images. In those cases, we argued that any new regulation should rely on the established framework for addressing harms caused by other forms of harmful false information. But in this situation, it is the government itself that is misusing technology and propagating harmful falsehoods. This doesn't require new laws; the government can and should put an end to this practice on its own. 

Any reputable journalism organization would fire an employee for manipulating a photo this way; many have done exactly that. It’s a shame our government can’t adhere to such a basic ethical and moral code too. 

  •  

EFF Statement on ICE and CBP Violence

Dangerously unchecked surveillance and rights violations have been a throughline of the Department of Homeland Security since the agency’s creation in the wake of the September 11th attacks. In particular, Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have been responsible for countless civil liberties and digital rights violations since that time. In the past year, however, ICE and CBP have descended into utter lawlessness, repeatedly refusing to exercise or submit to the democratic accountability required by the Constitution and our system of laws.  

The Trump Administration has made indiscriminate immigration enforcement and mass deportation a key feature of its agenda, with little to no accountability for illegal actions by agents and agency officials. Over the past year, we’ve seen massive ICE raids in cities from Los Angeles to Chicago to Minneapolis. Supercharged by an unprecedented funding increase, immigration enforcement agents haven’t been limited to boots on the ground: they’ve been scanning faces, tracking neighborhood cell phone activity, and amassing surveillance tools to monitor immigrants and U.S. citizens alike. 

Congress must vote to reject any further funding of ICE and CBP

The latest enforcement actions in Minnesota have led to federal immigration agents killing Renee Good and Alex Pretti. Both were engaged in their First Amendment right to observe and record law enforcement when they were killed. And it’s only because others similarly exercised their right to record that these killings were documented and widely exposed, countering false narratives the Trump Administration promoted in an attempt to justify the unjustifiable.  

These constitutional violations are systemic, not one-offs. Just last week, the Associated Press reported on a leaked ICE memo that authorizes agents to enter homes solely based on “administrative” warrants—lacking any judicial involvement. This government policy is contrary to the “very core” of the Fourth Amendment, which protects us against unreasonable search and seizure, especially in our own homes.

These violations must stop now. ICE and CBP have grown so disdainful of the rule of law that reforms or guardrails cannot suffice. We join with many others in saying that Congress must vote to reject any further funding of ICE and CBP this week. But that is not enough. It’s time for Congress to do the real work of rebuilding our immigration enforcement system from the ground up, so that it respects human rights (including digital rights) and human dignity, with real accountability for individual officers, their leadership, and the agency as a whole.

  •  

Search Engines, AI, And The Long Fight Over Fair Use


We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.

Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.

Today, the same argument is being recycled against AI. The underlying question is the same: whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works.

Fair Use Protects Analysis—Even When It’s Automated

U.S. courts have long recognized that copying for purposes of analysis, indexing, and learning is a classic fair use. That principle didn’t originate with artificial intelligence. It doesn’t disappear just because the processes are performed by a machine.

Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That’s why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don’t substitute for the original works; they enable new forms of knowledge and expression.

Training AI models fits squarely within that tradition. An AI system learns by analyzing patterns across many works. The purpose of that copying is not to reproduce or replace the original texts, but to extract statistical relationships that allow the AI system to generate new outputs. That is the hallmark of a transformative use. 
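
A toy illustration can make the distinction concrete. The sketch below (deliberately simplistic, and in no way how production models are actually trained) only tallies which word tends to follow which in a tiny corpus; what it retains are statistical relationships between words, not copies of the underlying texts:

    # Toy illustration: "training" here just counts which word tends to follow which.
    # The result is a table of statistical relationships, not a copy of the texts.
    from collections import Counter, defaultdict

    corpus = ["the cat sat on the mat", "the dog sat on the rug"]

    following = defaultdict(Counter)
    for text in corpus:
        words = text.split()
        for current_word, next_word in zip(words, words[1:]):
            following[current_word][next_word] += 1

    print(following["sat"])  # Counter({'on': 2}) -- a pattern, not a passage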

Attacking AI training on copyright grounds misunderstands what’s at stake. If copyright law is expanded to require permission for analyzing or learning from existing works, the damage won’t be limited to generative AI tools. It could threaten long-standing practices in machine learning and text-and-data mining that underpin research in science, medicine, and technology. 

Researchers already rely on fair use to analyze massive datasets such as scientific literature. Requiring licenses for these uses would often be impractical or impossible, and it would advantage only the largest companies with the money to negotiate blanket deals. Fair use exists to prevent copyright from becoming a barrier to understanding the world. The law has protected learning before. It should continue to do so now, even when that learning is automated. 

A Road Forward For AI Training And Fair Use 

One court has already shown how these cases should be analyzed. In Bartz v. Anthropic, the court found that using copyrighted works to train an AI model is a highly transformative use. Training is a way of studying how language works, not of reproducing or supplanting the original books. Any harm to the market for the original works was speculative. 

The court in Bartz rejected the idea that an AI model might infringe because, in some abstract sense, its output competes with existing works. While EFF disagrees with other parts of the decision, the court’s ruling on AI training and fair use offers a good approach. Courts should focus on whether training is transformative and non-substitutive, not on fear-based speculation about how a new tool could affect someone’s market share. 

AI Can Create Problems, But Expanding Copyright Is the Wrong Fix 

Workers’ concerns about automation and displacement are real and should not be ignored. But copyright is the wrong tool to address them. Managing economic transitions and protecting workers during turbulent times are core functions of government. Copyright law doesn’t help with those tasks in the slightest. Expanding copyright control over learning and analysis won’t stop new forms of worker automation—it never has. But it will distort copyright law and undermine free expression. 

Broad licensing mandates may also do harm by entrenching the current biggest incumbent companies. Only the largest tech firms can afford to negotiate massive licensing deals covering millions of works. Smaller developers, research teams, nonprofits, and open-source projects will all get locked out. Copyright expansion won’t restrain Big Tech—it will give it a new advantage.  

Fair Use Still Matters

Learning from prior work is foundational to free expression. Rightsholders cannot be allowed to control it. Courts have rejected that move before, and they should do so again.

Search, indexing, and analysis didn’t destroy creativity. Nor did the photocopier, nor the VCR. They expanded speech, access to knowledge, and participation in culture. Artificial intelligence raises hard new questions, but fair use remains the right starting point for thinking about training.

  •  

Rent-Only Copyright Culture Makes Us All Worse Off

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

In the Netflix/Spotify/Amazon era, many of us access copyrighted works purely in digital form – and that means we rarely have the chance to buy them. Instead, we are stuck renting them, subject to all kinds of terms and conditions. And because the content is digital, reselling it, lending it, even preserving it for your own use inevitably requires copying. Unfortunately, when it comes to copying digital media, US copyright law has pretty much lost the plot.

As we approach the 50th anniversary of the 1976 Copyright Act, the last major overhaul of US copyright law, we’re not the only ones wondering if it’s time for the next one. It’s a high-risk proposition, given the wealth and influence of entrenched copyright interests who will not hesitate to send carefully selected celebrities to argue for changes that will send more money, into fewer pockets, for longer terms. But it’s equally clear that the current law is failing ordinary users, and nowhere is that more evident than in the waning influence of Section 109, aka the first sale doctrine.

First sale—the principle that once you buy a copyrighted work you have the right to re-sell it, lend it, hide it under the bed, or set it on fire in protest—is deeply rooted in US copyright law. Indeed, in an era where so many judges are looking to the Framers for guidance on how to interpret current law, it’s worth noting that the first sale principles (also characterized as “copyright exhaustion”) can be found in the earliest copyright cases and applied across the rights in the so-called “copyright bundle.”

Unfortunately, courts have held that first sale, at least as it was codified in the Copyright Act, only applies to distribution, not reproduction. So even if you want to copy a rented digital textbook to a second device, and you go through the trouble of deleting it from the first device, the doctrine does not protect you.

We’re all worse off as a result. Our access to culture, from hit songs to obscure indie films, is mediated by the whims of major corporations. With physical media, the first sale principle built bustling secondhand markets, community swaps, and libraries—places where culture can be shared and celebrated, while making it more affordable for everyone.

And while these new subscription or rental services have an appealing upfront cost, they come with a lot more precarity. If you love rewatching a show, you may be chasing it between services or find it is suddenly unavailable on any platform. Or, as fans of Mad Men or Buffy the Vampire Slayer know, you could be stuck with a terrible remaster as the only digital version available.

Last year we saw one improvement with California Assembly Bill 2426 taking effect. In California, companies must now at least disclose to potential customers if a “purchase” is a revocable license—i.e., if they can blow it up after you pay. A story driving this change was Ubisoft revoking access to “The Crew” and making customers’ copies unplayable a decade after launch. 

On the federal level, EFF, Public Knowledge, and 15 other public interest organizations backed Sen. Ron Wyden’s message to the FTC to similarly establish clear ground rules for digital ownership and sales of goods. Unfortunately FTC Chairman Andrew Ferguson has thus far turned down this easy win for consumers.

As for the courts, some scholars think they have just gotten it wrong. We agree, but it appears we need Congress to set them straight. The Copyright Act might not need a complete overhaul, but Section 109 certainly does. The current version hurts consumers, artists, and the millions of ordinary people who depend on software and digital works every day for entertainment, education, transportation, and, yes, to grow our food. 

We realize this might not be the most urgent problem Congress confronts in 2026—to be honest, we wish it was—but it’s a relatively easy one to solve. That solution could release a wave of new innovation, and equally importantly, restore some degree of agency to American consumers by making them owners again.

  •  

Copyright Kills Competition

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright owners increasingly claim more draconian copyright law and policy will fight back against big tech companies. In reality, copyright gives the most powerful companies even more control over creators and competitors. Today’s copyright policy concentrates power among a handful of corporate gatekeepers—at everyone else’s expense. We need a system that supports grassroots innovation and emerging creators by lowering barriers to entry—ultimately offering all of us a wider variety of choices.

Pro-monopoly regulation through copyright won’t provide any meaningful economic support for vulnerable artists and creators. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is like trying to help a bullied kid by giving them more lunch money for the bully to take.

Entertainment companies’ historical practices bear out this concern. For example, from the late 2000s to the mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There’s no reason to think that these same companies would treat their artists more fairly now.

AI Training

In the AI era, copyright may seem like a good way to prevent big tech from profiting from AI at individual creators’ expense—it’s not. In fact, the opposite is true. Developing a large language model requires developers to train the model on millions of works. Requiring developers to license enough AI training data to build a large language model would lock out all but the largest corporations—those that either have their own trove of training data or can afford to strike a deal with one that does. This would result in all the usual harms of limited competition, like higher costs, worse service, and heightened security risks. New, beneficial AI tools that allow people to express themselves or access information would be far less likely to emerge.

Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, the first of many copyright lawsuits over the use of works to train AI. ROSS Intelligence was a legal research startup that built an AI-based tool to compete with ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. ROSS trained its tool using “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. The tool didn’t output any of the headnotes, but Thomson Reuters sued ROSS anyway. A federal appeals court is still considering the key copyright issues in the case—which EFF weighed in on last year. EFF hopes that the appeals court will reject this overbroad interpretation of copyright law. But in the meantime, the case has already forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.

Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. The cost of licensing enough works to train an LLM would be prohibitively expensive for most would-be competitors.

The DMCA’s “Anti-Circumvention” Provision

The Digital Millennium Copyright Act’s “anti-circumvention” provision, Section 1201, is another case in point. Congress ostensibly passed the DMCA to discourage would-be infringers from defeating Digital Rights Management (DRM) and other access controls and copy restrictions on creative works.

In practice, it’s done little to deter infringement—after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It’s been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards).

Giving rights holders a veto on new competition and innovation hurts consumers. Instead, we need balanced copyright policy that rewards creators without impeding competition.

  •  

Copyright Should Not Enable Monopoly

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

There’s a crisis of creativity in mainstream American culture. We have fewer and fewer studios and record labels and fewer and fewer platforms online that serve independent artists and creators.  

At its core, copyright is a monopoly right on creative output and expression. It’s intended to allow people who make things to make a living through those things, to incentivize creativity. To square the circle that is “exclusive control over expression” and “free speech,” we have fair use.

However, we aren’t just seeing artists having a time-limited ability to make money off of their creations. We are also seeing large corporations turn into megacorporations and consolidating huge stores of copyrights under one umbrella. When the monopoly right granted by copyright is compounded by the speed and scale of media company mergers, we end up with a crisis in creativity. 

People have been complaining about the lack of originality in Hollywood for a long time. What is interesting is that the response from the major studios has rarely, especially recently, been to invest in original programming. Instead, they have increased their copyright holdings through mergers and acquisitions. In today’s consolidated media world, copyright is doing the opposite of its intended purpose: instead of encouraging creativity, it’s discouraging it. The drive to snap up media franchises (or “intellectual properties”) that can generate sequels, reboots, spinoffs, and series for years to come has crowded out truly original and fresh creativity in many sectors. And since copyright terms last so long, there isn’t even a ticking clock to force these corporations to seek out new original creations. 

In theory, the internet should provide a counterweight to this problem by lowering barriers to entry for independent creators. But as online platforms for creativity likewise shrink in number and grow in scale, they have closed ranks with the major studios.  

It’s a betrayal of the promise of the internet: that it should be a level playing field where you get to decide what you want to do, watch, listen to, read. And our government should be ashamed for letting it happen.  

  •  

Statutory Damages: The Fuel of Copyright-based Censorship

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Imagine every post online came with a bounty of up to $150,000 paid to anyone who finds it violates opaque government rules—all out of the pocket of the platform. Smaller sites could be snuffed out, and big platforms would avoid crippling liability by aggressively blocking, taking down, and penalizing speech that even possibly violates these rules. In turn, users would self-censor, and opportunists would turn accusations into a profitable business.

This dystopia isn’t a fantasy, it’s close to how U.S. copyright’s broken statutory damages regime actually works.

Copyright includes “statutory damages,” which means letting a jury decide how big of a penalty the defendant will have to pay—anywhere from $200 to $150,000 per work—without the jury necessarily seeing any evidence of actual financial losses or illicit profits. In fact, the law gives judges and juries almost no guidelines on how to set damages. This is a huge problem for online speech.
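
To see how quickly that range compounds, here is a minimal sketch in Python (illustrative arithmetic only, using the per-work range described above) of the exposure a defendant faces:

    # Illustrative arithmetic only: the spread of statutory damages a jury could
    # award, using the $200-$150,000 per-work range described above.
    def exposure_range(num_works: int, low: int = 200, high: int = 150_000) -> tuple[int, int]:
        """Return the minimum and maximum statutory damages for a given number of works."""
        return num_works * low, num_works * high

    # For the 24 music tracks in the $222,000 verdict discussed below, a jury was
    # free to award anything from $4,800 to $3.6 million; it settled on $9,250 per
    # track, with almost no guidance on how to pick that number.
    print(exposure_range(24))  # (4800, 3600000)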

One way or another, everyone builds on the speech of others when expressing themselves online: quoting posts, reposting memes, sharing images from the news. For some users, re-use is central to their online expression: parodists, journalists, researchers, and artists use others’ words, sounds, and images as part of making something new every day. Both these users and the online platforms they rely on risk unpredictable, potentially devastating penalties if a copyright holder objects to some re-use and a court disagrees with the user’s well-intentioned efforts.

On Copyright Week, we like to talk about ways to improve copyright law. One of the most important would be to fix U.S. copyright’s broken statutory damages regime. In other areas of civil law, the courts have limited jury-awarded punitive damages so that they can’t be far higher than the amount of harm caused. Extremely large jury awards for fraud, for example, have been found to offend the Constitution’s Due Process Clause. But somehow, that’s not the case in copyright—some courts have ruled that Congress can set damages that are potentially hundreds of times greater than actual harm.

Massive, unpredictable damages awards for copyright infringement, such as a $222,000 penalty for sharing 24 music tracks online, are the fuel that drives overzealous or downright abusive takedowns of creative material from online platforms. Capricious and error-prone copyright enforcement bots, like YouTube’s Content ID, were created in part to avoid the threat of massive statutory damages against the platform. Those same damages create an ever-present bias in favor of major rightsholders and against innocent users in the platforms’ enforcement decisions. And they stop platforms from addressing the serious problems of careless and downright abusive copyright takedowns.

By turning litigation into a game of financial Russian roulette, statutory damages also discourage artistic and technological experimentation at the boundaries of fair use. None but the largest corporations can risk ruinous damages if a well-intentioned fair use crosses the fuzzy line into infringement.

“But wait”, you might say, “don’t legal protections like fair use and the safe harbors of the Digital Millennium Copyright Act protect users and platforms?” They do—but the threat of statutory damages makes that protection brittle. Fair use allows for many important re-uses of copyrighted works without permission. But fair use is heavily dependent on circumstances and can sometimes be difficult to predict when copyright is applied to new uses. Even well-intentioned and well-resourced users avoid experimenting at the boundaries of fair use when the cost of a court disagreeing is so high and unpredictable.

Many reforms are possible. Congress could limit statutory damages to a multiple of actual harm. That would bring U.S. copyright in line with other countries, and with other civil laws like patent and antitrust. Congress could also make statutory damages unavailable in cases where the defendant has a good-faith claim of fair use, which would encourage creative experimentation. Fixing statutory damages would make many of the other problems in copyright law more easily solvable, and create a fairer system for creators and users alike.

  •  

💾 The Worst Data Breaches of 2025—And What You Can Do | EFFector 38.1

So many data breaches happen throughout the year that it can be pretty easy to gloss over not just if, but how many different breaches compromised your data. We're diving into these data breaches and more with our latest EFFector newsletter.

Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks U.S. Immigration and Customs Enforcement's (ICE) surveillance spending spree, explains how hackers are countering ICE's surveillance, and invites you to our free livestream covering online age verification mandates.

Prefer to listen in? In our audio companion, EFF Security and Privacy Activist Thorin Klosowski explains what you can do to protect yourself from data breaches and how companies can better protect their users. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.1 - 💾 THE WORST DATA BREACHES OF 2025—and what you can do

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from these data breaches and unlawful surveillance when you support EFF today!

  •  

EFF Joins Internet Advocates Calling on the Iranian Government to Restore Full Internet Connectivity

Earlier this month, Iran’s internet connectivity faced one of its most severe disruptions in recent years with a near-total shutdown from the global internet and major restrictions on mobile access.

EFF joined architects, operators, and stewards of the global internet infrastructure in calling upon authorities in Iran to immediately restore full and unfiltered internet access. We further call upon the international technical community to remain vigilant in monitoring connectivity and to support efforts that ensure the internet remains open, interoperable, and accessible to all.

This is not the first time the people in Iran have been forced to experience this, with the government suppressing internet access in the country for many years. In the past three years in particular, people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention and in response, the Iranian government rushed to control both the public narrative and organizing efforts by banning social media and sometimes cutting off internet access altogether. 

EFF has long maintained that governments and occupying powers must not disrupt internet or telecommunication access. Cutting off telecommunications and internet access is a violation of basic human rights and a direct attack on people's ability to access information and communicate with one another. 

Our joint statement continues:

“We assert the following principles:

  1. Connectivity is a Fundamental Enabler of Human Rights: In the 21st century, the right to assemble, the right to speak, and the right to access information are inextricably linked to internet access.
  2. Protecting the Global Internet Commons: National-scale shutdowns fragment the global network, undermining the stability and trust required for the internet to function as a global commons.
  3. Transparency: The technical community condemns the use of BGP manipulation and infrastructure filtering to obscure events on the ground.”

Read the letter in full here

  •  

EFF Condemns FBI Search of Washington Post Reporter’s Home

Government invasion of a reporter’s home, and seizure of journalistic materials, is exactly the kind of abuse of power the First Amendment is designed to prevent. It represents the most extreme form of press intimidation. 

Yet, that’s what happened on Wednesday morning to Washington Post reporter Hannah Natanson, when the FBI searched her Virginia home and took her phone, two laptops, and a Garmin watch. 

The Electronic Frontier Foundation has joined 30 other press freedom and civil liberties organizations in condemning the FBI’s actions against Natanson. The First Amendment exists precisely to prevent the government from using its powers to punish or deter reporting on matters of public interest—including coverage of leaked or sensitive information. Searches like this threaten not only journalists, but the public’s right to know what its government is doing.

In the statement published yesterday, we call on Congress: 

  • To exercise oversight of the DOJ by calling Attorney General Pam Bondi before Congress to answer questions about the FBI’s actions;
  • To reintroduce and pass the PRESS Act, which would limit government surveillance of journalists, and its ability to compel journalists to reveal sources;
  • To reform the 108-year-old Espionage Act so it can no longer be used to intimidate and attack journalists;
  • And to pass a resolution confirming that the recording of law enforcement activity is protected by the First Amendment.

We’re joined on this letter by Free Press Action, the American Civil Liberties Union, PEN America, the NewsGuild-CWA, the Society of Professional Journalists, the Committee to Protect Journalists, and many other press freedom and civil liberties groups.

  •  

EFF to California Appeals Court: First Amendment Protects Journalist from Tech Executive’s Meritless Lawsuit

EFF asked a California appeals court to uphold a lower court’s decision to strike a lawsuit that tech CEO Maury Blackman filed against a journalist in an attempt to silence reporting Blackman didn’t like.

The journalist, Jack Poulson, reported on Blackman’s arrest for felony domestic violence after receiving a copy of the arrest report from a confidential source. Blackman didn’t like that. So, he sued Poulson—along with Substack, Amazon Web Services, and Poulson’s non-profit, Tech Inquiry—to try to force Poulson to take his articles down from the internet.

Fortunately, the trial court saw this case for what it was: a classic SLAPP, or a strategic lawsuit against public participation. The court dismissed the entire complaint under California’s anti-SLAPP statute, which provides a way for defendants to swiftly defeat baseless claims designed to chill their free speech.

The appeals court should affirm the trial court’s correct decision.  

Poulson’s reporting is just the kind of activity that the state’s anti-SLAPP law was designed to protect: truthful speech about a matter of public interest. The felony domestic violence arrest of the CEO of a controversial surveillance company with U.S. military contracts is undoubtedly a matter of public interest. As we explained to the court, “the public has a clear interest in knowing about the people their government is doing business with.”

Blackman’s claims are totally meritless because they are barred by the First Amendment. The First Amendment protects Poulson’s right to publish and report on the incident report. Blackman argues that a court order sealing the arrest overrides Poulson’s right to report the news—despite decades of Supreme Court and California Court of Appeals precedent to the contrary. The trial court correctly rejected this argument and found that the First Amendment defeats all of Blackman’s claims. As the trial court explained, “the First Amendment’s protections for the publication of truthful speech concerning matters of public interest vitiate Blackman’s merits showing.”

The court of appeals should reach the same conclusion.

  •  

Baton Rouge Acquires a Straight-Up Military Surveillance Drone

The Baton Rouge Police Department announced this week that it will begin using a drone designed by military equipment manufacturers Lockheed Martin and Edge Autonomy, making it one of the first local police departments in the United States to deploy an unmanned aerial vehicle (UAV) with a history of primary use in foreign war zones — and with such extensive surveillance capabilities. This is a dangerous escalation in the militarization of local law enforcement.

This is a troubling development in an already long history of local law enforcement acquiring and utilizing military-grade surveillance equipment. It should be a cautionary tale that prods communities across the country to be proactive in ensuring that drones can only be acquired and used in ways that are well-documented, transparent, and subject to public feedback. 

Baton Rouge bought the Stalker VXE30 from Edge Autonomy, which partners with Lockheed Martin and began operating under the brand Redwire this week. According to reporting from WBRZ ABC2 in Louisiana, the drone, training, and batteries cost about $1 million. 

Baton Rouge Police Department officers stand with the Stalker VXE30 drone in a photo shared by the BRPD via Facebook.

All of the regular concerns surrounding drones apply to this new one in use by Baton Rouge:

  • Drones can access and view spaces that are otherwise off-limits to law enforcement, including backyards, decks, and other areas of personal property.
  • Footage captured by camera-enabled drones may be stored and shared in ways that go far beyond the initial flight.
  • Additional camera-based surveillance can be installed on the drone, including automated license plate readers and the retroactive application of biometric analysis, such as face recognition.

However, the use of a military-grade drone hypercharges these concerns. The Stalker VXE30’s surveillance capabilities extend for dozens of miles, and it can fly faster and longer than the standard police drones already in use. 

“It can be miles away, but we can still have a camera looking at your face, so we can use it for surveillance operations,” BRPD Police Chief TJ Morse told reporters.

Drone models similar to the Stalker VXE30 have been used in military operations around the world and are currently being used by the U.S. Army and other branches for long-range reconnaissance. Typically, police departments deploy drone models similar to those commercially available from companies like DJI, which until recently was the subject of a proposed Federal Communications Commission (FCC) ban, or devices provided by police technology companies like Skydio, which partners with Axon and Flock Safety.

Also troubling is the capacity to add extra equipment to these drones: so-called “payloads” that could include other types of surveillance equipment and even weapons. 

The Baton Rouge community must put policies in place that restrict and provide oversight of any possible uses of this drone, as well as any potential additions law enforcement might make. 

EFF has filed a public records request to learn more about the conditions of this acquisition and gaps in oversight policies. We've been tracking the expansion of police drone surveillance for years, and this acquisition represents a dangerous new frontier. We'll continue investigating and supporting communities fighting back against the militarization of local police and mass surveillance. To learn more about the surveillance technologies being used in your city, please check out the Atlas of Surveillance.

  •  

Congress Wants To Hand Your Parenting to Big Tech

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing today on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.” 

That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control. 

Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.  

Kids Under 13 Are Already Banned From Social Media

One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of bill sponsors, one might think that’s a big change, and that today’s rules let kids wander freely into social media sites. But that’s not the case.   

Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA, the Children’s Online Privacy Protection Act.

Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access. 

Most Social Media Use By Younger Kids Is Family-Mediated 

If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help. 

A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.) 

Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.

A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform. 

The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together. 

KOSMA Forces Platforms To Override Families 

This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.

Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability over what they “should have known” about a user’s age, they will require all users to prove their age so they can block anyone under 13. 

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves. 

Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.

Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.

Your Family, Their Algorithms

KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t. 

These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems. 

What Families Lose 

This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.

These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and, usually, decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust. 

What Can Be Done Instead

KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.

If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.

But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.

Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.

  •  

Report: ICE Using Palantir Tool That Feeds On Medicaid Data

EFF last summer asked a federal judge to block the federal government from using Medicaid data to identify and deport immigrants.  

We also warned about the danger of the Trump administration consolidating all of the government’s information into a single searchable, AI-driven interface with help from Palantir, a company that has a shaky-at-best record on privacy and human rights. 

Now we have the first evidence that our concerns have become reality. 

“Palantir is working on a tool for Immigration and Customs Enforcement (ICE) that populates a map with potential deportation targets, brings up a dossier on each person, and provides a ‘confidence score’ on the person’s current address,” 404 Media reports today. “ICE is using it to find locations where lots of people it might detain could be based.” 

The tool – dubbed Enhanced Leads Identification & Targeting for Enforcement (ELITE) – receives people’s addresses from the Department of Health and Human Services (which includes Medicaid) and other sources, 404 Media reports, based on court testimony in Oregon by law enforcement agents, among other sources. 

This revelation comes as ICE – which has gone on a surveillance technology shopping spree – floods Minneapolis with agents, violently running roughshod over the civil rights of immigrants and U.S. citizens alike; President Trump has threatened to use the Insurrection Act of 1807 to deploy military troops against protestors there. Other localities are preparing for the possibility of similar surges. 

This kind of consolidation of government records provides enormous government power that can be abused. Different government agencies necessarily collect information to provide essential services or collect taxes, but the danger comes when the government begins pooling that data and using it for reasons unrelated to the purpose for which it was collected. 

As EFF Executive Director Cindy Cohn wrote in a Mercury News op-ed last August, “While couched in the benign language of eliminating government ‘data silos,’ this plan runs roughshod over your privacy and security. It’s a throwback to the rightly mocked ‘Total Information Awareness’ plans of the early 2000s that were, at least publicly, stopped after massive outcry from the public and from key members of Congress. It’s time to cry out again.” 

In addition to the amicus brief we co-authored challenging ICE’s grab for Medicaid data, EFF has successfully sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, filed an amicus brief in a suit challenging ICE’s grab for taxpayer data, and sued the departments of State and Homeland Security to halt a mass surveillance program to monitor constitutionally protected speech by noncitizens lawfully present in the U.S. 

But litigation isn’t enough. People need to keep raising concerns via public discourse and Congress should act immediately to put brakes on this runaway train that threatens to crush the privacy and security of each and every person in America.  

  •  

So, You’ve Hit an Age Gate. What Now?

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

EFF is against age gating and age verification mandates, and we hope to overturn the mandates that exist and prevent new ones. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data getting leaked in the process.

At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you’re presented with, and answers to those questions for some of the most popular social media sites. Even though there’s no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you manage this awful situation.

Follow the Data

Since we know that leaks happen despite the best efforts of software engineers, we generally recommend submitting the absolute least amount of data possible. Unfortunately, that’s not going to be possible for everyone. Even facial age estimation solutions where pictures of your face never leave your device, offering some protection against data leakage, are not a good option for all users: facial age estimation works less well for people of color, trans and nonbinary people, and people with disabilities. There are some systems that use fancy cryptography so that a digital ID saved to your device won’t tell the website anything more than whether you meet the age requirement, but access to that digital ID isn’t available to everyone or for all platforms. You may also not want to register for a digital ID and save it to your phone, if you don’t want to take the chance of all the information on it being exposed upon request of an over-zealous verifier, or you simply don’t want to be part of a digital ID system at all.

If you’re given the option of selecting a verification method and are deciding which to use, we recommend considering the following questions for each process allowed by each vendor:

    • Data: What info does each method require?
    • Access: Who can see the data during the course of the verification process?
    • Retention: Who will hold onto that data after the verification process, and for how long?
    • Audits: How sure are we that the stated claims will happen in practice? For example, are there external audits confirming that data is not accidentally leaked to another site along the way? Ideally these will be in-depth, security-focused audits by specialized auditors like NCC Group or Trail of Bits, instead of audits that merely certify adherence to standards. 
    • Visibility: Who will be aware that you’re attempting to verify your age, and will they know which platform you’re trying to verify for?

We attempt to provide answers to these questions below. To begin, there are two major factors to consider when answering these questions: the tools each platform uses, and the overall system those tools are part of.

In general, most platforms offer age estimation options like face scans as a first line of age assurance. These vary in intrusiveness, but their main problem is inaccuracy, particularly for marginalized users. Third-party age verification vendors Private ID and k-ID offer on-device facial age estimation, but another common vendor, Yoti, sends the image to their servers during age checks by some of the biggest platforms. This risks leaking the images themselves, and also the fact that you’re using that particular website, to the third party. 

Then there are the document-based verification services, which require you to submit a hard identifier like a government-issued ID. This method requires you to prove both your age and your identity. A platform can do this in-house through a designated dataflow, or by sending that data to a third party. We’ve already seen examples of how this can fail. For example, Discord routed users' ID data through its general customer service workflow so that a third-party vendor could perform manual review of verification appeals. No one involved ever deleted users' data, so when the system was breached, Discord had to apologize for the catastrophic disclosure of nearly 70,000 photos of users' ID documents. Overly long retention periods expose documents to the risk of breaches and historical data requests, and some document verifiers keep data for needlessly long periods. This is the case with Incode, which provides ID verification for TikTok: Incode holds onto images forever by default, though TikTok should automatically start the deletion process on your behalf.

Some platforms offer alternatives, like proving that you own a credit card, or asking for your email to check if it appears in databases associated with adulthood (like home mortgage databases). These tend to involve less risk when it comes to the sensitivity of the data itself, especially since credit cards can be replaced, but in general still undermine anonymity and pseudonymity and pose a risk of tracking your online activity. We’d prefer to see more assurances across the board about how information is handled.

Each site offers users a menu of age assurance options to choose from. We’ve chosen to present these options in the rough order that we expect most people to prefer. Jump directly to a platform to learn more about its age checks:

Meta – Facebook, Instagram, WhatsApp, Messenger, Threads

Inferred Age

If Meta can guess your age, you may never even see an age verification screen. Meta, which runs Facebook, Threads, Instagram, Messenger, and WhatsApp, first tries to use information you’ve posted to guess your age, like looking at “Happy birthday!” messages. It’s a creepy reminder that they already have quite a lot of information about you.

If Meta cannot guess your age, or if Meta infers you're too young, it will next ask you to verify your age using either facial age estimation, or by uploading your photo ID. 

Face Scan

If you choose to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be shared not only with Yoti, but leaked to third-party data brokers as well. 

You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with Meta. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you in case the image leaks.

Upload ID

If Yoti’s age estimation decides your face looks too young, or if you opt out of facial age estimation, your next recourse is to send Meta a photo of your ID. Meta sends that photo to Yoti to verify the ID. Meta says it will hold onto that ID image for 30 days, then delete it. Meanwhile, Yoti claims it will delete the image immediately after verification. Of course, bugs and process oversights exist, such as accidentally replicating information in logs or support queues, but at least they have stated processes. Your ID contains sensitive information such as your full legal name and home address. Using this option not only runs the (hopefully small, but never nonexistent) risk of that data getting leaked through errors or hacking, but it also lets Meta see the information needed to tie your profile to your identity—which you may not want. If you don’t want Meta to know your name and where you live, or don’t want to rely on both Meta and Yoti keeping their deletion promises, this option may not be right for you.

Google – Gmail, YouTube 

Inferred Age

If Google can guess your age, you may never even see an age verification screen. Your Google account is typically connected to your YouTube account, so if (like mine) your YouTube account is old enough to vote, you may not need to verify your Google account at all. Google first uses information it already knows to try to guess your age, like how long you’ve had the account and your YouTube viewing habits. It’s yet another creepy reminder of how much information these corporations have on you, but at least in this case they aren’t likely to ask for even more identifying data.

If Google cannot guess your age, or decides you're too young, Google will next ask you to verify your age. You’ll be given a variety of options for how to do so, with availability that will depend on your location and your age.

Google’s methods to assure your age include ID verification, facial age estimation, verification by proxy, and digital ID. To prove you’re over 18, you may be able to use facial age estimation, give Google your credit card information, or tell a third-party provider your email address.

Face Scan

If you choose to use facial age estimation, you’ll be sent to a website run by Private ID, a third-party verification service. The website will load Private ID’s verifier within the page—this means that your selfie will be checked without any images leaving your device. If the system decides you’re over 18, it will let Google know that, and only that. Of course, no technology is perfect—should Private ID be mandated to target you specifically, there’s nothing to stop it from sending down code that does in fact upload your image, and you probably won’t notice. But unless your threat model includes being specifically targeted by a state actor or Private ID, that’s unlikely to be something you need to worry about. For most people, no one else will see your image during this process. Private ID will, however, be told that your device is trying to verify your age with Google and Google will still find out if Private ID thinks that you’re under 18.

If Private ID’s age estimation decides your face looks too young, you may next be able to decide if you’d rather let Google verify your age by giving it your credit card information, photo ID, or digital ID, or by letting Google send your email address to a third-party verifier.

Email Usage

If you choose to provide your email address, Google sends it on to a company called VerifyMy. VerifyMy will use your email address to see if you’ve done things like getting a mortgage or paying for utilities using that address. If you use Gmail as your email provider, this may be a privacy-protective option with respect to Google, as Google will then already know the email address associated with the account. But it does tell VerifyMy and its third-party partners that the person behind this email address is looking to verify their age, which you may not want them to know. VerifyMy uses “proprietary algorithms and external data sources” that involve sending your email address to “trusted third parties, such as data aggregators.” It claims to “ensure that such third parties are contractually bound to meet these requirements,” but you’ll have to trust it on that one—we haven’t seen any mention of who those parties are, so you’ll have no way to check up on their practices and security. On the bright side, VerifyMy and its partners do claim to delete your information as soon as the check is completed.

Credit Card Verification

If you choose to let Google use your credit card information, you’ll be asked to set up a Google Payments account. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. Google will then charge a small amount to the card, and refund it once it goes through. If you choose this method, you’ll have to tell Google your credit card info, but the fact that it’s done through Google Payments (their regular card-processing system) means that at least your credit card information won’t be sitting around in some unsecured system. Even if your credit card information happens to accidentally be leaked, this is a relatively low-risk option, since credit cards come with solid fraud protection. If your credit card info gets leaked, you should easily be able to dispute fraudulent charges and replace the card.

Digital ID

If the option is available in your region, you may be able to use your digital ID to verify your age with Google. In some cases, a digital ID can reveal only your age information and nothing else; if you’re given that choice, it can be a good privacy-preserving option. Depending on the implementation, however, there’s a chance that the verification step will “phone home” to the ID provider (usually a government) to let them know the service asked for your age. It’s a complicated and varied topic that you can learn more about by visiting EFF’s page on digital identity.
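
To make the “only reveal your age” idea more concrete, here is a rough, toy sketch of selective disclosure, the general technique behind privacy-preserving digital ID presentations: the issuer signs salted hashes of every claim, and your device reveals only the single claim (plus its salt) that the verifier needs. This is not any real wallet protocol (production systems use standards like ISO mobile driver’s licenses or SD-JWT, and public-key signatures rather than the shared key used here), and every name and value below is invented for illustration.

```python
# A toy sketch of selective disclosure: the issuer signs salted hashes of each
# claim, and the holder reveals only the claim the verifier needs. NOT a real
# wallet protocol; real systems use public-key signatures (this uses a shared
# HMAC key purely to keep the example short) and standardized formats.
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key


def claim_digest(name, value, salt):
    """Hash one claim together with a random salt so it can be disclosed on its own."""
    return hashlib.sha256(json.dumps([salt.hex(), name, value]).encode()).hexdigest()


# 1. Issuer: salt and hash every claim, then sign only the list of digests.
claims = {"name": "Alex Example", "birth_year": 1990, "age_over_18": True}
salts = {k: os.urandom(16) for k in claims}
digests = sorted(claim_digest(k, v, salts[k]) for k, v in claims.items())
signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), "sha256").hexdigest()

# 2. Holder (your phone): disclose ONLY age_over_18, plus the signed digest list.
presentation = {
    "disclosed": {"age_over_18": [claims["age_over_18"], salts["age_over_18"].hex()]},
    "digests": digests,
    "signature": signature,
}

# 3. Verifier (the website): check the signature, then check that the disclosed
#    claim hashes to one of the signed digests. Name and birth year stay hidden.
expected = hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(), "sha256").hexdigest()
assert hmac.compare_digest(expected, presentation["signature"])
value, salt_hex = presentation["disclosed"]["age_over_18"]
assert claim_digest("age_over_18", value, bytes.fromhex(salt_hex)) in presentation["digests"]
print("Verified without learning anything else: age_over_18 =", value)
```

The point of the design is that the verifier can check the issuer’s signature over the full digest list while learning nothing about the claims that stay undisclosed.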

Upload ID

Should none of these options work for you, your final recourse is to send Google a photo of your ID. Here, you’ll be asked to take a photo of an acceptable ID and send it to Google. Though the help page only states that your ID “will be stored securely,” the verification process page says ID “will be deleted after your date of birth is successfully verified.” Acceptable IDs vary by country, but are generally government-issued photo IDs. We like that it’s deleted immediately, though we have questions about what Google means when it says your ID will be used to “improve [its] verification services for Google products and protect against fraud and abuse.” No system is perfect, and we can only hope that Google schedules outside audits regularly.

TikTok

Inferred Age

If TikTok can guess your age, you may never even see an age verification notification. TikTok first tries to use information you’ve posted to estimate your age, looking through your videos and photos to analyze your face and listen to your voice. TikTok treats your uploading of any videos as consent for it to guess how old you look and sound.

If TikTok cannot guess your age, or decides you're too young, it will automatically revoke your access based on age—either restricting features or deleting your account. To get your access and account back, you’ll have a limited amount of time to verify your age or appeal TikTok’s age decision. As soon as you see the notification that your account is restricted, you’ll want to act fast, because in some places you’ll have as little as 23 days before the deadline passes.

When you get that notification, you’re given various options to verify your age based on your location.

Face Scan

If you’re given the option to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be leaked not only to Yoti, but to third-party data brokers as well.

You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with TikTok. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID or your credit card information, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you in case the image leaks.

Credit Card Verification

If you have a credit card in your name, TikTok will accept that as proof that you’re over 18. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. TikTok will charge a small amount to the credit card, and refund it once it goes through. It’s unclear if this goes through their regular payment process, or if your credit card information will be sent through and stored in a separate, less secure system. Luckily, these days credit cards come with solid fraud protection, so if your credit card gets leaked, you should easily be able to dispute fraudulent charges and replace the card. That said, we’d rather TikTok provide assurances that the information will be processed securely.

Credit Card Verification of a Parent or Guardian

Sometimes, if you’re between 13 and 17, you’ll be given the option to let your parent or guardian confirm your age. You’ll tell TikTok their email address, and TikTok will send your parent or guardian an email asking them (a) to confirm your date of birth, and (b) to verify their own age by proving that they own a valid credit card. This option doesn’t always seem to be offered, and in the one case we could find, it’s possible that TikTok never followed up with the parent. So it’s unclear how or if TikTok verifies that the adult whose email you provide is your parent or guardian. If you want to use credit card verification but you’re not old enough to have a credit card, and you’re ok with letting an adult know you use TikTok, this option may be reasonable to try.

Photo with a Random Adult?

Bizarrely, if you’re between 13 and 17, TikTok claims to offer the option to take a photo with literally any random adult to confirm your age. Its help page says that any trusted adult over 25 can be chosen, as long as they’re holding a piece of paper with the code on it that TikTok provides. It also mentions that a third-party provider is used here, but doesn’t say which one. We haven’t found any evidence of this verification method being offered. Please do let us know if you’ve used this method to verify your age on TikTok!

Photo ID and Face Comparison

If you aren’t offered or have failed the other options, you’ll have to verify your age by submitting a copy of your ID and matching photo of your face. You’ll be sent to Incode, a third-party verification service. In a disappointing failure to meet the industry standard, Incode itself doesn’t automatically delete the data you give it once the process is complete, but TikTok does claim to “start the process to delete the information you submitted,” which should include telling Incode to delete your data once the process is done. If you want to be sure, you can ask Incode to delete that data yourself. Incode tells TikTok that you met the age threshold without providing your exact date of birth, but then TikTok wants to know the exact date anyway, so it’ll ask for your date of birth even after your age has been verified.

TikTok itself might not see your actual ID depending on its implementation choices, but Incode will. Your ID contains sensitive information such as your full legal name and home address. Using this option runs the (hopefully small, but never nonexistent) risk of that data getting accidentally leaked through errors or hacking. If you don’t want TikTok or Incode to know your name, what you look like, and where you live—or if you don't want to rely on both TikTok and Incode to keep to their deletion promises—then this option may not be right for you.

Everywhere Else

We’ve covered the major providers here, but age verification is unfortunately being required of many other services that you might use as well. While the providers and processes may vary, the same general principles will apply. If you’re trying to choose what information to provide to continue to use a service, consider the “follow the data” questions mentioned above, and try to find out how the company will store and process the data you give it. The less sensitive the information, the fewer people who have access to it, and the more quickly it’s deleted, the better. You may even come to recognize popular names in the age verification industry: Spotify and OnlyFans use Yoti (just like Meta and TikTok), Quora and Discord use k-ID, and so on. 

Unfortunately, it should be clear by now that none of the age verification options are perfect in terms of protecting information, providing access to everyone, and safely handling sensitive data. That’s just one of the reasons that EFF is against age-gating mandates, and is working to stop and overturn them across the United States and around the world.



  •  

How Hackers Are Fighting Back Against ICE

Read more about how ICE has spent hundreds of millions of dollars on surveillance technology to spy on anyone—and potentially everyone—in the United States, and how to follow the Homeland Security spending trail.

ICE has been invading U.S. cities, targeting, surveilling, harassing, assaulting, detaining, and torturing people who are undocumented immigrants. They also have targeted people with work permits, asylum seekers, permanent residents (people holding “green cards”), naturalized citizens, and even citizens by birth. ICE has spent hundreds of millions of dollars on surveillance technology to spy on anyone—and potentially everyone—in the United States. It can be hard to imagine how to defend oneself against such an overwhelming force. But a few enterprising hackers have started projects to do counter surveillance against ICE, and hopefully protect their communities through clever use of technology. 

Let’s start with Flock, the company behind a number of automated license plate reader (ALPR) and other camera technologies. You might be surprised at how many Flock cameras there are in your community. Many large and small municipalities around the country have signed deals with Flock for license plate readers to track the movement of all cars in their city. Even though these deals are signed by local police departments, oftentimes ICE also gains access.

Because of their ubiquity, people are interested in finding out where and how many Flock cameras are in their community. One project that can help with this is the OUI-SPY, a small piece of open source hardware; the “OUI” in the name refers to the organizationally unique identifier, the manufacturer-specific prefix of a device’s MAC address. The OUI-SPY runs on a cheap Arduino-compatible chip called an ESP32. There are multiple programs available for loading onto the chip, such as “Flock You,” which detects Flock cameras, and “Sky-Spy,” which detects overhead drones. There’s also “BLE Detect,” which detects various Bluetooth signals, including ones from Axon devices, Meta’s Ray-Bans that secretly record you, and more. It also has a mode commonly known as “fox hunting” to track down a specific device. Activists and researchers can use this tool to map out different technologies and quantify the spread of surveillance. 
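
To illustrate the general approach such detectors take, here is a minimal sketch, assuming the cross-platform Python library bleak, that scans for Bluetooth Low Energy advertisements and flags devices whose MAC address prefix (the OUI) or advertised name matches a watchlist. The prefixes and names below are placeholders rather than real Flock or Axon identifiers, and real tools like OUI-SPY rely on curated fingerprint lists and handle complications (such as MAC address randomization) that this sketch ignores.

```python
# A rough sketch of OUI-based device detection, in the spirit of OUI-SPY's
# "BLE Detect" mode. The watchlist entries below are PLACEHOLDERS, not real
# Flock or Axon identifiers. Requires the `bleak` library (pip install bleak).
import asyncio

from bleak import BleakScanner

# Hypothetical watchlist: replace with OUIs and names from your own research.
WATCHLIST_OUIS = {"AA:BB:CC", "11:22:33"}   # first three bytes of a MAC address
WATCHLIST_NAMES = ("flock", "axon")         # substrings of advertised device names


def is_match(device):
    """Return True if a BLE device's MAC prefix or name matches the watchlist."""
    oui = device.address.upper()[:8]
    name = (device.name or "").lower()
    return oui in WATCHLIST_OUIS or any(n in name for n in WATCHLIST_NAMES)


async def main():
    print("Scanning for BLE advertisements for 10 seconds...")
    devices = await BleakScanner.discover(timeout=10.0)
    for device in devices:
        if is_match(device):
            print(f"Possible match: {device.address} ({device.name})")


if __name__ == "__main__":
    asyncio.run(main())
```

The same matching idea underlies the “fox hunting” mode: instead of watching for a manufacturer prefix, you watch for one specific device identifier and use signal strength to home in on it.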

There’s also the open source WiGLE app, which is primarily designed for mapping out Wi-Fi but also has the ability to play an audio alert when a specific Wi-Fi or Bluetooth identifier is detected. This means you can set it up to get a notification when it detects products from Flock, Axon, or other nasties in your vicinity. 

One enterprising YouTuber, Benn Jordan, figured out a way to fool Flock cameras into not recording his license plate simply by painting some minor visual noise on it. The alteration is innocuous enough that any human can still read the plate, but at the time it completely prevented Flock devices from recognizing it as a license plate. Some states have outlawed drivers obscuring their license plates, so taking such action is not recommended. 

Jordan later went on to discover hundreds of misconfigured Flock cameras that were exposing their administrator interface without a password on the public internet. This would allow anyone with an internet connection to view a live surveillance feed, download 30 days of video, view logs, and more. The cameras pointed at parks, public trails, busy intersections, and even a playground. This was a massive breach of public trust and a huge mistake for a company that claims to be working for public safety.

Other hackers have taken on the task of open-source intelligence and community reporting. Interesting examples are deflock.me and alpr.watch, crowdsourced maps of ALPR cameras. Much like the OUI-SPY project, these allow activists to map out and expose Flock surveillance cameras in their community. 

There have also been several ICE reporting apps released, including apps to report ICE sightings in your area such as Stop ICE Alerts, ICEOUT.org, and ICEBlock. ICEBlock was delisted by Apple at the request of Attorney General Pam Bondi, a removal we are suing over. There is also Eyes Up, an app to securely record and archive ICE raids, which was taken down by Apple earlier this year. 

Another interesting project documenting ICE and creating a trove of open-source intelligence is the ICE List Wiki, which contains info on companies that have contracts with ICE, incidents and encounters with ICE, and vehicles ICE uses. 

People without programming knowledge can also get involved. In Chicago, people used whistles to warn their neighbors that ICE was present or in the area. Many people 3D-printed whistles along with instructional booklets to hand out to their communities, allowing a wider distribution of whistles and consequently earlier warnings for their neighbors. 

Many hackers have started hosting digital security trainings for their communities or building websites with security advice, including how to remove your data from the watchful eyes of the surveillance industry. To reach a broader community, trainers have even started hosting trainings on how to defend their communities and what to do in an ICE raid inside video games, such as Fortnite.

There is also EFF’s own Rayhunter project for detecting cell-site simulators, about which we have written extensively. Rayhunter runs on a cheap mobile hotspot and doesn’t require deep technical knowledge to use.

It’s important to remember that we are not powerless. Even in the face of a domestic law enforcement presence with massive surveillance capabilities and military-esque technologies, there are still ways to engage in surveillance self-defense. We cannot give in to nihilism and fear. We must continue to find small ways to protect ourselves and our communities, and when we can, fight back. 

EFF is not affiliated with any of these projects (other than Rayhunter) and does not endorse them. We don’t make any statements about the legality of using any of these projects. Please consult with an attorney to determine what risks there may be. 


  •  

ICE Is Going on a Surveillance Shopping Spree

Read more about how enterprising hackers have started projects to do counter surveillance against ICE, and learn how to follow the Homeland Security spending trail.

U.S. Immigration and Customs Enforcement (ICE) has a new budget under the current administration, and they are going on a surveillance tech shopping spree. Standing at $28.7 billion for the year 2025 (nearly triple their 2024 budget) and at least another $56.25 billion over the next three years, ICE's budget would be the envy of many national militaries around the world. Indeed, this budget would make ICE the 14th best-funded military in the world, right between Ukraine and Israel.  

There are many different agencies under the U.S. Department of Homeland Security (DHS) that deal with immigration, as well as non-immigration-related agencies such as the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA). ICE is specifically the enforcement arm of the U.S. immigration apparatus. Their stated mission is to “[p]rotect America through criminal investigations and enforcing immigration laws to preserve national security and public safety.” 

Of course, ICE doesn’t just end up targeting, surveilling, harassing, assaulting, detaining, and torturing people who are undocumented immigrants. They have targeted people on work permits, asylum seekers, permanent residents (people holding “green cards”), naturalized citizens, and even citizens by birth. 

While the NSA and FBI might be the first agencies that come to mind when thinking about surveillance in the U.S., ICE should not be discounted. ICE has always engaged in surveillance and intelligence-gathering as part of their mission. A 2022 report by Georgetown Law’s Center for Privacy and Technology found the following:

  • ICE had scanned the driver’s license photos of 1 in 3 adults.
  • ICE had access to the driver’s license data of 3 in 4 adults.
  • ICE was tracking the movements of drivers in cities home to 3 in 4 adults.
  • ICE could locate 3 in 4 adults through their utility records.
  • ICE built its surveillance dragnet by tapping data from private companies and state and local bureaucracies.
  • ICE spent approximately $2.8 billion between 2008 and 2021 on new surveillance, data collection and data-sharing programs. 

With a budget for 2025 that is 10 times the size of the agency’s total surveillance spending over the last 13 years, ICE is going on a shopping spree, creating one of the largest, most comprehensive domestic surveillance machines in history. 

How We Got Here

The entire surveillance industry has been allowed to grow and flourish under both Democratic and Republican regimes. For example, President Obama dramatically expanded ICE from its more limited origins, while at the same time narrowing its focus to undocumented people accused of crimes. Under the first and second Trump administrations, ICE ramped up its operations significantly, increasing raids in major cities far from the southern border and casting a much wider net on potential targets. ICE has most recently expanded its partnerships with sheriffs across the U.S., and deported more than 1.5 million people cumulatively under the Trump administrations (600,000 of those were just during the first year of Trump’s second term according to DHS statistics), not including the 1.6 million people DHS claims have “self-deported.” More horrifying is that in just the last year of the current administration, 4,250 people detained by ICE have gone missing, and 31 have died in custody or while being detained. In contrast, 24 people died in ICE custody during the entirety of the Biden administration.

ICE also has openly stated that they plan to spy on the American public, looking for any signs of left-wing dissent against their domestic military-like presence. Acting ICE Director Todd Lyons said in a recent interview that his agency “was dedicated to the mission of going after” Antifa and left-wing gun clubs. 

On a long enough timeline, any surveillance tool you build will eventually be used by people you don’t like for reasons that you disagree with. A surveillance-industrial complex and a democratic society are fundamentally incompatible, regardless of your political party. 

EFF recently published a guide to using government databases to dig up homeland security spending and compiled our own dataset of companies selling tech to DHS components. In 2025, ICE entered new contracts with several private companies for location surveillance, social media surveillance, face surveillance, spyware, and phone surveillance. Let’s dig into each.

Phone Surveillance Tools 

One common surveillance tactic of immigration officials is to get physical access to a person’s phone, either while the person is detained at a border crossing, or while they are under arrest. ICE renewed an $11 million contract with a company called Cellebrite, which helps ICE unlock phones and then can take a complete image of all the data on the phone, including apps, location history, photos, notes, call records, text messages, and even Signal and WhatsApp messages. ICE also signed a $3 million contract with Cellebrite’s main competitor Magnet Forensics, makers of the Graykey device for unlocking phones. DHS has had contracts with Cellebrite since 2008, but the number of phones they search has risen dramatically each year, reaching a new high of 14,899 devices searched by ICE’s sister agency U.S. Customs and Border Protection (CBP) between April and June of 2025. 

If ICE can’t get physical access to your phone, that won’t stop them from trying to gain access to your data. They have also resumed a $2 million contract with the spyware manufacturer, Paragon. Paragon makes the Graphite spyware, which made headlines in 2025 for being found on the phones of several dozen members of Italian civil society. Graphite is able to harvest messages from multiple different encrypted chat apps such as Signal and WhatsApp without the user ever knowing. 

Our concern with ICE buying this software is the likelihood that it will be used against undocumented people and immigrants who are here legally, as well as U.S. citizens who have spoken up against ICE or who work with immigrant communities. Malware such as Graphite can be used to read encrypted messages as they are sent; other forms of spyware can also download files, photos, and location history, record phone calls, and even discreetly turn on your microphone to record you. 

How to Protect Yourself 

The most effective way to protect yourself from smartphone surveillance would be to not have a phone. But that’s not realistic advice in modern society. Fortunately, for most people there are other ways you can make it harder for ICE to spy on your digital life. 

The first and easiest step is to keep your phone up to date. Installing security updates makes it harder to use malware against you and makes it less likely for Cellebrite to break into your phone. Likewise, both iPhone (Lockdown Mode) and Android (Advanced Protection) offer special modes that lock your phone down and can help protect against some malware.

Having your phone’s software up to date and locked with a strong alphanumeric password will offer some protection against Cellebrite, depending on your model of phone. However, the strongest protection is simply to keep your phone turned off, which puts it in “before first unlock” mode, a state that has typically been harder for law enforcement to bypass. This is good to do if you are at a protest and expect to be arrested, if you are crossing a border, or if you are expecting to encounter ICE. Keeping your phone on airplane mode should be enough to protect against cell-site simulators, but turning your phone off will offer extra protection against cell-site simulators and Cellebrite devices. If you aren’t able to turn your phone off, it’s a good idea to at least turn off face/fingerprint unlock to make it harder for police to force you to unlock your phone. While EFF continues to fight to strengthen our legal protections against compelling people to decrypt their devices, there is currently less protection against compelled face and fingerprint unlocking than there is against compelled password disclosure.

Internet Surveillance 

ICE has also spent $5 million to acquire at least two location and social media surveillance tools: Webloc and Tangles, from a company called Pen Link, an established player in the open source intelligence space. Webloc gathers the locations of millions of phones by pulling data from mobile data brokers and linking it together with other information about users. Tangles is a social media surveillance tool that combines web scraping with access to social media application programming interfaces. These tools are able to build a dossier on anyone who has a public social media account. Tangles is able to link together a person’s posting history, posts and comments containing keywords, location history, tags, social graph, and photos with those of their friends and family. Pen Link then sells this information to law enforcement, allowing agencies to avoid the need for a warrant. This means ICE can look up historic and current locations of many people all across the U.S. without ever having to get a warrant.

ICE also has established contracts with other social media scanning and AI analysis companies, such as a $4.2 million contract with a company called Fivecast for the social media surveillance and AI analysis tool ONYX. According to Fivecast, ONYX can conduct “automated, continuous and targeted collection of multimedia data” from all major “news streams, search engines, social media, marketplaces, the dark web, etc.” ONYX can build what it calls “digital footprints” from biographical data and curated datasets spanning numerous platforms, and “track shifts in sentiment and emotion” and identify the level of risk associated with an individual. 

Another contract is with ShadowDragon for their product Social Net, which is able to monitor publicly available data from over 200 websites. In an acquisition document from 2022, ICE confirmed that ShadowDragon allowed the agency to search “100+ social networking sites,” noting that “[p]ersistent access to Facebook and Twitter provided by ShadowDragon SocialNet is of the utmost importance as they are the most prominent social media platforms.”

ICE has also indicated that it intends to spend between $20 million and $50 million on building and staffing a 24/7 social media monitoring office, with at least 30 full-time agents combing every major social media website for leads that could generate enforcement raids. 

How to Protect Yourself 

For U.S. citizens, making your account private on social media is a good place to start. You might also consider having accounts under a pseudonym, or deleting your social media accounts altogether. For more information, check out our guide to protecting yourself on social media. Unfortunately, people immigrating to the U.S. might be subject to greater scrutiny, including mandatory social media checks, and should consult with an immigration attorney before taking any action. For people traveling to the U.S., new rules will soon likely require them to reveal five years of social media history and 10 years of past email addresses to immigration officials. 

Street-Level Surveillance 

But it’s not just your digital habits ICE wants to surveil; they also want to spy on you in the physical world. ICE has contracts with multiple automated license plate reader (ALPR) companies and is able to follow the driving habits of a large percentage of Americans. ICE uses this data to track down specific people anywhere in the country. ICE has a $6 million contract through a Thomson Reuters subsidiary to access ALPR data from Motorola Solutions. ICE has also persuaded local law enforcement officers to run searches on their behalf through Flock Safety's massive network of ALPR data. CBP, including Border Patrol, also operates a network of covert ALPR systems in many areas. 

ICE has also invested in biometric surveillance tools, such as the face recognition software Mobile Fortify, which scans the faces of people agents stop to determine if they are here legally. Mobile Fortify checks the pictures it takes against a database of 200 million photos for a match (the source of the photos is unknown). Additionally, ICE has a $10 million contract with Clearview AI for face recognition. ICE has also contracted with the iris scanning company BI2 Technologies for even more invasive biometric surveillance. ICE agents have also been spotted wearing Meta’s Ray-Ban video recording sunglasses. 

ICE has acquired trucks equipped with cell-site simulators (AKA Stingrays) from a company called TechOps Specialty Vehicles (the simulators themselves were likely manufactured by another company). This is not the first time ICE has bought this technology: according to documents obtained by the American Civil Liberties Union, ICE deployed cell-site simulators at least 466 times between 2017 and 2019, and according to documents obtained by BuzzFeed News, more than 1,885 times between 2013 and 2017. Cell-site simulators can be used to track down a specific person in real time, with more granularity than a phone company or tools like Webloc can provide, though Webloc has the distinct advantages of being used without a warrant and of not requiring agents to be in the vicinity of the person being tracked. 

How to protect yourself 

Taking public transit or bicycling is a great way to keep yourself out of ALPR databases, but an even better way is to go to your local city council meetings and demand that the city cancel its contracts with ALPR companies, as people have done in Flagstaff, Arizona; Eugene, Oregon; and Denver, Colorado, among other places. 

If you are at a protest, putting your phone in airplane mode can help protect you from cell-site simulators and from apps on your phone disclosing your location, but it might leave you vulnerable to advanced targeted attacks. For more advanced protection, turning your phone completely off protects against all radio-based attacks, and also makes it harder for tools like Cellebrite to break into your phone, as discussed above. But each individual will need to weigh their need for security from advanced radio-based attacks against their need to document potential abuses through photo or video. For more information about protecting yourself at a protest, head over to SSD.

There is nothing you can do to change your face, which is why we need more stringent privacy laws such as Illinois’ Biometric Information Privacy Act.

Tying All the Data Together 

Last but not least, ICE uses tools to combine and search all this data along with the data on Americans they have acquired from private companies, the IRS, TSA, and other government databases. 

To search all this data, ICE uses ImmigrationOS, a system that came from a $30-million contract with Palantir. What Palantir does is hard to explain, even for people who work there, but essentially they are plumbers. Palantir makes it so that ICE has all the data they have acquired in one place so it’s easy to search through. Palantir links data from different databases, like IRS data, immigration records, and private databases, and enables ICE to view all of this data about a specific person in one place. 

Palantir makes it so that ICE has all the data they have acquired in one place so it’s easy to search through.

The true civil liberties nightmare of Palantir is that it enables governments to link data that should never have been linked. There are good civil liberties reasons why IRS data was never linked with immigration data or with social media data, but Palantir breaks down those firewalls. Palantir has historically branded itself as a progressive, human-rights-centric company, but its recent actions have given it away as just another tech company enabling surveillance nightmares.

Threat Modeling When ICE Is Your Adversary 

Understanding ICE’s capabilities and limits, and how to threat model, helps you and your community fight back, remain powerful, and protect yourselves.

One of the most important things you can do is to not spread rumors and misinformation. Rumors like “ICE has malware so now everyone's phones are compromised” or “Palantir knows what you are doing all the time” or “Signal is broken” don’t help your community. It’s more useful to spread facts, ways to protect yourself, and ways to fight back. For information about how to create a security plan for yourself or your community, and other tips to protect yourself, read our Surveillance Self-Defense guides.

How EFF Is Fighting Back

One way to fight back against ICE is in the courts. EFF currently has a lawsuit against ICE over its pressure on Apple and Google to take down ICE-spotting apps like ICEBlock. We also represent multiple labor unions suing ICE over its social media surveillance practices.

We have also demanded the San Francisco Police Department stop sharing data illegally with ICE, and issued a statement condemning the collaboration between ICE and the malware provider Paragon. We also continue to maintain our Rayhunter project for detecting cell-site simulators. 

Other civil liberties organizations are also suing ICE. The ACLU has sued ICE over a subpoena to Meta attempting to identify the owner of an account providing advice to protestors, and another coalition of groups has thus far successfully sued the IRS to stop it from sharing taxpayer data with ICE.

We need to have a hard look at the surveillance industry. It is a key enabler of vast and untold violations of human rights and civil liberties, and it continues to be used by aspiring autocrats to threaten our very democracy. As long as it exists, the surveillance industry, and the data it generates, will be an irresistible tool for anti-democratic forces.

Join EFF

Help protect digital privacy & free speech for everyone

  •  

EFFecting Change: The Human Cost of Online Age Verification

Age verification mandates are spreading fast, and they’re ushering in a new age of online surveillance, censorship, and exclusion for everyone—not just young people. Age-gating laws generally require websites and apps to collect sensitive data from every user, often through invasive tools like ID checks, biometric scans, or other dubious “estimation” methods, before granting them access to certain content or services. Lawmakers tout these laws as the silver-bullet solution to “kids’ online safety,” but in reality, age-verification mandates wall off large swaths of the web, build sweeping new surveillance infrastructure, increase the risk of data breaches and real-life privacy harms, and threaten the anonymity that has long allowed people to seek support, explore new ideas, and organize and build community online.

Join EFF's Rindala Alajaji and Alexis Hancock along with Hana Memon from Gen-Z for Change and Cynthia Conti-Cook from Collaborative Research Center for Resilience for a conversation about what we stand to lose as more and more governments push to age-gate the web. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. The conversation will be followed by a live Q&A. 

EFFecting Change Livestream Series:
The Human Cost of Online Age Verification
Thursday, January 15th
12:00 PM - 1:00 PM Pacific
This event is LIVE and FREE!


RSVP Today


Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague who might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online. 

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

  •  

The Homeland Security Spending Trail: How to Follow the Money Through U.S. Government Databases

This guide was co-written by Andrew Zuker with support from the Heinrich Boell Foundation.

The U.S. government publishes volumes of detailed data on the money it spends, but searching through it and finding information can be challenging. Complex search functions and poor user interfaces on government reporting sites can hamper an investigation, as can inconsistent company profiles and complex corporate ownership structures. 

This week, EFF and the Heinrich Boell Foundation released an update to our database of vendors providing technology to components of the U.S. Department of Homeland Security (DHS), such as Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP). It includes new vendor profiles, new fields, and updated data on top contractors, so that journalists and researchers have a jumping-off point for their own investigations.

Access the dataset through Google Sheets (Google's Privacy Policy applies) or download the Excel file here.

This time we thought we would also share some of the research methods we developed while assembling this dataset.

This guide covers the key databases that store information on federal spending and contracts (often referred to as "awards"), government solicitations for products and services, and the government's "online shopping superstore," plus a few other deep-in-the-weeds datasets buried in the online bureaucracy. We have provided a step-by-step guide for searching these sites efficiently, along with helpful tips for finding information. While we have written this specifically with DHS agencies in mind, it should serve as a useful resource for researching procurement across the federal government. 


1. Procurement Sites: FPDS.gov and USASpending.gov

Federal Procurement Data System - fpds.gov

Update Jan. 30, 2026: In mid 2026, FPDS.gov's functions will be transferred to SAM.gov. More info here.

The Federal Procurement Data System (FPDS) is the best place to start for finding out what companies are working with DHS. It is the official system for tracking federal discretionary spending and contains current data on contracts with non-governmental entities like corporations and private businesses. Award data is up to date and includes detailed information on vendors and awards, which can be helpful when searching the other systems. It is a little bit old-school, but that often makes it one of the easiest and quickest sites to search once you get the hang of it, since it offers a lot of options for narrowing search parameters to specific agencies, vendors, classifications of services, and more. 

How to Use FPDS
To begin searching Awards for a particular vendor, click into the “ezSearch” field in the center of the page, delete or replace the text “Google-like search to help you find federal contracts…” with a vendor name or keywords, and hit Enter to begin a new search. 

The EZ Search landing page for FPDS.gov

A new tab will open automatically with exact matches at the top. 

A page of results for Google's contracts with the federal government.

Four “Top 10” modules on the left side of the page link to top results in descending order: Department Full Name, Contracting Agency Name, Full Legal Business Name, and Treasury Account Symbol. These ranked lists help the user quickly narrow in on departments and agencies that vendors do business with. DHS may not appear in the “Top 10” results, which may indicate that the vendor hasn’t yet been awarded DHS or subagency contracts.

For example, if you search the term “FLIR”, as in Teledyne FLIR, which makes the infrared surveillance systems used along the U.S.-Mexico border, DHS is the second result in the “Top 10: Department Full Name” box. 

FPDS.gov results for FLIR with the agency full name sidebar highlighted.

To see all DHS contracts awarded to the vendor, click “Homeland Security, Department of” from the “Top 10 Department Full Name” module. When the page loads, you will see the subcomponents of DHS (e.g., ICE, CBP, or the U.S. Secret Service) in the left-hand menu. You can click on each of those to drill down even further. You can also drill down by choosing a company. 

Sorting options on the right side of the page offer the ability to refine and organize search results. One of the most useful is "Date Signed," which will arrange the results in chronological order. 

FPDS.gov results for FLIR with the sort by sidebar highlighted

You don't have to search by a company name. You can also use a product keyword, such as "LPR" (license plate reader). However, because keywords are not consistently used by government agencies, you will need to try various permutations to gather the most data. 

Each click or search filter adds a new term to the search both in the main field at the top and in the Search Criteria module on the right. They can be deleted by clicking the X next to the term in this module or by removing the text in the main search field.

FPDS.gov results with the sidebar for deselecting terms highlighted with an arrow.

For each contract item, you can click "View" to see the specific details. However, these pages don't have permalinks, so you'll want to print to PDF if you need to retain a permanent copy of the record. 

Often the vendor brand name we know from marketing or news media is not the name of the entity that is actually awarded government contracts. Foreign companies in particular rely on partnerships with domestic entities that are established federal contractors. If you can’t find any spending records for a vendor, search the web for information on the company, including acquisitions, partnerships, licensing agreements, parent companies, and subsidiaries. It is likely that one of these related companies is the contract holder. 
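
If you would rather pull FPDS results programmatically than click through ezSearch, FPDS also publishes its search results as an ATOM feed. The snippet below is a minimal Python sketch of how that might look; the feed location, the FEEDNAME parameter, and the query syntax are assumptions based on FPDS's public feed documentation, so verify them (and keep the planned migration to SAM.gov in mind) before building anything on top of this.

    import requests
    import xml.etree.ElementTree as ET

    # Sketch only: the feed URL and parameters below are assumptions based on
    # FPDS's public ATOM feed and may change; check FPDS's own documentation.
    FEED_URL = "https://www.fpds.gov/ezsearch/FEEDS/ATOM"
    params = {"FEEDNAME": "PUBLIC", "q": "FLIR"}  # same terms you would type into ezSearch

    resp = requests.get(FEED_URL, params=params, timeout=60)
    resp.raise_for_status()

    # Results come back as an Atom XML document; each <entry> is one contract action.
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(resp.content)
    for entry in root.findall("atom:entry", ns):
        print(entry.findtext("atom:title", default="", namespaces=ns))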

USA Spending - usaspending.gov

The Federal Funding Accountability and Transparency Act (FFATA) of 2006 and the DATA Act of 2014 require the government to publish all spending records and contracts on a single, searchable public website, including agency-specific contracts, using unified reporting standards to ensure consistent, reliable, searchable data. This led to the creation of USA Spending (usaspending.gov). 

USA Spending is populated with data from multiple sources, including the Federal Procurement Data System (fpds.gov) and the System for Award Management (sam.gov - which we'll discuss in the next section). It also compiles Treasury Reports and data from the financial systems of dozens of federal agencies. We relied heavily on Awards data from these systems to verify vendor information, including contracts with DHS and its subagencies such as CBP and ICE. 

USA Spending has a more modern interface, but it is often very slow, with information frequently hidden in expandable menus. In many ways it is duplicative of FPDS, but it has more features, including the ability to bookmark individual pages. We often found ourselves using FPDS to quickly identify data, and then using the "Award ID" number to find the specific record within USA Spending. 

USA Spending also has some visualizations and ways to analyze data in chart form, which is not possible with the largely text-based FPDS. 

How to Use USA Spending

To begin searching for DHS awards, click either “Search Award Data” on the navigation bar or the blue “Start Searching Awards” button. 

The landing page of USA Spending with arrows pointing to the search links.

On the left of the Search page is a list of drop-down menus with options. You can enter a vendor name as a keyword, or expand the “Recipient” menu if you know the full company name or its Unique Entity Identifier (UEI) number. Expand the “Agency” tab and enter DHS, which will bring up the Department of Homeland Security option.

USA Spending page with arrows pointing to the key search filters.

In the example below, we entered “Palantir Technologies” as a keyword, and selected DHS in the Agency dropdown:

Search results showing Palantir contracts

For vendors with hundreds of contracts that return many pages of results, consider adding more filters to the search such as a specific time period or specifying a Funding Agency such as ICE or CBP. In this example, the filters “Palantir Technologies” and “DHS” returned 13 results (at the time of publication). It is important to note that the search results table is larger than what displays in that module. You can scroll down to view more Awards and scroll to the right to see much more information. 

Scroll down outside of that module to reveal more info including modules for Results by Category, Results over Time, and Results by Geography, all of which can be viewed as a list or graph. 

USA Spending page with graphs and charts

Once you've identified a contract, you can click the "Prime Award ID" to see the granular details for each award. 

From the search, you can also select just the agency to see all the contracts on file. Each agency also has its own page showing a breakdown for every fiscal year of how much money they had to spend and which components spent the most. For example, here's DHS's page.
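
If you want to work with this data in bulk, USAspending also offers an open API that returns the same award records as JSON. Below is a rough Python sketch of the Palantir/DHS search described above as an API call; the endpoint, filter names, and field names reflect the API as we understand it, so check the current documentation at api.usaspending.gov before relying on them.

    import requests

    # Sketch only: verify the endpoint, filters, and field names against the
    # documentation at https://api.usaspending.gov before using this.
    URL = "https://api.usaspending.gov/api/v2/search/spending_by_award/"
    payload = {
        "filters": {
            "keywords": ["Palantir Technologies"],
            "agencies": [
                {"type": "awarding", "tier": "toptier",
                 "name": "Department of Homeland Security"}
            ],
            # Award type codes A-D cover definitive contracts and similar vehicles.
            "award_type_codes": ["A", "B", "C", "D"],
        },
        "fields": ["Award ID", "Recipient Name", "Award Amount", "Awarding Agency"],
        "limit": 50,
        "page": 1,
    }

    resp = requests.post(URL, json=payload, timeout=60)
    resp.raise_for_status()
    for award in resp.json().get("results", []):
        print(award.get("Award ID"), award.get("Recipient Name"), award.get("Award Amount"))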

2. Contracting Opportunities - SAM.gov

So far we've talked about how to track contracts and spending, but now let's take a step back and look at how those contracts come to be. The System for Award Management, SAM.gov, is the site that allows companies to see what products and services the government intends to buy so they can bid on the contract. But SAM.gov is also open to the public, which means you can see the same information, including a detailed scope of a project and sometimes even technical details. 

How to Use SAM.gov

SAM.gov does not require an account for its basic contracting opportunity searches, but you may want to create one in order to save the things you find and to receive keyword- or agency-based alerts via email when new items of interest are posted. 

First you will click "Search" in the menu bar, which will bring you to this page: 

Search page on Sam.gov

We recommend selecting both "Active" and "Inactive" in the Status menu. Contracts quickly go inactive, and besides, sometimes the contracts you are most interested in are several years old. 

If you are researching a particular technology such as unmanned aerial vehicles, you might just type "unmanned" in the Simple Search bar. That will bring up every solicitation with that keyword across the federal government.

One of the most useful features is filtering by agency, while leaving the keyword search blank. This will return a running list of an agency's calls for bids and related procurement activities. It is worth checking regularly. For example, here's what CBP's looks like on a given day: 

SAM.gov results for Customs and Border Protection

If you click on an item, you should next scroll down to see if there are attachments. These tend to contain the most details. Specifically, you should look for the term "SOW," the abbreviation for "Statement of Work." For example, here are the attachments for a CBP contracting opportunity for "Cellular Covert Cameras": 

Links for attachments

The first document is the Statement of Work, which tells you the exact brand, model, and number of devices they want to acquire: 

Line items for hundreds of Hyperfire cameras and related components.

The attachments also included a "BNO Justification." BNO stands for "Brand Name Only," and this document explains in even more detail why CBP wants that specific product:

Explanation of why the government wants to purchase this particular model of camera.

If you see the term "Sole Source" in a listing, that means the agency has decided that only one product meets its requirements and it will not open bidding to other companies. 

In addition to contracting opportunities, many agencies announce "Industry Day" events, usually virtual, that members of the public can join. This is a unique opportunity to listen in on what government purchasing officials are telling contractors. The presentation slides are also often uploaded to the SAM.gov page afterward. Occasionally, the list of attendees will also be posted, and you'll find several examples of those lists in our dataset.
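
SAM.gov also has a public "Get Opportunities" API, which is handy if you want to monitor an agency's solicitations on a schedule rather than by hand (unlike the basic web search, the API does require a free key generated from a SAM.gov account). The sketch below shows roughly what a keyword query might look like; the parameter and response field names are our assumptions based on GSA's API documentation and should be confirmed there before use.

    import requests

    # Sketch only: parameter names and the response layout are assumptions based on
    # GSA's "Get Opportunities" public API; confirm them in the official docs.
    URL = "https://api.sam.gov/opportunities/v2/search"
    params = {
        "api_key": "YOUR_SAM_GOV_API_KEY",  # free key generated from a SAM.gov account
        "title": "unmanned",                 # keyword, e.g. for unmanned aerial vehicles
        "postedFrom": "01/01/2025",          # the API expects a bounded posting-date range
        "postedTo": "12/31/2025",
        "limit": 100,
    }

    resp = requests.get(URL, params=params, timeout=60)
    resp.raise_for_status()
    for opp in resp.json().get("opportunitiesData", []):
        print(opp.get("postedDate"), opp.get("solicitationNumber"), opp.get("title"))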

3. The Government's "Superstore" - gsaadvantage.gov

Another way to investigate DHS purchasing is by browsing the catalog of items and services immediately available to the department. The General Services Administration operates GSA Advantage, which it describes as "the government's central online shopping superstore." The website's search is open, allowing members of the public to view any vendor's offerings, both products and services, as easily as they would with any online marketplace. 

For example, you could search for "license plate reader" and produce a list of available products: 

Search results that show a license plate reader for sale for $995.

If you click "Advanced Search," you can also isolate every product available from a particular manufacturer. For example, here are the results when you search for products available from Skydio, a drone manufacturer.

Search results for 50 Skydio drone-related products

If you switch from "Products" to "Services" you can export datasets for each company about their offerings. For example, if you search for "Palantir" you'll get results that look like this:

Search results with companies offering Palantir-related services.

This means all these companies are offering some sort of Palantir-related services. If you click "Matches found in Terms and Conditions," you'll download a PDF with a lot of details about what the company offers. 

For example, here's a screengrab from Anduril's documentation:

A menu of surveillance towers with prices.

If you click "Matches Found in Price List" you'll download a spreadsheet that serves as a blueprint of what the company offers, including contract personnel. Here's a snippet from Palantir's: 

A spreadsheet with prices for various Palantir services.

4. Other Resources

Daily Public Report of Covered Contract Awards - Maybe FPDS isn't enough for you and you want to know every day what contracts have been signed. Buried in the DHS website are links to a daily feed of all contracts worth $4 million or more. It's available in XML, JSON, CSV and XLSX formats. 
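
Because the daily report comes in machine-readable formats, it is easy to script a recurring check. The sketch below assumes you have located the JSON version of the feed on the DHS site (the URL here is just a placeholder, not the real address) and that the feed is a JSON array of award records; the field names are likewise illustrative, so inspect the real feed to see what it actually uses.

    import requests

    # Placeholder URL: substitute the actual JSON link you find on the DHS site.
    FEED_URL = "https://www.dhs.gov/path/to/daily-contract-awards.json"

    resp = requests.get(FEED_URL, timeout=60)
    resp.raise_for_status()

    # Illustrative field names only; inspect the real feed before relying on them.
    for award in resp.json():
        office = str(award.get("contracting_office", ""))
        if "Immigration and Customs Enforcement" in office or "ICE" in office:
            print(award.get("date_signed"), award.get("vendor"), award.get("amount"))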

DHS Acquisition Planning Forecast System (APFS) - DHS operates a site for vendors to learn about upcoming contracts greater than $350,000. You can sort by agency at a granular level,  such as upcoming projects by ICE Enforcement & Removal Operations. This is one to check regularly for updates. 

Results for a proposed contract for open source intelligence

DHS Artificial Intelligence Use Case Inventory - Many federal agencies are required to maintain datasets of "AI Use Cases." DHS has broken these out for each of its subcomponents, including ICE and CBP. Advanced users will find the spreadsheet versions of these inventories more interesting. 

Use case summary for surveillance towers

NASA Solutions for Enterprise-Wide Procurement (SEWP) - SEWP is a way for agencies to fast-track acquisition of "Information Technology, Communication and Audio Visual" products through existing contracts. The site provides an index of existing contract holders, but the somewhat buried "Provider Lookup" has a more comprehensive list of companies involved in this type of contracting, illustrating how the companies serve as passthroughs for one another. Relatedly, DHS's list of "Prime Contractors" shows which companies hold master contracts with the agency and its components. 

List of resellers of Palantir technology

Tech Inquiry - Tech Inquiry is a small non-profit that aggregates records from a wide variety of sources about tech companies, particularly those involved in government contracting. 

  •  

The Year States Chose Surveillance Over Safety: 2025 in Review

2025 was the year age verification went from a fringe policy experiment to a sweeping reality across the United States. Half of the U.S. now mandates age verification for accessing adult content or social media platforms. Nine states saw their laws take effect this year alone, with more coming in 2026.

The good news is that courts have blocked many of the laws seeking to impose age-verification gates on social media, largely for the same reasons that EFF opposes these efforts.  Age-verification measures censor the internet and burden access to online speech. Though age-verification mandates are often touted as "online safety" measures for young people, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users' privacy, anonymity, and security.

If you're feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you're not alone. That's why we've launched EFF's Age Verification Resource Hub at EFF.org/Age—a one-stop shop to understand what these laws actually do, what's at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and safe internet. Moreover, there is hope. Although the Supreme Court ruled that imposing age-verification gates to access adult content does not violate the First Amendment on its face, the legal fight continues regarding whether those laws are constitutional. 

As we built the hub throughout 2025, we also fought state mandates in legislatures, courts, and regulatory hearings. Here's a summary of what happened this year.

The Laws That Took Effect (And Immediately Backfired)

Nine states’ age verification laws for accessing adult content went into effect in 2025.

Predictably, users didn’t stop accessing adult content after the laws went into effect; they just changed how they got to it. As we’ve said elsewhere: the internet always routes around censorship. 

In fact, research from the New York Center for Social Media and Politics and the public policy nonprofit the Phoenix Center confirms what we’ve warned about from the beginning: age verification laws don’t work. Their research found:

  • Searches for platforms that have blocked access to residents in states with these laws dropped significantly, while searches for offshore sites surged.
  • Researchers also saw a predictable surge in VPN usage following the enactment of age verification laws; Florida, for example, saw a 1,150% increase in VPN demand after its law took effect.

As foretold, when platforms block access or require invasive verification, it drives people to sites that operate outside the law—platforms that often pose greater safety risks. Instead of protecting young people, these laws push them toward less secure, less regulated spaces.

Legislation Watch: Expanding Beyond “Adult Content”

Lawmakers Take Aim at Social Media Platforms

Earlier this year, we raised the alarm that state legislatures wouldn’t stop at adult content. Sure enough, throughout 2025, lawmakers set their sights on young people’s social media usage, passing laws that require platforms to verify users’ ages and obtain parental consent for accounts belonging to anyone under 18. Four states had already passed similar laws in previous years. These laws were swiftly blocked in courts because they violate the First Amendment and subject every user to surveillance as a condition of participation in online speech. 

Warning Labels and Time Limits

And it doesn’t stop with age verification. California and Minnesota passed new laws this year requiring social media platforms to display warning labels to users. Virginia’s SB 854, which also passed this year, took a different approach: it requires social media platforms to use “commercially reasonable efforts” to determine a user's age and, if that user is under 16, limits them to one hour per day per application by default unless a parent changes the time allowance.

EFF opposes these laws because they raise serious First Amendment concerns. And courts have agreed: in November 2025, the U.S. District Court for the District of Colorado temporarily halted Colorado's warning label law, which would have required platforms to display warnings to users under 18 about the negative impacts of social media. We expect courts to similarly halt California and Minnesota’s laws.

App Store and Device-Level Age Verification

2025 also saw the rise of device-level and app-store age verification laws, which shift the obligation to verify users onto app stores and operating system providers. These laws seriously impede users (adults and young people alike) from accessing information, particularly because they gate a much broader swath of content: not only adult or sexual content, but every bit of content provided by every application. In October, California Governor Gavin Newsom signed the Digital Age Assurance Act (AB 1043), which takes a slightly different approach to age verification in that it requires “operating system providers”—not just app stores—to offer an interface at device/account setup that prompts the account holder to indicate the user’s birth date or age. Developers must then request an age signal when applications are downloaded and launched. These laws go beyond earlier legislation in other states that put the obligation on individual websites, applying the responsibility to app stores, operating systems, or device makers at a more fundamental level.

Again, these laws have drawn legal challenges. In October, the Computer & Communications Industry Association (CCIA) filed a lawsuit arguing that Texas’s SB 2420 is unconstitutional. A separate suit, Students Engaged in Advancing Texas (SEAT) v. Paxton, challenges the same law on First Amendment grounds, arguing it violates the free speech rights of young people and adults alike. Both lawsuits argue that the burdens placed on platforms, developers, and users outweigh any proposed benefits.

From Legislation to Regulation: Rulemaking Processes Begin

States with existing laws have also begun the process of rulemaking—translating broad statutory language into specific regulatory requirements. These rulemaking processes matter, because the specific technical requirements, data-handling procedures, and enforcement mechanisms will determine just how invasive these laws become in practice. 

California’s Attorney General held a hearing in November to solicit public comment on methods and standards for age assurance under SB 976, the “Protecting Our Kids from Social Media Addiction Act,” which will require age verification by the end of 2026. EFF has supported the legal challenge to SB 976 since its passage, and federal courts have blocked portions of the law from taking effect. Now, in the rulemaking process, EFF submitted comments raising concerns about the discriminatory impacts of any proposed regulations.

New York's Attorney General also released proposed rules for the state’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, describing which companies must comply and the standards for determining users’ age and obtaining parental consent. EFF submitted comments opposing the age verification requirements in September of 2024, and again in December 2025.

Our comments in both states warn that these rules risk entrenching invasive age verification systems and normalizing surveillance as a prerequisite for online participation.

The Boundaries Keep Shifting

As we’ve said, age verification will not stop at adult content and social media. Lawmakers are already proposing bills to require ID checks for everything from skincare products in California to diet supplements in Washington. Lawmakers in Wisconsin and Michigan have set their sights on virtual private networks, or VPNs—proposing various bills that would ban the use of VPNs to bypass age verification laws. AI chatbots are next on the list, with several states considering legislation that would require age verification for all users. Behind the reasonable-sounding talking points lies a sprawling surveillance regime that would reshape how people of all ages use the internet. EFF remains ready to push back against these efforts in legislatures, regulatory hearings, and courtrooms.

2025 showed us that age verification mandates are spreading rapidly, despite clear evidence that they don't work and actively harm the people they claim to protect. 2026 will be the year we push back harder—like the future of a free, open, private, and safe internet depends on it.

This is why we must fight back to protect the internet that we know and love. If you want to learn more about these bills, visit EFF.org/Age.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

Surveillance Self-Defense: 2025 Year in Review

Our Surveillance Self-Defense (SSD) guides, which provide practical advice and explainers for how to deal with government and corporate surveillance, had a big year. We published several large updates to existing guides and released three all new guides. And with frequent massive protests across the U.S., our guide to attending a protest remained one of the most popular guides of the year, so we made sure our translations were up to date.

(Re)learn All You Need to Know About Encryption

We started this year by taking a deep look at our various encryption guides, which start with the basics before moving up to deeper concepts. We slimmed each guide down and tried to make them as clear and concise as deep explainers on complicated topics can be. We reviewed and edited four guides in total.

And if you’re not sure where to start, we’ve got you covered with the new Interested in Encryption? playlist.

New Guides

We launched three new guides this year, including iPhone and Android privacy guides, which walk you through all the various privacy options of your phone. Both of these guides received a handful of updates throughout their first year as new features were released or, in the case of the iPhone, a new design language was introduced. These also got a fun little boost from a segment on "Last Week Tonight with John Oliver" telling people how to disable their phone’s advertising identifier.

We also launched our How to: Manage Your Digital Footprint guide. This guide is designed to help you claw back some of the data you may find about yourself online, walking through different privacy options across different platforms, digging up old accounts, removing yourself from people search sites, and much more.

Always Be Updating

As is the case with most software, there is always incremental work to do. This year, that meant small updates to our WhatsApp and Signal guides to acknowledge new features (both are already on deck for similar updates early next year as well). 

We overhauled our device encryption guides for Windows, Mac, and Linux, rolling what was once three guides into one, and including more detailed guidance on how to handle recovery keys. Some slight changes to how this works on both Windows and Mac mean this one will get another look early next year as well.

Speaking of rolling multiple guides into one, we did the same with our guidance for the Tor browser: what once lived across three guides now lives as a single guide that covers all the major desktop platforms (the mobile guide remains separate).

The password manager guide saw some small changes to note some new features with Apple and Chrome’s managers, as well as some new independent security audits. Likewise, the VPN guide got a light touch to address the TunnelVision security issue.

Finally, the secure deletion guide got a much-needed update after years of dormancy. With the proliferation of solid state drives (SSDs, not to be confused with SSD), not much has changed in the secure deletion space, but we did move our guidance for those SSDs to the top of the guide to make it easier to find, while still acknowledging that many people around the world only have access to computers with spinning disk drives. 

Translations

As always, we worked on translations for these updates. We’re very close to a point where every current SSD guide is updated and translated into Arabic, French, Mandarin, Portuguese, Russian, Spanish, and Turkish.

And with the help of Localization Lab, we also now have translations for a handful of the most important guides in Changana, Mozambican Portuguese, Ndau, Luganda, and Bengali.

Blogs Blogs Blogs

Sometimes we take our SSD-like advice and blog it so we can respond to news events or talk about more niche topics. This year, we blogged about new features, like WhatsApp’s “Advanced Chat Privacy” and Google’s "Advanced Protection.” We also broke down the differences between how different secure chat clients handle backups and pushed for expanding encryption on Android and iPhone.

We fight for more privacy and security every day of every year, but until we win them, stronger control over our data and a better understanding of how technology works are our best defense.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

Congress's Crusade to Age Gate the Internet: 2025 in Review

In the name of 'protecting kids online,' Congress pushed forward legislation this year that could have severely undermined our privacy and stifled free speech. These bills would have mandated invasive age-verification checks for everyone online—adults and kids alike—handing unprecedented control to tech companies and government authorities.

Lawmakers from both sides of the aisle introduced bill after bill, each one somehow more problematic than the last, and each one a gateway for massive surveillance, internet censorship, and government overreach. In all, Congress considered nearly twenty federal proposals.

For us, this meant a year of playing legislative whack-a-mole, fighting off one bad bill after another. But more importantly, it meant building sustained opposition, strengthening coalitions, and empowering our supporters—that's you!—with the tools you need to understand what's at stake and take action.

Luckily, thanks to this strong opposition, these federal efforts all stalled… for now.

So, before we hang our hats and prepare for the new year, let’s review some of our major wins against federal age-verification legislation in 2025.

The Kids Online Safety Act (KOSA)

Of the dozens of federal proposals relating to kids online, the Kids Online Safety Act remains the biggest threat. We, along with a coalition of civil liberties groups, LGBTQ+ advocates, youth organizations, human rights advocates, and privacy experts, have been sounding the alarm on KOSA for years now.

First introduced in 2022, KOSA would allow the Federal Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to certain content. There have been numerous versions introduced, though all of them share a common core: KOSA is an unconstitutional censorship bill that threatens the speech and privacy rights of all internet users. It would impose a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” Those prohibitions are so broad that they will sweep up online speech about these topics, including efforts to provide resources to adults and minors experiencing them. The bill claims to prohibit censorship based on “the viewpoint of users,” but that’s simply a smokescreen. Its core function is to let the federal government sue platforms, big or small, that don’t block or restrict content that someone later claims contributed to one of these harms. 

In addition to stifling online speech, KOSA would strongly incentivize age-verification systems—forcing all users, adults and minors, to prove who they are before they can speak or read online. Because KOSA requires online services to separate and censor aspects of their services accessed by children, services are highly likely to demand to know every user’s age to avoid showing minors any of the content KOSA deems harmful. There are a variety of age determination options, but all have serious privacy, accuracy, or security problems. Even worse, age-verification schemes lead everyone to provide even more personal data to the very online services that have invaded our privacy before. And all age verification systems, at their core, burden the rights of adults to read, get information, and speak and browse online anonymously.

Despite what lawmakers claim, KOSA won’t bother big tech—in fact, they endorse it! The bill is written so that big tech companies, like Apple and X, will be able to handle the regulatory burden that KOSA will demand, while smaller platforms will struggle to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

The good news is that KOSA’s momentum this Congress was waning at best. There was a lot of talk about the bill from lawmakers, but little action. The Senate version of the bill, which passed overwhelmingly last summer, did not even make it out of committee this Congress.

In the House, lawmakers could not get on the same page about the bill—so much so that one of the original sponsors of KOSA actually voted against the bill in committee in December.

The bad news is that lawmakers are determined to keep raising this issue, as soon as the beginning of next year. So let’s keep the momentum going by showing them that users do not want age verification mandates—we want privacy.

TAKE ACTION

Don't let Congress censor the internet

Threats Beyond KOSA

KOSA wasn’t the only federal bill in 2025 that used “kids’ safety” as a cover for sweeping surveillance and censorship mandates. Concern about possible harms of AI chatbots dominated policy discussion this year in Congress.

One of the most alarming proposals on the issue was the GUARD Act, which would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. As we wrote in November, though the GUARD Act may look like a child-safety bill, in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using some of the digital tools that they rely on every day.

Like KOSA, the GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further fracturing the internet we know and love.

With your help, we urged lawmakers to reject the GUARD Act and focus instead on policies that provide more transparency, options, and comprehensive privacy for all users.

Beating Age Verification for Good

Together, these bills reveal a troubling pattern in Congress this year. Rather than actually protecting young people’s privacy and safety online, Congress continues to push a legislative framework that’s based on some deeply flawed assumptions:

  1. That the internet must be age-gated, with young people either heavily monitored or kicked off entirely, in order to be safe;
  2. That the value of our expressive content to each individual should be determined by the state, not individuals or even families; and
  3. That these censorship and surveillance regimes are worth the loss of all users’ privacy, anonymity, and free expression online.

We’ve written over and over about the many communities who are immeasurably harmed by online age verification mandates. It is also worth remembering who these bills serve—big tech companies, private age verification vendors, AI companies, and legislators vying for the credit of “solving” online safety while undermining users at every turn.

We fought these bills all through 2025, and we’ll continue to do so until we beat age verification for good. So rest up, read up (starting with our all-new resource hub, EFF.org/Age!), and get ready to join us in this fight in 2026. Thank you for your support this year.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

States Tried to Censor Kids Online. Courts, and EFF, Mostly Stopped Them: 2025 in Review

Lawmakers in at least a dozen states believe that they can pass laws blocking young people from social media or requiring them to get their parents’ permission before logging on. Fortunately, nearly every trial court to review these laws has ruled that they are unconstitutional.

It’s not just courts telling these lawmakers they are wrong. EFF has spent the past year filing friend-of-the-court briefs in courts across the country explaining how these laws violate young people’s First Amendment rights to speak and get information online. In the process, these laws also burden adults’ rights, and jeopardize everyone’s privacy and data security.

Minors have long had the same First Amendment rights as adults: to talk about politics, create art, comment on the news, discuss or practice religion, and more. The internet simply amplified their ability to speak, organize, and find community.

Although these state laws vary in scope, most have two core features. First, they require social media services to estimate or verify the ages of all users. Second, they either ban minor access to social media, or require parental permission. 

In 2025, EFF filed briefs challenging age-gating laws in California (twice), Florida, Georgia, Mississippi, Ohio, Utah, Texas, and Tennessee. Across these cases we argued the same point: these laws burden the First Amendment rights of both young people and adults. In many of these briefs, we were joined by the ACLU, Center for Democracy & Technology, Freedom to Read Foundation, LGBT Technology Institute, TechFreedom, and Woodhull Freedom Foundation.

There is no “kid exception” to the First Amendment. The Supreme Court has repeatedly struck down laws that restrict minors’ speech or impose parental-permission requirements. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks. As EFF has urged, lawmakers should pursue strong privacy laws, not censorship, to address online harms.

These laws also burden everyone’s speech by requiring users to prove their age. ID-based systems of access can lock people out if they don’t have the right form of ID, and biometric systems are often discriminatory or inaccurate. Requiring users to identify themselves before speaking also chills anonymous speech—protected by the First Amendment, and essential for those who risk retaliation. 

Finally, requiring users to provide sensitive personal information increases their risk of future privacy and security invasions. Most of these laws perversely require social media companies to collect even more personal information from everyone, especially children, who can be more vulnerable to identity theft.

EFF will continue to fight for the rights of minors and adults to access the internet, speak freely, and organize online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

Site Blocking Laws Will Always Be a Bad Idea: 2025 in Review

This year, we fought back against the return of a terrible idea that hasn’t improved with age: site blocking laws. 

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved.  

But we’ve never believed they were gone for good. The major media and entertainment companies that backed site blocking in the US in 2012 turned to pushing for site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. And sure enough, the Motion Picture Association (MPA) and its allies have asked Congress to try again. 

There were no fewer than three congressional drafts of site-blocking legislation. Representative Zoe Lofgren kicked off the year with the Foreign Anti-Digital Piracy Act (FADPA). Fellow House member Darrell Issa also claimed to be working on a bill that would make it offensively easy for a studio to block your access to a website based solely on the belief that infringement is happening there. Not to be left out, the Senate Judiciary Committee produced the terribly named Block BEARD Act. 

None of these three attempts to fundamentally alter the way you experience the internet moved far past their press releases. But their number tells us that there is, once again, an appetite among major media conglomerates and politicians to resurrect SOPA/PIPA from the dead.  

None of these proposals fixes the flaws of SOPA/PIPA, and none ever could. Site blocking is a flawed idea and a disaster for free expression that no amount of rewriting will fix. There is no way to create a fast lane for removing your access to a website that is not a major threat to the open web. Just as we opposed SOPA/PIPA over ten years ago, we oppose these efforts.  

Site blocking bills seek to build a new infrastructure of censorship into the heart of the internet. They would enable court orders directed at the organizations that make the internet work, like internet service providers, domain name resolvers, and reverse proxy services, compelling them to help block US internet users from visiting websites accused of copyright infringement. The technical means haven’t changed much since 2012: they involve blocking the Internet Protocol addresses or domain names of websites. These methods are blunt—sledgehammers rather than scalpels. Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Italy, South Korea, France, and the US, to name just a few.  

Given this downside, one would think the benefits of copyright enforcement from these bills ought to be significant. But site blocking is trivially easy to evade. Determined site owners can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online.  
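
To make the DNS point concrete: with the third-party dnspython library, switching resolvers takes only a couple of lines. This is just an illustration (the domain below is a placeholder, not a real blocked site), but it shows why resolver-level blocking by an ISP accomplishes so little, since a user can simply ask a different public resolver.

    import dns.resolver  # third-party dnspython library

    # "blocked-example.org" is a placeholder, not a real blocked site.
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["1.1.1.1"]  # any public resolver outside the blocking order

    for answer in resolver.resolve("blocked-example.org", "A"):
        print(answer.address)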

The limits that lawmakers have proposed to put on these laws are an illusion. While ostensibly aimed at “foreign” websites, they sweep in any website that doesn’t conspicuously display a US origin, putting anonymity at risk. And despite the rhetoric of MPA and others that new laws would be used only by responsible companies against the largest criminal syndicates, laws don’t work that way. Massive new censorship powers invite abuse by opportunists large and small, and the costs to the economy, security, and free expression are widely borne. 

It’s time for Big Media and its friends in Congress to drop this flawed idea. But as long as they keep bringing it up, we’ll keep on rallying internet users of all stripes to fight it. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

EFF's Investigations Expose Flock Safety's Surveillance Abuses: 2025 in Review

Throughout 2025, EFF conducted groundbreaking investigations into Flock Safety's automated license plate reader (ALPR) network, revealing a system designed to enable mass surveillance and susceptible to grave abuses. Our research sparked state and federal investigations, drove landmark litigation, and exposed dangerous expansion into always-listening voice detection technology. We documented how Flock's surveillance infrastructure allowed law enforcement to track protesters exercising their First Amendment rights, target Romani people with discriminatory searches, and surveil women seeking reproductive healthcare.

Flock Enables Surveillance of Protesters

When we obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025, the patterns were unmistakable. Agencies logged hundreds of searches related to political demonstrations—the 50501 protests in February, Hands Off protests in April, and No Kings protests in June and October. Nineteen agencies conducted dozens of searches specifically tied to No Kings protests alone. Sometimes searches explicitly referenced protest activity; other times, agencies used vague terminology to obscure surveillance of constitutionally protected speech.

The surveillance extended beyond mass demonstrations. Three agencies used Flock's system to target activists from Direct Action Everywhere, an animal-rights organization using civil disobedience to expose factory farm conditions. Delaware State Police queried the Flock network nine times in March 2025 related to Direct Action Everywhere actions—showing how ALPR surveillance targets groups engaged in activism challenging powerful industries.

Biased Policing and Discriminatory Searches

Our November analysis revealed deeply troubling patterns: more than 80 law enforcement agencies used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety ALPR network. Between June 2024 and October 2025, police performed hundreds of searches using terms such as "roma" and racial slurs—often without mentioning any suspected crime.

Audit logs revealed searches including "roma traveler," "possible g*psy," and "g*psy ruse." Grand Prairie Police Department in Texas searched for the slur six times while using Flock's "Convoy" feature, which identifies vehicles traveling together—essentially targeting an entire traveling community without specifying any crime. According to a 2020 Harvard University survey, four out of 10 Romani Americans reported being subjected to racial profiling by police. Flock's system makes such discrimination faster and easier to execute at scale.

Weaponizing Surveillance Against Reproductive Rights

In October, we obtained documents showing that Texas deputies queried Flock Safety's surveillance data in what police characterized as a missing person investigation, but was actually an abortion case. Deputies initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman's self-managed abortion, and consulted prosecutors about possible charges.

A Johnson County official ran two searches with the note "had an abortion, search for female." The second search probed 6,809 networks, accessing 83,345 cameras across nearly the entire country. This case revealed Flock's fundamental danger: a single query accesses more than 83,000 cameras spanning almost the entire nation, with minimal oversight and maximum potential for abuse—particularly when weaponized against people seeking reproductive healthcare.

Feature Updates Miss the Point

In June, EFF explained why Flock Safety's announced feature updates cannot make ALPRs safe. The company promised privacy-enhancing features like geofencing and retention limits in response to public pressure. But these tweaks don't address the core problem: Flock's business model depends on building a nationwide, interconnected surveillance network that creates risks no software update can eliminate. Our 2025 investigations proved that abuses stem from the architecture itself, not just how individual agencies use the technology.

Accountability and Community Action

EFF's work sparked significant accountability measures. U.S. Rep. Raja Krishnamoorthi and Rep. Robert Garcia launched a formal investigation into Flock's role in "enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans."

Illinois Secretary of State Alexi Giannoulias launched an audit after EFF research showed Flock allowed U.S. Customs and Border Protection to access Illinois data in violation of state privacy laws. In November, EFF partnered with the ACLU of Northern California to file a lawsuit against San Jose and its police department, challenging warrantless searches of millions of ALPR records. Between June 5, 2024 and June 17, 2025, SJPD and other California law enforcement agencies searched San Jose's database 3,965,519 times—a staggering figure illustrating the vast scope of warrantless surveillance enabled by Flock's infrastructure.

Our investigations also fueled municipal resistance to Flock Safety. Communities from Austin to Evanston to Eugene successfully canceled or refused to renew their Flock contracts after organizing campaigns centered on our research documenting discriminatory policing, immigration enforcement, threats to reproductive rights, and chilling effects on protest. These victories demonstrate that communities—armed with evidence of Flock's harms—can challenge and reject surveillance infrastructure that threatens civil liberties.

Dangerous New Capabilities: Always-Listening Microphones

In October 2025, Flock announced plans to expand its gunshot detection microphones to listen for "human distress," including screaming. This dangerous expansion transforms audio sensors into powerful surveillance tools that monitor human voices on city streets. High-powered microphones above densely populated areas raise serious questions about wiretapping laws, false alerts, and the potential for dangerous police responses to non-emergencies. After EFF exposed this feature, Flock quietly amended its marketing materials to remove explicit references to "screaming"—replacing them with vaguer language about "distress" detection—while continuing to develop and deploy the technology.

Looking Forward

Flock Safety's surveillance infrastructure is not a neutral public safety tool. It's a system that enables and amplifies racist policing, threatens reproductive rights, and chills constitutionally protected speech. Our 2025 investigations proved it beyond doubt. As we head into 2026, EFF will continue exposing these abuses, supporting communities fighting back, and litigating for the constitutional protections that surveillance technology has stripped away.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

Fighting Renewed Attempts to Make ISPs Copyright Cops: 2025 in Review

You might not know it, given the many headlines focused on new questions about copyright and Generative AI, but the year’s biggest copyright case concerned an old-for-the-internet question: do ISPs have to be copyright cops? After years of litigation, that question is now squarely before the Supreme Court. And if the Supreme Court doesn’t reverse a lower court’s ruling, ISPs could be forced to terminate people’s internet access based on nothing more than mere accusations of copyright infringement. This would threaten innocent users who rely on broadband for essential aspects of daily life.

The Stakes: Turning ISPs into Copyright Police

This issue turns on what courts call “secondary liability,” which is the legal idea that someone can be held responsible not for what they did directly, but for what someone else did using their product or service. The case began when music companies sued Cox Communications, arguing that the ISP should be held liable for copyright infringement committed by some of its subscribers. The Court of Appeals for the Fourth Circuit agreed, adopting a “material contribution” standard for contributory copyright liability (a rule for when service providers can be held liable for the actions of users). Under that standard, providing a service that could be used for infringement is enough to create liability when a customer infringes.

The Fourth Circuit’s rule would have devastating consequences for the public. Given copyright law’s draconian penalties, ISPs would be under enormous pressure to terminate accounts whenever they receive an infringement notice, whether or not the actual accountholder has infringed anything. That would cut off entire households, schools, libraries, and businesses that share an internet connection. Those at risk would include:

  • Public libraries, which provide internet access to millions of Americans who lack it at home, could lose essential service.
  • Universities, hospitals, and local governments could see internet access for whole communities disrupted.
  • Households—especially those in low-income communities and communities of color, which disproportionately share broadband connections—would face collective punishment for the alleged actions of a single user.

And with more than a third of Americans having only one or no broadband provider, many users would have no way to reconnect.

EFF—along with the American Library Association, the Association of Research Libraries, and Re:Create—filed an amicus brief urging the Court to reverse the Fourth Circuit’s decision, taking guidance from patent law. In the Patent Act, where Congress has explicitly defined secondary liability, the test is different: contributory infringement exists only where a product is incapable of substantial non-infringing use. Internet access, of course, is overwhelmingly used for lawful purposes, making it the very definition of a “staple article of commerce” that cannot give rise to liability under the patent framework.

The Supreme Court held a hearing in the case on December 1, and a majority of the justices seemed troubled by the implications of the Fourth Circuit’s ruling. One exchange was particularly telling: asked what should happen when the notices of infringement target a university account upon which thousands of people rely, Sony’s counsel suggested the university could resolve the issue by essentially slowing internet speeds so infringement might be less appealing. It’s hard to imagine the university community would agree that research, teaching, artmaking, library services, and the myriad other activities that rely on internet access should be throttled because of the actions of a few students. Hopefully the Supreme Court won’t either.

We expect a ruling in the case in the next few months. Fingers crossed that the Court rejects the Fourth Circuit’s draconian rule.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

Operations Security (OPSEC) Trainings: 2025 in Review

It's no secret that digital surveillance and other tech-enabled oppressions are acute dangers for liberation movement workers. The rising tides of tech-fueled authoritarianism and hyper-surveillance are universal themes across the various threat models we consider. EFF's Surveillance Self-Defense project is a vital antidote to these threats, but it's not all we do to help others address these concerns. Our team regularly fields questions, requests for security trainings and presentations on our research, and asks for general OPSEC (operations security: the process of applying digital privacy and information security strategies to an existing workflow or process) advising. This year stood out for the sheer number and urgency of the requests we fielded.

Combining efforts across our Public Interest Technology and Activism teams, we consulted with an estimated 66 groups and organizations, with at least 2000 participants attending those sessions. These engagements typically look like OPSEC advising and training, usually merging aspects of threat modeling, cybersecurity 101, secure communications practices, doxxing self-defense, and more. The groups we work with are often focused on issue-spaces that are particularly embattled at the current moment, such as abortion access, advocacy for transgender rights, and climate justice. 

Our ability to offer realistic and community-focused OPSEC advice for these liberation movement workers is something we take great pride in. These groups are often under-resourced and unable to afford typical infosec consulting. Even if they could, traditional information security firms are designed to protect corporate infrastructure, not grassroots activism. Offering this assistance also allows us to stress-test the advice given in the aforementioned Surveillance Self-Defense project with real-world experience and update it when necessary. What we learn from these sessions also informs our blog posts, such as this piece on strategies for overcoming tech-enabled violence for transgender people, and this one surveying the landscape of digital threats in the abortion access movement post-Roe. 

There is still much to be done. Maintaining effective privacy and security within one's work is an ongoing process. We are grateful to be included in the OPSEC process planning for so many other human-rights defenders and activists, and we look forward to continuing this work in the coming years. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •  

EFF in the Press: 2025 in Review

EFF’s attorneys, activists, and technologists don’t just do the hard, endless work of defending our digital civil liberties — they also spend a lot of time and effort explaining that work to the public via media interviews. 

EFF had thousands of media mentions in 2025, from the smallest hyperlocal outlets to international news behemoths. Our work on street-level surveillance — the technology that police use to spy on our communities — generated a great deal of press attention, particularly regarding automated license plate readers (ALPRs). But we also got a lot of ink and airtime for our three lawsuits against the federal government: one challenging the U.S. Office of Personnel Management's illegal data sharing, a second challenging the State Department's unconstitutional "catch and revoke" program, and a third demanding that the departments of State and Justice reveal what pressure they put on app stores to remove ICE-tracking apps.

Other hot media topics included how travelers can protect themselves against searches of their devices, how protestors can protect themselves from surveillance, and the misguided age-verification laws that are proliferating across the nation and around the world, which are an attack on privacy and free expression.

On national television, Matthew Guariglia spoke with NBC Nightly News to discuss how more and more police agencies are using private doorbell cameras to surveil neighborhoods. Tori Noble spoke with ABC’s Good Morning America about the dangers of digital price tags, as well as with ABC News Live Prime about privacy concerns over OpenAI’s new web browser.

In a sampling of mainstream national media, EFF was cited 33 times by the Washington Post, 16 times by CNN, 13 times by USA Today, 12 times by the Associated Press, 11 times by NBC News, 11 times by the New York Times, 10 times by Reuters, and eight times by National Public Radio. Among tech and legal media, EFF was cited 74 times by Privacy Daily, 35 times by The Verge, 32 times by 404 Media, 32 times by The Register, 26 times by Ars Technica, 25 times by WIRED, 21 times by Law360, 21 times by TechCrunch, 20 times by Gizmodo, and 14 times by Bloomberg Law.

Abroad, EFF was cited in coverage by media outlets in nations including Australia, Bangladesh, Belgium, Canada, Colombia, El Salvador, France, Germany, India, Ireland, New Zealand, Palestine, the Philippines, Slovakia, South Africa, Spain, Trinidad and Tobago, the United Arab Emirates, and the United Kingdom. 

EFF staffers spoke to the masses in their own words via op-eds such as: 

And we ruled the airwaves on podcasts including: 

We're grateful to all the intrepid journalists who keep doing the hard work of reporting accurately on tech and privacy policy, and we encourage them to keep reaching out to us at press@eff.org.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

  •