Coalition Urges California to Revoke Permits for Federal License Plate Reader Surveillance

10 February 2026 at 12:41
Group led by EFF and Imperial Valley Equity & Justice Asks Gov. Newsom and Caltrans Director to Act Immediately

SAN FRANCISCO – California must revoke permits allowing federal agencies such as U.S. Customs and Border Protection (CBP) and the Drug Enforcement Administration (DEA) to put automated license plate readers along border highways, a coalition led by the Electronic Frontier Foundation (EFF) and Imperial Valley Equity & Justice (IVEJ) demanded today. 

In a letter to Gov. Gavin Newsom and California Department of Transportation (Caltrans) Director Dina El-Tawansy, the coalition notes that this invasive mass surveillance – automated license plate readers (ALPRs) often disguised as traffic barrels – puts both residents and migrants at risk of harassment, abuse, detention, and deportation.  

“With USBP (U.S. Border Patrol) Chief Greg Bovino reported to be returning to El Centro sector, after leading a brutal campaign against immigrants and U.S. citizens alike in Los Angeles, Chicago, and Minneapolis, it is urgent that your administration take action,” the letter says. “Caltrans must revoke any permits issued to USBP, CBP, and DEA for these surveillance devices and effectuate their removal.” 

Coalition members signing the letter include the California Nurses Association; American Federation of Teachers Guild, Local 1931; ACLU California Action; Fight for the Future; Electronic Privacy Information Center; Just Futures Law; Jobs to Move America; Project on Government Oversight; American Friends Service Committee U.S./Mexico Border Program; Survivors of Torture, International; Partnership for the Advancement of New Americans; Border Angels; Southern California Immigration Project; Trust SD Coalition; Alliance San Diego; San Diego Immigrant Rights Consortium; Showing Up for Racial Justice San Diego; San Diego Privacy; Oakland Privacy; Japanese American Citizens League and its Florin-Sacramento Valley, San Francisco, South Bay, Berkeley, Torrance, and Greater Pasadena chapters; Democratic Socialists of America – San Diego; Center for Human Rights and Privacy; The Becoming Project Inc.; Imperial Valley for Palestine; Imperial Liberation Collaborative; Comité de Acción del Valle Inc.; CBFD Indivisible; South Bay People Power; and queercasa. 

California law prevents state and local agencies from sharing ALPR data with out-of-state agencies, including federal agencies involved in immigration enforcement. However, USBP, CBP, and DEA are bypassing these regulations by installing their own ALPRs. 

EFF researchers have released a map of more than 40 of these covert ALPRs – believed to belong to federal agencies engaged in immigration enforcement – along highways in San Diego and Imperial counties. In response to a June 2025 public records request, Caltrans has released several documents showing CBP and DEA have applied for permits for ALPRs, with more expected as Caltrans continues to locate records responsive to the request. 

“California must not allow Border Patrol and other federal agencies to use surveillance on our roadways to unleash violence and intimidation on San Diego and Imperial Valley residents,” the letter says. “We ask that your administration investigate and release the relevant permits, revoke them, and initiate the removal of these devices. No further permits for ALPRs or tactical checkpoints should be approved for USBP, CBP, or DEA.” 

"The State of California must not allow Border Patrol to exploit our public roads and bypass state law," said Sergio Ojeda, IVEJ’s Lead Community Organizer for Racial and Economic Justice Programs.  "It's time to stop federal agencies from installing hidden cameras that they use to track, target and harass our communities for travelling between Imperial Valley, San Diego and Yuma." 

For the letter: https://www.eff.org/document/coalition-letter-re-covert-alprs

For the map of the covert ALPRs: https://www.eff.org/covertALPRmap

For high-res images of two of the covert ALPRs: https://www.eff.org/node/111725

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs 

 

Contact: Dave Maass, Director of Investigations

Beware: Government Using Image Manipulation for Propaganda

27 January 2026 at 15:13

U.S. Homeland Security Secretary Kristi Noem last week posted a photo of the arrest of Nekima Levy Armstrong, one of three activists who had entered a St. Paul, Minn., church to confront a pastor who also serves as acting field director of the St. Paul Immigration and Customs Enforcement (ICE) office. 

A short while later, the White House posted the same photo – except that version had been digitally altered to darken Armstrong’s skin and rearrange her facial features to make it appear she was sobbing or distraught. The Guardian, one of many media outlets to report on this image manipulation, created a handy slider graphic to help viewers see clearly how the photo had been changed.  

This isn’t about “owning the libs” — this is the highest office in the nation using technology to lie to the entire world. 

The New York Times reported it had run the two images through Resemble.AI, an A.I. detection system, which concluded Noem’s image was real but the White House’s version showed signs of manipulation. "The Times was able to create images nearly identical to the White House’s version by asking Gemini and Grok — generative A.I. tools from Google and Elon Musk’s xAI start-up — to alter Ms. Noem’s original image." 

Most of us can agree that the government shouldn’t lie to its constituents. We can also agree that good government does not involve emphasizing cruelty or furthering racial biases. But this abuse of technology violates both those norms. 

“Accuracy and truthfulness are core to the credibility of visual reporting,” the National Press Photographers Association said in a statement issued about this incident. “The integrity of photographic images is essential to public trust and to the historical record. Altering editorial content for any purpose that misrepresents subjects or events undermines that trust and is incompatible with professional practice.” 

Reworking an arrest photo to make the arrestee look more distraught is not only a lie but also a doubling-down on a “the cruelty is the point” manifesto. Using a manipulated image further humiliates the individual and perpetuates harmful biases, and the only reason to darken an arrestee’s skin would be to reinforce colorist stereotypes and stoke the flames of racial prejudice, particularly against dark-skinned people.  

History is replete with cruel and racist images as propaganda: Think of Nazi Germany’s cartoons depicting Jewish people, or the contemporaneous U.S. cartoons depicting Japanese people as we placed Japanese Americans in internment camps. Time magazine caught hell in 1994 for using an artificially darkened photo of O.J. Simpson on its cover, and several Republican political campaigns in recent years have been called out for similar manipulation. 

But in an age when we can create or alter a photo with a few keyboard strokes, when we can alter what viewers think is reality so easily and convincingly, the danger of abuse by government is greater.   

Had the Trump administration not ham-handedly released the retouched perp-walk photo after Noem had released the original, we might not have known the reality of that arrest at all. This dishonesty is all the more reason why Americans’ right to record law enforcement activities must be protected. Without independent records and documentation of what’s happening, there’s no way to contradict the government’s lies. 

This incident raises the question of whether the Trump Administration feels emboldened to manipulate other photos for other propaganda purposes. Does it rework photos of the President to make him appear healthier, or more awake? Does it rework military or intelligence images to create pretexts for war? Does it rework photos of American citizens protesting or safeguarding their neighbors to justify a military deployment? 

In this instance, like so much of today’s political trolling, there’s a good chance it’ll be counterproductive for the trolls: The New York Times correctly noted that the doctored photograph could hinder Armstrong’s right to a fair trial. “As the case proceeds, her lawyers could use it to accuse the Trump administration of making what are known as improper extrajudicial statements. Most federal courts bar prosecutors from making any remarks about court filings or a legal proceeding outside of court in a way that could prejudice the pool of jurors who might ultimately hear the case.” They also could claim the doctored photo proves the Justice Department bore some sort of animus against Armstrong and charged her vindictively. 

In the past, we've urged caution when analyzing proposals to regulate technologies that could be used to create false images. In those cases, we argued that any new regulation should rely on the established framework for addressing harms caused by other forms of harmful false information. But in this situation, it is the government itself that is misusing technology and propagating harmful falsehoods. This doesn't require new laws; the government can and should put an end to this practice on its own. 

Any reputable journalism organization would fire an employee for manipulating a photo this way; many have done exactly that. It’s a shame our government can’t adhere to such a basic ethical and moral code too. 

Report: ICE Using Palantir Tool That Feeds On Medicaid Data

15 January 2026 at 15:30

EFF last summer asked a federal judge to block the federal government from using Medicaid data to identify and deport immigrants.  

We also warned about the danger of the Trump administration consolidating all of the government’s information into a single searchable, AI-driven interface with help from Palantir, a company that has a shaky-at-best record on privacy and human rights. 

Now we have the first evidence that our concerns have become reality. 

“Palantir is working on a tool for Immigration and Customs Enforcement (ICE) that populates a map with potential deportation targets, brings up a dossier on each person, and provides a ‘confidence score’ on the person’s current address,” 404 Media reports today. “ICE is using it to find locations where lots of people it might detain could be based.” 

The tool – dubbed Enhanced Leads Identification & Targeting for Enforcement (ELITE) – receives people’s addresses from the Department of Health and Human Services (which includes Medicaid) and other sources, 404 Media reports, based in part on law enforcement agents’ court testimony in Oregon. 

This revelation comes as ICE – which has gone on a surveillance technology shopping spree – floods Minneapolis with agents, violently running roughshod over the civil rights of immigrants and U.S. citizens alike; President Trump has threatened to use the Insurrection Act of 1807 to deploy military troops against protestors there. Other localities are preparing for the possibility of similar surges. 

This kind of consolidation of government records provides enormous government power that can be abused. Different government agencies necessarily collect information to provide essential services or collect taxes, but the danger comes when the government begins pooling that data and using it for reasons unrelated to the purpose it was collected. 

As EFF Executive Director Cindy Cohn wrote in a Mercury News op-ed last August, “While couched in the benign language of eliminating government ‘data silos,’ this plan runs roughshod over your privacy and security. It’s a throwback to the rightly mocked ‘Total Information Awareness’ plans of the early 2000s that were, at least publicly, stopped after massive outcry from the public and from key members of Congress. It’s time to cry out again.” 

In addition to the amicus brief we co-authored challenging ICE’s grab for Medicaid data, EFF has successfully sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, filed an amicus brief in a suit challenging ICE’s grab for taxpayer data, and sued the departments of State and Homeland Security to halt a mass surveillance program to monitor constitutionally protected speech by noncitizens lawfully present in the U.S. 

But litigation isn’t enough. People need to keep raising concerns via public discourse, and Congress should act immediately to put the brakes on this runaway train that threatens to crush the privacy and security of each and every person in America.  

EFF in the Press: 2025 in Review

29 December 2025 at 11:34

EFF’s attorneys, activists, and technologists don’t just do the hard, endless work of defending our digital civil liberties — they also spend a lot of time and effort explaining that work to the public via media interviews. 

EFF had thousands of media mentions in 2025, from the smallest hyperlocal outlets to international news behemoths. Our work on street-level surveillance — the technology that police use to spy on our communities — generated a great deal of press attention, particularly regarding automated license plate readers (ALPRs). But we also got a lot of ink and airtime for our three lawsuits against the federal government: one challenging the U.S. Office of Personnel Management's illegal data sharing, a second challenging the State Department's unconstitutional "catch and revoke" program, and the third demanding that the departments of State and Justice reveal what pressure they put on app stores to remove ICE-tracking apps.

Other hot media topics included how travelers can protect themselves against searches of their devices, how protestors can protect themselves from surveillance, and the misguided age-verification laws that are proliferating across the nation and around the world, which are an attack on privacy and free expression.

On national television, Matthew Guariglia spoke with NBC Nightly News to discuss how more and more police agencies are using private doorbell cameras to surveil neighborhoods. Tori Noble spoke with ABC’s Good Morning America about the dangers of digital price tags, as well as with ABC News Live Prime about privacy concerns over OpenAI’s new web browser.

In a sampling of mainstream national media, EFF was cited 33 times by the Washington Post, 16 times by CNN, 13 times by USA Today, 12 times by the Associated Press, 11 times by NBC News, 11 times by the New York Times, 10 times by Reuters, and eight times by National Public Radio. Among tech and legal media, EFF was cited 74 times by Privacy Daily, 35 times by The Verge, 32 times by 404 Media, 32 times by The Register, 26 times by Ars Technica, 25 times by WIRED, 21 times by Law360, 21 times by TechCrunch, 20 times by Gizmodo, and 14 times by Bloomberg Law.

Abroad, EFF was cited in coverage by media outlets in nations including Australia, Bangladesh, Belgium, Canada, Colombia, El Salvador, France, Germany, India, Ireland, New Zealand, Palestine, the Philippines, Slovakia, South Africa, Spain, Trinidad and Tobago, the United Arab Emirates, and the United Kingdom. 

EFF staffers spoke to the masses in their own words via op-eds, and we ruled the airwaves on a variety of podcasts. 

We're grateful to all the intrepid journalists who keep doing the hard work of reporting accurately on tech and privacy policy, and we encourage them to keep reaching out to us at press@eff.org.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

EFF’s ‘How to Fix the Internet’ Podcast: 2025 in Review

24 December 2025 at 11:45

2025 was a stellar year for EFF’s award-winning podcast, “How to Fix the Internet,” as our sixth season focused on the tools and technology of freedom. 

It seems like everywhere we turn we see dystopian stories about technology’s impact on our lives and our futures. From tracking-based surveillance capitalism, to street-level government surveillance, to the dominance of a few large platforms choking innovation, to the growing efforts by authoritarian governments to control what we see and say, the landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building solutions. That’s where our podcast comes in. 

EFF's How to Fix the Internet podcast offers a better way forward. Through curious conversations with some of the leading minds in law and technology, EFF Executive Director Cindy Cohn and Activism Director Jason Kelley explore creative solutions to some of today’s biggest tech challenges. Our sixth season, which ran from May through September, featured: 

  • “Digital Autonomy for Bodily Autonomy” – We all leave digital trails as we navigate the internet – records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world – and those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? Digital Defense Fund Director Kate Bertash joined Cindy and Jason to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline. 
  • “Love the Internet Before You Hate On It” – There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. Molly White – a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech – joined Cindy and Jason to discuss working toward a human-centered internet that gives everyone a sense of control and interaction; open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate. 
  • “Why Three is Tor’s Magic Number” – Many in Silicon Valley, and in U.S. business at large, seem to believe innovation springs only from competition, a race to build the next big thing first, cheaper, better, best. But what if collaboration and community breed innovation just as well as adversarial competition? Tor Project Executive Director Isabela Fernandes joined Cindy and Jason to discuss the importance of not just accepting technology as it’s given to us, but collaboratively breaking it, tinkering with it, and rebuilding it together until it becomes the technology that we really need to make our world a better place. 
  • “Securing Journalism on the ‘Data-Greedy’ Internet” – Public-interest journalism speaks truth to power, so protecting press freedom is part of protecting democracy. But what does it take to digitally secure journalists’ work in an environment where critics, hackers, oppressive regimes, and others seem to have the free press in their crosshairs? Freedom of the Press Foundation Digital Security Director Harlo Holmes joined Cindy and Jason to discuss the tools and techniques that help journalists protect themselves and their sources while keeping the world informed. 
  • “Cryptography Makes a Post-Quantum Leap” – The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring large numbers into primes and finding discrete logarithms, which are important for RSA encryption, Diffie-Hellman key exchanges, and elliptic curve encryption. But what happens when those problems – and the cryptography they underpin – are no longer infeasible for computers to solve? Will our online defenses collapse? Research and applied cryptographer Deirdre Connolly joined Cindy and Jason to discuss not only how post-quantum cryptography can shore up those existing walls but also help us find entirely new methods of protecting our information. 
  • “Finding the Joy in Digital Security” – Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning and retain more knowledge when they’re having a good time. It doesn’t mean the topic isn’t serious – it’s just about intentionally approaching a serious topic with joy. East Africa digital security trainer Helen Andromedon joined Cindy and Jason to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world. 
  • “Smashing the Tech Oligarchy” – Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more. Tech journalist and critic Kara Swisher joined Cindy and Jason to discuss regulation that can keep people safe online without stifling innovation, creating an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs. 
  • “Separating AI Hope from AI Hype” – If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. Princeton Professor and “AI Snake Oil” publisher Arvind Narayanan joined Cindy and Jason to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation – if we make some system improvements first – and how AI will likely work in ways that we barely notice but that help us grow and thrive. 
  • “Protecting Privacy in Your Brain” – Rapidly advancing “neurotechnology” could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions. Neuroscientist Rafael Yuste and human rights lawyer Jared Genser, co-founders of The Neurorights Foundation, joined Cindy and Jason to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they’re building to protect the privacy of the mind. 
  • “Building and Preserving the Library of Everything” – Access to knowledge not only creates an informed populace that democracy requires but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of – so long as that knowledge is allowed to flow freely. Internet Archive founder and digital librarian Brewster Kahle joined Cindy and Jason to discuss how the free flow of knowledge makes all of us more free.
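The post-quantum episode above turns on one concrete fact: Diffie-Hellman key exchange is safe only as long as the discrete logarithm is infeasible to reverse. The toy sketch below, with deliberately tiny, insecure, made-up parameters, shows both halves: two parties deriving a shared secret, and an eavesdropper brute-forcing the discrete log – which is trivial at this size but classically intractable at real key sizes, and exactly what Shor’s algorithm on a quantum computer would break.

```python
# Toy Diffie-Hellman over a tiny prime group. Purely illustrative:
# real deployments use ~2048-bit moduli; these numbers are made up.
p, g = 467, 2          # public prime modulus and generator

a, b = 153, 372        # Alice's and Bob's secret exponents (never transmitted)
A = pow(g, a, p)       # Alice publishes g^a mod p
B = pow(g, b, p)       # Bob publishes g^b mod p

# Both sides compute the same shared secret from the other's public value.
assert pow(B, a, p) == pow(A, b, p)

def discrete_log(target, g, p):
    """Brute-force the smallest x with g^x == target (mod p).

    Instant at this toy size; infeasible classically at real sizes,
    which is the entire security assumption quantum computers threaten.
    """
    x, acc = 0, 1
    while acc != target:
        acc = (acc * g) % p
        x += 1
    return x

# The eavesdropper recovers an exponent equivalent to Alice's secret.
assert pow(g, discrete_log(A, g, p), p) == A
```

At 467 the loop runs a few hundred times; at a 2048-bit prime the same search would outlast the universe on classical hardware, which is why post-quantum schemes replace this hardness assumption rather than just enlarging the numbers.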

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

EFF Launches Age Verification Hub as Resource Against Misguided Laws

10 December 2025 at 12:15
EFF Also Will Host a Reddit AMA and a Livestreamed Panel Discussion

SAN FRANCISCO—With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help people fight back. 

To mark the hub's launch, EFF will host a Reddit AMA (“Ask Me Anything”) next week and a free livestreamed panel discussion on January 15 highlighting the dangers of these misguided laws. 

“These restrictive mandates strike at the foundation of the free and open internet,” said EFF Activist Molly Buckley. “While they are wrapped in the legitimate concern about children's safety, they operate as tools of censorship, used to block people young and old from viewing or sharing information that the government deems ‘harmful’ or ‘offensive.’ They also create surveillance systems that critically undermine online privacy, and chill access to vital online communities and resources. Our new resource hub is a one-stop shop for information that people can use to fight back and redirect lawmakers to things that will actually help young people, like a comprehensive privacy law.” 

Half of U.S. states have enacted some sort of online age verification law. At the federal level, a House Energy and Commerce subcommittee last week held a hearing on “Legislative Solutions to Protect Children and Teens Online.” While many of the 19 bills on that hearing’s agenda involve age verification, none would truly protect children and teens. Instead, they threaten to make it harder to access content that can be crucial, even lifesaving, for some kids. 

It’s not just in the U.S.  Effective this week, a new Australian law requires social media platforms to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account. 

We all want young people to be safe online. However, age verification is not the panacea that regulators and corporations claim it to be; in fact, it could undermine the safety of many. 

Age verification laws generally require online services to check, estimate, or verify all users’ ages—often through invasive tools like government ID checks, biometric scans, or other dubious “age estimation” methods—before granting them access to certain online content or services. These methods are often inaccurate and always privacy-invasive, demanding that users hand over sensitive and immutable personal information that links their offline identity to their online activity. Once that valuable data is collected, it can easily be leaked, hacked, or misused.  

To truly protect everyone online, including children, EFF advocates for a comprehensive data privacy law. 

EFF will host a Reddit AMA on r/privacy from Monday, Dec. 15 at 12 p.m. PT through Wednesday, Dec. 17 at 5 p.m. PT, with EFF attorneys, technologists, and activists answering questions about age verification on all three days. 

EFF will host a free livestream panel discussion about age verification at 12 p.m. PT on Thursday, Jan. 15. Panelists will include Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; a representative of Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. RSVP at https://www.eff.org/livestream-age. 

For the age verification resource hub: https://www.eff.org/age 

For the Reddit AMA: https://www.reddit.com/r/privacy/  

For the Jan. 15 livestream: https://www.eff.org/livestream-age  

 

Contact: Molly Buckley, Activist

Lawsuit Challenges San Jose’s Warrantless ALPR Mass Surveillance

18 November 2025 at 13:11
EFF and the ACLU of Northern California Sue on Behalf of Local Nonprofits

Contact: Josh Richman, EFF, jrichman@eff.org;  Carmen King, ACLU of Northern California, cking@aclunc.org

SAN JOSE, Calif. – San Jose and its police department routinely violate the California Constitution by conducting warrantless searches of the stored records of millions of drivers’ private habits, movements, and associations, the Electronic Frontier Foundation (EFF) and American Civil Liberties Union of Northern California (ACLU-NC) argue in a lawsuit filed Tuesday. 

The lawsuit, filed in Santa Clara County Superior Court on behalf of the Services, Immigrant Rights and Education Network (SIREN) and the Council on American-Islamic Relations – California (CAIR-CA), challenges San Jose police officers’ practice of searching for location information collected by automated license plate readers (ALPRs) without first getting a warrant.  

ALPRs are an invasive mass-surveillance technology: high-speed, computer-controlled cameras that automatically capture images of the license plates of every driver that passes by, without any suspicion that the driver has broken the law. 

“A person who regularly drives through an area subject to ALPR surveillance can have their location information captured multiple times per day,” the lawsuit says. “This information can reveal travel patterns and provide an intimate window into a person’s life as they travel from home to work, drop off their children at school, or park at a house of worship, a doctor’s office, or a protest. It could also reveal whether a person crossed state lines to seek health care in California.”

The San Jose Police Department has blanketed the city’s roadways with nearly 500 ALPRs – indiscriminately collecting millions of records per month about people’s movements – and keeps this data for an entire year. Then the department permits its officers and other law enforcement officials from across the state to search this ALPR database to instantly reconstruct people’s locations over time – without first getting a warrant. This is an unchecked police power to scrutinize the movements of San Jose’s residents and visitors as they lawfully travel to work, to the doctor, or to a protest. 

San Jose’s ALPR surveillance program is especially pervasive: Few California law enforcement agencies retain ALPR data for an entire year, and few have deployed nearly 500 cameras.  

The lawsuit, which names the city, its Police Chief Paul Joseph, and its Mayor Matt Mahan as defendants, asks the court to stop the city and its police from searching ALPR data without first obtaining a warrant. Location information reflecting people’s physical movements, even in public spaces, is protected under the Fourth Amendment according to U.S. Supreme Court case law. The California Constitution is even more protective of location privacy, at both Article I, Section 13 (the ban on unreasonable searches) and Article I, Section 1 (the guarantee of privacy). “The SJPD’s widespread collection and searches of ALPR information poses serious threats to communities’ privacy and freedom of movement,” the lawsuit says.

“This is not just about data or technology — it’s about power, accountability, and our right to move freely without being watched,” said CAIR-San Francisco Bay Area Executive Director Zahra Billoo. “For Muslim communities, and for anyone who has experienced profiling, the knowledge that police can track your every move without cause is chilling. San Jose’s mass surveillance program violates the California Constitution and undermines the privacy rights of every person who drives through the city. We’re going to court to make sure those protections still mean something." 

"The right to privacy is one of the strongest protections that our immigrant communities have in the face of these acts of violence and terrorism from the federal government," said SIREN Executive Director Huy Tran. "This case does not raise the question of whether these cameras should be used. What we need to guard against is a surveillance state, particularly when we have seen other cities or counties violate laws that prohibit collaborating with ICE. We can protect the privacy rights of our residents with one simple rule: Access to the data should only happen once approved under a judicial warrant.”  

For the complaint: https://www.eff.org/files/2025/11/18/siren_v._san_jose_-_filed_complaint.pdf

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs 

Wave of Phony News Quotes Affects Everyone—Including EFF

30 September 2025 at 18:36

Whether due to generative AI hallucinations or human sloppiness, the internet is increasingly rife with bogus news content—and you can count EFF among the victims. 

WinBuzzer published a story June 26 with the headline, “Microsoft Is Getting Sued over Using Nearly 200,000 Pirated Books for AI Training,” containing this passage: 

That quotation from EFF’s Corynne McSherry was cited again in two subsequent, related stories by the same journalist—one published July 27, the other August 27. 

But the link in that original June 26 post was fake. Corynne McSherry never wrote such an article, and the quote was bogus. 

Interestingly, we noted a similar issue with a June 13 post by the same journalist, in which he cited work by EFF Director of Cybersecurity Eva Galperin; this quote included the phrase “get-out-of-jail-free card” too. 

Again, the link he inserted leads nowhere because Eva Galperin never wrote such a blog or white paper.  

When EFF reached out, the journalist—WinBuzzer founder and editor-in-chief Markus Kasanmascheff—acknowledged via email that the quotes were bogus. 

“This indeed must be a case of AI slop. We are using AI tools for research/source analysis/citations. I sincerely apologize for that and this is not the content quality we are aiming for,” he wrote. “I myself have noticed that in the particular case of the EFF for whatever reason non-existing quotes are manufactured. This usually does not happen and I have taken the necessary measures to avoid this in the future. Every single citation and source mention must always be double checked. I have been doing this already but obviously not to the required level. 

“I am actually manually editing each article and using AI for some helping tasks. I must have relied too much on it,” he added. 

AI slop abounds 

It’s not an isolated incident. Media companies large and small are using AI to generate news content because it’s cheaper than paying journalists’ salaries, but those savings can come at the cost of the outlets’ reputations.  

The U.K.’s Press Gazette reported last month that Wired and Business Insider had to remove news features written by one freelance journalist after concerns that the articles were likely AI-generated works of fiction: “Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.” 

And back in May, the Chicago Sun-Times had to apologize after publishing an AI-generated list of books that would make good summer reads—with 10 of the 15 recommended book descriptions and titles found to be “false, or invented out of whole cloth.” 

As journalist Peter Sterne wrote for Nieman Lab in 2022: 

Another potential risk of relying on large language models to write news articles is the potential for the AI to insert fake quotes. Since the AI is not bound by the same ethical standards as a human journalist, it may include quotes from sources that do not actually exist, or even attribute fake quotes to real people. This could lead to false or misleading reporting, which could damage the credibility of the news organization. It will be important for journalists and newsrooms to carefully fact check any articles written with the help of AI, to ensure the accuracy and integrity of their reporting. 

(Or did he write that? Sterne disclosed in that article that he used OpenAI’s ChatGPT-3 to generate that paragraph, ironically enough.) 

The Radio Television Digital News Association issued guidelines a few years ago for the use of AI in journalism, and the Associated Press is among many outlets that have developed guidelines of their own. The Poynter Institute offers a template for developing such policies.  

Nonetheless, some journalists or media outlets have been caught using AI to generate stories including fake quotes; for example, the Associated Press reported last year that a Wyoming newspaper reporter had filed at least seven stories that included AI-generated quotations from six people.  

WinBuzzer wasn’t the only outlet to falsely quote EFF this year. An April 19 article in Wander contained another bogus quotation from Eva Galperin: 

An email to the outlet demanding the article’s retraction went unanswered. 

In another case, WebProNews published a July 24 article quoting Eva Galperin under the headline “Risika Data Breach Exposes 100M Swedish Records to Fraud Risks,” but Eva confirmed she’d never spoken with them or given that quotation to anyone. The article no longer seems to exist on the outlet’s own website, but it was captured by the Internet Archive’s Wayback Machine. 

 

A request for comment made through WebProNews’ “Contact Us” page went unanswered. Then, on September 2, the outlet did it again, this time misattributing a statement to Corynne McSherry: 


No such article in The Verge seems to exist, and the statement is not at all in line with EFF’s stance. 

Our most egregious example 

The top prize for audacious falsity goes to a June 18 article in the Arabian Post, since removed from the site after we flagged it to an editor. The Arabian Post is part of the Hyphen Digital Network, which describes itself as being “at the forefront of AI innovation” and offering “software solutions that streamline workflows to focus on what matters most: insightful storytelling.” The article in question included this passage: 

Privacy advocate Linh Nguyen from the Electronic Frontier Foundation remarked that community monitoring tools are playing a civic role, though she warned of the potential for misinformation. “Crowdsourced neighbourhood policing walks a thin line—useful in forcing transparency, but also vulnerable to misidentification and fear-mongering,” she noted in a discussion on digital civil rights. 

Nobody at EFF recalls anyone named Linh Nguyen ever having worked here, nor have we been able to find anyone by that name who works in the digital privacy sector. So not only was the quotation fake, but apparently the purported source was, too.  

Now, EFF is all about having our words spread far and wide. Per our copyright policy, any and all original material on the EFF website may be freely distributed at will under the Creative Commons Attribution 4.0 International License (CC-BY), unless otherwise noted. 

But we don't want AI and/or disreputable media outlets making up words for us. False quotations that misstate our positions damage the trust that the public and more reputable media outlets have in us. 

If you're worried about this (and rightfully so), the best thing a news consumer can do is invest a little time and energy to learn how to discern the real from the fake. It’s unfortunate that it's the public’s burden to put in this much effort, but while we're adjusting to new tools and a new normal, a little effort now can go a long way.  

As we’ve noted before in the context of election misinformation, the nonprofit journalism organization ProPublica has published a handy guide about how to tell if what you’re reading is accurate or “fake news.” And the International Federation of Library Associations and Institutions infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends: 
