
Yes to the “ICE Out of Our Faces Act”

5 February 2026 at 17:46

Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights and civil liberties. For example, immigration agents are routinely scanning the faces of people they suspect of being unlawfully present in the country – 100,000 times, according to the Wall Street Journal. The technology has already misidentified at least one person, according to 404 Media.

Face recognition technology is so dangerous that government should not use it at all—least of all these out-of-control immigration agencies.

To combat these abuses, EFF is proud to support the “ICE Out of Our Faces Act.” This new federal bill would ban ICE and CBP agents, and some local police working with them, from acquiring or using biometric surveillance systems, including face recognition technology, or information derived from such systems by another entity. This bill would be enforceable, among other ways, by a strong private right of action.

The bill’s lead author is Senator Ed Markey. We thank him for his longstanding leadership on this issue, including introducing similar legislation that would ban all federal law enforcement agencies, and some federally funded state agencies, from using biometric surveillance systems (a bill that EFF also supported). The new “ICE Out of Our Faces Act” is also sponsored by Senator Merkley, Senator Wyden, and Representative Jayapal.

As EFF explains in the new bill’s announcement:

It’s past time for the federal government to end its use of this abusive surveillance technology. A great place to start is its use for immigration enforcement, given ICE and CBP’s utter disdain for the law. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations or if the technology was 100% accurate. We thank the authors of this bill for their leadership in taking steps to end this use of this dangerous and invasive technology.

You can read the bill here, and the bill’s announcement here.

Protecting Our Right to Sue Federal Agents Who Violate the Constitution

4 February 2026 at 19:50

Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.

To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.

Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For this reason, EFF supports new statutes to fill this hole, including California S.B. 747.

The Problem

In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.

However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.

So in 1971, the U.S. Supreme Court stepped into this gap in a watershed case called Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.” He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”

Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.

In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.

In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling.  In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”

Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.

The Solution

At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.

In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.

State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.” 

This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.

We call these laws a commonsense way to protect the rule of law, which is a necessary condition for preserving our digital rights. EFF has long fought for effective judicial remedies, supporting nationwide injunctions and private rights of action, and opposing qualified immunity.

We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.

Strengthen Colorado’s AI Act

19 November 2025 at 12:37

Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.

Fortunately, workers, patients, and renters are resisting.

In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.

EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.

What the Colorado AI Act Does

The Colorado AI Act is a good step in the right direction. It regulates “high-risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, healthcare, housing, insurance, or legal services. An AI system is a “substantial factor” in such a decision if it assisted in the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.

Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.

Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help to seek a fair decision about them. A deployer must notify them of the reasons for the decision, the degree to which the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any incorrect inputs. And the deployer must provide them an opportunity to appeal, including with human review.

Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.

How the Colorado AI Act Should Be Strengthened

The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.

Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas have similar biometric privacy rights on paper, but in practice the people of Illinois enjoy far more of those rights because they can sue violators themselves.

Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they frequently have an unfair disparate impact on vulnerable groups, and so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” – that is, an evidentiary thumb on the scale – that may impede such enforcement. Specifically, if a deployer or developer complies with the Act, then they get a rebuttable presumption that they used the “reasonable care” the Act requires to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.

Next Steps

Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.

EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

Privacy Harm Is Harm

1 October 2025 at 18:51

Every day, corporations track our movements through license plate scanners, building detailed profiles of where we go, when we go there, and who we visit. When they do this to us in violation of data privacy laws, we’ve suffered a real harm—period. We shouldn’t need to prove we’ve suffered additional damage, such as physical injury or monetary loss, to have our day in court.

That's why EFF is proud to join an amicus brief in Mata v. Digital Recognition Network, a lawsuit by drivers against a corporation that allegedly violated a California statute that regulates Automatic License Plate Readers (ALPRs). The state trial court erroneously dismissed the case by misinterpreting this data privacy law to require proof of additional harm beyond the privacy harm itself. The brief was written by the ACLU of Northern California, Stanford’s Juelsgaard Clinic, and UC Law SF’s Center for Constitutional Democracy.

The amicus brief explains:

This case implicates critical questions about whether a California privacy law, enacted to protect people from harmful surveillance, is not just words on paper, but can be an effective tool for people to protect their rights and safety.

California’s Constitution and laws empower people to challenge harmful surveillance at its inception without waiting for its repercussions to manifest through additional harms. A foundation for these protections is article I, section 1, which grants Californians an inalienable right to privacy.

People in the state have long used this constitutional right to challenge the privacy-invading collection of information by private and governmental parties, not only harms that are financial, mental, or physical. Indeed, widely understood notions of privacy harm, as well as references to harm in the California Code, also demonstrate that term’s expansive meaning.

What’s At Stake

The defendant, Digital Recognition Network, also known as DRN Data, is a subsidiary of Motorola Solutions that provides access to a massive searchable database of ALPR data collected by private contractors. Its customers include law enforcement agencies and private companies, such as insurers, lenders, and repossession firms. DRN is the sister company to the infamous surveillance vendor Vigilant Solutions (now Motorola Solutions), and together they have provided data to ICE through a contract with Thomson Reuters.

The consequences of weak privacy protections are already playing out across the country. This year alone, authorities in multiple states have used license plate readers to hunt for people seeking reproductive healthcare. Police officers have used these systems to stalk romantic partners and monitor political activists. ICE has tapped into these networks to track down immigrants and their families for deportation.

Strong Privacy Laws

This case could determine whether privacy laws have real teeth or are just words on paper. If corporations can collect your personal information with impunity—knowing that unless you can prove bodily injury or economic loss, you can’t fight back—then privacy laws lose their value.

We need strong data privacy laws. We need a private right of action so when a company violates our data privacy rights, we can sue them. We need a broad definition of “harm,” so we can sue over our lost privacy rights, without having to prove collateral injury. EFF wages this battle when writing privacy laws, when interpreting those laws, and when asserting “standing” in federal and state courts.

The fight for privacy isn’t just about legal technicalities. It’s about preserving your right to move through the world without being constantly tracked, catalogued, and profiled by corporations looking to profit from your personal information.

You can read the amicus brief here.
