Strengthen Colorado’s AI Act

Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.

Fortunately, workers, patients, and renters are resisting.

In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.

EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.

What the Colorado AI Act Does

The Colorado AI Act is a good step in the right direction. It regulates “high-risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, health care, housing, insurance, or legal services. An AI system is a “substantial factor” in those decisions if it assisted in the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.

Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.

Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help to seek a fair decision about them. A deployer must notify them of the reasons for the decision, the degree to which the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any inaccurate inputs. And the deployer must provide them an opportunity to appeal, including with human review.

Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.

How the Colorado AI Act Should Be Strengthened

The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.

Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas have similar biometric privacy rights on paper, but in practice Illinoisans enjoy far stronger protection because they can sue violators.

Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they routinely have an unfair disparate impact on vulnerable groups, so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” – that is, an evidentiary thumb on the scale – that may impede such enforcement. Specifically, if a developer or deployer complies with the Act’s specific requirements, they get a rebuttable presumption that they exercised the “reasonable care” the Act requires to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.

Next Steps

Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.

EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

  •  

Privacy Harm Is Harm

Every day, corporations track our movements through license plate scanners, building detailed profiles of where we go, when we go there, and who we visit. When they do this to us in violation of data privacy laws, we’ve suffered a real harm—period. We shouldn’t need to prove we’ve suffered additional damage, such as physical injury or monetary loss, to have our day in court.

That’s why EFF is proud to join an amicus brief in Mata v. Digital Recognition Network, a lawsuit by drivers against a corporation that allegedly violated a California statute regulating Automatic License Plate Readers (ALPRs). The state trial court erroneously dismissed the case by misinterpreting this data privacy law to require proof of extra harm beyond privacy harm. The brief was written by the ACLU of Northern California, Stanford’s Juelsgaard Clinic, and UC Law SF’s Center for Constitutional Democracy.

The amicus brief explains:

This case implicates critical questions about whether a California privacy law, enacted to protect people from harmful surveillance, is not just words on paper, but can be an effective tool for people to protect their rights and safety.

California’s Constitution and laws empower people to challenge harmful surveillance at its inception without waiting for its repercussions to manifest through additional harms. A foundation for these protections is article I, section 1, which grants Californians an inalienable right to privacy.

People in the state have long used this constitutional right to challenge the privacy-invading collection of information by private and governmental parties, not only harms that are financial, mental, or physical. Indeed, widely understood notions of privacy harm, as well as references to harm in the California Code, also demonstrate that term’s expansive meaning.

What’s At Stake

The defendant, Digital Recognition Network, also known as DRN Data, is a subsidiary of Motorola Solutions that provides access to a massive searchable database of ALPR data collected by private contractors. Its customers include law enforcement agencies and private companies, such as insurers, lenders, and repossession firms. DRN is the sister company to the infamous surveillance vendor Vigilant Solutions (now Motorola Solutions), and together they have provided data to ICE through a contract with Thomson Reuters.

The consequences of weak privacy protections are already playing out across the country. This year alone, authorities in multiple states have used license plate readers to hunt for people seeking reproductive healthcare. Police officers have used these systems to stalk romantic partners and monitor political activists. ICE has tapped into these networks to track down immigrants and their families for deportation.

Strong Privacy Laws

This case could determine whether privacy laws have real teeth or are just words on paper. If corporations can collect your personal information with impunity—knowing that unless you can prove bodily injury or economic loss, you can’t fight back—then privacy laws lose value.

We need strong data privacy laws. We need a private right of action so when a company violates our data privacy rights, we can sue them. We need a broad definition of “harm,” so we can sue over our lost privacy rights, without having to prove collateral injury. EFF wages this battle when writing privacy laws, when interpreting those laws, and when asserting “standing” in federal and state courts.

The fight for privacy isn’t just about legal technicalities. It’s about preserving your right to move through the world without being constantly tracked, catalogued, and profiled by corporations looking to profit from your personal information.

You can read the amicus brief here.

  •  

Yes to California’s “No Robo Bosses Act”

California’s Governor should sign S.B. 7, a common-sense bill to end some of the harshest consequences of automated abuse at work. EFF is proud to join dozens of labor, digital rights, and other advocates in support of the “No Robo Bosses Act.”

Algorithmic decision-making is a growing threat to workers. Bosses are using AI to assess the body language and voice tone of job candidates. They’re using algorithms to predict when employees are organizing a union or planning to quit. They’re automating choices about who gets fired. And these employment algorithms often discriminate based on gender, race, and other protected statuses. Fortunately, many advocates are resisting.

What the Bill Does

S.B. 7 is a strong step in the right direction. It addresses “automated decision systems” (ADS) across the full landscape of employment. It applies to bosses in the private and government sectors, and it protects workers who are employees and contractors. It addresses all manner of employment decisions that involve automated decision-making, including hiring, wages, hours, duties, promotion, discipline, and termination. It covers bosses using ADS to assist or replace a person making a decision about another person.

The bill requires employers to be transparent when they rely on ADS. Before using it to make a decision about a job applicant or current worker, a boss must notify them about the use of ADS. The notice must be in a stand-alone, plain language communication. The notice to a current worker must disclose the types of decisions subject to ADS, and a boss cannot use an ADS for an undisclosed purpose. Further, the notice to a current worker must disclose information about how the ADS works, including what information goes in and how it arrives at its decision (such as whether some factors are weighed more heavily than others).

The bill provides some due process to current workers who face discipline or termination based on the ADS. A boss cannot fire or punish a worker based solely on ADS. Before a boss does so based primarily on ADS, they must ensure a person reviews both the ADS output and other relevant information. A boss must also notify the affected worker of such use of ADS. A boss cannot use customer ratings as the only or primary input for such decisions. And every worker can obtain a copy of the most recent year of their own data that their boss might use as ADS input to punish or fire them.

Other provisions of the bill will further protect workers. A boss must maintain an updated list of all ADS it currently uses. A boss cannot use ADS to violate the law, to infer whether a worker is a member of a protected class, or to target a worker for exercising their labor and other rights. Further, a boss cannot retaliate against a worker who exercises their rights under this new law. Local laws are not preempted, so our cities and counties are free to enact additional protections.

Next Steps

The “No Robo Bosses Act” is a great start. And much more is needed, because many kinds of powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring.

EFF has long been fighting such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

  •  

EFF to Court: Protect Our Health Data from DHS

The federal government is trying to use Medicaid data to identify and deport immigrants. So EFF and our friends at EPIC and the Protect Democracy Project have filed an amicus brief asking a judge to block this dangerous violation of federal data privacy laws.

Last month, the AP reported that the U.S. Department of Health and Human Services (HHS) had disclosed to the U.S. Department of Homeland Security (DHS) a vast trove of sensitive data obtained from states about people who obtain government-assisted health care. Medicaid is a federal program that funds health insurance for low-income people; it is partially funded and primarily managed by states. Some states, using their own funds, allow enrollment by non-citizens. HHS reportedly disclosed to DHS the Medicaid enrollee data from several of these states, including enrollee names, addresses, immigration status, and claims for health coverage.

In response, California and 19 other states sued HHS and DHS. The states allege, among other things, that these federal agencies violated (1) the data disclosure limits in the Social Security Act, the Privacy Act, and HIPAA, and (2) the notice-and-comment requirements for rulemaking under the Administrative Procedure Act (APA).

Our amicus brief argues that (1) disclosure of sensitive Medicaid data causes a severe privacy harm to the enrolled individuals, (2) the APA empowers federal courts to block unlawful disclosure of personal data between federal agencies, and (3) the broader public is harmed by these agencies’ lack of transparency about these radical changes in data governance.

A new agency agreement, recently reported by the AP, allows Immigration and Customs Enforcement (ICE) to access the personal data of Medicaid enrollees held by HHS’ Centers for Medicare and Medicaid Services (CMS). The agreement states: “ICE will use the CMS data to allow ICE to receive identity and location information on aliens identified by ICE.”

In the 1970s, in the wake of the Watergate and COINTELPRO scandals, Congress wisely enacted numerous laws to protect our data privacy from government misuse. These include strict legal limits on disclosure of personal data within an agency or from one agency to another. EFF sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, and filed an amicus brief in a suit challenging ICE grabbing taxpayer data. We’ve also reported on the U.S. Department of Agriculture’s grab of food stamp data and DHS’s potential grab of postal data. And we’ve written about the dangers of consolidating all government information.

We have data protection rules for good reason, and these latest data grabs are exactly why.

You can read our new amicus brief here.

  •  

When Your Power Meter Becomes a Tool of Mass Surveillance

Simply using extra electricity to power some Christmas lights or a big fish tank shouldn’t bring the police to your door. In fact, in California, the law explicitly protects the privacy of power customers, prohibiting public utilities from disclosing precise “smart” meter data in most cases. 

Despite this, Sacramento’s power company and law enforcement agencies have been running an illegal mass surveillance scheme for years, using our power meters as home-mounted spies. The Electronic Frontier Foundation (EFF) is seeking to end Sacramento’s dragnet surveillance of energy customers and has asked for a court order to stop this practice for good.

For a decade, the Sacramento Municipal Utility District (SMUD) has been searching through all of its customers’ energy data and has passed on more than 33,000 tips about supposedly “high” usage households to police. Ostensibly looking for homes that were growing illegal amounts of cannabis, SMUD analysts have admitted that such “high” power usage could come from houses using air conditioning or heat pumps, or from simply being large. And the threshold of so-called “suspicion” has steadily dropped, from 7,000 kWh per month in 2014 to just 2,800 kWh per month in 2023. One SMUD analyst admitted that they themselves “used 3500 [kWh] last month.”
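
To make concrete how blunt this kind of dragnet is, here is a minimal sketch of threshold flagging on monthly smart-meter totals. The account numbers, usage figures, and notes are hypothetical, and SMUD’s actual analysis has not been published; this only illustrates the general technique described above.

```python
# Minimal sketch of blunt kWh-threshold flagging (hypothetical data).
MONTHLY_KWH_THRESHOLD = 2800  # the 2023 cutoff described above

customers = [
    {"account": "A-1001", "monthly_kwh": 3500, "notes": "large home, central A/C"},
    {"account": "A-1002", "monthly_kwh": 2900, "notes": "heat pump and electric vehicle"},
    {"account": "A-1003", "monthly_kwh": 1200, "notes": "small apartment"},
]

# Everything above the cutoff gets swept in, regardless of the innocent explanation.
flagged = [c for c in customers if c["monthly_kwh"] > MONTHLY_KWH_THRESHOLD]

for c in flagged:
    print(f"{c['account']} flagged at {c['monthly_kwh']} kWh ({c['notes']})")
```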

This scheme has targeted Asian customers. SMUD analysts deemed one home suspicious because it was “4k [kWh], Asian,” and another suspicious because “multiple Asians have reported there.” Sacramento police sent accusatory letters in English and Chinese, but no other language, to residents who used above-average amounts of electricity.

In 2022, EFF and the law firm Vallejo, Antolin, Agarwal, Kanter LLP sued SMUD and the City of Sacramento, representing the Asian American Liberation Network and two Sacramento County residents. One is an immigrant from Vietnam. Sheriff’s deputies showed up unannounced at his home, falsely accused him of growing cannabis based on an erroneous SMUD tip, demanded entry for a search, and threatened him with arrest when he refused. He has never grown cannabis; rather, he uses more electricity than average because of a spinal injury.

Last week, we filed our main brief explaining how this surveillance program violates the law and why it must be stopped. California’s state constitution bars unreasonable searches. This type of dragnet surveillance — suspicionless searches of entire zip codes’ worth of customer energy data — is inherently unreasonable. Additionally, a state statute generally prohibits public utilities from sharing such data. As we write in our brief, Sacramento’s mass surveillance scheme does not qualify for any of the narrow exceptions to this rule.

Mass surveillance violates the privacy of many individuals, as police without individualized suspicion seek (possibly non-existent) evidence of some kind of offense by some unknown person. As we’ve seen time and time again, innocent people inevitably get caught in the dragnet. For decades, EFF has been exposing and fighting these kinds of dangerous schemes. We remain committed to protecting digital privacy, whether it’s being threatened by national governments – or your local power company.

  •  

How to Build on Washington’s “My Health, My Data” Act

In 2023, the State of Washington enacted one of the strongest consumer data privacy laws in recent years: the “my health my data” act (HB 1155). EFF commends the civil rights, data privacy, and reproductive justice advocates who worked to pass this law.

This post suggests ways for legislators and advocates in other states to build on the Washington law and draft one with even stronger protections. This post will separately address the law’s scope (such as who is protected); its safeguards (such as consent and minimization); and its enforcement (such as a private right of action). While the law only applies to one category of personal data – our health information – its structure could be used to protect all manner of data.

Scope of Protection

Authors of every consumer data privacy law must make three decisions about scope: What kind of data is protected? Whose data is protected? And who is regulated?

The Washington law protects “consumer health data,” defined as information linkable to a consumer that identifies their “physical or mental health status.” This includes all manner of conditions and treatments, such as gender-affirming and reproductive care. While EFF’s ultimate goal is protection of all types of personal information, bills that protect at least some types can be a great start.

The Washington law protects “consumers,” defined as all natural persons who reside in the state or had their health data collected there. It is best, as here, to protect all people. If a data privacy law protects just some people, that can incentivize a regulated entity to collect even more data, in order to distinguish protected from unprotected people. Notably, Washington’s definition of “consumers” applies only in “an individual or household context,” but not “an employment context”; thus, Washingtonians will need a different health privacy law to protect them from their snooping bosses.

The Washington law defines a “regulated entity” as “any legal entity” that both: “conducts business” in the state or targets residents for products or services; and “determines the purpose and means” of processing consumer health data. This appears to include many non-profit groups, which is good, because such groups can harmfully process a lot of personal data.

The law excludes government from regulation, which is not unusual for data privacy bills focused on non-governmental actors. State and local government will likely need to be regulated by another data privacy law.

Unfortunately, the Washington law also excludes “contracted service providers when processing data on behalf of government.” A data broker or other surveillance-oriented business should not be free from regulation just because it is working for the police.

Consent or Minimization to Collect or Share Health Data

The most important part of Washington’s law requires either consent or minimization for a regulated entity to collect or share a consumer’s health data.

The law has a strong definition of “consent.” It must be “a clear affirmative act that signifies a consumer’s freely given, specific, informed, opt-in, voluntary, and unambiguous agreement.” Consent cannot be obtained with “broad terms of use” or “deceptive design.”

Absent consent, a regulated entity cannot collect or share a consumer’s health data except as necessary to provide a good or service that the consumer requested. Such rules are often called “data minimization.” Their virtue is that a consumer does not need to do anything to enjoy their statutory privacy rights; the burden is on the regulated entity to process less data.

As to data “sale,” the Washington law requires enhanced consent (which the law calls “valid authorization”). Sale is the most dangerous form of sharing, because it incentivizes businesses to collect the most possible data in hopes of later selling it. For this reason, some laws flatly ban sale of sensitive data, like the Illinois Biometric Information Privacy Act (BIPA).

For context, there are four ways for a bill or law to configure consent and/or minimization. Some require just consent, like BIPA’s provisions on data collection. Others require just minimization, like the federal “my body my data” bill. Still others require both, like the Massachusetts location data privacy bill. And some require either one or the other. In various times and places, EFF has supported all four configurations. “Either/or” is weakest, because it allows regulated entities to choose whether to minimize or to seek consent – a choice they will make based on their profit and not our privacy.

Two Protections of Location Data Privacy

Data brokers harvest our location information and sell it to anyone who will pay, including advertisers, police, and other adversaries. Legislators are stepping forward to address this threat.

The Washington law does so in two ways. First, the “consumer health data” protected by the consent-or-minimization rule is defined to include “precise location information that could reasonably indicate a consumer’s attempt to acquire or receive health services or supplies.” In turn, “precise location” is defined as within 1,750 feet of a person.

Second, the Washington law bans a “geofence” around an “in-person health care service,” if “used” for one of three forbidden purposes (to track consumers, to collect their data, or to send them messages or ads). A “geofence” is defined as technology that uses GPS or the like “to establish a virtual boundary” of 2,000 feet around the perimeter of a physical location.
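
For illustration only, here is a minimal sketch of how a boundary check against a 2,000-foot geofence might work. The coordinates and function names are hypothetical, and the statute defines the boundary rather than any particular implementation; this simply shows what a “virtual boundary” of 2,000 feet around a physical location means in practice.

```python
import math

FEET_PER_METER = 3.28084
GEOFENCE_RADIUS_FEET = 2000  # the statutory boundary discussed above

def distance_feet(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in feet."""
    earth_radius_m = 6371000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a)) * FEET_PER_METER

def inside_geofence(device_lat, device_lon, site_lat, site_lon):
    """True if a reported device location falls within the 2,000-foot boundary."""
    return distance_feet(device_lat, device_lon, site_lat, site_lon) <= GEOFENCE_RADIUS_FEET

# Hypothetical example: a point roughly 1,700 feet from a health facility's coordinates.
print(inside_geofence(47.6105, -122.3350, 47.6062, -122.3321))  # True
```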

This is a good start. It is also much better than weaker rules that only apply to the immediate vicinity of sensitive locations. Such rules allow adversaries to use location data to track us as we move towards sensitive locations, observe us enter the small no-data bubble around those locations, and infer what we may have done there. On the other hand, Washington’s rules apply to sizeable areas. Also, its consent-or-minimization rule applies to all locations that could indicate pursuit of health care (not just health facilities). And its geofence rule forbids use of location data to track people.

Still, the better approach, as in several recent bills, is to simply protect all location data. Protecting just one kind of sensitive location, like houses of worship, will leave out others, like courthouses. More fundamentally, all locations are sensitive, given the risk that others will use our location data to determine where – and with whom – we live, work, and socialize.

More Data Privacy Protections

Other safeguards in the Washington law deserve attention from legislators in other states:

  • Regulated entities must publish a privacy policy that discloses, for example, the categories of data collected and shared, and the purposes of collection. Regulated entities must not collect, use, or share additional categories of data, or process them for additional purposes, without consent.
  • Regulated entities must provide consumers the rights to access and delete their data.
  • Regulated entities must restrict data access to just those employees who need it, and maintain industry-standard data security.

Enforcement

A law is only as strong as its teeth. The best way to ensure enforcement is to empower people to sue regulated entities that violate their privacy; this is often called a “private right of action.”

The Washington law provides that its violation is “an unfair or deceptive act” under the state’s separate consumer protection act. That law, in turn, bans unfair or deceptive acts in the conduct of trade or commerce. Upon a violation of the ban, that law provides a civil action to “any person who is injured in [their] business or property,” with the remedies of injunction, actual damages, treble damages up to $25,000, and legal fees and costs. It remains to be seen how Washington’s courts will apply this old civil action to the new “my health my data” act.

Washington legislators are demonstrating that privacy matters to public policy, but it would be cleaner to name the injury explicitly: invasion of the fundamental human right to data privacy. Sadly, there is a nationwide debate about whether injury to data privacy, by itself, should be enough to go to court, without also proving a more tangible injury like identity theft. The best legislative models ensure full access to the courts in two ways. First, they provide: “A violation of this law regarding an individual’s data constitutes an injury to that individual, and any individual alleging a violation of this law may bring a civil action.” Second, they provide a baseline amount of damages (often called “liquidated” or “statutory” damages), because it is often difficult to prove actual damages arising from a data privacy injury.

Finally, data privacy laws must protect people from “pay for privacy” schemes, where a business charges a higher price or delivers an inferior product if a consumer exercises their statutory data privacy rights. Such schemes will lead to a society of privacy “haves” and “have nots.”

The Washington law has two helpful provisions. First, a regulated entity “may not unlawfully discriminate against a consumer for exercising any rights included in this chapter.” Second, there can be no data sale without a “statement” from the regulated entity to the consumer that “the provision of goods or services may not be conditioned on the consumer signing the valid authorization.”

Some privacy bills contain more-specific language, for example along these lines: “a regulated entity cannot take an adverse action against a consumer (such as refusal to provide a good or service, charging a higher price, or providing a lower quality) because the consumer exercised their data privacy rights, unless the data at issue is essential to the good or service they requested and then only to the extent the data is essential.”

What About Congress?

We still desperately need comprehensive federal consumer data privacy law built on “privacy first” principles. In the meantime, states are taking the lead. The very worst thing Congress could do now is preempt states from protecting their residents’ data privacy. Advocates and legislators from across the country, seeking to take up this mantle, would benefit from looking at – and building on – Washington’s “my health my data” law.

  •