Strengthen Colorado’s AI Act

Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.

Fortunately, workers, patients, and renters are resisting.

In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.

EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.

What the Colorado AI Act Does

The Colorado AI Act is a good step in the right direction. It regulates “high-risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, health care, housing, insurance, or legal services. An AI system is a “substantial factor” in such a decision if it assisted in making the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.

Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.

Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help in seeking a fair decision about them. A deployer must notify them of the reasons for the decision, the degree to which the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any incorrect inputs, and an opportunity to appeal, including with human review.

Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.

How the Colorado AI Act Should Be Strengthened

The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.

Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas have similar biometric privacy rights on paper, but in practice the people of Illinois enjoy far greater protection because they can sue violators.

Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they routinely have an unfair disparate impact on vulnerable groups, so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” (an evidentiary thumb on the scale) that may impede such enforcement. Specifically, if a deployer or developer complies with the Act, they get a rebuttable presumption that they met the Act’s requirement of “reasonable care” to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.

Next Steps

Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.

EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

Yes to California’s “No Robo Bosses Act”

California’s Governor should sign S.B. 7, a common-sense bill to end some of the harshest consequences of automated abuse at work. EFF is proud to join dozens of labor, digital rights, and other advocates in support of the “No Robo Bosses Act.”

Algorithmic decision-making is a growing threat to workers. Bosses are using AI to assess the body language and voice tone of job candidates. They’re using algorithms to predict when employees are organizing a union or planning to quit. They’re automating choices about who gets fired. And these employment algorithms often discriminate based on gender, race, and other protected statuses. Fortunately, many advocates are resisting.

What the Bill Does

S.B. 7 is a strong step in the right direction. It addresses “automated decision systems” (ADS) across the full landscape of employment. It applies to bosses in both the private and government sectors, and it protects workers who are employees as well as those who are contractors. It addresses all manner of employment decisions that involve automated decision-making, including hiring, wages, hours, duties, promotion, discipline, and termination. And it covers bosses that use ADS to assist or replace a person making a decision about another person.

The bill requires employers to be transparent when they rely on ADS. Before using it to make a decision about a job applicant or current worker, a boss must notify them about the use of ADS. The notice must be a stand-alone, plain-language communication. The notice to a current worker must disclose the types of decisions subject to ADS, and a boss cannot use an ADS for an undisclosed purpose. Further, the notice to a current worker must disclose how the ADS works, including what information goes in and how it arrives at its decision (such as whether some factors are weighed more heavily than others).

The bill provides some due process to current workers who face discipline or termination based on ADS. A boss cannot fire or punish a worker based solely on ADS. And before firing or punishing a worker based primarily on ADS, a boss must ensure that a person reviews both the ADS output and other relevant information, and must notify the affected worker of that use of ADS. A boss cannot use customer ratings as the only or primary input for such decisions. Every worker can also obtain a copy of the most recent year of their own data that their boss might use as ADS input to punish or fire them.

Other provisions of the bill will further protect workers. A boss must maintain an updated list of all ADS it currently uses. A boss cannot use ADS to violate the law, to infer whether a worker is a member of a protected class, or to target a worker for exercising their labor and other rights. Further, a boss cannot retaliate against a worker who exercises their rights under this new law. Local laws are not preempted, so our cities and counties are free to enact additional protections.

Next Steps

The “No Robo Bosses Act” is a great start. And much more is needed, because many kinds of powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring.

EFF has long been fighting such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

New Documents Show First Trump DOJ Worked With Congress to Amend Section 230

After rolling out its own proposal to significantly limit a key law protecting internet users’ speech in the summer of 2020, the Department of Justice under the first Trump administration actively worked with lawmakers to support further efforts to stifle online speech.

The new documents, disclosed in an EFF Freedom of Information Act (FOIA) lawsuit, show officials were talking with Senate staffers working to pass speech- and privacy-chilling bills like the EARN IT Act and PACT Act (neither became law). DOJ officials also communicated with an organization that sought to condition Section 230’s legal protections on websites using age-verification systems if they hosted sexual content.

Section 230 protects users’ online speech by protecting the online intermediaries we all rely on to communicate on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.

DOJ’s work to weaken Section 230 began before President Donald Trump issued an executive order targeting social media services in 2020, and officials in DOJ appeared to be blindsided by the order. EFF was counsel to plaintiffs who challenged the order, and President Joe Biden later rescinded it. EFF filed two FOIA suits seeking records about the executive order and the DOJ’s work to weaken Section 230.

The DOJ’s latest release provides more detail on a general theme that has been apparent for years: that the DOJ in 2020 flexed its powers to try to undermine or rewrite Section 230. The documents show that in addition to meeting with congressional staffers, DOJ was critical of a proposed amendment to the EARN IT Act, with one official stating that it “completely undermines” the sponsors’ argument for rejecting DOJ’s proposal to exempt so-called “Bad Samaritan” websites from Section 230.

Further, DOJ reviewed and proposed edits to a rulemaking petition to the Federal Communications Commission that sought to reinterpret Section 230. That effort never moved forward because the FCC lacked any legal authority to reinterpret the law.

You can read the latest release of documents here, and all the documents released in this case are here.

Today's Supreme Court Decision on Age Verification Tramples Free Speech and Undermines Privacy

Today’s decision in Free Speech Coalition v. Paxton is a direct blow to the free speech rights of adults. The Court ruled that “no person—adult or child—has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.” This ruling allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’ law.  

Importantly, the Court's reasoning applies only to age-verification rules for certain sexual material, and not to age limits in general. We will continue to fight against age restrictions on online access more broadly, such as on social media and specific online features.  

Still, the decision has immense consequences for internet users in Texas and in other states that have enacted similar laws. The Texas law forces adults to submit personal information over the internet to access entire websites that hold some amount of sexual material, not just the pages or portions of those sites that contain the sexual material itself. Many sites that cannot reasonably implement age-verification measures, for reasons such as cost or technical requirements, will likely block all users in Texas and other states with similar laws.

Many users will not be comfortable sharing private information to access sites that do implement age verification, out of concern for their privacy or for data breaches. Many others do not have a driver’s license or photo ID with which to complete the age-verification process. This decision will ultimately deter adult users from speaking and accessing lawful content, and will endanger the privacy of those who choose to go forward with verification.

What the Court Said Today 

In the 6-3 decision, the Court ruled that Texas’ HB 1181 is constitutional. This law requires websites that Texas decides are composed of “one-third” or more of “sexual material harmful to minors” to confirm the age of users by collecting age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.   

In 1997, the Supreme Court struck down a federal online age-verification law in Reno v. American Civil Liberties Union. In that case, the Court ruled that many elements of the Communications Decency Act violated the First Amendment, including a part of the law that made it a crime for anyone to engage in online speech that is “indecent” or “patently offensive” if the speech could be viewed by a minor. Like HB 1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.

In Reno and in subsequent cases, the Supreme Court ruled that laws burdening adults’ access to lawful speech are subject to the highest level of review under the First Amendment, known as strict scrutiny. Under that standard, a law must be narrowly tailored, using the least speech-restrictive means available to the government.

That all changed with the Supreme Court’s decision today.

The Court now says that laws burdening adults’ access to sexual material that is obscene to minors are subject to less-searching First Amendment review, known as intermediate scrutiny. And under that lower standard, the Texas law does not violate the First Amendment. The Court thus did not have to respond to arguments that there are less speech-restrictive ways of reaching the same goal, such as encouraging parents to install content-filtering software on their children’s devices.

The Court reached this decision by incorrectly assuming that online age verification is functionally equivalent to flashing an ID at a brick-and-mortar store. As we explained in our amicus brief, this ignores the many ways in which verifying age online is significantly more burdensome and invasive than doing so in person. As we and many others have previously explained, unlike an in-person age check, the only viable way for a website to comply with an age-verification requirement is to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information.

This leads to a host of serious anonymity, privacy, and security concerns—all of which the majority failed to address. A person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. This leaves users highly vulnerable to data breaches and other security harms. Age verification also undermines anonymous internet browsing, even though courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.    

The Court sidestepped its previous online age-verification decisions by claiming the internet has changed too much to follow the precedent from Reno requiring these laws to survive strict scrutiny. Writing for the dissent, Justice Kagan rejected “the majority’s claim—again mistaken—that the internet has changed too much to follow our precedents’ lead.”

But the majority argues that past precedent does not account for the dramatic expansion of the internet since the 1990s, which has led to easier and greater internet access and larger amounts of content available to teens online. The majority’s opinion entirely fails to address the obvious corollary: the internet’s expansion also has benefited adults. Age verification requirements now affect exponentially more adults than they did in the 1990s and burden vastly more constitutionally protected online speech. The majority's argument actually demonstrates that the burdens on adult speech have grown dramatically larger because of technological changes, yet the Court bizarrely interprets this expansion as justification for weaker constitutional protection. 

What It Means Going Forward 

This Supreme Court broke a fundamental agreement between internet users and the state that has existed since the internet’s inception: the government will not stand in the way of people accessing First Amendment-protected material. There is no question that more states will now introduce laws similar to Texas’. Two dozen already have, though not all are in effect. At least three of those states set no threshold on the percentage of covered material before the law applies, a sweeping restriction that reaches every site containing any material the state believes the law covers. These laws will force U.S.-based adult websites to implement age verification or block users in those states, as many have in the past when similar laws were in effect.

Research has found that, rather than submit to verification, people will choose a variety of other paths: using VPNs to make it appear that they are outside the state, or turning to similar sites that don’t comply with the law, often because those sites operate in a different country. While many users will simply not access the content as a result, others may accept the risk, at their peril.

We expect some states to push the envelope in terms of what content they consider “harmful to minors,” and to expand the types of websites covered by these laws, either through updated statutory language or threats of litigation. Even if these attacks are struck down, operators of sites that involve sexual content of any type may be under threat, especially if that content is politically divisive. We worry that the point of some of these laws is to deter queer folks and others from accessing lawful speech and finding community online by requiring them to identify themselves. We will continue to fight against the disclosure of this critical information and for people’s ability to maintain their anonymity.

EFF Will Continue to Fight for All Users’ Free Expression and Privacy 

That said, the ruling does not give states or Congress the green light to impose age-verification regulations on the broader internet. The majority’s decision rests on the fact that minors do not have a First Amendment right to access sexual material that is obscene to minors. In short, adults have a First Amendment right to access those sexual materials, while minors do not. Although we believe it was wrong to do so, the majority ruled that because Texas is blocking minors from speech they have no constitutional right to access, the age-verification requirement only incidentally burdens adults’ First Amendment rights.

But the same rationale does not apply to general-audience sites and services, including social media. Minors and adults have coextensive rights to both speak and access the speech of other users on these sites because the vast majority of the speech is not sexual materials that would be obscene to minors. Lawmakers should be careful not to interpret this ruling to mean that broader restrictions on minors’ First Amendment rights, like those included in the Kids Online Safety Act, would be deemed constitutional.  

Free Speech Coalition v. Paxton will have an effect on nearly every U.S. adult internet user for the foreseeable future. It marks a worrying shift in the ways that governments can restrict access to speech online. But that only means we must work harder than ever to protect privacy, security, and free speech as central tenets of the internet.  
