Smart AI Policy Means Examining Its Real Harms and Benefits

4 February 2026 at 17:40

The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or Hal 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.

Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.

We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.

Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.

EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.

So let’s look at the real-world landscape.

AI’s Real and Potential Harms

Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.

There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on.  If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.

And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.

These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a setting with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, and that bias carries over to AI tools trained on the existing, skewed image data.
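
To make the mechanism concrete, here is a small, purely illustrative sketch in Python. The data, the "merit" and "group" variables, and the approval scenario are all synthetic and hypothetical; nothing here comes from a real system. It simply shows that a model fit to biased historical decisions reproduces the disparity it was trained on.

```python
# Hypothetical, synthetic illustration: a model trained on biased historical
# decisions learns to reproduce the disparity. Not a real dataset or system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "group" is a protected attribute; "merit" is what *should* drive the decision.
# Both groups have identical merit distributions.
group = rng.integers(0, 2, size=n)
merit = rng.normal(size=n)

# Historical decisions were biased: at the same merit, group 1 was approved less often.
historical_approval = (merit - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on the biased labels, with the group attribute (or any proxy for it
# that leaks into the data) available as a feature.
X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, historical_approval)

# The model "learns" the disparity: identical merit, different predicted approval rates.
same_merit = np.zeros(1)
print(model.predict_proba(np.column_stack([same_merit, [0]]))[0, 1])  # group 0
print(model.predict_proba(np.column_stack([same_merit, [1]]))[0, 1])  # group 1
```

Dropping the group column alone often doesn't fix this, since other features can act as proxies for it; the point is simply how directly the historical pattern carries through to the model's predictions.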

These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn’t even consider, such as basing diagnostic decisions on which hospital a scan was done at, or concluding that malignant tumors are the ones with a ruler next to them—something a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.
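
Here is a similarly stylized sketch of that shortcut problem, again with synthetic, hypothetical data (no real images or diagnostic model involved): a spurious marker that happens to accompany the label in the training set dominates the model, and performance collapses once the marker is gone.

```python
# Hypothetical, synthetic illustration of "shortcut learning": a spurious marker
# (like a ruler next to a tumor) dominates the model because it happened to
# correlate with the label in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

malignant = rng.integers(0, 2, size=n).astype(bool)
# A weak but genuine signal, plus a "ruler" marker that, in this training set,
# appears exclusively alongside malignant cases.
real_signal = malignant + rng.normal(scale=1.5, size=n)
ruler = malignant.astype(float)

X_train = np.column_stack([real_signal, ruler])
model = LogisticRegression().fit(X_train, malignant)
print("training accuracy:", model.score(X_train, malignant))

# In deployment the coincidence disappears: no rulers in the new images.
m_new = rng.integers(0, 2, size=2_000).astype(bool)
signal_new = m_new + rng.normal(scale=1.5, size=2_000)
no_ruler = np.zeros(2_000)
print("deployment accuracy:", model.score(np.column_stack([signal_new, no_ruler]), m_new))
# The second number is typically far lower than the first.
```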

Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.

We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.

Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, or how to fact-check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead, they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers.

Other considerations that may weigh against particular AI uses are environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.

Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.

AI’s Real and Potential Benefits

However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.

Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
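
As a rough illustration of that difference, here is a small sketch with synthetic, hypothetical data (the model choices are ours, not drawn from any study mentioned in this post). A traditional analysis that pre-specifies an additive linear relationship misses an interaction in the data; a machine learning model finds the predictive pattern on its own, but as an opaque ensemble of trees rather than a formula anyone can read.

```python
# Synthetic, hypothetical sketch: a pre-specified statistical model versus a
# machine learning model that searches for predictive patterns on its own.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(5_000, 3))
# The true relationship is an interaction nobody thought to specify in advance.
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=5_000)

# Traditional approach: the analyst decides up front which factors matter and how
# (here, a simple additive linear model). It misses the interaction entirely.
linear = LinearRegression().fit(X, y)

# ML approach: the algorithm keeps whatever patterns carry predictive power,
# but the result is an ensemble of trees, not an explanation.
boosted = GradientBoostingRegressor(random_state=0).fit(X, y)

print("linear model R^2:", round(linear.score(X, y), 2))    # close to 0
print("boosted model R^2:", round(boosted.score(X, y), 2))  # much higher
```

Neither result is a discovery about why the pattern exists; the boosted model has simply described what is in this particular dataset.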

To be clear, we don't endorse any products, and we recognize that initial results are not proof of ultimate success. But these cases show the difference between what AI can actually do and what the hype claims it can do.

Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but the field has come further in the past few years than it had in a long time.

AI Advancements in Scientific and Medical Research

AI tools can also help facilitate weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.

For example:

  • The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
  • Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).

 Researchers are using AI to help develop new medical treatments:

  • Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
  • Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
  • Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
  • Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines—accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.

AI Uses for Accessibility and Accountability

AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:

  • AI voice generators are giving people back their voices after they lose the ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
  • Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and the many websites that are difficult to navigate for users who rely on a screen reader. Other tools can help blind and low-vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human would provide, they can still be useful in situations when users can’t or don’t want to ask another person to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”

When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:

  • The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance: when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state, there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
  • An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

It is not a coincidence that the best examples of positive uses of AI involve experts who have access to infrastructure to help them use the technology and the requisite experience to evaluate the results. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it is hard-won knowledge that attention to ethics is a vital step in work like this.

Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.

Context Matters

It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.

Copyright Kills Competition

21 January 2026 at 18:14

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright owners increasingly claim more draconian copyright law and policy will fight back against big tech companies. In reality, copyright gives the most powerful companies even more control over creators and competitors. Today’s copyright policy concentrates power among a handful of corporate gatekeepers—at everyone else’s expense. We need a system that supports grassroots innovation and emerging creators by lowering barriers to entry—ultimately offering all of us a wider variety of choices.

Pro-monopoly regulation through copyright won’t provide any meaningful economic support for vulnerable artists and creators. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is like trying to help a bullied kid by giving them more lunch money for the bully to take.

Entertainment companies’ historical practices bear out this concern. For example, from the late 2000s to the mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There’s no reason to think that these same companies would treat their artists more fairly now.

AI Training

In the AI era, copyright may seem like a good way to prevent big tech from profiting from AI at individual creators’ expense—it’s not. In fact, the opposite is true. Developing a large language model requires developers to train the model on millions of works. Requiring developers to license enough AI training data to build a large language model would limit competition to only the largest corporations—those that either have their own trove of training data or can afford to strike a deal with one that does. This would result in all the usual harms of limited competition, like higher costs, worse service, and heightened security risks, as well as fewer of the new, beneficial AI tools that allow people to express themselves or access information.

Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, the first of many copyright lawsuits over the use of works to train AI. ROSS Intelligence was a legal research startup that built an AI-based tool to compete with ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. ROSS trained its tool using the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. The tool didn’t output any of the headnotes, but Thomson Reuters sued ROSS anyway. A federal appeals court is still considering the key copyright issues in the case—which EFF weighed in on last year. EFF hopes that the appeals court will reject this overbroad interpretation of copyright law. But in the meantime, the case has already forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.

Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. The cost of licensing enough works to train an LLM would be prohibitively expensive for most would-be competitors.

The DMCA’s “Anti-Circumvention” Provision

The Digital Millennium Copyright Act’s “anti-circumvention” provision is another case in point. Congress ostensibly passed the DMCA to discourage would-be infringers from defeating Digital Rights Management (DRM) and other access controls and copy restrictions on creative works.

In practice, it’s done little to deter infringement—after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It’s been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards).

Giving rights holders a veto on new competition and innovation hurts consumers. Instead, we need balanced copyright policy that rewards creators without impeding competition.

EFF to California Appeals Court: First Amendment Protects Journalist from Tech Executive’s Meritless Lawsuit

16 January 2026 at 16:22

EFF asked a California appeals court to uphold a lower court’s decision to strike a tech CEO’s lawsuit against a journalist, a suit that sought to silence reporting that the CEO, Maury Blackman, didn’t like.

The journalist, Jack Poulson, reported on Maury Blackman’s arrest for felony domestic violence after receiving a copy of the arrest report from a confidential source. Blackman didn’t like that. So, he sued Poulson—along with Substack, Amazon Web Services, and Poulson’s non-profit, Tech Inquiry—to try and force Poulson to take his articles down from the internet.

Fortunately, the trial court saw this case for what it was: a classic SLAPP, or a strategic lawsuit against public participation. The court dismissed the entire complaint under California’s anti-SLAPP statute, which provides a way for defendants to swiftly defeat baseless claims designed to chill their free speech.

The appeals court should affirm the trial court’s correct decision.  

Poulson’s reporting is just the kind of activity that the state’s anti-SLAPP law was designed to protect: truthful speech about a matter of public interest. The felony domestic violence arrest of the CEO of a controversial surveillance company with U.S. military contracts is undoubtedly a matter of public interest. As we explained to the court, “the public has a clear interest in knowing about the people their government is doing business with.”

Blackman’s claims are totally meritless, because they are barred by the First Amendment. The First Amendment protects Poulson’s right to publish and report on the incident report. Blackman argues that a court order sealing the arrest report overrides Poulson’s right to report the news—despite decades of Supreme Court and California Court of Appeal precedent to the contrary. The trial court correctly rejected this argument and found that the First Amendment defeats all of Blackman’s claims. As the trial court explained, “the First Amendment’s protections for the publication of truthful speech concerning matters of public interest vitiate Blackman’s merits showing.”

The court of appeals should reach the same conclusion.

Artificial Intelligence, Copyright, and the Fight for User Rights: 2025 in Review

25 December 2025 at 15:07

A tidal wave of copyright lawsuits against AI developers threatens beneficial uses of AI, like creative expression, legal research, and scientific advancement. How courts decide these cases will profoundly shape the future of this technology, including its capabilities, its costs, and whether its evolution will be shaped by the democratizing forces of the open market or the whims of an oligopoly. As these cases finished their trials and moved to appeals courts in 2025, EFF intervened to defend fair use, promote competition, and protect everyone’s rights to build and benefit from this technology.

At the same time, rightsholders stepped up their efforts to control fair uses through everything from state AI laws to technical standards that influence how the web functions. In 2025, EFF fought policies that threaten the open web in the California State Legislature, the Internet Engineering Task Force, and beyond.

Fair Use Still Protects Learning—Even by Machines

Copyright lawsuits against AI developers often follow a similar pattern: plaintiffs argue that use of their works to train the models was infringement and then developers counter that their training is fair use. While legal theories vary, the core issue in many of these cases is whether using copyrighted works to train AI is a fair use.

We think that it is. Courts have long recognized that copying works for analysis, indexing, or search is a classic fair use. That principle doesn’t change because a statistical model is doing the reading. AI training is a legitimate, transformative fair use, not a substitute for the original works.

More importantly, expanding copyright would do more harm than good: while creators have legitimate concerns about AI, expanding copyright won’t protect jobs from automation. But overbroad licensing requirements risk entrenching Big Tech’s dominance, shutting out small developers, and undermining fair use protections for researchers and artists. Copyright is a tool that gives the most powerful companies even more control—not a check on Big Tech. And attacking the models and their outputs by attacking training—i.e. “learning” from existing works—is a dangerous move. It risks a core principle of freedom of expression: that training and learning—by anyone—should not be endangered by restrictive rightsholders.

In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies, but in 2025, things began to speed up.

But some cases have already reached courts of appeal. We advocated for fair use rights and sensible limits on copyright in amicus briefs filed in Doe v. GitHub, Thomson Reuters v. Ross Intelligence, and Bartz v. Anthropic, three early AI copyright appeals that could shape copyright law and influence dozens of other cases. We also filed an amicus brief in Kadrey v. Meta, which produced one of the first decisions on the merits of the fair use defense in an AI copyright case.

How the courts decide the fair use questions in these cases could profoundly shape the future of AI—and whether legacy gatekeepers will have the power to control it. As these cases move forward, EFF will continue to defend your fair use rights.

Protecting the Open Web in the IETF

Rightsholders also tried to make an end-run around fair use by changing the technical standards that shape much of the internet. The IETF, an Internet standards body, has been developing technical standards that pose a major threat to the open web. These proposals would allow websites to express “preference signals” against certain uses of scraped data—effectively giving them veto power over fair uses like AI training and web search.

Overly restrictive preference signaling threatens a wide range of important uses—from accessibility tools for people with disabilities to research efforts aimed at holding governments accountable. Worse, the IETF is dominated by publishers and tech companies seeking to embed their business models into the infrastructure of the internet. These companies aren’t looking out for the billions of internet users who rely on the open web.

That’s where EFF comes in. We advocated for users’ interests in the IETF, and helped defeat the most dangerous aspects of these proposals—at least for now.

Looking Ahead

The AI copyright battles of 2025 were never just about compensation—they were about control. EFF will continue working in courts, legislatures, and standards bodies to protect creativity and innovation from copyright maximalists.

AI Chatbot Companies Should Protect Your Conversations From Bulk Surveillance

EFF intern Alexandra Halbeck contributed to this blog

When people talk to a chatbot, they often reveal highly personal information they wouldn’t share with anyone else. Chat logs are digital repositories of our most sensitive and revealing information. They are also tempting targets for law enforcement, to which the U.S. Constitution gives only one answer: get a warrant.

AI companies have a responsibility to their users to make sure the warrant requirement is strictly followed, to resist unlawful bulk surveillance requests, and to be transparent with their users about the number of government requests they receive.

Chat logs are deeply personal, just like your emails.

Tens of millions of people use chatbots to brainstorm, test ideas, and explore questions they might never post publicly or even admit to another person. Whether advisable or not, people also turn to consumer AI companies for medical information, financial advice, and even dating tips. These conversations reveal people’s most sensitive information.


Consider the sensitivity of the following prompts: “how to get abortion pills,” “how to protect myself at a protest,” or “how to escape an abusive relationship.” These exchanges can reveal everything from health status to political beliefs to private grief. A single chat thread can expose the kind of intimate detail once locked away in a handwritten diary.

Without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help.

Chat logs require a warrant.

Whether you draft an email, edit an online document, or ask a question to a chatbot, you have a reasonable expectation of privacy in that information. Chatbots may be a new technology, but the constitutional principle is old and clear. Before the government can rifle through your private thoughts stored on digital platforms, it must do what it has always been required to do: get a warrant.

For over a century, the Fourth Amendment has protected the content of private communications—such as letters, emails, and search engine prompts—from unreasonable government searches. AI prompts require the same constitutional protection.

This protection is not aspirational—it already exists. The Fourth Amendment draws a bright line around private communications: the government must show probable cause and obtain a particularized warrant before compelling a company to turn over your data. Companies like OpenAI acknowledge this warrant requirement explicitly, while others like Anthropic could stand to be more precise.

AI companies must resist bulk surveillance orders.

AI companies that create chatbots should commit to having your back and resisting unlawful bulk surveillance orders. A valid search warrant requires law enforcement to provide a judge with probable cause and to particularly describe the thing to be searched. This means that bulk surveillance orders often fail that test.

What do these overbroad orders look like? In the past decade or so, police have often sought “reverse” search warrants for user information held by technology companies. Rather than searching for one particular individual, police have demanded that companies rummage through their giant databases of personal data to help develop investigative leads. This has included “tower dumps” and “geofence warrants,” in which police order a company to search all users’ location data to identify anyone who has been near a particular place at a particular time. It has also included “keyword” warrants, which seek to identify any person who typed a particular phrase into a search engine. This could include a chilling keyword search for a well-known politician’s name or a busy street, or a geofence warrant near a protest or church.

Courts are beginning to rule that these broad demands are unconstitutional. And after years of complying, Google has finally made it technically difficult—if not impossible—to provide mass location data in response to a geofence warrant.

This is an old story: if a company stores a lot of data about its users, law enforcement (and private litigants) will eventually seek it out. Law enforcement is already demanding user data from AI chatbot companies, and those demands will only increase. These companies must be prepared for this onslaught, and they must commit to fighting to protect their users.

In addition to minimizing the amount of data accessible to law enforcement, they can start with three promises to their users. These aren’t radical ideas. They are basic transparency and accountability standards to preserve user trust and to ensure constitutional rights keep pace with technology:

  1. commit to fighting bulk orders for user data in court,
  2. commit to providing users with advance notice before complying with a legal demand, so that users can choose to fight on their own behalf, and
  3. commit to publishing periodic transparency reports, which tally up how many legal demands for user data the company receives (including the number of bulk orders specifically).
