The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps in a wide array of uses—some well-established, others still being developed.
Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.
We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.
Obscured by this hype are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning, and why excitement over these uses shouldn't translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it's marketed as solving.
EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.
So let’s look at the real-world landscape.
Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.
There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on. If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.
And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.
These biases don't just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care, whenever AI is used for analysis in a setting with systemic disparities and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, with the resulting bias affecting AI tools trained on the existing and biased image data.
These kinds of errors are difficult to detect and correct because it's hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at, or deciding that malignant tumors are the ones with a ruler next to them—something a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.
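To make that failure mode concrete, here is a minimal sketch, assuming the scikit-learn library and an invented dataset: a classifier latches onto a spurious cue (like the presence of a ruler) that tracks the label in its training archive but not in the real world.

```python
# Minimal, hypothetical illustration of a model latching onto a spurious feature.
# Assumes numpy and scikit-learn are installed; the data and the "ruler" cue are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a weak but genuine signal related to the diagnosis.
signal = rng.normal(size=n)
malignant = (signal + rng.normal(scale=2.0, size=n)) > 0

# Feature 1: "ruler present" -- in this training archive, rulers mostly appear in malignant images.
ruler = np.where(malignant, rng.random(n) < 0.9, rng.random(n) < 0.1).astype(float)

X_train = np.column_stack([signal, ruler])
model = LogisticRegression().fit(X_train, malignant)
print("learned weights:", model.coef_)  # the spurious "ruler" feature gets most of the weight

# At deployment, rulers no longer track malignancy, and performance drops sharply.
signal_new = rng.normal(size=n)
malignant_new = (signal_new + rng.normal(scale=2.0, size=n)) > 0
ruler_new = (rng.random(n) < 0.5).astype(float)
X_new = np.column_stack([signal_new, ruler_new])
print("accuracy on new data:", model.score(X_new, malignant_new))
```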
Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.
We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.
Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers.
Other considerations that may weigh against AI use are its environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.
Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.
However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.
Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
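As a rough sketch of that contrast, assuming the scikit-learn library and an invented dataset, the example below fits a traditional regression on analyst-chosen factors and a gradient-boosted model that searches the same columns for whatever patterns predict best; the second fits far better but offers no human-readable explanation.

```python
# Rough sketch of the contrast described above; the dataset and column names are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
data = {
    "age": rng.uniform(20, 80, n),
    "dose": rng.uniform(0, 10, n),
    "clinic_id": rng.integers(0, 5, n).astype(float),
}
# Hidden truth: the outcome depends on a nonlinear interaction no analyst hypothesized.
outcome = np.sin(data["dose"]) * (data["age"] > 50) + 0.1 * rng.normal(size=n)

X = np.column_stack(list(data.values()))

# Traditional approach: the analyst picks a model form (linear in chosen factors).
linear = LinearRegression().fit(X, outcome)

# Machine learning approach: the model searches for whatever patterns predict best.
ml = GradientBoostingRegressor().fit(X, outcome)

print("linear R^2:", round(linear.score(X, outcome), 2))
print("boosted-trees R^2:", round(ml.score(X, outcome), 2))
# The boosted model fits far better, but its internal rules are not an explanation.
```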
To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.
Researchers are using AI to discover better alternatives to today's lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. Now, AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.
AI Advancements in Scientific and Medical Research
AI tools can also help facilitate weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.
For example:
Researchers are using AI to help develop new medical treatments:
AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:
When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:
It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it has been hard won knowledge that ethics are a vital step in work like this.
Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.
It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.


We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.
Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.
Today, the same argument is being recycled against AI. At its core, the question is whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works.
U.S. courts have long recognized that copying for purposes of analysis, indexing, and learning is a classic fair use. That principle didn’t originate with artificial intelligence. It doesn’t disappear just because the processes are performed by a machine.
Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That’s why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don’t substitute for the original works; they enable new forms of knowledge and expression.
Training AI models fits squarely within that tradition. An AI system learns by analyzing patterns across many works. The purpose of that copying is not to reproduce or replace the original texts, but to extract statistical relationships that allow the AI system to generate new outputs. That is the hallmark of a transformative use.
Attacking AI training on copyright grounds misunderstands what’s at stake. If copyright law is expanded to require permission for analyzing or learning from existing works, the damage won’t be limited to generative AI tools. It could threaten long-standing practices in machine learning and text-and-data mining that underpin research in science, medicine, and technology.
Researchers already rely on fair use to analyze massive datasets such as scientific literature. Requiring licenses for these uses would often be impractical or impossible, and it would advantage only the largest companies with the money to negotiate blanket deals. Fair use exists to prevent copyright from becoming a barrier to understanding the world. The law has protected learning before. It should continue to do so now, even when that learning is automated.
One court has already shown how these cases should be analyzed. In Bartz v. Anthropic, the court found that using copyrighted works to train an AI model is a highly transformative use. Training is a way of studying how language works, not a means of reproducing or supplanting the original books. Any harm to the market for the original works was speculative.
The court in Bartz rejected the idea that an AI model might infringe because, in some abstract sense, its output competes with existing works. While EFF disagrees with other parts of the decision, the court’s ruling on AI training and fair use offers a good approach. Courts should focus on whether training is transformative and non-substitutive, not on fear-based speculation about how a new tool could affect someone’s market share.
Workers’ concerns about automation and displacement are real and should not be ignored. But copyright is the wrong tool to address them. Managing economic transitions and protecting workers during turbulent times are core functions of government. Copyright law doesn’t help with those tasks in the slightest. Expanding copyright control over learning and analysis won’t stop new forms of worker automation—it never has. But it will distort copyright law and undermine free expression.
Broad licensing mandates may also do harm by entrenching the current biggest incumbent companies. Only the largest tech firms can afford to negotiate massive licensing deals covering millions of works. Smaller developers, research teams, nonprofits, and open-source projects will all get locked out. Copyright expansion won’t restrain Big Tech—it will give it a new advantage.
Learning from prior work is foundational to free expression. Rightsholders cannot be allowed to control it. Courts have rejected that move before, and they should do so again.
Search, indexing, and analysis didn’t destroy creativity. Nor did the photocopier, nor the VCR. They expanded speech, access to knowledge, and participation in culture. Artificial intelligence raises hard new questions, but fair use remains the right starting point for thinking about training.


We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
In the Netflix/Spotify/Amazon era, many of us access copyrighted works purely in digital form – and that means we rarely have the chance to buy them. Instead, we are stuck renting them, subject to all kinds of terms and conditions. And because the content is digital, reselling it, lending it, even preserving it for your own use inevitably requires copying. Unfortunately, when it comes to copying digital media, US copyright law has pretty much lost the plot.
As we approach the 50th anniversary of the 1976 Copyright Act, the last major overhaul of US copyright law, we're not the only ones wondering if it's time for the next one. It's a high-risk proposition, given the wealth and influence of entrenched copyright interests who will not hesitate to send carefully selected celebrities to argue for changes that will send more money into fewer pockets for longer terms. But it's equally clear that the law hasn't kept up, and nowhere is that more evident than in the waning influence of Section 109, aka the first sale doctrine.
First sale—the principle that once you buy a copyrighted work you have the right to re-sell it, lend it, hide it under the bed, or set it on fire in protest—is deeply rooted in US copyright law. Indeed, in an era where so many judges are looking to the Framers for guidance on how to interpret current law, it’s worth noting that the first sale principles (also characterized as “copyright exhaustion”) can be found in the earliest copyright cases and applied across the rights in the so-called “copyright bundle.”
Unfortunately, courts have held that first sale, at least as it was codified in the Copyright Act, only applies to distribution, not reproduction. So even if you want to copy a rented digital textbook to a second device, and you go through the trouble of deleting it from the first device, the doctrine does not protect you.
We're all worse off as a result. Our access to culture, from hit songs to obscure indie films, is mediated by the whims of major corporations. With physical media, the first sale principle built bustling secondhand markets, community swaps, and libraries—places where culture can be shared and celebrated, while making it more affordable for everyone.
And while these new subscription or rental services have an appealing upfront cost, they come with a lot more precarity. If you love rewatching a show, you may be chasing it between services or find it is suddenly unavailable on any platform. Or, as fans of Mad Men or Buffy the Vampire Slayer know, you could be stuck with a terrible remaster as the only digital version available.
Last year we saw one improvement with California Assembly Bill 2426 taking effect. In California, companies must now at least disclose to potential customers if a "purchase" is a revocable license—i.e., if they can blow it up after you pay. A story driving this change was Ubisoft revoking access to "The Crew" and making customers' copies unplayable a decade after launch.
On the federal level, EFF, Public Knowledge, and 15 other public interest organizations backed Sen. Ron Wyden’s message to the FTC to similarly establish clear ground rules for digital ownership and sales of goods. Unfortunately FTC Chairman Andrew Ferguson has thus far turned down this easy win for consumers.
As for the courts, some scholars think they have just gotten it wrong. We agree, but it appears we need Congress to set them straight. The Copyright Act might not need a complete overhaul, but Section 109 certainly does. The current version hurts consumers, artists, and the millions of ordinary people who depend on software and digital works every day for entertainment, education, transportation, and, yes, to grow our food.
We realize this might not be the most urgent problem Congress confronts in 2026—to be honest, we wish it was—but it’s a relatively easy one to solve. That solution could release a wave of new innovation, and equally importantly, restore some degree of agency to American consumers by making them owners again.

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Copyright owners increasingly claim more draconian copyright law and policy will fight back against big tech companies. In reality, copyright gives the most powerful companies even more control over creators and competitors. Today’s copyright policy concentrates power among a handful of corporate gatekeepers—at everyone else’s expense. We need a system that supports grassroots innovation and emerging creators by lowering barriers to entry—ultimately offering all of us a wider variety of choices.
Pro-monopoly regulation through copyright won’t provide any meaningful economic support for vulnerable artists and creators. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is like trying to help a bullied kid by giving them more lunch money for the bully to take.
Entertainment companies' historical practices bear out this concern. For example, in the late 2000s to mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There's no reason to think that these same companies would treat their artists more fairly now.
In the AI era, copyright may seem like a good way to prevent big tech from profiting from AI at individual creators' expense—it's not. In fact, the opposite is true. Developing a large language model requires developers to train the model on millions of works. Requiring developers to license enough AI training data to build a large language model would limit competition to all but the largest corporations—those that either have their own trove of training data or can afford to strike a deal with one that does. This would result in all the usual harms of limited competition, like higher costs, worse service, and heightened security risks. New, beneficial AI tools that allow people to express themselves or access information would be far less likely to emerge.
Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, the first of many copyright lawsuits over the use of works to train AI. ROSS Intelligence was a legal research startup that built an AI-based tool to compete with ubiquitous legal research platforms like Lexis and Thomson Reuters' Westlaw. ROSS trained its tool using "West headnotes" that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call "holdings") that the headnotes identified. The tool didn't output any of the headnotes, but Thomson Reuters sued ROSS anyway. A federal appeals court is still considering the key copyright issues in the case—which EFF weighed in on last year. EFF hopes that the appeals court will reject this overbroad interpretation of copyright law. But in the meantime, the case has already forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.
Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. The cost of licensing enough works to train an LLM would be prohibitively expensive for most would-be competitors.
The Digital Millennium Copyright Act’s “anti-circumvention” provision is another case in point. Congress ostensibly passed the DMCA to discourage would-be infringers from defeating Digital Rights Management (DRM) and other access controls and copy restrictions on creative works.
In practice, it’s done little to deter infringement—after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It’s been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards.)
Giving rights holders a veto on new competition and innovation hurts consumers. Instead, we need balanced copyright policy that rewards creativity without impeding competition.

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
There’s a crisis of creativity in mainstream American culture. We have fewer and fewer studios and record labels and fewer and fewer platforms online that serve independent artists and creators.
At its core, copyright is a monopoly right on creative output and expression. It’s intended to allow people who make things to make a living through those things, to incentivize creativity. To square the circle that is “exclusive control over expression” and “free speech,” we have fair use.
However, we aren’t just seeing artists having a time-limited ability to make money off of their creations. We are also seeing large corporations turn into megacorporations and consolidating huge stores of copyrights under one umbrella. When the monopoly right granted by copyright is compounded by the speed and scale of media company mergers, we end up with a crisis in creativity.
People have been complaining about the lack of originality in Hollywood for a long time. What is interesting is that the response from the major studios has rarely been, especially recently, to invest in original programming. Instead, they have increased their copyright holdings through mergers and acquisitions. In today's consolidated media world, copyright is doing the opposite of its intended purpose: instead of encouraging creativity, it's discouraging it. The drive to snap up media franchises (or "intellectual properties") that can generate sequels, reboots, spinoffs, and series for years to come has crowded out truly original and fresh creativity in many sectors. And since copyright terms last so long, there isn't even a ticking clock to force these corporations to seek out new original creations.
In theory, the internet should provide a counterweight to this problem by lowering barriers to entry for independent creators. But as online platforms for creativity likewise shrink in number and grow in scale, they have closed ranks with the major studios.
It’s a betrayal of the promise of the internet: that it should be a level playing field where you get to decide what you want to do, watch, listen to, read. And our government should be ashamed for letting it happen.

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Imagine every post online came with a bounty of up to $150,000 paid to anyone who finds it violates opaque government rules—all out of the pocket of the platform. Smaller sites could be snuffed out, and big platforms would avoid crippling liability by aggressively blocking, taking down, and penalizing speech that even possibly violates these rules. In turn, users would self-censor, and opportunists would turn accusations into a profitable business.
This dystopia isn't a fantasy; it's close to how U.S. copyright's broken statutory damages regime actually works.
Copyright includes "statutory damages,” which means letting a jury decide how big of a penalty the defendant will have to pay—anywhere from $200 to $150,000 per work—without the jury necessarily seeing any evidence of actual financial losses or illicit profits. In fact, the law gives judges and juries almost no guidelines on how to set damages. This is a huge problem for online speech.
One way or another, everyone builds on the speech of others when expressing themselves online: quoting posts, reposting memes, sharing images from the news. For some users, re-use is central to their online expression: parodists, journalists, researchers, and artists use others’ words, sounds, and images as part of making something new every day. Both these users and the online platforms they rely on risk unpredictable, potentially devastating penalties if a copyright holder objects to some re-use and a court disagrees with the user’s well-intentioned efforts.
On Copyright Week, we like to talk about ways to improve copyright law. One of the most important would be to fix U.S. copyright’s broken statutory damages regime. In other areas of civil law, the courts have limited jury-awarded punitive damages so that they can’t be far higher than the amount of harm caused. Extremely large jury awards for fraud, for example, have been found to offend the Constitution’s Due Process Clause. But somehow, that’s not the case in copyright—some courts have ruled that Congress can set damages that are potentially hundreds of times greater than actual harm.
Massive, unpredictable damages awards for copyright infringement, such as a $222,000 penalty for sharing 24 music tracks online, are the fuel that drives overzealous or downright abusive takedowns of creative material from online platforms. Capricious and error-prone copyright enforcement bots, like YouTube’s Content ID, were created in part to avoid the threat of massive statutory damages against the platform. Those same damages create an ever-present bias in favor of major rightsholders and against innocent users in the platforms’ enforcement decisions. And they stop platforms from addressing the serious problems of careless and downright abusive copyright takedowns.
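To put that figure in perspective: $222,000 spread across 24 tracks works out to $9,250 per song, and assuming a typical download sells for roughly a dollar, that is on the order of nine thousand times the price of the work actually shared.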
By turning litigation into a game of financial Russian roulette, statutory damages also discourage artistic and technological experimentation at the boundaries of fair use. None but the largest corporations can risk ruinous damages if a well-intentioned fair use crosses the fuzzy line into infringement.
“But wait”, you might say, “don’t legal protections like fair use and the safe harbors of the Digital Millennium Copyright Act protect users and platforms?” They do—but the threat of statutory damages makes that protection brittle. Fair use allows for many important re-uses of copyrighted works without permission. But fair use is heavily dependent on circumstances and can sometimes be difficult to predict when copyright is applied to new uses. Even well-intentioned and well-resourced users avoid experimenting at the boundaries of fair use when the cost of a court disagreeing is so high and unpredictable.
Many reforms are possible. Congress could limit statutory damages to a multiple of actual harm. That would bring U.S. copyright in line with other countries, and with other civil laws like patent and antitrust. Congress could also make statutory damages unavailable in cases where the defendant has a good-faith claim of fair use, which would encourage creative experimentation. Fixing statutory damages would make many of the other problems in copyright law more easily solvable, and create a fairer system for creators and users alike.


This year, we fought back against the return of a terrible idea that hasn’t improved with age: site blocking laws.
More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved.
But we’ve never believed they were gone for good. The major media and entertainment companies that backed site blocking in the US in 2012 turned to pushing for site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. And sure enough, the Motion Picture Association (MPA) and its allies have asked Congress to try again.
There were no fewer than three Congressional drafts of site-blocking legislation. Representative Zoe Lofgren kicked off the year with the Foreign Anti-Digital Piracy Act (FADPA). Fellow House member Darrell Issa also claimed to be working on a bill that would make it offensively easy for a studio to block your access to a website based solely on the belief that there is infringement happening. Not to be left out, the Senate Judiciary Committee produced the terribly named Block BEARD Act.
None of these three attempts to fundamentally alter the way you experience the internet moved very far after their press releases. But their number tells us that there is, once again, an appetite among major media conglomerates and politicians to resurrect SOPA/PIPA from the dead.
None of these proposals fixes the flaws of SOPA/PIPA, and none ever could. Site blocking is a flawed idea and a disaster for free expression that no amount of rewriting will fix. There is no way to create a fast lane for removing your access to a website that is not a major threat to the open web. Just as we opposed SOPA/PIPA over ten years ago, we oppose these efforts.
Site blocking bills seek to build a new infrastructure of censorship into the heart of the internet. They would enable court orders directed to the organizations that make the internet work, like internet service providers, domain name resolvers, and reverse proxy services, compelling them to help block US internet users from visiting websites accused of copyright infringement. The technical means haven't changed much since 2012: they involve blocking Internet Protocol addresses or domain names of websites. These methods are blunt—sledgehammers rather than scalpels. Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Italy, South Korea, France, and in the US, to name just a few.
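As a quick illustration of why IP-level blocking is such a blunt instrument, here is a minimal sketch (the domain list is a placeholder) that resolves a handful of domains and groups them by the address they share; a blocking order aimed at any one of those addresses would sweep in every site behind it.

```python
# Minimal sketch: many unrelated domains can sit behind the same IP address
# (shared hosting, CDNs, reverse proxies), so an IP block hits them all.
# The domain list below is illustrative; substitute any domains you want to check.
import socket
from collections import defaultdict

domains = [
    "example.com",
    "example.net",
    "example.org",
]

by_ip = defaultdict(list)
for name in domains:
    try:
        ip = socket.gethostbyname(name)
        by_ip[ip].append(name)
    except socket.gaierror:
        print(f"could not resolve {name}")

for ip, names in by_ip.items():
    if len(names) > 1:
        print(f"{ip} is shared by: {', '.join(names)}  <- one block order affects all of these")
    else:
        print(f"{ip} serves only {names[0]}")
```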
Given this downside, one would think the benefits of copyright enforcement from these bills ought to be significant. But site blocking is trivially easy to evade. Determined site owners can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online.
The limits that lawmakers have proposed to put on these laws are an illusion. While ostensibly aimed at “foreign” websites, they sweep in any website that doesn’t conspicuously display a US origin, putting anonymity at risk. And despite the rhetoric of MPA and others that new laws would be used only by responsible companies against the largest criminal syndicates, laws don’t work that way. Massive new censorship powers invite abuse by opportunists large and small, and the costs to the economy, security, and free expression are widely borne.
It’s time for Big Media and its friends in Congress to drop this flawed idea. But as long as they keep bringing it up, we’ll keep on rallying internet users of all stripes to fight it.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

A functioning patent system depends on one basic principle: bad patents must be challengeable. In 2025, that principle was repeatedly tested—by Congress, by the U.S. Patent and Trademark Office (USPTO), and by a small number of large patent owners determined to weaken public challenges.
Two damaging bills, PERA and PREVAIL, were reintroduced in Congress. At the same time, USPTO attempted a sweeping rollback of inter partes review (IPR), one of the most important mechanisms for challenging wrongly granted patents.
EFF pushed back—on Capitol Hill, inside the Patent Office, and alongside thousands of supporters who made their voices impossible to ignore.
The Patent Eligibility Restoration Act, or PERA, would overturn the Supreme Court’s Alice and Myriad decisions—reviving patents on abstract software ideas, and even allowing patents on isolated human genes. PREVAIL, introduced by the same main sponsors in Congress, would seriously weaken the IPR process by raising the burden of proof, limiting who can file challenges, forcing petitioners to surrender court defenses, and giving patent owners new ways to rewrite their claims mid-review.
Together, these bills would have dismantled much of the progress made over the last decade.
We reminded Congress that abstract software patents—like those we’ve seen on online photo contests, upselling prompts, matchmaking, and scavenger hunts—are exactly the kind of junk claims patent trolls use to threaten creators and small developers. We also pointed out that if PREVAIL had been law in 2013, EFF could not have brought the IPR that crushed the so-called “podcasting patent.”
EFF’s supporters amplified our message, sending thousands of messages to Congress urging lawmakers to reject these bills. The result: neither bill advanced to the full committee. The effort to rewrite patent law behind closed doors stalled out once public debate caught up with it.
Congress’ push from the outside was stymied, at least for now. Unfortunately, what may prove far more effective is the push from within by new USPTO leadership, which is working to dismantle systems and safeguards that protect the public from the worst patents.
Early in the year, the Patent Office signaled it would once again lean more heavily on procedural denials, reviving an approach that allowed patent challenges to be thrown out basically whenever there was an ongoing court case involving the same patent. But the most consequential move came later: a sweeping proposal unveiled in October that would make IPR nearly unusable for those who need it most.
2025 also marked a sharp practical shift inside the agency. Newly appointed USPTO Director John Squires took personal control of IPR institution decisions, and rejected all 34 of the first IPR petitions that came across his desk. As one leading patent blog put it, an “era of no” has been ushered in at the Patent Office.
The USPTO’s proposed rule changes would:
These changes wouldn’t “balance” the system as USPTO claims—they would make bad patents effectively untouchable. Patent trolls and aggressive licensors would be insulated, while the public would face higher costs and fewer options to fight back.
We sounded the alarm on these proposed rules and asked supporters to register their opposition. More than 4,000 of you did—thank you! Overall, more than 11,000 comments were submitted. An analysis of the comments shows that stakeholders and the public overwhelmingly oppose the proposal, with 97% of comments weighing in against it.
In those comments, small business owners described being hit with vague patents they could never afford to fight in court. Developers and open-source contributors explained that IPR is often the only realistic check on bad software patents. Leading academics, patient-advocacy groups, and major tech-community institutions echoed the same point: you cannot issue hundreds of thousands of patents a year and then block one of the only mechanisms that corrects the mistakes.
The Linux Foundation warned that the rules “would effectively remove IPRs as a viable mechanism” for developers.
GitHub emphasized the increased risk and litigation cost for open-source communities.
Twenty-two patent law professors called the proposal unlawful and harmful to innovation.
Patients for Affordable Drugs detailed the real-world impact of striking invalid pharmaceutical patents, showing that drug prices can plummet once junk patents are removed.
The USPTO now faces thousands of substantive comments. Whether the agency backs off or tries to push ahead, EFF will stay engaged. Congress may also revisit PERA, PREVAIL, or similar proposals next year. Some patent owners will continue to push for rules that shield low-quality patents from any meaningful review.
But 2025 proved something important: When people understand how patent abuse affects developers, small businesses, patients, and creators, they show up—and when they do, their actions can shape what happens next.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

State legislatures—from Olympia, WA, to Honolulu, HI, to Tallahassee, FL, and everywhere in between—kept EFF’s state legislative team busy throughout 2025.
We saw some great wins and steps forward this year. Washington became the eighth state to enshrine the right to repair. Several states stepped up to protect the privacy of location data, with bills recognizing your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. Other state legislators moved to protect health privacy. And California passed a law making it easier for people to exercise their privacy rights under the state’s consumer data privacy law.
Several states also took up debates around how to legislate and regulate artificial intelligence and its many applications. We'll continue to work with allies in states including California and Colorado on proposals that address the real harms from some uses of AI, without infringing on the rights of creators and individual users.
We’ve also fought some troubling bills in states across the country this year. In April, Florida introduced a bill that would have created a backdoor for law enforcement to have easy access to messages if minors use encrypted platforms. Thankfully, the Florida legislature did not pass the bill this year. But it should set off serious alarm bells for anyone who cares about digital rights. And it was just one of a growing set of bills from states that, even when well-intentioned, threaten to take a wrecking ball to privacy, expression, and security in the name of protecting young people online.
Take, for example, the burgeoning number of age verification, age gating, age assurance, and age estimation bills. Instead of making the internet safer for children, these laws can incentivize or intersect with existing systems that collect vast amounts of data to force all users—regardless of age—to verify their identity just to access basic content or products. South Dakota and Wyoming, for example, are requiring any website that hosts any sexual content to implement age verification measures. But, given the way those laws are written, that definition could include essentially any site that allows user-generated or published content without age-based gatekeeping access. That could include everyday resources such as social media networks, online retailers, and streaming platforms.
Lawmakers, not satisfied with putting age gates on the internet, are also increasingly going after VPNs (virtual private networks) to prevent anyone from circumventing these new digital walls. VPNs are not foolproof tools—and they shouldn’t be necessary to access legally protected speech—but they should be available to people who want to use them. We will continue to stand against these types of bills, not just for the sake of free expression, but to protect the free flow of information essential to a free society.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

A tidal wave of copyright lawsuits against AI developers threatens beneficial uses of AI, like creative expression, legal research, and scientific advancement. How courts decide these cases will profoundly shape the future of this technology, including its capabilities, its costs, and whether its evolution will be shaped by the democratizing forces of the open market or the whims of an oligopoly. As these cases finished their trials and moved to appeals courts in 2025, EFF intervened to defend fair use, promote competition, and protect everyone’s rights to build and benefit from this technology.
At the same time, rightsholders stepped up their efforts to control fair uses through everything from state AI laws to technical standards that influence how the web functions. In 2025, EFF fought policies that threaten the open web in the California State Legislature, the Internet Engineering Task Force, and beyond.
Copyright lawsuits against AI developers often follow a similar pattern: plaintiffs argue that use of their works to train the models was infringement and then developers counter that their training is fair use. While legal theories vary, the core issue in many of these cases is whether using copyrighted works to train AI is a fair use.
We think that it is. Courts have long recognized that copying works for analysis, indexing, or search is a classic fair use. That principle doesn’t change because a statistical model is doing the reading. AI training is a legitimate, transformative fair use, not a substitute for the original works.
More importantly, expanding copyright would do more harm than good: while creators have legitimate concerns about AI, expanding copyright won’t protect jobs from automation. But overbroad licensing requirements risk entrenching Big Tech’s dominance, shutting out small developers, and undermining fair use protections for researchers and artists. Copyright is a tool that gives the most powerful companies even more control—not a check on Big Tech. And attacking the models and their outputs by attacking training—i.e. “learning” from existing works—is a dangerous move. It risks a core principle of freedom of expression: that training and learning—by anyone—should not be endangered by restrictive rightsholders.
In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies, but in 2025, things began to speed up.
But some cases have already reached courts of appeal. We advocated for fair use rights and sensible limits on copyright in amicus briefs filed in Doe v. GitHub, Thomson Reuters v. Ross Intelligence, and Bartz v. Anthropic, three early AI copyright appeals that could shape copyright law and influence dozens of other cases. We also filed an amicus brief in Kadrey v. Meta, one of the first decisions on the merits of the fair use defense in an AI copyright case.
How the courts decide the fair use questions in these cases could profoundly shape the future of AI—and whether legacy gatekeepers will have the power to control it. As these cases move forward, EFF will continue to defend your fair use rights.
Rightsholders also tried to make an end-run around fair use by changing the technical standards that shape much of the internet. The IETF, an Internet standards body, has been developing technical standards that pose a major threat to the open web. These proposals would let websites express "preference signals" against certain uses of scraped data—effectively giving them veto power over fair uses like AI training and web search.
Overly restrictive preference signaling threatens a wide range of important uses—from accessibility tools for people with disabilities to research efforts aimed at holding governments accountable. Worse, the IETF is dominated by publishers and tech companies seeking to embed their business models into the infrastructure of the internet. These companies aren’t looking out for the billions of internet users who rely on the open web.
That’s where EFF comes in. We advocated for users’ interests in the IETF, and helped defeat the most dangerous aspects of these proposals—at least for now.
The AI copyright battles of 2025 were never just about compensation—they were about control. EFF will continue working in courts, legislatures, and standards bodies to protect creativity and innovation from copyright maximalists.

Since the earliest days of computer games, people have tinkered with the software to customize their own experiences or share their vision with others. From the dad who changed a game's male protagonist to a girl so his daughter could see herself in it, to the developers who got their start in modding, games have been a medium where you don't just consume a product, you participate in and interact with culture.
For decades, that participatory experience was a key part of one of the longest-running video games still in operation: EverQuest. Players had the official client, acquired lawfully from EverQuest's developers, and modders figured out how to enable those clients to communicate with their own servers and then modify their play experience – creating new communities along the way.
EverQuest's copyright owners implicitly blessed all this. But the current owners, a private equity firm called Daybreak, want to end that independent creativity. They are using copyright claims to threaten modders who wanted to customize the EverQuest experience to suit a different playstyle, running their own servers where things worked the way they wanted.
One project in particular is in Daybreak's crosshairs: "The Hero's Journey" (THJ). Daybreak claims THJ has infringed its copyrights in EverQuest visuals and characters, cutting into its bottom line.
Ordinarily, when a company wants to remedy some actual harm, its lawyers will start with a cease-and-desist letter and potentially pursue a settlement. But if the goal is intimidation, a rightsholder is free to go directly to federal court and file a complaint. That’s exactly what Daybreak did, using that shock-and-awe approach to cow not only The Hero’s Journey team, but unrelated modders as well.
Daybreak’s complaint seems to have dazzled the judge in the case by presenting side-by-side images of dragons and characters that look identical in the base game and when using the mod, without explaining that these images are the ones provided by EverQuest’s official client, which players have lawfully downloaded from the official source. The judge wound up short-cutting the copyright analysis and issuing a ruling that has proven devastating to the thousands of players who are part of EverQuest modding communities.
Daybreak and the developers of The Hero’s Journey are now in private arbitration, and Daybreak has wasted no time in sending that initial ruling to other modders. The order doesn’t bind anyone who’s unaffiliated with The Hero’s Journey, but it’s understandable that modders who are in it for fun and community would cave to the implied threat that they could be next.
As a result, dozens of fan servers have stopped operating. Daybreak has also persuaded the maintainers of the shared server emulation software that most fan servers rely upon, EQEmulator, to adopt terms of service that essentially ban any but the most negligible modding. The terms also provide that “your operation of an EQEmulator server is subject to Daybreak’s permission, which it may revoke for any reason or no reason at any time, without any liability to you or any other person or entity. You agree to fully and immediately comply with any demand from Daybreak to modify, restrict, or shut down any EQEmulator server.”
This is sadly not even an uncommon story in fanspaces—from the dustup over changes to the Dungeons and Dragons open gaming license to the “guidelines” issued by CBS for Star Trek fan films, we see new generations of owners deciding to alienate their most avid fans in exchange for more control over their new property. It often seems counterintuitive—fans are creating new experiences, for free, that encourage others to get interested in the original work.
Daybreak can claim a shameful victory: it has imposed unilateral terms on the modding community that are far more restrictive than what fair use and other user rights would allow. In the process, it is alienating the very people it should want to cultivate as customers: hardcore EverQuest fans. If it wants fans to continue to invest in making its games appeal to broader audiences and serve as testbeds for game development and sources of goodwill, it needs to give the game's fans room to breathe and to play.
If you’ve been a target of Daybreak’s legal bullying, we’d love to hear from you; email us at info@eff.org.


The state of streaming is... bad. It’s very bad. The first step in wanting to watch anything is a web search: “Where can I stream X?” Then you have to scroll past an AI summary with no answers, and then scroll past the sponsored links. After that, you find out that the thing you want to watch was made by a studio that doesn’t exist anymore or doesn’t have a streaming service. So, even though you subscribe to more streaming services than you could actually name, you will have to buy a digital copy to watch. A copy that, despite paying for it specifically, you do not actually own and might vanish in a few years.
Then, after you paid to see something multiple times in multiple ways (theater ticket, VHS tape, DVD, etc.), the mega-corporations behind this nightmare will try to get Congress to pass laws to ensure you keep paying them. In the end, this is easier than making a product that works. Or, as someone put it on social media, these companies have forgotten “that their entire existence relies on being slightly more convenient than piracy.”
It’s important to recognize this as we see more and more media mergers. These mergers are not about quality, they’re about control.
In the old days, studios made a TV show. If the show was a hit, they increased how much they charged companies to place ads during the show. And if the show was a hit for long enough, they sold syndication rights to another channel. Then people could discover the show again, and maybe come back to watch it air live. In that model, the goal was to spread access to a program as much as possible to increase viewership and the number of revenue streams.
Now, in the digital age, studios have picked up a Silicon Valley trait: putting all their eggs into the basket of “increasing the number of users.” To do that, they have to create scarcity. There has to be only one destination for the thing you’re looking for, and it has to be their own. And you shouldn’t be able to control the experience at all. They should.
They’ve also moved away from creating buzzy new exclusives to get you to pay them. That requires risk and also, you know, paying creative people to make them. Instead, they’re consolidating.
Media companies keep announcing mergers and acquisitions. They’ve been doing it for a long time, but it’s really ramped up in the last few years. And these mergers are bad for all the obvious reasons. There are the speech and censorship reasons that came to a head in, of all places, late night television. There are the labor issues. There are the concentration-of-power issues. There is the obvious problem that the fewer studios exist, the fewer chances good art has to escape Hollywood and make it to our eyes and ears. But when it comes specifically to digital life, there are two more: consumer experience and ownership.
First, the more content that comes under a single corporation’s control, the more they expect you to come to them for it. And the more they want to charge. And because there is less competition, the less they need to work to make their streaming app usable. They then enforce their hegemony by using the draconian copyright restrictions they’ve lobbied for to cripple smaller competitors, critics, and fair use.
When everything is either Disney or NBCUniversal or Warner Brothers-Discovery-Paramount-CBS and everything is totally siloed, what need will they have to spend money improving any part of their product? Making things is hard; stopping others from proving how bad you are is easy, thanks to how broken copyright law is.
Furthermore, because every company is chasing subscriber growth instead of multiple revenue streams, they have an interest in preventing you from ever again “owning” a copy of a work. Re-selling you the same work was always sort of part of the business plan, but a) it happened maybe once every couple of years, b) the new copy at least came, in theory, with some new features or enhanced quality, and c) you actually owned the copy you paid for. Now they want you to pay every month for access to the same copy. And, hey, the price is going to keep going up the fewer options you have. Or you will see more ads. Or start seeing ads where there weren’t any before.
On the one hand, the increasing dependence on direct subscriber numbers does give users back some power. Jimmy Kimmel’s reinstatement by ABC came about in part because the company was about to announce a price hike for Disney+ and couldn’t afford to lose subscribers to both the new price and the popular outrage over Kimmel’s treatment.
On the other hand, well, there's everything else.
The latest kerfuffle is over the sale of Warner Brothers-Discovery, a company that was already the subject of the sale and merger that produced the hyphen. Netflix was competing against another recently merged media megazord, Paramount Skydance.
Warner Brothers-Discovery accepted a bid from Netflix, enraging Paramount Skydance, which has now launched a hostile takeover.
Now the optimum outcome is for neither of these takeovers to happen. There are already too few players in Hollywood. It does nothing for the health of the industry to allow either merger. A functioning antitrust regime would stop both the sale and the hostile takeover attempt, full stop. But Hollywood and the federal government are frequent collaborators, and the feds have little incentive to stop Hollywood’s behemoths from growing even further, as long as they continue to play their role pushing a specific view of American culture.
The promise of the digital era was in part convenience. You never again had to look at TV listings to find out when something would be airing. Virtually unlimited digital storage meant everything would be at your fingertips. But then the corporations went to work to make sure it never happened. And with each and every merger, that promise gets further and further away.
Note 12/10/2025: One line in this blog has been modified a few hours post-publication. The substance remains the same.

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and impose steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.
The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.
EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.
TELL CONGRESS: THE GUARD ACT WON'T KEEP US SAFE
The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.
The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.
By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life.
The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.
Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.
Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.
EFF has long documented the dangers of age-verification systems.
As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.
Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses—including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.
The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.
Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.
Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.
While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.
In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to do it.
The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.
Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.
TELL CONGRESS: OPPOSE THE GUARD ACT

Generative AI is like a Rorschach test for anxieties about technology, be they privacy, replacement of workers, bias and discrimination, surveillance, or intellectual property. Our panelists discuss how to address complex questions and risks in AI while protecting civil liberties and human rights online.
Join EFF Director of Policy and Advocacy Katharine Trendacosta, EFF Staff Attorney Tori Noble, Berkeley Center for Law & Technology Co-Director Pam Samuelson, and Icarus Salon Artist Şerife Wong for a live discussion with Q&A.
This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.
EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.
Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague who might be interested, please invite them to join the fight for digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online.
We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

EFF usually warns of new horrors threatening your rights online, but this Halloween we’ve summoned a few of our own we’d like to share. Our new Signal Sticker Pack highlights some creatures—both mythical and terrifying—conjured up by our designers for you to share this spooky season.
If you’re new to Signal, it’s a free and secure messaging app built by the nonprofit Signal Foundation, which is at the forefront of defending user privacy. While chatting privately, you can add some seasonal flair with Signal Stickers, and rest assured: friends receiving them get the full sticker pack fully encrypted, safe from prying eyes and lurking spirits.
On any mobile device or desktop with the Signal app installed, you can simply click the button below.
Download EFF's Signal Stickers to share Frights and Rights
You can also paste the sticker link directly into a Signal chat, then tap it to download the pack directly to the app.
Once they’re installed, the stickers are even easier to share—simply open a chat, tap the sticker menu on your keyboard, and send one of EFF’s spooky stickers. The recipient will then be asked whether they’d like the sticker pack too.
All of this works without any third parties knowing what sticker packs you have or whom you shared them with. Our little ghosts and ghouls are just between us.

These familiar champions of digital rights—The Encryptids—are back! Don’t let their monstrous looks fool you; each one advocates for privacy, security, and a dash of weirdness in their own way. Whether they’re shouting about online anonymity or the importance of interoperability, they’re ready to help you share your love for digital rights. Learn more about their stories here, and you can even grab a bigfoot pin to let everyone know that privacy is a “human” right.

On a cool autumn night, you might be on the lookout for ghosts and ghouls from your favorite horror flicks—but in the real world, there are far scarier monsters lurking in the dark: police surveillance technologies. Often hidden in plain sight, these tools quietly watch from the shadows and are hard to spot. That’s why we’ve given these tools the hideous faces they deserve in our Street-Level Surveillance Monsters series, ready to scare (and inform) your loved ones.

Ask any online creator and they’ll tell you: few things are scarier than a copyright takedown. From unfair DMCA claims and demonetization to frivolous lawsuits designed to intimidate people into a hefty payment, the creeping expansion of copyright can inspire as much dread as any monster on the big screen. That’s why this pack includes a few trolls and creeps straight from a broken copyright system—where profit haunts innovation.
To that end, all of EFF’s work (including these stickers) is released under an open CC-BY license, free for you to use and remix as you see fit.
These frights may disappear with your message, but the fights persist. That’s why we’re so grateful to EFF supporters for helping us make the digital world a little more weird and a little less scary. You can become a member today and grab some gear to show your support. Happy Halloween!

There is a lot of bad on the internet and it seems to only be getting worse. But one of the things the internet did well, and is worth preserving, is nontraditional paths for creativity, journalism, and criticism. As governments and major corporations throw up more barriers to expression—and more and more gatekeepers try to control the internet—it’s important to learn how to crash through those gates.
In EFF's interview series, Gate Crashing, we talk to people who have used the internet to take nontraditional paths to the very traditional worlds of journalism, creativity, and criticism. We hope it's both inspiring to see these people and enlightening for anyone trying to find voices they like online.
Our mini-series will drop an episode each month, closing out 2025 in style.
Be sure to mark your calendar or check our socials for drop dates. If you have a friend or colleague who might be interested in watching our series, please forward this link: eff.org/gatecrashing
For over 35 years, EFF members have empowered attorneys, activists, and technologists to defend civil liberties and human rights online for everyone.
Tech should be a tool for the people, and we need you in this fight.
* This interview was originally published in December 2024. No changes have been made.

Jimmy Kimmel has been in the news a lot recently, which means the ongoing lawsuit against him by perennial late-night punching bag/convicted fraudster/former congressman George Santos flew under the radar. But what happened in that case is an essential illustration of the limits of both copyright law and the “fine print” terms of service on websites and apps.
What happened was this: Kimmel and his staff saw that Santos was on Cameo, which lets people purchase short videos of public figures saying whatever the buyer requests. Usually it’s something like “happy birthday” or “happy retirement.” In the case of Kimmel and his writers, they set out to see if there was anything they couldn’t get Santos to say on Cameo. For this to work, they obviously didn’t disclose that it was Jimmy Kimmel Live! asking for the videos.
Santos did not like the segment, which aired clips of these videos, called “Will Santos Say It?”. He sued Kimmel, ABC, and ABC’s parent company, Disney. He alleged both copyright infringement and breach of contract—the contract in this case being Cameo’s terms of service. He lost on all counts, twice: his case was dismissed at the district court level, and then that dismissal was upheld by an appeals court.
On the copyright claim, Kimmel and Disney argued and won on the grounds of fair use. The court cited precedent that fair use excuses what might be strictly seen as infringement if such a finding would “stifle the very creativity” that copyright is meant to promote. In this case, the use of the videos was part of the ongoing commentary by Jimmy Kimmel Live! around whether there was anything Santos wouldn’t say for money. Santos tried to argue that since this was their purpose from the outset, the use wasn’t transformative. Which... isn’t how it works. Santos’ purpose was, presumably, to fulfill a request sent through the app. The show’s purpose was to collect enough examples of a behavior to show a pattern and comment on it.
Santos also argued that the show’s failure to disclose the reason for its requests invalidated the fair use defense because it was “deceptive.” But the court found that the record didn’t show that the deception was designed to replace the market for Santos’s Cameos. It bears repeating: commenting on the quality of a product or the person making it is not legally actionable interference with a business. If someone tells you that a movie, book, or, yes, Cameo isn’t worth anything because of its ubiquity or quality and shows you examples, that’s not a deceptive business practice. In fact, undercover quality checks and reviews are fairly standard practices! Is this a funnier and more entertaining example than a restaurant review? Yes. That doesn’t make it unprotected by fair use.
It’s nice to have this case as a reminder that, despite everything the major studios often argue, fair use protects everyone, including them. Don’t hold your breath on them remembering this the next time someone tries to make a YouTube review of a Hollywood movie using clips.
Another claim from this case that is less obvious but just as important involves the Cameo terms of service. We often see contracts being used to restrict people’s fair use rights. Cameo offers different kinds of videos for purchase. The most well-known kind, the “happy birthdays” and so on, comes with a personal-use license. Cameo also offers a “commercial” use license, presumably for when you want to use the videos to generate revenue, as with an ad or paid endorsement. In this case, however, the court found that the terms of service are a contract between a customer and Cameo, not between the customer and the video maker. Cameo’s terms explicitly lay out when they apply to the person selling a video, and they don’t create a situation where Santos can use those terms to sue Jimmy Kimmel Live! According to the court, the terms don’t even imply a shared understanding and contract between the two parties.
It's so rare to find a situation where the wall of text that most terms of service consist of actually helps protect free expression; it’s a pleasant surprise to see it here.
In general, we at EFF hate it when these kinds of contracts—you know the ones, where you hit accept after scrolling for ages just so you can use the app—are used to constrain users’ rights. Fair use is supposed to protect us all from overly strict interpretations of copyright law, but abusive terms of service can erode those rights. We’ll keep fighting for those rights and the people who use them, even if the one exercising fair use is Disney.
