
No Country Should be Making Speech Rules for the World

9 May 2024 at 15:38

It’s a simple proposition: no single country should be able to restrict speech across the entire internet. Any other approach invites a swift relay race to the bottom for online expression, giving governments and courts in countries with the weakest speech protections carte blanche to edit the internet.

Unfortunately, governments, including democracies that care about the rule of law, too often lose sight of this simple proposition. That’s why EFF, represented by Johnson Winter Slattery, has moved to intervene in support of X (formerly known as Twitter) in its legal challenge to a global takedown order from Australia’s eSafety Commissioner. The Commissioner ordered X and Meta to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post so Australian users couldn’t access it, but it declined to block it elsewhere. The Commissioner asked an Australian court to order a global takedown.
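
Geo-blocking of this kind is typically just a per-country visibility check: the platform maps each viewer’s IP address (or account settings) to a country and withholds the post only for viewers in the restricted region, while a global takedown removes the post for everyone. The short Python sketch below illustrates that distinction; the post ID, data structure, and function names are hypothetical, not X’s actual implementation.

# Hypothetical sketch: geo-blocking versus a global takedown.
BLOCKED_REGIONS = {"post_123": {"AU"}}  # post ID -> countries where it is withheld

def is_visible(post_id: str, viewer_country: str) -> bool:
    """Show the post unless the viewer is in a country where it is withheld."""
    return viewer_country not in BLOCKED_REGIONS.get(post_id, set())

assert not is_visible("post_123", "AU")  # geo-blocked for Australian viewers
assert is_visible("post_123", "US")      # still visible everywhere else

# A global takedown would instead delete the post entirely,
# hiding it from viewers in every country at once.

The dispute is over which of those two behaviors a single country’s regulator can compel.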

Our intervention calls the court’s attention to the important public interests at stake in this litigation, particularly for internet users who are not parties to the case but will nonetheless be affected by the precedent it sets. A ruling against X is effectively a declaration that an Australian court (or its eSafety Commissioner) can prevent internet users around the world from accessing something online, even if the law in their own country is quite different. In the United States, for example, the First Amendment guarantees that platforms generally have the right to decide what content they will host, and their users have a corollary right to receive it. 

We’ve seen this movie before. In Google v. Equustek, a company used a trade secret claim to persuade a Canadian court to order Google to remove, from Google.ca and all other Google domains (including Google.com and Google.co.uk), search results linking to sites selling allegedly infringing goods. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld the order. The following year, a U.S. court held that the Canadian order couldn’t be enforced against Google in the United States.

The Australian takedown order also ignores international human rights standards, restricting global access to information without considering less speech-intrusive alternatives. In other words: the Commissioner used a sledgehammer to crack a nut. 

If one court can impose speech-restrictive rules on the entire internet—despite direct conflicts with the laws of foreign jurisdictions as well as international human rights principles—the norms and expectations of all internet users are at risk. We’re glad X is fighting back, and we hope the judge will recognize the eSafety regulator’s demand for what it is—a big step toward unchecked global censorship—and refuse to let Australia set another dangerous precedent.

Congress Should Just Say No to NO FAKES

29 April 2024 at 16:21

There is a lot of anxiety around the use of generative artificial intelligence, some of it justified. But it seems like Congress thinks the highest priority is to protect celebrities – living or dead. Never fear, ghosts of the famous and infamous: the U.S. Senate is on it.

We’ve already explained the problems with the House’s approach, No AI FRAUD. The Senate’s version, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, isn’t much better.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies. It’s retroactive, meaning the post-mortem right would apply immediately to the heirs of, say, Prince, Tom Petty, or Michael Jackson, not to mention your grandmother.

Boosters talk a good game about protecting performers and fans from AI scams, but NO FAKES seems more concerned with protecting their bottom line. It expressly describes the new right as a “property right,” which matters because federal intellectual property rights are excluded from Section 230 protections. If courts decide the replica right is a form of intellectual property, NO FAKES will give people the ability to threaten platforms and companies that host allegedly unlawful content, which tend to have deeper pockets than the actual users who create that content. This will incentivize platforms that host our expression to be proactive in removing anything that might be a “digital replica,” whether its use is legal expression or not. And while the bill proposes a variety of exclusions for news, satire, biopics, criticism, and so on to limit the impact on free expression, interpreting and applying those exceptions is likely to make a lot of lawyers rich.

This “digital replica” right effectively federalizes—but does not preempt—state laws recognizing the right of publicity. Publicity rights are an offshoot of state privacy law that gives a person the right to limit the public use of her name, likeness, or identity for commercial purposes, and a limited version of that right makes sense. For example, if Frito-Lay uses AI to deliberately generate a voiceover for an advertisement that sounds like Taylor Swift, she should be able to challenge that use. The same should be true for you or me.

Trouble is, in several states the right of publicity has already expanded well beyond its original boundaries. It was once understood to be limited to a person’s name and likeness, but now it can mean just about anything that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. In some states, your heirs can invoke the right long after you are dead and, presumably, in no position to be embarrassed by any sordid commercial associations, or to care whether anyone believes you have actually endorsed a product from beyond the grave.

In other words, it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements, and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died. It is entirely divorced from the incentive structure behind intellectual property rights like copyright and patents—presumably no one needs a replica right, much less a post-mortem one, to invest in their own image, voice, or likeness. Instead, it effectively creates a windfall for people with a commercially valuable recent ancestor, even if that value emerges long after they died.

What is worse, NO FAKES doesn’t offer much protection for those who need it most. People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image. Savvy commercial players will build licenses into standard contracts, taking advantage of workers who lack bargaining power and leaving the right to linger as a trap only for unwary or small-time creators.

Although NO FAKES itself leaves the question of Section 230 protection open, the House version expressly eliminates that protection, and platforms for user-generated content are likely to over-censor any content that is, or might be, flagged as containing an unauthorized digital replica. At the very least, we expect to see the expansion of fundamentally flawed systems like Content ID that regularly flag lawful content as potentially illegal and chill new creativity that depends on major platforms to reach audiences. The various exceptions in the bill won’t mean much if you have to pay a lawyer to figure out if they apply to you, and then try to persuade a rightsholder to agree.

Performers and others are raising serious concerns. As policymakers look to address them, they must take care to be precise, careful, and practical. NO FAKES doesn’t reflect that care, and its sponsors should go back to the drawing board. 

EFF to Ninth Circuit: There’s No Software Exception to Traditional Copyright Limits

11 March 2024 at 18:31

Copyright’s reach is already far too broad, and courts have no business expanding it any further, particularly where that expansion would undermine adversarial interoperability. Unfortunately, a federal district court did just that in the latest iteration of Oracle v. Rimini, concluding that software Rimini developed was a “derivative work” because it was intended to interoperate with Oracle’s software, even though Rimini’s software didn’t use any of Oracle’s copyrightable code.

That’s a dangerous precedent. If a work is derivative, it may infringe the copyright in the preexisting work from which it, well, derives. For decades, software developers have relied, correctly, on the settled view that a work is not derivative under copyright law unless it is “substantially similar” to a preexisting work in both ideas and expression. Thanks to that rule, software developers can build innovative new tools that interact with preexisting works, including tools that improve privacy and security, without fear that the companies that hold rights in those preexisting works would have an automatic copyright claim to those innovations.

That’s why EFF, along with a diverse group of stakeholders representing consumers, small businesses, software developers, security researchers, and the independent repair community, filed an amicus brief in the Ninth Circuit Court of Appeals explaining that the district court ruling is not just bad policy; it’s also bad law. Court after court has confronted the challenging problem of applying copyright to functional software, and until now none have found that the copyright monopoly extends to interoperable software absent substantial similarity. In other words, there is no “software exception” to the definition of derivative works, and the Ninth Circuit should reject any effort to create one.

The district court’s holding relied heavily on an erroneous interpretation of a 1998 case, Micro Star v. FormGen. In that case, the plaintiff, FormGen, published a video game following the adventures of action hero Duke Nukem. The game included a software tool that allowed players themselves to build new levels for the game and share them with others. Micro Star downloaded hundreds of those user-created files and sold them as a collection. When FormGen sued for copyright infringement, Micro Star argued that because the user files didn’t contain art or code from the FormGen game, they were not derivative works.

The Ninth Circuit Court of Appeals ruled against Micro Star, explaining that:

[t]he work that Micro Star infringes is the [Duke Nukem] story itself—a beefy commando type named Duke who wanders around post-Apocalypse Los Angeles, shooting Pig Cops with a gun, lobbing hand grenades, searching for medkits and steroids, using a jetpack to leap over obstacles, blowing up gas tanks, avoiding radioactive slime. A copyright owner holds the right to create sequels and the stories told in the [user files] are surely sequels, telling new (though somewhat repetitive) tales of Duke’s fabulous adventures.

Thus, the user files were “substantially similar” because they functioned as sequels to the video game itself—specifically the story and principal character of the game. If the user files had told a different story, with different characters, they would not be derivative works. For example, a company offering a Lord of the Rings game might include tools allowing a user to create their own character from scratch. If the user used the tool to create a hobbit, that character might be considered a derivative work. A unique character that was simply a 21st century human in jeans and a t-shirt, not so much.

Still, even confined to its facts, Micro Star stretched the definition of derivative work. By misapplying Micro Star to purely functional works that do not incorporate any protectable expression, however, the district court rewrote the definition altogether. If the court’s analysis were correct, rightsholders would suddenly have a new default veto right in all kinds of works that are intended to “interact and be useable with” their software. Unfortunately, they are all too likely to use that right to threaten add-on innovation, security, and repair.

Defenders of the district court’s approach might argue that fair use will often protect interoperable software even where it qualifies as a derivative work, and fair use is indeed an essential safeguard for interoperable tools now that copyrightable software is found in everything from phones to refrigerators. But many developers cannot afford to litigate the question, and they should not have to just because one federal court misread a decades-old case.

What Home Videotaping Can Tell Us About Generative AI

24 January 2024 at 16:04

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.


It’s 1975. Earth, Wind & Fire rule the airwaves, Jaws is on every theater screen, All in the Family is must-see TV, and Bill Gates and Paul Allen are selling software for one of the first personal computers, the Altair 8800.

But for copyright lawyers, and eventually the public, something even more significant is about to happen: Sony starts selling the Betamax, the first widely marketed home videotape recorder, or VTR. Suddenly, people have the power to store TV programs and watch them later. Does work get in the way of watching your daytime soap operas? No problem: record them and watch when you get home. Want to watch the game but hate to miss your favorite show? No problem. Or, as an ad Sony sent to Universal Studios put it, “Now you don’t have to miss Kojak because you’re watching Columbo (or vice versa).”

What does all of this have to do with Generative AI? For one thing, the reaction to the VTR was very similar to today’s AI anxieties. Copyright industry associations ran to Congress, claiming that the VTR "is to the American film producer and the American public as the Boston strangler is to the woman home alone" – rhetoric that isn’t far from some of what we’ve heard in Congress on AI lately. And then, as now, rightsholders also ran to court, claiming Sony was facilitating mass copyright infringement. The crux of the argument was a new legal theory: that a machine manufacturer could be held liable under copyright law (and thus potentially subject to ruinous statutory damages) for how others used that machine.

The case eventually worked its way up to the Supreme Court, and in 1984 the Court rejected the copyright industry’s rhetoric and ruled in Sony’s favor. Forty years later, at least two aspects of that ruling are likely to get special attention.

First, the Court observed that where copyright law has not kept up with technological innovation, courts should be careful not to expand copyright protections on their own. As the decision reads:

Congress has the constitutional authority and the institutional ability to accommodate fully the varied permutations of competing interests that are inevitably implicated by such new technology. In a case like this, in which Congress has not plainly marked our course, we must be circumspect in construing the scope of rights created by a legislative enactment which never contemplated such a calculus of interests.

Second, the Court borrowed from patent law the concept of “substantial noninfringing uses.” In order to hold Sony liable for how its customers used their VTRs, rightsholders had to show that the VTR was simply a tool for infringement. If, instead, the VTR was “capable of substantial noninfringing uses,” then Sony was off the hook. The Court held that the VTR fell in the latter category because it was used for private, noncommercial time-shifting, and that time-shifting was a lawful fair use. The Court even quoted Fred Rogers, who testified that home-taping of children’s programs served an important function for many families.

That rule helped unleash decades of technological innovation. If Sony had lost, Hollywood would have been able to legally veto any tool that could be used for infringing as well as non-infringing purposes. With Congress’s help, Hollywood has found ways to do much the same anyway, such as through Section 1201 of the DMCA. Nonetheless, Sony remains a crucial judicial protection for new creativity.

Generative AI may test the enduring strength of that protection. Rightsholders argue that generative AI toolmakers directly infringe when they use copyrighted works as training data. That use is very likely to be found lawful. The more interesting question is whether toolmakers are liable if customers use the tools to generate infringing works. To be clear, the users themselves may well be liable – but they are less likely to have the kind of deep pockets that make litigation worthwhile. Under Sony, however, the key question for the toolmakers will be whether their tools are capable of substantial non-infringing uses. The answer to that question is surely yes, which should preclude most of the copyright claims.

But there’s risk here as well – if any of these cases reaches its doors, the Supreme Court could overturn Sony. Hollywood certainly hoped it would do so when the Court considered the legality of peer-to-peer file-sharing in MGM v. Grokster. EFF and many others argued hard for the opposite result. Instead, the Court side-stepped Sony altogether in favor of creating a new form of secondary liability for “inducement.”

The current spate of litigation may end with multiple settlements, or Congress may decide to step in. If not, the Supreme Court (and a lot of lawyers) may get to party like it’s 1975. Let’s hope the justices choose once again to ensure that copyright maximalists don’t get to control our technological future.

The No AI Fraud Act Creates Way More Problems Than It Solves

19 January 2024 at 18:27

Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a “right of publicity”: the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense: you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products. But the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn’t vary depending on what state you live in. They’d also like to be able to go after the companies that offer generative AI tools and/or host AI-generated “deceptive” content. Ordinary liability rules, including copyright, can’t be used against a company that has simply provided a tool for others’ expression. After all, we don’t hold Adobe liable when someone uses Photoshop to suggest that a president can’t read, or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that’s a feature, not a bug; immunity means it’s easier to stick up for users’ speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It’s a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, a bill that is supposed to protect performers in the long run would just make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad swath of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category, from pictures of your kid to recordings of political events to docudramas, parodies, political cartoons, and more. If it involves recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to know he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone, including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of “likenesses” this law covers. Keep in mind that many of these services won’t be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment; that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, necessary for a “primary expressive purpose,” and harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn’t. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voices, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.
