Kids Online Safety Shouldn’t Require Massive Online Censorship and Surveillance: 2023 Year in Review

28 December 2023 at 11:25

There’s been plenty of bad news regarding federal legislation in 2023. For starters, Congress has failed to pass meaningful comprehensive data privacy reforms. Instead, legislators have spent an enormous amount of energy pushing dangerous legislation that’s intended to limit young people’s use of some of the most popular sites and apps, all under the guise of protecting kids. Unfortunately, many of these bills would run roughshod over the rights of young people and adults in the process. We spent much of the year fighting these dangerous “child safety” bills, while also pointing out to legislators that comprehensive data privacy legislation would be more likely to pass constitutional muster and address many of the issues that these child safety bills focus on. 

But there’s also good news: so far, none of these dangerous bills have been passed at the federal level, or signed into law. That's thanks to a large coalition of digital rights groups and other organizations pushing back, as well as tens of thousands of individuals demanding protections for online rights in the many bills put forward.

Kids Online Safety Act Returns

The biggest danger has come from the Kids Online Safety Act (KOSA). Originally introduced in 2022, it was reintroduced this year and amended several times, and as of today, has 46 co-sponsors in the Senate. As soon as it was reintroduced, we fought back, because KOSA is fundamentally a censorship bill. The heart of the bill is a “Duty of Care” that the government will force on a huge number of websites, apps, social networks, messaging forums, and online video games. KOSA will compel even the smallest online forums to take action against content that politicians believe will cause minors “anxiety,” “depression,” or encourage substance abuse, among other behaviors. Of course, almost any content could easily fit into these categories—in particular, truthful news about what’s going on in the world, including wars, gun violence, and climate change. Kids don’t need to fall into a wormhole of internet content to get anxious; they could see a newspaper on the breakfast table. 

KOSA will empower every state’s attorney general as well as the Federal Trade Commission (FTC) to file lawsuits against websites or apps that the government believes are failing to “prevent or mitigate” the list of bad things that could influence kids online. Platforms affected by KOSA would likely find it impossible to filter out this type of “harmful” content, though many would try. Online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues will all have to beg minors to leave, and institute age verification tools to ensure that it happens. Age verification systems are surveillance systems that threaten everyone’s privacy. Mandatory age verification, and with it, mandatory identity verification, is the wrong approach to protecting young people online.

KOSA was amended later in the year, but the changes do not resolve its issues. As an example, liability under the law was shifted to be triggered only for content that online services recommend to users under 18, rather than content that minors specifically search for. In practice, that means platforms could not proactively show young users content that could be “harmful,” but could present that content to them if they searched for it. How this would play out in practice is unclear; search results are recommendations, and future recommendations are influenced by previous searches. But however it’s interpreted, it’s still censorship—and it fundamentally misunderstands how search works online. Ultimately, no amendment will change the basic fact that KOSA’s duty of care turns what is meant to be a bill about child safety into a censorship bill that will harm the rights of both adult and minor users. 

Fortunately, so many people oppose KOSA that it never made it to the Senate floor for a full vote. In fact, even many of the young people it is intended to help are vehemently against it. We will continue to oppose it in the new year, and we urge you to contact your congressperson about it today.

Most KOSA Alternatives Aren’t Much Better

KOSA wasn’t the only child safety bill Congress put forward this year. The Protecting Kids on Social Media Act would combine some of the worst elements of other social media bills aimed at “protecting the children” into a single law. It includes elements of KOSA as well as several ideas pulled from state bills that have passed this year, such as Utah’s surveillance-heavy Social Media Regulations law.

When originally introduced, the Protecting Kids on Social Media Act had five major components: 

  • A mandate that social media companies verify the ages of all account holders, including adults 
  • A ban on children under age 13 using social media at all
  • A mandate that social media companies obtain parent or guardian consent before minors over 12 years old and under 18 years old may use social media
  • A ban on the data of minors (anyone over 12 years old and under 18 years old) being used to inform a social media platform’s content recommendation algorithm
  • The creation of a digital ID pilot program, instituted by the Department of Commerce, for citizens and legal residents, to verify ages and parent/guardian-minor relationships

EFF is opposed to all of these components, and has written extensively about why age verification mandates and parental consent requirements are generally dangerous and likely unconstitutional. 

In response to criticisms, senators updated the bill to remove some of the most flagrantly unconstitutional provisions: it no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media.  

Still, it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition. It would still prohibit children under 13 from using any ad-based social media, despite the vast majority of content on social media being lawful speech fully protected by the First Amendment. If enacted, the bill would likely suffer a similar fate to a California law aimed at restricting minors’ access to violent video games, which was struck down in 2011 for violating the First Amendment. 

What’s Next

One silver lining to this fight is that it has activated young people. The threat of KOSA, as well as several similar state-level bills that did pass, has made it clear that young people may be the biggest target for online censorship and surveillance, but they are also among the strongest forces fighting back against both.

The authors of these bills have laudable intentions. But laws that would force platforms to determine the age of their users are privacy-invasive, and laws that restrict speech—even if only for those who can’t prove they are above a certain age—are censorship laws. We expect that KOSA, at least, will return in one form or another. We will be ready when it does.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Protecting Students from Faulty Software and Legislation: 2023 Year in Review

28 December 2023 at 11:25

Lawmakers, school districts, educational technology companies, and others keep rolling out legislation and software that threaten students’ privacy, free speech, and access to social media, in the name of “protecting” children. At EFF, we fought back against this overreach and demanded accountability and transparency.

Bad bills and invasive monitoring systems, though sometimes well-meaning, hurt students rather than protect them from the perceived dangers of the internet and social media. We saw many efforts to bar young people and students from digital spaces, censor what they are allowed to see and share online, and monitor and control when and how they can do it. This makes it increasingly difficult for them to access information about everything from gun violence and drug abuse to politics and LGBTQ+ topics, all because some software or elected official considers these topics “harmful.”

In response, we doubled down on exposing faulty surveillance software, long a problem in many schools across the country. We launched a new project called the Red Flag Machine, an interactive quiz and report demonstrating the absurd inefficiency—and potential dangers—of student surveillance software that schools across the country use and that routinely invades the privacy of millions of children.

The project grew out of our investigation of GoGuardian, computer monitoring software used in about 11,500 schools to surveil about 27 million students—mostly in middle and high school—according to the company. The software allows school officials and teachers to monitor students’ computers and devices, talk to them via chat or webcam, block sites considered “offensive,” and get alerts when students access content that the software, or the school, deems harmful or explicit.

Our investigation showed that the software inaccurately flags massive amounts of useful material. The software flagged sites about Black authors and artists, the Holocaust, and the LGBTQ+ rights movement. It flagged the official Marine Corps fitness guide and the bios of the cast of Shark Tank. Bible.com was flagged because the text of Genesis 3 contains the word “naked.” We found thousands more examples of mis-flagged sites.

EFF built the Red Flag Machine to expose the ludicrous results of GoGuardian’s flagging algorithm. In addition to reading our research about the software, you can take a quiz that presents websites flagged by the software, and guess which of five possible words triggered the flag. The results would be funny if they were not so potentially harmful.
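To make concrete why this kind of filtering fails so often, here is a minimal TypeScript sketch of a naive keyword-based flagger. It is purely illustrative, not GoGuardian's actual code, and the blocklist terms are hypothetical; the point is that bare substring matching has no sense of context, which is exactly how a chapter of Genesis ends up flagged alongside genuinely explicit material.

```typescript
// Illustrative only: a naive keyword flagger of the kind described above.
// This is NOT GoGuardian's actual algorithm; the blocklist is hypothetical.
const FLAGGED_TERMS: string[] = ["naked", "drugs", "weapon"];

/** Return every blocklist term found in the page text, with no regard for context. */
function flagPage(pageText: string): string[] {
  const text = pageText.toLowerCase();
  return FLAGGED_TERMS.filter((term) => text.includes(term));
}

// Genesis 3 trips the filter for the same reason explicit content would:
console.log(flagPage("and they knew that they were naked; and they sewed fig leaves together"));
// -> ["naked"]
```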

Congress Takes Aim At Students and Young People

Meanwhile, Congress this year resurrected the Kids Online Safety Act (KOSA), a bill that would increase surveillance and restrict access to information in the name of protecting children online—including students. KOSA would give power to state attorneys general to decide what content on many popular online platforms is dangerous for young people, and would enable censorship and surveillance. Sites would likely be required to block important educational content, often made by young people themselves, about how to deal with anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal thoughts. We urged Congress to reject this bill and encouraged people to tell their senators and representatives that KOSA will censor the internet but not help kids. 

We also called out the brazen Eyes on the Board Act, which aims to end social media use entirely in schools. This heavy-handed bill would cut some federal funding to any school that doesn’t block all social media platforms. We can understand the desire to ensure students are focusing on schoolwork when in class, but this bill tells teachers and school officials how to do their jobs and imposes unnecessary censorship.

Many schools already don’t allow device use in the classroom and block social media sites and other content on school-issued devices. Too much social media is not a problem that teachers and administrators need the government to correct—they already have the tools and know-how to do it.

Unfortunately, we’ve seen a slew of state bills that also seek to control what students and young people can access online. There are bills in Texas, Utah, Arkansas, Florida, and Montana, to name just a few, and keeping up with all this bad legislation is like a game of whack-a-mole.

Finally, teachers and school administrators are grappling with whether generative AI use should be allowed, and whether they should deploy detection tools to find students who have used it. We think the answer to both is no. AI detection tools are highly inaccurate and carry significant risks of falsely flagging students for plagiarism. And AI use is growing rapidly and will likely have a significant impact on students’ lives and futures. Students should be learning about and exploring generative AI now to understand some of its benefits and flaws. Demonizing it only deprives students of the chance to learn about a technology that may change the world around us.

We’ll continue to fight student surveillance and censorship, and we are heartened to see students fighting back against efforts to supposedly protect children that actually give government control over who gets to see what content. It has never been more important for young people to defend our democracy and we’re excited to be joining with them. 

If you’re interested in learning more about protecting your privacy at school, take a look at our Surveillance Self-Defense guide on privacy for students.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

In the Trenches of Broadband Policy: 2023 Year In Review

By: Chao Liu
29 December 2023 at 14:31

EFF has long advocated for affordable, accessible, future-proof internet access for all. Nearly 80% of Americans already consider internet access to be as essential as water and electricity. As our work, health care, education, entertainment, and social lives increasingly move online, we cannot accept a future where the quality of your internet access—and so the quality of your connection to these crucial facets of your life—is determined by geographic, socioeconomic, or other dividing lines. 

Lawmakers recognized this during the pandemic and set in motion once-in-a-generation opportunities to build the future-proof fiber infrastructure needed to close the digital divide once and for all.

As we exit the pandemic, however, that dedication is wavering. Monopolistic internet service providers (ISPs), with business models that created the digital divide in the first place, are doing everything they can to maintain control over the broadband market—including stopping the construction of any infrastructure they do not control. Further, while some government agencies are continuing to make rules to advance equitable and competitive access to broadband, others have not. Regardless, EFF will continue to fight for the vision we’ve long advocated.

New York City Abandons Revolutionary Fiber Plan 

This year, New York City Mayor Eric Adams turned his back on the future of broadband accessibility for New Yorkers.

In 2020, then-Mayor Bill de Blasio unveiled New York City’s Internet Master Plan to deliver broadband to low-income New Yorkers by investing in public fiber infrastructure. Public fiber infrastructure would have been an investment in New York City’s future: a long-term solution to permanently bridge the digital divide and bring affordable, accessible, future-proof service to New Yorkers for generations to come. This kind of public infrastructure, especially if provisioned on an open and affordable basis, dramatically lowers barriers to entry, which in turn creates competition, lower prices, and better customer service in the market as a whole.

Mayor Eric Adams not only abandoned this plan, but subsequently introduced a three-year, $90 million subsidy plan called Big Apple Connect. Instead of building physical infrastructure to bridge the digital divide for decades to come, New York City will now subsidize NYC’s oligopolistic ISPs, Charter Spectrum and Altice, to continue doing business as usual. This does nothing to address the needs of underinvested communities whose legacy networks physically cannot handle a fast connection. All it does is put taxpayer dollars into corporate pockets instead of into infrastructure that actually serves the people.

The Adams administration even asked a cooperatively run, community-based ISP that had been a part of the Internet Master Plan, and had already installed fiber infrastructure, to dismantle its network so the city could further contract with the big ISPs.

California Wavers On Its Commitments

New York City is not the only place where public commitment to bridging the digital divide has wavered. 

In 2021, California invested nearly $7 billion to bring affordable fiber infrastructure to all Californians. As part of this process, California’s Department of Technology was meant to build 10,000 miles of middle-mile fiber infrastructure, the physical foundation on which community-level last-mile connections would be built to serve underserved communities for decades to come.

Unfortunately, in August the Department of Technology not only reduced the number of miles to be built but also cut off entire communities that have traditionally been underserved. Despite fierce community pushback, the Department of Technology stuck to its revised plans and awarded contracts accordingly.

Governor Newsom has promised to restore the lost miles in 2024, a promise EFF and California community groups intend to hold him to, but the fact remains that the reduction in miles should not have been handled the way it was.

FCC Rules on Digital Discrimination and Rulemaking on Net Neutrality

On the federal level, the Federal Communications Commission finally received its fifth commissioner, Anna Gomez, in September of this year, allowing it to begin its rulemaking on net neutrality and to promulgate rules on digital discrimination. We submitted comments in the net neutrality proceeding, advocating for a return to light-touch, targeted, and enforceable net neutrality protections for the whole country.

On digital discrimination, EFF applauds the Commission for adopting both a disparate-treatment and a disparate-impact standard. Companies can now be found liable for digital discrimination not only when they intentionally treat communities differently, but also when the impact of their decisions—regardless of intent—affects a community differently. Further, for the first time the Commission recognized the link between historic redlining in housing and digital discrimination, connecting the historic underinvestment in lower-income communities of color to continued underinvestment by monopolistic ISPs.

Next year will bring more fights around broadband implementation. The questions will be who gets funding, whether and where infrastructure gets built, and whether long-neglected communities will finally be heard and brought into the 21st century, or left behind by public neglect or private greed. The path to affordable, accessible, future-proof internet for all will require the political will to invest in physical infrastructure and to hold incumbents to nondiscrimination rules that preserve speech and competition online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Fighting For Your Digital Rights Across the Country: Year in Review 2023

29 December 2023 at 14:42

EFF works every year to improve policy in ways that protect your digital rights in states across the country. Thanks to the messages of hundreds of EFF members across the country, we've spoken up for digital rights this year from Sacramento to Augusta.

Much of EFF's state legislative work has, historically, been in our home state of California—also often the most active state on digital civil liberties issues. This year, the Golden State passed several laws that strengthen consumer digital rights.

Two major laws we supported stand out in 2023. The first is S.B. 244, authored by California Sen. Susan Eggman, which makes it easier for individuals and independent repair shops to access the materials and parts needed to maintain electronics and appliances. That means Californians with a broken phone screen or a busted washing machine will have many more options for getting them fixed. Even though some electronics, such as video game consoles, are not included, the law still raises the bar for other right-to-repair bills.

S.B. 244 is one of the strongest right-to-repair laws in the country, doggedly championed by a group of advocates led by the California Public Interest Research Group, and we were proud to support it.

Another significant win comes with the signing of S.B. 362, also known as the CA Delete Act, authored by California Sen. Josh Becker. Privacy Rights Clearinghouse and Californians for Consumer Privacy led the fight on this bill, which builds on the state's landmark data privacy law and makes it easier for Californians to control their data through the state's data broker registry.

In addition to these wins, several other California bills we supported are now law. These include a measure that will broaden protections for immigration status data and one to facilitate better broadband access.

Health Privacy Is Data Privacy

States across the country continue to legislate at the intersection of digital privacy and reproductive rights. Both in California and beyond, EFF has worked with reproductive justice activists, medical practitioners, and other digital rights advocates to ensure that data from apps, electronic health records, law enforcement databases, and social media posts are not weaponized to prosecute people who seek, or help others seek, reproductive or gender-affirming care. 

While some states are directly targeting those who seek this type of health care, other states are taking different approaches to strengthen protections. In California, EFF supported a bill that passed into law—A.B. 352, authored by CA Assemblymember Rebecca Bauer-Kahan—which extended the protections of California's health care data privacy law to apps such as period trackers. Washington, meanwhile, passed the "My Health, My Data Act"—H.B. 1155, authored by WA Rep. Vandana Slatter—that, among other protections, prohibits the collection of health data without consent. While EFF did not take a position on H.B. 1155, we do applaud the law's opt-in consent provisions and encourage other states to consider similar bills.

Consumer Privacy Bills Could Be Stronger

Since California passed the California Consumer Privacy Act in 2018, several states have passed their own versions of consumer privacy legislation. Unfortunately, many of these laws have been more consumer-hostile and business-friendly than EFF would like to see. In 2023, eight states—Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, and Texas—passed their own versions of broad consumer privacy bills.

EFF did not support any of these laws, many of which can trace their lineage to a weak Virginia law we opposed in 2021. Yet not all of them are equally bad.

For example, while EFF could not support the Oregon bill after a legislative deal stripped it of its private right of action, the law is a strong starting point for privacy legislation moving forward. It has its flaws, but it is unique among state privacy laws in requiring businesses to share the names of the actual third parties that have your information, rather than simply the categories of companies. So, instead of knowing only that a "data broker" has your information and hitting a dead end in following your own data trail, you can know exactly where to file your next request. EFF participated in a years-long process to bring that bill together, and we thank the Oregon Attorney General's office for their work to keep it as strong as it is.

EFF also wants to give plaudits to Montana for another bill—a strong genetic privacy law passed this year. It is a good starting point for other states, and shows Montana is thinking critically about how to protect people from overbroad data collection and surveillance.

Of course, one post can't capture all the work we did in states this year. In particular, the curious should read our Year in Review post specifically focused on children’s privacy, speech, and censorship bills introduced in states this year. But EFF was able to move the ball forward on several issues this year—and will continue to fight for your digital rights in statehouses from coast to coast.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

First, Let’s Talk About Consumer Privacy: 2023 Year in Review

29 December 2023 at 14:53

Whatever online harms you want to alleviate on the internet today, you can do it better—with a broader impact—if you enact strong consumer data privacy legislation first. That is a grounding principle that has informed much of EFF’s consumer protection work in 2023.

While consumer privacy will not solve every problem, it is superior to many other proposals that attempt to address issues like child mental health or foreign government surveillance. That is true for two reasons: well-written consumer privacy laws address the root source of corporate surveillance, and they can withstand constitutional scrutiny.

EFF’s work on this issue includes (1) advocating for strong comprehensive consumer data privacy laws; (2) fighting bad laws; and (3) protecting existing sectoral privacy laws.

Advocating for Strong Comprehensive Consumer Data Privacy


This year, EFF released a report titled “Privacy First: A Better Way to Address Online Harms.” The report listed the key pillars of a strong privacy law (such as a ban on online behavioral ads and data minimization) and explained how these principles can help address current issues (like protecting children’s mental health or reproductive health privacy).

We highlighted why data privacy legislation is a form of civil rights legislation and why adtech surveillance often feeds government surveillance.

And we made the case that well-written privacy laws can be constitutional: when they regulate the commercial processing of personal data, when that personal data is private and not a matter of public concern, and when the law is tailored to address the government’s interests in privacy, free expression, security, and guarding against discrimination.

Fighting Bad Laws Based in Censorship of Internet Users


We filed amicus briefs in lawsuits challenging laws in Arkansas and Texas that required internet users to submit to age verification before accessing certain online content. These challenges continue to make their way through the courts, but they have so far been successful. We plan to do the same in a case challenging California’s Age Appropriate Design Code, while cautioning the court not to cast doubt on important privacy principles.

We filed a similar amicus brief in a lawsuit challenging Montana’s TikTok ban, where a federal court recently ruled that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content.

Protecting Existing Sectoral Laws


EFF is also gearing up to file an amicus brief supporting the constitutionality of the federal law called the Video Privacy Protection Act, which limits how video providers can sell or share their users’ private viewing data with third-party companies or the government. While we think a comprehensive privacy law is best, we support strong existing sectoral laws that protect data like video watch history, biometrics, and broadband use records.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Fighting European Threats to Encryption: 2023 Year in Review 

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. Yet throughout 2023, politicians across Europe attempted to undermine encryption, seeking to access and scan our private messages and pictures. 

But we pushed back in the EU, and so far, we’ve succeeded. EFF spent this year fighting hard against an EU proposal (text) that, if it became law, would have been a disaster for online privacy in the EU and throughout the world. In the name of fighting online child abuse, the European Commission, the EU’s executive body, put forward a draft bill that would allow EU authorities to compel online services to scan user data and check it against law enforcement databases. The proposal would have pressured online services to abandon end-to-end encryption. The Commission even suggested using AI to rifle through peoples’ text messages, leading some opponents to call the proposal “chat control.”

EFF has been opposed to this proposal since it was unveiled last year. We joined together with EU allies and urged people to sign the “Don’t Scan Me” petition. We lobbied EU lawmakers and urged them to protect their constituents’ human right to have a private conversation—backed up by strong encryption. 

Our message broke through. In November, a key EU committee adopted a position that bars mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, which would have amounted to a mandate to show ID before you get online; age verification can erode a free and anonymous internet for both kids and adults. 

We’ll continue to monitor the EU proposal as attention shifts to the Council of the EU, the second decision-making body of the EU. Despite several Member States still supporting widespread surveillance of citizens, there are promising signs that such a measure won’t get majority support in the Council. 

Make no mistake—the hard-fought compromise in the European Parliament is a big victory for EFF and our supporters. The governments of the world should understand clearly: mass scanning of peoples’ messages is wrong, and at odds with human rights. 

A Wrong Turn in the U.K.

EFF also opposed the U.K.’s Online Safety Bill (OSB), which passed Parliament in September and became law as the Online Safety Act (OSA) this October, after more than four years on the British legislative agenda. The stated goal of the OSB was to make the U.K. the world’s “safest place” to use the internet, but the bill’s more than 260 pages actually outline a variety of ways to undermine our privacy and speech. 

The OSA requires platforms to take action to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. The U.K. government said that content will only be scanned to check for specific categories of content. In one of the final OSB debates, a representative of the government noted that orders to scan user files “can be issued only where technically feasible,” as determined by the U.K. communications regulator, Ofcom. 

But as we’ve said many times, there is no middle ground on content scanning and no “safe backdoor” if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does. 

Despite our opposition, carried out in close coordination with civil society groups in the U.K., the bill passed in September with its anti-encryption measures intact. But the story doesn't end here. The OSA remains vague about what exactly it requires of platforms and users alike. Ofcom must now take the OSA and, over the coming year, draft regulations to operationalize the legislation. 

The public understands better than ever that government efforts to “scan it all” will always undermine encryption and prevent us from having a safe and secure internet. EFF will monitor Ofcom’s drafting of the regulations, and we will continue to hold the U.K. government accountable to the international and European human rights protections it has signed on to. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

States Attack Young People’s Constitutional Right to Use Social Media: 2023 Year in Review

30 December 2023 at 10:58

Legislatures in more than half of the country targeted young people’s use of social media this year, with many of the proposals blocking adults’ ability to access the same sites. State representatives introduced dozens of bills that would limit young people’s use of some of the most popular sites and apps, either by requiring the companies to introduce or amend their features or data usage for young users, or by forcing those users to get permission from parents, and in some cases share their passwords, before they can log on. Courts blocked several of these laws for violating the First Amendment—though some may still go into effect in 2024. 

How did we get to a point where state lawmakers are willing to censor large parts of the internet? In many ways, California’s Age Appropriate Design Code Act (AADC), passed in September of 2022, set the stage for this year’s battle. EFF asked Governor Newsom to veto that bill before it was signed into law, despite its good intentions in seeking to protect the privacy and well-being of children. Like many of the bills that followed it this year, it runs the risk of imposing surveillance requirements and content restrictions on a broader audience than intended. A federal court blocked the AADC earlier this year, and California has appealed that decision.

Fourteen months after California passed the AADC, it feels like a dam has broken: we’ve seen dangerous social media regulations for young people introduced across the country, and passed in several states, including Utah, Arkansas, and Texas. The severity and individual components of these regulations vary. Like California’s, many of these bills would introduce age verification requirements, forcing sites to identify all of their users, harming both minors’ and adults’ ability to access information online. We oppose age verification requirements, which are the wrong approach to protecting young people online. No one should have to hand over their driver’s license, or, worse, provide biometric information, just to access lawful speech on websites.

A Closer Look at State Social Media Laws Passed in 2023

Utah enacted the first child social media regulation this year, S.B. 152, in March. The law prohibits social media companies from providing accounts to a Utah minor, unless they have the express consent of a parent or guardian. We requested that Utah’s governor veto the bill.

We identified at least four reasons to oppose the law, many of which apply to other states’ social media regulations. First, young people have a First Amendment right to information that the law infringes upon. With S.B. 152 in effect, the majority of young Utahns will find themselves effectively locked out of much of the web absent their parents’ permission. Second, the law dangerously requires parental surveillance of young people’s accounts, harming their privacy and free speech. Third, the law endangers the privacy of all Utah users, as it requires many sites to collect and analyze private information, like government-issued identification, for every user in order to verify ages. And fourth, the law interferes with the broader public’s First Amendment right to receive information by requiring that all users in Utah tie their accounts to their age, and ultimately their identity, which will lead to fewer people expressing themselves or seeking information online. 

The law passed despite these problems, as did Utah’s H.B. 311, which creates liability for social media companies should they, in the view of Utah lawmakers, create services that are addictive to minors. H.B. 311 is unconstitutional because it imposes a vague and unscientific standard for what might constitute social media addiction, potentially creating liability for core features of a service, such as letting you know that someone responded to your post. Both S.B. 152 and H.B. 311 are scheduled to take effect in March 2024.

Arkansas passed a similar law to Utah's S.B. 152 in April, which requires users of social media to prove their age or obtain parental permission to create social media accounts. A federal court blocked the Arkansas law in September, ruling that the age-verification provisions violated the First Amendment because they burdened everyone's ability to access lawful speech online. EFF joined the ACLU in a friend-of-the-court brief arguing that the statute was unconstitutional.

Texas, in June, passed a regulation similar to the Arkansas law, which would ban anyone under 18 from having a social media account unless they receive consent from parents or guardians. The law is scheduled to take effect in September 2024.

Given the strong constitutional protections for people, including children, to access information without having to identify themselves, federal courts have blocked the laws in Arkansas and California. The Utah and Texas laws are likely to suffer the same fate. EFF has warned that such laws were bad policy and would not withstand court challenges, in large part because applying online regulations specifically to young people often forces sites to use age verification, which comes with a host of problems, legal and otherwise. 

To that end, we spent much of this year explaining to legislators that comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harms they do to children. For an even more detailed account of our suggestions, see Privacy First: A Better Way to Address Online Harms. In short, comprehensive data privacy legislation would address the massive collection and processing of personal data that is the root cause of many problems online, and it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

Of course, states were not alone in their attempt to regulate social media for young people. Our Year in Review post on similar federal legislation that was introduced this year covers that fight, which was successful. Our post on the UK’s Online Safety Act describes the battle across the pond. 2024 is shaping up to be a year of court battles that may determine the future of young people’s access to speak out and obtain information online. We’ll be there, continuing to fight against misguided laws that do little to protect kids while doing much to invade everyone’s privacy and speech rights.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Taking Back the Web with Decentralization: 2023 in Review

31 December 2023 at 09:12

When a system becomes too tightly-controlled and centralized, the people being squeezed tend to push back to reclaim their lost autonomy. The internet is no exception. While the internet began as a loose affiliation of universities and government bodies, that emergent digital commons has been increasingly privatized and consolidated into a handful of walled gardens. Their names are too often made synonymous with the internet, as they fight for the data and eyeballs of their users.

In the past few years, there's been an accelerating swing back toward decentralization. Users are fed up with the concentration of power and the prevalence of privacy and free expression violations, and many are fleeing to smaller, independently operated projects.

This momentum wasn’t only seen in the growth of new social media projects. Other exciting projects have emerged this year, and public policy is adapting.  

Major Gains for the Federated Social Web

After Elon Musk acquired Twitter (now X) at the end of 2022, many people moved to various corners of the “IndieWeb” at an unprecedented rate. It turns out those were just the cracks before the dam burst this year. 2023 was defined as much by the ascent of federated microblogging as it was by the descent of X as a platform. These users didn't just want a drop-in replacement for Twitter; they wanted to break the major social media platform model for good by forcing hosts to compete on service and respect.

This momentum at the start of the year was principally seen in the fediverse, with Mastodon. This software project filled the microblogging niche for users leaving Twitter, while conveniently being one of the most mature projects built on the ActivityPub protocol, the basic building block at the heart of interoperability among the many fediverse services.

Filling a similar niche, but built on the privately developed Authenticated Transfer (AT) Protocol, Bluesky also saw rapid growth despite remaining invite-only and not yet opening up to interoperation until next year. Projects like Bridgy Fed are already working to connect Bluesky to the broader federated ecosystem, and show some promise of a future where we don’t have to choose between using the tools and sites we prefer and connecting to friends, family, and many others. 

The other major development in the fediverse came from a seemingly unlikely source—Meta. Meta owns Facebook and Instagram, which have gone to great lengths to control user data—even invoking privacy-washing claims to maintain their walled gardens. So Meta’s launch of Threads in July, a new microblogging site using the fediverse’s ActivityPub protocol, was surprising. After an initial break-out success, thanks to bringing Instagram users into the new service, Threads is already many times larger than the fediverse and Bluesky combined. While such a large site could mean federated microblogging joins federated direct messaging (email) in the mainstream, Threads has not yet begun to interoperate, and it may create a rift among hosts and users wary of Meta’s poor track record on user privacy and content moderation.

We also saw the federation of social news aggregation. In June, Reddit outraged its moderators and third-party developers by updating its API pricing policy to become less interoperable. This outrage manifested in a major platform-wide blackout protesting the changes and the unfair treatment of the unpaid, passionate volunteers who make the site worthwhile. Again, users turned to the maturing fediverse as a decentralized refuge, specifically Lemmy and Kbin, the more Reddit-like cousins of Mastodon. Reddit, echoing Twitter once again, also came under fire for briefly banning users and subreddits related to these fediverse alternatives. While the protests continued well beyond their initial scope and remained in the public eye, order was eventually restored. However, the formerly fringe alternatives in the fediverse continue to be active and improving.

Finally, while these projects made great strides in gaining adoption and improving usability, many remain generally small and under-resourced. For the decentralized social web to succeed, it must be sustainable and maintain high standards for how users are treated and safeguarded. These indie hosts face the same liability risks and governmental threats as the billion-dollar companies. In a harrowing example we saw this year, an FBI raid on a Mastodon server admin for unrelated reasons resulted in the seizure of an unencrypted server database. It’s a situation that echoes EFF’s founding case over 30 years ago, Steve Jackson Games v. Secret Service, and it underlines the need for small hosts to be prepared to guard against government overreach.

With so much momentum towards better tools and a wider adoption of better standards, we remain optimistic about the future of these federated projects.

Innovative Peer-to-Peer Apps

This year has also seen continued work on components of the web that live further down the stack, in the form of protocols and libraries that most people never interact with but which enable the decentralized services that users rely on every day. The ActivityPub protocol, for example, describes how all the servers that make up the fediverse communicate with each other. ActivityPub opened up a world of federated decentralized social media—but progress isn't stopping there.
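For a sense of what that server-to-server communication looks like, here is a minimal, hypothetical sketch of ActivityPub federation in TypeScript: one server wraps a new post in a "Create" activity and delivers it to a follower's inbox on another server. The domains and IDs are made up, and real implementations also sign requests with HTTP Signatures, discover actors via WebFinger, and handle retries.

```typescript
// Minimal sketch of ActivityPub delivery (hypothetical servers and IDs).
// Real fediverse servers also add HTTP Signatures and retry logic.
const activity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://social.example/users/alice/statuses/1/activity",
  type: "Create",
  actor: "https://social.example/users/alice",
  to: ["https://www.w3.org/ns/activitystreams#Public"],
  object: {
    id: "https://social.example/users/alice/statuses/1",
    type: "Note",
    attributedTo: "https://social.example/users/alice",
    content: "Hello, fediverse!",
  },
};

// Deliver the activity to a follower's inbox on a different server.
async function deliver(inboxUrl: string): Promise<void> {
  await fetch(inboxUrl, {
    method: "POST",
    headers: { "Content-Type": "application/activity+json" },
    body: JSON.stringify(activity),
  });
}

deliver("https://other.example/users/bob/inbox");
```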

Some of our friends are hard at work figuring out what comes next. The Veilid project was officially released in August, at DEFCON, and the Spritely project has been throwing out impressive news and releases all year long. Both projects promise to revolutionize how we can exchange data directly from person to person, securely and privately, and without needing intermediaries. As we wrote, we’re looking forward to seeing where they lead us in the coming year.

The European Union’s Digital Markets Act went into effect in May of 2023, and one of its provisions requires that messaging platforms greater than a certain size must interoperate with other competitors. While each service with obligations under the DMA could offer its own bespoke API to satisfy the law’s requirements, the better result for both competition and users would be the creation of a common protocol for cross-platform messaging that is open, relatively easy to implement, and, crucially, maintains end-to-end encryption for the protection of end users. Fortunately, the More Instant Messaging Interoperability (MIMI) working group at the Internet Engineering Task Force (IETF) has taken up that exact challenge. We’ve been keeping tabs on the group and are optimistic about the possibility of open interoperability that promotes competition and decentralization while protecting privacy.

EFF on DWeb Policy

DWeb Camp 2023

The “star-studded gala” (such as it is) of the decentralized web, DWeb Camp, took place this year among the redwoods of Northern California over a weekend in late June. EFF participated in a number of panels focused on the policy implications of decentralization, how to influence policy makers, and the future direction of the decentralized web movement. The opportunity to connect with others working on both policy and engineering was invaluable, as were the contributions from those living outside the US and Europe.  

Blockchain Testimony

Blockchains have been the focus of plenty of legislators and regulators in the past handful of years, but most of the focus has been on the financial uses and implications of the tool. EFF had a welcome opportunity to direct attention toward the less-often discussed other potential uses of blockchains when we were invited to testify before the United States House Energy and Commerce Committee Subcommittee on Innovation, Data, and Commerce. The hearing focused specifically on non-financial uses of blockchains, and our testimony attempted to cut through the hype to help members of Congress understand what it is and how and when it can be helpful while being clear about its potential downsides. 

The overarching message of our testimony was that blockchain, at the end of the day, is just a tool and, just as with other tools, Congress should refrain from regulating it specifically because of what it is. The other important point we made was that individuals who contribute open-source code to blockchain projects should not, absent some other factor, be the ones held responsible for what others do with the code they write.

Moderation in Decentralized Social Media

One of the major issues brought to light by the rise of decentralized social media such as Bluesky and the fediverse this year has been the promises and complications of content moderation in a decentralized space. On centralized social media, content moderation can seem more straightforward: the moderation team has broad insight into the whole network, and the major platforms most people are used to have more resources to maintain a team of moderators. Decentralized social media has its own benefits when it comes to moderation, however. For example, a decentralized system means that individuals can “shop” for the moderation style that best suits their preferences. This community-level moderation may also scale better than centralized models, as moderators have more context and personal investment in the space.

But decentralized moderation is certainly not a solved problem, which is why the Atlantic Council created the Task Force for a Trustworthy Future Web. The Task Force started out by compiling a comprehensive report on the state of trust and safety work in social media and the upcoming challenges in the space. It then conducted a series of public and private consultations focused on the challenges of content moderation on these new platforms. Experts from many related fields were invited to participate, including EFF, and we were excited to offer our thoughts and to hear from the other assembled groups. The Task Force is compiling a final report that will synthesize the feedback and should be out early next year.

The past year has been a strong one for the decentralization movement. More and more people are realizing that the large centralized services are not all there is to the internet, and exploration of alternatives is happening at a level that we haven’t seen in at least a decade. New services, protocols, and governance models are also popping up all the time. Throughout the year we have tried to guide newcomers through the differences in decentralized services, inform public policies surrounding these technologies and tools, and help envision where the movement should grow next. We’re looking forward to continuing to do so in 2024.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

How To Fight Bad Patents: 2023 Year In Review

31 December 2023 at 09:14

At EFF, we believe that all the rights we have in the offline world–to speak freely, create culture, play games, build things and do business–must hold up in the digital world, as well. 

EFF’s longstanding project of fighting for a more balanced, just patent system has always borne free expression in mind. And patent trolls, who simply use intellectual property (IP) rights to extract money from others, continue to be a barrier to people who want to freely innovate, or even just use technology. 

Defending IPR 

The inter partes review (IPR) process that Congress created about a decade ago is far from perfect, and we’ve supported a few ideas that would make it stronger. But overall, IPR has been a big step forward for limiting the damage of wrongly granted patents. Thousands of patent claims have been canceled through this process, which uses specialized administrative judges and is considerably faster and less expensive than federal courts. 

And IPR does no harm to legitimate patent holders. In fact, it affects only a tiny proportion of patents at all. In fiscal year 2023, 392 patents were partially invalidated and 133 patents were fully invalidated. That’s out of a universe of an estimated 3.8 million “live” patents, according to the U.S. Patent and Trademark Office’s (USPTO) own data, or roughly one patent in every 7,000. 

Patent examiners have less than 20 hours, on average, to go through the entire review process for a particular patent application. The process ends with the patent applicant getting a limited monopoly from the government–a monopoly right that’s now given out more than 300,000 times per year. It only makes sense to have some type of post-grant review system to challenge the worst patents at the patent office. 

Despite this, patent trolls and other large, aggressive patent holders are determined to roll back the IPR process. This year, they lobbied the USPTO to begin a process that would allow wrongheaded rule changes severely threatening access to IPR. 

EFF, allied organizations, and tens of thousands of individuals wrote to the U.S. Patent Office opposing the proposed rules, and insisting that patent challenges should remain open to the public. 

We’re also opposing an even more extreme set of rule changes to IPR that has been unfortunately put forward by some key Senators. The PREVAIL Act would sharply limit IPR to only the immediately affected parties, and bar groups like EFF from accessing IPR at all. (A crowdfunded IPR process is how we shut down the dangerous “podcasting” patent.) 

Defending Alice

The Supreme Court’s 2014 decision in Alice v. CLS Bank barred patents that were nothing more than abstract ideas with computer jargon added in. Using the Alice test, federal courts have kicked out a rogue’s gallery of hundreds of the worst patents, including patents claiming “matchmaking,” online picture menus, scavenger hunts, and online photo contests.

Dozens of individuals and small businesses have been saved by the Alice precedent, which has done a decent job of stopping the worst computer patents from surviving–at least when a defendant can afford to litigate the case. 

Unfortunately, certain trade groups keep pushing to roll back the Alice framework. For the second year in a row, we saw the introduction of a bill called the Patent Eligibility Restoration Act (PERA). This proposal would not only reverse course on the Alice rule, but would also authorize the patenting of human genes, which currently cannot be patented thanks to another Supreme Court case, AMP v. Myriad. It would “restore” the absolute worst patents on computer technology and on human genes. 

We also called out the U.S. Solicitor General when that office wrote a shocking brief siding with a patent troll, suggesting that the Supreme Court re-visit Alice. 

The Alice precedent protects everyday internet users. We opposed the Solicitor General when she came out against users, and we’ll continue to strongly oppose PERA.

Until our patent laws get the kind of wholesale change we have advocated for, profiteers and scam artists will continue to claim they “own” various types of basic internet use. That myth is wrong, it hurts innovation, and it hurts free speech. With your help, EFF remains a bulwark against this type of patent abuse.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Year in Review: Google’s Corporate Paternalism in the Browser

1 January 2024 at 08:15

It’s been a big year for the oozing creep of corporate paternalism and ad-tracking technology online. Google and its subsidiary companies have tightened their grip on the throat of internet innovation, all while employing the now-familiar tactic of marketing these changes as beneficial for users. Here we’ll review the most significant changes this year, all of which emphasize that browser privacy tools (like Privacy Badger) are more important than ever.

Manifest V2 to Manifest V3: Final Death of Legacy Chrome Extensions

Chrome, the most popular web browser by all measurements, recently announced the official death date for Manifest V2, hastening the reign of its janky successor, Manifest V3. We've been complaining about this since the start, but here's the gist: the finer details of MV3 have gotten somewhat better over time (namely, it won't completely break all privacy extensions), but what security benefits it has are bought by limiting what all extensions can do. Chrome could instead invest in a more robust extension review process, which would protect both innovation and security, but it’s clear that the true intention of this change lies elsewhere. Put bluntly: Chrome, a browser built by an advertising company, has positioned itself as the gatekeeper for in-browser privacy tools and the sole arbiter of how they should be designed. Considering that Google’s trackers are present on at least 85% of the top 50,000 websites, contributing to advertising revenue of approximately $225 billion in 2022, this is an unsurprising, yet still disappointing, decision.
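To illustrate the practical difference, here is a hedged TypeScript sketch of how a tracker-blocking extension changes shape between the two manifest versions. The tracker domain is hypothetical, and this is a simplification of what tools like Privacy Badger actually do: under MV2 the extension's own code decides, request by request, what to block; under MV3 it can mostly only hand Chrome a list of declarative rules and hope they cover enough.

```typescript
// Manifest V2 style: the extension inspects every request itself and can
// apply arbitrary logic (heuristics, learned tracker lists, user choices).
chrome.webRequest.onBeforeRequest.addListener(
  (details) => ({ cancel: details.url.includes("tracker.example") }), // hypothetical domain
  { urls: ["<all_urls>"] },
  ["blocking"] // relies on the "webRequestBlocking" permission, removed in MV3
);

// Manifest V3 style: blocking logic must be expressed as declarative rules
// that Chrome itself evaluates; the extension no longer decides per request.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
      condition: {
        urlFilter: "||tracker.example^", // hypothetical domain
        resourceTypes: [chrome.declarativeNetRequest.ResourceType.SCRIPT],
      },
    },
  ],
});
```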

For what it's worth, Apple's Safari browser imposes similar restrictions to allegedly protect Safari users from malicious extensions. While it’s important to protect users from said malicious extensions, it’s equally important to honor their privacy.

Topics API

This year also saw the rollout of Google's planned "Privacy Sandbox" project, which also uses a lot of mealy-mouthed marketing to justify its questionable characteristics. While it will finally get rid of third-party cookies, an honestly good move, it replaces that form of tracking with another called the "Topics API." At best, this reduces the number of parties able to track a user through the Chrome browser (though we aren’t the only privacy experts casting doubt on its so-called benefits). But it consolidates tracking in the hands of a single powerful party, Chrome itself, which then gets to dole out what it learns to advertisers willing to pay. This is just another step in transforming the browser from a user agent into an advertising agent.

Privacy Badger now disables the Topics API by default.
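For the curious, here’s a rough sketch of what the Topics API looks like from a web page’s (or embedded tracker’s) point of view, assuming a Chrome build with the API enabled; the field names follow Chrome’s documentation and may change as the proposal evolves. The Permissions-Policy header shown in the comment is how a site can opt its pages out of topic calculation.

```typescript
// A third party embedded on a page can ask Chrome what the browser thinks
// you're interested in, based on your recent browsing history.
async function fetchAdTopics(): Promise<void> {
  if (!("browsingTopics" in document)) {
    return; // This browser doesn't implement the Topics API.
  }
  // Resolves to up to three coarse interest topics, roughly one per recent week.
  const topics = await (document as any).browsingTopics();
  for (const t of topics) {
    console.log(`topic id ${t.topic} (taxonomy v${t.taxonomyVersion})`);
  }
}

// Sites that want no part of this can send a response header that opts the
// page out of topic calculation entirely:
//
//   Permissions-Policy: browsing-topics=()
```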

YouTube Blocking Access for Users With Ad-Blockers

Most recently, people with ad-blockers began to see a petulant message from YouTube when trying to watch a video. The blocking message gave users a countdown until they would no longer be able to use the site unless they disabled their ad-blockers. Privacy and security benefits be damned. YouTube, a Google-owned company that saw its own all-time high in third-quarter advertising revenue (a meager 8 billion dollars), didn’t bother with an equivocal announcement laden with deceptive language for this one. If you’re on Chrome or a Chromium-based browser, expect YouTube to be broken unless you turn off your ad-blocker.

Privacy Tools > Corporate Paternalism

Obviously, this all sucks. User security shouldn’t be bought by forfeiting privacy; in reality, one is deeply intertwined with the other. All this bad decision-making drives home how important privacy tools are. Privacy Badger is one of many. It’s not just that Privacy Badger is built to protect disempowered users, or that it's a plug-and-play tool working quietly (but ferociously) behind the scenes to halt the tracking industry; it's that it exists in an ecosystem of like-minded privacy projects that complement each other. Where one tool might miss, another homes in.

This year, Privacy Badger also unveiled exciting support projects and new features.

Until we have comprehensive privacy protections in place, until corporate tech stops abusing our desires to not be snooped on, privacy tools must be empowered to make up for these harms. Users deserve the right to choose what privacy means to them, not have that decision made by an advertising company like Google.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Digital Rights for LGBTQ+ People: 2023 Year in Review

1 January 2024 at 08:16

An increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and offline across the globe. Throughout 2023, several countries sought to pass explicitly anti-LGBTQ+ initiatives restricting freedom of expression and privacy. This fuels offline intolerance against LGBTQ+ people, and forces them to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

One growing threat to LGBTQ+ people is data surveillance. Across the U.S., a growing number of states prohibited transgender youths from obtaining gender-affirming health care, and some restricted access for transgender adults. For example, the Texas Attorney General is investigating a hospital for providing gender-affirming health care to transgender youths. We can expect anti-trans investigators to use the tactics of anti-abortion investigators, including seizure of internet browsing histories and private messages.

It is imperative that businesses are prevented from collecting and retaining this data in the first place, so that it cannot later be seized by police and used as evidence. Legislators should start with Rep. Jacobs’ My Body, My Data bill. We also need new laws to ban reverse warrants, which police can use to identify every person who searched for the keywords “how do I get gender-affirming care,” or who was physically located near a trans health clinic. 

Moreover, LGBTQ+ expression was targeted by U.S. student monitoring tools like GoGuardian, Gaggle, and Bark. The tools scan web pages and documents in students’ cloud drives for keywords about topics like sex and drugs, which are subsequently blocked or flagged for review by school administrators. Numerous reports show regular flagging of LGBTQ+ content. This creates a harmful atmosphere for students; for example, some have been outed because of it. In a positive move, Gaggle recently removed LGBTQ+ terms from their keyword list and GoGuardian has done the same. But, LGBTQ+ resources are still commonly flagged for containing words like "sex," "breasts," or "vagina." Student monitoring tools must remove all terms from their blocking and flagging lists that trigger scrutiny and erasure of sexual and gender identity. 

Looking outside the U.S., LGBTQ+ rights were gravely threatened by expansive cybercrime and surveillance legislation in the Middle East and North Africa throughout 2023. For example, the Cybercrime Law of 2023 in Jordan, introduced as part of King Abdullah II’s modernization reforms, will negatively impact LGBTQ+ people by restricting encryption and anonymity in digital communications, and criminalizing free speech through overly broad and vaguely defined terms. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. 

For many countries across Africa, and indeed the world, anti-LGBTQ+ discourses and laws can be traced back to colonial rule. These laws have been used to imprison, harass, and intimidate LGBTQ+ individuals. In May 2023, Ugandan President Yoweri Museveni signed into law the extremely harsh Anti-Homosexuality Act 2023. It imposes, for example, a 20-year sentence for the vaguely worded offense of “promoting” homosexuality. Such laws are not only an assault on the rights of LGBTQ+ people to exist, but also a grave threat to freedom of expression. They lead to more censorship and surveillance of online LGBTQ+ speech, the latter of which will lead to more self-censorship, too.

Ghana’s draft Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill 2021 goes much further. It threatens up to five years in jail for anyone who publicly identifies as LGBTQ+ or as “any sexual or gender identity that is contrary to the binary categories of male and female.” The bill assigns criminal penalties for speech posted online, and threatens online platforms—specifically naming Twitter, Facebook, and Instagram—with criminal penalties if they do not restrict pro-LGBTQ+ content. If passed, the bill would also let Ghanaian authorities probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech, or create lists of pro-LGBTQ+ supporters to be arrested upon entry. EFF this year joined other human rights groups to oppose this law.

Taking inspiration from Uganda and Ghana, a new proposed law in Kenya—the Family Protection Bill 2023—would impose ten years imprisonment for homosexuality, and life imprisonment for “aggravated homosexuality.” The bill also allows for the expulsion of refugees and asylum seekers who breach the law, irrespective of whether the conduct is connected with asylum requests. Kenya today is the sole country in East Africa to accept LGBTQ+ individuals seeking refuge and asylum without questioning their sexual orientation; sadly, that may change. EFF has called on the authorities in Kenya and Ghana to reject their respective repulsive bills, and for authorities in Uganda to repeal the Anti-Homosexuality Act.

2023 was a challenging year for the digital rights of LGBTQ+ people. But we are optimistic that in the year to come, LGBTQ+ people and their allies, working together online and off, will make strides against censorship, surveillance, and discrimination.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Victory! Police Drone Footage is Not Categorically Exempt From California’s Public Records Law

3 January 2024 at 13:20

Video footage captured by police drones sent in response to 911 calls cannot be kept entirely secret from the public, a California appellate court ruled last week.

The decision by the California Court of Appeal for the Fourth District came after a journalist sought access to videos created by Chula Vista Police Department’s “Drones as First Responders” (DFR) program. The police department is the first law enforcement agency in the country to use drones to respond to emergency calls, and several other agencies across the U.S. have since adopted similar models.

After the journalist, Arturo Castañares of La Prensa, sued, the trial court ruled that Chula Vista police could withhold all footage because the videos were exempt from disclosure as law enforcement investigatory records under the California Public Records Act. Castañares appealed.

EFF, along with the First Amendment Coalition and the Reporters Committee for Freedom of the Press, filed a friend-of-the-court brief in support of Castañares, arguing that categorically excluding all drone footage from public disclosure could have troubling consequences on the public’s ability to understand and oversee the police drone program.

Drones, also called unmanned aerial vehicles (UAVs) or unmanned aerial systems (UAS), are relatively inexpensive devices that police use to remotely surveil areas. Historically, law enforcement has used small systems, such as quadrotors, for situational awareness during emergency situations, for capturing crime scene footage, or for monitoring public gatherings, such as parades and protests. DFR programs represent a fundamental change in strategy, with police responding to a much, much larger number of situations with drones, resulting in pervasive, if not persistent, surveillance of communities.

Because drones raise distinct privacy and free expression concerns, foreclosing public access to their footage would make it difficult to assess whether police are following their own rules about when and whether they record sensitive places, such as people’s homes or public protests.

The appellate court agreed that drone footage is not categorically exempt from public disclosure. In reversing the trial court’s decision, the California Court of Appeal ruled that although some 911 calls are likely part of law enforcement investigation or at least are used to determine whether a crime occurred, not all 911 calls involve crimes.

“For example, a 911 call about a mountain lion roaming a neighborhood, a water leak, or a stranded motorist on the freeway could warrant the use of a drone but do not suggest a crime might have been committed or is in the process of being committed,” the court wrote.

Because it’s possible that some of Chula Vista’s drone footage involves scenarios in which no crime is committed or suspected, the police department cannot categorically withhold every moment of video footage from the public.

The appellate court sent the case back to the trial court and ordered it and the police department to take a more nuanced approach, determining whether each underlying call for service involved a crime or an initial investigation into a potential crime.

“The drone video footage should not be treated as a monolith, but rather, it can be divided into separate parts corresponding to each specific call,” the court wrote. “Then each distinct video can be evaluated under the CPRA in relation to the call triggering the drone dispatch.”

This victory sends a message to other agencies in California adopting copycat programs, such as the Beverly Hills Police Department, Irvine Police Department, and Fremont Police Department, that they can’t abuse public records laws to shield every second of drone footage from public scrutiny.

EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers’ Privacy and Free Expression

4 January 2024 at 13:41

As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone’s viewing history, EFF argued in court last month.

For decades, the Video Privacy Protection Act (VPPA) has safeguarded people’s viewing habits by generally requiring services that offer videos to the public to get their customers’ written consent before disclosing that information to the government or a private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users’ viewing habits, too.

The VPPA, however, is under attack by Patreon. That service for content creators and viewers is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook.

Patreon argues that even if it did violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law’s applications violate the First Amendment, judged in relation to the law’s plainly legitimate sweep.  Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of the Communications Decency Act’s (CDA) online speech restrictions because it placed an “unacceptably heavy burden on protected speech.”

EFF is second to none in fighting for everyone’s First Amendment rights in court, including internet users (in Reno mentioned above) and the companies that host our speech online. But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of internet users who benefit from the VPPA’s protections.

As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon’s argument is wrong because the VPPA directly advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance.

“The VPPA provides Americans with critical, private space to view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion,” we wrote. “That breathing room is often a catalyst for people’s free expression.”

As the brief recounts, courts have protected against government efforts to learn people’s book buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people’s ability to privately consume media advances the First Amendment’s purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people’s video viewing habits are intensely private, because the data can reveal intimate details about our personalities, politics, religious beliefs, and values.

Patreon’s First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, “[t]he VPPA’s purpose, application, and enforcement is overwhelmingly focused on regulating the disclosure of a person’s video viewing history in the course of a commercial transaction between the provider and user.” In other words, the legitimate sweep of the VPPA does not violate the First Amendment because generally there is no public interest in disclosing any one person’s video viewing habits that a company learns purely because it is in the business of selling video access to the public.

There is a better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF’s brief explains, should a video provider face liability under the VPPA for disclosing a customer’s video viewing history, they can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern.

Indeed, courts have recognized that certain applications of privacy laws, such as the Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But generally courts address the First Amendment by invalidating the case-specific application of those laws, rather than invalidating them entirely.

“In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course,” EFF wrote. “This approach accommodates the broad and legitimate sweep of those privacy protections while vindicating speakers’ First Amendment rights.”

Patreon's argument would see the VPPA gutted—an enormous loss for privacy and free expression for the public. The court should protect everyone’s viewing history and uphold the VPPA.

You can read our brief here.

AI Watermarking Won't Curb Disinformation

5 January 2024 at 13:46

Generative AI allows people to produce piles upon piles of images and words very quickly. It would be nice if there were some way to reliably distinguish AI-generated content from human-generated content. It would help people avoid endlessly arguing with bots online, or believing what a fake image purports to show. One common proposal is that big companies should incorporate watermarks into the outputs of their AIs. For instance, this could involve taking an image and subtly changing many pixels in a way that’s undetectable to the eye but detectable to a computer program. Or it could involve swapping words for synonyms in a predictable way so that the meaning is unchanged, but a program could readily determine the text was generated by an AI.

Unfortunately, watermarking schemes are unlikely to work. So far most have proven easy to remove, and it’s likely that future schemes will have similar problems.

One kind of watermark is already common for digital images. Stock image sites often overlay text on an image that renders it mostly useless for publication. This kind of watermark is visible and is slightly challenging to remove since it requires some photo editing skills.

Images can also have metadata attached by a camera or image processing program, including information like the date, time, and location a photograph was taken, the camera settings, or the creator of an image. This metadata is unobtrusive but can be readily viewed with common programs. It’s also easily removed from a file. For instance, social media sites often automatically remove metadata when people upload images, both to prevent people from accidentally revealing their location and simply to save storage space.

A useful watermark for AI images would need two properties: 

  • It would need to continue to be detectable after an image is cropped, rotated, or edited in various ways (robustness). 
  • It couldn’t be conspicuous like the watermark on stock image samples, because the resulting images wouldn’t be of much use to anybody (imperceptibility).

One simple technique is to manipulate the least perceptible bits of an image. For instance, to a human viewer, two squares filled with the colors #93c47d and #93c57d look like the same shade of green.

But to a computer it’s obvious that they differ by a single bit: #93c47d vs. #93c57d. Each pixel of an image is represented by a certain number of bits, and some of them make more of a perceptual difference than others. By manipulating those least-important bits, a watermarking program can create a pattern that viewers won’t see, but a watermarking-detecting program will. If that pattern repeats across the whole image, the watermark is even robust to cropping. However, this method has one clear flaw: rotating or resizing the image is likely to accidentally destroy the watermark.
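As a concrete (and deliberately simplistic) illustration, here is roughly what embedding such a watermark could look like, assuming raw RGBA pixel data like the kind you would get from a canvas ImageData object; the bit pattern and the choice of the green channel are arbitrary assumptions for this sketch.

```typescript
// A minimal least-significant-bit watermark: repeat a bit pattern across the
// image by overwriting the lowest bit of each pixel's green channel. The
// change is invisible to people but trivially readable by a detector that
// knows the pattern.
function embedLsbWatermark(pixels: Uint8ClampedArray, pattern: number[]): void {
  let bitIndex = 0;
  for (let i = 0; i < pixels.length; i += 4) { // RGBA: 4 bytes per pixel
    const bit = pattern[bitIndex % pattern.length] & 1;
    pixels[i + 1] = (pixels[i + 1] & ~1) | bit; // green channel is offset +1
    bitIndex++;
  }
}
```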

There are more sophisticated watermarking proposals that are robust to a wider variety of common edits. However, proposals for AI watermarking must pass a tougher challenge. They must be robust against someone who knows about the watermark and wants to eliminate it. The person who wants to remove a watermark isn’t limited to common edits, but can directly manipulate the image file. For instance, if a watermark is encoded in the least important bits of an image, someone could remove it by simply setting all the least important bits to 0, or to a random value (1 or 0), or to a value automatically predicted based on neighboring pixels. Just like adding a watermark, removing a watermark this way gives an image that looks basically identical to the original, at least to a human eye.
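Here is the flip side, sketched under the same assumptions as above: a few lines that wipe out any least-significant-bit watermark while leaving the image looking identical to a human.

```typescript
// Destroy any LSB-based watermark by replacing every least significant bit
// with a random value. Zeroing the bit (pixels[i] & ~1) would work just as well.
function stripLsbWatermark(pixels: Uint8ClampedArray): void {
  for (let i = 0; i < pixels.length; i++) {
    pixels[i] = (pixels[i] & ~1) | (Math.random() < 0.5 ? 1 : 0);
  }
}
```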

Coming at the problem from the opposite direction, some companies are working on ways to prove that an image came from a camera (“content authenticity”). Rather than marking AI generated images, they add metadata to camera-generated images, and use cryptographic signatures to prove the metadata is genuine. This approach is more workable than watermarking AI generated images, since there’s no incentive to remove the mark. In fact, there’s the opposite incentive: publishers would want to keep this metadata around because it helps establish that their images are “real.” But it’s still a fiendishly complicated scheme, since the chain of verifiability has to be preserved through all software used to edit photos. And most cameras will never produce this metadata, meaning that its absence can’t be used to prove a photograph is fake.
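To show the general shape of that idea (and only the general shape; real schemes such as C2PA involve certificate chains, manifests, and much more), here is a hedged sketch in which a camera signs the image bytes plus its capture metadata and a verifier checks that signature. It assumes a Node.js environment; the key handling and metadata fields are illustrative stand-ins, not any vendor’s actual format.

```typescript
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";

// In a real system the private key would live inside the camera hardware and
// the public key would be distributed via a certificate; here we just generate
// a throwaway key pair for illustration.
const { privateKey, publicKey } = generateKeyPairSync("ec", { namedCurve: "P-256" });

function signCapture(imageBytes: Buffer, metadata: object): string {
  const payload = Buffer.concat([imageBytes, Buffer.from(JSON.stringify(metadata))]);
  return createSign("sha256").update(payload).sign(privateKey, "base64");
}

function verifyCapture(imageBytes: Buffer, metadata: object, signature: string): boolean {
  const payload = Buffer.concat([imageBytes, Buffer.from(JSON.stringify(metadata))]);
  return createVerify("sha256").update(payload).verify(publicKey, signature, "base64");
}
```

In practice the hard part isn’t the cryptography; it’s keeping that signature chain intact through every editing tool, as noted above.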

Comparing watermarking vs content authenticity, watermarking aims to identify or mark (some) fake images; content authenticity aims to identify or mark (some) real images. Neither approach is comprehensive, since most of the images on the Internet will have neither a watermark nor content authenticity metadata.

                      Watermarking   Content authenticity
AI images             Marked         Unmarked
(Some) camera images  Unmarked       Marked
Everything else       Unmarked       Unmarked

 

Text-based Watermarks

The watermarking problem is even harder for text-based generative AI. Similar techniques can be devised. For instance, an AI could boost the probability of certain words, giving itself a subtle textual style that would go unnoticed most of the time, but could be recognized by a program with access to the list of words. This would effectively be a computer version of determining the authorship of the twelve disputed essays in The Federalist Papers by analyzing Madison’s and Hamilton’s habitual word choices.
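Here is a hedged sketch of how detection could work for that kind of word-choice watermark: the detector knows the secret list of favored words and checks whether they appear more often than they should. The word list and scoring below are made-up stand-ins, not any vendor’s actual scheme.

```typescript
// Words the (hypothetical) generator was biased toward.
const GREEN_LIST = new Set(["moreover", "notably", "utilize", "robust"]);

// Returns the fraction of words drawn from the green list; text scoring well
// above the rate expected in ordinary prose is flagged as likely watermarked.
function watermarkScore(text: string): number {
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  if (words.length === 0) return 0;
  const hits = words.filter((w) => GREEN_LIST.has(w)).length;
  return hits / words.length;
}
```

A round of rewording, as discussed below, drags that score right back toward the baseline.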

But creating an indelible textual watermark is a much harder task than telling Hamilton from Madison, since the watermark must be robust to someone modifying the text trying to remove it. Any watermark based on word choice is likely to be defeated by some amount of rewording. That rewording could even be performed by an alternate AI, perhaps one that is less sophisticated than the one that generated the original text, but not subject to a watermarking requirement.

There’s also a problem of whether the tools to detect watermarked text are publicly available or are secret. Making detection tools publicly available gives an advantage to those who want to remove watermarking, because they can repeatedly edit their text or image until the detection tool gives an all clear. But keeping them a secret makes them dramatically less useful, because every detection request must be sent to whatever company produced the watermarking. That would potentially require people to share private communication if they wanted to check for a watermark. And it would hinder attempts by social media companies to automatically label AI-generated content at scale, since they’d have to run every post past the big AI companies.

Since text output from current AIs isn’t watermarked, services like GPTZero and TurnItIn have popped up, claiming to be able to detect AI-generated content anyhow. These detection tools are so inaccurate as to be dangerous, and have already led to false charges of plagiarism.

Lastly, if AI watermarking is to prevent disinformation campaigns sponsored by states, it’s important to keep in mind that those states can readily develop modern generative AI, and probably will in the near future. A state-sponsored disinformation campaign is unlikely to be so polite as to watermark its output.

Watermarking of AI generated content is an easy-sounding fix for the thorny problem of disinformation. And watermarks may be useful in understanding reshared content where there is no deceptive intent. But research into adversarial watermarking for AI is just beginning, and while there’s no strong reason to believe it will succeed, there are some good reasons to believe it will ultimately fail.

EFF Urges Pennsylvania Supreme Court to Find Keyword Search Warrant Unconstitutional

5 January 2024 at 14:21
These Dragnet Searches Violate the Privacy of Millions of Americans

SAN FRANCISCO—Keyword warrants that let police indiscriminately sift through search engine databases are unconstitutional dragnets that target free speech, lack particularity and probable cause, and violate the privacy of countless innocent people, the Electronic Frontier Foundation (EFF) and other organizations argued in a brief filed today to the Supreme Court of Pennsylvania. 

Everyone deserves to search online without police looking over their shoulder, yet millions of innocent Americans’ privacy rights are at risk in Commonwealth v. Kurtz—only the second case of its kind to reach a state’s highest court. The brief filed by EFF, the National Association of Criminal Defense Lawyers (NACDL), and the Pennsylvania Association of Criminal Defense Lawyers (PACDL) challenges the constitutionality of a keyword search warrant issued by the police to Google. The case involves a massive invasion of Google users’ privacy, and unless the lower court’s ruling is overturned, it could be applied to any user using any search engine. 

“Keyword search warrants are totally incompatible with constitutional protections for privacy and freedom of speech and expression,” said EFF Surveillance Litigation Director Andrew Crocker. “All keyword warrants—which target our speech when we seek information on a search engine—have the potential to implicate innocent people who just happen to be searching for something an officer believes is somehow linked to a crime. Dragnet warrants that target speech simply have no place in a democracy.” 

Users have come to rely on search engines to routinely seek answers to sensitive or unflattering questions that they might never feel comfortable asking a human confidant. Google keeps detailed information on every search query it receives, however, resulting in a vast record of users’ most private and personal thoughts, opinions, and associations that police seek to access by merely demanding the identities of all users who searched for specific keywords. 

Because this data is so broad and detailed, keyword search warrants are especially concerning: Unlike typical warrants for electronic information, these do not target specific people or accounts. Instead, they require a provider to search its entire reserve of user data to identify any and all users or devices who searched for words or phrases specified by police. As in this case, the police generally have no identified suspects when they seek such a warrant; instead, the sole basis is the officer’s hunch that the perpetrator might have searched for something related to the crime.  

This violates the Pennsylvania Constitution’s Article I, Section 8 and the Fourth Amendment to the U.S. Constitution, EFF’s brief argued, both of which were inspired by 18th-century writs of assistance—general warrants that let police conduct exploratory rummaging through a person’s belongings. These keyword search warrants also are especially harmful because they target protected speech and the related right to receive information, the brief argued. 

"Keyword search warrants are digital dragnets giving the government permission to rummage through our most private information, and the Pennsylvania Supreme Court should find them unconstitutional,” said NACDL Fourth Amendment Center Litigation Director Michael Price. 

“Search engines are an indispensable tool for finding information on the Internet, and the ability to use them—and use them anonymously—is critical to a free society,” said Crocker. “If providers can be forced to disclose users’ search queries in response to a dragnet warrant, it will chill users from seeking out information about anything that police officers might conceivably choose as a searchable keyword.” 

For the brief: https://www.eff.org/document/commonwealth-v-kurtz-amicus-brief-pennsylvania-supreme-court-1-5-2024

For a similar case in Colorado: https://www.eff.org/deeplinks/2023/10/colorado-supreme-court-upholds-keyword-search-warrant 

Contact: Andrew Crocker, Surveillance Litigation Director

Craig Newmark Philanthropies – Celebrating 30 Years of Support for Digital Rights

8 January 2024 at 19:16

EFF has been awarded a new $200,000 grant from Craig Newmark Philanthropies to strengthen our cybersecurity work in 2024. We are especially grateful this year, as it marks 30 years of donations from Craig Newmark, who joined as an EFF member just three years after our founding and four years before he launched the popular website craigslist.  

Over the past several years, grants from Craig Newmark Philanthropies have focused on supporting trustworthy journalism to defend our democracy and hold the powerful accountable, as well as cybersecurity to protect consumers and journalists alike from malware and other dangers online. With this funding, EFF has built networks to help defend against disinformation warfare, fought online harassment, strengthened ethical journalism, and researched state-sponsored malware, cyber-mercenaries, and consumer spyware. EFF’s Threat Lab conducts research on surveillance technologies used to target journalists, communities, activists, and individuals. For example, we helped co-found, and continue to provide leadership to, the Coalition Against Stalkerware. EFF also created and updated tools to educate and train working and student journalists alike to keep themselves safe from adversarial attacks. In addition to maintaining our popular Surveillance Self-Defense guide, we scaled up our Report Back tool for student journalists, cybersecurity students, and grassroots volunteers to collaboratively study technology in society. 

In 2006, EFF recognized craigslist for cultivating a pervasive culture of trust and maintaining its public service charge even as it became one of the most popular websites in the world. Though Craig has retired from craigslist, this ethos continues through his philanthropic giving, which is “focused on a commitment to fairness and doing right by others.” EFF thanks Craig Newmark for his 30 years of financial support, which has helped us grow to become the leading nonprofit defending digital privacy, free speech, and innovation today. 

UAE Confirms Trial Against 84 Detainees; Ahmed Mansoor Suspected Among Them

10 January 2024 at 05:51

The UAE confirmed this week that it has placed 84 detainees on trial, on charges of “establishing another secret organization for the purpose of committing acts of violence and terrorism on state territory.” Suspected to be among those facing trial is award-winning human rights defender Ahmed Mansoor, also known as “the million dollar dissident,” as he was once the target of exploits that exposed major security flaws in Apple’s iOS operating system—the kind of “zero-day” vulnerabilities that fetch seven figures on the exploit market. Mansoor drew the ire of UAE authorities for criticizing the country’s internet censorship and surveillance apparatus and for calling for a free press and democratic freedoms in the country.

Having previously been arrested in 2011 and sentenced to three years’ imprisonment for “insulting officials,” Ahmed Mansoor was released after eight months due to a presidential pardon influenced by international pressure. Later, Mansoor faced new speech-related charges for using social media to “publish false information that harms national unity.” During this period, authorities held him in an unknown location for over a year, deprived of legal representation, before sentencing him in May 2018 to ten years in prison under the UAE’s draconian cybercrime law. We have long advocated for his release, and are joined in doing so by hundreds of digital and human rights organizations around the world.

At the recent COP28 climate talks, Human Rights Watch, Amnesty International, and other activists conducted a protest inside the UN-protected “blue zone” to raise awareness of Mansoor’s plight, as well as the cases of UAE detainee Mohamed El-Siddiq and Egyptian-British activist Alaa Abd El Fattah. At the same time, a dissident group reported that the UAE was proceeding with the trial against 84 of its detainees.

We reiterate our call for Ahmed Mansoor’s freedom, and take this opportunity to raise further awareness of the oppressive nature of the legislation that was used to imprison him. The UAE’s use of its criminal law to silence those who speak truth to power is another example of how counter-terrorism laws restrict free expression and justify disproportionate state surveillance. This concern is not hypothetical; a 2023 study by the Special Rapporteur on counter-terrorism found widespread and systematic abuse of civil society and civic space through the use of similar laws supposedly designed to counter terrorism. Moreover, and problematically, references related to terrorism are still included in the preamble of the latest version of the proposed United Nations Cybercrime Treaty, currently being negotiated by more than 190 member states, even though there is no agreed-upon definition of terrorism in international law. If approved as currently written, the UN Cybercrime Treaty could substantively reshape international criminal law and bolster cross-border police surveillance powers to access and share users’ data, implicating the human rights of billions of people worldwide. It could also enable states to justify repressive measures that overly restrict free expression and peaceful dissent.

Privacy Badger Puts You in Control of Widgets

10 January 2024 at 09:34

The latest version of Privacy Badger [1] replaces embedded tweets with click-to-activate placeholders. This is part of Privacy Badger's widget replacement feature, where certain potentially useful widgets are blocked and then replaced with placeholders. This protects privacy by default while letting you restore the original widget whenever you want it or need it for the page to function.

Websites often include external elements such as social media buttons, comments sections, and video players. Although potentially useful, these “widgets” often track your behavior. The tracking happens regardless of whether you click on the widget. If you see a widget, the widget sees you back.

This is where Privacy Badger's widget replacement comes in. When blocking certain social buttons and other potentially useful widgets, Privacy Badger replaces them with click-to-activate placeholders. You will not be tracked by these replacements unless you explicitly choose to activate them.

A screenshot of Privacy Badger’s widget placeholder. The text inside the placeholder states that “Privacy Badger has replaced this X (Twitter) widget”. The words “this X (Twitter) widget” are a link. There are two buttons inside the placeholder, “Allow once” and “Always allow on this site.”

Privacy Badger’s placeholders tell you exactly what happened while putting you in control.
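To give a flavor of how click-to-activate replacement can work in general (this is a simplified illustration, not Privacy Badger’s actual implementation), the sketch below finds embedded third-party iframes, remembers their URLs, and swaps them for a button that loads the real widget only when clicked; “platform.twitter.com” is used here only as an example embed host.

```typescript
// Replace matching third-party iframes with click-to-activate placeholders.
document.querySelectorAll<HTMLIFrameElement>('iframe[src*="platform.twitter.com"]')
  .forEach((frame) => {
    const originalSrc = frame.src;
    const placeholder = document.createElement("button");
    placeholder.textContent = "Privacy placeholder: click to load this widget";
    placeholder.addEventListener("click", () => {
      const restored = document.createElement("iframe");
      restored.src = originalSrc; // The third party only sees you if you opt in.
      placeholder.replaceWith(restored);
    });
    frame.replaceWith(placeholder);
  });
```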

Changing the UI of a website is a bold move for a browser extension. That’s what Privacy Badger is all about, though: making strong choices on behalf of user privacy and revealing how that privacy is betrayed by businesses online.

Privacy Badger isn’t the first software to replace embedded widgets with placeholders for privacy or security purposes. As early as 2004, users could install Flashblock, an extension that replaced embedded Adobe Flash plugin content, a notoriously insecure technology.

A screenshot of Flashblock’s Flash plugin placeholder.

Flashblock’s Flash plugin placeholders lacked user-friendly buttons but got the (Flash blocking) job done.

Other extensions, and eventually even browsers, followed Flashblock in offering similar plugin-blocking placeholders. The need to do this declined as plugin use dropped over time, but a new concern rose to prominence. Privacy was under attack as social media buttons started spreading everywhere.

This brings us to ShareMeNot. Developed in 2012 as a research tool to investigate how browser extensions might enforce privacy on behalf of the user, ShareMeNot replaced social media “share” buttons with click-to-activate placeholders. In 2014, ShareMeNot became a part of Privacy Badger. While the emphasis has shifted away from social media buttons to interactive widgets like video players and comments sections, Privacy Badger continues to carry on ShareMeNot's legacy.

Unfortunately, widget replacement is not perfect. The placeholder’s buttons may not work sometimes, or the placeholder may appear in the wrong place or may fail to appear at all. We will keep fixing and improving widget replacement. You can help by letting us know when something isn’t working right.

A screenshot of Privacy Badger’s popup. Privacy Badger’s browser toolbar icon as well as the “Report broken site” button are highlighted.

To report problems, first click on Privacy Badger’s icon in your browser toolbar. Privacy Badger’s “popup” window will open. Then, click the Report broken site button in the popup.

Pro tip #1: Because our YouTube replacement is not quite ready to be enabled by default, embedded YouTube players are not yet blocked or replaced. If you like though, you can try our YouTube replacement now.

A screenshot of Privacy Badger’s options page with the Tracking Domains tab selected. The list of tracking domains was filtered for “youtube.com”; the slider for youtube.com was moved to the “Block entirely” position.

To opt in, visit Privacy Badger's options page, select the “Tracking Domains” tab, search for “youtube.com”, and move the toggle for youtube.com to the Block entirely position.

Pro tip #2: The most private way to activate a replaced widget is to use the “this [YouTube] widget” link (inside the “Privacy Badger has replaced this [YouTube] widget” text), when the link is available. Going through the link, as opposed to one of the “Allow” buttons, means the widget provider doesn't necessarily get to know what site you activated the widget on. You can also right-click the link to save the widget URL; no need to visit the link or to use browser developer tools.

A screenshot of Privacy Badger’s widget placeholder. The “this YouTube widget” link is highlighted.

Click the link to open the widget in a new tab.

Privacy tools should be measured not only by efficacy, but also ease of use. As we write in the FAQ, we want Privacy Badger to function well without any special knowledge or configuration by the user. Privacy should be made easy, rather than gatekept for “power users.” Everyone should be able to decide for themselves when and with whom they want to share information. Privacy Badger fights to restore this control, biting back at sneaky non-consensual surveillance.

To install Privacy Badger, visit privacybadger.org. Thank you for using Privacy Badger!

 

  • [1] Privacy Badger version 2023.12.1

EFF Unveils Its New Street Level Surveillance Hub

10 January 2024 at 13:56
The Updated and Expanded Hub Sheds New Light on the Digital Surveillance Dragnet that Law Enforcement Deploys Against Everyone

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) today unveiled its new Street Level Surveillance hub, a standalone website featuring expanded and updated content on various technologies that law enforcement agencies commonly use to invade Americans’ privacy. 

The hub has new or updated pages on automated license plate readers, biometric surveillance, body-worn cameras, camera networks, cell-site simulators, drones and robots, face recognition, electronic monitoring, gunshot detection, forensic extraction tools, police access to the Internet of Things, predictive policing, community surveillance apps, real-time location tracking, social media monitoring, and police databases.  

It also features links to the latest articles by EFF’s Street Level Surveillance working group, consisting of attorneys, policy analysts, technologists, and activists with extensive experience in this field. 

“People are surveilled by police at more times and in more ways than ever before, and understanding this panopticon is the first step in protecting our rights,” said EFF Senior Policy Analyst Dr. Matthew Guariglia. “Our new hub is a ‘Field Guide to Police Surveillance,’ providing a reference source on recognizing the most-used police spy technology. But more than that, it is a vital, constantly updated news feed offering cutting-edge, detailed analysis of law enforcement’s uses and abuses of these devices.” 

The new hub also interfaces with several of EFF’s ongoing projects, including: 

  • The Atlas of Surveillance, EFF’s collaboration with the Reynolds School of Journalism at the University of Nevada, Reno to map more than 12,000 police surveillance technologies in use across America; and 
  • Spot the Surveillance, an open-source educational virtual reality tool to help people identify street-level surveillance in their community. 

"We hope community groups, advocacy organizations, defense attorneys, and concerned individuals will use the hub to stay abreast of the latest legal cases and technological developments, and share their own stories with us,” Guariglia said. 

Visit EFF’s new Street Level Surveillance hub at https://sls.eff.org/ 

Contact: Matthew Guariglia, Senior Policy Analyst

FTC Bars X-Mode from Selling Sensitive Location Data

23 January 2024 at 18:51

Update, January 23, 2024: Another week, another win! The FTC announced a successful enforcement action against another location data broker, InMarket.

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder, from advertisers to police.

So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).

The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals. The company’s customers include marketers and government contractors.

The FTC’s proposed order contains a strong set of rules to protect the public from this company.

General rules for all location data:

  • X-Mode cannot collect, use, maintain, or disclose a person’s location data absent their opt-in consent. This includes location data the company collected in the past.
  • The order defines “location data” as any data that may reveal the precise location of a person or their mobile device, including from GPS, cell towers, WiFi, and Bluetooth.
  • X-Mode must adopt policies and technical measures to prevent recipients of its data from using it to locate a political demonstration, an LGBTQ+ institution, or a person’s home.
  • X-Mode must, on request of a person, delete their location data, and inform them of every entity that received their location data.

Heightened rules for sensitive location data:

  • X-Mode cannot sell, disclose, or use any “sensitive” location data.
  • The order defines “sensitive” locations to include medical facilities (such as family planning centers), religious institutions, union offices, schools, shelters for domestic violence survivors, and immigrant services.
  • To implement this rule, the company must develop a comprehensive list of sensitive locations.
  • However, X-Mode can use sensitive location data if it has a direct relationship with a person related to that data, the person provides opt-in consent, and the company uses the data to provide a service the person directly requested.

As the FTC Chair and Commissioners explain in a statement accompanying this order’s announcement:

The explosion of business models that monetize people’s personal information has resulted in routine trafficking and marketing of Americans’ location data. As the FTC has stated, openly selling a person’s location data to the highest bidder can expose people to harassment, stigma, discrimination, or even physical violence. And, as a federal court recently recognized, an invasion of privacy alone can constitute “substantial injury” in violation of the law, even if that privacy invasion does not lead to further or secondary harm.

X-Mode has disputed the implications of the FTC’s statements regarding the settlement, and asserted that the FTC did not find an instance of data misuse.

The FTC Act bans “unfair or deceptive acts or practices in or affecting commerce.” Under the Act, a practice is “unfair” if: (1) the practice “is likely to cause substantial injury to consumers”; (2) the practice “is not reasonably avoidable by consumers themselves”; and (3) the injury is “not outweighed by countervailing benefits to consumers or to competition.” The FTC has laid out a powerful case that X-Mode’s brokering of location data is unfair and thus unlawful.

The FTC’s enforcement action against X-Mode sends a strong signal that other location data brokers should take a hard look at their own business model or risk similar legal consequences.

The FTC has recently taken many other welcome actions to protect data privacy from corporate surveillance. In 2023, the agency limited Rite Aid’s use of face recognition, and fined Amazon’s Ring for failing to secure its customers’ data. In 2022, the agency brought an unfair business practices claim against another location data broker, Kochava, and began exploring issuance of new rules against commercial data surveillance.

EFF’s 2024 In/Out List

19 January 2024 at 09:46

Since EFF was formed in 1990, we’ve been working hard to protect digital rights for all. And as each year passes, we’ve come to understand the challenges and opportunities a little better, as well as what we’re not willing to accept. 

Accordingly, here’s what we’d like to see a lot more of, and a lot less of, in 2024.


IN

1. Affordable and future-proof internet access for all

EFF has long advocated for affordable, accessible, and future-proof internet access for all. We cannot accept a future where the quality of our internet access is determined by geographic, socioeconomic, or otherwise divided lines. As the online aspects of our work, health, education, entertainment, and social lives increase, EFF will continue to fight for a future where the speed of your internet connection doesn’t stand in the way of these crucial parts of life.

2. A privacy-first agenda to prevent mass collection of our personal information

Many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Vast numbers of companies collect data about who we are, where we go, what we do, what we read, who we communicate with, and so on. They use our data in thousands of ways and often sell it to anyone who wants it—including law enforcement. So whatever online harms we want to alleviate, we can do it better, with a broader impact, if we do privacy first.

3. Decentralized social media platforms to ensure full user control over what we see online

While the internet began as a loose affiliation of universities and government bodies, the digital commons has been privatized and consolidated into a handful of walled gardens. But in the past few years, there's been an accelerating swing back toward decentralization as users are fed up with the concentration of power, and the prevalence of privacy and free expression violations. So, many people are fleeing to smaller, independently operated projects. We will continue walking users through decentralized services in 2024.

4. End-to-end encrypted messaging services, turned on by default and available always

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. But governments across the world are trying to erode this by scanning for all content all the time. As we’ve said many times, there is no middle ground to content scanning, and no “safe backdoor” if the internet is to remain free and private. Mass scanning of peoples’ messages is wrong, and at odds with human rights. 

5. The right to free expression online with minimal barriers and without borders

New technologies and widespread internet access have radically enhanced our ability to express ourselves, criticize those in power, gather and report the news, and make, adapt, and share creative works. Vulnerable communities have also found space to safely meet, grow, and make themselves heard without being drowned out by the powerful. No government or corporation should have the power to decide who gets to speak and who doesn’t. 

OUT

1. Use of artificial intelligence and automated systems for policing and surveillance

Predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and quite simply don’t work. EFF has long called for a ban on predictive policing, and we’ll continue to monitor the rapid rise of law enforcement’s use of machine learning, including harvesting the data that other “autonomous” devices collect and automating important decision-making processes that guide policing and dictate people’s futures in the criminal justice system.

2. Ad surveillance based on the tracking of our online behaviors 

Our phones and other devices process vast amounts of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. This often impacts marginalized communities the most. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights. 

3. Speech and privacy restrictions under the guise of "protecting the children"

For years, government officials have raised concerns that online services don’t do enough to tackle illegal content, particularly child sexual abuse material. Their solution? Bills that ostensibly seek to make the internet safer, but instead achieve the exact opposite by requiring websites and apps to proactively prevent harmful content from appearing on messaging services. This leads to the universal scanning of all user content, all the time, and functions as a 21st-century form of prior restraint—violating the very essence of free speech.

4. Unchecked cross-border data sharing disguised as cybercrime protections 

Personal data must be safeguarded against exploitation by any government to prevent abuse of power and transnational repression. Yet, the broad scope of the proposed UN Cybercrime Treaty could be exploited for covert surveillance of human rights defenders, journalists, and security researchers. As the Treaty negotiations approach their conclusion, we are advocating against granting broad cross-border surveillance powers for investigating any alleged crime, ensuring it doesn't empower regimes to surveil individuals in countries where criticizing the government or other speech-related activities are wrongfully deemed criminal.

5. Internet access being used as a bargaining chip in conflicts and geopolitical battles

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. The internet enables the flow of information to remain active and alert to new realities. In wartime, being able to communicate may ultimately mean the difference between life and death. Shutting down access aids state violence and suppresses free speech. Access to the internet shouldn't be used as a bargaining chip in geopolitical battles.

Companies Make it Too Easy for Thieves to Impersonate Police and Steal Our Data

19 January 2024 at 11:29

For years, people have been impersonating police online in order to get companies to hand over incredibly sensitive personal information. Reporting by 404 Media recently revealed that Verizon handed over the address and phone logs of an individual to a stalker who pretended to be a police officer and produced only a PDF of a fake warrant. Worse, the imposter wasn’t particularly convincing. His request was missing a form that is required for search warrants from his state. He used the name of a police officer that did not exist in the department he claimed to be from. And he used a Proton Mail account, which any person online can use, rather than an official government email address.

Likewise, bad actors have used breached law enforcement email accounts or domain names to send fake warrants, subpoenas, or “Emergency Data Requests” (which police can send without judicial oversight to get data quickly in supposedly life or death situations). Impersonating police to get sensitive information from companies isn’t just the realm of stalkers and domestic abusers; according to Motherboard, bounty hunters and debt collectors have also used the tactic.

We have two very big entwined problems. The first is the “collect it all” business model of too many companies, which creates vast reservoirs of personal information stored in corporate data servers, ripe for police to seize and thieves to steal. The second is that too many companies fail to prevent thieves from stealing data by pretending to be police.

Companies have to make it harder for fake “officers” to get access to our sensitive data. For starters, they must do better at scrutinizing warrants, subpoenas, and emergency data requests when they come in. These requirements should be spelled out clearly in a public-facing privacy policy, and all employees who deal with data requests from law enforcement should receive training in how to adhere to these requirements and spot fraudulent requests. Fake emergency data requests raise special concerns, because real ones depend on the discretion of both companies and police, two parties with less than stellar reputations for valuing privacy. 

The No AI Fraud Act Creates Way More Problems Than It Solves

19 January 2024 at 18:27

Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a “right of publicity,” meaning the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense: you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products. But the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn’t vary depending on what state you live in. They’d also like to be able to go after the companies that offer generative AI tools and/or host AI-generated “deceptive” content. Ordinary liability rules, including copyright, can’t be used against a company that has simply provided a tool for others’ expression. After all, we don’t hold Adobe liable when someone uses Photoshop to suggest that a president can’t read, or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that’s a feature, not a bug; immunity means it’s easier to stick up for users’ speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It’s a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, far from protecting performers in the long run, this approach would just make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad amount of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category, from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involved recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to know he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone, including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of “likenesses” this law covers. Keep in mind that many of these services won’t be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment; that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, necessary for a “primary expressive purpose,” and harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn’t. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.

Tools to Protect Your Privacy Online | EFFector 36.1

22 January 2024 at 13:02

New year, but EFF is still here to keep you up to date with the latest digital rights happenings! Be sure to check out our latest newsletter, EFFector 36.1, which covers topics ranging from our thoughts on AI watermarking and the changes in the tech landscape we'd like to see in 2024 to updates to our Street Level Surveillance hub and Privacy Badger.

EFFector 36.1 is out now—you can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter below:

LISTEN ON YouTube

EFFector 36.1 | Tools to Protect Your Privacy Online

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

It's Copyright Week 2024: Join Us in the Fight for Better Copyright Law and Policy

22 January 2024 at 14:12

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright law affects so much of our daily lives, and new technologies have only helped make everyone more and more aware of it. For example, while 1998’s Digital Millennium Copyright Act helped spur the growth of platforms for creating and sharing art, music and literature, it also helped make the phrase “blocked due to a claim by the copyright holder” so ubiquitous.

Copyright law helps shape the movies we watch, the books we read, and the music we listen to. But it also impacts everything from who can fix a tractor, to what information is available to us, to how we communicate online. Given that power, it’s crucial that copyright law and policy serve everyone.

Unfortunately, that’s not the way it tends to work. Instead, copyright law is often treated as the exclusive domain of major media and entertainment industries. Individual artists don’t often find that copyright does what it is meant to do, i.e. “promote the progress of science and useful arts” by giving them a way to live off of the work they’ve done. The promise of the internet was to help eliminate barriers between creators and audiences, so that voices that traditional gatekeepers ignored could still find success. Through copyright, those gatekeepers have found ways to once again control what we see.

Twelve years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright-infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection.

We continue to fight for a version of copyright that truly serves the public interest. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and promote a set of principles that should guide copyright law and policy. This year’s issues are:

  • Monday: Public Domain
    The public domain is our cultural commons and a crucial resource for innovation and access to knowledge. Copyright should strive to promote, and not diminish, a robust, accessible public domain.
  • Tuesday: Device and Digital Ownership 
    As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
  • Wednesday: Copyright and AI
    The growing availability of AI, especially generative AI trained on datasets that include copyrightable material, has raised new debates about copyright law. It’s important to remember the limitations of copyright law in giving the kind of protections creators are looking for.
  • Thursday: Free Expression and Fair Use 
    Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.
  • Friday: Copyright Enforcement as a Tool of Censorship
    Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.

Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on X, formerly known as Twitter.

The PRESS Act Will Protect Journalists When They Need It Most

22 January 2024 at 14:45

Our government shouldn’t be spying on journalists. Nor should law enforcement agencies force journalists to identify their confidential sources or go to prison. 

To fix this, we need to change the law. Now, we’ve got our best chance in years. The House of Representatives has passed the Protect Reporters from Exploitive State Spying (PRESS) Act, H.R. 4250, and it’s one of the strongest federal shield bills for journalists we’ve seen. 

Take Action

Tell Congress To Pass the PRESS Act Now

The PRESS Act would do two critical things: first, it would bar federal law enforcement from surveilling journalists by gathering their phone, messaging, or email records. Second, it would strictly limit when the government can force a journalist to disclose their sources. 

Since its introduction, the bill has had strong bipartisan support. And such “shield” laws for reporters have vast support across the U.S., with 49 states and the District of Columbia all having some type of law that prevents journalists from being forced to hand over their files to assist in criminal prosecutions, or even private lawsuits. 

While journalists are well protected in many states, federal law is currently lacking in protections. That’s had serious consequences for journalists, and for all Americans’ right to freely access information. 

Multiple Presidential Administrations Have Abused Laws To Spy On Journalists

The Congressional report on this bill details abuses against journalists by each of the past three Presidential administrations. Federal law enforcement officials have improperly acquired reporters’ phone records on numerous occasions since 2004, under both Democratic and Republican administrations. 

On at least 12 occasions since 1990, law enforcement threatened journalists with jail or home confinement for refusing to give up their sources; some reporters served months in jail. 

Elected officials must do more about these abuses than preside over after-the-fact apologies. 

PRESS Act Protections

The PRESS Act bars the federal government from surveilling journalists through their phones, email providers, or other online services. These digital protections are critical because they reflect how journalists operate in the field today. The bill restricts subpoenas aimed not just at the journalists themselves, but their phone and email providers. Its exceptions are narrow and targeted. 

The PRESS Act also has an appropriately broad definition of the practice of journalism, covering both professional and citizen journalists. It applies regardless of a journalist’s political leanings or medium of publication. 

The government surveillance of journalists over the years has chilled journalists’ ability to gather news. It’s also likely discouraged sources from coming forward, because their anonymity isn’t guaranteed. We can’t know the important stories that weren’t published, or weren’t published in time, because of fear of retaliation on the part of journalists or their sources. 

In addition to EFF, the PRESS Act is supported by a wide range of press and rights groups, including the ACLU, the Committee to Protect Journalists, the Freedom of the Press Foundation, the First Amendment Coalition, the News Media Alliance, the Reporters Committee for Freedom of the Press, and many others. 

Our democracy relies on the rights of both professional journalists and everyday citizens to gather and publish information. The PRESS Act is a long overdue protection. We have sent Congress a clear message to pass it; please join us by sending your own email to the Senate using our links below. 

Take Action

Tell Congress To Pass the PRESS Act Now

The Public Domain Benefits Everyone – But Sometimes Copyright Holders Won’t Let Go

22 January 2024 at 16:36

Every January, we celebrate the addition of formerly copyrighted works to the public domain. You’ve likely heard that this year’s crop of public domain newcomers includes Steamboat Willie, the 1928 cartoon that marked Mickey Mouse’s debut. When something enters the public domain, you’re free to copy, share, and remix it without fear of a copyright lawsuit. But the former copyright holders aren’t always willing to let go of their “property” so easily. That’s where trademark law enters the scene.

Unlike copyright, trademark protection has no fixed expiration date. Instead, it works on a “use it or lose it” model. With some exceptions, the law will grant trademark protection for as long as you keep using that mark to identify your products. This actually makes sense when you understand the difference between copyright and trademark. The idea behind copyright protection is to give creators a financial incentive to make new works that will benefit the public; that incentive needn’t be eternal to be effective. Trademark law, on the other hand, is about consumer protection. The function of a trademark is essentially to tell you who a product came from, which helps you make informed decisions and incentivizes quality control. If everyone were allowed to use that same mark after some fixed period, it would stop serving that function.

So, what’s the problem? Since trademarks don’t expire, we see former copyright holders of public domain works turn to trademark law as a way to keep exerting control. In one case we wrote about, a company claiming to own a trademark in the name of a public domain TV show called “You Asked For It” sent takedown demands targeting everything from episodes of the show, to remix videos using show footage, to totally unrelated uses of that common phrase. Other infamous examples include disputes over alleged trademarks in elements from Peter Rabbit and Tarzan. Now, with Steamboat Willie in the public domain, Disney seems poised to do the same. It’s already alluded to this in public statements, and in 2022, it registered a trademark for Walt Disney Animation Studios that incorporates a snippet from the cartoon.

The news isn’t all bad: trademark protection is in some ways more limited than copyright—it only applies to uses that are likely to confuse consumers about the use’s connection to the mark owner. And importantly, the U.S. Supreme Court has made clear that trademark law cannot be used to control the distribution of creative works, lest it spawn “a species of mutant copyright law” that usurps the public’s right to copy and use works in the public domain. (Of course, that doesn’t mean companies won’t try it.) So go forth and make your Steamboat Willie art, but beware of trademark lawyers waiting in the wings.

EFF and More Than 100 NGOs Set Non-Negotiable Redlines Ahead of UN Cybercrime Treaty Negotiations

23 January 2024 at 09:44

EFF has joined forces with 110 NGOs today in a joint statement delivered to the United Nations Ad Hoc Committee, clearly outlining civil society’s non-negotiable redlines for the proposed UN Cybercrime Treaty and asserting that states should reject the proposed treaty if these essential changes are not implemented. 

The latest draft, published on November 6, 2023, does not adequately ensure adherence to human rights law and standards. Initially focused on cybercrime, the proposed Treaty has alarmingly evolved into an expansive surveillance tool.

Katitza Rodriguez, EFF Policy Director for Global Privacy, asserts ahead of the upcoming concluding negotiations:

The proposed treaty needs more than just minor adjustments; it requires a more focused, narrowly defined approach to tackle cybercrime. This change is essential to prevent the treaty from becoming a global surveillance pact rather than a tool for effectively combating core cybercrimes. With its wide-reaching scope and invasive surveillance powers, the current version raises serious concerns about cross-border repression and potential police overreach. Above all, human rights must be the treaty's cornerstone, not an afterthought. If states can't unite on these key points, they must outright reject the treaty.

Historically, cybercrime legislation has been exploited to target journalists and security researchers, suppress dissent and whistleblowers, endanger human rights defenders, limit free expression, and justify unnecessary and disproportionate state surveillance measures. We are concerned that the proposed Treaty, as it stands now, will exacerbate these problems. The concluding session on the proposed treaty will be held at UN Headquarters in New York from January 29 to February 10. EFF will be attending in person.

The joint statement specifically calls on States to: narrow the scope of the criminalization provisions to well-defined cyber-dependent crimes; shield security researchers, whistleblowers, activists, and journalists from being prosecuted for their legitimate activities; explicitly include language on international human rights law, data protection, and gender mainstreaming; limit the scope of the domestic criminal procedural measures and international cooperation to the core cybercrimes established in the criminalization chapter; and address concerns that the current draft could weaken cybersecurity and encryption. It also calls for specific safeguards, such as the principles of prior judicial authorization, necessity, legitimate aim, and proportionality.

Fragging: The Subscription Model Comes for Gamers

By: Rory Mir
23 January 2024 at 19:24

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

The video game industry is undergoing the same concerning changes we’ve seen before with film and TV, and it underscores the need for meaningful digital ownership.

Twenty years ago you owned DVDs. Ten years ago you probably had a Netflix subscription with a seemingly endless library. Now, you probably have two to three subscription services, and regularly hear about shows and movies you can no longer access, either because they’ve moved to yet another subscription service, or because platforms are delisting them altogether.

The video game industry is getting the same treatment. While it is still common for people to purchase physical or digital copies of games, albeit often from within walled gardens like Steam or Epic Games, game subscriptions are becoming more and more prevalent. Like the early days of movie streaming, services like Microsoft Game Pass or PlayStation Plus seem to offer a good deal. For a flat monthly fee, you have access to seemingly unlimited game choices. That is, for now.

In a recent announcement from game developer Ubisoft, their director of subscriptions said plainly that a goal of their subscription service’s rebranding is to get players “comfortable” with not owning their games. Notably, this is from a company that developed five non-mobile games last year, hoping users will access them and older games through a $17.99 per month subscription; that is, $215.88 per year. And after a year, how many games does the end user actually own? None. 

This fragmentation of the video game subscription market isn’t just driven by greed; it also answers a real frustration from users that the industry itself has created. Gamers at one point could easily buy and return games, they could rent games they were only curious about, and even recoup costs by reselling their game. With the proliferation of DRM and walled-garden game vendors, ownership rights have been eroded. Reselling or giving away a copy of your game, or leaving it for your next of kin, is no longer permitted. The closest thing to a rental now available is a game demo (if it exists) or playing a game within the time frame necessary to get a refund (if a storefront offers one). These purchases are also put at risk, as games are sometimes released in an incomplete state whose problems only become apparent after this time limit has passed. Developers such as Ubisoft will also shut down online services, which can severely impact the features of these games, or even make them unplayable.

DRM and tightly controlled gaming platforms also make it harder to mod or tweak games in ways the platform doesn’t choose to support. Mods are a thriving medium for extending the functionalities, messages, and experiences facilitated by a base game, one where passion has driven contributors to design amazing things with a low barrier to entry. Mods depend on users who have the necessary access to a work to understand how to mod it and to deploy mods when running the program. A model wherein the player can only access these aspects of the game in the ways the manufacturer supports undermines the creative rights of owners as well.

This shift should raise alarms for both users and creators alike. With publishers serving as intermediaries, game developers are left either struggling to reach their audience, or settling for a fraction of the revenue they could receive from traditional sales. 

We need to preserve digital ownership before we see video games fall into the same cycles as film and TV, with users stuck paying more and receiving not robust ownership, but fragile access on the platform’s terms.

Victory! Ring Announces It Will No Longer Facilitate Police Requests for Footage from Users

24 January 2024 at 14:09

Amazon’s Ring has announced that it will no longer facilitate police's warrantless requests for footage from Ring users. This is a victory in a long fight, not just against blanket police surveillance, but also against a culture in which private, for-profit companies build special tools to allow law enforcement to more easily access companies’ users and their data—all of which ultimately undermine their customers’ trust.


Years ago, after public outcry and a lot of criticism from EFF and other organizations, Ring ended its practice of allowing police to automatically send requests for footage to a user’s email inbox, opting instead for a system where police had to publicly post requests onto Ring’s Neighbors app. Now, Ring hopefully will altogether be out of the business of platforming casual and warrantless police requests for footage to its users. This is a step in the right direction, but it has come after years of cozy relationships with police and irresponsible handling of data (for which Ring reached a settlement with the FTC). We also helped to push Ring to implement end-to-end encryption. Ring has been forced to make some important concessions—but we still believe the company must do more. Ring can enable their devices to be encrypted end-to-end by default and turn off default audio collection, which reports have shown captures audio from greater distances than initially assumed. We also remain deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.

Despite this victory, the fight for privacy and to end Ring’s historic ill-effects on society aren’t over. The mass existence of doorbell cameras, whether subsidized and organized into registries by cities or connected and centralized through technologies like Fusus, will continue to threaten civil liberties and exacerbate racial discrimination. Many other companies have also learned from Ring’s early marketing tactics and have sought to create a new generation of police-advertisers who promote the purchase and adoption of their technologies. This announcement will also not stop police from trying to get Ring footage directly from device owners without a warrant. Ring users should also know that when police knock on their door, they have the right to—and should—request that police get a warrant before handing over footage. 

What Home Videotaping Can Tell Us About Generative AI

24 January 2024 at 16:04

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.


It’s 1975. Earth, Wind and Fire rule the airwaves, Jaws is on every theater screen, All In the Family is must-see TV, and Bill Gates and Paul Allen are selling software for the first personal computer, the Altair 8800.

But for copyright lawyers, and eventually the public, something even more significant is about to happen: Sony starts selling the first videotape recorder, or VTR. Suddenly, people have the power to store TV programs and watch them later. Does work get in the way of watching your daytime soap operas? No problem, record them and watch when you get home. Want to watch the game but hate to miss your favorite show? No problem. Or, as an ad Sony sent to Universal Studios put it, “Now you don’t have to miss Kojak because you’re watching Columbo (or vice versa).”

What does all of this have to do with Generative AI? For one thing, the reaction to the VTR was very similar to today’s AI anxieties. Copyright industry associations ran to Congress, claiming that the VTR "is to the American film producer and the American public as the Boston strangler is to the woman home alone" – rhetoric that isn’t far from some of what we’ve heard in Congress on AI lately. And then, as now, rightsholders also ran to court, claiming Sony was facilitating mass copyright infringement. The crux of the argument was a new legal theory: that a machine manufacturer could be held liable under copyright law (and thus potentially subject to ruinous statutory damages) for how others used that machine.

The case eventually worked its way up to the Supreme Court, and in 1984 the Court rejected the copyright industry’s rhetoric and ruled in Sony’s favor. Forty years later, at least two aspects of that ruling are likely to get special attention.

First, the Court observed that where copyright law has not kept up with technological innovation, courts should be careful not to expand copyright protections on their own. As the decision reads:

Congress has the constitutional authority and the institutional ability to accommodate fully the varied permutations of competing interests that are inevitably implicated by such new technology. In a case like this, in which Congress has not plainly marked our course, we must be circumspect in construing the scope of rights created by a legislative enactment which never contemplated such a calculus of interests.

Second, the Court borrowed from patent law the concept of “substantial noninfringing uses.” In order to hold Sony liable for how its customers used their VTRs, rightsholders had to show that the VTR was simply a tool for infringement. If, instead, the VTR was “capable of substantial noninfringing uses,” then Sony was off the hook. The Court held that the VTR fell in the latter category because it was used for private, noncommercial time-shifting, and that time-shifting was a lawful fair use. The Court even quoted Fred Rogers, who testified that home-taping of children’s programs served an important function for many families.

That rule helped unleash decades of technological innovation. If Sony had lost, Hollywood would have been able to legally veto any tool that could be used for infringing as well as non-infringing purposes. With Congress’ help, it has found ways to effectively do so anyway, such as Section 1201 of the DMCA. Nonetheless, Sony remains a crucial judicial protection for new creativity.

Generative AI may test the enduring strength of that protection. Rightsholders argue that generative AI toolmakers directly infringe when they use copyrighted works as training data. That use is very likely to be found lawful. The more interesting question is whether toolmakers are liable if customers use the tools to generate infringing works. To be clear, the users themselves may well be liable – but they are less likely to have the kind of deep pockets that make litigation worthwhile. Under Sony, however, the key question for the toolmakers will be whether their tools are capable of substantial non-infringing uses. The answer to that question is surely yes, which should preclude most of the copyright claims.

But there’s risk here as well – if any of these cases reaches its doors, the Supreme Court could overturn Sony. Hollywood certainly hoped it would do so when it considered the legality of peer-to-peer file-sharing in MGM v. Grokster. EFF and many others argued hard for the opposite result. Instead, the Court side-stepped Sony altogether in favor of creating a new form of secondary liability for “inducement.”

The current spate of litigation may end with multiple settlements, or Congress may decide to step in. If not, the Supreme Court (and a lot of lawyers) may get to party like it’s 1975. Let’s hope the justices choose once again to ensure that copyright maximalists don’t get to control our technological future.


San Francisco: Vote No on Proposition E to Stop Police from Testing Dangerous Surveillance Technology on You

25 January 2024 at 13:14

San Francisco voters will confront a looming threat to their privacy and civil liberties on the March 5, 2024 ballot. If Proposition E passes, we can expect the San Francisco Police Department (SFPD) will use untested and potentially dangerous technology on the public, any time they want, for a full year without oversight. How do we know this? Because the text of the proposition explicitly permits this, and because a city government proponent of the measure has publicly said as much.

[Embedded video; this embed will serve content from youtube.com]

While discussing Proposition E at a November 13, 2023 Board of Supervisors meeting, the city employee said the new rule “authorizes the department to have a one-year pilot period to experiment, to work through new technology to see how they work.” Just watch the video above if you want to witness it being said for yourself.


Any privacy or civil liberties proponent should find this statement appalling. Police should know how technologies work (or if they work) before they deploy them on city streets. They also should know how these technologies will impact communities, rather than taking a deploy-first and ask-questions-later approach—which all but guarantees civil rights violations.

This ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance that requires city agencies, including the police department, to seek approval from the democratically-elected Board of Supervisors before acquiring or deploying new surveillance technologies. Agencies also must provide a report to the public about exactly how the technology would be used. This is not just an important way of making sure people who live or work in the city have a say in surveillance technologies that could be used to police their communities; it’s also, by any measure, a commonsense and reasonable provision. 

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “..the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approval by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…”  In other words, police would be able to deploy virtually any new surveillance technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.


This ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year during which police could potentially contract with companies that buy up geolocation data from millions of cellphones and sift through the data.

Trashing important oversight mechanisms that keep police from acting without democratic checks and balances will not make the city safer. With all of the mind-boggling, dangerous, nearly-science fiction surveillance technologies currently available to local police, we must ensure that the medicine doesn’t end up doing more damage to the patient. But that’s exactly what will happen if Proposition E passes and police are able to expose already marginalized and over-surveilled communities to a new and less accountable generation of surveillance technologies. 

So, tell your friends. Tell your family. Shout it from the rooftops. Talk about it with strangers when you ride MUNI or BART. We have to get organized so we can, as a community, vote NO on Proposition E on the March 5, 2024 ballot. 

Tell the FTC: It's Time to Act on the Right to Repair

25 January 2024 at 18:22

Update: The FTC is no longer accepting comments for this rulemaking. More than 1,600 comments were filed in the proceeding, with many of you sharing your personal stories about why you support the right to repair. Thank you for taking action!

Do you care about being able to fix and modify your stuff? Then it's time to speak up and tell the Federal Trade Commission that you care about your right to repair.

As we have said before, you own what you buy—and you should be able to do what you want with it. That should be the end of the story, whether we’re talking about a car, a tractor, a smartphone, or a computer. If something breaks, you should be able to fix it yourself, or choose who you want to take care of it for you.

The Federal Trade Commission has just opened a 30-day comment period on the right to repair, and it needs to hear from you. If you have a few minutes to share why the right to repair is important to you, or a story about something you own that you haven't been able to fix the way you want, click here and tell the agency what it needs to hear.

Take Action

Tell the FTC: Stand up for our Right to Repair

If you’re not sure what to say, there are three topics that matter most for this petition. The FTC should:

  • Make repair easy
  • Make repair parts available and reasonably priced
  • Label products with ease of repairability

If you have a personal story of why right to repair matters to you, let them know!

This is a great moment to ask for the FTC to step up. We have won some huge victories in state legislatures across the country in the past several years, with good right-to-repair bills passing in California, Minnesota, Colorado, and Massachusetts. Apple, long a critic, has come out in favor of right to repair.

With the wind at our backs, it's time for the FTC to consider nationwide solutions, such as making parts and resources more available to everyday people and independent repair shops.

EFF has worked for years with our friends at organizations including U.S. PIRG (Public Interest Research Group) and iFixit to make it easier to tinker with your stuff. We're proud to support their call to the FTC to work on right to repair, and hope you'll add your voice to the chorus.

Join the (currently) 700 people making their voices heard. 

Take Action

Tell the FTC: Stand up for our Right to Repair

 

Save Your Twitter Account

By: Rory Mir
25 January 2024 at 19:02

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Amid reports that X—the site formerly known as Twitter—is dropping in value, hindering how people use the site, and engaging in controversial account removals, it has never been more precarious to rely on the site as a historical record. So, it’s important for individuals to act now and save what they can. While your tweets may feel ephemeral or inconsequential, they are part of a greater history in danger of being wiped out.

Any centralized communication platform, particularly one operated for profit, is vulnerable to being coopted by the powerful. This might mean exploiting users to maximize short-term profits or changing moderation rules to silence marginalized people and promote hate speech. The past year has seen unprecedented numbers of users fleeing X, Reddit, and other platforms over changes in policy.

But leaving these platforms, whether in protest, disgust, or boredom, leaves behind an important digital record of how communities come together and grow.

Archiving tweets isn’t just for Dril and former presidents. In its heyday, Twitter was an essential platform for activists, organizers, journalists, and other everyday people around the world to speak truth to power and fight for social justice. Its importance for movements and community building was noted by oppressive governments, forcing the site to ward off data requests and authoritarian speech suppression.

A prominent example in the U.S. is the movement for Black Lives, where activists built momentum on the site and found effective strategies to bring global attention to their protests. Already though, #BlackLivesMatter tweets from 2014 are vanishing from X, and the site seems to be blocking and disabling the tools archivists use to preserve this history.

In documenting social movements, we must remember that social media is not an archive: platforms will only store (and gatekeep) user work insofar as it's profitable, and will only make it accessible to the public when that, too, is profitable. But when platforms fail, with them goes the history of everyday voices speaking to power, the very voices organizations like EFF fought to protect. The voice of power, in contrast, remains well documented.

In the battleground of history, archival work is cultural defense. Luckily, digital media can be quickly and cheaply duplicated and shared. In just a few minutes of your time, the following easy steps will help preserve not just your history, but the history of your community and the voices you supported.

1. Request Your Archive

Despite the many new restrictions on Twitter access, the site still allows users to back up their entire profile in just a few clicks.

  • First, in your browser or the X app, navigate to Settings. This will look like three dots, and may say "More" on the sidebar.

  • Select Settings and Privacy, then Your Account, if it is not already open.

  • Here, click Download an archive of your data

  • You'll be prompted to sign into your account again, and X will need to send a verification code to your email or text message. Verifying with email may be more reliable, particularly for users outside of the US.

  • Select Request archive

  • Finally—wait. This process can take a few days. Once it is complete, you will get an email saying that your archive is ready; follow that link while logged in and download the ZIP files.

2. Optionally, Share with a Library or Archive.

There are many libraries, archives, and community groups who would be interested in preserving these archives. You may want to reach out to a librarian to help find one curating a collection specific to your community.

You can also request that your archive be preserved by the Internet Archive's Wayback Machine; the steps below are specific to the Internet Archive. We recommend using a desktop computer or laptop, rather than a mobile device.

  • Unpack the ZIP file you downloaded in the previous section.
  • In the Data folder, select the tweets.js file. This is a JavaScript file containing JSON data with just your tweets. The JSON is difficult to read on its own, but you can convert it to a CSV file and view it in a spreadsheet program like Excel or, as a free alternative, LibreOffice Calc (see the sketch after this list for one way to do the conversion locally).
  • With your accounts and tweets.js file ready, go to the Save Page Now's Google Sheet Interface and select "Archive all your Tweets with the Wayback Machine.”

  • Fill in your Twitter handle, select your "tweets.js" file from Step 2 and click "Upload."

  • After some processing, you will be able to download the CSV file.
  • Import this CSV to a new Google Sheet. All of this information is already public on Twitter, but if you notice very sensitive content, you can remove those lines. Otherwise it is best to leave the information untouched.
  • Then, use Save Page Now's Google Sheet Interface again to archive from the sheet made in the previous step.
  • It may take hours or days for this request to fully process, but once it is complete you will get an email with the results.
  • Finally, the Wayback Machine will give you the option to also preserve all of your outlinks, meaning all the website URLs you shared on Twitter. This is an easy way to further preserve the messages you've promoted over the years.
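Save Page Now's Google Sheet interface handles the JSON-to-CSV conversion for you in the steps above, but if you'd rather inspect or convert the data locally first, the minimal Python sketch below is one way to do it. It assumes the standard Twitter archive layout, where tweets.js begins with a JavaScript assignment (something like window.YTD.tweets.part0 = [ ... ]) followed by a JSON array of tweet objects; the field names used here (id_str, created_at, full_text) are the ones commonly present, and may differ in your archive.

```python
import csv
import json

def tweets_js_to_csv(js_path: str, csv_path: str) -> None:
    """Convert a Twitter archive's tweets.js into a simple CSV."""
    with open(js_path, encoding="utf-8") as f:
        raw = f.read()

    # Strip the leading JavaScript assignment so only the JSON array remains.
    entries = json.loads(raw[raw.index("["):])

    with open(csv_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["id", "created_at", "text"])
        for entry in entries:
            # Entries are usually wrapped as {"tweet": {...}}; fall back if not.
            tweet = entry.get("tweet", entry)
            writer.writerow([
                tweet.get("id_str", ""),
                tweet.get("created_at", ""),
                tweet.get("full_text", ""),
            ])

if __name__ == "__main__":
    tweets_js_to_csv("data/tweets.js", "tweets.csv")
```

Run it from the unpacked archive folder, pointing it at the data/tweets.js file, and open the resulting tweets.csv in your spreadsheet program.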

3. Personal Backup Plan

Now that you have a ZIP file with all of your Twitter data, including public and private information, you may want to have a security plan on how to handle this information. This plan will differ for everyone, but these are a few steps to consider.

If you only wish to preserve the public information, and you have already successfully shared it with an archive, you can delete your downloaded copy. For anything you would like to keep but that may be sensitive, you may want to use a tool to encrypt the file and keep it on a secure device.
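As one illustration of what that could look like (an assumption on our part, not a specific recommendation), the short Python sketch below encrypts the downloaded ZIP using the cryptography library's Fernet recipe; password-based tools such as 7-Zip or GnuPG work just as well. The file names are placeholders.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Minimal sketch: encrypt the downloaded archive with a freshly generated key.
# Keep the key file on a separate, secure device; anyone who has it can decrypt
# the archive, and without it the archive cannot be recovered.
key = Fernet.generate_key()
with open("twitter-archive.key", "wb") as key_file:
    key_file.write(key)

with open("twitter-archive.zip", "rb") as archive:
    ciphertext = Fernet(key).encrypt(archive.read())

with open("twitter-archive.zip.enc", "wb") as encrypted:
    encrypted.write(ciphertext)
```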

Finally, even if this information is not sensitive, you'll want to be sure you have a solid backup plan. If you are still using Twitter, this means deciding on a schedule to repeat this process so your archive is up to date. Otherwise, you'll want to keep a few copies of the file across several devices. If you already have a plan for backing up your PC, this may not be necessary.

4. Closing Your Account

Finally, you'll want to consider what to do with your current Twitter account now that all your data is backed up and secure.

(If you are planning on leaving X, make sure to follow EFF on Mastodon, Bluesky or another platform.)

Since you have a backup, it may be a good idea to request data be deleted on the site. You can try to delete just the most sensitive information, like your account DMs, but there's no guarantee Twitter will honor these requests—or that it's even capable of honoring such requests. Even EU citizens covered by the GDPR will need to request the deletion of their entire account.

If you aren’t concerned about Twitter keeping this information, however, there is some value in keeping your old account up. Holding the username can prevent impersonators, and listing your new social media account will help people on the site find you elsewhere. In our guide for joining Mastodon we recommended sharing your new account in several places. However, adding the new account to your Twitter display name will have the best visibility across search engines, screenshots, and alternative front ends like Nitter.

More Than a Decade Later, Site-Blocking Is Still Censorship

26 January 2024 at 14:45

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

As Copyright Week comes to a close, it’s worth remembering why we have it in January. Twelve years ago, a diverse coalition of internet users, websites, and public interest activists took to the internet to protest SOPA/PIPA, proposed laws that would have, among other things, blocked access to websites if they were alleged to be used for copyright infringement. More than a decade on, there still is no way to do this without causing irreparable harm to legal online expression.

A lot has changed in twelve years. Among those changes is a major shift in how we, and legislators, view technology companies. What once were new innovations have become behemoths. And what once were underdogs are now the establishment.

What has not changed, however, is the fact that much of what internet platforms are used for is legal, protected, expression. Moreover, the typical users of those platforms are those without access to the megaphones of major studios, record labels, or publishers. Any attempt to resurrect SOPA/PIPA—no matter what it is rebranded as—remains a threat to that expression.

Site-blocking, sometimes called a “no-fault injunction,” functionally allows a rightsholder to prevent access to an entire website based on accusations of copyright infringement. Not just access to the alleged infringement, but the entire website. It’s like using a chainsaw to trim your nails.

We are all so used to the Digital Millennium Copyright Act (DMCA) and the safe harbor it provides that we sometimes forget how extraordinary the relief it provides really is. Instead of providing proof of their claims to a judge or jury, rightsholders merely have to contact a website with their honest belief that their copyright is being infringed, and the allegedly infringing material will be taken down almost immediately. That is a vast difference from traditional methods of shutting down expression.

Site-blocking would go even further, bypassing the website and getting internet service providers to deny their customers access to a website. This clearly imperils the expression of those not even accused of infringement, and it’s far too blunt an instrument for the problem it’s meant to solve. We remain opposed to any attempts to do this. We have a long memory, and twelve years isn’t even that long.

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

29 January 2024 at 12:42

Update: Delegates at the concluding negotiating session failed to reach consensus on human rights protections, government surveillance, and other key issues. The session was suspended Feb. 8 without a final draft text. Delegates will resume talks at a later date with a view to concluding their work and providing a draft convention to the UN General Assembly at its 78th session later this year.

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders and to add robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and should not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a “serious crime.” Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt) can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The scope of the entire treaty was too broad from the start; Member States eventually dropped some of these offenses, limiting the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States. 

The latest draft, released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing explicit data protection and human rights standards like legitimate purpose, nondiscrimination, prior judicial authorization, necessity and proportionality apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

EFF and Access Now's Submission to U.N. Expert on Anti-LGBTQ+ Repression 

31 January 2024 at 10:06

As part of the United Nations (U.N.) Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) report to the U.N. Human Rights Council, EFF and Access Now have submitted information addressing digital rights and SOGI issues across the globe. 

The submission addresses the trends, challenges, and problems that people and civil society organizations face based on their real and perceived sexual orientation, gender identity, and gender expression. Our examples underscore the extensive impact of such legislation on the LGBTQ+ community, and the urgent need for legislative reform at the domestic level.

Read the full submission here.

Dozens of Rogue California Police Agencies Still Sharing Driver Locations with Anti-Abortion States

31 January 2024 at 14:56
Civil Liberties Groups Urge Attorney General Bonta to Enforce California's Automated License Plate Reader Laws

SAN FRANCISCO—California Attorney General Rob Bonta should crack down on police agencies that still violate Californians’ privacy by sharing automated license plate reader information with out-of-state government agencies, putting abortion seekers and providers at particular risk, the Electronic Frontier Foundation (EFF) and the state’s American Civil Liberties Union (ACLU) affiliates urged in a letter to Bonta today. 

In October 2023, Bonta issued a legal interpretation and guidance clarifying that a 2016 state law, SB 34, prohibits California’s local and state police from sharing information collected from automated license plate readers (ALPR) with out-of-state or federal agencies. However, despite the Attorney General’s definitive stance, dozens of law enforcement agencies have signaled their intent to continue defying the law. 

The EFF and ACLU letter lists 35 specific police agencies which either have informed the civil liberties organizations that they plan to keep sharing ALPR information with out-of-state law enforcement, or have failed to confirm their compliance with the law in response to inquiries by the organizations. 

“We urge your office to explore all potential avenues to ensure that state and local law enforcement agencies immediately comply,” the letter said. “We are deeply concerned that the information could be shared with agencies that do not respect California’s commitment to civil rights and liberties and are not covered by California’s privacy protections.” 

ALPR systems collect and store location information about drivers, including dates, times, and locations. This sensitive information can reveal where individuals work, live, associate, worship, or seek reproductive health services and other medical care. Sharing any ALPR information with out-of-state or federal law enforcement agencies has been forbidden by the California Civil Code since enactment of SB 34 in 2016.  

And sharing this data with law enforcement in states that criminalize abortion also undermines California’s extensive efforts to protect reproductive health privacy, especially a 2022 law (AB 1242) prohibiting state and local agencies from providing abortion-related information to out-of-state agencies. The UCLA Center on Reproductive Health, Law and Policy estimates that between 8,000 and 16,100 people will travel to California each year for reproductive care. 

An EFF investigation involving hundreds of public records requests uncovered that many California police departments continued sharing records containing residents’ detailed driving profiles with out-of-state agencies. EFF and the ACLUs of Northern and Southern California in March 2023 wrote to more than 70 such agencies to demand they comply with state law. While many complied, many others have not. 

“We appreciate your office’s statement on SB 34 and your efforts to protect the privacy and civil rights of everyone in California,” today’s letter said. “Nevertheless, it is clear that many law enforcement agencies continue to ignore your interpretation of the law by continuing to share ALPR information with out-of-state and federal agencies. This violation of SB 34 will continue to imperil marginalized communities across the country, and abortion seekers, providers, and facilitators will be at greater risk of undue criminalization and prosecution.” 

For the letter to Bonta: https://www.eff.org/document/01-31-2024-letter-california-ag-rob-bonta-re-enforcing-sb34-alprs 

For the letters sent last year to noncompliant California police agencies: https://www.eff.org/press/releases/civil-liberties-groups-demand-california-police-stop-sharing-drivers-location-data 

For information on how ALPRs threaten abortion access: https://www.eff.org/deeplinks/2022/09/automated-license-plate-readers-threaten-abortion-access-heres-how-policymakers 

For general information about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs

Contact: 
Jennifer Pinsof, Staff Attorney
Adam Schwartz, Privacy Litigation Director

What Apple's Promise to Support RCS Means for Text Messaging

31 January 2024 at 16:51

You may have heard recently that Apple is planning to implement Rich Communication Services (RCS) on iPhones, once again igniting the green versus blue bubble debate. RCS will thankfully bring a number of long-missing features to those green bubble conversations in Messages, but Apple's proposed implementation has a murkier future when it comes to security. 

The RCS standard will replace SMS, the protocol behind basic everyday text messages, and MMS, the protocol for sending pictures in text messages. RCS improves on SMS in a number of ways, including longer messages, high-quality pictures, read receipts, typing indicators, GIFs, location sharing, the ability to send and receive messages over Wi-Fi, and improved group messaging. Basically, it's a modern messaging standard with features people have grown to expect. 

The RCS standard is being worked on by the same standards body (GSMA) that wrote the standard for SMS and many other core mobile functions. It has been in the works since 2007 and has been supported by Google since 2019. Apple had previously said it wouldn’t support RCS, but recently came around and declared that it will support sending and receiving RCS messages starting sometime in 2024. This is a win for user experience and interoperability, since iPhone and Android users will now be able to send each other rich, modern text messages using their phones’ default messaging apps. 

But is it a win for security? 

On its own, the core RCS protocol is currently not any more secure than SMS. The protocol is not encrypted by default, meaning that anyone at your phone company or any law enforcement agent (ordinarily with a warrant) will be able to see the contents and metadata of your RCS messages. The RCS protocol by itself does not specify or recommend any type of end-to-end encryption. The only encryption of messages is in the incidental transport encryption that happens between your phone and a cell tower. This is the same way it works for SMS.

But what’s exciting about RCS is its native support for extensions. Google has taken advantage of this ability to implement its own plan for encryption on top of RCS using a version of the Signal protocol. As of now, this only works when both users are using Google’s default messaging app (Google Messages) and their phone companies support RCS messaging (the big three in the U.S. all do, as do a majority around the world). If either side doesn’t support encryption, the conversation falls back to the default unencrypted version. A phone company could also actively choose to block encrypted RCS in a specific region, for a specific user, or for a specific pair of users by pretending it doesn’t support RCS. In that case the user will be given the option of resending the messages unencrypted, but can choose not to send the message over the unencrypted channel.

Google’s implementation of encrypted RCS also doesn’t hide any metadata about your messages, so law enforcement could still get a record of who you conversed with, how many messages were sent, at what times, and how big the messages were. It's a significant security improvement over SMS, but people with heightened risk profiles should still consider apps that leak less metadata, like Signal. Despite those caveats, this is a good step by Google toward a fully encrypted text messaging future.
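To make that difference concrete, here is a minimal, hypothetical sketch in Python of the two models, using the cryptography library’s Fernet recipe as a stand-in cipher. This is not the actual SMS, RCS, or Signal protocol, and the key names and message are invented for illustration; the point is simply that with hop-by-hop transport encryption the carrier holds the keys and can read the message in the middle, while with end-to-end encryption it only ever sees ciphertext (plus metadata).

# Toy contrast between transport (hop-by-hop) encryption and end-to-end
# encryption. Illustrative only; real RCS, SMS, and Signal work differently.
from cryptography.fernet import Fernet

message = b"meet at 6pm"

# Transport encryption: each hop has its own key, and the carrier holds both,
# so it decrypts, can read the plaintext, and re-encrypts for the next hop.
sender_to_carrier_key = Fernet.generate_key()
carrier_to_recipient_key = Fernet.generate_key()

hop1_ciphertext = Fernet(sender_to_carrier_key).encrypt(message)
plaintext_at_carrier = Fernet(sender_to_carrier_key).decrypt(hop1_ciphertext)
print("carrier can read:", plaintext_at_carrier)  # b'meet at 6pm'
hop2_ciphertext = Fernet(carrier_to_recipient_key).encrypt(plaintext_at_carrier)
print("recipient reads:", Fernet(carrier_to_recipient_key).decrypt(hop2_ciphertext))

# End-to-end encryption: only the two endpoints share the key, so the carrier
# just forwards ciphertext it cannot decrypt (though it still sees metadata:
# who is talking to whom, when, and how much).
end_to_end_key = Fernet.generate_key()  # shared only by sender and recipient
e2e_ciphertext = Fernet(end_to_end_key).encrypt(message)
print("recipient reads:", Fernet(end_to_end_key).decrypt(e2e_ciphertext))

In real deployments the endpoints don’t simply hand a key around; Google’s encrypted RCS establishes it with a key-agreement handshake from its version of the Signal protocol, which is what keeps the carrier out of the loop.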

Apple stated it will not use any type of proprietary end-to-end encryption (presumably referring to Google's approach), but did say it would work to make end-to-end encryption part of the RCS standard. Avoiding a discordant ecosystem with a different encryption protocol for each company is a desirable goal. Ideally, Apple and Google will work together on standardizing end-to-end encryption in RCS so that the solution is guaranteed to work with both companies’ products from the outset. Hopefully encryption will be part of the RCS standard by the time Apple officially releases support for it; otherwise users will be left with the status quo of having to use third-party apps for interoperable encrypted messaging.

We hope that the GSMA members will agree on a standard soon, that any standard will use modern cryptographic techniques, and that the standard will do more than the current implementation of encrypted RCS to protect metadata and guard against downgrade attacks. We urge Google and Apple to work with the GSMA to finalize and adopt such a standard quickly. Interoperable, encrypted text messaging by default can’t come soon enough.

Worried About AI Voice Clone Scams? Create a Family Password

31 January 2024 at 19:42

Your grandfather receives a call late at night from a person pretending to be you. The caller says that you are in jail or have been kidnapped and that they need money urgently to get you out of trouble. Perhaps they then bring on a fake police officer or kidnapper to heighten the tension. The money, of course, should be wired right away to an unfamiliar account at an unfamiliar bank. 

It’s a classic and common scam, and like many scams it relies on a scary, urgent scenario to override the victim’s common sense and make them more likely to send money. Now, scammers are reportedly experimenting with a way to further heighten that panic by playing a simulated recording of “your” voice. Fortunately, there’s an easy and old-school trick you can use to preempt the scammers: creating a shared verbal password with your family.

The ability to create audio deepfakes of people's voices using machine learning and just minutes of recorded speech has become relatively cheap and easy to come by. There are myriad websites that will let you make voice clones. Some will let you use a variety of celebrity voices to say anything you want, while others will let you upload a recording of a new person’s voice to create a clone of anyone you have audio of. Scammers have figured out that they can use this to clone the voices of regular people. Suddenly your relative isn’t talking to someone who sounds like a complete stranger; they are hearing your own voice. This makes the scam much more concerning. 

Voice generation scams aren’t widespread yet, but they do seem to be happening. There have been news stories and even congressional testimony from people who have been the targets of voice impersonation scams. Voice cloning is also being used in political disinformation campaigns. It’s impossible for us to know what kind of technology these scammers used, or whether they're just really good impersonators. But it is likely that these scams will grow more prevalent as the technology gets cheaper and more ubiquitous. For now, the novelty of these scams, and their use of machine learning and deepfakes (technologies which are raising concerns across many sectors of society), seems to be driving a lot of the coverage. 

The family password is a decades-old, low tech solution to this modern high tech problem. 

The first step is to agree with your family on a password you can all remember and use. The most important thing is that it should be easy to remember in a panic, hard to forget, and not public information. You could use the name of a well-known person or object in your family, an inside joke, a family meme, or any word that you can all remember easily. Despite the name, this doesn't need to be limited to your family; it can be a chosen family, workplace, anarchist witch coven, etc. Any group of people with which you associate can benefit from having a password. 

Then when someone calls you or someone who trusts you (or emails or texts you) with an urgent request for money (or iTunes gift cards), you simply ask them the password. If they can’t tell it to you, then they might be a fake. You could of course further verify this with other questions, like “What is my cat's name?” or “When was the last time we saw each other?” These sorts of questions work even if you haven’t previously set up a passphrase in your family or friend group. But keep in mind that people tend to forget basic things when they have experienced trauma or are in a panic. It might be helpful, especially for people with less robust memories, to write down the password in case you forget it. After all, it’s not likely that the scammer will break into your house to find the family password.

These techniques can be useful against other scams which haven’t been invented yet, but which may come around as deepfakes become more prevalent, such as machine-generated video or photo avatars offered up as “proof.” They could also come in handy should you ever find yourself in a hackneyed sci-fi situation where there are two identical copies of your friend and you aren’t sure which one is the evil clone and which one is the original. 

[Image: the classic meme of Spider-Man pointing at an identical Spider-Man.]

Spider-Man hopes The Avengers haven't forgotten their secret password!

The added benefit of this technique is that it gives you a minute to step back, breathe, and engage in some critical thinking. Many scams of this nature rely on panic to keep you in your lower brain; by asking for the passphrase, you also give yourself a minute to think. Is your kid really in Mexico right now? Can you call them back at their phone number to be sure it’s them?  

So, go make a family password and a friend password to keep your family and friends from getting scammed by AI impostors (or evil clones).

San Francisco Police’s Live Surveillance Yields Almost 200 Hours of Spying–Including of Music Festivals

2 February 2024 at 16:21

A new report reveals that in just three months, from July 1 to September 30, 2023,  the San Francisco Police Department (SFPD) racked up 193 hours and 19 minutes of live access to non-city surveillance cameras. That means for the equivalent of 8 days, police sat behind a desk and tapped into hundreds of cameras, ostensibly including San Francisco’s extensive semi-private security camera networks, to watch city residents, workers, and visitors live. An article by the San Francisco Chronicle analyzing the report also uncovered that the SFPD tapped into these cameras to watch 42 hours of live footage during the Outside Lands music festival.

The city’s Board of Supervisors granted police permission to get live access to these cameras in September 2022 as part of a 15-month pilot program to see if allowing police to conduct widespread, live surveillance would create more safety for all people. However, even before this legislation’s passage, the SFPD covertly used non-city security cameras to monitor protests and other public events. In fact, police and the rich man who funded large networks of semi-private surveillance cameras both claimed publicly that the police department could easily access historical footage of incidents after the fact to help build cases, but could not peer through the cameras live. This claim was debunked by EFF and other investigators, who revealed that police had requested live access to semi-private cameras to monitor protests, parades, and public events, despite these being exactly the kinds of activity protected by the First Amendment.

When the Board of Supervisors passed this ordinance, which allowed police live access to non-city cameras for criminal investigations (for up to 24 hours after an incident) and for large-scale events, we warned that police would use this newfound power to put huge swaths of the city under surveillance—and we were unfortunately correct.

The most egregious example from the report is the 42 hours of live surveillance conducted during the Outside Lands music festival, which yielded five arrests for theft, pickpocketing, and resisting arrest, only one of which resulted in the District Attorney’s office filing charges. Despite proponents’ arguments that live surveillance would promote efficiency in policing, in this case it resulted in a massive use of police resources with little to show for it.

There still remain many unanswered questions about how the police are using these cameras. As the Chronicle article recognized:

…nearly a year into the experiment, it remains unclear just how effective the strategy of using private cameras is in fighting crime in San Francisco, in part because the Police Department’s disclosures don’t provide information on how live footage was used, how it led to arrests and whether police could have used other methods to make those arrests.

The need for greater transparency—and at minimum, for the police to follow all reporting requirements mandated by the non-city surveillance camera ordinance—is crucial to truly evaluate the impact that access to live surveillance has had on policing. In particular, the SFPD’s data fails to make clear how live surveillance helps police prevent or solve crimes in a way that footage after the fact does not. 

Nonetheless, surveillance proponents tout this report as showing that real-time access to non-city surveillance cameras is effective in fighting crime. Many are using this to push for a measure on the March 5, 2024 ballot, Proposition E, which would roll back police accountability measures and grant even more surveillance powers to the SFPD. In particular, Prop E would allow the SFPD a one-year pilot period to test out any new surveillance technology, without any use policy or oversight by the Board of Supervisors. As we’ve stated before, this initiative is bad all around—for policing, for civil liberties, and for all San Franciscans.

Police in San Francisco still don’t get it. They can continue to heap more time, money, and resources into fighting oversight and amassing all sorts of surveillance technology—but at the end of the day, this still won’t help combat the societal issues the city faces. Technologies touted as being useful in extreme cases will just end up as an oversized tool for policing misdemeanors and petty infractions, and will undoubtedly put already-marginalized communities further under the microscope. Just as it’s time to continue asking questions about what live surveillance helps the SFPD accomplish, it’s also time to oppose the erosion of existing oversight by voting NO on Proposition E on March 5. 

What is Proposition E and Why Should San Francisco Voters Oppose It?

2 February 2024 at 18:39

If you live in San Francisco, there is an election on March 5, 2024, during which voters will decide a number of local ballot measures, including Proposition E. Proponents of Proposition E have raised over $1 million… but what does the measure actually do? This post will break down what the initiative does, why it is dangerous for San Franciscans, and why you should oppose it.

What Does Proposition E Do?

Proposition E is a “kitchen sink” approach to public safety that capitalizes on residents’ fear of crime in an attempt to gut common-sense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Proposition E would also amend existing laws passed in 2019 to protect San Franciscans from invasive, untested, or biased police technologies.

Currently, if police want to acquire a new technology, they have to go through a procedure known as CCOPS—Community Control Over Police Surveillance. This means that police need to explain why they need a new piece of technology and provide a detailed use policy to the democratically-elected Board of Supervisors, who then vote on it. The process also allows for public comment so people can voice their support for, concerns about, or opposition to the new technology. This process is in no way designed to universally deny police new technologies. Instead, it ensures that when police want new technology that may have significant impacts on communities, those voices have an opportunity to be heard and considered. San Francisco police used this procedure to seek new technological capabilities as recently as Fall 2022; the proposal stimulated discussion, garnered community involvement and opposition (including from EFF), and still passed.

Proposition E guts these common-sense protective measures designed to bring communities into the conversation about public safety. If Proposition E passes on March 5, then the SFPD can use any technology they want for a full year without publishing an official policy about how they’d use the technology or allowing community members to voice their concerns—or really allowing for any accountability or transparency at all.

Why is Proposition E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency or accountability. San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under the current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data harvested from people’s cell phones and sold on the advertising data broker market, they would first have to let the public know and put it to a vote before the city’s democratically-elected governing body. Proposition E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

It’s not just that these technologies could potentially harm San Franciscans by, for instance, directing armed police at them due to reliance on a faulty algorithm or putting already-marginalized communities at further risk of overpolicing and surveillance—it’s also important to note that studies find that these technologies just don’t work. Police often look to technology as a silver bullet to fight crime, despite evidence suggesting otherwise. Oversight over what technology the SFPD uses doesn’t just allow for scrutiny of discriminatory and biased policing; it also introduces a much-needed dose of reality. If police want to spend hundreds of thousands of dollars a year on software that has a success rate of 0.6% at predicting crime, they should have to go through a public process before they fork over taxpayer dollars. 

What Technology Would Proposition E Allow the Police to Use?

That's the thing—we don't know, and if Proposition E passes, we may never know. Today, if police decide to use a piece of surveillance technology, there is a process for sharing that information with the public. With Proposition E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year has passed, we may never find out what technology police tried out and how they used it. Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Proposition E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology.

Why You Should Vote No on Proposition E

San Francisco, like many other cities, has its problems, but none of those problems will be solved by removing oversight over what technologies police spend our public money on and deploy in our neighborhoods—especially when so much police technology is known to be racially biased, invasive, or faulty. Voters should think about what San Francisco actually needs and how Proposition E is more likely to exacerbate the problems of police violence than it is to magically erase crime in the city. This is why we are urging a NO vote on Proposition E on the March 5 ballot.

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

7 February 2024 at 10:56

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses, and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant new expansive police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6, which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language, as well as a requirement that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems. This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States the power to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

 For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

Protect Good Faith Security Research Globally in Proposed UN Cybercrime Treaty

7 February 2024 at 10:57

Statement submitted to the UN Ad Hoc Committee Secretariat by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282, on behalf of 124 signatories.

We, the undersigned, representing a broad spectrum of the global security research community, write to express our serious concerns about the UN Cybercrime Treaty drafts released during the sixth session and the most recent one. These drafts pose substantial risks to global cybersecurity and significantly impact the rights and activities of good faith cybersecurity researchers.

Our community, which includes good faith security researchers in academia and cybersecurity companies, as well as those working independently, plays a critical role in safeguarding information technology systems. We identify vulnerabilities that, if left unchecked, can spread malware, cause data breaches, and give criminals access to sensitive information of millions of people. We rely on the freedom to openly discuss, analyze, and test these systems, free of legal threats.

The nature of our work is to research, discover, and report vulnerabilities in networks, operating systems, devices, firmware, and software. However, several provisions in the draft treaty risk hindering our work by categorizing much of it as criminal activity. If adopted in its current form, the proposed treaty would increase the risk that good faith security researchers could face prosecution, even when our goal is to enhance technological safety and educate the public on cybersecurity matters. It is critical that legal frameworks support our efforts to find and disclose technological weaknesses to make everyone more secure, rather than penalize us, and chill the very research and disclosure needed to keep us safe. This support is essential to improving the security and safety of technology for everyone across the world.

Equally important is our ability to differentiate our legitimate security research activities from malicious exploitation of security flaws. Current laws focusing on “unauthorized access” can be misapplied to good faith security researchers, leading to unnecessary legal challenges. In addressing this, we must consider two potential obstacles to our vital work. Broad, undefined rules for prior authorization risk deterring good faith security researchers, as they may not understand when or under what circumstances they need permission. This lack of clarity could ultimately weaken everyone's online safety and security. Moreover, our work often involves uncovering unknown vulnerabilities. These are security weaknesses that no one, including the system's owners, knows about until we discover them. We cannot be certain what vulnerabilities we might find. Therefore, requiring us to obtain prior authorization for each potential discovery is impractical and overlooks the essence of our work.

The unique strength of the security research community lies in its global focus, which prioritizes safeguarding infrastructure and protecting users worldwide, often putting aside geopolitical interests. Our work, particularly the open publication of research, minimizes and prevents harm that could impact people globally, transcending particular jurisdictions. The proposed treaty’s failure to exempt good faith security research from the expansive scope of its cybercrime prohibitions and to make the safeguards and limitations in Articles 6-10 mandatory leaves the door wide open for states to suppress or control the flow of security-related information. This would undermine the universal benefit of openly shared cybersecurity knowledge, and ultimately the safety and security of the digital environment.

We urge states to recognize the vital role the security research community plays in defending our digital ecosystem against cybercriminals, and call on delegations to ensure that the treaty supports, rather than hinders, our efforts to enhance global cybersecurity and prevent cybercrime. Specifically:

Article 6 (Illegal Access): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities. A clearer distinction is needed between malicious unauthorized access “without right” and “good faith” security research activities; safeguards for legitimate activities should be mandatory. A malicious intent requirement, including an intent to cause damage, defraud, or harm, is needed to avoid criminal liability for accidental or unintended access to a computer system, as well as for good faith security testing.

Article 6 should not use the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Apart from potentially criminalizing security research, similar provisions have also been misconstrued to attach criminal liability to minor violations committed deliberately or accidentally by authorized users. For example, violation of private terms of service (TOS), a minor infraction ordinarily considered a civil issue, could be elevated into a criminal offense category via this treaty on a global scale.

Additionally, the treaty currently gives states the option to define unauthorized access in national law as the bypassing of security measures. This should not be optional, but rather a mandatory safeguard, to avoid criminalizing routine behavior such as changing one’s IP address, inspecting website code, and accessing unpublished URLs. Furthermore, it is crucial to specify that the bypassed security measures must be actually “effective.” This distinction is important because it ensures that criminalization is precise and scoped to activities that cause harm. For instance, bypassing basic measures like geoblocking, which can be done innocently simply by changing location, should not be treated the same as overcoming robust security barriers with the intention to cause harm.

By adopting this safeguard and ensuring that security measures are indeed effective, the proposed treaty would shield researchers from arbitrary criminal sanctions for good faith security research.

These changes would clarify unauthorized access, more clearly differentiating malicious hacking from legitimate cybersecurity practices like security research and vulnerability testing. Adopting these amendments would enhance protection for cybersecurity efforts and more effectively address concerns about harmful or fraudulent unauthorized intrusions.

Article 7 (Illegal Interception): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require criminal intent (mens rea) to harm or defraud.

Article 8 (Interference with Data) and Article 9 (Interference with Computer Systems): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks through interferences. As with prior articles, criminal intent to cause harm or defraud is not mandated, and a requirement that the activity cause serious harm is absent from Article 9 and optional in Article 8. These safeguards should be mandatory.

Article 10 (Misuse of Devices): The broad scope of this article could criminalize the legitimate use of tools employed in cybersecurity research, thereby affecting the development and use of these tools. Under the current draft, Article 10(2) specifically addresses the misuse of cybersecurity tools. It criminalizes obtaining, producing, or distributing these tools only if they are intended for committing cybercrimes as defined in Articles 6 to 9 (which cover illegal access, interception, data interference, and system interference). However, this also raises a concern. If Articles 6 to 9 do not explicitly protect activities like security testing, Article 10(2) may inadvertently criminalize security researchers. These researchers often use similar tools for legitimate purposes, like testing and enhancing systems security. Without narrow scope and clear safeguards in Articles 6-9, these well-intentioned activities could fall under legal scrutiny, despite not being aligned with the criminal malicious intent (mens rea) targeted by Article 10(2).

Article 22 (Jurisdiction): In combination with other provisions about measures that may be inappropriately used to punish or deter good-faith security researchers, the overly broad jurisdictional scope outlined in Article 22 also raises significant concerns. Under the article's provisions, security researchers discovering or disclosing vulnerabilities to keep the digital ecosystem secure could be subject to criminal prosecution simultaneously across multiple jurisdictions. This would have a chilling effect on essential security research globally and hinder researchers' ability to contribute to global cybersecurity. To mitigate this, we suggest revising Article 22(5) to prioritize “determining the most appropriate jurisdiction for prosecution” rather than “coordinating actions.” This shift could prevent the redundant prosecution of security researchers. Additionally, deleting Article 17 and limiting the scope of procedural and international cooperation measures to crimes defined in Articles 6 to 16 would further clarify and protect against overreach.

Article 28(4): This article is gravely concerning from a cybersecurity perspective. It empowers authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. This provision can be abused to force security experts, software engineers and/or tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees, under the threat of criminal prosecution, to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with lawful orders to the extent of their ability.

Security researchers, whether within organizations or independent, discover, report, and assist in fixing tens of thousands of critical Common Vulnerabilities and Exposures (CVEs) reported over the lifetime of the National Vulnerability Database. Our work is a crucial part of the security landscape, yet often faces serious legal risk from overbroad cybercrime legislation.

While the proposed UN Cybercrime Treaty's core cybercrime provisions closely mirror the Council of Europe’s Budapest Convention, the impact of cybercrime regimes on security research has evolved considerably in the two decades since that treaty was adopted in 2001. In that time, good faith cybersecurity researchers have faced significant repercussions for responsibly identifying security flaws. Concurrently, a number of countries have enacted legislative or other measures to protect the critical line of defense this type of research provides. The UN Treaty should learn from these past experiences by explicitly exempting good faith cybersecurity research from the scope of the treaty. It should also make existing safeguards and limitations mandatory. This change is essential to protect the crucial work of good faith security researchers and ensure the treaty remains effective against current and future cybersecurity challenges.

Since these negotiations began, we had hoped that governments would adopt a treaty that strengthens global computer security and enhances our ability to combat cybercrime. Unfortunately, the draft text, as written, would have the opposite effect. The current text would weaken cybersecurity and make it easier for malicious actors to create or exploit weaknesses in the digital ecosystem by subjecting us to criminal prosecution for good faith work that keeps us all safer. Such an outcome would undermine the very purpose of the treaty: to protect individuals and our institutions from cybercrime.

To be submitted by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282 on behalf of 124 signatories.

Individual Signatories
Jobert Abma, Co-Founder, HackerOne (United States)
Martin Albrecht, Chair of Cryptography, King's College London (Global)
Nicholas Allegra (United States)
Ross Anderson, Universities of Edinburgh and Cambridge (United Kingdom)
Diego F. Aranha, Associate Professor, Aarhus University (Denmark)
Kevin Beaumont, Security researcher (Global)
Steven Becker (Global)
Janik Besendorf, Security Researcher (Global)
Wietse Boonstra (Global)
Juan Brodersen, Cybersecurity Reporter, Clarin (Argentina)
Sven Bugiel, Faculty, CISPA Helmholtz Center for Information Security (Germany)
Jon Callas, Founder and Distinguished Engineer, Zatik Security (Global)
Lorenzo Cavallaro, Professor of Computer Science, University College London (Global)
Joel Cardella, Cybersecurity Researcher (Global)
Inti De Ceukelaire (Belgium)
Enrique Chaparro, Information Security Researcher (Global)
David Choffnes, Associate Professor and Executive Director of the Cybersecurity and Privacy Institute at Northeastern University (United States/Global)
Gabriella Coleman, Full Professor Harvard University (United States/Europe)
Cas Cremers, Professor and Faculty, CISPA Helmholtz Center for Information Security (Global)
Daniel Cuthbert (Europe, Middle East, Africa)
Ron Deibert, Professor and Director, the Citizen Lab at the University of Toronto's Munk School (Canada)
Domingo, Security Incident Handler, Access Now (Global)
Stephane Duguin, CEO, CyberPeace Institute (Global)
Zakir Durumeric, Assistant Professor of Computer Science, Stanford University; Chief Scientist, Censys (United States)
James Eaton-Lee, CISO, NetHope (Global)
Serge Egelman, University of California, Berkeley; Co-Founder and Chief Scientist, AppCensus (United States/Global)
Jen Ellis, Founder, NextJenSecurity (United Kingdom/Global)
Chris Evans, Chief Hacking Officer @ HackerOne; Founder @ Google Project Zero (United States)
Dra. Johanna Caterina Faliero, PhD; Professor, Faculty of Law, University of Buenos Aires; Professor, University of National Defence (Argentina/Global)
Dr. Ali Farooq, University of Strathclyde, United Kingdom (Global)
Victor Gevers, co-founder of the Dutch Institute for Vulnerability Disclosure (Netherlands)
Abir Ghattas (Global)
Ian Goldberg, Professor and Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo (Canada)
Matthew D. Green, Associate Professor, Johns Hopkins University (United States)
Harry Grobbelaar, Chief Customer Officer, Intigriti (Global)
Juan Andrés Guerrero-Saade, Associate Vice President of Research, SentinelOne (United States/Global)
Mudit Gupta, Chief Information Security Officer, Polygon (Global)
Hamed Haddadi, Professor of Human-Centred Systems at Imperial College London; Chief Scientist at Brave Software (Global)
J. Alex Halderman, Professor of Computer Science & Engineering and Director of the Center for Computer Security & Society, University of Michigan (United States)
Joseph Lorenzo Hall, PhD, Distinguished Technologist, The Internet Society
Dr. Ryan Henry, Assistant Professor and Director of Masters of Information Security and Privacy Program, University of Calgary (Canada)
Thorsten Holz, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Joran Honig, Security Researcher (Global)
Wouter Honselaar, MSc student security; hosting engineer & volunteer, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Prof. Dr. Jaap-Henk Hoepman (Europe)
Christian “fukami” Horchert (Germany / Global)
Andrew 'bunnie' Huang, Researcher (Global)
Dr. Rodrigo Iglesias, Information Security, Lawyer (Argentina)
Hudson Jameson, Co-Founder - Security Alliance (SEAL)(Global)
Stijn Jans, CEO of Intigriti (Global)
Gerard Janssen, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
JoyCfTw, Hacktivist (United States/Argentina/Global)
Doña Keating, President and CEO, Professional Options LLC (Global)

Olaf Kolkman, Principal, Internet Society (Global)
Federico Kirschbaum, Co-Founder & CEO of Faraday Security, Co-Founder of Ekoparty Security Conference (Argentina/Global)
Xavier Knol, Cybersecurity Analyst and Researcher (Global)
Micah Lee, Director of Information Security, The Intercept (United States)
Jan Los (Europe/Global)
Matthias Marx, Hacker (Global)
Keane Matthews, CISSP (United States)
René Mayrhofer, Full Professor and Head of Institute of Networks and Security, Johannes Kepler University Linz, Austria (Austria/Global)
Ron Mélotte (Netherlands)
Hans Meuris (Global)
Marten Mickos, CEO, HackerOne (United States)
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo (Canada/Global)
Jeff Moss, Founder of the information security conferences DEF CON and Black Hat (United States)
Katie Moussouris, Founder and CEO of Luta Security; coauthor of ISO standards on vulnerability disclosure and handling processes (Global)
Alec Muffett, Security Researcher (United Kingdom)
Kurt Opsahl, Associate General Counsel for Cybersecurity and Civil Liberties Policy, Filecoin Foundation; President, Security Researcher Legal Defense Fund (Global)
Ivan "HacKan" Barrera Oro (Argentina)
Chris Palmer, Security Engineer (Global)
Yanna Papadodimitraki, University of Cambridge (United Kingdom/European Union/Global)
Sunoo Park, New York University (United States)
Mathias Payer, Associate Professor, École Polytechnique Fédérale de Lausanne (EPFL)(Global)
Giancarlo Pellegrino, Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Fabio Pierazzi, King’s College London (Global)
Bart Preneel, full professor, University of Leuven, Belgium (Global)
Michiel Prins, Founder @ HackerOne (United States)
Joel Reardon, Professor of Computer Science, University of Calgary, Canada; Co-Founder of AppCensus (Global)
Alex Rice, Co-Founder & CTO, HackerOne (United States)
René Rehme, rehme.infosec (Germany)
Tyler Robinson, Offensive Security Researcher (United States)
Michael Roland, Security Researcher and Lecturer, Institute of Networks and Security, Johannes Kepler University Linz; Member, SIGFLAG - Verein zur (Austria/Europe/Global)
Christian Rossow, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Pilar Sáenz, Coordinator Digital Security and Privacy Lab, Fundación Karisma (Colombia)
Runa Sandvik, Founder, Granitt (United States/Global)
Koen Schagen (Netherlands)
Sebastian Schinzel, Professor at University of Applied Sciences Münster and Fraunhofer SIT (Germany)
Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School (United States)
HFJ Schokkenbroek (hp197), IFCAT board member (Netherlands)
Javier Smaldone, Security Researcher (Argentina)
Guillermo Suarez-Tangil, Assistant Professor, IMDEA Networks Institute (Global)
Juan Tapiador, Universidad Carlos III de Madrid, Spain (Global)
Dr Daniel R. Thomas, University of Strathclyde, StrathCyber, Computer & Information Sciences (United Kingdom)
Cris Thomas (Space Rogue), IBM X-Force (United States/Global)
Carmela Troncoso, Assistant Professor, École Polytechnique Fédérale de Lausanne (EPFL) (Global)
Narseo Vallina-Rodriguez, Research Professor at IMDEA Networks/Co-founder AppCensus Inc (Global)
Jeroen van der Broek, IT Security Engineer (Netherlands)
Jeroen van der Ham-de Vos, Associate Professor, University of Twente, The Netherlands (Global)
Charl van der Walt, Head of Security Research, Orange Cyberdefense (a division of Orange Networks) (South Africa/France/Global)
Chris van 't Hof, Managing Director DIVD, Dutch Institute for Vulnerability Disclosure (Global)
Dimitri Verhoeven (Global)
Tarah Wheeler, CEO Red Queen Dynamics & Senior Fellow Global Cyber Policy, Council on Foreign Relations (United States)
Dominic White, Ethical Hacking Director, Orange Cyberdefense (a division of Orange Networks)(South Africa/Europe)
Eddy Willems, Security Evangelist (Global)
Christo Wilson, Associate Professor, Northeastern University (United States)
Robin Wilton, IT Consultant (Global)
Tom Wolters (Netherlands)
Mehdi Zerouali, Co-founder & Director, Sigma Prime (Australia/Global)

Organizational Signatories
Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Fundación Vía Libre (Argentina)
Good Faith Cybersecurity Researchers Coalition (European Union)
Access Now (Global)
Chaos Computer Club (CCC)(Europe)
HackerOne (Global)
Hacking Policy Council (United States)
HINAC (Hacking is not a Crime)(United States/Argentina/Global)
Intigriti (Global)
Jolo Secure (Latin America)
K+LAB, Digital security and privacy Lab, Fundación Karisma (Colombia)
Luta Security (Global)
OpenZeppelin (United States)
Professional Options LLC (Global)
Stichting International Festivals for Creative Application of Technology Foundation

EFF Helps News Organizations Push Back Against Legal Bullying from Cyber Mercenary Group

8 February 2024 at 18:47

Cyber mercenaries present a grave threat to human rights and freedom of expression. They have been implicated in surveillance, torture, and even murder of human rights defenders, political candidates, and journalists. One of the most effective ways that the human rights community pushes back against the threat of targeted surveillance and cyber mercenaries is to investigate and expose these companies and their owners and customers. 

But over the last several months, a campaign of bullying and censorship has emerged, seeking to wipe out stories about the mercenary hacking campaigns of a less well-known company, Appin Technology, in general, and the company’s cofounder, Rajat Khare, in particular. These efforts follow a familiar pattern: obtain a court order in a friendly international jurisdiction and then misrepresent the force and substance of that order to bully publishers around the world to remove their stories.

We are helping to push back on that effort, which seeks to transform a very limited and preliminary Indian court ruling into a global takedown order. We are representing Techdirt and MuckRock Foundation, two of the news entities asked to remove Appin-related content from their sites. On their behalf, we challenged the assertions that the Indian court found the Reuters reporting to be inaccurate or that the order requires any entities other than Reuters and Google to do anything. We requested a response – so far, we have received nothing.

Background

If you worked in cybersecurity in the early 2010s, chances are that you remember Appin Technology, an Indian company offering information security education and training with a sideline in (at least according to many technical reports) hacking-for-hire. 

On November 16th, 2023, Reuters published an extensively-researched story titled “How an Indian Startup Hacked the World” about Appin Technology and its cofounder Rajat Khare. The story detailed hacking operations carried out by Appin against private and government targets all over the world while Khare was still involved with the company. The story was well-sourced, based on over 70 original documents and interviews with primary sources from inside Appin. But within just days of publication, the story—and many others covering the issue—disappeared from most of the web.

On December 4th, an Indian court preliminarily ordered Reuters to take down their story about Appin Technology and Khare while a case filed against them remains pending in the court. Reuters subsequently complied with the order and took the story offline. Since then dozens of other journalists have written about the original story and about the takedown that followed. 

At the time of this writing, more than 20 of those stories have been taken down by their respective publications, many at the request of an entity called “Association of Appin Training Centers (AOATC).” Khare’s lawyers have also sent letters to news sites in multiple countries demanding they remove his name from investigative reports. Khare’s lawyers also succeeded in getting Swiss courts to issue an injunction against reporting from Swiss public television, forcing them to remove his name from a story about Qatar hiring hackers to spy on FIFA officials in preparation for the World Cup. Original stories, cybersecurity reports naming Appin, stories about the Reuters story, and even stories about the takedown have all been taken down. Even the archived version of the Reuters story was taken down from archive.org in response to letters sent by the Association of Appin Training Centers.

One of the letters sent by AOATC to Ron Deibert, the founder and director of Citizen Lab, reads:

[Image: a letter from the Association of Appin Training Centers to Citizen Lab asking it to take down its story.]

Ron Deibert had the following response:

 "The #SLAPP story killers from India 🇮🇳 looking to silence @Reuters  @Bing_Chris  @razhael  & colleagues are coming after me too!  I received the following 👇  "takedown" notice from the "Association of Appin Training Centers" to which I say:  🖕🖕🖕🖕🖕🖕🖕"

Not everyone has been as confident as Ron Deibert. Some of the stories that were taken down have been replaced with a note explaining the takedown, while others were redacted into illegibility, such as the story from Lawfare:

 On Dec. 28, 2023, Lawfare received a letter notifying us that the Reuters story summarized in this article had been taken down pursuant to court order in response to allegations that it is false and defamatory. The letter demanded that we retract this post as well. The article in question has, indeed, been removed from the Reuters web site, replac

It is not clear who is behind The Association of Appin Training Centers, but according to documents surfaced by Reuters, the organization didn’t exist until after the lawsuit was filed against Reuters in Indian court. Khare’s lawyers have denied any connection between Khare and the training center organization. Even if this is true, it is clear that the goals of both parties are fundamentally aligned in silencing any negative press covering Appin or Rajat Khare.  

Regardless of who is behind the Association of Appin Training Centers, the links between Khare and Appin Technology are extensive and clear. Khare continues to claim that he left Appin in 2013, before any hacking-for-hire took place. However, Indian corporate records demonstrate that he stayed involved with Appin long after that time. 

Khare has also been the subject of multiple criminal investigations. Reuters published a sworn 2016 affidavit by Israeli private investigator Aviram Halevi in which he admits hiring Appin to steal emails from a Korean businessman. It also published a 2012 Dominican prosecutor’s filing which described Khare as part of an alleged hacker’s “international criminal network.” A publicly available criminal complaint filed with India’s Central Bureau of Investigation shows that Khare is accused, with others, of embezzling nearly $100 million from an Indian education technology company. A Times of India story from 2013 notes that Appin was investigated by an unnamed Indian intelligence agency over alleged “wrongdoings.”

Response to AOATC

EFF is helping two news organizations, Techdirt and MuckRock Foundation, stand up to the Association of Appin Training Centers’ bullying. 

After it published an article about the Reuters takedown, Techdirt received a request similar to the one Ron Deibert received, but then also received the following emails:

Dear Sir/Madam,

I am writing to you on behalf of Association of Appin Training Centers in regards to the removal of a defamatory article running on https://www.techdirt.com/ that refers to Reuters story, titled: “How An Indian Startup Hacked The World” published on 16th November 2023.

As you must be aware, Reuters has withdrawn the story, respecting the order of a Delhi court. The article made allegations without providing substantive evidence and was based solely on interviews conducted with several people.

In light of the same, we request you to kindly remove the story as it is damaging to us.

Please find the URL mentioned below.

https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

Thanks & Regards

Association of Appin Training Centers

And received the following email twice, roughly two weeks apart:

Hi Sir/Madam

This mail is regarding an article published on your website,

URL : https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

dated on 7th Dec. 23 .

As you have stated in your article, the Reuters story was declared defamatory by the Indian Court which was subsequently removed from their website.

However, It is pertinent to mention here that you extracted a portion of your article from the same defamatory article which itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.

You are advised to remove this article from your website with immediate effect.

 

Thanks & Regards

Association of Appin Training Centers

We responded to AOATC on behalf of Techdirt and MuckRock Foundation to the “requests for assistance” which were sent to them, challenging AOATC’s assertions about the substance and effect of the Indian court interim order. We pointed out that the Indian court order is only interim and not a final judgment that Reuters’ reporting was false, and that it only requires Reuters and Google to do anything. Furthermore, we explained that even if the court order applied to MuckRock and Techdirt, the order is inconsistent with the First Amendment and would be unenforceable in US courts pursuant to the SPEECH Act:

To the Association of Appin Training Centers:

We represent and write on behalf of Techdirt and MuckRock Foundation (which runs the DocumentCloud hosting services), each of which received correspondence from you making certain assertions about the legal significance of an interim court order in the matter of Vinay Pandey v. Raphael Satter & Ors. Please direct any future correspondence about this matter to me.

We are concerned with two issues you raise in your correspondence.

First, you refer to the Reuters article as containing defamatory materials as determined by the court. However, the court’s order by its very terms is an interim order, that indicates that the defendants’ evidence has not yet been considered, and that a final determination of the defamatory character of the article has not been made. The order itself states “this is only a prima-facie opinion and the defendants shall have sufficient opportunity to express their views through reply, contest in the main suit etc. and the final decision shall be taken subsequently.”

Second, you assert that reporting by others of the disputed statements made in the Reuters article “itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.” But, again by its plain terms, the court’s interim order applies only to Reuters and to Google. The order does not require any other person or entity to depublish their articles or other pertinent materials. And the order does not address its effect on those outside the jurisdiction of Indian courts. The order is in no way the global takedown order your correspondence represents it to be. Moreover, both Techdirt and MuckRock Foundation are U.S. entities. Thus, even if the court’s order could apply beyond the parties named within it, it will be unenforceable in U.S. courts to the extent it and Indian defamation law is inconsistent with the First Amendment to the U.S. Constitution and 47 U.S.C. § 230, pursuant to the SPEECH Act, 28 U.S.C. § 4102. Since the First Amendment would not permit an interim depublication order in a defamation case, the Pandey order is unenforceable.

If you disagree, please provide us with legal authority so we can assess those arguments. Unless we hear from you otherwise, we will assume that you concede that the order binds only Reuters and Google and that you will cease asserting otherwise to our clients or to anyone else.

We have not yet received any response from AOATC. We hope that others who have received takedown requests and demands from AOATC will examine its assertions with a critical eye.

If a relatively obscure company like AOATC or an oligarch like Rajat Khare can succeed in keeping their names out of public discourse with strategic lawsuits, it sets a dangerous precedent for larger, better-resourced, and better-known companies such as Dark Matter or NSO Group to do the same. This would be a disaster for civil society, a disaster for security research, and a disaster for freedom of expression.

Voting Against the Surveillance State | EFFector 36.2

12 February 2024 at 13:48

EFF is here to keep you up-to-date with the latest news about your digital rights! EFFector 36.2 is out now and covers a ton of the latest news, including: a victory, as Amazon’s Ring will no longer facilitate warrantless footage requests from police; an analysis of Apple’s announcement that it will support RCS on iPhones; and a call for San Francisco voters to vote no on Proposition E on the March 5, 2024 ballot.

You can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube.

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Privacy Isn't Dead. Far From It.

13 February 2024 at 19:07

Welcome! 

The fact that you’re reading this means that you probably care deeply about the issue of privacy, which warms our hearts. Unfortunately, even though you care about privacy, or perhaps because you care so much about it, you may feel that there's not much you (or anyone) can really do to protect it, no matter how hard you try. Perhaps you think “privacy is dead.” 

We’ve all probably felt a little bit like you do at one time or another. At its worst, this feeling might be described as despair. Maybe it hits you because a new privacy law seems to be too little, too late. Or maybe you felt a kind of vertigo after reading a news story about a data breach or a company that was vacuuming up private data willy-nilly without consent. 

Even if you don’t have this feeling now, at some point you may have felt—or possibly will feel—that we’re past the point of no return when it comes to protecting our private lives from digital snooping. There are so many dangers out there—invasive governments, doorbell cameras, license plate readers, greedy data brokers, mismanaged companies that haven’t installed any security updates in a decade. The list goes on.

This feeling is sometimes called “privacy nihilism.” Those of us who care the most about privacy are probably the most likely to feel it, because we know how tough the fight is.

We could go on about this feeling, because sometimes we at EFF have it, too. But the important thing to get across is this: the feeling is valid, but it’s not accurate. Here’s why.

You Aren’t Fighting for Privacy Alone

For starters, remember that none of us are fighting alone. EFF is one of dozens, if not hundreds, of organizations that work to protect privacy. EFF alone has over thirty thousand dues-paying members who support that fight—not to mention hundreds of thousands of supporters subscribed to our email lists and social media feeds. Millions of people read EFF’s website each year, and tens of millions use the tools we’ve made, like Privacy Badger. Privacy is one of EFF’s biggest concerns, and as an organization we have grown by leaps and bounds over the last two decades because more and more people care. Some people say that Americans have given up on privacy. But if you look at actual facts—not just EFF membership, but survey results and votes cast on ballot initiatives—Americans overwhelmingly support new privacy protections. In general, the country has grown more concerned about how the government uses our data, and a large majority of people say that we need more data privacy protections.

People are angry because they care about privacy, not because privacy is dead.

Some people also say that kids these days don’t care about their privacy, but the ones that we’ve met think about privacy a lot. What’s more, they are fighting as hard as anyone to stop privacy-invasive bills like the Kids Online Safety Act. In our experience, the next generation cares intensely about protecting privacy, and they’re likely to have even more tools to do so. 

Laws are Making Their Way Around the World

Strong privacy laws don’t cover every American—yet. But take a look at just one example to see how things are improving: the California Consumer Privacy Act of 2018 (CCPA). The CCPA isn’t perfect, but it did make a difference. It granted Californians a few basic rights in their relationships with businesses, like the right to know what information companies have about them, the right to delete that information, and the right to tell companies not to sell their information.

This isn’t a perfect law for a few reasons. Under the CCPA, consumers have to go company by company to opt out in order to protect their data. At EFF, we’d like to see privacy protection as the default, with companies required to get consumers’ opt-in consent before collecting or selling their data. Also, the CCPA doesn’t allow individuals to sue if their data is mismanaged—only California’s Attorney General and the California Privacy Protection Agency can do that. And of course, the law only covers Californians.

But this imperfect law is slowly getting better. In 2023, California’s legislature passed the DELETE Act, which resolves one of those issues: the California Privacy Protection Agency must now create a deletion mechanism that lets people send a single, verifiable request asking every data broker to delete their data.

Pick a privacy-related topic, and chances are good that model bills are being introduced, or already exist as laws in some places, even if they don’t exist everywhere. The Illinois Biometric Information Privacy Act, for example, passed back in 2008, protects people from the nonconsensual collection and use of their biometrics, including for face recognition. We may not have comprehensive privacy laws yet in the US, but other parts of the world—like Europe—have more impactful, if imperfect, laws. We can have a nationwide comprehensive consumer data privacy law, and once such laws are on the books, they can be improved.

We Know We’re Playing the Long Game

Remember that it takes time to change the system. Today we take many protections for granted, and often assume that things are only getting worse, not better. But many important rights are relatively new. For example, our Constitution didn’t always require police to get a warrant before wiretapping our phones. It took the Supreme Court four decades to get this right. (They were wrong in 1928 in Olmstead, then right in 1967 in Katz.)

Similarly, creating privacy protections in law and in technology is not a sprint. It is a marathon. The fight is long, and we know that. Below, we’ve got examples of the progress that we’ve already made, in law and elsewhere. 

Just because we don’t have some protective laws today doesn’t mean we can’t have them tomorrow. 

Privacy Protections Have Actually Increased Over the Years

The World Wide Web is Now Encrypted 

When the World Wide Web was created, most websites were unencrypted. Privacy laws aren’t the only way to create privacy protections, as the now nearly entirely encrypted web shows: another approach is to engineer in strong privacy protections from the start.

The web has now largely switched from non-secure HTTP to the more secure HTTPS protocol. Before this happened, most web browsing was vulnerable to eavesdropping and content hijacking. HTTPS fixes most of these problems. That’s why EFF, and many like-minded supporters, pushed for websites to adopt HTTPS by default. As of 2021, about 90% of all web page visits use HTTPS. This switch happened in under a decade. It is a big win for encryption and security for everyone, and EFF’s Certbot and HTTPS Everywhere helped make it happen: Certbot gives website operators an easy and free way to switch an existing HTTP site to HTTPS, while HTTPS Everywhere steered browsers to the HTTPS version of a site whenever one was available. (With a lot of help from Let’s Encrypt, started in 2013 by a group of determined researchers and technologists from EFF and the University of Michigan.)
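
As a rough illustration of this “encrypted by default” shift, here is a minimal Python sketch (standard library only; eff.org is just an arbitrary example host, not something named in the original article) that checks whether a site answers a plain-HTTP request with a redirect to HTTPS:

    # Illustrative sketch: does a site redirect plain HTTP to HTTPS?
    # Uses only the Python standard library; "eff.org" is an arbitrary example host.
    import http.client

    def redirects_to_https(host: str) -> bool:
        """Return True if an HTTP request to `host` is answered with a redirect to an https:// URL."""
        conn = http.client.HTTPConnection(host, timeout=10)
        try:
            conn.request("GET", "/")
            resp = conn.getresponse()
            location = resp.getheader("Location", "") or ""
            return resp.status in (301, 302, 307, 308) and location.startswith("https://")
        finally:
            conn.close()

    if __name__ == "__main__":
        print(redirects_to_https("eff.org"))

For most major sites today, a check like this should come back True; a decade ago, that was far from guaranteed.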

Cell Phone Location Data Now Requires a Warrant

In 2018, the Supreme Court handed down a landmark opinion in Carpenter v. United States, ruling 5-4 that the Fourth Amendment protects cell phone location information. As a result, police must now get a warrant before obtaining this data. 

But where else this ruling applies is still being worked out. Perhaps the most significant part of the ruling is its explicit recognition that individuals can maintain an expectation of privacy in information that they provide to third parties. The Court termed that a “rare” case, but it’s clear that other invasive surveillance technologies, particularly those that can track individuals through physical space, are now ripe for challenge. Expect to see much more litigation on this subject from EFF and our friends.

Americans’ Outrage At Unconstitutional Mass Surveillance Made A Difference

In 2013, government contractor Edward Snowden shared evidence confirming, among other things, that the United States government had been conducting mass surveillance on a global scale, including surveillance of its own citizens’ telephone and internet use. Ten years later, there is definitely more work to be done regarding mass surveillance. But some things are undoubtedly better: some of the National Security Agency’s most egregiously illegal programs and authorities have been shuttered or forced to end. The Intelligence Community has started affirmatively releasing at least some important information, although EFF and others have still had to fight some long Freedom of Information Act (FOIA) battles.

Privacy Options Are So Much Better Today

Remember PGP and GPG? If you do, you know that generally, there are much easier ways to send end-to-end encrypted communications today than there used to be. It’s fantastic that people worked so hard to protect their privacy in the past, and it’s fantastic that they don’t have to work as hard now! (If you aren’t familiar with PGP or GPG, just trust us on this one.) 

Advice for protecting online privacy used to require epic how-to guides for complex tools; now, advice is usually just about which relatively simple tools or settings to use. People across the world have Signal and WhatsApp. The web is encrypted, and the Tor Browser lets people visit websites anonymously fairly easily. Password managers protect your passwords and your accounts; tracker blockers like EFF’s Privacy Badger stop third-party tracking. There are even options now to turn off your Ad ID—the key that enables most third-party tracking on mobile devices—right on your phone. These tools and settings all move the needle.

We Are Winning The Privacy War, Not Losing It

Sometimes people respond to privacy dangers by comparing them to sci-fi dystopias. But be honest: most science fiction dystopias still scare the heck out of us because they are much, much more invasive of privacy than the world we live in. 

In an essay called “Stop Saying Privacy Is Dead,” Evan Selinger makes a necessary point: “As long as you have some meaningful say over when you are watched and can exert agency over how your data is processed, you will have some modicum of privacy.” 

Of course we want more than a modicum of privacy. But the point here is that many of us generally do get to make decisions about our privacy. Not all of us, of course. But we all recognize that there are different levels of privacy in different places, and that privacy protections vary depending on where we go. We have places we can go—online and off—that afford us more protections than others. And because of this, most of the people reading this still have deep private lives, and can choose, with varying amounts of effort, not to allow corporate or government surveillance into those lives.

Privacy is a process, not a single thing. We are always negotiating what levels of privacy we have. We might not always have the upper hand, but we are often able to negotiate. This is why we still see some fictional dystopias and think, “Thank God that’s not my life.” As long as we can do this, we are winning. 

“Giving Up” On Privacy May Not Mean Much to You, But It Does to Many

Shrugging about the dangers of surveillance can seem reasonable when that surveillance has little impact on our lives. But for many, fighting for privacy isn’t a choice; it is a means of survival. Privacy inequity is real; increasingly, money buys additional privacy protections. And if privacy is available for some, then it can exist for all. But we should not accept that some people will have privacy and others will not. This is why digital privacy legislation is digital rights legislation, and why EFF is opposed to data dividends and pay-for-privacy schemes.

Privacy increases for all of us when it increases for each of us. It is much easier for a repressive government to ban end-to-end encrypted messengers when only journalists and activists use them. It is easier to know who is an activist or a journalist when they are the only ones using privacy-protecting services or methods. The more people who demand privacy, the safer we all are. Sacrificing others because you don’t feel the impact of surveillance is a fool’s bargain.

Time Heals Most Privacy Wounds

You may want to tell yourself: companies already know everything about me, so a privacy law a year from now won’t help. That’s incorrect, because companies are always searching for new data. Some pieces of information will never change, like our biometrics. But chances are you’ve changed in many ways over the years—whether that’s as big as a major life event or as small as a change in your taste in movies—and who you are today is not necessarily who you’ll be tomorrow.

As the source of that data, we should have more control over where it goes, and we’re slowly getting it. And because so much of that data goes stale over time, even if some of our information is already out there, it’s never too late to shut off the faucet. So it’s wrong to think that a privacy law passed next year won’t do any good because every bit of information about you has already leaked. It will.

What To Do When You Feel Like It’s Impossible

It can feel overwhelming to care about something that feels like it’s dying a death of a thousand cuts. But worrying about every potential threat, and trying to protect yourself from each of them, all of the time, is a recipe for failure. No one really needs to be vigilant about every threat at all times. That’s why our recommendation is to create a personalized security plan, rather than throwing your hands up or cowering in a corner. 

Once you’ve figured out what threats you should worry about, our advice is to stay involved. We are all occasionally skeptical that we can succeed, but taking action is a great way to get rid of that gnawing feeling that there’s nothing to be done. EFF regularly launches new projects that we hope will help you fight privacy nihilism. We’re in court many times a year fighting privacy violations. We create ways for like-minded, privacy-focused people to work together in their local advocacy groups, through the Electronic Frontier Alliance, our grassroots network of community and campus organizations fighting for digital rights. We even help you teach others to protect their own privacy. And of course every day is a good day for you to join us in telling government officials and companies that privacy matters. 

We know we can win because, every day, we’re creating the better future that we want to see, and it’s working. But we’re also building the plane while we’re flying it. Just as the death of privacy is not inevitable, neither is our success. It takes real work, and we hope you’ll help us do that work by joining us. Take action. Tell a friend. Download Privacy Badger. Become an EFF member. Gift an EFF membership to someone else.

Don’t give in to privacy nihilism. Instead, share and celebrate the ways we’re winning. 

EFF to Court: Strike Down Age Estimation in California But Not Consumer Privacy

14 February 2024 at 18:44

The Electronic Frontier Foundation (EFF) called on the Ninth Circuit to rule that California’s Age Appropriate Design Code (AADC) violates the First Amendment, while not casting doubt on well-written data privacy laws. EFF filed an amicus brief in the case NetChoice v. Bonta, along with the Center for Democracy & Technology.

A lower court already ruled the law is likely unconstitutional. EFF agrees, but we asked the appeals court to chart a narrower path. EFF argued the AADC’s age estimation scheme and vague terms that describe amorphous “harmful content” render the entire law unconstitutional. But the lower court also incorrectly suggested that many foundational consumer privacy principles cannot pass First Amendment scrutiny. That is a mistake that EFF asked the Ninth Circuit to fix.

In late 2022, California passed the AADC with the goal of protecting children online. It has many data privacy provisions that EFF would like to see in a comprehensive federal privacy bill, like data minimization, strong limits on the processing of geolocation data, regulation of dark patterns, and enforcement of privacy policies.

Government should provide such privacy protections to all people. The protections in the AADC, however, are only guaranteed to children. And to offer those protections to children but not adults, technology companies are strongly incentivized to “estimate the age” of their entire user base—children and adults alike. While the method is not specified, techniques could include submitting a government ID or a biometric scan of your face. In addition, technology companies are required to assess their products to determine whether they are designed to expose children to undefined “harmful content” and to determine what is in the undefined “best interest of children.”

In its brief, EFF argued that the AADC’s age estimation scheme raises the same problems as other age verification laws that have been almost universally struck down, often with help from EFF. The AADC burdens adults’ and children’s access to protected speech and frustrates all users’ right to speak anonymously online. In addition, EFF argued that the vague terms offer no clear standards, and thus give government officials too much discretion in deciding what conduct is forbidden, while incentivizing platforms to self-censor given uncertainty about what is allowed.

“Many internet users will be reluctant to provide personal information necessary to verify their ages, because of reasonable doubts regarding the security of the services, and the resulting threat of identity theft and fraud,” EFF wrote.

Because age estimation is essential to the AADC, the entire law should be struck down for that reason alone, without assessing the privacy provisions. EFF asked the court to take that narrow path.

If the court instead chooses to address the AADC’s privacy protections, EFF cautioned that many of the principles reflected in those provisions, when stripped of the unconstitutional censorship provisions and vague terms, could survive intermediate scrutiny. As EFF wrote:

“This Court should not follow the approach of the district court below. It narrowly focused on California’s interest in blocking minors from harmful content. But the government often has several substantial interests, as here: not just protection of information privacy, but also protection of free expression, information security, equal opportunity, and reduction of deceptive commercial speech. The privacy principles that inform AADC’s consumer data privacy provisions are narrowly tailored to these interests.”

EFF has a long history of supporting well-written privacy laws against First Amendment attacks. The AADC is not one of them. We have filed briefs supporting laws that protect video viewing history, biometric data, and other internet records. We have advocated for a federal law to protect reproductive health records. And we have written extensively on the need for a strong federal privacy law.

Hip Hip Hooray For Hipster Antitrust

14 February 2024 at 18:58

Don’t believe the hype.

The undeniable fact is that the FTC has racked up a long list of victories over corporate abuses, like busting a nationwide, decades-long fraud that tricked people into paying for “free” tax preparation.

The wheels of justice grind slowly, so many of the actions the FTC has brought are still pending. But these actions are significant. In tandem with the Department of Justice, it is suing over fake apartment listings, blocking noncompete clauses, targeting fake online reviews, and going after gig work platforms for ripping off their workers.

Companies that abuse our privacy and trust are being hit with massive fines: $520 million for Epic’s tricks to get kids to spend money online, $20 million to punish Microsoft for spying on kids who use Xboxes, and a $25 million fine against Amazon for capturing voice recordings of kids and storing kids’ location data.

The FTC is using its authority to investigate many forms of digital deception, from deceptive and fraudulent online ads to the use of cloud computing to lock in business customers to data brokers’ sale of our personal information.

And of course, the FTC is targeting anticompetitive mergers, like Nvidia’s attempted takeover of ARM. Blocking a deal like that has the immediate effect of preventing an anticompetitive merger and the long-term benefit of deterring future attempts at similar oligopolistic mergers. The FTC has also targeted private equity “rollups,” which combine dozens or hundreds of smaller companies into a monopoly with pricing power over its customers and the whip hand over its workers. These kinds of rollups are all too common, and destructive of offline and online services alike.

From Right to Repair to Click to Cancel to fines for deceptive UI (“dark patterns”), the FTC has taken up many of the issues we’ve fought for over the years. So the argument that the FTC is a do-nothing agency wasting our time with grandstanding stunts is just factually wrong. As recently as December 2023, the FTC and DOJ chalked up ten major victories.

But this “win/loss ratio” accounting also misses the point. Even if the outcome isn’t guaranteed, this FTC refuses to turn a blind eye  to abuses of the American public. 

What’s more, the FTC collaborated with the DOJ on new merger guidelines that spell out what kinds of mergers are likely to be legal. These are the most comprehensive, future-looking guidelines in generations, and they tee up enforcement actions for this FTC and its successors for many years to come.

The FTC is also seeking to revive existing laws that have lain dormant for too long. As John Mark Newman explains, this FTC has cannily filed cases that reassert its right to investigate “competing” companies with interlocking directorates.

Newman also praises the FTC for “supercharging student interest in the field,” with law schools seeing surging interest in antitrust courses and a renaissance in law review articles about antitrust enforcement. 

The FTC is not alone in this. Its colleagues in the DOJ’s antitrust division have their own long list of victories.

But the most important victory for America’s antitrust enforcers is what doesn’t happen. Across every sector of the economy, corporate leaders are backing away from merger-driven growth and predatory pricing, deterred from violating the law by the knowledge that the generations-long period of tolerance for lawless corporate abuse is coming to a close.

Even better, America’s antitrust enforcers don’t stand alone. At long last, it seems that the whole world is reversing decades of tacit support for oligopolies and corporate bullying. 
