EFF Urges Ninth Circuit to Hold Montana’s TikTok Ban Unconstitutional

17 May 2024 at 13:02

Montana’s TikTok ban violates the First Amendment, EFF and others told the Ninth Circuit Court of Appeals in a friend-of-the-court brief, urging the court to affirm a trial court’s December 2023 holding to that effect.

Montana’s ban (which EFF and others opposed) prohibits TikTok from operating anywhere within the state and imposes financial penalties on TikTok or any mobile application store that allows users to access TikTok. The district court recognized that Montana’s law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” and blocked Montana’s ban from going into effect. Last year, EFF—along with the ACLU, Freedom of the Press Foundation, Reason Foundation, and the Center for Democracy and Technology—filed a friend-of-the-court brief in support of TikTok and Montana TikTok users’ challenge to this law at the trial court level.

As the brief explains, Montana’s TikTok ban is a prior restraint on speech that prohibits Montana TikTok users—and TikTok itself—from posting on the platform. The law also restricts TikTok’s ability to make decisions about curating its platform.

Prior restraints such as Montana’s ban are presumptively unconstitutional. For a court to uphold a prior restraint, the First Amendment requires that the restraint satisfy the most exacting scrutiny: it must be necessary to further an urgent interest of the highest magnitude, and it must be the narrowest possible means for the government to accomplish its precise interest. Montana’s TikTok ban fails to meet this demanding standard.

Even if the ban were not a prior restraint, the brief argues, it would still violate the First Amendment. Montana’s law is a “total ban” on speech: it completely forecloses TikTok users’ speech with respect to the entire medium of expression that is TikTok. As a result, Montana’s ban is subject to an exacting tailoring requirement: it must target and eliminate “no more than the exact source of the ‘evil’ it seeks to remedy.” Montana’s law is undeniably overbroad and fails to satisfy this scrutiny.

This appeal comes in the immediate aftermath of President Biden signing federal legislation that effectively bans TikTok in its current form by requiring its parent company, ByteDance, to divest any Chinese ownership within 270 days. This federal law raises many of the same First Amendment concerns as Montana’s ban.

It’s important that the Ninth Circuit take this opportunity to make clear that the First Amendment requires the government to satisfy a very demanding standard before it can impose these types of extreme restrictions on Americans’ speech.


Eight Supreme Court Cases To Watch

The Supreme Court’s docket this term takes up many of the complex issues American society is currently facing, including gun control, free speech online, race-based discrimination in voting, reproductive rights, presidential immunity from criminal accountability, and more.

The ACLU has served as counsel or filed friend-of-the-court briefs in all of the cases addressing these hot-button issues. The court will decide all its cases by the beginning of July. Here are eight undecided cases to watch, and what they mean for the future of our civil liberties.


Reproductive freedom: Protections for medication abortion and access to abortion during medical emergencies

FDA v. Alliance for Hippocratic Medicine

The Facts: Anti-abortion doctors, who do not prescribe medication abortion, are asking the Supreme Court to force the Food & Drug Administration (FDA) to impose severe restrictions on mifepristone – a safe and effective medication used in this country in most abortions and for miscarriage management – in every state, even where abortion is protected by state law.

Our Argument: The FDA approved mifepristone more than 20 years ago, finding that it is safe, effective, and medically necessary. Since its approval, more than 5 million people in the U.S. have used this medication. Our brief argued that the two lower courts – a district court in Texas and the U.S. Court of Appeals for the Fifth Circuit – relied on junk science and discredited witnesses to override the FDA’s expert decision to eliminate medically unnecessary restrictions on an essential medication with a stronger safety record than Tylenol. We urged the Supreme Court to protect access to medication abortion and reverse the lower courts’ rulings.

Why it Matters: Today, with abortion access already severely restricted, the ability to get medication-abortion care using mifepristone is more important than ever. If the Fifth Circuit’s ruling is allowed to stand, individuals would be blocked from filling mifepristone prescriptions through mail-order pharmacies, forcing many to travel, sometimes hundreds of miles, just to pick up a pill they can safely receive through the mail. Healthcare professionals with specialized training, like advanced practice clinicians, would also be prohibited from prescribing mifepristone, further limiting where patients can access this critical medication. The American Cancer Society and other leading patient advocacy groups are also sounding the alarm that overturning the FDA’s decision would upend drug innovation and research, with consequences well beyond reproductive health care.

The Last Word: “As this case shows, overturning Roe v. Wade wasn’t the end goal for extremists. In addition to targeting nationwide access to mifepristone, politicians in some states have already moved on to attack birth control and IVF. We need to take these extremists seriously when they show us they’re coming for every aspect of our reproductive lives.” – Jennifer Dalven, director of the ACLU Reproductive Freedom Project

Idaho & Moyle et al. v. US

The Facts: Idaho politicians want the power to disregard the Emergency Medical Treatment and Labor Act (EMTALA), a federal law that requires emergency rooms to provide stabilizing treatment to patients in emergency situations, including abortion where that is the appropriate stabilizing treatment. If the state prevails, doctors could be jailed for providing pregnant patients with the emergency care this federal law requires.

Our Argument: The ACLU and its legal partners filed a friend-of-the-court brief explaining that EMTALA requires hospitals to provide whatever emergency care a patient needs; there is no carve-out for patients who need an abortion to stabilize an emergency condition. All three branches of government have long recognized that hospitals are required under EMTALA to provide emergency abortion care to any patient who needs it.

Why it Matters: Because Idaho’s current abortion ban prohibits providing the emergency care required under EMTALA, medical providers have found themselves having to decide between providing necessary emergency care to a pregnant patient or facing criminal prosecution from the state. Depending on how the court rules, medical providers and patients in several other states with extreme abortion bans could find themselves in a similar position.

The Last Word: “If these politicians succeed, doctors will be forced to withhold critical care from their patients. We’re already seeing the devastating impact of this case play out in Idaho, and we fear a ripple effect across the country.” – Alexa Kolbi-Molinas, deputy director of the ACLU Reproductive Freedom Project


Free speech: Government authority over online and political speech

National Rifle Association v. Vullo

The Facts: In 2018, Maria Vullo, New York’s former chief financial regulator, in coordination with then-Governor Andrew Cuomo, threatened to use her regulatory power over banks and insurance companies to coerce them into denying basic financial services to the National Rifle Association (NRA) because she and Cuomo disagreed with its pro-gun rights advocacy. The NRA argued that Vullo’s alleged efforts to blacklist the NRA penalized it for its political advocacy, in violation of the First Amendment.

Our Argument: The ACLU, representing the NRA at the Supreme Court, argued that any government attempt to blacklist an advocacy group and deny it financial services because of its viewpoint violates the right to free speech. Our brief urged the court to apply the precedent it set in 1963 in Bantam Books v. Sullivan, which established that even informal, indirect efforts to censor speech violate the First Amendment.

Why it Matters: While the ACLU stands in stark opposition to the NRA on many issues, this case is about securing basic First Amendment rights for all advocacy organizations. If New York State is allowed to blacklist the NRA, then Oklahoma could similarly penalize criminal justice reformers advocating for bail reform, and Texas could target climate change organizations advancing the view that all fossil fuel extraction must end. The ACLU itself could be targeted for its advocacy.

The Last Word: “The right to advocate views the government opposes safeguards our ability to organize for the country we want to see. It’s a principle the ACLU has defended for more than 100 years, and one we will continue to protect from government censorship of all kinds, whether we agree or disagree with the views of those being targeted.” – David Cole, ACLU legal director

NetChoice v. Paxton and Moody v. NetChoice

The Facts: Motivated by a perception that social media platforms disproportionately silence conservative voices, Florida and Texas passed laws that give the government authority to regulate how large social media companies like Facebook and YouTube curate content posted on their sites.

Our Argument: In a friend-of-the-court brief, the ACLU, the ACLU of Florida and the ACLU of Texas argued that the First Amendment right to speak includes the right to choose what to publish and how to prioritize what is published. The government’s desire to have private speakers, like social media companies, distribute more conservative viewpoints, or any specific viewpoints, is not a permissible basis for state control of what content appears on privately owned platforms.

Why it Matters: If these laws are allowed to stand, platforms may fear liability and decide to publish nothing at all, effectively eliminating the internet’s function as a modern public square. Or, in an attempt to comply with government regulations, social media companies may be forced to publish far more distracting and unwanted content. For example, under the Texas law, which requires “viewpoint neutrality,” a platform that publishes posts about suicide prevention would also have to publish posts directing readers to websites that encourage suicide.

The Last Word: “Social media companies have a First Amendment right to choose what to host, display, and publish. The Supreme Court has recognized that right for everyone from booksellers to newspapers to cable companies, and this case should make clear that the same is true for social media platforms.” — Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, & Technology Project


Voting rights: Racial gerrymandering and the fight for fair maps

Alexander v. South Carolina NAACP

The Facts: In 2022, South Carolina adopted a racially gerrymandered congressional map. The state legislature singled out Black communities, “cracking” predominantly Black neighborhoods across two districts to reduce their electoral influence in the state’s first congressional district.

Our Argument: The ACLU and its legal partners sued on behalf of the South Carolina NAACP and an affected voter to challenge the constitutionality of the new congressional map. We argued that the Equal Protection Clause of the Fourteenth Amendment forbids the sorting of voters on the basis of their race, absent a compelling interest, which the state failed to provide.

Why it Matters: This racially gerrymandered congressional map deprives Black South Carolinians of the political representation they deserve in all but one of seven districts, limiting the power and influence of more than a quarter of the state’s population just before the 2024 election.

The Last Word: “South Carolina’s failure to rectify its racially gerrymandered congressional map blatantly disregards the voices and the rights of Black voters. The ACLU is determined to fight back until Black South Carolina voters have a lawful map that fairly represents them.” – Adriel I. Cepeda Derieux, deputy director of the ACLU Voting Rights Project


Gender justice: Denying guns to persons subject to domestic violence restraining orders

United States v. Rahimi

The Facts: Zackey Rahimi was convicted under a federal law that forbids individuals subject to domestic violence protective orders from possessing a firearm. Mr. Rahimi challenged the law as a violation of his Second Amendment right to bear arms.

Our Argument: The U.S. Court of Appeals for the Fifth Circuit ruled that individuals subject to domestic violence protective orders have a constitutional right to possess guns. It invalidated the federal gun law because it found no historical analogues in the 1700s or 1800s that prohibited those subject to domestic violence protective orders from possessing a firearm. The ACLU argued that the Fifth Circuit’s analysis is a misapplication of the Supreme Court’s decision in New York State Rifle & Pistol Association, Inc. v. Bruen because it effectively required a “historical twin” law in order to uphold a law today. There were no identical laws at the time of the Framing because there were no domestic violence protective orders then, but that should not be a basis for invalidating the laws today. We also argued that imposing time-limited firearms restrictions based on civil restraining orders is a critical tool for protecting those who have experienced domestic violence and face a threat of further violence.

Why it Matters: If the Fifth Circuit’s rationale is affirmed, then governments would lose the ability to prohibit gun possession by persons subject to restraining orders — and presumably even to run pre-acquisition background checks, which have stopped more than 77,000 purchases of weapons by individuals subject to domestic violence orders in the 25 years that the federal law has been in place. This “originalist” interpretation of the Second Amendment not only hinders our ability to protect individuals against newly recognized threats, but also tethers the authority to regulate gun possession to periods when governments disregarded many forms of violence directed against women, Black people, Indigenous people, and others.

The Last Word: “It would be a radical mistake to allow historical wrongs to defeat efforts today to protect women and other survivors of domestic abuse. The Supreme Court should affirm that the government can enact laws aimed at preventing intimate partner violence, consistent with the Second Amendment.” – Ria Tabacco Mar, director of the ACLU Women’s Rights Project


Criminal justice: Eighth Amendment protections for unhoused persons accused of sleeping in public when they have nowhere else to go

City of Grants Pass v. Johnson

The Facts: Grants Pass, Oregon, enacted ordinances that make it illegal for people, including unhoused persons with no access to shelter, to sleep outside in public using a blanket, pillow, or even a cardboard sheet to lie on. Last year, the Ninth Circuit Court of Appeals ruled that punishing unhoused people for sleeping in public when they have no other choice violates the Eighth Amendment’s ban on cruel and unusual punishment.

Our Argument: In Oregon, and elsewhere in the United States, the population of unhoused persons often exceeds the number of shelter beds available, forcing many to sleep on the streets or in parks. The ACLU and 19 state affiliates submitted a friend-of-the-court brief arguing that it is cruel and unusual to punish unhoused people for the essential life-sustaining activity of sleeping outside when they lack access to any alternative shelter.

Why it Matters: When applied to people with nowhere else to go, fines and arrests for sleeping outside serve no purpose and are plainly disproportionate punishments. Arresting and fining unhoused people for sleeping in public only exacerbates cycles of homelessness and mass incarceration.

The Last Word: “There is no punishment that fits the ‘crime’ of being forced to sleep outside. Instead of saddling people with fines, jail time, and criminal records, cities should focus on proven solutions, like affordable housing, accessible and voluntary services, and eviction protections.” – Scout Katovich, staff attorney with the ACLU Trone Center for Justice and Equality


Democracy: Presidential immunity from prosecution for criminal acts after leaving office

Trump v. United States

The Facts: Former President Donald Trump is asking the Supreme Court to rule that he cannot be held criminally liable for any official acts as president, even after leaving office, and even where the crimes concern efforts to resist the peaceful transition of power after an election. This claim runs contrary to fundamental principles of constitutional accountability, and decades of precedent.

Our Argument: Our friend-of-the-court brief argues that former President Trump is not immune from criminal prosecution, and that the Constitution and long-established Supreme Court precedent support the principle that in our democracy, nobody is above the law — even the president. Our brief warns that there are “few propositions more dangerous” in a democracy than the notion that an elected head of state has blanket immunity from criminal prosecution.

Why it Matters: No other president has asserted that presidents can never be prosecuted for official acts that violate criminal law. The president’s accountability to the law is an integral part of the separation of powers and the rule of law. If the president were free, as Trump’s legal counsel argued, to order the assassination of his political opponents and escape all criminal accountability even after leaving office, both of these fundamental principles of our system would be fatally undermined.

The Last Word: “The United States does not have a king, and former presidents have no claim to being above the law. A functioning democracy depends on our ability to critically reckon with the troubling actions of government officials and hold them accountable.” – David Cole, ACLU legal director

The U.S. House Version of KOSA: Still a Censorship Bill

3 May 2024 at 12:48

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. Those requirements will drive away both minors and adults who either lack the proper ID or value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the "duty of care" requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or "high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user was under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could reduce the number of platforms subject to KOSA’s enforcement, easing some of the burden on websites that aren’t primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category, or would image-sharing be its predominant purpose? What about TikTok, which has a mix of content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tends to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws.  It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources. 

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last "must-pass" legislation until the fall. This would effectively bypass public discussion of the House version. Just last month, Congress attached another contentious, potentially unconstitutional bill to unrelated legislation by including a TikTok ban in a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits.

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead pursue alternatives, such as comprehensive federal privacy legislation, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

On World Press Freedom Day (and Every Day), We Fight for an Open Internet

3 May 2024 at 11:47

Today marks World Press Freedom Day, an annual celebration instituted by the United Nations in 1993 to raise awareness of press freedom and remind governments of their duties under Article 19 of the Universal Declaration of Human Rights. This year, the day is dedicated to the importance of journalism and freedom of expression in the context of the current global environmental crisis.

Journalists everywhere face myriad challenges in reporting on climate change and other environmental issues: lawsuits, intimidation, arrests, and disinformation campaigns. For instance, journalists and human rights campaigners attending the COP28 Summit held in Dubai last autumn faced surveillance and intimidation. The Committee to Protect Journalists (CPJ) has documented arrests of environmental journalists in Iran and Venezuela, among other countries. And in 2022, a Guardian journalist was murdered while on the job in the Brazilian Amazon.

The threats journalists face are the same as those faced by ordinary internet users around the world. According to CPJ, 320 journalists are jailed worldwide for doing their job. Ranked among the top jailers of journalists last year were China, Myanmar, Belarus, Russia, Vietnam, Israel, and Iran, all countries in which internet users also face censorship, intimidation, and, in some cases, arrest.

On this World Press Freedom Day, we honor the journalists, human rights defenders, and internet users fighting for a better world. EFF will continue to fight for the right to freedom of expression and a free and open internet for every internet user, everywhere.



Biden Signed the TikTok Ban. What's Next for TikTok Users?

Over the last month, lawmakers moved swiftly to pass legislation that would effectively ban TikTok in the United States, eventually including it in a foreign aid package that was signed by President Biden. The impact of this legislation isn’t entirely clear yet, but this much is: whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do.

What Happens Next?

At the moment, TikTok isn’t “banned.” The law gives ByteDance 270 days to divest TikTok before the ban takes effect on January 19, 2025. In the meantime, we expect courts to determine that the law is unconstitutional. Though there is no lawsuit yet, one on behalf of TikTok itself is imminent.

There are three possible outcomes. If the law is struck down, as it should be, nothing will change. If ByteDance divests TikTok by selling it, then the platform would still likely be usable. However, there’s no telling whether the app’s new owners would change its functionality, its algorithms, or other aspects of the service. As we’ve seen with other platforms, a change in ownership can result in significant changes that could impact its audience in unexpected ways. In fact, that’s one of the stated reasons for forcing the sale: so that TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation. This is despite the fact that it has been well-established law for almost 60 years that people in the U.S. have a First Amendment right to receive foreign propaganda.

Lastly, if ByteDance refuses to sell, users in the U.S. will likely see TikTok disappear from app stores sometime between now and the January 19, 2025 deadline.

How Will the Ban Be Implemented? 

The law limits liability to intermediaries—entities that “provide services to distribute, maintain, or update” TikTok by means of a marketplace, or that provide internet hosting services to enable the app’s distribution, maintenance, or updating. The law also makes intermediaries responsible for its implementation. 

The law explicitly denies the Attorney General the authority to enforce it against an individual user of a foreign adversary controlled application, so users themselves cannot be held liable for continuing to use the application, if they can access it.

Will I Be Able to Download or Use TikTok If ByteDance Doesn’t Sell? 

It’s possible some U.S. users will find routes around the ban. But the vast majority will probably not, significantly shifting the platform's user base and content. If ByteDance itself assists in the distribution of the app, it could also be found liable, so even if U.S. users continue to use the platform, the company’s ability to moderate and operate the app in the U.S. would likely be impacted. Bottom line: for a period of time after January 19, it’s possible that the app would be usable, but it’s unlikely to be the same platform—or even a very functional one in the U.S.—for very long.

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic. Enacting this legislation undermines that longstanding democratic principle, and with it the U.S. government’s moral authority to call out other nations when they shut down internet access or ban social media apps and other online communications tools.

Legislators have given a few reasons for banning TikTok. One is to change the type of content on the app—a clear First Amendment violation. The second is to protect data privacy. Our lawmakers should work to protect data privacy, but this was the wrong approach. They should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries. They should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation. Instead, as happens far too often, our government’s actions are vastly overreaching while also deeply underserving the public.

How Comics Can Spark Conversations About Race and History in the Classroom

Right now, efforts to censor college protestors, to ban diverse materials in schools, and to silence students and staff threaten our right to free speech in schools. People are having their voices silenced, their right to learn challenged, and their access to information restricted. But how can we navigate these complex issues with the next generation?

We at the ACLU created a series of comic stories with illustrative journalist Eda Uzunlar to empower students and educators, spark vital conversations about their rights, and ensure all voices are heard and clear, both in the classroom and beyond. Our first installment illustrates the story of Anthony Crawford, a public school teacher and part of a lawsuit challenging HB 1775, Oklahoma’s classroom censorship law.

In this Q&A, we sat down with Eda to discuss why comics are the perfect medium to tackle these issues and connect with young people in a way that resonates far more effectively than traditional media can.

Let’s start with your journey as an illustrative journalist, comic creator, and audio enthusiast. What inspired you to use this kind of medium for your work?

I’ve been making comics since childhood. Like most kids, I doodled, and eventually, my doodles turned into my first comic. It was about a character called Spaceman – creative, I know – an astronaut stranded on the moon. He was this sardonic, really sarcastic, figure. It was a simple concept. He became this kind of vessel for expressing myself as a young person, particularly growing up in South Dakota with my family being both Muslim and immigrants from Turkey. Expressing these issues in a way that people who were very different from me would understand was crucial to me.

[Image: A preview of Eda Uzunlar’s comic featuring teacher and activist Anthony Crawford.]

I realized that comics are a way to discuss complex stories without oversimplifying them. But I never imagined it would become a career. Similarly, my entry into journalism was unexpected. Someone introduced me to FM radio in my teens. Within a year, community radio became this amazing space for me where I hosted a show discussing anything, from civil disobedience to whether or not respect is implied or earned – things I thought people from any background could weigh in on. And I don’t know why they gave a 16-year-old the ability to take live calls, but I got to talk to so many people in my community that way. It felt like a continuation of my comics — anonymous conversations driven by passion rather than preconceived notions based on appearances.

So I took those experiences and turned them into what I do now. I try to help people tell their stories – no matter how complex – in an accessible way, so others can gain understanding of perspectives they might not have known about before.

It’s so great how you’ve integrated your childhood passion for comics with your later pursuits in journalism and radio. You mentioned that comics offer a unique way to discuss complex issues without oversimplifying them. How do you navigate that balance between accessibility and depth when creating your comic content?

It’s all about breaking down big ideas into something digestible and engaging. When stories like these are presented in a visual format, it helps the audience both process and retain what they’re taking in. This especially applies to younger people. They’re the ones making use of social media and watching YouTube to learn about the world around them. Traditional newspapers? Not so much for them. And when we’re talking about accessibility, it’s a big deal. There’s a direct correlation between marginalized groups and limited access to media literacy. Traditional long-form journalism often fails to reach these communities.

[Image: A preview of Eda Uzunlar’s comic featuring teacher and activist Anthony Crawford.]

Take, for instance, the whole debate around critical race theory (CRT) in Oklahoma. A long-winded article might not reach the people who need to hear about it most. But with comics, we’re able to package up those complex ideas into something that will catch your eye and is easy to grasp. It’s like delivering a message directly to their social media feed. By making these reported stories visually engaging and using everyday language, we’re making sure that everyone gets a chance to join the conversation, especially those who might feel left out by traditional media channels, especially the ones with a paywall.

Let’s talk about this first comic you worked on about Anthony Crawford, an Oklahoma teacher who is part of a lawsuit challenging a classroom censorship bill. How did your approach to brainstorming and initial sketches contribute to capturing his story, particularly in conveying the depth of Black history and the importance of including both student and teacher perspectives?

There’s a process where you try very hard not to limit yourself at the beginning. That’s where you do quick sketches of one panel ten times, trying anything that might be cool to represent the idea. For example, for the panel about Black history being filled with wisdom, not just difficulty, there are a thousand ways to approach it. That could be represented literally with historical figures, or the opposite, which is what I did – a tree. A really big, grand tree. On its own, it could mean anything. But with the context and few words in the panel, it suggests a huge heritage and lineage. Trees are generational, lasting hundreds or even thousands of years. I had about five ideas, and then I saw how the tree looked. The detail and grandeur of this single image helped convey the depth to which Anthony described the importance of Black history in America, aligning with the voice he gave it throughout the piece. That’s another thing – I went back and said, “Listen, there’s just a tree in this panel, but it’s based on how you talked about what Black history feels like to you.” Like history existed before we were here and after we’re gone, just like a tree. And he was like, “That’s perfect.”

[Video: timelapse of the tree illustration]

[Audio: Eda on Adding Figures in Black History to an Illustration]

With critical race theory and book bans, everyone loses. The teacher, the student, the whole community is affected when our right to learn and right to free speech are stifled. So we really wanted to get both the student and teacher perspectives. Anthony opened his own story as a teenage version of himself in the early 2000s, enraged because he wasn’t being taught his own community’s history, discussing his experience as a student, which served as an ideal starting point for the piece. Eventually, he transitions into the current day, where he’s facing the same problem – only now, he’s the teacher. And there’s this vague law in Oklahoma that makes it hard for him to teach that same history, and the history of other oppressed communities in America. This shift illustrates the cyclical nature of issues like CRT and book bans in Oklahoma, highlighting how such restrictions on free speech persist over time. The initial depiction of Anthony as an unhappy student parallels the final panel where he faces his own students, who are motivated to learn because they can actually see themselves in their histories.

From Anthony’s perspective as a teacher, the issue of critical race theory getting banned is represented as one that educators like him are worried about. How did you make sure that struggle spoke to the younger audience as well?

When students face dilemmas like seeing banned books in their libraries and the removal of celebrated authors of color from their curriculum, it can shake their confidence in their education and understanding of history. That’s the first part of the comic, and it allows young people to make connections with the younger version of Anthony. Then, the narrative zeroes in on the educator perspective. Anthony champions diverse perspectives in his classroom. Through his actions, the comic reveals Anthony’s motivations for teaching, emphasizing his dedication to his students and his younger self. That’s where I wanted students to connect to the teacher side of the comic – so they know that if their right to an inclusive education is stifled, even if none of their own teachers have taken steps to continue teaching about America’s diverse history, there are educators out there who care and are making a difference. My hope is that by seeing someone who was once in their shoes assert his First Amendment rights, current students feel empowered to do the same for themselves.

[Video: timelapse of drawing Anthony]

[Audio: Eda on Drawing Anthony]

In fact, I have seen my comics be used as a connection between students and teachers. I put out a comic about juvenile justice, and about a year later, a teacher from Wyoming reached out to me on Facebook and shared that one of their students had shared my comic with them. Next thing you know, they’re teaching it in their classes, sparking discussions on juvenile justice, and showing students how to navigate tough situations. It’s pretty amazing, right? It shows how comics can really make a difference in the real world by influencing education and promoting meaningful dialogue.

Open Letter to College and University Presidents on Student Protests

Dear College and University Presidents:

We write in response to the recent protests that have spread across our nation’s university and college campuses, and the disturbing arrests that have followed. We understand that as leaders of your campus communities, it can be extraordinarily difficult to navigate the pressures you face from politicians, donors, and faculty and students alike. You also have legal obligations to combat discrimination and a responsibility to maintain order. But as you fashion responses to the activism of your students (and faculty and staff), it is essential that you not sacrifice principles of academic freedom and free speech that are core to the educational mission of your respective institutions.

The ACLU helped establish the right to protest as a central pillar of the First Amendment. We have defended those principles for more than a century. The First Amendment compels public universities and colleges to respect free speech rights. And while the Constitution does not apply directly to private institutions, academic freedom and free inquiry require that similar principles guide private universities. We approach this moment with appreciation for the challenges you confront. In the spirit of offering constructive solutions for a way forward, we offer five basic guardrails to ensure freedom of speech and academic freedom while protecting against discriminatory harassment and disruptive conduct.

Schools must not single out particular viewpoints for censorship, discipline, or disproportionate punishment

First, university administrators must not single out particular viewpoints — however offensive they may be to some members of the community — for censorship, discipline, or disproportionate punishment. Viewpoint neutrality is essential. Harassment directed at individuals because of their race, ethnicity, or religion is not, of course, permissible. But general calls for a Palestinian state “from the river to the sea,” or defenses of Israel’s assault on Gaza, even if many listeners find these messages deeply offensive, cannot be prohibited or punished by a university that respects free speech principles.

These protections extend to both students and faculty, and to speech that supports either side of the conflict. Outside the classroom, including on social media, students and professors must be free to express even the most controversial political opinions without fear of discipline or censure. Inside the classroom, speech can be and always has been subject to more restrictive rules to ensure civil dialogue and a robust learning environment. But such rules have no place in a public forum like a campus green. Preserving physical safety on campuses is paramount; but “safety” from ideas or views that one finds offensive is anathema to the very enterprise of the university.

Schools must protect students from discriminatory harassment and violence

Second, both public and private universities are bound by civil rights laws that guarantee all students equal access to education, including Title VI of the Civil Rights Act. This means that schools can, and indeed must, protect students from discriminatory harassment on the basis of race or national origin, which has been interpreted to include discrimination on the basis of “shared ancestry or ethnic characteristics,” or “citizenship or residency in a country with a dominant religion or distinct religious identity.”

So, while offensive and even racist speech is constitutionally protected, shouting an epithet at a particular student or pinning an offensive sign to their dorm room door can constitute impermissible harassment, not free speech. Antisemitic or anti-Palestinian speech targeted at individuals because of their ethnicity or national origin constitutes invidious discrimination, and cannot be tolerated. Physically intimidating students by blocking their movements or pursuing them aggressively is unprotected conduct, not protected speech. It should go without saying that violence is never an acceptable protest tactic.

Speech that is not targeted at an individual or individuals because of their ethnicity or national origin but merely expresses impassioned views about Israel or Palestine is not discrimination and should be protected. The only exception for such untargeted speech is where it is so severe or pervasive that it denies students equal access to an education — an extremely demanding standard that has almost never been met by pure speech. One can criticize Israel’s actions, even in vituperative terms, without being antisemitic. And by the same token, one can support Israel’s actions in Gaza and condemn Hamas without being anti-Muslim. Administrators must resist the tendency to equate criticism with discrimination. Speech condoning violence can be condemned, to be sure. But it cannot be the basis for punishment, without more.

Schools can announce and enforce reasonable content-neutral protest policies but they must leave ample room for students to express themselves

Third, universities can announce and enforce reasonable time, place, or manner restrictions on protest activity to ensure that essential college functions can continue. Such restrictions must be content neutral, meaning that they do not depend on the substance of what is being communicated, but rather where, when, or how it is being communicated. Protests can be limited to certain areas of campus and certain times of the day, for example. These policies must, however, leave ample room for students to speak to and to be heard by other members of the community. And the rules must not only be content neutral on their face; they must also be applied in a content-neutral manner. If a university has routinely tolerated violations of its rules, and suddenly enforces them harshly in a specific context, singling out particular views for punishment, the fact that the policy is formally neutral on its face does not make viewpoint-based enforcement permissible.

Schools must recognize that armed police on campus can endanger students and are a measure of last resort

Fourth, when enforcement of content-neutral rules may be warranted, college administrators should involve police only as a last resort, after all other efforts have been exhausted. Inviting armed police into a campus protest environment, even a volatile one, can create unacceptable risks for all students and staff. University officials must also be cognizant of the history of law enforcement using inappropriate and excessive force on communities of color, including Black, Brown, and immigrant students. Moreover, arresting peaceful protestors is also likely to escalate, not calm, the tensions on campus — as events of the past week have made abundantly clear.

Schools must resist the pressures placed on them by politicians seeking to exploit campus tensions

Finally, campus leaders must resist the pressures placed on them by politicians seeking to exploit campus tensions to advance their own notoriety or partisan agendas. Recent congressional hearings have featured disgraceful attacks by members of Congress on academic freedom and freedom of speech. Universities must stand up to such intimidation, and defend the principles of academic freedom so essential to their integrity and mission.

The Supreme Court has forcefully rejected the premise that, “because of the acknowledged need for order, First Amendment protections should apply with less force on college campuses than in the community at large.”

“Quite to the contrary,” the court stated, “the vigilant protection of constitutional freedoms is nowhere more vital than in the community of American schools.” In keeping with these values, we urge you to resist the temptation to silence students or faculty members because powerful voices deem their views offensive. Instead, we urge you to defend the university’s core mission of encouraging debate, fostering dissent, and preparing the future leaders of our pluralistic society to tolerate even profound differences of opinion.

The Supreme Court Declined a Protestors' Rights Case. Here's What You Need to Know.

The Supreme Court recently declined to hear a case, Mckesson v. Doe, that could have affirmed that the First Amendment protects protest organizers from being held liable for illegal actions committed by others present that the organizers did not direct or intend. The high court's decision not to hear the case at this time left in place an opinion by the Fifth Circuit, which covers Louisiana, Mississippi, and Texas, holding that a protest organizer could be liable for the independent, violent actions of others based on nothing more than a showing of negligence.

Across the country, many people have expressed concern about how the Supreme Court's decision not to review, or hear, the case at this stage could impact the right to protest. The ACLU, which asked the court to take up the case, breaks down what the court's denial of review means.

What Happened in Mckesson v. Doe?

The case was brought by a police officer against DeRay Mckesson, a prominent civil rights activist. The officer claims that Mckesson should be liable for personal injuries he suffered after an unknown individual — not Mckesson — threw a "rock-like" object at the officer during a 2016 protest of the killing of Alton Sterling by Baton Rouge, Louisiana police.

The officer does not claim that Mckesson encouraged or even knew about the rock-throwing. Rather than sue the rock-thrower, however, the officer is suing Mckesson on the theory that, because he allegedly organized the protest, he had a duty to protect every person there and "should have known" an assault could occur.

The idea is that a protest organizer can be held responsible for what a stranger present at the protest does to someone else, not because the organizer asked or meant for them to do it, but merely because it was foreseeable that they might. If this theory of "negligent protest" were accepted, it would become far riskier to organize a protest. The ACLU has argued that this standard of liability violates the First Amendment in part because it would pose an unconstitutional burden on our right to protest.

Despite this, and after several years of procedural back-and-forth between courts, the Fifth Circuit ruled in 2023 that the negligence claim against Mckesson did not violate the First Amendment. Instead, the Fifth Circuit held that a protest organizer could be liable for the independent, violent actions of others based on nothing more than a showing of negligence.

Recognizing how squarely this decision violates First Amendment fundamentals, the ACLU and co-counsel filed a petition for certiorari, asking the Supreme Court to overturn the Fifth Circuit's obviously wrong ruling. Unfortunately, the court denied our petition (https://www.supremecourt.gov/opinions/23pdf/23-373_8njq.pdf).

What Does the Supreme Court's Denial of Review Mean for Our Right to Protest?

While the Supreme Court does not generally explain why it declines to hear a case — and it can do so for any number of reasons — Justice Sonia Sotomayor wrote a statement accompanying the denial that might explain the reason in this case: the Supreme Court has already settled this question, so the law is not in need of further clarification.

In her statement, Justice Sotomayor explains that, in 2023, shortly after the Fifth Circuit's decision, the Supreme Court issued an opinion in Counterman v. Colorado, where it confirmed that negligence is never a sufficient basis for imposing liability on political expression and association. In fact, in Counterman, the court made explicitly clear that, when it comes to drawing the line between unprotected incitement and the kinds of "strong protests against the government and prevailing social order" that lie at the heart of the First Amendment, a showing of intent is required. That is a much higher standard than negligence, which asks only whether someone who didn't know what impact their speech would have should have known its possible effect. Intent, in contrast, requires that the speaker knew about, wanted, and aimed for the resulting harm.

Justice Sotomayor concluded her statement by emphasizing that while the Fifth Circuit did not have the benefit of the Supreme Court's recent decision in Counterman when it issued its opinion, the lower courts in this case (and in general) now do, and are expected to fairly apply that decision in future proceedings.

Has Our Right to Protest Changed?

Some people have suggested that the Supreme Court's decision not to hear this case means that our right to protest has been effectively abolished in three U.S. states. That's not accurate.

While it is true that the Fifth Circuit's erroneous decision has not been vacated, and technically could be invoked against protest organizers in Louisiana, Mississippi, and Texas, it is important to understand two things.

First, separate from the First Amendment problem, there's the question of whether a "negligent protest" claim even exists under a state's civil law. In Mckesson, the Louisiana Supreme Court said yes, but the high courts in Texas and Mississippi haven't said the same. That means the theory of "negligent protest" in Mckesson is specific to Louisiana state law.

Second, when it comes to the First Amendment, the Supreme Court has made explicitly clear in many other cases that negligence is too low a threshold for imposing liability on one person for another person's violence or other illegal acts at a protest.

To take just one example, in 1982, the court held that while the Constitution does not protect violence, it does limit the government's ability to place responsibility for that violence onto peaceful protest leaders who did not direct or intend it. That seminal civil rights case, NAACP v. Claiborne Hardware Co., has been cited repeatedly to ensure robust speech protections, including to dismiss a lawsuit against then-candidate Donald Trump for violent acts committed by others at a campaign rally and to challenge efforts to stifle Keystone XL pipeline protests. As Justice Sotomayor's statement highlighted, the court recently reaffirmed these rules in Counterman.

However, since the Supreme Court did not officially reverse the Fifth Circuit's decision, it is possible that a court in Louisiana may decide to apply the Fifth Circuit's logic. Say, for example, that a small crowd of people act violently at a protest in Louisiana and the protest organizer — who had no connection to the violence — is subsequently sued for negligence. The lower court should heed Justice Sotomayor's statement, correctly apply Counterman, and dismiss the claim for violating the First Amendment. But it is possible that a lower court would still apply the Fifth Circuit's decision, issued prior to Counterman. If that were to happen, the ACLU is interested in fighting alongside the organizer to ensure that the correct rule ultimately applies, and that the Fifth Circuit's clearly erroneous decision does not govern anywhere.

Since our founding, efforts to silence dissent have emerged in moments of mass protest, like the one we find ourselves in today. However, the Supreme Court has consistently upheld our right to protest and our right to be responsible only for our own actions. Today, the ACLU urges the lower courts to continue protecting our rights, and to prevent the Fifth Circuit's deeply misguided theory from gaining any traction. No one should be afraid to express dissent, to advocate for change, or to support causes they believe in.

Federal Court Dismisses X's Anti-Speech Lawsuit Against Watchdog

5 April 2024 at 09:01

This post was co-written by EFF legal intern Melda Gurakar.

Researchers, journalists, and everyone else have a First Amendment right to criticize social media platforms and their content moderation practices without fear of being targeted by retaliatory lawsuits, a federal court recently ruled.

The decision by a federal court in California to dismiss a lawsuit brought by Elon Musk’s X against the Center for Countering Digital Hate (CCDH), a nonprofit organization dedicated to fighting online hate speech and misinformation, is a win for greater transparency and accountability of social media companies. The court’s ruling in X Corp. v. Center for Countering Digital Hate Ltd. shows that X had no legitimate basis to bring its case in the first place, as the company used the lawsuit to penalize the CCDH for criticizing X and to deter others from doing so.

Vexatious cases like these are known as Strategic Lawsuits Against Public Participation, or SLAPPs. These lawsuits chill speech because they burden speakers who engaged in protected First Amendment activity with the financial costs and stress of fighting litigation, rather than vindicating legitimate legal claims. The goal of these suits is not to win, but to inflict harm on the opposing party for speaking. We are grateful that the court recognized X's lawsuit as a SLAPP and dismissed it, ruling that the claims lacked legal merit and that the suit violated California's anti-SLAPP statute.

The lawsuit, filed in July 2023, accused CCDH of unlawfully accessing and scraping data from X's platform, which X argued CCDH used to harm X Corp.'s reputation and, by extension, its business operations, leading to lost advertising revenue and other damages. X alleged that CCDH had initiated this calculated "scare campaign" to deter advertisers from engaging with the platform, supposedly resulting in a significant financial loss for X. Moreover, X claimed that CCDH, as a user of X, breached its Terms of Service contract.

The court ruled that X's accusations were insufficient to bypass the protective shield of California's anti-SLAPP statute. Furthermore, the court's decision to dismiss X Corp.'s claims, including those related to breach of contract and alleged violations of the federal Computer Fraud and Abuse Act (CFAA), stemmed from X Corp.'s inability to convincingly allege or demonstrate significant losses attributable to CCDH's activities. This outcome is not only a triumph for CCDH, but also validates the anti-SLAPP statute's role in safeguarding critical research efforts against baseless legal challenges. Thankfully, the court also rejected X's CFAA claim. X had argued that the CFAA barred CCDH's scraping of public tweets—an erroneous reading of the law. The court found that, regardless of that argument, X had not shown a "loss" of the type protected by the CFAA, such as technological harms to data or computers.

EFF, alongside the ACLU of Northern California and the national ACLU, filed an amicus brief in support of CCDH, arguing that X Corp.'s lawsuit mischaracterized a nonviable defamation claim as a breach of contract in order to retaliate against CCDH. The brief supported CCDH's motion to dismiss, arguing that the terms-of-service provision X invoked against CCDH's data scraping should be deemed void as contrary to the public interest. It also warned of a potential chilling effect on research and activism that rely on digital platforms to gather information.

The ramifications of X Corp v. CCDH reach far beyond this decision. X Corp v. CCDH affirms the Center for Countering Digital Hate's freedom to conduct and publish research that critiques X Corp., and sets precedent that protects critical voices from being silenced online. We are grateful that the court reached this correct result and affirmed that people should not be targeted by lawsuits for speaking critically of powerful institutions.

U.S. Supreme Court Does Not Go Far Enough in Determining When Government Officials Are Barred from Censoring Critics on Social Media

29 March 2024 at 17:45

After several years of litigation across the federal appellate courts, the U.S. Supreme Court in a unanimous opinion has finally crafted a test that lower courts can use to determine whether a government official engaged in “state action” such that censoring individuals on the official’s social media page—even if also used for personal purposes—would violate the First Amendment.

The case, Lindke v. Freed, came out of the Sixth Circuit and involves a city manager, while a companion case called O'Connor-Ratcliff v. Garnier came out of the Ninth Circuit and involves public school board members.

A Two-Part Test

The First Amendment prohibits the government from censoring individuals' speech in public forums based on the viewpoints those individuals express. In the age of social media, where people in government positions use public-facing social media for personal, campaign, and official government purposes, it can be unclear whether the interactive parts (e.g., the comments section) of a social media page operated by someone who works in government amount to a government-controlled public forum subject to the First Amendment's prohibition on viewpoint discrimination. Another way of stating the issue is whether a government official who uses a social media account for personal purposes engages in state action when they also use the account to speak about government business.

As the Supreme Court states in the Lindke opinion, “Sometimes … the line between private conduct and state action is difficult to draw,” and the question is especially difficult “in a case involving a state or local official who routinely interacts with the public.”

The Supreme Court announced a fact-intensive test to determine if a government official’s speech on social media counts as state action under the First Amendment. The test includes two required elements:

  • the official “possessed actual authority to speak” on the government’s behalf, and
  • the official “purported to exercise that authority when he spoke on social media.”

Although the court’s opinion isn’t as generous to internet users as we had asked for in our amicus brief, it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

This issue has been percolating in the courts since at least 2016. Perhaps most famously, the Knight First Amendment Institute at Columbia University and others sued then-president Donald Trump for blocking many of the plaintiffs on Twitter. In that case, the U.S. Court of Appeals for the Second Circuit affirmed a district court’s holding that President Trump’s practice of blocking critics from his Twitter account violated the First Amendment. EFF has also represented PETA in two cases against Texas A&M University.

Element One: Does the official possess actual authority to speak on the government’s behalf?

There is some ambiguity as to what specific authority the Supreme Court believes the government official must have. The opinion is unclear whether the authority is simply the general authority to speak officially on behalf of the public entity, or instead the specific authority to speak officially on social media. On the latter framing, the opinion, for example, discusses the authority “to post city updates and register citizen concerns,” and the authority “to speak for the [government]” that includes “the authority to do so on social media….” The broader authority to generally speak on behalf of the government would be easier to prove for plaintiffs and should always include any authority to speak on social media.

Element One Should Be Interpreted Broadly

We will urge the lower courts to interpret the first element broadly. As we emphasized in our amicus brief, social media is so widely used by government agencies and officials at all levels that a government official’s authority generally to speak on behalf of the public entity they work for must include the right to use social media to do so. Any other result does not reflect the reality we live in.

Moreover, plaintiffs who are being censored on social media are not typically commenting on the social media pages of low-level government employees, say, the clerk at the county tax assessor’s office, whose authority to speak publicly on behalf of their agency may be questionable. Plaintiffs are instead commenting on the social media pages of people in leadership positions, who are often agency heads or in elected positions and who surely should have the general authority to speak for the government.

“At the same time,” the Supreme Court cautions, “courts must not rely on ‘excessively broad job descriptions’ to conclude that a government employee is authorized to speak” on behalf of the government. But under what circumstances would a court conclude that a government official in a leadership position does not have such authority? We hope these circumstances are few and far between for the sake of plaintiffs seeking to vindicate their First Amendment rights.

When Does the Use of a New Communications Technology Become So “Well Settled” That It May Fairly Be Considered Part of a Government Official’s Public Duties?

If, on the other hand, the lower courts interpret the first element narrowly and require plaintiffs to provide evidence that the government official who censored them had authority to speak on behalf of the agency on social media specifically, this will be more difficult to prove.

One helpful aspect of the court’s opinion is that the government official’s authority to speak (however that’s defined) need not be written explicitly in their job description. This is in contrast to what the Sixth Circuit had, essentially, held. The authority to speak on behalf of the government, instead, may be based on “persistent,” “permanent,” and “well settled” “custom or usage.”  

We remain concerned, however, that if there is a narrower requirement that the authority must be to speak on behalf of the government via a particular communications technology—in this case, social media—then at what point does the use of a new technology become so “well settled” for government officials that it is fair to conclude that it is within their public duties?

Fortunately, the case law on which the Supreme Court relies does not require an extended period of time for a government practice to be deemed a legally sufficient “custom or usage.” It would not make sense to require an ages-old custom and usage of social media when the widespread use of social media within the general populace is only a decade and a half old. Ultimately, we will urge lower courts to avoid this problem and broadly interpret element one.

Government Officials May Be Free to Censor If They Speak About Government Business Outside Their Immediate Purview

Another problematic aspect of the Supreme Court’s opinion within element one is the additional requirement that “[t]he alleged censorship must be connected to speech on a matter within [the government official’s] bailiwick.”

The court explains:

For example, imagine that [the city manager] posted a list of local restaurants with health-code violations and deleted snarky comments made by other users. If public health is not within the portfolio of the city manager, then neither the post nor the deletions would be traceable to [his] state authority—because he had none.

But the average constituent may not make such a distinction — nor should they. They would simply see a government official talking about an issue generally within the government's area of responsibility. Yet under this interpretation, the city manager would be within his rights to delete the comments, as the constituent could not prove that the issue was within that particular government official's purview, and they would thus fail to meet element one.

Element Two: Did the official purport to exercise government authority when speaking on social media?

Plaintiffs Are Limited in How a Social Media Account’s “Appearance and Function” Inform the State Action Analysis

In our brief, we argued for a functional test, where state action would be found if a government official were using their social media account in furtherance of their public duties, even if they also used that account for personal purposes. This was essentially the standard that the Ninth Circuit adopted, which included looking at, in the words of the Supreme Court, “whether the account’s appearance and content look official.” The Supreme Court’s two-element test is more cumbersome for plaintiffs. But the upside is that the court agrees that a social media account’s “appearance and function” is relevant, even if only with respect to element two.

Reality of Government Officials Using Both Personal and Official Accounts in Furtherance of Their Public Duties Is Ignored

Another problematic aspect of the Supreme Court’s discussion of element two is that a government official’s social media page would amount to state action if the page is the “only” place where content related to government business is located. The court provides an example: “a mayor would engage in state action if he hosted a city council meeting online by streaming it only on his personal Facebook page” and it wasn’t also available on the city’s official website. The court further discusses a new city ordinance that “is not available elsewhere,” except on the official’s personal social media page. By contrast, if “the mayor merely repeats or shares otherwise available information … it is far less likely that he is purporting to exercise the power of his office.”

This limitation is divorced from reality and will hamstring plaintiffs seeking to vindicate their First Amendment rights. As we showed extensively in our brief (see Section I.B.), government officials regularly use both official office accounts and “personal” accounts for the same official purposes, by posting the same content and soliciting constituent feedback—and constituents often do not understand the difference.

Constituent confusion is particularly salient when government officials continue to use “personal” campaign accounts after they enter office. The court’s conclusion that a government official “might post job-related information for any number of personal reasons, from a desire to raise public awareness to promoting his prospects for reelection” is thus highly problematic. The court is correct that government officials have their own First Amendment right to speak as private citizens online. However, their constituents should not be subject to censorship when a campaign account functions the same as a clearly official government account.

An Upside: Supreme Court Denounces the Blocking of Users Even on Mixed-Use Social Media Accounts

One very good aspect of the Supreme Court’s opinion is that if the censorship amounted to the blocking of a plaintiff from engaging with the government official’s social media page as a whole, then the plaintiff must merely show that the government official “had engaged in state action with respect to any post on which [the plaintiff] wished to comment.”  

The court further explains:

The bluntness of Facebook's blocking tool highlights the cost of a "mixed use" social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability.

We are pleased with this language and hope it discourages government officials from engaging in the most egregious of censorship practices.

The Supreme Court also makes the point that if the censorship was the deletion of a plaintiff’s individual comments under a government official’s posts, then those posts must each be analyzed under the court’s new test to determine whether a particular post was official action and whether the interactive spaces that accompany it are government forums. As the court states, “it is crucial for the plaintiff to show that the official is purporting to exercise state authority in specific posts.” This is in contrast to the Sixth Circuit, which held, “When analyzing social-media activity, we look to a page or account as a whole, not each individual post.”

The Supreme Court’s new test for state action unfortunately puts a thumb on the scale in favor of government officials who wish to censor constituents who engage with them on social media. However, the test does chart a path forward on this issue and should be workable if lower courts apply the test with an eye toward maximizing constituents’ First Amendment rights online.

Meta Oversight Board’s Latest Policy Opinion a Step in the Right Direction

26 March 2024 at 15:11

EFF welcomes the latest and long-awaited policy advisory opinion from Meta's Oversight Board, which calls on the company to end its blanket ban on the use of the Arabic-language term "shaheed" when referring to individuals listed under Meta's policy on dangerous organizations and individuals, and we call on Meta to fully implement the Board's recommendations.

Since the Meta Oversight Board was created in 2020 as an appellate body designed to review select contested content moderation decisions made by Meta, we’ve watched with interest as the Board has considered a diverse set of cases and issued expert opinions aimed at reshaping Meta’s policies. While our views on the Board's efficacy in creating long-term policy change have been mixed, we have been happy to see the Board issue policy recommendations that seek to maximize free expression on Meta properties.

The policy advisory opinion, issued Tuesday, addresses posts referring to individuals as "shaheed," an Arabic term that closely (though not exactly) translates to "martyr," when those same individuals have previously been designated by Meta as "dangerous" under its dangerous organizations and individuals policy. The Board found that Meta's approach to moderating content that contains the term in reference to individuals designated under that policy—a policy that covers both government-proscribed organizations and others selected by the company—substantially and disproportionately restricts free expression.

The Oversight Board first issued a call for comment in early 2023, and in April of last year, EFF partnered with the European Center for Not-for-Profit Law (ECNL) to submit comment for the Board’s consideration. In our joint comment, we wrote:

The automated removal of words such as ‘shaheed’ fail to meet the criteria for restricting users’ right to freedom of expression. They not only lack necessity and proportionality and operate on shaky legal grounds (if at all), but they also fail to ensure access to remedy and violate Arabic-speaking users’ right to non-discrimination.

In addition to finding that Meta's current approach to moderating such content restricts free expression, the Board noted that any restriction on freedom of expression that seeks to prevent violence must be necessary and proportionate, "given that undue removal of content may be ineffective and even counterproductive."

We couldn't agree more. We have long been concerned about the impact that corporate policies and government regulations designed to limit violent extremist content have on human rights and evidentiary content, as well as on journalism and art. We have worked directly with companies and with multistakeholder initiatives such as the Global Internet Forum to Counter Terrorism, Tech Against Terrorism, and the Christchurch Call to ensure that freedom of expression remains a core part of policymaking.

In its policy recommendation, the Board acknowledges the importance of Meta’s ability to take action to ensure its platforms are not used to incite violence or recruit people to engage in violence, and that the term “shaheed” is sometimes used by extremists “to praise or glorify people who have died while committing violent terrorist acts.” However, the Board also emphasizes that Meta’s response to such threats must be guided by respect for all human rights, including freedom of expression. Notably, the Board’s opinion echoes our previous demands for policy changes, as well as those of the Stop Silencing Palestine campaign initiated by nineteen digital and human rights organizations, including EFF.

We call on Meta to implement the Board’s recommendations and ensure that future policies and practices respect freedom of expression.

State Legislative Sessions: How They Impact Your Rights

State legislation is crucially connected to our civil liberties, and can either expand our rights or chip away at them. These bills touch nearly every aspect of our lives. From Roe v. Wade and the Dobbs case that overturned the right to an abortion, to Loving v. Virginia, which struck down laws banning interracial marriage, and Obergefell v. Hodges, which recognized marriage equality across the country — many Supreme Court cases that address all of our civil rights come from laws that were passed in state legislatures.

With an increasingly conservative Supreme Court and federal court system, as well as a Congress whose members are constantly in gridlock, state legislatures offer a more accessible way to enact meaningful change. State lawmakers are easier to contact regarding policies that should be passed, and also frequently go on to run for federal office, or become governors. What’s more, state actions can lead to national impact if many similar policies are passed around the country, signaling national trends.

With many state legislative sessions currently underway, learn more about this important political process, how it affects your rights, and how to get involved.


What Are State Legislative Sessions?

Each state has its own legislative body in which lawmakers work together to pass policies — just like Congress does at the federal level. Every state except Nebraska has a bicameral legislature, meaning one composed of two chambers, which must work together to secure a majority of favorable votes and pass bills in both chambers. While the exact names and powers of these bodies vary from state to state, once a bill is passed, it is sent to the governor to be signed into law or vetoed.

Most state legislatures are made up of lawmakers who meet to pass laws during legislative sessions each year. If circumstances arise that require lawmakers to address legislation outside of these regular sessions, a special session can be called. There are also several states with full-time legislatures whose lawmakers meet year-round. Elsewhere, lawmakers often engage in this work part time, and many are not adequately paid.


When Are State Legislative Sessions Held?

The length and timing of state legislative sessions differ from state to state. Some legislatures are in session for many months, while others only take a few. The sessions that aren’t full time usually take place in the first half of the year, traditionally beginning in January.


How Do They Impact Our Rights?

The laws that are passed during state legislative sessions run the gamut and can affect a wide range of constituents' rights, including reproductive freedom, voting protections, and access to gender-affirming care. But this influence goes both ways. Presumably, prospective laws should reflect the majority opinions of individuals in the state, with lawmakers acting as advocates for these interests. Many bills and policies that make it to state legislatures are promoted by advocacy organizations or interest groups who work with lawmakers to get them passed. The ACLU is among these entities, and is the only organization focusing on civil rights and civil liberties that has an office with staff in every state, working with local policymakers.


What To Watch As Sessions Are Underway

There are many decisions happening in states around the country that put our rights in the balance. Without the federal protections from Roe v. Wade, many lawmakers are attacking abortion rights at the state level. There has also been a surge of state laws introduced that block trans youth from receiving gender-affirming care, censor student free speech, and suppress people’s voting powers.

But the ACLU will never stop fighting for your rights. We have taken on countless state-level legal battles to protect people’s liberties — and have seen many victories along the way.


How Do I Engage/Get Involved in the Process?

The ACLU always encourages our community to play a hands-on role in the fight for our freedoms. Across the country, we implement strategies that empower voters to stay informed about local races and elect candidates whose interests align with theirs. We're also mapping state-level attacks on LGBTQ rights so you can keep track of your own area's legislation — and fight back accordingly.

Supporters can get in touch with the ACLU affiliate offices in their state to learn about local issues they are taking action on. Many affiliate websites offer primers on state legislatures. Our grassroots effort People Power also allows volunteers to engage with state-level actions in their area.

To learn about your state’s legislature, identify the lawmakers who represent you and what their stances are on the issues you care about most. State lawmakers and governors will usually highlight the issues they care about, and the legislative work they’ve done, wherever they are able. With most state legislative sessions underway right now, you can also keep track of policies that are being voted on. This will let you know your legislature’s priorities and if your lawmakers are fulfilling their campaign promises to constituents. Remember, the key players involved in the legislative process are voted into office by you. You have the power in numbers to elect or replace representatives based on whether they are advocating for your interests.

Why is the ACLU Representing the NRA Before the US Supreme Court?

For more than 100 years the American Civil Liberties Union has defended the right to free speech – no matter the speaker, and regardless of whether we agree with their views.

The defense and protection of free speech and expression span many forms and issues at the ACLU. In the last year alone, it has included efforts to actively oppose book bans; represent educators fighting classroom censorship aimed at suppressing important race perspectives; defend protesters responding to police shootings or overseas wars; protect the ability of Indigenous students to wear tribal regalia at their graduation ceremonies; and fight against retaliatory arrests for protected speech.

While the faces of the free speech movement continue to change, the significance of defending free speech remains unchanged. This work lies at the heart of the ACLU’s core principles and values.


Why the ACLU Represented the NRA

On March 18, the ACLU appeared before the U.S. Supreme Court to argue another free speech case of great significance. In this case, the ACLU represented the National Rifle Association (NRA) against government overreach and censorship. Some may have wondered why the ACLU was representing the NRA, since the ACLU clearly opposes the NRA on gun control and the role of firearms in society. In fact, we abhor many of the group’s goals, strategies, and tactics. So, the reality that we have joined forces, notwithstanding those disagreements, reflects the importance of the First Amendment principles at stake in this case.

The ACLU made the decision to represent the NRA in this case because we are deeply concerned that if regulators can threaten the NRA for its political views in New York state, they can come after the ACLU and allied organizations in places where our agendas are unpopular.

President Trump has already promised that, if reelected, he will use the power of the government to go after his political adversaries. In a second Trump administration, opposition from the ACLU and its allied organizations will be top of mind for political leaders who may seek to go after their rivals the way New York targeted the NRA. The principal issue at stake in this case is one in which the ACLU deeply believes: preventing government blacklists of advocacy groups. Indeed, the timing couldn't be better for drawing a bright line that would help bind a future Trump administration and other government officials who misuse their power.

In this case, the ACLU argues that Maria Vullo, New York’s former chief financial regulator, threatened to use her regulatory power over banks and insurance companies to coerce them into denying basic financial services to the NRA and, in Vullo’s own words, “other gun promotion” groups. The ACLU argues that coercing private parties to blacklist the NRA because of its advocacy violates the First Amendment, just as punishing the group directly for its “gun promotion” views would. And if New York can do this to the NRA, Texas or Florida could use the same tactics against groups advocating immigrants’ rights or the right to abortion.

The NRA has a right, like all other advocacy organizations, to pursue its mission free from reprisals by government officials who disagree with its political viewpoint. The government should not be able to evade the Constitution by doing indirectly what it plainly cannot do directly. History has consistently underscored the importance of this protection.

Nevertheless, we've faced criticism of our representation of the NRA on the theory that even if the NRA wins in this Supreme Court case, officials will still try to stifle the speech of people on the left, and courts will side with them. These critics are correct in one sense — those in power have an unfortunate tendency to try to stifle the speech of those with whom they disagree, and we will certainly continue to bring new cases to stop them. But the critics are wrong about the impact of the precedents we win, especially at the Supreme Court. People of every ideological stripe benefit from every decision vindicating the right to freedom of speech.


Why It's Important to Defend Speech We Detest

When we defend clients with positions with which we disagree, or even abhor, it’s because we are defending values crucial to the work of civil rights advocates in the past and present. These values include doctrines that protect our rights — at the local, state, and federal level — to join economic boycotts, hold protests, and publicly dissent. In fact, a significant amount of the ACLU’s modern day First Amendment advocacy work is predicated on principles stemming from landmark civil rights legal victories of the 1960s and 70s.

Take one of our most controversial cases, which is also one of the most important cases in the entire First Amendment canon — our defense of the Ku Klux Klan. In 1969, Klan member Clarence Brandenburg addressed a rally held in Ohio where he called for “revenge” against the government and Black individuals. He was convicted of violating the state’s Criminal Syndicalism law, which prohibited speech that “advocate[d] … the duty, necessity, or propriety of crime, sabotage, or unlawful methods of terrorism as a means of accomplishing industrial or political reform.”

The ACLU represented Brandenburg at the Supreme Court, which reversed his conviction. The court ruled that Brandenburg’s speech was protected by the First Amendment, and that the government can make it a crime to advocate illegal conduct only “where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”

Brandenburg’s speech was reprehensible. But in preserving his First Amendment rights, the ACLU helped establish critical protection for all dissidents’ and activists’ speech. Before Brandenburg, governments had regularly charged their critics with advocating illegal activity. The Brandenburg precedent has been used to defend all kinds of political speech; indeed, today the ACLU is applying the decision in a Supreme Court case defending civil rights activist DeRay Mckesson, who took part in a Black Lives Matter protest in Louisiana.

Simply put, the right to speak freely applies to everyone. Otherwise, any elected official would be able to decide whose speech is acceptable, “safe,” or politically palatable. That is why we defend speech we hate. It’s why in 1978 the ACLU represented a neo-Nazi group that sought to demonstrate in Skokie, a Chicago suburb with a substantial Jewish population, including many survivors of the Holocaust. Notwithstanding the odious views of the protesters, we believed that once government officials are empowered to block demonstrations because they disagree with their message, the right to protest would be illusory. The Supreme Court agreed, and that decision ensures that city, state, and federal officials cannot suppress protests because they disapprove of their message.

The power to censor the neo-Nazis would have opened the door to censoring any protester, including civil rights activists or anti-war protesters. The ACLU’s position in this case was famously controversial and Aryeh Neier, the ACLU’s executive director in the 70s and a Jewish refugee from Nazi Germany, withstood withering criticism. But it was the right thing to do.


Why the First Amendment Applies to Everyone, Not Just Our Friends

The ACLU knew in the past, as we recognize now, that if the First Amendment protected only popular ideas, it would serve little purpose. If we do not take a principled stand on behalf of those with whom we disagree, we weaken our case the next time we defend those fighting for the values we share. At our core, the ACLU believes that rights and liberties are universal and “indivisible” – meaning they attach to all people, not just our friends.

Our mandate to advance all rights and liberties for all people was forged more than 100 years ago when we combatted political repression against dissidents, immigrants, workers, and other so-called radicals. Over the years the ACLU has defended the free speech rights of countless individuals and groups with which we disagree. We defended their speech rights — despite our disagreements — because we believe in free speech, and because we realize that once you chip away at one person’s rights, everyone’s rights are at risk.

Defending speech we hate is admittedly a controversial part of our mandate. Some of our allies and supporters don’t always agree with this stance. In fact, there are even some ACLU staff, leaders, and volunteers who believe that defending speech we hate does more harm than good. Some believe we shouldn’t use our limited resources defending individuals and causes with whom we disagree. Reasonable people can — and always will — disagree on the ACLU’s stance, including our own staff. Yet this is what we have done for over a century and, as the ACLU’s executive director, I respectfully believe it’s the right thing to do — for free speech and for the ACLU.

Ours is an organization that increasingly reflects all of America. We celebrate our growing diversity, just as we embrace the dissent and debate that attend it. Our commitment to free speech extends to dissent within our ranks. Dissent and debate are healthy for society — and for a civil liberties organization. This principle has long been the lifeblood of the ACLU. And it is that commitment that underlies our defense of the NRA’s free speech rights at the Supreme Court.

Lawmakers: Ban TikTok to Stop Election Misinformation! Same Lawmakers: Restrict How Government Addresses Election Misinformation!

15 March 2024 at 22:12

In a case being heard Monday at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal.

They assert that agencies can't even pass on information about websites that state election officials have identified as disinformation, even if the agencies don't request that any action be taken.

Yet just this week the vast majority of those same lawmakers said the government's interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.

On Monday, the Supreme Court will hear oral arguments in Murthy v. Missouri, a case that raises the issue of whether the federal government violates the First Amendment by asking social media platforms to remove or negatively moderate user posts or accounts. In Murthy, the government contends that it can strongly urge social media sites to remove posts without violating the First Amendment, as long as it does not coerce them into doing so under the threat of penalty or other official sanction.

We recognize both the hazards of government involvement in content moderation and the proper role the government can play in some situations by sharing its expertise with the platforms. In our brief in Murthy, we urge the court to adopt a view of coercion that includes indirectly coercive communications designed, and reasonably perceived, as efforts to replace the platform's editorial decision-making with the government's.

And we argue that close cases should go against the government. We also urge the court to recognize that the government may and, in some cases, should appropriately inform platforms of problematic user posts. But it’s the government’s responsibility to make sure that its communications with the platforms are reasonably perceived as being merely informative and not coercive.

In contrast, the members of Congress signed an amicus brief in Murthy supporting strict limitations on the government's interactions with social media companies. They argued that the government may hardly communicate at all with social media platforms when it detects problematic posts.

Notably, the specific posts they discuss in their brief include, among other things, posts the U.S. government suspects are foreign election interference. For example, the case includes allegations that the FBI and CISA improperly communicated with social media sites — allegations that boil down to the agencies passing on pertinent information, such as websites that had already been identified by state and local election officials as disinformation. The FBI did not request that any specific action be taken and sought to understand how the sites' terms of service would apply.

As we argued in our amicus brief, these communications don't add up to the government dictating specific editorial changes it wanted; the government was providing information useful to sites seeking to combat misinformation. But, following an injunction in Murthy, the government has ceased sharing intelligence about foreign election interference. Meta reports that, without this information, its platforms could lack the insight into the bigger threat picture needed to enforce their own rules.

The problem of election misinformation on social media also played a prominent role this past week when the U.S. House of Representatives approved a bill that would bar app stores from distributing TikTok as long as it is owned by its current parent company, ByteDance, which is headquartered in Beijing. The bill also empowers the executive branch to identify and similarly ban other apps that are owned by foreign adversaries.

As stated in the House Report that accompanied the so-called "Protecting Americans from Foreign Adversary Controlled Applications Act," the law is needed in part because members of Congress fear the Chinese government “push[es] misinformation, disinformation, and propaganda on the American public” through the platform. Those who supported the bill thus believe that the U.S. can take the drastic step of banning an app for the purposes of preventing the spread of “misinformation and propaganda” to U.S. users. A public report from the Office of the Director for National Intelligence was more specific about the threat, indicating a special concern for information meant to interfere with the November elections and foment societal divisions in the U.S.

Over 30 members of the House who signed the amicus brief in Murthy voted for the TikTok ban. So many of the same people who supported the U.S. government's efforts to rid a social media platform of foreign misinformation also argued that the government's ability to address the very same content on other social media platforms should be sharply limited.

Admittedly, there are significant differences between the two positions. The government does have greater limits on how it regulates the speech of domestic companies than it does the speech of foreign companies.

But if the true purpose of the bill is to get foreign election misinformation off of social media, the inconsistency in the positions is clear. If ByteDance sells TikTok to domestic owners so that TikTok can stay in business in the U.S., and the same propaganda appears on the site, is the U.S. now powerless to do anything about it? If so, that would seem to undercut the importance of getting the information away from U.S. users, which is one of the chief purposes of the TikTok ban.

We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.


Thousands of Young People Told Us Why the Kids Online Safety Act Will Be Harmful to Minors

15 March 2024 at 15:37

With KOSA passed, the information i can access as a minor will be limited and censored, under the guise of "protecting me", which is the responsibility of my parents, NOT the government. I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for! - Alan, 15  

___________________

If information is put through a filter, that’s bad. Any and all points of view should be accessible, even if harmful so everyone can get an understanding of all situations. Not to mention, as a young neurodivergent and queer person, I’m sure the information I’d be able to acquire and use to help myself would be severely impacted. I want to be free like anyone else. - Sunny, 15 

 ___________________

How young people feel about the Kids Online Safety Act (KOSA) matters. It will primarily affect them, and many, many teenagers oppose the bill. Some have been calling and emailing legislators to tell them how they feel. Others have been posting their concerns about the bill on social media. These teenagers have been baring their souls to explain how important social media access is to them, but lawmakers and civil liberties advocates, including us, have mostly been the ones talking about the bill and about what’s best for kids, and often we’re not hearing from minors in these debates at all. We should be — these young voices should be essential when talking about KOSA.

So, a few weeks ago, we asked some of the young advocates fighting to stop the Kids Online Safety Act a few questions:  

- How has access to social media improved your life? What do you gain from it? 

- What would you lose if KOSA passed? How would your life be different if it was already law? 

Within a week we received over 3,000 responses. As of today, we have received over 5,000.

These answers are critical for legislators to hear. Below, you can read some of these comments, sorted into a few broad themes (though the themes often overlap).

These comments show that thoughtful young people are deeply concerned about the proposed law's fallout, and that many who would be affected think it will harm them, not help them. Over 700 of those who responded reported that they were currently sixteen or under—the age under which KOSA's liability provisions apply. The average age of those who answered the survey was 20 (among those who gave their age—the question was optional, and about 60% of respondents did). In addition to these two questions, we asked those taking the survey if they were comfortable sharing their email address with any journalist who might want to speak with them; unfortunately, coverage usually mentions only one or two of the young people who would be most affected. So, journalists: we have contact info for over 300 young people who would be happy to speak to you about why social media matters to them, and why they oppose KOSA.

Individually, these answers show that social media, despite its current problems, offers an overall positive experience for many, many young people. It helps people living in remote areas find connection; it helps those in abusive situations find solace and escape; it offers education in history, art, health, and world events for those who wouldn't otherwise have it; it helps people learn about themselves and the world around them. (Research also suggests that social media is more helpful than harmful for young people.)

And as a whole, these answers tell a story that is 180° different from that which is regularly told by politicians and the media. In those stories, it is accepted as fact that the majority of young people’s experiences on social media platforms are harmful. But from these responses, it is clear that many, many young people also experience help, education, friendship, and a sense of belonging there—precisely because social media allows them to explore, something KOSA is likely to hinder. These kids are deeply engaged in the world around them through these platforms, and genuinely concerned that a law like KOSA could take that away from them and from other young people.  

Here are just a few of the thousands of reasons they’re worried.  

Note: We are sharing individuals’ opinions, without editing. We do not necessarily endorse them or their interpretation of KOSA.

KOSA Will Harm Rights That Young People Know They Ought to Have 

One of the most important things that would be lost is the freedom of speech - a given right that is crucial to a healthy, functioning environment. Not every speech is morally okay, but regulating what speech is deemed "acceptable" constricts people's rights; a clear violation of the First Amendment. Those who need or want to access certain information are not allowed to - not because the information will harm them or others, but for the reason that a certain portion of people disagree with the information. If the country only ran on what select people believed, we would be a bland, monotonous place. This country thrives on diversity, whether it be race, gender, sex, or any other personal belief. If KOSA was passed, I would lose my safe spaces, places where I can go to for mental health, places that make me feel more like a human than just some girl. No more would I be able to fight for ideas and beliefs I hold, nor enjoy my time on the internet either. - Anonymous, 16 

 ___________________

I, and many of my friends, grew up in an Internet where remaining anonymous was common sense, and where revealing your identity was foolish and dangerous, something only to be done sparingly, with a trusted ally at your side, meeting at a common, crowded public space like a convention or a college cafeteria. This bill spits in the face of these very practical instincts, forces you to dox yourself, and if you don’t want to be outed, you must be forced to withdraw from your communities. From your friends and allies. From the space you have made for yourself, somewhere you can truly be yourself with little judgment, where you can find out who you really are, alongside people who might be wildly different from you in some ways, and exactly like you in others. I am fortunate to have parents who are kind and accepting of who I am. I know many people are nowhere near as lucky as me. - Maeve, 25 

 ___________________ 

I couldn't do activism through social media and I couldn't connect with other queer individuals due to censorship and that would lead to loneliness, depression other mental health issues, and even suicide for some individuals such as myself. For some of us the internet is the only way to the world outside of our hateful environments, our only hope. Representation matters, and by KOSA passing queer children would see less of age appropriate representation and they would feel more alone. Not to mention that KOSA passing would lead to people being uninformed about things and it would start an era of censorship on the internet and by looking at the past censorship is never good, its a gateway to genocide and a way for the government to control. – Sage, 15 

  ___________________

Privacy, censorship, and freedom of speech are not just theoretical concepts to young people. Their rights are often already restricted, and they see the internet as a place where they can begin to learn about, understand, and exercise those freedoms. They know why censorship is dangerous; they understand why forcing people to identify themselves online is dangerous; they know the value of free speech and privacy, and they know what they’ve gained from an internet that doesn’t have guardrails put up by various government censors.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Impact Young People’s Artistic Education and Opportunities 

I found so many friends and new interests from social media. Inspirations for my art I find online, like others who have an art style I admire, or models who do poses I want to draw. I can connect with my friends, send them funny videos and pictures. I use social media to keep up with my favorite YouTubers, content creators, shows, books. When my dad gets drunk and hard to be around or my parents are arguing, I can go on YouTube or Instagram and watch something funny to laugh instead. It gives me a lot of comfort, being able to distract myself from my sometimes upsetting home life. I get to see what life is like for the billions of other people on this planet, in different cities, states, countries. I get to share my life with my friends too, freely speaking my thoughts, sharing pictures, videos, etc.  
I have found my favorite YouTubers from other social media platforms like tiktok, this happened maybe about a year ago, and since then I think this is the happiest I have been in a while. Since joining social media I have become a much more open minded person, it made me interested in what others lives are like. It also brought awareness and educated me about others who are suffering in the world like hunger, poor quality of life, etc. Posting on social media also made me more confident in my art, in the past year my drawing skills have immensely improved and I’m shocked at myself. Because I wanted to make better fan art, inspire others, and make them happy with my art. I have been introduce to many styles of clothing that have helped develop my own fun clothing style. It powers my dreams and makes me want to try hard when I see videos shared by people who have worked hard and made it. - Anonymous, 15 

  ___________________

As a kid I was able to interact in queer and disabled and fandom spaces, so even as a disabled introverted child who wasn’t popular with my peers I still didn’t feel lonely. The internet is arguably a safer way to interact with other fans of media than going to cons with strangers, as long as internet safety is really taught to kids. I also get inspiration for my art and writing from things I’ve only discovered online, and as an artist I can’t make money without the internet and even minors do commissions. The issue isn’t that the internet is unsafe, it’s that internet safety isn’t taught anymore. - Rachel, 19 

  ___________________

i am an artist, and sharing my things online makes me feel happy and good about myself. i love seeing other people online and knowing that they like what i make. when i make art, im always nervous to show other people. but when i post it online i feel like im a part of something, and that im in a community where i feel that i belong. – Anonymous, 15 

 ___________________ 

Social media has saved my life, just like it has for many young people. I have found safe spaces and motivation because of social media, and I have never encountered anything negative or harmful to me. With social media I have been able to share my creativity (writing, art, and music) and thoughts safely without feeling like I'm being held back or oppressed. My creations have been able to inspire and reach so many people, just like how other people's work have reached me. Recently, I have also been able to help the library I volunteer at through the help of social media. 
What I do in life and all my future plans (career, school, volunteer projects, etc.) surrounds social media, and without it I wouldn't be able to share what I do and learn more to improve my works and life. I wouldn't be able to connect with wonderful artists, musicians, and writers like I do now. I would be lost and feel like I don't have a reason to do what I do. If KOSA is passed, I wouldn't be able to get the help I need in order to survive. I've made so many friends who have been saved because of social media, and if this bill gets passed they will also be affected. Guess what? They wouldn't be able to get the help they need either. 
If KOSA was already a law when I was just a bit younger, I wouldn't even be alive. I wouldn't have been able to reach help when I needed it. I wouldn't have been able to share my mind with the world. Social media was the reason I was able to receive help when I was undergoing abuse and almost died. If KOSA was already a law, I would've taken my life, or my abuser would have done it before I could. If KOSA becomes a law now, I'm certain that the likeliness of that happening to kids of any age will increase. – Anonymous, 15 

  ___________________

A huge number of young artists told us that they use social media to improve their skills, and that in many cases it was the avenue by which they discovered their interest in a type of art or music. Young people are rightfully worried that the magic moment when you first stumble upon an artist or a style that changes your entire life will become less and less common for future generations if KOSA passes. We agree: KOSA would likely lead platforms to limit young people’s opportunities to encounter the unexpected, forcing their online experiences into a much smaller box under the guise of protecting them.

Also, a lot of young people told us they wanted to start, or were already developing, an online business—often an art business. Under KOSA, young people could have fewer opportunities in the online communities where artists share their work and build a customer base, and a harder time navigating those communities at all.

KOSA Will Hurt Young People’s Ability to Find Community Online 

Social media has allowed me to connect with some of my closest friends ever, probably deeper than some people in real life. i get to talk about anything i want unimpeded and people accept me for who i am. in my deepest and darkest moments, knowing that i had somewhere to go was truly more relieving than anything else. i've never had the courage to commit suicide, but still, if it weren't for social media, i probably wouldn't be here, mentally & emotionally at least. 
i'd lose the space that accepts me. i'd lose the only place where i can be me. in life, i put up a mask to appease my parents and in some cases, my friends. with how extreme the u.s. is becoming these days, i could even lose my life. i would live my days in fear. i'm terrified of how fast this country is changing and if this bill passes, saying i would fall into despair would be an understatement. people say to "be yourself", but they don't understand that if i were to be my true self tomorrow, i could be killed. – march, 14 

 ___________________ 

Without the internet, and especially the rhythm gaming community which I found through Discord, I would've most likely killed myself at 13. My time on here has not been perfect, as has anyone's but without the internet I wouldn't have been the person I am today. I wouldn't have gotten help recognizing that what my biological parents were doing to me was abuse, the support I've received for my identity (as queer youth) and the way I view things, with ways to help people all around the world and be a more mindful ally, activist, and thinker, and I wouldn't have met my mom. 
I love my chosen mom. We met at a Dance Dance Revolution tournament in April of last year and have been friends ever since. When I told her that she was the first person I saw as a mother figure in my life back in November, I was bawling my eyes out. I'm her mije, and she's my mom. love her so much that saying that doesn't even begin to express exactly how much I love her.  
I love all my chosen family from the rhythm gaming community, my older sisters and siblings, I love them all. I have a few, some I talk with more regularly than others. Even if they and I may not talk as much as we used to, I still love them. They mean so much to me. – X86, 15 

  ___________________

i spent my time in public school from ages 9-13 getting physically and emotionally abused by special ed aides, i remember a few months after i left public school for good, i saw a post online that made me realize that what i went through wasn’t normal. if it wasn’t for the internet, i wouldn’t have come to terms with my autism, i would have still hated myself due to not knowing that i was genderqueer, my mental health would be significantly worse, and i would probably still be self harming, which is something i stopped doing at 13. besides the trauma and mental health side of things, something important to know is that spaces for teenagers to hang out have been eradicated years ago, minors can’t go to malls unless they’re with their parents, anti loitering laws are everywhere, and schools aren’t exactly the best place for teenagers to hang out, especially considering queer teens who were murdered by bullies (such as brianna ghey or nex benedict), the internet has become the third space that teenagers have flocked to as a result. – Anonymous, 17 

  ___________________

KOSA is anti-community. People online don’t only connect over shared interests in art and music—they also connect over the difficult parts of their lives. Over and over again, young people told us that one of the most valuable parts of social media was learning that they were not alone in their troubles. Finding others in similar circumstances gave them a community, as well as ideas to improve their situations, and even opportunities to escape dangerous situations.  

KOSA will make this harder. As platforms limit the types of recommendations and public content they feel safe sharing with young people, those who would otherwise find communities or potential friends will not be as likely to do so. A number of young people explained that they simply would never have been able to overcome some of the worst parts of their lives alone, and they are concerned that KOSA’s passage would stop others from ever finding the help they did. 

KOSA Could Seriously Hinder People’s Self-Discovery  

I am a transgender person, and when I was a preteen, looking down the barrel of the gun of puberty, I was miserable. I didn't know what was wrong I just knew I'd rather do anything else but go through puberty. The internet taught me what that was. They told me it was okay. There were things like haircuts and binders that I could use now and medical treatment I could use when I grew up to fix things. The internet was there for me too when I was questioning my sexuality and again when my mental health was crashing and even again when I was realizing I'm not neurotypical. The internet is a crucial source of information for preteens and beyond and you cannot take it away. You cannot take away their only realistically reachable source of information for what the close-minded or undereducated adults around them don't know. - Jay, 17 

   ___________________

Social media has improved my life so much and led to how I met my best friend, I’ve known them for 6+ years now and they mean so much to me. Access to social media really helps me connect with people similar to me and that make me feel like less of an outcast among my peers, being able to communicate with other neurodivergent queer kids who like similar interests to me. Social media makes me feel like I’m actually apart of a community that won’t judge me for who I am. I feel like I can actually be myself and find others like me without being harassed or bullied, I can share my art with others and find people like me in a way I can’t in other spaces. The internet & social media raised me when my parents were busy and unavailable and genuinely shaped the way I am today and the person I’ve become. – Anonymous, 14 

   ___________________

The censorship likely to come from this bill would mean I would not see others who have similar struggles to me. The vagueness of KOSA allows for state attorney generals to decide what is and is not appropriate for children to see, a power that should never be placed in the hands of one person. If issues like LGBT rights and mental health were censored by KOSA, I would have never realized that I AM NOT ALONE. There are problems with children and the internet but KOSA is not the solution. I urge the senate to rethink this bill, and come up with solutions that actually protect children, not put them in more danger, and make them feel ever more alone. - Rae, 16 

  ___________________ 

KOSA would effectively censor anything the government deems "harmful," which could be anything from queerness and fandom spaces to anything else that deviates from "the norm." People would lose support systems, education, and in some cases, any way to find out about who they are. I'll stop beating around the bush, if it wasn't for places online, I would never have discovered my own queerness. My parents and the small circle of adults I know would be my only connection to "grown-up" opinions, exposing me to a narrow range of beliefs I would likely be forced to adopt. Any kids in positions like mine would have no place to speak out or ask questions, and anything they bring up would put them at risk. Schools and families can only teach so much, and in this age of information, why can't kids be trusted to learn things on their own? - Anonymous, 15 

   ___________________

Social media helped me escape a very traumatic childhood and helped me connect with others. quite frankly, it saved me from being brainwashed. – Milo, 16 

   ___________________

Social media introduced me to lifelong friends and communities of like-minded people; in an abusive home, online social media in the 2010s provided a haven of privacy, safety, and information. I honed my creativity, nurtured my interests and developed my identity through relating and talking to people to whom I would otherwise have been totally isolated from. Also, unrestricted internet access actually taught me how to spot shady websites and inappropriate content FAR more effectively than if censorship had been at play like it is today. 
A couple of the friends I made online, as young as thirteen, were adults; and being friends with adults who knew I was a child, who practiced safe boundaries with me yet treated me with respect, helped me recognise unhealthy patterns in predatory adults. I have befriended mothers and fathers online through games and forums, and they were instrumental in preventing me being groomed by actual pedophiles. Had it not been for them, I would have wound up terribly abused by an "in real life" adult "friend". Instead, I recognised the differences in how he was treating me (infantilising yet praising) vs how my adult friends had treated me (like a human being), and slowly tapered off the friendship and safely cut contact. 
As I grew older, I found a wealth of resources on safe sex and sexual health education online. Again, if not for these discoveries, I would most certainly have wound up abused and/or pregnant as a teenager. I was never taught about consent, safe sex, menstruation, cervical health, breast health, my own anatomy, puberty, etc. as a child or teenager. What I found online-- typically on Tumblr and written with an alarming degree of normalcy-- helped me understand my body and my boundaries far more effectively than "the talk" or in-school sex ed ever did. I learned that the things that made me panic were actually normal; the ins and outs of puberty and development, and, crucially, that my comfort mattered most. I was comfortable and unashamed of being a virgin my entire teen years because I knew it was okay that I wasn't ready. When I was ready, at twenty-one, I knew how to communicate with my partner and establish safe boundaries, and knew to check in and talk afterwards to make sure we both felt safe and happy. I knew there was no judgement for crying after sex and that it didn't necessarily mean I wasn't okay. I also knew about physical post-sex care; e.g. going to the bathroom and cleaning oneself safely. 
AGAIN, I would NOT have known any of this if not for social media. AT ALL. And seeing these topics did NOT turn me into a dreaded teenage whore; if anything, they prevented it by teaching me safety and self-care. 
I also found help with depression, anxiety, and eating disorders-- learning to define them enabled me to seek help. I would not have had this without online spaces and social media. As aforementioned too, learning, sometimes through trial of fire, to safely navigate the web and differentiate between safe and unsafe sites was far more effective without censored content. Censorship only hurts children; it has never, ever helped them. How else was I to know what I was experiencing at home was wrong? To call it "abuse"? I never would have found that out. I also would never have discovered how to establish safe sexual AND social boundaries, or how to stand up for myself, or how to handle harassment, or how to discover my own interests and identity through media. The list goes on and on and on. – June, 21 

   ___________________

One of the claims that KOSA’s proponents make is that it won’t stop young people from finding the things they already want to search for. But we read dozens and dozens of comments from people who didn’t know something about themselves until they heard others discussing it—a mental health diagnosis, their sexuality, that they were being abused, that they had an eating disorder, and much, much more.  

Censorship that stops you from browsing a library is still dangerous even if it doesn’t stop you from checking out the books you already know you want. It’s still a problem to stop young people in particular from finding new things that they didn’t know they were looking for.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Stop Young People from Getting Accurate News and Valuable Information 

Social media taught me to be curious. It taught me caution and trust and faith and that simply being me is enough. It brought me up where my parents failed, it allowed me to look into stories that assured me I am not alone where I am now. I would be fucking dead right now if it weren't for the stories of my fellow transgender folk out there, assuring me that it gets better.  
I'm young and I'm not smart but I know without social media, myself and plenty of the people I hold dear in person and online would not be alive. We wouldn't have news of the atrocities happening overseas that the news doesn't report on, we wouldn't have mentors to help teach us where our parents failed. - Anonymous, 16 

  ___________________ 

Through social media, I've learned about news and current events that weren't taught at school or home, things like politics or controversial topics that taught me nuance and solidified my concept of ethics. I learned about my identity and found numerous communities filled with people I could socialize with and relate to. I could talk about my interests with people who loved them just as much as I did. I found out about numerous different perspectives and cultures and experienced art and film like I never had before. My empathy and media literacy greatly improved with experience. I was also able to gain skills in gathering information and proper defences against misinformation. More technically, I learned how to organize my computer and work with files, programs, applications, etc; I could find guides on how to pursue my hobbies and improve my skills (I'm a self-taught artist, and I learned almost everything I know from things like YouTube or Tumblr for free). - Anonymous, 15 

  ___________________ 

A huge portion of my political identity has been shaped by news and information I could only find on social media because the mainstream news outlets wouldn’t cover it. (Climate Change, International Crisis, Corrupt Systems, etc.) KOSA seems to be intentionally working to stunt all of this. It’s horrifying. So much of modern life takes place on the internet, and to strip that away from kids is just another way to prevent them from formulating their own thoughts and ideas that the people in power are afraid of. Deeply sinister. I probably would have never learned about KOSA if it were in place! That’s terrifying! - Sarge, 17 

  ___________________

I’ve met many of my friends from [social media] and it has improved my mental health by giving me resources. I used to have an eating disorder and didn’t even realize it until I saw others on social media talking about it in a nuanced way and from personal experience. - Anonymous, 15 

   ___________________

Many young people told us that they’re worried KOSA will result in more biased news online, and a less diverse information ecosystem. This seems inevitable—we’ve written before that almost any content could fit into the categories that politicians believe will cause minors anxiety or depression, and so carrying that content could be legally dangerous for a platform. That could include truthful news about what’s going on in the world, including wars, gun violence, and climate change. 

“Preventing and mitigating” depression and anxiety isn’t a duty we impose on any other outlet, and it shouldn’t be required of social media platforms. People have a right to access information—both news and opinion—in an open and democratic society, and sometimes that information is depressing or anxiety-inducing. To truly “prevent and mitigate” self-destructive behaviors, we must look beyond the media to systems that allow all humans to have self-respect, a healthy environment, and healthy relationships—rather than hiding truthful information merely because it is disappointing.

Young People’s Voices Matter 

While KOSA’s sponsors intend to help these young people, those who responded to the survey don’t see it that way. You may have noticed that it’s impossible to sort these complex and detailed responses into single categories—many childhood abuse victims found help as well as arts education on social media; many children connected to communities that they otherwise couldn’t have and learned something essential about themselves in doing so. Many understand that KOSA would endanger their privacy, and also know it could harm marginalized kids the most.

In reading thousands of these comments, it becomes clear that social media was not in itself a solution to the issues these young people experienced. What helped them was other people. Social media was where they were able to find and stay connected with those friends, communities, artists, activists, and educators. When you look at it this way, of course KOSA seems absurd: social media has become an essential element of young people’s lives, and they are scared to death that if the law passes, that part of their lives will disappear. Older teens and twenty-somethings, meanwhile, worry that if the law had been passed a decade ago, they never would have become the people they are today. And all of these fears are reasonable.

There were thousands more comments like those above. We hope this helps balance the conversation, because if young people’s voices are suppressed now—and if KOSA becomes law—it will be much more difficult for them to elevate their voices in the future.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Analyzing KOSA’s Constitutional Problems In Depth 

15 March 2024 at 15:35

Why EFF Does Not Think Recent Changes Ameliorate KOSA’s Censorship 

The latest version of the Kids Online Safety Act (KOSA) has not changed our critical view of the legislation. The changes have led some organizations to drop their opposition, but we still believe KOSA is a dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like. We respect that different groups can come to their own conclusions about how KOSA will affect everyone’s ability to access lawful speech online. EFF, however, remains steadfast in our long-held view that imposing a vague duty of care on a broad swath of online services to mitigate specific harms based on the content of online speech will result in those services imposing age verification and content restrictions. At least one group has characterized EFF’s concerns as spreading “disinformation.” We are not. But to ensure that everyone understands why EFF continues to oppose KOSA, we wanted to break down our interpretation of the bill in more detail and compare our views to those of others—both advocates and critics.

Below, we walk through some of the most common criticisms we’ve received, both of our position and of the bill itself, to help explain our view of KOSA’s likely impacts.

KOSA’s Effectiveness  

First, and most importantly: we have serious disagreements with KOSA’s advocates about whether it will prevent future harm to children online. We are deeply saddened by the stories so many supporters and parents have shared about how their children were harmed online. And we want to keep talking to those parents, supporters, and lawmakers about ways in which EFF can work with them to prevent harm to children online, just as we will continue to talk with people who advocate for the benefits of social media. We believe, and have advocated for, comprehensive privacy protections as a better way to begin addressing the harms done to young people (and adults) who have been targeted by platforms’ predatory business practices.

EFF does not think KOSA is the right approach to protecting children online, however. As we’ve said before, we think that in practice, KOSA is likely to exacerbate the risks of children being harmed online, because it will place barriers on their ability to access lawful speech about addiction, eating disorders, bullying, and other important topics. We also think those restrictions will stifle minors who are trying to find their own communities online. We do not think the language added to KOSA to address that censorship concern solves the problem, nor does focusing KOSA’s regulation on the design elements of online services cure the bill’s First Amendment problems.

Our views of KOSA’s harmful consequences are grounded in EFF’s 34-year history of both making policy for the internet and seeing how legislation plays out once it’s passed. This is also not our first time seeing the vast difference between how a piece of legislation is promoted and what it does in practice. Recently we saw this same dynamic with FOSTA/SESTA, which was promoted by politicians and the parents of  child sex trafficking victims as the way to prevent future harms. Sadly, even the politicians who initially championed it now agree that this law was not only ineffective at reducing sex trafficking online, but also created additional dangers for those same victims as well as others.   

KOSA’s Duty of Care  

KOSA’s core component requires an online platform or service that is likely to be accessed by young people to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” various harms to minors. These enumerated harms include: 

  • mental health disorders (anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors) 
  • patterns of use that indicate or encourage addiction-like behaviors  
  • physical violence, online bullying, and harassment 

Based on our understanding of the First Amendment and how all online platforms and services regulated by KOSA will navigate their legal risk, we believe that KOSA will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms KOSA identifies.  

A line of U.S. Supreme Court cases involving efforts to prevent book sellers from disseminating certain speech, which resulted in broad, unconstitutional censorship, shows why KOSA is unconstitutional. 

In Smith v. California, the Supreme Court struck down an ordinance that made it a crime for a book seller to possess obscene material. The court ruled that even though obscene material is not protected by the First Amendment, the ordinance’s imposition of liability based on the mere presence of that material had a broader censorious effect because a book seller “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.” The court recognized that the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected material” because a distributor of others’ speech will react by limiting access to any borderline content that could get it into legal trouble.  

In Bantam Books, Inc. v. Sullivan, the Supreme Court struck down a government effort to limit the distribution of material that a state commission had deemed objectionable to minors. The commission would send notices to book distributors that identified various books and magazines they believed were objectionable and sent copies of their lists to local and state law enforcement. Book distributors reacted to these notices by stopping the circulation of the materials identified by the commission. The Supreme Court held that the commission’s efforts violated the First Amendment and once more recognized that by targeting a distributor of others’ speech, the commission’s “capacity for suppression of constitutionally protected publications” was vast.  

KOSA’s duty of care creates an even more far-reaching censorship threat than the schemes the Supreme Court struck down in Smith and Bantam Books. KOSA makes online services that host our digital speech liable should they fail to exercise reasonable care in removing or restricting minors’ access to lawful content on the topics KOSA identifies. And KOSA is worse than the ordinance in Smith because, unlike the obscenity at issue there, the First Amendment generally protects speech about addiction, suicide, eating disorders, and the other topics KOSA singles out.

We think that online services will react to KOSA’s new liability in much the same way as the bookseller in Smith and the book distributor in Bantam Books: they will limit minors’ access to, or simply remove, any speech that might touch on the topics KOSA identifies, even when much of that speech is protected by the First Amendment. Worse, online services have even less ability to read through the millions (or sometimes billions) of pieces of content on their services than a bookseller or distributor who had to review hundreds or thousands of books. To comply, we expect that platforms will deploy blunt tools, either by gating off entire portions of their sites to prevent minors from accessing them (more on this below) or by deploying automated filters that will over-censor speech, including speech that may be beneficial to minors seeking help with addictions or other problems KOSA identifies. (Regardless of their claims, it is not possible for a service to accurately pinpoint the content KOSA describes with automated tools; the toy filter sketched below shows why.)
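To make the over-censorship problem concrete, here is a deliberately naive sketch of the kind of keyword filter a platform without a large moderation budget might reach for. The keyword list and sample posts are our own hypotheticals, not anything drawn from KOSA’s text or from any real platform’s moderation system.

```python
# A deliberately naive keyword filter, of the sort a small platform
# might deploy to limit its liability. Keywords and posts are hypothetical.

BLOCKED_TOPICS = ["eating disorder", "self-harm", "suicide", "addiction"]

def is_blocked_for_minors(post: str) -> bool:
    """Flag any post that mentions a covered topic, regardless of intent."""
    text = post.lower()
    return any(topic in text for topic in BLOCKED_TOPICS)

posts = [
    "Tips for hiding an eating disorder from your parents",         # harmful
    "I recovered from my eating disorder - here is what helped",    # supportive
    "Suicide prevention resources: in the U.S., call or text 988",  # a resource
]

for post in posts:
    print(is_blocked_for_minors(post), "|", post)
# All three posts are flagged: matching on topic alone cannot distinguish
# harmful content from recovery stories or clinical resources.
```

Real classifiers are more sophisticated than this, but the underlying difficulty, telling discussion or mitigation of a harm apart from promotion of it, is the same problem at scale.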

But as the Supreme Court ruled in Smith and Bantam Books, the First Amendment prohibits Congress from enacting a law that results in such broad censorship precisely because it limits the distribution of, and access to, lawful speech.  

Moreover, the fact that KOSA singles out certain legal content—for example, speech concerning bullying—means that the bill creates content-based restrictions that are presumptively unconstitutional. The government bears the burden of showing that KOSA’s content restrictions advance a compelling government interest, are narrowly tailored to that interest, and are the least speech-restrictive means of advancing that interest. KOSA cannot satisfy this exacting standard.  

EFF agrees that the government has a compelling interest in protecting children from being harmed online. But KOSA’s broad requirement that platforms and services face liability for showing speech on particular topics to minors is not narrowly tailored to that interest. As noted above, the resulting broad censorship will effectively limit access to a wide range of lawful speech on topics such as addiction, bullying, and eating disorders. And the fact that KOSA will sweep up so much speech shows that it is far from the least speech-restrictive alternative.

Why the Rule of Construction Doesn’t Solve the Censorship Concern 

In response to censorship concerns about the duty of care, KOSA’s authors added a rule of construction stating that nothing in the duty of care “shall be construed to require a covered platform to prevent or preclude:”  

  • minors from deliberately or independently searching for content, or 
  • the platforms or services from providing resources that prevent or mitigate the harms KOSA identifies, “including evidence-based information and clinical resources.”

We understand that some read this language as a safeguard for online services: it limits their liability if a minor happens across information on the topics KOSA identifies, so platforms hosting content aimed at mitigating addiction, bullying, or other identified harms can take comfort that they will not be sued under KOSA.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

But EFF does not believe the rule of construction will limit KOSA’s censorship, in either a practical or a constitutional sense. As a practical matter, it’s not clear how an online service could rely on the rule of construction’s safeguards given the volume and diversity of content it likely hosts.

Take, for example, an online forum in which users discuss drug and alcohol abuse. It is likely to contain a range of content and views, some of which describe addiction, drug use, and treatment, including both negative and positive views on those topics. KOSA’s rule of construction might protect the forum when a minor’s deliberate search for content leads them to it. But once that minor starts interacting with the forum, they are likely to encounter the types of content KOSA implicates, and the service may face liability if there is a later claim that the minor was harmed. In short, KOSA does not clarify that the initial search precludes liability should the minor interact with the forum and experience harm later. It is also not clear how a service would prove that the minor found the forum via a search.

Further, the rule of construction’s protection for a forum that provides only resources for preventing or mitigating drug and alcohol abuse, based on evidence-based information and clinical resources, is unlikely to be helpful. That provision assumes the forum has the resources to review all existing content and to effectively screen all future content so that it permits only user-generated content concerning the mitigation or prevention of substance abuse. The rule of construction also requires the forum to have the subject-matter expertise necessary to judge what content is or isn’t clinically correct and evidence-based. And even that assumes there is broad scientific consensus about all aspects of substance abuse, including its causes (which there is not).

Given that practical uncertainty, and the hazard of getting anything wrong when it comes to minors’ access to that content, we think the substance abuse forum will react much as the bookseller and distributor in the Supreme Court cases did: it will simply limit minors’ ability to access the content, a far easier and safer course than making case-by-case expert decisions about every piece of content on the forum.

EFF also does not believe that the Supreme Court’s decisions in Smith and Bantam Books would have come out differently had similar KOSA-like safeguards been incorporated into the regulations at issue. For example, even if the obscenity ordinance in Smith had made an exception letting bookstores sell scientific books with detailed pictures of human anatomy, the bookstore still would have had to exhaustively review every book it sold and separate the obscene books from the scientific ones. The Supreme Court rejected such burdens as offensive to the First Amendment: “It would be altogether unreasonable to demand so near an approach to omniscience.”

The near-impossible standard required to review such a large volume of content, coupled with liability for letting any harmful content through, is precisely the scenario that the Supreme Court feared. “The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered,” the court wrote in Smith. “Through it, the distribution of all books, both obscene and not obscene, would be impeded.” 

Those same First Amendment concerns are exponentially greater for online services hosting everyone’s speech. That is why we do not believe that KOSA’s rule of construction will prevent the broader censorship that results from the bill’s duty of care. 

Finally, we do not believe the rule of construction helps the government carry its burden, under strict scrutiny, of showing that KOSA is narrowly tailored or restricts less speech than necessary. Instead, the rule of construction heightens KOSA’s violation of the First Amendment by preferring certain viewpoints over others. It creates a legal preference for viewpoints that seek to mitigate the various identified harms, and punishes viewpoints that are neutral toward, or even mildly positive about, those harms. While EFF agrees that such speech may be awful, the First Amendment does not permit the government to make these viewpoint-based distinctions without satisfying strict scrutiny. It cannot meet that heavy burden with KOSA.

KOSA's Focus on Design Features Doesn’t Change Our First Amendment Concerns 

KOSA supporters argue that because the duty of care and other provisions of KOSA concern an online service or platform’s design features, the bill raises no First Amendment issues. We disagree.

It’s true enough that KOSA creates liability for services that fail to “exercise reasonable care in the creation and implementation of any design feature” to prevent the bill’s enumerated harms. But the features themselves are not what KOSA’s duty of care deems harmful. Rather, the provision specifically links the design features to minors’ access to the enumerated content that KOSA deems harmful. In that way, the design features serve as little more than a distraction. The duty of care provision is not concerned with design choices per se, but only with those design choices that fail to limit minors’ access to information about depression, eating disorders, and the other identified topics.

Once again, the Supreme Court’s decision in Smith shows why it’s incorrect to argue that KOSA’s regulation of design features avoids First Amendment concerns. If the ordinance at issue in Smith had regulated the way bookstores were designed, and had imposed liability based on where booksellers placed certain offending books in their stores—for example, in the front window—we suspect that the Supreme Court would have recognized, rightly, that the design restriction was little more than an indirect effort to unconstitutionally regulate the content. The same holds true for KOSA.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Doesn’t “Mandate” Age-Gating, But It Heavily Pushes Platforms to Do So and Provides Few Other Avenues to Comply 

KOSA was amended in May 2023 to include language that was meant to ease concerns about age verification; in particular, it included explicit language that age verification is not required under the “Privacy Protections” section of the bill. The bill now states that a covered platform is not required to implement an age gating or age verification functionality to comply with KOSA.  

EFF acknowledges the text of the bill and has been clear in our messaging that nothing in the proposal explicitly requires services to implement age verification. Yet it's hard to see this change as anything other than a technical dodge that will be contradicted in practice.  

KOSA creates liability for any regulated platform or service that presents to minors certain content the bill deems harmful to them. To comply with that new liability, those platforms and services have limited options. As we see them, the options are either to filter content for known minors or to gate content so only adults can access it. In either scenario, the linchpin is the platform knowing every user’s age, so it can identify its minor users and either filter the content they see or exclude them from any content that could be deemed harmful under the law.

There’s really no way to do that without implementing age verification. Regardless of what this section of the bill says, there’s no way for platforms to restrict either content or design features for minors without knowing which of their users are minors. The short sketch below walks through that compliance logic.
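Here is a minimal sketch of the compliance decision we believe KOSA pushes platforms toward. The function name and the exact threshold are our own illustrative assumptions, not anything specified in the bill’s text.

```python
# Hypothetical sketch of the compliance logic we think KOSA creates.
# The function name and threshold are illustrative assumptions, not bill text.

from typing import Optional

def may_serve_covered_content(user_age: Optional[int]) -> bool:
    """Decide whether to show content on a topic the bill covers."""
    if user_age is None:
        # Unknown age: under a "reasonably should have known" standard,
        # serving the content risks liability, so the cautious platform
        # blocks it or demands age verification first.
        return False
    if user_age < 17:
        # Known minor: filter or gate the content.
        return False
    # Seventeen or older, per our reading of the bill's definition of "minor".
    return True

print(may_serve_covered_content(None))  # False -> verify or block
print(may_serve_covered_content(15))    # False -> filter or gate
print(may_serve_covered_content(30))    # True
```

Whether the check happens at signup or per request, the effect is the same: anonymous or unverified users end up treated as minors, which is the age-gating outcome described above. Every branch turns on the user’s age, which is why, in practice, complying means collecting or inferring it for everyone.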

We also don’t think KOSA lets platforms claim ignorance if they take steps to never learn the ages of their users. If a 16-year-old user misidentifies herself as an adult and the platform does not use age verification, the platform could still be held liable because it should have “reasonably known” her age. A platform’s ignorance thus could work against it later, perversely incentivizing services to implement age verification at the outset.

EFF Remains Concerned About State Attorneys General Enforcing KOSA 

Another change that KOSA’s sponsors made this year was to remove the ability of state attorneys general to enforce KOSA’s duty of care standard. We respect that some groups believe this addresses concerns that some states would misuse KOSA to target minors’ access to any information that state officials dislike, including LGBTQIA+ or sex education information. We disagree that this modest change prevents that harm. KOSA still lets state attorneys general enforce other provisions, including a section requiring certain “safeguards for minors.” Among those safeguards is a requirement that platforms “limit design features” that lead to minors spending more time on a service, such as infinite scrolling, notifications of new content or messages, and autoplaying content.

But letting an attorney general enforce KOSA’s design-safeguard requirements could serve as a proxy for targeting services that host content certain officials dislike. The attorney general would simply target the same content or service it disfavored, but instead of claiming that the service violated KOSA’s duty of care, the official would argue that it failed to limit harmful design features that minors in their state used, such as notifications or endless scrolling. We think the outcome will be the same: states are likely to use KOSA to target speech about sexual health, abortion, LGBTQIA+ topics, and a variety of other information.

KOSA Applies to Broad Swaths of the Internet, Not Just the Big Social Media Platforms 

Many sites, platforms, apps, and games would have to follow KOSA’s requirements. It applies to “an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”  

There are some important exceptions—it doesn’t apply to services that provide only direct or group messaging, such as Signal, or to schools, libraries, nonprofits, or, generally, ISPs like Comcast. This is good—some critics of KOSA have been concerned that it would apply to websites like Archive of Our Own (AO3), a fanfiction site that allows users to read and share their work, but AO3 is a nonprofit, so it would not be covered.

But a wide variety of for-profit niche online services would still be regulated by KOSA. Ravelry, for example, is an online platform focused on knitters, but it is a business.

And it is an open question whether the comment and community portions of major mainstream news and sports websites are subject to KOSA. The bill exempts news and sports websites, with the huge caveat that they are exempt only so long as they are “not otherwise an online platform.” KOSA defines “online platform” as “any public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user generated content.” It’s easily arguable that the New York Times’ or ESPN’s comment and forum sections are predominantly designed as places for user-generated content. Would KOSA apply only to those interactive spaces, or does the exception to the exception mean those sites are subject to the law in their entirety? The language of the bill is unclear.

Not All of KOSA’s Critics Are Right, Either 

Just as we don’t agree on KOSA’s likely outcomes with many of its supporters, we also don’t agree with every critic regarding KOSA’s consequences. This isn’t surprising—the law is broad, and a major complaint is that it remains unclear how its vague language would be interpreted. So let’s address some of the more common misconceptions about the bill. 

Large Social Media May Not Entirely Block Young People, But Smaller Services Might 

Some people have concerns that KOSA will result in minors not being able to use social media at all. We believe a more likely scenario is that the major platforms would offer different experiences to different age groups.  

They already do this in some ways—Meta currently places teens into the most restrictive content control setting on Instagram and Facebook. The company specifically updated these settings for many of the categories included in KOSA, including suicide, self-harm, and eating disorder content. Its update describes precisely what we worry KOSA would require by law: “While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find.” TikTok also has blocked some videos for users under 18. To be clear, this kind of content filtering, if driven by KOSA, would be harmful and would violate the First Amendment.

Though large platforms will likely react this way, many smaller platforms will not be capable of this kind of content filtering. They may well decide that blocking young people entirely is the easiest way to protect themselves from liability. We cannot know how every platform will react if KOSA is enacted, but smaller platforms that do not already use complex automated content moderation tools will likely find it financially burdensome to implement both age verification and content moderation tools.

KOSA Won’t Necessarily Make Your Real Name Public by Default 

One recurring fear that critics of KOSA have shared is that they will no longer be able to use platforms anonymously. We believe this fear is well-founded, but there is some nuance to it. No one should have to hand over their driver’s license—or, worse, provide biometric information—just to access lawful speech on websites. But there’s nothing in KOSA that would require online platforms to publicly tie your real name to your username.

Still, once someone shares information to verify their age, there’s no way for them to be certain that the data they’re handing over won’t be retained and used by the website, or further shared or even sold. As we’ve said, KOSA doesn’t technically require age verification, but we think it’s the most likely outcome. Users will still be forced to trust that the website they visit, or its third-party verification service, won’t misuse their private data, including their name, age, or biometric information. Given the numerous data privacy blunders we’ve seen from companies like Meta in the past, and the general concern with data privacy that Congress seems to share with the public (and with EFF), we believe this outcome to be extremely dangerous. Simply put: sharing your private info with a company doesn’t necessarily make it public, but it makes it far more likely to become public than if you hadn’t shared it in the first place.

We Agree With Supporters: Government Should Study Social Media’s Effects on Minors 

We know tensions are high; this is an incredibly important topic, and an emotional one. EFF does not have all the right answers regarding how to address the ways in which young people can be harmed online. That is why we agree with KOSA’s supporters that the government should conduct much greater research on these issues. We believe that comprehensive fact-finding is the first step to identifying both the problems and the legislative solutions. A provision of KOSA does require the National Academy of Sciences to research these issues and issue reports to the public. But KOSA gets this process backwards: it imposes solutions to general concerns about young people being harmed without first doing the work necessary to show that the bill’s provisions address those problems. As we have said repeatedly, we do not think KOSA will address harms to young people online. We think it will exacerbate them.

Even if your stance on KOSA is different from ours, we hope we are all working toward the same goal: an internet that supports freedom, justice, and innovation for all people of the world. We don’t believe KOSA will get us there, but neither will ad hominem attacks. To that end, we look forward to more detailed analyses of the bill from its supporters, and to continued thoughtful engagement from anyone interested in working on this critical issue.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

5 Questions to Ask Before Backing the TikTok Ban

15 March 2024 at 14:30

With strong bipartisan support, the U.S. House voted 352 to 65 to pass HR 7521 this week, a bill that would ban TikTok nationwide if its Chinese owner doesn’t sell the popular video app. The TikTok bill’s future in the U.S. Senate isn’t yet clear, but President Joe Biden has said he would sign it into law if it reaches his desk. 

The speed at which lawmakers have moved to advance a bill with such a significant impact on speech is alarming. It has given many of us — including, seemingly, lawmakers themselves — little time to consider the actual justifications for such a law. In isolation, parts of the argument might sound somewhat reasonable, but lawmakers still need to clear up their confused case for banning TikTok. Before throwing their support behind the TikTok bill, Americans should be able to understand it fully, something that they can start doing by considering these five questions. 

1. Is the TikTok bill about privacy or content?

Something that has made HR 7521 hard to talk about is the inconsistent way its supporters have described the bill’s goals. Is this bill supposed to address data privacy and security concerns? Or is it about the content TikTok serves to its American users? 

From what lawmakers have said, however, it seems clear that this bill is strongly motivated by content on TikTok that they don’t like. When describing the "clear threat" posed by foreign-owned apps, the House report on the bill  cites the ability of adversary countries to "collect vast amounts of data on Americans, conduct espionage campaigns, and push misinformation, disinformation, and propaganda on the American public."

This week, the bill’s Republican sponsor Rep. Mike Gallagher told PBS Newshour that the “broader” of the two concerns TikTok raises is “the potential for this platform to be used for the propaganda purposes of the Chinese Communist Party." On that same program, Representative Raja Krishnamoorthi, a Democratic co-sponsor of the bill, similarly voiced content concerns, claiming that TikTok promotes “drug paraphernalia, oversexualization of teenagers” and “constant content about suicidal ideation.”

2. If the TikTok bill is about privacy, why aren’t lawmakers passing comprehensive privacy laws? 

It is indeed alarming how much information TikTok and other social media platforms suck up from their users, information that can then be obtained not just by governments but also by private companies and data brokers. This is why EFF strongly supports comprehensive data privacy legislation, a solution that directly addresses privacy concerns. It is also why it is hard to take lawmakers at their word about their privacy concerns with TikTok: Congress has consistently failed to enact comprehensive data privacy legislation, and this bill would do little to stop the many other ways adversaries (foreign and domestic) collect, buy, and sell our data. Indeed, the TikTok bill has no specific privacy provisions in it at all.

It has been suggested that what makes TikTok different from other social media companies is that its data can be accessed by a foreign government. Here, too, TikTok is not special. China is not unique in requiring companies in the country to provide information to the government upon request. In the United States, Section 702 of the FISA Amendments Act, which is up for renewal, authorizes the mass collection of communication data. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches of that data through Section 702. The U.S. government can also demand user information from online providers through National Security Letters, which can both require providers to turn over user information and gag them from speaking about it. While the U.S. cannot control what other countries do, if this is a problem lawmakers are sincerely concerned about, they could start by fighting it at home.

3. If the TikTok bill is about content, how will it avoid violating the First Amendment? 

Whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. Indeed, one of the given reasons to force the sale is so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation.

The First Amendment to the U.S. Constitution rightly makes it very difficult for the government to force such a change legally. To restrict content, U.S. laws must be the least speech-restrictive way of addressing serious harms. The TikTok bill’s supporters have vaguely suggested that the platform poses national security risks. So far, however, there has been little public justification that the extreme measure of banning TikTok (rather than addressing specific harms) is properly tailored to prevent these risks. And it has been well-established law for almost 60 years that people in the U.S. have a First Amendment right to receive foreign propaganda. People in the U.S. deserve an explicit explanation of the immediate risks posed by TikTok — something the government will have to do in court if this bill becomes law and is challenged.

4. Is the TikTok bill a ban or something else? 

Some have argued that the TikTok bill is not a ban because it would only ban TikTok if owner ByteDance does not sell the company. However, as we noted in the coalition letter we signed with the American Civil Liberties Union, the government generally cannot “accomplish indirectly what it is barred from doing directly, and a forced sale is the kind of speech punishment that receives exacting scrutiny from the courts.” 

Furthermore, a forced sale based on objections to content acts as a backdoor attempt to control speech. Indeed, one of the very reasons Congress wants a new owner is because it doesn’t like China’s editorial control. And any new ownership will likely bring changes to TikTok. In the case of Twitter, it has been very clear how a change of ownership can affect the editorial policies of a social media company. Private businesses are free to decide what information users see and how they communicate on their platforms, but when the U.S. government wants to do so, it must contend with the First Amendment. 

5. Does the U.S. support the free flow of information as a fundamental democratic principle? 

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic.

In 2021, the U.S. State Department formally condemned a ban on Twitter by the government of Nigeria. “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy,” a department spokesperson wrote. “Freedom of expression and access to information both online and offline are foundational to prosperous and secure democratic societies.”

Whether it’s in Nigeria, China, or the United States, we couldn’t agree more. Unfortunately, if the TikTok bill becomes law, the U.S. will lose much of its moral authority on this vital principle.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Why U.S. House Members Opposed the TikTok Ban Bill

14 March 2024 at 12:16

What do House Democrats like Alexandria Ocasio-Cortez and Barbara Lee have in common with House Republicans like Thomas Massie and Andy Biggs? Not a lot. But they do know an unconstitutional bill when they see one.

These and others on both sides of the aisle were among the 65 House Members who voted "no" yesterday on the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521, which would effectively ban TikTok. The bill now goes to the Senate, where we hope cooler heads will prevail in demanding comprehensive data privacy legislation instead of this attack on Americans' First Amendment rights.

We're saying plenty about this misguided, unfounded bill, and we want you to speak out about it too, but we thought you should see what some of the House Members who opposed it said, in their own words.

 

I am voting NO on the TikTok ban.

Rather than target one company in a rushed and secretive process, Congress should pass comprehensive data privacy protections and do a better job of informing the public of the threats these companies may pose to national security.

— Rep. Barbara Lee (@RepBarbaraLee) March 13, 2024

   ___________________ 

Today, I voted against the so-called “TikTok Bill.”

Here’s why: pic.twitter.com/Kbyh6hEhhj

— Rep Andy Biggs (@RepAndyBiggsAZ) March 13, 2024

   ___________________

Today, I voted against H.R. 7521. My full statement: pic.twitter.com/9QCFQ2yj5Q

— Rep. Nadler (@RepJerryNadler) March 13, 2024

   ___________________ 

Today I claimed 20 minutes in opposition to the TikTok ban bill, and yielded time to several likeminded colleagues.

This bill gives the President far too much authority to determine what Americans can see and do on the internet.

This is my closing statement, before I voted No. pic.twitter.com/xMxp9bU18t

— Thomas Massie (@RepThomasMassie) March 13, 2024

   ___________________ 

Why I voted no on the bill to potentially ban tik tok: pic.twitter.com/OGkfdxY8CR

— Jim Himes 🇺🇸🇺🇦 (@jahimes) March 13, 2024

   ___________________ 

I don’t use TikTok. I find it unwise to do so. But after careful review, I’m a no on this legislation.

This bill infringes on the First Amendment and grants undue power to the administrative state. pic.twitter.com/oSpmYhCrV8

— Rep. Dan Bishop (@RepDanBishop) March 13, 2024

   ___________________ 

I’m voting NO on the TikTok forced sale bill.

This bill was incredibly rushed, from committee to vote in 4 days, with little explanation.

There are serious antitrust and privacy questions here, and any national security concerns should be laid out to the public prior to a vote.

— Alexandria Ocasio-Cortez (@AOC) March 13, 2024

   ___________________ 

We should defend the free & open debate that our First Amendment protects. We should not take that power AWAY from the people & give it to the government. The answer to authoritarianism is NOT more authoritarianism. The answer to CCP-style propaganda is NOT CCP-style oppression. pic.twitter.com/z9HWgUSMpw

— Tom McClintock (@RepMcClintock) March 13, 2024

   ___________________ 

I'm voting no on the TikTok bill. Here's why:
1) It was rushed.
2) There's major free speech issues.
3) It would hurt small businesses.
4) America should be doing way more to protect data privacy & combatting misinformation online. Singling out one app isn't the answer.

— Rep. Jim McGovern (@RepMcGovern) March 13, 2024

    ___________________

Solve the correct problem.
Privacy.
Surveillance.
Content moderation.

Who owns #TikTok?
60% investors - including Americans
20% +7,000 employees - including Americans
20% founders
CEO & HQ Singapore
Data in Texas held by Oracle

What changes with ownership? I’ll be voting NO. pic.twitter.com/MrfROe02IS

— Warren Davidson 🇺🇸 (@WarrenDavidson) March 13, 2024

   ___________________ 

I voted no on the bill to force the sale of TikTok. Unlike our adversaries, we believe in freedom of speech and don’t ban social media platforms. Instead of this rushed bill, we need comprehensive data security legislation that protects all Americans.

— Val Hoyle (@RepValHoyle) March 13, 2024

    ___________________

Please tell the Senate to reject this bill and instead give Americans the comprehensive data privacy protections we so desperately need.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

"We Do No Such Thing": What the 303 Creative Decision Means and Doesn't Mean for Anti-Discrimination and Public Accommodation Laws

14 March 2024 at 12:52

Can a bakery that objects to marriage equality refuse to sell a cake to a gay couple for their wedding? This question, or some variant thereof, has occupied courts even before marriages for same-sex couples were legally recognized. In June 2023, in 303 Creative v. Elenis, the Supreme Court addressed this question in a case asking whether a wedding website design business could refuse to design websites for weddings of same-sex couples. The court ruled for the business. But properly understood, the decision does not license discrimination; it merely recognizes that where a business will not provide a particular product or service to anyone, it has the right to refuse it to a gay couple. That exception should not apply to most applications of anti-discrimination laws, which require only equal treatment, and do not require businesses to provide any particular service or product. As I explain in more detail in this Yale Law Journal article and as we argue in this model brief, 303 Creative does not create a First Amendment right to discriminate.

Under Colorado’s public accommodations law, businesses that choose to serve the public at large cannot turn people away because of their race, sex, religion, sexual orientation or other protected characteristics. 303 Creative claimed that because its service is expressive and its owner objects to same-sex marriage, it can’t be required to provide website design services for same-sex weddings.

In a 6-3 decision, the court ruled for the business, concluding that Colorado’s application of its public accommodations law violated the designer’s First Amendment rights. In our view, the decision was wrong. We submitted a friend-of-the-court brief arguing that the Constitution did not give the business a right to refuse to comply with Colorado’s anti-discrimination law. But it’s important to understand the limits of the decision.

The case was brought by 303 Creative, a website design business, and its owner, Lorie Smith. Smith argued that Colorado’s law violated her First Amendment rights by compelling her, if she opened a wedding website design business, to serve both gay and heterosexual couples seeking to marry. The business had never actually designed a wedding website. Still, Smith brought the case before doing so, arguing that she was deterred from pursuing the business out of fear that Colorado’s public accommodations law would require her to create websites celebrating marriages that she opposed.

Because the case was brought before any actual application of the law, it was unclear what the designer would or wouldn’t do, or how the law would apply to her. As a result, the court’s opinions treat the case as if it presented two very different questions.

According to the majority opinion, the case involved a business owner unwilling to design for anyone a website whose content contravened her beliefs by expressly celebrating marriages of same-sex couples. It did not involve a business that refused services to customers based on their sexual orientation. Rather, Smith objected to the content of the message the state was compelling, not the identity of the customers. And equally significantly, the majority viewed Colorado’s purpose in applying its public accommodations law in such circumstances—where the business did not object to the identity of the customers but to the message requested—as suppressing disfavored ideas about marriage and compelling expression of the state’s favored viewpoint. In this particular application, the majority concluded, the business objected only to the message, and the state sought to enforce the law to compel a message, not to prohibit discriminatory sales on the basis of identity.

The dissent saw the case entirely differently. It viewed the case as involving a website designer who objected to making any wedding website for a same-sex couple, regardless of its content. In the dissent’s view, 303 Creative would refuse to make a website for a same-sex couple even if the website was identical to that of a different-sex couple. The designer thus sought a right to discriminate not based on the content of any particular message, but based on the customer’s sexual orientation. The dissent correctly argued that the law has long been settled that the First Amendment does not permit businesses, even those whose services are expressive, to discriminate based on identity.

In essence, the majority and the dissent decided different cases. Indeed, when the dissent accused the majority of permitting businesses to discriminate on the basis of identity, the majority strongly rejected that conclusion, saying “We do no such thing.”

One way of understanding the difference is to imagine two paradigm cases. A t-shirt manufacturer that objects to making a t-shirt that says “Support Gay Marriage” has the right to refuse to make that t-shirt for a gay customer where its objection is to the message, not the identity of the customer. If the t-shirt manufacturer would not make a shirt with those words for anyone, it need not make one for a particular customer because they are gay. But at the same time, the t-shirt manufacturer could not refuse to sell a shirt saying “Love Marriage” to a customer because he was gay or sought to wear it to celebrate his marriage. If the business sells such shirts to others, it has to sell them to all. Nor could the t-shirt designer put up a sign saying “We Don’t Serve Gays.” In short, the decision permits a denial of service based on the message requested, but not based on who the product is for.

Understood in this light, the decision should have minimal impact on the enforcement of public accommodations and anti-discrimination laws. It recognizes a right to refuse service only where a business objects to expressing a particular message for anyone, not where it objects to serving certain customers because of their identity.

Because that is not the situation in the vast majority of instances in which antidiscrimination laws are applied, the decision leaves standing what the court previously described as the “general rule”—namely, that religious and philosophical objections “do not allow business owners and other actors in the economy and in society to deny protected persons equal access to goods and services under a neutral and generally applicable public accommodations law.”

In short, the decision in 303 Creative does not mean that a caterer, florist, or baker can refuse to provide food, flowers, or a cake for a wedding merely because the participants are of the same sex and the vendor objects to the implicit message providing those services sends. Instead, it is only when a public accommodations law compels speech that a business owner objects to providing for anyone, and does so in order to excise disfavored ideas, that the law violates the First Amendment.

SXSW Tried to Silence Critics with Bogus Trademark and Copyright Claims. EFF Fought Back.

13 March 2024 at 19:01

Special thanks to EFF legal intern Jack Beck, who was the lead author of this post.

Amid heavy criticism for its ties to weapons manufacturers supplying Israel, South by Southwest—the organizer of an annual conference and music festival in Austin—has been on the defensive. One tool in their arsenal: bogus trademark and copyright claims against local advocacy group Austin for Palestine Coalition.

The Austin for Palestine Coalition has been a major source of momentum behind recent anti-SXSW protests. Their efforts have included organizing rallies outside festival stages and hosting an alternative music festival in solidarity with Palestine. They have also created social media posts explaining the controversy, criticizing SXSW, and calling on readers to email SXSW with demands for action. The group’s posts include graphics that modify SXSW’s arrow logo to add blood-stained fighter jets. Other images incorporate patterns evoking SXSW marketing materials overlaid with imagery like a bomb or a bleeding dove.

[Image: One of Austin for Palestine’s graphics, a parody of the SXSW arrow logo with a bleeding dove in front of a geometric background and the text “If SXSW wishes to retain its credibility, it must change course by disavowing the normalization of militarization within the tech and entertainment industries.”]

Days after the posts went up, SXSW sent a cease-and-desist letter to Austin for Palestine, accusing them of trademark and copyright infringement and demanding they take down the posts. Austin for Palestine later received an email from Instagram indicating that SXSW had reported the post for violating their trademark rights.

We responded to SXSW on Austin for Palestine’s behalf, explaining that their claims are completely unsupported by the law and demanding they retract them.

The law is clear on this point. The First Amendment protects your right to make a political statement using trademark parodies, whether or not the trademark owner likes it. That’s why trademark law applies a different standard (the “Rogers test”) to infringement claims involving expressive works. The Rogers test is a crucial defense against takedowns like these, and it clearly applies here. Even without Rogers’ extra protections, SXSW’s trademark claim would be bogus: Trademark law is about preventing consumer confusion, and no reasonable consumer would see Austin for Palestine’s posts and infer they were created or endorsed by SXSW.

SXSW’s copyright claims are just as groundless. Basic symbols like their arrow logo are not copyrightable. Moreover, even if SXSW meant to challenge Austin for Palestine’s mimicking of their promotional material—and it’s questionable whether that is copyrightable as well—the posts are a clear example of non-infringing fair use. In a fair use analysis, courts weigh four factors, and each of them here either favors Austin for Palestine or is at worst neutral. Most importantly, it’s clear that the critical message conveyed by Austin for Palestine’s use is entirely different from the original purpose of these marketing materials, and the only injury to SXSW is reputational—which is not a cognizable copyright injury.

SXSW has yet to respond to our letter. EFF has defended against bogus copyright and trademark claims in the past, and SXSW’s attempted takedown feels especially egregious considering the nature of Austin for Palestine’s advocacy. Austin for Palestine used SXSW’s iconography to make a political point about the festival itself, and neither trademark nor copyright is a free pass to shut down criticism. As an organization that “dedicates itself to helping creative people achieve their goals,” SXSW should know better.

Protect Yourself from Election Misinformation

13 March 2024 at 14:22

Welcome to your U.S. presidential election year, when all kinds of bad actors will flood the internet with election-related disinformation and misinformation aimed at swaying or suppressing your vote in November. 

So… what’re you going to do about it? 

As EFF’s Corynne McSherry wrote in 2020, online election disinformation is a problem that has had real consequences in the U.S. and all over the world—it has been correlated with ethnic violence in Myanmar and India and with Kenya’s 2017 elections, among other events. Still, election misinformation and disinformation continue to proliferate online and off.

That being said, regulation is not typically an effective or human rights-respecting way to address election misinformation. Even well-meaning efforts to control election misinformation through regulation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression. Indeed, any content regulation must be scrutinized to avoid inadvertently affecting meaningful expression: Is the approach narrowly tailored or a categorical ban? Does it empower users? Is it transparent? Is it consistent with human rights principles? 

While platforms and regulators struggle to get it right, internet users must be vigilant about checking the election information they receive for accuracy. There is help. Nonprofit journalism organization ProPublica published a handy guide about how to tell if what you’re reading is accurate or “fake news.” The International Federation of Library Associations and Institutions’ infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends.

To make sure you’re getting good information about how your election is being conducted, check in with trusted sources including your state’s Secretary of State, Common Cause, and other nonpartisan voter protection groups, or call or text 866-OUR-VOTE (866-687-8683) to speak with a trained election protection volunteer. 

And if you see something, say something: You can report election disinformation at https://reportdisinfo.org/, a project of the Common Cause Education Fund. 

EFF also offers some election-year food for thought:

  • On EFF’s “How to Fix the Internet” podcast, Pamela Smith—president and CEO of Verified Voting—in 2022 talked with EFF’s Cindy Cohn and Jason Kelley about finding reliable information on how your elections are conducted, as part of ensuring ballot accessibility and election transparency.
  • Also on “How to Fix the Internet”, Alice Marwick—cofounder and principal researcher at the University of North Carolina, Chapel Hill’s Center for Information, Technology and Public Life—in 2023 talked about finding ways to identify and leverage people’s commonalities to stem the flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out. She discussed why seemingly ludicrous conspiracy theories get so many views and followers; how disinformation is tied to personal identity and feelings of marginalization and disenfranchisement; and when fact-checking does and doesn’t work.
  • EFF’s Cory Doctorow wrote in 2020 about how big tech monopolies distort our public discourse: “By gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism.” 

An effective democracy requires an informed public, and participating in a democracy is a responsibility that requires work. Online platforms have a long way to go in providing the tools users need to discern legitimate sources from fake news. In the meantime, it’s on each of us. Don’t let anyone lie, cheat, or scare you away from making the most informed decision for your community at the ballot box.

Congress Should Give Up on Unconstitutional TikTok Bans

12 March 2024 at 20:01

Congress’ unfounded plan to ban TikTok under the guise of protecting our data is back, this time in the form of a new bill—the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521 — which has gained a dangerous amount of momentum in Congress. This bipartisan legislation was introduced in the House just a week ago and is expected to be sent to the Senate after a vote later this week.

A year ago, supporters of digital rights across the country successfully stopped the federal RESTRICT Act, commonly known as the “TikTok Ban” bill (it was that and a whole lot more). And now we must do the same with this bill. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

As a first step, H.R. 7521 would force TikTok to find a new owner that is not based in a foreign adversarial country within the next 180 days or be banned until it does so. It would also give the President the power to designate other applications under the control of a country considered adversarial to the U.S. to be a national security threat. If deemed a national security threat, the application would be banned from app stores and web hosting services unless it cuts all ties with the foreign adversarial country within 180 days. The bill would criminalize the distribution of the application through app stores or other web services, as well as the maintenance of such an app by the company. Ultimately, the result of the bill would either be a nationwide ban on TikTok, or a forced sale of the application to a different company.

Make no mistake—though this law starts with TikTok specifically, it could have an impact elsewhere. Tencent’s WeChat app is one of the world’s largest standalone messenger platforms, with over a billion users, and is a key vehicle for the Chinese diaspora generally. It would likely also be a target. 

The bill’s sponsors have argued that the amount of private data available to and collected by the companies behind these applications — and in theory, shared with a foreign government — makes them a national security threat. But like the RESTRICT Act, this bill won’t stop this data sharing, and will instead reduce our rights online. User data will still be collected by numerous platforms—possibly even TikTok after a forced sale—and it will still be sold to data brokers who can then sell it elsewhere, just as they do now. 

The only solution to this pervasive ecosystem is prohibiting the collection of our data in the first place. Ultimately, foreign adversaries will still be able to obtain our data from social media companies unless those companies are forbidden from collecting, retaining, and selling it, full stop. And to be clear, under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. That’s why EFF supports such consumer data privacy legislation.

Congress has also argued that this bill is necessary to tackle the anti-American propaganda that young people are seeing due to TikTok’s algorithm. Both this justification and the national security justification raise serious First Amendment concerns, and last week EFF, the ACLU, CDT, and Fight for the Future wrote to the House Energy and Commerce Committee urging them to oppose this bill due to its First Amendment violations—specifically for those across the country who rely on TikTok for information, advocacy, entertainment, and communication. The U.S. has rightfully condemned other countries when they have banned, or sought to ban, specific social media platforms.

And it’s not just civil society saying this. Late last year, the courts blocked Montana’s TikTok ban, SB 419, from going into effect on January 1, 2024, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. EFF and the ACLU had filed a friend-of-the-court brief in support of a challenge to the law brought by TikTok and a group of the app’s users who live in Montana. 

Our brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court upheld our free speech rights and blocked the law from going into effect. As with that state ban, the US government cannot show that a federal ban is narrowly tailored, and thus cannot use the threat of unlawful censorship as a cudgel to coerce a business to sell its property. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Instead of passing this overreaching and misguided bill, Congress should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries, China included. We shouldn’t waste time arguing over a law that will get thrown out for silencing the speech of millions of Americans. Instead, Congress should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation.

Access to Internet Infrastructure is Essential, in Wartime and Peacetime

12 March 2024 at 10:49

We’ve been saying it for 20 years, and it remains true now more than ever: the internet is an essential service. It enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. More specifically, during wartime and conflict, internet and phone services enable the communication of information between people in challenging situations, as well as the reporting by on-the-ground journalists and ordinary people of the news. 

Unfortunately, governments across the world are very aware of their power to cut off this crucial lifeline, and frequently undertake targeted initiatives to do so. These internet shutdowns have become a blunt instrument that aid state violence and inhibit free speech, and are routinely deployed in direct contravention of human rights and civil liberties.

And this is not a one-dimensional situation. Nearly twenty years after the world’s first total internet shutdowns, this draconian measure is no longer the sole domain of authoritarian states but has become a favorite of a diverse set of governments across three continents. For example:

In Iran, the government has been suppressing internet access for many years. In the past two years in particular, people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention and in response, the Iranian government rushed to control both the public narrative and organizing efforts by banning social media, and sometimes cutting off internet access altogether. 

In Sudan, authorities have enacted a total telecommunications blackout during a massive conflict and displacement crisis. Shutting down the internet is a deliberate strategy to block the flow of information that brings visibility to the crisis and to prevent humanitarian aid from supporting populations endangered by the conflict. The communications blackout has extended for weeks, and in response a global campaign, #KeepItOn, has formed to put pressure on the Sudanese government to restore its people’s access to these vital services. More than 300 global humanitarian organizations have signed on to support #KeepItOn.

And in Palestine, where the Israeli government exercises near-total control over both wired internet and mobile phone infrastructure, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. The latest blackout in January 2024 occurred amid a widespread crackdown by the Israeli government on digital rights—including censorship, surveillance, and arrests—and amid accusations of bias and unwarranted censorship by social media platforms. On that occasion, the internet was restored after calls from civil society and nations, including the U.S. As we’ve noted, internet shutdowns impede residents' ability to access and share resources and information, as well as the ability of residents and journalists to document and call attention to the situation on the ground—more necessary than ever given that a total of 83 journalists have been killed in the conflict so far. 

Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications has the ability to cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world. In 2022, President Biden offered to upgrade the West Bank and Gaza to 4G, but the initiative stalled. While some Palestinians are able to circumvent the blackout by utilizing Israeli SIM cards (which are difficult to obtain) or Egyptian eSIMs, these workarounds are not solutions to the larger problem of blackouts, which, as the National Security Council has said, “[deprive] people from accessing lifesaving information, while also undermining first responders and other humanitarian actors’ ability to operate and to do so safely.”

Access to internet infrastructure is essential, in wartime as in peacetime. In light of these numerous blackouts, we remain concerned about the control that authorities are able to exercise over the ability of millions of people to communicate. It is imperative that people’s access to the internet remains protected, regardless of how user platforms and internet companies transform over time. We continue to shout this, again and again, because it needs to be restated, and unfortunately today there are ever more examples of it happening before our eyes.


Four Voices You Should Hear this International Women’s Day

8 March 2024 at 17:15

Around the globe, freedom of expression varies wildly in definition, scope, and level of access. The impact of the digital age on perceptions and censorship of speech has been felt across the political spectrum on a worldwide scale. In the debate over what counts as free expression and how it should work in practice, we often lose sight of how different forms of censorship can have a negative impact on different communities, and especially marginalized or vulnerable ones. This International Women’s Day, spend some time with four stories of hope and inspiration that teach us how to reflect on the past to build a better future.

1. Podcast Episode: Safer Sex Work Makes a Safer Internet

An internet that is safe for sex workers is an internet that is safer for everyone. Though the effects of stigmatization and criminalization run deep, the sex worker community exemplifies how technology can help people reduce harm, share support, and offer experienced analysis to protect each other. Public interest technology lawyer Kendra Albert and sex worker, activist, and researcher Danielle Blunt have been fighting for sex workers’ online rights for years and say that holding online platforms legally responsible for user speech can lead to censorship that hurts us all. They join EFF’s Cindy Cohn and Jason Kelley in this podcast to talk about protecting all of our free speech rights.

2. Speaking Freely: Sandra Ordoñez

Sandra (Sandy) Ordoñez is dedicated to protecting women being harassed online. Sandra is an experienced community engagement specialist, a proud NYC Latina resident of Sunset Park in Brooklyn, and a recipient of Fundación Carolina’s Hispanic Leadership Award. She is also a long-time diversity and inclusion advocate, with extensive experience incubating and creating FLOSS and Internet Freedom community tools. In this interview with EFF’s Jillian C. York, Sandra discusses free speech and how communities that are often the most directly affected are the last consulted.

3. Story: Coded Resistance, the Comic!

From the days of chattel slavery until the modern Black Lives Matter movement, Black communities have developed innovative ways to fight back against oppression. EFF's Director of Engineering, Alexis Hancock, documented this important history of codes, ciphers, underground telecommunications and dance in a blog post that became one of our favorite articles of 2021. In collaboration with The Nib and illustrator Chelsea Saunders, "Coded Resistance" was adapted into comic form to further explore these stories, from the coded songs of Harriet Tubman to Darnella Frazier recording the murder of George Floyd.

4. Speaking Freely: Evan Greer

Evan Greer is many things: a musician, an activist for LGBTQ issues, the Deputy Director of Fight for the Future, and a true believer in the free and open internet. In this interview, EFF’s Jillian C. York spoke with Evan about the state of free expression, and what we should be doing to protect the internet for future activism. Among the many topics discussed was how policies that promote censorship—no matter how well-intentioned—have historically benefited the powerful and harmed vulnerable or marginalized communities. Evan talks about what we as free expression activists should do to get at that tension and find solutions that work for everyone in society.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Four Actions You Can Take To Protect Digital Rights this International Women’s Day

8 March 2024 at 17:09

This International Women’s Day, defend free speech, fight surveillance, and support innovation by calling on our elected politicians and private companies to uphold our most fundamental rights—both online and offline.

1. Pass the “My Body, My Data” Act

Privacy fears should never stand in the way of healthcare. That's why this common-sense federal bill, sponsored by U.S. Rep. Sara Jacobs, would require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for. The protected information includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature, or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. It also gives people a strong private right of action to take on companies that violate their privacy.

2. Ban Government Use of Face Recognition

Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly when involving faces of folks of color, especially Black women, as well as trans and nonbinary people. Because of face recognition errors, a Black woman, Porcha Woodruff, was wrongfully arrested, and another, Lamya Robinson, was wrongfully kicked out of a roller rink.

Yet this technology is widely used by law enforcement for identifying suspects in criminal investigations, including to disparately surveil people of color. At the local, state, and federal level, people across the country are urging politicians to ban the government’s use of face surveillance because it is inherently invasive, discriminatory, and dangerous. Many U.S. cities have done so, including San Francisco and Boston. Now is our chance to end the federal government’s use of this spying technology. 

3. Tell Congress: Don’t Outlaw Encrypted Apps

Advocates of women's equality often face surveillance and repression from powerful interests. That's why they need strong end-to-end encryption. But if the so-called “STOP CSAM Act” passes, it would undermine digital security for all internet users, impacting private messaging and email app providers, social media platforms, cloud storage providers, and many other internet intermediaries and online services. Free speech for women’s rights advocates would also be at risk. STOP CSAM would also create a carveout in Section 230, the law that protects our online speech, exposing providers to civil lawsuits merely for hosting a platform where part of the illegal conduct occurred. Tell Congress: don't pass this law that would undermine security and free speech online, two critical elements in the fight for equality for all genders.

4. Tell Facebook: Stop Silencing Palestine

Since Hamas’ attack on Israel on October 7, Meta’s biased moderation tools and practices, as well as its policies on violence and incitement and on dangerous organizations and individuals (DOI), have led to Palestinian content and accounts being removed and banned at an unprecedented scale. As Palestinians and their supporters have taken to social platforms to share images and posts about the situation in the Gaza strip, some have seen their content suddenly disappear or had their posts flagged for breaches of the platforms’ terms of use. In some cases their accounts have been suspended, and in others features such as liking and commenting have been restricted.

This has an exacerbated impact on the most at-risk groups in Gaza, such as those who are pregnant or need reproductive healthcare support, since sharing information online is both an avenue for communicating the reality on the ground to the world and a way to get information to those who need it most.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Voices You Should Hear this International Women’s Day

Victory! EFF Helps Resist Unlawful Warrant and Gag Order Issued to Independent News Outlet

7 March 2024 at 15:44

Over the past month, the independent news outlet Indybay has quietly fought off an unlawful search warrant and gag order served by the San Francisco Police Department. Today, a court lifted the gag order and confirmed the warrant is void. The police also promised the court not to seek another warrant for Indybay’s information in its investigation.

Nevertheless, Indybay was unconstitutionally gagged from speaking about the warrant for more than a month. And the SFPD once again violated the law despite past assurances that it was putting safeguards in place to prevent such violations.

EFF provided pro bono legal representation to Indybay throughout the process.

Indybay’s experience highlights a worrying police tactic of demanding unpublished source material from journalists, in violation of clearly established shield laws. Warrants like the one issued by the police invade press autonomy, chill news gathering, and discourage sources from contributing. While this is a victory, Indybay was still gagged from speaking about the warrant, and it would have had to pay thousands of dollars in legal fees to fight the warrant without pro bono counsel. Other small news organizations might not be so lucky. 

It started on January 18, 2024, when an unknown member of the public published a story on Indybay’s unique community-sourced newswire, which allows anyone to publish news and source material on the website. The author claimed credit for smashing windows at the San Francisco Police Credit Union.

On January 24, police sought and obtained a search warrant that required Indybay to turn over any text messages, online identifiers like IP addresses, or other unpublished information that would help reveal the author of the story. The warrant also ordered Indybay not to speak about the warrant for 90 days. With the help of EFF, Indybay responded that the search warrant was illegal under both California and federal law and requested that the SFPD formally withdraw it. After several more requests and shortly before the deadline to comply with the search warrant, the police agreed not to pursue the warrant further “at this time.” Under California law, the warrant became void when it was not executed within 10 days, but the gag order remained in place.

Indybay went to court to confirm the warrant would not be renewed and to lift the gag order. It argued it was protected by California and federal shield laws that make it all but impossible for law enforcement to use a search warrant to obtain unpublished source material from a news outlet. California law, Penal Code § 1524(g), in particular, mandates that “no warrant shall issue” for that information. The federal Privacy Protection Act has some exceptions, but they clearly did not apply in this situation. Nontraditional and independent news outlets like Indybay are covered by these laws (Indybay fought this same fight more than a decade ago, when one of its photographers successfully quashed a search warrant). And when attempting to unmask a source, an IP address can sometimes be as revealing as a reporter’s notebook. In a previous case, EFF established that IP addresses are among the types of unpublished journalistic information typically protected from forced disclosure by law.

In addition, Indybay argued that the gag order was an unconstitutional content-based prior restraint on speech—noting that the government did not have a compelling interest in hiding unlawful investigative techniques.

Rather than fight the case, the police conceded the warrant was void, promised not to seek another search warrant for Indybay’s information during the investigation, and agreed to lift the gag order. A San Francisco Superior Court Judge signed an order confirming that.

That this happened at all is especially concerning since the SFPD had agreed to institute safeguards following its illegal execution of a search warrant against freelance journalist Bryan Carmody in 2019. In settling a lawsuit brought by Carmody, the SFPD agreed to ensure all its employees were aware of its policies concerning warrants to journalists. As a result, the department instituted internal guidance and procedures, not all of which appear to have been followed in Indybay’s case.

Moreover, the search warrant and gag order should never have been signed by the court given that it was obviously directed to a news organization. We call on the court and the SFPD to meet with those representing journalists to make sure that we don't have to deal with another unconstitutional gag order and search warrant in another few years.

The San Francisco Police Department's public statement on this case is incomplete. It leaves out the fact that Indybay was gagged for more than a month and that it was only Indybay's continuous resistance that prevented the police from acting on the warrant. It also does not mention whether the police department's internal policies were followed in this case. For one thing, this type of warrant requires approval from the chief of police before it is sought, not after. 

Read more here: 

Stipulated Order

Motion to Quash

Search Warrant

Trujillo Declaration

Burdett Declaration

SFPD Press Release

New York's Coercion of Private Companies to Blacklist the NRA Has a Long and Dark History

More than 60 years ago, the Supreme Court ruled that the First Amendment bars the government from coercing private entities to punish speech that the government disfavors. Just as the government can’t directly punish or censor speech it disagrees with, it cannot do so indirectly by coercing private parties to do the same.

History underscores the importance of this free speech protection. Government officials have all too often enlisted private parties—from the White Citizens’ Councils of the Jim Crow South to the blacklists of Communists in the McCarthy era—to punish those with whom they disagree. New York’s efforts to punish the National Rifle Association, at issue before the Supreme Court in National Rifle Association v. Vullo, follow in the footsteps of those earlier censorship efforts.

The ACLU disagrees sharply with the NRA on many issues, yet we are representing the group in this case because of the First Amendment principles at stake. We argue that Maria Vullo, a New York state regulator, threatened to use her regulatory power over banks and insurance companies to coerce them into denying basic financial services to the NRA and, in Vullo’s own words, “other gun promotion” groups. Vullo’s threats were expressly based on her disagreement with the NRA’s advocacy. And they worked. Several insurance companies and banks refused to work with the NRA out of fear of reprisals from New York regulators. The ACLU urges the Supreme Court to hold that coercing third parties to break ties with the NRA because of its advocacy violates the First Amendment.

Even those who oppose government censorship may be sympathetic to New York’s efforts to shut down the NRA. The NRA is dedicated to promoting guns, which play an outsized role in violence and death in this country. The ACLU does not support the NRA’s mission. In fact, we directly oppose the NRA and support the government’s power to adopt sensible tools, like public carry permits and disarming persons subject to domestic violence protective orders. While it’s understandable that Vullo wanted to address the gun violence epidemic, government censorship wasn’t a constitutional response to the problem.

The NRA’s case is hardly the first time government officials have sought to use private parties to penalize those with whom they disagree. Our nation’s history is replete with examples. And when the government threatens businesses in this way, the businesses often go along. As a slogan of the McCarthy era summed it up: “Why buy yourself a headache?”

During the McCarthy era, from the late 1940s to the 1960s, the government regularly pressured private entities to fire people it perceived as connected with the Communist Party. The FBI and the House Committee on Un-American Activities delivered the names of employees who had alleged connections to “subversive” organizations, or even subscriptions to their publications, to private employers like defense contractors, universities, newspapers, and major corporations such as General Electric and U.S. Steel. Employers that failed to fire these employees faced loss of lucrative government contracts, necessary licenses, targeted investigations, and public smearing.

The ACLU itself has been the target of such efforts. In the late 1930s, Jersey City Mayor Frank Hague bragged that the reason the ACLU and the Congress of Industrial Organizations (CIO) had been unable to book a single private hall for meetings or speakers was that the hall owners were his “friends” and knew that he did “not approve of un-American groups coming into Jersey City.” The one hall owner who did rent his hall to the CIO for a meeting was then charged with a building violation. When asked about the violation at trial, Hague responded, “Any port in a storm, Counselor”—effectively acknowledging that the violation was in retaliation for renting the private hall to a disfavored speaker.

The ACLU’s predecessor, the National Civil Liberties Bureau, confronted similar efforts during World War I. When the Justice Department attempted to put the Industrial Workers of the World (IWW) out of business by filing criminal charges against more than 100 members who had called for labor strikes, accusing them of undermining the war effort, the National Civil Liberties Bureau placed an advertisement seeking funds for the IWW’s “right of a fair trial.” The government responded by coercing The New Republic, a privately-run media company, to support its goal by threatening to revoke the magazine’s second-class mailing privileges if it reprinted the message.

Southern states turned to this tactic in their resistance to the racial integration established in Brown v. Board of Education. Some states mandated public disclosure of the National Association for the Advancement of Colored People’s (NAACP) members, and relied on private entities that shared the state’s commitment to maintaining white supremacy, such as the White Citizens’ Councils, to publicize the disclosures to private business owners who were expected to then punish those named. As a result, NAACP members were fired, denied credit, prohibited from purchasing goods, evicted or had their home loans foreclosed, and subjected to threats of and actual violence. This public-private partnership became a blueprint for how to use racialized violence as an “economic cold war” to render both Black and white supporters of the NAACP “destitute” and undermine their ability to advocate for racial justice.

Nor is this tactic a relic of the past. In Florida, Gov. Ron DeSantis directed the state agency in charge of liquor licensing to see if it could stop private entities from hosting performances of “A Drag Queen Christmas.” After the shows went forward, a non-profit theater venue in Orlando and the Hyatt Regency Miami faced actions to revoke their liquor licenses for allegedly violating laws prohibiting lewdness, vulgar exposure of sexual organs, and obscene performances—despite the agency’s own undercover agents attending and reporting that there were no “lewd acts” or “exposure of genital organs.”

Maria Vullo followed the same playbook. As the state’s top financial regulator, in coordination with then-Governor Andrew Cuomo, she expressly targeted the NRA for its “gun promotion” advocacy and urged all the banks and insurance companies she regulated to refuse to do business with the NRA. She offered leniency to one insurer for legal infractions if it would cut its ties to the NRA, and extracted promises from the NRA’s three largest insurance partners never to provide “affinity insurance” to the group’s members ever again.

NRA v. Vullo isn’t just about the NRA. It’s about all of our First Amendment rights to advocate for causes we believe in, without being targeted by public-private ventures of retaliation. If New York can do this to the NRA, then Oklahoma could similarly penalize criminal justice reformers advocating for bail reform, and Texas could target climate change organizations advancing the view that all fossil fuel extraction must end. The right to advocate views the government opposes safeguards our ability to organize for the country we want to see. It’s a principle the ACLU has defended for more than 100 years, and one we will continue to protect from government censorship of all kinds, whether we agree or disagree with the views of those being targeted.

Ghana's President Must Refuse to Sign the Anti-LGBTQ+ Bill

29 February 2024 at 17:52

After three years of political discussions, MPs in Ghana's Parliament voted to pass the country’s draconian Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law. 

President Nana Akufo-Addo must protect the human rights of all people in Ghana and refuse to provide assent to the bill.

This anti-LGBTQ+ legislation introduces prison sentences for those who partake in LGBTQ+ sexual acts, as well as those who promote the rights of gay, lesbian or other non-conventional sexual or gender identities. This would effectively ban all speech and activity on and offline that even remotely supports LGBTQ+ rights.

Ghanaian authorities could probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech or create lists of pro-LGBTQ+ supporters to be arrested upon entry. They could also require online platforms to suppress content about LGBTQ+ issues, regardless of where it was created. 

Such enforcement would criminalize the activity of many major cultural and commercial institutions. If President Akufo-Addo does approve the bill, musicians, corporations, and other entities that openly support LGBTQ+ rights would be banned in Ghana.

Despite this direct threat to online freedom of expression, tech giants have yet to speak out publicly against the LGBTQ+ persecution in Ghana. Twitter opened its first African office in Accra in April 2021, citing Ghana as “a supporter of free speech, online freedom, and the Open Internet.” Adaora Ikenze, Facebook’s head of Public Policy in Anglophone West Africa, has said: “We want the millions of people in Ghana and around the world who use our services to be able to connect, share and express themselves freely and safely, and will continue to protect their ability to do that on our platforms.” Both companies have essentially dodged the question.

For many countries across Africa, and indeed the world, the codification of anti-LGBTQ+ discourses and beliefs can be traced back to colonial rule, and a recent CNN investigation from December 2023 found alleged links between the drafting of homophobic laws in Africa and a US nonprofit. The group denied those links, despite having hosted a political conference in Accra shortly before an early version of this bill was drafted.

Regardless of its origin, the past three years of political and social discussion have contributed to a decimation of LGBTQ+ rights in Ghana, and the decision by MPs in Ghana’s Parliament to pass this bill will have severe impacts not just for LGBTQ+ people in Ghana, but for the very principle of free expression online and off. President Nana Akufo-Addo must reject it.

EFF to D.C. Circuit: The U.S. Government’s Forced Disclosure of Visa Applicants’ Social Media Identifiers Harms Free Speech and Privacy

27 February 2024 at 16:24

Special thanks to legal intern Alissa Johnson, who was the lead author of this post.

EFF recently filed an amicus brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court decision upholding a State Department rule that forces visa applicants to the United States to disclose their social media identifiers as part of the application process. If upheld, the district court ruling would have severe implications for free speech and privacy, not just for visa applicants but also for the people in their social media networks—millions, if not billions, of people, given that the “Disclosure Requirement” applies to 14.7 million visa applicants annually.

Since 2019, visa applicants to the United States have been required to disclose social media identifiers they have used in the last five years to the U.S. government. Two U.S.-based organizations that regularly collaborate with documentary filmmakers around the world sued, challenging the policy on First Amendment and other grounds. A federal judge dismissed the case in August 2023, and plaintiffs filed an appeal, asserting that the district court erred in applying an overly deferential standard of review to plaintiffs’ First Amendment claims, among other arguments.

Our amicus brief lays out the privacy interests that visa applicants have in their public-facing social media profiles, the Disclosure Requirement’s chilling effect on the speech of both applicants and their social media connections, and the features of social media platforms like Facebook, Instagram, and X that reinforce these privacy interests and chilling effects.

Social media paints an alarmingly detailed picture of users’ personal lives, covering far more information than can be gleaned from a visa application. Although the Disclosure Requirement implicates only “public-facing” social media profiles, registering these profiles still exposes substantial personal information to the U.S. government because of the number of people impacted and the vast amounts of information shared on social media, both intentionally and unintentionally. Moreover, collecting data across social media platforms gives the U.S. government access to a wealth of information that may reveal more in combination than any individual question or post would alone. This risk is even further heightened if government agencies use automated tools to conduct their review—which the State Department has not ruled out and the Department of Homeland Security’s component Customs and Border Protection has already begun doing in its own social media monitoring program. Visa applicants may also unintentionally reveal personal information on their public-facing profiles, either due to difficulties in navigating default privacy settings within or across platforms, or through personal information posted by social media connections rather than the applicants themselves.

The Disclosure Requirement’s infringements on applicants’ privacy are further heightened because visa applicants are subject to social media monitoring not just during the visa vetting process, but even after they arrive in the United States. The policy also allows for public social media information to be stored in government databases for upwards of 100 years and shared with domestic and foreign government entities.  

Because of the Disclosure Requirement’s potential to expose vast amounts of applicants’ personal information, the policy chills First Amendment-protected speech of both the applicant themselves and their social media connections. The Disclosure Requirement allows the government to link pseudonymous accounts to real-world identities, impeding applicants’ ability to exist anonymously in online spaces. In response, a visa applicant might limit their speech, shut down pseudonymous accounts, or disengage from social media altogether. They might disassociate from others for fear that those connections could be offensive to the U.S. government. And their social media connections—including U.S. persons—might limit or sever online connections with friends, family, or colleagues who may be applying for a U.S. visa for fear of being under the government’s watchful eye.  

The Disclosure Requirement hamstrings the ability of visa applicants and their social media connections to freely engage in speech and association online. We hope that the D.C. Circuit reverses the district court’s ruling and remands the case for further proceedings.

EFF Urges Ninth Circuit to Reinstate X’s Legal Challenge to Unconstitutional California Content Moderation Law

23 February 2024 at 16:06

The Electronic Frontier Foundation (EFF) urged a federal appeals court to reinstate X’s lawsuit challenging a California law that forces social media companies to file reports to the state about their content moderation decisions, and with respect to five controversial issues in particular—an unconstitutional intrusion into platforms’ right to curate hosted speech free of government interference.

While we are enthusiastic proponents of transparency and have worked, through the Santa Clara Principles and otherwise, to encourage online platforms to provide information to their users, we see a clear threat in these state mandates. Indeed, the Santa Clara Principles themselves warn against governments adopting these voluntary standards as mandates. California’s law is especially concerning since it appears aimed at coercing social media platforms to more actively moderate user posts.

In a brief filed with the U.S. Court of Appeals for the Ninth Circuit, we asserted—as we have repeatedly in the face of state mandates around the country about what speech social media companies can and cannot host—that allowing California to interject itself into platforms’ editorial processes, in any form, raises serious First Amendment concerns.

At issue is California A.B. 587, a 2022 law requiring large social media companies to semiannually report to the state attorney general detailed information about the content moderation decisions they make, particularly with respect to hot-button issues like hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference.

A.B. 587 requires companies to report “detailed descriptions” of their content moderation practices generally and for each of these categories, and also to report detailed information about all posts flagged as belonging to any of those categories, including how content in these categories is defined, how it was flagged, how it was moderated, and whether the action was appealed. Companies can be fined up to $15,000 a day for failing to comply.

X, the social media company formerly known as Twitter, sued to overturn the law, claiming correctly that it violates its First Amendment right against being compelled to speak. A federal judge declined to put the law on temporary hold and dismissed the lawsuit.

We agree with Twitter and urge the Ninth Circuit to reverse the lower court. The law was intended to be and is operating as an informal censorship scheme to pressure online intermediaries to moderate user speech, which the First Amendment does not allow.

It’s akin to requiring a state attorney general or law enforcement to be able to listen in on editorial board meetings at the local newspaper or TV station, a clear interference with editorial freedom. The Supreme Court has consistently upheld this general principle of editorial freedom in a variety of speech contexts. There shouldn’t be a different rule for social media.

From a legal perspective, the issue before the court is what degree of First Amendment scrutiny should be used to analyze the law. The district court found that the law need only be justified and not burdensome to comply with, a low degree of analysis known as Zauderer scrutiny that is reserved for compelled factual and noncontroversial commercial speech. In our brief, we urge that, as a law that both intrudes upon editorial freedom and disfavors certain categories of speech, A.B. 587 must survive the far more rigorous strict First Amendment scrutiny. Our brief sets out several reasons why strict scrutiny should be applied.

Our brief also distinguishes A.B. 587’s speech compulsions from ones that do not touch the editorial process such as requirements that companies disclose how they handle user data. Such laws are typically subject to an intermediate level of scrutiny, and EFF strongly supports such laws that can pass this test.

A.B. 587 says X and other social media companies must report to the California Attorney General whether and how they curate disfavored and controversial speech, and then adhere to those statements or face fines. As a practical matter, this requirement is unworkable—content moderation policies are highly subjective, constantly evolving, and subject to numerous influences.

And as a matter of law, A.B. 587 interferes with platforms’ constitutional right to decide whether, how, when, and in what way to moderate controversial speech. The law is a thinly veiled attempt to coerce sites to remove content the government doesn’t like.

We hope the Ninth Circuit agrees that’s not allowed under the First Amendment.

EFF Opposes California Initiative That Would Cause Mass Censorship

23 February 2024 at 12:37

In recent years, lots of proposed laws purport to reduce “harmful” content on the internet, especially for kids. Some have good intentions. But the fact is, we can’t censor our way to a healthier internet.

When it comes to online (or offline) content, people simply don’t agree about what’s harmful. And people make mistakes, even in content moderation systems that have extensive human review and appropriate appeals. The systems get worse when automated filters are brought into the mix, as increasingly occurs when moderating content at the vast scale of the internet.

Recently, EFF weighed in against an especially vague and poorly written proposal: California Ballot Initiative 23-0035, written by Common Sense Media. It would allow plaintiffs to sue online information providers for damages of up to $1 million if a provider violates “its responsibility of ordinary care and skill to a child.”

We sent a public comment to California Attorney General Rob Bonta regarding the dangers of this wrongheaded proposal. While the AG’s office does not typically take action for or against ballot initiatives at this stage of the process, we wanted to register our opposition to the initiative as early as we could.

Initiative 23-0035  would result in broad censorship via a flood of lawsuits claiming that all manner of content online is harmful to a single child. While it is possible for children (and adults) to be harmed online, Initiative 23-0035’s vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults. Many online platforms will censor user content in order to avoid this legal risk.

The following are just a few of the many areas of culture, politics, and life where people have different views of what is “harmful,” and where this ballot initiative thus could cause removal of online content:

  • Discussions about LGBTQ life, culture, and health care.
  • Discussions about dangerous sports like tackle football, e-bikes, or sport shooting.
  • Discussions about substance abuse, depression, or anxiety, including conversations among people seeking treatment and recovery.

In addition, the proposed initiative would lead to mandatory age verification. It’s wrong to force someone to show ID before they go online to search for information. It eliminates the right to speak or to find information anonymously, for both minors and adults.

This initiative, with its vague language, is arguably worse than the misnamed Kids Online Safety Act, a federal censorship bill that we are opposing. We hope the sponsors of this initiative choose not to move forward with this wrongheaded and unconstitutional proposal. If they do, we are prepared to oppose it.

You can read EFF’s full letter to A.G. Bonta here.

As India Prepares for Elections, Government Silences Critics on X with Executive Order

23 February 2024 at 06:55

It is troubling to see that the Indian government has issued new demands to X (formerly Twitter) to remove accounts and posts critical of the government and its recent actions. This especially bears watching as India prepares for general elections this spring and concerns grow about the government’s manipulation of social media critical of it.

On Wednesday, X’s Global Government Affairs account (@GlobalAffairs) tweeted:

The Indian government has issued executive orders requiring X to act on specific accounts and posts, subject to potential penalties including significant fines and imprisonment. 

In compliance with the orders, we will withhold these accounts and posts in India alone; however, we disagree with these actions and maintain that freedom of expression should extend to these posts.

Consistent with our position, a writ appeal challenging the Indian government's blocking orders remains pending. We have also provided the impacted users with notice of these actions in accordance with our policies.

Due to legal restrictions, we are unable to publish the executive orders, but we believe that making them public is essential for transparency. This lack of disclosure can lead to a lack of accountability and arbitrary decision-making.

India’s general elections are set to take place in April or May and will elect 543 members of the Lok Sabha, the lower house of the country’s parliament. Since February, farm unions in the country have been striking for floor pricing (also known as a minimum support price) for their crops. While protesters have attempted to march to Delhi from neighboring states, authorities have reportedly barricaded city borders, and two neighboring states ruled by the governing Bharatiya Janata Party (BJP) have deployed troops in order to stop the farmers from reaching the capital.

According to reports, the accounts locally withheld by X in response to the Indian government’s orders are critical of the BJP, while some accounts that were supporting or merely covering the farmers’ protests have also been withheld. Several account holders have identified themselves as being among those notified by X, while other users have identified many other accounts.

This isn’t the first time that the Indian government has gone after X users. In 2021, when the company—then called Twitter—was under different leadership, it suspended 500 accounts, then reversed its decision, citing freedom of speech, and later re-suspended the accounts, citing compliance with India’s Information Technology Act (ITA). And in 2023, the company withheld 120 accounts critical of the BJP and Prime Minister Narendra Modi.

This is exactly the type of censorship we feared when EFF previously criticized the ITA’s rules, enacted in 2021, that force online intermediaries to comply with strict removal time frames under government orders. The rules require online intermediaries like X to remove restricted posts within 36 hours of receiving notice. X can challenge the order—as they have indicated they intend to—but the posts will remain down until that challenge is fully adjudicated.

EFF is also currently fighting back against efforts related to an Indian court order that required Reuters news service to de-publish one of its articles while a legal challenge to it is considered by the courts. This type of interim censorship is unauthorized in most legal systems. Those involved in the case have falsely represented to others who wrote about the Reuters story that the order applied to them as well.

EFF to Court: Strike Down Age Estimation in California But Not Consumer Privacy

14 February 2024 at 18:44

The Electronic Frontier Foundation (EFF) called on the Ninth Circuit to rule that California’s Age Appropriate Design Code (AADC) violates the First Amendment, while not casting doubt on well-written data privacy laws. EFF filed an amicus brief in the case NetChoice v. Bonta, along with the Center for Democracy & Technology.

A lower court already ruled the law is likely unconstitutional. EFF agrees, but we asked the appeals court to chart a narrower path. EFF argued the AADC’s age estimation scheme and vague terms that describe amorphous “harmful content” render the entire law unconstitutional. But the lower court also incorrectly suggested that many foundational consumer privacy principles cannot pass First Amendment scrutiny. That is a mistake that EFF asked the Ninth Circuit to fix.

In late 2022, California passed the AADC with the goal of protecting children online. It has many data privacy provisions that EFF would like to see in a comprehensive federal privacy bill, like data minimization, strong limits on the processing of geolocation data, regulation of dark patterns, and enforcement of privacy policies.

Government should provide such privacy protections to all people. The protections in the AADC, however, are guaranteed only to children. And to offer those protections to children but not adults, technology companies are strongly incentivized to “estimate the age” of their entire user base—children and adults alike. While the method is not specified, techniques could include submitting a government ID or a biometric scan of your face. In addition, technology companies are required to assess their products to determine if they are designed to expose children to undefined “harmful content” and to determine what is in the undefined “best interest of children.”

In its brief, EFF argued that the AADC’s age estimation scheme raises the same problems as other age verification laws that have been almost universally struck down, often with help from EFF. The AADC burdens adults’ and children’s access to protected speech and frustrates all users’ right to speak anonymously online. In addition, EFF argued that the vague terms offer no clear standards, and thus give government officials too much discretion in deciding what conduct is forbidden, while incentivizing platforms to self-censor given uncertainty about what is allowed.

“Many internet users will be reluctant to provide personal information necessary to verify their ages, because of reasonable doubts regarding the security of the services, and the resulting threat of identity theft and fraud,” EFF wrote.

Because age estimation is essential to the AADC, the entire law should be struck down for that reason alone, without assessing the privacy provisions. EFF asked the court to take that narrow path.

If the court instead chooses to address the AADC’s privacy protections, EFF cautioned that many of the principles reflected in those provisions, when stripped of the unconstitutional censorship provisions and vague terms, could survive intermediate scrutiny. As EFF wrote:

“This Court should not follow the approach of the district court below. It narrowly focused on California’s interest in blocking minors from harmful content. But the government often has several substantial interests, as here: not just protection of information privacy, but also protection of free expression, information security, equal opportunity, and reduction of deceptive commercial speech. The privacy principles that inform AADC’s consumer data privacy provisions are narrowly tailored to these interests.”

EFF has a long history of supporting well-written privacy laws against First Amendment attacks. The AADC is not one of them. We have filed briefs supporting laws that protect video viewing history, biometric data, and other internet records. We have advocated for a federal law to protect reproductive health records. And we have written extensively on the need for a strong federal privacy law.

Anti-DEI Efforts Are the Latest Attack on Racial Equity and Free Speech

14 February 2024 at 16:23

First, Donald Trump and right-wing extremists attacked government trainings on racism and sexism. Then the far right tried to censor classroom instruction on racism and sexism. Next, they banned books about BIPOC and LGBTQ lives. Today, the extreme right’s latest attack is aimed at dismantling diversity, equity and inclusion (DEI) programs.

In 2023, the far right introduced bills to limit DEI in higher education in 25 states and the U.S. Congress. Eight bills became law. If this assault on our constitutional rights feels familiar, that’s because it is. It was last seen in 2020 when Trump-aligned politicians fought to pass unconstitutional laws aimed at censoring student and faculty speech about race, racism, sex and sexism. The ACLU challenged these laws in three states, but today, anti-DEI efforts are the new frontier in the fight to end the erasure of marginalized communities.

DEI programs recruit and retain BIPOC, LGBTQ+, and other underrepresented faculty and students to repair decades of discriminatory policies and practices that excluded them from higher education. The far right, however, claims that DEI programs universally promote undeserving people who only advance because they check a box. Anti-DEI activists like Christopher Rufo consistently frame their attack as a strike against “identity politics,” and have weaponized the term “DEI” to reference any ideas and policies they disagree with, especially those that address systemic racism or sexism.

This attack on DEI is part of a larger backlash against racial justice efforts that ignited after the 2020 killings of George Floyd, Ahmaud Arbery and Breonna Taylor. At the time, workplaces, schools, and other institutions announced plans to expand DEI efforts and to incorporate anti-racism principles in their communities. In response, far-right activists, led by Rufo and supported by right-wing think tanks such as The Manhattan Institute, The Claremont Institute, and The Heritage Foundation, went on the offensive.

Leveraging Fox News and other mainstream media outlets, Rufo and his supporters sought to manufacture hysteria around the inclusion of critical race theory in schools and workplaces. After a 2020 appearance on Fox News where Rufo misrepresented federal trainings on oppression, white privilege, and intersectionality as indoctrination of critical race theory in our public spaces, Rufo convinced former President Trump to end federal DEI training. Rufo’s goal was to limit discourse, instruction, and research that refuted the false assertion that racism is not real in America – and he succeeded. Just three weeks later, Trump issued Executive Order 13950, which banned federal trainings on systemic racism and sexism. This Executive Order served as the template for most of the educational gag orders: bills introduced in 40 states to limit instruction on systemic sexism and racism, 20 of which are now law.

The ACLU has consistently opposed efforts to censor classroom instruction on racism and sexism, including in Florida, where some of the most egregious attacks on DEI, critical race theory and inclusive education have been mounted. Following the far right’s “anti-wokeism” playbook, in April 2022, Florida Governor Ron DeSantis signed the Stop W.O.K.E. Act, which seeks to ban training or instruction on systemic racism and sexism in workplaces, K-12 schools, and higher education. The ACLU, the ACLU of Florida and our co-counsel challenged the law, claiming it violates the First and Fourteenth Amendments by imposing viewpoint-based restrictions on instructors and students in higher education, and fails to state explicitly and definitely what conduct is punishable. A federal judge has blocked it from being enforced in public universities across the state.

Instead of ceasing to censor free speech, the far right pivoted to target DEI programs. For example, Florida passed Senate Bill 266 in April 2023. This law would expand the Stop W.O.K.E. Act’s prohibition on training and instruction on racism and sexism, seeking to eliminate DEI programs and heavily restrict certain college majors related to DEI. Just last month, the Florida State Board of Education moved forward with regulations to limit the use of public funds for DEI efforts in Florida’s 28 state colleges. The State Board also replaced the Principles of Sociology course, which was previously required, with an American History course to avoid “radical woke ideologies.”

Led by the same far-right leaders, including Rufo and various think tanks, these anti-DEI efforts utilize the same methods as the attack on critical race theory. They represent yet another attempt to re-whitewash America’s history of racial subjugation, and to reverse efforts to pursue racial justice—or any progress at all. Anti-DEI rhetoric has been used to invalidate immunological research supporting the COVID-19 vaccine, conclusions by economists on mass migration, and even the January 6 insurrection. But these false claims are not what DEI is about. By definition, equity means leveling the playing field so qualified people from underrepresented backgrounds have a fair chance to succeed. We cannot let a loud fringe movement convince us otherwise.

In its attacks on DEI, the far right not only undermines racial justice efforts, but also violates our rights to free speech and free association. Today, the ACLU is determined to push back on anti-DEI efforts just as we fought efforts to censor instruction on systemic racism and sexism from schools.

When Florida Officials Tried to Silence Our Pro-Palestinian Student Group, We Sued

While studying abroad a couple of years ago, I heard first-hand accounts from Jordanian-Palestinian friends about the displacement their families, and families like theirs, experienced during the 1948 Nakba (Arabic for “catastrophe”). Moved by the painful memories they shared, I started researching student organizations advocating for Palestinian rights, and came across the Instagram of the University of Florida (UF) chapter of Students for Justice in Palestine (SJP). When I enrolled at UF a few months later, I immediately joined.

As a member of UF SJP, I was devastated when top Florida officials ordered public universities to deactivate all SJP chapters in the state, including ours. I remember being in shock when I read the order. Officials justified deactivating our chapter not because of anything our group had said or done—but because of our affiliation with the national chapter of Students for Justice in Palestine, a separate group. According to the order, certain views expressed in an advocacy toolkit the National SJP issued on October 7 violated Florida’s “material support of terrorism” law. But my student group was not even involved with the creation of that toolkit, which itself is protected by the First Amendment.

On October 8, our SJP chapter issued its own statement, saying that we “mourned the loss of innocent Palestinian and Israeli life,” and made clear that “the killing of any life is always undignified and heartbreaking.” Later, we issued another statement urging the University of Florida to condemn all violence, antisemitism, Islamophobia, Palestinian erasure, and anti-Palestinian sentiment.

Our chapter has students from a variety of religious, racial, and cultural backgrounds, including members who are Jewish, Palestinian, and Palestinian-American, who believe that speaking up for Palestine is speaking up for humanity. Reading the deactivation order, we felt like we had no choice but to sue to protect our First Amendment rights to free speech and free association. We know of multiple current and potential members of UF SJP who feared being punished and investigated. Our advocacy has suffered from having our state and university officials levy false accusations of “terrorism” against us. For months, we feared that at any moment the University could deny us access to critical school funds, resources, and facilities that are fundamental to the survival and operation of our organization.

Near the end of January, the ACLU, ACLU of Florida, and Palestine Legal went to court to defend our rights and, on January 31, a federal judge dismissed our lawsuit. The court found that, after issuing the order, Florida officials do not intend to deactivate our chapter. Although the court did not rule on our First Amendment claims, it’s a relief to know that the court concluded our chapter does not currently face deactivation.

As the judge acknowledged during the hearing on our case: “Words have consequences.” For months we have lived with fear and anxiety as a result of the order. I still carry a deep worry for my safety, for my loved ones’ safety, and the safety of any student who chooses to get involved in our SJP chapter. We hope that state officials learned their lesson when they walked back the deactivation order, and that the State University System Chancellor will now take his order down from his website.

I remember learning about my constitutional rights in seventh grade civics class, including how we were all entitled to free speech. The juxtaposition of what I grew up thinking college kids in the U.S. were allowed to do and say and what we went through last semester is really stark. I can’t overstate how deeply disappointing it was to see our state’s highest officials attempt to censor us. Their actions were contrary to everything I understood about how our democracy is supposed to work.

At a time when the number of Palestinians killed or injured in Gaza is rising exponentially each day, standing up for our right to speak out on the issue felt like a no-brainer. While this experience hasn’t been easy, we’re proud to have fought for our rights in court. We hope our case sets a precedent that students cannot be silenced.

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

7 February 2024 at 10:56

Security researchers’ work discovering and reporting vulnerabilities in software, firmware,  networks, and devices protects people, businesses and governments around the world from malware, theft of  critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant new expansive police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6, which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language, as well as a requirement that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems. This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

 For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

EFF and Access Now's Submission to U.N. Expert on Anti-LGBTQ+ Repression 

31 January 2024 at 10:06

For the report of the United Nations (U.N.) Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) to the U.N. Human Rights Council, EFF and Access Now have submitted information addressing digital rights and SOGI issues across the globe.

The submission addresses the trends, challenges, and problems that people and civil society organizations face based on their real and perceived sexual orientation, gender identity, and gender expression. Our examples underscore the extensive impact of such legislation on the LGBTQ+ community, and the urgent need for legislative reform at the domestic level.

Read the full submission here.

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

29 January 2024 at 12:42

Update: Delegates at the concluding negotiating session failed to reach consensus on human rights protections, government surveillance, and other key issues. The session was suspended Feb. 8 without a final draft text. Delegates will resume talks at a later date with a view to concluding their work and providing a draft convention to the UN General Assembly at its 78th session later this year.

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders, and without robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt) can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The scope of the entire treaty was too broad from the very beginning; Member States eventually dropped some of these offenses, limiting the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft, released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing explicit data protection and human rights standards like legitimate purpose, nondiscrimination, prior judicial authorization, necessity and proportionality apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

Save Your Twitter Account

By: Rory Mir
25 January 2024 at 19:02

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Amid reports that X—the site formerly known as Twitter—is dropping in value, hindering how people use the site, and engaging in controversial account removals, it has never been more precarious to rely on the site as a historical record. So, it’s important for individuals to act now and save what they can. While your tweets may feel ephemeral or inconsequential, they are part of a greater history in danger of being wiped out.

Any centralized communication platform, particularly one operated for profit, is vulnerable to being co-opted by the powerful. This might mean exploiting users to maximize short-term profits or changing moderation rules to silence marginalized people and promote hate speech. The past year has seen unprecedented numbers of users fleeing X, Reddit, and other platforms over changes in policy.

But leaving these platforms, whether in protest, disgust, or boredom, leaves behind an important digital record of how communities come together and grow.

Archiving tweets isn’t just for Dril and former presidents. In its heyday, Twitter was an essential platform for activists, organizers, journalists, and other everyday people around the world to speak truth to power and fight for social justice. Its importance for movements and community building was noted by oppressive governments around the world, forcing the site to ward off data requests and authoritarian speech suppression.

A prominent example in the U.S. is the movement for Black Lives, where activists built momentum on the site and found effective strategies to bring global attention to their protests. Already though, #BlackLivesMatter tweets from 2014 are vanishing from X, and the site seems to be blocking and disabling  tools from archivists preserving this history.

In documenting social movements we must remember social media is not an archive, and platforms will only store (and gatekeep) user work insofar as it’s profitable, just as they only make it accessible to the public when it is profitable to do so. But when platforms fail, with them goes the history of everyday voices speaking to power, the very voices organizations like EFF fought to protect. The voice of power, in contrast, remains well documented.

In the battleground of history, archival work is cultural defense. Luckily, digital media can be quickly and cheaply duplicated and shared. In just a few minutes of your time, the following easy steps will help preserve not just your history, but the history of your community and the voices you supported.

1. Request Your Archive

Despite the many new restrictions on Twitter access, the site still allows users to backup their entire profile in just a few clicks.

  • First, in your browser or the X app, navigate to Settings. This will look like three dots, and may say "More" on the sidebar.

  • Select Settings and Privacy, then Your Account, if it is not already open.

  • Here, click Download an archive of your data

  • You'll be prompted to sign into your account again, and X will need to send a verification code to your email or text message. Verifying with email may be more reliable, particularly for users outside of the US.

  • Select Request archive

  • Finally—wait. This process can take a few days. Once it is complete, you will get an email saying that your archive is ready. Follow that link while logged in and download the ZIP files.

2. Optionally, Share with a Library or Archive.

There are many libraries, archives, and community groups who would be interested in preserving these archives. You may want to reach out to a librarian to help find one curating a collection specific to your community.

You can also request that your archive be preserved by the Internet Archive's Wayback Machine. The following steps are specific to the Internet Archive; we recommend using a desktop computer or laptop, rather than a mobile device.

  • Unpack the ZIP file you downloaded in the previous section.
  • In the Data folder, select the tweets.js file. This is a JSON file with just your tweets. JSON files are difficult to read, but you can convert the file to a CSV and view it in a spreadsheet program like Excel or, as a free alternative, LibreOffice Calc (one way to do this conversion locally is sketched after this list).
  • With your account and tweets.js file ready, go to Save Page Now's Google Sheet Interface and select "Archive all your Tweets with the Wayback Machine.”

  • Fill in your Twitter handle, select your "tweets.js" file from Step 2 and click "Upload."

  • After some processing, you will be able to download the CSV file.
  • Import this CSV to a new Google Sheet. All of this information is already public on Twitter, but if you notice very sensitive content, you can remove those lines. Otherwise, it is best to leave the information untouched.
  • Then, use Save Page Now's Google Sheet Interface again to archive from the sheet made in the previous step.
  • It may take hours or days for this request to fully process, but once it is complete you will get an email with the results.
  • Finally, The Wayback Machine will give you the option to also preserve all of your outlinks as well. This is a way to archive all the website URLs you shared on Twitter. This is an easy way to further preserve the messages you've promoted over the years.
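
If you prefer to convert tweets.js to a CSV locally rather than through the Google Sheet interface, the short Python sketch below is one way to do it. It assumes the archive layout in use at the time of writing: tweets.js starts with a JavaScript assignment like window.YTD.tweets.part0 = [...] and wraps each tweet in a "tweet" key with fields such as id_str, created_at, and full_text. Treat those names as assumptions and adjust them if your archive differs.

    import csv
    import json

    # Sketch: convert a Twitter archive's tweets.js into a CSV.
    # Assumes the file begins with "window.YTD.tweets.part0 = [...]".
    with open("tweets.js", encoding="utf-8") as f:
        raw = f.read()

    # Drop the JavaScript assignment prefix, leaving a plain JSON array.
    tweets = json.loads(raw[raw.index("["):])

    with open("tweets.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["id", "created_at", "text"])
        for entry in tweets:
            # Entries usually wrap each tweet in a "tweet" key; fall back
            # to the entry itself if a given file does not.
            tweet = entry.get("tweet", entry)
            writer.writerow([
                tweet.get("id_str", ""),
                tweet.get("created_at", ""),
                tweet.get("full_text", ""),
            ])

The resulting tweets.csv opens in any spreadsheet program, and you can spot-check it against the Google Sheet produced in the steps above.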

3. Personal Backup Plan

Now that you have a ZIP file with all of your Twitter data, including public and private information, you may want to have a security plan on how to handle this information. This plan will differ for everyone, but these are a few steps to consider.

If you only wish to preserve the public information, and you have already successfully shared it with an archive, you can delete your local copy. For anything you would like to keep but that may be sensitive, you may want to use a tool to encrypt the file and keep it on a secure device; one hedged example follows.
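
As an illustration only, the Python sketch below encrypts the archive with a symmetric key using the third-party cryptography package (installed with pip install cryptography). The filenames are hypothetical, and the key file must be stored somewhere safe and separate from the encrypted archive, since anyone holding the key can decrypt the file and losing it means losing the archive.

    from cryptography.fernet import Fernet

    # Generate a random symmetric key and save it somewhere safe,
    # separate from the archive (e.g., a password manager).
    key = Fernet.generate_key()
    with open("archive.key", "wb") as f:
        f.write(key)

    # Encrypt the downloaded ZIP. This reads the whole file into
    # memory, which is fine for typical archive sizes.
    with open("twitter-archive.zip", "rb") as f:
        token = Fernet(key).encrypt(f.read())

    with open("twitter-archive.zip.enc", "wb") as f:
        f.write(token)

    # To restore later, reverse the process:
    #   Fernet(key).decrypt(token)

Full-disk encryption or an encrypted disk image would serve the same purpose; the point is that the key or passphrase lives apart from the backup itself.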

Finally, even if this information is not sensitive, you'll want to be sure you have a solid backup plan. If you are still using Twitter, this means deciding on a schedule to repeat this process so your archive is up to date. Otherwise, you'll want to keep a few copies of the file across several devices. If you already have a plan for backing up your PC, this may not be necessary.

4. Closing Your Account

Finally, you'll want to consider what to do with your current Twitter account now that all your data is backed up and secure.

(If you are planning on leaving X, make sure to follow EFF on Mastodon, Bluesky or another platform.)

Since you have a backup, it may be a good idea to request data be deleted on the site. You can try to delete just the most sensitive information, like your account DMs, but there's no guarantee Twitter will honor these requests—or that it's even capable of honoring such requests. Even EU citizens covered by the GDPR will need to request the deletion of their entire account.

If you aren’t concerned about Twitter keeping this information, however, there is some value in keeping your old account up. Holding the username can prevent impersonators, and listing your new social media account will help people on the site find you elsewhere. In our guide for joining Mastodon we recommended sharing your new account in several places. However, adding the new account to your Twitter name will have the best visibility across search engines, screenshots, and alternative front ends like Nitter.

The PRESS Act Will Protect Journalists When They Need It Most

22 January 2024 at 14:45

Our government shouldn’t be spying on journalists. Nor should law enforcement agencies force journalists to identify their confidential sources or go to prison. 

To fix this, we need to change the law. Now, we’ve got our best chance in years. The House of Representatives has passed the Protect Reporters from Exploitive State Spying (PRESS) Act, H.R. 4250, and it’s one of the strongest federal shield bills for journalists we’ve seen. 

Take Action

Tell Congress To Pass the PRESS Act Now

The PRESS Act would do two critical things: first, it would bar federal law enforcement from surveilling journalists by gathering their phone, messaging, or email records. Second, it would strictly limit when the government can force a journalist to disclose their sources.

Since its introduction, the bill has had strong bipartisan support. And such “shield” laws for reporters have vast support across the U.S., with 49 states and the District of Columbia all having some type of law that prevents journalists from being forced to hand over their files to assist in criminal prosecutions, or even private lawsuits. 

While journalists are well protected in many states, federal law is currently lacking in protections. That’s had serious consequences for journalists, and for all Americans’ right to freely access information. 

Multiple Presidential Administrations Have Abused Laws To Spy On Journalists

The Congressional report on this bill details abuses against journalists by each of the past three presidential administrations. Federal law enforcement officials have improperly acquired reporters’ phone records on numerous occasions since 2004, under both Democratic and Republican administrations.

On at least 12 occasions since 1990, law enforcement threatened journalists with jail or home confinement for refusing to give up their sources; some reporters served months in jail. 

Elected officials must do more about these abuses than preside over after-the-fact apologies. 

PRESS Act Protections

The PRESS Act bars the federal government from surveilling journalists through their phones, email providers, or other online services. These digital protections are critical because they reflect how journalists operate in the field today. The bill restricts subpoenas aimed not just at journalists themselves, but also at their phone and email providers. Its exceptions are narrow and targeted. 

The PRESS Act also has an appropriately broad definition of the practice of journalism, covering both professional and citizen journalists. It applies regardless of a journalist’s political leanings or medium of publication. 

The government surveillance of journalists over the years has chilled journalists’ ability to gather news. It’s also likely discouraged sources from coming forward, because their anonymity isn’t guaranteed. We can’t know the important stories that weren’t published, or weren’t published in time, because of fear of retaliation on the part of journalists or their sources. 

In addition to EFF, the PRESS Act is supported by a wide range of press and rights groups, including the ACLU, the Committee to Protect Journalists, the Freedom of the Press Foundation, the First Amendment Coalition, the News Media Alliance, the Reporters Committee for Freedom of the Press, and many others. 

Our democracy relies on the rights of both professional journalists and everyday citizens to gather and publish information. The PRESS Act is a long overdue protection. We have sent Congress a clear message to pass it; please join us by sending your own email to the Senate using our links below. 

Take Action

Tell Congress To Pass the PRESS Act Now

The No AI Fraud Act Creates Way More Problems Than It Solves

19 January 2024 at 18:27

Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a “right of publicity,” meaning the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense (you should be able to prevent a company from running an advertisement that falsely claims you endorse its products), but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn’t vary depending on what state you live in. They’d also like to be able to go after the companies that offer generative AI tools and/or host AI-generated “deceptive” content. Ordinary liability rules, including copyright, can’t be used against a company that has simply provided a tool for others’ expression. After all, we don’t hold Adobe liable when someone uses Photoshop to suggest that a president can’t read or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that’s a feature, not a bug; immunity means it’s easier to stick up for users’ speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It’s a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, a bill that is supposed to protect performers in the long run would instead make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad range of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category: from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involves recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to learn he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone, including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of “likenesses” this law covers. Keep in mind that many of these services won’t be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment; that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, necessary for a “primary expressive purpose,” and harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn’t. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voices, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.

EFF’s 2024 In/Out List

19 January 2024 at 09:46

Since EFF was formed in 1990, we’ve been working hard to protect digital rights for all. And as each year passes, we’ve come to understand the challenges and opportunities a little better, as well as what we’re not willing to accept. 

Accordingly, here’s what we’d like to see a lot more of, and a lot less of, in 2024.


IN

1. Affordable and future-proof internet access for all

EFF has long advocated for affordable, accessible, and future-proof internet access for all. We cannot accept a future where the quality of our internet access is determined by geographic, socioeconomic, or otherwise divided lines. As the online aspects of our work, health, education, entertainment, and social lives increase, EFF will continue to fight for a future where the speed of your internet connection doesn’t stand in the way of these crucial parts of life.

2. A privacy first agenda to prevent mass collection of our personal information

Many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Vast numbers of companies collect data about who we are, where we go, what we do, what we read, who we communicate with, and so on. They use our data in thousands of ways and often sell it to anyone who wants it—including law enforcement. So whatever online harms we want to alleviate, we can do it better, with a broader impact, if we do privacy first.

3. Decentralized social media platforms to ensure full user control over what we see online

While the internet began as a loose affiliation of universities and government bodies, the digital commons has been privatized and consolidated into a handful of walled gardens. But in the past few years, there's been an accelerating swing back toward decentralization as users are fed up with the concentration of power, and the prevalence of privacy and free expression violations. So, many people are fleeing to smaller, independently operated projects. We will continue walking users through decentralized services in 2024.

4. End-to-end encrypted messaging services, turned on by default and available always

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. But governments across the world are trying to erode this by scanning for all content all the time. As we’ve said many times, there is no middle ground to content scanning, and no “safe backdoor” if the internet is to remain free and private. Mass scanning of peoples’ messages is wrong, and at odds with human rights. 

5. The right to free expression online with minimal barriers and without borders

New technologies and widespread internet access have radically enhanced our ability to express ourselves, criticize those in power, gather and report the news, and make, adapt, and share creative works. Vulnerable communities have also found space to safely meet, grow, and make themselves heard without being drowned out by the powerful. No government or corporation should have the power to decide who gets to speak and who doesn’t. 

OUT

1. Use of artificial intelligence and automated systems for policing and surveillance

Predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and quite simply don’t work. EFF has long called for a ban on predictive policing, and we’ll continue to monitor the rapid rise of law enforcement’s use of machine learning. This includes harvesting the data that other “autonomous” devices collect and automating important decision-making processes that guide policing and dictate people’s futures in the criminal justice system.

2. Ad surveillance based on the tracking of our online behaviors 

Our phones and other devices process vast amounts of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. This often impacts marginalized communities the most. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights. 

3. Speech and privacy restrictions under the guise of "protecting the children"

For years, government officials have raised concerns that online services don’t do enough to tackle illegal content, particularly child sexual abuse material. Their solution? Bills that ostensibly seek to make the internet safer, but instead achieve the exact opposite by requiring websites and apps to proactively prevent harmful content from appearing on messaging services. This leads to the universal scanning of all user content, all the time, and functions as a 21st-century form of prior restraint—violating the very essence of free speech.

4. Unchecked cross-border data sharing disguised as cybercrime protections 

Personal data must be safeguarded against exploitation by any government to prevent abuse of power and transnational repression. Yet, the broad scope of the proposed UN Cybercrime Treaty could be exploited for covert surveillance of human rights defenders, journalists, and security researchers. As the Treaty negotiations approach their conclusion, we are advocating against granting broad cross-border surveillance powers for investigating any alleged crime, ensuring it doesn't empower regimes to surveil individuals in countries where criticizing the government or other speech-related activities are wrongfully deemed criminal.

5. Internet access being used as a bargaining chip in conflicts and geopolitical battles

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power in cutting off that access. The internet enables the flow of information to remain active and alert to new realities. In wartime, being able to communicate may ultimately mean the difference between life and death. Shutting down access aids state violence and suppresses free speech. Access to the internet shouldn't be used as a bargaining chip in geopolitical battles.

UAE Confirms Trial Against 84 Detainees; Ahmed Mansoor Suspected Among Them

10 January 2024 at 05:51

The UAE confirmed this week that it has placed 84 detainees on trial, on charges of “establishing another secret organization for the purpose of committing acts of violence and terrorism on state territory.” Suspected to be among those facing trial is award-winning human rights defender Ahmed Mansoor, also known as “the million dollar dissident,” as he was once the target of exploits that exposed major security flaws in Apple’s iOS operating system—the kind of “zero-day” vulnerabilities that fetch seven figures on the exploit market. Mansoor drew the ire of UAE authorities for criticizing the country’s internet censorship and surveillance apparatus and for calling for a free press and democratic freedoms in the country.

Having previously been arrested in 2011 and sentenced to three years’ imprisonment for “insulting officials,” Ahmed Mansoor was released after eight months due to a presidential pardon influenced by international pressure. Later, Mansoor faced new speech-related charges for using social media to “publish false information that harms national unity.” During this period, authorities held him in an unknown location for over a year, deprived of legal representation, before convicting him again in May 2018 and sentencing him to ten years in prison under the UAE’s draconian cybercrime law. We have long advocated for his release, and are joined in doing so by hundreds of digital and human rights organizations around the world.

At the recent COP28 climate talks, Human Rights Watch, Amnesty International, and other activists conducted a protest inside the UN-protected “blue zone” to raise awareness of Mansoor’s plight, as well as the cases of UAE detainee Mohamed El-Siddiq and Egyptian-British activist Alaa Abd El Fattah. At the same time, a dissident group reported that the UAE was proceeding with the trial against 84 of its detainees.

We reiterate our call for Ahmed Mansoor’s freedom, and take this opportunity to raise further awareness of the oppressive nature of the legislation that was used to imprison him. The UAE’s use of its criminal law to silence those who speak truth to power is another example of how counter-terrorism laws restrict free expression and justify disproportionate state surveillance. This concern is not hypothetical; a 2023 study by the Special Rapporteur on counter-terrorism found widespread and systematic abuse of civil society and civic space through the use of similar laws supposedly designed to counter terrorism. Moreover, and problematically, references “related to terrorism” in the treaty preamble are still included in the latest version of a proposed United Nations Cybercrime Treaty, currently being negotiated with more than 190 member states, even though there is no agreed-upon definition of terrorism in international law. If approved as currently written, the UN Cybercrime Treaty could substantively reshape international criminal law and bolster cross-border police surveillance powers to access and share users’ data, implicating the human rights of billions of people worldwide. It could also enable states to justify repressive measures that overly restrict free expression and peaceful dissent.

EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers’ Privacy and Free Expression

4 January 2024 at 13:41

As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone’s viewing history, EFF argued in court last month.

For decades, the Video Privacy Protection Act (VPPA) has safeguarded people’s viewing habits by generally requiring services that offer videos to the public to get their customers’ written consent before disclosing that information to the government or a private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users’ viewing habits, too.

The VPPA, however, is under attack by Patreon. That service for content creators and viewers is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook.

Patreon argues that even if it did violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law’s applications violate the First Amendment, judged in relation to the law’s plainly legitimate sweep.  Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of the Communications Decency Act’s (CDA) online speech restrictions because it placed an “unacceptably heavy burden on protected speech.”

EFF is second to none in fighting for everyone’s First Amendment rights in court, including those of internet users (in Reno, mentioned above) and of the companies that host our speech online. But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of internet users who benefit from the VPPA’s protections.

As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon’s argument is wrong because the VPPA directly advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance.

“The VPPA provides Americans with critical, private space to view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion,” we wrote. “That breathing room is often a catalyst for people’s free expression.”

As the brief recounts, courts have protected against government efforts to learn people’s book buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people’s ability to privately consume media advances the First Amendment’s purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people’s video viewing habits are intensely private, because the data can reveal intimate details about our personalities, politics, religious beliefs, and values.

Patreon’s First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, “[t]he VPPA’s purpose, application, and enforcement is overwhelmingly focused on regulating the disclosure of a person’s video viewing history in the course of a commercial transaction between the provider and user.” In other words, the legitimate sweep of the VPPA does not violate the First Amendment because generally there is no public interest in disclosing any one person’s video viewing habits that a company learns purely because it is in the business of selling video access to the public.

There is a better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF’s brief explains, should a video provider face liability under the VPPA for disclosing a customer’s video viewing history, they can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern.

Indeed, courts have recognized that certain applications of privacy laws, such as the Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But courts generally address the problem by invalidating the case-specific application of those laws, rather than invalidating them entirely.

“In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course,” EFF wrote. “This approach accommodates the broad and legitimate sweep of those privacy protections while vindicating speakers’ First Amendment rights.”

Patreon's argument would see the VPPA gutted—an enormous loss for privacy and free expression for the public. The court should protect against the disclosure of everyone’s viewing history and protect the VPPA.

You can read our brief here.

Digital Rights for LGBTQ+ People: 2023 Year in Review

1 January 2024 at 08:16

An increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and offline across the globe. Throughout 2023, several countries sought to pass explicitly anti-LGBTQ+ initiatives restricting freedom of expression and privacy. This fuels offline intolerance against LGBTQ+ people, and forces them to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

One growing threat to LGBTQ+ people is data surveillance. Across the U.S., a growing number of states prohibited transgender youths from obtaining gender-affirming health care, and some restricted access for transgender adults. For example, the Texas Attorney General is investigating a hospital for providing gender-affirming health care to transgender youths. We can expect anti-trans investigators to use the tactics of anti-abortion investigators, including seizure of internet browsing history and private messages.

It is imperative that businesses are prevented from collecting and retaining this data in the first place, so that it cannot later be seized by police and used as evidence. Legislators should start with Rep. Jacobs’ My Body, My Data bill. We also need new laws to ban reverse warrants, which police can use to identify every person who searched for the keywords “how do I get gender-affirming care,” or who was physically located near a trans health clinic. 

Moreover, LGBTQ+ expression was targeted by U.S. student monitoring tools like GoGuardian, Gaggle, and Bark. The tools scan web pages and documents in students’ cloud drives for keywords about topics like sex and drugs, which are subsequently blocked or flagged for review by school administrators. Numerous reports show regular flagging of LGBTQ+ content. This creates a harmful atmosphere for students; for example, some have been outed because of it. In a positive move, Gaggle recently removed LGBTQ+ terms from its keyword list, and GoGuardian has done the same. But LGBTQ+ resources are still commonly flagged for containing words like "sex," "breasts," or "vagina." Student monitoring tools must remove from their blocking and flagging lists all terms that trigger scrutiny and erasure of sexual and gender identity. 
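To see why keyword matching inevitably sweeps in legitimate material, consider a minimal sketch of the approach; the real products' logic is proprietary, so this is an illustrative assumption, not their actual code.

```python
# A simplified illustration of naive keyword flagging. This is NOT the
# actual (proprietary) logic of GoGuardian, Gaggle, or Bark.
FLAGGED_KEYWORDS = {"sex", "breasts", "vagina"}

def flag_document(text: str) -> set:
    """Return any flagged keywords that appear in a document."""
    words = {word.strip(".,;:!?\"'()").lower() for word in text.split()}
    return FLAGGED_KEYWORDS & words

# A legitimate health resource trips the filter just as readily as
# anything the filter was meant to catch:
resource = ("Sex education for LGBTQ+ teens: understanding your body, "
            "breasts, and reproductive health.")
print(flag_document(resource))  # prints {'sex', 'breasts'} (set order may vary)
```

Because the filter has no notion of context, a school nurse's handout and genuinely explicit content look identical to it, which is exactly the over-blocking problem the reports describe.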

Looking outside the U.S., LGBTQ+ rights were gravely threatened by expansive cybercrime and surveillance legislation in the Middle East and North Africa throughout 2023. For example, the Cybercrime Law of 2023 in Jordan, introduced as part of King Abdullah II’s modernization reforms, will negatively impact LGBTQ+ people by restricting encryption and anonymity in digital communications, and criminalizing free speech through overly broad and vaguely defined terms. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. 

For many countries across Africa, and indeed the world, anti-LGBTQ+ discourses and laws can be traced back to colonial rule. These laws have been used to imprison, harass, and intimidate LGBTQ+ individuals. In May 2023, Ugandan President Yoweri Museveni signed into law the extremely harsh Anti-Homosexuality Act 2023. It imposes, for example, a 20-year sentence for the vaguely worded offense of “promoting” homosexuality. Such laws are not only an assault on the rights of LGBTQ+ people to exist, but also a grave threat to freedom of expression. They lead to more censorship and surveillance of online LGBTQ+ speech, the latter of which will lead to more self-censorship, too.

Ghana’s draft Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill 2021 goes much further. It threatens up to five years in jail for anyone who publicly identifies as LGBTQ+ or “any sexual or gender identity that is contrary to the binary categories of male and female.” The bill assigns criminal penalties for speech posted online, and threatens online platforms—specifically naming Twitter, Facebook, and Instagram—with criminal penalties if they do not restrict pro-LGBTQ+ content. If the bill passes, Ghanaian authorities could also probe visa applicants’ social media accounts for pro-LGBTQ+ speech or create lists of pro-LGBTQ+ supporters to be arrested upon entry. EFF this year joined other human rights groups to oppose this law.

Taking inspiration from Uganda and Ghana, a new proposed law in Kenya—the Family Protection Bill 2023—would impose ten years imprisonment for homosexuality, and life imprisonment for “aggravated homosexuality.” The bill also allows for the expulsion of refugees and asylum seekers who breach the law, irrespective of whether the conduct is connected with asylum requests. Kenya today is the sole country in East Africa to accept LGBTQ+ individuals seeking refuge and asylum without questioning their sexual orientation; sadly, that may change. EFF has called on the authorities in Kenya and Ghana to reject their respective repulsive bills, and for authorities in Uganda to repeal the Anti-Homosexuality Act.

2023 was a challenging year for the digital rights of LGBTQ+ people. But we are optimistic that in the year to come, LGBTQ+ people and their allies, working together online and off, will make strides against censorship, surveillance, and discrimination.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

States Attack Young People’s Constitutional Right to Use Social Media: 2023 Year in Review

30 December 2023 at 10:58

Legislatures in more than half of the country targeted young people’s use of social media this year, with many of the proposals blocking adults’ ability to access the same sites. State representatives introduced dozens of bills that would limit young people’s use of some of the most popular sites and apps, either by requiring the companies to introduce or amend their features or data usage for young users, or by forcing those users to get permission from parents, and in some cases, share their passwords, before they can log on. Courts blocked several of these laws for violating the First Amendment—though some may go into effect later this year. 

How did we get to a point where state lawmakers are willing to censor large parts of the internet? In many ways, California’s Age Appropriate Design Code Act (AADC), passed in September of 2022, set the stage for this year’s battle. EFF asked Governor Newsom to veto that bill before it was signed into law, despite its good intentions in seeking to protect the privacy and well-being of children. Like many of the bills that followed it this year, it runs the risk of imposing surveillance requirements and content restrictions on a broader audience than intended. A federal court blocked the AADC earlier this year, and California has appealed that decision.

Fourteen months after California passed the AADC, it feels like a dam has broken: we’ve seen dangerous social media regulations for young people introduced across the country, and passed in several states, including Utah, Arkansas, and Texas. The severity and individual components of these regulations vary. Like California’s, many of these bills would introduce age verification requirements, forcing sites to identify all of their users, harming both minors’ and adults’ ability to access information online. We oppose age verification requirements, which are the wrong approach to protecting young people online. No one should have to hand over their driver’s license, or, worse, provide biometric information, just to access lawful speech on websites.

A Closer Look at State Social Media Laws Passed in 2023

Utah enacted the first child social media regulation this year, S.B. 152, in March. The law prohibits social media companies from providing accounts to a Utah minor, unless they have the express consent of a parent or guardian. We requested that Utah’s governor veto the bill.

We identified at least four reasons to oppose the law, many of which apply to other states’ social media regulations. First, young people have a First Amendment right to information that the law infringes upon. With S.B. 152 in effect, the majority of young Utahns will find themselves effectively locked out of much of the web absent their parents’ permission. Second, the law dangerously requires parental surveillance of young people’s accounts, harming their privacy and free speech. Third, the law endangers the privacy of all Utah users, as it requires many sites to collect and analyze private information, like government-issued identification, for every user, to verify ages. And fourth, the law interferes with the broader public’s First Amendment right to receive information by requiring that all users in Utah tie their accounts to their age, and ultimately, their identity, and will lead to fewer people expressing themselves, or seeking information online. 

The law passed despite these problems, as did Utah’s H.B. 311, which creates liability for social media companies should they, in the view of Utah lawmakers, create services that are addictive to minors. H.B. 311 is unconstitutional because it imposes a vague and unscientific standard for what might constitute social media addiction, potentially creating liability for core features of a service, such as letting you know that someone responded to your post. Both S.B. 152 and H.B. 311 are scheduled to take effect in March 2024.

Arkansas passed a similar law to Utah's S.B. 152 in April, which requires users of social media to prove their age or obtain parental permission to create social media accounts. A federal court blocked the Arkansas law in September, ruling that the age-verification provisions violated the First Amendment because they burdened everyone's ability to access lawful speech online. EFF joined the ACLU in a friend-of-the-court brief arguing that the statute was unconstitutional.

Texas, in June, passed a regulation similar to the Arkansas law, which would ban anyone under 18 from having a social media account unless they receive consent from parents or guardians. The law is scheduled to take effect in September 2024.

Given the strong constitutional protections for people, including children, to access information without having to identify themselves, federal courts have blocked the laws in Arkansas and California. The Utah and Texas laws are likely to suffer the same fate. EFF has warned that such laws were bad policy and would not withstand court challenges, in large part because applying online regulations specifically to young people often forces sites to use age verification, which comes with a host of problems, legal and otherwise. 

To that end, we spent much of this year explaining to legislators that comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harms they do to children. For an even more detailed account of our suggestions, see Privacy First: A Better Way to Address Online Harms. In short, comprehensive data privacy legislation would address the massive collection and processing of personal data that is the root cause of many problems online, and it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

Of course, states were not alone in their attempts to regulate social media for young people. Our Year in Review post on similar federal legislation introduced this year covers that fight, which has so far been successful. Our post on the UK’s Online Safety Act describes the battle across the pond. 2024 is shaping up to be a year of court battles that may determine the future of young people’s ability to speak out and obtain information online. We’ll be there, continuing to fight against misguided laws that do little to protect kids while doing much to invade everyone’s privacy and speech rights.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Kids Online Safety Shouldn’t Require Massive Online Censorship and Surveillance: 2023 Year in Review

28 December 2023 at 11:25

There’s been plenty of bad news regarding federal legislation in 2023. For starters, Congress has failed to pass meaningful comprehensive data privacy reforms. Instead, legislators have spent an enormous amount of energy pushing dangerous legislation that’s intended to limit young people’s use of some of the most popular sites and apps, all under the guise of protecting kids. Unfortunately, many of these bills would run roughshod over the rights of young people and adults in the process. We spent much of the year fighting these dangerous “child safety” bills, while also pointing out to legislators that comprehensive data privacy legislation would be more likely to pass constitutional muster and address many of the issues that these child safety bills focus on. 

But there’s also good news: so far, none of these dangerous bills have been passed at the federal level, or signed into law. That's thanks to a large coalition of digital rights groups and other organizations pushing back, as well as tens of thousands of individuals demanding protections for online rights in the many bills put forward.

Kids Online Safety Act Returns

The biggest danger has come from the Kids Online Safety Act (KOSA). Originally introduced in 2022, it was reintroduced this year and amended several times, and as of today, has 46 co-sponsors in the Senate. As soon as it was reintroduced, we fought back, because KOSA is fundamentally a censorship bill. The heart of the bill is a “Duty of Care” that the government will force on a huge number of websites, apps, social networks, messaging forums, and online video games. KOSA will compel even the smallest online forums to take action against content that politicians believe will cause minors “anxiety” or “depression,” or encourage substance abuse, among other behaviors. Of course, almost any content could easily fit into these categories—in particular, truthful news about what’s going on in the world, including wars, gun violence, and climate change. Kids don’t need to fall into a wormhole of internet content to get anxious; they could see a newspaper on the breakfast table. 

KOSA will empower every state’s attorney general as well as the Federal Trade Commission (FTC) to file lawsuits against websites or apps that the government believes are failing to “prevent or mitigate” the list of bad things that could influence kids online. Platforms affected by KOSA would find it impossible to filter out this type of “harmful” content, though they would certainly try. Online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave, and institute age verification tools to ensure that it happens. Age verification systems are surveillance systems that threaten everyone’s privacy. Mandatory age verification, and with it, mandatory identity verification, is the wrong approach to protecting young people online.

The Senate passed amendments to KOSA later in the year, but these do not resolve its issues. As an example, liability under the law was shifted to be triggered only by content that online services recommend to users under 18, rather than content that minors specifically search for. In practice, that means platforms could not proactively show young users content that could be “harmful,” but could still serve them that same content in response to a search. How this would play out in practice is unclear; search results are recommendations, and future recommendations are influenced by previous searches. But however it’s interpreted, it’s still censorship—and it fundamentally misunderstands how search works online. Ultimately, no amendment will change the basic fact that KOSA’s duty of care turns what is meant to be a bill about child safety into a censorship bill that will harm the rights of both adult and minor users. 

Fortunately, so many people oppose KOSA that it never made it to the Senate floor for a full vote. In fact, even many of the young people it is intended to help are vehemently against it. We will continue to oppose it in the new year, and urge you to contact your congressperson about it today. 

Most KOSA Alternatives Aren’t Much Better

KOSA wasn’t the only child safety bill Congress put forward this year. The Protecting Kids on Social Media Act would combine some of the worst elements of other social media bills aimed at “protecting the children” into a single law. It includes elements of KOSA as well as several ideas pulled from state bills that passed this year, such as Utah’s surveillance-heavy Social Media Regulations law. 

When originally introduced, the Protecting Kids on Social Media Act had five major components: 

  • A mandate that social media companies verify the ages of all account holders, including adults 
  • A ban on children under age 13 using social media at all
  • A mandate that social media companies obtain parent or guardian consent before minors over 12 years old and under 18 years old may use social media
  • A ban on the data of minors (anyone over 12 years old and under 18 years old) being used to inform a social media platform’s content recommendation algorithm
  • The creation of a digital ID pilot program, instituted by the Department of Commerce, for citizens and legal residents, to verify ages and parent/guardian-minor relationships

EFF is opposed to all of these components, and has written extensively about why age verification mandates and parental consent requirements are generally dangerous and likely unconstitutional. 

In response to criticisms, senators updated the bill to remove some of the most flagrantly unconstitutional provisions: it no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media.  

Still, it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition. It would still prohibit children under 13 from using any ad-based social media, despite the vast majority of content on social media being lawful speech fully protected by the First Amendment. If enacted, the bill would suffer a similar fate to a California law aimed at restricting minors’ access to violent video games, which was struck down in 2011 for violating the First Amendment. 

What’s Next

One silver lining to this fight is that it has activated young people. The threat of KOSA, as well as several similar state-level bills that did pass, has made it clear that young people may be the biggest target for online censorship and surveillance, but they are also among the strongest forces fighting back against both. 

The authors of these bills have good, laudable intentions. But laws that would force platforms to determine the age of their users are privacy-invasive, and laws that restrict speech—even if only for those who can’t prove they are above a certain age—are censorship laws. We expect that KOSA, at least, will return in one form or another. We will be ready when it does.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.
