
EFF Urges Supreme Court to Reject Texas’ Speech-Chilling Age Verification Law

21 May 2024 at 18:01

A Texas age verification law will rob people of anonymity online, chill access to speech for privacy- and security-minded internet users, and entirely block some adults from accessing constitutionally protected online content, EFF argued in a brief filed with the Supreme Court last week.

EFF joined the Woodhull Freedom Foundation in filing a friend-of-the-court brief urging the U.S. Supreme Court to grant review of—and ultimately overturn—the Fifth Circuit’s decision upholding the Texas law.

Last year, the state of Texas passed HB 1181 in a misguided attempt to shield minors from certain online content. The law requires all Texas internet users, including adults, to complete invasive “age verification” procedures on every website the state deems to be at least one-third composed of sexual material. Under the law, adult users must upload sensitive personal records—such as a driver’s license or other photo ID—to access any content on these sites, including non-explicit content. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect.

The Fifth Circuit’s decision disregards important constitutional principles. The First Amendment protects our right to access protected online speech without substantial government interference. For adults, this is true even if that speech constitutes sexual or explicit content. The government cannot burden adult internet users and force them to sacrifice their anonymity, privacy, and security simply to access lawful speech.

EFF’s position is hardly unique. Courts have repeatedly and consistently held similar age verification laws to be unconstitutional due to these and other harms. As EFF noted in its brief, the Fifth Circuit’s decision is an anomaly and has created a split among federal circuit courts. 

In coming to its decision, the Fifth Circuit relied largely on a single Supreme Court case from 1968, involving a law that required an in-person ID check to buy magazines featuring adult content. But online age verification is nothing like flashing an ID card in person to buy a particular physical item.

For one, HB 1181 blocks access to entire websites, not just individual offending magazines. This could include many common, general-purpose websites, so long as at least one-third of their content is conceivably adult content. “HB 1181’s requirements are akin to requiring ID every time a user logs into a streaming service like Netflix, regardless of whether they want to watch a G- or R-rated movie,” EFF wrote.

Second, and unlike with in-person age-gates, the only viable way for a website to comply with HB 1181 is to require all users to upload and submit, not just momentarily display, a data-rich government-issued ID or other document with personal identifying information. In its brief, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns.

For example, HB 1181 may permit the Texas government to log and track user access when verification is done via government-issued ID. As the trial court explained, the law “runs the risk that the state can monitor when an adult views sexually explicit materials” and threatens to force individuals “to divulge specific details of their sexuality to the state government to gain access to certain speech.”

Additionally, a person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. EFF noted that HB 1181 does not require all parties who may have access to the data—such as third-party intermediaries, data brokers, or advertisers—to delete that data. This leaves users highly vulnerable to data breaches and other security harms.

Finally, EFF explained that millions of adult internet users would be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.

There are less restrictive alternatives to mass online age-gating that would still protect minors without substantially burdening adults. The trial court, in fact, outlined several of these alternatives in its decision, based on the factual evidence presented by the parties. The Fifth Circuit completely ignored these findings.

EFF has been a steadfast critic of efforts to censor the internet and burden access to online speech. We hope the Supreme Court agrees to hear this appeal and reverses the decision of the Fifth Circuit.

Crypto Mixer Money Laundering: Samourai Founders Arrested

9 May 2024 at 03:00

The recent crackdown on the crypto mixer Samourai for money laundering has unveiled a sophisticated operation allegedly involved in facilitating illegal transactions and laundering criminal proceeds. The cryptocurrency community was shocked by the sudden Samourai Wallet shutdown. The U.S. Department of Justice (DoJ) revealed the arrest of two co-founders, shedding light on the intricacies of their […]

The post Crypto Mixer Money Laundering: Samourai Founders Arrested appeared first on TuxCare.

EFF Urges New York Court to Protect Online Speakers’ Anonymity

12 March 2024 at 16:54

The First Amendment requires courts to apply a robust balancing test before unmasking anonymous online speakers, EFF explained in an amicus brief it filed recently in a New York State appeal.

In the case on appeal, GSB Gold Standard v. Google, a German company that sells cryptocurrency investments is seeking to unmask an anonymous blogger who criticized the company. Based upon a German court order, the company sought a subpoena that would identify the blogger. The blogger fought back, without success, and they are now appealing.

Like speech itself, the First Amendment right to anonymity fosters and advances public debate and self-realization. Anonymity allows speakers to communicate their ideas without being defined by their identity. Anonymity protects speakers who express critical or unpopular views from harassment, intimidation, or being silenced. And, because powerful individuals or entities’ efforts to punish one speaker through unmasking may well lead others to remain silent, protecting anonymity for one speaker can promote free expression for many others.

Too often, however, corporations and individuals try to abuse the judicial process to unmask anonymous speakers. Thus, courts should apply robust evidentiary and procedural standards before compelling the disclosure of an anonymous speaker’s identity.

Under these standards, parties seeking to unmask anonymous speakers must first show they have meritorious legal claims, to help ensure that the litigation isn’t a pretext for harassment. Those parties that meet this first step must then also show that their interests in unmasking an anonymous speaker outweigh the speaker’s interests in retaining their anonymity. In this case, the trial court didn’t require the German company to meet this standard, and it could not have in any event.

Courts around the United States have adopted various forms of this test, with EFF often participating as amicus or counsel. We hope that New York follows their lead.

Don’t Fall for the Latest Changes to the Dangerous Kids Online Safety Act 

15 February 2024 at 17:27

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version this week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.  

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction, whether their own or that of their friends, families, and neighbors, will not have the resources they need to do so.
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 

What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from previous versions. The “duty of care” censorship section remains in the bill, though modified as we will explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.

KOSA is still a censorship bill and it will still harm a large number of minors

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their products. These include self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship of otherwise legal, important topics for everyone, as we’ve explained before.

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle and, ultimately, unimportant. There is no case law defining what “reasonable care” means in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case, the FTC—claims is harmful.

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. A “design feature” that could lead to liability is defined as:

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform.

Design features include but are not limited to 

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and features that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear when offering basic features that many users rely on, such as notifications, by itself creates a harm. That points to the fundamental problem of the provision: KOSA is essentially trying to use the features of a service as a proxy to create liability for online speech that the bill’s authors do not like. And the list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are not, in themselves, a concern for legislators. The concern is that these lawmakers do not like some of the lawful content that is accessible via an online service’s features.

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities.

Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors, then under this language it could bring an enforcement action against a platform that allowed users to send such messages.

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to stop an online newspaper from using an infinite scroll feature or auto-playing videos, that restriction would be struck down. KOSA’s latest variant is no different.

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state from targeting an online service based on claims that, in hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—one that allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of the service.

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-member commission whose members are nominated by the President. This still allows a small group of federal officials to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online.

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

States Attack Young People’s Constitutional Right to Use Social Media: 2023 Year in Review

30 December 2023 at 10:58

Legislatures in more than half of the country targeted young people’s use of social media this year, with many of the proposals blocking adults’ ability to access the same sites. State representatives introduced dozens of bills that would limit young people’s use of some of the most popular sites and apps, either by requiring the companies to introduce or amend their features or data usage for young users, or by forcing those users to get permission from parents, and in some cases, share their passwords, before they can log on. Courts blocked several of these laws for violating the First Amendment—though some may go into effect later this year. 

Fourteen months after California passed the AADC, it feels like a dam has broken.

How did we get to a point where state lawmakers are willing to censor large parts of the internet? In many ways, California’s Age Appropriate Design Code Act (AADC), passed in September of 2022, set the stage for this year’s battle. EFF asked Governor Newsom to veto that bill before it was signed into law, despite its good intentions in seeking to protect the privacy and well-being of children. Like many of the bills that followed it this year, it runs the risk of imposing surveillance requirements and content restrictions on a broader audience than intended. A federal court blocked the AADC earlier this year, and California has appealed that decision.

Fourteen months after California passed the AADC, it feels like a dam has broken: we’ve seen dangerous social media regulations for young people introduced across the country, and passed in several states, including Utah, Arkansas, and Texas. The severity and individual components of these regulations vary. Like California’s, many of these bills would introduce age verification requirements, forcing sites to identify all of their users, harming both minors’ and adults’ ability to access information online. We oppose age verification requirements, which are the wrong approach to protecting young people online. No one should have to hand over their driver’s license, or, worse, provide biometric information, just to access lawful speech on websites.

A Closer Look at State Social Media Laws Passed in 2023

Utah enacted the first child social media regulation this year, S.B. 152, in March. The law prohibits social media companies from providing accounts to a Utah minor, unless they have the express consent of a parent or guardian. We requested that Utah’s governor veto the bill.

We identified at least four reasons to oppose the law, many of which apply to other states’ social media regulations. First, young people have a First Amendment right to information that the law infringes upon. With S.B. 152 in effect, the majority of young Utahns will find themselves effectively locked out of much of the web absent their parents’ permission. Second, the law dangerously requires parental surveillance of young people’s accounts, harming their privacy and free speech. Third, the law endangers the privacy of all Utah users, as it requires many sites to collect and analyze private information, like government-issued identification, for every user, to verify ages. And fourth, the law interferes with the broader public’s First Amendment right to receive information by requiring that all users in Utah tie their accounts to their age, and ultimately their identity, which will lead to fewer people expressing themselves or seeking information online.

Federal courts have blocked the laws in Arkansas and California.

The law passed despite these problems, as did Utah’s H.B. 311, which creates liability for social media companies should they, in the view of Utah lawmakers, create services that are addictive to minors. H.B. 311 is unconstitutional because it imposes a vague and unscientific standard for what might constitute social media addiction, potentially creating liability for core features of a service, such as letting you know that someone responded to your post. Both S.B. 152 and H.B. 311 are scheduled to take effect in March 2024.

In April, Arkansas passed a law similar to Utah’s S.B. 152, which requires social media users to prove their age or obtain parental permission to create accounts. A federal court blocked the Arkansas law in September, ruling that the age-verification provisions violated the First Amendment because they burdened everyone’s ability to access lawful speech online. EFF joined the ACLU in a friend-of-the-court brief arguing that the statute was unconstitutional.

Texas, in June, passed a regulation similar to the Arkansas law, which would ban anyone under 18 from having a social media account unless they receive consent from parents or guardians. The law is scheduled to take effect in September 2024.

Given the strong constitutional protections for people, including children, to access information without having to identify themselves, federal courts have blocked the laws in Arkansas and California. The Utah and Texas laws are likely to suffer the same fate. EFF has warned that such laws were bad policy and would not withstand court challenges, in large part because applying online regulations specifically to young people often forces sites to use age verification, which comes with a host of problems, legal and otherwise. 

To that end, we spent much of this year explaining to legislators that comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harms they do to children. For an even more detailed account of our suggestions, see Privacy First: A Better Way to Address Online Harms. In short, comprehensive data privacy legislation would address the massive collection and processing of personal data that is the root cause of many problems online, and it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

Of course, states were not alone in their attempt to regulate social media for young people. Our Year in Review post on similar federal legislation that was introduced this year covers that fight, which was successful. Our post on the UK’s Online Safety Act describes the battle across the pond. 2024 is shaping up to be a year of court battles that may determine the future of young people’s access to speak out and obtain information online. We’ll be there, continuing to fight against misguided laws that do little to protect kids while doing much to invade everyone’s privacy and speech rights.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.
