
The Alaska Supreme Court Takes Aerial Surveillance’s Threat to Privacy Seriously, Other Courts Should Too

29 May 2024 at 18:16

In March, the Alaska Supreme Court held in State v. McKelvey that the Alaska Constitution required law enforcement to obtain a warrant before photographing a private backyard from an aircraft. In this case, the police took photographs of Mr. McKelvey’s property, including the constitutionally protected curtilage area, from a small aircraft using a zoom lens.

In arguing that Mr. McKelvey did not have a reasonable expectation of privacy, the government raised various factors which have been used to justify warrantless surveillance in other jurisdictions. These included the ubiquity of small aircraft flying overhead in Alaska; the commercial availability of the camera and lens; the availability of aerial footage of the land elsewhere; and the allegedly unobtrusive nature of the surveillance.

In response, the Court divorced the ubiquity and availability of the technology from whether people would reasonably expect the government to use it to spy on them. The Court observed that the very fact the government spent resources to take its own photographs demonstrates that whatever images were already available were insufficient for law enforcement's needs. And the difficulty or impossibility of detecting the spying adds to, rather than detracts from, its pernicious nature, because “if the surveillance technique cannot be detected, then one can never fully protect against being surveilled.”

Throughout its analysis, the Alaska Supreme Court demonstrated a grounded understanding of modern technology—as well as its future—and its effect on privacy rights. At the outset, the Court pointed out that one might think this warrantless aerial surveillance was not a significant threat to privacy rights because “aviation gas is expensive, officers are busy, and the likelihood of detecting criminal activity with indiscriminate surveillance flights is low.” However, the Court added pointedly, “the rise of drones has the potential to change that equation.” We made similar arguments and are glad to see that courts are taking the threat seriously.

This is a significant victory for Alaskans and their privacy rights, and it stands in contrast to a pair of U.S. Supreme Court cases from the 1980s, Ciraolo v. California and Florida v. Riley. In those cases, the justices found no violation of the federal constitution in aerial surveillance from low-flying manned aircraft. But there have been seismic changes in the capabilities of surveillance technology since those decisions, and courts should consider these developments rather than merely applying precedents uncritically.

With this decision, Alaska joins California, Hawaii, and Vermont in finding that warrantless aerial surveillance violates their state constitutions' prohibitions on unreasonable searches and seizures. Other courts should follow suit to ensure that privacy rights do not fall victim to the advancement of technology.

Don't Let the Sun Go Down on Section 230 | EFFector 36.7

29 May 2024 at 13:49

Curious about the latest digital rights news? Well, you're in luck! Our latest newsletter covers topics including: lawmakers' plans to sunset Section 230, the most important law protecting free expression online; our brief regarding data sharing by electronic ankle monitoring devices; and the simple proposition that no one country should be able to restrict speech across the entire internet.

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.7 - Don't Let The Sun Go Down on Section 230

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

A Wider View on TunnelVision and VPN Advice

29 May 2024 at 01:04

If you listen to any podcast long enough, you will almost certainly hear an advertisement for a Virtual Private Network (VPN). These advertisements usually assert that a VPN is the only tool you need to stop cyber criminals, malware, government surveillance, and online tracking. But these advertisements vastly oversell the benefits of VPNs. The reality is that VPNs are mainly useful for one thing: routing your network connection through a different network. Many people, including EFF, thought that VPNs were also a useful tool for encrypting your traffic in scenarios where you didn’t trust the network you were on, such as at a coffee shop, university, or hacker conference. But new research from Leviathan Security serves as a reminder that this may not be the case and highlights the limited use cases for VPNs.

TunnelVision is a recently published attack method that allows an attacker on a local network to force internet traffic to bypass your VPN and route it over an attacker-controlled channel instead. This lets the attacker see any unencrypted traffic (such as which websites you are visiting). Traditionally, corporations have deployed VPNs so that employees can access private company sites from other networks. Today, many people use a VPN in situations where they don’t trust their local network. But the TunnelVision exploit makes it clear that a VPN cannot always deliver on that threat model: it will not necessarily protect you if you can’t trust your local network.

TunnelVision exploits the Dynamic Host Configuration Protocol (DHCP) to reroute traffic outside of a VPN connection. The VPN connection itself is preserved, not broken, but the attacker is able to view the unencrypted traffic. Think of DHCP as giving you a nametag when you enter the room at a networking event. The host knows at least 50 guests will be in attendance and has allocated 50 blank nametags. Some nametags may be reserved for VIP guests, but the rest can be allocated to guests who properly RSVP to the event. When you arrive, they check your name and then assign you a nametag. You may now properly enter the room and be identified as “Agent Smith.” In the case of computers, this “name” is the IP address that DHCP assigns to devices on the network. This assignment is normally handled automatically by a DHCP server, though addresses can also be configured by hand.
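
For readers who like to see the moving parts, here is a toy sketch of that leasing logic in Python. The class and address pool are invented for illustration; real DHCP is a binary protocol spoken over UDP ports 67 and 68, with lease durations and renewals:

```python
# Toy illustration of DHCP-style address leasing, mirroring the
# nametag analogy above. A simplified sketch, not real DHCP.
import ipaddress

class ToyDhcpServer:
    def __init__(self, pool_cidr, reserved=()):
        # The "blank nametags": every usable host address in the pool.
        self.free = [str(ip) for ip in ipaddress.ip_network(pool_cidr).hosts()
                     if str(ip) not in reserved]
        self.leases = {}  # MAC address -> leased IP

    def request(self, mac):
        # A guest who already checked in keeps the same nametag.
        if mac in self.leases:
            return self.leases[mac]
        if not self.free:
            raise RuntimeError("address pool exhausted")
        ip = self.free.pop(0)
        self.leases[mac] = ip
        return ip

server = ToyDhcpServer("192.0.2.0/28", reserved={"192.0.2.1"})
print(server.request("aa:bb:cc:dd:ee:ff"))  # 192.0.2.2
```

TunnelVision’s trick, described next, abuses the fact that this same server-to-client exchange can carry routing instructions, not just an address.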

TunnelVision abuses a DHCP configuration option called Option 121, the classless static route option, which lets a DHCP server push routing rules to a device along with its address lease. A rogue DHCP server on the local network can push routes that are more specific than the VPN's catch-all route, so the operating system sends traffic to the attacker's gateway instead of through the VPN tunnel. There have been past attacks, like TunnelCrack, that used similar methods, and chances are that if a VPN provider addressed TunnelCrack, it is working on verifying mitigations for TunnelVision as well.
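
To make the mechanics concrete, here is a minimal Python sketch that decodes an Option 121 payload as specified in RFC 3442. The payload below is hypothetical (it uses the 192.0.2.0/24 documentation range as the attacker’s gateway) and shows the classic trick of pushing two /1 routes that together cover all of IPv4 space:

```python
# Minimal decoder for DHCP Option 121 (classless static routes,
# RFC 3442). Each route is encoded as: one prefix-length byte,
# the significant octets of the destination, then a 4-byte gateway.
def decode_option_121(payload: bytes):
    routes, i = [], 0
    while i < len(payload):
        prefix_len = payload[i]
        i += 1
        n = (prefix_len + 7) // 8  # number of significant destination octets
        dest = list(payload[i:i + n]) + [0] * (4 - n)
        i += n
        gateway = payload[i:i + 4]
        i += 4
        routes.append((f"{'.'.join(map(str, dest))}/{prefix_len}",
                       ".".join(map(str, gateway))))
    return routes

# Hypothetical attacker payload: two /1 routes covering the whole
# IPv4 space, each more specific than the VPN's 0.0.0.0/0 route.
payload = bytes([1, 0x00, 192, 0, 2, 1,   # 0.0.0.0/1   via 192.0.2.1
                 1, 0x80, 192, 0, 2, 1])  # 128.0.0.0/1 via 192.0.2.1
for dest, gw in decode_option_121(payload):
    print(f"{dest} via {gw}")
```

Because ordinary IP routing prefers the most specific matching route, the two injected /1 entries win over the VPN’s 0.0.0.0/0 route even though the tunnel itself stays up.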

In the words of the security researchers who published this attack method:

“There’s a big difference between protecting your data in transit and protecting against all LAN attacks. VPNs were not designed to mitigate LAN attacks on the physical network and to promise otherwise is dangerous.”

Rather than lament the many ways public, untrusted networks can leave someone vulnerable, it is worth remembering the many protections now provided by default. The internet was not originally built with security in mind, but many people have been working hard to rectify this, and today we have many other tools in our toolbox to deal with these problems. For example, web traffic is mostly encrypted with HTTPS. This does not change your IP address the way a VPN can, but it does encrypt the contents of the web pages you visit and secure your connection to a website. Domain Name System (DNS) lookups, which happen before an HTTPS connection is made, have also been a vector for surveillance and abuse, since the domain of the website you request is exposed at this level. There have been wide efforts to secure and encrypt DNS as well, and encrypted DNS and HTTPS-by-default are now available in every major browser, closing off possible attack vectors for snoops on the same network as you. Lastly, major browsers have implemented support for Encrypted Client Hello (ECH), which encrypts your initial connection to a website, sealing off metadata that was previously sent in cleartext.
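
As a concrete illustration of encrypted DNS, here is a small Python sketch that resolves a domain over DNS-over-HTTPS (DoH) using Cloudflare’s public JSON endpoint. The endpoint and response fields shown are Cloudflare’s documented API; any DoH provider works similarly, and the printed address is only an example:

```python
# Resolve a domain over DNS-over-HTTPS (DoH) instead of plaintext
# UDP port 53, so an on-path snoop can't read the query.
import requests

def doh_lookup(name: str, record_type: str = "A"):
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    # Each answer record carries the resolved value in its "data" field.
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))  # e.g. ['93.184.215.14']
```

Because the query travels inside an HTTPS connection, a snoop on your local network sees only that you contacted the resolver, not which domain you asked about.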

TunnelVision is a reminder that we need to clarify what tools can and cannot do. A VPN does not provide anonymity online and neither can encrypted DNS or HTTPS (Tor can though). These are all separate tools that handle similar issues. Thankfully, HTTPS, encrypted DNS, and encrypted messengers are completely free and usable without a subscription service and can provide you basic protections on an untrusted network. VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.

EFF Submission to the Oversight Board on Posts That Include “From the River to the Sea”

As part of the Oversight Board’s consultation on the moderation of social media posts that include reference to the phrase “From the river to the sea, Palestine will be free,” EFF recently submitted comments highlighting that moderation decisions must be made on an individualized basis because the phrase has a significant historical usage that is not hateful or otherwise in violation of Meta’s community standards.

“From the river to the sea, Palestine will be free” is a historical political phrase or slogan referring geographically to the area between the Jordan River and the Mediterranean Sea, an area that includes Israel, the West Bank, and Gaza. Today, the meaning of the slogan for many continues to be one of freedom, liberation, and solidarity against the fragmentation of Palestinians across the land over which the Israeli state currently exercises sovereignty—from Gaza, to the West Bank, and within the Israeli state itself.

But for others, the phrase is contentious and constitutes support for extremism and terrorism. Hamas—a group designated as a terrorist organization by governments including the United States and the European Union—adopted the phrase in its 2017 charter, leading to the claim that the phrase is solely a call for the extermination of Israel. And since Hamas’ deadly attack on Israel on October 7, 2023, opponents have argued that the phrase is a hateful form of expression targeted at Jews in the West.

But international courts have recognized that despite its co-optation by Hamas, the phrase continues to be used by many as a rallying call for liberation and freedom whose meaning is explicit on both a physical and a symbolic level. Censoring the phrase because of a perceived “hidden meaning” of inciting hatred and extremism constitutes an infringement on free speech in those situations.

Meta has a responsibility to uphold the free expression of people using the phrase in its protected sense, especially when those speakers are otherwise persecuted and marginalized. 

Read our full submission here

Wanna Make Big Tech Monopolies Even Worse? Kill Section 230

24 May 2024 at 10:00

It’s no fun when your friends ask you to take sides in their disputes. The plans for every dinner party, wedding, and even funeral arrive at a juncture where you find yourself thinking, “Dang, if I invite her, then he won’t come.”

It’s even less fun when you’re running an online community, from a groupchat to a Mastodon server (or someday, a Bluesky server), or any other (increasingly cheap and easy) space where your friends (and their friends) can hang out online, far from the unquenchable dumpster-fires of Big Tech social media.

But there’s a circle of hell that’s infinitely worse than being asked to choose sides in a flamewar: being threatened with a lawsuit for refusing to do so (or even for complying with one side’s request over the other).

Take Action

Tell Congress: Ending Section 230 Will Hurt Users

At EFF, we’ve had decades of direct experience with the, uh, heated rhetoric that attends online disputes (there’s a reason the most famous law about online arguments was coined by the very first person EFF ever hired).

That’s one of the reasons we’re such big fans of Section 230 (47 U.S.C. § 230), a much-maligned, badly misunderstood law that protects people who run online services from being dragged into legal disputes between their users.

Getting sued can profoundly disrupt your life, even if you win. Much of the time, people on the receiving end of legal threats are forced to settle because they can’t afford to defend themselves in court. There's a whole cottage industry of legal bullies who’ll help the thin-skinned, vindictive and deep-pocketed to silence their critics.

That’s why we were so alarmed to see a bill introduced in the House Energy and Commerce Committee that would sunset Section 230 as of December 31, 2025, with no provision to protect online service providers from being conscripted into their users’ online disputes and the legal battles that arise from them.

Homely places on the internet aren’t just a curiosity anymore, nor are they merely a hangover from the Web 1.0 era.

In an age of resurgent anti-monopoly activism, small online communities, either standing on their own, or joined in loose “federations,” are the best chance we have to escape Big Tech’s relentless surveillance and clumsy, unaccountable control.

Look, running online communities is already a thankless task that can convert a generous digital host into a bitter ex-online host.

The alternatives to Big Tech come from individuals, co-ops, nonprofits and startups. These cannot exist in a world where the law makes anyone who offers a space for communities to gather vulnerable to being dragged into lawsuits between their community members.

It’s one thing to volunteer your time and resources to create a hospitable place online; it’s another thing entirely to assume an uninsurable risk that could jeopardize your life’s savings, your home, and your retirement fund. Defending against a single such case can cost hundreds of thousands of dollars.

That’s very bad news indeed, because a world without Section 230 will desperately need alternatives to Big Tech.

Big Tech has deep pockets, which means that even if it creates a system of hair-trigger moderation that takes down anything remotely controversial on sight, it will still attract a staggering number of legal threats.

There’s a useful analogy here to FTX, the disgraced, fraudulent cryptocurrency exchange. Like Big Tech, FTX has some genuinely aggrieved users, but it has also been targeted by opportunistic treasure hunters who have laid claims against the company totaling 23.6 quintillion dollars.

We know what Big Tech will do in a post-230 world, because some of us are already living in that world. Donald Trump signed SESTA-FOSTA into law in 2018. The law was billed as a narrowly targeted measure to make platforms liable for failing to intervene in cases where they were aware of human trafficking. In practice, the law has been used to indiscriminately target consensual sex work, placing sex workers in harm’s way (just as we predicted).

Without Section 230, Big Tech will shoot first, ask questions later when it comes to taking down controversial online speech (like #MeToo or Black Lives Matter). For marginalized users with little social power (again, like #MeToo or Black Lives Matter participants), Big Tech takedowns will be permanent, because Big Tech has no incentive to figure out whether it’s worth hosting their speech.

Meanwhile, for the wealthy and powerful, a post-230 world is one where dictators, war criminals, and fraudsters will have a new, powerful tool to silence their critics.

A post-230 world, in other words, is a world where Big Tech is infinitely worse for the users who already suffer most from the large platforms’ moderation failures.

But it’s also a world where it’s infinitely harder to start an alternative to Big Tech’s gigantic walled gardens.

No wonder tech billionaires support getting rid of Section 230: they understand that their overgrown, universally loathed services are vulnerable to real alternatives.

Four years ago, the Biden Administration declared that promoting competition was a whole-of-government priority (and we cheered). Getting rid of Section 230 will do the opposite: freeze the internet in its current, monopolized state, creating a world where the rule of today’s tech barons is never challenged by a more democratic, user-centric internet.

Take Action

Ending Section 230 Will Make Big Tech Monopolies Even Worse

NETMundial+10 Multistakeholder Statement Pushes for Greater Inclusiveness in Internet Governance Processes

23 May 2024 at 17:55

A new statement about strengthening internet governance processes emerged from the NETMundial +10 meeting in Brazil last month, strongly reaffirming the value of and need for a multistakeholder approach involving full and balanced participation of all parties affected by the internet—from users, governments, and private companies to civil society, technologists, and academics.

But the statement did more than reiterate commitments to more inclusive and fair governance processes. It offered recommendations and guidelines that, if implemented, can strengthen multistakeholder principles as the basis for global consensus-building and democratic governance, including in existing multilateral internet policymaking efforts.


The event and statement, to which EFF contributed with dialogue and recommendations, is a follow-up to the 2014 NETmundial meeting, which ambitiously sought to consolidate multistakeholder processes for internet governance and recommended 10 process principles. It’s fair to say that over the last decade, it’s been an uphill battle turning words into action.

Achieving truly fair and inclusive multistakeholder processes for internet governance and digital policy continues to face many hurdles. Governments, intergovernmental organizations, international standards bodies, and large companies have continued to wield their resources and power. Civil society organizations, user groups, and vulnerable communities are too often sidelined or permitted only token participation.

Governments often tout multistakeholder participation, but in practice it is a complex task to achieve. The current Ad Hoc Committee negotiations over the proposed UN Cybercrime Treaty highlight the complexity and controversy of multistakeholder efforts. Although the treaty negotiation process was open to civil society and other nongovernmental organizations (NGOs), with positive steps like tracking changes to amendments, most real negotiations occur informally, behind closed doors, excluding NGOs.

This reality presents a stark contrast and practical challenge for truly inclusive multistakeholder participation, as the most important decisions are made without full transparency and broad input. This demonstrates that, despite the appearance of inclusivity, substantive negotiations are not open to all stakeholders.

Consensus building is another important multistakeholder goal but faces significant practical challenges because of the human rights divide among states in multilateral processes. For example, in the context of the Ad Hoc Committee, achieving consensus has remained largely unattainable because of stark differences in human rights standards among member States. Mechanisms for resolving conflicts and enabling decision-making should consider human rights laws to indicate redlines. In the UN Cybercrime Treaty negotiations, reaching consensus could potentially lead to a race to the bottom in human rights and privacy protections.

To be sure, seats at the policymaking table must be open to all to ensure fair representation. Multi-stakeholder participation in multilateral processes allows, for example, civil society to advocate for more human rights-compliant outcomes. But while inclusivity and legitimacy are essential, they alone do not validate the outcomes. An open policy process should always be assessed against the specific issue it addresses, as not all issues require global regulation or can be properly addressed in a specific policy or governance venue.

The NETmundial+10 Multistakeholder Statement, released April 30 following a two-day gathering in São Paulo of 400 registered participants from 60 countries, addresses issues that have prevented stakeholders, especially the less powerful, from meaningful participation, and puts forth guidelines aimed at making internet governance processes more inclusive and accessible to diverse organizations and participants from diverse regions.

For example, the 18-page statement contains recommendations on how to strengthen inclusive and diverse participation in multilateral processes, which includes State-level policy making and international treaty negotiations. Such guidelines can benefit civil society participation in, for example, the UN Cybercrime Treaty negotiations. EFF’s work with international allies in the UN negotiating process is outlined here.

The NETmundial statement takes asymmetries of power head on, recommending that governance processes provide stakeholders with information and resources and offer capacity-building to make these processes more accessible to those from developing countries and underrepresented communities. It sets more concrete guidelines and process steps for multistakeholder collaboration, consensus-building, and decision-making, which can serve as a roadmap in the internet governance sphere.

The statement also recommends strengthening the UN-convened Internet Governance Forum (IGF), a predominant venue for the frank exchange of ideas and multistakeholder discussions about internet policy issues. The multitude of initiatives and pacts around the world dealing with internet policy can cause duplication, conflicting outcomes, and incompatible guidelines, making it hard for stakeholders, especially those from the Global South, to find their place. 


The IGF could strengthen its coordination and information-sharing role and serve as a venue for follow-up on multilateral digital policy agreements. The statement also recommended improvements in the dialogue and coordination between global, regional, and national IGFs to establish continuity between them and bring global attention to local perspectives.

We were encouraged to see the statement recommend that IGF’s process for selecting its host country be transparent and inclusive and take into account human rights practices to create equitable conditions for attendance.

EFF and 45 digital and human rights organizations last year called on the UN Secretary-General and other decision-makers to reverse their decision to grant host status for the 2024 IGF to Saudi Arabia, which has a long history of human rights violations, including the persecution of human rights and women’s rights defenders, journalists, and online activists. Saudi Arabia’s draconian cybercrime laws are a threat to the safety of civil society members who might consider attending an event there.

Nominations Open for 2024 EFF Awards!

22 May 2024 at 18:01

Nominations are now open for the 2024 EFF Awards! The nomination window will be open until May 31st at 2:00 PM Pacific time. You could nominate the next winner today!

For over thirty years, the Electronic Frontier Foundation has presented awards to key leaders and organizations in the fight for freedom and innovation online. The EFF Awards celebrate the longtime stalwarts working on behalf of technology users, both in the public eye and behind the scenes. Past honorees include visionary activist Aaron Swartz, human rights and security researchers The Citizen Lab, media activist Malkia Devich-Cyril, cyberpunk author William Gibson, and whistleblower Chelsea Manning.

The internet is a necessity in modern life and a continually evolving tool for communication, creativity, and human potential. Together we carry—and must always steward—the movement to protect civil liberties and human rights online. Will you help us spotlight some of the latest and most impactful work towards a better digital future?

Remember, nominations close on May 31st at 2:00 PM Pacific time!

GO TO NOMINATION PAGE

Nominate your favorite digital rights heroes now!

After you nominate your favorite contenders, we hope you will consider joining us on September 12 to celebrate the work of the 2024 winners. If you have any questions or if you'd like to receive updates about the event, please email events@eff.org.

The EFF Awards depend on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the EFF Awards, please email tierney@eff.org.

EFF Urges Supreme Court to Reject Texas’ Speech-Chilling Age Verification Law

21 May 2024 at 18:01

A Texas age verification law will rob people of anonymity online, chill access to speech for privacy- and security-minded internet users, and entirely block some adults from accessing constitutionally protected online content, EFF argued in a brief filed with the Supreme Court last week.

EFF joined the Woodhull Freedom Foundation in filing a friend-of-the-court brief urging the U.S. Supreme Court to grant review of—and ultimately overturn—the Fifth Circuit’s decision upholding the Texas law.

Last year, the state of Texas passed HB 1181 in a misguided attempt to shield minors from certain online content. The law requires all Texas internet users, including adults, to complete invasive “age verification” procedures on every website the state deems to be at least one-third composed of sexual material. Under the law, adult users must upload sensitive personal records—such as a driver’s license or other photo ID—to access any content on these sites, including non-explicit content. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect.

The Fifth Circuit’s decision disregards important constitutional principles. The First Amendment protects our right to access protected online speech without substantial government interference. For adults, this is true even if that speech constitutes sexual or explicit content. The government cannot burden adult internet users and force them to sacrifice their anonymity, privacy, and security simply to access lawful speech.

EFF’s position is hardly unique. Courts have repeatedly and consistently held similar age verification laws to be unconstitutional due to these and other harms. As EFF noted in its brief, the Fifth Circuit’s decision is an anomaly and has created a split among federal circuit courts. 

In coming to its decision, the Fifth Circuit relied largely on a single Supreme Court case from 1968, involving a law that required an in-person ID check to buy magazines featuring adult content. But online age verification is nothing like flashing an ID card in person to buy a particular physical item.

For one, HB 1181 blocks access to entire websites, not just individual offending magazines. This could include many common, general-purpose websites, so long as only one-third of the content is conceivably adult content. “HB 1181’s requirements are akin to requiring ID every time a user logs into a streaming service like Netflix, regardless of whether they want to watch a G- or R-rated movie,” EFF wrote.

Second, and unlike with in-person age-gates, the only viable way for a website to comply with HB 1181 is to require all users to upload and submit, not just momentarily display, a data-rich government-issued ID or other document with personal identifying information. In its brief, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns.

For example, HB 1181 may permit the Texas government to log and track user access when verification is done via government-issued ID. As the trial court explained, the law “runs the risk that the state can monitor when an adult views sexually explicit materials” and threatens to force individuals “to divulge specific details of their sexuality to the state government to gain access to certain speech.”

Additionally, a person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. EFF noted that HB 1181 does not require all parties who may have access to the data—such as third-party intermediaries, data brokers, or advertisers—to delete that data. This leaves users highly vulnerable to data breaches and other security harms.

Finally, EFF explained that millions of adult internet users would be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.

There are less restrictive alternatives to mass online age-gating that would still protect minors without substantially burdening adults. The trial court, in fact, outlined several of these alternatives in its decision, based on the factual evidence presented by the parties. The Fifth Circuit completely ignored these findings.

EFF has been a steadfast critic of efforts to censor the internet and burden access to online speech. We hope the Supreme Court agrees to hear this appeal and reverses the decision of the Fifth Circuit.

Speaking Freely: Ethan Zuckerman

21 May 2024 at 13:12

Ethan Zuckerman is a professor at the University of Massachusetts at Amherst, where he teaches Public Policy, Communication and Information. He is starting a new research center called the Institute for Digital Public Infrastructure. Over the years, he’s been a tech startup guy (with Tripod.com), a non-profit founder (Geekcorps.org) and co-founder (Globalvoices.org), and throughout it all, a blogger.

This interview has been edited for length and clarity.

York: What does free speech or free expression mean to you? 

It is such a complicated question. It sounds really easy, and then it gets really complicated really quickly. I think freedom of expression is this idea that we want to hear what people think and feel and believe, and we want them to say those things as freely as possible. But we also recognize at the same time that what one person says has a real effect on what other people are able to say or feel comfortable saying. So there’s a naive version of freedom of expression which sort of says, “I’m going to say whatever I want all the time.” And it doesn’t do a good job of recognizing that we are in community. And that the ways in which I say things may make it possible or not possible for other people to say things. 

So I would say that freedom of expression is one of these things that, on the surface, looks super simple. You want to create spaces for people to say what they want to say and speak their truths no matter how uncomfortable they are. But then you go one level further than that and you start realizing, oh, okay, what I’m going to do is create spaces that are possible for some people to speak and not for other people to speak. And then you start thinking about how you create a multiplicity of spaces and how those spaces interact with one another. So it’s one of these fractally complicated questions. The first cut at it is super simple. And then once you get a little bit into it it gets incredibly complicated. 

York: Let’s dig into that complexity a bit. You and I have known each other since about 2008, and the online atmosphere has changed dramatically in that time. When we were both, I would say, pretty excited about how the internet was able to bring people together across borders, across affinities, etc. What are some of the changes you’ve seen and how do you think we can preserve a sense of free expression online while also countering some of these downsides or harms? 

Let’s start with the context you and I met in. You and I both were very involved in early years with Global Voices. I’m one of the co-founders along with Rebecca MacKinnon and a whole crew of remarkable people who started this online community as a way of trying to amplify voices that we don’t hear from very often. A lot of my career on the internet has been about trying to figure out whether we can use technology to help amplify voices of people in parts of the world where most of us haven’t traveled, places that we seldom hear from, places that don’t always get attention in the news and such. So Rebecca and I, at the beginning of the 2000s, got really interested in ways that people were using blogs and new forms of technology to report on what was going on. And for me it was places like Sub-Saharan Africa. Rebecca was interested in places like North Korea and sort of getting a picture of what was going on in some of those places, through the lens, often, of Chinese business people who were traveling to those places. 

And we started meeting bloggers who were writing from Iraq, which was under US attack at that point. Who were writing from countries like Madagascar, which had a lot going on politically, but almost no one knew about it or was hearing about it. So you and I started working in this context of, can we amplify these voices? Can we help people speak freely and have an audience? Because that’s one of these interesting problems—you can speak freely if you’re anonymous and on an onion site, etc, but no one’s going to hear you. So can we help people not just speak freely, but can we help find an audience associated with it? And some of the work that I was doing when you and I first met was around things like anonymous blogging with WordPress and Tor. And literally building guides to help people who are whistleblowers in closed societies speak online. 

You and I were also involved with the Berkman Center at Harvard, and we were both working on questions of censorship. One of the things that’s so interesting for me—to sort of go back in history—is to think about how censorship has changed online. Who those opponents to speech are. We started with the assumption that it was going to be the government of Saudi Arabia, or the government of Tunisia, or the government of China, who was going to block certain types of speech at the national level. You know, “You can’t say this. You’re going to be taken down, or, at worst, arrested for saying this.” We then pivoted, to a certain extent, to worries about censorship by companies, by platforms. And you did enormous amounts of work on this! You were at war with Facebook, now Meta, over their work on the female-presenting nipple. Now looking at the different ways which companies might decide that something was allowable speech or unallowable speech based on standards that had nothing to do with what their users thought, but really what the platforms’ decisions were. 

Somewhere in the late 20-teens, I think the battlefield shifted a little bit. And I think there are still countries censoring the internet, there are still platforms censoring the internet, but we got much better at censorship by each other. And, for me, this begins in a serious way with Gamergate. Where you have people—women, critics of the gaming industry—talking about feminist counter-narratives in video games. And the reaction from certain members of an online community is so hostile and so abusive, there’s so much violent misogyny aimed at people like Anita Sarkeesian and sort of other leaders in this field, that it’s another form of silencing speech. Basically the consequences for some people speaking are now so high, like the amount of abuse you’re going to suffer, whether it’s swatting, whether it’s people releasing a video game where players can beat you up—and that’s what happened to Anita—it doesn’t silence you in the same way that, like, the Great Firewall or having your blog taken down might silence you. But the consequences for speech get so high that they really shift and change the speech environment. And part of what’s so tricky about this is some of the people who are using speech to silence speech talk about their right to free speech and how free speech protects their ability to do this. And in some sense, they’re right. In another sense, they’re very wrong. They’re using speech to raise the consequences for other people’s speech and make it incredibly difficult for certain types of speech to take place. 

So I feel like we’ve gone from these very easy enemies—it’s very easy to be pissed off at the Saudis or the Chinese, it’s really satisfying to be pissed off at Facebook or any of the other platforms. But once we start getting to the point where we’re sort of like, hey, your understanding of free speech is creating an environment where it’s very hard or it’s very dangerous for others to speak, that’s where it gets super complicated. And so I would say I’ve gone from a firm supporter of free speech online, to this sort of complicated multilayered, “Wow, there’s a lot to think about in this” that I sort of gave you based on your opening question. 

York: Let’s unpack that a bit, because it’s complicated for me as well. I mean, over the years my views have also shifted. But right now we are seeing an uptick in attempts to censor legitimate speech from the various bills that we’re seeing across the African continent against LGBTQ+ speech, Saudi Arabia is always an evergreen example, Sudan just shut down the internet again, Israel shut down the internet in Palestine, Iran still has some sort of ongoing shutdown, etc., etc., I mean name a country and there’s probably something ongoing. And, of course, including the US with the Kids Online Safety Act (KOSA), which will absolutely have a negative impact on free expression for a lot of people. And of course we’re also seeing abortion-related speech being chilled in the US. So, with all of those examples, how do we separate the questions of how we deal with this idea of crowding or censoring each other’s speech with the very real, persistent threats to speech that we’re seeing? 

I think it is totally worthwhile to mention that actors in this situation have different levels of power. So when you look at something like the Kids Online Safety Act (KOSA), which has the real danger of essentially leaving what is prohibited speech up to individual state attorneys general. And we are seeing different American state attorneys general essentially say we are going to use this to combat “transgenderism,” we’re going to use this to combat—what they see as—the “LGBTQ agenda”, but a lot of the rest of us see as humanity and people having the ability to express their authentic selves. When you have a state essentially saying, “We’re going to censor content accessible to people under 18,” first of all, I don’t think it will pass Supreme Court muster. I think even under the crazy US Supreme Court at the moment, that’s actually going to get challenged successfully. 

When I talk about this progression from state censorship to platform censorship to individual censorship, there is a decreasing amount of power. States have guns, they can arrest you. There’s a lot of things Facebook can do to you, but they can’t, at this point, arrest you. They do have enormous power in terms of large swaths of the online environment, and we need to hold that sort of power accountable as well. But these things have to be an “and”, not an “or.” 

And, at the same time, as we are deeply concerned about state power and we’re deeply concerned about platform power, we also have to recognize that changes to a speech environment can make it incredibly difficult for people to participate or not participate. So one of the examples of this, in many ways, is changes to Twitter under Elon Musk. Where technical changes as well as moderation changes have made this a less safe space for a lot of people. And under the heading of free speech, you now have an environment where it is a whole lot easier to be harassed and intimidated to the point where it may not be easy to be on the platform anymore. Particularly if you are, say, a Muslim woman coming from India, for instance. This is a subject that I’m spending a lot of time with my friend and student Ifat Gazia looking at, how Hindutva is sort of using Twitter to gang up on Kashmiri women and create circumstances where it’s incredibly unsafe and unpleasant for them to be speaking where anything they say will turn into misogynistic trolling as well as attempts to get them kicked off the platform. And so, what’s become a free speech environment for Hindu nationalism turns out to make that a much less safe environment for the position that Kashmir should be independent or that Muslims should be equal Indian citizens. And so, this then takes us to this point of saying we want either the State or the platform to help us create a level playing field, help us create a space in which people can speak. But then suddenly we have both the State and the platform coming in and saying, “you can say this, and not say this.” And that’s why it gets so complicated so fast. 

York: There are many challenges to anonymous speech happening around the world. One example that comes to mind is the UK’s Online Safety Act, which digs into it a bit. We also both have written about the importance of anonymity for protecting vulnerable communities online. Have your views on anonymity or pseudonymity changed over the years? 

One of the things that was so interesting about early blogging was that we started seeing whistleblowers. We started seeing people who had information from within governments finding ways to express what was going on, within their states and within their countries. And I think to a certain extent, kind of leading up to the rise of WikiLeaks, there was this sort of idea that anonymity was almost a mark of authenticity. If you had to be anonymous perhaps it was because you were really close to the truth. Many of us took leaks very seriously. We took this idea that this was a leak, this was the unofficial narrative, we should pay an enormous amount of attention to it. I think, like most things in a changing media environment, the notion of leaking and the notion of protected anonymity has gotten weaponized to a certain extent. I think, you know, WikiLeaks is its own complicated narrative where things which were insider documents within, say, Kenya, early on in WikiLeaks’ history, sort of turned into giant document dumps with the idea that there must be something in here somewhere that’s going to turn out to be important. And, often, there was something in there, and there was also a lot of chaff in there. I think people learned how to use leaking as a strategy. And now, anytime you want people to pay attention to a set of documents, you say, I’m going to go ahead and “leak” them. 

At the same time, we’ve also seen people weaponize anonymity. And a story that you and I are both profoundly familiar with is Gay Girl in Damascus. Where you had someone using anonymity to claim that she was a lesbian living in a conservative community and talking about her experiences there. But of course it turned out to be a middle aged male Scotsman who had taken on this identity in the hopes of being taken more seriously. Because, of course, everyone knows that middle aged white men never get a voice in online dialogues, he had to make himself into a queer, Syrian woman to have a voice in that dialogue. Of course, the real amusing part of that, and what we found out in unwinding that situation, was that he was in a relationship with another fake lesbian who was another dude pretending to be a lesbian to have a voice online. So there’s this way in which we went from this very sort of naive, “it’s anonymous, therefore it’s probably a very powerful source,” to, “it’s anonymous, it’s probably yet another troll.” 

I think the answer is anonymity is really complicated. Some people really do need anonymity. And it’s really important to construct ways in which people can speak freely. But anyone who has ever worked with whistleblowers—and I have—will tell you that finding a way to actually put your name to something gives it vastly more power. So I think anonymity remains important, we’ve got to find ways to defend and protect it. I think we’re starting to find that the sort of Mark Zuckerberg idea, “you get rid of anonymity and the web will be wonderful”, is complete crap. There’s many communities that end up being very healthy with persistent pseudonyms or even anonymity. It has more to do with the space and the norms associated with it. But anonymity is neither the one size fits all solution to making whistleblowing safe, nor is it the “oh no, if you let anonymity in your community will collapse.” Like everything in this space, it turns out to be complicated and nuanced. And both more and less important than we tend to think. 

York: Tell me about an early experience that shaped your views on free expression. 

The story of Hao Wu is the story I want to tell here. When I think about freedom of expression online, I find myself thinking a lot about his story. Hao Wu is a documentary filmmaker. At this point, a very accomplished documentary filmmaker. He has made some very successful films, including one called The People’s Republic of Desire about Chinese live-streaming, which has gotten a great deal of celebration. He has a new film out called 76 Days about the lockdown of Wuhan. But I got to know him very indirectly, and it was from the fact that he was making a film in China about the phenomenon of underground Christian churches. And he got arrested and held for five months, and we knew about him through the Global Voices community because he had been an active blogger. We’d been paying attention to some of the work he was doing and suddenly he’d gone silent. 

I ended up working with Rebecca MacKinnon, who speaks Chinese and was in touch with all the folk involved, and I was doing the websites and such, building a free Hao Wu blog. And using that, and sort of platforming his sister, as a chance to advocate for his release. And what was so fascinating about this was Rebecca and I spent months writing about and talking about what was going on, and encouraging his sister to speak out, but she—completely understandably—was terrified about the consequences for her own life and her own career and family. At a certain point she was willing to write online and speak out, but that experience of sort of realizing that something that feels very straightforward and easy from your perspective, miles and miles away from the political situation, like, here’s this young man who is a filmmaker and a blogger and clearly a smart, interesting person, he should be able to speak freely, of course we’re going to advocate for his release. And then talking to his family and seeing the genuine terror that his sister had, that her life could be entirely transformed, and transformed negatively, by advocating for something as simple as her brother’s release. 

It’s interesting, I think about our mutual friend Alaa Abd El-Fattah, who has spent most of his adult life in Egyptian prisons, getting detained again and again and again. His family, his former partner, and many of his friends have spent years and years and years advocating for him. This whole process of advocating for someone’s ability to speak, advocating for someone’s ability to take political action, advocating for someone’s ability to make art—the closer you get to the situation, the harder it gets. Because the closer you are to the situation, the more likely that the injustice that you’re advocating to have overturned, is one that you’re experiencing as well. And it’s really interesting. I think it makes it very easy to advocate from a distance, and often much harder to advocate when you’re much closer to a situation. I think any situations where we find ourselves yelling about something on the other side of the world, it’s a good moment to sort of check and ask, are the people who are yelling the people who are directly affected by this—are they not yelling because the danger is so high, are they not yelling because maybe we misunderstand and are advocating for something that seems right and seems obvious but is actually much more complicated than we might otherwise think? 

York: Your lab is advocating for what you call a pluraverse. So you recognize that all these major platforms are going to continue to exist, people are going to continue to use them, but as we’re seeing a multitude of mostly decentralized platforms crop up, how do we see the future of moderation on those platforms? 

It’s interesting, I spend a ton of my time these days going out and sort of advocating for a pluraverse vision of the internet. And a lot of my work is trying to both set up small internet communities with very specific foci associated with them and thinking about an architecture that allows for a very broad range of experiences. One thing I found in all this is that small platforms often have much more restrictive rules than you would expect, and often for the better. And I’ll give a very tangible example. 

I am a large person. I am, for the first time in a long time, south of 300 pounds. But I have been between 290 and 310 for most of my adult life. And I started running about six months ago. I was inspired by a guy named Martinus Evans, who ran his first marathon at 380 pounds, and started a running club called the Slow AF Running Club, which has a very active online community and advocates for fitness and running at any size. And so I now log on to this group probably three or four times a week to log my runs, get encouragement, etc. I had to write an essay to join this community. I had to sign on to an incredible set of rules, including no weight talk, no weight loss talk, no body talk. All sorts of things. And you might say, I have freedom of speech! I have freedom of expression! Well, I’m choosing to set that aside so that I can be a member of this community and get support in particular ways. And in a pluraverse, if I want to talk about weight loss or bodies or something like that I can do it somewhere else! But to be a part of this extremely healthy online community that’s really helping me out a lot, I have to sort of agree and put certain things in a box. 

And this is what I end up referring to as “small rooms.” Small rooms have a purpose. They have a community. They might have a very tight set of speech regulations. And they’re great—for that specific conversation. They’re not good for broader conversations. If I want to advocate for body positivity. If I want to advocate for healthy at any weight, any number of other things, I’m going to need to step into a bigger room. I’m going to need to go to Twitter or Facebook or something like that. And there the rules are going to be very different. They’re going to be much broader. They’re going to encourage people to come back and say, “Shut up you fat fuck.” And that is in fact what happens when you encounter some of these things on a space like Reddit. So this world of small rooms and big rooms is a world in which you might find yourself advocating for very tight speech restrictions if the community chooses them on specific platforms. And you might be advocating for very broad open rules in the large rooms with the notion that there’s always going to be conflict and there’s a need for moderation. 

Here is one of the problems that always comes up in these spaces. What happens if the community wants to have really terrible rules? What if the community is KiwiFarms and the rules are we’re going to find trans people and we’re going to harass them, preferably to death? What if that tiny room is Stormfront and we’re going to party like it’s 1939? We’re going to go right back to going after white nationalism and Christian nationalism and anti-Jewish and anti-Muslim? And things get really tricky when the group wants to trade Child Sexual Abuse Material (CSAM), because they certainly do. Or they want to create un-permissioned nonconsensual sexual imagery? What if it’s a group that wants to make images of Taylor Swift doing lots of things that she has never done or certainly has not circulated photos of? 

So I’ve been trying to think about this architecturally. So I think the way that I want to handle this architecturally is to have the friendly neighborhood algorithm shop. And the friendly neighborhood algorithm shop lets you do two things. It lets you view social media on a client that you control through a set of algorithms that you care about. So if you want to go in and say, “I don’t want any politics today,” or “I want politics, but only highly-verified news,” or “frankly, today give me nothing but puppies.” I think you should have the ability to choose algorithms that are going to filter your media, and choose to use them that way. But I also think the friendly neighborhood algorithm shop needs to serve platforms. And I think some platforms may say, “Hey, we’re going to have this set of rules and we’re going to enforce them algorithmically, and here are the ones we’re going to enforce by hand.” And I think certain algorithms are probably going to become de rigueur. 

I think having a check for known CSAM is probably a bare minimum for running a responsible platform these days. And having these sorts of tools that Facebook and such have created to scan large sets of images for known CSAM, making those tools available to even small platform operators is probably a very helpful thing to do. I don’t think you’re going to require someone to do this for a Mastodon node, but I think it’s going to be harder and harder to run a Mastodon node if you don’t have some of those basic protections in place. Now this gets real hard really quickly. It gets real hard because we know that some other databases out there—including databases of extremist and terrorist content—are not reviewable. We are concerned that those databases may be blocking content that is legitimate political expression, and we need to figure out ways to be able to audit these and make sure that they’re used correctly. We also, around CSAM specifically, are starting to experience a wave of people generating novel CSAM that may not actually involve an actual child, but are recombinations of images to create new scenarios. I’ve got to be honest with you, I don’t know what we’re going to do there. I don’t know how we anticipate it and block it, I don’t even know the legal status of blocking some of that imagery where there is not an actual child harmed. 

So these aren’t complete solutions. But I think getting to the point where we’re running a lot of different communities, we have an algorithmic toolkit that’s available to try to do some of that moderation that we want around the community, and there is an expectation that you’re doing that work. And if you’re not, it may be harder and harder to keep that community up and running and have people interact and interoperate with you. I think that’s where I find myself doing a lot of thinking and a lot of advocacy these days. 

We did a piece a few months ago called “The Three Legged Stool,” which is our manifesto for how to do a pluraverse internet and also have moderation and governability. It’s this sort of idea that you want to have quite a bit of control through what we call the loyal client, but you also want the platforms to have the ability to use these sorts of things. So you’ve got folks out there who are basically saying, “Oh no, Mastodon is going to become a cesspit of CSAM.” And, you know, there’s some evidence of that. We’re starting to see some pockets of that. The truth is, I don’t think Mastodon is where it’s mostly happening. I think it’s mostly on much more closed channels. But something we’ve seen from day one is that when you have the ability to do user-generated content, you’re going to get pornography and some of that pornography is going to go beyond the bounds of legality. And you’re going to end up with that line between pornography and other forms of imagery that are legally prohibited. So there’s gotta be some architectural solution, and I think at some point, running a node without having thought about those technical and architectural solutions is going to start feeling deeply irresponsible. And I think there may be ways in which not only does it end up being irresponsible, but people may end up refusing services to you if you’re not putting those basic protections into place. 

York: Do you have a free speech or free expression hero? 

Oh, that’s interesting. I mean, I think this one is probably one that a lot of people are going to say, but it’s Maria Ressa. The place where free expression feels, to me, absolutely the most important to defend is in holding power to account. And what Maria was doing with Rappler in the Philippines was trying to hold an increasingly autocratic government responsible for its actions. And in the process she found herself facing very serious consequences—imprisonment, loss of employment, those sorts of things—and managed to find a way to turn that fight into something that called an enormous amount of attention to the Duterte government and opened global conversations about how important it is to protect journalistic freedom of expression. So I’m not saying that journalistic freedom of expression is the only freedom of expression that’s important, I think enormous swaths of freedom of expression are important, but I think it’s particularly important. And I think freedom of expression in the face of real power and real consequences is particularly worth lauding and praising. And I think Maria has done something very interesting, which is that she has implicated a whole bunch of other actors, not just the Philippines government, but also Facebook and also the sort of economic model of surveillance capitalism. And she encouraged people to think about how all of these are playing into freedom of expression conversations. So I think that ability to take a struggle where the consequences for you are very personal and very individual and turn it into a global conversation is incredibly powerful.

Podcast Episode: Chronicling Online Communities

21 May 2024 at 03:08

From Napster to YouTube, some of the most important and controversial uses of the internet have been about building community: connecting people all over the world who share similar interests, tastes, views, and concerns. Big corporations try to co-opt and control these communities, and politicians often promote scary narratives about technology’s dangerous influences, but users have pushed back against monopoly and rhetoric to find new ways to connect with each other.

(You can also find this episode on the Internet Archive and on YouTube.)

Alex Winter is a leading documentarian of the evolution of internet communities. He joins EFF’s Cindy Cohn and Jason Kelley to discuss the harms of behavioral advertising, what algorithms can and can’t be blamed for, and promoting the kind of digital literacy that can bring about a better internet—and a better world—for all of us. 

In this episode you’ll learn about: 

  • Debunking the monopolistic myth that communicating and sharing data is theft. 
  • Demystifying artificial intelligence so that it’s no longer a “black box” impervious to improvement. 
  • Decentralizing and democratizing the internet so more, diverse people can push technology, online communities, and our world forward. 
  • Finding a nuanced balance between free speech and harm mitigation in social media. 
  • Breaking corporations’ addiction to advertising revenue derived from promoting disinformation. 

Alex Winter is a director, writer and actor who has worked across film, television and theater. Best known on screen for “Bill & Ted’s Excellent Adventure” (1989) and its sequels as well as “The Lost Boys” (1987), “Destroy All Neighbors” (2024) and other films, he has directed documentaries including “Downloaded” (2013) about the Napster revolution; “Deep Web” (2015) about the online black market Silk Road and the trial of its creator Ross Ulbricht; “Trust Machine” (2018) about the rise of bitcoin and the blockchain; and “The YouTube Effect” (2022). He also has directed critically acclaimed documentaries about musician Frank Zappa and about the Panama Papers, the biggest global corruption scandal in history and the journalists who worked in secret and at great risk to break the story.   

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

ALEX WINTER
I think that people keep trying to separate the internet from any other social community, or just society, period. And I think that's very dangerous, because I think that it allows them to be complacent and to allow these companies to get more powerful and to have more control, and they're disseminating all of our information. Like, that's where all of our news comes from; it's how anyone understands what's going on on the planet. 

And I think that's the problem, is I don't think we can afford to separate those things. We have to understand that it's part of society and deal with making a better world, which means we have to make a better internet.

CINDY COHN
That’s Alex Winter. He’s a documentary filmmaker who is also a deep geek.  He’s made a series of films that chronicle the pressing issues in our digital age.  But you may also know him as William S. Preston, Esquire - aka Bill of the Bill and Ted movies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet. 

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. You know, at EFF we spend a lot of time pointing out the way things could go wrong – and then of course  jumping in to fight when they DO go wrong. But this show is about envisioning – and hopefully helping create – a better future.

JASON KELLEY
Our guest today, Alex Winter, is an actor and director and producer who has been working in show business for most of his life. But as Cindy mentioned, in the past decade or so he has become a sort of chronicler of our digital age with his documentary films. In 2013, Downloaded covered the rise and fall, and lasting impact, of Napster. 2015’s Deep Web – 

CINDY COHN
Where I was proud to be a talking head, by the way. 

JASON KELLEY
– is about the dark web and the trial of Ross Ulbricht who created the darknet market the Silk Road. And 2018’s Trust Machine was about blockchain and the evolution of cryptocurrency. And then most recently, The YouTube Effect looks at the history of the video site and its potentially dangerous but also beneficial impact on the world. That’s not to mention his documentaries on The Panama Papers and Frank Zappa. 

CINDY COHN
Like I said in the intro, looking back on the documentaries you’ve made over the past decade or so, I was struck with the thought that you’ve really become this chronicler of our digital age – you know, capturing some of the biggest online issues, or even shining a light a bit on some of the corners of the internet that people like me might live in, but others might not see so much. Where does that impulse come from for you?

ALEX WINTER
I think partly my age. I came up, obviously, before the digital revolution took root, and was doing a lot of work around the early days of CGI and had a lot of friends in that space. I got my first computer probably in ‘82 when I was in college, and got my first Mac in ‘83, got online by ‘84, dial-up era and was very taken with the nascent online communities at that time, the BBS and Usenet era. I was very active in those spaces. And I'm not at all a hacker, I was an artist and I was more invested in the spaces in that way, which a lot of artists were in the eighties and into the nineties, even before the web.

So I was just very taken with the birth of internet-based communities and the fact that it was such a democratized space, and I mean that, you know, literally – it was such an interesting mix of people from around the world who felt free to speak about whatever topics they were interested in. There were these incredible people from around the world who were talking about politics and art and everything in an extremely robust way.

But I also, um, it really seemed clear to me that this was the beginning of something, and so my interest from the doc side has always been charting the internet in terms of community, and what the impact of that community is on different things, either political or whatever. And that's why my first doc was about Napster, because, you know, fast forward to 1998, which for many people is ancient history, but for us was the future.

And you're still in a modem dial-up era, and you now have an online community that has over a hundred million people on it in real time around the world who could search each other's hard drives and communicate. What made me want to make docs, I think, was that Napster was the beginning of realizing this disparity between the media or the news or the public's perception of what the internet was and what my experience was.

Where Shawn Fanning was kind of being tarred as this pirate and criminal. And while there were obviously ethical considerations with Napster in terms of the distribution of music, that was not my experience. My experience was this incredibly robust community, and that had extreme validity and significance on a sort of human scale.

And that's, I think, what really prompted me to start telling stories in this space. I think if anyone's interested in doing anything, including what you all do there, it's because you feel like someone else isn't saying what you want to be said, right? And so you're like, well, I better say it because no one else is saying it. So I think that was the inspiration for me to spend more time in this space telling stories here.

CINDY COHN
That's great. And the stuff I hear in this is that, you know, first of all, the internet kind of erased distance, so you could talk to people all over the world from this device in your home or in one place. And that people were really building community. 

And I also hear, in terms of Napster, this huge disconnect between the kind of business-model view of music and music fans' views of music. One of the most amazing things for me was realizing that I could find somebody who had a couple of songs that I really liked and then look at everything else they liked. And it challenged this idea that only, kind of, professional music critics who have a platform can suggest music to you. It literally felt like a dam broke, and it opened up a world of music. It sounds like that was your experience as well.

ALEX WINTER
It was, and I think that really aptly describes the almost addictive fascination that people had with Napster, and the confusion, even retrospectively, in thinking that that addiction came from theft, from this desire to steal in large quantities. I mean, obviously you had kids in college dorm rooms pulling down gigabytes of music, but the pull, the attraction to Napster was exactly what you just said – like, I would find friends in Japan and Africa and Eastern Europe who had some weird, like, Coltrane bootleg that I'd never heard, and then I was like, oh, what else do they have? And then, here's what I have. And I have a very eclectic music collection. 

Then you start talking about art, then you start talking about politics, because it was a very robust forum, so everyone was talking to each other. So it really was community, and I think that gets lost, because the narrative wants to remain the narrative, in terms of gatekeepers, in terms of how capitalism works. And that power dynamic was so completely threatened by Napster that, you know, the wheels immediately cranked into gear to sort of create a narrative that was: if you use this, you're just a terrible human being. 

And of course what it created was the beginning of this kind of online rebellion, where people who before probably didn't think of themselves as technical, or even that interested in technology, were saying, well, I'm not this thing that you're saying I am, and now I'm really going to rebel against you. Now I'm really going to dive into this space. And I think that it actually created more people entering online communities and building online communities, because they didn't feel like they were understood or being adequately represented.

And that led all the way to the Arab Spring and Occupy, and so many other things that came up after that.

JASON KELLEY
The communities angle that you're talking about is probably really useful to our audience, I think, because they probably find themselves, and I certainly find myself, in a lot of the kinds of communities that you've covered. Which often makes me think, like, how is this guy inside my head?

How do you think about the sort of communities that you need to, or want to, chronicle? I know you mentioned this disconnect between the way the media covers it and the actual community. But, like, I'm wondering, what do you see now? Are there communities that you've missed the boat on covering?

Or things that you want to cover at this moment that just aren't getting the attention that you think they should?

ALEX WINTER
I honestly just follow the things that interest me the most. I don't particularly… look, I don't see myself, you know, in brackets, as a chronicler of anything. I'm not that… you know, I have a more modest view of myself. So I really just respond to the things that I find interesting, on two tracks, one being what I'm personally being impacted by.

So I'm not really like an outsider viewing, like, what will I cover next or what topics should I address; it's about what's really impacting me personally. I was hugely invested in Napster. I mean, I was going into my office on weekends and powering every single computer up all weekend onto Napster for the better part of a year. I mean, Fanning laughed at me when I met him, but -

CINDY COHN  
Luckily, the statute of limitations may have run on that, that's good.

ALEX WINTER
Yeah, exactly. 

JASON KELLEY  
Yeah, I'm sure you're not alone.

ALEX WINTER
Yeah, but I mean, as I told Don Ienner when I did the movie, I was like, dude, I'd already bought all this music like nine times over: on vinyl, on cassette, on CD. I think I even had Elcasets at one point. So the record industry still owes me money, as far as I’m concerned.

CINDY COHN
I agree.

ALEX WINTER
But no, it was really a personal investment. Even, you know, my interest in the blockchain and Bitcoin, which I have mixed feelings about, I really tried to cover almost more from a political angle. I was interested, same with Deep Web in a way, in how the sort of counter-narratives were building online, and how people were trying to create systems and spaces online once online became corporatized, which it really did as soon as the web appeared. What did people do in response to the corporatization of these spaces? 

And that's why I was covering Lauri Love's case in England, and eventually Barrett Brown's case, and then the Silk Road, which I was mostly interested in for the same reason as Napster: who were these people, what were they talking about, what drew them to this space? Because it was a very clunky, clumsy way to buy drugs, if that was really what you wanted to do, and Bitcoin is a terrible tool for crime, as everyone now knows, I think, but didn't so well back then.

So what was really compelling people? A lot of it was, again, that the Silk Road was very much like the alt and rec world of the early Usenet days. A lot of divergent voices and politics and things like that. 

So YouTube is different, because Gale Anne Hurd, the producer, had approached me and asked me if I wanted to tackle this with her. And I'd been looking at Google, largely, and that was why I had a personal interest. And I've got three boys, all of whom came up in the YouTube generations. They all moved off of regular TV and onto their laptops at a certain point in their childhood, and just were on YouTube for everything.

So I wanted to look at the corporatization of the internet: what is the societal impact of the fact that our largest online community, which is YouTube, is owned by arguably the largest corporation on the planet, which is also a monopoly, which is also a black box?

And what does that mean? What are the societal implications of that? So that was the kind of motive there, but it still was looking at it as a community, largely.

CINDY COHN
So the conceit of the show is that we're trying to fix the internet, and I want to know: you've done a lot to shine these stories in different directions, but what does it look like if we get it right? What are the things that we will see if we build the kind of online communities that are better than, I think, the ones that are getting the most attention now?

ALEX WINTER
I think that, you know, I've spent the last two years since I made the film and up until very recently on the road, trying to answer that question for myself, really, because I don't believe I have the answer that I need to bestow upon the world. I have a lot of questions, yeah. I do have an opinion. 

But right now, I mean, I generally feel, like many people do, that we slept – I mean, you all didn't, but many people slept on the last 20 years, right? And so there's a kind of reckoning now, because we let these corporations get away with murder, literally and figuratively. And I think that we're in a phase of debunking various myths, and I think that's going to take some time before we can actually even do the work to make the internet better. 

But, you know, a big thing for me, a large thesis that I had in making The YouTube Effect, was to kind of debunk the theory of the rabbit hole and the algorithm as being some kind of all-encompassing evil. Because I think, sort of like we're seeing in AI now with this rhetoric that AI is going to kill everybody, those are very agenda-based narratives to me. They convince the public that this is all beyond them, and that they should just go back to their homes, and keep buying things and eating food, and ignore these thorny areas in which they have no expertise, and leave it to the experts.

And of course, that means the status quo is upheld. The corporations keep doing whatever they want and they have no oversight, which is what they want. Every time Sam Altman says AI is going to kill the world, he's just saying, OpenAI is a black box, please leave us alone and let us make lots of money and go away. And that's all that means. So I think that we have to start looking at the internet and technology as being run by people. There aren't even that many people running it; there's only a handful of people running the whole damn thing for the most part. They have agendas, they have motives, they have political affiliations, they have a capitalist orientation.

So I think we need to really start looking at the internet in a much more specific way. I know that you all have been doing this for a long time; most people do not. So I think more of that, more calling people on the carpet, more specificity. 

The other thing that we're seeing, and again, I'm preaching to the choir here with EFF, but like any time the public or the government or the media wakes up to something that they're behind, their inclination of how to fix it is way wrong, right?

And so that's the other place that we're at right now, like with KOSA and the DSA and the Section 230 reform discussions, and they're bananas. And you feel like you're screaming into a chasm, right? Because if you say these things, people treat you like you're some kind of lunatic. Like, what do you mean you don't want to turn off Section 230? That would solve everything! I'm like, it wouldn't, it would just break the internet! So I feel a little, you know, like a Cassandra, but you do feel like you're yowling into a void. 

And so I do think that it's going to take a minute to fix the internet. But I think we'll get there. I think the new generations are smarter, and the stakes are higher for them. You know, kids in school… well, I don't think the internet or social media is necessarily bad for kids, like, full stop. There's a lot of propaganda there. But I think that, you know, they don't want harms. They want a safer environment for themselves. They don't want to stop using these platforms. They just want them to work better. 

But what's happened in the last couple of years, and I think it is a good thing, is that people are breaking off and forming their own communities again. Even kids. Even my teenagers started doing it during COVID. Even on Discord, they would create their own servers no one could get on but them. There was no danger of, like, being infiltrated by crazy people. All their friends were there. They could bring other friends in, they could talk about whatever issues they wanted to talk about. So there's a kind of return to a fractured or fragmented or smaller set of communities.

And I think if the internet continues to go that way, that's a good thing, right? That you don't have to be on TikTok or YouTube or whatever to find your people. And I think for grownups, the silver lining of what happened with Twitter, with, you know, Elon Musk buying it and immediately turning it into a Nazi crash pad, is that the average adult realized they didn't have to be there either, right? That they don't have to just use one place, that the internet is filled with little communities that they could go to to talk to their friends. 

So I think we're back in this kind of Wild West like we almost were pre-web and at the beginning of the web and I think that's good.  But I do think there's an enormous amount of misinformation and some very bad policy all over the world that is going to cause a lot of harm.

CINDY COHN
I mean, that's kind of my challenge to you: once we've realized that things are broken, how do we evaluate all the people who are coming in and claiming that they have the fix? And you know, in The YouTube Effect, you talked to Carrie Goldberg. She has a lot of passion.

I think she's wrong about the answer. She's, I think, done a very good job illuminating some of the problems, especially for specific communities, people facing domestic violence and doxing and things like that. But she's rushed to a really dangerous answer for the internet overall. 

So I guess my challenge is, how do we help people think critically about not just the problems, but the potential issues with solutions? You know, the TikTok bans are something that's going on across the country now, and it feels like the Napster days, right?

ALEX WINTER
Yeah, totally.

CINDY COHN
People have focused on a particular issue and used it to try to say, Oh, we're just going to ban this. And all the people who use this technology for all the things that are not even remotely related to the problem are going to be impacted by this “ban-first” strategy.

ALEX WINTER
Yeah. I mean, it's media literacy. It's digital literacy. One of the most despairing things for me making docs in this space is how much prejudice there is against making docs in this space. You know, people consider the internet… especially, you know, a huge swath of people do, because obviously the far right has their agenda, which is just to silence everybody they don't agree with, right? I mean, the left can do the same thing, but the right is very good at it.  

The left, where they make mistakes, or, you know, center to left, is that they're ignorant about how these technologies work, and so their solutions are wrong. We see that over and over. They have really good intentions, but the solutions are wrong, and they don't actually make sense for how these technologies work. We're seeing that in AI. That was an area that I was trying to do as much work as I could in during the Hollywood strike, to educate people about AI, because they were so completely misinformed and their fixes were not fixes. They were not effective and they would not be legally binding. And it was despairing, only because it's kind of frowned upon to say anything about technology other than don't use it.

CINDY COHN
Yeah.

ALEX WINTER
Right? Like, even with other documentaries, the thesis is like, well, just, you know, tell your kids they can't be online, like, tell them to read more literature.

Right? And it just drives me crazy, because I'm like, I'm a progressive lefty and my kids are all online, and guess what? They still read books and, like, play music and go outside. So it's this kind of very binary, black-or-white attitude towards technology, like, ‘Oh, it's just bad. Why can't we go back to the old days?’

CINDY COHN
And I think there's a false sense that if we just could turn back the clock pre internet, everything was perfect. Right? My friend Cory Doctorow talks about this, like how we need to build the great new world, not the good old world. And I think that's true even for, you know, Internet oldies like you and me who are thinking about maybe the 80s and 90s.

Like, I think we need to embrace where we are now and then build the better world forward. Now, I agree with you strongly about decentralization in smaller communities. As somebody who cares about free speech and privacy, I don't see a way to solve the free speech and privacy problems of the giant platforms.

We're not going to get better dictators. We need to get rid of the dictators and make a lot more smaller, not necessarily smaller, but different spaces, differently governed spaces. But I agree with you that there is this rush to kind of turn back the clock and I think we should try to turn it forward. And again, I kind of want to push you a little bit. What does the turning it forward world look like?

ALEX WINTER
I mean, I have really strong opinions about that. I mean, thankfully, my kids are very tech savvy, like any kid. And I pay attention to what they're doing, and I find it fascinating. And the thing about thinking backwards is that it's a losing proposition. Because the world will leave you behind.

Because the world's not going to go backwards. And the world is only going to go forward. And so you either have a say in what that looks like, or you don't. 

I think two things have to happen. One is media literacy and a sort of weakening of this narrative that it's all bad, so that more people, intelligent people, are getting involved in the future. I think that will help adults get immersed into new technologies and new communities and what's going on. I think at the same time that we have to be working harder to attack the tech monopolies. 

I think being involved, as opposed to being, um, abstinent, is really, really important. And I think more of that will happen with new generations, because then your eyes and your ears are open, and you'll find new communities and the like. But at the same time, we have to work much harder, because this idea that we're allowing Big Tech to police themselves is just ludicrous, and that's still the world that we're in, and it just drives me crazy. You know, they have one agenda, which is profit, and they don't care about anything else. And power.

And I think that's the danger of AI. I mean, it's not that we're all gonna die by robots. It's just that this sort of capitalist machine is gonna roll along unchecked. That's the problem, and it will eat labor, and it will eat other companies, and that's the problem.

CINDY COHN  
I mean, I think that's one of the tricky parts about, you know, kind of the Sam Altman shift, right, from “don't regulate us” to “please regulate us.” Behind that “please regulate us” is, you know, “and we'll tell you what the regulations look like, because we're the only ones, these giant gurus, who can understand enough about it to figure out how to regulate us.”

And I just think, you know, it's important to recognize that it's a pivot, but I think you could get tricked into thinking that's actually better. And I don't actually think it is.

ALEX WINTER
It’s 100 percent agenda-based. I mean, it's not only not better, it's completely self-serving. And I think that as long as we are following these people as opposed to leading them, we're going to have a problem.

CINDY COHN
Absolutely.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Alex Winter about YouTube.

ALEX WINTER
There's a lot of information there that's of extreme value: medical, artistic, historical, political. In the film, we go to great lengths to show how Caleb Cain, who got kind of pulled into and radicalized by the proliferation of far-right, even neo-Nazi and nationalist, white supremacist content (which still proliferates on YouTube, because it really is not algorithm-oriented, it's business- and incentive-based), was himself unindoctrinated by ContraPoints, by Natalie Wynn's channel. 

And you have to understand that, you know, more teenagers watch YouTube than Netflix. Like, it is everything. It is, by an order of magnitude, so much more of how they spend their time consuming media than anything else. And they're watching their friends talk, they're watching political speakers talk, they're watching, you know… my son, who's young, has his various interests, from photography to weightlifting to whatever. All of that's coming from YouTube. All of it.

And they're pretty good at discerning the crap from the rest, you know. Though, like, a lot of the studies now show you have to be generally predisposed to this kind of content to really go down into the sort of darker areas, and some younger people can be.

You know, I often say that the greatest solution for people who end up getting radicalized on YouTube is more YouTube, right? It's to find the people on YouTube who are doing good. And I think that's one of the big misunderstandings about disinfo: you can consume good sources, you just have to find them. And people are actually better at discerning truth from lies if that's really what they want to do, as opposed to, like, I just want to get awash in QAnon or whatever. 

I think YouTube started, not necessarily with pure intentions, but I think they did start with some good intentions in terms of intentionally democratizing the landscape and voices, and allowing in people from marginalized groups and under autocratic governments. They allowed, and they promoted, that content, and they created the age of the democratized influencer.

That was intentional. And I would argue that they did a better job of that than my industry did. And I think my industry followed their lead. I think the diversity initiatives in Hollywood came after, because Hollywood, like everyone else, is driven by money only, and they were like, oh my God, there are these giant trans and African and Chinese influencers that have huge audiences; we should start allowing more people to have a voice in our business too, because we'll make money off of them. But I think that now YouTube has grown so big and so far beyond that, and it's making them so much money, and they're so incentivized to promote disinformation, propaganda, sort of violent content, because it just makes so much money for them on the ad side, that it's sort of a runaway train at this point.

CINDY COHN
One of the things that EFF has taken a stand on is banning behavioral advertising. And I think one of the things you did in The YouTube Effect is take a hard look at how big a role the algorithm is actually playing. And I think the movie kind of points out that it's not as big a role as people who want an easy answer to the problem say it is.

We've been thinking about this from the privacy perspective, and we decided that behavioral advertising was behind so many of the problems we had, and I wondered, um, how you think about that, because that is the kind of tracking and targeting that feeds some of those algorithms, but it does a lot more.

ALEX WINTER
Yeah, for all the hue and cry that they can't moderate their content, I think that there's absolutely no doubt that they can. And again, this is an area that EFF specifically specializes in. But I think that the question of free speech, and what constitutes free speech as opposed to what they could actually be doing to mitigate harms, is very nuanced.

And it serves them to say that it is not. That it's not nuanced and it's either, either they're going to be shackling free speech or they should be left alone to do whatever they want, which is make money off of advertising, a lot of which is harmful. So I think getting into the weeds on that is extremely important.

You know, a recent example was just how they stopped deplatforming all the Stop the Steal content, which they were doing very successfully. The just flat-out, you know, 2020 election propaganda. And, you know, that gets people hurt. I mean, it can get people killed. And it's really not hard to do, but they make more money if they allow this kind of rampant, aggressive, propagandized advertising, as well as content, on their platform.

I just think that we have to be looking at advertising and how it functions in a very granular way. Because the whole thesis of The YouTube Effect, such as we had one, is that this is not about an algorithm, it's about a business model. 

These are business incentives. It's no different, I've been saying this everywhere: it's exactly the same as the Hearst and Pulitzer wars of the late 1800s. It's the same. It's just: we want to make money, we know what attracts eyeballs, we want to advertise and make money from ad revenue by pumping out this garbage, because people eat it up. It's really similar to that. That doesn't require an algorithm. 

CINDY COHN
My dream is that Alex Winter makes a movie that helps us evaluate all the things that people who are worried about the internet are jumping in to say we ought to do, and helps give people that kind of evaluative power. Because we do see, over and over again, this rush to go to censorship, which, you know, is problematic for free expression but also just won't work, and this kind of gliding over the idea that privacy has anything to do with online harms and that standing up for privacy will do anything.

I just feel like sometimes, this literacy place needs to be both about the problems and about critically thinking about the things that are being put forward as solutions.

ALEX WINTER
Yeah, I mean, I've been writing a lot about that for the last two years. I've written, I think, I don't know, countless op-eds. And there are way smarter people than me, like you all and Cory Doctorow, writing about this like crazy. And I think all of that is having an impact. I think the building blocks of proper internet literacy are being set. 

CINDY COHN
Well I appreciate that you've got three kids who are, you know, healthy and happy using the internet because I think those stories get overlooked as well. Not that there aren't real harms. It's just that there's this baby with the bathwater kind of approach that we find in policymaking.

ALEX WINTER
Yeah, completely. So I think that people feel like their arms are being twisted, that they have to say these hyper-negative things or fall in line with these narratives. You know, a movie requires characters, right? And I would need a court case or something to follow to find the way in, and I've always got my eyes out for that. But I do think we're at a kind of critical point.

It's really funny. When I made this film… I'm friends with a lot of different film critics. I've just been around a long time, and I like, you know, reading good film criticism. And one of them, who I respect greatly, was like, I don't want to review your movie, because I really didn't like it and I don't want to give you a really bad review.

And I said, well, why didn't you like it? And he's like, because I just didn't like your perspective. And I was like, well, what didn't you like about my perspective? Like, well, you just weren't hard enough on YouTube. Like, you didn't just come right out and say they're just terrible and no one should be using it.

And I was like, you're the problem. And there's so much of that, um, that I feel like there is, you know, a bias that is going to take time to overcome. No matter what anyone says or whatever film anyone makes, we just have to kind of keep chipping away at it.

JASON KELLEY
Well, it's a shame we didn't get a chance to talk to him about Frank Zappa. But what we did talk to him about was probably more interesting to our audience. The thing that stood out to me was the way he sees these technologies and sort of focuses his documentaries on the communities that they facilitate.

And that was just sort of a, I think, useful way to think about, you know, everything from the deep web to blockchain to YouTube. To Napster, just like he sees these as building communities and those communities are not necessarily good or bad, but they have some really positive elements and that led him to this really interesting idea of, of a future of smaller communities, which I think, I think we all agree with.

Does that sound sort of like what you pulled away from the conversation, Cindy?

CINDY COHN
I think that's right. And I also think he was really smart at noticing the difference between what it was like to be inside some of those communities and how they got portrayed in broader society. And pointing out that when corporate interests, who were the copyright interests, saw what was happening on Napster, they very quickly put together a narrative that everybody was pirates, which was very different from how it felt to be inside that community and having access to all of that information. And that disconnect raises, you know, the question of what happens when the people who control our broader societal conversation are corporate interests with their own commercial interests at heart.

And that gap, between the public story and what it's like to be inside the communities, is what connected the Silk Road story with the Napster story. And in some ways YouTube is interesting because it's actually gigantic. It's not a little corner of the internet. But I think he's trying to lift up both the issues that we see in YouTube that are problematic, and also all the other things inside YouTube that are not problematic and, as he pointed out in the story about Caleb Cain, you know, can be part of the solution to pulling people out of the harms. 

So I really appreciate this focus. I think it really hearkens back to, you know, one of the coolest things about the internet when it first came along was this idea that we could build communities free of distance and outside of the corporate spaces.

JASON KELLEY
Yeah. And the point you're making about his recognition of who gets to decide what's to blame, I think, leads us right to the conversation around YouTube: it's easy to blame the algorithm when what's actually driving a lot of the problems we see with the site are corporate interests and engagement with the kind of content that gets people riled up and also makes a lot of money.

And I just love that he's able to sort of parse out these nuances in a way that surprisingly few people do, um, you know, across media and journalism and certainly, unfortunately, in government.

CINDY COHN
Yeah, and I think that, you know, it's fun to have a conversation with somebody who kind of gets it at this level, about the problems, and who name-checked issues that EFF has been working on for a long time, whether that's KOSA or Section 230 or algorithmic issues, and about how wrongheaded the solutions are.

I appreciate that it kind of drives him crazy, in the way it drives me crazy, that once you've articulated the harms, people seem to rush towards solutions, or at least are pushed towards solutions, that are not getting us out of this corporate control, but rather in some ways putting us deeper into it.

And he's already seeing that in the AI push for regulation. I think he's exactly right about that. I don't know if I convinced him to make his next movie about all of these solutions and how to evaluate them. I'll have to keep trying. He may not; that may not be where he gets his inspiration.

JASON KELLEY
We'll see. I mean, at least, if nothing else, EFF is in many of the documentaries that he has made, and my guess is that we'll continue to be a voice of reason in the ones he makes in the future.

CINDY COHN
I really appreciate that Alex has taken his skills and talents and platforms to really lift up the kind of ordinary people who are finding community online and help us find ways to keep that part, and even lift it up as we move into the future.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. 

In this episode you heard Perspectives by J.Lang featuring Sackjo22 and Admiral Bob 

You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Shots Fired: Congressional Letter Questions DHS Funding of ShotSpotter

20 May 2024 at 19:38

There is a growing pile of evidence that cities should drop ShotSpotter, the notorious surveillance system that purportedly uses acoustic sensors to detect gunshots, due to its inaccuracies and the danger it creates in communities where it’s installed. In yet another blow to the product and the surveillance company behind it—SoundThinking—Congress members have sent a letter calling on the Department of Homeland Security to investigate how it provides funding to local police to deploy the product.

The seven-page letter, from Senators Ed Markey, Ron Wyden, and Elizabeth Warren, and Representative Ayanna Pressley, begins by questioning the “accuracy and effectiveness” of ShotSpotter, and then outlines some of the latest evidence of its abysmal performance, including multiple studies showing false positive rates—i.e. incorrectly classifying non-gunshot sounds as gunshots—at 70% or higher. In addition to its ineffectiveness, the Congress members voiced their serious concerns regarding ShotSpotter’s contribution to discrimination, civil rights violations, and poor policing practices due to the installation of most ShotSpotter sensors in overwhelmingly “Black, Brown and Latin[e] communities” at the request of local law enforcement. Together, the inefficacy of the technology and these placements can result in police being deployed to what they expect to be a dangerous situation with guns drawn, increasing the chances of all-too-common police violence against civilians in the area.

In light of the grave concerns raised by the use of ShotSpotter, the lawmakers are demanding that DHS investigate its funding, and whether it’s an appropriate use of taxpayer dollars. We agree: DHS should investigate, and should end its program of offering grants to local law enforcement agencies to contract with SoundThinking. 

The letter can be read in its entirety here.

Georgia Prosecutors Stoke Fears over Use of Encrypted Messengers and Tor

20 May 2024 at 16:23

In an indictment against Defend the Atlanta Forest activists in Georgia, state prosecutors are citing the use of encrypted communications to fearmonger. Alleging that the defendants in the indictment—who include journalists and lawyers, in addition to activists—were responsible for a number of crimes related to the Stop Cop City campaign, the state Attorney General’s prosecutors cast suspicion on the defendants’ use of Signal, Telegram, Tor, and other everyday data-protecting technologies.

“Indeed, communication among the Defend the Atlanta Forest members is often cloaked in secrecy using sophisticated technology aimed at preventing law enforcement from viewing their communication and preventing recovery of the information,” the indictment reads. “Members often use the dark web via Tor, use end-to-end encrypted messaging app Signal or Telegram.”

The secure messaging app Signal is used by tens of millions of people, and has hundreds of millions of global downloads. In 2021, users moved to the nonprofit-run private messenger en masse as concerns were raised about the data-hungry business models of big tech. In January of that year, former world’s richest man Elon Musk tweeted simply “Use Signal.” And world-famous NSA whistle-blower Edward Snowden tweeted in 2016 what in information security circles would become a meme and truism: “Use Tor. Use Signal.”

Despite what the bombastic language would have readers believe, installing and using Signal and Tor is not an initiation rite into a dark cult of lawbreaking. The “sophisticated technology” being used here are apps that are free, popular, openly distributed, and widely accessible by anyone with an internet connection. Going further, the indictment ascribes the intentions of those using the apps as simply to obstruct law enforcement surveillance. Taking this assertion at face value, any judge or reporter reading the indictment is led to believe everyone using the app simply wants to evade the police. The fact that these apps make it harder for law enforcement to access communications is exactly because the encryption protocol protects messages from everyone not intended to receive them—including the users’ ISP, local network hackers, or the Signal nonprofit itself.

Elsewhere, the indictment homes in on the use of anti-surveillance techniques to further its tenuous attempts to malign the defendants: “Most ‘Forest Defenders’ are aware that they are preparing to break the law, and this is demonstrated by premeditation of attacks.” Among a laundry list of other techniques, the preparation is supposedly marked by “using technology avoidance devices such as Faraday bags and burner phones.” Stoking fears around the use of anti-surveillance technologies sets a dangerous precedent for all people who simply don’t want to be tracked wherever they go. In protest situations, carrying a prepaid disposable phone can be a powerful defense against being persecuted for participating in First Amendment-protected activities. Vilifying such activities as the acts of wrongdoers would befit totalitarian societies, not ones in which speech is allegedly a universal right.

To be clear, prosecutors have apparently not sought to use court orders to compel either the defendants or the companies named to enter passwords or otherwise open devices or apps. But vilifying the defendants’ use of common-sense encryption is a dangerous step in cases that the DeKalb County District Attorney has already dropped out of, citing “different prosecutorial philosophies.”

Using messengers which protect user communications, browsers which protect user anonymity, and employing anti-surveillance techniques when out and about are all useful strategies in a range of situations. Whether you’re looking into a sensitive medical condition, visiting a reproductive health clinic with the option of terminating a pregnancy, protecting trade secrets from a competitor, wish to avoid stalkers or abusive domestic partners, protecting attorney-client exchanges, or simply want to keep your communications, browsing, and location history private, these techniques can come in handy. It is their very effectiveness which has led to the widespread adoption of privacy-protective technologies and techniques. When state prosecutors spread fear around the use of these powerful techniques, this sets us down a dangerous path where citizens are more vulnerable and at risk.

Sunsetting Section 230 Will Hurt Internet Users, Not Big Tech 

20 May 2024 at 13:02

As Congress appears ready to gut one of the internet’s most important laws for protecting free speech, they are ignoring how that law protects and benefits millions of Americans’ ability to speak online every day.  

The House Energy and Commerce Committee is holding a hearing on Wednesday on a bill that would end Section 230 (47 U.S.C. § 230) in 18 months. The authors of the bill argue that setting a deadline to either change or eliminate Section 230 will force the Big Tech online platforms to the bargaining table to create a new regime of intermediary liability. 

Take Action

Ending Section 230 Will Make Big Tech Monopolies Worse

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion for what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it needs to abandon this bill and spend time listening to the small services and everyday users who would be harmed should they repeal Section 230.  

Section 230 Protects Everyday Internet Users 

The bill introduced by House Energy & Commerce Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) is based on a series of mistaken assumptions and fundamental misunderstandings about Section 230. Mike Masnick at TechDirt has already explained many of the flawed premises and factual errors that the co-sponsors have made. 

We won’t repeat the many errors that Masnick identifies. Instead, we want to focus on what we see as a glaring omission in the co-sponsor’s argument: how central Section 230 is to ensuring that every person can speak online.   

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that “no provider or user shall be treated as the publisher” of content created by another. That’s in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of other people.   

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services. 

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. 

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

The co-sponsors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

Nothing could be further from the truth. If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and in cases of illegal behavior, work with law enforcement to hold those users responsible. 

Take Action

Tell Congress: Ending Section 230 Will Hurt Users

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. That law strongly discourages services from moderating or even knowing about user-generated content. This is because the more a service moderates user content, the more likely it is to be held liable for that content. Under that legal regime, online services will have a huge incentive to just not moderate and not look for bad behavior. Taking the sponsors of the bill at their word, this would result in the exact opposite of their goal of protecting children and adults from harmful content online.  

EFF to Court: Electronic Ankle Monitoring Is Bad. Sharing That Data Is Even Worse.

17 May 2024 at 13:59

The government violates the privacy rights of individuals on pretrial release when it continuously tracks, retains, and shares their location, EFF explained in a friend-of-the-court brief filed in the Ninth Circuit Court of Appeals.

In the case, Simon v. San Francisco, individuals on pretrial release are challenging the City and County of San Francisco’s electronic ankle monitoring program. The lower court ruled the program likely violates the California and federal constitutions. We—along with Professor Kate Weisburd and the Cato Institute—urge the Ninth Circuit to do the same.

Under the program, the San Francisco County Sheriff collects and indefinitely retains geolocation data from people on pretrial release and turns it over to other law enforcement entities without suspicion or a warrant. The Sheriff shares both comprehensive geolocation data collected from individuals and the results of invasive reverse location searches of all program participants’ location data to determine whether an individual on pretrial release was near a specified location at a specified time.

Electronic monitoring transforms individuals’ homes, workplaces, and neighborhoods into digital prisons, in which devices physically attached to people follow their every movement. All location data can reveal sensitive, private information about individuals, such as whether they were at an office, union hall, or house of worship. This is especially true for the GPS data at issue in Simon, given its high degree of accuracy and precision. Both federal and state courts recognize that location data is sensitive, revealing information in which one has a reasonable expectation of privacy. And, as EFF’s brief explains, the Simon plaintiffs do not relinquish this reasonable expectation of privacy in their location information merely because they are on pretrial release—to the contrary, their privacy interests remain substantial.

Moreover, as EFF explains in its brief, this electronic monitoring is not only invasive, but ineffective and (contrary to its portrayal as a detention alternative) an expansion of government surveillance. Studies have not found significant relationships between electronic monitoring of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do studies show that law enforcement is employing electronic monitoring with individuals they would otherwise put in jail. To the contrary, studies indicate that law enforcement is using electronic monitoring to surveil and constrain the liberty of those who wouldn’t otherwise be detained.

We hope the Ninth Circuit affirms the trial court and recognizes the rights of individuals on pretrial release against invasive electronic monitoring.

EFF Urges Ninth Circuit to Hold Montana’s TikTok Ban Unconstitutional

17 May 2024 at 13:02

Montana’s TikTok ban violates the First Amendment, EFF and others told the Ninth Circuit Court of Appeals in a friend-of-the-court brief and urged the court to affirm a trial court’s holding from December 2023 to that effect.

Montana’s ban (which EFF and others opposed) prohibits TikTok from operating anywhere within the state and imposes financial penalties on TikTok or any mobile application store that allows users to access TikTok. The district court recognized that Montana’s law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” and blocked Montana’s ban from going into effect. Last year, EFF—along with the ACLU, Freedom of the Press Foundation, Reason Foundation, and the Center for Democracy and Technology—filed a friend-of-the-court brief in support of TikTok and Montana TikTok users’ challenge to this law at the trial court level.

As the brief explains, Montana’s TikTok ban is a prior restraint on speech that prohibits Montana TikTok users—and TikTok itself—from posting on the platform. The law also prohibits TikTok’s ability to make decisions about curating its platform.

Prior restraints such as Montana’s ban are presumptively unconstitutional. For a court to uphold a prior restraint, the First Amendment requires it to satisfy the most exacting scrutiny. The prior restraint must be necessary to further an urgent interest of the highest magnitude, and the narrowest possible way for the government to accomplish its precise interest. Montana’s TikTok ban fails to meet this demanding standard.

Even if the ban is not a prior restraint, the brief illustrates that it would still violate the First Amendment. Montana’s law is a “total ban” on speech: it completely forecloses TikTok users’ speech with respect to the entire medium of expression that is TikTok. As a result, Montana’s ban is subject to an exacting tailoring requirement: it must target and eliminate “no more than the exact source of the ‘evil’ it seeks to remedy.” Montana’s law is undeniably overbroad and fails to satisfy this scrutiny.

This appeal is happening in the immediate aftermath of President Biden signing into law federal legislation that effectively bans TikTok in its current form, by requiring TikTok to divest of any Chinese ownership within 270 days. This federal law raises many of the same First Amendment concerns as Montana’s.

It’s important that the Ninth Circuit take this opportunity to make clear that the First Amendment requires the government to satisfy a very demanding standard before it can impose these types of extreme restrictions on Americans’ speech.

Fair Use Still Protects Histories and Documentaries—Even Tiger King

15 May 2024 at 16:28

Copyright’s fair use doctrine protects lots of important free expression against the threat of ruinous lawsuits. Fair use isn’t limited to political commentary or erudite works – it also protects popular entertainment like Tiger King, Netflix’s hit 2020 documentary series about the bizarre and sometimes criminal exploits of a group of big cat breeders. That’s why a federal appeals court’s narrow interpretation of fair use in a recent copyright suit threatens not just the producers of Tiger King but thousands of creators who make documentaries, histories, biographies, and even computer software. EFF and other groups asked the court to revisit its decision. Thankfully, the court just agreed to do so.

The case, Whyte Monkee Productions v. Netflix, was brought by a videographer who worked at the Greater Wynnewood Exotic Animal Park, the Oklahoma attraction run by Joe Exotic that was chronicled in Tiger King. The videographer sued Netflix for copyright infringement over the use of his video clips of Joe Exotic in the series. A federal district court in Oklahoma found Netflix’s use of one of the video clips—documenting Joe Exotic’s eulogy for his husband Travis Maldonado—to be a fair use. A three-judge panel of the Court of Appeals for the Tenth Circuit reversed that decision and remanded the case, ruling that the use of the video was not “transformative,” a concept that’s often at the heart of fair use decisions.

The appeals court based its ruling on a mistaken interpretation of the Supreme Court’s opinion in Andy Warhol Foundation for the Visual Arts v. Goldsmith. Warhol was a deliberately narrow decision that upheld the Supreme Court’s prior precedents about what makes a use transformative while emphasizing that commercial uses are less likely to be fair. The Supreme Court held that commercial re-uses of a copyrighted work—in that case, licensing an Andy Warhol print of the artist Prince for a magazine cover when the print was based on a photo that was also licensed for magazine covers—required a strong justification. The Warhol Foundation’s use of the photo was not transformative, the Supreme Court said, because Warhol’s print didn’t comment on or criticize the original photograph, and there was no other reason why the foundation needed to use a print based on that photograph in order to depict Prince. In Whyte Monkee, the Tenth Circuit homed in on the Supreme Court’s discussion about commentary and criticism but mistakenly read it to mean that only uses that comment on an original work are transformative. The court remanded the case to the district court to re-do the fair use analysis on that basis.

As EFF, along with Authors Alliance, American Library Association, Association of Research Libraries, and Public Knowledge explained in an amicus brief supporting Netflix’s request for a rehearing, there are many kinds of transformative fair uses. People creating works of history or biography frequently reproduce excerpts from others’ copyrighted photos, videos, or artwork as indispensable historical evidence. For example, using sketches from the famous Zapruder film in a book about the assassination of President Kennedy was deemed fair, as was reproducing the artwork from Grateful Dead posters in a book about the band. Software developers use excerpts from others’ code—particularly declarations that describe programming interfaces—to build new software that works with what came before. And open government organizations, like EFF client Public.Resource.Org, use technical standards incorporated into law to share knowledge about the law. None of these uses involves commentary or criticism, but courts have found them all to be transformative fair uses that don’t require permission.

The Supreme Court was aware of these uses and didn’t intend to cast doubt on their legality. In fact, the Supreme Court cited to many of them favorably in its Warhol decision. And the Court even engaged in some non-commentary fair use itself when it included photos of Prince in its opinion to illustrate how they were used on magazine covers. If the Court had meant to overrule decades of court decisions, including its own very recent Google v. Oracle decision about software re-use, it would have said so.

Fortunately, the Tenth Circuit heeded our warning, and the warnings of Netflix, documentary filmmakers, legal scholars, and the Motion Picture Association, all of whom filed briefs. The court vacated its decision and asked for further briefing about Warhol and what it means for documentary filmmakers.

The bizarre story of Joe Exotic and his friends and rivals may not be as important to history as the Kennedy assassination, but fair use is vital to bringing us all kinds of learning and entertainment. If other courts start treating the Warhol decision as a radical rewriting of fair use law when that’s not what the Supreme Court said at all, many kinds of free expression will face an uncertain future. That’s why we’re happy that the Tenth Circuit withdrew its opinion. We hope the court will, as the Supreme Court did, reaffirm the importance of fair use.

The Cybertiger Strikes Again! EFF's 8th Annual Tech Trivia Night

Being well into spring, with the weather getting warmer, we knew it was only a matter of time till the Cybertiger awoke from his slumber. But we were prepared. Prepared to quench the Cybertiger's thirst for tech nerds ready to answer his obscure and fascinating tech trivia questions.

But how did we prepare for the Cybertiger's quiz? Well, with our 8th Annual Tech Trivia Night of course! We gathered fellow digital freedom supporters to test their tech know-how, and to eat delicious tacos, churros, and special tech-themed drinks, including LimeWire, Moderated Content, and Zero Cool.

Nine teams gathered before the Cybertiger, ready to battle for the *new* wearable first, second, and third place prizes:

[EFF's Tech Trivia Awards: an acrylic award with an image of a blue/pink tiger.]

But this year, the Cybertiger had a surprise up his sleeve! A new way to secure points had been added: bribes. Teams could donate to EFF to sway the judges and pad their point totals. Still, the first-place prize went to the honesty winner, the team with the highest score before bribes, so participants needed to be on their A-game to win!

At the end of round two of six, teams Bad @ Names and 0x41434142 were tied for first place, making for a tense game! It wasn’t until the bonus question after round two, when the Cybertiger asked each team, “What prompt would you use to jailbreak the Cybertiger AI?”, that team Bad @ Names pulled into first place with their answer.

By the end of round four, Bad @ Names was still in first place, leading by only three points! Could they win the bonus question again? This time, each team was asked to create a ridiculous company elevator pitch that would be on the RSA expo floor. (Spoiler alert: these company ideas were indeed ridiculous!)

After the sixth round of questions, the Cybertiger gave one last chance for teams to scheme their way to victory! The suspense built, but after some time, we got our winners... 

In third place, AI Hallucinations with 60 total points! 

In second place, and also winning the bribery award, 0x41434142, with 145 total points (bribes included)!

In first place, with the night's top honest score... Bad @ Names with 68 total points!

EFF’s sincere appreciation goes out to the many participants who joined us for a great quiz over tacos and drinks while never losing sight of EFF’s mission to drive the world towards a better digital future. Thank you to the digital freedom supporters around the world helping to ensure that EFF can continue working in the courts and on the streets to protect online privacy and free expression.

Thanks to EFF's Luminary Organizational Members DuckDuckGo, No Starch Press, and the Hering Foundation for their year-round support of EFF's mission. If you or your company are interested in supporting a future EFF event, or would like to learn more about Organizational Membership, please contact Tierney Hamilton.

Learn about upcoming EFF events when you sign up for our email list, or just check out our event calendar. We hope to see you soon!

Coalition to Calexico: Think Twice About Reapproving Border Surveillance Tower Next to a Public Park

14 May 2024 at 16:23

Update May 15, 2024: The letter has been updated to include support from the Southern Border Communities Coalition. It was re-sent to the Calexico City Council. 

On the southwest side of Calexico, a border town in California’s Imperial Valley, a surveillance tower casts a shadow over a baseball field and a residential neighborhood. In 2000, the Immigration and Naturalization Service (the precursor to the Department of Homeland Security (DHS)) leased the corner of Nosotros Park from the city for $1 a year for the tower. But now the lease has expired, and DHS component Customs & Border Protection (CBP) would like the city to re-up the deal.

[Map of Nosotros Park showing the location of the tower.]

But times—and technology—have changed. CBP’s new strategy calls for adopting powerful artificial intelligence technology to not only control the towers, but to scan, track and categorize everything they see.  

Now, privacy and social justice advocates including the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition have joined EFF in sending the city council a letter urging them to not sign the lease and either spike the project or renegotiate it to ensure that civil liberties and human rights are protected.  

The groups write:

The Remote Video Surveillance System (RVSS) tower at Nosotros Park was installed in the early 2000s when video technology was fairly limited and the feeds required real-time monitoring by human personnel. That is not how these cameras will operate under CBP's new AI strategy. Instead, these towers will be controlled by algorithms that will autonomously detect, identify, track and classify objects of interest. This means that everything that falls under the gaze of the cameras will be scanned and categorized. To an extent, the AI will autonomously decide what to monitor and recommend when Border Patrol officers should be dispatched. While a human being may be able to tell the difference between children playing games or residents getting ready for work, AI is prone to mistakes and difficult to hold accountable. 

In an era where the public has grave concerns on the impact of unchecked technology on youth and communities of color, we do not believe enough scrutiny and skepticism has been applied to this agreement and CBP's proposal. For example, the item contains very little in terms of describing what kinds of data will be collected, how long it will be stored, and what measures will be taken to mitigate the potential threats to privacy and human rights. 

The letter also notes that CBP’s tower programs have repeatedly failed to achieve the promised outcomes. In fact, the DHS Inspector General found that the early 2000s program “yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts.”

The groups are calling for Calexico to press pause on the lease agreement until CBP can answer a list of questions about the impact of the surveillance tower on privacy and human rights. Should the city council insist on going forward, they should at least require regular briefings on any new technologies connected to the tower and the ability to cancel the lease on much shorter notice than the 365 days currently spelled out in the proposed contract.  

One (Busy) Day in the Life of EFF’s Activism Team

14 May 2024 at 15:22

EFF is an organization of lawyers, technologists, policy professionals, and importantly–full-time activists–who fight to make sure that technology enhances rather than threatens civil liberties on a global scale. EFF’s activism team includes experienced issue experts, master communicators, and grassroots organizers who help to coordinate and orchestrate EFF’s activist campaigns that include but go well beyond litigation, technical analyses and solutions, and direct lobbying to legislators.

If you’ve ever wondered what it would be like to work on the activism team at EFF, or if you are curious about applying for a job at EFF, take a look at one exceptional (but also fairly ordinary) day in the life of five members of the team:

Jillian York, Director For International Freedom of Expression

I wake up around 9:00, make coffee, and check my email and internal messages (we use Mattermost, a self-hosted chat tool). I live in Berlin—between four and nine hours ahead of most of my colleagues—which on most days enables me to get some “deep work” done before anyone else is online.

I see that one of my colleagues in San Francisco left a late-night message asking for someone to edit a short blog post. No one else is awake yet, so I jump on it. I then work on a piece of writing of my own, documenting the case of Alaa Abd El Fattah, an Egyptian technologist, blogger, and EFF supporter who’s been imprisoned on and off for the past decade. After that, I respond to some emails and messages from colleagues from the day prior.

EFF offers us flexible hours, and since I’m in Europe I often have to take calls in the evening (6 or 7 pm my time is 9 or 10 am San Francisco time, when a lot of team meetings take place). I see this as an advantage, as it allows me to meet a friend for lunch and hit the gym before heading back to work. 

There’s a dangerous new bill being proposed in a country where we don’t have so much expertise, but which looks likely to have a greater impact across the region, so a colleague and I hop on a call with a local digital rights group to plan a strategy. When we work internationally, we always consult or partner with local groups to make sure that we’re working toward the best outcome for the local population.

While I’m on the call, my Signal messages start blowing up. A lot of the partners we work with in another region of the world prefer to organize there for reasons of safety, and there’s been a cyberattack on a local media publication. Our partners are looking for some assistance in dealing with it, so I send some messages to colleagues (both at EFF and other friendly organizations) to get them the right help.

After handling some administrative tasks, it’s time for the meeting of the international working group. In that group, we discuss threats facing people outside the U.S., often in areas that are underrepresented by both U.S. and global media.

After that meeting, it's off to prep for a talk I'll be giving at an upcoming conference. There have been improvements in social media takedown transparency reporting, but there are a lot of ways to continue that progress, and a former colleague and I will be hosting a mock game show about the heroes and anti-heroes of transparency. By the time I finish that, it's nearly 11 pm my time, so it's off to bed for me, but not for everyone else!

Matthew Guariglia, Senior Policy Analyst Responsible for Government Surveillance Advocacy

My morning can sometimes start surprisingly early. This morning, a reporter I often speak to called to ask if I had any comments about a major change to how Amazon Ring security cameras will allow police to request access to users’ footage. I quickly try to make sense of the new changes—Amazon’s press release doesn’t say nearly enough. Giving a statement to the press requires a brief huddle between me, EFF’s press director, and other lawyers, technologists, and activists who have worked on our Ring campaign over the last few years. Soon, we have a statement that conveys exactly what we think Amazon needs to do differently, and what users and non-users should know about this change and its impact on their rights. About an hour after that, we turn our brief statement into a longer blog post for everyone to read.

For the rest of the day now, in between other obligations and meetings, I take press calls or do TV interviews from curious reporters asking whether this change in policy is a win for privacy. My first meeting is with representatives of about a dozen mostly-local groups in the Bay Area, where EFF is located, about the next steps for opposing Proposition E, a ballot measure that greatly reduces the amount of oversight on the San Francisco Police Department concerning what technology they use. I send a few requests to our design team about printing window signs and then talk with our Activism Director about making plans to potentially fly a plane over the city. Shortly after that, I’m in a coalition meeting of national civil liberties organizations discussing ways of keeping a clean reauthorization of Section 702 (a mass surveillance authority that expires this year) out of a must-pass bill that would continue to fund the government. 

In the afternoon, I watch and take notes as a Congressional committee holds a hearing about AI use in law enforcement. Keeping an eye on this allows me to see what arguments and talking points law enforcement is using, which members of Congress seem critical of AI use in policing and might be worth getting in touch with, and whether there are any revelations in the hearing that we should communicate to our members and readers. 

After the hearing, I have to briefly send notes to a Senator and their staff on a draft of a public letter they intend to send to industry leaders about data collection—and when law enforcement may or may not request access to stored user data. 

Tomorrow,  I’ll follow up on many of the plans made over the course of this day: I’ll need to send out a mass email to EFF supporters in the Bay Area rallying them to join in the fight against Proposition E, and review new federal legislation to see if it offers enough reform of Section 702 that EFF might consider supporting it. 

Hayley Tsukayama, Associate Director of Legislative Activism

I settle in with a big mug of tea to start a day full of online meetings. This probably sounds boring to a lot of people, but I know I'll have a ton of interesting conversations today.

Much of my job coordinating our state legislative work requires speaking with like-minded organizations across the country. EFF tries, but we can't be everywhere we want to be all of the time. So, for example, we host a regular call with groups pushing for stronger state consumer data privacy laws. This call gives us a place to share information about a dozen or more privacy bills in as many states. Some groups on the call focus on one state; others, like EFF, work in multiple states. Our groups may not agree on every bill, but we're all working toward a world where companies must respect our privacy by default.

You know, just a small goal.

Today, we get a summary of a hearing that a friendly lawmaker organized to give politicians from several states a forum to explain how big tech companies, advertisers, and data brokers have stymied strong privacy legislation. This is one reason we compare notes: the more we know about what they're doing, the better we can fight them—even though the other side has more money and staff for state legislative work than all of us combined.

From there, I jump to a call on emerging AI legislation in states. Many companies pushing weak AI regulation make software that monitors employees, so this work has connected me to a universe of labor advocates I've never gotten to work with before. I've learned so much from them, both about how AI affects working conditions and about the ways they organize and mobilize people. Working in coalitions shows me how different people bring their strengths to a broader movement.

At EFF, our activists know: we win with words. I make a note to myself to start drafting a blog post on some bad copy-paste AI bills showing up across the country, which companies have carefully written to exempt their own products.

My position lets me stick my nose into almost every EFF issue, which is one thing I love about it. For the rest of the day, I meet with a group of right-to-repair advocates whose decades of advocacy have racked up incredible wins in the past couple of years. I update a position letter to the California legislature about automotive data. I send a draft action to one of our lawyers—who I get to work with every day—about a great Massachusetts bill that would prohibit the sale of location data without permission. I debrief with two EFF staffers who testified this week in Sacramento on two California bills—one on IP issues, another on police surveillance. I polish a speech I'm giving with one of my colleagues, who has kindly made time to help me. I prep for a call with young activists who want to discuss a bill idea.

There is no "typical" day in my job. The one constant is that I get to work with passionate people, at EFF and outside of it, who want to make the world a better place. We tackle tough problems, big and small—but always ones that matter. And, sure, I have good days and bad days. But I can say this: they are rarely boring.

Rory Mir, Associate Director of Community Organizing 

As an organizer at EFF, I juggle long-term projects and needs with rapid responses for both EFF and our local allies in our grassroots network, the Electronic Frontier Alliance. Days typically start with morning rituals that keep me grounded as a remote worker: I wake up, make coffee, put on music. I log in, set TODOs, clear my inbox. I get dressed, check the news, morning dog walk.

Back at my desk, I start with small tasks—reach out to a group I met at a conference, add an event to the EFF calendar, and promote EFA events on social media. Then, I get a call from a Portland EFA group. A city ordinance shedding light on police use of surveillance tech needs support. They’re working on a coalition letter EFF can sign, so I send it along to our street level surveillance team, schedule a meeting, and reach out to aligned groups in PDX.

Next up is a policy meeting on consumer privacy. Yesterday in Congress, the House passed a bill undermining privacy (again) and we need to kill it (again). We discuss key Senate votes, and I remember that an EFA group had a good relationship with one of those members in a campaign last year. I reach out to the group with links on our current campaign and see if they can help us lobby on the issue.

After a quick vegan lunch, I start a short Deeplinks post celebrating a major website connecting to the Fediverse, promoting folks’ autonomy online. I’m not quite done in time for my next meeting, planning an upcoming EFA meetup with my team. Before we get started though, an urgent message from San Diego interrupts us—the city council moved a crucial hearing on ALPRs to tomorrow. We reschedule and pivot to drafting an action alert email for the area as well as social media pushes to rally support.

In the home stretch, I set that meeting with Portland groups and make sure our newest EFA member has information on our workshop next week. After my last meeting for the day, a coalition call on Right to Repair (with Hayley!), I send my blog to a colleague for feedback, and wrap up the day in one of our off-topic chats. While passionately ranking Godzilla movies, my dog helpfully reminds me it’s time to log off and go on another walk.

Thorin Klosowski, Security and Privacy Activist

I typically start my day with reading—catching up on some broad policy things, but just as often poking through product-related news sites and consumer tech blogs—so I can keep an eye out for any new sorts of technology terrors that might be on the horizon, privacy promises that seem too good to be true, or any data breaches and other security guffaws that might need to be addressed.

If I’m lucky (or unlucky, depending on how you look at it), I’ll find something strange enough to bring to our Public Interest Technology crew for a more detailed look. Maybe it’ll be the launch of a new feature that promises privacy but doesn’t seem to deliver it, or in rare cases, a new feature that actually seems to. In either instance, if it seems worth a closer look, I’ll often then chat through all this with the technologists who specialize in the technology at play, then decide whether it’s worth writing something, or just keeping in our deep log of “terrible technologies to watch out for.” This process works in reverse, too—where someone on the PIT team brings up something they’re working on, like sketchyware on an Android tablet, and we’ll brainstorm some ways to help people who’re stuck with these types of things make them less sucky.

Today, I’m also tagging along with a couple of members of the PIT team at a meeting with representatives from a social media company that’s rolling out a new feature in its end-to-end encryption chat app. The EFF technologists will ask smart, technical questions and reference research papers with titles like, “Unbreakable: Designing for Trustworthiness in Private Messaging” while I furiously take notes and wonder how on earth we’ll explain all the positive (or negative) effects on individual privacy this feature might pose if it does in fact release.

With whatever time I have left, I’ll then work on Surveillance Self-Defense, our guide to protecting you and your friends from online spying. Today, I’m working through updating several of our encryption guides, which means chatting with our resident encryption experts both on the legal and PIT teams. What makes SSD so good, in my eyes, is how much knowledge backs every single word of every guide. This is what sets SSD apart from the graveyard of security guides online, but it also means a lot of wrangling to get eyes on everything that goes on the site. Sometimes a guide update clicks together smoothly and we update things quickly. Sometimes one update to a guide cascades across a half dozen others, and I start to feel like I have one of those serial killer boards, but I’m keeping track of several serial killers across multiple timelines. But however an SSD update plays out, it all needs to get translated, so I’ll finish off the day with a look at a spreadsheet of all the translations to make sure I don’t need to send anything new over (or just as often, realize I’ve already gotten translations back that need to be put online).

*****

We love giving people a picture of the work we do on a daily basis at EFF to help protect your rights online. Our former Activism Directors, Elliot Harmon and Rainey Reitman, each wrote one of these blogs in the past as well. If you’d like to join us on the EFF Activism Team, or anywhere else in the organization, check out opportunities to do so here.

Speaking Freely: Mohamed El Gohary

14 May 2024 at 13:58

Interviewer: Jillian York

Mohamed El Gohary is an open-knowledge enthusiast. After majoring in Biomedical Engineering, in October 2010 he switched careers to work as a Social Media manager for Al-Masry Al-Youm newspaper until October 2011, when he joined Global Voices, managing Lingua until the end of 2021. He now works for IFEX as the MENA Network Engagement Specialist.

This interview has been edited for length and clarity.*

York: What does free speech or free expression mean for you?

Free speech, for me, freedom of expression, means the ability for people to govern themselves. It means to me that the real meaning of democracy cannot happen without freedom of speech, without people expressing their needs in different spectrums. The idea of civic space, the idea of people basically living their lives and using different means of communication for getting things done right through freedom of speech.

York: What’s an experience that shaped your views on freedom of expression?

Well, my background is using the internet. So I always believed, in the early days of using the internet, that it would enable people to express themselves in a way for a better democratic process. But right now that changed because of the shift from decentralized online spaces to centralized spaces, which are the antithesis of democracy. So the internet turns into an oligarch’s world. Which is, again, going back to freedom of expression. I think there are ways that are uncharted territories in terms of activism, in terms of platforms online and offline, to maybe reinvent the wheel in a way for people to have a better democratic process in terms of freedom of expression.

York: You came up in an era where social media had so much promise, and now, like you said about the oligarchical online space—which I tend to agree with—we’re in kind of a different era. What are your views right now on regulation of social media?

Well, it’s still related to the democratic process. It’s a similar conversation to, let’s say, the Internet Governance Forum where… where is the decision making? Who has the power dynamics around decision making? So there are governments, then there are private companies, then there is law and the rule of law, and then there is civil society. And there’s good civil society and there’s bad civil society, in terms of their relationship with both governments and companies. So it goes back to freedom of expression as a collective and in an individual manner. And it comes to people and freedom of assembly in terms of absolute right and in terms of practice, to reinvent the democratic process. It’s the whole system. It turns out it’s not just freedom of expression. Freedom of expression has an important role, and the democratic process can’t be reinvented without looking at freedom of expression. The whole system, democracy, Western democracy and how different countries apply it in ways that affect and create the power of the rich and powerful while the rest of the population just loses their hope in different ways. Everything goes back to reinventing the democratic process. And freedom of expression is a big part of it.

York: So this is a special interview, we’re here at the IFEX general meeting. What are some of the things that you’re seeing here, either good or bad, and maybe even what are some things that give you hope about the IFEX network?

I think, inside the IFEX network and the extended IFEX network, it’s the importance of connection. It’s the importance of collaboration. Different governments try to always work together to establish their power structures, while the resources governments have are not always available to civil society. So it’s important for civil society organizations—and IFEX is an example of collaboration between a large number of organizations around the world—in all scales, in all directions, that these kinds of collaborations happen in different organizations while still encouraging every organization to look at itself as an organization, to look at how it’s working. To ask themselves, is it just a job? Are we working for a cause? Are we working for a cause in the right way? It’s the other side of the coin to how governments work and maintain existing power structures. There needs to be the other side of the coin in terms of, again, reinventing the democratic process.

York: Is there anything I didn’t ask that you want to mention?

My only frustration is where organizations work as if it is a job, and they only do the minimum, for example. And that’s in a good case scenario. A bad case scenario is when a civil society organization is working for the government or for private companies—where organizations can be a burden more than a resource. I don’t know how to approach that without cost. Cost is difficult, cost is expensive, it’s ugly, it’s not something you look for when you start your day. And there is a very small number of people and organizations who would be willing to even think about paying the price of being an inconvenience to organizations that are burdening entities. That would be my immediate and long term frustration with civil society at least in my vicinity.

York: Who is your free speech hero?

For me, as an Egyptian, that would be Alaa Abd El-Fattah. As a person who is a perfect example of looking forward to being an inconvenience. And there are not a lot of people who would be this kind of inconvenience. There are many people who appear like they are an inconvenience, but they aren’t really. This would be my hero.

Big Tech to EU: "Drop Dead"

13 May 2024 at 13:02

The European Union’s new Digital Markets Act (DMA) is a complex, many-legged beast, but at root, it is a regulation that aims to make it easier for the public to control the technology they use and rely on.  

One DMA rule forces the powerful “gatekeeper” tech companies to allow third-party app stores. That means that you, the owner of a device, can decide who you trust to provide you with software for it.  

Another rule requires those tech gatekeepers to offer interoperable gateways that other platforms can plug into - so you can quit using a chat client, switch to a rival, and still connect with the people you left behind (similar measures may come to social media in the future). 

There’s a rule banning “self-preferencing.” That’s when platforms push their often inferior, in-house products and hide superior products made by their rivals. 

And perhaps best of all, there’s a privacy rule, reinforcing the eight-year-old General Data Protection Regulation, a strong privacy law that has been flouted for too long, especially by the largest tech giants.

In other words, the DMA is meant to push us toward a world where you decide which software runs on your devices, where it’s easy to find the best products and services, where you can leave a platform for a better one without forfeiting your social relationships, and where you can do all of this without getting spied on.

If it works, this will get dangerously close to the better future we’ve spent the past thirty years fighting for.

There’s just one wrinkle: the Big Tech companies don’t want that future, and they’re trying their damnedest to strangle it in its cradle.

 Right from the start, it was obvious that the tech giants were going to war against the DMA, and the freedom it promised to their users. Take Apple, whose tight control over which software its customers can install was a major concern of the DMA from its inception.

Apple didn’t invent the idea of a “curated computer” that could only run software that was blessed by its manufacturer, but they certainly perfected it. iOS devices will refuse to run software unless it comes from Apple’s App Store, and that control over Apple’s customers means that Apple can exert tremendous control over app vendors, too. 

Apple charges app vendors a whopping 30 percent commission on most transactions, both the initial price of the app and everything you buy from it thereafter. This is a remarkably high transaction fee—compare it to the credit-card sector, itself the subject of sharp criticism for its high 3-5 percent fees. To maintain those high commissions, Apple also restricts its vendors from informing their customers about the existence of other ways of paying (say, via their website) and at various times has also banned its vendors from offering discounts to customers who complete their purchases without using the app.
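A quick back-of-the-envelope comparison makes the gap concrete (the $9.99 purchase price is just an illustrative figure):

```python
price = 9.99  # an illustrative in-app purchase, in dollars

apple_cut = price * 0.30                           # ~$3.00 kept by Apple
card_low, card_high = price * 0.03, price * 0.05   # ~$0.30-$0.50 for a card network

print(f"App Store commission: ${apple_cut:.2f}")
print(f"Credit-card fee:      ${card_low:.2f} to ${card_high:.2f}")
# Roughly 6x to 10x more of each sale goes to Apple than to the
# payment networks whose fees are themselves widely criticized.
```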

Apple is adamant that it needs this control to keep its customers safe, but in theory and in practice, Apple has shown that it can protect you without maintaining this degree of control, and that it uses this control to take away your security when it serves the company’s profits to do so. 

Apple is worth between two and three trillion dollars. Investors prize Apple’s stock in large part due to the tens of billions of dollars it extracts from other businesses that want to reach its customers. 

The DMA is aimed squarely at these practices. It requires the largest app store companies to grant their customers the freedom to choose other app stores. Companies like Apple were given over a year to prepare for the DMA, and were told to produce compliance plans by March of this year. 

But Apple’s compliance plan falls far short of the mark: between a blizzard of confusing junk fees (like the €0.50 per use “Core Technology Fee” that the most popular apps will have to pay Apple even if their apps are sold through a rival store) and onerous conditions (app makers who try to sell through a rival app store have their offerings removed from Apple’s store and are permanently banned from it), the plan in no way satisfies the EU’s goal of fostering competition in app stores.

That’s just scratching the surface of Apple’s absurd proposal: Apple’s customers will have to successfully navigate a maze of deeply buried settings just to try another app store (and there’s some pretty cool-sounding app stores in the wings!), and Apple will disable all your third-party apps if you take your phone out of the EU for 30 days. 

Apple appears to be playing a high-stakes game of chicken with EU regulators, effectively saying, “Yes, you have 500 million citizens, but we have three trillion dollars, so why should we listen to you?” Apple inaugurated this performance of noncompliance by banning Epic, the company most closely associated with the EU’s decision to require third party app stores, from operating an app store and terminating its developer account (Epic’s account was later reinstated after the EU registered its disapproval). 

It’s not just Apple, of course.  

The DMA includes new enforcement tools to finally apply the General Data Protection Regulation (GDPR) to US tech giants. The GDPR is Europe’s landmark privacy law, but in the eight years since its passage, Europeans have struggled to use it to reform the terrible privacy practices of the largest tech companies.

Meta is one of the worst on privacy, and no wonder: its entire business is grounded in the nonconsensual extraction and mining of billions of dollars’ worth of private information from billions of people all over the world. The GDPR should be requiring Meta to actually secure our willing, informed (and revocable) consent to carry on all this surveillance, and there’s good evidence that more than 95 percent of us would block Facebook spying if we could. 

Meta’s answer to this is a “Pay or Okay” system, in which users who do not consent to Meta’s surveillance will have to pay to use the service, or be blocked from it. Unfortunately for Meta, this is prohibited (privacy is not a luxury good that only the wealthiest should be afforded).  

Just like Apple, Meta is behaving as though the DMA permits it to carry on its worst behavior, with minor cosmetic tweaks around the margins. Just like Apple, Meta is daring the EU to enforce its democratically enacted laws, implicitly promising to pit its billions against Europe’s institutions to preserve its right to spy on us. 

These are high-stakes clashes. As the tech sector grew more concentrated, it also grew less accountable, able to substitute lock-in and regulatory capture for making good products and having their users’ backs. Tech has found new ways to compromise our privacy rights, our labor rights, and our consumer rights - at scale. 

After decades of regulatory indifference to tech monopolization, competition authorities all over the world are taking on Big Tech. The DMA is by far the most muscular and ambitious salvo we’ve seen. 

Seen in that light, it’s no surprise that Big Tech is refusing to comply with the rules. If the EU successfully forces tech to play fair, it will serve as a starting gun for a global race to the top, in which tech’s ill-gotten gains - of data, power and money - will be returned to the users and workers from whom that treasure came. 

The architects of the DMA and DSA foresaw this, of course. They’ve announced investigations into Apple, Google and Meta, threatening fines of 10 percent of the companies’ global income, which will double to 20 percent if the companies don’t toe the line. 

It’s not just Big Tech that’s playing for all the marbles - it’s also the systems of democratic control and accountability. If Apple can sabotage the DMA’s insistence on taking away its veto over its customers’ software choices, that will spill over into the US Department of Justice’s case over the same issue, as well as the cases in Japan and South Korea, and the pending enforcement action in the UK. 


Victory! FCC Closes Loopholes and Restores Net Neutrality

By: Chao Liu
13 May 2024 at 12:30

Thanks to weeks of the public speaking up and taking action, the FCC has recognized the flaw in its proposed net neutrality rules. The FCC’s final adopted order on net neutrality restores bright line rules against all forms of throttling, once again creating strong federal protections for all Americans.

The FCC’s initial order had a narrow interpretation of throttling that could have allowed ISPs to create so-called fast lanes, speeding up access to certain sites and services and effectively slowing down other traffic flowing through your network. The order’s bright line rule against throttling now explicitly bans this kind of conduct, finding that the “decision to speed up ‘on the basis of Internet content, applications, or services’ would ‘impair or degrade’ other content, applications, or services which are not given the same treatment.” With this language, the order both hews more closely to the 2015 Order and further aligns with the strong protections Californians already enjoy via California’s net neutrality law.

As we celebrate this victory, it is important to remember that net neutrality is more than just bright line rules against blocking, throttling, and paid prioritization: It is the principle that ISPs should treat all traffic coming over their networks without discrimination. Customers, not ISPs, should decide for themselves how they would like to experience the internet. EFF—standing with users, innovators, creators, public interest advocates, libraries, educators and everyone else who relies on the open internet—will continue to champion this principle. 

The FBI is Playing Politics with Your Privacy

A bombshell report from WIRED reveals that two days after the U.S. Congress renewed and expanded the mass-surveillance authority Section 702 of the Foreign Intelligence Surveillance Act, the deputy director of the Federal Bureau of Investigation (FBI), Paul Abbate, sent an email imploring agents to “use” Section 702 to search the communications of Americans collected under this authority “to demonstrate why tools like this are essential” to the FBI’s mission.

In other words, an agency that has repeatedly abused this exact authority—with 3.4 million warrantless searches of Americans’ communications in 2021 alone—thinks that the answer to its misuse of mass surveillance of Americans is to do more of it, not less. And it signals that the FBI believes it should do more surveillance—not because of any pressing national security threat, but because the FBI has an image problem.

The American people should feel a fiery volcano of white hot rage over this revelation. During the recent fight over Section 702’s reauthorization, we all had to listen to the FBI and the rest of the Intelligence Community downplay their huge number of Section 702 abuses (but, never fear, they were fixed by drop-down menus!). The government also trotted out every monster of the week in incorrect arguments seeking to undermine the bipartisan push for crucial reforms. Ultimately, after fighting to a draw in the House, Congress bent to the government’s will: it not only failed to reform Section 702, but gave the government authority to use Section 702 in more cases.

Now, immediately after extracting this expanded power and fighting off sensible reforms, the FBI’s leadership is urging the agency to “continue to look for ways” to make more use of this controversial authority to surveil Americans, albeit with the fig leaf that it must be “legal.” And not because of an identifiable, pressing threat to national security, but to “demonstrate” the importance of domestic law enforcement accessing the pool of data collected via mass surveillance. This is an insult to everyone who cares about accountability, civil liberties, and our ability to have a private conversation online. It also raises the question of whether the FBI is interested in keeping us safe or in merely justifying its own increased powers. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside the United States. Section 702 prohibits the government from intentionally targeting Americans. But, because we live in a globalized world where Americans constantly communicate with people (and services) outside the United States, the government routinely acquires millions of innocent Americans' communications “incidentally” under Section 702 surveillance. Not only does the government acquire these communications without a probable cause warrant, so long as the government can make out some connection to FISA’s very broad definition of “foreign intelligence,” the government can then conduct warrantless “backdoor searches” of individual Americans’ incidentally collected communications. 702 creates an end run around the Constitution for the FBI and, with the Abbate memo, they are being urged to use it as much as they can.

The recent reauthorization of Section 702 also expanded this mass surveillance authority still further, expanding in turn the FBI’s ability to exploit it. To start, it substantially increased the scope of entities who the government could require to turn over Americans’ data en masse under Section 702. This provision is written so broadly that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider, which could include landlords, maintenance people, and many others who routinely have access to your communications.

The reauthorization of Section 702 also expanded FISA’s already very broad definition of “foreign intelligence” to include counternarcotics: an unacceptable expansion of a national security authority to ordinary crime. Further, it allows the government to use Section 702 powers to vet hopeful immigrants and asylum seekers—a particularly dangerous authority which opens up this or future administrations to deny entry to individuals based on their private communications about politics, religion, sexuality, or gender identity.

Americans who care about privacy in the United States are essentially fighting a political battle in which the other side gets to make up the rules, the terrain…and even rewrite the laws of gravity if they want to. Politicians can tell us they want to keep people in the U.S. safe without doing anything to prevent that power from being abused, even if they know it will be. It’s about optics, politics, and security theater; not realistic and balanced claims of safety and privacy. The Abbate memo signals that the FBI is going to work hard to create better optics for itself so that it can continue spying in the future.   

No Country Should be Making Speech Rules for the World

9 May 2024 at 15:38

It’s a simple proposition: no single country should be able to restrict speech across the entire internet. Any other approach invites a swift relay race to the bottom for online expression, giving governments and courts in countries with the weakest speech protections carte blanche to edit the internet.

Unfortunately, governments, including democracies that care about the rule of law, too often lose sight of this simple proposition. That’s why EFF, represented by Johnson Winter Slattery, has moved to intervene in support of X (formerly known as Twitter) in its legal challenge to a global takedown order from Australia’s eSafety Commissioner. The Commissioner ordered X and Meta to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post so Australian users couldn’t access it, but it declined to block it elsewhere. The Commissioner asked an Australian court to order a global takedown.
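For readers unfamiliar with the mechanics: geo-blocking leaves a post up but hides it from viewers whose connections appear to come from a particular country (usually inferred from IP address), while a global takedown removes it for everyone. A minimal sketch of the difference, with hypothetical code rather than anything X actually runs:

```python
# Which countries each post is withheld from; "*" means withheld everywhere.
blocked_in: dict[str, set[str]] = {"post_123": {"AU"}}  # geo-blocked for Australia

def can_view(post_id: str, viewer_country: str) -> bool:
    """Geo-blocking: the post still exists; visibility depends on
    where the viewer appears to be."""
    blocked = blocked_in.get(post_id, set())
    return "*" not in blocked and viewer_country not in blocked

def global_takedown(post_id: str) -> None:
    """A global takedown withholds the post from every viewer,
    everywhere: one country's rule applied to the whole internet."""
    blocked_in[post_id] = {"*"}

print(can_view("post_123", "AU"))  # False: complies with the Australian order
print(can_view("post_123", "US"))  # True: still visible outside Australia
global_takedown("post_123")
print(can_view("post_123", "US"))  # False: the Commissioner's preferred result
```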

Our intervention calls the court’s attention to the important public interests at stake in this litigation, particularly for internet users who are not parties to the case but will nonetheless be affected by the precedent it sets. A ruling against X is effectively a declaration that an Australian court (or its eSafety Commissioner) can prevent internet users around the world from accessing something online, even if the law in their own country is quite different. In the United States, for example, the First Amendment guarantees that platforms generally have the right to decide what content they will host, and their users have a corollary right to receive it. 

We’ve seen this movie before. In Google v Equustek, a company used a trade secret claim to persuade a Canadian court to order Google to delete search results linking to sites that contained allegedly infringing goods, not just from Google.ca but from all other Google domains, including Google.com and Google.co.uk. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld the order. The following year, a U.S. court held the ruling couldn’t be enforced against Google US.

The Australian takedown order also ignores international human rights standards, restricting global access to information without considering less speech-intrusive alternatives. In other words: the Commissioner used a sledgehammer to crack a nut. 

If one court can impose speech-restrictive rules on the entire Internet—despite direct conflicts with the laws of foreign jurisdictions as well as international human rights principles—the norms and expectations of all internet users are at risk. We’re glad X is fighting back, and we hope the judge will recognize the eSafety regulator’s demand for what it is—a big step toward unchecked global censorship—and refuse to let Australia set another dangerous precedent.


Free Speech Around the World | EFFector 36.6

8 May 2024 at 12:38

Let's gather around the campfire and tell tales of the latest happenings in the fight for privacy and free expression online. Take care in roasting your marshmallows while we share ways to protect your data from political campaigns seeking to target you; seek nominees for our annual EFF Awards; and call for immediate action in the case of activist Alaa Abd El Fattah.

As the fire burns out, know that you can stay up-to-date on these issues with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.6 - Free Speech Around the World

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

What Can Go Wrong When Police Use AI to Write Reports?

8 May 2024 at 11:52

Axon—the maker of widely used police body cameras and tasers (and a company that keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative large language model machine learning system that reportedly takes audio from body-worn cameras and converts it into a narrative police report that police can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, and especially those marginalized communities already subject to a disproportionate share of police interactions in the United States.

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before. Grainy and shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or come with enhanced criminal charges.  Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.

The public should be skeptical of a language algorithm's ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms and slang people use. As we've learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with context and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately speak with mistruths or exaggerations to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If the officer says aloud in a body camera video, “the suspect has a gun,” how would that translate into the software’s narrative final product? Would it interpret that as “I [the officer] saw the suspect produce a weapon” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between these options could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers are adequately reviewing for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on the results of a match by face recognition technology without any follow-up investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ intense secrecy, combined with their frequent failure to comply with public records laws, how will the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of a report were generated by AI versus a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Speaking Freely : Nompilo Simanje

7 May 2024 at 13:45

Nompilo Simanje is a lawyer by profession and is the Africa Advocacy and Partnerships Lead at the International Press Institute. She leads the IPI Africa Program, which monitors and collects data on press freedom threats and violations across the continent, including threats to journalists’ safety and gendered attacks against journalists both online and offline, to inform evidence-based advocacy. Nompilo is an expert on the intersection of technology, the law, and human rights. She has years of experience in advocacy and capacity building aimed at promoting media freedom, freedom of expression, access to information, and the right to privacy. She also currently serves on the Advisory Board of the Global Forum on Cyber Expertise. Simanje is an alumna of the Open Internet for Democracy Leaders Program and the US State Department IVLP Program on Promoting Cybersecurity.

This interview has been edited for length and clarity.

York: What does free expression mean to you? 

For me, free expression or free speech is the capacity for one to be able to communicate their views and their opinions without any fear, or without thinking that there might be some reprisals or repercussions, for freely engaging in any conversation or any issue which might be personal, but also even on any issue of public interest. 

York: What are some of the qualities that have made you passionate about free speech?

Being someone who works in the civil society sector, I think when I look at free speech and free expression, I view it as an avenue for the realization of several other rights. One key thing for me is that free expression encourages interactive dialogue, it encourages public dialogue, which is very important. Especially for democracy, but also for transparency and accountability. Being based in Africa, we are always having conversations around corruption, around accountability by government actors and public officials. And I feel that free expression is a vehicle for that, because it allows people to be able to question those that hold power and to criticize certain conduct by people that are in power. Those are some of the qualities that I feel are very important for me when I think about free expression. It enables transparency and accountability, but also holding those in power to account, which is something I believe is very important for democracies in Africa. 

York: So you work all around the African continent. Broadly speaking, what are some of the biggest online threats you’re seeing today?

The digital age has been quite a revolutionary development, especially when you think about free expression. And I always talk about this when I engage on the topic of digital rights, but it has opened the avenue for people to communicate across boundaries, across borders, across countries, but, at the same time—in terms of the impact of threats and risks—they become equally huge as well. As part of the work that I have been doing, there are a few key things that I’ve seen online. One would be the issue of legislation—that countries have increased or upscaled their regulation of the online space. And one of the biggest threats for me has been lawfare, seeing how countries have been implementing old and new laws to undermine free expression online. For example, cybercrime laws or even existing criminal law code or penal codes. So I’ve seen that increasingly happening in Africa. 

Other key things that come to mind are online harassment, which is also happening in various forms. Just sometime last year at the 77th Session of the ACHPR (African Commission on Human and Peoples' Rights), we hosted a side event on the online safety of female journalists in Africa. And there were so many cases which were being shared about how female journalists are facing online harassment. One big issue discussed was targeted disinformation, where individuals spread false information about a certain individual as a way of discrediting them or undermining them or just attempting to silence them and ensure that they don’t communicate freely online. But also sometimes online harassment in the form of doxxing, where personal details are shared online. Someone’s address. Someone’s email. And people are mobilized to attack that person. I’ve seen all those cases happening, and I feel that online harassment, especially towards female journalists and politicians, continues to be one of the biggest threats to free expression in the region. In addition, of course, to what state actors are doing. 

I think also, generally, what I’m also seeing as part of the regulation aspect, is sometimes even the suspension of news websites. Where journalists are using those platforms—you know, like podcasts, Twitter spaces—to freely express. So this increase in regulation is one of the key things I feel continues to threaten online expression, particularly in the region.

York: You also work globally, you serve on a couple of advisory boards, and I’m curious, coming from an African perspective, how you see things like the Cybercrime Treaty or other international developments impacting the nations that you work in? 

It’s a brilliant question because the Ad Hoc Committee for the UN Cybercrime Treaty just recently met. I think one of the aspects I’ve noticed is that sometimes African civil society actors are not meaningfully participating in global processes. And as a result, they don’t get to share their experiences and get to reflect on how some developments at the global level will impact the region. 

Just taking on the example you shared about the UN Cybercrime Treaty, as part of my role at IPI we actually submitted a letter to the Ad Hoc Committee with about 49 other civil society actors within Africa, highlighting to the committee that if this treaty is enacted as it is currently crafted, with a wide scope in terms of crimes and minimal human rights safeguards, it would actually undermine free expression. And this was informed by our experiences with cybercrime laws in the region. And we’re saying we have seen how some authoritarian governments in the region have been using cybercrime laws. So imagine having a global treaty or a global cybercrime convention. It can be a tool for other authoritarian governments to justify some of their conduct which has been targeted at undermining free expression. Some of the examples include criminalizing inciting public violence or criminalizing publishing falsehoods. We have seen that consistently in several countries and how those laws have been used to undermine expression. I definitely think that whenever there are global engagements about conventions that can undermine fundamental rights it’s very important for Africa to be represented, particularly civil society, because civil society is there to promote human rights and ensure that human rights are safeguarded. 

Also, there have been other key discussions happening, for example, with the open-ended working group on ICTs. We’ve had conversations about cyber capacity-building in the region and how that would look for Africa, where internet penetration is not at its highest and there are already divides such that not everyone is able to freely express themselves online. I think all those deliberations need to be taken into account and they need to be contextualized. My opinion is that when I look at global processes and I think about Africa, I always feel that it’s important for civil society actors and key stakeholders to contribute meaningfully to those processes, but also for us to contextualize some of those discussions and deliberate on how they will potentially impact us. Even when I think about the Global Digital Compact and all the issues the Compact seeks to address, we also need to contextualize them with our experiences with countries in the region which have ongoing conflicts and with countries in the region that are led by military regimes—especially in West Africa. All those issues need to be taken into account when we deliberate about global conventions or global policies. So that’s how I’ve been approaching these conversations around the global process, but trying to contextualize them based on what’s happening in the region and what our experiences have been with similar legislation and policies. 

York: I’m also really curious, has your work touched on issues of content moderation? 

Yes, but not broadly, because I think our interaction with the platforms has been quite minimal, but, yes, we have engaged platforms before. I think I’ll give you an example of Somalia. There’ve been so many reported cases by our partners at the Somali Journalist Syndicate where individual accounts of journalists have been suspended, permanently suspended, and sometimes taken down, simply because political sympathizers of the government consistently report those accounts for expressing dissenting views. Or state actors have reached out to the platforms and asked them to intervene and suspend either pages or individual accounts. So we’ve had conversations with the platforms and we have issued public statements to highlight that, as far as content moderation is concerned, it is very important for the platforms to be transparent about requests that they’re receiving from governments, and also to be deliberate as far as media freedom is concerned. Especially where it relates to news or content that has been disseminated by media outlets, or by pages and accounts that are utilized by journalists. Because in some countries you see governments consistently trying to undermine or ensure that journalists or media outlets do not fully utilize the online space. So that’s the angle from which we have interacted with the platforms as far as content moderation is concerned—just ensuring that as they undertake their work they prioritize media freedom, they prioritize journalists, but also they understand the operating context, that there are countries that are quite authoritarian where dissenting voices are being targeted. So we always try to engage the platforms whenever we get an opportunity, to raise awareness where platforms are suspending accounts or taking down content where such content genuinely relates to expression or protected speech. 

York: Did you have any formative experiences that helped shape your views on freedom of expression? 

Funny story actually. When I was in high school I was in certain positions of leadership as a head girl in my high school, but also serving in Junior Parliament. We had this institution put on by the Youth Council where young people in high school can form a shadow Parliament representing different constituencies across the country. I happened to be a part of that in high school. So, of course, that meant being in public spaces, and also generally my identity being known outside my circles. So what that also meant was that it opened an avenue for me to be targeted by trolls online. 

At some point when I was in high school people posted some defamatory, false information about me on an online platform. And over the years I’ve seen that post still there, still in existence. When that happened, I was in high school, I was still a child. But I was interacting on Facebook, you know, we have used Facebook for so many years, that’s the platform I think so many of us have been most familiar with from the time we were still kids. When this post was put up it was posted through a certain page that was a tabloid of sorts. And no one knew who was behind that page, no one knew who was the administrator of that page. What that meant for me was there was no recourse. Because I didn’t even know who was behind this post, who posted this defamatory and false information about me. 

I think from there it really triggered an interest in me about regulation of free expression online. How do you approach issues around anonymity, and how far can we go in terms of protecting free expression online in instances where, indeed, the rights of other people are also being undermined? It really helped to shape my thoughts around regulation of social media, regulation of content online. So I think, for me, even in terms of the work I’ve continued to do in my adult life around digital rights literacy, I’ve really tried to emphasize a digital citizenship where the key focus is really to ensure that we can freely express, but we need to ensure the rights of others. Which is why I strongly condemn hate speech. Which is why I strongly condemn targeted attacks, for instance, on female politicians and female journalists. Because I know that while we can freely express ourselves, there are certain limitations or boundaries that we shouldn’t cross. And I think I learned that from experiencing that targeted attack on me online. 

York: Is there anything I haven’t touched on yet that you’d like to talk about? 

I’d like to maybe just speak briefly about the implications of free expression being undermined, especially in the online space. And I’m emphasizing this because we are in the digital age, where the online space has really provided a platform for the full realization of so many fundamental rights. So one of the key things I’ve seen is the increase in self-censorship. For example, when individuals are being arrested over their tweets and Facebook posts and news websites are being suspended, there’s an increase in self-censorship. But also limited participation in public dialogue. We have so many elections happening in 2024, and we’ve had recent elections happen in the region, also. Nigeria was a big election. DRC was another big election. What I’ve been seeing is really limited participation, especially by high-risk groups like women and LGBTQI communities. Especially, for example, when they’ve been targeted in Uganda through legislation. So there’s been limited participation and interactive dialogue in the region because of all these various developments that have been happening. 

Also, one aspect that comes to mind for me is the correlation between free expression and freedom of assembly and association. Because we are also interacting with groups and other like-minded people in the online space. So while we are freely expressing, the online space is also a platform for assembly and association. And some people are also being robbed of that experience, of freely associating online, because of the threats or the attacks that have been targeting free expression. I think it’s also important for Africa to think about these implications—that when you’re targeting free expression, you’re also targeting other fundamental rights. And I think that’s quite important for me to emphasize as part of this conversation. 

York: Who is your free speech hero? Someone who has really inspired you? 

I haven’t really thought about that actually! I don’t think I have a specific person in mind, but I generally just appreciate everyone who freely expresses their mind, especially on Twitter, because Twitter can be quite brutal at times. But there are several individuals that I look at and really admire for their tenacity in continuing to engage on the platforms even when they’re constantly being targeted. I won’t mention a specific person, but I think, from a Zimbabwean perspective, I would highlight that I’ve seen several female politicians in Zimbabwe being targeted. Actually, I will mention, there’s a female politician in Zimbabwe, Fadzayi Mahere, she’s also an advocate. I’ll mention her as a free speech hero. Because every time I speak about online attacks or online gender-based violence in digital rights trainings, I always mention her. That’s because I’ve seen how she has been able to stand against so many coordinated attacks from a political front and from a personal front. Just to highlight that last year she published a video which had been circulating and trending online about a case where police had allegedly assaulted a woman who had been carrying a child on her back. And she tweeted about that and she was actually arrested, charged, and convicted for, I think, “publishing falsehoods”, or, there’s a provision in the criminal law code that I think is like “publishing falsehoods to undermine public authority or the police service.” So I definitely think she is a press freedom hero, and her story is quite an interesting one to follow in terms of her experiences in Zimbabwe as a young lawyer and as a politician, and a female politician at that. 

Podcast Episode: Building a Tactile Internet

7 May 2024 at 03:15

Blind and low-vision people have experienced remarkable gains in information literacy because of digital technologies, like being able to access an online library offering more than 1.2 million books that can be translated into text-to-speech or digital Braille. But it can be a lot harder to come by an accessible map of a neighborhood they want to visit, or any simple diagram, due to limited availability of tactile graphics equipment, design inaccessibility, and publishing practices.

(You can also find this episode on the Internet Archive and on YouTube.)

Chancey Fleet wants a technological future that’s more organically attuned to people’s needs, which requires including people with disabilities in every step of the development and deployment process. She speaks with EFF’s Cindy Cohn and Jason Kelley about building an internet that’s just and useful for all, and why this must include giving blind and low-vision people the discretion to decide when and how to engage artificial intelligence tools to solve accessibility problems and surmount barriers. 

In this episode you’ll learn about: 

  • The importance of creating an internet that’s not text-only, but that incorporates tactile images and other technology to give everyone a richer, more fulfilling experience. 
  • Why AI-powered visual description apps still need human auditing. 
  • How inclusiveness in tech development is always a work in progress. 
  • Why we must prepare people with the self-confidence, literacy, and low-tech skills they need to get everything they can out of even the most optimally designed technology. 
  • Making it easier for everyone to travel the two-way street between enjoyment and productivity online. 

Chancey Fleet’s writing, organizing and advocacy explores how cloud-connected accessibility tools benefit and harm, empower and expose communities of disability. She is the Assistive Technology Coordinator at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library, where she founded and maintains the Dimensions Project, a free open lab for the exploration and creation of accessible images, models and data representations through tactile graphics, 3D models and nonvisual approaches to coding, CAD and “visual” arts. She is a former fellow and current affiliate-in-residence at Data & Society; she is president of the National Federation of the Blind’s Assistive Technology Trainers Division; and she was recognized as a 2017 Library Journal Mover and Shaker. 

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

CHANCEY FLEET
The fact is, as I see it, that if you are presented with what seems, on a quick read, like good enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete. What I've already noticed is blind people in droves dumping AI-generated descriptions of personal, sentimental images onto social media, and there is a certain hyper-normative quality to the language. Any scene that contains a child or a dog is heartwarming. Any sunset or sunrise is vibrant. Anything with a couch and a lamp is calm or cozy. Idiosyncrasies are left by the wayside.

Unflattering little aspects of an image are often unremarked upon, and I feel like I'm being served some Ikea pressboard of reality, and it is so much better than anything that we've had before on demand without having to involve a sighted human being. And it's good enough to mail, kind of like a Hallmark card, but do I want the totality of digital description online to slide into this hyper-normative, serene, anodyne description? I do not. I think that we need to do something about it.

CINDY COHN
That's Chancey Fleet describing one of the problems that has arisen as AI is increasingly used in assistive technologies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we start to get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping in to the fight when they DO go wrong. But this show is about optimism, hope and bright ideas for the future.

According to a National Health Interview Survey from 2018, more than 32 million Americans reported that they had vision loss, including blindness. And as our population continues to age, this number only increases. And a big part of fixing the internet means fixing it so that it works properly for everyone who needs and wants to use it – blind, sighted, and everyone in between.

JASON KELLEY
Our guest today is Chancey Fleet. She is the Assistive Technology Coordinator for the New York Public Library, where she teaches people how to use assistive technology to make their lives easier and more accessible. She’s also the president of the Assistive Technology Trainers Division of the National Federation of the Blind. 

CINDY COHN
We started our conversation as we often do – by asking Chancey what the world could be like if we started getting it right for blind and low vision people. 

CHANCEY FLEET
The unifying feature of rightness for blind and low vision folks is that we encounter a digital commons that plays to our strengths, and that means that it's easy for us to find information that we can access and understand. That might mean that web content always has semantic structure that includes things like headings for navigation. 

But it also includes things that we don't have much of right now, like a non-visual way to access maps and diagrams and images, because, of course, the internet hasn't been in text-only mode for the rest of us for a really long time.

I think getting the internet right also means that we're able to find each other and build community because we're a really low incidence disability. So odds are your colleague, your neighbor, your family members aren't blind or low-vision, and so we really have to learn and produce knowledge and circulate knowledge with each other. And when the internet gets it right, that's something that's easy for us to do. 

CINDY COHN
I think that's so right. And it's honestly consistent with, I think, what every community wants, right? I mean, the Internet's highest and best use is to connect us to the people we wanna be connected to. And the way that it works best is if the people who are the users of it, the people who are relying on it have, not just a voice, but a role in how this works.

I've heard you talk about that in the context of what you call ‘ghostwritten code.’ Do you wanna explain what that is? Am I right? I think that's one of the things that has concerned you.

CHANCEY FLEET
Yeah, you are right. A lot of people who work in design and development are used to thinking of blind and disabled people in terms of user stories and personas, and they may know on paper what the web content accessibility guidelines, for instance, say that a blind or low vision user or a keyboard-only user, or a switch user needs. The problems crop up when they interpret the concrete aspects of those guidelines without having a lived experience that leads them to understand usability in the real world.

I can give you one example. A few years ago, Google rolled out a transcribe feature within Google Translate, which I was personally super excited about. And by the way, I'm a refreshable Braille user, which means I use a Braille display with my iPhone. And if you were running VoiceOver, the screen reader for iPhone, when you launched the transcribed feature, it actually scolded you that it would not proceed, that it would not transcribe, until you plugged in headphones because well-meaning developers and designers thought, well, VoiceOver users have phones that talk, and if those phones are talking, it's going to ruin the transcription, so we'll just prevent that from happening. They didn't know about me. They didn't know about refreshable Braille users or users that might have another way to use VoiceOver that didn't involve speech out loud.

And so that, I guess you could call it a bug, I would call it a service denial, was around for a few weeks until our community communicated back about it, and if there had been blind people in the room or Braille users in the room, that would've never happened.

JASON KELLEY
I think this will be really interesting and useful for the designers at EFF who think a lot in user personas and also about accessibility. And I think just hearing what happens when you get it wrong and how simple the mistake can be is really useful I think for folks to think about inclusion and also just how essential it is to make sure there's more in-depth testing and personas as you're saying. 

I wanna talk a little bit about the variety of things you brought up in your opening salvo, which I think we're gonna cover a lot of. But one of the points you mentioned was, or maybe you didn't say it this way in the opening, but you've written about it, and talked about it, which is tactile graphics and something that's called the problem of image poverty online.

And that basically, as you mentioned, the internet is a primarily text-based experience for blind and low-vision users. But there are these tools that, in a better future, will be more accessible, both available and usable and effective. And I wonder if you could talk about some of those tools like tablets and 3D printers and things like that.

CHANCEY FLEET
So it's wild to me the way that our access to information as blind folks has evolved given the tools that we've had. So, since the eighties or nineties we've had Braille embossers that are also capable of creating tactile graphics, which is a fancy way to say raised drawings.

A graphics-capable embosser can emboss up to a hundred dots per inch. So if you look at it visually, it's a bit pixelated, but it approaches the limits of tactile perception. And in this way, we can experience media that includes maybe braille in the form of labels, but also different line types, dotted lines, dashed lines, textured infills.

Tactile design is a little bit different from visual design because our perceptual acuity is lower. It's good to scale things up. And it's good to declutter items. We may separate layers of information out to separate graphics. If Braille were print, it would be a thirty-six point font, so we use abbreviations liberally when we need to squeeze some braille onto an image.

And of course, we can't use color to communicate anything semantic. So when the idea of a red line or a blue line goes away we start thinking about a solid line versus a dashed or dotted line. When we think about a pie chart, we think about maybe textures or labels in place of colors. But what's interesting to me is that although tactile graphics equipment has been on the market since at least the eighties, probably someone will come along and correct me that it's even sooner than that.

Most of that equipment is on the wrong side of an institutional locked door, so it belongs to a disability services office in a university. It belongs to the makers of standardized tests. It belongs to publishers. I've often heard my library patrons say something along the lines of, oh yeah, there was a graphics embosser in my school, but I never got to touch it, I never got to use it. 

Sometimes the software that's used to produce tactile graphics is, in itself, inaccessible. And so I think blind people have experienced pretty remarkable gains in general in regard to our information literacy because of digital technologies and the internet. For example, I can go to Bookshare.org, which is an online library for people with print disabilities and have my choice of a million books right now.

And those can automatically be translated to text-to-speech or to digital braille. But if I want a map of the neighborhood that I'm going to visit tomorrow, or if I want a glimpse of how electoral races play out, that can be really hard to come by. And I think it is a combination of the limited availability of tactile graphics equipment, inaccessibility of design and publishing practices for tactile graphics, and then this sort of vicious circular lack of demand that happens when people don't have access. 

When I ask most blind people, they'll say that they've maybe encountered two or three tactile graphics in the past year, maybe less. Um, a lot of us got more than that during our K-12 instruction. But what I find, at least for myself, is that when tactile graphics are so strongly associated with standardized testing and homework and never associated with my own curiosity or fun or playfulness or exploration, for a long time, that actually dampened down my desire to experience tactile graphics.

And so most of us would say, probably, if I can be so bold as to think that I speak for the community for a second, most of us would say that yes, we have the right to an accessible web. Yes, we have the right to digital text. I think far fewer of us are comfortable saying, or understand the power of saying, we also have a right to images. And so in the best possible version of the internet that I imagine, we have three things. We have tactile graphics equipment that is bought more frequently, so there are economies of scale and the prices come down. We have tactile design and graphics design programs that are more accessible than what's on the market right now. And critically, we have enough access to tactile graphics online that people can find the kind of information that engages and compels them. And within 10 years or so, people are saying: we don't live in a text-only world, images aren't inherently visual, they are spatial, and we have a right to them.

JASON KELLEY
I read a piece that you had written about the importance of data visualizations during the pandemic, and how important it was for that flatten-the-curve graph to be able to be seen or, in this case, touched by as many people as possible. That really struck me, but I also love this idea that we shouldn't have to get these tools only because they're necessary, but also because people deserve to be able to enjoy the experience of the internet.

CHANCEY FLEET
Right, and you never know when enjoyment is going to lead to something productive or when something productive you're doing spins out into enjoyment. Somebody sent me a book of tactile origami diagrams. It's a four-volume book with maybe 40 models in it, and I've been working through them all. I can do almost all of them now, and it's really hard as a blind person to go online and find origami instructions that make any sense from an accessibility perspective.

There is a wonderful website called AccessOrigami.com. Lindy Vandermeer out of South Africa does great descriptive origami instruction. So it's all text directing you step by step by step. But the thing is, I'm a spatial thinker. I'm what you might think of as a visual thinker, and so I can get more out of a diagram that's showing me where to flip dot A to dot B than I can from reading three paragraphs. It's faster, it's more fluid, it's more fun. And so I treasure this book, and unfortunately every other blind person I show it to also treasures it and can't have it 'cause I've got one copy. And I just imagine a world in which, when there's a diagram on screen, we can use some kind of process to re-render it in a more optimal format for tactile exploration. That might mean AI or machine learning, and we can talk a little bit about that later. But a lot of what we learn about what we're good at, what we enjoy, what we want more of in life, you know, we do find online these days. And I want to be able to dive into those moments of curiosity and interest without having to first engineer a seven-step plan to get access to whatever it is that's on my screen.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Chancey Fleet.

CINDY COHN
So let's talk a little bit about AI and I'd love to hear your perspective on where AI is gonna be helpful and where we ought to be cautious.

CHANCEY FLEET
So if you are blind and reasonably online and you have a smartphone, and you're somebody that's comfortable enough with your smartphone that you download apps on a discretionary basis, there's a good chance that you've heard of a new feature in the app Be My Eyes, called Be My AI: a describer powered by ChatGPT with computer vision.

You aim your camera at something, wait a few seconds, and a fairly rich description comes back. It's more detailed and nuanced than anything that AI or machine learning has delivered before, and so it strikes a lot of us as transformational and/or uncanny, and it allows us to grab glimpses of what I would call a hypothesized visual world. Because, as we all know, these AIs make up stories out of whole cloth, include details that aren't there, and skip details that to the average human observer would be obviously relevant, I can know that the description I'm getting is probably not prioritized and detailed in quite the same way that a human describer would approach it.

So what's interesting to me is that, since interconnected blind folks have such a dense social graph, we are all sort of diving into this together and advising each other on what's going well and what's not. And I think that a lot of us are deriving authentic value from this experience, as bounded by caveats as it is. At the same time, I fear that when this technology scales, which it will if other forces don't counteract it, it may become a convincing enough business case that organizations and institutions can skip human authoring of alt text to describe images online and substitute these rich-seeming descriptions that are generated by an AI, even if that's done in such a way that a human auditor can go in and make changes.

The fact is, as I see it, that if you are presented with what seems, on a quick read, like good enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete. 

CINDY COHN
I think what I hear in the answer is it can be an augment to the humans doing the describing, um, but not a replacement for, and that's where the, you know, but it's cheaper part comes in. Right. And I think keeping our North Star on the, you know, using these systems in ways that assist people rather than replace people is coming up over and over again in the conversations around AI, and I'm hearing it in what you're saying as well.

CHANCEY FLEET
Absolutely, and let me say as a positive it is both my due diligence as an educator and my personal joy to experiment with moments where AI technologies can make it easier for me to find information or learn things. For example, if I wanna get a quick visual description of the Bluebird trains that the MTA used to run, that's a question that I might ask AI.

I never would've bothered a human being with it. It was not central enough. But if I'm reading something and I want a quick visual description to fill it in, I'll do that.

I also really love using AI tools to look up questions about different artistic or architectural styles, or even questions about code.

I'm studying Python right now, and when I go to look for information online on these subjects, often I'm finding websites that are riddled with problems: lack of semantic structure, graphics that are totally unlabeled, carousels that are hard for screen reader users to navigate. And so one really powerful and compelling thing that current conversational AI offers is that it lives in a text box and it won't violate the conventions of a chat by throwing a bunch of unwanted visual or structural clutter my way.

And when I just want an answer and I'm willing to grant myself that I'm going to have to live with the consequences of trusting that answer, or do some lateral reference, do some double checking, it can be worth my while. And in the best possible world moving forward, I'd like us to be able to harness that efficiency and that facility that conversational AI has for avoiding the hyper visual in a way that empowers us, but doesn't foreclose opportunities to find things out in other ways.

CINDY COHN
As you're describing it, I'm envisioning, you know, my drunk friend, right? They might do okay telling me stuff, but I wouldn't rely on them for stuff that really matters.

CHANCEY FLEET
Exactly.

CINDY COHN
You've also talked a little bit about the role of data privacy and consent and the special concerns that blind people have around some of the technologies that are offered to them. But making sure that consent is real. I'd love for you to talk a little bit about that.

CHANCEY FLEET
When AI is deployed on the server side to fix accessibility problems, in lieu of baking accessibility in from the ground up in a website or an application, that does a couple of things. It avoids changing the culture around accessibility at the customer company itself. It also involves an ongoing cost and technology debt to the overlay company that an organization is using, and it builds in the need for ongoing supervision of the AI. So in a lot of ways, I think that that's not optimal. What I think is optimal is for developers and designers, perhaps, to use AI tools to flag issues in need of human remediation, and to use AI tools for education to speed up their immersion into accessibility and usability concepts.

You know, AI can be used to make short work of things that used to take a little bit more time. When it comes to deploying AI tools to solve accessibility problems, I think that that is a suite of tools that is best left to the discretion of the user. So we can decide, on the user side, for example, when to turn on a browser extension that tries to make those remediations. Because when they're made for us at scale, that doesn't happen with our consent and it can have a lot of collateral impacts that organizations might not expect.

JASON KELLEY
The points you're making are about being involved in different parts of the process, right? It's clear that the people that use these tools, the people these tools are actually designed for, should be able to decide when to deploy them.

And it's also clear that they should be more involved, as you've mentioned a few times, in the creation. And I wanted to talk a little bit about that idea of inclusion because it's sort of how we get to a place where consent is actually, truly given. 

And it's also how we get to a place where these tools that are created do what they're supposed to do, and the companies that you're describing build the web the way that it should be built so that people can access it.

We have to have inclusion in every step of the process to get to that place where these, all of these tools and the web and, and everything we're talking about actually works for everyone. Is inclusion sort of across the spectrum a solution that you see as well?

CHANCEY FLEET
I would say that inclusion is never a solution because inclusion is a practice and a process. It's something that's never done. It's never achieved, and it's never comprehensive and perfect. 

What I see as my role as an educator, when it comes to inclusion, is meeting people where they are, trying to raise awareness – among library patrons and everyone else I serve – about what technologies are available and the costs and benefits of each, and helping people road map a path from their goals and their intentions to achieving the things that they want to do.

And so I think of inclusion as sort of a guiding frame and a constant set of questions that I ask myself about what I'm noticing, what I may not be noticing, what I might be missing, who's coming in, for example, for tech lessons, versus who we're not reaching. And how the goals of the people I serve might differ from my goals for them.

And it's all kind of a spider web of things that add up to inclusion as far as I'm concerned.

CINDY COHN
I like that framing of inclusion as kind of a process rather than an end state. And I think that framing is good because I think it really moves away from the checkbox kind of approach to things like, you know, did we get the disabled person in the room? Check! 

Everybody has different goals and different things that work for them and there isn't just one box that can be checked for a lot of these kinds of things.

CHANCEY FLEET
Blind library patrons and blind people in general are as diverse as any library patrons or people in general. And that impacts our literacy levels. It impacts our thoughts and the thoughts of our loved ones about disability. It impacts our educational attainment, and especially for those of us who lose our vision later in life, it impacts how we interact with systems and services.

I would venture to say that at this time in the U.S, if you lose your vision as an adult, or if you grow up blind in a school system, the quality of literacy and travel and independent living instruction you receive is heavily dependent on the quality of the systems and infrastructure around you, who you know, and who you know who is primed to be a disability advocate or a mentor.

And I see such different outcomes when it comes to technology based on those things. And so we can't talk about a best possible world in the technology sphere without also imagining a world that prepares people with the self-confidence, the literacy skills, and the supports for developing low tech skills that are necessary to get everything that one can get out of even the most optimally designed technology. 

A step-by-step app for walking directions can be as perfect as it gets. But if the person that you are equipping with that app is afraid to step out of their front door and start moving their cane back and forth, listening to the traffic, and trusting their reflexes and their instincts as they have been taught to trust those things, the app won't be used and there'll be people who are unreached. And so technology can only succeed to the extent that the people using it are set up to succeed. And I think that that is where a lot of our toughest work resides.

CINDY COHN
We're trying to fix the internet here, but the internet rests on the rest of the world. And if the rest of the world isn't setting people up for success, technology can't swoop in and solve a lot of these problems.

It needs to rest upon a solid foundation. I think that's just a wonderful place to close, because all of us sit on top of what John Perry Barlow called meatspace, right, and if meatspace isn't serving us, then the digital world can only do so much; it can't solve for the problems that are not digital.

JASON KELLEY
I would have loved to talk to Chancey for another hour. That was fantastic.

CINDY  COHN
Yeah, that was a really fun conversation. And I have to say, I just love the idea of the internet going tactile, right? Right now it's all very visual, but we have the technology to make it tactile, so that maps and other things that are, you know, pretty hard for people with low vision or blindness to navigate now could change. Some of the tools that she talked about really could make the internet something you could feel as well as see.

JASON KELLEY
Yeah, I didn't know before talking to her that these tools even existed. And when you hear about it, you're like, oh, of course they do. But it was clear from what she said that a lot of people don't have access to them. The tools are relatively new and they need to be spread out more. But when that happens, and hopefully it does, it sort of then requires us to rethink how the internet is built in some ways, in terms of the hierarchy of text and what kinds of graphics exist and protocols for converting that information into tactile experiences for people. 

CINDY COHN
Yeah, I think so. And it does sit upon something that she mentioned. I mean, she said these machines exist and have existed for a long time, but they're mainly in libraries or other places where people can't use them in their everyday lives. And I think, you know, one of the things that we ended with in the conversation was really important, which is that we're all sitting upon a society that doesn't make a lot of these tools as widely available as they need to be. 

And, you know, the good news in that is that the hard problem has been solved, which is how do you build a machine like this? The problem that we ought to be able to address as a society is how do we make it available much more broadly? I use this quote a lot, but you know, the future is here, it's just not evenly distributed. That seemed really, really clear in the way that she talked about these tools that most blind people have used once or twice in school, but then don't get to use and make part of their everyday lives.

JASON KELLEY
Yeah. The way I heard this was that we have this problem solved sort of at an institutional level, where you can access these tools at an institution, but not at the individual level. And it's really helpful, and optimistic, to hear that they could exist in people's homes if we can just get that to happen. And I think what was really rare for this conversation is that, like you said, we actually do have the technology to do these things. A lot of times we're talking about what we need to improve or change about the technology, and how that technology doesn't quite exist or will always be problematic. In this case, sure, the technology can always get better, but it sounds like we're actually at a point where we have a lot of the problems solved, whether it's using tactile tablets or creating ways for people to use technology to guide each other through places, whether that's through a person, through Be My Eyes, or even in some cases an AI, with the Be My AI version of that.

But we just haven't gotten to the point where those things work for everyone, and everyone has a level of technological proficiency that lets them use those things. And that's something that clearly we'll need to work on in the future.

CINDY COHN
Yeah, but she also pointed out the work that needs to be done about making sure that we're continuing to build the tech that actually serves this community. You know, she talked about ghostwritten code and things like that, where people who don't have the experience are writing things and building things based upon what they think people who are blind might want. So, on the one hand, there's good news because a lot of really good technology already exists, but I think she also didn't let us off the hook as a society about something that we see all across the board, which is that we need to have the direct input of the people who are going to be using the tools in the building of the tools, lest we end up on a whole other path with things other than what people actually need. And, you know, this is one of those, what did they say? The lessons will be repeated until they are learned. This is one of those things where over and over again, we find that the need for people who are building technologies to not just talk to the people who are going to be using them, but really embed those people in the development, is one of the ways we stay true to our goal, which is to build stuff that will actually be useful to people.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some limited edition merch like tshirts or buttons or stickers and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Probably Shouldn't by J.Lang, commonGround by airtone, and Klaus by Skill_Borrower.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

Add Bluetooth to the Long List of Border Surveillance Technologies

A new report from news outlet NOTUS shows that at least two Texas counties along the U.S.-Mexico border have purchased a product that would allow law enforcement to track devices that emit Bluetooth signals, including cell phones, smartwatches, wireless earbuds, and car entertainment systems. This incredibly personal mode of tracking is the latest layer of surveillance infrastructure along the U.S.-Mexico border, where communities are not only exposed to a tremendous amount of constant monitoring, but also serve as a laboratory where law enforcement agencies at all levels of government test new technologies.

The product now being deployed in Texas, called TraffiCatch, can detect wifi and Bluetooth signals in moving cars to track them. Webb County, which includes Laredo, has had TraffiCatch technology since at least 2019, according to GovSpend procurement data. Val Verde County, which includes Del Rio, approved the technology in 2022. 

This data collection is possible because all Bluetooth devices regularly broadcast a Bluetooth Device Address. This address can be either a public address or a random address. Public addresses don’t change for the lifetime of the device, making them the easiest to track. Random addresses are more common and have multiple levels of privacy, but for the most part change regularly (this is the case with most modern smartphones and products like AirTags.) Bluetooth products with random addresses would be hard to track for a device that hasn’t paired with them. But if the tracked person is also carrying a Bluetooth device that has a public address, or if tracking devices are placed close to each other so a device is seen multiple times before it changes its address, random addresses could be correlated with that person over long periods of time.
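
TraffiCatch’s internal methods aren’t public, but the address categories above come from the Bluetooth specification, and a short sketch can make the tracking implications concrete. What follows is a minimal, hypothetical Python illustration, not TraffiCatch’s actual logic: the sensor names and addresses are invented, and the classify_ble_address helper assumes the advertisement’s address-type flag has already told the scanner whether an address is public or random.

```python
# Hypothetical sketch: classify a Bluetooth LE address the way a passive
# scanner might. For random addresses, the subtype is encoded in the two
# most significant bits of the first octet (Bluetooth Core Specification,
# Vol 6, Part B, Section 1.3).

def classify_ble_address(address: str, is_random: bool) -> str:
    """Classify a BLE address such as '5C:F3:70:8B:12:AF'."""
    if not is_random:
        # Public addresses never change, so the same address seen at two
        # roadside sensors trivially links the two sightings.
        return "public (fixed for the device's lifetime)"
    msb = int(address.split(":")[0], 16)
    subtype = (msb >> 6) & 0b11  # two most significant bits
    if subtype == 0b11:
        return "static random (stable until the device resets)"
    if subtype == 0b01:
        return "resolvable private (rotates periodically)"
    if subtype == 0b00:
        return "non-resolvable private (rotates periodically)"
    return "reserved"

# Rotation is defeated by correlation, not decryption: if one device in a
# car broadcasts a stable address, the rotating addresses observed
# alongside it can be tied back to the same vehicle across sensors.
sightings = [
    ("sensor_A", "5C:F3:70:8B:12:AF", False),  # older accessory, public
    ("sensor_A", "7B:22:C1:04:9D:E2", True),   # phone, rotating address
    ("sensor_B", "5C:F3:70:8B:12:AF", False),  # same accessory reappears
]
for sensor, addr, is_random in sightings:
    print(sensor, addr, "->", classify_ble_address(addr, is_random))
```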

It is unclear whether TraffiCatch is doing this sort of advanced analysis and correlation, and how effective it would be at tracking most modern Bluetooth devices.

According to TraffiCatch’s manufacturer, Jenoptik, this data derived from Bluetooth is also combined with data collected from automated license plate readers (ALPRs), another form of vehicle tracking technology placed along roads and highways by federal, state, and local law enforcement throughout the Texas border region. ALPRs are a well-understood technology for vehicle tracking, but the addition of Bluetooth tracking may allow law enforcement to track individuals even if they are using different vehicles.

This mirrors what we already know about how Immigration and Customs Enforcement (ICE) has been using cell-site simulators (CSSs). Also known as Stingrays or IMSI catchers, CSSs are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. In 2023, the Department of Homeland Security’s Inspector General released a troubling report detailing how federal agencies like ICE, its subcomponent Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using CSSs without proper authorization and in violation of the law. Specifically, the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSSs and failed to obtain the special orders required before using these types of surveillance devices.

Law enforcement agencies along the border can pour money into overlapping systems of surveillance that monitor entire communities living along the border thanks in part to Operation Stonegarden (OPSG), a Department of Homeland Security (DHS) grant program, which rewards state and local police for collaborating in border security initiatives. DHS doled out $90 million in OPSG funding in 2023, $37 million of which went to Texas agencies. These programs are especially alarming to human rights advocates due to recent legislation passed in Texas to allow local and state law enforcement to take immigration enforcement into their own hands.

As a ubiquitous wireless interface to many of our personal devices and even our vehicles, Bluetooth is a large and notoriously insecure attack surface for hacks and exploits. And as TraffiCatch demonstrates, even when your device’s Bluetooth tech isn’t being actively hacked, it can broadcast uniquely identifiable information that makes you a target for tracking. This is one of the many ways surveillance, and the distrust it breeds in the public over technology and tech companies, hinders progress. Hands-free communication in cars is a fantastic modern innovation. But the fact that it comes at the cost of opening a whole society up to surveillance is a detriment to all.

EFF Zine on Surveillance Tech at the Southern Border Shines Light on Ever-Growing Spy Network

6 May 2024 at 11:13
Guide Features Border Tech Photos, Locations, and Explanation of Capabilities

SAN FRANCISCO—Sensor towers controlled by AI, drones launched from truck-bed catapults, vehicle-tracking devices disguised as traffic cones—all are part of an arsenal of technologies that comprise the expanding U.S. surveillance strategy along the U.S.-Mexico border, revealed in a new EFF zine for advocates, journalists, academics, researchers, humanitarian aid workers, and borderland residents.

Formally released today and available for download online in English and Spanish, “Surveillance Technology at the U.S.-Mexico Border” is a 36-page comprehensive guide to identifying the growing system of surveillance towers, aerial systems, and roadside camera networks deployed by U.S. law enforcement agencies along the Southern border, allowing for the real-time tracking of people and vehicles.

The devices and towers—some hidden, camouflaged, or moveable—can be found in heavily populated urban areas, small towns, fields, farmland, highways, dirt roads, and deserts in California, Arizona, New Mexico, and Texas.

The zine grew out of work by EFF’s border surveillance team, which involved meetings with immigrant rights groups and journalists, research into government procurement documents, and trips to the border. The team located, studied, and documented spy tech deployed and monitored by the Department of Homeland Security (DHS), Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), National Guard, and Drug Enforcement Administration (DEA), often working in collaboration with local law enforcement agencies.

“Our team learned that while many people had an abstract understanding of the so-called ‘virtual wall,’ the actual physical infrastructure was largely unknown to them,” said EFF Director of Investigations Dave Maass. “In some cases, people had seen surveillance towers, but mistook them for cell phone towers, or they’d seen an aerostat flying in the sky and not known it was part of the U.S. border strategy.

“That's why we put together this zine; it serves as a field guide to spotting and identifying the large range of technologies that are becoming so ubiquitous that they are almost invisible,” said Maass.

The zine also includes a copy of EFF’s pocket guide to crossing the U.S. border and protecting information on smartphones, computers, and other digital devices.

The zine is available for republication and remixing under EFF’s Creative Commons Attribution License and features photography by Colter Thomas and Dugan Meyer, whose exhibit “Infrastructures of Control”—which incorporates some of EFF’s border research—opened in April at the University of Arizona. EFF has previously released a gallery of images of border surveillance that are available for publications to reuse, as well as a living map of known surveillance towers that make up the so-called “virtual wall.”

To download the zine:
https://www.eff.org/pages/zine-surveillance-technology-us-mexico-border

For more on border surveillance:
https://www.eff.org/issues/border-surveillance-technology

For EFF’s searchable Atlas of Surveillance:
https://atlasofsurveillance.org/ 

 

Contact: Dave Maass, Director of Investigations

CCTV Cambridge, Addressing Digital Equity in Massachusetts

3 May 2024 at 16:14

Here at EFF, digital equity is something we advocate for, and we are always thrilled to hear that a member of the Electronic Frontier Alliance is advocating for it as well. Simply put, digital equity is the condition in which everyone has access to the technology that allows them to participate in society, whether they live in rural America or the inner city—both places where big ISPs don’t find it profitable to invest. EFF has long advocated for affordable, accessible, future-proof internet access for all. I recently spoke with EFA member CCTV Cambridge, which has partnered with the Massachusetts Broadband Institute to tackle this issue and address the digital divide in their state:

How did the partnership with the Massachusetts Broadband Institute come about, and what does it entail?

Mass Broadband Institute and Mass Hire Metro North are the key funding partners. We were moving forward with lifting up digital equity and saw an opportunity to apply for this funding, which is going to several communities in the Metro North area. So, this collaboration was generated in Cambridge for the partners in this digital equity work. Key program activities will entail hiring and training “Digital Navigators” to be placed in the Cambridge Public Library and Cambridge Public Schools, working in partnership with navigators at CCTV and Just A Start. CCTV will employ a coordinator as part of the project, who will serve residents and coordinate the digital navigators across partners to build community, skills, and consistency in support for residents. Regular meetings will be coordinated for Digital Navigators across the city to share best practices, discuss challenging cases, exchange community resources, and measure impact from data collection. These efforts will align with regional initiatives supported through the Mass Broadband Institute Digital Navigator coalition.

What is CCTV Cambridge’s approach to digital equity and why is it an important issue?

CCTV’s approach to digital equity has always been about people over tech. We really see the Digital Navigators as more like digital social workers than IT people, in the sense that technology is required to be a fully civically engaged human—someone who is connected to your community and family, someone who can have a sense of well-being and safety in the world. We really feel that what digital equity means is not just being able to use the tools but being able to have access to the tools that make your life better. You really can’t operate in an equal way in the world without access to technology: you can’t make a doctor’s appointment, you can’t talk to your grandkids on Zoom, you can’t even park your car without an app! You can’t be civically engaged without access to tech. We risk marginalizing a bunch of folks if we don’t, as a community, bring them into digital equity work. We’re community media, it’s in our name, and digital equity is the responsibility of the community. It’s not okay to leave people behind.

It’s amazing to see organizations like CCTV Cambridge making a difference in the community, what do you envision as the results of having the Digital Navigators?

Hopefully we’re going to increase community and civic engagement in Cambridge, particularly amongst people who might not have the loudest voice. We’re going to reach people we haven't reached in the past, including people who speak languages other than English and haven’t had exposure to community media. It’s a really great opportunity for intergenerational work which is also a really important community building tool.

How can people both locally in Massachusetts and across the country plug-in and support?

People everywhere are welcome and invited to support this work through donations, which you can do by visiting cctvcambridge.org! When applications open for the Digital Navigators, share them in your networks with people you think would love to do this work; spread the word on social media and follow us on all platforms @cctvcambridge!

The U.S. House Version of KOSA: Still a Censorship Bill

3 May 2024 at 12:48

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. Those requirements will drive away both minors and adults who either lack the proper ID or value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S.; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the "duty of care" requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or "high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user was under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could potentially narrow the number of platforms subject to KOSA's enforcement by reducing some of the burden on websites that aren't primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category, or would image-sharing be its predominant purpose? What about TikTok, which has a mix of content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tends to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws.  It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources. 

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last "must-pass" legislation until the fall. This would effectively bypass public discussion of the House version. Just last month Congress attached another contentious, potentially unconstitutional bill to unrelated legislation, by including a bill banning TikTok inside of a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits. 

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead seek alternatives, such as a comprehensive federal privacy law, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

On World Press Freedom Day (and Every Day), We Fight for an Open Internet

3 May 2024 at 11:47

Today marks World Press Freedom Day, an annual celebration instituted by the United Nations in 1993 to raise awareness of press freedom and remind governments of their duties under Article 19 of the Universal Declaration of Human Rights. This year, the day is dedicated to the importance of journalism and freedom of expression in the context of the current global environmental crisis.

Journalists everywhere face challenges in reporting on climate change and other environmental issues. These challenges are myriad: lawsuits, intimidation, arrests, and disinformation campaigns, to name a few. For instance, journalists and human rights campaigners attending the COP28 Summit held in Dubai last autumn faced surveillance and intimidation. The Committee to Protect Journalists (CPJ) has documented arrests of environmental journalists in Iran and Venezuela, among other countries. And in 2022, a Guardian journalist was murdered while on the job in the Brazilian Amazon.

The threats faced by journalists are the same as those faced by ordinary internet users around the world. According to CPJ, there are 320 journalists jailed worldwide for doing their job. And ranked among the top jailers of journalists last year were China, Myanmar, Belarus, Russia, Vietnam, Israel, and Iran; countries in which internet users also face censorship, intimidation, and in some cases, arrest. 

On this World Press Freedom Day, we honor the journalists, human rights defenders, and internet users fighting for a better world. EFF will continue to fight for the right to freedom of expression and a free and open internet for every internet user, everywhere.



Biden Signed the TikTok Ban. What's Next for TikTok Users?

Over the last month, lawmakers moved swiftly to pass legislation that would effectively ban TikTok in the United States, eventually including it in a foreign aid package that was signed by President Biden. The impact of this legislation isn’t entirely clear yet, but what is clear: whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. 

What Happens Next?

At the moment, TikTok isn’t “banned.” The law gives ByteDance 270 days to divest TikTok before the ban would take effect, which would be on January 19th, 2025. In the meantime, we expect courts to determine that the bill is unconstitutional. Though there is no lawsuit yet, one on behalf of TikTok itself is imminent.
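
The deadline arithmetic checks out; a quick sketch, using the April 24, 2024 date on which the foreign aid package became law, confirms that 270 days later lands on January 19, 2025:

```python
from datetime import date, timedelta

signed = date(2024, 4, 24)               # the day the package was signed into law
deadline = signed + timedelta(days=270)  # the divestiture window the law allows
print(deadline)                          # 2025-01-19
```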

There are three possible outcomes. If the law is struck down, as it should be, nothing will change. If ByteDance divests TikTok by selling it, then the platform would still likely be usable. However, there’s no telling whether the app’s new owners would change its functionality, its algorithms, or other aspects of the company. As we’ve seen with other platforms, a change in ownership can result in significant changes that could impact its audience in unexpected ways. In fact, that’s one of the given reasons to force the sale: so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation. This is despite the fact that it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. 

Lastly, if ByteDance refuses to sell, users in the U.S. will likely see it disappear from app stores sometime between now and that January 19, 2025 deadline. 

How Will the Ban Be Implemented? 

The law limits liability to intermediaries—entities that “provide services to distribute, maintain, or update” TikTok by means of a marketplace, or that provide internet hosting services to enable the app’s distribution, maintenance, or updating. The law also makes intermediaries responsible for its implementation. 

The law explicitly denies the Attorney General the authority to enforce it against an individual user of a foreign adversary controlled application, so users themselves cannot be held liable for continuing to use the application, if they can access it. 

Will I Be Able to Download or Use TikTok If ByteDance Doesn’t Sell? 

It’s possible some U.S. users will find routes around the ban. But the vast majority will probably not, significantly shifting the platform's user base and content. If ByteDance itself assists in the distribution of the app, it could also be found liable, so even if U.S. users continue to use the platform, the company’s ability to moderate and operate the app in the U.S. would likely be impacted. Bottom line: for a period of time after January 19, it’s possible that the app would be usable, but it’s unlikely to be the same platform—or even a very functional one in the U.S.—for very long.

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic. Enacting this legislation has undermined this longstanding democratic principle. It has also undermined the U.S. government’s moral authority to call out other nations when they shut down internet access or ban social media apps and other online communications tools.

Legislators have given a few reasons for banning TikTok. One is to change the type of content on the app—a clear First Amendment violation. The second is to protect data privacy. Our lawmakers should work to protect data privacy, but this was the wrong approach. They should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries. They should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation. Instead, as happens far too often, our government’s actions are vastly overreaching while also deeply underserving the public. 

Speaking Freely: Rebecca MacKinnon

1 May 2024 at 12:31

This interview has been edited for length and clarity.

Rebecca MacKinnon is Vice President, Global Advocacy at the Wikimedia Foundation, the non-profit that hosts Wikipedia. Author of Consent of the Networked: The Worldwide Struggle For Internet Freedom (2012), she is co-founder of the citizen media network Global Voices, and  founding director of Ranking Digital Rights, a research and advocacy program at New America. From 1998-2004 she was CNN’s Bureau Chief in Beijing and Tokyo. She has taught at the University of Hong Kong and the University of Pennsylvania, and held fellowships at Harvard, Princeton, and the University of California. She holds an AB magna cum laude in Government from Harvard and was a Fulbright scholar in Taiwan.

David Greene: Can you introduce yourself and give us a bit of your background? 

My name is Rebecca MacKinnon, I am presently the Vice President for Global Advocacy at the Wikimedia Foundation, but I’ve worn quite a number of hats working in the digital rights space for almost twenty years. I was co-founder of Global Voices, which at the time we called the International Bloggers’ Network, and which is about to hit its twentieth anniversary. I was one of the founding board members of the Global Network Initiative, GNI. I wrote a book called “Consent of the Networked: The Worldwide Struggle for Internet Freedom,” which came out more than a decade ago. It didn’t sell very well, but apparently it still gets assigned in classes, so I still hear about it. I was also a founding member of Ranking Digital Rights, which ranks the big tech companies and the biggest telecommunications companies on the extent to which they are or are not protecting their users’ freedom of expression and privacy. I left that in 2021 and ended up with the Wikimedia Foundation, and it’s never a dull moment! 

Greene: And you were a journalist before all of this, right? 

Yes, I worked for CNN for twelve years: in Beijing for nine years, where I ended up Bureau Chief and Correspondent, and in Tokyo for almost three years, where I was also Bureau Chief and Correspondent. That’s also where I first experienced the magic of the global internet in a journalistic context, and where I experienced the internet arriving in China and the government immediately trying to figure out both how to take advantage of it economically and how to control it enough that the Communist Party would not lose power. 

Greene: At what point did it become apparent that the internet would bring both benefits and threats to freedom of expression?

At the beginning I think the media, industry, policymakers, kind of everybody, assumed—you know, this is like in 1995 when the internet first showed up commercially in China—everybody assumed “there’s no way the Chinese Communist Party can survive this,” and we were all a bit naive. And our reporting ended up influencing naive policies in that regard. And perhaps naive understanding of things like Facebook revolutions and things like that in the activism world. It really began to be apparent just how authoritarianism was adapting to the internet and starting to adapt the internet. And how China was really Exhibit A for how that was playing out and could play out globally. That became really apparent in the mid-to-late 2000s as I was studying Chinese blogging communities and how the government was controlling private companies, private platforms, to carry out censorship and surveillance work. 

Greene: And it didn’t stop with China, did it? 

It sure didn’t! And in the book I wrote I only had a chapter on China and talked about how if the trajectory the Western democratic world was on just kind of continued in a straight line we were going to go more in China’s direction unless policymakers, the private sector, and everyone else took responsibility for making sure that the internet would actually support human rights. 

Greene: It’s easy to talk about authoritarian threats, but we see some of the same concerns in democratic countries as well. 

We’re all just one bad election away from tyranny, aren’t we? This is again why when we’re talking to lawmakers, not only do we ask them to apply a Wikipedia test—if this law is going to break Wikipedia, then it’s a bad law—but also, how will this stand up to a bad election? If you think a law is going to be good for protecting children or fighting disinformation under the current dominant political paradigm, what happens if someone who has no respect for the rule of law, no respect for democratic institutions or processes ends up in power? And what will they do with that law? 

Greene: This happens so much within disinformation, for example, and I always think of it in terms of, what power are we giving the state? Is it a good thing that the state has this power? Well, let’s switch things up and go to the basics. What does free speech mean to you? 

People talk about is it free as in speech? Is it free as in beer? What does “free” mean? I am very much in the camp that freedom of expression needs to be considered in the context of human rights. So my free speech does not give me freedom to advocate for a pogrom against the neighboring neighborhood. That is violating the rights of other people. And I actually think that Article 19 of the Declaration of Human Rights—it may not be perfect—but it gives us a really good framework to think about what is the context of freedom of expression or free speech as situated with other rights? And how do we make sure that, if there are going to be limits on freedom of expression to prevent me from calling for a pogrom of my neighbors, then the limitations placed on my speech are necessary and proportionate and cannot be abused? And therefore it’s very important that whoever is imposing those limits is being held accountable, that their actions are sufficiently transparent, and that any entity’s actions to limit my speech—whether it’s a government or an internet service provider—that I understand who has the power to limit my speech or limit what I can know or limit what I can access, so that I can even know what I don’t know! So that I know what is being kept from me. I also know who has the authority to restrict my speech, under what circumstances, so that I know what I can do to hold them accountable. That is the essence of freedom of speech within human rights and where power is held appropriately accountable. 

Greene: How do you think about the ways that your speech might harm people? 

You can think of it in terms of the other rights in the Universal Declaration. There’s the right to privacy. There’s the right to assembly. There’s the right to life! So for me to advocate for people in that building over there to go kill people in that other building, that’s violating a number of rights that I should not be able to violate. But what’s complicated, when we’re talking about rules and rights and laws and enforcement of laws and governance online, is that we somehow think it can be more straightforward and black and white than governance in the physical world is. So what do we consider to be appropriate law enforcement in the city of San Francisco? It’s a hot topic! And reasonable people of a whole variety of backgrounds reasonably disagree and will never agree! So you can’t just fix crime in San Francisco the way you fix the television. And nobody in their right mind would expect that you should expect that, right? But somehow in the internet space there’s so much policy conversation around making the internet safe for children. But nobody’s running around saying, “let’s make San Francisco safe for children in the same way.” Because they know that if you want San Francisco to be 100% safe for children, you’re going to be Pyongyang, North Korea! 

Greene: Do you think that’s because with technology some people just feel like there’s this techno-solutionism? 

Yeah, there’s this magical thinking. I have family members who think that because I can fix something with their tech settings I can perform magic. I think because it’s new, because it’s a little bit mystifying for many people, and because I think we’re still in the very early stages of people thinking about governance of digital spaces and digital activities as an extension of real world activities. And they’re thinking more about, okay, it’s like a car we need to put seatbelts on.

Greene: I’ve heard that from regulators many times. Does the fact that the internet is speech, does that make it different from cars? 

Yeah, although increasingly cars are becoming more like the internet! Because a car is essentially a smartphone that can also be a very lethal weapon. And it’s also a surveillance device, it’s also increasingly a device that is a conduit for speech. So actually it’s going the other way!

Greene: I want to talk about misinformation a bit. You’re at Wikimedia, and so, independent of any concern people have about misinformation, Wikipedia is the product and its goal is to be accurate. What do we do with the “problem” of misinformation?

Well, I think it’s important to be clear about what is misinformation and what is disinformation. And deal with them—I mean they overlap, the dividing line can be blurry—but, nonetheless, it’s important to think about both in somewhat different ways. Misinformation being inaccurate information that is not necessarily being spread maliciously with intent to mislead. It might just be, you know, your aunt seeing something on Facebook and being like, “Wow, that’s crazy. I’m going to share it with 25 friends.” And not realizing that they’re misinformed. Whereas disinformation is when someone is spreading lies for a purpose. Whether it’s in an information warfare context where one party in a conflict is trying to convince a population of something about their own government which is false, or whatever it is. Or misinformation about a human rights activist and, say, an affair they allegedly had and why they deserve whatever fate they had… you know, just for example. That’s disinformation. And at the Wikimedia Foundation—just to get a little into the weeds because I think it helps us think about these problems—Wikipedia is a platform whose content is not written by staff of the Wikimedia Foundation. It’s all contributed by volunteers, anybody can be a volunteer. They can go on Wikipedia and contribute to a page or create a page. Whether that content stays, of course, depends on whether the content they’ve added adheres to what constitutes well-sourced, encyclopedic content. There’s a whole hierarchy of people whose job it is to remove content that does not fit the criteria. And one could talk about that for several podcasts. But that process right there is, of course, working to counter misinformation. Because anything that’s not well-sourced—and they have rules about what is a reliable source and what isn’t—will be taken down. So the volunteer Wikipedians, kind of through their daily process of editing and enforcing rules, are working to eliminate as much misinformation as possible. Of course, it’s not perfect. 

Greene: [laughing] What do you mean it’s not perfect? It must be perfect!

What is true is a matter of dispute even between scientific journals or credible news sources, or what have you. So there’s lots of debates and all those debates are in the history tab of every page which are public, about what source is credible and what the facts are, etc. So this is kind of the self-cleaning oven that’s dealing with misinformation. The human hive mind that’s dealing with this. Disinformation is harder because you have a well-funded state actor who not only may be encouraging people—not necessary people who are employed by that actor themselves, but people who are kind of nationalistic and supporters of that government or politician or people who are just useful idiots—to go on and edit Wikipedia to promote certain narratives. But that’s kind of the least of it. You also, of course, have threats, credible, physical threats against editors who are trying to delete the disinformation and staff of the Foundation who are trying to support editors in dealing with investigating and identifying what is actually a disinformation campaign and supports volunteers in addressing that, sometimes with legal support, sometimes with technical support and other support. But people are in jail in one country in particular right now because they were fighting disinformation on the projects in their language. In Belarus, we had people, volunteers, who were jailed for the same reason. We have people who are under threat in Russia, and you have governments who will say, “Wikipedia contains disinformation about our, for example, Special Military Exercise in Ukraine because they’re calling it ‘an invasion’ which is disinformation, so therefore they’re breaking the law against disinformation so we have to threaten them.” So the disinformation piece—fighting it can become very dangerous. 

Greene: What I hear is there are threats to freedom of expression in efforts to fight disinformation and, certainly in terms of state actors, those might be malicious. Are there any well-meaning efforts to fight disinformation that also bring serious threats to freedom of expression? 

Yeah, the people who say, “Okay, we should just require the platforms to remove all content that is anything from COVID disinformation to certain images that might falsely present… you know, deepfake images, etc.” Content-focused efforts to fight misinformation and disinformation will result in over-censorship because you can almost never get all the nuance and context right. Humor, satire, critique, scientific reporting on a topic or about disinformation itself or about how so-and-so perpetrated disinformation on X, Y, Z… you have to actually talk about it. But if the platform is required to censor the disinformation you can’t even use that platform to call out disinformation, right? So content-based efforts to fight disinformation go badly and get weaponized. 

Greene: And, as the US Supreme Court has said, there’s actually some social value to the little white lie. 

There can be. There can be. And, again, there’s so many topics on which reasonable people disagree about what the truth is. And if you start saying that certain types of misinformation or disinformation are illegal, you can quickly have a situation where the government is becoming arbiter of the truth in ways that can be very dangerous. Which brings us back to… we’re one bad election away from tyranny.

Greene: In your past at Ranking Digital Rights you looked more at the big corporate actors rather than State actors. How do you see them in terms of freedom of expression—they have their own freedom of expression rights, but there’s also their users—what does that interplay look to you? 

Especially in relation to the disinformation thing, when I was at Ranking Digital Rights we put out a report that also related to regulation. When we’re trying to hold these companies accountable, whether we’re civil society or government, what’s the appropriate approach? The title of the report was, “It’s Not the Content, it’s the Business Model.” Because the issue is not about the fact that, oh, something bad appears on Facebook. It’s how it’s being targeted, how it’s being amplified, how that speech and the engagement around it is being monetized, that’s where most of the harm takes place. And here’s where privacy law would be rather helpful! But no, instead we go after Section 230. We could do a whole other podcast on that, but… I digress. 

I think this is where bringing in international human rights law around freedom of expression is really helpful. Because the US constitutional law, the First Amendment, doesn’t really apply to companies. It just protects the companies from government regulation of their speech. Whereas international human rights law does apply to companies. There’s this framework, The UN Guiding Principles on Business and Human Rights, where nation-states have the ultimate responsibility—duty—to protect human rights, but companies and platforms, whether you’re a nonprofit or a for-profit, have a responsibility to respect human rights. And everybody has a responsibility to provide remedy, redress. So in that context, of course, it doesn’t contradict the First Amendment at all, but it sort of adds another layer to corporate accountability that can be used in a number of ways. And that is being used more actively in the European context. But Article 19 is not just about your freedom of speech, it’s also your freedom of access to information, which is part of it, and your freedom to form an opinion without interference. Which means that if you are being manipulated and you don’t even know it—because you are on this platform that’s monetizing people’s ability to manipulate you—that’s a violation of your freedom of expression under international law. And that’s a problem that companies, platforms of any kind—including if Wikimedia were to allow that to happen, which they don’t—anyone should be held accountable for. 

Greene: Just in terms of the role of the State in this interplay, because you could say that companies should operate within a human rights framing, but then we see different approaches around the world. Is it okay or is it too much power for the state to require them to do that? 

Here’s the problem. If states were perfect in achieving their human rights duties, then we wouldn’t have a problem and we could totally trust states to regulate companies in our interest and in ways that protect our human rights. But there is no such state. There are some that are further away on the spectrum than others, but they’re all on a spectrum and nobody is at that position of utopia, and they will never get there. And so, given that all states, in large ways or small, in different ways, are making demands of internet platforms, companies generally, that reasonable numbers of people believe violate their rights, we need accountability. Holding the state accountable for what it’s demanding of the private sector, making sure that’s transparent, and ensuring that the state does not have absolute power is of utmost importance. And there are situations where a government is just blatantly violating rights, and a company—even a well-meaning company that wants to do the right thing—is just stuck between a rock and a hard place. You can be really transparent about the fact that you’re complying with bad law, but you’re stuck in this place where if you refuse to comply then your employees go to jail. Or other bad things happen. And so what do you do other than just try and let people know? And then the state tells you, “Oh, you can’t tell people because that’s a state secret.” So what do you do then? Do you just stop operating? So one can be somewhat sympathetic. Some of the corporate accountability rhetoric has gone a little overboard in not recognizing that if states are failing to do their job, we have a problem. 

Greene: What’s the role of either the State or the companies if you have two people and one person is making it hard for the other to speak? Whether through heckling or just creating an environment where the other person doesn’t feel safe speaking? Is there a role for either the State or the companies where you have two peoples’ speech rights butting up against each other? 

We have this in private physical spaces all the time. If you’re at a comedy show and somebody gets up and starts threatening the stand-up comedian, obviously, security throws them out! I think in physical space we have some general ideas about that, that work okay. And that we can apply in virtual space, although it’s very contextual and, again, somebody has to make a decision—whose speech is more important than whose safety? Choices are going to be made. They’re not always going to be, in hindsight, the right choices, because sometimes you have to act really quickly and you don’t know if somebody’s life is in danger or not. Or how dangerous is this person speaking? But you have to err on the side of protecting life and limb. And then you might have realized at the end of the day that wasn’t the right choice. But are you being transparent about what your processes are—what you’re going to do under what circumstances? So people know, okay, well this is really predictable. They said they were going to x if I did y, and I did y and they did indeed take action, and if I think that they unfairly took action then there’s some way of appealing. That it’s not just completely opaque and unaccountable. 

This is a very overly simplistic description of very complex problems, but I’m now working at a platform. Yes, it’s a nonprofit, public interest platform, but our Trust and Safety team are working with volunteers who are enforcing rules and every day—well, I don’t know if it’s every day because they’re the Trust and Safety team so they don’t tell me exactly what’s going on—but there are frequent decisions around people’s safety. And what enables the volunteer community to basically both trust each other enough, and trust the platform operator enough, for the whole thing not to collapse due to mistrust and anger is that you’re being open and transparent enough about what you’re doing and why you’re doing it so that if you did make a mistake there’s a way to address it and be honest about it. 

Greene: So at least at Wikimedia you have the overriding value of truthfulness. At another platform, should they value wanting to preserve places for people who otherwise wouldn’t have places to speak? People who historically or culturally haven’t had the opportunity to speak. How should they handle these instances of people being heckled or shouted down off of a site? From your perspective, how should they respond to that? Should they make an effort to preserve these spaces? 

This is where I think in Silicon Valley in particular you often hear this thing that the technology is neutral—“we treat everybody the same”—

Greene: And it’s not true.

Oh, of course it’s not true! But that’s the rhetoric. But that is held up as being “the right thing.” And it’s not a perfect comparison, but being completely blind to the context and the socio-economic and political realities of the human beings that you are taking action upon is sort of like operating a public housing system without taking into account at all the socio-economic or ethnic backgrounds of the people for whom you’re making decisions: you’re going to be perpetuating and, most likely, amplifying social injustice. So people who run public housing or universities and so on are quite familiar with this notion that being neutral is actually not neutral. It’s perpetuating existing social, economic, and political power imbalances. And we found that’s absolutely the case with social media claiming to be neutral. And the vulnerable people end up losing out. That’s what the research has shown and the activism has shown. 

And, you know, in the Wikimedia community there are debates about this. There are people who have been editing for a long time who say, “we have to be neutral.” But on the other hand—what’s very clear—is that the greater the diversity of viewpoints and backgrounds and languages and genres, etc., of the people contributing to an article on a given topic, the better it is. So if you want something to actually have integrity, you can’t just have one type of person working on it. And so there’s all kinds of reasons why it’s important, as a platform operator, that we do everything we can to ensure that this is a welcoming space for people of all backgrounds. That people who are under threat feel safe contributing to the platforms and not just rich white guys in Northern Europe. 

Greene: And at the same time we can’t expect them to be more perfect than the real world, also, right? 

Well, yeah, but you do have to recognize that the real world is the real world and there are these power dynamics going on that you have to take into account and you can decide to amplify them by pretending they don’t exist, or you can work actively to compensate in a manner that is consistent with human rights standards. 

Greene: Okay, one more question for you. Who is your free speech hero and why? 

Wow, that’s a good question, nobody has asked me that before in that very direct way. I think I really have to say it’s a group of people who really set me on the path of caring deeply for the rest of my life about free speech. Those are the people in China, most of whom I met when I was a journalist there, who stood up to tell the truth despite tremendous threats like being jailed, or worse. And oftentimes I would witness the determination, even from very ordinary people, that “I am right, and I need to say this. And I know I’m taking a risk, but I must do it.” And it’s because of my interactions with such people in my twenties, when I was starting out as a journalist in China, that I was set on this path. And I am grateful to them all, including several who are no longer on this earth, including Liu Xiaobo, who received the Nobel Peace Prize while he was in jail, before he died. 



Congress Should Just Say No to NO FAKES

29 April 2024 at 16:21

There is a lot of anxiety around the use of generative artificial intelligence, some of it justified. But it seems like Congress thinks the highest priority is to protect celebrities – living or dead. Never fear, ghosts of the famous and infamous, the U.S. Senate is on it.

We’ve already explained the problems with the House’s approach, No AI FRAUD. The Senate’s version, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, isn’t much better.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies. It’s retroactive, meaning the post-mortem right would apply immediately to the heirs of, say, Prince, Tom Petty, or Michael Jackson, not to mention your grandmother.

Boosters talk a good game about protecting performers and fans from AI scams, but NO FAKES seems more concerned about protecting their bottom line. It expressly describes the new right as a “property right,” which matters because federal intellectual property rights are excluded from Section 230 protections. If courts decide the replica right is a form of intellectual property, NO FAKES will give people the ability to threaten platforms and companies that host allegedly unlawful content, which tend to have deeper pockets than the actual users who create that content. This will incentivize platforms that host our expression to be proactive in removing anything that might be a “digital replica,” whether its use is legal expression or not. While the bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.

This “digital replica” right effectively federalizes—but does not preempt—state laws recognizing the right of publicity. Publicity rights are an offshoot of state privacy law that give a person the right to limit the public use of her name, likeness, or identity for commercial purposes, and a limited version of it makes sense. For example, if Frito-Lay uses AI to deliberately generate a voiceover for an advertisement that sounds like Taylor Swift, she should be able to challenge that use. The same should be true for you or me.

Trouble is, in several states the right of publicity has already expanded well beyond its original boundaries. It was once understood to be limited to a person’s name and likeness, but now it can mean just about anything that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. In some states, your heirs can invoke the right long after you are dead and, presumably, in no position to be embarrassed by any sordid commercial associations, or for anyone to believe you have actually endorsed a product from beyond the grave.

In other words, it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died. It is entirely divorced from the incentive structure behind intellectual property rights like copyright and patents—presumably no one needs a replica right, much less a post-mortem one, to invest in their own image, voice, or likeness. Instead, it effectively creates a windfall for people with a commercially valuable recent ancestor, even if that value emerges long after they died.

What is worse, NO FAKES doesn’t offer much protection for those who need it most. People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image. Savvy commercial players will build licenses into standard contracts, taking advantage of workers who lack bargaining power and leaving the right to linger as a trap only for unwary or small-time creators.

Although NO FAKES leaves the question of Section 230 protection open, it’s been expressly eliminated in the House version, and platforms for user-generated content are likely to over-censor any content that is, or might be, flagged as containing an unauthorized digital replica. At the very least, we expect to see the expansion of fundamentally flawed systems like Content ID that regularly flag lawful content as potentially illegal and chill new creativity that depends on major platforms to reach audiences. The various exceptions in the bill won’t mean much if you have to pay a lawyer to figure out if they apply to you, and then try to persuade a rightsholder to agree.

Performers and others are raising serious concerns. As policymakers look to address them, they must take care to be precise, careful, and practical. NO FAKES doesn’t reflect that care, and its sponsors should go back to the drawing board. 

Speaking Freely: Obioma Okonkwo

23 April 2024 at 15:05

This interview has been edited for clarity and length.

Obioma Okonkwo is a lawyer and human rights advocate. She is currently the Head of Legal at Media Rights Agenda (MRA), a non-governmental organization based in Nigeria whose focus is to promote and defend freedom of expression, press freedom, digital rights, and access to information within Nigeria and across Africa. She is passionate about advancing freedom of expression, media freedom, access to information, and digital rights. She also has extensive experience in litigating, researching, advocating, and training around these issues. Obioma is an alumna of the Open Internet for Democracy Leaders Programme, a fellow of the African School of Internet Governance, and a Media Viability Ambassador with the Deutsche Welle Akademie.

 York: What does free speech or free expression mean to you?

In my view, free speech is an intrinsic right that allows citizens, journalists and individuals to express themselves freely without repressive restriction. It is also the ability to speak, be heard, and participate in social life as well as political discussion, and this includes the right to disseminate information and the right to know. Considering my work around press freedom and media rights, I would also say that free speech is when the media can gather and disseminate information to the public without restrictions.

 York: Can you tell me about an experience in your life that helped shape your views on free speech?

An experience that shaped my views on free speech happened in 2013, while I was in university. Some of my schoolmates were involved in a ghastly car accident—the result of a bad road—which led to their deaths. This led students to start an online campaign demanding that the government repair the road and compensate the victims’ families. Due to this campaign, the road was repaired and the victims’ families were compensated. Another instance is the #EndSARS protest, a protest against police brutality and corrupt practices in Nigeria. People were freely expressing their opinions both offline and online on this issue and demanding a reform of the Nigerian Police Force. These incidents have helped shape my views on how important the right to free speech is in any given society, considering that it gives everyone an avenue to hold the government accountable, demand justice, and share their views on issues that affect them as individuals or groups.

 York: I know you work a bit on press freedom in Nigeria and across Africa. Can you tell me a bit about the situation for press freedom in the context in which you’re working?

The situation for press freedom in Africa—and particularly Nigeria—is currently an eyesore. The legal and political environment is becoming repressive of press freedom and freedom of expression as governments across the region position themselves as authoritarian. They have been making several efforts to gag the media by enacting draconian laws, arresting and arbitrarily detaining journalists, imposing fines, and closing media outlets, amongst many other actions.

In my country, Nigeria, the government has resorted to using laws like the Cybercrime Act of 2015 and the Criminal Code Act, among other laws, to silence journalists who are either exposing its corrupt practices, sharing dissenting views, or holding it accountable to the people. For instance, journalists like Agba Jalingo, Ayodele Samuel, Emmanuel Ojo, and Dare Akogun – to mention just a few – have been arrested, detained, or charged to court under these laws. In the case of Agba Jalingo, he was arrested and detained for over 100 days after he exposed the corrupt practices of the governor of Cross River, a state in Nigeria.

The case is the same in many African countries, including Benin, Ghana, and Senegal. Journalists are arrested, detained, and sent to court for performing their journalistic duty. Ignace Sossou, a journalist in Benin, was sent to court and imprisoned under the Digital Code for posting a statement of the Minister of Justice on his Facebook account. The reality right now is that governments across the region are at war against press freedom and journalists, who are purveyors of information.

Although this is what press freedom looks like across the region, civil society organizations are fighting back to protect press freedom and freedom of expression. To create an enabling environment for press freedom, my organization, Media Rights Agenda (MRA), has been making several efforts, such as instituting lawsuits before national and regional courts challenging these draconian laws; providing pro bono legal representation to journalists who are arrested, detained, or charged; and engaging various stakeholders on this issue.

 York: Are you working on the issue of online regulation and can you tell us the situation of online speech in the region?

As the Head of Legal at MRA, I am actively working on the issue of online regulation to ensure that the rights to press freedom, freedom of expression, access to information, and digital rights are promoted and protected online. The region is facing an era of digital authoritarianism, as there is a crackdown on online speech. In the context of my country, the Nigerian government has made several attempts to regulate the internet or introduce social media bills under the guise of combating cybercrimes, hate speech, and mis/disinformation. However, diverse stakeholders – including civil society organizations like mine – have, on many occasions, fought against these attempts to regulate online speech, because these proposed bills would not only limit freedom of expression, press freedom, and other digital rights. They would also shrink the civic space online, as some of their provisions are overly broad, and governments are known for using such laws arbitrarily to silence dissenting voices and witch-hunt journalists, opposition figures, or individuals.

An example is when diverse stakeholders challenged the National Information Technology Development Agency (NITDA), the agency saddled with the duty of creating a framework for the planning and regulation of information technology practices, activities, and systems in Nigeria, over its draft regulation, the “Code of Practice for Interactive Computer Service Platforms/Internet Intermediaries.” They challenged the draft regulation on the basis that it must contain provisions that recognize freedom of expression, privacy, press freedom, and other human rights concerns. Although the agency took into consideration some of the suggestions made by these stakeholders, there are still concerns that individuals, activists, and human rights defenders might be surveilled, amongst other things.

 The government of Nigeria is relying on laws like the Cybercrime Act, Criminal Code Act and many more to stifle online speech. And the Ghanaian government is no different as they are also relying on the Electronic Communication Act to suppress freedom of expression and hound critical journalists under the pretense of battling fake news. Countries like Zimbabwe, Sudan, Uganda, and Morocco have also enacted laws to silence dissent and repress citizens’ internet use especially for expression.

 York: Can you also tell me a little bit more about the landscape for civil society where you work? Are there any creative tactics or strategies from civil society that you work with?

Nigeria is home to a wide variety of civil society organizations (CSOs) and non-governmental organizations (NGOs). The main legislation regulating CSOs consists of federal laws such as the Nigerian Constitution, which guarantees freedom of association, and the Companies and Allied Matters Act (CAMA), which provides every group or association with legal personality.

CSOs in Nigeria face quite a number of legal and political hurdles. For example, CSOs that wish to operate as a company limited by guarantee must seek the consent of the Attorney-General of the Federation, which may be refused, while CSOs operating as incorporated trustees are mandated to carry out obligations that can be tedious and time-consuming. On several occasions, the Nigerian government has made attempts to pressure and even subvert CSOs, and to single out certain CSOs for special adverse treatment. Because many CSOs receive foreign funding support, the government finds it convenient to berate or criticize them as being “sponsored” by foreign interests, with the underlying suggestion that such organizations are unpatriotic and – by criticizing government – are being paid to act contrary to Nigeria’s interests.

There are lots of strategies and tactics CSOs are using to address the issues they are working on, including issuing press statements, engaging diverse stakeholders, litigating, building capacity, and carrying out advocacy.

 York: Do you have a free expression hero?

Yes, I do. All the critical journalists out there are my free expression heroes. I also consider Julian Assange a free speech hero for his belief in openness and transparency, as well as for taking personal risks to expose the corrupt acts of the powerful – an act necessary in a democratic society.

Screen Printing 101: EFF's Spring Speakeasy at Babylon Burning

23 April 2024 at 12:00

At least twice each year, we invite current EFF members to gather with fellow internet freedom supporters and to meet the people behind your favorite digital civil liberties organization. For this year’s Bay Area-based members, we had the opportunity to take over Babylon Burning’s screen printing shop in San Francisco, where Mike Lynch and his team bring EFF art(work) to life.

Babylon Burning Front of Building

To kick off the evening, EFF’s Director of Member Engagement Aaron Jue talked about the nearly 20-year friendship between EFF and Babylon Burning, the shop that has printed everything from t-shirts to hoodies to hats, and now tote bags. At EFF, we love the opportunity to support a local business and enjoy a great partnership at the same time. When we send our artwork to Mike and his staff, we know it is in good hands.

EFF Shirt Archive

Following Aaron, EFF’s Creative Director Hugh D’Andrade dove into some of EFF’s most popular works, such as the NSA Spying Eagle and the many versions of the EFF Liberty Mecha. The NSA Spying Eagle highlights the mass surveillance at issue in the Hepting and Jewel cases. The EFF Liberty Mecha has been featured on four different occasions, most recently on a shirt for DEF CON 29, and highlights freedom, empowerment through technology, interoperability, and teamwork. More information about EFF’s member shirts can be found in our blog and in our shop.

Mike Lynch at Babylon Burning

Mike jumped in after Hugh to walk members through a hands-on demonstration of traditional screen printing. Members printed tote bags, toured the Babylon Burning print shop, and mingled with EFF staff and local supporters.

EFF Tote Bag

Thank you to everyone who attended this year’s Spring Members’ Speakeasy and continues to support EFF as a member. Your support allows our engineers, lawyers, and skilled advocates to tend the path for technology users, and to nurture your rights to privacy, expression, and innovation online.

EFF Art

Thanks to all of the EFF members who participated in our annual Bay Area meetup. If you're not a member of EFF yet, join us today. See you at the next event!

Podcast Episode: Right to Repair Catches the Car

23 April 2024 at 03:06

If you buy something—a refrigerator, a car, a tractor, a wheelchair, or a phone—but you can't have the information or parts to fix or modify it, is it really yours? The right to repair movement is based on the belief that you should have the right to use and fix your stuff as you see fit, a philosophy that resonates especially in economically trying times, when people can’t afford to just throw away and replace things.


(You can also find this episode on the Internet Archive and on YouTube.)

 Companies for decades have been tightening their stranglehold on the information and the parts that let owners or independent repair shops fix things, but the pendulum is starting to swing back: New York, Minnesota, California, Colorado, and Oregon are among states that have passed right to repair laws, and it’s on the legislative agenda in dozens of other states. Gay Gordon-Byrne is executive director of The Repair Association, one of the major forces pushing for more and stronger state laws, and for federal reforms as well. She joins EFF’s Cindy Cohn and Jason Kelley to discuss this pivotal moment in the fight for consumers to have the right to products that are repairable and reusable.  

In this episode you’ll learn about: 

  • Why our “planned obsolescence” throwaway culture doesn’t have to be, and shouldn’t be, a technology status quo. 
  • The harm done by “parts pairing”: software barriers used by manufacturers to keep people from installing replacement parts. 
  • Why one major manufacturer put out a user manual in France, but not in other countries including the United States. 
  • How expanded right to repair protections could bring a flood of new local small-business jobs while reducing waste. 
  • The power of uniting disparate voices—farmers, drivers, consumers, hackers, and tinkerers—into a single chorus that can’t be ignored. 

Gay Gordon-Byrne has been executive director of The Repair Association—formerly known as The Digital Right to Repair Coalition—since its founding in 2013, helping lead the fight for the right to repair in Congress and state legislatures. Their credo: If you bought it, you should own it and have the right to use it, modify it, and repair it whenever, wherever, and however you want. Earlier, she had a 40-year career as a vendor, lessor, and used equipment dealer for large commercial IT users; she is the author of “Buying, Supporting and Maintaining Software and Equipment - an IT Manager's Guide to Controlling the Product Lifecycle” (2014), and a Colgate University alumna.


What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

GAY GORDON-BYRNE
A friend of mine from Boston had his elderly father in a condo in Florida, not uncommon. And when the father went into assisted living, the refrigerator broke and it was out of warranty. So my friend went to Florida, figured out what was wrong, said, ‘Oh, I need a new thermostat,’ ordered the thermostat, stuck around till the thermostat arrived, put it in and it didn't work.

And so he called GE because he bought the part from GE and he says, ‘you didn't provide me, there's a password. I need a password.’ And GE says, ‘Oh, you can't have the password. You have to have a GE authorized tech come in to insert the password.’ And that to me is the ultimate in stupid.

CINDY COHN
That’s Gay Gordon-Byrne with an example of how companies often prevent people from fixing things that they own in ways that are as infuriating as they are absurd.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.  

Our guest today, Gay Gordon-Byrne, is the executive director of The Repair Association, where she has been advocating for years for legislation that will give consumers the right to buy products that are repairable and reusable – rather than things that need to be replaced outright every few years, or as soon as they break. 

CINDY COHN
The Right to Repair is something we fight for a lot at EFF, and a topic that has come up frequently on this podcast. In season three, we spoke to Adam Savage about it.

ADAM SAVAGE
I was trying to fix one of my bathroom faucets a couple of weeks ago, and I called up a Grohe service video of how to repair this faucet. And we all love YouTube for that, right, because anything you want to fix, whether it’s your video camera, or this thing, someone has taken it apart. Whether they’re in Micronesia or Australia, it doesn’t matter. But the moment someone figures out that they can make a bunch of dough from that, I’m sure we’d see companies start to say, ‘no, you can’t put up those repair videos, you can only put up these repair videos’ and we all lose when that happens.

JASON KELLEY
In an era where both the cost of living and environmental concerns are top of mind, the right to repair is more important than ever. It addresses both sustainability and affordability concerns.

CINDY COHN
We’re especially excited to talk to Gay right now because Right to Repair is a movement that is on its way up and we have been seeing progress in recent months and years. We started off by asking her where things stand right now in the United States.

GAY GORDON-BYRNE
We've had four states actually pass statutes for Right to Repair, covering a variety of different equipment, and there's 45 states that have introduced right to repair over the past few years, so we expect there will be more bills finishing. Getting them started is easy, getting them over the finish line is hard.

CINDY COHN
Oh, yes. Oh, yes. We just passed a right to repair bill here in California where EFF is based. Can you tell us a little bit about that and do you see it as a harbinger, or just another step along the way?

GAY GORDON-BYRNE
Well, honestly, I see it as another step along the way, because three states actually had already passed laws. In California, Apple decided that they weren't going to object any further to right to repair laws, but they did have some conditions that are kind of unique to California because Apple is so influential in California. But it is a very strong bill for consumer products. It just doesn't extend to non-consumer products.

CINDY COHN
Yeah. That's great. And do you know what made Apple change their mind? Because they had, they had been staunch opponents, right? And EFF has battled with them in various different areas around Section 1201 and other things and, and then it seemed like they changed their minds and I wondered if you had some insights about that.

GAY GORDON-BYRNE
I take full responsibility.

CINDY COHN
Yay! Hey, getting a big company to change their position like that is no small feat and it doesn't happen overnight.

GAY GORDON-BYRNE
Oh, it doesn't happen overnight. And what's interesting is that New York actually passed a bill that Apple tried to negotiate and kind of really didn't get to do it in New York, that starts in January. So there was a pressure point already in place. New York is not an insignificant size state.

And then Minnesota passed a much stronger bill. That also takes effect, I think, I might be wrong on this, I think also in January. And so the wheels were already turning, I think the idea of inevitability had occurred to Apple that they'd be on the wrong side of all their environmental claims if they didn't at least make a little bit more of a sincere effort to make things repairable.

CINDY COHN
Yeah. I mean, they have been horrible about this from the very beginning with, you know with custom kinds of dongles, and difficulty in repairing. And again, we fought them around section 1201, which is the ability to do circumvention so that you can see how something works and build. tools that will let you fix them.

It's just no small feat from where we set to get, to get the winds to change such that even Apple puts their finger up and says, I think the winds are changing. We better get on the right side of history.

GAY GORDON-BYRNE
Yeah, that's what we've been trying to do for the past, when did we get started? I got started in 2010, the organization got started in 2013. So we've been at it a full 10 years as an actual organization, but the problems with Apple and other manufacturers existed long before. So the 1201 problem still exists, and that's the problem that we're trying to move in federally, but oh my God. I thought moving legislation in states was hard and long.

CINDY COHN
Yeah, the federal system is different, and I think that one of the things that we've experienced, though, is when the states start leading, eventually the feds begin to follow. Now, often they follow with the idea that they're going to water down what the states do. That's why, you know, EFF and, and I think a lot of organizations rally around this thing called preemption, which doesn't really sound like a thing you want to rally around, but it ends up being the way in which you make sure that the feds aren't putting the brakes on the states in terms of doing the right things and that you create space for states to be more bold.

It's sometimes not the best thing for a company that has to sell in a bunch of different markets, but it's certainly better than  letting the federal processes come in and essentially damp down what the states are doing.

GAY GORDON-BYRNE
You're totally right. One of our biggest fears is that someone will... We'll actually get a bill moving for Right to Repair, and it's obviously going to be highly lobbied, and we will probably not have the same quality of results as we have in states. So we would like to see more states pass more bills so that it's harder and harder for the federal government to preempt the states.

In the meantime, we're also making sure that the states don't preempt the federal government, which is another source of friction.

CINDY COHN
Oh my gosh.

GAY GORDON-BYRNE
Yeah, preemption is a big problem.

CINDY COHN
It goes both ways. In our, in our Section 1201 fights, we're fighting the Green case, uh, Green vs. Department of Justice, and the big issue there is that while we can get exemptions under 1201 for actual circumvention, the tools that you need  in order to circumvent, you can't get an exception for, and so you have this kind of strange situation in which you technically have the right to repair your device, but nobody can help you do that and nobody can give you the tools to do it. 

So it's this weird, I often, sometimes I call it the, you know, it's legal to be in Arizona, but it's illegal to go to Arizona kind of law. No offense, Arizona.

GAY GORDON-BYRNE
That's very much the case.

JASON KELLEY
You mentioned, Gay, that you've been doing this work while probably you've been doing the work a lot longer than the time you've been with the coalition and the Repair Association. We'll get to the brighter future that we want to look towards here in a second, but before we get to the, the way we want to fix things and how it'll look when we do, can you just take us back a little bit and tell us more about how we got to a place where you actually have to fight for your right to repair the things that you buy. You know, 50 years ago, I think most people would just assume that appliances and, and I don't know if you'd call them devices, but things that you purchased you could fix or you could bring to a repair shop. And now we have to force companies to let us fix things.

I know there's a lot of history there, but is there a short version of how we ended up in this place where we have to fight for this right to repair?

GAY GORDON-BYRNE
Yeah, there is a short version. It's called: about 20 years ago, right after Y2K, it became possible, because of the improvements in the internet, for manufacturers to basically host a repair manual or a user guide online and expect their customers to be able to retrieve that information for free.

Otherwise, they have to print, they have to ship. It's a cost. So it started out as a cost reduction strategy on the part of manufacturers. And at first it seemed really cool because it really solved a problem. I used to have manuals that came in like, huge desktop sets that were four feet of paper. And every month we'd get pages that we had to replace because the manual had been updated. So it was a huge savings for manufacturers, a big convenience for consumers and for businesses.

And then, no aspersions on lawyers. But my opinion is that some lawyer decided they wanted to know, they should know. For reasons we have no idea because they, they still don't make sense, that they should know who's accessing their website. So then they started requiring a login and a password, things like that.

And then another bright light, possibly a lawyer, but most likely a CFO said, we should charge people to get access to the website. And that slippery slope got really slippery or really fast. So it became obvious that you could save a lot of money by not providing manuals, not providing diagnostics and then not selling parts.

I mean, if you didn't want to sell parts, you didn't have to. There was no law that said you have to sell parts, or tools, or diagnostics. And that's where we've been for 20 years. And everybody that gets away with it has encouraged everybody else to do it. To the point where, um, I don't think Cindy would disagree with me.

I mean, I took a look, um, as did Nathan Proctor of US PIRG when we were getting ready to go before the FTC. And we said, you know, I wonder how many companies are actually selling parts and tools and manuals, and Nathan came up with a similar statistic. Roughly 90 percent of the companies don't.

JASON KELLEY
Wow.

GAY GORDON-BYRNE
So we're, face it, we have now gone from a situation where everybody could fix anything if they were really interested, to 90 percent of stuff not being fixable, and that number is going, getting worse, not better. So yeah, that's the short story, it’s been a bad 20 years.

CINDY COHN
It's funny because I think it's really, it's such a testament to people's desire to want to fix their own things that despite this, you can go on YouTube if something breaks and you can find some nice person who will walk you through how to fix, you know, lots and lots of devices that you have. And to me, that's a testament to the human desire to want to fix things and the human desire to want to teach other people how to fix things, that despite all these obstacles, there is this thriving world, YouTube's not the only place, but it's kind of the central place where you can find nice people who will help tell you how to fix your things, despite it being so hard and getting harder to have that knowledge and the information you need to do it.

GAY GORDON-BYRNE
I would also add to that there's a huge business of repair that, we're not strictly fighting for people's rights to be able to do it yourself. In fact, most people, again, you know, back to some kind of general statistics, most people, somewhere around 85 percent of them, really don't want to fix their own stuff.

They may fix some stuff, but they don't want to fix all stuff. But the options of having somebody help them have also gone. Gone just downhill, downhill, downhill massively in the last 20 years and really bad in the past 10 years. 

So the industry of repair used to employ about 3 million people, and that kind of spanned auto repair and a bunch of other things. But those people don't have jobs if people can't fix their stuff, because the only way they can be in business is to know that they can buy a part. To know that they can buy the tool, to know that they can get a hold of the schematic and the diagnostics. So these are the things that have thwarted business as well as do-it-yourself. And I think most people, most people, especially the people I know, really expect to be able to fix their things. I think we've been told that we don't, and the reality is we do.

CINDY COHN
Yeah, I think that's right. And one of the, kind of, stories that people have been told is that, you know, if there's a silicon chip in it, you know, you just can't fix it. That that's just, um, places things beyond repair and I think that that's been a myth and I think a lot of people have always known It's a myth, you know, certainly in EFF's community.

We have a lot of hardware hackers, we even have lots of software hackers that know that the fact that there's a chip involved doesn't mean that it's a disposable item. But I wondered you know from your perspective. Have you seen that as well?

GAY GORDON-BYRNE
Oh, absolutely. People are told that these things are too sophisticated, that they're too complex, they're too small. All of these things that are not true, and you know, you got 20 years of a drumbeat of just massive marketing against repair. The budgets for people that are saying you can't fix your stuff are far greater than the budgets of the people that say you can.

So, thank you, Tim Cook and Apple, because you've made this an actual point of advocacy. Every time Apple does something dastardly, and they do it pretty often, every new release there's something dastardly in it, we get to get more people behind the, ‘hey, I want to fix my phone, goddamnit!’

CINDY COHN
Yeah, I think that's right. I think that's one of the wonderful things about the Right to Repair movement is that you're, you're surfing people's natural tendencies. The idea that you have to throw something away as soon as it breaks is just so profoundly… I think it's actually an international human, you know, desire to be able to fix these kinds of things and be able to make something that you own work for you.

So it's always been profoundly strange to have companies kind of building this throwaway culture. It reminds me a little of the privacy fights, where we've had also 20 years of companies trying to convince us that your privacy doesn't matter and you don't care about it, and that the world's better if you don't have any privacy. And on one level that has certainly succeeded in building surveillance business models. But on the other hand, I think it's profoundly against human tendencies, so the benefit for those of us on the side of privacy and repair is we're kind of riding with how people want to be in the kind of world they want to live in, against, you know, kind of very powerful, well-funded forces who are trying to convince us we're different than we are.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Gay Gordon-Byrne.

At the top of the episode, Gay told us a story about a refrigerator that couldn’t be fixed unless a licensed technician – for a fee, obviously – was brought in to ENTER A PASSWORD. INTO A FRIDGE. Even though the person who owned the fridge had sourced the new part and installed it.

GAY GORDON-BYRNE
And that illustrates to me the damage that's being done by this concept of parts pairing, which is where only the manufacturer can make the part work. So even if you can find a part. Even if you could put it in, you can't make it work without calling the manufacturer again, which kind of violates the whole idea that you bought it and you own it, and they shouldn't have anything to do with it after that. 

So these things are pervasive. We see it in all sorts of stuff. The refrigerator one really infuriates me.

CINDY COHN
Yeah, we've seen it with printer cartridges. We've seen it with garage door openers, for sure. I recently had an espresso machine that broke and couldn't get it fixed because the company that made it doesn't make parts available to people. And that, you know, that's a hard lesson. It's one of the things when you're buying something, to try to figure out, like, is this actually repairable or not?

You know, making that information available is something that our friends at Consumer Reports have done and other people have done, but it's still a little hard to find sometimes.

GAY GORDON-BYRNE
Yeah, that information gap is enormous. There are some resources. They're not great; none of them are comprehensive enough to really do the job. But there's an ‘indice de réparabilité’ (repairability index) in France that covers a lot of consumer tech, you know, cell phones and laptops and things along those lines.

It's not hard to find, but it's in French, so use Google Translate or something and you'll see what they have to say. Um, that's actually had a pretty good impact on a couple companies. For example, Samsung, which had never put out a manual before, had to put out a manual, um, in order to be rated in France. So they did. The same manual they didn't put out in the U.S. and England.

CINDY COHN  
Oh my God, it’s amazing.

Music break.

CINDY COHN
So let's flip this around a little bit. What does the world look like if we get it right? What does a repairable world look like? How is it when you live in it, Gay? Give me a day in the life of somebody who's living in the fixed version of the world.

GAY GORDON-BYRNE
Well, you will be able to buy things that you can fix, or have somebody fix them for you. And one of the consequences is that you will see more repair shops back in your town.

It will be possible for some enterprising person to open up, again, the kinds of shops we used to have when we were kids.

You'll see a TV repair shop, an appliance repair shop, an electronics repair shop. In fact, it might be one repair shop, because some of these things are all being fixed in the same way. 

So  you'll see more economic activity in the area of repair. You'll also see, and this is a hope, that manufacturers, if they're going to make their products more repairable, in order to look better, you know, it's more of a, more of a PR and a marketing thing.

If they're going to compete on the basis of repairability, they're going to have to start making their products. more repairable from the get go. They're probably gonna have to stop gluing everything together. Europe has been pretty big on making sure that things are made with fasteners instead of glue.

I think we're gonna see more activity along those lines, and more use of replaceable batteries. Why should a battery be glued in? That seems like a pretty stupid thing to do. So I think we'll see some improvements along the lines of sustainability, in the sense that we'll be able to keep our things longer and use them until we're done with them, not just until the manufacturer decides they want to sell you a new one, which is really the cycle that we have today.

CINDY COHN
Yeah. Planned obsolescence I think is what the marketers call it. I love a vision of the world, you know, when I grew up, I grew up in a small town in Iowa and we had the, the people called the gearheads, right? They were the ones who were always tinkering with cars. And of course you could take your appliances to them and other kinds of things because, you know, people who know how to take things apart and figure out how they work tend to know that about multiple things.

So I'd love a future of the world where the kind of gearheads rise again and are around to help us keep our stuff longer and fix our stuff again. I really appreciate what you say, like, when we're done with them. I mean, I love innovation. I love new toys.

I think that's really great. But the idea that when I'm done with something, you know, it goes into a trash heap. Um, or, you know, into someplace where you have to have fancy, uh, help to make sure that you're not endangering the planet. Like, that's not a very good world.

GAY GORDON-BYRNE
Well, look at your example of your espresso machine. You weren't done with it. It quit. It quit. You can't fix it. You can't make another cup of espresso with it.

That's not what you planned. That's not what you wanted.

CINDY COHN
Yep.

JASON KELLEY
I think we all have stories like the espresso machine, and that's part of why this is such a tangible topic for everyone. Maybe I'm not alone in this, but I love, you know, thrift stores and places like that where I can get something that maybe someone else was tired of. I passed a house a few years ago and someone had put, uh, a laptop whose screen had been damaged just next to the trash.

And I thought, that looks like a pretty nice laptop. And I grabbed it. It was a pretty new, like, one-year-old Microsoft Surface. Tablet, laptop, um, anyway, I took it to a repair shop and they were able to repair it for like way less than the cost of buying a new one, and I had a new laptop, essentially. Um, and I don't think they gave me extra service because I worked at EFF, but they were certainly happy to help because I worked at EFF. Um, but then, you know, these things do eventually sort of give up, right?

That laptop lasted me about three years and then had so many issues that I just kind of had to get rid of it. Where do you think, in the better future, we should put the things that are sort of unfixable? You know, do we bring them to a repair shop and they pull out the pieces that work, like a junkyard, so they can be reused?

Is there a better system for, uh, disposing of the different pieces or the different devices that we can't repair? How do you think about that more sustainable future once everything is better in the first place in terms of being able to repair things?

GAY GORDON-BYRNE
Excellent question. We have a number of members that are what we call charitable recyclers. And I think that's a model for more, rather than less. They don't even have to be gently used. They just have to be potentially useful. And they'll take them in. They will fix them. They will train people, often people that have some employment challenges, especially coming out of the criminal justice system.  And they'll train them to make repairs and they both get a skill, a marketable skill for future employment. And they also, they also turn around and then resell those devices to make money to keep the whole system going.

But in the commercial recycling business, there's a lot of value in the things that have been discarded if they can have their batteries removed before, before they are, quote, recycled, because recycling is a very messy business and it requires physical contact with the device to the point that it's shredded or crushed. And if we can intercept some of that material before it goes to the crusher, we can reuse more of that material. And I think a lot of it can be reused very effectively in downstream markets, but we don't have those markets because we can't fix the products that are broken.

CINDY COHN
Yep. There's a whole chain of good that starts happening if we can begin to start fixing things, right? It's not just the individuals get to fix the things that they get, but it sets off kind of a cycle of things, a happy cycle of things that get better all along the way.

GAY GORDON-BYRNE
Yep, and that can be, that can happen right now, well, I should say as soon as these laws start taking effect, because a lot of the information parts and tools that are required under the laws are immediately useful.

CINDY COHN
Right. So tell me, how do these laws work? What do they, what, the good ones anyway, what are, what are they doing? How are things changing with the current flock of laws that are just now coming online?

GAY GORDON-BYRNE
Well, they're all pretty much the same. They require manufacturers of things that they already repair, so there's some limitations right there, to make available on fair and reasonable terms the same parts, tools, diagnostics, and firmware that they already provide to their quote authorized or their subcontract repair providers because our original intent was to restore competition. So the bills are really a pro competition law as opposed to an e-waste law.

CINDY COHN  
Mm hmm.

GAY GORDON-BYRNE
Because these don't cover everything. They cover a lot of stuff, but not everything. California is a little bit different in that they already had a statute that required things over $50 but under $100 to be covered for three years. They have some dates in there that expand the effectiveness of the bill into products that don't even have repair options today.

But the bills that we've been promoting are a little softer, because the intent is competition, because we want to see what competition can do, when we unlock competition, what that does for consumers.

CINDY COHN  
Yeah, and I think that that dovetails nicely into something EFF has been working on quite a while now, which is interoperability, right? One of the things that unlocks competition is, you know, requiring people to build their tools and services in a way that are interoperable with others, that helps both with repair and with kind of follow on innovation that, you know, you can switch up how your Facebook feed shows up based on what you want to see rather than, you know, based upon what Facebook's algorithm wants you to see or other kinds of changes like that. And how do you see interoperability fitting into all of this?

GAY GORDON-BYRNE
I think there will be more. It's not specific to the law, but I think it will simply happen as people try to comply with the law. 

Music break

CINDY COHN  
You founded the Repair Association, so tell us a little bit about how that got started and how you decided to dedicate your life to this. I think it's really important for us to think about, like, the people that are needed to build a better world, as well as the, you know, kind of technologies and ideas.

GAY GORDON-BYRNE
I was always in the computer industry. I grew up with my father who was a computer architect in the 50s and 60s. So I never knew a world that didn't involve computers. It was what dad did. And then when I needed a job out of college, and having bounced around a little bit and found not a great deal of success, my father encouraged me to take a job selling computers, because that was the one thing he had never done and thought that it was missing from his resume.

And I took to it like, uh, I don't know, fish to water? I loved it. I had a wonderful time and a wonderful career. But by the mid 2000s, I was done. I mean, I was like, I can't stand this job anymore. So I decided to retire. I didn't like being retired. I started doing other things and eventually, I started doing some work with a group of companies that repair large mainframes.

I've known them. I mean, my former boss was the president. It was kind of a natural. And they started having trouble with some of the manufacturers and I said, that's wrong. I mean, I had this sense of indignation that what Oracle had done when they bought Sun was just flatly wrong and it was illegal. And I volunteered to join a committee. And that's when, haha, that's when I got involved and it was basically, I tell people I over-volunteered.

CINDY COHN
Yeah.

GAY GORDON-BYRNE
And what happened is that because I was the only person in that organization that didn't already have relationships with manufacturers, that they couldn't, they couldn't bite the hand that fed them, I was elected chief snowball thrower. AKA Executive Director. 

So it was a passion project that I could afford to do because otherwise I was going to stay home and knit. So this is way better than knitting or quilting these days, way more fun, way more gratifying. I've had a truly wonderful experience, met so many fabulous people, have a great sense of impact that I would never have had with quilting.

CINDY COHN
I just love the story of somebody who kind of put a toe in and then realized, Oh my God, this is so important. And ‘I found this thing where I can make the world better.’ And then you just get, you know, kind of, you get sucked in and, um, but it's, it's fun. And what I really appreciate about the Repair Association and the Right to Repair people is that while, you know, they're working with very serious things, they also, you know, there's a lot of fun in making the world a better place.

And it's kind of fun to be involved in the Right to Repair right now because after a long time kind of shouting in the darkness, there's some traction starting to happen. So then the fun gets even more fun.

GAY GORDON-BYRNE
I can tell you it's ... We're so surprised. I mean, it took, we've had over, well, well over 100 bills filed and, you know, every year we get a little further. We get past this committee and this hurdle and this hurdle and this hurdle. We get almost to the end and then something would happen. And to finally get to the end where the bill becomes law? It's like the dog that chases the car, and you go, we caught the car, now what?

CINDY COHN
Yeah. Now you get to fix it! The car!

JASON KELLEY
Yeah, now you can repair the car.

MUSIC TRANSITION

JASON KELLEY
That was such a wonderful, optimistic conversation and not the first one we've had this season. But this one is interesting because we're actually already getting where we want to be. We're already building the future that we want to live in and it's just really, really pleasing to be able to talk to someone who's in the middle of that and, and making sure that that work happens.

CINDY COHN
I mean, one of the things that really struck me is how much of the better future that we're building together is really about creating new jobs and new opportunities for people to work. I think there's a lot of fear right now in our community that the future isn't going to have work, and that without a social safety net or other kinds of things, you know, it's really going to hurt people.

And I so appreciated hearing about how, you know, Main Street's going to have more jobs. There's going to be people in your local community who can fix your things locally, because devices, those are things where having a local repair community and businesses is really helpful to people.

And the flip side of that is this interesting observation that one of the things that's happened as a result of shutting off the Right to Repair is an increasing centralization, um, that the jobs in this space are not happening locally, and that by unlocking the Right to Repair, we're going to unlock some local economic opportunities.

I mean, you know, EFF thinks about this both in terms of empowering users, but also in terms of competition. And the thing about Right to Repair is it really does unlock kind of hyper-local competition.

JASON KELLEY
I hadn't really thought about how specifically local it is to have a repair shop that you can just bring your device to. And right now it feels like the options are if you live near an Apple store, for example, maybe you can bring your phone there and then they send it somewhere. I'd much rather go to someone, you know, in my town that I can talk to, and who can tell me about what needs to be done. That's such a benefit of this movement that a lot of people aren't even really putting on the forefront, but it really is something that will help people actually get work and, and, and help the people who need the work and the people who need the job done.

CINDY COHN
Another thing that I really appreciate about the Right to Repair movement is how universal it is. Everyone experiences some version of this, you know, from the refrigerator story to my espresso machine to any number of other stories, to the farmers. Everyone has some version of how this needs to be fixed. And the other thing that I really appreciate about Gay's stories about the Right to Repair movement is that, you know, she's somebody who comes out of computers, and was thinking about this from the context of computers, and didn't really realize that farmers were having the same problem.

Of course, we all kind of know analytically that a lot of the movement in a lot of industries is towards centralizing computers. You know, tractors are now computers with gigantic wheels. Cars are now computers with smaller wheels. Computers have become central to these kinds of things. I think the realization that we have silos of users who are experiencing a version of the same problem depending on what kind of tool they're using, um, and connecting those silos together so that together we stand as a much bigger voice, is something that the Right to Repair folks have really done well, and it is a good lesson for the rest of us.

JASON KELLEY
Yeah, I think we talked a little bit with Adam Savage when he was on a while ago about this sort of gatekeeping and how effective it is to remove the gatekeepers from these movements and say, you know, we're all fighting the same fight. And it just goes to show you that it actually works. I mean, not only does it get everybody on the same page, but unlike a lot of movements, I think you can really see the impact that the Right to Repair movement has had. 

And we talked with Gay about this and it's just, it really, I think, should make people come away optimistic that advocacy like this works over time. You know, it's not a sprint, it's a marathon, and we have actually crested a sort of hill in some ways.

There's a lot of work to be done, but it's actually work that we probably will be able to get done, and we're seeing the benefits of it today.

CINDY COHN
Yeah. And as we start to see benefits, we're going to start to see more benefits. I appreciate her. We're in, you know, we're in the whole plugging period where, you know, we got something passed and we need to plug the holes. But I also think once people start feeling the power of having the Right to Repair again, I think I hope it will help snowball.

One of the things that she said that I have observed as well is that sometimes it feels like nothing's happening, nothing's happening, nothing's happening, and then all of a sudden it's all happening. And I think that that's one of the, the kind of flows of advocacy work that I've observed over time and it's fun to see the, the Right to Repair Coalition kind of getting to experience that wave, even if it can be a little overwhelming sometimes.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode you heard “Come Inside” by Zep Hurme featuring snowflake, and “Drops of H2O (The Filtered Water Treatment)” by J.Lang featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast. 

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

U.S. Senate and Biden Administration Shamefully Renew and Expand FISA Section 702, Ushering in a Two Year Expansion of Unconstitutional Mass Surveillance

One week after it was passed by the U.S. House of Representatives, the Senate has passed what Senator Ron Wyden has called, “one of the most dramatic and terrifying expansions of government surveillance authority in history.” President Biden then rushed to sign it into law.  

The perhaps ironically named “Reforming Intelligence and Securing America Act (RISAA)” does everything BUT reform Section 702 of the Foreign Intelligence Surveillance Act (FISA). RISAA not only reauthorizes this mass surveillance program, it greatly expands the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. The bill’s only significant “compromise” is a limited, two-year extension of this mass surveillance. But overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large telecommunications service providers: massive amounts of traffic on the Internet backbone are accessed and those communications on the government’s secret list are copied. And that’s just one part of the massive, expensive program. 

While Section 702 prohibits the NSA and FBI from intentionally targeting Americans with this mass surveillance, these agencies routinely acquire a huge amount of innocent Americans' communications “incidentally.” The government can then conduct backdoor, warrantless searches of these “incidentally collected” communications.

The government cannot even follow the very lenient rules about what it does with the massive amount of information it gathers under Section 702, repeatedly abusing this authority by searching its databases for Americans’ communications. In 2021 alone, the FBI reported conducting up to 3.4 million warrantless searches of Section 702 data using Americans’ identifiers. Given this history of abuse, it is difficult to understand how Congress could decide to expand the government’s power under Section 702 rather than rein it in.

One of RISAA’s most egregious expansions is its large but ill-defined increase of the range of entities that have to turn over information to the NSA and FBI. This provision allegedly “responds” to a 2023 decision by the FISA Court of Review, which rejected the government’s argument that an unknown company was subject to Section 702 in some circumstances. While the New York Times reports that the unknown company from this opinion was a data center, the new provision is written so expansively that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider. This could potentially include landlords, maintenance people, and many others who routinely have access to your communications on the interconnected internet.

This is to say nothing of RISAA’s other substantial expansions. RISAA changes FISA’s definition of “foreign intelligence” to include “counternarcotics”: this will allow the government to use FISA to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths,” but also to any of their precursors. While surveillance under FISA has (contrary to what most Americans believe) never been limited exclusively to terrorism and counterespionage, RISAA’s expansion of FISA to ordinary crime is unacceptable.

RISAA also allows the government to use Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released in 2023, the FISC repeatedly denied government attempts to obtain some version of this authority, before finally approving it for the first time in 2023. By formally lowering Section 702’s protections for immigrants and asylum seekers, RISAA exacerbates the risk that government officials could discriminate against members of these populations on the basis of their sexuality, gender identity, religion, or political beliefs.

Faced with massive pushback from EFF and other civil liberties advocates, some members of Congress, like Senator Ron Wyden, raised the alarm. We were able to squeeze out a couple of small concessions. One was a shorter reauthorization period for Section 702, meaning that the law will be up for review in just two more years. Also, in a letter to Congress, the Department of Justice claimed it would only interpret the new provision to apply to the type of unidentified businesses at issue in the 2023 FISC opinion. But a pinky promise from the current Department of Justice is not enforceable and easily disregarded by a future administration. There is some possible hope here, because Senator Mark Warner promised to return to the provision in a later defense authorization bill, but this whole debacle just demonstrates how Congress gives the NSA and FBI nearly free rein when it comes to protecting Americans – any limitation that actually protects us (and here the FISA Court actually did some protecting) is just swept away.

RISAA’s passage is a shocking reversal—EFF and our allies had worked hard to put together a coalition aimed at enacting a warrant requirement for Americans and some other critical reforms, but the NSA, FBI and their apologists just rolled Congress with scary-sounding (and incorrect) stories that a lapse in the spying was imminent. It was a clear dereliction of Congress’s duty to oversee the intelligence community in order to protect all of the rest of us from its long history of abuse.

After over 20 years of doing it, we know that rolling back any surveillance authority, especially one as deeply entrenched as Section 702, is an uphill fight. But we aren’t going anywhere. We had more Congressional support this time than we’ve had in the past, and we’ll be working to build that over the next two years.

Too many members of Congress (and the Administrations of both parties) don’t see any downside to violating your privacy and your constitutional rights in the name of national security. That needs to change.

Internet Service Providers Plan to Subvert Net Neutrality. Don’t Let Them

19 April 2024 at 19:54

In the absence of strong net neutrality protections, internet service providers (ISPs) have made all sorts of plans that would allow them to capitalize on something called "network slicing." While this technology has all sorts of promise, what the ISPs have planned would subvert net neutrality—the principle that all data be treated equally by your service provider—by allowing them to recreate the kinds of “fast lanes” we've already agreed should not be allowed. If their plans succeed, then the new proposed net neutrality protections will end up doing far less for consumers than the old rules did.

The FCC released draft rules to reinstate net neutrality, with a vote on adopting the rules to come on the 25th of April. Overall, the order is a great step for net neutrality. However, to be truly effective, the rules must not preempt states from protecting their residents with stronger laws, and must clearly find that the creation of “fast lanes” via positive discrimination and unpaid prioritization of specific applications or services is a violation of net neutrality.

Fast Lanes and How They Could Harm Competition

Since “fast lanes” aren’t a technical term, what do we mean when we are talking about a fast lane? To understand, it is helpful to think about data traffic and internet networking infrastructure like car traffic and public road systems. As roads connect people, goods, and services across distances, so does network infrastructure allow for data traffic to flow from one place to another. And just as a road with more capacity in the way of more lanes theoretically means the road can support more traffic moving at speed, internet infrastructure with more “lanes” (i.e. bandwidth) should mean that a network can better support applications like streaming services and online gaming.

An individual ISP’s network has a maximum capacity: a ceiling on how much internet traffic it can handle, and how fast. To continue the analogy, the road leading to your neighborhood has a set number of lanes. This is why the speed of your internet may change throughout the day: at peak hours, your service may slow down because too much requested traffic is clogging up the lanes.

It’s not inherently a bad thing to have specific lanes for certain types of traffic. Actual fast lanes on freeways can reduce congestion by not making faster-moving vehicles compete for space with slower-moving traffic, and exit and entry lanes let cars perform specialized maneuvers without impeding other traffic. A lane only for buses isn’t a bad thing, as long as every bus gets equal access to that lane and everyone has equal access to riding those buses. Where this becomes a problem is if there is a special lane only for Google buses, or only for consuming entertainment content instead of participating in video calls. In these scenarios you would be improving certain bus rides at the expense of degraded service for everyone else on the road.

An internet “fast lane” would be the designation of part of the network, with more bandwidth and/or lower latency, to be used only for certain services. On a technical level, network slicing would split the physical network infrastructure into several software-defined networks, each serving a different use case. One slice might be optimized for high-bandwidth applications such as video streaming, another for applications needing low latency (e.g., a short round trip between the client and the server), and another for IoT devices. The maximum physical network capacity is split among these slices. To continue our tortured metaphor, your original six-lane general road is now a four-lane general road with two lanes reserved for, say, a select list of streaming services. Think dedicated high-speed lanes for Disney+, HBO, and Netflix, but those services only. In a network-neutral construction of the infrastructure, all internet traffic shares all lanes, and no specific app or service is unfairly sped up or slowed down. This isn’t to say that we are inherently against network management techniques like quality of service or network slicing. But it’s important that quality-of-service efforts be undertaken, as much as possible, in an application-agnostic manner.
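
To put rough numbers on the metaphor, here is a minimal toy sketch in Python, written for illustration only: it is not a description of how any real ISP provisions slices, and the link size, service names, and equal-sharing assumption are all hypothetical. It shows how carving a reserved slice out of a fixed-capacity link speeds up the favored service only by squeezing everyone else.

```python
# Toy model of a 600 Mbps link, following the "six lane road" metaphor above.
# All capacities, service names, and the equal-sharing assumption are
# illustrative only, not a description of real ISP provisioning.

LINK_CAPACITY_MBPS = 600

def neutral_share(flows):
    """Network-neutral case: every active flow shares all lanes equally."""
    return {flow: LINK_CAPACITY_MBPS / len(flows) for flow in flows}

def sliced_share(flows, reserved, reserved_mbps):
    """'Fast lane' case: favored services split a reserved slice, while
    everyone else competes for whatever capacity remains."""
    favored = [f for f in flows if f in reserved]
    others = [f for f in flows if f not in reserved]
    allocation = {f: reserved_mbps / len(favored) for f in favored}
    allocation.update(
        {f: (LINK_CAPACITY_MBPS - reserved_mbps) / len(others) for f in others}
    )
    return allocation

flows = ["BigStream+", "indie_video", "video_call", "game", "web", "backup"]

print(neutral_share(flows))
# Every flow gets 100.0 Mbps.

print(sliced_share(flows, reserved={"BigStream+"}, reserved_mbps=200))
# BigStream+ gets 200.0 Mbps; the other five flows now split the remaining
# 400 Mbps (80.0 Mbps each), even though total capacity never changed.
```

The arithmetic is the point: the total never grows, so the “fast lane” is fast only relative to the degraded service everyone else receives.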

The fast lanes metaphor isn’t ideal. On the road, having fast lanes is a good thing: they can protect slower, more cautious drivers from dangerous driving and improve the flow of traffic. Bike lanes are a good thing because they make cyclists safer and let cars drive more quickly without having to navigate around them. But with traffic lanes it’s the driver, not the road, that decides which lane they belong in (with penalties for obviously bad-faith choices, such as driving in the bike lane).

Internet service providers are already testing their ability to create these network slices, and they already have plans for market offerings in which certain applications and services, chosen by them, get exclusive reserved fast lanes while the rest of the internet must shoulder its way through what is left. This kind of network slicing is a violation of net neutrality. We aren’t against network slicing as a technology; it could be useful for things like remote surgery or vehicle-to-vehicle communication, which require low-latency connections and serve the public interest, and which are separate offerings, not part of the broadband services covered in the draft order. We are against network slicing being used as a loophole to circumvent the principles of net neutrality.

Fast Lanes Are a Clear Violation of Net Neutrality

Net neutrality is the principle that all ISPs should treat all legitimate traffic coming over their networks equally; discriminating between certain applications or types of traffic is a clear violation of that principle. When fast lanes speed up certain applications or certain classes of applications, they cannot do so without negatively affecting other internet traffic, even if only by comparison. This is throttling, plain and simple.

Further, because ISPs choose which applications or types of services get to be in the fast lane, they choose winners and losers within the internet, which has clear harms to both speech and competition. Whether your access to Disney+ is faster than your access to Indieflix because Disney+ is sped up or because Indieflix is slowed down doesn’t matter because the end result is the same: Disney+ is faster than Indieflix and so you are incentivized to use Disney+ over Indieflix.

ISPs should not be able to harm competition, whether by prioritizing incumbent services over new ones or by making one political party’s website load faster than another’s. It is the consumer who should be in charge of what they do online. Fast lanes have no place in a network-neutral internet.

  • 1. Urban studies research shows that this isn’t actually the case; still, it remains the popular wisdom among politicians and urban planners.

EFF, Human Rights Organizations Call for Urgent Action in Case of Alaa Abd El Fattah

19 April 2024 at 12:13

Following an urgent appeal filed to the United Nations Working Group on Arbitrary Detention (UNWGAD) on behalf of blogger and activist Alaa Abd El Fattah, EFF has joined 26 free expression and human rights organizations calling for immediate action.

The appeal to the UNWGAD was initially filed in November 2023 just weeks after Alaa’s tenth birthday in prison. The British-Egyptian citizen is one of the most high-profile prisoners in Egypt and has spent much of the past decade behind bars for his pro-democracy writing and activism following Egypt’s revolution in 2011.

EFF and the Media Legal Defence Initiative submitted a similar petition to the UNWGAD on behalf of Alaa in 2014. This led to the Working Group issuing an opinion that Alaa’s detention was arbitrary and calling for his release. In 2016, the UNWGAD declared Alaa's detention (and the law under which he was arrested) a violation of international law, and again called for his release.

We once again urge the UN Working Group to urgently consider the recent petition and conclude that Alaa’s detention is arbitrary and contrary to international law. We also call for the Working Group to find that the appropriate remedy is a recommendation for Alaa’s immediate release.

Read our full letter to the UNWGAD and follow Free Alaa for campaign updates.

Congress: Don't Let Anyone Own The Law

19 April 2024 at 10:27

We should all have the freedom to read, share, and comment on the laws we must live by. But yesterday, the House Judiciary Committee voted 19-4 to move forward the PRO Codes Act (H.R. 1631), a bill that would limit those rights in a critical area. 

TAKE ACTION

Tell Congress To Reject The Pro Codes Act

A few well-resourced private organizations have made a business of charging money for access to building and safety codes, even when those codes have been incorporated into law. 

These organizations convene volunteers to develop model standards, encourage regulators to make those standards into mandatory laws, and then sell copies of those laws to the people (and city and state governments) that have to follow and enforce them.

They’ve claimed the codes are their copyrighted material. But court after court has said that you can’t use copyright in this way—no one “owns” the law. The Pro Codes Act undermines that rule and the public interest, changing the law to state that the standards organizations that write these rules “shall retain” a copyright in them, as long as the rules are made “publicly accessible” online. 

That’s not nearly good enough. These organizations already have so-called online reading rooms that aren’t searchable, aren’t accessible to print-disabled people, and condition your ability to read mandated codes on agreeing to onerous terms of use, among many other problems. That’s why the Association of Research Libraries sent a letter to Congress last week (supported by EFF, disability rights groups, and many others) explaining how the Pro Codes Act would trade away our right to truly understand and educate our communities about the law for cramped public access to it. Congress must not let well-positioned industry associations abuse copyright to control how you access, use, and share the law. Now that this bill has passed committee, we urgently need your help—tell Congress to reject the Pro Codes Act.

TAKE ACTION

TELL CONGRESS: No one owns the law

Two Years Post-Roe: A Better Understanding of Digital Threats

18 April 2024 at 17:14

It’s been a long two years since the Dobbs decision overturned Roe v. Wade. Between May 2022, when a draft of the opinion was leaked, and the following June, when the case was decided, there was a mad scramble to figure out what the impacts would be. Besides the obvious peril of stripping away half the country’s right to reproductive healthcare, digital surveillance and mass data collection raised a flurry of concerns.

Although many activists fighting for reproductive justice had been operating for some time under the assumption of little to no legal protection, the Dobbs decision was for most a sudden and scary revelation. Everyone implicated in that moment understood the stark difference between pre-Roe 1973 and post-Roe 2022: living under the most sophisticated surveillance apparatus in human history presents a vastly different landscape of threats. Since 2022, some suspicions have been confirmed, new threats have emerged, and overall our risk assessments have grown smarter. Below, we cover the most pressing digital dangers facing people seeking reproductive care, and ways to combat them.

Digital Evidence in Abortion-Related Court Cases: Some Examples

Social Media Message Logs

A case in Nebraska resulted in a woman, Jessica Burgess, being sentenced to two years in prison for obtaining abortion pills for her teenage daughter. Prosecutors used a Facebook Messenger chat log between Jessica and her daughter as key evidence, bolstering the concerns many had raised about using such privacy-invasive tech products for sensitive communications. At the time, Facebook Messenger did not have end-to-end encryption.

In response to criticisms about Facebook’s cooperation with law enforcement that landed a mother in prison, a Meta spokesperson issued a frustratingly laconic tweet stating that “[n]othing in the valid warrants we received from local law enforcement in early June, prior to the Supreme Court decision, mentioned abortion.” They followed this up with a short statement reiterating that the warrants did not mention abortion at all. The lesson is clear: although companies do sometimes push back against data warrants, we have to prepare for the likelihood that they won’t.

Google: Search History & Warrants

Well before the Dobbs decision, prosecutors had already used Google Search history to indict a woman for her pregnancy outcome. In this case, it was keyword searches for misoprostol (a safe and effective abortion medication) that clinched the prosecutor’s evidence against her. Google acquiesced, as it so often has, to the warrant request.

Related to this is the ongoing and extremely complicated territory of reverse keyword and geolocation warrants. Google has promised that it would remove from user profiles all location data history related to abortion clinic sites. Researchers tested this claim and it was shown to be false, twice. Late in 2023, Google made a bigger promise: it would soon change how it stores location data to make it much more difficult–if not impossible–for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. This would be a genuinely helpful measure, but we’ve been conditioned to approach such claims with caution. We’ll believe it when we see it (and refer to external testing for proof).

Other Dangers to Consider

Doxxing

Sites set up for doxxing healthcare professionals who offer abortion services are about as old as the internet itself. Doxxing comes in a variety of forms, but a quick and loose definition is the weaponization of open source intelligence with the intention of escalating to other harms. There’s been a massive increase in hate groups abusing public records requests and data broker collections to publish personal information about healthcare workers, and the doxxing websites hosting such material are updated frequently. For the past few years, doxxing has led to steadily rising material dangers: targeted harassment, gun violence, and arson, just to name a few.

There are some piecemeal attempts at data protection for healthcare workers in more protective states like California (one of which we’ve covered). Other states may offer some form of address confidentiality program that provides people with proxy addresses. Though these can be effective, they are not comprehensive. Because doxxing campaigns are typically coordinated through a combination of open source intelligence tactics, they are a particularly difficult threat to protect against. This is especially true for government and medical industry workers whose information may be exposed through public records requests.

Data Brokers

Recently, Senator Wyden’s office released a statement about a long investigation into Near Intelligence, a data broker company that sold geolocation data to The Veritas Society, an anti-choice think tank. The Veritas Society then used the geolocation data to target individuals who had traveled near healthcare clinics that offered abortion services and delivered pro-life advertisements to their devices.

That alone is a stark example of the dangers of commercial surveillance, but it’s still unclear what other ways this type of dataset could be abused. Near Intelligence has filed for bankruptcy, but they are far from the only, or the most pernicious, data broker company out there. This situation bolsters what we’ve been saying for years: the data broker industry is a dangerously unregulated mess of privacy threats that needs to be addressed. It not only contributes to the doxxing campaigns described above, but essentially creates a backdoor for warrantless surveillance.

Domestic Terrorist Threat Designation by Federal Agencies

Midway through 2023, The Intercept published an article about a tenfold increase in federal designation of abortion-rights activist groups as domestic terrorist threats. This projects a massive shadow of risk for organizers and activists at work in the struggle for reproductive justice. The digital surveillance capabilities of federal law enforcement are more sophisticated than that of typical anti-choice zealots. Most people in the abortion access movement may not have to worry about being labeled a domestic terrorist threat, though for some that is a reality, and strategizing against it is vital.

Looming Threats

Legal Threats to Medication Abortion

Last month, the Supreme Court heard oral arguments challenging the FDA’s approval of and regulations governing mifepristone, a widely available and safe abortion pill. If the anti-abortion advocates who brought this case succeed, access to the most common medication abortion regimen used in the U.S. would end across the country—even in those states where abortion rights are protected.

Access to abortion medication might also be threatened by a 150-year-old obscenity law. Many people now recognize the long-dormant Comstock Act as a potential avenue to criminalize procurement of the abortion pill.

Although the outcomes of these legal challenges are yet-to-be determined, it’s reasonable to prepare for the worst: if there is no longer a way to access medication abortion legally, there will be even more surveillance of the digital footprints prescribers and patients leave behind. 

Electronic Health Records Systems

Electronic Health Records (EHRs) are digital records of medical information, meant to be easily stored and shared between medical facilities and providers. Since abortion restrictions are now dictated state by state, the sharing of these records across state lines presents a serious matrix of concerns.

As some academics and privacy advocates have outlined, the interoperability of EHRs can jeopardize the safety of patients when reproductive healthcare data is shared across state lines. Although the Department of Health and Human Services has proposed a new rule to help protect sensitive EHR data, it’s currently possible for data shared between EHRs to lead to the prosecution of people who seek or provide reproductive healthcare.

The Good Stuff: Protections You Can Take

Perhaps the most frustrating aspect of what we’ve covered thus far is how much is beyond individual control. It’s completely understandable to feel powerless against these monumental threats. That said, you aren’t powerless. Much can be done to protect your digital footprint, and thus, your safety. We don’t propose reinventing the wheel when it comes to digital security and data privacy. Instead, rely on the resources that already exist and re-tool them to fit your particular needs. Here are some good places to start:

Create a Security Plan

It’s impossible, and generally unnecessary, to implement every privacy and security tactic or tool out there. What’s more important is figuring out the specific risks you face and finding the right ways to protect against them. This process takes some brainstorming around potentially scary topics, so it’s best done well before you are in any kind of crisis. Pen and paper works best. Here's a handy guide.

After you’ve answered those questions and figured out your risks, it’s time to locate the best ways to protect against them. Don’t sweat it if you’re not a highly technical person; many of the strategies we recommend can be applied in non-tech ways.

Careful Communications

Secure communication is as much a frame of mind as it is a type of tech product. When you are able to identify which aspects of your life need to be spoken about more carefully, you can then make informed decisions about who to trust with what information, and when. It’s as much about creating ground rules with others about types of communication as it is about normalizing the use of privacy technologies.

Assuming you’ve already created a security plan and identified some risks you want to protect against, begin thinking about the communication you have with others involving those things. Set some rules for how you broach those topics, where they can be discussed, and with whom. Sometimes this might look like the careful development of codewords. Sometimes it’s as easy as saying “let’s move this conversation to Signal.” Now that Signal supports usernames (so you can keep your phone number private), as well as disappearing messages, it’s an obvious tech choice for secure communication.

Compartmentalize Your Digital Activity

As mentioned above, it’s important to know when to compartmentalize sensitive communications to more secure environments. You can expand this idea to other parts of your life. For example, you can designate different web browsers for different use cases, choosing those browsers for the privacy they offer. One might offer significant convenience for day-to-day casual activities (like Chrome), whereas another is best suited for activities that require utmost privacy (like Tor).

Now apply this thought process to the payment processors you use, the registration information you give to social media sites, which profiles you keep public versus private, how you organize your data backups, and so on. The possibilities are endless, so it’s important that you prioritize the aspects of your life that most need protection.

Security Culture and Community Care

Both tactics mentioned above incorporate a sense of community when it comes to our privacy and security. We’ve said it before and we’ll say it again: privacy is a team sport. People live in communities built on trust and care for one another; your digital life is imbricated with others in the same way.

If a node on a network is compromised, it will likely implicate others on the same network. This principle of computer network security is just as applicable to social networks. Although traditional information security often builds from a paradigm of “zero trust,” we are social creatures and must work with trust rather than against it. It’s about incorporating elements of shared trust and pushing for a culture of security.

Sometimes this looks like setting standards for how information is articulated and shared within a trusted group. Sometimes it looks like choosing privacy-focused technologies to serve a community’s computing needs. The point is to normalize these types of conversations, to let others know that you’re caring for them by attending to your own digital hygiene. For example, when you ask for consent to share images that include others from a protest, you are not only pushing for a culture of security, but normalizing the process of asking for consent. This relationship of community care through data privacy hygiene is reciprocal.

Help Prevent Doxxing

As touched on above in the section on other dangers, doxxing can be frustratingly difficult to protect against, especially when public records are being used against you. It’s worth looking into whether your state’s voter registration records are public, and how you can request that your information be redacted (success may vary by state).

Similarly, although business registration records are publicly available, you can appeal to websites that mirror that information (like Bizapedia) to have your personal information taken down. This is of course only a concern if you have a business registration tied to your personal address.

If you work for a business that is susceptible to public records requests revealing sensitive personal information about you, there’s little to be done to prevent it. You can, however, apply for an address confidentiality program if your state has one. You can also do the somewhat tedious work of scrubbing your personal information from other places online (since doxxing often draws on a combination of information sources). Consider subscribing to a service like DeleteMe (or following a free DIY guide) for a more thorough process of minimizing your digital footprint. Collaborating with trusted allies to monitor hate forums is a smart way to avoid having to look up your own information alone; sharing that responsibility makes the work lighter, and a group can plan together for prevention and incident response.

Take a Deep Breath

It’s natural to feel bogged down by all the thought that has to be put towards privacy and security. Again, don’t beat yourself up for feeling powerless in the face of mass surveillance. You aren’t powerless. You can protect yourself, but it’s reasonable to feel frustrated when there is no comprehensive federal data privacy legislation that would alleviate so many of these concerns.

Take a deep breath. You’re not alone in this fight. There are guides to help you learn more about stepping up your privacy and security, and we've even curated a special list of them. There is also the Digital Defense Fund, a digital security organization for the abortion access movement, which we are grateful and proud to boost. And though it can often feel like privacy is getting harder to protect, in many ways it’s actually improving. With all that information, continued trust in your communities, and a culture of security within them, safety is much easier to attain. With a bit of privacy, you can go back to focusing on what matters, like healthcare.

Fourth Amendment is Not For Sale Act Passed the House, Now it Should Pass the Senate

18 April 2024 at 12:25

The Fourth Amendment is Not For Sale Act, H.R.4639, originally introduced in the Senate by Senator Ron Wyden in 2021, has now made the important and historic step of passing the U.S. House of Representatives. In an era when it often seems like Congress cannot pass much-needed privacy protections, this is a victory for vulnerable populations, people who want to make sure their location data is private, and the hard-working activists and organizers who have pushed for the passage of this bill.

Every day, your personal information is harvested by your smartphone applications, sold to data brokers, and used by advertisers hoping to sell you things. But what safeguards prevent the government from shopping in that same data marketplace? Mobile data that is regularly bought and sold, like your geolocation, is information that law enforcement or intelligence agencies would normally need a warrant to acquire. But no warrant is required if they simply buy it, and the U.S. government has been using such purchases as a loophole for acquiring personal information about individuals without a warrant.

Now is the time to close that loophole.

At EFF, we’ve been talking about the need to close the data broker loophole for years. We even launched a massive investigation into the data broker industry, which revealed Fog Data Science, a company that has claimed in marketing materials to have “billions” of data points about “over 250 million” devices, and whose data can be used to learn where its subjects work and live and who their associates are. We found that close to 20 law enforcement agencies had used or been offered this tool.

It’s time for the Senate to close this incredibly dangerous and invasive loophole. If police want a person’s (or a whole community’s) location data, they should have to get a warrant to see it. 

Take action

TELL congress: 702 Needs serious reforms

About Face (Recognition) | EFFector 36.5

17 April 2024 at 13:37

There are a lot of updates in the fight for our freedoms online, from a last-minute reauthorization bill to expand Section 702 (tell your senators to vote NO on the bill here!), a new federal consumer data privacy law (we deserve better!), and a recent draft from the FCC to reinstate net neutrality (you can help clean it up!).

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.5 - About Face (Recognition)

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

How Political Campaigns Use Your Data to Target You

16 April 2024 at 15:49

Data about potential voters—who they are, where they are, and how to reach them—is an extremely valuable commodity during an election year. And while the right to a secret ballot is a cornerstone of the democratic process, your personal information is gathered, used, and sold along the way. It's not possible to fully shield yourself from all this data processing, but you can take steps to at least minimize and understand it.

Political campaigns use the same invasive tricks that behavioral ads do—pulling in data from a variety of sources online to create a profile—so they can target you. Your digital trail is a critical tool for campaigns, but the process starts in the real world, where longstanding techniques to collect data about you can be useful indicators of how you'll vote. This starts with voter records.

Your IRL Voting Trail Is Still Valuable

Politicians have long had access to public data, like voter registration, party registration, address, and participation information (whether or not a voter voted, not who they voted for). Online access to such records has made them easier to get in some states, with unintended consequences, like doxing.

Campaigns can purchase this voter information from most states. These records provide a rough idea of whether that person will vote or not, and—if they're registered to a particular party—who they might lean toward voting for. Campaigns use this to put every voter into broad categories, like "supporter," "non-supporter," or "undecided." Campaigns gather such information at in-person events, too, like door-knocking and rallies, where you might sign up for emails or phone calls.

Campaigns also share information about you with other campaigns, so if you register with a candidate one year, that information will likely go to another candidate in the future. For example, the website for Adam Schiff’s campaign to serve as U.S. Senator from California has a privacy policy with this line under “Sharing of Information”:

With organizations, candidates, campaigns, groups, or causes that we believe have similar political viewpoints, principles, or objectives or share similar goals and with organizations that facilitate communications and information sharing among such groups

Similar language can be found on other campaign sites, including those for Elizabeth Warren and Ted Cruz. These candidate lists are valuable, and are often shared within the national party. In 2017, the Hillary Clinton campaign gave its email list to the Democratic National Committee, a contribution valued at $3.5 million.

If you live in a state with citizen initiative ballot measures, data collected from signature sheets might be shared or used as well. Signing a petition doesn't necessarily mean you support the proposed ballot measure—it's just saying you think it deserves to be put on the ballot. But in most states, these signature pages will remain a part of the public record, and the information you provide may get used for mailings or other targeted political ads. 

How Those Voter Records, and Much More, Lead to Targeted Digital Ads

All that real world information is just one part of the puzzle these days. Political campaigns tap into the same intrusive adtech tracking systems used to deliver online behavioral ads. We saw a glimpse into how this worked after the Cambridge Analytica scandal, and the system has only grown since then.

Specific details are often a mystery, as a political advertising profile may be created by combining disparate information—from consumer scoring data brokers like Acxiom or Experian, smartphone data, and publicly available voter information—into a jumble of data points that’s often hard to trace in any meaningful way. A simplified version of the whole process might go something like this (a toy code sketch follows the list):

  1. A campaign starts with its voter list, which includes names, addresses, and party affiliation. It may have purchased this from the state or its own national committee, or collected some of it for itself through a website or app.
  2. The campaign then turns to a data broker to enhance this list with consumer information. The data broker combines the voter list with its own data, then creates a behavioral profile using inferences based on your shopping, hobbies, demographics, and more. The campaign looks this all over, then chooses some categories of people it thinks will be receptive to its messages in its various targeted ads.
  3. Finally, the campaign turns to an ad targeting company to get the ad on your device. Some ad companies might use an IP address to target the ad to you. As The Markup revealed, other companies might target you based on your phone's location, which is particularly useful in reaching voters not in the campaign's files. 
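
Here is a hypothetical sketch of steps 1 and 2 in code, showing how a voter file might be “enhanced” with a broker’s inferred categories and then filtered into an ad audience. Every name, field, score, and the name-plus-address matching key are invented for illustration; real broker pipelines are proprietary and far messier.

```python
# Hypothetical sketch of voter-file "enhancement" (steps 1 and 2 above):
# join a campaign's voter list with a data broker's consumer profiles,
# then select an audience for a targeted ad. Every name, field, score,
# and the name+address matching key are invented for illustration.

voter_file = [
    {"name": "A. Jones", "address": "12 Oak St", "party": "unaffiliated"},
    {"name": "B. Smith", "address": "48 Elm St", "party": "unaffiliated"},
]

# Broker data keyed on (name, address), with inferred interest scores
# resembling the categories shown in the file pictured below.
broker_profiles = {
    ("A. Jones", "12 Oak St"): {"ev_buyer": 0.9, "climate_concern": 0.8},
    ("B. Smith", "48 Elm St"): {"ev_buyer": 0.1, "climate_concern": 0.2},
}

def enhance(voters, profiles):
    """Attach a broker's inferences to each matching voter record."""
    for voter in voters:
        key = (voter["name"], voter["address"])
        voter.update(profiles.get(key, {}))
    return voters

def target_audience(voters, category, threshold=0.7):
    """Pick the voters a campaign guesses are receptive to one message."""
    return [v["name"] for v in voters if v.get(category, 0) >= threshold]

enhanced = enhance(voter_file, broker_profiles)
print(target_audience(enhanced, "climate_concern"))  # ['A. Jones']
```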

In 2020, Open Secrets found political groups paid 37 different data brokers at least $23 million for access to services or data. These data brokers collect information from browser cookies, web beacons, mobile phones, social media platforms, and more. They found that some companies specialize in more general data, while others, like i360, TargetSmart, and Grassroots Analytics, focus on data useful to campaigns or advocacy.

[Screenshot: a spreadsheet with categories including "Qanon," "Rightwing Militias," "Right to Repair," "Inflation Fault," "Electric Vehicle Buyer," "Climate Change," and "Amazon Worker Treatment"]

A sample of some categories and inferences in a political data broker file that we received through a CCPA request shows the wide variety of assumptions these companies may make.

These political data brokers make a lot of promises to campaigns. TargetSmart claims to have 171 million highly accurate cell phone numbers, and i360 claims to have data on 220 million voters. They also tend to offer specialized campaign categories that go beyond the offerings of consumer-focused data brokers. Check out data broker L2’s “National Models & Predictive Analytics” page, which breaks down interests, demographics, and political ideology—including details like "Voter Fraud Belief," and "Ukraine Continue." The New York Times demonstrated a particularly novel approach to these sorts of profiles where a voter analytics firm created a “Covid concern score” by analyzing cell phone location, then ranked people based on travel patterns during the pandemic.

Some of these companies target based on location data. For example, El Toro claims to have once “identified over 130,000 IP-matched voter homes that met the client’s targeting criteria. El Toro served banner and video advertisements up to 3 times per day, per voter household – across all devices within the home.”

That “all devices within the home” claim may prove important in the coming elections: as streaming video services integrate more ad-based subscription tiers, that likely means more political ads this year. One company, AdImpact, projects $1.3 billion in political ad spending on “connected television” ads in 2024. This may be driven in part by the move away from tracking cookies, which makes web browsing data less appealing.

In the case of connected televisions, ads can also integrate data based on what you've watched, using information collected through automated content recognition (ACR). Streaming device maker and service provider Roku's pitch to potential political advertisers is straightforward: “there’s an opportunity for campaigns to use their own data like never before, for instance to reach households in a particular district where they need to get out the vote.” Roku claims to have at least 80 million users. As a platform for televisions and “streaming sticks,” Roku can collect and use a lot of your viewing data, ranging from apps to broadcast TV to even video games, especially if you opted into ACR (we’ll detail how to check below).

This is vastly different from traditional broadcast TV ads, which might be targeted broadly based on a city or state, and the show being aired. Now, a campaign can target an ad at one household, but not their neighbor, even if they're watching the same show. Of the main streaming companies, only Amazon and Netflix don’t accept political ads.

Finally, there are Facebook and Google, two companies that have amassed a mountain of data points about all their users, and which allow campaigns to target based on some of those factors. According to at least one report, political ad spending on Google (mostly through YouTube) is projected to be $552 million, while Facebook is projected at $568 million. Unlike the data brokers discussed above, most of what you see on Facebook and Google is derived from the data collected by the company from its users. This may make it easier to understand why you’re seeing a political ad, for example, if you follow or view content from a specific politician or party, or about a specific political topic.

What You Can Do to Protect Your Privacy

Managing the flow of all this data might feel impossible, but you can take a few important steps to minimize what’s out there. The chances you’ll catch everything are low, but minimizing what is accessible is still a privacy win.

Install Privacy Badger
Considering how much data is collected just from your day-to-day web browsing, it’s a good idea to protect that first. The simplest way to do so is with our own tracking blocker extension, Privacy Badger.

Disable Your Phone Advertising ID and Audit Your Location Settings
Your phone has an ad identifier that makes it simple for advertisers to track and collate everything you do. Thankfully, you can make this much harder for those advertisers by disabling it:

  • On iPhone: Head into Settings > Privacy & Security > Tracking, and make sure “Allow Apps to Request to Track” is disabled. 
  • On Android: Open Settings > Security & Privacy > Privacy > Ads, and select “Delete advertising ID.”

Similarly, as noted above, your location is a valuable asset for campaigns. They can collect your location through data brokers, which usually get it from otherwise unaffiliated apps. This is why it's a good idea to limit what sorts of apps have access to your location:

  • On iPhone: open Settings > Privacy & Security > Location Services, and disable access for any apps that do not need it. You can also set location for only "While using," for certain apps where it's helpful, but unnecessary to track you all the time. Also, consider disabling "Precise Location" for any apps that don't need your exact location (for example, your GPS navigation app needs precise location, but no weather app does).
  • On Android: Open Settings > Location > App location permissions, and confirm that no apps are accessing your location that you don't want to. As with iOS, you can set it to "Allow only while using the app," for apps that don't need it all the time, and disable "Use precise location," for any apps that don't need exact location access.

Opt Out of Tracking on Your TV or Streaming Device, and Any Video Streaming Service
Nearly every brand of TV is connected to the internet these days. Consumer Reports has a guide for disabling what you can on most popular TVs and software platforms. If you use an Apple TV, you can disable the ad identifier following the exact same directions as on your phone.

Since the passage of a number of state privacy laws, streaming services, like other sites, have offered a way for users to opt out of the sale of their info. Many have extended this right outside of states that require it. You'll need to be logged into your streaming service account to take action on most of these, but TechHive has a list of opt out links for popular streaming services to get you started. Select the "Right to Opt Out" option, when offered.

Don't Click on Links in (or Respond to) Political Text Messages
You've likely been receiving political texts for much of the past year, and that's not going to let up until election day. It is increasingly difficult to decipher whether they're legitimate or spam, and with links that often use a URL shortener or odd looking domains, it's best not to click them. If there's a campaign you want to donate to, head directly to the site of the candidate or ballot sponsor.

Create an Alternate Email and Phone Number for Campaign Stuff
If you want to keep updated on campaign or ballot initiatives, consider setting up an email specifically for that, and nothing else. Since a phone number is also often required, it's a good idea to set up a secondary phone number for these same purposes (you can do so for free through services like Google Voice).

Keep an Eye Out for Deceptive Check Boxes
Speaking of signing up for updates, be mindful of when you don't intend to sign up for emails. Campaigns might use pre-selected options for everything from donation amounts to signing up for a newsletter. So, when you sign up with any campaign, keep an eye on any options you might not intend to opt into.

Mind Your Social Media
Now's a great time to take any sort of "privacy checkup" available on whatever social media platforms you use to help minimize any accidental data sharing. Even though you can't completely opt out of behavioral advertising on Facebook, review your ad preferences and opt out whatever you can. Also be sure to disable access to off-site activity. You should also opt out of personalized ads on Google's services. You cannot disable behavioral ads on TikTok, but the company doesn't allow political ads.

If you're curious to learn more about why you're seeing an ad to begin with, on Facebook you can always click the three-dot icon on an ad, then click "Why am I seeing this ad?" to learn more. For ads on YouTube, you can click the "More" button and then "About this advertiser" to see some information about who placed the ad. Anywhere else you see a Google ad you can click the "Adchoices" button and then "Why this ad?"

You shouldn't need to spend an afternoon jumping through opt out hoops and tweaking privacy settings on every device you own just so you're not bombarded with highly targeted ads. That’s why EFF supports comprehensive consumer data privacy legislation, including a ban on online behavioral ads.

Democracy works because we participate, and you should be able to do so without sacrificing your privacy. 

Speaking Freely: Lynn Hamadallah

16 April 2024 at 15:27

Lynn Hamadallah is a Syrian-Palestinian-French Psychologist based in London. An outspoken voice for the Palestinian cause, Lynn is interested in the ways in which narratives, spoken and unspoken, shape identity. Having lived in five countries and spent a lot of time traveling, she takes a global perspective on freedom of expression. Her current research project investigates how second-generation British-Arabs negotiate their cultural identity. Lynn works in a community mental health service supporting some of London's most disadvantaged residents, many of whom are migrants who have suffered extensive psychological trauma.

York: What does free speech or free expression mean to you? 

Being Arab and coming from a place where there is much more speech policing in the traditional sense, I suppose there is a bit of an idealization of Western values of free speech and democracy. There is this sense of freedom we grow up associating with the West. Yet recently, we’ve come to realize that the way it works in practice is quite different to the way it is described, and this has led to a lot of disappointment and disillusionment in the West and its ideals amongst Arabs. There’s been a lot of censorship for example on social media, which I’ve experienced myself when posting content in support of Palestine. At a national level, we have witnessed the dehumanization going on around protesters in the UK, which undermines the idea of free speech. For example, the pro-Palestine protests where we saw the then-Home Secretary Suella Braverman referring to protesters as “hate marchers.” So we’ve come to realize there’s this kind of veneer of free speech in the West which does not really match up to the more idealistic view of freedom we were taught about.

With the increased awareness we have gained as a result of the latest aggression going on in Palestine, actually what we’re learning is that free speech is just another arm of the West to support political and racist agendas. It’s one of those things that the West has come up with which only applies to one group of people and oppresses another. It’s the same as with human rights you know - human rights for who? Where are Palestinian’s human rights? 

We’ve seen free speech being weaponized to spread hate and desecrate Islam, for example, in the case of Charlie Hebdo and the Quran burning in Denmark and in Sweden. The argument put forward was that those cases represented instances of free speech rather than hate speech. But actually to millions of Muslims around the world those incidents were very, very hateful. They were acts of violence not just against their religious beliefs but right down to their sense of self. It’s humiliating to have a part of your identity targeted in that way with full support from the West, politicians and citizens alike. 

And then, when we— we meaning Palestinians and Palestine allies—want to leverage this idea of free speech to speak up against the oppression happening by the state of Israel, we see time and time again accusations flying around: hate speech, anti-semitism, and censorship. Heavy, heavy censorship everywhere. So that’s what I mean when I say that free speech in the West is a racist concept, actually. And I don’t know that true free speech exists anywhere in the world really. In the Middle East we don’t have democracies but at least there’s no veneer of democracy— the messaging and understanding is clear. Here, we have a supposed democracy, but in practice it looks very different. And that’s why, for me, I don’t really believe that free speech exists. I’ve never seen a real example of it. I think as long as people are power hungry there’s going to be violence, and as long as there’s violence, people are going to want to hide their crimes. And as long as people are trying to hide their crimes there’s not going to be free speech. Sorry for the pessimistic view!

York: It’s okay, I understand where you’re coming from. And I think that a lot of those things are absolutely true. Yet, from my perspective, I still think it’s a worthy goal even though governments—and organizationally we’ve seen this as well—a lot of times governments do try to abuse this concept. So I guess then I would just as a follow-up, do you feel that despite these issues that some form of universalized free expression is still a worthy ideal? 

Of course, I think it’s a worthy ideal. You know, even with social media – there is censorship. I’ve experienced it and it’s not just my word and an isolated incident. It’s been documented by Human Rights Watch—even Meta themselves! They did an internal investigation in 2021—Meta had a nonprofit called Business for Social Responsibility do an investigation and produce a report—and they’ve shown there was systemic censorship of Palestine-related content. And they’re doing it again now. That being said, I do think social media is making free speech more accessible, despite the censorship. 

And I think—to your question—free speech is absolutely worth pursuing. Because we see that despite these attempts at censorship, the truth is starting to come out. Palestine support is stronger than it’s ever been. To the point where we’ve now had South Africa take Israel to trial at the International Court of Justice for genocide, using evidence from social media videos that went viral. So what I’m saying is, free speech has the power to democratize demanding accountability from countries and creating social change, so yes, absolutely something we should try to pursue. 

York: You just mentioned two issues close to my heart. One is the issues around speech on social media platforms, and I’ve of course followed and worked on the Palestinian campaigns quite closely and I’m very aware of the BSR report. But also, video content, specifically, that’s found on social media being used in tribunals. So let me shift this question a bit. You have such a varied background around the world. I’m curious about your perspective over the past decade or decade and a half since social media has become so popular—how do you feel social media has shaped people’s views or their ability to advocate for themselves globally? 

So when we think about stories and narratives, something I’m personally interested in, we have to think about which stories get told and which stories remain untold. These stories and their telling are very much controlled by the mass media— BBC, CNN, and the like. They control the narrative. And I guess what social media is doing is it’s giving a voice to those who are often voiceless. In the past, the issue was that there was such a monopoly over mouthpieces. Mass media were so trusted, to the point where no one would have paid attention to these alternative viewpoints. But what social media has done… I think it’s made people become more aware or more critical of mass media and how it shapes public opinion. There’s been a lot of exposure of their failures, for example the video that went viral of Egyptian podcaster and activist Rahma Zain confronting CNN’s Clarissa Ward at the Rafah border about their biased reporting of the genocide in Palestine. I think that confrontation spoke to a lot of people. She was shouting “You own the narrative, this is our problem. You own the narrative, you own the United Nations, you own Hollywood, you own all these mouthpieces— where are our voices?! Our voices need to be heard!” It was SO powerful and that video really spoke to the sentiment of many Arabs who have felt angry, betrayed and abandoned by the West’s ideals and their media reporting.

Social media is providing a voice to more diverse people, elevating them and giving the public more control over narratives. Another example we’ve seen recently is around what’s currently happening in Sudan and the Democratic Republic of Congo. These horrific events and stories would never have had much of a voice or exposure on the global stage before. And now, thanks to social media, people all over the world are paying more attention and advocating for Sudanese and Congolese rights. 

I personally was raised with quite a critical view of mass media, I think in my family there was a general distrust of the West, their policies and their media, so I never really relied personally on the media as this beacon of truth, but I do think that’s an exception. I think the majority of people rely on mass media as their source of truth. So social media plays an important role in keeping them accountable and diversifying narratives.

York: What are some of the biggest challenges you see right now anywhere in the world in terms of the climate for free expression for Palestinian and other activism? 

I think there’s two strands to it. There’s the social media strand. And there’s the governmental policies and actions. So I think on social media, again, it’s very documented, but it’s this kind of constant censorship. People want to be able to share content that matters to them, to make people more aware of global issues and we see time and time again viewership going down, content being deleted or reports from Meta of alleged hate speech or antisemitism. And that’s really hard. There’ve been random strategies that have popped up to increase social media engagement, like posting random content unrelated to Palestine or creating Instagram polls for example. I used to do that, I interspersed Palestine content with random polls like, “What’s your favorite color?” just to kind of break up the Palestine content and boost my engagement. And it was honestly so exhausting. It was like… I’m watching a genocide in real time, this is an attack on my people and now I’m having to come up with silly polls? Eventually I just gave up and accepted my viewership as it was, which was significantly lower.

At a government level, which is the other part of it, there’s this challenge of constant intimidation that we’re witnessing. I just saw recently there was a 17-year-old boy who was interviewed by the counterterrorism police at an airport because he was wearing a Palestinian flag. He was interrogated about his involvement in a Palestinian protest. When has protesting become a crime and what does that say about democratic rights and free speech here in the UK? And this is one example, but there are so many examples of policing, there was even talk of banning protests all together at one point. 

The last strand I’d include, actually, that I already touched on, is the mass media. Just recently we’ve seen the BBC reporting on the ICJ hearing, they showed the Israeli defense part, but they didn’t even show the South African side. So this censorship is literally in plain sight and poses a real challenge to the climate of free expression for Palestine activism.

York: Who is your free speech hero? 

Off the top of my head I’d probably say Mohammed El-Kurd. I think he’s just been so unapologetic in his stance. Not only that but I think he’s also made us think critically about this idea of narrative and what stories get told. I think it was really powerful when he was arguing the need to stop giving the West and mass media this power, and that we need to disempower them by ceasing to rely on them as beacons of truth, rather than working on changing them. Because, as he argues, oppressors who have monopolized and institutionalized violence will never ever tell the truth or hold themselves to account. Instead, we need to turn to Palestinians, and to brave cultural workers, knowledge producers, academics, journalists, activists, and social media commentators who understand the meaning of oppression and view them as the passionate, angry and, most importantly, reliable narrators that they are.

Americans Deserve More Than the Current American Privacy Rights Act

16 April 2024 at 15:03

EFF is concerned that a new federal bill would freeze consumer data privacy protections in place, by preempting existing state laws and preventing states from creating stronger protections in the future. Federal law should be the floor on which states can build, not a ceiling.

We also urge the authors of the American Privacy Rights Act (APRA) to strengthen other portions of the bill. It should be easier to sue companies that violate our rights. The bill should limit sharing with the government and expand the definition of sensitive data. And it should narrow exceptions that allow companies to exploit our biometric information, our so-called “de-identified” data, and our data obtained in corporate “loyalty” schemes.

Despite our concerns with the APRA bill, we are glad Congress is pivoting the debate to a privacy-first approach to online regulation. Reining in companies’ massive collection, misuse, and transfer of everyone’s personal data should be the unifying goal of those who care about the internet. This debate has been absent at the federal level in the past year, giving breathing room to flawed bills that focus on censorship and content blocking, rather than privacy.

In general, the APRA would require companies to minimize their processing of personal data to what is necessary, proportionate, and limited to certain enumerated purposes. It would specifically require opt-in consent for the transfer of sensitive data, and most processing of biometric and genetic data. It would also give consumers the right to access, correct, delete, and export their data. And it would allow consumers to universally opt-out of the collection of their personal data from brokers, using a registry maintained by the Federal Trade Commission.

We welcome many of these privacy protections. Below are a few of our top priorities to correct and strengthen the APRA bill.

Allow States to Pass Stronger Privacy Laws

The APRA should not preempt existing and future state data privacy laws that are stronger than the current bill. The ability to pass stronger bills at the state and local level is an important tool in the fight for data privacy. We ask that Congress not compromise our privacy rights by undercutting the very state-level action that spurred this compromise federal data privacy bill in the first place.

Subject to exceptions, the APRA says that no state may “adopt, maintain, enforce, or continue in effect” any state-level privacy requirement addressed by the new bill. APRA would allow many state sectoral privacy laws to remain, but it would still preempt protections for biometric data, location data, online ad tracking signals, and maybe even privacy protections in state constitutions or some other limits on what private companies can share with the government. At the federal level, the APRA would also wrongly preempt many parts of the federal Communications Act, including provisions that limit a telephone company’s use of, disclosure of, and access to customer proprietary network information, including location information.

Just as important, it would prevent states from creating stronger privacy laws in the future. States are more nimble at passing laws to address new privacy harms as they arise, compared to Congress which has failed for decades to update important protections. For example, if lawmakers in Washington state wanted to follow EFF’s advice to ban online behavioral advertising or to allow its citizens to sue companies for not minimizing their collection of personal data (provisions where APRA falls short), state legislators would have no power to do so under the new federal bill.

Make It Easier for Individuals to Enforce Their Privacy Rights

The APRA should prevent coercive forced arbitration agreements and class action waivers, allow people to sue for statutory damages, and allow them to bring their case in state court. These rights would allow for rigorous enforcement and help force companies to prioritize consumer privacy.

The APRA has a private right of action, but it is a half-measure that still lets companies sidestep many legitimate lawsuits. And the private right of action does not apply to some of the most important parts of the law, including the central data minimization requirement.

The favorite tool of companies looking to get rid of privacy lawsuits is to bury provisions in their terms of service that force individuals into private arbitration and prevent class action lawsuits. The APRA does not address class action waivers, and it prevents forced arbitration only for children and for people who allege “substantial” privacy harm. Statutory damages and enforcement in state courts are also essential, because federal courts often struggle to acknowledge privacy harm as real, relying instead on a cramped view that does not recognize privacy as a human right. Finally, the bill would allow companies to cure violations rather than face a lawsuit, incentivizing companies to skirt the law until they are caught.

Limit Exceptions for Sharing with the Government

APRA should close a loophole that may allow data brokers to sell data to the government and should require the government to obtain a court order before compelling disclosure of user data. This is important because corporate surveillance and government surveillance are often the same.

Under the APRA, government contractors do not have to follow the bill’s privacy protections. The exemption covers any “entity that is collecting, processing, retaining, or transferring covered data on behalf of a Federal, State, Tribal, territorial, or local government entity, to the extent that such entity is acting as a service provider to the government entity.” Read broadly, this provision could shield data brokers that sell biometric and location information to the government. In fact, Clearview AI previously argued it was exempt from Illinois’ strict biometric law under a similar contractor exception. This loophole needs revision, especially because other parts of the bill rightly prevent covered entities (a category that excludes these government contractors) from selling data to the government for fraud detection, public safety, and criminal activity detection.

The APRA also allows entities to transfer personal data to the government pursuant to a “lawful warrant, administrative subpoena, or other form of lawful process.” EFF urges that this requirement be strengthened to at least a court order or warrant, with prompt notice to the consumer. Protections like this are not unique, and they are especially important in the wake of the Dobbs decision.

Strengthen the Definition of Sensitive Data

The APRA has heightened protections for sensitive data, and it includes a long list of 18 categories of sensitive data, including biometrics, precise geolocation, private communications, and an individual’s online activity over time and across websites. This is a good list, but it should be expanded. We ask Congress to add other categories, like immigration status, union membership, employment history, familial and social relationships, and any covered data processed in a way that would violate a person’s reasonable expectation of privacy. The sensitivity of data is context specific, meaning any data can be sensitive depending on how it is used. The bill should be amended to reflect that.

Limit Other Exceptions for Biometrics, De-identified Data, and Loyalty Programs

An important part of any bill is to make sure the exceptions do not swallow the rule. The APRA’s exceptions on biometric information, de-identified data, and loyalty programs should be narrowed.

In the APRA, biometric information means data “generated from the measurement or processing of the individual’s unique biological, physical, or physiological characteristics that is linked or reasonably linkable to the individual” and excludes “metadata associated with a digital or physical photograph or an audio or video recording that cannot be used to identify an individual.” EFF is concerned that this definition will not protect biometric information used for analysis of sentiment, demographics, and emotion, and that it could be used to argue that hashed biometric identifiers are not covered.
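A short Python sketch shows why a hashed biometric identifier can still be “reasonably linkable” to a person. The template bytes and database record below are invented for illustration (real templates vary somewhat between captures), but any stable identifier derived from a biometric behaves this way: hashing hides the raw measurement without breaking the link to the individual.

```python
import hashlib

# Invented stand-in for a stable template that a face-recognition
# system derives from photos of one person.
face_template = b"stable-template-for-person-A"

# Hashing is deterministic, so the hash works as a persistent lookup key.
hashed_id = hashlib.sha256(face_template).hexdigest()

# An "anonymized" enrollment database keyed by the hash, not the name.
enrollment_db = {hashed_id: "profile #1041"}

# Later, the same person is seen again, the same template is derived,
# and the hash re-identifies them exactly.
later_id = hashlib.sha256(face_template).hexdigest()
print(enrollment_db[later_id])  # -> profile #1041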

De-identified data is excluded from the personal data covered by the APRA, and companies and service providers can turn personal data into de-identified data and then process it however they want. The problem with de-identified data is that it often is not truly de-identified: records stripped of names can frequently be linked back to the people they describe. Moreover, many people do not want the private data they entrust to a company to be used to improve that company’s products or train its algorithms, even if the data has purportedly been de-identified.
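For example, here is a toy linkage attack in Python, with entirely invented records, showing how rows stripped of names can be tied back to a person by joining on quasi-identifiers such as ZIP code and birth date:

```python
# "De-identified" records: names removed, quasi-identifiers kept.
deidentified_health_rows = [
    {"zip": "99501", "birth_date": "1984-07-02", "diagnosis": "asthma"},
    {"zip": "99501", "birth_date": "1990-11-15", "diagnosis": "diabetes"},
]

# A public dataset (e.g., voter rolls) pairing names with the same
# quasi-identifiers. All records here are invented.
public_rows = [
    {"name": "Jane Doe", "zip": "99501", "birth_date": "1984-07-02"},
]

# Joining the two on (zip, birth_date) re-identifies the "anonymous" row.
for person in public_rows:
    key = (person["zip"], person["birth_date"])
    for row in deidentified_health_rows:
        if (row["zip"], row["birth_date"]) == key:
            print(person["name"], "->", row["diagnosis"])  # Jane Doe -> asthma
```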

Under the APRA, many companies can operate loyalty programs and, with opt-in consent, sell the data those programs collect. Loyalty programs are a type of pay-for-privacy scheme that pressures people to surrender their privacy rights as if they were a commodity. Worse, because of our society’s glaring economic inequalities, these schemes will unjustly lead to a society of privacy “haves” and “have-nots.” At the very least, the bill should be amended to prevent companies from selling data they obtain through loyalty programs.

We welcome Congress' privacy-first approach in the APRA and encourage the authors to improve the bill to ensure privacy is protected for generations to come.
