EFF to Court: Electronic Ankle Monitoring Is Bad. Sharing That Data Is Even Worse.

The government violates the privacy rights of individuals on pretrial release when it continuously tracks, retains, and shares their location, EFF explained in a friend-of-the-court brief filed in the Ninth Circuit Court of Appeals.

In the case, Simon v. San Francisco, individuals on pretrial release are challenging the City and County of San Francisco’s electronic ankle monitoring program. The lower court ruled the program likely violates the California and federal constitutions. We—along with Professor Kate Weisburd and the Cato Institute—urge the Ninth Circuit to do the same.

Under the program, the San Francisco County Sheriff collects and indefinitely retains geolocation data from people on pretrial release and turns it over to other law enforcement entities without suspicion or a warrant. The Sheriff shares both comprehensive geolocation data collected from individuals and the results of invasive reverse location searches of all program participants’ location data to determine whether an individual on pretrial release was near a specified location at a specified time.
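
To make concrete what such a “reverse location search” involves, here is a minimal sketch in Python (the schema, field names, radius, and time window are hypothetical illustrations, not details of the Sheriff’s actual system): given timestamped GPS points for every program participant, a single query returns everyone who was near a chosen place at a chosen time.

```python
# Illustrative sketch of a "reverse location search" over retained GPS data.
# All names and parameters are hypothetical; the point is how easily pooled
# ankle-monitor data answers "who was near this location at this time?"
import math
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class GpsPing:
    person_id: str
    timestamp: datetime
    lat: float
    lon: float

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Approximate great-circle distance in meters (haversine formula)."""
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def reverse_location_search(pings, lat, lon, when, radius_m=100.0,
                            window=timedelta(minutes=15)):
    """Return the IDs of everyone with a ping near (lat, lon) around `when`."""
    return {
        p.person_id
        for p in pings
        if abs(p.timestamp - when) <= window
        and distance_m(p.lat, p.lon, lat, lon) <= radius_m
    }
```

A search like this scans every participant’s full location history, whether or not any individual is suspected of anything.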

Electronic monitoring transforms individuals’ homes, workplaces, and neighborhoods into digital prisons, in which devices physically attached to people follow their every movement. All location data can reveal sensitive, private information about individuals, such as whether they were at an office, union hall, or house of worship. This is especially true for the GPS data at issue in Simon, given its high degree of accuracy and precision. Both federal and state courts recognize that location data is sensitive, revealing information in which one has a reasonable expectation of privacy. And, as EFF’s brief explains, the Simon plaintiffs do not relinquish this reasonable expectation of privacy in their location information merely because they are on pretrial release—to the contrary, their privacy interests remain substantial.

Moreover, as EFF explains in its brief, this electronic monitoring is not only invasive, but ineffective and (contrary to its portrayal as a detention alternative) an expansion of government surveillance. Studies have not found significant relationships between electronic monitoring of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do studies show that law enforcement is employing electronic monitoring with individuals they would otherwise put in jail. To the contrary, studies indicate that law enforcement is using electronic monitoring to surveil and constrain the liberty of those who wouldn’t otherwise be detained.

We hope the Ninth Circuit affirms the trial court and recognizes the rights of individuals on pretrial release against invasive electronic monitoring.

EFF Urges Ninth Circuit to Hold Montana’s TikTok Ban Unconstitutional

Montana’s TikTok ban violates the First Amendment, EFF and others told the Ninth Circuit Court of Appeals in a friend-of-the-court brief, urging the court to affirm the trial court’s December 2023 holding to that effect.

Montana’s ban (which EFF and others opposed) prohibits TikTok from operating anywhere within the state and imposes financial penalties on TikTok or any mobile application store that allows users to access TikTok. The district court recognized that Montana’s law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” and blocked Montana’s ban from going into effect. Last year, EFF—along with the ACLU, Freedom of the Press Foundation, Reason Foundation, and the Center for Democracy and Technology—filed a friend-of-the-court brief in support of TikTok and Montana TikTok users’ challenge to this law at the trial court level.

As the brief explains, Montana’s TikTok ban is a prior restraint on speech that prohibits Montana TikTok users—and TikTok itself—from posting on the platform. The law also prohibits TikTok’s ability to make decisions about curating its platform.

Prior restraints such as Montana’s ban are presumptively unconstitutional. For a court to uphold a prior restraint, the First Amendment requires it to satisfy the most exacting scrutiny. The prior restraint must be necessary to further an urgent interest of the highest magnitude, and the narrowest possible way for the government to accomplish its precise interest. Montana’s TikTok ban fails to meet this demanding standard.

Even if the ban is not a prior restraint, the brief illustrates that it would still violate the First Amendment. Montana’s law is a “total ban” on speech: it completely forecloses TikTok users’ speech with respect to the entire medium of expression that is TikTok. As a result, Montana’s ban is subject to an exacting tailoring requirement: it must target and eliminate “no more than the exact source of the ‘evil’ it seeks to remedy.” Montana’s law is undeniably overbroad and fails to satisfy this scrutiny.

This appeal is happening in the immediate aftermath of President Biden signing into law federal legislation that effectively bans TikTok in its current form by requiring TikTok to divest from its Chinese ownership within 270 days. This federal law raises many of the same First Amendment concerns as Montana’s.

It’s important that the Ninth Circuit take this opportunity to make clear that the First Amendment requires the government to satisfy a very demanding standard before it can impose these types of extreme restrictions on Americans’ speech.

Fair Use Still Protects Histories and Documentaries—Even Tiger King

Copyright’s fair use doctrine protects lots of important free expression against the threat of ruinous lawsuits. Fair use isn’t limited to political commentary or erudite works – it also protects popular entertainment like Tiger King, Netflix’s hit 2020 documentary series about the bizarre and sometimes criminal exploits of a group of big cat breeders. That’s why a federal appeals court’s narrow interpretation of fair use in a recent copyright suit threatens not just the producers of Tiger King but thousands of creators who make documentaries, histories, biographies, and even computer software. EFF and other groups asked the court to revisit its decision. Thankfully, the court just agreed to do so.

The case, Whyte Monkee Productions v. Netflix, was brought by a videographer who worked at the Greater Wynnewood Exotic Animal Park, the Oklahoma attraction run by Joe Exotic that was chronicled in Tiger King. The videographer sued Netflix for copyright infringement over the use of his video clips of Joe Exotic in the series. A federal district court in Oklahoma found Netflix’s use of one of the video clips—documenting Joe Exotic’s eulogy for his husband Travis Maldonado—to be a fair use. A three-judge panel of the Court of Appeals for the Tenth Circuit reversed that decision and remanded the case, ruling that the use of the video was not “transformative,” a concept that’s often at the heart of fair use decisions.

The appeals court based its ruling on a mistaken interpretation of the Supreme Court’s opinion in Andy Warhol Foundation for the Visual Arts v. Goldsmith. Warhol was a deliberately narrow decision that upheld the Supreme Court’s prior precedents about what makes a use transformative while emphasizing that commercial uses are less likely to be fair. The Supreme Court held that commercial re-uses of a copyrighted work—in that case, licensing an Andy Warhol print of the artist Prince for a magazine cover when the print was based on a photo that was also licensed for magazine covers—required a strong justification. The Warhol Foundation’s use of the photo was not transformative, the Supreme Court said, because Warhol’s print didn’t comment on or criticize the original photograph, and there was no other reason why the foundation needed to use a print based on that photograph in order to depict Prince. In Whyte Monkee, the Tenth Circuit homed in on the Supreme Court’s discussion about commentary and criticism but mistakenly read it to mean that only uses that comment on an original work are transformative. The court remanded the case to the district court to re-do the fair use analysis on that basis.

As EFF, along with Authors Alliance, American Library Association, Association of Research Libraries, and Public Knowledge explained in an amicus brief supporting Netflix’s request for a rehearing, there are many kinds of transformative fair uses. People creating works of history or biography frequently reproduce excerpts from others’ copyrighted photos, videos, or artwork as indispensable historical evidence. For example, using sketches from the famous Zapruder film in a book about the assassination of President Kennedy was deemed fair, as was reproducing the artwork from Grateful Dead posters in a book about the band. Software developers use excerpts from others’ code—particularly declarations that describe programming interfaces—to build new software that works with what came before. And open government organizations, like EFF client Public.Resource.Org, use technical standards incorporated into law to share knowledge about the law. None of these uses involves commentary or criticism, but courts have found them all to be transformative fair uses that don’t require permission.

The Supreme Court was aware of these uses and didn’t intend to cast doubt on their legality. In fact, the Supreme Court cited to many of them favorably in its Warhol decision. And the Court even engaged in some non-commentary fair use itself when it included photos of Prince in its opinion to illustrate how they were used on magazine covers. If the Court had meant to overrule decades of court decisions, including its own very recent Google v. Oracle decision about software re-use, it would have said so.

Fortunately, the Tenth Circuit heeded our warning, and the warnings of Netflix, documentary filmmakers, legal scholars, and the Motion Picture Association, all of whom filed briefs. The court vacated its decision and asked for further briefing about Warhol and what it means for documentary filmmakers.

The bizarre story of Joe Exotic and his friends and rivals may not be as important to history as the Kennedy assassination, but fair use is vital to bringing us all kinds of learning and entertainment. If other courts start treating the Warhol decision as a radical rewriting of fair use law when that’s not what the Supreme Court said at all, many kinds of free expression will face an uncertain future. That’s why we’re happy that the Tenth Circuit withdrew its opinion. We hope the court will, as the Supreme Court did, reaffirm the importance of fair use.

The Cybertiger Strikes Again! EFF's 8th Annual Tech Trivia Night

Being well into spring, with the weather getting warmer, we knew it was only a matter of time till the Cybertiger awoke from his slumber. But we were prepared. Prepared to quench the Cybertiger’s thirst for tech nerds ready to answer his obscure and fascinating questions of tech-related minutiae.

But how did we prepare for the Cybertiger’s quiz? Well, with our 8th Annual Tech Trivia Night of course! We gathered fellow digital freedom supporters to test their tech know-how, and to eat delicious tacos, churros, and special tech-themed drinks, including LimeWire, Moderated Content, and Zero Cool.

Nine teams gathered before the Cybertiger, ready to battle for the *new* wearable first, second, and third place prizes:

[Photo: EFF’s Tech Trivia Awards, acrylic awards featuring a blue and pink tiger.]

But this year, the Cybertiger had a surprise up his sleeve! A new way to secure points had been added: bribes. Now, teams could donate to EFF to sway the judges and increase their total points to secure their lead. Still, the winner of the first-place prize was the Honesty Winner, so participants needed to be on their A-game to win!

At the end of round two of six, teams Bad @ Names and 0x41434142 were tied for first place, making for a tense game! It wasn’t until the bonus question after round two, when the Cybertiger asked each team, “What prompt would you use to jailbreak the Cybertiger AI?”, that Bad @ Names pulled into first place with their answer.

By the end of round four, Bad @ Names was still in first place, leading by only three points! Could they win the bonus question again? This time, each team was asked to create a ridiculous company elevator pitch that would fit right in on the RSA expo floor. (Spoiler alert: these company ideas were indeed ridiculous!)

After the sixth round of questions, the Cybertiger gave one last chance for teams to scheme their way to victory! The suspense built, but after some time, we got our winners... 

In third place, AI Hallucinations with 60 total points! 

In second place, and also winning the bribery award, 0x41434142, with 145 total points!

In first place... Bad @ Names with 68 total points!

EFF’s sincere appreciation goes out to the many participants who joined us for a great quiz over tacos and drinks while never losing sight of EFF’s mission to drive the world towards a better digital future. Thank you to the digital freedom supporters around the world helping to ensure that EFF can continue working in the courts and on the streets to protect online privacy and free expression.

Thanks to EFF's Luminary Organizational Members DuckDuckGo, No Starch Press, and the Hering Foundation for their year-round support of EFF's mission. If you or your company are interested in supporting a future EFF event, or would like to learn more about Organizational Membership, please contact Tierney Hamilton.

Learn about upcoming EFF events when you sign up for our email list, or just check out our event calendar. We hope to see you soon!

Coalition to Calexico: Think Twice About Reapproving Border Surveillance Tower Next to a Public Park

Update May 15, 2024: The letter has been updated to include support from the Southern Border Communities Coalition. It was re-sent to the Calexico City Council. 

On the southwest side of Calexico, a border town in California’s Imperial Valley, a surveillance tower casts a shadow over a baseball field and a residential neighborhood. In 2000, the Immigration and Naturalization Service (the precursor to the Department of Homeland Security (DHS)) leased the corner of Nosotros Park from the city for $1 a year for the tower. But now the lease has expired, and DHS component Customs & Border Protection (CBP) would like the city to re-up the deal.

[Map: Nosotros Park and the location of the surveillance tower.]

But times—and technology—have changed. CBP’s new strategy calls for adopting powerful artificial intelligence technology to not only control the towers, but to scan, track and categorize everything they see.  

Now, privacy and social justice advocates including the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition have joined EFF in sending the city council a letter urging them to not sign the lease and either spike the project or renegotiate it to ensure that civil liberties and human rights are protected.  

The groups write:

The Remote Video Surveillance System (RVSS) tower at Nosotros Park was installed in the early 2000s when video technology was fairly limited and the feeds required real-time monitoring by human personnel. That is not how these cameras will operate under CBP's new AI strategy. Instead, these towers will be controlled by algorithms that will autonomously detect, identify, track and classify objects of interest. This means that everything that falls under the gaze of the cameras will be scanned and categorized. To an extent, the AI will autonomously decide what to monitor and recommend when Border Patrol officers should be dispatched. While a human being may be able to tell the difference between children playing games or residents getting ready for work, AI is prone to mistakes and difficult to hold accountable. 

In an era where the public has grave concerns on the impact of unchecked technology on youth and communities of color, we do not believe enough scrutiny and skepticism has been applied to this agreement and CBP's proposal. For example, the item contains very little in terms of describing what kinds of data will be collected, how long it will be stored, and what measures will be taken to mitigate the potential threats to privacy and human rights. 

The letter also notes that CBP’s tower programs have repeatedly failed to achieve the promised outcomes. In fact, the DHS Inspector General found that the early 2000s program “yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts.”

The groups are calling for Calexico to press pause on the lease agreement until CBP can answer a list of questions about the impact of the surveillance tower on privacy and human rights. Should the city council insist on going forward, they should at least require regular briefings on any new technologies connected to the tower and the ability to cancel the lease on much shorter notice than the 365 days currently spelled out in the proposed contract.  

One (Busy) Day in the Life of EFF’s Activism Team

EFF is an organization of lawyers, technologists, policy professionals, and, importantly, full-time activists, who fight to make sure that technology enhances rather than threatens civil liberties on a global scale. EFF’s activism team includes experienced issue experts, master communicators, and grassroots organizers who help to coordinate and orchestrate EFF’s activist campaigns, which include but go well beyond litigation, technical analyses and solutions, and direct lobbying to legislators.

If you’ve ever wondered what it would be like to work on the activism team at EFF, or if you are curious about applying for a job at EFF, take a look at one exceptional (but also fairly ordinary) day in the life of five members of the team:

Jillian York, Director For International Freedom of Expression

I wake up around 9:00, make coffee, and check my email and internal messages (we use Mattermost, a self-hosted chat tool). I live in Berlin—between four and nine hours ahead of most of my colleagues—which on most days enables me to get some “deep work” done before anyone else is online.

I see that one of my colleagues in San Francisco left a late-night message asking for someone to edit a short blog post. No one else is awake yet, so I jump on it. I then work on a piece of writing of my own, documenting the case of Alaa Abd El Fattah, an Egyptian technologist, blogger, and EFF supporter who’s been imprisoned on and off for the past decade. After that, I respond to some emails and messages from colleagues from the day prior.

EFF offers us flexible hours, and since I’m in Europe I often have to take calls in the evening (6 or 7 pm my time is 9 or 10 am San Francisco time, when a lot of team meetings take place). I see this as an advantage, as it allows me to meet a friend for lunch and hit the gym before heading back to work. 

There’s a dangerous new bill being proposed in a country where we don’t have so much expertise, but which looks likely to have a greater impact across the region, so a colleague and I hop on a call with a local digital rights group to plan a strategy. When we work internationally, we always consult or partner with local groups to make sure that we’re working toward the best outcome for the local population.

While I’m on the call, my Signal messages start blowing up. A lot of the partners we work with in another region of the world prefer to organize there for reasons of safety, and there’s been a cyberattack on a local media publication. Our partners are looking for some assistance in dealing with it, so I send some messages to colleagues (both at EFF and other friendly organizations) to get them the right help.

After handling some administrative tasks, it’s time for the meeting of the international working group. In that group, we discuss threats facing people outside the U.S., often in areas that are underrepresented by both U.S. and global media.

After that meeting, it's off to prep for a talk I'll be giving at an upcoming conference. There have been improvements in social media takedown transparency reporting, but there are a lot of ways to continue that progress, and a former colleague and I will be hosting a mock game show about the heroes and anti-heroes of transparency. By the time I finish that, it's nearly 11 pm my time, so it's off to bed for me, but not for everyone else!

Matthew Guariglia, Senior Policy Analyst Responsible for Government Surveillance Advocacy

My morning can sometimes start surprisingly early. This morning, a reporter I often speak to called to ask if I had any comments about a major change to how Amazon Ring security cameras will allow police to request access to users’ footage. I quickly try to make sense of the new changes—Amazon’s press release doesn’t say nearly enough. Giving a statement to the press requires a brief huddle between me, EFF’s press director, and other lawyers, technologists, and activists who have worked on our Ring campaign over the last few years. Soon, we have a statement that conveys exactly what we think Amazon needs to do differently, and what users and non-users should know about this change and its impact on their rights. About an hour after that, we turn our brief statement into a longer blog post for everyone to read.

For the rest of the day now, in between other obligations and meetings, I take press calls or do TV interviews from curious reporters asking whether this change in policy is a win for privacy. My first meeting is with representatives of about a dozen mostly-local groups in the Bay Area, where EFF is located, about the next steps for opposing Proposition E, a ballot measure that greatly reduces the amount of oversight on the San Francisco Police Department concerning what technology they use. I send a few requests to our design team about printing window signs and then talk with our Activism Director about making plans to potentially fly a plane over the city. Shortly after that, I’m in a coalition meeting of national civil liberties organizations discussing ways of keeping a clean reauthorization of Section 702 (a mass surveillance authority that expires this year) out of a must-pass bill that would continue to fund the government. 

In the afternoon, I watch and take notes as a Congressional committee holds a hearing about AI use in law enforcement. Keeping an eye on this allows me to see what arguments and talking points law enforcement is using, which members of Congress seem critical of AI use in policing and might be worth getting in touch with, and whether there are any revelations in the hearing that we should communicate to our members and readers. 

After the hearing, I have to briefly send notes to a Senator and their staff on a draft of a public letter they intend to send to industry leaders about data collection—and when law enforcement may or may not request access to stored user data. 

Tomorrow, I’ll follow up on many of the plans made over the course of this day: I’ll need to send out a mass email to EFF supporters in the Bay Area rallying them to join in the fight against Proposition E, and review new federal legislation to see if it offers enough reform of Section 702 that EFF might consider supporting it.

Hayley Tsukayama, Associate Director of Legislative Activism

I settle in with a big mug of tea to start a day full of online meetings. This probably sounds boring to a lot of people, but I know I'll have a ton of interesting conversations today.

Much of my job coordinating our state legislative work requires speaking with like-minded organizations across the country. EFF tries, but we can't be everywhere we want to be all of the time. So, for example, we host a regular call with groups pushing for stronger state consumer data privacy laws. This call gives us a place to share information about a dozen or more privacy bills in as many states. Some groups on the call focus on one state; others, like EFF, work in multiple states. Our groups may not agree on every bill, but we're all working toward a world where companies must respect our privacy by default.

You know, just a small goal.

Today, we get a summary of a hearing that a friendly lawmaker organized to give politicians from several states a forum to explain how big tech companies, advertisers, and data brokers have stymied strong privacy legislation. This is one reason we compare notes: the more we know about what they're doing, the better we can fight them—even though the other side has more money and staff for state legislative work than all of us combined.

From there, I jump to a call on emerging AI legislation in states. Many companies pushing weak AI regulation make software that monitors employees, so this work has connected me to a universe of labor advocates I've never gotten to work with before. I've learned so much from them, both about how AI affects working conditions and about the ways they organize and mobilize people. Working in coalitions shows me how different people bring their strengths to a broader movement.

At EFF, our activists know: we win with words. I make a note to myself to start drafting a blog post on some bad copy-paste AI bills showing up across the country, which companies have carefully written to exempt their own products.

My position lets me stick my nose into almost every EFF issue, which is one thing I love about it. For the rest of the day, I meet with a group of right-to-repair advocates whose decades of advocacy have racked up incredible wins in the past couple of years. I update a position letter to the California legislature about automotive data. I send a draft action to one of our lawyers—who I get to work with every day—about a great Massachusetts bill that would prohibit the sale of location data without permission. I debrief with two EFF staffers who testified this week in Sacramento on two California bills—one on IP issues, another on police surveillance. I polish a speech I’m giving with one of my colleagues, who has kindly made time to help me. I prep for a call with young activists who want to discuss a bill idea.

There is no "typical" day in my job. The one constant is that I get to work with passionate people, at EFF and outside of it, who want to make the world a better place. We tackle tough problems, big and small—but always ones that matter. And, sure, I have good days and bad days. But I can say this: they are rarely boring.

Rory Mir, Associate Director of Community Organizing 

As an organizer at EFF, I juggle long-term projects and needs with rapid responses for both EFF and our local allies in our grassroots network, the Electronic Frontier Alliance. Days typically start with morning rituals that keep me grounded as a remote worker: I wake up, make coffee, put on music. I log in, set TODOs, clear my inbox. I get dressed, check the news, morning dog walk.

Back at my desk, I start with small tasks—reach out to a group I met at a conference, add an event to the EFF calendar, and promote EFA events on social media. Then, I get a call from a Portland EFA group. A city ordinance shedding light on police use of surveillance tech needs support. They’re working on a coalition letter EFF can sign, so I send it along to our street-level surveillance team, schedule a meeting, and reach out to aligned groups in PDX.

Next up is a policy meeting on consumer privacy. Yesterday in Congress, the House passed a bill undermining privacy (again) and we need to kill it (again). We discuss key Senate votes, and I remember that an EFA group had a good relationship with one of those members in a campaign last year. I reach out to the group with links on our current campaign and see if they can help us lobby on the issue.

After a quick vegan lunch, I start a short Deeplinks post celebrating a major website connecting to the Fediverse, promoting folks’ autonomy online. I’m not quite done in time for my next meeting, planning an upcoming EFA meetup with my team. Before we get started though, an urgent message from San Diego interrupts us—the city council moved a crucial hearing on automated license plate readers (ALPRs) to tomorrow. We reschedule and pivot to drafting an action alert email for the area as well as social media pushes to rally support.

In the home stretch, I set that meeting with Portland groups and make sure our newest EFA member has information on our workshop next week. After my last meeting for the day, a coalition call on Right to Repair (with Hayley!), I send my blog to a colleague for feedback, and wrap up the day in one of our off-topic chats. While passionately ranking Godzilla movies, my dog helpfully reminds me it’s time to log off and go on another walk.

Thorin Klosowski, Security and Privacy Activist

I typically start my day with reading—catching up on some broad policy things, but just as often poking through product-related news sites and consumer tech blogs—so I can keep an eye out for any new sorts of technology terrors that might be on the horizon, privacy promises that seem too good to be true, or any data breaches and other security guffaws that might need to be addressed.

If I’m lucky (or unlucky, depending on how you look at it), I’ll find something strange enough to bring to our Public Interest Technology crew for a more detailed look. Maybe it’ll be the launch of a new feature that promises privacy but doesn’t seem to deliver it, or in rare cases, a new feature that actually seems to. In either instance, if it seems worth a closer look, I’ll often then chat through all this with the technologists who specialize in the technology at play, then decide whether it’s worth writing something, or just keeping in our deep log of “terrible technologies to watch out for.” This process works in reverse, too—where someone on the PIT team brings up something they’re working on, like sketchyware on an Android tablet, and we’ll brainstorm some ways to help people who’re stuck with these types of things make them less sucky.

Today, I’m also tagging along with a couple of members of the PIT team at a meeting with representatives from a social media company that’s rolling out a new feature in its end-to-end encryption chat app. The EFF technologists will ask smart, technical questions and reference research papers with titles like, “Unbreakable: Designing for Trustworthiness in Private Messaging” while I furiously take notes and wonder how on earth we’ll explain all the positive (or negative) effects on individual privacy this feature might pose if it does in fact release.

With whatever time I have left, I’ll then work on Surveillance Self-Defense, our guide to protecting you and your friends from online spying. Today, I’m working through updating several of our encryption guides, which means chatting with our resident encryption experts on both the legal and PIT teams. What makes SSD so good, in my eyes, is how much knowledge backs every single word of every guide. This is what sets SSD apart from the graveyard of security guides online, but it also means a lot of wrangling to get eyes on everything that goes on the site. Sometimes a guide update clicks together smoothly and we update things quickly. Sometimes one update to a guide cascades across a half dozen others, and I start to feel like I have one of those serial killer boards, but I’m keeping track of several serial killers across multiple timelines. But however an SSD update plays out, it all needs to get translated, so I’ll finish off the day with a look at a spreadsheet of all the translations to make sure I don’t need to send anything new over (or just as often, realize I’ve already gotten translations back that need to be put online).

*****

We love giving people a picture of the work we do on a daily basis at EFF to help protect your rights online. Our former Activism Directors, Elliot Harmon and Rainey Reitman, each wrote one of these blogs in the past as well. If you’d like to join us on the EFF Activism Team, or anywhere else in the organization, check out opportunities to do so here.

Speaking Freely: Mohamed El Gohary

Interviewer: Jillian York

Mohamed El Gohary is an open-knowledge enthusiast. After majoring in Biomedical Engineering, he switched careers in October 2010 to work as a Social Media manager for the Al-Masry Al-Youm newspaper until October 2011, when he joined Global Voices, managing Lingua until the end of 2021. He now works for IFEX as the MENA Network Engagement Specialist.

This interview has been edited for length and clarity.

York: What does free speech or free expression mean for you?

Free speech, freedom of expression, means for me the ability of people to govern themselves. It means that the real meaning of democracy cannot happen without freedom of speech, without people expressing their needs across different spectrums. The idea of civic space, the idea of people basically living their lives and using different means of communication to get things done, runs through freedom of speech.

York: What’s an experience that shaped your views on freedom of expression?

Well, my background is using the internet. So in the early days of using the internet, I always believed that it would enable people to express themselves in a way that makes for a better democratic process. But right now that has changed, because online spaces have shifted from decentralized to centralized spaces, which are the antithesis of democracy. So the internet turns into an oligarchs’ world. Which is, again, going back to freedom of expression. I think there is uncharted territory, in terms of activism and in terms of platforms online and offline, to maybe reinvent the wheel in a way for people to have a better democratic process in terms of freedom of expression.

York: You came up in an era where social media had so much promise, and now, like you said about the oligarchical online space—which I tend to agree with—we’re in kind of a different era. What are your views right now on regulation of social media?

Well, it’s still related to the democratic process. It’s a similar conversation to, let’s say, the Internet Governance Forum where… where is the decision making? Who has the power dynamics around decision making? So there are governments, then there are private companies, then there is law and the rule of law, and then there is civil society. And there’s good civil society and there’s bad civil society, in terms of their relationship with both governments and companies. So it goes back to freedom of expression as a collective and in an individual manner. And it comes to people and freedom of assembly, in terms of absolute right and in terms of practice, to reinvent the democratic process. It’s the whole system. It turns out it’s not just freedom of expression. Freedom of expression has an important role, and the democratic process can’t be reinvented without looking at freedom of expression. The whole system, democracy, Western democracy and how different countries apply it, works in ways that affect and create the power of the rich and powerful while the rest of the population just loses hope in different ways. Everything goes back to reinventing the democratic process. And freedom of expression is a big part of it.

York: So this is a special interview, we’re here at the IFEX general meeting. What are some of the things that you’re seeing here, either good or bad, and maybe even what are some things that give you hope about the IFEX network?

I think, inside the IFEX network and the extended IFEX network, it’s the importance of connection. It’s the importance of collaboration. Different governments try to always work together to establish their power structures, while the resources governments have are not always available to civil society. So it’s important for civil society organizations—and IFEX is an example of collaboration between a large number of organizations around the world—in all scales, in all directions, that these kinds of collaborations happen in different organizations, while still encouraging every organization to look at itself, to look at itself as an organization, to look at how it’s working. To ask themselves: is it just a job? Are we working for a cause? Are we working for a cause in the right way? It’s the other side of the coin to how governments work and maintain existing power structures. There needs to be the other side of the coin in terms of, again, reinventing the democratic process.

York: Is there anything I didn’t ask that you want to mention?

My only frustration is when organizations work as if it is just a job, and they only do the minimum, for example. And that’s the good case scenario. The bad case scenario is when a civil society organization is working for the government or for private companies—where organizations can be a burden more than a resource. I don’t know how to approach that without cost. Cost is difficult, cost is expensive, it’s ugly, it’s not something you look for when you start your day. And there is a very small number of people and organizations who would be willing to even think about paying the price of being an inconvenience to organizations that are burdening entities. That would be my immediate and long-term frustration with civil society, at least in my vicinity.

York: Who is your free speech hero?

For me, as an Egyptian, that would be Alaa Abd El-Fattah. As a person who is a perfect example of looking forward to being an inconvenience. And there are not a lot of people who would be this kind of inconvenience. There are many people who appear like they are an inconvenience, but they aren’t really. This would be my hero.

Big Tech to EU: "Drop Dead"

The European Union’s new Digital Markets Act (DMA) is a complex, many-legged beast, but at root, it is a regulation that aims to make it easier for the public to control the technology they use and rely on.  

One DMA rule forces the powerful “gatekeeper” tech companies to allow third-party app stores. That means that you, the owner of a device, can decide who you trust to provide you with software for it.  

Another rule requires those tech gatekeepers to offer interoperable gateways that other platforms can plug into, so you can quit using a chat client, switch to a rival, and still connect with the people you left behind (similar measures may come to social media in the future).

There’s a rule banning “self-preferencing.” That’s when platforms push their often inferior, in-house products and hide superior products made by their rivals. 

And perhaps best of all, there’s a privacy rule, reinforcing the eight-year-old General Data Protection Regulation, a strong privacy law that has been flouted for too long, especially by the largest tech giants.

In other words, the DMA is meant to push us toward a world where you decide which software runs on your devices, where it’s easy to find the best products and services, where you can leave a platform for a better one without forfeiting your social relationships, and where you can do all of this without getting spied on.

If it works, this will get dangerously close to the better future we’ve spent the past thirty years fighting for.

There’s just one wrinkle: the Big Tech companies don’t want that future, and they’re trying their damnedest to strangle it in its cradle.

 Right from the start, it was obvious that the tech giants were going to war against the DMA, and the freedom it promised to their users. Take Apple, whose tight control over which software its customers can install was a major concern of the DMA from its inception.

Apple didn’t invent the idea of a “curated computer” that could only run software that was blessed by its manufacturer, but they certainly perfected it. iOS devices will refuse to run software unless it comes from Apple’s App Store, and that control over Apple’s customers means that Apple can exert tremendous control over app vendors, too. 

Apple charges app vendors a whopping 30 percent commission on most transactions, both the initial price of the app and everything you buy from it thereafter. This is a remarkably high transaction fee—compare it to the credit-card sector, itself the subject of sharp criticism for its high 3-5 percent fees. To maintain those high commissions, Apple also restricts its vendors from informing their customers about the existence of other ways of paying (say, via their website) and at various times has also banned its vendors from offering discounts to customers who complete their purchases without using the app.
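
To put those percentages side by side, here is a back-of-the-envelope comparison (a sketch with illustrative numbers only):

```python
# Rough fee comparison on a hypothetical $10 in-app purchase.
price = 10.00
apple_commission = 0.30                   # Apple's ~30 percent commission
card_fee_low, card_fee_high = 0.03, 0.05  # the 3-5 percent credit-card fees cited above

print(f"Apple's cut:       ${price * apple_commission:.2f}")                          # $3.00
print(f"Card network's cut: ${price * card_fee_low:.2f}-${price * card_fee_high:.2f}")  # $0.30-$0.50
```

On the same transaction, Apple’s commission is six to ten times the fee that has drawn sharp criticism in the credit-card sector.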

Apple is adamant that it needs this control to keep its customers safe, but in theory and in practice, Apple has shown that it can protect you without maintaining this degree of control, and that it uses this control to take away your security when it serves the company’s profits to do so. 

Apple is worth between two and three trillion dollars. Investors prize Apple’s stock in large part due to the tens of billions of dollars it extracts from other businesses that want to reach its customers. 

The DMA is aimed squarely at these practices. It requires the largest app store companies to grant their customers the freedom to choose other app stores. Companies like Apple were given over a year to prepare for the DMA, and were told to produce compliance plans by March of this year. 

But Apple’s compliance plan falls very short of the mark: between a blizzard of confusing junk fees (like the €0.50 per-install, per-year “Core Technology Fee” that the most popular apps will have to pay Apple even if their apps are sold through a rival store) and onerous conditions (app makers who try to sell through a rival app store have their offerings removed from Apple’s store and are permanently banned from it), the plan in no way satisfies the EU’s goal of fostering competition in app stores.
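
For a sense of scale, here is the arithmetic for a hypothetical popular app under Apple’s announced terms (the 1-million-install exemption threshold is Apple’s published figure, an assumption beyond the text above):

```python
# Back-of-the-envelope Core Technology Fee for a hypothetical popular app.
# Assumes Apple's announced terms: EUR 0.50 per first annual install beyond
# a 1 million install threshold (the threshold is not stated in the text above).
annual_installs = 10_000_000
exempt_installs = 1_000_000
fee_per_install_eur = 0.50

fee_eur = max(0, annual_installs - exempt_installs) * fee_per_install_eur
print(f"Annual Core Technology Fee: EUR {fee_eur:,.2f}")  # EUR 4,500,000.00
```

Under those assumptions, an app with ten million annual installs would owe Apple roughly €4.5 million a year, even if not a single copy were distributed through Apple’s own store.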

That’s just scratching the surface of Apple’s absurd proposal: Apple’s customers will have to successfully navigate a maze of deeply buried settings just to try another app store (and there are some pretty cool-sounding app stores in the wings!), and Apple will disable all your third-party apps if you take your phone out of the EU for 30 days.

Apple appears to be playing a high-stakes game of chicken with EU regulators, effectively saying, “Yes, you have 500 million citizens, but we have three trillion dollars, so why should we listen to you?” Apple inaugurated this performance of noncompliance by banning Epic, the company most closely associated with the EU’s decision to require third party app stores, from operating an app store and terminating its developer account (Epic’s account was later reinstated after the EU registered its disapproval). 

It’s not just Apple, of course.  

The DMA includes new enforcement tools to finally apply the General Data Protection Regulation (GDPR) to US tech giants. The GDPR is Europe’s landmark privacy law, but in the eight years since its passage, Europeans have struggled to use it to reform the terrible privacy practices of the largest tech companies.

Meta is one of the worst on privacy, and no wonder: its entire business is grounded in the nonconsensual extraction and mining of billions of dollars’ worth of private information from billions of people all over the world. The GDPR should be requiring Meta to actually secure our willing, informed (and revocable) consent to carry on all this surveillance, and there’s good evidence that more than 95 percent of us would block Facebook spying if we could. 

Meta’s answer to this is a “Pay or Okay” system, in which users who do not consent to Meta’s surveillance will have to pay to use the service, or be blocked from it. Unfortunately for Meta, this is prohibited (privacy is not a luxury good that only the wealthiest should be afforded).  

Just like Apple, Meta is behaving as though the DMA permits it to carry on its worst behavior, with minor cosmetic tweaks around the margins. Just like Apple, Meta is daring the EU to enforce its democratically enacted laws, implicitly promising to pit its billions against Europe’s institutions to preserve its right to spy on us. 

These are high-stakes clashes. As the tech sector grew more concentrated, it also grew less accountable, able to substitute lock-in and regulatory capture for making good products and having its users’ backs. Tech has found new ways to compromise our privacy rights, our labor rights, and our consumer rights, at scale.

After decades of regulatory indifference to tech monopolization, competition authorities all over the world are taking on Big Tech. The DMA is by far the most muscular and ambitious salvo we’ve seen. 

Seen in that light, it’s no surprise that Big Tech is refusing to comply with the rules. If the EU successfully forces tech to play fair, it will serve as a starting gun for a global race to the top, in which tech’s ill-gotten gains of data, power, and money will be returned to the users and workers from whom that treasure came.

The architects of the DMA and DSA foresaw this, of course. They’ve announced investigations into Apple, Google and Meta, threatening fines of 10 percent of the companies’ global income, which will double to 20 percent if the companies don’t toe the line. 

It’s not just Big Tech that’s playing for all the marbles: so are the systems of democratic control and accountability. If Apple can sabotage the DMA’s insistence on taking away its veto over its customers’ software choices, that will spill over into the US Department of Justice’s case over the same issue, as well as the cases in Japan and South Korea, and the pending enforcement action in the UK.

Victory! FCC Closes Loopholes and Restores Net Neutrality

Thanks to weeks of the public speaking up and taking action, the FCC has recognized the flaw in its proposed net neutrality rules. The FCC’s final adopted order on net neutrality restores bright line rules against all forms of throttling, once again creating strong federal protections for all Americans.

The FCC’s initial order had a narrow interpretation of throttling that could have allowed ISPs to create so-called fast lanes, speeding up access to certain sites and services and effectively slowing down other traffic flowing through your network. The order’s bright line rule against throttling now explicitly bans this kind of conduct, finding that the “decision to speed up ‘on the basis of Internet content, applications, or services’ would ‘impair or degrade’ other content, applications, or services which are not given the same treatment.” With this language, the order both hews more closely to the 2015 Order and further aligns with the strong protections Californians already enjoy via California’s net neutrality law.
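
A toy example makes the order’s logic concrete (a sketch with made-up numbers, not a model of any real ISP): on a link of fixed capacity, a “fast lane” reserved for one service necessarily leaves less for everything else.

```python
# Toy illustration: on a fixed-capacity link, a paid "fast lane" for one
# service is indistinguishable from throttling everything else.
# All numbers are made up for illustration.
LINK_CAPACITY_MBPS = 100.0
flows = ["video_partner", "other_site_a", "other_site_b", "other_site_c"]

# Neutral treatment: every flow shares the link equally (25 Mbps each).
neutral = {f: LINK_CAPACITY_MBPS / len(flows) for f in flows}

# "Fast lane": 70 Mbps reserved for the paying partner; the remaining
# 30 Mbps is split among everyone else (10 Mbps each).
fast_lane = {"video_partner": 70.0}
remaining = LINK_CAPACITY_MBPS - fast_lane["video_partner"]
for f in flows[1:]:
    fast_lane[f] = remaining / (len(flows) - 1)

for f in flows:
    print(f"{f}: {neutral[f]:.0f} Mbps neutral -> {fast_lane[f]:.0f} Mbps with fast lane")
```

Speeding up one service’s traffic on a shared link degrades everything that didn’t pay, which is exactly the conduct the order’s bright line rule now reaches.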

As we celebrate this victory, it is important to remember that net neutrality is more than just bright line rules against blocking, throttling, and paid prioritization: It is the principle that ISPs should treat all traffic coming over their networks without discrimination. Customers, not ISPs, should decide for themselves how they would like to experience the internet. EFF—standing with users, innovators, creators, public interest advocates, libraries, educators and everyone else who relies on the open internet—will continue to champion this principle. 

The FBI is Playing Politics with Your Privacy

A bombshell report from WIRED reveals that two days after the U.S. Congress renewed and expanded the mass-surveillance authority Section 702 of the Foreign Intelligence Surveillance Act, the deputy director of the Federal Bureau of Investigation (FBI), Paul Abbate, sent an email imploring agents to “use” Section 702 to search the communications of Americans collected under this authority “to demonstrate why tools like this are essential” to the FBI’s mission.

In other words, an agency that has repeatedly abused this exact authority—with 3.4 million warrantless searches of Americans’ communications in 2021 alone—thinks that the answer to its misuse of mass surveillance of Americans is to do more of it, not less. And it signals that the FBI believes it should do more surveillance, not because of any pressing national security threat, but because the FBI has an image problem.

The American people should feel a fiery volcano of white hot rage over this revelation. During the recent fight over Section 702’s reauthorization, we all had to listen to the FBI and the rest of the Intelligence Community downplay their huge number of Section 702 abuses (but, never fear, they were fixed by drop-down menus!). The government also trotted out every monster of the week in incorrect arguments seeking to undermine the bipartisan push for crucial reforms. Ultimately, after fighting to a draw in the House, Congress bent to the government’s will: it not only failed to reform Section 702, but gave the government authority to use Section 702 in more cases.

Now, immediately after extracting this expanded power and fighting off sensible reforms, the FBI’s leadership is urging the agency to “continue to look for ways” to make more use of this controversial authority to surveil Americans, albeit with the fig leaf that it must be “legal.” And not because of an identifiable, pressing threat to national security, but to “demonstrate” the importance of domestic law enforcement accessing the pool of data collected via mass surveillance. This is an insult to everyone who cares about accountability, civil liberties, and our ability to have a private conversation online. It also raises the question of whether the FBI is interested in keeping us safe or in merely justifying its own increased powers. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside the United States. Section 702 prohibits the government from intentionally targeting Americans. But, because we live in a globalized world where Americans constantly communicate with people (and services) outside the United States, the government routinely acquires millions of innocent Americans’ communications “incidentally” under Section 702 surveillance. Not only does the government acquire these communications without a probable cause warrant; so long as it can make out some connection to FISA’s very broad definition of “foreign intelligence,” the government can then conduct warrantless “backdoor searches” of individual Americans’ incidentally collected communications. Section 702 creates an end run around the Constitution for the FBI, and with the Abbate memo, agents are being urged to use it as much as they can.

The recent reauthorization of Section 702 also expanded this mass surveillance authority still further, expanding in turn the FBI’s ability to exploit it. To start, it substantially increased the scope of entities that the government can require to turn over Americans’ data en masse under Section 702. This provision is written so broadly that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider, which could include landlords, maintenance people, and many others who routinely have access to your communications.

The reauthorization of Section 702 also expanded FISA’s already very broad definition of “foreign intelligence” to include counternarcotics: an unacceptable expansion of a national security authority to ordinary crime. Further, it allows the government to use Section 702 powers to vet hopeful immigrants and asylum seekers—a particularly dangerous authority which opens up this or future administrations to deny entry to individuals based on their private communications about politics, religion, sexuality, or gender identity.

Americans who care about privacy in the United States are essentially fighting a political battle in which the other side gets to make up the rules, the terrain…and even rewrite the laws of gravity if they want to. Politicians can tell us they want to keep people in the U.S. safe without doing anything to prevent that power from being abused, even if they know it will be. It’s about optics, politics, and security theater; not realistic and balanced claims of safety and privacy. The Abbate memo signals that the FBI is going to work hard to create better optics for itself so that it can continue spying in the future.   

No Country Should be Making Speech Rules for the World

It’s a simple proposition: no single country should be able to restrict speech across the entire internet. Any other approach invites a swift relay race to the bottom for online expression, giving governments and courts in countries with the weakest speech protections carte blanche to edit the internet.

Unfortunately, governments, including democracies that care about the rule of law, too often lose sight of this simple proposition. That’s why EFF, represented by Johnson Winter Slattery, has moved to intervene in support of X (formerly known as Twitter) in its legal challenge to a global takedown order from Australia’s eSafety Commissioner. The Commissioner ordered X and Meta to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post so Australian users couldn’t access it, but it declined to block it elsewhere. The Commissioner asked an Australian court to order a global takedown.

Our intervention calls the court’s attention to the important public interests at stake in this litigation, particularly for internet users who are not parties to the case but will nonetheless be affected by the precedent it sets. A ruling against X is effectively a declaration that an Australian court (or its eSafety Commissioner) can prevent internet users around the world from accessing something online, even if the law in their own country is quite different. In the United States, for example, the First Amendment guarantees that platforms generally have the right to decide what content they will host, and their users have a corollary right to receive it. 

We’ve seen this movie before. In Google v. Equustek, a company used a trade secret claim to persuade a Canadian court to order Google to delete search results linking to sites that contained allegedly infringing goods, not just from Google.ca but from all other Google domains, including Google.com and Google.co.uk. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld the order. The following year, a U.S. court held the ruling couldn’t be enforced against Google US.

The Australian takedown order also ignores international human rights standards, restricting global access to information without considering less speech-intrusive alternatives. In other words: the Commissioner used a sledgehammer to crack a nut. 

If one court can impose speech-restrictive rules on the entire internet—despite direct conflicts with the laws of foreign jurisdictions as well as international human rights principles—the norms and expectations of all internet users are at risk. We’re glad X is fighting back, and we hope the judge will recognize the eSafety regulator’s demand for what it is—a big step toward unchecked global censorship—and refuse to let Australia set another dangerous precedent.

Free Speech Around the World | EFFector 36.6

Let's gather around the campfire and tell tales of the latest happenings in the fight for privacy and free expression online. Take care in roasting your marshmallows while we share ways to protect your data from political campaigns seeking to target you; seek nominees for our annual EFF Awards; and call for immediate action in the case of activist Alaa Abd El Fattah.

As the fire burns out, know that you can stay up-to-date on these issues with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube.

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

What Can Go Wrong When Police Use AI to Write Reports?

Axon—the maker of widely used police body cameras and Tasers (and a company that keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative large language model system that reportedly takes audio from body-worn cameras and converts it into a narrative police report that officers can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, especially the marginalized communities already subject to a disproportionate share of police interactions in the United States.
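Axon hasn’t published Draft One’s internals, but products of this general shape share a common architecture: a speech-to-text stage feeds a generative model, whose draft then waits on a human sign-off. The sketch below is purely illustrative (every function is a stub of our own invention, not Axon’s code), but it shows why errors compound: each stage inherits the mistakes of the one before it, and the only safeguard is the diligence of the reviewing officer.

```python
# Purely illustrative sketch of a bodycam-audio-to-report pipeline.
# The stage names and stub behavior are our own assumptions;
# this is NOT Axon's actual implementation.

def transcribe(audio_file: str) -> str:
    """Stage 1 stand-in: speech-to-text. Mishearings, crosstalk, slang,
    and dialect errors enter the record here, before any human review."""
    return 'Officer: "Stop resisting!" Bystander: "He was not resisting."'

def draft_narrative(transcript: str) -> str:
    """Stage 2 stand-in: a generative model turns the transcript into
    report prose. Interpretive choices (literal vs. figurative speech,
    whose account gets foregrounded) enter here."""
    return "DRAFT NARRATIVE (machine-generated, unverified):\n" + transcript

def submit(draft: str, officer_edits: str = "") -> str:
    """Stage 3: the human gate the whole system leans on. Nothing in the
    pipeline can force a careful review instead of a rubber stamp."""
    return officer_edits or draft

if __name__ == "__main__":
    print(submit(draft_narrative(transcribe("incident_0423.wav"))))
```

Nothing in a pipeline like this can verify that the transcript, or the prose spun from it, matches what actually happened on the scene.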

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before. Grainy and shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or come with enhanced criminal charges. Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.


The public should be skeptical of a language algorithm's ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As we've learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with context and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately speak in mistruths or exaggerations to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were caught in a lie concerning what’s in the report, an officer could simply claim that they did not lie: the AI mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If the officer says aloud in a body camera video, “the suspect has a gun,” how would that translate into the software’s narrative final product? Would it interpret that as “I [the officer] saw the suspect produce a weapon” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between these renderings could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers adequately review for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on the results of a face recognition match without any follow-up investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police to contradict that version of events when it's in their interest to maintain the inaccuracy? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the software consistently produces narratives from audio that police do not like, will they edit them, scrap them, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ intense secrecy, combined with a frequent failure to comply with public records laws, how can the public, or any external agency, independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of a report were generated by AI and which by a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Speaking Freely: Nompilo Simanje

Nompilo Simanje is a lawyer by profession and the Africa Advocacy and Partnerships Lead at the International Press Institute (IPI). She leads the IPI Africa Program, which monitors and collects data on press freedom threats and violations across the continent, including threats to journalists’ safety and gendered attacks against journalists both online and offline, to inform evidence-based advocacy. Nompilo is an expert on the intersection of technology, the law, and human rights. She has years of experience in advocacy and capacity building aimed at promoting media freedom, freedom of expression, access to information, and the right to privacy. She also currently serves on the Advisory Board of the Global Forum on Cyber Expertise. Simanje is an alumna of the Open Internet for Democracy Leaders Program and the US State Department IVLP Program on Promoting Cybersecurity.

This interview has been edited for length and clarity.

York: What does free expression mean to you? 

For me, free expression or free speech is the capacity for one to be able to communicate their views and opinions without fear that there might be reprisals or repercussions for freely engaging in any conversation, whether on a personal matter or an issue of public interest.

York: What are some of the qualities that have made you passionate about free speech?

Being someone who works in the civil society sector, I think when I look at free speech and free expression, I view it as an avenue for the realization of several other rights. One key thing for me is that free expression encourages interactive dialogue, it encourages public dialogue, which is very important. Especially for democracy, but also for transparency and accountability. Being based in Africa, we are always having conversations around corruption, around accountability by government actors and public officials. And I feel that free expression is a vehicle for that, because it allows people to be able to question those that hold power and to criticize certain conduct by people that are in power. Those are some of the qualities that I feel are very important for me when I think about free expression. It enables transparency and accountability, but also holding those in power to account, which is something I believe is very important for democracies in Africa. 

York: So you work all around the African continent. Broadly speaking, what are some of the biggest online threats you’re seeing today?

The digital age has been quite a revolutionary development, especially when you think about free expression. I always talk about this when I engage on the topic of digital rights: it has opened the avenue for people to communicate across boundaries, across borders, across countries. But at the same time, the threats and risks have become equally huge. As part of the work that I have been doing, there are a few key things that I’ve seen online. One would be the issue of legislation—that countries have increased or upscaled their regulation of the online space. And one of the biggest threats for me has been lawfare: seeing how countries have been implementing old and new laws to undermine free expression online, for example, cybercrime laws or existing criminal or penal codes. I’ve seen that increasingly happening in Africa.

Other key things that come to mind are online harassment, which is happening in various forms. Just sometime last year at the 77th Session of the ACHPR (African Commission on Human and Peoples' Rights) we hosted a side event on the online safety of female journalists in Africa, and there were so many cases shared about how female journalists are facing online harassment. One big issue discussed was targeted disinformation, where individuals spread false information about a certain individual as a way of discrediting them, undermining them, or just attempting to silence them and ensure that they don’t communicate freely online. But also sometimes online harassment in the form of doxxing, where personal details are shared online: someone’s address, someone’s email. And people are mobilized to attack that person. I’ve seen all those cases happening, and I feel that online harassment, especially toward female journalists and politicians, continues to be one of the biggest threats to free expression in the region. In addition, of course, to what state actors are doing.

I think also, generally, what I’m seeing as part of the regulation aspect is sometimes even the suspension of news websites, where journalists are using those platforms—you know, like podcasts, Twitter Spaces—to express themselves freely. So this increase in regulation is one of the key things I feel continues to threaten online expression, particularly in the region.

York: You also work globally and serve on a couple of advisory boards. I’m curious, coming from an African perspective, how do you see things like the Cybercrime Treaty or other international developments impacting the nations that you work in?

It’s a brilliant question because the Ad Hoc Committee for the UN Cybercrime Treaty just recently met. One of the aspects I’ve noticed is that African civil society actors are sometimes not meaningfully participating in global processes. And as a result, they don’t get to share their experiences and reflect on how some developments at the global level will impact the region.

Just taking the example you shared about the UN Cybercrime Treaty: as part of my role at IPI, we actually submitted a letter to the Ad Hoc Committee with about 49 other civil society actors within Africa, highlighting that if this treaty were adopted as currently crafted, with a wide scope of crimes and minimal human rights safeguards, it would actually undermine free expression. And this was informed by our experiences with cybercrime laws in the region. We have seen how some authoritarian governments in the region have been using cybercrime laws. So imagine having a global treaty or a global cybercrime convention. It can become a tool for other authoritarian governments to justify conduct targeted at undermining free expression. Some examples include criminalizing inciting public violence or criminalizing publishing falsehoods. We have seen that consistently in several countries, and how those laws have been used to undermine expression. I definitely think that whenever there are global engagements about conventions that can undermine fundamental rights, it’s very important for Africa to be represented, particularly civil society, because civil society is there to promote human rights and ensure that human rights are safeguarded.

Also, there have been other key discussions happening, for example, with the open-ended working group on ICTs. We’ve had conversations about cyber capacity-building in the region and how that would look for Africa, where internet penetration is not at its highest and there are already digital divides that keep everyone from being able to express themselves freely online. I think all those deliberations need to be taken into account and contextualized. My opinion is that when I look at global processes and think about Africa, it’s important for civil society actors and key stakeholders to contribute meaningfully to those processes, but also for us to contextualize some of those discussions and deliberate on how they will potentially impact us. Even when I think about the Global Digital Compact and the issues the Compact seeks to address, we need to contextualize them against our experiences with countries in the region that have ongoing conflicts and countries that are led by military regimes—especially in West Africa. All those issues need to be taken into account when we deliberate about global conventions or global policies. So that’s how I’ve been approaching these conversations around global processes: trying to contextualize them based on what’s happening in the region and what our experiences have been with similar legislation and policies.

York: I’m also really curious, has your work touched on issues of content moderation?

Yes, but not broadly, because our interaction with the platforms has been quite minimal. But, yes, we have engaged platforms before. I’ll give you the example of Somalia. There have been so many cases reported by our partners at the Somali Journalists Syndicate where individual accounts of journalists have been suspended, sometimes permanently, or taken down, simply because political sympathizers of the government consistently report those accounts for expressing dissenting views. Or state actors have reached out to the platforms and asked them to intervene and suspend either pages or individual accounts. So we’ve had conversations with the platforms, and we have issued public statements to highlight that, as far as content moderation is concerned, it is very important for the platforms to be transparent about requests they’re receiving from governments, and also to be deliberate as far as media freedom is concerned, especially where the content is news disseminated by media outlets or by pages and accounts used by journalists. Because in some countries you see governments consistently trying to ensure that journalists or media outlets cannot fully utilize the online space. So that’s the angle from which we have interacted with the platforms on content moderation: ensuring that as they undertake their work they prioritize media freedom and journalists, but also that they understand the operating context, that there are countries that are quite authoritarian where dissenting voices are being targeted. So we always try to engage the platforms whenever we get an opportunity, to raise awareness where platforms are suspending accounts or taking down content that genuinely constitutes protected expression.

York: Did you have any formative experiences that helped shape your views on freedom of expression? 

Funny story actually. When I was in high school I held certain leadership positions: I was head girl of my high school, and I also served in Junior Parliament. That was an institution put on by the Youth Council where young people in high school form a shadow Parliament representing different constituencies across the country. So, of course, that meant being in public spaces and my identity generally being known outside my circles. And that opened an avenue for me to be targeted by trolls online.

At some point when I was in high school people posted some defamatory, false information about me on an online platform. And over the years I’ve seen that post still there, still in existence. When that happened, I was in high school, I was still a child. But I was interacting on Facebook, you know, we have used Facebook for so many years, that’s the platform I think so many of us have been most familiar with from the time we were still kids. When this post was put up it was posted through a certain page that was a tabloid of sorts. And no one knew who was behind that page, no one knew who was the administrator of that page. What that meant for me was there was no recourse. Because I didn’t even know who was behind this post, who posted this defamatory and false information about me. 

I think from there it really triggered an interest in me about regulation of free expression online. How do you approach issues around anonymity, and how far can we go in terms of protecting free expression online in instances where, indeed, the rights of other people are also being undermined? It really helped to shape my thoughts around regulation of social media and regulation of content online. So even in the work I’ve continued to do in my adult life around digital rights literacy, I’ve really tried to emphasize digital citizenship, where the key focus is ensuring that we can express ourselves freely while respecting the rights of others. Which is why I strongly condemn hate speech. Which is why I strongly condemn targeted attacks, for instance, on female politicians and female journalists. Because I know that while we can freely express ourselves, there are certain limitations or boundaries that we shouldn’t cross. And I think I learned that from experiencing that targeted attack on me online.

York: Is there anything I haven’t touched on yet that you’d like to talk about? 

I’d like to maybe just speak briefly about the implications of free expression being undermined, especially in the online space. And I’m emphasizing this because we are in the digital age, where the online space has really provided a platform for the full realization of so many fundamental rights. One of the key things I’ve seen is an increase in self-censorship. When individuals are being arrested over their tweets and Facebook posts and news websites are being suspended, people censor themselves. There’s also limited participation in public dialogue. We have so many elections happening in 2024, and we’ve had recent elections in the region as well. Nigeria was a big election, and the DRC was another. What I’ve been seeing is really limited participation, especially by high-risk groups like women and LGBTQI communities, especially where they’ve been targeted through legislation, as in Uganda. So there’s been limited participation and interactive dialogue in the region because of all these various developments that have been happening.

Also, one aspect that comes to mind for me is the correlation between free expression and freedom of assembly and association. Because we are also interacting with groups and other like-minded people in the online space. So while we are freely expressing, the online space is also a platform for assembly and association. And some people are also being robbed of that experience, of freely associating online, because of the threats or the attacks that have been targeting free expression. I think it’s also important for Africa to think about these implications—that when you’re targeting free expression, you’re also targeting other fundamental rights. And I think that’s quite important for me to emphasize as part of this conversation. 

York: Who is your free speech hero? Someone who has really inspired you? 

I haven’t really thought about that actually! I don’t think I have a specific person in mind, but I generally just appreciate everyone who freely expresses their mind, especially on Twitter, because Twitter can be quite brutal at times. But there are several individuals that I look at and really admire for their tenacity in continuing to engage on the platforms even when they’re constantly being targeted. From a Zimbabwean perspective, I would highlight that I’ve seen several female politicians in Zimbabwe being targeted. Actually, I will mention a specific person: there’s a female politician in Zimbabwe, Fadzayi Mahere, who is also an advocate. I’ll mention her as a free speech hero, because every time I speak about online attacks or online gender-based violence in digital rights trainings, I always mention her. That’s because I’ve seen how she has been able to stand against so many coordinated attacks, on a political front and on a personal front. Just to highlight: last year she tweeted a video that had been circulating and trending online about a case where police had allegedly assaulted a woman who was carrying a child on her back. She was actually arrested, charged, and convicted for, I think, “publishing falsehoods”; there’s a provision in the criminal law code that I think reads something like “publishing falsehoods to undermine public authority or the police service.” So I definitely think she is a press freedom hero. Her story is quite an interesting one to follow in terms of her experiences in Zimbabwe as a young lawyer, a politician, and a female politician at that.

Podcast Episode: Building a Tactile Internet

Blind and low-vision people have experienced remarkable gains in information literacy because of digital technologies, like being able to access an online library offering more than 1.2 million books that can be translated into text-to-speech or digital Braille. But it can be a lot harder to come by an accessible map of a neighborhood they want to visit, or any simple diagram, due to the limited availability of tactile graphics equipment and inaccessible design and publishing practices.


(You can also find this episode on the Internet Archive and on YouTube.)

Chancey Fleet wants a technological future that’s more organically attuned to people’s needs, which requires including people with disabilities in every step of the development and deployment process. She speaks with EFF’s Cindy Cohn and Jason Kelley about building an internet that’s just and useful for all, and why this must include giving blind and low-vision people the discretion to decide when and how to engage artificial intelligence tools to solve accessibility problems and surmount barriers. 

In this episode you’ll learn about: 

  • The importance of creating an internet that’s not text-only, but that incorporates tactile images and other technology to give everyone a richer, more fulfilling experience. 
  • Why AI-powered visual description apps still need human auditing. 
  • How inclusiveness in tech development is always a work in progress. 
  • Why we must prepare people with the self-confidence, literacy, and low-tech skills they need to get everything they can out of even the most optimally designed technology. 
  • Making it easier for everyone to travel the two-way street between enjoyment and productivity online. 

Chancey Fleet’s writing, organizing and advocacy explores how cloud-connected accessibility tools benefit and harm, empower and expose communities of disability. She is the Assistive Technology Coordinator at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library, where she founded and maintains the Dimensions Project, a free open lab for the exploration and creation of accessible images, models and data representations through tactile graphics, 3D models and nonvisual approaches to coding, CAD and “visual” arts. She is a former fellow and current affiliate-in-residence at Data & Society; she is president of the National Federation of the Blind’s Assistive Technology Trainers Division; and she was recognized as a 2017 Library Journal Mover and Shaker. 


 What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

CHANCEY FLEET
The fact is, as I see it, that if you are presented with what seems on a quick read like good enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete. What I've already noticed is blind people in droves dumping AI-generated descriptions of personal, sentimental images onto social media, and there is a certain hyper-normative quality to the language. Any scene that contains a child or a dog is heartwarming. Any sunset or sunrise is vibrant. Anything with a couch and a lamp is calm or cozy. Idiosyncrasies are left by the wayside.

Unflattering little aspects of an image often go unremarked upon, and I feel like I'm being served some Ikea pressboard of reality. And it is so much better than anything we've had before on demand, without having to involve a sighted human being. And it's good enough to mail, kind of like a Hallmark card. But do I want the totality of digital description online to slide into this hyper-normative, serene, anodyne description? I do not. I think that we need to do something about it.

CINDY COHN
That's Chancey Fleet describing one of the problems that has arisen as AI is increasingly used in assistive technologies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we start to get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping into the fight when they DO go wrong. But this show is about optimism, hope and bright ideas for the future.

According to a National Health Interview Survey from 2018, more than 32 million Americans reported that they had vision loss, including blindness. And as our population continues to age, this number only increases. And a big part of fixing the internet means fixing it so that it works properly for everyone who needs and wants to use it – blind, sighted, and everyone in between.

JASON KELLEY
Our guest today is Chancey Fleet. She is the Assistive Technology Coordinator for the New York Public Library, where she teaches people how to use assistive technology to make their lives easier and more accessible. She’s also the president of the Assistive Technology Trainers Division of the National Federation of the Blind. 

CINDY COHN
We started our conversation as we often do – by asking Chancey what the world could be like if we started getting it right for blind and low vision people. 

CHANCEY FLEET
The unifying feature of rightness for blind and low vision folks is that we encounter a digital commons that plays to our strengths, and that means that it's easy for us to find information that we can access and understand. That might mean that web content always has semantic structure that includes things like headings for navigation. 

But it also includes things that we don't have much of right now, like a non-visual way to access maps and diagrams and images, because, of course, the internet hasn't been text-only for everyone else in a really long time.

I think getting the internet right also means that we're able to find each other and build community, because we're a really low-incidence disability. So odds are your colleague, your neighbor, your family members aren't blind or low-vision, and so we really have to learn and produce knowledge and circulate knowledge with each other. And when the internet gets it right, that's something that's easy for us to do. 

CINDY COHN
I think that's so right. And it's honestly consistent with, I think, what every community wants, right? I mean, the internet's highest and best use is to connect us to the people we wanna be connected to. And the way that it works best is if the people who are the users of it, the people who are relying on it, have not just a voice but a role in how this works.

I've heard you talk about that in the context of what you call ‘ghostwritten code.’ Do you wanna explain what that is? Am I right? I think that's one of the things that has concerned you.

CHANCEY FLEET
Yeah, you are right. A lot of people who work in design and development are used to thinking of blind and disabled people in terms of user stories and personas, and they may know on paper what the web content accessibility guidelines, for instance, say that a blind or low vision user or a keyboard-only user, or a switch user needs. The problems crop up when they interpret the concrete aspects of those guidelines without having a lived experience that leads them to understand usability in the real world.

I can give you one example. A few years ago, Google rolled out a transcribe feature within Google Translate, which I was personally super excited about. And by the way, I'm a refreshable Braille user, which means I use a Braille display with my iPhone. And if you were running VoiceOver, the screen reader for iPhone, when you launched the transcribe feature, it actually scolded you that it would not proceed, that it would not transcribe, until you plugged in headphones, because well-meaning developers and designers thought, well, VoiceOver users have phones that talk, and if those phones are talking, it's going to ruin the transcription, so we'll just prevent that from happening. They didn't know about me. They didn't know about refreshable Braille users or users that might have another way to use VoiceOver that didn't involve speech out loud.

And so that, I guess you could call it a bug, I would call it a service denial, was around for a few weeks until our community communicated back about it, and if there had been blind people in the room or Braille users in the room, that would've never happened.

JASON KELLEY
I think this will be really interesting and useful for the designers at EFF who think a lot in user personas and also about accessibility. And I think just hearing what happens when you get it wrong, and how simple the mistake can be, is really useful for folks thinking about inclusion, and about how essential it is to make sure there's more in-depth testing and better personas, as you're saying. 

I wanna talk a little bit about the variety of things you brought up in your opening salvo, which I think we're gonna cover a lot of. One of the points you've written and talked about is tactile graphics, and something that's called the problem of image poverty online.

And that basically, as you mentioned, the internet is a primarily text-based experience for blind and low-vision users. But there are these tools that, in a better future, will be more accessible, both available and usable and effective. And I wonder if you could talk about some of those tools like tablets and 3D printers and things like that.

CHANCEY FLEET
So it's wild to me the way that our access to information as blind folks has evolved given the tools that we've had. So, since the eighties or nineties we've had Braille embossers that are also capable of creating tactile graphics, which is a fancy way to say raised drawings.

A graphics-capable embosser can emboss up to a hundred dots per inch. So if you look at it visually, it's a bit pixelated, but it approaches the limits of tactile perception. And in this way, we can experience media that includes maybe Braille in the form of labels, but also different line types: dotted lines, dashed lines, textured infills.

Tactile design is a little bit different from visual design because our perceptual acuity is lower. It's good to scale things up. And it's good to declutter items. We may separate layers of information out to separate graphics. If Braille were print, it would be a thirty-six point font, so we use abbreviations liberally when we need to squeeze some braille onto an image.

And of course, we can't use color to communicate anything semantic. So when the idea of a red line or a blue line goes away we start thinking about a solid line versus a dashed or dotted line. When we think about a pie chart, we think about maybe textures or labels in place of colors. But what's interesting to me is that although tactile graphics equipment has been on the market since at least the eighties, probably someone will come along and correct me that it's even sooner than that.

Most of that equipment is on the wrong side of an institutional locked door, so it belongs to a disability services office in a university. It belongs to the makers of standardized tests. It belongs to publishers. I've often heard my library patrons say something along the lines of, oh yeah, there was a graphics embosser in my school, but I never got to touch it, I never got to use it. 

Sometimes the software that's used to produce tactile graphics is, in itself, inaccessible. And so I think blind people have experienced pretty remarkable gains in general in regard to our information literacy because of digital technologies and the internet. For example, I can go to Bookshare.org, which is an online library for people with print disabilities and have my choice of a million books right now.

And those can automatically be translated to text-to-speech or to digital braille. But if I want a map of the neighborhood that I'm going to visit tomorrow, or if I want a glimpse of how electoral races play out, that can be really hard to come by. And I think it is a combination of the limited availability of tactile graphics equipment, inaccessibility of design and publishing practices for tactile graphics, and then this sort of vicious circular lack of demand that happens when people don't have access. 

When I ask most blind people, they'll say that they've maybe encountered two or three tactile graphics in the past year, maybe less. A lot of us got more than that during our K-12 instruction. But what I find, at least for myself, is that when tactile graphics are so strongly associated with standardized testing and homework and never associated with my own curiosity or fun or playfulness or exploration, for a long time, that actually dampened down my desire to experience tactile graphics.

And so, if I can be so bold as to speak for the community for a second, most of us would say that yes, we have the right to an accessible web. Yes, we have the right to digital text. I think far fewer of us are comfortable saying, or understand the power of saying, that we also have a right to images. And so in the best possible version of the internet that I imagine, we have three things. We have tactile graphics equipment that is bought more frequently, so there are economies of scale and the prices come down. We have tactile design and graphics design programs that are more accessible than what's on the market right now. And critically, we have enough access to tactile graphics online that people can find the kind of information that engages and compels them. And within 10 years or so, people are saying: we don't live in a text-only world, images aren't inherently visual, they are spatial, and we have a right to them.

JASON KELLEY
I read a piece that you had written about the importance of data visualizations during the pandemic, and how important it was for that flatten-the-curve graph to be seen, or touched in this case, by as many people as possible. That really struck me. But I also love this idea that we shouldn't have to get these tools only because they're necessary, but also because people deserve to be able to enjoy the experience of the internet.

CHANCEY FLEET
Right, and you never know when enjoyment is going to lead to something productive, or when something productive you're doing spins out into enjoyment. Somebody sent me a book of tactile origami diagrams. It's a four-volume book with maybe 40 models in it, and I've been working through them all. I can do almost all of them now, and it's really hard as a blind person to go online and find origami instructions that make any sense from an accessibility perspective.

There is a wonderful website called AccessOrigami.com. Lindy van der Merwe out of South Africa does great descriptive origami instruction. So it's all text directing you step by step by step. But the thing is, I'm a spatial thinker, what you might think of as a visual thinker, and so I can get more out of a diagram that's showing me where to flip dot A to dot B than I can in reading three paragraphs. It's faster, it's more fluid, it's more fun. And so I treasure this book, and unfortunately every other blind person I show it to also treasures it and can't have it 'cause I've got one copy. And I just imagine a world in which, when there's a diagram on screen, we can use some kind of process to re-render it in a more optimal format for tactile exploration. That might mean AI or machine learning, and we can talk a little bit about that later. But a lot of what we learn about what we're good at, what we enjoy, and what we want more of in life, we do find online these days, and I want to be able to dive into those moments of curiosity and interest without having to first engineer a seven-step plan to get access to whatever it is that's on my screen.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Chancey Fleet.

CINDY COHN
So let's talk a little bit about AI and I'd love to hear your perspective on where AI is gonna be helpful and where we ought to be cautious.

CHANCEY FLEET
So if you are blind and reasonably online, and you have a smartphone and you're somebody that's comfortable enough with your smartphone that you download apps on a discretionary basis, there's a good chance that you've heard of a new feature in the app Be My Eyes, called Be My AI: a describer powered by ChatGPT with computer vision.

You aim your camera at something, wait a few seconds, and a fairly rich description comes back. It's more detailed and nuanced than anything that AI or machine learning has delivered before, and so it strikes a lot of us as transformational and/or uncanny. It allows us to grab glimpses of what I would call a hypothesized visual world, because as we all know, these AIs make up stories out of whole cloth, include details that aren't there, and skip details that to the average human observer would be obviously relevant. So I can know that the description I'm getting is probably not prioritized and detailed in quite the same way that a human describer would approach it.

So what's interesting to me is that, since interconnected blind folks have such a dense social graph, we are all sort of diving into this together and advising each other on what's going well and what's not. And I think that a lot of us are deriving authentic value from this experience, as bounded by caveats as it is. At the same time, I fear that when this technology scales, which it will if other forces don't counteract it, it may become a convincing enough business case that organizations and institutions skip human authoring of alt text to describe images online and substitute these rich-seeming descriptions generated by an AI, even if that's done in such a way that a human auditor can go in and make changes.

The fact is, as I see it, that if you are presented with what seems on a quick read like good enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete. 

CINDY COHN
I think what I hear in the answer is that it can be an augment to the humans doing the describing, but not a replacement for them, and that's where the "but it's cheaper" part comes in. Right? And I think keeping our North Star on using these systems in ways that assist people rather than replace people is coming up over and over again in the conversations around AI, and I'm hearing it in what you're saying as well.

CHANCEY FLEET
Absolutely, and let me say as a positive it is both my due diligence as an educator and my personal joy to experiment with moments where AI technologies can make it easier for me to find information or learn things. For example, if I wanna get a quick visual description of the Bluebird trains that the MTA used to run, that's a question that I might ask AI.

I never would've bothered a human being with it. It was not central enough. But if I'm reading something and I want a quick visual description to fill it in, I'll do that.

I also really love using AI tools to look up questions about different artistic or architectural styles, or even questions about code.

I'm studying Python right now, because when I go to look for information online on these subjects, often I'm finding websites that are riddled with a lack of semantic structure, that have graphics that are totally unlabeled, that have carousels that are hard for screen reader users to navigate. And so one really powerful and compelling thing that current conversational AI offers is that it lives in a text box, and it won't violate the conventions of a chat by throwing a bunch of unwanted visual or structural clutter my way.

And when I just want an answer and I'm willing to grant myself that I'm going to have to live with the consequences of trusting that answer, or do some lateral reference, do some double checking, it can be worth my while. And in the best possible world moving forward, I'd like us to be able to harness that efficiency and that facility that conversational AI has for avoiding the hyper visual in a way that empowers us, but doesn't foreclose opportunities to find things out in other ways.

CINDY COHN
As you're describing it, I'm envisioning, you know, my drunk friend, right? They might do okay telling me stuff, but I wouldn't rely on them for stuff that really matters.

CHANCEY FLEET
Exactly.

CINDY COHN
You've also talked a little bit about the role of data privacy and consent, and the special concerns that blind people have around some of the technologies that are offered to them, and about making sure that consent is real. I'd love for you to talk a little bit about that.

CHANCEY FLEET
When AI is deployed on the server side to fix accessibility problems, in lieu of baking accessibility in from the ground up in a website or an application, that does a couple of things. It avoids changing the culture around accessibility at the customer company itself. It also involves an ongoing cost and technology debt owed to the overlay company that an organization is using, and it builds in the need for ongoing supervision of the AI. So in a lot of ways, I think that that's not optimal. What is optimal is for developers and designers to use AI tools to flag issues in need of human remediation, and to use AI tools for education, to speed up their immersion into accessibility and usability concepts.

You know, AI can be used to make short work of things that used to take a little bit more time. When it comes to deploying AI tools to solve accessibility problems, I think that that is a suite of tools that is best left to the discretion of the user. So we can decide, on the user side, for example, when to turn on a browser extension that tries to make those remediations. Because when they're made for us at scale, that doesn't happen with our consent and it can have a lot of collateral impacts that organizations might not expect.

JASON KELLEY
The points you're making are about being involved in different parts of the process, right? It's clear that the people who use these tools, the people these tools are actually designed for, should be able to decide when to deploy them.

And it's also clear that they should be more involved, as you've mentioned a few times, in the creation. And I wanted to talk a little bit about that idea of inclusion, because it's sort of how we get to a place where consent is actually, truly given. 

And it's also how we get to a place where the tools that are created do what they're supposed to do, and the companies you're describing build the web the way it should be built, so that people can access it.

We have to have inclusion in every step of the process to get to that place where all of these tools, and the web, and everything we're talking about actually work for everyone. Is inclusion across the spectrum a solution that you see as well?

CHANCEY FLEET
I would say that inclusion is never a solution because inclusion is a practice and a process. It's something that's never done. It's never achieved, and it's never comprehensive and perfect. 

What I see as my role as an educator, when it comes to inclusion, is meeting people where they are: trying to raise awareness, among library patrons and everyone else I serve, about what technologies are available and the costs and benefits of each, and helping people road-map a path from their goals and their intentions to achieving the things that they want to do.

And so I think of inclusion as sort of a guiding frame and a constant set of questions that I ask myself about what I'm noticing, what I may not be noticing, what I might be missing, who's coming in, for example, for tech lessons, versus who we're not reaching. And how the goals of the people I serve might differ from my goals for them.

And it's all kind of a spider web of things that add up to inclusion as far as I'm concerned.

CINDY COHN
I like that framing of inclusion as kind of a process rather than an end state. And I think that framing is good because I think it really moves away from the checkbox kind of approach to things like, you know, did we get the disabled person in the room? Check! 

Everybody has different goals and different things that work for them and there isn't just one box that can be checked for a lot of these kinds of things.

CHANCEY FLEET
Blind library patrons and blind people in general are as diverse as any library patrons or people in general. And that impacts our literacy levels. It impacts our thoughts and the thoughts of our loved ones about disability. It impacts our educational attainment, and especially for those of us who lose our vision later in life, it impacts how we interact with systems and services.

I would venture to say that at this time in the U.S, if you lose your vision as an adult, or if you grow up blind in a school system, the quality of literacy and travel and independent living instruction you receive is heavily dependent on the quality of the systems and infrastructure around you, who you know, and who you know who is primed to be a disability advocate or a mentor.

And I see such different outcomes when it comes to technology based on those things. And so we can't talk about a best possible world in the technology sphere without also imagining a world that prepares people with the self-confidence, the literacy skills, and the supports for developing low-tech skills that are necessary to get everything that one can get out of even the most optimally designed technology. 

A step-by-step app for walking directions can be as perfect as it gets. But if the person you are equipping with that app is afraid to step out of their front door and start moving their cane back and forth, listening to the traffic, and trusting their reflexes and their instincts, because they have never been taught how to trust those things, the app won't be used, and there will be people who are unreached. Technology can only succeed to the extent that the people using it are set up to succeed, and I think that that is where a lot of our toughest work resides.

CINDY COHN
We're trying to fix the internet here, but the internet rests on the rest of the world. And if the rest of the world isn't setting people up for success, technology can't swoop in and solve a lot of these problems.

It needs to rest upon a solid foundation. I think that's just a wonderful place to close, because all of us sit on top of what John Perry Barlow called meatspace, right? And if meatspace isn't serving us, then the digital world can't solve the problems that are not digital.

JASON KELLEY
I would have loved to talk to Chancey for another hour. That was fantastic.

CINDY COHN
Yeah, that was a really fun conversation. And I have to say, I just love the idea of the internet going tactile. Right now it's all very visual, but we have the technology to make it tactile, so that maps and other things that are pretty hard for people with low vision or blindness to navigate now could become accessible. Some of the tools she talked about really could make the internet something you could feel as well as see.

JASON KELLEY
Yeah, I didn't know before talking to her that these tools even existed. And when you hear about it, you're like, oh, of course they do. But it was clear from what she said that a lot of people don't have access to them. The tools are relatively new, and they need to be spread out more. But when that happens, hopefully it does happen, it then requires us to rethink how the internet is built in some ways, in terms of the hierarchy of text, what kinds of graphics exist, and protocols for converting that information into tactile experiences for people. 

CINDY COHN
Yeah, I think so. And it does sit upon something that she mentioned. She said these machines exist and have existed for a long time, but they're mainly in libraries or other places where people can't use them in their everyday lives. And one of the things that we ended with in the conversation was really important, which is that we're all sitting upon a society that doesn't make a lot of these tools as widely available as they need to be. 

And, you know, the good news in that is that the hard problem has been solved, which is how do you build a machine like this. The problem that we ought to be able to address as a society is how we make it available much more broadly. I use this quote a lot, but: the future is here, it's just not evenly distributed. That seemed really, really clear in the way she talked about these tools, which most blind people have used once or twice in school but then don't get to use or make part of their everyday lives.

JASON KELLEY
Yeah. The way I heard this was that we have this problem solved at an institutional level, where you can access these tools at an institution, but not at the individual level. And it's really helpful, and optimistic, to hear that they could exist in people's homes if we can just get that to happen. And I think what was really rare for this conversation is that, like you said, we actually do have the technology to do these things. A lot of times we're talking about what we need to improve or change about the technology, and how that technology doesn't quite exist or will always be problematic. In this case, sure, the technology can always get better, but it sounds like we're actually at a point where we have a lot of the problems solved, whether it's using tactile tablets, or creating ways for people to use technology to guide each other through places, whether that's through a person, with Be My Eyes, or even in some cases an AI, with the Be My AI version of that.

But we just haven't gotten to the point where those things work for everyone, and where everyone has a level of technological proficiency that lets them use those things. And that's something that clearly we'll need to work on in the future.

CINDY COHN
Yeah, but she also pointed out the work that needs to be done to make sure that we're continuing to build the tech that actually serves this community. She talked about ghostwritten code and things like that, where people who don't have the lived experience are writing and building things based upon what they think people who are blind might want. So, on the one hand, there's good news, because a lot of really good technology already exists. But I think she also didn't let us off the hook as a society about something we see all across the board, which is that we need the direct input of the people who are going to be using the tools in the building of the tools, lest we end up on a whole other path, with things other than what people actually need. And, you know, what did they say? The lessons will be repeated until they are learned. This is one of those things where, over and over again, we find that the need for people who are building technologies to not just talk to the people who are going to be using them, but to really embed those people in the development, is one of the ways we stay true to our goal, which is to build stuff that will actually be useful to people.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some limited-edition merch like t-shirts or buttons or stickers, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Probably Shouldn't by J.Lang, commonGround by airtone, and Klaus by Skill_Borrower.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN

And I’m Cindy Cohn.

Add Bluetooth to the Long List of Border Surveillance Technologies

A new report from news outlet NOTUS shows that at least two Texas counties along the U.S.-Mexico border have purchased a product that would allow law enforcement to track devices that emit Bluetooth signals, including cell phones, smartwatches, wireless earbuds, and car entertainment systems. This incredibly personal mode of tracking is the latest layer of surveillance infrastructure along the U.S.-Mexico border—a region where communities are not only exposed to a tremendous amount of constant monitoring, but which also serves as a laboratory where law enforcement agencies at all levels of government test new technologies.

The product now being deployed in Texas, called TraffiCatch, can detect Wi-Fi and Bluetooth signals in moving cars to track them. Webb County, which includes Laredo, has had TraffiCatch technology since at least 2019, according to GovSpend procurement data. Val Verde County, which includes Del Rio, approved the technology in 2022. 

This data collection is possible because all Bluetooth devices regularly broadcast a Bluetooth Device Address. This address can be either a public address or a random address. Public addresses don’t change for the lifetime of the device, making them the easiest to track. Random addresses are more common and offer multiple levels of privacy, but for the most part they change regularly (as is the case with most modern smartphones and products like AirTags). Bluetooth products with random addresses would be hard for a device that hasn’t paired with them to track. But if the tracked person is also carrying a Bluetooth device that has a public address, or if tracking devices are placed close enough to each other that a device is seen multiple times before it changes its address, random addresses could be correlated with that person over long periods of time.
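To make that distinction concrete, here is a minimal Python sketch, ours and not TraffiCatch's, of how the subtype of an address the radio layer has already flagged as random can be read from its two most significant bits, as defined in the Bluetooth Core Specification (Vol 6, Part B). The rotation interval mentioned in the comments is typical of modern phones rather than guaranteed.

```python
# Minimal sketch: classifying a Bluetooth LE *random* address by its own
# bits. Whether an address is public or random in the first place is
# signaled in the advertising packet header, not in the address itself.

def classify_random_ble_address(addr: str) -> str:
    """addr is a 48-bit address like '4A:12:34:56:78:9A', already
    flagged as 'random' by the link layer."""
    msb = int(addr.split(":")[0], 16)   # most significant byte
    top_two_bits = (msb >> 6) & 0b11    # the two highest bits
    return {
        0b11: "static random (fixed until device reset - easy to track)",
        0b01: "resolvable private (rotates, typically every ~15 minutes)",
        0b00: "non-resolvable private (rotates, never resolvable)",
    }.get(top_two_bits, "reserved/invalid")

print(classify_random_ble_address("F8:12:34:56:78:9A"))  # static random
print(classify_random_ble_address("4A:12:34:56:78:9A"))  # resolvable private
```

The practical upshot: only the two private subtypes frustrate passive tracking, and only for as long as the rotation holds and nothing else, like a companion device with a stable address, ties the sightings together.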

It is unclear whether TraffiCatch is doing this sort of advanced analysis and correlation, and how effective it would be at tracking most modern Bluetooth devices.
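To make the correlation idea concrete, here is a minimal sketch in Python, using hypothetical sensor data and deliberately simplified matching logic; it illustrates the general linkage technique only and is not a description of TraffiCatch’s actual implementation. The idea: a rotating random address that is unlinkable on its own becomes linkable once it is repeatedly seen alongside a stable public address, such as a car’s head unit.

from collections import defaultdict

# Hypothetical sightings: (sensor_id, timestamp_seconds, address, address_is_public).
# One stable public address (say, a car head unit) travels with a phone whose
# random address rotates between the two sensor locations.
sightings = [
    ("sensor_A", 100, "AA:11:22:33:44:55", True),   # car head unit, public address
    ("sensor_A", 101, "5E:10:9F:2C:8B:01", False),  # phone, random address, rotation 1
    ("sensor_B", 460, "AA:11:22:33:44:55", True),   # same car, seen at the next sensor
    ("sensor_B", 462, "6A:77:31:0D:EE:42", False),  # same phone, address has rotated
]

WINDOW = 5  # seconds; addresses seen this close together are treated as co-traveling

def correlate(sightings, window=WINDOW):
    """Map each stable public address to the random addresses seen near it."""
    linked = defaultdict(set)
    for sensor, t, addr, is_public in sightings:
        if not is_public:
            continue
        for sensor2, t2, addr2, is_public2 in sightings:
            if sensor2 == sensor and not is_public2 and abs(t2 - t) <= window:
                linked[addr].add((sensor2, t2, addr2))
    return linked

for anchor, track in correlate(sightings).items():
    # Both random addresses, unlinkable on their own, are now tied to one
    # traveling cluster anchored by the public address.
    print(anchor, "->", sorted(track))

A real deployment would need fuzzier time matching, confidence scoring across many sightings, and handling of address-rotation timing, but the underlying linkage idea is the same.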

According to TraffiCatch’s manufacturer, Jenoptik, this data derived from Bluetooth is also combined with data collected from automated license plate readers (ALPRs), another form of vehicle tracking technology placed along roads and highways by federal, state, and local law enforcement throughout the Texas border region. ALPRs are a well-understood vehicle tracking technology, but the addition of Bluetooth tracking may allow law enforcement to track individuals even if they switch vehicles.

This mirrors what we already know about how Immigration and Customs Enforcement (ICE) has been using cell-site simulators (CSSs). Also known as Stingrays or IMSI catchers, CSSs are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. In 2023, the Department of Homeland Security’s Inspector General released a troubling report detailing how federal agencies like ICE, its subcomponent Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using CSSs without proper authorization and in violation of the law. Specifically, the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSSs and failed to obtain the special orders required before using these types of surveillance devices.

Law enforcement agencies along the border can pour money into overlapping systems of surveillance that monitor entire border communities thanks in part to Operation Stonegarden (OPSG), a Department of Homeland Security (DHS) grant program that rewards state and local police for collaborating in border security initiatives. DHS doled out $90 million in OPSG funding in 2023, $37 million of which went to Texas agencies. These programs are especially alarming to human rights advocates given recent legislation passed in Texas that allows local and state law enforcement to take immigration enforcement into their own hands.

As a ubiquitous wireless interface to many of our personal devices and even our vehicles, Bluetooth is a large and notoriously insecure attack surface for hacks and exploits. And as TraffiCatch demonstrates, even when your device’s Bluetooth tech isn’t being actively hacked, it can broadcast uniquely identifiable information that makes you a target for tracking. This is one of the many ways that surveillance, and the distrust it breeds in the public over technology and tech companies, hinders progress. Hands-free communication in cars is a fantastic modern innovation. But the fact that it comes at the cost of opening a whole society up to surveillance is a detriment to all.

EFF Zine on Surveillance Tech at the Southern Border Shines Light on Ever-Growing Spy Network

Guide Features Border Tech Photos, Locations, and Explanation of Capabilities

SAN FRANCISCO—Sensor towers controlled by AI, drones launched from truck-bed catapults, vehicle-tracking devices disguised as traffic cones—all are part of an arsenal of technologies that comprise the expanding U.S. surveillance strategy along the U.S.-Mexico border, revealed in a new EFF zine for advocates, journalists, academics, researchers, humanitarian aid workers, and borderland residents.

Formally released today and available for download online in English and Spanish, “Surveillance Technology at the U.S.-Mexico Border” is a comprehensive 36-page guide to identifying the growing system of surveillance towers, aerial systems, and roadside camera networks deployed by U.S. law enforcement agencies along the Southern border, allowing for the real-time tracking of people and vehicles.

The devices and towers—some hidden, camouflaged, or moveable—can be found in heavily populated urban areas, small towns, fields, farmland, highways, dirt roads, and deserts in California, Arizona, New Mexico, and Texas.

The zine grew out of work by EFF’s border surveillance team, which involved meetings with immigrant rights groups and journalists, research into government procurement documents, and trips to the border. The team located, studied, and documented spy tech deployed and monitored by the Department of Homeland Security (DHS), Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), National Guard, and Drug Enforcement Administration (DEA), often working in collaboration with local law enforcement agencies.

“Our team learned that while many people had an abstract understanding of the so-called ‘virtual wall,’ the actual physical infrastructure was largely unknown to them,” said EFF Director of Investigations Dave Maass. “In some cases, people had seen surveillance towers, but mistook them for cell phone towers, or they’d seen an aerostat flying in the sky and not known it was part of the U.S. border strategy.

“That's why we put together this zine; it serves as a field guide to spotting and identifying the large range of technologies that are becoming so ubiquitous that they are almost invisible,” said Maass.

The zine also includes a copy of EFF’s pocket guide to crossing the U.S. border and protecting information on smartphones, computers, and other digital devices.

The zine is available for republication and remixing under EFF’s Creative Commons Attribution License and features photography by Colter Thomas and Dugan Meyer, whose exhibit “Infrastructures of Control”—which incorporates some of EFF’s border research—opened in April at the University of Arizona. EFF has previously released a gallery of images of border surveillance that are available for publications to reuse, as well as a living map of known surveillance towers that make up the so-called “virtual wall.”

To download the zine:
https://www.eff.org/pages/zine-surveillance-technology-us-mexico-border

For more on border surveillance:
https://www.eff.org/issues/border-surveillance-technology

For EFF’s searchable Atlas of Surveillance:
https://atlasofsurveillance.org/ 

 

Contact: Dave Maass, Director of Investigations

CCTV Cambridge, Addressing Digital Equity in Massachusetts

Here at EFF, digital equity is something we advocate for, and we are always thrilled when we hear that a member of the Electronic Frontier Alliance is advocating for it as well. Simply put, digital equity is the condition in which everyone has access to the technology that allows them to participate in society, whether in rural America or the inner cities—both places where big ISPs don’t find it profitable to invest. EFF has long advocated for affordable, accessible, future-proof internet access for all. I recently spoke with EFA member CCTV Cambridge, which has partnered with the Massachusetts Broadband Institute to tackle this issue and address the digital divide in their state:

How did the partnership with the Massachusetts Broadband Institute come about, and what does it entail?

Mass Broadband Institute and Mass Hire Metro North are the key funding partners. We were moving forward with lifting up digital equity and saw an opportunity to apply for this funding, which is going to several communities in the Metro North area; this collaboration was generated in Cambridge among the partners in this digital equity work. Key program activities will entail hiring and training “Digital Navigators” to be placed in the Cambridge Public Library and Cambridge Public Schools, working in partnership with navigators at CCTV and Just A Start. CCTV will employ a coordinator as part of the project, who will serve residents and coordinate the digital navigators across partners to build community, skills, and consistency in support for residents. Regular meetings will be coordinated for Digital Navigators across the city to share best practices, discuss challenging cases, exchange community resources, and measure impact from data collection. These efforts will align with regional initiatives supported through the Mass Broadband Institute Digital Navigator coalition.

What is CCTV Cambridge’s approach to digital equity and why is it an important issue?

CCTV’s approach to digital equity has always been about people over tech. We really see the Digital Navigators as more like digital social workers than IT people, in the sense that technology is required to be a fully civically engaged human, someone who is connected to your community and family, someone who can have a sense of well-being and safety in the world. We really feel that digital equity means not just being able to use the tools, but having access to the tools that make your life better. You really can’t operate in an equal way in the world without access to technology: you can’t make a doctor’s appointment, talk to your grandkids on Zoom, you can’t even park your car without an app! You can’t be civically engaged without access to tech. We risk marginalizing a bunch of folks if we don’t, as a community, bring them into digital equity work. We’re community media, it’s in our name, and digital equity is the responsibility of the community. It’s not okay to leave people behind.

It’s amazing to see organizations like CCTV Cambridge making a difference in the community, what do you envision as the results of having the Digital Navigators?

Hopefully we’re going to increase community and civic engagement in Cambridge, particularly amongst people who might not have the loudest voice. We’re going to reach people we haven’t reached in the past, including people who speak languages other than English and haven’t had exposure to community media. It’s a really great opportunity for intergenerational work, which is also a really important community-building tool.

How can people both locally in Massachusetts and across the country plug-in and support?

People everywhere are welcomed and invited to support this work through donations, which you can do by visiting cctvcambridge.org! When the applications open for the Digital Navigators, share in your networks with people you think would love to do this work; spread the word on social media and follow us on all platforms @cctvcambridge! 

The U.S. House Version of KOSA: Still a Censorship Bill

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. Those requirements will drive away both minors and adults who either lack the proper ID or who value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S.; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the "duty of care" requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or "high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user was under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could potentially narrow the number of platforms subject to KOSA's enforcement by reducing some of the burden on websites that aren't primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category, or would image-sharing be its predominant purpose? What about TikTok, which has a mix of content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tends to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws. It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and it further harms young people by limiting their access to critical information and resources.

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last "must-pass" legislation until the fall. This would effectively bypass public discussion of the House version. Just last month Congress attached another contentious, potentially unconstitutional bill to unrelated legislation, by including a bill banning TikTok inside of a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits. 

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead seek alternatives, such as a comprehensive federal privacy law, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

On World Press Freedom Day (and Every Day), We Fight for an Open Internet

Today marks World Press Freedom Day, an annual celebration instituted by the United Nations in 1993 to raise awareness of press freedom and remind governments of their duties under Article 19 of the Universal Declaration of Human Rights. This year, the day is dedicated to the importance of journalism and freedom of expression in the context of the current global environmental crisis.

Journalists everywhere face myriad challenges in reporting on climate change and other environmental issues, from lawsuits and intimidation to arrests and disinformation campaigns. For instance, journalists and human rights campaigners attending the COP28 Summit held in Dubai last autumn faced surveillance and intimidation. The Committee to Protect Journalists (CPJ) has documented arrests of environmental journalists in Iran and Venezuela, among other countries. And in 2022, a Guardian journalist was murdered while on the job in the Brazilian Amazon.

The threats faced by journalists are the same as those faced by ordinary internet users around the world. According to CPJ, there are 320 journalists jailed worldwide for doing their job. Ranked among the top jailers of journalists last year were China, Myanmar, Belarus, Russia, Vietnam, Israel, and Iran, countries in which internet users also face censorship, intimidation, and in some cases, arrest.

On this World Press Freedom Day, we honor the journalists, human rights defenders, and internet users fighting for a better world. EFF will continue to fight for the right to freedom of expression and a free and open internet for every internet user, everywhere.



Biden Signed the TikTok Ban. What's Next for TikTok Users?

Over the last month, lawmakers moved swiftly to pass legislation that would effectively ban TikTok in the United States, eventually including it in a foreign aid package that was signed by President Biden. The impact of this legislation isn’t entirely clear yet, but what is clear: whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. 

What Happens Next?

At the moment, TikTok isn’t “banned.” The law gives ByteDance 270 days to divest TikTok before the ban would take effect, which would be on January 19th, 2025. In the meantime, we expect courts to determine that the bill is unconstitutional. Though there is no lawsuit yet, one on behalf of TikTok itself is imminent.

There are three possible outcomes. If the law is struck down, as it should be, nothing will change. If ByteDance divests TikTok by selling it, then the platform would still likely be usable. However, there’s no telling whether the app’s new owners would change its functionality, its algorithms, or other aspects of the company. As we’ve seen with other platforms, a change in ownership can result in significant changes that could impact its audience in unexpected ways. In fact, that’s one of the given reasons to force the sale: so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation. This is despite the fact that it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. 

Lastly, if ByteDance refuses to sell, users in the U.S. will likely see it disappear from app stores sometime between now and that January 19, 2025 deadline. 

How Will the Ban Be Implemented? 

The law limits liability to intermediaries—entities that “provide services to distribute, maintain, or update” TikTok by means of a marketplace, or that provide internet hosting services to enable the app’s distribution, maintenance, or updating. The law also makes intermediaries responsible for its implementation. 

The law explicitly denies the Attorney General the authority to enforce it against an individual user of a foreign adversary controlled application, so users themselves cannot be held liable for continuing to use the application, if they can access it.

Will I Be Able to Download or Use TikTok If ByteDance Doesn’t Sell? 

It’s possible some U.S. users will find routes around the ban. But the vast majority will probably not, significantly shifting the platform's user base and content. If ByteDance itself assists in the distribution of the app, it could also be found liable, so even if U.S. users continue to use the platform, the company’s ability to moderate and operate the app in the U.S. would likely be impacted. Bottom line: for a period of time after January 19, it’s possible that the app would be usable, but it’s unlikely to be the same platform—or even a very functional one in the U.S.—for very long.

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools, deeming such restrictions undemocratic. Enacting this legislation has undermined this longstanding democratic principle, as well as the U.S. government’s moral authority to call out other nations when they shut down internet access or ban social media apps and other online communications tools.

There are a few reasons legislators have given to ban TikTok. One is to change the type of content on the app—a clear First Amendment violation. The second is to protect data privacy. Our lawmakers should work to protect data privacy, but this was the wrong approach. They should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries. They should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation. Instead, as happens far too often, our government’s actions are vastly overreaching while also deeply underserving the public. 

Speaking Freely: Rebecca MacKinnon

*This interview has been edited for length and clarity.*

Rebecca MacKinnon is Vice President, Global Advocacy at the Wikimedia Foundation, the non-profit that hosts Wikipedia. Author of Consent of the Networked: The Worldwide Struggle For Internet Freedom (2012), she is co-founder of the citizen media network Global Voices and founding director of Ranking Digital Rights, a research and advocacy program at New America. From 1998-2004 she was CNN’s Bureau Chief in Beijing and Tokyo. She has taught at the University of Hong Kong and the University of Pennsylvania, and held fellowships at Harvard, Princeton, and the University of California. She holds an AB magna cum laude in Government from Harvard and was a Fulbright scholar in Taiwan.

David Greene: Can you introduce yourself and give us a bit of your background? 

My name is Rebecca MacKinnon, and I am presently the Vice President for Global Advocacy at the Wikimedia Foundation, but I’ve worn quite a number of hats working in the digital rights space for almost twenty years. I was co-founder of Global Voices, which at the time we called the International Bloggers’ Network, and which is about to hit its twentieth anniversary. I was one of the founding board members of the Global Network Initiative, GNI. I wrote a book called “Consent of the Networked: The Worldwide Struggle for Internet Freedom,” which came out more than a decade ago. It didn’t sell very well, but apparently it still gets assigned in classes, so I still hear about it. I was also a founding member of Ranking Digital Rights, which ranks the big tech companies and the biggest telecommunications companies on the extent to which they are or are not protecting their users’ freedom of expression and privacy. I left that in 2021 and ended up at the Wikimedia Foundation, and it’s never a dull moment!

Greene: And you were a journalist before all of this, right? 

Yes, I worked for CNN for twelve years: in Beijing for nine years, where I ended up Bureau Chief and Correspondent, and in Tokyo for almost three years, where I was also Bureau Chief and Correspondent. That’s also where I first experienced the magic of the global internet in a journalistic context, and where I experienced the internet arriving in China and the government immediately trying to figure out both how to take advantage of it economically and how to control it enough that the Communist Party would not lose power.

Greene: At what point did it become apparent that the internet would bring both benefits and threats to freedom of expression?

At the beginning I think the media, industry, policymakers, kind of everybody, assumed—you know, this is like in 1995 when the internet first showed up commercially in China—everybody assumed “there’s no way the Chinese Communist Party can survive this,” and we were all a bit naive. And our reporting ended up influencing naive policies in that regard, and perhaps a naive understanding of things like Facebook revolutions and things like that in the activism world. It really began to be apparent just how authoritarianism was adapting to the internet and starting to adapt the internet to its own ends, and how China was really Exhibit A for how that was playing out and could play out globally. That became really apparent in the mid-to-late 2000s as I was studying Chinese blogging communities and how the government was controlling private companies, private platforms, to carry out censorship and surveillance work.

Greene: And it didn’t stop with China, did it? 

It sure didn’t! And in the book I wrote I only had a chapter on China and talked about how if the trajectory the Western democratic world was on just kind of continued in a straight line we were going to go more in China’s direction unless policymakers, the private sector, and everyone else took responsibility for making sure that the internet would actually support human rights. 

Greene: It’s easy to talk about authoritarian threats, but we see some of the same concerns in democratic countries as well. 

We’re all just one bad election away from tyranny, aren’t we? This is again why when we’re talking to lawmakers, not only do we ask them to apply a Wikipedia test—if this law is going to break Wikipedia, then it’s a bad law—but also, how will this stand up to a bad election? If you think a law is going to be good for protecting children or fighting disinformation under the current dominant political paradigm, what happens if someone who has no respect for the rule of law, no respect for democratic institutions or processes ends up in power? And what will they do with that law? 

Greene: This happens so much within disinformation, for example, and I always think of it in terms of, what power are we giving the state? Is it a good thing that the state has this power? Well, let’s switch things up and go to the basics. What does free speech mean to you? 

People talk about is it free as in speech? Is it free as in beer? What does “free” mean? I am very much in the camp that freedom of expression needs to be considered in the context of human rights. So my free speech does not give me freedom to advocate for a pogrom against the neighboring neighborhood. That is violating the rights of other people. And I actually think that Article 19 of the Declaration of Human Rights—it may not be perfect—but it gives us a really good framework to think about what is the context of freedom of expression or free speech as situated with other rights? And how do we make sure that, if there are going to be limits on freedom of expression to prevent me from calling for a pogrom of my neighbors, then the limitations placed on my speech are necessary and proportionate and cannot be abused? And therefore it’s very important that whoever is imposing those limits is being held accountable, that their actions are sufficiently transparent, and that any entity’s actions to limit my speech—whether it’s a government or an internet service provider—that I understand who has the power to limit my speech or limit what I can know or limit what I can access, so that I can even know what I don’t know! So that I know what is being kept from me. I also know who has the authority to restrict my speech, under what circumstances, so that I know what I can do to hold them accountable. That is the essence of freedom of speech within human rights and where power is held appropriately accountable. 

Greene: How do you think about the ways that your speech might harm people? 

You can think of it in terms of the other rights in the Universal Declaration. There’s the right to privacy. There’s the right to assembly. There’s the right to life! So for me to advocate for people in that building over there to go kill people in that other building, that’s violating a number of rights that I should not be able to violate. But what’s complicated, when we’re talking about rules and rights and laws and enforcement of laws and governance online, is that we somehow think it can be more straightforward and black and white than governance in the physical world is. So what do we consider to be appropriate law enforcement in the city of San Francisco? It’s a hot topic! And reasonable people of a whole variety of backgrounds reasonably disagree and will never agree! So you can’t just fix crime in San Francisco the way you fix the television. And nobody in their right mind would expect that you should expect that, right? But somehow in the internet space there’s so much policy conversation around making the internet safe for children. But nobody’s running around saying, “let’s make San Francisco safe for children in the same way.” Because they know that if you want San Francisco to be 100% safe for children, you’re going to be Pyongyang, North Korea! 

Greene: Do you think that’s because with technology some people just feel like there’s this techno-solutionism? 

Yeah, there’s this magical thinking. I have family members who think that because I can fix something in their tech settings I can perform magic. I think it’s because it’s new, because it’s a little bit mystifying for many people, and because we’re still in the very early stages of people thinking about governance of digital spaces and digital activities as an extension of real-world activities. And they’re thinking more about, okay, it’s like a car we need to put seatbelts on.

Greene: I’ve heard that from regulators many times. Does the fact that the internet is speech, does that make it different from cars? 

Yeah, although increasingly cars are becoming more like the internet! Because a car is essentially a smartphone that can also be a very lethal weapon. And it’s also a surveillance device, it’s also increasingly a device that is a conduit for speech. So actually it’s going the other way!

Greene: I want to talk about misinformation a bit. You’re at Wikimedia, and so, independent of any concern people have about misinformation, Wikipedia is the product and its goal is to be accurate. What do we do with the “problem” of misinformation?

Well, I think it’s important to be clear about what is misinformation and what is disinformation. And deal with them—I mean they overlap, the dividing line can be blurry—but, nonetheless, it’s important to think about both in somewhat different ways. Misinformation being inaccurate information that is not necessarily being spread maliciously with intent to mislead. It might just be, you know, your aunt seeing something on Facebook and being like, “Wow, that’s crazy. I’m going to share it with 25 friends.” And not realizing that they’re misinformed. Whereas disinformation is when someone is spreading lies for a purpose. Whether it’s in an information warfare context where one party in a conflict is trying to convince a population of something about their own government which is false, or whatever it is. Or misinformation about a human rights activist and, say, an affair they allegedly had and why they deserve whatever fate they had… you know, just for example. That’s disinformation. And at the Wikimedia Foundation—just to get a little into the weeds because I think it helps us think about these problems—Wikipedia is a platform whose content is not written by staff of the Wikimedia Foundation. It’s all contributed by volunteers, anybody can be a volunteer. They can go on Wikipedia and contribute to a page or create a page. Whether that content stays, of course, depends on whether the content they’ve added adheres to what constitutes well-sourced, encyclopedic content. There’s a whole hierarchy of people whose job it is to remove content that does not fit the criteria. And one could talk about that for several podcasts. But that process right there is, of course, working to counter misinformation. Because anything that’s not well-sourced—and they have rules about what is a reliable source and what isn’t—will be taken down. So the volunteer Wikipedians, kind of through their daily process of editing and enforcing rules, are working to eliminate as much misinformation as possible. Of course, it’s not perfect. 

Greene: [laughing] What do you mean it’s not perfect? It must be perfect!

What is true is a matter of dispute even between scientific journals or credible news sources, or what have you. So there’s lots of debates, and all those debates are in the history tab of every page, which is public, about what source is credible and what the facts are, etc. So this is kind of the self-cleaning oven that’s dealing with misinformation. The human hive mind that’s dealing with this. Disinformation is harder because you have a well-funded state actor who may be encouraging people—not necessarily people who are employed by that actor themselves, but people who are kind of nationalistic supporters of that government or politician, or people who are just useful idiots—to go on and edit Wikipedia to promote certain narratives. But that’s kind of the least of it. You also, of course, have credible, physical threats against editors who are trying to delete the disinformation, and staff of the Foundation support those editors in investigating and identifying what is actually a disinformation campaign and in addressing it, sometimes with legal support, sometimes with technical support and other support. But people are in jail in one country in particular right now because they were fighting disinformation on the projects in their language. In Belarus, we had volunteers who were jailed for the same reason. We have people who are under threat in Russia, and you have governments who will say, “Wikipedia contains disinformation about our, for example, Special Military Exercise in Ukraine, because they’re calling it ‘an invasion,’ which is disinformation, so therefore they’re breaking the law against disinformation, so we have to threaten them.” So the disinformation piece—fighting it can become very dangerous.

Greene: What I hear is there are threats to freedom of expression in efforts to fight disinformation and, certainly in terms of state actors, those might be malicious. Are there any well-meaning efforts to fight disinformation that also bring serious threats to freedom of expression? 

Yeah, the people who say, “Okay, we should just require the platforms to remove all content that is anything from COVID disinformation to certain images that might falsely present… you know, deepfake images, etc.” Content-focused efforts to fight misinformation and disinformation will result in over-censorship because you can almost never get all the nuance and context right. Humor, satire, critique, scientific reporting on a topic or about disinformation itself or about how so-and-so perpetrated disinformation on X, Y, Z… you have to actually talk about it. But if the platform is required to censor the disinformation you can’t even use that platform to call out disinformation, right? So content-based efforts to fight disinformation go badly and get weaponized. 

Greene: And, as the US Supreme Court has said, there’s actually some social value to the little white lie. 

There can be. There can be. And, again, there’s so many topics on which reasonable people disagree about what the truth is. And if you start saying that certain types of misinformation or disinformation are illegal, you can quickly have a situation where the government is becoming arbiter of the truth in ways that can be very dangerous. Which brings us back to… we’re one bad election away from tyranny.

Greene: In your past at Ranking Digital Rights you looked more at the big corporate actors rather than State actors. How do you see them in terms of freedom of expression—they have their own freedom of expression rights, but there’s also their users—what does that interplay look to you? 

Especially in relation to the disinformation thing, when I was at Ranking Digital Rights we put out a report that also related to regulation. When we’re trying to hold these companies accountable, whether we’re civil society or government, what’s the appropriate approach? The title of the report was, “It’s Not the Content, it’s the Business Model.” Because the issue is not about the fact that, oh, something bad appears on Facebook. It’s how it’s being targeted, how it’s being amplified, how that speech and the engagement around it is being monetized, that’s where most of the harm takes place. And here’s where privacy law would be rather helpful! But no, instead we go after Section 230. We could do a whole other podcast on that, but… I digress. 

I think this is where bringing in international human rights law around freedom of expression is really helpful. Because the US constitutional law, the First Amendment, doesn’t really apply to companies. It just protects the companies from government regulation of their speech. Whereas international human rights law does apply to companies. There’s this framework, The UN Guiding Principles on Business and Human Rights, where nation-states have the ultimate responsibility—duty—to protect human rights, but companies and platforms, whether you’re a nonprofit or a for-profit, have a responsibility to respect human rights. And everybody has a responsibility to provide remedy, redress. So in that context, of course, it doesn’t contradict the First Amendment at all, but it sort of adds another layer to corporate accountability that can be used in a number of ways. And that is being used more actively in the European context. But Article 19 is not just about your freedom of speech, it’s also your freedom of access to information, which is part of it, and your freedom to form an opinion without interference. Which means that if you are being manipulated and you don’t even know it—because you are on this platform that’s monetizing people’s ability to manipulate you—that’s a violation of your freedom of expression under international law. And that’s a problem that companies, platforms of any kind—including if Wikimedia were to allow that to happen, which they don’t—anyone should be held accountable for. 

Greene: Just in terms of the role of the State in this interplay, because you could say that companies should operate within a human rights framing, but then we see different approaches around the world. Is it okay or is it too much power for the state to require them to do that? 

Here’s the problem. If states were perfect in achieving their human rights duties, then we wouldn’t have a problem and we could totally trust states to regulate companies in our interest and in ways that protect our human rights. But there is no such state. There are some that are further away on the spectrum than others, but they’re all on a spectrum and nobody is at that position of utopia, and they will never get there. And so, given that all states, in large ways or small, are making demands of internet platforms and companies generally that reasonable numbers of people believe violate their rights, we need accountability. Holding the state accountable for what it’s demanding of the private sector, making sure that’s transparent and that the state does not have absolute power, is of utmost importance. And then you have situations where a government is just blatantly violating rights, and a company—even a well-meaning company that wants to do the right thing—is just stuck between a rock and a hard place. You can be really transparent about the fact that you’re complying with bad law, but you’re stuck in this place where if you refuse to comply then your employees go to jail. Or other bad things happen. And so what do you do other than just try and let people know? And then the state tells you, “Oh, you can’t tell people because that’s a state secret.” So what do you do then? Do you just stop operating? So one can be somewhat sympathetic. Some of the corporate accountability rhetoric has gone a little overboard in not recognizing that if states are failing to do their job, we have a problem.

Greene: What’s the role of either the State or the companies if you have two people and one person is making it hard for the other to speak? Whether through heckling or just creating an environment where the other person doesn’t feel safe speaking? Is there a role for either the State or the companies where you have two peoples’ speech rights butting up against each other? 

We have this in private physical spaces all the time. If you’re at a comedy show and somebody gets up and starts threatening the stand-up comedian, obviously, security throws them out! I think in physical space we have some general ideas about that, that work okay. And we can apply them in virtual space, although it’s very contextual and, again, somebody has to make a decision—whose speech is more important than whose safety? Choices are going to be made. They’re not always going to be, in hindsight, the right choices, because sometimes you have to act really quickly and you don’t know if somebody’s life is in danger or not. Or how dangerous is this person speaking? But you have to err on the side of protecting life and limb. And then you might realize at the end of the day that wasn’t the right choice. But are you being transparent about what your processes are—what you’re going to do under what circumstances? So people know, okay, well this is really predictable. They said they were going to do x if I did y, and I did y and they did indeed take action, and if I think that they unfairly took action then there’s some way of appealing. That it’s not just completely opaque and unaccountable.

This is a very overly simplistic description of very complex problems, but I’m now working at a platform. Yes, it’s a nonprofit, public interest platform, but our Trust and Safety team are working with volunteers who are enforcing rules and every day—well, I don’t know if it’s every day because they’re the Trust and Safety team so they don’t tell me exactly what’s going on—but there are frequent decisions around people’s safety. And what enables the volunteer community to basically both trust each other enough, and trust the platform operator enough, for the whole thing not to collapse due to mistrust and anger is that you’re being open and transparent enough about what you’re doing and why you’re doing it so that if you did make a mistake there’s a way to address it and be honest about it. 

Greene: So at least at Wikimedia you have the overriding value of truthfulness. At another platform should they value wanting to preserve places for people who otherwise wouldn’t have places to speak? People who are historically or culturally don’t have the opportunities to speak. How should they handle these instances of people being heckled down or shouted down off of a site? From your perspective, how should they respond to that? Should they make an effort to preserve these spaces? 

This is where I think in Silicon Valley in particular you often hear this thing that the technology is neutral—“we treat everybody the same”—

Greene: And it’s not true.

Oh, of course it’s not true! But that’s the rhetoric. But that is held up as being “the right thing.” But that’s like saying, “Okay, we’re going to administer public housing in a way”—and it’s not a perfect comparison—that is completely blind to the context and the socio-economic and political realities of the human beings that you are taking action upon. If you’re operating a public housing system, or whatever, and you’re not taking into account at all the socio-economic backgrounds or ethnic backgrounds of the people for whom you’re making decisions, you’re going to be perpetuating and, most likely, amplifying social injustice. So people who run public housing or universities and so on are quite familiar with this notion that being neutral is actually not neutral. It’s perpetuating existing social, economic, and political power imbalances. And we found that’s absolutely the case with social media claiming to be neutral. And the vulnerable people end up losing out. That’s what the research has shown and the activism has shown.

And, you know, in the Wikimedia community there are debates about this. There are people who have been editing for a long time who say, “we have to be neutral.” But on the other hand—what’s very clear—is the greater diversity of viewpoints and backgrounds and languages and genres, etc of the people contributing to an article on a given topic the better it is. So if you want something to actually have integrity, you can’t just have one type of person working on it. And so there’s all kinds of reasons why it’s important as a platform operator that we do everything we can to ensure that this is a welcoming space for people of all backgrounds. That people who are under threat feel safe contributing to the platforms and not just rich white guys in Northern Europe. 

Greene: And at the same time we can’t expect them to be more perfect than the real world, also, right? 

Well, yeah, but you do have to recognize that the real world is the real world and there are these power dynamics going on that you have to take into account and you can decide to amplify them by pretending they don’t exist, or you can work actively to compensate in a manner that is consistent with human rights standards. 

Greene: Okay, one more question for you. Who is your free speech hero and why? 

Wow, that’s a good question, nobody has asked me that before in that very direct way. I think I really have to say sort of a group of people who really set me on the path of caring deeply for the rest of my life about free speech. Those are the people in China, most of whom I met when I was a journalist there, who stood up to tell the truth despite tremendous threats like being jailed, or worse. And oftentimes the determination that I would witness from even very ordinary people that “I am right, and I need to say this. And I know I’m taking a risk, but I must do it.” And it’s because of my interactions with such people in my twenties when I was starting out as a journalist in China that set me on this path. And I am grateful to them all, including several who are no longer on this earth including Liu Xiaobo, who received a Nobel prize when he was in jail before he died. 



Congress Should Just Say No to NO FAKES

There is a lot of anxiety around the use of generative artificial intelligence, some of it justified. But it seems like Congress thinks the highest priority is to protect celebrities – living or dead. Never fear, ghosts of the famous and infamous, the U.S. Senate is on it.

We’ve already explained the problems with the House’s approach, No AI FRAUD. The Senate’s version, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, isn’t much better.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies. It’s retroactive, meaning the post-mortem right would apply immediately to the heirs of, say, Prince, Tom Petty, or Michael Jackson, not to mention your grandmother.

Boosters talk a good game about protecting performers and fans from AI scams, but NO FAKES seems more concerned about protecting their bottom line. It expressly describes the new right as a “property right,” which matters because federal intellectual property rights are excluded from Section 230 protections. If courts decide the replica right is a form of intellectual property, NO FAKES will give people the ability to threaten platforms and companies that host allegedly unlawful content, which tend to have deeper pockets than the actual users who create that content. This will incentivize platforms that host our expression to be proactive in removing anything that might be a “digital replica,” whether its use is legal expression or not. While the bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.

This “digital replica” right effectively federalizes—but does not preempt—state laws recognizing the right of publicity. Publicity rights are an offshoot of state privacy law that give a person the right to limit the public use of her name, likeness, or identity for commercial purposes, and a limited version of it makes sense. For example, if Frito-Lay uses AI to deliberately generate a voiceover for an advertisement that sounds like Taylor Swift, she should be able to challenge that use. The same should be true for you or me.

Trouble is, in several states the right of publicity has already expanded well beyond its original boundaries. It was once understood to be limited to a person’s name and likeness, but now it can mean just about anything that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. In some states, your heirs can invoke the right long after you are dead and, presumably, in no position to be embarrassed by any sordid commercial associations, or for anyone to believe you have actually endorsed a product from beyond the grave.

In other words, it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died. It is entirely divorced from the incentive structure behind intellectual property rights like copyright and patents—presumably no one needs a replica right, much less a post-mortem one, to invest in their own image, voice, or likeness. Instead, it effectively creates a windfall for people with a commercially valuable recent ancestor, even if that value emerges long after they died.

What is worse, NO FAKES doesn’t offer much protection for those who need it most. People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image. Savvy commercial players will build licenses into standard contracts, taking advantage of workers who lack bargaining power and leaving the right to linger as a trap only for unwary or small-time creators.

Although NO FAKES leaves the question of Section 230 protection open, it’s been expressly eliminated in the House version, and platforms for user-generated content are likely to over-censor any content that is, or might be, flagged as containing an unauthorized digital replica. At the very least, we expect to see the expansion of fundamentally flawed systems like Content ID that regularly flag lawful content as potentially illegal and chill new creativity that depends on major platforms to reach audiences. The various exceptions in the bill won’t mean much if you have to pay a lawyer to figure out if they apply to you, and then try to persuade a rightsholder to agree.

Performers and others are raising serious concerns. As policymakers look to address them, they must be precise, careful, and practical. NO FAKES doesn’t reflect that care, and its sponsors should go back to the drawing board.

Speaking Freely: Obioma Okonkwo

This interview has been edited for clarity and length.

Obioma Okonkwo is a lawyer and human rights advocate. She is currently the Head of Legal at Media Rights Agenda (MRA), a non-governmental organization based in Nigeria whose focus is to promote and defend freedom of expression, press freedom, digital rights and access to information within Nigeria and across Africa. She has extensive experience in litigating, researching, advocating and training around these issues. Obioma is an alumna of the Open Internet for Democracy Leaders Programme, a fellow of the African School of Internet Governance, and a Media Viability Ambassador with the Deutsche Welle Akademie.

 York: What does free speech or free expression mean to you?

In my view, free speech is an intrinsic right that allows citizens, journalists and individuals to express themselves freely without repressive restriction. It is also the ability to speak, be heard, and participate in social life as well as political discussion, and this includes the right to disseminate information and the right to know. Considering my work around press freedom and media rights, I would also say that free speech is when the media can gather and disseminate information to the public without restrictions.

 York: Can you tell me about an experience in your life that helped shape your views on free speech?

An experience that shaped my views on free speech happened in 2013, while I was in university. Some of my schoolmates were involved in a ghastly car accident—as a result of a bad road—which resulted in their deaths. This led the students to start an online campaign demanding that the government repair the road and compensate the victims’ families. Due to this campaign, the road was repaired and the victims’ families were compensated. Another instance is the #EndSARS protest, a protest against police brutality and corrupt practices in Nigeria. People were freely expressing their opinions both offline and online on this issue and demanding a reform of the Nigerian Police Force. These incidents have helped shape my views on how important the right to free speech is in any society, considering that it gives everyone an avenue to hold the government accountable, demand justice, and share their views about issues that affect them as individuals or as a group.

 York: I know you work a bit on press freedom in Nigeria and across Africa. Can you tell me a bit about the situation for press freedom in the context in which you’re working?

The situation for press freedom in Africa—and particularly Nigeria—is currently an eyesore. The legal and political environment is becoming repressive of press freedom and freedom of expression, as governments across the region grow increasingly authoritarian. They have been making several efforts to gag the media: enacting draconian laws, arresting and arbitrarily detaining journalists, imposing fines, and closing media outlets, amongst many other actions.

In my country, Nigeria, the government has resorted to using laws like the Cybercrime Act of 2015 and the Criminal Code Act, among other laws, to silence journalists who are either exposing its corrupt practices, sharing dissenting views, or holding it accountable to the people. Journalists like Agba Jalingo, Ayodele Samuel, Emmanuel Ojo and Dare Akogun – to mention just a few – have been arrested, detained, or charged to court under these laws. Agba Jalingo, for instance, was arrested and detained for over 100 days after he exposed the corrupt practices of the Governor of Cross River, a state in Nigeria.

The case is the same in many African countries, including Benin, Ghana, and Senegal. Journalists are arrested, detained, and sent to court for performing their journalistic duty. Ignace Sossou, a journalist in Benin, was sent to court and imprisoned under the Digital Code for posting the statement of the Minister of Justice on his Facebook account. The reality right now is that governments across the region are at war against press freedom and the journalists who are purveyors of information.

Although this is what press freedom looks like across the region, civil society organizations are fighting back to protect press freedom and freedom of expression. To create an enabling environment for press freedom, my organization, Media Rights Agenda (MRA), has been making several efforts, such as instituting lawsuits before national and regional courts challenging these draconian laws; providing pro bono legal representation to journalists who are arrested, detained, or charged; and engaging various stakeholders on this issue.

 York: Are you working on the issue of online regulation and can you tell us the situation of online speech in the region?

As the Head of Legal with MRA, I am actively working on the issue of online regulation to ensure that the rights to press freedom, freedom of expression, access to information, and digital rights are promoted and protected online. The region is facing an era of digital authoritarianism, as there is a crackdown on online speech. In my country, the Nigerian government has made several attempts to regulate the internet or introduce social media bills under the guise of combating cybercrimes, hate speech, and mis/disinformation. However, diverse stakeholders – including civil society organizations like mine – have, on many occasions, fought against these attempts to regulate online speech, because these proposed bills would not only limit freedom of expression, press freedom, and other digital rights; they would also shrink the civic space online. Some of their provisions are overly broad, and governments are known to use such laws arbitrarily to silence dissenting voices and witch-hunt journalists, opposition entities, or individuals.

An example is when diverse stakeholders challenged the National Information Technology Development Agency (NITDA) – the agency saddled with the duty of creating a framework for the planning and regulation of information technology practices, activities, and systems in Nigeria – over its draft regulation, the “Code of Practice for Interactive Computer Service Platforms/Internet Intermediaries.” They challenged the draft regulation on the basis that it should contain provisions recognizing freedom of expression, privacy, press freedom, and other human rights concerns. Although the agency took into consideration some of the suggestions made by these stakeholders, there are still concerns that individuals, activists, and human rights defenders might be surveilled, amongst other things.

The government of Nigeria is relying on laws like the Cybercrime Act, the Criminal Code Act, and many more to stifle online speech. The Ghanaian government is no different, as it is relying on the Electronic Communications Act to suppress freedom of expression and hound critical journalists under the pretense of battling fake news. Countries like Zimbabwe, Sudan, Uganda, and Morocco have also enacted laws to silence dissent and repress citizens’ internet use, especially for expression.

 York: Can you also tell me a little bit more about the landscape for civil society where you work? Are there any creative tactics or strategies from civil society that you work with?

Nigeria is home to a wide variety of civil society organizations (CSOs) and non-governmental organizations (NGOs). The main legislation regulating CSOs consists of federal laws such as the Nigerian Constitution, which guarantees freedom of association, and the Companies and Allied Matters Act (CAMA), which provides every group or association with legal personality.

CSOs in Nigeria face quite a number of legal and political hurdles. For example, CSOs that wish to operate as a company limited by guarantee must seek the consent of the Attorney-General of the Federation, which may be refused, while CSOs operating as incorporated trustees must carry out obligations that can be tedious and time-consuming. On several occasions, the Nigerian government has made attempts to pressure and even subvert CSOs, and to single out certain CSOs for special adverse treatment. Because many CSOs receive foreign funding support, the government finds it convenient to berate or criticize them as being “sponsored” by foreign interests, with the underlying suggestion that such organizations are unpatriotic and – by criticizing government – are being paid to act contrary to Nigeria’s interests.

There are lots of strategies and tactics CSOs are using to address the issues they work on, including press statements, stakeholder engagement, litigation, capacity building, and advocacy.

 York: Do you have a free expression hero?

Yes, I do. All the critical journalists out there are my free expression heroes. I also consider Julian Assange a free speech hero for his belief in openness and transparency, and for taking personal risks to expose the corrupt acts of the powerful – an act necessary in a democratic society.

Screen Printing 101: EFF's Spring Speakeasy at Babylon Burning

At least twice each year, we invite current EFF members to gather with fellow internet freedom supporters and to meet the people behind your favorite digital civil liberties organization. For this year’s Bay Area-based members, we had the opportunity to take over Babylon Burning’s screen printing shop in San Francisco, where Mike Lynch and his team bring EFF artwork to life.

Babylon Burning Front of Building

To kick off the evening, EFF’s Director of Member Engagement Aaron Jue talked about the nearly 20-year friendship between EFF and Babylon Burning, the shop that has printed everything from t-shirts to hoodies to hats, and now tote bags. At EFF, we love the opportunity to support a local business and have a great partnership at the same time. When we send our artwork to Mike and his staff, we know it is in good hands.

EFF Shirt Archive

Following Aaron, EFF’s Creative Director Hugh D’Andrade dove into some of EFF’s most popular works, such as the NSA Spying Eagle and the many versions of the EFF Liberty Mecha. The NSA Spying Eagle spotlights the mass surveillance at issue in the Hepting and Jewel cases. The EFF Liberty Mecha has been featured on four different occasions, most recently on a shirt for DEF CON 29, and highlights freedom, empowerment through technology, interoperability, and teamwork. More information about EFF’s member shirts can be found in our blog and in our shop.

Mike Lynch at Babylon Burning

Mike jumped in after Hugh to walk members through a hands-on demonstration of traditional screen printing. Members printed tote bags, toured the Babylon Burning print shop, and mingled with EFF staff and local supporters.

EFF Tote Bag

Thank you to everyone who attended this year’s Spring Members’ Speakeasy and continues to support EFF as a member. Your support allows our engineers, lawyers, and skilled advocates to tend the path for technology users, and to nurture your rights to privacy, expression, and innovation online.

EFF Art

Thanks to all of the EFF members who participated in our annual Bay Area meetup. If you're not a member of EFF yet, join us today. See you at the next event!

Podcast Episode: Right to Repair Catches the Car

If you buy something—a refrigerator, a car, a tractor, a wheelchair, or a phone—but you can't have the information or parts to fix or modify it, is it really yours? The right to repair movement is based on the belief that you should have the right to use and fix your stuff as you see fit, a philosophy that resonates especially in economically trying times, when people can’t afford to just throw away and replace things.

(You can also find this episode on the Internet Archive and on YouTube.)

 Companies for decades have been tightening their stranglehold on the information and the parts that let owners or independent repair shops fix things, but the pendulum is starting to swing back: New York, Minnesota, California, Colorado, and Oregon are among states that have passed right to repair laws, and it’s on the legislative agenda in dozens of other states. Gay Gordon-Byrne is executive director of The Repair Association, one of the major forces pushing for more and stronger state laws, and for federal reforms as well. She joins EFF’s Cindy Cohn and Jason Kelley to discuss this pivotal moment in the fight for consumers to have the right to products that are repairable and reusable.  

In this episode you’ll learn about: 

  • Why our “planned obsolescence” throwaway culture doesn’t have to be, and shouldn’t be, a technology status quo. 
  • The harm done by “parts pairing”: software barriers used by manufacturers to keep people from installing replacement parts. 
  • Why one major manufacturer put out a user manual in France, but not in other countries including the United States. 
  • How expanded right to repair protections could bring a flood of new local small-business jobs while reducing waste. 
  • The power of uniting disparate voices—farmers, drivers, consumers, hackers, and tinkerers—into a single chorus that can’t be ignored. 

Gay Gordon-Byrne has been executive director of The Repair Association—formerly known as The Digital Right to Repair Coalition—since its founding in 2013, helping lead the fight for the right to repair in Congress and state legislatures. Their credo: If you bought it, you should own it and have the right to use it, modify it, and repair it whenever, wherever, and however you want. Earlier, she had a 40-year career as a vendor, lessor, and used equipment dealer for large commercial IT users; she is the author of “Buying, Supporting and Maintaining Software and Equipment – an IT Manager’s Guide to Controlling the Product Lifecycle” (2014), and a Colgate University alumna.

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

GAY GORDON-BYRNE
A friend of mine from Boston had his elderly father in a condo in Florida, not uncommon. And when the father went into assisted living, the refrigerator broke and it was out of warranty. So my friend went to Florida, figured out what was wrong, said, ‘Oh, I need a new thermostat,’ ordered the thermostat, stuck around till the thermostat arrived, put it in and it didn't work.

And so he called GE because he bought the part from GE and he says, ‘you didn't provide me, there's a password. I need a password.’ And GE says, ‘Oh, you can't have the password. You have to have a GE authorized tech come in to insert the password.’ And that to me is the ultimate in stupid.

CINDY COHN
That’s Gay Gordon-Byrne with an example of how companies often prevent people from fixing things that they own in ways that are as infuriating as they are absurd.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.  

Our guest today, Gay Gordon-Byrne, is the executive director of The Repair Association, where she has been advocating for years for legislation that will give consumers the right to buy products that are repairable and reusable – rather than things that need to be replaced outright every few years, or as soon as they break. 

CINDY COHN
The Right to Repair is something we fight for a lot at EFF, and a topic that has come up frequently on this podcast. In season three, we spoke to Adam Savage about it.

ADAM SAVAGE
I was trying to fix one of my bathroom faucets a couple of weeks ago, and I called up a Grohe service video of how to repair this faucet. And we all love YouTube for that, right, because anything you want to fix whether it’s your video camera, or this thing, someone has taken it apart. Whether they’re in Micronesia or Australia, it doesn’t matter. But the moment someone figures out that they can make a bunch of dough from that, I’m sure we’d see companies start to say, ‘no, you can’t put up those repair videos, you can only put up these repair videos’ and we all lose when that happens.

JASON KELLEY
In an era where both the cost of living and environmental concerns are top of mind, the right to repair is more important than ever. It addresses both sustainability and affordability concerns.

CINDY COHN
We’re especially excited to talk to Gay right now because Right to Repair is a movement that is on its way up and we have been seeing progress in recent months and years. We started off by asking her where things stand right now in the United States.

GAY GORDON-BYRNE
We've had four states actually pass statutes for Right to Repair, covering a variety of different equipment, and there's 45 states that have introduced right to repair over the past few years, so we expect there will be more bills finishing. Getting them started is easy, getting them over the finish line is hard.

CINDY COHN
Oh, yes. Oh, yes. We just passed a right to repair bill here in California where EFF is based. Can you tell us a little bit about that and do you see it as a harbinger, or just another step along the way?

GAY GORDON-BYRNE
Well, honestly, I see it as another step along the way, because three states had already passed laws. In California, Apple decided that they weren't going to object any further to right to repair laws, but they did have some conditions that are kind of unique to California, because Apple is so influential in California. But it is a very strong bill for consumer products. It just doesn't extend to non-consumer products.

CINDY COHN
Yeah. That's great. And do you know what made Apple change their mind? Because they had, they had been staunch opponents, right? And EFF has battled with them in various different areas around Section 1201 and other things and, and then it seemed like they changed their minds and I wondered if you had some insights about that.

GAY GORDON-BYRNE
I take full responsibility.

CINDY COHN
Yay! Hey, getting a big company to change their position like that is no small feat and it doesn't happen overnight.

GAY GORDON-BYRNE
Oh, it doesn't happen overnight. And what's interesting is that New York actually passed a bill that Apple tried to negotiate and kind of really didn't get to do it in New York, that starts in January. So there was a pressure point already in place. New York is not an insignificant size state.

And then Minnesota passed a much stronger bill. That also takes effect, I think, I might be wrong on this, I think also in January. And so the wheels were already turning, I think the idea of inevitability had occurred to Apple that they'd be on the wrong side of all their environmental claims if they didn't at least make a little bit more of a sincere effort to make things repairable.

CINDY COHN
Yeah. I mean, they have been horrible about this from the very beginning with, you know, with custom kinds of dongles, and difficulty in repairing. And again, we fought them around section 1201, which is the ability to do circumvention so that you can see how something works and build tools that will let you fix them.

It's just no small feat from where we sit to get, to get the winds to change such that even Apple puts their finger up and says, I think the winds are changing. We better get on the right side of history.

GAY GORDON-BYRNE
Yeah, that's what we've been trying to do for the past, when did we get started? I got started in 2010, the organization got started in 2013. So we've been at it a full 10 years as an actual organization, but the problems with Apple and other manufacturers existed long before. So the 1201 problem still exists, and that's the problem that we're trying to move in federally, but oh my God. I thought moving legislation in states was hard and long.

CINDY COHN
Yeah, the federal system is different, and I think that one of the things that we've experienced, though, is when the states start leading, eventually the feds begin to follow. Now, often they follow with the idea that they're going to water down what the states do. That's why, you know, EFF and, and I think a lot of organizations rally around this thing called preemption, which doesn't really sound like a thing you want to rally around, but it ends up being the way in which you make sure that the feds aren't putting the brakes on the states in terms of doing the right things and that you create space for states to be more bold.

It's sometimes not the best thing for a company that has to sell in a bunch of different markets, but it's certainly better than  letting the federal processes come in and essentially damp down what the states are doing.

GAY GORDON-BYRNE
You're totally right. One of our biggest fears is that someone will... We'll actually get a bill moving for Right to Repair, and it's obviously going to be highly lobbied, and we will probably not have the same quality of results as we have in states. So we would like to see more states pass more bills so that it's harder and harder for the federal government to preempt the states.

In the meantime, we're also making sure that the states don't preempt the federal government, which is another source of friction.

CINDY COHN
Oh my gosh.

GAY GORDON-BYRNE
Yeah, preemption is a big problem.

CINDY COHN
It goes both ways. In our, in our Section 1201 fights, we're fighting the Green case, uh, Green vs. Department of Justice, and the big issue there is that while we can get exemptions under 1201 for actual circumvention, the tools that you need  in order to circumvent, you can't get an exception for, and so you have this kind of strange situation in which you technically have the right to repair your device, but nobody can help you do that and nobody can give you the tools to do it. 

So it's this weird, I often, sometimes I call it the, you know, it's legal to be in Arizona, but it's illegal to go to Arizona kind of law. No offense, Arizona.

GAY GORDON-BYRNE
That's very much the case.

JASON KELLEY
You mentioned, Gay, that you've been doing this work a while – probably you've been doing the work a lot longer than the time you've been with the coalition and the Repair Association. We'll get to the brighter future that we want to look towards here in a second, but before we get to the, the way we want to fix things and how it'll look when we do, can you just take us back a little bit and tell us more about how we got to a place where you actually have to fight for your right to repair the things that you buy? You know, 50 years ago, I think most people would just assume that appliances and, and I don't know if you'd call them devices, but things that you purchased you could fix or you could bring to a repair shop. And now we have to force companies to let us fix things.

I know there's a lot of history there, but is there a short version of how we ended up in this place where we have to fight for this right to repair?

GAY GORDON-BYRNE
Yeah, there is a short version. It's called: about 20 years ago, right after Y2K, it became possible, because of the improvements in the internet, for manufacturers to basically host a repair manual or a user guide online and expect their customers to be able to retrieve that information for free.

Otherwise, they have to print, they have to ship. It's a cost. So it started out as a cost reduction strategy on the part of manufacturers. And at first it seemed really cool because it really solved a problem. I used to have manuals that came in like, huge desktop sets that were four feet of paper. And every month we'd get pages that we had to replace because the manual had been updated. So it was a huge savings for manufacturers, a big convenience for consumers and for businesses.

And then, no aspersions on lawyers, but my opinion is that some lawyer decided they wanted to know – they should know, for reasons we have no idea about because they still don't make sense – who's accessing their website. So then they started requiring a login and a password, things like that.

And then another bright light, possibly a lawyer, but most likely a CFO, said, we should charge people to get access to the website. And that slippery slope got really slippery really fast. So it became obvious that you could save a lot of money by not providing manuals, not providing diagnostics, and then not selling parts.

I mean, if you didn't want to sell parts, you didn't have to. There was no law that said you have to sell parts, or tools, or diagnostics. And that's where we've been for 20 years. And everybody that gets away with it has encouraged everybody else to do it. To the point where, um, I don't think Cindy would disagree with me.

I mean, I took a look, um, as did Nathan Proctor of US PIRG when we were getting ready to go before the FTC. And we said, you know, I wonder how many companies are actually selling parts and tools and manuals, and Nathan came up with a similar statistic. Roughly 90 percent of the companies don't.

JASON KELLEY
Wow.

GAY GORDON-BYRNE
So we're, face it, we have now gone from a situation where everybody could fix anything if they were really interested, to 90 percent of stuff not being fixable, and that number is going, getting worse, not better. So yeah, that's the short story, it’s been a bad 20 years.

CINDY COHN
It's funny because I think it's really, it's such a testament to people's desire to want to fix their own things that despite this, you can go on YouTube if something breaks and you can find some nice person who will walk you through how to fix, you know, lots and lots of devices that you have. And to me, that's a testament to the human desire to want to fix things and the human desire to want to teach other people how to fix things, that despite all these obstacles, there is this thriving world, YouTube's not the only place, but it's kind of the central place where you can find nice people who will help tell you how to fix your things, despite it being so hard and getting harder to have that knowledge and the information you need to do it.

GAY GORDON-BYRNE
I would also add to that there's a huge business of repair – we're not strictly fighting for people's rights to be able to do it yourself. In fact, most people, again, you know, back to some kind of general statistics, most people, somewhere around 85 percent of them, really don't want to fix their own stuff.

They may fix some stuff, but they don't want to fix all stuff. But the options of having somebody help them have also gone. Gone just downhill, downhill, downhill massively in the last 20 years and really bad in the past 10 years. 

So the industry of repair – where employment used to be about 3 million people, and that kind of spanned auto repair and a bunch of other things – those people don't have jobs if people can't fix their stuff, because the only way they can be in business is to know that they can buy a part. To know that they can buy the tool, to know that they can get a hold of the schematic and the diagnostics. So these are the things that have thwarted business as well as do-it-yourself. And I think most people, most people, especially the people I know, really expect to be able to fix their things. I think we've been told that we don't, and the reality is we do.

CINDY COHN
Yeah, I think that's right. And one of the, kind of, stories that people have been told is that, you know, if there's a silicon chip in it, you know, you just can't fix it. That that's just, um, places things beyond repair and I think that that's been a myth and I think a lot of people have always known It's a myth, you know, certainly in EFF's community.

We have a lot of hardware hackers, we even have lots of software hackers that know that the fact that there's a chip involved doesn't mean that it's a disposable item. But I wondered you know from your perspective. Have you seen that as well?

GAY GORDON-BYRNE
Oh, absolutely. People are told that these things are too sophisticated, that they're too complex, they're too small. All of these things that are not true, and you know, you got 20 years of a drumbeat of just massive marketing against repair. The budgets for people that are saying you can't fix your stuff are far greater than the budgets of the people that say you can.

So, thank you, Tim Cook and Apple, because you've made this an actual point of advocacy. Every time Apple does something dastardly, and they do it pretty often, every new release there's something dastardly in it, we get to get more people behind the, ‘hey, I want to fix my phone, goddamnit!’

CINDY COHN
Yeah, I think that's right. I think that's one of the wonderful things about the Right to Repair movement is that you're, you're surfing people's natural tendencies. The idea that you have to throw something away as soon as it breaks is just so profoundly …I think it's actually an international human, you know, desire to be able to fix these kinds of things and be able to make something that you own work for you.

So it's always been profoundly strange to have companies kind of building this throwaway culture. It reminds me a little of the privacy fights where we've had also 20 years of companies trying to convince us that your privacy doesn't matter and you don't care about it, and that the world's better if you don't have any privacy. And on a one level that has certainly succeeded in building surveillance business models. But on the other hand, I think it's profoundly against human tendencies, so those of us on the side of privacy and repair, the benefit of us is we're kind of riding with how people want to be in the kind of world they want to live in, against, you know, kind of very powerful, well funded forces who are trying to convince us we're different than we are.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Gay Gordon-Byrne.

At the top of the episode, Gay told us a story about a refrigerator that couldn’t be fixed unless a licensed technician – for a fee, obviously – was brought in to ENTER A PASSWORD. INTO A FRIDGE. Even though the person who owned the fridge had sourced the new part and installed it.

GAY GORDON-BYRNE
And that illustrates to me the damage that's being done by this concept of parts pairing, which is where only the manufacturer can make the part work. So even if you can find a part. Even if you could put it in, you can't make it work without calling the manufacturer again, which kind of violates the whole idea that you bought it and you own it, and they shouldn't have anything to do with it after that. 

So these things are pervasive. We see it in all sorts of stuff. The refrigerator one really infuriates me.

CINDY COHN
Yeah, we've seen it with printer cartridges. We've seen it with garage door openers, for sure. I recently had an espresso machine that broke and couldn't get it fixed because the company that made it doesn't make parts available to people. And that, you know, that's a hard lesson. It's one of the things when you're buying something: to try to figure out, like, is, is this actually repairable or not?

You know, making that information available is something that our friends at Consumer Reports have done and other people have done, but it's still a little hard to find sometimes.

GAY GORDON-BYRNE
Yeah, that information gap is enormous. There are some resources. They're not great. None of them are comprehensive enough to really do the job. But there's an ‘indice de réparabilité’ in France that covers a lot of consumer tech, you know, cell phones and laptops and things along those lines.

It's not hard to find, but it's in French, so use Google Translate or something and you'll see what they have to say. Um, that's actually had a pretty good impact on a couple of companies. For example, Samsung, which had never put out a manual before, had to put out a manual, um, in order to be rated in France. So they did. The same manual they didn't put out in the U.S. and England.

CINDY COHN  
Oh my God, it’s amazing.

Music break.

CINDY COHN
So let's flip this around a little bit. What does the world look like if we get it right? What does a repairable world look like? How is it when you live in it, Gay? Give me a day in the life of somebody who's living in the fixed version of the world.

GAY GORDON-BYRNE
Well, you will be able to buy things that you can fix, or have somebody fix them for you. And one of the consequences is that you will see more repair shops back in your town.

It will be possible for some enterprising person, that'll open up. Again, the kinds of shops we used to have when we were kids.

You'll see a TV repair shop, an appliance repair shop, an electronics repair shop. In fact, it might be one repair shop, because some of these things are all being fixed in the same way. 

So  you'll see more economic activity in the area of repair. You'll also see, and this is a hope, that manufacturers, if they're going to make their products more repairable, in order to look better, you know, it's more of a, more of a PR and a marketing thing.

If they're going to compete on the basis of repairability, they're going to have to start making their products. more repairable from the get go. They're probably gonna have to stop gluing everything together. Europe has been pretty big on making sure that things are made with fasteners instead of glue.

I think we're gonna see more activity along those lines, and more use of replaceable batteries. Why should a battery be glued in? That seems like a pretty stupid thing to do. So I think we'll see some improvements along the line of sustainability in the sense that we'll be able to keep our things longer and use them until we're done with them, not to when the manufacturer decides they want to sell you a new one, which is really the cycle that we have today.

CINDY COHN
Yeah. Planned obsolescence I think is what the marketers call it. I love a vision of the world, you know, when I grew up, I grew up in a small town in Iowa and we had the, the people called the gearheads, right? They were the ones who were always tinkering with cars. And of course you could take your appliances to them and other kinds of things because, you know, people who know how to take things apart and figure out how they work tend to know that about multiple things.

So I'd love a future of the world where the kind of gearheads rise again and are around to help us keep our stuff longer and fix our stuff again. I really appreciate what you say, like, when we're done with them. I mean, I love innovation. I love new toys.

I think that's really great. But the idea that when I'm done with something, you know, it goes into a trash heap. Um, or, you know, into someplace where you have to have fancy, uh, help to make sure that you're not endangering the planet. Like, that's not a very good world.

GAY GORDON-BYRNE
Well, look at your example of your espresso machine. You weren't done with it. It quit. It quit. You can't fix it. You can't make another cup of espresso with it.

That's not what you planned. That's not what you wanted.

CINDY COHN
Yep.

JASON KELLEY
I think we all have stories like the espresso machine and that's part of why this is such a tangible topic for everyone. Maybe I'm not alone in this, but I love, you know, thrift stores and places like that where I can get something that maybe someone else was, was tired of. I was walking, hmm, I passed a house a few years ago and someone had put, uh, a laptop whose screen had been damaged just next to the trash.

And I thought, that looks like a pretty nice laptop. And I grabbed it. It was a pretty new, like, one-year-old Microsoft Surface. Tablet, laptop, um, anyway, I took it to a repair shop and they were able to repair it for, like, way less than the cost of buying a new one and I had a new laptop essentially. Um, and I don't think they gave me extra service because I worked at EFF, but they were certainly happy to help because I worked at EFF. Um, but then, you know, these things do eventually sort of give up, right?

That laptop lasted me about three years and then had so many issues that I just kind of had to get rid of it. Where do you think, in the better future, we should put the things that are sort of unfixable? You know, do we, do we bring them to a repair shop and they pull out the pieces that work, like a junkyard, so they can be reused?

Is there a better system for, uh, disposing of the different pieces or the different devices that we can't repair? How do you think about that more sustainable future once everything is better in the first place in terms of being able to repair things?

GAY GORDON-BYRNE
Excellent question. We have a number of members that are what we call charitable recyclers. And I think that's a model for more, rather than less. They don't even have to be gently used. They just have to be potentially useful. And they'll take them in. They will fix them. They will train people, often people that have some employment challenges, especially coming out of the criminal justice system.  And they'll train them to make repairs and they both get a skill, a marketable skill for future employment. And they also, they also turn around and then resell those devices to make money to keep the whole system going.

But in the commercial recycling business, there's a lot of value in the things that have been discarded if they can have their batteries removed before, before they are, quote, recycled, because recycling is a very messy business and it requires physical contact with the device to the point that it's shredded or crushed. And if we can intercept some of that material before it goes to the crusher, we can reuse more of that material. And I think a lot of it can be reused very effectively in downstream markets, but we don't have those markets because we can't fix the products that are broken.

CINDY COHN
Yep. There's a whole chain of good that starts happening if we can begin to start fixing things, right? It's not just the individuals get to fix the things that they get, but it sets off kind of a cycle of things, a happy cycle of things that get better all along the way.

GAY GORDON-BYRNE
Yep, and that can be, that can happen right now, well, I should say as soon as these laws start taking effect, because a lot of the information parts and tools that are required under the laws are immediately useful.

CINDY COHN
Right. So tell me, how do these laws work? What do they, what, the good ones anyway, what are, what are they doing? How are things changing with the current flock of laws that are just now coming online?

GAY GORDON-BYRNE
Well, they're all pretty much the same. They require manufacturers of things that they already repair, so there's some limitations right there, to make available on fair and reasonable terms the same parts, tools, diagnostics, and firmware that they already provide to their quote authorized or their subcontract repair providers because our original intent was to restore competition. So the bills are really a pro competition law as opposed to an e-waste law.

CINDY COHN  
Mm hmm.

GAY GORDON-BYRNE
Because these don't cover everything. They cover a lot of stuff, but not everything. California is a little bit different in that they already had a statute that required things over $50 and under $100 to be covered for three years. They have some dates in there that expand the effectiveness of the bill into products that don't even have repair options today.

But the bills that we've been promoting are a little softer, because the intent is competition, because we want to see what competition can do, when we unlock competition, what that does for consumers.

CINDY COHN  
Yeah, and I think that that dovetails nicely into something EFF has been working on quite a while now, which is interoperability, right? One of the things that unlocks competition is, you know, requiring people to build their tools and services in a way that are interoperable with others, that helps both with repair and with kind of follow on innovation that, you know, you can switch up how your Facebook feed shows up based on what you want to see rather than, you know, based upon what Facebook's algorithm wants you to see or other kinds of changes like that. And how do you see interoperability fitting into all of this?

GAY GORDON-BYRNE
I think there will be more. It's not specific to the law, but I think it will simply happen as people try to comply with the law. 

Music break

CINDY COHN  
You founded the Repair Association, so tell us a little bit about how that got started and how you decided to dedicate your life to this. I think it's really important for us to think about, like, the people that are needed to build a better world, as well as the, you know, kind of technologies and ideas.

GAY GORDON-BYRNE
I was always in the computer industry. I grew up with my father who was a computer architect in the 50s and 60s. So I never knew a world that didn't involve computers. It was what dad did. And then when I needed a job out of college, and having bounced around a little bit and found not a great deal of success, my father encouraged me to take a job selling computers, because that was the one thing he had never done and thought that it was missing from his resume.

And I took to it like, uh, I don't know, fish to water? I loved it. I had a wonderful time and a wonderful career. But by the mid 2000s, I was done. I mean, I was like, I can't stand this job anymore. So I decided to retire. I didn't like being retired. I started doing other things and eventually, I started doing some work with a group of companies that repair large mainframes.

I've known them. I mean, my former boss was the president. It was kind of a natural. And they started having trouble with some of the manufacturers and I said, that's wrong. I mean, I had this sense of indignation that what Oracle had done when they bought Sun was just flatly wrong and it was illegal. And I volunteered to join a committee. And that's when, haha, that's when I got involved and it was basically, I tell people I over-volunteered.

CINDY COHN
Yeah.

GAY GORDON-BYRNE
And what happened is that because I was the only person in that organization that didn't already have relationships with manufacturers, that they couldn't, they couldn't bite the hand that fed them, I was elected chief snowball thrower. AKA Executive Director. 

So it was a passion project that I could afford to do because otherwise I was going to stay home and knit. So this is way better than knitting or quilting these days, way more fun, way more gratifying. I've had a truly wonderful experience, met so many fabulous people, have a great sense of impact that I would never have had with quilting.

CINDY COHN
I just love the story of somebody who kind of put a toe in and then realized, Oh my God, this is so important. And ‘I found this thing where I can make the world better.’ And then you just get, you know, kind of, you get sucked in and, um, but it's, it's fun. And what I really appreciate about the Repair Association and the Right to Repair people is that while, you know, they're working with very serious things, they also, you know, there's a lot of fun in making the world a better place.

And it's kind of fun to be involved in the Right to Repair right now because after a long time kind of shouting in the darkness, there's some traction starting to happen. So then the fun gets even more fun.

GAY GORDON-BYRNE
I can tell you it's ... We're so surprised. I mean, it took, we've had over, well, well over 100 bills filed and, you know, every year we get a little further. We get past this committee and this hurdle and this hurdle and this hurdle. We get almost to the end and then something would happen. And to finally get to the end where the bill becomes law? It's like the dog that chases the car, and you go, we caught the car, now what?

CINDY COHN
Yeah. Now you get to fix it! The car!

JASON KELLEY
Yeah, now you can repair the car.

MUSIC TRANSITION

JASON KELLEY
That was such a wonderful, optimistic conversation and not the first one we've had this season. But this one is interesting because we're actually already getting where we want to be. We're already building the future that we want to live in and it's just really, really pleasing to be able to talk to someone who's in the middle of that and, and making sure that that work happens.

CINDY COHN
I mean, one of the things that really struck me is how much of the better future that we're building together is really about creating new jobs and new opportunities for people to work. I think there's a lot of fear right now in our community that the future isn't going to have work, and that without a social safety net or other kinds of things, you know, it's really going to hurt people.

And I so appreciated hearing about how, you know, Main Street's going to have more jobs. There's going to be people in your local community who can fix your things locally, because devices, those are things where having a local repair community and businesses is really helpful to people.

And so I also kind of, the flip side of that is this interesting observation that one of the things that's happened as a result of shutting off the Right to Repair is an increasing centralization, um, that the jobs that are happening in this thing are not happening locally and that by unlocking the right to repair, we're going to unlock some local opportunities for economic things.

I mean, you know, EFF thinks about this both in terms of empowering users, but also in terms of competition. And the thing about Right to Repair is it really does unlock kind of hyper-local competition.

JASON KELLEY
I hadn't really thought about how specifically local it is to have a repair shop that you can just bring your device to. And right now it feels like the options are if you live near an Apple store, for example, maybe you can bring your phone there and then they send it somewhere. I'd much rather go to someone, you know, in my town that I can talk to, and who can tell me about what needs to be done. That's such a benefit of this movement that a lot of people aren't even really putting on the forefront, but it really is something that will help people actually get work and, and, and help the people who need the work and the people who need the job done.

CINDY COHN
Another thing that I really appreciate about the Right to Repair movement is how universal it is. Everyone experiences some version of this, you know, from the refrigerator story to my espresso machine, to any number of other stories, to the farmers – everyone has some version of how this needs to be fixed.

And the other thing that I really appreciate about Gay's stories about the Right to Repair movement is that, you know, she's somebody who comes out of computers, and was thinking about this from the context of computers, and didn't really realize that farmers were having the same problem.

Of course, we all kind of know analytically that a lot of the movement in a lot of industries is towards centralizing computers. You know, tractors are now computers with gigantic wheels. Cars are now computers with smaller wheels. Computers have become central to these kinds of things. I think the realization that we have silos of users who are experiencing a version of the same problem depending on what kind of tool they're using, and connecting those silos together so that together we stand as a much bigger voice, is something that the Right to Repair folks have really done well, and it is a good lesson for the rest of us.

JASON KELLEY
Yeah, I think we talked a little bit with Adam Savage when he was on a while ago about this sort of gatekeeping and how effective it is to remove the gatekeepers from these movements and say, you know, we're all fighting the same fight. And it just goes to show you that it actually works. I mean, not only does it get everybody on the same page, but unlike a lot of movements, I think you can really see the impact that the Right to Repair movement has had. 

And we talked with Gay about this and it's just, it really, I think, should make people come away optimistic that advocacy like this works over time. You know, it's not a sprint, it's a marathon, and we have actually crested a sort of hill in some ways.

There's a lot of work to be done, but it's, it's actually work that we probably will be able to get done and, and that we're seeing the benefits of today.

CINDY COHN
Yeah. And as we start to see benefits, we're going to start to see more benefits. I appreciate her point that we're in, you know, we're in the hole-plugging period where, you know, we got something passed and we need to plug the holes. But I also think once people start feeling the power of having the Right to Repair again, I think, I hope, it will help snowball.

One of the things that she said that I have observed as well is that sometimes it feels like nothing's happening, nothing's happening, nothing's happening, and then all of a sudden it's all happening. And I think that that's one of the, the kind of flows of advocacy work that I've observed over time and it's fun to see the, the Right to Repair Coalition kind of getting to experience that wave, even if it can be a little overwhelming sometimes.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode you heard “Come Inside” by Zep Hurme featuring snowflake and “Drops of H2O (The Filtered Water Treatment)” by J.Lang featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast. 

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

U.S. Senate and Biden Administration Shamefully Renew and Expand FISA Section 702, Ushering in a Two-Year Expansion of Unconstitutional Mass Surveillance

One week after it was passed by the U.S. House of Representatives, the Senate has passed what Senator Ron Wyden has called, “one of the most dramatic and terrifying expansions of government surveillance authority in history.” President Biden then rushed to sign it into law.  

The perhaps ironically named “Reforming Intelligence and Securing America Act (RISAA)” does everything BUT reform Section 702 of the Foreign Intelligence Surveillance Act (FISA). RISAA not only reauthorizes this mass surveillance program, it greatly expands the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. The bill’s only significant “compromise” is a limited, two-year extension of this mass surveillance. But overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large telecommunications service providers: massive amounts of traffic on the Internet backbone are accessed and those communications on the government’s secret list are copied. And that’s just one part of the massive, expensive program. 

While Section 702 prohibits the NSA and FBI from intentionally targeting Americans with this mass surveillance, these agencies routinely acquire a huge amount of innocent Americans' communications “incidentally.” The government can then conduct backdoor, warrantless searches of these “incidentally collected” communications.

The government cannot even follow the very lenient rules about what it does with the massive amount of information it gathers under Section 702, repeatedly abusing this authority by searching its databases for Americans’ communications. In 2021 alone, the FBI reported conducting up to 3.4 million warrantless searches of Section 702 data using Americans’ identifiers. Given this history of abuse, it is difficult to understand how Congress could decide to expand the government’s power under Section 702 rather than rein it in.

One of RISAA’s most egregious expansions is its large but ill-defined increase of the range of entities that have to turn over information to the NSA and FBI. This provision allegedly “responds” to a 2023 decision by the FISC Court of Review, which rejected the government’s argument that an unknown company was subject to Section 702 in some circumstances. While the New York Times reports that the unknown company from this FISC opinion was a data center, this new provision is written so expansively that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider. This could potentially include landlords, maintenance people, and many others who routinely have access to your communications on the interconnected internet.

This is to say nothing of RISAA’s other substantial expansions. RISAA changes FISA’s definition of “foreign intelligence” to include “counternarcotics”: this will allow the government to use FISA to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths,” but also to any of their precursors. While surveillance under FISA has (contrary to what most Americans believe) never been limited exclusively to terrorism and counterespionage, RISAA’s expansion of FISA to ordinary crime is unacceptable.

RISAA also allows the government to use Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released in 2023, the FISC repeatedly denied government attempts to obtain some version of this authority, before finally approving it for the first time in 2023. By formally lowering Section 702’s protections for immigrants and asylum seekers, RISAA exacerbates the risk that government officials could discriminate against members of these populations on the basis of their sexuality, gender identity, religion, or political beliefs.

Faced with massive pushback from EFF and other civil liberties advocates, some members of Congress, like Senator Ron Wyden, raised the alarm. We were able to squeeze out a couple of small concessions. One was a shorter reauthorization period for Section 702, meaning that the law will be up for review in just two more years. Also, in a letter to Congress, the Department of Justice claimed it would only interpret the new provision to apply to the type of unidentified businesses at issue in the 2023 FISC opinion. But a pinky promise from the current Department of Justice is not enforceable and is easily disregarded by a future administration. There is some possible hope here, because Senator Mark Warner promised to return to the provision in a later defense authorization bill, but this whole debacle just demonstrates how Congress gives the NSA and FBI nearly free rein when it comes to protecting Americans – any limitation that actually protects us (and here the FISA Court actually did some protecting) is just swept away.

RISAA’s passage is a shocking reversal—EFF and our allies had worked hard to put together a coalition aimed at enacting a warrant requirement for Americans and some other critical reforms, but the NSA, FBI and their apologists just rolled Congress with scary-sounding (and incorrect) stories that a lapse in the spying was imminent. It was a clear dereliction of Congress’s duty to oversee the intelligence community in order to protect all of the rest of us from its long history of abuse.

After more than 20 years of this work, we know that rolling back any surveillance authority, especially one as deeply entrenched as Section 702, is an uphill fight. But we aren’t going anywhere. We had more Congressional support this time than we’ve had in the past, and we’ll be working to build on that over the next two years.

Too many members of Congress (and the Administrations of both parties) don’t see any downside to violating your privacy and your constitutional rights in the name of national security. That needs to change.

Internet Service Providers Plan to Subvert Net Neutrality. Don’t Let Them

In the absence of strong net neutrality protections, internet service providers (ISPs) have made all sorts of plans that would allow them to capitalize on something called "network slicing." While this technology has all sorts of promise, what the ISPs have planned would subvert net neutrality—the principle that all data be treated equally by your service provider—by allowing them to recreate the kinds of “fast lanes” we've already agreed should not be allowed. If their plans succeed, then the new proposed net neutrality protections will end up doing far less for consumers than the old rules did.

The FCC released draft rules to reinstate net neutrality, with a vote on adopting the rules set for April 25. Overall, the order is a great step for net neutrality. However, to be truly effective, the rules must not preempt states from protecting their residents with stronger laws, and must clearly find that the creation of “fast lanes” via positive discrimination and unpaid prioritization of specific applications or services violates net neutrality.

Fast Lanes and How They Could Harm Competition

Since “fast lanes” aren’t a technical term, what do we mean when we talk about a fast lane? To understand, it helps to think about data traffic and internet networking infrastructure like car traffic and public road systems. As roads connect people, goods, and services across distances, so does network infrastructure allow data traffic to flow from one place to another. And just as a road with more capacity in the way of more lanes theoretically means the road can support more traffic moving at speed[1], internet infrastructure with more “lanes” (i.e., bandwidth) should mean that a network can better support applications like streaming services and online gaming.

Each ISP’s network has a maximum capacity, and speed, of internet traffic it can handle. To continue the analogy, the road leading to your neighborhood has a set number of lanes. This is why the speed of your internet may change throughout the day: at peak hours, your internet service may slow down because too much requested traffic is clogging up the lanes.

It’s not inherently a bad thing to have specific lanes for certain types of traffic. Actual fast lanes on freeways can improve congestion by not making faster-moving vehicles compete for space with slower-moving traffic, and exit and entry lanes allow cars to perform specialized maneuvers without impeding other traffic. A lane only for buses isn’t a bad thing, as long as every bus gets equal access to that lane and everyone has equal access to riding those buses. Where this becomes a problem is if there is a special lane only for Google buses, or only for consuming entertainment content instead of participating in video calls. In these scenarios you would be increasing the quality of certain bus rides at the expense of degraded service for everyone else on the road.

An internet “fast lane” would be the designation of part of the network, with more bandwidth and/or lower latency, to be used only for certain services. On a technical level, the physical network infrastructure would be split among several different software-defined networks with different use cases, using network slicing. One network might be optimized for high-bandwidth applications such as video streaming, another might be optimized for applications needing low latency (e.g., a short round trip between the client and the server), and another might be optimized for IoT devices. The maximum physical network capacity is split among these slices. To continue our tortured metaphor, your original six-lane general road is now a four-lane general road with two lanes reserved for, say, a select list of streaming services. Think dedicated high-speed lanes for Disney+, HBO, and Netflix, but those services only. In a network-neutral construction of the infrastructure, all internet traffic shares all lanes, and no specific app or service is unfairly sped up or slowed down. This isn’t to say that we are inherently against network management techniques like quality of service or network slicing. But it’s important that quality-of-service efforts be undertaken, as much as possible, in an application-agnostic manner.
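To see why a reserved slice degrades everyone outside it even when nothing is deliberately slowed down, here is a toy Python model of the arithmetic. This is a minimal sketch: the capacity figure, service names, and demand numbers are all invented for illustration, and no real ISP configuration is shown.

    # Toy model: how reserving a network slice for chosen services
    # shifts congestion onto everyone else. All numbers are hypothetical.

    TOTAL_CAPACITY_MBPS = 600  # the "six lane" road in the metaphor

    def neutral_share(demands_mbps: dict[str, float]) -> dict[str, float]:
        """Every flow shares the whole pipe; congestion hits all equally."""
        scale = min(1.0, TOTAL_CAPACITY_MBPS / sum(demands_mbps.values()))
        return {app: d * scale for app, d in demands_mbps.items()}

    def sliced_share(demands_mbps: dict[str, float], fast_lane: set[str],
                     reserved_mbps: float) -> dict[str, float]:
        """Chosen apps get a reserved slice; the rest split what's left."""
        fast = {a: d for a, d in demands_mbps.items() if a in fast_lane}
        rest = {a: d for a, d in demands_mbps.items() if a not in fast_lane}
        fast_scale = min(1.0, reserved_mbps / sum(fast.values()))
        rest_scale = min(1.0,
                         (TOTAL_CAPACITY_MBPS - reserved_mbps) / sum(rest.values()))
        return ({a: d * fast_scale for a, d in fast.items()}
                | {a: d * rest_scale for a, d in rest.items()})

    demand = {"BigStream+": 250, "IndieFlix": 250, "VideoCalls": 250}
    print(neutral_share(demand))
    # All three slow equally: each gets 200 of its requested 250 Mbps.
    print(sliced_share(demand, fast_lane={"BigStream+"}, reserved_mbps=250))
    # BigStream+ gets its full 250; the other two drop to 175 each.

Under congestion, the neutral network slows every flow by the same fraction, while the sliced network leaves the favored service untouched and pushes all of the slowdown onto everyone else. That is exactly the relative throttling discussed below.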

The fast lanes metaphor isn’t ideal. On the road, having fast lanes is a good thing: they can protect slower, more cautious drivers from dangerous driving and improve the flow of traffic. Bike lanes are a good thing because they make cyclists safer and allow cars to drive more quickly without having to navigate around them. But with traffic lanes it’s the driver, not the road, that decides which lane they belong in (with penalties for doing obviously bad-faith things such as driving in the bike lane).

Internet service providers are already testing their ability to create these network slices. They already have plans to create market offerings where certain applications and services, chosen by them, are given exclusive reserved fast lanes while the rest of the internet must shoulder its way through what is left. This kind of network slicing is a violation of net neutrality. We aren’t against network slicing as a technology: it could be useful for things like remote surgery or vehicle-to-vehicle communication, which require low-latency connections and are in the public interest, and which are separate offerings not part of the broadband services covered in the draft order. We are against network slicing being used as a loophole to circumvent principles of net neutrality.

Fast Lanes Are a Clear Violation of Net Neutrality

Net neutrality is the principle that ISPs should treat all legitimate traffic coming over their networks equally; discriminating between certain applications or types of traffic is a clear violation of that principle. When fast lanes speed up certain applications or certain classes of applications, they cannot do so without having a negative impact on other internet traffic, even if only by comparison. This is throttling, plain and simple.

Further, because ISPs choose which applications or types of services get to be in the fast lane, they choose winners and losers within the internet, which has clear harms to both speech and competition. Whether your access to Disney+ is faster than your access to Indieflix because Disney+ is sped up or because Indieflix is slowed down doesn’t matter because the end result is the same: Disney+ is faster than Indieflix and so you are incentivized to use Disney+ over Indieflix.

ISPs should not be able to harm competition by deciding to prioritize incumbent services over new ones, or to make one political party’s website load faster than another’s. It is the consumer who should be in charge of what they do online. Fast lanes have no place in a network-neutral internet.

  • [1] Urban studies research shows that this isn’t actually the case; still, it remains the popular wisdom among politicians and urban planners.

EFF, Human Rights Organizations Call for Urgent Action in Case of Alaa Abd El Fattah

Following an urgent appeal filed to the United Nations Working Group on Arbitrary Detention (UNWGAD) on behalf of blogger and activist Alaa Abd El Fattah, EFF has joined 26 free expression and human rights organizations calling for immediate action.

The appeal to the UNWGAD was initially filed in November 2023 just weeks after Alaa’s tenth birthday in prison. The British-Egyptian citizen is one of the most high-profile prisoners in Egypt and has spent much of the past decade behind bars for his pro-democracy writing and activism following Egypt’s revolution in 2011.

EFF and Media Legal Defence Initiative submitted a similar petition to the UNWGAD on behalf of Alaa in 2014. This led to the Working Group issuing an opinion that Alaa’s detention was arbitrary and calling for his release. In 2016, the UNWGAD declared Alaa's detention (and the law under which he was arrested) a violation of international law, and again called for his release.

We once again urge the UN Working Group to urgently consider the recent petition and conclude that Alaa’s detention is arbitrary and contrary to international law. We also call for the Working Group to find that the appropriate remedy is a recommendation for Alaa’s immediate release.

Read our full letter to the UNWGAD and follow Free Alaa for campaign updates.

Congress: Don't Let Anyone Own The Law

We should all have the freedom to read, share, and comment on the laws we must live by. But yesterday, the House Judiciary Committee voted 19-4 to move forward the PRO Codes Act (H.R. 1631), a bill that would limit those rights in a critical area. 

TAKE ACTION

Tell Congress To Reject The Pro Codes Act

A few well-resourced private organizations have made a business of charging money for access to building and safety codes, even when those codes have been incorporated into law. 

These organizations convene volunteers to develop model standards, encourage regulators to make those standards into mandatory laws, and then sell copies of those laws to the people (and city and state governments) that have to follow and enforce them.

They’ve claimed that these standards are their copyrighted material. But court after court has said that you can’t use copyright in this way: no one “owns” the law. The Pro Codes Act undermines that rule and the public interest, changing the law to state that the standards organizations that write these rules “shall retain” a copyright in them, as long as the rules are made “publicly accessible” online.

That’s not nearly good enough. These organizations already have so-called online reading rooms that aren’t searchable, aren’t accessible to print-disabled people, and condition your ability to read mandated codes on agreeing to onerous terms of use, among many other problems. That’s why the Association of Research Libraries sent a letter to Congress last week (supported by EFF, disability rights groups, and many others) explaining how the Pro Codes Act would trade away our right to truly understand and educate our communities about the law for cramped public access to it. Congress must not let well-positioned industry associations abuse copyright to control how you access, use, and share the law. Now that this bill has passed committee, we urgently need your help—tell Congress to reject the Pro Codes Act.

TAKE ACTION

TELL CONGRESS: No one owns the law

Two Years Post-Roe: A Better Understanding of Digital Threats

It’s been a long two years since the Dobbs decision overturned Roe v. Wade. Between May 2022, when the draft opinion was leaked, and the following June, when the case was decided, there was a mad scramble to figure out what the impacts would be. Besides the obvious peril of stripping away half the country’s right to reproductive healthcare, digital surveillance and mass data collection caused a flurry of concerns.

Although many activists fighting for reproductive justice had been operating under the assumption of little to no legal protection for some time, the Dobbs decision was for most a sudden and scary revelation. Everyone implicated in that moment understood, at least in part, the stark difference between pre-Roe 1973 and post-Roe 2022: living under the most sophisticated surveillance apparatus in human history presents a vastly different landscape of threats. Since 2022, some suspicions have been confirmed, new threats have emerged, and overall our risk assessment has grown smarter. Below, we cover the most pressing digital dangers facing people seeking reproductive care, and ways to combat them.

Digital Evidence in Abortion-Related Court Cases: Some Examples

Social Media Message Logs

A case in Nebraska resulted in a woman, Jessica Burgess, being sentenced to two years in prison for obtaining abortion pills for her teenage daughter. Prosecutors used a Facebook Messenger chat log between Jessica and her daughter as key evidence, bolstering the concerns many had raised about using such privacy-invasive tech products for sensitive communications. At the time, Facebook Messenger did not have end-to-end encryption.

In response to criticisms about Facebook’s cooperation with law enforcement that landed a mother in prison, a Meta spokesperson issued a frustratingly laconic tweet stating that “[n]othing in the valid warrants we received from local law enforcement in early June, prior to the Supreme Court decision, mentioned abortion.” They followed this up with a short statement reiterating that the warrants did not mention abortion at all. The lesson is clear: although companies do sometimes push back against data warrants, we have to prepare for the likelihood that they won’t.

Google: Search History & Warrants

Well before the Dobbs decision, prosecutors had already used Google Search history to indict a woman for her pregnancy outcome. In this case, it was keyword searches for misoprostol (a safe and effective abortion medication) that clinched the prosecutor’s evidence against her. Google acquiesced, as it so often has, to the warrant request.

Related to this is the ongoing and extremely complicated territory of reverse keyword and geolocation warrants. Google has promised that it would remove from user profiles all location data history related to abortion clinic sites. Researchers tested this claim and twice showed it to be false. Late in 2023, Google made a bigger promise: it would soon change how it stores location data to make it much more difficult, if not impossible, for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. This would be a genuinely helpful measure, but we’ve been conditioned to approach such claims with caution. We’ll believe it when we see it (and refer to external testing for proof).
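To make concrete what a reverse location search asks for, here is a minimal, purely illustrative Python sketch of the kind of query a geofence warrant effectively runs over stored location history. The record layout, field names, and radius are assumptions made up for this example; no real provider’s schema or API is depicted.

    # Illustrative only: a geofence query over a hypothetical table of
    # location pings. Real providers' data stores are nothing this simple.
    from dataclasses import dataclass
    from datetime import datetime
    from math import asin, cos, radians, sin, sqrt

    @dataclass
    class LocationPing:
        device_id: str
        lat: float
        lon: float
        seen_at: datetime

    def km_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
        """Great-circle distance in kilometers (haversine formula)."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dlat / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
        return 6371 * 2 * asin(sqrt(a))

    def geofence(pings: list[LocationPing], lat: float, lon: float,
                 radius_km: float, start: datetime, end: datetime) -> set[str]:
        """Every device seen inside the fence during the time window."""
        return {p.device_id for p in pings
                if start <= p.seen_at <= end
                and km_between(p.lat, p.lon, lat, lon) <= radius_km}

The chilling part is the shape of the query: it starts from a place and a time window, not from a suspect, and returns everyone who was nearby. Google’s promised storage change matters because a centralized table like this would no longer exist for it to search.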

Other Dangers to Consider

Doxxing

Sites propped up for doxxing healthcare professionals who offer abortion services are about as old as the internet itself. Doxxing comes in a variety of forms, but a quick and loose definition of it is the weaponization of open source intelligence with the intention of escalating to other harms. There’s been a massive increase in hate groups abusing public records requests and data broker collections to publish personal information about healthcare workers. Doxxing websites hosting such material are updated frequently, and doxxing has led to steadily rising material dangers (targeted harassment, gun violence, and arson, to name just a few) for the past few years.

There are some piecemeal attempts at data protection for healthcare workers in more protective states like California (one of which we’ve covered). Other states may offer some form of an address confidentiality program that provides people with proxy addresses. Though these can be effective, they are not comprehensive. Since doxxing campaigns are typically coordinated through a combination of open source intelligence tactics, they present a particularly difficult threat to protect against. This is especially true for government and medical industry workers whose information may be exposed through public records requests.

Data Brokers

Recently, Senator Wyden’s office released a statement about a long investigation into Near Intelligence, a data broker company that sold geolocation data to The Veritas Society, an anti-choice think tank. The Veritas Society then used the geolocation data to target individuals who had traveled near healthcare clinics that offered abortion services and delivered pro-life advertisements to their devices.

That alone is a stark example of the dangers of commercial surveillance, but it’s still unclear what other ways this type of dataset could be abused. Near Intelligence has filed for bankruptcy, but they are far from the only, or the most pernicious, data broker company out there. This situation bolsters what we’ve been saying for years: the data broker industry is a dangerously unregulated mess of privacy threats that needs to be addressed. It not only contributes to the doxxing campaigns described above, but essentially creates a backdoor for warrantless surveillance.

Domestic Terrorist Threat Designation by Federal Agencies

Midway through 2023, The Intercept published an article about a tenfold increase in federal designations of abortion-rights activist groups as domestic terrorist threats. This casts a massive shadow of risk over organizers and activists at work in the struggle for reproductive justice. The digital surveillance capabilities of federal law enforcement are more sophisticated than those of typical anti-choice zealots. Most people in the abortion access movement may not have to worry about being labeled a domestic terrorist threat, but for some that is a reality, and strategizing against it is vital.

Looming Threats

Legal Threats to Medication Abortion

Last month, the Supreme Court heard oral arguments challenging the FDA’s approval of and regulations governing mifepristone, a widely available and safe abortion pill. If the anti-abortion advocates who brought this case succeed, access to the most common medication abortion regimen used in the U.S. would end across the country—even in those states where abortion rights are protected.

Access to abortion medication might also be threatened by a 150-year-old obscenity law. Many people now recognize the long-dormant Comstock Act as a potential avenue to criminalize procurement of the abortion pill.

Although the outcomes of these legal challenges are yet to be determined, it’s reasonable to prepare for the worst: if there is no longer a way to access medication abortion legally, there will be even more surveillance of the digital footprints prescribers and patients leave behind.

Electronic Health Records Systems

Electronic Health Records (EHRs) are digital transcripts of medical information meant to be easily stored and shared between medical facilities and providers. Since abortion restrictions are now dictated on a state-by-state basis, the sharing of these records across state lines presents a serious matrix of concerns.

As some academics and privacy advocates have outlined, the interoperability of EHRs can jeopardize the safety of patients when reproductive healthcare data is shared across state lines. Although the Department of Health and Human Services has proposed a new rule to help protect sensitive EHR data, it’s currently possible that data shared between EHRs can lead to the prosecution of those seeking or providing reproductive healthcare.

The Good Stuff: Protections You Can Take

Perhaps the most frustrating aspect of what we’ve covered thus far is how much is beyond individual control. It’s completely understandable to feel powerless against these monumental threats. That said, you aren’t powerless. Much can be done to protect your digital footprint, and thus, your safety. We don’t propose reinventing the wheel when it comes to digital security and data privacy. Instead, rely on the resources that already exist and re-tool them to fit your particular needs. Here are some good places to start:

Create a Security Plan

It’s impossible, and generally unnecessary, to implement every privacy and security tactic or tool out there. What’s more important is figuring out the specific risks you face and finding the right ways to protect against them. This process takes some brainstorming around potentially scary topics, so it’s best done well before you are in any kind of crisis. Pen and paper works best. Here's a handy guide.

After you’ve answered those questions and figured out your risks, it’s time to locate the best ways to protect against them. Don’t sweat it if you’re not a highly technical person; many of the strategies we recommend can be applied in non-tech ways.

Careful Communications

Secure communication is as much a frame of mind as it is a type of tech product. When you are able to identify which aspects of your life need to be spoken about more carefully, you can then make informed decisions about who to trust with what information, and when. It’s as much about creating ground rules with others about types of communication as it is about normalizing the use of privacy technologies.

Assuming you’ve already created a security plan and identified some risks you want to protect against, begin thinking about the communication you have with others involving those things. Set some rules for how you broach those topics, where they can be discussed, and with whom. Sometimes this might look like the careful development of codewords. Sometimes it’s as easy as saying “let’s move this conversation to Signal.” Now that Signal supports usernames (so you can keep your phone number private), as well as disappearing messages, it’s an obvious tech choice for secure communication.

Compartmentalize Your Digital Activity

As mentioned above, it’s important to know when to compartmentalize sensitive communications to more secure environments. You can expand this idea to other parts of your life. For example, you can designate different web browsers for different use cases, choosing those browsers for the privacy they offer. One might offer significant convenience for day-to-day casual activities (like Chrome), whereas another is best suited for activities that require utmost privacy (like Tor).

Now apply this thought process towards what payment processors you use, what registration information you give to social media sites, what profiles you keep public versus private, how you organize your data backups, and so on. The possibilities are endless, so it’s important that you prioritize only the aspects of your life that most need protection.

Security Culture and Community Care

Both tactics mentioned above incorporate a sense of community when it comes to our privacy and security. We’ve said it before and we’ll say it again: privacy is a team sport. People live in communities built on trust and care for one another; your digital life is imbricated with others in the same way.

If a node on a network is compromised, it will likely implicate others on the same network. This principle of computer network security is just as applicable to social networks. Although traditional information security often builds from a paradigm of “zero trust,” we are social creatures and must work against that idea. It’s more about incorporating elements of shared trust and pushing for a culture of security.

Sometimes this looks like setting standards for how information is articulated and shared within a trusted group. Sometimes it looks like choosing privacy-focused technologies to serve a community’s computing needs. The point is to normalize these types of conversations, to let others know that you’re caring for them by attending to your own digital hygiene. For example, when you ask for consent to share images that include others from a protest, you are not only pushing for a culture of security, but normalizing the process of asking for consent. This relationship of community care through data privacy hygiene is reciprocal.

Help Prevent Doxxing

As touched on in the “Other Dangers to Consider” section above, doxxing can be a frustratingly difficult thing to protect against, especially when it’s public records that are being used against you. It’s worth looking into your state-level voter registration records, whether that information is public, and how you can request that it be redacted (success may vary by state).

Similarly, although business registration records are publicly available, you can appeal to websites that mirror that information (like Bizapedia) to have your personal information taken down. This is of course only a concern if you have a business registration tied to your personal address.

If you work for a business that is susceptible to public records requests revealing sensitive personal information about you, there’s little to be done to prevent it. You can, however, apply for an address confidentiality program if your state has one. You can also do the somewhat tedious work of scrubbing your personal information from other places online (since doxxing is often a combination of information resources). Consider subscribing to a service like DeleteMe (or following a free DIY guide) for a more thorough process of minimizing your digital footprint. Collaborating with trusted allies to monitor hate forums is a smart way to unburden yourself from having to look up your own information alone. Sharing that responsibility with others makes the work easier, and it helps with group planning for prevention and incident response.

Take a Deep Breath

It’s natural to feel bogged down by all the thought that has to be put towards privacy and security. Again, don’t beat yourself up for feeling powerless in the face of mass surveillance. You aren’t powerless. You can protect yourself, but it’s reasonable to feel frustrated when there is no comprehensive federal data privacy legislation that would alleviate so many of these concerns.

Take a deep breath. You’re not alone in this fight. There are guides for you to learn more about stepping up your privacy and security. We've even curated a special list of them. And there is Digital Defense Fund, a digital security organization for the abortion access movement, which we are grateful and proud to boost. And though it can often feel like privacy is getting harder to protect, in many ways it’s actually improving. With all that information, continued trust in your communities, and a culture of security within them, safety is much easier to attain. With a bit of privacy, you can go back to focusing on what matters, like healthcare.

Fourth Amendment is Not For Sale Act Passed the House, Now it Should Pass the Senate

The Fourth Amendment is Not For Sale Act, H.R.4639, originally introduced in the Senate by Senator Ron Wyden in 2021, has now made the important and historic step of passing the U.S. House of Representatives. In an era when it often seems like Congress cannot pass much-needed privacy protections, this is a victory for vulnerable populations, people who want to make sure their location data is private, and the hard-working activists and organizers who have pushed for the passage of this bill.

Every day, your personal information is harvested by your smartphone applications, sold to data brokers, and used by advertisers hoping to sell you things. But what safeguards prevent the government from shopping in that same data marketplace? Mobile data regularly bought and sold, like your geolocation, is information that law enforcement or intelligence agencies would normally have to get a warrant to acquire. But no warrant is required when those agencies simply buy the data. The U.S. government has been using this purchase of information as a loophole for acquiring personal information on individuals without a warrant.

Now is the time to close that loophole.

At EFF, we’ve been talking about the need to close the data broker loophole for years. We even launched a massive investigation into the data broker industry, which revealed Fog Data Science, a company that has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn where its subjects work and live, and who their associates are. We found close to 20 law enforcement agencies that used or were offered this tool.

It’s time for the Senate to close this incredibly dangerous and invasive loophole. If police want a person’s (or a whole community’s) location data, they should have to get a warrant to see it.

Take action

TELL congress: 702 Needs serious reforms

About Face (Recognition) | EFFector 36.5

There are a lot of updates in the fight for our freedoms online, from a last-minute reauthorization bill to expand Section 702 (tell your senators to vote NO on the bill here!), a new federal consumer data privacy law (we deserve better!), and a recent draft from the FCC to reinstate net neutrality (you can help clean it up!).

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.5 - About Face (Recognition)

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

How Political Campaigns Use Your Data to Target You

Data about potential voters—who they are, where they are, and how to reach them—is an extremely valuable commodity during an election year. And while the right to a secret ballot is a cornerstone of the democratic process, your personal information is gathered, used, and sold along the way. It's not possible to fully shield yourself from all this data processing, but you can take steps to at least minimize and understand it.

Political campaigns use the same invasive tricks that behavioral ads do—pulling in data from a variety of sources online to create a profile—so they can target you. Your digital trail is a critical tool for campaigns, but the process starts in the real world, where longstanding techniques to collect data about you can be useful indicators of how you'll vote. This starts with voter records.

Your IRL Voting Trail Is Still Valuable

Politicians have long had access to public data, like voter registration, party registration, address, and participation information (whether or not a voter voted, not who they voted for). Online access to such records has made them easier to get in some states, with unintended consequences, like doxxing.

Campaigns can purchase this voter information from most states. These records provide a rough idea of whether that person will vote or not, and—if they're registered to a particular party—who they might lean toward voting for. Campaigns use this to put every voter into broad categories, like "supporter," "non-supporter," or "undecided." Campaigns gather such information at in-person events, too, like door-knocking and rallies, where you might sign up for emails or phone calls.

Campaigns also share information about you with other campaigns, so if you register with a candidate one year, it's likely that information goes to another in the future. For example, the website for Adam Schiff’s campaign to serve as U.S. Senator from California has a privacy policy with this line under “Sharing of Information”:

With organizations, candidates, campaigns, groups, or causes that we believe have similar political viewpoints, principles, or objectives or share similar goals and with organizations that facilitate communications and information sharing among such groups

Similar language can be found on other campaign sites, including those for Elizabeth Warren and Ted Cruz. These candidate lists are valuable, and are often shared within the national party. In 2017, the Hillary Clinton campaign gave its email list to the Democratic National Committee, a contribution valued at $3.5 million.

If you live in a state with citizen initiative ballot measures, data collected from signature sheets might be shared or used as well. Signing a petition doesn't necessarily mean you support the proposed ballot measure—it's just saying you think it deserves to be put on the ballot. But in most states, these signature pages will remain a part of the public record, and the information you provide may get used for mailings or other targeted political ads. 

How Those Voter Records, and Much More, Lead to Targeted Digital Ads

All that real world information is just one part of the puzzle these days. Political campaigns tap into the same intrusive adtech tracking systems used to deliver online behavioral ads. We saw a glimpse into how this worked after the Cambridge Analytica scandal, and the system has only grown since then.

Specific details are often a mystery, as a political advertising profile may be created by combining disparate information—from consumer scoring data brokers like Acxiom or Experian, smartphone data, and publicly available voter information—into a jumble of data points that’s often hard to trace in any meaningful way. A simplified version of the whole process might go something like this:

  1. A campaign starts with its voter list, which includes names, addresses, and party affiliation. It may have purchased this from the state or its own national committee, or collected some of it for itself through a website or app.
  2. The campaign then turns to a data broker to enhance this list with consumer information. The data broker combines the voter list with its own data, then creates a behavioral profile using inferences based on your shopping, hobbies, demographics, and more. The campaign looks this all over, then chooses some categories of people it thinks will be receptive to its messages in its various targeted ads (a simplified sketch of this enrichment step follows this list).
  3. Finally, the campaign turns to an ad targeting company to get the ad on your device. Some ad companies might use an IP address to target the ad to you. As The Markup revealed, other companies might target you based on your phone's location, which is particularly useful in reaching voters not in the campaign's files. 
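To make step 2 less abstract, here is a toy Python sketch of that enrichment step. Every name, field, and inference below is invented for illustration; real brokers use proprietary matching and scoring that outsiders can only infer from records like the CCPA file shown below.

    # Illustrative only: joining a voter file to broker-held consumer data
    # to assign a targeting "segment." All records here are fabricated.

    voter_file = [
        {"name": "J. Doe", "address": "12 Oak St", "party": "unaffiliated"},
    ]

    broker_profiles = {
        ("J. Doe", "12 Oak St"): {"ev_buyer": True, "streams_sports": False},
    }

    def enrich(voters, profiles):
        """Join voter records to consumer profiles on name + address."""
        for voter in voters:
            record = dict(voter)  # copy so the input list isn't mutated
            record.update(profiles.get((record["name"], record["address"]), {}))
            # A stand-in for a broker's proprietary scoring model:
            record["segment"] = ("climate_persuadable"
                                 if record.get("ev_buyer")
                                 and record["party"] == "unaffiliated"
                                 else "default")
            yield record

    for row in enrich(voter_file, broker_profiles):
        print(row["name"], "->", row["segment"])  # J. Doe -> climate_persuadable

The structural point: an innocuous public record becomes a targeting profile the moment it can be joined to broker-held consumer data.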

In 2020, Open Secrets found political groups paid 37 different data brokers at least $23 million for access to services or data. These data brokers collect information from browser cookies, web beacons, mobile phones, social media platforms, and more. They found that some companies specialize in more general data, while others, like i360, TargetSmart, and Grassroots Analytics, focus on data useful to campaigns or advocacy.

[Screenshot: a spreadsheet of data broker categories including “Qanon,” “Rightwing Militias,” “Right to Repair,” “Inflation Fault,” “Electric Vehicle Buyer,” “Climate Change,” and “Amazon Worker Treatment.”]

A sample of some categories and inferences in a political data broker file that we received through a CCPA request shows the wide variety of assumptions these companies may make.

These political data brokers make a lot of promises to campaigns. TargetSmart claims to have 171 million highly accurate cell phone numbers, and i360 claims to have data on 220 million voters. They also tend to offer specialized campaign categories that go beyond the offerings of consumer-focused data brokers. Check out data broker L2’s “National Models & Predictive Analytics” page, which breaks down interests, demographics, and political ideology, including details like “Voter Fraud Belief” and “Ukraine Continue.” The New York Times demonstrated a particularly novel approach to these sorts of profiles, in which a voter analytics firm created a “Covid concern score” by analyzing cell phone location, then ranked people based on travel patterns during the pandemic.

Some of these companies target based on location data. For example, El Toro claims to have once “identified over 130,000 IP-matched voter homes that met the client’s targeting criteria. El Toro served banner and video advertisements up to 3 times per day, per voter household – across all devices within the home.”

That “all devices within the home” claim may prove important in the coming elections: as streaming video services integrate more ad-based subscription tiers, that likely means more political ads this year. One company, AdImpact, projects $1.3 billion in political ad spending on “connected television” ads in 2024. This may be driven in part by the move away from tracking cookies, which makes web browsing data less appealing.

In the case of connected televisions, ads can also integrate data based on what you've watched, using information collected through automated content recognition (ACR). Streaming device maker and service provider Roku's pitch to potential political advertisers is straightforward: “there’s an opportunity for campaigns to use their own data like never before, for instance to reach households in a particular district where they need to get out the vote.” Roku claims to have at least 80 million users. As a platform for televisions and “streaming sticks,” and especially if you opted into ACR (we’ll detail how to check below), Roku can collect and use a lot of your viewing data, ranging from streaming apps to broadcast TV and even video games.

This is vastly different from traditional broadcast TV ads, which might be targeted broadly based on a city or state, and the show being aired. Now, a campaign can target an ad at one household, but not their neighbor, even if they're watching the same show. Of the main streaming companies, only Amazon and Netflix don’t accept political ads.

Finally, there are Facebook and Google, two companies that have amassed a mountain of data points about all their users, and which allow campaigns to target based on some of those factors. According to at least one report, political ad spending on Google (mostly through YouTube) is projected to be $552 million, while Facebook is projected at $568 million. Unlike the data brokers discussed above, most of what you see on Facebook and Google is derived from the data collected by the company from its users. This may make it easier to understand why you’re seeing a political ad, for example, if you follow or view content from a specific politician or party, or about a specific political topic.

What You Can Do to Protect Your Privacy

Managing the flow of all this data might feel impossible, but you can take a few important steps to minimize what’s out there. The chances you’ll catch everything are low, but minimizing what is accessible is still a privacy win.

Install Privacy Badger
Considering how much data is collected just from your day-to-day web browsing, it’s a good idea to protect that first. The simplest way to do so is with our own tracking blocker extension, Privacy Badger.

Disable Your Phone Advertising ID and Audit Your Location Settings
Your phone has an ad identifier that makes it simple for advertisers to track and collate everything you do. Thankfully, you can make this much harder for those advertisers by disabling it:

  • On iPhone: Head into Settings > Privacy & Security > Tracking, and make sure “Allow Apps to Request to Track” is disabled. 
  • On Android: Open Settings > Security & Privacy > Privacy > Ads, and select “Delete advertising ID.”

Similarly, as noted above, your location is a valuable asset for campaigns. They can collect your location through data brokers, which usually get it from otherwise unaffiliated apps. This is why it's a good idea to limit what sorts of apps have access to your location:

  • On iPhone: open Settings > Privacy & Security > Location Services, and disable access for any apps that do not need it. You can also set location for only "While using," for certain apps where it's helpful, but unnecessary to track you all the time. Also, consider disabling "Precise Location" for any apps that don't need your exact location (for example, your GPS navigation app needs precise location, but no weather app does).
  • On Android: Open Settings > Location > App location permissions, and confirm that no apps are accessing your location that you don't want to. As with iOS, you can set it to "Allow only while using the app," for apps that don't need it all the time, and disable "Use precise location," for any apps that don't need exact location access.

Opt Out of Tracking on Your TV or Streaming Device, and Any Video Streaming Service
Nearly every brand of TV is connected to the internet these days. Consumer Reports has a guide for disabling what you can on most popular TVs and software platforms. If you use an Apple TV, you can disable the ad identifier following the exact same directions as on your phone.

Since the passage of a number of state privacy laws, streaming services, like other sites, have offered a way for users to opt out of the sale of their info. Many have extended this right outside of states that require it. You'll need to be logged into your streaming service account to take action on most of these, but TechHive has a list of opt out links for popular streaming services to get you started. Select the "Right to Opt Out" option, when offered.

Don't Click on Links in (or Respond to) Political Text Messages
You've likely been receiving political texts for much of the past year, and that's not going to let up until election day. It is increasingly difficult to decipher whether they're legitimate or spam, and with links that often use a URL shortener or odd looking domains, it's best not to click them. If there's a campaign you want to donate to, head directly to the site of the candidate or ballot sponsor.

Create an Alternate Email and Phone Number for Campaign Stuff
If you want to keep updated on campaign or ballot initiatives, consider setting up an email specifically for that, and nothing else. Since a phone number is also often required, it's a good idea to set up a secondary phone number for these same purposes (you can do so for free through services like Google Voice).

Keep an Eye Out for Deceptive Check Boxes
Speaking of signing up for updates, be mindful of when you don't intend to sign up for emails. Campaigns might use pre-selected options for everything from donation amounts to signing up for a newsletter. So, when you sign up with any campaign, keep an eye on any options you might not intend to opt into.

Mind Your Social Media
Now's a great time to take any sort of "privacy checkup" available on whatever social media platforms you use to help minimize any accidental data sharing. Even though you can't completely opt out of behavioral advertising on Facebook, review your ad preferences and opt out of whatever you can. Also be sure to disable access to off-site activity. You should also opt out of personalized ads on Google's services. You cannot disable behavioral ads on TikTok, but the company doesn't allow political ads.

If you're curious to learn more about why you're seeing an ad to begin with, on Facebook you can always click the three-dot icon on an ad, then click "Why am I seeing this ad?" to learn more. For ads on YouTube, you can click the "More" button and then "About this advertiser" to see some information about who placed the ad. Anywhere else you see a Google ad you can click the "Adchoices" button and then "Why this ad?"

You shouldn't need to spend an afternoon jumping through opt out hoops and tweaking privacy settings on every device you own just so you're not bombarded with highly targeted ads. That’s why EFF supports comprehensive consumer data privacy legislation, including a ban on online behavioral ads.

Democracy works because we participate, and you should be able to do so without sacrificing your privacy. 

Speaking Freely: Lynn Hamadallah

Lynn Hamadallah is a Syrian-Palestinian-French Psychologist based in London. An outspoken voice for the Palestinian cause, Lynn is interested in the ways in which narratives, spoken and unspoken, shape identity. Having lived in five countries and spent a lot of time traveling, she takes a global perspective on freedom of expression. Her current research project investigates how second-generation British-Arabs negotiate their cultural identity. Lynn works in a community mental health service supporting some of London's most disadvantaged residents, many of whom are migrants who have suffered extensive psychological trauma.

York: What does free speech or free expression mean to you? 

Being Arab and coming from a place where there is much more speech policing in the traditional sense, I suppose there is a bit of an idealization of Western values of free speech and democracy. There is this sense of freedom we grow up associating with the West. Yet recently, we’ve come to realize that the way it works in practice is quite different to the way it is described, and this has led to a lot of disappointment and disillusionment in the West and its ideals amongst Arabs. There’s been a lot of censorship for example on social media, which I’ve experienced myself when posting content in support of Palestine. At a national level, we have witnessed the dehumanization going on around protesters in the UK, which undermines the idea of free speech. For example, the pro-Palestine protests where we saw the then-Home Secretary Suella Braverman referring to protesters as “hate marchers.” So we’ve come to realize there’s this kind of veneer of free speech in the West which does not really match up to the more idealistic view of freedom we were taught about.

With the increased awareness we have gained as a result of the latest aggression going on in Palestine, actually what we’re learning is that free speech is just another arm of the West to support political and racist agendas. It’s one of those things that the West has come up with which only applies to one group of people and oppresses another. It’s the same as with human rights you know - human rights for who? Where are Palestinian’s human rights? 

We’ve seen free speech being weaponized to spread hate and desecrate Islam, for example, in the case of Charlie Hebdo and the Quran burning in Denmark and in Sweden. The argument put forward was that those cases represented instances of free speech rather than hate speech. But actually to millions of Muslims around the world those incidents were very, very hateful. They were acts of violence not just against their religious beliefs but right down to their sense of self. It’s humiliating to have a part of your identity targeted in that way with full support from the West, politicians and citizens alike. 

And then, when we— we meaning Palestinians and Palestine allies—want to leverage this idea of free speech to speak up against the oppression happening by the state of Israel, we see time and time again accusations flying around: hate speech, anti-semitism, and censorship. Heavy, heavy censorship everywhere. So that’s what I mean when I say that free speech in the West is a racist concept, actually. And I don’t know that true free speech exists anywhere in the world really. In the Middle East we don’t have democracies but at least there’s no veneer of democracy— the messaging and understanding is clear. Here, we have a supposed democracy, but in practice it looks very different. And that’s why, for me, I don’t really believe that free speech exists. I’ve never seen a real example of it. I think as long as people are power hungry there’s going to be violence, and as long as there’s violence, people are going to want to hide their crimes. And as long as people are trying to hide their crimes there’s not going to be free speech. Sorry for the pessimistic view!

York: It’s okay, I understand where you’re coming from. And I think that a lot of those things are absolutely true. Yet, from my perspective, I still think it’s a worthy goal even though governments—and organizationally we’ve seen this as well—a lot of times governments do try to abuse this concept. So I guess then I would just as a follow-up, do you feel that despite these issues that some form of universalized free expression is still a worthy ideal? 

Of course, I think it’s a worthy ideal. You know, even with social media – there is censorship. I’ve experienced it and it’s not just my word and an isolated incident. It’s been documented by Human Rights Watch—even Meta themselves! They did an internal investigation in 2021—Meta had a nonprofit called Business for Social Responsibility do an investigation and produce a report—and they’ve shown there was systemic censorship of Palestine-related content. And they’re doing it again now. That being said, I do think social media is making free speech more accessible, despite the censorship. 

And I think—to your question—free speech is absolutely worth pursuing. Because we see that despite these attempts at censorship, the truth is starting to come out. Palestine support is stronger than it’s ever been. To the point where we’ve now had South Africa take Israel to trial at the International Court of Justice for genocide, using evidence from social media videos that went viral. So what I’m saying is, free speech has the power to democratize demanding accountability from countries and creating social change, so yes, absolutely something we should try to pursue. 

York: You just mentioned two issues close to my heart. One is the issues around speech on social media platforms, and I’ve of course followed and worked on the Palestinian campaigns quite closely and I’m very aware of the BSR report. But also, video content, specifically, that’s found on social media being used in tribunals. So let me shift this question a bit. You have such a varied background around the world. I’m curious about your perspective over the past decade or decade and a half since social media has become so popular—how do you feel social media has shaped people’s views or their ability to advocate for themselves globally? 

So when we think about stories and narratives, something I’m personally interested in, we have to think about which stories get told and which stories remain untold. These stories and their telling is very much controlled by the mass media—BBC, CNN, and the like. They control the narrative. And I guess what social media is doing is it’s giving a voice to those who are often voiceless. In the past, the issue was that there was such a monopoly over mouthpieces. Mass media were so trusted, to the point where no one would have paid attention to these alternative viewpoints. But what social media has done… I think it’s made people become more aware or more critical of mass media and how it shapes public opinion. There’s been a lot of exposure of their failures, for example, like that video that went viral of Egyptian podcaster and activist Rahma Zain confronting CNN’s Clarissa Ward at the Rafah border about their biased reporting of the genocide in Palestine. I think that confrontation spoke to a lot of people. She was shouting “You own the narrative, this is our problem. You own the narrative, you own the United Nations, you own Hollywood, you own all these mouthpieces—where are our voices?! Our voices need to be heard!” It was SO powerful and that video really spoke to the sentiment of many Arabs who have felt angry, betrayed and abandoned by the West’s ideals and their media reporting.

Social media is providing a voice to more diverse people, elevating them and giving the public more control over narratives. Another example we’ve seen recently is around what’s currently happening in Sudan and the Democratic Republic of Congo. These horrific events and stories would never have had much of a voice or exposure on the global stage before. And now people all over the world are paying more attention and advocating for Sudanese and Congolese rights, thanks to social media.

I personally was raised with quite a critical view of mass media, I think in my family there was a general distrust of the West, their policies and their media, so I never really relied personally on the media as this beacon of truth, but I do think that’s an exception. I think the majority of people rely on mass media as their source of truth. So social media plays an important role in keeping them accountable and diversifying narratives.

York: What are some of the biggest challenges you see right now anywhere in the world in terms of the climate for free expression for Palestinian and other activism? 

I think there’s two strands to it. There’s the social media strand. And there’s the governmental policies and actions. So I think on social media, again, it’s very documented, but it’s this kind of constant censorship. People want to be able to share content that matters to them, to make people more aware of global issues and we see time and time again viewership going down, content being deleted or reports from Meta of alleged hate speech or antisemitism. And that’s really hard. There’ve been random strategies that have popped up to increase social media engagement, like posting random content unrelated to Palestine or creating Instagram polls for example. I used to do that, I interspersed Palestine content with random polls like, “What’s your favorite color?” just to kind of break up the Palestine content and boost my engagement. And it was honestly so exhausting. It was like… I’m watching a genocide in real time, this is an attack on my people and now I’m having to come up with silly polls? Eventually I just gave up and accepted my viewership as it was, which was significantly lower.

At a government level, which is the other part of it, there’s this challenge of constant intimidation that we’re witnessing. I just saw recently there was a 17-year-old boy who was interviewed by the counterterrorism police at an airport because he was wearing a Palestinian flag. He was interrogated about his involvement in a Palestinian protest. When has protesting become a crime and what does that say about democratic rights and free speech here in the UK? And this is one example, but there are so many examples of policing, there was even talk of banning protests all together at one point. 

The last strand I’d include, actually, that I already touched on, is the mass media. Just recently we’ve seen the BBC reporting on the ICJ hearing, they showed the Israeli defense part, but they didn’t even show the South African side. So this censorship is literally in plain sight and poses a real challenge to the climate of free expression for Palestine activism.

York: Who is your free speech hero? 

Off the top of my head I’d probably say Mohammed El-Kurd. I think he’s just been so unapologetic in his stance. Not only that but I think he’s also made us think critically about this idea of narrative and what stories get told. I think it was really powerful when he was arguing the need to stop giving the West and mass media this power, and that we need to disempower them by ceasing to rely on them as beacons of truth, rather than working on changing them. Because, as he argues, oppressors who have monopolized and institutionalized violence will never ever tell the truth or hold themselves to account. Instead, we need to turn to Palestinians, and to brave cultural workers, knowledge producers, academics, journalists, activists, and social media commentators who understand the meaning of oppression and view them as the passionate, angry and, most importantly, reliable narrators that they are.

Americans Deserve More Than the Current American Privacy Rights Act

EFF is concerned that a new federal bill would freeze consumer data privacy protections in place, by preempting existing state laws and preventing states from creating stronger protections in the future. Federal law should be the floor on which states can build, not a ceiling.

We also urge the authors of the American Privacy Rights Act (APRA) to strengthen other portions of the bill. It should be easier to sue companies that violate our rights. The bill should limit sharing with the government and expand the definition of sensitive data. And it should narrow exceptions that allow companies to exploit our biometric information, our so-called “de-identified” data, and our data obtained in corporate “loyalty” schemes.

Despite our concerns with the APRA bill, we are glad Congress is pivoting the debate to a privacy-first approach to online regulation. Reining in companies’ massive collection, misuse, and transfer of everyone’s personal data should be the unifying goal of those who care about the internet. This debate has been absent at the federal level in the past year, giving breathing room to flawed bills that focus on censorship and content blocking, rather than privacy.

In general, the APRA would require companies to minimize their processing of personal data to what is necessary, proportionate, and limited to certain enumerated purposes. It would specifically require opt-in consent for the transfer of sensitive data and for most processing of biometric and genetic data. It would also give consumers the right to access, correct, delete, and export their data. And it would allow consumers to universally opt out of the collection of their personal data by data brokers, using a registry maintained by the Federal Trade Commission.

We welcome many of these privacy protections. Below are a few of our top priorities to correct and strengthen the APRA bill.

Allow States to Pass Stronger Privacy Laws

The APRA should not preempt existing and future state data privacy laws that are stronger than the current bill. The ability to pass stronger bills at the state and local level is an important tool in the fight for data privacy. We ask that Congress not compromise our privacy rights by undercutting the very state-level action that spurred this compromise federal data privacy bill in the first place.

Subject to exceptions, the APRA says that no state may “adopt, maintain, enforce, or continue in effect” any state-level privacy requirement addressed by the new bill. APRA would allow many state sectoral privacy laws to remain, but it would still preempt protections for biometric data, location data, and online ad tracking signals, and maybe even privacy protections in state constitutions or other limits on what private companies can share with the government. At the federal level, the APRA would also wrongly preempt many parts of the federal Communications Act, including provisions that limit a telephone company’s use and disclosure of, and access to, customer proprietary network information, including location information.

Just as important, it would prevent states from creating stronger privacy laws in the future. States are more nimble at passing laws to address new privacy harms as they arise, compared to Congress, which has failed for decades to update important protections. For example, if lawmakers in Washington state wanted to follow EFF’s advice to ban online behavioral advertising or to allow its citizens to sue companies for not minimizing their collection of personal data (provisions where APRA falls short), state legislators would have no power to do so under the new federal bill.

Make It Easier for Individuals to Enforce Their Privacy Rights

The APRA should prevent coercive forced arbitration agreements and class action waivers, allow people to sue for statutory damages, and allow them to bring their case in state court. These rights would allow for rigorous enforcement and help force companies to prioritize consumer privacy.

The APRA has a private right of action, but it is a half-measure that still lets companies side-step many legitimate lawsuits. And the private right of action does not apply to some of the most important parts of the law, including the central data minimization requirement.

The favorite tool of companies looking to get rid of privacy lawsuits is to bury provisions in their terms of service that force individuals into private arbitration and prevent class action lawsuits. The APRA does not address class action waivers and only prevents forced arbitration for children and people who allege “substantial” privacy harm. In addition, statutory damages and enforcement in state courts are essential, because federal courts still often struggle to acknowledge privacy harm as real—relying instead on a cramped view that does not recognize privacy as a human right. Finally, the bill would allow companies to cure violations rather than face a lawsuit, incentivizing companies to skirt the law until they are caught.

Limit Exceptions for Sharing with the Government

APRA should close a loophole that may allow data brokers to sell data to the government and should require the government to obtain a court order before compelling disclosure of user data. This is important because corporate surveillance and government surveillance are often the same.

Under the APRA, government contractors do not have to follow the bill’s privacy protections. These contractors include any “entity that is collecting, processing, retaining, or transferring covered data on behalf of a Federal, State, Tribal, territorial, or local government entity, to the extent that such entity is acting as a service provider to the government entity.” Read broadly, this provision could protect data brokers who sell biometric information and location information to the government. In fact, Clearview AI previously argued it was exempt from Illinois’ strict biometric law under a similar contractor exception. This provision needs revision, because other parts of the bill rightly prevent covered entities (a category that excludes these government contractors) from selling data to the government for fraud detection, public safety, and criminal activity detection purposes.

The APRA also allows entities to transfer personal data to the government pursuant to a “lawful warrant, administrative subpoena, or other form of lawful process.” EFF urges that this requirement be strengthened to at least a court order or warrant, with prompt notice to the consumer. Protections like this are not unique, and they are especially important in the wake of the Dobbs decision.

Strengthen the Definition of Sensitive Data

The APRA has heightened protections for sensitive data, and it includes a long list of 18 categories of sensitive data, including biometrics, precise geolocation, private communications, and an individual’s online activity over time and across websites. This is a good list, but it can be added to. We ask Congress to add other categories, like immigration status, union membership, employment history, familial and social relationships, and any covered data processed in a way that would violate a person’s reasonable expectation of privacy. The sensitivity of data is context specific—meaning any data can be sensitive depending on how it is used. The bill should be amended to reflect that.

Limit Other Exceptions for Biometrics, De-identified Data, and Loyalty Programs

An important part of any bill is to make sure the exceptions do not swallow the rule. The APRA’s exceptions on biometric information, de-identified data, and loyalty programs should be narrowed.

In APRA, biometric information means data “generated from the measurement or processing of the individual’s unique biological, physical, or physiological characteristics that is linked or reasonably linkable to the individual” and excludes “metadata associated with a digital or physical photograph or an audio or video recording that cannot be used to identify an individual.” EFF is concerned this definition will not protect biometric information used for analysis of sentiment, demographics, and emotion, and could be used to argue hashed biometric identifiers are not covered.

De-identified data is excluded from the definition of personal data covered by the APRA, and companies and service providers can turn personal data into de-identified data to process it however they want. The problem with de-identified data is that it often is not actually de-identified. Moreover, many people do not want the private data they store in confidence with a company to then be used to improve that company’s product or train its algorithm—even if the data has purportedly been de-identified.

Many companies under the APRA can host loyalty programs and can sell the data collected through them with opt-in consent. Loyalty programs are a type of pay-for-privacy scheme that pressures people to surrender their privacy rights as if they were a commodity. Worse, because of our society’s glaring economic inequalities, these schemes will unjustly lead to a society of privacy “haves” and “have-nots.” At the very least, the bill should be amended to prevent companies from selling data that they obtain from a loyalty program.

We welcome Congress' privacy-first approach in the APRA and encourage the authors to improve the bill to ensure privacy is protected for generations to come.

Tell the FCC It Must Clarify Its Rules to Prevent Loopholes That Will Swallow Net Neutrality Whole

The Federal Communications Commission (FCC) has released draft rules to reinstate net neutrality, with a vote on adopting the rules scheduled for April 25. The FCC needs to close some loopholes in the draft rules before then.

Proposed Rules on Throttling and Prioritization Allow for the Circumvention of Net Neutrality

Net neutrality is the principle that all ISPs should treat all traffic coming over their networks without discrimination. The effect of this principle is that customers decide for themselves how they’d like to experience the internet. Violations of this principle include, but are not limited to, attempts to block, speed up, or slow down certain content as a means of controlling traffic.

Net neutrality is critical to ensuring that the internet remains a vibrant place to learn, organize, speak, and innovate, and the FCC recognizes this. The draft mostly reinstates the bright-line rules of the landmark 2015 net neutrality protections to ban blocking, throttling, and paid prioritization.

It falls short, though, in a critical way: the FCC seems to think that it’s not okay to favor certain sites or services by slowing down other traffic, but it might be okay to favor them by giving them access to so-called fast lanes such as 5G network slices. First of all, in a world of finite bandwidth, favoring some traffic necessarily impairs other traffic. Second, the harms to speech and competition would be the same even if an ISP could conjure more bandwidth from thin air to speed up traffic from its business partners. Whether your access to Spotify is faster than your access to Bandcamp because Spotify is sped up or because Bandcamp is slowed down doesn’t matter: the end result is the same. Spotify is faster than Bandcamp, so you are incentivized to use Spotify over Bandcamp.
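
To make the arithmetic concrete, here is a minimal sketch of a fixed-capacity link shared among flows. It is not drawn from the FCC’s draft or EFF’s filing; the service names, capacity figure, and weights are hypothetical, and real packet schedulers are far more complex. The point it illustrates is simply that weighting one flow up necessarily weights every other flow down:

```python
# Toy model: a link with fixed capacity divided among flows in
# proportion to the weight an ISP assigns each one. (Hypothetical
# names and numbers, purely for illustration.)

CAPACITY_MBPS = 100.0  # assumed total link capacity

def allocate(weights: dict[str, float]) -> dict[str, float]:
    """Split the link among flows in proportion to their weights."""
    total = sum(weights.values())
    return {flow: CAPACITY_MBPS * w / total for flow, w in weights.items()}

# Neutral treatment: both services get an equal share.
print(allocate({"spotify": 1.0, "bandcamp": 1.0}))  # {'spotify': 50.0, 'bandcamp': 50.0}

# A "fast lane" for one partner: nothing is explicitly throttled,
# yet the other flow is impaired all the same.
print(allocate({"spotify": 3.0, "bandcamp": 1.0}))  # {'spotify': 75.0, 'bandcamp': 25.0}
```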

The loophole is especially bizarre because the 2015 FCC already got this right, and there has been bipartisan support for net neutrality proposals that explicitly encompass both favoring and disfavoring certain traffic. The distinction doesn’t make logical sense and doesn’t seem to have partisan significance, and it could undermine the rules in the event of a court challenge by drawing an arbitrary line between what’s forbidden under the bright-line rules and what merely goes through the multi-factor test for other potentially discriminatory conduct by ISPs.

The FCC needs to close this loophole for unpaid prioritization of certain applications or classes of traffic. Customers should be in charge of what they do online, rather than ISPs deciding that, say, it’s more important to consume streaming entertainment products than to participate in video calls or that one political party’s websites should be served faster than another’s.

The FCC Should Make Clear That Preemption Is a Floor, Not a Ceiling

When the FCC under the previous administration abandoned net neutrality protections in 2017 with the so-called “Restoring Internet Freedom” order, many states, chief among them California, stepped in to pass state net neutrality laws. Laws more protective than the federal net neutrality protections, like California’s, should be explicitly protected by the new rule.

The FCC currently finds that California’s law “generally tracks [with] the federal rules [being] restored” (269). It goes on to find that state laws are fine so long as they do not “interfere with or frustrate…federal rules,” are not “inconsistent,” and are not “incompatible.” It then reserves the right to revisit any state law if evidence arises that a state policy is found to “interfere or [be] incompatible.”

States should be able to build on federal laws to be more protective of rights, not run into a ceiling on available protections. California’s net neutrality law is in some places stronger than the draft rules. Where the FCC means to evaluate zero-rating, the practice of exempting certain data from a user’s data cap, on a case-by-case basis, California outright bans the zero-rating of select apps.

There is no guarantee that a Commission which finds California’s law to “generally track” today will do the same in two years’ time. The language as written unnecessarily sets a low bar for a future Commission to find California’s, and other states’, net neutrality laws to be preempted. It also leaves open unnecessary room for the large internet service providers (ISPs) to challenge California’s law once again. After all, when California’s law was first passed, it was immediately taken to court by these same ISPs, and only after years of litigation did the courts reject the industry’s arguments and allow enforcement of this gold-standard law to begin.

We urge the Commission to clearly state not only that California’s law is consistent with the FCC’s rules, but that on the issue of preemption the FCC considers its rules to be the floor to build on, and that further state protections are not inconsistent simply because they go further than the FCC chooses to.

Overall, the order is a great step for net neutrality. Its rules go a long way toward protecting internet users. But we need clear rules recognizing that the creation of fast lanes via positive discrimination and unpaid prioritization is just as much a violation of net neutrality, and assurance that states will continue to be free to protect their residents even when the FCC won’t.

Tell the FCC to Fix the Net Neutrality Rules:

1. Go to this link
2. For "Proceeding" put 23-320
3. Fill out the form
4. In "brief comments" register your thoughts on net neutrality. We recommend this, which you can copy and paste or edit for yourself:

Net neutrality is the principle that all internet service providers treat all traffic coming through their networks without discrimination. The effect of this principle is that customers decide for themselves how they’d like to experience the internet. The Commission’s rules as currently written leave the door open for positive discrimination of content, that is, the supposed creation of fast lanes where some content is sped up relative to the rest. This isn’t how the internet actually works, but in any case, whether an ISP is speeding up or slowing down content, the end result is the same: the ISP picks the winners and losers on the internet. As such, the Commission must create bright-line rules against all forms of discrimination, whether speeding up or slowing down apps, classes of apps, or general internet traffic.

Further, while the Commission currently finds that state net neutrality rules like California’s are not preempted because they “generally track” its own rules, that language makes it easy to rule otherwise at a future date. Just as we received net neutrality in 2015 only to have it taken away in 2017, there is no guarantee that the Commission will continue to find state net neutrality laws passed after 2017 to be consistent with its rules. To safeguard net neutrality, the Commission must find that California’s law is wholly consistent with its rules and must treat preemption as a floor, not a ceiling, so that states can go above and beyond the federal standard without being considered inconsistent with the federal rule.

Take Action

Tell the FCC to Fix the Net Neutrality Rules

S.T.O.P. is Working to ‘Ban The Scan’ in New York

Facial recognition is a threat to privacy, racial justice, free expression, and information security. EFF supports strict restrictions on face recognition use by private companies, and total bans on government use of the technology. Face recognition in all of its forms, including face scanning and real-time tracking, poses threats to civil liberties and individual privacy. “False positive” error rates are significantly higher for women, children, and people of color, meaning face recognition has an unfair discriminatory impact. Coupled with the fact that cameras are over-deployed in neighborhoods with immigrants and people of color, spying technologies like face surveillance serve to amplify existing disparities in the criminal justice system.

Across the nation local communities from San Francisco to Boston have moved to ban government use of facial recognition. In New York, Electronic Frontier Alliance member Surveillance Technology Oversight Project (S.T.O.P.) is at the forefront of this movement. Recently we got the chance to speak with them about their efforts and what people can do to help advance the cause. S.T.O.P. is a New York-based civil rights and privacy organization that does research, advocacy, and litigation around issues of surveillance technology abuse.

What does “Ban The Scan” mean? 

When we say scan, we are referring to the “face scan” component of facial recognition technology. Surveillance, and more specifically facial recognition, disproportionately targets Black, Brown, Indigenous, and immigrant communities, amplifying the discrimination that has defined New York’s policing for as long as our state has had police. Facial recognition is notoriously biased and often abused by law enforcement. It is a threat to free speech, freedom of association, and other civil liberties. Ban the Scan is a campaign and coalition built around passing two packages of bills that would ban facial recognition in a variety of contexts in New York City and New York State. 

Are there any differences between the State and City versions?

The City and State packages are largely similar. The main differences are that the State package contains a bill banning law enforcement use of facial recognition, whereas the City package has a bill that bans all government use of the technology (although this bill has yet to be introduced). The State package also contains an additional bill banning facial recognition use in schools, which would codify an existing regulatory ban.

What hurdles exist to its passage? 

For the New York State package, the coalition is newly coming together, so we are still gathering support from legislators and the public. For the City package, we are lucky to have a lot of support already, and we are waiting for a hearing to be conducted on the residential ban bills so we can move them into the next phase of legislation. We are also working to get the bill banning government use introduced at the City level.

What can people do to help this good legislation? How to get involved? 

We recently launched a campaign website for both City and State packages (banthescan.org). If you’re a New York City or State resident, you can look up your legislators (links below!) and contact them to ask them to support these bills or thank them for their support if they are already signed on. We also have social media toolkits with graphics and guidance on how to help spread the word!  

Find your NYS Assemblymember: https://nyassembly.gov/mem/search/ 

Find your NYS Senator: https://www.nysenate.gov/find-my-senator 

Find your NYC Councilmember: https://council.nyc.gov/map-widget/  

EFF Submits Comments on FRT to Commission on Civil Rights

Our faces are often exposed and, unlike passwords or PINs, cannot be remade. Governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations. This is why EFF recently submitted comments to the U.S. Commission on Civil Rights, which is preparing a report on face recognition technology (FRT).

In our submission, we reiterated our stance that there should be a ban on governmental use of FRT and strict regulations on private use because it: (1) is not reliable enough to be used in determinations affecting constitutional and statutory rights or social benefits; (2) is a menace to social justice as its errors are far more pronounced when applied to people of color, members of the LGBTQ+ community, and other marginalized groups; (3) threatens privacy rights; (4) chills and deters expression; and (5) creates information security risks.

Despite these grave concerns, FRT is being used by the government and law enforcement agencies with increasing frequency, and sometimes with devastating effects. At least one Black woman and five Black men have been wrongfully arrested due to misidentification by FRT: Porcha Woodruff, Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, and Robert Williams. And Harvey Murphy Jr., a white man, was wrongfully arrested due to FRT misidentification, and then sexually assaulted while in jail.

Even if FRT were accurate, or at least equally inaccurate across demographics, it would still severely impact our privacy and security. We cannot change our faces, and we expose them to the mass surveillance networks already in place every day we go out in public. But doing so should not be a license for the government or private entities to make imprints of our faces and retain that data, especially when that data may be breached by hostile actors.

The government should ban its own use of FRT, and strictly limit private use, to protect us from the threats posed by FRT. 

What Does EFF Mean to You?

We could go on for days talking about all the work EFF does to ensure that technology supports freedom, justice, and innovation for all people of the world. In fact, we DO go on for days talking about it — but we’d rather hear from you. 

What does EFF mean to you? We’d love to know why you support us, how you see our mission, or what issue or area we address that affects your life the most. It’ll help us make sure we keep on being the EFF you want us to be.

So if you’re willing to go on the record, please send us a few sentences, along with your first name and current city of residence, to testimonials@eff.org; we’ll pick some every now and then to share with the world here on our blog, in our emails, and on our social media.

Bad Amendments to Section 702 Have Failed (For Now)—What Happens Next?

Yesterday, the House of Representatives voted against considering a largely bad bill that would have unacceptably expanded the tentacles of Section 702 of the Foreign Intelligence Surveillance Act, along with reauthorizing it and introducing some minor fixes. Section 702 is Big Brother’s favorite mass surveillance law that EFF has been fighting since it was first passed in 2008. The law is currently set to expire on April 19. 

Yesterday’s decision not to decide is good news, at least temporarily. Once again, a bipartisan coalition of lawmakers—led by Rep. Jim Jordan and Rep. Jerrold Nadler—has staved off the worst outcome of expanding 702 mass surveillance in the guise of “reforming” it. But the fight continues, and we need all Americans to make their voices heard.

Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform:

Take action

TELL congress: 702 Needs serious reforms

Yesterday’s vote means the House also will not consider amendments to Section 702 surveillance introduced by members of the House Judiciary Committee (HJC) and House Permanent Select Committee on Intelligence (HPSCI). As we discuss below, while the HJC amendments would contain necessary, minimum protections against Section 702’s warrantless surveillance, the HPSCI amendments would impose no meaningful safeguards upon Section 702 and would instead increase the threats Section 702 poses to Americans’ civil liberties.

Section 702 expressly authorizes the government to collect foreign communications inside the U.S. for a wide range of purposes, under the umbrellas of national security and intelligence gathering. While that may sound benign for Americans, foreign communications include a massive amount of Americans’ communications with people (or services) outside the United States. Under the government’s view, intelligence agencies and even domestic law enforcement should have backdoor, warrantless access to these “incidentally collected” communications, instead of having to show a judge there is a reason to query Section 702 databases for a specific American's communications.

Many amendments to Section 702 have recently been introduced. In general, amendments from members of the HJC aim at actual reform (although we would go further in many instances). In contrast, members of HPSCI have proposed bad amendments that would expand Section 702 and undermine necessary oversight. Here is our analysis of both HJC’s decent reform amendments and HPSCI’s bad amendments, as well as the problems the latter might create if they return.

House Judiciary Committee’s Amendments Would Impose Needed Reforms

The most important amendment HJC members have introduced would require the government to obtain court approval before querying Section 702 databases for Americans’ communications, with exceptions for exigency, consent, and certain queries involving malware. As we recently wrote regarding a different Section 702 bill, because Section 702’s warrantless surveillance lacks the safeguards of probable cause and particularity, it is essential to require the government to convince a judge that there is a justification before the “separate Fourth Amendment event” of querying for Americans’ communications. This is a necessary, minimum protection and any attempts to renew Section 702 going forward should contain this provision.

Another important amendment would prohibit the NSA from resuming “abouts” collection. Through abouts collection, the NSA collected communications that were neither to nor from a specific surveillance target but merely mentioned the target. While the NSA voluntarily ceased abouts collection following Foreign Intelligence Surveillance Court (FISC) rulings that called into question the surveillance’s lawfulness, the NSA left the door open to resume abouts collection if it felt it could “work that technical solution in a way that generates greater reliability.” Under current law, the NSA need only notify Congress when it resumes collection. This amendment would instead require the NSA to obtain Congress’s express approval before it can resume abouts collection, which, given this surveillance’s past abuses, would be notable.

The other HJC amendment Congress should accept would require the FBI to give a quarterly report to Congress of the number of queries it has conducted of Americans’ communications in its Section 702 databases and would also allow high-ranking members of Congress to attend proceedings of the notoriously secretive FISC. More congressional oversight of FBI queries of Americans’ communications and FISC proceedings would be good. That said, even if Congress passes this amendment (which it should), both Congress and the American public deserve much greater transparency about Section 702 surveillance.  

House Permanent Select Committee on Intelligence’s Amendments Would Expand Section 702

Instead of much-needed reforms, the HPSCI amendments expand Section 702 surveillance.

One HPSCI amendment would add “counternarcotics” to FISA’s definition of “foreign intelligence information,” expanding the scope of mass surveillance even further from the antiterrorism goals that most Americans associate with FISA. In truth, FISA’s definition of “foreign intelligence information” already goes beyond terrorism. But this counternarcotics amendment would further expand “foreign intelligence information” to allow FISA to be used to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths” but also to any of their precursors. Given the massive amount of Americans’ communications the government already collects under Section 702 and the government’s history of abusing Americans’ civil liberties through searching these communications, the expanded collection this amendment would permit is unacceptable.

Another amendment would authorize using Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released last year, the government has sought some version of this authority for years, and the FISC repeatedly denied it—finally approving it for the first time in 2023. The FISC opinion is very redacted, which makes it impossible to know either the current scope of immigration and visa-related surveillance under Section 702 or what the intelligence agencies have sought in the past. But regardless, it’s deeply concerning that HPSCI is trying to formally lower Section 702 protections for immigrants and asylum seekers. We’ve already seen the government revoke people’s visas based upon their political opinions—this amendment would put this kind of thing on steroids.

The last HPSCI amendment tries to make more companies subject to Section 702’s required turnover of customer information in more instances. In 2023, the FISC Court of Review rejected the government’s argument that an unknown company was subject to Section 702 in certain circumstances. While we don’t know the details of the secret proceedings because the FISC Court of Review opinion is heavily redacted, this is an ominous attempt to increase the scope of providers subject to Section 702. With this amendment, HPSCI is attempting to legislatively overrule a court already famously friendly to the government. HPSCI Chair Mike Turner acknowledged as much in a House Rules Committee hearing earlier this week, stating that this amendment “responds” to the FISC Court of Review’s decision.

What’s Next 

This is unlikely to be the last time Congress considers Section 702 before April 19—we expect another attempt to renew this surveillance authority in the coming days. We’ve been very clear: Section 702 must not be renewed without essential reforms that protect privacy, improve transparency, and keep the program within the confines of the law.

Take action

TELL congress: 702 Needs serious reforms

Virtual Reality and the 'Virtual Wall'

When EFF set out to map surveillance technology along the U.S.-Mexico border, we weren't exactly sure how to do it. We started with public records—procurement documents, environmental assessments, and the like—which allowed us to find the GPS coordinates of scores of towers. During a series of in-person trips, we were able to find even more. Yet virtual reality ended up being one of the key tools not only in discovering surveillance at the border, but also in educating people about Customs & Border Protection's so-called "virtual wall" through VR tours.

EFF Director of Investigations Dave Maass recently gave a lightning talk at University of Nevada, Reno's annual XR Meetup explaining how virtual reality, perhaps ironically, has allowed us to better understand the reality of border surveillance.


The Motion Picture Association Doesn’t Get to Decide Who the First Amendment Protects

Twelve years ago, internet users spoke up with one voice to reject a law that would build censorship into the internet at a fundamental level. This week, the Motion Picture Association (MPA), a group that represents six giant movie and TV studios, announced that it hoped we’d all forgotten how dangerous this idea was. The MPA is wrong. We remember, and the internet remembers.

What the MPA wants is the power to block entire websites, everywhere in the U.S., using the same tools as repressive regimes like China and Russia. In its view, instances of possible copyright infringement should be played like a trump card to shut off our access to entire websites, regardless of the other legal speech hosted there. It is not simply calling for the ability to take down instances of infringement—a power it already has, without even having to ask a judge—but for the keys to the internet. Building new architectures of censorship would hurt everyone, and it doesn’t help artists.

The bills known as SOPA/PIPA would have created a new, rapid path for copyright holders like the major studios to use court orders against sites they accuse of infringing copyright. Internet service providers (ISPs) receiving one of those orders would have to block all of their customers from accessing the identified websites. The orders would also apply to domain name registries and registrars, and potentially other companies and organizations that make up the internet’s basic infrastructure. To comply, all of those would have to build new infrastructure dedicated to site-blocking, inviting over-blocking and all kinds of abuse that would censor lawful and important speech.

In other words, the right to choose what websites you visit would be taken away from you and given to giant media companies and ISPs. And the very shape of the internet would have to be changed to allow it.

In 2012, it seemed like SOPA/PIPA, backed by major corporations used to getting what they want from Congress, was on the fast track to becoming law. But a grassroots movement of diverse Internet communities came together to fight it. Digital rights groups like EFF, Public Knowledge, and many more joined with editor communities from sites like Reddit and Wikipedia to speak up. Newly formed grassroots groups like Demand Progress and Fight for the Future added their voices to those calling out the dangers of this new form of censorship. In the final days of the campaign, giant tech companies like Google and Facebook (now Meta) joined in opposition as well.

What resulted was one of the biggest protests ever seen against a piece of legislation. Congress was flooded with calls and emails from ordinary people concerned about this steamroller of censorship. Members of Congress raced one another to withdraw their support for the bills. The bills died, and so did site blocking legislation in the US. It was, all told, a success story for the public interest.

Even the MPA, one of the biggest forces behind SOPA/PIPA, claimed to have moved on. But we never believed it, and the MPA proved us right time and time again. It backed site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. Even the lobbying of Congress for a new law never really went away. It’s just that today, with MPA president Charles Rivkin openly calling on Congress “to enact judicial site-blocking legislation here in the United States,” the MPA is taking its mask off.

Things have changed since 2012. Tech platforms that were once seen as innovators have become behemoths, part of the establishment rather than underdogs. The Silicon Valley-based video streamer Netflix illustrated this when it joined MPA in 2019. And the entertainment companies have also tried to pivot into being tech companies. Somehow, they are adopting each other’s worst aspects.

But it’s important not to let those changes hide the fact that those hurt by this proposal are not Big Tech but regular internet users. Internet platforms big and small are still where ordinary users and creators find their voice, connect with audiences, and participate in politics and culture, mostly in legal—and legally protected—ways. Filmmakers who can’t get a distribution deal from a giant movie house still reach audiences on YouTube. Culture critics still reach audiences through zines and newsletters. The typical users of these platforms don’t have the giant megaphones of major studios, record labels, or publishers. Site-blocking legislation, whether called SOPA/PIPA, “no fault injunctions,” or by any other name, still threatens the free expression of all of these citizens and creators.

No matter what the MPA wants to claim, this does not help artists. Artists want their work seen, not locked away for a tax write-off. They wanted a fair deal, not nearly five months of strikes. They want studios to make more small and midsize films and to take a chance on new voices. They have been incredibly clear about what they want, and this is not it.

Even if Rivkin’s claim of an “unflinching commitment to the First Amendment” were credible from a group that seems to think it has a monopoly on free expression—and which just tried to consign the future of its own artists to the gig economy—a site-blocking law would not be used only by Hollywood studios. Anyone with a copyright and the means to hire a lawyer could wield the hammer of site-blocking. And here’s the thing: we already know that copyright claims are used as tools of censorship.

The notice-and-takedown system created by the Digital Millennium Copyright Act, for example, is abused time and again by people who claim to be enforcing their copyrights, and also by folks who simply want to make speech they don’t like disappear from the Internet. Even without a site-blocking law, major record labels and US Immigration and Customs Enforcement shut down a popular hip hop music blog and kept it off the internet for over a year without ever showing that it infringed copyright. And unscrupulous characters use accusations of infringement to extort money from website owners, or even force them into carrying spam links.

This censorious abuse, whether intentional or accidental, is far more damaging when it targets the internet’s infrastructure. Blocking entire websites or groups of websites is imprecise, inevitably bringing down lawful speech along with whatever was targeted. For example, suits by Microsoft intended to shut down malicious botnets caused thousands of legitimate users to lose access to the domain names they depended on. There is, in short, no effective safeguard on a new censorship power that would be the internet’s version of police seizing printing presses.

Even if this didn’t endanger free expression on its own, once new tools exist, they can be used for more than copyright. Just as malfunctioning copyright filters were adapted into the malfunctioning filters used for “adult content” on tumblr, so too can site-blocking tools be repurposed. The major companies of a single industry should not get to dictate the future of free speech online.

Why the MPA is announcing this now is anyone’s guess. They might think no one cares anymore. They’re wrong. Internet users rejected site blocking in 2012 and they reject it today.

Speaking Freely: Mary Aileen Diez-Bacalso

This interview has been edited for length and clarity.

Mary Aileen Diez-Bacalso is the executive director of FORUM-Asia. She has worked for many years in human rights organizations in the Philippines and internationally, and is best known for her work on enforced disappearances. She has received several human rights awards at home and abroad, including the Emilio F. Mignone International Human Rights Prize conferred by the Government of Argentina and the Franco-German Ministerial Prize for Human Rights and Rule of Law. In addition to her work at FORUM-Asia, she currently serves as the president of the International Coalition Against Enforced Disappearances (ICAED) and is a senior lecturer at the Asian Center of the University of the Philippines.

York: What does free expression mean to you? And can you tell me about an experience, or experiences, that shaped your views on free expression?

To me, free speech or free expression means the exercise of the right to express oneself and to seek and receive information as an individual or an organization. I’m an individual, but I’m also representing an organization, so it means the ability to express thoughts, ideas, or opinions without threats or intimidation or fear of reprisals. 

Free speech is expressed in various avenues, such as in a community where one lives or in an organization where one belongs at the national, regional, or international levels. It is the right to express these ideas, opinions, and thoughts for different purposes, for instance: influencing behaviors, opinions, and policy decisions; giving education; and addressing historical revisionism—which is historically common in my country, the Philippines. Without freedom of speech people will be kept in the dark in terms of access to information, in understanding and analyzing information, and in deciding which information to believe and which information is incorrect or inaccurate or is meant to misinform people. So without freedom of speech people cannot exercise their other basic human rights, like the right of suffrage, and religious organizations, for example, will not be able to fulfill their mission of preaching if freedom of speech is curtailed.

I have worked for years with families of the disappeared—victims of enforced disappearance—in many countries. And enforced disappearance is a consequence of the absence of free speech. These people are forcibly disappeared because of their political beliefs, because of their political affiliations, and because of their human rights work, among other things. And they were deprived of the right to speech. Additionally, in the Philippines and many other Asian countries, rallies and demonstrations on various legitimate issues of the people are being dispersed by security forces in the name of peace. That deprives legitimate protesters of the rights to speech and to peaceful assembly. So these people are named as enemies of the state, as subversives, as troublemakers, and in the process they’re tear-gassed, arrested, detained, etcetera. So allowing these people to exercise their constitutional rights is a manifestation of free speech. But in many Asian countries—and many other countries in other regions also—such rights, although provided for by the Constitution, are not respected. Free speech, in whatever country you are in, wherever you go, is freedom to study the situation of that country, to give your opinion of that situation, and to share your ideas with others.

York: Can you share some experiences that helped shape your views on freedom of expression? 

During my childhood years, when martial law was imposed, I heard a lot of news about the arrest and detention of journalists because of their protest against the martial law imposed by the dictator Ferdinand Marcos, Sr., who was the father of the present President of the Philippines. So I read a lot about violations of the human rights of activists from different sectors of society. I read about farmers, workers, students, and church people who were arrested, detained, tortured, disappeared, and killed under martial law, because they spoke against the Marcos administration. So during those years, when I was so young, this actually formed my mind and also my commitment to freedom of expression, freedom of assembly, and freedom of association.

Once, I was arrested during the first Marcos administration, and that was a very long time ago. That is a manifestation of the curtailment of the right of free speech. I was together with other human rights defenders—I was very young at the time. We were rallying because there was a priest who was forcibly disappeared. So we were arrested and detained. Also, I was deported by the government of India on my way to Kashmir. I was there three times, but on my third time I was not allowed to go to Kashmir because of our human rights work there. So even now, I am banned from India and I cannot go back there. It was because of those reports we made on enforced disappearances and mass graves in Kashmir. So free speech means freedom without threat, intimidation, or retaliation. And it means being able to use all avenues in various contexts to speak in whatever forms—verbal speeches, written speeches, videos, and all forms of communication.

Also, the enforced disappearance of my husband informed my views on free expression. Two weeks after we got married, he was briefly forcibly disappeared. He was tortured, he was not fed, and he was forced to confess that he was a member of the Communist Party of the Philippines. He was held together with one other person he did not know and could not see, and they were forced to dig a grave in which they were to be buried alive. Another person who had been disappeared then escaped and informed us of where my husband was. So we told the military that we knew where my husband was. They were afraid that the other person might testify, so they released my husband in a cemetery near his parents’ house.

And that made an impact on me; that’s why I work a lot with families of the disappeared, both in the Philippines and in many other countries. I believe that the experience of my husband’s enforced disappearance, and the experience of other families whose members remain disappeared to this day, is a consequence of the violation of freedom of expression, freedom of assembly, and freedom of speech. And my integration and immersion with families of the disappeared have also contributed a lot to my commitment to human rights and free speech. I’m just lucky to have my husband back. And he’s lucky, because cases where victims of enforced disappearance surface alive are very rare. So, as a way of giving back, of being grateful for the experience we had, I dedicate my whole life to the cause of human rights.

York: What do you feel are some of the qualities that make you passionate about protecting free expression for others?

Being brought up by my family, my parents, we were taught about the importance of speaking for the truth, and the importance of uprightness. It was also because of our religious background. We were taught it is very important to tell the truth. So this passion for truth and uprightness is one of the qualities that make me passionate about free expression. And the sense of moral responsibility to rectify wrongs that are being committed. My love of writing, also. I love writing whenever I have the opportunity to do it, the time to do it. And the sense of duty to make human rights a lifetime commitment. 

York: What should we know about the role of social media in modern Philippine society? 

I believe social media contributed a lot to what we are now. The current oppressive administration invested a lot in misinformation, in revising history, and that’s why a lot of young people think of martial law as the years of glory and prosperity. I believe one of the biggest factors of the administration getting the votes was their investment in social media for at least a decade. 

York: What are your feelings on how online speech should be regulated? 

I’m not very sure it should be regulated. For me, as long as the individuals or the organizations have a sense of responsibility for what they say online, there should be no regulation. But when we look at free speech on online platforms, these platforms have the responsibility to ensure that there are clear guidelines for content moderation, and they must be held accountable for content posted on their platforms. So fact-checking—which is so important in this world of misinformation and “fake news”—and complaints mechanisms have to be in place to ensure that harmful online speech is identified and addressed. So while freedom of expression is a fundamental right, it is important to recognize that it can be exploited to spread hate speech and harmful content, all in the guise of online freedom of speech—so this could be abused. This is being abused. Those responsible for online platforms must be accountable for their content. For example, from March 2020 to July 2020, our organization, FORUM-Asia, and its partners, including freedom of expression group AFAD, documented around 40 cases of hate speech and dangerous speech on Facebook. And the study’s scope is limited, as it only covered posts and comments in Burmese. The researchers involved also reported that many other posts were reported and subsequently removed before they could be documented. So the actual amount of hate speech is likely to be significantly higher. I recommend taking a look at the report. So while FORUM-Asia acknowledges the efforts of Facebook to promote policies to curb hate speech on the platform, it still needs to update and constantly review all these things, like the community guidelines, including those on political advertisements and paid or sponsored content, with the participation of the Facebook Oversight Board.

York: Can you tell me about a personal experience you’ve had with censorship, or perhaps the opposite, an experience you have of using freedom of expression for the greater good?

In terms of censorship, I don’t have personal experience with censorship. I wrote some opinion pieces in the Union of Catholic Asian News and other online platforms, but I haven’t had any experience of censorship. Although I did experience negative comments because of the content of what I wrote. There are a lot of trolls in the Philippines and they were and are very supportive of the previous administration of Duterte, so there was negative feedback when I wrote a lot on the war on drugs and the killings and impunity. But that’s also part of freedom of speech! I just had to ignore it, but, to be honest, I felt bad. 

York: Thank you for sharing that. Do you have a free expression hero? 

I believe we have so many unsung heroes in terms of free speech, and these are the unknown, persecuted human rights defenders. But I would also answer that during this week we are commemorating Holy Week [editor’s note: this interview took place on March 28, 2024], so I would like to remember Jesus Christ, whose passion, death, and resurrection Christians are commemorating this week. During his time, Jesus spoke about the ills of society; he was enraged when he witnessed how the defenseless poor were deprived of their rights, and he was angry when those in authority took advantage of them. And he spoke very openly about his anger, about his defense of the poor. So I believe that he is my hero.

Also, in contemporary times, I consider Óscar Arnulfo Romero y Galdámez, who was canonized as a saint in 2018, my free speech hero. I visited the chapel where he was assassinated and the Cathedral of San Salvador, where his mortal remains are buried. And the international community, especially the Salvadoran people, celebrated the 44th anniversary of his assassination last Sunday, the 24th of March, 2024. Seeing the ills of society, the consequent persecution of the progressive segment of the Catholic church and the churches in El Salvador, and the indiscriminate killings of the Salvadoran people in his communities, San Romero spoke courageously on the eve of his assassination. I’d like to quote what he said. He said:

“I would like to make a special appeal to the men of the army, and specifically to the ranks of the National Guard, the police and the military. Brothers, you come from our own people. You are killing your own brother peasants when any human order to kill must be subordinate to the law of God which says, ‘Thou shalt not kill.’ No soldier is obliged to obey an order contrary to the law of God. No one has to obey an immoral law. It is high time you recovered your consciences and obeyed your consciences rather than a sinful order. The church, the defender of the rights of God, of the law of God, of human dignity, of the person, cannot remain silent before such an abomination. We want the government to face the fact that reforms are valueless if they are to be carried out at the cost of so much blood. In the name of God, in the name of this suffering people whose cries rise to heaven more loudly each day, I implore you, I beg you, I order you in the name of God: stop the repression.”

So, as a fitting tribute to Saint Romero of the Americas, the United Nations has dedicated the 24th of March as the International Day for Truth, Justice, Reparation, and Guarantees of Non-repetition. So he is my hero. Of course, Jesus Christ, the most courageous human rights defender, continues to be my hero, and I’m sure he was the model for Monsignor Romero.

Podcast Episode: Antitrust/Pro-Internet

Imagine an internet in which economic power is more broadly distributed, so that more people can build and maintain small businesses online to make good livings. In this world, the behavioral advertising that has made the internet into a giant surveillance tool would be banned, so people could share more equally in the riches without surrendering their privacy.


(You can also find this episode on the Internet Archive and on YouTube.)

That’s the world Tim Wu envisions as he teaches and shapes policy on the revitalization of American antitrust law and the growing power of big tech platforms. He joins EFF’s Cindy Cohn and Jason Kelley to discuss using the law to counterbalance the market’s worst instincts, in order to create an internet focused more on improving people’s lives than on meaningless revenue generation. 

In this episode you’ll learn about: 

  • Getting a better “deal” in trading some of your data for connectedness. 
  • Building corporate structures that do a better job of balancing the public good with private profits. 
  • Creating a healthier online ecosystem with corporate “quarantines” to prevent a handful of gigantic companies from dominating the entire internet. 
  • Nurturing actual innovation of products and services online, not just newer price models. 

Timothy Wu is the Julius Silver Professor of Law, Science and Technology at Columbia Law School, where he has served on the faculty since 2006. First known for coining the term “net neutrality” in 2002, he served in President Joe Biden’s White House as special assistant to the President for technology and competition policy from 2021 to 2023; he also had worked on competition policy for the National Economic Council during the last year of President Barack Obama’s administration. Earlier, he worked in antitrust enforcement at the Federal Trade Commission and served as enforcement counsel in the New York Attorney General’s Office. His books include “The Curse of Bigness: Antitrust in the New Gilded Age” (2018), "The Attention Merchants: The Epic Scramble to Get Inside Our Heads” (2016), “The Master Switch: The Rise and Fall of Information Empires” (2010), and “Who Controls the Internet? Illusions of a Borderless World” (2006).

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

TIM WU
I think with advertising we need a better deal. So advertising is always a deal. You trade your attention and you trade probably some data, in exchange you get exposed to advertising and in exchange you get some kind of free product.

You know, that's the deal with television, that's been the deal for a long time with radio. But because it's sort of an invisible bargain, it's hard to make the bargain, and the price can be increased in ways that you don't necessarily notice. For example, we had one deal with Google in, let's say, around the year 2010 - if you go on Google now, it's an entirely different bargain.

It's as if there's been a massive inflation in these so-called free products. In terms of how much data has been taken, in terms of how much you're exposed to, how much ad load you get. It's as if sneakers went from 30 dollars to 1,000 dollars!

CINDY COHN
That's Tim Wu – author, law professor, White House advisor. He’s something of a Swiss Army knife for technology law and policy. He spent two years on the National Economic Council, working with the Biden administration as an advisor on competition and tech policy. He worked on antitrust legislation to try and check some of the country’s biggest corporations, especially, of course, the tech giants.

I’m Cindy Cohn - executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley - EFF’s Activism Director. This is our podcast, How to Fix the Internet. Our guest today is Tim Wu. His stint with the Biden administration was the second White House administration he advised. And in between, he ran for statewide office in New York. And that whole thing is just a sideline from his day job as a law professor at Columbia University. Plus, he coined the term net neutrality!

CINDY COHN
On top of that, Tim basically writes a book every few years that I read in order to tell me what's going to happen next in technology. And before that he's been a programmer and a more traditional lab-based scientist. So he's kind of got it all.

TIM WU
Sounds like I'm a dilettante.

CINDY COHN
Well, I think you've got a lot of skills in a lot of different departments, and in some ways, I've heard you call yourself a translator, and I think that's really what all of that experience gives you as a superpower: the ability to kind of talk between these kinds of spaces in the rest of the world.

TIM WU
Well, I guess you could say that. I've always been inspired by Wilhelm Humboldt, who had this theory that in order to have a full life, you had to try to do a lot of different stuff. So somehow that factors into it somewhere.

CINDY COHN
That's wonderful. We want to talk about a lot of things in this conversation, but I kind of wanted to start off with the central story of the podcast, which is, what does the world look like if we get this right? You know, you and I have spent a lot of years talking about all the problems, trying to lift up obstacles and get rid of obstacles.

But if we reach this end state where we get a lot of these problems right, in Tim Wu's world, what, what does it look like? Like, what does your day look like? What do people's experience of technology look like?

TIM WU
I think it looks like a world in which economic power surrounding the internet and surrounding the platforms is very much more distributed. And, you know, what that means practically is it means a lot of people are able to make a good living, I guess, based on being a small producer or having a service based skill in a way that feels sustainable and where the sort of riches of the Internet are more broadly shared.

So that's less about what kind of things you click on or, you know, what kind of apps you use and more about, I guess, the economic structure surrounding the Internet, which I think, you know, um, I don't think I'm the only person who thinks this, you know, the structure could be fairer and could work for more people.

It does feel like the potential – and, you know, we've all lived through that potential, starting in the 90s, of this kind of economically liberating force that would be the basis for a lot of people to make a decent living – seems to have turned into something where a lot of money aggregates in a few places.

CINDY COHN
Yeah, I remember, people still talk about the long tail, right, as a way in which the digitization of materials created a revenue stream that's more than just, you know, the flavor of the week that a movie studio or a book publisher might want us to pay attention to on kind of the cultural side, right?

That there was space for this. And that also makes me think of a conversation we just had with the folks in the right to repair movement talking about like their world includes a place where there's mom and pop shops that will help you fix your devices all over the place. Like this is another way in which we have centralized economic power.

We've centralized power, and if we decentralize this, or spread it more broadly, uh, we're going to create a lot of jobs and opportunities for people, not just as users of technology, but as the people who help build and offer it to us.

TIM WU
I'm writing a new book, um, working title Platform Capitalism, that has caused me to go back and look at, you know, the early promise of the internet. And I went back and I was struck by a book some of you may remember, called "An Army of Davids," by Glenn Reynolds, the Instapundit.
Yeah, and he wrote a book and he said, you know, the future of the American economy is going to be all these kind of mom and pop sellers who take over everything – he wrote this about 2006 – and he says, you know, bloggers are already competing with news operations, small sellers on eBay are already competing with retail stores, and so on down the line: the age of the big, centralized Goliath is over and the little guys are going to rule the future.

It kind of dovetailed – I went back and read Yochai Benkler's early work about a production commons model and how, you know, there'd be a new mode of production. Those books have not aged all that well. In fact, I think the book that wins is Blitzscaling. Somewhere along the line, instead of the internet favoring small business, small production, things went in the exact opposite direction.

And when I think about Yochai Benkler's idea of sort of production-based commons, you know, Waze was like that, the mapping program, until one day Waze was just bought by Google. So, I was just thinking about those as I was writing that chapter of the book.

CINDY COHN
Yeah, I think that's right. I think that identifying – and you've done a lot of work on this – the way in which we started with this promise and ended up in this other place can help us figure it out. And Cory Doctorow, our colleague and friend, has been doing a lot of work on this with chokepoint capitalism and other work that he's done for EFF and elsewhere.

And I also agree with him that, like, we don't really want to create the good old days. We want to create the good new days, right? Like, we want to experience the benefits of an Internet post-1990s, but also have those, those riches decentralized or shared a little more broadly, or a lot more broadly, honestly.

TIM WU
Yeah, I think that's right. And so I think part of what I'm saying is, you know, what would fix the internet, or what would make it something that people feel excited about – you know, I think people are always excited about apps and videos, but people are also excited about their livelihood and making money.

And if we can figure out the kind of structure that makes capitalism more distributed surrounding platforms – you know, it's not abandoning the idea that you have to have a good site or a product or something to gain customers. It's not a total surrender of that idea, but a return to that idea working for more people.

CINDY COHN
I mean, one of the things that you taught me in the early days is how kind of ‘twas ever so, right? If you think about radio or broadcast medium or other previous mediums, they kind of started out with this promise of a broader impact and broader empowerment and, and didn't end up that way as much as well.

And I know that's something you've thought about a lot.

TIM WU
Yeah, the first book I wrote by myself, The Master Switch, had that theme, and at the time when I wrote it, um, I wrote a lot of it in the ‘09, ‘08, ‘07 kind of period, and I think at that point I had more optimism that the internet could hold out, that it wouldn't be subject to the sort of monopolizing tendencies that had taken over the radio, which originally was thousands of radio stations, or the telephone system – which started as this ‘go west young man and start your own telephone company’ kind of technology – the film industry, and many others. I was firmly of the view that things would be different. Um, I think I thought that, uh, because of the TCP/IP protocol, because of the platforms like HTML that were, you know, the center of the web, because of net neutrality’s lasting influence. But frankly, I was wrong. I was wrong, at least when I was writing the book.

JASON KELLEY
As you've been talking about the sort of almost inevitable funneling of the power that these technologies have into a single or, or a few small platforms or companies, I wonder what you think about newer ideas around decentralization that have sort of started over the last few years, in particular with platforms like Mastodon or something like that, these kinds of APIs or protocols, not platforms, that idea. Do you see any promise in that sort of thing? Because we see some, but I'm wondering what you think.

TIM WU
I do see some promise. I think that in some ways, it's a long overdue effort. I mean, it's not the first. I can't say it's the first. Um, and part of me wishes that we – the idealistic people, even the idealistic people at some of these companies, such as they were – had been a bit more careful about the design in the first place.

You know, I guess what I would hope … the problem with Mastodon and some of these is they're trying to compete with entities that already are operating with all the full benefits of scale and which are already tied to sort of a Delaware private corporate model. Uh, now this is a little bit, I'm not saying that hindsight is 20/20, but when I think about the major platforms and entities of the early 21st century, it's really only Wikipedia that got it right, in my view, by structurally insulating themselves from certain forces and temptations.

So I guess what I'm trying to say is that, uh, part of me wishes we'd done more of this earlier. I do think there's hope in them. I think it's very challenging in current economics to succeed, and sometimes you have to wonder, if you go in a different direction, you know, it might be – I don't want to say impossible – very challenging when you're competing with existing structures. And if you're starting something new, you should start it right.
That said, AI started in a way structurally different and we've seen how that's gone recently.

CINDY COHN
Oh, say more, say more!

JASON KELLEY
Yeah. Yeah. Keep, keep talking about AI.

CINDY COHN
I'm very curious about your thinking about that.

TIM WU
Well, you know, it's said that the Holy Roman Empire was neither holy, nor Roman, nor an empire, and OpenAI is now no longer open, nor non-profit, nor anything else. You know, it's kind of, uh, been extraordinary that the circuit breakers they tried to install have just been blown straight through. Um, and I think there's been a lot of negative coverage of the board, um, because, you know, the business press is kind of narrow on these topics. But, um, you know, OpenAI, I guess, at some point, tried to structure itself more carefully, and, um, you know, now the board is run by people whose main experience has been, um, uh, taking good organizations and making them worse, like Quora. So, yeah, that is not exactly an inspiring story, uh, I guess, of OpenAI trying to structure itself a little differently and, uh, failing to hold.

CINDY COHN
I mean, I think Mozilla has managed to have a structure that has a, you know, kind of complicated for-profit/not-for-profit strategy that has worked a little better, but I hear you. I think that if you do a power analysis, right, you know, a nonprofit is going to have a very hard time up against all the money in the world.

And I think that that seems to be what happened for OpenAI. Uh, once all the money in the world showed up, it was pretty hard to, uh, actually impossible for the public interest nonprofit side to hold sway.

TIM WU
When I think about it over and over, I think engineers and the people who set up these, uh, structures have been repeatedly very naive about, um, the power of their own good intentions. And I agree, Mozilla is a good example. Wikipedia is a good example. Google, I remember when they IPO'd, they had some setup, and they said, ‘We're not going to be an ordinary company,’ or something like that. And they sort of had preferred stock for some of the owners. You know, Google is still in some ways an impressive company, but it's hard to differentiate them from any other slightly money-grubbing, non-innovative colossus, um, of the kind they were determined not to become.

And, you know, there was this like, well, it's not going to be us, because we're different. You know, we're young and idealistic, and why would we want to become, I don't know, like Xerox or IBM, but like all of us, you begin by saying, I'm never going to become like my parents, and then next thing you know, you're yelling at your kids or whatever.

CINDY COHN
Yeah, it's the, you know, meet the new boss, same as the old boss, right? What we were hoping was that we would be free of some of the old bosses and have a different way to approach things, but the forces that stick people back in line are pretty powerful, I think.

TIM WU
And some of the old structures, you know, look a little better. Like, I'm not going to say newspapers are perfect, but a structure like the New York Times structure, for example, basically is better than Google's. And I just think there was this sense that, well, we can solve that problem with code and good vibes. And that turned out to be the great mistake.

CINDY COHN
One of the conversations that you and I have had over the years is kind of the role of regulation on, on the internet. I think the fight about whether to regulate or not to regulate the Internet was always a little beside the point. The question is how. And I'm wondering what you're thinking now. You've been in the government a couple times. You've tried to push some things that were pretty regulatory. How are you thinking now about something like a centralized regulatory agency or another approach to, you know, regulating the Internet?

TIM WU
Yeah, I, you know, I continue to have mixed feelings about something like a central internet commission, mostly for some of the reasons you said. But on the other hand, if I want to achieve what I mentioned – the idea of platforms that are an input into a lot of people being able to operate on top of them and run businesses, like, you know, at times, the roads have been, or the electric system, or the phone network – um, it's hard to get away from the idea of having some hard rules. Sometimes I think my sort of platonic form of government regulation or rules was the 1956 AT&T consent decree, which, for those who are not as deep in those weeds as I am, told AT&T that it could do nothing but telecom, and therefore not do computing, and also forced them to license every single one of their patents for free. And the impact of that was more than one thing. One is, because they were out of computing, they were not able to dominate it, and you had companies then new to computing, like IBM and others, that got into that space and developed the American computing industry completely separate from AT&T.

And you also ended up with semiconductor companies starting around that time with the transistor patent and other patents they could use for free. So, you know, I don't know exactly how you achieve that, but I'm drawn to basically keeping the main platforms in their lane. I would like there to be more competition.
The antitrust side of me would love it. And I think that in some areas we are starting to have it, like in social media, for better or for worse. But maybe for some of the more basic fundamentals, online markets and, you know, as much competition as we can get – but some rule to stay out of other businesses, some rule to stop eating the ecosystem. I do think we need some kind of structural separation rules. Who runs those is a little bit of a harder question.

CINDY COHN
Yeah, we're not opposed to structural separation at EFF. I think we, we think a lot more about interoperability to start with as a way to, you know, help people have other choices, but we haven't been opposed to structural separation, and I think there are situations in which it might make a lot of good sense, especially, you know, in the context of mergers, right?

Where the company has actually swallowed another company that did another thing. That's kind of the low-hanging fruit, and EFF has participated a lot in commenting on potential mergers.

TIM WU
I'm not opposed to the idea of pushing interoperability. I think that, based on the experience of the last 100 years, it is a tricky thing to get right. I'm not saying it's impossible. We do have examples: the phone network in the early 20th century, where interconnection was relatively successful. And right now, you know, when you change between, let's say, T-Mobile and Verizon – there's only three left – you get to take your phone number with you, which is a form of interoperability.

But it has the risk of being something you put a lot of effort into and it not necessarily working that well in terms of actually stimulating competition, particularly because of the problem of sabotage, as we saw in the ‘96 Act. So it's actually not about the theory, it's about the practice, the legal engineering of it. Can you find the right thing where you've got kind of a cut point where you could have a good interoperability scheme?

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Tim Wu. I was intrigued by what he said about keeping platforms in their lane. I wanted to hear him speak more about how that relates to antitrust – is that spreading into other ecosystems what sets his antitrust alarm bells off? How does he think about that?

TIM WU
I guess the phrase I might use is quarantine: you want to quarantine businesses, I guess, from others. And it's less of a traditional antitrust kind of remedy, although obviously, as in the ‘56 consent decree, which came out of an antitrust suit against AT&T, it can be a remedy.

And the basic idea of it is, it's explicitly distributional in its ideas. It wants more players in the ecosystem, in the economy. It's almost like an ecosystem-promoting device, which is you say, okay, you know, you are the unquestioned master of this particular area of commerce. Maybe we're talking about Amazon and its online shopping and other forms of e-commerce, or Google and search.

We're not going to give up on the hope of competition, but we think that, in terms of having a more distributed economy where more people have their say – um, almost in the way that you might insulate the college students from the elementary school students or something – we're going to give, you know, room for other people to develop their own industries in these side markets. Now, you know, there's resistance – people say, well, okay, but Google is going to do a better job in, uh, I don't know, shopping or something. You know, they might do a good job. They might not. But, you know, they've got their returns, and there's always going to be an advantage, as a platform owner and also as a monopoly owner, in having the ability to cross-subsidize and the ability to help themselves.

So I think you get healthier ecosystems with quarantines. That's basically my instinct. And, you know, we do quarantines either legally or de facto all the time. As I said, the phone network has long been barred from being involved in a lot of businesses. Banking is kept out of a lot of businesses because of obvious problems of corruption. The electric network, I guess they could make toasters if they want, but it was never set up to allow them to dominate the appliance markets.

And, you know, if they did dominate the appliance markets, I think it would be a much poorer world, a lot less interesting innovation, and frankly, a lot less wealth for everyone. So, yeah, I have strong feelings. It's more of my net neutrality side that drives this thinking than my antitrust side, I’ll put it that way.

JASON KELLEY
You specifically worked in both the Obama and Biden administrations on these issues. I'm wondering if your thinking on this has changed in experiencing those things from the sort of White House perspective, and also just how different those two sort of experiences were – obviously the moments are different in time and everything like that, but they're not so far apart, maybe light years in terms of technology – but what was your sort of experience between those two, and how do you think we're doing now on this issue?

TIM WU
I want to go back to a slightly earlier time in government, not the Obama – actually, it was the Obama administration, but my first job in the, okay, sorry, my third job in the federal government, uh, I guess I'm one of these recidivists or something, was at the Federal Trade Commission.

CINDY COHN
Oh yeah, I remember.

TIM WU
Taking the first hard look at big tech – in fact, we were investigating Google for the first time for possible antitrust offenses, and we also did the first privacy remedy on Facebook, which I will concede was a complete and absolute failure of government, one of the weakest remedies, I think. We did that right before Cambridge Analytica, and it obviously had no effect on Facebook's conduct at all. So, one of the failed remedies. I think that when I think back about that period, the main difference was that the tech platforms were different in a lot of ways.

I believe that, uh, monopolies and big companies have, have a life cycle. And they were relatively early in that life cycle, maybe even in a golden age. A company like Amazon seemed to be making life possible for a lot of sellers. Google was still in its early phase and didn't have a huge number of verticals. Still had limited advertising. Most searches still didn't turn up that many ads.

You know, they were in a different stage of their life. And, while they were already big companies, they still felt, in some sense, relatively vulnerable to even more powerful economic forces. So they hadn't sort of reached that maturity. You know, 10 years later, I think the life cycle has turned. I think companies have largely abandoned innovation in their core products and turned to defense – most of their innovations are attempts to raise more revenue, as opposed to making the product better. Uh, it kind of reminds me of the airline industry, which stopped innovating somewhere in the seventies and started trying to innovate in, um, terms of price structures and seats being smaller, that kind of thing.

You know, you reach this end point – I think the airlines are the end point – where you take what was at one point a high-tech industry and just completely give up on anything other than trying to innovate in terms of your pricing models.

CINDY COHN
Yeah, I mean, you know, Cory keeps coming up, but of course Cory calls it the “enshittification” of, uh, of services, and I think that, in typical Cory fashion, captures this stage of the process.

TIM WU
Yeah, just to speak more broadly, you know, I think there's a lot of faith and belief that a company like Google, you know, in its heart meant well, and I do still think the people working there mean well. But I feel that, you know, the structure they set up, which requires showing increasing revenue and profit every quarter, began to catch up with them, and we’re at a much later stage of the process.

CINDY COHN
Yep.

TIM WU
Or the life cycle, I guess I'd put it.

CINDY COHN
And then for you, kind of coming in as a government actor on this, like, what did that mean in terms of, like, was it, I'm assuming, I kind of want to finish the sentence for you. And that, you know, that meant it was harder to get them to do the right thing. It meant that their defenses were better against trying to do the right thing.

Like how did that impact the governmental interventions that you were trying to help make happen?

TIM WU
I think it was both. I think there was both, in terms of government action, a sense that the record was very different. The Google story in 2012 is very different than 2023. And the main difference is in 2023 Google is paying out 26.3 billion a year to other companies to keep its search engine where it is, and arguably to split the market with Apple.

You know, there wasn't that kind of record back in 2012. Maybe we still should have acted, but there wasn't that much money being so obviously spent on pure defense of monopoly. But also people were less willing. They thought the companies were great. Overall, I mean, there was a broader ideological change: people still felt – many people from the Clinton administration – that the government was the problem and private industry was the solution. They had kind of a sort of magical thinking about the ability of this industry to be different in some fundamental way.

So the chair of the FTC wasn't willing to pull the trigger. The economists all said it was a terrible idea. You know, they failed to block over a thousand mergers that big tech did during that period, and I think it's very low odds that none of those thousands were anti-competitive – or that, in the aggregate, they weren't a way of building up market power.

Um, it did enrich a lot of small company people, but I think people at companies like Waze really regret selling out and, you know, ending up not really building anything of their own but becoming a tiny outpost of the Google empire.

CINDY COHN
Yeah, the “acquihire” thing is very central now, and what I hear from people in the industry is that if getting acquired by one of the big ones is not your strategy, it's very hard to get funded, right? It feeds back into the VCs and how you get funded to get something built.

If it's not something that one of the big guys is going to buy, you're going to have a hard time building it and you're going to have a hard time getting the support to get to the place where you might actually even be able to compete with them.

TIM WU
And I think sometimes people forget we had different models. You know, some of your listeners might forget that, you know, in the ‘70s, ‘80s, and ‘90s, and early 2000s, people did build companies not just to be bought...

CINDY COHN
Right.

TIM WU
...but to build fortunes, or because they thought it was a good company. I mean, the people who built Sun, or Apple, or, you know, Microsoft, they weren't saying, well, I hope I'm gonna be bought by IBM one day. And they made real fortunes. I mean, look, being acquired, you can obviously become a very wealthy person, but you don't become a person of significance. You can go fund a charity or something, but you haven't really done something with your life.

CINDY COHN
I'm going to flip it around again. So we get to the place where, in the Tim Wu vision, power is spread more broadly. We've got lots of little businesses all around. We've got many choices for consumers. What else do you see in this world? Like, what role does the advertising business model play in this kind of better future? That's just one example of many that we could give.

TIM WU
Yeah, no, I like your vision of a different future. I think, uh, just to focus on it, it goes back to the sense of opportunity – you know, you could have a life where you run a small business on the internet that is a respectable business, and you're neither a billionaire nor impoverished, but, you know, you just have your own business the way people in New York, or in other parts of the country, used to run stores. And in that world, I mean, in my ideal world, there is advertising, but advertising is primarily informational, if that makes sense.

It provides useful information. And it's a long way to go between here and there, but in that world, um, you know, it's not the default business model for informational sources, such that it has much less corrupting effect. Um, you know, I think that – obviously everyone's business model is going to affect them – but advertising has some of the more corrupting business models around.

So, in my ideal world, it's not that advertising will go away – people want information – but we'd strike a better bargain. Exactly how you do that? I guess more competition helps, you know – lower-advertising sites you might frequent, better privacy-protecting sites – but, you know, also passing privacy legislation might help too.

CINDY COHN
I think that’s right. I think EFF has taken a position that we should ban behavioral ads. That's a pretty strong position for us, and not what we normally do, um, to say, well, we need to ban something. But we also need, of course, comprehensive privacy law, because what underlies so many of the harms that we're seeing online right now is this, this lack of a baseline privacy protection.

I don't know if you see it the same way, but it certainly seems to be the through line for a lot of the harms that are coming up as things people are concerned about. Yeah.

TIM WU
I mean, absolutely, and I, you know, don't want to give EFF advice on their views, but I would say that I think it's wise to see the totally unregulated collection of data from, you know, millions, if not billions of people as a source of so many of the problems that we have.

It drives unhealthy business models, and it leads to real-world consequences in terms of identity theft and so many others. But I think I'd focus first on, yeah, the kind of behavior it encourages and the kind of business models it encourages, which are ones that just don't, in the aggregate, feel very good for the businesses or for us in particular.

So yeah, my first priority legislatively, I think, if I were acting at this moment, would be starting right there: with, um, a privacy law that is not just something that gives supposed user rights to take a look at the data that's collected, but that meaningfully stops the collection of data. And I think we'll all just shrug our shoulders and say, oh, we're better off without that. Yes, it supported some things, but we will still have some of those things – it's not as if we didn't have friends before Facebook.

It's not as if we didn't have video content before YouTube. You know, these things will survive without behavioral advertising. I think your stance on this is entirely, uh, correct.

CINDY COHN
Great. Thank you – I always love it when Tim agrees with me, and, you know, it pains me when we disagree. But one of the things I know is that you are one of the people who was inspired by Larry Lessig, and we cite Larry a lot on the show because we like to think about things, or organize them, in terms of the four levers of, um, you know, digital regulation – laws, norms, markets, and code – as four ways that we could control things online. And I know you've been focusing a lot on laws lately, and markets as well.

How do you think about, you know, these four levers and where we are and, and how we should be deploying them?

TIM WU
Good question. I regard Larry as a prophet. He was my mentor in law school, and in fact, he is responsible for most of my life direction. Larry saw that there was a force arising through code that already was, at that time – the 90s, early 2000s – not particularly subject to any kind of accountability, and he saw that it could take forms that might not be consistent with the kind of liberties you would like to have or expect, and he was right about that.

You know, you can say whatever you want about law or government, and there are many examples of terrible government, but at least with the United States Constitution we think, well, there is this problem called tyranny and we need to do something about it.

There's no real equivalent for the development of abusive technologies unless you get government to do something about it, and government hasn't done much about it. You know, I think the interactions are what interest me about the four forces. So if we agree that code has a certain kind of sovereignty over our lives in many ways, most of us on a day-to-day basis are probably more affected by the code of the devices we use than by the laws we operate under.

And the question is, what controls code? And the two main contenders are the market and law. And right now the winner by far is just the market, which has led codemakers in directions that even they find kind of unfortunate and disgraceful.

I don't remember who had that quote, but it was some Facebook engineer that said the greatest minds of our generation are writing code to try to have people click on random ads, and we have sort of wasted a generation of talent on meaningless revenue generation when they could be building things that make people's lives better.

So, you know, the answer, which is not easy, is to use law to counter the market. And that's where I think we are with Larry's four factors.

CINDY COHN
Yeah, I think that's right, and I agree that it's a little roshambo, right? You can control code with laws and markets, and you can control markets with code – which is kind of where interoperability comes in sometimes – and with laws, and, you know, norms play a slightly different whammy role in all of these things. But I do think those interactions are really important. Again, I've always thought it was a somewhat phony conversation about, you know, "to regulate or not to regulate, that is the question," because that's not actually particularly useful in terms of thinking about things – we are embedded in a set of laws. It's just that there are ones we pay attention to and ones we might not notice. But I do think we're in a time when we have to think a lot harder about how to make laws that will be flexible enough to empower people and empower competition, and not lock in the winners of today's markets. And we spend a lot of time thinking about that issue.

TIM WU
Well, let me say this much. This might sound a little contradictory in my life story, but I'm not actually a fan of big government, certainly not overly prescriptive government. Having been in government, I see government's limits, and they are real. But I do think the people together are powerful.

I think laws can be powerful, but what they most usefully do is balance out the market. You know what I'm saying? And create different incentives or different forces against it. I think trying to have government decide exactly how tech should run is usually a terrible idea. But to cut off incentives – you talked about behavioral advertising. So let's say you ban behavioral advertising, just the way we ban child labor or something. You know, you can live without it. And, yeah, maybe we're less productive because we don't let 12-year-olds work in factories. There's a marginal loss of revenue, but I frankly think it's worth it.

And, you know, some of the other practices that have shown up are in some ways the equivalent, and we can live without them. You know, it's sort of easy to say we should ban child labor. But when you look for those kinds of practices, that's where we need law to be active.

JASON KELLEY
Well, Cindy, I came away from that with a reading list. I'm sure a lot of people are familiar with those authors and those books, but I am going to have to catch up. I think we'll put some of them, maybe all of the books, in the show notes so that people who are wondering can catch up on their end.

You, as someone who's already read all those books, probably have different takeaways from this conversation than me.

CINDY COHN
You know, I really like how Tim thinks. He comes at this, especially most recently, from an economics perspective. So his future is really an economic one.

It's about an internet that has lots of space for people to make a reasonable living, as opposed to a few people making a killing or selling their companies to the big tech giants. And I think that vision dovetails with those of a lot of the people we've talked to on this show: that, you know, in some ways we've got to think about how we redistribute the internet, and that includes redistributing the economic benefits.

JASON KELLEY
Yeah. And thinking about, you know, something you've said many times, which is this idea of rather than going backwards to the internet we used to have, or the world we used to have, we're really trying to build a better world with the one we do have.

So another thing he did mention that I really pulled away from this conversation was when antitrust makes sense. And that sort of idea of, well, what do you do when companies start spreading into other ecosystems? That's when you really have to start thinking about the problems that they're creating for competition.

And I think the word he used was quarantine. Is that right?

CINDY COHN
Yeah, I love that image.

JASON KELLEY
Yeah, that was just a helpful, I think, way for people to think about how antitrust can work. And that was something that I'll take away from this probably forever.

CINDY COHN
Yeah, I also liked his vision of what kind of deal we have with a lot of these free tools – or so-called free tools – which is, you know, at one time, when we signed up for a Gmail account, the deal was that it was going to look at what you searched on and what you wrote, and then show you ads based on the context and what you did.

And now that deal is much, much worse. And I think he's right in likening that to something that, you know, has secretly gotten much more expensive for us – that the deal for us as consumers has gotten worse and worse. And I really like that framing, because, again, it translates out from the issues where we live – you know, privacy and free speech and fairness – and turns them into something that is actually kind of an economic framing of some of the same points.

I think the kind of upshot from Tim and, honestly, some of the other people we've talked to is that this idea of ‘blitzscaling’, um, growing gigantic platforms, is really at the heart of a lot of the problems that we're seeing in free speech and in privacy and also in economic fairness. And I think that's a point that Tim makes very well.

I think that from, you know, The Attention Merchants to The Curse of Bigness, Tim has been writing in this space for a while, and what I appreciate is that Tim is really a person, um, who came up in the Internet. He understands the Internet, he understands a lot of the values, and so he's not writing as an outsider throwing rocks as much as an insider who is kind of dismayed at how things have gone and looking to try to unpack all of the problems. And I think his observation, which is shared by a lot of people, is that a lot of the problems that we're seeing inside tech are also problems we're seeing outside tech. It's just that tech is new enough that they really took over pretty fast.

But I think it's important for us to recognize the problems inside tech, and it doesn't let tech off the hook to note that these are broader societal problems – but it may help us in thinking about how we get out of them.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode you heard Perspectives by J.Lang featuring Sackjo22 and Admiral Bob, and Warm Vacuum Tube by Admiral Bob featuring starfrosch.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll talk to you again soon.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

"Infrastructures of Control": Q&A with the Geographers Behind University of Arizona's Border Surveillance Photo Exhibition

Guided by EFF's map of Customs & Border Protection surveillance towers, University of Arizona geographers Colter Thomas and Dugan Meyer have been methodically traversing the U.S.-Mexico border and photographing the infrastructure that comprises the so-called "virtual wall."

An armored vehicle next to a surveillance tower along the Rio Grande River

Anduril Sentry tower beside the Rio Grande River. Photo by Colter Thomas (CC BY-NC-ND 4.0)

From April 12-26, their outdoor exhibition "Infrastructures of Control" will be on display on the University of Arizona campus in Tucson, featuring more than 30 photographs of surveillance technology, a replica surveillance tower, and a blown-up map based on EFF's data.

Locals can join the researchers and EFF staff for an opening night tour at 5pm on April 12, followed by an EFF Speakeasy/Meetup. There will also be a panel discussion at 5pm on April 19, moderated by journalist Yael Grauer, co-author of EFF's Street-Level Surveillance hub. It will feature a variety of experts on the border, including Isaac Esposto (No More Deaths), Dora Rodriguez (Salvavision), Pedro De Velasco (Kino Border Initiative), Todd Miller (The Border Chronicle), and Daniel Torres (Daniel Torres Reports).

In the meantime, we chatted with Colter and Dugan about what their project means to them.

MAASS: Tell us what you hope people will take away from this project.

MEYER: We think of our work as a way for us to contribute to a broader movement for border justice that has been alive and well in the U.S.-Mexico borderlands for decades. Using photography, mapping, and other forms of research, we are trying to make the constantly expanding infrastructure of U.S. border policing and surveillance more visible to public audiences everywhere. Our hope is that doing so will prompt more expansive and critical discussions about the extent to which these infrastructures are reshaping the social and environmental landscapes throughout this region and beyond.

THOMAS: The diversity of landscapes that make up the borderlands can make it hard to see how these parts fit together, but the common thread of surveillance is an ominous sign for the future. We hope that the work we make can encourage people from different places and experiences to find common cause in looking critically at these infrastructures and what they mean for the future of the borderlands.

A surveillance tower in a valley.

An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0)

MAASS: So much is written about border surveillance by researchers working off documents, without seeing these towers first hand. How did your real-world exploration affect your understanding of border technology?

THOMAS: Personally I’m left with more questions than answers when doing this fieldwork. We have driven along the border from the Gulf of Mexico to the Pacific, and it is surprising just how much variation there is within this broad system of U.S. border security. It can sometimes seem like there isn’t just one border at all, but instead a patchwork of infrastructural parts—technologies, architecture, policy, etc.—that only looks cohesive from a distance.

A surveillance tower on a hill

An Integrated Fixed Tower in Southern Arizona. Photo by Colter Thomas (CC BY-NC-ND 4.0)

MAASS: That makes me think of Trevor Paglen, an artist known for his work documenting surveillance programs. He often talks about the invisibility of surveillance technology. Is that also what you encountered?

MEYER: The scale and scope of U.S. border policing is dizzying, and much of how this system functions is hidden from view. But we think many viewers of this exhibition might be surprised—as we were when we started doing this work—just how much of this infrastructure is hidden in plain sight, integrated into daily life in communities of all kinds.

This is one of the classic characteristics of infrastructure: when it is working as intended, it often seems to recede into the background of life, taken for granted as though it always existed and couldn’t be otherwise. But these systems, from surveillance programs to the border itself, require tremendous amounts of labor and resources to function, and when you look closely, it is much easier to see the waste and brutality that are their real legacy. As Colter and I do this kind of looking, I often think about a line from the late David Graeber, who wrote that “the ultimate hidden truth of the world is that it is something that we make, and could just as easily make differently.”

THOMAS: Like Dugan said, infrastructure rarely draws direct attention. As artists and researchers, then, our challenge has been to find a way to disrupt this banality visually, to literally reframe the material landscapes of surveillance in ways that sort of pull this infrastructure back into focus. We aren’t trying to make this infrastructure beautiful, but we are trying to present it in a way that people will look at it more closely. I think this is also what makes Paglen’s work so powerful—it aims for something more than simply documenting or archiving a subject that has thus far escaped scrutiny. Like Paglen, we are trying to present our audiences with images that demand attention, and to contextualize those images in ways that open up opportunities and spaces for viewers to act collectively with their attention. For us, this means collaborating with a range of other people and organizations—like the EFF—to invite viewers into critical conversations that are already happening about what these technologies and infrastructures mean for ourselves and our neighbors, wherever they are coming from.

Federal Court Dismisses X's Anti-Speech Lawsuit Against Watchdog

This post was co-written by EFF legal intern Melda Gurakar.

Researchers, journalists, and everyone else have a First Amendment right to criticize social media platforms and their content moderation practices without fear of being targeted by retaliatory lawsuits, a federal court recently ruled.

The decision by a federal court in California to dismiss a lawsuit brought by Elon Musk’s X against the Center for Countering Digital Hate (CCDH), a nonprofit organization dedicated to fighting online hate speech and misinformation, is a win for greater transparency and accountability of social media companies. The court’s ruling in X Corp. v. Center for Countering Digital Hate Ltd. shows that X had no legitimate basis to bring its case in the first place, as the company used the lawsuit to penalize the CCDH for criticizing X and to deter others from doing so.

Vexatious cases like these are known as Strategic Lawsuits Against Public Participation, or SLAPPs. These lawsuits chill speech because they burden speakers who engaged in protected First Amendment activity with the financial costs and stress of having to fight litigation, rather than seeking to vindicate legitimate legal claims. The goal of these suits is not to win, but to inflict harm on the opposing party for speaking. We are grateful that the court saw X’s lawsuit was a SLAPP and dismissed it, ruling that the claims lacked legal merit and that the suit violated California’s anti-SLAPP statute.

The lawsuit, filed in July 2023, accused the CCDH of unlawfully accessing and scraping data from X's platform, which X argued CCDH used in order to harm X Corp.'s reputation and, by extension, its business operations, leading to lost advertising revenue and other damages. X argued that CCDH had initiated this calculated “scare campaign” aimed at deterring advertisers from engaging with the platform, supposedly resulting in a significant financial loss for X. Moreover, X claimed that the CCDH breached its Terms of Service contract as a user of X.

The court ruled that X’s accusations were insufficient to bypass the protective shield of California's anti-SLAPP statute. Furthermore, the court's decision to dismiss X Corp.'s claims, including those related to breach of contract and alleged infringements of the Computer Fraud and Abuse Act, stemmed from X Corp.'s inability to convincingly allege or demonstrate significant losses attributable to CCDH's activities. This outcome is not only a triumph for CCDH, but also validates the anti-SLAPP statute's role in safeguarding critical research efforts against baseless legal challenges. Thankfully, the court also rejected X’s claim under the federal Computer Fraud and Abuse Act (CFAA). X had argued that the CFAA barred CCDH’s scraping of public tweets—an erroneous reading of the law. The court found that, regardless of that argument, X had not shown a “loss” of the type protected by the CFAA, such as technological harms to data or computers.

EFF, alongside the ACLU of Northern California and the national ACLU, filed an amicus brief in support of CCDH, arguing that X Corp.'s lawsuit mischaracterized a nonviable defamation claim as a breach of contract to retaliate against CCDH. The brief supported CCDH's motion to dismiss, arguing that the terms-of-service provision invoked against CCDH's data scraping should be deemed void as contrary to the public interest. It also warned of a potential chilling effect on research and activism that rely on digital platforms to gather information.

The ramifications of X Corp v. CCDH reach far beyond this decision. The ruling affirms the Center for Countering Digital Hate's freedom to conduct and publish research that critiques X Corp., and sets a precedent that protects critical voices from being silenced online. We are grateful that the court reached this correct result and affirmed that people should not be targeted by lawsuits for speaking critically of powerful institutions.

The White House is Wrong: Section 702 Needs Drastic Change

With Section 702 of the Foreign Intelligence Surveillance Act set to expire later this month, the White House recently released a memo objecting to the SAFE Act—legislation introduced by Senators Dick Durbin and Mike Lee that would reauthorize Section 702 with some reforms. The White House is wrong. SAFE is a bipartisan bill that may be our most realistic chance of reforming a dangerous NSA mass surveillance program that even the federal government’s privacy watchdog and the White House itself have acknowledged needs reform.

As we’ve written, the SAFE Act does not go nearly far enough in protecting us from the warrantless surveillance the government now conducts under Section 702. But, with surveillance hawks in the government pushing for a reauthorization of their favorite national security law without any meaningful reforms, the SAFE Act might be privacy and civil liberties advocates’ best hope for imposing some checks upon Section 702.

Section 702 is a serious threat to the privacy of those in the United States. It authorizes the collection of overseas communications for national security purposes, and, in a globalized world, this allows the government to collect a massive amount of Americans’ communications. As Section 702 is currently written, intelligence agencies and domestic law enforcement have backdoor, warrantless access to millions of communications from people with clear constitutional rights.

The White House objects to the SAFE Act’s two major reforms. The first requires the government to obtain court approval before accessing the content of communications for people in the United States which have been hoovered up and stored in Section 702 databases—just like police have to do to read your letters or emails. The SAFE Act’s second reform closes the “data broker loophole” by largely prohibiting the government from purchasing personal data it would otherwise need a warrant to collect. While the White House memo is just the latest attempt to scare lawmakers into reauthorizing Section 702, it omits important context and distorts the effects of the key SAFE Act amendments.

The government has repeatedly abused Section 702 by searching its databases for Americans’ communications. Every time, the government claims it has learned from its mistakes and won’t repeat them, only for another abuse to come to light years later. The government asks you to trust it with the enormously powerful surveillance tool that is Section 702—but it has proven unworthy of that trust.

The Government Should Get Judicial Approval Before Accessing Americans’ Communications

Requiring the government to obtain judicial approval before it can access the communications of Americans and those in the United States is a necessary, minimum protection against Section 702’s warrantless surveillance. Because Section 702 does not require safeguards of particularity and probable cause when the government initially collects communications, it is essential to require the government to at least convince a judge that there is a justification before the “separate Fourth Amendment event” of the government accessing the communications of Americans it has collected.

The White House’s memo claims that the government shouldn’t need to get court approval to access communications of Americans that were “lawfully obtained” under Section 702. But this ignores the fundamental differences between Section 702 and other surveillance. Intelligence agencies and law enforcement don’t get to play “finders keepers” with our communications just because they have a pre-existing program that warrantlessly vacuums them all up.

The SAFE Act has exceptions from its general requirement of court approval for emergencies, consent, and—for malicious software—“defensive cybersecurity queries.” While the White House memo claims these are “dangerously narrow,” exigency and consent are longstanding, well-developed exceptions to the Fourth Amendment’s warrant requirement. And the SAFE Act gives the government even more leeway than the Fourth Amendment ordinarily does in also excluding “defensive cybersecurity queries” from its requirement of judicial approval.

The Government Shouldn’t Be Able to Buy What It Would Otherwise Need a Warrant to Collect

The SAFE Act properly imposes broad restrictions upon the government’s ability to purchase data—because way too much of our data is available for the government to purchase. Both the FBI and NSA have acknowledged knowingly buying data on Americans. As we’ve written many times, the commercially available information that the government purchases can be very revealing about our most intimate, private communications and associations. The Director of National Intelligence’s own report on government purchases of commercially available information recognizes this data can be “misused to pry into private lives, ruin reputations, and cause emotional distress and threaten the safety of individuals.” This report also recognizes that this data can “disclose, for example, the detailed movements and associations of individuals and groups, revealing political, religious, travel, and speech activities.”

The SAFE Act would go a significant way towards closing the “data broker loophole” that the government has been exploiting. Contrary to the White House’s argument that Section 702 reauthorization is “not the vehicle” for protecting Americans’ data privacy, closing the “data broker loophole” goes hand-in-hand with putting crucial guardrails upon Section 702 surveillance: the necessary reform of requiring court approval for government access to Americans’ communications is undermined if the government is able to warrantlessly collect revealing information about Americans some other way. 

The White House further objects that the SAFE Act does not address data purchases by other countries and nongovernmental entities, but this misses the point. The best way Congress can protect Americans’ data privacy from these entities and others is to pass comprehensive data privacy regulation. But, in the context of Section 702 reauthorization, the government is effectively asking for special surveillance permissions for itself: that its surveillance continue to be subjected to minimal oversight while other countries’ surveillance practices are regulated. (This has been a pattern as of late.) The Fourth Amendment prohibits intelligence agencies and law enforcement from giving themselves the prerogative to invade our privacy.

In Historic Victory for Human Rights in Colombia, Inter-American Court Finds State Agencies Violated Human Rights of Lawyers Defending Activists

In a landmark ruling for fundamental freedoms in Colombia, the Inter-American Court of Human Rights found that for over two decades the state harassed, surveilled, and persecuted members of a lawyers’ group that defends human rights defenders, activists, and indigenous people, putting the attorneys’ lives at risk.

The ruling is a major victory for civil rights in Colombia, which has a long history of abuse and violence against human rights defenders, including murders and death threats. The case involved the unlawful and arbitrary surveillance of members of the Jose Alvear Restrepo Lawyers Collective (CAJAR), a Colombian human rights organization defending victims of political persecution and community activists for over 40 years.

The court found that since at least 1999, Colombian authorities carried out a constant campaign of pervasive secret surveillance of CAJAR members and their families. The state violated their rights to life, personal integrity, private life, freedom of expression and association, and more, the court said. It noted the particular impact experienced by women defenders and those who had to leave the country amid threats, attacks, and harassment for representing victims.

The decision is the first by the Inter-American Court to find a State responsible for violating the right to defend human rights. The court is a human rights tribunal that interprets and applies the American Convention on Human Rights, an international treaty ratified by over 20 states in Latin America and the Caribbean. 

In 2022, EFF, Article 19, Fundación Karisma, and Privacy International, represented by Berkeley Law’s International Human Rights Law Clinic, filed an amicus brief in the case. EFF and partners urged the court to rule that Colombia’s legal framework regulating intelligence activity and the surveillance of CAJAR and their families violated a constellation of human rights and forced them to limit their activities, change homes, and go into exile to avoid violence, threats, and harassment. 

Colombia’s intelligence network was behind abusive surveillance practices that violated the American Convention, and the country’s legal framework did not prevent authorities from unlawfully surveilling, harassing, and attacking CAJAR members, EFF told the court. Even after Colombia enacted a new intelligence law, authorities continued to carry out unlawful communications surveillance against CAJAR members, using an expansive and invasive spying system to target and disrupt the work of not just CAJAR but other human rights defenders and journalists.

In examining Colombia’s intelligence law and surveillance actions, the court elaborated on key Inter-American and other international human rights standards, and advanced significant conclusions for the protection of privacy, freedom of expression, and the right to defend human rights. 

The court delved into criteria for intelligence gathering powers, limitations, and controls. It highlighted the need for independent oversight of intelligence activities and effective remedies against arbitrary actions. It also elaborated on standards for the collection, management, and access to personal data held by intelligence agencies, and recognized the protection of informational self-determination by the American Convention. We highlight some of the most important conclusions below.

Prior Judicial Order for Communications Surveillance and Access to Data

The court noted that actions such as covert surveillance, interception of communications, or collection of personal data constitute undeniable interference with the exercise of human rights, requiring precise regulations and effective controls to prevent abuse by state authorities. Its ruling recalled the European Court of Human Rights’ case law establishing that “the mere existence of legislation allowing for a system of secret monitoring […] constitutes a threat to 'freedom of communication among users of telecommunications services and thus amounts in itself to an interference with the exercise of rights'.”

Building on its ruling in Escher et al. v. Brazil, the Inter-American Court stated that

“[t]he effective protection of the rights to privacy and freedom of thought and expression, combined with the extreme risk of arbitrariness posed by the use of surveillance techniques […] of communications, especially in light of existing new technologies, leads this Court to conclude that any measure in this regard (including interception, surveillance, and monitoring of all types of communication […]) requires a judicial authority to decide on its merits, while also defining its limits, including the manner, duration, and scope of the authorized measure.” (emphasis added) 

According to the court, judicial authorization is needed when intelligence agencies intend to request personal information from private companies that, for various legitimate reasons, administer or manage this data. Similarly, a prior judicial order is required for “surveillance and tracking techniques concerning specific individuals that entail access to non-public databases and information systems that store and process personal data, the tracking of users on the computer network, or the location of electronic devices.”

The court said that “techniques or methods involving access to sensitive telematic metadata and data, such as email and metadata of OTT applications, location data, IP address, cell tower station, cloud data, GPS and Wi-Fi, also require prior judicial authorization.” Unfortunately, the court missed the opportunity to clearly differentiate between targeted and mass surveillance to explicitly condemn the latter.

The court had already recognized in Escher that the American Convention protects not only the content of communications but also related information such as the origin, duration, and time of the communication. But legislation across the region provides less protection for metadata than for content. We hope the court’s new ruling helps to repeal measures allowing state authorities to access metadata without a prior judicial order.

Indeed, the court emphasized that the need for a prior judicial authorization “is consistent with the role of guarantors of human rights that corresponds to judges in a democratic system, whose necessary independence enables the exercise of objective control, in accordance with the law, over the actions of other organs of public power.”

To this end, the judicial authority is responsible for evaluating the circumstances around the case and conducting a proportionality assessment. The judicial decision must be well-founded and weigh all constitutional, legal, and conventional requirements to justify granting or denying a surveillance measure. 

Informational Self-Determination Recognized as an Autonomous Human Right 

In a landmark outcome, the court asserted that individuals are entitled to decide when and to what extent aspects of their private life can be revealed, which involves defining what type of information, including their personal data, others may get to know. This relates to the right of informational self-determination, which the court recognized as an autonomous right protected by the American Convention. 

“In the view of the Inter-American Court, the foregoing elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems of the region, and which finds protection in the protective content of the American Convention, particularly stemming from the rights set forth in Articles 11 and 13, and, in the dimension of its judicial protection, in the right ensured by Article 25.”  

The protections that Article 11 grants to human dignity and private life safeguard a person’s autonomy and the free development of their personality. Building on this provision, the court affirmed individuals’ right to self-determination regarding their personal information. In combination with the right to access information enshrined in Article 13, the court determined that people have the right to access and control their personal data held in databases.

The court explained that the scope of this right includes several components. First, people have the right to know what data about them is contained in state records, where the data came from, how it got there, the purpose for keeping it, how long it has been kept, whether and why it is being shared with outside parties, and how it is being processed. Second is the right to rectify, modify, or update their data if it is inaccurate, incomplete, or outdated. Third is the right to delete, cancel, and suppress their data in justified circumstances. Fourth is the right to oppose the processing of their data, also in justified circumstances. And fifth is the right to data portability as regulated by law.

According to the court, any exceptions to the right of informational self-determination must be legally established, necessary, and proportionate for intelligence agencies to carry out their mandate. In elaborating on the circumstances for full or partial withholding of records held by intelligence authorities, the court said any restrictions must be compatible with the American Convention. Withholding requested information must always be exceptional, limited in time, and justified only in specific and strict cases set by law. The protection of national security cannot serve as a blanket justification for denying access to personal information. “It is not compatible with Inter-American standards to establish that a document is classified simply because it belongs to an intelligence agency and not on the basis of its content,” the court said.

The court concluded that Colombia violated CAJAR members’ right to informational self-determination by arbitrarily restricting their ability to access and control their personal data within public bodies’ intelligence files.

The Vital Protection of the Right to Defend Human Rights

The court emphasized the autonomous nature of the right to defend human rights, finding that States must ensure people can freely, without limitations or risks of any kind, engage in activities aimed at the promotion, monitoring, dissemination, teaching, defense, advocacy, or protection of universally recognized human rights and fundamental freedoms. The ruling recognized that Colombia violated the CAJAR members' right to defend human rights.

For over a decade, human rights bodies and organizations have raised alarms and documented the deep challenges and perils that human rights defenders constantly face in the Americas. In this ruling, the court importantly reiterated their fundamental role in strengthening democracy. It emphasized that this role justifies a special duty of protection by States, which must establish adequate guarantees and facilitate the necessary means for defenders to freely exercise their activities. 

Therefore, proper respect for human rights requires States’ special attention to actions that limit or obstruct the work of defenders. The court has emphasized that threats and attacks against human rights defenders, as well as the impunity of perpetrators, have not only an individual but also a collective effect, insofar as society is prevented from knowing the truth about human rights violations under the authority of a specific State. 

Colombia’s Intelligence Legal Framework Enabled Arbitrary Surveillance Practices 

In our amicus brief, we argued that Colombian intelligence agents carried out unlawful communications surveillance of CAJAR members under a legal framework that failed to meet international human rights standards. As EFF and allies elaborated a decade ago in the Necessary and Proportionate Principles, international human rights law provides an essential framework for ensuring robust safeguards in the context of State communications surveillance, including intelligence activities.

In the brief, we bolstered criticism made by CAJAR, Centro por la Justicia y el Derecho Internacional (CEJIL), and the Inter-American Commission on Human Rights, challenging Colombia’s claim that the Intelligence Law enacted in 2013 (Law n. 1621) is clear and precise, fulfills the principles of legality, proportionality, and necessity, and provides sufficient safeguards. EFF and partners highlighted that even after its passage, intelligence agencies have systematically surveilled, harassed, and attacked CAJAR members in violation of their rights. 

As we argued, this surveillance didn’t happen despite Colombia’s intelligence legal framework; rather, it was enabled by the framework’s flaws. We emphasized that the Intelligence Law gives authorities wide latitude to surveil human rights defenders, as it lacks provisions requiring prior, well-founded judicial authorization for specific surveillance measures and robust independent oversight. We also pointed out that Colombian legislation failed to provide the necessary means for defenders to correct and erase their data unlawfully held in intelligence records.

The court ruled that, as reparation, Colombia must adjust its intelligence legal framework to reflect Inter-American human rights standards. This means that intelligence norms must be changed to clearly establish the legitimate purposes of intelligence actions, the types of individuals and activities subject to intelligence measures, the level of suspicion needed to trigger surveillance by intelligence agencies, and the duration of surveillance measures. 

The reparations also call for Colombia to keep files and records of all steps of intelligence activities, “including the history of access logs to electronic systems, if applicable,” and deliver periodic reports to oversight entities. The legislation must also subject communications surveillance measures to prior judicial authorization, except in emergency situations. Moreover, Colombia needs to pass regulations for mechanisms ensuring the right to informational self-determination in relation to intelligence files. 

These are just some of the fixes the ruling calls for, and they represent a major win. Still, the court missed the opportunity to unequivocally condemn state mass surveillance (which can occur under an ill-defined measure in Colombia’s Intelligence Law enabling spectrum monitoring), although Colombian courts will now have the chance to strike that measure down.

In all, the court ordered the state to take 16 reparation measures, including implementing a system for collecting data on violence against human rights defenders and investigating acts of violence against victims. The government must also publicly acknowledge responsibility for the violations. 

The Inter-American Court's ruling in the CAJAR case sends an important message to Colombia, and the region, that intelligence powers are only lawful and legitimate when there are solid and effective controls and safeguards in place. Intelligence authorities cannot act as if international human rights law doesn't apply to their practices.  

When they do, violations must be vigorously investigated and punished. The ruling elaborates on crucial standards that States must fulfill to make this happen. Only time will tell how closely Colombia and other States will apply the court’s findings to their intelligence activities. What’s certain is the dire need to fix a system that helped make Colombia the deadliest country in the Americas for human rights defenders last year, with 70 murders, more than half of all such killings in Latin America.
