
Air fryer app caught asking for voice data (re-air) (Lock and Code S06E24)

2 December 2025 at 11:22

This week on the Lock and Code podcast…

It’s often said online that if a product is free, you’re the product, but what if that bargain were no longer true? What if, depending on the device you paid hard-earned money for, you still became a product yourself, to be measured, anonymized, collated, shared, or sold, often away from view?

In 2024, a consumer rights group out of the UK teased this new reality when it published research into whether people’s air fryers—seriously—might be spying on them.

By analyzing the associated Android apps for three separate air fryer models from three different companies, researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone.

As the researchers wrote:

“In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason.”

Bizarrely, these types of data requests are far from rare.
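
For a sense of how such requests show up in practice, here is a minimal sketch of the kind of permission check a researcher could run against an Android app package. It assumes the Android SDK's aapt tool is installed and on the PATH, and the APK file name is hypothetical; this is an illustration, not the consumer group's actual methodology.

```python
# Minimal sketch: list an Android app's requested permissions and flag the
# two kinds highlighted in the research (microphone and precise location).
# Assumes the Android SDK build tool `aapt` is installed and on the PATH.
import subprocess
import sys

SENSITIVE = {
    "android.permission.RECORD_AUDIO",          # microphone access
    "android.permission.ACCESS_FINE_LOCATION",  # precise location
}

def requested_permissions(apk_path: str) -> set[str]:
    """Return the permission names declared in the APK's manifest."""
    out = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    perms = set()
    for raw in out.splitlines():
        line = raw.strip()
        if line.startswith("uses-permission:"):
            value = line.split(":", 1)[1].strip()
            # Newer aapt builds print: uses-permission: name='android.permission.X'
            if value.startswith("name='"):
                value = value[len("name='"):].rstrip("'")
            perms.add(value)
    return perms

if __name__ == "__main__":
    # "smart-airfryer.apk" is a hypothetical file name used for illustration.
    apk = sys.argv[1] if len(sys.argv) > 1 else "smart-airfryer.apk"
    for perm in sorted(requested_permissions(apk)):
        flag = "  <-- sensitive" if perm in SENSITIVE else ""
        print(f"{perm}{flag}")
```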

Today, on the Lock and Code podcast, we revisit a 2024 episode in which host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Your coworker is tired of AI “workslop” (Lock and Code S06E23)

17 November 2025 at 10:44

This week on the Lock and Code podcast…

Everything’s easier with AI… except having to correct it.

In just the three years since OpenAI released ChatGPT, not only has online life changed at home—it’s also changed at work. Some of the biggest software companies today, like Microsoft and Google, are advancing a vision of an AI-powered future where people don’t write their own emails anymore, or make their own slide decks for presentations, or compile their own reports, or even read their own notifications, because AI will do it for them.

But it turns out that offloading this type of work onto AI has consequences.

In September, a group of researchers from Stanford University and BetterUp Labs published findings from an ongoing study into how AI-produced work impacts the people who receive that work. And it turns out that the people who receive that work aren’t its biggest fans, because it’s not just work that they’re having to read, review, and finalize. It is, as the researchers called it, “workslop.”

Workslop is:

“AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task. It can appear in many different forms, including documents, slide decks, emails, and code. It often looks good, but is overly long, hard to read, fancy, or sounds off.”

Far from an indictment of AI tools in the workplace, the study instead reveals the economic and human costs that come with this new phenomenon of “workslop.” The problem, according to the researchers, is not that people are using technology to help accomplish tasks. The problem is that people are using technology to create ill-fitting work that still requires human input, review, and correction down the line.

“The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work,” the researchers wrote.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Dr. Kristina Rapuano, senior research scientist at BetterUp Labs, about AI tools in the workplace, the potential lost productivity costs that come from “workslop,” and the sometimes dismal opinions that teammates develop about one another when receiving this type of work.

“This person said, ‘Having to read through workslop is demoralizing. It takes away time I could be spending doing my job because someone was too lazy to do theirs.’”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

OWASP Top 10 for 2025: What’s New and Why It Matters

17 November 2025 at 00:00

In this episode, we discuss the newly released OWASP Top 10 for 2025. Join hosts Tom Eston, Scott Wright, and Kevin Johnson as they explore the changes, the continuity, and the significance of the update for application security. Learn about the importance of getting involved with the release candidate to provide feedback and suggestions. The […]

The post OWASP Top 10 for 2025: What’s New and Why It Matters appeared first on Shared Security Podcast.

The post OWASP Top 10 for 2025: What’s New and Why It Matters appeared first on Security Boulevard.

Meet NEO 1X: The Robot That Does Chores and Spies on You?

10 November 2025 at 00:00

The future of home robotics is here — and it’s a little awkward. Meet the NEO 1X humanoid robot, designed to help with chores but raising huge cybersecurity and privacy questions. We discuss what it can actually do, the risks of having an always-connected humanoid in your home, and why it’s definitely not the “Robot […]

The post Meet NEO 1X: The Robot That Does Chores and Spies on You? appeared first on Shared Security Podcast.

The post Meet NEO 1X: The Robot That Does Chores and Spies on You? appeared first on Security Boulevard.

250 Episodes of Cloud Security Podcast by Google: From Confidential Computing to AI-Ready SOC

5 November 2025 at 16:57
[Image: Gemini for Docs improvises]

So this may suck, but I am hoping to at least earn some points for honesty here. I wanted to write something pithy and smart once I realized our Cloud Security Podcast by Google just aired our 250th episode (“EP250 The End of “Collect Everything”? Moving from Centralization to Data Access?”). Yet nothing sufficiently pithy came to my mind …

… so I went around and asked a whole bunch of AIs and agents and such. Then massaged and aggregated the outputs, then ran more AI on the result. And then lightly curated it. Then deleted the bottom 2 stupidest points they made.

So, here it comes … in all its sloppy glory!

  1. The Foundational Roots and Unchanging Mission: Our show started with foundational cloud security topics — like Zero Trust, Data Security, and Cloud Migration Security — which drew the initial large audiences. The core commitment since Episode 1 has been to question conventional wisdom, avoid “security theater” (EP248), and explore whether security measures truly benefit the user and the organization.
  2. The AI Transformation: We had a sizable shift with the last 50 episodes, where AI became a central theme, or at least one of the themes we always come back to (and, yes, this covers our 3 pillars of securing AI, AI for security and countering the AI-armed attacker). The focus has moved past general hype to practical applications, securing AI systems, and asking challenging questions like “Data readiness for AI SOC” (EP249).
  3. The Enduring Popularity of Detection & Response (D&R): We highlight that D&R and modernizing the SOC continue to be extremely popular with the audience (EP236 is epic). We trace the evolution of this topic from foundational engineering (like the very popular EP75 on scaling D&R at Google) to the architectural questions in EP250.
  4. “How Google Does Security” Sells the Tickets: We love the episodes offering a candid look behind Google’s security curtain on topics like internal red teaming, detection scaling, and Cloud IR tabletops. They remain perennial audience favorites (the latest in this series is EP238 on how we use AI agents for security).
  5. The Centrality of People and Process: We emphasize the recurring lesson that the most challenging aspects of large-scale cloud (and now AI) security transformations are often the “people” and “process” elements, not the “tech” itself. EP237 is an epic example of this.
  6. The Call for Intentionality: We reinforce the importance of having a clear purpose for every security activity and following an engineering-led approach (EP117). The “magical” advice from EP236 is to ask of every security element, “what is it in service of?”
  7. The Persistence of Old Problems: We often lament, with a touch of humor, the industry’s tendency to repeat fundamental security mistakes (the SIEM Paradox in EP234, for instance, or EP223 in general), underscoring the ongoing need to cover “boring” basics. We will absolutely continue this (a new episode on vulnerability management “stale” problems is coming soon).
  8. Community and Format Growth: We continue to “sorta-kinda” (human wrote this, eh?) grow the podcast beyond a purely audio medium, including the launch of live video sessions and a Community site to foster more dialogue and feedback.
  9. The Unique Culture and Authenticity of the Show Stays: We remain obsessed with selecting high-energy, vocal, and knowledgeable guests and fun topics. We will keep up our “inside jokes,” like not allowing guests to recommend Anton’s blog as an episode resource and poking fun at firewall appliances in the cloud (they are there).
  10. A Glimpse at 300: We want to tease future topics that will define the next 50+ episodes, such as deeper dives into Agentic AI, challenges of cross-cloud incident response and forensics, or the geopolitical aspects of cloud security. Give us ideas, will ya? Otherwise, you will get to hear about AI and D&R much of the time…

Top 5 popular episodes (excluding the oldest 3)

  1. EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
  2. EP153 Kevin Mandia on Cloud Breaches: New Threat Actors, Old Mistakes, and Lessons for All
  3. EP47 Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security
  4. EP8 Zero Trust: Fast Forward from 2010 to 2021
  5. EP17 Modern Threat Detection at Google

Enjoy the show!


250 Episodes of Cloud Security Podcast by Google: From Confidential Computing to AI-Ready SOC was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post 250 Episodes of Cloud Security Podcast by Google: From Confidential Computing to AI-Ready SOC appeared first on Security Boulevard.

Would you sext ChatGPT? (Lock and Code S06E22)

3 November 2025 at 10:30

This week on the Lock and Code podcast…

In the final, cold winter months of the year, ChatGPT could be heating up.

On October 14, OpenAI CEO Sam Altman said that the “restrictions” that his company previously placed on its flagship product, ChatGPT, would be removed, allowing, perhaps, for “erotica” in the future.

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote on the platform X. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”

This wasn’t the first time that OpenAI or its chief executive had addressed mental health.

On August 26, OpenAI published a blog titled “Helping people when they need it most,” which explored new protections for users, including stronger safeguards for long conversations, better recognition of people in crisis, and easier access to outside emergency services and even family and friends. The blog alludes to “recent heartbreaking cases of people using ChatGPT in the midst of acute crises,” but it never explains what, explicitly, that means.

But on the very same day the blog was posted, OpenAI was sued for the alleged role that ChatGPT played in the suicide of a 16-year-old boy. According to chat logs disclosed in the lawsuit, the teenager spoke openly to the AI chatbot about suicide, he shared that he wanted to leave a noose in his room, and he even reportedly received an offer to help write a suicide note.

Bizarrely, this tragedy plays a role in the larger story, because it was Altman himself who tied the company’s mental health campaign to its possible debut of erotic content.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”

What “erotica” entails is unclear, but one could safely assume it involves the capabilities already present in ChatGPT: generative chat, of course, but also image generation.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Deb Donig, on faculty at the UC Berkeley School of Information, about the ethics of AI erotica, the possible accountability that belongs to users and to OpenAI, and why intimacy with an AI-powered chatbot feels so strange.

“A chat bot offers, we might call it, ‘intimacy’s performance,’ without any of its substance, so you get all of the linguistic markers of connection, but no possibility for, for example, rejection. That’s part of the human experience of a relationship.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

OpenAI’s ChatGPT Atlas: What It Means for Cybersecurity and Privacy

3 November 2025 at 00:00

In this episode, we explore OpenAI’s groundbreaking release, ChatGPT Atlas, the AI-powered browser that remembers your activities and acts on your behalf. Discover its features, implications for enterprise security, and the risks it poses to privacy. Join hosts Tom Eston and Scott Wright as they discuss everything from the browser’s memory function to vulnerabilities like […]

The post OpenAI’s ChatGPT Atlas: What It Means for Cybersecurity and Privacy appeared first on Shared Security Podcast.

The post OpenAI’s ChatGPT Atlas: What It Means for Cybersecurity and Privacy appeared first on Security Boulevard.

It’s Always DNS: Lessons from the AWS Outage

27 October 2025 at 00:00

In episode 404 (no pun intended!) we discuss the recurring issue of DNS outages, the recent Amazon AWS disruption, and what this reveals about our dependency on cloud services. The conversation touches on the need for tested business continuity plans, the implications of DNS failures, and the misconceptions around cloud infrastructure’s automatic failover capabilities. ** […]

The post It’s Always DNS: Lessons from the AWS Outage appeared first on Shared Security Podcast.

The post It’s Always DNS: Lessons from the AWS Outage appeared first on Security Boulevard.

What does Google know about me? (Lock and Code S06E21)

20 October 2025 at 10:26

This week on the Lock and Code podcast…

Google is everywhere in our lives. Its reach into our data extends just as far.

After investigating how much data Facebook had collected about him in his nearly 20 years with the platform, Lock and Code host David Ruiz had similar questions about the other Big Tech platforms in his life, and this time, he turned his attention to Google.

Google dominates much of the modern web. It has a search engine that handles billions of requests a day. Its tracking and metrics service, Google Analytics, is reportedly embedded into tens of millions of websites. Its Maps feature not only serves up directions around the world, it also tracks traffic patterns across countless streets, highways, and more. Its online services for email (Gmail), cloud storage (Google Drive), and office software (Google Docs, Sheets, and Slides) are household names. And it also runs the most popular web browser in the world, Google Chrome, and the most popular operating system in the world, Android.

Today, on the Lock and Code podcast, Ruiz explains how he requested his data from Google and what he learned not only about the company, but about himself, in the process. That includes the 142,729 items in his Gmail inbox right now, along with the 8,079 searches he made, 3,050 related websites he visited, and 4,610 YouTube videos he watched in just the past 18 months. It also includes his late-night searches for worrying medical symptoms, his movements across the US as his IP address was recorded when logging into Google Maps, his emails, his photos, his notes, his old freelance work as a journalist, his outdated cover letters when he was unemployed, his teenage-year Google Chrome bookmarks, his flight and hotel searches, and even the searches he made within his own Gmail inbox and his Google Drive.

After digging into the data for long enough, Ruiz came to a frightening conclusion: Google knows whatever the hell it wants about him; it just has to look.

But Ruiz wasn’t happy to let the company’s access continue. So he has a plan.

“I am taking steps to change that [access] so that the next time I ask, ‘What does Google know about me?’ I can hopefully answer: A little bit less.”

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

What’s there to save about social media? (Lock and Code S06E20)

6 October 2025 at 10:49

This week on the Lock and Code podcast…

“Connection” was the promise—and goal—of much of the early internet. No longer would people be separated from vital resources and news that was either too hard to reach or made simply inaccessible by governments. No longer would education be guarded behind walls both physical and paid. And no longer would your birthplace determine so much about the path of your life, as the internet could connect people to places, ideas, businesses, collaborations, and agency.

Somewhere along the line though, “connection” got co-opted. The same platforms that brought billions of people together—including Facebook, Twitter, Instagram, TikTok, and Snapchat—started to divide them for profit. These companies made more money by showing people whatever was most likely to keep them online, even if it upset them. More time spent on the platform meant more likelihood of encountering ads, which meant more advertising revenue for Big Tech.

Today, these same platforms are now symbols of some of the worst aspects of being online. Nation-states have abused the platforms to push disinformation campaigns. An impossible sense of scale allows gore and porn and hate speech to slip by even the best efforts at content moderation. And children can be exposed to bullying, peer pressure, and harassment.

So, what would it take to make online connection a good thing?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Rabble—an early architect of social media, Twitter’s first employee, and host of the podcast Revolution.Social—about what good remains inside social media and what steps are being taken to preserve it.

“ I don’t think that what we’re seeing with social media is so much a set of new things that are disasters that are rising up from this Pandora’s box… but rather they’re all things that existed in society and now they’re not all kept locked away. So we can see them and we have to address them now.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Can you disappear online? (Lock and Code S06E19)

23 September 2025 at 12:32

This week on the Lock and Code podcast…

There’s more about you online than you know.

The company Acxiom, for example, has probably determined whether you’re a heavy drinker, or if you’re overweight, or if you smoke (or all three). The same company has also probably estimated—to the exact dollar—the amount you spend every year on dining out, donating to charities, and traveling domestically. Another company, Experian, has probably made a series of decisions about whether you are “Likely,” “Unlikely,” “Highly Likely,” etc., to shop at a mattress store, visit a theme park, or frequent the gym.

This isn’t the data most people think about when considering their online privacy. Yes, names, addresses, phone numbers, and age are all important and potentially sensitive, and yes, there’s a universe of social media posts, photos, videos, and comments that are likely at the harvesting whim of major platforms to collect, package, and sell access to for targeted advertising.

But so much of the data that you leave behind online has nothing to do with what you willingly write, post, share, or say. Instead, it is data that is collected from online and offline interactions, like the items you add in a webpage’s shopping cart, the articles you read, the searches you make, and the objects you buy at a physical store.

Importantly, it is also data that is very hard to get rid of.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Peter Dolanjski, director of product at DuckDuckGo, about why the internet is so hungry for your data, how parents can help protect the privacy of their children, and whether it is pointless to try to “disappear” online.

“It’s not futile…  Taking steps now, despite the fact that you already have information out there, will help you into the future.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Podcast Episode: Building and Preserving the Library of Everything

10 September 2025 at 03:05

All this season, “How to Fix the Internet” has been focusing on the tools and technology of freedom – and one of the most important tools of freedom is a library. Access to knowledge not only creates an informed populace that democracy requires, but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of – so long as that knowledge is allowed to flow freely.


(You can also find this episode on the Internet Archive and on YouTube.) 

A passionate advocate for public internet access and a successful entrepreneur, Brewster Kahle has spent his life intent on a singular focus: providing universal access to all knowledge. The Internet Archive, which he founded in 1996, now preserves 99+ petabytes of data – the books, Web pages, music, television, government information, and software of our cultural heritage – and works with more than 400 library and university partners to create a digital library that’s accessible to all. The Archive is known for the Wayback Machine, which lets users search the history of almost one trillion web pages. But it also archives images, software, video and audio recordings, documents, and it contains dozens of resources and projects that fill a variety of gaps in cultural, political, and historical knowledge. Kahle joins EFF’s Cindy Cohn and Jason Kelley to discuss how the free flow of knowledge makes all of us more free.

In this episode you’ll learn about:

  • The role AI plays in digitizing, preserving, and easing access to all kinds of information
  • How EFF helped the Internet Archive fight off the government’s demand for information about library patrons
  • The importance of building a decentralized, distributed web to finding and preserving information for all
  • Why building revolutionary, world-class libraries like the Internet Archive requires not only money and technology, but also people willing to dedicate their lives to the work
  • How nonprofits are crucial to filling societal gaps left by businesses, governments, and academia 

Brewster Kahle is the founder and digital librarian of the Internet Archive, which is among the world’s largest libraries and serves millions of people each day. After studying AI at the Massachusetts Institute of Technology and graduating in 1982, Kahle helped launch the company Thinking Machines, a parallel supercomputer maker. In 1989, he helped create the internet’s first publishing system, Wide Area Information Server (WAIS); WAIS Inc. was later sold to AOL. In 1996, Kahle co-founded Alexa Internet, which helps catalog the Web, selling it to Amazon.com in 1999. He is a former member of EFF’s Board of Directors.

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

BREWSTER KAHLE: I think we should start making some better decisions, a little bit more informed, a little better communication with not only people that are around the world and finding the right people we should be talking to, but also, well, standing on the shoulders of giants. I mean, we can then go and learn from all the things that people have learned in the past. It's pretty straightforward what we're trying to do here. It's just build a library.

CINDY COHN: That's Internet Archive founder Brewster Kahle on what life could look like if we all got to experience his dream of universal access to all human knowledge.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley - EFF's activism director. And this is our podcast How to Fix the Internet.

CINDY COHN: This show is about what the world could look like if we get things right online - we hear from activists, computer engineers, thinkers, artists and today, a librarian, about their visions for a better digital future that we can all work towards.

JASON KELLEY: And our guest today is someone who has been actively making the internet a better place for several decades now.

CINDY COHN: Brewster Kahle is an early internet pioneer, and a longtime advocate for digitization. He’s a computer engineer but also a digital librarian, and he is of course best known as the founder of the Internet Archive and the Wayback Machine. EFF and the Archive are close allies and friends, and Brewster himself was a member of EFF’s Board of Directors for many years. I’m proud to say that the Archive is also a client of EFF, including most recently when we served as part of the legal team trying to protect true library lending of digital materials like ebooks and audiobooks.

JASON KELLEY: All season we’ve been focusing on the tools and technologies of freedom – and one of the most important tools of freedom is a library.
We started off our conversation by getting his take on the role that AI should play in his vision of a universally accessible library.

BREWSTER KAHLE: AI is absolutely critical and actually has been used for, well, a long period of time. You just think of, how does the magic of Google search happen, where you can just type a few words and get 10 links and several of them are actually really quite relevant. How do you do that? Those of us old enough to remember just keyword searching, that didn't work very well.
So it's going and using all this other information, metadata from other websites, but also learning from people, and machine learning at scale, that we've been able to make such progress.
Now there's the large language models, the generative AI, which is also absolutely fantastic. So we are digitizing obscure newsletters from theological missions in distant parts of the world. We are digitizing agricultural records from over decades of the 20th century.
And these materials are absolutely relevant now with climate change in our new environments because, well, things are moving. So the pests that used to be only in Mexico are now in Louisiana and Texas. It's completely relevant to go and learn from these, but it's not gonna be based on people going and doing keyword search and finding that newsletter and, and learning from it. It's gonna be based on these augmentations, but take all of these materials and try to make it useful and accessible to a generation that's used to talking to machines.

CINDY COHN: Yeah, I think that that's a really important thing. One of my favorite insights about AI is that it's a very different user interface. It's a way to have a conversational access to information. And I think AI represents one of those other shifts about how people think about accessing information. There's a lot of side effects of AI and we definitely have to be serious about those. But this shift can really help people learn better and find what they're looking for, but also find things that maybe they didn't think they were looking for.

BREWSTER KAHLE: If we do it well, if we do it with public AI that is respectful, the opportunity for engaging people and in a more deep way to be able to have them get to literature that has been packed away, and we've spent billions of dollars in the library system over centuries going and building these collections that are now going to be accessible, not just to the reference librarian, not just to researchers, but to kind of anybody.

JASON KELLEY: Can I dig into this backstory of yours a little bit? Because you know, a lot of people may know how you ended up building the Internet Archive, but I don't think they know enough. I'd like to get more people to sort of have a model in tech for what they can do if they're successful. And you were, if I understand it right, you were one of the early successful internet stories.
You sold a company or two in the nineties and you could have probably quit then and instead you ended up building the Internet Archive. Did you have this moment of deciding to do this and how did you end up in library school in the first place?

BREWSTER KAHLE: So I'm a little unusual in that I, I've only had one idea in my life, and so back in college in 1980 a friend posed, okay, you're an idealist. Yes. And a technologist. Yes. Paint a portrait that's better with your technology. It turned out that was an extremely difficult question to answer.
We were very good about complaining about things. You know, that was Cold War times and Nicaragua and El Salvador, and there's lots of things to complain about, but it was like, what would be better? So I only came up with two ideas. One was protect people's privacy, even though they were going to throw it away if they were given the chance.
And the other was build the library of everything. The building of the library of everything, the digital library of Alexandria, seemed too obvious. So I tried to work on the privacy one, but I couldn't make chips to encrypt voice conversations cheap enough to help the people I wanted to, but I learned how to make chips.
But then that got me engaged with the artificial intelligence lab at MIT and Danny Hillis and Marvin Minsky, they had this idea of building a thinking machine and to go and build a computer that was large enough to go and search everything. And that seemed absolutely critical.
So I helped work on that. Founded a company, Thinking Machines. That worked pretty well. So we got the massively parallel computers. We got the first search engine on the internet, then spun off a company to go and try to get publishers online called WAIS Incorporated. It came before the web, it was the first publishing system.
And so these were all steps in the path of trying to get to the library. So once we had publishers online, we also needed open source software. The free and open source software movement is absolutely critical to the whole story of how this whole thing came about, and open protocols, which was not the way people thought of things. They would go and make them proprietary and sue people and license things, but the internet world had this concept of how to share that ran very, very well. I wasn't central in the ARPANET to the internet conversation. But I did have quite a bit to do with some of the free and open source software, the protocol development, the origins of the web.
And once we had publishers, then, onboard, then I could turn my attention to building the library in 1996, so that's 28 years ago, something like that. And so we then said, okay, now we can build the library. What does that make up of? And we said, well, let's start with the web. Right? The most fragile of media.
I mean, Tim's system, Tim Berners-Lee's system, was very easy to implement, which was kind of great and one of the keys for his success, but it had some really, basically broken parts of it. You think of publishers and they would go and make copies and sell them to individuals or libraries, and they would stay alive much longer than the publishers.
But the web, there's only one copy and it's only on one machine. And so if they change that, then it's gone. So you're asking publishers to be librarians, which is a really bad idea. And so we thought, okay, why don't we go and make a copy of everything that was on the web. Every page from every website every two months.
And turns out you could do that. That was my Altavista moment when I actually went to see Altavista. It was the big search engine before Google and it was the size of two Coke machines, and it was kind of wild to go and look - that's the whole web! So the idea that you could go and gather it all back up again, uh, was demonstrated by Altavista and the Internet Archive continued on with other media type after media type, after media type.

JASON KELLEY: I heard you talk about the importance of privacy to you, and I know Cindy's gonna wanna dig into that a little bit with some of the work that EFF and the Archive have done together.

CINDY COHN: Yeah, for sure. One of the things I think, you know, your commitment to privacy is something that I think is very, very important to you and often kind of gets hidden because the, you know, the archive is really important. But, you know, we were able to stand up together against national security letters, you know, long before some of the bigger cases that came later and I wanted to, you know, when you reached out to us and said, look, we've gotten this national security letter, we wanna fight back. Like, it was obvious to you that we needed to push back. And I wanna hear you talk about that a little bit.

BREWSTER KAHLE: Oh, this is a hero day. This is a hero moment for EFF and its own, you know, I, okay.

CINDY COHN: Well, and the Archive, we did it together.

BREWSTER KAHLE: Well, no, we just got the damn letter. You saved our butts. Okay. So how this thing worked was in 2001,they passed this terrible law, the Patriot Act, and they basically made any government official almost be able to ask any organization and be able to get anything they wanted and they had a gag order. So not only could they just get any information, say on patrons’ reading habits in a library, they could make it so that you can't tell anybody about it.
So I got sat down one day and Kurt Opsahl from EFF said, this isn't your best day. You just got a letter demanding information about a patron of the Internet Archive. I said, they can't do that. He said, yeah, they can. And I said, okay, well this doesn't make any sense. I mean, the librarians have a long history of dealing with people being surveilled on what it is they read and then rounded up and bad things happen to them, right? This is, this is something we know how that movie plays out.
So I said, Kurt, what, what can we do? And he said, you have to supply the data. I said, what if we don't? And he said, jail. That wasn't my favorite sentence. So is there anything else we can do? And he said, well, you can sue the United States government. (laughter)
OH! Well I didn't know even know whether I could bring this up with my board. I mean, remember there's a gag order. So there was just a need to know to be able to find out from the engineers what it is we had, what we didn't have. And fortunately we never had very much information. 'cause we don't keep it, we don't keep IP addresses if we possibly can. We didn't have that much, but we wanted to push back. And then how do you do that? And if it weren't for the EFF, and then EFF got the ACLU involved on a pro bono basis, I would never have been able to pull it off! I would have to have answered questions to the finance division of how, why are we spending all this money on lawyers?
The gag order made it so absolutely critical for EFF to exist, and to be ready and willing and funded enough to take on a court case against the United States government without, you know, having to go into a fundraising round.
But because of you, all of you listeners out there donating to EFF, having that piggy bank made it so that they could spring to the defense of the Internet Archive. The great thing about this was that after this lawsuit was launched, the government wanted out of this lawsuit as fast as possible.
They didn't want to go and have a library going and getting a court case to take their little precious toy of this Patriot Act, National Security letters away from them. So they wanted out, but we wouldn't let them. We wanted to be able to talk about it. They had to go and release the gag order. And I think we're only one or two or three organizations that have ever talked publicly about the hundreds of thousands, if not millions, of national security letters because we had EFF support.

CINDY COHN: Oh, thank you Brewster. That's very sweet. But it was a great honor to get to do this. And in hearing you talk about this future, I just wanna pull out a few of the threads. One is privacy and how important that is for access for information. Some people think of that as a different category, right? And it's not. It's part and parcel of giving people access to information.
I also heard the open source community and open protocols and making sure that people can, you know, crawl the web and do things with websites that might be different than the original creator wanted, but are still useful to society.
The other thing that you mentioned that I think it's important to lift up as well is, you know, when we're talking about AI systems, you're talking about public AI, largely. You're talking about things that similarly are not controlled by just one company, but are available so that the public really has access not only to the information, but to the tools that let them build the next thing.

BREWSTER KAHLE: Yes, the big thing I think I may have gotten wrong starting this whole project in 1980 was the relaxation of the antitrust laws in the United States, that we now have these monster organizations that are not only just dominating a country's telecom or publishing systems or academic access, but it's worldwide now.
So we have these behemoth companies. That doesn't work very well. We want a game with many winners. We want that level playing field. We wanna make it so that new innovators can come along and, you know, try it out, make it go. In the early web, we had this, we watched sort of the popularity and the movement of popularity. And so you could start out with a small idea and it could become quite popular without having to go through the gatekeepers. And that was different from when I was growing up. I mean, if you had a new idea for a kid's toy, trying to get that on the shelves in a bunch of toy stores was almost impossible.
So the idea of the web and the internet made it so that good ideas could surface and grow, and that can work as long as you don't allow people to be gatekeepers.
We really need a mechanism for people to be able to grow, have some respect, some trust. If we really decrease the amount of trust, which is kind of, there's a bonfire of trust right now, then a lot of these systems are gonna be highly friction-full.
And how do we go and make it so that, you know, we have people that are doing worthwhile projects, not exploiting every piece of surveillance that they have access to. And how do we build that actually into the architecture of the web?

CINDY COHN: That leads, I think, directly into the kind of work that the archive has done about championing the distributed web, the D-web work. And you've done a real lot of work to kind of create a space for a distributed web, a better web. And I want you to tell me a little bit about, you know, how does that fit into your picture of the future?

BREWSTER KAHLE: The wonderful thing about the internet still is that it can be changed. It's still built by people. They may be in corporations, but you can still make a big dent and, there were a couple “aha” moments for me in, in trying to, like, why do we build a better web? Right? what's the foundational parts that we need to be able to do that?
And we ended up with this centralization, not only of all the servers being in these colos that are operated by other companies and a cloud-based thing, other people own everything, that you can't go and just take your computer on your desk and be a first class internet thing. That used to be possible with Gopher and WAIS and the early web. So we lost some of those things, but we can get them back.
Jason Scott at the Internet Archive, working with volunteers all over, made emulators of the early computers like IBM PCs and Macintosh and these old computers, Commodore 64, Atari machines, and they would run in JavaScript in your browser, so you could click and go and download an IBM PC and it boots in your browser and it uses the Internet Archive as a giant floppy drive to run your favorite game from 20 years ago. The cool thing about that for me, yes, I could get to play all my old games, it was kind of great, but we also had this ability to run a full on computer in your browser, so you didn't even have to download and install something.
So you could go and be a computer on the internet, not just a consumer, a reader. You could actually be a writer, you could be a publisher, you could, you could do activities, you could, so that was fantastic. And then another big change was the protocols of the browsers change to allow peer-to-peer interactions. That's how you get, you know, Google Meet or you get these video things that are going peer to peer where there's no central authority going in, interrupting your video streams or whatever.
So, okay, with these tools in hand now, then we could try to realize part of the dream that a lot of us had originally, and even Tim Berners-Lee, of building a decentralized web. Could you make a web such that your website is not owned and controlled on some computer someplace, but actually exists everywhere and nowhere, kind of a peer-to-peer backend for the web.
Could you make it so that if you run a club, that you could do a WordPress-like website that would then not live anywhere, but as readers were reading it, they would also serve it. And there would be libraries that would be able to go and archive it as a living object, not as just snapshots of pages. That became possible. It turns out it's still very hard, and the Internet Archive started pulling together people, doing these summits and these different conferences to get discussions around this and people are running with it.

CINDY COHN: Yeah, and so I love this because I know so many people who go to the archive to play Oregon Trail, right? And I love it when I get a chance to say, you know, this isn't just a game, right? This is a way of thinking that is reflected in this. I kind of love that, you know, ‘you died with dysentery’ becomes an entryway into a whole other way of thinking about the web.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. And also, if you can’t make it in person to this year’s EFF awards where we celebrate the people working towards the better future we all care so much about, you can watch the whole event at eff.org/awards.
We also wanted to share that our friend Cory Doctorow, has a new podcast, have a listen to this:

WHO BROKE THE INTERNET TRAILER: How did the internet go from this? You could actually find what you were looking for right away, to this, I feel I can inhale. Spoiler alert, it was not an accident. I'm Cory Doctorow, host of Who Broke the Internet from CBC's Understood. In this four part series, I'm gonna tell you why the internet sucks now, whose fault it is and my plan to fix it. Find who broke the internet on whatever terrible app you get your podcasts.

JASON KELLEY: And now back to our conversation with Brewster Kahle.
The fact that you do things like archive these old games is something that I think a lot of people don't know. There are just so many projects that the internet archive does and it is interesting to hear how they're sort of all building towards this better future that is sort of built, like, sort of makes up the bones of the work that you do. Can you talk about any of the other projects that you are particularly sort of proud of that maybe other people haven't heard about?

BREWSTER KAHLE: Yeah, and I really wanna apologize. If you go to archive.org, it is daunting. Most people find things to read in the Internet Archive or see in the internet archive, mostly by going to search engines, or Wikipedia. For instance, we really dedicated ourselves to try to help reinforce Wikipedia. We started archiving all of the outbound links. And we figured out how to work with the communities to allow us to fix those broken links. So we've now fixed 22 million broken links in Wikipedia, 10,000 a day get now added to go back to the Wayback Machine.
Also, there are about two million books that are linked straight into, if you click on it, it goes right to the right page so you can go and see the citation. Not only is this important for homework, people that are after hours trying to cram for their, uh, for their homework, um, but it's also important for Wikipedians because, um, links in Wikipedia that go to someplace you can actually cite is a link that works, it gets more weight.
And if we're going to have all the literature, the scholarly literature and the book literature available in Wikipedia, it needs to be clickable. And you can't click your way into an OverDrive borrowed book from your library. You have to be able to do this from something like the Internet Archive. So Wikipedia, reinforcing Wikipedia.
Another is television. We've been archiving television. 24 hours a day since the year 2000. Russian, Chinese, Japanese, Iraqi, Al Jazeera, BBC, CNN, ABC, Fox, 24 hours a day, DVD quality. And not all of it is available but the US television news, you can search and find things. And we're also doing summarizations now, so you can start to understand – in English – what is Russian State television telling the Russians? So we can start to get perspectives. Or look inside other people's bubbles to be able to get an idea of what's going on. Or a macroscope ability to step back and get the bigger picture. That's what libraries are for, is to go and use these materials in new and different ways that weren't the way that the publishers originally intended.
Other things. We're digitizing about 3,000 books a day. So that's going along well. Then we are doing Democracy's Library. Democracy's Library, I think, is a cool one. So democracies need an educated populace. So they tend to publish openly. Authoritarian governments and corporations don't care about having an educated populace. That's not their goal. They have other goals, um, but democracies want things to be openly available.
But it turns out that even though the United States, for instance, and all democracies publish openly, most of those materials are not available publicly. They may be available in some high priced database system of somebody or other. But mostly they're just not available at all.
So we launched the Democracy's Library Project to go and take all of the published works at the federal level, the provincial state level, and municipal levels, and make that all available in bulk and in services so that other people could also go and build new services on this. We launched it with Canada and the United States. The Canadians are kicking the United States's butt. I mean, they're doing so great. So Internet Archive Canada, working with University of Toronto, and universities all over, have already digitized all of the federal print materials, and by working with the national library there have archived the government websites in Canada.
In the United States we've been archiving, with the help of many others, including historically with the Library of Congress, and National Archives to go and collect all of the web pages and services and data sets from all of the United States Federal websites from before and after every presidential election. It's called the End of Term Crawl, and this has been going on since 2008, and we've gotten into a lot of news recently because this administration has decided to take a lot of materials off the web. And again, asking a publisher, whether it's a government or commercial publisher or a social media publisher, to go and be their own archive or their own library is a bad idea. Don't trust a corporation to do a library's job, was what one headline said.
So we've been archiving all of these materials and making them available. Now, can we weave them back into the web with the right URLs? No, not yet. That's up to the browser companies and also some of the standards organizations. But it's, at least it's there and you can go to the Wayback Machine to find it.
So the Internet Archive is about the 200th most popular website.
We get millions of people a day coming to the website, and we get about 6 million people coming and using the Internet Archive's resources that we don't even, they don't even come to the website. So it's just woven into the fabric of the web. So people say, oh, I've never heard of that. Never used it. It's like you probably have. It's just part of how the internet works, it's plumbing.
So those are the aspects of the Internet Archive that are currently going on. We have people coming in all the time saying, now, but are you doing this? And I said, no, but you can, and we can be infrastructure for you. I think of the Internet Archive as infrastructure for obsessives. So the people that say, I really need this to persist to the next generation. We say, great, what do you need? How do we make that come true?
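
For readers who want to poke at that plumbing directly, the Wayback Machine exposes a small public availability API that link-fixing tools of the kind Kahle describes can build on. Here is a minimal sketch, not something discussed in the episode, that looks up the closest archived snapshot of a URL; the example URL and date are arbitrary placeholders.

```python
# Minimal sketch: query the Wayback Machine's public availability API
# (https://archive.org/wayback/available) for the closest archived snapshot
# of a URL. Uses only the Python standard library.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str | None = None) -> str | None:
    """Return the URL of the closest archived snapshot, or None if none exists."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp  # e.g. "20080101" prefers snapshots near that date
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

if __name__ == "__main__":
    # example.com is an arbitrary illustration; any public URL works.
    print(closest_snapshot("example.com", "20080101"))
```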

CINDY COHN: Yeah, I think that's both the superpower and in some ways the thing that the Internet Archive struggles with, which is because when your infrastructure, people don't think about you and they don't wanna think about you, so that when you come under attack, it's hard to get people to see what they might be losing.
And I think one of the things that, you know, one of the reasons I wanted you to come on here and talk about the archive is I think we need to start making some of that invisible stuff visible because it's not magic. It's not automatic. It takes, you know, I mean, your personal courage in standing up is wonderful, but there need to be hundreds and thousands and hundreds of thousands saying, you know, this is our library, this is our future.
This is, you know, this is important and, and we need to stand up and hopefully if we stand up enough, you know, we don't have to do it every four years or so. But you know, the number of people who I sent to the Wayback Machine when they were very, very worried about US government information going down and, and pointed out, look, you know, the archive's been quietly doing this for, you know, nearly 20 years now, is a lot. And that's because again, you're kind of quietly doing the important work.
And so, you know, my hope is that ,with this podcast and otherwise, we get a little more attention so that we can really build this better future and, and maybe in the better future, we don't have to think about it again. But right now there's a lot of different kinds of attacks.

BREWSTER KAHLE: It's a challenging time, especially in the United States for libraries. There's the book bannings, defunding. Probably structurally the worst thing is the licensing model. The idea that there's no digital ownership. I mean, just like really bad behavior on the part of the corporations. Um, so, but Internet Archive Canada is doing well. Internet Archive Europe is coming back up and serving interesting roles with public AI to go and do publicly oriented values driven AI technology, which is kind of great. We'd like to see internet archives planted in lots of places. The idea that we can just depend on the United States jurisdictions for being the information resource for the world I think that train is gone.
So let's go and build a robust infrastructure. It's kinda like what we saw with the internet. Can we build internet archives all over the world? And that takes not only money, but actually the money part is probably not the hardest part. It's people interested in dedicating their lives to open – to open source software, free and open source software, open access materials, the infrastructure to step out and work in non-profits as opposed to some of the, you know, the very tempting, um, stock option deals that come from these VC-funded whatevers, um, and work and do the good work that they can point to and they can be proud of for the rest of their lives.

CINDY COHN: Yeah. And there is something so important about that, about getting to wake up every day and feel like you're making the world better. And I think your particular story about this, because you know, you made money early on, you did some companies and you decided to dig back into the public side of the work rather than, you know, stepping back and becoming a VC or, you know, buying your third island, or those kinds of things.
And I think that one of the things that's important is that I feel like there's a lot of people who don't think that you can be a technologist and a successful person without being an asshole. And, you know, I think you're a good counterexample of somebody who is deeply technical, who thinks about things in a, you know, how-do-we-build-better-infrastructure way, who understands how all of these systems work, and uses that information to build good, rather than, you know, necessarily deciding that the, you know, the best thing to do is to maybe take over a local government and build a small fiefdom to yourself.

BREWSTER KAHLE: Well, thank you for that. And yes, for-profit entities are gasoline. They're explosive and they don't tend to last long. But I think one of the best ideas the United States has come up with is the 501(c)(3) public charity, which is not the complete antidote to the C corporations that were also put across by the United States since World War II in ways that shouldn't have been, but the 501(c)(3) public charities are interesting. They tend to last longer. They take away the incentive to sell out, yet leave an ability to be an operational entity. You just have to do public good. You have to actually live and walk the walk and go and do that. But I think it's a fabulous structure. I mean, you, Cindy, how old is the EFF now?

CINDY COHN: 35. This is our 35th anniversary.

BREWSTER KAHLE: That's excellent. And the Internet Archive is like 28, 29 years old, and that's a long time for commercial, excuse me, for commercial entities or tech! Things in the tech world, they tend to turn over. So if you wanna build something long term, and you're willing to only do, as Lessig would put it, some rights reserved, or some profit motive reserved, then the 501(c)(3) public charity, which other countries are adopting – this model is a mechanism of building infrastructure that can last a long time where you get your alignment with the public interest.

CINDY COHN: Yeah, I think that's right. And it's been interesting to me, you know, being in this space for a really long time: the nonprofit salaries may not be as high, but the jobs are more stable. Like we don't have in our sector the waves of layoffs. I mean, occasionally, for sure, you know, that is a thing that happens in the nonprofit digital rights sector. But I would say compared to the for-profit world, there’s a much more stable structure, um, because you don't have this gasoline idea, these kind of highs and lows and ups and downs. And that could be, you know, there's nothing wrong with riding that wave and making some money. But the question becomes, well, what do you do after that? Do you take that path to begin with? Or do you take that path later, when you've got some assets? You know, some people come outta school with loans and things like that.

BREWSTER KAHLE: So we need this intermediary between the academic, the dot edu, and the dot com, and I think the dot org is such a thing. And also there was a time when we did a lot in dot gov of bringing civic tech. And civic tech in Canada is up and running and wonderful. So there's things that we can do in that.
We can also spread these ideas into other sectors like banking. How about some nonprofit banks, please? Why don't we have some nonprofit housing that actually supports nonprofit workers? We're doing an experiment with that to try to help support people that want to work in San Francisco for nonprofits and not feel that they have to commute from hours away.
So can we go and take some of these ideas pioneered by Richard Stallman, Larry Lessig, Vint Cerf, the Cindy Cohns, and go and try it in new sectors? You're doing a law firm, one of the best of the Silicon Valley law firms, and you give away your product. Internet Archive gives away its product. Wikipedia gives away its product. This is, like, not supposed to happen, but it works really well. And it requires support and interest of people to work there and also to support it from the outside. But it functions so much better. It's less friction. It's easier for us to work with other non-profits than it is to work with for-profits.

JASON KELLEY: Well I'm glad that you brought up the nonprofit points and really dug into it, because earlier, Brewster, you mentioned the people listening to this are, you know, the reason you were able to fight back against the NSLs: EFF has supporters that keep it going, and those same supporters, the people listening to this, are hopefully, and probably, the ones that help keep the Archive going. And I just wanted to make sure people know that the Archive is also supported by donors. And, uh, if people like it, there's nothing wrong with supporting both EFF and the Archive, and I hope everyone does both.

CINDY COHN: Yeah. There's a whole community. And one of the things that Brewster has really been a leader in is seeing and making space for us to think of ourselves as a community. Because we're stronger together. And I think that's another piece of the somewhat quiet work that Brewster and the Archive do: knitting together the open world into thinking of itself as an open world, able to move together and leverage each other.

BREWSTER KAHLE: Well thank you for all the infrastructure EFF provides. And if anybody's in San Francisco, come over on a Friday afternoon! We give a tour. If I'm here, I give the tour and try to help answer questions. We even have ice cream. And so the idea is to go and invite people into this other alternative form of success that maybe they weren't taught about in business school or, or, or, uh, you know, they want to go off and do something else.
That's fine, but at least understand a little bit of how the underlying structures of the internet, whether it's some of the original plumbing, um, some of these visions of Wikipedia, Internet Archive. How do we make all of this work? And it's by working together, trusting each other to try to do things right, even when the technology allows you to do things that are abusive. Stepping back from that and building, uh, the safeguards into the technology eventually, and celebrate what we can get done to support a better civic infrastructure.

CINDY COHN: That is the perfect place to end it. Thank you so much, Brewster, for coming on and bringing your inspiration to us.

JASON KELLEY: I loved that we wrapped up the season with Brewster because really there isn't anything more important, in a lot of ways, to freedom than a library. And the tool of freedom that Brewster built, the Internet Archive and all of the different pieces of it, is something that I think is so critical to how people think about the internet and what it can do, and honestly, it's taken for granted. I think once you start hearing Brewster talk about it, you realize just how important it is. I just love hearing from the person who thought of it and built it.

CINDY COHN: Yeah, he's so modest. The “I only had one idea,” right? Or two ideas, you know, one is privacy and the other is universal access to all the world's information. You know, just some little things.

JASON KELLEY: Just a few things that he built into practice.

CINDY COHN: Well, and you know, he and a lot of other people, I think he's the first to point out that this is a sector that there's a lot of people working in this area and it's important that we think about it that way.
It does take the long view to build things that will last. And then I think he also really talked about the nonprofit sector and how, you know, that space is really important. And I liked his framing of it being kind of in between the dot edu, the academics and the dot com, that the dot orgs play this important role in bringing the public into the conversation about tech, and that's certainly what he's done.

JASON KELLEY: I loved how much of a positive pitch this was for nonprofits. I think when a lot of people think of charities, they don't think about EFF necessarily, or the Internet Archive, but this tech sector of nonprofits is, you know, that community you talked about, all working together to sort of build this structure that protects people's rights online and also gives them access to these incredible tools and projects and resources. And, you know, everyone listening to this is probably a part of that community in one way or another. It's much bigger than I think people realize.

CINDY COHN: Yeah. And whether you're contributing code or doing lawyering or doing activism, you know, there's, there's spaces throughout, and those are only just three that we do.
But the other piece, and, and you know, I was very of course honored that he told the story about national security letters, but, you know, we can support each other. Right. That when somebody in this community comes under attack, that's where EFF often shows up. But when, you know, he said people have ideas and they wanna be able to develop them, you know, the archive provides the infrastructure. All of this stuff is really important and important to lean into in this time when we're really seeing a lot of public institutions and nonprofit institutions coming under attack.
What I really love about this season, Jason, is the way we've been able to shine our little spotlight on a bunch of different pieces of the sector. And there's so many more. You know, as somebody who started in this digital world in the nineties when, you know, I could present all of the case law about the internet on one piece of paper in a 20 minute presentation.
You know, watching this grow out and seeing that it's just the beginning has been really fun, to be able to talk to all of these pieces. And you know, to me the good news is that people, you know, sometimes their stories get presented as if they're alone, or as if there's this lone, you know, it's kind of a superhero narrative. There's this lone Brewster Kahle who's out there doing things, and now of course that's true. Brewster's, you know, again, Brewster's somebody who I readily point to when people need an example of somebody who did really well in tech but didn't completely become a money-grubbing jerk as a result of it, but instead, you know, plowed it back into the community. It's important to have people like that, but it's also important to recognize that this is a community and that we're building it, and that it’s got plenty of space for the next person to show up and, and throw in ideas.
At least I hope that's how, you know, we fix the internet.

JASON KELLEY: And that's it for this episode and for this season. Thank you to Brewster for the conversation today, and to all of our guests this season for taking the time to share their insight, experience, and wisdom with us these past few months. Everybody who listens gets to learn a little bit more about how to fix the internet.
That is our goal at EFF. And every time I finish one of these conversations, I think, wow, there's a lot to do. So thank you so much for listening. If you wanna help us do that work, go to eff.org/pod and you can donate, become a member, and um, we have 30,000 members, but we could always use a few more because there is a lot to fix.
Thank you so much. Our theme music is by Nat Keefe of Beat Mower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

This “insidious” police tech claims to predict crime (Lock and Code S06E18)

8 September 2025 at 12:15

This week on the Lock and Code podcast…

In the late 2010s, a group of sheriffs out of Pasco County, Florida, believed they could predict crime. The Sheriff’s Department there had piloted a program called “Intelligence-Led Policing” and the program would allegedly analyze disparate points of data to identify would-be criminals.

But in reality, the program didn’t so much predict crime, as it did make criminals out of everyday people, including children. 

High schoolers’ grades were fed into the Florida program, along with their attendance records and their history with “office discipline.” And after the “Intelligence-Led Policing” service analyzed the data, it instructed law enforcement officers on who they should pay a visit to, who they should check in on, and who they should pester.

As reported by The Tampa Bay Times in 2020:

“They swarm homes in the middle of the night, waking families and embarrassing people in front of their neighbors. They write tickets for missing mailbox numbers and overgrown grass, saddling residents with court dates and fines. They come again and again, making arrests for any reason they can.

One former deputy described the directive like this: ‘Make their lives miserable until they move or sue.’”

Predictive policing can sound like science fiction, but it is neither scientific nor is it confined to fiction.

Police and sheriff’s departments across the US have used these systems to plug broad varieties of data into algorithmic models to try and predict not just who may be a criminal, but where crime may take place. Historical crime data, traffic information, and even weather patterns are sometimes offered up to tech platforms to suggest where, when, and how forcefully police units should be deployed.

And when the police go to those areas, they often find and document minor infractions that, when reported, reinforce the algorithmic analysis that an area is crime-ridden, even if those crimes are, as the Tampa Bay Times investigation found, a teenager smoking a cigarette, or stray trash bags outside a home.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Emily Galvin-Almanza, cofounder of Partners for Justice and author of the upcoming book “The Price of Mercy,” about predictive policing, its impact on communities, and the dangerous outcomes that might arise when police offload their decision-making to data.

“I am worried about anything that a data broker can sell, they can sell to a police department, who can then feed that into an algorithmic or AI predictive policing system, who can then use that system—based on the purchases of people in ‘Neighborhood A’—to decide whether to hyper-police ‘Neighborhood A.’”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Podcast Episode: Protecting Privacy in Your Brain

27 August 2025 at 03:05

The human brain might be the grandest computer of all, but in this episode, we talk to two experts who confirm that the ability for tech to decipher thoughts, and perhaps even manipulate them, isn't just around the corner – it's already here. Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions.


(You can also find this episode on the Internet Archive and on YouTube.) 

Neuroscientist Rafael Yuste and human rights lawyer Jared Genser are awestruck by both the possibilities and the dangers of neurotechnology. Together they established The Neurorights Foundation, and now they join EFF’s Cindy Cohn and Jason Kelley to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 

In this episode you’ll learn about:

  • How to protect people’s mental privacy, agency, and identity while ensuring equal access to the positive aspects of brain augmentation
  • Why neurotechnology regulation needs to be grounded in international human rights
  • Navigating the complex differences between medical and consumer privacy laws
  • The risk that information collected by devices now on the market could be decoded into actual words within just a few years
  • Balancing beneficial innovation with the protection of people’s mental privacy 

Rafael Yuste is a professor of biological sciences and neuroscience, co-director of the Kavli Institute for Brain Science, and director of the NeuroTechnology Center at Columbia University. He led the group of researchers that first proposed the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative launched in 2013 by the Obama Administration. 

Jared Genser is an international human rights lawyer who serves as managing director at Perseus Strategies, renowned for his successes in freeing political prisoners around the world. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and he is outside general counsel to The Neurorights Foundation, an international advocacy group he co-founded with Yuste that works to enshrine human rights as a crucial part of the development of neurotechnology.  

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

RAFAEL YUSTE: The brain is not just another organ of the body, but the one that generates our mind, all of our mental activity. And that's the heart of what makes us human is our mind. So this technology is one technology that for the first time in history can actually get to the core of what makes us human and not only potentially decipher, but manipulate the essence of our humanity.
10 years ago we had a breakthrough with studying the mouse’s visual cortex in which we were able to not just decode from the brain activity of the mouse what the mouse was looking at, but to manipulate the brain activity of the mouse. To make the mouse see things that it was not looking at.
Essentially we introduce, in the brain of the mouse, images. Like hallucinations. And in doing so, we took control over the perception and behavior of the mouse. So the mouse started to behave as if it was seeing what we were essentially putting into his brain by activating groups of neurons.
So this was fantastic scientifically, but that night I didn't sleep because it hit me like a ton of bricks. Like, wait a minute, what we can do in a mouse today, you can do in a human tomorrow. And this is what I call my Oppenheimer moment, like, oh my God, what have we done here?

CINDY COHN: That's the renowned neuroscientist Rafael Yuste talking about the moment he realized that his groundbreaking brain research could have incredibly serious consequences. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech. We're here to challenge ourselves, our guests and our listeners to imagine a better future that we can be working towards. How can we make sure to get this right, and what can we look forward to if we do?
And today we have two guests who are at the forefront of brain science -- and are thinking very hard about how to protect us from the dangers that might seem like science fiction today, but are becoming more and more likely.

JASON KELLEY: Rafael Yuste is one of the world's most prominent neuroscientists. He's been working in the field of neurotechnology for many years, and was one of the researchers who led the BRAIN Initiative launched by the Obama administration, which was a large-scale research project akin to the Genome Project, but focusing on brain research. He's the director of the NeuroTechnology Center at Columbia University, and his research has enormous implications for a wide range of mental health disorders, including schizophrenia, and neurodegenerative diseases like Parkinson's and ALS.

CINDY COHN: But as Rafael points out in the introduction, there are scary implications for technology that can directly manipulate someone's brain.

JASON KELLEY: We're also joined by his partner, Jared Genser, a legendary human rights lawyer who has represented no less than five Nobel Peace Prize Laureates. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and together with Rafael, he founded the Neurorights Foundation, an international advocacy group that is working to enshrine human rights as a crucial part of the development of neurotechnology.

CINDY COHN: We started our conversation by asking how the brain scientist and the human rights lawyer first teamed up.

RAFAEL YUSTE: I knew nothing about the law. I knew nothing about human rights my whole life. I said, okay, I avoided that like the pest because you know what? I have better things to do, which is to focus on how the brain works. But I was just dragged into the middle of this by our own work.
So it was a very humbling moment and I said, okay, you know what? I have to cross to the other side and get involved really with the experts that know how this works. And that's how I ended up talking to Jared. The whole reason we got together was pretty funny. We both got the same award from a Swedish foundation, from the Tällberg Foundation, the Eliasson Award for Global Leadership. In my case, because of the work I did on the Brain Initiative, and Jared got this award for his human rights work.
And, you know, this is one good thing about getting an award – or let me put it differently, at least getting an award led to something positive in this case – which is that someone on the award committee said, wait a minute, you guys should be talking to each other. And they put us in touch. He was like a matchmaker.

CINDY COHN: I mean, you really stumbled into something amazing because, you know, Jared, you're, you're not just kind of your random human rights lawyer, right? So tell me your version, Jared, of the meet cute.

JARED GENSER: Yes. I'd say we're like work spouses together. So the feeling is mutual in terms of the admiration, to say the least. And for me, that call was really transformative. It was probably the most impactful one hour call I've had in my career in the last decade because I knew very little to nothing about the neurotechnology side, you know, other than what you might read here or there.
I definitely had no idea how quickly emerging neuro technologies were developing and the sensitivity - the enormous sensitivity - of that data. And in having this discussion with Rafa, it was quite clear to me that my view of the major challenges we might face as humanity in the field of human rights was dramatically more limited than I might have thought.
And, you know, Rafa and I became fast friends after that and very shortly thereafter co-founded the Neurorights Foundation, as you noted earlier. And I think that this is what's made us such a strong team, is that our experiences and our knowledge and expertise are highly complementary.
Um, you know, Rafa and his colleagues had, uh, at the Morningside Group, which is a group of 25 experts he collected together at, uh, at Columbia, had already, um, you know, met and come up with, and published in the journal Nature, a review of the potential concerns that arise out of the potential misuse and abuse of neurotech.
And there were five areas of concern that they had identified, that include mental privacy, mental agency, mental identity, concerns about discrimination in the development and application of neurotechnologies, and fair use of mental augmentation. And these generalized concerns, uh, which they refer to as neurorights, of course map over to international human rights, uh, that to some extent are already protected by international treaties.
Um, but to other extents might need to be further interpreted from existing international treaties. And it was quite clear that when one would think about emerging neuro technologies and what they might be able to do, that a whole dramatic amount of work needed to be done before these things proliferate in such an extraordinary sense around the world.

JASON KELLEY: So Rafa and Jared, when I read a study like the one you described with the mice, my initial thought is, okay, that's great in a lab setting. I don't initially think like, oh, in five years or 10 years, we'll have technology that actually can be, you know, in the marketplace or used by the government to do the hallucination implanting you're describing. But it sounds like this is a realistic concern, right? You wouldn't be doing this work unless this had progressed very quickly from that experiment to actual applications and concerns. So what has that progression been like? Where are we now?

RAFAEL YUSTE: So let me tell you, two years ago I got a phone call in the middle of the night. It woke me up in the middle of the night, okay, from a colleague and friend who had his Oppenheimer moment. And his name is Eddie Chang. He's a professor of neurosurgery at UCSF, and he's arguably the leader in the world to decode brain activity from human patients. So he had been working with a patient that was paralyzed, because of a bulbar infarction, a stroke in, essentially, the base of her brain, and she had locked-in syndrome, so she couldn't communicate with the exterior. She was in a wheelchair and they implanted a few electrodes, an electrode array, into her brain with neurosurgery and connected those electrodes to a computer with an algorithm using generative AI.
And using this algorithm, they were able to decode her inner speech - the language that she wanted to generate. She couldn't speak because she was paralyzed. And when you conjure – we don't really know exactly what goes on during speech – but when you conjure the words in your mind, they were able to actually decode those words.
And then not only that, they were able to decode her emotions and even her facial gestures. So she was paralyzed and Eddie and her team built an avatar of the person in the computer with her face and gave that avatar, her voice, her emotions, and her facial gestures. And if you watch the video, she was just blown away.
So Eddie called me up and explained to me what they've done. I said, well, Eddie, this is absolutely fantastic. You just unlocked the person from this locked-in syndrome, giving hope to all the patients that have a similar problem. But of course he said, no, no, I, I'm not talking about that. I'm talking about, we just cloned her essentially.
It was actually published as the cover of the journal Nature. Again, this is the top journal in the world, so they gave them the cover. It was such an impressive result. and this was implantable neurotechnology. So it requires a neurosurgeon that go in and put in this electrode. So it is, of course, in a hospital setting, this is all under control and super regulated.
But since then, there's been fast development, partly spurred by all these investments into neurotechnology, uh, private and public, all over the world. There's been a lot of development of non-implantable neurotechnology to either record brain activity from the surface or to stimulate the brain from the surface without having to open up the skull.
And let me just tell you two examples that bring home the fact that this is not science fiction. In December 2023, a team in Australia used an EEG device, essentially like a helmet that you put on – you can actually buy these things on Amazon – and coupled it to a generative AI algorithm, again, like Eddie Chang. In fact, I think they were inspired by Eddie Chang's work, and they were able to decode the inner speech of volunteers. It wasn't as accurate as the decoding that you can do if you stick the electrodes inside. But from the outside, they have a video of a person that is mentally ordering a cappuccino at a Starbucks, no? And they essentially decode – they don't decode absolutely every word that the person is thinking, but enough words that the message comes out loud and clear. So the decoding of inner speech, it's doable with non-invasive technology. And not only that study from Australia; since then, you know, all these teams in the world, uh, we work and we help each other continuously. So, uh, shortly after that Australian team, another study in Japan published something, uh, with much higher accuracy, and then another study in China. Anyway, this is now becoming very common practice, to use generative AI to decode speech.
And then the stimulation side is also something that raises a lot of concerns ethically. In 2022, a lab at Boston University used external magnetic stimulation to activate parts of the brain in a cohort of volunteers that were older in age. This was the control group for a study on Alzheimer's patients. And they reported, in a very good paper, that they could increase both short-term and long-term memory by 30%.
So this is the first serious case that I know of where, again, this is not science fiction; this is demonstrated enhancement of, uh, mental ability in a human with noninvasive neurotechnology. So this could open the door to a whole industry that could use noninvasive devices, maybe magnetic stimulation, maybe acoustical, maybe, who knows, optical, to enhance any aspect of our mental activity. And that, I mean, just imagine.
This is what we're actually focusing on our foundation right now, this issue of mental augmentation because we don't think it's science fiction. We think it's coming.

JARED GENSER: Let me just kind of amplify what Rafa's saying and to kind of make this as tangible as possible for your listeners, which is that, as Rafa was already alluding to, when you're talking about, of course, implantable devices, you know, they have to be licensed by the Food and Drug Administration. They're implanted through neurosurgery in the medical context. All the data that's being gathered is covered by, you know, HIPAA and other state health data laws. But there are already available on the market today 30 different kinds of wearable neurotechnology devices that you can buy today and use.
As one example, you know, there's the company, Muse, that has a meditation device and you can buy their device. You put it on your head, you meditate for an hour. The BCI - brain computer interface - connects to your app. And then basically you'll get back from the company, you know, decoding of your brain activity to know when you're in a meditative state or not.
The problem is that these are EEG scanning devices that, if they were used in a medical context, would be required to be licensed. But in a consumer context, there's no regulation of any kind. And you're talking about devices that can gather from gigabytes to terabytes of neural data today, of which you can only decode maybe 1% of it.
And the data that's being gathered, uh, you know, EEG scanning device data in wearable form, you could identify if a person has any of a number of different brain diseases and you could also decode about a dozen different mental states. Are you happy, are you sad? And so forth.
And so at our foundation, at the Neurorights Foundation, we actually did a very important study on this topic that actually was covered on the front page of the New York Times. And we looked at the user agreements for, and the privacy agreements, for the 30 different companies’ products that you can buy today, right now. And what we found was that in 29, out of the 30 cases, basically, it's carte blanche for the companies. They can download your data, they can do it as they see fit, and they can transfer it, sell it, etc.
Only in one case did a company, ironically called Unicorn, actually keep the data on your local device, and it was never transferred to the company in question. And we benchmark those agreements across a half dozen different global privacy standards and found that there were just, you know, gigantic gaps that were there.
So, you know, why is that a problem? Well take the Muse device I just mentioned, they talk about how they've downloaded a hundred million hours of consumer neural data from people who have bought their device and used it. And we're talking about these studies in Australia and Japan that are decoding thought to text.
Today, thought to text, you know, with the EEG, can only be done at a relatively slow speed, like 10 or 15 words a minute with like maybe 40, 50% accuracy. But eventually it's gonna start to approach the speed of Eddie Chang's work in California, where with the implantable device you can do thought to text at 80 words a minute, 95% accuracy.
And so the problem is that in three, four years, let's say when this technology is perfected with a wearable device, this company Muse could theoretically go back to that hundred million hours of neural data and then actually decode what the person was thinking in the form of words when they were actually meditating.
And to help you understand as a last point, why is this, again, science and not science fiction? You know, Apple is already clearly aware of the potential here, and two years ago, they actually filed a patent application for their next generation AirPod device that is going to have built-in EEG scanners in each ear, right?
And they sell a hundred million pairs of AirPods every single year, right? And when this kind of technology, thought to text, is perfected in wearable form, those AirPods will be able to be used, for example, to do thought-to-text emails, thought-to-text text messages, et cetera.
But when you continue to wear those AirPod devices, the huge question is what's gonna be happening to all the other data that's being, you know, absorbed, how is it going to be able to be used, and so forth. And so this is why it's really urgent at an international level to be dealing with this. And we're working at the United Nations and in many other places to develop various kinds of frameworks consistent with international human rights law. And we're also working, you know, at the national and sub-national level.
Rafa, my colleague, you know, led the charge in Chile to help create a first-ever constitutional amendment to a constitution that protects mental privacy in Chile. We've been working with a number of states in the United States now, uh, California, Colorado and Montana – very different kinds of states – have all amended their state consumer data privacy laws to extend their application to neural data. But it is really, really urgent in light of the fast developing technology and the enormous gaps between these consumer product devices and their user agreements and what is considered to be best practice in terms of data privacy protection.

CINDY COHN: Yeah, I mean I saw that study that you did and it's just, you know, it mirrors a lot of what we do in the other context where we've got click-wrap licenses and other, you know, kind of very flimsy one-sided agreements that people allegedly agree to, but I don't think, under any lawyer's understanding of, like, a meeting of the minds and a contract that you negotiate, that it's anything like that.
And then when you add it to this context, I think it puts these problems on steroids in many ways and makes 'em really worse. And I think one of the things I've been thinking about in this is, you know, you guys have in some ways, you know, one of the scenarios that demonstrates how our refusal to take privacy seriously on the consumer side and on the law enforcement side is gonna have really, really dire, much more dire consequences for people potentially than we've even seen so far. And really requires serious thinking about, like, what do we mean in terms of protecting people's privacy and identity and self-determination?

JARED GENSER: Let me just interject on that one narrow point because I was literally just on a panel discussion remotely at the UN Crime Congress last week that was hosted by the UN Office on Drugs and Crime, UNODC, and Interpol, the International Police Organization. And it was a panel discussion on the topic of emerging law enforcement uses of neurotechnologies. And so this is coming. They just launched a project jointly to look at potential uses as well as to develop, um, guidelines for how that can be done. But this is not at all theoretical. I mean, this is very, very practical.

CINDY COHN: And much of the funding for this has come out of the Department of Defense, so thinking about how we put the right guardrails in place is really important. And honestly, if you think that the only people who are gonna want access to the neural data that these devices are collecting are private companies who wanna sell us things, like, I, you know, that's not the history, right? Law enforcement comes for these things both locally and internationally, no matter who has custody of them. And so you kind of have to recognize that this isn't just a foray for kind of skeezy companies to do things we don't like.

JARED GENSER: Absolutely.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast you might like. Have a listen to this:
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Rafael Yuste and Jared Genser.

CINDY COHN: This might be a little bit of a geeky lawyer question, but I really appreciated the decision you guys made to really ground this in international human rights, which I think is tremendously important. But not obvious to most Americans as the kind of framework that we ought to invoke. And I was wondering how you guys came to that conclusion.

JARED GENSER: No, I think it's actually a very, very important question. I mean, I think that the bottom line is that there are a lot of ways to look at, um, questions like this. You know, you can think about, you know, a national constitution or national laws. You can think about international treaties or laws.
You can look at ethical frameworks or self governance by companies themselves, right? And at the end of the day, because of the seriousness and the severity of the potential downside risks if this kind of technology is misused or abused, you know, our view is that what we really need is what's referred to by lawyers as hard law, as in law that is binding and enforceable against states by citizens. And obviously binding on governments and what they do, binding on companies and what they do and so forth.
And so it's not that we don't think, for example, ethical frameworks or ethical standards or self-governance by companies are not important. They are very much a part of an overall approach, but our approach at the Neurorights Foundation is, let's look at hard law, and there are two kinds of hard law to look at. The first are international human rights treaties. These are multilateral agreements that states negotiate and come to agreements on. And when a country signs and ratifies a treaty, as the US has on the key relevant treaty here, which is the International Covenant on Civil and Political Rights, those rights get domesticated in the law of each country in the world that signs and ratifies them, and that makes them then enforceable. And so we think first and foremost, it's important that we ground our concerns about the misuse and abuse of these technologies in the requirements of international human rights law.
Because the United States is obligated and other countries in the world are obligated to protect their citizens from abuses of these rights.
And at the same time, of course that isn't sufficient on its own. We also need to see in certain contexts, probably not in the US context, amendments to a constitution that's much harder to do in the US but laws that are actually enforceable against companies.
And this is why our work in California, Montana and Colorado is so important because now companies in California, as one illustration, which is where Apple is based and where Meta is based and so forth, right? They now have to provide the protections embedded in the California Consumer Privacy Act to all of their gathering and use of neural data, right?
And that means that you have a right to be forgotten. You have a right to demand your data not be transferred or sold to third parties. You have a right to have access to your data. Companies have obligations to tell you what data are they gathering, how are they gonna use it? If they propose selling or transferring it to whom and so forth, right?
So these are now ultimately gonna be binding law on companies, you know, based in California and, as we're developing this, around the world. But to us, you know, that is really what needs to happen.

JASON KELLEY: Your success has been pretty stunning. I mean, even though you're, you know, there's obviously so much more to do. We work to try to amend and change and improve laws at the state and local and federal level and internationally sometimes, and it's hard.
But the two of you together, I think there's something really fascinating about the way, you know, you're building a better future and building in protections for that better future at the same time.
And, like, you're aware of why that's so important. I think there's a big lesson there for a lot of people who work in the tech field and in the science field about, you know, you can make incredible things and also make sure they don't cause huge problems. Right? And that's just a really important lesson.
What we do with this podcast is we do try to think about what the better future that people are building looks like, what it should look like. And the two of you are, you know, thinking about that in a way that I think a lot of our guests aren't because you're at the forefront of a lot of this technology. But I'd love to hear what Rafa and then Jared, you each think, uh, science and the law look like if you get it right, if things go the way you hope they do, what, what does the technology look like? What did the protections look like? Rafa, could you start.

RAFAEL YUSTE: Yeah, I would comment, there's five places in the world today where there's, uh, hard law protection for brain activity and brain data: in the Republic of Chile, the state of Rio Grande do Sul in Brazil, and in the states of Colorado, California, and Montana in the US. And in every one of these places there's been votes in the legislature, and they're all bicameral legislatures, so there've been 10 votes, and every single one of those votes has been unanimous.
All political parties in Chile, in Brazil - actually in Brazil there were 16 political parties. That never happened before that they all agreed on something. California, Montana, and Colorado, all unanimous except for one vote no in Colorado of a person that votes against everything. He's like, uh, he goes, he has some, some axe to grind with, uh, his companions and he just votes no on everything.
But aside from this person. Uh, actually the way the Colorado, um, bill was introduced by a Democratic representative, but, uh, the Republican side, um, took it to heart. The Republican senator said that this is a definition of a no-brainer. And he asked for permission to introduce that bill in the Senate in Colorado.
So the person that defended it in the Senate in Colorado was actually not a Democrat but a Republican. So why is that? Quoting this Colorado senator, it is a no-brainer. This is an issue where, I mean, the minute you get it, you understand: do you want your brain activity to be decoded without your consent? Well, this is not a good idea.
So not a single person that we've met has opposed this issue. So I think Jared and I do the best job we can and we work very hard. And I should tell you that we're doing this pro bono without being compensated for our work. But the reason behind the success is really the issue, it's not just us. I think that we're dealing with an issue on which there is fundamental, widespread, universal agreement.

JARED GENSER: What I would say is that, you know, on the one hand, and we appreciate of course, the kind words about the progress we're making. We have made a lot of progress in a relatively short period of time, and yet we have a dramatically long way to go.
We need to further interpret international law in the way that I'm describing to ensure that privacy includes mental privacy all around the world, and we really need national laws in every country in the world. Subnational laws and various places too, and so forth.
I will say that, as you know from all the great work you guys do with your podcast, getting something done at the federal level is of course much more difficult in the United States because of the divisions that exist. And there is no federal consumer data privacy law because we've never been able to get Republicans and Democrats to agree on the text of one.
The only kinds of consumer data protected at the federal level is healthcare data under HIPAA and financial data. And there have been multiple efforts to try to do a federal consumer data privacy law that have failed. In the last Congress, there was something called the American Privacy Rights Act. It was bipartisan, and it basically just got ripped apart because they were adding, trying to put together about a dozen different categories of data that would be protected at the federal level. And each one of those has a whole industry association associated with it.
And we were able to get that draft bill amended to include neural data in it, which it didn't originally include, but ultimately the bill died before even coming to a vote at committees. In our view, you know, that then just leaves state consumer data privacy laws. There are about 35 states now that have state level laws. 15 states actually still don't.
And so we are working state by state. Ultimately, I think that when it comes, especially to the sensitivity of neural data, right? You know, we need a federal law that's going to protect neural data. But because it's not gonna be easy to achieve, definitely not as a package with a dozen other types of data, or in general, you know, one way of course to get to a federal solution is to start to work with lots of different states. All these different state consumer data privacy laws are different. I mean, they're similar, but they have differences to them, right?
And ultimately, as you start to see different kinds of regulation being adopted in different states relating to the same kind of data, our hope is that industry will start to say to members of Congress and the, you know, the Trump administration, hey, we need a common way forward here and let's set at least a floor at the federal level for what needs to be done. If states want to regulate it more than that, that's fine, but ultimately, I think that there's a huge amount of work still left to be done, obviously all around the world and at the state level as well.

CINDY COHN: I wanna push you a little bit. So what does it look like if we get it right? What is, what is, you know, what does my world look like? Do I, do I get the cool earbuds or do I not?

JARED GENSER: Yeah, I mean, look, I think the bottom line is that, you know, the world that we want to see, and I mean Rafa of course is the technologist, and I'm the human rights guy. But the world that we wanna see is one in which, you know, we promote innovation while simultaneously, you know, protecting people from abuses of their human rights and ensure that neuro technologies are developed in an ethical manner, right?
I mean, so we do need self-regulation by industry. You know, we do need national and international laws. But at the same time, you know, one in three people in their lifetimes will have a neurological disease, right?
The brain diseases that people know best or you know, from family, friends or their own experience, you know, whether you look at Alzheimer's or Parkinson's, I mean, these are devastating, debilitating and all, today, you know, irreversible conditions. I mean, all you can do with any brain disease today at best is to slow its progression. You can't stop its progression and you can't reverse it.
And eventually, in 20 or 30 years, from these kinds of emerging neurotechnologies, we're going to be able to ultimately cure brain diseases. And so that's what the world looks like, is the, think about all of the different ways in which humanity is going to be improved, when we're able to not only address, but cure, diseases of this kind, right?
And, you know, one of the other exciting parts of emerging neurotechnologies is our ability to understand ourselves, right? And our own brain and how it operates and functions. And that is, you know, very, very exciting.
Eventually we're gonna be able to decode not only thought-to-text, but even our subconscious thoughts. And that of course, you know, raises enormous questions. And this technology is also gonna, um, also even raise fundamental questions about, you know, what does it actually mean to be human? And who are we as humans, right?
And so, for example, one of the side effects of deep brain stimulation in a very, very, very small percentage of patients is a change in personality. In other words, you know, if you put a device in someone's, you know, mind to control the symptoms of Parkinson's, when you're obviously messing with a human brain, other things can happen.
And there's a well known case of a woman, um, who went from being, in essence, an extreme introvert to an extreme extrovert, you know, with deep brain stimulation as a side effect. And she's currently being studied right now, um, along with other examples of these kinds of personality changes.
And if we can figure out in the human brain, for example, what parts of the brain deal with being an introvert or an extrovert, you know, you're also raising fundamental questions about the, the possibility of being able to change your personality, in part, with a brain implant, right? I mean, we can already do that, obviously, with psychotropic medications for people who have mental illnesses, through psychotherapy and so forth. But there are gonna be other ways in which we can understand how the brain operates and functions and optimize our lives through the development of these technologies.
So the upside is enormous, you know. Medically and scientifically, economically, from a self-understanding point of view. Right? And at the same time, the downside risks are profound. It's not just decoding our thoughts. I mean, we're on the cusp of an unbeatable lie detector test, which could have huge positive and negative impacts, you know, in criminal justice contexts, right?
So there are so many different implications of these emerging technologies, and we are often so far behind, on the regulatory side, the actual scientific developments that in this particular case we really need to try to do everything possible to at least develop these solutions at a pace that matches the developments, let alone get ahead of them.

JASON KELLEY: I'm fascinated to see, in talking to them, how successful they've been when there isn't a big, you know, lobbying wing of neurotech products and companies stopping them from this, because they're ahead of the game. I think that's the thing that really struck me and, and something that we can hopefully learn from in the future: that if you're ahead of the curve, you can implement these privacy protections much easier, obviously. That was really fascinating. And of course just talking to them about the technology set my mind spinning.

CINDY COHN: Yeah, in both directions, right? Both what an amazing opportunity and oh my God, how terrifying this is, both at the same time. I thought it was interesting because I think from where we sit as people who are trying to figure out how to bring privacy into some already baked technologies and business models and we see how hard that is, you know, but they feel like they're a little behind the curve, right? They feel like there's so much more to do. So, you know, I hope that we were able to kind of both inspire them and support them in this, because I think to us, they look ahead of the curve and I think to them, they feel a little either behind or over, you know, not overwhelmed, but see the mountain in front of them.

JASON KELLEY: A thing that really stands out to me is when Rafa was talking about the popularity of these protections, you know, and, and who on all sides of the aisle are voting in favor of these protections, it's heartwarming, right? It's inspiring that if you can get people to understand the sort of real danger of lack of privacy protections in one field. It makes me feel like we can still get people, you know, we can still win privacy protections in the rest of the fields.
Like you're worried for good reason about what's going on in your head and that, how that should be protected. But when you type on a computer, you know, that's just the stuff in your head going straight onto the web. Right? We've talked about how like the phone or your search history are basically part of the contents of your mind. And those things need privacy protections too. And hopefully we can, you know, use the success of their work to talk about how we need to also protect things that are already happening, not just things that are potentially going to happen in the future.

CINDY COHN: Yeah. And you see kind of both kinds of issues, right? Like, if they're right, it's scary. When they're wrong it's scary. But also I'm excited about and I, what I really appreciated about them, is that they're excited about the potentialities too. This isn't an effort that's about the house of no innovation. In fact, this is where responsibility ought to come from. The people who are developing the technology are recognizing the harms and then partnering with people who have expertise in kind of the law and policy and regulatory side of things. So that together, you know, they're kind of a dream team of how you do this responsibly.
And that's really inspiring to me because I think sometimes people get caught in this, um, weird, you know, choose, you know, the tech will either protect us or the law will either protect us. And I think what Rafa and Jared are really embodying and making real is that we need both of these to come together to really move into a better technological future.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

How a scam hunter got scammed (Lock and Code S06E17)

24 August 2025 at 22:11

This week on the Lock and Code podcast…

If there’s one thing that scam hunter Julie-Anne Kearns wants everyone to know, it is that no one is immune from a scam. And she would know—she fell for one last year.

For years now, Kearns has made a name for herself on TikTok as a scam awareness and education expert. Popular under the name @staysafewithmjules, Kearns makes videos about scam identification and defense. She has posted countless profile pictures that are used and reused by online scammers across different accounts. She has flagged active scam accounts on Instagram and detailed their strategies. And, perhaps most importantly, she answers people’s questions.

In fielding everyday comments and concerns from her followers and from strangers online, Kearns serves as a sort of gut-check for the internet at large. And by doing it day in, day out, Kearns is able to hone her scam “radar,” which helps guide people to safety.

But last year, Kearns fell for a scam, disguised initially as a letter from HM Revenue & Customs, or HMRC, the tax authority for the United Kingdom.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Kearns about the scam she fell for and what she’s lost, the worldwide problem of victim blaming, and the biggest warning signs she sees for a variety of scams online.

“A lot of the time you think that it’s somebody who’s silly—who’s just messing about. It’s not. You are dealing with criminals.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Podcast Episode: Separating AI Hope from AI Hype

13 August 2025 at 03:05

If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.


(You can also find this episode on the Internet Archive and on YouTube.) 

 Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive. 

In this episode you’ll learn about:

  • What it means to be a “techno-optimist” (and NOT the venture capitalist kind)
  • Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives
  • How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises
  • Why “cheapfakes” tend to be as effective as, or more effective than, deepfakes in shoring up political support
  • How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line 

Arvind Narayanan is professor of computer science and director of the Center for Information Technology Policy at Princeton University. Along with Sayash Kapoor, he publishes the AI Snake Oil newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; together they authored “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (2024, Princeton University Press). He has studied algorithmic amplification on social media as a visiting senior researcher at Columbia University's Knight First Amendment Institute; co-authored an online textbook on fairness and machine learning; and led Princeton's Web Transparency and Accountability Project, uncovering how companies collect and use our personal information.

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

ARVIND NARAYANAN: The people who believe that super intelligence is coming very quickly tend to think of most tasks that we wanna do in the real world as being analogous to chess, where it was the case that initially chessbots were not very good. At some point, they reached human parity. And then very quickly after that, simply by improving the hardware and then later on by improving the algorithms, including by using machine learning, they're vastly, vastly superhuman.
We don't think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, you know, require common sense, require a kind of understanding of a fuzzy task description. It's not even clear when you've done well and when you've not done well.
We think that human performance is not limited by our biology. It's limited by our state of knowledge of the world, for instance. So the reason we're not better doctors is not because we're not computing fast enough, it's just that medical research has only given us so much knowledge about how the human body works and you know, how drugs work and so forth.
And the other is you've just hit the ceiling of performance. The reason people are not necessarily better writers is that it's not even clear what it means to be a better writer. It's not as if there's gonna be a magic piece of text, you know, that's gonna, like persuade you of something that you never wanted to believe, for instance, right?
We don't think that sort of thing is even possible. And so those are two reasons why in the vast majority of tasks, we think AI is not going to become better or at least much better than human professionals.

CINDY COHN: That's Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN: On this show, we try to get away from the dystopian tech doomsayers – and offer space to envision a more hopeful and positive digital future that we can all work towards.

JASON KELLEY: And our guest is one of the most level-headed and reassuring voices in tech.

CINDY COHN: Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He’s also the co-author of a terrific newsletter called AI Snake Oil – which has also become a book – where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits.
He is also a self-described “techno-optimist”, but he means that in a very particular way – so we started off with what that term means to him.

ARVIND NARAYANAN: I think there are multiple kinds of techno-optimism. There's the Marc Andreessen kind where, you know, let the tech companies do what they wanna do and everything will work out. I'm not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that so that we can then realize what our positive future is.
So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes, I was growing up in India and, frankly, the education system kind of sucked. My geography teacher thought India was in the Southern Hemisphere. That's a true story.

CINDY COHN: Oh my God. Whoops.

ARVIND NARAYANAN: And, you know, there weren't any great libraries nearby. And so a lot of what I knew, and I not only had to teach myself, but it was hard to access reliable, good sources of information. We had had a lot of books of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-Rom encyclopedia on it.
That was a completely life-changing moment for me. Right. So that was the first time I could get close to this idea of having all information at our fingertips. That was even before I kind of had internet access even. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science.
Of course I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology as opposed to more of the tech itself.
Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI: the way that internet access, if done right, has been bringing a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the western world, with our institutions and so forth.

CINDY COHN: So let's drill down a second on this because I really love this image. You know, I was a little girl growing up in Iowa and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to.
So, you know, from I think all around the world, there's this experience and depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD Rom of an encyclopedia, but it's that same moment and, I think that that is the promise that we have to hang on to.
So what would an educational world look like? You know, if you're a student or a teacher, if we are getting AI right?

ARVIND NARAYANAN: Yeah, for sure. So let me start with my own experience. I kind of actually use AI a lot in the way that I learn new topics. This is something I was surprised to find myself doing given the well-known accuracy limitations of these chatbots, but it turned out that there are relatively easy ways to work around those limitations.
Uh, one example of a user adaptation is to always be in a critical mode, where you know that out of 10 things the AI is telling you, one is probably going to be wrong. And so being in that skeptical frame of mind actually, in my view, enhances learning. And that's the right frame of mind to be in anytime you're learning anything, I think. So that's one kind of adaptation.
But there are also technology adaptations, right? Just the simplest example: If you ask AI to be in Socratic mode, for instance, in a conversation, uh, a chat bot will take on a much more appropriate role for helping the user learn as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts and it actually limits their critical thinking and their ability to learn and grow, right? So that's one simple example to make the point that a lot of this is not about AI itself, but how we use AI.
More broadly, in terms of a vision for what integrating this into the education system could look like, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling, that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning sign, but a lot of people in the AI industry have taken it as a manual, or a vision for what this should look like.
But even in my experiences with my own kids, right, they're five and three, even little things like, you know, I was, uh, talking to my daughter about fractions the other day, and I wanted to help her visualize fractions. And I asked Claude to make a little game that would help do that. And within, you know, it was 30 seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. And then it will divide the line segment into five parts, highlight three, show how close the child did to the correct answer, and, you know, give feedback and that sort of thing, and you can kind of instantly create that, right?
So this convinces me that there is in fact a lot of potential in AI and personalization if a particular child is struggling with a particular thing, a teacher can create an app on the spot and have the child play with it for 10 minutes and then throw it away, never have to use it again. But that can actually be meaningfully helpful.
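The kind of throwaway teaching app Arvind describes here really is small. As a rough illustration, here is a minimal sketch of the same idea in Python, with a typed-in decimal standing in for the slider; the details are illustrative assumptions, not the app he actually generated:

```python
import random

def fraction_game(rounds: int = 5) -> None:
    """Quiz the player on where a random fraction falls between 0 and 1."""
    for _ in range(rounds):
        denominator = random.randint(2, 9)
        numerator = random.randint(1, denominator - 1)
        target = numerator / denominator

        print(f"\nWhere does {numerator}/{denominator} fall between 0 and 1?")
        try:
            guess = float(input("Type your guess as a decimal (e.g. 0.6): "))
        except ValueError:
            print("That wasn't a number, skipping this one.")
            continue

        # The "slider": the unit line split into `denominator` parts,
        # with the first `numerator` parts highlighted.
        highlighted = "#" * numerator + "-" * (denominator - numerator)
        print(f"Line split into {denominator} parts: [{highlighted}]")

        error = abs(guess - target)
        print(f"The exact value is {target:.2f}; you were off by {error:.2f}.")
        if error < 0.05:
            print("Great estimate!")
        elif error < 0.15:
            print("Close, keep practicing.")
        else:
            print("Not quite. Try picturing the line split into equal parts.")

if __name__ == "__main__":
    fraction_game()
```

Run in a terminal, it gives the same loop he describes: generate a fraction, take a guess, show the divided line, and report how close the answer was.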

JASON KELLEY: This kind of AI and education conversation is really close to my heart because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene he was so excited for exactly the reasons you're talking about. But at the same time, a lot of schools immediately put in place sort of like, you know, Chat GPT bans and things like that.
And we've talked a little bit on EFF’s Deep Links blog about how, you know, that's probably an overstep in terms of like, people need to know how to use this, whether they're students or not. They need to understand what the capabilities are so they can have this sort of uses of it that are adapting to them rather than just sort of like immediately trying to do their homework.
So do you think schools, you know, given the way you see it, are well positioned to get to the point you're describing? I mean, how, like, that seems like a pretty far future where a lot of teachers know how AI works or school systems understand it. Like how do we actually do the thing you're describing because most teachers are overwhelmed as it is.

ARVIND NARAYANAN: Exactly. That's the root of the problem. I think there need to be, you know, structural changes. There needs to be more funding. And I think there also needs to be more of an awareness so that there's less of this kind of adversarial approach. Uh, I think about, you know, the levers for change where I can play a little part. I can't change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now, today, is maybe not the most helpful, and it can be reframed in a way that is much more actionable for teachers and others. So there are a lot of studies that look at what is the impact of AI in the classroom that, to me, are the equivalent of, is eating food good for you? It's addressing the question at the wrong level of abstraction.

JASON KELLEY: Yeah.

ARVIND NARAYANAN: You can't answer the question at that high level because you haven't specified any of the details that actually matter. Whether food is good for you entirely depends on what food it is, and if the way you studied that was to go into the grocery store and sample the first 15 items that you saw, you're measuring properties of your arbitrary sample instead of the underlying phenomena that you wanna study.
And so I think researchers have to drill down much deeper into what does AI for education actually look like, right? If you ask the question at the level of are chatbots helping or hurting students, you're gonna end up with nonsensical answers. So I think the research can change and then other structural changes need to happen.

CINDY COHN: I heard you make kind of a similar point on a podcast, which is, you know, what if we were deciding whether vehicles were good or bad? Everyone could understand that that's way too broad a characterization for a general purpose kind of device to come to any reasonable conclusion. You have to look at the difference between, you know, a truck, a car, a taxi, or various other kinds of vehicles in order to do that. And I think you do a good job of that in your book, at least in kind of starting to give us some categories, and the one that we're most focused on at EFF is the difference between predictive technologies and other kinds of AI. Because I think, like you, we have identified these kinds of predictive technologies as being kind of the most dangerous ones we see right now in actual use. Am I right about that?

ARVIND NARAYANAN: That's our view in the book, yes, in terms of the kinds of AI that has the biggest consequences in people's lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they're predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past, right? So there are two questions here, a technical question and a moral one.
The technical question is, how accurate can you get? And it turns out when we review the evidence, not very accurate. There's a long section in our book at the end of which we conclude that one legitimate way to look at it is that all that these systems are predicting is the more prior arrests you have, the more likely you are to be arrested in the future.
So that's the technical aspect, and that's because, you know, it's just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on things, random things that might happen in the future.
It's something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to suspend common sense and somehow believe that the future is actually accurately predictable.

CINDY COHN: The other piece that I've seen you and others talk about is that the only data you have is what the cops actually do, and that doesn't tell you about crime, it tells you about what the cops do. So my friends at the Human Rights Data Analysis Group called it predicting the police rather than predicting crime.
And we know there's a big difference between the crime that the cops respond to and the general crime. So it's gonna look like the people who commit crimes are the people who always commit crimes when it's just the subset that the police are able to focus on, and we know there's a lot of bias baked into that as well.
So it's not just inside the data, it's outside the data that you have to think about in terms of these prediction algorithms and what they're capturing and what they're not. Is that fair?

ARVIND NARAYANAN: That's totally, yeah, that's exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance, and, and you know, it's not the same morally problematic kind of use where you're denying someone their freedom. But a lot of the same pitfalls apply.
I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They're not able to manually go through all of them. So they want to try to automate the process. But that's not actually addressing what is broken about the system, and when they're doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it's only escalating the arms race, right?
I think the reason this is broken is that we fundamentally don't have good ways of knowing who's going to be a good fit for which position, and so by pretending that we can predict it with AI, we're just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well.
Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way.
So in our view, the only way to get away from this is to make necessary organizational reforms to these broken processes. Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I'm not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.

JASON KELLEY: One of the themes that you bring up in the newsletter and the book is AI evaluation. Let's say you have one of these companies with the hiring tool: why is it so hard to evaluate the sort of like, effectiveness of these AI models or the data behind them? I know that it can be, you know, difficult if you don't have access to it, but even if you do, how do we figure out the shortcomings that these tools actually have?

ARVIND NARAYANAN: There are a few big limitations here. Let's say we put aside the data access question, the company itself wants to figure out how accurate these decisions are.

JASON KELLEY: Hopefully!

ARVIND NARAYANAN: Yeah. Um, yeah, exactly. They often don't wanna know, but even if you do wanna know that in terms of the technical aspect of evaluating this, it's really the same problem as the medical system has in figuring out whether a drug works or not.
And we know how hard that is. That actually requires a randomized, controlled trial. It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands of people, follow them for a period of several years, and figure out whether the treatment group, to which you either gave the drug or, in the hiring case, applied your algorithm, has a different outcome on average from the control group, to which you either gave a placebo or, in the hiring case, applied the traditional hiring procedure.
Right. So that's actually what it takes. And, you know, there's just no incentive in most companies to do this because obviously they don't value knowledge for its own sake. And the ROI is just not worth it: the effort that they're gonna put into this kind of evaluation is not going to, uh, allow them to capture the value out of it.
It brings knowledge to the public, to society at large. So what do we do here? Right? So usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we're pretty far from having a cultural understanding that this is the sort of thing that's necessary.
And just like the medical community has gotten used to doing this, we need to do this whenever we care about the outcomes, right? Whether it's in criminal justice, hiring, wherever it is. So I think that'll take a while, and our book tries to be a very small first step towards changing public perception that this is not something you can somehow automate using AI. These are actually experiments on people. They're gonna be very hard to do.

JASON KELLEY: Let's take a quick moment to thank our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. You are the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast – have a listen to this.
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Arvind Narayanan.

CINDY COHN: So let's go to the other end of the AI world: the people who, you know, I think they call it AI safety, where they're really focused on the, you know, robots-are-gonna-kill-us-all kind of concerns. 'Cause that's a piece of this story as well, and I'd love to hear your take on, you know, kind of the doom loop version of AI.

ARVIND NARAYANAN: Sure. Yeah. So there's a whole chapter in the book where we talk about concerns around catastrophic risk from future, more powerful AI systems, and we have also elaborated a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I'm glad that folks are studying AI safety and the kinds of unusual, let's say, risks that might arise in the future that are not necessarily direct extrapolations of the risks that we have currently.
But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, uh, you know, such as, uh, curbing open weights AI, for instance, because you never know who's gonna download these systems and what they're gonna do with them.
So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions that we will need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kind of non-proliferation measures as we call them, are, in our view, almost guaranteed not to work.
And to even try to enforce that you're kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere and make sure that the companies, the few companies that are gonna be licensed to do this, are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models.
Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts’ machines in a basement, right? And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people have in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified.
So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that that was so dangerous in terms of misinformation being out there, that it was going to have potentially deleterious impacts on democracy, that they couldn't release it on an open weights basis.
That's a model that my students now build, you know, in an afternoon, just to learn the process of building models, right? So that's how cheap that has gotten six years later, and vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the WIRED database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone of the other tribe, if you will, who is skeptical, but just to give fodder for your own tribe so that they will, you know, continue to support whatever it is you're pushing for.
And for that purpose, it doesn't have to be that convincing or that deceptive, it just has to be cheap fakes as it's called. It's the kind of thing that anyone can do, you know, in 10 minutes with Photoshop. Even with the availability of sophisticated AI image generators. A lot of the AI misinformation we're seeing are these kinds of cheap fakes that don't even require that kind of sophistication to produce, right?
So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. Another great one is in cybersecurity, which, you know, as you know, I worked in for many years before I started working in AI.
And if the concern is that AI is gonna find software vulnerabilities and exploit them, and exploit critical infrastructure, whatever, better than humans can, I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities, but it turns out that this has actually helped defenders over attackers. Because software companies can and do, and this is, you know, really almost the first line of defense, use these automated vulnerability discovery methods to find and fix vulnerabilities in their own software before even putting it out there, where attackers would have a chance to find those vulnerabilities.
So to summarize all of that, a lot of the fears are based on a kind of incorrect theory of the interaction between technology and society. Uh, we have other ways to defend in, in fact, in a lot of ways, AI itself is, is the defense against some of these AI enabled threats we're talking about? And thirdly, the defenses that involve trying to control AI are not going to work. And they are, in our view, pretty dangerous for democracy.
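Fuzzing, which Arvind points to as an automated vulnerability discovery method, is conceptually simple: throw large volumes of random or mutated input at a program and record what makes it crash. Here is a minimal sketch in Python; the target function is a made-up stand-in for real software under test, not any actual parser:

```python
import random
import string

def target_parser(data: str) -> None:
    """Stand-in for code under test; raises on one kind of malformed input."""
    if data.startswith("(") and ")" not in data:
        raise ValueError("unbalanced parenthesis")

def fuzz(trials: int = 10_000, max_len: int = 20) -> list:
    """Feed random strings to the target and collect inputs that crash it."""
    crashes = []
    alphabet = string.ascii_letters + string.digits + "(){}[]<>\"' "
    for _ in range(trials):
        length = random.randint(0, max_len)
        candidate = "".join(random.choice(alphabet) for _ in range(length))
        try:
            target_parser(candidate)
        except Exception:
            # A crash (or unhandled exception) is a lead worth investigating.
            crashes.append(candidate)
    return crashes

if __name__ == "__main__":
    found = fuzz()
    print(f"Found {len(found)} crashing inputs; first few: {found[:3]!r}")
```

Real fuzzers such as those used in industry are far more sophisticated, but the defensive logic Arvind describes is the same: the vendor runs this loop against its own code before shipping, so the crashes get fixed before attackers can find them.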

CINDY COHN: Can you talk a little bit about the AI as normal technology? Because I think this is a world that we're headed into that you've been thinking about a little more. 'cause we're, you know, we're not going back.
Anybody who hangs out with people who write computer code knows that using these systems to write computer code is like normal now. Um, and it would be hard to go back even if you wanted to. Um, so tell me a little bit about, you know, this version of AI as normal technology, 'cause I think it feels like the future now. But actually, what do they say, the future is here, it's just not evenly distributed. Like, it is not evenly distributed yet. So what does it look like?

ARVIND NARAYANAN: Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI, that AI will at some point be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today's economy at least, and asks, how quickly will this happen? What are the effects going to be?
So a lot of people who think this will happen think that it's gonna happen this decade, and a lot of this, you know, uh, brings a lot of fear to people and a lot of very short term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, to use an analogy to the Industrial Revolution, where a lot of physical tasks became automated, it didn't mean that human labor was superfluous, because we don't take powerful physical machines like cranes or whatever and allow them to operate unsupervised, right?
So with those physical tasks that became automated, the meaning of what labor is, is now all about the supervision of those physical machines that are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case. What jobs might mean in a future with cognitive automation is primarily around the supervision of AI systems.
And so for us, that's a, that's a very positive view. We think that for the most part, that will still be fulfilling jobs in certain sectors. There might be catastrophic impacts, but it's not that across the board you're gonna have drop-in replacements for human workers that are gonna make human jobs obsolete. We don't really see that happening, and we also don't see this happening in the space of a few years.
We talk a lot about what are the various sources of inertia that are built into the adoption of any new technology, especially general purpose technology like electricity. We talk about, again, another historic analogy where factories took several decades to figure out how to replace their steam boilers in a useful way with electricity, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And we, you know, we say that we have a, a few decades to, to make this transition and that, even when we do make the transition, it's not going to be as scary as a lot of people seem to think.

CINDY COHN: So let's say we're living in the future, the Arvind future where we've gotten all these AI questions, right. What does it look like for, you know, the average person or somebody doing a job?

ARVIND NARAYANAN: Sure. A few big things. I wanna use the internet as an analogy here. Uh, 20, 30 years ago, we used to kind of log onto the internet, do a task, and then log off. But now the internet is simply the medium through which all knowledge work happens, right? So we think that if we get this right in the future, AI is gonna be the medium through which knowledge work happens. It's kind of there in the background and automatically doing stuff that we need done without us necessarily having to go to an AI application and ask it something and then bring the result back to something else.
There is this famous definition of AI that AI is whatever hasn't been done yet. So what that means is that when a technology is new and it's not working that well and its effects are double-edged, that's when we're more likely to call it AI.
But eventually it starts working reliably and it kind of fades into the background and we take it for granted as part of our digital or physical environment. And we think that that's gonna happen with generative AI to a large degree. It's just gonna be invisibly making all knowledge work a lot better, and human work will be primarily about exercising judgment over the AI work that's happening pervasively, as opposed to humans being the ones doing, you know, the nuts and bolts of the thinking in any particular occupation.
I think another one is, uh, I hope that we will have gotten better at recognizing the things that are intrinsically human and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. So some folks, for instance, are saying, oh, let's automate government and replace it with a chatbot. Uh, you know, we point out that that's missing the point of democracy: if a chatbot is making decisions, it might be more efficient in some sense, but it's not in any way reflecting the will of the people. So whatever people's concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should. And, you know, maybe it will free up more human time to do the things that are intrinsically human and really matter, such as how do we govern ourselves and so forth.
Um, and maybe, if I can have one last thought around what this positive vision of the future looks like: I would go back to the very thing we started from, which is AI and education. I do think there's orders of magnitude more human potential to open up, and AI is not a magic bullet here.
You know, technology on the whole is only one small part of it, but I think as we more generally become wealthier and we have, you know, lots of different reforms, hopefully one of those reforms is going to be schools and education systems, uh, being much better funded, being able to operate much more effectively, and, you know, every child one day being able to perform, uh, as well as the highest-achieving children today.
And there's, there's just an enormous range. And so being able to improve human potential, to me is the most exciting thing.

CINDY COHN: Thank you so much, Arvind.

ARVIND NARAYANAN: Thank you Jason and Cindy. This has been really, really fun.

CINDY COHN:  I really appreciate Arvind's hopeful and correct idea that actually what most of us do all day isn't really reducible to something a machine can replace. That, you know, real life just isn't like a game of chess or, you know, uh, the, the test you have to pass to be a lawyer or, or things like that. And that there's a huge gap between, you know, the actual job and the thing that the AI can replicate.

JASON KELLEY:  Yeah, and he's really thinking a lot about how the debates around AI in general are framed at this really high level, which seems incorrect, right? I mean, it's sort of like asking if food is good for you, are vehicles good for you, but he's much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that, you know, people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to sort of learn what methods they can use to make AI work with you and for you and, and how to make it work for the application you're using it for.
It's not something you can just apply, you know, wholesale across anything which, which makes perfect sense, right? I mean, no one I think thinks that, but I think industries are plugging AI into everything or calling it AI anyway. And he's very critical of that, which I think is, is good and, and most people are too, but it's happening anyway. So it's good to hear someone who's really thinking about it this way point out why that's incorrect.

CINDY COHN:  I think that's right. I like the idea of normalizing AI and thinking about it as a general purpose tool that might be good for some things and, and it's bad for others, honestly, the same way computers are, computers are good for some things and bad for others. So, you know, we talk about vehicles and food in the conversation, but actually think you could talk about it for, you know, computing more broadly.
I also liked his response to the doomers, you know, pointing out that a lot of the harms that people are claiming will end the world kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. You know, he's not saying that it won't, but he's pointing out that, in cybersecurity for example, some of the AI methods which had been around for a while, he talked about fuzzing, but there are others, you know, that those techniques, while they were, you know, bad for old-school cybersecurity, actually have spurred greater protections in cybersecurity. And the lesson, one we learn all the time in security especially, is that the cat and mouse game is just gonna continue.
And anybody who thinks they've checkmated, either on the good side or the bad side, is probably wrong. And that I think is an important insight so that, you know, we don't get too excited about the possibilities of AI, but we also don't go all the way to the, the doomers side.

JASON KELLEY:  Yeah. You know, the normal technology thing was really helpful for me, right? It's something that, like you said with computers, is a tool that has applications in some cases and not others. I don't know if anyone thought, when the internet was developed, that it was going to end the world or save it. I guess some people might have thought either/or, but, you know, neither is true. Right? And you know, it's been many years now and we're still learning how to make the internet useful, and I think it'll be a long time before we've necessarily figured out how AI can be useful. But there's a lot of lessons we can take away from the growth of the internet about how to apply AI.
You know, my dishwasher, I don't think, needs to have wifi. I don't think it needs to have AI either. I'll probably end up buying one that has those things, because that's the way the market goes. But it seems like the way we've sort of, uh, figured out where the applications are for these different general purpose technologies in the past is just something we can continue to figure out for AI.

CINDY COHN:  Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don't have an open market for systems where you can decide, I don't want AI in my dishwasher, or I don't want surveillance in my television.
And that's a market problem. And one of the things that he said a lot is that, you know, “just add AI” doesn't solve problems with broken institutions. And I think it circles back to the fact that we don't have a functional market, we don't have real consumer choice right now. And so some of the fears about AI, and it's not just consumers, I mean worker choice, other things as well, really come from the problems in those systems, in the way power works in those systems.
If you just center this on the tech, you're kind of missing the bigger picture and also the things that we might need to do to address it. I wanted to circle back to what you said about the internet, because of course it reminds me of Barlow's Declaration of the Independence of Cyberspace, which, you know, has been interpreted by a lot of people as saying that the internet would magically make everything better. And, you know, Barlow told me directly that what he said was that by projecting a positive version of the online world and speaking as if it was inevitable, he was trying to bring it about, right?
And I think this might be another area where we do need to bring about a better future, um, and we need to posit a better future, but we also have to be clear-eyed about the, the risks and, you know, whether we're headed in the right direction or not, despite what we, what we hope for.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

 

“The worst thing” for online rights: An age-restricted grey web (Lock and Code S06E16)

11 August 2025 at 11:11

This week on the Lock and Code podcast…

The internet is cracking apart. It’s exactly what some politicians want.

In June, a Texas law that requires age verification on certain websites withstood a legal challenge brought all the way to the US Supreme Court. It could be a blueprint for how the internet will change very soon.

The law, titled HB 1181 and passed in 2023, places new requirements on websites that portray or depict “sexual material harmful to minors.” Under the law, the owners or operators of websites that contain images or videos or illustrations or descriptions, “more than one-third of which is sexual material harmful to minors,” must now verify the age of their website’s visitors, at least in Texas. Similarly, this means that Texas residents visiting adult websites (or websites meeting the “one-third” definition) must now go through some form of online age verification to watch adult content.

The law has obvious appeal from some groups, which believe that, similar to how things like alcohol and tobacco are age-restricted in the US, so, too, should there be age restrictions on pornography online.

But many digital rights advocates believe that online age verification is different because the current methods used for online age verification could threaten privacy, security, and anonymity online.

As Electronic Frontier Foundation, or EFF, wrote in June:

“A person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. This leaves users highly vulnerable to data breaches and other security harms.”

Despite EFF’s warnings, this age-restricted reality has already arrived in the UK, where residents are being age-locked out of increasingly more online services because of the country’s passage of the Online Safety Act.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Jason Kelley, activism director at EFF and co-host of the organization’s podcast “How to Fix the Internet,” about the security and privacy risks of online age verification, why comparisons to age restrictions that are cleared with a physical ID are not accurate, and the creation of what Kelley calls “the grey web,” where more and more websites—even those that are not harmful to minors—get placed behind online age verification models that could collect data, attach it to your real-life identity, and mishandle it in the future.

“This is probably the worst thing in my view that has ever happened to our rights online.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Podcast Episode: Smashing the Tech Oligarchy

30 July 2025 at 03:05

Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more.


(You can also find this episode on the Internet Archive and on YouTube.) 

Kara Swisher has been documenting the internet’s titans for almost 30 years through a variety of media outlets and podcasts. She believes that with adequate regulation we can keep people safe online without stifling innovation, and we can have an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs. 

In this episode you’ll learn about:

  • Why it’s so important that tech workers speak out about issues they want to improve and work to create companies that elevate best practices
  • Why completely unconstrained capitalism turns technology into weapons instead of tools
  • How antitrust legislation and enforcement can create a healthier online ecosystem
  • Why AI could either bring abundance for many or make the very rich even richer
  • The small online media outlets still doing groundbreaking independent reporting that challenges the tech oligarchy 

Kara Swisher is one of the world's foremost tech journalists and critics, and currently hosts two podcasts: On with Kara Swisher and Pivot, the latter co-hosted by New York University Professor Scott Galloway. She's been covering the tech industry since the 1990s for outlets including the Washington Post, the Wall Street Journal, and the New York Times; she is a New York Magazine editor-at-large, a CNN contributor, and cofounder of the tech news sites Recode and All Things Digital. She has also authored several books, including “Burn Book” (Simon & Schuster, 2024), in which she documents the history of Silicon Valley and the tech billionaires who run it.

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

KARA SWISHER: It's a tech that's not controlled by a small group of homogeneous people. I think that's pretty much it. I mean, and there's adequate regulation to allow for people to be safe and at the same time, not too much in order to be innovative and do things – you don't want the government deciding everything.
It's a place where the internet, which was started by US taxpayers, which was paid for, is beneficial for people, and that there's transparency in it, and that we can see what's happening and what's doing. And again, the concentration of power in the hands of a few people really is at the center of the problem.

CINDY COHN: That's Kara Swisher, describing the balance she'd like to see in a better digital future. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation

JASON KELLEY: And I'm Jason Kelley -- EFF's Activism Director. You're listening to How to Fix the Internet.

CINDY COHN: This show is about envisioning a better digital future that we can all work towards.

JASON KELLEY: And we are excited to have a guest who has been outspoken in talking about how we get there, pointing out the good, the bad and the ugly sides of the tech world.

CINDY COHN: Kara Swisher is one of the world's foremost tech journalists and critics. She's been covering the industry since the 1990s, and she currently hosts two podcasts: On with Kara Swisher and Pivot, and she's written several books, including last year's Burn Book where she documents the history of Silicon Valley and the tech billionaires who run it.
We are delighted that she's here. Welcome, Kara.

KARA SWISHER: Thank you.

CINDY COHN: We've had a couple of tech critics on the podcast recently, and one of the kind of themes that's come up for us is you kind of have to love the internet before you can hate on it. And I've heard you describe your journey that way as well. And I'd love for you to talk a little bit about it, because you didn't start off, really, looking for all the ways that things have gone wrong.

KARA SWISHER: I don't hate it. I don't. It's just, you know, I have eyes and I can see, you know, I mean, uh, one of the expressions I always use is you should, um, believe what you see, not see what you believe. And so I always just, that's what's happening. You can see it happening. You can see the coarsening of our dialogue now offline being affected by online. You could just see what's happened.
But I still love the possibilities of technology and the promise of it. And I think that's what attracted me to it in the first place, and it's a question of how you use it, as a tool or a weapon. And so I always look at it as a tool, and some people have taken a lot of these technologies and used them as a weapon.

CINDY COHN: So what was that moment? Did you, do you have a moment when you decided you were really interested in tech and that you really found it to be important and worth devoting your time to?

KARA SWISHER: I was always interested in it because I had studied propaganda and the uses of TV and radio and stuff. So I was always interested in media, and this was the media on steroids. And so I recall downloading an entire book onto my computer and I thought, oh, look at this. Everything is digital. And so the premise that I came to at the time, or the idea I came to was that everything that can be digitized would be digitized, and that was a huge idea because that means entire industries would change.

CINDY COHN: Yeah.

JASON KELLEY: Kara, you started by talking about this concentration of power, which is obvious to anyone who's been paying attention. And at the same time, you know, we did use to have tech leaders who, I think, had less power. It was less concentrated, but also people were more focused, I think, on solving real problems.
You know, you talk a lot about Steve Jobs. There was a goal of improving people's lives with technology; it didn't hurt that it helped the bottom line, but the focus wasn't just on quarterly profits. And I wonder if you can talk a little bit about what you think it would look like if we returned to that in some way. Is that gone?

KARA SWISHER: I don't think we were there. I think they were always focused on quarterly profits. I think that was a canard. I wrote about it, that they would pretend that they were here to help. You know, it's sort of like the Twilight Zone episode To Serve Man. It's a cookbook. I always thought it was a cookbook for these people.
And they were always formulated in terms of making money and maximizing value for their shareholders, which was usually themselves. I wasn't stupid. I understood what they were doing, especially when these stocks went to the moon, especially the early internet days and their first boom. And they became instant, instant-airs, I think they were called that, which was instant millionaires and, and then now beyond that.
And so I was always aware of the money, even if they pretended they weren't; they were absolutely aware. And so I don't have a romantic version of this at the beginning, um, except among a small group of people, you know, who were seeing it, like the Whole Earth Catalog and things like that, who were looking at it as a way to bring everybody together or to spread knowledge throughout the world, which I also believed in too.

JASON KELLEY: Do you think any of those people are still around?

KARA SWISHER: No, they’re dead.

JASON KELLEY: I mean, literally, you know, they're literally dead, but are there any heirs of theirs?

KARA SWISHER: No, I mean, I don't think they had any power. I think that some of the theoretical stuff was about that, but no, they didn't have any power. The people that had power were the Mark Zuckerbergs, the Googles, and even, you know, the Microsofts. I mean, Bill Gates is kind of the exemplification of all that, as he took other people's ideas and made them into an incredibly powerful company, and everybody else sort of followed suit.

JASON KELLEY: And so mostly for you, the concentration of power is the biggest shift that's happened and you see regulation or, you know, anti-competitive moves as ways to get us back.

KARA SWISHER: We don't have any, like, if we had any laws, that would be great, but we don't have any that constrain them. And now under President Trump, there's not gonna be any rules around AI, probably. There aren't gonna be any significant rules, at least, around any of it.
So they, the first period, which was the growth of where we are now, was not constrained in any way, and now it's not just not constrained, but it's helping whether it's cryptocurrency or things like that. And so I don't feel like there's any restrictions, like at this point, in fact, there's encouragement by government to do whatever you want.

CINDY COHN: I think that's a really big worry. And you know, I think you're aware, as are we, that, you know, just because somebody comes in and says they're gonna do something about a problem with legislation doesn't mean that they're actually going to do that. And I think sometimes we feel like we sit in this space where we're like, we agree with you on the harm, but this thing you wanna do is a terrible idea. And trying to get the means and the ends connected is kind of a lot of where we live sometimes. And I think you've seen that as well, that once you've articulated the harm, that's kind of the start of the journey about whether the thing that you're talking about doing will actually meet that moment.

KARA SWISHER: Absolutely. The harms, they don't care about, that's the issue. And I think I was always cognizant of the harms, and that can make you seem like, you know, a killjoy of some sort. But it's not, it's just saying, wow, if you're gonna do this social media, you better pay attention to this or that.
They acted like the regular problems that people had didn't exist in the world, like racism, you know, sexism. They said, oh, that can be fixed, and they never offered any solutions, and then they created tools that made it worse.

CINDY COHN: I feel like the people who thought that we could really use technology to build a better world, I, I don't think they were wrong or naive. I just think they got stomped on by the money. Um, and, you know, uh.

KARA SWISHER: Which inevitably happens.

CINDY COHN: It does. And the question is, how do you squeeze out something, you know, given that this is the dynamic of capitalism, how do you squeeze out space for protecting people?
And we've had times in our society when we've done that better, and we've done that worse. And I feel like there are ways in which this is as bad as it's gotten in my lifetime. You know, with the government actually coming in really strongly on the side of empowering the powerful and disempowering the disempowered.
I see competition as a way to do this. EFF was, you know, it was primarily an organization focused on free speech and privacy, but we kind of backed into talking about competition 'cause we felt like we couldn't get at any of those problems unless we talked about the elephant in the room.
And I think you think about it really on the individual level, you know, you know all these guys, and on that very individual level, what kinds of things will, um, impact them.
And I'm wondering if you have some thoughts about the kinds of rules or regulations that might actually, you know, have an impact and not, not turn into, you know, yet another cudgel that they get to wield.

KARA SWISHER: Well any, any would be good. Like I don't, I don't, there isn't any, there isn't any you could speak of that's really problematic for them, except for the courts which are suing over antitrust issues or some regulatory agencies. But in general, what they've done is created an easy glide path for themselves.
I mean, we don't have a national privacy regulation. We don't have algorithmic transparency bills. We don't have data protection, really, to speak of for people. We don't have, you know, transparency into the data they collect. You know, we have more rules and laws on airplanes and cigarettes and everybody else, but we don't have any here. So, you know, antitrust is a whole other area, of changing our antitrust rules. So these are all areas that have to be looked at. But we haven't, they haven't passed a thing. I mean, lots of legislators have tried, but, um, it hasn't worked really.

CINDY COHN: You know, a lot of our supporters are people who work in tech but aren't necessarily, you know, the tech giants. They're not the tops of these companies, but they work in the companies.
And one of the things that I, you know, I don't know if you have any insights if you've thought about this, but we speak with them a lot and they're dismayed at what's going on, but they kind of feel powerless. And I'm wondering if you have thoughts like, you know, speaking to the people who aren't, who aren't the Elons and the, the guys at the top, but who are there, and who I think are critical to keeping these companies going. Are there ways that they can make their voices heard that you've thought of that would, that might work? I guess I, I'm, I'm pulling on your insight because you know the actual people.

KARA SWISHER: Yeah, you know, speak out. Just speak out. You know, everybody gets a voice these days and there's all kinds of voices that never would've gotten heard and to, you know, talk to legislators, involve customers, um, create businesses where you do those good practices. Like that's the best way to do it is create wealth and capitalism and then use best practices there. That to me is the best way to do that.

CINDY COHN: Are there any companies that you look at from where you sit that you think are doing a pretty good job or at least trying? I don't know if you wanna call anybody out, but, um, you know, we see a few, um, and I kind of feel like all the air gets sucked out of the room.

KARA SWISHER: In bits and pieces. In bits and pieces, you know, Apple's good on the privacy thing, but then it's bad on a bunch of other things. Like you could, like, you, you, the problem is, you know, these are shareholder driven companies and so they're gonna do what's best for them and they could, uh, you know, wave over to privacy or wave over to, you know, more diversity, but they really are interested in making money.
And so I think the difficulty is figuring out, you know, do they have duties as citizens or do they just have duties as corporate citizens? And so that's always been a difficult thing in our society and will continue to be.

CINDY COHN: Yeah.

JASON KELLEY: We've always at EFF really stood up for the user in, in this way where sometimes we're praising a company that normally people are upset with because they did a good thing, right? Apple is good on privacy. When they do good privacy things we say, that's great. You know, and if Apple makes mistakes, we say that too.
And it feels like, um, you know, we're in the middle of, I guess, a “tech lash.” I don't know when it started. I don't know if it'll ever end. I don't know if that's even a real term in terms of, like, you know, tech journalism. But do you find that it's difficult to get people to accept sort of, like, any positive praise for companies that are often, at this point, completely easy to ridicule for all the mistakes they've made?

KARA SWISHER: I think the tech journalism has gotten really strong. It's gotten, I mean, just look at the DOGE coverage. I'll point to WIRED as a good example, as they've done astonishing stuff. I think a lot of people have done a lot on, uh, you know, the abuses of social media. I think they've covered a lot of issues from the overuse of technology to, you know, all the crypto stuff. It doesn't mean people follow along, but they've certainly been there and revealed a lot of the flaws there. Um, while also covering it as, like, this is what's happening with AI. Like, this is what's happening, here's where it's going. And so you have to cover it as a thing. Like, this is what's being developed. But then there's, uh, others, you know, who have to look into the real problems.

JASON KELLEY: I get a lot of news from 404 Media, right?

KARA SWISHER: Yeah, they’re great.

JASON KELLEY: That sort of model is relatively new and it sort of sits against some of these legacy models. Do you see, like, a growing role for things like that in a future?

KARA SWISHER: There's lots of different things. I mean, I came from that world, as you mention, part of the time, although I got away from it pretty quickly, but some of 'em are doing great. It just depends on the story, right? Some of the stories are great. You know, there's a ton of people at the Times who have done great stuff on lots of things around kids and abuses and social media.
At the same time, there's all these really exciting young, not necessarily young, actually, um, independent media companies, whether it's Casey Newton at Platformer, or Eric Newcomer covering VCs, or 404. There's all this really interesting new stuff that's doing really well. WIRED is another one that's really seen a lot of bounce back under its current editor, who just came on relatively recently.
So it just depends. It depends on where it is, but the Verge does a great job. But I think it's individually the stories; there's no, like, big name in this area. There's just a lot of people, and then there's all these really interesting experts or people who work in tech who've written a lot. That is always very interesting to me, too. It's interesting to hear from insiders what they think is happening.

CINDY COHN: Well, I'm happy to hear this, this optimism, 'cause I worry a lot about, you know, the way that the business model for media has really been hollowed out, and then seeing things like, you know, uh, some of the big broadcast news people folding.

KARA SWISHER: Yeah, but broadcast never did journalism for tech, come on. Like, some did, I mean, one or two, but it wasn't them who was doing it. It was usually, you know, either the New York Times or these smaller institutions have been doing a great job. There's just been tons and tons of different things, completely different things.

JASON KELLEY: What do you think about the fear, maybe I'm, I'm misplacing it, maybe it's not as real as I imagine it is. Um, that results from something like a Gawker situation, right. You know, you have wealthy people.

KARA SWISHER: That was a long time ago.

JASON KELLEY: It was, but it, you know, a precedent was sort of set, right? I mean, do you think people in working in tech journalism can take aim at, you know, individual people that have a lot of power and wealth in, in the same way that they could before?

KARA SWISHER: Yeah. I think they can, if they're accurate. Yeah, absolutely.

CINDY COHN: Yeah, I think you're a good exhibit A for that, you pull no punches and things are okay. I mean, we get asked sometimes, um, you know, are, are you ever under attack because of your, your sharp advocacy? And I kind of think your sharp advocacy protects you as long as you're right. And I think of you as somebody who's also in, in a bit of that position.

KARA SWISHER: Mmhm.

CINDY COHN: You may say this is inevitable, but I I wanted to ask you, you know, I feel like when I talk with young technical people, um, they've kind of been poisoned by this idea that the only way you can be successful is, is if you're an asshole.
That there's no, there's no model, um, that just goes to the deal. So if they want to be successful, they have to be just an awful person. And so even if they might have thought differently beforehand, that's what they think they have to do. And I'm wondering if you run into this as well, and I sometimes find myself trying to think about, you know, alternate role models for technical people, and if you have any that you think of.

KARA SWISHER: Alternate role models? It's mostly men. But there are, there's all kinds of, like, I just did an interview with Lisa Su, who's head of AMD, one of the few women CEOs. And in AI, there's a number of women, uh, you know, you don't necessarily have to have diversity to make it better, but it sure helps, right? Because people have a different, not just diversity of gender or diversity of race, but diversity of backgrounds, politics. You know, the more diverse you are, the better products you make, essentially. That's my always been my feeling.
Look, most of these companies are the same as it ever was, and in fact, there's fewer different people running them, essentially. Um, but you know, that's always been the nature of, of tech essentially, that it was sort of a, a man's world.

CINDY COHN: Yeah, I see that as well. I just worry that young people or junior people coming up think that the only way that you can be successful is a, if you look like the guys who are already successful, but also, you know, if you're just kind of not, you know, if you're weird and not nice.

KARA SWISHER: It just depends on the person. It's just that when you get that wealthy, you have a lot of people licking you up and down all day, and so you end up in the crazy zone like Elon Musk, or the arrogant zone like Mark Zuckerberg or whatever. It's just they don't get a lot of pushback, and when you don't get a lot of friction, you tend to think everything you do is correct.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards, where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 10th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast. Have a listen to this: [WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Kara Swisher.

CINDY COHN: I mean, you watched all these tech giants kind of move over to the Trump side and then, you know, stand there on the inauguration. It sounds like you thought that might've been inevitable.

KARA SWISHER: I said it was inevitable; they were all surprised. They're always surprised when I'm like, Elon's gonna crack up with the president. Oh look, they cracked up. It's not hard to follow these people. In his case, he's, he's personally, there's something wrong with his head, obviously. He always cracks up with people. So that's what happened here.
In that case, they just wanted things. They want things. You think they liked Donald Trump? You're wrong there, I'll tell you. They don't like him. They need him. They wanna use him, and they were irritated by Biden 'cause he presumed to push back on them, and he didn't do a very good job of it, honestly. But they definitely want things.

CINDY COHN: I think the tech industry came up at a time when deregulation was all the rage, right? So in some ways they were kind of born into a world where regulation was anathema, and they took full advantage of the situation.
As did lots of other areas that got deregulated or were not regulated in the first place. But I think tech, because of timing in some ways, tech was really born into this zone. And there were some good things for it too. I mean, you know, EFF was successful in the nineties at making sure that the internet got First Amendment protection, that we didn't go to the other side with things like the Communications Decency Act and squelch any adult material from being put online and reduce everything to the side. But getting that right and kind of walking through the middle ground, where you have regulation that supports people but doesn't squelch them, is just an ongoing struggle.

KARA SWISHER: Mm-hmm. Absolutely.

JASON KELLEY: I have this optimistic hope that these companies and their owners sort of crumble as they continue to, as Cory Doctorow says, enshittify, right? The only reason they don't crumble is that they have this lock-in with users. They have this monopoly power, but you see, you know, a TikTok pops up and suddenly Instagram has a real competitor, not because rules have been put in place to change Instagram, but because a different, new, maybe better platform came along.

KARA SWISHER: There’s nothing like competition, making things better. Right? Competition always helps.

JASON KELLEY: Yeah, when I think of competition law, I think of crushing companies, I think of breaking them up. But what do you think we can do to make this sort of world better and more fertile for new companies? You know, you talked earlier about tech workers.

KARA SWISHER: Well, you have to pass those things where they don't get to do that. Antitrust is the best way to do that, right? But those things move really slowly, unfortunately. And, you know, good antitrust legislation and antitrust enforcement, that's happening right now. But it opens up, I mean, the reason Google exists is 'cause of the antitrust actions around Microsoft.
And so we have to like continue to press on things like that and continue to have regulators that are allowed to pursue cases like that. And then at the same time have a real focus on creating wealth. We wanna create wealth, we wanna create, we wanna give people breaks.
We wanna have the government involved in funding some of these things, making it so that small companies don't get run over by larger companies.
Not letting power concentrate into a small group of people. When that happens, that's what happens. You end up with less companies. They kill them in the crib, these companies. And so not letting things get bought, have a scrutiny over things, stuff like that.

CINDY COHN: Yeah, I think a lot more merger review makes a lot of sense. I think a lot of thinking about, how are companies crushing each other and what are the things that we can do to try to stop that? Obviously we care a lot about interoperability, making sure that technologies that, that have you as a customer don't get to lock you in, and make it so that you're just stuck with their broken business model and can do other things.
There's a lot of space for that kind of thing. I mean, you know, I always tell the story, I'm sure you know this, that, you know, if it weren't for the FCC telling AT&T that they had to let people plug something other than phones into the wall, we wouldn't have had the internet, you know, the home internet revolution anyway.

KARA SWISHER: Right. Absolutely. 100%.

CINDY COHN: Yeah, so I think we are in agreement with you that, you know, competition is really central, but it's, you know, it's kind of an all of the above and certainly around privacy issues. We can do a lot around this business model. Which I think is driving so many of the other bad things that we are seeing, um, with some comprehensive privacy law.
But boy, it sure feels like right now, you know, we got two branches of government that are not on board with that, and the third one kind of doing okay, but not, you know, and the courts were doing okay, but slowly and inconsistently. Um, where do you see hope? Where are you looking?

KARA SWISHER: I mean, some of this stuff around AI could be really great for humanity, or it could be great for a small amount of people. That's really, you know, which one do we want? Do we want this technology to be a tool or a weapon against us? Do we want it to be in the hands of bigger companies or in the hands of all of us and we make decisions around it?
Will it help us be safer? Will it help us cure cancer, or is it gonna just make a rich person a billion dollars richer? I mean, it's the age-old story, isn't it? This is not a new theme in America, where the rich get richer and the poor get less. And so these technologies could, as you know, there was recently a book out all about abundance.
It could create lots of abundance. It could create lots of interesting new jobs, or it could just put people outta work and let the people who are richer get richer. And I don't think that's a society we wanna have. And years ago I was talking about income inequality with a really wealthy person, and I said, you have to do something about, you know, the fact that we don't have a $25 minimum wage, which I think would help a lot; lots of innovation would come from that. If people made more money, they'd have a little more choices. And it's worth the investment in people to do that.
And I said, we have to either deal with income inequality or armor plate your Tesla. And I think he wanted to armor plate his Tesla, and then of course, the Cybertruck comes out. So there you have it. But, um, I think they don't care about that kind of stuff. You know, they're happy to create those little worlds where they're highly protected, but it's not a world I wanna live in.

CINDY COHN: Kara, thank you so much. We really appreciate you coming in. I think you sit in such a different place in the world than where we sit, and it's always great to get your perspective.

KARA SWISHER: Absolutely. Anytime. You guys do amazing work, and you know you're doing amazing work, and you should always keep a watch on these people. You shouldn't be against everything, 'cause some people are right. But you certainly should keep a watch on people.

CINDY COHN: Well, great. We, we sure will.

JASON KELLEY: Yeah, we'll keep doing it. Thank you.

CINDY COHN: Thank you.

KARA SWISHER: All right. Thank you so much.

CINDY COHN: Well, I always appreciate how Kara gets right to the point about how the concentration of power among a few tech moguls has led to so many of the problems we face online, and how competition, along with some of the things we so often hear about, like real laws requiring transparency, privacy protections, and data protections, can help shift the tide.

JASON KELLEY: Yeah, you know, some of these fixes are things that people have been talking about for a long time and I think we're at a point where everyone agrees on a big chunk of them. You know, especially the ones that we promote like competition and transparency oftentimes, and privacy. So it's great to hear that Kara, who's someone that, you know, has worked on this issue and in tech for a long time and thought about it and loves it, as she said, you know, agrees with us on some of the, some of the most important solutions.

CINDY COHN: Sometimes these criticisms of the tech moguls can feel like something everybody does, but I think it's important to remember that Kara was really one of the first ones to start pointing this out. And I also agree with you, you know, she's a person who comes from the position of really loving tech. And Kara's even a very strong capitalist. She really loves making money as well. You know, her criticism comes from a place of betrayal, that, again, like Molly White, earlier this season, kind of comes from a position of, you know, seeing the possibilities and loving the possibilities, and then seeing how horribly things are really going in the wrong direction.

JASON KELLEY: Yeah, she has this framing of, is it a tool or a weapon? And it feels like a lot of the tools that she loved became weapons, which I think is how a lot of us feel. You know, it's not always clear how to draw that line. But it's obviously a good question that people, you know, working in the tech field, and I think people even using technology should ask themselves, when you're really enmeshed with it, is the thing you're using or building or promoting, is it working for everyone?
You know, what are the chances, how could it become a weapon? You know, this beautiful tool that you're loving and you have all these good ideas and, you know, ideas that, that it'll change the world and improve it. There's always a way that it can become a weapon. So I think it's an important question to ask and, and an important question that people, you know, working in the field need to ask.

CINDY COHN: Yeah. And I think that, you know, that's the gem of her advice to tech workers. You know, find a way to make your voice heard if you see this happening. And there's a power in that. I do think that one thing that's still true in Silicon Valley is they compete for top talent.
And, you know, top talent indicating that they're gonna make choices based on some values is one of the levers of power. Now I don't think anybody thinks that's the only one. This isn't an individual responsibility question. We need laws, we need structures. You know, we need some structural changes in antitrust law and elsewhere in order to make that happen. It's not all on the shoulders of the tech workers, but I appreciate that she really did say, you know, there's a role to be played here. You're not just pawns in this game.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program for Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

Podcast Episode: Finding the Joy in Digital Security

16 July 2025 at 03:05

Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning, and retain more knowledge, when they're having a good time. It doesn’t mean the topic isn’t serious – it’s just about intentionally approaching a serious topic with joy.


(You can also find this episode on the Internet Archive and on YouTube.) 

That’s how Helen Andromedon approaches her work as a digital security trainer in East Africa. She teaches human rights defenders how to protect themselves online, creating open and welcoming spaces for activists, journalists, and others at risk to ask hard questions and learn how to protect themselves against online threats. She joins EFF’s Cindy Cohn and Jason Kelley to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world. 

In this episode you’ll learn about:

  • How the Trump Administration’s shuttering of the United States Agency for International Development (USAID) has led to funding cuts for digital security programs in Africa and around the world, and why she’s still optimistic about the work
  • The importance of helping women feel safe and confident about using online platforms to create positive change in their communities and countries
  • Cultivating a mentorship model in digital security training and other training environments
  • Why diverse input creates training models that are accessible to a wider audience
  • How one size never fits all in digital security solutions, and how Dungeons & Dragons offers lessons to help people retain what they learn 

Helen Andromedon – a moniker she uses to protect her own security – is a digital security trainer in East Africa who helps human rights defenders learn how to protect themselves and their data online and on their devices. She played a key role in developing the Safe Sisters project, which is a digital security training program for women. She’s also a UX researcher and educator who has worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women’s Development Fund. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

HELEN ANDROMEDON: I'll say it bluntly. Learning should be fun. Even if I'm learning about your tool, maybe you design a tutorial that is fun for me to read through, to look at. It seems like that helps with knowledge retention.
I've seen people responding to activities and trainings that are playful. And yet we are working on a serious issue. You know, we are developing an advocacy campaign, it's a serious issue, but we are also having fun.

CINDY COHN: That's Helen Andromedon talking about the importance of joy and play in all things, but especially when it comes to digital security training. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: This show is all about envisioning a better digital world for everyone. Here at EFF, we often specialize in thinking about worst case scenarios and of course, jumping in to help when bad things happen. But the conversations we have here are an opportunity to envision the better world we can build if we start to get things right online.

JASON KELLEY: Our guest today is someone who takes a very active role in helping people take control of their digital lives and experiences.

CINDY COHN: Helen Andromedon - that's a pseudonym by the way, and a great one at that – is a digital security trainer in East Africa. She trains human rights defenders in how to protect themselves digitally. She's also a UX researcher and educator, and she's worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women's Development Fund.
She also played a key role in developing the Safe Sisters project, which is a digital security training, especially designed for women. Welcome Helen. Thank you so much for joining us.

HELEN ANDROMEDON: Thanks for having me. I've been a huge fan of the tools that came out of EFF and working with Ford Foundation. So yeah, it's such a blast to be here.

CINDY COHN: Wonderful. So we're in a time when a lot of people around the world are thinking more seriously than ever about how to protect their privacy and security. and that's, you know, from companies, but increasingly from governments and many, many other potential bad actors.
You know, there's no one size fits all training, as we know. And the process of determining what you need to protect and from whom you need to protect it is different for everybody. But we're particularly excited to talk to you, Helen, because you know that's what you've been doing for a very long time. And we want to hear how you think about, you know, how to make the resources available to people and make sure that the trainings really fit them. So can you start by explaining what the Safe Sisters project is?

HELEN ANDROMEDON: It's a program that came out of a collaboration amongst friends, but friends who were also working in different organizations and also were doing trainings. In the past, what would happen is we would send out an application: Hey, there's a training going on. But there was a different number of women that would actually apply to this fellowship.
It would always be very unequal. So what we decided to do, really kind of like experimenting, is say, what if we do a training but only invite women and people who are activists, people who are journalists, people who are really high risk, and give them a space to ask those hard questions. Because there are so many different things that come out of suffering online harassment and going through that in your life; when you need to share it, sometimes you do need a space where you don't feel judged, where you can kind of feel free to engage in really, really traumatic topics. So this fellowship was created, it had this unique percentage of people that would apply, and we started in East Africa.
I think now, because of what has happened in the last, I guess, three months, it has halted our ability to run the program in as many regions that need it. Um, but Safe Sister, I think what I see, it is a tech community of people who are able to train others or help others solve a problem.
So what problems do I mean? So, for example, I, I think I left my, my phone in the taxi. So what do I do? Um, how do I find my phone? What happens to all my data? Or maybe it could be a case of online harassment where there's some sort of revenge from the other side, from the perpetrator, trying to make the life of the victim really, really difficult at the moment.
So we needed people to be able to have solutions available to talk about and not just say, okay, you are a victim of harassment. What should I do? There's nothing to do, just go offline. No, we need to respond, but many of us don't have the background in ICT, uh, for example, in my region. I think that it is possible now to get a, a good background in IT or ICT related courses, um, up to, um, you know, up to PhD level even.
But sometimes I've, in working with Safe Sister, I've noticed that even such people might not be aware of the dangers that they are facing. Even when they know OPSEC and they're very good at it. They might not necessarily understand the risks. So we decided to keep working on the content each year, every time we can run the program, work on the content: what are the issues, currently, that people are facing? How can we address them through an educational fellowship, which is very, very heavy on mentorship. So mentorship is also a thing that we put a lot of stress on because again, we know that people don't necessarily have the time to take a course or maybe learn about encryption, but they are interested in it. So we want to be able to serve all the different communities and the different threat models that we are seeing.

CINDY COHN: I think that's really great and I, I wanna, um, drill in a couple of things. So first thing, you, uh, used ICT, Information and Communications Technologies. Um, but what I, uh, what I think is really interesting about your approach is the way the fellowship works. You know, you're kind of each one teach one, right?
You're bringing in different people from communities. And if you know, most of us, I think as a, as a model, you know, finding a trusted person who can give you good information is a lot easier than going online and finding information all by yourself. So by kind of seeding these different communities with people who've had your advanced training, you're really kind of able to grow who gets the information. Is that part of the strategy to try to have that?

HELEN ANDROMEDON: It's kind of like two ways. So there is the way where we, we want people to have the information, but also we want people to have the correct information.
Because there is so much available, you can just type in, you know, into your URL and say, is this VPN trusted? And maybe you'll, you'll find a result that isn't necessarily the best one.
We want people to be able to find the resources that are guaranteed by, you know, EFF or by an organization that really cares about digital rights.

CINDY COHN: I mean, that is one of the problems of the current internet. When I started out in the nineties, there just wasn't information. And now really the role of organizations like yours is sifting through the misinformation, the disinformation, just the bad information to really lift up, things that are more trustworthy. It sounds like that's a lot of what you're doing.

HELEN ANDROMEDON: Yeah, absolutely. How I think it's going, I think you, I mean, you mentioned that it's kind of this cascading wave of, you know, knowledge, you know, trickling down into the communities. I do hope that's where it's heading.
I do see people reaching out to me who have been at Safe Sisters, um, asking me, yo Helen, which training should I do? You know, I need content for this. And you can see that they're actively engaging still, even though they went through the fellowship like say four years ago. So that I think is like evidence that maybe it's kind of sustainable, yeah.

CINDY COHN: Yeah. I think so. I wanted to drill down on one other thing you said, which is of course, you mentioned the, what I think of as the funding cuts, right, the Trump administration cutting off money for a lot of the programs like Safe Sisters, around the world. and I know there are other countries in Europe that are also cutting, support for these kind of programs.
Is that what you mean in terms of what's happened in the last few months?

HELEN ANDROMEDON: Yeah. Um, it's really turned around what our expectations for the next couple of years say, yeah, it's really done so, but also there's an opportunity for growth to recreate how, you know, what kind of proposals to develop. It's, yeah, it's always, you know, these things. Sometimes it's always just a way to change.

CINDY COHN: I wanna ask one more question. I really will let Jason ask some at some point, but, um, so what does the world look like if we get it right? Like if your work is successful, and more broadly, the internet is really supporting these kind of communities right now, what does it look like for the kind of women and human rights activists who you work with?

HELEN ANDROMEDON: I think that most of them would feel more confident to use those platforms for their work. So that gives it an extra boost because then they can be creative about their actions. Maybe it's something, maybe they want, you know, uh, they are, they are demonstrating against, uh, an illegal and inhumane act that has passed through parliament.
So, online platforms: if it could be our right, and if we could feel the way we feel, you know, in the real world. So there's a virtual and a real world; you're walking on the road and you know you can touch things.
If we felt ownership of our online spaces, so that you feel confident to create something that maybe can change. So in, in that ideal world, it would be that the women can use online spaces to really, really boost change in their communities and have others do so as well, because you can teach others and you inspire others to do so. So it, like, pops up everywhere and really makes things go and change.
I think also for my context, because I've worked with people in very repressive regimes where it is, the internet can be taken away from you. So it's things like the shutdowns, it's just ripped away from you. Uh, you can no longer search, oh, I have this, you know, funny thing on my dog. What should I do? Can I search for the information? Oh, you don't have the internet. What? It's taken away from you. So if we could have a way where the infrastructure of the internet was no longer something that was, like, in the hands of just a few people, then I think – So there's a way to do that, which I've recently learned from speaking to people who work on these things. It's maybe a way of connecting to the internet to go on the main highway, which doesn't require the government, um, the roadblocks and maybe it could be a kind of technology that we could use that could make that possible. So there is a way, and in that ideal world, it would be that, so that you can always find out, uh, what that color is and find out very important things for your life. Because the internet is for that, it's for information.
Online harassment, that one. I, I, yeah, I really would love to see the end of that. Um, just because, so also acknowledging that it's something that has shown us, as human beings, something that we do, which is not being very kind to others. So it's a difficult thing. What I would like to see is that in this future, we have researched it, we have very good data, we know how to avoid it completely. And then we also draw the parameters, so that everybody, when something happens to you that doesn't make you feel good, which is like somebody harassing you, that also you are heard. Because in some contexts, uh, even when you go to report to the police and you say, look, this happened to me, sometimes they don't take it seriously. But because of what happens to you after and the trauma, yes, it is important. It is important and we need to recognize that. So it would be a world where you can see it, you can stop it.

CINDY COHN: I hear you and what I hear is that, that the internet should be a place where it's, you know, always available, and not subject to the whims of the government or the companies. There's technologies that can help do that, but we need to make them better and more widely available. That speaking out online is something you can do. And organizing online is something you can do. Um, but also that you have real accountability for harassment that might come as a response. And that could be, you know, technically protecting people, but also I think that sounds more like a policy and legal thing where you actually have resources to fight back if somebody, you know, misuses technology to try to harass you.

HELEN ANDROMEDON: Yeah, absolutely. Because right now the cases get to a point where it seems like depending on the whim of the person in charge, maybe if they go to, to report it, the case can just be dropped or it's not taken seriously. And then people do harm to themselves also, which is on, like, the extreme end and which is something that's really not, uh, nice to happen and should, it shouldn't happen.

CINDY COHN: It shouldn't happen, and I think it is something that disproportionately affects women who are online or marginalized people. Your vision of an internet where people can freely gather together and organize and speak is actually available to a lot of people around the world, but, but some people really don't experience that without tremendous blowback.
And that's, um, you know, that's some of the space that we really need to clear out so that it's a safe space to organize and make your voice heard for everybody, not just, you know, a few people who are already in power or have the, you know, the technical ability to protect themselves.

JASON KELLEY: We really want to, I think, help talk to the people who listen to this podcast and really understand and are building a better future and a better internet. You know, what kind of things have you seen when you train people? What are you thinking about when you're building these resources and these curriculums? What things come up, like, over and over that maybe people who aren't as familiar with the problems you've seen or the issues you've experienced wouldn't expect?

HELEN ANDROMEDON: Yeah, I mean, hmm, maybe there could be a couple of reasons, I think, um. What would be my view is, the thing that comes up in trainings is, of course, you know, hesitation. There's this new thing and I'm supposed to download it. What is it going to do to my laptop?
My God, I share this laptop. What is it going to do? Now they tell me, do this, do this in 30 minutes, and then we have to break for lunch. So that's not enough time to actually learn, because then you have to practice. Or you could practice, you could throw in a practice session, but then you leave this person, and that person, as is normal, forgets. Very normal. It happens.
So the issue sometimes is that kind of, like, hesitation to play with the tech toys. And I think that it's good to be, because we are cautious and we want to protect this device that was really expensive to get. Maybe it's borrowed, maybe it's secondhand.
I won't get into, you know, like so many things that come up in our day to day because of the cost of things.

JASON KELLEY: You mentioned like what do you do when you leave your phone in a taxi? And I'll say that, you know, a few days ago I couldn't find my phone after I went somewhere and I completely freaked out. I know what I'm doing usually, but I was like, okay, how do I turn this thing off?
And I'm wondering like that taxi scenario, is that, is that a common one? Are there, you know, others that people experience there? I, I know you mentioned, you know, internet shutoffs, which happen far too frequently, but a lot of people probably aren't familiar with them. Is that a common scenario? You have to figure out what to do about, like, what are the things that pop up occasionally that, people listening to this might not be as aware of.

HELEN ANDROMEDON: So losing a device or a device malfunctioning is like the top one, and internet shutdowns are down here, because they're periodic. Usually it's when there's an election cycle, that's when it happens. After that, you know, you sometimes have almost a hundred percent of access back. So I think I would put losing a device, destroying a device.
Okay, now what do I do for the case of the taxi? The phone in the taxi. First of all, the taxi is probably crowded. So you think that phone will not be returned, most likely.
So maybe there's intimate photos. You know, there's a lot, there's a lot that, you know, can be. So then if this person doesn't have a great password, which is usually the case because there is not so much emphasis when you buy a device. There isn't so much emphasis on, Hey, take time to make a strong password now. Now it's better. Now obviously there are better products available that teach you about device security as you are setting up the phone. But usually you buy it, you switch it on, so you don't really have the knowledge. This is a better password than that. Or maybe don't forget to put a password, for example.
So that person responding to that case would be now asking if they had maybe the Find My Device app, if we could use that, if that could work. Like, as you were saying, there's a possibility that it might, uh, ping in another place and be noticed and for sure taken away. So it has to be kind of a backwards, a learning journey, to say, let's start from ground zero.

JASON KELLEY: Let's take a quick moment to say thank you to our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You are the reason we exist.
You can become a member for just $25 and for a little more, you can get some great, very stylish gear. The more members we have, the more power we have in state houses, courthouses and on the streets.
EFF has been fighting for digital rights for decades, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]
And now back to our conversation with Helen Andromedon.

CINDY COHN: So how do you find the people who come and do the trainings? How do you identify people who would be good fellows or who need to come in to do the training? Because I think that's its own problem, especially, you know, the Safe Sisters is very spread out among multiple countries.

HELEN ANDROMEDON: Right now it has been a combination of partners saying, Hey, we have an idea, and then seeing where the issues are.
As you know, a fellowship needs resources. So if there is an interest because of the methodology, at least, um, let's say it's a partner in Madagascar who is working on digital rights. They would like to make sure that their community, maybe staff and maybe people that they've given sub-grants to, so that entire community, they want to make sure that it is safe, they can communicate safely, nothing, you know, is leaked out, they can work well. And they're looking for, how do we do this? We need trainers, we need content, we need somebody who understands also learning, separate from the resources. So I think that the Safe Sister Fellowship also is something that, because it's like you can pick it up and you can design it in whatever context you have.
I think that has made it like be stronger. You take it, you make it your own. So it has happened like that. So a partner has an interest. We have the methodology, we have the trainers, and then we have the tools as well. And then that's how it happens.

CINDY COHN: What I'm hearing here is that, you know, there's already a pretty strong network of partners across Africa and the communities you serve. There's groups and, you know, we know this from EFF 'cause we hear from them as well, that there are actually a pretty well developed set of groups that are doing digital activism and human rights defenders using technology already across, uh, Africa and the rest of the communities. And that you have this network and you are the go-to people, uh, when people in the network realize they need a higher level of security thinking and training than they had. Does that sound right?

HELEN ANDROMEDON: Sound right? Yeah. A higher level of being aware. And usually it comes down to: how do we keep this information safe? Because we are having incidents. Yeah.

CINDY COHN: Do you have an incident that you could, you explain?

HELEN ANDROMEDON: Oh, um, queer communities, say, an incident of an executive director being kidnapped. And it was, we think, probably to do with how influential they were and what kind of message they were sending. So it, it's apparent. And then, so shortly after that incident, there's a break-in into the, the office space. Now that one is actually quite common, um, especially in the civic space. So that one then, uh, if they, they were storing maybe case files, um, everything was in a hard copy. All the information was there, receipts, checks, um, payment details. That is very, very tragic in that case.
So in that case, what we did, because this incident had happened in multiple places, we decided to run a program for all the staff that was, um, involved in their day to day. So we could do it like that and make sure that, as a response to what happened, everybody gets some education. We have some quizzes, we have some tests, we have some community. We keep engaged, and maybe that would help. And yeah, they'll be more prepared in case it happens again.

CINDY COHN: Oh yeah. And this is such an old, old issue. You know, when we were doing the encryption fight in the nineties, we had stories of people in El Salvador and Guatemala where the office gets raided and the information gets in the hands of the government, whoever the opposition is, and then other people start disappearing and getting targeted too, because their identities are revealed in the information that gets seized. And that sounds like the very same pattern that you're still seeing.

HELEN ANDROMEDON: Yeah there's a lot to consider for that case. Uh, cloud saving, um, we have to see if there's somebody that can, there's somebody who can host their server. It's very, yeah, it's, it's interesting for that case.

CINDY COHN: Yeah. I think it's an ongoing issue and there are better tools than we had in the nineties, but people need to know about them and, and actually using them is not, it's not easy. It's, you, you have to actually think about it.

HELEN ANDROMEDON: Yeah, I, I don't know. I've seen a model that works, so if it's a tool, it's great. It's working well. I've seen it, uh, with, I think, the Tor Project, because the Tor Project has user communities. What it appears to be doing is engaging people with training, so doing safety trainings, and then they get value from, from using your tool, because they get to have all this information, not only about your tool, but about safety. So that's a good model to build user communities and then get your tool used. I think this is also a problem.

CINDY COHN: Yeah. I mean, this is a, another traditional problem is that the trainers will come in and they'll do a training, but then nobody really is trained well enough to continue to use the tool.
And I see you, you know, building networks and building community and also having, you know, enough time for people to get familiar with and use these tools so that they won't just drop it after the training's over. It sounds like you're really thinking hard about that.

HELEN ANDROMEDON: Yeah. Um, yeah, I think that we have many opportunities and because the learning is so difficult to cultivate and we don't have the resources to make it long term. Um, so yes, you do risk having all the information forgotten. Yes.

JASON KELLEY: I wanna just quickly emphasize that some of the scenarios, Cindy, you've talked about, and Helen, you just mentioned, I think a lot of: potential break-ins, harassment, kidnapping. And it's, it's really, it's awful, but I think this is one of the things that makes this kind of training so necessary. I know that this seems obvious to many people listening and, and to the folks here, but I think it really just needs to be emphasized that these are serious issues. And that's why you can't make a one-size-fits-all training, because these are real problems that, you know, someone might not have to deal with in one country and they might have a regular problem with in another. Is there a kind of difference that you can just clarify about how you would train, for example, groups of women that are experiencing one thing when they, you know, need digital security advice or help, versus, let's say, human rights defenders? Is the training completely different when you do that, or is it just really kind of emphasizing the same things about, like, protecting your privacy, protecting your data, using certain tools, things like that?

HELEN ANDROMEDON: Yeah. Jason, let me, let me first respond to your first comment about the tools. So one size fits all, obviously, is wrong. Maybe get more people of diversity working on that tool and they'll give you their opinion, because development is a process. You don't just develop a tool - you have time to change, modify, test. Do I use that? Like, if you had somebody like that in the room, they would tell you. If you had two, that would be great, because now you have two different points of evidence. And keep mixing. And then, um, I know it's like it's expensive. Like, you have to do it one way and then get feedback, then do it another way. But I, I think just do more of that. Um, yeah. Um, how do I train? So the training isn't that different. There are some core concepts that we keep, and then, so if I had like five days, I would do like one or two days on the more technical, uh, concepts of digital safety, which everybody has to do, which is: look, this is my device, this is how it works, this is how I keep it safe. This is my account, this is how it works. This is how I keep it safe.
And then when you have more time, you can dive into the personas. Let's say it's a journalist, so is there a resource for them? And this is how you then pull a resource and show it: is there a resource which identifies specific tools developed for journalists? Oh, maybe there is, there is something that is like a panic button that they need. So then you start to put all these things together, and in the remaining time you can kind of, like, hone in on those differences.
Now for women, um, it would be … So if it's HRDs and it's mixed, I still would cover cyber harassment because it affects everyone. For women it would be slightly different, because maybe we could go into self-defense, we could go into how to deal with it, we could really hone in on the finer points of responding to online harassment, because for their case, it's more likely, because you did a threat model, it's more likely because of their agenda and because of the work that they do. So I think that would be how I would approach the two.

JASON KELLEY: And one, one quick thing that I just, I want to mention that you brought up earlier is, um, shared devices. There's a lot of, uh, solutionism in government, and especially right now with this sort of, assumption that if you just assume everyone has one device, if you just say everyone has their phone, everyone has their computer, you can, let's say, age verify people. You can say, well, kids who use this phone can't go to this website, and adults who use this other phone can go to this website. And this is a regular issue we've seen where there's not an awareness that people are buying secondhand devices a lot, people are sharing devices a lot.

HELEN ANDROMEDON: Yeah, absolutely. Shared devices is the assumption always. And then we do get a few people who have their own devices. So Jason, I just wanted to add one more factor for the shared devices: because of the context and the regions that I'm in, you also have the additional cultural and religious norms, which sometimes mean you don't have liberty over your devices. So anybody at any one time, if they're your spouse or your parent, they can just take it from you and demand that you let them in. So it's not necessarily that you could all have your own device, but the access to that device, it can be shared.

CINDY COHN: So as you look at the world of, kind of, tools that are available, where are the gaps? Where would you like to see better tools or different tools or tools at all, um, to help protect and empower the communities you work with?

HELEN ANDROMEDON: We need a solution for the internet shutdowns, because sometimes it could have health repercussions, you could have a serious need, and you don't have access to the internet. So I don't know. We need to figure that one out. Um, the technology is there, as you mentioned earlier, but you know, it needs to be, like, more developed and tested. It would be nice to have technology that responds or gives victims advice. Now I've seen interventions, case by case. So many people are doing them now. Um, you know, you're right. They verify, then they help you with whatever. But that's a slow process.
Um, you're processing the information. It's very traumatic. So you need good advice. You need to stay calm, think through your options, and then make a plan, and then do the plan. So that's the kind of advice. Now, maybe there are apps, but I'm not using them, or maybe that means they're not well known as of now.
Yeah. But that's technology I would like to see. Um, then also everything that is available, the good stuff, it's really good. It's really well written. It's getting better – more visuals, more videos, more human-like interaction, not just text. And mind you, I'm a huge fan of text, um, and like the GitHub text.
That's awesome. Um, but sometimes for just getting into the topic you need a different kind of, uh, ticket. So I don't know if we can invest in that, but the content is really good.
Practice would be nice. So we need practice. How do we get practice? That's a question I would leave to you. How do you practice a tool on your own? It's good for you, but how do you practice it on your own? So it's things like that: helping the person onboard, building resources to help that transition. You want people to use it at scale.

JASON KELLEY: I wonder if you can talk a bit about that moment when you're training someone and you realize that they really get it. Maybe it's because it's fun, or maybe it's because they just sort of finally understand like, oh, that's how this works. Is that something, you know, I assume it's something you see a lot because you're clearly, you know, an experienced and successful teacher, but it's, it's just such a lovely moment when you're trying to teach someone something.

HELEN ANDROMEDON: Yeah, I mean, I can't speak for everybody, but I'll speak for myself. So there are some things that surprised me sitting in a class, in a workshop room, or reading a tutorial, or watching how the internet works and reading about the cables, but also reading about electromagnetism. All those things were so different from what we were talking about, which is like how the internet and civil society, all that stuff. But that thing, the science of it, the way it is, for me, I think that it's enough, because it's really great.
But then, um, so say we are doing a session on how the internet works in relation to internet shutdowns. Is it enough to just talk about it? Are we jumping from problem to solution, or can we give some time? So that the person doesn't forget, can we give some time to explain the concept? Almost like moving their face away from the issue for a little bit, and like, it's like a deception.
So let's talk about electromagnetism so that you won't forget. Maybe you put two and two together about the fiber optic cables. Maybe you give the right answer to a question at a talk. So it's trying to make connections, because we don't have that background. We don't have a tech background.
I just discovered Dungeons and Dragons at my age. So we don't have that thing of liking tech, playing with it. We don't really have that, at least in my context. So get us there. Be sneaky, but get us there.

JASON KELLEY: You have to be a really good dungeon master. That's what I'm hearing. That's very good.

HELEN ANDROMEDON: Yes.

CINDY COHN: I think that's wonderful and, and I agree with you about, like, bringing the joy, making it fun, and making it interesting on multiple levels, right?
You know, learning about the science as well as, you know, just how to do things that just can add a layer of connection for people that helps keep them engaged and keeps them in it. And also when stuff goes wrong, if you actually understand how it works under the hood, I think you're in a better position to decide what to do next too.
So you've gotta, you know, it not only makes it fun and interesting, it actually gives people a deeper level of understanding that can help 'em down the road.

HELEN ANDROMEDON: Yeah, I agree. Absolutely.

JASON KELLEY: Yeah, Helen, thanks so much for joining us – this has been really helpful and really fun.
Well, that was really fun and really useful, I think, for people who are thinking about digital security and people who don't spend much time thinking about digital security but maybe should start. Um, something that she mentioned, that, that you talked about, the Train the Trainer model, reminded me that we should mention our Surveillance Self-Defense guides, which are available at ssd.eff.org.
That we talked about a little bit. They're a great resource, as is the Security Education Companion website, which is securityeducationcompanion.org.
Both of these are great things that came up and that people might want to check out.

CINDY COHN: Yeah, it's wonderful to hear someone like Helen, who's really out there in the field working with people, say that these guides help her. Uh, we try to be kind of the brain trust for people all over the world who are doing these trainings, but also make it easy if you're someone who's interested in learning how to do trainings: we have materials that'll help you get started. Um, and as, as we all know, we're in a time when more people are coming to us and other organizations seeking security help than ever before.

JASON KELLEY: Yeah, and unfortunately there are fewer resources now in terms of funding. So it's important that people have access to these kinds of guides, and that was something that we talked about that kind of surprised me. Helen was really, I think, optimistic about the funding cuts, not about the cuts themselves, obviously, but about what the opportunities for growth could be because of them.

CINDY COHN: Yeah, I think this really is what resilience sounds like, right? You know, you get handed a situation in which you lose a lot of the funding support for the work that you're gonna do, and she's used to pivoting, and she pivots towards, you know, okay, these are the opportunities for us to grow, for us to, to build new baselines for the work that we do. And I really believe she's gonna do that. The attitude just shines through in the way that she approaches adversity.

JASON KELLEY: Yeah. Yeah. And I really loved, while we're thinking about the, the parts that we're gonna take away from this, I really loved the way she brought up the need for people to feel ownership of the online world. Now, she was talking about infrastructure specifically in that moment, but this is something that's come up quite a bit in our conversations with people.

CINDY COHN: Yeah, her framing of how important the internet is to people all around the world, you know, the work that our friends at Access Now and others do with the #KeepItOn coalition to try to make sure that the internet doesn't go down. She really gave a feeling for just how vital and important the internet is for people all over the world.

JASON KELLEY: Yeah. And even though, you know, some of these conversations were a little bleak in the sense of, you know, protecting yourself from potentially bad things, I was really struck by how she sort of makes it fun in the training and sort of thinking about, you know, how to get people to memorize things. She mentioned magnetism and fiber optics, and just like the science behind it. And it really made me, uh, think more carefully about how I'm gonna talk about certain aspects of security and, and privacy, because she really gets, I think, after years of training what sticks in people's mind.

CINDY COHN: I think that's just so important. I think that people like Helen are this really important kind of connective tissue between the people who are deep in the technology and the people who need it. And you know that this is its own skill and she just, she embodies it. And of course, the joy she brings really makes it alive.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed creative commons attribution 4.0 international, and includes the following music licensed creative commons attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Sound design, additional music and theme remixes by Gaetan Harris.

 

Podcast Episode: Cryptography Makes a Post-Quantum Leap

2 July 2025 at 03:05

The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring very large numbers and finding discrete logarithms, which are important for RSA encryption, Diffie-Hellman key exchanges, and elliptic curve encryption. But what happens when those problems – and the cryptography they underpin – are no longer infeasible for computers to solve? Will our online defenses collapse?


(You can also find this episode on the Internet Archive and on YouTube.) 

Not if Deirdre Connolly can help it. As a cutting-edge thinker in post-quantum cryptography, Connolly is making sure that the next giant leap forward in computing – quantum machines that use principles of subatomic mechanics to ignore some constraints of classical mathematics and solve complex problems much faster – doesn't reduce our digital walls to rubble. Connolly joins EFF's Cindy Cohn and Jason Kelley to discuss not only how post-quantum cryptography can shore up those existing walls but also how it can help us find entirely new methods of protecting our information. 

In this episode you’ll learn about: 

  • Why we’re not yet sure exactly what quantum computing can do yet, and that’s exactly why we need to think about post-quantum cryptography now 
  • What a “Harvest Now, Decrypt Later” attack is, and what’s happening today to defend against it
  • How cryptographic collaboration, competition, and community are key to exploring a variety of paths to post-quantum resilience
  • Why preparing for post-quantum cryptography is and isn’t like fixing the Y2K bug
  • How the best impact that end users can hope for from post-quantum cryptography is no visible impact at all
  • Don’t worry—you won’t have to know, or learn, any math for this episode!  

Deirdre Connolly is a research and applied cryptographer at Sandbox AQ with particular expertise in post-quantum encryption. She also co-hosts the “Security Cryptography Whatever” podcast about modern computer security and cryptography, with a focus on engineering and real-world experiences. Earlier, she was an engineer at the Zcash Foundation – a nonprofit that builds financial privacy infrastructure for the public good – as well as at Brightcove, Akamai, and HubSpot. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

DEIRDRE CONNOLLY: I only got into cryptography, and especially post-quantum, further into my professional life. I was a software engineer for a while, and the Snowden leaks happened, and phone records got leaked. All of Verizon's phone records got leaked, and then PRISM, and more leaks and more leaks. And as an engineer first, I felt like everything that I was building, and we were building and telling people to use, was vulnerable.
I wanted to learn more about how to do things securely. I went further and further and further down the rabbit hole of cryptography. And then, I think I saw a talk which was basically like, oh, elliptic curves are vulnerable to a quantum attack. And I was like, well, I, I really like these things. They're very elegant mathematical objects, it's very beautiful. I was sad that they were fundamentally broken, and, I think it was, Dan Bernstein who was like, well, there's this new thing that uses elliptic curves, but is supposed to be post quantum secure.
But the math is very difficult and no one understands it. I was like, well, I want to understand it if it preserves my beautiful elliptic curves. That's how I just went, just running, screaming downhill into post quantum cryptography.

CINDY COHN: That's Deirdre Connolly talking about how her love of beautiful math, and her anger at the Snowden revelations about how the government was undermining security, led her to the world of post-quantum cryptography.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. You're listening to How to Fix the Internet.

CINDY COHN: On this show we talk to tech leaders, policy-makers, thinkers, artists and engineers about what the future could look like if we get things right online.

JASON KELLEY: Our guest today is at the forefront of the future of digital security. And just a heads up that this is one of the more technical episodes that we've recorded -- you'll hear quite a bit of cryptography jargon, so we've written up some of the terms that come up in the show notes, so take a look there if you hear a term you don't recognize.

CINDY COHN: Deidre Connolly is a research engineer and applied cryptographer at Sandbox AQ, with a particular expertise in post-quantum encryption. She also co-hosts the Security, Cryptography, Whatever podcast, so she's something of a cryptography influencer too. When we asked our tech team here at EFF who we should be speaking with on this episode about quantum cryptography and quantum computers more generally, everyone agreed that Deirdre was the person. So we're very glad to have you here. Welcome, Deirdre.

DEIRDRE CONNOLLY: Thank you very much for having me. Hi.

CINDY COHN: Now we obviously work with a lot of technologists here and, and certainly personally cryptography is near and dear to my heart, but we are not technologists, neither Jason nor I. So can you just give us a baseline of what post-quantum cryptography is and why people are talking about it?

DEIRDRE CONNOLLY: Sure. So a lot of the cryptography that we have deployed in the real world relies on a lot of math and security assumptions on that math based on things like abstract groups, Diffie-Hellman, elliptic curves, finite fields, and factoring prime numbers such as, uh, systems like RSA.
All of these, constructions and problems, mathematical problems, have served us very well in the last 40-ish years of cryptography. They've let us build very useful, efficient, small cryptography that we've deployed in the real world. It turns out that they are all also vulnerable in the same way to advanced cryptographic attacks that are only possible and only efficient when run on a quantum computer, and this is a class of computation, a whole new class of computation versus digital computers, which is the main computing paradigm that we've been used to for the last 75 years plus.
Quantum computers allow these new classes of attacks, especially variants of Shor's algorithm – named after Dr. Peter Shor – that basically, when run on a sufficiently large, cryptographically relevant quantum computer, make all of the asymmetric cryptography based on these problems that we've deployed very, very vulnerable.
So post-quantum cryptography is trying to take that class of attack into consideration and building cryptography to both replace what we've already deployed and make it resilient to this kind of attack, and trying to see what else we can do with these fundamentally different mathematical and cryptographic assumptions when building cryptography.
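(As an editorial aside, not something discussed in the episode: a minimal Python sketch of the asymmetry Deirdre is describing. Multiplying two primes is trivial, while recovering them classically gets rapidly harder as the numbers grow, and that gap is exactly what Shor's algorithm on a quantum computer would erase. The primes and the brute-force search below are purely illustrative, nothing like real key sizes or real attacks.)

```python
# Toy illustration of the asymmetry RSA-style cryptography relies on:
# multiplying two primes is cheap, but recovering them from their product
# takes real work for a classical computer, and that work grows explosively
# with key size. These primes are tiny; real RSA moduli are ~2048 bits.
import time

p, q = 1_299_709, 15_485_863      # two known primes (the 100,000th and 1,000,000th)
n = p * q                         # the "public" value: trivial to compute

def factor_by_trial_division(n):
    """Brute-force search for a factor: the classically expensive direction."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return n, 1

start = time.perf_counter()
factors = factor_by_trial_division(n)
print(f"n = {n}, recovered {factors} in {time.perf_counter() - start:.2f}s")
# Doubling the number of digits in p and q roughly squares the cost of this
# naive search; at real key sizes even the best classical algorithms are
# infeasible, which is exactly the assumption a quantum attacker would break.
```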

CINDY COHN: So we've kind of, we've secured our stuff behind a whole lot of walls, and we're slowly building a bulldozer. This is a particular piece of the world where the speed at which computers can do things has been part of our protection, and so we have to rethink that.

DEIRDRE CONNOLLY: Yeah, quantum computing is a fundamentally new paradigm of how we process data that promises to have very interesting applications beyond what we can envision right now. Like things like protein folding, chemical analysis, nuclear simulation, and cryptanalysis, or very strong attacks against cryptography.
But it is a field where it's such a fundamentally new computational paradigm that we don't even know what its applications fully would be yet, because like we didn't fully know what we were doing with digital computers in the forties and fifties. Like they were big calculators at one time.

JASON KELLEY: When it was suggested that we talk to you about this. I admit that I have not heard much about this field, and I realized quickly when looking into it that there's sort of a ton of hype around quantum computing and post-quantum cryptography and that kind of hype can make it hard to know whether or not something is like actually going to be a big thing or, whether this is something that's becoming like an investment cycle, like a lot of things do. And one of the things that quickly came up as an actual, like real danger is what's called sort of “save now decrypt later.”

DEIRDRE CONNOLLY: Oh yeah.

JASON KELLEY: Right? We have all these messages, for example, that have been encrypted with current encryption methods. And if someone holds onto those, they can decrypt them using quantum computers in the future. How serious is that danger?

DEIRDRE CONNOLLY: It's definitely a concern, and the number one driver, I would say, of post-quantum crypto adoption in broad industry right now is mitigating the threat of a Store Now/Decrypt Later attack, also known as Harvest Now/Decrypt Later, a bunch of names that all mean the same thing.
And fundamentally, it's, uh, especially if you're doing any kind of key agreement over a public channel. Doing key agreement over a public channel is part of the whole purpose: you want to be able to talk to someone who you've never really touched base with before. You all kind of know some public parameters that even your adversary knows, and based on just the fact that you can send messages to each other, plus some secret values that only you know and only the other party knows, you can establish a shared secret, and then you can start encrypting traffic between you to communicate. And this is what you do in your web browser when you have an HTTPS connection, that's over TLS.
This is what you do with Signal or WhatsApp or any, or, you know, Facebook Messenger with the encrypted communications. They're using Diffie-Helman as part of the protocol to set up a shared secret, and then you use that to encrypt their message bodies that you're sending back and forth between you.
But if you can just store all those communications over that public channel, and the adversary knows the public parameters 'cause they're freely published, that's part of Kerckhoff’s Principle about good cryptography - the only thing that the adversary shouldn't know about your crypto system is the secret key values that you're actually using. It should be secure against an adversary that knows everything that you know, except the secret key material.
And you can just record all those public messages and all the public key exchange messages, and you just store them in a big database somewhere. And then when you have your large cryptographically relevant quantum computer, you can rifle through your files and say, hmm, let's point it at this.
And that's the threat that's live now to the stuff that we have already deployed and the stuff that we're continuing to do communications on now that is protected by elliptic curve Diffie Hellman, or Finite Field Diffie Hellman, or RSA. They can just record that and just theoretically point an attack at it at a later date when that attack comes online.
So like in TLS, there's a lot of browsers and servers and infrastructure providers that have updated to post-quantum resilient solutions for TLS. So they're using a combination of the classic elliptic curve Diffie-Hellman and a post-quantum KEM, uh, called ML-KEM, that was standardized by the United States based on a public design that's been, you know, an international collaboration to help do this design.
I think that's been deployed in Chrome, and I think it's deployed by Cloudflare, and it's getting deployed – I think it's now become the default option in the latest version of OpenSSL, and a lot of other open source projects. So that's TLS. Similar approaches are being adopted in OpenSSH, the most popular SSH implementation in the world. Signal, the service, has updated their key exchange to also include a post-quantum KEM in their updated key establishment. So when you start a new conversation with someone or reset a conversation with someone, the latest version of Signal is now protected against that sort of attack.
That is definitely happening and it's happening the most rapidly because of that Store now/Decrypt later attack, which is considered live. Everything that we're doing now can just be recorded and then later when the attack comes online, they can attack us retroactively. So that's definitely a big driver of things changing in the wild right now.
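(Another editorial aside: here is a minimal, hypothetical Python sketch of the kind of public-channel key agreement Deirdre describes, textbook finite-field Diffie-Hellman with deliberately toy parameters, nothing like what TLS or Signal actually deploy. The values marked PUBLIC are exactly what a "store now, decrypt later" adversary can record today and attack later with a quantum computer.)

```python
# Toy Diffie-Hellman key agreement over a public channel.
# The parameters are intentionally tiny and insecure; they exist only to show
# which values travel in the clear and which ones stay secret.
import secrets

# PUBLIC parameters known to everyone, including the adversary (Kerckhoffs' principle).
# p is the Mersenne prime 2**127 - 1, far too small for real-world use.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent (never sent)
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent (never sent)

A = pow(g, a, p)                   # PUBLIC: Alice -> Bob over the wire
B = pow(g, b, p)                   # PUBLIC: Bob -> Alice over the wire

shared_alice = pow(B, a, p)        # both sides compute the same value
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # this becomes the traffic-encryption key material

# A classical eavesdropper who records p, g, A, and B still faces the
# discrete-log problem; a quantum attacker running Shor's algorithm on those
# same recorded values could later recover a or b, and with it every session key.
```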

JASON KELLEY: Okay. I'm going to throw out two parallels for my very limited knowledge to make sure I understand. This reminds me a little bit of sort of the work that had to be done before Y2K in, in the sense of like, now people think nothing went wrong and nothing was ever gonna go wrong, but all of us working anywhere near the field know actually it took a ton of work to make sure that nothing blew up or stopped working.
And the other is that in, I think it was 1998, EFF was involved in something we called Deep Crack, where we made, that's a, I'm realizing now that's a terrible name. But anyway, the DES cracker, um, we basically wanted to show that DES was capable of being cracked, right? And that this was a - correct me if I'm wrong - it was some sort of cryptographic standard that the government was using and people wanted to show that it wasn't sufficient.

DEIRDRE CONNOLLY: Yes - I think it was the original Data Encryption Standard. And then after its vulnerability was shown, they, they tripled it up to, to make it useful. And that's why Triple DES is still used in a lot of places and is actually considered okay. And then later came the Advanced Encryption Standard, AES, which we prefer today.

JASON KELLEY: Okay, so we've learned the lesson, or we are learning the lesson, it sounds like.

DEIRDRE CONNOLLY: Uh huh.

CINDY COHN: Yeah, I think that that's, that's right. I mean, EFF built the DES cracker because in the nineties the government was insisting on using something that everybody knew was really, really insecure and was only going to get worse as computers got stronger and, and strong computers got into more people's hands. We built it to basically show that the emperor had no clothes, um, that this wasn't very good.
And I think with the NIST standards and what's happening with post-quantum is really, you know, the hopeful version is we learned that lesson and we're not seeing government trying to pretend like there isn't a risk in order to preserve old standards, but instead leading the way with new ones. Is that fair?

DEIRDRE CONNOLLY: That is very fair. NIST ran this post-quantum competition over almost 10 years, and it had over 80 submissions in the first round from all over the world, from industry, academia, and a mix of everything in between, and then it narrowed it down. They're not all out yet, but there's the key agreement one, called ML-KEM, and three signatures. And there's a mix of cryptographic problems that they're based on, but there were multiple rounds, lots of feedback, lots of things got broken.
This competition has absolutely led the way for the world of getting ready for post-quantum cryptography. There are some competitions that have happened in Korea, and I think there's some work happening in China for their, you know, for their area.
There are other open standards, and there are standards happening in other standards bodies, but the NIST competition has led the way. And because it's all open (all these standards and drafts, and all of the work and the cryptanalysis and attacks, have been public for the whole stretch), it's able to benefit everyone in the world.

CINDY COHN: I got started in the crypto wars in the nineties, where the government was kind of the problem, and they still are. And I do wanna ask you about whether you're seeing any role of the kind of national security, FBI infrastructure, which has traditionally tried to put a thumb on the scales and make things less secure so that they could have access, if you're seeing any of that there.
But on the NIST side, I think this provides a nice counter example of how government can help facilitate building a better world sometimes, as opposed to being the thing we have to drag kicking and screaming into it.
But let me circle around to the question I embedded in that, which is, you know, one of the problems that that, that we know happened in the nineties around DES, and then of course some of the Snowden revelations indicated some mucking about in security as well behind the scenes by the NSA. Are you seeing anything like that and, and what should we be on the lookout for?

DEIRDRE CONNOLLY: Not in the PQC stuff. Uh, there, like there have been a lot of people that were paying very close attention to what these independent teams were proposing and then what was getting turned into a standard or a proposed standard and every little change, because I, I was closely following the key establishment stuff.
Um, every little change, people were trying to be like, did you tweak that? Why did you tweak that? Is there a good reason? And, like, running down basically all of those things, including trying to get into the nitty gritty of, like, okay, we think this is approximately this many bits of security using these parameters, and talking about, I dunno, 123 versus 128 bits, and really paying attention to all of that stuff.
And I don't think there was any evidence of anything like that, for plus or minus. Because there was - I don't remember which crypto scheme it was, but there was definitely an improvement made very quietly back in the day by, I think, some of the folks at NSA, to, I think it was the S-boxes, and I don't remember if it was DES or AES or whatever it was.
But people didn't understand it at the time, because it was related to advanced - I think it was differential cryptanalysis - attacks that folks inside there knew about, and people outside in academia didn't quite know about yet. And then after the fact they were like, oh, they've made this better. Um, we're not, we're not even seeing any evidence of anything of that character either.
It's just sort of very open: like, if everything's proceeding well and the products of these post-quantum standards are going well, you know, leave it alone. And so everything looks good. And, especially for NSA, uh, National Security Systems in the, in the United States, they have updated their own targets to migrate to post-quantum, and they are relying fully on the highest security level of these new standards.
So like they are eating their own dog food. They're protecting the highest classified systems and saying these need to be fully migrated to fully post-quantum key agreement, uh, and I think signatures at different times, but it has to be done by, like, 2035. So if they were doing anything to kind of twiddle with those standards, they'd be, you know, hurting themselves and shooting themselves in the foot.

CINDY COHN: Well fingers crossed.

DEIRDRE CONNOLLY: Yes.

CINDY COHN: Because I wanna build a better internet, and a better internet means that they aren't secretly messing around with our security. And so this is, you know, cautiously good news.

JASON KELLEY: Let's take a quick moment to thank our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]

JASON KELLEY: And now, back to our conversation with Deirdre Connolly.

CINDY COHN: I think the thing that's fascinating about this is kind of seeing this cat and mouse game about the ability to break codes, and the ability to build codes and systems that are resistant to the breaking, kind of playing out here in the context of building better computers for everyone.
And I think it's really fascinating. And also, for people, you know, this is a pretty technical conversation, um, even, you know, uh, for our audience. But this is the stuff that goes on under the hood of how we keep journalists safe, how we keep activists safe, how we keep us all safe, whether it's our bank accounts or, you know, people are talking about mobile IDs now and all sorts of other sensitive documents that are not going to be in physical form anymore, but are gonna be in digital form.
And unless we get this lock part right, we're really creating problems for people. And you know, what I really appreciate about you and the other people kind of in the midst of this fight is it's very unsung, right? It's kind of under the radar for the rest of us, but yet it's the, it's the ground that we need to stand on to, to be safe moving forward.

DEIRDRE CONNOLLY: Yeah, and there's a lot of assumptions, uh, from the low-level theoretical cryptographers, to the people implementing their stuff into software, to the people trying to deploy it - a lot of assumptions that have been baked into what we've built that to a degree don't quite fit some of the things we've been able to build in a post-quantum secure way, or the way we think is a post-quantum secure way.
Um, we're gonna need to change some stuff and we think we know how to change some stuff to make it work. but we are hoping that we don't accidentally introduce any vulnerabilities or gaps.
We're trying, but also we're not a hundred percent sure that we're not missing something, 'cause these things are new. And so we're trying, and we're also trying to make sure we don't break things as we change them because we're trying to change them to be post quantum resilient. But you know, once you change something, if there's a possibility, you, you just didn't understand it completely. And you don't wanna break something that was working well in one direction because you wanna improve it in another direction.

CINDY COHN: And that's why I think it's important to continue to have a robust community of people who are the breakers, right? Who are, are hackers, who are, who are attacking. And that is a, you know, that's a mindset, right? That's a way of thinking about stuff that is important to protect and nurture, um, because, you know, there's an old quote from Bruce Schneier: anyone can build a crypto system that they themselves cannot break. Right? It takes a community of people trying to really pound away at something to figure out where the holes are.
And you know, a lot of the work that EFF does around coders rights and other kinds of things is to make sure that there's space for that. and I think it's gonna be as needed in a quantum world as it was in a kind of classical computer world.

DEIRDRE CONNOLLY: Absolutely. I'm confident that we will learn a lot more from the breakers about this new cryptography, because, like, we've tried to be robust through this, you know, NIST competition, and a lot of the things that we learn apply to other constructions as they come out. But, like, there's a whole area of people who are going to be encountering this kind of newish cryptography for the first time, and they kind of look at it and they're like, oh, uh, I, I think I might be able to do something interesting with this. And we're, we'll all learn more and we'll try to patch and update as quickly as possible.

JASON KELLEY: And this is why we have competitions to figure out what the best options are and why some people might favor one algorithm over another for different, different processes and things like that.

DEIRDRE CONNOLLY: And that's why we're probably gonna have a lot of different flavors of post-quantum cryptography getting deployed in the world, because it's not just, ah, you know, I don't love NIST, I'm gonna do my own thing in my own country over here, or, or have different requirements. There is that at play, but also you're trying to not put all your eggs in one basket as well.

CINDY COHN: Yeah, so we want a menu of things so that people can really pick, from, you know, vetted, but different strategies. So I wanna ask the kind of core question for the podcast, which is, um, what does it look like if we get this right, if we get quantum computing and, you know, post-quantum crypto, right?
How does the world look different? Or does it just look the same? How, what, what does it look like if we do this well?

DEIRDRE CONNOLLY: Hopefully to a person just using their phone or using their computer to talk to somebody on the other side of the world, hopefully they don't notice. Hopefully to them, if they're, you know, deploying a website and they're like, ah, I need to get a Let’s Encrypt certificate or whatever.
Hopefully Let's Encrypt, just, you know, Certbot, just kind of does everything right by default and they don't have to worry about it.
Um, for the builders, it should be, we have a good recommended menu of cryptography that you can use when you're deploying TLS, when you're deploying SSH, uh, when you're building cryptographic applications, especially.
So like if you are building something in Go or Java or you know, whatever it might be, the crypto library in your language will have the updated recommended signature algorithm or key agreement algorithm and be, like, this is how we, you know, they have code snippets to say like, this is how you should use it, and they will deprecate the older stuff.
And, like, unfortunately there's gonna be a long time where there's gonna be a mix of the new post-quantum stuff that we know how to use and know how to deploy, and protect the most important stuff, like to mitigate Store Now/Decrypt Later and, you know, get those signatures on the most important protected stuff.
Uh, get those done. But there's a lot of stuff that we're not really clear about how we wanna do yet. And kind of going back to one of the things you mentioned earlier, uh, comparing this to Y2K, there was a lot of work that went into mitigating Y2K before, during, and immediately after.
Unfortunately, the comparison to the post quantum migration kind of falls down because after Y2K, if you hadn't fixed something, it would break. And you would notice in usually an obvious way, and then you could go find it. You, you fix the most important stuff that, you know, if it broke, like you would lose billions of dollars or, you know, whatever. You'd have an outage.
For cryptography, especially the stuff that's a little bit fancier, um, you might not know it's broken, because the adversary is not gonna tell you - it's not gonna blow up.
And you have to, you know, reboot a server or patch something and then, you know, redeploy. If it's gonna fail, it's gonna fail quietly. And so we're trying to kind of find these things, or at least make the kind of longer tail of stuff, uh, find fixes for that upfront, you know, so that at least the option is available.
But for a regular person, hopefully they shouldn't notice. So everyone's trying really hard to make it so that the best security, in terms of the cryptography is deployed with, without downgrading your experience. We're gonna keep trying to do that.
I don't wanna build crap and say “Go use it.” I want you to be able to just go about your life and use a tool that's supposed to be useful and helpful. And it's not accidentally leaking all your data to some third party service or just leaving a hole on your network for any, any actor who notices to walk through and you know, all that sort of stuff.
So whether it's like implementing things securely in software, or it's cryptography or you know, post-quantum weirdness, like for me, I just wanna build good stuff for people, that's not crap.
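(One last editorial aside: a rough sense of what the "hybrid" key agreement mentioned earlier in the conversation looks like inside a library. This is a simplified sketch, not the actual TLS or Signal construction; the real protocols use different combiners, labels, and transcript binding, and the two input secrets below are stand-ins rather than real handshake output.)

```python
# Sketch of the hybrid idea behind post-quantum TLS and Signal deployments:
# derive the session key from BOTH a classical (e.g. elliptic curve Diffie-Hellman)
# shared secret and a post-quantum KEM shared secret, so the session stays safe
# unless an attacker can break both.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-in shared secrets; in a real handshake these would come from the ECDH
# exchange and from decapsulating the post-quantum KEM ciphertext (e.g. ML-KEM).
classical_secret = b"\x01" * 32
post_quantum_secret = b"\x02" * 32

# Concatenate both inputs and run them through a KDF: recovering the session
# key requires recovering BOTH the classical and the post-quantum secret.
prk = hkdf_extract(salt=b"\x00" * 32, ikm=classical_secret + post_quantum_secret)
session_key = hkdf_expand(prk, info=b"toy hybrid key agreement", length=32)
print(session_key.hex())
```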

JASON KELLEY: Everyone listening to this agrees with you. We don't want to build crap. We want to build some beautiful things. Let's go out there and do it.

DEIRDRE CONNOLLY: Cool.

JASON KELLEY: Thank you so much, Deirdre.

DEIRDRE CONNOLLY: Thank you!

CINDY COHN: Thank you Deirdre. We really appreciate you coming and explaining all of this to, you know, uh, the lawyer and activist at EFF.

JASON KELLEY: Well, I think that was probably the most technical conversation we've had, but I followed along pretty well. At first I was very nervous based on the store-now-decrypt-later concerns, but after we talked to Deirdre, I feel like the people working on this, just like for Y2K, are pretty much gonna keep us out of hot water. And I learned a lot more than I knew before we started the conversation. What about you, Cindy?

CINDY COHN: I learned a lot as well. I mean, cryptography and, attacks on security is always, you know, it's a process, and it's a process by which we do the best we can, and then, then we also do the best we can to rip it apart and find all the holes, and then we, we iterate forward. And it's nice to hear that that model is still the model, even as we get into something like quantum computers, which, um, frankly are still hard to conceptualize.
But I agree. I think the good news outta this interview is I feel like there's a lot of pieces in place to try to do this right, to have this tremendous shift in computing, which we don't know when it's coming but the research indicates it IS coming, be something that we can handle, um, rather than something that overwhelms us.
And I think it's really good to hear that good people are trying to do the right thing here, since it's not inevitable.

JASON KELLEY: Yeah, and it is nice when someone's sort of best vision for what the future looks like is: hopefully your life will have no impacts from this, because everything will be taken care of. That's always good.
I mean, it sounds like, you know, the main thing for EFF is, as you said, we have to make sure that security engineers, hackers have the resources that they need to protect us from these kinds of threats and, and other kinds of threats obviously.
But, you know, that's part of EFF's job, like you mentioned. Our job is to make sure that there are people able to do this work and be protected while doing it, so that when the solutions do come about, you know, they work and they're implemented and the average person doesn't have to know anything and isn't vulnerable.

CINDY COHN: Yeah, I also think that, um, I appreciated her vision that the future's gonna be not just one one-size-fits-all solution, but a menu of things that take into account, you know, both what works better in terms of bandwidth and compute time, but also what, you know, what people actually need.
And I think that's a piece that's kind of built into the way that this is happening that's also really hopeful. In the past and, and I was around when EFF built the DES cracker, um, you know, we had a government that was saying, you know, you know, everything's fine, everything's fine when everybody knew that things weren't fine.
So it's also really hopeful that that's not the position that NIST is taking now, and that's not the position that people who may not even pick the NIST standards but pick other standards are really thinking through.

JASON KELLEY: Yeah, it's very helpful and positive and nice to hear when something has improved for the better. Right? And that's what happened here. We had this, this different attitude from, you know, government at large in the past and it's changed and that's partly thanks to EFF, which is amazing.

CINDY COHN: Yeah, I think that's right. And, um, you know, we'll see going forward, you know, the governments change and they go through different things, but this is, this is a hopeful moment and we're gonna push on through to this future.
I think there's a lot of, you know, there's a lot of worry about quantum computers and what they're gonna do in the world, and it's nice to have a little vision of, not only can we get it right, but there are forces in place that are getting it right. And of course it does my heart so, so good that, you know, someone like Deirdre was inspired by Snowden and dove deep and figured out how to be one of the people who was building the better world. We've talked to so many people like that, and this is a particular, you know, little geeky corner of the world. But, you know, those are our people and that makes me really happy.

JASON KELLEY: Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley…

CINDY COHN: And I’m Cindy Cohn.

MUSIC CREDITS: This podcast is licensed creative commons attribution 4.0 international, and includes the following music licensed creative commons attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Sound design, additional music and theme remixes by Gaetan Harris.
