Is your phone listening to you? (re-air) (Lock and Code S07E03)

9 February 2026 at 13:49

This week on the Lock and Code podcast…

In January, Google settled a lawsuit that pricked up a few ears: It agreed to pay $68 million to a wide array of people who sued the company together, alleging that Google’s voice-activated smart assistant had secretly recorded their conversations, which were then sent to advertisers to target them with promotions.

Google admitted no wrongdoing in the settlement agreement, but the fact stands that one of the largest phone makers in the world decided to forgo a trial against some potentially explosive surveillance allegations. It’s a decision the public has seen before: last year, Apple agreed to pay $95 million to settle similar legal claims against its smart assistant, Siri.

Back-to-back, the stories raise a question that just seems to never go away: Are our phones listening to us?

This week, on the Lock and Code podcast with host David Ruiz, we revisit an episode from last year in which we tried to find the answer. In speaking to Electronic Frontier Foundation Staff Technologist Lena Cohen about mobile tracking overall, it becomes clear that, even if our phones aren’t literally listening to our conversations, the devices are stuffed with so many novel forms of surveillance that we need not say something out loud to be predictably targeted with ads for it.

“Companies are collecting so much information about us and in such covert ways that it really feels like they’re listening to us.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

ICYMI: Experts on Experts – Season One Roundup

3 February 2026 at 09:23

In 2025, we launched Experts on Experts: Commanding Perspectives as a pilot video series designed to spotlight the ideas shaping cybersecurity, directly from the people driving them. Over five episodes, Rapid7 leaders shared short, candid conversations on topics like agentic AI, MDR ROI, cybercrime-as-a-service, and policy in practice. With Season Two launching soon, now is the perfect time to revisit the first run of expert conversations that started it all. 

Each episode is now embedded in its supporting blog on rapid7.com, making it even easier to watch, read, and share. Here's your full recap of Season One.

Ep 1: What Happens When Agentic AIs Talk to Each Other?

Guest: Laura Ellis, VP of Data & AI

Agentic AI was one of the most talked-about themes of the year, but few tackled it with the clarity and urgency Laura Ellis brought to this episode. From governance models to inter-agent deception, the conversation explores how AI systems can interact in unpredictable ways. Laura shares her perspective on keeping humans at the helm, how to contain agent behavior in real-world infrastructure, and what’s realistic for security teams today. The episode grew out of a LinkedIn conversation about autonomy, oversight, and the potential for agent-to-agent manipulation, and it answers many of the questions raised there. If you’re curious about how AI moves from experiment to ecosystem, this is a great place to start.

[Read and watch]

Ep 2: What MDR ROI Really Looks Like

Guest: Jon Hencinski, VP of Detection & Response

In this open and honest conversation, Jon Hencinski takes us inside the modern SOC to show what strong managed detection and response really looks like. From coverage and telemetry to analyst training and noise reduction, the episode walks through the building blocks of a high-performing MDR program. Jon speaks directly to security leaders and decision-makers, breaking down which metrics matter most, how to measure confidence in your provider, and why speed is still the differentiator. If you’re evaluating MDR partners or trying to articulate the value of your program internally, this episode offers a practical benchmark. It also pairs well with Rapid7’s IDC report on MDR business value, which (Spoiler Alert) found a 422% three-year ROI and payback in under six months.

[Read and watch]

Ep 3: The Business of Cybercrime

Guest: Raj Samani, SVP and Chief Scientist

Cybercrime is no longer just a threat; it’s an economy. In this episode, Raj Samani unpacks the business model behind ransomware, initial access brokers, and affiliate operations. He shares his view on how cybercriminals are scaling operations like startups, what security teams can do to map that behavior, and why understanding the economy of access is key to disruption. It’s an insightful look at how attacker innovation is outpacing the traditional response, and what needs to change. Raj also reflects on the blurred lines between opportunistic access and long-tail ransomware campaigns, and how buyers on the dark web shape the threat landscape. This conversation is especially useful for defenders who want to think more strategically about adversaries and the systems that support them.

[Read and watch]

Ep 4: What SOC Teams Are Doing Differently in 2025

Guest: Steve Edwards, Director of Threat Intelligence and Detection Engineering

This episode walks through the key findings of Rapid7’s IDC study on the business value of MDR and brings them to life through real-world SOC operations. Steve Edwards shares how telemetry access changes the game, what true coverage looks like in practice, and why teams are shifting away from reactive models to faster, context-rich detection. You’ll hear what happens in the first 24 to 48 hours of incident response and how Rapid7’s no-cap IR model improves confidence during high-pressure moments. Steve also breaks down how teams are using MITRE ATT&CK mapping to prioritize security investments and measure response maturity over time. For security leaders and buyers evaluating managed services, this conversation offers a clear, practical lens on what a successful MDR program looks like from a security and business perspective.

[Read and watch]

Ep 5: Policy to Practice – What Cyber Resilience Really Takes

Guest: Sabeen Malik, VP of Global Government Affairs and Public Policy

With new regulations emerging across the globe, it’s easy to confuse compliance with resilience. In this episode, Sabeen Malik unpacks what it takes to bridge that gap. She talks through disclosure laws, geopolitical tension, and the difficulty of turning policy into something operators can act on. Sabeen brings both policy expertise and operational realism, making the case that cybersecurity regulation needs to be built for the real world, not for a checklist. She also explores the cultural side of risk, including how insider threats and trust-based frameworks play into resilience planning. If your organization is tracking regulatory changes or working toward a more mature security posture, this episode offers a smart lens on where policy can help, and how to overcome its shortfalls.

[Read and watch]

Why Gen Z is Ditching Smartphones for Dumbphones

2 February 2026 at 00:00

Younger generations are increasingly ditching smartphones in favor of “dumbphones”—simpler devices with fewer apps, fewer distractions, and less tracking. But what happens when you step away from a device that now functions as your wallet, your memory, and your security key? In this episode, Tom and Scott explore the dumbphone movement through a privacy and […]

The post Why Gen Z is Ditching Smartphones for Dumbphones appeared first on Shared Security Podcast.


One privacy change I made for 2026 (Lock and Code S07E02)

26 January 2026 at 08:31

This week on the Lock and Code podcast…

When you hear the words “data privacy,” what do you first imagine?

Maybe you picture going into your social media apps and setting your profile and posts to private. Maybe you think about who you’ve shared your location with and deciding to revoke some of that access. Maybe you want to remove a few apps entirely from your smartphone, maybe you want to try a new web browser, maybe you even want to skirt the type of street-level surveillance provided by Automated License Plate Readers, which can record your car model, license plate number, and location on your morning drive to work.

Importantly, all of these are “data privacy,” but trying to do all of these things at once can feel impossible.

That’s why, this year, for Data Privacy Day, Malwarebytes Senior Privacy Advocate (and Lock and Code host) David Ruiz is sharing the one thing he’s doing differently to improve his privacy. And it’s this: He’s given up Google Search entirely.

When Ruiz requested the data that Google had collected about him last year, he saw that the company had recorded an eye-popping 8,000 searches in just the span of 18 months. And those 8,000 searches didn’t just reveal what he was thinking about on any given day—including his shopping interests, his home improvement projects, and his late-night medical concerns—they also revealed when he clicked on an ad based on the words he searched. This type of data, which connects a person’s searches to the likelihood of engaging with an online ad, is vital to Google’s revenue, and it’s the type of thing that Ruiz is seeking to finally cut off.

So, for 2026, he has switched to a new search engine, Brave Search.

Today, on the Lock and Code podcast, Ruiz explains why he made the switch, what he values about Brave Search, and why he refused to replace Google with any of the major AI platforms.

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Enshittification is ruining everything online (Lock and Code S07E01)

12 January 2026 at 00:03

This week on the Lock and Code podcast…

There’s a bizarre thing happening online right now where everything is getting worse.

Your Google results have become so bad that you’ve likely typed what you’re looking for, plus the word “Reddit,” so you can find discussion from actual humans. If you didn’t take this route, you might get served AI results from Google Gemini, which once recommended that every person should eat “at least one small rock per day.” Your Amazon results are a slog, filled with products that have surreptitiously paid reviews. Your Facebook feed could be entirely irrelevant because the company decided years ago that you didn’t want to see what your friends posted, you wanted to see what brands posted, because brands pay Facebook, and you don’t, so brands are more important than your friends.

But, according to digital rights activist and award-winning author Cory Doctorow, this wave of online deterioration isn’t an accident—it’s a business strategy, and it can be summed up in a word he coined a couple of years ago: Enshittification.

Enshittification is the process by which an online platform—like Facebook, Google, or Amazon—harms its own services and products for short-term gain while managing to avoid any meaningful consequences, like the loss of customers or serious government regulation. It begins with an online platform treating new users with care, offering services, products, or connectivity that they may not find elsewhere. Then, the platform invites businesses on board that want to sell things to those users. This means businesses become the priority and the everyday user experience is hindered. But then, in the final stage, the platform makes things worse for its business customers too, improving things only for itself.

This is how a company like Amazon went from helping you find nearly anything you wanted to buy online to helping businesses sell you anything you wanted to buy online to making those businesses pay increasingly high fees to even be discovered online. Everyone, from buyers to sellers, is pretty much entrenched in the platform, so Amazon gets to dictate the terms.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Doctorow about enshittification’s fast damage across the internet, how to fight back, and where it all started.

“Once these laws were established, the tech companies were able to take advantage of them. And today we have a bunch of companies that aren’t tech companies that are nevertheless using technology to rig the game in ways that the tech companies pioneered.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

ALPRs are recording your daily drive (Lock and Code S06E26)

5 January 2026 at 10:52

This week on the Lock and Code podcast…

There’s an entire surveillance network popping up across the United States that has likely already captured your information, all for nothing more suspicious than driving a car.

Automated License Plate Readers, or ALPRs, are AI-powered cameras that scan and store an image of every single vehicle that passes their view. They are mounted onto street lights, installed under bridges, disguised in water barrels, and affixed onto telephone poles, lampposts, parking signs, and even cop cars.

Once installed, these cameras capture a vehicle’s license plate number, along with its make, model, and color, and any identifying features, like a bumper sticker, damage, or even sport trim options. Because nearly every ALPR camera has an associated location, these devices can reveal where a car was headed and at what time, and by linking data from multiple ALPRs, it’s easy to determine a car’s daylong route and, by proxy, its owner’s daily routine.
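
To make that linking concrete, here is a minimal, hypothetical sketch in Python (invented plates, timestamps, and camera locations, not any vendor’s actual schema) of how reads from separate cameras collapse into a single, ordered route:

    # Hypothetical illustration: linking timestamped, geolocated plate reads
    # from separate ALPR cameras into per-vehicle routes.
    from collections import defaultdict

    # Each read: (plate, ISO 8601 timestamp, camera location). Invented data.
    reads = [
        ("7ABC123", "2026-01-05T08:02:00", "Main St & 1st Ave"),
        ("4XYZ889", "2026-01-05T08:03:10", "Main St & 1st Ave"),
        ("7ABC123", "2026-01-05T08:11:30", "Highway 9 overpass"),
        ("7ABC123", "2026-01-05T08:24:45", "Office Park Dr"),
    ]

    # Group reads by plate number.
    routes = defaultdict(list)
    for plate, timestamp, location in reads:
        routes[plate].append((timestamp, location))

    # Sorting one plate's reads chronologically (ISO timestamps sort
    # lexicographically) yields an ordered route; repeated daily, that
    # route becomes a routine.
    for plate, stops in sorted(routes.items()):
        print(plate, sorted(stops))

The plate number alone serves as the join key across cameras, which is why even a network of simple readers can reconstruct movement at scale.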

This deeply sensitive information has been exposed in recent history.

In 2024, the US Cybersecurity and Infrastructure Security Agency discovered seven vulnerabilities in cameras made by Motorola Solutions, and at the start of 2025, the outlet Wired reported that more than 150 ALPR cameras were leaking their live streams.

But there’s another concern with ALPRs beyond data security and potential vulnerability exploits: what they store and how they’re accessed.

ALPRs are almost uniformly purchased and used by law enforcement. These devices have been used to help solve crime, but their databases can be accessed by police who do not live in your city, or county, or even state, and who do not need a warrant before making a search.

In fact, when police access the databases managed by one major ALPR manufacturer, named Flock, one of the few guardrails those police encounter is needing to type a single word into a basic text box. When the Electronic Frontier Foundation analyzed 12 million searches made by police in Flock’s systems, it learned that police sometimes filled that text box with the word “protest,” meaning that police were potentially investigating activity that is protected by the First Amendment.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Will Freeman, founder of the ALPR-tracking project DeFlock, about this growing tide of neighborhood surveillance and the flimsy protections afforded to everyday people.

“License plate readers are a hundred percent used to circumvent the Fourth Amendment because [police] don’t have to see a judge. They don’t have to find probable cause. According to the policies of most police departments, they don’t even have to have reasonable suspicion.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

EFF’s ‘How to Fix the Internet’ Podcast: 2025 in Review

24 December 2025 at 11:45

2025 was a stellar year for EFF’s award-winning podcast, “How to Fix the Internet,” as our sixth season focused on the tools and technology of freedom. 

It seems like everywhere we turn, we see dystopian stories about technology’s impact on our lives and our futures. From tracking-based surveillance capitalism, to street-level government surveillance, to the dominance of a few large platforms choking innovation, to the growing efforts by authoritarian governments to control what we see and say, the landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building solutions. That’s where our podcast comes in.

EFF's How to Fix the Internet podcast offers a better way forward. Through curious conversations with some of the leading minds in law and technology, EFF Executive Director Cindy Cohn and Activism Director Jason Kelley explore creative solutions to some of today’s biggest tech challenges. Our sixth season, which ran from May through September, featured: 

  • “Digital Autonomy for Bodily Autonomy” – We all leave digital trails as we navigate the internet: records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world. Those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? Digital Defense Fund Director Kate Bertash joined Cindy and Jason to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline.
  • “Love the Internet Before You Hate On It” – There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. Molly White, a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech, joined Cindy and Jason to discuss working toward a human-centered internet that gives everyone a sense of control and interaction; open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate.
  • “Why Three is Tor's Magic Number” – Many in Silicon Valley, and in U.S. business at large, seem to believe innovation springs only from competition: a race to build the next big thing first, cheaper, better, best. But what if collaboration and community breed innovation just as well as adversarial competition? Tor Project Executive Director Isabela Fernandes joined Cindy and Jason to discuss the importance of not just accepting technology as it’s given to us, but collaboratively breaking it, tinkering with it, and rebuilding it together until it becomes the technology that we really need to make our world a better place.
  • “Securing Journalism on the ‘Data-Greedy’ Internet” – Public-interest journalism speaks truth to power, so protecting press freedom is part of protecting democracy. But what does it take to digitally secure journalists’ work in an environment where critics, hackers, oppressive regimes, and others seem to have the free press in their crosshairs? Freedom of the Press Foundation Digital Security Director Harlo Holmes joined Cindy and Jason to discuss the tools and techniques that help journalists protect themselves and their sources while keeping the world informed.
  • “Cryptography Makes a Post-Quantum Leap” – The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring the products of large primes and finding discrete logarithms, which are important for RSA encryption, Diffie-Hellman key exchanges, and elliptic curve encryption. But what happens when those problems, and the cryptography they underpin, are no longer infeasible for computers to solve? Will our online defenses collapse? Research and applied cryptographer Deirdre Connolly joined Cindy and Jason to discuss not only how post-quantum cryptography can shore up those existing walls but also help us find entirely new methods of protecting our information. (A toy sketch of the underlying asymmetry follows this list.)
  • “Finding the Joy in Digital Security” – Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning and retain more knowledge when they're having a good time. It doesn’t mean the topic isn’t serious; it’s just about intentionally approaching a serious topic with joy. East Africa digital security trainer Helen Andromedon joined Cindy and Jason to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world.
  • “Smashing the Tech Oligarchy” – Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more. Tech journalist and critic Kara Swisher joined Cindy and Jason to discuss regulation that can keep people safe online without stifling innovation, creating an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs.
  • “Separating AI Hope from AI Hype” – If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. Princeton Professor and “AI Snake Oil” publisher Arvind Narayanan joined Cindy and Jason to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive.
  • “Protecting Privacy in Your Brain” – Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it, a Pandora’s box of epic proportions. Neuroscientist Rafael Yuste and human rights lawyer Jared Genser, co-founders of The Neurorights Foundation, joined Cindy and Jason to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind.
  • “Building and Preserving the Library of Everything” – Access to knowledge not only creates an informed populace that democracy requires but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of, so long as that knowledge is allowed to flow freely. Internet Archive founder and digital librarian Brewster Kahle joined Cindy and Jason to discuss how the free flow of knowledge makes all of us more free.
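
As promised in the post-quantum episode note above, here is a toy sketch in Python of the asymmetry that cryptography like Diffie-Hellman relies on: computing a modular exponent is fast, while inverting it (the discrete logarithm) is brute-force slow. The parameters below are deliberately tiny and invented; real deployments use moduli thousands of bits long.

    # Toy illustration of the discrete-log asymmetry behind Diffie-Hellman.
    # Tiny, invented parameters only. At real-world sizes, the brute-force
    # loop below is infeasible for classical computers, though not for a
    # large quantum computer running Shor's algorithm, which is exactly the
    # post-quantum concern.
    p, g = 2039, 7       # small prime modulus and generator (toy values)
    secret = 1337        # a private exponent

    # Forward direction: fast even for enormous numbers (square-and-multiply).
    public = pow(g, secret, p)

    # Inverse direction: recovering the secret from the public value means
    # brute-forcing the discrete logarithm.
    def discrete_log(target, base, modulus):
        value = 1
        for exponent in range(modulus):
            if value == target:
                return exponent
            value = (value * base) % modulus
        return None

    assert discrete_log(public, g, p) == secret  # feasible only because p is tiny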

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Pig butchering is the next “humanitarian global crisis” (Lock and Code S06E25)

15 December 2025 at 10:39

This week on the Lock and Code podcast…

This is the story of the world’s worst scam and how it is being used to fuel entire underground economies that have the power to rival nation-states across the globe. This is the story of “pig butchering.”

“Pig butchering” is a violent term used to describe a growing type of online investment scam that has ruined the lives of countless victims all across the world. No age group is spared, nearly no country is untouched, and, if the numbers are true, with more than $6.5 billion stolen in 2024 alone, no scam might be more serious today than this.

Despite this severity, like many types of online fraud today, most pig-butchering scams start with a simple “hello.”

Sent through text or as a direct message on social media platforms like X, Facebook, Instagram, or elsewhere, these initial communications are often framed as simple mistakes—a kind stranger was given your number by accident, and if you reply, you’re given a kind apology and a simple lure: “You seem like such a kind person… where are you from?”

Here, the scam has already begun. Pig butchers, like romance scammers, build emotional connections with their victims. For months, their messages focus on everyday life, from family to children to marriage to work.

But, with time, once the scammer believes they’ve gained the trust of their victim, they launch their attack: An investment “opportunity.”

Pig butchers tell their victims that they’ve personally struck it rich by investing in cryptocurrency, and they want to share the wealth. Here, the scammers will lead their victims through opening an entirely bogus investment account, which is made to look real through sham websites that are littered with convincing tickers, snazzy analytics, and eye-popping financial returns.

When the victims “invest” in these accounts, they’re actually giving money directly to their scammers. But when the victims log into their online “accounts,” they see their money growing and growing, which convinces many of them to invest even more, perhaps even until their life savings are drained.

This charade goes on as long as possible until the victims learn the truth and the scammers disappear. The continued theft from these victims is where “pig-butchering” gets its name—with scammers fattening up their victims before slaughter.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Erin West, founder of Operation Shamrock and former Deputy District Attorney of Santa Clara County, about pig butchering scams, the failures of major platforms like Meta to stop them, and why this global crisis represents far more than just a few lost dollars.

“It’s really the most compelling, horrific, humanitarian global crisis that is happening in the world today.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Air fryer app caught asking for voice data (re-air) (Lock and Code S06E24)

2 December 2025 at 11:22

This week on the Lock and Code podcast…

It’s often said online that if a product is free, you’re the product, but what if that bargain were no longer true? What if, depending on the device you paid hard-earned money for, you still became a product yourself, to be measured, anonymized, collated, shared, or sold, often away from view?

In 2024, a consumer rights group out of the UK teased this new reality when it published research into whether people’s air fryers—seriously—might be spying on them.

By analyzing the associated Android apps for three separate air fryer models from three different companies, researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone.

As the researchers wrote:

“In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason.”

Bizarrely, these types of data requests are far from rare.

Today, on the Lock and Code podcast, we revisit a 2024 episode in which host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen appliances that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Your coworker is tired of AI “workslop” (Lock and Code S06E23)

17 November 2025 at 10:44

This week on the Lock and Code podcast…

Everything’s easier with AI… except having to correct it.

In just the three years since OpenAI released ChatGPT, not only has online life changed at home—it’s also changed at work. Some of the biggest software companies today, like Microsoft and Google, are advancing a vision of an AI-powered future where people don’t write their own emails anymore, or make their own slide decks for presentations, or compile their own reports, or even read their own notifications, because AI will do it for them.

But it turns out that offloading this type of work onto AI has consequences.

In September, a group of researchers from Stanford University and BetterUp Labs published findings from an ongoing study into how AI-produced work impacts the people who receive that work. And it turns out that the people who receive that work aren’t its biggest fans, because it’s not just work that they’re having to read, review, and finalize. It is, as the researchers called it, “workslop.”

Workslop is:

“AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task. It can appear in many different forms, including documents, slide decks, emails, and code. It often looks good, but is overly long, hard to read, fancy, or sounds off.”

Far from an indictment of AI tools in the workplace, the study instead reveals the economic and human costs that come with this new phenomenon of “workslop.” The problem, according to the researchers, is not that people are using technology to help accomplish tasks. The problem is that people are using technology to create ill-fitting work that still requires human input, review, and correction down the line.

“The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work,” the researchers wrote.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Dr. Kristina Rapuano, senior research scientist at BetterUp Labs, about AI tools in the workplace, the potential lost productivity costs that come from “workslop,” and the sometimes dismal opinions that teammates develop about one another when receiving this type of work.

“This person said, ‘Having to read through workslop is demoralizing. It takes away time I could be spending doing my job because someone was too lazy to do theirs.’”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Would you sext ChatGPT? (Lock and Code S06E22)

3 November 2025 at 10:30

This week on the Lock and Code podcast…

In the final, cold winter months of the year, ChatGPT could be heating up.

On October 14, OpenAI CEO Sam Altman said that the “restrictions” that his company previously placed on their flagship product, ChatGPT, would be removed, allowing, perhaps, for “erotica” in the future.

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote on the platform X. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”

This wasn’t the first time that OpenAI or its chief executive had addressed mental health.

On August 26, OpenAI published a blog titled “Helping people when they need it most,” which explored new protections for users, including stronger safeguards for long conversations, better recognition of people in crisis, and easier access to outside emergency services and even family and friends. The blog alludes to “recent heartbreaking cases of people using ChatGPT in the midst of acute crises,” but it never explains what, explicitly, that means.

But on the very same day the blog was posted, OpenAI was sued for the alleged role that ChatGPT played in the suicide of a 16-year-old boy. According to chat logs disclosed in the lawsuit, the teenager spoke openly to the AI chatbot about suicide, he shared that he wanted to leave a noose in his room, and he even reportedly received an offer to help write a suicide note.

Bizarrely, this tragedy plays a role in the larger story, because it was Altman himself who tied the company’s mental health campaign to its possible debut of erotic content.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”

What “erotica” entails is unclear, but one could safely assume it involves the capabilities currently present in ChatGPT: generative chat, of course, but also image generation.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Deb Donig, on faculty at the UC Berkeley School of Information, about the ethics of AI erotica, the possible accountability that belongs to users and to OpenAI, and why intimacy with an AI-powered chatbot feels so strange.

“A chat bot offers, we might call it, ‘intimacy’s performance,’ without any of its substance, so you get all of the linguistic markers of connection, but no possibility for, for example, rejection. That’s part of the human experience of a relationship.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

What does Google know about me? (Lock and Code S06E21)

20 October 2025 at 10:26

This week on the Lock and Code podcast…

Google is everywhere in our lives. Its reach into our data extends just as far.

After investigating how much data Facebook had collected about him in his nearly 20 years with the platform, Lock and Code host David Ruiz had similar questions about the other Big Tech platforms in his life, and this time, he turned his attention to Google.

Google dominates much of the modern web. It has a search engine that handles billions of requests a day. Its tracking and metrics service, Google Analytics, is embedded into reportedly tens of millions of websites. Its Maps feature not only serves up directions around the world but also tracks traffic patterns across countless streets, highways, and more. Its online services for email (Gmail), cloud storage (Google Drive), and office software (Google Docs, Sheets, and Slides) are household names. And it also runs the most popular web browser in the world, Google Chrome, and the most popular operating system in the world, Android.

Today, on the Lock and Code podcast, Ruiz explains how he requested his data from Google and what he learned not only about the company, but about himself, in the process. That includes the 142,729 items in his Gmail inbox right now, along with the 8,079 searches he made, 3,050 related websites he visited, and 4,610 YouTube videos he watched in just the past 18 months. It also includes his late-night searches for worrying medical symptoms, his movements across the US as his IP address was recorded when logging into Google Maps, his emails, his photos, his notes, his old freelance work as a journalist, his outdated cover letters when he was unemployed, his teenage-year Google Chrome bookmarks, his flight and hotel searches, and even the searches he made within his own Gmail inbox and his Google Drive.
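
For anyone curious to run a similar tally on their own export, here is a minimal sketch, assuming the JSON layout Google Takeout has used for its “My Activity” data; the file path, record fields, and title prefix shown here are assumptions that may differ between exports:

    # Minimal sketch: counting searches in a Google Takeout "My Activity" export.
    # Assumes a JSON list of records with a "title" like "Searched for ..." and
    # an ISO 8601 "time" field; exact paths and field names may vary.
    import json
    from datetime import datetime, timedelta, timezone

    with open("Takeout/My Activity/Search/MyActivity.json", encoding="utf-8") as f:
        records = json.load(f)

    cutoff = datetime.now(timezone.utc) - timedelta(days=18 * 30)  # ~18 months

    searches = [
        r for r in records
        if r.get("title", "").startswith("Searched for")
        and "time" in r
        and datetime.fromisoformat(r["time"].replace("Z", "+00:00")) >= cutoff
    ]
    print(f"{len(searches)} searches in roughly the past 18 months")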

After digging into the data for long enough, Ruiz came to a frightening conclusion: Google knows whatever the hell it wants about him; it just has to look.

But Ruiz wasn’t happy to let the company’s access continue. So he has a plan.

“I am taking steps to change that [access] so that the next time I ask, ‘What does Google know about me?’ I can hopefully answer: A little bit less.”

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.
