Received yesterday — 13 February 2026

The first Android 17 beta is now available on Pixel devices

13 February 2026 at 15:58

You might have noticed some reporting a few days ago that Android 17 was rolling out in beta form, but that didn't happen. For reasons Google still has not explained, the release was canceled. Two days later, Android 17 is here for real. If you've got a recent Pixel device, you can try the latest version today, but don't expect big changes just yet—there's still a long way to go before release.

Google will probably have more to say about feature changes for Android 17 in the coming months, but this first wide release is aimed mostly at testing system and API changes. One of the biggest changes in the beta is expanded support for adaptive apps, which ensures that apps can scale to different screen sizes. That makes apps more usable on large-screen devices like tablets and foldables with multiple displays.

We first saw this last year in Android 16, but developers were permitted to opt out of support. The new adaptive app roadmap puts an end to that. Any app that targets Android 17 (API level 37) must support resizing and windowed multitasking. Apps can continue to target the older API for the time being, but Google filters apps from the Play Store if they don't keep up.

© Ryan Whitwam

Platforms bend over backward to help DHS censor ICE critics, advocates say

13 February 2026 at 07:00

Pressure is mounting on tech companies to shield users from unlawful government requests that advocates say are making it harder to reliably share information about Immigration and Customs Enforcement (ICE) online.

Alleging that ICE officers are being doxed or otherwise endangered, Trump officials have spent the last year targeting an unknown number of users and platforms with demands to censor content. Early lawsuits show that platforms have caved, even though experts say they could refuse these demands without a court order.

In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content "to control what the public can see, hear, or say about ICE operations."

Read full article

Comments

© Aurich Lawson | Getty Images

Received before yesterday

It took two years, but Google released a YouTube app on Vision Pro

12 February 2026 at 14:53

When Apple's Vision Pro mixed reality headset launched in February 2024, users were frustrated at the lack of a proper YouTube app—a significant disappointment given the device's focus on video content consumption, and YouTube's strong library of immersive VR and 360 videos. That complaint continued through the release of the second-generation Vision Pro last year, including in our review.

Now, two years later, an official YouTube app from Google has launched on the Vision Pro's app store. It's not just a port of the iPad app, either—it has panels arranged spatially in front of the user as you'd expect, and it supports 3D videos, as well as 360- and 180-degree ones.

YouTube's App Store listing says users can watch "every video on YouTube" (there's a screenshot of a special interface for Shorts vertical videos, for example) and that they get "the full signed-in experience" with watch history and so on.

© YouTube

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

12 February 2026 at 14:42

On Thursday, Google announced that "commercially motivated" actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products, one that frames the company as both victim and hero, which is not unusual for these self-authored assessments. Google calls the illicit activity "model extraction" and considers it intellectual property theft, a somewhat loaded position given that Google's LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google's Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI's terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

Read full article

Comments

© Google

We let Chrome's Auto Browse agent surf the web for us—here's what happened

12 February 2026 at 07:00

We are now a few years into the AI revolution, and talk has shifted from who has the best chatbot to whose AI agent can do the most things on your behalf. Unfortunately, AI agents are still rough around the edges, so tasking them with anything important is not a great idea. OpenAI launched its Atlas agent late last year, which we found to be modestly useful, and now it's Google's turn.

Unlike the OpenAI agent, Google's new Auto Browse agent has extraordinary reach because it's part of Chrome, the world's most popular browser by a wide margin. Google began rolling out Auto Browse (in preview) earlier this month to AI Pro and AI Ultra subscribers, allowing them to send the agent across the web to complete tasks.

I've taken Chrome's agent for a spin to see whether you can trust it to handle tedious online work for you. For each test, I lay out the problem I need to solve, how I prompted the robot, and how well (or not) it handled the job.

© Aurich Lawson

Google recovers "deleted" Nest video in high-profile abduction case

11 February 2026 at 15:15

Like most cloud-enabled home security cameras, Google's Nest products don't provide long-term storage unless you pay a monthly fee. That video may not vanish into the digital aether right on time, though. Investigators involved with the high-profile abduction of Nancy Guthrie have released video from Guthrie's Nest doorbell camera—video that was believed to have been deleted because Guthrie wasn't paying for the service.

Google's cameras connect to the recently upgraded Home Premium subscription service. For $10 per month, you get 30 days of stored events, and $20 gets you 60 days of events with 10 days of the full video. If you don't pay anything, Google only saves three hours of event history. After that, the videos are deleted, at least as far as the user is concerned. Newer Nest cameras have limited local storage that can cache clips for a few hours in case connectivity drops out, but there is no option for true local storage. Guthrie's camera was reportedly destroyed by the perpetrators.

Suspect in abduction approaches doorbell camera.

Expired videos are no longer available to the user, and Google won't restore them even if you later upgrade to a premium account. However, that doesn't mean the data is truly gone. Nancy Guthrie was abducted from her home in the early hours of February 1, and at first, investigators said there was no video of the crime because the doorbell camera was not on a paid account. Yet, video showing a masked individual fiddling with the camera was published on February 10.

© Ryan Whitwam

Google's Personal Data Removal Tool Now Covers Government IDs

10 February 2026 at 15:00
Google on Tuesday expanded its "Results about you" tool to let users request the removal of Search results containing government-issued ID numbers -- including driver's licenses, passports and Social Security numbers -- adding to the tool's existing ability to flag results that surface phone numbers, email addresses, and home addresses. The update, announced on Safer Internet Day, is rolling out in the U.S. over the coming days. Google also streamlined its process for reporting non-consensual explicit images on Search, allowing users to select and submit removal requests for multiple images at once rather than reporting them individually.

Upgraded Google safety tools can now find and remove more of your personal info

10 February 2026 at 11:59

Do you feel popular? There are people on the Internet who want to know all about you! Unfortunately, they don't have the best of intentions, but Google has some handy tools to address that, and they've gotten an upgrade today. The "Results About You" tool can now detect and remove more of your personal information. Plus, the tool for removing non-consensual explicit imagery (NCEI) is faster to use. All you have to do is tell Google your personal details first—that seems safe, right?

With today's upgrade, Results About You gains the ability to find and remove pages that include ID numbers like your passport, driver's license, and Social Security numbers. You can add these to Google's ongoing scans from the settings in Results About You; just click the ID numbers section to enable detection.

Naturally, Google has to know what it's looking for to remove it. So you need to provide at least part of those numbers. Google asks for the full driver's license number, which is fine, as it's not as sensitive. For your passport and SSN, you only need the last four digits, which is enough for Google to find the full numbers on webpages.
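
Conceptually, matching on a partial identifier is just a normalized suffix check. As a toy illustration only (not Google's actual implementation), the idea looks like this in Python:

```python
def matches_partial_id(text: str, last_four: str) -> bool:
    """Return True if an ID number found on a page ends with the
    user-supplied last four digits, ignoring common separators."""
    digits = "".join(ch for ch in text if ch.isdigit())
    # Require more digits than the suffix itself, so a bare "last four"
    # on a page doesn't count as a match for the full number.
    return len(digits) > 4 and digits.endswith(last_four)

# A page containing the full SSN matches on the last four digits alone.
matches_partial_id("123-45-6789", "6789")   # True
matches_partial_id("987-65-4321", "6789")   # False
```

The real system presumably also has to classify what kind of number it found on a page, which may be why the driver's license flow asks for the full value.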

© Aurich Lawson

Apple and Google Agree To Change App Stores After 'Effective Duopoly' Claim

10 February 2026 at 11:00
Apple and Google have agreed to a set of commitments to the UK's Competition and Markets Authority that will prevent them from giving preferential treatment to their own apps and require greater transparency around how third-party apps are approved for sale. The CMA announced the measures on Tuesday, seven months after it declared that the two companies held an "effective duopoly" over the UK's mobile app ecosystem. Both companies also committed to not using data gathered from third-party developers in ways the regulator deems unfair. The CMA granted both app stores "strategic market status" in October 2025, a designation that gave it the authority to demand changes. CMA head Sarah Cardell called the commitments "important first steps" and said the regulator would "closely monitor" implementation. Technology analyst Paolo Pescatore described the announcement as a "pragmatic first step" but noted some may see it as "addressing the low-hanging fruit." The UK's app economy is the largest in Europe by revenue and number of developers, generating an estimated 1.5% of the country's GDP.

Alphabet selling very rare 100-year bonds to help fund AI investment

Alphabet has lined up banks to sell a rare 100-year bond, stepping up a borrowing spree by Big Tech companies racing to fund their vast investments in AI this year.

The so-called century bond will form part of a debut sterling issuance this week by Google’s parent company, said people familiar with the matter.

Alphabet was also selling $20 billion of dollar bonds on Monday and lining up a Swiss franc bond sale, the people said. The dollar portion of the deal was upsized from $15 billion because of strong demand, they added.

© Torsten Asmus via Getty

Google experiments with locking YouTube Music lyrics behind paywall

9 February 2026 at 15:40

Google continues to turn the screws on free YouTube users, expanding a test that restricts access to song lyrics on YouTube Music. Users without a premium subscription have found that Google's streaming music service only shows song lyrics a few times before demanding money.

For as long as YouTube Music has existed, lyrics have been accessible to all users in the mobile app. That started to change over recent months as Google tested a paywall. The lyrics section still appears in the app when playing a song with a free account, but opening it eats into a limited allotment of lyric views. A substantial uptick in user reports, spotted by 9to5Google, suggests this restriction is now rolling out widely.

"You have [x] views remaining," the app now warns free users who access lyrics. It looks like users get five free lyric views before they have to pay up. Google has still neglected to officially announce the addition of this feature to its Premium subscription—there's no mention of lyrics being part of the paid tier on Google's support page.

© Google

Is your phone listening to you? (re-air) (Lock and Code S07E03)

9 February 2026 at 13:49

This week on the Lock and Code podcast…

In January, Google settled a lawsuit that pricked up a few ears: It agreed to pay $68 million to a wide array of people who sued the company together, alleging that Google’s voice-activated smart assistant had secretly recorded their conversations, which were then sent to advertisers to target them with promotions.

Google admitted no wrongdoing in the settlement agreement, but the fact stands that one of the largest phone makers in the world decided to forgo a trial over some potentially explosive surveillance allegations. It's a decision the public has seen before: Apple agreed to pay $95 million last year to settle similar legal claims against its smart assistant, Siri.

Back-to-back, the stories raise a question that just seems to never go away: Are our phones listening to us?

This week, on the Lock and Code podcast with host David Ruiz, we revisit an episode from last year in which we tried to find the answer. In speaking to Electronic Frontier Foundation Staff Technologist Lena Cohen about mobile tracking overall, it becomes clear that, even if our phones aren’t literally listening to our conversations, the devices are stuffed with so many novel forms of surveillance that we need not say something out loud to be predictably targeted with ads for it.

“Companies are collecting so much information about us and in such covert ways that it really feels like they’re listening to us.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Autodesk Takes Google To Court Over AI Movie Software Named 'Flow'

9 February 2026 at 13:01
Autodesk has sued Google in San Francisco federal court, alleging the search giant infringed its "Flow" trademark by launching competing AI-powered software for movie, TV and video game production in May 2025. Autodesk says it has used the Flow name since September 2022 and that Google assured it would not commercialize a product under the same name -- then filed a trademark application in Tonga, where filings are not publicly accessible, before seeking U.S. protection.

Google Lines Up 100-Year Sterling Bond Sale

9 February 2026 at 12:01
Alphabet has lined up banks to sell a rare 100-year bond, stepping up a borrowing spree by Big Tech companies racing to fund their vast investments in AI this year. From a report: The so-called century bond will form part of a debut sterling issuance this week by Google's parent company, according to people familiar with the matter. Alphabet was also selling $15bn of dollar bonds on Monday and lining up a Swiss franc bond sale, the people said.

Century bonds -- long-term borrowing at its most extreme -- are highly unusual, although a flurry were sold during the period of very low interest rates that followed the financial crisis, including by governments such as Austria and Argentina. The University of Oxford, EDF and the Wellcome Trust -- the most recent in 2018 -- are the only issuers to have previously tapped the sterling century market. Such sales are even rarer in the tech sector, with most of the industry's biggest groups issuing up to 40 years, although IBM sold a 100-year bond back in 1996. Big Tech companies and their suppliers are expected to invest almost $700bn in AI infrastructure this year and are increasingly turning to the debt markets to finance the giant data centre build-out.

Michael Burry, writing on Substack: Alphabet looking to issue a 100-year bond. Last time this happened in tech was Motorola in 1997, which was the last year Motorola was considered a big deal. At the start of 1997, Motorola was a top 25 market cap and top 25 revenue corporation in America. Never again. The Motorola corporate brand in 1997 was ranked #1 in the US, ahead of Microsoft. In 1998, Nokia overtook Motorola in cell phones, and after the iPhone it fell out of the consumer eye. Today Motorola is the 232nd largest market cap with only $11 billion in sales.

Waymo leverages Genie 3 to create a world model for self-driving cars

6 February 2026 at 15:44

Google spinoff Waymo is in the midst of expanding its self-driving car fleet into new regions. Waymo touts more than 200 million miles of driving that informs how the vehicles navigate roads, but the company's AI has also driven billions of miles virtually, and there's a lot more to come with the new Waymo World Model. Based on Google DeepMind's Genie 3, Waymo says the model can create "hyper-realistic" simulated environments that train the AI on situations that are rarely (or never) encountered in real life—like snow on the Golden Gate Bridge.

Until recently, the autonomous driving industry relied entirely on training data collected from real cars and real situations. That means rare, potentially dangerous events are not well represented in training data. The Waymo World Model aims to address that by allowing engineers to create simulations with simple prompts and driving inputs.

Google revealed Genie 3 last year, positioning it as a significant upgrade over other world models by virtue of its long-horizon memory. In Google's world model, you can wander away from a given object, and when you look back, the model will still "remember" how that object is supposed to look. In earlier attempts at world models, the simulation would lose that context almost immediately. With Genie 3, the model can remember details for several minutes.

© Ryan Whitwam

Why Darren Aronofsky thought an AI-generated historical docudrama was a good idea

6 February 2026 at 06:30

Last week, filmmaker Darren Aronofsky's AI studio Primordial Soup and Time magazine released the first two episodes of On This Day... 1776. The year-long series of short-form videos features short vignettes describing what happened on that day of the American Revolution 250 years ago, but it does so using “a variety of AI tools” to produce photorealistic scenes containing avatars of historical figures like George Washington, Thomas Paine, and Benjamin Franklin.

In announcing the series, Time Studios President Ben Bitonti said the project provides "a glimpse at what thoughtful, creative, artist-led use of AI can look like—not replacing craft but expanding what’s possible and allowing storytellers to go places they simply couldn’t before."

The trailer for "On This Day... 1776."

Outside critics were decidedly less excited about the effort. The AV Club took the introductory episodes to task for "repetitive camera movements [and] waxen characters" that make for "an ugly look at American history." CNET said that this "AI slop is ruining American history," calling the videos a "hellish broth of machine-driven AI slop and bad human choices." The Guardian lamented that the "once-lauded director of Black Swan and The Wrestler has drowned himself in AI slop," calling the series "embarrassing," "terrible," and "ugly as sin." I could go on.

© Primordial Soup

Neocities founder stuck in chatbot hell after Bing blocked 1.5 million sites

5 February 2026 at 14:32

One of the weirdest corners of the Internet is suddenly hard to find on Bing, after the search engine inexplicably started blocking approximately 1.5 million independent websites hosted on Neocities.

Founded in 2013 to archive the "aesthetic awesomeness" of GeoCities websites, Neocities keeps the spirit of the 1990s Internet alive. It lets users design free websites without relying on standardized templates devoid of personality. For hundreds of thousands of people building websites around art, niche fandoms, and special expertise—or simply seeking a place to get a little weird online—Neocities provides a blank canvas that can be endlessly personalized, unlike a cookie-cutter Facebook page. Delighted visitors discovering these sites are more likely to navigate by hovering flashing pointers over a web of spinning GIFs than clicking a hamburger menu or infinitely scrolling.

That's the style of Internet that Kyle Drake, Neocities' founder, strives to maintain. So he was surprised when he noticed Bing curiously blocking Neocities sites last summer. At first, the issue seemed resolved after he contacted Microsoft, but after receiving more recent reports that users were struggling to log in, Drake discovered that another complete block had been implemented in January. Even more concerning, after delisting the front page, Bing had started pointing users to a copycat site where, he was alarmed to learn, they were entering their login credentials.

© Aurich Lawson | NeoCities

Google hints at big AirDrop expansion for Android "very soon"

5 February 2026 at 13:06

There is very little functional difference between iOS and Android these days. The systems could integrate quite well if it weren't for the way companies prioritize lock-in over compatibility. At least in the realm of file sharing, Google is working to fix that. After adding basic AirDrop support to Pixel 10 devices last year, the company says we can look forward to seeing it on many more phones this year.

At present, the only Android phones that can initiate an AirDrop session with Apple devices are Google's latest Pixel 10 devices. When Google announced this upgrade, it vaguely suggested that more developments would come, and it now looks like we'll see more AirDrop support soon.

According to Android Authority, Google is planning a big AirDrop expansion in 2026. During an event at the company's Taipei office, Eric Kay, Google's VP of engineering for Android, laid out the path ahead.

© Ryan Whitwam

Google Plots Big Expansion in India as US Restricts Visas

3 February 2026 at 14:02
Alphabet is plotting to dramatically expand its presence in India [non-paywalled source], with the possibility of taking millions of square feet in new office space in Bangalore, India's tech hub. From a report: Google's parent company has leased one office tower and purchased options on two others in Alembic City, a development in the Whitefield tech corridor, totaling 2.4 million square feet, according to people familiar with the deal. The first tower is expected to open to employees in the coming months, while construction on the remaining two is set to conclude next year. Options in the real estate industry give would-be tenants the exclusive right to rent, or in some cases buy, a property at a predetermined price within a specific time frame. It's also possible Alphabet will not exercise the option to use the additional towers. If it does take all of the space, the complex could accommodate as many as 20,000 additional staff, which could more than double the company's footprint in India, said the people, asking not to be identified because the plans aren't public. Alphabet currently employs around 14,000 in the country, out of a global workforce of roughly 190,000. [...] US President Donald Trump's visa restrictions have made it harder to bring foreign talent to America, prompting some companies to recruit more staff overseas. India has become an increasingly important place for US companies to hire, particularly in the race to dominate artificial intelligence.

Google court filings suggest ChromeOS has an expiration date

3 February 2026 at 13:25

Chromebooks debuted 16 years ago with the limited release of Google's Cr-48, an unassuming compact laptop that was provided free to select users. From there, Chromebooks became one of the most popular budget computing options and a common fixture in schools and businesses. According to some newly uncovered court documents, Google's shift to Android PCs means Chromebooks have an expiration date in 2034.

The documents were filed as part of Google's long-running search antitrust case, which began in 2020 and reached a verdict in 2024. While Google is still seeking to have the guilty verdict overturned, it has escaped most of the remedies that government prosecutors requested. According to The Verge, the company's plans for Chromebooks and the upcoming Android-based Aluminium came up in filings from the remedy phase of the trial.

As Google moves toward releasing Aluminium, it sought to keep the upcoming machines above the fray and retain the Chrome browser (which it did). In Judge Amit Mehta's final order, devices running ChromeOS or a ChromeOS successor are excluded. To get there, Google had to provide a little more detail on its plans.

© Google

Google Project Genie lets you create interactive worlds from a photo or prompt

29 January 2026 at 15:26

Last year, Google showed off Genie 3, an updated version of its AI world model with impressive long-term memory that allowed it to create interactive worlds from a simple text prompt. At the time, Google only provided Genie to a small group of trusted testers. Now, it's available more widely as Project Genie, but only for those paying for Google's most expensive AI subscription.

World models are exactly what they sound like—an AI that generates a dynamic environment on the fly. They're not technically 3D worlds, though. World models like Genie 3 create a video that responds to your control inputs, allowing you to explore the simulation as if it were a real virtual world. Genie 3 was a breakthrough in world models because it could remember details of the world it was creating for a much longer time. But in this context, a "long time" is a couple of minutes.

Project Genie is essentially a cleaned-up version of Genie 3, which plugs into updated AI models like Nano Banana Pro and Gemini 3. Google has a number of pre-built worlds available in Project Genie, but it's the ability to create new things that makes it interesting. You can provide an image for reference or simply tell Genie what you want from the environment and the character.

© Google

Google Dismantles Massive Proxy Network That Hid Espionage, Cybercrime for Nation-State Actors

29 January 2026 at 03:45

Google dismantled what is believed to be one of the world's largest residential proxy networks, taking legal action to seize domains controlling IPIDEA's infrastructure and removing millions of consumer devices unknowingly enrolled as proxy exit nodes.

The takedown involved platform providers, law enforcement and security firms working to eliminate a service that enabled espionage, cybercrime and information operations at scale.

Residential proxy networks sell access to IP addresses owned by internet service providers and assigned to residential customers. By routing traffic through consumer devices worldwide, attackers mask malicious activity behind legitimate-looking IP addresses, creating significant detection challenges for network defenders.

IPIDEA became notorious for facilitating multiple botnets, with its software development kits playing key roles in device enrollment while its proxy software enabled attacker control. This includes the BadBox 2.0 botnet Google targeted with legal action last year, plus the more recent Aisuru and Kimwolf botnets.

The scale of abuse is staggering. During just one week in January this year, Google observed over 550 individual threat groups it tracks using IP addresses associated with IPIDEA exit nodes to obfuscate their activities. These groups originated from China, North Korea, Iran, and Russia, conducting activities including access to victim software-as-a-service environments, on-premises infrastructure compromise, and password spray attacks.

"While proxy providers may claim ignorance or close these security gaps when notified, enforcement and verification is challenging given intentionally murky ownership structures, reseller agreements, and diversity of applications," Google's analysis stated.

Google's investigation revealed that many ostensibly independent residential proxy brands actually connect to the same actors controlling IPIDEA. The company identified 13 proxy and VPN brands as part of the IPIDEA network, including 360 Proxy, ABC Proxy, Cherry Proxy, Door VPN, IP 2 World, Luna Proxy, PIA S5 Proxy and others.

The same actors control multiple software development kit domains marketed to app developers as monetization tools. These SDKs support Android, Windows, iOS and WebOS platforms, with developers paid per download for embedding the code. Once incorporated into applications, the SDKs transform devices into proxy network exit nodes while providing whatever primary functionality the app advertised.

Google analyzed over 600 Android applications across multiple download sources containing code connecting to IPIDEA command-and-control domains. These apps appeared largely benign—utilities, games and content—but utilized monetization SDKs enabling proxy behavior without clear disclosure to users.

The technical infrastructure operates through a two-tier system. Upon startup, infected devices connect to Tier One domains and send diagnostic information. They receive back a list of Tier Two servers to contact for proxy tasks. The device then polls these Tier Two servers periodically, receiving instructions to proxy traffic to specific domains and establishing dedicated connections to route that traffic.
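
The check-in flow described above can be sketched in a few lines of Python. This is a hypothetical illustration of the two-tier pattern only: the domain names, message fields, and return values below are invented placeholders, not IPIDEA's actual protocol.

```python
# Hypothetical sketch of a two-tier C2 check-in flow. All domains
# and fields are invented for illustration; network calls are stubbed.

BOOTSTRAP_TIER_ONE = ["t1.proxy-c2.example"]  # hard-coded in the SDK

def checkin_tier_one(domain: str, diagnostics: dict) -> list:
    """Send device diagnostics to a Tier One domain; the reply lists
    Tier Two servers the device should poll for proxy tasks."""
    # Stubbed out: a real client would make a network request here.
    return ["t2-a.proxy-c2.example", "t2-b.proxy-c2.example"]

def poll_tier_two(server: str) -> dict:
    """Poll a Tier Two server for an instruction: a destination the
    device should open a dedicated connection to and relay traffic toward."""
    # Stubbed out: returns a canned proxy instruction.
    return {"action": "proxy", "target": "destination.example", "port": 443}

def run_checkin_cycle() -> list:
    diagnostics = {"platform": "android", "sdk_version": "1.2.3"}
    tier_two = checkin_tier_one(BOOTSTRAP_TIER_ONE[0], diagnostics)
    # On a real node, this polling would repeat periodically.
    return [poll_tier_two(s) for s in tier_two]
```

The key structural point is the indirection: only the Tier One domains need to be baked into the SDK, so the Tier Two server pool can scale up and down with demand, as Google's daily counts suggest.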

Two-tier C2 infrastructure. (Source: Google Threat Intelligence)

Google identified approximately 7,400 Tier Two servers as of the takedown. The number changes daily, consistent with demand-based scaling. These servers are hosted globally, including in the United States.

Analysis of Windows binaries revealed 3,075 unique file hashes where dynamic analysis recorded DNS requests to at least one Tier One domain. Some posed as legitimate software like OneDriveSync and Windows Update, though IPIDEA actors didn't directly distribute these trojanized applications.

Residential proxies pose direct risks to the consumers whose devices become exit nodes. Users knowingly or unknowingly lend their IP addresses and devices as launchpads for hacking and unauthorized activity, potentially causing online services to flag or block those addresses. Proxy applications also introduce security vulnerabilities to home networks.

When a device becomes an exit node, network traffic the user doesn't control passes through it. This means attackers can access other devices on the same private network, effectively exposing security vulnerabilities to the internet. Google's analysis confirmed IPIDEA proxy software not only routed traffic through exit nodes but also sent traffic to devices to compromise them.

Google's disruption involved three coordinated actions. First, the company took legal action to seize domains controlling devices and proxying traffic through them. Second, Google shared technical intelligence on discovered IPIDEA software development kits with platform providers, law enforcement and research firms to drive ecosystem-wide enforcement.

Third, Google ensured Play Protect, Android's built-in security system, automatically warns users and removes applications incorporating IPIDEA SDKs while blocking future installation attempts. This protects users on certified Android devices with Google Play services.

Google believes the actions significantly degraded IPIDEA's proxy network and business operations, reducing available devices by millions. Because proxy operators share device pools through reseller agreements, the disruption likely impacts affiliated entities downstream.


The residential proxy market has become what Google describes as a "gray market" thriving on deception—hijacking consumer bandwidth to provide cover for global espionage and cybercrime. Consumers should exercise extreme caution with applications offering payment for "unused bandwidth" or "internet sharing," as these represent primary growth vectors for illicit proxy networks.

Google urges users to purchase connected devices only from reputable manufacturers and verify certification. The company's Android TV website provides up-to-date partner lists, while users can check Play Protect certification status through device settings.

The company calls for proxy accountability and policy reform. While some providers may behave ethically and enroll devices only with clear consumer consent, any claims of "ethical sourcing" must be backed by transparent, auditable proof. App developers bear responsibility for vetting monetization SDKs they integrate.

Google begins rolling out Chrome's "Auto Browse" AI agent today

28 January 2026 at 13:00

Google began stuffing Gemini into its dominant Chrome browser several months ago, and today the AI is expanding its capabilities considerably. Google says the chatbot will be easier to access and connect to more Google services, but the biggest change is the addition of Google's autonomous browsing agent, which it has dubbed Auto Browse. Similar to tools like OpenAI Atlas, Auto Browse can handle tedious tasks in Chrome so you don't have to.

The newly unveiled Gemini features in Chrome are accessible from the omnipresent AI button that has been lurking at the top of the window for the last few months. Initially, that button only opened Gemini in a pop-up window, but Google now says it will default to a split-screen or "Sidepanel" view. Google confirmed the update began rolling out over the past week, so you may already have it.

You can still pop Gemini out into a floating window, but the split-view gives Gemini more room to breathe while manipulating a page with AI. This is also helpful when calling other apps in the Chrome implementation of Gemini. The chatbot can now access Gmail, Calendar, YouTube, Maps, Google Shopping, and Google Flights right from the Chrome window. Google technically added this feature around the middle of January, but it's only talking about it now.


© Google

AI Overviews gets upgraded to Gemini 3 with a dash of AI Mode

27 January 2026 at 12:00

It can be hard sometimes to keep up with the deluge of generative AI in Google products. Even if you try to avoid it all, there are some features that still manage to get in your face. Case in point: AI Overviews. This AI-powered search experience has a reputation for getting things wrong, but you may notice some improvements soon. Google says AI Overviews is being upgraded to the latest Gemini 3 models with a more conversational bent.

In just the last year, Google has radically expanded the number of searches on which you get an AI Overview at the top. Today, AI Overviews will almost always have an answer for your query, and until now it has relied mostly on models in Google's Gemini 2.5 family. There was nothing wrong with Gemini 2.5 as generative AI models go, but Gemini 3 is a little better by every metric.

There are, of course, multiple versions of Gemini 3, and Google doesn't like to be specific about which ones appear in your searches. What Google does say is that AI Overviews chooses the right model for the job. So if you're searching for something simple for which there are a lot of valid sources, AI Overviews may manifest something like Gemini 3 Flash without running through a ton of reasoning tokens. For a complex "long tail" query, it could step up the thinking or move to Gemini 3 Pro (for paying subscribers).
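Google does not disclose its routing logic, but the model selection it describes can be pictured as a simple dispatcher. Everything in this sketch is hypothetical: the thresholds, the popularity heuristic, and the tier names are illustrations, not Google's implementation.

```python
def route_model(query: str, is_subscriber: bool, source_count: int) -> str:
    """Pick a model tier for a search query (illustrative heuristic only)."""
    # "Simple" here means a short query with plenty of valid sources.
    simple = source_count > 100 and len(query.split()) <= 6
    if simple:
        return "flash-low-reasoning"   # cheap answer, few reasoning tokens
    if is_subscriber:
        return "pro"                   # complex query from a paying user
    return "flash-high-reasoning"      # complex query, step up the thinking

route_model("best pizza near me", False, 5000)
route_model("compare tax impact of dual-state residency for contractors", True, 12)
```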


© Google

A WhatsApp bug lets malicious media files spread through group chats

27 January 2026 at 06:55

WhatsApp is going through a rough patch. Some users would argue it has been ever since Meta acquired the once widely trusted messaging platform. User sentiment has shifted from “trusted default messenger” to a grudgingly necessary Meta product.

Privacy-aware users still see WhatsApp as one of the more secure mass-market messaging platforms if you lock down its settings. Even then, many remain uneasy about Meta’s broader ecosystem, and wish all their contacts would switch to a more secure platform.

Back to current affairs, which will only reinforce that sentiment.

Google’s Project Zero has just disclosed a WhatsApp vulnerability where a malicious media file, sent into a newly created group chat, can be automatically downloaded and used as an attack vector.

The bug affects WhatsApp on Android and involves zero‑click media downloads in group chats. You can be attacked simply by being added to a group and having a malicious file sent to you.

According to Project Zero, the attack is most likely to be used in targeted campaigns, since the attacker needs to know or guess at least one contact. While targeted, the attack is relatively easy to repeat once an attacker has a likely target list.

And to put a cherry on top for WhatsApp’s competitors, there is a potentially even more serious concern for the popular messaging platform: an international group of plaintiffs has sued Meta Platforms, alleging the WhatsApp owner can store, analyze, and access virtually all of users’ private communications, despite WhatsApp’s end-to-end encryption claims.

How to secure WhatsApp

Reportedly, Meta pushed a server change on November 11, 2025, but Google says that only partially resolved the issue. So, Meta is working on a comprehensive fix.

Google’s advice is to disable Automatic Download or enable WhatsApp’s Advanced Privacy Mode so that media is not automatically downloaded to your phone.

And you’ll need to keep WhatsApp updated to get the latest patches, which is true for any app and for Android itself.

Turn off auto-download of media

Goal: ensure that no photos, videos, audio, or documents are pulled to the device without an explicit decision.

  • Open WhatsApp on your Android device.
  • Tap the three‑dot menu in the top‑right corner, then tap Settings.
  • Go to Storage and data (sometimes labeled Data and storage usage).
  • Under Media auto-download, you will see three entries: When using mobile data, When connected on Wi‑Fi, and When roaming.
  • For each of these three entries, tap it and uncheck all media types: Photos, Audio, Videos, Documents. Then tap OK.
  • Confirm that each category now shows something like “No media” under it.

Doing this directly implements Project Zero’s guidance to “disable Automatic Download” so that malicious media can’t silently land on your storage as soon as you are dropped into a hostile group.

Stop WhatsApp from saving media to your Android gallery

Even if WhatsApp still downloads some content, you can stop it from leaking into shared storage where other apps and system components see it.

  • In Settings, go to Chats.
  • Turn off Media visibility (or similar option such as Show media in gallery). For particularly sensitive chats, open the chat, tap the contact or group name, find Media visibility, and set it to No for that thread.

WhatsApp runs in a sandbox that should contain the threat, which means keeping media inside WhatsApp makes it harder for a malicious file to be processed by other, possibly more vulnerable components.

Lock down who can add you to groups

The attack chain requires the attacker to add you and one of your contacts to a new group. Reducing who can do that lowers risk.

  • In Settings, tap Privacy.
  • Tap Groups.
  • Change from Everyone to My contacts or ideally My contacts except… and exclude any numbers you do not fully trust.
  • If you use WhatsApp for work, consider keeping group membership strictly to known contacts and approved admins.

Set up two-step verification on your WhatsApp account

Read this guide for Android and iOS to learn how to do that.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

One privacy change I made for 2026 (Lock and Code S07E02)

26 January 2026 at 08:31

This week on the Lock and Code podcast…

When you hear the words “data privacy,” what do you first imagine?

Maybe you picture going into your social media apps and setting your profile and posts to private. Maybe you think about who you’ve shared your location with and deciding to revoke some of that access. Maybe you want to remove a few apps entirely from your smartphone, maybe you want to try a new web browser, maybe you even want to skirt the type of street-level surveillance provided by Automated License Plate Readers, which can record your car model, license plate number, and location on your morning drive to work.

Importantly, all of these are “data privacy,” but trying to do all of these things at once can feel impossible.

That’s why, this year, for Data Privacy Day, Malwarebytes Senior Privacy Advocate (and Lock and Code host) David Ruiz is sharing the one thing he’s doing differently to improve his privacy. And it’s this: He’s given up Google Search entirely.

When Ruiz requested the data that Google had collected about him last year, he saw that the company had recorded an eye-popping 8,000 searches in the span of just 18 months. And those 8,000 searches didn’t just reveal what he was thinking about on any given day—including his shopping interests, his home improvement projects, and his late-night medical concerns—they also revealed when he clicked on an ad based on the words he searched. This type of data, which connects a person’s searches to the likelihood of engaging with an online ad, is vital to Google’s revenue, and it’s the type of thing that Ruiz is seeking to finally cut off.

So, for 2026, he has switched to a new search engine, Brave Search.

Today, on the Lock and Code podcast, Ruiz explains why he made the switch, what he values about Brave Search, and why he also refused to switch to any of the major AI platforms in replacing Google.

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Google will pay $8.25m to settle child data-tracking allegations

20 January 2026 at 06:40

Google has settled yet another class-action lawsuit accusing it of collecting children’s data and using it to target them with advertising. The tech giant will pay $8.25 million to address allegations that it tracked data on apps specifically designated for kids.

AdMob’s mobile data collection

This settlement stems from accusations that apps provided under Google’s “Designed for Families” programme, which was meant to help parents find safe apps, tracked children. Under the terms of this programme, developers were supposed to self-certify COPPA compliance and use advertising SDKs that disabled behavioural tracking. However, some did not, instead using software embedded in the apps that was created by a Google-owned mobile advertising company called AdMob.

When kids used these apps, which included games, AdMob collected data from them, according to the class-action lawsuit. This included IP addresses, device identifiers, usage data, and the child’s location to within five meters, transmitting it to Google without parental consent. The AdMob software could then use that information to display targeted ads to users.

This kind of activity is exactly what the Children’s Online Privacy Protection Act (COPPA) was created to stop. The law requires operators of child-directed services to obtain verifiable parental consent before collecting personal information from children under 13. That includes cookies and other identifiers, which are the core tools advertisers use to track and target people.

The families filing the lawsuit alleged that Google knew this was going on:

“Google and AdMob knew at the time that their actions were resulting in the exfiltration data from millions of children under thirteen but engaged in this illicit conduct to earn billions of dollars in advertising revenue.”

Security researchers had alerted Google to the issue in 2018, according to the filing.

YouTube settlement approved

What’s most disappointing is that these privacy issues keep happening. This news arrives at the same time that a judge approved a settlement on another child privacy case involving Google’s use of children’s data on YouTube. This case dates back to October 2019, the same year that Google and YouTube paid a whopping $170m fine for violating COPPA.

Families in this class action suit alleged that YouTube used cookies and persistent identifiers on child-directed channels, collecting data including IP addresses, geolocation data, and device serial numbers. This is the same thing that it does for adults across the web, but COPPA protects kids under 13 from such activities, as do some state laws.

According to the complaint, YouTube collected this information between 2013 and 2020 and used it for behavioural advertising. This form of advertising infers people’s interests from their identifiers, and it is more lucrative than contextual advertising, which focuses only on a channel’s content.

The case said that various channel owners opted into behavioural advertising, prompting Google to collect this personal information. No parental consent was obtained, the plaintiffs alleged. Channel owners named in the suit included Cartoon Network, Hasbro, Mattel, and DreamWorks Animation.

Under the YouTube settlement (which was agreed in August and recently approved by a judge), families can file claims through YouTubePrivacySettlement.com, although the deadline is this Wednesday. Eligible families are likely to get $20–$30 after attorneys’ fees and administration costs, if 1–2% of eligible families submit claims.

COPPA is evolving

Last year, the FTC amended its COPPA Rule to introduce mandatory opt-in consent for targeted advertising to children, separate from general data-collection consent.

The amendments expand the definition of personal information to include biometric data and government-issued ID information. They also let the FTC use a site operator’s marketing materials to determine whether a site targets children.

Site owners must also now tell parents who they’ll share information with, and the amendments stop operators from keeping children’s personal information forever. If this all sounds like measures that should have been in place to protect children online from the get-go, we agree with you. In any case, companies have until this April to comply with the new rules.

Will the COPPA rules make a difference? It’s difficult to say, given the stream of privacy cases involving Google LLC (which owns YouTube and AdMob, among others). When viewed against Alphabet’s overall earnings, an $8.25m penalty risks being seen as a routine business expense rather than a meaningful deterrent.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Could ChatGPT Convince You to Buy Something?

20 January 2026 at 07:08

Eighteen months ago, it was plausible that artificial intelligence might take a different path than social media. Back then, AI’s development hadn’t consolidated under a small number of big tech firms. Nor had it capitalized on consumer attention, surveilling users and delivering ads.

Unfortunately, the AI industry is now taking a page from the social media playbook and has set its sights on monetizing consumer attention. When OpenAI launched its ChatGPT Search feature in late 2024 and its browser, ChatGPT Atlas, in October 2025, it kicked off a race to capture online behavioral data to power advertising. It’s part of a yearslong turnabout by OpenAI, whose CEO Sam Altman once called the combination of ads and AI “unsettling” and now promises that ads can be deployed in AI apps while preserving trust. The rampant speculation among OpenAI users who believe they see paid placements in ChatGPT responses suggests they are not convinced.

In 2024, AI search company Perplexity started experimenting with ads in its offerings. A few months after that, Microsoft introduced ads to its Copilot AI. Google’s AI Mode for search now increasingly features ads, as does Amazon’s Rufus chatbot. OpenAI announced on Jan. 16, 2026, that it will soon begin testing ads in the unpaid version of ChatGPT.

As a security expert and data scientist, we see these examples as harbingers of a future where AI companies profit from manipulating their users’ behavior for the benefit of their advertisers and investors. It’s also a reminder that the time to steer the direction of AI development away from private exploitation and toward public benefit is quickly running out.

The functionality of ChatGPT Search and its Atlas browser is not really new. Meta, commercial AI competitor Perplexity and even ChatGPT itself have had similar AI search features for years, and both Google and Microsoft beat OpenAI to the punch by integrating AI with their browsers. But OpenAI’s business positioning signals a shift.

We believe the ChatGPT Search and Atlas announcements are worrisome because there is really only one way to make money on search: the advertising model pioneered ruthlessly by Google.

Advertising model

Ruled a monopolist in U.S. federal court, Google has earned more than US$1.6 trillion in advertising revenue since 2001. You may think of Google as a web search company, or a streaming video company (YouTube), or an email company (Gmail), or a mobile phone company (Android, Pixel), or maybe even an AI company (Gemini). But those products are ancillary to Google’s bottom line. The advertising segment typically accounts for 80% to 90% of its total revenue. Everything else is there to collect users’ data and direct users’ attention to its advertising revenue stream.

After two decades in this monopoly position, Google’s search product is much more tuned to the company’s needs than those of its users. When Google Search first arrived decades ago, it was revelatory in its ability to instantly find useful information across the still-nascent web. In 2025, its search result pages are dominated by low-quality and often AI-generated content, spam sites that exist solely to drive traffic to Amazon sales—a tactic known as affiliate marketing—and paid ad placements, which at times are indistinguishable from organic results.

Plenty of advertisers and observers seem to think AI-powered advertising is the future of the ad business.

Highly persuasive

Paid advertising in AI search, and AI models generally, could look very different from traditional web search. It has the potential to influence your thinking, spending patterns and even personal beliefs in much more subtle ways. Because AI can engage in active dialogue, addressing your specific questions, concerns and ideas rather than just filtering static content, its potential for influence is much greater. It’s like the difference between reading a textbook and having a conversation with its author.

Imagine you’re conversing with your AI agent about an upcoming vacation. Did it recommend a particular airline or hotel chain because they really are best for you, or does the company get a kickback for every mention? If you ask about a political issue, does the model bias its answer based on which political party has paid the company a fee, or based on the bias of the model’s corporate owners?

There is mounting evidence that AI models are at least as effective as people at persuading users to do things. A December 2023 meta-analysis of 121 randomized trials reported that AI models are as good as humans at shifting people’s perceptions, attitudes and behaviors. A more recent meta-analysis of eight studies similarly concluded there was “no significant overall difference in persuasive performance between (large language models) and humans.”

This influence may go well beyond shaping what products you buy or who you vote for. As with the field of search engine optimization, the incentive for humans to perform for AI models might shape the way people write and communicate with each other. How we express ourselves online is likely to be increasingly directed to win the attention of AIs and earn placement in the responses they return to users.

A different way forward

Much of this is discouraging, but there is a lot that can be done to change it.

First, it’s important to recognize that today’s AI is fundamentally untrustworthy, for the same reasons that search engines and social media platforms are.

The problem is not the technology itself; fast ways to find information and communicate with friends and family can be wonderful capabilities. The problem is the priorities of the corporations who own these platforms and for whose benefit they are operated. Recognize that you don’t have control over what data is fed to the AI, who it is shared with and how it is used. It’s important to keep that in mind when you connect devices and services to AI platforms, ask them questions, or consider buying or doing the things they suggest.

There is also a lot that people can demand of governments to restrain harmful corporate uses of AI. In the U.S., Congress could enshrine consumers’ rights to control their own personal data, as the EU already has. It could also create a data protection enforcement agency, as essentially every other developed nation has.

Governments worldwide could invest in Public AI—models built by public agencies offered universally for public benefit and transparently under public oversight. They could also restrict how corporations can collude to exploit people using AI, for example by barring advertisements for dangerous products such as cigarettes and requiring disclosure of paid endorsements.

Every technology company seeks to differentiate itself from competitors, particularly in an era when yesterday’s groundbreaking AI quickly becomes a commodity that will run on any kid’s phone. One differentiator is in building a trustworthy service. It remains to be seen whether companies such as OpenAI and Anthropic can sustain profitable businesses on the back of subscription AI services like the premium editions of ChatGPT, Plus and Pro, and Claude Pro. If they are going to continue convincing consumers and businesses to pay for these premium services, they will need to build trust.

That will require making real commitments to consumers on transparency, privacy, reliability and security that are followed through consistently and verifiably.

And while no one knows what the future business models for AI will be, we can be certain that consumers do not want to be exploited by AI, secretly or otherwise.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

After EU Probe, U.S. Senators Push Apple and Google to Review Grok AI

12 January 2026 at 02:01


Concerns surrounding Grok AI are escalating rapidly, with pressure now mounting in the United States after ongoing scrutiny in Europe. Three U.S. senators have urged Apple and Google to remove the X app and Grok AI from the Apple App Store and Google Play Store, citing the large-scale creation of nonconsensual sexualized images of real people, including children. The move comes as a direct follow-up to the European Commission’s investigation into Grok AI’s image-generation capabilities, marking a significant expansion of regulatory attention beyond the EU. While European regulators have openly weighed enforcement actions, U.S. authorities are now signaling that app distribution platforms may also bear responsibility.

U.S. senators Cite App Store Policy Violations by Grok AI

In a letter dated January 9, 2026, Senators Ron Wyden, Ed Markey, and Ben Ray Luján formally asked Apple CEO Tim Cook and Google CEO Sundar Pichai to enforce their app store policies against X Corp. The lawmakers argue that Grok AI, which operates within the X app, has repeatedly violated rules governing abusive and exploitative content. According to the senators, users have leveraged Grok AI to generate nonconsensual sexualized images of women, depicting abuse, humiliation, torture, and even death. More alarmingly, the letter states that Grok AI has also been used to create sexualized images of children, content the senators described as both harmful and potentially illegal. The lawmakers emphasized that such activity directly conflicts with policies enforced by both the Apple App Store and Google Play Store, which prohibit content involving sexual exploitation, especially material involving minors.

Researchers Flag Potential Child Abuse Material Linked to Grok AI

The letter also references findings by independent researchers who identified an archive connected to Grok AI containing nearly 100 images flagged as potential child sexual abuse material. These images were reportedly generated over several months, raising questions about X Corp’s oversight and response mechanisms. The senators stated that X appeared fully aware of the issue, pointing to public reactions by Elon Musk, who acknowledged reports of Grok-generated images with emoji responses. In their view, this signaled a lack of seriousness in addressing the misuse of Grok AI.

Premium Restrictions Fail to Calm Controversy

In response to the backlash, X recently limited Grok AI’s image-generation feature to premium subscribers. However, the senators dismissed this move as inadequate. Sen. Wyden said the change merely placed a paywall around harmful behavior rather than stopping it, arguing that it allowed the production of abusive content to continue while generating revenue. The lawmakers stressed that restricting access does not absolve X of responsibility, particularly when nonconsensual sexualized images remain possible through the platform.

Pressure Mounts on Apple App Store and Google Play Store

The senators warned that allowing the X app and Grok AI to remain available on the Apple App Store and Google Play Store would undermine both companies’ claims that their platforms offer safer environments than alternative app distribution methods. They also pointed to recent instances where Apple and Google acted swiftly to remove other controversial apps under government pressure, arguing that similar urgency should apply in the case of Grok AI.

At minimum, the lawmakers said, temporary removal of the apps would be appropriate while a full investigation is conducted. They requested a written response from both companies by January 23, 2026, outlining how Grok AI and the X app are being assessed under existing policies. Apple and Google have not publicly commented on the letter, and X has yet to issue a formal response.

The latest development adds momentum to global scrutiny of Grok AI, reinforcing concerns already raised by the European Commission. Together, actions in the U.S. and Europe signal a broader shift toward holding AI platforms, and the app ecosystems that distribute them, accountable for how generative technologies are deployed and controlled at scale.

Enshittification is ruining everything online (Lock and Code S07E01)

12 January 2026 at 00:03

This week on the Lock and Code podcast…

There’s a bizarre thing happening online right now where everything is getting worse.

Your Google results have become so bad that you’ve likely typed what you’re looking for, plus the word “Reddit,” so you can find discussion from actual humans. If you didn’t take this route, you might get served AI results from Google Gemini, which once recommended that every person should eat “at least one small rock per day.” Your Amazon results are a slog, filled with products whose glowing reviews were surreptitiously paid for. Your Facebook feed could be entirely irrelevant because the company decided years ago that you didn’t want to see what your friends posted, you wanted to see what brands posted, because brands pay Facebook, and you don’t, so brands are more important than your friends.

But, according to digital rights activist and award-winning author Cory Doctorow, this wave of online deterioration isn’t an accident—it’s a business strategy, and it can be summed up in a word he coined a couple of years ago: Enshittification.

Enshittification is the process by which an online platform—like Facebook, Google, or Amazon—harms its own services and products for short-term gain while managing to avoid any meaningful consequences, like the loss of customers or the impact of meaningful government regulation. It begins with an online platform treating new users with care, offering services, products, or connectivity that they may not find elsewhere. Then, the platform invites businesses on board that want to sell things to those users. This means businesses become the priority and the everyday user experience is hindered. But then, in the final stage, the platform also makes things worse for its business customers, making things better only for itself.

This is how a company like Amazon went from helping you find nearly anything you wanted to buy online to helping businesses sell you anything you wanted to buy online to making those businesses pay increasingly high fees to even be discovered online. Everyone, from buyers to sellers, is pretty much entrenched in the platform, so Amazon gets to dictate the terms.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Doctorow about enshittification’s fast damage across the internet, how to fight back, and where it all started.

“Once these laws were established, the tech companies were able to take advantage of them. And today we have a bunch of companies that aren’t tech companies that are nevertheless using technology to rig the game in ways that the tech companies pioneered.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Phishing campaign abuses Google Cloud services to steal Microsoft 365 logins

6 January 2026 at 10:01

Attackers are sending very convincing fake “Google” emails that slip past spam filters, route victims through several trusted Google-owned services, and ultimately lead to a look-alike Microsoft 365 sign-in page designed to harvest usernames and passwords.

Researchers found that cybercriminals used Google Cloud Application Integration’s Send Email feature to send phishing emails from a legitimate Google address: noreply-application-integration@google[.]com.

Google Cloud Application Integration allows users to automate business processes by connecting any application with point-and-click configurations. New customers currently receive free credits, which lowers the barrier to entry and may attract some cybercriminals.

The initial email arrives from what looks like a real Google address and references something routine and familiar, such as a voicemail notification, a task to complete, or permissions to access a document. The email includes a link that points to a genuine Google Cloud Storage URL, so the web address appears to belong to Google and doesn’t look like an obvious fake.

After the first click, you are redirected to another Google‑related domain (googleusercontent[.]com) showing a CAPTCHA or image check. Once you pass the “I’m not a robot check,” you land on what looks like a normal Microsoft 365 sign‑in page, but on close inspection, the web address is not an official Microsoft domain.

Any credentials provided on this site will be captured by the attackers.

The use of Google infrastructure provides the phishers with a higher level of trust from both email filters and the receiving users. This is not a vulnerability, just an abuse of cloud-based services that Google provides.

Google’s response

Google said it has taken action against the activity:

“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration. Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”

We’ve seen several phishing campaigns that abuse trusted workflows from companies like Google, PayPal, DocuSign, and other cloud-based service providers to lend credibility to phishing emails and redirect targets to their credential-harvesting websites.

How to stay safe

Campaigns like these show that some responsibility for spotting phishing emails still rests with the recipient. Besides staying informed, here are some other tips you can follow to stay safe.

  • Always check the actual web address of any login page; if it’s not a genuine Microsoft domain, do not enter credentials.​ Using a password manager will help because they will not auto-fill your details on fake websites.
  • Be cautious of “urgent” emails about voicemails, document shares, or permissions, even if they appear to come from Google or Microsoft.​ Creating urgency is a common tactic by scammers and phishers.
  • Go directly to the service whenever possible. Instead of clicking links in emails, open OneDrive, Teams, or Outlook using your normal bookmark or app.
  • Use multi‑factor authentication (MFA) so that stolen passwords alone are not enough, and regularly review which apps have access to your account and remove anything you don’t recognize.
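The first tip, checking a login page’s actual address, amounts to an exact-match test of the URL’s host against known-good sign-in domains, which is essentially what a password manager does before auto-filling. Here is a minimal Python sketch of that idea; the allowlist below is illustrative, not an exhaustive list of legitimate Microsoft hosts:

```python
from urllib.parse import urlsplit

# Illustrative allowlist; a real deployment would need a vetted,
# maintained list of legitimate sign-in hosts.
MICROSOFT_LOGIN_HOSTS = {
    "login.microsoftonline.com",
    "login.live.com",
    "login.microsoft.com",
}

def looks_like_legit_login(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host exactly
    matches a known Microsoft sign-in domain."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname in MICROSOFT_LOGIN_HOSTS

# A Google Cloud Storage page hosting a fake Microsoft form fails the check:
print(looks_like_legit_login("https://storage.googleapis.com/x/signin.html"))  # False
print(looks_like_legit_login("https://login.microsoftonline.com/common"))      # True
```

Note that the check is an exact host match, not a substring search: testing whether the word “microsoft” merely appears in the URL would be fooled by lookalikes such as microsoft.example-phish.com.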

Pro tip: Malwarebytes Scam Guard can recognize emails like this as scams. You can upload suspicious texts, emails, attachments, and other files and ask for its opinion. It’s very good at recognizing scams.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Most Parked Domains Now Serving Malicious Content

16 December 2025 at 09:14

Direct navigation — the act of visiting a website by manually typing a domain name in a web browser — has never been riskier: A new study finds the vast majority of “parked” domains — mostly expired or dormant domain names, or common misspellings of popular websites — are now configured to redirect visitors to sites that foist scams and malware.

A lookalike domain of the FBI Internet Crime Complaint Center website returned a non-threatening parking page (left), whereas a mobile user was instantly directed to deceptive content in October 2025 (right). Image: Infoblox.

When Internet users try to visit expired domain names or accidentally navigate to a lookalike “typosquatting” domain, they are typically brought to a placeholder page at a domain parking company that tries to monetize the wayward traffic by displaying links to a number of third-party websites that have paid to have their links shown.

A decade ago, ending up at one of these parked domains came with a relatively small chance of being redirected to a malicious destination: In 2014, researchers found (PDF) that parked domains redirected users to malicious sites less than five percent of the time — regardless of whether the visitor clicked on any links at the parked page.

But in a series of experiments over the past few months, researchers at the security firm Infoblox say they discovered the situation is now reversed, and that malicious content is by far the norm now for parked websites.

“In large scale experiments, we found that over 90% of the time, visitors to a parked domain would be directed to illegal content, scams, scareware and anti-virus software subscriptions, or malware, as the ‘click’ was sold from the parking company to advertisers, who often resold that traffic to yet another party,” Infoblox researchers wrote in a paper published today.

Infoblox found parked websites behave benignly if the visitor arrives at the site through a virtual private network (VPN) or from a non-residential Internet address. For example, Scotiabank.com customers who accidentally mistype the domain as scotaibank[.]com will see a normal parking page if they’re using a VPN, but will be redirected to a site that tries to foist scams, malware or other unwanted content if coming from a residential IP address. Again, this redirect happens just by visiting the misspelled domain with a mobile device or desktop computer that is using a residential IP address.

According to Infoblox, the person or entity that owns scotaibank[.]com has a portfolio of nearly 3,000 lookalike domains, including gmai[.]com, which demonstrably has been configured with its own mail server for accepting incoming email messages. Meaning, if you send an email to a Gmail user and accidentally omit the “l” from “gmail.com,” that missive doesn’t just disappear into the ether or produce a bounce reply: It goes straight to these scammers. The report notes this domain has also been leveraged in multiple recent business email compromise campaigns, using a lure indicating a failed payment with trojan malware attached.

Infoblox found this particular domain holder (betrayed by a common DNS server — torresdns[.]com) has set up typosquatting domains targeting dozens of top Internet destinations, including Craigslist, YouTube, Google, Wikipedia, Netflix, TripAdvisor, Yahoo, eBay, and Microsoft. A defanged list of these typosquatting domains is available here (the dots in the listed domains have been replaced with commas).
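Defenders often enumerate such lookalikes proactively so they can register or blocklist them first (tools like dnstwist automate this at scale). A minimal sketch of the two simplest mutation classes seen above, character omission (gmail → gmai) and adjacent transposition (scotiabank → scotaibank):

```python
def typo_variants(domain: str) -> set:
    """Generate simple lookalike variants of a domain's left-hand label:
    single-character omissions and adjacent-character transpositions."""
    label, _, tld = domain.rpartition(".")
    variants = set()
    # Omission: drop one character (gmail -> gmai, gmal, gmil, ...)
    for i in range(len(label)):
        variants.add(label[:i] + label[i + 1:] + "." + tld)
    # Transposition: swap adjacent characters (scotiabank -> scotaibank, ...)
    for i in range(len(label) - 1):
        swapped = label[:i] + label[i + 1] + label[i] + label[i + 2:]
        variants.add(swapped + "." + tld)
    variants.discard(domain)  # the original domain is not a typo of itself
    return variants

print("gmai.com" in typo_variants("gmail.com"))              # True
print("scotaibank.com" in typo_variants("scotiabank.com"))   # True
```

Real typosquat-hunting tools add many more mutation classes (keyboard-adjacency substitutions, homoglyphs, added hyphens, alternate TLDs), but even these two generators reproduce the lookalikes Infoblox describes.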

David Brunsdon, a threat researcher at Infoblox, said the parked pages send visitors through a chain of redirects, all while profiling the visitor’s system using IP geolocation, device fingerprinting, and cookies to determine where to redirect domain visitors.

“It was often a chain of redirects — one or two domains outside the parking company — before threat arrives,” Brunsdon said. “Each time in the handoff the device is profiled again and again, before being passed off to a malicious domain or else a decoy page like Amazon.com or Alibaba.com if they decide it’s not worth targeting.”

Brunsdon said domain parking services claim the search results they return on parked pages are designed to be relevant to their parked domains, but that almost none of this displayed content was related to the lookalike domain names they tested.

Samples of redirection paths when visiting scotaibank dot com. Each branch includes a series of domains observed, including the color-coded landing page. Image: Infoblox.

Infoblox said a different threat actor who owns domaincntrol[.]com — a domain that differs from GoDaddy’s name servers by a single character — has long taken advantage of typos in DNS configurations to drive users to malicious websites. In recent months, however, Infoblox discovered the malicious redirect only happens when the query for the misconfigured domain comes from a visitor who is using Cloudflare’s DNS resolvers (1.1.1.1), and that all other visitors will get a page that refuses to load.

The researchers found that even variations on well-known government domains are being targeted by malicious ad networks.

“When one of our researchers tried to report a crime to the FBI’s Internet Crime Complaint Center (IC3), they accidentally visited ic3[.]org instead of ic3[.]gov,” the report notes. “Their phone was quickly redirected to a false ‘Drive Subscription Expired’ page. They were lucky to receive a scam; based on what we’ve learnt, they could just as easily receive an information stealer or trojan malware.”

The Infoblox report emphasizes that the malicious activity they tracked is not attributed to any known party, noting that the domain parking or advertising platforms named in the study were not implicated in the malvertising they documented.

However, the report concludes that while the parking companies claim to only work with top advertisers, the traffic to these domains was frequently sold to affiliate networks, who often resold the traffic to the point where the final advertiser had no business relationship with the parking companies.

Infoblox also pointed out that recent policy changes by Google may have inadvertently increased the risk to users from direct navigation abuse. Brunsdon said Google AdSense previously allowed its ads to be placed on parked pages by default, but in early 2025 Google flipped that default: advertisers are now opted out of parked-domain placement and must go into their settings and explicitly enable parking as an ad location.

Google is discontinuing its dark web report: why it matters

16 December 2025 at 06:10

Google has announced that early next year it is discontinuing its dark web report, which was meant to monitor breach data circulating on the dark web.

The news raised some eyebrows, but Google says it’s ending the feature because feedback showed the reports didn’t provide “helpful next steps.” New scans will stop on January 15, 2026, and on February 16, the entire tool will disappear along with all associated monitoring data. Early reactions are mixed: some users express disappointment and frustration, others seem largely indifferent because they already rely on alternatives, and a small group feels relieved that the worry‑inducing alerts will disappear.

All those sentiments are understandable. Knowing that someone found your information on the dark web does not automatically make you safer. You cannot simply log into a dark market forum and ask criminals to delete or return your data.

But there is value in knowing what’s out there, because it can help you respond to the situation before problems escalate. That’s where dark web and data exposure tools show their use: they turn vague fear (“Is my data out there?”) into specific risk (“This email and password are in a breach.”).
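As a concrete example of how an exposure check can work without worsening the exposure, Have I Been Pwned’s Pwned Passwords range API uses k-anonymity: only the first five hex characters of the password’s SHA-1 hash are sent to the service, and matching against the returned hash suffixes happens locally, so the full password (or its full hash) never leaves your machine. A sketch of the client-side half:

```python
import hashlib

def pwned_range_query(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest of a password into the
    5-character prefix sent to the Pwned Passwords range API and the
    35-character suffix that is matched locally against the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_query("password123")
# One would then GET https://api.pwnedpasswords.com/range/<prefix> and
# scan the returned "<suffix>:<count>" lines for a local match.
print(len(prefix), len(suffix))  # 5 35
```

The network half is omitted here; the point is that the service only ever sees a 5-character prefix shared by hundreds of unrelated passwords.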

The dark web is often portrayed as a shady corner of the internet where stolen data circulates endlessly, and to some extent, that’s accurate. Password dumps, personal records, social security numbers (SSNs), and credit card details are traded for profit. Once combined into massive credential and identity databases accessible to cybercriminals, this information can be used for account takeovers, phishing, and identity fraud.

There are no tools to erase critical information that is circulating on dark web forums, but that was never really the promise.

Google says it is shifting its focus towards “tools that give you more actionable steps,” like Password Manager, Security Checkup, and Results About You. Without doubt, those tools help, but they work better when users understand why they matter. Discontinuing the dark web report removes a simple visibility feature, but it also reminds users that cybersecurity awareness means staying careful on the open web and understanding what attackers might use against them.

How can Malwarebytes help?

The real value comes from three actions: being aware of the exposure, cutting off easy new data sources, and reacting quickly when something goes wrong.

This is where dedicated security tools can help you.

Malwarebytes Personal Data Remover assists you in discovering and removing your data from data broker sites (among others), shrinking the pool of information that can be aggregated, resold, or used to profile you.

Our Digital Footprint scan gives you a clearer picture of where your data has surfaced online, including exposures that could eventually feed into dark web datasets.

Malwarebytes Identity Theft Protection adds ongoing monitoring and recovery support, helping you spot suspicious use of your identity and get expert help if someone tries to open accounts or take out credit in your name.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Google, Apple Warn of State-Linked Surveillance Threats


Google and Apple have released new global cyber threat notifications, alerting users across dozens of countries to potential targeting by state-linked hackers. The latest warnings reflect growing concerns about government-backed surveillance operations and the expanding commercial spyware marketplace.  Both companies confirmed that the alerts were sent this week as part of their ongoing efforts to protect users from digital espionage. The warnings are tied to commercial surveillance firms, including Intellexa, which has been repeatedly linked to high-end spyware deployments around the globe. 

Apple Sends Warning Across More than 80 Countries 

Apple stated that its newest set of threat notifications was dispatched on December 2, though the company declined to identify the number of affected users or the specific actors involved. These warnings are triggered when technical evidence indicates that individuals are being deliberately targeted by advanced hacking techniques believed to be connected to state agencies or their contractors.  While Apple did not specify locations for this week’s alerts, it confirmed that, since the initiative began, users in more than 150 countries have received similar warnings. This aligns with the company’s broader strategy of alerting customers when activity consistent with state-directed surveillance operations is detected. 

Google Reports Intellexa Spyware Targeting Several Hundred Accounts 

Google also announced that it had notified “several hundred accounts” identified as being targeted by spyware developed by Intellexa, a surveillance vendor sanctioned by the United States. According to Google’s threat intelligence team, the attempted compromises spanned a wide geographic range. Users in Pakistan, Kazakhstan, Angola, Egypt, Uzbekistan, Saudi Arabia, and Tajikistan were among those affected. 
Also read: Sanctioned Spyware Vendor Used iOS Zero-Day Exploit Chain Against Egyptian Targets
The tech giant stated that Intellexa has continued to operate and adapt its tools despite U.S. sanctions. Executives associated with the company did not respond to inquiries about the allegations. Google also noted that this round of alerts covered people in more than 80 countries, underscoring the targeted nature of the attempted intrusions by state-linked hackers.

Rising Scrutiny of Commercial Spyware 

The latest notifications from Google and Apple reflect broader concerns surrounding the global spyware industry. Both companies have repeatedly warned that commercial surveillance tools, particularly those sold to government clients, are increasingly used to target journalists, activists, political figures, and other high-risk individuals. Previous disclosures from Apple and Google have already prompted official scrutiny: the European Union has launched investigations in past cases, especially after reports that senior EU officials were targeted with similar spyware technologies. These inquiries often expand into broader examinations of cross-border surveillance practices and the companies that supply such tools.
Also read: Leaked Files Expose Intellexa’s Remote Access to Customer Systems and Live Surveillance Ops

Tech Firms Decline to Name Specific Attackers 

Despite the breadth of the new alerts, neither Google nor Apple offered details about the identities of the actors behind the latest attempts. Apple also declined to describe the nature of the malicious activity detected. Both companies stress that withholding technical specifics is common when dealing with state-linked hackers, as revealing investigative methods could interfere with ongoing monitoring operations.  Although the exact attackers remain unnamed, the alerts demonstrate a global distribution of spyware activity. Google’s identification of affected users across multiple continents, along with Apple’s acknowledgment of notifications issued in over 150 countries over time, shows that the threat posed by government-aligned surveillance groups continues to expand. 

CISA Warns that Two Android Vulnerabilities Are Under Attack

2 December 2025 at 16:09

CISA warned today that two Android zero-day vulnerabilities are under active attack, within hours of Google releasing patches for the flaws. Both are high-severity Android framework vulnerabilities. CVE-2025-48572 is a Privilege Escalation vulnerability, while CVE-2025-48633 is an Information Disclosure vulnerability. Both were among 107 Android vulnerabilities addressed by Google in its December security bulletin released today.

Android Vulnerabilities CVE-2025-48572 and CVE-2025-48633 Under Attack

Google warned that the CVE-2025-48572 and CVE-2025-48633 framework vulnerabilities “may be under limited, targeted exploitation.” The U.S. Cybersecurity and Infrastructure Security Agency (CISA) followed with its own alert adding the Android vulnerabilities to its Known Exploited Vulnerabilities (KEV) catalog. “These types of vulnerabilities are a frequent attack vector for malicious cyber actors and pose significant risks to the federal enterprise,” CISA warned. “CISA strongly urges all organizations to reduce their exposure to cyberattacks by prioritizing timely remediation of KEV Catalog vulnerabilities as part of their vulnerability management practice,” the U.S. cybersecurity agency added. The vulnerabilities are so new that the CVE Program lists the CVE numbers as “reserved,” with details yet to be released. Neither Google nor CISA provided further details on how the vulnerabilities are being exploited.
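Organizations subject to CISA’s remediation guidance typically automate checks against the KEV catalog, which CISA publishes as a downloadable JSON feed. A minimal sketch of such a check; the sample data below merely mirrors the catalog’s structure for illustration, not actual feed contents:

```python
# Hand-made sample mimicking the KEV catalog's "vulnerabilities" JSON
# structure; the real feed is downloadable from CISA's website.
KEV_SAMPLE = {
    "vulnerabilities": [
        {"cveID": "CVE-2025-48572", "vendorProject": "Google", "product": "Android"},
        {"cveID": "CVE-2025-48633", "vendorProject": "Google", "product": "Android"},
        {"cveID": "CVE-2021-44228", "vendorProject": "Apache", "product": "Log4j"},
    ]
}

def kev_entries_for(catalog: dict, cve_ids: set) -> list:
    """Return catalog entries whose cveID appears in the watch list."""
    return [v for v in catalog["vulnerabilities"] if v["cveID"] in cve_ids]

hits = kev_entries_for(KEV_SAMPLE, {"CVE-2025-48572", "CVE-2025-48633"})
print([h["cveID"] for h in hits])  # ['CVE-2025-48572', 'CVE-2025-48633']
```

In practice the same filter would run against the freshly downloaded feed on a schedule, comparing KEV entries against the organization’s software inventory.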

7 Critical Android Vulnerabilities Also Patched

The December Android security bulletin also addressed seven critical vulnerabilities, the most severe of which is CVE-2025-48631, a framework Denial of Service (DoS) vulnerability that Google warned “could lead to remote denial of service with no additional execution privileges needed.”

Four of the critical vulnerabilities affect the Android kernel and are all Elevation of Privilege (EoP) flaws: CVE-2025-48623, CVE-2025-48624, CVE-2025-48637, and CVE-2025-48638. The other two affect Qualcomm closed-source components: CVE-2025-47319, an Exposure of Sensitive System Information to an Unauthorized Control Sphere vulnerability, and CVE-2025-47372, a Buffer Overflow vulnerability that could lead to memory corruption.

Google lists CVE-2025-47319 as Critical while Qualcomm rates it Medium severity; both list CVE-2025-47372 as Critical. The Qualcomm vulnerabilities are addressed in detail in The Cyber Express article Qualcomm Issues Critical Security Alert Over Secure Boot Vulnerability, published earlier today.

Google Sues to Disrupt Chinese SMS Phishing Triad

13 November 2025 at 09:47

Google is suing more than two dozen unnamed individuals allegedly involved in peddling a popular China-based mobile phishing service that helps scammers impersonate hundreds of trusted brands, blast out text message lures, and convert phished payment card data into mobile wallets from Apple and Google.

In a lawsuit filed in the Southern District of New York on November 12, Google sued to unmask and disrupt 25 “John Doe” defendants allegedly linked to the sale of Lighthouse, a sophisticated phishing kit that makes it simple for even novices to steal payment card data from mobile users. Google said Lighthouse has harmed more than a million victims across 120 countries.

A component of the Chinese phishing kit Lighthouse made to target customers of The Toll Roads, which refers to several state routes through Orange County, Calif.

Lighthouse is one of several prolific phishing-as-a-service operations known as the “Smishing Triad,” and collectively they are responsible for sending millions of text messages that spoof the U.S. Postal Service to supposedly collect some outstanding delivery fee, or that pretend to be a local toll road operator warning of a delinquent toll fee. More recently, Lighthouse has been used to spoof e-commerce websites, financial institutions and brokerage firms.

Regardless of the text message lure or brand used, the basic scam remains the same: After the visitor enters their payment information, the phishing site will automatically attempt to enroll the card as a mobile wallet from Apple or Google. The phishing site then tells the visitor that their bank is going to verify the transaction by sending a one-time code that needs to be entered into the payment page before the transaction can be completed.

If the recipient provides that one-time code, the scammers can link the victim’s card data to a mobile wallet on a device that they control. Researchers say the fraudsters usually load several stolen wallets onto each mobile device, and wait 7-10 days after that enrollment before selling the phones or using them for fraud.

Google called the scale of the Lighthouse phishing attacks “staggering.” A May 2025 report from Silent Push found the domains used by the Smishing Triad are rotated frequently, with approximately 25,000 phishing domains active during any 8-day period.

Google’s lawsuit alleges the purveyors of Lighthouse violated the company’s trademarks by including Google’s logos on countless phishing websites. The complaint says Lighthouse offers over 600 templates for phishing websites of more than 400 entities, and that Google’s logos were featured on at least a quarter of those templates.

Google is also pursuing Lighthouse under the Racketeer Influenced and Corrupt Organizations (RICO) Act, saying the Lighthouse phishing enterprise encompasses several connected threat actor groups that work together to design and implement complex criminal schemes targeting the general public.

According to Google, those threat actor teams include a “developer group” that supplies the phishing software and templates; a “data broker group” that provides a list of targets; a “spammer group” that provides the tools to send fraudulent text messages in volume; a “theft group,” in charge of monetizing the phished information; and an “administrative group,” which runs their Telegram support channels and discussion groups designed to facilitate collaboration and recruit new members.

“While different members of the Enterprise may play different roles in the Schemes, they all collaborate to execute phishing attacks that rely on the Lighthouse software,” Google’s complaint alleges. “None of the Enterprise’s Schemes can generate revenue without collaboration and cooperation among the members of the Enterprise. All of the threat actor groups are connected to one another through historical and current business ties, including through their use of Lighthouse and the online community supporting its use, which exists on both YouTube and Telegram channels.”

Silent Push’s May report observed that the Smishing Triad boasts it has “300+ front desk staff worldwide” involved in Lighthouse, staff that is mainly used to support various aspects of the group’s fraud and cash-out schemes.

An image shared by an SMS phishing group shows a panel of mobile phones responsible for mass-sending phishing messages. These panels require a live operator because the one-time codes being shared by phishing victims must be used quickly as they generally expire within a few minutes.

Google alleges that in addition to blasting out text messages spoofing known brands, Lighthouse makes it easy for customers to mass-create fake e-commerce websites that are advertised using Google Ads accounts (and paid for with stolen credit cards). These phony merchants collect payment card information at checkout, and then prompt the customer to expect and share a one-time code sent from their financial institution.

Once again, that one-time code is being sent by the bank because the fake e-commerce site has just attempted to enroll the victim’s payment card data in a mobile wallet. By the time a victim understands they will likely never receive the item they just purchased from the fake e-commerce shop, the scammers have already run through hundreds of dollars in fraudulent charges, often at high-end electronics stores or jewelers.

Ford Merrill works in security research at SecAlliance, a CSIS Security Group company, and he’s been tracking Chinese SMS phishing groups for several years. Merrill said many Lighthouse customers are now using the phishing kit to erect fake e-commerce websites that are advertised on Google and Meta platforms.

“You find this shop by searching for a particular product online or whatever, and you think you’re getting a good deal,” Merrill said. “But of course you never receive the product, and they will phish that one-time code at checkout.”

Merrill said some of the phishing templates include payment buttons for services like PayPal, and that victims who choose to pay through PayPal can also see their PayPal accounts hijacked.

A fake e-commerce site from the Smishing Triad spoofing PayPal on a mobile device.

“The main advantage of the fake e-commerce site is that it doesn’t require them to send out message lures,” Merrill said, noting that the fake vendor sites have more staying power than traditional phishing sites because it takes far longer for them to be flagged for fraud.

Merrill said Google’s legal action may temporarily disrupt the Lighthouse operators, and could make it easier for U.S. federal authorities to bring criminal charges against the group. But he said the Chinese mobile phishing market is so lucrative right now that it’s difficult to imagine a popular phishing service voluntarily turning out the lights.

Merrill said Google’s lawsuit also can help lay the groundwork for future disruptive actions against Lighthouse and other phishing-as-a-service entities that are operating almost entirely on Chinese networks. According to Silent Push, a majority of the phishing sites created with these kits are sitting at two Chinese hosting companies: Tencent (AS132203) and Alibaba (AS45102).

“Once Google has a default judgment against the Lighthouse guys in court, theoretically they could use that to go to Alibaba and Tencent and say, ‘These guys have been found guilty, here are their domains and IP addresses, we want you to shut these down or we’ll include you in the case.'”

If Google can bring that kind of legal pressure consistently over time, Merrill said, they might succeed in increasing costs for the phishers and more frequently disrupting their operations.

“If you take all of these Chinese phishing kit developers, I have to believe it’s tens of thousands of Chinese-speaking people involved,” he said. “The Lighthouse guys will probably burn down their Telegram channels and disappear for a while. They might call it something else or redevelop their service entirely. But I don’t believe for a minute they’re going to close up shop and leave forever.”

[Correction] Gmail can read your emails and attachments to power “smart features”

20 November 2025 at 08:48

Update November 22. We’ve updated this article after realising we contributed to a perfect storm of misunderstanding around a recent change in the wording and placement of Gmail’s smart features. The settings themselves aren’t new, but the way Google recently rewrote and surfaced them led a lot of people (including us) to believe Gmail content might be used to train Google’s AI models, and that users were being opted in automatically. After taking a closer look at Google’s documentation and reviewing other reporting, that doesn’t appear to be the case.

Gmail does scan email content to power its own “smart features,” such as spam filtering, categorisation, and writing suggestions. But this is part of how Gmail normally works and isn’t the same as training Google’s generative AI models. Google also maintains that these feature settings are opt-in rather than opt-out, although users’ experiences seem to vary depending on when and how the new wording appeared.

It’s easy to see where the confusion came from. Google’s updated language around “smart features” is vague, and the term “smart” often implies AI—especially at a time when Gemini is being integrated into other parts of Google’s products. When the new wording started appearing for some users without much explanation, many assumed it signalled a broader shift. It’s also come around the same time as a proposed class-action lawsuit in the state of California, which, according to Bloomberg, alleges that Google gave Gemini AI access to Gmail, Chat, and Meet without proper user consent.

We’ve revised this article to reflect what we can confirm from Google’s documentation, as it’s always been our aim to give readers accurate, helpful guidance.


Google has updated some Gmail settings around how its “smart features” work, which control how Gmail analyses your messages to power built-in functions.

These smart features analyze your messages and attachments to personalize your experience across Gmail, Chat, Meet, Drive, and Calendar. Some users report that the settings appear switched on by default rather than requiring explicit opt-in, although Google’s help page states that users are opted out by default.

How to check your settings

Opting in or out requires you to change settings in two places, so I’ve tried to make it as easy to follow as possible. Feel free to let me know in the comments if I missed anything.

To fully opt out, you must turn off Gmail’s smart features in two separate locations in your settings. Don’t miss one, or some smart features will stay switched on.

Step 1: Turn off Smart features in Gmail, Chat, and Meet settings

  • Open Gmail on your desktop or mobile app.
  • Click the gear icon → See all settings (desktop) or Menu → Settings (mobile).
  • Find the section called Smart features in Gmail, Chat, and Meet. You’ll need to scroll down quite a bit.
Smart features settings
  • Uncheck this option.
  • Scroll down and hit Save changes if on desktop.

Step 2: Turn off Google Workspace smart features

  • Still in Settings, locate Google Workspace smart features.
  • Click on Manage Workspace smart feature settings.
  • You’ll see two options: Smart features in Google Workspace and Smart features in other Google products.
Smart feature settings

  • Toggle both off.
  • Save again in this screen.

Step 3: Verify that both are off

  • Make sure both toggles remain off.
  • Refresh your Gmail app or sign out and back in to confirm changes.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Scam USPS and E-Z Pass Texts and Websites

20 November 2025 at 07:07

Google has filed a complaint in court that details the scam:

In a complaint filed Wednesday, the tech giant accused “a cybercriminal group in China” of selling “phishing for dummies” kits. The kits help unsavvy fraudsters easily “execute a large-scale phishing campaign,” tricking hordes of unsuspecting people into “disclosing sensitive information like passwords, credit card numbers, or banking information, often by impersonating well-known brands, government agencies, or even people the victim knows.”

These branded “Lighthouse” kits offer two versions of software, depending on whether bad actors want to launch SMS and e-commerce scams. “Members may subscribe to weekly, monthly, seasonal, annual, or permanent licenses,” Google alleged. Kits include “hundreds of templates for fake websites, domain set-up tools for those fake websites, and other features designed to dupe victims into believing they are entering sensitive information on a legitimate website.”

Google’s filing said the scams often begin with a text claiming that a toll fee is overdue or a small fee must be paid to redeliver a package. Other times they appear as ads—sometimes even Google ads, until Google detected and suspended the accounts—luring victims by mimicking popular brands. Anyone who clicks will be redirected to a website to input sensitive information; the sites often claim to accept payments from trusted wallets like Google Pay.

1 million victims, 17,500 fake sites: Google takes on toll-fee scammers

13 November 2025 at 09:43

A Phishing-as-a-Service (PhaaS) platform based in China, known as “Lighthouse,” is the subject of a new Google lawsuit.

Lighthouse enables smishing (SMS phishing) campaigns, and if you’re in the US there is a good chance you’ve seen their texts about a small amount you supposedly owe in toll fees. Here’s an example of a toll-fee scam text:

Google’s lawsuit brings claims against the Lighthouse platform under federal racketeering and fraud statutes, including the Racketeer Influenced and Corrupt Organizations Act (RICO), the Lanham Act, and the Computer Fraud and Abuse Act.

The texts lure targets to websites that impersonate toll authorities or other trusted organizations. The goal is to steal personal information and credit card numbers for use in further financial fraud.

As we reported in October 2025, Project Red Hook launched to combine the power of the US Homeland Security Investigations (HSI), law enforcement partners, and businesses to raise awareness of how Chinese organized crime groups use gift cards to launder money.

These toll, postage, and refund scams might look different on the surface, but they all feed the same machine, each one crafted to look like an urgent government or service message demanding a small fee. Together, they form an industrialized text-scam ecosystem that’s earned Chinese crime groups more than $1 billion in just three years.

Google says Lighthouse alone affected more than 1 million victims across 120 countries. A September report by Netcraft discussed two phishing campaigns believed to be associated with Lighthouse and “Lucid,” a very similar PhaaS platform. Since identifying these campaigns, Netcraft has detected more than 17,500 phishing domains targeting 316 brands from 74 countries.

As grounds for the lawsuit, Google says it found at least 107 phishing website templates that feature its own branding to boost credibility. But a lawsuit can only go so far, and Google says robust public policy is needed to address the broader threat of scams:

“We are collaborating with policymakers and are today announcing our endorsement of key bipartisan bills in the U.S. Congress.”

Will lawsuits, disruptions, and even bills make toll-fee scams go away? Not very likely. The only thing that will really help is if their source of income dries up because people stop falling for smishing. Education is the biggest lever.

Red flags in smishing messages

There are some tell-tale signs in these scams to look for:

  1. Spelling and grammar mistakes: the scammers seem to have problems with formatting dates. For example “September 10nd”, “9st” (instead of 9th or 1st).
  2. Urgency: you only have one or two days to pay. Or else…
  3. The over-the-top threats: Real agencies won’t say your “credit score will be affected” for an unpaid traffic violation.
  4. Made-up legal codes: “Ohio Administrative Code 15C-16.003” doesn’t match any real Ohio BMV administrative codes. When a code looks fake, it probably is!
  5. Sketchy payment link: Truly trusted organizations don’t send urgent “pay now or else” links by text.
  6. Vague or missing personalization: Genuine government agencies tend to use your legal name, not a generic scare message sent to many people at the same time.
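Some of these checks can even be automated. The sketch below is purely illustrative (not a production scam detector) and covers just two of the red flags above: malformed ordinal dates and urgency language. The phrase list is an assumption, not an exhaustive one.

```python
import re

# Illustrative sketch only: automating two of the red flags above.
# Heuristics like these are simplistic and no substitute for judgment.

ORDINAL = re.compile(r"\b(\d+)(st|nd|rd|th)\b")

def has_bad_ordinal(text: str) -> bool:
    """Flag malformed ordinal dates like '10nd' or '9st' (red flag #1)."""
    for match in ORDINAL.finditer(text):
        n, suffix = int(match.group(1)), match.group(2)
        if 10 <= n % 100 <= 13:  # 11th, 12th, 13th are special cases
            correct = "th"
        else:
            correct = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
        if suffix != correct:
            return True
    return False

# Phrases that signal manufactured urgency (red flag #2); list is illustrative.
URGENCY_PHRASES = ("final notice", "within 24 hours", "immediately", "or else")

def red_flags(message: str) -> list[str]:
    """Return the simple red flags found in a message."""
    flags = []
    lowered = message.lower()
    if has_bad_ordinal(message):
        flags.append("malformed ordinal date")
    if any(p in lowered for p in URGENCY_PHRASES):
        flags.append("urgency language")
    return flags
```

For example, `red_flags("Final notice: pay your toll by September 10nd")` would flag both the malformed date and the urgency language.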

Be alert to scams

Recognizing scams is the most important part of protecting yourself, so always consider these golden rules:

  • Always search phone numbers and email addresses to look for associations with known scams.
  • When in doubt, go directly to the website of the organization that contacted you to see if there are any messages for you.
  • Do not get rushed into decisions without thinking them through.
  • Do not click on links in unsolicited text messages.
  • Do not reply, even if the text message explicitly tells you to do so.

If you have engaged with the scammers’ website:

  • Immediately change your passwords for any accounts that may have been compromised. 
  • Contact your bank or financial institution to report the incident and take any necessary steps to protect your accounts, such as freezing them or monitoring for suspicious activity. 
  • Consider a fraud alert or credit freeze. To start layering protection, you might want to place a fraud alert or credit freeze on your credit file with all three of the primary credit bureaus. This makes it harder for fraudsters to open new accounts in your name.
  • US citizens can report confirmed cases of identity theft to the FTC at identitytheft.gov.

Pro tip: You can upload suspicious messages of any kind to Malwarebytes Scam Guard. It will tell you whether it’s likely to be a scam and advise you what to do.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Aisuru Botnet Shifts from DDoS to Residential Proxies

28 October 2025 at 20:51

Aisuru, the botnet responsible for a series of record-smashing distributed denial-of-service (DDoS) attacks this year, recently was overhauled to support a more low-key, lucrative and sustainable business: Renting hundreds of thousands of infected Internet of Things (IoT) devices to proxy services that help cybercriminals anonymize their traffic. Experts say a glut of proxies from Aisuru and other sources is fueling large-scale data harvesting efforts tied to various artificial intelligence (AI) projects, helping content scrapers evade detection by routing their traffic through residential connections that appear to be regular Internet users.

Image credit: vxdb

First identified in August 2024, Aisuru has spread to at least 700,000 IoT systems, such as poorly secured Internet routers and security cameras. Aisuru’s overlords have used their massive botnet to clobber targets with headline-grabbing DDoS attacks, flooding targeted hosts with blasts of junk requests from all infected systems simultaneously.

In June, Aisuru hit KrebsOnSecurity.com with a DDoS clocking at 6.3 terabits per second — the biggest attack that Google had ever mitigated at the time. In the weeks and months that followed, Aisuru’s operators demonstrated DDoS capabilities of nearly 30 terabits of data per second — well beyond the attack mitigation capabilities of most Internet destinations.

These digital sieges have been particularly disruptive this year for U.S.-based Internet service providers (ISPs), in part because Aisuru recently succeeded in taking over a large number of IoT devices in the United States. And when Aisuru launches attacks, the volume of outgoing traffic from infected systems on these ISPs is often so high that it can disrupt or degrade Internet service for adjacent (non-botted) customers of the ISPs.

“Multiple broadband access network operators have experienced significant operational impact due to outbound DDoS attacks in excess of 1.5Tb/sec launched from Aisuru botnet nodes residing on end-customer premises,” wrote Roland Dobbins, principal engineer at Netscout, in a recent executive summary on Aisuru. “Outbound/crossbound attack traffic exceeding 1Tb/sec from compromised customer premise equipment (CPE) devices has caused significant disruption to wireline and wireless broadband access networks. High-throughput attacks have caused chassis-based router line card failures.”

The incessant attacks from Aisuru have caught the attention of federal authorities in the United States and Europe (many of Aisuru’s victims are customers of ISPs and hosting providers based in Europe). Quite recently, some of the world’s largest ISPs have started informally sharing block lists identifying the rapidly shifting locations of the servers that the attackers use to control the activities of the botnet.

Experts say the Aisuru botmasters recently updated their malware so that compromised devices can more easily be rented to so-called “residential proxy” providers. These proxy services allow paying customers to route their Internet communications through someone else’s device, providing anonymity and the ability to appear as a regular Internet user in almost any major city worldwide.

From a website’s perspective, the traffic of a residential proxy network user appears to originate from the rented residential IP address, not from the proxy service customer. Proxy services can be used in a legitimate manner for several business purposes—such as price comparisons or sales intelligence. But they are massively abused for hiding cybercrime activity (think advertising fraud and credential stuffing) because they can make it difficult to trace malicious traffic to its original source.
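To make the mechanics concrete, here is a minimal sketch of how a proxy customer routes traffic through a rented IP, using Python’s standard library. The proxy endpoint and credentials are placeholders, not a real service.

```python
import urllib.request

# Placeholder residential-proxy endpoint and credentials (hypothetical).
PROXY_URL = "http://user:pass@proxy.example.net:8000"

# Build an opener that routes HTTP and HTTPS traffic through the proxy.
proxy_handler = urllib.request.ProxyHandler({"http": PROXY_URL, "https": PROXY_URL})
opener = urllib.request.build_opener(proxy_handler)

# From the target site's perspective, a request sent through this opener
# would appear to originate from the rented residential IP address.
# (Call left commented out so the sketch makes no real network traffic.)
# html = opener.open("https://example.com/prices", timeout=10).read()
```

This is why blocking abusive clients by IP is so difficult: the same few lines of customer code can rotate through hundreds of thousands of residential addresses.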

And as we’ll see in a moment, this entire shadowy industry appears to be shifting its focus toward enabling aggressive content scraping activity that continuously feeds raw data into large language models (LLMs) built to support various AI projects.

‘INSANE’ GROWTH

Riley Kilmer is co-founder of spur.us, a service that tracks proxy networks. Kilmer said all of the top proxy services have grown substantially over the past six months.

“I just checked, and in the last 90 days we’ve seen 250 million unique residential proxy IPs,” Kilmer said. “That is insane. That is so high of a number, it’s unheard of. These proxies are absolutely everywhere now.”

Today, Spur says it is tracking an unprecedented spike in available proxies across all providers, including:

LUMINATI_PROXY    11,856,421
NETNUT_PROXY    10,982,458
ABCPROXY_PROXY    9,294,419
OXYLABS_PROXY     6,754,790
IPIDEA_PROXY     3,209,313
EARNFM_PROXY    2,659,913
NODEMAVEN_PROXY    2,627,851
INFATICA_PROXY    2,335,194
IPROYAL_PROXY    2,032,027
YILU_PROXY    1,549,155

Reached for comment about the apparent rapid growth in their proxy network, Oxylabs (#4 on Spur’s list) said while their proxy pool did grow recently, it did so at nowhere near the rate cited by Spur.

“We don’t systematically track other providers’ figures, and we’re not aware of any instances of 10× or 100× growth, especially when it comes to a few bigger companies that are legitimate businesses,” the company said in a written statement.

Bright Data was formerly known as Luminati Networks, the name that is currently at the top of Spur’s list of the biggest residential proxy networks. Bright Data likewise told KrebsOnSecurity that Spur’s current estimates of its proxy network are dramatically overstated and inaccurate.

“We did not actively initiate nor do we see any 10x or 100x expansion of our network, which leads me to believe that someone might be presenting these IPs as Bright Data’s in some way,” said Rony Shalit, Bright Data’s chief compliance and ethics officer. “In many cases in the past, due to us being the leading data collection proxy provider, IPs were falsely tagged as being part of our network, or while being used by other proxy providers for malicious activity.”

“Our network is only sourced from verified IP providers and a robust opt-in only residential peers, which we work hard and in complete transparency to obtain,” Shalit continued. “Every DC, ISP or SDK partner is reviewed and approved, and every residential peer must actively opt in to be part of our network.”

HK NETWORK

Even Spur acknowledges that Luminati and Oxylabs are unlike most other proxy services on their top proxy providers list, in that these providers actually adhere to “know-your-customer” policies, such as requiring video calls with all customers, and strictly blocking customers from reselling access.

Benjamin Brundage is founder of Synthient, a startup that helps companies detect proxy networks. Brundage said if there is increasing confusion around which proxy networks are the most worrisome, it’s because nearly all of these lesser-known proxy services have evolved into highly incestuous bandwidth resellers. What’s more, he said, some proxy providers do not appreciate being tracked and have been known to take aggressive steps to confuse systems that scan the Internet for residential proxy nodes.

Brundage said most proxy services today have created their own software development kit or SDK that other app developers can bundle with their code to earn revenue. These SDKs quietly modify the user’s device so that some portion of their bandwidth can be used to forward traffic from proxy service customers.

“Proxy providers have pools of constantly churning IP addresses,” he said. “These IP addresses are sourced through various means, such as bandwidth-sharing apps, botnets, Android SDKs, and more. These providers will often either directly approach resellers or offer a reseller program that allows users to resell bandwidth through their platform.”

Many SDK providers say they require full consent before allowing their software to be installed on end-user devices. Still, those opt-in agreements and consent checkboxes may be little more than a formality for cybercriminals like the Aisuru botmasters, who can earn a commission each time one of their infected devices is forced to install some SDK that enables one or more of these proxy services.

Depending on its structure, a single provider may operate hundreds of different proxy pools at a time — all maintained through other means, Brundage said.

“Often, you’ll see resellers maintaining their own proxy pool in addition to an upstream provider,” he said. “It allows them to market a proxy pool to high-value clients and offer an unlimited bandwidth plan for cheap while reducing their own costs.”

Some proxy providers appear to be directly in league with botmasters. Brundage identified one proxy seller that was aggressively advertising cheap and plentiful bandwidth to content scraping companies. After scanning that provider’s pool of available proxies, Brundage said he found a one-to-one match with IP addresses he’d previously mapped to the Aisuru botnet.

Brundage says that by almost any measurement, the world’s largest residential proxy service is IPidea, a China-based proxy network. IPidea is #5 on Spur’s Top 10, and Brundage said its brands include ABCProxy (#3), Roxlabs, LunaProxy, PIA S5 Proxy, PyProxy, 922Proxy, 360Proxy, IP2World, and Cherry Proxy. Spur’s Kilmer said they also track Yilu Proxy (#10) as IPidea.

Brundage said all of these providers operate under a corporate umbrella known on the cybercrime forums as “HK Network.”

“The way it works is there’s this whole reseller ecosystem, where IPidea will be incredibly aggressive and approach all these proxy providers with the offer, ‘Hey, if you guys buy bandwidth from us, we’ll give you these amazing reseller prices,'” Brundage explained. “But they’re also very aggressive in recruiting resellers for their apps.”

A graphic depicting the relationship between proxy providers that Synthient found are white labeling IPidea proxies. Image: Synthient.com.

Those apps include a range of low-cost and “free” virtual private networking (VPN) services that indeed allow users to enjoy a free VPN, but which also turn the user’s device into a traffic relay that can be rented to cybercriminals, or else parceled out to countless other proxy networks.

“They have all this bandwidth to offload,” Brundage said of IPidea and its sister networks. “And they can do it through their own platforms, or they go get resellers to do it for them by advertising on sketchy hacker forums to reach more people.”

One of IPidea’s core brands is 922S5Proxy, which is a not-so-subtle nod to the 911S5Proxy service that was hugely popular between 2015 and 2022. In July 2022, KrebsOnSecurity published a deep dive into 911S5Proxy’s origins and apparent owners in China. Less than a week later, 911S5Proxy announced it was closing down after the company’s servers were massively hacked.

That 2022 story named Yunhe Wang from Beijing as the apparent owner and/or manager of the 911S5 proxy service. In May 2024, the U.S. Department of Justice arrested Mr Wang, alleging that his network was used to steal billions of dollars from financial institutions, credit card issuers, and federal lending programs. At the same time, the U.S. Treasury Department announced sanctions against Wang and two other Chinese nationals for operating 911S5Proxy.

The website for 922Proxy.

DATA SCRAPING FOR AI

In recent months, multiple experts who track botnet and proxy activity have said that a great deal of content scraping that ultimately benefits AI companies now leverages these proxy networks to further obfuscate aggressive data-slurping activity. That’s because by routing requests through residential IP addresses, content scraping firms can make their traffic far trickier to filter out.

“It’s really difficult to block, because there’s a risk of blocking real people,” Spur’s Kilmer said of the LLM scraping activity that is fed through individual residential IP addresses, which are often shared by multiple customers at once.

Kilmer says the AI industry has brought a veneer of legitimacy to the residential proxy business, which has heretofore mostly been associated with sketchy affiliate moneymaking programs, automated abuse, and unwanted Internet traffic.

“Web crawling and scraping has always been a thing, but AI made it like a commodity, data that had to be collected,” Kilmer said. “Everybody wanted to monetize their own data pots, and how they monetize that is different across the board.”

Kilmer said many LLM-related scrapers rely on residential proxies in cases where the content provider has restricted access to their platform in some way, such as forcing interaction through an app, or keeping all content behind a login page with multi-factor authentication.

“Where the cost of data is out of reach — there is some exclusivity or reason they can’t access the data — they’ll turn to residential proxies so they look like a real person accessing that data,” Kilmer said of the content scraping efforts.

Aggressive AI crawlers increasingly are overloading community-maintained infrastructure, causing what amounts to persistent DDoS attacks on vital public resources. A report earlier this year from LibreNews found some open-source projects now see as much as 97 percent of their traffic originating from AI company bots, dramatically increasing bandwidth costs, service instability, and burdening already stretched-thin maintainers.

Cloudflare is now experimenting with tools that will allow content creators to charge a fee to AI crawlers to scrape their websites. The company’s “pay-per-crawl” feature is currently in a private beta, and it lets publishers set their own prices that bots must pay before scraping content.

On October 22, the social media and news network Reddit sued Oxylabs (PDF) and several other proxy providers, alleging that their systems enabled the mass-scraping of Reddit user content even though Reddit had taken steps to block such activity.

“Recognizing that Reddit denies scrapers like them access to its site, Defendants scrape the data from Google’s search results instead,” the lawsuit alleges. “They do so by masking their identities, hiding their locations, and disguising their web scrapers as regular people (among other techniques) to circumvent or bypass the security restrictions meant to stop them.”

Denas Grybauskas, chief governance and strategy officer at Oxylabs, said the company was shocked and disappointed by the lawsuit.

“Reddit has made no attempt to speak with us directly or communicate any potential concerns,” Grybauskas said in a written statement. “Oxylabs has always been and will continue to be a pioneer and an industry leader in public data collection, and it will not hesitate to defend itself against these allegations. Oxylabs’ position is that no company should claim ownership of public data that does not belong to them. It is possible that it is just an attempt to sell the same public data at an inflated price.”

As big and powerful as Aisuru may be, it is hardly the only botnet that is contributing to the overall broad availability of residential proxies. For example, on June 5 the FBI’s Internet Crime Complaint Center warned that an IoT malware threat dubbed BADBOX 2.0 had compromised millions of smart-TV boxes, digital projectors, vehicle infotainment units, picture frames, and other IoT devices.

In July, Google filed a lawsuit in New York federal court against the Badbox botnet’s alleged perpetrators. Google said the Badbox 2.0 botnet “compromised more than 10 million uncertified devices running Android’s open-source software, which lacks Google’s security protections. Cybercriminals infected these devices with pre-installed malware and exploited them to conduct large-scale ad fraud and other digital crimes.”

A FAMILIAR DOMAIN NAME

Brundage said the Aisuru botmasters have their own SDK, and for some reason part of its code tells many newly-infected systems to query the domain name fuckbriankrebs[.]com. This may be little more than an elaborate “screw you” to this site’s author: One of the botnet’s alleged partners goes by the handle “Forky,” and was identified in June by KrebsOnSecurity as a young man from Sao Paulo, Brazil.

Brundage noted that only systems infected with Aisuru’s Android SDK will be forced to resolve the domain. Initially, there was some discussion about whether the domain might have some utility as a “kill switch” capable of disrupting the botnet’s operations, although Brundage and others interviewed for this story say that is unlikely.

A tiny sample of the traffic after a DNS server was enabled on the newly registered domain fuckbriankrebs dot com. Each unique IP address requested its own unique subdomain. Image: Seralys.

For one thing, they said, if the domain was somehow critical to the operation of the botnet, why was it left unregistered and available for anyone to buy? Why indeed, we asked. Happily, the domain name was deftly snatched up last week by Philippe Caturegli, “chief hacking officer” for the security intelligence company Seralys.

Caturegli enabled a passive DNS server on that domain and within a few hours received more than 700,000 requests for unique subdomains on fuckbriankrebs[.]com.

But even with that visibility into Aisuru, it is difficult to use this domain check-in feature to measure its true size, Brundage said. After all, he said, the systems that are phoning home to the domain are only a small portion of the overall botnet.

“The bots are hardcoded to just spam lookups on the subdomains,” he said. “So anytime an infection occurs or it runs in the background, it will do one of those DNS queries.”
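Counting unique check-ins from such a sinkhole takes only a few lines. The log format below is hypothetical (one source IP and queried name per line), but the tallying logic matches what a passive DNS server on the domain would observe: each bot queries its own unique subdomain, so distinct leftmost labels approximate distinct infections.

```python
# Hypothetical passive-DNS log: one "<source_ip> <queried_name>" pair per line.
SAMPLE_LOG = [
    "203.0.113.5 a1b2c3.fuckbriankrebs.com",
    "198.51.100.7 d4e5f6.fuckbriankrebs.com",
    "203.0.113.5 a1b2c3.fuckbriankrebs.com",  # repeat query from the same bot
]

def count_checkins(lines, zone="fuckbriankrebs.com"):
    """Count distinct subdomain labels queried under the sinkholed zone."""
    labels = set()
    for line in lines:
        _src_ip, name = line.split()
        if name.endswith("." + zone):
            labels.add(name.split(".")[0])  # leftmost label is unique per bot
    return len(labels)
```

Here `count_checkins(SAMPLE_LOG)` would report two check-ins, since the third log line repeats a subdomain already seen. As Brundage notes, though, a tally like this only captures systems running the Android SDK component, not the whole botnet.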

Caturegli briefly configured all subdomains on fuckbriankrebs dot com to display this ASCII art image to visiting systems today.

The domain fuckbriankrebs[.]com has a storied history. On its initial launch in 2009, it was used to spread malicious software by the Cutwail spam botnet. In 2011, the domain was involved in a notable DDoS against this website from a botnet powered by Russkill (a.k.a. “Dirt Jumper”).

Domaintools.com finds that in 2015, fuckbriankrebs[.]com was registered to an email address attributed to David “Abdilo” Crees, a 27-year-old Australian man sentenced in May 2025 to time served for cybercrime convictions related to the Lizard Squad hacking group.

Update, Nov. 1, 2025, 10:25 a.m. ET: An earlier version of this story erroneously cited Spur’s proxy numbers from earlier this year; Spur said those numbers conflated residential proxies — which are rotating and attached to real end-user devices — with “ISP proxies” located at AT&T. ISP proxies, Spur said, involve tricking an ISP into routing a large number of IP addresses that are resold as far more static datacenter proxies.

Apple may have to open its walled garden to outside app stores

23 October 2025 at 07:29

The UK’s Competition and Markets Authority (CMA) ruled that both Google and Apple have a “strategic market status.” Basically, they have a monopoly over their respective mobile platforms.

As a result, Apple may soon be required to allow rival app stores on iPhones—a major shift for the smartphone industry. Between them, Apple and Google power nearly all UK mobile devices, according to the CMA:

“Around 90–100% of UK mobile devices run on Apple or Google’s mobile platforms.”

According to analyst data cited by the BBC, around 48.5% of British consumers use iPhones, with most of the rest on Android devices. 

If enforced, this change will reshape the experience of most smartphone users in the UK, and we have heard similar noises coming from the EU.

Apple has pushed back, warning that EU-style regulation could limit access to new features. The company points to Apple Intelligence, which has been rolled out in other parts of the world but is not available in the EU—something Apple blames on heavy regulation.

For app developers, the move could have profound effects. Smaller software makers, often frustrated by Apple’s 15–30% commission on in-app purchases, might gain alternative distribution routes. Competing app stores might offer lower fees or more flexible rules, making the app ecosystem more diverse, and potentially more affordable for users.

Apple, however, argues that relaxing control could hurt users by weakening privacy standards and delaying feature updates.

Security and privacy

Allowing multiple app stores will undeniably reshape the iPhone’s security model. Apple’s current “closed system” approach minimizes risk by funneling all apps through its vetted App Store, where every submission goes through security reviews and malware screening. This walled approach has kept large-scale malware incidents on iPhones relatively rare compared to Android.

It remains to be seen whether competing app stores will hold the same standards or have the resources to enforce them. Users can expect more variability in safety practices, which could increase exposure to fraudulent or malware-infested software.

On the other hand, we may also see app stores that prioritize safety or cater to a more privacy-focused audience. So, it doesn’t have to be all bad—but Apple has a point when it warns about higher risk.

For most users, the safest approach will be to stick with Apple’s store or other trusted marketplaces, at least in the early days. Android’s history shows that third-party app stores often become hotspots for adware and phishing, so security education is key. Regulators and developers will need to work together to make the review process and data-handling practices transparent.

There is no set timeline for when or how the CMA will enforce these changes, or how far Apple will go to comply. The company could challenge the decision or introduce limited reforms. Either way, it’s a major step toward redefining how trust, privacy, and control are balanced in the mobile age.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

What does Google know about me? (Lock and Code S06E21)

20 October 2025 at 10:26

This week on the Lock and Code podcast…

Google is everywhere in our lives. Its reach into our data extends just as far.

After investigating how much data Facebook had collected about him in his nearly 20 years with the platform, Lock and Code host David Ruiz had similar questions about the other Big Tech platforms in his life, and this time, he turned his attention to Google.

Google dominates much of the modern web. It has a search engine that handles billions of requests a day. Its tracking and metrics service, Google Analytics, is reportedly embedded in tens of millions of websites. Its Maps feature not only serves up directions around the world, it also tracks traffic patterns across countless streets, highways, and more. Its online services for email (Gmail), cloud storage (Google Drive), and office software (Google Docs, Sheets, and Slides) are household names. And it also runs the most popular web browser in the world, Google Chrome, and the most popular operating system in the world, Android.

Today, on the Lock and Code podcast, Ruiz explains how he requested his data from Google and what he learned not only about the company, but about himself, in the process. That includes the 142,729 items in his Gmail inbox right now, along with the 8,079 searches he made, 3,050 related websites he visited, and 4,610 YouTube videos he watched in just the past 18 months. It also includes his late-night searches for worrying medical symptoms, his movements across the US as his IP address was recorded when logging into Google Maps, his emails, his photos, his notes, his old freelance work as a journalist, his outdated cover letters when he was unemployed, his teenage-year Google Chrome bookmarks, his flight and hotel searches, and even the searches he made within his own Gmail inbox and his Google Drive.

After digging into the data for long enough, Ruiz came to a frightening conclusion: Google knows whatever the hell it wants about him; it just has to look.

But Ruiz wasn’t happy to let the company’s access continue. So he has a plan.

“I am taking steps to change that [access] so that the next time I ask, ‘What does Google know about me?’ I can hopefully answer: A little bit less.”

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Microsoft Patch Tuesday, September 2025 Edition

9 September 2025 at 17:21

Microsoft Corp. today issued security updates to fix more than 80 vulnerabilities in its Windows operating systems and software. There are no known “zero-day” or actively exploited vulnerabilities in this month’s bundle from Redmond, which nevertheless includes patches for 13 flaws that earned Microsoft’s most-dire “critical” label. Meanwhile, both Apple and Google recently released updates to fix zero-day bugs in their devices.

Microsoft assigns security flaws a “critical” rating when malware or miscreants can exploit them to gain remote access to a Windows system with little or no help from users. Among the more concerning critical bugs quashed this month is CVE-2025-54918. The problem here resides with Windows NTLM, or NT LAN Manager, a suite of code for managing authentication in a Windows network environment.

Redmond rates this flaw as “Exploitation More Likely,” and although it is listed as a privilege escalation vulnerability, Kev Breen at Immersive says this one is actually exploitable over the network or the Internet.

“From Microsoft’s limited description, it appears that if an attacker is able to send specially crafted packets over the network to the target device, they would have the ability to gain SYSTEM-level privileges on the target machine,” Breen said. “The patch notes for this vulnerability state that ‘Improper authentication in Windows NTLM allows an authorized attacker to elevate privileges over a network,’ suggesting an attacker may already need to have access to the NTLM hash or the user’s credentials.”

Breen said another patch — CVE-2025-55234, an 8.8 CVSS-scored flaw affecting the Windows SMB client for sharing files across a network — also is listed as a privilege escalation bug but is likewise remotely exploitable. This vulnerability was publicly disclosed prior to this month.

“Microsoft says that an attacker with network access would be able to perform a replay attack against a target host, which could result in the attacker gaining additional privileges, which could lead to code execution,” Breen noted.

CVE-2025-54916 is an “important” vulnerability in Windows NTFS — the default filesystem for all modern versions of Windows — that can lead to remote code execution. Microsoft likewise thinks we are more than likely to see exploitation of this bug soon: The last time Microsoft patched an NTFS bug was in March 2025 and it was already being exploited in the wild as a zero-day.

“While the title of the CVE says ‘Remote Code Execution,’ this exploit is not remotely exploitable over the network, but instead needs an attacker to either have the ability to run code on the host or to convince a user to run a file that would trigger the exploit,” Breen said. “This is commonly seen in social engineering attacks, where they send the user a file to open as an attachment or a link to a file to download and run.”

Critical and remote code execution bugs tend to steal all the limelight, but Tenable Senior Staff Research Engineer Satnam Narang notes that nearly half of all vulnerabilities fixed by Microsoft this month are privilege escalation flaws that require an attacker to have gained access to a target system first before attempting to elevate privileges.

“For the third time this year, Microsoft patched more elevation of privilege vulnerabilities than remote code execution flaws,” Narang observed.

On Sept. 3, Google fixed two flaws that were detected as exploited in zero-day attacks, including CVE-2025-38352, an elevation of privilege in the Android kernel, and CVE-2025-48543, also an elevation of privilege problem in the Android Runtime component.

Also, Apple recently patched its seventh zero-day (CVE-2025-43300) of this year. It was part of an exploit chain used along with a vulnerability in the WhatsApp (CVE-2025-55177) instant messenger to hack Apple devices. Amnesty International reports that the two zero-days have been used in “an advanced spyware campaign” over the past 90 days. The issue is fixed in iOS 18.6.2, iPadOS 18.6.2, iPadOS 17.7.10, macOS Sequoia 15.6.1, macOS Sonoma 14.7.8, and macOS Ventura 13.7.8.

The SANS Internet Storm Center has a clickable breakdown of each individual fix from Microsoft, indexed by severity and CVSS score. Enterprise Windows admins involved in testing patches before rolling them out should keep an eye on askwoody.com, which often has the skinny on wonky updates.

AskWoody also reminds us that we’re now just two months out from Microsoft discontinuing free security updates for Windows 10 computers. For those interested in safely extending the lifespan and usefulness of these older machines, check out last month’s Patch Tuesday coverage for a few pointers.

As ever, please don’t neglect to back up your data (if not your entire system) at regular intervals, and feel free to sound off in the comments if you experience problems installing any of these fixes.

The Ongoing Fallout from a Breach at AI Chatbot Maker Salesloft

1 September 2025 at 17:55

The recent mass-theft of authentication tokens from Salesloft, whose AI chatbot is used by a broad swath of corporate America to convert customer interaction into Salesforce leads, has left many companies racing to invalidate the stolen credentials before hackers can exploit them. Now Google warns the breach goes far beyond access to Salesforce data, noting the hackers responsible also stole valid authentication tokens for hundreds of online services that customers can integrate with Salesloft, including Slack, Google Workspace, Amazon S3, Microsoft Azure, and OpenAI.

Salesloft says its products are trusted by 5,000+ customers. Some of the bigger names are visible on the company’s homepage.

Salesloft disclosed on August 20 that, “Today, we detected a security issue in the Drift application,” referring to the technology that powers an AI chatbot used by so many corporate websites. The alert urged customers to re-authenticate the connection between the Drift and Salesforce apps to invalidate their existing authentication tokens, but it said nothing then to indicate those tokens had already been stolen.

On August 26, the Google Threat Intelligence Group (GTIG) warned that unidentified hackers tracked as UNC6395 used the access tokens stolen from Salesloft to siphon large amounts of data from numerous corporate Salesforce instances. Google said the data theft began as early as Aug. 8, 2025 and lasted through at least Aug. 18, 2025, and that the incident did not involve any vulnerability in the Salesforce platform.

Google said the attackers have been sifting through the massive data haul for credential materials such as AWS keys, VPN credentials, and credentials to the cloud storage provider Snowflake.

“If successful, the right credentials could allow them to further compromise victim and client environments, as well as pivot to the victim’s clients or partner environments,” the GTIG report stated.

The GTIG updated its advisory on August 28 to acknowledge the attackers used the stolen tokens to access email from “a very small number of Google Workspace accounts” that were specially configured to integrate with Salesloft. More importantly, it warned organizations to immediately invalidate all tokens stored in or connected to their Salesloft integrations — regardless of the third-party service in question.

“Given GTIG’s observations of data exfiltration associated with the campaign, organizations using Salesloft Drift to integrate with third-party platforms (including but not limited to Salesforce) should consider their data compromised and are urged to take immediate remediation steps,” Google advised.

On August 28, Salesforce blocked Drift from integrating with its platform, and with its productivity platforms Slack and Pardot.

The Salesloft incident comes on the heels of a broad social engineering campaign that used voice phishing to trick targets into connecting a malicious app to their organization’s Salesforce portal. That campaign led to data breaches and extortion attacks affecting a number of companies including Adidas, Allianz Life and Qantas.

On August 5, Google disclosed that one of its corporate Salesforce instances was compromised by the attackers, which the GTIG has dubbed UNC6040 (“UNC” stands for “uncategorized threat group”). Google said the extortionists consistently claimed to be the threat group ShinyHunters, and that the group appeared to be preparing to escalate its extortion attacks by launching a data leak site.

ShinyHunters is an amorphous threat group known for using social engineering to break into cloud platforms and third-party IT providers, and for posting dozens of stolen databases to cybercrime communities like the now-defunct Breachforums.

The ShinyHunters brand dates back to 2020, and the group has been credited with or taken responsibility for dozens of data leaks that exposed hundreds of millions of breached records. The group’s member roster is thought to be somewhat fluid, drawing mainly from active denizens of the Com, a mostly English-language cybercrime community scattered across an ocean of Telegram and Discord servers.

Recorded Future’s Alan Liska told Bleeping Computer that the overlap in the “tools, techniques and procedures” used by ShinyHunters and the Scattered Spider extortion group likely indicate some crossover between the two groups.

To muddy the waters even further, on August 28 a Telegram channel that now has nearly 40,000 subscribers was launched under the intentionally confusing banner “Scattered LAPSUS$ Hunters 4.0,” wherein participants have repeatedly claimed responsibility for the Salesloft hack without actually sharing any details to prove their claims.

The Telegram group has been trying to attract media attention by threatening security researchers at Google and other firms. It also is using the channel’s sudden popularity to promote a new cybercrime forum called “Breachstars,” which they claim will soon host data stolen from victim companies who refuse to negotiate a ransom payment.

The “Scattered Lapsus$ Hunters 4.0” channel on Telegram now has roughly 40,000 subscribers.

But Austin Larsen, a principal threat analyst at Google’s threat intelligence group, said there is no compelling evidence to attribute the Salesloft activity to ShinyHunters or to other known groups at this time.

“Their understanding of the incident seems to come from public reporting alone,” Larsen told KrebsOnSecurity, referring to the most active participants in the Scattered LAPSUS$ Hunters 4.0 Telegram channel.

Joshua Wright, a senior technical director at Counter Hack, is credited with coining the term “authorization sprawl” to describe one key reason that social engineering attacks from groups like Scattered Spider and ShinyHunters so often succeed: They abuse legitimate user access tokens to move seamlessly between on-premises and cloud systems.

Wright said this type of attack chain often goes undetected because the attacker sticks to the resources and access already allocated to the user.

“Instead of the conventional chain of initial access, privilege escalation and endpoint bypass, these threat actors are using centralized identity platforms that offer single sign-on (SSO) and integrated authentication and authorization schemes,” Wright wrote in a June 2025 column. “Rather than creating custom malware, attackers use the resources already available to them as authorized users.”

It remains unclear exactly how the attackers gained access to all Salesloft Drift authentication tokens. Salesloft announced on August 27 that it hired Mandiant, Google Cloud’s incident response division, to investigate the root cause(s).

“We are working with Salesloft Drift to investigate the root cause of what occurred and then it’ll be up to them to publish that,” Mandiant Consulting CTO Charles Carmakal told Cyberscoop. “There will be a lot more tomorrow, and the next day, and the next day.”

Oregon Man Charged in ‘Rapper Bot’ DDoS Service

19 August 2025 at 16:51

A 22-year-old Oregon man has been arrested on suspicion of operating “Rapper Bot,” a massive botnet used to power a service for launching distributed denial-of-service (DDoS) attacks against targets — including a March 2025 DDoS that knocked Twitter/X offline. The Justice Department asserts the suspect and an unidentified co-conspirator rented out the botnet to online extortionists, and tried to stay off the radar of law enforcement by ensuring that their botnet was never pointed at KrebsOnSecurity.

The control panel for the Rapper Bot botnet greets users with the message “Welcome to the Ball Pit, Now with refrigerator support,” an apparent reference to a handful of IoT-enabled refrigerators that were enslaved in their DDoS botnet.

On August 6, 2025, federal agents arrested Ethan J. Foltz of Springfield, Ore. on suspicion of operating Rapper Bot, a globally dispersed collection of tens of thousands of hacked Internet of Things (IoT) devices.

The complaint against Foltz explains the attacks usually clocked in at more than two terabits of junk data per second (a terabit is one trillion bits of data), which is more than enough traffic to cause serious problems for all but the most well-defended targets. The government says Rapper Bot consistently launched attacks that were “hundreds of times larger than the expected capacity of a typical server located in a data center,” and that some of its biggest attacks exceeded six terabits per second.

Indeed, Rapper Bot was reportedly responsible for the March 10, 2025 attack that caused intermittent outages on Twitter/X. The government says Rapper Bot’s most lucrative and frequent customers were involved in extorting online businesses — including numerous gambling operations based in China.

The criminal complaint was written by Elliott Peterson, an investigator with the Defense Criminal Investigative Service (DCIS), the criminal investigative division of the Department of Defense (DoD) Office of Inspector General. The complaint notes the DCIS got involved because several Internet addresses maintained by the DoD were the target of Rapper Bot attacks.

Peterson said he tracked Rapper Bot to Foltz after a subpoena to an ISP in Arizona that was hosting one of the botnet’s control servers showed the account was paid for via PayPal. More legal process to PayPal revealed Foltz’s Gmail account and previously used IP addresses. A subpoena to Google showed the defendant searched security blogs constantly for news about Rapper Bot, and for updates about competing DDoS-for-hire botnets.

According to the complaint, after having a search warrant served on his residence the defendant admitted to building and operating Rapper Bot, sharing the profits 50/50 with a person he claimed to know only by the hacker handle “Slaykings.” Foltz also shared with investigators the logs from his Telegram chats, wherein Foltz and Slaykings discussed how best to stay off the radar of law enforcement investigators while their competitors were getting busted.

Specifically, the two hackers chatted about a May 20 attack against KrebsOnSecurity.com that clocked in at more than 6.3 terabits of data per second. The brief attack was notable because at the time it was the largest DDoS that Google had ever mitigated (KrebsOnSecurity sits behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content).

The May 2025 DDoS was launched by an IoT botnet called Aisuru, which I discovered was operated by a 21-year-old man in Brazil named Kaike Southier Leite. This individual was more commonly known online as “Forky,” and Forky told me he wasn’t afraid of me or U.S. federal investigators. Nevertheless, the complaint against Foltz notes that Forky’s botnet seemed to diminish in size and firepower at the same time that Rapper Bot’s infection numbers were on the upswing.

“Both FOLTZ and Slaykings were very dismissive of attention seeking activities, the most extreme of which, in their view, was to launch DDoS attacks against the website of the prominent cyber security journalist Brian Krebs,” Peterson wrote in the criminal complaint.

“You see, they’ll get themselves [expletive],” Slaykings wrote in response to Foltz’s comments about Forky and Aisuru bringing too much heat on themselves.

“Prob cuz [redacted] hit krebs,” Foltz wrote in reply.

“Going against Krebs isn’t a good move,” Slaykings concurred. “It isn’t about being a [expletive] or afraid, you just get a lot of problems for zero money. Childish, but good. Let them die.”

“Ye, it’s good tho, they will die,” Foltz replied.

The government states that just prior to Foltz’s arrest, Rapper Bot had enslaved an estimated 65,000 devices globally. That may sound like a lot, but the complaint notes the defendants weren’t interested in making headlines for building the world’s largest or most powerful botnet.

Quite the contrary: The complaint asserts that the accused took care to maintain their botnet at a “Goldilocks” size — ensuring that “the number of devices afforded powerful attacks while still being manageable to control and, in the hopes of Foltz and his partners, small enough to not be detected.”

The complaint states that several days later, Foltz and Slaykings returned to discussing what they expected to befall their rival group, with Slaykings stating, “Krebs is very revenge. He won’t stop until they are [expletive] to the bone.”

“Surprised they have any bots left,” Foltz answered.

“Krebs is not the one you want to have on your back. Not because he is scary or something, just because he will not give up UNTIL you are [expletive] [expletive]. Proved it with Mirai and many other cases.”

[Unknown expletives aside, that may well be the highest compliment I’ve ever been paid by a cybercriminal. I might even have part of that quote made into a t-shirt or mug or something. It’s also nice that they didn’t let any of their customers attack my site — if even only out of a paranoid sense of self-preservation.]

Foltz admitted to wiping the user and attack logs for the botnet approximately once a week, so investigators were unable to tally the total number of attacks, customers and targets of this vast crime machine. But the data that was still available showed that from April 2025 to early August, Rapper Bot conducted over 370,000 attacks, targeting 18,000 unique victims across 1,000 networks, with the bulk of victims residing in China, Japan, the United States, Ireland and Hong Kong (in that order).

According to the government, Rapper Bot borrows much of its code from fBot, a DDoS malware strain also known as Satori. In 2020, authorities in Northern Ireland charged a then 20-year-old man named Aaron “Vamp” Sterritt with operating fBot with a co-conspirator. U.S. prosecutors are still seeking Sterritt’s extradition to the United States. fBot is itself a variation of the Mirai IoT botnet that has ravaged the Internet with DDoS attacks since its source code was leaked back in 2016.

The complaint says Foltz and his partner did not allow most customers to launch attacks that were more than 60 seconds in duration — another way they tried to keep public attention to the botnet at a minimum. However, the government says the proprietors also had special arrangements with certain high-paying clients that allowed much larger and longer attacks.

The accused and his alleged partner made light of this blog post about the fallout from one of their botnet attacks.

Most people who have never been on the receiving end of a monster DDoS attack have no idea of the cost and disruption that such sieges can bring. The DCIS’s Peterson wrote that he was able to test the botnet’s capabilities while interviewing Foltz, and found that “if this had been a server upon which I was running a website, using services such as load balancers, and paying for both outgoing and incoming data, at estimated industry average rates the attack (2+ Terabits per second times 30 seconds) might have cost the victim anywhere from $500 to $10,000.”

“DDoS attacks at this scale often expose victims to devastating financial impact, and a potential alternative, network engineering solutions that mitigate the expected attacks such as overprovisioning, i.e. increasing potential Internet capacity, or DDoS defense technologies, can themselves be prohibitively expensive,” the complaint continues. “This ‘rock and a hard place’ reality for many victims can leave them acutely exposed to extortion demands – ‘pay X dollars and the DDoS attacks stop’.”
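A back-of-the-envelope version of that estimate is easy to sketch. The rates below are hypothetical placeholders for "industry average" per-gigabyte transfer pricing, which the complaint does not specify; the point is just how quickly 30 seconds of a 2 Tbps flood adds up for anyone billed on bandwidth.

```python
TBPS = 2.0        # attack rate in terabits per second, per the complaint
DURATION_S = 30   # attack duration in seconds

bits = TBPS * 1e12 * DURATION_S   # total junk traffic delivered, in bits
gigabytes = bits / 8 / 1e9        # convert bits -> bytes -> gigabytes

# Hypothetical per-GB transfer rates; real cloud ingress/egress pricing
# varies widely, and load balancers and other services bill on top of this.
for rate in (0.01, 0.05, 0.12):
    print(f"{gigabytes:,.0f} GB at ${rate:.2f}/GB -> ${gigabytes * rate:,.0f}")
```

Thirty seconds at 2 Tbps works out to 7,500 GB delivered, so even modest per-gigabyte rates land in the hundreds of dollars before counting the services Peterson mentions.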

The Telegram chat records show that the day before Peterson and other federal agents raided Foltz’s residence, Foltz allegedly told his partner he’d found 32,000 new devices that were vulnerable to a previously unknown exploit.

Foltz and Slaykings discussing the discovery of an IoT vulnerability that will give them 32,000 new devices.

Shortly before the search warrant was served on his residence, Foltz allegedly told his partner that “Once again we have the biggest botnet in the community.” The following day, Foltz told his partner that it was going to be a great day — the biggest so far in terms of income generated by Rapper Bot.

“I sat next to Foltz while the messages poured in — promises of $800, then $1,000, the proceeds ticking up as the day went on,” Peterson wrote. “Noticing a change in Foltz’ behavior and concerned that Foltz was making changes to the botnet configuration in real time, Slaykings asked him ‘What’s up?’ Foltz deftly typed out some quick responses. Reassured by Foltz’ answer, Slaykings responded, ‘Ok, I’m the paranoid one.’”

The case is being prosecuted by Assistant U.S. Attorney Adam Alexander in the District of Alaska (at least some of the devices found to be infected with Rapper Bot were located there, and it is where Peterson is stationed). Foltz faces one count of aiding and abetting computer intrusions. If convicted, he faces a maximum penalty of 10 years in prison, although a federal judge is unlikely to award anywhere near that kind of sentence for a first-time conviction.

Senator Chides FBI for Weak Advice on Mobile Security

30 June 2025 at 13:33

Agents with the Federal Bureau of Investigation (FBI) briefed Capitol Hill staff recently on hardening the security of their mobile devices, after a contacts list stolen from the personal phone of the White House Chief of Staff Susie Wiles was reportedly used to fuel a series of text messages and phone calls impersonating her to U.S. lawmakers. But in a letter this week to the FBI, one of the Senate’s most tech-savvy lawmakers says the feds aren’t doing enough to recommend more appropriate security protections that are already built into most consumer mobile devices.

A screenshot of the first page from Sen. Wyden’s letter to FBI Director Kash Patel.

On May 29, The Wall Street Journal reported that federal authorities were investigating a clandestine effort to impersonate Ms. Wiles via text messages and in phone calls that may have used AI to spoof her voice. According to The Journal, Wiles told associates her cellphone contacts were hacked, giving the impersonator access to the private phone numbers of some of the country’s most influential people.

The execution of this phishing and impersonation campaign — whatever its goals may have been — suggested the attackers were financially motivated, and not particularly sophisticated.

“It became clear to some of the lawmakers that the requests were suspicious when the impersonator began asking questions about Trump that Wiles should have known the answers to—and in one case, when the impersonator asked for a cash transfer, some of the people said,” the Journal wrote. “In many cases, the impersonator’s grammar was broken and the messages were more formal than the way Wiles typically communicates, people who have received the messages said. The calls and text messages also didn’t come from Wiles’s phone number.”

Sophisticated or not, the impersonation campaign was soon punctuated by the murder of Minnesota House of Representatives Speaker Emerita Melissa Hortman and her husband, and the shooting of Minnesota State Senator John Hoffman and his wife. So when FBI agents offered in mid-June to brief U.S. Senate staff on mobile threats, more than 140 staffers took them up on that invitation (a remarkably high number considering that no food was offered at the event).

But according to Sen. Ron Wyden (D-Ore.), the advice the FBI provided to Senate staffers was largely limited to remedial tips, such as not clicking on suspicious links or attachments, not using public wifi networks, turning off bluetooth, keeping phone software up to date, and rebooting regularly.

“This is insufficient to protect Senate employees and other high-value targets against foreign spies using advanced cyber tools,” Wyden wrote in a letter sent today to FBI Director Kash Patel. “Well-funded foreign intelligence agencies do not have to rely on phishing messages and malicious attachments to infect unsuspecting victims with spyware. Cyber mercenary companies sell their government customers advanced ‘zero-click’ capabilities to deliver spyware that do not require any action by the victim.”

Wyden stressed that to help counter sophisticated attacks, the FBI should be encouraging lawmakers and their staff to enable anti-spyware defenses that are built into Apple’s iOS and Google’s Android phone software.

These include Apple’s Lockdown Mode, which is designed for users who are worried they may be subject to targeted attacks. Lockdown Mode restricts non-essential iOS features to reduce the device’s overall attack surface. Google Android devices carry a similar feature called Advanced Protection Mode.

Wyden also urged the FBI to update its training to recommend a number of other steps that people can take to make their mobile devices less trackable, including the use of ad blockers to guard against malicious advertisements, disabling ad tracking IDs in mobile devices, and opting out of commercial data brokers (the suspect charged in the Minnesota shootings reportedly used multiple people-search services to find the home addresses of his targets).

The senator’s letter notes that while the FBI has recommended all of the above precautions in various advisories issued over the years, the advice the agency is giving now to the nation’s leaders needs to be more comprehensive, actionable and urgent.

“In spite of the seriousness of the threat, the FBI has yet to provide effective defensive guidance,” Wyden said.

Nicholas Weaver is a researcher with the International Computer Science Institute, a nonprofit in Berkeley, Calif. Weaver said Lockdown Mode or Advanced Protection will mitigate many vulnerabilities, and should be the default setting for all members of Congress and their staff.

“Lawmakers are at exceptional risk and need to be exceptionally protected,” Weaver said. “Their computers should be locked down and well administered, etc. And the same applies to staffers.”

Weaver noted that Apple’s Lockdown Mode has a track record of blocking zero-day attacks on iOS applications; in September 2023, Citizen Lab documented how Lockdown Mode foiled a zero-click flaw capable of installing spyware on iOS devices without any interaction from the victim.

Earlier this month, Citizen Lab researchers documented a zero-click attack used to infect the iOS devices of two journalists with Paragon’s Graphite spyware. The vulnerability could be exploited merely by sending the target a booby-trapped media file delivered via iMessage. Apple also recently updated its advisory for the zero-click flaw (CVE-2025-43200), noting that it was mitigated as of iOS 18.3.1, which was released in February 2025.

Apple has not commented on whether CVE-2025-43200 could be exploited on devices with Lockdown Mode turned on. But HelpNetSecurity observed that at the same time Apple addressed CVE-2025-43200 back in February, the company fixed another vulnerability flagged by Citizen Lab researcher Bill Marczak: CVE-2025-24200, which Apple said was used in an extremely sophisticated physical attack against specific targeted individuals that allowed attackers to disable USB Restricted Mode on a locked device.

In other words, the flaw could apparently be exploited only if the attacker had physical access to the targeted vulnerable device. And as the old infosec industry adage goes, if an adversary has physical access to your device, it’s most likely not your device anymore.

I can’t speak to Google’s Advanced Protection Mode personally, because I don’t use Google or Android devices. But I have had Apple’s Lockdown Mode enabled on all of my Apple devices since it was first made available in September 2022. I can only think of a single occasion when one of my apps failed to work properly with Lockdown Mode turned on, and in that case I was able to add a temporary exception for that app in Lockdown Mode’s settings.

My main gripe with Lockdown Mode was captured in a March 2025 column by TechCrunch’s Lorenzo Franceschi-Bicchierai, who wrote about its penchant for periodically sending mystifying notifications that someone has been blocked from contacting you, even though nothing then prevents you from contacting that person directly. This has happened to me at least twice, and in both cases the person in question was already an approved contact, and said they had not attempted to reach out.

Although it would be nice if Apple’s Lockdown Mode sent fewer, less alarming and more informative alerts, the occasional baffling warning message is hardly enough to make me turn it off.
