
Received yesterday — 13 February 2026

Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage

13 February 2026 at 16:39

Amazon and Flock Safety have ended a partnership that would've given law enforcement access to a vast web of Ring cameras.

The decision came after Amazon faced substantial backlash for airing a Super Bowl ad that was meant to be warm and fuzzy, but instead came across as disturbing and dystopian.

The ad begins with a young girl surprised to receive a puppy as a gift. It then warns that 10 million dogs go missing annually. Showing a series of lost dog posters, the ad introduces a new "Search Party" feature for Ring cameras that promises to revolutionize how neighbors come together to locate missing pets.


Seven Billion Reasons for Facebook to Abandon its Face Recognition Plans

13 February 2026 at 15:58

The New York Times reported that Meta is considering adding face recognition technology to its smart glasses. According to an internal Meta document, the company may launch the product “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” 

This is a bad idea that Meta should abandon. If adopted and released to the public, it would violate the privacy rights of millions of people and cost the company billions of dollars in legal battles.   

Your biometric data, such as your faceprint, are some of the most sensitive pieces of data that a company can collect. Associated risks include mass surveillance, data breach, and discrimination. Adding this technology to glasses on the street also raises safety concerns.  

 This kind of face recognition feature would require the company to collect a faceprint from every person who steps into view of the camera-equipped glasses to find a match. Meta cannot possibly obtain consent from everyone—especially bystanders who are not Meta users.  

Dozens of state laws consider biometric information to be sensitive and require companies to implement strict protections to collect and process it, including affirmative consent.  

Meta Should Know the Privacy and Legal Risks  

Meta should already know the privacy risks of face recognition technology, after abandoning related technology and paying nearly $7 billion in settlements a few years ago.  

In November 2021, Meta announced that it would shut down its tool that scanned the face of every person in photos posted on the platform. At the time, Meta also announced that it would delete more than a billion face templates. 

Two years before that, in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission for $5 billion. This included allegations that Facebook’s face recognition settings were confusing and deceptive. At the time, the company agreed to obtain consent before running face recognition on users in the future.

In March 2021, the company agreed to a $650 million class action settlement brought by Illinois consumers under the state's strong biometric privacy law. 

And most recently, in July 2024, Meta agreed to pay $1.4 billion to settle claims that its defunct face recognition system violated Texas law.  

 Privacy Advocates Will Continue to Focus our Resources on Meta  

 Meta’s conclusion that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.  

Now more than ever, people have seen the real-world risk of invasive technology. The public has recoiled at masked immigration agents roving cities with phones equipped with a face recognition app called Mobile Fortify. And Amazon Ring just experienced a huge backlash when people realized that a feature marketed for finding lost dogs could one day be repurposed for mass biometric surveillance.  

The public will continue to resist these privacy invasive features. And EFF, other civil liberties groups, and plaintiffs’ attorneys will be here to help. We urge privacy regulators and attorneys general to step up to investigate as well.  

Ring Just Ended Its Controversial Partnership With Flock Safety

13 February 2026 at 10:12

Ring isn't having the week it probably thought it would have. The Amazon-owned company aired an ad on Super Bowl Sunday for "Search Party," its new feature that turns a neighborhood's collective Ring cameras into one network, with the goal of locating lost dogs. Viewers, however, saw this as a major privacy violation—it doesn't take much to imagine using this type of surveillance tech to locate people, not pets.

The backlash wasn't isolated to the ad, however. The controversy reignited criticism of the company's partnership with Flock Safety, a security company that sells cameras that track vehicles, most notably through license plate recognition. But the partnership with Ring wasn't about tracking vehicles: Instead, Flock Safety's role was to make it easier for law enforcement agencies that use Flock Safety software to request Ring camera footage from users. Agencies could put in a request for an area where a crime supposedly took place, and Ring users would be notified about the request. Users didn't have to agree to share footage, however.

Law enforcement could already request footage from Ring users through the platform's existing "Community Requests" feature. But this partnership would let agencies make those requests directly through Flock Safety's software. If a user submitted footage following a request, Ring said that data would be "securely packaged" by Flock Safety and shared with the agency through FlockOS or Flock Nova.

Ring cancels its partnership with Flock Safety

That partnership is officially over. On Friday, Ring published a blog post announcing the end of its relationship with Flock Safety. The company said that, after a review, the integration "would require significantly more time and resources than anticipated." As such, both parties agreed to cancel the partnership.

Importantly, Ring says that since the integration never actually launched, no user footage was ever sent to Flock Safety—despite the company announcing the partnership four months ago. Social media influencers had spread the false claim that Flock Safety was feeding Ring footage directly to law enforcement agencies, such as ICE. While those claims are inaccurate, they were likely fueled by reporting from 404 Media that ICE has been able to access Flock Safety's data in its investigations. Had Ring's partnership with Flock Safety gone ahead, there would have been legitimate cause to believe that agencies like ICE could tap into the footage Ring users had shared—even if those users were under the impression they were only sharing footage with local agencies to solve specific cases.

While privacy advocates will likely celebrate this news, the cancelled partnership has no effect on Community Requests. Law enforcement agencies will still be able to request footage from Ring users, and those users will still have a say in whether or not they send that footage. Ring sees the feature as an objective good, allowing users to voluntarily share footage that could help law enforcement solve important cases. In its announcement on Friday, Ring cited the December 2025 Brown University shooting, in which seven users shared 168 video clips with law enforcement. According to Ring, one of those videos assisted police in identifying the suspect's car, which, in turn, solved the case.

Meta Plans to Add Facial Recognition Technology to Its Smart Glasses

In an internal memo last year, Meta said the political tumult in the United States would distract critics from the feature’s release.


The maker of Meta’s smart glasses said it sold more than seven million pairs last year.

Ring Cancels Its Partnership With Flock Safety After Surveillance Backlash

13 February 2026 at 04:00
Following intense backlash to its partnership with Flock Safety, a surveillance technology company that works with law enforcement agencies, Ring has announced it is canceling the integration. From a report: In a statement published on Ring's blog and provided to The Verge ahead of publication, the company said: "Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners ... The integration never launched, so no Ring customer videos were ever sent to Flock Safety." [...] Over the last few weeks, the company has faced significant public anger over its connection to Flock, with Ring users being encouraged to smash their cameras, and some announcing on social media that they are throwing away their Ring devices. The Flock partnership was announced last October, but following recent unrest across the country related to ICE activities, public pressure against the Amazon-owned Ring's involvement with the company started to mount. Flock has reportedly allowed ICE and other federal agencies to access its network of surveillance cameras, and influencers across social media have been claiming that Ring is providing a direct link to ICE.


Received before yesterday


Outlook add-in goes rogue and steals 4,000 credentials and payment data

12 February 2026 at 09:35

Researchers found a malicious Microsoft Outlook add-in that was used to steal 4,000 sets of Microsoft account credentials, along with credit card numbers and banking security answers.

How is it possible that the Microsoft Office Add-in Store ended up listing an add-in that silently loaded a phishing kit inside Outlook’s sidebar?

A developer launched an add-in called AgreeTo, an open-source meeting scheduling tool with a Chrome extension. It was a popular tool, but at some point, it was abandoned by its developer, its backend URL on Vercel expired, and an attacker later claimed that same URL.

That requires some explanation. Office add-ins are essentially XML manifests that tell Outlook to load a specific URL in an iframe. Microsoft reviews and signs the manifest once but does not continuously monitor what that URL serves later.

So, when the outlook-one.vercel.app subdomain became free to claim, a cybercriminal jumped at the opportunity to scoop it up and abuse the powerful ReadWriteItem permissions requested and approved in 2022. These permissions meant the add-in could read and modify a user’s email when loaded. The permissions were appropriate for a meeting scheduler, but they served a different purpose for the criminal.
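
To make that concrete, here is an abridged, hypothetical manifest in the standard Office add-in format (the display name, ID, and other details are placeholders; only the domain is the one named above). The key point is that the add-in is little more than a signed pointer to a web page, plus a permission level:

    <!-- Abridged, hypothetical Outlook add-in manifest (not AgreeTo's actual file). -->
    <!-- Outlook renders whatever this URL serves in a task-pane iframe; Microsoft   -->
    <!-- reviews the manifest once but does not re-check what the URL serves later.  -->
    <OfficeApp xmlns="http://schemas.microsoft.com/office/appforoffice/1.1"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="MailApp">
      <Id>00000000-0000-0000-0000-000000000000</Id>
      <Version>1.0.0.0</Version>
      <ProviderName>Example Publisher</ProviderName>
      <DefaultLocale>en-US</DefaultLocale>
      <DisplayName DefaultValue="Meeting Scheduler" />
      <Description DefaultValue="Schedules meetings from the Outlook sidebar." />
      <Hosts>
        <Host Name="Mailbox" />
      </Hosts>
      <FormSettings>
        <Form xsi:type="ItemRead">
          <DesktopSettings>
            <!-- If this domain lapses, whoever re-registers it controls the add-in. -->
            <SourceLocation DefaultValue="https://outlook-one.vercel.app/" />
            <RequestedHeight>450</RequestedHeight>
          </DesktopSettings>
        </Form>
      </FormSettings>
      <!-- Approved once, in 2022; lets the loaded page read and modify the open email item. -->
      <Permissions>ReadWriteItem</Permissions>
    </OfficeApp>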

While Google removed the dead Chrome extension in February 2025, the Outlook add-in stayed listed in Microsoft’s Office Store, still pointing to a Vercel URL that no longer belonged to the original developer.

An attacker registered that Vercel subdomain and deployed a simple four-page phishing kit consisting of fake Microsoft login, password collection, Telegram-based data exfiltration, and a redirect to the real login.microsoftonline.com.

What made this work was its simplicity. When users opened the add-in, they saw what looked like a normal Microsoft sign-in inside Outlook. They entered credentials, which were sent via a JavaScript function to the attacker’s Telegram bot along with IP data, and were then bounced to the real Microsoft login so nothing seemed suspicious.

The researchers were able to access the attacker’s poorly secured Telegram-based exfiltration channel and recovered more than 4,000 sets of stolen Microsoft account credentials, plus payment and banking data, indicating the campaign was active and part of a larger multi-brand phishing operation.

“The same attacker operates at least 12 distinct phishing kits, each impersonating a different brand – Canadian ISPs, banks, webmail providers. The stolen data included not just email credentials but credit card numbers, CVVs, PINs, and banking security answers used to intercept Interac e-Transfer payments. This is a professional, multi-brand phishing operation. The Outlook add-in was just one of its distribution channels.”

What to do

If you have used the AgreeTo add-in at any point after May 2023:

  • Check whether the add-in is still installed and, if it is, uninstall it.
  • Change the password for your Microsoft account.
  • If that password (or close variants) was reused on other services (email, banking, SaaS, social), change those as well and make each one unique.
  • Review recent sign‑ins and security activity on your Microsoft account, looking for logins from unknown locations or devices, or unusual times.
  • Review other sensitive information you may have shared via email.
  • Scan your mailbox for signs of abuse: messages you did not send, auto‑forwarding rules you did not create, or password‑reset emails for other services you did not request.
  • Watch payment statements closely for at least the next few months, especially small “test” charges and unexpected e‑transfer or card‑not‑present transactions, and dispute anything suspicious immediately.


3D Printer Surveillance

12 February 2026 at 07:01

New York is contemplating a bill that adds surveillance to 3D printers:

New York’s 2026­2027 executive budget bill (S.9005 / A.10005) includes language that should alarm every maker, educator, and small manufacturer in the state. Buried in Part C is a provision requiring all 3D printers sold or delivered in New York to include “blocking technology.” This is defined as software or firmware that scans every print file through a “firearms blueprint detection algorithm” and refuses to print anything it flags as a potential firearm or firearm component.

I get the policy goals here, but the solution just won’t work. It’s the same problem as DRM: trying to prevent general-purpose computers from doing specific things. Cory Doctorow wrote about it in 2018 and—more generally—spoke about it in 2011.

With Ring, American Consumers Built a Surveillance Dragnet

11 February 2026 at 20:45
Ring's Super Bowl ad on Sunday promoted "Search Party," a feature that lets a user post a photo of a missing dog in the Ring app and triggers outdoor Ring cameras across the neighborhood to use AI to scan for a match. 404 Media argues the cheerful premise obscures what the Amazon-owned company has become: a massive, consumer-deployed surveillance network. Ring founder Jamie Siminoff, who left in 2023 and returned last year, has since moved to re-establish police partnerships and push more AI into Ring cameras. The company has also partnered with Flock, a surveillance firm used by thousands of police departments, and launched a beta feature called "Familiar Faces" that identifies known people at your door. Chris Gilliard, author of the upcoming book Luxury Surveillance, called the ad "a clumsy attempt by Ring to put a cuddly face on a rather dystopian reality: widespread networked surveillance by a company that has cozy relationships with law enforcement." Further reading: No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare, EFF Says


The original Secure Boot certificates are about to expire, but you probably won’t notice

11 February 2026 at 16:45

With the original release of Windows 8, Microsoft also began enforcing Secure Boot. Nearly 15 years on, the original 2011 Secure Boot certificates are about to expire. If these certificates are not replaced with new ones, Secure Boot will cease to function – your machine will still boot and operate, but the benefits of Secure Boot are mostly gone, and as newer vulnerabilities are discovered, systems without updated Secure Boot certificates will be increasingly exposed.

Microsoft has already been rolling out new certificates through Windows updates, but only for users of supported versions of Windows, which means Windows 11. If you’re using Windows 10, without the Extended Security Updates, you won’t be getting the new certificates through Windows Update. Even if you use Windows 11, you may need a UEFI update from your laptop or motherboard OEM, assuming they still support your device.

For Linux users using Secure Boot, you’re probably covered by fwupd, which will update the certificates as part of your system’s update program, like KDE’s Discover. Of course, you can also use fwupd manually in the terminal, if you’d like. For everyone else not using Secure Boot, none of this will matter and you’re going to be just fine. I honestly doubt there will be much fallout from this updating process, but there’s always bound to be a few people who fall between the cracks.
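
If you want to check manually rather than wait for your distribution's update tool, the usual fwupd workflow looks roughly like this (a sketch; whether a Secure Boot certificate update is actually offered depends on your distribution, fwupd version, and hardware):

    # Refresh metadata from the firmware update service and list pending updates,
    # which may include UEFI Secure Boot key/db updates on supported systems.
    fwupdmgr refresh
    fwupdmgr get-updates

    # Apply whatever updates are offered.
    fwupdmgr update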

All we can do is hope whoever is responsible for Secure Boot at Microsoft hasn’t started slopcoding yet.

Google recovers "deleted" Nest video in high-profile abduction case

11 February 2026 at 15:15

Like most cloud-enabled home security cameras, Google's Nest products don't provide long-term storage unless you pay a monthly fee. That video may not vanish into the digital aether right on time, though. Investigators involved with the high-profile abduction of Nancy Guthrie have released video from Guthrie's Nest doorbell camera—video that was believed to have been deleted because Guthrie wasn't paying for the service.

Google's cameras connect to the recently upgraded Home Premium subscription service. For $10 per month, you get 30 days of stored events, and $20 gets you 60 days of events plus 10 days of continuous 24/7 video history. If you don't pay anything, Google only saves three hours of event history. After that, the videos are deleted, at least as far as the user is concerned. Newer Nest cameras have limited local storage that can cache clips for a few hours in case connectivity drops out, but there is no option for true local storage. Guthrie's camera was reportedly destroyed by the perpetrators.

Suspect in abduction approaches doorbell camera.

Expired videos are no longer available to the user, and Google won't restore them even if you later upgrade to a premium account. However, that doesn't mean the data is truly gone. Nancy Guthrie was abducted from her home in the early hours of February 1, and at first, investigators said there was no video of the crime because the doorbell camera was not on a paid account. Yet, video showing a masked individual fiddling with the camera was published on February 10.


Claim Your Payout From the 23andMe Data Breach Before It's Too Late

11 February 2026 at 09:30

If you were affected by 23andMe's data breach—which involved the information of approximately 6.4 million U.S. residents—you have just a few more days to claim your compensation. Following the 2023 credential-stuffing attack, 23andMe in 2024 agreed to a $30–$50 million payout for impacted consumers. The genetic testing company then filed for Chapter 11 bankruptcy in 2025 (introducing new privacy concerns around the potential sale of customer data). The courts approved the deal last month, and class members have until Feb. 17 to submit claims related to the cyber incident.

How much you'll receive from the 23andMe settlement

There are several tiers of payouts with the 23andMe settlement. Users with an "extraordinary claim"—those who experienced identity theft or fraudulent tax filings as a result of the breach—could qualify for up to $10,000 to reimburse verified expenses, including costs for physical or cyber security systems as well as mental health treatment.

Claimants who received notices that certain health information was leaked in the breach will be paid up to $165. Eligible data include raw genotype data, health reports (including health predisposition reports, wellness reports, and carrier status reports), and self-reported health conditions. Individuals residing in Alaska, California, Illinois, and Oregon will receive an additional $100 thanks to state privacy laws. Note that payments will likely take time to be distributed.

The settlement also provides for five years of identity monitoring services through a customized program called Privacy & Medical Shield + Genetic Monitoring. This is available to all class members regardless of payout.

How to file a 23andMe claim

Consumers who were impacted by the 2023 data breach can file a Cyber Security Incident Claim, which must be submitted by Feb. 17, 2026 (unless you received a notice in 2026 indicating otherwise). To be eligible, you must have been a 23andMe customer between May 1, 2023 and October 1, 2023 and have received a notice (via letter or email) that your information was compromised in the breach. You also must attest that you incurred damages (monetary or non-monetary) as a result of the incident.

Claims can be filed online via the settlement website, or you can mail a hard copy of your claim form (postmarked by Feb. 17) to the address listed. To complete a claim, you must provide some personal information as well as details about the harm incurred with supporting documentation, such as bank or credit card statements substantiating losses.

Discord Tries To Walk Back Age Verification Panic, Says Most Users Won't Need Face Scans

11 February 2026 at 04:00
Discord has moved to calm a user backlash over its upcoming age verification mandate by clarifying that the "vast majority" of people will never be asked to confirm their age through a face scan or government ID. The platform said it will instead rely on an internal "age prediction" model that draws on account information, device and activity data, and behavioral patterns across its communities to estimate whether someone is an adult. Users whose age the model cannot confidently determine will still need to submit a video selfie or ID. Those not verified as adults or identified as under 18 will be placed in a "teen-appropriate" experience that blocks access to age-restricted servers and channels. The clarification came after users threatened to leave the platform and cancel Nitro subscriptions, and after a third-party vendor used by Discord for age verification suffered a data breach last year that exposed user information and a small number of uploaded ID cards.


Upgraded Google safety tools can now find and remove more of your personal info

10 February 2026 at 11:59

Do you feel popular? There are people on the Internet who want to know all about you! Unfortunately, they don't have the best of intentions, but Google has some handy tools to address that, and they've gotten an upgrade today. The "Results About You" tool can now detect and remove more of your personal information. Plus, the tool for removing non-consensual explicit imagery (NCEI) is faster to use. All you have to do is tell Google your personal details first—that seems safe, right?

With today's upgrade, Results About You gains the ability to find and remove pages that include ID numbers like your passport, driver's license, and Social Security. You can access the option to add these to Google's ongoing scans from the settings in Results About You. Just click in the ID numbers section to enable detection.

Naturally, Google has to know what it's looking for to remove it. So you need to provide at least part of those numbers. Google asks for the full driver's license number, which is fine, as it's not as sensitive. For your passport and SSN, you only need the last four digits, which is enough for Google to find the full numbers on webpages.


What to Do If (or When) Your Email Is Leaked to the Dark Web

10 February 2026 at 17:00

The dark web has a bad reputation—one it has earned, at that. It's a complex subsection of the web, and it's not all bad by any means, but its nature does allow illicit and illegal activity to prosper anonymously. That's why hackers choose the dark web as their point of sale for stolen user data: If you're going to traffic digital contraband, you're going to do so as privately as possible.

As such, you might be a bit stressed if you're told your email address was found on the dark web. Maybe you use an identity theft service, which discovered your information here. Perhaps you're noticing an uptick in spam, especially spam that seems targeted to you personally. In any case, it's understandable to be anxious. The good news is, this is more common than you think, and there are steps you can take to protect your data going forward.

What is the dark web?

Despite its aforementioned reputation, the dark web is not "Evil Doers Central." It's simply one part of the deep web, or the part of the internet not indexed by search engines. The deep web makes up the vast majority of the global internet, but the dark web is unique, because it requires a specific type of browser, like Tor, and knowledge of specific dark web addresses, to access.

The dark web is inherently private, and inherently anonymous. That's why it attracts bad actors. But that doesn't mean that's all it's good for. Anyone who needs to access the internet without worrying about intervention can use the dark web. Think about journalists in countries that would rather they not tell their stories, or citizens whose governments censor the public internet. There's plenty of bad to be had, to be sure, but there's also perfectly innocent and productive content, too. For more information about this murky, mysterious place, check out our full explainer and guide here.

Why is my email address on the dark web?

If your email address is on the dark web, it's likely because one of the companies you shared it with suffered a data breach. Unfortunately, data breaches happen all the time, and there's really no way to ensure that a company you choose to share your email address with won't be a victim of a breach at some point in the future. Sometimes the company itself is breached; other times, it's a third party the company shares data with.

When bad actors break into an organization's systems and steal their data, they often put the spoils on the dark web. This makes it easier to sell the stolen data anonymously. As such, it's really no surprise if your email ends up on the dark web—though that might not be much consolation.

What can hackers do with my email on the dark web?

Your email address is for sale, and someone buys it. Now what? Well, such a hacker could choose a few tactics here. First, they'll likely want to try breaking into different accounts you might have used that email address with. If you lost any passwords in the data breach, they might try those, too. That's why it's an excellent idea to change your passwords as soon as you learn about the breach—but more on that later.

If they can't break into your accounts on their own, they'll want to enlist your services—unknowingly, of course. To do so, they'll likely target you in phishing attacks, and, seeing as they know your email address, they'll probably come via email. There are a lot of phishing campaigns out there, but here are some examples: You might receive fake data breach notices, with a link to check your account; you might find a message telling you it's time to change your password; you might get an email warning you about a login attempt; you might even receive an aggressive email, with demands from the hackers.

Hackers may also choose to impersonate you. They might create an email address that looks very similar to yours, and reach out to your contacts in order to trick them into thinking it's really you. Tell your close contacts (especially any you think won't look closely at the "from" line in an email) that your email was leaked on the dark web, and to watch out for imposters.

Here's what to do if your email address is on the dark web

First of all, don't panic. Again, data breaches happen so often that many of our email addresses (among other data) have leaked onto the dark web. While this isn't a good thing, it also isn't the end of the world.

Next, change your passwords, starting with your email account itself. If you know which account the email was stolen from, change that password next, as it may also have been affected in the data breach. As usual, make each password strong and unique: You should never reuse passwords across accounts, and all of them should be long and difficult for both humans and computers to guess. As long as each of your accounts uses a strong and unique password, you really shouldn't have to change all of your passwords: Hackers may have your email, but they won't have all these passwords to use with it.

From here, make sure all of your accounts use two-factor authentication (2FA), when available. 2FA ensures that even if someone has the email address and password for a given account, they still need access to a trusted device to verify your identity. Hackers won't be able to do anything with your stolen credentials if they don't have physical access to, say, your smartphone. This is a crucial step for maintaining your security following a data breach. You could also choose to use passkeys instead of passwords for any account that offers them. Passkeys combine the convenience of passwords with the security of 2FA: You log in with your fingerprint, face scan, or PIN, and there's no password to actually steal.

From here, monitor your various accounts connected to this email, especially your financial accounts. Your email address alone likely won't put you in too much jeopardy, but if you lost additional information, you'll want to ensure hackers don't breach your important accounts. You could take drastic steps, like freezing your credit, but, again, if it's just your email address, this is likely a step too far.

Can I remove my email from the dark web?

While some data removal services claim to be able to remove data like email addresses from the dark web, it's just not 100% possible. The dark web is vast and unregulated, and once the data leaks onto it, the cat's kind of out of the bag. Sure, a service like DeleteMe could ask the sites hosting your email to take it down, but those hosts don't have to comply. Plus, hackers who bought your email already have it. Again, exposed email addresses are not the end of the world. But if you can't stand having your email on the dark web, your best bet may be to make a new account.

Preventing your email address from winding up on the dark web

What you can do is take measures to prevent data loss in the future. The best step to take is to stop sharing your email in the first place. You don't need to be a hermit, though: Use an email alias service, like Apple's Hide My Email or Proton's email alias feature, to generate a new alias every time you need to share your email. Messages sent to the alias are forwarded to your inbox, so the experience is the same for you, all without exposing your actual address to the world. If one of these companies suffers a data breach, no problem: Just retire the alias.

To that point, going forward, consider using a data monitoring and removal service. Maybe you already do, and that's how you learned about your email on the dark web to begin with. But if you don't, there are many options out there to choose from. While none can promise they'll remove email addresses from the dark web, they might spot your email if it ends up there. If you use aliases, you can then kill that particular address and make a new one for the affected account. Plus, if your email ends up somewhere other than the dark web, they might be able to remove it for you.

Open Letter to Tech Companies: Protect Your Users From Lawless DHS Subpoenas

10 February 2026 at 17:52

We are calling on technology companies like Meta and Google to stand up for their users by resisting the Department of Homeland Security's (DHS) lawless administrative subpoenas for user data. 

In the past year, DHS has consistently targeted people engaged in First Amendment activity. Among other things, the agency has issued subpoenas to technology companies to unmask or locate people who have documented ICE's activities in their community, criticized the government, or attended protests.   

These subpoenas are unlawful, and the government knows it. When a handful of users challenged a few of them in court with the help of ACLU affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision. 


But it is difficult for the average user to fight back on their own. Quashing a subpoena is a fast-moving process that requires lawyers and resources. Not everyone can afford a lawyer on a moment’s notice, and non-profits and pro-bono attorneys have already been stretched to near capacity during the Trump administration.  

 That is why we, joined by the ACLU of Northern California, have asked several large tech platforms to do more to protect their users, including: 

  1.  Insist on court intervention and an order before complying with a DHS subpoena, because the agency has already proved that its legal process is often unlawful and unconstitutional;  
  2. Give users as much notice as possible when they are the target of a subpoena, so the user can seek help. While many companies have already made this promise, there are high-profile examples of it not happening—ultimately stripping users of their day in court;  
  3. Resist gag orders that would prevent companies from notifying their users that they are a target of a subpoena. 

We sent the letter to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X.

Recipients are not legally compelled to comply with administrative subpoenas absent a court order 

An administrative subpoena is an investigative tool available to federal agencies like DHS. Many times, these are sent to technology companies to obtain user data. These subpoenas cannot be used to obtain the content of communications, but they have been used to try to obtain basic subscriber information like name, address, IP address, length of service, and session times.

Unlike a search warrant, an administrative subpoena is not approved by a judge. If a technology company refuses to comply, an agency’s only recourse is to drop it or go to court and try to convince a judge that the request is lawful. That is what we are asking companies to do—simply require court intervention and not obey in advance. 

It is unclear how many administrative subpoenas DHS has issued in the past year. Subpoenas can come from many places—including civil courts, grand juries, criminal trials, and administrative agencies like DHS. Altogether, Google received 28,622 and Meta received 14,520 subpoenas in the first half of 2025, according to their transparency reports. The numbers are not broken out by type.   

DHS is abusing its authority to issue subpoenas 

In the past year, DHS has used these subpoenas to target protected speech. The following are just a few of the known examples. 

On April 1, 2025, DHS sent a subpoena to Google in an attempt to locate a Cornell PhD student in the United States on a student visa. The student was likely targeted because of his brief attendance at a protest the year before. Google complied with the subpoena without giving the student an opportunity to challenge it. While Google promises to give users prior notice, it sometimes breaks that promise to avoid delay. This must stop.   

In September 2025, DHS sent a subpoena and summons to Meta to try to unmask anonymous users behind Instagram accounts that tracked ICE activity in communities in California and Pennsylvania. The users—with the help of the ACLU and its state affiliates— challenged the subpoenas in court, and DHS withdrew the subpoenas before a court could make a ruling. In the Pennsylvania case, DHS tried to use legal authority that its own inspector general had already criticized in a lengthy report.  

In October 2025, DHS sent Google a subpoena demanding information about a retiree who criticized the agency’s policies. The retiree had sent an email asking the agency to use common sense and decency in a high-profile asylum case. In a shocking turn, federal agents later appeared on that person’s doorstep. The ACLU is currently challenging the subpoena.  

Read the full letter here

Discord will limit profiles to teen-appropriate mode until you verify your age

10 February 2026 at 10:29

Discord announced it will put all existing and new profiles in teen-appropriate mode by default in early March.

The teen-appropriate profile mode will remain in place until users prove they are adults. To change a profile to “full access” will require verification by Discord’s age inference model—a new system that runs in the background to help determine whether an account belongs to an adult, without always requiring users to verify their age.

Savannah Badalich, Head of Product Policy at Discord, explained the reasoning:

“Rolling out teen-by-default settings globally builds on Discord’s existing safety architecture, giving teens strong protections while allowing verified adults flexibility. We design our products with teen safety principles at the core and will continue working with safety experts, policymakers, and Discord users to support meaningful, long term wellbeing for teens on the platform.”

Platforms have been facing growing regulatory pressure—particularly in the UK, EU, and parts of the US—to introduce stronger age-verification measures. The announcement also comes as concerns about children’s safety on social media continue to surface. In research we published today, parents highlighted issues such as exposure to inappropriate content, unwanted contact, and safeguards that are easy to bypass. Discord was one of the platforms we researched.

The problem in Discord’s case lies in the age-verification methods it’s made available, which require either a facial scan or a government-issued ID. Discord says that video selfies used for facial age estimation never leave a user’s device, but this method is known not to work reliably for everyone.

Identity documents submitted to Discord’s vendor partners are also deleted quickly—often immediately after age confirmation, according to Discord. But, as we all know, computers are very bad at “forgetting” things and criminals are very good at finding things that were supposed to be gone.

Besides all that, the effectiveness of this kind of measure remains an issue. Minors often find ways around systems—using borrowed IDs, VPNs, or false information—so strict verification can create a sense of safety without fully eliminating risk. In some cases, it may even push activity into less regulated or more opaque spaces.

As someone who isn’t an avid Discord user, I can’t help but wonder why keeping my profile teen-appropriate would be a bad thing. Let us know in the comments what your objections to this scenario would be.

I wouldn’t have to provide identification and what I’d “miss” doesn’t sound terrible at all:

  • Mature and graphic images would be permanently blocked.
  • Age-restricted channels and servers would be inaccessible.
  • DMs from unknown users would be rerouted to a separate inbox.
  • Friend requests from unknown users would always trigger a warning pop-up.
  • No speaking on server stages.

Given the amount of backlash this news received, I’m probably missing something—and I don’t mind being corrected. So let’s hear it.


Google Is Rolling Out Two New Ways to Remove Your Sensitive Data From Search

10 February 2026 at 09:00

Google announced two new ways for users to remove their sensitive information from the web Tuesday morning—or, at least, remove that data from Google Search. The first lets users request that Google remove sensitive government ID information from Search, while the second gives users new tools to request the same for non-consensual explicit images.

Google's "Results about you" tool is getting an update


First, Google is updating its existing "Results about you" tool, which helps users scour the internet for their personal information. Before today, this tool could already locate data points like your name, phone number, email addresses, and home addresses. Following the update, you can now find and request the deletion of search results containing highly sensitive information, including your driver's license, passport, or Social Security number.

To launch this tool, click here. If you've never used "Results about you" before, you'll need to set it up to tell Google what to look out for. Once you do, you'll be able to add government ID numbers, such as your driver's license, passport, and Social Security number. If Google finds a match, the company will let you know. You can receive an alert from the Google app on your smartphone, which takes you to a summary of what data was found and where. From here, you can choose "Request to remove" or "Mark as reviewed."

Unfortunately, this tool won't remove the data from the websites that are hosting it, but it will eventually remove the search results—sharply reducing the chance that someone will find your data on their own.

Google says these changes will roll out in the U.S. over the "coming days," while it is working on bringing them to other countries in the future.

Google's simpler way to remove explicit images from Search


In addition to these changes, Google is now rolling out a simpler tool for users to request the removal of non-consensual explicit images (NCEI) from Search. If you find such an image on Search, you can tap the three dots on that image, choose "remove result," then "it shows a sexual image of me." You can also report whether the photo is real or artificially generated, and you can report multiple images at once, if needed. Your requests will all appear in the Results about you hub, so you can track the progress of each.

The tool lets you opt in to an option that will filter additional explicit results in other searches. Google says it will also share links to "emotional and legal support" after you submit a request.

Man tricked hundreds of women into handing over Snapchat security codes

10 February 2026 at 08:28

Fresh off a breathless Super Bowl Sunday, we’re less thrilled to bring you this week’s Weirdo Wednesday. Two stories caught our eye, both involving men who crossed clear lines and invaded women’s privacy online.

Last week, 27-year-old Kyle Svara of Oswego, Illinois admitted to hacking women’s Snapchat accounts across the US. Between May 2020 and February 2021, Svara harvested account security codes from 571 victims, leading to confirmed unauthorized access to at least 59 accounts.

Rather than attempting to break Snapchat’s robust encryption protocols, Svara targeted the account owners themselves with social engineering.

After gathering phone numbers and email addresses, he triggered Snapchat’s legitimate login process, which sent six-digit security codes directly to victims’ devices. Posing as Snapchat support, he then sent more than 4,500 anonymous messages via a VoIP texting service, claiming the codes were needed to “verify” or “secure” the account.

Svara showed particular interest in Snapchat’s My Eyes Only feature—a secondary four-digit PIN meant to protect a user’s most sensitive content. By persuading victims to share both codes, he bypassed two layers of security without touching a single line of code. He walked away with private material, including nude images.

Svara didn’t do this solely for his own kicks. He marketed himself as a hacker-for-hire, advertising on platforms like Reddit and offering access to specific accounts in exchange for money or trades.

Selling his services to others was how he got found out. Although Svara stopped hacking in early 2021, his legal day of reckoning followed the 2024 sentencing of one of his customers: Steve Waithe, a former track and field coach who worked at several high-profile universities including Northeastern. Waithe paid Svara to target student athletes he was supposed to mentor.

Svara also went after women in his home area of Plainfield, Illinois, and as far away as Colby College in Maine.

He now faces charges including identity theft, wire fraud, computer fraud, and making false statements to law enforcement about child sex abuse material. Sentencing is scheduled for May 18.

How to protect your Snapchat account

Never send someone your login details or secret codes, even if you think you know them.

This is also a good time to talk about passkeys.

Passkeys let you sign in without a password. Unlike the one-time codes used in traditional multi-factor authentication, passkeys are cryptographically tied to your device and can’t be phished or forwarded. Snapchat supports them, and they offer stronger protection than traditional multi-factor authentication, which is increasingly susceptible to smart phishing attacks.

Bad guys with smart glasses

Unfortunately, hacking women’s social media accounts to steal private content isn’t new. But predators will always find a way to use smart tech in nefarious ways. Such is the case with new generations of ‘smart glasses’ powered by AI.

This week, CNN published stories from women who believed they were having private, flirtatious interactions with strangers—only to later discover the men were recording them using camera-equipped smart glasses and posting the footage online.

These clips are often packaged as “rizz” videos—short for “charisma”—where so-called manfluencers film themselves chatting up women in public, without consent, to build followings and sell “coaching” services.

The glasses, sold by companies like Meta, are supposed to be used for recording only with consent, and often display a light to show that they’re recording. In practice, that indicator is easy to hide.

When combined with AI-powered services to identify people, as researchers did in 2024, the possibilities become even more chilling. We’re unaware of any related cases coming to court, but suspect it’s only a matter of time.



This Popular AI Chat App Exposed 300 Million Private Messages

9 February 2026 at 17:00

Have you ever used an application called Chat & Ask AI? If so, there's a good chance your messages were exposed last month. In January, an independent researcher was able to easily access some 300 million messages on the service, according to 404 Media's Emanuel Maiberg. The data included chat logs related to all kinds of sensitive topics, from drug use to suicide.

Chat & Ask AI, an app offered by the Istanbul-based company Codeway that is available on both Apple and Google app stores, claims to have around 50 million users. The application essentially resells access to large language models from other companies, including OpenAI, Anthropic, and Google, providing limited free access to its users.

The problem that led to the data leak was an insecure Google Firebase configuration, a relatively common vulnerability. The researcher was easily able to make himself an "authenticated" user, at which point he could read messages from 25 million of the app's users. He reportedly extracted and analyzed around 60,000 messages before reporting the issue to Codeway.
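
404 Media's report doesn't include the app's actual configuration, but the classic version of this mistake is a Firestore security rule that treats any signed-in user as trusted. A hypothetical illustration of the pattern (not Codeway's real rules):

    rules_version = '2';
    service cloud.firestore {
      match /databases/{database}/documents {
        // Overly broad: any authenticated user can read every document,
        // including other users' chat logs. Reads should instead be scoped
        // to the owner, e.g. request.auth.uid == resource.data.ownerId.
        match /{document=**} {
          allow read: if request.auth != null;
        }
      }
    }

Because anyone can create an account, "authenticated" in a rule like this is effectively the same as "anyone."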

The good news: The issue was quickly patched. More good news: there have been no reports of these messages leaking to the broader internet. Still, this is yet one more reason to carefully consider the kinds of messages you send AI chatbots. Remember, conversations with AI chatbots aren't private—by their nature, these systems often save your conversations to "remember" them later. In the case of a data breach, that could potentially lead to embarrassment, or worse—and using a reseller like Chat & Ask AI to access large language models adds another layer of potential security risk, as this recent leak demonstrates.

Your Browser's Extensions May Be Reading Your Passwords

9 February 2026 at 16:00

We should all take common-sense steps to make sure our data stays safe and secure: use strong passwords with our accounts, and never reuse passwords; employ two-factor authentication on any account that offers it; and avoid clicking strange links in emails or text messages. But even when you follow all those rules, your personal data can still be at risk, strictly because the services you rely on aren't following these rules themselves.

Some websites are putting your passwords at risk

Researchers at the University of Wisconsin-Madison discovered that a concerning number of browser extensions can access sensitive information that you enter into websites. Think passwords, credit card info, and Social Security numbers.

The team behind the discovery says they weren't out looking to break a security story. Instead, they were "messing around with login pages," specifically Google login pages, when they found that the pages' HTML source code exposed the passwords they entered in plain text. They turned their sights onto other websites—more than 7,000, reportedly—and found that about 15% of them were also storing sensitive information in plain text. That's over 1,000 websites exposing important data.

That, of course, is not supposed to happen: When you enter sensitive data into a website—say, your password into Google's login page—that site shouldn't see your password at all. In short, the sites confirm your passwords through hashing algorithms—essentially, jumbling your password into a code that can be checked against the code the site stores on their end. They can then confirm you entered the right password without ever exposing the actual text. By storing things like passwords and Social Security numbers in plain text, those sites are exposing that data to anyone in the know.
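
As a rough sketch of that idea (using Python's standard library, not any particular site's implementation), a site that handles passwords properly stores only a salt and a derived hash, and re-derives the hash at login to compare:

    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
        """Return (salt, derived key). Only these are stored, never the password itself."""
        salt = salt or os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, key

    def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
        """Re-derive a key from the submitted password and compare in constant time."""
        _, candidate = hash_password(password, salt)
        return hmac.compare_digest(candidate, stored_key)

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("wrong guess", salt, stored)

The researchers' point is that some pages never get that far: the raw text sits in the page itself, where anything with access to the page can read it.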

Importantly, that includes browser extensions. The researchers claim that 17,300 Chrome extensions—or 12.5% of the extensions available for download on Google's browser—have the permissions they need to view this sensitive plain text data. Think about the permissions you ignore when setting up a new extension, including permissions that give extensions full access to see and change what you enter on a webpage. Researchers didn't expose any extensions by name, as the situation is not necessarily the fault of the extensions, but considering the scope, it's possible some of the extensions you use can access sensitive information you enter in certain sites.
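
For a sense of what such a grant looks like, this is roughly the shape of a Chrome (Manifest V3) extension manifest that injects a content script into every page; the extension name and file names here are made up, and since JSON doesn't allow comments, the explanation lives in this paragraph instead. A content script matching all URLs runs inside each page's DOM and can read anything present there, including form fields:

    {
      "manifest_version": 3,
      "name": "Example Helper",
      "version": "1.0",
      "content_scripts": [
        {
          "matches": ["<all_urls>"],
          "js": ["content.js"],
          "run_at": "document_idle"
        }
      ]
    }

This is the kind of grant behind the install-time warning along the lines of "Read and change all your data on the websites you visit," which most people click through without a second thought.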

Again, legitimate extensions are not the main concern: Instead, it's the risk that a developer will create an extension with the intent of scraping sensitive info stored in plain text. While the researchers claim there are no extensions actively abusing this vulnerability yet, this isn't a theoretical problem. Researchers created an extension from scratch that could pull this user data, uploaded it to the Chrome Web Store, and got it approved. They took it down immediately, but proved it's possible for a hacker to get such a malicious extension onto the official store. Even if a hacker didn't make the extension themselves, they could acquire a legitimate extension with an existing user base, adjust the code to take advantage of the vulnerability, and spring the updated extension on unsuspecting users. It happens all the time, and not just on Chrome.

How to protect your sensitive data from malicious browser extensions

Unfortunately, there's little you can do to prevent these sites from storing your passwords, credit cards, and Social Security numbers in plain text. The hope is, following these discoveries, websites will improve their security and kill the vulnerabilities on their end. But that's on them, not you.

There are some steps you can take to mitigate the damage, however. First, make sure to limit your use of browser extensions. The fewer extensions you use, the less likely it is you'll use a malicious one. Use only extensions you fully trust, and frequently check in on updates. If the extension changes hands to a new developer, vet that new owner before continuing to use it. You could even disable your extensions when sharing sensitive information with websites. If you need to provide your Social Security number on an official web form, for example, you could disable your extensions to prevent them from reading the data.

You can also limit the data you share that could be stored in plain text. If given the option, use passkeys instead of passwords, as passkeys don't involve any plain text data that hackers could steal. Similarly, use secure payment systems, such as Apple Pay or Google Pay, which don't actually share your credit card information with the website you're making a payment on. The name of the game is to avoid typing out your sensitive details unless absolutely necessary—and then, reducing the parties who can see those details.

Is your phone listening to you? (re-air) (Lock and Code S07E03)

9 February 2026 at 13:49

This week on the Lock and Code podcast…

In January, Google settled a lawsuit that pricked up a few ears: It agreed to pay $68 million to a wide array of people who sued the company together, alleging that Google’s voice-activated smart assistant had secretly recorded their conversations, which were then sent to advertisers to target them with promotions.

Google admitted no wrongdoing in the settlement agreement, but the fact stands that one of the largest phone makers in the world decided to forego a trial over some potentially explosive surveillance allegations. It’s a decision that the public has already seen in the past, when Apple agreed to pay $95 million last year to settle similar legal claims against its smart assistant, Siri.

Back-to-back, the stories raise a question that just seems to never go away: Are our phones listening to us?

This week, on the Lock and Code podcast with host David Ruiz, we revisit an episode from last year in which we tried to find the answer. In speaking to Electronic Frontier Foundation Staff Technologist Lena Cohen about mobile tracking overall, it becomes clear that, even if our phones aren’t literally listening to our conversations, the devices are stuffed with so many novel forms of surveillance that we need not say something out loud to be predictably targeted with ads for it.

“Companies are collecting so much information about us and in such covert ways that it really feels like they’re listening to us.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)



Discord Will Require a Face Scan or ID for Full Access Next Month

9 February 2026 at 11:01
Discord said today it's rolling out age verification on its platform globally starting next month, when it will automatically set all users' accounts to a "teen-appropriate" experience unless they demonstrate that they're adults. From a report: Users who aren't verified as adults will not be able to access age-restricted servers and channels, won't be able to speak in Discord's livestream-like "stage" channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox. [...] A government ID might still be required for age verification in its global rollout. According to Discord, to remove the new "teen-by-default" changes and limitations, "users can choose to use facial age estimation or submit a form of identification to [Discord's] vendor partners, with more options coming in the future." The first option uses AI to analyze a user's video selfie, which Discord says never leaves the user's device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents "are deleted quickly -- in most cases, immediately after age confirmation."

Read more of this story at Slashdot.

iPhone Lockdown Mode Protects Washington Post Reporter

6 February 2026 at 07:00

404Media is reporting that the FBI could not access a reporter’s iPhone because it had Lockdown Mode enabled:

The court record shows what devices and data the FBI was able to ultimately access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it might be before the FBI may try other techniques to access the device.

“Because the iPhone was in Lockdown mode, CART could not extract that device,” the court record reads, referring to the FBI’s Computer Analysis Response Team, a unit focused on performing forensic analyses of seized devices. The document is written by the government, and is opposing the return of Natanson’s devices.

The FBI raided Natanson’s home as part of its investigation into government contractor Aurelio Perez-Lugones, who is charged with, among other things, retention of national defense information. The government believes Perez-Lugones was a source of Natanson’s and provided her with various pieces of classified information. While executing a search warrant for his mobile phone, investigators reviewed Signal messages between Perez-Lugones and the reporter, the Department of Justice previously said.

Flock cameras shared license plate data without permission

5 February 2026 at 06:24

Mountain View, California, pulled the plug on its entire license plate reader camera network this week. It discovered that Flock Safety, which ran the system, had been sharing city data with hundreds of law enforcement agencies, including federal ones, without permission.

Flock Safety runs an automated license plate recognition (ALPR) system that uses AI to identify vehicles’ number plates on the road. Mountain View Police Department (MVPD) Chief Mike Canfield ordered all 30 of the city’s Flock cameras disabled on February 3.

Two incidents of unauthorized sharing came to light. The first was a “national lookup” setting that was toggled on for one camera at the intersection of the city’s Charleston and San Antonio roads. Flock allegedly switched it on without telling the city.

That setting could violate California’s 2015 statute SB 34, which bars state and local agencies from sharing license plate reader data with out-of-state or federal entities. The law states:

“A public agency shall not sell, share, or transfer ALPR information, except to another public agency, and only as otherwise permitted by law.”

The statute defines a public agency as the state, or any city or county within it, covering state and local law enforcement agencies.

Last October, the state Attorney General sued the California city of El Cajon for knowingly violating that law by sharing license plate data with agencies in more than two dozen states.

However, MVPD said that Flock kept no records from the national lookup period, so nobody can determine what information actually left the system.

Mountain View says it never chose to share, which makes the violation different in kind. For the people whose plates were scanned, the distinction is academic.

A separate “statewide lookup” feature had also been active on 29 of the city’s 30 cameras since the initial installation, running for 17 straight months until Mountain View found and disabled it on January 5. Through that tool, more than 250 agencies that had never signed any data agreement with Mountain View ran an estimated 600,000 searches over a single year, according to local paper the Mountain View Voice, which first uncovered the issue after filing a public records request.

Over the past year, more than two dozen municipalities across the country have ended contracts with Flock, many citing the same worry that data collected for local crime-fighting could be used for federal immigration enforcement. Santa Cruz became the first in California to terminate its contract last month.

Flock’s own CEO reportedly acknowledged last August that the company had been running previously undisclosed pilot programs with Customs and Border Protection and Homeland Security Investigations.

The cameras will remain offline until the City Council meets on February 24. Canfield says that he still supports license plate reader technology, just not this vendor.

This goes beyond one city’s vendor dispute. If strict internal policies weren’t enough to prevent unauthorized sharing, it raises a harder question: whether policy alone is an adequate safeguard when surveillance systems are operated by third parties.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

French Police Raid X Offices as Grok Investigations Grow

3 February 2026 at 16:25


French police raided the offices of the X social media platform today as European investigations grew into nonconsensual sexual deepfakes and potential child sexual abuse material (CSAM) generated by X’s Grok AI chatbot. A statement (in French) from the Paris prosecutor’s office suggested that Grok’s dissemination of Holocaust denial content may also be an issue in the Grok investigations. X owner Elon Musk and former CEO Linda Yaccarino were issued “summonses for voluntary interviews” on April 20, along with X employees the same week. Europol, which is assisting in the investigation, said in a statement that the investigation is “in relation to the proliferation of illegal content, notably the production of deepfakes, child sexual abuse material, and content contesting crimes against humanity. ... The investigation concerns a range of suspected criminal offences linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity.” The French action comes amid a growing UK probe into Grok’s use of nonconsensual sexual imagery, and last month the EU launched its own investigation into the allegations. Meanwhile, a new Reuters report suggests that X’s attempts to curb Grok’s abuses are failing. “While Grok’s public X account is no longer producing the same flood of sexualized imagery, the Grok chatbot continues to do so when prompted, even after being warned that the subjects were vulnerable or would be humiliated by the pictures,” Reuters wrote in a report published today.

French Prosecutor Calls X Investigation ‘Constructive’

The French prosecutor’s statement said the investigation “is, at this stage, part of a constructive approach, with the objective of ultimately guaranteeing the X platform's compliance with French laws, insofar as it operates in French territory” (translated from the French). The investigation initially began in January 2025, the statement said, and “was broadened following other reports denouncing the functioning of Grok on the X platform, which led to the dissemination of Holocaust denial content and sexually explicit deepfakes.” The investigation concerns seven “criminal offenses,” according to the Paris prosecutor’s statement:
  • Complicity in the possession of images of minors of a child pornography nature
  • Complicity in the dissemination, offering, or making available of images of minors of a child pornography nature by an organized group
  • Violation of the right to image (sexual deepfakes)
  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Tampering with the operation of an automated data processing system by an organized group
  • Administration of an illicit online platform by an organized group
The Paris prosecutor’s office deleted its X account after announcing the investigation.

Grok Investigations in the UK Grow

In the UK, the Information Commissioner’s Office (ICO) announced that it was launching an investigation into Grok abuses, on the same day the UK Ofcom communications services regulator said its own authority to investigate chatbots may be limited. William Malcolm, ICO’s Executive Director for Regulatory Risk & Innovation, said in a statement: “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.” “Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights,” Malcolm added. “Where we find obligations have not been met, we will take action to protect the public.”

Ilia Kolochenko, CEO at ImmuniWeb and a cybersecurity law attorney, said in a statement: “The patience of regulators is not infinite: similar investigations are already pending even in California, let alone the EU. Moreover, some countries have already temporarily restricted or threatened to restrict access to X’s AI chatbot and more bans are probably coming very soon.” “Hopefully X will take these alarming signals seriously and urgently implement the necessary security guardrails to prevent misuse and abuse of its AI technology,” Kolochenko added. “Otherwise, X may simply disappear as a company under the snowballing pressure from the authorities and a looming avalanche of individual lawsuits.”

An AI plush toy exposed thousands of private chats with children

3 February 2026 at 11:55

Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.

Bondu’s toy is marketed as:

“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”

What it doesn’t say is that anyone with a Gmail account could read the transcripts from virtually every child who used a Bondu toy. Without any actual hacking, simply by logging in with an arbitrary Google account, two researchers found themselves looking at children’s private conversations.

What Bondu has to say about safety does not mention security or privacy:

“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”

Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?

The researchers were stunned to find the company’s public-facing web console allowed anyone to log in with their Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.

Potentially, these chat logs could have been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.

Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, they saw “no evidence” of other access, and they brought in a security firm and added monitoring.

In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative to screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents “don’t feel like you have to be sitting them in front of a TV.”

So, whether it’s a foul-mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using artificial intelligence in children’s toys—not unless we reach a point where they can be used safely, privately, and securely, and even then, sparingly.

How to stay safe

AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.

  • Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
  • Read the privacy policy. Yes, I know: all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
  • Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
  • Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
  • Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
  • Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Google Shut Down Its Dark Web Monitoring Tool, so Here's What to Use Instead

3 February 2026 at 10:30

Another Google tool is biting the dust: The company's dark web monitoring tool, launched in March 2023, will be shut down on Feb. 16. According to Google, feedback on the feature suggested it "didn't provide helpful next steps"—so while it alerted users when their data was out in the wild, it wasn't clear what to do about it. Now, Google is shifting its focus from the dark web monitoring tool to features like its online Security Check-Up and passkey protection. In other words, instead of flagging when your account credentials appear in a data breach, Google wants to make sure that your accounts stay safe even if a breach has occurred.

There are reasons why you should be keeping an eye on dark web chatter, however—and there are tools to take over the monitoring job now that Google has backed out.

What is the dark web—and why do I need to monitor it?

Keeper BreachWatch
Keeper provides a free dark web scan. Credit: Lifehacker

Essentially, the dark web is made up of online spaces that you can't get to just by pointing your browser at a web address. You need specialist software and a little bit of technical know-how to find your way into the dark web and to navigate around it. It's largely hidden from the world at large via encryption and rerouting. Why all the secrecy? The dark web is used to evade both law enforcement and ruling powers, so it's the perfect place to carry out somewhat illicit activities as well as get around the machinations of oppressive surveillance states. It's a place where hackers and whistleblowers alike can gather.

Speaking of hackers, dumps of information from data breaches will often find their way on to the dark web, to be traded or given away for free. Whether it's your email address, phone number, social security number, or passwords, if this data has been exposed by a hack, you're much more likely to find it on the dark web than on Reddit.

Dark web monitoring tools, like the one Google just shut down, are intended to give you a heads up if your details have appeared in a data dump. You can then do something about it, whether it's getting in touch with your bank to check for any signs of identity theft, or changing the password for your email service.

Having a dedicated tool for the task saves you from having to trawl the dark web yourself—which isn't particularly easy or pleasant—and while Google might be closing down its monitoring service, you've got several alternatives you can turn to instead.

The best dark web monitoring tools you can try

Proton Dark Web Monitoring
Proton's dark web scanner is part of the Proton Unlimited subscription. Credit: Lifehacker

Proton is a favorite among privacy enthusiasts, and the privacy-focused company also has a Dark Web Monitoring tool of its own. You do need a paid plan to access it though, from $12.99 a month or $119.88 a year, which includes multiple perks across all Proton's products. You can find it from the Security and privacy side panel in the Proton Mail app.

Proton uses a variety of intelligence datasets in its dark web sweep, and looks out for details including email addresses, usernames, dates of birth, physical addresses, and government IDs. The leaks will be categorized in terms of how urgently action needs to be taken, and Proton doesn't give your data to third parties.

Trend Micro has a Data Leak Checker that covers the dark web, which you can use without paying anything or even signing up for an account—though you can only check for mentions of your email address or phone number in leaks. For more comprehensive scans and alerts, you can sign up for a premium account, from $9.99 a month or $49.99 a year—and there's lots more included besides dark web monitoring.

Keeper Security takes the same approach with BreachWatch: You can run a quick scan for breaches including your email address without paying or signing up, but if you want anything more advanced (including proactive notifications) then you need to sign up for $24.99 a year. The feature can be added to any of Keeper's other paid-for plans too.

If you currently pay for a security product, such as a password manager or a VPN, you may well find that dark web monitoring is included—so check through your existing subscriptions. For example, the Surfshark Alert dark web monitoring tool comes as part of the Surfshark One VPN bundle, with pricing from $17.95 a month or $40.68 a year.

Microsoft is Giving the FBI BitLocker Keys

3 February 2026 at 07:05

Microsoft gives the FBI the ability to decrypt BitLocker in response to court orders: about twenty times per year.

It’s possible for users to store those keys on a device they own, but Microsoft also recommends BitLocker users store their keys on its servers for convenience. While that means someone can access their data if they forget their password, or if repeated failed attempts to log in lock the device, it also makes them vulnerable to law enforcement subpoenas and warrants.

Apple’s new iOS setting addresses a hidden layer of location tracking

3 February 2026 at 06:20

Most iPhone owners have hopefully learned to manage app permissions by now, including allowing location access. But there’s another layer of location tracking that operates outside these controls. Your cellular carrier has been collecting your location data all along, and until now, there was nothing you could do about it.

Apple just changed this in iOS 26.3 with a new setting called “limit precise location.”

How Apple’s anti-carrier tracking system works

Cellular networks track your phone’s location based on the cell towers it connects to, in a process known as triangulation. In cities where towers are densely packed, triangulation is precise enough to track you down to a street address.
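
To make the street-address claim concrete, here is a toy multilateration sketch in Python. It is not how carrier systems actually work (real networks rely on measurements such as signal timing rather than a tidy set of known distances), and every tower coordinate and distance below is invented for illustration.

    import numpy as np

    # Hypothetical tower coordinates (meters, on a local grid) and estimated
    # distances from the phone to each tower. All values are made up.
    towers = np.array([[0.0, 0.0], [1200.0, 0.0], [0.0, 900.0]])
    dists = np.array([500.0, 984.9, 583.1])

    # Subtract the first circle equation from the others to get a linear system:
    # 2(xi - x1)x + 2(yi - y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    x1, y1 = towers[0]
    A, b = [], []
    for (xi, yi), di in zip(towers[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(dists[0] ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)

    position, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    print(position)  # roughly [300. 400.], a single point at street-level precision

The denser the towers, the shorter the distances involved and the tighter that single point becomes, which is why urban tracking can narrow down to an address.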

This tracking is different from app-based location monitoring, because your phone’s privacy settings have historically been powerless to stop it. Toggle Location Services off entirely, and your carrier still knows where you are.

The new setting reduces the precision of location data shared with carriers. Rather than a street address, carriers would see only the neighborhood where a device is located. It doesn’t affect emergency calls, though, which still transmit precise coordinates to first responders. Apps like Apple’s “Find My” service, which locates your devices, or its navigation services, aren’t affected because they work using the phone’s location sharing feature.

Why is Apple doing this? Apple hasn’t said, but the move comes after years of carriers mishandling location data.

Unfortunately, cellular network operators have played fast and loose with this data. In April 2024, the FCC fined Sprint and T-Mobile (which have since merged), along with AT&T and Verizon, nearly $200 million combined for illegally sharing this location data. They sold access to customers’ location information to third-party aggregators, who then sold it on to other third parties without customer consent.

This turned into a privacy horror story for customers. One aggregator, LocationSmart, had a free demo on its website that reportedly allowed anyone to pinpoint the location of most mobile phones in North America.

Limited rollout

The feature only works with devices equipped with Apple’s custom C1 or C1X modems. That means just three devices: the iPhone Air, iPhone 16e, and the cellular iPad Pro with M5 chip. The iPhone 17, which uses Qualcomm silicon, is excluded. Apple can only control what its own modems transmit.

Carrier support is equally narrow. In the US, only Boost Mobile is participating in the feature at launch, while Verizon, AT&T, and T-Mobile are notably absent from the list given their past record. In Germany, Telekom is on the participant list, while both EE and BT are involved in the UK. In Thailand, AIS and True are on the list. No other carriers are taking part as of today.

Android also offers some support

Google also introduced a similar capability with Android 15’s Location Privacy hardware abstraction layer (HAL) last year. It faces the same constraint, though: modem vendors must cooperate, and most have not. Apple and Google don’t get to control the modems in most phones. This kind of privacy protection requires vertical integration that few manufacturers possess and few carriers seem eager to enable.

Most people think controlling app permissions means they’re in control of their location. This feature highlights something many users didn’t know existed: a separate layer of tracking handled by cellular networks, and one that still offers users very limited control.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Apple Has a New Setting to Protect Your Location Data, but Not Everyone Can Use It

2 February 2026 at 13:30

Some iOS users are getting an extra layer of privacy when it comes to how their location data is shared. Limit Precise Location is a new setting that prevents some Apple devices from broadcasting specific locations to cell carriers.

Precise location sharing is useful, even essential, in some cases, such as when you're navigating with your maps app. But you may not want to constantly be sending your exact address to your phone provider, where it could be used for malicious purposes. If you enable Limit Precise Location, your iOS device will share your general area instead.

Precise location sharing comes with privacy risks

As TechCrunch points out, precise location sharing introduces a whole host of privacy and security risks. Cell carriers have been targeted by hackers, compromising sensitive customer data. Surveillance vendors and law enforcement agencies may also use location information broadcast via cellular networks for the purposes of real-time and ongoing tracking.

Users already have the option to disable precise location sharing at the app level on both iOS and Android for apps that don't need GPS coordinates to function—which is most of them. This allows you to prevent companies from receiving (and selling) your exact location data when a general location is sufficient. Limit Precise Location won't change these app-specific settings.

For now, the feature is available only on select Apple models—the iPhone Air, iPhone 16e, and iPad Pro (M5) Wi-Fi + Cellular—running iOS 26.3 with a limited number of global carriers:

  • U.S.: Boost Mobile

  • UK: EE, BT

  • Germany: Telekom

  • Thailand: AIS, True

Apple says that even with this setting enabled, emergency responders will still be able to pinpoint exact location during an emergency call.

How to disable precise location sharing

If you have a supported device with a partner carrier, go to Settings > Cellular and tap Cellular Data Options (you may need to select the specific line under SIMs if you have more than one). Scroll down and toggle on Limit Precise Location.

ShinyHunters Leads Surge in Vishing Attacks to Steal SaaS Data

2 February 2026 at 11:39

Several threat clusters are using vishing in extortion campaigns whose tactics are consistent with those of the high-profile threat group ShinyHunters. They are stealing SSO and MFA credentials to access companies' environments and steal data from cloud applications, according to Mandiant researchers.

The post ShinyHunters Leads Surge in Vishing Attacks to Steal SaaS Data appeared first on Security Boulevard.

Why Gen Z is Ditching Smartphones for Dumbphones

2 February 2026 at 00:00

Younger generations are increasingly ditching smartphones in favor of “dumbphones”—simpler devices with fewer apps, fewer distractions, and less tracking. But what happens when you step away from a device that now functions as your wallet, your memory, and your security key? In this episode, Tom and Scott explore the dumbphone movement through a privacy and […]

The post Why Gen Z is Ditching Smartphones for Dumbphones appeared first on Shared Security Podcast.

The post Why Gen Z is Ditching Smartphones for Dumbphones appeared first on Security Boulevard.


US Government Also Received a Whistleblower Complaint That WhatsApp Chats Aren't Private

31 January 2026 at 22:11
Remember that lawsuit questioning WhatsApp's end-to-end encryption? Thursday Bloomberg reported those allegations had been investigated by special agents with America's Commerce Department, "according to the law enforcement records, as well as a person familiar with the matter and one of the contractors." Similar claims were also the subject of a 2024 whistleblower complaint to the US Securities and Exchange Commission, according to the records and the person, who spoke on the condition that they not be identified out of concern for potential retaliation. The investigation and whistleblower complaint haven't been previously reported... Last year, two people who did content moderation work for WhatsApp told an investigator with Commerce's Bureau of Industry and Security that some staff at Meta have been able to see the content of WhatsApp messages, according to the agent's report summarizing the interviews. [A spokesperson for the Bureau later told Bloomberg that investigator's assertions were "unsubstantiated and outside the scope of his authority as an export enforcement agent."] Those content moderators, who worked for Meta through a contract with the management and technology consulting firm Accenture Plc, also alleged that they and some of their colleagues had broad access to the substance of WhatsApp messages that were supposed to be encrypted and inaccessible, according to the report. "Both sources confirmed that they had employees within their physical work locations who had unfettered access to WhatsApp," wrote the agent... One of the content moderators who told the investigator she had access said she also "spoke with a Facebook team employee and confirmed that they could go back aways into WhatsApp (encrypted) messages, stating that they worked cases that involved criminal actions," according to the document... The investigator's report, dated July 2025, described the investigation as "ongoing," includes a case number and dubs the inquiry "Operation Sourced Encryption..." The inquiry was active as recently as January, according to a person familiar with the matter. The inquiry's current status and who may be the defined target are both unclear. Many investigations end without any formal accusations of wrongdoing... WhatsApp on its website says it does, in some instances, allow information about messages to be seen by the company. If someone reports a user or group for problematic messages, "WhatsApp receives up to five of the last messages they've sent to you" and "the user or group won't be notified," the company says. In those cases, WhatsApp says it receives the "group or user ID, information on when the message was sent, and the type of message sent (image, video, text, etc.)." Former contractors outlined much broader access. Larkin Fordyce was an Accenture contractor who the report says an agent interviewed about content moderation work for Meta. Fordyce told the investigator he spent years doing this work out of an Austin, Texas office starting as early as the end of 2018. He said moderators eventually were granted their own access to WhatsApp, but even before that they could request access to communications and "the Facebook team was able to 'pull whatever they wanted and then send it,'" the report states... The agent also gathered records that were filed in the whistleblower complaint to the SEC, according to his report, which doesn't describe the materials... The status of the whistleblower complaint is unclear. 
Some key points from the article:
  • "The investigative report seen by Bloomberg doesn't include a technical explanation of the contractors' claims."
  • "A spokesperson for Meta, which acquired WhatsApp in 2014, said the contractors' claims are impossible."
  • One contractor "said that there was little vetting" of foreign nationals hired to do content moderation for Meta, saying this granted them "full access to the same portal to review" content moderation cases.

Read more of this story at Slashdot.

Nine Phone Settings to Change Before Attending a Protest

30 January 2026 at 15:00

Before you head out to a protest, take some precautions to protect your privacy and both the physical and digital security of any device you bring along. The most secure option, of course, is to leave your phone at home, but you can also lock things down to minimize the risk that your data will be accessible to law enforcement or someone who gets hold of your device.

Thankfully, both iOS and Android have built-in device encryption if you're using a passcode, meaning that your device's data cannot be accessed when it is locked. (On Android, go to Settings > Security to ensure Encrypt Disk is enabled). You'll want to maximize this protection with the following privacy settings.

Turn off face and fingerprint scanning

At an absolute minimum, you'll want to disable biometric access, such as face and fingerprint authentication, on your device in favor of a passcode or PIN. As the Electronic Frontier Foundation notes, this minimizes the risk of being physically forced to unlock your device and may provide stronger legal protections against compelled decryption.

On iOS, go to Settings > Face ID & Passcode and toggle off iPhone Unlock. You can also set up a stronger passcode—a custom numeric or alphanumeric code—under Change Passcode. On Android, you'll find the option to delete your fingerprint in favor of your PIN or screen lock pattern under Settings > Security & Privacy > Device Unlock > Fingerprint.

Limit location tracking

Again, the best option to prevent your location from being tracked is to coordinate any details in advance and leave your phone at home. If you must bring it along, keep it off unless you absolutely need to use it.

You can turn on Airplane Mode in advance, as well as disable Bluetooth, wifi, and location services, which keeps your device from transmitting your location. However, note that some apps may still be able to store GPS data and transmit it when an internet connection is available—so again, the safest bet is to keep your device off for the duration.

Airplane Mode can be enabled (and wifi and Bluetooth disabled) in your device's settings or quick access menu. On Android, go to Settings > Location to disable location services and turn off Location History in your Google account. On iOS, head to Settings > Privacy & Security > Location Services to disable location services entirely.

Turn off previews and notifications

Temporarily disable notifications and screen previews so that if someone gets your device, they won't be able to glean any information from your lock screen. You can adjust these options under Settings > Notifications on iOS and Settings > Apps & notifications > Notifications on Android.

Adjust screen lock time

Minimize your screen lock time to as short a period as possible so that your screen turns off when you're not actively using it and will require authentication to reopen. On iOS, go to Settings > Display & Brightness > Auto-Lock and select 30 seconds. The exact path on Android may vary, but typically you'll find this under Settings > Display or Lock Screen.

Know that most devices have camera access from the lock screen, so you can take photos or record video without actually unlocking your device.

Enable app pinning or Guided Access

App pinning (Android) and Guided Access (iOS) are features that prevent others from navigating through your phone beyond a specific app or screen. This allows you to use an essential feature on your device while locking the rest behind your PIN or passcode. You can enable this preemptively, and if someone grabs your device, they won't be able to snoop around.

You can find this setting on Android under Security or Security & location > Advanced > App pinning and on iOS under Settings > Accessibility > Guided Access.

Use a SIM PIN

You can also lock your SIM card to prevent unauthorized use of your device or SIM card, including access to two-factor authentication codes sent via SMS. This PIN will be required any time your phone restarts or if someone tries to use your SIM card in another device. On iOS, go to Settings > Cellular, select your SIM, and tap SIM PIN. On Android, you'll find this under Settings > Security > More security settings (the exact path varies by device).

Sign out of, hide, or delete apps

This step will vary depending on what you keep on your phone and your risk tolerance, but you may want to consider signing out of your social media accounts and deleting apps that contain or allow access to sensitive data.

On iOS, you can also lock or hide specific apps: the former requires an extra authentication step to open apps on your home screen, while the latter sends apps to a hidden folder that also requires authentication to unlock. Touch and hold an app icon to bring up the quick actions menu, then tap Require Face ID/Require Passcode.

On Android, you can set up a "private space" to lock apps behind your pattern, PIN, or password. Apps are hidden from the launcher and recent views as well as quick search. Go to Settings > Security & privacy > Private space, authenticate with your screen lock, and tap Set up > Got it.

If necessary, turn on Lockdown Mode or Advanced Protection

Both iOS and Android have strict device-level security modes that significantly limit access to certain app and web features and block changes to settings. Both were designed for journalists, activists, and other users who handle sensitive data and may be targeted by sophisticated attackers. These settings are overkill for day-to-day use but add a potentially helpful layer of security in high-risk situations.

Enable Lockdown Mode on iOS via Settings > Privacy & Security > Lockdown Mode. On Android, turn on Advanced Protection under Settings > Security & privacy > Advanced Protection.

Protect your privacy after a protest

While the above steps are largely about securing your data during a protest, you should also follow best practices for protecting privacy (yours and others') after the fact. If you plan to post photos or videos, utilize blurring tools to block faces and other unique identifying features, and scrub file metadata, which includes information like photo location. You can do this by taking a screenshot of the image to post or sending a copy to yourself in Signal, which automatically strips metadata. Signal also has a photo blurring tool, or you can blur in your device's default photo editing app.
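
If you'd rather scrub metadata yourself, one more option is a small script. The sketch below uses the Pillow imaging library in Python with placeholder file names; it copies only the pixel data into a new file, leaving EXIF metadata such as GPS coordinates and timestamps behind. Blurring faces is still a separate step in your editing app or Signal.

    from PIL import Image  # pip install Pillow

    # Copy only the pixel data into a fresh image, leaving the EXIF block
    # (GPS coordinates, timestamps, device model) behind.
    # "protest.jpg" and "protest_clean.jpg" are placeholder file names.
    original = Image.open("protest.jpg")
    clean = Image.new(original.mode, original.size)
    clean.putdata(list(original.getdata()))
    clean.save("protest_clean.jpg")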

TikTok’s privacy update mentions immigration status. Here’s why.

30 January 2026 at 06:48

In 2026, could any five words be more chilling than “We’re changing our privacy terms?”

The timing could not have been worse for TikTok US when it sent millions of US users a mandatory privacy pop-up on January 22. The message forced users to accept updated terms if they wanted to keep using the app. Buried in that update was language about collecting “citizenship or immigration status.”

Specifically, TikTok said:

“Information You Provide may include sensitive personal information, as defined under applicable state privacy laws, such as information from users under the relevant age threshold, information you disclose in survey responses or in your user content about your racial or ethnic origin, national origin, religious beliefs, mental or physical health diagnosis, sexual life or sexual orientation, status as transgender or nonbinary, citizenship or immigration status, or financial information.”

The internet reacted badly. TikTok users took to social media, with some suggesting that TikTok was building a database of immigration status, and others pledging to delete their accounts. It didn’t help that TikTok’s US operation became a US-owned company on the same day, with Senator Ed Markey (D-Mass.) criticizing what he sees as a lack of transparency around the deal.

A legal requirement

In this case, things may be less sinister than you’d think. The language is not new—it first appeared around August 2024. And TikTok is not asking users to provide their immigration status directly.

Instead, the disclosure covers sensitive information that users might voluntarily share in videos, surveys, or interactions with AI features.

The change appears to be driven largely by California’s AB-947, signed in October 2023. The law added immigration status to the state’s definition of sensitive personal information, placing it under stricter protections. Companies are required to disclose how they process sensitive personal information, even if they do not actively seek it out.

Other social media companies, including Meta, do not explicitly mention immigration status in their privacy policies. According to TechCrunch, that difference likely reflects how specific their disclosure language is—not a meaningful difference in what data is actually collected.

One meaningful change in TikTok’s updated policy does concern location tracking. Previous versions stated that TikTok did not collect GPS data from US users. The new policy says it may collect precise location data, depending on user settings. Users can reportedly opt out of this tracking.

Read the whole board, not just one square

So, does this mean TikTok—or any social media company—deserves our trust? That’s a harder question.

There are still red flags. In April, TikTok quietly removed a commitment to notify users before sharing data with law enforcement. According to Forbes, the company has also declined to say whether it shares, or would share, user data with agencies such as the Department of Homeland Security (DHS) or Immigration and Customs Enforcement (ICE).

That uncertainty is the real issue. Social media companies are notorious for collecting vast amounts of user data, and for being vague about how it may be used later. Outrage over a particularly explicit disclosure is understandable, but the privacy problem runs much deeper than a single policy update from one company.

People have reason to worry unless platforms explicitly commit to not collecting or inferring sensitive data—and explicitly commit to not sharing it with government agencies. And even then, skepticism is healthy. These companies have a long history of changing policies quietly when it suits them.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

An AI Toy Exposed 50K Logs of Its Chats With Kids To Anyone With a Gmail Account

29 January 2026 at 19:02
An anonymous reader quotes a report from Wired: Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts. So Thacker looked into it. With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu's web-based portal, intended to allow parents to check on their children's conversations and for Bondu's staff to monitor the products' use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy. Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations, the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves. In total, Margolis and Thacker discovered that the data Bondu left unprotected -- accessible to anyone who logged in to the company's public-facing web console with their Google username -- included children's names, birth dates, family member names, "objectives" for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. More than 50,000 chat transcripts were accessible through the exposed web portal. When the researchers alerted Bondu about the findings, the company acted to take down the console within minutes and relaunched it the next day with proper authentication measures. "We take user privacy seriously and are committed to protecting user data," Bondu CEO Fateen Anam Rafid said in his statement. "We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections," as well as hiring a security firm to validate its investigation and monitor its systems in the future.

Read more of this story at Slashdot.

Meta confirms it’s working on premium subscription for its apps

29 January 2026 at 16:06

Meta plans to test exclusive features that will be incorporated in paid versions of Facebook, Instagram, and WhatsApp. It confirmed these plans to TechCrunch.

But these plans are not to be confused with the ad-free subscription options that Meta introduced for Facebook and Instagram in the EU, the European Economic Area, and Switzerland in late 2023 and framed as a way to comply with General Data Protection Regulation (GDPR) and Digital Markets Act requirements.

From November 2023, users in those regions could either keep using the services for free with personalized ads or pay a monthly fee for an ad‑free experience. European rules require Meta to get users’ consent in order to show them targeted ads, so this was an obvious attempt to recoup advertising revenue when users declined to give that consent.

This year, users in the UK were given the same choice: use Meta’s products for free or subscribe to use them without ads. But only grudgingly, judging by the tone in the offer… “As part of laws in your region, you have a choice.”

As part of laws in your region, you have a choice
The ad-free option that has been rolling out coincides with the announcement of Meta’s premium subscriptions.

That ad-free option, however, is not what Meta is talking about now.

The newly announced plans are not about ads, and they are also separate from Meta Verified, which starts at around $15 a month and focuses on creators and businesses, offering a verification badge, better support, and anti‑impersonation protection.

Instead, these new subscriptions are likely to focus on additional features—more control over how users share and connect, and possibly tools such as expanded AI capabilities, unlimited audience lists, seeing who you follow that doesn’t follow you back, or viewing stories without the poster knowing it was you.

These examples are unconfirmed. All we know for sure is that Meta plans to test new paid features to see which ones users are willing to pay for and how much they can charge.

Meta has said these features will focus on productivity, creativity, and expanded AI.

My opinion

Unfortunately, this feels like another refusal to listen.

Most of us aren’t asking for more AI in our feeds. We’re asking for a basic sense of control: control over who sees us, what’s tracked about us, and how our data is used to feed an algorithm designed to keep us scrolling.

Users shouldn’t have to choose between being mined for behavioral data or paying a monthly fee just to be left alone. The message baked into “pay or be profiled” is that privacy is now a luxury good, not a default right. But while regulators keep saying the model is unlawful, the experience on the ground still nudges people toward the path of least resistance: accept the tracking and move on.

Even then, this level of choice is only available to users in Europe.

Why not offer the same option to users in the US? Or will it take stronger US privacy regulation to make that happen?


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Introducing Encrypt It Already

29 January 2026 at 13:17

Today, we’re launching Encrypt It Already, our push to get companies to offer stronger privacy protections for our data and communications by implementing end-to-end encryption. If that name sounds a little familiar, it’s because this is a spiritual successor to Fix It Already, our 2019 campaign that pushed companies to fix longstanding issues.

End-to-end encryption is the best way we have to protect our conversations and data. It ensures the company that provides a service cannot access the data or messages you store on it. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the contents of your messages, which are accessible only to you and your recipients. When it comes to data, like what’s stored using Apple’s Advanced Data Protection, it means you control the encryption keys and the service provider will not be able to access the data.
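
As a rough sketch of the principle (not how any particular app implements it), the Python example below uses the PyNaCl library: each party holds its own private key, and a server that only relays the ciphertext cannot read the message.

    from nacl.public import Box, PrivateKey  # pip install pynacl

    # Each endpoint generates its own key pair; private keys never leave the device.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob with her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at the usual place")

    # A relaying server only ever sees `ciphertext` and cannot decrypt it.
    # Bob decrypts with his private key and Alice's public key.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    print(plaintext)  # b'meet at the usual place'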

We’ve divided this up into three categories, each with three different demands:

  • Keep your Promises: Features that the company has publicly stated they’re working on, but which haven’t launched yet.
    • Facebook should use end-to-end encryption for group messages
    • Apple and Google should deliver on their promise of interoperable end-to-end encryption of RCS
    • Bluesky should launch its promised end-to-end encryption for DMs
  • Defaults Matter: Features that are available on a service or in app already, but aren’t enabled by default.
    • Telegram should default to end-to-end encryption for DMs
    • WhatsApp should use end-to-end encryption for backups by default
    • Ring should enable end-to-end encryption for its cameras by default
  • Protect Our Data: New features that companies should launch, often because their competition is doing it already.
    • Google should launch end-to-end encryption for Google Authenticator backups
    • Google should offer end-to-end encryption for Android backup data
    • Apple and Google should offer a per-app AI permission option to block AI access to secure chat apps

What is only half the problem. How is just as important.

What Companies Should Do When They Launch End-to-End Encryption Features

There’s no one-size-fits-all way to implement end-to-end encryption in products and services, but best practices can pair the security of the platform with the transparency that makes it possible for users to trust that it protects data the way the company claims. When these encryption features launch, companies should consider doing so with:

  • A blog post written for a general audience that summarizes the technical details of the implementation, and when it makes sense, a technical white paper that goes into further detail for the technical crowd.
  • Clear user-facing documentation around what data is and isn’t end-to-end encrypted, and robust and clear user controls when it makes sense to have them.
  • Data minimization principles whenever feasible, storing as little metadata as possible.

Technical documentation is important for end-to-end encryption features, but so is clear documentation that makes it easy for users to understand what is and isn’t protected, what features may change, and what steps they need to take to set it up so they’re comfortable with how their data is protected.

What You Can Do

When it’s an option, enable any end-to-end encryption features you can, like on Telegram, WhatsApp, and Ring.

For everything else, let companies know that these are features you want! You can find messages to share on social media on the Encrypt It Already website, and take the time to customize those however you’d like. 

In some cases, you can also reach out to a company directly with feature requests, which all of the above companies except Google and WhatsApp offer in some form. We recommend filing these through any service you use for any of the above features you’d like to see.

As for Ring and Telegram, we’ve already made the asks and just need your help to boost them. Head over to the Telegram bugs and suggestions board and upvote this post, and to Ring’s feature request board and boost this post.

End-to-end encryption protects what we say and what we store in a way that gives users—not companies or governments—control over data. These sorts of privacy-protective features should be the status quo across a range of products, from fitness wearables to notes apps, but instead it’s a rare feature limited to a small set of services, like messaging and (occasionally) file storage. These demands are just the start. We deserve this sort of protection for a far wider array of products and services. It’s time to encrypt it already!

Join EFF

Help protect digital privacy & free speech for everyone

Google Settlement May Bring New Privacy Controls for Real-Time Bidding

29 January 2026 at 12:11

EFF has long warned about the dangers of the “real-time bidding” (RTB) system powering nearly every ad you see online. A proposed class-action settlement with Google over their RTB system is a step in the right direction towards giving people more control over their data. Truly curbing the harms of RTB, however, will require stronger legislative protections.

What Is Real-Time Bidding?

RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your personal information to thousands of companies a day. At a high level, here’s how RTB works (a hypothetical bid request is sketched after the list):

  1. The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. This involves sending information about you and the content you’re viewing to the ad tech company.
  2. This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. 
  3. The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people. 
  4. Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space. 
  5. The highest bidder gets to display an ad for you, but advertisers (and the adtech companies they use to buy ads) can collect your bidstream data regardless of whether or not they bid on the ad space.   

Why Is Real-Time Bidding Harmful?

A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. Since bid requests contain individual identifiers, they can be tied together to create detailed profiles of people’s behavior over time.

Data brokers have sold bidstream data for a range of invasive purposes, including tracking union organizers and political protesters, outing gay priests, and conducting warrantless government surveillance. Several federal agencies, including ICE, CBP and the FBI, have purchased location data from a data broker whose sources likely include RTB. ICE recently requested information on “Ad Tech” tools it could use in investigations, further demonstrating RTB’s potential to facilitate surveillance. RTB also poses national security risks, as researchers have warned that it could allow foreign states to obtain compromising personal data about American defense personnel and political leaders.

The privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast torrents of personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used. 

Proposed Settlement with Google Is a Step in the Right Direction

As the dominant player in the online advertising industry, Google facilitates the majority of RTB auctions. Google has faced several class-action lawsuits for sharing users’ personal information with thousands of advertisers through RTB auctions without proper notice and consent. A recently proposed settlement to these lawsuits aims to give people more knowledge and control over how their information is shared in RTB auctions.

Under the proposed settlement, Google must create a new privacy setting (the “RTB Control”) that allows people to limit the data shared about them in RTB auctions. When the RTB Control is enabled, bid requests will not include identifying information like pseudonymous IDs (including mobile advertising IDs), IP addresses, and user agent details. The RTB Control should also prevent cookie matching, a method companies use to link their data profiles about a person to a corresponding bid request. Removing identifying information from bid requests makes it harder for data brokers and advertisers to create consumer profiles based on bidstream data. If the proposed settlement is approved, Google will have to inform all users about the new RTB Control via email. 

While this settlement would be a step in the right direction, it would still require users to actively opt out of their identifying information being shared through RTB. Those who do not change their default settings—research shows this is most people—will remain vulnerable to RTB’s massive daily data breach. Google broadcasting your personal data to thousands of companies each time you see an ad is an unacceptable and dangerous default. 

The impact of RTB Control is further limited by technical constraints on who can enable it. RTB Control will only work for devices and browsers where Google can verify users are signed in to their Google account, or for signed-out users on browsers that allow third-party cookies. People who don't sign in to a Google account or don't enable privacy-invasive third-party cookies cannot benefit from this protection. These limitations could easily be avoided by making RTB Control the default for everyone. If the settlement is approved, regulators and lawmakers should push Google to enable RTB Control by default.

The Real Solution: Ban Online Behavioral Advertising

Limiting the data exposed through RTB is important, but we also need legislative change to protect people from the online surveillance enabled and incentivized by targeted advertising. The lack of strong, comprehensive privacy law in the U.S. makes it difficult for individuals to know and control how companies use their personal information. Strong privacy legislation can make privacy the default, not something that individuals must fight for through hidden settings or additional privacy tools. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, since that kind of targeting creates a financial incentive for companies to track our every move. Until then, you can limit the harms of RTB by using EFF’s Privacy Badger to block ads that track you, disabling your mobile advertising ID (see instructions for iPhone/Android), and keeping an eye out for Google’s RTB Control.

Security Researcher Finds Exposed Admin Panel for AI Toy

29 January 2026 at 15:46

A security researcher investigating an AI toy for a neighbor found an exposed admin panel that could have leaked the personal data and conversations of the children using the toy. The findings, detailed in a blog post by security researcher Joseph Thacker, outline the work he did with fellow researcher Joel Margolis, who found the exposed admin panel for the Bondu AI toy. Margolis found an intriguing domain (console.bondu.com) in the mobile app backend’s Content Security Policy headers. Visiting it, he found a button that simply said: “Login with Google.” “By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. But instead of a parent portal, it turned out to be the Bondu core admin panel. “We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.
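For readers curious about the technique, here is a rough sketch of that kind of reconnaissance: fetching a page and listing the origins named in its Content-Security-Policy header, which often reveal backend or admin hosts that aren't linked anywhere else. The URL below is a placeholder, and this is not the researchers' actual tooling.

```python
# Sketch of CSP-based reconnaissance: list the origins a page's
# Content-Security-Policy header allows it to talk to.
import re
import requests

resp = requests.get("https://app.example.test/")  # placeholder URL
csp = resp.headers.get("Content-Security-Policy", "")

# CSP directives whitelist origins for scripts, frames, and API calls,
# so they frequently name internal consoles and API hosts.
domains = sorted(set(re.findall(r"https?://[\w.-]+", csp)))
for domain in domains:
    print(domain)
```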

AI Toy Admin Panel Exposed Children’s Conversations

After some investigation in the admin panel, the researchers found they had full access to “Every conversation transcript that any child has had with the toy,” which numbered in the “tens of thousands of sessions.” The panel also contained personal data about children and their families, including:
  • The child’s name and birth date
  • Family member names
  • The child’s likes and dislikes
  • Objectives for the child (defined by the parent)
  • The name given to the toy by the child
  • Previous conversations between the child and the toy (used to give the LLM context)
  • Device information, such as location via IP address, battery level, awake status, and more
  • The ability to update device firmware and reboot devices
They noticed the application is based on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

In addition to the authentication bypass, they also discovered an Insecure Direct Object Reference (IDOR) vulnerability in the product’s API “that allowed us to retrieve any child’s profile data by simply guessing their ID.” “This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”
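For context, an IDOR is the pattern sketched below: an endpoint trusts a client-supplied ID without checking that the caller is authorized to see the record behind it. This is a generic illustration of the bug class and its fix, not Bondu's actual code.

```python
# Generic illustration of an Insecure Direct Object Reference (IDOR).
# Not Bondu's code; names and data are invented.

PROFILES = {"child-001": {"name": "..."}, "child-002": {"name": "..."}}
OWNERS = {"child-001": "parent-a", "child-002": "parent-b"}

def get_profile_vulnerable(profile_id: str) -> dict:
    # Anyone who can guess a valid ID gets the record back: an IDOR.
    return PROFILES[profile_id]

def get_profile_fixed(profile_id: str, requesting_user: str) -> dict:
    # Authorization check: only the record's owner may read it.
    if OWNERS.get(profile_id) != requesting_user:
        raise PermissionError("not authorized for this profile")
    return PROFILES[profile_id]

print(get_profile_fixed("child-001", "parent-a"))  # allowed
# get_profile_fixed("child-001", "parent-b") would raise PermissionError
```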

A (Very) Quick Response from Bondu

Margolis reached out to Bondu’s CEO on LinkedIn over the weekend – and the company took down the console “within 10 minutes.” “Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said. The company also took further steps to investigate, looked for additional security flaws, and started a bug bounty program. A review of console access logs found no unauthorized access beyond the researchers’ own activity, meaning no actual data breach occurred.

Despite the positive experience working with Bondu, the findings made Thacker reconsider buying AI toys for his own kids. “To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.” “AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.” Aside from potential security issues, “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu's website says the AI toy was built with child safety in mind, noting that its "safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period."