Received today – 13 December 2025

Chinese Whistleblower Living In US Is Being Hunted By Beijing With US Tech

13 December 2025 at 02:00
A former Chinese official who fled to the U.S. says Beijing has used advanced surveillance technology from U.S. companies to track, intimidate, and punish him and his family across borders. ABC News reports: Retired Chinese official Li Chuanliang was recuperating from cancer on a Korean resort island when he got an urgent call: Don't return to China, a friend warned. You're now a fugitive. Days later, a stranger snapped a photo of Li in a cafe. Terrified South Korea would send him back, Li fled, flew to the U.S. on a tourist visa and applied for asylum. But even there -- in New York, in California, deep in the Texas desert -- the Chinese government continued to hunt him down with the help of surveillance technology. Li's communications were monitored, his assets seized and his movements followed in police databases. More than 40 friends and relatives -- including his pregnant daughter -- were identified and detained, even by tracking down their cab drivers through facial recognition software. Three former associates died in detention, and for months shadowy men Li believed to be Chinese operatives stalked him across continents, interviews and documents seen by The Associated Press show. The Chinese government is using an increasingly powerful tool to cement its power at home and vastly amplify it abroad: Surveillance technology, much of it originating in the U.S., an AP investigation has found. Within China, this technology helped identify and punish almost 900,000 officials last year alone, nearly five times more than in 2012, according to state numbers. Beijing says it is cracking down on corruption, but critics charge that such technology is used in China and elsewhere to stifle dissent and exact retribution on perceived enemies. Outside China, the same technology is being used to threaten wayward officials, along with dissidents and alleged criminals, under what authorities call Operations "Fox Hunt" and "Sky Net." The U.S. has criticized these overseas operations as a "threat" and an "affront to national sovereignty." More than 14,000 people, including some 3,000 officials, have been brought back to China from more than 120 countries through coercion, arrests and pressure on relatives, according to state information.

Read more of this story at Slashdot.

Rethinking sudo with object capabilities

12 December 2025 at 18:35

Alpine Linux maintainer Ariadne Conill has published a very interesting blog post about the shortcomings of both sudo and doas, and offers a potentially different way of achieving the same goals as those tools.

Systems built around identity-based access control tend to rely on ambient authority: policy is centralized and errors in the policy configuration or bugs in the policy engine can allow attackers to make full use of that ambient authority. In the case of a SUID binary like doas or sudo, that means an attacker can obtain root access in the event of a bug or misconfiguration.

What if there was a better way? Instead of thinking about privilege escalation as becoming root for a moment, what if it meant being handed a narrowly scoped capability, one with just enough authority to perform a specific action and nothing more? Enter the object-capability model.

↫ Ariadne Conill

To bring this approach to life, they created a tool called capsudo. Instead of temporarily changing your identity, capsudo can grant far more fine-grained capabilities that match the exact task you're trying to accomplish. As an example, Conill details mounting and unmounting: with capsudo, you can not only grant a user the ability to mount and unmount any device, but also restrict the user to mounting or unmounting just one specific device. Another example given is how capsudo can be used to give a service account access to only those resources the account needs to perform its tasks.
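
To make the contrast with ambient authority concrete, here is a minimal Python sketch of the object-capability idea. It is purely illustrative and not capsudo's actual interface (Conill's post documents the real commands): a privileged broker checks policy once, then hands out an object scoped to one action on one device.

```python
# Minimal sketch of the object-capability idea, NOT capsudo's real
# interface: a broker hands out an unforgeable object scoped to one
# action on one device, instead of granting ambient root authority.

class MountCapability:
    """Grants the ability to mount/unmount exactly one device."""
    def __init__(self, device: str, mountpoint: str):
        self._device = device
        self._mountpoint = mountpoint

    def mount(self) -> None:
        # A real broker would perform the privileged mount() call here.
        print(f"mounting {self._device} at {self._mountpoint}")

    def unmount(self) -> None:
        print(f"unmounting {self._mountpoint}")

class CapabilityBroker:
    """Privileged process: checks policy, then hands out capabilities."""
    def __init__(self, policy: dict[str, set[str]]):
        self._policy = policy  # user -> devices that user may mount

    def request_mount_cap(self, user: str, device: str,
                          mountpoint: str) -> MountCapability:
        if device not in self._policy.get(user, set()):
            raise PermissionError(f"{user} may not mount {device}")
        return MountCapability(device, mountpoint)

# The holder of `cap` can mount /dev/sdb1 and nothing else; there is
# no ambient root identity for a bug or misconfiguration to hijack.
broker = CapabilityBroker({"alice": {"/dev/sdb1"}})
cap = broker.request_mount_cap("alice", "/dev/sdb1", "/mnt/usb")
cap.mount()
cap.unmount()
```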

Of course, Conill explains all of this far better than I ever could, with actual example commands and more details. Conill happens to be the same person who created Wayback, illustrating that they have a tendency to look at problems in a unique and interesting way. I'm not smart enough to determine whether this approach makes sense compared to sudo or doas, but the way it's described, it does feel like a superior, more secure solution.

Received yesterday – 12 December 2025

The Data Breach That Hit Two-Thirds of a Country

12 December 2025 at 17:22
Online retailer Coupang, often called South Korea's Amazon, is dealing with the fallout from a breach that exposed the personal information of more than 33 million accounts -- roughly two-thirds of the country's population -- after a former contractor allegedly used credentials that remained active months after his departure to access customer data through the company's overseas servers. The breach began in June but went undetected until November 18, according to Coupang and investigators. Police have called it South Korea's worst-ever data breach. The compromised information includes names, phone numbers, email addresses and shipping addresses, though the company says login credentials, credit card numbers, and payment details were not affected. Coupang's former CEO Park Dae-jun told a parliamentary hearing that the alleged perpetrator was a Chinese national who had worked on authentication tasks before his contract ended last December. Chief information security officer Brett Matthes testified that the individual had a "privileged role" giving him access to a private encryption key that allowed him to forge tokens to impersonate customers. Legislators say the key remained active after the employee left. The CEO of Coupang's South Korean subsidiary has resigned. Founder and chair Bom Kim has yet to personally apologize but has been summoned to a second parliamentary hearing.
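
The mechanism described, forging tokens with a leaked signing key, is easy to illustrate. Here is a hedged Python sketch (not Coupang's actual scheme; the key and token format are hypothetical) of why a signing key that stays active after an employee leaves is catastrophic:

```python
# Illustrative sketch (not Coupang's actual scheme) of why a leaked
# signing key matters: anyone holding it can mint tokens the server
# cannot distinguish from legitimate ones.
import hashlib
import hmac

SIGNING_KEY = b"leaked-private-key"  # hypothetical key left active

def mint_token(user_id: str) -> str:
    """Server-side token issuance: payload plus an HMAC signature."""
    sig = hmac.new(SIGNING_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token: str) -> bool:
    """Server-side check: recompute the HMAC and compare."""
    user_id, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, user_id.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# An ex-insider with the key needs no password to impersonate anyone:
forged = verify_token(mint_token("customer-12345"))
print(forged)  # True -- indistinguishable from a real token
```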

Read more of this story at Slashdot.

Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven't made trustworthy. We can't. And today's versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.

These aren't edge cases. They're the result of building AI systems without basic integrity controls. We're in the third leg of data security: the old CIA triad. We're good at availability and working on confidentiality, but we've never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.

The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.

To further development along these lines, I and others have proposed separating users' personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.

What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.

First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others (emails, texts, social media posts) and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.

Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can't be tied to a single AI model. Our AI future will include many different models: some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.

Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.

Fourth, it would be under the user's fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
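
As an illustration of this fourth requirement, here is a minimal Python sketch of a data store with per-field grants, revocation, and an audit log. The names and structure are hypothetical, not any real protocol:

```python
# Hypothetical sketch of fine-grained, auditable access control over a
# personal data store. Illustrative only; not Solid or any real system.
import time

class PersonalDataStore:
    def __init__(self):
        self._data = {}        # field name -> value
        self._grants = set()   # (requester, field) pairs currently allowed
        self._audit_log = []   # (timestamp, requester, field, allowed)

    def put(self, field, value):
        self._data[field] = value

    def grant(self, requester, field):
        self._grants.add((requester, field))

    def revoke(self, requester, field):
        self._grants.discard((requester, field))

    def read(self, requester, field):
        allowed = (requester, field) in self._grants
        self._audit_log.append((time.time(), requester, field, allowed))
        if not allowed:
            raise PermissionError(f"{requester} has no grant for {field}")
        return self._data[field]

    def audit(self):
        """Every access attempt, allowed or not, is visible to the owner."""
        return list(self._audit_log)

store = PersonalDataStore()
store.put("salary_history", "...")
store.grant("bank-loan-agent", "salary_history")
store.read("bank-loan-agent", "salary_history")    # allowed, and logged
store.revoke("bank-loan-agent", "salary_history")
# store.read("bank-loan-agent", "salary_history")  # would now raise
print(store.audit())
```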

Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person's data. And there are also write attacks, where adversaries add to or change a user's data. Defending against both is critical; this all implies a complex and robust authentication system.

Sixth, and finally, it must be easy to use. If we're envisioning digital personal assistants for everybody, it can't require specialized security training to use properly.

I'm not the first to suggest something like this. Researchers have proposed a "Human Context Protocol" (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee's Solid protocol for distributed data ownership.

The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can't ignore one or the other.

Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can't easily manipulate you because you see what data it's using and can correct it. It can't easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it's the only way we can have trustworthy AI assistants.

This essay was originally published in IEEE Security & Privacy.

Over 10,000 Docker Hub Images Found Leaking Credentials, Auth Keys

11 December 2025 at 20:25
joshuark shares a report from BleepingComputer: More than 10,000 Docker Hub container images expose data that should be protected, including live credentials to production systems, CI/CD databases, or LLM model keys. After scanning container images uploaded to Docker Hub in November, security researchers at threat intelligence company Flare found that 10,456 of them exposed one or more keys. The most frequent secrets were access tokens for various AI models (OpenAI, HuggingFace, Anthropic, Gemini, Groq). In total, the researchers found 4,000 such keys. "These multi-secret exposures represent critical risks, as they often provide full access to cloud environments, Git repositories, CI/CD systems, payment integrations, and other core infrastructure components," Flare notes. [...] Additionally, they found API tokens for AI services hardcoded in Python application files, config.json files, and YAML configs, as well as GitHub tokens and credentials for multiple internal environments. Some of the sensitive data was present in the manifest of Docker images, a file that provides details about the image. Flare notes that roughly 25% of developers who accidentally exposed secrets on Docker Hub realized the mistake and removed the leaked secret from the container or manifest file within 48 hours. However, in 75% of these cases, the leaked key was not revoked, meaning that anyone who stole it during the exposure period could still use it later to mount attacks. Flare suggests that developers avoid storing secrets in container images, stop using static, long-lived credentials, and centralize their secrets management using a dedicated vault or secrets manager. Organizations should implement active scanning across the entire software development life cycle and revoke exposed secrets and invalidate old sessions immediately.
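
The kind of scan Flare performed can be approximated in miniature. Here is a hedged Python sketch that searches a container image's exported config or manifest JSON for credential-shaped strings; the regexes are simplified examples of well-known key formats, not Flare's actual detection rules:

```python
# Illustrative sketch of scanning a container image's config/manifest
# JSON for credential-shaped strings. Simplified patterns, not Flare's
# actual rules.
import json
import re

SECRET_PATTERNS = {
    "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_image_config(path: str) -> list[tuple[str, str]]:
    """Return (pattern name, match) pairs found anywhere in the JSON."""
    with open(path) as f:
        blob = json.dumps(json.load(f))  # flatten to one searchable string
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(blob):
            hits.append((name, match))
    return hits

# Usage: export a config with `docker image inspect <image> > config.json`,
# then: print(scan_image_config("config.json"))
```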

Read more of this story at Slashdot.

Received before yesterday

Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks

11 December 2025 at 13:27

Modern internet users navigate an increasingly fragmented digital ecosystem dominated by countless applications, services, brands and platforms. Engaging with online offerings often requires selecting and remembering passwords or taking other steps to verify and protect one's identity. However, following best practices has become incredibly challenging due to various factors. Identifying Digital Identity Management Problems in..

The post Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks appeared first on Security Boulevard.

Thailand's Personal Data Protection Act

11 December 2025 at 04:18

What is the Personal Data Protection Act (PDPA) of Thailand? The Personal Data Protection Act, B.E. 2562 (2019), often referred to by its acronym, PDPA, is Thailand's comprehensive data privacy and protection law. Enacted to safeguard the personal data of individuals, it is heavily influenced by international privacy standards, most notably the European Union's General [...]

The post Thailand's Personal Data Protection Act appeared first on Centraleyes.

The post Thailand's Personal Data Protection Act appeared first on Security Boulevard.

10 Hacks for Online Privacy That Everyone Should Know

10 December 2025 at 10:30

The internet has become a vital tool for human connection, but it comes with its fair share of risks, the biggest being threats to your privacy and security. With the big tech giants hungry for every ounce of your data they can get and scammers looking to target you every day, you do need to take a few precautions to protect your online privacy and security. There's no foolproof approach to these two things, and unfortunately, the onus is on you to take care of your data.

Before you start looking for a VPN or ways to delete your online accounts, you should take a moment to understand your privacy and security needs. Once you do, it'll be a lot easier to take a few proactive steps to safeguard your privacy and security on the internet. Sadly, there's no "set it and forget it" solution for this, but I'm here to walk you through some useful hacks that can apply to whatever risks you might be facing.

Don't use real information, unless you have to

When you install an app on your phone, you'll often be bombarded with pop-ups asking for permission to access your contacts, location, notifications, microphone, camera, and many other things. Some are necessary, while most are not. The formula I use is to deny every permission unless it's absolutely necessary to the app's core function. Similarly, when you're creating a profile anywhere online, you should avoid giving out any personal information unless it's absolutely necessary.

You don't have to use your legal name, real date of birth, or an email address with your real name on most apps you sign up for. Some sites also still use antiquated password recovery methods such as security questions that ask for your mother's maiden name. Even in these fields, you don't have to reveal the truth. Every bit of information that you put on the internet can potentially be exposed in a breach. It's best to use information that's either totally or partially fake to safeguard your privacy.

You can remove yourself from Google search results

Google's Results About You page.
Credit: Pranay Parab

If your personal information is easily available on Google, and you want to get it removed, you can send Google a request to remove it. Check Google's support page for how to remove results to see specific instructions for your case. For most people, the simplest way to remove results about yourself is to go to Google's Results About You page, sign in, and follow the instructions on screen.

Use email aliases to identify where your data was leaked from

Most modern email services let you create unlimited aliases, which means that you don't need to reveal your primary email address each time you sign up for a new service. Instead of signing up with realemail@gmail.com, you can use something like realemail+sitename@gmail.com. Gmail lets you create unlimited aliases using this method, and you can use that to identify who leaked your data. If you suddenly start getting a barrage of spam to a particular alias, you'll know which site sold your data.
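
The trick is simple enough to capture in a few lines. Here is a small Python sketch of plus-addressing, assuming a Gmail-style mailbox that ignores everything after the "+" (the addresses mirror the article's own example):

```python
# Sketch of the plus-addressing trick described above, assuming a
# Gmail-style mailbox that delivers anything after "+" to the same inbox.
def alias_for(mailbox: str, site: str) -> str:
    """realemail@gmail.com + 'sitename' -> realemail+sitename@gmail.com"""
    local, domain = mailbox.split("@", 1)
    return f"{local}+{site}@{domain}"

def leaked_by(alias: str) -> str:
    """Recover which site an alias was handed to (i.e., who leaked it)."""
    local = alias.split("@", 1)[0]
    return local.split("+", 1)[1] if "+" in local else "unknown"

print(alias_for("realemail@gmail.com", "shopsite"))  # realemail+shopsite@gmail.com
print(leaked_by("realemail+shopsite@gmail.com"))     # shopsite
```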

Your photos reveal a lot about you

When you take a photo, the file contains a lot of information about you. By default, all cameras store EXIF (Exchangeable Image File Format) data, which logs when the photo was taken, which camera was used, and the photo settings. You should remove EXIF data from photos before posting them on the internet. If you're using a smartphone to take photos, it'll also log the location of each image, which can be used to track you. While social media sites may sometimes remove location and EXIF data from your pictures, you cannot always rely on these platforms to protect your privacy for you.

You should take a few steps to strip EXIF data before uploading images. The easiest way to get started is to disable location access for your phone's camera app. On both iPhone and Android, you can open the Settings app, navigate to privacy settings or permissions, and deny location access to the Camera. This means you won't be able to search for a location in your photos app and identify all photos taken there, and you'll lose out on some fun automated slideshows that Apple and Google create. However, it also means that your privacy is protected. You can also use apps to quickly hide faces and anonymize metadata in photos.
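
If you have a whole folder of photos to clean up, you can also strip metadata programmatically. Here is a minimal sketch using the Pillow library (filenames are placeholders), which copies only the pixel data into a fresh image so all EXIF tags, including GPS coordinates, are dropped on save:

```python
# Minimal sketch of stripping EXIF with the Pillow library: copy only
# the pixel data into a fresh image, so metadata (including GPS
# coordinates) is not carried over to the saved file.
from PIL import Image

def strip_exif(src: str, dst: str) -> None:
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

strip_exif("vacation.jpg", "vacation_clean.jpg")
```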

While you're at it, don't forget that screenshots can also leak sensitive information about you. Some types of malware steal sensitive information from screenshots, so be sure to periodically delete those, too.

Think about what you use AI for

ChatGPT's website on Safari
Credit: Pranay Parab

Nearly every single AI tool is mining your data to improve its services. Sometimes, this means it's using everything you type or upload. At other times, it could be using things you've written, photos or videos you've posted, or any other media you've ever uploaded to the internet, to train its AI models. There's not much you can do about mass data scraping off the internet, but you can and should be careful with your usage of AI tools. You can sometimes stop AI tools from perpetually using your data, but relying on these companies to honor those settings toggles is like relying on Meta to keep your data private. It's best to avoid revealing any personal information to any AI service, regardless of how strong a connection you feel with it. Just assume that anything you send to an AI service can, and probably will, be used to train AI models or even be sold to advertising companies.

You can delete information stored with data brokers

Yes, big companies like Facebook or TikTok can track you even if you don't have an account with them. Data brokers collect vast troves of information about your internet visits, and sell it to advertisers or literally anyone who's willing to pay. To limit the damage, you can start by following Lifehacker's guide to blocking companies from tracking you online. Next, you can go ahead and opt out of data collection by data brokers. If that's not enough, you can also use services that remove your personal information from data broker sites.

A VPN isn't always the right answer

Now, I'm sure some of you are thinking that using a VPN will protect you from most of the tracking on the internet. That may be true in some cases, but using a VPN 24/7 is not the right approach for most people. For starters, it just routes all your traffic via the VPN company's servers, which means that you need to place your trust in the company's promises not to log your information, and its ability to keep your data safe and private. It also won't protect you from the types of data leaks that might happen from, say, publicly posting photos tagged with location data.

Many VPN providers claim to be able to protect you, but there are downsides to consider. Some companies such as Mullvad and Proton VPN have earned a solid reputation for privacy, but using a VPN all the time can create more problems than it solves. Your internet speed slows down a lot, streaming services may not work properly, and lots of sites may not load at all because they block VPN IP addresses. In most cases, you'll probably be better off if you use adblockers and an encrypted DNS instead.

Try a different combination of privacy tools

For most people, ad blockers are a good privacy tool. Even though Google is cracking down on ad blockers, there are ways to get around those restrictions. I highly recommend using uBlock Origin, which also has a mobile version now. Once you've settled on a good ad blocker, you should consider also using a good DNS service to filter out trackers, malware, and phishing sites on a network level.

Having a DNS service is like having a privacy filter for all your internet traffic, whether it's on your phone, laptop, or even your router. I've been using NextDNS for a few years, but you can also try AdGuard DNS or ControlD. All of these services have a generous free tier, but you can optionally pay a small annual fee for more features.
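
Conceptually, these services maintain a huge server-side blocklist and simply refuse to answer lookups for domains on it. Here is a toy Python sketch of the idea; the blocklist entries are hypothetical, and real services like NextDNS do the filtering on their own resolvers rather than on your device:

```python
# Toy sketch of what a filtering DNS service does conceptually: refuse
# to resolve domains on a tracker/malware blocklist. Hypothetical
# entries; real services filter server-side across all your devices.
import socket

BLOCKLIST = {"tracker.example", "ads.example"}

def filtered_resolve(hostname: str) -> str | None:
    if hostname in BLOCKLIST or any(
            hostname.endswith("." + blocked) for blocked in BLOCKLIST):
        return None  # blocked: the lookup never succeeds
    return socket.gethostbyname(hostname)

print(filtered_resolve("ads.example"))  # None (blocked)
print(filtered_resolve("example.com"))  # resolves normally
```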

Use a good firewall for your computer

Little Snitch on the Mac
Credit: Little Snitch

Almost all apps these days send telemetry data to remote servers. This isn't too much of a problem if you only use apps from trusted sources, and can help with things like automatic software updates. But malicious apps or even poorly managed ones may be more open with your data than you would like.

You can restrict some of that by using a good firewall app. This lets you monitor incoming and outgoing internet traffic from your device, and stop apps from sending unwanted data to the internet. Blocking these requests can hamper some useful features, like those automatic app updates, but it can also stop apps from unnecessarily sending data to online servers. There are some great firewall apps for Mac and for Windows, and you should definitely consider using one for better online privacy.

Switch to a good password manager

I've probably said this a million times, but I will repeat my advice: use a good password manager. You may think it's a bit annoying, but this single step is the easiest way to greatly improve your security on the internet. Password managers can take the hassle of remembering passwords away from you, and they'll also generate unique passwords that are hard to crack. Both Bitwarden and Apple Passwords (which ships with your Mac, iPhone, and iPad) are free to use, and excellent at their job. Go right ahead and start using them today. I guarantee that you won't regret it.

Scammers harvesting Facebook photos to stage fake kidnappings, warns FBI

8 December 2025 at 08:17

The FBI has warned about a new type of scam where your Facebook pictures are harvested to act as "proof-of-life" pictures in a virtual kidnapping.

The scammers pretend they have kidnapped somebody and contact friends and next of kin to demand a ransom for their release. While the alleged victim is really just going about their normal day, criminals show the family real Facebook photos to "prove" that person is still alive but in their custody.

This attack resembles Facebook cloning but with a darker twist. Instead of just impersonating you to scam your friends, attackers weaponize your pictures to stage fake proof-of-life evidence.

Both scams feed on oversharing. Public posts give criminals more than enough information to impersonate you, copy your life, and convince your loved ones something is wrong.

This alert focuses on criminals scraping photos from social media (usually Facebook, but also LinkedIn, X, or any public profile) then manipulating those images with AI or simple editing to use during extortion attempts. If you know what to look for, you might spot inconsistencies like missing tattoos, unusual lighting, or proportions that don't quite match.

Scammers rely on panic. They push tight deadlines, threaten violence, and try to force split-second decisions. That emotional pressure is part of their playbook.

In recent years, the FBI has also warned about synthetic media and deepfakes, like explicit images generated from benign photos and then used for sextortion, which is a closely related pattern of abuse of user-posted pictures. Together, these warnings point to a trend: ordinary profile photos, holiday snaps, and professional headshots are increasingly weaponized for extortion rather than classic account hacking.

What you can do

To make it harder for criminals to use these tactics, be mindful of what information you share on social media. Share pictures of yourself, or your children, only with actual friends and not for the whole world to find. And when you're travelling, post the beautiful pictures you have taken when you're back, not while you're away from home.

Facebook’s built-in privacy tool lets you quickly adjust:

  • Who can see your posts.
  • Who can see your profile information.
  • App and website permissions.

If you’re on the receiving end of a virtual kidnapping attempt:

  • Establish a code word only you and your loved ones know that you can use to prove it's really you.
  • Always attempt to contact the alleged victim before considering paying any ransom demand.
  • Keep records of every communication with the scammers. They can be helpful in a police investigation.
  • Report the incident to the FBI’s Internet Crime Complaint Center at www.ic3.gov.

We don't just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach

8 December 2025 at 00:16

The Washington Post last month reported it was among a list of data breach victims of the Oracle EBS-related vulnerabilities, with a threat actor compromising the data of more than 9,700 former and current employees and contractors. Now, a former worker is launching a class-action lawsuit against the Post, claiming inadequate security.

The post Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach appeared first on Security Boulevard.

Woman Hailed As a Hero For Smashing Man's Meta Smart Glasses On Subway

6 December 2025 at 16:59
"Woman Hailed as Hero for Smashing Man's Meta Smart Glasses on Subway," reads the headline at Futurism: As Daily Dot reports, a New York subway rider has accused a woman of breaking his Meta smart glasses. "She just broke my Meta glasses," said the TikTok user, who goes by eth8n, in a video that has since garnered millions of views. "You're going to be famous on the internet!" he shouted at her through the window after getting off the train. The accused woman, however, peered back at him completely unfazed, as if to say that he had it coming. "I was making a funny noise people were honestly crying laughing at," he claimed in the caption of a followup video. "She was the only person annoyed..." But instead of coming to his support, the internet wholeheartedly rallied behind the alleged perpetrator, celebrating the woman as a folk hero β€” and perfectly highlighting how the public feels about gadgets like Meta's smart glasses. "Good, people are tired of being filmed by strangers," one user commented. "The fact that no one else on the train is defending him is telling," another wrote... Others accused the man of fabricating details of the incident. "'People were crying laughing' β€” I've never heard a less plausible NYC subway story," one user wrote. In a comment on TikTok, the man acknowledges he'd filmed her on the subway β€” it looks like he even zoomed in. The man says then her other options were "asking nicely to not post it or blur my face". He also warns that she could get arrested for breaking his glasses if he "felt like it". (And if he sees her again.) "I filed a claim with the police and it's a misdemeanor charge." A subsequent video's captions describe him unboxing new Meta smartglasses "and I'm about to do my thing again... no crazy lady can stop me now." I'm imagining being mugged β€” and then telling the mugger "You're going to be internet famous!" But maybe that just shows how easy it is to weaponize smartglasses and their potential for vast public exposure.

Read more of this story at Slashdot.

India Reviews Telecom Industry Proposal For Always-On Satellite Location Tracking

5 December 2025 at 16:21
India is weighing a proposal to mandate always-on satellite tracking in smartphones for precise government surveillance -- an idea strongly opposed by Apple, Google, Samsung, and industry groups. Reuters reports: For years, the [Prime Minister Narendra Modi's] administration has been concerned its agencies do not get precise locations when legal requests are made to telecom firms during investigations. Under the current system, the firms are limited to using cellular tower data that can only provide an estimated area location, which can be off by several meters. The Cellular Operators Association of India (COAI), which represents Reliance's Jio and Bharti Airtel, has proposed that precise user locations should only be provided if the government orders smartphone makers to activate A-GPS technology -- which uses satellite signals and cellular data -- according to a June internal federal IT ministry email. That would require location services to always be activated in smartphones with no option for users to disable them. Apple, Samsung, and Alphabet's Google have told New Delhi that should not be mandated, said three of the sources who have direct knowledge of the deliberations. A measure to track device-level location has no precedent anywhere else in the world, lobbying group India Cellular & Electronics Association (ICEA), which represents both Apple and Google, wrote in a confidential July letter to the government, which was viewed by Reuters. "The A-GPS network service ... (is) not deployed or supported for location surveillance," said the letter, which added that the measure "would be a regulatory overreach." Earlier this week, Modi's government was forced to rescind an order requiring smartphone makers to preload a state-run cyber safety app on all devices after public backlash and privacy concerns.

Read more of this story at Slashdot.

China Hackers Using Brickstorm Backdoor to Target Government, IT Entities

5 December 2025 at 17:36

Chinese-sponsored groups are using the popular Brickstorm backdoor to access and gain persistence in government and tech firm networks, part of the ongoing effort by the PRC to establish long-term footholds in agency and critical infrastructure IT environments, according to a report by U.S. and Canadian security offices.

The post China Hackers Using Brickstorm Backdoor to Target Government, IT Entities appeared first on Security Boulevard.

Why Deleting Your Browsing History Doesn’t Always Delete Your Browsing History

5 December 2025 at 11:30

Manually or automatically wiping your browsing history is a well-established way of protecting your privacy and making sure the digital trail you leave behind you is as short as possible, but it's important to be aware of the limitations of the process, and to understand why deleting your browsing history isn't always as comprehensive an act as you might think.

In short, the records of where you've been aren't only kept on your local computer or on your phone, they're found in various other places too. This is why fully wiping away your browsing history is more difficult than it initially appears.

Modern browsers typically sync your browsing history

Just about every modern browser can now sync your browsing history across devices, from laptop to mobile and back again. There are benefits to this (being able to continue your browsing on a different device, for example), but it means that deleting the list of websites you've visited on one device won't necessarily clear it everywhere.

Consider Apple's Safari, which by default will sync your online history, bookmarks, and open tabs between all of the iPhones, iPads, and Macs using the same Apple account. You can manage this by selecting your account name and then iCloud in Settings on iOS/iPadOS or in System Settings on macOS.

Apple Safari
Deleting browsing history in Safari. Credit: Lifehacker

Whether or not Safari syncing is enabled through iCloud will affect how browsing history is deleted: when you try to delete this history on mobile or desktop, you'll see a message telling you what will happen on your other devices. In Safari on a Mac, choose History > Clear History; on an iPhone or iPad, choose Apps > Safari > Clear History and Website Data from Settings.

Most other browsers work in the same way, with options for both syncing history and deleting history. In Chrome on the desktop, for example, open Settings via the three-dot menu (top right): You can manage syncing via You and Google > Sync and Google Services > Manage what you sync, and clearing your history via Privacy and security > Delete browsing data.

The apps and sites you use are tracking you

Aside from all the history your actual web browser is collecting, you also need to think about the data being vacuumed up by the apps and websites you're using. If you log into Facebook, Meta will know about the comments you've left and the photos you've liked, no matter how much you scrub your history from Edge or Firefox.

How much you can do about this really depends on the app or site. Amazon lets you clear your search history, for example: On the desktop site, click Browsing History on the toolbar at the top, then click the gear icon (top right). The next screen lets you delete all or some of your browsing history, and block future tracking, though you won't be able to reorder items as easily, and your recommendations will be affected.

Google history
Clearing data from a Google account. Credit: Lifehacker

Meta lets you clear your Instagram and Facebook search history, at least: You can take care of both from the Meta Accounts Center page in a desktop browser. Click Your information and permissions then Search history to look back at what you've been searching for. The next screen gives you options for manually and automatically wiping your search history.

Google runs a whole host of online apps as well as a web browser. You can manage all your Google data from one central point from your desktop browser: Your Google Account page. Click Data and privacy to see everything Google has collected on you, and click through on any activity type to manually delete records or set them up to be automatically deleted after a certain period of time.

Your internet provider always knows where you've been

The final place copies of your internet browsing history will exist is on the servers of your internet service provider: whichever company you're paying for access to the internet is keeping logs of the places you've been, for all kinds of purposes (from security to advertising). And yes, this includes sites that you open while in incognito mode.

How this is handled varies from provider to provider. For example, AT&T's privacy notice states that the company will "automatically collect a variety of information," including "website and IP addresses," "videos watched," and "search terms entered." The company says this data will be kept "as long as we need it for business, tax, or legal purposes."

Proton VPN
A VPN can hide your browsing from your internet provider. Credit: Lifehacker

There's not a whole lot you can do about this either; it's a trade-off you have to make if you want access to the web. Some providers, including AT&T, will let you opt out of certain types of information sharing if you get in touch with them directly, but you can't prevent the tracking from happening in the first place.

What you can do is mask your browsing with a VPN (Lifehacker has previously picked the best paid VPNs and the best free VPNs for you to try out). As all your internet traffic will be routed through the VPN's servers, your internet provider will no longer be able to see what you're doing. Your VPN provider will, however, so find one that you can trust, and which has a no-logs policy that's been verified by a third-party security auditor.

Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps

4 December 2025 at 10:54

Security and developer teams are scrambling to address a highly critical security flaw in frameworks tied to the popular React JavaScript library. Not only is the vulnerability, which also is in the Next.js framework, easy to exploit, but React is widely used, including in 39% of cloud environments.

The post Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps appeared first on Security Boulevard.

Why One Man Is Fighting for Our Right to Control Our Garage Door Openers

4 December 2025 at 05:04
If companies can modify internet-connected products and charge subscriptions after people have already purchased them, what does it mean to own anything anymore?

UK's Cookie Enforcement Campaign Brings 95% of Top Websites Into Compliance

4 December 2025 at 06:48

Britain's data protection regulator issued 17 preliminary enforcement notices and sent warning letters to hundreds of website operators throughout 2025, a pressure campaign that brought 979 of the UK's top 1,000 websites into compliance with cookie consent rules and gave an estimated 40 million people -- roughly 80% of UK internet users over age 14 -- greater control over how they are tracked for personalized advertising.

The Information Commissioner's Office announced Thursday that only 21 websites remain non-compliant, with enforcement action continuing against holdouts.

The campaign focused on three key compliance areas: whether non-essential advertising cookies were stored on users' devices before users could exercise choice to accept or reject them, whether rejecting cookies was as easy as accepting them, and whether any non-essential cookies were placed despite users not consenting.

Enforcement Threats Drive Behavioral Change

Of the 979 compliant sites, 415 passed testing without any intervention. The remaining 564 improved practices after initially failing, following direct engagement from the ICO. The regulator sent letters that underlined their compliance shortcomings, opened investigations when letters failed to produce changes, and issued preliminary enforcement notices in 17 cases.

"We set ourselves the goal of giving people more meaningful control over how they were tracked online by the end of 2025. I can confidently say that we have delivered on that promise," stated Tim Capel, Interim Executive Director of Regulatory Supervision.

The enforcement campaign began in January 2025 when the ICO assessed the top 200 UK websites and communicated concerns to 134 organizations. The regulator warned that uncontrolled tracking intrudes on private lives and can lead to harm, citing examples including gambling addicts targeted with betting ads due to browsing history or LGBTQ+ individuals altering online behavior for fear of unintended disclosure.

Also read: UK Data Regulator Cracks Down on Sky Betting and Gaming's Unlawful Cookie Practices

Industry-Wide Infrastructure Changes

The ICO engaged with trade bodies representing the majority of industries appearing in the top 1,000 websites and consent management platforms providing solutions to nearly 80% of the top 500 websites. These platforms made significant changes to ensure cookie banner options they provide to customers are compliant by default.

The action secured significant improvements to user experiences online, including greater prevalence of "reject" options on cookie banners and lower prevalence of cookies being placed before consent was given or after it was refused.

The regulator identified four main problem areas during its review: deceptive or missing choice where selection is preset, uninformed choice through unclear options, undermined choice where sites fail to adhere to user preferences, and irrevocable choice where users cannot withdraw consent.

Privacy-Friendly Advertising Exploration

The ICO committed to ongoing monitoring, stating that websites brought into compliance should not revert to previously unlawful practices believing violations will go undetected. "We will continue to monitor compliance and engage with industry to ensure they uphold their legal obligations, while also supporting innovation that respects people's privacy," Capel said.

Following consultation earlier in 2025, the regulator continues working with stakeholders to understand whether publishers could deliver privacy-friendly online advertising to users who have not granted consent where privacy risk remains low. The ICO works with government to explore how legislation could be amended to reinforce this approach, with the next update scheduled for 2026.

Under current regulations, violations can result in fines up to £500,000 under Privacy and Electronic Communications Regulations or up to £17.5 million or 4% of global turnover under UK GDPR. Beyond financial penalties, non-compliance risks reputational damage and loss of consumer trust as privacy-conscious users increasingly scrutinize data practices.

India Pulls Its Preinstalled iPhone App Demand

3 December 2025 at 13:18
India has withdrawn its order requiring Apple and other smartphone makers to preinstall the government's Sanchar Saathi app after public backlash and privacy concerns. AppleInsider reports: On November 28, the India Ministry of Communication issued a secret directive to Apple and other smartphone manufacturers, requiring the preinstallation of a government-backed app. Less than a week later, the order has been rescinded. The withdrawal on Wednesday means Apple doesn't have to preload the Sanchar Saathi app onto iPhones sold in the country, in a way that couldn't be "disabled or restricted." [...] In pulling back from the demand, the government insisted that the app had an "increasing acceptance" among citizens. There was a tenfold spike of new user registrations on Tuesday alone, with over 600,000 new users made aware of the app from the public debacle. India Minister of Communications Jyotiraditya Scindia took a moment to insist that concerns the app could be used for increased surveillance were unfounded. "Snooping is neither possible nor will it happen" with the app, Scindia claimed. "This is a welcome development, but we are still awaiting the full text of the legal order that should accompany this announcement, including any revised directions under the Cyber Security Rules, 2024," said the Internet Freedom Foundation. It is treating the news with "cautious optimism, not closure," until formalities conclude. However, while promising, the backdown doesn't stop India from retrying something similar or another tactic in the future.

Read more of this story at Slashdot.

Apple To Resist India Order To Preload State-Run App As Political Outcry Builds

2 December 2025 at 18:23
Apple does not plan to comply with India's mandate to preload its smartphones with a state-owned cyber safety app that cannot be disabled. According to Reuters, the order "sparked surveillance concerns and a political uproar" after it was revealed on Monday. From the report: In the wake of the criticism, India's telecom minister Jyotiraditya M. Scindia on Tuesday said the app was a "voluntary and democratic system," adding that users can choose to activate it and can "easily delete it from their phone at any time." At present, the app can be deleted by users. Scindia did not comment on or clarify the November 28 confidential directive that ordered smartphone makers to start preloading it and ensure "its functionalities are not disabled or restricted." Apple however does not plan to comply with the directive and will tell the government it does not follow such mandates anywhere in the world as they raise a host of privacy and security issues for the company's iOS ecosystem, said two of the industry sources who are familiar with Apple's concerns. They declined to be named publicly as the company's strategy is private. "It's not only like taking a sledgehammer, this is like a double-barrel gun," said the first source.

Read more of this story at Slashdot.

AI Chatbot Companies Should Protect Your Conversations From Bulk Surveillance

2 December 2025 at 13:21

EFF intern Alexandra Halbeck contributed to this blog

When people talk to a chatbot, they often reveal highly personal information they wouldn't share with anyone else. Chat logs are digital repositories of our most sensitive and revealing information. They are also tempting targets for law enforcement, to which the U.S. Constitution gives only one answer: get a warrant.

AI companies have a responsibility to their users to make sure the warrant requirement is strictly followed, to resist unlawful bulk surveillance requests, and to be transparent with their users about the number of government requests they receive.

Chat logs are deeply personal, just like your emails.

Tens of millions of people use chatbots to brainstorm, test ideas, and explore questions they might never post publicly or even admit to another person. Whether advisable or not, people also turn to consumer AI companies for medical information, financial advice, and even dating tips. These conversations reveal people's most sensitive information.

Without privacy protections, users would be chilled in their use of AI systems.


Consider the sensitivity of the following prompts: "how to get abortion pills," "how to protect myself at a protest," or "how to escape an abusive relationship." These exchanges can reveal everything from health status to political beliefs to private grief. A single chat thread can expose the kind of intimate detail once locked away in a handwritten diary.

Without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help.

Chat logs require a warrant.

Whether you draft an email, edit an online document, or ask a question to a chatbot, you have a reasonable expectation of privacy in that information. Chatbots may be a new technology, but the constitutional principle is old and clear. Before the government can rifle through your private thoughts stored on digital platforms, it must do what it has always been required to do: get a warrant.

For over a century, the Fourth Amendment has protected the content of private communications, such as letters, emails, and search engine prompts, from unreasonable government searches. AI prompts require the same constitutional protection.

This protection is not aspirational; it already exists. The Fourth Amendment draws a bright line around private communications: the government must show probable cause and obtain a particularized warrant before compelling a company to turn over your data. Companies like OpenAI acknowledge this warrant requirement explicitly, while others like Anthropic could stand to be more precise.

AI companies must resist bulk surveillance orders.

AI companies that create chatbots should commit to having your back and resisting unlawful bulk surveillance orders. A valid search warrant requires law enforcement to provide a judge with probable cause and to particularly describe the thing to be searched. This means that bulk surveillance orders often fail that test.

What do these overbroad orders look like? In the past decade or so, police have often sought "reverse" search warrants for user information held by technology companies. Rather than searching for one particular individual, police have demanded that companies rummage through their giant databases of personal data to help develop investigative leads. This has included "tower dumps" or "geofence warrants," in which police order a company to search all users' location data to identify anyone that's been near a particular place at a particular time. It has also included "keyword" warrants, which seek to identify any person who typed a particular phrase into a search engine. This could include a chilling keyword search for a well-known politician's name or busy street, or a geofence warrant near a protest or church.

Courts are beginning to rule that these broad demands are unconstitutional. And after years of complying, Google has finally made it technically difficult, if not impossible, to provide mass location data in response to a geofence warrant.

This is an old story: if a company stores a lot of data about its users, law enforcement (and private litigants) will eventually seek it out. Law enforcement is already demanding user data from AI chatbot companies, and it will only increase. These companies must be prepared for this onslaught, and they must commit to fighting to protect their users.

In addition to minimizing the amount of data accessible to law enforcement, they can start with three promises to their users. These aren't radical ideas. They are basic transparency and accountability standards to preserve user trust and to ensure constitutional rights keep pace with technology:

  1. commit to fighting bulk orders for user data in court,
  2. commit to providing users with advance notice before complying with a legal demand so that users can choose to fight on their own behalf, and
  3. commit to publishing periodic transparency reports, which tally up how many legal demands for user data the company receives (including the number of bulk orders specifically).

How to Identify Automated License Plate Readers at the U.S.-Mexico Border

2 December 2025 at 11:23

U.S. Customs and Border Protection (CBP), the Drug Enforcement Administration (DEA), and scores of state and local law enforcement agencies have installed a massive dragnet of automated license plate readers (ALPRs) in the US-Mexico borderlands.

In many cases, the agencies have gone out of their way to disguise the cameras from public view. And the problem is only going to get worse: as recently as July 2025, CBP put out a solicitation to purchase 100 more covert trail cameras with license plate-capture ability.

Last month, the Associated Press published an in-depth investigation into how agencies have deployed these systems and exploited this data to target drivers. But what do these cameras look like? Here's a guide to identifying ALPR systems when you're driving the open road along the border.

Special thanks to researcher Dugan Meyer and AZ Mirror's Jerod MacDonald-Evoy. All images by EFF and Meyer were taken within the last three years.

ALPR at Checkpoints and Land Ports of Entry

All land ports of entry have ALPR systems that record all vehicles entering and exiting the country. They typically look like this:

License plate readers along the lanes leading into a border crossing

ALPR systems at the Eagle Pass International Bridge Port of Entry. Source: EFF

Most interior checkpoints, which are anywhere from a few miles to more than 60 from the border, are also equipped with ALPR systems operated by CBP. However, the DEA operates a parallel system at most interior checkpoints in southern border states.

When it comes to checkpoints, here's the rule of thumb: If you're traveling away from the border, you are typically being captured by a CBP/Border Patrol system (Border Patrol is a sub-agency of CBP). If you're traveling toward the border, it is most likely a DEA system.

Here's a representative example of a CBP checkpoint camera system:

ALPR cameras next to white trailers along the lane into a checkpoint

ALPR system at the Border Patrol checkpoint near Uvalde, Texas. Source: EFF

At a typical port of entry or checkpoint, each vehicle lane will have an ALPR system. We've even seen Border Patrol checkpoints that were temporarily closed continue to funnel people through these ALPR lanes, even though there was no one on hand to vet drivers face-to-face. According to CBP's Privacy Impact Assessments (2017, 2020), CBP keeps this data for 15 years, but agents can generally only search the most recent five years' worth of data.

The scanners were previously made by a company called Perceptics, which was infamously hacked, leading to a breach of driver data. The systems have since been "modernized" (i.e., replaced) by SAIC.

Here's a close up of the new systems:

Close up of a camera marked "Front."

Frontal ALPR camera at the checkpoint near Uvalde, Texas. Source: EFF

In 2024, the DEA announced plans to integrate port of entry ALPRs into its National License Plate Reader Program (NLPRP), which the agency says is a network of both DEA systems and external law enforcement ALPR systems that it uses to investigate crimes such as drug trafficking and bulk cash smuggling.

Again, if you're traveling towards the border and you pass a checkpoint, you're often captured by parallel DEA systems set up on the opposite side of the road. However, these systems have also been found to be installed on their own away from checkpoints.

These are a major component of the DEA's NLPRP, which has a standard retention period of 90 days. This program dates back to at least 2010, according to records obtained by the ACLU.

Here is a typical DEA system that you will find installed near existing Border Patrol checkpoints:

A series of cameras next to a trailer by the side of the road.

DEA ALPR set-up in southern Arizona. Source: EFF

These are typically made by a different vendor, Selex ES, which also includes the brands ELSAG and Leonardo. Here is a close-up:

Close-up of an ALPR camera

Close-up of a DEA camera near the Tohono O'odham Nation in Arizona. Source: EFF

Covert ALPR

As you drive along border highways, law enforcement agencies have disguised cameras in order to capture your movements.

The exact number of covert ALPRs at the border is unknown, but to date we have identified approximately 100 sites. We know CBP and DEA each operate covert ALPR systems, but it isn't always possible to know which agency operates any particular set-up.

Another rule of thumb: if a covert ALPR has a Motorola Solutions camera (formerly Vigilant Solutions) inside, it's likely a CBP system. If it has a Selex ES camera inside, then it is likely a DEA camera.

Here are examples of construction barrels with each kind of camera:

A camera hidden inside an orange traffic barrel

A covert ALPR with a Motorola Solutions ALPR camera near Calexico, Calif. Source: EFF

These are typically seen along the roadside, often in sets of three, and almost always connected to some sort of solar panel. They are often placed behind existing barriers.

A camera hidden inside an orange traffic barrel

A covert ALPR with a Selex ES camera in southern Arizona. Source: EFF

The DEA models are also found by the roadside, but they can turn up inside or near checkpoints as well.

If you're curious (as we were), here's what they look like inside, courtesy of the US Patent and Trademark Office:

Patent drawings showing a traffic barrel and the camera inside it

Patent for portable covert license plate reader. Source: USPTO

In addition to orange construction barrels, agencies also conceal ALPRs in yellow sand barrels. For example, these can be found throughout southern Arizona, especially in the southeastern part of the state.

A camera hidden in a yellow sand barrel.

A covert ALPR system in Arizona. Source: EFF

ALPR Trailers

Sometimes a speed trailer or signage trailer isn't designed so much for safety as to conceal ALPR systems. Sometimes ALPRs are attached to indistinct trailers with no discernible purpose that you'd hardly notice by the side of the road.

It's important to note that it's difficult to know who these belong to, since they often aren't marked. We know that all levels of government, even in the interior of the country, have purchased these setups.

Here are some of the different flavors of ALPR trailers:

A speed trailer capturing ALPR. Speed limit 45 sign.

An ALPR speed trailer in Texas. Source: EFF

A white flat trailer by the side of the road with camera portals on either end.

ALPR trailer in Southern California. Source: EFF

An orange trailer with an ALPR camera and a solar panel.

ALPR trailer in Southern California. Source: EFF

An orange trailer with ALPR cameras by the side of the road.

An ALPR unit in southern Arizona. Source: EFF

A trailer with a pole with mounted ALPR cameras in the desert.

ALPR unit in southern Arizona. Source: EFF

A trailer with a solar panel and an ALPR camera.

A Jenoptik Vector ALPR trailer in La Joya, Texas. Source: EFF

One particularly worrisome version of an ALPR trailer is the Jenoptik Vector: at least two jurisdictions along the border have equipped these trailers not only with ALPR, but with TraffiCatch technology that gathers Bluetooth and Wi-Fi identifiers. This means that in addition to gathering plates, these devices can also log nearby mobile devices, such as phones, laptops, and even vehicle entertainment systems.

Stationary ALPR

Stationary or fixed ALPR is one of the more traditional ways of installing these systems. The cameras are placed on existing utility poles or other infrastructure, or on poles installed by the ALPR vendor.

For example, here's a DEA system installed on a highway arch:

The back of a highway overpass sign with ALPR cameras.

The lower set of ALPR cameras belongs to the DEA. Source: Dugan Meyer CC BY

A camera and solar panel attached to a streetlight pole.

ALPR camera in Arizona. Source: Dugan Meyer CC BY

Flock Safety

At the local level, thousands of cities around the United States have adopted fixed ALPR, with the company Flock Safety grabbing a huge chunk of the market over the last few years. County sheriffs and municipal police along the border have also embraced the trend, with many using funds earmarked for border security to purchase these systems. Flock allows these agencies to share data with one another and contribute their ALPR scans to a national pool. As part of a pilot program, Border Patrol had access to this ALPR data for most of 2025.

A typical Flock Safety setup involves attaching cameras and solar panels to poles. For example:

A red truck passed a pair of Flock Safety ALPR cameras on poles.

Flock Safety ALPR poles installed just outside the Tohono O'odham Nation in Arizona. Source: EFF

A black Flock Safety camera with a small solar panel

A close-up of a Flock Safety camera in Douglas, Arizona. Source: EFF

We've also seen these camera poles placed outside the Santa Teresa Border Patrol station in New Mexico.

Flock may now be the most common provider nationwide, but it isn't the only player in the field. DHS recently released a market survey of 16 different vendors providing similar technology.

Mobile ALPR

ALPR cameras can also be found attached to patrol cars. Here's an example of a Motorola Solutions ALPR attached to a Hidalgo County Constable vehicle in South Texas:

An officer stands beside patrol car. Red circle identifies mobile ALPR

Mobile ALPR on a Hidalgo County Constable vehicle. Source: Weslaco Police Department

These allow officers not only to capture ALPR data in real time as they drive along, but also to receive an in-car alert when a scan matches a vehicle on a "hot list," the term for a list of plates that law enforcement has flagged for further investigation. (A toy sketch of this matching logic follows the next example below.)

Here's another example:

A masked police officer stands next to a patrol vehicle with two ALPR cameras.

Mobile ALPR in La Mesa, Calif. Source: La Mesa Police Department Facebook page
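
Conceptually, that hot-list alert is simple set membership over normalized plate reads. Here's the promised toy sketch in Python – the plates are hypothetical, not any vendor's actual implementation, and real systems also weigh OCR confidence and regional plate formats before alerting:

```python
# Toy sketch of ALPR "hot list" matching -- hypothetical plates, not any
# vendor's actual implementation. Real systems also weigh OCR confidence
# and state/regional plate formats before alerting.
hot_list = {"ABC1234", "XYZ9876"}  # plates flagged for further investigation

def normalize(plate: str) -> str:
    """OCR output varies; uppercase and strip spacing/punctuation."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

def on_scan(raw_plate: str) -> None:
    """Called for each plate read; fires an in-car alert on a match."""
    if normalize(raw_plate) in hot_list:
        print(f"ALERT: {raw_plate} matches a hot-list entry")

on_scan("abc 1234")  # -> ALERT: abc 1234 matches a hot-list entry
```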

Identifying Other Technologies

EFF has been documenting the wide variety of technologies deployed at the border, including surveillance towers, aerostats, and trail cameras. To learn more, download EFF's zine, "Surveillance Technology at the US-Mexico Border," and explore our map of border surveillance, which includes Google Street View links so you can see exactly how each installation looks on the ground. Currently we have mapped out most DEA and CBP checkpoint ALPR setups, with covert cameras planned for addition in the near future.

Air fryer app caught asking for voice data (re-air) (Lock and Code S06E24)

2 December 2025 at 11:22

This week on the Lock and Code podcast…

It’s often said online that if a product is free, you’re the product, but what if that bargain was no longer true? What if, depending on the device you paid hard-earned money for, you still became a product yourself, to be measured, anonymized, collated, shared, or sold, often away from view?

In 2024, a consumer rights group out of the UK teased this new reality when it published research into whether people’s air fryers – seriously – might be spying on them.

By analyzing the associated Android apps for three separate air fryer models from three different companies, researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries – they also wanted a lot of user data, from precise location to voice recordings from a user’s phone.

As the researchers wrote:

“In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason.”

Bizarrely, these types of data requests are far from rare.

Today, on the Lock and Code podcast, we revisit a 2024 episode in which host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up – Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

How Tor Can Help You Be More Anonymous on the Internet

2 December 2025 at 08:00

The internet is many things, but for many of us, it is far from private. By choosing to engage with the digital world, you often must give up your anonymity: trackers watch your every move as you surf the web and scroll on social media sites, and they use that information to build profiles of who (and where) you are and deliver you more "relevant" ads.

It doesn't have to be this way. There are a number of tactics that can help keep your browsing private. You can use a VPN to make it look like your internet activity is coming from somewhere else; if you use Safari, you can take advantage of Private Relay to hide your IP address from websites you visit; or you can connect to the internet across a different network altogether: Tor.

What is Tor?

The whole idea behind Tor (which is short for The Onion Router) is to anonymize your internet browsing so that no one can tell that it is you visiting any particular website. Tor started out as a project of the U.S. Naval Research Lab in the 1990s, but developed into a nonprofit organization in 2006. Ever since, the network has been popular with users who want to privatize their web activity, whether they're citizens of countries with strict censorship laws, journalists working on sensitive stories, or simply privacy-focused individuals.

Tor is a network, but it's commonly conflated with the project's official browser, also known as Tor. The Tor Browser is a modified version of Firefox that connects to the Tor network. The browser removes many of the technical barriers to entry for the Tor network: You can still visit your desired URLs as you would in Chrome or Edge, and the browser will connect you to them via the Tor network automatically. But what does that mean?

How does Tor work?

Traditionally, when you visit a website, your data is sent directly to that site, complete with your identifying information (i.e. your device's IP address). That website, your internet service provider, and any other entities that might be privy to your internet traffic can all see that it is your device making the request, and can collect that information accordingly. This can be as innocent as the website in question storing your details for your next visit, or as scummy as the site following you around the internet.

Tor flips the script on this internet browsing model. Rather than connect your device directly to the website you're visiting, Tor runs your connection through a number of different servers, known as "nodes." These nodes are hosted by volunteers all over the world, so there's no telling which nodes your request will go through when you initiate a connection.

But Tor would not be known for its privacy if it only relied on multiple nodes to bounce your traffic around. In addition to the nodes, Tor adds layers of encryption to your request. When the request passes from one node to another, each node is only able to decrypt one layer of the encryption, just enough to learn where to send the request next. This method ensures that no single node in the system knows too much: Each only knows where the request came from one step before, and where it is sending the request in the following step. It's like peeling back layers of an onion, hence the platform's name.

Here's a simplified example of how it works: Let's say you want to visit Lifehacker.com through Tor. You initiate the request as you normally would, by typing the URL into Tor's address bar and hitting enter. When you do, Tor adds layered encryption to your request. The first node it sends it to, perhaps based in, say, the U.S., can unlock one layer of that encryption, which tells it which node to send the request to next. The next node, based perhaps in Japan, decrypts another layer of that encryption, which tells it to send the request to a third node in Germany. That third node (known as the exit node) decrypts the final layer of encryption, which tells it to connect to Lifehacker.com. Once Lifehacker receives the request, the reverse happens: Lifehacker sends its response to the exit node in Germany, which adds back its layer of encryption. That node sends the response to the node in Japan, which adds a second layer, and then on to the node in the U.S., which adds the final layer before sending the fully encrypted response back to your browser, which can decrypt all the layers on your behalf. Congratulations: You have just visited Lifehacker.com without revealing your identity.
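
To make the layering concrete, here's a toy Python sketch of the onion principle. It is not the real Tor protocol – Tor negotiates keys per circuit and uses fixed-size cells – and the circuit, node names, and message here are made up for illustration:

```python
# Toy illustration of onion routing's layered encryption -- NOT the real Tor
# protocol, which negotiates per-circuit keys and uses fixed-size cells.
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# A hypothetical three-node circuit; in reality each relay holds its own key,
# which the client learns through key exchange when building the circuit.
circuit = ["us-node", "japan-node", "german-exit-node"]
keys = {node: Fernet(Fernet.generate_key()) for node in circuit}

def wrap(message: bytes) -> bytes:
    """Client side: encrypt for the exit node first, the entry node last."""
    for node in reversed(circuit):
        message = keys[node].encrypt(message)
    return message

def relay(onion: bytes) -> bytes:
    """Each node peels exactly one layer; only the exit sees the plaintext."""
    for node in circuit:
        onion = keys[node].decrypt(onion)
        print(f"{node} peeled one layer")
    return onion

request = wrap(b"GET https://lifehacker.com/")
print(relay(request))  # -> b'GET https://lifehacker.com/'
```

Note how the loop order encodes the design: the client encrypts for the exit node first and the entry node last, so each relay can strip exactly one layer and no single relay ever sees both who you are and where you're going.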

Tor isn't perfect for privacy

While Tor goes a long way toward anonymizing your internet activity, it won't protect you entirely. One of the network's biggest weaknesses is the exit node: Since the final node in the chain carries the decrypted request, it can see where you're going, and, potentially, what you're doing when you get there. It won't be able to know where the request originated, but it can see that you're trying to access Lifehacker. Depending on what sites you're accessing, you might give enough information away to reveal yourself.

This was especially an issue when websites were largely using the unencrypted HTTP protocol. If you connected to an unencrypted website, that final node might be able to see your activity on the site itself, including login information, messages, or financial data. But now that most sites have switched to the encrypted HTTPS protocol, there's less concern about third parties being able to access the contents of your traffic. Still, even if trackers can't see exactly what you're doing or saying on these sites, they can see you visited the site itself, which is why Tor is still useful in today's encrypted internet.

Who should use Tor?

If you've heard anything about Tor, you might know it as the go-to service for accessing the dark web. That is true, but that doesn't make Tor bad. The dark web is not inherently bad, either: It's simply a network of sites that cannot be accessed by standard web browsers. That includes a number of very bad sites filled with very bad stuff, to be sure. But it also encompasses a number of perfectly legal activities as well. Chrome and Firefox cannot see dark web sites, but the Tor Browser can.

But you don't need to visit the dark web in order for Tor to be useful. Anyone who wants to keep their internet traffic private from the world can benefit. You might have a serious need for this, such as if you live in a country that won't let you access certain websites, or if you're a reporter working on a story that could have ramifications should the information leak. But you don't need to have a specialized case to benefit. Tor can help reduce anyone's digital footprint, and keep trackers from following you around the internet.

One big drawback

If you do decide to use Tor, understand that it won't be as fast as other modern browsers. Running your traffic through multiple international nodes takes a toll on performance, so you may be waiting a bit longer for your websites to load than you're used to. However, it won't cost you anything to try it, as the browser is completely free to download and use on Mac, Windows, Linux, and Android. (Sorry, iOS fans.) If you're worried about what you've heard about the dark web, don't be: The only way to access that material is to seek it out directly. Otherwise, using Tor will feel just like using any other browser – albeit just a tad slower.

Like Social Media, AI Requires Difficult Choices

2 December 2025 at 07:03

In his 2020 book, “Future Politics,” British barrister Jamie Susskind wrote that the dominant question of the 20th century was “How much of our collective life should be determined by the state, and what should be left to the market and civil society?” But in the early decades of this century, Susskind suggested that we face a different question: “To what extent should our lives be directed and controlled by powerful digital systems – and on what terms?”

Artificial intelligence (AI) forces us to confront this question. It is a technology that in theory amplifies the power of its users: A manager, marketer, political campaigner, or opinionated internet user can utter a single instruction, and see their message – whatever it is – instantly written, personalized, and propagated via email, text, social, or other channels to thousands of people within their organization, or millions around the world. It also allows us to individualize solicitations for political donations, elaborate a grievance into a well-articulated policy position, or tailor a persuasive argument to an identity group or even a single person.

But even as it offers endless potential, AI is a technology that – like the state – gives others new powers to control our lives and experiences.

We’ve seen this play out before. Social media companies made the same sorts of promises 20 years ago: instant communication enabling individual connection at massive scale. Fast-forward to today, and the technology that was supposed to give individuals power and influence ended up controlling us. Today social media dominates our time and attention, assaults our mental health, and – together with its Big Tech parent companies – captures an unfathomable fraction of our economy, even as it poses risks to our democracy.

The novelty and potential of social media were as present then as they are for AI now, which should make us wary of AI’s potential harmful consequences for society and democracy. We legitimately fear artificial voices and manufactured reality drowning out real people on the internet: on social media, in chat rooms, everywhere we might try to connect with others.

It doesn’t have to be that way. Alongside these evident risks, AI has legitimate potential to transform both everyday life and democratic governance in positive ways. In our new book, “Rewiring Democracy,” we chronicle examples from around the globe of democracies using AI to make regulatory enforcement more efficient, catch tax cheats, speed up judicial processes, synthesize input from constituents to legislatures, and much more. Because democracies distribute power across institutions and individuals, making the right choices about how to shape AI and its uses requires both clarity and alignment across society.

To that end, we spotlight four pivotal choices facing private and public actors. These choices are similar to those we faced during the advent of social media, and in retrospect we can see that we made the wrong decisions back then. Our collective choices in 2025 – choices made by tech CEOs, politicians, and citizens alike – may dictate whether AI is applied to positive and pro-democratic, or harmful and civically destructive, ends.

A Choice for the Executive and the Judiciary: Playing by the Rules

The Federal Election Commission (FEC) calls it fraud when a candidate hires an actor to impersonate their opponent. More recently, it had to decide whether doing the same thing with an AI deepfake is permissible. (It concluded it is not.) Although in this case the FEC made the right decision, this is just one example of how AIs could skirt laws that govern people.

Likewise, courts are having to decide if and when it is okay for an AI to reuse creative materials without compensation or attribution, which might constitute plagiarism or copyright infringement if carried out by a human. (The court outcomes so far are mixed.) Courts are also adjudicating whether corporations are responsible for upholding promises made by AI customer service representatives. (In the case of Air Canada, the answer was yes, and insurers have started covering the liability.)

Social media companies faced many of the same hazards decades ago and have largely been shielded by the combination of Section 230 of the Communications Decency Act of 1996 and the safe harbor offered by the Digital Millennium Copyright Act of 1998. Even in the absence of congressional action to strengthen or add rigor to this law, the Federal Communications Commission (FCC) and the Supreme Court could take action to enhance its effects and to clarify which humans are responsible when technology is used, in effect, to bypass existing law.

A Choice for Congress: Privacy

As AI-enabled products increasingly ask Americans to share yet more of their personal information – their “context” – to use digital services like personal assistants, safeguarding the interests of the American consumer should be a bipartisan cause in Congress.

It has been nearly 10 years since Europe adopted comprehensive data privacy regulation. Today, American companies exert massive efforts to limit data collection, acquire consent for use of data, and hold it confidential under significant financial penalties – but only for their customers and users in the EU.

Yet a decade later, the U.S. has still failed to make progress on any serious attempt at comprehensive federal privacy legislation written for the 21st century, and the precious few data privacy protections that do exist apply only to narrow slices of the economy and population. This inaction comes in spite of scandal after scandal regarding Big Tech corporations’ irresponsible and harmful use of our personal data: Oracle’s data profiling, Facebook and Cambridge Analytica, Google ignoring data privacy opt-out requests, and many more.

Privacy is just one side of the obligations AI companies should have with respect to our data; the other side is portability – that is, the ability for individuals to choose to migrate and share their data between consumer tools and technology systems. To the extent that knowing our personal context really does enable better and more personalized AI services, it’s critical that consumers have the ability to extract and migrate their personal context between AI solutions. Consumers should own their own data, and with that ownership should come explicit control over who and what platforms it is shared with, as well as withheld from. Regulators could mandate this interoperability. Otherwise, users are locked in and lack freedom of choice between competing AI solutions – much like the time invested in building a following on a social network has locked many users to those platforms.

A Choice for States: Taxing AI Companies

It has become increasingly clear that social media is not a town square in the utopian sense of an open and protected public forum where political ideas are distributed and debated in good faith. If anything, social media has coarsened and degraded our public discourse. Meanwhile, the sole act of Congress designed to substantially rein in the social and political effects of social media platforms – the TikTok ban, which aimed to protect the American public from Chinese influence and data collection, citing it as a national security threat – is one it seems to no longer even acknowledge.

While Congress has waffled, regulation in the U.S. is happening at the state level. Several states have limited children’s and teens’ access to social media. With Congress having rejected – for now – a threatened federal moratorium on state-level regulation of AI, California passed a new slate of AI regulations after weathering a lobbying onslaught from industry opponents. Perhaps most interesting, Maryland has recently become the first in the nation to levy taxes on digital advertising platform companies.

States now face a choice of whether to apply a similar reparative tax to AI companies, recapturing a fraction of the costs they externalize on the public in order to fund affected public services. State legislators concerned about the potential loss of jobs, cheating in schools, and harm to people struggling with their mental health caused by AI have options to combat these effects. They could extract the funding needed to mitigate these harms and support public services – strengthening job training programs and public employment, public schools, public health services, even public media and technology.

A Choice for All of Us: What Products Do We Use, and How?

A pivotal moment in the social media timeline occurred in 2006, when Facebook opened its service to the public after years of catering to students of select universities. Millions quickly signed up for a free service where the only source of monetization was the extraction of their attention and personal data.

Today, about half of Americans are daily users of AI, mostly via free products from Facebook’s parent company Meta and a handful of other familiar Big Tech giants and venture-backed tech firms such as Google, Microsoft, OpenAI, and Anthropic – with every incentive to follow the same path as the social platforms.

But now, as then, there are alternatives. Some nonprofit initiatives are building open-source AI tools that have transparent foundations and can be run locally and under users’ control, like AllenAI and EleutherAI. Some governments, like Singapore, Indonesia, and Switzerland, are building public alternatives to corporate AI that don’t suffer from the perverse incentives introduced by the profit motive of private entities.

Just as social media users have faced platform choices with a range of value propositions and ideological valences – as diverse as X, Bluesky, and Mastodon – the same will increasingly be true of AI. Those of us who use AI products in our everyday lives as people, workers, and citizens may not have the same power as judges, lawmakers, and state officials. But we can play a small role in influencing the broader AI ecosystem by demonstrating interest in and usage of these alternatives to Big AI. If you’re a regular user of commercial AI apps, consider trying the free-to-use service for Switzerland’s public Apertus model.

None of these choices are really new. They were all present almost 20 years ago, as social media moved from niche to mainstream. They were all policy debates we did not have, choosing instead to view these technologies through rose-colored glasses. Today, though, we can choose a different path and realize a different future. It is critical that we intentionally navigate a path to a positive future for societal use of AI – before the consolidation of power renders it too late to do so.

This post was written with Nathan E. Sanders, and originally appeared in Lawfare.

Flock Uses Overseas Gig Workers To Build Its Surveillance AI

1 December 2025 at 19:02
An anonymous reader quotes a report from 404 Media: Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage, including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company. The findings bring up questions about who exactly has access to footage collected by Flock surveillance cameras and where people reviewing the footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities that cops use every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system. Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business -- creating a surveillance system that constantly monitors US residents' movements -- means that footage might be more sensitive than other AI training jobs. [...] Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race." The exposed panel included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods. The exposed panel included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles. Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website. The tipsters also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.

Read more of this story at Slashdot.

Korea's Coupang Says Data Breach Exposed Nearly 34 Million Customers' Personal Information

1 December 2025 at 17:00
An anonymous reader quotes a report from TechCrunch: South Korean e-commerce platform Coupang over the weekend said nearly 34 million Korean customers' personal information had been leaked in a data breach that had been ongoing for more than five months. The company said it first detected the unauthorized exposure of 4,500 user accounts on November 18, but a subsequent investigation revealed that the breach had actually compromised about 33.7 million customer accounts in South Korea. The breach affected customers' names, email addresses, phone numbers, shipping addresses, and certain order histories, per Coupang. More sensitive data like payment information, credit card numbers, and login credentials was not compromised and remains secure, the company said. [...] Police have reportedly identified at least one suspect, a former Chinese Coupang employee now abroad, after launching an investigation following a November 18 complaint.

Read more of this story at Slashdot.

Australian Man Gets 7 Years for ‘Evil Twin’ WiFi Attacks

1 December 2025 at 12:38

An Australian man has been sentenced to more than seven years in jail on charges that he created ‘evil twin’ WiFi networks to hack into women’s online accounts to steal intimate photos and videos. The Australian Federal Police (AFP) didn’t name the man in announcing the sentencing, but several Australian news outlets identified him as Michael Clapsis, 44, of Perth, an IT professional who allegedly used his skills to carry out the attacks. He was sentenced to seven years and four months in Perth District Court on November 28, and will be eligible for parole after serving half that time, according to the Sydney Morning Herald. The AFP said Clapsis pled guilty to 15 charges, ranging from unauthorised access or modification of restricted data to unauthorised impairment of electronic communication, failure to comply with an order, and attempted destruction of evidence, among other charges.

‘Evil Twin’ WiFi Network Detected on Australian Domestic Flight

The AFP investigation began in April 2024, when an airline reported that its employees had identified a suspicious WiFi network mimicking a legitimate access point – known as an “evil twin” – during a domestic flight. On April 19, 2024, AFP investigators searched the man’s luggage when he arrived at Perth Airport, where they seized a portable wireless access device, a laptop and a mobile phone. They later executed a search warrant “at a Palmyra home.”

Forensic analysis of data and seized devices “identified thousands of intimate images and videos, personal credentials belonging to other people, and records of fraudulent WiFi pages,” the AFP said. The day after the search warrant, the man deleted more than 1,700 items from his account on a data storage application and “unsuccessfully tried to remotely wipe his mobile phone,” the AFP said. Between April 22 and 23, 2024, the AFP said the man “used a computer software tool to gain access to his employer’s laptop to access confidential online meetings between his employer and the AFP regarding the investigation.”

The man allegedly used a portable wireless access device, called a “WiFi Pineapple,” to detect device probe requests and instantly create a network with the same name. A device would then connect to the evil twin network automatically. The network took people to a webpage and prompted them to log in using an email or social media account, where their credentials were then captured. AFP said its cybercrime investigators identified data related to use of the fraudulent WiFi pages at airports in Perth, Melbourne and Adelaide, as well as on domestic flights, “while the man also used his IT privileges to access restricted and personal data from his previous employment.” “The man unlawfully accessed social media and other online accounts linked to multiple unsuspecting women to monitor their communications and steal private and intimate images and videos,” the AFP said.
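
For the technically curious, one observable symptom of this class of attack is a single network name being broadcast by multiple radios. The sketch below is a rough illustration, not a reliable defense: it uses the scapy library, requires root privileges and a WiFi card in monitor mode, and "wlan0mon" is a placeholder interface name. It watches beacon frames and warns when one SSID is advertised by more than one BSSID – keeping in mind that large venues legitimately run many access points under a single SSID, so a warning is a prompt for caution, not proof of an attack:

```python
# Rough evil-twin indicator: warn when one SSID is advertised by multiple
# BSSIDs. Not conclusive -- large networks legitimately run many access
# points under one SSID -- but a sudden duplicate of an airline's network
# name is worth noticing. Requires root and a monitor-mode WiFi interface.
# Requires the third-party "scapy" package: pip install scapy
from collections import defaultdict

from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt

ssid_to_bssids = defaultdict(set)

def handle_beacon(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid = pkt[Dot11Elt].info.decode(errors="replace")  # network name
    bssid = pkt[Dot11].addr3                            # advertising radio
    if ssid and bssid not in ssid_to_bssids[ssid]:
        ssid_to_bssids[ssid].add(bssid)
        if len(ssid_to_bssids[ssid]) > 1:
            print(f"WARNING: '{ssid}' advertised by "
                  f"{sorted(ssid_to_bssids[ssid])}")

# "wlan0mon" is a placeholder; substitute your own monitor-mode interface.
sniff(iface="wlan0mon", prn=handle_beacon, store=False)
```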

Victims of Evil Twin WiFi Attack Enter Statements

At the sentencing, a prosecutor read from emotional impact statements from the man’s victims, detailing the distress they suffered and the enduring feelings of shame and loss of privacy. One said, “I feel like I have eyes on me 24/7,” according to the Morning Herald. Another said, “Thoughts of hatred, disgust and shame have impacted me severely. Even though they were only pictures, they were mine not yours.” The paper said Clapsis’ attorney told the court that “He’s sought to seek help, to seek insight, to seek understanding and address his way of thinking.”

The case highlights the importance of avoiding free public WiFi when possible – and not accessing sensitive websites or applications if one must be used. Any network that requests personal details should be avoided. “If you do want to use public WiFi, ensure your devices are equipped with a reputable virtual private network (VPN) to encrypt and secure your data,” the AFP said. “Disable file sharing, don’t use things like online banking while connected to public WiFi and, once you disconnect, change your device settings to ‘forget network’.”

Banning VPNs

1 December 2025 at 07:59

This is crazy. Lawmakers in several US states are contemplating banning VPNs, because…think of the children!

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing – potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.

The EFF link explains why this is a terrible idea.

Cybersecurity Coalition to Government: Shutdown is Over, Get to Work

28 November 2025 at 13:37

The Cybersecurity Coalition, an industry group of almost a dozen vendors, is urging the Trump Administration and Congress, now that the government shutdown is over, to take a number of steps to strengthen the country's cybersecurity posture as China, Russia, and other foreign adversaries accelerate their attacks.

The post Cybersecurity Coalition to Government: Shutdown is Over, Get to Work appeared first on Security Boulevard.

The UK Has It Wrong on Digital ID. Here’s Why.

28 November 2025 at 05:10

In late September, the United Kingdom’s Prime Minister Keir Starmer announced his government’s plans to introduce a new digital ID scheme in the country to take effect before the end of the Parliament (no later than August 2029). The scheme will, according to the Prime Minister, “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like people’s name, date of birth, nationality or residency status, and photo to verify their right to live and work in the country.

This is the latest example of a government creating a new digital system that is fundamentally incompatible with a privacy-protecting and human rights-defending democracy. This past year alone, we’ve seen federal agencies across the United States explore digital IDs to prevent fraud, the Transportation Security Administration accepting “Digital passport IDs” in Android, and states contracting with mobile driver’s license (mDL) providers. And as we’ve said many times, digital ID is not for everyone and policymakers should ensure better access for people with or without a digital ID.

But instead, the UK is pushing forward with its plans to roll out digital ID in the country. Here are three reasons why those policymakers have it wrong.

Mission Creep

In his initial announcement, Starmer stated: “You will not be able to work in the United Kingdom if you do not have digital ID. It’s as simple as that.” Since then, the government has been forced to clarify those remarks: digital ID will be mandatory to prove the right to work, and will only take effect after the scheme’s proposed introduction in 2028, rather than retrospectively.

The government has also confirmed that digital ID will not be required for pensioners, students, and those not seeking employment, and will also not be mandatory for accessing medical services, such as visiting hospitals. But as civil society organizations are warning, it’s possible that the required use of digital ID will not end here. Once this data is collected and stored, it provides a multitude of opportunities for government agencies to expand the scenarios where they demand that you prove your identity before entering physical and digital spaces or accessing goods and services.

The government may also be able to request information from workplaces on who is registering for employment at that location, or collaborate with banks to aggregate different data points to determine who is self-employed or not registered to work. It potentially leads to situations where state authorities can treat the entire population with suspicion of not belonging, and would shift the power dynamics even further towards government control over our freedom of movement and association.

And this is not the first time that the UK has attempted to introduce digital ID: politicians previously proposed similar schemes intended to control the spread of COVID-19, limit immigration, and fight terrorism. In a country increasing the deployment of other surveillance technologies like face recognition technology, this raises additional concerns about how digital ID could lead to new divisions and inequalities based on the data obtained by the system.

These concerns compound the underlying narrative that digital ID is being introduced to curb illegal immigration to the UK: that digital ID would make it harder for people without residency status to work in the country because it would lower the possibility that anyone could borrow or steal the identity of another. Not only is there little evidence to prove that digital ID will limit illegal immigration, but checks on the right to work in the UK already exist. This is nothing more than inflammatory and misleading; Liberal Democrat leader Ed Davey noted this would do “next to nothing to tackle channel crossings.”

Inclusivity is Not Inevitable, But Exclusion Is

While the government announced that their digital ID scheme will be inclusive enough to work for those without access to a passport, reliable internet, or a personal smartphone, as we’ve been saying for years, digital ID leaves vulnerable and marginalized people not only out of the debate but ultimately out of the society that these governments want to build. We remain concerned about the potential for digital identification to exacerbate existing social inequalities, particularly for those with reduced access to digital services or people seeking asylum.

The UK government has said a public consultation will be launched later this year to explore alternatives, such as physical documentation or in-person support for the homeless and older people; but it’s short-sighted to think that these alternatives are viable or functional in the long term. For example, UK organization Big Brother Watch reported that only about 20% of Universal Credit applicants can use online ID verification methods.

These individuals should not be an afterthought attached to the end of the announcement for further review. If a tool does not work for those without access to essentials such as the internet or a physical ID, then it should not exist.

Digital ID schemes also exacerbate other inequalities in society: abusers, for example, will be able to prevent others from getting jobs or proving other statuses by denying access to their ID. In the same way, the scope of digital ID may be expanded, and people could be forced to prove their identities to different government agencies and officials, which may raise issues of institutional discrimination when phones fail to load or when the Home Office has incorrect information on an individual. This is not an unrealistic scenario considering the frequency of internet connectivity issues, or circumstances like passports and other documentation expiring.

Attacks on Privacy and Surveillance

Digital ID systems expand the number of entities that may access personal information and consequently use it to track and surveil. The UK government has nodded to this threat. Starmer stated that the technology would “absolutely have very strong encryption” and wouldn't be used as a surveillance tool. Moreover, junior Cabinet Office Minister Josh Simons told Parliament that “data associated with the digital ID system will be held and kept safe in secure cloud environments hosted in the United Kingdom” and that “the government will work closely with expert stakeholders to make the programme effective, secure and inclusive.”

But if digital ID is needed to verify people’s identities multiple times per day or week, ensuring end-to-end encryption is the bare minimum the government should require. Unlike sharing a National Insurance Number, a digital ID will show an array of personal information that would otherwise not be available or exchanged.

This would create a rich environment for hackers or hostile agencies to obtain swathes of personal information on those based in the UK. And if previous schemes in the country are anything to go by, the government’s ability to handle giant databases is questionable. Notably, the eVisa’s multitude of failures last year illustrated the harms that digital IDs can bring, with issues like government system failures and internet outages leading to people being detained, losing their jobs, or being made homeless. Checking someone’s identity against a database in real time requires a host of online and offline factors to work, and the UK has yet to take the structural steps required to remedy this.

Moreover, we know that the Cabinet Office and the Department for Science, Innovation and Technology will be involved in the delivery of digital ID and are clients of U.S.-based tech vendors, specifically Amazon Web Services (AWS). The UK government has spent millions on AWS (and Microsoft) cloud services in recent years, and the One Government Value Agreement (OGVA) – first introduced in 2020, which provides discounts for cloud services by contracting with the UK government and public sector organizations as a single client – is still active. It is essential that any data collected is not stored or shared with third parties, including through cloud agreements with companies outside the UK.

And even if the UK government were to publish comprehensive plans to ensure data minimization in its digital ID, we would still strongly oppose any national ID scheme. Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID, and both the public and civil society organizations in the country are against this.

Ways Forward

Digital ID regimes strip privacy from everyone and further marginalize those seeking asylum or undocumented people. They are pursued as a technological solution to offline problems but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to opening – or closing – doors to essential services and experiences.

We cannot base our human rights on the government’s mere promise to uphold them. On December 8th, politicians in the country will be debating a petition that reached almost 3 million signatories rejecting mandatory digital ID. If you’re based in the UK, you can contact your MP to oppose the plans for a digital ID system.

The case for digital identification has not been made. The UK government must listen to people in the country and say no to digital ID.

French Regulator Fines Vanity Fair Publisher €750,000 for Persistent Cookie Consent Violations

28 November 2025 at 05:49

France's data protection authority discovered that when visitors clicked the button to reject cookies on Vanity Fair (vanityfair[.]fr), the website continued placing tracking technologies on their devices and reading existing cookies without consent, a violation that now costs publisher Les Publications Condé Nast €750,000 in fines six years after privacy advocate NOYB first filed complaints against the media company.

The November 20 sanction by CNIL's restricted committee marks the latest enforcement action in France's aggressive campaign to enforce cookie consent requirements under the ePrivacy Directive.

NOYB, the European privacy advocacy organization led by Max Schrems, filed the original public complaint in December 2019 concerning cookies placed on user devices by the Vanity Fair France website. After multiple investigations and discussions with CNIL, Condé Nast received a formal compliance order in September 2021, with proceedings closed in July 2022 based on assurances of corrective action.

Repeated Violations Despite Compliance Order

CNIL conducted follow-up online investigations in July and November 2023, then again in February 2025, discovering that the publisher had failed to implement compliant cookie practices despite the earlier compliance order. The restricted committee found Les Publications Condé Nast violated obligations under Article 82 of France's Data Protection Act across multiple dimensions.

Investigators discovered cookies requiring consent were placed on visitors' devices as soon as they arrived on vanityfair.fr, even before users interacted with the information banner to express a choice. This automatic placement violated fundamental consent requirements mandating that tracking technologies only be deployed after users provide explicit permission.

The website lacked clarity in information provided to users about cookie purposes. Some cookies appeared categorized as "strictly necessary" and therefore exempt from consent obligations, but useful information about their actual purposes remained unavailable to visitors. This misclassification potentially allowed the publisher to deploy tracking technologies under false pretenses.

Most significantly, consent refusal and withdrawal mechanisms proved completely ineffective. When users clicked the "Refuse All" button in the banner or attempted to withdraw previously granted consent, new cookies subject to consent requirements were nevertheless placed on their devices while existing cookies continued being read.
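
None of this is technically exotic to get right. Here's a minimal Flask sketch of the gating logic regulators expect – with hypothetical cookie and route names, not the publisher's actual stack: non-essential cookies are set only after an explicit opt-in, and a refusal both blocks new tracking cookies and deletes existing ones.

```python
# Minimal sketch of consent-gated cookies under the ePrivacy rules described
# above. Cookie and route names are hypothetical, not the publisher's actual
# implementation. Requires the third-party "flask" package: pip install flask
from flask import Flask, make_response, request

app = Flask(__name__)
TRACKING_COOKIES = ["analytics_id", "ad_segment"]  # consent required

@app.route("/")
def index():
    resp = make_response("<h1>Hello</h1>")
    # "Strictly necessary" cookies are exempt and may always be set.
    resp.set_cookie("session_id", "abc123", httponly=True)
    # Tracking cookies only after an explicit, affirmative opt-in.
    if request.cookies.get("consent") == "accepted":
        resp.set_cookie("analytics_id", "user-42")
    return resp

@app.route("/consent/<choice>")
def consent(choice: str):
    resp = make_response("Preference saved")
    resp.set_cookie("consent", "accepted" if choice == "accept" else "refused")
    if choice != "accept":
        # Refusal and withdrawal must actually take effect: clear existing
        # tracking cookies instead of continuing to set and read them.
        for name in TRACKING_COOKIES:
            resp.delete_cookie(name)
    return resp
```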

Escalating French Enforcement Actions

The fine amount takes into account that Condé Nast had already been issued a formal notice in 2021 but failed to correct its practices, along with the number of people affected and various breaches of rules protecting users regarding cookies.

The CNIL fine represents another in a series of NOYB-related enforcement actions, with the French authority previously fining Criteo €40 million in 2023 and Google €325 million earlier in 2025. Spain's AEPD issued a €100,000 fine against Euskaltel in related NOYB litigation.

According to reports, Condé Nast acknowledged violations in its defense but cited technical errors, blamed the Interactive Advertising Bureau's Transparency and Consent Framework for providing misleading information, and stated the cookies in question fall under the functionality category. The company claimed good faith and cooperative efforts while arguing against public disclosure of the sanction.

The Cookie Consent Conundrum

French enforcement demonstrates the ePrivacy Directive's teeth in protecting user privacy. CNIL maintains material jurisdiction to investigate and sanction cookie operations affecting French users, with the GDPR's one-stop-shop mechanism not applying since cookie enforcement falls under separate ePrivacy rules transposed into French law.

The authority has intensified actions against dark patterns in consent mechanisms, particularly practices making cookie acceptance easier than refusal. Previous CNIL decisions against Google and Facebook established that websites offering immediate "Accept All" buttons must provide equivalent simple mechanisms for refusing cookies, with multiple clicks to refuse constituting non-compliance.

The six-year timeline from initial complaint to final sanction illustrates both the persistence required in privacy enforcement and the extended timeframes companies can exploit while maintaining non-compliant practices that generate advertising revenue through unauthorized user tracking.

EFF to Arizona Federal Court: Protect Public School Students from Surveillance and Punishment for Off-Campus Speech

26 November 2025 at 17:33

Legal Intern Alexandra Rhodes contributed to this blog post.

EFF filed an amicus brief urging the Arizona District Court to protect public school students’ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is “on campus.” We argued that students need private digital spaces beyond their school’s reach to speak freely, without the specter of constant school surveillance and punishment.

Surveillance Software Exposed a Bad Joke Made in the Privacy of a Student’s Home

The case, Merrill v. Marana Unified School District, involves a Marana High School student who, while at home one morning before school started, asked his mother for advice about a bad grade he received on an English assignment. His mother said he should talk to his English teacher, so he opened his school-issued Google Chromebook and started drafting an email. The student then wrote a series of jokes in the draft email that he deleted each time. The last joke stated: “GANG GANG GIMME A BETTER GRADE OR I SHOOT UP DA SKOOL HOMIE,” which he narrated out loud to his mother in a silly voice before deleting the draft and closing his computer.

Within the hour, the student’s mother received a phone call from the school principal, who said that Gaggle surveillance software had flagged a threat from her son and had sent along the screenshot of the draft email. The student’s mother attempted to explain the situation and reassure the principal that there was no threat. Nevertheless, despite her reassurances and the student’s lack of disciplinary record or history of violence, the student was ultimately suspended over the draft email – even though he was physically off campus at the time, before school hours, and had never sent the email.

After the student’s suspension was unsuccessfully challenged, the family sued the school district alleging infringement of the student’s right to free speech under the First Amendment and violation of the student’s right to due process under the Fourteenth Amendment.

Public School Students Have Greater First Amendment Protection for Off-Campus Speech

The U.S. Supreme Court has addressed the First Amendment rights of public school students in a handful of cases.

Most notably, in Tinker v. Des Moines Independent Community School District (1969), the Court held that students may not be punished for their on-campus speech unless the speech “materially and substantially” disrupted the school day or invaded the rights of others.

Decades later, in Mahanoy Area School District v. B.L. by and through Levy (2021), in which EFF filed a brief, the Court further held that schools have less leeway to regulate student speech when that speech occurs off campus. Importantly, the Court stated that schools should have a limited ability to punish off-campus speech because “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.”

The Ninth Circuit has further held that off-campus speech is only punishable if it bears a “sufficient nexus” to the school and poses a credible threat of violence.

In this case, therefore, the extent of the school district’s authority to regulate student speech is tied to whether the high schooler was on or off campus at the time of the speech. The student here was at home and thus physically off campus when he wrote the joke in question; he wrote the draft before school hours; and the joke was not emailed to anyone on campus or anyone associated with the campus.

Yet the school district is arguing that his use of a school-issued Google Chromebook and Google Workspace for Education account (including the email account) made his speech – and makes all student speech – automatically “on campus” for purposes of justifying punishment under the First Amendment.

Schools Provide Students with Valuable Digital Toolsβ€”But Also Subject Them to SurveillanceΒ 

EFF supports the plaintiffs’ argument that the student’s speech was β€œoff campus,” did not bear a sufficient nexus to the school, and was not a credible threat. In our amicus brief, we urged the trial court at minimum to reject a rule that the use of a school-issued device or cloud account always makes a student’s speech β€œon campus.”   

Our amicus brief supports the plaintiffs’ First Amendment arguments through the lens of surveillance, emphasizing that digital speech and digital privacy are inextricably linked.Β Β 

As we explained, Marana Unified School District, like many schools and districts across the country, offers students free Google Chromebooks and requires them to have an online Google Account to access the various cloud apps in Google Workspace for Education, including the Gmail app.Β Β 

Marana Unified School District also uses three surveillance technologies that are integrated into Chromebooks and Google Workspace for Education: Gaggle, GoGuardian, and Securly. Together, these tools can monitor virtually everything students do on their laptops and online, from the emails and documents they write (or even just draft) to the websites they visit.
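To make concrete how such monitoring can reach an email that was never sent, here is a minimal TypeScript sketch of a naive keyword scanner. It is purely illustrative, with a hypothetical watch list and a hypothetical scanDraft function; it is not the actual logic of Gaggle, GoGuardian, or Securly:

// Hypothetical illustration of a naive student-content scanner; not the
// actual implementation of any monitoring product.
const FLAGGED_TERMS = ["shoot", "bomb", "kill"]; // hypothetical watch list

interface FlagResult {
  flagged: boolean;
  matches: string[];
}

// Scans any text a student produces -- including unsent drafts, since
// cloud apps autosave continuously -- which is how a joke typed at home
// can reach a principal before any email is sent.
function scanDraft(text: string): FlagResult {
  const lower = text.toLowerCase();
  const matches = FLAGGED_TERMS.filter((term) => lower.includes(term));
  return { flagged: matches.length > 0, matches };
}

const result = scanDraft("an off-color joke typed into a draft email...");
if (result.flagged) {
  // Real products escalate to school officials with a screenshot or excerpt.
  console.log("Escalating:", result.matches);
}

A scanner this crude has no sense of context, audience, or intent, which is one reason such tools produce the kinds of inaccurate flags described below.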

School Digital Surveillance Chills Student Speech and Further Harms Students

In our amicus brief, we made four main arguments against a blanket rule that categorizes any use of a school-issued device or cloud account as “on campus,” even if the student is geographically off campus or outside of school hours.

First, we pointed out that such a rule would leave students with no reprieve from school authority, which runs counter to the Supreme Court’s admonition in Mahanoy against regulating “all the speech a student utters during the full 24-hour day.” There must be some place that is “off campus” for public school students even when they are using digital tools provided by schools; otherwise, schools will reach too far into students’ lives.

Second, we urged the court to reject such an “on campus” rule to mitigate the chilling effect of digital surveillance on students’ freedom of speech: the risk that students will self-censor, choosing not to express themselves in certain ways or to access certain information that may be disfavored by school officials. If students know that no matter where they are or what they are doing with their Chromebooks and Google Accounts, the school is watching, and that the school has greater legal authority to punish them because they are always “on campus,” they will undoubtedly curb their speech.

Third, we argued that such an “on campus” rule will exacerbate existing inequities in public schools among students of different socio-economic backgrounds. It would distinctly disadvantage lower-income students, who are more likely to rely on school-issued devices because their families cannot afford a personal laptop or tablet. This creates a “pay for privacy” scheme: lower-income students are subject to greater school-directed surveillance and related discipline for digital speech, while wealthier students can limit surveillance by using personal laptops and email accounts, enabling them to have more robust free speech protections.

Fourth, such an “on campus” rule would incentivize public schools to continue eroding student privacy by subjecting students to near-constant digital surveillance. The student surveillance technologies schools use are notoriously privacy-invasive and inaccurate, causing various harms to students, including unnecessary investigations and discipline, disclosure of sensitive information, and frustrated learning.

We urge the Arizona District Court to protect public school students’ freedom of speech and privacy by rejecting this approach to school-managed technology. As we said in our brief, students, especially high schoolers, need some sphere of digital autonomy, free of surveillance, judgment, and punishment, as much as anyone else: to express themselves, to develop their identities, to learn and explore, to be silly or crude, and even to make mistakes.

FBI: Account Takeover Scammers Stole $262 Million this Year

26 November 2025 at 16:51

The FBI says that account takeover (ATO) scams have produced more than 5,100 complaints in the U.S. this year and $262 million in stolen funds. Bitdefender says the combination of the growing number of ATO incidents and risky consumer behavior is creating an increasingly dangerous environment in which such fraud will expand.

The post FBI: Account Takeover Scammers Stole $262 Million this Year appeared first on Security Boulevard.

The Trust Crisis: Why Digital Services Are Losing Consumer Confidence

26 November 2025 at 12:45

According to the Thales Consumer Digital Trust Index 2025, global confidence in digital services is slipping fast. The survey of more than 14,000 consumers across 15 countries makes the findings clear: no sector earned high trust ratings from even half its users. Most industries are seeing trust erode or, at best, stagnate. In an era..

The post The Trust Crisis: Why Digital Services Are Losing Consumer Confidence appeared first on Security Boulevard.

Russian-Backed Threat Group Uses SocGholish to Target U.S. Company

26 November 2025 at 11:10

The Russian state-sponsored group behind the RomCom malware family used the SocGholish loader for the first time to attack a U.S.-based civil engineering firm, continuing its targeting of organizations that support Ukraine in its ongoing war with Russia.

The post Russian-Backed Threat Group Uses SocGholish to Target U.S. Company appeared first on Security Boulevard.

Google Maps Will Let You Hide Your Identity When Writing Reviews

25 November 2025 at 19:02
An anonymous reader quotes a report from PCMag: Four new features are coming to Google Maps, including a way to hide your identity in reviews. Maps will soon let you use a nickname and select an alternative profile picture for online reviews, so you can rate a business without linking the review to your full name and Google profile photo. Google says it will monitor for "suspicious and fake reviews," and every review is still associated with an account on Google's backend, which it believes will discourage bad actors. Look for a new option under Your Profile that says Use a custom name & picture for posting. You'll then be able to pick an illustration to represent you and add a nickname. Google didn't explain why it is introducing anonymous reviews; it pitched the idea as a way to be a business's "Secret Santa." Some users are nervous about publicly posting reviews for local businesses because the reviews could be used to track their location or movements, so the change may encourage more people to contribute honest feedback to the platform, for better or worse. Further reading: Gemini AI To Transform Google Maps Into a More Conversational Experience

Read more of this story at Slashdot.

The Latest Shai-Hulud Malware is Faster and More Dangerous

25 November 2025 at 16:17

A new iteration of the Shai-Hulud malware that ran through npm repositories in September is faster, more dangerous, and more destructive, creating huge numbers of malicious repositories, compromising scripts, and attacking GitHub users, making it one of the most significant supply chain attacks this year.
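For readers unfamiliar with why npm is such fertile ground for worms of this kind, the usual entry point is a package's install-time lifecycle script, which runs arbitrary code during npm install. The following TypeScript sketch is my own illustration of a cheap first-pass audit for that attack class (the auditNodeModules helper is hypothetical, not any vendor's actual tooling): it lists installed dependencies that declare such hooks.

// Minimal defensive sketch: list dependencies in ./node_modules that
// declare npm lifecycle hooks, the mechanism install-time worms abuse.
import { readdirSync, readFileSync, existsSync } from "fs";
import { join } from "path";

const HOOKS = ["preinstall", "install", "postinstall"];

function auditNodeModules(root = "node_modules"): void {
  if (!existsSync(root)) return;
  for (const name of readdirSync(root)) {
    // Scoped packages (@scope/name) live one level deeper and are
    // skipped here for brevity; dirs without a manifest are ignored.
    const manifest = join(root, name, "package.json");
    if (!existsSync(manifest)) continue;
    const pkg = JSON.parse(readFileSync(manifest, "utf8"));
    const hooks = HOOKS.filter((h) => pkg.scripts?.[h]);
    if (hooks.length > 0) {
      console.log(`${name}: declares ${hooks.join(", ")}`);
    }
  }
}

auditNodeModules();

A declared hook is not proof of compromise; plenty of legitimate packages compile native code at install time, so the output is a list of places to look, not a verdict.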

The post The Latest Shai-Hulud Malware is Faster and More Dangerous appeared first on Security Boulevard.

Attackers are Using Fake Windows Updates in ClickFix Scams

24 November 2025 at 21:40

Huntress threat researchers are tracking a ClickFix campaign that includes a variant of the scheme in which the malicious code is hidden in a fake image of a Windows Update; victims who inadvertently run it are infected with the info-stealing malware LummaC2 and Rhadamanthys.

The post Attackers are Using Fake Windows Updates in ClickFix Scams appeared first on Security Boulevard.

Hack of SitusAMC Puts Data of Financial Services Firms at Risk

24 November 2025 at 13:00

SitusAMC, a services provider with clients such as JPMorgan Chase and Citi, said its systems were hacked and the data of clients and their customers possibly compromised, sending banks and other firms scrambling. The breach illustrates the growing number of such attacks on third-party providers in the financial services sector.

The post Hack of SitusAMC Puts Data of Financial Services Firms at Risk appeared first on Security Boulevard.

The privacy nightmare of browser fingerprinting

24 November 2025 at 05:51

I suspect that many people who take an interest in Internet privacy don’t appreciate how hard it is to resist browser fingerprinting. Taking steps to reduce it leads to inconvenience and, with the present state of technology, even the most intrusive approaches are only partially effective. The data collected by fingerprinting is invisible to the user, and stored somewhere beyond the user’s reach.

On the other hand, browser fingerprinting produces only statistical results, and usually can’t be used to track or identify a user with certainty. The data it collects has a relatively short lifespan – days to weeks, not months or years. While it probably can be used for sinister purposes, my main concern is that it supports the intrusive, out-of-control online advertising industry, which has made a wasteland of the Internet.

↫ Kevin Boone
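To make concrete what a fingerprinting script actually collects, here is a minimal TypeScript sketch assuming standard browser APIs (my illustration, not from Boone's post). Real trackers combine many more signals, such as canvas rendering, installed fonts, and audio processing, but the principle is the same: combine individually common attributes and hash them into an opaque identifier.

// A crude browser fingerprint built from a few widely available signals.
async function crudeFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}`,
    String(screen.colorDepth),
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join("|");
  // Hash the combined signals so the tracker stores an opaque ID.
  // (crypto.subtle requires a secure context, i.e. HTTPS.)
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Nothing is stored client-side, which is why the result is "invisible
// to the user": the ID is simply recomputed on each visit.
crudeFingerprint().then((id) => console.log("fingerprint:", id));

Because many users share any given combination of these attributes, the resulting ID is probabilistic rather than certain, which is exactly why Boone describes fingerprinting as producing only statistical results; and because no cookie is involved, there is nothing for the user to delete.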

My view on this matter is probably a bit more extreme than some: I believe it should be illegal to track users for advertising purposes, because the data collected and the targeting it enables not only violate basic privacy rights enshrined in most constitutions, they also pose a massive danger in other ways. This very same targeting data is already being abused by totalitarian states to influence our politics, which has had disastrous results. Of course, our own democratic governments’ hands aren’t exactly clean either in this regard, as they increasingly want to use this data to stop “terrorists” and otherwise infringe on basic rights. Finally, any time such data ends up on the black market after data breaches, criminals, organised or otherwise, also get their hands on it.

I have no idea what such a ban should look like, or if it’s possible to implement one even remotely effectively. In the current political climate in many western countries, which are dominated by the wealthy few and corporate interests, even if such a ban were passed as lip service to concerned constituents, any fines or other deterrents would probably be far too low to make a difference anyway. As such, my desire to have targeted online advertising banned is mostly theory, not practice – further illustrated by the European Union caving like cowards on privacy to even the slightest bit of pressure.

The best I can do for now is not partake in this advertising hellhole. I disabled and removed all advertising from OSNews recently, and have always strongly advised everyone to use as many adblocking options as possible. At home we not only run a Pi-Hole to keep all of our devices safe, but also use a second layer of on-device adblockers, and I recommend the same setup to everyone.

U.S., International Partners Target Bulletproof Hosting Services

22 November 2025 at 22:36

Agencies in the U.S. and other countries have gone hard after bulletproof hosting (BPH) providers this month, including Media Land, Hypercore, and associated companies and individuals, while the Five Eyes threat intelligence alliance published BPH mitigation guidelines for ISPs, cloud providers, and network defenders.

The post U.S., International Partners Target Bulletproof Hosting Services appeared first on Security Boulevard.
