Chinese Whistleblower Living In US Is Being Hunted By Beijing With US Tech
Read more of this story at Slashdot.
Alpine Linux maintainer Ariadne Conill has published a very interesting blog post about the shortcomings of both sudo and doas, and offers a potential different way of achieving the same goals as those tools.
Systems built around identity-based access control tend to rely on ambient authority: policy is centralized, and errors in the policy configuration or bugs in the policy engine can allow attackers to make full use of that ambient authority. In the case of a SUID binary like doas or sudo, that means an attacker can obtain root access in the event of a bug or misconfiguration. What if there was a better way? Instead of thinking about privilege escalation as becoming root for a moment, what if it meant being handed a narrowly scoped capability, one with just enough authority to perform a specific action and nothing more? Enter the object-capability model.
↫ Ariadne Conill
To bring this approach to life, they created a tool called capsudo. Instead of temporarily changing your identity, capsudo can grant far more fine-grained capabilities that match the exact task you're trying to accomplish. As an example, Conill details mounting and unmounting: with capsudo, you can not only grant a user the ability to mount and unmount any device, but also allow the user to mount or unmount just one specific device. Another example given is how capsudo can be used to give a service account access to only those resources the account needs to perform its tasks.
Of course, Conill explains all of this way better than I ever could, with actual example commands and more details. Conill happens to be the same person who created Wayback, illustrating that they have a tendency to look at problems in a unique and interesting way. I'm not smart enough to determine whether this approach makes sense compared to sudo or doas, but the way it's described, it does feel like a superior, more secure solution.
Read more of this story at Slashdot.
In a nod to the evolving threat landscape that comes with cloud computing and AI, and to growing supply chain threats, Microsoft is broadening its bug bounty program to reward researchers who uncover threats to its users that come from third-party code, such as commercial and open source software.
The post Microsoft Expands its Bug Bounty Program to Include Third-Party Code appeared first on Security Boulevard.
As they work to fend off the rapidly expanding number of attempts by threat actors to exploit the dangerous React2Shell vulnerability, security teams are learning of two new flaws in React Server Components that could lead to denial-of-service attacks or the exposure of source code.
The post React Fixes Two New RSC Flaws as Security Teams Deal with React2Shell appeared first on Security Boulevard.
The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven't made trustworthy. We can't. And today's versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.
These aren't edge cases. They're the result of building AI systems without basic integrity controls. We're in the third leg of data security: the old CIA triad. We're good at availability and working on confidentiality, but we've never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.
The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.
To further development along these lines, I and others have proposed separating usersβ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.
What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.
First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others (emails, texts, social media posts) and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.
Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can't be tied to a single AI model. Our AI future will include many different models: some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.
Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.
Fourth, it would be under the user's fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a personβs data. And there are also write attacks, where adversaries add to or change a userβs data. Defending against both is critical; this all implies a complex and robust authentication system.
Sixth, and finally, it must be easy to use. If we're envisioning digital personal assistants for everybody, it can't require specialized security training to use properly.
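To make these requirements a little more concrete, here is a purely illustrative Python sketch of a personal data store with fine-grained, revocable grants and an audit log (the fourth and fifth requirements above). The class and method names are hypothetical; this is not part of Solid, the Human Context Protocol, or any existing product.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Illustrative only: a user-controlled store with scoped grants and an audit log."""
    records: dict = field(default_factory=dict)    # key -> value (emails, preferences, ...)
    grants: dict = field(default_factory=dict)     # token -> set of keys the holder may read
    audit_log: list = field(default_factory=list)  # (timestamp, requester, key, outcome)

    def put(self, key, value):
        self.records[key] = value

    def grant(self, keys):
        """Issue a narrowly scoped, revocable access token."""
        token = secrets.token_urlsafe(16)
        self.grants[token] = set(keys)
        return token

    def revoke(self, token):
        self.grants.pop(token, None)

    def read(self, token, key, requester="unknown"):
        """Only grant holders can read, and every access attempt is logged."""
        allowed = self.grants.get(token, set())
        ok = key in allowed
        self.audit_log.append((time.time(), requester, key, "allowed" if ok else "denied"))
        if not ok:
            raise PermissionError(f"{requester} may not read {key}")
        return self.records[key]

# Usage: grant an AI assistant access to calendar data only, then revoke it.
store = PersonalDataStore()
store.put("calendar", ["dentist 2pm"])
store.put("medical_history", ["..."])
token = store.grant(["calendar"])
print(store.read(token, "calendar", requester="assistant"))
store.revoke(token)
```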
Iβm not the first to suggest something like this. Researchers have proposed a βHuman Context Protocolβ (https://papers.ssrn.com/sol3/ papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Leeβs Solid protocol for distributed data ownership.
The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you canβt ignore one or the other.
Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https:// ieeexplore.ieee.org/document/ 10352412). When you own and control your data store with high integrity, AI canβt easily manipulate you because you see what data itβs using and can correct it. It canβt easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but itβs the only way we can have trustworthy AI assistants.
This essay was originally published in IEEE Security & Privacy.
Read more of this story at Slashdot.
Modern internet users navigate an increasingly fragmented digital ecosystem dominated by countless applications, services, brands and platforms. Engaging with online offerings often requires selecting and remembering passwords or taking other steps to verify and protect one's identity. However, following best practices has become incredibly challenging due to various factors. Identifying Digital Identity Management Problems in..
The post Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks appeared first on Security Boulevard.
Bad actors, ranging from nation-state groups to financially motivated cybercriminals across the globe, are targeting the maximum-severity but easily exploitable React2Shell flaw, with threat researchers seeing everything from probes and backdoors to botnets and cryptominers.
The post Attackers Worldwide are Zeroing In on React2Shell Vulnerability appeared first on Security Boulevard.
What is the Personal Data Protection Act (PDPA) of Thailand? The Personal Data Protection Act, B.E. 2562 (2019), often referred to by its acronym, PDPA, is Thailand's comprehensive data privacy and protection law. Enacted to safeguard the personal data of individuals, it is heavily influenced by international privacy standards, most notably the European Union's General […]
The post Thailand's Personal Data Protection Act appeared first on Centraleyes.
The post Thailand's Personal Data Protection Act appeared first on Security Boulevard.
The internet has become a vital tool for human connection, but it comes with its fair share of risks, with the biggest being your privacy and security. With the big tech giants hungry for every ounce of your data they can get and scammers looking to target you every day, you do need to take a few precautions to protect your online privacy and security. There's no foolproof approach to these two things, and unfortunately, the onus is on you to take care of your data.
Before you start looking for a VPN or ways to delete your online accounts, you should take a moment to understand your privacy and security needs. Once you do, it'll be a lot easier to take a few proactive steps to safeguard your privacy and security on the internet. Sadly, there's no "set it and forget it" solution for this, but I'm here to walk you through some useful hacks that can apply to whatever risks you might be facing.
When you install an app on your phone, you'll often be bombarded with pop-ups asking for permission to access your contacts, location, notifications, microphone, camera, and many other things. Some are necessary, while most are not. The formula I use is to deny every permission unless it's absolutely necessary to the app's core function. Similarly, when you're creating a profile anywhere online, you should avoid giving out any personal information unless it's absolutely necessary.
You don't have to use your legal name, real date of birth, or an email address with your real name on most apps you sign up for. Some sites also still use antiquated password recovery methods such as security questions that ask for your mother's maiden name. Even in these fields, you don't have to reveal the truth. Every bit of information that you put on the internet can potentially be exposed in a breach. It's best to use information that's either totally or partially fake to safeguard your privacy.
If your personal information is easily available on Google, and you want to get it removed, you can send Google a request to remove it. Check Google's support page for how to remove results to see specific instructions for your case. For most people, the simplest way to remove results about yourself is to go to Google's Results About You page, sign in, and follow the instructions on screen.
Most modern email services let you create unlimited aliases, which means that you don't need to reveal your primary email address each time you sign up for a new service. Instead of signing up with realemail@gmail.com, you can use something like realemail+sitename@gmail.com. Gmail lets you create unlimited aliases using this method, and you can use that to identify who leaked your data. If you suddenly start getting a barrage of spam to a particular alias, you'll know which site sold your data.
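If you want to keep those aliases consistent, a tiny script can generate them for you. Here's a minimal sketch in Python; the base address and site name are placeholders, and it assumes a mailbox that supports Gmail-style plus-addressing.

```python
def make_alias(base_email: str, site: str) -> str:
    """Build a per-site alias using plus-addressing (supported by Gmail and others)."""
    user, domain = base_email.split("@", 1)
    # Keep only characters that are safe in the local part of an address.
    tag = "".join(c for c in site.lower() if c.isalnum())
    return f"{user}+{tag}@{domain}"

# Usage: sign up for a hypothetical shopping site with a traceable alias.
print(make_alias("realemail@gmail.com", "Example Shop"))  # realemail+exampleshop@gmail.com
```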
When you take a photo, the file for it contains a lot of information about you. By default, all cameras will store EXIF (Exchangeable Image File Format) data, which logs when the photo was taken, which camera was used, and the photo settings. You should remove EXIF data from photos before posting them on the internet. If you're using a smartphone to take photos, it'll also log the location of each image, which can be used to track you. While social media sites may sometimes remove location and EXIF data from your pictures, you cannot always rely on these platforms to protect your privacy for you.
You should take a few steps to strip EXIF data before uploading images. The easiest way to get started is to disable location access for your phone's camera app. On both iPhone and Android, you can open the Settings app, navigate to privacy settings or permissions, and deny location access to Camera. This will mean that you won't be able to search for a location in your photos app and identify all photos taken there, and you'll also lose out on some fun automated slideshows that Apple and Google create. However, it also means that your privacy is protected. You can also use apps to quickly hide faces and anonymize metadata from photos.
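If you'd rather strip metadata yourself, one common approach is to copy just the pixel data into a fresh image so the EXIF block is left behind. Here's a rough sketch using the Pillow library; the file names are placeholders, and you should test it on copies of your photos first.

```python
from PIL import Image  # pip install pillow

def strip_exif(src_path: str, dst_path: str) -> None:
    """Re-save an image without its metadata by copying only the pixel data."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # pixels only, no EXIF carried over
        clean.save(dst_path)

strip_exif("vacation.jpg", "vacation_clean.jpg")
```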
While you're at it, don't forget that screenshots can also leak sensitive information about you. Some types of malware steal sensitive information from screenshots, so be sure to periodically delete those, too.
Nearly every single AI tool is mining your data to improve its services. Sometimes, this means it's using everything you type or upload. At other times, it could be using things you've written, photos or videos you've posted, or any other media you've ever uploaded to the internet, to train its AI models. There's not much you can do about mass data scraping off the internet, but you can and should be careful with your usage of AI tools. You can sometimes stop AI tools from perpetually using your data, but relying on these companies to honor those settings toggles is like relying on Meta to keep your data private. It's best to avoid revealing any personal information to any AI service, regardless of how strong a connection you feel with it. Just assume that anything you send to an AI service can, and probably will, be used to train AI models or even be sold to advertising companies.
Yes, big companies like Facebook or TikTok can track you even if you don't have an account with them. Data brokers collect vast troves of information about your internet visits, and sell it to advertisers or literally anyone who's willing to pay. To limit the damage, you can start by following Lifehacker's guide to blocking companies from tracking you online. Next, you can go ahead and opt out of data collection by data brokers. If that's not enough, you can also use services that remove your personal information from data broker sites.
Now, I'm sure some of you are thinking that using a VPN will protect you from most of the tracking on the internet. That may be true in some cases, but using a VPN 24/7 is not the right approach for most people. For starters, it just routes all your traffic via the VPN company's servers, which means that you need to place your trust in the company's promises not to log your information, and its ability to keep your data safe and private. It also won't protect you from the types of data leaks that might happen from, say, publicly posting photos tagged with location data.
Many VPN providers claim to be able to protect you, but there are downsides to consider. Some companies such as Mullvad and Proton VPN have earned a solid reputation for privacy, but using a VPN all the time can create more problems than it solves. Your internet speed slows down a lot, streaming services may not work properly, and lots of sites may not load at all because they block VPN IP addresses. In most cases, you'll probably be better off if you use adblockers and an encrypted DNS instead.
For most people, ad blockers are a good privacy tool. Even though Google is cracking down on ad blockers, there are ways to get around those restrictions. I highly recommend using uBlock Origin, which also has a mobile version now. Once you've settled on a good ad blocker, you should consider also using a good DNS service to filter out trackers, malware, and phishing sites on a network level.
Having a DNS service is like having a privacy filter for all your internet traffic, whether it's on your phone, laptop, or even your router. I've been using NextDNS for a few years, but you can also try AdGuard DNS or ControlD. All of these services have a generous free tier, but you can optionally pay a small annual fee for more features.
Almost all apps these days send telemetry data to remote servers. This isn't too much of a problem if you only use apps from trusted sources, and can help with things like automatic software updates. But malicious apps or even poorly managed ones may be more open with your data than you would like.
You can restrict some of that by using a good firewall app. This lets you monitor incoming and outgoing internet traffic from your device, and restrict devices from sending unwanted data to the internet. Blocking these requests can hamper some useful features, like those automatic app updates, but they can also stop apps from unnecessarily sending data to online servers. There are some great firewall apps for Mac and for Windows, and you should definitely consider using these for better online privacy.
I've probably said this a million times, but I will repeat my advice: use a good password manager. You may think it's a bit annoying, but this single step is the easiest way to greatly improve your security on the internet. Password managers can take the hassle of remembering passwords away from you, and they'll also generate unique passwords that are hard to crack. Both Bitwarden and Apple Passwords (which ships with your Mac, iPhone, and iPad) are free to use, and excellent at their job. Go right ahead and start using them today. I guarantee that you won't regret it.
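If you're curious what "unique passwords that are hard to crack" looks like in practice, here's a minimal sketch using Python's standard-library secrets module, which is roughly the kind of thing a password manager's generator does for you.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```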
The exploitation efforts by China-nexus groups and other bad actors against the critical and easily abused React2Shell flaw in the popular React and Next.js software accelerated over the weekend, with threats ranging from credential theft and initial access to downloaders, cryptominers, and the NoodleRat backdoor.
The post Exploitation Efforts Against Critical React2Shell Flaw Accelerate appeared first on Security Boulevard.
The FBI has warned about a new type of scam where your Facebook pictures are harvested to act as "proof-of-life" pictures in a virtual kidnapping.
The scammers pretend they have kidnapped somebody and contact friends and next of kin to demand a ransom for their release. While the alleged victim is really just going about their normal day, criminals show the family real Facebook photos to "prove" that person is still alive but in their custody.
This attack resembles Facebook cloning but with a darker twist. Instead of just impersonating you to scam your friends, attackers weaponize your pictures to stage fake proof-of-life evidence.
Both scams feed on oversharing. Public posts give criminals more than enough information to impersonate you, copy your life, and convince your loved ones something is wrong.
This alert focuses on criminals scraping photos from social media (usually Facebook, but also LinkedIn, X, or any public profile), then manipulating those images with AI or simple editing to use during extortion attempts. If you know what to look for, you might spot inconsistencies like missing tattoos, unusual lighting, or proportions that don't quite match.
Scammers rely on panic. They push tight deadlines, threaten violence, and try to force split-second decisions. That emotional pressure is part of their playbook.
In recent years, the FBI has also warned about synthetic media and deepfakes, like explicit images generated from benign photos and then used for sextortion, which is a closely related pattern of abuse of user-posted pictures. Together, these warnings point to a trend: ordinary profile photos, holiday snaps, and professional headshots are increasingly weaponized for extortion rather than classic account hacking.
To make it harder for criminals to use these tactics, be mindful of what information you share on social media. Share pictures of yourself, or your children, only with actual friends and not for the whole world to find. And when you're travelling, post the beautiful pictures you have taken when you're back, not while you're away from home.
Facebook's built-in privacy tool lets you quickly adjust:
If you're on the receiving end of a virtual kidnapping attempt:
We don't just report on threats; we help protect your social media
Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.
The Washington Post last month reported it was among a list of data breach victims of the Oracle EBS-related vulnerabilities, with a threat actor compromising the data of more than 9,700 former and current employees and contractors. Now, a former worker is launching a class-action lawsuit against the Post, claiming inadequate security.
The post Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach appeared first on Security Boulevard.
Chinese-sponsored groups are using the popular Brickstorm backdoor to access and gain persistence in government and tech firm networks, part of the ongoing effort by the PRC to establish long-term footholds in agency and critical infrastructure IT environments, according to a report by U.S. and Canadian security offices.
The post China Hackers Using Brickstorm Backdoor to Target Government, IT Entities appeared first on Security Boulevard.
Manually or automatically wiping your browsing history is a well-established way of protecting your privacy and making sure the digital trail you leave behind is as short as possible, but it's important to be aware of the limitations of the process, and to understand why deleting your browsing history isn't always as comprehensive an act as you might think.
In short, the records of where you've been aren't only kept on your local computer or on your phone; they're found in various other places too. This is why fully wiping away your browsing history is more difficult than it initially appears.
Just about every modern browser can now sync your browsing history across devices, from laptop to mobile and back again. There are benefits to this (being able to continue your browsing on a different device, for example), but it means that deleting the list of websites you've visited on one device won't necessarily clear it everywhere.
Consider Apple's Safari, which by default will sync your online history, bookmarks, and open tabs between all of the iPhones, iPads, and Macs using the same Apple account. You can manage this by selecting your account name and then iCloud in Settings on iOS/iPadOS or in System Settings on macOS.
Whether or not Safari syncing is enabled through iCloud will affect how browsing history is deleted: when you try to delete this history on mobile or desktop, you'll see a message telling you what will happen on your other devices. In Safari on a Mac, choose History > Clear History; on an iPhone or iPad, choose Apps > Safari > Clear History and Website Data from Settings.
Most other browsers work in the same way, with options for both syncing history and deleting history. In Chrome on the desktop, for example, open Settings via the three-dot menu (top right): You can manage syncing via You and Google > Sync and Google Services > Manage what you sync, and clearing your history via Privacy and security > Delete browsing data.
Aside from all the history your actual web browser is collecting, you also need to think about the data being vacuumed up by the apps and websites you're using. If you log into Facebook, Meta will know about the comments you've left and the photos you've liked, no matter how much you scrub your history from Edge or Firefox.
How much you can do about this really depends on the app or site. Amazon lets you clear your search history, for example: On the desktop site, click Browsing History on the toolbar at the top, then click the gear icon (top right). The next screen lets you delete all or some of your browsing history, and block future tracking, though you won't be able to reorder items as easily, and your recommendations will be affected.
Meta lets you clear your Instagram and Facebook search history, at least: You can take care of both from the Meta Accounts Center page in a desktop browser. Click Your information and permissions then Search history to look back at what you've been searching for. The next screen gives you options for manually and automatically wiping your search history.
Google runs a whole host of online apps as well as a web browser. You can manage all your Google data from one central point from your desktop browser: Your Google Account page. Click Data and privacy to see everything Google has collected on you, and click through on any activity type to manually delete records or set them up to be automatically deleted after a certain period of time.
The final place copies of your internet browsing history will be kept is on the servers of your internet service provider; that is, whichever company you're paying for access to the internet is keeping logs of the places you've been, for all kinds of purposes (from security to advertising). And yes, this includes sites that you open while in incognito mode.
How this is handled varies from provider to provider. For example, AT&T's privacy notice states that the company will "automatically collect a variety of information", including "website and IP addresses," "videos watched," and "search terms entered." The company says this data will be kept "as long as we need it for business, tax, or legal purposes."
There's not a whole lot you can do about this either; it's a trade-off you have to make if you want access to the web. Some providers, including AT&T, will let you opt out of certain types of information sharing if you get in touch with them directly, but you can't prevent the tracking from happening in the first place.
What you can do is mask your browsing with a VPN (Lifehacker has previously picked the best paid VPNs and the best free VPNs for you to try out). As all your internet traffic will be routed through the VPN's servers, your internet provider will no longer be able to see what you're doing. Your VPN provider will, howeverβso find one that you can trust, and which has a no-logs policy that's been verified by a third-party security auditor.
A new anonymous phone service allows you to sign up with just a zip code.
Security and developer teams are scrambling to address a highly critical security flaw in frameworks tied to the popular React JavaScript library. Not only is the vulnerability, which also is in the Next.js framework, easy to exploit, but React is widely used, including in 39% of cloud environments.
The post Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps appeared first on Security Boulevard.
A threat group dubbed ShadyPanda exploited traditional extension processes in browser marketplaces by uploading legitimate extensions and then quietly weaponizing them with malicious updates, infecting 4.3 million Chrome and Edge users with RCE malware and spyware.
The post ShadyPandaβs Years-Long Browser Hack Infected 4.3 Million Users appeared first on Security Boulevard.
Britain's data protection regulator issued 17 preliminary enforcement notices and sent warning letters to hundreds of website operators throughout 2025, a pressure campaign that brought 979 of the UK's top 1,000 websites into compliance with cookie consent rules and gave an estimated 40 million people (roughly 80% of UK internet users over age 14) greater control over how they are tracked for personalized advertising.
The Information Commissioner's Office announced Thursday that only 21 websites remain non-compliant, with enforcement action continuing against holdouts.
The campaign focused on three key compliance areas: whether non-essential advertising cookies were stored on users' devices before users could exercise choice to accept or reject them, whether rejecting cookies was as easy as accepting them, and whether any non-essential cookies were placed despite users not consenting.
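As a rough illustration of the first of those checks, the sketch below (Python, using the requests library) loads a page without interacting with any consent banner and lists the cookies set on that very first response. The URL is a placeholder, and deciding which of those cookies count as non-essential still takes human judgment.

```python
import requests  # pip install requests

def cookies_before_consent(url: str) -> list[tuple[str, str]]:
    """Fetch a page with no prior consent and report any cookies set immediately."""
    session = requests.Session()
    session.get(url, timeout=10)
    # Cookies present here were placed before the user had any chance to accept or reject.
    return [(c.name, c.domain) for c in session.cookies]

for name, domain in cookies_before_consent("https://example.com"):
    print(f"{name} set by {domain} before any consent choice")
```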
Of the 979 compliant sites, 415 passed testing without any intervention. The remaining 564 improved practices after initially failing, following direct engagement from the ICO. The regulator sent letters that underlined their compliance shortcomings, opened investigations when letters failed to produce changes, and issued preliminary enforcement notices in 17 cases.
"We set ourselves the goal of giving people more meaningful control over how they were tracked online by the end of 2025. I can confidently say that we have delivered on that promise," stated Tim Capel, Interim Executive Director of Regulatory Supervision.
The enforcement campaign began in January 2025 when the ICO assessed the top 200 UK websites and communicated concerns to 134 organizations. The regulator warned that uncontrolled tracking intrudes on private lives and can lead to harm, citing examples including gambling addicts targeted with betting ads due to browsing history or LGBTQ+ individuals altering online behavior for fear of unintended disclosure.
The ICO engaged with trade bodies representing the majority of industries appearing in the top 1,000 websites and consent management platforms providing solutions to nearly 80% of the top 500 websites. These platforms made significant changes to ensure cookie banner options they provide to customers are compliant by default.
The action secured significant improvements to user experiences online, including greater prevalence of "reject" options on cookie banners and lower prevalence of cookies being placed before consent was given or after it was refused.
The regulator identified four main problem areas during its review: deceptive or missing choice where selection is preset, uninformed choice through unclear options, undermined choice where sites fail to adhere to user preferences, and irrevocable choice where users cannot withdraw consent.
The ICO committed to ongoing monitoring, stating that websites brought into compliance should not revert to previously unlawful practices in the belief that violations will go undetected. "We will continue to monitor compliance and engage with industry to ensure they uphold their legal obligations, while also supporting innovation that respects people's privacy," Capel said.
Following consultation earlier in 2025, the regulator continues working with stakeholders to understand whether publishers could deliver privacy-friendly online advertising to users who have not granted consent where privacy risk remains low. The ICO works with government to explore how legislation could be amended to reinforce this approach, with the next update scheduled for 2026.
Under current regulations, violations can result in fines up to Β£500,000 under Privacy and Electronic Communications Regulations or up to Β£17.5 million or 4% of global turnover under UK GDPR. Beyond financial penalties, non-compliance risks reputational damage and loss of consumer trust as privacy-conscious users increasingly scrutinize data practices.
EFF intern Alexandra Halbeck contributed to this blog
When people talk to a chatbot, they often reveal highly personal information they wouldn't share with anyone else. Chat logs are digital repositories of our most sensitive and revealing information. They are also tempting targets for law enforcement, to which the U.S. Constitution gives only one answer: get a warrant.
AI companies have a responsibility to their users to make sure the warrant requirement is strictly followed, to resist unlawful bulk surveillance requests, and to be transparent with their users about the number of government requests they receive.
Tens of millions of people use chatbots to brainstorm, test ideas, and explore questions they might never post publicly or even admit to another person. Whether advisable or not, people also turn to consumer AI companies for medical information, financial advice, and even dating tips. These conversations reveal people's most sensitive information.
Without privacy protections, users would be chilled in their use of AI systems.
Consider the sensitivity of the following prompts: "how to get abortion pills," "how to protect myself at a protest," or "how to escape an abusive relationship." These exchanges can reveal everything from health status to political beliefs to private grief. A single chat thread can expose the kind of intimate detail once locked away in a handwritten diary.
Without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help.
Whether you draft an email, edit an online document, or ask a question to a chatbot, you have a reasonable expectation of privacy in that information. Chatbots may be a new technology, but the constitutional principle is old and clear. Before the government can rifle through your private thoughts stored on digital platforms, it must do what it has always been required to do: get a warrant.
For over a century, the Fourth Amendment has protected the content of private communications, such as letters, emails, and search engine prompts, from unreasonable government searches. AI prompts require the same constitutional protection.
This protection is not aspirational; it already exists. The Fourth Amendment draws a bright line around private communications: the government must show probable cause and obtain a particularized warrant before compelling a company to turn over your data. Companies like OpenAI acknowledge this warrant requirement explicitly, while others like Anthropic could stand to be more precise.
AI companies that create chatbots should commit to having your back and resisting unlawful bulk surveillance orders. A valid search warrant requires law enforcement to provide a judge with probable cause and to particularly describe the thing to be searched. This means that bulk surveillance orders often fail that test.
What do these overbroad orders look like? In the past decade or so, police have often sought "reverse" search warrants for user information held by technology companies. Rather than searching for one particular individual, police have demanded that companies rummage through their giant databases of personal data to help develop investigative leads. This has included "tower dumps" or "geofence warrants," in which police order a company to search all users' location data to identify anyone that's been near a particular place at a particular time. It has also included "keyword" warrants, which seek to identify any person who typed a particular phrase into a search engine. This could include a chilling keyword search for a well-known politician's name or busy street, or a geofence warrant near a protest or church.
Courts are beginning to rule that these broad demands are unconstitutional. And after years of complying, Google has finally made it technically difficult, if not impossible, to provide mass location data in response to a geofence warrant.
This is an old story: if a company stores a lot of data about its users, law enforcement (and private litigants) will eventually seek it out. Law enforcement is already demanding user data from AI chatbot companies, and it will only increase. These companies must be prepared for this onslaught, and they must commit to fighting to protect their users.
In addition to minimizing the amount of data accessible to law enforcement, they can start with three promises to their users. These aren't radical ideas. They are basic transparency and accountability standards to preserve user trust and to ensure constitutional rights keep pace with technology:

U.S. Customs and Border Protection (CBP), the Drug Enforcement Administration (DEA), and scores of state and local law enforcement agencies have installed a massive dragnet of automated license plate readers (ALPRs) in the US-Mexico borderlands.
In many cases, the agencies have gone out of their way to disguise the cameras from public view. And the problem is only going to get worse: as recently as July 2025, CBP put out a solicitation to purchase 100 more covert trail cameras with license plate-capture ability.
Last month, the Associated Press published an in-depth investigation into how agencies have deployed these systems and exploited this data to target drivers. But what do these cameras look like? Here's a guide to identifying ALPR systems when you're driving the open road along the border.
Special thanks to researcher Dugan Meyer and AZ Mirror's Jerod MacDonald-Evoy. All images by EFF and Meyer were taken within the last three years.
All land ports of entry have ALPR systems that collect all vehicles entering and exiting the country. They typically look like this:
ALPR systems at the Eagle Pass International Bridge Port of Entry. Source: EFF
Most interior checkpoints, which are anywhere from a few miles to more than 60 miles from the border, are also equipped with ALPR systems operated by CBP. However, the DEA operates a parallel system at most interior checkpoints in southern border states.
When it comes to checkpoints, here's the rule of thumb: If you're traveling away from the border, you are typically being captured by a CBP/Border Patrol system (Border Patrol is a sub-agency of CBP). If you're traveling toward the border, it is most likely a DEA system.
Here's a representative example of a CBP checkpoint camera system:
ALPR system at the Border Patrol checkpoint near Uvalde, Texas. Source: EFF
At a typical port of entry or checkpoint, each vehicle lane will have an ALPR system. We've even seen Border Patrol checkpoints that were temporarily closed continue to funnel people through these ALPR lanes, even though there was no one on hand to vet drivers face-to-face. According to CBP's Privacy Impact Assessments (2017, 2020), CBP keeps this data for 15 years, but agents can generally only search the most recent five years' worth of data.
The scanners were previously made by a company called Perceptics which was infamously hacked, leading to a breach of driver data. The systems have since been "modernized" (i.e. replaced) by SAIC.
Here's a close-up of the new systems:
Frontal ALPR camera at the checkpoint near Uvalde, Texas. Source: EFF
In 2024, the DEA announced plans to integrate port of entry ALPRs into its National License Plate Reader Program (NLPRP), which the agency says is a network of both DEA systems and external law enforcement ALPR systems that it uses to investigate crimes such as drug trafficking and bulk cash smuggling.
Again, if you're traveling towards the border and you pass a checkpoint, you're often captured by parallel DEA systems set up on the opposite side of the road. However, these systems have also been found installed on their own, away from checkpoints.
These are a major component of the DEA's NLPRP, which has a standard retention period of 90 days. This program dates back to at least 2010, according to records obtained by the ACLU.
Here is a typical DEA system that you will find installed near existing Border Patrol checkpoints:
DEA ALPR set-up in southern Arizona. Source: EFF
These are typically made by a different vendor, Selex ES, which also includes the brands ELSAG and Leonardo. Here is a close-up:
Close-up of a DEA camera near the Tohono O'odham Nation in Arizona. Source: EFF
As you drive along border highways, law enforcement agencies have disguised cameras in order to capture your movements.
The exact number of covert ALPRs at the border is unknown, but to date we have identified approximately 100 sites. We know CBP and DEA each operate covert ALPR systems, but it isn't always possible to know which agency operates any particular set-up.
Another rule of thumb: if a covert ALPR has a Motorola Solutions camera (formerly Vigilant Solutions) inside, it's likely a CBP system. If it has a Selex ES camera inside, then it is likely a DEA camera.
Here are examples of construction barrels with each kind of camera:
A covert ALPR with a Motorola Solutions ALPR camera near Calexico, Calif. Source: EFF
These are typically seen along the roadside, often in sets of three, but almost always connected to some sort of solar panel. They are often placed behind existing barriers.
A covert ALPR with a Selex ES camera in southern Arizona. Source: EFF
The DEA models are also found by the roadside, but they can also be found inside or near checkpoints.
If you're curious (as we were), here's what they look like inside, courtesy of the US Patent and Trademark Office:
Patent for portable covert license plate reader. Source: USPTO
In addition to orange construction barrels, agencies also conceal ALPRs in yellow sand barrels. For example, these can be found throughout southern Arizona, especially in the southeastern part of the state.
A covert ALPR system in Arizona. Source: EFF
Sometimes a speed trailer or signage trailer isn't there so much for safety as to conceal ALPR systems. Sometimes ALPRs are attached to nondescript trailers with no discernible purpose that you'd hardly notice by the side of the road.
It's important to note that it's difficult to know who these belong to, since they aren't often marked. We know that all levels of government, even in the interior of the country, have purchased these setups.
Here are some of the different flavors of ALPR trailers:
An ALPR speed trailer in Texas. Source: EFF
ALPR trailer in Southern California. Source: EFF
ALPR trailer in Southern California. Source: EFF
An ALPR unit in southern Arizona. Source: EFF
ALPR unit in southern Arizona. Source: EFF
A Jenoptik Vector ALPR trailer in La Joya, Texas. Source: EFF
One particularly worrisome version of an ALPR trailer is the Jenoptik Vector: at least two jurisdictions along the border have equipped these trailers not only with ALPR, but with TraffiCatch technology that gathers Bluetooth and Wi-Fi identifiers. This means that in addition to gathering plates, these devices would also document mobile devices, such as phones, laptops, and even vehicle entertainment systems.
Stationary or fixed ALPR is one of the more traditional ways of installing these systems. The cameras are placed on existing utility poles or other infrastructure or on poles installed by the ALPR vendor.Β
For example, here's a DEA system installed on a highway arch:
The lower set of ALPR cameras belong to the DEA. Source: Dugan Meyer CC BY
ALPR camera in Arizona. Source: Dugan Meyer CC BY
At the local level, thousands of cities around the United States have adopted fixed ALPR, with the company Flock Safety grabbing a huge chunk of the market over the last few years. County sheriffs and municipal police along the border have also embraced the trend, with many using funds earmarked for border security to purchase these systems. Flock allows these agencies to share with one another and contribute their ALPR scans to a national pool of data. As part of a pilot program, Border Patrol had access to this ALPR data for most of 2025.
A typical Flock Safety setup involves attaching cameras and solar panels to poles. For example:
Flock Safety ALPR poles installed just outside the Tohono O'odham Nation in Arizona. Source: EFF
A close-up of a Flock Safety camera in Douglas, Arizona. Source: EFF
We've also seen these camera poles placed outside the Santa Teresa Border Patrol station in New Mexico.
Flock may now be the most common provider nationwide, but it isn't the only player in the field. DHS recently released a market survey of 16 different vendors providing similar technology.
ALPR cameras can also be found attached to patrol cars. Here's an example of a Motorola Solutions ALPR attached to a Hidalgo County Constable vehicle in South Texas:
Mobile ALPR on a Hidalgo County Constable vehicle. Source: Weslaco Police Department
These allow officers not only to capture ALPR data in real time as they are driving along, but they will also receive an in-car alert when a scan matches a vehicle on a "hot list," the term for a list of plates that law enforcement has flagged for further investigation.Β
Here's another example:Β
Mobile ALPR in La Mesa, Calif. Source: La Mesa Police Department Facebook page
EFF has been documenting the wide variety of technologies deployed at the border, including surveillance towers, aerostats, and trail cameras. To learn more, download EFF's zine, "Surveillance Technology at the US-Mexico Border" and explore our map of border surveillance, which includes Google Streetview links so you can see exactly how each installation looks on the ground. Currently we have mapped out most DEA and CBP checkpoint ALPR setups, with covert cameras planned for addition in the near future.

This week on the Lock and Code podcast…
It's often said online that if a product is free, you're the product, but what if that bargain were no longer true? What if, depending on the device you paid hard-earned money for, you still became a product yourself, to be measured, anonymized, collated, shared, or sold, often away from view?
In 2024, a consumer rights group out of the UK teased this new reality when it published research into whether people's air fryers (seriously) might be spying on them.
By analyzing the associated Android apps for three separate air fryer models from three different companies, researchers learned that these kitchen devices didn't just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries; they also wanted a lot of user data, from precise location to voice recordings from a user's phone.
As the researchers wrote:
βIn the air fryer category, as well as knowing customersβ precise location, all three products wanted permission to record audio on the userβs phone, for no specified reason.β
Bizarrely, these types of data requests are far from rare.
Today, on the Lock and Code podcast, we revisit a 2024 episode in which host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.
These stories aren't about mass government surveillance, and they're not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.
Tune in today to listen to the full conversation.
Show notes and credits:
Intro Music: "Spellbound" by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: "Good God" by Wowa (unminus.com)
Listen up: Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.
The internet is many things, but for many of us, it is far from private. By choosing to engage with the digital world, you often must give up your anonymity: trackers watch your every move as you surf the web and scroll on social media sites, and they use that information to build profiles of who (and where) you are and deliver you more "relevant" ads.
It doesn't have to be this way. There are a number of tactics that can help keep your browsing private. You can use a VPN to make it look like your internet activity is coming from somewhere else; if you use Safari, you can take advantage of Private Relay to hide your IP address from websites you visit; or, you can connect the internet across a different network altogether: Tor.
The whole idea behind Tor (which is short for The Onion Router) is to anonymize your internet browsing so that no one can tell that it is you visiting any particular website. Tor started out as a project of the U.S. Naval Research Lab in the 1990s, but developed into a nonprofit organization in 2006. Ever since, the network has been popular with users who want to privatize their web activity, whether they're citizens of countries with strict censorship laws, journalists working on sensitive stories, or simply privacy-focused individuals.
Tor is a network, but it's commonly conflated with the project's official browser, also known as Tor. The Tor Browser is a modified version of Firefox that connects to the Tor network. The browser removes many of the technical barriers to entry for the Tor network: You can still visit your desired URLs as you would in Chrome or Edge, but the browser will connect you to them via the Tor network automatically. But what does that mean?
Traditionally, when you visit a website, your data is sent directly to that site, complete with your identifying information (i.e. your device's IP address). That website, your internet service provider, and any other entities that might be privy to your internet traffic can all see that it is your device making the request, and can collect that information accordingly. This can be as innocent as the website in question storing your details for your next visit, or as scummy as the site following you around the internet.
Tor flips the script on this internet browsing model. Rather than connect your device directly to the website you're visiting, Tor runs your connection through a number of different servers, known as "nodes." These nodes are hosted by volunteers all over the world, so there's no telling which nodes your request will go through when you initiate a connection.
But Tor would not be known for its privacy if it only relied on multiple nodes to bounce your traffic around. In addition to the nodes, Tor adds layers of encryption to your request. When the request passes from one node to another, each node is only able to decrypt one layer of the encryption, just enough to learn where to send the next request. This method ensures that no one node in the system knows too much: Each only knows where the request came from one step before, and where it is sending the request in the following step. It's like peeling back layers of an onion, hence the platform's name.
Here's a simplified example of how it works: Let's say you want to visit Lifehacker.com through Tor. You initiate the request as you normally would, by typing the URL into Tor's address bar and hitting enter. When you do, Tor adds layered encryption to your request. The first node it sends it to, perhaps based in, say, the U.S., can unlock one layer of that encryption, which tells it which node to send the request to next. The next node, based perhaps in Japan, decrypts another layer, which tells it to send the request to a third node in Germany. That third node (known as the exit node) decrypts the final layer of encryption, which tells it to connect to Lifehacker.com. Once Lifehacker receives the request, the reverse happens: Lifehacker sends its response to the node in Germany, which adds back its layer of encryption. That node sends the response to the node in Japan, which adds a second layer of encryption, and then on to the node in the U.S., which adds the final layer, before the fully encrypted response reaches your browser, which can decrypt the entire thing on your behalf. Congratulations: You have just visited Lifehacker.com, without revealing your identity.
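To make that layering concrete, here's a toy sketch in Python using the cryptography library's Fernet cipher: it wraps a request in three layers, one per node, and peels them off in order. Real Tor uses different primitives and a telescoping circuit handshake, so treat this purely as an illustration.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Each relay (guard, middle, exit) has its own key; the client knows all three.
node_keys = [Fernet(Fernet.generate_key()) for _ in range(3)]

# Client side: encrypt for the exit node first, so the guard's layer ends up outermost.
request = b"GET https://lifehacker.com"
onion = request
for node in reversed(node_keys):
    onion = node.encrypt(onion)

# Network side: each node strips exactly one layer and forwards what remains.
for i, node in enumerate(node_keys, start=1):
    onion = node.decrypt(onion)
    print(f"node {i} sees {len(onion)} bytes, not the original request" if i < 3
          else f"exit node recovers: {onion.decode()}")
```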
While Tor goes a long way to anonymizing your internet activity, it won't protect you entirely. One of the network's biggest weaknesses is in the exit node: Since the final node in the chain carries the decrypted request, it can see where you're going, and, potentially, what you're doing when you get there. It won't be able to know where the request originated, but it can see that you're trying to access Lifehacker. Depending on what sites you're accessing, you might give enough information away to reveal yourself.
This was especially an issue when websites were largely using the unencrypted HTTP protocol. If you connected to an unencrypted website, that final node might be able to see your activity on the site itself, including login information, messages, or financial data. But now that most sites have switched to the encrypted HTTPS protocol, there's less concern about third parties being able to access the contents of your traffic. Still, even if trackers can't see exactly what you're doing or saying on these sites, they can see you visited the site itself, which is why Tor is still useful in today's encrypted internet.
If you've heard anything about Tor, you might know it as the go-to service for accessing the dark web. That is true, but that doesn't make Tor bad. The dark web is not inherently bad, either: It's simply a network of sites that cannot be accessed by standard web browsers. That includes a number of very bad sites filled with very bad stuff, to be sure. But it also encompasses a number of perfectly legal activities as well. Chrome or Firefox cannot see dark web sites, but Tor browser can.
But you don't need to visit the dark web in order for Tor to be useful. Anyone who wants to keep their internet traffic private from the world can benefit. You might have a serious need for this, such as if you live in a country that won't let you access certain websites, or if you're a reporter working on a story that could have ramifications should the information leak. But you don't need to have a specialized case to benefit. Tor can help reduce anyone's digital footprint, and keep trackers from following you around the internet.
If you do decide to use Tor, understand that it won't be as fast as other modern browsers. Running your traffic through multiple international nodes takes a toll on performance, so you may be waiting a bit longer for your websites to load than you're used to. However, it won't cost you anything to try it, as the browser is completely free to download and use on Mac, Windows, Linux, and Android. (Sorry, iOS fans.) If you're worried about what you've heard about the dark web, don't be: The only way to access that material is to seek it out directly. Otherwise, using Tor will feel just like using any other browser, albeit just a tad slower.
In his 2018 book, "Future Politics," British barrister Jamie Susskind wrote that the dominant question of the 20th century was "How much of our collective life should be determined by the state, and what should be left to the market and civil society?" But in the early decades of this century, Susskind suggested that we face a different question: "To what extent should our lives be directed and controlled by powerful digital systems—and on what terms?"
Artificial intelligence (AI) forces us to confront this question. It is a technology that in theory amplifies the power of its users: A manager, marketer, political campaigner, or opinionated internet user can utter a single instruction, and see their messageβwhatever it isβinstantly written, personalized, and propagated via email, text, social, or other channels to thousands of people within their organization, or millions around the world. It also allows us to individualize solicitations for political donations, elaborate a grievance into a well-articulated policy position, or tailor a persuasive argument to an identity group, or even a single person.
But even as it offers endless potential, AI is a technology thatβlike the stateβgives others new powers to control our lives and experiences.
We've seen this play out before. Social media companies made the same sorts of promises 20 years ago: instant communication enabling individual connection at massive scale. Fast-forward to today, and the technology that was supposed to give individuals power and influence ended up controlling us. Today social media dominates our time and attention, assaults our mental health, and—together with its Big Tech parent companies—captures an unfathomable fraction of our economy, even as it poses risks to our democracy.
The novelty and potential of social media were as present then as they are for AI now, which should make us wary of AI's potential harmful consequences for society and democracy. We legitimately fear artificial voices and manufactured reality drowning out real people on the internet: on social media, in chat rooms, everywhere we might try to connect with others.
It doesnβt have to be that way. Alongside these evident risks, AI has legitimate potential to transform both everyday life and democratic governance in positive ways. In our new book, βRewiring Democracy,β we chronicle examples from around the globe of democracies using AI to make regulatory enforcement more efficient, catch tax cheats, speed up judicial processes, synthesize input from constituents to legislatures, and much more. Because democracies distribute power across institutions and individuals, making the right choices about how to shape AI and its uses requires both clarity and alignment across society.
To that end, we spotlight four pivotal choices facing private and public actors. These choices are similar to those we faced during the advent of social media, and in retrospect we can see that we made the wrong decisions back then. Our collective choices in 2025βchoices made by tech CEOs, politicians, and citizens alikeβmay dictate whether AI is applied to positive and pro-democratic, or harmful and civically destructive, ends.
The Federal Election Commission (FEC) calls it fraud when a candidate hires an actor to impersonate their opponent. More recently, they had to decide whether doing the same thing with an AI deepfake makes it okay. (They concluded it does not.) Although in this case the FEC made the right decision, this is just one example of how AIs could skirt laws that govern people.
Likewise, courts are having to decide if and when it is okay for an AI to reuse creative materials without compensation or attribution, which might constitute plagiarism or copyright infringement if carried out by a human. (The court outcomes so far are mixed.) Courts are also adjudicating whether corporations are responsible for upholding promises made by AI customer service representatives. (In the case of Air Canada, the answer was yes, and insurers have started covering the liability.)
Social media companies faced many of the same hazards decades ago and have largely been shielded by the combination of Section 230 of the Communications Decency Act of 1996 and the safe harbor offered by the Digital Millennium Copyright Act of 1998. Even in the absence of congressional action to strengthen or add rigor to this law, the Federal Communications Commission (FCC) and the Supreme Court could take action to enhance its effects and to clarify which humans are responsible when technology is used, in effect, to bypass existing law.
As AI-enabled products increasingly ask Americans to share yet more of their personal informationβtheir βcontextββto use digital services like personal assistants, safeguarding the interests of the American consumer should be a bipartisan cause in Congress.
It has been nearly 10 years since Europe adopted comprehensive data privacy regulation. Today, American companies exert massive efforts to limit data collection, acquire consent for use of data, and hold it confidential under significant financial penaltiesβbut only for their customers and users in the EU.
Regardless, a decade later the U.S. has still failed to make progress on any serious attempt at comprehensive federal privacy legislation written for the 21st century, and the precious few data privacy protections that do exist apply only to narrow slices of the economy and population. This inaction comes in spite of scandal after scandal regarding Big Tech corporations' irresponsible and harmful use of our personal data: Oracle's data profiling, Facebook and Cambridge Analytica, Google ignoring data privacy opt-out requests, and many more.
Privacy is just one side of the obligations AI companies should have with respect to our data; the other side is portabilityβthat is, the ability for individuals to choose to migrate and share their data between consumer tools and technology systems. To the extent that knowing our personal context really does enable better and more personalized AI services, itβs critical that consumers have the ability to extract and migrate their personal context between AI solutions. Consumers should own their own data, and with that ownership should come explicit control over who and what platforms it is shared with, as well as withheld from. Regulators could mandate this interoperability. Otherwise, users are locked in and lack freedom of choice between competing AI solutionsβmuch like the time invested to build a following on a social network has locked many users to those platforms.
It has become increasingly clear that social media is not a town square in the utopian sense of an open and protected public forum where political ideas are distributed and debated in good faith. If anything, social media has coarsened and degraded our public discourse. Meanwhile, the sole act of Congress designed to substantially rein in the social and political effects of social media platforms—the TikTok ban, which aimed to protect the American public from Chinese influence and data collection, citing it as a national security threat—is one it seems to no longer even acknowledge.
While Congress has waffled, regulation in the U.S. is happening at the state level. Several states have limited children's and teens' access to social media. With Congress having rejected—for now—a threatened federal moratorium on state-level regulation of AI, California passed a new slate of AI regulations after weathering a lobbying onslaught from industry opponents. Perhaps most interestingly, Maryland has recently become the first state in the nation to levy taxes on digital advertising platform companies.
States now face a choice of whether to apply a similar reparative tax to AI companies to recapture a fraction of the costs they externalize on the public and use it to fund affected public services. State legislators concerned with the potential loss of jobs, cheating in schools, and harm to those with mental health concerns caused by AI have options to combat these harms. They could extract the funding needed to mitigate them and support public services—strengthening job training programs and public employment, public schools, public health services, even public media and technology.
A pivotal moment in the social media timeline occurred in 2006, when Facebook opened its service to the public after years of catering to students of select universities. Millions quickly signed up for a free service where the only source of monetization was the extraction of their attention and personal data.
Today, about half of Americans are daily users of AI, mostly via free products from Facebookβs parent company Meta and a handful of other familiar Big Tech giants and venture-backed tech firms such as Google, Microsoft, OpenAI, and Anthropicβwith every incentive to follow the same path as the social platforms.
But now, as then, there are alternatives. Some nonprofit initiatives are building open-source AI tools that have transparent foundations and can be run locally and under usersβ control, like AllenAI and EleutherAI. Some governments, like Singapore, Indonesia, and Switzerland, are building public alternatives to corporate AI that donβt suffer from the perverse incentives introduced by the profit motive of private entities.
Just as social media users have faced platform choices with a range of value propositions and ideological valencesβas diverse as X, Bluesky, and Mastodonβthe same will increasingly be true of AI. Those of us who use AI products in our everyday lives as people, workers, and citizens may not have the same power as judges, lawmakers, and state officials. But we can play a small role in influencing the broader AI ecosystem by demonstrating interest in and usage of these alternatives to Big AI. If youβre a regular user of commercial AI apps, consider trying the free-to-use service for Switzerlandβs public Apertus model.
None of these choices are really new. They were all present almost 20 years ago, as social media moved from niche to mainstream. They were all policy debates we did not have, choosing instead to view these technologies through rose-colored glasses. Today, though, we can choose a different path and realize a different future. It is critical that we intentionally navigate a path to a positive future for societal use of AIβbefore the consolidation of power renders it too late to do so.
This post was written with Nathan E. Sanders, and originally appeared in Lawfare.
This is crazy. Lawmakers in several US states are contemplating banning VPNs, becauseβ¦think of the children!
As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of "protecting children" in A.B. 105/S.B. 130. It's an age verification bill that requires all websites distributing material that could conceivably be deemed "sexual content" both to implement an age verification system and to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are "harmful to minors" beyond the type of speech that states can prohibit minors from accessing, potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.
The EFF link explains why this is a terrible idea.
The Cybersecurity Coalition, an industry group of almost a dozen vendors, is urging the Trump Administration and Congress, now that the government shutdown is over, to take a number of steps to strengthen the country's cybersecurity posture as China, Russia, and other foreign adversaries accelerate their attacks.
The post Cybersecurity Coalition to Government: Shutdown is Over, Get to Work appeared first on Security Boulevard.
In late September, the United Kingdomβs Prime Minister Keir Starmer announced his governmentβs plans to introduce a new digital ID scheme in the country to take effect before the end of the Parliament (no later than August 2029). The scheme will, according to the Prime Minister, βcut the faffβ in proving peopleβs identities by creating a virtual ID on personal devices with information like peopleβs name, date of birth, nationality or residency status, and photo to verify their right to live and work in the country.Β
This is the latest example of a government creating a new digital system that is fundamentally incompatible with a privacy-protecting and human rights-defending democracy. This past year alone, weβve seen federal agencies across the United States explore digital IDs to prevent fraud, the Transportation Security Administration accepting βDigital passport IDsβ in Android, and states contracting with mobile driverβs license providers (mDL). And as weβve said many times, digital ID is not for everyone and policymakers should ensure better access for people with or without a digital ID.Β
But instead, the UK is pushing forward with its plans to roll out digital ID in the country. Here are three reasons why those policymakers have it wrong.
Digital ID allows the state to determine what you can access, not just verify who you are, by functioning as a key to openingβor closingβdoors to essential services and experiences.Β
In his initial announcement, Starmer stated: βYou will not be able to work in the United Kingdom if you do not have digital ID. It's as simple as that.β Since then, the government has been forced to clarify those remarks: digital ID will be mandatory to prove the right to work, and will only take effect after the scheme's proposed introduction in 2028, rather than retrospectively.Β
The government has also confirmed that digital ID will not be required for pensioners, students, and those not seeking employment, and will also not be mandatory for accessing medical services, such as visiting hospitals. But as civil society organizations are warning, it's possible that the required use of digital ID will not end here. Once this data is collected and stored, it provides a multitude of opportunities for government agencies to expand the scenarios where they demand that you prove your identity before entering physical and digital spaces or accessing goods and services.Β
The government may also be able to request information from workplaces on who is registering for employment at that location, or collaborate with banks to aggregate different data points to determine who is self-employed or not registered to work. It potentially leads to situations where state authorities can treat the entire population with suspicion of not belonging, and would shift the power dynamics even further towards government control over our freedom of movement and association.Β
And this is not the first time that the UK has attempted to introduce digital ID: politicians previously proposed similar schemes intended to control the spread of COVID-19, limit immigration, and fight terrorism. In a country increasing the deployment of other surveillance technologies like face recognition technology, this raises additional concerns about how digital ID could lead to new divisions and inequalities based on the data obtained by the system.Β
These concerns compound the underlying narrative that digital ID is being introduced to curb illegal immigration to the UK: that digital ID would make it harder for people without residency status to work in the country because it would lower the possibility that anyone could borrow or steal the identity of another. Not only is there little evidence that digital ID will limit illegal immigration, but checks on the right to work in the UK already exist. The narrative is inflammatory and misleading; as Liberal Democrat leader Ed Davey noted, it would do "next to nothing to tackle channel crossings."
While the government announced that their digital ID scheme will be inclusive enough to work for those without access to a passport, reliable internet, or a personal smartphone, as we've been saying for years, digital ID leaves vulnerable and marginalized people not only out of the debate but ultimately out of the society that these governments want to build. We remain concerned about the potential for digital identification to exacerbate existing social inequalities, particularly for those with reduced access to digital services or people seeking asylum.
The UK government has said a public consultation will be launched later this year to explore alternatives, such as physical documentation or in-person support for the homeless and older people; but it's short-sighted to think that these alternatives are viable or functional in the long term. For example, UK organization Big Brother Watch reported that only about 20% of Universal Credit applicants can use online ID verification methods.
These individuals should not be an afterthought attached to the end of an announcement for further review. If a tool does not work for those without access to basic essentials, such as the internet or a physical ID, then it should not exist.
Digital ID schemes also exacerbate other inequalities in society: abusers, for example, will be able to prevent others from getting jobs or proving other statuses by denying access to their ID. In the same way, the scope of digital ID may be expanded and people could be forced to prove their identities to different government agencies and officials, which may raise issues of institutional discrimination when phones fail to load, or when the Home Office has incorrect information on an individual. This is not an unrealistic scenario considering the frequency of internet connectivity issues, or circumstances like passports and other documentation expiring.
Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID.
Digital ID systems expand the number of entities that may access personal information and consequently use it to track and surveil. The UK government has nodded to this threat. Starmer stated that the technology would βabsolutely have very strong encryptionβ and wouldn't be used as a surveillance tool. Moreover, junior Cabinet Office Minister Josh Simons told Parliament that βdata associated with the digital ID system will be held and kept safe in secure cloud environments hosted in the United Kingdomβ and that βthe government will work closely with expert stakeholders to make the programme effective, secure and inclusive.βΒ
But if digital ID is needed to verify people's identities multiple times per day or week, ensuring end-to-end encryption is the bare minimum the government should require. Unlike sharing a National Insurance Number, a digital ID will expose an array of personal information that would otherwise not be available or exchanged.
This would create a rich environment for hackers or hostile agencies to obtain swathes of personal information on those based in the UK. And if previous schemes in the country are anything to go by, the government's ability to handle giant databases is questionable. Notably, the eVisa's multitude of failures last year illustrated the harms that digital IDs can bring, with issues like government system failures and internet outages leading to people being detained, losing their jobs, or being made homeless. Checking someone's identity against a database in real time requires a host of online and offline factors to work, and the UK has yet to take the structural steps required to remedy this.
Moreover, we know that the Cabinet Office and the Department for Science, Innovation and Technology will be involved in the delivery of digital ID and are clients of U.S.-based tech vendors, specifically Amazon Web Services (AWS). The UK government has spent millions on AWS (and Microsoft) cloud services in recent years, and the One Government Value Agreement (OGVA)—first introduced in 2020, which provides discounts for cloud services by contracting with the UK government and public sector organizations as a single client—is still active. It is essential that any data collected is not stored or shared with third parties, including through cloud agreements with companies outside the UK.
And even if the UK government published comprehensive plans to ensure data minimization in its digital ID, we will still strongly oppose any national ID scheme. Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID, and both the public and civil society organizations in the country are against this.
Digital ID regimes strip privacy from everyone and further marginalize those seeking asylum or undocumented people. They are pursued as a technological solution to offline problems but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to openingβor closingβdoors to essential services and experiences.Β
We cannot base our human rights on the governmentβs mere promise to uphold them. On December 8th, politicians in the country will be debating a petition that reached almost 3 million signatories rejecting mandatory digital ID. If youβre based in the UK, you can contact your MP (external campaign links) to oppose the plans for a digital ID system.Β
The case for digital identification has not been made. The UK government must listen to people in the country and say no to digital ID.

France's data protection authority discovered that when visitors clicked the button to reject cookies on Vanity Fair (vanityfair[.]fr), the website continued placing tracking technologies on their devices and reading existing cookies without consent, a violation that now costs publisher Les Publications CondΓ© Nast β¬750,000 in fines six years after privacy advocate NOYB first filed complaints against the media company.
The November 20 sanction by CNIL's restricted committee marks the latest enforcement action in France's aggressive campaign to enforce cookie consent requirements under the ePrivacy Directive.
NOYB, the European privacy advocacy organization led by Max Schrems, filed the original public complaint in December 2019 concerning cookies placed on user devices by the Vanity Fair France website. After multiple investigations and discussions with CNIL, CondΓ© Nast received a formal compliance order in September 2021, with proceedings closed in July 2022 based on assurances of corrective action.
CNIL conducted follow-up online investigations in July and November 2023, then again in February 2025, discovering that the publisher had failed to implement compliant cookie practices despite the earlier compliance order. The restricted committee found Les Publications CondΓ© Nast violated obligations under Article 82 of France's Data Protection Act across multiple dimensions.
Investigators discovered cookies requiring consent were placed on visitors' devices as soon as they arrived on vanityfair.fr, even before users interacted with the information banner to express a choice. This automatic placement violated fundamental consent requirements mandating that tracking technologies only be deployed after users provide explicit permission.
The website lacked clarity in information provided to users about cookie purposes. Some cookies appeared categorized as "strictly necessary" and therefore exempt from consent obligations, but useful information about their actual purposes remained unavailable to visitors. This misclassification potentially allowed the publisher to deploy tracking technologies under false pretenses.
Most significantly, consent refusal and withdrawal mechanisms proved completely ineffective. When users clicked the "Refuse All" button in the banner or attempted to withdraw previously granted consent, new cookies subject to consent requirements were nevertheless placed on their devices while existing cookies continued being read.
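To make the regulator's expectation concrete, here is a minimal, hypothetical sketch in TypeScript of a consent-gated loader in which nothing non-essential runs before an explicit opt-in, and a refusal both blocks new trackers and clears existing ones. The cookie name, the list of tracking cookies, and the script URL are all invented for illustration; real consent-management platforms are far more involved.

```typescript
// Hypothetical consent-gated tracker loader; names and URLs are illustrative.
type Consent = "accepted" | "refused" | null;

function readConsent(): Consent {
  const match = document.cookie.match(/(?:^|; )consent=(accepted|refused)/);
  return match ? (match[1] as Consent) : null;
}

function loadTrackers(): void {
  // Nothing non-essential runs until this is called after an explicit opt-in.
  const script = document.createElement("script");
  script.src = "https://analytics.example/collect.js"; // placeholder URL
  document.head.appendChild(script);
}

function clearTrackingCookies(): void {
  // Expire known non-essential cookies; a real implementation would enumerate
  // everything its vendors set, not just these two examples.
  for (const name of ["_ga", "_fbp"]) {
    document.cookie = `${name}=; Max-Age=0; path=/`;
  }
}

function onBannerChoice(choice: "accepted" | "refused"): void {
  // Record the choice, then act on it: accept loads trackers, refuse removes them.
  document.cookie = `consent=${choice}; Max-Age=${60 * 60 * 24 * 180}; path=/`;
  if (choice === "accepted") loadTrackers();
  else clearTrackingCookies();
}

// On page load, do nothing until the visitor has actually accepted.
if (readConsent() === "accepted") loadTrackers();
```

The behavior CNIL sanctioned was the inverse of this sketch: trackers ran before any choice was made, and the refusal path neither blocked new cookies nor stopped existing ones from being read.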
The fine amount takes into account that CondΓ© Nast had already been issued a formal notice in 2021 but failed to correct its practices, along with the number of people affected and various breaches of rules protecting users regarding cookies.
The CNIL fine represents another in a series of NOYB-related enforcement actions, with the French authority previously fining Criteo β¬40 million in 2023 and Google β¬325 million earlier in 2025. Spain's AEPD issued a β¬100,000 fine against Euskaltel in related NOYB litigation.
According to reports, Condé Nast acknowledged violations in its defense but cited technical errors, blamed the Interactive Advertising Bureau's Transparency and Consent Framework for misleading information, and stated the cookies in question fall under the functionality category. The company claimed good faith and cooperative efforts while arguing against public disclosure of the sanction.
French enforcement demonstrates the ePrivacy Directive's teeth in protecting user privacy. CNIL maintains material jurisdiction to investigate and sanction cookie operations affecting French users, with the GDPR's one-stop-shop mechanism not applying since cookie enforcement falls under separate ePrivacy rules transposed into French law.
The authority has intensified actions against dark patterns in consent mechanisms, particularly practices making cookie acceptance easier than refusal. Previous CNIL decisions against Google and Facebook established that websites offering immediate "Accept All" buttons must provide equivalent simple mechanisms for refusing cookies, with multiple clicks to refuse constituting non-compliance.
The six-year timeline from initial complaint to final sanction illustrates both the persistence required in privacy enforcement and the extended timeframes companies can exploit while maintaining non-compliant practices that generate advertising revenue through unauthorized user tracking.
Legal Intern Alexandra Rhodes contributed to this blog post.Β
EFF filed an amicus brief urging the Arizona District Court to protect public school studentsβ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is βon campus.β We argued that students need private digital spaces beyond their schoolβs reach to speak freely, without the specter of constant school surveillance and punishment.Β Β
The case, Merrill v. Marana Unified School District, involves a Marana High School student who, while at home one morning before school started, asked his mother for advice about a bad grade he received on an English assignment. His mother said he should talk to his English teacher, so he opened his school-issued Google Chromebook and started drafting an email. The student then wrote a series of jokes in the draft email that he deleted each time. The last joke stated: βGANG GANG GIMME A BETTER GRADE OR I SHOOT UP DA SKOOL HOMIE,β which he narrated out loud to his mother in a silly voice before deleting the draft and closing his computer.Β Β
Within the hour, the studentβs mother received a phone call from the school principal, who said that Gaggle surveillance software had flagged a threat from her son and had sent along the screenshot of the draft email. The studentβs mother attempted to explain the situation and reassure the principal that there was no threat. Nevertheless, despite her reassurances and the studentβs lack of disciplinary record or history of violence, the student was ultimately suspended over the draft emailβeven though he was physically off campus at the time, before school hours, and had never sent the email.Β Β
After the studentβs suspension was unsuccessfully challenged, the family sued the school district alleging infringement of the studentβs right to free speech under the First Amendment and violation of the studentβs right to due process under the Fourteenth Amendment.Β
The U.S. Supreme Court has addressed the First Amendment rights of public school students in a handful of cases.
Most notably, in Tinker v. Des Moines Independent Community School District (1969), the Court held that students may not be punished for their on-campus speech unless the speech βmaterially and substantiallyβ disrupted the school day or invaded the rights of others.Β
Decades later, in Mahanoy Area School District v. B.L. by and through Levy (2021), in which EFF filed a brief, the Court further held that schools have less leeway to regulate student speech when that speech occurs off campus. Importantly, the Court stated that schools should have a limited ability to punish off-campus speech because βfrom the student speakerβs perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.βΒ
The Ninth Circuit has further held that off-campus speech is only punishable if it bears a βsufficient nexusβ to the school and poses a credible threat of violence.Β
In this case, therefore, the extent of the school districtβs authority to regulate student speech is tied to whether the high schooler was on or off campus at the time of the speech. The student here was at home and thus physically off campus when he wrote the joke in question; he wrote the draft before school hours; and the joke was not emailed to anyone on campus or anyone associated with the campus.Β Β
Yet the school district is arguing that his use of a school-issued Google Chromebook and Google Workspace for Education account (including the email account) made his speechβand makes all student speechβautomatically βon campusβ for purposes of justifying punishment under the First Amendment.Β Β
EFF supports the plaintiffsβ argument that the studentβs speech was βoff campus,β did not bear a sufficient nexus to the school, and was not a credible threat. In our amicus brief, we urged the trial court at minimum to reject a rule that the use of a school-issued device or cloud account always makes a studentβs speech βon campus.βΒ Β Β
Our amicus brief supports the plaintiffsβ First Amendment arguments through the lens of surveillance, emphasizing that digital speech and digital privacy are inextricably linked.Β Β
As we explained, Marana Unified School District, like many schools and districts across the country, offers students free Google Chromebooks and requires them to have an online Google Account to access the various cloud apps in Google Workspace for Education, including the Gmail app.Β Β
Marana Unified School District also uses three surveillance technologies that are integrated into Chromebooks and Google Workspace for Education: Gaggle, GoGuardian, and Securly. These surveillance technologies collectively can monitor virtually everything students do on their laptops and online, from the emails and documents they write (or even just draft) to the websites they visit.Β Β
In our amicus brief, we made four main arguments against a blanket rule that categorizes any use of a school-issued device or cloud account as βon campus,β even if the student is geographically off campus or outside of school hours.Β Β
First, we pointed out that such a rule will result in students having no reprieve from school authority, which runs counter to the Supreme Courtβs admonition in Mahanoy not to regulate βall the speech a student utters during the full 24-hour day.β There must be some place that is βoff campusβ for public school students even when using digital tools provided by schools, otherwise schools will reach too far into studentsβ lives.Β Β
Second, we urged the court to reject such an βon campusβ rule to mitigate the chilling effect of digital surveillance on studentsβ freedom of speechβthat is, the risk that students will self-censor and choose not to express themselves in certain ways or access certain information that may be disfavored by school officials. If students know that no matter where they are or what they are doing with their Chromebooks and Google Accounts, the school is watching and the school has greater legal authority to punish them because they are always βon campus,β students will undoubtedly curb their speech.Β
Third, we argued that such an βon campusβ rule will exacerbate existing inequities in public schools among students of different socio-economic backgrounds. It would distinctly disadvantage lower-income students who are more likely to rely on school-issued devices because their families cannot afford a personal laptop or tablet. This creates a βpay for privacyβ scheme: lower-income students are subject to greater school-directed surveillance and related discipline for digital speech, while wealthier students can limit surveillance by using personal laptops and email accounts, enabling them to have more robust free speech protections.Β
Fourth, such an βon campusβ rule will incentivize public schools to continue eroding student privacy by subjecting them to near constant digital surveillance. The student surveillance technologies schools use are notoriously privacy invasive and inaccurate, causing various harms to studentsβincluding unnecessary investigations and discipline, disclosure of sensitive information, and frustrated learning.Β
We urge the Arizona District Court to protect public school studentsβ freedom of speech and privacy by rejecting this approach to school-managed technology. As we said in our brief, students, especially high schoolers, need some sphere of digital autonomy, free of surveillance, judgment, and punishment, as much as anyone elseβto express themselves, to develop their identities, to learn and explore, to be silly or crude, and even to make mistakes. Β

The FBI says that account takeover scams this year have resulted in 5,100-plus complaints in the U.S. and $262 million stolen, and Bitdefender says the combination of the growing number of ATO incidents and risky consumer behavior is creating an increasingly dangerous environment that will let such fraud expand.
The post FBI: Account Takeover Scammers Stole $262 Million this Year appeared first on Security Boulevard.
According to the Thales Consumer Digital Trust Index 2025, global confidence in digital services is slipping fast. The survey of more than 14,000 consumers across 15 countries makes the findings clear: no sector earned high trust ratings from even half its users. Most industries are seeing trust erode or, at best, stagnate. In an era..
The post The Trust Crisis: Why Digital Services Are Losing Consumer Confidence appeared first on Security Boulevard.
The Russian state-sponsored group behind the RomCom malware family used the SocGholish loader for the first time to launch an attack on a U.S.-based civil engineering firm, continuing its targeting of organizations that offer support to Ukraine in its ongoing war with its larger neighbor.
The post Russian-Backed Threat Group Uses SocGholish to Target U.S. Company appeared first on Security Boulevard.
A look at why identity security is failing in the age of deepfakes and AI-driven attacks, and how biometrics, MFA, PAD, and high-assurance verification must evolve to deliver true, phishing-resistant authentication.
The post How AI Threats Have Broken Strong AuthenticationΒ appeared first on Security Boulevard.
A new iteration of the Shai-Hulud malware that ran through npm repositories in September is faster, more dangerous, and more destructive, generating huge numbers of malicious repositories, compromised scripts, and attacked GitHub users in what is shaping up to be one of the most significant supply chain attacks this year.
The post The Latest Shai-Hulud Malware is Faster and More Dangerous appeared first on Security Boulevard.
Huntress threat researchers are tracking a ClickFix campaign that includes a variant of the scheme in which the malicious code is hidden in the fake image of a Windows Update and, if inadvertently downloaded by victims, will deploy the info-stealing malware LummaC2 and Rhadamanthys.
The post Attackers are Using Fake Windows Updates in ClickFix Scams appeared first on Security Boulevard.
SitusAMC, a services provider with clients like JPMorgan Chase and Citi, said its systems were hacked and the data of clients and their customers possibly compromised, sending banks and other firms scrambling. The data breach illustrates the growth in the number of such attacks on third-party providers in the financial services sector.
The post Hack of SitusAMC Puts Data of Financial Services Firms at Risk appeared first on Security Boulevard.
I suspect that many people who take an interest in Internet privacy donβt appreciate how hard it is to resist browser fingerprinting. Taking steps to reduce it leads to inconvenience and, with the present state of technology, even the most intrusive approaches are only partially effective. The data collected by fingerprinting is invisible to the user, and stored somewhere beyond the userβs reach.
On the other hand, browser fingerprinting produces only statistical results, and usually canβt be used to track or identify a user with certainty. The data it collects has a relatively short lifespan β days to weeks, not months or years. While it probably can be used for sinister purposes, my main concern is that it supports the intrusive, out-of-control online advertising industry, which has made a wasteland of the Internet.
β« Kevin Boone
My view on this matter is probably a bit more extreme than some: I believe it should be illegal to track users for advertising purposes, because the data collected and the targeting it enables not only violate basic privacy rights enshrined in most constitutions, they also pose a massive danger in other ways. This very same targeting data is already being abused by totalitarian states to influence our politics, which has had disastrous results. Of course, our own democratic governmentsβ hands arenβt exactly clean either in this regard, as they increasingly want to use this data to stop βterroristsβ and otherwise infringe on basic rights. Finally, any time such data ends up on the black market after data breaches, criminals, organised or otherwise, also get their hands on it.
I have no idea what such a ban should look like, or if it's possible to do this even remotely effectively. In the current political climate in many western countries, which are dominated by the wealthy few and corporate interests, even if such a ban were passed as lip service to concerned constituents, any fines or other deterrents would probably be far too low to make a difference anyway. As such, my desire to have targeted online advertising banned is mostly theory, not practice, as further illustrated by the European Union caving like cowards on privacy to even the slightest bit of pressure.
Best I can do for now is not partake in this advertising hellhole. I disabled and removed all advertising from OSNews recently, and have always strongly advised everyone to use as many adblocking options as possible. We not only have a Pi-Hole to keep all of our devices at home safe, but also use a second layer of on-device adblockers, and I advise everyone to do the same.
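Coming back to Boone's point about what fingerprinting actually gathers: here is a minimal, hypothetical browser-side sketch in TypeScript that hashes a handful of attributes every browser freely reports. The attribute list and the hashing step are assumptions for illustration; real fingerprinting scripts combine many more signals, such as canvas rendering, installed fonts, audio processing quirks, and WebGL details.

```typescript
// Hypothetical sketch of a fingerprinting script; real trackers gather far more.
async function roughFingerprint(): Promise<string> {
  // Attributes any page can read without asking permission.
  const signals = [
    navigator.userAgent,
    navigator.language,
    String(navigator.hardwareConcurrency),
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(new Date().getTimezoneOffset()),
  ].join("|");

  // Hash the combined attributes into a compact, comparable identifier.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(signals)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

roughFingerprint().then((id) => console.log(id));
```

None of these values is secret on its own, but together they are often distinctive enough to recognize the same browser across unrelated sites, which is exactly why fingerprinting is so hard to resist.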
In this episode, we discuss the first reported AI-driven cyber espionage campaign, as disclosed by Anthropic. In September 2025, a state-sponsored Chinese actor manipulated the Claude Code tool to target 30 global organizations. We explain how the attack was executed, why it matters, and its implications for cybersecurity. Join the conversation as we examine the [β¦]
The post AI Agent Does the Hacking: First Documented AI-Orchestrated Cyber Espionage appeared first on Shared Security Podcast.
The post AI Agent Does the Hacking: First Documented AI-Orchestrated Cyber Espionage appeared first on Security Boulevard.
Agencies in the US and other countries have gone hard after bulletproof hosting services providers this month, including Media Land, Hypercore, and associated companies and individuals, while the Five Eyes threat intelligence alliance published BPH mitigation guidelines for ISPs, cloud providers, and network defenders.
The post U.S., International Partners Target Bulletproof Hosting Services appeared first on Security Boulevard.