As you may have heard, on December 3, 2025, the React team announced a critical Remote Code Execution (RCE) vulnerability in servers using the React Server Components (RSC) Flight protocol. The vulnerability, tracked as CVE-2025-55182, carries a CVSS score of 10.0 and is informally known as "React2Shell". It allows attackers to achieve prototype pollution during deserialization of RSC payloads by sending specially crafted multipart requests with "__proto__", "constructor", or "prototype" as module names. We're happy to announce that community contributor vognik submitted an exploit module for React2Shell, which landed earlier this week and is included in this week's release.
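For readers unfamiliar with the bug class, here is a minimal sketch of how prototype pollution works when a deserializer recursively merges attacker-controlled keys. This is a generic TypeScript illustration, not the React Flight deserializer; in the real attack the module-name field plays the role of the polluted key, and the merge helper below is hypothetical.

```ts
// Generic prototype-pollution sketch (NOT the actual React Flight code).
// A recursive merge that trusts attacker-controlled keys lets "__proto__"
// reach Object.prototype, so every object in the process inherits the value.
function merge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      // Reading target["__proto__"] returns Object.prototype, so the
      // recursive call writes straight onto the shared prototype.
      target[key] = merge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-supplied payload using "__proto__" as a key:
merge({}, JSON.parse('{"__proto__": {"polluted": true}}'));
console.log(({} as any).polluted); // true -- the pollution escaped the object
```

Once an attacker can plant properties on Object.prototype this way, gadget code elsewhere in the server that reads a normally-unset property can be steered toward code execution.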
Over the past couple of weeks, Metasploit has made two key improvements to the framework's MSSQL attack capabilities. The first (PR 20637) is a new NTLM relay module, auxiliary/server/relay/smb_to_mssql, which enables users to start a malicious SMB server that relays authentication attempts to one or more target MSSQL servers. When successful, the Metasploit operator gets an interactive session to the MSSQL server that can be used to run interactive queries or MSSQL auxiliary modules.
Building on this work, it became clear that users would need to interact with MSSQL servers that require encryption, as many do in hardened environments. To achieve that objective, issue 18745 was closed by updating Metasploit's MSSQL protocol library to offer better encryption support. Metasploit users can now open interactive sessions to servers that offer, and even require, encrypted connections. This functionality is available automatically in the auxiliary/scanner/mssql/mssql_login and new auxiliary/server/relay/smb_to_mssql modules.
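As a rough sketch, driving the new relay module from msfconsole looks something like the session below. The RELAY_TARGETS option name is an assumption carried over from Metasploit's existing relay modules, so check show options on the module for the authoritative list.

```
msf6 > use auxiliary/server/relay/smb_to_mssql
msf6 auxiliary(server/relay/smb_to_mssql) > set RELAY_TARGETS 192.0.2.10
msf6 auxiliary(server/relay/smb_to_mssql) > run
```

When a victim authenticates to the rogue SMB listener and the relay succeeds, the resulting MSSQL session can be interacted with via sessions -i <id>, like any other Metasploit session.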
Authors: Blaklis, Tomais Williamson, and Valentin Lobstein <chocapikk@leakix.net>
Type: Exploit
Pull request: #20725 contributed by Chocapikk
Path: multi/http/magento_sessionreaper
AttackerKB reference: CVE-2025-54236
Description: This adds a new exploit module for CVE-2025-54236 (SessionReaper), a critical vulnerability in Magento/Adobe Commerce that allows unauthenticated remote code execution. The vulnerability stems from improper handling of nested deserialization in the payment method context, combined with an unauthenticated file upload endpoint.
Authors: Lachlan Davidson, Maksim Rogov, and maple3142
Type: Exploit
Pull request: #20760 contributed by sfewer-r7
Path: multi/http/react2shell_unauth_rce_cve_2025_55182
AttackerKB reference: CVE-2025-55182
Description: This adds an exploit for CVE-2025-55182, an unauthenticated RCE in React. This vulnerability has been referred to as React2Shell.
Authors: Peter Thaleikis and Valentin Lobstein <chocapikk@leakix.net>
Type: Exploit
Pull request: #20746 contributed by Chocapikk
Path: multi/http/wp_king_addons_privilege_escalation
AttackerKB reference: CVE-2025-8489
Description: This adds an exploit module for CVE-2025-8489, an unauthenticated privilege escalation vulnerability in the WordPress King Addons for Elementor plugin (versions 24.12.92 to 51.1.14). The vulnerability allows unauthenticated attackers to create administrator accounts by specifying the user_role parameter during registration, enabling remote code execution through plugin upload.
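To illustrate the bug class, here is a hypothetical sketch (in TypeScript/Express, not the plugin's actual PHP) of a registration handler that trusts a client-supplied role. The endpoint and helper are invented for illustration; user_role is the parameter named in the advisory.

```ts
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Stub standing in for the plugin's account-creation logic.
async function createUser(u: { login: string; email: string; role: string }) {
  console.log("created user", u);
}

app.post("/register", async (req, res) => {
  await createUser({
    login: req.body.user_login,
    email: req.body.user_email,
    // The bug class: the role is taken straight from the request, so an
    // unauthenticated attacker can register with user_role=administrator
    // and then upload a malicious plugin to reach code execution.
    role: req.body.user_role ?? "subscriber",
  });
  res.sendStatus(201);
});

app.listen(8080);
```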
Author: bcoles <bcoles@gmail.com>
Type: Payload (Single)
Pull request: #20682 contributed by bcoles
Path: linux/loongarch64/reboot
Description: This extends our payload support to a new architecture, LoongArch64. The first payload introduced for this new architecture is the reboot payload, which will cause the target system to restart once triggered.
Modules which have either been enhanced or renamed:
You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.
As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:
If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition, Metasploit Pro.

Turn XDR volume into revenue. Morpheus investigates 100% of alerts and triages 95% in under 2 minutes, letting MSSPs scale without adding headcount.
The post The Autonomous MSSP: How to Turn XDR Volume into a Competitive Advantage appeared first on D3 Security.
I have no context for this video—it’s from Reddit—but one of the commenters adds some context:
Hey everyone, squid biologist here! Wanted to add some stuff you might find interesting.
With so many people carrying around cameras, we’re getting more videos of giant squid at the surface than in previous decades. We’re also starting to notice a pattern, that around this time of year (peaking in January) we see a bunch of giant squid around Japan. We don’t know why this is happening. Maybe they gather around there to mate or something? who knows! but since so many people have cameras, those one-off monster-story encounters are now caught on video, like this one (which, btw, rips. This squid looks so healthy, it’s awesome).
When we see big (giant or colossal) healthy squid like this, it’s often because a fisher caught something else (either another squid or sometimes an antarctic toothfish). The squid is attracted to whatever was caught and they hop on the hook and go along for the ride when the target species is reeled in. There are a few colossal squid sightings similar to this from the southern ocean (but fewer people are down there, so fewer cameras, fewer videos). On the original instagram video, a bunch of people are like “Put it back! Release him!” etc, but he’s just enjoying dinner (obviously as the squid swims away at the end).
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Session 5D: Side Channels 1
Authors, Creators & Presenters: Lukas Maar (Graz University of Technology), Jonas Juffinger (Graz University of Technology), Thomas Steinbauer (Graz University of Technology), Daniel Gruss (Graz University of Technology), Stefan Mangard (Graz University of Technology)
PAPER
KernelSnitch: Side-Channel Attacks on Kernel Data Structures
The sharing of hardware elements, such as caches, is known to introduce microarchitectural side-channel leakage. One approach to eliminate this leakage is to not share hardware elements across security domains. However, even under the assumption of leakage-free hardware, it is unclear whether other critical system components, like the operating system, introduce software-caused side-channel leakage. In this paper, we present a novel generic software side-channel attack, KernelSnitch, targeting kernel data structures such as hash tables and trees. These structures are commonly used to store both kernel and user information, e.g., metadata for userspace locks. KernelSnitch exploits that these data structures are variable in size, ranging from an empty state to a theoretically arbitrary amount of elements. Accessing these structures requires a variable amount of time depending on the number of elements, i.e., the occupancy level. This variance constitutes a timing side channel, observable from user space by an unprivileged, isolated attacker. While the timing differences are very low compared to the syscall runtime, we demonstrate and evaluate methods to amplify these timing differences reliably. In three case studies, we show that KernelSnitch allows unprivileged and isolated attackers to leak sensitive information from the kernel and activities in other processes. First, we demonstrate covert channels with transmission rates up to 580 kbit/s. Second, we perform a kernel heap pointer leak in less than 65 s by exploiting the specific indexing that Linux is using in hash tables. Third, we demonstrate a website fingerprinting attack, achieving an F1 score of more than 89 %, showing that activity in other user programs can be observed using KernelSnitch. Finally, we discuss mitigations for our hardware-agnostic attacks.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.
The post NDSS 2025 – KernelSnitch: Side-Channel Attacks on Kernel Data Structures appeared first on Security Boulevard.
Regulators made their move in 2025.
Disclosure deadlines arrived. AI rules took shape. Liability rose up the chain of command. But for security teams on the ground, the distance between policy and practice only grew wider.
Part two of a …
The post LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle first appeared on The Last Watchdog.
MCP is transforming AI agent connectivity, but authentication is the critical gap. Learn about Shadow IT risks, enterprise requirements, and solutions.
The post What Tech Leaders Need to Know About MCP Authentication in 2025 appeared first on Security Boulevard.
In a nod to the evolving threat landscape that comes with cloud computing and AI, and to growing supply chain threats, Microsoft is broadening its bug bounty program to reward researchers who uncover threats to its users that come from third-party code, like commercial and open source software.
The post Microsoft Expands its Bug Bounty Program to Include Third-Party Code appeared first on Security Boulevard.
Israeli cybersecurity firms raised $4.4B in 2025 as funding rounds jumped 46%. Record seed and Series A activity signals a maturing, globally dominant cyber ecosystem.
The post Funding of Israeli Cybersecurity Soars to Record Levels appeared first on Security Boulevard.
OpenAI warns that frontier AI models could escalate cyber threats, including zero-day exploits. Defense-in-depth, monitoring, and AI security by design are now essential.
The post As Capabilities Advance Quickly OpenAI Warns of High Cybersecurity Risk of Future AI Models appeared first on Security Boulevard.
CVE-2025-55183, CVE-2025-55184, and CVE-2025-67779 require immediate attention
The post Three New React Vulnerabilities Surface on the Heels of React2Shell appeared first on Security Boulevard.
The NCSC warns prompt injection is fundamentally different from SQL injection. Organizations must shift from prevention to impact reduction and defense-in-depth for LLM security.
The post Prompt Injection Can’t Be Fully Mitigated, NCSC Says Reduce Impact Instead appeared first on Security Boulevard.
To transform cyber risk into economic advantage, leaders must treat cyber as a board-level business risk and rehearse cross-border incidents with partners to build trust.
The post Cyber Risk is Business Risk: Embedding Resilience into Corporate Strategy appeared first on Security Boulevard.
As they work to fend off the rapidly expanding number of attempts by threat actors to exploit the dangerous React2Shell vulnerability, security teams are learning of two new flaws in React Server Components that could lead to denial-of-service attacks or the exposure of source code.
The post React Fixes Two New RSC Flaws as Security Teams Deal with React2Shell appeared first on Security Boulevard.
Technology professionals hoping to come and work in the US face a new privacy concern. Starting December 15, skilled workers on H-1B visas and their families must flip their social media profiles to public before their consular interviews. It’s a deeply risky move from a security and privacy perspective.
According to a missive from the US State Department, immigration officers use all available information to vet newcomers for signs that they pose a threat to national security. That includes an “online presence review.” That review now requires not just H-1B applicants but also H-4 applicants (their dependents who want to move with them to the US) to “adjust the privacy settings on all of their social media profiles to ‘public.'”
An internal State Department cable obtained by CBS had sharper language: it instructs officers to screen for “any indications of hostility toward the citizens, culture, government, institutions, or founding principles of the United States.” What that means is unclear, but if your friends like posting strong political opinions, you should be worried.
This isn’t the first time that the government has forced people to lift the curtain on their private digital lives. The US State Department forced student visa applicants to make their social media profiles public in June this year.
This is a big deal for a lot of people. The H-1B program allows companies to temporarily hire foreign workers in specialty jobs. The US processed around 400,000 visas under the H-1B program last year, most of which were applications to renew employment, according to the Pew Research Center. When you factor in those workers’ dependents, we’re talking well over a million people. This decision forces them into long-term digital exposure that threatens not just them, but the US too.
A lot of these H-1B workers work for defense contractors, chip makers, AI labs, and big tech companies. These are organizations that foreign powers (especially those hostile to the US) care a lot about, and that makes those H-1B employees primary targets for them.
Making H-1B holders’ real names, faces, and daily routines public is a form of digital doxxing. The policy exposes far more personal information than is safe, creating significant new risks.
This information gives these actors a free organizational chart, complete with up-to-date information on who’s likely to be working on chip designs and sensitive software.
It also gives the same people all they need to target people on that chart. They have information on H-1B holders and their dependents, including intelligence about their friends and family, their interests, their regular locations, and even what kinds of technology they use. They become more exposed to risks like SIM swapping and swatting.
This public information also turns employees into organizational attack vectors. Adversaries can use personal and professional data to enhance spear-phishing and business email compromise techniques that cost organizations dearly. Public social media content becomes training data for fraud, serving up audio and video that threat actors can use to create lifelike impersonations of company employees.
Social media profiles also give adversaries an ideal way to approach people. They have a nasty habit of exploiting social media to target assets for recruitment. The head of MI5 warned two years ago that Chinese state actors had approached an estimated 20,000 Britons via LinkedIn to steal industrial or technological secrets.
Armed with a deep, intimate understanding of what makes their targets tick, attackers stand a much better chance of co-opting them. One person might need money because of a gambling problem or a sick relative. Another might be lonely and a perfect target for a romance scam.
Or how about basic extortion? LGBTQ+ individuals from countries where homosexuality is criminalized risk exposure to regimes that could harm them when they return. Family in hostile countries become bargaining chips. In some regions, families of high-value employees could face increased exposure if this information becomes accessible. Foreign nation states are good at exploiting pain points. This policy means that they won’t have to look far for them.
Visa applicants might assume they can simply make an account private again once officials have evaluated them. But states adversarial to the US are actively seeking such information. They have vast online surveillance operations that scrape public social media accounts. As soon as they notice someone showing up in the US with H-1B visa status, they'll be ready to mine account data that they've already scraped.
So what is an H-1B applicant to do? Deleting accounts is a bad idea, because sudden disappearance can trigger suspicion and officers may detect forensic traces. A safer approach is to pause new posting and carefully review older content before making profiles public. Removing or hiding posts that reveal personal routines, locations, or sensitive opinions reduces what can be taken out of context or used for targeting once accounts are exposed.
The irony is that spies are likely using fake social media accounts honed for years to slip under the radar. That means they’ll keep operating in the dark while legitimate H-1B applicants are the ones who become vulnerable. So this policy may unintentionally create the very risks it aims to prevent. And it also normalizes mandatory public exposure as a condition of government interaction.
We’re at a crossroads. Today, visa applicants, their families, and their employers are at risk. The infrastructure exists to expand this approach in the future. Or officials could stop now and rethink, before these risks become more deeply entrenched.
We don’t just report on threats – we help protect your social media
Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.
The UK Parliament convened earlier this week to debate a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.
The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country.
But the case for digital identification has not been made.
As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:
Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.
Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.
Read our civil society briefing in full here.

For years, compliance has been one of the most resource-intensive responsibilities for cybersecurity teams. Despite growing investments in tools, the day-to-day reality of compliance is still dominated by manual, duplicative tasks. Teams chase down screenshots, review spreadsheets, and cross-check logs, often spending weeks gathering information before an assessment or audit.
The post 3 Compliance Processes to Automate in 2026 appeared first on Security Boulevard.
Understand what PII is, why email puts it at risk, and how your business can strengthen security to better protect sensitive information.
The post PII in email: Explanation, risks, & protection appeared first on Security Boulevard.
Researchers have found evidence that AI conversations were inserted in Google search results to mislead macOS users into installing the Atomic macOS Stealer (AMOS). Both Grok and ChatGPT were found to have been abused in these attacks.
Forensic investigation of an AMOS alert showed the infection chain started when the user ran a Google search for “clear disk space on macOS.” Following that trail, the researchers found not one, but two poisoned AI conversations with instructions. Their testing showed that similar searches produced the same type of results, indicating this was a deliberate attempt to infect Mac users.
The search results led to AI conversations which provided clearly laid out instructions to run a command in the macOS Terminal. That command would end with the machine being infected with the AMOS malware.
If that sounds familiar, you may have read our post about sponsored search results that led to fake macOS software on GitHub. In that campaign, sponsored ads and SEO-poisoned search results pointed users to GitHub pages impersonating legitimate macOS software, where attackers provided step-by-step instructions that ultimately installed the AMOS infostealer.
As the researchers pointed out:
“Once the victim executed the command, a multi-stage infection chain began. The base64-encoded string in the Terminal command decoded to a URL hosting a malicious bash script, the first stage of an AMOS deployment designed to harvest credentials, escalate privileges, and establish persistence without ever triggering a security warning.”
This is dangerous for the user on many levels. Because there is no prompt or review, the user never gets a chance to see or assess what the downloaded script will do before it runs. And because everything happens on the command line, the attack can bypass normal file download protections and execute anything the attacker wants.
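One practical habit follows directly from this: if a "helpful" command contains a base64 blob, decode it first and look at what it points to instead of running it. A minimal Node.js sketch, using a harmless example URL as the blob:

```ts
// Decode a base64 string from a suspicious one-liner instead of executing it.
// Runs under Node.js, where Buffer is a global.
const blob = "aHR0cHM6Ly9leGFtcGxlLmNvbS9pbnN0YWxsLnNo"; // copied from the suspect command
const decoded = Buffer.from(blob, "base64").toString("utf8");
console.log(decoded); // "https://example.com/install.sh" -- inspect before going further
```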
Other researchers have found a campaign that combines elements of both attacks: the shared AI conversation and fake software install instructions. They found user guides for installing OpenAI’s new Atlas browser for macOS through shared ChatGPT conversations, which in reality led to AMOS infections.
The cybercriminals used prompt engineering to get ChatGPT to generate a step‑by‑step “installation/cleanup” guide which in reality will infect a system. ChatGPT’s sharing feature creates a public link to a single conversation that exists in the owner’s account. Attackers can craft a chat to produce the instructions they need and then tidy up the visible conversation so that what’s shared looks like a short, clean guide rather than a long back-and-forth.
Most major chat interfaces (including Grok on X) also let users delete conversations or selectively share screenshots. That makes it easy for criminals to present only the polished, “helpful” part of a conversation and hide how they arrived there.
Then the criminals either pay for a sponsored search result pointing to the shared conversation or they use SEO techniques to get their posts high in the search results. Sponsored search results can be customized to look a lot like legitimate results. You’ll need to check who the advertiser is to find out it’s not real.

From there, it’s a waiting game for the criminals. They rely on victims to find these AI conversations through search and then faithfully follow the step-by-step instructions.
These attacks are clever and use legitimate platforms to reach their targets. But there are some precautions you can take.
Never run instructions that pipe a download straight into a shell, such as curl … | bash or similar combinations, unless you understand exactly what they do.
If you’ve scanned your Mac and found the AMOS information stealer:
If all this sounds too difficult for you to do yourself, ask someone or a company you trust to help you—our support team is happy to assist you if you have any concerns.
We don’t just report on threats—we remove them
Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.
The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.
These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. We’re in the third leg of data security—the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.
The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.
To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.
What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.
First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts—and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.
Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.
Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.
Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.
Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.
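As a purely illustrative sketch of how requirements four and five might take shape (this is invented for illustration; it is not the Solid protocol, the Human Context Protocol, or any real API):

```ts
// Illustrative only: one way to model fine-grained, revocable, audited access.
interface AccessGrant {
  grantee: string;   // which AI system or party may read
  paths: string[];   // which portions of the store, e.g. ["health/*"]
  expires: Date;     // grants are time-bounded as well as revocable
}

interface AuditEntry {
  grantee: string;
  path: string;
  at: Date;
}

interface PersonalDataStore {
  grant(g: AccessGrant): void;
  revoke(grantee: string): void;
  // A read must be checked against the grant table, then appended to the log.
  read(grantee: string, path: string): unknown;
  // Lets the owner "go back in time and see who has accessed it."
  auditLog(): AuditEntry[];
}
```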
I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.
The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.
Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.
This essay was originally published in IEEE Security & Privacy.
When you’re shopping around for a Virtual Private Network (VPN) you’ll find yourself in a sea of promises like “military-grade encryption!” and “total anonymity!” You can’t scroll two inches without someone waving around these fancy terms.
But not all VPNs can be trusted. Some VPNs genuinely protect your privacy, and some only sound like they do.
With VPN usage rising around the world for streaming, travel, remote work, and basic digital safety, understanding what makes a VPN truly private matters more than ever.
After years of trying VPNs for myself, privacy-minded family members, and a few mission-critical projects, here’s what I wish everyone knew.
If you’re wondering whether a VPN is worth it, you’re not alone. As your privacy-conscious consumer advocate, let me break down three time-saving and cost-saving benefits of using a privacy-first VPN.
Ever feel like someone’s always looking over your shoulder online? Without a VPN, your internet service provider, and sometimes websites or governments, can keep tabs on what you do. A VPN encrypts your traffic and swaps out your real IP address for one of its own, letting you browse, shop, and read without a digital paper trail following you around.
I’ve run into this myself while traveling. There were times when I needed a VPN just to access US or European web apps that were blocked in certain Asian countries. In other cases, I preferred to appear “based” in the US so that English-language apps would load naturally, instead of defaulting to the local language, currency, or content of the country I was visiting.
Some of your favorite shows and websites are locked away simply because of where you live. In many cases, subscription or pay-per-view prices are higher in more prosperous regions. With a VPN, you can connect to servers in other countries and unlock content that isn’t available at home.
For example, when All Elite Wrestling (AEW) announced its major 2022 pay-per-view featuring CM Punk vs. Jon Moxley, US fans paid $49.99 through Bleacher Report. Fans in the UK, meanwhile, watched the exact same event on FiteTV for $23 less, around half the price. Because platforms determine pricing based on your IP address, a VPN server in another region can show you the pricing available in that country. Savings like that can make a VPN pay for itself quickly.
Before you join a network named “Starbucks Guest WiFi,” remember that nothing stops a cybercriminal from broadcasting a hotspot with the same name. Public Wi-Fi is convenient, but it’s also one of the easiest places for someone to snoop on your traffic.
Connecting to your VPN immediately encrypts everything you send or receive. That means you can check email, pay bills, or browse privately without worrying about someone nearby intercepting your information. Getting compromised will cost far more in money, time, and stress than most privacy-first VPN subscriptions.
For a VPN, “privacy-first” can’t be just a nice slogan. It’s a mindset that shapes every technical, business, and legal decision.
A privacy-first VPN:
If a VPN can’t explain how it handles these areas, that’s a red flag.
WireGuard isn’t a VPN service. It’s the protocol that powers many modern VPNs, including Malwarebytes Privacy VPN. It’s the engine that handles encryption and securely routes your traffic.
WireGuard is the superstar in the VPN world. Unlike clunkier, older protocols (like OpenVPN or IPsec), it’s deliberately lean and built for the modern internet. Its small codebase is easier to audit and leaves fewer places for bugs to hide. It’s fully open-source, so researchers can dig into exactly how it works.
Its cryptography is fast, efficient, and modern with strong encryption, solid key exchange, and lightweight hashing that reduces overhead. In practice, that means better privacy and better performance without a provider having to gather connection data just to keep speeds usable.
Of course, WireGuard is just the foundation. Each VPN implements it differently. The better ones add privacy-friendly tweaks like rotating IP addresses or avoiding static identifiers so that even they can’t link sessions back to individual users.
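For reference, a minimal WireGuard client configuration looks like the sketch below. The keys, addresses, and endpoint are placeholders; in practice your provider's app generates and manages these for you.

```
[Interface]
# The private key never leaves your device.
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route all IPv4 and IPv6 traffic through the tunnel.
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25
```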
With VPN usage rising, especially where new age-verification rules have sparked debate about whether VPNs might face future scrutiny, it’s more important than ever to choose providers with strong, transparent privacy practices.
When you boil it down, a handful of questions reveal almost everything about how a VPN treats your privacy:
If a VPN provider gets evasive about any of this, or runs its service “for free” while collecting data to make the numbers work, that tells you almost everything you need to know.

One of the most revealing questions you can ask is deceptively simple: Who actually owns the servers?
Most VPNs rent hardware from large data centers or cloud platforms. When they do, your traffic travels through machines managed not only by the VPN’s engineers, but also by whoever runs those facilities. That introduces an access question: Who else has their hands on the hardware?
When a VPN owns and operates its equipment, including racks and networking gear, it reduces the number of unknowns dramatically. The fewer third parties in the chain, the easier it is to stand behind privacy guarantees.
RAM-only servers take this a step further. Because everything runs in memory, nothing is ever written to a hard drive. Pull the plug and the entire working state disappears instantly, like wiping a whiteboard clean. That means no logs sitting quietly on a disk, nothing for an intruder or authorities to seize, and nothing left behind if ownership, personnel, or legal circumstances change.
This setup also tends to go hand-in-hand with owning the hardware. Most public cloud environments simply don’t allow true diskless deployments with full control over the underlying machine.
Even with strong infrastructure and protocols, the details still matter. A solid kill switch keeps your traffic from leaking if the connection drops. Private DNS prevents queries from being routed through third parties. Multi-hop routes make correlation attacks harder. And torrent users may want carefully implemented port forwarding that doesn’t introduce side channels.
These aren’t flashy features, but they show whether a provider has considered the full privacy landscape, not just the obvious parts.
A provider that truly stands behind its privacy claims will welcome outside inspection. Independent audits, published findings, and ongoing transparency reports help confirm whether logging is disabled in practice, not just in principle. Some companies also maintain warrant canaries (more on this below). None of these are perfect, but together they paint a clear picture of how seriously the VPN treats user trust.
Okay, so here’s something interesting: some companies use what’s called a “warrant canary” to quietly let us know if they’ve received a top-secret government request for data. Here’s the deal: it’s illegal for them to simply tell us, “Hey, the government’s snooping around.” So, instead, they publish a simple statement that says something like, “As of January 2026, we haven’t received any secret orders for your data.”
The clever part is that they update this statement on a regular basis. If it suddenly disappears or just stops getting updated, it could mean the company got hit with one of these hush-hush requests and legally can’t talk about it. It’s like the digital version of a warning signal. It is nothing flashy, but if you’re paying attention, you’ll spot when something changes.
It’s not a perfect system (and who knows what the courts will think of it in the future), but a warrant canary is one way companies try to be on our side, finding ways to keep us in the loop even when they’re told to stay silent. So, give an extra ounce of trust to companies that publish these regularly.
Expect to see continued evolution: new cryptography built for a post-quantum world, more transparency from providers, decentralized and community-run VPN options, and tighter integration with secure messaging, encrypted DNS, and whatever comes next.
It’s also worth keeping an eye on how governments respond to rising VPN use. In the UK, for example, new age-verification rules triggered a huge spike in VPN sign-ups and a public debate about whether VPN usage should be monitored more closely. There’s no proposal to restrict or ban VPNs, but the conversation is active.
If you care about your privacy online, don’t settle for slick marketing. Look for the real foundations like modern protocols, owned and well-managed infrastructure, RAM-only servers, regular audits, and a culture that treats transparency as a habit, not a stunt.
Privacy is engineered, not simply promised. With the right VPN, you stay in control of your digital life instead of hoping someone else remembers to keep your secrets safe.
We don’t just report on privacy—we offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
Discover how AI strengthens cybersecurity by detecting anomalies, stopping zero-day and fileless attacks, and enhancing human analysts through automation.
The post AI Threat Detection: How Machines Spot What Humans Miss appeared first on Security Boulevard.
Security incident response doesn’t fail because of a lack of tools; it fails because of a lack of insight. In an environment where every minute of downtime equals revenue loss, customer impact, and regulatory risk, root cause analysis has become a decisive factor in how effectively organizations execute incident response and stabilize operations. The difference between […]
The post How Root Cause Analysis Improves Incident Response and Reduces Downtime? appeared first on Kratikal Blogs.
As the clock ticks down to the full enforcement of Hong Kong’s Protection of Critical Infrastructures (Computer Systems) Ordinance on January 1, 2026, designated operators of Critical Infrastructures (CI) and Critical Computer Systems (CCS) must act decisively. This landmark law mandates robust cybersecurity measures for Critical Computer Systems (CCS) to prevent disruptions, with non-compliance risking […]
The post Hong Kong’s New Critical Infrastructure Ordinance will be effective by 1 January 2026 – What CIOs Need to Know appeared first on NSFOCUS, Inc.
Discover the latest changes in online account management, focusing on Enterprise SSO, CIAM, and enhanced security. Learn how these updates streamline login processes and improve user experience.
The post Learn about changes to your online account management appeared first on Security Boulevard.