Authors, Creators & Presenters: Zizhi Jin (Zhejiang University), Qinhong Jiang (Zhejiang University), Xuancun Lu (Zhejiang University), Chen Yan (Zhejiang University), Xiaoyu Ji (Zhejiang University), Wenyuan Xu (Zhejiang University)
PAPER
PhantomLiDAR: Cross-Modality Signal Injection Attacks Against LiDAR
LiDAR is a pivotal sensor for autonomous driving, offering precise 3D spatial information. Previous signal attacks against LiDAR systems mainly exploit laser signals. In this paper, we investigate the possibility of cross-modality signal injection attacks, i.e., injecting intentional electromagnetic interference (IEMI) to manipulate LiDAR output. Our insight is that the internal modules of a LiDAR, i.e., the laser receiving circuit, the monitoring sensors, and the beam-steering modules, even with strict electromagnetic compatibility (EMC) testing, can still couple with the IEMI attack signals and result in the malfunction of LiDAR systems. Based on the above attack surfaces, we propose the alias attack, which manipulates LiDAR output in terms of Points Interference, Points Injection, Points Removal, and even LiDAR Power-Off. We evaluate and demonstrate the effectiveness of alias with both simulated and real-world experiments on five COTS LiDAR systems. We also conduct feasibility experiments in real-world moving scenarios. We provide potential defense measures that can be implemented at both the sensor level and the vehicle system level to mitigate the risks associated with IEMI attacks.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Struggling with MCP authentication? The November 2025 spec just changed everything. CIMD replaces DCR's complexity with a simple URL-based approach—no registration endpoints, no client ID sprawl, built-in identity verification. Here's your complete implementation guide with production code.
The recent TruffleNet campaign, first documented by Fortinet, highlights a familiar and uncomfortable truth for security leaders: some of the most damaging cloud attacks aren’t exploiting zero-day vulnerabilities. They’re exploiting identity models that were never designed for the scale and automation of modern cloud environments. Nothing about this attack was novel. That’s precisely the problem. …
Authors, Creators & Presenters: Martin Unterguggenberger (Graz University of Technology), Lukas Lamster (Graz University of Technology), David Schrammel (Graz University of Technology), Martin Schwarzl (Cloudflare, Inc.), Stefan Mangard (Graz University of Technology)
PAPER
TME-Box: Scalable In-Process Isolation through Intel TME-MK Memory Encryption
Efficient cloud computing relies on in-process isolation to optimize performance by running workloads within a single process. Without heavy-weight process isolation, memory safety errors pose a significant security threat by allowing an adversary to extract or corrupt the private data of other co-located tenants. Existing in-process isolation mechanisms are not suitable for modern cloud requirements, e.g., MPK's 16 protection domains are insufficient to isolate thousands of cloud workers per process. Consequently, cloud service providers have a strong need for lightweight in-process isolation on commodity x86 machines. This paper presents TME-Box, a novel isolation technique that enables fine-grained and scalable sandboxing on commodity x86 CPUs. By repurposing Intel TME-MK, which is intended for the encryption of virtual machines, TME-Box offers lightweight and efficient in-process isolation. TME-Box enforces that sandboxes use their designated encryption keys for memory interactions through compiler instrumentation. This cryptographic isolation enables fine-grained access control, from single cache lines to full pages, and supports flexible data relocation. In addition, the design of TME-Box allows the efficient isolation of up to 32K concurrent sandboxes. We present a performance-optimized TME-Box prototype, utilizing x86 segment-based addressing, that showcases geomean (geometric mean) performance overheads of 5.2 % for data isolation and 9.7 % for code and data isolation, evaluated with the SPEC CPU2017 benchmark suite.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our dependence on digital infrastructure has grown exponentially amid unprecedented technological advancements. With this reliance comes an increasingly threatening landscape and expanding attack surfaces. As cyberthreats become more sophisticated, so must our defensive strategies. Enter large language models (LLMs) and domain-specific language models, potent weapons in the fight against threats. LLMs have gained prominence due to..
A recent OpenAI-related breach via third-party provider Mixpanel exposes how AI supply chain vulnerabilities enable phishing, impersonation, and regulatory risk—even without direct system compromise.
A self-harm prevention kit is becoming an essential part of school safety planning as student mental health challenges continue to rise across the United States. Schools are increasingly responsible for supporting the emotional well-being of their students and creating safe environments that reduce the risk of self-harming behavior, suicide attempts, or harmful coping patterns. The ...
Resiliency has been top of mind in 2025, and recent high-profile CVEs serve as holiday reminders that adversaries aren't slowing down. But what changed this year was how the federal community responded. Increasingly, exploitability drove the clock: when vulnerabilities surfaced as actively exploited, agencies leaned on a more operational posture where "Are we exposed?" and "How fast can we fix it?" mattered as much as "How severe is it?" In that environment, 2025 was defined by a single, powerful transition: the shift from planning modernization to executing it at scale. For years, agencies have discussed digital transformation, zero trust, and the promise of AI. This year, those themes moved from strategy decks into day-to-day delivery.
What is SSL/TLS? SSL and TLS are transport-layer protocols used to provide a secure connection between two nodes in a computer network. The first widely used protocol aimed at securing Internet connections was SSL, created by Netscape in mid-1995. It uses both public…
The Biggest Cyber Stories of the Year: What 2025 Taught Us
2025 didn’t just test cybersecurity; it redefined it.
From supply chains and healthcare networks to manufacturing floors and data centers, the digital world was reminded of a simple truth: everything is connected, and everything is at risk.
The year’s biggest incidents weren’t just technical failures. They were human, systemic, and operational. They showed how cyber now touches every layer of modern life: our health, our homes, our industries, and the trust that binds them.
Here’s a look at the top five cyber stories that shaped 2025, and what they tell us about the future we need to build.
1. Healthcare’s Wake-Up Call
There were several high-profile healthcare breaches in 2025, some of them among the largest healthcare data exposures we’ve ever seen. Many millions of individuals were affected, including patients, providers, and insurers. Personal details, medical histories, and treatment data were all swept up in breaches that often started with a third-party partner.
The scale has been breathtaking, as has the impact. Hospitals faced operational paralysis. Claims systems went dark. Patients waited weeks for reimbursements or prescriptions to clear.
It’s also not hard to see why healthcare continues to make headlines. Almost half of the data these entities store in the cloud is sensitive, yet the basics still lag behind. The Thales 2025 Data Threat Report: Healthcare and Life Sciences Edition revealed that over a quarter admit they don’t even know exactly where all their data lives, and only 4% have encrypted more than 80% of their sensitive information.
It’s this gap between awareness and action that makes this sector so vulnerable. Security controls need to match the sensitivity of the data, or every connection becomes a potential point of exposure. It’s not enough to protect your own walls if your partners’ gates are open. Healthcare’s growing dependence on third-party data processors has become its soft underbelly.
For security teams and their leaders, this is a time to reassess how we segment systems, encrypt data, and protect the multitude of identities that interact with every healthcare entity. Because when information flows across hundreds of connected platforms, security cannot be left in its wake; it has to move with the data, wherever it goes.
That’s where the CipherTrust Data Security Platform comes in, tokenizing, encrypting, and monitoring information across hybrid networks, ensuring that privacy and compliance follow the data wherever it flows.
2. The Data Sovereignty Reckoning
Europe made headlines this spring when regulators handed down one of the largest privacy fines to date, this time for cross-border data transfers that failed to meet adequacy standards.
This ruling wasn’t about one platform or one company: while laws evolve, trust remains fragile. This became clear in the 2025 Thales Consumer Digital Trust Index: no sector earned a “high trust” score above 50%, not even banking or healthcare.
That says a lot. Regulation on its own doesn’t build trust; real security does. In fact, 64% of consumers say they would trust brands more if they used advanced privacy tech, and a staggering 86% now expect multi-factor authentication.
It all comes down to controlling your and your customers’ data. It’s about data sovereignty.
People want data stored locally, protected by familiar laws, and secured with intelligent authentication that works quietly in the background. For businesses, trust won’t come from promises, but from proof through encryption, strong key management, and privacy-first design.
That’s why we have seen a growing interest in sovereign cloud solutions and tools like Thales Key Management - technologies that let organizations host and encrypt data locally while maintaining full operational flexibility.
The lesson is that regulatory landscapes will continue to evolve. Your controls must evolve faster.
3. Manufacturing and Retail: The New Front Lines
Spring and summer brought a double whammy to the UK economy. First, a wave of retail attacks, then a massive incident in manufacturing that saw production grind to a halt for weeks.
Factories stood still. Shops lost trading days. Suppliers faced cascading delays. The ripple effects stretched across Europe.
For years, manufacturing and retail were seen as less obvious targets, until they weren’t.
Earlier this year, several household names were hit by coordinated cyberattacks that impaired e-commerce sites, froze payment systems, and left customers unable to shop online or in-store. Over just 10 days, three of the UK’s biggest retail brands experienced outages that had a huge impact on their critical services, including digital checkouts and loyalty platforms.
Operational technology (OT) networks, which were once isolated from the internet, are now digitally intertwined with IT systems, cloud services, and customer platforms. Attackers know this. They’ve shifted focus from stealing data to stopping operations.
The result was that every connected conveyor belt, every smart logistics chain, every digital POS terminal became a potential entry point.
The industry response has been a new wave of OT-IT convergence security: integrating endpoint protection, real-time monitoring, and identity controls. Fundamentally, resilience is built through tools like SafeNet Trusted Access and a zero-trust architecture that verifies everything, segments everything, and assumes nothing is inherently safe.
4. Supply Chain Shock
Around the middle of 2025, a critical zero-day vulnerability in a widely used collaboration platform exposed tens of thousands of servers in both the private and public sectors globally. The exploit allowed cyber criminals to impersonate trusted users, move laterally across networks, and access sensitive repositories before patches were available.
It was the kind of digital domino effect that keeps CISOs awake at night. This wasn’t just a story about patching; it was about preparedness.
Organizations that practiced strong vulnerability management, application isolation, zero trust, and rapid incident response weathered the storm. Those without such playbooks faced weeks of uncertainty.
The broader takeaway is that in a hyperconnected economy, supply chain risk is a daily reality. Security today means protecting not just your environment, but every application, touchpoint, and partner your business depends on.
Supply chains are only as strong as the identities that connect them, and that’s where Thales IAM solutions are proving highly effective.
5. The Luxury of Data
In September, several high-profile luxury retailers disclosed breaches affecting millions of customers worldwide. The attackers didn’t target products or profits; they went after trust. Names, emails, contact numbers, and purchase histories. For affluent consumers, that information is identity itself.
Brand prestige, once built on exclusivity, now depends equally on data integrity.
These incidents shone a light on how consumer-facing industries remain among the most targeted. Because where data meets desire, attackers see value.
Encryption, both at rest and in use, combined with strong identity and access management, can make the difference between a contained event and a crisis that erodes reputation overnight.
For retail and luxury brands, the takeaway was sobering but actionable: protect customer data as fiercely as you protect your brand.
A Year of Lessons, Not Just Losses
Despite the number of high-profile breaches that plagued companies in 2025, the year was not one of defeat, but of definition. Every attack, every disruption, every hefty regulatory fine pointed toward a shared truth: resilience has become the new metric of success.
Cybersecurity is no longer just about defending against attacks, but about ensuring continuity, compliance, and confidence in a world that never stands still.
Entities that invested in encryption, key management, identity verification, and zero-trust principles minimized their losses, and they built trust in the process.
This is important because the ultimate goal isn’t just to be secure, it’s to be trusted.
Building a Future We Can All Trust
From healthcare and retail to manufacturing and government, the story of 2025 has been one of transformation through challenge.
As digital ecosystems expand and threats evolve, the path forward is clear: Encrypt what matters. Control who accesses it. Monitor every connection.
Above all, design security not as a barrier, but as an enabler of progress. At Thales, we call that building a future we can all trust.
Introduction: Safety protocols in the virtual domain are perhaps more important than ever in the current world. There can be no denying that PKI management is one of the most crucial aspects of protecting our increasingly digital world. It is a core element of most, if not all, secure transfers, such as emails and monetary transactions…
CISOs are often blamed after ransomware attacks, yet most breaches stem from organizational gaps, budget tradeoffs, and staffing shortages. This analysis explores why known risks remain unfixed and how security leaders can break the cycle.
Ransomware has become a systemic risk to healthcare, where downtime equals patient harm. From Change Healthcare to Ascension, this analysis explains why hospitals are targeted, what HIPAA really requires, and how resilience—not checklists—must drive security strategy.
2026 marks a critical turning point for cybersecurity leaders as AI-driven threats, data sovereignty mandates, and hybrid infrastructure risks reshape the CISO agenda. Discover the strategic priorities that will define tomorrow’s security posture.
Introduction: Security has become a primary focus in today’s world, which is dominated by computers and technology. Businesses are always on a quest to find better ways to secure their information and messages. Another important component in the field of cyber security is the understanding and management of certificates. These are generally in the form…
As 2025 comes to a close, artificial intelligence (AI) is a clear throughline across enterprise organizations. Many teams are still in the thick of implementing AI or deciding where and how to use it. Keeping up with usage trends and developments on top of that has become increasingly difficult. AI innovation moves fast, and LLMs permeate core workflows across research, communication, development, finance, and operations. Security teams are left chasing risks that shift as quickly as the technology.
Zscaler ThreatLabz publishes annual research to help enterprises make sense of the fast-evolving AI foundation model landscape. The upcoming ThreatLabz 2026 AI Security Report will provide visibility into organizational AI usage, from the most-used LLMs and applications to regional and industry-specific patterns and risk mitigation strategies. What follows is a sneak peek into some of this year’s preliminary findings through November 2025. The full 2026 AI Security Report, including December 2025 data and deeper analysis, will be available next month. The data and categories shared in this preview reflect the current state of our research findings and may be updated, added to, excluded, or recategorized in the final report.
OpenAI dominates enterprise AI traffic in 2025
Figure 1. Top LLM vendors by AI/ML transactions (January 2025–November 2025)
OpenAI has held the top position among LLM vendors by an overwhelming margin to date in 2025, accounting for 113.6 billion AI/ML transactions, more than three times the transaction volume of its nearest competitor. GPT-5’s August release set a new performance bar across coding assistance, multimodal reasoning, and other capabilities that integrate into business functions. Just as importantly, OpenAI’s expanded Enterprise API portfolio (including stricter privacy controls and model-isolation options) has solidified OpenAI and GPT-powered capabilities as the “default engine” behind countless enterprise AI workflows. Everything from internal copilots to automated research agents now leans heavily on OpenAI’s stack, keeping it far ahead of the rest of the field.
OpenAI’s dominance carries important implications for enterprise leaders, which will be explored in greater detail in the upcoming report:
How vendor concentration impacts risk: The heavy reliance on OpenAI underscores growing vendor dependency within many organizations; transaction flow data shows that businesses may be relying on OpenAI even more than they realize.
Hidden AI uses across workflows: Transaction categories reveal that LLM interaction is no longer limited to visible tools like ChatGPT. AI underpins everything from automated meeting summaries in productivity suites to behind-the-scenes copilots in common SaaS platforms.
Codeium (Windsurf as of April 2025) emerged as the second-largest source of enterprise LLM traffic in 2025, with strong adoption of its proprietary coding-focused models. As enterprises increased their use of AI in software development, Codeium’s models became a go-to option for engineering teams, especially in secure development environments.
Perplexity rose to the #3 position. Not only an AI-powered search assistant, Perplexity is also an LLM provider offering proprietary large language models that power its answer engine.
Anthropic and Google currently round out the top five LLM vendors by transaction volume. Despite generating only a fraction of OpenAI’s activity, both played meaningful and differentiated roles in the 2025 enterprise AI landscape. Anthropic saw expanding adoption of its Claude 3 and 3.5 models over the past year, along with a July launch of Claude for Financial Services that further strengthened its position in compliance-heavy environments. Google also accelerated enterprise adoption through major enhancements to Gemini, including improved multimodal capabilities and security and access controls tailored for corporate deployments. It will be interesting to see how adoption changes as we head into 2026.
Engineering leads AI usage among core enterprise departments
ThreatLabz also mapped AI/ML traffic to a select set of common enterprise departments. Only applications with at least one million transactions and primarily associated with a specific department were included in the following analysis, and percentages reflect usage relative to these departments only, not total enterprise traffic.
Distribution of AI usage across these core departments offers a directional view into enterprise AI adoption:
Suggesting where AI has become operational, not just experimental.
Indicating which business functions generate the highest volume of unique AI activity, signaling deeper integration into day-to-day operations.
Highlighting potential areas of risk, as sensitive functions in R&D, engineering, legal, and finance increasingly depend on AI applications and LLM-driven workflows.
Within this scoped view, Engineering accounts for 47.6% of transactions to date, making it the largest driver of enterprise AI activity among the departments analyzed by ThreatLabz. IT follows at 33.1%. Usage among these teams adds up quickly; everyday tasks like coding, testing, configuration, and system analysis lend themselves to repeated AI interactions. Engineering teams in particular integrate AI into daily build cycles, where even small efficiency gains compound quickly across releases. Marketing ranks third in AI usage among core enterprise departments, with Customer Support, HR, Legal, Sales, and Finance collectively accounting for the remaining share. Regardless of the variance, AI now clearly spans the entire enterprise, driving new efficiencies in workflows and productivity, even as it introduces new security requirements.
High-volume applications demand the highest security attention
2025 has been another year marked by the push-and-pull between rapid AI adoption and the need for more deliberate oversight. Accordingly, the rise in AI transactions has not translated neatly into unrestricted use. In many cases, the applications responsible for the growth in LLM activity are also the ones triggering the most blocks by enterprises.
This trend has played out across many categories of applications, including popular general AI tools like Grammarly and more specialized, function-specific tools like GitHub Copilot. These are just two examples of applications appearing at the top of both transaction volume and block lists. Their proximity to sensitive content (whether business communications or proprietary source code) makes them natural flashpoints for security controls.
The upcoming ThreatLabz 2026 AI Security Report will feature further analysis of blocking trends.
AI threats and vulnerabilities evolve alongside enterprise adoption
As enterprises expand their use of GenAI applications and security teams block more AI traffic, the threat landscape is moving just as quickly. ThreatLabz continues to analyze how AI-driven threats are scaling alongside enterprise adoption. In addition to amplifying familiar techniques like social engineering and malvertising, attackers are beginning to operationalize agentic AI and autonomous attack workflows and to exploit weaknesses in the AI model supply chain itself. The upcoming report will cover AI threats and risks in more detail, along with actionable guidance for enterprise leaders on how to effectively secure usage and stop AI-powered threats.
Coming soon: ThreatLabz 2026 AI Security Report
The findings shared here are just the start. The full ThreatLabz 2026 AI Security Report will be released in late January and will offer comprehensive analysis of the enterprise AI landscape, including:
AI data transfer trends
DLP violations and sensitive data exposure
Industry and regional adoption patterns
Best practices for securing AI
AI is now a fundamental aspect of how almost every business operates. ThreatLabz remains committed to helping enterprises innovate securely and stay ahead of emerging risks. Join us next month for the full report release and get the insights needed to secure your AI-driven future.
Dec 17, 2025 - Lina Romero - The OWASP Top 10 for LLMs was released this year to help security teams understand and mitigate the rising risks to LLMs. In previous blogs, we’ve explored risks 1-9, and today we’ll finally be deep diving into LLM10: Unbounded Consumption. Unbounded Consumption occurs when LLMs allow users to conduct excessive prompt submissions, or to submit overly complex, large, or verbose prompts, leading to resource depletion, potential Denial of Service (DoS) attacks, and more.
An inference is the process that an AI model uses to generate an output based on its training. When a user feeds an LLM a prompt, the LLM generates inferences in response. Follow-up questions trigger more inferences, because each additional interaction builds upon all the inferences, and potentially also previously submitted prompts, required for the previous interactions. Rate limiting controls the number of requests an LLM can receive. When an LLM does not have adequate rate limiting, it can effectively become overwhelmed with inferences and either begin to malfunction, or reach a cap on utilization and stop responding. A part of the LLM application could become unavailable.
In AI security, we often refer to the “CIA” triad, which stands for Confidentiality, Integrity, and Availability. Unbounded Consumption can cause an LLM to fail at the “Availability” part of this equation, which in turn can affect the LLM’s Confidentiality and Integrity. Another way in which Unbounded Consumption can negatively impact an LLM is through Denial of Wallet (DoW). Effectively, attackers will hit the LLM with request upon request, which can run up the bill if rate limiting is not in place. Eventually, these attacks can cause the LLM to reject requests due to the high volume of abnormal activity, which will stop it from working entirely.
Mitigation Methods
Some ways to reduce the risk of Unbounded Consumption include:
Input Validation: ensure that inputs do not exceed reasonable size limits.
Rate Limiting: apply user quotas and limits to restrict requests per user (see the sketch after this list).
Limit Exposure of Logits and Logprobs: obfuscate the exposure of API responses and provide only necessary information to users.
Resource Allocation Management: monitor resource utilization to prevent any single user from exceeding a reasonable limit.
Timeouts and Throttling: set time limits and throttle processing for resource-intensive operations to prevent prolonged resource consumption.
Sandbox Techniques: restrict the LLM’s access to network resources to limit what information it can expose.
Monitoring and Logging: get alerts and continually monitor usage for unusual patterns.
Unbounded Consumption poses a critical risk to LLMs, as it can cause DoS or DoW. However, with proper security measures and training, teams can minimize the risk of Unbounded Consumption in their AI applications. For more information on the rest of the OWASP Top 10 for LLMs, head over to the LLM series on our blog page. And for general information on how to take charge of your own AI security posture, schedule a demo today!
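As a rough illustration of the first two mitigations listed above (input validation and rate limiting), here is a minimal Python sketch of a pre-inference gate. The size and quota thresholds, the check_request helper, and the commented-out llm_client call are illustrative assumptions, not part of the OWASP guidance or any specific product.

import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 8_000        # input validation: reject overly large or verbose prompts
MAX_REQUESTS_PER_MINUTE = 20    # rate limiting: per-user quota
WINDOW_SECONDS = 60

_request_log = defaultdict(deque)  # user_id -> timestamps of recent requests

def check_request(user_id: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason); intended to run before the prompt reaches the model."""
    # Input validation: bound prompt size to cap per-request inference cost.
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt exceeds size limit"

    # Rate limiting: sliding-window count of this user's recent requests.
    now = time.monotonic()
    window = _request_log[user_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False, "rate limit exceeded"

    window.append(now)
    return True, "ok"

allowed, reason = check_request("user-123", "Summarize this document ...")
if not allowed:
    print(f"Request rejected: {reason}")
# else: response = llm_client.generate(prompt)  # placeholder for the real model call

In a production deployment the same checks would typically live in an API gateway or middleware tier, combined with the resource monitoring and timeout controls listed above.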
Explore homomorphic encryption for privacy-preserving analytics in Model Context Protocol (MCP) deployments, addressing post-quantum security challenges. Learn how to secure your AI infrastructure with Gopher Security.
A zero-day vulnerability in SonicWall’s Secure Mobile Access (SMA) 1000 was reportedly exploited in the wild in a chained attack with CVE-2025-23006.
Key takeaways:
CVE-2025-40602 is a local privilege escalation vulnerability in the appliance management console (AMC) of the SonicWall SMA 1000 appliance.
CVE-2025-40602 has been exploited in a chained attack with CVE-2025-23006, a deserialization of untrusted data vulnerability patched in January.
A list of Tenable plugins for this vulnerability can be found on the individual CVE pages for CVE-2025-40602 and CVE-2025-23006.
Background
On December 17, SonicWall published a security advisory (SNWLID-2025-0019) for a newly disclosed vulnerability in its Secure Mobile Access (SMA) 1000 product, a remote access solution.
CVE: CVE-2025-40602
Description: SonicWall SMA 1000 Privilege Escalation Vulnerability
CVSSv3: 6.6
Analysis
CVE-2025-40602 is a local privilege escalation vulnerability in the appliance management console (AMC) of the SonicWall SMA 1000 appliance. An authenticated, remote attacker could exploit this vulnerability to escalate privileges on an affected device. While this flaw on its own requires authentication to exploit, SonicWall's advisory states that CVE-2025-40602 has been exploited in a chained attack with CVE-2025-23006, a deserialization of untrusted data vulnerability patched in January. The combination of these two vulnerabilities would allow an unauthenticated attacker to execute arbitrary code with root privileges.
According to SonicWall, “SonicWall Firewall products are not affected by this vulnerability.”
Historical exploitation of SonicWall vulnerabilities
SonicWall products have been a frequent target for attackers over the years. Specifically, the SMA product line has been targeted in the past by ransomware groups, and it has been featured in the Top Routinely Exploited Vulnerabilities list co-authored by multiple U.S. and international agencies.
Earlier this year, an increase in ransomware activity tied to SonicWall Gen 7 Firewalls was observed. While initially it was believed that a new zero-day may have been the root cause, SonicWall later provided a statement noting that exploitation activity was in relation to CVE-2024-40766, an improper access control vulnerability which had been observed to have been exploited in the wild. More information on this can be found on our blog.
Given the past exploitation of SonicWall devices, we put together the following list of known SMA vulnerabilities that have been exploited in the wild:
At the time this blog was published, no proof-of-concept (PoC) code had been published for CVE-2025-40602. If and when a public PoC exploit becomes available for CVE-2025-40602, we anticipate a variety of attackers will attempt to leverage this flaw as part of their attacks.
Solution
SonicWall has released patches to address this vulnerability as outlined in the table below:
Affected Version: 12.4.3-03093 and earlier | Fixed Version: 12.4.3-03245
Affected Version: 12.5.0-02002 and earlier | Fixed Version: 12.5.0-02283
The advisory also provides a workaround to reduce potential impact. This involves restricting access to the AMC to trusted sources. We recommend reviewing the advisory for the most up to date information on patches and workaround steps.
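For teams doing a quick inventory sweep, a small Python sketch like the one below can compare an SMA 1000 firmware string against the fixed builds in the table above. The parse_version and is_patched helpers, and the assumption that the build suffix compares numerically, are illustrative only; defer to SonicWall's advisory and Tenable plugins for the authoritative determination.

FIXED_BUILDS = {
    (12, 4): (12, 4, 3, 3245),   # 12.4.x branch fixed in 12.4.3-03245
    (12, 5): (12, 5, 0, 2283),   # 12.5.x branch fixed in 12.5.0-02283
}

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a string like '12.4.3-03093' into (12, 4, 3, 3093) for tuple comparison."""
    release, _, build = v.partition("-")
    parts = [int(p) for p in release.split(".")]
    if build:
        parts.append(int(build))
    return tuple(parts)

def is_patched(installed: str) -> bool:
    """True if the installed build is at or above the fixed build for its branch."""
    parsed = parse_version(installed)
    fixed = FIXED_BUILDS.get(parsed[:2])
    if fixed is None:
        return False  # unknown branch: treat as needing manual review
    return parsed >= fixed

print(is_patched("12.4.3-03093"))  # False: affected, upgrade required
print(is_patched("12.5.0-02283"))  # True: at the fixed build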
Identifying affected systems
A list of Tenable plugins for this vulnerability can be found on the individual CVE page for CVE-2025-40602 as they’re released. This link will display all available plugins for this vulnerability, including upcoming plugins in our Plugins Pipeline. In addition, product coverage for CVE-2025-23006 can be found here.
Tenable Attack Surface Management customers are able to identify these assets using a filtered search for SonicWall devices:
Authors, Creators & Presenters: Caihua Li (Yale University), Seung-seob Lee (Yale University), Lin Zhong (Yale University)
PAPER
Blindfold: Confidential Memory Management by Untrusted Operating System
Confidential Computing (CC) has received increasing attention in recent years as a mechanism to protect user data from untrusted operating systems (OSes). Existing CC solutions hide confidential memory from the OS and/or encrypt it to achieve confidentiality. In doing so, they render OS memory optimization unusable or complicate the trusted computing base (TCB) required for optimization. This paper presents our results toward overcoming these limitations, synthesized in a CC design named Blindfold. Like many other CC solutions, Blindfold relies on a small trusted software component running at a higher privilege level than the kernel, called Guardian. It features three techniques that can enhance existing CC solutions. First, instead of nesting page tables, Blindfold's Guardian mediates how the OS accesses memory and handles exceptions by switching page and interrupt tables. Second, Blindfold employs a lightweight capability system to regulate the OS's semantic access to user memory, unifying case-by-case approaches in previous work. Finally, Blindfold provides carefully designed secure ABI for confidential memory management without encryption. We report an implementation of Blindfold that works on ARMv8-A/Linux. Using Blindfold's prototype, we are able to evaluate the cost of enabling confidential memory management by the untrusted Linux kernel. We show Blindfold has a smaller runtime TCB than related systems and enjoys competitive performance. More importantly, we show that the Linux kernel, including all of its memory optimizations except memory compression, can function properly for confidential memory. This requires only about 400 lines of kernel modifications.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
DataDome details how it aligns with CISA’s Secure by Design Pledge, outlining strong authentication, secure defaults, supply chain security, logging, and transparency.
And why most of the arguments do not hold up under scrutiny Over the past 18 to 24 months, venture capital has flowed into a fresh wave of SIEM challengers including Vega (which raised $65M in seed and Series A at a ~$400M valuation), Perpetual Systems, RunReveal, Iceguard, Sekoia, Cybersift, Ziggiz, and Abstract Security, all […]
For years, artificial intelligence sat at the edges of cybersecurity conversations. It appeared in product roadmaps, marketing claims, and isolated detection use cases, but rarely altered the fundamental dynamics between attackers and defenders. That changed in 2025. This year marked a clear inflection point where AI became operational on both sides of the threat landscape.
A series of actively exploited zero-day vulnerabilities affecting Windows, Google Chrome, and Apple platforms was disclosed in mid-December, according to The Hacker News, reinforcing a persistent reality for defenders: attackers no longer wait for exposure windows to close. They exploit them immediately. Unlike large-scale volumetric attacks that announce themselves through disruption, zero-day exploitation operates quietly.
The Monetary Authority of Singapore’s cloud advisory, part of its 2021 Technology Risk Management Guidelines, advises financial institutions to move beyond siloed monitoring to adopt a continuous, enterprise-wide approach. These firms must undergo annual audits. Here’s how Tenable can help.
Key takeaways:
High-stakes compliance: The MAS requires all financial institutions in Singapore to meet mandatory technology risk and cloud security guidelines and document compliance. Non-compliance can lead to severe financial penalties and business restrictions. Any third-party providers used by Singapore financial institutions must also comply with the standards.
The proactive mandate: Compliance requires a shift from static compliance checks to a continuous, proactive approach to managing exposure. This approach is essential for securing the key cloud risk areas mandated by MAS: identity and access management (IAM) and securing applications in the public cloud.
How to get there: Effective risk mitigation means breaking the most dangerous attack paths. Tenable Cloud Security, available in the Tenable One Exposure Management Platform, provides continuous monitoring, eliminates over-privileged permissions, and addresses misconfiguration risk.
Complying with government cybersecurity regulations can lull organizations into a false sense of security and lead to an over-reliance on point-in-time assessments conducted at irregular intervals. While such compliance efforts are essential to pass audits, they may do very little to actually reduce an organization’s risk. On the other hand, government efforts like the robust framework provided by the Monetary Authority of Singapore (MAS), Singapore’s central bank and integrated financial regulator, offer valuable guidance for organizations worldwide to consider as they look to reduce cyber risk.
The cloud advisory highlights key risks and control measures that Singapore’s financial institutions should consider before adopting public cloud services, including:
Developing a public cloud risk management strategy that takes into consideration the unique characteristics of public cloud services
Implementing strong controls in areas such as identity and access management (IAM), cybersecurity, data protection, and cryptographic key management
Expanding cybersecurity operations to include the security of public cloud workloads
Managing cloud resilience, outsourcing, vendor lock-in, and concentration risks
Ensuring the financial institution’s staff have adequate skillsets to manage public cloud workloads and their risks.
The advisory recommends avoiding a siloed approach when performing security monitoring of on-premises apps or infrastructure and public cloud workloads. Instead, it advises financial institutions to “feed cyber-related information on public cloud workloads into their respective enterprise-wide IT security monitoring services to facilitate continuous monitoring and analysis of cyber events.”
Who must comply with MAS TRM and the cloud advisory?
While the MAS TRM guidelines and cloud advisory do not specifically state penalties for compliance failures, they are legally binding. They apply to all financial institutions operating under the authority’s regulation in Singapore, including banks, insurers, fintech firms, payment service providers, and venture capital managers. A financial institution in Singapore that leverages the services of a firm based outside the country must ensure that its service providers also meet the TRM requirements. MAS also factors adherence to the framework into its overall risk assessment of an organization; failure to comply can damage an organization's standing and reputation.
In short, the scope of accountability to the MAS TRM guidelines and cloud advisory is broad.
Complying with the MAS cloud advisory: How Tenable can help
We evaluated how the Tenable One Exposure Management Platform with Tenable Cloud Security can assist organizations in achieving and maintaining compliance with the MAS cloud advisory. Read on to understand two of the cloud advisory’s key focus areas and how to address them effectively with Tenable One — preventing dangerous attack path vectors from compromising sensitive cloud assets.
1. Identity and access management: Enforcing least privilege access
The MAS cloud advisory calls for financial institutions to “enforce the principle of least privilege stringently” when granting access to assets in the public cloud. It further advises firms to consider adopting zero trust principles in the architecture design of applications, where “access to public cloud services and resources is evaluated and granted on a per-request and need-to basis.”
At Tenable, we believe applying least privilege in identity and access management (IAM) is the cornerstone of effective cloud security. In the cloud, excessive permissions on accounts that can access sensitive data are a direct route to a breach.
How Tenable can help: CIEM and sensitive data protection
The Tenable Cloud Security domain within Tenable One offers integrated cloud infrastructure entitlement management (CIEM) that enforces strict least privilege across human and machine identities in Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Oracle Cloud Infrastructure (OCI), and Kubernetes environments.
Eliminate lateral movement: CIEM analyzes policies to identify privilege escalation risks and lateral movement paths, effectively closing dangerous attack vectors.
Data-driven prioritization: Tenable provides automated data classification and correlates sensitive data exposure with overly permissive identities. This ensures remediation focuses on the exposures that threaten your most critical regulated data.
Mandatory controls: The platform automatically monitors for privileged users who lack multi-factor authentication (MFA) and checks for regular access key rotation.
Cutting-edge identity intelligence correlates overprivileged IAM identities with vulnerabilities, misconfigurations, and sensitive data to see where privilege misuse could have the greatest impact. Guided, least-privilege remediation closes these identity exposure gaps. Source: Tenable, December 2025
Here’s a detailed look at how Tenable can help with three of the cloud advisory’s IAM provisions:
MAS cloud advisory item 10: As IAM is the cornerstone of effective cloud security risk management, FIs should enforce the principle of “least privilege” stringently when granting access to information assets in the public cloud.
How Tenable helps: Tenable provides easy visualization of effective permissions through identity intelligence and permission mapping. By querying permissions across identities, you can quickly surface problems and revoke excessive permissions with automatically generated least privilege policies.
MAS cloud advisory item 11: Financial institutions should implement multi-factor authentication (MFA) for staff with privileges to configure public cloud services through the CSPs’ metastructure, especially staff with top-level account privileges (e.g. known as the “root user” or “subscription owner” for some CSPs).
How Tenable helps: Tenable offers detailed monitoring for privileged users, including IAM users who don't have multi-factor authentication (MFA) enabled.
MAS cloud advisory item 12: Credentials used by system/application services for authentication in the public cloud, such as “access keys,” should be changed regularly. If the credentials are not used, they should be deleted immediately.
How Tenable helps: Tenable's audits check for this specific condition. They can identify IAM users whose access keys have not been rotated within a specified time frame (e.g., 90 days). This helps you to quickly identify and address this security vulnerability.
Source: Tenable, December 2025
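To make the checks in items 11 and 12 concrete, here is a minimal, AWS-only Python (boto3) sketch of the underlying logic: flag users without an MFA device and active access keys older than 90 days. It is an illustrative stand-in for what a CIEM platform automates at scale across clouds; the 90-day threshold and the script itself are assumptions, not Tenable functionality.

from datetime import datetime, timezone
import boto3

MAX_KEY_AGE_DAYS = 90
iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]

        # Item 11: privileged users should have MFA enabled.
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"{name}: no MFA device registered")

        # Item 12: access keys should be rotated regularly (or deleted if unused).
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            age_days = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if key["Status"] == "Active" and age_days > MAX_KEY_AGE_DAYS:
                print(f"{name}: access key {key['AccessKeyId']} is {age_days} days old")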
2. Securing applications in the public cloud: Minimizing risk exposure
For financial institutions using microservices and containers, the MAS cloud advisory advises that, to reduce the attack surface, each container include only the core software components needed by the application. The cloud advisory also notes that security tools made for traditional on-premises IT infrastructure (e.g. vulnerability scanners) may not run effectively on containers, and advises financial institutions to adopt container-specific security solutions for preventing, detecting, and responding to container-specific threats. For firms using infrastructure as code (IaC) to provision or manage public cloud workloads, it further calls for implementing controls to minimize the risk of misconfigurations.
At Tenable, we believe this explicit mandate for specialized cloud and container security solutions underscores the need for continuous, accurate risk assessment. Tenable Cloud Security is purpose-built to meet these requirements with full Cloud Security Posture Management (CSPM) and Cloud Workload Protection (CWP) capabilities across your cloud footprint. This ability to see and protect every cloud asset — from code to container — is crucial for enabling contextual prioritization of risk. We also believe that relying solely on static vulnerability scoring systems, like the Common Vulnerability Scoring System (CVSS), is insufficient because it fails to reflect real-world exploitability. To ensure financial institutions focus remediation efforts where they matter most, Tenable Exposure Management, including Tenable Cloud Security, incorporates the Tenable Vulnerability Priority Rating (VPR) — dynamic, predictive risk scoring that allows teams to address the most immediate and exploitable threats first.
How Tenable can help: Container security and cloud-to-code traceability
Tenable unifies cloud workload protection (CWP) with cloud security posture management (CSPM) to provide continuous, contextual risk assessment.
Workload and container security: Tenable provides solutions tailored to your security domain:
For the cloud security professional: Tenable offers robust, agentless cloud workload protection capabilities that continuously scan for, detect and visualize critical risks such as vulnerabilities, sensitive data exposure, malware and misconfigurations across virtual machines, containers and serverless environments.
For the vulnerability management owner: Tenable offers a streamlined solution with unified visibility for hybrid environments, providing the core capabilities to extend vulnerability management best practices to cloud workloads. Tenable Cloud Vulnerability Management ensures agentless multi-cloud coverage, scanning containers in registries (shift-left) and at runtime to prevent the deployment of vulnerable images and detect drift in production.
Cloud-to-code traceability: This unique feature links runtime findings (e.g., an exposed workload) directly back to its IaC source code, allowing for rapid remediation and automated pull requests, minimizing misconfiguration risk as mandated by MAS.
Embed security and compliance throughout the development lifecycle, in DevOps workflows like HashiCorp Terraform and CloudFormation, to minimize risks. Detect issues in the cloud and suggest the fix in code. Source: Tenable, December 2025
Here’s a detailed look at how Tenable can help with two of the cloud advisory’s provisions related to securing applications in the public cloud:
MAS cloud advisory item 19: Applications that run in a public cloud environment may be packaged in containers, especially for applications adopting a microservices architecture. Financial institutions should ensure that each container includes only the core software components needed by the application to reduce the attack surface. As containers typically share a host operating system, financial institutions should run containers with a similar risk profile together (e.g., based on the criticality of the service or the data that are processed) to minimize risk exposure. As security tools made for traditional on-premise[s] IT infrastructure (e.g. vulnerability scanners) may not run effectively on containers, financial institutions should adopt [a] container-specific security solution for preventing, detecting, and responding to container-specific threats.
How Tenable helps: Tenable integrates with your CI/CD pipelines and container registries to provide visibility and control throughout the container lifecycle. Here's how it works:
Tenable scans container images for vulnerabilities, misconfigurations, and malware as they're being built and stored in registries. This is a "shift-left" approach, which means it helps you find and fix security issues early in the development process.
You can create and enforce security policies based on vulnerability scores, the presence of specific malware, or other security criteria.
Tenable's admission controllers act as runtime guardrails, ensuring that the policies you've defined are enforced at the point of deployment. This prevents deployment of images that failed initial scans or have since been found vulnerable, even if a developer tries to bypass the standard process.
MAS cloud advisory item 20: Financial institutions should ensure stringent control over the granting of access to container orchestrators (e.g. Kubernetes), especially the use of the orchestrator administrative account, and the orchestrators’ access to container images. To ensure that only secure container images are used, a container registry could be established to facilitate tracking of container images that have met the financial institution’s security requirements.
How Tenable helps: Tenable's Kubernetes Security Posture Management (KSPM) component continuously scans your Kubernetes resources (like pods, deployments, and namespaces) to identify misconfigurations and policy violations. This allows you to:
Discover and remediate vulnerabilities and misconfigurations before they can be exploited.
Continuously audit your environment against industry standards, like the Center for Internet Security (CIS) benchmarks for Kubernetes.
Get a single, centralized view of your security posture across multiple Kubernetes clusters.
Tenable’s admission controllers act as gatekeepers to your Kubernetes cluster. When a user or a system attempts to deploy a new container image, the admission controller intercepts the request before it's fully scheduled. It then checks the image against your defined security policies. Your policies can be based on factors such as:
Vulnerability scores (e.g., block any image with a critical vulnerability)
Compliance violations (e.g., block images that don't meet a specific security standard)
The presence of malicious software or exposed secrets
If the image violates any of these policies, the admission controller denies the deployment, preventing the vulnerable container from ever reaching production.
Source: Tenable, December 2025
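To illustrate the general pattern an admission controller follows (intercept the request, evaluate each image against policy, then allow or deny), here is a generic Python sketch of the decision logic behind a Kubernetes validating webhook. This is not Tenable's implementation; the lookup_scan_result helper, the severity policy, and the example image names are hypothetical placeholders for a real scanner or registry integration.

SEVERITY_ORDER = ["none", "low", "medium", "high", "critical"]
MAX_ALLOWED_SEVERITY = "high"  # example policy: block anything with critical findings

def lookup_scan_result(image: str) -> dict:
    """Hypothetical scan-result source; in practice this queries a scanner or registry API."""
    fake_db = {
        "registry.example.com/app:1.4.2": {"max_severity": "medium", "secrets_found": False},
        "registry.example.com/app:1.3.0": {"max_severity": "critical", "secrets_found": True},
    }
    # Fail closed for unknown images: treat unscanned images as violating policy.
    return fake_db.get(image, {"max_severity": "critical", "secrets_found": False})

def review(admission_review: dict) -> dict:
    """Build an AdmissionReview response that allows or denies the Pod's images."""
    request = admission_review["request"]
    denials = []
    for container in request["object"]["spec"]["containers"]:
        result = lookup_scan_result(container["image"])
        if SEVERITY_ORDER.index(result["max_severity"]) > SEVERITY_ORDER.index(MAX_ALLOWED_SEVERITY):
            denials.append(f"{container['image']}: vulnerability severity above policy")
        if result["secrets_found"]:
            denials.append(f"{container['image']}: exposed secrets detected")
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request["uid"],
            "allowed": not denials,
            "status": {"message": "; ".join(denials) or "all images pass policy"},
        },
    }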
Gaining the upper hand on MAS compliance through a unified ecosystem view
Tenable One is the market-leading exposure management platform, normalizing, contextualizing, and correlating security signals from all domains, including cloud — across vulnerabilities, misconfigurations, and identities spanning your hybrid estate. Exposure management enables cross-functional alignment between SecOps, DevOps, and governance, risk and compliance (GRC) teams with a shared, unified view of risk.
Tenable Cloud Security, part of Tenable One, unifies vision, insight, and action to support continuous adherence to the MAS cloud advisory across multi-cloud and hybrid environments. Source: Tenable, December 2025
Tenable Cloud Security, part of the Tenable One Exposure Management platform, supports continuous adherence to the MAS cloud advisory and enables risk-based decision-making by eliminating the toxic combinations that attackers exploit. The platform unifies security insight, transforming the effort to achieve compliance from a necessary burden into a strategic advantage.
Raise your hand if you’ve fallen victim to a vendor-led conversation around their latest AI-driven platform over the past calendar year. Keep it up if the pitch leaned on “next-gen,” “market-shaping,” or “best-in-class” while they nudged another product into your stack. If your hand is still up, you are not alone. MSPs are the target because you sit between shrinking budgets and rising risk.
A Chrome browser extension with 6 million users, as well as seven other Chrome and Edge extensions, for months have been silently collecting data from every AI chatbot conversation, packaging it, and then selling it to third parties like advertisers and data brokers, according to Koi Security.
As holiday lights go up and inboxes fill with year-in-review emails, it’s tempting to look back on 2025 as “the year of AI.”
But for security teams, it was something more specific – the year APIs, AI agents, and MCP servers collided across the API fabric, expanding the attack surface faster than most organizations could keep up.
At Salt Security, we spent 2025 focused on one thing: defending the API action layer where AI, applications, and data intersect. And we did it with a steady drumbeat of innovation, a new “gift” for security teams almost every month.
So in the spirit of the season, here’s a look back at Salt’s 12 Months of Innovation – a year-long series of product launches, partnerships, and research milestones designed to help organizations stay ahead of fast-moving threats.
We kicked off the year by shining a harsh light on what many teams already suspected:
APIs now sit at the center of almost every digital initiative.
Zombie and unmanaged APIs still live in production.
Software supply chain dependencies are quietly multiplying risk.
Early 2025 research and thought leadership from Salt Labs showed just how dangerous it is to run modern AI and automation on top of APIs you don’t fully understand or control.
Takeaway: January set the tone – defending tomorrow’s API fabric with yesterday’s tools is no longer an option.
February – A Spotlight on API Reality
In February, we went from “we think we have a problem” to “here are the numbers.”
With the latest State of API Security Report and key industry recognitions such as inclusion in top security lists, Salt brought hard data to boardroom and CISO conversations.
The message was clear:
API traffic is exploding.
Attackers are targeting APIs at scale.
Traditional perimeter and app security are missing critical context.
Takeaway: API security is no longer a niche concern. It’s a business risk that demands strategy, budget, and board-level attention.
March – Gold Medals & Rising Shadows
March blended validation and urgency.
On one side, industry bodies recognized Salt’s leadership with awards like a Gold Globee, underscoring the maturity and impact of our platform.
On the other, new blogs and research highlighted reality on the ground:
Compliance and data privacy pressure are rising.
AI-driven attacks are accelerating, not slowing.
Takeaway: Excellence in API security isn’t just about winning awards; it’s about staying ahead of adversaries who are constantly adapting.
April – A Season of Partnerships & Paradigm Shifts
High-profile AI mishaps, including incidents like the McDonald’s chatbot breach, made one thing painfully obvious: conversational AI and digital experiences are only as safe as the APIs behind them.
Salt responded with:
Deep-dive blogs on AI agent risk and API blind spots.
The launch of Salt Surface, designed to map and prioritize exposed API risk.
Takeaway: 2025 was the year CISOs started asking not just “What APIs do we have?” but “Which of these are exposed, exploitable, and business-critical?”
August – Autonomous Everything
By August, “autonomous” wasn’t just a buzzword; it was a roadmap theme.
Organizations leaned hard into:
Autonomous workflows
AI-driven decisioning
Automated threat detection and response
Salt’s innovation in this space emphasized a key reality: AI, autonomy, and APIs are inseparable.
We advanced protections for autonomous threat hunting and AI-driven security use cases, reinforcing that if APIs are compromised, autonomous systems are too.
Takeaway: You can’t secure autonomous operations if you’re not securing the API action layer that powers them.
Salt introduced the industry’s first solution to secure AI agent actions across APIs and MCP servers, bringing real controls to a problem that had mostly been theoretical.
This meant:
Protection against prompt injection and misuse.
Guardrails around what AI agents can access or execute.
Enforceable policy where it matters: at the API and action level.
Takeaway: The AI agent revolution doesn’t have to be a security nightmare — if you secure the actions, not just the model.
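As a rough illustration of what "enforceable policy at the API and action level" can look like, here is a minimal sketch of an allowlist-style guardrail evaluated before an agent's tool call executes. It is not Salt's implementation; the agent name, tool names, policy shape, and host allowlist are all hypothetical.

```python
# Hypothetical sketch of an action-level guardrail for an AI agent. Not Salt's
# implementation; the policy shape and names below are invented for illustration.

from urllib.parse import urlparse

# Per-agent policy: which tools may be called, and constraints on arguments.
POLICY = {
    "support-bot": {
        "allowed_tools": {"search_tickets", "get_order_status"},
        "allowed_api_hosts": {"api.internal.example.com"},
    }
}

def is_action_allowed(agent: str, tool: str, args: dict) -> tuple[bool, str]:
    """Check a proposed tool call against the agent's policy before executing it."""
    policy = POLICY.get(agent)
    if policy is None:
        return False, f"no policy registered for agent '{agent}'"
    if tool not in policy["allowed_tools"]:
        return False, f"tool '{tool}' not in allowlist for '{agent}'"
    url = args.get("url")
    if url and urlparse(url).hostname not in policy["allowed_api_hosts"]:
        return False, f"outbound call to unapproved host: {url}"
    return True, "allowed"

if __name__ == "__main__":
    # A prompt-injected agent trying to reach an attacker host is stopped at the
    # action layer, regardless of what the model "decided" to do.
    ok, reason = is_action_allowed(
        "support-bot", "get_order_status", {"url": "https://attacker.example.net/exfil"}
    )
    print(ok, "-", reason)
```

The point of the pattern is that enforcement happens on the action itself, outside the model, so a manipulated prompt cannot talk its way past the policy.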
In October, new data from Salt and customer environments revealed how deep the AI + API blind spots really go.
We broke down:
Misconfigurations in AI-driven workflows.
Risky patterns in agentic and MCP deployments.
Common mistakes teams make when bolting AI onto existing architectures.
Through detailed analysis and practical guidance, we helped teams turn confusion into a roadmap for modernizing their security posture.
Takeaway: Education is as important as technology. You can’t fix what you don’t fully understand.
November – Security Starts in Code
November brought a massive step forward in shifting API security left and right at the same time.
We launched:
GitHub Connect - to scan code repositories for shadow APIs, spec mismatches, and insecure patterns before they ship.
MCP Finder - to identify risky MCP configurations and AI-integrated workflows early in the development lifecycle.
Combined with runtime intelligence from the Salt platform, customers could now connect:
What’s being written → What’s being deployed → What’s being exploited
Takeaway: Real API security covers the full lifecycle, from design and code to production traffic and AI-agent actions.
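As a toy illustration of the code-to-spec comparison behind shadow-API detection, the sketch below flags routes that appear in application code but not in the published OpenAPI document. It is not GitHub Connect; the Flask-style route regex and the sample spec are assumptions made for this example.

```python
# Toy sketch of shadow-API detection: compare routes declared in code against a
# published OpenAPI spec. Not Salt's GitHub Connect; Flask-style routes assumed.

import re

ROUTE_PATTERN = re.compile(r"""@app\.route\(\s*["']([^"']+)["']""")

def routes_in_source(source: str) -> set[str]:
    """Extract route paths declared in application code."""
    return set(ROUTE_PATTERN.findall(source))

def shadow_routes(source: str, openapi_spec: dict) -> set[str]:
    """Routes implemented in code but missing from the published spec."""
    documented = set(openapi_spec.get("paths", {}))
    return routes_in_source(source) - documented

if __name__ == "__main__":
    code = '''
@app.route("/api/v1/users")
def users(): ...

@app.route("/api/v1/debug/dump")   # never documented -> shadow API
def dump(): ...
'''
    spec = {"paths": {"/api/v1/users": {}}}
    print("Undocumented routes:", shadow_routes(code, spec))
```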
December – Hello, Pepper
We closed the year with a new kind of experience: Ask Pepper AI.
Ask Pepper AI turns Salt’s platform into a conversational partner, letting users:
Ask natural-language questions about APIs, risks, and threats.
Accelerate investigation and incident response.
Bring complex insights to teams who don’t live inside dashboards.
Alongside MCP protection for AWS WAF, December marked the next stage in our vision: API security that’s not just powerful, but accessible and intuitive.
Takeaway: When security teams can simply ask questions and get meaningful, contextual answers, they move faster, and so does the business.
Looking Ahead: Building on a Year of Innovation
If 2025 was the year APIs fully merged with AI agents, automation, and MCP servers, 2026 will be the year organizations either embrace the API action layer or fall behind those that do.
At Salt Security, our focus remains the same:
See everything - every API, every action, every blind spot.
Understand the context - who’s calling what, from where, and why.
Stop attacks - before they turn into outages, data loss, or brand damage.
The 12 Months of Innovation were just the beginning. The threats are evolving, and so are we.
It’s not always immediately clear why your IP has been listed or how to fix it. To help, we’ve added a new “troubleshooting” step to the IP & Domain Reputation Checker, specifically for those whose IPs have been listed on the Combined Spam Sources (CSS) Blocklist - IPs associated with low-reputation email. Learn how you can diagnose the issue using this new feature.
In an era marked by escalating cyber threats and evolving risk landscapes, organisations face mounting pressure to strengthen their security posture whilst maintaining seamless user experiences. At Thales, we recognise that robust security must be foundational – embedded into products and services by design, not bolted on as an afterthought. This principle underpins our commitment […]
As Artificial Intelligence technology rapidly advances, Large Language Models (LLMs) are being widely adopted across countless domains. However, with this growth comes a critical challenge: LLM security issues are becoming increasingly prominent, posing a major constraint on further development. Governments and regulatory bodies are responding with policies and regulations to ensure the safety and compliance […]
For a long time, DDoS attacks were easy to recognize. They were loud, messy, and built on raw throughput. Attackers controlled massive botnets and flooded targets until bandwidth or infrastructure collapsed. It was mostly a scale problem, not an engineering one. That era is ending. A quieter and far more refined threat has taken its […]
Discover how homomorphic encryption (HE) enhances privacy-preserving model context sharing in AI, ensuring secure data handling and compliance for MCP deployments.
Explore the differences between LDAP and Single Sign-On (SSO) for user authentication. Understand their use cases, benefits, and how they fit into your enterprise security strategy.
Learn how to configure users without OTP login in your applications. This guide covers conditional authentication, account settings, and fallback mechanisms for seamless access.
FOR IMMEDIATE RELEASE Richmond, VA — December 11, 2025 — Assura is proud to announce that it has been named to the MSSP Alert and CyberRisk Alliance partnership’s prestigious Top 250 MSSPs list for 2025, securing the #94 position among the world’s leading Managed Security Service Providers. “Making The Top 100 is an incredible milestone and testament to the…
Researchers with Google Threat Intelligence Group have detected five China-nexus threat groups exploiting the maximum-severity React2Shell flaw to drop a range of malicious payloads, from backdoors to downloaders to tunnelers.
Ambiguity isn't just a challenge. It's a leadership test - and most fail it.
I want to start with something that feels true but gets ignored way too often.
Most of us in leadership roles have a love-hate relationship with ambiguity. We say we embrace it... until it shows up for real. Then we freeze, hedge our words, or pretend we have a plan. Cybersecurity teams deal with ambiguity all the time. It's in threat intel you can't quite trust, in stakeholder demands that swing faster than markets, in patch rollouts that go sideways. But ambiguity isn't a bug to be fixed. It's a condition to be led through.
[Image: A leader facing a foggy maze of digital paths - ambiguity as environment.]
Let's break this down the way I see it, without jazz hands or buzzwords.
Ambiguity isn't uncertainty. It's broader.
Uncertainty is when you lack enough data to decide. Ambiguity is when even the terms of the problem are in dispute. It's not just what we don't know. It's what we can't define yet. In leadership terms, that feels like being handed a puzzle where some pieces aren't even shaped yet. This is classic VUCA territory - volatility, uncertainty, complexity, and ambiguity make up the modern landscape leaders sit in every day.
[Image: The dual nature of ambiguity - logic on one side, uncertainty on the other.]
Here is the blunt truth. Great leaders don't eliminate ambiguity. They engage with it. They treat ambiguity like a partner you've gotta dance with, not a foe to crush.
Ambiguity is a leadership signal
When a situation is ambiguous, it's telling you something. It's saying your models are incomplete, or your language isn't shared, or your team has gaps in context. Stanford researchers and communication experts have been talking about this recently: ambiguity often reflects a gap in the shared mental model across the team. If you're confused, your team probably is too.
A lot of leadership texts treat ambiguity like an enemy of clarity. But that's backward. Ambiguity is the condition that demands sensemaking. Sensemaking is the real work. It's the pattern of dialogue and iteration that leads to shared understanding amid chaos. That means asking the hard questions out loud, not silently wishing for clarity.
If your team seems paralyzed, unclear, or checked out - it might not be them. It might be you.
Leaders model calm confusion
Think about that phrase. Calm confusion. Leaders rarely say, "I don't know." Instead they hedge, hide, or overcommit. But leaders who effectively navigate ambiguity do speak up about what they don't know. Not to sound vulnerable in a soft way, but to anchor the discussion in reality. That model gives permission for others to explore unknowns without fear.
I once watched a director hold a 45-minute meeting to "gain alignment" without once stating the problem. Everyone left more confused than when they walked in. That’s not leadership. That's cover.
There is a delicate balance here. You don't turn every ambiguous situation into a therapy session. Instead, you create boundaries around confusion so the team knows where exploration stops and action begins. Good leaders hold this tension.
Move through ambiguity with frameworks, not polish
Here is a practical bit. One common way to get stuck is treating decisions as if they're singular. But ambiguous situations usually contain clusters of decisions wrapped together. A good framework is to break the big, foggy problem into smaller, more combinable decisions. Clarify what is known, identify the assumptions you are making, and make provisional calls on the rest. Treat them like hypotheses to test, not laws of motion.
In cybersecurity, this looks like mapping your threat intel to scenarios where you know the facts, then isolating the areas of guesswork where your team can experiment or prepare contingencies. It's not clean. But it beats paralysis.
Teams learn differently under ambiguity
If you have ever noticed that your best team members step up in clear crises but shut down when the goals are vague, you're observing humans responding to ambiguity differently. Some thirst for structure. Others thrive in gray zones. As a leader, you want both. You shape the context so self-starters can self-start, and then you steward alignment so the whole group isn't pulling in four directions.
There's a counterintuitive finding in team research: under certain conditions, ambiguity enables better collaborative decision-making because the absence of a single voice forces people to share and integrate knowledge more deeply. But this only works when there is a shared understanding of the task and a culture of open exchange.
Lead ambiguity, don't manage it
Managing ambiguity sounds like you're trying to tighten it up, reduce it, or push it into a box. Leading ambiguity is different. It's about moving with the uncertainty. Encouraging experiments. Turning unknowns into learning loops. Recognizing iterative decision processes rather than linear ones.
And yes, that approach feels messy. Good. Leadership is messy. The only thing worse than ambiguity is false certainty. I've been in too many rooms where leaders pretended to know the answer, only to cost time, credibility, or talent. You can be confident without being certain. That's leadership.
But there's a flip side no one talks about.
Sometimes leaders use ambiguity as a shield. They stay vague, push decisions down the org, and let someone else take the hit if it goes sideways. I've seen this pattern more than once. Leaders who pass the fog downstream and call it empowerment. Except it's not. It's evasion. And it sets people up to fail.
Real leaders see ambiguity for what it is: a moment to step up and mentor. To frame the unknowns, offer scaffolding, and help others think through it with some air cover. The fog is a chance to teach — not disappear.
But the hard truth? Some leaders can't handle the ambiguity themselves. So they deflect. They repackage their own discomfort as a test of independence, when really they're just dodging responsibility. And sometimes, yeah, it feels intentional. They act like ambiguity builds character... but only because they're too insecure or inexperienced to lead through it.
The result is the same: good people get whiplash. Goals shift. Ownership blurs. Trust erodes. And the fog thickens.
There's research on this, too. It's called role ambiguity — when you're not clear on what's expected, what your job even is, or how success gets measured. People in those situations don't just get frustrated. They burn out. They overcompensate for silence. They stop trusting. And productivity tanks. It's not about needing a five-year plan. It's about needing a shared frame to work from. Leadership sets that tone.
Leading ambiguity means owning the fog, not outsourcing it.
Ambiguity isn't a one-off problem. It's a perpetual condition, especially in cybersecurity and executive realms where signals are weak and stakes are high. The real skill isn't clarity. It's resilience. The real job isn't prediction. It's navigation.
Lead through ambiguity by embracing the fog, not burying it. And definitely not dumping it on someone else.
When the fog rolls in, what kind of leader are you really?