
2026 Cyber Predictions: Accelerating AI, Data Sovereignty, and Architecture Rationalization 


2026 marks a critical turning point for cybersecurity leaders as AI-driven threats, data sovereignty mandates, and hybrid infrastructure risks reshape the CISO agenda. Discover the strategic priorities that will define tomorrow’s security posture.

The post 2026 Cyber Predictions: Accelerating AI, Data Sovereignty, and Architecture Rationalization  appeared first on Security Boulevard.

  •  

Private Certificate Authority 101: From Setup to Management

Introduction: Security has become a primary focus in today’s world, which is dominated by computers and technology. Businesses are always on a quest to find better ways to secure their information and messages. Another important component in the field of cyber security is the understanding and management of certificates. These are generally in the form… Read More

The post Private Certificate Authority 101: From Setup to Management appeared first on EncryptedFence by Certera - Web & Cyber Security Blog.

The post Private Certificate Authority 101: From Setup to Management appeared first on Security Boulevard.

  •  

What’s Powering Enterprise AI in 2025: ThreatLabz Report Sneak Peek

As 2025 comes to a close, artificial intelligence (AI) is a clear throughline across enterprise organizations. Many teams are still in the thick of implementing AI or deciding where and how to use it. Keeping up with usage trends and developments on top of that has become increasingly difficult. AI innovation moves fast, and LLMs permeate core workflows across research, communication, development, finance, and operations. Security teams are left chasing risks that shift as quickly as the technology.

Zscaler ThreatLabz publishes annual research to help enterprises make sense of the fast-evolving AI foundation model landscape. The upcoming ThreatLabz 2026 AI Security Report will provide visibility into organizational AI usage, from the most-used LLMs and applications to regional and industry-specific patterns and risk mitigation strategies. What follows is a sneak peek at some of this year’s preliminary findings through November 2025. The full 2026 AI Security Report, including December 2025 data and deeper analysis, will be available next month. The data and categories shared in this preview reflect the current state of our research and may be updated, added to, excluded, or recategorized in the final report.

OpenAI dominates enterprise AI traffic in 2025

Figure 1. Top LLM vendors by AI/ML transactions (January 2025–November 2025)

OpenAI has held the top position among LLM vendors by an overwhelming margin to date in 2025, accounting for 113.6 billion AI/ML transactions, more than three times the transaction volume of its nearest competitor. GPT-5’s August release set a new performance bar across coding assistance, multimodal reasoning, and other capabilities that integrate into business functions. Just as importantly, OpenAI’s expanded Enterprise API portfolio (including stricter privacy controls and model-isolation options) has solidified OpenAI and GPT-powered capabilities as the “default engine” behind countless enterprise AI workflows. Everything from internal copilots to automated research agents now leans heavily on OpenAI’s stack, keeping it far ahead of the rest of the field.

OpenAI’s dominance carries important implications for enterprise leaders, which will be explored in greater detail in the upcoming report:

  • How vendor concentration impacts risk: The heavy reliance on OpenAI underscores growing vendor dependency within many organizations; transaction flow data shows that businesses may be relying on OpenAI even more than they realize.
  • Hidden AI uses across workflows: Transaction categories reveal that LLM interaction is no longer limited to visible tools like ChatGPT. AI underpins everything from automated meeting summaries in productivity suites to behind-the-scenes copilots in common SaaS platforms.

Codeium (Windsurf as of April 2025) emerged as the second-largest source of enterprise LLM traffic in 2025, with strong adoption of its proprietary coding-focused models. As enterprises increased their use of AI in software development, Codeium’s models became a go-to option for engineering teams, especially in secure development environments.

Perplexity rose to the #3 position. Not only an AI-powered search assistant, Perplexity is also an LLM provider offering proprietary large language models that power its answer engine.

Anthropic and Google currently round out the top five LLM vendors by transaction volume. Despite generating only a fraction of OpenAI’s activity, both vendors played meaningful and differentiated roles in the 2025 enterprise AI landscape. Anthropic saw expanding adoption of its Claude 3 and 3.5 models over the past year, along with a July launch of Claude for Financial Services that further strengthened its position in compliance-heavy environments. Google also accelerated enterprise adoption through major enhancements to Gemini, including improved multimodal capabilities and security and access controls tailored for corporate deployments. It will be interesting to see how adoption changes as we head into 2026.

Engineering leads AI usage among core enterprise departments

ThreatLabz also mapped AI/ML traffic to a select set of common enterprise departments. Only applications with at least one million transactions and primarily associated with a specific department were included in the following analysis, and percentages reflect usage relative to these departments only, not total enterprise traffic.

Distribution of AI usage across these core departments offers a directional view into enterprise AI adoption:

  • Suggesting where AI has become operational, not just experimental.
  • Indicating which business functions generate the highest volume of unique AI activity, signaling deeper integration into day-to-day operations.
  • Highlighting potential areas of risk, as sensitive functions in R&D, engineering, legal, and finance increasingly depend on AI applications and LLM-driven workflows.

Within this scoped view, Engineering accounts for 47.6% of transactions to date, making it the largest driver of enterprise AI activity among the departments analyzed by ThreatLabz. IT follows at 33.1%. Usage among these teams adds up quickly; everyday tasks like coding, testing, configuration, and system analysis lend themselves to repeated AI interactions. Engineering teams in particular integrate AI into daily build cycles, where even small efficiency gains compound quickly across releases. Marketing ranks third in AI usage among core enterprise departments, with Customer Support, HR, Legal, Sales, and Finance collectively accounting for the remaining share. Regardless of the variance, AI now clearly spans the entire enterprise, driving new efficiencies in workflows and productivity, even as it introduces new security requirements.

High-volume applications demand the highest security attention

2025 has been another year marked by the push and pull between rapid AI adoption and the need for more deliberate oversight. Accordingly, the rise in AI transactions has not translated neatly into unrestricted use. In many cases, the applications responsible for the growth in LLM activity are also the ones triggering the most blocks by enterprises.

This trend has played out across many categories of applications, including popular general AI tools like Grammarly and more specialized function-specific tools like GitHub Copilot. These are just two examples of applications appearing at the top of both the transaction volume and block lists. Their proximity to sensitive content (whether business communications or proprietary source code) makes them natural flashpoints for security controls.

The upcoming ThreatLabz 2026 AI Security Report will feature further analysis of blocking trends.

AI threats and vulnerabilities evolve alongside enterprise adoption

As enterprises expand their use of GenAI applications and security teams block more AI traffic, the threat landscape is moving just as quickly. ThreatLabz continues to analyze how AI-driven threats are scaling alongside enterprise adoption. In addition to amplifying familiar techniques like social engineering and malvertising, attackers are beginning to operationalize agentic AI and autonomous attack workflows and to exploit weaknesses in the AI model supply chain itself. The upcoming report will cover AI threats and risks in more detail, along with actionable guidance for enterprise leaders on how to effectively secure usage and stop AI-powered threats.

Coming soon: ThreatLabz 2026 AI Security Report

The findings shared here are just the start. The full ThreatLabz 2026 AI Security Report will be released in late January and will offer comprehensive analysis of the enterprise AI landscape, including:

  • AI data transfer trends
  • DLP violations and sensitive data exposure
  • Industry and regional adoption patterns
  • Best practices for securing AI

AI is now a fundamental aspect of how almost every business operates. ThreatLabz remains committed to helping enterprises innovate securely and stay ahead of emerging risks. Join us next month for the full report release and get the insights needed to secure your AI-driven future.

The post What’s Powering Enterprise AI in 2025: ThreatLabz Report Sneak Peek appeared first on Security Boulevard.

  •  

LLM10: Unbounded Consumption – FireTail Blog

Dec 17, 2025 - Lina Romero - The OWASP Top 10 for LLMs was released this year to help security teams understand and mitigate the rising risks to LLMs. In previous blogs, we’ve explored risks 1-9, and today we’ll finally be deep diving into LLM10: Unbounded Consumption.

Unbounded Consumption occurs when LLMs allow users to conduct excessive prompt submissions, or to submit overly complex, large, or verbose prompts, leading to resource depletion, potential Denial of Service (DoS) attacks, and more. An inference is the process that an AI model uses to generate an output based on its training. When a user feeds an LLM a prompt, the LLM generates inferences in response. Follow-up questions trigger more inferences, because each additional interaction builds upon all the inferences, and potentially also previously submitted prompts, required for the previous interactions.

Rate limiting controls the number of requests an LLM can receive. When an LLM does not have adequate rate limiting, it can effectively become overwhelmed with inferences and either begin to malfunction or reach a cap on utilization and stop responding. Part of the LLM application could become unavailable. In AI security, we often refer to the “CIA” triad, which stands for Confidentiality, Integrity, and Availability. Unbounded Consumption can cause an LLM to fail at the “Availability” part of this equation, which in turn can affect the LLM’s Confidentiality and Integrity.

Another way in which Unbounded Consumption can negatively impact an LLM is through Denial of Wallet (DoW). Effectively, attackers will hit the LLM with request upon request, which can run up the bill if rate limiting is not in place. Eventually, these attacks can cause the LLM to reject requests due to the high volume of abnormal activity, which will stop it from working entirely.
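
As a minimal illustration of how a per-user quota and a prompt-size cap can bound this behavior, here is a hedged Python sketch; the handle_prompt entry point and call_model stand-in are hypothetical placeholders, not FireTail’s implementation:

```python
# Hedged sketch: bound LLM resource consumption with a prompt-size cap and a
# per-user requests-per-minute quota. Illustrative only; handle_prompt and
# call_model are hypothetical placeholders, not FireTail's implementation.
import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 8_000       # input validation: reject oversized prompts
MAX_REQUESTS_PER_MINUTE = 20   # rate limiting: per-user quota

_recent_requests = defaultdict(deque)  # client_id -> timestamps of recent calls

def handle_prompt(client_id: str, prompt: str) -> str:
    # Input validation: refuse overly large or verbose prompts outright.
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds size limit")

    # Sliding-window rate limit: drop timestamps older than 60 seconds,
    # then check the per-user quota before doing any inference work.
    now = time.monotonic()
    window = _recent_requests[client_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded; try again later")
    window.append(now)

    return call_model(prompt)

def call_model(prompt: str) -> str:
    # Stand-in for the actual LLM inference call; a real deployment would
    # also enforce timeouts and per-request cost budgets here.
    return "model output"
```

In production these limits would typically sit in an API gateway or proxy in front of the model rather than in application code, but the allow/deny decision is the same.
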
Mitigation Methods
Some ways to reduce the risk of Unbounded Consumption include:

  • Input Validation: ensure that inputs do not exceed reasonable size limits.
  • Rate Limiting: apply user quotas and limits to restrict requests per user.
  • Limit Exposure of Logits and Logprobs: obfuscate API responses and provide only necessary information to users.
  • Resource Allocation Management: monitor resource utilization to prevent any single user from exceeding a reasonable limit.
  • Timeouts and Throttling: set time limits and throttle processing for resource-intensive operations to prevent prolonged resource consumption.
  • Sandbox Techniques: restrict the LLM’s access to network resources to limit what information it can expose.
  • Monitoring and Logging: continually monitor usage for unusual patterns and alert on them.

Unbounded Consumption poses a critical risk to LLMs because it can cause DoS or DoW. However, with proper security measures and training, teams can minimize the risk of Unbounded Consumption in their AI applications. For more information on the rest of the OWASP Top 10 for LLMs, head over to the LLM series on our blog page. And for general information on how to take charge of your own AI security posture, schedule a demo today!

The post LLM10: Unbounded Consumption – FireTail Blog appeared first on Security Boulevard.

  •  

Homomorphic Encryption for Privacy-Preserving MCP Analytics in a Post-Quantum World

Explore homomorphic encryption for privacy-preserving analytics in Model Context Protocol (MCP) deployments, addressing post-quantum security challenges. Learn how to secure your AI infrastructure with Gopher Security.

The post Homomorphic Encryption for Privacy-Preserving MCP Analytics in a Post-Quantum World appeared first on Security Boulevard.

  •  

CVE-2025-40602: SonicWall Secure Mobile Access (SMA) 1000 Zero-Day Exploited

A zero-day vulnerability in SonicWall’s Secure Mobile Access (SMA) 1000 was reportedly exploited in the wild in a chained attack with CVE-2025-23006.

Key takeaways:

  1. CVE-2025-40602 is a local privilege escalation vulnerability in the appliance management console (AMC) of the SonicWall SMA 1000 appliance.
     
  2. CVE-2025-40602 has been exploited in a chained attack with CVE-2025-23006, a deserialization of untrusted data vulnerability patched in January.
     
  3. A list of Tenable plugins for this vulnerability can be found on the individual CVE pages for CVE-2025-40602 and CVE-2025-23006.

Background

On December 17, SonicWall published a security advisory (SNWLID-2025-0019) for a newly disclosed vulnerability in its Secure Mobile Access (SMA) 1000 product, a remote access solution.

CVE Description CVSSv3
CVE-2025-40602 SonicWall SMA 1000 Privilege Escalation Vulnerability 6.6

Analysis

CVE-2025-40602 is a local privilege escalation vulnerability in the appliance management console (AMC) of the SonicWall SMA 1000 appliance. An authenticated, remote attacker could exploit this vulnerability to escalate privileges on an affected device. While this flaw would require authentication to exploit on its own, the advisory from SonicWall states that CVE-2025-40602 has been exploited in a chained attack with CVE-2025-23006, a deserialization of untrusted data vulnerability patched in January. The combination of these two vulnerabilities would allow an unauthenticated attacker to execute arbitrary code with root privileges.

According to SonicWall, “SonicWall Firewall products are not affected by this vulnerability.”

Historical exploitation of SonicWall vulnerabilities

SonicWall products have been a frequent target for attackers over the years. The SMA product line in particular has been targeted by ransomware groups and has been featured in the Top Routinely Exploited Vulnerabilities list co-authored by multiple U.S. and international agencies.

Earlier this year, an increase in ransomware activity tied to SonicWall Gen 7 firewalls was observed. While it was initially believed that a new zero-day may have been the root cause, SonicWall later provided a statement noting that the exploitation activity was related to CVE-2024-40766, an improper access control vulnerability that had previously been exploited in the wild. More information can be found on our blog.

Given the past exploitation of SonicWall devices, we put together the following list of known SMA vulnerabilities that have been exploited in the wild:

CVE Description Tenable Blog Links Year
CVE-2019-7481 SonicWall SMA100 SQL Injection Vulnerability 1 2019
CVE-2019-7483 SonicWall SMA100 Directory Traversal Vulnerability - 2019
CVE-2021-20016 SonicWall SSLVPN SMA100 SQL Injection Vulnerability 1, 2, 3, 4, 5 2021
CVE-2021-20038 SonicWall SMA100 Stack-based Buffer Overflow Vulnerability 1, 2, 3 2021
CVE-2025-23006 SonicWall SMA 1000 Deserialization of Untrusted Data Vulnerability 1 2025
CVE-2024-40766 SonicWall SonicOS Improper Access Control Vulnerability 1 2025

Proof of concept

At the time this blog was published, no proof-of-concept (PoC) code had been published for CVE-2025-40602. If and when a public PoC exploit becomes available for CVE-2025-40602, we anticipate a variety of attackers will attempt to leverage this flaw as part of their attacks.

Solution

SonicWall has released patches to address this vulnerability as outlined in the table below:

Affected Version Fixed Version
12.4.3-03093 and earlier 12.4.3-03245
12.5.0-02002 and earlier 12.5.0-02283
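
As a rough illustration of how the table above can be applied, the following hedged sketch compares an SMA 1000 firmware string against the fixed builds from the advisory; it is illustrative only, not SonicWall or Tenable tooling:

```python
# Hedged sketch: compare an SMA 1000 firmware string (e.g. "12.4.3-03093")
# against the fixed builds in SonicWall advisory SNWLID-2025-0019.
# Illustrative only; not SonicWall or Tenable tooling.

FIXED_BUILDS = {
    (12, 4, 3): (12, 4, 3, 3245),  # fixed in 12.4.3-03245
    (12, 5, 0): (12, 5, 0, 2283),  # fixed in 12.5.0-02283
}

def parse(version: str) -> tuple:
    """Turn '12.4.3-03093' into a comparable tuple (12, 4, 3, 3093)."""
    release, _, build = version.partition("-")
    return tuple(int(part) for part in release.split(".")) + (int(build or 0),)

def appears_vulnerable(version: str) -> bool:
    parsed = parse(version)
    fixed = FIXED_BUILDS.get(parsed[:3])
    if fixed is None:
        return False  # branch not listed in the advisory; verify manually
    return parsed < fixed

print(appears_vulnerable("12.4.3-03093"))  # True: earlier than 12.4.3-03245
print(appears_vulnerable("12.5.0-02283"))  # False: already on the fixed build
```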

The advisory also provides a workaround to reduce potential impact: restricting access to the AMC to trusted sources. We recommend reviewing the advisory for the most up-to-date information on patches and workaround steps.

Identifying affected systems

A list of Tenable plugins for this vulnerability can be found on the individual CVE page for CVE-2025-40602 as they’re released. This link will display all available plugins for this vulnerability, including upcoming plugins in our Plugins Pipeline. In addition, product coverage for CVE-2025-23006 can be found here.

Tenable Attack Surface Management customers are able to identify these assets using a filtered search for SonicWall devices:

[Image: Tenable Attack Surface Management filtered search for SonicWall devices]

Get more information

Join Tenable's Research Special Operations (RSO) Team on Tenable Connect and engage with us in the Threat Roundtable group for further discussions on the latest cyber threats.

Learn more about Tenable One, the Exposure Management Platform for the modern attack surface.

The post CVE-2025-40602: SonicWall Secure Mobile Access (SMA) 1000 Zero-Day Exploited appeared first on Security Boulevard.

  •  

NDSS 2025 – Blindfold: Confidential Memory Management By Untrusted Operating System

Session 6B: Confidential Computing 1

Authors, Creators & Presenters: Caihua Li (Yale University), Seung-seob Lee (Yale University), Lin Zhong (Yale University)

PAPER
Blindfold: Confidential Memory Management by Untrusted Operating System

Confidential Computing (CC) has received increasing attention in recent years as a mechanism to protect user data from untrusted operating systems (OSes). Existing CC solutions hide confidential memory from the OS and/or encrypt it to achieve confidentiality. In doing so, they render OS memory optimization unusable or complicate the trusted computing base (TCB) required for optimization. This paper presents our results toward overcoming these limitations, synthesized in a CC design named Blindfold. Like many other CC solutions, Blindfold relies on a small trusted software component running at a higher privilege level than the kernel, called Guardian. It features three techniques that can enhance existing CC solutions. First, instead of nesting page tables, Blindfold's Guardian mediates how the OS accesses memory and handles exceptions by switching page and interrupt tables. Second, Blindfold employs a lightweight capability system to regulate the OS's semantic access to user memory, unifying case-by-case approaches in previous work. Finally, Blindfold provides carefully designed secure ABI for confidential memory management without encryption. We report an implementation of Blindfold that works on ARMv8-A/Linux. Using Blindfold's prototype, we are able to evaluate the cost of enabling confidential memory management by the untrusted Linux kernel. We show Blindfold has a smaller runtime TCB than related systems and enjoys competitive performance. More importantly, we show that the Linux kernel, including all of its memory optimizations except memory compression, can function properly for confidential memory. This requires only about 400 lines of kernel modifications.


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.

Permalink

The post NDSS 2025 – Blindfold: Confidential Memory Management By Untrusted Operating System appeared first on Security Boulevard.

  •  

Why Venture Capital Is Betting Against Traditional SIEMs

And why most of the arguments do not hold up under scrutiny

Over the past 18 to 24 months, venture capital has flowed into a fresh wave of SIEM challengers including Vega (which raised $65M in seed and Series A at a ~$400M valuation), Perpetual Systems, RunReveal, Iceguard, Sekoia, Cybersift, Ziggiz, and Abstract Security, all […]

The post Why Venture Capital Is Betting Against Traditional SIEMs first appeared on Future of Tech and Security: Strategy & Innovation with Raffy.

The post Why Venture Capital Is Betting Against Traditional SIEMs appeared first on Security Boulevard.

  •  

The Hidden Cost of “AI on Every Alert” (And How to Fix It)

Learn why running AI agents on every SOC alert can spike cloud costs. See how bounded workflows make agentic triage reliable and predictable.

The post The Hidden Cost of “AI on Every Alert” (And How to Fix It) appeared first on D3 Security.

The post The Hidden Cost of “AI on Every Alert” (And How to Fix It) appeared first on Security Boulevard.

  •  

Inside the Global Airline that Eliminated 14,600 SaaS Security Issues with AppOmni

28 apps secured. 37 orgs monitored. 14,600 issues resolved. See how a global airline strengthened SaaS security with AppOmni.

The post Inside the Global Airline that Eliminated 14,600 SaaS Security Issues with AppOmni appeared first on AppOmni.

The post Inside the Global Airline that Eliminated 14,600 SaaS Security Issues with AppOmni appeared first on Security Boulevard.

  •  

Cybersecurity Crossed the AI Rubicon: Why 2025 Marked a Point of No Return

For years, artificial intelligence sat at the edges of cybersecurity conversations. It appeared in product roadmaps, marketing claims, and isolated detection use cases, but rarely altered the fundamental dynamics between attackers and defenders. That changed in 2025. This year marked a clear inflection point where AI became operational on both sides of the threat landscape.

The post Cybersecurity Crossed the AI Rubicon: Why 2025 Marked a Point of No Return appeared first on Seceon Inc.

The post Cybersecurity Crossed the AI Rubicon: Why 2025 Marked a Point of No Return appeared first on Security Boulevard.

  •  

When Zero-Days Go Active: What Ongoing Windows, Chrome, and Apple Exploits Reveal About Modern Intrusion Risk

A series of actively exploited zero-day vulnerabilities affecting Windows, Google Chrome, and Apple platforms was disclosed in mid-December, according to The Hacker News, reinforcing a persistent reality for defenders: attackers no longer wait for exposure windows to close. They exploit them immediately. Unlike large-scale volumetric attacks that announce themselves through disruption, zero-day exploitation operates quietly.

The post When Zero-Days Go Active: What Ongoing Windows, Chrome, and Apple Exploits Reveal About Modern Intrusion Risk appeared first on Seceon Inc.

The post When Zero-Days Go Active: What Ongoing Windows, Chrome, and Apple Exploits Reveal About Modern Intrusion Risk appeared first on Security Boulevard.

  •  

Complying with the Monetary Authority of Singapore’s Cloud Advisory: How Tenable Can Help

The Monetary Authority of Singapore’s cloud advisory, part of its 2021 Technology Risk Management Guidelines, advises financial institutions to move beyond siloed monitoring to adopt a continuous, enterprise-wide approach. These firms must undergo annual audits. Here’s how Tenable can help.

Key takeaways:

  1. High-stakes compliance: The MAS requires all financial institutions in Singapore to meet mandatory technology risk and cloud security guidelines and document compliance. Non-compliance can lead to severe financial penalties and business restrictions. Any third-party providers used by Singapore financial institutions must also comply with the standards.
     
  2. The proactive mandate: Compliance requires a shift from static compliance checks to a continuous, proactive approach to managing exposure. This approach is essential for securing the key cloud risk areas mandated by MAS: identity and access management (IAM) and securing applications in the public cloud.
     
  3. How to get there: Effective risk mitigation means breaking the most dangerous attack paths. Tenable Cloud Security, available in the Tenable One Exposure Management Platform, provides continuous monitoring, eliminates over-privileged permissions, and addresses misconfiguration risk.

Complying with government cybersecurity regulations can lull organizations into a false sense of security and lead to an over-reliance on point-in-time assessments conducted at irregular intervals. While such compliance efforts are essential to pass audits, they may do very little to actually reduce an organization’s risk. On the other hand, government efforts like the robust framework provided by the Monetary Authority of Singapore (MAS), Singapore’s central bank and integrated financial regulator, offer valuable guidance for organizations worldwide to consider as they look to reduce cyber risk. 

The MAS framework is designed to safeguard the integrity of the country's financial systems. The framework is anchored by the MAS Technology Risk Management (TRM) Guidelines, published in January 2021, which cover a wide spectrum of risk management concerns, including IT governance, cyber resilience, incident response, and third-party risk. The TRM guidelines were supplemented by the June 2021 Advisory On Addressing The Technology And Cyber Security Risks Associated With Public Cloud Adoption.

The cloud advisory highlights key risks and control measures that Singapore’s financial institutions should consider before adopting public cloud services, including:

  • Developing a public cloud risk management strategy that takes into consideration the unique characteristics of public cloud services
  • Implementing strong controls in areas such as identity and access management (IAM), cybersecurity, data protection, and cryptographic key management
  • Expanding cybersecurity operations to include the security of public cloud workloads
  • Managing cloud resilience, outsourcing, vendor lock-in, and concentration risks
  • Ensuring the financial institution’s staff have the adequate skillsets to manage public cloud workloads and their risks.

The advisory recommends avoiding a siloed approach when performing security monitoring of on-premises apps or infrastructure and public cloud workloads. Instead, it advises financial institutions to “feed cyber-related information on public cloud workloads into their respective enterprise-wide IT security monitoring services to facilitate continuous monitoring and analysis of cyber events.” 

Who must comply with MAS TRM and the cloud advisory?

While the MAS TRM guidelines and cloud advisory do not specifically state penalties for compliance failures, they are legally binding. They apply to all financial institutions operating under the authority’s regulation in Singapore, including banks, insurers, fintech firms, payment service providers, and venture capital managers. A financial institution in Singapore that leverages the services of a firm based outside the country must ensure that its service providers also meet the TRM requirements. MAS also factors adherence to the framework into its overall risk assessment of an organization; failure to comply can damage an organization's standing and reputation.

In short, the scope of accountability to the MAS TRM guidelines and cloud advisory is broad.

Complying with the MAS cloud advisory: How Tenable can help

We evaluated how the Tenable One Exposure Management Platform with Tenable Cloud Security can assist organizations in achieving and maintaining compliance with the MAS cloud advisory. Read on to understand two of the cloud advisory’s key focus areas and how to address them effectively with Tenable One — preventing dangerous attack path vectors from compromising sensitive cloud assets.

1. Identity and access management: Enforcing least privilege access

The MAS cloud advisory calls for financial institutions to “enforce the principle of least privilege stringently” when granting access to assets in the public cloud. It further advises firms to consider adopting zero trust principles in the architecture design of applications, where “access to public cloud services and resources is evaluated and granted on a per-request and need-to basis.”

At Tenable, we believe applying least privilege in Identity Access Management (IAM) is the cornerstone for effective cloud security. In the cloud, excessive permissions on accounts that can access sensitive data are a direct route to a breach.

How Tenable can help: CIEM and sensitive data protection

The Tenable Cloud Security domain within Tenable One offers integrated cloud infrastructure entitlement management (CIEM) that enforces strict least privilege across human and machine identities in Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Oracle Cloud Infrastructure (OCI), and Kubernetes environments.

  • Eliminate lateral movement: CIEM analyzes policies to identify privilege escalation risks and lateral movement paths, effectively closing dangerous attack vectors.
  • Data-driven prioritization: Tenable provides automated data classification and correlates sensitive data exposure with overly permissive identities. This ensures remediation focuses on the exposures that threaten your most critical regulated data.
  • Mandatory controls: The platform automatically monitors for privileged users who lack multi-factor authentication (MFA) and checks for regular access key rotation.
[Image: Cutting-edge identity intelligence correlates overprivileged IAM identities with vulnerabilities, misconfigurations, and sensitive data to see where privilege misuse could have the greatest impact. Guided, least-privilege remediation closes these identity exposure gaps. Source: Tenable, December 2025]

Here’s a detailed look at how Tenable can help with three of the cloud advisory’s IAM provisions:

MAS cloud advisory item How Tenable helps
10. As IAM is the cornerstone of effective cloud security risk management, FIs should enforce the principle of “least privilege” stringently when granting access to information assets in the public cloud. Tenable provides easy visualization of effective permissions through identity intelligence and permission mapping. By querying permissions across identities, you can quickly surface problems and revoke excessive permissions with automatically generated least privilege policies.
11. Financial institutions should implement multi-factor authentication (MFA) for staff with privileges to configure public cloud services through the CSPs’ metastructure, especially staff with top-level account privileges (e.g. known as the “root user” or “subscription owner” for some CSPs). Tenable offers detailed monitoring for privileged users, including IAM users who don't have multi-factor authentication (MFA) enabled.
12. Credentials used by system/application services for authentication in the public cloud, such as “access keys,” should be changed regularly. If the credentials are not used, they should be deleted immediately. Tenable's audits check for this specific condition. They can identify IAM users whose access keys have not been rotated within a specified time frame (e.g., 90 days), helping you quickly identify and address this exposure (see the sketch after this table).

Source: Tenable, December 2025
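
To make the access-key rotation check in item 12 concrete, here is a minimal, hedged sketch using standard AWS boto3 IAM calls to flag keys older than 90 days; it is illustrative only and not Tenable's audit logic:

```python
# Hedged sketch: flag IAM access keys that have not been rotated in 90 days,
# using standard AWS boto3 IAM calls. Illustrative only; not Tenable's audit logic.
from datetime import datetime, timezone

import boto3

MAX_KEY_AGE_DAYS = 90

def stale_access_keys():
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    findings = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            metadata = iam.list_access_keys(UserName=user["UserName"])
            for key in metadata["AccessKeyMetadata"]:
                age_days = (now - key["CreateDate"]).days
                if age_days > MAX_KEY_AGE_DAYS:
                    findings.append((user["UserName"], key["AccessKeyId"], age_days))
    return findings

if __name__ == "__main__":
    for user_name, key_id, age_days in stale_access_keys():
        print(f"{user_name}: access key {key_id} is {age_days} days old; rotate or delete it")
```

The sketch only shows what the underlying control verifies; per the table above, Tenable's audits surface the same condition without custom scripting.
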

2. Securing applications in the public cloud: Minimizing risk exposure

For financial institutions using microservices and containers, the MAS cloud advisory advises that, to reduce the attack surface, each container should include only the core software components needed by the application. The cloud advisory also notes that security tools made for traditional on-premises IT infrastructure (e.g., vulnerability scanners) may not run effectively on containers, and advises financial institutions to adopt container-specific security solutions for preventing, detecting, and responding to container-specific threats. For firms using infrastructure as code (IaC) to provision or manage public cloud workloads, it further calls for implementing controls to minimize the risk of misconfigurations.

At Tenable, we believe this explicit mandate for specialized cloud and container security solutions underscores the need for continuous, accurate risk assessment. Tenable Cloud Security is purpose-built to meet these requirements with full Cloud Security Posture Management (CSPM) and Cloud Workload Protection (CWP) capabilities across your cloud footprint. This ability to see and protect every cloud asset — from code to container — is crucial for enabling contextual prioritization of risk. We also believe that relying solely on static vulnerability scoring systems, like the Common Vulnerability Scoring System (CVSS), is insufficient because they fail to reflect real-world exploitability. To ensure financial institutions focus remediation efforts where they matter most, Tenable Exposure Management, including Tenable Cloud Security, incorporates the Tenable Vulnerability Priority Rating (VPR) — dynamic, predictive risk scoring that allows teams to address the most immediate and exploitable threats first.

How Tenable can help: Container security and cloud-to-code traceability

Tenable unifies cloud workload protection (CWP) with cloud security posture management (CSPM) to provide continuous, contextual risk assessment.

  • Workload and container security: Tenable provides solutions tailored to your security domain:
    • For the cloud security professional: Tenable offers robust, agentless cloud workload protection capabilities that continuously scan for, detect and visualize critical risks such as vulnerabilities, sensitive data exposure, malware and misconfigurations across virtual machines, containers and serverless environments.
    • For the vulnerability management owner: Tenable offers a streamlined solution with unified visibility for hybrid environments, providing the core capabilities to extend vulnerability management best practices to cloud workloads. Tenable Cloud Vulnerability Management ensures agentless multi-cloud coverage, scanning containers in registries (shift left) and at runtime to prevent the deployment of vulnerable images and detect drift in production.
  • Cloud-to-code traceability: This unique feature links runtime findings (e.g., an exposed workload) directly back to its IaC source code, allowing for rapid remediation and automated pull requests, minimizing misconfiguration risk as mandated by MAS.
[Image: Embed security and compliance throughout the development lifecycle, in DevOps workflows like HashiCorp Terraform and CloudFormation, to minimize risks. Detect issues in the cloud and suggest the fix in code. Source: Tenable, December 2025]

Here’s a detailed look at how Tenable can help with two of the cloud advisory’s provisions related to securing applications in the public cloud:

MAS cloud advisory item How Tenable helps
19. Applications that run in a public cloud environment may be packaged in containers, especially for applications adopting a microservices architecture. Financial institutions should ensure that each container includes only the core software components needed by the application to reduce the attack surface. As containers typically share a host operating system, financial institutions should run containers with a similar risk profile together (e.g., based on the criticality of the service or the data that are processed) to minimize risk exposure. As security tools made for traditional on-premise[s] IT infrastructure (e.g. vulnerability scanners) may not run effectively on containers, financial institutions should adopt [a] container-specific security solution for preventing, detecting, and responding to container-specific threats.

Tenable integrates with your CI/CD pipelines and container registries to provide visibility and control throughout the container lifecycle. Here's how it works:

  • Tenable scans container images for vulnerabilities, misconfigurations, and malware as they're being built and stored in registries. This is a "shift-left" approach, which means it helps you find and fix security issues early in the development process.
  • You can create and enforce security policies based on vulnerability scores, the presence of specific malware, or other security criteria.
  • Tenable's admission controllers act as runtime guardrails, ensuring that the policies you've defined are enforced at the point of deployment. This prevents deployment of images that failed initial scans or have since been found vulnerable, even if a developer tries to bypass the standard process.
20. Financial institutions should ensure stringent control over the granting of access to container orchestrators (e.g. Kubernetes), especially the use of the orchestrator administrative account, and the orchestrators’ access to container images. To ensure that only secure container images are used, a container registry could be established to facilitate tracking of container images that have met the financial institution’s security requirements.

Tenable's Kubernetes Security Posture Management (KSPM) component continuously scans your Kubernetes resources (like pods, deployments, and namespaces) to identify misconfigurations and policy violations. This allows you to:

  • Discover and remediate vulnerabilities and misconfigurations before they can be exploited.
  • Continuously audit your environment against industry standards, like the Center for Internet Security (CIS) benchmarks for Kubernetes.
  • Get a single, centralized view of your security posture across multiple Kubernetes clusters.

Tenable’s admission controllers act as gatekeepers to your Kubernetes cluster. When a user or a system attempts to deploy a new container image, the admission controller intercepts the request before it's fully scheduled. It then checks the image against your defined security policies. Your policies can be based on factors such as:

  • Vulnerability scores (e.g., block any image with a critical vulnerability)
  • Compliance violations (e.g., block images that don't meet a specific security standard)
  • The presence of malicious software or exposed secrets

If the image violates any of these policies, the admission controller denies the deployment, preventing the vulnerable container from ever reaching production.

Source: Tenable, December 2025
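
To make the admission-control flow described above concrete, here is a minimal, hedged sketch of a validating admission webhook that denies Pod creation when an image has critical findings. The SCAN_RESULTS lookup is a hypothetical placeholder, and this is not Tenable's admission controller:

```python
# Hedged sketch of a validating admission webhook that denies Pod creation when
# a container image has critical findings. Illustrative only; SCAN_RESULTS is a
# hypothetical placeholder and this is not Tenable's admission controller.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical scan results keyed by image reference.
SCAN_RESULTS = {
    "registry.example.com/payments:1.4.2": {"critical_vulns": 3, "malware": False},
}

def review(admission_review: dict) -> dict:
    request = admission_review["request"]
    containers = request["object"]["spec"].get("containers", [])
    blocked = [
        c["image"]
        for c in containers
        if SCAN_RESULTS.get(c["image"], {}).get("critical_vulns", 0) > 0
        or SCAN_RESULTS.get(c["image"], {}).get("malware", False)
    ]
    response = {"uid": request["uid"], "allowed": not blocked}
    if blocked:
        response["status"] = {"message": f"policy violation, blocked images: {blocked}"}
    return {"apiVersion": "admission.k8s.io/v1", "kind": "AdmissionReview", "response": response}

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps(review(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # A real admission webhook must be served over TLS and registered with the
    # cluster via a ValidatingWebhookConfiguration; plain HTTP here for brevity.
    HTTPServer(("0.0.0.0", 8443), WebhookHandler).serve_forever()
```

The same pattern extends to checks for exposed secrets or failed compliance policies; the decision point is always the allowed field returned to the API server.
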

Gaining the upper hand on MAS compliance through a unified ecosystem view

Tenable One is the market-leading exposure management platform, normalizing, contextualizing, and correlating security signals from all domains, including cloud — across vulnerabilities, misconfigurations, and identities spanning your hybrid estate. Exposure management enables cross-functional alignment between SecOps, DevOps, and governance, risk and compliance (GRC) teams with a shared, unified view of risk.

[Image: Tenable Cloud Security, part of Tenable One, unifies vision, insight, and action to support continuous adherence to the MAS cloud advisory across multi-cloud and hybrid environments. Source: Tenable, December 2025]

Tenable Cloud Security, part of the Tenable One Exposure Management platform, supports continuous adherence to the MAS cloud advisory and enables risk-based decision-making by eliminating the toxic combinations that attackers exploit. The platform unifies security insight, transforming the effort to achieve compliance from a necessary burden into a strategic advantage.

Learn more

The post Complying with the Monetary Authority of Singapore’s Cloud Advisory: How Tenable Can Help appeared first on Security Boulevard.

  •  

MSP Automation Isn’t Optional, But it Isn’t the Answer to Everything

Raise your hand if you’ve fallen victim to a vendor-led conversation around their latest AI-driven platform over the past calendar year. Keep it up if the pitch leaned on “next-gen,” “market-shaping,” or “best-in-class” while they nudged another product into your stack. If your hand is still up, you are not alone. MSPs are the target because you sit between shrinking budgets and rising risk.

The post MSP Automation Isn’t Optional, But it Isn’t the Answer to Everything appeared first on Security Boulevard.

  •  

Google Chrome Extension is Intercepting Millions of Users’ AI Chats

A Chrome browser extension with 6 million users, as well as seven other Chrome and Edge extensions, for months have been silently collecting data from every AI chatbot conversation, packaging it, and then selling it to third parties like advertisers and data brokers, according to Koi Security.

The post Google Chrome Extension is Intercepting Millions of Users’ AI Chats appeared first on Security Boulevard.

  •  

The 12 Months of Innovation: How Salt Security Helped Rewrite API & AI Security in 2025

As holiday lights go up and inboxes fill with year-in-review emails, it’s tempting to look back on 2025 as “the year of AI.”

But for security teams, it was something more specific – the year APIs, AI agents, and MCP servers collided across the API fabric, expanding the attack surface faster than most organizations could keep up.

At Salt Security, we spent 2025 focused on one thing: defending the API action layer where AI, applications, and data intersect. And we did it with a steady drumbeat of innovation, a new “gift” for security teams almost every month.

So in the spirit of the season, here’s a look back at Salt’s 12 Months of Innovation – a year-long series of product launches, partnerships, and research milestones designed to help organizations stay ahead of fast-moving threats.

January – The Year Kicks Off with APIs at the Center

We kicked off the year by shining a harsh light on what many teams already suspected:

  • APIs now sit at the center of almost every digital initiative.
  • Zombie and unmanaged APIs still live in production.
  • Software supply chain dependencies are quietly multiplying risk.

Early 2025 research and thought leadership from Salt Labs showed just how dangerous it is to run modern AI and automation on top of APIs you don’t fully understand or control.

Takeaway: January set the tone – defending tomorrow’s API fabric with yesterday’s tools is no longer an option.

February – A Spotlight on API Reality

In February, we went from “we think we have a problem” to “here are the numbers.”

With the latest State of API Security Report and key industry recognitions such as inclusion in top security lists, Salt brought hard data to boardroom and CISO conversations.

The message was clear:

  • API traffic is exploding.
  • Attackers are targeting APIs at scale.
  • Traditional perimeter and app security are missing critical context.

Takeaway: API security is no longer a niche concern. It’s a business risk that demands strategy, budget, and board-level attention.

March – Gold Medals & Rising Shadows

March blended validation and urgency.

On one side, industry bodies recognized Salt’s leadership with awards like a Gold Globee, underscoring the maturity and impact of our platform.

On the other, new blogs and research highlighted reality on the ground:

  • Compliance and data privacy pressure are rising.
  • AI-driven attacks are accelerating, not slowing.

Takeaway: Excellence in API security isn’t just about winning awards, it’s about staying ahead of adversaries who are constantly adapting.

April – A Season of Partnerships & Paradigm Shifts

In April, collaboration took center stage.

We deepened integrations with leading platforms such as CrowdStrike and expanded support for modern ecosystems, including MCP server–driven architectures.

By weaving Salt API intelligence into tools security teams already rely on, we helped customers:

  • Gain richer, real-time context.
  • Simplify deployment and operations.
  • Extend protections into their existing workflows.

Takeaway: API and AI security are team sports. Partnerships and integrations turn siloed tools into a cohesive defense fabric.

May – The Cloud Era Gets Real

By May, the conversation had shifted from “we’re moving to the cloud” to “our entire business depends on it.”

Salt expanded coverage and governance capabilities for leading cloud environments and partners, helping customers:

  • Align API security with cyber insurance and regulatory expectations.
  • Build stronger posture governance and risk-management processes.
  • Translate technical API risk into board-ready language.

Takeaway: In 2025, API security moved squarely into the boardroom as a core pillar of enterprise risk.

June – Illuminate Everything

June was all about turning on the lights.

We launched Salt Illuminate and expanded Cloud Connect, giving customers the ability to:

  • Discover APIs across complex, hybrid, and multi-cloud environments.
  • Spot shadow, zombie, and unmanaged APIs in minutes instead of months.
  • Build a live inventory that actually stays current.

Takeaway: You can’t protect what you can’t see. Illuminate gave teams the visibility foundation they’ve been missing.

July – CISOs Sound the Alarm

In July, the stakes became very real.

High-profile AI mishaps, including incidents like the McDonald’s chatbot breach, made one thing painfully obvious: conversational AI and digital experiences are only as safe as the APIs behind them.

Salt responded with:

  • Deep-dive blogs on AI agent risk and API blind spots.
  • The launch of Salt Surface, designed to map and prioritize exposed API risk.

Takeaway: 2025 was the year CISOs started asking not just “What APIs do we have?” but “Which of these are exposed, exploitable, and business-critical?”

August – Autonomous Everything

By August, “autonomous” wasn’t just a buzzword, it was a roadmap theme.

Organizations leaned hard into:

  • Autonomous workflows
  • AI-driven decisioning
  • Automated threat detection and response

Salt’s innovation in this space emphasized a key reality: AI, autonomy, and APIs are inseparable.

We advanced protections for autonomous threat hunting and AI-driven security use cases, reinforcing that if APIs are compromised, autonomous systems are too.

Takeaway: You can’t secure autonomous operations if you’re not securing the API action layer that powers them.

September – Securing the AI Agent Revolution

September was a turning point.

Salt introduced the industry’s first solution to secure AI agent actions across APIs and MCP servers, bringing real controls to a problem that had mostly been theoretical.

This meant:

  • Protection against prompt injection and misuse.
  • Guardrails around what AI agents can access or execute.
  • Enforceable policy where it matters: at the API and action level.

Takeaway: The AI agent revolution doesn’t have to be a security nightmare — if you secure the actions, not just the model.

October – The Blind Spots Strike Back

In October, new data from Salt and customer environments revealed how deep the AI + API blind spots really go.

We broke down:

  • Misconfigurations in AI-driven workflows.
  • Risky patterns in agentic and MCP deployments.
  • Common mistakes teams make when bolting AI onto existing architectures.

Through detailed analysis and practical guidance, we helped teams turn confusion into a roadmap for modernizing their security posture.

Takeaway: Education is as important as technology. You can’t fix what you don’t fully understand.

November – Security Starts in Code

November brought a massive step forward in shifting API security left and right at the same time.

We launched:

  • GitHub Connect - to scan code repositories for shadow APIs, spec mismatches, and insecure patterns before they ship.
  • MCP Finder - to identify risky MCP configurations and AI-integrated workflows early in the development lifecycle.

Combined with runtime intelligence from the Salt platform, customers could now connect:

  • What’s being written → What’s being deployed → What’s being exploited

Takeaway: Real API security covers the full lifecycle, from design and code to production traffic and AI-agent actions.

December – Hello, Pepper

We closed the year with a new kind of experience: Ask Pepper AI.

Ask Pepper AI turns Salt’s platform into a conversational partner, letting users:

  • Ask natural-language questions about APIs, risks, and threats.
  • Accelerate investigation and incident response.
  • Bring complex insights to teams who don’t live inside dashboards.

Alongside MCP protection for AWS WAF, December marked the next stage in our vision: API security that’s not just powerful, but accessible and intuitive.

Takeaway: When security teams can simply ask questions and get meaningful, contextual answers, they move faster, and so does the business.

Looking Ahead: Building on a Year of Innovation

If 2025 was the year APIs fully merged with AI agents, automation, and MCP servers, 2026 will be the year organizations either embrace the API action layer or fall behind those that do.

At Salt Security, our focus remains the same:

  • See everything - every API, every action, every blind spot.
  • Understand the context - who’s calling what, from where, and why.
  • Stop attacks - before they turn into outages, data loss, or brand damage.

The 12 Months of Innovation were just the beginning. The threats are evolving, and so are we.

If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security's research team and learn what attackers already know.

The post The 12 Months of Innovation: How Salt Security Helped Rewrite API & AI Security in 2025 appeared first on Security Boulevard.

  •  

New Feature | Spamhaus Reputation Checker: Troubleshoot your listing

It’s not always immediately clear why your IP has been listed or how to fix it. To help, we’ve added a new “troubleshooting” step to the IP & Domain Reputation Checker, specifically for those whose IPs have been listed on the Combined Spam Sources (CSS) Blocklist - IPs associated with low-reputation email. Learn how you can diagnose the issue using this new feature.

The post New Feature | Spamhaus Reputation Checker: Troubleshoot your listing appeared first on Security Boulevard.

  •  

Security by Design: Why Multi-Factor Authentication Matters More Than Ever

In an era marked by escalating cyber threats and evolving risk landscapes, organisations face mounting pressure to strengthen their security posture whilst maintaining seamless user experiences. At Thales, we recognise that robust security must be foundational – embedded into products and services by design, not bolted on as an afterthought. This principle underpins our commitment […]

The post Security by Design: Why Multi-Factor Authentication Matters More Than Ever appeared first on Blog.

The post Security by Design: Why Multi-Factor Authentication Matters More Than Ever appeared first on Security Boulevard.

  •  

SHARED INTEL Q&A: This is how ‘edge AI’ is forcing a rethink of trust, security and resilience

A seismic shift in digital systems is underway — and most people are missing it.

Related: Edge AI at the chip layer

While generative AI demos and LLM hype steal the spotlight, enterprise infrastructure is being quietly re-architected, not from … (more…)

The post SHARED INTEL Q&A: This is how ‘edge AI’ is forcing a rethink of trust, security and resilience first appeared on The Last Watchdog.

The post SHARED INTEL Q&A: This is how ‘edge AI’ is forcing a rethink of trust, security and resilience appeared first on Security Boulevard.

  •  

Securing the AI Revolution: NSFOCUS LLM Security Protection Solution

As Artificial Intelligence technology rapidly advances, Large Language Models (LLMs) are being widely adopted across countless domains. However, with this growth comes a critical challenge: LLM security issues are becoming increasingly prominent, posing a major constraint on further development. Governments and regulatory bodies are responding with policies and regulations to ensure the safety and compliance […]

The post Securing the AI Revolution: NSFOCUS LLM Security Protection Solution appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks.

The post Securing the AI Revolution: NSFOCUS LLM Security Protection Solution appeared first on Security Boulevard.

  •  

The Rise of Precision Botnets in DDoS

For a long time, DDoS attacks were easy to recognize. They were loud, messy, and built on raw throughput. Attackers controlled massive botnets and flooded targets until bandwidth or infrastructure collapsed. It was mostly a scale problem, not an engineering one. That era is ending. A quieter and far more refined threat has taken its […]

The post The Rise of Precision Botnets in DDoS appeared first on Security Boulevard.

  •  

Assura Named to MSSP Alert and Cyber Alliance’s 2025 “Top 250 MSSPs,” Ranking at Number 94

FOR IMMEDIATE RELEASE Richmond, VA — December 11, 2025 — Assura is proud to announce that it has been named to the MSSP Alert and CyberRisk Alliance partnership’s prestigious Top 250 MSSPs list for 2025, securing the #94 position among the world’s leading Managed Security Service Providers. “Making The Top 100 is an incredible milestone and testament to the… Continue reading Assura Named to MSSP Alert and Cyber Alliance’s 2025 “Top 250 MSSPs,” Ranking at Number 94

The post Assura Named to MSSP Alert and Cyber Alliance’s 2025 “Top 250 MSSPs,” Ranking at Number 94 appeared first on Security Boulevard.

  •  

Google Finds Five China-Nexus Groups Exploiting React2Shell Flaw

Researchers with Google Threat Intelligence Group have detected five China-nexus threat groups exploiting the maximum-severity React2Shell flaw to drop a number of malicious payloads, from backdoors to downloaders to tunnelers.

The post Google Finds Five China-Nexus Groups Exploiting React2Shell Flaw appeared first on Security Boulevard.

  •  

Leading Through Ambiguity: Decision-Making in Cybersecurity Leadership

Ambiguity isn't just a challenge. It's a leadership test - and most fail it.

I want to start with something that feels true but gets ignored way too often.

Most of us in leadership roles have a love-hate relationship with ambiguity. We say we embrace it... until it shows up for real. Then we freeze, hedge our words, or pretend we have a plan. Cybersecurity teams deal with ambiguity all the time. It's in threat intel you can't quite trust, in stakeholder demands that swing faster than markets, in patch rollouts that go sideways. But ambiguity isn't a bug to be fixed. It's a condition to be led through.

[Image: A leader facing a foggy maze of digital paths - ambiguity as environment.]

Let's break this down the way I see it, without jazz hands or buzzwords.

Ambiguity isn't uncertainty. It's broader.

Uncertainty is when you lack enough data to decide. Ambiguity is when even the terms of the problem are in dispute. It's not just what we don't know. It's what we can't define yet. In leadership terms, that feels like being handed a puzzle where some pieces aren't even shaped yet. This is classic VUCA territory - volatility, uncertainty, complexity, and ambiguity make up the modern landscape leaders sit in every day.

[Image: The dual nature of ambiguity - logic on one side, uncertainty on the other.]

Here is the blunt truth. Great leaders don't eliminate ambiguity. They engage with it. They treat ambiguity like a partner you've gotta dance with, not a foe to crush.

Ambiguity is a leadership signal  

When a situation is ambiguous, it's telling you something. It's saying your models are incomplete, or your language isn't shared, or your team has gaps in context. Stanford researchers and communication experts have been talking about this recently: ambiguity often reflects a gap in the shared mental model across the team. If you're confused, your team probably is too.

A lot of leadership texts treat ambiguity like an enemy of clarity. But that's backward. Ambiguity is the condition that demands sensemaking. Sensemaking is the real work. It's the pattern of dialogue and iteration that leads to shared understanding amid chaos. That means asking the hard questions out loud, not silently wishing for clarity.

If your team seems paralyzed, unclear, or checked out - it might not be them. It might be you.

Leaders model calm confusion  

Think about that phrase. Calm confusion. Leaders rarely say, "I don't know." Instead they hedge, hide, or overcommit. But leaders who effectively navigate ambiguity do speak up about what they don't know. Not to sound vulnerable in a soft way, but to anchor the discussion in reality. That modeling gives others permission to explore unknowns without fear.

I once watched a director hold a 45-minute meeting to "gain alignment" without once stating the problem. Everyone left more confused than when they walked in. That’s not leadership. That's cover.

There is a delicate balance here. You don't turn every ambiguous situation into a therapy session. Instead, you create boundaries around confusion so the team knows where exploration stops and action begins. Good leaders hold this tension.

Move through ambiguity with frameworks, not polish  

Here is a practical bit. One common way to get stuck is treating decisions as if they're singular. But ambiguous situations usually contain clusters of decisions wrapped together. A good framework is to break the big, foggy problem into smaller, more combinable decisions. Clarify what is known, identify the assumptions you are making, and make provisional calls on the rest. Treat them like hypotheses to test, not laws of motion.

In cybersecurity, this looks like mapping your threat intel to scenarios where you know the facts, then isolating the areas of guesswork where your team can experiment or prepare contingencies. It's not clean. But it beats paralysis.

Teams learn differently under ambiguity  

If you have ever noticed that your best team members step up in times of clear crises, but shut down when the goals are vague, you're observing humans responding to ambiguity differently. Some thirst for structure. Others thrive in gray zones. As a leader, you want both. You shape the context so self-starters can self-start, and then you steward alignment so the whole group isn't pulling in four directions.

There's a counterintuitive finding in team research: under certain conditions, ambiguity enables better collaborative decision making because the absence of a single voice forces people to share and integrate knowledge more deeply. But this only works when there is a shared understanding of the task and a culture of open exchange.

Lead ambiguity, don't manage it  

Managing ambiguity sounds like you're trying to tighten it up, reduce it, or push it into a box. Leading ambiguity is different. It's about moving with the uncertainty. Encouraging experiments. Turning unknowns into learning loops. Recognizing iterative decision processes rather than linear ones.

And yes, that approach feels messy. Good. Leadership is messy. The only thing worse than ambiguity is false certainty. I've been in too many rooms where leaders pretended to know the answer, only to cost time, credibility, or talent. You can be confident without being certain. That's leadership.

But there's a flip side no one talks about.

Sometimes leaders use ambiguity as a shield. They stay vague, push decisions down the org, and let someone else take the hit if it goes sideways. I've seen this pattern more than once. Leaders who pass the fog downstream and call it empowerment. Except it's not. It's evasion. And it sets people up to fail.

Real leaders see ambiguity for what it is: a moment to step up and mentor. To frame the unknowns, offer scaffolding, and help others think through it with some air cover. The fog is a chance to teach — not disappear.

But the hard truth? Some leaders can't handle the ambiguity themselves. So they deflect. They repackage their own discomfort as a test of independence, when really they're just dodging responsibility. And sometimes, yeah, it feels intentional. They act like ambiguity builds character... but only because they're too insecure or inexperienced to lead through it.

The result is the same: good people get whiplash. Goals shift. Ownership blurs. Trust erodes. And the fog thickens.

There's research on this, too. It's called role ambiguity — when you're not clear on what's expected, what your job even is, or how success gets measured. People in those situations don't just get frustrated. They burn out. They overcompensate for silence. They stop trusting. And productivity tanks. It's not about needing a five-year plan. It's about needing a shared frame to work from. Leadership sets that tone.

Leading ambiguity means owning the fog, not outsourcing it.

Ambiguity isn't a one-off problem. It's a perpetual condition, especially in cybersecurity and executive realms where signals are weak and stakes are high. The real skill isn't clarity. It's resilience. The real job isn't prediction. It's navigation.

Lead through ambiguity by embracing the fog, not burying it. And definitely not dumping it on someone else.

When the fog rolls in, what kind of leader are you really?

The post Leading Through Ambiguity: Decision-Making in Cybersecurity Leadership appeared first on Security Boulevard.

  •  

NDSS 2025 – Selective Data Protection against Memory Leakage Attacks for Serverless Platforms

Session 6B: Confidential Computing 1

Authors, Creators & Presenters: Maryam Rostamipoor (Stony Brook University), Seyedhamed Ghavamnia (University of Connecticut), Michalis Polychronakis (Stony Brook University)

PAPER
LeakLess: Selective Data Protection against Memory Leakage Attacks for Serverless Platforms

As the use of language-level sandboxing for running untrusted code grows, the risks associated with memory disclosure vulnerabilities and transient execution attacks become increasingly significant. Besides the execution of untrusted JavaScript or WebAssembly code in web browsers, serverless environments have also started relying on language-level isolation to improve scalability by running multiple functions from different customers within a single process. Web browsers have adopted process-level sandboxing to mitigate memory leakage attacks, but this solution is not applicable in serverless environments, as running each function as a separate process would negate the performance benefits of language-level isolation. In this paper we present LeakLess, a selective data protection approach for serverless computing platforms. LeakLess alleviates the limitations of previous selective data protection techniques by combining in-memory encryption with a separate I/O module to enable the safe transmission of the protected data between serverless functions and external hosts. We implemented LeakLess on top of the Spin serverless platform, and evaluated it with real-world serverless applications. Our results demonstrate that LeakLess offers robust protection while incurring a minor throughput decrease under stress-testing conditions of up to 2.8% when the I/O module runs on a different host than the Spin runtime, and up to 8.5% when it runs on the same host.
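
To make the core mechanism concrete, here is a minimal Python sketch of the general pattern the abstract describes, under stated assumptions: protected values exist inside the function's memory only as ciphertext, while a separate I/O module holds the key and decrypts just before transmitting to an external host. This is not the authors' implementation; the names `IOModule`, `protect`, and `send` are illustrative only, and the real LeakLess system applies this selectively inside the Spin runtime.

```python
# Illustrative sketch of the LeakLess trust boundary (not the paper's code).
# Sensitive values live in the function's memory only as ciphertext; the
# separate I/O module holds the key and decrypts just before transmission.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet


class IOModule:
    """Stand-in for LeakLess's separate I/O component (hypothetical API)."""

    def __init__(self) -> None:
        self._cipher = Fernet(Fernet.generate_key())  # key never enters the function

    def protect(self, plaintext: bytes) -> bytes:
        # Called at ingress; the serverless function only ever sees ciphertext.
        return self._cipher.encrypt(plaintext)

    def send(self, ciphertext: bytes, destination: str) -> None:
        # Decryption happens here, outside the sandboxed function's memory.
        plaintext = self._cipher.decrypt(ciphertext)
        print(f"sending {len(plaintext)} plaintext bytes to {destination}")


def serverless_function(protected_token: bytes, io: IOModule) -> None:
    # A memory-disclosure or transient-execution leak here exposes only ciphertext.
    io.send(protected_token, "https://api.example.com/charge")


if __name__ == "__main__":
    io = IOModule()
    token = io.protect(b"super-secret-api-token")
    serverless_function(token, io)
```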


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.

The post NDSS 2025 – Selective Data Protection against Memory Leakage Attacks for Serverless Platforms appeared first on Security Boulevard.

  •  

News Alert: Link11’s Top 5 cybersecurity trends set to shape European defense strategies in 2026

Frankfurt, Dec. 16, 2025, CyberNewswire — Link11, a European provider of web infrastructure security solutions, has released new insights outlining five key cybersecurity developments expected to influence how organizations across Europe prepare for and respond to threats in 2026.… (more…)

The post News Alert: Link11’s Top 5 cybersecurity trends set to shape European defense strategies in 2026 first appeared on The Last Watchdog.

The post News Alert: Link11’s Top 5 cybersecurity trends set to shape European defense strategies in 2026 appeared first on Security Boulevard.

  •  

Code Execution in Jupyter Notebook Exports

After our research on Cursor, in the context of developer-ecosystem security, we turn our attention to the Jupyter ecosystem. We expose security risks we identified in the notebook’s export functionality, in the default Windows environment, to help organizations better protect their assets and networks. Executive Summary We identified a new way external Jupyter notebooks could […]

The post Code Execution in Jupyter Notebook Exports appeared first on Blog.

The post Code Execution in Jupyter Notebook Exports appeared first on Security Boulevard.

  •  

Veza Extends Reach to Secure and Govern AI Agents

Veza has added a platform to its portfolio that is specifically designed to secure and govern artificial intelligence (AI) agents that might soon be strewn across the enterprise. Veza, currently in the process of being acquired by ServiceNow, based the platform on an Access Graph the company previously developed to provide cybersecurity teams with a..

The post Veza Extends Reach to Secure and Govern AI Agents appeared first on Security Boulevard.

  •  

Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions

Over the past week, enterprise security teams observed a combination of covert malware communication attempts and aggressive probing of publicly exposed infrastructure. These incidents, detected across firewall and endpoint security layers, demonstrate how modern cyber attackers operate on multiple fronts simultaneously: while quietly activating compromised internal systems, they also relentlessly scan external services for exploitable weaknesses. Although the

The post Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions appeared first on Seceon Inc.

The post Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions appeared first on Security Boulevard.

  •  

Seceon Announces Strategic Partnership with InterSources Inc. to Expand Delivery of AI-Driven Cybersecurity Across Regulated Industries

As cyber threats against regulated industries continue to escalate in scale, sophistication, and financial impact, organizations are under immense pressure to modernize security operations while meeting strict compliance requirements. Addressing this urgent need, Seceon has announced a strategic partnership with InterSources Inc., expanding the delivery of AI-driven cybersecurity solutions across some of the world’s most

The post Seceon Announces Strategic Partnership with InterSources Inc. to Expand Delivery of AI-Driven Cybersecurity Across Regulated Industries appeared first on Seceon Inc.

The post Seceon Announces Strategic Partnership with InterSources Inc. to Expand Delivery of AI-Driven Cybersecurity Across Regulated Industries appeared first on Security Boulevard.

  •  

Can a Transparent Piece of Plastic Win the Invisible War on Your Identity?

Identity systems hold modern life together, yet we barely notice them until they fail. Every time someone starts a new job, crosses a border, or walks into a secure building, an official must answer one deceptively simple question: Is this person really who they claim to be? That single moment—matching a living, breathing human to..

The post Can a Transparent Piece of Plastic Win the Invisible War on Your Identity? appeared first on Security Boulevard.

  •  

Securing the AI Frontier: How API Posture Governance Enables NIST AI RMF Compliance

As organizations accelerate the adoption of Artificial Intelligence, from deploying Large Language Models (LLMs) to integrating autonomous agents and Model Context Protocol (MCP) servers, risk management has transitioned from a theoretical exercise to a critical business imperative. The NIST AI Risk Management Framework (AI RMF 1.0) has emerged as the standard for managing these risks, offering a structured approach to designing, developing, and deploying trustworthy AI systems.

However, AI systems do not operate in isolation. They rely heavily on Application Programming Interfaces (APIs) to ingest training data, serve model inferences, and facilitate communication between agents and servers. Consequently, the API attack surface effectively becomes the AI attack surface. Securing these API pathways is fundamental to achieving the "Secure and Resilient" and "Privacy-Enhanced" characteristics mandated by the framework.

Understanding the NIST AI RMF Core

The NIST AI RMF is organized around four core functions that provide a structure for managing risk throughout the AI lifecycle:

  • GOVERN: Cultivates a culture of risk management and outlines processes, documents, and organizational schemes.
  • MAP: Establishes context to frame risks, identifying interdependencies and visibility gaps.
  • MEASURE: Employs tools and methodologies to analyze, assess, and monitor AI risk and related impacts.
  • MANAGE: Prioritizes and acts upon risks, allocating resources to respond to and recover from incidents.

The Critical Role of API Posture Governance

While the "GOVERN" function in the NIST framework focuses on organizational culture and policies, API Posture Governance serves as the technical enforcement mechanism for these policies in operational environments.

Without robust API posture governance, organizations struggle to effectively Manage or Govern their AI risks. Unvetted AI models may be deployed via shadow APIs, and sensitive training data can be exposed through misconfigurations. Automating posture governance ensures that every API connected to an AI system adheres to security standards, preventing the deployment of insecure models and ensuring your AI infrastructure remains compliant by design.
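
As a rough illustration of what "posture governance as technical enforcement" can look like in practice, the sketch below evaluates a hypothetical API inventory against two simple policy rules: every endpoint must require authentication, and any endpoint serving AI models or training data must not be an undocumented shadow API. The inventory format, field names, and rules are assumptions made for illustration; they are not Salt Security's product API.

```python
# Hedged sketch: simple posture-governance checks over a hypothetical API
# inventory. Field names (path, authenticated, documented, serves_ai) are
# illustrative only.
from dataclasses import dataclass


@dataclass
class ApiEndpoint:
    path: str
    authenticated: bool
    documented: bool   # False => a "shadow" API nobody registered
    serves_ai: bool    # True => used for model inference or training data


def posture_violations(inventory: list[ApiEndpoint]) -> list[str]:
    """Return human-readable findings suitable for GOVERN/MANAGE reporting."""
    findings = []
    for ep in inventory:
        if not ep.authenticated:
            findings.append(f"{ep.path}: endpoint does not require authentication")
        if ep.serves_ai and not ep.documented:
            findings.append(f"{ep.path}: undocumented (shadow) API feeding an AI workload")
    return findings


if __name__ == "__main__":
    inventory = [
        ApiEndpoint("/v1/inference", authenticated=True, documented=True, serves_ai=True),
        ApiEndpoint("/internal/train-data", authenticated=False, documented=False, serves_ai=True),
    ]
    for finding in posture_violations(inventory):
        print("POLICY VIOLATION:", finding)
```

In a real deployment, checks like these would run continuously against a discovered inventory rather than a hand-written list, so that a newly deployed, non-compliant API is flagged before an AI workload starts depending on it.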

How Salt Security Safeguards AI Systems

Salt Security provides a tailored solution that aligns directly with the NIST AI RMF. By securing the API layer (Agentic AI Action Layer), Salt Security helps organizations maintain the integrity of their AI systems and safeguard sensitive data. The key features, along with their direct correlations to NIST AI RMF functions, include:

Automated API Discovery:

  • Alignment: Supports the MAP function by establishing context and recognizing risk visibility gaps.
  • Outcome: Guarantees a complete inventory of all APIs, including shadow APIs used for AI training or inference, ensuring no part of the AI ecosystem is unmanaged.

Posture Governance:

  • Alignment: Operationalizes the GOVERN and MANAGE functions by enabling organizational risk culture and prioritizing risk treatment.
  • Outcome: Preserves secure APIs throughout their lifecycle, enforcing policies that prevent the deployment of insecure models and ensuring ongoing compliance with NIST standards.

AI-Driven Threat Detection:

  • Alignment: Meets the Secure & Resilient trustworthiness characteristic by defending against adversarial misuse and exfiltration attacks.
  • Outcome: Actively identifies and blocks sophisticated threats like model extraction, data poisoning, and prompt injection attacks in real-time.

Sensitive Data Visibility:

  • Alignment: Supports the Privacy-Enhanced characteristic by safeguarding data confidentiality and limiting observation.
  • Outcome: Oversees data flow through APIs to protect PII and sensitive training data, ensuring data minimization and privacy compliance; a simplified sketch of this kind of payload inspection follows this list.

Vulnerability Assessment:

  • Alignment: Assists in the MEASURE function by assessing system trustworthiness and testing for failure modes.
  • Outcome: Identifies logic flaws and misconfigurations in AI-connected APIs before they can be exploited by adversaries.
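
To give a feel for what the Sensitive Data Visibility capability means at the traffic level, here is a heavily simplified sketch that scans API payloads for PII-shaped values (email addresses and card-like numbers). A production system does far more, including classification, data lineage, and policy enforcement; the regular expressions and sample payload below are illustrative assumptions, not Salt Security's detection logic.

```python
# Simplified sketch: flag PII-looking values in API payloads so data flowing
# into AI systems can be reviewed. Patterns are illustrative, not exhaustive.
import json
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def find_pii(payload: str) -> dict[str, list[str]]:
    """Return matches per PII category found in a raw API payload."""
    return {
        label: pattern.findall(payload)
        for label, pattern in PII_PATTERNS.items()
        if pattern.search(payload)
    }


if __name__ == "__main__":
    # Hypothetical response body captured from an AI training-data API.
    body = json.dumps({"user": "jane.doe@example.com", "card": "4111 1111 1111 1111"})
    for label, matches in find_pii(body).items():
        print(f"possible {label} in payload: {matches}")
```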

Conclusion

Trustworthy AI requires secure APIs. By implementing API Posture Governance and comprehensive security controls, organizations can confidently adopt the NIST AI RMF and innovate safely. Salt Security provides the visibility and protection needed to secure the critical infrastructure powering your AI. For a more in-depth understanding of API security compliance across multiple regulations, please refer to our comprehensive API Compliance Whitepaper.

If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security's research team and learn what attackers already know.

The post Securing the AI Frontier: How API Posture Governance Enables NIST AI RMF Compliance appeared first on Security Boulevard.

  •  

Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage

Breaking Free from Security Silos in the Modern Enterprise Today’s organizations face an unprecedented challenge: securing increasingly complex IT environments that span on-premises data centers, multiple cloud platforms, and hybrid architectures. Traditional security approaches that rely on disparate point solutions are failing to keep pace with sophisticated threats, leaving critical gaps in visibility and response

The post Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage appeared first on Seceon Inc.

The post Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage appeared first on Security Boulevard.

  •  

SoundCloud Confirms Security Incident

SoundCloud confirmed today that it experienced a security incident involving unauthorized access to a supporting internal system, resulting in the exposure of certain user data. The company said the incident affected approximately 20 percent of its users and involved email addresses along with information already visible on public SoundCloud profiles. Passwords and financial information were […]

The post SoundCloud Confirms Security Incident appeared first on Centraleyes.

The post SoundCloud Confirms Security Incident appeared first on Security Boulevard.

  •