
Received today — 14 December 2025

Why are companies free to choose their own AI-driven security solutions?

13 December 2025 at 17:00

What Makes AI-Driven Security Solutions Crucial in Modern Cloud Environments? How can organizations navigate the complexities of cybersecurity to ensure robust protection, particularly when dealing with Non-Human Identities (NHIs) in cloud environments? The answer lies in leveraging AI-driven security solutions, which offer remarkable freedom of choice and adaptability for cybersecurity professionals. Understanding Non-Human Identities: The Backbone […]

The post Why are companies free to choose their own AI-driven security solutions? appeared first on Entro.

The post Why are companies free to choose their own AI-driven security solutions? appeared first on Security Boulevard.

How does NHI support the implementation of least privilege?

13 December 2025 at 17:00

What Are Non-Human Identities and Why Are They Essential for Cybersecurity? Have you ever pondered the complexity of cybersecurity beyond human interactions? Non-Human Identities (NHIs) are becoming a cornerstone in securing digital environments. As the guardians of machine identities, NHIs are pivotal in addressing the security gaps prevalent between research and development teams and security […]

The post How does NHI support the implementation of least privilege? appeared first on Entro.

The post How does NHI support the implementation of least privilege? appeared first on Security Boulevard.

Received yesterday — 13 December 2025

NDSS 2025 – A Systematic Evaluation Of Novel And Existing Cache Side Channels

13 December 2025 at 11:00

Session 5D: Side Channels 1

Authors, Creators & Presenters: Fabian Rauscher (Graz University of Technology), Carina Fiedler (Graz University of Technology), Andreas Kogler (Graz University of Technology), Daniel Gruss (Graz University of Technology)

PAPER
A Systematic Evaluation Of Novel And Existing Cache Side Channels

CPU caches are among the most widely studied side-channel targets, with Prime+Probe and Flush+Reload being the most prominent techniques. These generic cache attack techniques can leak cryptographic keys, user input, and are a building block of many microarchitectural attacks. In this paper, we present the first systematic evaluation using 9 characteristics of the 4 most relevant cache attacks, Flush+Reload, Flush+Flush, Evict+Reload, and Prime+Probe, as well as three new attacks that we introduce: Demote+Reload, Demote+Demote, and DemoteContention. We evaluate hit-miss margins, temporal precision, spatial precision, topological scope, attack time, blind spot length, channel capacity, noise resilience, and detectability on recent Intel microarchitectures. Demote+Reload and Demote+Demote perform similar to previous attacks and slightly better in some cases, e.g., Demote+Reload has a 60.7 % smaller blind spot than Flush+Reload. With 15.48 Mbit/s, Demote+Reload has a 64.3 % higher channel capacity than Flush+Reload. We also compare all attacks in an AES T-table attack and compare Demote+Reload and Flush+Reload in an inter-keystroke timing attack. Beyond the scope of the prior attack techniques, we demonstrate a KASLR break with Demote+Demote and the amplification of power side-channel leakage with Demote+Reload. Finally, Sapphire Rapids and Emerald Rapids CPUs use a non-inclusive L3 cache, effectively limiting eviction-based cross-core attacks, e.g., Prime+Probe and Evict+Reload, to rare cases where the victim's activity reaches the L3 cache. Hence, we show that in a cross-core attack, DemoteContention can be used as a reliable alternative to Prime+Probe and Evict+Reload that does not require reverse-engineering of addressing functions and cache replacement policy.
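For readers who want to see what the baseline technique being compared against looks like, the sketch below is a minimal, generic Flush+Reload probe in C. It is an illustration, not the authors' code: the cycle threshold and the shared target address are assumptions that must be calibrated per machine, the fencing is simplified, and the paper's Demote+Reload and Demote+Demote variants build on Intel's CLDEMOTE instruction rather than clflush.

```c
/*
 * Minimal Flush+Reload probe (conceptual sketch, not the paper's code).
 * Assumes x86-64 with clflush/rdtscp and a target address in memory shared
 * with the victim (e.g., a page of a shared library). THRESHOLD must be
 * calibrated per CPU; 150 cycles is only a placeholder.
 */
#include <stdint.h>
#include <x86intrin.h>

#define THRESHOLD 150  /* hit/miss boundary in cycles (assumption) */

static inline uint64_t time_access(const volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                        /* reload the shared cache line */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

/* Returns 1 if the victim touched the line since the previous flush. */
int probe(const volatile uint8_t *target)
{
    uint64_t dt = time_access(target);  /* fast => cache hit => victim access */
    _mm_clflush((const void *)target);  /* evict the line for the next round  */
    _mm_mfence();
    return dt < THRESHOLD;
}
```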


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – A Systematic Evaluation Of Novel And Existing Cache Side Channels appeared first on Security Boulevard.

How do secrets rotations drive innovations in security?

12 December 2025 at 17:00

How Critical is Managing Non-Human Identities for Cloud Security? Are you familiar with the virtual tourists navigating your digital environment right now? These tourists, known as Non-Human Identities (NHIs), are machine identities that are pivotal in computer security, especially within cloud environments. These NHIs are akin to digital travelers carrying passports and visas—where the passport represents an encrypted […]

The post How do secrets rotations drive innovations in security? appeared first on Entro.

The post How do secrets rotations drive innovations in security? appeared first on Security Boulevard.

How can effective NHIs fit your cybersecurity budget?

12 December 2025 at 17:00

Are Non-Human Identities Key to an Optimal Cybersecurity Budget? Have you ever pondered over the hidden costs of cybersecurity that might be draining your resources without your knowledge? Non-Human Identities (NHIs) and Secrets Security Management are essential components of a cost-effective cybersecurity strategy, especially when organizations increasingly operate in cloud environments. Understanding Non-Human Identities (NHIs) […]

The post How can effective NHIs fit your cybersecurity budget? appeared first on Entro.

The post How can effective NHIs fit your cybersecurity budget? appeared first on Security Boulevard.

What aspects of Agentic AI security should get you excited?

12 December 2025 at 17:00

Are Non-Human Identities the Key to Strengthening Agentic AI Security? In a landscape increasingly dominated by Agentic AI, organizations are pivoting toward more advanced security paradigms to protect their digital assets. Non-Human Identities (NHI) and Secrets Security Management have emerged as pivotal elements to fortify this quest for heightened cybersecurity. But why should this trend be generating excitement […]

The post What aspects of Agentic AI security should get you excited? appeared first on Entro.

The post What aspects of Agentic AI security should get you excited? appeared first on Security Boulevard.

What are the best practices for ensuring NHIs are protected?

12 December 2025 at 17:00

How Can Organizations Safeguard Non-Human Identities in the Cloud? Are your organization’s machine identities as secure as they should be? As the digital landscape evolves, the protection of Non-Human Identities (NHIs) becomes crucial for maintaining robust cybersecurity postures. NHIs represent machine identities like encrypted passwords, tokens, and keys, which are pivotal in ensuring effective cloud security control. […]

The post What are the best practices for ensuring NHIs are protected? appeared first on Entro.

The post What are the best practices for ensuring NHIs are protected? appeared first on Security Boulevard.

Rethinking sudo with object capabilities

12 December 2025 at 18:35

Alpine Linux maintainer Ariadne Conill has published a very interesting blog post about the shortcomings of both sudo and doas, and offers a potential alternative way of achieving the same goals as those tools.

Systems built around identity-based access control tend to rely on ambient authority: policy is centralized and errors in the policy configuration or bugs in the policy engine can allow attackers to make full use of that ambient authority. In the case of a SUID binary like doas or sudo, that means an attacker can obtain root access in the event of a bug or misconfiguration.

What if there was a better way? Instead of thinking about privilege escalation as becoming root for a moment, what if it meant being handed a narrowly scoped capability, one with just enough authority to perform a specific action and nothing more? Enter the object-capability model.

↫ Ariadne Conill

To bring this approach to life, they created a tool called capsudo. Instead of temporarily changing your identity, capsudo can grant far more fine-grained capabilities that match the exact task you’re trying to accomplish. As an example, Conill details mounting and unmounting – with capsudo, you can not only grant a user the ability to mount and unmount any device, but also allow the user to mount or unmount just one specific device. Another example shows how capsudo can be used to give a service account access to only those resources the account needs to perform its tasks.
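For a concrete feel for the object-capability idea outside capsudo itself, the closest plain-Unix analogue is an inherited file descriptor: a privileged parent opens exactly one resource and hands the unprivileged child nothing but that open descriptor, so the child holds authority over that single object and nothing else. The C sketch below illustrates only that general pattern; the log path and the nobody uid/gid (65534) are placeholders, and none of this reflects capsudo's actual interface or implementation.

```c
/*
 * Illustration of the object-capability pattern with plain Unix primitives.
 * NOT capsudo: the path and uid/gid are placeholders for the example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Privileged side: open exactly one log file for appending. */
    int fd = open("/var/log/app/audit.log", O_WRONLY | O_APPEND);
    if (fd < 0) { perror("open"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        /* Unprivileged side: drop privileges but keep the handed-over fd. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            _exit(1);
        }
        dprintf(fd, "worker started\n");  /* allowed: act on the capability */
        /* Re-opening the same root-owned path here would now fail with
         * EACCES: the child holds the fd, not the right to open files. */
        _exit(0);
    }

    close(fd);
    waitpid(pid, NULL, 0);
    return 0;
}
```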

Of course, Conill explains all of this far better than I ever could, with actual example commands and more details. Conill happens to be the same person who created Wayback, illustrating their tendency to look at problems in a unique and interesting way. I’m not smart enough to determine if this approach makes sense compared to sudo or doas, but the way it’s described, it does feel like a superior, more secure solution.

Received before yesterday

NDSS 2025 – KernelSnitch: Side-Channel Attacks On Kernel Data Structures

12 December 2025 at 15:00

Session 5D: Side Channels 1

Authors, Creators & Presenters: Lukas Maar (Graz University of Technology), Jonas Juffinger (Graz University of Technology), Thomas Steinbauer (Graz University of Technology), Daniel Gruss (Graz University of Technology), Stefan Mangard (Graz University of Technology)

PAPER
KernelSnitch: Side-Channel Attacks On Kernel Data Structures

The sharing of hardware elements, such as caches, is known to introduce microarchitectural side-channel leakage. One approach to eliminate this leakage is to not share hardware elements across security domains. However, even under the assumption of leakage-free hardware, it is unclear whether other critical system components, like the operating system, introduce software-caused side-channel leakage. In this paper, we present a novel generic software side-channel attack, KernelSnitch, targeting kernel data structures such as hash tables and trees. These structures are commonly used to store both kernel and user information, e.g., metadata for userspace locks. KernelSnitch exploits that these data structures are variable in size, ranging from an empty state to a theoretically arbitrary amount of elements. Accessing these structures requires a variable amount of time depending on the number of elements, i.e., the occupancy level. This variance constitutes a timing side channel, observable from user space by an unprivileged, isolated attacker. While the timing differences are very low compared to the syscall runtime, we demonstrate and evaluate methods to amplify these timing differences reliably. In three case studies, we show that KernelSnitch allows unprivileged and isolated attackers to leak sensitive information from the kernel and activities in other processes. First, we demonstrate covert channels with transmission rates up to 580 kbit/s. Second, we perform a kernel heap pointer leak in less than 65 s by exploiting the specific indexing that Linux is using in hash tables. Third, we demonstrate a website fingerprinting attack, achieving an F1 score of more than 89 %, showing that activity in other user programs can be observed using KernelSnitch. Finally, we discuss mitigations for our hardware-agnostic attacks.
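To give a rough sense of the measurement primitive this class of attack builds on, the C harness below times a syscall many times and keeps the minimum cycle count, which is the usual way to make small, occupancy-dependent latency differences visible above noise. SYS_gettid is only a stand-in here; the actual target structures, the hash-table indexing trick, and the amplification methods are specific to the paper and not reproduced in this sketch.

```c
/*
 * Conceptual syscall-timing harness (sketch, not the KernelSnitch code).
 * Time a syscall repeatedly and keep the minimum; comparing minima taken
 * while the targeted kernel structure is empty vs. filled is the basic
 * occupancy side channel. SYS_gettid is a placeholder target.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <x86intrin.h>

static uint64_t min_syscall_cycles(long nr, int reps)
{
    uint64_t best = UINT64_MAX;
    unsigned int aux;
    for (int i = 0; i < reps; i++) {
        uint64_t t0 = __rdtscp(&aux);
        syscall(nr);
        uint64_t t1 = __rdtscp(&aux);
        if (t1 - t0 < best)
            best = t1 - t0;
    }
    return best;
}

int main(void)
{
    printf("min cycles for the probed syscall: %llu\n",
           (unsigned long long)min_syscall_cycles(SYS_gettid, 100000));
    return 0;
}
```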


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – KernelSnitch: Side-Channel Attacks On Kernel Data Structures appeared first on Security Boulevard.

New Android Malware Locks Device Screens and Demands a Ransom

12 December 2025 at 15:15


A new Android malware locks device screens and demands that users pay a ransom to keep their data from being deleted. Dubbed “DroidLock” by Zimperium researchers, the Android ransomware-like malware can also “wipe devices, change PINs, intercept OTPs, and remotely control the user interface, turning an infected phone into a hostile endpoint.” The malware detected by the researchers targeted Spanish Android users via phishing sites. Based on the examples provided, the French telecommunications company Orange S.A. was one of the companies impersonated in the campaign.

Android Malware DroidLock Uses ‘Ransomware-like Overlay’

The researchers detailed the new Android malware in a blog post this week, noting that the malware “has the ability to lock device screens with a ransomware-like overlay and illegally acquire app lock credentials, leading to a total takeover of the compromised device.” The malware uses fake system update screens to trick victims and can stream and remotely control devices via virtual network computing (VNC). It can also exploit device administrator privileges to “lock or erase data, capture the victim's image with the front camera, and silence the device.” The infection chain starts with a dropper that appears to require the user to change settings to allow apps to be installed from an unknown source (image below), which then delivers the secondary payload containing the malware.

[Image: The Android malware DroidLock prompts users for installation permissions (Zimperium)]

Once the user grants accessibility permission, “the malware automatically approves additional permissions, such as those for accessing SMS, call logs, contacts, and audio,” the researchers said. The malware requests Device Admin Permission and Accessibility Services Permission at the start of the installation. Those permissions allow the malware to perform malicious actions such as:
  • Wiping data from the device, “effectively performing a factory reset.”
  • Locking the device.
  • Changing the PIN, password or biometric information to prevent user access to the device.
Based on commands received from the threat actor’s command and control (C2) server, “the attacker can compromise the device indefinitely and lock the user out from accessing the device.”

DroidLock Malware Overlays

The DroidLock malware uses Accessibility Services to launch overlays on targeted applications, prompted by an AccessibilityEvent originating from a package on the attacker's target list. The Android malware uses two primary overlay methods:
  • A Lock Pattern overlay that displays a pattern-drawing user interface (UI) to capture device unlock patterns.
  • A WebView overlay that loads attacker-controlled HTML content stored locally in a database; when an application is opened, the malware queries the database for the specific package name, and if a match is found it launches a full-screen WebView overlay that displays the stored HTML.
The malware also uses a deceptive Android update screen that instructs users not to power off or restart their devices. “This technique is commonly used by attackers to prevent user interaction while malicious activities are carried out in the background,” the researchers said. The malware can also capture all screen activity and transmit it to a remote server by operating as a persistent foreground service and using MediaProjection and VirtualDisplay to capture screen images, which are then converted to a base64-encoded JPEG format and transmitted to the C2 server. “This highly dangerous functionality could facilitate the theft of any sensitive information shown on the device’s display, including credentials, MFA codes, etc.,” the researchers said. Zimperium has shared its findings with Google, so up-to-date Android devices are protected against the malware, and the company has also published DroidLock Indicators of Compromise (IoCs).

Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions...

The post Building Trustworthy AI Agents appeared first on Security Boulevard.

‘Soil is more important than oil’: inside the perennial grain revolution

12 December 2025 at 07:00

Scientists in Kansas believe Kernza could cut emissions, restore degraded soils and reshape the future of agriculture

On the concrete floor of a greenhouse in rural Kansas stands a neat grid of 100 plastic plant pots, each holding a straggly crown of strappy, grass-like leaves. These plants are perennials – they keep growing, year after year. That single characteristic separates them from soya beans, wheat, maize, rice and every other major grain crop, all of which are annuals: plants that live and die within a single growing season.

“These plants are the winners, the ones that get to pass their genes on [to future generations],” says Lee DeHaan of the Land Institute, an agricultural non-profit based in Salina, Kansas. If DeHaan’s breeding programme maintains its current progress, the descendant of these young perennial crop plants could one day usher in a wholesale revolution in agriculture.

Continue reading...

© Photograph: Jason Alexander/The Land Institute

AI Threat Detection: How Machines Spot What Humans Miss

Discover how AI strengthens cybersecurity by detecting anomalies, stopping zero-day and fileless attacks, and enhancing human analysts through automation.

The post AI Threat Detection: How Machines Spot What Humans Miss appeared first on Security Boulevard.

How Root Cause Analysis Improves Incident Response and Reduces Downtime?

12 December 2025 at 01:12

Security incidents don’t fail because of a lack of tools; they fail because of a lack of insight. In an environment where every minute of downtime equals revenue loss, customer impact, and regulatory risk, root cause analysis has become a decisive factor in how effectively organizations execute incident response and stabilize operations. The difference between […]

The post How Root Cause Analysis Improves Incident Response and Reduces Downtime? appeared first on Kratikal Blogs.

The post How Root Cause Analysis Improves Incident Response and Reduces Downtime? appeared first on Security Boulevard.

Behavioral Analysis of AI Models Under Post-Quantum Threat Scenarios.

Explore behavioral analysis techniques for securing AI models against post-quantum threats. Learn how to identify anomalies and protect your AI infrastructure with quantum-resistant cryptography.

The post Behavioral Analysis of AI Models Under Post-Quantum Threat Scenarios. appeared first on Security Boulevard.

How does staying ahead with NHIDR impact your business?

11 December 2025 at 17:00

How Does NHIDR Influence Your Cybersecurity Strategy? What role do Non-Human Identity and Secrets Security Management (NHIDR) play in safeguarding your organization’s digital assets? The management of NHIs—machine identities created through encrypted passwords, tokens, and keys—has become pivotal. For organizations operating in the cloud, leveraging NHIDR can significantly enhance security frameworks by addressing the often-overlooked […]

The post How does staying ahead with NHIDR impact your business? appeared first on Entro.

The post How does staying ahead with NHIDR impact your business? appeared first on Security Boulevard.

How can cloud compliance make you feel relieved?

11 December 2025 at 17:00

Are You Managing Non-Human Identities Effectively in Your Cloud Environment? One question that often lingers in professionals’ minds is whether their current strategies for managing Non-Human Identities (NHIs) provide adequate security. These NHIs are crucial machine identities that consist of secrets—encrypted passwords, tokens, or keys—and the permissions granted to them by destination servers. When organizations increasingly […]

The post How can cloud compliance make you feel relieved? appeared first on Entro.

The post How can cloud compliance make you feel relieved? appeared first on Security Boulevard.

Are your cybersecurity needs satisfied with current NHIs?

11 December 2025 at 17:00

How Secure Are Your Non-Human Identities? Are your cybersecurity needs truly satisfied by your current approach to Non-Human Identities (NHIs) and Secrets Security Management? As more organizations migrate to cloud platforms, the challenge of securing machine identities is more significant than ever. NHIs, or machine identities, are pivotal in safeguarding sensitive data and ensuring seamless […]

The post Are your cybersecurity needs satisfied with current NHIs? appeared first on Entro.

The post Are your cybersecurity needs satisfied with current NHIs? appeared first on Security Boulevard.

Can secrets vaulting bring calm to your data security panic?

11 December 2025 at 17:00

How Can Organizations Securely Manage Non-Human Identities in Cloud Environments? Have you ever wondered how the rapid growth in machine identities impacts data security across various industries? As technology continues to advance, the proliferation of Non-Human Identities (NHIs) challenges even the most seasoned IT professionals. These machine identities have become an integral part of our […]

The post Can secrets vaulting bring calm to your data security panic? appeared first on Entro.

The post Can secrets vaulting bring calm to your data security panic? appeared first on Security Boulevard.

NDSS 2025 – URVFL: Undetectable Data Reconstruction Attack On Vertical Federated Learning

11 December 2025 at 15:00

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Duanyi Yao (Hong Kong University of Science and Technology), Songze Li (Southeast University), Xueluan Gong (Wuhan University), Sizai Hou (Hong Kong University of Science and Technology), Gaoning Pan (Hangzhou Dianzi University)

PAPER
URVFL: Undetectable Data Reconstruction Attack on Vertical Federated Learning

Vertical Federated Learning (VFL) is a collaborative learning paradigm designed for scenarios where multiple clients share disjoint features of the same set of data samples. Albeit a wide range of applications, VFL is faced with privacy leakage from data reconstruction attacks. These attacks generally fall into two categories: honest-but-curious (HBC), where adversaries steal data while adhering to the protocol; and malicious attacks, where adversaries breach the training protocol for significant data leakage. While most research has focused on HBC scenarios, the exploration of malicious attacks remains limited. Launching effective malicious attacks in VFL presents unique challenges: 1) Firstly, given the distributed nature of clients' data features and models, each client rigorously guards its privacy and prohibits direct querying, complicating any attempts to steal data; 2) Existing malicious attacks alter the underlying VFL training task, and are hence easily detected by comparing the received gradients with the ones received in honest training. To overcome these challenges, we develop URVFL, a novel attack strategy that evades current detection mechanisms. The key idea is to integrate a discriminator with auxiliary classifier that takes a full advantage of the label information and generates malicious gradients to the victim clients: on one hand, label information helps to better characterize embeddings of samples from distinct classes, yielding an improved reconstruction performance; on the other hand, computing malicious gradients with label information better mimics the honest training, making the malicious gradients indistinguishable from the honest ones, and the attack much more stealthy. Our comprehensive experiments demonstrate that URVFL significantly outperforms existing attacks, and successfully circumvents SOTA detection methods for malicious attacks. Additional ablation studies and evaluations on defenses further underscore the robustness and effectiveness of URVFL


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – URVFL: Undetectable Data Reconstruction Attack On Vertical Federated Learning appeared first on Security Boulevard.

NCSC Tests Honeypots and Cyber Deception Tools

11 December 2025 at 14:54


A study of honeypot and cyber deception technologies by the UK’s National Cyber Security Centre (NCSC) found that the deception tools hold promise for disrupting cyberattacks, but more information and standards are needed for them to work optimally. The agency plans to help with that. The NCSC test involved 121 organizations, 14 commercial providers of honeypots and deception tools, and 10 trials across environments ranging from the cloud to operational technology (OT). The NCSC concluded that “cyber deception can work, but it’s not plug-and-play.”

Honeypot and Cyber Deception Challenges

The NCSC said surveyed organizations believe that cyber deception technologies can offer “real value, particularly in detecting novel threats and enriching threat intelligence,” and a few even see potential for identifying insider threats. “However, outcome-based metrics were not readily available and require development,” the NCSC cautioned. The UK cybersecurity agency said the effectiveness of honeypots and cyber deception tools “depends on having the right data and context. We found that cyber deception can be used for visibility in many systems, including legacy or niche systems, but without a clear strategy organisations risk deploying tools that generate noise rather than insight.” The NCSC blog post didn’t specify what data was missing or needed to be developed to better measure the effectiveness of deception technologies, but the agency nonetheless concluded that “there’s a compelling case for increasing the use of cyber deception in the UK.” The study examined three core assumptions:
  • Cyber deception technologies can help detect compromises already inside networks.
  • Cyber deception and honeypots can help detect new attacks as they happen.
  • Cyber deception can change how attackers behave if they know an organization is using the tools.

Terminology, Guidance Needed for Honeypots and Deception Tools

The tests, conducted under the Active Cyber Defence (ACD) 2.0 program, also found that inconsistent terminology and guidance hamper optimal use of the technologies. “There’s a surprising amount of confusion around terminology, and vocabulary across the industry is often inconsistent,” NCSC said. “This makes it harder for organisations to understand what’s on offer or even what they’re trying to achieve. We think adopting standard terminology should help and we will be standardising our cyber deception vocabulary.” Another challenge is that organizations don’t know where to start. “They want impartial advice, real-world case studies, and reassurance that the tools they’re using are effective and safe,” the agency said. “We’ve found a strong marketplace of cyber deception providers offering a wide range of products and services. However, we were told that navigating this market can be difficult, especially for beginners.” The NCSC said it thinks it can help organizations “make informed, strategic choices.”

Should Organizations Say if They’re Using Deception Tools?

One interesting finding is that 90% of the trial participants said they wouldn’t publicly announce that they use cyber deception. While it’s understandable not to want to tip off attackers, the NCSC said that academic research shows that “when attackers believe cyber deception is in use they are less confident in their attacks. This can impose a cost on attackers by disrupting their methods and wasting their time, to the benefit of the defenders.” Proper configuration is also a challenge for adopters. “As with any cyber security solution, misconfiguration can introduce new vulnerabilities,” the NCSC said. “If cyber deception tools aren’t properly configured, they may fail to detect threats or lead to a false sense of security, or worse, create openings for attackers. As networks evolve and new tools are introduced, keeping cyber deception tools aligned requires ongoing effort. It is important to consider regular updates and fine-tuning cyber deception solutions.” Next steps for the NCSC involve helping organizations better understand and deploy honeypots and deception tools, possibly through a new ACD service. “By helping organisations to understand cyber deception and finding clear ways to measure impact, we are building a strong foundation to support the deployment of cyber deception at a national scale in the UK,” the agency said. “We are looking at developing a new ACD service to achieve this.” “One of the most promising aspects of cyber deception is its potential to impose cost on adversaries,” the NCSC added. “By forcing attackers to spend time and resources navigating false environments, chasing fake credentials, or second-guessing their access, cyber deception can slow down attacks and increase the likelihood of detection. This aligns with broader national resilience goals by making the UK a harder, more expensive target.”

Rethinking Security as Access Control Moves to the Edge

11 December 2025 at 13:35

The convergence of physical and digital security is driving a shift toward software-driven, open-architecture edge computing. Access control has typically been treated as a physical domain problem — managing who can open which doors, using specialized systems largely isolated from broader enterprise IT. However, the boundary between physical and digital security is increasingly blurring. With..

The post Rethinking Security as Access Control Moves to the Edge appeared first on Security Boulevard.

Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority

11 December 2025 at 13:32

OT oversight is an expensive industrial paradox. It’s hard to believe that an area can be simultaneously underappreciated, underfunded, and under increasing attack. And yet, with ransomware hackers knowing that downtime equals disaster and companies not monitoring in kind, this is an open and glaring hole across many ecosystems. Even a glance at the numbers..

The post Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority appeared first on Security Boulevard.

Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks

11 December 2025 at 13:27

Modern internet users navigate an increasingly fragmented digital ecosystem dominated by countless applications, services, brands and platforms. Engaging with online offerings often requires selecting and remembering passwords or taking other steps to verify and protect one’s identity. However, following best practices has become incredibly challenging due to various factors. Identifying Digital Identity Management Problems in..

The post Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks appeared first on Security Boulevard.

NDSS 2025 – RAIFLE: Reconstruction Attacks On Interaction-Based Federated Learning

11 December 2025 at 11:00

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Dzung Pham (University of Massachusetts Amherst), Shreyas Kulkarni (University of Massachusetts Amherst), Amir Houmansadr (University of Massachusetts Amherst)

PAPER
RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation

Federated learning has emerged as a promising privacy-preserving solution for machine learning domains that rely on user interactions, particularly recommender systems and online learning to rank. While there has been substantial research on the privacy of traditional federated learning, little attention has been paid to the privacy properties of these interaction-based settings. In this work, we show that users face an elevated risk of having their private interactions reconstructed by the central server when the server can control the training features of the items that users interact with. We introduce RAIFLE, a novel optimization-based attack framework where the server actively manipulates the features of the items presented to users to increase the success rate of reconstruction. Our experiments with federated recommendation and online learning-to-rank scenarios demonstrate that RAIFLE is significantly more powerful than existing reconstruction attacks like gradient inversion, achieving high performance consistently in most settings. We discuss the pros and cons of several possible countermeasures to defend against RAIFLE in the context of interaction-based federated learning. Our code is open-sourced at https://github.com/dzungvpham/raifle

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – RAIFLE: Reconstruction Attacks On Interaction-Based Federated Learning appeared first on Security Boulevard.

AI Hackers Are Coming Dangerously Close to Beating Humans

11 December 2025 at 11:13
Stanford researchers spent much of the past year building an AI bot called Artemis that scans networks for software vulnerabilities, and when they pitted it against ten professional penetration testers on the university's own engineering network, the bot outperformed nine of them. The experiment offers a window into how rapidly AI hacking tools have improved after years of underwhelming performance. "We thought it would probably be below average," said Justin Lin, a Stanford cybersecurity researcher. Artemis found bugs at a fraction of human cost -- just under $60 per hour compared to the $2,000 to $2,500 per day that professional pen testers typically charge. But its performance wasn't flawless. About 18% of its bug reports were false positives, and it completely missed an obvious vulnerability on a webpage that most human testers caught. In one case, Artemis found a bug on an outdated page that didn't render in standard browsers; it used a command-line tool called Curl instead of Chrome or Firefox. Dan Boneh, a Stanford computer science professor who advised the researchers, noted that vast amounts of software shipped without being vetted by LLMs could now be at risk. "We're in this moment of time where many actors can increase their productivity to find bugs at an extreme scale," said Jacob Klein, head of threat intelligence at Anthropic.

Read more of this story at Slashdot.

An Inside Look at the Israeli Cyber Scene

11 December 2025 at 11:30

Alan breaks down why Israeli cybersecurity isn’t just booming—it’s entering a full-blown renaissance, with record funding, world-class talent, and breakout companies redefining the global cyber landscape.

The post An Inside Look at the Israeli Cyber Scene appeared first on Security Boulevard.

2026 API and AI Security Predictions: What Experts Expect in the Year Ahead

11 December 2025 at 09:54

This is a predictions blog. We know, we know; everyone does them, and they can get a bit same-y. Chances are, you’re already bored with reading them. So, we’ve decided to do things a little bit differently this year.  Instead of bombarding you with just our own predictions, we’ve decided to cast the net far [...]

The post 2026 API and AI Security Predictions: What Experts Expect in the Year Ahead appeared first on Wallarm.

The post 2026 API and AI Security Predictions: What Experts Expect in the Year Ahead appeared first on Security Boulevard.

New OpenAI Models Likely Pose 'High' Cybersecurity Risk, Company Says

11 December 2025 at 08:00
An anonymous reader quotes a report from Axios: OpenAI says the cyber capabilities of its frontier AI models are accelerating and warns Wednesday that upcoming models are likely to pose a "high" risk, according to a report shared first with Axios. The models' growing capabilities could significantly expand the number of people able to carry out cyberattacks. OpenAI said it has already seen a significant increase in capabilities in recent releases, particularly as models are able to operate longer autonomously, paving the way for brute force attacks. The company notes that while GPT-5 scored 27% on a capture-the-flag exercise in August, GPT-5.1-Codex-Max was able to score 76% last month. "We expect that upcoming AI models will continue on this trajectory," the company says in the report. "In preparation, we are planning and evaluating as though each new model could reach 'high' levels of cybersecurity capability as measured by our Preparedness Framework." "High" is the second-highest level, below the "critical" level at which models are unsafe to be released publicly. "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," said OpenAI's Fouad Matin.

Read more of this story at Slashdot.

Microsoft’s December Security Update of High-Risk Vulnerability Notice for Multiple Products

11 December 2025 at 02:21

Overview On December 10, NSFOCUS CERT detected that Microsoft released the December Security Update patch, which fixed 57 security issues involving widely used products such as Windows, Microsoft Office, Microsoft Exchange Server, Azure, etc., including high-risk vulnerability types such as privilege escalation and remote code execution. Among the vulnerabilities fixed by Microsoft’s monthly update this […]

The post Microsoft’s December Security Update of High-Risk Vulnerability Notice for Multiple Products appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks..

The post Microsoft’s December Security Update of High-Risk Vulnerability Notice for Multiple Products appeared first on Security Boulevard.

How to feel assured about cloud-native security with AI?

10 December 2025 at 17:00

Are Non-Human Identities (NHIs) the Missing Link in Your Cloud Security Strategy? With technology reshaping industries, the concept of Non-Human Identities (NHIs) has emerged as a critical component in cloud-native security strategies. But what exactly are NHIs, and why are they essential in achieving security assurance? Decoding Non-Human Identities in Cybersecurity The term Non-Human […]

The post How to feel assured about cloud-native security with AI? appeared first on Entro.

The post How to feel assured about cloud-native security with AI? appeared first on Security Boulevard.

Federal Grand Jury Charges Former Manager with Government Contractor Fraud

11 December 2025 at 04:16


Government contractor fraud is at the heart of a new indictment returned by a federal grand jury in Washington, D.C. against a former senior manager in Virginia. Prosecutors say Danielle Hillmer, 53, of Chantilly, misled federal agencies for more than a year about the security of a cloud platform used by the U.S. Army and other government customers. The indictment, announced yesterday, charges Hillmer with major government contractor fraud, wire fraud, and obstruction of federal audits. According to prosecutors, she concealed serious weaknesses in the system while presenting it as fully compliant with strict federal cybersecurity standards.

Government Contractor Fraud: Alleged Scheme to Mislead Agencies

According to court documents, Hillmer’s actions spanned from March 2020 through November 2021. During this period, she allegedly obstructed auditors and misrepresented the platform’s compliance with the Federal Risk and Authorization Management Program (FedRAMP) and the Department of Defense’s Risk Management Framework. The indictment claims that while the platform was marketed as a secure environment for federal agencies, it lacked critical safeguards such as access controls, logging, and monitoring. Despite repeated warnings, Hillmer allegedly insisted the system met the FedRAMP High baseline and DoD Impact Levels 4 and 5, both of which are required for handling sensitive government data.

Obstruction of Audits

Federal prosecutors allege Hillmer went further by attempting to obstruct third-party assessors during audits in 2020 and 2021. She is accused of concealing deficiencies and instructing others to hide the true state of the system during testing and demonstrations. The indictment also states that Hillmer misled the U.S. Army to secure sponsorship for a Department of Defense provisional authorization. She allegedly submitted, and directed others to submit, authorization materials containing false information to assessors, authorizing officials, and government customers. These misrepresentations, prosecutors say, allowed the contractor to obtain and maintain government contracts under false pretenses.

Charges and Potential Penalties

Hillmer faces two counts of wire fraud, one count of major government fraud, and two counts of obstruction of a federal audit. If convicted, she could face:
  • Up to 20 years in prison for each wire fraud count
  • Up to 10 years in prison for major government fraud
  • Up to 5 years in prison for each obstruction count
A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors. The indictment was announced by Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division and Deputy Inspector General Robert C. Erickson of the U.S. General Services Administration Office of Inspector General (GSA-OIG). The case is being investigated by the GSA-OIG, the Defense Criminal Investigative Service, the Naval Criminal Investigative Service, and the Department of the Army Criminal Investigation Division. Trial Attorneys Lauren Archer and Paul Hayden of the Criminal Division’s Fraud Section are prosecuting the case.

Broader Implications of Government Contractor Fraud

The indictment highlights ongoing concerns about the integrity of cloud platforms used by federal agencies. Programs like FedRAMP and the DoD’s Risk Management Framework are designed to ensure that systems handling sensitive government data meet rigorous security standards. Allegations that a contractor misrepresented compliance raise questions about oversight and the risks posed to national security when platforms fall short of requirements. Federal officials emphasized that the government contractor fraud case highlights the importance of transparency and accountability in government contracting, particularly in areas involving cybersecurity. Note: It is important to note that an indictment is merely an allegation. Hillmer, like all defendants, is presumed innocent until proven guilty beyond a reasonable doubt in a court of law.

What makes smart secrets management essential?

10 December 2025 at 17:00

How Are Non-Human Identities Revolutionizing Cybersecurity? Have you ever considered the pivotal role that Non-Human Identities (NHIs) play in cyber defense frameworks? As businesses increasingly shift operations to the cloud, safeguarding these machine identities becomes paramount. But what exactly are NHIs, and why is their management vital across industries? NHIs, often referred to as machine […]

The post What makes smart secrets management essential? appeared first on Entro.

The post What makes smart secrets management essential? appeared first on Security Boulevard.

How does Agentic AI empower cybersecurity teams?

10 December 2025 at 17:00

Can Agentic AI Revolutionize Cybersecurity Practices? With digital threats consistently challenging organizations, how can cybersecurity teams leverage innovations to bolster their defenses? Enter the concept of Agentic AI—a technology that could serve as a powerful ally in the ongoing battle against cyber threats. By enhancing the management of Non-Human Identities (NHIs) and secrets security management, […]

The post How does Agentic AI empower cybersecurity teams? appeared first on Entro.

The post How does Agentic AI empower cybersecurity teams? appeared first on Security Boulevard.

SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning

10 December 2025 at 15:00

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Phillip Rieger (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Kavita Kumari (Technical University of Darmstadt), Tigist Abera (Technical University of Darmstadt), Jonathan Knauer (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

PAPER
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning

Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN) without requiring clients to share their private local data. The DNN is partitioned in SL, with most layers residing on the server and a few initial layers and inputs on the client side. This configuration allows resource-constrained clients to participate in training and inference. However, the distributed architecture exposes SL to backdoor attacks, where malicious clients can manipulate local datasets to alter the DNN's behavior. Existing defenses from other distributed frameworks like Federated Learning are not applicable, and there is a lack of effective backdoor defenses specifically designed for SL. We present SafeSplit, the first defense against client-side backdoor attacks in Split Learning (SL). SafeSplit enables the server to detect and filter out malicious client behavior by employing circular backward analysis after a client's training is completed, iteratively reverting to a trained checkpoint where the model under examination is found to be benign. It uses a two-fold analysis to identify client-induced changes and detect poisoned models. First, a static analysis in the frequency domain measures the differences in the layer's parameters at the server. Second, a dynamic analysis introduces a novel rotational distance metric that assesses the orientation shifts of the server's layer parameters during training. Our comprehensive evaluation across various data distributions, client counts, and attack scenarios demonstrates the high efficacy of this dual analysis in mitigating backdoor attacks while preserving model utility.
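The abstract describes the rotational distance metric only at a high level. One plausible reading, sketched below purely as an assumption, is an angle between a server layer's parameter vector before and after a client's training round, with large orientation shifts flagging a potentially poisoned update. The code is generic vector math in C, not the authors' implementation, and any flagging threshold would be invented for illustration.

```c
/*
 * Hedged sketch: angle between a layer's parameters before and after a
 * training round, as one possible reading of a "rotational distance".
 * Generic vector math only; not the SafeSplit authors' code.
 */
#include <math.h>
#include <stddef.h>

double rotational_distance(const double *before, const double *after, size_t n)
{
    double dot = 0.0, norm_b = 0.0, norm_a = 0.0;
    for (size_t i = 0; i < n; i++) {
        dot    += before[i] * after[i];
        norm_b += before[i] * before[i];
        norm_a += after[i]  * after[i];
    }
    double denom = sqrt(norm_b) * sqrt(norm_a);
    if (denom == 0.0)
        return 0.0;              /* degenerate vectors: treat as no shift */

    double c = dot / denom;
    if (c >  1.0) c =  1.0;      /* clamp floating-point error */
    if (c < -1.0) c = -1.0;
    return acos(c);              /* radians; larger => bigger orientation shift */
}
```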


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning appeared first on Security Boulevard.

NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents

10 December 2025 at 14:08

The U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attacks and mitigations for securing artificial intelligence (AI) agents. Speaking at the AI Summit New York conference, Apostol Vassilev, a research team supervisor for NIST, told attendees that the arm of the U.S. Department of Commerce is working with industry partners..

The post NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents appeared first on Security Boulevard.
