
Received today — 13 December 2025

Rethinking sudo with object capabilities

12 December 2025 at 18:35

Alpine Linux maintainer Ariadne Conill has published a very interesting blog post about the shortcomings of both sudo and doas, and proposes a different way of achieving the same goals as those tools.

Systems built around identity-based access control tend to rely on ambient authority: policy is centralized and errors in the policy configuration or bugs in the policy engine can allow attackers to make full use of that ambient authority. In the case of a SUID binary like doas or sudo, that means an attacker can obtain root access in the event of a bug or misconfiguration.

What if there was a better way? Instead of thinking about privilege escalation as becoming root for a moment, what if it meant being handed a narrowly scoped capability, one with just enough authority to perform a specific action and nothing more? Enter the object-capability model.

↫ Ariadne Conill

To bring this approach to life, they created a tool called capsudo. Instead of temporarily changing your identity, capsudo grants far more fine-grained capabilities that match the exact task you’re trying to accomplish. As an example, Conill details mounting and unmounting: with capsudo, you can not only grant a user the ability to mount and unmount any device, but also restrict a user to mounting and unmounting one specific device. Another example given is how capsudo can be used to give a service account access to only those resources the account needs to perform its tasks.

Of course, Conill explains all of this way better than I ever could, with actual example commands and more details. Conill happens to be the same person who created Wayback, illustrating that they have a tendency to look at problems in a unique and interesting way. I’m not smart enough to determine whether this approach makes sense compared to sudo or doas, but the way it’s described, it does feel like a superior, more secure solution.
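For readers unfamiliar with the object-capability model, a minimal sketch may help. This is illustrative Python only: the broker, policy, and class names are invented for this example and have nothing to do with capsudo’s actual design or API. The point is that the holder of a capability can do exactly one thing, rather than briefly becoming root:

```python
class MountCapability:
    """Grants mount/unmount authority for exactly one device -- nothing else."""

    def __init__(self, device, mountpoint):
        self._device = device
        self._mountpoint = mountpoint

    def mount(self):
        # A real privileged broker would perform the mount(2) call here.
        return f"mounted {self._device} at {self._mountpoint}"

    def unmount(self):
        return f"unmounted {self._device}"


class CapabilityBroker:
    """A privileged process that checks policy once, then hands out
    narrowly scoped capability objects instead of ambient root authority."""

    def __init__(self, policy):
        self._policy = policy  # user -> set of devices they may manage

    def request_mount_cap(self, user, device, mountpoint):
        if device not in self._policy.get(user, set()):
            raise PermissionError(f"{user} may not manage {device}")
        return MountCapability(device, mountpoint)


broker = CapabilityBroker({"alice": {"/dev/sdb1"}})
cap = broker.request_mount_cap("alice", "/dev/sdb1", "/mnt/usb")
print(cap.mount())  # alice can act on /dev/sdb1, and on nothing else
```

Requesting a capability for any other device raises PermissionError, so a bug in the unprivileged program leaks only the narrow capabilities it already holds, not root.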

Received yesterday — 12 December 2025

NDSS 2025 – KernelSnitch: Side Channel-Attacks On Kernel Data Structures

12 December 2025 at 15:00

Session 5D: Side Channels 1

Authors, Creators & Presenters: Lukas Maar (Graz University of Technology), Jonas Juffinger (Graz University of Technology), Thomas Steinbauer (Graz University of Technology), Daniel Gruss (Graz University of Technology), Stefan Mangard (Graz University of Technology)

PAPER
KernelSnitch: Side Channel-Attacks On Kernel Data Structures

The sharing of hardware elements, such as caches, is known to introduce microarchitectural side-channel leakage. One approach to eliminate this leakage is to not share hardware elements across security domains. However, even under the assumption of leakage-free hardware, it is unclear whether other critical system components, like the operating system, introduce software-caused side-channel leakage. In this paper, we present a novel generic software side-channel attack, KernelSnitch, targeting kernel data structures such as hash tables and trees. These structures are commonly used to store both kernel and user information, e.g., metadata for userspace locks. KernelSnitch exploits that these data structures are variable in size, ranging from an empty state to a theoretically arbitrary amount of elements. Accessing these structures requires a variable amount of time depending on the number of elements, i.e., the occupancy level. This variance constitutes a timing side channel, observable from user space by an unprivileged, isolated attacker. While the timing differences are very low compared to the syscall runtime, we demonstrate and evaluate methods to amplify these timing differences reliably. In three case studies, we show that KernelSnitch allows unprivileged and isolated attackers to leak sensitive information from the kernel and activities in other processes. First, we demonstrate covert channels with transmission rates up to 580 kbit/s. Second, we perform a kernel heap pointer leak in less than 65 s by exploiting the specific indexing that Linux is using in hash tables. Third, we demonstrate a website fingerprinting attack, achieving an F1 score of more than 89 %, showing that activity in other user programs can be observed using KernelSnitch. Finally, we discuss mitigations for our hardware-agnostic attacks.
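The core observation, that a structure’s occupancy level shows up in lookup time, can be reproduced in ordinary user space. The sketch below is only an illustration of the principle, with a Python list standing in for a kernel hash-bucket chain (it is not the paper’s kernel attack), and it borrows the same repeat-and-take-the-median amplification idea the authors describe:

```python
import statistics
import time


def probe_miss_time(bucket, rounds=2000, trials=9):
    """Median per-op time of a failed lookup, which must scan the whole chain."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        for _ in range(rounds):
            -1 in bucket  # miss: traverses every element before giving up
        samples.append((time.perf_counter() - t0) / rounds)
    # Taking the median over many repetitions amplifies the tiny timing signal.
    return statistics.median(samples)


empty_bucket = []
full_bucket = list(range(512))

t_empty = probe_miss_time(empty_bucket)
t_full = probe_miss_time(full_bucket)
print(f"empty: {t_empty:.2e}s/op, occupied: {t_full:.2e}s/op")
assert t_full > t_empty  # occupancy is observable through timing alone
```

An attacker who can only time an operation, never read the structure, still learns how many elements it holds, which is exactly the channel KernelSnitch exploits against kernel hash tables and trees.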


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing its creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.


The post NDSS 2025 – KernelSnitch: Side Channel-Attacks On Kernel Data Structures appeared first on Security Boulevard.

New Android Malware Locks Device Screens and Demands a Ransom

12 December 2025 at 15:15


A new Android malware locks device screens and demands that users pay a ransom to keep their data from being deleted. Dubbed “DroidLock” by Zimperium researchers, the Android ransomware-like malware can also “wipe devices, change PINs, intercept OTPs, and remotely control the user interface, turning an infected phone into a hostile endpoint.” The malware detected by the researchers targeted Spanish Android users via phishing sites. Based on the examples provided, the French telecommunications company Orange S.A. was one of the companies impersonated in the campaign.

Android Malware DroidLock Uses ‘Ransomware-like Overlay’

The researchers detailed the new Android malware in a blog post this week, noting that the malware “has the ability to lock device screens with a ransomware-like overlay and illegally acquire app lock credentials, leading to a total takeover of the compromised device.” The malware uses fake system update screens to trick victims and can stream and remotely control devices via virtual network computing (VNC). The malware can also exploit device administrator privileges to “lock or erase data, capture the victim's image with the front camera, and silence the device.” The infection chain starts with a dropper that appears to require the user to change settings to allow unknown apps to be installed from the source (image below), which leads to the secondary payload that contains the malware. (Image: The Android malware DroidLock prompts users for installation permissions. Source: Zimperium.) Once the user grants accessibility permission, “the malware automatically approves additional permissions, such as those for accessing SMS, call logs, contacts, and audio,” the researchers said. The malware requests Device Admin Permission and Accessibility Services Permission at the start of the installation. Those permissions allow the malware to perform malicious actions such as:
  • Wiping data from the device, “effectively performing a factory reset.”
  • Locking the device.
  • Changing the PIN, password or biometric information to prevent user access to the device.
Based on commands received from the threat actor’s command and control (C2) server, “the attacker can compromise the device indefinitely and lock the user out from accessing the device.”

DroidLock Malware Overlays

The DroidLock malware uses Accessibility Services to launch overlays on targeted applications, prompted by an AccessibilityEvent originating from a package on the attacker's target list. The Android malware uses two primary overlay methods:
  • A Lock Pattern overlay that displays a pattern-drawing user interface (UI) to capture device unlock patterns.
  • A WebView overlay that loads attacker-controlled HTML content stored locally in a database; when an application is opened, the malware queries the database for the specific package name, and if a match is found it launches a full-screen WebView overlay that displays the stored HTML.
The malware also uses a deceptive Android update screen that instructs users not to power off or restart their devices. “This technique is commonly used by attackers to prevent user interaction while malicious activities are carried out in the background,” the researchers said. The malware can also capture all screen activity and transmit it to a remote server by operating as a persistent foreground service and using MediaProjection and VirtualDisplay to capture screen images, which are then converted to a base64-encoded JPEG format and transmitted to the C2 server. “This highly dangerous functionality could facilitate the theft of any sensitive information shown on the device’s display, including credentials, MFA codes, etc.,” the researchers said. Zimperium has shared its findings with Google, so up-to-date Android devices are protected against the malware, and the company has also published DroidLock Indicators of Compromise (IoCs).

Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions...

The post Building Trustworthy AI Agents appeared first on Security Boulevard.

‘Soil is more important than oil’: inside the perennial grain revolution

12 December 2025 at 07:00

Scientists in Kansas believe Kernza could cut emissions, restore degraded soils and reshape the future of agriculture

On the concrete floor of a greenhouse in rural Kansas stands a neat grid of 100 plastic plant pots, each holding a straggly crown of strappy, grass-like leaves. These plants are perennials – they keep growing, year after year. That single characteristic separates them from soya beans, wheat, maize, rice and every other major grain crop, all of which are annuals: plants that live and die within a single growing season.

“These plants are the winners, the ones that get to pass their genes on [to future generations],” says Lee DeHaan of the Land Institute, an agricultural non-profit based in Salina, Kansas. If DeHaan’s breeding programme maintains its current progress, the descendant of these young perennial crop plants could one day usher in a wholesale revolution in agriculture.

Continue reading...

© Photograph: Jason Alexander/The Land Institute


AI Threat Detection: How Machines Spot What Humans Miss

Discover how AI strengthens cybersecurity by detecting anomalies, stopping zero-day and fileless attacks, and enhancing human analysts through automation.

The post AI Threat Detection: How Machines Spot What Humans Miss appeared first on Security Boulevard.

How Root Cause Analysis Improves Incident Response and Reduces Downtime?

12 December 2025 at 01:12

Security incidents don’t fail because of a lack of tools; they fail because of a lack of insight. In an environment where every minute of downtime equals revenue loss, customer impact, and regulatory risk, root cause analysis has become a decisive factor in how effectively organizations execute incident response and stabilize operations. The difference between […]

The post How Root Cause Analysis Improves Incident Response and Reduces Downtime? appeared first on Kratikal Blogs.

The post How Root Cause Analysis Improves Incident Response and Reduces Downtime? appeared first on Security Boulevard.

Received before yesterday

Behavioral Analysis of AI Models Under Post-Quantum Threat Scenarios.

Explore behavioral analysis techniques for securing AI models against post-quantum threats. Learn how to identify anomalies and protect your AI infrastructure with quantum-resistant cryptography.

The post Behavioral Analysis of AI Models Under Post-Quantum Threat Scenarios. appeared first on Security Boulevard.

How does staying ahead with NHIDR impact your business?

11 December 2025 at 17:00

How Does NHIDR Influence Your Cybersecurity Strategy? What role does Non-Human Identity and Secrets Security Management (NHIDR) play in safeguarding your organization’s digital assets? The management of NHIs—machine identities created through encrypted passwords, tokens, and keys—has become pivotal. For organizations operating in the cloud, leveraging NHIDR can significantly enhance security frameworks by addressing the often-overlooked […]

The post How does staying ahead with NHIDR impact your business? appeared first on Entro.

The post How does staying ahead with NHIDR impact your business? appeared first on Security Boulevard.

How can cloud compliance make you feel relieved?

11 December 2025 at 17:00

Are You Managing Non-Human Identities Effectively in Your Cloud Environment? One question that often lingers among professionals is whether their current strategies for managing Non-Human Identities (NHIs) provide adequate security. These NHIs are crucial machine identities that consist of secrets—encrypted passwords, tokens, or keys—and the permissions granted to them by destination servers. As organizations increasingly […]

The post How can cloud compliance make you feel relieved? appeared first on Entro.

The post How can cloud compliance make you feel relieved? appeared first on Security Boulevard.

Are your cybersecurity needs satisfied with current NHIs?

11 December 2025 at 17:00

How Secure Are Your Non-Human Identities? Are your cybersecurity needs truly satisfied by your current approach to Non-Human Identities (NHIs) and Secrets Security Management? As more organizations migrate to cloud platforms, the challenge of securing machine identities is more significant than ever. NHIs, or machine identities, are pivotal in safeguarding sensitive data and ensuring seamless […]

The post Are your cybersecurity needs satisfied with current NHIs? appeared first on Entro.

The post Are your cybersecurity needs satisfied with current NHIs? appeared first on Security Boulevard.

Can secrets vaulting bring calm to your data security panic?

11 December 2025 at 17:00

How Can Organizations Securely Manage Non-Human Identities in Cloud Environments? Have you ever wondered how the rapid growth in machine identities impacts data security across various industries? As technology continues to advance, the proliferation of Non-Human Identities (NHIs) challenges even the most seasoned IT professionals. These machine identities have become an integral part of our […]

The post Can secrets vaulting bring calm to your data security panic? appeared first on Entro.

The post Can secrets vaulting bring calm to your data security panic? appeared first on Security Boulevard.

NDSS 2025 – URVFL: Undetectable Data Reconstruction Attack On Vertical Federated Learning

11 December 2025 at 15:00

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Duanyi Yao (Hong Kong University of Science and Technology), Songze Li (Southeast University), Xueluan Gong (Wuhan University), Sizai Hou (Hong Kong University of Science and Technology), Gaoning Pan (Hangzhou Dianzi University)

PAPER
URVFL: Undetectable Data Reconstruction Attack on Vertical Federated Learning

Vertical Federated Learning (VFL) is a collaborative learning paradigm designed for scenarios where multiple clients share disjoint features of the same set of data samples. Despite its wide range of applications, VFL faces privacy leakage from data reconstruction attacks. These attacks generally fall into two categories: honest-but-curious (HBC), where adversaries steal data while adhering to the protocol; and malicious attacks, where adversaries breach the training protocol for significant data leakage. While most research has focused on HBC scenarios, the exploration of malicious attacks remains limited. Launching effective malicious attacks in VFL presents unique challenges: 1) given the distributed nature of clients' data features and models, each client rigorously guards its privacy and prohibits direct querying, complicating any attempts to steal data; 2) existing malicious attacks alter the underlying VFL training task, and are hence easily detected by comparing the received gradients with the ones received in honest training. To overcome these challenges, we develop URVFL, a novel attack strategy that evades current detection mechanisms. The key idea is to integrate a discriminator with an auxiliary classifier that takes full advantage of the label information and generates malicious gradients for the victim clients: on one hand, label information helps to better characterize embeddings of samples from distinct classes, yielding improved reconstruction performance; on the other hand, computing malicious gradients with label information better mimics honest training, making the malicious gradients indistinguishable from the honest ones and the attack much more stealthy. Our comprehensive experiments demonstrate that URVFL significantly outperforms existing attacks and successfully circumvents SOTA detection methods for malicious attacks. Additional ablation studies and evaluations on defenses further underscore the robustness and effectiveness of URVFL.



The post NDSS 2025 – URVFL: Undetectable Data Reconstruction Attack On Vertical Federated Learning appeared first on Security Boulevard.

NCSC Tests Honeypots and Cyber Deception Tools

11 December 2025 at 14:54

NCSC Tests Honeypots and Cyber Deception Tools

A study of honeypot and cyber deception technologies by the UK’s National Cyber Security Centre (NCSC) found that the deception tools hold promise for disrupting cyberattacks, but more information and standards are needed for them to work optimally. The agency plans to help with that. The NCSC test involved 121 organizations, 14 commercial providers of honeypots and deception tools, and 10 trials across environments ranging from the cloud to operational technology (OT). The NCSC concluded that “cyber deception can work, but it’s not plug-and-play.”

Honeypot and Cyber Deception Challenges

The NCSC said surveyed organizations believe that cyber deception technologies can offer “real value, particularly in detecting novel threats and enriching threat intelligence,” and a few even see potential for identifying insider threats. “However, outcome-based metrics were not readily available and require development,” the NCSC cautioned. The UK cybersecurity agency said the effectiveness of honeypots and cyber deception tools “depends on having the right data and context. We found that cyber deception can be used for visibility in many systems, including legacy or niche systems, but without a clear strategy organisations risk deploying tools that generate noise rather than insight.” The NCSC blog post didn’t specify what data was missing or needed to be developed to better measure the effectiveness of deception technologies, but the agency nonetheless concluded that “there’s a compelling case for increasing the use of cyber deception in the UK.” The study examined three core assumptions:
  • Cyber deception technologies can help detect compromises already inside networks.
  • Cyber deception and honeypots can help detect new attacks as they happen.
  • Cyber deception can change how attackers behave if they know an organization is using the tools.

Terminology, Guidance Needed for Honeypots and Deception Tools

The tests, conducted under the Active Cyber Defence (ACD) 2.0 program, also found that inconsistent terminology and guidance hamper optimal use of the technologies. “There’s a surprising amount of confusion around terminology, and vocabulary across the industry is often inconsistent,” NCSC said. “This makes it harder for organisations to understand what’s on offer or even what they’re trying to achieve. We think adopting standard terminology should help and we will be standardising our cyber deception vocabulary.” Another challenge is that organizations don’t know where to start. “They want impartial advice, real-world case studies, and reassurance that the tools they’re using are effective and safe,” the agency said. “We’ve found a strong marketplace of cyber deception providers offering a wide range of products and services. However, we were told that navigating this market can be difficult, especially for beginners.” The NCSC said it thinks it can help organizations “make informed, strategic choices.”

Should Organizations Say if They’re Using Deception Tools?

One interesting finding is that 90% of the trial participants said they wouldn’t publicly announce that they use cyber deception. While it’s understandable not to want to tip off attackers, the NCSC said that academic research shows that “when attackers believe cyber deception is in use they are less confident in their attacks. This can impose a cost on attackers by disrupting their methods and wasting their time, to the benefit of the defenders.” Proper configuration is also a challenge for adopters. “As with any cyber security solution, misconfiguration can introduce new vulnerabilities,” the NCSC said. “If cyber deception tools aren’t properly configured, they may fail to detect threats or lead to a false sense of security, or worse, create openings for attackers. As networks evolve and new tools are introduced, keeping cyber deception tools aligned requires ongoing effort. It is important to consider regular updates and fine-tuning cyber deception solutions.” Next steps for the NCSC involve helping organizations better understand and deploy honeypots and deception tools, possibly through a new ACD service. “By helping organisations to understand cyber deception and finding clear ways to measure impact, we are building a strong foundation to support the deployment of cyber deception at a national scale in the UK,” the agency said. “We are looking at developing a new ACD service to achieve this.” “One of the most promising aspects of cyber deception is its potential to impose cost on adversaries,” the NCSC added. “By forcing attackers to spend time and resources navigating false environments, chasing fake credentials, or second-guessing their access, cyber deception can slow down attacks and increase the likelihood of detection. This aligns with broader national resilience goals by making the UK a harder, more expensive target.”

Rethinking Security as Access Control Moves to the Edge

11 December 2025 at 13:35

The convergence of physical and digital security is driving a shift toward software-driven, open-architecture edge computing. Access control has typically been treated as a physical domain problem — managing who can open which doors, using specialized systems largely isolated from broader enterprise IT. However, the boundary between physical and digital security is increasingly blurring. With..

The post Rethinking Security as Access Control Moves to the Edge appeared first on Security Boulevard.

Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority

11 December 2025 at 13:32

OT oversight is an expensive industrial paradox. It’s hard to believe that an area can be simultaneously underappreciated, underfunded, and under increasing attack. And yet, with ransomware hackers knowing that downtime equals disaster and companies not monitoring in kind, this is an open and glaring hole across many ecosystems. Even a glance at the numbers..

The post Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority appeared first on Security Boulevard.

Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks

11 December 2025 at 13:27

Modern internet users navigate an increasingly fragmented digital ecosystem dominated by countless applications, services, brands and platforms. Engaging with online offerings often requires selecting and remembering passwords or taking other steps to verify and protect one’s identity. However, following best practices has become incredibly challenging due to various factors. Identifying Digital Identity Management Problems in..

The post Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks appeared first on Security Boulevard.

NDSS 2025 – RAIFLE: Reconstruction Attacks On Interaction-Based Federated Learning

11 December 2025 at 11:00

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Dzung Pham (University of Massachusetts Amherst), Shreyas Kulkarni (University of Massachusetts Amherst), Amir Houmansadr (University of Massachusetts Amherst)

PAPER
RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation

Federated learning has emerged as a promising privacy-preserving solution for machine learning domains that rely on user interactions, particularly recommender systems and online learning to rank. While there has been substantial research on the privacy of traditional federated learning, little attention has been paid to the privacy properties of these interaction-based settings. In this work, we show that users face an elevated risk of having their private interactions reconstructed by the central server when the server can control the training features of the items that users interact with. We introduce RAIFLE, a novel optimization-based attack framework where the server actively manipulates the features of the items presented to users to increase the success rate of reconstruction. Our experiments with federated recommendation and online learning-to-rank scenarios demonstrate that RAIFLE is significantly more powerful than existing reconstruction attacks like gradient inversion, achieving high performance consistently in most settings. We discuss the pros and cons of several possible countermeasures to defend against RAIFLE in the context of interaction-based federated learning. Our code is open-sourced at https://github.com/dzungvpham/raifle

The post NDSS 2025 – RAIFLE: Reconstruction Attacks On Interaction-Based Federated Learning appeared first on Security Boulevard.

AI Hackers Are Coming Dangerously Close to Beating Humans

11 December 2025 at 11:13
Stanford researchers spent much of the past year building an AI bot called Artemis that scans networks for software vulnerabilities, and when they pitted it against ten professional penetration testers on the university's own engineering network, the bot outperformed nine of them. The experiment offers a window into how rapidly AI hacking tools have improved after years of underwhelming performance. "We thought it would probably be below average," said Justin Lin, a Stanford cybersecurity researcher. Artemis found bugs at a fraction of human cost -- just under $60 per hour compared to the $2,000 to $2,500 per day that professional pen testers typically charge. But its performance wasn't flawless. About 18% of its bug reports were false positives, and it completely missed an obvious vulnerability on a webpage that most human testers caught. In one case, Artemis found a bug on an outdated page that didn't render in standard browsers; it used a command-line tool called Curl instead of Chrome or Firefox. Dan Boneh, a Stanford computer science professor who advised the researchers, noted that vast amounts of software shipped without being vetted by LLMs could now be at risk. "We're in this moment of time where many actors can increase their productivity to find bugs at an extreme scale," said Jacob Klein, head of threat intelligence at Anthropic.

Read more of this story at Slashdot.

An Inside Look at the Israeli Cyber Scene

11 December 2025 at 11:30

Alan breaks down why Israeli cybersecurity isn’t just booming—it’s entering a full-blown renaissance, with record funding, world-class talent, and breakout companies redefining the global cyber landscape.

The post An Inside Look at the Israeli Cyber Scene appeared first on Security Boulevard.

2026 API and AI Security Predictions: What Experts Expect in the Year Ahead

11 December 2025 at 09:54

This is a predictions blog. We know, we know; everyone does them, and they can get a bit same-y. Chances are, you’re already bored with reading them. So, we’ve decided to do things a little bit differently this year.  Instead of bombarding you with just our own predictions, we’ve decided to cast the net far [...]

The post 2026 API and AI Security Predictions: What Experts Expect in the Year Ahead appeared first on Wallarm.

The post 2026 API and AI Security Predictions: What Experts Expect in the Year Ahead appeared first on Security Boulevard.

New OpenAI Models Likely Pose 'High' Cybersecurity Risk, Company Says

11 December 2025 at 08:00
An anonymous reader quotes a report from Axios: OpenAI says the cyber capabilities of its frontier AI models are accelerating and warned Wednesday that upcoming models are likely to pose a "high" risk, according to a report shared first with Axios. The models' growing capabilities could significantly expand the number of people able to carry out cyberattacks. OpenAI said it has already seen a significant increase in capabilities in recent releases, particularly as models are able to operate longer autonomously, paving the way for brute force attacks. The company notes that while GPT-5 scored 27% on a capture-the-flag exercise in August, GPT-5.1-Codex-Max scored 76% last month. "We expect that upcoming AI models will continue on this trajectory," the company says in the report. "In preparation, we are planning and evaluating as though each new model could reach 'high' levels of cybersecurity capability as measured by our Preparedness Framework." "High" is the second-highest level, below the "critical" level at which models are unsafe to be released publicly. "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," said OpenAI's Fouad Matin.

Read more of this story at Slashdot.

Microsoft’s December Security Update of High-Risk Vulnerability Notice for Multiple Products

11 December 2025 at 02:21

Overview On December 10, NSFOCUS CERT detected that Microsoft released the December Security Update patch, which fixed 57 security issues involving widely used products such as Windows, Microsoft Office, Microsoft Exchange Server, Azure, etc., including high-risk vulnerability types such as privilege escalation and remote code execution. Among the vulnerabilities fixed by Microsoft’s monthly update this […]

The post Microsoft’s December Security Update of High-Risk Vulnerability Notice for Multiple Products appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks.

The post Microsoft’s December Security Update of High-Risk Vulnerability Notice for Multiple Products appeared first on Security Boulevard.

How to feel assured about cloud-native security with AI?

10 December 2025 at 17:00

Are Non-Human Identities (NHIs) the Missing Link in Your Cloud Security Strategy? As technology reshapes industries, the concept of Non-Human Identities (NHIs) has emerged as a critical component in cloud-native security strategies. But what exactly are NHIs, and why are they essential in achieving security assurance? Decoding Non-Human Identities in Cybersecurity The term Non-Human […]

The post How to feel assured about cloud-native security with AI? appeared first on Entro.

The post How to feel assured about cloud-native security with AI? appeared first on Security Boulevard.

Federal Grand Jury Charges Former Manager with Government Contractor Fraud

11 December 2025 at 04:16

Government Contractor Fraud

Government contractor fraud is at the heart of a new indictment returned by a federal grand jury in Washington, D.C. against a former senior manager in Virginia. Prosecutors say Danielle Hillmer, 53, of Chantilly, misled federal agencies for more than a year about the security of a cloud platform used by the U.S. Army and other government customers. The indictment, announced yesterday, charges Hillmer with major government contractor fraud, wire fraud, and obstruction of federal audits. According to prosecutors, she concealed serious weaknesses in the system while presenting it as fully compliant with strict federal cybersecurity standards.

Government Contractor Fraud: Alleged Scheme to Mislead Agencies

According to court documents, Hillmer’s actions spanned from March 2020 through November 2021. During this period, she allegedly obstructed auditors and misrepresented the platform’s compliance with the Federal Risk and Authorization Management Program (FedRAMP) and the Department of Defense’s Risk Management Framework. The indictment claims that while the platform was marketed as a secure environment for federal agencies, it lacked critical safeguards such as access controls, logging, and monitoring. Despite repeated warnings, Hillmer allegedly insisted the system met the FedRAMP High baseline and DoD Impact Levels 4 and 5, both of which are required for handling sensitive government data.

Obstruction of Audits

Federal prosecutors allege Hillmer went further by attempting to obstruct third-party assessors during audits in 2020 and 2021. She is accused of concealing deficiencies and instructing others to hide the true state of the system during testing and demonstrations. The indictment also states that Hillmer misled the U.S. Army to secure sponsorship for a Department of Defense provisional authorization. She allegedly submitted, and directed others to submit, authorization materials containing false information to assessors, authorizing officials, and government customers. These misrepresentations, prosecutors say, allowed the contractor to obtain and maintain government contracts under false pretenses.

Charges and Potential Penalties

Hillmer faces two counts of wire fraud, one count of major government fraud, and two counts of obstruction of a federal audit. If convicted, she could face:
  • Up to 20 years in prison for each wire fraud count
  • Up to 10 years in prison for major government fraud
  • Up to 5 years in prison for each obstruction count
A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors. The indictment was announced by Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division and Deputy Inspector General Robert C. Erickson of the U.S. General Services Administration Office of Inspector General (GSA-OIG). The case is being investigated by the GSA-OIG, the Defense Criminal Investigative Service, the Naval Criminal Investigative Service, and the Department of the Army Criminal Investigation Division. Trial Attorneys Lauren Archer and Paul Hayden of the Criminal Division’s Fraud Section are prosecuting the case.

Broader Implications of Government Contractor Fraud

The indictment highlights ongoing concerns about the integrity of cloud platforms used by federal agencies. Programs like FedRAMP and the DoD’s Risk Management Framework are designed to ensure that systems handling sensitive government data meet rigorous security standards. Allegations that a contractor misrepresented compliance raise questions about oversight and the risks posed to national security when platforms fall short of requirements. Federal officials emphasized that the government contractor fraud case underscores the importance of transparency and accountability in government contracting, particularly in areas involving cybersecurity. Note: an indictment is merely an allegation. Hillmer, like all defendants, is presumed innocent until proven guilty beyond a reasonable doubt in a court of law.

What makes smart secrets management essential?

10 December 2025 at 17:00

How Are Non-Human Identities Revolutionizing Cybersecurity? Have you ever considered the pivotal role that Non-Human Identities (NHIs) play in cyber defense frameworks? As businesses increasingly shift operations to the cloud, safeguarding these machine identities becomes paramount. But what exactly are NHIs, and why is their management vital across industries? NHIs, often referred to as machine […]

The post What makes smart secrets management essential? appeared first on Entro.

The post What makes smart secrets management essential? appeared first on Security Boulevard.

How does Agentic AI empower cybersecurity teams?

10 December 2025 at 17:00

Can Agentic AI Revolutionize Cybersecurity Practices? As digital threats consistently challenge organizations, how can cybersecurity teams leverage innovations to bolster their defenses? Enter the concept of Agentic AI—a technology that could serve as a powerful ally in the ongoing battle against cyber threats. By enhancing the management of Non-Human Identities (NHIs) and secrets security management, […]

The post How does Agentic AI empower cybersecurity teams? appeared first on Entro.

The post How does Agentic AI empower cybersecurity teams? appeared first on Security Boulevard.

SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning

10 December 2025 at 15:00

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Phillip Rieger (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Kavita Kumari (Technical University of Darmstadt), Tigist Abera (Technical University of Darmstadt), Jonathan Knauer (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

PAPER
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning

Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN) without requiring clients to share their private local data. The DNN is partitioned in SL, with most layers residing on the server and a few initial layers and inputs on the client side. This configuration allows resource-constrained clients to participate in training and inference. However, the distributed architecture exposes SL to backdoor attacks, where malicious clients can manipulate local datasets to alter the DNN's behavior. Existing defenses from other distributed frameworks like Federated Learning are not applicable, and there is a lack of effective backdoor defenses specifically designed for SL. We present SafeSplit, the first defense against client-side backdoor attacks in Split Learning (SL). SafeSplit enables the server to detect and filter out malicious client behavior by employing circular backward analysis after a client's training is completed, iteratively reverting to a trained checkpoint where the model under examination is found to be benign. It uses a two-fold analysis to identify client-induced changes and detect poisoned models. First, a static analysis in the frequency domain measures the differences in the layer's parameters at the server. Second, a dynamic analysis introduces a novel rotational distance metric that assesses the orientation shifts of the server's layer parameters during training. Our comprehensive evaluation across various data distributions, client counts, and attack scenarios demonstrates the high efficacy of this dual analysis in mitigating backdoor attacks while preserving model utility.
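The "rotational distance" idea in the abstract can be made concrete with a small sketch. The function below measures the angle between a server-side layer's flattened parameter vector before and after one client's training round; a poisoned round tends to shift the parameters' orientation more than a benign one. This is a hedged illustration in the spirit of SafeSplit's dynamic analysis, not the paper's actual implementation — the function names and toy vectors are invented for this example.

```python
import math

def rotational_distance(before, after):
    """Angle (radians) between two flattened parameter vectors.

    Illustrative only: SafeSplit's real metric may normalize or
    aggregate per-layer angles differently.
    """
    dot = sum(b * a for b, a in zip(before, after))
    norm_b = math.sqrt(sum(b * b for b in before))
    norm_a = math.sqrt(sum(a * a for a in after))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos = max(-1.0, min(1.0, dot / (norm_b * norm_a)))
    return math.acos(cos)

# A benign round barely rotates the server-side parameters; a
# poisoned round shifts their orientation noticeably.
benign = rotational_distance([1.0, 0.0, 0.0], [0.99, 0.01, 0.0])
poisoned = rotational_distance([1.0, 0.0, 0.0], [0.2, 0.9, 0.1])
```

Comparing `benign` against `poisoned` (or against a threshold calibrated on known-good rounds) is the kind of signal the server could use to decide whether to revert to an earlier checkpoint.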


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators, Authors and Presenters’ superb NDSS Symposium 2025 Conference content on the Organization’s YouTube Channel.


The post SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning appeared first on Security Boulevard.

NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents

10 December 2025 at 14:08

The U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attack and mitigations for securing artificial intelligence (AI) agents. Speaking at the AI Summit New York conference, Apostol Vassilev, a research team supervisor for NIST, told attendees that the arm of the U.S. Department of Commerce is working with industry partners..

The post NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents appeared first on Security Boulevard.

Ring-fencing AI Workloads for NIST and ISO Compliance 

10 December 2025 at 12:32

AI is transforming enterprise productivity and reshaping the threat model at the same time. Unlike human users, agentic AI and autonomous agents operate at machine speed and inherit broad network permissions and embedded credentials. This creates new security and compliance … Read More

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on 12Port.

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on Security Boulevard.

NDSS 2025 – Passive Inference Attacks On Split Learning Via Adversarial Regularization

10 December 2025 at 11:00

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Xiaochen Zhu (National University of Singapore & Massachusetts Institute of Technology), Xinjian Luo (National University of Singapore & Mohamed bin Zayed University of Artificial Intelligence), Yuncheng Wu (Renmin University of China), Yangfan Jiang (National University of Singapore), Xiaokui Xiao (National University of Singapore), Beng Chin Ooi (National University of Singapore)

PAPER
Passive Inference Attacks on Split Learning via Adversarial Regularization

Split Learning (SL) has emerged as a practical and efficient alternative to traditional federated learning. While previous attempts to attack SL have often relied on overly strong assumptions or targeted easily exploitable models, we seek to develop more capable attacks. We introduce SDAR, a novel attack framework against SL with an honest-but-curious server. SDAR leverages auxiliary data and adversarial regularization to learn a decodable simulator of the client's private model, which can effectively infer the client's private features under the vanilla SL, and both features and labels under the U-shaped SL. We perform extensive experiments in both configurations to validate the effectiveness of our proposed attacks. Notably, in challenging scenarios where existing passive attacks struggle to reconstruct the client's private data effectively, SDAR consistently achieves significantly superior attack performance, even comparable to active attacks. On CIFAR-10, at the deep split level of 7, SDAR achieves private feature reconstruction with less than 0.025 mean squared error in both the vanilla and the U-shaped SL, and attains a label inference accuracy of over 98% in the U-shaped setting, while existing attacks fail to produce non-trivial results.
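The attack surface described above follows from SL's basic data flow: the client runs only the first few layers and ships the intermediate ("smashed") activations to the server, which is exactly what an honest-but-curious server can try to invert. The toy sketch below, with made-up linear "layers" standing in for a real DNN, shows what crosses the wire in vanilla SL; all weights and values here are invented for illustration.

```python
def client_forward(x, w_client):
    # Client-side layers: private input x -> smashed activations h.
    return [sum(w * xi for w, xi in zip(row, x)) for row in w_client]

def server_forward(h, w_server):
    # Server-side layers: smashed activations h -> model output.
    return [sum(w * hi for w, hi in zip(row, h)) for row in w_server]

x = [1.0, 2.0]                       # private client feature
w_client = [[1.0, 0.0], [0.0, 1.0]]  # toy identity "layers"
w_server = [[2.0, 0.0]]

h = client_forward(x, w_client)      # this is what the server observes
y = server_forward(h, w_server)
```

With identity client weights, `h` leaks `x` outright; with a deeper, nonlinear client model the mapping is harder to invert, which is why SDAR trains a decodable simulator of the client's model rather than inverting `h` directly.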


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators, Authors and Presenters’ superb NDSS Symposium 2025 Conference content on the Organization’s YouTube Channel.


The post NDSS 2025 – Passive Inference Attacks On Split Learning Via Adversarial Regularization appeared first on Security Boulevard.

Gartner’s AI Browser Ban: Rearranging Deck Chairs on the Titanic

10 December 2025 at 10:31

The cybersecurity world loves a simple solution to a complex problem, and Gartner delivered exactly that with its recent advisory: “Block all AI browsers for the foreseeable future.” The esteemed analyst firm warns that agentic browsers—tools like Perplexity’s Comet and OpenAI’s ChatGPT Atlas—pose too much risk for corporate use. While their caution makes sense given..

The post Gartner’s AI Browser Ban: Rearranging Deck Chairs on the Titanic appeared first on Security Boulevard.

Securing MCP: How to Build Trustworthy Agent Integrations

10 December 2025 at 08:25

Model Context Protocol (MCP) is quickly becoming the backbone of how AI agents interact with the outside world. It gives agents a standardized way to discover tools, trigger actions, and pull data. MCP dramatically simplifies integration work. In short, MCP servers act as the adapter that grants access to services, manages credentials and permissions, and..
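The adapter role described here — discover tools, then invoke them through one gatekeeper that enforces permissions — can be sketched in a few lines. This is a toy illustration of the pattern, not the real MCP wire protocol or SDK; the tool name, registry layout, and `allowed` flag are all invented for this example.

```python
# Toy tool registry: in real MCP, a server advertises tools and the
# host invokes them over a standardized protocol.
TOOLS = {
    "get_weather": {
        "description": "Return a canned weather string for a city.",
        "handler": lambda args: f"sunny in {args['city']}",
        "allowed": True,  # stand-in for a server-side permission gate
    },
}

def discover():
    # The agent asks the adapter which tools exist and what they do.
    return {name: t["description"] for name, t in TOOLS.items()}

def invoke(name, args):
    # Every call goes through one chokepoint that checks permissions,
    # which is where credential management and auditing would live.
    tool = TOOLS[name]
    if not tool["allowed"]:
        raise PermissionError(name)
    return tool["handler"](args)
```

The security argument for MCP-style designs is precisely this chokepoint: if `invoke` is the only path to side effects, then policy, logging, and credential handling have a single place to live.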

The post Securing MCP: How to Build Trustworthy Agent Integrations appeared first on Security Boulevard.

Coupang CEO Resigns After Massive Data Breach Exposes Millions of Users

10 December 2025 at 02:42

Coupang CEO Resigns

Coupang CEO Resigns: a headline many in South Korea expected, but one that still signals a major moment for the country’s tech and e-commerce landscape. Coupang Corp. confirmed on Wednesday that its CEO, Park Dae-jun, has stepped down following a massive Coupang data breach that exposed the personal information of 33.7 million people, almost two-thirds of the country. Park said he was “deeply sorry” for the incident and accepted responsibility both for the breach and for the company’s response. His exit, while formally described as a resignation, is widely seen as a forced departure given the scale of the fallout and growing anger among customers and regulators. To stabilize the company, Coupang’s U.S. parent, Coupang Inc., has appointed Harold Rogers, its chief administrative officer and general counsel, as interim CEO. The parent company said the leadership change aims to strengthen crisis management and ease customer concerns.

What Happened in the Coupang Data Breach

The company clarified that the latest notice relates to the previously disclosed incident on November 29 and that no new leak has occurred. According to Coupang’s ongoing investigation, the leaked information includes:
  • Customer names and email addresses
  • Full shipping address book details, such as names, phone numbers, addresses, and apartment entrance access codes
  • Portions of the order information
Coupang emphasized that payment details, passwords, banking information, and customs clearance codes were not compromised. As soon as it identified the leak, the company blocked abnormal access routes and tightened internal monitoring. It is now working closely with the Ministry of Science and ICT, the National Police Agency, the Personal Information Protection Commission (PIPC), the Korea Internet & Security Agency (KISA), and the Financial Supervisory Service.

Phishing, Smishing, and Impersonation Alerts

Coupang warned customers to be extra cautious as leaked data can fuel impersonation scams. The company reminded users that:
  • Coupang never asks customers to install apps via phone or text.
  • Unknown links in messages should not be opened.
  • Suspicious communications should be reported to 112 or the Financial Supervisory Service.
  • Customers must verify messages using Coupang’s official customer service numbers.
Users who stored apartment entrance codes in their delivery address book were also urged to change them immediately. The company also clarified that delivery drivers rarely call customers unless necessary to access a building or resolve a pickup issue, a small detail meant to help people recognize potential scam attempts.

Coupang CEO Resigns as South Korea Toughens Cyber Rules

The departure of CEO Park comes at a time when South Korea is rethinking how corporations respond to data breaches. The government’s 2025 Comprehensive National Cybersecurity Strategy puts direct responsibility on CEOs for major security incidents. It also expands CISOs' authority, strengthens IT asset management requirements, and gives chief privacy officers greater influence over security budgets. This shift follows other serious breaches, including SK Telecom’s leak of 23 million user records, which led to a record 134.8 billion won fine. Regulators are now considering fines of up to 1.2 trillion won for Coupang, roughly 3% of its annual sales, under the Personal Information Protection Act. The company also risks losing its ISMS-P certification, a possibility unprecedented for a business of its size.

Industry Scramble After a Coupang Data Breach of This Scale

A Coupang data breach affecting tens of millions of people has sent shockwaves across South Korea’s corporate sector. Authorities have launched emergency inspections of 1,600 ISMS-certified companies and begun unannounced penetration tests. Security vendors say Korean companies are urgently adding multi-factor authentication, AI-based anomaly detection, insider threat monitoring, and stronger access controls. The police’s naming of a former Chinese Coupang employee as a suspect has intensified focus on insider risk. Government agencies, including the National Intelligence Service, are also working with private partners to shorten cyber-incident analysis times from 14 days to 5 days using advanced AI forensic labs.

Looking Ahead

With the CEO’s resignation now shaping the company’s crisis trajectory, Coupang faces a long road to rebuilding trust among users and regulators. The company says its teams are working to resolve customer concerns quickly, but the broader lesson is clear: cybersecurity failures now carry real consequences, including at the highest levels of leadership.

‘Even the animals seem confused’: a retreating Kashmir glacier is creating an entire new world in its wake

Kolahoi is one of many glaciers whose decline is disrupting whole ecosystems – water, wildlife and human life that it has supported for centuries

From the slopes above Pahalgam, the Kolahoi glacier is visible as a thinning, rumpled ribbon of ice stretching across the western Himalayas. Once a vast white artery feeding rivers, fields and forests, it is now retreating steadily, leaving bare rock, crevassed ice and newly exposed alpine meadows.

The glacier’s meltwater has sustained paddy fields, apple orchards, saffron fields and grazing pastures for centuries. Now, as its ice diminishes, the entire web of life it supported is shifting.


© Photograph: Courtesy of Teri

