Normal view

Received today — 14 February 2026

How do NHIs add value to cloud compliance auditing?

13 February 2026 at 17:00

What Makes Non-Human Identities Essential for Cloud Compliance Auditing? As cybersecurity threats evolve, how can organizations ensure their compliance measures are robust enough to handle the complexities of modern cloud environments? The answer lies in understanding and managing Non-Human Identities (NHIs)—a crucial component for establishing a secure and compliant framework in cloud computing. Understanding NHIs: […]

The post How do NHIs add value to cloud compliance auditing? appeared first on Entro.

The post How do NHIs add value to cloud compliance auditing? appeared first on Security Boulevard.

How can cloud-native security be transformed by Agentic AI?

13 February 2026 at 17:00

How do Non-Human Identities Shape the Future of Cloud Security? Have you ever wondered how machine identities influence cloud security? Non-Human Identities (NHIs) are crucial for maintaining robust cybersecurity frameworks, especially in cloud environments. These identities demand a sophisticated understanding, as they are essential for secure interactions between machines and their environments. The Critical Role […]

The post How can cloud-native security be transformed by Agentic AI? appeared first on Entro.

The post How can cloud-native security be transformed by Agentic AI? appeared first on Security Boulevard.

What future-proof methods do Agentic AIs use in data protection?

13 February 2026 at 17:00

How Secure Is Your Organization’s Cloud Environment? How secure is your organization’s cloud environment? As digital transformation accelerates, gaps in security are becoming increasingly noticeable. Non-Human Identities (NHIs), representing machine identities, are pivotal in these frameworks. In cybersecurity, they are formed by integrating a ‘Secret’—like an encrypted password or key—and the permissions allocated by […]

The post What future-proof methods do Agentic AIs use in data protection? appeared first on Entro.

The post What future-proof methods do Agentic AIs use in data protection? appeared first on Security Boulevard.

Is Agentic AI driven security scalable for large enterprises?

13 February 2026 at 17:00

How Can Non-Human Identities (NHIs) Transform Scalable Security for Large Enterprises? One might ask: how can large enterprises ensure scalable security without compromising on efficiency and compliance? The answer lies in the effective management of Non-Human Identities (NHIs) and their secrets. As machine identities, NHIs are pivotal in crafting a robust security framework, especially […]

The post Is Agentic AI driven security scalable for large enterprises? appeared first on Entro.

The post Is Agentic AI driven security scalable for large enterprises? appeared first on Security Boulevard.

Received yesterday — 13 February 2026

Survey: Most Security Incidents Involve Identity Attacks

13 February 2026 at 15:55

A survey of 512 cybersecurity professionals finds 76% report that over half (54%) of the security incidents that occurred in the past 12 months involved some issue relating to identity management. Conducted by Permiso Security, a provider of an identity security platform, the survey also finds 95% are either very confident (52%) or somewhat confident […]

The post Survey: Most Security Incidents Involve Identity Attacks appeared first on Security Boulevard.

NDSS 2025 – Automated Mass Malware Factory

13 February 2026 at 15:00

Session 12B: Malware

Authors, Creators & Presenters: Heng Li (Huazhong University of Science and Technology), Zhiyuan Yao (Huazhong University of Science and Technology), Bang Wu (Huazhong University of Science and Technology), Cuiying Gao (Huazhong University of Science and Technology), Teng Xu (Huazhong University of Science and Technology), Wei Yuan (Huazhong University of Science and Technology), Xiapu Luo (The Hong Kong Polytechnic University)

PAPER
Automated Mass Malware Factory: The Convergence of Piggybacking and Adversarial Example in Android Malicious Software Generation

Adversarial example techniques have been demonstrated to be highly effective against Android malware detection systems, enabling malware to evade detection with minimal code modifications. However, existing adversarial example techniques overlook the process of malware generation, thus restricting the applicability of adversarial example techniques. In this paper, we investigate piggybacked malware, a type of malware generated in bulk by piggybacking malicious code into popular apps, and combine it with adversarial example techniques. Given a malicious code segment (i.e., a rider), we can generate adversarial perturbations tailored to it and insert them into any carrier, enabling the resulting malware to evade detection. Through exploring the mechanism by which adversarial perturbation affects piggybacked malware code, we propose an adversarial piggybacked malware generation method, which comprises three modules: Malicious Rider Extraction, Adversarial Perturbation Generation, and Benign Carrier Selection. Extensive experiments have demonstrated that our method can efficiently generate a large volume of malware in a short period, and significantly increase the likelihood of evading detection. Our method achieved an average attack success rate (ASR) of 88.3% on machine learning-based detection models (e.g., Drebin and MaMaDroid), and an ASR of 76% and 92% on commercial engines Microsoft and Kingsoft, respectively. Furthermore, we have explored potential defenses against our adversarial piggybacked malware.

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.

Permalink

The post NDSS 2025 – Automated Mass Malware Factory appeared first on Security Boulevard.

Why PAM Implementations Struggle 

13 February 2026 at 13:41

Privileged Access Management (PAM) is widely recognized as a foundational security control for Zero Trust, ransomware prevention, and compliance with frameworks such as NIST, ISO 27001, and SOC 2. Yet despite heavy investment, many organizations struggle to realize the promised value of PAM. Projects stall, adoption remains low, and security teams are left managing complex systems that deliver limited risk reduction.  […]

The post Why PAM Implementations Struggle  appeared first on 12Port.

The post Why PAM Implementations Struggle  appeared first on Security Boulevard.

NDSS 2025 – Density Boosts Everything

13 February 2026 at 11:00

Session 12B: Malware

Authors, Creators & Presenters: Jianwen Tian (Academy of Military Sciences), Wei Kong (Zhejiang Sci-Tech University), Debin Gao (Singapore Management University), Tong Wang (Academy of Military Sciences), Taotao Gu (Academy of Military Sciences), Kefan Qiu (Beijing Institute of Technology), Zhi Wang (Nankai University), Xiaohui Kuang (Academy of Military Sciences)

PAPER
Density Boosts Everything: A One-stop Strategy For Improving Performance, Robustness, And Sustainability of Malware Detectors

In the contemporary landscape of cybersecurity, AI-driven detectors have emerged as pivotal in the realm of malware detection. However, existing AI-driven detectors encounter a myriad of challenges, including poisoning attacks, evasion attacks, and concept drift, which stem from the inherent characteristics of AI methodologies. While numerous solutions have been proposed to address these issues, they often concentrate on isolated problems, neglecting the broader implications for other facets of malware detection. This paper diverges from the conventional approach by not targeting a singular issue but instead identifying one of the fundamental causes of these challenges, sparsity. Sparsity refers to a scenario where certain feature values occur with low frequency, being represented only a minimal number of times across the dataset. The authors are the first to elevate the significance of sparsity and link it to core challenges in the domain of malware detection, and then aim to improve performance, robustness, and sustainability simultaneously by solving sparsity problems. To address the sparsity problems, a novel compression technique is designed to effectively alleviate the sparsity. Concurrently, a density boosting training method is proposed to consistently fill sparse regions. Empirical results demonstrate that the proposed methodologies not only successfully bolster the model's resilience against different attacks but also enhance the performance and sustainability over time. Moreover, the proposals are complementary to existing defensive technologies and successfully demonstrate practical classifiers with improved performance and robustness to attacks.
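
The sparsity notion the abstract defines can be made concrete with a small sketch (hypothetical feature data, not the paper's code): count how often each feature value appears across a dataset and flag values that occur fewer than a threshold number of times as sparse.

```python
from collections import Counter

def find_sparse_values(feature_column, min_count=2):
    """Return the feature values that occur fewer than min_count
    times across the dataset -- the 'sparse' values whose low
    frequency the paper links to evasion and concept drift."""
    counts = Counter(feature_column)
    return {value for value, n in counts.items() if n < min_count}

# Hypothetical per-app API-call counts for one feature.
column = [0, 0, 0, 1, 1, 7, 93, 0, 1, 7]
sparse = find_sparse_values(column, min_count=2)
# 93 appears only once, so it is the lone sparse value here.
```

A compression step in the paper's spirit would then merge such rarely seen values into denser buckets so no region of feature space is represented by only a handful of samples.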

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.

Permalink

The post NDSS 2025 – Density Boosts Everything appeared first on Security Boulevard.

The Cyber Express Weekly Roundup: Escalating Breaches, Regulatory Crackdowns, and Global Cybercrime Developments

13 February 2026 at 05:53

The Cyber Express Weekly Roundup

As February 2026 progresses, this week’s The Cyber Express Weekly Roundup examines a series of cybersecurity incidents and enforcement actions spanning Europe, Africa, Australia, and the United States. The developments include a breach affecting the European Commission’s mobile management infrastructure, a ransomware attack disrupting Senegal’s national identity systems, a landmark financial penalty imposed on an Australian investment firm, and the sentencing of a fugitive linked to a multimillion-dollar cryptocurrency scam. From suspected exploitation of zero-day vulnerabilities to prolonged breach detection failures and cross-border financial crime, these cases highlight the operational, legal, and systemic dimensions of modern cyber risk.

The Cyber Express Weekly Roundup 

European Commission Mobile Infrastructure Breach Raises Supply Chain Questions 

The European Commission reported a cyberattack on its mobile device management (MDM) system on January 30, potentially exposing staff names and mobile numbers, though no devices were compromised and the breach was contained within nine hours. Read more... 

Ransomware Disrupts Senegal’s National Identity Systems 

In West Africa, a major cyberattack hit Senegal’s Directorate of File Automation (DAF), halting identity card production and disrupting national ID, passport, and electoral services. While authorities insist no personal data was compromised, the full extent of the breach is still under investigation. Read more... 

Australian Court Imposes Landmark Cybersecurity Penalty 

In Australia, FIIG Securities was fined AU$2.5 million for failing to maintain adequate cybersecurity protections, leading to a 2023 ransomware breach that exposed 385GB of client data, including IDs, bank details, and tax numbers. The firm must also pay AU$500,000 in legal costs and implement an independent compliance program. Read more... 

Crypto Investment Scam Leader Sentenced in Absentia 

U.S. authorities sentenced Daren Li in absentia to 20 years for a $73 million cryptocurrency scam targeting American victims. Li remains a fugitive after fleeing in December 2025. The Cambodia-based scheme used “pig butchering” tactics to lure victims to fake crypto platforms, laundering nearly $60 million through U.S. shell companies. Eight co-conspirators have pleaded guilty. The case was led by the U.S. Secret Service. Read more... 

India Brings AI-Generated Content Under Formal Regulation 

India has regulated AI-generated content under notification G.S.R. 120(E), effective February 20, 2026, defining “synthetically generated information” (SGI) as AI-created content that appears real, including deepfakes and voiceovers. Platforms must label AI content, embed metadata, remove unlawful content quickly, and verify user declarations. Read More... 

Weekly Takeaway 

Taken together, this weekly roundup highlights the expanding attack surface created by digital transformation, the persistence of ransomware threats to national infrastructure, and the intensifying regulatory scrutiny facing financial institutions.  From zero-day exploitation and supply chain risks to enforcement actions and transnational crypto fraud, organizations are confronting an environment where operational resilience, compliance, and proactive monitoring are no longer optional; they are foundational to trust and continuity in the digital economy. 

The Law of Cyberwar is Pretty Discombobulated

13 February 2026 at 05:24

This article explores the complexities of cyberwarfare, emphasizing the need to reconsider how we categorize cyber operations within the framework of the Law of Armed Conflict (LOAC). It discusses the challenges posed by AI in transforming traditional warfare notions and highlights the potential risks associated with the misuse of emerging technologies in conflicts.

The post The Law of Cyberwar is Pretty Discombobulated appeared first on Security Boulevard.

Top Security Incidents of 2025:  The Emergence of the ChainedShark APT Group

13 February 2026 at 03:11

In 2025, NSFOCUS Fuying Lab disclosed a new APT group targeting China’s scientific research sector, dubbed “ChainedShark” (tracking number: Actor240820). Active since May 2024, the group’s operations are marked by high strategic coherence and technical sophistication. Its primary targets are professionals in Chinese universities and research institutions specializing in international relations, marine technology, and related […]

The post Top Security Incidents of 2025: The Emergence of the ChainedShark APT Group appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks.

The post Top Security Incidents of 2025:  The Emergence of the ChainedShark APT Group appeared first on Security Boulevard.

Securing Agentic AI Connectivity

12 February 2026 at 17:50

 

Securing Agentic AI Connectivity

AI agents are no longer theoretical: they are here, powerful, and being connected to business systems in ways that introduce cybersecurity risks. They’re calling APIs, invoking MCPs, reasoning across systems, and acting autonomously in production environments, right now.

And here’s the problem nobody has solved: identity and access controls tell you WHO is acting, but not WHY.

An AI agent can be fully authenticated, fully authorized, and still be completely misaligned with the intent that justified its access. That’s not a failure of your tools. That’s a gap in the entire security model.

This is the problem ArmorIQ was built to solve.

ArmorIQ secures agentic AI at the intent layer, where it actually matters:

· Intent-Bound Execution: Every agent action must trace back to an explicit, bounded plan. If the reasoning drifts, trust is revoked in real time.

· Scoped Delegation Controls: When agents delegate to other agents or invoke tools via MCPs and APIs, authority is constrained and temporary. No inherited trust. No implicit permissions.

· Purpose-Aware Governance: Access isn’t just granted and forgotten. It expires when intent expires. Trust is situational, not permanent.

If you’re a CISO, security architect, or board leader navigating agentic AI risk — this is worth your attention.

See what ArmorIQ is building: https://armoriq.io

The post Securing Agentic AI Connectivity appeared first on Security Boulevard.

Received before yesterday

Can AI-driven architecture significantly enhance SOC team efficiency?

12 February 2026 at 17:00

How Can Non-Human Identities Revolutionize Cybersecurity? Have you ever considered the challenges that arise when managing thousands of machine identities? As organizations migrate to the cloud, the need for robust security systems becomes paramount. Enter Non-Human Identities (NHIs) — the unsung heroes of cybersecurity that can revolutionize how secure our clouds are. Managing NHIs, which […]

The post Can AI-driven architecture significantly enhance SOC team efficiency? appeared first on Entro.

The post Can AI-driven architecture significantly enhance SOC team efficiency? appeared first on Security Boulevard.

How do Agentic AI systems ensure robust cloud security?

12 February 2026 at 17:00

How Can Non-Human Identities Transform Cloud Security? Is your organization leveraging the full potential of Non-Human Identities (NHIs) to secure your cloud infrastructure? As we become increasingly dependent on digital identities, NHIs are pivotal in shaping robust cloud security frameworks. Unlike human identities, NHIs are digital constructs that transcend traditional login credentials, encapsulating […]

The post How do Agentic AI systems ensure robust cloud security? appeared first on Entro.

The post How do Agentic AI systems ensure robust cloud security? appeared first on Security Boulevard.

What role do NHIs play in privileged access management?

12 February 2026 at 17:00

Could the Future of Privileged Access Management Lie in Non-Human Identities? As the number of machine identities rapidly expands, the need for advanced management solutions becomes more pressing. Enter Non-Human Identities (NHIs), a compelling concept in cybersecurity that addresses this burgeoning requirement. As businesses transition more functions to the cloud, understanding the strategic role […]

The post What role do NHIs play in privileged access management? appeared first on Entro.

The post What role do NHIs play in privileged access management? appeared first on Security Boulevard.

What makes Non-Human Identities safe in healthcare data?

12 February 2026 at 17:00

How Can Organizations Safeguard Non-Human Identities in Healthcare Data? Have you ever considered the importance of machine identities in your cybersecurity strategy? The healthcare sector, with its vast arrays of sensitive information, relies heavily on these machine identities, known as Non-Human Identities (NHIs), to streamline operations and safeguard data. This article delves into how NHIs […]

The post What makes Non-Human Identities safe in healthcare data? appeared first on Entro.

The post What makes Non-Human Identities safe in healthcare data? appeared first on Security Boulevard.

NDSS 2025 – PBP: Post-Training Backdoor Purification For Malware Classifiers

12 February 2026 at 15:00

Session 12B: Malware

Authors, Creators & Presenters: Dung Thuy Nguyen (Vanderbilt University), Ngoc N. Tran (Vanderbilt University), Taylor T. Johnson (Vanderbilt University), Kevin Leach (Vanderbilt University)

PAPER
PBP: Post-Training Backdoor Purification for Malware Classifiers

In recent years, the rise of machine learning (ML) in cybersecurity has brought new challenges, including the increasing threat of backdoor poisoning attacks on ML malware classifiers. These attacks aim to manipulate model behavior when provided with a particular input trigger. For instance, adversaries could inject malicious samples into public malware repositories, contaminating the training data and potentially misclassifying malware by the ML model. Current countermeasures predominantly focus on detecting poisoned samples by leveraging disagreements within the outputs of a diverse set of ensemble models on training data points. However, these methods are not applicable in scenarios involving ML-as-a-Service (MLaaS) or for users who seek to purify a backdoored model post-training. Addressing this scenario, we introduce PBP, a post-training defense for malware classifiers that mitigates various types of backdoor embeddings without assuming any specific backdoor embedding mechanism. Our method exploits the influence of backdoor attacks on the activation distribution of neural networks, independent of the trigger-embedding method. In the presence of a backdoor attack, the activation distribution of each layer is distorted into a mixture of distributions. By regulating the statistics of the batch normalization layers, we can guide a backdoored model to perform similarly to a clean one. Our method demonstrates substantial advantages over several state-of-the-art methods, as evidenced by experiments on two datasets, two types of backdoor methods, and various attack configurations. Our experiments showcase that PBP can mitigate even the SOTA backdoor attacks for malware classifiers, e.g., Jigsaw Puzzle, which was previously demonstrated to be stealthy against existing backdoor defenses. 
Notably, our approach requires only a small portion of the training data (only 1%) to purify the backdoor and reduce the attack success rate from 100% to almost 0%, a 100-fold improvement over the baseline methods.
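
The core mechanism the abstract describes — re-anchoring a batch-normalization layer's running statistics using a small clean subset — can be sketched as follows. This is a simplified pure-Python illustration of the idea, not the authors' implementation, and the toy activation values are hypothetical.

```python
import statistics

def recalibrate_bn(clean_activations):
    """Recompute a batch-norm layer's running mean and variance
    from a small clean data subset, replacing statistics that a
    backdoor may have distorted into a mixture of distributions."""
    mean = statistics.fmean(clean_activations)
    var = statistics.pvariance(clean_activations, mu=mean)
    return mean, var

def batch_norm(x, mean, var, eps=1e-5):
    """Normalize one activation with the given running stats."""
    return (x - mean) / (var + eps) ** 0.5

# Activations from a small clean subset (e.g., 1% of training data).
clean = [0.9, 1.0, 1.1, 1.0]
mean, var = recalibrate_bn(clean)
y = batch_norm(1.0, mean, var)  # near zero once stats match clean data
```

In a real network this recalibration would be applied layer by layer, which is why the method needs no knowledge of the trigger-embedding mechanism.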

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.

Permalink

The post NDSS 2025 – PBP: Post-Training Backdoor Purification For Malware Classifiers appeared first on Security Boulevard.

4 Tools That Help Students Focus

12 February 2026 at 13:37

Educators recognize the dual reality of educational technology (EdTech): its potential to sharpen student focus and detract from it. Schools must proactively leverage technology’s advantages while mitigating its risks to student productivity. Read on as we unpack the evolving importance and challenge of supporting student focus. We also detail four categories of classroom focus tools, ...

The post 4 Tools That Help Students Focus appeared first on ManagedMethods Cybersecurity, Safety & Compliance for K-12.

The post 4 Tools That Help Students Focus appeared first on Security Boulevard.

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

12 February 2026 at 14:42

On Thursday, Google announced that "commercially motivated" actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products that frames the company as the victim and the hero, which is not unusual in these self-authored assessments. Google calls the illicit activity "model extraction" and considers it intellectual property theft, which is a somewhat loaded position, given that Google's LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google's Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI's terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.


42,900 OpenClaw Exposed Control Panels and Why You Should Care

12 February 2026 at 09:47

Over the past two weeks, most coverage around Moltbot and OpenClaw has chased the flashy angle. One-click exploits, remote code execution, APT chatter, scary screenshots. Meanwhile, security teams are doing...

The post 42,900 OpenClaw Exposed Control Panels and Why You Should Care appeared first on Strobes Security.

The post 42,900 OpenClaw Exposed Control Panels and Why You Should Care appeared first on Security Boulevard.

Reducing Alert Fatigue Using AI: From Overwhelmed SOCs to Autonomous Precision

12 February 2026 at 07:27

How Artificial Intelligence Transforms Security Operations: Security Operations Centers (SOCs) face a growing operational challenge: overwhelming alert volumes. Modern enterprise environments generate thousands of security notifications daily across endpoint, network, identity, cloud, and application layers. This continuous stream of alerts creates what the industry describes as alert fatigue, a condition where analysts are overwhelmed by […]

The post Reducing Alert Fatigue Using AI: From Overwhelmed SOCs to Autonomous Precision appeared first on Seceon Inc.

The post Reducing Alert Fatigue Using AI: From Overwhelmed SOCs to Autonomous Precision appeared first on Security Boulevard.

India Brings AI-Generated Content Under Formal Regulation with IT Rules Amendment

12 February 2026 at 04:28

AI-generated Content

The Central Government has formally brought AI-generated content within India’s regulatory framework for the first time. Through notification G.S.R. 120(E), issued by the Ministry of Electronics and Information Technology (MeitY) and signed by Joint Secretary Ajit Kumar, amendments were introduced to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised rules take effect from February 20, 2026. The move represents a notable shift in Indian cybersecurity and digital governance policy. While the Information Technology Act, 2000, has long addressed unlawful online conduct, these amendments explicitly define and regulate “synthetically generated information” (SGI), placing AI-generated content under structured compliance obligations. 

What the Law Now Defines as “Synthetically Generated Information” 

The notification inserts new clauses into Rule 2 of the 2021 Rules. It defines “audio, visual or audio-visual information” broadly to include any audio, image, photograph, video, sound recording, or similar content created, generated, modified, or altered through a computer resource.  More critically, clause (wa) defines “synthetically generated information” as content that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true and depicts or portrays an individual or event in a way that is likely to be perceived as indistinguishable from a natural person or real-world occurrence.  This definition clearly encompasses deep-fake videos, AI-generated voiceovers, face-swapped images, and other forms of AI-generated content designed to simulate authenticity. The framing is deliberate: the concern is not merely digital alteration, but deception, content that could reasonably be mistaken for reality.  At the same time, the amendment carves out exceptions. Routine or good-faith editing, such as color correction, formatting, transcription, compression, accessibility improvements, translation, or technical enhancement, does not qualify as synthetically generated information, provided the underlying substance or meaning is not materially altered. Educational materials, draft templates, or conceptual illustrations also fall outside the SGI category unless they create a false document or false electronic record. This distinction attempts to balance innovation in Information Technology with protection against misuse. 

New Duties for Intermediaries 

The amendments substantially revise Rule 3, expanding intermediary obligations. Platforms must inform users, at least once every three months and in English or any Eighth Schedule language, that non-compliance with platform rules or applicable laws may lead to suspension, termination, removal of content, or legal liability. Where violations relate to criminal offences, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023, or the Protection of Children from Sexual Offences Act, 2012, mandatory reporting requirements apply.  A new clause (ca) introduces additional obligations for intermediaries that enable or facilitate the creation or dissemination of synthetically generated information. These platforms must inform users that directing their services to create unlawful AI-generated content may attract penalties under laws including the Information Technology Act, the Bharatiya Nyaya Sanhita, 2023, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace Act, 2013, and the Immoral Traffic (Prevention) Act, 1956.  Consequences for violations may include immediate content removal, suspension or termination of accounts, disclosure of the violator’s identity to victims, and reporting to authorities where offences require mandatory reporting. The compliance timelines have also been tightened. Content removal in response to valid orders must now occur within three hours instead of thirty-six hours. Certain grievance response windows have been reduced from fifteen days to seven days, and some urgent compliance requirements now demand action within two hours. 

Due Diligence and Labelling Requirements for AI-generated Content 

A new Rule 3(3) imposes explicit due diligence obligations for AI-generated content. Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating synthetically generated information that violates the law.  This includes content containing child sexual abuse material, non-consensual intimate imagery, obscene or sexually explicit material, false electronic records, or content related to explosive materials or arms procurement. It also includes deceptive portrayals of real individuals or events intended to mislead.  For lawful AI-generated content that does not violate these prohibitions, the rules mandate prominent labelling. Visual content must carry clearly visible notices. Audio content must include a prefixed disclosure. Additionally, such content must be embedded with permanent metadata or other provenance mechanisms, including a unique identifier linking the content to the intermediary computer resource, where technically feasible. Platforms are expressly prohibited from enabling the suppression or removal of these labels or metadata. 
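
The labelling and provenance requirement above can be sketched as a minimal record a platform might attach to AI-generated media. The field names and structure here are hypothetical illustrations, not anything prescribed by the notification; a content hash stands in for the "unique identifier linking the content to the intermediary computer resource."

```python
import hashlib
import json

def make_sgi_record(content: bytes, platform_id: str) -> dict:
    """Build a provenance record for synthetically generated
    information: an explicit AI-generated label plus a unique
    identifier tying the content to the generating intermediary."""
    digest = hashlib.sha256(content).hexdigest()
    return {
        "sgi": True,                      # explicit AI-generated flag
        "platform": platform_id,          # generating intermediary
        "content_id": digest,             # unique identifier (hash)
        "label": "AI-generated content",  # user-visible notice text
    }

record = make_sgi_record(b"synthetic-video-bytes", "example-platform")
metadata = json.dumps(record)  # embedded alongside the media file
```

Because the rules prohibit enabling removal of such labels, a production scheme would embed this record in tamper-evident metadata (e.g., signed provenance manifests) rather than a plain sidecar field.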

Enhanced Obligations for Social Media Intermediaries 

Rule 4 introduces an additional compliance layer for significant social media intermediaries. Before allowing publication, these platforms must require users to declare whether content is synthetically generated, and must deploy technical measures to verify the accuracy of that declaration. Content confirmed as AI-generated must be clearly labelled before publication.

If a platform knowingly permits or fails to act on unlawful synthetically generated information, it may be deemed to have failed its due diligence obligations. The amendments also align terminology with India’s evolving criminal code, replacing references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023.

Implications for Indian Cybersecurity and Digital Platforms 

The February 2026 amendment reflects a decisive step in Indian cybersecurity policy. Rather than banning AI-generated content outright, the government has opted for traceability, transparency, and technical accountability. The focus is on preventing deception, protecting individuals from reputational harm, and ensuring rapid response to unlawful synthetic media.

For platforms operating within India’s Information Technology ecosystem, compliance will require investment in automated detection systems, content labelling infrastructure, metadata embedding, and accelerated grievance redressal workflows. For users, the regulatory signal is clear: generating deceptive synthetic media is no longer merely unethical; it may trigger direct legal consequences.

As AI tools continue to scale, the regulatory framework introduced through G.S.R. 120(E) marks India’s formal recognition that AI-generated content is not a fringe concern but a central governance challenge in the digital age.

Healthcare Networks, Financial Regulators, and Industrial Systems on the Same Target List

12 February 2026 at 04:50

More than 25 million individuals are now tied to the Conduent Business Services breach as investigations continue to expand its scope. In Canada, approximately 750,000 investors were affected in the CIRO data breach. During roughly the same period, 2,451 vulnerabilities specific to industrial control systems were disclosed by 152 vendors. The latest ColorTokens Threat Advisory […]

The post Healthcare Networks, Financial Regulators, and Industrial Systems on the Same Target List appeared first on ColorTokens.

The post Healthcare Networks, Financial Regulators, and Industrial Systems on the Same Target List appeared first on Security Boulevard.

Why are experts optimistic about future AI security technologies

11 February 2026 at 17:00

Are Non-Human Identities the Key to Enhancing AI Security Technologies? The digital world has become an intricate web of connections, powered not only by human users but also by a myriad of machine identities, commonly known as Non-Human Identities (NHIs). These obscure yet vital components are rapidly becoming central to AI security technologies, sparking optimism among experts […]

The post Why are experts optimistic about future AI security technologies appeared first on Entro.

The post Why are experts optimistic about future AI security technologies appeared first on Security Boulevard.

What elements ensure stability in NHI Lifecycle Management

11 February 2026 at 17:00

How Can Organizations Achieve Stability Through NHI Lifecycle Management? How can organizations secure their digital infrastructures effectively? Managing Non-Human Identities (NHIs) presents a formidable challenge, as they necessitate a nuanced approach that addresses every stage of their lifecycle. As technology advances, these machine identities become critical components in safeguarding sensitive data and maintaining robust cybersecurity […]

The post What elements ensure stability in NHI Lifecycle Management appeared first on Entro.

The post What elements ensure stability in NHI Lifecycle Management appeared first on Security Boulevard.

How to ensure Agentic AI security fits your budget

11 February 2026 at 17:00

Are Organizations Equipped to Handle Agentic AI Security? As artificial intelligence and machine learning have become integral parts of various industries, securing these advanced technologies is paramount. One crucial aspect that often gets overlooked is the management of Non-Human Identities (NHIs) and their associated secrets—a key factor in ensuring robust Agentic AI security and fitting […]

The post How to ensure Agentic AI security fits your budget appeared first on Entro.

The post How to ensure Agentic AI security fits your budget appeared first on Security Boulevard.

Cloud Security and Compliance: What It Is and Why It Matters for Your Business

11 February 2026 at 16:08

Cloud adoption didn’t just change where workloads run. It fundamentally changed how security and compliance must be managed. Enterprises are moving faster than ever across AWS, Azure, GCP, and hybrid...

The post Cloud Security and Compliance: What It Is and Why It Matters for Your Business appeared first on Security Boulevard.

Survey: Widespread Adoption of AI Hasn’t Yet Reduced Cybersecurity Burnout

11 February 2026 at 15:41

A global survey of 1,813 IT and cybersecurity professionals finds that despite the rise of artificial intelligence (AI) and automation, cybersecurity teams still spend on average 44% of their time on manual or repetitive work. Conducted by Sapio Research on behalf of Tines, a provider of an automation platform, the survey also notes that as..

The post Survey: Widespread Adoption of AI Hasn’t Yet Reduced Cybersecurity Burnout appeared first on Security Boulevard.

'Fam Jam' fundraiser to help food-focused charities in Kingston

11 February 2026 at 15:02
For 43 years, Mae Whalen spent her Decembers planning, writing, rehearsing and directing Newburgh’s annual Community Christmas Concert, which raised thousands of dollars and purchased thousands of toys for children who needed them.

Once-hobbled Lumma Stealer is back with lures that are hard to resist

11 February 2026 at 17:11

Last May, law enforcement authorities around the world scored a key win when they hobbled the infrastructure of Lumma, an infostealer that infected nearly 395,000 Windows computers over just a two-month span leading up to the international operation. Researchers said Wednesday that Lumma is once again “back at scale” in hard-to-detect attacks that pilfer credentials and sensitive files.

Lumma, also known as Lumma Stealer, first appeared in Russian-speaking cybercrime forums in 2022. Its cloud-based malware-as-a-service model provided a sprawling infrastructure of domains for hosting lure sites offering free cracked software, games, and pirated movies, as well as command-and-control channels and everything else a threat actor needed to run their infostealing enterprise. Within a year, Lumma was selling for as much as $2,500 for premium versions. By the spring of 2024, the FBI counted more than 21,000 listings on crime forums. Last year, Microsoft said Lumma had become the “go-to tool” for multiple crime groups, including Scattered Spider, one of the most prolific groups.

Takedowns are hard

The FBI and an international coalition of its counterparts took action last May, saying they had seized 2,300 domains, command-and-control infrastructure, and crime marketplaces that had enabled the infostealer to thrive. Recently, however, the malware has made a comeback and is once again infecting machines in significant numbers.


The original Secure Boot certificates are about to expire, but you probably won’t notice

11 February 2026 at 16:45

With the original release of Windows 8, Microsoft also introduced Secure Boot. Nearly 15 years on, the original 2011 Secure Boot certificates are about to expire. If these certificates are not replaced with new ones, Secure Boot will cease to function: your machine will still boot and operate, but the benefits of Secure Boot are mostly gone, and as newer vulnerabilities are discovered, systems without updated Secure Boot certificates will be increasingly exposed.

Microsoft has already been rolling out new certificates through Windows updates, but only for users of supported versions of Windows, which means Windows 11. If you’re using Windows 10, without the Extended Security Updates, you won’t be getting the new certificates through Windows Update. Even if you use Windows 11, you may need a UEFI update from your laptop or motherboard OEM, assuming they still support your device.

For Linux users using Secure Boot, you’re probably covered by fwupd, which will update the certificates as part of your system’s update program, like KDE’s Discover. Of course, you can also use fwupd manually in the terminal, if you’d like. For everyone else not using Secure Boot, none of this will matter and you’re going to be just fine. I honestly doubt there will be much fallout from this updating process, but there’s always bound to be a few people who fall between the cracks.
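For reference, the fwupd flow described above typically looks like the following on a distribution that ships fwupd; whether a Secure Boot certificate or dbx update actually appears depends on what your vendor publishes to the LVFS:

```shell
# Fetch the latest update metadata from the LVFS.
fwupdmgr refresh

# List firmware updates available for this machine, which may include
# Secure Boot certificate/dbx payloads where the vendor provides them.
fwupdmgr get-updates

# Apply the pending updates (may prompt for a reboot).
fwupdmgr update
```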

All we can do is hope whoever is responsible for Secure Boot at Microsoft hasn’t started slopcoding yet.

NDSS 2025 – MingledPie: A Cluster Mingling Approach For Mitigating Preference Profiling In CFL

11 February 2026 at 11:00

Session 12A: Federated Learning 2

Authors, Creators & Presenters: Cheng Zhang (Hunan University), Yang Xu (Hunan University), Jianghao Tan (Hunan University), Jiajie An (Hunan University), Wenqiang Jin (Hunan University)

PAPER
MingledPie: A Cluster Mingling Approach for Mitigating Preference Profiling in CFL

Clustered federated learning (CFL) serves as a promising framework for addressing the challenges of non-IID (non-Independent and Identically Distributed) data and heterogeneity in federated learning. It involves grouping clients into clusters based on the similarity of their data distributions or model updates. However, classic CFL frameworks pose severe threats to clients' privacy, since the honest-but-curious server can easily learn the bias of each client's data distribution (its preferences). In this work, we propose a privacy-enhanced clustered federated learning framework, MingledPie, which aims to resist the server's preference-profiling capability by allowing clients to be grouped into multiple clusters spontaneously. Specifically, within a given cluster, we mingle two types of clients: a majority that share similar data distributions, and a small portion that do not (false positive clients). As a result, the CFL server cannot link clients' data preferences to the cluster categories they belong to. To achieve this, we design an indistinguishable cluster identity generation approach that enables clients to form clusters with a certain proportion of false positive members without the assistance of a CFL server. Meanwhile, training with mingled false positive clients inevitably degrades the performance of the cluster's global model. To rebuild an accurate cluster model, we represent the mingled cluster models as a system of linear equations in the accurate models and solve it. Rigorous theoretical analyses are conducted to evaluate the usability and security of the proposed designs. In addition, extensive evaluations of MingledPie on six open-source datasets show that it defends against preference profiling attacks with an accuracy of 69.4% on average, while the model accuracy loss is limited to between 0.02% and 3.00%.
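The recovery step in the abstract, representing the mingled cluster models as a linear system in the accurate models, can be sketched in a few lines of NumPy. This is a toy illustration under assumed shapes and mixing proportions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

k, d = 3, 5                        # number of clusters, model dimension
W_true = rng.normal(size=(k, d))   # accurate cluster models (unknown in practice)

# Assumed mixing matrix: each cluster's model averages mostly its own
# clients with a small share of false positive clients from other clusters.
A = 0.7 * np.eye(k) + np.full((k, k), 0.1)
A /= A.sum(axis=1, keepdims=True)  # rows are client proportions, summing to 1

M = A @ W_true                     # observed mingled cluster models

# With the proportions known, the accurate models are recovered by
# solving the linear system A @ W = M.
W_rec = np.linalg.solve(A, M)
```

In the paper's setting the mingling proportions come out of the cluster identity generation step; here they are simply assumed known to show why the system is solvable.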

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.


The post NDSS 2025 – MingledPie: A Cluster Mingling Approach For Mitigating Preference Profiling In CFL appeared first on Security Boulevard.

Survey Sees Little Post-Quantum Computing Encryption Progress

10 February 2026 at 17:15

A global survey of 4,149 IT and security practitioners finds that while three-quarters (75%) expect a quantum computer will be capable of breaking traditional public key encryption within five years, only 38% are currently preparing to adopt post-quantum cryptography. Conducted by the Ponemon Institute on behalf of Entrust, a provider of..

The post Survey Sees Little Post-Quantum Computing Encryption Progress appeared first on Security Boulevard.

Windows' original Secure Boot certificates expire in June—here's what you need to do

10 February 2026 at 13:04

Windows 8 is remembered most for its oddball touchscreen-focused full-screen Start menu, but it also introduced a number of under-the-hood enhancements to Windows. One of those was UEFI Secure Boot, a mechanism for verifying PC bootloaders to ensure that unverified software can't be loaded at startup. Secure Boot was enabled but technically optional for Windows 8 and Windows 10, but it became a formal system requirement for installing Windows starting with Windows 11 in 2021.

Secure Boot has relied on the same security certificates to verify bootloaders since 2011, during the development cycle for Windows 8. But those original certificates are set to expire in June and October of this year, something Microsoft is highlighting in a post today.

This certificate expiration date isn't news—Microsoft and most major PC makers have been talking about it for months or years, and behind-the-scenes work to get the Windows ecosystem ready has been happening for some time. And renewing security certificates is a routine occurrence that most users only notice when something goes wrong.


Versa SASE Platform Now Prevents Sensitive Data From Being Shared With AI

10 February 2026 at 09:00

Versa has enhanced its SASE platform by integrating text analysis and optical character recognition (OCR) capabilities to better identify sensitive data and improve cybersecurity. The updates aim to provide deeper insights for teams dealing with AI-related data risks, reduce false positives, and enable effective incident response through AI-driven alert correlation.

The post Versa SASE Platform Now Prevents Sensitive Data From Being Shared With AI appeared first on Security Boulevard.

FIIG Securities Fined AU$2.5 Million Following Prolonged Cybersecurity Failures

10 February 2026 at 04:28


Australian fixed-income firm FIIG Securities has been fined AU$2.5 million after the Federal Court found it failed to adequately protect client data from cybersecurity threats over a period exceeding four years. The penalty follows a major FIIG cyberattack in 2023 that resulted in the theft and exposure of highly sensitive personal and financial information belonging to thousands of clients.

It is the first time the Federal Court has imposed civil penalties for cybersecurity failures under the general obligations of an Australian Financial Services (AFS) license.

In addition to the fine, the court ordered FIIG Securities to pay AU$500,000 toward the Australian Securities and Investments Commission’s (ASIC) enforcement costs. FIIG must also implement a compliance program, including the engagement of an independent expert to ensure its cybersecurity and cyber resilience systems are reasonably managed going forward.

FIIG Cyberattack Exposed Sensitive Client Data After Years of Security Gaps 

The enforcement action stems from a ransomware attack that occurred in 2023. ASIC alleged that between March 2019 and June 2023, FIIG Securities failed to implement adequate cybersecurity measures, leaving its systems vulnerable to intrusion. On May 19, 2023, a hacker gained access to FIIG’s IT network and remained undetected for nearly three weeks.

During that time, approximately 385 gigabytes of confidential data were exfiltrated. The stolen data included names, addresses, dates of birth, driver’s licences, passports, bank account details, tax file numbers, and other sensitive information. FIIG later notified around 18,000 clients that their personal data may have been compromised as a result of the FIIG cyberattack.

Alarmingly, FIIG Securities did not discover the breach on its own. The company became aware of the incident only after being contacted by the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) on June 2. Despite receiving this warning, FIIG did not launch a formal internal investigation until six days later.

FIIG admitted it had failed to comply with its AFS licence obligations and acknowledged that adequate cybersecurity controls would have enabled earlier detection and response. The firm also conceded that adherence to its own policies and procedures could have prevented much of the client information from being downloaded.

Regulatory Action Against FIIG Securities Sets Precedent for Cybersecurity Enforcement 

ASIC Deputy Chair Sarah Court said the case highlights the growing risks posed by cyber threats and the consequences of inadequate controls. “Cyber-attacks and data breaches are escalating in both scale and sophistication, and inadequate controls put clients and companies at real risk,” she said. “ASIC expects financial services licensees to be on the front foot every day to protect their clients. FIIG wasn’t – and they put thousands of clients at risk.”

ASIC Chair Joe Longo described the matter as a broader warning for Australian businesses. “This matter should serve as a wake-up call to all companies on the dangers of neglecting cybersecurity systems,” he said, emphasizing that cybersecurity is not a “set and forget” issue but one that requires continuous monitoring and improvement.

ASIC alleged that FIIG Securities failed to implement basic cybersecurity protections, including properly configured firewalls, regular patching of software and operating systems, mandatory cybersecurity training for staff, and sufficient allocation of financial and human resources to manage cyber risk.

Additional deficiencies cited by ASIC included the absence of an up-to-date incident response plan, ineffective privileged access management, lack of regular vulnerability scanning, failure to deploy endpoint detection and response tools, inadequate use of multi-factor authentication, and a poorly configured Security Information and Event Management (SIEM) system.

Lessons From the FIIG Cyberattack for Australia’s Financial Sector 

Cybersecurity experts have pointed out that the significance of the FIIG cyberattack lies not only in the breach itself but in the prolonged failure to implement reasonable protections. Annie Haggar, Partner and Head of Cybersecurity at Norton Rose Fulbright Australia, noted in a LinkedIn post that ASIC’s case provides clarity on what regulators consider “adequate” cybersecurity. Key factors include the nature of the business, the sensitivity of stored data, the value of assets under management, and the potential impact of a successful attack.

The attack on FIIG Securities was later claimed by the ALPHV/BlackCat ransomware group, which stated on the dark web that it had stolen approximately 385GB of data from FIIG’s main server. The group warned the company that it had three days to make contact regarding the consequences of what it described as a failure by FIIG’s IT department.

According to FBI and Center for Internet Security reports, the ALPHV/BlackCat group gains initial access using compromised credentials, deploys PowerShell scripts and Cobalt Strike to disable security features, and uses malicious Group Policy Objects to spread ransomware across networks.

The breach was discovered after an employee reported being locked out of their email account. Further investigation revealed that files had been encrypted and backups wiped. While FIIG managed to restore some systems, other data could not be recovered.

ENISA Updates Its International Strategy to Strengthen EU’s Cybersecurity Cooperation

10 February 2026 at 04:20


The European Union Agency for Cybersecurity has released an updated international strategy to reinforce the EU’s cybersecurity ecosystem and strengthen cooperation beyond Europe’s borders. The revised ENISA International Strategy refreshes the agency’s approach to working with global partners while ensuring stronger alignment with the European Union’s international cybersecurity policies, core values, and long-term objectives.

Cybersecurity challenges today rarely stop at national or regional borders. Digital systems, critical infrastructure, and data flows are deeply intertwined across continents, making international cooperation a necessity rather than a choice. Against this backdrop, ENISA has clarified that it will continue to engage strategically with international partners outside the European Union, but only when such cooperation directly supports its mandate to improve cybersecurity within Europe.

ENISA International Strategy Aligns Global Cooperation With Europe’s Cybersecurity Priorities 

Under the updated ENISA International Strategy, the agency’s primary objective remains unchanged: raising cybersecurity levels across the EU. International cooperation is therefore pursued selectively and strategically, focusing on areas where collaboration can deliver tangible benefits to EU Member States and strengthen Europe’s overall cybersecurity resilience.

ENISA Executive Director Juhan Lepassaar highlighted the importance of international engagement in achieving this goal. He stated: “International cooperation is essential in cybersecurity. It complements and strengthens the core tasks of ENISA to achieve a high common level of cybersecurity across the Union. Together with our Management Board, ENISA determines how we engage at an international level to achieve our mission and mandate. ENISA stands fully prepared to cooperate on the global stage to support the EU Member States in doing so.”

The strategy is closely integrated with ENISA’s broader organizational direction, including its recently renewed stakeholders’ strategy. A central focus is cooperation with international partners that share the EU’s values and maintain strategic relationships with the Union.

Expanding Cybersecurity Partnerships Beyond Europe While Supporting EU Policy Objectives 

The revised ENISA International Strategy outlines several active areas of international cooperation. These include more tailored working arrangements with specific countries, notably Ukraine and the United States. These partnerships are designed to focus on capacity-building, best practice exchange, and structured information and knowledge sharing in the field of cybersecurity.

ENISA will also continue supporting the European Commission and the European External Action Service (EEAS) in EU cyber dialogues with partners such as Japan and the United Kingdom. Through this role, ENISA provides technical expertise to inform discussions and to help align international cooperation with Europe’s cybersecurity priorities.

Another key element of the strategy involves continued support for EU candidate countries in the Western Balkans region. From 2026 onward, this support is planned to expand through the extension of specific ENISA frameworks and tools. These may include the development of comparative cyber indexes, cybersecurity exercise methodologies, and the delivery of targeted training programs aimed at strengthening national capabilities.

Strengthening Europe’s Cybersecurity Resilience Through Multilateral Frameworks 

The updated strategy also addresses the operationalization of the EU Cybersecurity Reserve, established under the 2025 EU Cyber Solidarity Act. ENISA plans to support making the reserve operational for third countries associated with the Digital Europe Programme, including Moldova, thereby extending coordinated cybersecurity response mechanisms while maintaining alignment with EU standards.

In addition, ENISA will continue contributing to the cybersecurity work of the G7 Cybersecurity Working Group. In this context, the agency provides EU-level cybersecurity expertise when required, supporting cooperation on shared cyber threats and resilience efforts. The strategy also leaves room for exploring further cooperation with other like-minded international partners where mutual interests align.

Finally, the ENISA International Strategy reaffirms the principles guiding ENISA’s international cooperation and clarifies working modalities with the European Commission, the EEAS, and EU Member States. These principles were first established following the adoption of ENISA’s initial international strategy in 2021 and have since been consolidated and refined based on practical experience and best practices.

Zscaler Bolsters Zero-Trust Arsenal with Acquisition of Browser Security Firm SquareX

9 February 2026 at 14:18

Cloud security titan Zscaler Inc. has acquired SquareX, a pioneer in browser-based threat protection, in an apparent move to step away from traditional, clunky security hardware and toward a seamless, browser-native defense. The acquisition, which did not include financial terms, integrates SquareX’s browser detection and response technology into Zscaler’s Zero Trust Exchange platform. Unlike traditional..

The post Zscaler Bolsters Zero-Trust Arsenal with Acquisition of Browser Security Firm SquareX appeared first on Security Boulevard.

AI Revolution Reshapes CISO Spending for 2026: Security Leaders Prioritize Defense Automation

9 February 2026 at 12:22

The cybersecurity landscape is undergoing a fundamental shift as chief information security officers (CISOs) redirect their 2026 budgets toward artificial intelligence (AI) and realign traditional defense strategies. Nearly 80% of senior security executives are prioritizing AI-driven solutions to counter increasingly sophisticated threats, a new report from Glilot Capital Partners reveals. The survey, which polled leaders..

The post AI Revolution Reshapes CISO Spending for 2026: Security Leaders Prioritize Defense Automation appeared first on Security Boulevard.
