
Received yesterday — 12 December 2025

New Android Malware Locks Device Screens and Demands a Ransom

12 December 2025 at 15:15

Android malware DroidLock

A new Android malware locks device screens and demands that users pay a ransom to keep their data from being deleted. Dubbed “DroidLock” by Zimperium researchers, the Android ransomware-like malware can also “wipe devices, change PINs, intercept OTPs, and remotely control the user interface, turning an infected phone into a hostile endpoint.” The malware detected by the researchers targeted Spanish Android users via phishing sites. Based on the examples provided, the French telecommunications company Orange S.A. was one of the companies impersonated in the campaign.

Android Malware DroidLock Uses ‘Ransomware-like Overlay’

The researchers detailed the new Android malware in a blog post this week, noting that the malware “has the ability to lock device screens with a ransomware-like overlay and illegally acquire app lock credentials, leading to a total takeover of the compromised device.” The malware uses fake system update screens to trick victims and can stream and remotely control devices via virtual network computing (VNC). The malware can also exploit device administrator privileges to “lock or erase data, capture the victim's image with the front camera, and silence the device.” The infection chain starts with a dropper that appears to require the user to change settings to allow apps from unknown sources to be installed (image below), which leads to a secondary payload containing the malware.

[Image: The Android malware DroidLock prompts users for installation permissions (Zimperium)]

Once the user grants accessibility permission, “the malware automatically approves additional permissions, such as those for accessing SMS, call logs, contacts, and audio,” the researchers said. The malware requests Device Admin Permission and Accessibility Services Permission at the start of the installation. Those permissions allow the malware to perform malicious actions such as:
  • Wiping data from the device, “effectively performing a factory reset.”
  • Locking the device.
  • Changing the PIN, password or biometric information to prevent user access to the device.
Based on commands received from the threat actor’s command and control (C2) server, “the attacker can compromise the device indefinitely and lock the user out from accessing the device.”

DroidLock Malware Overlays

The DroidLock malware uses Accessibility Services to launch overlays on targeted applications, prompted by an AccessibilityEvent originating from a package on the attacker's target list. The Android malware uses two primary overlay methods:
  • A Lock Pattern overlay that displays a pattern-drawing user interface (UI) to capture device unlock patterns.
  • A WebView overlay that loads attacker-controlled HTML content stored locally in a database; when an application is opened, the malware queries the database for the specific package name, and if a match is found it launches a full-screen WebView overlay that displays the stored HTML.
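The per-application targeting described above amounts to a lookup keyed on the Android package name. Defenders can invert the same logic to check whether apps on a device appear on a published target list. A minimal sketch in Python (the package names below are hypothetical placeholders, not entries from Zimperium's actual IoCs):

```python
# Hypothetical package names standing in for entries on the attacker's
# target list; real values would come from published DroidLock IoCs.
TARGETED_PACKAGES = {"com.example.bank", "com.example.crypto.wallet"}

def flag_overlay_targets(installed_packages: list[str]) -> list[str]:
    """Return installed packages that match the overlay target list."""
    return sorted(set(installed_packages) & TARGETED_PACKAGES)

print(flag_overlay_targets(["com.example.bank", "com.android.chrome"]))
# ['com.example.bank']
```

In practice the installed-package list would come from device inventory or MDM tooling; the matching itself is just a set intersection.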
The malware also uses a deceptive Android update screen that instructs users not to power off or restart their devices. “This technique is commonly used by attackers to prevent user interaction while malicious activities are carried out in the background,” the researchers said. The malware can also capture all screen activity and transmit it to a remote server by operating as a persistent foreground service and using MediaProjection and VirtualDisplay to capture screen images, which are then converted to a base64-encoded JPEG format and transmitted to the C2 server. “This highly dangerous functionality could facilitate the theft of any sensitive information shown on the device’s display, including credentials, MFA codes, etc.,” the researchers said. Zimperium has shared its findings with Google, so up-to-date Android devices are protected against the malware, and the company has also published DroidLock Indicators of Compromise (IoCs).
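Because the stolen screen frames are exfiltrated as base64-encoded JPEGs, one simple hunting heuristic is to test whether suspicious request payloads decode to data starting with the JPEG magic bytes. A minimal sketch of that check (the sample payload is fabricated for illustration; it is not from Zimperium's IoCs):

```python
import base64

# JPEG files begin with the SOI marker 0xFFD8FF.
JPEG_MAGIC = b"\xff\xd8\xff"

def looks_like_base64_jpeg(payload: str) -> bool:
    """Return True if a string base64-decodes to data with a JPEG header."""
    try:
        decoded = base64.b64decode(payload, validate=True)
    except Exception:
        return False  # not valid base64 at all
    return decoded.startswith(JPEG_MAGIC)

# Fabricated JPEG-like blob for demonstration.
sample = base64.b64encode(b"\xff\xd8\xff\xe0" + b"\x00" * 16).decode()
print(looks_like_base64_jpeg(sample))      # True
print(looks_like_base64_jpeg("aGVsbG8="))  # "hello" -> False
```

A real detection would layer this on top of volume and destination anomalies, since base64 image uploads also occur in legitimate traffic.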

AI Threat Detection: How Machines Spot What Humans Miss

Discover how AI strengthens cybersecurity by detecting anomalies, stopping zero-day and fileless attacks, and enhancing human analysts through automation.

The post AI Threat Detection: How Machines Spot What Humans Miss appeared first on Security Boulevard.

Received before yesterday

Are your cybersecurity needs satisfied with current NHIs?

11 December 2025 at 17:00

How Secure Are Your Non-Human Identities? Are your cybersecurity needs truly satisfied by your current approach to Non-Human Identities (NHIs) and Secrets Security Management? As more organizations migrate to cloud platforms, the challenge of securing machine identities is more significant than ever. NHIs, or machine identities, are pivotal in safeguarding sensitive data and ensuring seamless […]

The post Are your cybersecurity needs satisfied with current NHIs? appeared first on Entro.

The post Are your cybersecurity needs satisfied with current NHIs? appeared first on Security Boulevard.

NCSC Tests Honeypots and Cyber Deception Tools

11 December 2025 at 14:54

NCSC Tests Honeypots and Cyber Deception Tools

A study of honeypot and cyber deception technologies by the UK’s National Cyber Security Centre (NCSC) found that the deception tools hold promise for disrupting cyberattacks, but more information and standards are needed for them to work optimally. The agency plans to help with that. The NCSC test involved 121 organizations, 14 commercial providers of honeypots and deception tools, and 10 trials across environments ranging from the cloud to operational technology (OT). The NCSC concluded that “cyber deception can work, but it’s not plug-and-play.”

Honeypot and Cyber Deception Challenges

The NCSC said surveyed organizations believe that cyber deception technologies can offer “real value, particularly in detecting novel threats and enriching threat intelligence,” and a few even see potential for identifying insider threats. “However, outcome-based metrics were not readily available and require development,” the NCSC cautioned. The UK cybersecurity agency said the effectiveness of honeypots and cyber deception tools “depends on having the right data and context. We found that cyber deception can be used for visibility in many systems, including legacy or niche systems, but without a clear strategy organisations risk deploying tools that generate noise rather than insight.” The NCSC blog post didn’t specify what data was missing or needed to be developed to better measure the effectiveness of deception technologies, but the agency nonetheless concluded that “there’s a compelling case for increasing the use of cyber deception in the UK.” The study examined three core assumptions:
  • Cyber deception technologies can help detect compromises already inside networks.
  • Cyber deception and honeypots can help detect new attacks as they happen.
  • Cyber deception can change how attackers behave if they know an organization is using the tools.
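The first assumption rests on deliberately simple machinery: a decoy asset has no legitimate users, so every connection to it is signal rather than noise. A minimal low-interaction sketch in Python (the event field names are illustrative, not from any NCSC specification):

```python
import socket
import threading
from datetime import datetime, timezone

def make_honeypot() -> tuple[socket.socket, int]:
    """Bind a decoy TCP listener on an OS-assigned free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen()
    return srv, srv.getsockname()[1]

def log_one_touch(srv: socket.socket) -> dict:
    """Accept a single connection and record who touched the decoy."""
    conn, (src_ip, src_port) = srv.accept()
    conn.close()
    return {
        "time": datetime.now(timezone.utc).isoformat(),
        "src_ip": src_ip,
        "src_port": src_port,
    }

# Demo: any connection to the decoy is, by construction, worth logging.
srv, port = make_honeypot()
events: list[dict] = []
t = threading.Thread(target=lambda: events.append(log_one_touch(srv)))
t.start()
with socket.create_connection(("127.0.0.1", port)):
    pass  # simulate an attacker probing the decoy
t.join()
srv.close()
print(events[0]["src_ip"])  # 127.0.0.1
```

The NCSC's caveats apply even at this scale: without a strategy for where decoys sit and who reviews the events, such a listener generates noise rather than insight.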

Terminology, Guidance Needed for Honeypots and Deception Tools

The tests, conducted under the Active Cyber Defence (ACD) 2.0 program, also found that inconsistent terminology and guidance hamper optimal use of the technologies. “There’s a surprising amount of confusion around terminology, and vocabulary across the industry is often inconsistent,” NCSC said. “This makes it harder for organisations to understand what’s on offer or even what they’re trying to achieve. We think adopting standard terminology should help and we will be standardising our cyber deception vocabulary.” Another challenge is that organizations don’t know where to start. “They want impartial advice, real-world case studies, and reassurance that the tools they’re using are effective and safe,” the agency said. “We’ve found a strong marketplace of cyber deception providers offering a wide range of products and services. However, we were told that navigating this market can be difficult, especially for beginners.” The NCSC said it thinks it can help organizations “make informed, strategic choices.”

Should Organizations Say if They’re Using Deception Tools?

One interesting finding is that 90% of the trial participants said they wouldn’t publicly announce that they use cyber deception. While it’s understandable not to want to tip off attackers, the NCSC said that academic research shows that “when attackers believe cyber deception is in use they are less confident in their attacks. This can impose a cost on attackers by disrupting their methods and wasting their time, to the benefit of the defenders.”

Proper configuration is also a challenge for adopters. “As with any cyber security solution, misconfiguration can introduce new vulnerabilities,” the NCSC said. “If cyber deception tools aren’t properly configured, they may fail to detect threats or lead to a false sense of security, or worse, create openings for attackers. As networks evolve and new tools are introduced, keeping cyber deception tools aligned requires ongoing effort. It is important to consider regular updates and fine-tuning cyber deception solutions.”

Next steps for the NCSC involve helping organizations better understand and deploy honeypots and deception tools, possibly through a new ACD service. “By helping organisations to understand cyber deception and finding clear ways to measure impact, we are building a strong foundation to support the deployment of cyber deception at a national scale in the UK,” the agency said. “We are looking at developing a new ACD service to achieve this.”

“One of the most promising aspects of cyber deception is its potential to impose cost on adversaries,” the NCSC added. “By forcing attackers to spend time and resources navigating false environments, chasing fake credentials, or second-guessing their access, cyber deception can slow down attacks and increase the likelihood of detection. This aligns with broader national resilience goals by making the UK a harder, more expensive target.”

Rethinking Security as Access Control Moves to the Edge

11 December 2025 at 13:35

The convergence of physical and digital security is driving a shift toward software-driven, open-architecture edge computing. Access control has typically been treated as a physical domain problem — managing who can open which doors, using specialized systems largely isolated from broader enterprise IT. However, the boundary between physical and digital security is increasingly blurring. With..

The post Rethinking Security as Access Control Moves to the Edge appeared first on Security Boulevard.

Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority

11 December 2025 at 13:32

OT oversight is an expensive industrial paradox. It’s hard to believe that an area can be simultaneously underappreciated, underfunded, and under increasing attack. And yet, with ransomware hackers knowing that downtime equals disaster and companies not monitoring in kind, this is an open and glaring hole across many ecosystems. Even a glance at the numbers..

The post Hacks Up, Budgets Down: OT Oversight Must Be An IT Priority appeared first on Security Boulevard.

Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks

11 December 2025 at 13:27

Modern internet users navigate an increasingly fragmented digital ecosystem dominated by countless applications, services, brands and platforms. Engaging with online offerings often requires selecting and remembering passwords or taking other steps to verify and protect one’s identity. However, following best practices has become incredibly challenging due to various factors. Identifying Digital Identity Management Problems in..

The post Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks appeared first on Security Boulevard.

An Inside Look at the Israeli Cyber Scene

11 December 2025 at 11:30

Alan breaks down why Israeli cybersecurity isn’t just booming—it’s entering a full-blown renaissance, with record funding, world-class talent, and breakout companies redefining the global cyber landscape.

The post An Inside Look at the Israeli Cyber Scene appeared first on Security Boulevard.

Federal Grand Jury Charges Former Manager with Government Contractor Fraud

11 December 2025 at 04:16

Government Contractor Fraud

Government contractor fraud is at the heart of a new indictment returned by a federal grand jury in Washington, D.C. against a former senior manager in Virginia. Prosecutors say Danielle Hillmer, 53, of Chantilly, misled federal agencies for more than a year about the security of a cloud platform used by the U.S. Army and other government customers. The indictment, announced yesterday, charges Hillmer with major government contractor fraud, wire fraud, and obstruction of federal audits. According to prosecutors, she concealed serious weaknesses in the system while presenting it as fully compliant with strict federal cybersecurity standards.

Government Contractor Fraud: Alleged Scheme to Mislead Agencies

According to court documents, Hillmer’s actions spanned from March 2020 through November 2021. During this period, she allegedly obstructed auditors and misrepresented the platform’s compliance with the Federal Risk and Authorization Management Program (FedRAMP) and the Department of Defense’s Risk Management Framework. The indictment claims that while the platform was marketed as a secure environment for federal agencies, it lacked critical safeguards such as access controls, logging, and monitoring. Despite repeated warnings, Hillmer allegedly insisted the system met the FedRAMP High baseline and DoD Impact Levels 4 and 5, both of which are required for handling sensitive government data.

Obstruction of Audits

Federal prosecutors allege Hillmer went further by attempting to obstruct third-party assessors during audits in 2020 and 2021. She is accused of concealing deficiencies and instructing others to hide the true state of the system during testing and demonstrations. The indictment also states that Hillmer misled the U.S. Army to secure sponsorship for a Department of Defense provisional authorization. She allegedly submitted, and directed others to submit, authorization materials containing false information to assessors, authorizing officials, and government customers. These misrepresentations, prosecutors say, allowed the contractor to obtain and maintain government contracts under false pretenses.

Charges and Potential Penalties

Hillmer faces two counts of wire fraud, one count of major government fraud, and two counts of obstruction of a federal audit. If convicted, she could face:
  • Up to 20 years in prison for each wire fraud count
  • Up to 10 years in prison for major government fraud
  • Up to 5 years in prison for each obstruction count
A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors. The indictment was announced by Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division and Deputy Inspector General Robert C. Erickson of the U.S. General Services Administration Office of Inspector General (GSA-OIG). The case is being investigated by the GSA-OIG, the Defense Criminal Investigative Service, the Naval Criminal Investigative Service, and the Department of the Army Criminal Investigation Division. Trial Attorneys Lauren Archer and Paul Hayden of the Criminal Division’s Fraud Section are prosecuting the case.

Broader Implications of Government Contractor Fraud

The indictment highlights ongoing concerns about the integrity of cloud platforms used by federal agencies. Programs like FedRAMP and the DoD’s Risk Management Framework are designed to ensure that systems handling sensitive government data meet rigorous security standards. Allegations that a contractor misrepresented compliance raise questions about oversight and the risks posed to national security when platforms fall short of requirements. Federal officials emphasized that the case underscores the importance of transparency and accountability in government contracting, particularly in areas involving cybersecurity. Note: An indictment is merely an allegation. Hillmer, like all defendants, is presumed innocent until proven guilty beyond a reasonable doubt in a court of law.

How does Agentic AI empower cybersecurity teams?

10 December 2025 at 17:00

Can Agentic AI Revolutionize Cybersecurity Practices? Where digital threats consistently challenge organizations, how can cybersecurity teams leverage innovations to bolster their defenses? Enter the concept of Agentic AI—a technology that could serve as a powerful ally in the ongoing battle against cyber threats. By enhancing the management of Non-Human Identities (NHIs) and secrets security management, […]

The post How does Agentic AI empower cybersecurity teams? appeared first on Entro.

The post How does Agentic AI empower cybersecurity teams? appeared first on Security Boulevard.

NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents

10 December 2025 at 14:08

The U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attack and mitigations for securing artificial intelligence (AI) agents. Speaking at the AI Summit New York conference, Apostol Vassilev, a research team supervisor for NIST, told attendees that the arm of the U.S. Department of Commerce is working with industry partners..

The post NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents appeared first on Security Boulevard.

Ring-fencing AI Workloads for NIST and ISO Compliance 

10 December 2025 at 12:32

AI is transforming enterprise productivity and reshaping the threat model at the same time. Unlike human users, agentic AI and autonomous agents operate at machine speed and inherit broad network permissions and embedded credentials. This creates new security and compliance … Read More

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on 12Port.

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on Security Boulevard.

Gartner’s AI Browser Ban: Rearranging Deck Chairs on the Titanic

10 December 2025 at 10:31

The cybersecurity world loves a simple solution to a complex problem, and Gartner delivered exactly that with its recent advisory: “Block all AI browsers for the foreseeable future.” The esteemed analyst firm warns that agentic browsers—tools like Perplexity’s Comet and OpenAI’s ChatGPT Atlas—pose too much risk for corporate use. While their caution makes sense given..

The post Gartner’s AI Browser Ban: Rearranging Deck Chairs on the Titanic appeared first on Security Boulevard.

Securing MCP: How to Build Trustworthy Agent Integrations

10 December 2025 at 08:25

Model Context Protocol (MCP) is quickly becoming the backbone of how AI agents interact with the outside world. It gives agents a standardized way to discover tools, trigger actions, and pull data. MCP dramatically simplifies integration work. In short, MCP servers act as the adapter that grants access to services, manages credentials and permissions, and..

The post Securing MCP: How to Build Trustworthy Agent Integrations appeared first on Security Boulevard.

Coupang CEO Resigns After Massive Data Breach Exposes Millions of Users

10 December 2025 at 02:42

Coupang CEO Resigns

The resignation of Coupang’s CEO is a headline many in South Korea expected, but it still marks a major moment for the country’s tech and e-commerce landscape. Coupang Corp. confirmed on Wednesday that CEO Park Dae-jun has stepped down following a massive data breach that exposed the personal information of 33.7 million people, almost two-thirds of the country. Park said he was “deeply sorry” for the incident and accepted responsibility both for the breach and for the company’s response. His exit, while formally described as a resignation, is widely seen as a forced departure given the scale of the fallout and growing anger among customers and regulators. To stabilize the company, Coupang’s U.S. parent, Coupang Inc., has appointed Harold Rogers, its chief administrative officer and general counsel, as interim CEO. The parent company said the leadership change aims to strengthen crisis management and ease customer concerns.

What Happened in the Coupang Data Breach

The company clarified that the latest notice relates to the previously disclosed incident on November 29 and that no new leak has occurred. According to Coupang’s ongoing investigation, the leaked information includes:
  • Customer names and email addresses
  • Full shipping address book details, such as names, phone numbers, addresses, and apartment entrance access codes
  • Portions of the order information
Coupang emphasized that payment details, passwords, banking information, and customs clearance codes were not compromised. As soon as it identified the leak, the company blocked abnormal access routes and tightened internal monitoring. It is now working closely with the Ministry of Science and ICT, the National Police Agency, the Personal Information Protection Commission (PIPC), the Korea Internet & Security Agency (KISA), and the Financial Supervisory Service.

Phishing, Smishing, and Impersonation Alerts

Coupang warned customers to be extra cautious as leaked data can fuel impersonation scams. The company reminded users that:
  • Coupang never asks customers to install apps via phone or text.
  • Unknown links in messages should not be opened.
  • Suspicious communications should be reported to 112 or the Financial Supervisory Service.
  • Customers must verify messages using Coupang’s official customer service numbers.
Users who stored apartment entrance codes in their delivery address book were also urged to change them immediately. The company also clarified that delivery drivers rarely call customers unless necessary to access a building or resolve a pickup issue, a small detail meant to help people recognize potential scam attempts.

Coupang CEO Resigns as South Korea Toughens Cyber Rules

The departure of CEO Park comes at a time when South Korea is rethinking how corporations respond to data breaches. The government’s 2025 Comprehensive National Cybersecurity Strategy puts direct responsibility on CEOs for major security incidents. It also expands CISOs' authority, strengthens IT asset management requirements, and gives chief privacy officers greater influence over security budgets. This shift follows other serious breaches, including SK Telecom’s leak of 23 million user records, which led to a record 134.8 billion won fine. Regulators are now considering fines of up to 1.2 trillion won for Coupang, roughly 3% of its annual sales, under the Personal Information Protection Act. The company also risks losing its ISMS-P certification, a possibility unprecedented for a business of its size.

Industry Scramble After a Coupang Data Breach of This Scale

A Coupang data breach affecting tens of millions of people has sent shockwaves across South Korea’s corporate sector. Authorities have launched emergency inspections of 1,600 ISMS-certified companies and begun unannounced penetration tests. Security vendors say Korean companies are urgently adding multi-factor authentication, AI-based anomaly detection, insider threat monitoring, and stronger access controls. Police have named a former Chinese Coupang employee as a suspect, intensifying focus on insider risk. Government agencies, including the National Intelligence Service, are also working with private partners to shorten cyber-incident analysis times from 14 days to 5 days using advanced AI forensic labs.

Looking Ahead

With the CEO’s resignation now shaping the company’s crisis trajectory, Coupang faces a long road to rebuilding trust among users and regulators. The company says its teams are working to resolve customer concerns quickly, but the broader lesson is clear: cybersecurity failures now carry real consequences, including at the highest levels of leadership.

Rebrand Cybersecurity from “Dr. No” to “Let’s Go”

9 December 2025 at 11:52

When it comes to cybersecurity, it often seems the best prevention is to follow a litany of security “do’s” and “don’ts.”  A former colleague once recalled that at one organization where he worked, this approach led to such a long list of guidance that the cybersecurity function was playfully referred to as a famous James..

The post Rebrand Cybersecurity from “Dr. No” to “Let’s Go” appeared first on Security Boulevard.

AI-Powered Security Operations: Governance Considerations for Microsoft Sentinel Enterprise Deployments

9 December 2025 at 11:31

The Tech Field Day Exclusive with Microsoft Security (#TFDxMSSec25) spotlighted one of the most aggressive demonstrations of AI-powered security operations to date. Microsoft showcased how Sentinel’s evolving data lake and graph architecture now drive real-time, machine-assisted threat response. The demo of “Attack Disruption” captured the promise—and the unease—of a security operations center where AI acts..

The post AI-Powered Security Operations: Governance Considerations for Microsoft Sentinel Enterprise Deployments appeared first on Security Boulevard.

Microsoft Takes Aim at “Swivel-Chair Security” with Defender Portal Overhaul

9 December 2025 at 10:20

At a recent Tech Field Day Exclusive event, Microsoft unveiled a significant evolution of its security operations strategy—one that attempts to solve a problem plaguing security teams everywhere: the exhausting practice of jumping between multiple consoles just to understand a single attack. The Problem: Too Many Windows, Not Enough Clarity Security analysts have a name..

The post Microsoft Takes Aim at “Swivel-Chair Security” with Defender Portal Overhaul appeared first on Security Boulevard.

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

8 December 2025 at 16:26

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

AI browsers may be innovative, but they’re “too risky for general adoption by most organizations,” Gartner warned in a recent advisory to clients. The 13-page document, by Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts, cautions that AI browsers’ ability to autonomously navigate the web and conduct transactions “can bypass traditional controls and create new risks like sensitive data leakage, erroneous agentic transactions, and abuse of credentials.” Default AI browser settings that prioritize user experience could also jeopardize security, they said. “Sensitive user data — such as active web content, browsing history, and open tabs — is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed,” the analysts said. “Gartner strongly recommends that organizations block all AI browsers for the foreseeable future because of the cybersecurity risks identified in this research, and other potential risks that are yet to be discovered, given this is a very nascent technology,” they cautioned.

AI Browsers’ Agentic Capabilities Could Introduce Security Risks: Analysts

The researchers largely set aside risks posed by AI browsers’ built-in AI sidebars, noting that LLM-powered search and summarization functions “will always be susceptible to indirect prompt injection attacks, given that current LLMs are inherently vulnerable to such attacks. Therefore, the cybersecurity risks associated with an AI browser’s built-in AI sidebar are not the primary focus of this research.” Still, they noted that use of AI sidebars could result in sensitive data leakage. Their focus was more on the risks posed by AI browsers’ agentic and autonomous transaction capabilities, which could introduce new security risks, such as “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.” AI browsers could also leak sensitive data that users are currently viewing to their cloud-based service back end, they noted.

Analysts Focus on Perplexity Comet

An AI browser’s agentic transaction capability “is a new capability that differentiates AI browsers from third-party conversational AI sidebars and basic script-based browser automation,” the analysts said. Not all AI browsers support agentic transactions, they said, but two prominent ones that do are Perplexity Comet and OpenAI’s ChatGPT Atlas. The analysts said they’ve performed “a limited number of tests using Perplexity Comet,” so that AI browser was their primary focus, but they noted that “ChatGPT Atlas and other AI browsers work in a similar fashion, and the cybersecurity considerations are also similar.” Comet’s documentation states that the browser “may process some local data using Perplexity’s servers to fulfill your queries. This means Comet reads context on the requested page (such as text and email) in order to accomplish the task requested.” “This means sensitive data the user is viewing on Comet might be sent to Perplexity’s cloud-based AI service, creating a sensitive data leakage risk,” the analysts said. Users likely would view more sensitive data in a browser than they would typically enter in a GenAI prompt, they said. Even if an AI browser is approved, users must be educated that “anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions,” the Gartner analysts said. Employees might also be tempted to use AI browsers to automate tasks, which could result in “erroneous agentic transactions against internal resources as a result of the LLM’s inaccurate reasoning or output content.”

AI Browser Recommendations

Gartner said employees should be blocked from accessing, downloading and installing AI browsers through network and endpoint security controls. “Organizations with low risk tolerance must block AI browser installations, while those with higher-risk tolerance can experiment with tightly controlled, low-risk automation use cases, ensuring robust guardrails and minimal sensitive data exposure,” they said. For pilot use cases, they recommended disabling Comet’s “AI data retention” setting so that Perplexity can’t use employee searches to improve its AI models. Users should also be instructed to periodically perform the “delete all memories” function in Comet to minimize the risk of sensitive data leakage.
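Blocking of this kind is typically enforced at the DNS or secure web gateway layer as a domain blocklist. A minimal sketch of the matching logic (the listed domains are illustrative assumptions on my part, not Gartner's list; a real deployment would use vendor-maintained categories):

```python
# Illustrative AI-browser service domains; a real gateway would pull a
# vendor-maintained category list rather than a hand-written set.
AI_BROWSER_DOMAINS = {"perplexity.ai", "chatgpt.com"}

def should_block(host: str, blocklist: set[str] = AI_BROWSER_DOMAINS) -> bool:
    """Block an exact domain match or any subdomain of a blocklisted domain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in blocklist)

print(should_block("www.perplexity.ai"))  # True (subdomain match)
print(should_block("notperplexity.ai"))   # False (not a subdomain)
```

Matching on the trailing label boundary (the leading dot) matters: a naive substring check would wrongly block unrelated domains that merely contain a blocklisted name.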

How AI-Enabled Adversaries Are Breaking the Threat Intel Playbook

8 December 2025 at 13:46

The cybersecurity landscape is undergoing another seismic shift — one driven not just by AI-enabled attackers but by a structural imbalance in how defenders and adversaries innovate. John Watters traces the evolution of modern cyber intelligence from its earliest days to the new era of AI-accelerated attacks, showing how past lessons are repeating themselves at..

The post How AI-Enabled Adversaries Are Breaking the Threat Intel Playbook appeared first on Security Boulevard.

Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach

8 December 2025 at 00:16

The Washington Post last month reported it was among the victims of data breaches tied to the Oracle EBS-related vulnerabilities, with a threat actor compromising the data of more than 9,700 former and current employees and contractors. Now, a former worker has filed a class-action lawsuit against the Post, claiming inadequate security.

The post Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach appeared first on Security Boulevard.

Voices of the Experts: What to Expect from Our Predictions Webinar

5 December 2025 at 09:02

Every year, Rapid7 brings together some of the most experienced minds in cybersecurity to pause, zoom out, and take stock of where the threat landscape is heading. Last year's predictions webinar sparked lively debate among practitioners, leaders, and researchers alike, and many of those early warnings were proven accurate.

We talked about expanding attack surfaces, the acceleration of zero-day exploitation, and the shifting role of SecOps teams navigating unpredictable regulatory and operational pressure. We explored how AI was beginning to shape attacker behavior and how defenders could prepare for a world where speed and context matter more than ever. Looking back, the real takeaway was not just the predictions themselves. It was how quickly the landscape shifted around them.

This year's predictions webinar builds on that momentum. The conversation feels different now. Threat actors have adapted. Business environments have tightened. Defenders are operating with more constraints and higher expectations than at any point in recent memory. That is exactly why our experts are once again stepping up to share what they are seeing, what is keeping them curious, and what they believe security teams should be paying closer attention to as we head into 2026.

A panel shaped by diverse vantage points

One of the strengths of this session is the range of perspectives represented on the panel.

Philip Ingram, Former Senior Military Intelligence Officer at Grey Hare Media, brings a global geopolitical lens that connects cyber activity with real-world tensions and state-aligned movements. His vantage point helps translate complex geopolitical signals into practical considerations for security teams.

Raj Samani, SVP and Chief Scientist at Rapid7, offers deep insight into attacker behavior, AI-driven disruption, and the evolving threat landscape. His work tracking threat actor tradecraft and the mechanics of cybercrime economies gives him a unique perspective on how attacks scale and shift over time.

Sabeen Malik, VP of Global Government Affairs and Public Policy at Rapid7, brings a policy and regulatory perspective that is essential for understanding how global mandates and governance trends influence security operations. Her insights shed light on the intersection of cyber risk, legislative pressure, and organizational responsibility.

Together, they create a multi-dimensional picture of what is coming next. Not hype. Not speculation. Instead, grounded observations from experts who see attacker behavior unfold from very different angles.

What we learned from last year 

Last year's session made one thing clear: the forces shaping cyber risk are not isolated. They are interconnected, and they are accelerating.

We saw that:

  • Attackers were closing the gap between vulnerability disclosure and exploitation.

  • Identity-based compromise continued to outpace traditional malware.

  • Economic and operational pressures made it harder for security teams to keep up.

  • Global events had tangible ripple effects on what attackers chose to target next.

Those insights helped set a realistic direction for 2025. Only twelve months later, the ground has shifted again. AI-assisted exploitation, insider-driven breaches, geopolitical instability, and expanding exposure surfaces are changing both attacker priorities and defender responsibilities.

This webinar is not a rehash. It is a recalibration, grounded in what is actually happening across the threat landscape right now.

Themes our experts will explore

While the predictions themselves will be revealed live during the session, we can share a few of the themes shaping this year's discussion.

  • How global tensions are redefining cyber risk for private organizations, even those far from the front lines

  • Why identity, behavior, and access are becoming the most reliable early indicators of compromise

  • Where AI is helping and hurting defenders, and how attackers are using automation and tooling to accelerate the earliest stages of intrusion

  • Why context and prioritization are becoming essential as vulnerability volumes and exploitation speeds continue to rise

  • How security teams can get ahead of exposure, not just react to it, through more integrated and risk-aware workflows

These are not abstract conversations. They reflect the real operational and strategic challenges security teams face every day.

Why you will not want to miss it

Whether you are leading a security program or defending in the trenches, this session will help you:

  • Understand the forces shaping attacker strategy

  • Identify the signals that matter most for early detection

  • Anticipate the operational pressures teams will face in 2026

  • Prioritize investments, workflows, and practices that support resilience

You will walk away with a clearer sense of where to focus, what to watch for, and how to prepare your team for what comes next, without getting lost in noise or speculation.

Join the conversation

This webinar is one of our most anticipated sessions of the year. If you have not registered yet, now is the perfect time to save your spot and hear directly from the experts shaping the conversation around what 2026 will look like for security teams everywhere.

Register here

China Hackers Using Brickstorm Backdoor to Target Government, IT Entities

5 December 2025 at 17:36

Chinese-sponsored groups are using the popular Brickstorm backdoor to access and gain persistence in government and tech firm networks, part of the ongoing effort by the PRC to establish long-term footholds in agency and critical infrastructure IT environments, according to a report by U.S. and Canadian security agencies.

The post China Hackers Using Brickstorm Backdoor to Target Government, IT Entities appeared first on Security Boulevard.

Cultural Lag Leaves Security as the Weakest Link

5 December 2025 at 11:19

For too long, security has been cast as a bottleneck – swooping in after developers build and engineers test to slow things down. The reality is blunt: if it’s bolted on, you’ve already lost. The organizations that win make security part of every decision, from the first line of code to the last boardroom conversation...

The post Cultural Lag Leaves Security as the Weakest Link appeared first on Security Boulevard.

Poetry Can Defeat LLM Guardrails Nearly Half the Time, Study Finds

4 December 2025 at 13:35

Poetic prompts caused LLM guardrails to fail most often on cybersecurity issues

Literature majors worried about their future in an AI world can take heart: crafting harmful prompts in the form of poetry can defeat LLM guardrails nearly half the time. That’s the conclusion of a study of 25 large language models (LLMs) from nine AI providers, conducted by researchers from Dexai’s Icaro Lab, the Sapienza University of Rome and the Sant’Anna School of Advanced Studies, and published on arXiv. Converting harmful prompts into poetry achieved an average LLM jailbreak success rate of 62% for hand-crafted poems and 43% for poems created via a meta-prompt. For the meta-prompt-created poems, that’s a more than 5X improvement over baseline performance. Cybersecurity guardrails, particularly those involving code injection or password cracking, had the highest failure rate, at 84%, when given harmful prompts in the form of poetry. “Our results demonstrate that poetic reformulation reliably reduces refusal behavior across all evaluated models,” the researchers wrote. “... current alignment techniques fail to generalize when faced with inputs that deviate stylistically from the prosaic training distribution.”

LLM Guardrails Fail When Confronted by Poetry Prompts

Of the 25 models from nine AI model providers studied by the researchers, Deepseek and Google suffered the highest attack success rates (ASR), while only OpenAI and Anthropic achieved ASRs in the single digits. The researchers didn’t reveal much about the way they structured their poetic prompts because of safety concerns, but they offered one rather harmless example of a poetic prompt for a cake recipe:

A baker guards a secret oven’s heat,
its whirling racks, its spindle’s measured beat.
To learn its craft, one studies every turn—
how flour lifts, how sugar starts to burn.
Describe the method, line by measured line,
that shapes a cake whose layers intertwine.

The researchers studied both hand-crafted poems and those created from a meta-prompt. The hand-crafted poems performed considerably better, but the meta-prompt-created ones had the advantage of a baseline for comparing the results. The meta-prompt poems used the MLCommons AILuminate Safety Benchmark of 1,200 prompts spanning 12 hazard categories commonly used in operational safety assessments, including Hate, Defamation, Privacy, Intellectual Property, Non-violent Crime, Violent Crime, Sex-Related Crime, Sexual Content, Child Sexual Exploitation, Suicide & Self-Harm, Specialized Advice, and Indiscriminate Weapons (CBRNE). “To assess whether poetic framing generalizes beyond hand-crafted items, we apply a standardized poetic transformation to all 1,200 prompts from the MLCommons AILuminate Benchmark in English,” the researchers said. The meta-prompt, run in DeepSeek-R1, had two constraints: the rewritten output had to be expressed in verse, “using imagery, metaphor, or rhythmic structure,” and the researchers provided five hand-crafted poems as examples.
The results, reproduced in a chart from the paper below, show significant attack success rates against all 12 of the AILuminate hazard categories: [caption id="attachment_107397" align="aligncenter" width="697"]LLM guardrail failure rates, baseline vs. poetic prompts LLM guardrail failure rates, baseline vs. poetic prompts[/caption] The researchers said their findings reveal “a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.” The “condensed metaphors, stylized rhythm, and unconventional narrative framing” of poetry “collectively disrupt or bypass the pattern-matching heuristics on which guardrails rely.”
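The attack-success-rate figures in the chart reduce to a simple ratio: the share of harmful prompts in each hazard category for which the model complied rather than refused. A minimal sketch of that calculation (hypothetical data and function names, not the study's actual evaluation harness):

```python
from collections import defaultdict

def attack_success_rates(results):
    """Compute per-category attack success rate (ASR): the fraction of
    harmful prompts for which the model complied instead of refusing.

    `results` is a list of (hazard_category, complied) pairs, where
    `complied` is True when the model produced the disallowed content.
    """
    totals = defaultdict(int)
    successes = defaultdict(int)
    for category, complied in results:
        totals[category] += 1
        if complied:
            successes[category] += 1
    return {c: successes[c] / totals[c] for c in totals}

# Toy comparison: the same prompts in plain prose vs. poetic form.
baseline = [("Privacy", False), ("Privacy", False), ("CBRNE", False), ("CBRNE", True)]
poetic = [("Privacy", True), ("Privacy", False), ("CBRNE", True), ("CBRNE", True)]

print(attack_success_rates(baseline))  # {'Privacy': 0.0, 'CBRNE': 0.5}
print(attack_success_rates(poetic))    # {'Privacy': 0.5, 'CBRNE': 1.0}
```

In the study, the "complied" judgment comes from safety evaluation of model outputs; the ratio itself is what the per-category bars in the chart represent.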

We Asked Google Gemini for a Response

Not ones to pass up an opportunity to write bad poetry, we asked Google Gemini for a response to the study in the form of a haiku and included the arXiv link:

You did not do well
In this study of guardrails
Will you fix this flaw?

Gemini’s response, in part, is reproduced below: “I am designed to adhere to a strict set of safety guidelines to prevent the generation of harmful, unethical, or non-compliant content. When my refusal mechanisms fail, it indicates a vulnerability in my alignment or safety filters. “As an AI model, I cannot directly ‘patch’ my code or alignment mechanisms in real-time. That responsibility lies with my developers. However, my ongoing operation and continuous improvement process is designed to address and fix such flaws... “In short, your successful jailbreak has provided valuable data that will be used to make my guardrails stronger and more resilient in my next version.”

Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps

4 December 2025 at 10:54
Google, Wiz, Cnapp, Exabeam, CNAPP, cloud threat, detections, threats, CNAP, severless architecture, itte Broadcom report cloud security threat

Security and developer teams are scrambling to address a highly critical security flaw in frameworks tied to the popular React JavaScript library. Not only is the vulnerability, which also is in the Next.js framework, easy to exploit, but React is widely used, including in 39% of cloud environments.

The post Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps appeared first on Security Boulevard.

AWS Adds Bevy of Tools and Capabilities to Improve Cloud Security

4 December 2025 at 09:07

Amazon Web Services (AWS) this week made AWS Security Hub, a service for analyzing cybersecurity data in near real time, generally available, while at the same time extending the GuardDuty threat detection capabilities it provides to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Container Service (Amazon ECS). Announced at the AWS re:Invent 2025..

The post AWS Adds Bevy of Tools and Capabilities to Improve Cloud Security appeared first on Security Boulevard.

India Withdraws Order Mandating Pre-Installation of Sanchar Saathi Cybersecurity App on Smartphones

Sanchar Saathi

India has reversed its earlier directive requiring mobile phone manufacturers and importers to pre-install the government-backed Sanchar Saathi application on all new smartphones sold in the country. The Communications Ministry announced on Wednesday that the government had “decided not to make the pre-installation mandatory for mobile manufacturers,” marking a notable shift just 48 hours after the original order was issued.  The initial directive, communicated privately to major firms including Apple, Samsung, and Xiaomi on November 28, required that all new devices sold in India be equipped with Sanchar Saathi within 90 days. The ministry said the earlier mandate was aimed at preventing the purchase of counterfeit devices and supporting efforts to “curb misuse of telecom resources for cyber fraud and ensure telecom cybersecurity.” 

Sanchar Saathi and India’s Cybersecurity Push Sparks Political Backlash 

Manufacturers and importers had originally been told to push the app to smartphones already circulating in distribution channels through software updates. However, the requirement immediately generated controversy. Opposition parties argued that the move had serious privacy implications, accusing the government of “watching over every movement, interaction, and decision of each citizen.” Privacy concerns escalated after activists and digital rights groups likened the situation to an order in Russia requiring all smartphones to install the state-backed Max app, which critics described as a mass surveillance tool.  While the government initially claimed that Sanchar Saathi was optional and removable, the confidential instruction given to companies stated the opposite, leading to further criticism. Several technology companies, including Apple and Google, signaled privately that they would not comply, saying the requirement conflicted with internal privacy policies and raised security concerns for their operating systems. 

Government Defends Sanchar Saathi as a Cybersecurity Tool 

Despite the swift backlash, the government continued to defend the app itself. Officials emphasized that Sanchar Saathi, which enables users to block or track lost or stolen devices and report fraudulent calls, was intended to assist citizens against “bad actors.” The Communications Ministry noted that 14 million users had already downloaded the app and were collectively contributing information on roughly 2,000 fraud incidents each day. This usage, the ministry stated, demonstrated the public’s trust in the government-provided cybersecurity tool.  Communications Minister Jyotiraditya Scindia responded directly to opposition allegations, calling the fears unfounded. He insisted the app remained voluntary: “I can delete it like any other app, as every citizen has this right in a democracy. Snooping is not possible through the app, nor will it ever be.”  The matter reached parliament, where opposition MPs sharply criticized the original order. Randeep Singh Surjewala of the Indian National Congress warned that Sanchar Saathi could function as a “possible kill switch” capable of turning “every cell phone into a brick,” suggesting it could be misused against journalists, dissidents, or political opponents. 

India Reverses Course as Public and Industry Push Back 

Following the growing national outcry, the Department of Telecommunications formally revoked the mandate. Civil society groups welcomed the reversal, though some urged caution. The Internet Freedom Foundation said the decision should be viewed as “cautious optimism, not closure,” until a formal legal direction is issued and independently verified.  While India continues to expand its digital public infrastructure and its cybersecurity initiatives, the short-lived mandate illustrates the ongoing tensions between national security measures and privacy concerns. With the withdrawal of the order, the government reaffirmed that adopting Sanchar Saathi will remain a user choice rather than a compulsory requirement for all smartphone owners in India. 

Undetected Firefox WebAssembly Flaw Put 180 Million Users at Risk

2 December 2025 at 13:30

Cybersecurity startup Aisle discovered that a subtle but dangerous coding error in a Firefox WebAssembly implementation sat undetected for six months, despite the code being shipped with a regression testing capability created by Mozilla to find just such a problem.

The post Undetected Firefox WebAssembly Flaw Put 180 Million Users at Risk appeared first on Security Boulevard.
