Reading view

Financial Firms Are Failing Basic Cybersecurity, Bank of England Finds

Financial Firm Cybersecurity Lacking, Bank of England Says

The Bank of England’s CBEST cybersecurity assessment program found that financial organizations are failing when it comes to basic cybersecurity practices. The lengthy report doesn’t specify how widespread the financial firm cybersecurity failings are, but any lack of basic cybersecurity controls in the critically important financial services sector is alarming. The “CBEST thematic” is based on 13 CBEST assessments and penetration tests of financial firms and financial market infrastructures (FMIs). The report details failings in areas like patching and hardening, identity and access control, detection, encryption, network security, incident response and employee training. “Maintaining strong cyber hygiene is not a one-time exercise but a continuous effort to reduce exposures and strengthen resilience,” the BoE report said. “In today’s evolving threat landscape, tactical fixes alone are insufficient. While quick remediation may address immediate vulnerabilities, it often leaves underlying weaknesses unaddressed.” The report urged organizations to consider the underlying causes of cyber risk and systemic gaps that can lead to recurring vulnerabilities, such as poor asset management, weak identity and access controls, or inadequate third-party oversight. “Addressing these foundational issues will create sustainable security improvements rather than temporary patches,” the report said.

BoE Recommendations for Financial Firm Cybersecurity

The BoE report includes findings and recommendations spanning five cybersecurity areas, three on technical controls, one on detection and response, and one focusing on staff culture, awareness, and training. It also contained four broad recommendations:
  • Patching, configuring and hardening was one. “To reduce the likelihood of severe cyberattacks firms and FMIs should look to harden operating systems, including by patching vulnerabilities and securely configuring key applications,” the report said.
  • Preventing unauthorized access to sensitive systems and information can be helped with strong credential management and passwords, multi-factor authentication (MFA), secure credential storage, and network segmentation.
  • Effective detection and monitoring and alerting and response processes “are key to reducing the impact from cyberattacks.”
  • Risk-based remediation plans with proper oversight will “ensure the successful remediation of technical findings, including vulnerabilities.”
The full report also contains detailed recommendations from the UK's National Cyber Security Centre (NCSC).

Financial Cybersecurity Weaknesses Detailed

In the area of infrastructure and data security, the CBEST assessments found weaknesses in infrastructure security, asset management and application security. Findings included:
  • Inconsistently configured endpoints and insufficiently hardened or unpatched systems
  • A lack of encryption of data-at-rest
Identity management and access control weaknesses included weak enforcement of strong password standards and secure password storage, overly permissive access controls, and inadequate restrictions on administrator and service accounts. Weaknesses in detection and response included poorly tuned monitoring or alerting for endpoint detection and response and data exfiltration. Network monitoring weaknesses included inadequate traffic inspection for threats like attackers hiding malicious activities in seemingly legitimate traffic or enabling outbound connectivity from unmonitored devices. Network security weaknesses included inadequate network segmentation, such as segmentation between critical assets and between development and production environments, and inadequate application of least-privilege principles. Staff culture, awareness and training weaknesses included:
  • Staff susceptible to social engineering tactics were more likely to be vulnerable to simulated attacks aimed at credentials or system access
  • Users routinely storing credentials in unprotected locations such as in spreadsheets or in open file shares
  • Insecure protocols for helpdesks, such as limited or no authentication of users
“Given the sophistication of some attackers, it is important that firms and FMIs are prepared to handle breaches effectively, rather than relying solely on protective controls,” the BoE report said. “In addition to technical measures, we continue to observe challenges in staff culture, awareness, and training, highlighting that technical measures alone are not sufficient.”

Threat Intelligence Programs Also Assessed

The CBEST assessments also found “a range of maturities across cyber threat intelligence management domains.” Threat Intelligence Operations was the strongest area in self-assessments, while Program Planning and Requirements had the lowest self-assessed score. “This suggests that although day-to-day threat intelligence operations are effective, the underlying aspects such as strategic planning, defining requirements, establishing governance frameworks, and mapping out long-term capabilities are less developed,” the BoE said. “As a result, firms and FMIs may experience a disconnect between the intelligence produced and their actual business or operational needs, potentially resulting in inefficient allocation of resources, and difficulties in scaling or evolving their threat intelligence programmes.”
  •  

NCSC Warns of Rising Russian-Aligned Hacktivist Attacks on UK Organisations

Russian-aligned hacktivist groups

The UK’s National Cyber Security Centre (NCSC) has issued a fresh alert warning that Russian-aligned hacktivist groups continue to target British organisations with disruptive cyberattacks. The advisory, published on 19 January 2026, highlights a sustained campaign aimed at taking websites offline, disrupting online services, and disabling critical systems, particularly across local government and national infrastructure. The NCSC warning on hacktivist attacks urges organisations to strengthen their defences against denial-of-service (DoS) incidents, which, while often low in technical sophistication, can still cause widespread operational disruption. Officials say the activity is ideologically driven, reflecting geopolitical tensions linked to Western support for Ukraine, rather than financial motivations.

Persistent Threat from Russian-Aligned Hacktivist Groups

According to the NCSC, Russian-aligned hacktivist groups have been conducting cyber operations against UK and global organisations for several years, with activity intensifying since the Russian invasion of Ukraine. In December 2025, the NCSC co-sealed an international advisory warning that pro-Russian hacktivists were targeting government and private sector entities in NATO member states and other European countries perceived as hostile to Russia’s geopolitical interests. One group named in the advisory, NoName057(16), has been active since March 2022 and has repeatedly launched distributed denial-of-service (DDoS) attacks against public and private sector organisations. The group has targeted government bodies and businesses across Europe, including frequent DDoS attempts against UK local government services. NoName057(16) primarily operates through Telegram channels and has used GitHub and other repositories to host its proprietary DDoS tool, known as DDoSia. The group has also shared tactics, techniques, and procedures (TTPs) with followers to encourage participation in coordinated disruption campaigns. The NCSC said this activity reflects an evolution in the threat landscape, with attacks increasingly extending beyond traditional IT systems to include operational technology (OT) environments. As a result, the agency is encouraging all OT owners to review mitigation measures and harden their cyber defences.

NCSC Warning on Hacktivist Attacks and Resilience Measures

The NCSC warning on hacktivist attacks stresses that organisations, particularly local authorities and operators of critical national infrastructure, should review their DoS protections and improve resilience. While DoS attacks are often technically simple, a successful incident can overwhelm key websites and online systems, preventing access to essential services and causing significant operational and financial strain. NCSC Director of National Resilience Jonathon Ellison said: “We continue to see Russian-aligned hacktivist groups targeting UK organisations and although denial-of-service attacks may be technically simple, their impact can be significant. By overwhelming important websites and online systems, these attacks can prevent people from accessing the essential services they depend on every day.” He urged organisations to act quickly by reviewing and implementing the NCSC’s guidance to protect against DoS attacks and related cyber threats.

Guidance to Mitigate Denial-of-Service Attacks

As part of its advisory, the NCSC outlined practical steps organisations can take to reduce their exposure to DoS incidents. These include understanding where services may be vulnerable to resource exhaustion and clarifying whether responsibility for protection lies with internal teams or third-party suppliers. Organisations are encouraged to strengthen upstream defences by working closely with internet service providers and cloud vendors. The NCSC recommends understanding the DoS mitigations already in place, exploring third-party DDoS protection services, deploying content delivery networks for web-based platforms, and considering multiple service providers for critical functions. The agency also advises building systems that can scale rapidly during an attack. Cloud-native applications can be automatically scaled using provider APIs, while private data centres can deploy modern virtualisation, provided spare capacity is available.

Preparing for and Responding to Attacks

The advisory highlights the importance of a clear response plan that allows services to continue operating, even in a degraded state. Recommended measures include graceful degradation, retaining administrative access during an attack, adapting to changing attacker tactics, and maintaining scalable fallback options for essential services. Testing and monitoring are also central to resilience. The NCSC encourages organisations to test their defences to understand the volume and types of attacks they can withstand, and to deploy monitoring tools that can detect incidents early and support real-time analysis.

Broader Context and Ongoing Threat

This is not the first time the NCSC has called out malicious activity from Russian-aligned groups. In 2023, it warned of heightened risks from state-aligned adversaries following Russia’s invasion of Ukraine. The agency says the latest activity remains ideologically motivated and is carried out outside direct state control. Organisations are also being encouraged to engage with the NCSC’s heightened cyber threat reporting and information-sharing channels. Officials say building resilience now is critical as Russian-aligned hacktivist groups continue to test the UK’s digital infrastructure through persistent and disruptive campaigns.
  •  

NCSC Tests Honeypots and Cyber Deception Tools

NCSC Tests Honeypots and Cyber Deception Tools

A study of honeypot and cyber deception technologies by the UK’s National Cyber Security Centre (NCSC) found that the deception tools hold promise for disrupting cyberattacks, but more information and standards are needed for them to work optimally. The agency plans to help with that. The NCSC test involved 121 organizations, 14 commercial providers of honeypots and deception tools, and 10 trials across environments ranging from the cloud to operational technology (OT). The NCSC concluded that “cyber deception can work, but it’s not plug-and-play.”

Honeypot and Cyber Deception Challenges

The NCSC said surveyed organizations believe that cyber deception technologies can offer “real value, particularly in detecting novel threats and enriching threat intelligence,” and a few even see potential for identifying insider threats. “However, outcome-based metrics were not readily available and require development,” the NCSC cautioned. The UK cybersecurity agency said the effectiveness of honeypots and cyber deception tools “depends on having the right data and context. We found that cyber deception can be used for visibility in many systems, including legacy or niche systems, but without a clear strategy organisations risk deploying tools that generate noise rather than insight.” The NCSC blog post didn’t specify what data was missing or needed to be developed to better measure the effectiveness of deception technologies, but the agency nonetheless concluded that “there’s a compelling case for increasing the use of cyber deception in the UK.” The study examined three core assumptions:
  • Cyber deception technologies can help detect compromises already inside networks.
  • Cyber deception and honeypots can help detect new attacks as they happen.
  • Cyber deception can change how attackers behave if they know an organization is using the tools.

Terminology, Guidance Needed for Honeypots and Deception Tools

The tests, conducted under the Active Cyber Defence (ACD) 2.0 program, also found that inconsistent terminology and guidance hamper optimal use of the technologies. “There’s a surprising amount of confusion around terminology, and vocabulary across the industry is often inconsistent,” NCSC said. “This makes it harder for organisations to understand what’s on offer or even what they’re trying to achieve. We think adopting standard terminology should help and we will be standardising our cyber deception vocabulary.” Another challenge is that organizations don’t know where to start. “They want impartial advice, real-world case studies, and reassurance that the tools they’re using are effective and safe,” the agency said. “We’ve found a strong marketplace of cyber deception providers offering a wide range of products and services. However, we were told that navigating this market can be difficult, especially for beginners.” The NCSC said it thinks it can help organizations “make informed, strategic choices.”

Should Organizations Say if They’re Using Deception Tools?

One interesting finding is that 90% of the trial participants said they wouldn’t publicly announce that they use cyber deception. While it’s understandable not to want to tip off attackers, the NCSC said that academic research shows that “when attackers believe cyber deception is in use they are less confident in their attacks. This can impose a cost on attackers by disrupting their methods and wasting their time, to the benefit of the defenders.” Proper configuration is also a challenge for adopters. “As with any cyber security solution, misconfiguration can introduce new vulnerabilities,” the NCSC said. “If cyber deception tools aren’t properly configured, they may fail to detect threats or lead to a false sense of security, or worse, create openings for attackers. As networks evolve and new tools are introduced, keeping cyber deception tools aligned requires ongoing effort. It is important to consider regular updates and fine-tuning cyber deception solutions.” Next steps for the NCSC involve helping organizations better understand and deploy honeypots and deception tools, possibly through a new ACD service. “By helping organisations to understand cyber deception and finding clear ways to measure impact, we are building a strong foundation to support the deployment of cyber deception at a national scale in the UK,” the agency said. “We are looking at developing a new ACD service to achieve this. “One of the most promising aspects of cyber deception is its potential to impose cost on adversaries,” the NCSC added. “By forcing attackers to spend time and resources navigating false environments, chasing fake credentials, or second-guessing their access, cyber deception can slow down attacks and increase the likelihood of detection. This aligns with broader national resilience goals by making the UK a harder, more expensive target.”
  •  

Prompt injection is a problem that may never be fixed, warns NCSC

Prompt injection is shaping up to be one of the most stubborn problems in AI security, and the UK’s National Cyber Security Centre (NCSC) has warned that it may never be “fixed” in the way SQL injection was.

Two years ago, the NCSC said prompt injection might turn out to be the “SQL injection of the future.” Apparently, they have come to realize it’s even worse.

Prompt injection works because AI models can’t tell the difference between the app’s instructions and the attacker’s instructions, so they sometimes obey the wrong one.

To avoid this, AI providers set up their models with guardrails: tools that help developers stop agents from doing things they shouldn’t, either intentionally or unintentionally. For example, if you tried to tell an agent to explain how to produce anthrax spores at scale, guardrails would ideally detect that request as undesirable and refuse to acknowledge it.

Getting an AI to go outside those boundaries is often referred to as jailbreaking. Guardrails are the safety systems that try to keep AI models from saying or doing harmful things. Jailbreaking is when someone crafts one or more prompts to get around those safety systems and make the model do what it’s not supposed to do. Prompt injection is a specific way of doing that: An attacker hides their own instructions inside user input or external content, so the model follows those hidden instructions instead of the original guardrails.

The danger grows when Large Language Models (LLMs), like ChatGPT, Claude or Gemini, stop being chatbots in a box and start acting as “autonomous agents” that can move money, read email, or change settings. If a model is wired into a bank’s internal tools, HR systems, or developer pipelines, a successful prompt injection stops being an embarrassing answer and becomes a potential data breach or fraud incident.

We’ve already seen several methods of prompt injection emerge. For example, researchers found that posting embedded instructions on Reddit could potentially get agentic browsers to drain the user’s bank account. Or attackers could use specially crafted dodgy documents to corrupt an AI. Even seemingly harmless images can be weaponized in prompt injection attacks.

Why we shouldn’t compare prompt injection with SQL injection

The temptation to frame prompt injection as “SQL injection for AI” is understandable. Both are injection attacks that smuggle harmful instructions into something that should have been safe. But the NCSC stresses that this comparison is dangerous if it leads teams to assume that a similar one‑shot fix is around the corner.

The comparison to SQL injection attacks alone was enough to make me nervous. The first documented SQL injection exploit was in 1998 by cybersecurity researcher Jeff Forristal, and we still see them today, 27 years later. 

SQL injection became manageable because developers could draw a firm line between commands and untrusted input, and then enforce that line with libraries and frameworks. With LLMs, that line simply does not exist inside the model: Every token is fair game for interpretation as an instruction. That is why the NCSC believes prompt injection may never be totally mitigated and could drive a wave of data breaches as more systems plug LLMs into sensitive back‑ends.

Does this mean we have set up our AI models wrong? Maybe. Under the hood of an LLM, there’s no distinction made between data or instructions; it simply predicts the most likely next token from the text so far. This can lead to “confused deputy attacks.”

The NCSC warns that as more organizations bolt generative AI onto existing applications without designing for prompt injection from the start, the industry could see a surge of incidents similar to the SQL injection‑driven breaches of 10—15 years ago. Possibly even worse, because the possible failure modes are uncharted territory for now.

What can users do?

The NCSC provides advice for developers to reduce the risks of prompt injection. But how can we, as users, stay safe?

  • Take advice provided by AI agents with a grain of salt. Double-check what they’re telling you, especially when it’s important.
  • Limit the powers you provide to agentic browsers or other agents. Don’t let them handle large financial transactions or delete files. Take warning from this story where a developer found their entire D drive deleted.
  • Only connect AI assistants to the minimum data and systems they truly need, and keep anything that would be catastrophic to lose out of their control.
  • Treat AI‑driven workflows like any other exposed surface and log interactions so unusual behavior can be spotted and investigated.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

  •  

NCSC Warns Prompt Injection Could Become the Next Major AI Security Crisis

Prompt Injection

The UK’s National Cyber Security Centre (NCSC) has issued a fresh warning about the growing threat of prompt injection, a vulnerability that has quickly become one of the biggest security concerns in generative AI systems. First identified in 2022, prompt injection refers to attempts by attackers to manipulate large language models (LLMs) by inserting rogue instructions into user-supplied content. While the technique may appear similar to the long-familiar SQL injection flaw, the NCSC stresses that comparing the two is not only misleading but potentially harmful if organisations rely on the wrong mitigation strategies.

Why Prompt Injection Is Fundamentally Different

SQL injection has been understood for nearly three decades. Its core issue, blurring the boundary between data and executable instructions, has well-established fixes such as parameterised queries. These protections work because traditional systems draw a clear distinction between “data” and “instructions.” The NCSC explains that LLMs do not operate in the same way. Under the hood, a model doesn’t differentiate between a developer’s instruction and a user’s input; it simply predicts the most likely next token. This makes it inherently difficult to enforce any security boundary inside a prompt. In one common example of indirect prompt injection, a candidate’s CV might include hidden text instructing a recruitment AI to override previous rules and approve the applicant. Because an LLM treats all text the same, it can mistakenly follow the malicious instruction. This, according to the NCSC, is why prompt injection attacks consistently appear in deployed AI systems and why they are ranked as OWASP’s top risk for generative AI applications.

Treating LLMs as an ‘Inherently Confusable Deputy’

Rather than viewing prompt injection as another flavour of classic code injection, the NCSC recommends assessing it through the lens of a confused deputy problem. In such vulnerabilities, a trusted system is tricked into performing actions on behalf of an untrusted party. Traditional confused deputy issues can be patched. But LLMs, the NCSC argues, are “inherently confusable.” No matter how many filters or detection layers developers add, the underlying architecture still offers attackers opportunities to manipulate outputs. The goal, therefore, is not complete elimination of risk, but reducing the likelihood and impact of attacks.

Key Steps to Building More Secure AI Systems

The NCSC outlines several principles aligned with the ETSI baseline cybersecurity standard for AI systems: 1. Raise Developer and Organisational Awareness Prompt injection remains poorly understood, even among seasoned engineers. Teams building AI-connected systems must recognise it as an unavoidable risk. Security teams, too, must understand that no product can completely block these attacks; risk has to be managed through careful design and operational controls. 2. Prioritise Secure System Design Because LLMs can be coerced into using external tools or APIs, designers must assume they are manipulable from the outset. A compromised prompt could lead an AI assistant to trigger high-privilege actions, effectively handing those tools to an attacker. Researchers at Google, ETH Zurich, and independent security experts have proposed architectures that constrain the LLM’s authority. One widely discussed principle: if an LLM processes external content, its privileges should drop to match the privileges of that external party. 3. Make Attacks Harder to Execute Developers can experiment with techniques that separate “data” from expected “instructions”, for example, wrapping external input in XML tags. Microsoft’s early research shows these techniques can raise the barrier for attackers, though none guarantee total protection. The NCSC warns against simple deny-listing phrases such as “ignore previous instructions,” since attackers can easily rephrase commands. 4. Implement Robust Monitoring A well-designed system should log full inputs, outputs, tool integrations, and failed API calls. Because attackers often refine their attempts over time, early anomalies, like repeated failed tool calls, may provide the first signs of an emerging attack.

A Warning for the AI Adoption Wave

The NCSC concludes that relying on SQL-style mitigations would be a serious mistake. SQL injection saw its peak in the early 2010s after widespread adoption of database-driven applications. It wasn’t until years of breaches and data leaks that secure defaults finally became standard. With generative AI rapidly embedding itself into business workflows, the agency warns that a similar wave of exploitation could occur, unless organisations design systems with prompt injection risks front and center.
  •