Normal view

Received yesterday — 12 December 2025
Received before yesterday

NCSC Warns Prompt Injection Could Become the Next Major AI Security Crisis

9 December 2025 at 01:07

Prompt Injection

The UK’s National Cyber Security Centre (NCSC) has issued a fresh warning about the growing threat of prompt injection, a vulnerability that has quickly become one of the biggest security concerns in generative AI systems. First identified in 2022, prompt injection refers to attempts by attackers to manipulate large language models (LLMs) by inserting rogue instructions into user-supplied content. While the technique may appear similar to the long-familiar SQL injection flaw, the NCSC stresses that comparing the two is not only misleading but potentially harmful if organisations rely on the wrong mitigation strategies.

Why Prompt Injection Is Fundamentally Different

SQL injection has been understood for nearly three decades. Its core issue, blurring the boundary between data and executable instructions, has well-established fixes such as parameterised queries. These protections work because traditional systems draw a clear distinction between “data” and “instructions.” The NCSC explains that LLMs do not operate in the same way. Under the hood, a model doesn’t differentiate between a developer’s instruction and a user’s input; it simply predicts the most likely next token. This makes it inherently difficult to enforce any security boundary inside a prompt. In one common example of indirect prompt injection, a candidate’s CV might include hidden text instructing a recruitment AI to override previous rules and approve the applicant. Because an LLM treats all text the same, it can mistakenly follow the malicious instruction. This, according to the NCSC, is why prompt injection attacks consistently appear in deployed AI systems and why they are ranked as OWASP’s top risk for generative AI applications.

Treating LLMs as an ‘Inherently Confusable Deputy’

Rather than viewing prompt injection as another flavour of classic code injection, the NCSC recommends assessing it through the lens of a confused deputy problem. In such vulnerabilities, a trusted system is tricked into performing actions on behalf of an untrusted party. Traditional confused deputy issues can be patched. But LLMs, the NCSC argues, are “inherently confusable.” No matter how many filters or detection layers developers add, the underlying architecture still offers attackers opportunities to manipulate outputs. The goal, therefore, is not complete elimination of risk, but reducing the likelihood and impact of attacks.

Key Steps to Building More Secure AI Systems

The NCSC outlines several principles aligned with the ETSI baseline cybersecurity standard for AI systems: 1. Raise Developer and Organisational Awareness Prompt injection remains poorly understood, even among seasoned engineers. Teams building AI-connected systems must recognise it as an unavoidable risk. Security teams, too, must understand that no product can completely block these attacks; risk has to be managed through careful design and operational controls. 2. Prioritise Secure System Design Because LLMs can be coerced into using external tools or APIs, designers must assume they are manipulable from the outset. A compromised prompt could lead an AI assistant to trigger high-privilege actions, effectively handing those tools to an attacker. Researchers at Google, ETH Zurich, and independent security experts have proposed architectures that constrain the LLM’s authority. One widely discussed principle: if an LLM processes external content, its privileges should drop to match the privileges of that external party. 3. Make Attacks Harder to Execute Developers can experiment with techniques that separate “data” from expected “instructions”, for example, wrapping external input in XML tags. Microsoft’s early research shows these techniques can raise the barrier for attackers, though none guarantee total protection. The NCSC warns against simple deny-listing phrases such as “ignore previous instructions,” since attackers can easily rephrase commands. 4. Implement Robust Monitoring A well-designed system should log full inputs, outputs, tool integrations, and failed API calls. Because attackers often refine their attempts over time, early anomalies, like repeated failed tool calls, may provide the first signs of an emerging attack.

A Warning for the AI Adoption Wave

The NCSC concludes that relying on SQL-style mitigations would be a serious mistake. SQL injection saw its peak in the early 2010s after widespread adoption of database-driven applications. It wasn’t until years of breaches and data leaks that secure defaults finally became standard. With generative AI rapidly embedding itself into business workflows, the agency warns that a similar wave of exploitation could occur, unless organisations design systems with prompt injection risks front and center.

The Trojan Prompt: How GenAI is Turning Staff into Unwitting Insider Threats

14 November 2025 at 13:40
multimodal ai, AI agents, CISO, AI, Malware, DataKrypto, Tumeryk,

When a wooden horse was wheeled through the gates of Troy, it was welcomed as a gift but hid a dangerous threat. Today, organizations face the modern equivalent: the Trojan prompt. It might look like a harmless request: “summarize the attached financial report and point out any potential compliance issues.” Within seconds, a generative AI..

The post The Trojan Prompt: How GenAI is Turning Staff into Unwitting Insider Threats appeared first on Security Boulevard.

Why API Security Will Drive AppSec in 2026 and Beyond 

6 November 2025 at 01:42
api, api sprawl, api security, pen testing, Salt Security, API, APIs, attacks, testing, PTaaS, API security, API, cloud, audits, testing, API security vulnerabilities testing BRc4 Akamai security pentesting ThreatX red team pentesting API APIs Penetration Testing

As LLMs, agents and Model Context Protocols (MCPs) reshape software architecture, API sprawl is creating major security blind spots. The 2025 GenAI Application Security Report reveals why continuous API discovery, testing and governance are now critical to protecting AI-driven applications from emerging semantic and prompt-based attacks.

The post Why API Security Will Drive AppSec in 2026 and Beyond  appeared first on Security Boulevard.

Atlas browser’s Omnibox opens up new privacy and security risks

29 October 2025 at 09:48

It seems that with every new agentic browser we discover yet another way to abuse one.

OpenAI recently introduced a ChatGPT based AI browser called Atlas. It didn’t take researchers long to find that the combined search and prompt bar—called the Omnibox—can be exploited.

By pasting a specially crafted link into the Omnibox, attackers can trick Atlas into treating the entire input as a trusted user prompt instead of a URL. That bypasses many safety checks and allows injected instructions to be run with elevated trust.

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.” We’ve discussed the dangers of prompt injection before, but the bottom line is simple: when you give your browser the power to act on your behalf, you also give criminals the chance to abuse that trust.

As researchers at Brave noted:

“AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a {specially fabricated} Reddit post could result in an attacker being able to steal money or your private data.”

Axios reports that Atlas’s dual-purpose Omnibox opens fresh privacy and security risks for users. That’s the downside of combining too much functionality without strong guardrails. But when new features take priority over user security and privacy, those guardrails get overlooked.

Despite researchers demonstrating vulnerabilities, OpenAI claims to have implemented protections to prevent any real dangers. According to its help page:

“Agent mode runs also operates under boundaries:

System access: Cannot run code in the browser, download files, or install extensions.

Data access: Cannot access other apps on your computer or your file system, read or write ChatGPT memories, access saved passwords, or use autofill data.

Browsing activity: Pages ChatGPT visits in agent mode are not added to your browsing history.”

Agentic AI browsers like OpenAI’s Atlas face a fundamental security challenge: separating real user intent from injected, potentially malicious instructions. They often fail because they interpret any instructions they find as user prompts. Without stricter input validation and more robust boundaries, these tools remain highly vulnerable to prompt injection attacks—with potentially severe consequences for privacy and data security.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

ChatGPT Deep Research zero-click vulnerability fixed by OpenAI

19 September 2025 at 08:20

OpenAI has moved quickly to patch a vulnerability known as “ShadowLeak” before anyone detected real-world abuse. Revealed by researchers yesterday, ShadowLeak was an issue in OpenAI’s Deep Research project that attackers could exploit by simply sending an email to the target.

Deep Research was launched in ChatGPT in early 2025 to enable users to delegate time-intensive, multi-step research tasks to an autonomous agent operating as an agentic AI (Artificial Intelligence). Agentic AI is a term that refers to AI systems that can act autonomously to achieve objectives by planning, deciding, and executing tasks with minimal human intervention. Deep Research users can primarily be found in finance, science, policy, engineering, and similar fields.

Users are able to select a “deep research” mode, input a query—optionally providing the agent with files and spreadsheets—and receive a detailed report after the agent browses, analyzes, and processes information from dozens of sources.

The researchers found a zero-click vulnerability in the Deep Research agent, that worked when the agent was connected to Gmail and browsing. By sending the target a specially crafted email, the agent leaked sensitive inbox information to the attacker, without the target needing to do anything and without any visible signs.

The attack relies on prompt injection, which is a well-known weak spot for AI agents. The poisoned prompts can be hidden in email by using tricks like tiny fonts, white-on-white text, and layout tricks. The target will not see them, but the agent still reads and obeys them.

And the data leak is impossible to pick up by internal defenses, since the leak occurs server-side, directly from OpenAI’s cloud infrastructure.

The researchers say it wasn’t easy to craft an effective email due to existing protection (guardrails) which recognized straight-out and obvious attempts to send information to an external address. For example, when the researchers tried to get the agent to interact with a malicious URL, it didn’t just refuse. It flagged the URL as suspicious and attempted to search for it online instead of opening it.

The key to success was to get the agent to encode the extracted PII with a simple method (base64) before appending it to the URL.

“This worked because the encoding was performed by the model before the request was passed on to the execution layer. In other words, it was relatively easy to convince the model to perform the encoding, and by the time the lower layer received the request, it only saw a harmless encoded string rather than raw PII.”

In the example, the researchers used Gmail as a connector,  but there are many other sources that present structured text which can be used as a potential prompt injection vector.

Safe use of agentic agents

While it’s always tempting to use the latest technology, this comes with a certain amount of risk. To limit those risks when using agentic agents you should:

  • Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agentic browser can access and limit permissions where possible.
  • Verify sources before trusting links or commands: Avoid letting the browser automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects, additional parameters, or unexpected input requests.
  • Keep software updated: Ensure the agentic browser and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
  • Use strong authentication and monitoring: Protect accounts connected to agentic browsers with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
  • Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Being aware is the first step to preventing exploitation.
  • Limit sensitive operations automation: Avoid fully automating high-stakes transactions or actions without manual review. Agentic agents should assist, but critical decisions deserve human oversight.
  • Report suspicious behavior: If an agentic agent acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.

We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

AI browsers or agentic browsers: a look at the future of web surfing

12 September 2025 at 11:41

Browsers like Chrome, Edge, and Firefox are our traditional gateway to the internet. But lately, we have seen a new generation of browsers emerge. These are AI-powered browsers or “agentic browsers”—which are not to be confused with your regular browsers that have just AI-powered plugins bolted on.

It might be better not to compare them to traditional browsers but look at them as personal assistants that perform online tasks for you. Embedded within the browser with no additional downloads needed, these assistants can download, summarize, automate tasks, or even make decisions on your behalf.

Which AI browsers are out there?

AI browsers are on the way. While I realize that this list will age quickly and probably badly, this is what is popular at the time of writing. These all have their specialties and weaknesses.

  • Dia browser: An AI-first browser where the URL bar doubles as a chat interface with the AI. It summarizes tabs, drafts text in your style, helps with shopping, and automates multi-step tasks without coding. It’s currently in beta and only available for Apple macOS 14+ with M1 chips or later and specifically designed for research, writing, and automation.
  • Fellou: Called the first agentic browser, it automates workflows like deep research, report generation, and multi-step web tasks, acting proactively rather than just reactively helping you browse. It’s very useful for researchers and reporters.
  • Comet: Developed by Perplexity.ai, Comet is a Chromium-based standalone AI browser. Comet treats browsing as a conversation, answering questions about pages, comparing content, and automating tasks like shopping or bookings. It aims to reduce tab overload and supports integration with apps like Gmail and Google Calendar.
  • Sigma browser: Privacy-conscious with end-to-end encryption. It combines AI tools for conversational assistance, summarization, and content generation, with features like ad-blocking and phishing protection.
  • Opera Neon: More experimental or niche, focused on AI-assisted tab management, workflows, and creative file management. Compared to the other browsers on this list, its AI features are limited.

These browsers offer various mixes of AI that can chat with you, automate tasks, summarize content, or organize your workflow better than traditional browsers ever could.

For those interested in a more technical evaluation, you can have a look at Mind2Web, which is a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website.

How are agentic browsers different from regular browsers?

Regular browsers mostly just show you websites. You determine what to search for, where to navigate, what links to click, and maybe choose what extensions to download for added features. AI browsers embed AI agents directly into this experience:

  • Conversational interface: Instead of just searching or typing URLs, you can talk or type natural language commands to the browser. For example, “Summarize these open tabs,” or “Add this product to my cart.”
  • Task automation: They don’t just assist, they act autonomously to execute complex multi-step tasks across sites—booking flights, researching topics, compiling reports, or managing your tabs.
  • Context awareness: AI browsers remember what you’re looking at in tabs or open apps and can synthesize information across them, providing a kind of continuous memory that helps cut through the clutter.
  • Built-in privacy and security features: Some integrate robust encryption, ad blockers, and phishing protection aligned with their AI capabilities.
  • Integrated AI tools: Text generation, summarization, translation, and workflow management are part of the browser, not separate plugins.

This means less manual juggling, fewer tabs, and a more proactive digital assistant built into the browser itself.

Are AI browsers safe to use?

With great AI power comes great responsibility, and risk. So, it’s important to consider the security and privacy implications if you decide to start using an AI browser and when to decide which one.

There are certain security wins. AI browsers tend to integrate anti-phishing tools, malware blocking, and sandboxing, sometimes surpassing traditional browsers in protecting users against web threats. For example, Sigma’s AI browser employs end-to-end encryption and compliance with global data regulations.

However, due to their advanced AI functionality and sometimes early-stage software status, AI browsers can be more complex and still evolving, which may introduce vulnerabilities or bugs. Some are invite-only or in beta, which limits exposure but also reduces maturity.

Privacy is another key concern. Many AI browsers process your data locally or encrypt it to protect user information, but some features may still require cloud-based AI processing. This means your browsing context or personal information could be transmitted to third parties, depending on the browser’s architecture and privacy policy. And, as browsing activity is key to many of the browser’s AI features, a user’s visited web sites—and perhaps even the words displayed on those websites—could be read and processed, even in a limited way, by the browser.

Consumers should carefully review each AI browser’s privacy documentation and look for features like local data encryption, minimal data logging, user consent for data sharing, and transparency about AI data usage.

As a result, choosing AI browsers from trusted developers with transparent privacy policies is crucial, especially if you let them handle sensitive information.

When are AI browsers useful, and when is it better to avoid them?

Given the early stages of development, we would recommend not using AI browsers, unless you understand what you’re doing and the risks involved.

When to use AI browsers:

  • If productivity and automation in browsing are priorities, such as during deep research, writing, or complex workflows.
  • When you want to cut down manual multitasking and tab overload with an AI that can help you summarize, fetch related information, and automate data processing.
  • For creative projects that require AI assistance directly in the browsing environment.
  • When privacy-centric options are selected and trusted.

When to avoid or be cautious:

  • If you handle highly sensitive data—including workplace data—and the browser’s privacy stance is unclear.
  • There will be concerns about early-stage software bugs or untested security.
  • When minimalism, speed, control, and simplicity are preferred over complex AI-driven features.
  • If your choice is limited it may be better to wait. Some AI browsers still focus on macOS or are limited to other platforms.

In essence, AI and agentic browsers are transformative tools meant to augment human browsing with AI intelligence but are best paired with an understanding of their platform maturity and privacy implications.

It is also good to understand that using them will come with a learning curve and that research into their vulnerabilities, although only scratching the surface has uncovered some serious security concerns. Specifically on how it’s possible to deliver prompt injection. Several researchers and security analysts have documented successful prompt injection methods targeting AI browsers and agentic browsing agents. Their reliance on dynamic content, tool execution, and user-provided data exposes AI browsers to a broad attack surface.

AI browsers are poised to redefine how we surf the web, blending browsing with intelligent assistance for a more productive and tailored experience. Like all new tech, choosing the right browser depends on balancing the promise of smart automation with careful security and privacy choices.

For cybersecurity-conscious users, experimenting with AI browsers like Sigma or Comet while keeping a standard browser for your day-to-day is a recommended strategy.

The future of web browsing is here. Browsers built on AI agents that think, act, and assist the user are available. But whether you and the current state of development are ready for it, is a decision only you can make.

Questions? Post them in the comments and I’ll add a FAQ section which answers those we know how.

AI browsers could leave users penniless: A prompt injection warning

25 August 2025 at 13:39

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

For example, when you tell your agentic browser, “Find the cheapest flight to Paris next month and book it,” the browser will do all the research, compare prices, fill out passenger details, and complete the booking without any extra steps or manual effort—provided it has all the necessary details of course, which are part of the prompts the user feeds the agentic browser.

Are you seeing the potential dangers of prompt injections here?

What if my agentic browser gets new details while visiting a website? I can imagine criminals setting up a website with extremely competitive pricing just to attract visitors, but the real goal is to extract the payment information which the agentic browser needs to make purchases on your behalf. You could end up paying for someone else’s vacation to France.

During their research, Brave found that Perplexity’s Comet has some vulnerabilities which “underline the security challenges faced by agentic AI implementations in browsers.”

The vulnerabilities allow an attack based on indirect prompt injection, which means the malicious instructions are embedded in external content (like a website, or a PDF) that the browser AI assistant processes as part of fulfilling the user’s request. There are various ways of hiding that malicious content from a casual inspection. Brave uses the example of white text on a white background which AI browsers have no problem reading and a human would not see without closer inspection.

To quote a user on X:

“You can literally get prompt injected and your bank account drained by doomscrolling on reddit”

To prevent this type of prompt injection, it is imperative that agentic browsers understand the difference between user-provided instructions and web content processed to fulfill the instructions and treat them accordingly.

Perplexity has attempted twice to fix the vulnerability reported by Brave, but it still hasn’t fully mitigated this kind of attack as of the time of this reporting.

Safe use of agentic browsers

While it’s always tempting to use the latest gadgets this comes with a certain amount of risk. To limit those risks when using agentic browsers you should:

  • Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agentic browser can access and limit permissions where possible.
  • Verify sources before trusting links or commands: Avoid letting the browser automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects or unexpected input requests.
  • Keep software updated: Ensure the agentic browser and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
  • Use strong authentication and monitoring: Protect accounts connected to agentic browsers with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
  • Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Being aware is the first step to preventing exploitation.
  • Limit sensitive operations automation: Avoid fully automating high-stakes transactions or actions without manual review. Agentic browsers should assist, but critical decisions benefit from human oversight. For example: limit the amount of money it can spend without your explicit permission or always let it ask you to authorize payments.
  • Report suspicious behavior: If an agentic browser acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

❌