Normal view

Received before yesterday

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

8 December 2025 at 16:26

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

AI browsers may be innovative, but they’re “too risky for general adoption by most organizations,” Gartner warned in a recent advisory to clients. The 13-page document, by Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts, cautions that AI browsers’ ability to autonomously navigate the web and conduct transactions “can bypass traditional controls and create new risks like sensitive data leakage, erroneous agentic transactions, and abuse of credentials.” Default AI browser settings that prioritize user experience could also jeopardize security, they said. “Sensitive user data — such as active web content, browsing history, and open tabs — is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed,” the analysts said. “Gartner strongly recommends that organizations block all AI browsers for the foreseeable future because of the cybersecurity risks identified in this research, and other potential risks that are yet to be discovered, given this is a very nascent technology,” they cautioned.

AI Browsers’ Agentic Capabilities Could Introduce Security Risks: Analysts

The researchers largely ignored risks posed by AI browsers’ built-in AI sidebars, noting that LLM-powered search and summarization functions “will always be susceptible to indirect prompt injection attacks, given that current LLMs are inherently vulnerable to such attacks. Therefore, the cybersecurity risks associated with an AI browser’s built-in AI sidebar are not the primary focus of this research.” Still, they noted that use of AI sidebars could result in sensitive data leakage. Their focus was more on the risks posed by AI browsers’ agentic and autonomous transaction capabilities, which could introduce new security risks, such as “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.” AI browsers could also leak sensitive data that users are currently viewing to their cloud-based service back end, they noted.

Analysts Focus on Perplexity Comet

An AI browser’s agentic transaction capability “is a new capability that differentiates AI browsers from third-party conversational AI sidebars and basic script-based browser automation,” the analysts said. Not all AI browsers support agentic transactions, they said, but two prominent ones that do are Perplexity Comet and OpenAI’s ChatGPT Atlas. The analysts said they’ve performed “a limited number of tests using Perplexity Comet,” so that AI browser was their primary focus, but they noted that “ChatGPT Atlas and other AI browsers work in a similar fashion, and the cybersecurity considerations are also similar.” Comet’s documentation states that the browser “may process some local data using Perplexity’s servers to fulfill your queries. This means Comet reads context on the requested page (such as text and email) in order to accomplish the task requested.” “This means sensitive data the user is viewing on Comet might be sent to Perplexity’s cloud-based AI service, creating a sensitive data leakage risk,” the analysts said. Users likely would view more sensitive data in a browser than they would typically enter in a GenAI prompt, they said. Even if an AI browser is approved, users must be educated that “anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions,” the Gartner analysts said. Employees might also be tempted to use AI browsers to automate tasks, which could result in “erroneous agentic transactions against internal resources as a result of the LLM’s inaccurate reasoning or output content.”

AI Browser Recommendations

Gartner said employees should be blocked from accessing, downloading and installing AI browsers through network and endpoint security controls. “Organizations with low risk tolerance must block AI browser installations, while those with higher-risk tolerance can experiment with tightly controlled, low-risk automation use cases, ensuring robust guardrails and minimal sensitive data exposure,” they said. For pilot use cases, they recommended disabling Comet’s “AI data retention” setting so that Perplexity can’t use employee searches to improve their AI models. Users should also be instructed to periodically perform the “delete all memories” function in Comet to minimize the risk of sensitive data leakage.  

Prompt Injection in AI Browsers

11 November 2025 at 07:08

This is why AIs are not ready to be personal assistants:

A new attack called ‘CometJacking’ exploits URL parameters to pass to Perplexity’s Comet AI browser hidden instructions that allow access to sensitive data from connected services, like email and calendar.

In a realistic scenario, no credentials or user interaction are required and a threat actor can leverage the attack by simply exposing a maliciously crafted URL to targeted users.

[…]

CometJacking is a prompt-injection attack where the query string processed by the Comet AI browser contains malicious instructions added using the ‘collection’ parameter of the URL.

LayerX researchers say that the prompt tells the agent to consult its memory and connected services instead of searching the web. As the AI tool is connected to various services, an attacker leveraging the CometJacking method could exfiltrate available data.

In their tests, the connected services and accessible data include Google Calendar invites and Gmail messages and the malicious prompt included instructions to encode the sensitive data in base64 and then exfiltrate them to an external endpoint.

According to the researchers, Comet followed the instructions and delivered the information to an external system controlled by the attacker, evading Perplexity’s checks.

I wrote previously:

Prompt injection isn’t just a minor security problem we need to deal with. It’s a fundamental property of current LLM technology. The systems have no ability to separate trusted commands from untrusted data, and there are an infinite number of prompt injection attacks with no way to block them as a class. We need some new fundamental science of LLMs before we can solve this.

Is AI moving faster than its safety net?

24 October 2025 at 09:35

You’ve probably noticed that artificial intelligence, or AI, has been everywhere lately—news, phones, apps, even in your browser. It seems like everything suddenly wants to be “powered by AI.“ If it’s not, it’s considered old school and boring. It’s easy to get swept up in the promise: smarter tools, less work, and maybe even a glimpse of the future.

But if we look at some of the things we learned just this week, that glimpse doesn’t only promise good things. There’s a quieter story running alongside the hype that you won’t see in the commercials. It’s the story of how AI’s rapid development is leaving security and privacy struggling to catch up.

And if you make use of AI assistants, chatbots, or those “smart” AI browsers popping up on your screen, those stories are worth your attention.

Are they smarter than us?

Even some of the industry’s biggest names—Steve Wozniak, Sir Richard Branson, and Stuart Russel—are worried that progress in AI is moving too fast for its own good. In an article published by ZDNet, they talk about their fear of “superintelligence,” saying they’re afraid we’ll cross the line from “AI helps humans” to “AI acts beyond human control” before we’ve figured out how to keep it in check.

These scenarios are not about killer robots or takeovers like in the movies. They’re about much smaller, subtler problems that add up. For example, an AI system designed to make customer service more efficient might accidentally share private data because it wasn’t trained to understand what’s confidential. Or an AI tool designed to optimize web traffic might quietly break privacy laws it doesn’t comprehend.

At the scale we use AI—billions of interactions per day—these oversights become serious. The problem isn’t that AI is malicious; it’s that it doesn’t understand consequences, and developers forget to set boundaries.

We’re already struggling to build basic online safety into the AI tools that are replacing our everyday ones.

AI browsers: too smart, too soon

AI browsers—and their newer cousin, the ‘agentic’ browser—do more than just display websites. They can read them, summarize them, and even perform tasks for you.

A browser that can search, write, and even act on your behalf sounds great—but you may want to rethink that. According to research reported by Futurism, some of these tools are being rolled out with deeply worrying security flaws.

Here’s the issue: many AI browsers are just as vulnerable to prompt injection as AI chatbots. The difference is that if you give an AI browser a task, it runs off on its own and you have little control over what it reads or where it goes.

Take Comet, a browser developed by the company Perplexity. Researchers at Brave found that Comet’s “AI assistant” could be tricked into doing harmful things simple because it trusted what it saw online.

In one test, researchers showed the browser a seemingly innocent image. Hidden inside that image was a line of invisible text—something no human would see, but instructions meant only for the AI. The browser followed the hidden commands and ended up opening personal emails and visiting a malicious website.

In short, the AI couldn’t tell the difference between a user’s request and an attacker’s disguised instructions. That is a typical example of a prompt injection attack, which works a bit like phishing for machines. Instead of tricking a person into clicking a bad link, it tricks an AI browser into doing it for you. Without the realization of “oops, maybe I shouldn’t have done that,” it is faster, quiet, and with access you might not even realize it has.

The AI has no idea it did something wrong. It’s just following orders, doing exactly what it was programmed to do. It doesn’t know which instructions are bad because nobody taught it how to tell the difference.

Misery loves company: spoofed AI interfaces

Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces.

According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to genuine ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot. Picture this: you open your browser, see what looks like your trusted AI helper, and ask it a question. But instead of the AI assistant helping you, it’s quietly recording every word you type.

Some of these fake sidebars even persuade users to “verify” credentials or “authorize” a quick fix. This is social engineering in a new disguise. The scammer doesn’t need to lure you away from the page, they just need to convince you that the AI you’re chatting with is legitimate. Once that trust is earned, the damage is done.

And since AI tools are designed to sound helpful, polite, and confident, most people will take their word for it. After all, if an AI browser says, “Don’t worry, this is safe to click,” who are you to argue?

What can we do?

The key problem right now is speed. We keep pushing the limits of what AI can do faster than we can make it safe. The next big problem will be the data these systems are trained on.

As long as we keep chasing the newest features, companies will keep pushing for more options and integrations—whether or not they’re ready. They’ll teach your fridge to track your diet if they think you’ll buy it.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting it with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.

Pro tip: I installed Malwarebytes’ Browser Guard on Comet, and it seems to be working fine so far. I’ll keep you posted on that.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

OpenAI Unveils Atlas Web Browser Built to Work Closely With ChatGPT

21 October 2025 at 15:21
The new browser, called Atlas, is designed to work closely with OpenAI products like ChatGPT.

© Benjamin Legendre/Agence France-Presse — Getty Images

OpenAI’s chief executive, Sam Altman, has been looking for ways to level the playing field with his company’s giant competitors.
❌