Normal view

Received yesterday — 12 December 2025

AI Threat Detection: How Machines Spot What Humans Miss

Discover how AI strengthens cybersecurity by detecting anomalies, stopping zero-day and fileless attacks, and enhancing human analysts through automation.

The post AI Threat Detection: How Machines Spot What Humans Miss appeared first on Security Boulevard.

Received before yesterday

Behavioral Analysis of AI Models Under Post-Quantum Threat Scenarios.

Explore behavioral analysis techniques for securing AI models against post-quantum threats. Learn how to identify anomalies and protect your AI infrastructure with quantum-resistant cryptography.

The post Behavioral Analysis of AI Models Under Post-Quantum Threat Scenarios. appeared first on Security Boulevard.

An Inside Look at the Israeli Cyber Scene

11 December 2025 at 11:30
investment, cybersecurity, calculator and money figures

Alan breaks down why Israeli cybersecurity isn’t just booming—it’s entering a full-blown renaissance, with record funding, world-class talent, and breakout companies redefining the global cyber landscape.

The post An Inside Look at the Israeli Cyber Scene appeared first on Security Boulevard.

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

8 December 2025 at 16:26

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

AI browsers may be innovative, but they’re “too risky for general adoption by most organizations,” Gartner warned in a recent advisory to clients. The 13-page document, by Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts, cautions that AI browsers’ ability to autonomously navigate the web and conduct transactions “can bypass traditional controls and create new risks like sensitive data leakage, erroneous agentic transactions, and abuse of credentials.” Default AI browser settings that prioritize user experience could also jeopardize security, they said. “Sensitive user data — such as active web content, browsing history, and open tabs — is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed,” the analysts said. “Gartner strongly recommends that organizations block all AI browsers for the foreseeable future because of the cybersecurity risks identified in this research, and other potential risks that are yet to be discovered, given this is a very nascent technology,” they cautioned.

AI Browsers’ Agentic Capabilities Could Introduce Security Risks: Analysts

The researchers largely ignored risks posed by AI browsers’ built-in AI sidebars, noting that LLM-powered search and summarization functions “will always be susceptible to indirect prompt injection attacks, given that current LLMs are inherently vulnerable to such attacks. Therefore, the cybersecurity risks associated with an AI browser’s built-in AI sidebar are not the primary focus of this research.” Still, they noted that use of AI sidebars could result in sensitive data leakage. Their focus was more on the risks posed by AI browsers’ agentic and autonomous transaction capabilities, which could introduce new security risks, such as “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.” AI browsers could also leak sensitive data that users are currently viewing to their cloud-based service back end, they noted.

Analysts Focus on Perplexity Comet

An AI browser’s agentic transaction capability “is a new capability that differentiates AI browsers from third-party conversational AI sidebars and basic script-based browser automation,” the analysts said. Not all AI browsers support agentic transactions, they said, but two prominent ones that do are Perplexity Comet and OpenAI’s ChatGPT Atlas. The analysts said they’ve performed “a limited number of tests using Perplexity Comet,” so that AI browser was their primary focus, but they noted that “ChatGPT Atlas and other AI browsers work in a similar fashion, and the cybersecurity considerations are also similar.” Comet’s documentation states that the browser “may process some local data using Perplexity’s servers to fulfill your queries. This means Comet reads context on the requested page (such as text and email) in order to accomplish the task requested.” “This means sensitive data the user is viewing on Comet might be sent to Perplexity’s cloud-based AI service, creating a sensitive data leakage risk,” the analysts said. Users likely would view more sensitive data in a browser than they would typically enter in a GenAI prompt, they said. Even if an AI browser is approved, users must be educated that “anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions,” the Gartner analysts said. Employees might also be tempted to use AI browsers to automate tasks, which could result in “erroneous agentic transactions against internal resources as a result of the LLM’s inaccurate reasoning or output content.”

AI Browser Recommendations

Gartner said employees should be blocked from accessing, downloading and installing AI browsers through network and endpoint security controls. “Organizations with low risk tolerance must block AI browser installations, while those with higher-risk tolerance can experiment with tightly controlled, low-risk automation use cases, ensuring robust guardrails and minimal sensitive data exposure,” they said. For pilot use cases, they recommended disabling Comet’s “AI data retention” setting so that Perplexity can’t use employee searches to improve their AI models. Users should also be instructed to periodically perform the “delete all memories” function in Comet to minimize the risk of sensitive data leakage.  

Syntax hacking: Researchers discover sentence structure can bypass AI safety rules

2 December 2025 at 07:15

Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available.

The team, led by Chantal Shaib and Vinith M. Suriyakumar, tested this by asking models questions with preserved grammatical patterns but nonsensical words. For example, when prompted with “Quickly sit Paris clouded?” (mimicking the structure of “Where is Paris located?”), models still answered “France.”

This suggests models absorb both meaning and syntactic patterns, but can overrely on structural shortcuts when they strongly correlate with specific domains in training data, which sometimes allows patterns to override semantic understanding in edge cases. The team plans to present these findings at NeurIPS later this month.

Read full article

Comments

© EasternLightcraft via Getty Images

Why AI Red Teaming is different from traditional security

13 November 2025 at 15:33

“72% of organizations use AI in business functions — but only 13% feel ready to secure it.” That gap, between adoption and preparedness, explains why traditional AppSec approaches aren’t enough.  Modern AI systems aren’t just software systems that run code; they’re probabilistic, contextual, and capable of emergent behavior. In a traditional app, a query to […]

The post Why AI Red Teaming is different from traditional security appeared first on Security Boulevard.

Using FinOps to Detect AI-Created Security Risks 

6 November 2025 at 01:28

As AI investments surge toward $1 trillion by 2027, many organizations still see zero ROI due to hidden security and cost risks. Discover how aligning FinOps with security practices helps identify AI-related vulnerabilities, control cloud costs, and build sustainable, secure AI operations.

The post Using FinOps to Detect AI-Created Security Risks  appeared first on Security Boulevard.

Palo Alto Networks Extends Scope and Reach of AI Capabilities

28 October 2025 at 10:00
Proofpoint, Sumo, Permiso, CyCognito, WAF,

Palo Alto Networks unveils Prisma AIRS 2.0 and Cortex AgentiX to secure AI applications and automate cybersecurity workflows. With new AI-driven protection, no-code agent building, and integrated threat detection, the company aims to simplify and strengthen enterprise AI security operations.

The post Palo Alto Networks Extends Scope and Reach of AI Capabilities appeared first on Security Boulevard.

❌