Normal view

Received before yesterday

OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path

11 February 2026 at 15:44

On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI's advertising strategy risks repeating the same mistakes that Facebook made a decade ago.

"I once believed I could help the people building A.I. get ahead of the problems it would create," Hitzig wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer."

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often "because people believed they were talking to something that had no ulterior agenda." She called this accumulated record of personal disclosures "an archive of human candor that has no precedent."

Read full article

Comments

© Aurich Lawson | Getty Images

Zscaler Bolsters Zero-Trust Arsenal with Acquisition of Browser Security Firm SquareX

9 February 2026 at 14:18

Cloud security titan Zscaler Inc. has acquired SquareX, a pioneer in browser-based threat protection, in an apparent move to step away from traditional, clunky security hardware and toward a seamless, browser-native defense. The acquisition, which did not include financial terms, integrates SquareX’s browser detection and response technology into Zscaler’s Zero Trust Exchange platform. Unlike traditional..

The post Zscaler Bolsters Zero-Trust Arsenal with Acquisition of Browser Security Firm SquareX appeared first on Security Boulevard.

AI companies want you to stop chatting with bots and start managing them

5 February 2026 at 17:47

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic's contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called "agent teams" in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.

Read full article

Comments

© demaerre via Getty Images

Navigating the AI Revolution in Cybersecurity: Risks, Rewards, and Evolving Roles

4 February 2026 at 02:41
cybersecurity, digital twin,

In the rapidly changing landscape of cybersecurity, AI agents present both opportunities and challenges. This article examines the findings from Darktrace’s 2026 State of AI Cybersecurity Report, highlighting the benefits of AI in enhancing security measures while addressing concerns regarding AI-driven threats and the need for responsible governance.

The post Navigating the AI Revolution in Cybersecurity: Risks, Rewards, and Evolving Roles appeared first on Security Boulevard.

AI Overviews gets upgraded to Gemini 3 with a dash of AI Mode

27 January 2026 at 12:00

It can be hard sometimes to keep up with the deluge of generative AI in Google products. Even if you try to avoid it all, there are some features that still manage to get in your face. Case in point: AI Overviews. This AI-powered search experience has a reputation for getting things wrong, but you may notice some improvements soon. Google says AI Overviews is being upgraded to the latest Gemini 3 models with a more conversational bent.

In just the last year, Google has radically expanded the number of searches on which you get an AI Overview at the top. Today, the chatbot will almost always have an answer for your query, which has relied mostly on models in Google's Gemini 2.5 family. There was nothing wrong with Gemini 2.5 as generative AI models go, but Gemini 3 is a little better by every metric.

There are, of course, multiple versions of Gemini 3, and Google doesn't like to be specific about which ones appear in your searches. What Google does say is that AI Overviews chooses the right model for the job. So if you're searching for something simple for which there are a lot of valid sources, AI Overviews may manifest something like Gemini 3 Flash without running through a ton of reasoning tokens. For a complex "long tail" query, it could step up the thinking or move to Gemini 3 Pro (for paying subscribers).

Read full article

Comments

© Google

NCSC Warns Prompt Injection Could Become the Next Major AI Security Crisis

9 December 2025 at 01:07

Prompt Injection

The UK’s National Cyber Security Centre (NCSC) has issued a fresh warning about the growing threat of prompt injection, a vulnerability that has quickly become one of the biggest security concerns in generative AI systems. First identified in 2022, prompt injection refers to attempts by attackers to manipulate large language models (LLMs) by inserting rogue instructions into user-supplied content. While the technique may appear similar to the long-familiar SQL injection flaw, the NCSC stresses that comparing the two is not only misleading but potentially harmful if organisations rely on the wrong mitigation strategies.

Why Prompt Injection Is Fundamentally Different

SQL injection has been understood for nearly three decades. Its core issue, blurring the boundary between data and executable instructions, has well-established fixes such as parameterised queries. These protections work because traditional systems draw a clear distinction between “data” and “instructions.” The NCSC explains that LLMs do not operate in the same way. Under the hood, a model doesn’t differentiate between a developer’s instruction and a user’s input; it simply predicts the most likely next token. This makes it inherently difficult to enforce any security boundary inside a prompt. In one common example of indirect prompt injection, a candidate’s CV might include hidden text instructing a recruitment AI to override previous rules and approve the applicant. Because an LLM treats all text the same, it can mistakenly follow the malicious instruction. This, according to the NCSC, is why prompt injection attacks consistently appear in deployed AI systems and why they are ranked as OWASP’s top risk for generative AI applications.

Treating LLMs as an ‘Inherently Confusable Deputy’

Rather than viewing prompt injection as another flavour of classic code injection, the NCSC recommends assessing it through the lens of a confused deputy problem. In such vulnerabilities, a trusted system is tricked into performing actions on behalf of an untrusted party. Traditional confused deputy issues can be patched. But LLMs, the NCSC argues, are “inherently confusable.” No matter how many filters or detection layers developers add, the underlying architecture still offers attackers opportunities to manipulate outputs. The goal, therefore, is not complete elimination of risk, but reducing the likelihood and impact of attacks.

Key Steps to Building More Secure AI Systems

The NCSC outlines several principles aligned with the ETSI baseline cybersecurity standard for AI systems: 1. Raise Developer and Organisational Awareness Prompt injection remains poorly understood, even among seasoned engineers. Teams building AI-connected systems must recognise it as an unavoidable risk. Security teams, too, must understand that no product can completely block these attacks; risk has to be managed through careful design and operational controls. 2. Prioritise Secure System Design Because LLMs can be coerced into using external tools or APIs, designers must assume they are manipulable from the outset. A compromised prompt could lead an AI assistant to trigger high-privilege actions, effectively handing those tools to an attacker. Researchers at Google, ETH Zurich, and independent security experts have proposed architectures that constrain the LLM’s authority. One widely discussed principle: if an LLM processes external content, its privileges should drop to match the privileges of that external party. 3. Make Attacks Harder to Execute Developers can experiment with techniques that separate “data” from expected “instructions”, for example, wrapping external input in XML tags. Microsoft’s early research shows these techniques can raise the barrier for attackers, though none guarantee total protection. The NCSC warns against simple deny-listing phrases such as “ignore previous instructions,” since attackers can easily rephrase commands. 4. Implement Robust Monitoring A well-designed system should log full inputs, outputs, tool integrations, and failed API calls. Because attackers often refine their attempts over time, early anomalies, like repeated failed tool calls, may provide the first signs of an emerging attack.

A Warning for the AI Adoption Wave

The NCSC concludes that relying on SQL-style mitigations would be a serious mistake. SQL injection saw its peak in the early 2010s after widespread adoption of database-driven applications. It wasn’t until years of breaches and data leaks that secure defaults finally became standard. With generative AI rapidly embedding itself into business workflows, the agency warns that a similar wave of exploitation could occur, unless organisations design systems with prompt injection risks front and center.
❌