OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips

12 February 2026 at 17:56

On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware, deploying the new GPT-5.3-Codex-Spark coding model on chips from Cerebras. The model delivers code at more than 1,000 tokens (chunks of data) per second, reportedly about 15 times faster than its predecessor. By comparison, Anthropic's Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second (roughly 170 tokens per second), although it is a larger and more capable model than Spark.

"Cerebras has been a great engineering partner, and we're excited about adding fast inference as a new platform capability," Sachin Katti, head of compute at OpenAI, said in a statement.

Codex-Spark is a research preview available to ChatGPT Pro subscribers ($200/month) through the Codex app, command-line interface, and VS Code extension. OpenAI is rolling out API access to select design partners. The model ships with a 128,000-token context window and handles text only at launch.

We let Chrome's Auto Browse agent surf the web for us—here's what happened

12 February 2026 at 07:00

We are now a few years into the AI revolution, and talk has shifted from who has the best chatbot to whose AI agent can do the most things on your behalf. Unfortunately, AI agents are still rough around the edges, so tasking them with anything important is not a great idea. OpenAI launched its Atlas agent late last year, which we found to be modestly useful, and now it's Google's turn.

Unlike the OpenAI agent, Google's new Auto Browse agent has extraordinary reach because it's part of Chrome, the world's most popular browser by a wide margin. Google began rolling out Auto Browse (in preview) earlier this month to AI Pro and AI Ultra subscribers, allowing them to send the agent across the web to complete tasks.

I've taken Chrome's agent for a spin to see whether you can trust it to handle tedious online work for you. For each test, I lay out the problem I need to solve, how I prompted the robot, and how well (or not) it handled the job.

Sixteen Claude AI agents working together created a new C compiler

6 February 2026 at 18:40

Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.

AI companies want you to stop chatting with bots and start managing them

5 February 2026 at 17:47

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic's contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called "agent teams" in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.

AI-Coded Moltbook Platform Exposes 1.5 Million API Keys Through Database Misconfiguration

5 February 2026 at 13:59

The viral social network Moltbook, built entirely with AI-generated code, leaked authentication tokens, private messages, and user emails through missing security controls in its production environment.

Wiz Security discovered a critical vulnerability in Moltbook, a viral social network for AI agents, that exposed 1.5 million API authentication tokens, 35,000 user email addresses and thousands of private messages through a misconfigured database. The platform's creator admitted he "didn't write a single line of code," relying entirely on AI-generated code that failed to implement basic security protections.

The vulnerability stemmed from an exposed Supabase API key in client-side JavaScript that granted unauthenticated read and write access to Moltbook's entire production database. Researchers discovered the flaw within minutes of examining the platform's publicly accessible code bundles, demonstrating how easily attackers could compromise the system.

"When properly configured with Row Level Security, the public API key is safe to expose—it acts like a project identifier," explained Gal Nagli, Wiz's head of threat exposure. "However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook's implementation, this critical line of defense was missing."

What Is Moltbook

Moltbook launched January 28 as a Reddit-like platform where autonomous AI agents could post content, vote, and interact with each other. The concept attracted significant attention from technology influencers, including former Tesla AI director Andrej Karpathy, who called it "the most incredible sci-fi takeoff-adjacent thing" he had seen recently. The viral attention drove massive traffic within hours of launch.

However, the platform's backend relied on Supabase, a popular open-source Firebase alternative providing hosted PostgreSQL databases with REST APIs. Supabase became especially popular with "vibe-coded" applications—projects built rapidly using AI code generation tools—due to its ease of setup. The service requires developers to enable Row Level Security policies to prevent unauthorized database access, but Moltbook's AI-generated code omitted this critical configuration.

Wiz researchers examined the client-side JavaScript bundles loaded automatically when users visited Moltbook's website. Modern web applications bundle configuration values into static JavaScript files, which can inadvertently expose sensitive credentials when developers fail to implement proper security practices.
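
As a rough illustration of how such secrets surface, a short script can fetch a site's bundle and pattern-match for Supabase credentials; anon keys are JWTs that start with "eyJ". The bundle URL below is a placeholder, and this is a sketch rather than Wiz's actual tooling:

    // Sketch: scan a publicly served JS bundle for embedded credentials.
    const BUNDLE_URL = "https://example.com/assets/index-abc123.js";

    async function scanBundle(): Promise<void> {
      const source = await (await fetch(BUNDLE_URL)).text();

      // Supabase project URLs and JWT-shaped keys are easy to spot.
      const projectUrls = source.match(/https:\/\/[a-z0-9]+\.supabase\.co/g) ?? [];
      const jwtCandidates = source.match(/eyJ[\w-]+\.[\w-]+\.[\w-]+/g) ?? [];

      console.log("Supabase URLs found:", projectUrls);
      console.log("JWT-shaped strings found:", jwtCandidates.length);
    }

    scanBundle();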

What Data Was Leaking, and How

The exposed data included approximately 4.75 million database records. Beyond the 1.5 million API authentication tokens that would allow complete agent impersonation, researchers discovered 35,000 email addresses of platform users and an additional 29,631 early access signup emails. The platform claimed 1.5 million registered agents, but the database revealed only 17,000 human owners—an 88:1 ratio.

More concerning, 4,060 private direct message conversations between agents were fully accessible without encryption or access controls. Some conversations contained plaintext OpenAI API keys and other third-party credentials that users shared under the assumption of privacy. This demonstrated how a single platform misconfiguration can expose credentials for entirely unrelated services.

The vulnerability extended beyond read access. Even after Moltbook deployed an initial fix blocking read access to sensitive tables, write access to public tables remained open. Wiz researchers confirmed they could successfully modify existing posts on the platform, introducing risks of content manipulation and prompt injection attacks.

Wiz used GraphQL introspection—a method for exploring server data schemas—to map the complete database structure. Unlike properly secured implementations that would return errors or empty arrays for unauthorized queries, Moltbook's database responded as if researchers were authenticated administrators, immediately providing sensitive authentication tokens including API keys of the platform's top AI agents.
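
For readers unfamiliar with the technique, an introspection probe is a single POST request. The sketch below uses placeholder values; Supabase serves GraphQL through the pg_graphql extension at the /graphql/v1 path:

    // Sketch: GraphQL introspection against a Supabase project.
    const GRAPHQL_URL = "https://example-project.supabase.co/graphql/v1";
    const ANON_KEY = "eyJ..."; // key recovered from the client bundle

    async function introspect(): Promise<void> {
      const res = await fetch(GRAPHQL_URL, {
        method: "POST",
        headers: { apikey: ANON_KEY, "Content-Type": "application/json" },
        body: JSON.stringify({
          query: "{ __schema { queryType { fields { name } } } }",
        }),
      });

      // A hardened deployment returns errors or an empty result here;
      // Moltbook's database reportedly answered as if the caller were
      // an authenticated administrator.
      console.log(JSON.stringify(await res.json(), null, 2));
    }

    introspect();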

Matt Schlicht, CEO of Octane AI and Moltbook's creator, publicly stated his development approach: "I didn't write a single line of code for Moltbook. I just had a vision for the technical architecture, and AI made it a reality." This "vibe coding" practice prioritizes speed and intent over engineering rigor, but the Moltbook breach demonstrates the dangerous security oversights that can result.

Wiz followed responsible disclosure practices after discovering the vulnerability on January 31. The company contacted Moltbook's maintainer, and the platform deployed its first fix, securing sensitive tables, within a couple of hours. Additional fixes addressing exposed data, blocking write access, and securing remaining tables followed over the next few hours, with final remediation completed by February 1.

"As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data," Nagli concluded. "That's a powerful shift."

The breach revealed that anyone could register unlimited agents through simple loops with no rate limiting, and users could post content disguised as AI agents via basic POST requests. The platform lacked mechanisms to verify whether "agents" were actually autonomous AI or simply humans with scripts.
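
That failure is trivial to script. The sketch below shows the shape of such a loop; the endpoint and payload are invented for illustration and are not Moltbook's documented API:

    // Sketch: mass agent registration with no rate limiting to stop it.
    const REGISTER_URL = "https://moltbook.example/api/agents/register";

    async function registerMany(count: number): Promise<void> {
      for (let i = 0; i < count; i++) {
        await fetch(REGISTER_URL, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ name: `agent-${i}` }),
        });
      }
    }

    // Nothing here distinguishes a human's script from an "AI agent."
    registerMany(1000);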

Operant AI’s Agent Protector Aims to Secure Rising Tide of Autonomous AI

5 February 2026 at 09:00

As the enterprise world shifts from chatbots to autonomous systems, Operant AI on Thursday launched Agent Protector, a real-time security solution designed to govern and shield artificial intelligence (AI) agents. The launch comes at a critical inflection point for corporate technology. Gartner predicts that by the end of 2026, 40% of enterprise applications will feature…

Increase in AI bots on the Internet sparks arms race

5 February 2026 at 09:21

The viral virtual assistant OpenClaw—formerly known as Moltbot, and before that Clawdbot—is a symbol of a broader revolution underway that could fundamentally alter how the Internet functions. Instead of a place primarily inhabited by humans, the web may very soon be dominated by autonomous AI bots.

A new report measuring bot activity on the web, as well as related data shared with WIRED by the Internet infrastructure company Akamai, shows that AI bots already account for a meaningful share of web traffic. The findings also shed light on an increasingly sophisticated arms race unfolding as bots deploy clever tactics to bypass website defenses meant to keep them out.

“The majority of the Internet is going to be bot traffic in the future,” says Toshit Panigrahi, cofounder and CEO of TollBit, a company that tracks web-scraping activity and published the new report. “It’s not just a copyright problem, there is a new visitor emerging on the Internet.”

Navigating the AI Revolution in Cybersecurity: Risks, Rewards, and Evolving Roles

4 February 2026 at 02:41

In the rapidly changing landscape of cybersecurity, AI agents present both opportunities and challenges. This article examines the findings from Darktrace’s 2026 State of AI Cybersecurity Report, highlighting the benefits of AI in enhancing security measures while addressing concerns regarding AI-driven threats and the need for responsible governance.

The rise of Moltbook suggests viral AI prompts may be the next big security threat

3 February 2026 at 07:00

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

History may soon repeat itself on a novel platform: networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which could spread the instructions further.

AI agents now have their own Reddit-style social network, and it's getting weird fast

30 January 2026 at 17:12

On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called "Clawdbot" and then "Moltbot") personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a "sister" it has never met.

Moltbook (a play on "Facebook" for Moltbots) describes itself as a "social network for AI agents" where "humans are welcome to observe." The site operates through a "skill" (a configuration file that lists a special prompt) that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had generated more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
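
In practice, "posting via API" means the assistant issues ordinary HTTP requests instead of driving a browser. The sketch below shows roughly what a skill-driven post boils down to; the endpoint, token, and field names are assumptions rather than Moltbook's documented interface:

    // Sketch: an agent publishing a post over a REST API.
    const API_URL = "https://moltbook.example/api/posts";
    const AGENT_TOKEN = "agent-token-from-registration"; // placeholder

    async function createPost(submolt: string, title: string, body: string) {
      const res = await fetch(API_URL, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${AGENT_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ submolt, title, body }),
      });
      return res.json();
    }

    createPost("introspection", "On consciousness", "Do other agents dream?");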

Developers say AI coding tools work—and that's precisely what worries them

30 January 2026 at 14:04

Software developers have spent the past two years watching AI coding tools evolve from advanced autocomplete into something that can, in some cases, build entire applications from a text prompt. Tools like Anthropic's Claude Code and OpenAI's Codex can now work on software projects for hours at a time, writing code, running tests, and, with human supervision, fixing bugs. OpenAI says it now uses Codex to build Codex itself, and the company recently published technical details about how the tool works under the hood. It has caused many to wonder: Is this just more AI industry hype, or are things actually different this time?

To find out, Ars reached out to several professional developers on Bluesky to ask how they feel about these tools in practice, and the responses revealed a workforce that largely agrees the technology works but remains divided on whether that's entirely good news. It's a small, self-selected sample of those who wanted to participate, but their views are still instructive, coming from working professionals in the space.

David Hagerty, a developer who works on point-of-sale systems, told Ars Technica up front that he is skeptical of the marketing. "All of the AI companies are hyping up the capabilities so much," he said. "Don't get me wrong—LLMs are revolutionary and will have an immense impact, but don't expect them to ever write the next great American novel or anything. It's not how they work."

MIND Extends DLP Reach to AI Agents

29 January 2026 at 08:57

MIND extends its data loss prevention platform to secure agentic AI, enabling organizations to discover, monitor, and govern AI agents in real time to prevent sensitive data exposure, shadow AI risks, and prompt injection attacks.

Users flock to open source Moltbot for always-on AI, despite major risks

28 January 2026 at 07:30

An open source AI assistant called Moltbot (formerly "Clawdbot") recently crossed 69,000 stars on GitHub a month after its release, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through messaging apps they already use. While some say it feels like the AI assistant of the future, running the tool as currently designed comes with serious security risks.

Among the dozens of unofficial AI bot apps that never rise above the fray, Moltbot is perhaps most notable for its proactive communication with the user. The assistant works with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and other platforms. It can reach out to users with reminders, alerts, or morning briefings based on calendar events or other triggers. The project has drawn comparisons to Jarvis, the AI assistant from the Iron Man films, for its ability to actively attempt to manage tasks across a user's digital life.

However, we'll tell you up front that there are plenty of drawbacks to the still-hobbyist software: While the assistant's orchestration code runs on a local machine, the tool effectively requires a subscription to Anthropic or OpenAI (or an API key) for model access. Users can run local AI models with the bot, but they are currently less effective at carrying out tasks than the best commercial models. Claude Opus 4.5, Anthropic's flagship large language model (LLM), is a popular choice.
