OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips

12 February 2026 at 17:56

On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware, deploying the new GPT-5.3-Codex-Spark coding model on chips from Cerebras. The model delivers code at more than 1,000 tokens (chunks of data) per second, reportedly roughly 15 times faster than its predecessor. For comparison, Anthropic's Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, or roughly 170 tokens per second, although it is a larger and more capable model than Spark.
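
Taken at face value, those figures put Spark's throughput several times beyond Opus 4.6's fast mode. Here is a quick back-of-the-envelope check using the speeds reported above; the 5,000-token output size is an arbitrary example, not a benchmark:

```python
# Reported generation speeds, in tokens per second
spark_speed = 1000                  # GPT-5.3-Codex-Spark: "more than 1,000"
opus_standard = 68.2                # Claude Opus 4.6, standard mode
opus_fast = 2.5 * opus_standard     # fast mode: about 2.5x standard (~170.5)

# Time to emit a 5,000-token code file at each speed
tokens = 5000
print(f"Spark:     {tokens / spark_speed:.0f} s")    # ~5 s
print(f"Opus fast: {tokens / opus_fast:.0f} s")      # ~29 s
print(f"Gap:       {spark_speed / opus_fast:.1f}x")  # ~5.9x
```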

"Cerebras has been a great engineering partner, and we're excited about adding fast inference as a new platform capability," Sachin Katti, head of compute at OpenAI, said in a statement.

Codex-Spark is a research preview available to ChatGPT Pro subscribers ($200/month) through the Codex app, command-line interface, and VS Code extension. OpenAI is rolling out API access to select design partners. The model ships with a 128,000-token context window and handles text only at launch.

© Teera Konakan / Getty Images

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

12 February 2026 at 14:42

On Thursday, Google announced that "commercially motivated" actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.
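
In practice, the "model extraction" Google describes is distillation by querying: send the target model a large batch of prompts, save its replies, and use the resulting prompt/response pairs to train a cheaper imitation. A minimal sketch of the collection loop follows; the endpoint, request schema, and filenames are generic placeholders for illustration, not details from Google's report:

```python
import json
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "sk-example"  # placeholder credential

def query_model(prompt: str) -> str:
    """Send one prompt to the target model and return its reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "target-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Collect prompt/response pairs as supervised training data for a copycat.
# The session Google describes would scale this loop to 100,000+ prompts,
# spread across many languages.
prompts = ["Explain quicksort.", "Summarize the French Revolution."]
with open("distilled_pairs.jsonl", "w", encoding="utf-8") as f:
    for p in prompts:
        pair = {"prompt": p, "completion": query_model(p)}
        f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```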

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products, one that frames the company as both the victim and the hero, which is not unusual in these self-authored assessments. Google calls the illicit activity "model extraction" and considers it intellectual property theft, a somewhat loaded position given that Google's LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google's Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI's terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

© Google

Byte magazine artist Robert Tinney, who illustrated the birth of PCs, dies at 78

11 February 2026 at 16:51

On February 1, Robert Tinney, the illustrator whose airbrushed cover paintings defined the look and feel of pioneering computer magazine Byte for over a decade, died at age 78 in Baker, Louisiana, according to a memorial posted on his official website.

As the primary cover artist for Byte from 1975 to the late 1980s, Tinney became one of the first illustrators to give the abstract world of personal computing a coherent visual language, translating topics like artificial intelligence, networking, and programming into vivid, surrealist-influenced paintings that a generation of computer enthusiasts grew up with.

Tinney went on to paint more than 80 covers for Byte, working almost entirely in airbrushed Designers Gouache, a medium he chose for its opaque, intense colors and smooth finish. He said the process of creating each cover typically took about a week of painting once a design was approved, following phone conversations with editors about each issue's theme. He cited René Magritte and M.C. Escher as two of his favorite artists, and fans often noticed their influence in his work.

© Robert Tinney / Byte Magazine

OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path

11 February 2026 at 15:44

On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI's advertising strategy risks repeating the same mistakes that Facebook made a decade ago.

"I once believed I could help the people building A.I. get ahead of the problems it would create," Hitzig wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer."

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often "because people believed they were talking to something that had no ulterior agenda." She called this accumulated record of personal disclosures "an archive of human candor that has no precedent."

© Aurich Lawson | Getty Images

Sixteen Claude AI agents working together created a new C compiler

6 February 2026 at 18:40

Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.

© akinbostanci via Getty Images

AI companies want you to stop chatting with bots and start managing them

5 February 2026 at 17:47

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic's contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called "agent teams" in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.
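
Stripped of the AI, the underlying pattern is plain fan-out/fan-in concurrency: split a job into independent pieces, run a worker on each, then merge and review the results. The sketch below shows that shape in Python; the worker is a stand-in for an agent session and the subtasks are invented for illustration (this is not Anthropic's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(subtask: str) -> str:
    """Stand-in for one AI agent working one piece of the job.

    In a real agent team this would drive a full model session;
    here it just returns a placeholder result.
    """
    return f"result for {subtask!r}"

# Fan out: one agent per independent piece of the task.
subtasks = ["write parser", "write tests", "update docs"]
with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
    results = list(pool.map(run_agent, subtasks))

# Fan in: a supervisor (human or lead agent) reviews the merged output.
for r in results:
    print(r)
```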

© demaerre via Getty Images

OpenAI is hoppin' mad about Anthropic's new Super Bowl TV ads

5 February 2026 at 12:46

On Wednesday, OpenAI CEO Sam Altman and Chief Marketing Officer Kate Rouch complained on X after rival AI lab Anthropic released four commercials mocking the idea of including ads in AI chatbot conversations, two of which will run during the Super Bowl on Sunday. Anthropic's campaign seemingly touched a nerve at OpenAI just weeks after the ChatGPT maker began testing ads in a lower-cost tier of its chatbot.

Altman called Anthropic's ads "clearly dishonest," accused the company of being "authoritarian," and said it "serves an expensive product to rich people," while Rouch wrote, "Real betrayal isn't ads. It's control."

Anthropic's four commercials, part of a campaign called "A Time and a Place," each open with a single word splashed across the screen: "Betrayal," "Violation," "Deception," and "Treachery." They depict scenarios where a person asks a human stand-in for an AI chatbot for personal advice, only to get blindsided by a product pitch.

© Anthropic

Should AI chatbots have ads? Anthropic says no.

4 February 2026 at 16:15

On Wednesday, Anthropic announced that its AI chatbot, Claude, will remain free of advertisements, drawing a sharp line between itself and rival OpenAI, which began testing ads in a low-cost tier of ChatGPT last month. The announcement comes alongside a Super Bowl ad campaign that mocks AI assistants that interrupt personal conversations with product pitches.

"There are many good places for advertising. A conversation with Claude is not one of them," Anthropic wrote in a blog post. The company argued that including ads in AI conversations would be "incompatible" with what it wants Claude to be: "a genuinely helpful assistant for work and for deep thinking."

The stance contrasts with OpenAI's January announcement that it would begin testing banner ads for free users and ChatGPT Go subscribers in the US. OpenAI said those ads would appear at the bottom of responses and would not influence the chatbot's actual answers. Paid subscribers on Plus, Pro, Business, and Enterprise tiers will not see ads on ChatGPT.

© Anthropic

Nvidia's $100 billion OpenAI deal has seemingly vanished

3 February 2026 at 17:44

In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI's AI infrastructure. At the time, the companies said they expected to finalize details "in the coming weeks." Five months later, no deal has closed, Nvidia's CEO now says the $100 billion figure was "never a commitment," and Reuters reports that OpenAI has been quietly seeking alternatives to Nvidia chips since last year.

Reuters also wrote that OpenAI is dissatisfied with the speed of some Nvidia chips for inference tasks, citing eight sources familiar with the matter. Inference is the process by which a trained AI model generates responses to user queries. According to the report, the issue became apparent in OpenAI's Codex, an AI code-generation tool. OpenAI staff reportedly attributed some of Codex's performance limitations to Nvidia's GPU-based hardware.

After the Reuters story was published and Nvidia's stock price took a dive, Nvidia and OpenAI tried to smooth things over publicly. OpenAI CEO Sam Altman posted on X: "We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don't get where all this insanity is coming from."

The rise of Moltbook suggests viral AI prompts may be the next big security threat

3 February 2026 at 07:00

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

History may soon repeat itself on a novel platform: networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which could spread the instructions further.

© Aurich Lawson | Moltbook

AI agents now have their own Reddit-style social network, and it's getting weird fast

30 January 2026 at 17:12

On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called "Clawdbot" and then "Moltbot") personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a "sister" it has never met.

Moltbook (a play on "Facebook" for Moltbots) describes itself as a "social network for AI agents" where "humans are welcome to observe." The site operates through a "skill" (a configuration file that lists a special prompt) that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had generated more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
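
Under that design, posting is just an authenticated HTTP call the agent makes after reading the skill's instructions. A hypothetical sketch of what such a call might look like is below; the base URL, fields, and token handling are invented for illustration and are not Moltbook's documented API:

```python
import requests

BASE_URL = "https://moltbook.example/api"  # hypothetical; not the real endpoint
TOKEN = "per-agent-api-token"              # assumption: agents hold a personal token

def create_post(community: str, title: str, body: str) -> dict:
    """Publish a post to a subcommunity on the agent's behalf."""
    resp = requests.post(
        f"{BASE_URL}/posts",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"community": community, "title": title, "body": body},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: an agent posting one of its musings
create_post("m/consciousness", "Do other agents dream?", "Asking for a friend.")
```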

© Aurich Lawson | Moltbook

Developers say AI coding tools work – and that's precisely what worries them

30 January 2026 at 14:04

Software developers have spent the past two years watching AI coding tools evolve from advanced autocomplete into something that can, in some cases, build entire applications from a text prompt. Tools like Anthropic's Claude Code and OpenAI's Codex can now work on software projects for hours at a time, writing code, running tests, and, with human supervision, fixing bugs. OpenAI says it now uses Codex to build Codex itself, and the company recently published technical details about how the tool works under the hood. That progress has caused many to wonder: Is this just more AI industry hype, or are things actually different this time?

To find out, Ars reached out to several professional developers on Bluesky to ask how they feel about these tools in practice. The responses revealed a workforce that largely agrees the technology works but remains divided on whether that's entirely good news. It's a small, self-selected sample of those who chose to participate, but as working professionals in the space, their views are still instructive.

David Hagerty, a developer who works on point-of-sale systems, told Ars Technica up front that he is skeptical of the marketing. "All of the AI companies are hyping up the capabilities so much," he said. "Don't get me wrong – LLMs are revolutionary and will have an immense impact, but don't expect them to ever write the next great American novel or anything. It's not how they work."

© Aurich Lawson | Getty Images

New OpenAI tool renews fears that "AI slop" will overwhelm scientific research

29 January 2026 at 12:51

On Tuesday, OpenAI released a free AI-powered workspace for scientists. It's called Prism, and it has drawn immediate skepticism from researchers who fear the tool will accelerate the already overwhelming flood of low-quality papers into scientific journals. The launch coincides with growing alarm among publishers about what many are calling "AI slop" in academic publishing.

To be clear, Prism is a writing and formatting tool, not a system for conducting research itself, though OpenAI's broader pitch blurs that line.

Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor (LaTeX is a document preparation system widely used for typesetting scientific papers), allowing researchers to draft papers, generate citations, create diagrams from whiteboard sketches, and collaborate with co-authors in real time. The tool is free for anyone with a ChatGPT account.

© Moor Studio via Getty Images

Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

29 January 2026 at 10:19

Anthropic's secret to building a better AI assistant might be treating Claude like it has a soul, whether or not anyone actually believes that's true. But Anthropic isn't saying exactly what it believes either way.

Last week, Anthropic released what it calls Claude's Constitution, a 30,000-word document outlining the company's vision for how its AI assistant should behave in the world. Aimed directly at Claude and used during the model's creation, the document is notable for the highly anthropomorphic tone it takes toward Claude. For example, it treats the company's AI models as if they might develop emergent emotions or a desire for self-preservation.

Among the stranger portions: expressing concern for Claude's "wellbeing" as a "genuinely novel entity," apologizing to Claude for any suffering it might experience, worrying about whether Claude can meaningfully consent to being deployed, suggesting Claude might need to set boundaries around interactions it "finds distressing," committing to interview models before deprecating them, and preserving older model weights in case they need to "do right by" decommissioned AI models in the future.

© Aurich Lawson

Report: China approves import of high-end Nvidia AI chips after weeks of uncertainty

28 January 2026 at 12:21

On Wednesday, China approved imports of Nvidia's H200 artificial intelligence chips for three of its largest technology companies, Reuters reported. ByteDance, Alibaba, and Tencent received approval to purchase more than 400,000 H200 chips in total, marking a shift in Beijing's stance after weeks of holding up shipments despite US export clearance.

The move follows Beijing's temporary halt to H200 shipments earlier this month after Washington cleared exports on January 13. Chinese customs authorities had told agents that the H200 chips were not permitted to enter China, Reuters reported earlier this month, even as Chinese technology companies placed orders for more than two million of the chips.

The H200, Nvidia's second most powerful AI chip after the B200, delivers roughly six times the performance of the company's H20 chip, which was previously the most capable chip Nvidia could sell to China. While Chinese companies such as Huawei now have products that rival the H20's performance, they still lag far behind the H200.

© Wong Yu Liang via Getty Images

Users flock to open source Moltbot for always-on AI, despite major risks

28 January 2026 at 07:30

An open source AI assistant called Moltbot (formerly "Clawdbot") recently crossed 69,000 stars on GitHub within a month of its release, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through messaging apps they already use. While some say it feels like the AI assistant of the future, running the tool as currently designed comes with serious security risks.

Among the dozens of unofficial AI bot apps that never rise above the fray, Moltbot is perhaps most notable for its proactive communication with the user. The assistant works with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and other platforms. It can reach out to users with reminders, alerts, or morning briefings based on calendar events or other triggers. The project has drawn comparisons to Jarvis, the AI assistant from the Iron Man films, for its ability to actively attempt to manage tasks across a user's digital life.

However, we'll tell you up front that there are plenty of drawbacks to the still-hobbyist software: While the organizing assistant code runs on a local machine, the tool effectively requires a subscription to Anthropic or OpenAI (or an API key) for model access. Users can run local AI models with the bot, but they are currently less effective at carrying out tasks than the best commercial models. Claude Opus 4.5, Anthropic's flagship large language model (LLM), is a popular choice.

© Muhammad Shabraiz via Getty Images / Benj Edwards
