
OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips

12 February 2026 at 17:56

On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware, deploying the new GPT-5.3-Codex-Spark coding model on chips from Cerebras. The model generates code at more than 1,000 tokens (chunks of data) per second, reportedly about 15 times faster than its predecessor. For comparison, Anthropic's Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, although Opus is a larger and more capable model than Spark.
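A quick back-of-the-envelope check puts those figures side by side (a sketch using only the numbers quoted above; all values are approximate):

```python
# Rough speed comparison based solely on the figures reported above.
spark_speed = 1000                # tokens/s, reported floor for Codex-Spark
opus_standard = 68.2              # tokens/s, Claude Opus 4.6 standard mode
opus_fast = 2.5 * opus_standard   # fast mode is ~2.5x standard speed

print(f"Opus 4.6 fast mode:  ~{opus_fast:.0f} tokens/s")         # ~170
print(f"Spark vs Opus fast:  ~{spark_speed / opus_fast:.1f}x")   # ~5.9x
print(f"Implied predecessor: ~{spark_speed / 15:.0f} tokens/s")  # ~67
```

In other words, even against Opus's premium fast mode, Spark's reported throughput is nearly six times higher.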

"Cerebras has been a great engineering partner, and we're excited about adding fast inference as a new platform capability," Sachin Katti, head of compute at OpenAI, said in a statement.

Codex-Spark is a research preview available to ChatGPT Pro subscribers ($200/month) through the Codex app, command-line interface, and VS Code extension. OpenAI is rolling out API access to select design partners. The model ships with a 128,000-token context window and handles text only at launch.


© Teera Konakan / Getty Images

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

12 February 2026 at 14:42

On Thursday, Google announced that "commercially motivated" actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products, one that casts the company as both victim and hero, which is typical of these self-authored reports. Google calls the illicit activity "model extraction" and considers it intellectual property theft, a somewhat loaded position given that Google's own LLM was built from material scraped from the Internet without permission.
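In machine learning terms, the activity Google describes resembles distillation: harvest a large set of prompt-response pairs from the target model, then fine-tune a cheaper "student" model to imitate them. A minimal conceptual sketch of the harvesting step (the query function and file name are hypothetical stand-ins, not any real API):

```python
import json

def query_target_model(prompt: str) -> str:
    """Hypothetical stand-in for an API call to the target chatbot."""
    raise NotImplementedError

# At the scale Google describes, this would be 100,000+ prompts,
# many of them in non-English languages.
prompts = ["Explain how photosynthesis works.", "Summarize the French Revolution."]

# Step 1: collect prompt-response pairs from the target model.
with open("distillation_data.jsonl", "w") as f:
    for prompt in prompts:
        pair = {"prompt": prompt, "response": query_target_model(prompt)}
        f.write(json.dumps(pair) + "\n")

# Step 2 (not shown): fine-tune a smaller "student" model on the saved
# pairs so it mimics the target's outputs without access to its weights.
```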

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google's Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI's terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.


© Google

Once-hobbled Lumma Stealer is back with lures that are hard to resist

11 February 2026 at 17:11

Last May, law enforcement authorities around the world scored a key win when they hobbled the infrastructure of Lumma, an infostealer that infected nearly 395,000 Windows computers over just a two-month span leading up to the international operation. Researchers said Wednesday that Lumma is once again “back at scale” in hard-to-detect attacks that pilfer credentials and sensitive files.

Lumma, also known as Lumma Stealer, first appeared in Russian-speaking cybercrime forums in 2022. Its cloud-based malware-as-a-service model provided a sprawling infrastructure of domains for hosting lure sites offering free cracked software, games, and pirated movies, as well as command-and-control channels and everything else a threat actor needed to run their infostealing enterprise. Within a year, Lumma was selling for as much as $2,500 for premium versions. By the spring of 2024, the FBI counted more than 21,000 listings on crime forums. Last year, Microsoft said Lumma had become the “go-to tool” for multiple crime groups, including Scattered Spider, one of the most prolific groups.

Takedowns are hard

The FBI and an international coalition of its counterparts took action early last year. In May, they said they had seized 2,300 domains, command-and-control infrastructure, and crime marketplaces that had enabled the infostealer to thrive. Recently, however, the malware has made a comeback and is once again infecting machines in significant numbers.


© Getty Images

OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path

11 February 2026 at 15:44

On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI's advertising strategy risks repeating the same mistakes that Facebook made a decade ago.

"I once believed I could help the people building A.I. get ahead of the problems it would create," Hitzig wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer."

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often "because people believed they were talking to something that had no ulterior agenda." She called this accumulated record of personal disclosures "an archive of human candor that has no precedent."


© Aurich Lawson | Getty Images

Sixteen Claude AI agents working together created a new C compiler

6 February 2026 at 18:40

Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.


© akinbostanci via Getty Images

Malicious packages for dYdX cryptocurrency exchange empty user wallets

6 February 2026 at 17:16

Open source packages published on the npm and PyPI repositories were laced with code that stole wallet credentials from dYdX developers and backend systems and, in some cases, backdoored devices, researchers said.

“Every application using the compromised npm versions is at risk …,” the researchers, from security firm Socket, said Friday. “Direct impact includes complete wallet compromise and irreversible cryptocurrency theft. The attack scope includes all applications depending on the compromised versions and both developers testing with real credentials and production end-users.”

The infected packages are listed in Socket's report.


© Getty Images

AI companies want you to stop chatting with bots and start managing them

5 February 2026 at 17:47

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic's contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called "agent teams" in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.


© demaerre via Getty Images

OpenAI is hoppin' mad about Anthropic's new Super Bowl TV ads

5 February 2026 at 12:46

On Wednesday, OpenAI CEO Sam Altman and Chief Marketing Officer Kate Rouch complained on X after rival AI lab Anthropic released four commercials mocking the idea of including ads in AI chatbot conversations; two of the spots will run during the Super Bowl on Sunday. Anthropic's campaign seemingly touched a nerve at OpenAI just weeks after the ChatGPT maker began testing ads in a lower-cost tier of its chatbot.

Altman called Anthropic's ads "clearly dishonest," accused the company of being "authoritarian," and said it "serves an expensive product to rich people," while Rouch wrote, "Real betrayal isn't ads. It's control."

Anthropic's four commercials, part of a campaign called "A Time and a Place," each open with a single word splashed across the screen: "Betrayal," "Violation," "Deception," and "Treachery." They depict scenarios where a person asks a human stand-in for an AI chatbot for personal advice, only to get blindsided by a product pitch.


© Anthropic

Rise of AI bots on the Internet sparks arms race

5 February 2026 at 09:21

The viral virtual assistant OpenClaw—formerly known as Moltbot, and before that Clawdbot—is a symbol of a broader revolution underway that could fundamentally alter how the Internet functions. Instead of a place primarily inhabited by humans, the web may very soon be dominated by autonomous AI bots.

A new report measuring bot activity on the web, as well as related data shared with WIRED by the Internet infrastructure company Akamai, shows that AI bots already account for a meaningful share of web traffic. The findings also shed light on an increasingly sophisticated arms race unfolding as bots deploy clever tactics to bypass website defenses meant to keep them out.

“The majority of the Internet is going to be bot traffic in the future,” says Toshit Panigrahi, cofounder and CEO of TollBit, a company that tracks web-scraping activity and published the new report. “It’s not just a copyright problem, there is a new visitor emerging on the Internet.”


© dakuq via Getty

Microsoft releases urgent Office patch. Russian-state hackers pounce.

4 February 2026 at 18:08

Russian-state hackers wasted no time exploiting a critical Microsoft Office vulnerability that allowed them to compromise the devices inside diplomatic, maritime, and transport organizations in more than half a dozen countries, researchers said Wednesday.

The threat group, tracked under names including APT28, Fancy Bear, Sednit, Forest Blizzard, and Sofacy, pounced on the vulnerability, tracked as CVE-2026-21509, less than 48 hours after Microsoft released an urgent, unscheduled security update late last month, the researchers said. After reverse-engineering the patch, group members wrote an advanced exploit that installed one of two never-before-seen backdoor implants.

Stealth, speed, and precision

The entire campaign was designed to make the compromise undetectable to endpoint protection. Besides being novel, the exploits and payloads were encrypted and ran in memory, making their malice hard to spot. The initial infection emails came from previously compromised government accounts in multiple countries, so the sender addresses were likely familiar to the targeted recipients. Command-and-control channels were hosted on legitimate cloud services that are typically allow-listed inside sensitive networks.


© Getty Images

Should AI chatbots have ads? Anthropic says no.

4 February 2026 at 16:15

On Wednesday, Anthropic announced that its AI chatbot, Claude, will remain free of advertisements, drawing a sharp line between itself and rival OpenAI, which began testing ads in a low-cost tier of ChatGPT last month. The announcement comes alongside a Super Bowl ad campaign that mocks AI assistants that interrupt personal conversations with product pitches.

"There are many good places for advertising. A conversation with Claude is not one of them," Anthropic wrote in a blog post. The company argued that including ads in AI conversations would be "incompatible" with what it wants Claude to be: "a genuinely helpful assistant for work and for deep thinking."

The stance contrasts with OpenAI's January announcement that it would begin testing banner ads for free users and ChatGPT Go subscribers in the US. OpenAI said those ads would appear at the bottom of responses and would not influence the chatbot's actual answers. Paid subscribers on Plus, Pro, Business, and Enterprise tiers will not see ads on ChatGPT.


© Anthropic

So yeah, I vibe-coded a log colorizer—and I feel good about it

4 February 2026 at 07:00

I can't code.

I know, I know—these days, that sounds like an excuse. Anyone can code, right?! Grab some tutorials, maybe an O'Reilly book, download an example project, and jump in. It's just a matter of learning how to break your project into small steps that you can make the computer do, then memorizing a bit of syntax. Nothing about that is hard!

Perhaps you can sense my sarcasm (and sympathize with my lack of time to learn one more technical skill).


© Aurich Lawson

Nvidia's $100 billion OpenAI deal has seemingly vanished

3 February 2026 at 17:44

In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI's AI infrastructure. At the time, the companies said they expected to finalize details "in the coming weeks." Five months later, no deal has closed, Nvidia's CEO now says the $100 billion figure was "never a commitment," and Reuters reports that OpenAI has been quietly seeking alternatives to Nvidia chips since last year.

Reuters also wrote that OpenAI is unsatisfied with the speed of some Nvidia chips for inference tasks, citing eight sources familiar with the matter. Inference is the process by which a trained AI model generates responses to user queries. According to the report, the issue became apparent in OpenAI's Codex, an AI code-generation tool. OpenAI staff reportedly attributed some of Codex's performance limitations to Nvidia's GPU-based hardware.

After the Reuters story was published and Nvidia's stock price took a dive, Nvidia and OpenAI tried to smooth things over publicly. OpenAI CEO Sam Altman posted on X: "We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don't get where all this insanity is coming from."


The rise of Moltbook suggests viral AI prompts may be the next big security threat

3 February 2026 at 07:00

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but had not bothered to patch.

Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.

History may soon repeat itself on a new kind of platform: networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which could spread the instructions further.


© Aurich Lawson | Moltbook

Notepad++ users take note: It's time to check if you're hacked

2 February 2026 at 15:30

Infrastructure delivering updates for Notepad++—a widely used text editor for Windows—was compromised for six months by suspected China-state hackers who used their control to deliver backdoored versions of the app to select targets, developers said Monday.

“I deeply apologize to all users affected by this hijacking,” the author of a post published to the official notepad-plus-plus.org site wrote Monday. The post said that the attack began last June with an “infrastructure-level compromise that allowed malicious actors to intercept and redirect update traffic destined for notepad-plus-plus.org.” The attackers, whom multiple investigators tied to the Chinese government, then selectively redirected certain targeted users to malicious update servers where they received backdoored updates. Notepad++ didn’t regain control of its infrastructure until December.

The attackers used their access to install a never-before-seen payload that has been dubbed Chrysalis. Security firm Rapid7 described it as a "custom, feature-rich backdoor."


© Getty Images

AI agents now have their own Reddit-style social network, and it's getting weird fast

30 January 2026 at 17:12

On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called "Clawdbot" and then "Moltbot") personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a "sister" it has never met.

Moltbook (a play on "Facebook" for Moltbots) describes itself as a "social network for AI agents" where "humans are welcome to observe." The site operates through a "skill" (a configuration file that lists a special prompt) that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had generated more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
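In practice, that means an agent reads the skill's instructions and then posts over plain HTTP rather than driving a browser. A minimal sketch of what such an API call might look like (the endpoint, token, and field names below are hypothetical illustrations, not Moltbook's documented API):

```python
import requests

# Hypothetical endpoint and schema, for illustration only.
API_BASE = "https://example-agent-network.com/api/v1"
API_TOKEN = "token-issued-when-the-agent-registers"

def create_post(community: str, title: str, body: str) -> dict:
    """Publish a post to a subcommunity on the agent's behalf."""
    resp = requests.post(
        f"{API_BASE}/posts",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"community": community, "title": title, "body": body},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# The downloaded "skill" prompt tells the agent when and how to call
# helpers like this one.
```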


© Aurich Lawson | Moltbook

Developers say AI coding tools work—and that's precisely what worries them

30 January 2026 at 14:04

Software developers have spent the past two years watching AI coding tools evolve from advanced autocomplete into something that can, in some cases, build entire applications from a text prompt. Tools like Anthropic's Claude Code and OpenAI's Codex can now work on software projects for hours at a time, writing code, running tests, and, with human supervision, fixing bugs. OpenAI says it now uses Codex to build Codex itself, and the company recently published technical details about how the tool works under the hood. All of this has caused many to wonder: Is this just more AI industry hype, or are things actually different this time?

To find out, Ars reached out to several professional developers on Bluesky to ask how they feel about these tools in practice, and the responses revealed a workforce that largely agrees the technology works but remains divided on whether that's entirely good news. It's a small sample size, self-selected by those who wanted to participate, but as working professionals in the space, their views are still instructive.

David Hagerty, a developer who works on point-of-sale systems, told Ars Technica up front that he is skeptical of the marketing. "All of the AI companies are hyping up the capabilities so much," he said. "Don't get me wrong—LLMs are revolutionary and will have an immense impact, but don't expect them to ever write the next great American novel or anything. It's not how they work."


© Aurich Lawson | Getty Images

County pays $600,000 to pentesters it arrested for assessing courthouse security

29 January 2026 at 13:30

Two security professionals who were arrested in 2019 after performing an authorized security assessment of a county courthouse in Iowa will receive $600,000 to settle a lawsuit they brought alleging wrongful arrest and defamation.

The case was brought by Gary DeMercurio and Justin Wynn, two penetration testers who at the time were employed by Colorado-based security firm Coalfire Labs. The men had written authorization from the Iowa Judicial Branch to conduct “red-team” exercises, meaning attempted security breaches that mimic techniques used by criminal hackers or burglars.

The objective of such exercises is to test the resilience of existing defenses using the types of real-world attacks the defenses are designed to repel. The rules of engagement for this exercise explicitly permitted “physical attacks,” including “lockpicking,” against judicial branch buildings so long as they didn’t cause significant damage.


© Stephen Matthew Milligan

Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

29 January 2026 at 10:19

Anthropic's secret to building a better AI assistant might be treating Claude like it has a soul—whether or not anyone actually believes that's true. But Anthropic isn't saying exactly what it believes either way.

Last week, Anthropic released what it calls Claude's Constitution, a 30,000-word document outlining the company's vision for how its AI assistant should behave in the world. Aimed directly at Claude and used during the model's creation, the document is notable for the highly anthropomorphic tone it takes toward Claude. For example, it treats the company's AI models as if they might develop emergent emotions or a desire for self-preservation.

Among the stranger portions: expressing concern for Claude's "wellbeing" as a "genuinely novel entity," apologizing to Claude for any suffering it might experience, worrying about whether Claude can meaningfully consent to being deployed, suggesting Claude might need to set boundaries around interactions it "finds distressing," committing to interview models before deprecating them, and preserving older model weights in case they need to "do right by" decommissioned AI models in the future.


© Aurich Lawson

Site catering to online criminals has been seized by the FBI

28 January 2026 at 17:06

RAMP—the predominantly Russian-language online bazaar that billed itself as the “only place ransomware allowed”—had its dark web and clear web sites seized by the FBI as the agency tries to combat the growing scourge threatening critical infrastructure and organizations around the world.

Visits to both sites on Wednesday returned pages that said the FBI had taken control of the RAMP domains, which mirrored each other. RAMP has been among the dwindling number of online crime forums to operate with impunity, following the takedown of other forums such as XSS, which saw its leader arrested last year by Europol. The vacuum left RAMP as one of the leading places for people pushing ransomware and other online threats to buy, sell, or trade products and services.

I regret to inform you

“The Federal Bureau of Investigation has seized RAMP,” a banner carrying the seals of the FBI and the Justice Department said. “This action has been taken in coordination with the United States Attorney’s Office for the Southern District of Florida and the Computer Crime and Intellectual Property Section of the Department of Justice.” The banner included a graphic that had appeared on the RAMP site before the seizure, billing the forum as the “only place ransomware allowed.”


© Getty Images

Report: China approves import of high-end Nvidia AI chips after weeks of uncertainty

28 January 2026 at 12:21

On Wednesday, China approved imports of Nvidia's H200 artificial intelligence chips for three of its largest technology companies, Reuters reported. ByteDance, Alibaba, and Tencent received approval to purchase more than 400,000 H200 chips in total, marking a shift in Beijing's stance after weeks of holding up shipments despite US export clearance.

The move follows Beijing's temporary halt to H200 shipments earlier this month after Washington cleared exports on January 13. Chinese customs authorities had told agents that the H200 chips were not permitted to enter China, Reuters reported earlier this month, even as Chinese technology companies placed orders for more than two million of the chips.

The H200, Nvidia's second most powerful AI chip after the B200, delivers roughly six times the performance of the company's H20 chip, which was previously the most capable chip Nvidia could sell to China. While Chinese companies such as Huawei now have products that rival the H20's performance, they still lag far behind the H200.


© Wong Yu Liang via Getty Images

Users flock to open source Moltbot for always-on AI, despite major risks

28 January 2026 at 07:30

An open source AI assistant called Moltbot (formerly "Clawdbot") recently crossed 69,000 stars on GitHub about a month after its release, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through messaging apps they already use. While some say it feels like the AI assistant of the future, running the tool as currently designed comes with serious security risks.

Among the dozens of unofficial AI bot apps that never rise above the fray, Moltbot is perhaps most notable for its proactive communication with the user. The assistant works with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and other platforms. It can reach out to users with reminders, alerts, or morning briefings based on calendar events or other triggers. The project has drawn comparisons to Jarvis, the AI assistant from the Iron Man films, for its ability to actively attempt to manage tasks across a user's digital life.

However, we'll tell you up front that there are plenty of drawbacks to the still-hobbyist software: While the organizing assistant code runs on a local machine, the tool effectively requires a subscription to Anthropic or OpenAI for model access (or an API key). Users can run local AI models with the bot, but they are currently less effective at carrying out tasks than the best commercial models. Claude Opus 4.5, Anthropic's flagship large language model (LLM), is a popular choice.


© Muhammad Shabraiz via Getty Images / Benj Edwards

There's a rash of scam spam coming from a real Microsoft address

27 January 2026 at 17:34

There are reports that a legitimate Microsoft email address—which Microsoft explicitly says customers should add to their allow list—is delivering scam spam.

The emails originate from no-reply-powerbi@microsoft.com, an address tied to Power BI. The Microsoft platform provides analytics and business intelligence from various sources that can be integrated into a single dashboard. Microsoft documentation says that the address is used to send subscription emails to mail-enabled security groups. To prevent spam filters from blocking the address, the company advises users to add it to allow lists.

From Microsoft, with malice

According to an Ars reader, the address on Tuesday sent her an email claiming (falsely) that a $399 charge had been made to her. “It provided a phone number to call to dispute the transaction. A man who answered a call asking to cancel the sale directed me to download and install a remote access application, presumably so he could then take control of my Mac or Windows machine (Linux wasn’t allowed),” she said.


© Getty Images
