
OpenAI is hoppin' mad about Anthropic's new Super Bowl TV ads

5 February 2026 at 12:46

On Wednesday, OpenAI CEO Sam Altman and Chief Marketing Officer Kate Rouch complained on X after rival AI lab Anthropic released four commercials mocking the idea of including ads in AI chatbot conversations, two of which will run during the Super Bowl on Sunday. Anthropic's campaign seemingly touched a nerve at OpenAI just weeks after the ChatGPT maker began testing ads in a lower-cost tier of its chatbot.

Altman called Anthropic's ads "clearly dishonest," accused the company of being "authoritarian," and said it "serves an expensive product to rich people," while Rouch wrote, "Real betrayal isn't ads. It's control."

Anthropic's four commercials, part of a campaign called "A Time and a Place," each open with a single word splashed across the screen: "Betrayal," "Violation," "Deception," and "Treachery." They depict scenarios where a person asks a human stand-in for an AI chatbot for personal advice, only to get blindsided by a product pitch.


© Anthropic

Should AI chatbots have ads? Anthropic says no.

4 February 2026 at 16:15

On Wednesday, Anthropic announced that its AI chatbot, Claude, will remain free of advertisements, drawing a sharp line between itself and rival OpenAI, which began testing ads in a low-cost tier of ChatGPT last month. The announcement comes alongside a Super Bowl ad campaign that mocks AI assistants that interrupt personal conversations with product pitches.

"There are many good places for advertising. A conversation with Claude is not one of them," Anthropic wrote in a blog post. The company argued that including ads in AI conversations would be "incompatible" with what it wants Claude to be: "a genuinely helpful assistant for work and for deep thinking."

The stance contrasts with OpenAI's January announcement that it would begin testing banner ads for free users and ChatGPT Go subscribers in the US. OpenAI said those ads would appear at the bottom of responses and would not influence the chatbot's actual answers. Paid subscribers on Plus, Pro, Business, and Enterprise tiers will not see ads on ChatGPT.


© Anthropic

So yeah, I vibe-coded a log colorizer—and I feel good about it

4 February 2026 at 07:00

I can't code.

I know, I know—these days, that sounds like an excuse. Anyone can code, right?! Grab some tutorials, maybe an O'Reilly book, download an example project, and jump in. It's just a matter of learning how to break your project into small steps that you can make the computer do, then memorizing a bit of syntax. Nothing about that is hard!

Perhaps you can sense my sarcasm (and sympathize with my lack of time to learn one more technical skill).


© Aurich Lawson

Developers say AI coding tools work—and that's precisely what worries them

30 January 2026 at 14:04

Software developers have spent the past two years watching AI coding tools evolve from advanced autocomplete into something that can, in some cases, build entire applications from a text prompt. Tools like Anthropic's Claude Code and OpenAI's Codex can now work on software projects for hours at a time, writing code, running tests, and, with human supervision, fixing bugs. OpenAI says it now uses Codex to build Codex itself, and the company recently published technical details about how the tool works under the hood. It has caused many to wonder: Is this just more AI industry hype, or are things actually different this time?

To find out, Ars reached out to several professional developers on Bluesky to ask how they feel about these tools in practice, and the responses revealed a workforce that largely agrees the technology works but remains divided on whether that's entirely good news. It's a small, self-selected sample of developers who chose to participate, but their views are still instructive, coming from working professionals in the space.

David Hagerty, a developer who works on point-of-sale systems, told Ars Technica up front that he is skeptical of the marketing. "All of the AI companies are hyping up the capabilities so much," he said. "Don't get me wrong—LLMs are revolutionary and will have an immense impact, but don't expect them to ever write the next great American novel or anything. It's not how they work."


© Aurich Lawson | Getty Images

How often do AI chatbots lead users down a harmful path?

29 January 2026 at 17:05

At this point, we've all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information. Despite the prevalence of these stories, though, it's hard to know just how often users are being manipulated. Are these tales of AI harms anecdotal outliers or signs of a frighteningly common problem?

Anthropic took a stab at answering that question this week, releasing a paper studying the potential for what it calls "disempowering patterns" across 1.5 million anonymized real-world conversations with its Claude AI model. While the results show that these kinds of manipulative patterns are relatively rare as a percentage of all AI conversations, they still represent a potentially large problem on an absolute basis.

A rare but growing problem

In the newly published paper "Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage," researchers from Anthropic and the University of Toronto try to quantify the potential for a specific set of "user disempowering" harms by identifying three primary ways that a chatbot can negatively impact a user's thoughts or actions:


© Getty Images

Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

29 January 2026 at 10:19

Anthropic's secret to building a better AI assistant might be treating Claude like it has a soul—whether or not anyone actually believes that's true. But Anthropic isn't saying exactly what it believes either way.

Last week, Anthropic released what it calls Claude's Constitution, a 30,000-word document outlining the company's vision for how its AI assistant should behave in the world. Aimed directly at Claude and used during the model's creation, the document is notable for the highly anthropomorphic tone it takes toward Claude. For example, it treats the company's AI models as if they might develop emergent emotions or a desire for self-preservation.

Among the stranger portions: expressing concern for Claude's "wellbeing" as a "genuinely novel entity," apologizing to Claude for any suffering it might experience, worrying about whether Claude can meaningfully consent to being deployed, suggesting Claude might need to set boundaries around interactions it "finds distressing," committing to interview models before deprecating them, and preserving older model weights in case they need to "do right by" decommissioned AI models in the future.


© Aurich Lawson

Attackers Targeting LLMs in Widespread Campaign

12 January 2026 at 15:20


Threat actors are targeting LLMs in a widespread reconnaissance campaign that could be the first step in cyberattacks on exposed AI models, according to security researchers. The attackers scanned for every major large language model (LLM) family, including OpenAI-compatible and Google Gemini API formats, looking for “misconfigured proxy servers that might leak access to commercial APIs,” according to research from GreyNoise, whose honeypots picked up 80,000 of the enumeration requests from the threat actors. “Threat actors don't map infrastructure at this scale without plans to use that map,” the researchers said. “If you're running exposed LLM endpoints, you're likely already on someone's list.”

LLM Reconnaissance Targets ‘Every Major Model Family’

The researchers said the threat actors were probing “every major model family,” including:
  • OpenAI (GPT-4o and variants)
  • Anthropic (Claude Sonnet, Opus, Haiku)
  • Meta (Llama 3.x)
  • DeepSeek (DeepSeek-R1)
  • Google (Gemini)
  • Mistral
  • Alibaba (Qwen)
  • xAI (Grok)
The campaign began on December 28, when two IPs "launched a methodical probe of 73+ LLM model endpoints," the researchers said. In a span of 11 days, they generated 80,469 sessions, "systematic reconnaissance hunting for misconfigured proxy servers that might leak access to commercial APIs." Test queries were "deliberately innocuous with the likely goal to fingerprint which model actually responds without triggering security alerts."

[Image: Test queries used by attackers targeting LLMs (GreyNoise)]

The two IPs behind the reconnaissance campaign were 45.88.186.70 (AS210558, 1337 Services GmbH) and 204.76.203.125 (AS51396, Pfcloud UG). GreyNoise said both IPs have "histories of CVE exploitation," including attacks on the "React2Shell" vulnerability CVE-2025-55182, the TP-Link Archer vulnerability CVE-2023-1389, and more than 200 other vulnerabilities.

The researchers concluded that the campaign was a professional threat actor conducting reconnaissance operations to discover cyberattack targets. "The infrastructure overlap with established CVE scanning operations suggests this enumeration feeds into a larger exploitation pipeline," the researchers said. "They're building target lists."
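To make the fingerprinting pattern concrete, here is a minimal sketch of what one of these enumeration probes looks like against an OpenAI-compatible endpoint, the kind of traffic defenders should expect to see in proxy logs (or can replay against their own endpoints to check exposure). The base URL and model list are illustrative placeholders, not details taken from the GreyNoise report.

    # Minimal sketch of an enumeration probe against an OpenAI-compatible proxy.
    # The base URL and candidate model names below are illustrative placeholders.
    import requests

    BASE_URL = "http://proxy.example.internal:8080"  # hypothetical exposed proxy
    CANDIDATE_MODELS = ["gpt-4o", "claude-3-haiku", "llama-3.1-70b", "deepseek-r1"]

    for model in CANDIDATE_MODELS:
        resp = requests.post(
            f"{BASE_URL}/v1/chat/completions",
            json={
                "model": model,
                # A deliberately innocuous question, used only to see whether a model answers
                "messages": [{"role": "user", "content": "How many states are there in the United States?"}],
            },
            timeout=10,
        )
        # A successful answer tells the scanner this proxy will relay requests for that model.
        print(model, resp.status_code)

A proxy that faithfully answers for several commercial model names is exactly the kind of misconfiguration the scanners are hunting for.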

Second LLM Campaign Targets SSRF Vulnerabilities

The researchers also detected a second campaign targeting server-side request forgery (SSRF) vulnerabilities, which "force your server to make outbound connections to attacker-controlled infrastructure." The attackers targeted the honeypot infrastructure's model pull functionality by injecting malicious registry URLs to force servers to make HTTP requests to the attacker's infrastructure, and they also targeted Twilio SMS webhook integrations by manipulating MediaUrl parameters to trigger outbound connections. The attackers used ProjectDiscovery's Out-of-band Application Security Testing (OAST) infrastructure to confirm successful SSRF exploitation through callback validation.

A single JA4H signature appeared in almost all of the attacks, "pointing to shared automation tooling—likely Nuclei." The 62 source IPs were spread across 27 countries, "but consistent fingerprints indicate VPS-based infrastructure, not a botnet."

The researchers concluded that the second campaign was likely security researchers or bug bounty hunters, but they added that "the scale and Christmas timing suggest grey-hat operations pushing boundaries." They noted that the two campaigns "reveal how threat actors are systematically mapping the expanding surface area of AI deployments."
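Because the model pull abuse hinges on the server fetching whatever registry URL it is handed, a common mitigation is to validate user-supplied registry URLs against an allowlist before making any outbound request. The sketch below illustrates the idea; the allowed hosts and the validate_registry_url() helper are assumptions for illustration, not something prescribed by the researchers.

    # Minimal sketch of an allowlist check for user-supplied model registry URLs,
    # one way to blunt the SSRF pattern described above. The allowed hosts and the
    # validate_registry_url() helper are illustrative, not from the GreyNoise report.
    from urllib.parse import urlparse

    ALLOWED_REGISTRY_HOSTS = {"registry.example.com", "models.internal.example.net"}

    def validate_registry_url(url: str) -> bool:
        parsed = urlparse(url)
        # Accept only plain HTTPS to a known registry host, with no smuggled
        # credentials and no unusual ports.
        if parsed.scheme != "https":
            return False
        if parsed.hostname not in ALLOWED_REGISTRY_HOSTS:
            return False
        if parsed.username or parsed.password or parsed.port not in (None, 443):
            return False
        return True

    # A pull pointed at attacker-controlled infrastructure is refused before any
    # outbound connection is made.
    assert validate_registry_url("https://registry.example.com/library/some-model")
    assert not validate_registry_url("http://203.0.113.7:8000/evil")

Combined with egress filtering, this keeps the server from ever calling back to an attacker's OAST endpoint.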

LLM Security Recommendations

The researchers recommended that organizations:
  • "Lock down model pulls ... to accept models only from trusted registries. Egress filtering prevents SSRF callbacks from reaching attacker infrastructure."
  • Detect enumeration patterns and "alert on rapid-fire requests hitting multiple model endpoints," watching for fingerprinting queries such as "How many states are there in the United States?" and "How many letter r..." (a minimal detection sketch follows this list).
  • Block OAST at DNS to "cut off the callback channel that confirms successful exploitation."
  • Rate-limit suspicious ASNs, noting that AS152194, AS210558, and AS51396 "all appeared prominently in attack traffic."
  • Monitor JA4 fingerprints.
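The "rapid-fire requests hitting multiple model endpoints" advice lends itself to a simple log check. Below is a minimal detection sketch that flags source IPs touching many distinct model endpoints within a short window; the log format, window size, and threshold are assumptions for illustration rather than part of GreyNoise's guidance.

    # Minimal sketch of enumeration detection: flag source IPs that probe many
    # distinct model endpoints within a short time window. The input format,
    # window size, and threshold are illustrative assumptions.
    from collections import defaultdict
    from datetime import timedelta

    WINDOW = timedelta(minutes=5)
    DISTINCT_MODEL_THRESHOLD = 10  # distinct model endpoints probed within the window

    def find_enumeration_suspects(log_entries):
        """log_entries: iterable of (timestamp: datetime, source_ip: str, model_endpoint: str)."""
        by_ip = defaultdict(list)
        for ts, ip, model in log_entries:
            by_ip[ip].append((ts, model))

        suspects = []
        for ip, events in by_ip.items():
            events.sort()  # order each IP's requests by timestamp
            start = 0
            for end in range(len(events)):
                # Shrink the window from the left until it spans at most WINDOW.
                while events[end][0] - events[start][0] > WINDOW:
                    start += 1
                distinct_models = {model for _, model in events[start:end + 1]}
                if len(distinct_models) >= DISTINCT_MODEL_THRESHOLD:
                    suspects.append(ip)
                    break
        return suspects

Flagged IPs can then be cross-checked against the ASNs and JA4 fingerprints the researchers call out.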

GenAI Is Everywhere—Here’s How to Stay Cyber-Ready

21 November 2025 at 02:56


By Kannan Srinivasan, Business Head – Cybersecurity, Happiest Minds Technologies

Cyber resilience means being prepared for anything that might disrupt your systems. It’s about knowing how to get ready, prevent problems, recover quickly, and adapt when a cyber incident occurs.

Generative AI, or GenAI, has become a big part of how many organizations work today. About 70% of industries are already using it, and over 95% of US companies have adopted it in some form. GenAI is now supporting nearly every area, including IT, finance, legal, and marketing. It even helps doctors make faster decisions, students learn more effectively, and shoppers find better deals.

But what happens if GenAI breaks, gets messed up, or stops working? Once AI is part of your business, you need a stronger plan to stay safe and steady. Here are some simple ways organizations can build their cyber resilience in this AI-driven world.

A Practical Guide to Cyber Resilience in the GenAI Era

  1. Get Leadership and the Board on Board

Leading the way in cyber resilience starts with your leaders. Keep your board and senior managers in the loop about the risks that come with GenAI. Get their support, make sure it lines up with your business goals, and secure enough budget for safety measures and training. Make talking about cyber safety a regular part of your meetings.
  2. Know Where GenAI Is Being Used

Make a list of all departments and processes using GenAI. Note which models you're using, who manages them, and what they’re used for. Then, do a quick risk check—what could happen if a system goes down? This helps you understand the risks and prepare better backup plans.
  3. Check for Weak Spots Regularly

Follow trusted guidelines like OWASP for testing your GenAI systems. Regular checks can spot issues like data leaks or misuse early. Fix problems quickly to stay ahead of potential risks.
  4. Improve Threat Detection and Response

Use security tools that keep an eye on your GenAI systems all the time. These tools should spot unusual activity, prevent data loss, and help investigate when something goes wrong. Make sure your cybersecurity team is trained and ready to act fast.
  5. Use More Than One AI Model

Don’t rely on just one AI tool. Having multiple models from different providers helps keep things running smoothly if one faces problems. For example, if you’re using OpenAI, consider adding options like Anthropic Claude or Google Gemini as backups. Decide which one is your main and which ones are backups (a minimal fallback sketch follows this list).
  6. Update Your Incident Plans

Review and update your plans for dealing with incidents to include GenAI, making sure they meet new rules like the EU AI Act. Once done, test them with drills so everyone knows what to do in a real emergency.
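As a concrete illustration of point 5, here is a minimal sketch of a primary/backup setup, assuming the official OpenAI and Anthropic Python SDKs with API keys already configured in the environment; the model names are illustrative placeholders rather than a specific recommendation.

    # Minimal sketch of the primary/backup model idea from point 5, assuming the
    # official OpenAI and Anthropic Python SDKs are installed and API keys are
    # set in the environment. Model names are illustrative placeholders.
    import anthropic
    from openai import OpenAI

    openai_client = OpenAI()
    anthropic_client = anthropic.Anthropic()

    def ask_with_fallback(prompt: str) -> str:
        try:
            # Primary provider
            resp = openai_client.chat.completions.create(
                model="gpt-4o",  # illustrative model name
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception:
            # Backup provider keeps the workflow running if the primary is unavailable.
            resp = anthropic_client.messages.create(
                model="claude-sonnet-4-5",  # illustrative model name
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.content[0].text

The same pattern extends to a third backup such as Google Gemini; the important part is deciding up front which provider is primary and exercising the fallback path before you need it.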

Conclusion

Cyber resilience in the GenAI era is a continuous process. As AI grows, the need for stronger governance, smarter controls, and proactive planning grows with it. Organizations that stay aware, adaptable, and consistent in their approach will continue to build trust and reliability. GenAI opens doors to efficiency and creativity, and resilience ensures that progress stays uninterrupted. The future belongs to those who stay ready, informed, and confident in how they manage technology.