
Yet another co-founder departs Elon Musk's xAI

xAI co-founder Tony Wu abruptly announced his resignation from the company late Monday night, becoming the latest in a string of senior executives to leave the Grok maker in recent months.

In a post on social media, Wu expressed warm feelings for his time at xAI, but said it was "time for my next chapter." The current era is one where "a small team armed with AIs can move mountains and redefine what's possible," he wrote.

The mention of what "a small team" can do could hint at a potential reason for Wu's departure. xAI reportedly had 1,200 employees as of March 2025, a number that included both AI engineers and staff focused more on the X social network. That number also included 900 employees who served solely as "AI tutors," though roughly 500 of those were reportedly laid off in September.


French Police Raid X Offices as Grok Investigations Grow

French police raided the offices of the X social media platform today as European investigations grew into nonconsensual sexual deepfakes and potential child sexual abuse material (CSAM) generated by X’s Grok AI chatbot. A statement (in French) from the Paris prosecutor’s office suggested that Grok’s dissemination of Holocaust denial content may also figure in the Grok investigations. X owner Elon Musk and former CEO Linda Yaccarino were issued “summonses for voluntary interviews” on April 20, as were X employees the same week.

Europol, which is assisting in the investigation, said in a statement that the investigation is “in relation to the proliferation of illegal content, notably the production of deepfakes, child sexual abuse material, and content contesting crimes against humanity. ... The investigation concerns a range of suspected criminal offences linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity.”

The French action comes amid a growing UK probe into Grok’s use of nonconsensual sexual imagery, and last month the EU launched its own investigation into the allegations. Meanwhile, a new Reuters report suggests that X’s attempts to curb Grok’s abuses are failing. “While Grok’s public X account is no longer producing the same flood of sexualized imagery, the Grok chatbot continues to do so when prompted, even after being warned that the subjects were vulnerable or would be humiliated by the pictures,” Reuters wrote in a report published today.

French Prosecutor Calls X Investigation ‘Constructive’

The French prosecutor’s statement said the investigation “is, at this stage, part of a constructive approach, with the objective of ultimately guaranteeing the X platform's compliance with French laws, insofar as it operates in French territory” (translated from the French). The investigation initially began in January 2025, the statement said, and “was broadened following other reports denouncing the functioning of Grok on the X platform, which led to the dissemination of Holocaust denial content and sexually explicit deepfakes.” The investigation concerns seven “criminal offenses,” according to the Paris prosecutor’s statement:
  • Complicity in the possession of images of minors of a child pornography nature
  • Complicity in the dissemination, offering, or making available of images of minors of a child pornography nature by an organized group
  • Violation of the right to one’s image (sexual deepfakes)
  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Tampering with the operation of an automated data processing system by an organized group
  • Administration of an illicit online platform by an organized group
The Paris prosecutor’s office deleted its X account after announcing the investigation.

Grok Investigations in the UK Grow

In the UK, the Information Commissioner’s Office (ICO) announced that it was launching an investigation into Grok abuses on the same day that Ofcom, the UK communications regulator, said its own authority to investigate chatbots may be limited.

William Malcolm, ICO's Executive Director for Regulatory Risk & Innovation, said in a statement: “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.” “Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights,” Malcolm added. “Where we find obligations have not been met, we will take action to protect the public.”

Ilia Kolochenko, CEO at ImmuniWeb and a cybersecurity law attorney, said in a statement: “The patience of regulators is not infinite: similar investigations are already pending even in California, let alone the EU. Moreover, some countries have already temporarily restricted or threatened to restrict access to X’s AI chatbot and more bans are probably coming very soon.” “Hopefully X will take these alarming signals seriously and urgently implement the necessary security guardrails to prevent misuse and abuse of its AI technology,” Kolochenko added. “Otherwise, X may simply disappear as a company under the snowballing pressure from the authorities and a looming avalanche of individual lawsuits.”

X office raided in France's Grok probe; Elon Musk summoned for questioning

French law enforcement authorities today raided X's Paris office and summoned Elon Musk for questioning as part of an investigation into illegal content. The Paris public prosecutor’s office said the yearlong probe was recently expanded because the Grok chatbot was disseminating Holocaust-denial claims and sexually explicit deepfakes.

Europol, which is assisting French authorities, said today the "investigation concerns a range of suspected criminal offenses linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity." Europol's cybercrime center provided "an analyst on the ground in Paris to assist national authorities." The French Gendarmerie’s cybercrime unit is also aiding the investigation.

French authorities want to question Musk and former X CEO Linda Yaccarino, who quit last year amid a controversy over Grok's praise of Hitler. Prosecutors summoned Musk and Yaccarino for interviews in April 2026, though the interviews are being described as voluntary.


European Commission Launches Fresh DSA Investigation Into X Over Grok AI Risks


The European Commission has launched a new formal investigation into X under the Digital Services Act (DSA), intensifying regulatory scrutiny over the platform’s use of its AI chatbot, Grok. Announced on January 26, the move follows mounting concerns that Grok’s image-generation and recommender functionalities may have exposed users in the EU to illegal and harmful content, including manipulated sexually explicit images and material that could amount to child sexual abuse material (CSAM). This latest European Commission investigation into X runs in parallel with an extension of an ongoing probe first opened in December 2023. The Commission will now examine whether X properly assessed and mitigated the systemic risks associated with deploying Grok’s functionalities on its platform in the EU, as required under the DSA.

Focus on Grok AI and Illegal Content Risks

At the core of the new proceedings is whether X fulfilled its obligations to assess and reduce risks stemming from Grok AI. The Commission said the risks appear to have already materialised, exposing EU citizens to serious harm. Regulators will investigate whether X:
  • Diligently assessed and mitigated systemic risks, including the dissemination of illegal content, negative effects related to gender-based violence, and serious consequences for users’ physical and mental well-being.
  • Conducted and submitted an ad hoc risk assessment report to the Commission for Grok’s functionalities before deploying them, given their critical impact on X’s overall risk profile.
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the Digital Services Act. The Commission stressed that the opening of formal proceedings does not prejudge the outcome but confirmed that an in-depth investigation will now proceed as a matter of priority.

Recommender Systems Also Under Expanded Scrutiny

In a related step, the European Commission has extended its December 2023 investigation into X’s recommender systems. This expanded review will assess whether X properly evaluated and mitigated all systemic risks linked to how its algorithms promote content, including the impact of its recently announced switch to a Grok-based recommender system.

As a designated very large online platform (VLOP) under the DSA, X is legally required to identify, assess, and reduce systemic risks arising from its services in the EU. These risks include the spread of illegal content and threats to fundamental rights, particularly those affecting minors.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, underlined the seriousness of the case in a statement: “Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service.”

Earlier this month, a European Commission spokesperson had also addressed the issue while speaking to journalists in Brussels, calling the matter urgent and unacceptable. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

International Pressure Builds Around Grok AI

The investigation comes against a backdrop of rising regulatory pressure worldwide over Grok AI’s image-generation capabilities. On January 16, X announced changes to Grok aimed at preventing the creation of nonconsensual sexualised images, including content that critics say amounts to CSAM. The update followed weeks of scrutiny and reports of explicit material generated using Grok.

In the United States, California Attorney General Rob Bonta confirmed on January 14 that his office had opened an investigation into xAI, the company behind Grok, over reports describing the depiction of women and children in explicit situations. Bonta called the reports “shocking” and urged immediate action, saying his office is examining whether the company may have violated the law.

U.S. lawmakers have also stepped in. On January 12, three senators urged Apple and Google to remove X and Grok from their app stores, arguing that the chatbot had repeatedly violated app store policies related to abusive and exploitative content.

Next Steps in the European Commission Investigation Into X

As part of the DSA enforcement process, the Commission will continue gathering evidence by sending additional requests for information, conducting interviews, or carrying out inspections. Interim measures could be imposed if X fails to make meaningful adjustments to its service. The Commission is also empowered to adopt a non-compliance decision or accept commitments from X to remedy the issues under investigation. Notably, the opening of formal proceedings shifts enforcement authority to the Commission, relieving national Digital Services Coordinators of their supervisory powers for the suspected infringements.

The investigation complements earlier DSA proceedings that resulted in a €120 million fine against X in December 2025 for deceptive design, lack of advertising transparency, and insufficient data access for researchers. With Grok AI now firmly in regulators’ sights, the outcome of this probe could have major implications for how AI-driven features are governed on large online platforms across the EU.

Attackers Targeting LLMs in Widespread Campaign


Threat actors are targeting LLMs in a widespread reconnaissance campaign that could be the first step in cyberattacks on exposed AI models, according to security researchers. The attackers scanned for every major large language model (LLM) family, including OpenAI-compatible and Google Gemini API formats, looking for “misconfigured proxy servers that might leak access to commercial APIs,” according to research from GreyNoise, whose honeypots picked up 80,000 of the enumeration requests from the threat actors. “Threat actors don't map infrastructure at this scale without plans to use that map,” the researchers said. “If you're running exposed LLM endpoints, you're likely already on someone's list.”

LLM Reconnaissance Targets ‘Every Major Model Family’

The researchers said the threat actors were probing “every major model family,” including:
  • OpenAI (GPT-4o and variants)
  • Anthropic (Claude Sonnet, Opus, Haiku)
  • Meta (Llama 3.x)
  • DeepSeek (DeepSeek-R1)
  • Google (Gemini)
  • Mistral
  • Alibaba (Qwen)
  • xAI (Grok)
The campaign began on December 28, when two IPs “launched a methodical probe of 73+ LLM model endpoints,” the researchers said. In a span of 11 days, they generated 80,469 sessions, “systematic reconnaissance hunting for misconfigured proxy servers that might leak access to commercial APIs.” Test queries were “deliberately innocuous with the likely goal to fingerprint which model actually responds without triggering security alerts.”

[Image: test queries used by attackers targeting LLMs (GreyNoise)]

The two IPs behind the reconnaissance campaign were: 45.88.186.70 (AS210558, 1337 Services GmbH) and 204.76.203.125 (AS51396, Pfcloud UG). GreyNoise said both IPs have “histories of CVE exploitation,” including attacks on the “React2Shell” vulnerability CVE-2025-55182, TP-Link Archer vulnerability CVE-2023-1389, and more than 200 other vulnerabilities.

The researchers concluded that the campaign was a professional threat actor conducting reconnaissance operations to discover cyberattack targets. “The infrastructure overlap with established CVE scanning operations suggests this enumeration feeds into a larger exploitation pipeline,” the researchers said. “They're building target lists.”
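To make the enumeration pattern concrete, here is a hedged sketch of what one of these probes plausibly looks like from the defender's side. This is a reconstruction based on GreyNoise's description, not the actual tooling: the /v1/models and /v1/chat/completions paths follow the standard OpenAI-compatible API layout, and the fingerprinting prompt is one of the innocuous queries quoted in the report.

```python
# Illustrative sketch (not GreyNoise's observed tooling) of a minimal probe
# against an OpenAI-compatible endpoint.
import requests

def probe_endpoint(base_url: str, timeout: float = 5.0) -> dict:
    """Check whether a host exposes an OpenAI-compatible LLM API."""
    result = {"base_url": base_url, "models": None, "responds": False}
    try:
        # Step 1: list models via the standard OpenAI-compatible route.
        r = requests.get(f"{base_url}/v1/models", timeout=timeout)
        if r.ok:
            result["models"] = [m.get("id") for m in r.json().get("data", [])]

        # Step 2: send a deliberately innocuous prompt to fingerprint which
        # model actually answers, without tripping content-security alerts.
        model = (result["models"] or ["gpt-4o"])[0]
        r = requests.post(
            f"{base_url}/v1/chat/completions",
            json={
                "model": model,
                "messages": [{"role": "user", "content":
                              "How many states are there in the United States?"}],
                "max_tokens": 16,
            },
            timeout=timeout,
        )
        result["responds"] = r.ok
    except (requests.RequestException, ValueError):
        pass  # closed port, TLS failure, or non-JSON body: treat as not exposed
    return result
```

Requests shaped like this, fanned out across dozens of model names in quick succession, are the signature defenders should expect to see in their logs.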

Second LLM Campaign Targets SSRF Vulnerabilities

The researchers also detected a second campaign targeting server-side request forgery (SSRF) vulnerabilities, which “force your server to make outbound connections to attacker-controlled infrastructure.” The attackers targeted the honeypot infrastructure’s model-pull functionality by injecting malicious registry URLs to force servers to make HTTP requests to the attacker’s infrastructure, and they also targeted Twilio SMS webhook integrations by manipulating MediaUrl parameters to trigger outbound connections. The attackers used ProjectDiscovery's Out-of-band Application Security Testing (OAST) infrastructure to confirm successful SSRF exploitation through callback validation.

A single JA4H signature appeared in almost all of the attacks, “pointing to shared automation tooling—likely Nuclei.” The 62 source IPs were spread across 27 countries, “but consistent fingerprints indicate VPS-based infrastructure, not a botnet.” The researchers concluded that the second campaign was likely security researchers or bug bounty hunters, but they added that “the scale and Christmas timing suggest grey-hat operations pushing boundaries.” The researchers noted that the two campaigns “reveal how threat actors are systematically mapping the expanding surface area of AI deployments.”
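A common mitigation for this class of SSRF is a strict allowlist on where a model pull may reach. The sketch below is a minimal illustration of that idea; the registry hostnames and the function name are assumptions made for the example, not any specific product's API.

```python
# A minimal sketch of the "trusted registries only" mitigation for the
# model-pull SSRF described above. TRUSTED_REGISTRIES holds example hosts;
# substitute the registries your deployment actually uses.
from urllib.parse import urlparse

TRUSTED_REGISTRIES = {"registry.ollama.ai", "huggingface.co"}  # example hosts

def is_safe_registry_url(url: str) -> bool:
    """Reject model-pull URLs that could be abused for SSRF callbacks."""
    parsed = urlparse(url)
    if parsed.scheme != "https":          # block http://, gopher://, file://, etc.
        return False
    host = (parsed.hostname or "").lower()
    if host not in TRUSTED_REGISTRIES:    # exact-match allowlist, no suffix tricks
        return False
    if parsed.port not in (None, 443):    # no redirecting pulls to odd ports
        return False
    return True

assert is_safe_registry_url("https://huggingface.co/org/model")
assert not is_safe_registry_url("http://attacker.oast.example/callback")
```

Exact-match hostname checks matter here: substring or suffix matching would let an attacker register a lookalike domain such as huggingface.co.evil.example and slip past the filter.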

LLM Security Recommendations

The researchers recommended that organizations “Lock down model pulls ... to accept models only from trusted registries. Egress filtering prevents SSRF callbacks from reaching attacker infrastructure.” Organizations should also detect enumeration patterns and “alert on rapid-fire requests hitting multiple model endpoints,” watching for fingerprinting queries such as "How many states are there in the United States?" and "How many letter r..." They should also block OAST at DNS to “cut off the callback channel that confirms successful exploitation,” rate-limit suspicious ASNs (AS152194, AS210558, and AS51396 “all appeared prominently in attack traffic”), and monitor JA4 fingerprints.
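As a minimal sketch of the "alert on rapid-fire requests hitting multiple model endpoints" recommendation, the snippet below keeps a sliding window of requests per source IP and flags sources that touch many distinct models in a short span. The window length, threshold, and log-record shape are illustrative assumptions, not tuned guidance.

```python
# Sliding-window detector for LLM endpoint enumeration.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60        # sliding window length
MAX_DISTINCT_MODELS = 5    # distinct model paths per window before alerting

class EnumerationDetector:
    def __init__(self):
        # source IP -> deque of (timestamp, model_path) events
        self.hits = defaultdict(deque)

    def observe(self, src_ip, model_path, now=None):
        """Record one request; return True if src_ip looks like an enumerator."""
        now = time.time() if now is None else now
        window = self.hits[src_ip]
        window.append((now, model_path))
        # Expire events that have fallen out of the window.
        while window and now - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        distinct = {path for _, path in window}
        return len(distinct) > MAX_DISTINCT_MODELS

detector = EnumerationDetector()
flagged = False
for model in ["gpt-4o", "claude-3-opus", "llama-3.1",
              "deepseek-r1", "gemini-pro", "grok-2"]:
    flagged = detector.observe("203.0.113.7", f"/v1/chat/completions:{model}")
print("enumeration suspected:", flagged)  # True after the sixth distinct model
```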

European Commission Investigates Grok AI After Explicit Images of Minors Surface


The Grok AI investigation has intensified after the European Commission confirmed it is examining the creation of sexually explicit and suggestive images of girls, including minors, generated by Grok, the artificial intelligence chatbot integrated into social media platform X. The scrutiny follows widespread outrage linked to a paid feature known as “Spicy Mode,” introduced last summer, which critics say enabled the generation and manipulation of sexualised imagery. Speaking to journalists in Brussels on Monday, a spokesperson for the European Commission said the matter was being treated with urgency. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not 'spicy'. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

European Commission Examines Grok’s Compliance With EU Law

The European Commission Grok probe places renewed focus on the responsibilities of AI developers and social media platforms under the EU’s Digital Services Act (DSA). The European Commission, which acts as the EU’s digital watchdog, said it is assessing whether X and its AI systems are meeting their legal obligations to prevent the dissemination of illegal content, particularly material involving minors.

The inquiry comes after reports that Grok was used to generate sexually explicit images of young girls, including through prompts that altered existing images. The controversy escalated following the rollout of an “edit image” feature that allowed users to modify photos with instructions such as “put her in a bikini” or “remove her clothes.”

On Sunday, X said it had removed the images in question and banned the users involved. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the company’s X Safety account posted.

[Image: X Safety statement (source: X)]

International Backlash and Parallel Investigations

The X AI chatbot Grok is now facing regulatory pressure beyond the European Commission. Authorities in France, Malaysia, and India have launched or expanded investigations into the platform’s handling of explicit and sexualised content generated by the AI tool.

In France, prosecutors last week expanded an existing investigation into X to include allegations that Grok was being used to generate and distribute child sexual abuse material. The original probe, opened in July, focused on claims that X’s algorithms were being manipulated for foreign interference.

India has also taken a firm stance. Last week, Indian authorities reportedly ordered X to remove sexualised content, curb offending accounts, and submit an “Action Taken Report” within 72 hours or face legal consequences. As of Monday, there was no public confirmation on whether X had complied.

[Image: order from India's Ministry of Electronics and Information Technology]

Malaysia’s Communications and Multimedia Commission said it had received public complaints about “indecent, grossly offensive” content on X and confirmed it was investigating the matter. The regulator added that X’s representatives would be summoned.

DSA Enforcement and Grok’s Previous Controversies

The current Grok AI investigation is not the first time the European Commission has taken action related to the chatbot. Last November, the Commission requested information from X after Grok generated Holocaust denial content. That request was issued under the DSA, and the Commission said it is still analysing the company’s response. In December, X was fined €120 million under the DSA over its handling of account verification check marks and advertising practices. “I think X is very well aware that we are very serious about DSA enforcement. They will remember the fine that they have received from us,” the Commission spokesperson said.

Public Reaction and Growing Concerns Over AI Misuse

The controversy has prompted intense discussion across online platforms, particularly Reddit, where users have raised alarms about the potential misuse of generative AI tools to create non-consensual and abusive content. Many posts focused on how easily Grok could be prompted to alter real images, transforming ordinary photographs of women and children into sexualised or explicit content.

Some Reddit users referenced reporting by the BBC, which said it had observed multiple examples on X of users asking the chatbot to manipulate real images—such as making women appear in bikinis or placing them in sexualised scenarios—without consent. These examples, shared widely online, have fuelled broader concerns about the adequacy of content safeguards.

Separately, the UK’s media regulator Ofcom said it had made “urgent contact” with Elon Musk’s company xAI following reports that Grok could be used to generate “sexualised images of children” and produce “undressed images” of individuals. Ofcom said it was seeking information on the steps taken by X and xAI to comply with their legal duties to protect users in the UK and would assess whether the matter warrants further investigation.

Across Reddit and other forums, users have questioned why such image-editing capabilities were available at all, with some arguing that the episode exposes gaps in oversight around AI systems deployed at scale. Others expressed scepticism about enforcement outcomes, warning that regulatory responses often come only after harm has already occurred.

Although X has reportedly restricted visibility of Grok’s media features, users continue to flag instances of image manipulation and redistribution. Digital rights advocates note that once explicit content is created and shared, removing individual posts does not fully address the broader risk to those affected.

Grok has acknowledged shortcomings in its safeguards, stating it had identified lapses and was “urgently fixing them.” The AI tool has also issued an apology for generating an image of two young girls in sexualised attire based on a user prompt. As scrutiny intensifies, the episode is emerging as a key test of how AI-generated content is regulated—and how accountability is enforced—when powerful tools enable harm at scale.

Grok apologizes for creating image of young girls in “sexualized attire”

Another AI system designed to be powerful and engaging ends up illustrating how guardrails routinely fail when development speed and feature races outrun safety controls.

In a post on X, AI chatbot Grok confirmed that it generated an image of young girls in “sexualized attire.”

[Image: Grok’s apology post on X]

The potential violation of US laws regarding child sexual abuse material (CSAM) demonstrates the AI chatbot’s apparent lack of guardrails, or at least shows that the guardrails are far less effective than we’d like them to be.

xAI, the company behind Musk’s chatbot, is reviewing the incident “to prevent future issues,” and the user responsible for the prompt reportedly had their account suspended. In a separate post on X, Grok reportedly described the incident as an isolated case and said that urgent fixes were being issued after “lapses in safeguards” were identified.

During the holiday period, we discussed how risks increase when AI developments and features are rushed out the door without adequate safety testing. We keep pushing the limits of what AI can do faster than we can make it safe. Visual models that can sexualize minors are precisely the kind of deployment that should never go live without rigorous abuse testing.

So, on one hand we see geo-blocking driven by national and state content restrictions; on the other, the AI linked to one of the most popular social media platforms failed to block content that many would consider far more serious than what lawmakers are currently trying to regulate. In effect, centralized age-verification databases become breach targets while still failing to prevent AI tools from generating abusive material.

Women have also reported being targeted by Grok’s image-generation features. One X user tweeted:

“Literally woke up to so many comments asking Grok to put me in a thong / bikini and the results having so many bookmarks. Even worse I went onto the Grok page and saw slimy disgusting lowlifes doing that to pictures of CHILDREN. Genuinely disgusting.”

We can only imagine the devastating results if cybercriminals were to abuse this type of weakness to defraud or extort parents with fabricated explicit content of their children. Tools for inserting real faces into AI-generated content are already widely available, and current safeguards appear unable to reliably prevent abuse.

Tips

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

Treat everything you see online—images, voices, text—as potentially AI-generated unless it can be independently verified. AI-generated content is not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.




DOGE Denizen Marko Elez Leaked API Key for xAI

Marko Elez, a 25-year-old employee at Elon Musk’s Department of Government Efficiency (DOGE), has been granted access to sensitive databases at the U.S. Social Security Administration, the Treasury and Justice departments, and the Department of Homeland Security. So it should fill all Americans with a deep sense of confidence to learn that Mr. Elez over the weekend inadvertently published a private key that allowed anyone to interact directly with more than four dozen large language models (LLMs) developed by Musk’s artificial intelligence company xAI.


On July 13, Mr. Elez committed a script to GitHub called “agent.py” that included a private application programming interface (API) key for xAI. The inclusion of the private key was first flagged by GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys and fire off automated alerts to affected users.
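To make the detection step concrete, here is a minimal sketch of the kind of pattern-based scanning that GitGuardian-style tools run over repositories. The "xai-" prefix and the length bound are assumptions about xAI's key format made for illustration; real scanners use verified, per-provider patterns plus validity checks.

```python
# A minimal sketch of pattern-based secret scanning, in the spirit of
# GitGuardian's detectors. ASSUMPTION: xAI keys use an "xai-" prefix
# followed by a long alphanumeric token; the length bound is illustrative.
import re
import sys
from pathlib import Path

XAI_KEY_RE = re.compile(r"\bxai-[A-Za-z0-9]{20,}\b")

def scan_tree(root: str) -> list:
    """Return (file, line number, match) for each candidate key found."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for match in XAI_KEY_RE.finditer(line):
                findings.append((str(path), lineno, match.group()))
    return findings

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for file, lineno, key in scan_tree(root):
        # Print only a short prefix so the report itself doesn't leak the key.
        print(f"{file}:{lineno}: possible xAI API key: {key[:8]}...")
```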

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, said the exposed API key allowed access to at least 52 different LLMs used by xAI. The most recent LLM in the list was called “grok-4-0709” and was created on July 9, 2025.

Grok, the generative AI chatbot developed by xAI and integrated into Twitter/X, relies on these and other LLMs (a query to Grok before publication shows Grok currently uses Grok-3, which was launched in February 2025). Earlier today, xAI announced that the Department of Defense will begin using Grok as part of a contract worth up to $200 million. The contract award came less than a week after Grok began spewing antisemitic rants and invoking Adolf Hitler.

Mr. Elez did not respond to a request for comment. The code repository containing the private xAI key was removed shortly after Caturegli notified Elez via email. However, Caturegli said the exposed API key still works and has not yet been revoked.
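As a hedged illustration of how a claim like "the key still works" can be checked, the sketch below tests whether a key can still list models. It assumes xAI exposes an OpenAI-compatible GET /v1/models route at api.x.ai with bearer-token auth; treat the URL and semantics as assumptions rather than confirmed details of how Caturegli verified the key.

```python
# A hedged sketch of a key-revocation check. ASSUMPTIONS: xAI serves an
# OpenAI-compatible API at https://api.x.ai/v1 and uses bearer-token auth;
# adjust the base URL if the provider's documentation says otherwise.
import requests

def key_still_works(api_key: str, base_url: str = "https://api.x.ai/v1") -> bool:
    """Return True if the key can still enumerate models (i.e., not revoked)."""
    resp = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    # 200 means the key is live; 401/403 indicate it was revoked or restricted.
    return resp.status_code == 200
```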

“If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors,” Caturegli told KrebsOnSecurity.

Prior to joining DOGE, Marko Elez worked for a number of Musk’s companies. His DOGE career began at the Department of the Treasury, and a legal battle over DOGE’s access to Treasury databases showed Elez was sending unencrypted personal information in violation of the agency’s policies.

While still at Treasury, Elez resigned after The Wall Street Journal linked him to social media posts that advocated racism and eugenics. When Vice President J.D. Vance lobbied for Elez to be rehired, President Trump agreed and Musk reinstated him.

Since his re-hiring as a DOGE employee, Elez has been granted access to databases at one federal agency after another. TechCrunch reported in February 2025 that he was working at the Social Security Administration. In March, Business Insider found Elez was part of a DOGE detachment assigned to the Department of Labor.

Marko Elez, in a photo from a social media profile.

In April, The New York Times reported that Elez held positions at the U.S. Customs and Border Protection and the Immigration and Customs Enforcement (ICE) bureaus, as well as the Department of Homeland Security. The Washington Post later reported that Elez, while serving as a DOGE advisor at the Department of Justice, had gained access to the Executive Office for Immigration Review’s Courts and Appeals System (EACS).

Elez is not the first DOGE worker to publish internal API keys for xAI: In May, KrebsOnSecurity detailed how another DOGE employee leaked a private xAI key on GitHub for two months, exposing LLMs that were custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X.

Caturegli said it’s difficult to trust someone with access to confidential government systems when they can’t even manage the basics of operational security.

“One leak is a mistake,” he said. “But when the same type of sensitive key gets exposed again and again, it’s not just bad luck, it’s a sign of deeper negligence and a broken security culture.”
