Food firms urge Europe not to ban calling non-meat products ‘sausages’

12 February 2026 at 19:01

Exclusive: Manufacturers tell European Commission proposed ban would cause unnecessary confusion

More than a dozen food companies have urged the European Commission not to ban the use of words such as “sausage” and “burger” for non-meat products.

Companies including Linda McCartney Foods, Quorn and THIS have signed a joint letter calling on commissioners to “let common sense prevail” ahead of a debate on the proposed ban, which they say would cause “unnecessary confusion” for customers “without helping anyone”.


© Photograph: PR Image

European Commission Hit by Mobile Infrastructure Data Breach

9 February 2026 at 14:19

European Commission Mobile Cyberattack Thwarted by Quick Action

The European Commission's central infrastructure for managing mobile devices was hit by a cyberattack on January 30, the Commission has revealed. The Commission said swift action limited the incident, but cybersecurity observers speculate it was linked to another recent attack on Dutch government targets revealed around the same time.

European Commission Mobile Cyberattack Detailed

The European Commission’s Feb. 5 announcement said its mobile management infrastructure “identified traces of a cyber-attack, which may have resulted in access to staff names and mobile numbers of some of its staff members. The Commission's swift response ensured the incident was contained and the system cleaned within 9 hours. No compromise of mobile devices was detected.” The Commission said it will “continue to monitor the situation. It will take all necessary measures to ensure the security of its systems. The incident will be thoroughly reviewed and will inform the Commission's ongoing efforts to enhance its cybersecurity capabilities.” The Commission provided no further details on the attack, but observers wondered if it was connected to another incident involving Dutch government targets that was revealed the following day.

Dutch Cyberattack Targeted Ivanti Vulnerabilities

In a Feb. 6 letter (in Dutch) to the Dutch Parliament, State Secretary for Justice and Security Arno Rutte said the Dutch Data Protection Authority (AP) and the Council for the Judiciary (Rvdr) had been targeted in an “exploitation of a vulnerability in Ivanti Endpoint Manager Mobile (EPMM).” Rutte said the Dutch National Cyber Security Centre (NCSC) was informed by Ivanti on January 29 about vulnerabilities in EPMM, which is used for managing and securing mobile devices, apps and content. That same day, Ivanti warned that two critical zero-day vulnerabilities in EPMM were under active attack. CVE-2026-1281 and CVE-2026-1340 are both 9.8-severity code injection flaws affecting EPMM’s In-House Application Distribution and Android File Transfer Configuration features, and could allow unauthenticated remote attackers to execute arbitrary code on vulnerable on-premises EPMM installations. “Based on the information currently available, I can report that at least the AP and the Rvdr have been affected,” Rutte wrote. Work-related data of AP employees, such as names, business email addresses, and telephone numbers, “have been accessed by unauthorized persons,” he added. “Immediate measures were taken after the incident was discovered. In addition, the employees of the AP and the Rvdr have been informed. The AP has reported the incident to its data protection officer. The Rvdr has submitted a preliminary data breach notification to the AP.” NCSC is monitoring further developments with the Ivanti vulnerabilities and “is in close contact” with international partners, the letter said. Meanwhile, the Chief Information Officer of the Dutch government “is coordinating the assessment of whether there is a broader impact within the central government.”
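The 9.8 scores cited above sit near the top of the CVSS v3 “Critical” band. For context, the standard CVSS v3.1 qualitative severity bands can be expressed in a few lines; this is a generic sketch of the published rating scale, not Ivanti-specific code:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating,
    per the CVSS v3.1 specification's severity scale."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Both EPMM flaws were scored 9.8:
print(cvss_severity(9.8))  # -> Critical
```

The practical upshot of a Critical, unauthenticated, remotely exploitable score like 9.8 is that patching is treated as an emergency rather than a scheduled maintenance task.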

European Commission Calls for Stronger Cybersecurity Controls

The European Commission’s statement noted that “As Europe faces daily cyber and hybrid attacks on essential services and democratic institutions, the Commission is committed to further strengthen the EU's cybersecurity resilience and capabilities.” To that end, the Commission introduced a Cybersecurity Package on January 20 to bolster the European Union's cyber defenses. “A central pillar of this initiative is the Cybersecurity Act 2.0, which introduces a framework for a Trusted ICT Supply Chain to mitigate risks from high-risk suppliers,” the EC statement said.

Why TikTok’s Addictive Design Is Now a Regulatory Problem

9 February 2026 at 03:01

TikTok Addictive Design Under EU Regulatory Scrutiny

The European Commission’s preliminary finding that TikTok’s addictive design breaches the Digital Services Act (DSA) marks a major shift in how regulators view social media responsibility, especially when it comes to children and vulnerable users. This is not a symbolic warning. It is a direct challenge to the design choices that have powered TikTok’s explosive growth. According to the Commission, TikTok’s core features—including infinite scroll, autoplay, push notifications, and a highly personalised recommender system—are engineered to keep users engaged for as long as possible. The problem, regulators argue, is that TikTok failed to seriously assess or mitigate the harm these features can cause, particularly to minors.

TikTok Addictive Design Fuels Compulsive Use

The Commission’s risk assessment found that TikTok did not adequately evaluate how its design impacts users’ physical and mental wellbeing. Features that constantly “reward” users with new content can push people into what experts describe as an “autopilot mode,” where scrolling becomes automatic rather than intentional. Scientific research reviewed by the Commission links such design patterns to compulsive behaviour and reduced self-control. Despite this, TikTok reportedly overlooked key indicators of harmful use, including how much time minors spend on the app at night, how frequently users reopen the app, and other behavioural warning signs. This omission matters. Under the Digital Services Act, platforms are expected not only to identify risks but to act on them. In this case, the Commission believes TikTok failed on both counts.

Risk Mitigation Measures Fall Short

The investigation also found that TikTok’s current safeguards do little to counter the risks created by its addictive design. Screen time management tools are reportedly easy to dismiss and introduce minimal friction, making them ineffective in helping users actually reduce usage. Parental controls fare no better. While they exist, the Commission notes that they require extra time, effort, and technical understanding from parents, barriers that significantly limit their real-world impact. At this stage, regulators believe that cosmetic fixes are not enough. The Commission has stated that TikTok may need to change the basic design of its service, including disabling infinite scroll over time, enforcing meaningful screen-time breaks (especially at night), and reworking its recommender system. These findings are preliminary, but the message is clear: responsibility cannot be optional when a platform’s design actively shapes user behaviour.

How Governments View Social Media Harm

The scrutiny of TikTok’s addictive design comes amid a broader global reassessment of social media’s impact on young users. Countries including Australia, Spain, and the United Kingdom have taken steps in recent months to restrict or ban social media use by minors, citing growing concerns over screen time and mental health. Europe’s stance reflects a wider regulatory trend: moving away from asking platforms to self-police, and toward enforcing accountability through law. This is consistent with other digital policy actions across the region, including investigations into platform transparency, data access for researchers, and online safety failures.

What Happens Next for TikTok

TikTok now has the right to review the Commission’s findings and respond in writing. The European Board for Digital Services will also be consulted. If the Commission ultimately confirms its position, it could issue a formal non-compliance decision, opening the door to fines of up to 6% of TikTok’s global annual turnover. While the outcome is not yet final, the direction is unmistakable. As Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, stated:
“Social media addiction can have detrimental effects on the developing minds of children and teens. The Digital Services Act makes platforms responsible for the effects they can have on their users. In Europe, we enforce our legislation to protect our children and our citizens online.”
The TikTok case is no longer just about one app. It is about whether growth-driven platform design can continue unchecked, or whether accountability is finally catching up.
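The fine exposure mentioned above, up to 6% of global annual turnover, is a ceiling rather than a fixed penalty: the actual amount would be set within that cap. A minimal illustration of the arithmetic (the turnover figure below is hypothetical, purely for demonstration):

```python
def dsa_max_fine(global_annual_turnover: float, cap: float = 0.06) -> float:
    """Upper bound on a DSA non-compliance fine: a fixed percentage
    (by default 6%) of a platform's global annual turnover."""
    if global_annual_turnover < 0:
        raise ValueError("turnover cannot be negative")
    return global_annual_turnover * cap

# Hypothetical platform with EUR 20 billion in global annual turnover:
print(dsa_max_fine(20_000_000_000))  # -> 1200000000.0 (a EUR 1.2 billion ceiling)
```

The point of tying the cap to global turnover rather than EU revenue is that it scales with the overall size of the company being sanctioned.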

European Commission Launches Fresh DSA Investigation Into X Over Grok AI Risks

27 January 2026 at 01:11

European Commission investigation into Grok AI

The European Commission has launched a new formal investigation into X under the Digital Services Act (DSA), intensifying regulatory scrutiny over the platform’s use of its AI chatbot, Grok. Announced on January 26, the move follows mounting concerns that Grok’s image-generation and recommender functionalities may have exposed users in the EU to illegal and harmful content, including manipulated sexually explicit images and material that could amount to child sexual abuse material (CSAM). The latest investigation runs in parallel with an extension of an ongoing probe first opened in December 2023. The Commission will now examine whether X properly assessed and mitigated the systemic risks associated with deploying Grok’s functionalities on its platform in the EU, as required under the DSA.

Focus on Grok AI and Illegal Content Risks

At the core of the new proceedings is whether X fulfilled its obligations to assess and reduce risks stemming from Grok AI. The Commission said the risks appear to have already materialised, exposing EU citizens to serious harm. Regulators will investigate whether X:
  • Diligently assessed and mitigated systemic risks, including the dissemination of illegal content, negative effects related to gender-based violence, and serious consequences for users’ physical and mental well-being.
  • Conducted and submitted an ad hoc risk assessment report to the Commission for Grok’s functionalities before deploying them, given their critical impact on X’s overall risk profile.
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the Digital Services Act. The Commission stressed that the opening of formal proceedings does not prejudge the outcome but confirmed that an in-depth investigation will now proceed as a matter of priority.

Recommender Systems Also Under Expanded Scrutiny

In a related step, the European Commission has extended its December 2023 investigation into X’s recommender systems. This expanded review will assess whether X properly evaluated and mitigated all systemic risks linked to how its algorithms promote content, including the impact of its recently announced switch to a Grok-based recommender system. As a designated very large online platform (VLOP) under the DSA, X is legally required to identify, assess, and reduce systemic risks arising from its services in the EU. These risks include the spread of illegal content and threats to fundamental rights, particularly those affecting minors. Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, underlined the seriousness of the case in a statement: “Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service.” Earlier this month, a European Commission spokesperson had also addressed the issue while speaking to journalists in Brussels, calling the matter urgent and unacceptable. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

International Pressure Builds Around Grok AI

The investigation comes against a backdrop of rising regulatory pressure worldwide over Grok AI’s image-generation capabilities. On January 16, X announced changes to Grok aimed at preventing the creation of nonconsensual sexualised images, including content that critics say amounts to CSAM. The update followed weeks of scrutiny and reports of explicit material generated using Grok. In the United States, California Attorney General Rob Bonta confirmed on January 14 that his office had opened an investigation into xAI, the company behind Grok, over reports describing the depiction of women and children in explicit situations. Bonta called the reports “shocking” and urged immediate action, saying his office is examining whether the company may have violated the law. U.S. lawmakers have also stepped in. On January 12, three senators urged Apple and Google to remove X and Grok from their app stores, arguing that the chatbot had repeatedly violated app store policies related to abusive and exploitative content.

Next Steps in the European Commission Investigation Into X

As part of the Digital Services Act (DSA) enforcement process, the Commission will continue gathering evidence by sending additional requests for information, conducting interviews, or carrying out inspections. Interim measures could be imposed if X fails to make meaningful adjustments to its service. The Commission is also empowered to adopt a non-compliance decision or accept commitments from X to remedy the issues under investigation. Notably, the opening of formal proceedings shifts enforcement authority to the Commission, relieving national Digital Services Coordinators of their supervisory powers for the suspected infringements. The investigation complements earlier DSA proceedings that resulted in a €120 million fine against X in December 2025 for deceptive design, lack of advertising transparency, and insufficient data access for researchers. With Grok AI now firmly in regulators’ sights, the outcome of this probe could have major implications for how AI-driven features are governed on large online platforms across the EU.

Elon Musk’s X Faces EU Inquiry Over Sexualized AI Images Generated by Grok

26 January 2026 at 09:16
Regulators said the company’s lack of controls had led to the widespread use of deepfakes created with the chatbot Grok.

© Nicolas Tucat/Agence France-Presse — Getty Images

The investigation is likely to escalate a confrontation between Europe and the United States over the regulation of online content.

EU to Phase Out ‘High-risk’ Mobile and Telecom Network Products

21 January 2026 at 15:52

The European Commission has proposed a new cybersecurity legislative package that proponents say will strengthen the security of the EU's Information and Communication Technologies (ICT) supply chains by phasing out “high-risk” mobile and telecom network products from countries deemed to be risky. In a statement, the Commission said the revised Cybersecurity Act “will enable the mandatory derisking of European mobile telecommunications networks from high-risk third-country suppliers, building on the work already carried out under the 5G security toolbox.” The legislation refers to networks more broadly: “ICT components or components that include ICT components provided by high-risk suppliers shall be phased out from the key ICT assets of mobile, fixed and satellite electronic communication networks.” Mobile networks would have 36 months to comply with the legislation. Transition periods for fixed and satellite electronic communications networks will be specified by the Commission through implementing acts.

Russia, China May Be Among ‘High-risk’ Telecom Network Suppliers

The legislation is short on specifics, leaving much of the details to be worked out after passage, but it appears that telecom network suppliers from Russia and China may be targeted under the legislation and implementing regulations. At one point the legislation cites a 2023 European Parliament resolution on foreign interference in democratic processes. The legislation states: “The European Parliament called on the Commission to develop binding ICT supply chain security legislation that addresses non-technical risk and to ‘exclude the use of equipment and software from manufacturers based in high-risk countries, particularly China and Russia’. Members of the European Parliament also called for urgent action to secure telecommunications infrastructure against undue foreign influence and security risks.” China’s foreign ministry and Huawei have already criticized the legislation, which would formalize a process under way since 2020 to remove network equipment perceived as high-risk. "A legislative proposal to limit or exclude non-EU suppliers based on country of origin, rather than factual evidence and technical standards, violates the EU's basic legal principles of fairness, non-discrimination, and proportionality, as well as its WTO obligations," a Huawei spokesperson was quoted by Reuters as saying. The legislation will apply to 18 critical sectors, which Reuters said will include detection equipment, connected and automated vehicles, electricity supply and storage systems, water supply systems, and drones and counter‑drone systems. Cloud services, medical devices, surveillance equipment, space services and semiconductors would also be affected.

The EU’s 'Secure by Design' Certification Process

The legislative package and revised Cybersecurity Act is aimed at ensuring “that products reaching EU citizens are cyber-secure by design through a simpler certification process,” the Commission’s statement said. The legislation also bolsters the EU Agency for Cybersecurity (ENISA) in its role in managing cybersecurity threats and certification processes. “The new Cybersecurity Act aims to reduce risks in the EU's ICT supply chain from third-country suppliers with cybersecurity concerns,” the Commission said. “It sets out a trusted ICT supply chain security framework based on a harmonised, proportionate and risk-based approach. This will enable the EU and Member States to jointly identify and mitigate risks across the EU's 18 critical sectors, considering also economic impacts and market supply.” The Act will ensure “that products and services reaching EU consumers are tested for security in a more efficient way,” the Commission stated. That will be accomplished through an updated European Cybersecurity Certification Framework (ECCF), which “will bring more clarity and simpler procedures, allowing certification schemes to be developed within 12 months by default.” Certification schemes managed by ENISA “will become a practical, voluntary tool for businesses.” In addition to ICT products, services, processes and managed security services, companies and organizations “will be able to certify their cyber posture to meet market needs. Ultimately, the renewed ECCF will be a competitive asset for EU businesses. For EU citizens, businesses and public authorities, it will ensure a high level of security and trust in complex ICT supply chains,” the Commission stated. The legislative package also includes amendments to the NIS2 Directive “to increase legal clarity,” and also aims to lower compliance costs for 28,700 companies in keeping with the Digital Omnibus process. 

Amendments will “simplify jurisdictional rules, streamline the collection of data on ransomware attacks and facilitate the supervision of cross-border entities with ENISA's reinforced coordinating role.” The Cybersecurity Act will become effective after approval by the European Parliament and the Council of the EU, while Member States will have one year to implement the NIS2 Directive amendments after adoption.

After EU Probe, U.S. Senators Push Apple and Google to Review Grok AI

12 January 2026 at 02:01

U.S. Senators Push Apple and Google to Review Grok AI

Concerns surrounding Grok AI are escalating rapidly, with pressure now mounting in the United States after ongoing scrutiny in Europe. Three U.S. senators have urged Apple and Google to remove the X app and Grok AI from the Apple App Store and Google Play Store, citing the large-scale creation of nonconsensual sexualized images of real people, including children. The move comes as a direct follow-up to the European Commission’s investigation into Grok AI’s image-generation capabilities, marking a significant expansion of regulatory attention beyond the EU. While European regulators have openly weighed enforcement actions, U.S. authorities are now signaling that app distribution platforms may also bear responsibility.

U.S. Senators Cite App Store Policy Violations by Grok AI

In a letter dated January 9, 2026, Senators Ron Wyden, Ed Markey, and Ben Ray Luján formally asked Apple CEO Tim Cook and Google CEO Sundar Pichai to enforce their app store policies against X Corp. The lawmakers argue that Grok AI, which operates within the X app, has repeatedly violated rules governing abusive and exploitative content. According to the senators, users have leveraged Grok AI to generate nonconsensual sexualized images of women, depicting abuse, humiliation, torture, and even death. More alarmingly, the letter states that Grok AI has also been used to create sexualized images of children, content the senators described as both harmful and potentially illegal. The lawmakers emphasized that such activity directly conflicts with policies enforced by both the Apple App Store and Google Play Store, which prohibit content involving sexual exploitation, especially material involving minors.

Researchers Flag Potential Child Abuse Material Linked to Grok AI

The letter also references findings by independent researchers who identified an archive connected to Grok AI containing nearly 100 images flagged as potential child sexual abuse material. These images were reportedly generated over several months, raising questions about X Corp’s oversight and response mechanisms. The senators stated that X appeared fully aware of the issue, pointing to public reactions by Elon Musk, who acknowledged reports of Grok-generated images with emoji responses. In their view, this signaled a lack of seriousness in addressing the misuse of Grok AI.

Premium Restrictions Fail to Calm Controversy

In response to the backlash, X recently limited Grok AI’s image-generation feature to premium subscribers. However, the senators dismissed this move as inadequate. Sen. Wyden said the change merely placed a paywall around harmful behavior rather than stopping it, arguing that it allowed the production of abusive content to continue while generating revenue. The lawmakers stressed that restricting access does not absolve X of responsibility, particularly when nonconsensual sexualized images remain possible through the platform.

Pressure Mounts on Apple App Store and Google Play Store

The senators warned that allowing the X app and Grok AI to remain available on the Apple App Store and Google Play Store would undermine both companies’ claims that their platforms offer safer environments than alternative app distribution methods. They also pointed to recent instances where Apple and Google acted swiftly to remove other controversial apps under government pressure, arguing that similar urgency should apply in the case of Grok AI. At minimum, the lawmakers said, temporary removal of the apps would be appropriate while a full investigation is conducted. They requested a written response from both companies by January 23, 2026, outlining how Grok AI and the X app are being assessed under existing policies. Apple and Google have not publicly commented on the letter, and X has yet to issue a formal response. The latest development adds momentum to global scrutiny of Grok AI, reinforcing concerns already raised by the European Commission. Together, actions in the U.S. and Europe signal a broader shift toward holding AI platforms, and the app ecosystems that distribute them, accountable for how generative technologies are deployed and controlled at scale.

European Commission Investigates Grok AI After Explicit Images of Minors Surface

European Commission Grok Investigation

The Grok AI investigation has intensified after the European Commission confirmed it is examining the creation of sexually explicit and suggestive images of girls, including minors, generated by Grok, the artificial intelligence chatbot integrated into social media platform X. The scrutiny follows widespread outrage linked to a paid feature known as “Spicy Mode,” introduced last summer, which critics say enabled the generation and manipulation of sexualised imagery. Speaking to journalists in Brussels on Monday, a spokesperson for the European Commission said the matter was being treated with urgency. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not 'spicy'. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

European Commission Examines Grok’s Compliance With EU Law

The European Commission Grok probe places renewed focus on the responsibilities of AI developers and social media platforms under the EU’s Digital Services Act (DSA). The European Commission, which acts as the EU’s digital watchdog, said it is assessing whether X and its AI systems are meeting their legal obligations to prevent the dissemination of illegal content, particularly material involving minors. The inquiry comes after reports that Grok was used to generate sexually explicit images of young girls, including through prompts that altered existing images. The controversy escalated following the rollout of an “edit image” feature that allowed users to modify photos with instructions such as “put her in a bikini” or “remove her clothes.” On Sunday, X said it had removed the images in question and banned the users involved. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the company’s X Safety account posted.

International Backlash and Parallel Investigations

The X AI chatbot Grok is now facing regulatory pressure beyond the European Commission. Authorities in France, Malaysia, and India have launched or expanded investigations into the platform’s handling of explicit and sexualised content generated by the AI tool. In France, prosecutors last week expanded an existing investigation into X to include allegations that Grok was being used to generate and distribute child sexual abuse material. The original probe, opened in July, focused on claims that X’s algorithms were being manipulated for foreign interference. India has also taken a firm stance. Last week, Indian authorities reportedly ordered X to remove sexualised content, curb offending accounts, and submit an “Action Taken Report” within 72 hours or face legal consequences. As of Monday, there was no public confirmation on whether X had complied. Malaysia’s Communications and Multimedia Commission said it had received public complaints about “indecent, grossly offensive” content on X and confirmed it was investigating the matter. The regulator added that X’s representatives would be summoned.

DSA enforcement and Grok’s previous controversies

The current Grok AI investigation is not the first time the European Commission has taken action related to the chatbot. Last November, the Commission requested information from X after Grok generated Holocaust denial content. That request was issued under the DSA, and the Commission said it is still analysing the company’s response. In December, X was fined €120 million under the DSA over its handling of account verification check marks and advertising practices. “I think X is very well aware that we are very serious about DSA enforcement. They will remember the fine that they have received from us,” the Commission spokesperson said.

Public reaction and growing concerns over AI misuse

The controversy has prompted intense discussion across online platforms, particularly Reddit, where users have raised alarms about the potential misuse of generative AI tools to create non-consensual and abusive content. Many posts focused on how easily Grok could be prompted to alter real images, transforming ordinary photographs of women and children into sexualised or explicit content. Some Reddit users referenced reporting by the BBC, which said it had observed multiple examples on X of users asking the chatbot to manipulate real images—such as making women appear in bikinis or placing them in sexualised scenarios—without consent. These examples, shared widely online, have fuelled broader concerns about the adequacy of content safeguards.

Separately, the UK’s media regulator Ofcom said it had made “urgent contact” with Elon Musk’s company xAI following reports that Grok could be used to generate “sexualised images of children” and produce “undressed images” of individuals. Ofcom said it was seeking information on the steps taken by X and xAI to comply with their legal duties to protect users in the UK and would assess whether the matter warrants further investigation.

Across Reddit and other forums, users have questioned why such image-editing capabilities were available at all, with some arguing that the episode exposes gaps in oversight around AI systems deployed at scale. Others expressed scepticism about enforcement outcomes, warning that regulatory responses often come only after harm has already occurred. Although X has reportedly restricted visibility of Grok’s media features, users continue to flag instances of image manipulation and redistribution. Digital rights advocates note that once explicit content is created and shared, removing individual posts does not fully address the broader risk to those affected.

Grok has acknowledged shortcomings in its safeguards, stating it had identified lapses and was “urgently fixing them.” The AI tool has also issued an apology for generating an image of two young girls in sexualised attire based on a user prompt. As scrutiny intensifies, the episode is emerging as a key test of how AI-generated content is regulated—and how accountability is enforced—when powerful tools enable harm at scale.

U.S. Bars 5 European Tech Regulators and Researchers

23 December 2025 at 20:34
The Trump administration, citing “foreign censorship,” imposed travel bans on experts involved in monitoring major tech platforms.

© Eric Lee for The New York Times

Secretary of State Marco Rubio said in a statement that five Europeans “have led organized efforts to coerce American platforms to censor, demonetize and suppress American viewpoints they oppose.”

Elon Musk Tests Europe’s Willingness to Enforce Its Online Laws

12 December 2025 at 14:39
Backed by White House officials, the tech billionaire has lashed out at the European Union after his social media platform X was fined last week.

© Haiyun Jiang/The New York Times

Elon Musk has grown increasingly confrontational toward Europe over the past year.