
EU's New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights

4 December 2025 at 13:04

The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.

The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect and suffers from uneven enforcement, complexities and certain administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, will significantly weaken privacy protections for users in the name of cutting red tape.

It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted. The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy.

Let’s break it down. 

Changing What Constitutes Personal Data

The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and the new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, ironically targeting laws that were also argued to promote innovation in Europe. The package might also stave off tariffs the U.S. has threatened to levy, threats driven in part by heavy lobbying from Meta and other tech lobbying groups.

The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.

The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue.

This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring to separate identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.

Privileging AI 

In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” which gives AI companies a broad legal basis to process personal data unless individuals actively object. The proposals gesture towards organisational and technical safeguards but leave companies broad discretion.

Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be made to remove it from AI models or training sets where it appears. However, it is unclear what will count as appropriate or proportionate measures.

Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.  

And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it.

A Broad Reform Beyond the GDPR

There are additional adjustments, many of them troubling, such as changes to rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation about how users’ data are used), and revised data access rights (supposed to tackle abusive requests). The NGO noyb has published an extensive analysis of these changes.

Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.

Browser Signals and Cookie Fatigue

There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights. 
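An existing signal of this kind is Global Privacy Control, which participating browsers express as a `Sec-GPC: 1` request header; whatever format EU standards bodies eventually adopt would likely be detected in a similar way. Here is a minimal server-side sketch (the function name and the surrounding policy logic are illustrative, not from any proposal text):

```python
def honors_opt_out_signal(headers: dict) -> bool:
    """Return True if the request carries an automated opt-out signal.

    Checks the Sec-GPC header defined by the Global Privacy Control
    proposal; an EU-mandated consent signal could be checked the same way.
    """
    # GPC is expressed as the literal value "1"; anything else is no signal.
    return headers.get("Sec-GPC", "").strip() == "1"


# A site could consult this before loading trackers or showing a banner:
# if the signal is present, treat it as a refusal and skip the popup.
if honors_opt_out_signal({"Sec-GPC": "1"}):
    pass  # suppress tracking scripts and record the opt-out
```

The point of the mechanism is exactly this simplicity: one browser setting replaces a consent dialog on every site the user visits.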

While this proposal is a step forward, the devil is in the details: First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later. 

Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on.

However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps. 

Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected. 

A Muddled Legal Landscape

The Commission’s use of the "Omnibus" process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.

The result is the opposite of “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived. This sounds like “complification” rather than simplification.

The Digital Package Is Not a Done Deal

Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection.

Simplification doesn't require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into Verschlimmbesserung: the kind of well-meant fix that ends up resembling the infamous Ecce Homo restoration.

Safeguarding Human Rights Must Be Integral to the ICC Office of the Prosecutor’s Approach to Tech-Enabled Crimes

23 September 2025 at 12:59

This is Part I of a two-part series on EFF’s comments to the International Criminal Court Office of the Prosecutor (OTP) about its draft policy on cyber-enabled crimes.

As human rights atrocities around the world unfold in the digital age, genocide, war crimes and crimes against humanity are as heinous and wrongful as they were before the advent of AI and social media.

But criminal methods and evidence increasingly involve technology. Think mass digital surveillance of an ethnic or religious community used to persecute them as part of a widespread or systematic attack against civilians, or cyberattacks that disable hospitals or other essential services, causing injury or death.

The International Criminal Court (ICC) Office of the Prosecutor (OTP) intends to use its mandate and powers to investigate and prosecute cyber-enabled crimes within the court's jurisdiction—those covered under the 1998 Rome Statute treaty. In March 2025, the office released for public comment a draft of its proposed policy for how it plans to go about it.

We welcome the OTP draft and urge the OTP to ensure its approach is consistent with internationally recognized human rights, including the rights to free expression, to privacy (with encryption as a vital safeguard), and to fair trial and due process.

We believe those who use digital tools to commit genocide, crimes against humanity, or war crimes should face justice. At the same time, EFF, along with our partner Derechos Digitales, emphasized in comments submitted to the OTP that safeguarding human rights must be integral to its investigations of cyber-enabled crimes.

That’s how we protect survivors, prevent overreach, gather evidence that can withstand judicial scrutiny, and hold perpetrators to account. In a similar context, we’ve opposed abusive domestic cybercrime laws and policing powers that invite censorship, arbitrary surveillance, and other human rights abuses.

In this two-part series, we’ll provide background on the ICC and OTP’s draft policy, including what we like about the policy and areas that raise questions.

OTP Defines Cyber-Enabled Crimes

The ICC, established by the Rome Statute, is the permanent international criminal court with jurisdiction over individuals for four core crimes—genocide, crimes against humanity, war crimes, and the crime of aggression. It also exercises jurisdiction over offences against the administration of justice at the court itself. Within the court, the OTP is an independent organ responsible for investigating and prosecuting these crimes.

The OTP’s draft policy explains how it will apply the statute when crimes are committed or facilitated by digital means, while emphasizing that ordinary cybercrimes (e.g., hacking, fraud, data theft) are outside ICC jurisdiction and remain the responsibility of national courts to address.

The OTP defines “cyber-enabled crime” as a crime within the court’s jurisdiction that is committed or facilitated by technology. “Committed by” covers cases where the online act is the harmful act itself (or an essential digital contribution): for example, malware is used to disable a hospital and people are injured or die, so the cyber operation is the attack itself.

A crime is “facilitated by” technology, according to the OTP draft, when digital activity helps someone commit a crime under modes of liability other than direct commission (e.g., ordering, inducing, aiding or abetting), and it doesn’t matter if the main crime was itself committed online. For example, authorities use mass digital surveillance to locate members of a protected group, enabling arrests and abuses as part of a widespread or systematic attack (i.e., persecution).

It further makes clear that the OTP will use its full investigative powers under the Rome Statute—relying on national authorities acting under domestic law and, where possible, on voluntary cooperation from private entities—to secure digital evidence across borders.

Such investigations can be highly intrusive and risk sweeping up data about people beyond the target. Yet many states’ current investigative practices fall short of international human rights standards. The draft should therefore make clear that cooperating states must meet those standards, including by assessing whether they can conduct surveillance in a manner consistent with the rule of law and the right to privacy.

Digital Conduct as Evidence of Rome Statute Crimes

Even when no ICC crime happens entirely online, the OTP says online activity can still be relevant evidence. Digital conduct can help show intent, context, or policies behind abuses (for example, to prove a persecution campaign), and it can also reveal efforts to hide or exploit crimes (like propaganda). In simple terms, online activity can corroborate patterns, link incidents, and support inferences about motive, policy, and scale relevant to these crimes.

The prosecution of such crimes or the use of related evidence must be consistent with internationally recognized human rights standards, including privacy and freedom of expression, the very freedoms that allow human rights defenders, journalists, and ordinary users to document and share evidence of abuses.

In Part II we’ll take a closer look at the substance of our comments about the policy’s strengths and our recommendations for improvements and more clarity.

Mexican Allies Raise Alarms About New Mass Surveillance Laws, Call for International Support

17 September 2025 at 11:02

The Mexican government passed a package of outrageously privacy-invasive laws in July that gives both civil and military law enforcement forces access to troves of personal data and requires every individual to turn over biometric information, regardless of any suspicion of crime.

The laws create a new interconnected intelligence system dubbed the Central Intelligence Platform. Under it, intelligence and security agencies at all levels of government—federal, state, and municipal—have the power to access personal information from any entity, public or private, for “intelligence purposes,” including license plate numbers, biometric information, telephone details that allow the identification of individuals, financial, banking, and health records, public and private property records, tax data, and more.

You read that right. Banks’ customer information databases? Straight into the platform. Hospital patient records? Same thing. 

The laws were ostensibly passed in the name of gathering intelligence to fight high-impact crime. Civil society organizations, including our partners R3D and Article 19 Mexico, have raised alarms about them—as R3D put it, these new laws establish an uncontrolled system of surveillance and social control that goes against privacy and free expression rights and the presumption of innocence.

In a concept note made public recently, R3D breaks down exactly how bad the laws are. The General Population Act forces every person in Mexico to enroll in a mandatory biometric ID system with fingerprints and a photo. Under the law, public and private entities are required to ask for the ID for any transaction or access to services, such as banking, healthcare, education, and access to social programs. All data generated through the ID mandate will feed into a new Unique Identity Platform under the Disappeared Persons Act.

The use of biometric IDs creates a system for tracking activities of the population—also accessible through the Central Intelligence Platform.  

The Telecommunications Act requires telecom companies to create a registry that connects people’s phone numbers with their government-held biometric ID, and to cut off service to customers who won’t go along with the practice.

It gets worse. 

The Intelligence Act explicitly guarantees the armed forces, through the National Guard, legal access to the Central Intelligence Platform, which enables real-time consultation of interconnected databases across sectors.  

Companies, both domestic and international, must either interconnect their databases or hand over information on request. Mexican authorities can share that information even with foreign governments. The law also exempts certain types of surveillance from judicial authorization requirements and classifies the entire system as confidential, with criminal penalties for disclosure. All of this is allowed without any suspicion of a crime or prior judicial approval.

We urge everyone to pay close attention to and support efforts to hold the Mexican government accountable for this egregious surveillance system. R3D has challenged the laws in court, and international support is critical to raise awareness and push back. As R3D put it, "collaboration is vital for the defense of human rights," especially in the face of uncontrolled powers set by disproportionate laws.

We couldn’t agree more and stand with our Mexican allies. 
