

DSA Human Rights Alliance Publishes Principles Calling for DSA Enforcement to Incorporate Global Perspectives

28 January 2026 at 02:16

The Digital Services Act (DSA) Human Rights Alliance has, since its founding by EFF and Access Now in 2021, worked to ensure that the European Union follows a human rights-based approach to platform governance, both by integrating a wide range of voices and perspectives to contextualise DSA enforcement and by examining the DSA’s effect on tech regulations around the world.

As the DSA moves from legislation to enforcement, it has become increasingly clear that its impact depends not only on the text of the Act but also on how it’s interpreted and enforced in practice. This is why the Alliance has created a set of recommendations for including civil society organizations and rights-defending stakeholders in the enforcement process.

The Principles for a Human Rights-Centred Application of the DSA: A Global Perspective, a report published this week by the Alliance, outlines steps that the European Commission, as the main DSA enforcer, along with national policymakers and regulators, should take to bring diverse groups to the table and ensure that the implementation of the DSA is grounded in human rights standards.

The Principles also offer guidance for regulators outside the EU who look to the DSA as a reference framework, as well as for international bodies and global actors concerned with digital governance and the wider implications of the DSA. The Principles promote meaningful stakeholder engagement and emphasize the role of civil society organisations in providing expertise and acting as human rights watchdogs.

“Regulators and enforcers need input from civil society, researchers, and affected communities to understand the global dynamics of platform governance,” said EFF International Policy Director Christoph Schmon. “Non-EU-based civil society groups should be enabled to engage on equal footing with EU stakeholders on rights-focused elements of the DSA. This kind of robust engagement will help ensure that DSA enforcement serves the public interest and strengthens fundamental rights for everyone, especially marginalized and vulnerable groups.”

“As activists are increasingly intimidated, journalists silenced, and science and academic freedom attacked by those who claim to defend free speech, it is of utmost importance that the Digital Services Act's enforcement is centered around the protection of fundamental rights, including the right to the freedom of expression,” said Marcel Kolaja, Policy & Advocacy Director—Europe at Access Now. “To do so effectively, the global perspective needs to be taken into account. The DSA Human Rights Principles provide this perspective and offer valuable guidance for the European Commission, policymakers, and regulators for implementation and enforcement of policies aiming at the protection of fundamental rights.”

“The Principles come at the crucial moment for the EU candidate countries, such as Serbia, that have been aligning their legislation with the EU acquis but still struggle with some of the basic rule of law and human rights standards,” said Ana Toskic Cvetinovic, Executive Director for Partners Serbia. “The DSA HR Alliance offers the opportunity for non-EU civil society to learn about the existing challenges of DSA implementation and design strategies for impacting national policy development in order to minimize any negative impact on human rights.”

 The Principles call for:

◼ Empowering EU and non-EU Civil Society and Users to Pursue DSA Enforcement Actions

◼ Considering Extraterritorial and Cross-Border Effects of DSA Enforcement

◼ Promoting Cross-Regional Collaboration Among CSOs on Global Regulatory Issues

◼ Establishing Institutionalised Dialogue Between EU and Non-EU Stakeholders

◼ Upholding the Rule of Law and Fundamental Rights in DSA Enforcement, Free from Political Influence

◼ Considering Global Experiences with Trusted Flaggers and Avoiding Enforcement Abuse

◼ Recognising the International Relevance of DSA Data Access and Transparency Provisions for Human Rights Monitoring

The Principles have been signed by 30 civil society organizations, researchers, and independent experts.

The DSA Human Rights Alliance represents diverse communities across the globe to ensure that the DSA embraces a human rights-centered approach to platform governance and that EU lawmakers consider the global impacts of European legislation.


Age Verification Threats Across the Globe: 2025 in Review

15 December 2025 at 13:17

Age verification mandates won’t magically keep young people safer online, but that has not stopped governments around the world from spending this year implementing or attempting to introduce legislation requiring all online users to verify their ages before accessing the digital space.

The UK’s misguided approach to protecting young people online took many headlines due to the reckless and chaotic rollout of the country’s Online Safety Act, but the UK was not alone: courts in France ruled that porn websites can check users’ ages; the European Commission pushed forward with plans to test its age-verification app; and Australia’s ban on under-16s accessing social media was recently implemented.

Through this wave of age verification bills, politicians are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most.

In response, we’ve spent this year urging governments to pause these legislative initiatives and instead protect everyone’s right to speak and access information online. Here are three ways we pushed back against these bills in 2025:

Social Media Bans for Young People

Banning a certain user group changes nothing about a platform’s problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And because young people will always find ways to circumvent age restrictions, those who do will be left without any protections or age-appropriate experiences.

Yet Australia’s government recently decided to ignore these dangers by rolling out a sweeping regime built around age verification that bans users under 16 from having social media accounts. In this world-first ban, platforms are required to introduce age assurance tools to block under-16s, demonstrate that they have taken “reasonable steps” to deactivate accounts used by under-16s, and prevent any new accounts from being created, or face fines of up to 49.5 million Australian dollars ($32 million USD). The 10 banned platforms—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch and X—have each said they’ll comply with the legislation, leading to young people losing access to their accounts overnight.

Similarly, the European Commission this year took a first step towards mandatory age verification through its guidelines under Article 28 of the Digital Services Act, a step that could undermine privacy, expression, and participation rights for young people—rights that are fully enshrined in international human rights law. EFF submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: mandatory age verification measures are not the right way to protect minors, and any online safety measure for young people must also safeguard their privacy and security. Unfortunately, the EU Parliament went a step further still, proposing an EU digital minimum age of 16 for access to social media, a move that aligns with Commission President Ursula von der Leyen’s recent public support for measures inspired by Australia’s model.

Push for Age Assurance on All Users 

This year, the UK had a moment—and not a good one. In late July, new rules took effect under the Online Safety Act that now require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

The UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. As we argued throughout this year, and during the passage of the Online Safety Act, any attempt to protect young people online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. The approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the very young people that it is trying to protect.

We’re seeing these narratives and regulatory initiatives replicated from the UK to U.S. states and other global jurisdictions, and we’ll continue urging politicians not to follow the UK’s lead in passing similar legislation—and to instead explore more holistic approaches to protecting all users online.

Rushed Age Assurance through the EU Digital Wallet

There is not yet a legal obligation to verify users’ ages at the EU level, but policymakers and regulators are already embracing harmful age verification and age assessment measures in the name of reducing online harms.

These demands steer the debate toward identity-based solutions, such as the EU Digital Identity Wallet, which will become available in 2026. This brings its own set of privacy and security concerns, such as long-term identifiers (which could enable tracking) and over-exposure of personal information. Even more concerning, instead of waiting for the full launch of the EU Digital Identity Wallet, the Commission rushed out a “mini AV” app this year ahead of schedule, citing an urgent need to address concerns about children and the harms that may come to them online.

However, this proposed solution directly ties national ID to an age verification method. It also invites mission creep over what other types of verification could be done in EU member states once it is fully deployed: while the focus of the “mini AV” app is for now on verifying age, its release to the public means that the infrastructure to expand ID checks to other purposes is already in place, should governments mandate that expansion in the future.

Without the proper safeguards, this infrastructure could be leveraged inappropriately—all the more reason why lawmakers should explore more holistic approaches to children's safety.

Ways Forward

The internet is an essential resource for young people and adults to access information, explore community, and find themselves. The issue of online safety is not solved through technology alone, and young people deserve a more intentional approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves.

Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians to pursue what is best, not what is easy; in the meantime, we’ll continue fighting for the rights of all users on the internet in 2026.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

EU's New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights

4 December 2025 at 13:04

The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.

The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect and suffers from uneven enforcement, complexities and certain administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, will significantly weaken privacy protections for users in the name of cutting red tape.

It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted. The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy.

Let’s break it down. 

 Changing What Constitutes Personal Data 

 The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, ironically targeting laws that were also argued to promote innovation in Europe. It might also stave off tariffs the U.S. has threatened to levy, thanks in part to heavy lobbying from Meta and tech lobbying groups.  

 The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.  

 The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue. 

This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring to separate identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.

Privileging AI 

In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” which gives AI companies a broad legal basis to process personal data, unless individuals actively object. The proposals gesture towards organisational and technical safeguards but leave companies broad discretion.  

Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be made to remove them from AI models or training sets where they appear. However, it is unclear what will count as appropriate or proportionate measures.

Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.  

And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it.

A Broad Reform Beyond the GDPR

There are additional adjustments, many of them troubling, such as changes to rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation about how users’ data are used), and revised data access rights (supposed to tackle abusive requests). An extensive analysis by NGO noyb can be found here.

Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.

Browser Signals and Cookie Fatigue

There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights. 
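
To make that concrete, here is a minimal sketch of how a site might honor such a universal opt-out signal, assuming it follows the existing Global Privacy Control convention of a “Sec-GPC: 1” request header and an Express-style web server. The Digital Omnibus does not yet define the exact signal format, so the header name and the helper functions below are illustrative assumptions, not part of the proposal.

// Minimal sketch (TypeScript, Express): respecting a universal opt-out signal.
// The "Sec-GPC" header follows the Global Privacy Control draft; the Omnibus
// has not fixed a signal format yet, so treat these names as assumptions.
import express, { Request, Response } from "express";

const app = express();

// Returns true if the request carries a universal opt-out signal.
function hasOptOutSignal(req: Request): boolean {
  return req.header("Sec-GPC") === "1";
}

// Hypothetical page renderer, included only to keep the sketch self-contained.
function renderPage(opts: { showConsentBanner: boolean; enableTracking: boolean }): string {
  const banner = opts.showConsentBanner ? '<div id="consent-banner">Cookie choices…</div>' : "";
  return `<html><body>${banner}<main>Article text</main></body></html>`;
}

app.get("/article", (req: Request, res: Response) => {
  if (hasOptOutSignal(req)) {
    // Respect the signal: no consent banner, no non-essential tracking.
    res.send(renderPage({ showConsentBanner: false, enableTracking: false }));
  } else {
    // No signal: ask explicitly, and keep tracking off until consent is given.
    res.send(renderPage({ showConsentBanner: true, enableTracking: false }));
  }
});

app.listen(3000);

Browsers that already implement Global Privacy Control also expose the signal to page scripts as navigator.globalPrivacyControl, which is what lets a single browser setting, rather than a per-site popup, express the user’s choice everywhere at once.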

While this proposal is a step forward, the devil is in the details: First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later. 

Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on.

However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps. 

Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected. 

A Muddled Legal Landscape

The Commission’s use of the "Omnibus" process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.

The result is the opposite of  “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: Businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived again. This sounds like "complification” rather than simplification.

The Digital Package Is Not a Done Deal

Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection.

Simplification doesn't require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into Verschlimmbesserung, the kind of well-meant fix that ends up resembling the infamous Ecce Homo restoration.

After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know

3 December 2025 at 18:19

After a years-long battle over the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures, the Council of the EU, representing EU member states, has at last agreed on a position. The good news is that the most controversial part, the forced requirement to scan encrypted messages, is out. The bad news is there’s more to it than that.

Chat Control has gone through several iterations since it was first introduced, with the EU Parliament backing a position that protects fundamental rights while the Council of the EU spent many months pursuing an intrusive law-enforcement-focused approach. Many proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal. This requirement would fundamentally break end-to-end encryption.

Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the Council agreed on its position, which removed the requirement that forces providers to scan messages on their services. It also comes with strong language to protect encryption, which is good news for users.

But here comes the rub: first, the Council’s position allows for “voluntary” detection, where tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., where there is no comprehensive federal privacy law, voluntary scanning is not technically legal in the EU, though it’s been possible through a derogation set to expire in 2026. It is unclear how this will play out over time, though we are concerned that this approach to voluntary scanning will lead to private mass-scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection. 

With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services. This includes age verification and age assessment measures. We have written about the perils of age verification schemes and recent developments in the EU, where regulators are increasingly focusing on AV to reduce online harms.

If secure messaging platforms like Signal or WhatsApp are required to implement age verification methods, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be available to everyone, everywhere, of all ages, freely and without the requirement to prove their identity. As age verification has started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations, it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline. 

Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull the thread on this and you’re left with a contradictory stance, because an activity is no longer voluntary if it forms part of a formal risk management obligation. While courts might interpret its mention in a risk assessment as an optional measure available to providers that do not use encrypted communication channels, this reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services to perform voluntary scanning if they don’t want to invest in alternative risk mitigation options. It’s largely up to the provider to choose how to mitigate risks, but it’s up to enforcers to decide what is effective. Again, we're concerned about how this will play out in practice.

For the same reason, clear and unambiguous language is needed to prevent authorities from taking a hostile view of what is meant by “allowing encryption” if that means then expecting service providers to implement client-side scanning. We welcome the clear assurance in the text that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.

As we approach the final “trilogue” negotiations of this regulation, we urge EU lawmakers to work on a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk benchmark systems that lead to surveillance in practice.
