

India Brings AI-Generated Content Under Formal Regulation with IT Rules Amendment

12 February 2026 at 04:28


The Central Government has formally brought AI-generated content within India’s regulatory framework for the first time. Through notification G.S.R. 120(E), issued by the Ministry of Electronics and Information Technology (MeitY) and signed by Joint Secretary Ajit Kumar, amendments were introduced to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised rules take effect from February 20, 2026.

The move marks a significant shift in Indian cybersecurity and digital governance policy. While the Information Technology Act, 2000, has long addressed unlawful online conduct, these amendments explicitly define and regulate “synthetically generated information” (SGI), placing AI-generated content under structured compliance obligations.

What the Law Now Defines as “Synthetically Generated Information” 

The notification inserts new clauses into Rule 2 of the 2021 Rules. It defines “audio, visual or audio-visual information” broadly to include any audio, image, photograph, video, sound recording, or similar content created, generated, modified, or altered through a computer resource.

More critically, clause (wa) defines “synthetically generated information” as content that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true and that depicts or portrays an individual or event in a way likely to be perceived as indistinguishable from a natural person or real-world occurrence.

This definition clearly encompasses deepfake videos, AI-generated voiceovers, face-swapped images, and other forms of AI-generated content designed to simulate authenticity. The framing is deliberate: the concern is not merely digital alteration but deception, that is, content that could reasonably be mistaken for reality.

At the same time, the amendment carves out exceptions. Routine or good-faith editing, such as color correction, formatting, transcription, compression, accessibility improvements, translation, or technical enhancement, does not qualify as synthetically generated information, provided the underlying substance or meaning is not materially altered. Educational materials, draft templates, or conceptual illustrations also fall outside the SGI category unless they create a false document or false electronic record. This distinction attempts to balance innovation in information technology with protection against misuse.

New Duties for Intermediaries 

The amendments substantially revise Rule 3, expanding intermediary obligations. Platforms must inform users, at least once every three months and in English or any Eighth Schedule language, that non-compliance with platform rules or applicable laws may lead to suspension, termination, removal of content, or legal liability. Where violations relate to criminal offences, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023, or the Protection of Children from Sexual Offences Act, 2012, mandatory reporting requirements apply.

A new clause (ca) introduces additional obligations for intermediaries that enable or facilitate the creation or dissemination of synthetically generated information. These platforms must inform users that directing their services to create unlawful AI-generated content may attract penalties under laws including the Information Technology Act, the Bharatiya Nyaya Sanhita, 2023, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace Act, 2013, and the Immoral Traffic (Prevention) Act, 1956.

Consequences for violations may include immediate content removal, suspension or termination of accounts, disclosure of the violator’s identity to victims, and reporting to authorities where offences require mandatory reporting. Compliance timelines have also been tightened: content removal in response to valid orders must now occur within three hours instead of thirty-six, certain grievance response windows have been reduced from fifteen days to seven, and some urgent compliance requirements now demand action within two hours.

Due Diligence and Labelling Requirements for AI-generated Content 

A new Rule 3(3) imposes explicit due diligence obligations for AI-generated content. Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating synthetically generated information that violates the law.

This includes content containing child sexual abuse material, non-consensual intimate imagery, obscene or sexually explicit material, false electronic records, or content related to explosive materials or arms procurement. It also includes deceptive portrayals of real individuals or events intended to mislead.

For lawful AI-generated content that does not violate these prohibitions, the rules mandate prominent labelling. Visual content must carry clearly visible notices, and audio content must include a prefixed disclosure. Additionally, such content must be embedded with permanent metadata or other provenance mechanisms, including a unique identifier linking the content to the intermediary computer resource, where technically feasible. Platforms are expressly prohibited from enabling the suppression or removal of these labels or metadata.
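To make the labelling and provenance duty concrete, here is a minimal illustrative sketch in Python, assuming the Pillow library is available. The metadata keys, the identifier scheme, and the `label_synthetic_image` helper are hypothetical; the Rules do not prescribe any specific format.

```python
# Illustrative sketch only: adds a visible notice and provenance metadata
# to a PNG, in the spirit of Rule 3(3). The metadata keys and identifier
# format are invented for illustration, not prescribed by the Rules.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo
import uuid

def label_synthetic_image(src_path: str, dst_path: str, platform: str) -> str:
    """Add a clearly visible notice and embed provenance metadata."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Clearly visible notice, as the rules require for visual content.
    draw.text((10, 10), "AI-GENERATED CONTENT", fill=(255, 255, 255))

    # Unique identifier linking the content to the intermediary's
    # computer resource (identifier scheme assumed for illustration).
    provenance_id = f"{platform}:{uuid.uuid4()}"
    meta = PngInfo()
    meta.add_text("sgi-label", "synthetically-generated-information")
    meta.add_text("sgi-provenance-id", provenance_id)

    img.save(dst_path, "PNG", pnginfo=meta)
    return provenance_id
```

In practice, durable provenance would more likely rely on a standard such as C2PA Content Credentials, since plain PNG text chunks are easily stripped on re-encoding, which is precisely the kind of suppression the rules prohibit platforms from enabling.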

Enhanced Obligations for Social Media Intermediaries 

Rule 4 introduces an additional compliance layer for significant social media intermediaries. Before allowing publication, these platforms must require users to declare whether content is synthetically generated. They must deploy technical measures to verify the accuracy of that declaration, and content confirmed as AI-generated must be clearly labelled before publication.

If a platform knowingly permits or fails to act on unlawful synthetically generated information, it may be deemed to have failed its due diligence obligations. The amendments also align terminology with India’s evolving criminal code, replacing references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023.
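A minimal sketch of how the declare-verify-label gate described in Rule 4 might be structured, assuming some automated classifier exists; the detector score, threshold, and field names below are placeholders, not mandated tooling:

```python
# Illustrative pre-publication gate for a significant social media
# intermediary: user declaration, automated verification, then labelling.
# The detector score and threshold are placeholders, not mandated tooling.
from dataclasses import dataclass

SYNTHETIC_THRESHOLD = 0.8  # assumed classifier operating point

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # user's declaration at upload time
    detector_score: float     # 0.0-1.0 from a hypothetical SGI classifier

def gate(upload: Upload) -> dict:
    detected = upload.detector_score >= SYNTHETIC_THRESHOLD
    # Content is treated as SGI if the user declares it, or if the
    # verification step contradicts a "not synthetic" declaration.
    is_sgi = upload.declared_synthetic or detected
    return {
        "content_id": upload.content_id,
        "label_before_publication": is_sgi,
        "declaration_mismatch": detected and not upload.declared_synthetic,
    }

# Example: an undeclared upload that the classifier flags as synthetic.
print(gate(Upload("c-101", declared_synthetic=False, detector_score=0.93)))
```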

Implications for Indian Cybersecurity and Digital Platforms 

The February 2026 amendment reflects a decisive step in Indian cybersecurity policy. Rather than banning AI-generated content outright, the government has opted for traceability, transparency, and technical accountability. The focus is on preventing deception, protecting individuals from reputational harm, and ensuring rapid response to unlawful synthetic media. For platforms operating within India’s Information Technology ecosystem, compliance will require investment in automated detection systems, content labelling infrastructure, metadata embedding, and accelerated grievance redressal workflows. For users, the regulatory signal is clear: generating deceptive synthetic media is no longer merely unethical; it may trigger direct legal consequences. As AI tools continue to scale, the regulatory framework introduced through G.S.R. 120(E) marks India’s formal recognition that AI-generated content is not a fringe concern but a central governance challenge in the digital age. 

European Commission Launches Fresh DSA Investigation Into X Over Grok AI Risks

27 January 2026 at 01:11


The European Commission has launched a new formal investigation into X under the Digital Services Act (DSA), intensifying regulatory scrutiny over the platform’s use of its AI chatbot, Grok. Announced on January 26, the move follows mounting concerns that Grok’s image-generation and recommender functionalities may have exposed users in the EU to illegal and harmful content, including manipulated sexually explicit images and material that could amount to child sexual abuse material (CSAM). This latest European Commission investigation into X runs in parallel with an extension of an ongoing probe first opened in December 2023. The Commission will now examine whether X properly assessed and mitigated the systemic risks of integrating Grok’s functionalities into its platform in the EU, as required under the DSA.

Focus on Grok AI and Illegal Content Risks

At the core of the new proceedings is whether X fulfilled its obligations to assess and reduce risks stemming from Grok AI. The Commission said the risks appear to have already materialised, exposing EU citizens to serious harm. Regulators will investigate whether X:
  • Diligently assessed and mitigated systemic risks, including the dissemination of illegal content, negative effects related to gender-based violence, and serious consequences for users’ physical and mental well-being.
  • Conducted and submitted an ad hoc risk assessment report to the Commission for Grok’s functionalities before deploying them, given their critical impact on X’s overall risk profile.
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the Digital Services Act. The Commission stressed that the opening of formal proceedings does not prejudge the outcome but confirmed that an in-depth investigation will now proceed as a matter of priority.

Recommender Systems Also Under Expanded Scrutiny

In a related step, the European Commission has extended its December 2023 investigation into X’s recommender systems. This expanded review will assess whether X properly evaluated and mitigated all systemic risks linked to how its algorithms promote content, including the impact of its recently announced switch to a Grok-based recommender system. As a designated very large online platform (VLOP) under the DSA, X is legally required to identify, assess, and reduce systemic risks arising from its services in the EU. These risks include the spread of illegal content and threats to fundamental rights, particularly those affecting minors.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, underlined the seriousness of the case in a statement: “Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service.”

Earlier this month, a European Commission spokesperson had also addressed the issue while speaking to journalists in Brussels, calling the matter urgent and unacceptable. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

International Pressure Builds Around Grok AI

The investigation comes against a backdrop of rising regulatory pressure worldwide over Grok AI’s image-generation capabilities. On January 16, X announced changes to Grok aimed at preventing the creation of non-consensual sexualised images, including content that critics say amounts to CSAM. The update followed weeks of scrutiny and reports of explicit material generated using Grok.

In the United States, California Attorney General Rob Bonta confirmed on January 14 that his office had opened an investigation into xAI, the company behind Grok, over reports describing the depiction of women and children in explicit situations. Bonta called the reports “shocking” and urged immediate action, saying his office is examining whether the company may have violated the law. U.S. lawmakers have also stepped in: on January 12, three senators urged Apple and Google to remove X and Grok from their app stores, arguing that the chatbot had repeatedly violated app store policies related to abusive and exploitative content.

Next Steps in the European Commission Investigation Into X

As part of the Digital Services Act (DSA) enforcement process, the Commission will continue gathering evidence by sending additional requests for information, conducting interviews, or carrying out inspections. Interim measures could be imposed if X fails to make meaningful adjustments to its service. The Commission is also empowered to adopt a non-compliance decision or accept commitments from X to remedy the issues under investigation. Notably, the opening of formal proceedings shifts enforcement authority to the Commission, relieving national Digital Services Coordinators of their supervisory powers for the suspected infringements. The investigation complements earlier DSA proceedings that resulted in a €120 million fine against X in December 2025 for deceptive design, lack of advertising transparency, and insufficient data access for researchers. With Grok AI now firmly in regulators’ sights, the outcome of this probe could have major implications for how AI-driven features are governed on large online platforms across the EU.

EU to Phase Out ‘High-risk’ Mobile and Telecom Network Products

21 January 2026 at 15:52


The European Commission has proposed a new cybersecurity legislative package that proponents say will strengthen the security of the EU's Information and Communication Technologies (ICT) supply chains by phasing out “high-risk” mobile and telecom network products from countries deemed to be risky. In a statement, the Commission said the revised Cybersecurity Act “will enable the mandatory derisking of European mobile telecommunications networks from high-risk third-country suppliers, building on the work already carried out under the 5G security toolbox.” The legislation refers to networks more broadly: “ICT components or components that include ICT components provided by high-risk suppliers shall be phased out from the key ICT assets of mobile, fixed and satellite electronic communication networks.” Mobile networks would have 36 months to comply with the legislation. Transition periods for fixed and satellite electronic communications networks will be specified by the Commission through implementing acts.

Russia, China May Be Among ‘High-risk’ Telecom Network Suppliers

The legislation is short on specifics, leaving many of the details to be worked out after passage, but it appears that telecom network suppliers from Russia and China may be targeted under the legislation and implementing regulations. At one point the legislation cites a 2023 European Parliament resolution on foreign interference in democratic processes, stating: “The European Parliament called on the Commission to develop binding ICT supply chain security legislation that addresses non-technical risk and to ‘exclude the use of equipment and software from manufacturers based in high-risk countries, particularly China and Russia’. Members of the European Parliament also called for urgent action to secure telecommunications infrastructure against undue foreign influence and security risks.”

China’s foreign ministry and Huawei have already criticized the legislation, which would formalize a process under way since 2020 to remove network equipment perceived as high-risk. “A legislative proposal to limit or exclude non-EU suppliers based on country of origin, rather than factual evidence and technical standards, violates the EU’s basic legal principles of fairness, non-discrimination, and proportionality, as well as its WTO obligations,” a Huawei spokesperson was quoted by Reuters as saying.

The legislation will apply to 18 critical sectors, which Reuters said will include detection equipment, connected and automated vehicles, electricity supply and storage systems, water supply systems, and drones and counter‑drone systems. Cloud services, medical devices, surveillance equipment, space services and semiconductors would also be affected.

The EU’s 'Secure by Design' Certification Process

The legislative package and revised Cybersecurity Act are aimed at ensuring “that products reaching EU citizens are cyber-secure by design through a simpler certification process,” the Commission’s statement said. The legislation also bolsters the EU Agency for Cybersecurity (ENISA) in its role in managing cybersecurity threats and certification processes.

“The new Cybersecurity Act aims to reduce risks in the EU's ICT supply chain from third-country suppliers with cybersecurity concerns,” the Commission said. “It sets out a trusted ICT supply chain security framework based on a harmonised, proportionate and risk-based approach. This will enable the EU and Member States to jointly identify and mitigate risks across the EU's 18 critical sectors, considering also economic impacts and market supply.”

The Act will ensure “that products and services reaching EU consumers are tested for security in a more efficient way,” the Commission stated. That will be accomplished through an updated European Cybersecurity Certification Framework (ECCF), which “will bring more clarity and simpler procedures, allowing certification schemes to be developed within 12 months by default.” Certification schemes managed by ENISA “will become a practical, voluntary tool for businesses.” In addition to ICT products, services, processes and managed security services, companies and organizations “will be able to certify their cyber posture to meet market needs. Ultimately, the renewed ECCF will be a competitive asset for EU businesses. For EU citizens, businesses and public authorities, it will ensure a high level of security and trust in complex ICT supply chains,” the Commission stated.

The legislative package also includes amendments to the NIS2 Directive “to increase legal clarity,” and aims to lower compliance costs for 28,700 companies in keeping with the Digital Omnibus process. Amendments will “simplify jurisdictional rules, streamline the collection of data on ransomware attacks and facilitate the supervision of cross-border entities with ENISA's reinforced coordinating role.” The Cybersecurity Act will become effective after approval by the European Parliament and the Council of the EU, while Member States will have one year to implement NIS2 Directive amendments after adoption.

Beyond Compliance: How India’s DPDP Act Is Reshaping the Cyber Insurance Landscape

19 December 2025 at 00:38


By Gauravdeep Singh, Head – State e-Mission Team (SeMT), Ministry of Electronics and Information Technology

The Digital Personal Data Protection (DPDP) Act has fundamentally altered the risk landscape for Indian organisations. Data breaches now trigger mandatory compliance obligations regardless of their origin, transforming incidents that were once purely operational concerns into regulatory events with significant financial and legal implications.

Case Study 1: Cloud Misconfiguration in a Consumer Platform

A prominent consumer-facing platform experienced a data exposure incident when a misconfigured storage bucket on its public cloud infrastructure inadvertently made customer data publicly accessible. While no malicious actor was involved, the incident still constituted a reportable data breach under the DPDP Act framework. The organisation faced several immediate obligations:
  • Notification to affected individuals within prescribed timelines
  • Formal reporting to the Data Protection Board
  • Comprehensive internal investigation and remediation measures
  • Potential penalties for failure to implement reasonable security safeguards as mandated under the Act
Such incidents highlight a critical gap in traditional risk management approaches. The financial exposure—encompassing regulatory penalties, legal costs, remediation expenses, and reputational damage—frequently exceeds conventional cyber insurance coverage limits, particularly when compliance failures are implicated.
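For cloud and security teams, a routine guard against exactly this failure mode is to audit storage buckets’ public-access settings. Below is a minimal sketch using boto3 against AWS S3; the choice of S3 is an assumption for illustration, since the article does not name the cloud provider involved.

```python
# Illustrative audit: flag S3 buckets whose public-access block is missing
# or incomplete, the class of misconfiguration behind this case study.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_locked_down(bucket: str) -> bool:
    """True only if all four public-access-block settings are enabled."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"]
    except ClientError:
        return False  # no configuration at all: treat as potentially exposed
    return all(cfg.get(k, False) for k in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

for b in s3.list_buckets()["Buckets"]:
    if not bucket_is_locked_down(b["Name"]):
        print(f"REVIEW: {b['Name']} may be publicly accessible")
```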

Case Study 2: Ransomware Attack on Healthcare and EdTech Infrastructure

A mid-sized healthcare and education technology provider fell victim to a ransomware attack that encrypted sensitive personal records. Despite successful restoration from backup systems, the organisation confronted extensive regulatory and operational obligations:
  • Forensic assessment to determine whether data confidentiality was compromised
  • Mandatory notification to regulatory authorities and affected data principals
  • Ongoing legal and compliance proceedings
The total cost extended far beyond any ransom demand. Forensic investigations, legal advisory services, public communications, regulatory compliance activities, and operational disruption collectively created substantial financial strain; these are costs that appropriate insurance coverage would have mitigated.

Case Study 3: AI-Enabled Fraud and Social Engineering

The emergence of AI-driven attack vectors has introduced new dimensions of cyber risk. Deepfake technology and sophisticated phishing campaigns now enable threat actors to impersonate senior leadership with unprecedented authenticity, compelling finance teams to authorise fraudulent fund transfers or inappropriate data disclosures. These attacks often circumvent traditional technical security controls because they exploit human trust rather than system vulnerabilities. As a result, organisations are increasingly seeking insurance coverage for social engineering and cyber fraud events, particularly those involving personal data or financial information, that fall outside conventional cybersecurity threat models.

The Evolution of Cyber Insurance in India

The Indian cyber insurance market is undergoing significant transformation in response to the DPDP Act and evolving threat landscape. Modern policies now extend beyond traditional hacking incidents to address:
  • Data breaches resulting from human error or operational failures
  • Third-party vendor and SaaS provider security failures
  • Cloud service disruptions and availability incidents
  • Regulatory investigation costs and legal defense expenses
  • Incident response, crisis management, and public relations support
Organisations are reassessing their coverage adequacy as they recognise that historical policy limits of Rs. 10–20 crore may prove insufficient when regulatory penalties, legal costs, business interruption losses, and remediation expenses are aggregated under the DPDP compliance framework.
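The aggregation point can be made concrete with a back-of-the-envelope calculation. All figures below are invented, purely to illustrate how quickly aggregated exposure can outrun a Rs. 10–20 crore policy limit:

```python
# Hypothetical loss aggregation under a DPDP-era breach. Every figure here
# is invented for illustration only (1 crore = 10 million rupees).
CRORE = 10_000_000
exposure = {
    "regulatory_penalty": 50 * CRORE,
    "legal_and_forensics": 8 * CRORE,
    "notification_and_pr": 3 * CRORE,
    "business_interruption": 12 * CRORE,
}
policy_limit = 20 * CRORE

total = sum(exposure.values())
shortfall = max(0, total - policy_limit)
print(f"Total exposure: Rs. {total / CRORE:.0f} crore")
print(f"Uninsured shortfall at a 20-crore limit: Rs. {shortfall / CRORE:.0f} crore")
```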

The SME and MSME Vulnerability

Small and medium enterprises represent the most vulnerable segment of the market. While many SMEs and MSMEs regularly process personal data, they frequently lack:
  • Mature information security controls and governance frameworks
  • Dedicated compliance and data protection teams
  • Financial reserves to absorb penalties, legal costs, or operational disruption
For organisations in this segment, even a relatively minor cyber incident can trigger prolonged operational shutdowns or, in severe cases, permanent closure. Despite this heightened vulnerability, cyber insurance adoption among SMEs remains disproportionately low, driven primarily by awareness gaps and perceived cost barriers.

Implications for the Cyber Insurance Ecosystem

The Indian cyber insurance market is entering a period of accelerated growth and structural evolution. Several key trends are emerging:
  • Higher policy limits becoming standard practice across industries
  • Enhanced underwriting processes emphasising compliance readiness and data governance maturity
  • Comprehensive coverage integrating legal advisory, forensic investigation, and regulatory support
  • Risk-based pricing models that reward robust data protection practices
Looking ahead, cyber insurance will increasingly be evaluated not merely as a risk-transfer mechanism, but as an indicator of an organisation's overall data protection posture and regulatory preparedness.

DPDP Act and the End of Optional Cyber Insurance

The DPDP Act has fundamentally redefined cyber risk in the Indian context. Data breaches are no longer isolated IT failures; they are regulatory events carrying substantial financial, legal, and reputational consequences. In this environment, cyber insurance is transitioning from a discretionary safeguard to a strategic imperative. Organisations that integrate cyber insurance into a comprehensive data governance and enterprise risk management strategy will be better positioned to navigate the evolving regulatory landscape. Conversely, those that remain uninsured or underinsured may discover that the cost of inadequate preparation far exceeds the investment required for robust protection. (This article reflects the author’s analysis and personal viewpoints and is intended for informational purposes only. It should not be construed as legal or regulatory advice.)

India Enforces Mandatory SIM-Binding for Messaging Apps Under New DoT Rules


India’s Department of Telecommunications (DoT) has introduced a significant shift in how messaging platforms operate in the country, mandating the adoption of SIM-binding as a core security requirement. Under the Telecommunication Cybersecurity Amendment Rules, 2025, all major messaging services, including Telegram, and regional platforms such as Arattai, must ensure that their applications remain continuously linked to an active SIM card on the user’s device.

The mandate is part of the government’s intensified efforts to combat cyber fraud and strengthen nationwide cybersecurity compliance. The directive requires App-Based Communication Service providers to implement persistent SIM-linking within 90 days and submit detailed cybersecurity compliance reports within 120 days. The move seeks to eliminate longstanding gaps in identity verification systems that have enabled malicious actors to misuse Indian mobile numbers from outside the country.

New Rules for SIM-Binding Communication 

According to the new requirements, messaging services must operate only when the user’s active SIM card matches the credentials stored by the app. If a SIM card is removed, replaced, or deactivated, the corresponding app session must immediately cease to function. The rules also extend to web-based interfaces: platforms must automatically log users out at least every six hours, requiring QR-based reauthentication tied to the same active SIM.

These changes aim to reduce the misuse of Indian telecom identifiers, which authorities say have been exploited for spoofing, impersonation, and other forms of cyber fraud. By enforcing strict SIM-binding, the DoT intends to establish a clearer traceability chain between the user, their device, and their telecom credentials.
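A minimal sketch of the session logic these rules describe, assuming the app can obtain the active SIM’s ICCID from a platform telephony API; the six-hour constant comes from the rule text, while the function and parameter names are invented for illustration:

```python
# Illustrative SIM-binding session check. Reading the active SIM's ICCID
# is platform-specific, so it is passed in as a parameter here. Only the
# six-hour limit comes from the rules; everything else is assumed.
import time

WEB_SESSION_MAX_AGE = 6 * 60 * 60  # six hours, per the DoT rules

def session_valid(enrolled_iccid: str,
                  active_iccid: str | None,
                  web_session_started: float | None = None) -> bool:
    # SIM removed, replaced, or deactivated: the session must end.
    if active_iccid is None or active_iccid != enrolled_iccid:
        return False
    # Web sessions additionally expire after six hours, forcing
    # QR-based re-authentication tied to the same active SIM.
    if web_session_started is not None:
        if time.time() - web_session_started > WEB_SESSION_MAX_AGE:
            return False
    return True

# Example: a web session opened seven hours ago fails the check.
print(session_valid("8991000012345678901", "8991000012345678901",
                    web_session_started=time.time() - 7 * 3600))  # False
```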

Why Stricter Controls Were Needed 

Government observations revealed that many communication apps continued functioning even after the linked SIM card was removed. This allowed foreign-based actors to operate accounts associated with Indian mobile numbers without proper authentication. The ability to hijack accounts or mask locations contributed directly to an uptick in cybercrimes, often involving financial scams or identity theft.

Industry groups had previously flagged this vulnerability as well. The Cellular Operators Association of India (COAI), for instance, noted that authentication typically occurs only once, during initial setup, which leaves apps operational even if the SIM is no longer present. By requiring ongoing SIM-binding, authorities aim to close this loophole and establish the reliable verification pathways essential for cybersecurity compliance.

The new mandate draws support from multiple regulatory frameworks, including the Telecommunications Act, 2023, and subsequent cybersecurity rules issued in 2024 and 2025. Platforms that fail to comply could face penalties, service restrictions, or other legal consequences under India’s telecom and cybersecurity laws.

Impact on Platforms and Users 

Messaging platforms must redesign parts of their infrastructure to support real-time SIM authentication and implement secure logout mechanisms for multi-device access. They are also expected to maintain detailed logs and participate in audits to demonstrate cybersecurity compliance.

For users, the changes may introduce constraints. Accessing a messaging app without the original active SIM will no longer be possible, and cross-device flexibility, particularly through desktop or browser-based interfaces, may be reduced by the six-hour logout requirement. However, policymakers argue that these inconveniences are offset by a reduced risk of cyber fraud.

India’s focus on SIM-binding aligns with practices already common in financial services. Banking and UPI applications, for example, require an active SIM for verification to minimize fraud. Other regulators have taken similar steps: earlier in 2025, the Securities and Exchange Board of India (SEBI) proposed linking trading accounts to specific SIM cards and incorporating biometric checks to prevent unauthorized transactions.

India Mandates Pre-Installed Cybersecurity App on Smartphones

In a parallel move to strengthen digital security, India’s telecom ministry has ordered all major smartphone manufacturers, including Apple, Samsung, Vivo, Oppo, and Xiaomi, to pre-install its cybersecurity app Sanchar Saathi on all new devices within 90 days and to push it via updates to existing devices. The app must be installed in a way that users cannot disable or delete it.

Launched in January, Sanchar Saathi has already helped recover over 700,000 lost phones, block 3.7 million stolen devices, and terminate 30 million fraudulent connections, and it assists in tracking devices and preventing counterfeit phones. The app verifies IMEI numbers, blocks stolen devices, and combats scams involving duplicate or spoofed IMEIs.

The move is aimed at strengthening India’s telecom cybersecurity but may face resistance from Apple and privacy advocates, as Apple traditionally opposes pre-installation of government or third-party apps. Industry officials have expressed concerns over privacy, user choice, and operational feasibility, while the government emphasizes the app’s role in digital safety and fraud prevention.
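At the lowest level, IMEI verification of the kind Sanchar Saathi performs rests on the Luhn check digit that every 15-digit IMEI carries. The sketch below shows only that structural check; the app’s full pipeline, which also consults central equipment registries, is not public.

```python
# Structural IMEI validity check via the Luhn algorithm. A real verifier
# would additionally look the IMEI up in equipment registries; this sketch
# covers only the checksum every 15-digit IMEI must satisfy.
def imei_is_valid(imei: str) -> bool:
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:      # double every second digit, per Luhn
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits
        total += d
    return total % 10 == 0

print(imei_is_valid("490154203237518"))  # True: a commonly cited test IMEI
print(imei_is_valid("490154203237519"))  # False: corrupted check digit
```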

French Regulator Fines Vanity Fair Publisher €750,000 for Persistent Cookie Consent Violations

28 November 2025 at 05:49


France's data protection authority discovered that when visitors clicked the button to reject cookies on Vanity Fair (vanityfair[.]fr), the website continued placing tracking technologies on their devices and reading existing cookies without consent, a violation that now costs publisher Les Publications Condé Nast €750,000 in fines six years after privacy advocate NOYB first filed complaints against the media company.

The November 20 sanction by CNIL's restricted committee marks the latest enforcement action in France's aggressive campaign to enforce cookie consent requirements under the ePrivacy Directive.

NOYB, the European privacy advocacy organization led by Max Schrems, filed the original public complaint in December 2019 concerning cookies placed on user devices by the Vanity Fair France website. After multiple investigations and discussions with CNIL, Condé Nast received a formal compliance order in September 2021, with proceedings closed in July 2022 based on assurances of corrective action.

Repeated Violations Despite Compliance Order

CNIL conducted follow-up online investigations in July and November 2023, then again in February 2025, discovering that the publisher had failed to implement compliant cookie practices despite the earlier compliance order. The restricted committee found Les Publications Condé Nast violated obligations under Article 82 of France's Data Protection Act across multiple dimensions.

Investigators discovered cookies requiring consent were placed on visitors' devices as soon as they arrived on vanityfair.fr, even before users interacted with the information banner to express a choice. This automatic placement violated fundamental consent requirements mandating that tracking technologies only be deployed after users provide explicit permission.

The website lacked clarity in information provided to users about cookie purposes. Some cookies appeared categorized as "strictly necessary" and therefore exempt from consent obligations, but useful information about their actual purposes remained unavailable to visitors. This misclassification potentially allowed the publisher to deploy tracking technologies under false pretenses.

Most significantly, consent refusal and withdrawal mechanisms proved completely ineffective. When users clicked the "Refuse All" button in the banner or attempted to withdraw previously granted consent, new cookies subject to consent requirements were nevertheless placed on their devices while existing cookies continued being read.
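Audits of this kind can be reproduced with a headless browser: open the site in a fresh profile, list cookies before any interaction, then list them again after clicking the refuse control. Here is a minimal sketch using Playwright for Python; the URL and the reject-button selector are placeholders, since banner markup varies by site.

```python
# Illustrative cookie audit: list cookies set before any consent choice,
# then after clicking a "refuse" control. The URL and selector below are
# placeholders; real banners need site-specific selectors.
from playwright.sync_api import sync_playwright

URL = "https://example.com"          # site under test (placeholder)
REJECT_SELECTOR = "text=Refuse All"  # placeholder selector

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()  # fresh profile: no prior consent state
    page = context.new_page()
    page.goto(URL)

    before = context.cookies()
    print(f"Cookies before any interaction: {len(before)}")
    for c in before:
        print(" ", c["name"], c["domain"])

    page.click(REJECT_SELECTOR)
    page.wait_for_timeout(3000)      # allow post-refusal requests to fire

    after = context.cookies()
    print(f"Cookies after refusal: {len(after)}")
    browser.close()
```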

Escalating French Enforcement Actions

The fine amount reflects the fact that Condé Nast had already been issued a formal notice in 2021 but failed to correct its practices, as well as the number of people affected and the multiple breaches of the rules protecting users with respect to cookies.

The CNIL fine represents another in a series of NOYB-related enforcement actions, with the French authority previously fining Criteo €40 million in 2023 and Google €325 million earlier in 2025. Spain's AEPD issued a €100,000 fine against Euskaltel in related NOYB litigation.

Also read: Google Slapped with $381 Million Fine in France Over Gmail Ads, Cookie Consent Missteps

According to reports, Condé Nast acknowledged violations in its defense but cited technical errors, blamed the Internet Advertising Bureau's Transparency and Consent Framework for misleading information, and stated the cookies in question fall under the functionality category. The company claimed good faith and cooperative efforts while arguing against public disclosure of the sanction.

The Cookie Consent Conundrum

French enforcement demonstrates the ePrivacy Directive's teeth in protecting user privacy. CNIL maintains material jurisdiction to investigate and sanction cookie operations affecting French users, with the GDPR's one-stop-shop mechanism not applying since cookie enforcement falls under separate ePrivacy rules transposed into French law.

The authority has intensified actions against dark patterns in consent mechanisms, particularly practices making cookie acceptance easier than refusal. Previous CNIL decisions against Google and Facebook established that websites offering immediate "Accept All" buttons must provide equivalent simple mechanisms for refusing cookies, with multiple clicks to refuse constituting non-compliance.

The six-year timeline from initial complaint to final sanction illustrates both the persistence required in privacy enforcement and the extended periods during which companies can maintain non-compliant practices that generate advertising revenue through unauthorized user tracking.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47


A lengthy standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law requiring online platforms to detect, report, and remove child sexual abuse material. Critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council’s position introduces three risk categories of online services, based on objective criteria including service type. Authorities would be able to oblige providers classified in the high-risk category to contribute to developing technologies that mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.
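At its simplest, detection of known material works by comparing a fingerprint of each file against a database of indicators derived from previously identified content. The sketch below illustrates only that lookup model, using a cryptographic hash and an invented indicator value; deployed systems rely on perceptual hashing (for example PhotoDNA) so that re-encoded or slightly altered copies still match.

```python
# Illustrative known-content matching against an indicator database.
# Real systems use perceptual hashes (e.g., PhotoDNA) so that re-encoded
# or slightly altered copies still match; SHA-256 here matches only
# byte-identical files and is shown purely to illustrate the lookup model.
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Indicator set as maintained by a central hub (hypothetical value).
indicator_db = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_known_indicator(path: str) -> bool:
    return sha256_file(path) in indicator_db
```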

Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or for access to such material disabled. Victims can ask for support from the EU Centre, which will check whether companies involved have removed or disabled access to items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote, when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to the opening of other people's letters.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Also read: EU Chat Control Proposal to Prevent Child Sexual Abuse Slammed by Critics

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament's study heavily critiqued the Commission's proposal, concluding there aren't currently technological solutions that can detect child sexual abuse material without resulting in high error rates affecting all messages, files and data in platforms. The study also concluded the proposal would undermine end-to-end encryption and security of digital communications.

Scope of the Crisis

Statistics underscore the urgency: 20.5 million reports covering 63 million files of abuse were submitted to the National Center for Missing and Exploited Children's CyberTipline last year, with online grooming increasing 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, and at least one in five children in Europe is a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the current e-Privacy derogation under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.
