India Enforces Mandatory SIM-Binding for Messaging Apps Under New DoT Rules
France's data protection authority found that when visitors clicked the button to reject cookies on Vanity Fair (vanityfair[.]fr), the website continued placing tracking technologies on their devices and reading existing cookies without consent. That violation now costs publisher Les Publications Condé Nast €750,000 in fines, six years after privacy advocacy group NOYB first filed complaints against the media company.
The November 20 sanction by CNIL's restricted committee marks the latest action in France's aggressive campaign to enforce cookie consent requirements under the ePrivacy Directive.
NOYB, the European privacy advocacy organization led by Max Schrems, filed the original public complaint in December 2019 concerning cookies placed on user devices by the Vanity Fair France website. After multiple investigations and discussions with CNIL, Condé Nast received a formal compliance order in September 2021, with proceedings closed in July 2022 based on assurances of corrective action.
CNIL conducted follow-up online investigations in July and November 2023, then again in February 2025, discovering that the publisher had failed to implement compliant cookie practices despite the earlier compliance order. The restricted committee found Les Publications Condé Nast violated obligations under Article 82 of France's Data Protection Act across multiple dimensions.
Investigators discovered cookies requiring consent were placed on visitors' devices as soon as they arrived on vanityfair.fr, even before users interacted with the information banner to express a choice. This automatic placement violated fundamental consent requirements mandating that tracking technologies only be deployed after users provide explicit permission.
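That kind of pre-consent placement is straightforward to spot. As a minimal sketch (in Python, using the requests library), fetching the homepage with a fresh session and listing any cookies that arrive before anyone touches the consent banner demonstrates the passive version of the check; note that this only captures cookies set via HTTP headers, while JavaScript-placed trackers require a real browser, as in the later sketch:

```python
import requests

# Fresh session: no stored cookies, no consent given yet.
session = requests.Session()
resp = session.get("https://www.vanityfair.fr", timeout=10)

# Any cookie listed here was set via HTTP headers before the visitor
# could possibly have interacted with the consent banner.
for cookie in session.cookies:
    print(f"{cookie.name} (domain: {cookie.domain})")
```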
The website also failed to clearly inform users about cookie purposes. Some cookies were categorized as "strictly necessary" and therefore exempt from consent obligations, but useful information about their actual purposes remained unavailable to visitors. This misclassification potentially allowed the publisher to deploy tracking technologies under false pretenses.
Most significantly, consent refusal and withdrawal mechanisms proved completely ineffective. When users clicked the "Refuse All" button in the banner or attempted to withdraw previously granted consent, new cookies subject to consent requirements were nevertheless placed on their devices while existing cookies continued being read.
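Verifying this failure mode takes browser automation: record the cookie jar, click the refusal button, and diff. Below is a rough sketch with Playwright for Python; the button label is an assumption based on the banner described above, and CNIL has not published its actual test procedure:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://www.vanityfair.fr")

    before = {c["name"] for c in page.context.cookies()}

    # Hypothetical selector: the French-language "Refuse All" button.
    page.get_by_role("button", name="Tout refuser").click()
    page.wait_for_timeout(3000)  # give any trackers time to fire anyway

    after = {c["name"] for c in page.context.cookies()}
    browser.close()

    # Cookies appearing only *after* refusal are exactly what CNIL flagged.
    print("New cookies set after refusal:", after - before)
```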
The fine amount takes into account that Condé Nast had already been issued a formal notice in 2021 but failed to correct its practices, along with the number of people affected and various breaches of rules protecting users regarding cookies.
The CNIL fine represents another in a series of NOYB-related enforcement actions, with the French authority previously fining Criteo €40 million in 2023 and Google €325 million earlier in 2025. Spain's AEPD issued a €100,000 fine against Euskaltel in related NOYB litigation.
According to reports, Condé Nast acknowledged violations in its defense but cited technical errors, blamed the Interactive Advertising Bureau's Transparency and Consent Framework for misleading information, and stated the cookies in question fall under the functionality category. The company claimed good faith and cooperative efforts while arguing against public disclosure of the sanction.
French enforcement demonstrates the ePrivacy Directive's teeth in protecting user privacy. CNIL maintains material jurisdiction to investigate and sanction cookie operations affecting French users, with the GDPR's one-stop-shop mechanism not applying since cookie enforcement falls under separate ePrivacy rules transposed into French law.
The authority has intensified actions against dark patterns in consent mechanisms, particularly practices making cookie acceptance easier than refusal. Previous CNIL decisions against Google and Facebook established that websites offering immediate "Accept All" buttons must provide equivalent simple mechanisms for refusing cookies, with multiple clicks to refuse constituting non-compliance.
The six-year timeline from initial complaint to final sanction illustrates both the persistence required in privacy enforcement and the extended timeframes companies exploit while maintaining non-compliant practices generating advertising revenue through unauthorized user tracking.
A lengthy standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law requiring online platforms to detect, report, and remove child sexual abuse material, even as critics warn the measures could enable mass surveillance of private communications.
The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.
The Council position introduces three risk categories for online services based on objective criteria, including service type. Authorities will be able to oblige providers classified as high-risk to contribute to developing technologies that mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.
One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.
At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.
Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.
The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.
The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.
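To make the "database of indicators" concrete: detection systems compare uploaded content against stored fingerprints of material that has already been verified. Production systems rely on perceptual hashes such as PhotoDNA, which survive resizing and re-encoding; the sketch below uses exact SHA-256 hashes purely to illustrate the lookup pattern, and every name in it is illustrative rather than anything specified by the regulation:

```python
import hashlib

# Illustrative indicator database: fingerprints of known, verified material.
# (Placeholder entry: the SHA-256 digest of empty input.)
KNOWN_INDICATORS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_indicator(file_bytes: bytes) -> bool:
    """Exact-hash lookup. Real deployments use perceptual hashing so that
    resized or re-encoded copies of known material still match."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_INDICATORS

print(matches_indicator(b""))     # True: placeholder digest matches
print(matches_indicator(b"cat"))  # False: unknown content
```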
Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or for access to such material disabled. Victims can ask for support from the EU Centre, which will check whether companies involved have removed or disabled access to items victims want taken down.
The breakthrough comes after months of stalled negotiations and a postponed October vote, when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to the opening of private letters.
Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.
Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.
A European Parliament study heavily critiqued the Commission's proposal, concluding that no current technological solutions can detect child sexual abuse material without high error rates affecting all messages, files, and data on a platform. The study also concluded the proposal would undermine end-to-end encryption and the security of digital communications.
Statistics underscore the urgency. Last year, 20.5 million reports covering 63 million files of abuse were submitted to the National Center for Missing and Exploited Children's CyberTipline, and online grooming has increased 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.
Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.
The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the current e-Privacy derogation under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.
The GRC platform market is witnessing strong growth as organizations across the globe focus on strengthening governance, mitigating risks, and meeting evolving compliance demands. According to recent estimates, the market was valued at USD 49.2 billion in 2024 and is projected to reach USD 127.7 billion by 2033, growing at a CAGR of 11.18% between 2025 and 2033.
This GRC platform market growth reflects the increasing need to protect sensitive data, manage cyber risks, and streamline regulatory compliance processes.
As cyberthreats continue to rise, enterprises are turning to GRC platforms to gain centralized visibility into their risk posture. These solutions help organizations identify, assess, and respond to potential risks, ensuring stronger governance and reduced operational disruption.
The market’s momentum is also fueled by heightened regulatory scrutiny and the introduction of new compliance frameworks worldwide. Businesses are under pressure to maintain transparency, accuracy, and accountability in their governance and reporting processes — areas where a GRC platform adds significant value.
By integrating governance, risk, and compliance management into one system, companies can make informed decisions, reduce human error, and ensure consistent adherence to evolving regulations.
The GRC platform market is segmented based on deployment model, solution, component, end-user, and industry vertical.
Deployment Model: The on-premises deployment model dominates the market due to enhanced security and customization options. It is preferred by organizations handling sensitive data or operating under strict regulatory environments.
Solution Type: Compliance management holds the largest market share as businesses prioritize automation of documentation, tracking, and reporting to stay audit-ready.
Component: Software solutions lead the market by offering analytics, policy management, and workflow automation to streamline risk processes.
End User: Medium enterprises represent the largest segment, focusing on scalable solutions that balance security and efficiency.
Industry Vertical: The BFSI sector remains a key adopter due to its complex regulatory landscape and high data security requirements.
Several factors contribute to the rapid expansion of the GRC platform market:
Escalating Cyber Risks: As cyber incidents become more frequent and sophisticated, organizations seek to integrate cybersecurity measures within GRC frameworks. These integrations improve detection, response, and recovery capabilities.
Evolving Compliance Standards: Increasing regulatory pressure drives adoption of GRC solutions to ensure businesses stay aligned with global standards like GDPR, HIPAA, and ISO 27001.
Automation and Efficiency: Advanced GRC software reduces manual reporting and enhances accuracy, enabling faster audit responses and improved decision-making.
Operational Resilience: A robust GRC system ensures business continuity by minimizing vulnerabilities and improving crisis management strategies.
North America currently leads the GRC platform market, supported by mature digital infrastructure and strong regulatory frameworks. Meanwhile, the Asia-Pacific region is emerging as a key growth area, driven by increased cloud adoption and a rising focus on data privacy.
In the coming years, integration with AI, analytics, and threat intelligence tools will transform how organizations approach governance and risk. The market is expected to evolve toward more predictive and adaptive compliance solutions.
As organizations expand their digital ecosystems, threat intelligence has become a vital part of effective risk management. Platforms like Cyble help enterprises identify, monitor, and mitigate emerging cyber risks before they escalate. Integrating such intelligence-driven insights into a GRC platform strengthens visibility and helps build a proactive security posture.
For security leaders aiming to align governance with real-time intelligence, exploring a quick free demo of integrated risk and compliance tools can offer valuable perspective on enhancing organizational resilience.
AI’s growth exposes new risks to data in use. Learn how confidential computing, attestation, and post-quantum security protect AI workloads in the cloud.
The Federal Communications Commission will vote next month to rescind a controversial January 2025 Declaratory Ruling that attempted to impose sweeping cybersecurity requirements on telecommunications carriers by reinterpreting a 1994 wiretapping law.
In an Order on Reconsideration circulated Thursday, the FCC concluded that the previous interpretation was both legally erroneous and ineffective at promoting cybersecurity.
The reversal marks a dramatic shift in the FCC's approach to telecommunications security, moving away from mandated requirements toward voluntary industry collaboration—particularly in response to the massive Salt Typhoon espionage campaign sponsored by China that compromised at least eight U.S. communications companies in 2024.
On January 16, 2025—just five days before a change in administration—the FCC adopted a Declaratory Ruling claiming that section 105 of the Communications Assistance for Law Enforcement Act (CALEA) "affirmatively requires telecommunications carriers to secure their networks from unlawful access to or interception of communications."
CALEA, enacted in 1994, was designed to preserve law enforcement's ability to conduct authorized electronic surveillance as telecommunications technology evolved. Section 105 specifically requires that interception of communications within a carrier's "switching premises" can only be activated with a court order and with intervention by a carrier employee.
The January ruling took this narrow provision focused on lawful wiretapping and expanded it dramatically, interpreting it as requiring carriers to prevent all unauthorized interceptions across their entire networks. The Commission stated that carriers would be "unlikely" to satisfy these obligations without adopting basic cybersecurity practices including role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication.
The ruling emphasized that "enterprise-level implementation of these basic cybersecurity hygiene practices is necessary" because vulnerabilities in any part of a network could provide attackers unauthorized access to surveillance systems. It concluded that carriers could be in breach of statutory obligations if they failed to adopt certain cybersecurity practices—even without formal rules adopted by the Commission.
CTIA – The Wireless Association, NCTA – The Internet & Television Association, and USTelecom – The Broadband Association filed a petition for reconsideration on February 18, arguing that the ruling exceeded the FCC's statutory authority and misinterpreted CALEA.
The new FCC agreed with these concerns, finding three fundamental legal flaws in the January ruling:
Enforcement Authority: The Commission concluded it lacks authority to enforce its interpretation of CALEA without first adopting implementing rules through notice-and-comment rulemaking. CALEA section 108 commits enforcement authority to the courts, not the FCC. The Commission noted that when it previously wanted to enforce CALEA requirements, it codified them as rules in 2006 specifically to gain enforcement authority.
"Switching Premises" Limitation: Section 105 explicitly refers to interceptions "effected within its switching premises," but the ruling appeared to impose obligations across carriers' entire networks. The Commission found this expansion ignored clear statutory limits.
"Interception" Definition: CALEA incorporates the Wiretap Act's definition of "intercept," which courts have consistently interpreted as limited to communications intercepted contemporaneously with transmission—not stored data. The ruling's required practices target both data in transit and at rest, exceeding section 105's scope.
"It was unlawful because the FCC purported to read a statute that required telecommunications carriers to allow lawful wiretaps within a certain portion of their network as a provision that required carriers to adopt specific network management practices in every portion of their network," the new order states.
Rather than mandated requirements, the FCC pointed to voluntary commitments from communications providers following collaborative engagement throughout 2025. In an October 16 ex parte filing, industry associations detailed "extensive, urgent, and coordinated efforts to mitigate operational risks, protect consumers, and preserve national security interests."
Describing the impact of these voluntary measures, the industry associations stated: "The government-industry partnership model of collaboration has enabled communications providers to respond swiftly and agilely to Salt Typhoon, reduce vulnerabilities exposed by the attack, and bolster network cyber defenses."
The Salt Typhoon attacks, disclosed in September 2024, involved a PRC-sponsored advanced persistent threat group infiltrating U.S. communications companies as part of a massive espionage campaign affecting dozens of countries. Critically, the attacks exploited publicly known common vulnerabilities and exposures (CVEs) rather than zero-day vulnerabilities—meaning they targeted avoidable weaknesses rather than previously unknown flaws.
The FCC noted that following its engagement with carriers after Salt Typhoon, providers agreed to implement additional cybersecurity controls representing "a significant change in cybersecurity practices compared to the measures in place in January."
While rescinding the broad CALEA interpretation, the FCC emphasized it continues pursuing targeted cybersecurity regulations in specific areas where it has clear legal authority.
"The Commission is leveraging the full range of the Commission's regulatory, investigatory, and enforcement authorities to protect Americans and American companies from foreign adversaries," the order states, while maintaining that collaboration with carriers, coupled with targeted, legally robust regulatory and enforcement measures, has proven successful.
The FCC also moved to withdraw the Notice of Proposed Rulemaking that accompanied the January Declaratory Ruling, which would have proposed specific cybersecurity requirements for a broad array of service providers. Because the NPRM was never published in the Federal Register, the public comment period never commenced.
The Commission's new approach reflects a bet that voluntary industry cooperation, supported by targeted regulations in specific high-risk areas, will likely prove more effective than sweeping mandates of questionable legal foundation.
A recent Cybernews article offered two clear signs of how fast the world of AI chatbots is growing: a company I had never even heard of had over 150 million app downloads across its portfolio, and it also had an exposed, unprotected Elasticsearch instance.
This needs a bit of an explanation. I had never heard of Vyro AI, a company that probably still doesn’t ring many bells, but its app ImagineArt has over 10 million downloads on Google Play. Vyro AI also markets Chatly, which has over 100,000 downloads, and Chatbotx, a web-based chatbot with about 50,000 monthly visits.
An Elasticsearch instance is a database server running a tool used to quickly store and search large amounts of data. If it lacks passwords, authentication, and network restrictions, it is unprotected against unauthorized visitors: anyone with internet access who happens to find it can read, copy, change, or even delete all of its data.
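As a concrete sketch of what "unprotected" looks like (the hostname is hypothetical): Elasticsearch listens on port 9200 by default, and a single unauthenticated HTTP request tells you whether an instance is open or locked down:

```python
import requests

# Hypothetical host; 9200 is Elasticsearch's default HTTP port.
resp = requests.get("http://db.example.com:9200", timeout=5)

if resp.status_code == 200:
    # An open instance returns its cluster metadata to anyone who asks.
    print("Open instance:", resp.json().get("cluster_name"))
elif resp.status_code == 401:
    # A secured instance demands credentials first.
    print("Authentication required: this instance is protected.")
```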
The researcher who found the database says it covered both production and development environments and stored about 2 to 7 days' worth of logs, amounting to roughly 116 GB of user logs streaming in real time from the company's three popular apps.
The information that was accessible included AI prompts entered by users, bearer authentication tokens, and user agent details.
The researcher found that the database was first indexed by IoT search engines in mid-February. IoT search engines actively find and list devices or servers that anyone can access on the internet. They help users discover vulnerable devices (such as cameras, printers, and smart home gadgets) and also locate open databases.
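Shodan is the best-known of these engines, and its Python client shows how little effort "stumbling" takes. Here is a sketch, assuming a valid API key; the query string is a common pattern for exposed Elasticsearch servers, not the one used against Vyro AI:

```python
import shodan  # pip install shodan

api = shodan.Shodan("YOUR_API_KEY")  # requires a Shodan account

# A typical query for internet-facing Elasticsearch servers.
results = api.search('product:"Elastic" port:9200')

print(f"{results['total']} exposed instances currently indexed")
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("org", "unknown org"))
```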
This means attackers have had months to "stumble" over this open database. With the information inside, they could have taken over user accounts, accessed chat histories and generated images, and made fraudulent AI credit purchases.
Generative AI has found a place in many homes and even more companies, which means there is a lot of money to be made.
But the companies delivering these AI chatbots feel they can only stay relevant by pushing out new products. So their engineering efforts go where they can control the cash flow, and security and privacy concerns are secondary at best.
Just looking at the last few months, we have reported about:
As diverse as the causes of these data breaches are (they stem from a combination of human error, platform weaknesses, and architectural flaws), the calls to do something about them are starting to be heard.
Hopefully, 2025 will be remembered as a starting point for compliance regulation in the AI chatbot landscape.
The AI Act is a European regulation on artificial intelligence (AI). The Act entered into force on August 1, 2024, and is the first comprehensive regulation on AI by a major regulator anywhere.
The Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.
Although not completely ironed out, the NIS2 Directive is destined to have significant implications for AI providers, especially those operating in the EU or serving EU customers. Among others, AI model endpoints, APIs, and data pipelines must be protected to prevent breaches and attacks, ensuring secure deployment and operation.
And, although not cybersecurity related, the California State Assembly took a big step toward regulating AI on September 10, 2025, passing SB 243: a bill that aims to regulate AI companion chatbots in order to protect minors and vulnerable users. One of the major requirements is repeated warnings that the user is “talking to” an AI chatbot and not a real person, and that they should take a break.
We don’t just report on data privacy—we help you remove your personal information
Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.