India Enforces Mandatory SIM-Binding for Messaging Apps Under New DoT Rules

SIM-binding

India’s Department of Telecommunications (DoT) has introduced a shift in the way messaging platforms operate in the country, mandating the adoption of SIM-binding as a core security requirement. Under the Telecommunication Cybersecurity Amendment Rules, 2025, all major messaging services, including Telegram, and regional platforms such as Arattai, must ensure that their applications remain continuously linked to an active SIM card on the user’s device.

The mandate is part of the government’s intensified efforts to combat cyber fraud and strengthen nationwide cybersecurity compliance. The directive requires App-Based Communication Service providers to implement persistent SIM-linking within 90 days and submit detailed cybersecurity compliance reports within 120 days. The move seeks to eliminate longstanding gaps in identity verification systems that have enabled malicious actors to misuse Indian mobile numbers from outside the country.

New Rules for SIM-Binding Communication 

According to the new requirements, messaging services must operate only when the user’s active SIM card matches the credentials stored by the app. If a SIM card is removed, replaced, or deactivated, the corresponding app session must immediately cease to function. The rules also extend to web-based interfaces: platforms must automatically log users out at least every six hours, requiring a QR-based reauthentication that is tied to the same active SIM.

These changes aim to reduce the misuse of Indian telecom identifiers, which authorities say have been exploited for spoofing, impersonation, and other forms of cyber fraud. By enforcing strict SIM-binding, the DoT intends to establish a clearer traceability chain between the user, their device, and their telecom credentials.
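In practical terms, the client must keep re-checking the SIM rather than authenticating only once at setup. The snippet below is a minimal, hypothetical sketch of such an enforcement loop; the helper get_active_sim_identifiers() and the session object stand in for platform telephony and session APIs that the rules themselves do not specify.

```python
import time

SESSION_CHECK_INTERVAL = 60        # seconds between SIM checks (illustrative value)
WEB_SESSION_MAX_AGE = 6 * 60 * 60  # six-hour cap on web sessions per the DoT rules


def get_active_sim_identifiers():
    """Hypothetical platform call returning identifiers (e.g. ICCID/IMSI) for
    SIMs currently active in the device; a real app would use the mobile OS
    telephony APIs here."""
    raise NotImplementedError


def enforce_sim_binding(stored_binding, session):
    """Terminate the session if the bound SIM is no longer active.

    `stored_binding` is the identifier captured at registration; `session`
    is assumed to expose `is_active()`, `is_web`, `started_at`, and `logout()`.
    """
    while session.is_active():
        if stored_binding not in get_active_sim_identifiers():
            # SIM removed, replaced, or deactivated: the session must stop.
            session.logout(reason="bound SIM no longer active")
            break
        if session.is_web and time.time() - session.started_at > WEB_SESSION_MAX_AGE:
            # Web clients must be logged out at least every six hours and
            # re-authenticated via a QR scan tied to the same active SIM.
            session.logout(reason="six-hour web session limit reached")
            break
        time.sleep(SESSION_CHECK_INTERVAL)
```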

Why Stricter Controls Were Needed 

Government observations revealed that many communication apps continued functioning even after the linked SIM card was removed. This allowed foreign-based actors to operate accounts associated with Indian mobile numbers without proper authentication. The ability to hijack accounts or mask locations contributed directly to an uptick in cybercrimes, often involving financial scams or identity theft.

Industry groups had previously flagged this vulnerability as well. The Cellular Operators Association of India (COAI), for instance, noted that authentication typically occurs only once, during initial setup, which leaves apps operational even if the SIM is no longer present. By requiring ongoing SIM-binding, authorities aim to close this loophole and establish reliable verification pathways essential for cybersecurity compliance.

The new mandate draws support from multiple regulatory frameworks, including the Telecommunications Act, 2023, and subsequent cybersecurity rules issued in 2024 and 2025. Platforms that fail to comply could face penalties, service restrictions, or other legal consequences under India’s telecom and cybersecurity laws.

Impact on Platforms and Users 

Messaging platforms must redesign parts of their infrastructure to support real-time SIM authentication and implement secure logout mechanisms for multi-device access. They are also expected to maintain detailed logs and participate in audits to demonstrate cybersecurity compliance.

For users, the changes may introduce constraints. Accessing a messaging app without the original active SIM will no longer be possible. Cross-device flexibility, particularly through desktop or browser-based interfaces, may also be reduced due to the six-hour logout requirement. However, policymakers argue that these inconveniences are offset by a reduced risk of cyber fraud.

India’s focus on SIM-binding aligns with practices already common in financial services. Banking and UPI applications, for example, require an active SIM for verification to minimize fraud. Other regulators have taken similar steps: earlier in 2025, the Securities and Exchange Board of India (SEBI) proposed linking trading accounts to specific SIM cards and incorporating biometric checks to prevent unauthorized transactions.

India Mandates Pre-Installed Cybersecurity App on Smartphones

In a parallel move to strengthen digital security, India’s telecom ministry has ordered all major smartphone manufacturers, including Apple, Samsung, Vivo, Oppo, and Xiaomi, to pre-install its cybersecurity app Sanchar Saathi on all new devices within 90 days, and push it via updates to existing devices. The app must be installed in a way that users cannot disable or delete it.

Launched in January, Sanchar Saathi has already helped recover over 700,000 lost phones, block 3.7 million stolen devices, and terminate 30 million fraudulent connections, and it assists in tracking devices and preventing counterfeit phones. The app verifies IMEI numbers, blocks stolen devices, and combats scams involving duplicate or spoofed IMEIs. The move is aimed at strengthening India’s telecom cybersecurity but may face resistance from Apple and privacy advocates, as Apple traditionally opposes pre-installation of government or third-party apps. Industry officials have expressed concerns over privacy, user choice, and operational feasibility, while the government emphasizes the app’s role in digital safety and fraud prevention.
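Substantive IMEI verification means checking a handset against the DoT’s central equipment registry, which is what Sanchar Saathi does. Purely as an illustration of the format-level part of such a check, the 15th digit of an IMEI is a Luhn check digit that can be validated locally; a minimal sketch:

```python
def imei_check_digit_valid(imei: str) -> bool:
    """Format-level IMEI validation using the Luhn algorithm.

    This only confirms the check digit is internally consistent; spotting
    stolen, duplicate, or spoofed IMEIs requires a registry lookup like the
    one Sanchar Saathi performs against DoT databases.
    """
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:   # double every second digit, counting from the left
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


print(imei_check_digit_valid("490154203237518"))  # commonly cited test IMEI: True
```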

French Regulator Fines Vanity Fair Publisher €750,000 for Persistent Cookie Consent Violations

28 November 2025 at 05:49

Vanity Fair, Condé Nast, Cookie Consent

France's data protection authority discovered that when visitors clicked the button to reject cookies on Vanity Fair (vanityfair[.]fr), the website continued placing tracking technologies on their devices and reading existing cookies without consent, a violation that now costs publisher Les Publications Condé Nast €750,000 in fines six years after privacy advocate NOYB first filed complaints against the media company.

The November 20 sanction by CNIL's restricted committee is the latest move in France's aggressive campaign to enforce cookie consent requirements under the ePrivacy Directive.

NOYB, the European privacy advocacy organization led by Max Schrems, filed the original public complaint in December 2019 concerning cookies placed on user devices by the Vanity Fair France website. After multiple investigations and discussions with CNIL, Condé Nast received a formal compliance order in September 2021, with proceedings closed in July 2022 based on assurances of corrective action.

Repeated Violations Despite Compliance Order

CNIL conducted follow-up online investigations in July and November 2023, then again in February 2025, discovering that the publisher had failed to implement compliant cookie practices despite the earlier compliance order. The restricted committee found Les Publications Condé Nast violated obligations under Article 82 of France's Data Protection Act across multiple dimensions.

Investigators discovered cookies requiring consent were placed on visitors' devices as soon as they arrived on vanityfair.fr, even before users interacted with the information banner to express a choice. This automatic placement violated fundamental consent requirements mandating that tracking technologies only be deployed after users provide explicit permission.

The website lacked clarity in information provided to users about cookie purposes. Some cookies appeared categorized as "strictly necessary" and therefore exempt from consent obligations, but useful information about their actual purposes remained unavailable to visitors. This misclassification potentially allowed the publisher to deploy tracking technologies under false pretenses.

Most significantly, consent refusal and withdrawal mechanisms proved completely ineffective. When users clicked the "Refuse All" button in the banner or attempted to withdraw previously granted consent, new cookies subject to consent requirements were nevertheless placed on their devices while existing cookies continued being read.
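Behaviour like this is straightforward to observe from the outside. The sketch below, which assumes Playwright is installed and uses a placeholder URL and refuse-button selector (real banner markup differs from site to site), simply compares the cookies present before any interaction with those present after clicking the refusal control; it is an informal check, not CNIL's audit methodology.

```python
# Rough cookie-consent check: requires `pip install playwright` and
# `playwright install chromium`. URL and selector below are placeholders.
from playwright.sync_api import sync_playwright

URL = "https://www.example.com"          # substitute the site under review
REFUSE_SELECTOR = "text=Refuse All"      # hypothetical selector for the banner's refuse button

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()
    page = context.new_page()

    page.goto(URL, wait_until="networkidle")
    before_choice = context.cookies()    # cookies placed before any user interaction

    page.click(REFUSE_SELECTOR)
    page.wait_for_timeout(3000)          # give trackers time to fire after refusal
    after_refusal = context.cookies()

    new_after_refusal = {c["name"] for c in after_refusal} - {c["name"] for c in before_choice}
    print(f"Cookies before any choice: {len(before_choice)}")
    print(f"New cookies placed despite refusal: {sorted(new_after_refusal)}")

    browser.close()
```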

Escalating French Enforcement Actions

The fine amount takes into account that Condé Nast had already been issued a formal notice in 2021 but failed to correct its practices, along with the number of people affected and various breaches of rules protecting users regarding cookies.

The CNIL fine represents another in a series of NOYB-related enforcement actions, with the French authority previously fining Criteo €40 million in 2023 and Google €325 million earlier in 2025. Spain's AEPD issued a €100,000 fine against Euskaltel in related NOYB litigation.

Also read: Google Slapped with $381 Million Fine in France Over Gmail Ads, Cookie Consent Missteps

According to reports, Condé Nast acknowledged violations in its defense but cited technical errors, blamed the Interactive Advertising Bureau's Transparency and Consent Framework for misleading information, and stated the cookies in question fall under the functionality category. The company claimed good faith and cooperative efforts while arguing against public disclosure of the sanction.

The Cookie Consent Conundrum

French enforcement demonstrates the ePrivacy Directive's teeth in protecting user privacy. CNIL maintains material jurisdiction to investigate and sanction cookie operations affecting French users, with the GDPR's one-stop-shop mechanism not applying since cookie enforcement falls under separate ePrivacy rules transposed into French law.

The authority has intensified actions against dark patterns in consent mechanisms, particularly practices making cookie acceptance easier than refusal. Previous CNIL decisions against Google and Facebook established that websites offering immediate "Accept All" buttons must provide equivalent simple mechanisms for refusing cookies, with multiple clicks to refuse constituting non-compliance.

The six-year timeline from initial complaint to final sanction illustrates both the persistence required in privacy enforcement and the extended timeframes companies can exploit while maintaining non-compliant practices that generate advertising revenue through unauthorized user tracking.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47

Child Sexual Abuse

A lengthy standoff between privacy rights and child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law requiring online platforms to detect, report, and remove child sexual abuse material, even as critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council introduces three risk categories of online services based on objective criteria including service type, with authorities able to oblige online service providers classified in the high-risk category to contribute to developing technologies to mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.

Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or for access to such material disabled. Victims can ask for support from the EU Centre, which will check whether companies involved have removed or disabled access to items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to opening other people's letters.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Also read: EU Chat Control Proposal to Prevent Child Sexual Abuse Slammed by Critics

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament's study heavily critiqued the Commission's proposal, concluding there aren't currently technological solutions that can detect child sexual abuse material without resulting in high error rates affecting all messages, files and data in platforms. The study also concluded the proposal would undermine end-to-end encryption and security of digital communications.

Scope of the Crisis

Statistics underscore the urgency: 20.5 million reports and 63 million files of abuse were submitted to the National Center for Missing and Exploited Children's CyberTipline last year, with online grooming increasing 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the current e-Privacy regulation that allows exceptions under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.

UK Tightens Cyber Laws as Attacks Threaten Hospitals, Energy, and Transport

12 November 2025 at 00:44

Cyber Security and Resilience Bill

The UK government has unveiled the Cyber Security and Resilience Bill, a landmark move to strengthen UK cyber defences across essential public services, including healthcare, transport, water, and energy. The legislation aims to shield the nation’s critical national infrastructure from increasingly complex cyberattacks, which have cost the UK economy nearly £15 billion annually. According to the latest Cyble report — “Europe’s Threat Landscape: What 2025 Exposed and Why 2026 Could Be Worse”, Europe witnessed over 2,700 cyber incidents in 2025 across sectors such as BFSI, Government, Retail, and Energy. The report highlights how ransomware groups and politically motivated hacktivists have reshaped the regional threat landscape, emphasizing the urgency of unified cyber resilience strategies.

Cyber Security and Resilience Bill to Protect Critical National Infrastructure

At the heart of the new Cyber Security and Resilience Bill is the protection of vital services that people rely on daily. The legislation will ensure hospitals, water suppliers, and transport operators are equipped with stronger cyber resilience capabilities to prevent service disruptions and mitigate risks from future attacks.

The Cyber Security and Resilience Bill will, for the first time, regulate medium and large managed service providers offering IT, cybersecurity, and digital support to organisations like the NHS. These providers will be required to report significant incidents promptly and maintain contingency plans for rapid recovery. Regulators will also gain authority to designate critical suppliers — such as diagnostic service providers or energy suppliers — and enforce minimum security standards to close supply chain gaps that cybercriminals could exploit.

To strengthen compliance, enforcement will be modernised with turnover-based penalties for serious violations, ensuring cybersecurity remains a non-negotiable priority. The Technology Secretary will also have powers to direct organisations, including NHS Trusts and utilities, to take urgent actions to mitigate threats to national security.

UK Cyber Defences Face Mounting Pressure Amid Rising Attacks

Recent data shows the average cost of a significant cyberattack in the UK now exceeds £190,000, amounting to nearly £14.7 billion in total annual losses. The Office for Budget Responsibility (OBR) warns that a large-scale attack on critical national infrastructure could push borrowing up by £30 billion, equivalent to 1.1% of GDP.

These findings align closely with Cyble’s Europe’s Threat Landscape report, which observed the rise of new ransomware groups like Qilin and Akira and a surge in pro-Russian hacktivism targeting European institutions through DDoS and defacement campaigns. The report also revealed that the retail sector accounted for 41% of all compromised access sales, demonstrating the widespread impact of evolving cybercrime tactics.

Both the government and industry experts agree that defending against these threats requires a unified approach. National Cyber Security Centre (NCSC) CEO Dr. Richard Horne emphasised that “the real-world impacts of cyberattacks have never been more evident,” calling the Bill “a crucial step in protecting our most critical services.”

Building a Secure and Resilient Future

The Cyber Security and Resilience Bill represents a major shift in how the UK safeguards its people, economy, and digital ecosystem. By tightening cyber regulations for essential and digital services, the government seeks to reduce vulnerabilities and strengthen the UK’s cyber resilience posture for the years ahead.

Industry leaders have welcomed the legislation. Darktrace CEO Jill Popelka praised the government’s initiative to modernise cyber laws in an era where attackers are leveraging AI-driven tools. Cisco UK’s CEO Sarah Walker also noted that only 8% of UK organisations are currently “mature” in their cybersecurity readiness, highlighting the importance of continuous improvement.

Meanwhile, the Cyble report on Europe’s Threat Landscape warns that as state-backed operations merge with financially motivated attacks, 2026 could bring even more volatility. Cyble Research and Intelligence Labs recommend that organisations adopt intelligence-led defence strategies and proactive threat monitoring to stay ahead of emerging adversaries.

The Road Ahead

Both the Cyber Security and Resilience Bill and Cyble’s Europe’s Threat Landscape findings serve as a wake-up call: the UK and Europe are facing a new era of persistent cyber risks. Strengthening collaboration between government, regulators, and private industry will be key to securing critical systems and ensuring operational continuity. Organizations can explore deeper insights and practical recommendations from Cyble’s Europe’s Threat Landscape: What 2025 Exposed — and Why 2026 Could Be Worse report here, which provides detailed sectoral analysis and strategies to build a stronger, more resilient future against cyber threats.

Global GRC Platform Market Set to Reach USD 127.7 Billion by 2033

12 November 2025 at 00:36

GRC Platform Market

The GRC platform market is witnessing strong growth as organizations across the globe focus on strengthening governance, mitigating risks, and meeting evolving compliance demands. According to recent estimates, the market was valued at USD 49.2 billion in 2024 and is projected to reach USD 127.7 billion by 2033, growing at a CAGR of 11.18% between 2025 and 2033.
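For readers who want to sanity-check the projection, the implied compound annual growth rate follows directly from the two endpoint values:

```python
# Quick check of the CAGR implied by the USD 49.2B (2024) and USD 127.7B (2033) figures.
start_value, end_value = 49.2, 127.7   # USD billion
years = 2033 - 2024                    # nine compounding periods

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")     # about 11.2%, consistent with the reported 11.18%
```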

This GRC platform market growth reflects the increasing need to protect sensitive data, manage cyber risks, and streamline regulatory compliance processes.

Rising Need for Governance, Risk, and Compliance Solutions

As cyberthreats continue to rise, enterprises are turning to GRC platforms to gain centralized visibility into their risk posture. These solutions help organizations identify, assess, and respond to potential risks, ensuring stronger governance and reduced operational disruption.

The market’s momentum is also fueled by heightened regulatory scrutiny and the introduction of new compliance frameworks worldwide. Businesses are under pressure to maintain transparency, accuracy, and accountability in their governance and reporting processes — areas where a GRC platform adds significant value.

By integrating governance, risk, and compliance management into one system, companies can make informed decisions, reduce human error, and ensure consistent adherence to evolving regulations.

GRC Platform Market Insights and Key Segments

The GRC platform market is segmented based on deployment model, solution, component, end-user, and industry vertical.

  • Deployment Model: The on-premises deployment model dominates the market due to enhanced security and customization options. It is preferred by organizations handling sensitive data or operating under strict regulatory environments.

  • Solution Type: Compliance management holds the largest market share as businesses prioritize automation of documentation, tracking, and reporting to stay audit-ready.

  • Component: Software solutions lead the market by offering analytics, policy management, and workflow automation to streamline risk processes.

  • End User: Medium enterprises represent the largest segment, focusing on scalable solutions that balance security and efficiency.

  • Industry Vertical: The BFSI sector remains a key adopter due to its complex regulatory landscape and high data security requirements.

Key Drivers of the GRC Platform Market

Several factors contribute to the rapid expansion of the GRC platform market:

  1. Escalating Cyber Risks: As cyber incidents become more frequent and sophisticated, organizations seek to integrate cybersecurity measures within GRC frameworks. These integrations improve detection, response, and recovery capabilities.

  2. Evolving Compliance Standards: Increasing regulatory pressure drives adoption of GRC solutions to ensure businesses stay aligned with global standards like GDPR, HIPAA, and ISO 27001.

  3. Automation and Efficiency: Advanced GRC software reduces manual reporting and enhances accuracy, enabling faster audit responses and improved decision-making.

  4. Operational Resilience: A robust GRC system ensures business continuity by minimizing vulnerabilities and improving crisis management strategies.

Regional Outlook and Future Trends

North America currently leads the GRC platform market, supported by mature digital infrastructure and strong regulatory frameworks. Meanwhile, the Asia-Pacific region is emerging as a key growth area, driven by increased cloud adoption and a rising focus on data privacy.

In the coming years, integration with AI, analytics, and threat intelligence tools will transform how organizations approach governance and risk. The market is expected to evolve toward more predictive and adaptive compliance solutions.

Leveraging Threat Intelligence for Stronger Risk Governance

As organizations expand their digital ecosystems, threat intelligence has become a vital part of effective risk management. Platforms like Cyble help enterprises identify, monitor, and mitigate emerging cyber risks before they escalate. Integrating such intelligence-driven insights into a GRC platform strengthens visibility and helps build a proactive security posture.

For security leaders aiming to align governance with real-time intelligence, exploring a quick free demo of integrated risk and compliance tools can offer valuable perspective on enhancing organizational resilience.

China Updates Cybersecurity Law to Address AI and Infrastructure Risks

CSL

China has announced amendments to its Cybersecurity Law (CSL), marking the first major overhaul of the framework since its enactment in 2017. The revisions, approved by the Standing Committee of the National People’s Congress in October 2025, are aimed at enhancing artificial intelligence (AI) safety, strengthening enforcement mechanisms, and clarifying incident reporting obligations for onshore infrastructure.

The updated cybersecurity law will officially take effect on January 1, 2026.

CSL Updates Strengthen AI Governance and National Security

One of the most notable updates to the CSL is the inclusion of a new article emphasizing state support for AI development and safety. This addition is the first explicit mention of artificial intelligence within China’s cybersecurity framework.

At the same time, the amendment stresses the importance of establishing ethical standards and safety oversight mechanisms for AI technologies. The new provisions encourage the use of AI and other technologies to improve cybersecurity management, signaling a growing recognition of AI’s dual role as both an enabler of progress and a potential source of risk.

While the revised cybersecurity law articulates strategic priorities, detailed implementation guidelines are expected to follow in future regulations or technical standards, reported Global Policy Watch.

Expanding Enforcement and Liability

The 2025 amendments introduce stricter enforcement measures and higher penalties for violations under the CSL. Companies and individuals found in serious breach of the law could face increased fines, up to RMB 10 million for organizations and RMB 1 million for individuals. The revisions also broaden liability to include additional categories of violations, reflecting China’s ongoing efforts to strengthen accountability across its digital ecosystem.

Moreover, the updated cybersecurity law expands its extraterritorial reach. Previously, the CSL’s jurisdiction over cross-border cyber incidents was limited to foreign actions harming China’s critical information infrastructure (CII). The new amendments extend coverage to any foreign conduct that endangers the country’s network security, regardless of whether it targets CII. In severe cases, authorities may impose sanctions such as asset freezes or other punitive measures.

Clarifying Data Protection Obligations

The amendments also resolve a long-standing ambiguity surrounding personal data processing. Under the revised CSL, network operators are now explicitly required to comply not only with the cybersecurity law itself but also with the Civil Code and the Personal Information Protection Law (PIPL). This clarification reinforces the interconnected nature of China’s data governance regime and provides clearer guidance for companies handling personal information.

Complementing the CSL amendments, the Cyberspace Administration of China (CAC) issued the Administrative Measures for National Cybersecurity Incident Reporting, which will come into force on November 1, 2025. These new reporting measures consolidate previously scattered requirements into a unified framework, creating clearer operational expectations for organizations managing onshore infrastructure.

The Measures apply to all network operators that build or operate networks within China or provide services through Chinese networks. Notably, the rules appear to exclude offshore incidents, even when they affect Chinese users, suggesting that the primary focus remains on domestic cybersecurity resilience.

Defined Thresholds and Reporting Procedures

Under the new system, cybersecurity incidents are classified into four levels of severity. Operators must report “relatively major” incidents, such as data breaches involving more than one million individuals or economic losses exceeding RMB 5 million (approximately USD 700,000), within four hours of discovery. A preliminary report must be followed by a full assessment within 72 hours and a post-incident review within 30 days of resolution.

The CAC has introduced multiple reporting channels, including a dedicated hotline, website, email, and WeChat platform, to simplify compliance. Failure to report, delayed notifications, or false reporting can result in penalties. Conversely, prompt and transparent reporting may mitigate or eliminate liability under the revised cybersecurity law.
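As a rough illustration of the timeline obligations described above, the sketch below computes the three deadlines from discovery, preliminary-report, and resolution timestamps. The threshold figures are the ones reported here, and whether the 72-hour clock runs from discovery or from the preliminary report should be confirmed against the text of the Measures.

```python
from datetime import datetime, timedelta

# Reported thresholds for a "relatively major" incident (illustrative only).
RELATIVELY_MAJOR = {
    "individuals_affected": 1_000_000,   # data breach involving more than one million individuals
    "economic_loss_rmb": 5_000_000,      # or economic losses exceeding RMB 5 million (~USD 700,000)
}


def reporting_deadlines(discovered_at: datetime,
                        preliminary_reported_at: datetime,
                        resolved_at: datetime) -> dict:
    """Deadlines tied to a reportable incident under the new Measures."""
    return {
        # Initial report within four hours of discovery.
        "initial_report_due": discovered_at + timedelta(hours=4),
        # Full assessment within 72 hours (read here as counted from the
        # preliminary report; confirm against the Measures themselves).
        "full_assessment_due": preliminary_reported_at + timedelta(hours=72),
        # Post-incident review within 30 days of resolution.
        "post_incident_review_due": resolved_at + timedelta(days=30),
    }


deadlines = reporting_deadlines(
    discovered_at=datetime(2026, 1, 10, 9, 0),
    preliminary_reported_at=datetime(2026, 1, 10, 12, 30),
    resolved_at=datetime(2026, 1, 12, 18, 0),
)
for label, due in deadlines.items():
    print(f"{label}: {due:%Y-%m-%d %H:%M}")
```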

FCC Set to Reverse Course on Telecom Cybersecurity Mandate

31 October 2025 at 07:36

FCC, Federal Communications Commission, Cybersecurity Mandate

The Federal Communications Commission will vote next month to rescind a controversial January 2025 Declaratory Ruling that attempted to impose sweeping cybersecurity requirements on telecommunications carriers by reinterpreting a 1994 wiretapping law.

In an Order on Reconsideration circulated Thursday, the FCC concluded that the previous interpretation was both legally erroneous and ineffective at promoting cybersecurity.

The reversal marks a dramatic shift in the FCC's approach to telecommunications security, moving away from mandated requirements toward voluntary industry collaboration—particularly in response to the massive Salt Typhoon espionage campaign sponsored by China that compromised at least eight U.S. communications companies in 2024.

CALEA Reinterpretation

On January 16, 2025—just five days before a change in administration—the FCC adopted a Declaratory Ruling claiming that section 105 of the Communications Assistance for Law Enforcement Act (CALEA) "affirmatively requires telecommunications carriers to secure their networks from unlawful access to or interception of communications."

CALEA, enacted in 1994, was designed to preserve law enforcement's ability to conduct authorized electronic surveillance as telecommunications technology evolved. Section 105 specifically requires that interception of communications within a carrier's "switching premises" can only be activated with a court order and with intervention by a carrier employee.

The January ruling took this narrow provision focused on lawful wiretapping and expanded it dramatically, interpreting it as requiring carriers to prevent all unauthorized interceptions across their entire networks. The Commission stated that carriers would be "unlikely" to satisfy these obligations without adopting basic cybersecurity practices including role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication.

The ruling emphasized that "enterprise-level implementation of these basic cybersecurity hygiene practices is necessary" because vulnerabilities in any part of a network could provide attackers unauthorized access to surveillance systems. It concluded that carriers could be in breach of statutory obligations if they failed to adopt certain cybersecurity practices—even without formal rules adopted by the Commission.

Industry Pushback and Legal Questions

CTIA – The Wireless Association, NCTA – The Internet & Television Association, and USTelecom – The Broadband Association filed a petition for reconsideration on February 18, arguing that the ruling exceeded the FCC's statutory authority and misinterpreted CALEA.

The new FCC agreed with these concerns, finding three fundamental legal flaws in the January ruling:

Enforcement Authority: The Commission concluded it lacks authority to enforce its interpretation of CALEA without first adopting implementing rules through notice-and-comment rulemaking. CALEA section 108 commits enforcement authority to the courts, not the FCC. The Commission noted that when it previously wanted to enforce CALEA requirements, it codified them as rules in 2006 specifically to gain enforcement authority.

"Switching Premises" Limitation: Section 105 explicitly refers to interceptions "effected within its switching premises," but the ruling appeared to impose obligations across carriers' entire networks. The Commission found this expansion ignored clear statutory limits.

"Interception" Definition: CALEA incorporates the Wiretap Act's definition of "intercept," which courts have consistently interpreted as limited to communications intercepted contemporaneously with transmission—not stored data. The ruling's required practices target both data in transit and at rest, exceeding section 105's scope.

"It was unlawful because the FCC purported to read a statute that required telecommunications carriers to allow lawful wiretaps within a certain portion of their network as a provision that required carriers to adopt specific network management practices in every portion of their network," the new order states.

The Voluntary Approach of Provider Commitments

Rather than mandated requirements, the FCC pointed to voluntary commitments from communications providers following collaborative engagement throughout 2025. In an October 16 ex parte filing, industry associations detailed "extensive, urgent, and coordinated efforts to mitigate operational risks, protect consumers, and preserve national security interests."

These voluntary measures include:

  • Accelerated patching cycles for outdated or vulnerable equipment
  • Updated and reviewed access controls
  • Disabled unnecessary outbound connections to limit lateral network movement
  • Improved threat-hunting efforts
  • Increased cybersecurity information sharing with federal government and within the communications sector
  • Establishment of the Communications Cybersecurity Information Sharing and Analysis Center (C2 ISAC) for real-time threat intelligence sharing
  • New collaboration forum for Chief Information Security Officers from U.S. and Canadian providers

"The government-industry partnership model of collaboration has enabled communications providers to respond swiftly and agilely to Salt Typhoon, reduce vulnerabilities exposed by the attack, and bolster network cyber defenses," the industry associations stated.

Salt Typhoon Context

The Salt Typhoon attacks, disclosed in September 2024, involved a PRC-sponsored advanced persistent threat group infiltrating U.S. communications companies as part of a massive espionage campaign affecting dozens of countries. Critically, the attacks exploited publicly known common vulnerabilities and exposures (CVEs) rather than zero-day vulnerabilities—meaning they targeted avoidable weaknesses rather than previously unknown flaws.

The FCC noted that following its engagement with carriers after Salt Typhoon, providers agreed to implement additional cybersecurity controls representing "a significant change in cybersecurity practices compared to the measures in place in January."

Also read: Salt Typhoon Cyberattack: FBI Investigates PRC-linked Breach of US Telecoms

Targeted Regulatory Actions Continue

While rescinding the broad CALEA interpretation, the FCC emphasized it continues pursuing targeted cybersecurity regulations in specific areas where it has clear legal authority:

  • Rules requiring submarine cable licensees to create and implement cybersecurity risk management plans
  • Rules ensuring test labs and certification bodies in the equipment authorization program aren't controlled by foreign adversaries
  • Investigations of Chinese Communist Party-aligned businesses whose equipment appears on the FCC's Covered List
  • Proceedings to revoke authorizations for entities like HKT (International) Limited over national security concerns

"The Commission is leveraging the full range of the Commission's regulatory, investigatory, and enforcement authorities to protect Americans and American companies from foreign adversaries," the order states, while maintaining that collaboration with carriers coupled with targeted, legally robust regulatory and enforcement measures, has proven successful.

The FCC is also set to withdraw the Notice of Proposed Rulemaking that accompanied the January Declaratory Ruling, which would have proposed specific cybersecurity requirements for a broad array of service providers. The NPRM was never published in the Federal Register, so the public comment period never commenced.

The Commission's new approach reflects a bet that voluntary industry cooperation, supported by targeted regulations in specific high-risk areas, will likely prove more effective than sweeping mandates of questionable legal foundation.

When AI chatbots leak and how it happens

11 September 2025 at 08:46

In a recent article on Cybernews there were two clear signs of how fast the world of AI chatbots is growing. A company I had never even heard of had over 150 million app downloads across its portfolio, and it also had an exposed unprotected Elasticsearch instance.

This needs a bit of an explanation. I had never heard of Vyro AI, a company that probably still doesn’t ring many bells, but its app ImagineArt has over 10 million downloads on Google Play. Vyro AI also markets Chatly, which has over 100,000 downloads, and Chatbotx, a web-based chatbot with about 50,000 monthly visits.

An Elasticsearch instance is a database server running a tool used to quickly store and search lots of data. If it lacks passwords, authentication, or network restrictions, it is freely accessible to anyone with internet access who happens to find it. And without any protection like a password or a firewall, whoever finds the database online can read, copy, change, or even delete all its data.
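To make "freely accessible" concrete: against an instance with no authentication, a couple of plain HTTP requests are enough to list indices and pull documents. The sketch below uses a documentation-range placeholder address and a hypothetical index pattern, and should only ever be pointed at systems you are authorized to test.

```python
# Sketch of querying an unauthenticated Elasticsearch instance.
# HOST is a placeholder from the TEST-NET documentation range, not a real target.
import requests

HOST = "http://203.0.113.10:9200"

# List the indices the server exposes.
indices = requests.get(f"{HOST}/_cat/indices?format=json", timeout=10).json()
for idx in indices:
    print(idx["index"], idx.get("docs.count"))

# Pull a small sample of documents from a (hypothetical) logs index.
# No credentials are needed because the server performs no authentication at all.
sample = requests.get(f"{HOST}/logs-*/_search?size=5", timeout=10).json()
for hit in sample.get("hits", {}).get("hits", []):
    print(hit["_source"])
```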

The researcher who found the database says it covered both production and development environments and stored about 2–7 days’ worth of logs, amounting to 116GB of user logs streaming in real time from the company’s three popular apps.

The information that was accessible included:

  • AI prompts that users typed into the apps. AI prompts are the questions and instructions that users submit to the AI.
  • Bearer authentication tokens, which function similarly to cookies so the user does not have to log in before every session, and which allow the user to view their history and enter prompts. An attacker could even hijack an account using these tokens (see the sketch after this list).
  • User agents, which are strings of text sent with requests to a server to identify the application, its version, and the device’s operating system. For native mobile apps, developers might include a custom user agent string within the HTTP headers of their requests. This allows developers to identify specific app users and tailor content and experiences for different app versions or platforms.
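To illustrate why leaked tokens matter (as flagged in the bullet on bearer tokens above), here is a minimal sketch of how a bearer token and a custom user agent travel with an ordinary API request. The endpoint, token, and user-agent string are all invented for illustration; the point is that whoever presents a valid token is treated as the logged-in user, with no password or second factor requested.

```python
# Illustrative only: the endpoint, token, and user-agent string are made up.
import requests

API_URL = "https://api.example.com/v1/chat/history"   # hypothetical endpoint
LEAKED_TOKEN = "eyJhbGciOi..."                         # a bearer token scraped from exposed logs

headers = {
    # The token alone authenticates the request as the original user.
    "Authorization": f"Bearer {LEAKED_TOKEN}",
    # A custom user agent identifying the app build and device platform.
    "User-Agent": "ExampleChatApp/3.2.1 (Android 14; Pixel 8)",
}

response = requests.get(API_URL, headers=headers, timeout=10)
print(response.status_code)
print(response.json() if response.ok else response.text)
```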

The researcher found that the database was first indexed by IoT search engines in mid-February. IoT search engines actively find and list devices or servers that anyone can access on the internet. They help users discover vulnerable devices (such as cameras, printers, and smart home gadgets) and also locate open databases.
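Finding such servers requires no special skill. As one example, the official shodan Python client can enumerate internet-facing Elasticsearch services in a few lines (the API key is a placeholder and the query is illustrative):

```python
# Requires `pip install shodan` and a Shodan API key.
import shodan

api = shodan.Shodan("YOUR_API_KEY")

# Count internet-facing Elasticsearch services and sample a few results.
results = api.search("product:Elasticsearch port:9200", limit=5)
print(f"Exposed Elasticsearch services indexed: {results['total']}")
for match in results["matches"]:
    print(match["ip_str"], match.get("org", "unknown org"))
```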

This means that attackers have had a chance to “stumble” over this open database for months. And with the information there they could have taken over user accounts, accessed chat histories and generated images, and made fraudulent AI credit purchases.

How does this happen all the time?

Generative AI has found a place in many homes and even more companies, which means there is a lot of money to be made.

But the companies delivering these AI chatbots feel they can only stay relevant by pushing out new products, so their engineering efforts go where the money is. Security and privacy concerns are secondary at best.

Just looking at the last few months, we have reported about:

  • Prompt injection vulnerabilities, where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.
  • An AI chatbot used to launch a cybercrime spree where cybercriminals were found to be using a chatbot to help them defraud people and breach organizations.
  • AI chats showing up in Google search results. These findings concerned Grok, ChatGPT, and Meta AI (twice).
  • An insecure backend application that exposed data about chatbot interactions of job applicants at McDonald’s.

As diverse as the causes of the data breaches are—they stem from a combination of human error, platform weaknesses, and architectural flaws—the call to do something about them is starting to get heard.

Hopefully, 2025 will be remembered as a starting point for compliance regulations in the AI chatbots landscape.

The AI Act is a European regulation on artificial intelligence (AI). The Act entered into force on August 1, 2024, and is the first comprehensive regulation on AI by a major regulator anywhere.

The Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.

Although not completely ironed out, the NIS2 Directive is destined to have significant implications for AI providers, especially those operating in the EU or serving EU customers. Among other requirements, AI model endpoints, APIs, and data pipelines must be protected to prevent breaches and attacks, ensuring secure deployment and operation.
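NIS2 does not prescribe tooling, but as a minimal illustration of what protecting a model endpoint can mean in practice, the sketch below gates a hypothetical inference API behind an API-key check using FastAPI (an arbitrary framework choice for this example, not something the Directive mandates):

```python
# Minimal illustration of gating an AI inference endpoint behind an API key.
import hmac
import os

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
EXPECTED_KEY = os.environ.get("MODEL_API_KEY", "")


@app.post("/v1/generate")
def generate(prompt: dict, x_api_key: str = Header(default="")):
    # Constant-time comparison avoids leaking the key through timing differences.
    if not EXPECTED_KEY or not hmac.compare_digest(x_api_key, EXPECTED_KEY):
        raise HTTPException(status_code=401, detail="invalid or missing API key")
    # Placeholder response; a real service would call the model here and should
    # also log the request for auditing and incident detection.
    return {"completion": f"(model output for {len(str(prompt))} bytes of input)"}
```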

And, although not cybersecurity related, the California State Assembly took a big step toward regulating AI on September 10, 2025, passing SB 243: a bill that aims to regulate AI companion chatbots in order to protect minors and vulnerable users. One of the major requirements is repeated warnings that the user is “talking to” an AI chatbot and not a real person, and that they should take a break.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
