
60,000 Records Exposed in Cyberattack on Uzbekistan Government


An alleged Uzbekistan cyberattack that triggered widespread concern online has exposed around 60,000 unique data records, not the personal data of 15 million citizens, as previously claimed on social media. The clarification came from Uzbekistan’s Digital Technologies Minister Sherzod Shermatov during a press conference on 12 February, addressing mounting speculation surrounding the scale of the breach.

From 27 to 30 January, information systems of three government agencies in Uzbekistan were targeted by cyberattacks. The names of the agencies have not been disclosed. However, officials were firm in rejecting viral claims suggesting a large-scale national data leak. “There is no information that the personal data of 15 million citizens of Uzbekistan is being sold online. 60,000 pieces of data — that could be five or six pieces of data per person. We are not talking about 60,000 citizens,” the minister noted, adding that law enforcement agencies were examining the types of data involved.

For global readers, the distinction matters. In cybersecurity reporting, raw data units are often confused with the number of affected individuals. A single record can include multiple data points such as a name, date of birth, address, or phone number. According to Shermatov, the 60,000 figure refers to individual data units, not the number of citizens impacted.
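The minister's arithmetic can be made concrete. The sketch below is purely illustrative: the field names are hypothetical and not taken from the actual leaked dataset.

```python
# Illustrative arithmetic only: "60,000 data records" counts individual
# data units, not affected citizens. The field list below is a
# hypothetical example of what one person's entry might contain.

RECORDS_EXPOSED = 60_000

# One citizen's entry might span several data points, e.g.:
FIELDS_PER_PERSON = ["name", "date_of_birth", "address", "phone", "email"]

# At roughly 5 data points per person, 60,000 units cover far fewer people.
estimated_people = RECORDS_EXPOSED // len(FIELDS_PER_PERSON)
print(estimated_people)  # 12000
```

At five or six data points per person, as the minister suggested, 60,000 units would correspond to roughly 10,000 to 12,000 entries, an order of magnitude below the viral claims.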

Uzbekistan Cyberattack: What Actually Happened

The Uzbekistan cyberattack targeted three government information systems over a four-day period in late January. While the breach did result in unauthorized access to certain systems, the ministry emphasized that it was not a mass compromise of citizen accounts. “Of course, there was an attack. The hackers were skilled and sophisticated. They made attempts and succeeded in gaining access to a specific system. In a sense, this is even useful — an incident like this helps to further examine other systems and increase vigilance. Some data, in a certain amount, could indeed have been obtained from some systems,” Shermatov said. His remarks reveal a balanced acknowledgment: the attack was real, the threat actors were capable, and some data exposure did occur. At the same time, the scale appears significantly smaller than initially portrayed online. The ministry also stressed that a “personal data leak” does not mean citizens’ accounts were hacked or that full digital identities were compromised. Instead, limited personal details may have been accessed.

Rising Cyber Threats in Uzbekistan

The Uzbekistan cyberattack comes amid a sharp increase in attempted digital intrusions across the country. According to the ministry, more than 7 million cyber threats were prevented in 2024 through Uzbekistan’s cybersecurity infrastructure. In 2025, that number reportedly exceeded 107 million. Looking ahead, projections suggest that over 200 million cyberattacks could target Uzbekistan in 2026. These figures highlight a broader global trend: as countries accelerate digital transformation, they inevitably expand their attack surface. Emerging digital economies, in particular, often face intense pressure from transnational cybercriminal groups seeking to exploit gaps in infrastructure and rapid system expansion. Uzbekistan’s growing digital ecosystem — from e-government services to financial platforms — is becoming a more attractive target for global threat actors. The recent Uzbekistan cyberattack illustrates that no country, regardless of size, is immune.

Strengthening Security After the Breach

Following the breach, authorities blocked further unauthorized access attempts and reinforced technical safeguards. Additional protections were implemented within the Unified Identification System (OneID), Uzbekistan’s centralized digital identity platform. Under the updated measures, users must now personally authorize access to their data by banks, telecom operators, and other organizations. This shifts more control, and responsibility, directly to citizens. The ministry emphasized that even with partial personal data, fraudsters cannot fully act on behalf of a citizen without direct involvement. However, officials warned that attackers may attempt secondary scams using exposed details. For example, a fraudster could call a citizen, pose as a bank employee, cite known personal details, and claim that someone is applying for a loan in their name — requesting an SMS code to “cancel” the transaction. Such social engineering tactics remain one of the most effective tools for cybercriminals globally.

A Reality Check on Digital Risk

The Uzbekistan cyberattack highlights two critical lessons. First, misinformation can amplify panic faster than technical facts. Second, even limited data exposure carries real risk if exploited creatively. Shermatov’s comment that the incident can help “increase vigilance” reflects a pragmatic view shared by many cybersecurity professionals worldwide: breaches, while undesirable, often drive improvements in resilience. For Uzbekistan, the challenge now is sustaining public trust while hardening systems against growing global cyber threats. For the rest of the world, the incident serves as a reminder that cybersecurity transparency — clear communication about scope and impact — is just as important as technical defense.

Disney Agrees Record $2.75Mn Settlement for Opt-Out Failures


Animation giant Walt Disney has agreed to pay a $2.75 million fine and overhaul its privacy practices to settle allegations that it violated the California Consumer Privacy Act (CCPA). The Disney CCPA settlement is the largest in the Act’s enforcement history. For a global audience watching the evolution of data privacy enforcement, it is more than a state-level regulatory action: it signals a tougher stance on how companies handle consumer opt-out rights in an increasingly connected digital ecosystem.

Announced by California Attorney General Rob Bonta, the settlement resolves claims that Disney failed to fully honor consumers’ requests to opt out of the sale or sharing of their personal data across all devices and streaming services linked to their accounts. Under the agreement, which remains subject to court approval, Disney will pay $2.75 million in civil penalties and implement a comprehensive privacy program designed to ensure compliance with the CCPA. The company does not admit wrongdoing or accept liability. A Disney spokesperson said that as an “industry leader in privacy protection, Disney continues to invest significant resources to set the standard for responsible and transparent data practices across our streaming services.”

Implications of the Disney CCPA Settlement

While the enforcement action stems from California law, the Disney CCPA settlement has international implications. Many global companies operate under similar opt-out and consent frameworks in Europe, Asia-Pacific, and beyond. Regulators worldwide are scrutinizing whether companies truly make it easy for users to control their data — or merely create the appearance of compliance. The investigation, launched after a January 2024 investigative sweep of streaming services, found that Disney’s opt-out mechanisms contained what the California Department of Justice described as “key gaps.” These gaps allegedly allowed the company to continue selling or sharing consumer data even after users had attempted to opt out. Attorney General Bonta made the state’s position clear: “Consumers shouldn’t have to go to infinity and beyond to assert their privacy rights. Today, my office secured the largest settlement to date under the CCPA over Disney's failure to stop selling and sharing the data of consumers that explicitly asked it to. California’s nation-leading privacy law is clear: A consumer’s opt-out right applies wherever and however a business sells data — businesses can’t force people to go device-by-device or service-by-service. In California, asking a business to stop selling your data should not be complicated or cumbersome. My office is committed to the continued enforcement of this critical privacy law.”

Investigation Findings

According to the Attorney General’s office, Disney offered multiple methods for consumers to opt out — including website toggles, webforms, and the Global Privacy Control (GPC). However, each method allegedly failed to stop data sharing comprehensively. For example, when users activated opt-out toggles within Disney websites or apps, the request was reportedly applied only to the specific streaming service being used — and often only to the specific device. This meant that data sharing could continue on other devices or services connected to the same account. Similarly, consumers who submitted opt-out requests through Disney’s webform were unable to stop all personal data sharing. The investigation alleged that Disney continued to share data with “specific third-party ad-tech companies whose code Disney embedded in its websites and apps.”

The Global Privacy Control — designed as a universal “stop selling or sharing my data” signal — was also reportedly limited to the specific device used, even if the consumer was logged into their Disney account. Critically, in many connected TV streaming apps, Disney allegedly did not provide an in-app opt-out mechanism and instead redirected users to the webform. Regulators argued this approach left consumers “with no way to stop Disney’s selling and sharing from these apps.”
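For context, the Global Privacy Control is transmitted by a participating browser as the HTTP request header `Sec-GPC: 1` (and exposed to page scripts as `navigator.globalPrivacyControl`). A minimal server-side sketch of honoring that signal account-wide, rather than per device, might look like the following; the function names and the in-memory preference store are hypothetical illustrations, not Disney's actual implementation.

```python
# Minimal sketch of honoring the Global Privacy Control (GPC) signal on
# the server side. Per the GPC proposal, a participating browser sends
# the request header "Sec-GPC: 1". The account-wide propagation step is
# a hypothetical illustration of the scope regulators said was missing.

def gpc_opted_out(headers: dict) -> bool:
    """Return True if the request carries a GPC opt-out signal."""
    # Header names are case-insensitive; "1" is the only meaningful value.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def apply_opt_out(account_id: str, store: dict) -> None:
    """Record an account-wide (not per-device) do-not-sell preference."""
    store[account_id] = True

# Usage: a single signal from any device updates the whole account.
prefs = {}
if gpc_opted_out({"Sec-GPC": "1", "User-Agent": "example-browser"}):
    apply_opt_out("user-123", prefs)
print(prefs)  # {'user-123': True}
```

The design point the settlement reinforces is the second function: the opt-out is keyed to the account, so it follows the consumer across devices and services instead of living on one endpoint.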

Enforcement Momentum Under the CCPA

The Disney CCPA settlement is the seventh enforcement action under the California Consumer Privacy Act and the second action against Disney in five months. In September, the Federal Trade Commission fined Disney $10 million over child privacy violations. Attorney General Bonta emphasized that “Effective opt-out is one of the bare necessities of complying with CCPA.” The law grants California consumers the right to know how their personal data is collected and shared — and the right to request that businesses stop selling or sharing that information. Under the settlement terms, Disney must report to California within 60 days of court approval on the steps it has taken to comply. It must also submit progress reports every 60 days until all services meet CCPA requirements.

A Turning Point for Streaming Platforms?

The broader message from the Disney CCPA settlement is unmistakable: privacy controls must work across platforms, devices, and ecosystems — not in silos. Streaming platforms operate globally, with accounts spanning smartphones, smart TVs, gaming consoles, and web browsers. Regulators are increasingly unwilling to accept fragmented compliance models where privacy settings apply only to one device or one service at a time. In that sense, the Disney CCPA settlement may be remembered less for the $2.75 million fine and more for the standard it reinforces: when consumers say “stop,” companies must ensure their systems actually listen.

India Seeks Larger Role in Global AI and Deep Tech Development


India’s technology ambitions are no longer limited to policy announcements; they are now translating into capital flows, institutional reforms, and global positioning. At the center of this transformation is the IndiaAI Mission, a flagship initiative that is reshaping AI in India while influencing private sector investment and deep tech growth across multiple domains. Information submitted in the Lok Sabha on February 11, 2026, by Minister of Electronics and IT Ashwini Vaishnaw outlines how government-backed reforms and funding mechanisms are strengthening India’s AI and space technology ecosystem. For global observers, the scale and coordination of these efforts signal a strategic push to position India as a long-term technology powerhouse.

IndiaAI Mission Lays Foundation for AI in India

Launched in March 2024 with an outlay of ₹10,372 crore, the IndiaAI Mission aims to build a comprehensive AI ecosystem. In less than two years, the initiative has delivered measurable progress. More than 38,000 GPUs have been onboarded to create a common compute facility accessible to startups and academic institutions at affordable rates. Twelve teams have been shortlisted to develop indigenous foundational models or Large Language Models (LLMs), while 30 applications have been approved to build India-specific AI solutions. Talent development remains central to the IndiaAI Mission. Over 8,000 undergraduate students, 5,000 postgraduate students, and 500 PhD scholars are currently being supported. Additionally, 27 India Data and AI Labs have been established, with 543 more identified for development. India’s AI ecosystem is also earning global recognition. The Stanford Global AI Vibrancy 2025 report ranks India third worldwide in AI competitiveness and ecosystem vibrancy. The country is also the second-largest contributor to GitHub AI projects—evidence of a strong developer community driving AI in India from the ground up.

Private Sector Investment in AI Gains Speed

Encouraged by the IndiaAI Mission and broader reforms, private sector investment in AI is rising steadily. According to the Stanford AI Index Report 2025, India’s cumulative private investment in AI between 2013 and 2024 reached approximately $11.1 billion. Recent announcements underscore this momentum. Google revealed plans to establish a major AI Hub in Visakhapatnam with an investment of around $15 billion—its largest commitment in India so far. Tata Group has also announced an $11 billion AI innovation city in Maharashtra. These developments suggest that AI in India is moving beyond research output toward large-scale commercial infrastructure. The upcoming India AI Impact Summit 2026, to be held in New Delhi, will further position India within the global AI debate. Notably, it will be the first time the global AI summit series takes place in the Global South, signaling a shift toward more inclusive technology governance.

Deep Tech Push Backed by RDI Fund and Policy Reforms

Beyond AI, the government is reinforcing the broader deep tech sector through funding and policy clarity. A ₹1 lakh crore Research, Development and Innovation (RDI) Fund under the Anusandhan National Research Foundation (ANRF) has been announced to support high-risk, high-impact projects. The National Deep Tech Startup Policy addresses long-standing challenges in funding access, intellectual property, infrastructure, and commercialization. Under Startup India, deep tech firms now enjoy extended eligibility periods and higher turnover thresholds for tax benefits and government support. These structural changes aim to strengthen India’s Gross Expenditure on Research and Development (GERD), currently at 0.64% of GDP. Encouragingly, India’s position in the Global Innovation Index has climbed from 81st in 2015 to 38th in 2025—an indicator that reforms are yielding measurable outcomes.

Space Sector Reforms Expand India’s Global Footprint

Parallel to AI in India, the government is also expanding its ambitions in space technology. The Indian Space Policy 2023 clearly defines the roles of ISRO, IN-SPACe, and private industry, opening the entire space value chain to commercial participation. IN-SPACe now operates as a single-window agency authorizing non-government space activities and facilitating access to ISRO’s infrastructure. A ₹1,000 crore venture capital fund and a ₹500 crore Technology Adoption Fund are supporting early-stage and scaling space startups. Foreign Direct Investment norms have been liberalized, permitting up to 100% FDI in satellite manufacturing and components. Through NewSpace India Limited (NSIL), the country is expanding its presence in the global commercial launch market, particularly for small and medium satellites. Collaboration between ISRO and the Department of Biotechnology in space biotechnology—including microgravity research and space bio-manufacturing—signals how interdisciplinary innovation is becoming a national priority.

A Strategic Inflection Point for AI in India

Taken together, the IndiaAI Mission, private sector investment in AI, deep tech reforms, and space sector liberalization form a coordinated architecture. This is not merely about technology adoption—it is about long-term capability building. For global readers, India’s approach offers an interesting case study: sustained public investment paired with regulatory clarity and private capital participation. While challenges such as research intensity and commercialization gaps remain, the trajectory is clear. The IndiaAI Mission has become more than a policy initiative; it is emerging as a structural driver of AI in India and a signal of the country’s broader technological ambitions in the decade ahead.

Taiwan Government Agencies Faced 637 Cybersecurity Incidents in H2 2025


In the past six months, Taiwan’s government agencies have reported 637 cybersecurity incidents, according to the latest data released by the Cybersecurity Academy (CSAA). The findings, published in its Cybersecurity Weekly Report, reveal not just the scale of digital threats facing Taiwan’s public sector, but also four recurring attack patterns that reflect broader global trends targeting government agencies. For international observers, the numbers are significant. Out of a total of 723 cybersecurity incidents reported by government bodies and select non-government organizations during this period, 637 cases involved government agencies alone. The majority of these—410 cases—were classified as illegal intrusion, making it the most prevalent threat category. These cybersecurity incidents provide insight into how threat actors continue to exploit both technical vulnerabilities and human behaviour within public institutions.

Illegal Intrusion Leads the Wave of Cybersecurity Incidents

Illegal intrusion remains the leading category among reported cybersecurity incidents affecting government agencies. While the term may sound broad, it reflects deliberate attempts by attackers to gain unauthorized access to systems, often paving the way for espionage, data theft, or operational disruption. The CSAA identified four recurring attack patterns behind these incidents. The first involves the distribution of malicious programs disguised as legitimate software. Attackers impersonate commonly used applications, luring employees into downloading infected files. Once installed, these malicious programs establish abnormal external connections, creating backdoors for future control or data exfiltration. This tactic is particularly concerning for government agencies, where employees frequently rely on specialized or internal tools. A single compromised endpoint can provide attackers with a foothold into wider networks, increasing the scale of cybersecurity incidents.

USB Worm Infections and Endpoint Vulnerabilities

The second major pattern behind these cybersecurity incidents involves worm infections spread through portable media devices such as USB drives. Though often considered an old-school technique, USB-based attacks remain effective—especially in environments where portable media is routinely used for operational tasks. When infected devices are plugged into systems, malicious code can automatically execute, triggering endpoint intrusion and abnormal system behavior. Such breaches can lead to lateral movement within networks and unauthorized external communications. This pattern underscores a key reality: technical sophistication is not always necessary. In many cybersecurity incidents, attackers succeed by exploiting routine workplace habits rather than zero-day vulnerabilities.

Social Engineering and Watering Hole Attacks Target Trust

The third pattern involves social engineering email attacks, frequently disguised as administrative litigation or official document exchanges. These phishing emails are crafted around business topics highly relevant to government agencies, increasing the likelihood that recipients will open attachments or click malicious links. Such cybersecurity incidents rely heavily on human psychology. The urgency and authority embedded in administrative-themed emails make them particularly effective. Despite years of awareness campaigns, phishing remains one of the most successful entry points for attackers globally. The fourth pattern, known as watering hole attacks, adds another layer of complexity. In these cases, attackers compromise legitimate websites commonly visited by government officials. During normal browsing, malicious commands are silently executed, resulting in endpoint compromise and abnormal network behavior. Watering hole attacks demonstrate how cybersecurity incidents can originate from seemingly trusted digital environments. Even cautious users can fall victim when legitimate platforms are weaponized.

Critical Infrastructure Faces Operational Risks

Beyond government agencies, cybersecurity incidents reported by non-government organizations primarily affected critical infrastructure providers, particularly in emergency response, healthcare, and communications sectors. Interestingly, many of these cases involved equipment malfunctions or damage rather than direct cyberattacks. System operational anomalies led to service interruptions, while environmental factors such as typhoons disrupted critical services. These incidents highlight an important distinction: not all disruptions stem from malicious activity. However, the operational impact can be equally severe. The Cybersecurity Research Institute (CRI) emphasized that equipment resilience, operational continuity, and environmental risk preparedness are just as crucial as cybersecurity protection. In an interconnected world, digital security and physical resilience must go hand in hand.

Strengthening Endpoint Protection and Cyber Governance

In response to the rise in cybersecurity incidents, experts recommend a dual approach—technical reinforcement and management reform. From a technical perspective, endpoint protection and abnormal behavior monitoring must be strengthened. Systems should be capable of detecting malicious programs, suspicious command execution, abnormal connections, and risky portable media usage. Enhanced browsing and attachment access protection can further reduce the risk of malware downloads during routine operations. From a governance standpoint, ongoing education is essential. Personnel must remain alert to risks associated with fake software, social engineering email attacks, and watering hole attacks. Clear management policies regarding portable media usage, software sourcing, and external website access should be embedded into cybersecurity governance frameworks. The volume of cybersecurity incidents reported in just six months sends a clear message: digital threats targeting public institutions are persistent, adaptive, and increasingly strategic. Governments and critical infrastructure providers must move beyond reactive responses and build layered defenses that address both technology and human behavior.
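One of the recommended technical controls, flagging abnormal outbound connections and risky portable-media usage, can be sketched as a toy allowlist check. Everything below (destinations, process names, the allowlist, and the event schema) is invented for illustration and is not drawn from Taiwan's actual monitoring systems.

```python
# Toy sketch of allowlist-based abnormal-connection monitoring.
# All destinations, process names, and field names are hypothetical.

ALLOWED_DESTINATIONS = {"update.gov.example", "mail.gov.example"}

def flag_abnormal(connections: list) -> list:
    """Return connection events that are either destined for a host
    outside the allowlist or initiated from removable media."""
    alerts = []
    for conn in connections:
        if conn["dest"] not in ALLOWED_DESTINATIONS:
            alerts.append(conn)  # unknown destination: possible backdoor
        elif conn.get("from_removable_media"):
            alerts.append(conn)  # allowed host, but risky USB origin
    return alerts

# Usage: the second event mimics a backdoor calling out to an unknown host.
events = [
    {"process": "updater.exe", "dest": "update.gov.example"},
    {"process": "invoice_viewer.exe", "dest": "198.51.100.7"},
]
print(flag_abnormal(events))
```

Real deployments would of course work from live telemetry (EDR agents, flow logs) rather than a static list, but the design choice is the same one the report recommends: define expected behavior narrowly and alert on everything outside it.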

India Rolls Out AI-on-Wheels to Bridge the Digital Divide


India has taken another step toward expanding AI literacy in India with the launch of Kaushal Rath under the national programme Yuva AI for All. Flagged off from India Gate in New Delhi, the mobile initiative aims to bring foundational Artificial Intelligence (AI) education directly to students, youth, and educators, particularly in semi-urban and underserved regions. For a country positioning itself as a global digital leader, the message behind Yuva AI for All is clear: AI cannot remain limited to elite institutions or metro cities. If Artificial Intelligence is to shape economies and governance, it must be understood by the wider population.

Yuva AI for All: Taking AI to the Doorstep

Launched by the Ministry of Electronics and Information Technology (MeitY) under the IndiaAI Mission in collaboration with AISECT, Yuva AI for All focuses on democratising access to AI education.

Launching the initiative, the Minister of State Jitin Prasada stated, “Through the Yuva AI for All initiative and the Kaushal Rath, we are taking AI awareness directly across the country, especially to young people. The bus will travel across regions to familiarise students and youth with the uses and benefits of Artificial Intelligence, fulfilling the Prime Minister Narendra Modi’s vision of ensuring that awareness and access to opportunity transcend geography and demography.” Adding to this, he also said, “The Yuva AI for All with Kaushal Rath initiative is a precursor to the India AI Impact Summit 2026, which is set to take place in New Delhi next week. It is a great pride for India to be hosting a Summit of this kind for the first time, to be held in the Global South.”

[Image: Yuva AI for All. Source: PIB]

At the centre of this effort is Kaushal Rath, a fully equipped mobile computer lab with internet-enabled systems and audio-visual tools. The vehicle will travel across Delhi-NCR and later other regions, visiting schools, ITIs, colleges, and community spaces. The aim is not abstract policy messaging, but practical exposure—hands-on demonstrations of AI and Generative AI tools, guided by trained facilitators and contextualised Indian use cases. The course structure is intentionally accessible. It is a four-hour, self-paced programme with six modules, requiring zero coding background. Participants learn AI concepts, ethics, and real-world applications. Upon completion, they receive certification, a move designed to add tangible value to academic and professional profiles.
Kavita Bhatia, Scientist G at MeitY and COO of the IndiaAI Mission, highlighted, “Under the IndiaAI Mission, skilling is one of the seven core pillars, and this initiative advances our goal of democratising AI education at scale. Through Kaushal Rath, we are enabling hands-on AI learning for students across institutions using connected systems, AI tools, and structured courses, including the YuvAI for All programme designed to demystify AI. By combining instructor-led training, micro- and nano-credentials, and nationwide outreach, we are ensuring that AI skilling becomes accessible to learners across regions.”

In a global context, this matters. Many nations speak of AI readiness, but few actively drive AI education beyond established technology hubs. Yuva AI for All attempts to bridge that gap.

Building Momentum Toward the India AI Impact Summit 2026

The launch of Yuva AI for All and Kaushal Rath also builds momentum toward the upcoming India AI Impact Summit 2026, scheduled from February 16–20 at Bharat Mandapam, New Delhi. Positioned as the first global AI summit to be hosted in the Global South, the event is anchored on three pillars: People, Planet, and Progress. The summit aims to translate global AI discussions into development-focused outcomes aligned with India’s national priorities. But what distinguishes this effort is its nationwide groundwork. Over the past months, seven Regional AI Conferences were conducted across Meghalaya, Gujarat, Odisha, Madhya Pradesh, Uttar Pradesh, Rajasthan, and Kerala under the IndiaAI Mission. These conferences focused on practical AI deployment in governance, healthcare, agriculture, education, language technologies, and public service delivery. Policymakers, startups, academia, industry leaders, and civil society participated, ensuring that discussions were not limited to theory. Insights from these regional consultations will directly shape the agenda of the India AI Impact Summit 2026.

A Nationwide AI Push, Not Just a Summit

Several major announcements emerged from the regional conferences. Among them:
  • A commitment to train one million youth under Yuva AI for All
  • Expansion of AI Data Labs and AI Labs in ITIs and polytechnics
  • Launch of Rajasthan’s AI/ML Policy 2026
  • Announcement of the Uttar Pradesh AI Mission
  • Introduction of Madhya Pradesh’s SpaceTech Policy 2026 integrating AI
  • Signing of MoUs with institutions including Google, IIT Delhi, and National Law University, Jodhpur
  • Rollout of AI Stacks and cloud adoption frameworks for state-level governance
These developments suggest that India’s AI roadmap is not confined to policy speeches. It is being operationalised across states, with funding commitments and institutional backing. For global observers, this signals something important. Emerging economies are not merely consumers of AI technologies—they are actively shaping governance models and skilling frameworks suited to their socio-economic realities.

Why AI Literacy in India Matters Globally

Artificial Intelligence is often discussed in terms of advanced research and frontier innovation. Yet the real challenge is adoption—ensuring people understand what AI is, what it can do, and how it should be used responsibly. By launching Yuva AI for All, India is placing emphasis on foundational awareness, not just high-end research. That approach reflects a broader recognition: AI will influence public service delivery, agriculture systems, healthcare models, and digital governance worldwide. Without widespread literacy, the risk of exclusion grows. At the same time, scaling AI education in a country as large and diverse as India is no small task. The success of Kaushal Rath will depend on sustained outreach, quality training, and long-term institutional support. Still, the initiative marks a visible shift. AI is no longer framed as a specialist subject—it is being positioned as a public capability. As preparations intensify for the India AI Impact Summit 2026, Yuva AI for All stands out as a reminder that AI’s future will not be shaped only in boardrooms or research labs, but also in classrooms, ITIs, and community spaces across regions often left out of the digital conversation.

Romance, Fake Platforms, $73M Lost: Crypto Scam Leader Gets 20 Years


The U.S. justice system has sentenced the man behind one of the largest global cryptocurrency investment scam cases to two decades in prison. While the sentence signals accountability, the individual remains a fugitive after cutting off his electronic ankle monitor and fleeing in December 2025. Daren Li, a 42-year-old dual national of China and St. Kitts and Nevis, was sentenced in absentia to 20 years for carrying out a $73 million cryptocurrency fraud scheme that targeted American victims.

Inside the $73 Million Global Cryptocurrency Investment Scam

According to court documents, Li pleaded guilty in November 2024 to conspiring to launder funds obtained through cryptocurrency scams. Prosecutors revealed that the global cryptocurrency investment scam was operated from scam centers in Cambodia, a growing hotspot for transnational cyber fraud. The operation followed a now-familiar pattern often referred to as a “pig butchering scam.” Victims were approached through social media, unsolicited calls, text messages, and even online dating platforms. Fraudsters built professional or romantic relationships over weeks or months. Once trust was secured, victims were directed to spoofed cryptocurrency trading platforms that looked legitimate. In other cases, scammers posed as tech support or customer service representatives, convincing victims to transfer funds to fix non-existent viruses or fabricated technical problems. The numbers are staggering. Li admitted that at least $73.6 million flowed into accounts controlled by him and his co-conspirators. Of that, nearly $60 million was funneled through U.S. shell companies designed to disguise the origins of the stolen funds. This was not random fraud—it was organized, calculated, and industrial in scale.

Crypto Money Laundering Through U.S. Shell Companies

What makes this global cryptocurrency investment scam particularly troubling is the complex crypto money laundering infrastructure behind it. Li directed associates to establish U.S. bank accounts under shell companies. These accounts received interstate and international wire transfers from victims. The stolen money was then converted into cryptocurrency, further complicating efforts to trace and recover funds. Eight co-conspirators have already pleaded guilty. Li is the first defendant directly involved in receiving victim funds to be sentenced. Prosecutors pushed for the maximum penalty after hearing from victims who lost life savings, retirement funds, and, in some cases, their entire financial security. Assistant Attorney General A. Tysen Duva described the damage as “devastating.” And that word is not an exaggeration. Behind every dollar in this $73 million cryptocurrency scam is a real person whose trust was manipulated. “As part of an international cryptocurrency investment scam, Daren Li and his co-conspirators laundered over $73 million dollars stolen from American victims,” Duva said. “The Court’s sentence reflects the gravity of Li’s conduct, which caused devastating losses to victims throughout our country. The Criminal Division will work with our law enforcement partners around the world to ensure that Li is returned to the United States to serve his full sentence.”

Scam Centers in Cambodia Under Global Scrutiny

The sentencing comes amid increasing international pressure to dismantle scam centers in Cambodia and across Southeast Asia. For years, these operations flourished with limited oversight. Now, authorities in the U.S., China, and other nations are escalating crackdowns. China recently executed members of two crime families accused of running cyber scam compounds in Myanmar. In Cambodia, the arrest and extradition of Prince Group chairman Chen Zhi—a key figure in cyber scam money laundering—triggered chaotic scenes as human trafficking victims and scam workers sought refuge at embassies. These developments show that the global cryptocurrency investment scam network is not isolated. It is part of a larger ecosystem of organized crime, human trafficking, and digital exploitation.

Law Enforcement’s Expanding Response

The U.S. Secret Service’s Global Investigative Operations Center led the investigation, supported by Homeland Security Investigations, Customs and Border Protection, the U.S. Marshals Service, and international partners. The Justice Department’s Criminal Division continues targeting scam centers by seizing cryptocurrency, dismantling digital infrastructure, and disrupting money laundering networks. Since 2020, the Computer Crime and Intellectual Property Section (CCIPS) has secured more than 180 cybercrime convictions and recovered over $350 million in victim funds. Still, the fact that Li escaped before serving his sentence highlights a sobering truth: enforcement is improving, but global coordination must move even faster.

Why This Global Cryptocurrency Investment Scam Matters

Technology has erased borders, but it has also erased barriers for criminals. The global cryptocurrency investment scam case shows how encrypted apps, fake trading platforms, and shell corporations can be stitched together into a seamless fraud machine. The bigger concern is scale. These operations are not small-time scams run from a basement. They are corporate-style enterprises with recruiters, relationship builders, financial handlers, and laundering specialists. For investors, the lesson is clear: unsolicited investment advice, especially involving cryptocurrency, should raise immediate red flags. For regulators and governments, the message is even stronger. Financial transparency laws, international cooperation, and aggressive enforcement are no longer optional—they are essential. Daren Li’s 20-year sentence may serve as a warning, but until fugitives like him are brought back to face prison time, the fight against the next $73 million cryptocurrency scam continues.
  •  

Discord Introduces Stronger Teen Safety Controls Worldwide

Discord teen-by-default settings

Discord teen-by-default settings are now rolling out globally, marking a major shift in how the popular communication platform handles safety for users aged 13 to 17. The move signals a clear message from Discord: protecting teens online is no longer optional, it is expected. The Discord update applies to all new and existing users worldwide and introduces age-appropriate defaults, restricted access to sensitive content, and stronger safeguards around messaging and interactions. While Discord positions this as a safety-first upgrade, the announcement also arrives at a time when gaming and social platforms are under intense regulatory and public scrutiny.

What Discord Teen-by-Default Settings Actually Change

Discord, headquartered in San Francisco and used by more than 200 million monthly active users, says the new Discord teen-by-default settings are designed to create safer experiences without breaking the sense of community that defines the platform. Under the new system, teen users automatically receive stricter communication settings. Sensitive content remains blurred, access to age-restricted servers is blocked, and direct messages from unknown users are routed to a separate inbox. Only age-verified adults can change these defaults. The company says these measures are meant to protect teens while still allowing them to connect around shared interests like gaming, music, and online communities.

Age Verification, But With Privacy Guardrails

Age assurance sits at the core of the Discord teen-by-default settings rollout. Starting in early March, users may be asked to verify their age if they want to access certain content or change safety settings. Discord is offering multiple options: facial age estimation processed directly on a user’s device, or submission of government-issued ID through approved vendors. The company has also introduced an age inference model that runs quietly in the background to help classify accounts without always forcing verification. Discord stresses that privacy remains central. Video selfies never leave the device, identity documents are deleted quickly, and a user’s age status is never visible to others. In most cases, verification is a one-time process.

Why It Matters Now More Than Ever

The timing of the Discord teen-by-default settings rollout is no coincidence. In October 2025, Discord disclosed a data breach involving a third-party vendor that handled customer support and age verification. While Discord’s own systems were not breached, attackers accessed government ID photos submitted for age verification, limited billing data, and private support conversations. The incident reignited concerns about whether platforms can safely handle sensitive identity data—especially when minors are involved. For many users, that trust has not fully recovered. At the same time, regulators are tightening the screws. The U.S. Federal Trade Commission has publicly urged companies to adopt age verification tools faster. Platforms like Roblox are rolling out facial AI and ID-based age estimation, while Australia has gone further by banning social media use for children under 16. Similar discussions are underway across Europe.

Teen Safety Meets Public Skepticism

Not everyone is convinced. Online reaction, particularly on Reddit, has been harsh. Some users accuse Discord of hypocrisy, pointing to past breaches and questioning the wisdom of asking users to upload IDs to third-party vendors. Others see the changes as the beginning of the end for Discord’s open community model. There is also concern among game studios and online communities that rely heavily on Discord. If access becomes more restricted, some fear engagement could drop—or migrate elsewhere.

Giving Teens a Voice, Not Just Rules

To balance control with understanding, Discord is launching its first Teen Council, a group of 10–12 teens aged 13 to 17 who will advise the company on safety, product design, and policy decisions. The goal is to avoid guessing what teens need and instead hear it directly from them. This approach acknowledges a hard truth: safety tools only work if teens understand them and trust the platform using them.

A Necessary Shift, Even If It’s Uncomfortable

The Discord teen-by-default settings rollout reflects a broader industry reality. Platforms built for connection can no longer rely on self-reported ages and loose moderation. Governments, parents, and regulators are demanding stronger protections—and they are willing to step in if companies do not act. Discord’s approach won’t please everyone. But in today’s climate, doing nothing would be far riskier. Whether this move strengthens trust or fuels backlash will depend on how well Discord protects user data—and how honestly it continues to engage with its community.
  •  

Senegal Confirms Cyberattack on Agency Managing National ID and Biometric Data

Senegal cyberattack

The recent Senegal cyberattack on the Directorate of File Automation (DAF) has done more than disrupt government services. It has exposed how vulnerable the country’s most sensitive data systems really are, and why cybersecurity can no longer be treated as a technical issue handled quietly in the background. DAF, the government agency responsible for managing national ID cards, passports, biometric records, and electoral data, was forced to temporarily shut down operations after detecting a cyber incident. For millions of Senegalese citizens, this means delays in accessing essential identity services. For the country, it raises far bigger concerns about data security and national trust.

Senegal Cyberattack Brings Identity Services to a Standstill

In an official public notice, DAF confirmed that the production of national identity cards had been suspended following the cyberattack. Authorities assured citizens that personal data had not been compromised and that systems were being restored. However, as days passed and the DAF website remained offline, doubts began to grow. A Senegal cyberattack affecting such a critical agency is not something that can be brushed off quickly, especially when biometric and identity data are involved.

Hackers Claim Theft of Massive Biometric Data

The situation escalated when a ransomware group calling itself The Green Blood Group claimed responsibility for the attack. The group says it stole 139 terabytes of data, including citizen records, biometric information, and immigration documents. To back up its claims, the hackers released data samples on the dark web. They also shared an internal email from IRIS Corporation Berhad, a Malaysian company working with Senegal on its digital national ID system. In the email, a senior IRIS executive warned that two DAF servers had been breached and that card personalization data may have been accessed. Emergency steps were taken, including cutting network connections and shutting access to external offices. Even if authorities insist that data integrity remains intact, the scale of the alleged breach makes the Senegal cyberattack impossible to ignore.

Implications of the Senegal Cyberattack

DAF is not just another government office. It manages the digital identities of Senegalese citizens. Any compromise—real or suspected—creates long-term risks, from identity fraud to misuse of biometric data. What makes this incident more worrying is that it is not the first major breach. Just months ago, Senegal’s tax authority also suffered a cyberattack. Together, these incidents point to a larger problem: critical systems are being targeted, and attackers are finding ways in. Cybercrime groups are no longer experimenting in Africa. They are operating with confidence, speed, and clear intent. The Green Blood Group, which appeared only recently, has reportedly targeted just two countries so far—Senegal and Egypt. That alone should be taken seriously.

Disputes, Outsourcing, and Cybersecurity Blind Spots

The cyberattack also comes during a payment dispute between the Senegalese government and IRIS Corporation. While no official link has been confirmed, the situation highlights a key issue: when governments rely heavily on third-party vendors, cybersecurity responsibility can become blurred. The lesson from this Senegal cyberattack is simple and urgent. Senegal needs a dedicated National Cybersecurity Agency, along with a central team to monitor, investigate, and respond to cyber incidents across government institutions. Cyberattacks in Africa are no longer rare or unexpected. They are happening regularly, and they are hitting the most sensitive systems. Alongside better technology, organizations must focus on insider threats, staff awareness, and leadership accountability. If sensitive data from this attack is eventually leaked, the damage will be permanent. Senegal still has time to act—but only if this warning is taken seriously.
  •  

Why TikTok’s Addictive Design Is Now a Regulatory Problem

TikTok Addictive Design Under EU Regulatory Scrutiny

The European Commission’s preliminary finding that TikTok addictive design breaches the Digital Services Act (DSA) is a huge change in how regulators view social media responsibility, especially when it comes to children and vulnerable users. This is not a symbolic warning. It is a direct challenge to the design choices that have powered TikTok’s explosive growth. According to the Commission, TikTok’s core features—including infinite scroll, autoplay, push notifications, and a highly personalised recommender system—are engineered to keep users engaged for as long as possible. The problem, regulators argue, is that TikTok failed to seriously assess or mitigate the harm these features can cause, particularly to minors.

TikTok Addictive Design Fuels Compulsive Use

The Commission’s risk assessment found that TikTok did not adequately evaluate how its design impacts users’ physical and mental wellbeing. Features that constantly “reward” users with new content can push people into what experts describe as an “autopilot mode,” where scrolling becomes automatic rather than intentional. Scientific research reviewed by the Commission links such design patterns to compulsive behaviour and reduced self-control. Despite this, TikTok reportedly overlooked key indicators of harmful use, including how much time minors spend on the app at night, how frequently users reopen the app, and other behavioural warning signs. This omission matters. Under the Digital Services Act, platforms are expected not only to identify risks but to act on them. In this case, the Commission believes TikTok failed on both counts.

Risk Mitigation Measures Fall Short

The investigation also found that TikTok’s current safeguards do little to counter the risks created by its addictive design. Screen time management tools are reportedly easy to dismiss and introduce minimal friction, making them ineffective in helping users actually reduce usage. Parental controls fare no better. While they exist, the Commission notes that they require extra time, effort, and technical understanding from parents, barriers that significantly limit their real-world impact. At this stage, regulators believe that cosmetic fixes are not enough. The Commission has stated that TikTok may need to change the basic design of its service, including disabling infinite scroll over time, enforcing meaningful screen-time breaks (especially at night), and reworking its recommender system. These findings are preliminary, but the message is clear: responsibility cannot be optional when a platform’s design actively shapes user behaviour.

How Governments View Social Media Harm

The scrutiny of TikTok addictive design comes amid a broader global reassessment of social media’s impact on young users. Countries including Australia, Spain, and the United Kingdom have taken steps in recent months to restrict or ban social media use by minors, citing growing concerns over screen time and mental health. Europe’s stance reflects a wider regulatory trend: moving away from asking platforms to self-police, and toward enforcing accountability through law. This is consistent with other digital policy actions across the region, including investigations into platform transparency, data access for researchers, and online safety failures.

What Happens Next for TikTok

TikTok now has the right to review the Commission’s findings and respond in writing. The European Board for Digital Services will also be consulted. If the Commission ultimately confirms its position, it could issue a formal non-compliance decision, opening the door to fines of up to 6% of TikTok’s global annual turnover. While the outcome is not yet final, the direction is unmistakable. As Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, stated:
“Social media addiction can have detrimental effects on the developing minds of children and teens. The Digital Services Act makes platforms responsible for the effects they can have on their users. In Europe, we enforce our legislation to protect our children and our citizens online.”
The TikTok case is no longer just about one app. It is about whether growth-driven platform design can continue unchecked, or whether accountability is finally catching up.
  •  

Illinois Man Charged in Massive Snapchat Hacking Scheme Targeting Hundreds of Women

Snapchat hacking investigation

The Snapchat hacking investigation involving an Illinois man accused of stealing and selling private images of hundreds of women is not just another cybercrime case. It is a reminder of how easily social engineering can be weaponized against trust, privacy, and young digital users. Federal prosecutors say the case exposes a disturbing intersection of identity theft, online exploitation, and misuse of social media platforms that continues to grow largely unchecked. Kyle Svara, a 26-year-old from Oswego, Illinois, has been charged in federal court in Boston for his role in a wide-scale Snapchat account hacking scheme that targeted nearly 600 women. According to court documents, Svara used phishing and impersonation tactics to steal Snapchat access codes, gain unauthorized account access, and extract nude or semi-nude images that were later sold or traded online.

Snapchat Hacking Investigation Reveals Scale of Phishing Abuse

At the core of the Snapchat hacking investigation is a textbook example of social engineering. Between May 2020 and February 2021, Svara allegedly gathered emails, phone numbers, and Snapchat usernames using online tools and research techniques. He then deliberately triggered Snapchat’s security system to send one-time access codes to victims. Using anonymized phone numbers, Svara allegedly impersonated a Snap Inc. representative and texted more than 4,500 women, asking them to share their security codes. About 570 women reportedly complied—handing over access to their accounts without realizing they were being manipulated. Once inside, prosecutors say Svara accessed at least 59 Snapchat accounts and downloaded private images. These images were allegedly kept, sold, or exchanged on online forums. The investigation found that Svara openly advertised his services on platforms such as Reddit, offering to “get into girls’ snap accounts” for a fee or trade.

Snapchat Hacking for Hire

What makes this Snapchat hacking case especially troubling is that it was not driven solely by curiosity or personal motives. Investigators allege that Svara operated as a hacking-for-hire service. One of his co-conspirators was Steve Waithe, a former Northeastern University track and field coach, who allegedly paid Svara to hack Snapchat accounts of women he coached or knew personally. Waithe was convicted in November 2023 on multiple counts, including wire fraud and cyberstalking, and sentenced to five years in prison. The link between authority figures and hired cybercriminals adds a deeply unsettling dimension to the case, one that highlights how power dynamics can be exploited through digital tools. Beyond hired jobs, Svara also allegedly targeted women in and around Plainfield, Illinois, as well as students at Colby College in Maine, suggesting a pattern of opportunistic and localized targeting.

Why the Snapchat Hacking Investigation Matters

This Snapchat hacking investigation highlights a critical cybersecurity truth: technical defenses mean little when human trust is exploited. The victims did not lose access because Snapchat’s systems failed; they were deceived into handing over the keys themselves. It also raises serious questions about accountability on social platforms. While Snapchat provides security warnings and access codes, impersonation attacks continue to succeed at scale. The ease with which attackers can pose as platform representatives points to a larger problem of user awareness and platform-level safeguards. The case echoes other recent investigations, including the indictment of a former University of Michigan football coach accused of hacking thousands of athlete accounts to obtain private images. Together, these cases reveal a troubling pattern—female student athletes being specifically researched, targeted, and exploited.

Legal Consequences

Svara faces charges including aggravated identity theft, wire fraud, computer fraud, conspiracy, and false statements related to child pornography. If convicted, he could face decades in prison, with a cumulative maximum sentence of 32 years. His sentencing is scheduled for May 18. Federal authorities have urged anyone who believes they may be affected by this Snapchat hacking scheme to come forward. More than anything, this case serves as a warning. The tools used were not sophisticated exploits or zero-day vulnerabilities—they were lies, impersonation, and manipulation. As this Snapchat hacking investigation shows, the most dangerous cyber threats today often rely on human error, not broken technology.
  •  

Spain Ministry of Science Cyberattack Triggers Partial IT Shutdown

Spain Ministry of Science cyberattack

The Spain Ministry of Science cyberattack has caused a partial shutdown of government IT systems, disrupting services used daily by researchers, universities, students, and businesses across the country. While officials initially described the issue as a “technical incident,” mounting evidence and confirmations from Spanish media now point to a cyberattack involving potentially sensitive academic, personal, and financial data. The Ministry of Science, Innovation and Universities plays a central role in Spain’s research and higher education ecosystem. Any disruption to its digital infrastructure has wide-reaching consequences, making this incident far more serious than a routine systems outage.

Official Notice Confirms System Closure and Suspended Procedures

In a public notice published on its electronic headquarters, the ministry acknowledged the disruption and announced a temporary shutdown of key digital services. “As a result of a technical incident that is currently being assessed, the electronic headquarters of the Ministry of Science, Innovation and Universities has been partially closed.” The notice further stated: “All ongoing administrative procedures are suspended, safeguarding the rights and legitimate interests of all persons affected by said temporary closure, resulting in an extension of all deadlines for the various procedures affected.” The ministry added that deadline extensions would remain in place “until the complete resolution of the aforementioned incident occurs,” citing Article 32 of Law 39/2015. While procedural safeguards are welcome, the lack of early transparency around the nature of the incident raised concerns among affected users.

Spain Ministry of Science Cyberattack: Hacker Claims 

Those concerns intensified when a threat actor using the alias “GordonFreeman” appeared on underground forums claiming responsibility for the Spain Ministry of Science cyberattack. The attacker alleged that they exploited a critical Insecure Direct Object Reference (IDOR) vulnerability, granting “full-admin-level access” to internal systems. Data samples shared online—though not independently verified—include screenshots of official documents, email addresses, enrollment applications, and internal records. Spanish media outlet OKDIARIO reported that a ministry spokesperson confirmed the IT disruption was linked to a cyberattack and that the electronic headquarters had been shut down to assess the scope of the data breach. Although the forum hosting the alleged leak is now offline and the data has not resurfaced elsewhere, the screenshots appear legitimate. If confirmed, this would represent a serious breakdown in access control protections.
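The attacker's claim of an Insecure Direct Object Reference has not been verified, but the class of flaw itself is well understood. The sketch below is purely illustrative: the record store, field names, and functions are hypothetical and have nothing to do with the ministry's actual systems; they only show how an IDOR arises when a server trusts a client-supplied identifier without checking ownership.

```python
# Illustrative IDOR sketch. All data and function names are hypothetical;
# this only demonstrates the general flaw class, not the ministry's systems.

RECORDS = {
    101: {"owner": "alice", "document": "enrollment-101.pdf"},
    102: {"owner": "bob", "document": "enrollment-102.pdf"},
}

def get_record_vulnerable(record_id: int) -> dict:
    # Vulnerable: the server returns whatever ID the client asks for.
    # Any logged-in user can simply increment IDs and read other
    # people's records -- the essence of an IDOR.
    return RECORDS[record_id]

def get_record_fixed(record_id: int, requesting_user: str) -> dict:
    # Fixed: an explicit ownership check accompanies every object lookup.
    record = RECORDS[record_id]
    if record["owner"] != requesting_user:
        raise PermissionError("not authorized for this record")
    return record
```

The fix is conceptually simple, which is why IDOR findings in production systems usually point to a missing access-control layer rather than an exotic exploit.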

Alleged Data Exposure Raises Serious Privacy Concerns

According to claims made by the attacker, the stolen data includes highly sensitive information related to students and researchers, such as:
  • Scanned ID documents, NIEs, and passports
  • Email addresses
  • Payment receipts showing IBAN numbers
  • Academic records, including transcripts and apostilled degrees
  • Curricula containing private personal data
If even a portion of this data is authentic, the Spain Ministry of Science cyberattack could expose thousands of individuals to identity theft, financial fraud, and long-term privacy risks. Academic data, in particular, is difficult to replace or invalidate once leaked.

Spain’s Growing Cybercrime Problem

This Spain Ministry of Science cyberattack incident does not exist in isolation. Cybercrime now accounts for more than one in six recorded criminal offenses in Spain. Attacks have increased by 35% this year, with more than 45,000 incidents reported daily. Between late February and early March, attacks surged by 750% compared to the same period last year. During the week of 5–11 March 2025, Spain was the most targeted country globally, accounting for 22.6% of all cyber incidents—surpassing even the United States. Two factors continue to drive this trend. Rapid digital transformation, fueled by EU funding, has often outpaced cybersecurity investment. At the same time, ransomware attacks—up 120%—have increasingly targeted organizations with weak defenses, particularly public institutions and SMEs. The Spain Ministry of Science cyberattack underscores a hard truth: digital services without strong security become liabilities, not efficiencies. As public administrations expand online access, cybersecurity can no longer be treated as a secondary concern or an afterthought. Until Spain addresses systemic gaps in public-sector cybersecurity, incidents like the Ministry of Science breach will continue, not as exceptions, but as warnings ignored too long.
  •  

Why End-of-Support Edge Devices Have Become a National Security Risk

End-of-Support edge devices

The growing cyber threat from End-of-Support edge devices is no longer a technical inconvenience; it is a national cybersecurity liability. With threat actors actively exploiting outdated infrastructure, federal agencies can no longer afford to treat unsupported edge technology as a future problem. The latest Binding Operational Directive (BOD 26-02) makes one thing clear: mitigating risk from End-of-Support edge devices is now mandatory, measurable, and time-bound. This directive, issued under the authority of the Department of Homeland Security (DHS) and enforced by the Cybersecurity and Infrastructure Security Agency (CISA), forces Federal Civilian Executive Branch (FCEB) agencies to confront a long-standing weakness at the network perimeter: devices that no longer receive vendor support but still sit exposed to the internet.

Why End-of-Support Edge Devices Are a High-Risk Blind Spot

End-of-Support (EOS) edge devices are particularly dangerous because of where they live. Firewalls, routers, VPN gateways, load balancers, and network security appliances operate at the boundary of federal networks. When these devices stop receiving patches, firmware updates, or CVE fixes, they become ideal entry points for attackers. CISA has already observed widespread exploitation campaigns targeting EOS edge devices. Advanced threat actors are using them not just for initial access, but as pivot points into identity systems and internal networks. In simple terms, one outdated edge device can undermine an entire Zero Trust strategy. The uncomfortable truth is that agencies that delay replacing EOS edge devices are accepting disproportionate and avoidable risk.

Binding Operational Directive 26-02

BOD 26-02 is not guidance; it is enforcement. Federal agencies are legally required to comply, and the directive lays out a clear lifecycle-based approach to mitigating risk from End-of-Support edge devices. Within three months, agencies must inventory EOS devices using the CISA EOS Edge Device List. Within twelve months, they must decommission devices already past support deadlines. By eighteen months, all EOS edge devices must be removed from agency networks, replaced with vendor-supported alternatives. Most importantly, the directive doesn’t stop at cleanup. Within twenty-four months, agencies must establish continuous discovery processes to ensure no edge device reaches EOS while still operational. This is the shift federal cybersecurity has needed for years—from reactive patching to proactive lifecycle management.
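The inventory step the directive mandates is, at its core, a date comparison across an asset list. The sketch below is a minimal, hedged illustration of that idea: the device names and dates are made up, and a real agency process would pull from the CISA EOS Edge Device List and an authoritative asset database rather than hardcoded data.

```python
# Hedged sketch of the EOS triage step. Device names and EOS dates are
# invented for illustration; real inventories come from the CISA EOS Edge
# Device List and agency asset-management systems.
from datetime import date

INVENTORY = [
    {"device": "edge-fw-01", "eos_date": date(2024, 6, 30)},   # past support
    {"device": "vpn-gw-02",  "eos_date": date(2026, 12, 31)},  # still supported
]

def classify(inventory, today):
    """Split devices into those already past End-of-Support (candidates
    for immediate decommissioning) and those still supported (to track
    for refresh before they age out)."""
    past = [d["device"] for d in inventory if d["eos_date"] < today]
    supported = [d["device"] for d in inventory if d["eos_date"] >= today]
    return past, supported

past, supported = classify(INVENTORY, date(2025, 11, 1))
# past holds devices already out of support; supported holds the rest.
```

The continuous-discovery requirement in the directive amounts to running a check like this on a schedule, so no device reaches its EOS date while still on the network.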

Lifecycle Management is the Real Security Control

What BOD 26-02 exposes is not just a device problem, but a governance failure. Agencies that struggle with End-of-Support edge devices often lack mature asset management, refresh planning, and procurement alignment. OMB Circular A-130 already required unsupported systems to be phased out “as rapidly as possible.” This directive simply removes ambiguity and excuses. If an agency cannot track when its edge devices reach EOS, it cannot credibly claim to manage cyber risk. The directive also aligns closely with Zero Trust principles outlined in OMB Memorandum M-22-09, reinforcing MFA, asset visibility, workload isolation, and encryption. EOS devices undermine every one of these controls.

What it Means for Federal Cybersecurity

Some agencies will view this directive as operationally disruptive. That reaction misses the point. The real disruption comes from ransomware, espionage, and persistent network compromise—outcomes that EOS edge devices actively enable. BOD 26-02 signals a long-overdue cultural shift: unsupported technology is no longer tolerated at the federal network edge. Agencies that treat compliance as a checkbox will struggle. Those that use it to modernize lifecycle management will be far more resilient. In today’s threat environment, mitigating risk from End-of-Support edge devices is not about compliance; it’s about survival.
  •  

Russian Cyberattacks Target Milan-Cortina Winter Olympics Ahead of Opening Ceremony

Russian cyberattacks

With the Milan-Cortina Winter Olympics just hours from opening, Russian cyberattacks have forced Italian authorities into a full-scale security response that blends digital defence with boots on the ground. Italy confirmed this week that it successfully thwarted a coordinated wave of cyber incidents targeting government infrastructure and Olympic-linked sites, exposing how global sporting events are now frontline targets in geopolitical conflict. Italian Foreign Minister Antonio Tajani revealed that the Russian cyberattacks hit around 120 websites, including Italy’s foreign ministry offices abroad and several Winter Olympics-related locations, such as hotels in Cortina d’Ampezzo. While officials insist the attacks were “effectively neutralised,” the timing sends a clear message: cyber operations are now as much a part of Olympic security planning as physical threats.

Russian Cyberattacks and the Olympics: A Political Signal

According to Tajani, the attacks began with foreign ministry offices, including Italy’s embassy in Washington, before spreading to Olympic-linked infrastructure. A Russian hacker group known as Noname057 claimed responsibility, framing the Russian cyberattacks as retaliation for Italy’s political support for Ukraine. In a statement shared on Telegram, the group warned that Italy’s “pro-Ukrainian course” would be met with DDoS attacks—described provocatively as “missiles”—against Italian websites. While AFP could not independently verify the group’s identity, cybersecurity analysts noted that the tactics and messaging align with previous operations attributed to the same network. DDoS attacks may seem unsophisticated compared to advanced espionage campaigns, but their impact during high-profile events like the Olympics is strategic. Disrupting hotel websites, travel systems, or government portals creates confusion, undermines confidence, and grabs headlines—all without crossing into kinetic conflict.

Digital Threats Meet Physical Security Lockdown

Italy’s response to the Russian cyberattacks has been layered and aggressive. More than 6,000 police officers and nearly 2,000 military personnel have been deployed across Olympic venues stretching from Milan to the Dolomites. Snipers, bomb disposal units, counterterrorism teams, and even skiing police are now part of the security landscape. The defence ministry has added drones, radars, aircraft, and over 170 vehicles, underlining how cyber threats are now treated as triggers for broader security escalation. Milan, hosting the opening ceremony at San Siro stadium, is under particular scrutiny, with global leaders—including US Vice President JD Vance—expected to attend. The International Olympic Committee, however, stuck to its long-standing position. “We don’t comment on security,” IOC communications director Mark Adams said, a response that feels increasingly outdated in an era where Russian cyberattacks are openly claimed and politically framed.

ICE Controversy Adds Fuel to a Tense Atmosphere

Cybersecurity is not the only issue complicating preparations for the 2026 Winter Olympics. The presence of US Immigration and Customs Enforcement (ICE) officials in Italy has sparked political backlash and public protests. Milan Mayor Giuseppe Sala went as far as to say ICE agents were “not welcome,” calling the agency “a militia that kills.” Italy’s Interior Minister Matteo Piantedosi pushed back hard, clarifying that ICE’s Homeland Security Investigations unit would operate strictly within US diplomatic missions and have no enforcement powers. Still, the optics matter—especially as Russian cyberattacks amplify fears of foreign interference and sovereignty breaches. Even symbolic gestures have changed. A US hospitality venue originally called “Ice House” was quietly renamed “Winter House,” highlighting how sensitive the political climate has become.

What the Incognito Market Sentencing Reveals About Dark Web Drug Trafficking


The 30-year prison sentence handed to Rui-Siang Lin, the operator of the infamous Incognito Market, is more than just another darknet takedown story. Lin, who ran Incognito Market under the alias “Pharaoh,” oversaw one of the largest online narcotics operations in history, generating more than $105 million in illegal drug sales worldwide before its collapse in March 2024. Platforms like Incognito Market are not clever experiments in decentralization. They are industrial-scale criminal enterprises, and their architects will be treated as such.

How Incognito Market Became a Global Narcotics Hub

Launched in October 2020, Incognito Market was designed to look and feel like a legitimate e-commerce platform, only its products were heroin, cocaine, methamphetamine, MDMA, LSD, ketamine, and counterfeit prescription drugs. Accessible through the Tor browser, the dark web marketplace allowed anyone with basic technical knowledge to buy illegal narcotics from around the globe. At its peak, Incognito Market supported over 400,000 buyer accounts, more than 1,800 vendors, and facilitated 640,000 drug transactions. Over 1,000 kilograms of cocaine, 1,000 kilograms of methamphetamine, and fentanyl-laced pills were likely sold, the authorities said. This was not a fringe operation—it was a global supply chain built on code, crypto, and calculated harm.
Also read: “Incognito Market” Operator Arrested for Running $100M Narcotics Marketplace

“Pharaoh” and the Business of Digital Drug Trafficking

Operating as “Pharaoh,” Lin exercised total control over Incognito Market. Vendors paid an entry fee and a 5% commission on every sale, creating a steady revenue stream that funded servers, staff, and Lin’s personal profit—more than $6 million by prosecutors’ estimates. The marketplace operated with professional polish, from branding, customer service, and vendor ratings to its own internal financial system, the Incognito Bank, which allowed users to deposit cryptocurrency and transact anonymously. The system was designed to remove trust from human relationships and replace it with platform-controlled infrastructure. This was not chaos. It was corporate-style crime.

Fentanyl, Fake Oxycodone, and Real Deaths

In January 2022, Lin explicitly allowed opiate sales on Incognito Market, a decision that proved deadly. Listings advertised “authentic” oxycodone, but laboratory tests later revealed fentanyl instead. In September 2022, a 27-year-old man from Arkansas died after consuming pills purchased through the platform. This is where the myth of victimless cybercrime collapsed. Incognito Market did not just move drugs—it amplified the opioid crisis and directly contributed to loss of life. U.S. Attorney Jay Clayton stated that Lin’s actions caused misery for more than 470,000 users and their families, a figure that shows the human cost behind the transactions.

Exit Scam, Extortion, and the Final Collapse

When Incognito Market shut down in March 2024, Lin didn’t disappear quietly. He stole at least $1 million in user deposits and attempted to extort buyers and vendors, threatening to expose their identities and crypto addresses. His message was blunt: “YES, THIS IS AN EXTORTION!!!” It was a fittingly brazen end to an operation built on manipulation and fear. Judge Colleen McMahon called Incognito Market the most serious drug case she had seen in nearly three decades, labeling Lin a “drug kingpin.” The message from law enforcement is unmistakable: dark web platforms, cryptocurrency, and blockchain are not shields against justice.

Mountain View Shuts Down Flock Safety ALPR Cameras After Year-Long Unrestricted Data Access


Mountain View’s decision to shut down its automated license plate reader program is a reminder of an uncomfortable truth: surveillance technology is only as trustworthy as the systems—and vendors—behind it. This week, Police Chief Mike Canfield announced that all Flock Safety ALPR cameras in Mountain View have been turned off, effective immediately. The move pauses the city’s pilot program until the City Council reviews its future at a February 24 meeting. The decision comes after the police department discovered that hundreds of unauthorized law enforcement agencies had been able to search Mountain View’s license plate camera data for more than a year—without the city’s awareness. For a tool that was sold to the public as tightly controlled and privacy-focused, this is a serious breach of trust.

Flock Safety ALPR Cameras Shut Down Over Data Access Failures

In his message to the community, Chief Canfield made it clear that while the Flock Safety ALPR pilot program had shown value in solving crimes, he no longer has confidence in the vendor. “I personally no longer have confidence in this particular vendor,” Canfield wrote, citing failures in transparency and access control. The most troubling issue, according to the police chief, was the discovery that out-of-state agencies had been able to search Mountain View’s license plate data—something that should never have been possible under state law or city policy. This wasn’t a minor technical glitch. It was a breakdown in oversight, accountability, and vendor responsibility.

Automated License Plate Readers Under Growing National Scrutiny

Automated license plate readers, or ALPR surveillance cameras, have become one of the most controversial policing technologies in the United States. These cameras capture images of passing vehicles, including license plate numbers, make, and model. The information is stored and cross-checked against databases to flag stolen cars or vehicles tied to investigations. Supporters argue that ALPRs help law enforcement respond faster and solve crimes more efficiently. But critics have long warned that ALPR systems can easily become tools of mass surveillance—especially when data-sharing controls are weak. That concern has intensified under the Trump administration, as reports have emerged of license plate cameras being used for immigration enforcement and even reproductive healthcare-related investigations. Mountain View’s case shows exactly why the debate isn’t going away.
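The capture-store-cross-check workflow described above can be sketched in a few lines. This is a hedged illustration with invented plate values and an invented hotlist, not any vendor’s actual system:

```python
# Hypothetical ALPR reads: plate text plus vehicle attributes from the camera.
reads = [
    {"plate": "7ABC123", "make": "Toyota", "model": "Camry"},
    {"plate": "4XYZ987", "make": "Honda", "model": "Civic"},
]

# Hotlist of plates tied to stolen vehicles or open investigations.
hotlist = {"4XYZ987": "stolen vehicle", "2QRS555": "investigation"}

def match_reads(reads, hotlist):
    """Cross-check each captured plate against the hotlist and return hits."""
    return [(r["plate"], hotlist[r["plate"]]) for r in reads if r["plate"] in hotlist]

for plate, reason in match_reads(reads, hotlist):
    print(f"ALERT: {plate} flagged ({reason})")
```

The privacy stakes come from everything outside this loop: every non-matching read is still captured and retained, and who may query that retained data is exactly the control that failed in Mountain View.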

Mountain View Police Violated Its Own ALPR Policies

According to disclosures made this week, the Mountain View Police Department unintentionally violated its own policies by allowing statewide and national access to its ALPR data. Chief Canfield admitted that “statewide lookup” had been enabled since the program began 17 months ago, meaning agencies across California could search Mountain View’s license plate records without prior authorization. Even more alarming, “national lookup” was reportedly turned on for three months in 2024, allowing agencies across the country to access the city’s data. State law prohibits sharing ALPR information with out-of-state agencies, especially for immigration enforcement purposes. So how did it happen? Canfield was blunt: “Why wasn’t it caught sooner? I couldn’t tell you.” That answer won’t reassure residents who were promised strict safeguards.
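The failure described here is a sharing toggle drifting out of step with policy for 17 months. A minimal sketch of the periodic configuration audit that would have caught it, assuming hypothetical setting names (this is not Flock’s actual configuration API):

```python
# Hypothetical ALPR sharing configuration, as exported from a vendor dashboard.
config = {"statewide_lookup": True, "national_lookup": False}

# Policy derived from state law and city rules: no out-of-jurisdiction sharing.
policy = {"statewide_lookup": False, "national_lookup": False}

def audit_sharing(config, policy):
    """Return every sharing flag that is enabled in config but forbidden by policy."""
    return sorted(k for k, allowed in policy.items() if config.get(k) and not allowed)

violations = audit_sharing(config, policy)
for flag in violations:
    print(f"POLICY VIOLATION: {flag} is enabled")
```

Run on any regular schedule, a check like this turns “Why wasn’t it caught sooner?” into an answerable question.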

Community Trust Matters More Than Surveillance Tools

Chief Canfield’s message repeatedly emphasized one point: technology cannot replace trust. “Community trust is more important than any individual tool,” he wrote. That statement deserves attention. Police departments across the country have adopted surveillance systems with the promise of safety, only to discover later that the systems operate with far less control than advertised. When a vendor fails to disclose access loopholes—or when law enforcement fails to detect them—the public pays the price. Canfield acknowledged residents’ anger and frustration, offering an apology and stating that transparency is essential for community policing. It’s a rare moment of accountability in a space where surveillance expansion often happens quietly.

Flock Safety Faces Questions About Transparency and Oversight

Mountain View’s ALPR program began in May 2024, when the City Council approved a contract with Flock Safety, a surveillance technology company. Beginning in August 2024, the city installed cameras at major entry and exit points, and by January 2026 it had 30 Flock cameras operating. Now, the entire program is paused. Flock spokesperson Paris Lewbel said the company would address the concerns directly with the police chief, but the damage may already be done. This incident raises a bigger question: should private companies be trusted to manage sensitive surveillance infrastructure in the first place?

What Happens Next for the Flock Safety ALPR Program?

The City Council will now decide whether Mountain View continues with the Flock contract, modifies the program, or shuts it down permanently. But the broader lesson is already clear. ALPR surveillance cameras may offer law enforcement real investigative value, but without airtight safeguards, they risk becoming tools of unchecked monitoring. Mountain View’s shutdown is not just a local story—it’s part of a national reckoning over how much surveillance is too much, and whether public safety can ever justify the loss of privacy without full accountability.

Spain Bans Social Media Platforms for Kids as Global Trend Grows


Spain is preparing to take one of the strongest steps yet in Europe’s growing push to regulate the digital world for young people. Spain will ban social media platforms for children under the age of 16, a move Prime Minister Pedro Sanchez framed as necessary to protect minors from what he called the “digital Wild West.” This is not just another policy announcement. The decision reflects a wider global shift: governments are finally admitting that social media has become too powerful, too unregulated, and too harmful for children to navigate alone.

Spain Bans Social Media Platforms for Children Under 16

Speaking at the World Government Summit in Dubai, Sanchez said Spain will require social media platforms to implement strict age verification systems, ensuring that children under 16 cannot access these services freely. “Social media has become a failed state,” Sanchez declared, arguing that laws are ignored and harmful behavior is tolerated online. The ban is being positioned as a child safety measure, but it is also a direct challenge to tech companies that have long avoided accountability. Sanchez’s language was blunt, and honestly, refreshing. For years, platforms have marketed themselves as neutral spaces while profiting from algorithms that amplify outrage, addictive scrolling, and harmful content. Spain’s message is clear: enough is enough.

Social Media Ban and Executive Accountability

Spain is not stopping at age limits. Sanchez also announced a new bill expected next week that would hold social media executives personally accountable for illegal and hateful content. That is a significant escalation. A social media ban alone may restrict access, but forcing executives to face consequences could change platform behavior at its core. The era of tech leaders hiding behind “we’re just a platform” excuses may finally be coming to an end. This makes Spain’s approach one of the most aggressive in Europe so far.

France Joins the Global Social Media Ban Movement

Spain is not acting in isolation. On February 3, 2026, French lawmakers approved their own social media ban for children under 15. The bill passed by a wide margin in the National Assembly and is expected to take effect in September, at the start of the next school year. French President Emmanuel Macron strongly backed the move, saying: “Our children’s brains are not for sale… Their dreams must not be dictated by algorithms.” That statement captures the heart of this debate. Social media is not just entertainment anymore. It is an attention economy designed to hook young minds early, shaping behavior, self-image, and even mental health. France’s decision adds momentum to the idea that a global social media ban for children may soon become the norm rather than the exception.

Australia’s World-First Social Media Ban for Children Under 16

The strongest example so far comes from Australia, which implemented a world-first social media ban for children under 16 in December 2025. The ban covered major platforms including:
  • Facebook
  • Instagram
  • TikTok
  • Snapchat
  • Reddit
  • X
  • YouTube
  • Twitch
Messaging apps like WhatsApp were exempt, acknowledging that communication tools are different from algorithm-driven feeds. Since enforcement began, companies have revoked access to around 4.7 million accounts linked to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australia’s case shows that enforcement is possible, even at scale, through ID checks, third-party age estimation tools, and data inference. Yes, some children try to bypass restrictions. But the broader impact is undeniable: governments can intervene when platforms fail to self-regulate.
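The enforcement methods described here, ID checks, third-party age estimation, and inference from existing account data, amount to a decision over whatever age signal is available. The following is a hedged sketch with invented field names and a deliberately simplified default-deny fallback, not any platform’s actual logic:

```python
def allowed_access(user, min_age=16):
    """Decide access from the strongest available age signal:
    verified ID first, then third-party estimation, then inference
    from account data. With no signal at all, deny by default."""
    if user.get("id_verified_age") is not None:
        return user["id_verified_age"] >= min_age
    if user.get("estimated_age") is not None:
        return user["estimated_age"] >= min_age
    inferred = user.get("inferred_age")
    return inferred is not None and inferred >= min_age

assert allowed_access({"id_verified_age": 17})
assert not allowed_access({"estimated_age": 14})
assert not allowed_access({})  # no signal: default deny
```

The ordering is the design choice: a verified document outranks a statistical estimate, which outranks inference, so a child cannot upgrade access by gaming the weakest signal.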

UK Exploring Similar Social Media Ban Measures

The United Kingdom is now considering its own restrictions. Prime Minister Keir Starmer recently said the government is exploring a social media ban for children aged 15 and under, alongside stricter age verification and limits on addictive features. The UK’s discussion highlights another truth: this is no longer just about content moderation. It’s about the mental wellbeing of an entire generation growing up inside algorithmic systems.

Is a Social Media Ban Globally for Children the Future?

Spain’s move, combined with France, Australia, and the UK, signals a clear global trend. For years, social media companies promised safety tools, parental controls, and community guidelines. Yet harmful content, cyberbullying, predatory behavior, and addictive design have continued to spread. The reality is uncomfortable: platforms were never built with children in mind. They were built for engagement, profit, and data. A global ban on social media for children may not be perfect, but it is becoming a political and social necessity. Spain’s decision to ban social media platforms for children under the age of 16 is not just about restricting access. It is about redefining digital childhood, reclaiming accountability, and admitting that the online world cannot remain lawless. The digital Wild West era may finally be ending.

France Approves Social Media Ban for Children Under 15 Amid Global Trend


French lawmakers have approved a social media ban for children under 15, a move aimed at protecting young people from harmful online content. The bill, which also restricts mobile phone use in high schools, was passed by a 130-21 vote in the National Assembly and is expected to take effect at the start of the next school year in September. French President Emmanuel Macron has called for the legislation to be fast-tracked, and it will now be reviewed by the Senate. “Banning social media for those under 15: this is what scientists recommend, and this is what the French people are overwhelmingly calling for,” Macron said. “Our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Their dreams must not be dictated by algorithms.”

Why France Introduced a Social Media Ban for Children

The new social media ban for children in France is part of a broader effort to address the negative effects of excessive screen time and harmful content. Studies show that one in two French teenagers spends between two and five hours daily on smartphones, with 58% of children aged 12 to 17 actively using social networks. Health experts warn that prolonged social media use can lead to reduced self-esteem, exposure to risky behaviors such as self-harm or substance abuse, and mental health challenges. Some families in France have even taken legal action against platforms like TikTok over teen suicides allegedly linked to harmful online content. The French legislation carefully exempts educational resources, online encyclopedias, and platforms for open-source software, ensuring children can still access learning and development tools safely.

Lessons From Australia’s Social Media Ban for Children

France’s move mirrors global trends. In December 2025, Australia implemented a social media ban for children under 16, covering major platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, Threads, X, YouTube, and Twitch. Messaging apps like WhatsApp were exempt. Since the ban, social media companies have revoked access to about 4.7 million accounts identified as belonging to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australian officials said the measures restore children’s online safety and prevent predatory social media practices. Platforms comply with the ban through age verification methods such as ID checks, third-party age estimation technologies, or inference from existing account data. While some children attempted to bypass restrictions, the ban is considered a significant step in protecting children online.

UK Considers Following France and Australia

The UK is also exploring similar measures. Prime Minister Keir Starmer recently said the government is considering a social media ban for children aged 15 and under, along with stricter age verification, phone curfews, and restrictions on addictive platform features. The UK’s move comes amid growing concern about the mental wellbeing and safety of children online.

Global Shift Toward Child Cyber Safety

The introduction of a social media ban for children in France, alongside Australia’s implementation and the UK’s proposal, highlights a global trend toward protecting minors in the digital age. These measures aim to balance access to educational and creative tools while shielding children from online harm and excessive screen time. As more countries consider social media regulations for minors, the focus is clear: ensuring cyber safety, supporting mental health, and giving children the chance to enjoy a safe and healthy online experience.

Lt Gen (Dr) Rajesh Pant to Lead Webinar on AI-Driven Cyber Threats — Register Free Now


Cyble and The Cyber Express have announced a high-impact AI cybersecurity webinar for February 2026, bringing urgent focus to the growing convergence of AI-driven cybercrime, ransomware escalation, and hacktivism-led disruption. Titled “AI, Ransomware & Hacktivism: The Cyber Risk Shift Most Leaders Are Failing to See,” the session will feature Lt Gen (Dr) Rajesh Pant, Chairman of the Cyber Security Association of India and former National Cyber Security Coordinator, Government of India. The Zoom webinar will take place on Tuesday, 24 February 2026, at 4:00 PM IST, moderated by Mihir Bagwe, Principal Correspondent, The Cyber Express. Registration is now open with FREE seats available, but slots are limited and filling quickly. Register Now (FREE, Limited Seats): [Insert Registration Link Here]

Bonus for Registered Attendees: Annual Threat Landscape Report 2025

All registered attendees of the webinar will receive a downloadable copy of the Annual Threat Landscape Report 2025. The 2025 threat landscape shows ransomware, hacktivism, and AI-enabled attacks continuing to scale despite global law enforcement disruptions. Based on millions of observations across dark web and open web sources spanning industries, regions, and sectors, the report reveals:
  • How attackers adapted
  • Where defenses failed
  • Which threats are set to persist into 2026
This makes the webinar a valuable learning and intelligence opportunity as organizations plan their cybersecurity strategies for 2026.

AI Cybersecurity Webinar February 2026: Why This Session Matters Now

This webinar comes at a critical moment, as the global cyber threat environment rapidly evolves under the influence of AI. Ransomware groups are increasingly using AI to automate targeting, improve evasion, and scale attacks across industries. At the same time, hacktivist campaigns are merging with organized cybercrime, creating hybrid threats that challenge both enterprise security teams and national infrastructure defenses. The rise of these combined risks is shaping the cybersecurity landscape of 2026, and leaders who fail to adapt now may face severe consequences in the year ahead.

Featuring Lt Gen (Dr) Rajesh Pant at the AI Ransomware Webinar February 2026

The upcoming webinar will offer rare leadership-level insights from Lt Gen (Dr) Rajesh Pant, Chairman of the Cyber Security Association of India and former National Cyber Security Coordinator, Government of India. With decades of experience guiding national cyber preparedness and responding to global threat dynamics, Dr. Pant will share frontline perspectives on how AI is reshaping ransomware operations and hacktivism-driven cyber disruption.

What This AI Ransomware Webinar February 2026 Covers

This session will focus on the cyber risk shifts most leaders are still underestimating. Key discussion points include:
  • How threat actors are using AI to expand ransomware campaigns
  • Why hacktivism is converging with cybercrime networks
  • The most dangerous cyber risk trends heading into 2026
  • What CISOs must prioritize now to avoid reactive defenses later
  • How leadership, policy, and execution often fail to align
The webinar will also explore evolving hacktivist activity, where AI-enabled tactics are accelerating rapidly.

Here's Why You Should Attend This AI Cybersecurity Webinar February 2026

This webinar is designed for CISOs, cyber risk leaders, security professionals, and decision-makers who need clarity on what comes next. By attending, participants will gain:
  • Strategic understanding of AI-powered ransomware evolution
  • Insights into the hacktivism-cybercrime overlap
  • Practical guidance for preparing enterprise defenses for 2026
  • Direct perspectives from one of India’s top cyber leaders
For professionals tracking hacktivist threats, this session provides essential context and actionable takeaways. Registration is FREE, but seats are limited and filling quickly. Don’t miss this essential discussion on the future of AI-driven cyber threats. Register Now (FREE)

Berchem School Hit by Cyberattack as Hackers Target Parents With €50 Ransom Demand


A cyberattack on a Berchem school has raised serious concerns after hackers demanded ransom money not only from the institution but also directly from students’ families. The incident occurred at the secondary school Onze-Lieve-Vrouwinstituut Pulhof (OLV Pulhof), where attackers disrupted servers and later threatened to release sensitive information unless payments were made. The case, confirmed by the public prosecutor’s office and first reported by ATV, highlights the growing threat of ransomware attacks on schools, where cybercriminals increasingly target educational institutions due to their reliance on digital systems and the sensitive data they store.

Cyberattack on Berchem School Disrupted Servers

The Berchem school hacking incident took place shortly after the Christmas holidays, in early January. According to reports, the school’s servers were taken offline, causing disruption to internal systems. Hackers reportedly demanded a ransom from the school soon after the breach. However, the institution refused to comply with the demands. This decision appears to have triggered an escalation in the attackers’ strategy, shifting pressure onto parents.

School Files Police Complaint After Ransom Demand

Following the cyberattack on Berchem school, OLV Pulhof acted quickly by contacting law enforcement. The school filed a formal complaint against unknown persons and brought in the police’s Regional Computer Crime Unit (RCCU) to respond to the incident. In addition to involving authorities, the school also moved to secure its digital infrastructure. Out of concern for student safety and data protection, the institution reportedly set up a new, secure network environment soon after the breach. The incident is now under investigation by the Federal Judicial Police.

Hackers Target Parents With €50 Per Child Ransom Demand

This week, the cybercriminals expanded their attack by sending threatening messages directly to parents of students. The hackers demanded a ransom of 50 euros per child, warning that private information such as addresses or photos could be released if the payment was not made. A student described the situation, saying that the school required everyone to change passwords and warned students not to click on suspicious links. “We had to change all our passwords at school, otherwise they would release our addresses or photos,” the student said. Another student added that their father received an email demanding payment, which caused fear and uncertainty. “My dad also got an email last night. That scares me a little. They were asking for 50 euros per child.” This tactic reflects a disturbing trend in school cyberattacks, where criminals attempt to exploit families emotionally and financially.

Parents Advised Not to Pay and Not to Click

The school has strongly advised parents not to respond to the ransom demands. Families were told not to pay, and more importantly, not to click on any links or attachments included in the hackers’ communications, as these could lead to further compromise or malware infections. Cybersecurity experts generally warn against paying ransoms, as it does not guarantee that stolen data will be deleted or that systems will be restored. Paying can also encourage attackers to continue targeting schools and vulnerable communities.

Classes Continue Despite Cybersecurity Incident

Despite the attack, lessons at OLV Pulhof have continued. While the school’s servers were initially down, it appears that temporary solutions and new systems allowed teaching to proceed. However, the full consequences of the hacking have not yet been disclosed. It remains unclear what data may have been accessed or whether any personal information was stolen. Educational institutions often store sensitive records, including student details, contact information, and internal documents, making them attractive targets for cybercriminal groups.

Rising Concern Over Ransomware Attacks on Schools

The cyberattack on the Berchem secondary school is part of a wider pattern of increasing cybercrime targeting schools across Europe. Schools often face limited cybersecurity budgets, older IT systems, and large networks of users, making them easier to infiltrate than larger corporate organizations. Attacks like this demonstrate how ransomware incidents can go beyond technical disruption, affecting families and creating fear in local communities.

Investigation Ongoing

Authorities have not yet identified who is behind the attack. The Federal Judicial Police continue to investigate, while the school works to strengthen its systems and protect students and staff. For now, parents are being urged to remain cautious, avoid engaging with the attackers, and report any suspicious communications to law enforcement. The Berchem school cyberattack serves as a reminder that cybersecurity in schools is no longer optional but essential for protecting students, families, and the education system itself.

Union Budget 2026–27: India Bets Big on Cloud, AI, and Cyber Resilience


When India’s Union Finance Minister Nirmala Sitharaman presented the Union Budget 2026–27 on February 1, it became clear that this year’s financial roadmap is not only about fiscal numbers; it is also about shaping the infrastructure of India’s digital future. The India Budget 2026 sends a strong and confident message: India wants to lead in the next phase of global growth, and that leadership will be built on AI, cloud, data centres, semiconductors, and cybersecurity. What stands out in Budget 2026 is the long-term thinking. Instead of short-term incentives or fragmented digital schemes, the government is laying down policy signals that stretch decades ahead, a rare and important move in the technology sector.

Budget 2026 Recognises Digital Infrastructure as Economic Infrastructure

For years, digital growth was often discussed as an enabler. In Budget 2026, digital infrastructure is being treated as core infrastructure, just like roads, railways, or energy. Union Minister for Electronics and IT Ashwini Vaishnaw rightly pointed out that AI data centres are now part of the foundation layer of modern economies. Without compute power, cloud capacity, and reliable digital networks, AI ambitions remain theoretical. India already has nearly USD 70 billion in investments underway in this space, with another USD 90 billion announced. That alone reflects how quickly the global market is betting on India’s data centre potential. But Budget 2026 goes further.

The Tax Holiday Till 2047 Is a Bold Signal

One of the most defining announcements in India Budget 2026 is the proposed tax holiday till 2047 for foreign cloud companies providing services globally using Indian data centres. This is more than a tax incentive — it is a strategic invitation. By offering policy clarity over two decades, India is telling global cloud players: build here, scale here, and serve the world from here. In a world where cloud infrastructure is becoming as geopolitically important as energy supply chains, this move positions India as not just a consumer of cloud services, but a serious global hosting hub. The safe harbour provision of 15% on cost further strengthens confidence for companies operating through related entities. This long-term stability may prove to be one of the smartest digital policy bets India has made in years.

IT Services Get the Relief They Have Long Needed

India’s IT services industry has always been a powerhouse, with exports now exceeding USD 220 billion. Yet, the sector has often faced complexity in tax compliance and transfer pricing frameworks. Budget 2026–27 addresses this in a practical way. By grouping software development, IT-enabled services, KPO, and contract R&D under one unified category — Information Technology Services — the government is acknowledging how interconnected these segments truly are. The proposed safe harbour margin of 15.5% and the jump in threshold from Rs. 300 crore to Rs. 2,000 crore are not just technical reforms. They reduce friction, encourage growth, and allow companies to focus more on innovation than paperwork. Even more importantly, approvals through an automated, rule-driven process remove uncertainty — something businesses value as much as incentives.

Semiconductor Mission 2.0 Reflects Strategic Continuity

The announcement of India Semiconductor Mission (ISM) 2.0, with an allocation of Rs. 1,000 crore, reinforces that India is serious about building supply chain resilience. Semiconductors are no longer just an industrial component — they are a strategic necessity. ISM 2.0’s focus on full-stack Indian IP, equipment production, and skilled workforce development reflects a long-term push toward self-reliance, not isolation. It is about India becoming a meaningful contributor to global technology manufacturing, not just an importer.

Electronics Manufacturing Momentum Is Clearly Building

The expansion of the Electronics Components Manufacturing Scheme (ECMS), with the outlay proposed to rise to Rs. 40,000 crore, shows that India wants to capture the manufacturing opportunity created by global supply chain shifts. Investment commitments already exceeding targets indicate that industry is responding. This is one area where Budget 2026 could create a multiplier effect — across jobs, exports, innovation, and ecosystem development.

Industry Response Highlights the Bigger Picture

The technology and cybersecurity industry has largely welcomed the direction of Budget 2026, especially its long-term focus on cloud infrastructure, AI readiness, and digital resilience. Pinkesh Kotecha, Chairman and CEO of Ishan Technologies, noted that the Budget puts strong backing behind India’s infrastructure ambitions.
“Union Budget 2026 puts hard numbers behind India’s digital infrastructure ambition,” he said, pointing to the tax holiday till 2047 for global cloud providers using Indian data centres and the safe harbour provisions for IT services. According to him, these steps position India not only as a large digital market, but also as “a global hosting hub.”
He also stressed that as AI workloads grow, the need for secure, high-availability connectivity will become just as important as compute and storage. Cybersecurity leaders have echoed similar views. Major Vineet Kumar, Founder and Global President of CyberPeace, called the Budget a strong signal that India’s growth and security priorities are now deeply connected.
“India’s growth ambitions are now inseparable from its digital and security foundations,” he said.
He added that the focus on AI, cloud, and deep-tech infrastructure makes cybersecurity a core national and economic requirement, not a secondary concern. From the banking and services perspective, Manish S., Head of Trade Finance Implementation at Standard Chartered India, highlighted the opportunities the Budget creates for professionals and businesses.
“India’s Budget 2026–27 supports services with fiscal incentives for foreign cloud firms, a data centre push, GCC support and skilling commitments,”
he said, encouraging professionals to upskill in cloud, AI, data engineering, and cybersecurity to stay relevant in the evolving ecosystem. Infrastructure providers also see long-term impact. Subhasis Majumdar, Managing Director of Vertiv India, described the tax holiday as a major competitiveness boost.
“The long-term tax holiday for foreign cloud companies until 2047 is a game-changing move,”
he said, adding that it will attract large global investments and create a multiplier effect across power, cooling, and critical digital infrastructure. Sujata Seshadrinathan, Co-Founder and Director at Basiz Fund Service, also welcomed the Budget’s balanced approach to advanced technology adoption. She noted that the government has recognised both the benefits and challenges of emerging technologies like AI, including ecological concerns and labour displacement. She highlighted that the focus on skilling, reskilling, and DeepTech-led inclusive growth is “a push in the right direction.” Together, these reactions reflect a shared view across industry: Budget 2026 is not just supporting technology growth, but actively shaping the foundation for India’s long-term digital and cyber future.

Budget 2026 Sets the Stage for India’s Digital Decades

Overall, Budget 2026 feels less like an annual budget and more like a policy blueprint for India’s digital future. The focus on AI infrastructure, cloud investments, IT simplification, semiconductor capability, and cybersecurity readiness suggests India is preparing not just for the next fiscal year — but for the next generation. The foundation is being laid. The opportunity is clear. The next step will be execution — because if these measures translate into real infrastructure, skilled talent, and secure digital systems, India Budget 2026 could be remembered as the moment India firmly positioned itself as a global digital powerhouse.

U.S. and Bulgaria Shut Down Three Major Piracy Websites in EU Crackdown

online piracy

In a major step against online piracy and illegal copyright distribution, U.S. law enforcement has partnered with Bulgarian authorities to dismantle three of the largest piracy websites operating in the European Union. The coordinated operation targeted platforms that allegedly provided unauthorized access to thousands of copyrighted movies, television shows, video games, software, and other digital content. The U.S. government executed seizure warrants against three U.S.-registered internet domains that were reportedly operated from Bulgaria. These domains — zamunda.net, arenabg.com, and zelka.org — were among the most heavily visited piracy services in the region. This action highlights growing international cooperation in tackling copyright infringement and protecting intellectual property rights worldwide.

Crackdown Targets Large-Scale Online Piracy Networks

According to U.S. authorities, the seized websites were allegedly engaged in the illegal distribution of copyrighted works on a massive scale. These platforms offered users access to unauthorized copies of content, including many works owned by U.S. companies and creators. The operation focused on online services that allowed millions of downloads of copyrighted material, contributing to significant financial losses for the entertainment, software, and publishing industries. Law enforcement officials emphasized that willful copyright infringement is a crime, and such piracy networks often operate as commercial enterprises rather than casual file-sharing platforms.

Tens of Millions of Visits and Millions in Losses

Court affidavits supporting the seizure warrants reveal the enormous scale of the piracy activity linked to these domains. The three websites reportedly:
  • Received tens of millions of visits annually
  • Offered thousands of infringed works without authorization
  • Generated millions of illegal downloads
  • Caused retail losses totaling millions of dollars
One of the domains was frequently ranked among the top 10 most visited websites in Bulgaria, highlighting how deeply embedded these piracy platforms were in the country’s online ecosystem. Authorities also noted that the websites appeared to generate substantial revenue through online advertisements, making piracy not only a copyright issue but also a profitable criminal business model.

Seized Domains Now Under U.S. Government Custody

The domains are now in the custody of the United States government. Visitors attempting to access the sites will instead see an official seizure banner. The notice informs users that:
  • Federal authorities have seized the domain names
  • Copyright infringement is a serious criminal offense
  • The websites are no longer operational
The seizure of these domains represents a significant disruption of piracy infrastructure and sends a clear warning to operators running similar illegal platforms.

Strong Cooperation Between U.S., Bulgaria, and Europol

The Justice Department credited Bulgarian law enforcement agencies for their critical support in the takedown. Key Bulgarian partners included:
  • The National Investigative Service
  • The Ministry of the Interior’s General Directorate Combating Organized Crime
  • The State Agency for National Security
  • The Prosecutor’s Office
On the U.S. side, the operation involved:
  • The U.S. Attorney’s Office for the Southern District of Mississippi
  • Homeland Security Investigations (HSI) New Orleans Field Office
  • The National Intellectual Property Rights Coordination Center (IPR Center)
The Justice Department also acknowledged the important coordination role played by Europol, along with technical support from the HSI Athens office and U.S. Customs and Border Protection (CBP) in Sofia. This case demonstrates how international partnerships are becoming essential in fighting cross-border cybercrime and piracy.

Role of ICHIP Program in Global Cybercrime Support

The Justice Department noted that it continues to provide intellectual property and cybercrime assistance to foreign partners through the International Computer Hacking and Intellectual Property (ICHIP) program. This program helps strengthen global law enforcement capabilities in areas such as:
  • Cybercrime investigations
  • Digital piracy enforcement
  • Intellectual property protection
  • Prosecutorial and judicial cooperation
The ICHIP initiative is jointly administered through OPDAT and the Computer Crime and Intellectual Property Section, in partnership with the U.S. Department of State.

IPR Center Remains Key Weapon Against Digital Piracy

The National Intellectual Property Rights Coordination Center (IPR Center) plays a central role in combating criminal piracy and counterfeiting. By bringing together expertise from multiple agencies, the IPR Center works to:
  • Share intelligence on IP theft
  • Coordinate enforcement actions
  • Protect the U.S. economy and consumers
  • Support investigations into digital piracy networks
Authorities encourage individuals and businesses to report suspected IP theft through the official IPR Center website.

Investigation Ongoing

The announcement was made by Assistant Attorney General A. Tysen Duva, U.S. Attorney Baxter Kruger, and Acting Special Agent in Charge Matt Wright of HSI New Orleans. Homeland Security Investigations has confirmed that the matter remains under active investigation. With the takedown of these major piracy sites, U.S. and Bulgarian authorities have delivered one of the strongest blows yet against online copyright infringement in the European Union.

Ad Fraud Is Exploding — Dhiraj Gupta of mFilterIt Explains How Brands Can Respond

Data Privacy Week 2026-Interview

Ad fraud isn’t just a marketing problem anymore — it’s a full-scale threat to the trust that powers the digital economy. As Data Privacy Week 2026 puts a global spotlight on protecting personal information and ensuring accountability online, the growing fraud crisis in digital advertising feels more urgent than ever.

In 2024 alone, fraud in mobile advertising jumped 21%, while programmatic ad fraud drained nearly $50 billion from the industry. During Data Privacy Week 2026, these numbers serve as a reminder that ad fraud is not only about wasted budgets — it’s also about how consumer data moves, gets tracked, and is sometimes misused across complex ecosystems.

This urgency is reflected in the rapid growth of the ad fraud detection tools market, expected to rise from $410.7 million in 2024 to more than $2 billion by 2034. And in the context of Data Privacy Week 2026, the conversation is shifting beyond fraud prevention to a bigger question: if ads are being manipulated and user data is being shared without clear oversight, who is truly in control?

To unpack these challenges, The Cyber Express team, during Data Privacy Week 2026, spoke with Dhiraj Gupta, CTO & Co-founder of mFilterIt, a technology leader at the forefront of helping brands win the battle against ad fraud and restore integrity across the advertising ecosystem. With a background in telecom and a passion for building AI-driven solutions, Gupta argues that brands can no longer rely on surface-level compliance or platform-reported metrics. As he puts it,
“Independent verification and data-flow audits are critical because they validate what actually happens in a campaign, not just what media plans, platforms, or dashboards report.”
Read the excerpt from the Data Privacy Week 2026 interview below to understand why real-time audits, stronger privacy controls, and continuous accountability are quickly becoming non-negotiable in the fight against fraud — and in rebuilding consumer trust in digital advertising.

Interview Excerpt: Data Privacy Week 2026 Special

TCE: Why are independent verification and data-flow audits becoming essential for brands beyond just detecting ad fraud?

Gupta: Independent verification and data-flow audits are critical because they validate what actually happens in a campaign, not just what media plans, platforms, or dashboards report. They provide evidence-based accountability to regulators, advertisers, and agencies, allowing brands to move from assumed compliance to provable control. Importantly, these audits don’t only verify whether impressions are real; they also assess whether user data is being accessed, shared, or reused (such as for remarketing or profiling) in ways the brand never explicitly approved. In today’s regulatory environment, intent is no longer enough. Brands must be able to demonstrate operational control over how data moves across their digital ecosystem.

TCE: How can unauthorized or excessive tracking of users occur even when a brand believes it is compliant with privacy norms?

Gupta: In many cases, this happens not due to malicious intent, but because of operational complexity and the push for funnel optimization and deeper data mapping. Common scenarios include tags or SDKs triggering secondary or tertiary data calls that are not disclosed to the advertiser, and vendors activating new data parameters, such as device IDs or lead identifiers, without explicit approval. Over time, incremental changes in tracking configurations can significantly expand data collection beyond what was originally consented to or contractually permitted, even though the brand may still believe it is operating within compliance frameworks.

TCE: How does programmatic advertising contribute to widespread sharing of user data across multiple intermediaries?

Gupta: Programmatic advertising is inherently multi-layered. A single ad impression can involve dozens of intermediaries like DSPs, SSPs, data providers, verification partners, and identity resolution platforms, each receiving some form of user signal for bidding, measurement, or optimization. While consent is often collected once, the data derived from that consent may be replicated, enriched, and reused multiple times across the supply chain. Without real-time data-flow monitoring, brands have very limited visibility into how far that data travels, who ultimately accesses it, or how long it persists across partner systems.

TCE: What risks do brands face if they don’t fully track the activities of their data partners, even when they don’t directly handle consumer information?

Gupta: Even when brands do not directly process personally identifiable information, they remain accountable for how their broader ecosystem behaves. The risks include regulatory exposure, reputational damage, erosion of consumer trust, and an inability to defend compliance claims during audits or investigations. Regulators are increasingly asking brands to demonstrate active control, not just contractual intent. Without independent verification and documented evidence, brands effectively carry residual compliance risk by default.

TCE: Why do consent frameworks sometimes fail to ensure that user data is controlled as intended?

Gupta: Consent frameworks are effective at capturing permission, but far less effective at enforcing downstream behaviour. They typically do not monitor what happens after consent is granted, whether data usage aligns with stated purposes, whether new vendors are added, or whether data access expands over time. Without execution-level oversight, consent becomes symbolic rather than operational. For example, data that was shared for campaign measurement may later be reused by third parties for audience profiling, without the user’s awareness and often without the brand’s visibility.

TCE: How can brands bridge the gap between regulatory intent and real-world implementation of privacy rules?

Gupta: Brands need to shift from document-based compliance to behaviour-based verification. This means auditing live campaigns, tracking actual data access, and continuously validating that data usage aligns with both consent terms and declared purposes. For instance, in quick-commerce or hyperlocal advertising, sensitive data like precise pin codes can be captured through data layers or partner integrations without the brand’s direct knowledge. Only runtime monitoring can surface such risks and align real-world execution with regulatory intent.

TCE: What strategies or tools can brands use to identify unauthorized data access within complex digital ecosystems?

Gupta: Effective control requires continuous, not one-time, oversight. Key strategies include independent runtime audits, continuous monitoring of data calls, partner-level risk scoring, and full data-journey mapping across platforms and vendors. Rather than relying solely on contractual assurances or annual audits, brands need ongoing visibility into how data is accessed and shared, especially as campaign structures, vendors, and technologies change rapidly.
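The runtime audit Gupta describes can be reduced to a simple idea: compare the third-party endpoints actually contacted during a live campaign against the brand's approved vendor list and flag anything undisclosed. A minimal sketch follows; the hostnames and the `APPROVED_VENDORS` set are hypothetical, and a production system would feed in endpoints captured by real page or app instrumentation.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vendors the brand has contractually approved.
APPROVED_VENDORS = {
    "ads.example-dsp.com",      # demand-side platform
    "tag.example-verify.com",   # verification partner
}

def audit_data_calls(observed_urls):
    """Return the set of contacted hostnames not on the approved list."""
    unapproved = set()
    for url in observed_urls:
        host = urlparse(url).hostname
        if host and host not in APPROVED_VENDORS:
            unapproved.add(host)
    return unapproved

# Two disclosed calls plus one undisclosed tracker firing from the same page.
observed = [
    "https://ads.example-dsp.com/bid?id=123",
    "https://tag.example-verify.com/v1/verify",
    "https://pixel.shadow-tracker.net/collect?uid=abc",
]
print(audit_data_calls(observed))  # {'pixel.shadow-tracker.net'}
```

Run continuously rather than as an annual audit, a check like this is what surfaces the "incremental changes in tracking configurations" described earlier, since each newly added endpoint shows up the first time it fires.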

TCE: How does excessive tracking or shadow profiling affect consumers’ privacy and trust in digital services?

Gupta: Consumers are becoming increasingly aware of how their data is used, and excessive or opaque tracking creates a perception of surveillance rather than value exchange. When users feel they have lost control over their personal information, trust declines, not only in platforms, but also in the brands advertising on them. For example, when consumers receive hyper-local ads on social media for products they were discussing offline, they often perceive it as continuous tracking, even if the data correlation occurred through indirect signals. This perception alone can damage brand credibility and long-term loyalty.

TCE: In your view, what will become the most critical privacy controls for organizations in the next 2–3 years? What practical steps can organizations take today?

Gupta: The most critical controls will be data-flow transparency, strict enforcement of purpose limitation, and continuous partner accountability. Organizations will be expected to prove where data goes, why it goes there, and whether that usage aligns with user consent and regulatory expectations. Privacy will increasingly be measured by operational evidence, not policy declarations. Practically, brands should start by independently auditing all live trackers and data endpoints, not just approved vendors. Privacy indicators should be reviewed alongside media and performance KPIs, and verification must be continuous rather than episodic. Most importantly, privacy must be treated as part of the brand’s trust infrastructure, not merely as a compliance checklist. Brands that invest in transparency and control today will be far better positioned as regulations tighten and consumer expectations continue to rise.

CNIL Fine on France Travail After Hack Exposes 20 Years of Job Seekers’ Personal Data

CNIL fine on France Travail

On January 22, 2026, France’s data protection authority, the CNIL, imposed a €5 million fine on France Travail (formerly Pôle Emploi) for failing to properly secure the personal data of job seekers. The CNIL fine on France Travail highlights growing regulatory pressure across Europe to strengthen GDPR data security measures, especially when sensitive public-sector systems are involved. The decision follows a major cyberattack in early 2024 that exposed personal information linked to millions of individuals registered with France’s national employment services over the last two decades.

CNIL Fine on France Travail After Major Job Seekers’ Data Breach

The CNIL fine on France Travail comes after hackers successfully infiltrated the organisation’s information system during the first quarter of 2024. According to investigators, the attackers relied on social engineering, a method that exploits human trust and behaviour rather than purely technical vulnerabilities. Using these tactics, hackers were able to hijack the accounts of advisers working with CAP EMPLOI — organisations responsible for supporting employment access for people with disabilities. This breach allowed attackers to gain entry into France Travail’s broader digital environment.

Hackers Accessed 20 Years of Personal Data

Investigations confirmed that the attackers accessed data relating to all individuals currently registered, or previously registered, with France Travail over the past 20 years. This also included individuals holding candidate accounts on the official francetravail.fr platform. The compromised information included:
  • National Insurance numbers
  • Email addresses
  • Postal addresses
  • Telephone numbers
While the hackers did not access complete job seeker files — which may contain health-related information — the CNIL still considered the exposed dataset highly sensitive due to its scale and the nature of the identifiers involved. The breach affected an extremely large portion of the French population, making it one of the most significant recent incidents involving a public institution.

GDPR Article 32 and Failure to Ensure Data Security

The CNIL’s decision focuses heavily on the failure to ensure the security of personal data processed, a requirement under Article 32 of the GDPR. Under GDPR data security rules, organisations must implement security measures that are appropriate to the risks involved. The CNIL concluded that France Travail’s technical and organisational safeguards were inadequate, and that properly implemented measures could have made the attack more difficult. The restricted committee identified several key weaknesses.

Weak Authentication and Poor Monitoring Measures

One of the main concerns raised was the lack of authentication procedures for CAP EMPLOI advisers accessing the France Travail system. Weak access controls made it easier for hackers to take over adviser accounts and move through the network. The CNIL also highlighted insufficient logging and monitoring capabilities, which reduced the organisation’s ability to detect abnormal behaviour or suspicious activity early. Additionally, CAP EMPLOI adviser permissions were defined too broadly. Advisers could access data on individuals they were not directly supporting, significantly increasing the volume of information available once an account was compromised. This overexposure amplified the scale of the breach.
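The overexposure the CNIL describes is the absence of a least-privilege check: an adviser account, once hijacked, could read records for individuals the adviser never supported. A minimal sketch of the missing control, with illustrative names rather than France Travail's actual data model, would scope each read to the adviser's own caseload:

```python
# Hypothetical caseload mapping: each adviser may only access the
# job seekers they directly support.
CASELOADS = {
    "adviser_a": {"seeker_1", "seeker_2"},
    "adviser_b": {"seeker_3"},
}

def can_access(adviser_id, seeker_id):
    """Least-privilege check: deny any read outside the adviser's caseload."""
    return seeker_id in CASELOADS.get(adviser_id, set())

print(can_access("adviser_a", "seeker_1"))  # True: on own caseload
print(can_access("adviser_a", "seeker_3"))  # False: outside caseload
```

With a scope check like this in place, a single compromised adviser account exposes only that adviser's assigned records rather than the full national dataset, which is precisely the difference the restricted committee highlighted.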

Security Measures Were Identified but Not Implemented

In determining the sanction, the restricted committee noted that many appropriate security measures had already been identified by France Travail during earlier impact assessments. However, these measures were not actually implemented before the processing began. This gap between awareness and execution played an important role in the CNIL’s decision to impose a multi-million-euro penalty. As regulators increasingly stress proactive security compliance, failure to act on known risks is being treated as a serious breach of responsibility. Beyond the financial penalty, the CNIL has ordered France Travail to justify the corrective measures taken, along with a precise implementation schedule. If the organisation fails to meet these requirements, it will face an additional penalty of €5,000 per day of delay, increasing the pressure to demonstrate meaningful improvements quickly.

Why CNIL Fine on France Travail Is Not Based on Turnover

France Travail is a national public administrative institution funded mainly through social security contributions rather than commercial revenue. As a result, the CNIL explained that the fine is not based on turnover, but instead falls under the GDPR framework for public-sector bodies, with a maximum limit of €10 million for a data security breach. As the authority noted, “All fines imposed by the CNIL, whether they concern private or public actors, are collected by the Treasury and paid into the State budget.”

CNIL’s Role for Individuals Affected

The CNIL reminded the public that it serves as France’s personal data regulator, responding to requests and complaints from both individuals and professionals. Anyone can lodge a complaint with the CNIL when facing difficulties exercising their rights or when reporting violations of personal data protection rules. The authority can investigate organisations and issue sanctions where necessary. However, the CNIL does not have the power to compensate affected individuals directly. Those seeking compensation may file a complaint with the police. The France Travail data breach and subsequent CNIL sanction underline the importance of strong cybersecurity practices, especially for institutions handling large-scale citizen data. With regulators enforcing GDPR obligations more strictly, public bodies and private organisations alike are being reminded that data security is no longer optional — it is a legal and operational necessity.

US Charges 87 in Major ATM Jackpotting Scheme Linked to Tren de Aragua

ATM jackpotting

A federal grand jury in Nebraska has issued a new indictment in a major international cybercrime case involving an “ATM jackpotting” scheme tied to the violent transnational gang Tren de Aragua (TdA). The latest charges bring the total number of defendants accused in the operation to 87, highlighting the growing threat of malware-driven attacks on financial institutions across the United States. The additional indictment charges 31 individuals for their alleged roles in a large conspiracy to deploy malware and steal millions of dollars from ATMs, a crime widely known as ATM jackpotting. Fifty-six other defendants had already been charged in earlier cases. Prosecutors say many of those involved are Venezuelan and Colombian nationals, including illegal alien members of Tren de Aragua, which has been designated a Foreign Terrorist Organization. The indictment includes 32 counts, such as conspiracy to commit bank fraud, conspiracy to commit bank burglary and computer fraud, and damage to computers.

Justice Department Highlights Terror and Financial Crime Connection

Attorney General Pamela Bondi described Tren de Aragua as more than a financial crime network. “Tren de Aragua is a complex terrorist organization that commits serious financial crimes in addition to horrific rapes, murders, and drug trafficking,” Bondi said. “This Department of Justice has already prosecuted more than 290 members of Tren de Aragua and will continue working tirelessly to put these vicious terrorists behind bars after the prior administration let them infiltrate our country.” Deputy Attorney General Todd Blanche emphasized the Justice Department’s focus on dismantling the group. “A large ring of criminal aliens allegedly engaged in a nationwide conspiracy to enrich themselves and the TdA terrorist organization by ripping off American citizens,” Blanche said. “The Justice Department’s Joint Task Force Vulcan will not stop until it completely dismantles and destroys TdA and other foreign terrorists that import chaos to America.”

Ploutus Malware Used in ATM Jackpotting Scheme

According to court documents, the conspiracy developed and deployed a variant of malware known as Ploutus, which was used to hack into ATMs and force them to dispense cash. Investigators allege the group recruited individuals across the country to carry out the attacks. Members would travel in teams, often using multiple vehicles, to targeted banks and credit unions. The operation typically involved reconnaissance first. Groups would inspect the ATM’s external security features, then open the hood or access panel and wait nearby to see whether alarms were triggered or law enforcement responded. Once the area appeared clear, attackers allegedly installed Ploutus malware in several ways, including removing and replacing hard drives or connecting external devices like thumb drives to deploy the malicious software. The malware’s main function was to issue unauthorized commands to the ATM’s cash dispensing module, forcing withdrawals of currency. Prosecutors also say Ploutus was designed to delete evidence of the attack to mislead banks and conceal the intrusion. Proceeds were then split among members in predetermined portions.

Task Force Vulcan Targets TdA’s Financial Pipeline

U.S. Attorney Lesley A. Woods for Nebraska said the case is part of a broader effort to stop the gang’s funding. “Tren de Aragua uses ATM jackpotting crimes committed all across America to fund its terrorist organization,” Woods said, adding that authorities are working to “shut down their financial pipeline and handicap their ability to terrorize American communities.” Joint Task Force Vulcan Co-Director Chris Eason warned that malware-driven attacks on financial institutions will not be tolerated. “Using sophisticated malware to empty ATMs and damage U.S. financial institutions that also fund TdA’s terrorist activity will not be tolerated,” he said. FBI Omaha Special Agent in Charge Eugene Kowel noted that the conspiracy poses a direct threat nationwide. “This case highlights TdA's plot to deploy malware to steal vast funds from financial institutions across the United States,” Kowel said.

Previous Indictments and Potential Sentences

The latest indictment follows earlier cases returned in October and December 2025. Prosecutors allege TdA conducted jackpotting attacks across America, stealing millions and transferring proceeds among members to conceal illegally obtained cash. If convicted, defendants face maximum prison terms ranging from 20 to 335 years. Tren de Aragua originated as a Venezuelan prison gang in the mid-2000s and has since expanded throughout the Western Hemisphere. U.S. officials say the organization is involved in drug trafficking, sex trafficking, kidnapping, robbery, fraud, extortion, and murder. The Justice Department says ATM jackpotting has become one of the gang’s key revenue streams, making financial cybercrime a central part of its operations.

Major Cyberattack Cripples Russia’s Alarm and Vehicle Security Provider Delta


A cyberattack on Delta, a Russian provider of alarm and security systems for homes, businesses, and vehicles, has disrupted operations and triggered widespread service outages, leaving many customers unable to access critical security functions. Delta, which serves tens of thousands of users across Russia, confirmed the attack on Monday, stating that it faced a major external assault on its IT infrastructure. The disruption has affected both online services and customer communication channels, raising concerns about the resilience of connected security platforms.

Cyberattack on Delta Security Systems Causes Major Outage

In an official statement, the company emphasized its position in the market and its ongoing investments in cybersecurity. Delta said: “On January 26, DELTA experienced a large-scale external attack on its IT infrastructure aimed at disrupting the company's services.” The company added that some services were temporarily unavailable, but insisted there were no immediate signs of customer data exposure. “At this time, no signs of a compromise of customer personal data have been detected.” Delta also apologized to customers and said restoration efforts were underway with the help of specialized experts.

Delta Struggles to Restore Services After Cyberattack

Delta marketing director Valery Ushkov provided additional details in a video address, acknowledging the large scale of the incident. He said: “Our architecture was unable to withstand a well-coordinated attack coming from outside the country.” Ushkov noted that recovery was taking longer than expected because the company was still facing the risk of follow-up attacks while attempting to restore backups. As of Tuesday, Delta’s website and phone lines remained offline. With traditional communication channels down, the company has been forced to issue updates through its official page on VKontakte, Russia’s largest social media platform.

Customers Report Alarm Failures and Vehicle Access Issues

The Delta cyberattack has had direct consequences for customers relying on the company’s systems for everyday safety and mobility. Russian-language Telegram outlet Baza reported that users began complaining shortly after the incident that car alarm systems could not be turned off, and in some cases, vehicles could not be unlocked. Newspaper Kommersant also reported ongoing failures despite Delta’s assurances that most services were operating normally. Users described serious malfunctions, including remote vehicle start features failing, doors locking unexpectedly, and engines shutting down while in motion. In addition to vehicle-related issues, customers reported that alarm systems in homes and commercial buildings switched into emergency mode and could not be deactivated. Recorded Future News said it could not independently verify these claims.

Data Leak Claims Surface After Delta Cyberattack

Although Delta maintains that no customer data was compromised, uncertainty remains. An unidentified Telegram channel claiming to be operated by the attackers published an archive it alleges contains stolen information from Delta systems. However, the authenticity of the material and the identity of the hackers have not been independently verified. The cyberattack on Delta has increased anxiety among customers, particularly because Delta’s mobile app, launched in 2020, is widely used for tracking vehicles and managing alarm functions. According to Auto.ru, the app is compatible with most cars and can store payment data, making some users wary of potential financial exposure if internal systems were breached.

Broader Pattern of IT Disruptions in Russia

The Delta security systems cyberattack occurred on the same day that a separate large-scale outage affected booking and check-in systems used by Russian airlines and airports. Airlines reported temporary disruptions to ticket sales, refunds, and rebooking after problems were detected in aviation IT platforms. While the two incidents have not been officially linked, the timing highlights growing instability in critical digital infrastructure. No known hacking group has claimed responsibility for the cyberattack on Delta so far. It also remains unclear whether the incident was a relatively limited distributed denial-of-service (DDoS) attack or something more severe, such as ransomware or destructive malware. For now, Delta says the situation is manageable and expects services to return soon, but customer concerns continue as outages persist and unverified leak claims circulate.

Canada Marks Data Privacy Week 2026 as Commissioner Pushes for Privacy by Design


As Data Privacy Week 2026 gets underway from January 26 to 30, Canada’s Privacy Commissioner Philippe Dufresne has renewed calls for stronger data protection practices, modern privacy laws, and a privacy-first approach to emerging technologies such as artificial intelligence. In a statement marking Data Privacy Week 2026, Dufresne said data has become one of the most valuable resources of the 21st century, making responsible data management essential for both individuals and organizations. “Data is one of the most important resources of the 21st century and managing it well is essential for ensuring that individuals and organizations can confidently reap the benefits of a digital society,” he said. The Office of the Privacy Commissioner (OPC) has chosen privacy by design as its theme this year, highlighting the need for organizations to embed privacy into their programs, products, and services from the outset. According to Dufresne, this proactive approach can help organizations innovate responsibly, reduce risks, build for the future, and earn public trust.

Data Privacy Week 2026: Privacy by Design Takes Centre Stage

Speaking on the growing integration of technology into everyday life, Dufresne said Data Privacy Week 2026 is a timely opportunity to underline the importance of data protection. With personal data being collected, used, and shared at unprecedented levels, privacy is no longer a secondary concern. “Prioritizing privacy by design is my Office’s theme for Data Privacy Week this year, which highlights the benefits to organizations of taking a proactive approach to protect the personal information that is in their care,” he said. The OPC is also offering guidance for individuals on how to safeguard their personal information in a digital world, while providing organizations with resources to support privacy-first programs, policies, and services. These include principles to encourage responsible innovation, especially in the use of generative AI technologies.

Real-World Cases Show Why Privacy Matters

In parallel with Data Privacy Week 2026, Dufresne used a recent appearance before Parliament to point to concrete cases that show how privacy failures can cause serious and lasting harm. He referenced investigations into the non-consensual sharing of intimate images involving Aylo, the operator of Pornhub, and the 23andMe data breach, which exposed highly sensitive personal information of 7 million customers, including more than 300,000 Canadians. His office’s joint investigation into TikTok also highlighted the need to protect children’s privacy online. The probe not only resulted in a report but also led TikTok to improve its privacy practices in the interests of its users, particularly minors. Dufresne also confirmed an expanded investigation into X and its Grok chatbot, focusing on the emerging use of AI to create deepfakes, which he said presents significant risks to Canadians. “These are some of many examples that demonstrate the importance of privacy for current and future generations,” he told lawmakers, adding that prioritizing privacy is also a strategic and competitive asset for organizations.

Modernizing Canada’s Privacy Laws

A central theme of Data Privacy Week 2026 in Canada is the need to modernize privacy legislation. Dufresne said existing laws must be updated to protect Canadians in a data-driven world while giving businesses clear and practical rules. He voiced support for proposed changes under Bill C-15, the Budget 2025 Implementation Act, which would amend the Personal Information Protection and Electronic Documents Act (PIPEDA) to introduce a right to data mobility. This would allow individuals to request that their personal information be transferred to another organization, subject to regulations and safeguards. “A right to data mobility would give Canadians greater control of their personal information by allowing them to make decisions about who they want their information shared with,” he said, adding that it would also make it easier for people to switch service providers and support innovation and competition. Under the proposed amendments, organizations would be required to disclose personal information to designated organizations upon request, provided both are subject to a data-mobility framework. The federal government would also gain authority to set regulations covering safeguards, interoperability standards, and exceptions. Given the scope of these changes, Dufresne said it will be important for his office to be consulted as the regulations are developed.

A Call to Act During Data Privacy Week 2026

Looking ahead, Dufresne framed Data Privacy Week 2026 as both a moment of reflection and a call to action. “Let us work together to create a safer digital future for all, where privacy is everyone’s priority,” he said. He invited Canadians to take part in Data Privacy Week 2026 by joining the conversation online, engaging with content from the OPC’s LinkedIn account, and using the hashtag #DPW2026 to connect with others committed to advancing privacy in Canada and globally. As digital technologies continue to reshape daily life, the message from Canada’s Privacy Commissioner is clear: privacy is not just a legal requirement, but a foundation for trust, innovation, and long-term economic growth.

European Commission Launches Fresh DSA Investigation Into X Over Grok AI Risks


The European Commission has launched a new formal investigation into X under the Digital Services Act (DSA), intensifying regulatory scrutiny over the platform’s use of its AI chatbot, Grok. Announced on January 26, the move follows mounting concerns that Grok AI image-generation and recommender functionalities may have exposed users in the EU to illegal and harmful content, including manipulated sexually explicit images and material that could amount to child sexual abuse material (CSAM). This latest European Commission investigation into X runs in parallel with an extension of an ongoing probe first opened in December 2023. The Commission will now examine whether X properly assessed and mitigated the systemic risks associated with deploying Grok’s functionalities into its platform in the EU, as required under the Digital Services Act (DSA).

Focus on Grok AI and Illegal Content Risks

At the core of the new proceedings is whether X fulfilled its obligations to assess and reduce risks stemming from Grok AI. The Commission said the risks appear to have already materialised, exposing EU citizens to serious harm. Regulators will investigate whether X:
  • Diligently assessed and mitigated systemic risks, including the dissemination of illegal content, negative effects related to gender-based violence, and serious consequences for users’ physical and mental well-being.
  • Conducted and submitted an ad hoc risk assessment report to the Commission for Grok’s functionalities before deploying them, given their critical impact on X’s overall risk profile.
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the Digital Services Act. The Commission stressed that the opening of formal proceedings does not prejudge the outcome but confirmed that an in-depth investigation will now proceed as a matter of priority.

Recommender Systems Also Under Expanded Scrutiny

In a related step, the European Commission has extended its December 2023 investigation into X’s recommender systems. This expanded review will assess whether X properly evaluated and mitigated all systemic risks linked to how its algorithms promote content, including the impact of its recently announced switch to a Grok-based recommender system. As a designated very large online platform (VLOP) under the DSA, X is legally required to identify, assess, and reduce systemic risks arising from its services in the EU. These risks include the spread of illegal content and threats to fundamental rights, particularly those affecting minors. Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, underlined the seriousness of the case in a statement: “Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service.” Earlier this month, a European Commission spokesperson had also addressed the issue while speaking to journalists in Brussels, calling the matter urgent and unacceptable. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

International Pressure Builds Around Grok AI

The investigation comes against a backdrop of rising regulatory pressure worldwide over Grok AI’s image-generation capabilities. On January 16, X announced changes to Grok aimed at preventing the creation of nonconsensual sexualised images, including content that critics say amounts to CSAM. The update followed weeks of scrutiny and reports of explicit material generated using Grok. In the United States, California Attorney General Rob Bonta confirmed on January 14 that his office had opened an investigation into xAI, the company behind Grok, over reports describing the depiction of women and children in explicit situations. Bonta called the reports “shocking” and urged immediate action, saying his office is examining whether the company may have violated the law. U.S. lawmakers have also stepped in. On January 12, three senators urged Apple and Google to remove X and Grok from their app stores, arguing that the chatbot had repeatedly violated app store policies related to abusive and exploitative content.

Next Steps in the European Commission Investigation Into X

As part of the Digital Services Act (DSA) enforcement process, the Commission will continue gathering evidence by sending additional requests for information, conducting interviews, or carrying out inspections. Interim measures could be imposed if X fails to make meaningful adjustments to its service. The Commission is also empowered to adopt a non-compliance decision or accept commitments from X to remedy the issues under investigation. Notably, the opening of formal proceedings shifts enforcement authority to the Commission, relieving national Digital Services Coordinators of their supervisory powers for the suspected infringements. The investigation complements earlier DSA proceedings that resulted in a €120 million fine against X in December 2025 for deceptive design, lack of advertising transparency, and insufficient data access for researchers. With Grok AI now firmly in regulators’ sights, the outcome of this probe could have major implications for how AI-driven features are governed on large online platforms across the EU.

Ingram Micro Data Breach Affects Over 42,000 People After Ransomware Attack


Ingram Micro, one of the world’s largest IT distributors, has confirmed that sensitive personal data was leaked following a ransomware attack that disrupted its operations last year. The Ingram Micro data breach incident, which paralysed the company’s logistics systems for nearly a week in July 2025, has now been linked to the theft of files containing employee and applicant information, affecting more than 42,000 individuals. The Ingram Micro data breach came to light through a mandatory filing with U.S. authorities, which revealed that 42,521 people were impacted, including five residents of the state of Maine. According to the company, the attackers accessed internal file repositories between July 2 and July 3, 2025, during an external system breach involving hacking. However, the breach was only discovered several months later, on December 26, 2025.

Ransomware Attack Led to Extended Disruption

The data exposure follows a ransomware attack that caused widespread operational disruption at Ingram Micro in July 2025. At the time, the company’s logistics were reportedly paralysed for about a week, affecting its ability to process and distribute products. While the immediate impact of the Ingram Micro data breach on operations was known, it has now emerged that the attackers also exfiltrated sensitive files during the same period. In a notice sent to affected individuals, Ingram Micro said it detected a cybersecurity incident involving some of its internal systems on July 3, 2025. The company launched an investigation into the nature and scope of the issue and determined that an unauthorised third party had taken certain files from internal repositories over a two-day window.

Ingram Micro Data Breach: Personal and Employment Data Stolen

The compromised files included employment and job applicant records, containing a wide range of personal information. According to the Ingram Micro data breach notification, the stolen data may include names, contact information, dates of birth, and government-issued identification numbers such as Social Security numbers, driver’s licence numbers, and passport numbers. In addition, certain employment-related information, including work evaluations and application documents, was also accessed. The company noted that the types of affected personal information varied by individual. Ingram Micro employs approximately 23,500 people worldwide, and the breach affected both current and former employees, as well as job applicants. Ingram Micro said it took steps to contain and remediate the unauthorised activity as soon as the incident was detected. These measures included proactively taking certain systems offline and implementing additional security controls. The company also engaged leading cybersecurity experts to assist with its investigation and notified law enforcement. As part of its response to the Ingram Micro data breach, the company conducted a detailed review of the affected files to understand their contents. It was only after completing this review that Ingram Micro confirmed that some of the files contained personal information about individuals.

Support Offered to Affected Individuals

Ingram Micro is notifying impacted individuals and encouraging them to take steps to protect their personal information. Under U.S. law, affected individuals are entitled to one free credit report annually from each of the three nationwide consumer reporting agencies. The company has also arranged to provide complimentary credit monitoring and identity protection services for two years. In its notification, Ingram Micro urged people to remain vigilant by reviewing their account statements and monitoring their credit reports. The company included guidance on how to register for the free protection services and additional steps to reduce the risk of identity theft. For further assistance, Ingram Micro has set up a dedicated call centre for questions related to the breach. The company said it regrets any inconvenience caused and is working to address concerns raised by those affected.

Broader Implications for Corporate Cybersecurity

The incident highlights the growing risks organisations face from ransomware attacks that not only disrupt operations but also result in data theft. The delay between the occurrence of the breach in July and its discovery in December emphasizes the challenges companies face in detecting and containing sophisticated cyber intrusions. For large enterprises like Ingram Micro, which play a central role in global IT supply chains, the consequences of such attacks can extend beyond immediate operational losses. The exposure of sensitive employee and applicant data adds a long-term dimension to the impact, increasing the risk of identity theft and fraud for those affected. As investigations continue, the ransomware attack on Ingram Micro serves as a reminder of the importance of strong cybersecurity controls, continuous monitoring, and timely incident response to limit both operational disruption and data loss.

One in Ten UK Businesses Fear They Would Not Survive a Major Cyberattack


UK businesses are facing growing pressure from cyber threats, with a new survey warning that many may not withstand a major cyberattack. The findings highlight how exposed companies across the country remain to online fraud and cybercrime, as gaps in training, weak password practices, and increasingly sophisticated scams continue to undermine cyber resilience. According to a recent Vodafone Business study, more than one in ten business leaders in the UK believe their organisation would be unlikely to survive a major cyberattack. The research, which surveyed 1,000 senior leaders across British businesses of all sizes, paints a concerning picture of how prepared—or unprepared—many firms are for incidents similar to those that disrupted major UK retailers and car manufacturers last year.

Weak Preparedness and Rising Threats Put Firms at Risk

The survey suggests that risk awareness has grown, but action has not kept pace. Nearly two-thirds of business leaders (63%) reported that their organisation’s risk of cyberattack has increased over the past year. At the same time, 89% said the highly publicised attacks on well-known brands last year had made them significantly more alert to online threats. Despite this heightened awareness, fewer than half (45%) have ensured that all staff have undergone basic cyber-awareness training. This gap between concern and concrete action is leaving many UK businesses cyberattack-ready in name only, without the practical safeguards needed to prevent or respond effectively to incidents. The findings also point to troubling weaknesses in everyday security practices. Password reuse remains widespread, with employers estimating that staff use their work passwords for an average of 11 other personal accounts, including social media and dating platforms. Such habits significantly increase the risk of credential theft and unauthorised access, particularly when personal platforms suffer breaches.
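One of the “relatively simple” countermeasures to password reuse is screening passwords against known breach corpora. The sketch below illustrates the k-anonymity range-search model popularised by the Have I Been Pwned “Pwned Passwords” service, where only the first five hex characters of a password’s SHA-1 hash are sent to the server and matching is done locally. This is a generic illustration, not part of the survey or Vodafone’s guidance; the `fetch` parameter is injected so the example runs offline with a canned response rather than a live HTTP call.

```python
import hashlib

def breach_count(password: str, fetch) -> int:
    """Check a password against a k-anonymity range API.

    Only the first 5 hex chars of the SHA-1 hash are passed to `fetch`;
    the service returns "SUFFIX:COUNT" lines for every known hash sharing
    that prefix, so the full password hash never leaves the client.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    for line in fetch(prefix).splitlines():
        found_suffix, _, count = line.partition(":")
        if found_suffix == suffix:
            return int(count)
    return 0  # No match in the returned range: not in the breach corpus.

# Offline demo: a canned response standing in for the range endpoint.
pw = "password123"
digest = hashlib.sha1(pw.encode()).hexdigest().upper()
canned = f"{digest[5:]}:126927\n0018A45C4D1DEF81644B54AB7F969B88D65:3"
print(breach_count(pw, fetch=lambda prefix: canned))
```

In a real deployment, `fetch` would issue an HTTPS GET for the hash prefix, and any password with a non-zero count would be rejected at set or change time.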

UK Businesses Cyberattack: Human Error Remains a Major Vulnerability

The study underlines the central role of human behaviour in cyber risk. Nearly three-quarters of business leaders (71%) believe that at least one member of their staff would fall for a convincing phishing email. The most common reasons cited were a lack of awareness and training, staff being “too busy,” and the absence of clear protocols for verifying and flagging suspicious messages. These factors continue to erode cyber resilience, especially as phishing campaigns grow more advanced. The emergence of artificial intelligence and deepfake scams is further complicating the threat landscape. Around seven in ten leaders said that deepfake AI videos have made them more wary of video calls that claim to be from senior colleagues or their boss, signalling a growing concern about impersonation fraud and social engineering attacks.

Government Moves to Strengthen National Defences

The UK Government’s announcement of a second Telecommunications Fraud Charter, set to launch later this year, has been positioned as a key step in strengthening national defences against cyber-enabled crime. The charter aims to bring industry and government closer together to close vulnerabilities, disrupt criminal activity, and protect businesses from financial and operational harm. By enhancing collaboration and setting clearer standards for prevention, detection, and response, the new charter is intended to provide a more coordinated framework to safeguard the resilience and trust that UK businesses rely on. It also aligns with a broader fraud strategy expected to be launched next year.

Industry Reaction and Call for Practical Measures

Commenting on the findings, Nick Gliddon, Business Director, VodafoneThree, said: “Some of these findings are truly alarming. The revelation that one in ten business leaders believe their company would not survive a cyber-attack highlights the scale of vulnerability facing UK firms today. “Many steps – such as avoiding password reuse and enhancing staff training – are relatively simple to implement, and Vodafone Business is here to support organisations with practical solutions and expert guidance. “In this context, the Government’s announcement of its second Telecommunications Fraud Charter, coupled with a new fraud strategy to be launched next year, marks a significant and timely development. “This renewed focus from policymakers underscores the seriousness of the threat and the necessity of a united approach between industry and government to effectively tackle online fraud and cyber-crime.” The survey results serve as a warning that cyber resilience is still uneven across sectors and company sizes. While awareness of threats is growing, persistent weaknesses in training, password practices, and incident readiness continue to leave many organisations vulnerable. As cybercriminals adopt more advanced tools and techniques, including AI-driven scams, the gap between perceived risk and real preparedness could become increasingly costly. For UK businesses, cyberattack readiness is no longer optional; it is a critical factor that may determine whether a company can survive and recover from the next major incident.

NCSC Warns of Rising Russian-Aligned Hacktivist Attacks on UK Organisations


The UK’s National Cyber Security Centre (NCSC) has issued a fresh alert warning that Russian-aligned hacktivist groups continue to target British organisations with disruptive cyberattacks. The advisory, published on 19 January 2026, highlights a sustained campaign aimed at taking websites offline, disrupting online services, and disabling critical systems, particularly across local government and national infrastructure. The NCSC warning on hacktivist attacks urges organisations to strengthen their defences against denial-of-service (DoS) incidents, which, while often low in technical sophistication, can still cause widespread operational disruption. Officials say the activity is ideologically driven, reflecting geopolitical tensions linked to Western support for Ukraine, rather than financial motivations.

Persistent Threat from Russian-Aligned Hacktivist Groups

According to the NCSC, Russian-aligned hacktivist groups have been conducting cyber operations against UK and global organisations for several years, with activity intensifying since the Russian invasion of Ukraine. In December 2025, the NCSC co-sealed an international advisory warning that pro-Russian hacktivists were targeting government and private sector entities in NATO member states and other European countries perceived as hostile to Russia’s geopolitical interests. One group named in the advisory, NoName057(16), has been active since March 2022 and has repeatedly launched distributed denial-of-service (DDoS) attacks against public and private sector organisations. The group has targeted government bodies and businesses across Europe, including frequent DDoS attempts against UK local government services. NoName057(16) primarily operates through Telegram channels and has used GitHub and other repositories to host its proprietary DDoS tool, known as DDoSia. The group has also shared tactics, techniques, and procedures (TTPs) with followers to encourage participation in coordinated disruption campaigns. The NCSC said this activity reflects an evolution in the threat landscape, with attacks increasingly extending beyond traditional IT systems to include operational technology (OT) environments. As a result, the agency is encouraging all OT owners to review mitigation measures and harden their cyber defences.

NCSC Warning on Hacktivist Attacks and Resilience Measures

The NCSC warning on hacktivist attacks stresses that organisations, particularly local authorities and operators of critical national infrastructure, should review their DoS protections and improve resilience. While DoS attacks are often technically simple, a successful incident can overwhelm key websites and online systems, preventing access to essential services and causing significant operational and financial strain. NCSC Director of National Resilience Jonathon Ellison said: “We continue to see Russian-aligned hacktivist groups targeting UK organisations and although denial-of-service attacks may be technically simple, their impact can be significant. By overwhelming important websites and online systems, these attacks can prevent people from accessing the essential services they depend on every day.” He urged organisations to act quickly by reviewing and implementing the NCSC’s guidance to protect against DoS attacks and related cyber threats.

Guidance to Mitigate Denial-of-Service Attacks

As part of its advisory, the NCSC outlined practical steps organisations can take to reduce their exposure to DoS incidents. These include understanding where services may be vulnerable to resource exhaustion and clarifying whether responsibility for protection lies with internal teams or third-party suppliers. Organisations are encouraged to strengthen upstream defences by working closely with internet service providers and cloud vendors. The NCSC recommends understanding the DoS mitigations already in place, exploring third-party DDoS protection services, deploying content delivery networks for web-based platforms, and considering multiple service providers for critical functions. The agency also advises building systems that can scale rapidly during an attack. Cloud-native applications can be automatically scaled using provider APIs, while private data centres can deploy modern virtualisation, provided spare capacity is available.
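The “resource exhaustion” point in the guidance above can be made concrete with a minimal sketch. Below is a generic token-bucket rate limiter in Python (an illustration of the principle, not an NCSC artefact); in practice this kind of shedding happens upstream, in a CDN, load balancer, or provider-level DDoS protection service, as the advisory recommends. A manual clock is injected so the demo is deterministic.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens per second up to
    `capacity`; each allowed request consumes one token."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # Shed the request instead of queueing it.

# Demo with a manual clock so the behaviour is deterministic.
t = [0.0]
bucket = TokenBucket(rate=5, capacity=10, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(15)]  # 15 requests arrive at t=0
t[0] = 1.0                                   # one second later: 5 tokens refill
later = [bucket.allow() for _ in range(6)]
print(burst.count(True), later.count(True))  # prints: 10 5
```

The design choice worth noting is that excess requests are dropped rather than queued: during a volumetric attack, queueing simply moves the exhaustion from the handler to the queue.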

Preparing for and Responding to Attacks

The advisory highlights the importance of a clear response plan that allows services to continue operating, even in a degraded state. Recommended measures include graceful degradation, retaining administrative access during an attack, adapting to changing attacker tactics, and maintaining scalable fallback options for essential services. Testing and monitoring are also central to resilience. The NCSC encourages organisations to test their defences to understand the volume and types of attacks they can withstand, and to deploy monitoring tools that can detect incidents early and support real-time analysis.
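The “graceful degradation” recommendation can likewise be sketched as a small circuit-breaker-style wrapper (all names here are hypothetical): after repeated failures of the primary handler, a cached or reduced fallback response is served for a cooldown period instead of an error, so the service keeps operating in a degraded state.

```python
import time

def with_degradation(primary, fallback, failure_threshold=3,
                     cooldown=30.0, clock=time.monotonic):
    """Wrap `primary` so that after `failure_threshold` consecutive
    failures, `fallback` is served for `cooldown` seconds."""
    state = {"failures": 0, "open_until": 0.0}

    def handler(*args, **kwargs):
        if clock() < state["open_until"]:
            return fallback(*args, **kwargs)  # Degraded mode: skip the primary.
        try:
            result = primary(*args, **kwargs)
            state["failures"] = 0  # Success resets the failure counter.
            return result
        except Exception:
            state["failures"] += 1
            if state["failures"] >= failure_threshold:
                state["open_until"] = clock() + cooldown
            return fallback(*args, **kwargs)

    return handler

# Demo with a fake clock and a primary that is always overwhelmed.
t = [0.0]
def primary(): raise RuntimeError("backend overwhelmed")
def fallback(): return "cached response"
h = with_degradation(primary, fallback, failure_threshold=3,
                     cooldown=30.0, clock=lambda: t[0])
print([h() for _ in range(5)])  # every call returns "cached response"
```

Once the breaker opens, the primary is no longer even attempted until the cooldown expires, which matches the advisory’s point about retaining control of systems and continuing to serve essential functions while under attack.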

Broader Context and Ongoing Threat

This is not the first time the NCSC has called out malicious activity from Russian-aligned groups. In 2023, it warned of heightened risks from state-aligned adversaries following Russia’s invasion of Ukraine. The agency says the latest activity remains ideologically motivated and is carried out outside direct state control. Organisations are also being encouraged to engage with the NCSC’s heightened cyber threat reporting and information-sharing channels. Officials say building resilience now is critical as Russian-aligned hacktivist groups continue to test the UK’s digital infrastructure through persistent and disruptive campaigns.

UK Turns to Australia Model as British Government Considers Social Media Ban for Children

social media ban for children

Just weeks after Australia rolled out the world’s first nationwide social media ban for children under 16, the British government has signaled it may follow a similar path. On Monday, Prime Minister Keir Starmer said the UK is considering a social media ban for children aged 15 and under, warning that “no option is off the table” as ministers confront growing concerns about young people’s online wellbeing. The move places the British government’s proposed social media ban at the center of a broader national debate about the role of technology in childhood. Officials said they are studying a wide range of measures, including tougher age checks, phone curfews, restrictions on addictive platform features, and potentially raising the digital age of consent.

UK Explores Stricter Limits on Social Media Ban for Children

In a Substack post on Tuesday, Starmer said that for many children, social media has become “a world of endless scrolling, anxiety and comparison.” “Being a child should not be about constant judgement from strangers or the pressure to perform for likes,” he wrote. Alongside the possible ban, the government has launched a formal consultation on children’s use of technology. The review will examine whether a social media ban for children would be effective and, if introduced, how it could be enforced. Ministers will also look at improving age assurance technology and limiting design features such as “infinite scrolling” and “streaks,” which officials say encourage compulsive use. The consultation will be backed by a nationwide conversation with parents, young people, and civil society groups. The government said it would respond to the consultation in the summer.

Learning from Australia’s Unprecedented Move

British ministers are set to visit Australia to “learn first-hand from their approach,” referencing Canberra’s decision to ban social media for children under 16. The Australian law, which took effect on December 10, requires platforms such as Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads, and YouTube to block underage users or face fines of up to AU$32 million. Prime Minister Anthony Albanese made clear why his government acted. “Social media is doing harm to our kids, and I’m calling time on it,” he said. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.” Parents and children are not penalized under the Australian rules; enforcement targets technology companies. Early figures suggest significant impact. Australia’s eSafety Commissioner Julie Inman-Grant said 4.7 million social media accounts were deactivated in the first week of the policy. To put that in context, there are about 2.5 million Australians aged eight to 15. “This is exactly what we hoped for and expected: early wins through focused deactivations,” she said, adding that “absolute perfection is not a realistic goal,” but the law aims to delay exposure, reduce harm, and set a clear social norm.

UK Consultation and School Phone Bans

The UK’s proposals go beyond a possible social media ban. The government said it will examine raising the digital age of consent, introducing phone curfews, and restricting addictive platform features. It also announced tougher guidance for schools, making it clear that pupils should not have access to mobile phones during lessons, breaks, or lunch. Ofsted inspectors will now check whether mobile phone bans are properly enforced during school inspections. Schools struggling to implement bans will receive one-to-one support from Attendance and Behaviour Hub schools. Although nearly all UK schools already have phone policies—99.9% of primary schools and 90% of secondary schools—58% of secondary pupils reported phones being used without permission in some lessons. Education Secretary Bridget Phillipson said: “Mobile phones have no place in schools. No ifs, no buts.”

Building on Existing Online Safety Laws

Technology Secretary Liz Kendall said the government is prepared to take further action beyond the Online Safety Act. “These laws were never meant to be the end point, and we know parents still have serious concerns,” she said. “We are determined to ensure technology enriches children’s lives, not harms them.” The Online Safety Act has already introduced age checks for adult sites and strengthened rules around harmful content. The government said the share of children encountering age checks online has risen from 30% to 47%, and 58% of parents believe the measures are improving safety. The proposed British government social media ban would build on this framework, focusing on features that drive excessive use regardless of content. Officials said evidence from around the world will be examined as they consider whether a UK-wide social media ban for children could work in practice. As Australia’s experience begins to unfold, the UK is positioning itself to decide whether similar restrictions could reshape how children engage with digital platforms. The consultation marks the start of what ministers describe as a long-term effort to ensure young people develop a healthier relationship with technology.

Attack Surface Visibility Tops CISO Infrastructure Security Priorities for 2026

Attack Surface Visibility Tops CISO Priorities for 2026

As organizations look toward 2026, infrastructure security is becoming one of the most defining challenges for cybersecurity leaders. Expanding cloud adoption, hybrid IT environments, growing reliance on APIs, and a rapidly widening digital footprint are making it harder for organizations to understand what assets they actually own and expose to the internet. Against this backdrop, attack surface visibility is emerging as a central concern for CISOs shaping their long-term cybersecurity strategy. To understand how security leaders are prioritizing these challenges, The Cyber Express (TCE) conducted a LinkedIn poll asking, “What will be the top infrastructure security priority for CISOs in 2026?” The results point clearly to a growing consensus: before organizations can defend effectively, they must first gain visibility into their expanding digital attack surface.

The Cyber Express Poll Results: Attack Surface Visibility Takes the Lead

The poll generated strong engagement from cybersecurity professionals across roles and industries. The final results were:
  • Attack surface visibility – 40%
  • Cloud and hybrid security – 25%
  • Identity and access security – 25%
  • Ransomware resilience – 10%
With 40% of respondents selecting attack surface visibility, it emerged as the top infrastructure security priority for CISOs heading into 2026. The result reflects a growing recognition that organizations cannot secure what they cannot see — particularly as assets are spread across cloud platforms, SaaS tools, APIs, endpoints, development environments, and third-party services. Both cloud and hybrid security and identity and access security tied for second place, each receiving 25% of the vote. Ransomware resilience, while still a major operational concern, ranked lower at 10%, suggesting that many security leaders are shifting focus toward foundational controls that reduce exposure before attacks occur.

Why Attack Surface Visibility Is Rising to the Top

The dominance of attack surface visibility in the poll reflects a practical reality facing modern enterprises. Infrastructure today is no longer limited to on-premise servers and corporate networks. It now includes cloud workloads, remote endpoints, APIs, shadow IT, and externally facing services that change constantly. Without accurate, real-time visibility into these assets, even mature cybersecurity strategies struggle to apply controls consistently or detect threats early enough to prevent impact. Marcos S, Founder & CEO and Senior Full Stack Developer specializing in email infrastructure and cybersecurity, highlighted this shift in focus. He said, “It's interesting to see how organizations are adjusting their focus towards infrastructure security as digital transformation accelerates. Investing in robust API security solutions could play a crucial role when facing evolving threat landscapes.” His comment underscores how modern attack surfaces are increasingly shaped by APIs, integrations, and digital services that were not part of traditional security models.

“They’re All Intertwined” — The Link Between Visibility, Cloud, and Identity

While attack surface visibility topped the list, the close ranking of cloud and hybrid security and identity and access security shows how interconnected modern infrastructure security priorities have become. Mary Teisserenc, who works in MFA and access security for Active Directory, captured this reality in a comment on the poll. She wrote, “It's hard to alienate all of these, they're so intertwined. How do you have hybrid security without strong IAM?” Her observation reflects a common challenge for CISOs: visibility alone is not enough if identity controls are weak or cloud environments are misconfigured. Each layer of infrastructure security depends on the others to be effective.

CISO Priorities for 2026: Identity, AI, and Leadership

The themes emerging from the TCE poll closely mirror what senior security leaders are already predicting. Adam Palmer, CISO at First Hawaiian Bank, recently shared his top three predictions for cybersecurity in 2026:
  1. AI becomes the foundation of security operations, but governance lags adoption.
  2. Boards will continue to seek CISOs who translate risk into business decisions.
  3. Identity becomes the dominant control strategy, spanning PAM, Zero Trust, and SSO.
He added, “Across all three predictions, the differentiator will not be technology. It will be leadership.” Palmer’s post reinforces why identity and access security and attack surface visibility are gaining traction as top CISO priorities for 2026. Both are foundational controls that support AI-driven operations and help translate cyber risk into business impact.

AI, Scale, and a Growing Digital Attack Surface

Matthew Rosenquist, Founder of Cybersecurity Insights and CISO at Mercury Risk, also pointed to artificial intelligence as the defining force shaping cybersecurity in 2026. He warned that attackers will use AI to scale proven techniques faster and more effectively, while defenders struggle to keep pace. He said: “AI is an amazing tool for computing, but in 2026, there will be significant pain, public failures, and a few uncomfortable Board conversations.” As attacks become faster and more automated, blind spots in the digital attack surface will become far more dangerous — further elevating the importance of continuous visibility.

From Strategy to Execution

Industry research is also pushing CISOs toward execution-focused priorities. William Luders, Business Development Associate at Gartner, highlighted key initiatives leaders have recently prioritized:
  • Developing an actionable zero-trust strategy
  • Maturing governance with NIST CSF 2.0
  • Embedding cybersecurity into GenAI governance
  • Enhancing data security with cyberstorage
  • Monitoring and managing OT, IoT, and IIoT systems
He asked, “Which of these initiatives will you prioritize in 2026? And how will you measure success?”

A Clear Shift Toward Foundational Security

Taken together, the poll results and industry perspectives reflect a practical shift in how CISOs are approaching infrastructure security. Rather than prioritizing isolated threat categories, leaders are increasingly focusing on core capabilities that support every layer of defense — particularly attack surface visibility, identity control, and governance. The strong preference for attack surface visibility highlights a growing recognition that security programs cannot function effectively without a clear understanding of what needs to be protected. As CISO priorities for 2026 continue to evolve, infrastructure security is shaping up to be less about deploying more tools and more about strengthening fundamentals — visibility, identity, leadership, and execution.

Canada’s Investment Regulator Investigates Cyber Incident, Data Exposure Confirmed

CIRO cybersecurity incident

The Canadian Investment Regulatory Organization (CIRO) has confirmed that it detected a cybersecurity threat earlier this month and took immediate steps to contain the situation. The CIRO cybersecurity incident, first identified on August 11, 2025, prompted CIRO to proactively shut down parts of its IT environment to protect its systems and data while an investigation was launched. CIRO is the national self-regulatory body overseeing all investment dealers, mutual fund dealers, and trading activity across Canada’s debt and equity markets. CIRO’s mandate includes protecting investors, ensuring efficient and consistent regulation, and strengthening public trust in financial regulation and the professionals who manage Canadians’ investments. In a public update issued from Toronto on August 18, CIRO said critical regulatory and surveillance functions remained operational throughout the disruption. The organization also reassured the public that its real-time equity market surveillance operations are continuing as normal and that there is currently no active threat within its systems. CIRO added a clear warning to the public: “CIRO will never contact you about this event with an unsolicited call or email asking for your personal or financial information.”

CIRO Cybersecurity Incident: What Happened

According to the organization, the CIRO cybersecurity incident was detected on August 11, 2025. As a precautionary measure, the organization temporarily shut down some of its systems to ensure their safety and immediately began a technical and forensic investigation. “Throughout this time, critical functions remained available,” CIRO stated, emphasizing that its core regulatory responsibilities were not disrupted. It later confirmed, “We are confident that the incident is contained and that there is no active threat in CIRO’s environment.” CIRO is working with both internal teams and external cybersecurity and legal experts, as well as law enforcement authorities, to determine the nature and full scope of the breach.

Personal Information Affected at CIRO

On August 17, preliminary investigative findings indicated that some personal information had been impacted. The affected data relates to certain member firms and their registered employees. CIRO acknowledged the seriousness of this development, stating, “Given the high standard of security that CIRO expects of both itself and its members, we are deeply concerned about this, and know our members will be too.” The organization said its immediate priority is to identify which individual registrants may have been affected. Once that process is complete, CIRO will notify impacted individuals directly and provide appropriate risk mitigation services. Further updates are expected as the investigation progresses.

Are Investors Impacted?

CIRO stressed that Canadians’ investments are not at risk as a result of the CIRO cybersecurity incident. The regulator clarified that it only holds limited investor data, obtained through its member compliance and oversight functions. “It is important to note that Canadians’ investments are not at risk. CIRO only receives information about a sample of investors through its member compliance functions,” the organization said. However, CIRO acknowledged that some investor information may have been impacted. If the investigation confirms that any investor data was affected, those individuals will be notified directly and offered risk mitigation services.

What CIRO Is Doing Now

In response to the breach, CIRO has engaged both internal and external experts to carry out a full technical and forensic investigation. The regulator said the incident has been successfully contained and that additional system and data security measures have already been implemented. “We engaged internal and external experts to perform a technical and forensic investigation to identify the nature and scope of the event,” CIRO said. “As previously shared, the incident has been successfully contained, and additional system and data security measures have been implemented to enhance our existing cyber security protections.” CIRO also expressed regret over the CIRO cybersecurity incident and committed to ongoing transparency. “We deeply regret this has happened and remain committed to providing further updates on this page as we learn more.”

Key Takeaways

  • CIRO detected a cybersecurity threat on August 11, 2025, and shut down some systems as a precaution.
  • The CIRO cybersecurity incident is contained, and there is no active threat in CIRO’s environment.
  • Some personal and registration information linked to member firms and registered employees was affected.
  • Some investor information may have been impacted, but Canadians’ investments are not at risk.
  • Impacted individuals will be notified directly and offered risk mitigation services.
  • CIRO will never contact individuals with unsolicited calls or emails seeking personal or financial information.
As the investigation continues, CIRO says it will release more details in due course and provide direct notifications to anyone confirmed to be affected.

Grok Image Abuse Prompts X to Roll Out New Safety Limits

Grok AI Image Abuse

Elon Musk’s social media platform X has announced a series of changes to its AI chatbot Grok, aiming to prevent the creation of nonconsensual sexualized images, including content that critics and authorities say amounts to child sexual abuse material (CSAM). The announcement was made Wednesday via X’s official Safety account, following weeks of growing scrutiny over Grok AI’s image-generation capabilities and reports of nonconsensual sexualized content.

X Reiterates Zero Tolerance Policy on CSAM and Nonconsensual Content

In its statement, X emphasized that it maintains “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The platform said it continues to remove high-priority violative content, including CSAM, and to take enforcement action against accounts that violate X’s rules. Where required, accounts seeking child sexual exploitation material are reported to law enforcement authorities. The company acknowledged that the rapid evolution of generative AI presents industry-wide challenges and said it is actively working with users, partners, governing bodies, and other platforms to respond more quickly as new risks emerge.

Grok AI Image Generation Restrictions Expanded

As part of the update, X said it has implemented technological measures to restrict Grok AI from editing images of real people into revealing clothing, such as bikinis. These restrictions apply globally and affect all users, including paid subscribers. In a further change, image creation and image editing through the @Grok account are now limited to paid subscribers worldwide. X said this step adds an additional layer of accountability by helping ensure that users who attempt to abuse Grok in violation of laws or platform policies can be identified. X also confirmed the introduction of geoblocking measures in certain jurisdictions. In regions where such content is illegal, users will no longer be able to generate images of real people in bikinis, underwear, or similar attire using Grok AI. Similar geoblocking controls are being rolled out for the standalone Grok app by xAI.

Announcement Follows Widespread Abuse Reports

The update comes amid a growing scandal involving Grok AI, after thousands of users were reported to have generated sexualized images of women and children using the tool. Numerous reports documented how users took publicly available images and used Grok to depict individuals in explicit or suggestive scenarios without their consent. Particular concern has centered on a feature known as “Spicy Mode,” which xAI developed as part of Grok’s image-generation system and promoted as a differentiator. Critics say the feature enabled large-scale abuse and contributed to the spread of nonconsensual intimate imagery. According to one analysis cited in media reports, more than half of the approximately 20,000 images generated by Grok over a recent holiday period depicted people in minimal clothing, with some images appearing to involve children.

U.S. and European Authorities Escalate Scrutiny

On January 14, 2026, ahead of X’s announcement, California Attorney General Rob Bonta confirmed that his office had opened an investigation into xAI over the proliferation of nonconsensual sexually explicit material produced using Grok. In a statement, Bonta said reports describing the depiction of women and children in explicit situations were “shocking” and urged xAI to take immediate action. His office is examining whether and how xAI may have violated the law. Regulatory pressure has also intensified internationally. The European Commission confirmed earlier this month that it is examining Grok’s image-generation capabilities, particularly the creation of sexually explicit images involving minors. European officials have signaled that enforcement action is being considered.

App Store Pressure Adds to Challenges

On January 12, 2026, three U.S. senators urged Apple and Google to remove X and Grok from their app stores, arguing that Grok AI has repeatedly violated app store policies related to abusive and exploitative content. The lawmakers warned that app distribution platforms may also bear responsibility if such content continues.

Ongoing Oversight and Industry Implications

X said the latest changes do not alter its existing safety rules, which apply to all AI prompts and generated content, regardless of whether users are free or paid subscribers. The platform stated that its safety teams are working continuously to add safeguards, remove illegal content, suspend accounts where appropriate, and cooperate with authorities. As investigations continue across multiple jurisdictions, the Grok controversy is becoming a defining case in the broader debate over AI safety, accountability, and the protection of children and vulnerable individuals in the age of generative AI.

APD Investigates Third-Party Cybersecurity Incident, Says No Evidence of Data Compromise

Anchorage Police Department Cybersecurity Incident

The Anchorage Police Department (APD) has taken action after being notified of a cybersecurity incident involving a third-party service provider, emphasizing growing concerns around third-party cyber risks for local governments in the United States. APD, which serves the Municipality of Anchorage in Alaska, confirmed that the cybersecurity incident is linked to Whitebox Technologies, a data migration firm that supports multiple agencies nationwide. The department was alerted to the issue on January 7, 2026, while preparing for an internal software system upgrade. Whitebox Technologies has not publicly commented on the incident.

No Evidence of Data Compromise, Anchorage Police Department Says

According to the Anchorage Police Department, there is currently no evidence that its systems were compromised or that departmental data was accessed by threat actors. However, the department emphasized that precautionary measures were immediately implemented to reduce risk and protect sensitive information. In an official statement, APD said:
“Currently, there is no evidence indicating that APD systems have been compromised or that any APD data has been acquired by the threat actor. However, as a precautionary measure, the department is actively monitoring the systems and implementing protective measures to safeguard information.”
Anchorage, Alaska’s largest city, is home to approximately 300,000 residents, making the protection of public safety data a critical priority for municipal authorities.

Immediate Actions Taken to Secure APD Systems

Following notification of the APD cybersecurity incident, the city’s Information Technology Department (ITD) moved quickly to contain potential exposure. Officials confirmed that relevant APD servers were shut down, and access for the vendor and all associated third-party service providers was disabled. Additionally, ITD oversaw the deletion and removal of all remaining APD data from the third-party service provider’s servers. APD has since initiated continued oversight of its internal systems and is closely monitoring for any unusual or suspicious activity. As part of its response, APD also notified employees via email on January 7, advising them to remain alert and report any irregular system behavior through established channels.

Investigation Ongoing, Notifications Promised if Needed

The third-party service provider is leading the investigation, with APD working closely alongside other municipal departments to oversee the response. Officials stated that this collaboration is focused on ensuring appropriate safeguards are in place and minimizing potential risks as the investigation continues. APD pledged that if it is determined that protected personal information was accessed during the incident, affected individuals will be notified in accordance with applicable requirements. The department declined to provide further details about the nature of the cyberattack and confirmed that the incident is not related to a recent 311 service outage experienced by the city.

Whitebox Technologies and Broader Third-Party Risks

APD noted that Whitebox Technologies works with multiple agencies nationwide. Information published on the company’s website indicates it has provided services to municipalities in states including Washington, New Jersey, Oklahoma, and Maine. The APD cybersecurity incident reflects a broader trend in which hackers increasingly target third-party service providers as a pathway into government systems. These vendors often hold or process sensitive data, making them attractive targets for cybercriminals.

Recent Cyberattacks 

The Anchorage incident comes amid a wave of cyberattacks affecting local government technology providers. In November 2025, Crisis24’s OnSolve CodeRED emergency alert system was disrupted following a cyberattack claimed by the INC ransomware group. That incident impacted local governments across the U.S., with some user data potentially exposed, including names, addresses, email addresses, phone numbers, and passwords. Crisis24 has since announced plans to launch a new secure CodeRED system, prompting varying responses from municipalities relying on the platform. While APD maintains that its systems remain secure, officials confirmed that monitoring will continue as the investigation progresses. The department stressed that protective measures remain in place to safeguard information and maintain public trust.

Nicole Ozer Joins CPPA to Drive Privacy and Digital Security Initiatives

Nicole Ozer appointment

The California Privacy Protection Agency (CalPrivacy) has announced a significant leadership appointment, as Assembly Speaker Robert Rivas named Nicole Ozer to the CPPA Board, emphasizing California’s ongoing commitment to strengthening consumer privacy protections. The Nicole Ozer appointment comes at a time when privacy regulation, digital rights, and responsible data governance are taking on increased importance across both state and federal institutions. Ozer brings decades of experience working at the intersection of privacy rights, technology, and democratic governance. She currently serves as the inaugural Executive Director of the Center for Constitutional Democracy at UC Law San Francisco, where her work focuses on safeguarding civil liberties in the digital age.

Nicole Ozer Appointment Strengthens CalPrivacy Board

Jennifer Urban, Chair of the California Privacy Protection Agency Board, welcomed the Nicole Ozer appointment, citing Ozer’s extensive background in privacy law, surveillance policy, artificial intelligence, and digital speech. “Nicole has a long history of service to Californians and deep legal and policy expertise,” Urban said. “Her knowledge will be a valuable asset to the agency as we continue advancing privacy protections across the state.” Urban also acknowledged the contributions of outgoing board member Dr. Brandie Nonnecke, noting her role in supporting CalPrivacy’s rulemaking, enforcement efforts, and public outreach initiatives over the past year. The CPPA Board plays a central role in guiding how California’s privacy laws are implemented and enforced, making leadership appointments especially critical as regulatory expectations evolve.

Nicole Ozer’s Background in Privacy and Civil Liberties

Before joining UC Law San Francisco, Nicole Ozer served as the founding Director of the Technology and Civil Liberties Program at the ACLU of Northern California. Her career also includes roles as a Technology and Human Rights Fellow at the Harvard Kennedy School, a Visiting Researcher at the Berkeley Center for Law and Technology, and a Fellow at Stanford’s Digital Civil Society Lab. Her work has been widely recognized, including a California Senate Members Resolution honoring her dedication to defending civil liberties in the digital world and her contributions to protecting the rights of people across California. “I appreciate the opportunity to serve on the CPPA Board,” Ozer said. “This is a critical moment to ensure that California’s robust privacy rights are meaningful in practice. I look forward to supporting the agency’s important work.”

Role of the California Privacy Protection Agency

The California Privacy Protection Agency is governed by a five-member board, with appointments made by the Governor, the Senate Rules Committee, the Assembly Speaker, and the Attorney General. The agency is responsible for administering and enforcing key privacy laws, including the California Consumer Privacy Act, the Delete Act, and the Opt Me Out Act. Beyond enforcement, CalPrivacy focuses on educating consumers and businesses about their rights and obligations. Through its website, Privacy.ca.gov, Californians can access guidance on protecting personal data, submitting delete requests, and using the Delete Request and Opt-out Platform (DROP).

Leadership Shifts Across Security and Privacy Institutions

Ozer’s appointment to the California Privacy Protection Agency Board comes in the same week as another notable leadership development at the federal level. The National Security Agency (NSA) announced the appointment of Timothy Kosiba as its 21st Deputy Director, highlighting parallel leadership changes shaping the future of privacy, cybersecurity, and national security. As NSA Deputy Director, Kosiba becomes the agency’s senior civilian leader, responsible for strategy execution, policy development, and operational oversight. His appointment was designated by Secretary of War Pete Hegseth and Director of National Intelligence Tulsi Gabbard, and formally approved by President Donald J. Trump. While the missions of the National Security Agency and the California Privacy Protection Agency differ, both appointments underline a growing emphasis on experienced leadership in institutions responsible for protecting sensitive data, infrastructure, and public trust. Together, these developments reflect how governance around privacy, cybersecurity, and digital rights continues to evolve, with leadership playing a central role in shaping how protections are implemented in practice.

Spanish Energy Giant Endesa Notifies Customers of Data Breach Impacting Energía XXI


Spanish energy provider Endesa and its regulated electricity operator Energía XXI have begun notifying customers after detecting unauthorized access to the company’s internal systems, resulting in the exposure of personal and contract-related data. The Endesa data breach incident, publicly disclosed by the company, impacts customers linked to Endesa’s commercial platform and is currently under investigation. Endesa, Spain’s largest electric utility company and a subsidiary of the Enel Group, provides electricity and gas services to millions of customers across Spain and Portugal. In total, the company reports serving approximately 22 million clients. The Endesa data breach specifically affects customers of Energía XXI, which operates under Spain’s regulated energy market.

Unauthorized Access Detected on Commercial Platform

According to Endesa, the security incident involved unauthorized and illegitimate access to its commercial platform, enabling attackers to view sensitive customer information tied to energy contracts. In a notification sent to affected customers, the company acknowledged the Endesa data breach, stating: “Despite the security measures implemented by this company, we have detected evidence of unauthorized and illegitimate access to certain personal data of our customers related to their energy contracts, including yours.” The company clarified that while account passwords were not compromised, other categories of data were potentially accessed during the incident. (Image source: X)

Types of Data Potentially Exposed in Endesa Data Breach

Based on the ongoing investigation, Endesa confirmed that attackers may have accessed or exfiltrated the following information:
  • Basic identification data
  • Contact information
  • National identity card numbers
  • Contract-related data
  • Possible payment details, including IBANs
Despite the scope of exposed data, Endesa emphasized that login credentials remained secure, reducing the likelihood of direct account takeovers.

Endesa Activates Incident Response Measures

Following detection of the Endesa data breach, the company activated its established security response protocols to contain and mitigate the incident. In its official statement, Endesa detailed the actions taken: “As soon as Endesa Energía became aware of the incident, the established security protocols and procedures were activated, along with all necessary technical and organizational measures to contain it, mitigate its effects, and prevent its recurrence.” These actions included blocking compromised internal accounts, analyzing log records, notifying affected customers, and implementing enhanced monitoring to detect further suspicious activity. The company confirmed that operations and services remain unaffected.

Authorities Notified as Investigation Continues

As required under applicable regulations, Endesa notified the Spanish Data Protection Agency and other relevant authorities after conducting an initial assessment of the incident. The company stated that the investigation is ongoing, involving both internal teams and external suppliers, to fully understand the cause and impact of the breach. Addressing customer concerns, Endesa noted: “As of the date of this communication, there is no evidence of any fraudulent use of the data affected by the incident, making it unlikely that a high-risk impact on your rights and freedoms will materialize.”

Customers Warned of Potential Phishing and Impersonation Risks

While no misuse of data has been identified so far, Endesa acknowledged potential risks associated with the exposed information. Customers have been urged to remain vigilant against identity impersonation, data misuse, phishing attempts, and spam campaigns. The company advised affected individuals to report any suspicious communications to its call center and to avoid sharing personal or sensitive information with unknown parties. Customers were also encouraged to contact law enforcement in case of suspected fraudulent activity. The Cyber Express Team has contacted Energía XXI and Endesa seeking further clarification on the incident and its impact. However, at the time of publication, no additional response had been received from either entity.

NSA Appoints Timothy Kosiba to Oversee Strategy and Cybersecurity Operations


The National Security Agency (NSA) has announced the appointment of Timothy Kosiba as its 21st Deputy Director, marking a significant leadership development at one of the United States’ most critical national security institutions. The designation was made by Secretary of War Pete Hegseth and Director of National Intelligence Tulsi Gabbard, and formally approved by President Donald J. Trump, according to an official statement released on January 9. As NSA Deputy Director, Kosiba becomes the agency’s senior civilian leader, responsible for overseeing strategy execution, establishing agency-wide policy, guiding operational priorities, and managing senior civilian leadership. In this role, he will also support the broader U.S. defense and intelligence enterprise, contributing to the formulation of national security policy and strengthening NSA’s position as an integrated mission partner against evolving foreign threats.

NSA Appoints Timothy Kosiba as Deputy Director

The NSA leadership appointment places Kosiba at the center of U.S. efforts to maintain a decisive national security advantage, particularly in the areas of foreign signals intelligence and cybersecurity operations. His return to the agency comes at a time when cybersecurity, cyber defense, and intelligence integration remain top priorities for U.S. national security planners. Lieutenant General William J. Hartman, Acting Commander of U.S. Cyber Command and Performing Duties of Director of the National Security Agency, welcomed Kosiba’s return, emphasizing his leadership credentials and institutional knowledge. “Tim is a people-focused leader with a wealth of experience that makes him ideal for the deputy director role,” Hartman said, citing Kosiba’s 33-year federal career and extensive experience across intelligence and cybersecurity missions. Hartman added that Kosiba’s leadership will be critical as the NSA advances its mission to protect U.S. national security interests in an increasingly complex threat environment.

Deep Experience Across Intelligence and Cybersecurity

With more than 30 years in the U.S. Intelligence Community, Timothy Kosiba brings deep familiarity with the NSA mission, particularly in public sector cybersecurity, cyber policy, and operational execution. Over the course of his career, he has played a key role in implementing the NSA’s Cyber Security Policy and has frequently represented both the NSA and U.S. Cyber Command in cyber-related discussions at the White House and other interagency forums. Kosiba’s experience spans both technical leadership and strategic engagement, positioning him to bridge operational realities with national-level policy objectives. His appointment reinforces the NSA’s focus on aligning intelligence capabilities with broader government cybersecurity and defense strategies.

Career Path Spanning Global and Operational Leadership

Kosiba began his NSA career as technical director for the Joint Functional Component Command for Network Warfare, where he worked on mission-critical cyber operations. He later served as technical director for the Requirements and Targeting Office within the Tailored Access Operations organization, a role focused on advanced cyber capabilities. Selected for the Defense Intelligence Senior Level (DISL) Service, Kosiba was posted overseas as chief of the Special U.S. Liaison Office in Canberra, Australia, strengthening intelligence cooperation with key allies. After returning to the United States, he became deputy director of the NSA/CSS Commercial Solutions Center and was later appointed Chief of Computer Network Operations (CNO). Following three years as CNO, Kosiba was assigned as Deputy Commander of NSA Georgia, the largest NSA field location, where he oversaw large-scale operational and workforce initiatives.

Commitment to the NSA Mission

Commenting on his appointment, Kosiba described the role as a return to familiar ground. “It is an honor to come back home and serve as the National Security Agency’s next deputy director,” he said, emphasizing his long-standing commitment to the agency’s mission and workforce. As NSA Deputy Director, Timothy Kosiba is expected to play a central role in shaping the agency’s approach to cybersecurity, intelligence operations, and national security policy, reinforcing NSA’s position within the U.S. intelligence and defense ecosystem amid persistent and emerging global threats.

After EU Probe, U.S. Senators Push Apple and Google to Review Grok AI


Concerns surrounding Grok AI are escalating rapidly, with pressure now mounting in the United States after ongoing scrutiny in Europe. Three U.S. senators have urged Apple and Google to remove the X app and Grok AI from the Apple App Store and Google Play Store, citing the large-scale creation of nonconsensual sexualized images of real people, including children. The move comes as a direct follow-up to the European Commission’s investigation into Grok AI’s image-generation capabilities, marking a significant expansion of regulatory attention beyond the EU. While European regulators have openly weighed enforcement actions, U.S. authorities are now signaling that app distribution platforms may also bear responsibility.

U.S. Senators Cite App Store Policy Violations by Grok AI

In a letter dated January 9, 2026, Senators Ron Wyden, Ed Markey, and Ben Ray Luján formally asked Apple CEO Tim Cook and Google CEO Sundar Pichai to enforce their app store policies against X Corp. The lawmakers argue that Grok AI, which operates within the X app, has repeatedly violated rules governing abusive and exploitative content. According to the senators, users have leveraged Grok AI to generate nonconsensual sexualized images of women, depicting abuse, humiliation, torture, and even death. More alarmingly, the letter states that Grok AI has also been used to create sexualized images of children, content the senators described as both harmful and potentially illegal. The lawmakers emphasized that such activity directly conflicts with policies enforced by both the Apple App Store and Google Play Store, which prohibit content involving sexual exploitation, especially material involving minors.

Researchers Flag Potential Child Abuse Material Linked to Grok AI

The letter also references findings by independent researchers who identified an archive connected to Grok AI containing nearly 100 images flagged as potential child sexual abuse material. These images were reportedly generated over several months, raising questions about X Corp’s oversight and response mechanisms. The senators stated that X appeared fully aware of the issue, pointing to public reactions by Elon Musk, who acknowledged reports of Grok-generated images with emoji responses. In their view, this signaled a lack of seriousness in addressing the misuse of Grok AI.

Premium Restrictions Fail to Calm Controversy

In response to the backlash, X recently limited Grok AI’s image-generation feature to premium subscribers. However, the senators dismissed this move as inadequate. Sen. Wyden said the change merely placed a paywall around harmful behavior rather than stopping it, arguing that it allowed the production of abusive content to continue while generating revenue. The lawmakers stressed that restricting access does not absolve X of responsibility, particularly when nonconsensual sexualized images remain possible through the platform.

Pressure Mounts on Apple App Store and Google Play Store

The senators warned that allowing the X app and Grok AI to remain available on the Apple App Store and Google Play Store would undermine both companies’ claims that their platforms offer safer environments than alternative app distribution methods. They also pointed to recent instances where Apple and Google acted swiftly to remove other controversial apps under government pressure, arguing that similar urgency should apply in the case of Grok AI. At minimum, the lawmakers said, temporary removal of the apps would be appropriate while a full investigation is conducted. They requested a written response from both companies by January 23, 2026, outlining how Grok AI and the X app are being assessed under existing policies. Apple and Google have not publicly commented on the letter, and X has yet to issue a formal response. The latest development adds momentum to global scrutiny of Grok AI, reinforcing concerns already raised by the European Commission. Together, actions in the U.S. and Europe signal a broader shift toward holding AI platforms, and the app ecosystems that distribute them, accountable for how generative technologies are deployed and controlled at scale.

UK Moves to Close Public Sector Cyber Gaps With Government Cyber Action Plan


The UK government has unveiled the Government Cyber Action Plan (GCAP) as a renewed effort to close the growing gap between escalating cyber threats and the public sector’s ability to respond effectively. The move comes amid a series of cyberattacks targeting UK retail and manufacturing sectors, incidents that have underscored broader vulnerabilities affecting critical services and government operations. Designed to strengthen UK cyber resilience, the plan reflects a shift from fragmented cyber initiatives to a more coordinated, accountable, and outcomes-driven approach across government departments.

A Growing Gap Between Threats and Defences

Recent cyber incidents have highlighted a persistent challenge: while threats to public services continue to grow in scale and sophistication, defensive capabilities have not kept pace. Reviews conducted by the Department for Science, Innovation and Technology (DSIT) revealed that cyber and digital resilience across the public sector was significantly lower than previously assessed. This assessment was reinforced by the National Audit Office’s report on government cyber resilience, which warned that without urgent improvements, the government risks serious incidents and operational disruption. The report concluded that the public sector must “catch up with the acute cyber threat it faces” to protect services and ensure value for money.

Building on Existing Foundations

The Government Cyber Action Plan builds on earlier collaborative efforts between DSIT, the National Cyber Security Centre (NCSC), and the Cabinet Office. Notable achievements to date include the establishment of the Government Cyber Coordination Centre (GC3), created to manage cross-government incident response, and the rollout of GovAssure, a scheme designed to assess the security of government-critical systems. Despite these initiatives, officials acknowledged that structural issues, inconsistent governance, and limited accountability continued to hinder effective cyber risk management. GCAP is intended to address these gaps directly.

Five Delivery Strands of the Government Cyber Action Plan

At the core of the Government Cyber Action Plan are five delivery strands aimed at strengthening accountability and improving operational resilience across departments:
  • Accountability: clearer responsibility for cyber risk management placed on accounting officers, senior leaders, Chief Digital and Information Officers (CDIOs), and Chief Information Security Officers (CISOs).
  • Support: access to shared cyber expertise for departments, with rapid deployment of technical teams during high-risk situations.
  • Services: secure digital solutions that can be built once and used across multiple departments, reducing duplication, improving consistency, and addressing capability gaps through innovation, including initiatives such as the NCSC’s ACD 2.0 programme.
  • Response: the new Government Cyber Incident Response Plan (G-CIRP), a framework that formalises how departments report and respond to cyber incidents and improves coordination during national-level events.
  • Skills: attracting, developing, and retaining cyber professionals across government, centred on the creation of a Government Cyber Security Profession—the first dedicated government profession focused specifically on cyber security and resilience.

Role of the NCSC and Long-Term Impact

The NCSC will play a central role across all five strands of the Government Cyber Action Plan, from supporting departments during incidents to helping design services that improve resilience. This approach aligns with the NCSC’s existing work with critical national infrastructure and public sector organisations, offering technical guidance, assurance, and incident response support. While GCAP’s implementation will be phased through to 2029 and beyond, officials say the framework is expected to deliver measurable improvements even in its first year. These include stronger risk management practices and faster coordination during cyber incidents. According to Johnny McManus, Deputy Director for Government Cyber Resilience at the NCSC, the combination of DSIT’s delivery leadership and the NCSC’s technical authority provides a foundation for transforming UK cyber resilience across the public sector.

European Commission Investigates Grok AI After Explicit Images of Minors Surface


The Grok AI investigation has intensified after the European Commission confirmed it is examining the creation of sexually explicit and suggestive images of girls, including minors, generated by Grok, the artificial intelligence chatbot integrated into social media platform X. The scrutiny follows widespread outrage linked to a paid feature known as “Spicy Mode,” introduced last summer, which critics say enabled the generation and manipulation of sexualised imagery. Speaking to journalists in Brussels on Monday, a spokesperson for the European Commission said the matter was being treated with urgency. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” the spokesperson said, adding: “This is not 'spicy'. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

European Commission Examines Grok’s Compliance With EU Law

The European Commission Grok probe places renewed focus on the responsibilities of AI developers and social media platforms under the EU’s Digital Services Act (DSA). The European Commission, which acts as the EU’s digital watchdog, said it is assessing whether X and its AI systems are meeting their legal obligations to prevent the dissemination of illegal content, particularly material involving minors. The inquiry comes after reports that Grok was used to generate sexually explicit images of young girls, including through prompts that altered existing images. The controversy escalated following the rollout of an “edit image” feature that allowed users to modify photos with instructions such as “put her in a bikini” or “remove her clothes.” On Sunday, X said it had removed the images in question and banned the users involved. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the company’s X Safety account posted. (Image source: X)

International Backlash and Parallel Investigations

The X AI chatbot Grok is now facing regulatory pressure beyond the European Commission. Authorities in France, Malaysia, and India have launched or expanded investigations into the platform’s handling of explicit and sexualised content generated by the AI tool. In France, prosecutors last week expanded an existing investigation into X to include allegations that Grok was being used to generate and distribute child sexual abuse material. The original probe, opened in July, focused on claims that X’s algorithms were being manipulated for foreign interference. India has also taken a firm stance. Last week, Indian authorities reportedly ordered X to remove sexualised content, curb offending accounts, and submit an “Action Taken Report” within 72 hours or face legal consequences. As of Monday, there was no public confirmation on whether X had complied. (Image source: India’s Ministry of Electronics and Information Technology) Malaysia’s Communications and Multimedia Commission said it had received public complaints about “indecent, grossly offensive” content on X and confirmed it was investigating the matter. The regulator added that X’s representatives would be summoned.

DSA enforcement and Grok’s previous controversies

The current Grok AI investigation is not the first time the European Commission has taken action related to the chatbot. Last November, the Commission requested information from X after Grok generated Holocaust denial content. That request was issued under the DSA, and the Commission said it is still analysing the company’s response. In December, X was fined €120 million under the DSA over its handling of account verification check marks and advertising practices. “I think X is very well aware that we are very serious about DSA enforcement. They will remember the fine that they have received from us,” the Commission spokesperson said.

Public reaction and growing concerns over AI misuse

The controversy has prompted intense discussion across online platforms, particularly Reddit, where users have raised alarms about the potential misuse of generative AI tools to create non-consensual and abusive content. Many posts focused on how easily Grok could be prompted to alter real images, transforming ordinary photographs of women and children into sexualised or explicit content. Some Reddit users referenced reporting by the BBC, which said it had observed multiple examples on X of users asking the chatbot to manipulate real images—such as making women appear in bikinis or placing them in sexualised scenarios—without consent. These examples, shared widely online, have fuelled broader concerns about the adequacy of content safeguards. Separately, the UK’s media regulator Ofcom said it had made “urgent contact” with Elon Musk’s company xAI following reports that Grok could be used to generate “sexualised images of children” and produce “undressed images” of individuals. Ofcom said it was seeking information on the steps taken by X and xAI to comply with their legal duties to protect users in the UK and would assess whether the matter warrants further investigation. Across Reddit and other forums, users have questioned why such image-editing capabilities were available at all, with some arguing that the episode exposes gaps in oversight around AI systems deployed at scale. Others expressed scepticism about enforcement outcomes, warning that regulatory responses often come only after harm has already occurred. Although X has reportedly restricted visibility of Grok’s media features, users continue to flag instances of image manipulation and redistribution. Digital rights advocates note that once explicit content is created and shared, removing individual posts does not fully address the broader risk to those affected. 
Grok has acknowledged shortcomings in its safeguards, stating it had identified lapses and was “urgently fixing them.” The AI tool has also issued an apology for generating an image of two young girls in sexualised attire based on a user prompt. As scrutiny intensifies, the episode is emerging as a key test of how AI-generated content is regulated—and how accountability is enforced—when powerful tools enable harm at scale.

Higham Lane School Cyberattack Disrupts IT Systems, Forcing Temporary Closure


A UK school cyberattack has forced a British secondary school to close its doors at the start of the new term, highlighting ongoing cybersecurity challenges across the education sector. Higham Lane School in Nuneaton, central England, confirmed that a cyber incident has disrupted its entire IT infrastructure, preventing students and staff from accessing essential digital services. The Higham Lane School cyberattack incident has left the school’s approximately 1,500 students unable to return to classrooms following the Christmas holidays. School officials confirmed that the campus will remain closed until at least Wednesday while investigations and recovery efforts continue.

Higham Lane School Cyber Incident Disrupts IT Systems

In an email sent to parents and carers, Higham Lane School stated that the cyberattack “has taken down the school IT system,” leaving staff without access to “any digital services including telephones / emails / servers and the school’s management system.” The outage has affected all internal communications and administrative functions, prompting school leaders to take the precautionary step of closing the site. Headteacher Michael Gannon detailed the situation in a formal letter to families, explaining the steps being taken to manage the incident. “We are writing to provide you with an update following the recent cyber incident that has affected our school,” the letter stated. “As you are aware, the school will be closed today, Monday 5th January, and will remain closed tomorrow, Tuesday 6th January, while we continue to respond to this situation.” The decision, according to the school, was made following advice from external experts. Higham Lane School is working with a Cyber Incident Response Team from the Department for Education, alongside IT specialists from its Multi Academy Trust, the Central England Academy Trust, to investigate and resolve the issue.

UK School Cyberattack: Students Advised Not to Access School Systems

As part of the response to the school IT system outage, staff and students have been instructed not to log into any school platforms, including Google Classroom and SharePoint, until further notice. The school emphasized that students who may have already accessed systems using their credentials should not worry, but added that the temporary restriction is necessary to ensure safety while the investigation continues. Despite the closure, students have been encouraged to continue learning independently using external platforms not connected to the school network. Resources such as BBC Bitesize and Oak National Academy were recommended, with the school noting that these services can be accessed safely using personal devices and home internet connections.

Education Sector Cybersecurity Under Growing Pressure

The Higham Lane School cyber incident comes amid rising concern over cybersecurity in schools, both in the UK and internationally. In October 2025, Kearney Public Schools (KPS) in the United States disclosed a cybersecurity incident that compromised its entire technology network, affecting phones, computers, and digital systems district-wide. The KPS cyberattack disrupted communications as students and staff prepared to return to classrooms, requiring support from external cybersecurity experts. In the UK, recent findings from the Information Commissioner’s Office (ICO) have drawn attention to another emerging risk: student-led insider cyber incidents. According to the regulator’s analysis of 215 personal data breach reports in the education sector, 57% of insider incidents over the past two years were linked to students. Nearly a third involved stolen login credentials, and in 97% of those cases, students were responsible. “It’s important that we understand the next generation’s interests and motivations in the online world to ensure children remain on the right side of the law,” said Heather Toomey, Principal Cyber Specialist at the ICO. She warned that behavior driven by curiosity or dares can escalate into serious cyber incidents, with potential consequences extending beyond school systems.

Weak Security Controls Amplify Risks

The ICO cited several cases where weak password practices, poor access controls, and limited monitoring created opportunities for misuse. In one secondary school, Year 11 students accessed sensitive data belonging to 1,400 pupils after cracking staff passwords. In another case, a student used a compromised staff login to alter and delete records for more than 9,000 individuals. As investigations continue at Higham Lane School, the UK school cyberattack incident serves as another reminder of the growing importance of education sector cybersecurity, particularly as schools remain heavily reliant on digital platforms for teaching, administration, and communication.

Critical IBM API Connect Vulnerability Enables Authentication Bypass


IBM has released security updates to address a critical IBM API Connect vulnerability that could allow remote attackers to bypass authentication controls and gain unauthorized access to affected applications. The flaw, tracked as CVE-2025-13915, carries a CVSS 3.1 score of 9.8, placing it among the most severe vulnerabilities disclosed in recent months. According to IBM, the IBM API Connect vulnerability impacts multiple versions of the platform and stems from an authentication bypass weakness that could be exploited remotely without any user interaction or prior privileges. Organizations running affected versions are being urged to apply fixes immediately to reduce exposure.

CVE-2025-13915: IBM API Connect Authentication Bypass Explained

The vulnerability has been classified under CWE-305: Authentication Bypass by Primary Weakness, indicating a failure in enforcing authentication checks under certain conditions. IBM said internal testing revealed that the flaw could allow an attacker to circumvent authentication mechanisms entirely. The CVSS vector (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) highlights the seriousness of the issue. The attack can be carried out over the network, requires low attack complexity, and does not depend on user interaction. If exploited, it could result in a complete compromise of confidentiality, integrity, and availability within the affected IBM API Connect environment. IBM warned that a successful attack could grant unauthorized access to API Connect applications, potentially exposing sensitive data and backend services managed through the platform.
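For readers unfamiliar with CVSS notation, the vector string above can be decoded metric by metric. The following is a minimal illustrative sketch (not an official CVSS library) that maps the abbreviations defined in the CVSS v3.1 specification to readable values; the function and dictionary names are our own.

```python
# Minimal sketch: decode a CVSS 3.1 base vector string into readable
# metric/value names. Abbreviations follow the CVSS v3.1 specification.
METRICS = {
    "AV": ("Attack Vector", {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"}),
    "AC": ("Attack Complexity", {"L": "Low", "H": "High"}),
    "PR": ("Privileges Required", {"N": "None", "L": "Low", "H": "High"}),
    "UI": ("User Interaction", {"N": "None", "R": "Required"}),
    "S":  ("Scope", {"U": "Unchanged", "C": "Changed"}),
    "C":  ("Confidentiality", {"N": "None", "L": "Low", "H": "High"}),
    "I":  ("Integrity", {"N": "None", "L": "Low", "H": "High"}),
    "A":  ("Availability", {"N": "None", "L": "Low", "H": "High"}),
}

def decode_cvss31(vector: str) -> dict:
    """Turn 'CVSS:3.1/AV:N/...' into {metric name: value name}."""
    parts = vector.split("/")
    if parts[0] != "CVSS:3.1":
        raise ValueError("expected a CVSS 3.1 vector")
    decoded = {}
    for part in parts[1:]:
        key, _, val = part.partition(":")
        name, values = METRICS[key]
        decoded[name] = values[val]
    return decoded

# The vector published for CVE-2025-13915:
print(decode_cvss31("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))
```

Run against the published vector, the sketch confirms the advisory's reading: network-exploitable, low complexity, no privileges or user interaction required, and High impact across confidentiality, integrity, and availability.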

Affected IBM API Connect Versions

The IBM API Connect vulnerability affects specific versions within the 10.x release series. IBM confirmed that the following product versions are impacted:
  • IBM API Connect V10.0.8.0 through V10.0.8.5
  • IBM API Connect V10.0.11.0
API Connect is widely deployed in enterprise environments to manage APIs, control developer access, and secure integrations between internal and external services. As a result, vulnerabilities in the platform can have cascading effects across connected systems.
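Administrators triaging many deployments may want a quick check of whether a given build falls inside the affected ranges listed above. The sketch below is an illustrative helper under the assumption that version strings are purely numeric dot-separated components; the function names are hypothetical, not part of any IBM tooling.

```python
# Illustrative sketch: flag API Connect versions inside the affected
# ranges from the advisory (10.0.8.0 through 10.0.8.5, and 10.0.11.0).
def parse_version(v: str) -> tuple:
    # Tuple comparison works because every component is numeric.
    return tuple(int(x) for x in v.split("."))

AFFECTED_RANGES = [
    (parse_version("10.0.8.0"), parse_version("10.0.8.5")),
    (parse_version("10.0.11.0"), parse_version("10.0.11.0")),
]

def is_affected(version: str) -> bool:
    v = parse_version(version)
    return any(lo <= v <= hi for lo, hi in AFFECTED_RANGES)

print(is_affected("10.0.8.3"))   # inside the 10.0.8.x affected range
print(is_affected("10.0.9.0"))   # outside both ranges
```

A check like this only tells you whether patching is required; it is no substitute for applying the iFixes described below.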

IBM Releases Fixes for IBM API Connect Vulnerability

To remediate CVE-2025-13915, IBM has issued interim fixes (iFixes) for all affected versions and strongly recommends that customers upgrade without delay. For the 10.0.8.x branch, fixes have been released for each affected sub-version, including 10.0.8.1, 10.0.8.2 (iFix1 and iFix2), 10.0.8.3, 10.0.8.4, and 10.0.8.5. IBM has also provided an interim fix for IBM API Connect V10.0.11.0. IBM emphasized that upgrading to the remediated versions is the most effective way to eliminate the authentication bypass risk associated with this vulnerability.

Workarounds and Mitigations for Unpatched Systems

For organizations unable to apply the fixes immediately, IBM has outlined a temporary mitigation to reduce risk. Administrators are advised to disable self-service sign-up on the Developer Portal, if that feature is enabled. While this measure does not fully address the IBM API Connect authentication bypass vulnerability, IBM said it can help minimize exposure until patching is completed. The company cautioned that workarounds should only be used as a short-term solution.

Why the IBM API Connect Vulnerability Matters

Authentication bypass vulnerabilities are particularly dangerous because they undermine one of the most fundamental security controls in enterprise applications. In API-driven environments, such flaws can provide attackers with a direct path to sensitive services, data stores, and internal systems. The vulnerability was published in the National Vulnerability Database (NVD) on December 26, 2025, and last updated on December 31, 2025, with IBM listed as the CNA and source. Given the critical severity rating, security teams are expected to prioritize remediation and review API access logs for any signs of unauthorized activity. Organizations running affected versions of IBM API Connect are urged to assess their deployments immediately and apply the recommended fixes to prevent potential exploitation.

European Space Agency Confirms Cybersecurity Breach on External Servers


The European Space Agency (ESA) has confirmed a cybersecurity breach involving servers located outside its corporate network. The confirmation follows a threat actor's claim to have compromised ESA systems and stolen a large volume of internal data, though ESA maintains that only unclassified information was affected. In an official statement shared on social media, the agency said it is aware of the cybersecurity issue and has already launched a forensic security investigation, which remains ongoing. According to ESA, preliminary findings indicate that only a very small number of external servers were impacted. “These servers support unclassified collaborative engineering activities within the scientific community,” ESA stated, emphasizing that the affected infrastructure does not belong to its internal corporate network. The agency added that containment measures have been implemented to secure potentially affected devices and that all relevant stakeholders have been informed. ESA said it will provide further updates as additional details become available.

Threat Actor Claims Data Theft

The confirmation follows claims posted on BreachForums and DarkForums, where a hacker using the alias “888” claimed responsibility for the cybersecurity breach. According to the posts, the attack occurred on December 18, 2025, and resulted in the full exfiltration of internal ESA development assets. The threat actor claims to have stolen over 200 GB of data, including private Bitbucket repositories, source code, CI/CD pipelines, API tokens, access tokens, configuration files, Terraform files, SQL files, confidential documents, and hardcoded credentials. “I’ve been connecting to some of their services for about a week now and have stolen over 200GB of data, including dumping all their private Bitbucket repositories,” the actor wrote in one forum post. The stolen data is reportedly being offered as a one-time sale, with payment requested exclusively in Monero (XMR), a cryptocurrency commonly associated with underground cybercrime marketplaces. ESA has not verified the authenticity or scope of the claims, and has not disclosed which specific external servers were compromised or whether any credentials or development assets referenced by the threat actor were confirmed to be exposed. Founded 50 years ago and headquartered in Paris, the European Space Agency is an intergovernmental organization that coordinates space activities across 23 member states. Given ESA’s role in space exploration, satellite systems, and scientific research, cybersecurity incidents involving the agency carry heightened strategic and reputational significance.

Previous European Space Agency Cybersecurity Incidents 

This is not the first cybersecurity breach involving ESA in recent years. In December 2024, the agency’s official web shop was compromised after attackers injected malicious JavaScript code designed to steal customer information and payment card data during checkout. That incident raised concerns around third-party systems and external-facing infrastructure, an issue that appears relevant again in the current breach involving non-corporate servers.

What Happens Next

While ESA insists the compromised systems hosted only unclassified data, the ongoing forensic investigation will be critical in determining the true scope and impact of the breach. As threat actors continue to publish claims on hacking forums, the incident highlights the growing cybersecurity risks facing large scientific and governmental organizations that rely heavily on collaborative and distributed digital environments. ESA has said further updates will be shared once more information becomes available.

CNIL Fines NEXPUBLICA FRANCE €1.7 Million for GDPR Security Failures


France’s data protection authority, the CNIL, has imposed a €1.7 million GDPR fine on software company NEXPUBLICA FRANCE for failing to implement adequate cybersecurity measures. The penalty was announced on 22 December 2025 following an investigation into a data breach linked to the company’s PCRM software, widely used in the social services sector. The regulator said the GDPR fine reflects serious shortcomings in how the company protected sensitive personal data, despite being aware of long-standing security weaknesses before the breach occurred.

Data Breach Exposed Third-Party Documents

The case dates back to November 2022, when users of a Nexpublica online portal reported that they could access documents belonging to other individuals. These documents included personal files that should have been strictly restricted, raising immediate concerns about data security and access controls. Customers of NEXPUBLICA notified the CNIL after discovering that users could view third-party information through the portal. Given the nature of the data involved, the incident posed a high risk to individuals’ privacy and rights, prompting a formal investigation by the regulator.

PCRM Software Used in Sensitive Social Services

NEXPUBLICA FRANCE, formerly known as INETUM SOFTWARE FRANCE, specializes in designing IT systems and software. One of its core products, PCRM, is a user relationship management tool used in social action services. It is notably deployed by Departmental Houses for the Disabled (MDPH) in several French departments. Because PCRM processes highly sensitive personal data, including information that can reveal a person’s disability, the CNIL stressed that a high level of security was required. The GDPR fine reflects the sensitivity of the data exposed and the potential harm caused to affected individuals.

CNIL Finds Serious Security Failures

Following its investigation, the CNIL concluded that the technical and organisational measures implemented to secure PCRM were insufficient. The regulator identified a general weakness in Nexpublica’s information system, along with structural vulnerabilities that had been allowed to persist over time. According to the CNIL, many of these vulnerabilities stemmed from a lack of knowledge of basic cybersecurity principles and current best practices. Several security flaws had already been identified in internal and external audit reports prior to the breach. Despite this, the company failed to correct the issues until after the data breaches were reported. This delay played a key role in the decision to impose the GDPR fine.

Violation of Article 32 of the GDPR

The CNIL ruled that Nexpublica violated Article 32 of the GDPR, which requires organisations to implement security measures appropriate to the level of risk. This includes considering the state of the art, implementation costs, and the risks posed to individuals’ rights and freedoms. The restricted committee, the CNIL body responsible for sanctions, found that Nexpublica did not meet these requirements. The situation was considered more serious because the company operates as an IT systems and software specialist and should have been fully aware of its security obligations.

Why the GDPR Fine Was €1.7 Million

In setting the amount of the GDPR fine, the CNIL considered several factors. These included Nexpublica’s financial capacity, the number of people potentially affected, and the sensitive nature of the data processed through PCRM. The regulator also took into account that the security issues were known internally before the breach and were only addressed afterward. While Nexpublica has since implemented corrective measures, the CNIL said this did not outweigh the severity of the earlier failings. As the necessary fixes have now been applied, the CNIL did not issue a separate compliance order. However, the GDPR fine serves as a clear warning to software providers handling sensitive public-sector data: known security weaknesses must be addressed before, not after, a breach occurs.

Former Georgian Security Chief Grigol Liluashvili Arrested on Multiple Bribery Charges


Georgia’s former Head of the State Security Service, Grigol Liluashvili, has been arrested following an investigation into alleged corruption, bribery, and abuse of power. The Grigol Liluashvili arrest was carried out as part of joint operational and investigative actions by the Prosecutor General’s Office of Georgia and the State Security Service. Liluashvili, who led the country’s security agency from 2019 until April 2025, is accused of accepting bribes in several criminal cases. Prosecutors say the alleged crimes span multiple years and involve both financial corruption and the protection of illegal activities.

Alleged Bribery Linked to Energy and Gas Projects

According to the investigation, the first bribery episode dates back to October 2022. Prosecutors allege that Liluashvili received $1 million from Turkish investor Çağatay Ülker. The payment was reportedly transferred through Romeo Mikautadze, who at the time served as First Deputy Minister of Economy and Sustainable Development. In return, Liluashvili is accused of lobbying for the signing of a memorandum of understanding related to the construction of wind power plants. The second episode occurred in February 2022, again involving Mikautadze as an intermediary. Prosecutors claim that Liluashvili demanded and extorted 1.5 million GEL from Giorgi Khazhalia, founder of the company Express Service 2008, in exchange for assistance in gasification tenders.

Protection of Fraudulent Call Centers

A major part of the case focuses on the period between 2021 and 2023, when Georgia was officially battling illegal scam call centers. Despite this effort, prosecutors say dozens of fraudulent call centers continued to operate in the country. Based on witness testimony, investigators claim that most of these call centers were controlled by individuals who used their profits to finance opposition-aligned media outlets. Prosecutors further allege that a smaller group of call centers was protected by Liluashvili, who carried out this activity through his cousin, Sandro Liluashvili. Through this scheme, Liluashvili is accused of receiving approximately $1.365 million in bribes. Sandro Liluashvili has already been arrested on charges of fraud and money laundering. Authorities are also investigating whether Liluashvili and his accomplices deliberately concealed the existence of these call centers. Prosecutors claim that, in exchange, certain opposition media outlets allegedly refrained from reporting on scam operations under his protection, despite having information about them.

Kindergarten Procurement Scheme

The fourth episode involves alleged corruption within the Tbilisi City Hall Kindergarten Management Agency. Prosecutors say Liluashvili used his position to protect his friend, Kakha Gvantseladze, the agency’s former director, who is accused of receiving large-scale kickbacks from businesses involved in kindergarten procurement contracts. According to investigators, several agency employees involved in financial accounting, calculations, and monitoring were also part of the scheme. Criminal charges have been brought against all of them. Prosecutors say hundreds of investigative actions have been carried out, including witness interrogations, video and audio recordings, and the seizure of other evidence supporting the charges. Liluashvili has been charged under Article 338 of the Criminal Code of Georgia, which covers taking bribes in a particularly large amount by a group acting by prior agreement. The charge carries a sentence of 11 to 15 years in prison. Prosecutors plan to request pretrial detention as a preventive measure.

Ongoing Investigation Against Grigol Liluashvili

The investigation remains ongoing, with authorities working to identify additional crimes and other individuals involved. Georgian law enforcement officials say the case is part of a broader effort to combat corruption and reduce it “to a historical minimum.” The Grigol Liluashvili arrest follows earlier reporting on scam call centers in Tbilisi, including the Scam Empire investigation, which revealed a large call center operating near the State Security Service headquarters and defrauding thousands of victims worldwide. While assets in that case were frozen and arrests made, prosecutors have not yet specified which call centers Liluashvili is accused of protecting. This is not the first time Grigol Liluashvili has faced such allegations. In 2022, he denied similar claims and filed defamation lawsuits against several opposition television channels.

The End of Excuses: 10 Cybersecurity Investments Every CISO Must Make by 2026


Coupang’s CEO resigned. Bed Bath & Beyond’s CTO stepped down. Two very different companies, two very similar stories: a massive breach, millions of exposed records, and executives suddenly facing the consequences. Park Dae-jun of Coupang called it a resignation, but everyone knew it was forced. Rafeh Masood’s departure at Bed Bath & Beyond came just days after a breach, leaving questions hanging in the air. These are not isolated incidents; they are a warning. For years, CISOs operated with a cushion. A breach? Brush it off. A delayed response? Justify it. A failing tool? Swap it out. That era is over. By 2026, cybersecurity isn’t just about systems and alerts. It’s about governance, accountability, and real-world consequences. AI is moving faster than humans can react. Ransomware is clever, adaptive, and relentless. Regulators want proof, not excuses. Boards will no longer settle for “we’re still maturing.” The hard truth: most security programs as they exist today will not survive 2026. CISOs are being forced to make hard choices: fewer tools, stricter controls, and investments that actually protect the business. Speed helps, but clarity and accountability matter far more. Here are 10 technologies CISOs will invest in during 2026, not because they are trendy, but because without them, security leadership simply won’t exist.

1. AI-Driven Security Operations (AI-SOC)

Ransomware is no longer noisy, careless, or opportunistic. It is calculated. As Dr Sheeba Armoogum, Associate Professor in Cybersecurity at the University of Mauritius, explains to The Cyber Express, “By 2026, CISOs will prioritize investment in AI-driven security operations and identity-first security platforms to counter the rapid rise of AI-based ransomware and automated extortion attacks. Ransomware is no longer opportunistic; it is adaptive, identity-aware, and increasingly capable of evading traditional detection using AI techniques.” This is the line CISOs must internalise: traditional SOC models are structurally obsolete.

Threats now move faster than human workflows can respond. Static rules, manual triage, and analyst-centric escalation chains break down when adversaries use AI to adapt in real time. As a result, CISOs are increasingly backing AI-native SOC platforms that operate through autonomous agents rather than dashboards and alerts.

Cyble Blaze AI exemplifies this shift. Built as an AI-native, multi-agent cybersecurity platform, Blaze AI enables continuous threat hunting, real-time correlation, and autonomous response, allowing security teams to identify and neutralize threats in seconds rather than hours. In practice, this moves security operations from reactive monitoring to machine-speed defense.

AI-SOC is not about replacing analysts; it is about re-architecting operations so humans supervise outcomes instead of chasing alerts. Behavioural analysis, automated decisioning, and immediate containment are no longer “advanced capabilities”—they are foundational.

Any CISO still relying on static rules and manual triage in 2026 will be explaining failure, not preventing it.

2. Identity-First Security Platforms

Perimeter security died quietly. Identity replaced it loudly. Dr Armoogum makes the reason explicit: “At the same time, identity security controls such as continuous authentication and privileged access governance are critical, as most ransomware campaigns now begin with credential compromise rather than malware exploits.” This is not a technical nuance; it is a strategic failure point. Most breaches do not break in; they log in. In 2026, CISOs will invest in identity-first security because everything else depends on it. Human users, service accounts, APIs, workloads, and AI agents all require governance. If identity is weak, cloud controls, endpoint tools, and network defenses are cosmetic. Identity is now the security control plane.

3. Privacy and Data Governance Platforms

Privacy failures no longer stay in legal departments—they land squarely on security leadership. As Nikhil Jhanji, Principal Product Manager at Privy by IDfy, told The Cyber Express, “By 2026, CISOs will invest far more in privacy and data governance technologies that make compliance operational rather than aspirational.” This is the pivot point. Policies and spreadsheets cannot scale to modern data flows. Regulators expect continuous accountability, consent traceability, and defensible evidence. What matters, as Jhanji notes, is not just prevention: “What matters now is not just preventing incidents but being able to demonstrate responsible data handling at scale to regulators, boards, and customers.” In 2026, privacy becomes a living control layer, not a compliance afterthought.

4. Continuous Exposure Management (CEM)

“Patch faster” has failed as a strategy. Swati Bhate, Chief Information Security Officer and Chief Risk Officer at i-Source Infosystems Pvt. Ltd., delivers the most uncompromising view of what lies ahead in her LinkedIn post: “By 2026, the margin for error has hit zero.” She makes the mandate clear: “Pre-emptive Blocking > Reactive Patching: Machine-speed attacks demand Continuous Exposure Management (CEM) to block non-compliant deployments automatically.” This is not about improving hygiene; it is about stopping unsafe systems from existing at all. In 2026, environments that fail security baselines should never reach production. Security becomes a gate, not a clean-up crew.

5. Confidential Computing and Silicon-Level Isolation

Cloud security tools have a blind spot, and attackers know it. Bhate warns, “Attackers now target hypervisors to bypass guest OS defenses. Our baseline mandates silicon-level isolation and Confidential Computing.” This is a direct challenge to CISOs who believe visibility equals control. If memory, workloads, and virtualization layers are exposed, traditional controls are irrelevant. Confidential Computing moves trust down the stack, to hardware. In 2026, CISOs will invest here not for innovation, but because it closes an attack surface software cannot defend alone.

6. AI Governance and AI Risk Controls

Shadow AI is already out of control. Bhate is again unequivocal: “Eliminate AI Exhaust: Shadow AI pilots leave unmonitored vector databases. In 2026, data without verified lineage is a liability—not an asset.” AI governance tools will become mandatory, not optional. CISOs will need visibility into model usage, data provenance, and decision pathways to comply with the EU AI Act and NIS2. As Bhate concludes: “The question is no longer how fast your AI can run—it’s whether you’ve built the brakes to keep it from taking the enterprise over a cliff.”

7. Security Platforms That Reduce Tool Sprawl

2025 exposed a hard truth: more tools did not mean more security. As Manish Bakshi, National Sales Head – Professional Services at Ingram Micro, observed, “Fewer vendors worked better than too many tools.” CISOs learned that speed without clarity creates fragility. In 2026, they will choose platforms and partners that understand business context and remain accountable after go-live. Enterprise security buyers are no longer impressed by roadmaps. They want predictable outcomes.

8. Cloud-Native Security Platforms

Cloud misconfigurations are no longer accidents; they are liabilities. CISOs will invest in cloud-native security platforms that continuously assess posture, identity exposure, and workload risk. These tools align with a growing sentiment from practitioners themselves. As one security practitioner noted on Reddit, “CISOs need people who understand identity, cloud, and how systems connect, not tool jockeys.” Security in 2026 demands system thinking, not isolated controls.

9. Detection Engineering and SIEM Evolution

Alert volume is meaningless. Understanding is not. As one security practitioner noted in a Reddit discussion on modern SOC skills, “Shallow alert clicker skills are fading.” CISOs will invest in platforms and people who can map attack paths, tune detections, automate response, and explain impact in plain English. In 2026, detection engineering becomes a craft—not a checkbox.

10. Risk Quantification and Board-Ready Security Metrics

Finally, CISOs will invest in tools that translate cyber risk into business reality.

By 2026, security leaders will no longer be judged on how many threats they block, but on how clearly they can explain risk, impact, and trade-offs to the business. Boards are done with abstract heat maps and technical severity scores. They want to know what a risk costs, what reducing it achieves, and what happens if it is ignored.

This is where risk quantification platforms come into play. By framing cyber exposure in business terms, they allow CISOs to prioritize controls, justify investment decisions, and have credible, outcome-driven conversations at the executive level. Platforms such as Cyble Saratoga, which focus on moving organizations beyond subjective assessments toward measurable risk understanding, reflect this shift in how security decisions are made.

In 2026, outcomes will matter more than effort. CISOs who cannot quantify risk and articulate trade-offs will lose influence, and eventually relevance.

2026 Will Separate Cybersecurity Leaders From Security Operators 

None of what’s coming in 2026 is surprising. The warning signs have been there for a while: breaches getting bigger, attacks getting smarter, regulators getting stricter, and boards getting far more involved than they used to be. What is changing is tolerance. Tolerance for loose controls. Tolerance for fragmented tooling. Tolerance for security programs that can’t clearly explain what they’re protecting, why it matters, and what happens when things go wrong. The technologies CISOs are investing in reflect that shift. Less experimentation. More control. Fewer tools, clearer accountability, and systems designed to prevent mistakes rather than explain them after the fact. By 2026, cybersecurity won’t be about reacting faster. It will be about making fewer things possible in the first place, and making sure the people responsible can stand behind those decisions when it matters.

Spotify Disables Accounts After Open-Source Group Scrapes 86 Million Songs


Spotify has disabled multiple user accounts after an open-source group claimed it scraped millions of songs and related data from the music streaming platform. The move comes after Anna’s Archive published files over the weekend containing metadata and audio for 86 million tracks, triggering concerns around Spotify scraping and copyright enforcement. In a statement shared with The Cyber Express, the company confirmed that it identified and shut down user accounts involved in unlawful scraping activities. The company said it has also introduced new safeguards to prevent similar incidents in the future. “Spotify has identified and disabled the nefarious user accounts that engaged in unlawful scraping,” a Spotify spokesperson said. “We’ve implemented new safeguards for these types of anti-copyright attacks and are actively monitoring for suspicious behavior. Since day one, we have stood with the artist community against piracy.”

Spotify Says Spotify Scraping Was Not a Hack

Spotify clarified that the Spotify scraping incident did not involve a breach of its internal systems. According to the company, the people behind the dataset violated Spotify’s terms of service over several months by using stream-ripping techniques through third-party user accounts. “They did this through user accounts set up by a third party and not by accessing Spotify’s business systems,” the spokesperson said, adding that Anna’s Archive did not contact Spotify before releasing the files. The company stressed that this Spotify scraping case should not be classified as a hack, but rather as systematic abuse of user access, which falls under unlawful scraping and copyright violation.

Anna’s Archive Claims “Preservation” Motive

Anna’s Archive, which describes itself as the “largest truly open library in human history,” published a blog post explaining its decision to expand beyond books and research papers into music. The group said it discovered a method of Spotify scraping at scale and saw an opportunity to build what it calls a “preservation archive” for music. “Sometimes an opportunity comes along outside of text. This is such a case,” the group wrote, arguing that its goal is to preserve cultural content rather than profit from it. The released dataset includes a music metadata database covering 256 million tracks and a bulk archive of nearly 300 terabytes containing 86 million audio files. According to Anna’s Archive, these tracks account for roughly 99.6% of all listens on Spotify.
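The reported figures are internally consistent. A quick back-of-envelope check (ours, not part of Anna's Archive's post, using decimal units) shows the average file size implied by the bulk archive matches typical compressed audio:

```python
# Sanity check on the reported dataset sizes (figures from the article).
files = 86_000_000                       # audio files in the bulk archive
size_tb = 300                            # reported archive size, terabytes
avg_mb = size_tb * 1_000_000 / files     # TB -> MB in decimal units
print(f"{avg_mb:.1f} MB per track on average")
```

Roughly 3.5 MB per track is what a few minutes of lossy-compressed audio typically occupies, which supports the claim that the archive holds full audio files rather than just metadata.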

Data Spans Nearly Two Decades of Music

The scraped files cover music released on Spotify between 2007 and July 2025. The group also released a smaller dataset featuring the top 10,000 most popular songs on the platform. Using the scraped data, Anna’s Archive highlighted streaming trends, noting that the top three songs on Spotify—Billie Eilish’s “Birds of a Feather,” Lady Gaga’s “Die With a Smile,” and Bad Bunny’s “DtMF”—have more combined streams than tens of millions of lesser-known tracks. While Anna’s Archive framed the release as a cultural archive, copyright holders and technology companies have consistently challenged the group’s activities.

A History of Copyright Violations

Anna’s Archive emerged shortly after the 2022 shutdown of Z-Library, a massive online repository of pirated books. Following Z-Library’s takedown, the group aggregated content from several shadow libraries, including Library Genesis, Sci-Hub, and the Internet Archive. The platform is banned in multiple countries due to repeated copyright violations. As of December, it reportedly hosts more than 61 million books and 95 million academic papers. In November, Google removed nearly 800 million links to Anna’s Archive following takedown requests from publishers.

Spotify Reinforces Anti-Piracy Measures

Spotify said it is actively monitoring for suspicious behavior and working with industry partners to protect creators’ rights. The company reiterated its stance against piracy and emphasized that Spotify scraping undermines both artists and the broader music ecosystem. As streaming platforms continue to grow, incidents like this highlight the ongoing tension between open-access movements and copyright enforcement in the digital music industry.

U.S. Authorities Seize Domain Linked to $28 Million Bank Account Takeover Fraud


The U.S. Department of Justice has announced a major disruption of a bank account takeover fraud operation that led to more than $28 million in unauthorized bank transfers from victims across the United States. Federal authorities seized a web domain and its supporting database that played a central role in helping criminals steal bank login details and drain victim accounts. The seized domain, web3adspanels.org, was used as a backend control panel to store and manage stolen login credentials. According to investigators, the domain supported an organized scheme that targeted Americans through advanced impersonation scams and phishing advertisements designed to look like legitimate bank services.

How the Bank Account Takeover Fraud Worked

Court documents reveal that the criminal group relied heavily on fraudulent search engine advertisements. These phishing advertisements appeared on popular platforms such as Google and Bing and closely mimicked sponsored ads from real financial institutions. When users clicked on these fraudulent search ads, they believed they were visiting their bank’s official website. In reality, they were redirected to fake bank websites controlled by the attackers. Once victims entered their usernames and passwords, malicious software embedded in the fake pages captured those details in real time. The stolen login credentials were then used to access legitimate bank accounts. From there, the criminals initiated unauthorized bank transfers, effectively draining funds before victims realized their accounts had been compromised. Investigators confirmed that the seized domain continued hosting stolen credentials and backend infrastructure as recently as November 2025.

Financial Impact and Victims Identified

So far, the FBI has identified at least 19 confirmed victims across multiple U.S. states. This includes two businesses located in the Northern District of Georgia. The scheme resulted in attempted losses of approximately $28 million, with actual confirmed losses reaching around $14.6 million. The server linked to the seized domain contained thousands of stolen login credentials, suggesting that the total number of affected individuals and organizations could be significantly higher. Authorities believe the web domain seizure has cut off the criminals’ ability to access and exploit this sensitive data.

Rising Threat Highlighted by FBI IC3 Data

Since January 2025, the FBI’s Internet Crime Complaint Center (IC3) has received more than 5,100 complaints related to bank account takeover fraud. Reported losses from these incidents now exceed $262 million nationwide. In response, the FBI has issued public warnings urging individuals and businesses to remain vigilant. Recommended steps include closely monitoring financial accounts, using saved bookmarks instead of search engine links to access banking websites, and staying alert for impersonation scams and phishing attempts.
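To put those IC3 figures in perspective, a quick calculation (ours, using only the numbers reported above) gives the average reported loss per complaint:

```python
# Scale of the reported IC3 figures (numbers from the article).
complaints = 5_100                 # bank account takeover complaints since Jan 2025
losses_usd = 262_000_000           # total reported losses nationwide
avg_loss = losses_usd / complaints
print(f"${avg_loss:,.0f} average reported loss per complaint")
```

At over $51,000 per complaint on average, these incidents sit well above typical consumer phishing losses, consistent with attackers draining entire checking or business accounts rather than making small fraudulent charges.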

International Cooperation and Ongoing Investigation

The investigation is being led by the FBI Atlanta Field Office, with prosecutors from the U.S. Attorney’s Office for the Northern District of Georgia and the Justice Department’s Computer Crime and Intellectual Property Section (CCIPS). International partners played a critical role, including law enforcement agencies from Estonia and Georgia. Estonian authorities preserved and collected key evidence from servers hosting the phishing pages and stolen login credentials. The Department of Justice’s Office of International Affairs also provided substantial assistance, highlighting the importance of cross-border cooperation in tackling cybercrime. Since 2020, CCIPS has secured convictions against more than 180 cybercriminals and obtained court orders returning over $350 million to victims. Officials say the seizure of web3adspanels.org represents another important step in disrupting global cyber fraud networks and protecting victims from future financial harm.
  •  

South Korea’s Shinhan Card Data Breach Affects 192,000 Merchants

The Shinhan Card data breach has exposed the personal information of approximately 192,000 card merchants, the South Korea–based financial services company confirmed on Tuesday. The incident, which involved the unauthorized disclosure of phone numbers and limited personal details, has been reported to the country’s Personal Information Protection Commission (PIPC). According to Shinhan Card, the breach affected self-employed individuals who operate franchised merchant locations and had shared personal details as part of standard merchant agreements. The company said there is currently no evidence that sensitive financial information, such as credit card numbers, bank account details, or national identification numbers, was compromised.

Employee Misconduct Identified as Cause of Shinhan Card Data Breach

In a statement, Shinhan Card clarified that the Shinhan Card data breach was not the result of an external cyberattack. Instead, the company suspects internal misconduct, with an employee at a sales branch allegedly transmitting merchant data to a card recruiter for sales-related purposes. “This was not due to external hacking but an employee’s misconduct,” a Shinhan Card official said, adding that the internal process involved has since been blocked. The company launched an internal investigation immediately after becoming aware of the incident and has taken steps to prevent similar actions in the future.

Scope of Personal Information Leak

The leaked data primarily involved mobile phone numbers, which accounted for roughly 180,000 cases. In about 8,000 instances, phone numbers were leaked alongside names. A smaller subset of records also included additional details such as birthdates and gender. Shinhan Card stated that its investigation has not identified cases where citizen registration numbers, card numbers, account details, or credit information were exposed. At this stage, the company has also said that no confirmed cases of misuse of the leaked information have been reported. The personal information leak affected merchants who signed contracts with Shinhan Card between March 2022 and May 2025, according to findings shared with regulators.

Shinhan Card Data Breach Timeline and Regulatory Notification

The breach came to light last month following a report submitted to the Personal Information Protection Commission, South Korea’s data protection authority. After receiving the notification, the PIPC requested supporting materials from Shinhan Card to assess the scope and cause of the incident. Following its internal review, Shinhan Card formally reported the data breach to the PIPC on December 23, complying with regulatory disclosure requirements, and says it is cooperating with authorities as the regulatory review proceeds.

Company Response and Merchant Support Measures

In response to the Shinhan Card data breach, the company published an apology and detailed guidance on its website and mobile application. It also launched a dedicated page allowing affected merchants to check whether their personal data was compromised. “We will make every effort to protect our customers and prevent similar incidents from recurring,” a Shinhan Card spokesperson said. The company has emphasized that it is strengthening internal controls and reviewing access permissions related to merchant data. Shinhan Card also urged merchants to remain vigilant for potential phishing or unsolicited contact attempts, even though no additional harm linked to the leaked data has been confirmed so far.

Broader Implications for Financial Data Protection

The Shinhan Card incident highlights ongoing challenges around data governance and insider risk within financial institutions, even as companies continue to invest heavily in cybersecurity defenses against external threats. While many breaches globally involve hacking or ransomware, incidents stemming from employee misconduct remain a persistent concern for banks and payment providers. Authorities have not yet announced whether penalties or corrective actions will follow the investigation. For now, Shinhan Card maintains that it is focused on customer protection and restoring trust following the incident.
  •