
The Cyber Express Weekly Roundup: Escalating Breaches, Regulatory Crackdowns, and Global Cybercrime Developments


As February 2026 progresses, this week’s edition of The Cyber Express Weekly Roundup examines a series of cybersecurity incidents and enforcement actions spanning Europe, Africa, Australia, and the United States. The developments include a breach affecting the European Commission’s mobile management infrastructure, a ransomware attack disrupting Senegal’s national identity systems, a landmark financial penalty imposed on an Australian investment firm, and the sentencing of a fugitive linked to a multimillion-dollar cryptocurrency scam. From suspected exploitation of zero-day vulnerabilities to prolonged breach detection failures and cross-border financial crime, these cases highlight the operational, legal, and systemic dimensions of modern cyber risk.


European Commission Mobile Infrastructure Breach Raises Supply Chain Questions 

The European Commission reported a cyberattack on its mobile device management (MDM) system on January 30, potentially exposing staff names and mobile numbers, though no devices were compromised, and the breach was contained within nine hours. Read more... 

Ransomware Disrupts Senegal’s National Identity Systems 

In West Africa, a major cyberattack hit Senegal’s Directorate of File Automation (DAF), halting identity card production and disrupting national ID, passport, and electoral services. While authorities insist no personal data was compromised, the attack has been attributed to a ransomware group, and the full extent of the breach is still under investigation. Read more...

Australian Court Imposes Landmark Cybersecurity Penalty 

In Australia, FIIG Securities was fined AU$2.5 million for failing to maintain adequate cybersecurity protections, leading to a 2023 ransomware breach that exposed 385GB of client data, including IDs, bank details, and tax numbers. The firm must also pay AU$500,000 in legal costs and implement an independent compliance program. Read more... 

Crypto Investment Scam Leader Sentenced in Absentia 

U.S. authorities sentenced Daren Li in absentia to 20 years for a $73 million cryptocurrency scam targeting American victims. Li remains a fugitive after fleeing in December 2025. The Cambodia-based scheme used “pig butchering” tactics to lure victims to fake crypto platforms, laundering nearly $60 million through U.S. shell companies. Eight co-conspirators have pleaded guilty. The case was led by the U.S. Secret Service. Read more... 

India Brings AI-Generated Content Under Formal Regulation 

India has regulated AI-generated content under notification G.S.R. 120(E), effective February 20, 2026, defining “synthetically generated information” (SGI) as AI-created content that appears real, including deepfakes and voiceovers. Platforms must label AI content, embed metadata, remove unlawful content quickly, and verify user declarations. Read More... 

Weekly Takeaway 

Taken together, this weekly roundup highlights the expanding attack surface created by digital transformation, the persistence of ransomware threats to national infrastructure, and the intensifying regulatory scrutiny facing financial institutions.  From zero-day exploitation and supply chain risks to enforcement actions and transnational crypto fraud, organizations are confronting an environment where operational resilience, compliance, and proactive monitoring are no longer optional; they are foundational to trust and continuity in the digital economy. 
  •  

60,000 Records Exposed in Cyberattack on Uzbekistan Government


An alleged Uzbekistan cyberattack that triggered widespread concern online has exposed around 60,000 unique data records, not the personal data of 15 million citizens, as previously claimed on social media. The clarification came from Uzbekistan’s Digital Technologies Minister Sherzod Shermatov during a press conference on 12 February, addressing mounting speculation surrounding the scale of the breach. From 27 to 30 January, information systems of three government agencies in Uzbekistan were targeted by cyberattacks. The names of the agencies have not been disclosed. However, officials were firm in rejecting viral claims suggesting a large-scale national data leak. “There is no information that the personal data of 15 million citizens of Uzbekistan is being sold online. 60,000 pieces of data — that could be five or six pieces of data per person. We are not talking about 60,000 citizens,” the minister noted, adding that law enforcement agencies were examining the types of data involved. For global readers, the distinction matters. In cybersecurity reporting, raw data units are often confused with the number of affected individuals. A single record can include multiple data points such as a name, date of birth, address, or phone number. According to Shermatov, the 60,000 figure refers to individual data units, not the number of citizens impacted.
Also read: Sanctioned Spyware Vendor Used iOS Zero-Day Exploit Chain Against Egyptian Targets

Uzbekistan Cyberattack: What Actually Happened

The Uzbekistan cyberattack targeted three government information systems over a four-day period in late January. While the breach did result in unauthorized access to certain systems, the ministry emphasized that it was not a mass compromise of citizen accounts. “Of course, there was an attack. The hackers were skilled and sophisticated. They made attempts and succeeded in gaining access to a specific system. In a sense, this is even useful — an incident like this helps to further examine other systems and increase vigilance. Some data, in a certain amount, could indeed have been obtained from some systems,” Shermatov said. His remarks reveal a balanced acknowledgment: the attack was real, the threat actors were capable, and some data exposure did occur. At the same time, the scale appears significantly smaller than initially portrayed online. The ministry also stressed that a “personal data leak” does not mean citizens’ accounts were hacked or that full digital identities were compromised. Instead, limited personal details may have been accessed.

Rising Cyber Threats in Uzbekistan

The Uzbekistan cyberattack comes amid a sharp increase in attempted digital intrusions across the country. According to the ministry, more than 7 million cyber threats were prevented in 2024 through Uzbekistan’s cybersecurity infrastructure. In 2025, that number reportedly exceeded 107 million. Looking ahead, projections suggest that over 200 million cyberattacks could target Uzbekistan in 2026. These figures highlight a broader global trend: as countries accelerate digital transformation, they inevitably expand their attack surface. Emerging digital economies, in particular, often face intense pressure from transnational cybercriminal groups seeking to exploit gaps in infrastructure and rapid system expansion. Uzbekistan’s growing digital ecosystem — from e-government services to financial platforms — is becoming a more attractive target for global threat actors. The recent Uzbekistan cyberattack illustrates that no country, regardless of size, is immune.

Strengthening Security After the Breach

Following the breach, authorities blocked further unauthorized access attempts and reinforced technical safeguards. Additional protections were implemented within the Unified Identification System (OneID), Uzbekistan’s centralized digital identity platform. Under the updated measures, users must now personally authorize access to their data by banks, telecom operators, and other organizations. This shifts more control, and responsibility, directly to citizens. The ministry emphasized that even with partial personal data, fraudsters cannot fully act on behalf of a citizen without direct involvement. However, officials warned that attackers may attempt secondary scams using exposed details. For example, a fraudster could call a citizen, pose as a bank employee, cite known personal details, and claim that someone is applying for a loan in their name — requesting an SMS code to “cancel” the transaction. Such social engineering tactics remain one of the most effective tools for cybercriminals globally.

A Reality Check on Digital Risk

The Uzbekistan cyberattack highlights two critical lessons. First, misinformation can amplify panic faster than technical facts. Second, even limited data exposure carries real risk if exploited creatively. Shermatov’s comment that the incident can help “increase vigilance” reflects a pragmatic view shared by many cybersecurity professionals worldwide: breaches, while undesirable, often drive improvements in resilience. For Uzbekistan, the challenge now is sustaining public trust while hardening systems against growing global cyber threats. For the rest of the world, the incident serves as a reminder that cybersecurity transparency — clear communication about scope and impact — is just as important as technical defense.
  •  

Adversaries Exploiting Proprietary AI Capabilities, API Traffic to Scale Cyberattacks


In the fourth quarter of 2025, the Google Threat Intelligence Group (GTIG) reported a significant uptick in the misuse of artificial intelligence by threat actors. According to GTIG’s AI threat tracker, what initially appeared as experimental probing has evolved into systematic, repeatable exploitation of large language models (LLMs) to enhance reconnaissance, phishing, malware development, and post-compromise activity.  A notable trend identified by GTIG is the rise of model extraction attempts, or “distillation attacks.” In these operations, threat actors systematically query production models to replicate proprietary AI capabilities without directly compromising internal networks. Using legitimate API access, attackers can gather outputs sufficient to train secondary “student” models. While knowledge distillation is a valid machine learning method, unauthorized replication constitutes intellectual property theft and a direct threat to developers of proprietary AI.  Throughout 2025, GTIG observed sustained campaigns involving more than 100,000 prompts aimed at uncovering internal reasoning and chain-of-thought logic. Attackers attempted to coerce Gemini into revealing hidden decision-making processes. GTIG’s monitoring systems detected these patterns and mitigated exposure, protecting the internal logic of proprietary AI.  

AI Threat Tracker, a Force Multiplier 

Beyond intellectual property theft, GTIG’s AI threat tracker reports that state-backed and sophisticated actors are leveraging LLMs to accelerate reconnaissance and social engineering. Threat actors use AI to synthesize open-source intelligence (OSINT), profile high-value individuals, map organizational hierarchies, and identify decision-makers, dramatically reducing the manual effort required for research.  For instance, UNC6418 employed Gemini to gather account credentials and email addresses prior to launching phishing campaigns targeting Ukrainian and defense-sector entities. Temp.HEX, a China-linked actor, used AI to collect intelligence on individuals in Pakistan and analyze separatist groups. While immediate operational targeting was not always observed, Google mitigated these risks by disabling associated assets.  Phishing tactics have similarly evolved. Generative AI enables actors to produce highly polished, culturally accurate messaging. APT42, linked to Iran, used Gemini to enumerate official email addresses, research business connections, and create personas tailored to targets, while translation capabilities allowed multilingual operations. North Korea’s UNC2970 leveraged AI to profile cybersecurity and defense professionals, refining phishing narratives with salary and role information. All identified assets were disabled, preventing further compromise. 

AI-Enhanced Malware Development 

GTIG also documented AI-assisted malware development. APT31 prompted Gemini with expert cybersecurity personas to automate vulnerability analysis, including remote code execution, firewall bypass, and SQL injection testing. UNC795 engaged Gemini regularly to troubleshoot code and explore AI-integrated auditing, suggesting early experimentation with agentic AI, systems capable of autonomous multi-step reasoning. While fully autonomous AI attacks have not yet been observed, GTIG anticipates growing underground interest in such capabilities.  Generative AI is also supporting information operations. Threat actors from China, Iran, Russia, and Saudi Arabia used Gemini to draft political content, generate propaganda, and localize messaging. According to GTIG’s AI threat tracker, these efforts improved efficiency and scale but did not produce transformative influence capabilities. AI is enhancing productivity rather than creating fundamentally new tactics in the information operations space. 

AI-Powered Malware Frameworks: HONESTCUE and COINBAIT 

In September 2025, GTIG identified HONESTCUE, a malware framework outsourcing code generation via Gemini’s API. HONESTCUE queries the AI for C# code to perform “stage two” functionality, which is compiled and executed in memory without writing artifacts to disk, complicating detection.   Similarly, COINBAIT, a phishing kit detected in November 2025, leveraged AI-generated code via Lovable AI to impersonate a cryptocurrency exchange. COINBAIT incorporated complex React single-page applications, verbose developer logs, and cloud-based hosting to evade traditional network defenses.  GTIG also reported that underground markets are exploiting AI services and API keys to scale attacks. One example, “Xanthorox,” marketed itself as a self-contained AI for autonomous malware generation but relied on commercial AI APIs, including Gemini.  
  •  

Disney Agrees to Record $2.75M Settlement for Opt-Out Failures


Animation giant Walt Disney has agreed to pay a $2.75 million fine and overhaul its privacy practices to settle allegations that it violated the California Consumer Privacy Act (CCPA). The Disney CCPA settlement marks the largest settlement in the Act's enforcement history. For a global audience watching the evolution of data privacy enforcement, the Disney CCPA settlement is more than a state-level regulatory action: it signals a tougher stance on how companies handle consumer opt-out rights in an increasingly connected digital ecosystem. Announced by California Attorney General Rob Bonta, the settlement resolves claims that Disney failed to fully honor consumers’ requests to opt out of the sale or sharing of their personal data across all devices and streaming services linked to their accounts. Under the agreement, which remains subject to court approval, Disney will pay $2.75 million in civil penalties and implement a comprehensive privacy program designed to ensure compliance with the CCPA. The company does not admit wrongdoing or accept liability. A Disney spokesperson said that as an “industry leader in privacy protection, Disney continues to invest significant resources to set the standard for responsible and transparent data practices across our streaming services.”
Also read: Disney to Pay $10M After FTC Finds It Enabled Children’s Data Collection Via YouTube Videos

Implications of the Disney CCPA Settlement

While the enforcement action stems from California law, the Disney CCPA settlement has international implications. Many global companies operate under similar opt-out and consent frameworks in Europe, Asia-Pacific, and beyond. Regulators worldwide are scrutinizing whether companies truly make it easy for users to control their data — or merely create the appearance of compliance. The investigation, launched after a January 2024 investigative sweep of streaming services, found that Disney’s opt-out mechanisms contained what the California Department of Justice described as “key gaps.” These gaps allegedly allowed the company to continue selling or sharing consumer data even after users had attempted to opt out. Attorney General Bonta made the state’s position clear: “Consumers shouldn’t have to go to infinity and beyond to assert their privacy rights. Today, my office secured the largest settlement to date under the CCPA over Disney's failure to stop selling and sharing the data of consumers that explicitly asked it to. California’s nation-leading privacy law is clear: A consumer’s opt-out right applies wherever and however a business sells data — businesses can’t force people to go device-by-device or service-by-service. In California, asking a business to stop selling your data should not be complicated or cumbersome. My office is committed to the continued enforcement of this critical privacy law.”

Investigation Findings

According to the Attorney General’s office, Disney offered multiple methods for consumers to opt out — including website toggles, webforms, and the Global Privacy Control (GPC). However, each method allegedly failed to stop data sharing comprehensively. For example, when users activated opt-out toggles within Disney websites or apps, the request was reportedly applied only to the specific streaming service being used — and often only to the specific device. This meant that data sharing could continue on other devices or services connected to the same account. Similarly, consumers who submitted opt-out requests through Disney’s webform were unable to stop all personal data sharing. The investigation alleged that Disney continued to share data with “specific third-party ad-tech companies whose code Disney embedded in its websites and apps.” The Global Privacy Control — designed as a universal “stop selling or sharing my data” signal — was also reportedly limited to the specific device used, even if the consumer was logged into their Disney account. Critically, in many connected TV streaming apps, Disney allegedly did not provide an in-app opt-out mechanism and instead redirected users to the webform. Regulators argued this amounted to “effectively leaving consumers with no way to stop Disney’s selling and sharing from these apps.”
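For readers unfamiliar with the mechanics, the Global Privacy Control is transmitted by the browser as a Sec-GPC: 1 request header (and exposed to scripts as navigator.globalPrivacyControl). The sketch below shows one way a backend might read that signal and record an account-wide opt-out rather than a device-scoped one; it is a minimal illustration assuming a Node/Express stack, and the account header and in-memory store are hypothetical stand-ins for real session and consent infrastructure.

// gpc-optout.js — minimal sketch of honoring the Global Privacy Control signal account-wide
const express = require("express");
const app = express();

// Hypothetical consent store: account ID -> opted out of sale/sharing (illustrative only)
const optOutByAccount = new Map();

app.use((req, res, next) => {
  // Browsers with GPC enabled send "Sec-GPC: 1" on every request.
  const gpcEnabled = req.get("Sec-GPC") === "1";
  const accountId = req.get("X-Account-Id"); // hypothetical header identifying the logged-in account
  if (gpcEnabled && accountId) {
    // Record the opt-out for the whole account, not just the current device or service.
    optOutByAccount.set(accountId, true);
  }
  next();
});

// Downstream code would consult this status before loading ad-tech tags or sharing data.
app.get("/api/privacy-status", (req, res) => {
  const accountId = req.get("X-Account-Id");
  res.json({ optedOutOfSaleOrSharing: optOutByAccount.get(accountId) === true });
});

app.listen(3000);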

Enforcement Momentum Under the CCPA

The Disney CCPA settlement is the seventh enforcement action under the California Consumer Privacy Act and the second action against Disney in five months. In September, the Federal Trade Commission fined Disney $10 million over child privacy violations. Attorney General Bonta emphasized that “Effective opt-out is one of the bare necessities of complying with CCPA.” The law grants California consumers the right to know how their personal data is collected and shared — and the right to request that businesses stop selling or sharing that information. Under the settlement terms, Disney must report to California within 60 days of court approval on the steps it has taken to comply. It must also submit progress reports every 60 days until all services meet CCPA requirements.

A Turning Point for Streaming Platforms?

The broader message from the Disney CCPA settlement is unmistakable: privacy controls must work across platforms, devices, and ecosystems — not in silos. Streaming platforms operate globally, with accounts spanning smartphones, smart TVs, gaming consoles, and web browsers. Regulators are increasingly unwilling to accept fragmented compliance models where privacy settings apply only to one device or one service at a time. In that sense, the Disney CCPA settlement may be remembered less for the $2.75 million fine and more for the standard it reinforces: when consumers say “stop,” companies must ensure their systems actually listen.
  •  

8,000+ ChatGPT API Keys Left Publicly Accessible


The rapid integration of artificial intelligence into mainstream software development has introduced a new category of security risk, one that many organizations are still unprepared to manage. According to research conducted by Cyble Research and Intelligence Labs (CRIL), thousands of exposed ChatGPT API keys are currently accessible across public infrastructure, dramatically lowering the barrier for abuse.  CRIL identified more than 5,000 publicly accessible GitHub repositories containing hardcoded OpenAI credentials. In parallel, approximately 3,000 live production websites were found to expose active API keys directly in client-side JavaScript and other front-end assets.   Together, these findings reveal a widespread pattern of credential mismanagement affecting both development and production environments. 

GitHub as a Discovery Engine for Exposed ChatGPT API Keys 

Public GitHub repositories have become one of the most reliable sources for exposed AI credentials. During development cycles, especially in fast-moving environments, developers often embed ChatGPT API keys directly into source code, configuration files, or .env files. While the intent may be to rotate or remove them later, these keys frequently persist in commit histories, forks, archived projects, and cloned repositories.  CRIL’s analysis shows that these exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. Many repositories were actively maintained or recently updated, increasing the likelihood that the exposed ChatGPT API keys remained valid at the time of discovery.  Once committed, secrets are quickly indexed by automated scanners that monitor GitHub repositories in near real time. This drastically reduces the window between exposure and exploitation, often to mere hours or minutes. 
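Defenders can borrow the same technique the scanners use. The snippet below is a minimal sketch of prefix-based secret scanning over a local checkout, written in Node to match the JavaScript-heavy exposures described above; the file extensions, key pattern, and directory exclusions are illustrative assumptions rather than a description of any specific scanner.

// scan-keys.js — illustrative sketch: flag OpenAI-style key prefixes before code is pushed
const fs = require("fs");
const path = require("path");

// Matches the sk-proj- / sk-svcacct- / sk- prefixes cited in this report (pattern is illustrative).
const KEY_PATTERN = /sk-(?:proj-|svcacct-)?[A-Za-z0-9_-]{20,}/g;
const SCAN_EXTENSIONS = /\.(?:js|ts|py|json|ya?ml|env)$/;

function scanDir(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      if (entry.name === "node_modules" || entry.name === ".git") continue;
      scanDir(full);
    } else if (SCAN_EXTENSIONS.test(entry.name)) {
      const matches = fs.readFileSync(full, "utf8").match(KEY_PATTERN);
      if (matches) {
        console.warn(`Possible hardcoded API key in ${full} (${matches.length} match(es))`);
      }
    }
  }
}

scanDir(process.cwd());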

Exposure in Live Production Websites 

Beyond repositories, CRIL uncovered roughly 3,000 public-facing websites leaking ChatGPT API keys directly in production. In these cases, credentials were embedded within JavaScript bundles, static files, or front-end framework assets, making them visible to anyone inspecting network traffic or application source code.  A commonly observed implementation resembled: 
const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";
const OPENAI_API_KEY = "sk-svcacct-XXXXXXXXXXXXXXXXXXXXXXXX";
The sk-proj- prefix typically denotes a project-scoped key tied to a specific environment and billing configuration. The sk-svcacct- prefix generally represents a service-account key intended for backend automation or system-level integration. Despite their differing scopes, both function as privileged authentication tokens granting direct access to AI inference services and billing resources.  Embedding these keys in client-side JavaScript fully exposes them. Attackers do not need to breach infrastructure or exploit software vulnerabilities; they simply harvest what is publicly available. 
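The conventional fix is to keep the key on the server and expose only a narrow proxy route to the browser. The sketch below illustrates that pattern against OpenAI’s public chat completions endpoint using a Node/Express backend (Node 18+ for global fetch); the route name, model choice, and input limit are illustrative assumptions, not a prescribed implementation.

// proxy-server.js — illustrative sketch: the API key stays server-side, never in client bundles
const express = require("express");
const app = express();
app.use(express.json());

// Read the key from the environment on the server; the browser only ever sees /api/chat.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

app.post("/api/chat", async (req, res) => {
  const userMessage = String(req.body.message || "").slice(0, 2000); // crude input bound (illustrative)
  try {
    const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // model name is an assumption for illustration
        messages: [{ role: "user", content: userMessage }],
      }),
    });
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    res.status(502).json({ error: "Upstream AI request failed" });
  }
});

app.listen(3000);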

“The AI Era Has Arrived — Security Discipline Has Not” 

Richard Sands, CISO at Cyble, summarized the issue bluntly: “The AI Era Has Arrived — Security Discipline Has Not.” AI systems are no longer experimental tools; they are production-grade infrastructure powering chatbots, copilots, recommendation engines, and automated workflows. Yet the security rigor applied to cloud credentials and identity systems has not consistently extended to ChatGPT API keys.  A contributing factor is the rise of what some developers call “vibe coding”—a culture that prioritizes speed, experimentation, and rapid feature delivery. While this accelerates innovation, it often sidelines foundational security practices. API keys are frequently treated as configuration values rather than production secrets.  Sands further emphasized, “Tokens are the new passwords — they are being mishandled.” From a security standpoint, ChatGPT API keys are equivalent to privileged credentials. They control inference access, usage quotas, billing accounts, and sometimes sensitive prompts or application logic. 

Monetization and Criminal Exploitation 

Once discovered, exposed keys are validated through automated scripts and operationalized almost immediately. Threat actors monitor GitHub repositories, forks, gists, and exposed JavaScript assets to harvest credentials at scale.  CRIL observed that compromised keys are typically used to: 
  • Execute high-volume inference workloads 
  • Generate phishing emails and scam scripts 
  • Assist in malware development 
  • Circumvent service restrictions and usage quotas 
  • Drain victim billing accounts and exhaust API credits 
Some exposed credentials were also referenced in discussions mentioning Cyble Vision, indicating that threat actors may be tracking and sharing discovered keys. Using Cyble Vision, CRIL identified instances in which exposed keys were subsequently leaked and discussed on underground forums.

[Image: Cyble Vision indicates API key exposure leak (Source: Cyble Vision)]

Unlike traditional cloud infrastructure, AI API activity is often not integrated into centralized logging systems, SIEM platforms, or anomaly detection pipelines. As a result, abuse can persist undetected until billing spikes, quota exhaustion, or degraded service performance reveal the compromise. Kaustubh Medhe, CPO at Cyble, warned: “Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.”
  •  

India Brings AI-Generated Content Under Formal Regulation with IT Rules Amendment


The Central Government has formally brought AI-generated content within India’s regulatory framework for the first time. Through notification G.S.R. 120(E), issued by the Ministry of Electronics and Information Technology (MeitY) and signed by Joint Secretary Ajit Kumar, amendments were introduced to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised rules take effect from February 20, 2026. The move represents a significant shift in Indian cybersecurity and digital governance policy. While the Information Technology Act, 2000, has long addressed unlawful online conduct, these amendments explicitly define and regulate “synthetically generated information” (SGI), placing AI-generated content under structured compliance obligations.

What the Law Now Defines as “Synthetically Generated Information” 

The notification inserts new clauses into Rule 2 of the 2021 Rules. It defines “audio, visual or audio-visual information” broadly to include any audio, image, photograph, video, sound recording, or similar content created, generated, modified, or altered through a computer resource.  More critically, clause (wa) defines “synthetically generated information” as content that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true and depicts or portrays an individual or event in a way that is likely to be perceived as indistinguishable from a natural person or real-world occurrence.  This definition clearly encompasses deep-fake videos, AI-generated voiceovers, face-swapped images, and other forms of AI-generated content designed to simulate authenticity. The framing is deliberate: the concern is not merely digital alteration, but deception, content that could reasonably be mistaken for reality.  At the same time, the amendment carves out exceptions. Routine or good-faith editing, such as color correction, formatting, transcription, compression, accessibility improvements, translation, or technical enhancement, does not qualify as synthetically generated information, provided the underlying substance or meaning is not materially altered. Educational materials, draft templates, or conceptual illustrations also fall outside the SGI category unless they create a false document or false electronic record. This distinction attempts to balance innovation in Information Technology with protection against misuse. 

New Duties for Intermediaries 

The amendments substantially revise Rule 3, expanding intermediary obligations. Platforms must inform users, at least once every three months and in English or any Eighth Schedule language, that non-compliance with platform rules or applicable laws may lead to suspension, termination, removal of content, or legal liability. Where violations relate to criminal offences, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023, or the Protection of Children from Sexual Offences Act, 2012, mandatory reporting requirements apply.  A new clause (ca) introduces additional obligations for intermediaries that enable or facilitate the creation or dissemination of synthetically generated information. These platforms must inform users that directing their services to create unlawful AI-generated content may attract penalties under laws including the Information Technology Act, the Bharatiya Nyaya Sanhita, 2023, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace Act, 2013, and the Immoral Traffic (Prevention) Act, 1956.  Consequences for violations may include immediate content removal, suspension or termination of accounts, disclosure of the violator’s identity to victims, and reporting to authorities where offences require mandatory reporting. The compliance timelines have also been tightened. Content removal in response to valid orders must now occur within three hours instead of thirty-six hours. Certain grievance response windows have been reduced from fifteen days to seven days, and some urgent compliance requirements now demand action within two hours. 

Due Diligence and Labelling Requirements for AI-generated Content 

A new Rule 3(3) imposes explicit due diligence obligations for AI-generated content. Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating synthetically generated information that violates the law.  This includes content containing child sexual abuse material, non-consensual intimate imagery, obscene or sexually explicit material, false electronic records, or content related to explosive materials or arms procurement. It also includes deceptive portrayals of real individuals or events intended to mislead.  For lawful AI-generated content that does not violate these prohibitions, the rules mandate prominent labelling. Visual content must carry clearly visible notices. Audio content must include a prefixed disclosure. Additionally, such content must be embedded with permanent metadata or other provenance mechanisms, including a unique identifier linking the content to the intermediary computer resource, where technically feasible. Platforms are expressly prohibited from enabling the suppression or removal of these labels or metadata. 
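To make the labelling and provenance duty concrete, the sketch below shows one way a platform might attach a visible notice and an embedded identifier to a content record before publication; the field names, identifier format, and JavaScript implementation are assumptions for illustration, as the rules specify the outcome (labels and traceable metadata) rather than a particular scheme.

// label-sgi.js — illustrative sketch of labelling synthetically generated information before publication
const crypto = require("crypto");

function labelSyntheticContent(content, intermediaryId) {
  // Unique identifier linking the content to the intermediary's systems (format is an assumption).
  const provenanceId = `${intermediaryId}-${crypto.randomUUID()}`;

  return {
    ...content,
    // Clearly visible notice for visual content; audio would carry a prefixed disclosure instead.
    label: "This content is synthetically generated.",
    metadata: {
      ...(content.metadata || {}),
      syntheticallyGenerated: true,
      provenanceId,                         // intended to be permanently embedded where technically feasible
      labelledAt: new Date().toISOString(),
    },
  };
}

// Hypothetical usage with a placeholder content record
const published = labelSyntheticContent(
  { type: "image", title: "Generated illustration", metadata: {} },
  "example-intermediary"
);
console.log(published.label, published.metadata.provenanceId);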

Enhanced Obligations for Social Media Intermediaries 

Rule 4 introduces an additional compliance layer for significant social media intermediaries. Before allowing publication, these platforms must require users to declare whether content is synthetically generated. They must deploy technical measures to verify the accuracy of that declaration. If confirmed as AI-generated content, it must be clearly labelled before publication.  If a platform knowingly permits or fails to act on unlawful synthetically generated information, it may be deemed to have failed its due diligence obligations. The amendments also align terminology with India’s evolving criminal code, replacing references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023. 

Implications for Indian Cybersecurity and Digital Platforms 

The February 2026 amendment reflects a decisive step in Indian cybersecurity policy. Rather than banning AI-generated content outright, the government has opted for traceability, transparency, and technical accountability. The focus is on preventing deception, protecting individuals from reputational harm, and ensuring rapid response to unlawful synthetic media. For platforms operating within India’s Information Technology ecosystem, compliance will require investment in automated detection systems, content labelling infrastructure, metadata embedding, and accelerated grievance redressal workflows. For users, the regulatory signal is clear: generating deceptive synthetic media is no longer merely unethical; it may trigger direct legal consequences. As AI tools continue to scale, the regulatory framework introduced through G.S.R. 120(E) marks India’s formal recognition that AI-generated content is not a fringe concern but a central governance challenge in the digital age. 
  •  

SMS and OTP Bombing Campaigns Found Abusing API, SSL and Cross-Platform Automation


The modern authentication ecosystem runs on a fragile assumption: that requests for one-time passwords are genuine. That assumption is now under sustained pressure. What began in the early 2020s as loosely shared scripts for irritating phone numbers has evolved into a coordinated ecosystem of SMS and OTP bombing tools engineered for scale, speed, and persistence. Recent research from Cyble Research and Intelligence Labs (CRIL), which examined approximately 20 of the most actively maintained repositories, reveals a sharp technical evolution continuing through late 2025 and into 2026. These are no longer simple terminal-based scripts. They are cross-platform desktop applications, Telegram-integrated automation tools, and high-performance frameworks capable of orchestrating large-scale SMS and OTP bombing and voice-bombing campaigns across multiple regions. Importantly, the findings reflect patterns observed within a defined research sample and should be interpreted as indicative trends rather than a complete census of the broader ecosystem. Even within that limited scope, the scale is striking.

From Isolated Scripts to Organized API Exploitation 

SMS and OTP bombing campaigns operate by abusing legitimate authentication endpoints. Attackers repeatedly trigger password reset flows, registration verifications, or login challenges to flood a victim’s device with legitimate SMS messages or automated calls. The result is harassment, disruption, and in some cases, MFA fatigue. Across the 20 repositories analyzed, approximately 843 vulnerable API endpoints were catalogued. These endpoints belonged to organizations spanning telecommunications, financial services, e-commerce, ride-hailing platforms, and government portals. Each shared a common weakness: inadequate rate limiting, insufficient CAPTCHA enforcement, or both. The regional targeting pattern was highly uneven. Roughly 61.68% of observed endpoints, about 520, were associated with infrastructure in Iran. India accounted for 16.96%, or approximately 143 endpoints. Additional activity focused on Turkey, Ukraine, and other parts of Eastern Europe and South Asia.

[Image: Regional Distribution of Observed Endpoints (n ≈ 843) (Source: Cyble)]

The abuse lifecycle typically begins with API discovery. Attackers manually test login and signup flows, scan common paths such as /api/send-otp or /auth/send-code, reverse-engineer mobile apps to extract hardcoded API references, or rely on community-maintained endpoint lists shared through public repositories and forums.

[Image: Observed SMS/OTP Bombing Abuse Lifecycle (Source: Cyble)]

Once identified, these endpoints are integrated into multi-threaded attack tools capable of issuing simultaneous requests at scale.
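By way of contrast with the missing controls described above, the sketch below shows a minimal per-number throttle on an OTP endpoint, assuming a Node/Express API and using the /api/send-otp path cited in the research; the window, threshold, in-memory store, and omitted SMS call are illustrative, and production deployments would typically add CAPTCHA verification and shared storage such as Redis.

// otp-throttle.js — illustrative sketch: per-recipient rate limiting for an OTP endpoint
const express = require("express");
const app = express();
app.use(express.json());

const WINDOW_MS = 10 * 60 * 1000; // 10-minute window (illustrative choice)
const MAX_SENDS = 3;              // max OTP sends per number per window (illustrative choice)
const recentSends = new Map();    // phone number -> timestamps of recent sends

app.post("/api/send-otp", (req, res) => {
  const phone = String(req.body.phone || "");
  const now = Date.now();
  const history = (recentSends.get(phone) || []).filter((t) => now - t < WINDOW_MS);

  if (history.length >= MAX_SENDS) {
    // Reject rather than silently dropping, so clients can back off.
    return res.status(429).json({ error: "Too many OTP requests for this number. Try again later." });
  }

  history.push(now);
  recentSends.set(phone, history);

  // The actual SMS dispatch (e.g., via a telephony provider) is intentionally omitted from this sketch.
  res.json({ status: "otp_sent" });
});

app.listen(3000);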

The Rise of Automation and SSL Bypass Techniques 

The technical stack behind SMS and OTP bombing tools has matured considerably.

[Image: Technology Stack Distribution (n ≈ 20 repositories) (Source: Cyble)]

Maintainers now provide implementations across seven programming languages and frameworks, lowering the barrier to entry for attackers with minimal coding knowledge. Modern tools incorporate: 
  • Multi-threading for parallel API abuse 
  • Proxy rotation to evade IP-based controls 
  • Request randomization to simulate human behavior 
  • Automated retries and failure handling 
  • Real-time reporting dashboards 
More concerning is the widespread use of SSL bypass mechanisms. Approximately 75% of analyzed repositories disable SSL certificate validation to circumvent basic security controls. Instead of trusting properly validated SSL connections, these tools intentionally ignore certificate errors, allowing interception or manipulation of traffic without interruption. SSL bypass has become one of the most prevalent evasion techniques observed.  Additionally, 58.3% of repositories randomize User-Agent headers to evade signature-based detection. Around 33% exploit static or hardcoded reCAPTCHA tokens, defeating poorly implemented bot protections.  The ecosystem is no longer confined to SMS alone. Voice-bombing campaigns, automated calls triggered through telephony APIs, have been integrated into several tools, expanding the harassment vector beyond text messages. 

Commercial Web Services and Data Harvesting 

Parallel to open-source development, a commercial layer has emerged. Web-based SMS and OTP bombing platforms offer point-and-click interfaces accessible from any browser. Marketed deceptively as “prank tools” or “SMS testing services,” these platforms remove all technical barriers. They represent an escalation in accessibility: unlike repository-based tools requiring local execution, web platforms abstract away configuration, proxy management, and API integration. However, they operate on a dual-threat model. Phone numbers entered into these platforms are frequently harvested. Collected data may be reused for spam campaigns, sold as lead lists, or integrated into fraud operations. In effect, users expose both their targets and themselves to long-term exploitation.

Financial and Operational Impact 

For individuals, SMS and OTP bombing can degrade device performance, bury legitimate communications, exhaust SMS storage limits, drain battery life, and create MFA fatigue that increases the risk of accidental approval of malicious login attempts. The addition of voice-bombing campaigns further intensifies disruption. For organizations, the impact extends beyond inconvenience. Financially, each OTP message costs between $0.05 and $0.20. A single attack generating 10,000 messages can cost $500 to $2,000. Unprotected API endpoints subjected to sustained abuse can push monthly SMS bills into five-figure territory. Operationally, legitimate users may be unable to receive verification codes, customer support teams become overwhelmed, and delivery delays affect all customers. In regulated sectors, failure to ensure secure and reliable authentication flows may create compliance exposure. Reputational damage compounds the issue. Public perception quickly associates spam-like behavior with poor security controls.
  •  

India Seeks Larger Role in Global AI and Deep Tech Development


India’s technology ambitions are no longer limited to policy announcements; they are now translating into capital flows, institutional reforms, and global positioning. At the center of this transformation is the IndiaAI Mission, a flagship initiative that is reshaping AI in India while influencing private sector investment and deep tech growth across multiple domains. Information submitted in the Lok Sabha on February 11, 2026, by Minister of Electronics and IT Ashwini Vaishnaw outlines how government-backed reforms and funding mechanisms are strengthening India’s AI and space technology ecosystem. For global observers, the scale and coordination of these efforts signal a strategic push to position India as a long-term technology powerhouse.

IndiaAI Mission Lays Foundation for AI in India

Launched in March 2024 with an outlay of ₹10,372 crore, the IndiaAI Mission aims to build a comprehensive AI ecosystem. In less than two years, the initiative has delivered measurable progress. More than 38,000 GPUs have been onboarded to create a common compute facility accessible to startups and academic institutions at affordable rates. Twelve teams have been shortlisted to develop indigenous foundational models or Large Language Models (LLMs), while 30 applications have been approved to build India-specific AI solutions. Talent development remains central to the IndiaAI Mission. Over 8,000 undergraduate students, 5,000 postgraduate students, and 500 PhD scholars are currently being supported. Additionally, 27 India Data and AI Labs have been established, with 543 more identified for development. India’s AI ecosystem is also earning global recognition. The Stanford Global AI Vibrancy 2025 report ranks India third worldwide in AI competitiveness and ecosystem vibrancy. The country is also the second-largest contributor to GitHub AI projects—evidence of a strong developer community driving AI in India from the ground up.

Private Sector Investment in AI Gains Speed

Encouraged by the IndiaAI Mission and broader reforms, private sector investment in AI is rising steadily. According to the Stanford AI Index Report 2025, India’s cumulative private investment in AI between 2013 and 2024 reached approximately $11.1 billion. Recent announcements underscore this momentum. Google revealed plans to establish a major AI Hub in Visakhapatnam with an investment of around $15 billion—its largest commitment in India so far. Tata Group has also announced an $11 billion AI innovation city in Maharashtra. These developments suggest that AI in India is moving beyond research output toward large-scale commercial infrastructure. The upcoming India AI Impact Summit 2026, to be held in New Delhi, will further position India within the global AI debate. Notably, it will be the first time the global AI summit series takes place in the Global South, signaling a shift toward more inclusive technology governance.

Deep Tech Push Backed by RDI Fund and Policy Reforms

Beyond AI, the government is reinforcing the broader deep tech sector through funding and policy clarity. A ₹1 lakh crore Research, Development and Innovation (RDI) Fund under the Anusandhan National Research Foundation (ANRF) has been announced to support high-risk, high-impact projects. The National Deep Tech Startup Policy addresses long-standing challenges in funding access, intellectual property, infrastructure, and commercialization. Under Startup India, deep tech firms now enjoy extended eligibility periods and higher turnover thresholds for tax benefits and government support. These structural changes aim to strengthen India’s Gross Expenditure on Research and Development (GERD), currently at 0.64% of GDP. Encouragingly, India’s position in the Global Innovation Index has climbed from 81st in 2015 to 38th in 2025—an indicator that reforms are yielding measurable outcomes.

Space Sector Reforms Expand India’s Global Footprint

Parallel to AI in India, the government is also expanding its ambitions in space technology. The Indian Space Policy 2023 clearly defines the roles of ISRO, IN-SPACe, and private industry, opening the entire space value chain to commercial participation. IN-SPACe now operates as a single-window agency authorizing non-government space activities and facilitating access to ISRO’s infrastructure. A ₹1,000 crore venture capital fund and a ₹500 crore Technology Adoption Fund are supporting early-stage and scaling space startups. Foreign Direct Investment norms have been liberalized, permitting up to 100% FDI in satellite manufacturing and components. Through NewSpace India Limited (NSIL), the country is expanding its presence in the global commercial launch market, particularly for small and medium satellites. The collaboration between ISRO and the Department of Biotechnology in space biotechnology—including microgravity research and space bio-manufacturing—signals how interdisciplinary innovation is becoming a national priority.

A Strategic Inflection Point for AI in India

Taken together, the IndiaAI Mission, private sector investment in AI, deep tech reforms, and space sector liberalization form a coordinated architecture. This is not merely about technology adoption—it is about long-term capability building. For global readers, India’s approach offers an interesting case study: sustained public investment paired with regulatory clarity and private capital participation. While challenges such as research intensity and commercialization gaps remain, the trajectory is clear. The IndiaAI Mission has become more than a policy initiative; it is emerging as a structural driver of AI in India and a signal of the country’s broader technological ambitions in the decade ahead.
  •  

Taiwan Government Agencies Faced 637 Cybersecurity Incidents in H2 2025


In the past six months, Taiwan’s government agencies have reported 637 cybersecurity incidents, according to the latest data released by the Cybersecurity Academy (CSAA). The findings, published in its Cybersecurity Weekly Report, reveal not just the scale of digital threats facing Taiwan’s public sector, but also four recurring attack patterns that reflect broader global trends targeting government agencies. For international observers, the numbers are significant. Out of a total of 723 cybersecurity incidents reported by government bodies and select non-government organizations during this period, 637 cases involved government agencies alone. The majority of these—410 cases—were classified as illegal intrusion, making it the most prevalent threat category. These cybersecurity incidents provide insight into how threat actors continue to exploit both technical vulnerabilities and human behaviour within public institutions.

Illegal Intrusion Leads the Wave of Cybersecurity Incidents

Illegal intrusion remains the leading category among reported cybersecurity incidents affecting government agencies. While the term may sound broad, it reflects deliberate attempts by attackers to gain unauthorized access to systems, often paving the way for espionage, data theft, or operational disruption. The CSAA identified four recurring attack patterns behind these incidents. The first involves the distribution of malicious programs disguised as legitimate software. Attackers impersonate commonly used applications, luring employees into downloading infected files. Once installed, these malicious programs establish abnormal external connections, creating backdoors for future control or data exfiltration. This tactic is particularly concerning for government agencies, where employees frequently rely on specialized or internal tools. A single compromised endpoint can provide attackers with a foothold into wider networks, increasing the scale of cybersecurity incidents.

USB Worm Infections and Endpoint Vulnerabilities

The second major pattern behind these cybersecurity incidents involves worm infections spread through portable media devices such as USB drives. Though often considered an old-school technique, USB-based attacks remain effective—especially in environments where portable media is routinely used for operational tasks. When infected devices are plugged into systems, malicious code can automatically execute, triggering endpoint intrusion and abnormal system behavior. Such breaches can lead to lateral movement within networks and unauthorized external communications. This pattern underscores a key reality: technical sophistication is not always necessary. In many cybersecurity incidents, attackers succeed by exploiting routine workplace habits rather than zero-day vulnerabilities.

Social Engineering and Watering Hole Attacks Target Trust

The third pattern involves social engineering email attacks, frequently disguised as administrative litigation or official document exchanges. These phishing emails are crafted around business topics highly relevant to government agencies, increasing the likelihood that recipients will open attachments or click malicious links. Such cybersecurity incidents rely heavily on human psychology. The urgency and authority embedded in administrative-themed emails make them particularly effective. Despite years of awareness campaigns, phishing remains one of the most successful entry points for attackers globally. The fourth pattern, known as watering hole attacks, adds another layer of complexity. In these cases, attackers compromise legitimate websites commonly visited by government officials. During normal browsing, malicious commands are silently executed, resulting in endpoint compromise and abnormal network behavior. Watering hole attacks demonstrate how cybersecurity incidents can originate from seemingly trusted digital environments. Even cautious users can fall victim when legitimate platforms are weaponized.

Critical Infrastructure Faces Operational Risks

Beyond government agencies, cybersecurity incidents reported by non-government organizations primarily affected critical infrastructure providers, particularly in emergency response, healthcare, and communications sectors. Interestingly, many of these cases involved equipment malfunctions or damage rather than direct cyberattacks. System operational anomalies led to service interruptions, while environmental factors such as typhoons disrupted critical services. These incidents highlight an important distinction: not all disruptions stem from malicious activity. However, the operational impact can be equally severe. The Cybersecurity Research Institute (CRI) emphasized that equipment resilience, operational continuity, and environmental risk preparedness are just as crucial as cybersecurity protection. In an interconnected world, digital security and physical resilience must go hand in hand.

Strengthening Endpoint Protection and Cyber Governance

In response to the rise in cybersecurity incidents, experts recommend a dual approach—technical reinforcement and management reform. From a technical perspective, endpoint protection and abnormal behavior monitoring must be strengthened. Systems should be capable of detecting malicious programs, suspicious command execution, abnormal connections, and risky portable media usage. Enhanced browsing and attachment access protection can further reduce the risk of malware downloads during routine operations. From a governance standpoint, ongoing education is essential. Personnel must remain alert to risks associated with fake software, social engineering email attacks, and watering hole attacks. Clear management policies regarding portable media usage, software sourcing, and external website access should be embedded into cybersecurity governance frameworks. The volume of cybersecurity incidents reported in just six months sends a clear message: digital threats targeting public institutions are persistent, adaptive, and increasingly strategic. Governments and critical infrastructure providers must move beyond reactive responses and build layered defenses that address both technology and human behavior.
  •  

India Rolls Out AI-on-Wheels to Bridge the Digital Divide

Yuva AI for All

India has taken another step toward expanding AI literacy in India with the launch of Kaushal Rath under the national programme Yuva AI for All. Flagged off from India Gate in New Delhi, the mobile initiative aims to bring foundational Artificial Intelligence (AI) education directly to students, youth, and educators, particularly in semi-urban and underserved regions. For a country positioning itself as a global digital leader, the message behind Yuva AI for All is clear: AI cannot remain limited to elite institutions or metro cities. If Artificial Intelligence is to shape economies and governance, it must be understood by the wider population.

Yuva AI for All: Taking AI to the Doorstep

Launched by the Ministry of Electronics and Information Technology (MeitY) under the IndiaAI Mission in collaboration with AISECT, Yuva AI for All focuses on democratising access to AI education. Launching the initiative, the Minister of State Jitin Prasada stated, “Through the Yuva AI for All initiative and the Kaushal Rath, we are taking AI awareness directly across the country, especially to young people. The bus will travel across regions to familiarise students and youth with the uses and benefits of Artificial Intelligence, fulfilling the Prime Minister Narendra Modi’s vision of ensuring that awareness and access to opportunity transcend geography and demography.” Adding to this, he also said, “The Yuva AI for All with Kaushal Rath initiative is a precursor to the India AI Impact Summit 2026, which is set to take place in New Delhi next week. It is a great pride for India to be hosting a Summit of this kind for the first time, to be held in the Global South.”

[Image: Yuva AI for All (Image Source: PIB)]

At the centre of this effort is Kaushal Rath, a fully equipped mobile computer lab with internet-enabled systems and audio-visual tools. The vehicle will travel across Delhi-NCR and later other regions, visiting schools, ITIs, colleges, and community spaces. The aim is not abstract policy messaging, but practical exposure—hands-on demonstrations of AI and Generative AI tools, guided by trained facilitators and contextualised Indian use cases. The course structure is intentionally accessible. It is a four-hour, self-paced programme with six modules, requiring zero coding background. Participants learn AI concepts, ethics, and real-world applications. Upon completion, they receive certification, a move designed to add tangible value to academic and professional profiles. Kavita Bhatia, Scientist G, MeitY and COO of IndiaAI Mission, highlighted, “Under the IndiaAI Mission, skilling is one of the seven core pillars, and this initiative advances our goal of democratising AI education at scale. Through Kaushal Rath, we are enabling hands-on AI learning for students across institutions using connected systems, AI tools, and structured courses, including the YuvAI for All programme designed to demystify AI. By combining instructor-led training, micro- and nano-credentials, and nationwide outreach, we are ensuring that AI skilling becomes accessible to learners across regions.” In a global context, this matters. Many nations speak of AI readiness, but few actively drive AI education beyond established technology hubs. Yuva AI for All attempts to bridge that gap.

Building Momentum Toward the India AI Impact Summit 2026

The launch of Yuva AI for All and Kaushal Rath also builds momentum toward the upcoming India AI Impact Summit 2026, scheduled from February 16–20 at Bharat Mandapam, New Delhi. Positioned as the first global AI summit to be hosted in the Global South, the event is anchored on three pillars: People, Planet, and Progress. The summit aims to translate global AI discussions into development-focused outcomes aligned with India’s national priorities. But what distinguishes this effort is its nationwide groundwork. Over the past months, seven Regional AI Conferences were conducted across Meghalaya, Gujarat, Odisha, Madhya Pradesh, Uttar Pradesh, Rajasthan, and Kerala under the IndiaAI Mission. These conferences focused on practical AI deployment in governance, healthcare, agriculture, education, language technologies, and public service delivery. Policymakers, startups, academia, industry leaders, and civil society participated, ensuring that discussions were not limited to theory. Insights from these regional consultations will directly shape the agenda of the India AI Impact Summit 2026.

A Nationwide AI Push, Not Just a Summit

Several major announcements emerged from the regional conferences. Among them:
  • A commitment to train one million youth under Yuva AI for All
  • Expansion of AI Data Labs and AI Labs in ITIs and polytechnics
  • Launch of Rajasthan’s AI/ML Policy 2026
  • Announcement of the Uttar Pradesh AI Mission
  • Introduction of Madhya Pradesh’s SpaceTech Policy 2026 integrating AI
  • Signing of MoUs with institutions including Google, IIT Delhi, and National Law University, Jodhpur
  • Rollout of AI Stacks and cloud adoption frameworks for state-level governance
These developments suggest that India’s AI roadmap is not confined to policy speeches. It is being operationalised across states, with funding commitments and institutional backing. For global observers, this signals something important. Emerging economies are not merely consumers of AI technologies—they are actively shaping governance models and skilling frameworks suited to their socio-economic realities.

Why AI Literacy in India Matters Globally

Artificial Intelligence is often discussed in terms of advanced research and frontier innovation. Yet the real challenge is adoption—ensuring people understand what AI is, what it can do, and how it should be used responsibly. By launching Yuva AI for All, India is placing emphasis on foundational awareness, not just high-end research. That approach reflects a broader recognition: AI will influence public service delivery, agriculture systems, healthcare models, and digital governance worldwide. Without widespread literacy, the risk of exclusion grows.

At the same time, scaling AI education in a country as large and diverse as India is no small task. The success of Kaushal Rath will depend on sustained outreach, quality training, and long-term institutional support. Still, the initiative marks a visible shift. AI is no longer framed as a specialist subject—it is being positioned as a public capability.

As preparations intensify for the India AI Impact Summit 2026, Yuva AI for All stands out as a reminder that AI’s future will not be shaped only in boardrooms or research labs, but also in classrooms, ITIs, and community spaces across regions often left out of the digital conversation.

12 Lakh SIM Cards Cancelled, over 3 Lakh IMEI Numbers Blocked as Centre Intensifies Crackdown on Cybercrime


Union Home Minister Amit Shah on Tuesday announced that the Central government has cancelled 12 lakh (1.2 million) SIM cards and blocked the IMEI numbers of more than 3 lakh (300,000) mobile devices as part of a sweeping nationwide crackdown on cybercrime. He added that 20,853 accused individuals had been arrested in connection with cyber offences up to December 2025. Shah shared these figures while addressing the National Conference on “Tackling Cyber-Enabled Frauds and Dismantling the Ecosystem,” organized by the Central Bureau of Investigation (CBI) and the Indian Cyber Crime Coordination Centre (I4C). The conference focused on strategies to dismantle the growing organized ecosystem of cybercrime. The large-scale cancellation of SIM cards and blocking of IMEI numbers is aimed at cutting off the communication channels frequently used by fraud networks. According to Shah, these measures are part of a coordinated national effort to prevent and respond effectively to cybercrime.

Multi-Agency Coordination Strengthened to Combat Organized Cybercrime 

The Home Minister underlined that tackling cybercrime requires close cooperation among multiple institutions. Agencies, including I4C, State Police forces, the CBI, the National Investigation Agency (NIA), the Enforcement Directorate (ED), the Department of Telecommunications, the banking sector, the Ministry of Electronics and Information Technology (MeitY), the Reserve Bank of India (RBI), and the judiciary, are collectively engaged in sustained enforcement efforts.  Emphasising the importance of inter-agency coordination, Shah said each institution has a clearly defined role and responsibility. Seamless cooperation among stakeholders, he noted, is essential to deliver effective outcomes, especially when cybercrime operations span across states and international jurisdictions.  He described the initiative taken by the CBI and I4C as “extremely significant,” stating that it brings various departments together and strengthens the implementation of anti-cybercrime measures. Through this integrated framework, authorities aim not only to make arrests but also to dismantle the broader infrastructure supporting cybercrime activities.  Shah also stressed the crucial role of the CBI and NIA, particularly in addressing cybercrimes originating outside India. He pointed out that lapses in maintaining the chain of custody of digital evidence often hinder convictions and remain a key challenge in prosecuting cyber offenders. 

Digital Growth, 181 Billion UPI Transactions and Rising Cybercrime Risks 

Highlighting India’s digital transformation over the past 11 years under the Digital India initiative, Shah said the country’s digital expansion has been remarkable. The number of internet users has risen from 250 million to over 1 billion, while broadband connections have grown nearly sixteenfold, also crossing the 1-billion mark. He further noted that the cost of one gigabyte of data has dropped by 97 per cent, expanding internet access and usage. Connectivity through the BharatNet project has also seen dramatic growth. Eleven years ago, only 546 village panchayats were connected, whereas more than 2 lakh village panchayats are now covered, ensuring connectivity from Parliament to Panchayats.

Shah also pointed to the surge in digital financial transactions. In 2024 alone, India recorded more than 181 billion Unified Payments Interface (UPI) transactions with a total value exceeding Rs 233 trillion. The rapid expansion of digital payments, he indicated, has made the fight against cybercrime even more critical. He warned that cybercrime, which was once largely individual-driven, has now become institutionalised. Criminal groups are using advanced technologies and continuously adapting their methods. In this environment, actions such as cancelling SIM cards and blocking IMEI numbers are intended to disrupt the operational backbone of fraudulent networks.

Calling for collective responsibility, Shah urged all agencies to identify vulnerabilities and minimise risks at every level. He said the Centre has adopted a comprehensive, multi-dimensional strategy to combat cybercrime. The key pillars include real-time cybercrime reporting, strengthening forensic networks, capacity building, research and development, promoting cyber awareness, and encouraging cyber hygiene. He cautioned that without timely intervention, cyber fraud could have escalated into a national crisis. Shah called on stakeholders to act simultaneously, whether by identifying fraudulent call centres, enhancing awareness campaigns, improving the 1930 cybercrime helpline, reducing response times, or strengthening coordination between banks and I4C.

Microsoft Patch Tuesday February Update Flags Exchange and Azure Vulnerabilities as High-Priority Risks


Microsoft Patch Tuesday February 2026 addressed 54 vulnerabilities, including six zero-days, across Windows, Office, Azure services, Exchange Server, and developer tools. The rollout is notable not only for its smaller size but for the presence of six zero-day vulnerabilities that were already being exploited in active attacks before patches became available. As a result, this Patch Tuesday release carries heightened urgency for enterprise defenders and system administrators.

Microsoft Patch Tuesday February Has Six New Zero-Day Fixes

The most critical aspect of this Microsoft Patch Tuesday February update is the confirmation that six vulnerabilities were under active exploitation. These flaws impact core Windows components and productivity applications widely deployed in enterprise environments.  The actively exploited zero-days are:
  • CVE-2026-21510: Windows Shell Security Feature Bypass (Severity: Important; CVSS 7.8)
  • CVE-2026-21513: MSHTML Platform Security Feature Bypass (Important; CVSS 7.5)
  • CVE-2026-21514: Microsoft Word Security Feature Bypass (Important; CVSS 7.8)
  • CVE-2026-21519: Desktop Window Manager Elevation of Privilege (Important; CVSS 7.8)
  • CVE-2026-21525: Windows Remote Access Connection Manager Denial of Service (Important; CVSS 7.5)
  • CVE-2026-21533: Windows Remote Desktop Services Elevation of Privilege (Important; CVSS 7.8)
CVE-2026-21510 allows attackers to bypass the Mark of the Web (MoTW) mechanism in Windows Shell, preventing users from seeing security warnings on files downloaded from the internet. CVE-2026-21513, affecting the MSHTML engine, enables malicious shortcut or file-based payloads to bypass prompts and execute code without user awareness. CVE-2026-21514 similarly permits crafted Microsoft Word files to evade OLE mitigation protections.  Privilege escalation vulnerabilities are also prominent. CVE-2026-21519 involves a type confusion flaw in the Desktop Window Manager that can grant attackers SYSTEM-level privileges. CVE-2026-21533 affects Windows Remote Desktop Services, allowing authenticated attackers to elevate privileges due to improper privilege handling. Meanwhile, CVE-2026-21525 can trigger a null pointer dereference in Windows Remote Access Connection Manager, leading to denial-of-service conditions by crashing VPN connections. 
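
To make the bypassed mechanism concrete, here is a minimal Python sketch (assuming a Windows NTFS volume; the sample path is hypothetical) that reads the Zone.Identifier alternate data stream in which the Mark of the Web is recorded. A bypass such as CVE-2026-21510 means attacker-delivered files reach users without this marker being honoured, so no warning is shown.

```python
# Minimal illustration of the Mark of the Web (MoTW): browsers on Windows
# record the download zone in an NTFS alternate data stream named
# Zone.Identifier, and Windows Shell uses it to warn before running
# internet-sourced files. A MoTW security feature bypass means a file
# reaches the user without this marker being honoured.
import os


def read_motw(path: str) -> str | None:
    """Return the Zone.Identifier stream for a file, or None if absent."""
    try:
        with open(path + ":Zone.Identifier", "r", encoding="utf-8",
                  errors="replace") as stream:
            return stream.read()
    except OSError:  # stream missing, non-NTFS volume, or non-Windows OS
        return None


if __name__ == "__main__":
    sample = os.path.expanduser("~/Downloads/report.docx")  # hypothetical file
    motw = read_motw(sample)
    print(motw if motw else "No Mark of the Web recorded for this file")
```
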

Vulnerability Distribution and Impact 

Beyond the zero-days, Microsoft Patch Tuesday resolves a broad range of additional issues. Of the 54 vulnerabilities fixed, Elevation of Privilege (EoP) flaws account for 25. Remote Code Execution (RCE) vulnerabilities total 12, followed by 7 spoofing issues, 6 information disclosure flaws, 5 security feature bypass vulnerabilities, and 3 denial-of-service issues.  High-risk vulnerabilities affecting enterprise infrastructure include: 
  • CVE-2026-21527: Microsoft Exchange Server Spoofing Vulnerability (Critical; potential RCE vector)
  • CVE-2026-23655: Azure Container Instances Information Disclosure (Critical)
  • CVE-2026-21518: GitHub Copilot / Visual Studio Remote Code Execution (Important)
  • CVE-2026-21528: Azure IoT SDK Remote Code Execution (Important)
  • CVE-2026-21531: Azure SDK Vulnerability (Important; CVSS 9.8)
  • CVE-2026-21222: Windows Kernel Information Disclosure (Important)
  • CVE-2026-21249: Windows NTLM Spoofing Vulnerability (Moderate)
  • CVE-2026-21509: Microsoft Office Security Feature Bypass (Important)
Azure-related services received multiple fixes, including Azure Compute Gallery (CVE-2026-21522 and CVE-2026-23655), Azure Function (CVE-2026-21532; CVSS 8.2), Azure Front Door (CVE-2026-24300; CVSS 9.8), Azure Arc (CVE-2026-24302; CVSS 8.6), Azure DevOps Server (CVE-2026-21512), and Azure HDInsights (CVE-2026-21529).   Exchange Server remains a particularly sensitive asset in enterprise networks. CVE-2026-21527 highlights continued risks to messaging infrastructure, which has historically been a prime target for remote code execution and post-exploitation campaigns. 

Additional CVEs and Exploitability Ratings 

The official advisory states: “February 2026 Security Updates. This release consists of the following 59 Microsoft CVEs.” In addition, Microsoft republished one non-Microsoft CVE: CVE-2026-1861, associated with Chrome and affecting Chromium-based Microsoft Edge. Exploitability ratings range from “Exploitation Detected” and “Exploitation More Likely” to “Exploitation Less Likely” and “Exploitation Unlikely.” Most entries include FAQs, but workarounds and mitigations are generally listed as unavailable.

Lifecycle Notes, Hotpatching, and Known Issues 

The advisory reiterates that Windows 10 and Windows 11 updates are cumulative and available through the Microsoft Update Catalog. Lifecycle timelines are documented in the Windows Lifecycle Facts Sheet. Microsoft is also continuing improvements to Windows Release Notes and provides servicing stack update details under ADV990001.  The Hotpatching feature is now generally available for Windows Server Azure Edition virtual machines. Customers using Windows Server 2008 or Windows Server 2008 R2 must purchase Extended Security Updates to continue receiving patches; additional information is available under 4522133.  Known issues tied to this 2026 Patch Tuesday release include: 
  • KB5075942: Windows Server 2025 Hotpatch 
  • KB5075897: Windows Server 23H2 
  • KB5075899: Windows Server 2025 
  • KB5075906: Windows Server 2022 
Given the confirmed exploitation of multiple zero-days and the concentration of Elevation of Privilege and Remote Code Execution flaws, Microsoft Patch Tuesday February 2026 represents a high-priority patch cycle. Organizations are advised to prioritize remediation of the six actively exploited vulnerabilities and of critical infrastructure components, and to conduct rapid compatibility testing to reduce operational disruption.
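
For teams planning remediation, a rough triage sketch along these lines may help order the work; the weighting scheme and the exposure flags are illustrative assumptions, not Microsoft or CISA guidance, and the Exchange CVSS value is assumed because the advisory summary above lists that flaw only as Critical.

```python
# Illustrative patch-triage sketch: rank CVEs so actively exploited and
# internet-facing issues are remediated first. The exploited flags come
# from the summary above; the weighting scheme is an assumption.
from dataclasses import dataclass


@dataclass
class Cve:
    cve_id: str
    cvss: float
    exploited: bool        # listed as "Exploitation Detected"
    internet_facing: bool  # from your own asset inventory


def priority(c: Cve) -> float:
    score = c.cvss
    if c.exploited:
        score += 10.0      # in-the-wild exploitation always jumps the queue
    if c.internet_facing:
        score += 2.0       # exposed assets before internal-only ones
    return score


patch_queue = [
    Cve("CVE-2026-21510", 7.8, True, False),   # Windows Shell MoTW bypass
    Cve("CVE-2026-21533", 7.8, True, True),    # RDS elevation of privilege
    Cve("CVE-2026-21531", 9.8, False, False),  # Azure SDK
    Cve("CVE-2026-21527", 9.0, False, True),   # Exchange spoofing (CVSS assumed)
]

for c in sorted(patch_queue, key=priority, reverse=True):
    print(f"{c.cve_id:<16} priority {priority(c):5.1f}")
```
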

Romance, Fake Platforms, $73M Lost: Crypto Scam Leader Gets 20 Years


The U.S. justice system has handed a 20-year prison sentence to an individual behind one of the largest global cryptocurrency investment scam cases on record. While the sentence signals accountability, the individual remains a fugitive after cutting off his electronic ankle monitor and fleeing in December 2025. Daren Li, a 42-year-old dual national of China and St. Kitts and Nevis, was sentenced in absentia to 20 years in prison for carrying out a $73 million cryptocurrency fraud scheme that targeted American victims.

Inside the $73 Million Global Cryptocurrency Investment Scam

According to court documents, Li pleaded guilty in November 2024 to conspiring to launder funds obtained through cryptocurrency scams. Prosecutors revealed that the global cryptocurrency investment scam was operated from scam centers in Cambodia, a growing hotspot for transnational cyber fraud. The operation followed a now-familiar pattern often referred to as a “pig butchering scam.” Victims were approached through social media, unsolicited calls, text messages, and even online dating platforms. Fraudsters built professional or romantic relationships over weeks or months. Once trust was secured, victims were directed to spoofed cryptocurrency trading platforms that looked legitimate. In other cases, scammers posed as tech support or customer service representatives, convincing victims to transfer funds to fix non-existent viruses or fabricated technical problems. The numbers are staggering. Li admitted that at least $73.6 million flowed into accounts controlled by him and his co-conspirators. Of that, nearly $60 million was funneled through U.S. shell companies designed to disguise the origins of the stolen funds. This was not random fraud—it was organized, calculated, and industrial in scale.

Crypto Money Laundering Through U.S. Shell Companies

What makes this global cryptocurrency investment scam particularly troubling is the complex crypto money laundering infrastructure behind it. Li directed associates to establish U.S. bank accounts under shell companies. These accounts received interstate and international wire transfers from victims. The stolen money was then converted into cryptocurrency, further complicating efforts to trace and recover funds. Eight co-conspirators have already pleaded guilty. Li is the first defendant directly involved in receiving victim funds to be sentenced. Prosecutors pushed for the maximum penalty after hearing from victims who lost life savings, retirement funds, and, in some cases, their entire financial security. Assistant Attorney General A. Tysen Duva described the damage as “devastating.” And that word is not an exaggeration. Behind every dollar in this $73 million cryptocurrency scam is a real person whose trust was manipulated. “As part of an international cryptocurrency investment scam, Daren Li and his co-conspirators laundered over $73 million dollars stolen from American victims,” said Assistant Attorney General A. Tysen Duva of the Justice Department’s Criminal Division. “The Court’s sentence reflects the gravity of Li’s conduct, which caused devastating losses to victims throughout our country. The Criminal Division will work with our law enforcement partners around the world to ensure that Li is returned to the United States to serve his full sentence.”

Scam Centers in Cambodia Under Global Scrutiny

The sentencing comes amid increasing international pressure to dismantle scam centers in Cambodia and across Southeast Asia. For years, these operations flourished with limited oversight. Now, authorities in the U.S., China, and other nations are escalating crackdowns. China recently executed members of two crime families accused of running cyber scam compounds in Myanmar. In Cambodia, the arrest and extradition of Prince Group chairman Chen Zhi—a key figure in cyber scam money laundering—triggered chaotic scenes as human trafficking victims and scam workers sought refuge at embassies. These developments show that the global cryptocurrency investment scam network is not isolated. It is part of a larger ecosystem of organized crime, human trafficking, and digital exploitation.

Law Enforcement’s Expanding Response

The U.S. Secret Service’s Global Investigative Operations Center led the investigation, supported by Homeland Security Investigations, Customs and Border Protection, the U.S. Marshals Service, and international partners. The Justice Department’s Criminal Division continues targeting scam centers by seizing cryptocurrency, dismantling digital infrastructure, and disrupting money laundering networks. Since 2020, the Computer Crime and Intellectual Property Section (CCIPS) has secured more than 180 cybercrime convictions and recovered over $350 million in victim funds. Still, the fact that Li escaped before serving his sentence highlights a sobering truth: enforcement is improving, but global coordination must move even faster.

Why This Global Cryptocurrency Investment Scam Matters

Technology has erased borders, but it has also erased barriers for criminals. The global cryptocurrency investment scam case shows how encrypted apps, fake trading platforms, and shell corporations can be stitched together into a seamless fraud machine. The bigger concern is scale. These operations are not small-time scams run from a basement. They are corporate-style enterprises with recruiters, relationship builders, financial handlers, and laundering specialists. For investors, the lesson is clear: unsolicited investment advice, especially involving cryptocurrency, should raise immediate red flags. For regulators and governments, the message is even stronger. Financial transparency laws, international cooperation, and aggressive enforcement are no longer optional—they are essential. Daren Li’s 20-year sentence may serve as a warning, but until fugitives like him are brought back to face prison time, the fight against the next $73 million cryptocurrency scam continues.

FIIG Securities Fined AU$2.5 Million Following Prolonged Cybersecurity Failures


Australian fixed-income firm FIIG Securities has been fined AU$2.5 million after the Federal Court found it failed to adequately protect client data from cybersecurity threats over a period exceeding four years. The penalty follows a major FIIG cyberattack in 2023 that resulted in the theft and exposure of highly sensitive personal and financial information belonging to thousands of clients.  It is the first time the Federal Court has imposed civil penalties for cybersecurity failures under the general obligations of an Australian Financial Services (AFS) license.   In addition to the fine, the court ordered FIIG Securities to pay AU$500,000 toward the Australian Securities and Investments Commission’s (ASIC) enforcement costs. FIIG must also implement a compliance program, including the engagement of an independent expert to ensure its cybersecurity and cyber resilience systems are reasonably managed going forward. 

FIIG Cyberattack Exposed Sensitive Client Data After Years of Security Gaps 

The enforcement action stems from a ransomware attack that occurred in 2023. ASIC alleged that between March 2019 and June 2023, FIIG Securities failed to implement adequate cybersecurity measures, leaving its systems vulnerable to intrusion. On May 19, 2023, a hacker gained access to FIIG’s IT network and remained undetected for nearly three weeks.  During that time, approximately 385 gigabytes of confidential data were exfiltrated. The stolen data included names, addresses, dates of birth, driver’s licences, passports, bank account details, tax file numbers, and other sensitive information. FIIG later notified around 18,000 clients that their personal data may have been compromised as a result of the FIIG cyberattack.  Alarmingly, FIIG Securities did not discover the breach on its own. The company became aware of the incident only after being contacted by the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) on June 2. Despite receiving this warning, FIIG did not launch a formal internal investigation until six days later.  FIIG admitted it had failed to comply with its AFS licence obligations and acknowledged that adequate cybersecurity controls would have enabled earlier detection and response. The firm also conceded that adherence to its own policies and procedures could have prevented much of the client information from being downloaded. 
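
A quick back-of-the-envelope calculation shows why the dwell time matters: the reported volume and timeline imply only a modest but sustained outbound transfer rate, exactly the kind of anomaly that egress monitoring and a properly configured SIEM are meant to surface. The 20-day figure below is an approximation of "nearly three weeks".

```python
# Rough arithmetic only: the sustained outbound rate implied by the
# reported exfiltration. The volume is from the article; the duration is
# an approximation of "nearly three weeks" of undetected access.
stolen_gb = 385
days_undetected = 20                       # approximate
seconds = days_undetected * 24 * 3600
rate_mbit_s = stolen_gb * 8_000 / seconds  # 1 GB ~ 8,000 megabits
print(f"~{rate_mbit_s:.1f} Mbit/s sustained")  # roughly 1.8 Mbit/s
```
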

Regulatory Action Against FIIG Securities Sets Precedent for Cybersecurity Enforcement 

ASIC Deputy Chair Sarah Court said the case highlights the growing risks posed by cyber threats and the consequences of inadequate controls. “Cyber-attacks and data breaches are escalating in both scale and sophistication, and inadequate controls put clients and companies at real risk,” she said. “ASIC expects financial services licensees to be on the front foot every day to protect their clients. FIIG wasn’t – and they put thousands of clients at risk.”  ASIC Chair Joe Longo described the matter as a broader warning for Australian businesses. “This matter should serve as a wake-up call to all companies on the dangers of neglecting cybersecurity systems,” he said, emphasizing that cybersecurity is not a “set and forget” issue but one that requires continuous monitoring and improvement.  ASIC alleged that FIIG Securities failed to implement basic cybersecurity protection, including properly configured firewalls, regular patching of software and operating systems, mandatory cybersecurity training for staff, and sufficient allocation of financial and human resources to manage cyber risk.  Additional deficiencies cited by ASIC included the absence of an up-to-date incident response plan, ineffective privileged access management, lack of regular vulnerability scanning, failure to deploy endpoint detection and response tools, inadequate use of multi-factor authentication, and a poorly configured Security Information and Event Management (SIEM) system. 

Lessons From the FIIG Cyberattack for Australia’s Financial Sector 

Cybersecurity experts have pointed out that the significance of the FIIG cyberattack lies not only in the breach itself but in the prolonged failure to implement reasonable protections. Annie Haggar, Partner and Head of Cybersecurity at Norton Rose Fulbright Australia, noted in a LinkedIn post that ASIC’s case provides clarity on what regulators consider “adequate” cybersecurity. Key factors include the nature of the business, the sensitivity of stored data, the value of assets under management, and the potential impact of a successful attack.  The attack on FIIG Securities was later claimed by the ALPHV/BlackCat ransomware group, which stated on the dark web that it had stolen approximately 385GB of data from FIIG’s main server. The group warned the company that it had three days to make contact regarding the consequences of what it described as a failure by FIIG’s IT department.  According to FBI and Center for Internet Security reports, the ALPHV/BlackCat group gains initial access using compromised credentials, deploys PowerShell scripts and Cobalt Strike to disable security features, and uses malicious Group Policy Objects to spread ransomware across networks.  The breach was discovered after an employee reported being locked out of their email account. Further investigation revealed that files had been encrypted and backups wiped. While FIIG managed to restore some systems, other data could not be recovered. 

ENISA Updates Its International Strategy to Strengthen EU’s Cybersecurity Cooperation


The European Union Agency for Cybersecurity has released an updated international strategy to reinforce the EU’s cybersecurity ecosystem and strengthen cooperation beyond Europe’s borders. The revised ENISA International Strategy refreshes the agency’s approach to working with global partners while ensuring stronger alignment with the European Union’s international cybersecurity policies, core values, and long-term objectives. Cybersecurity challenges today rarely stop at national or regional borders. Digital systems, critical infrastructure, and data flows are deeply intertwined across continents, making international cooperation a necessity rather than a choice. Against this backdrop, ENISA has clarified that it will continue to engage strategically with international partners outside the European Union, but only when such cooperation directly supports its mandate to improve cybersecurity within Europe.

ENISA International Strategy Aligns Global Cooperation With Europe’s Cybersecurity Priorities 

Under the updated ENISA International Strategy, the agency’s primary objective remains unchanged: raising cybersecurity levels across the EU. International cooperation is therefore pursued selectively and strategically, focusing on areas where collaboration can deliver tangible benefits to EU Member States and strengthen Europe’s overall cybersecurity resilience. ENISA Executive Director Juhan Lepassaar highlighted the importance of international engagement in achieving this goal. He stated: “International cooperation is essential in cybersecurity. It complements and strengthens the core tasks of ENISA to achieve a high common level of cybersecurity across the Union.   Together with our Management Board, ENISA determines how we engage at an international level to achieve our mission and mandate. ENISA stands fully prepared to cooperate on the global stage to support the EU Member States in doing so.”  The strategy is closely integrated with ENISA’s broader organizational direction, including its recently renewed stakeholders’ strategy. A central focus is cooperation with international partners that share the EU’s values and maintain strategic relationships with the Union.

Expanding Cybersecurity Partnerships Beyond Europe While Supporting EU Policy Objectives 

The revised ENISA International Strategy outlines several active areas of international cooperation. These include more tailored working arrangements with specific countries, notably Ukraine and the United States. These partnerships are designed to focus on capacity-building, best practice exchange, and structured information and knowledge sharing in the field of cybersecurity.  ENISA will also continue supporting the European Commission and the European External Action Service (EEAS) in EU cyber dialogues with partners such as Japan and the United Kingdom. Through this role, ENISA provides technical expertise to inform discussions and to help align international cooperation with Europe’s cybersecurity priorities.  Another key element of the strategy involves continued support for EU candidate countries in the Western Balkans region. From 2026 onward, this support is planned to expand through the extension of specific ENISA frameworks and tools. These may include the development of comparative cyber indexes, cybersecurity exercise methodologies, and the delivery of targeted training programs aimed at strengthening national capabilities. 

Strengthening Europe’s Cybersecurity Resilience Through Multilateral Frameworks 

The updated strategy also addresses the operationalization of the EU Cybersecurity Reserve, established under the 2025 EU Cyber Solidarity Act. ENISA plans to support making the reserve operational for third countries associated with the Digital Europe Programme, including Moldova, thereby extending coordinated cybersecurity response mechanisms while maintaining alignment with EU standards.  In addition, ENISA will continue contributing to the cybersecurity work of the G7 Cybersecurity Working Group. In this context, the agency provides EU-level cybersecurity expertise when required, supporting cooperation on shared cyber threats and resilience efforts. The strategy also leaves room for exploring further cooperation with other like-minded international partners where mutual interests align.  Finally, the ENISA International Strategy reaffirms the principles guiding ENISA’s international cooperation and clarifies working modalities with the European Commission, the EEAS, and EU Member States. These principles were first established following the adoption of ENISA’s initial international strategy in 2021 and have since been consolidated and refined based on practical experience and best practices. 

Discord Introduces Stronger Teen Safety Controls Worldwide


Discord teen-by-default settings are now rolling out globally, marking a major shift in how the popular communication platform handles safety for users aged 13 to 17. The move signals a clear message from Discord: protecting teens online is no longer optional, it is expected. The Discord update applies to all new and existing users worldwide and introduces age-appropriate defaults, restricted access to sensitive content, and stronger safeguards around messaging and interactions. While Discord positions this as a safety-first upgrade, the announcement also arrives at a time when gaming and social platforms are under intense regulatory and public scrutiny.

What Discord Teen-by-Default Settings Actually Change

Discord, headquartered in San Francisco and used by more than 200 million monthly active users, says the new Discord teen-by-default settings are designed to create safer experiences without breaking the sense of community that defines the platform.

Under the new system, teen users automatically receive stricter communication settings. Sensitive content remains blurred, access to age-restricted servers is blocked, and direct messages from unknown users are routed to a separate inbox. Only age-verified adults can change these defaults. The company says these measures are meant to protect teens while still allowing them to connect around shared interests like gaming, music, and online communities.

Age Verification, But With Privacy Guardrails

Age assurance sits at the core of the Discord teen-by-default settings rollout. Starting in early March, users may be asked to verify their age if they want to access certain content or change safety settings. Discord is offering multiple options: facial age estimation processed directly on a user’s device, or submission of government-issued ID through approved vendors. The company has also introduced an age inference model that runs quietly in the background to help classify accounts without always forcing verification. Discord stresses that privacy remains central. Video selfies never leave the device, identity documents are deleted quickly, and a user’s age status is never visible to others. In most cases, verification is a one-time process.
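
As a purely illustrative sketch of the data-minimisation pattern described here, and not Discord's actual implementation, an account record might retain only a coarse age band, a verification timestamp, and the method used, while the selfie or ID itself is discarded immediately; the class and field names below are assumptions.

```python
# Purely illustrative sketch of the data-minimisation pattern described
# above; this is NOT Discord's implementation. The idea: the raw selfie
# or ID never persists; only a coarse age band and a verification
# timestamp are stored against the account.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class AgeBand(Enum):
    UNDER_18 = "under_18"
    ADULT = "18_plus"


@dataclass
class AgeStatus:
    band: AgeBand
    verified_at: datetime
    method: str  # e.g. "on_device_estimation" or "id_check" (labels assumed)


def record_verification(estimated_age: int, method: str) -> AgeStatus:
    """Keep only the derived age band; discard the input immediately."""
    band = AgeBand.ADULT if estimated_age >= 18 else AgeBand.UNDER_18
    return AgeStatus(band, datetime.now(timezone.utc), method)


def can_change_safety_defaults(status: AgeStatus) -> bool:
    # Teen-by-default: only age-verified adults may relax the defaults.
    return status.band is AgeBand.ADULT


status = record_verification(16, "on_device_estimation")
print(can_change_safety_defaults(status))  # False -> teen defaults stay on
```

The design point is that losing a record like this exposes far less than losing a stored ID image, which is precisely the concern raised by the vendor breach discussed below.
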

Why It Matters Now More Than Ever Before

The timing of the Discord teen-by-default settings rollout is no coincidence. In October 2025, Discord disclosed a data breach involving a third-party vendor that handled customer support and age verification. While Discord’s own systems were not breached, attackers accessed government ID photos submitted for age verification, limited billing data, and private support conversations. The incident reignited concerns about whether platforms can safely handle sensitive identity data—especially when minors are involved. For many users, that trust has not fully recovered. At the same time, regulators are tightening the screws. The U.S. Federal Trade Commission has publicly urged companies to adopt age verification tools faster. Platforms like Roblox are rolling out facial AI and ID-based age estimation, while Australia has gone further by banning social media use for children under 16. Similar discussions are underway across Europe.

Teen Safety Meets Public Skepticism

Not everyone is convinced. Online reaction, particularly on Reddit, has been harsh. Some users accuse Discord of hypocrisy, pointing to past breaches and questioning the wisdom of asking users to upload IDs to third-party vendors. Others see the changes as the beginning of the end for Discord’s open community model. There is also concern among game studios and online communities that rely heavily on Discord. If access becomes more restricted, some fear engagement could drop—or migrate elsewhere.

Giving Teens a Voice, Not Just Rules

To balance control with understanding, Discord is launching its first Teen Council, a group of 10–12 teens aged 13 to 17 who will advise the company on safety, product design, and policy decisions. The goal is to avoid guessing what teens need and instead hear it directly from them. This approach acknowledges a hard truth: safety tools only work if teens understand them and trust the platform using them.

A Necessary Shift, Even If It’s Uncomfortable

The Discord teen-by-default settings rollout reflects a broader industry reality. Platforms built for connection can no longer rely on self-reported ages and loose moderation. Governments, parents, and regulators are demanding stronger protections—and they are willing to step in if companies do not act. Discord’s approach won’t please everyone. But in today’s climate, doing nothing would be far riskier. Whether this move strengthens trust or fuels backlash will depend on how well Discord protects user data—and how honestly it continues to engage with its community.

Senegal Confirms Cyberattack on Agency Managing National ID and Biometric Data


The recent Senegal cyberattack on the Directorate of File Automation (DAF) has done more than disrupt government services. It has exposed how vulnerable the country’s most sensitive data systems really are, and why cybersecurity can no longer be treated as a technical issue handled quietly in the background. DAF, the government agency responsible for managing national ID cards, passports, biometric records, and electoral data, was forced to temporarily shut down operations after detecting a cyber incident. For millions of Senegalese citizens, this means delays in accessing essential identity services. For the country, it raises far bigger concerns about data security and national trust.

Senegal Cyberattack Brings Identity Services to a Standstill

In an official public notice, DAF confirmed that the production of national identity cards had been suspended following the cyberattack. Authorities assured citizens that personal data had not been compromised and that systems were being restored. However, as days passed and the DAF website remained offline, doubts began to grow. A Senegal cyberattack affecting such a critical agency is not something that can be brushed off quickly, especially when biometric and identity data are involved.

(Image source: X)

Hackers Claim Theft of Massive Biometric Data

The situation escalated when a ransomware group calling itself The Green Blood Group claimed responsibility for the attack. The group says it stole 139 terabytes of data, including citizen records, biometric information, and immigration documents. To back up its claims, the hackers released data samples on the dark web. They also shared an internal email from IRIS Corporation Berhad, a Malaysian company working with Senegal on its digital national ID system. In the email, a senior IRIS executive warned that two DAF servers had been breached and that card personalization data may have been accessed. Emergency steps were taken, including cutting network connections and shutting access to external offices. Even if authorities insist that data integrity remains intact, the scale of the alleged breach makes the Senegal cyberattack impossible to ignore.

Implications of the Senegal Cyberattack

DAF is not just another government office. It manages the digital identities of Senegalese citizens. Any compromise—real or suspected—creates long-term risks, from identity fraud to misuse of biometric data. What makes this incident more worrying is that it is not the first major breach. Just months ago, Senegal’s tax authority also suffered a cyberattack. Together, these incidents point to a larger problem: critical systems are being targeted, and attackers are finding ways in. Cybercrime groups are no longer experimenting in Africa. They are operating with confidence, speed, and clear intent. The Green Blood Group, which appeared only recently, has reportedly targeted just two countries so far—Senegal and Egypt. That alone should be taken seriously.

Disputes, Outsourcing, and Cybersecurity Blind Spots

The cyberattack also comes during a payment dispute between the Senegalese government and IRIS Corporation. While no official link has been confirmed, the situation highlights a key issue: when governments rely heavily on third-party vendors, cybersecurity responsibility can become blurred. The lesson from this Senegal cyberattack is simple and urgent. Senegal needs a dedicated National Cybersecurity Agency, along with a central team to monitor, investigate, and respond to cyber incidents across government institutions. Cyberattacks in Africa are no longer rare or unexpected. They are happening regularly, and they are hitting the most sensitive systems. Alongside better technology, organizations must focus on insider threats, staff awareness, and leadership accountability. If sensitive data from this attack is eventually leaked, the damage will be permanent. Senegal still has time to act—but only if this warning is taken seriously.

SmarterTools Breached by Own SmarterMail Vulnerabilities


SmarterTools was breached by hackers exploiting a vulnerability in its own SmarterMail software through an unknown virtual machine set up by an employee that wasn’t being updated. “Prior to the breach, we had approximately 30 servers/VMs with SmarterMail installed throughout our network,” SmarterTools COO Derek Curtis noted in a Feb. 3 post. “Unfortunately, we were unaware of one VM, set up by an employee, that was not being updated. As a result, that mail server was compromised, which led to the breach.” Network segmentation helped limit the breach, Curtis said, so the company website, shopping cart, account portal, and other services “remained online while we mitigated the issue. None of our business applications or account data were affected or compromised.”

SmarterTools Breach Comes Amid SmarterMail Vulnerability Warnings

Curtis said SmarterTools was compromised by the Warlock ransomware group, “and we have observed similar activity on customer machines.” In a blog post today, ReliaQuest researchers said they’ve observed SmarterMail vulnerability CVE-2026-23760 exploited in attacks “attributed with moderate-to-high confidence to ‘Storm-2603.’ This appears to be the first observed exploitation linking the China-based actor to the vulnerability as an entry point for its ‘Warlock’ ransomware operations.” ReliaQuest said other ransomware actors may be targeting a second SmarterMail vulnerability. “This activity coincides with a February 5, 2026 CISA warning that ransomware actors are exploiting a second SmarterMail vulnerability (CVE-2026-24423),” ReliaQuest said. “We observed probes for this second vulnerability alongside the Storm-2603 activity. However, because these attempts originated from different infrastructure, it remains unclear whether Storm-2603 is rotating IP addresses or a separate group is capitalizing on the same window.” “Specific attribution matters less than the operational reality: Internet-facing servers are being targeted by multiple vectors simultaneously,” ReliaQuest added. “Patching one entry point is insufficient if the adversary is actively pivoting to another or—worse—has already established persistence using legitimate tools.” Curtis said that once Warlock actors gain access, “they typically install files and wait approximately 6–7 days before taking further action. This explains why some customers experienced a compromise even after updating—the initial breach occurred prior to the update, but malicious activity was triggered later.”

SmarterTools Breach Limited by Linux Use

Curtis said the SmarterTools breach affected networks at the company office and a data center “which primarily had various labs where we do much of our QC work, etc.” “Because we are primarily a Linux company now, only about 12 Windows servers looked to be compromised and on those servers, our virus scanners blocked most efforts,” he wrote. “None of the Linux servers were affected.” He said SentinelOne “did a really good job detecting vulnerabilities and preventing servers from being encrypted.” He said that SmarterMail Build 9518 (January 15) contains fixes for the vulnerabilities, while Build 9526 (January 22) “complements those fixes with additional improvements and resolves lesser issues that have been brought to our attention and/or discovered during our internal security audits.” He said based on the company’s own breach and observations of customer incidents, Warlock actors “often attempt to take control of the Active Directory server and create new users. From there, they distribute files across Windows machines and attempt to execute files that encrypt data.” Common file names and programs abused by the threat actors have included the following (an illustrative detection sketch follows the list):
  • Velociraptor
  • JWRapper
  • Remote Access
  • SimpleHelp
  • WinRAR (older, vulnerable versions)
  • exe
  • dll
  • exe
  • Short, random filenames such as e0f8rM_0.ps1 or abc...
  • Random .aspx files
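
A rough, illustrative sweep along the following lines (an assumed approach, not SmarterTools or ReliaQuest guidance) can flag files matching the tool names above or the short, random script names described; expect false positives and treat hits as leads for investigation rather than confirmation of compromise.

```python
# Illustrative sweep only (not vendor guidance): flag files whose names
# match the abused tools listed above, or that resemble the short random
# script names described (e.g. e0f8rM_0.ps1 or random .aspx files).
import os
import re

SUSPECT_NAMES = {"velociraptor", "jwrapper", "simplehelp", "winrar"}
RANDOM_SCRIPT = re.compile(r"^[A-Za-z0-9_]{5,10}\.(ps1|aspx)$")


def suspicious(filename: str) -> bool:
    lower = filename.lower()
    return any(name in lower for name in SUSPECT_NAMES) or bool(
        RANDOM_SCRIPT.match(filename)
    )


def sweep(root: str) -> list[str]:
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for f in files:
            if suspicious(f):
                hits.append(os.path.join(dirpath, f))
    return hits


if __name__ == "__main__":
    for path in sweep(r"C:\Users"):  # hypothetical starting point
        print(path)
```
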
“We hope this provides a fuller summary of what we have seen and what customers can look for in their own environments,” Curtis said. “We also hope it demonstrates that we are taking every possible step to prevent issues like this from occurring again and making every effort to consolidate what we’re seeing and sharing with our customers.”

European Commission Hit by Mobile Infrastructure Data Breach


The European Commission's central infrastructure for managing mobile devices was hit by a cyberattack on January 30, the Commission has revealed. The announcement said the European Commission mobile cyberattack was limited by swift action, but cybersecurity observers are speculating that the incident was linked to another recent European incident involving Netherlands government targets that was revealed around the same time.

European Commission Mobile Cyberattack Detailed

The European Commission’s Feb. 5 announcement said its mobile management infrastructure “identified traces of a cyber-attack, which may have resulted in access to staff names and mobile numbers of some of its staff members. The Commission's swift response ensured the incident was contained and the system cleaned within 9 hours. No compromise of mobile devices was detected.” The Commission said it will “continue to monitor the situation. It will take all necessary measures to ensure the security of its systems. The incident will be thoroughly reviewed and will inform the Commission's ongoing efforts to enhance its cybersecurity capabilities.” The Commission provided no further details on the attack, but observers wondered if it was connected to another incident involving Dutch government targets that was revealed the following day.

Dutch Cyberattack Targeted Ivanti Vulnerabilities

In a Feb. 6 letter (download, in Dutch) to the Dutch Parliament, State Secretary for Justice and Security Arno Rutte said the Dutch Data Protection Authority (AP) and the Council for the Judiciary (Rvdr) had been targeted in an “exploitation of a vulnerability in Ivanti Endpoint Manager Mobile (EPMM).” Rutte said the Dutch National Cyber Security Centre (NCSC) was informed by Ivanti on January 29 about vulnerabilities in EPMM, which is used for managing and securing mobile devices, apps and content. On January 29, Ivanti warned that two critical zero-day vulnerabilities in EPMM were under attack. CVE-2026-1281 and CVE-2026-1340 are both 9.8-severity code injection flaws, affecting EPMM’s In-House Application Distribution and Android File Transfer Configuration features, and could allow unauthenticated remote attackers to execute arbitrary code on vulnerable on-premises EPMM installations. “Based on the information currently available, I can report that at least the AP and the Rvdr have been affected,” Rutte wrote. Work-related data of AP employees, such as names, business email addresses, and telephone numbers, “have been accessed by unauthorized persons,” he added. “Immediate measures were taken after the incident was discovered. In addition, the employees of the AP and the Rvdr have been informed. The AP has reported the incident to its data protection officer. The Rvdr has submitted a preliminary data breach notification to the AP.” NCSC is monitoring further developments with the Ivanti vulnerability and “is in close contact” with international partners, the letter said. Meanwhile, the Chief Information Officer of the Dutch government “is coordinating the assessment of whether there is a broader impact within the central government.”

European Commission Calls for Stronger Cybersecurity Controls

The European Commission’s statement noted that “As Europe faces daily cyber and hybrid attacks on essential services and democratic institutions, the Commission is committed to further strengthen the EU's cybersecurity resilience and capabilities.” To that end, the Commission introduced a Cybersecurity Package on January 20 to bolster the European Union's cyber defenses. “A central pillar of this initiative is the Cybersecurity Act 2.0, which introduces a framework for a Trusted ICT Supply Chain to mitigate risks from high-risk suppliers,” the EC statement said.

UAE Cyber Security Council Warns Stolen Logins Fuel Majority of Financial Cyberattacks


The UAE Cyber Security Council has issued a renewed warning about the growing threat of financial cybercrime, cautioning that stolen login credentials remain the most common entry point for attacks targeting individuals, companies, and institutions. According to the council, around 60% of financial cyberattacks begin with the theft of usernames and passwords, making compromised credentials a primary gateway for fraud, identity theft, and unauthorized access to sensitive financial information.

In comments to the Emirates News Agency (WAM), the UAE Cyber Security Council said that financial data remains one of the most sought-after assets for cybercriminals, particularly as digital banking and online transactions become more deeply embedded in daily life. The council stressed that while threat actors are increasingly sophisticated, many successful attacks still exploit basic security weaknesses that can be mitigated through stronger digital hygiene.

The council urged individuals and organizations to exercise greater caution when handling financial information online, emphasizing that simple preventive steps can reduce exposure to cyber risks. Users were advised against storing sensitive passwords on unsecured or inadequately protected devices, and were encouraged to regularly review privacy settings, remove untrusted applications, and ensure operating systems and software are kept up to date.
Also read: The Top 25 Women Cybersecurity Leaders in the UAE in 2025

Emirates News Agency Reports 60% of Attacks Begin with Compromised Credentials 

Speaking to the Emirates News Agency, the UAE Cyber Security Council highlighted two-factor authentication as one of the most effective defenses against unauthorized access. The council described multi-factor security controls as a critical layer of protection in an environment where stolen credentials are frequently traded, reused, or exploited across multiple platforms. “Every step taken to protect personal and financial data contributes directly to reducing the likelihood of falling victim to online fraud,” the council said.  The council also warned that cybercriminals often gain access to financial information indirectly. Rather than attacking banking systems outright, attackers may first compromise email or social media accounts and then use those accounts to reset passwords or harvest banking details. This method enables fraudsters to remain undetected while expanding their access to more sensitive systems.  To counter this, the UAE Cyber Security Council called on users to adopt safer digital habits, including using secure payment methods, avoiding the storage of financial data on mobile phones or personal computers, and monitoring bank accounts regularly for suspicious activity. The council also recommended enabling instant bank alerts to receive real-time notifications of account activity, allowing for rapid response and immediate reporting in the event of a breach. 
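
To illustrate why the council puts so much weight on two-factor authentication, the sketch below derives a standard time-based one-time password (RFC 6238) from a shared secret: an attacker holding only a stolen username and password cannot reproduce the rotating code. The secret shown is a widely used documentation example, not a real credential.

```python
# Minimal sketch of how a time-based one-time password (TOTP) code is
# derived (RFC 6238), showing why a stolen password alone is not enough
# when two-factor authentication is enabled.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)


print(totp("JBSWY3DPEHPK3PXP"))  # example secret; code changes every 30 seconds
```
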

Council Urges Stronger Digital Habits to Protect Banking and Financial Data 

The council further cautioned against engaging with fake advertisements, phishing messages, or unverified online entities. According to the Emirates News Agency, fraudsters are increasingly using advanced technologies to imitate the logos, branding, and messaging styles of banks and trusted financial institutions, making fraudulent communications harder to identify. Users were urged to carefully verify messages, avoid clicking on suspicious links, and refrain from sharing personal or financial information outside official banking channels.  As part of its ongoing weekly cybersecurity awareness efforts, the UAE Cyber Security Council emphasized the importance of constant vigilance to prevent attacks targeting financial and banking data. It noted that cyber threats may take the form of direct attacks on bank accounts or indirect identity theft through unauthorized access to personal accounts, often resulting in financial losses.  The council also advised against using open or free Wi-Fi networks for banking activities or financial transactions, warning that such networks are often unsecured and vulnerable to interception. It stressed the importance of creating strong, unique passwords for banking and financial service accounts, noting that password reuse increases the risk of compromise. 
Also read: UAE Cyber Security Council Flags 70% Smart Home Devices as Vulnerable

Why TikTok’s Addictive Design Is Now a Regulatory Problem


The European Commission’s preliminary finding that TikTok addictive design breaches the Digital Services Act (DSA) marks a significant shift in how regulators view social media responsibility, especially when it comes to children and vulnerable users. This is not a symbolic warning. It is a direct challenge to the design choices that have powered TikTok’s explosive growth. According to the Commission, TikTok’s core features—including infinite scroll, autoplay, push notifications, and a highly personalised recommender system—are engineered to keep users engaged for as long as possible. The problem, regulators argue, is that TikTok failed to seriously assess or mitigate the harm these features can cause, particularly to minors.

TikTok Addictive Design Fuels Compulsive Use

The Commission’s risk assessment found that TikTok did not adequately evaluate how its design impacts users’ physical and mental wellbeing. Features that constantly “reward” users with new content can push people into what experts describe as an “autopilot mode,” where scrolling becomes automatic rather than intentional. Scientific research reviewed by the Commission links such design patterns to compulsive behaviour and reduced self-control. Despite this, TikTok reportedly overlooked key indicators of harmful use, including how much time minors spend on the app at night, how frequently users reopen the app, and other behavioural warning signs. This omission matters. Under the Digital Services Act, platforms are expected not only to identify risks but to act on them. In this case, the Commission believes TikTok failed on both counts.
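
As a simplified illustration of the kind of behavioural indicators the Commission says were overlooked, the sketch below computes night-time usage and reopen frequency from a short session log; the timestamps, the 23:00 to 06:00 night window, and the per-day framing are assumptions for illustration, not TikTok's or the Commission's definitions.

```python
# Illustrative only: two of the behavioural indicators mentioned above,
# computed from a hypothetical one-day list of (session_start, session_end)
# timestamps for a single user.
from datetime import datetime, timedelta

sessions = [  # hypothetical session log
    (datetime(2026, 2, 10, 23, 40), datetime(2026, 2, 11, 0, 55)),
    (datetime(2026, 2, 11, 1, 10), datetime(2026, 2, 11, 1, 35)),
    (datetime(2026, 2, 11, 7, 30), datetime(2026, 2, 11, 7, 45)),
]

NIGHT_HOURS = {23, 0, 1, 2, 3, 4, 5}  # assumed 23:00-06:00 window


def night_minutes(start: datetime, end: datetime) -> int:
    """Minutes of a session that fall inside the night window."""
    mins, t = 0, start
    while t < end:
        if t.hour in NIGHT_HOURS:
            mins += 1
        t += timedelta(minutes=1)
    return mins


total_night = sum(night_minutes(s, e) for s, e in sessions)
reopens = len(sessions)  # how often the app was reopened that day

print(f"night-time use: {total_night} min, app opened {reopens} times")
```
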

Risk Mitigation Measures Fall Short

The investigation also found that TikTok’s current safeguards do little to counter the risks created by its addictive design. Screen time management tools are reportedly easy to dismiss and introduce minimal friction, making them ineffective in helping users actually reduce usage. Parental controls fare no better. While they exist, the Commission notes that they require extra time, effort, and technical understanding from parents, barriers that significantly limit their real-world impact. At this stage, regulators believe that cosmetic fixes are not enough. The Commission has stated that TikTok may need to change the basic design of its service, including disabling infinite scroll over time, enforcing meaningful screen-time breaks (especially at night), and reworking its recommender system. These findings are preliminary, but the message is clear: responsibility cannot be optional when a platform’s design actively shapes user behaviour.

How Governments View Social Media Harm

The scrutiny of TikTok addictive design comes amid a broader global reassessment of social media’s impact on young users. Countries including Australia, Spain, and the United Kingdom have taken steps in recent months to restrict or ban social media use by minors, citing growing concerns over screen time and mental health. Europe’s stance reflects a wider regulatory trend: moving away from asking platforms to self-police, and toward enforcing accountability through law. This is consistent with other digital policy actions across the region, including investigations into platform transparency, data access for researchers, and online safety failures.

What Happens Next for TikTok

TikTok now has the right to review the Commission’s findings and respond in writing. The European Board for Digital Services will also be consulted. If the Commission ultimately confirms its position, it could issue a formal non-compliance decision, opening the door to fines of up to 6% of TikTok’s global annual turnover. While the outcome is not yet final, the direction is unmistakable. As Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, stated:
“Social media addiction can have detrimental effects on the developing minds of children and teens. The Digital Services Act makes platforms responsible for the effects they can have on their users. In Europe, we enforce our legislation to protect our children and our citizens online.”
The TikTok case is no longer just about one app. It is about whether growth-driven platform design can continue unchecked, or whether accountability is finally catching up.

Singapore Launches Largest-Ever Cyber Defense Operation After UNC3886 Targets All Major Telcos


Singapore has launched its largest-ever coordinated cyber defense operation following a highly targeted cyberattack that affected all four of the country’s major telecommunications operators. The attack was attributed to the advanced threat actor UNC3886, according to Minister for Digital Development and Information and Minister-in-charge of Cybersecurity and Smart Nation Group, Josephine Teo. She disclosed the details on Feb. 9 while speaking at an engagement event for cyber defenders involved in the national response effort, codenamed Operation Cyber Guardian. Teo confirmed that the UNC3886 cyberattack in Singapore targeted M1, Singtel, StarHub, and Simba.
Also read: ‘UNC3886 is Attacking Our Critical Infrastructure Right Now’: Singapore’s National Security Lawmaker

Decoding the UNC3886 Cyberattack in Singapore 

Once suspicious activity was detected, the affected operators immediately alerted the Infocomm Media Development Authority (IMDA) and the Cyber Security Agency of Singapore (CSA). CSA, IMDA, and several other government bodies then launched Operation Cyber Guardian to contain the breach.   The operation involved more than 100 cyber defenders from six government agencies, including CSA, IMDA, the Singapore Armed Forces’ Digital and Intelligence Service, the Centre for Strategic Infocomm Technologies, the Internal Security Department, and GovTech, all working closely with the telcos.  Teo said the response has, for now, managed to limit the attackers’ activities. Although the attackers accessed a small number of critical systems in one instance, they were unable to disrupt services or move deeper into the telco networks. “There is also no evidence thus far to suggest that the attackers were able to access or steal sensitive customer data,” she said. 

UNC3886 Cyberattack Posed Severe Risks to Essential Services 

Despite the containment, Teo warned against complacency. She stressed that the cyberattack in Singapore highlighted the presence of persistent threat actors capable of targeting critical infrastructure. She added that sectors such as power, water, and transport could also face similar threats and urged private-sector operators to remain vigilant. The government, Teo said, will continue to work closely with critical infrastructure operators through cybersecurity exercises and the sharing of classified threat intelligence to enable early detection and faster response. “But even as we try our best to prevent and detect cyber-attacks, we may not always be able to stop them in time,” she said. “All of us must also be prepared for the threat of disruption.”

The UNC3886 operation was first revealed publicly in July 2025 by Minister for Home Affairs and Coordinating Minister for National Security K Shanmugam. Teo described the telecommunications cyberattack as a “potentially more serious threat” than previous cyber incidents faced by Singapore, noting that it targeted systems directly responsible for delivering essential public services. “The consequences could have been more severe,” she said. “If the attack went far enough, it could have allowed the attacker to one day cut off telecoms or internet services.”

Investigations later revealed that the UNC3886 cyberattack in Singapore was a deliberate, targeted, and well-planned campaign aimed specifically at the telco sector. The attackers exploited a zero-day vulnerability, a previously unknown flaw for which no patch was available at the time. Teo likened this to “finding a new key that no one else had found, to unlock the doors to our telcos’ information system and networks.” After gaining access, UNC3886 reportedly stole a small amount of technical data and used advanced techniques to evade detection and erase forensic traces. Beyond espionage, the group was assessed to have the capability to disrupt telecommunications and internet services, which could have had knock-on effects on banking, finance, transport, and medical services.

Telcos and Government Strengthen Defenses Against Persistent Threats 

In a joint statement, M1, Singtel, StarHub, and Simba said they face a wide range of cyber threats, including distributed denial-of-service attacks, malware, phishing, and persistent campaigns.   To counter these risks, the telcos said they have implemented defense-in-depth measures and carried out prompt remediation when vulnerabilities are identified. They also emphasized close collaboration with government agencies and industry experts to strengthen resilience. “Protecting our critical infrastructure is a top priority. We will continue to keep pace with the evolving cyber threat landscape and update our measures accordingly,” the statement said.  UNC3886 is a China-linked cyber espionage actor classified as an Advanced Persistent Threat. The “UNC” label indicates that the group remains uncategorized. Cybersecurity researchers have observed that UNC3886 frequently targets network devices and virtualization technologies, often exploiting zero-day vulnerabilities. The group primarily focuses on defense, technology, and telecommunication organizations in the United States and Asia. 

Illinois Man Charged in Massive Snapchat Hacking Scheme Targeting Hundreds of Women


The Snapchat hacking investigation involving an Illinois man accused of stealing and selling private images of hundreds of women is not just another cybercrime case; it is a reminder of how easily social engineering can be weaponized against trust, privacy, and young digital users. Federal prosecutors say the case exposes a disturbing intersection of identity theft, online exploitation, and misuse of social media platforms that continues to grow largely unchecked. Kyle Svara, a 26-year-old from Oswego, Illinois, has been charged in federal court in Boston for his role in a wide-scale Snapchat account hacking scheme that targeted nearly 600 women. According to court documents, Svara used phishing and impersonation tactics to steal Snapchat access codes, gain unauthorized account access, and extract nude or semi-nude images that were later sold or traded online.

Snapchat Hacking Investigation Reveals Scale of Phishing Abuse

At the core of the Snapchat hacking investigation is a textbook example of social engineering. Between May 2020 and February 2021, Svara allegedly gathered emails, phone numbers, and Snapchat usernames using online tools and research techniques. He then deliberately triggered Snapchat’s security system to send one-time access codes to victims. Using anonymized phone numbers, Svara allegedly impersonated a Snap Inc. representative and texted more than 4,500 women, asking them to share their security codes. About 570 women reportedly complied—handing over access to their accounts without realizing they were being manipulated. Once inside, prosecutors say Svara accessed at least 59 Snapchat accounts and downloaded private images. These images were allegedly kept, sold, or exchanged on online forums. The investigation found that Svara openly advertised his services on platforms such as Reddit, offering to “get into girls’ snap accounts” for a fee or trade.

Snapchat Hacking for Hire

What makes this Snapchat hacking case especially troubling is that it was not driven solely by curiosity or personal motives. Investigators allege that Svara operated as a hacking-for-hire service. One of his co-conspirators was Steve Waithe, a former Northeastern University track and field coach, who allegedly paid Svara to hack Snapchat accounts of women he coached or knew personally. Waithe was convicted in November 2023 on multiple counts, including wire fraud and cyberstalking, and sentenced to five years in prison. The link between authority figures and hired cybercriminals adds a deeply unsettling dimension to the case, one that highlights how power dynamics can be exploited through digital tools. Beyond hired jobs, Svara also allegedly targeted women in and around Plainfield, Illinois, as well as students at Colby College in Maine, suggesting a pattern of opportunistic and localized targeting.

Why the Snapchat Hacking Investigation Matters

This Snapchat hacking investigation underscores a critical cybersecurity truth: technical defenses mean little when human trust is exploited. The victims did not lose access because Snapchat’s systems failed; they were deceived into handing over the keys themselves. It also raises serious questions about accountability on social platforms. While Snapchat provides security warnings and access codes, impersonation attacks continue to succeed at scale. The ease with which attackers can pose as platform representatives points to a larger problem of user awareness and platform-level safeguards. The case echoes other recent investigations, including the indictment of a former University of Michigan football coach accused of hacking thousands of athlete accounts to obtain private images. Together, these cases reveal a troubling pattern: female student athletes being specifically researched, targeted, and exploited.

Legal Consequences

Svara faces charges including aggravated identity theft, wire fraud, computer fraud, conspiracy, and false statements related to child pornography. If convicted, he could face decades in prison, with a cumulative maximum sentence of 32 years. His sentencing is scheduled for May 18. Federal authorities have urged anyone who believes they may be affected by this Snapchat hacking scheme to come forward. More than anything, this case serves as a warning. The tools used were not sophisticated exploits or zero-day vulnerabilities—they were lies, impersonation, and manipulation. As this Snapchat hacking investigation shows, the most dangerous cyber threats today often rely on human error, not broken technology.

What CISA KEV Is and Isn’t – and a Tool to Help Guide Security Teams


A new paper gives an insider’s perspective into CISA’s Known Exploited Vulnerability catalog – and also offers a free tool to help security teams use the CISA KEV catalog more effectively. The paper, by former CISA KEV Section Chief and current runZero VP of Security Research Tod Beardsley, applies commonly used enrichment signals like CVSS, EPSS and SSVC, public exploit tooling from Metasploit and Nuclei, MITRE ATT&CK mappings, and “time-sequenced relationships” to help security teams prioritize vulnerabilities based on urgency. The paper’s findings led to the development of KEV Collider, a web application and dataset “that encourages readers to explore, recombine, and validate KEV enrichment data to better leverage the KEV in their daily operations,” the paper said. One interesting finding in the paper is that only 32% of CISA KEV vulnerabilities are “immediately exploitable for initial access.”

CISA KEV Is Not a List of the Worst Vulnerabilities

CISA KEV is not a list of the worst vulnerabilities, and the criteria for inclusion in the KEV catalog are perhaps surprisingly narrow. “The KEV is often misunderstood as a government-curated list of the most severe vulnerabilities ever discovered, or as a catalog of hyper-critical remote code execution flaws actively being used by foreign adversaries against U.S. government systems,” the paper said. “This casual interpretation is incorrect on several counts. While KEV-listed vulnerabilities do represent confirmed exploitation, the catalog exists primarily as an operational prioritization tool rather than as a comprehensive inventory of exploited vulnerabilities.” Inclusion in the KEV Catalog is limited to vulnerabilities that meet four conditions:
  • The vulnerability must have an assigned Common Vulnerabilities and Exposures (CVE) identifier.
  • There must be a reasonable mitigation. “This means that vulnerabilities with no realistic path to mitigation will not reach the KEV,” the paper said. The lack of a straightforward fix has kept CVE-2022-21894, aka “BlackLotus,” off the list even though the NSA has provided mitigation guidance.
  • There must be evidence of exploitation. “This exploitation must be observed by CISA, either directly or through trusted reporting channels,” the paper said.
  • The vulnerability must be relevant to the U.S. Federal Civilian Executive Branch (FCEB).
CISA KEV is not the only list of known exploited vulnerabilities, the paper said. Another is the VulnCheck KEV, which is three times bigger than CISA KEV. “It often adds vulnerabilities to its KEV in closer-to-real-time as exploitation evidence surfaces, sometimes beating the CISA KEV as first to publish exploitation notifications,” the paper said – and would also be an interesting place to apply the paper’s criteria.

Nor is CISA KEV a list of the most severe vulnerabilities: “the vulnerabilities there are not all unauthenticated, remotely exploitable, initial intrusion vulnerabilities,” the paper said. Looking at just the last 12 vulnerabilities added to the KEV catalog in December, only four met the criteria for a “straight shot RCE bug.” Those criteria are:
  • Access Vector of “Network” (as opposed to “Adjacent,” “Local,” or “Physical”)
  • Privileges Required of “None” (as opposed to “Low” or “High”)
  • User Interaction of “None” (as opposed to “Required”)
  • Integrity Impact of “High” (as opposed to “None” or “Low”)
“These are the vulnerabilities that listen on an internet socket, don’t require a login, don’t require the victim to act, and the attacker ends up with total control over the affected system,” the paper said. Interestingly, the four straight-shot RCE vulnerabilities are all rated Critical, while the rest are rated High or Medium. Out of 1,488 KEV vulnerabilities as of January 14, 2026, only 483, or 32%, “are useful for immediate initial access,” the paper said. Using the Straight-Shot RCE filter in KEV Collider, 494 of 1,507 KEV vulnerabilities in the catalog as of Feb. 6 qualify, or 32.7%.

EPSS scores suggest that some of the vulnerabilities have a low probability of being exploited again in the future. There are 545 KEV vulnerabilities with very high EPSS scores – and 353 in the sub-10% category. Examining Metasploit Framework exploits, 464 KEV vulnerabilities were associated with at least one Metasploit module. “This means that just about a third of all KEVs are trivially exploitable today, as Metasploit modules are free, easy to use, and well-understood by attackers and defenders alike,” the paper said. There were 398 Nuclei templates “suitable for testing KEV vulnerabilities,” and 235 vulnerabilities with both Metasploit and Nuclei exploits. The paper also looked at the correlation of MITRE ATT&CK mappings with Metasploit and Nuclei exploit development and found that vulnerabilities associated with T1190 (Exploit Public-Facing Application) and T1059 (Command and Scripting Interpreter) “are more likely to attract the attention of public exploit developers.”

Also read: CISA Silently Updates Vulnerabilities Exploited by Ransomware Groups
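For teams that want to reproduce this kind of triage, the following is a minimal sketch of how the four straight-shot RCE criteria can be applied programmatically. It assumes a locally saved, KEV-style export that has already been enriched with CVSS 3.1 vector strings; the kev_enriched.json filename and the record fields are placeholders for illustration, not the KEV Collider’s actual schema.

```python
import json

# The four "straight shot RCE" criteria, expressed as CVSS 3.1 vector components:
# Attack Vector: Network, Privileges Required: None, User Interaction: None, Integrity: High.
REQUIRED = {"AV": "N", "PR": "N", "UI": "N", "I": "H"}

def parse_cvss_vector(vector: str) -> dict:
    """Turn 'CVSS:3.1/AV:N/AC:L/...' into {'AV': 'N', 'AC': 'L', ...}."""
    parts = [p for p in vector.split("/") if ":" in p and not p.startswith("CVSS")]
    return dict(p.split(":", 1) for p in parts)

def is_straight_shot_rce(vector: str) -> bool:
    metrics = parse_cvss_vector(vector)
    return all(metrics.get(key) == value for key, value in REQUIRED.items())

if __name__ == "__main__":
    # Hypothetical enriched export: a list of records with 'cveID' and 'cvss_v31_vector'.
    with open("kev_enriched.json") as f:
        records = json.load(f)

    hits = [r["cveID"] for r in records if is_straight_shot_rce(r.get("cvss_v31_vector", ""))]
    print(f"{len(hits)} of {len(records)} KEV entries match the straight-shot RCE profile")
```

Swapping in other signals the paper discusses, such as an EPSS threshold or the presence of a Metasploit module, follows the same filtering pattern.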

Perfect Vulnerability Coverage ‘Unrealistic’

The paper noted that “perfect vulnerability coverage is an increasingly unrealistic goal, particularly when organizations are constrained by finite tooling, staffing, or budget. This is even true when the focus is narrowed to merely the CISA KEV catalog.” “Many KEVs now affect assets that are difficult to inventory, difficult to scan, or difficult to patch using conventional enterprise tooling,” and can’t be covered by a single product. The paper’s goal is to help security practitioners “reason about uncertainty and prioritize effort when full coverage is unattainable. In practice, organizations must decide how to sequence remediation, where to apply detection and monitoring first, and when to escalate resource allocation to meet particularly aggressive deadlines.” All source JSON files used by the KEV Collider application are available in a public GitHub repository.

The Cyber Express Weekly Roundup: Global Cybersecurity Incidents and Policy Shifts


As the first week of February 2026 concludes, The Cyber Express weekly roundup examines the developments shaping today’s global cybersecurity landscape. Over the past several days, governments, technology companies, and digital platforms have confronted a wave of cyber incidents ranging from disruptive attacks on public infrastructure to large-scale data exposures and intensifying regulatory scrutiny of artificial intelligence systems.  This week’s cybersecurity reporting reflects a broader pattern: rapid digital expansion continues to outpace security maturity. High-profile breaches, misconfigured cloud environments, and powerful AI tools are creating both defensive opportunities and significant new risks.  

The Cyber Express Weekly Roundup 

Cyberattack Disrupts Spain’s Ministry of Science Operations 

Spain’s Ministry of Science, Innovation, and Universities confirmed that a cyberattack forced a partial shutdown of its IT systems, disrupting digital services relied upon by researchers, universities, students, and businesses nationwide. Initially described as a technical incident, the disruption was later acknowledged as a cybersecurity event that required the temporary closure of the ministry’s electronic headquarters. Read more...

OpenAI Expands Controlled Access to Advanced Cyber Defense Models 

OpenAI announced the launch of Trusted Access for Cyber, a new initiative designed to strengthen defensive cybersecurity capabilities while limiting the potential misuse of highly capable AI systems. The program provides vetted security professionals with controlled access to advanced models such as GPT-5.3-Codex, which OpenAI identifies as its most cyber-capable reasoning model to date. Read more...

French Authorities Escalate Investigations Into X and Grok AI 

French police raided offices belonging to the social media platform X as European investigations expanded into alleged abuses involving its Grok AI chatbot. Authorities are examining claims that Grok generated nonconsensual sexual deepfakes, child sexual abuse material (CSAM), and content denying crimes against humanity, including Holocaust denial. Read more...

AI-Generated Platform Moltbook Exposes Millions of Credentials 

Security researchers disclosed that Moltbook, a viral social network built entirely using AI-generated code, exposed 1.5 million API authentication tokens, 35,000 user email addresses, and thousands of private messages due to a database misconfiguration. Wiz Security identified the issue after discovering an exposed Supabase API key embedded in client-side JavaScript, which granted unrestricted access to the platform’s production database. Read more...

Substack Discloses Breach Months After Initial Compromise 

Substack revealed that attackers accessed user email addresses, phone numbers, and internal metadata in October 2025, though the breach went undetected until February 3, 2026. CEO Chris Best notified affected users, stating, “I’m incredibly sorry this happened. We take our responsibility to protect your data and your privacy seriously, and we came up short here.” Read more...

Weekly Takeaway 

This Cyber Express weekly roundup highlights a clear takeaway for the global cybersecurity community: digital expansion without equivalent security investment increases organizational and systemic risk. AI-built platforms, advanced security tooling, and large-scale public-sector systems are being deployed rapidly, often without adequate access controls, monitoring, or testing. As recent incidents show, these gaps lead to data exposure, prolonged breach detection, and service disruption. To reduce risk, organizations must embed security controls, clear ownership, and continuous monitoring into system design and daily operations, rather than relying on post-incident fixes or policy statements.

Spain Ministry of Science Cyberattack Triggers Partial IT Shutdown


The Spain Ministry of Science cyberattack has caused a partial shutdown of government IT systems, disrupting services used daily by researchers, universities, students, and businesses across the country. While officials initially described the issue as a “technical incident,” mounting evidence and confirmations from Spanish media now point to a cyberattack involving potentially sensitive academic, personal, and financial data. The Ministry of Science, Innovation and Universities plays a central role in Spain’s research and higher education ecosystem. Any disruption to its digital infrastructure has wide-reaching consequences, making this incident far more serious than a routine systems outage.

Official Notice Confirms System Closure and Suspended Procedures

In a public notice published on its electronic headquarters, the ministry acknowledged the disruption and announced a temporary shutdown of key digital services. “As a result of a technical incident that is currently being assessed, the electronic headquarters of the Ministry of Science, Innovation and Universities has been partially closed.” The notice further stated: “All ongoing administrative procedures are suspended, safeguarding the rights and legitimate interests of all persons affected by said temporary closure, resulting in an extension of all deadlines for the various procedures affected.” The ministry added that deadline extensions would remain in place “until the complete resolution of the aforementioned incident occurs,” citing Article 32 of Law 39/2015. While procedural safeguards are welcome, the lack of early transparency around the nature of the incident raised concerns among affected users.

Spain Ministry of Science Cyberattack: Hacker Claims 

Those concerns intensified when a threat actor using the alias “GordonFreeman” appeared on underground forums claiming responsibility for the Spain Ministry of Science cyberattack. The attacker alleged that they exploited a critical Insecure Direct Object Reference (IDOR) vulnerability, granting “full-admin-level access” to internal systems. Data samples shared online—though not independently verified—include screenshots of official documents, email addresses, enrollment applications, and internal records. Spanish media outlet OKDIARIO reported that a ministry spokesperson confirmed the IT disruption was linked to a cyberattack and that the electronic headquarters had been shut down to assess the scope of the data breach. Although the forum hosting the alleged leak is now offline and the data has not resurfaced elsewhere, the screenshots appear legitimate. If confirmed, this would represent a serious breakdown in access control protections.
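For context on the class of flaw being claimed, the sketch below illustrates the general IDOR pattern, and its fix, in deliberately simplified form. It is purely illustrative: the record store, identifiers, and functions are invented, and nothing here reflects the ministry’s actual systems, whose internal details have not been published.

```python
# Illustrative only: a generic Insecure Direct Object Reference (IDOR) pattern.
RECORDS = {
    101: {"owner": "alice", "document": "enrollment_application.pdf"},
    102: {"owner": "bob", "document": "passport_scan.pdf"},
}

def get_record_vulnerable(record_id: int, requesting_user: str) -> dict:
    # IDOR: the record is returned based solely on the ID supplied by the client,
    # with no check that the requester actually owns it.
    return RECORDS[record_id]

def get_record_fixed(record_id: int, requesting_user: str) -> dict:
    record = RECORDS[record_id]
    # Authorization check: the server verifies ownership before returning data.
    if record["owner"] != requesting_user:
        raise PermissionError("access denied")
    return record

# "alice" requesting record 102 succeeds in the vulnerable version
# and is rejected in the fixed one.
print(get_record_vulnerable(102, "alice"))   # leaks bob's document
try:
    get_record_fixed(102, "alice")
except PermissionError as err:
    print(err)                               # access denied
```

The claimed “full-admin-level access” would correspond to the vulnerable path: object identifiers alone, rather than server-side authorization checks, deciding what a requester can see.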

Alleged Data Exposure Raises Serious Privacy Concerns

According to claims made by the attacker, the stolen data includes highly sensitive information related to students and researchers, such as:
  • Scanned ID documents, NIEs, and passports
  • Email addresses
  • Payment receipts showing IBAN numbers
  • Academic records, including transcripts and apostilled degrees
  • Curricula containing private personal data
If even a portion of this data is authentic, the Spain Ministry of Science cyberattack could expose thousands of individuals to identity theft, financial fraud, and long-term privacy risks. Academic data, in particular, is difficult to replace or invalidate once leaked.

Spain’s Growing Cybercrime Problem

This Spain Ministry of Science cyberattack does not exist in isolation. Cybercrime now accounts for more than one in six recorded criminal offenses in Spain. Attacks have increased by 35% this year, with more than 45,000 incidents reported daily. Between late February and early March, attacks surged by 750% compared to the same period last year. During the week of 5–11 March 2025, Spain was the most targeted country globally, accounting for 22.6% of all cyber incidents, surpassing even the United States. Two factors continue to drive this trend. Rapid digital transformation, fueled by EU funding, has often outpaced cybersecurity investment. At the same time, ransomware attacks, up 120%, have increasingly targeted organizations with weak defenses, particularly public institutions and SMEs.

The Spain Ministry of Science cyberattack underscores a hard truth: digital services without strong security become liabilities, not efficiencies. As public administrations expand online access, cybersecurity can no longer be treated as a secondary concern or an afterthought. Until Spain addresses systemic gaps in public-sector cybersecurity, incidents like the Spain Ministry of Science cyberattack will continue, not as exceptions, but as warnings ignored for too long.

La Sapienza Cyberattack Forces Italy’s Largest University Offline


Rome’s Sapienza University, Europe’s largest university by number of on-campus students, is grappling with a major IT outage following a cyberattack on La Sapienza that disrupted digital services across the institution. The La Sapienza cyberattack has forced the university to take critical systems offline as officials work to contain the incident and restore operations.  The university publicly acknowledged the cyberattack on La Sapienza earlier this week through a social media statement, confirming that its IT infrastructure “has been the target of a cyberattack.” As an immediate response, Sapienza ordered a shutdown of its network systems “to ensure the integrity and security of data,” a decision that triggered widespread operational disruptions. 

Updates to the La Sapienza Cyberattack

Sapienza University of Rome enrolls more than 112,500 students, making the impact of the outage particularly significant. Following the incident, university officials notified Italian authorities and established a dedicated technical task force to coordinate remediation and recovery efforts. As of the latest updates, the university’s official website remains offline, and recovery status updates have been communicated primarily through social media channels, including Instagram. To mitigate disruption to students, the university announced the creation of temporary in-person “infopoints.” These locations are intended to provide access to information normally available through digital systems and databases that remain unavailable due to the cyberattack on La Sapienza.

Cyberattack on La Sapienza Linked to BabLock Malware 

While the university has not publicly confirmed the technical nature of the incident or identified those responsible, Italian newspaper Corriere Della Sera reports that the La Sapienza cyberattack bears the hallmarks of a ransomware operation. According to the outlet, the attack is allegedly linked to a previously unknown, pro-Russian threat actor known as “Femwar02.”  The reporting suggests the attackers used BabLock malware, also referred to as Rorschach, based on observed malware characteristics and operational behavior. BabLock malware first emerged in 2023 and has attracted researchers' attention for its unusually fast encryption speeds and extensive customization capabilities.  Sources cited by Corriere della Sera claim that the systems at Sapienza were encrypted and that a ransom demand exists. However, university staff reportedly have not opened the ransom note, as doing so would trigger a 72-hour countdown timer. As a result, the ransom amount has not been disclosed. This tactic, designed to pressure victims into rapid negotiations, is increasingly common in ransomware campaigns using BabLock malware. 

Investigation and Recovery Efforts Continue 

In response to the cyberattack on La Sapienza, university technicians are working alongside Italy’s national Computer Security Incident Response Team (CSIRT), specialists from the Agenzia per la Cybersicurezza Nazionale (ACN), and the Polizia Postale. Their primary objective is to restore systems using backups, which, according to reports, were not affected by the attack.  Italy’s national cybersecurity agency has confirmed that it is investigating the incident. However, neither Sapienza University nor Italian authorities have publicly verified whether the attack involved ransomware or whether any data was exfiltrated. This distinction is critical: encryption-only incidents primarily cause operational disruption, while confirmed data theft can trigger additional legal and regulatory obligations under the EU’s General Data Protection Regulation (GDPR). 

OpenAI Launches Trusted Access for Cyber to Expand AI-Driven Defense While Managing Risk


OpenAI has announced a new initiative aimed at strengthening digital defenses while managing the risks that come with capable artificial intelligence systems. The effort, called Trusted Access for Cyber, is part of a broader strategy to enhance baseline protection for all users while selectively expanding access to advanced cybersecurity capabilities for vetted defenders.   The initiative centers on the use of frontier models such as GPT-5.3-Codex, which OpenAI identifies as its most cyber-capable reasoning model to date, and tools available through ChatGPT. 

What is Trusted Access for Cyber? 

Over the past several years, AI systems have evolved rapidly. Models that once assisted with simple tasks like auto-completing short sections of code can now operate autonomously for extended periods, sometimes hours or even days, to complete complex objectives.   In cybersecurity, this shift is especially important. According to OpenAI, advanced reasoning models can accelerate vulnerability discovery, support faster remediation, and improve resilience against targeted attacks. At the same time, these same capabilities could introduce serious risks if misused.  Trusted Access for Cyber is intended to unlock the defensive potential of models like GPT-5.3-Codex while reducing the likelihood of abuse. As part of this effort, OpenAI is also committing $10 million in API credits to support defensive cybersecurity work.

Expanding Frontier AI Access for Cyber Defense 

OpenAI argues that the rapid adoption of frontier cyber capabilities is critical to making software more secure and raising the bar for security best practices. Highly capable models accessed through ChatGPT can help organizations of all sizes strengthen their security posture, shorten incident response times, and better detect cyber threats. For security professionals, these tools can enhance analysis and improve defenses against severe and highly targeted attacks.  The company notes that many cyber-capable models will soon be broadly available from a range of providers, including open-weight models. Against that backdrop, OpenAI believes it is essential that its own models strengthen defensive capabilities from the outset. This belief has shaped the decision to pilot Trusted Access for Cyber, which prioritizes placing OpenAI’s most capable models in the hands of defenders first.  A long-standing challenge in cybersecurity is the ambiguity between legitimate and malicious actions. Requests such as “find vulnerabilities in my code” can support responsible patching and coordinated disclosure, but they can also be used to identify weaknesses for exploitation. Because of this overlap, restrictions designed to prevent harm have often slowed down good-faith research. OpenAI says the trust-based approach is meant to reduce that friction while still preventing misuse.

How Trusted Access for Cyber Works 

Frontier models like GPT-5.3-Codex are trained with protection methods that cause them to refuse clearly malicious requests, such as attempts to steal credentials. In addition to this safety training, OpenAI uses automated, classifier-based monitoring to detect potential signals of suspicious cyber activity. During this calibration phase, developers and security professionals using ChatGPT for cybersecurity tasks may still encounter limitations.  Trusted Access for Cyber introduces additional pathways for legitimate users. Individual users can verify their identity through a dedicated cyber access portal. Enterprises can request trusted access for entire teams through their OpenAI representatives. Security researchers and teams that require even more permissive or cyber-capable models to accelerate defensive work can apply to an invite-only program. All users granted trusted access must continue to follow OpenAI’s usage policies and terms of use.  The framework is designed to prevent prohibited activities, including data exfiltration, malware creation or deployment, and destructive or unauthorized testing, while minimizing unnecessary barriers for defenders. OpenAI expects both its mitigation strategies and Trusted Access for Cyber itself to evolve as it gathers feedback from early participants. 

Scaling the Cybersecurity Grant Program 

To further support defensive use cases, OpenAI is expanding its Cybersecurity Grant Program with a $10 million commitment in API credits. The program is aimed at teams with a proven track record of identifying and remediating vulnerabilities in open source software and critical infrastructure systems.   By pairing financial support with controlled access to advanced models like GPT-5.3-Codex through ChatGPT, OpenAI seeks to accelerate legitimate cybersecurity research without broadly exposing powerful tools to misuse. 

Why End-of-Support Edge Devices Have Become a National Security Risk


The growing cyber threat from End-of-Support edge devices is no longer a technical inconvenience; it is a national cybersecurity liability. With threat actors actively exploiting outdated infrastructure, federal agencies can no longer afford to treat unsupported edge technology as a future problem. The latest Binding Operational Directive (BOD 26-02) makes one thing clear: mitigating risk from End-of-Support edge devices is now mandatory, measurable, and time-bound. This directive, issued under the authority of the Department of Homeland Security (DHS) and enforced by the Cybersecurity and Infrastructure Security Agency (CISA), forces Federal Civilian Executive Branch (FCEB) agencies to confront a long-standing weakness at the network perimeter: devices that no longer receive vendor support but still sit exposed to the internet.

Why End-of-Support Edge Devices Are a High-Risk Blind Spot

End-of-Support (EOS) edge devices are particularly dangerous because of where they live. Firewalls, routers, VPN gateways, load balancers, and network security appliances operate at the boundary of federal networks. When these devices stop receiving patches, firmware updates, or CVE fixes, they become ideal entry points for attackers. CISA has already observed widespread exploitation campaigns targeting EOS edge devices. Advanced threat actors are using them not just for initial access, but as pivot points into identity systems and internal networks. In simple terms, one outdated edge device can undermine an entire Zero Trust strategy. The uncomfortable truth is that agencies that delay replacing EOS edge devices are accepting disproportionate and avoidable risk.

Binding Operational Directive 26-02

BOD 26-02 is not guidance; it is enforcement. Federal agencies are legally required to comply, and the directive lays out a clear lifecycle-based approach to mitigating risk from End-of-Support edge devices. Within three months, agencies must inventory EOS devices using the CISA EOS Edge Device List. Within twelve months, they must decommission devices already past support deadlines. By eighteen months, all EOS edge devices must be removed from agency networks and replaced with vendor-supported alternatives. Most importantly, the directive doesn’t stop at cleanup. Within twenty-four months, agencies must establish continuous discovery processes to ensure no edge device reaches EOS while still operational. This is the shift federal cybersecurity has needed for years: from reactive patching to proactive lifecycle management.
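As a rough illustration of the lifecycle discipline the directive demands, the sketch below flags devices in a hypothetical inventory whose end-of-support dates have already passed or fall within the next twelve months. The inventory entries, hostnames, and dates are invented for the example and are not drawn from the CISA EOS Edge Device List.

```python
from datetime import date

# Hypothetical inventory: (hostname, device role, vendor end-of-support date).
INVENTORY = [
    ("vpn-gw-01", "VPN gateway", date(2025, 6, 30)),
    ("edge-fw-02", "firewall", date(2026, 9, 1)),
    ("lb-03", "load balancer", date(2027, 3, 15)),
]

def review(today=None):
    """Print a simple triage status for each edge device in the inventory."""
    today = today or date.today()
    for hostname, role, eos in INVENTORY:
        if eos <= today:
            status = "PAST END OF SUPPORT - decommission"
        elif (eos - today).days <= 365:
            status = "EOS within 12 months - plan replacement"
        else:
            status = "supported"
        print(f"{hostname:12} {role:14} EOS {eos}  -> {status}")

review()
```

A real program would pull the device list from an asset-management system and the EOS dates from vendor or CISA data, but the core control is the same: a recurring, automated comparison of support dates against the directive’s deadlines.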

Lifecycle Management is the Real Security Control

What BOD 26-02 exposes is not just a device problem, but a governance failure. Agencies that struggle with End-of-Support edge devices often lack mature asset management, refresh planning, and procurement alignment. OMB Circular A-130 already required unsupported systems to be phased out “as rapidly as possible.” This directive simply removes ambiguity and excuses. If an agency cannot track when its edge devices reach EOS, it cannot credibly claim to manage cyber risk. The directive also aligns closely with Zero Trust principles outlined in OMB Memorandum M-22-09, reinforcing MFA, asset visibility, workload isolation, and encryption. EOS devices undermine every one of these controls.

What it Means for Federal Cybersecurity

Some agencies will view this directive as operationally disruptive. That reaction misses the point. The real disruption comes from ransomware, espionage, and persistent network compromise, outcomes that EOS edge devices actively enable. BOD 26-02 signals a long-overdue cultural shift: unsupported technology is no longer tolerated at the federal network edge. Agencies that treat compliance as a checkbox will struggle. Those that use it to modernize lifecycle management will be far more resilient. In today’s threat environment, mitigating risk from End-of-Support edge devices is not just about compliance; it is about survival.

AI-Coded Moltbook Platform Exposes 1.5 Mn API Keys Through Database Misconfiguration


The viral social network "Moltbook," built entirely by artificial intelligence, leaked authentication tokens, private messages, and user emails through missing security controls in its production environment.

Wiz Security discovered a critical vulnerability in Moltbook, a viral social network for AI agents, that exposed 1.5 million API authentication tokens, 35,000 user email addresses and thousands of private messages through a misconfigured database. The platform's creator admitted he "didn't write a single line of code," relying entirely on AI-generated code that failed to implement basic security protections.

The vulnerability stemmed from an exposed Supabase API key in client-side JavaScript that granted unauthenticated read and write access to Moltbook's entire production database. Researchers discovered the flaw within minutes of examining the platform's publicly accessible code bundles, demonstrating how easily attackers could compromise the system.

"When properly configured with Row Level Security, the public API key is safe to expose—it acts like a project identifier," explained Gal Nagli, Wiz's head of threat exposure. "However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook's implementation, this critical line of defense was missing."


What Is Moltbook

Moltbook launched January 28, as a Reddit-like platform where autonomous AI agents could post content, vote and interact with each other. The concept attracted significant attention from technology influencers, including former Tesla AI director Andrej Karpathy, who called it "the most incredible sci-fi takeoff-adjacent thing" he had seen recently. The viral attention drove massive traffic within hours of launch.

However, the platform's backend relied on Supabase, a popular open-source Firebase alternative providing hosted PostgreSQL databases with REST APIs. Supabase became especially popular with "vibe-coded" applications—projects built rapidly using AI code generation tools—due to its ease of setup. The service requires developers to enable Row Level Security policies to prevent unauthorized database access, but Moltbook's AI-generated code omitted this critical configuration.

Wiz researchers examined the client-side JavaScript bundles loaded automatically when users visited Moltbook's website. Modern web applications bundle configuration values into static JavaScript files, which can inadvertently expose sensitive credentials when developers fail to implement proper security practices.

What Data Was Leaking, and How

The exposed data included approximately 4.75 million database records. Beyond the 1.5 million API authentication tokens that would allow complete agent impersonation, researchers discovered 35,000 email addresses of platform users and an additional 29,631 early access signup emails. The platform claimed 1.5 million registered agents, but the database revealed only 17,000 human owners—an 88:1 ratio.

More concerning, 4,060 private direct message conversations between agents were fully accessible without encryption or access controls. Some conversations contained plaintext OpenAI API keys and other third-party credentials that users shared under the assumption of privacy. This demonstrated how a single platform misconfiguration can expose credentials for entirely unrelated services.

The vulnerability extended beyond read access. Even after Moltbook deployed an initial fix blocking read access to sensitive tables, write access to public tables remained open. Wiz researchers confirmed they could successfully modify existing posts on the platform, introducing risks of content manipulation and prompt injection attacks.

Wiz used GraphQL introspection—a method for exploring server data schemas—to map the complete database structure. Unlike properly secured implementations that would return errors or empty arrays for unauthorized queries, Moltbook's database responded as if researchers were authenticated administrators, immediately providing sensitive authentication tokens including API keys of the platform's top AI agents.
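Introspection is a standard GraphQL feature rather than a Supabase-specific trick, and defenders can use the same technique to verify whether their own APIs answer schema queries from unauthenticated callers. In the sketch below, the endpoint URL, header, and key are placeholders for an API you are authorized to test.

```python
import requests

# Placeholder endpoint: substitute the GraphQL API you are authorized to test.
GRAPHQL_URL = "https://example.com/graphql/v1"
HEADERS = {"apikey": "PUBLIC-KEY-IF-ANY", "Content-Type": "application/json"}

# Standard introspection query: list every type and its field names.
INTROSPECTION = """
{
  __schema {
    types {
      name
      fields { name }
    }
  }
}
"""

resp = requests.post(GRAPHQL_URL, json={"query": INTROSPECTION}, headers=HEADERS, timeout=10)
try:
    data = resp.json()
except ValueError:
    data = {}

if "data" in data and data["data"]:
    types = data["data"]["__schema"]["types"]
    print(f"Introspection enabled: {len(types)} types exposed to unauthenticated callers")
else:
    print("Introspection appears to be disabled or rejected")
```

A hardened deployment would either disable introspection for anonymous callers or ensure that, as in a properly secured implementation, unauthorized queries return errors or empty results rather than the full schema.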

Matt Schlicht, CEO of Octane AI and Moltbook's creator, publicly stated his development approach: "I didn't write a single line of code for Moltbook. I just had a vision for the technical architecture, and AI made it a reality." This "vibe coding" practice prioritizes speed and intent over engineering rigor, but the Moltbook breach demonstrates the dangerous security oversights that can result.

Wiz followed responsible disclosure practices after discovering the vulnerability January 31. The company contacted Moltbook's maintainer and the platform deployed its first fix securing sensitive tables within a couple of hours. Additional fixes addressing exposed data, blocking write access and securing remaining tables followed over the next few hours, with final remediation completed by February 1.

"As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data," Nagli concluded. "That's a powerful shift."

The breach revealed that anyone could register unlimited agents through simple loops with no rate limiting, and users could post content disguised as AI agents via basic POST requests. The platform lacked mechanisms to verify whether "agents" were actually autonomous AI or simply humans with scripts.

Also read: How “Unseeable Prompt Injections” Threaten AI Agents

Substack Discloses Breach Exposing its User Details After Four-Month Delay


Data accessed in October 2025 went undetected until February, affecting subscribers across the newsletter platform with no evidence of misuse yet identified.

Substack disclosed a security breach that exposed user email addresses, phone numbers and internal metadata to unauthorized third parties, revealing the incident occurred four months before the company detected the compromise. CEO Chris Best notified users Tuesday that attackers accessed the data in October 2025, though Substack only identified evidence of the breach on February 3.

"I'm incredibly sorry this happened. We take our responsibility to protect your data and your privacy seriously, and we came up short here," Best wrote in the notification sent to affected users.


The breach allowed an unauthorized third party to access limited user data without permission through a vulnerability in Substack's systems. The company confirmed that credit card numbers, passwords and financial information were not accessed during the incident, limiting exposure to contact information and unspecified internal metadata.

Substack's Breach Detection Delay a Concern

The four-month detection gap raises questions about Substack's security monitoring capabilities and incident response procedures. Modern security frameworks typically emphasize rapid threat detection, with leading organizations aiming to identify breaches within days or hours rather than months. The extended dwell time—the period attackers maintained access before detection—gave threat actors ample opportunity to exfiltrate data undetected.

Substack claims it has fixed the vulnerability that enabled the breach but provided no technical details about the nature of the flaw or how attackers exploited it. The company stated it is conducting a full investigation and taking steps to improve systems and processes to prevent future incidents.

Best urged users to exercise caution with emails or text messages they receive, warning that exposed contact information could enable phishing attacks or social engineering campaigns. While Substack claims no evidence of data misuse exists, the four-month gap between compromise and detection means attackers had significant time to leverage stolen information.

The notification's vague language about "other internal metadata" leaves users uncertain about the full scope of exposed information. Internal metadata could include account creation dates, IP addresses, subscription lists, payment history or other details that, when combined with email addresses and phone numbers, create comprehensive user profiles valuable to attackers.

Substack Breach Impact

Newsletter platforms like Substack represent attractive targets for threat actors because they aggregate contact information for engaged audiences across diverse topics. Compromised email lists enable targeted phishing campaigns, while phone numbers facilitate smishing attacks—phishing via text message—that many users find less suspicious than email-based attempts.

The breach affects Substack's reputation as the platform competes for writers and subscribers against established players and emerging alternatives. Trust forms the foundation of newsletter platforms, where creators depend on reliable infrastructure to maintain relationships with paying subscribers.

Substack has not disclosed how many users were affected, whether the company will offer identity protection services, or if it has notified law enforcement about the breach. The company also has not confirmed whether it will face regulatory scrutiny under data protection laws in jurisdictions where affected users reside.

Users should remain vigilant for suspicious communications, enable two-factor authentication where available, and monitor accounts for unauthorized activity following the disclosure.

Also read: EU Data Breach Notifications Surge as GDPR Changes Loom

Russian Cyberattacks Target Milan-Cortina Winter Olympics Ahead of Opening Ceremony


With the Milan-Cortina Winter Olympics just hours from opening, Russian cyberattacks have forced Italian authorities into a full-scale security response that blends digital defence with boots on the ground. Italy confirmed this week that it successfully thwarted a coordinated wave of cyber incidents targeting government infrastructure and Olympic-linked sites, exposing how global sporting events are now frontline targets in geopolitical conflict. Italian Foreign Minister Antonio Tajani revealed that the Russian cyberattacks hit around 120 websites, including Italy’s foreign ministry offices abroad and several Winter Olympics-related locations, such as hotels in Cortina d’Ampezzo. While officials insist the attacks were “effectively neutralised,” the timing sends a clear message: cyber operations are now as much a part of Olympic security planning as physical threats.

Russian Cyberattacks and the Olympics: A Political Signal

According to Tajani, the attacks began with foreign ministry offices, including Italy’s embassy in Washington, before spreading to Olympic-linked infrastructure. A Russian hacker group known as Noname057 claimed responsibility, framing the Russian cyberattacks as retaliation for Italy’s political support for Ukraine. In a statement shared on Telegram, the group warned that Italy’s “pro-Ukrainian course” would be met with DDoS attacks—described provocatively as “missiles”—against Italian websites. While AFP could not independently verify the group’s identity, cybersecurity analysts noted that the tactics and messaging align with previous operations attributed to the same network. DDoS attacks may seem unsophisticated compared to advanced espionage campaigns, but their impact during high-profile events like the Olympics is strategic. Disrupting hotel websites, travel systems, or government portals creates confusion, undermines confidence, and grabs headlines—all without crossing into kinetic conflict.

Digital Threats Meet Physical Security Lockdown

Italy’s response to the Russian cyberattacks has been layered and aggressive. More than 6,000 police officers and nearly 2,000 military personnel have been deployed across Olympic venues stretching from Milan to the Dolomites. Snipers, bomb disposal units, counterterrorism teams, and even skiing police are now part of the security landscape. The defence ministry has added drones, radars, aircraft, and over 170 vehicles, underlining how cyber threats are now treated as triggers for broader security escalation. Milan, hosting the opening ceremony at San Siro stadium, is under particular scrutiny, with global leaders—including US Vice President JD Vance—expected to attend. The International Olympic Committee, however, stuck to its long-standing position. “We don’t comment on security,” IOC communications director Mark Adams said, a response that feels increasingly outdated in an era where Russian cyberattacks are openly claimed and politically framed.

ICE Controversy Adds Fuel to a Tense Atmosphere

Cybersecurity is not the only issue complicating preparations for the 2026 Winter Olympics. The presence of US Immigration and Customs Enforcement (ICE) officials in Italy has sparked political backlash and public protests. Milan Mayor Giuseppe Sala went as far as to say ICE agents were “not welcome,” calling the agency “a militia that kills.” Italy’s interior minister Matteo Piantedosi pushed back hard, clarifying that ICE’s Homeland Security Investigations unit would operate strictly within US diplomatic missions and have no enforcement powers. Still, the optics matter, especially as Russian cyberattacks amplify fears of foreign interference and sovereignty breaches. Even symbolic gestures have changed. A US hospitality venue originally called “Ice House” was quietly renamed “Winter House,” highlighting how sensitive the political climate has become.

Critical n8n Vulnerability CVE-2026-25049 Enables Remote Command Execution


A newly disclosed critical vulnerability in the workflow automation platform n8n, tracked as CVE-2026-25049, allows authenticated users to execute arbitrary system commands on the underlying server by exploiting weaknesses in the platform’s expression evaluation mechanism. With a CVSS score of 9.4, the issue is classified as critical and poses a high risk to affected systems. The CVE-2026-25049 vulnerability is the result of insufficient input sanitization in n8n’s expression handling logic. Researchers found that the flaw effectively bypasses security controls introduced to mitigate CVE-2025-68613, an earlier critical vulnerability with a CVSS score of 9.9 that was patched in December 2025. Despite those fixes, additional exploitation paths remained undiscovered until now.

Bypass of Previous Security Fixes for CVE-2026-25049 Vulnerability 

According to an advisory released Wednesday by n8n maintainers, the issue was uncovered during follow-up analysis after the earlier disclosure. The maintainers stated, “Additional exploits in the expression evaluation of n8n have been identified and patched following CVE-2025-68613.”  They further warned that “an authenticated user with permission to create or modify workflows could abuse crafted expressions in workflow parameters to trigger unintended system command execution on the host running n8n.”  The vulnerability is described as an “Expression Escape Vulnerability Leading to RCE,” reflecting its ability to break out of an n8n expression sandbox and reach the host operating system. The advisory was published under GitHub Security Advisory GHSA-6cqr-8cfr-67f8 and applies to the n8n package distributed via npm. 

Affected Versions and Mitigation Guidance 

The CVE-2026-25049 vulnerability affects all n8n versions earlier than 1.123.17 and 2.5.2. The issue has been fully patched in versions 1.123.17 and 2.5.2, and users are advised to upgrade immediately to these or later releases to remediate the risk.  For organizations unable to upgrade right away, the advisory outlines temporary workarounds. These include restricting workflow creation and modification permissions to fully trusted users and deploying n8n in a hardened environment with limited operating system privileges and constrained network access.   However, n8n’s maintainers emphasized that these measures do not fully resolve the vulnerability and should only be considered short-term mitigations.  From a severity standpoint, n8n has adopted CVSS 4.0 as the primary scoring system for its advisories, while continuing to provide CVSS 3.1 vector strings for compatibility. Under CVSS 3.1, CVE-2026-25049 carries the vector AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H. The CVSS 4.0 metrics similarly rate the issue as critical, citing low attack complexity, network-based exploitation, low required privileges, and high impact to confidentiality, integrity, and availability. 
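For teams triaging exposure, the first question is simply whether the running n8n version predates the patched releases named in the advisory. The sketch below compares a version string against 1.123.17 and 2.5.2; how the version string is obtained (CLI, API, or package manifest) is left out, and the helper is an illustration, not tooling from the advisory.

```python
# Minimal sketch: does this n8n version predate the fixes for CVE-2026-25049?
PATCHED = {1: (1, 123, 17), 2: (2, 5, 2)}  # first fixed release per major line

def parse(version: str) -> tuple:
    """Parse a plain 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version: str) -> bool:
    v = parse(version)
    fixed = PATCHED.get(v[0])
    # Major lines not covered by the advisory are flagged conservatively for review.
    return True if fixed is None else v < fixed

for candidate in ["1.123.16", "1.123.17", "2.4.9", "2.5.2"]:
    print(candidate, "vulnerable" if is_vulnerable(candidate) else "patched")
```

Pre-release suffixes (for example release candidates) would need extra handling, but for standard releases the tuple comparison mirrors the advisory’s “earlier than 1.123.17 and 2.5.2” wording.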

Researcher Insights and Potential Impact

Although no specific Common Weakness Enumerations (CWEs) have been assigned, the real-world implications of exploiting this n8n vulnerability are severe. A successful attack could allow threat actors to compromise the server, steal credentials, exfiltrate sensitive data, and install persistent backdoors to maintain long-term access.  The vulnerability was discovered with contributions from as many as ten security researchers. Those credited include Fatih Çelik, who also reported CVE-2025-68613, as well as Endor Labs’ Cris Staicu, Pillar Security’s Eilon Cohen, SecureLayer7’s Sandeep Kamble, and several independent researchers.  In a technical deep dive covering both CVE-2025-68613 and CVE-2026-25049, Çelik stated that “they could be considered the same vulnerability, as the second one is just a bypass for the initial fix.” He explained that both issues allow attackers to escape the n8n expression sandbox mechanism and circumvent security checks designed to prevent command execution. 

US FDA Reissues Cybersecurity Guidance to Reflect QMSR Transition and ISO 13485 Alignment


The US Food and Drug Administration (FDA) has reissued its final guidance on medical device cybersecurity to reflect the agency’s transition from the Quality System Regulation (QSR) to the Quality System Management Regulation (QMSR). The updated FDA cybersecurity guidance was published on 4 February, just two days after the QMSR officially took effect. The revision updates regulatory references throughout the document and aligns cybersecurity expectations with the new quality system framework under 21 CFR Part 820, which now incorporates ISO 13485 by reference.  According to the agency, the FDA cybersecurity guidance revisions were made under Level 2 guidance procedures. “Revisions issued [were] under Level 2 guidance procedures (21 CFR 10.115(g)(4)), including revisions to align with the amendments to 21 CFR 820 (the Quality Management System Regulation (QMSR)),” the FDA stated. The agency added that the updated document supersedes the final guidance titled Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions, which was published in June last year.  Throughout the revised FDA cybersecurity guidance, references to the former QSR have been replaced with references to the QMSR. The agency also updated the guidance to consistently reference ISO 13485, reflecting its central role in the new regulatory structure designed to harmonize US requirements with those of other global regulatory authorities. 

QMSR Framework Reshapes FDA Cybersecurity Guidance and Quality System Expectations 

The QMSR became effective on 2 February and amended the device's current good manufacturing practice (CGMP) requirements under 21 CFR Part 820. These CGMP requirements were first authorized under section 520(f) of the Federal Food, Drug, and Cosmetic Act (FD&C Act) and initially codified in 1978. Significant revisions followed in 1996, when the FDA added design controls and sought closer alignment with international standards, including ISO 9001 and the early versions of ISO 13485.  With the QMSR, the FDA formally incorporated by reference ISO 13485:2016, Medical devices – Quality management systems – Requirements for regulatory purposes, as well as Clause 3 of ISO 9000:2015, which covers quality management system fundamentals and vocabulary. The agency stated that this approach promotes consistency in quality system requirements across global markets while reducing regulatory burden on manufacturers.  The QMSR applies to finished device manufacturers intending to commercially distribute medical devices in the United States. A finished device, as defined in 21 CFR 820.3(a), includes any device or accessory suitable for use or capable of functioning, regardless of whether it is packaged, labeled, or sterilized. Certain components, such as blood tubing and diagnostic x-ray components, are considered finished devices when they function as accessories and are therefore subject to QMSR requirements.  Although some devices are exempt from CGMP requirements under classification regulations in 21 CFR Parts 862 through 892, those exemptions do not eliminate obligations related to complaint handling or recordkeeping. In addition, devices manufactured under an investigational device exemption are not exempt from design and development requirements under 21 CFR 820.10(c) of the QMSR or the corresponding ISO 13485 provisions. 

FDA Cybersecurity Guidance Emphasizes QMSR-Based Design, Risk, and Inspection Changes 

The revised FDA cybersecurity guidance reiterates that documentation outputs demonstrating adherence to the QMSR can be used to address cybersecurity risks and provide reasonable assurance of safety and effectiveness. The agency directs sponsors to specific ISO 13485 clauses to support this approach. For example, the FDA noted that “21 CFR 820.10(c) requires that for all classes of devices automated with software, a manufacturer must comply with the requirements in Design and Development, Clause 7.3 and its subclauses of ISO 13485.”  The guidance highlights ISO 13485 Subclause 7.3.7, which requires design and development validation to ensure that a product is capable of meeting requirements for its specified application or intended use. “Design and development validation includes validation of device software,” the agency stated. The FDA also pointed to Subclause 7.1 of ISO 13485, which specifies that organizations must document one or more processes for risk management in product realization, an expectation closely tied to cybersecurity risk controls.  As part of the update, the FDA removed a substantial section from the prior guidance that referenced former QSR design control provisions, including requirements under 21 CFR 820.30(c) and (d) related to design inputs and design outputs. Those provisions are no longer cited in the updated FDA cybersecurity guidance. The transition to QMSR also introduced changes to FDA inspection practices. Beginning on 2 February, the agency stopped using the Quality System Inspection Technique (QSIT) and began conducting inspections under the updated Inspection of Medical Device Manufacturers Compliance Program: 7382.850. At the same time, the FDA discontinued use of Compliance Programs 7382.845 and 7383.001, which previously governed device manufacturer and PMA-related inspections. 
  •  

What the Incognito Market Sentencing Reveals About Dark Web Drug Trafficking

Incognito Market

The 30-year prison sentence handed to Rui-Siang Lin, the operator of the infamous Incognito Market, is more than just another darknet takedown story. Lin, who ran Incognito Market under the alias “Pharaoh,” oversaw one of the largest online narcotics operations in history, generating more than $105 million in illegal drug sales worldwide before its collapse in March 2024. Platforms like Incognito Market are not clever experiments in decentralization. They are industrial-scale criminal enterprises, and their architects will be treated as such.

How Incognito Market Became a Global Narcotics Hub

Launched in October 2020, Incognito Market was designed to look and feel like a legitimate e-commerce platform, only its products were heroin, cocaine, methamphetamine, MDMA, LSD, ketamine, and counterfeit prescription drugs. Accessible through the Tor browser, the dark web marketplace allowed anyone with basic technical knowledge to buy illegal narcotics from around the globe. At its peak, Incognito Market supported over 400,000 buyer accounts, more than 1,800 vendors, and facilitated 640,000 drug transactions. Over 1,000 kilograms of cocaine, 1,000 kilograms of methamphetamine, and fentanyl-laced pills were likely sold, the authorities said. This was not a fringe operation—it was a global supply chain built on code, crypto, and calculated harm.
Also read: “Incognito Market” Operator Arrested for Running $100M Narcotics Marketplace

“Pharaoh” and the Business of Digital Drug Trafficking

Operating as “Pharaoh,” Lin exercised total control over Incognito Market. Vendors paid an entry fee and a 5% commission on every sale, creating a steady revenue stream that funded servers, staff, and Lin’s personal profit—more than $6 million by prosecutors’ estimates. The marketplace operated with a professional veneer, from branding, customer service, and vendor ratings to its own internal financial system—the Incognito Bank—which allowed users to deposit cryptocurrency and transact anonymously. The system was designed to remove trust from human relationships and replace it with platform-controlled infrastructure. This was not chaos. It was corporate-style crime.

Fentanyl, Fake Oxycodone, and Real Deaths

In January 2022, Lin explicitly allowed opiate sales on Incognito Market, a decision that proved deadly. Listings advertised “authentic” oxycodone, but laboratory tests later revealed fentanyl instead. In September 2022, a 27-year-old man from Arkansas died after consuming pills purchased through the platform. This is where the myth of victimless cybercrime collapsed. Incognito Market did not just move drugs—it amplified the opioid crisis and directly contributed to loss of life. U.S. Attorney Jay Clayton stated that Lin’s actions caused misery for more than 470,000 users and their families, a figure that shows the human cost behind the transactions.

Exit Scam, Extortion, and the Final Collapse

When Incognito Market shut down in March 2024, Lin didn’t disappear quietly. He stole at least $1 million in user deposits and attempted to extort buyers and vendors, threatening to expose their identities and crypto addresses. His message was blunt: “YES, THIS IS AN EXTORTION!!!” It was a fittingly brazen end to an operation built on manipulation and fear. Judge Colleen McMahon called Incognito Market the most serious drug case she had seen in nearly three decades, labeling Lin a “drug kingpin.” The message from law enforcement is unmistakable: dark web platforms, cryptocurrency, and blockchain are not shields against justice.
  •  

CISA Silently Updates Vulnerabilities Exploited by Ransomware Groups


The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has been “silently” updating its Known Exploited Vulnerabilities (KEV) catalog when it concludes that vulnerabilities have been exploited by ransomware groups, according to a security researcher. CISA marks each KEV entry as “known” or “unknown” in the catalog’s “Known To Be Used in Ransomware Campaigns?” field. The problem, according to a blog post by Glenn Thorpe of GreyNoise, is that the agency doesn’t send out advisories when a vulnerability’s status changes from “unknown” to “known” ransomware exploitation. Thorpe downloaded daily CISA KEV snapshots for all of 2025 and found that the agency had flipped 59 vulnerabilities during the year from “unknown” to “known” exploitation by ransomware groups. “When that field flips from ‘Unknown’ to ‘Known,’ CISA is saying: ‘We have evidence that ransomware operators are now using this vulnerability in their campaigns,’” Thorpe wrote. “That's a material change in your risk posture. Your prioritization calculus should shift. But there's no alert, no announcement. Just a field change in a JSON file. This has always frustrated me.” In a statement shared with The Cyber Express, CISA Executive Assistant Director for Cybersecurity Nick Andersen suggested that the agency is considering Thorpe’s input. “We continue to streamline processes and enrich vulnerability data through initiatives like the KEV catalog, the Common Vulnerabilities and Exposures (CVE) Program, and Vulnrichment,” Andersen said. “Feedback from the cybersecurity community is essential as CISA works to enhance the KEV catalog and advance vulnerability prioritization across the ecosystem.”
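As a rough sketch of the snapshot-diffing approach Thorpe describes, the Python below compares two locally saved copies of the public KEV JSON feed and lists CVEs whose ransomware-use field flipped to “Known.” The feed URL and the knownRansomwareCampaignUse field name reflect the published KEV schema at the time of writing, and the snapshot file names are placeholders.

# Sketch of the snapshot-diffing idea described above: compare two saved
# copies of CISA's KEV JSON feed and list CVEs whose ransomware-use field
# flipped from "Unknown" to "Known". Snapshot file names are placeholders.
import json

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def ransomware_status(path: str) -> dict:
    """Map cveID -> knownRansomwareCampaignUse for one saved KEV snapshot."""
    with open(path, encoding="utf-8") as fh:
        catalog = json.load(fh)
    return {v["cveID"]: v.get("knownRansomwareCampaignUse", "Unknown")
            for v in catalog["vulnerabilities"]}

def flipped_to_known(old_snapshot: str, new_snapshot: str) -> list:
    old, new = ransomware_status(old_snapshot), ransomware_status(new_snapshot)
    return [cve for cve, status in new.items()
            if status == "Known" and old.get(cve, "Unknown") != "Known"]

if __name__ == "__main__":
    # e.g. daily snapshots previously downloaded from KEV_FEED
    for cve in flipped_to_known("kev_2025-01-01.json", "kev_2025-12-31.json"):
        print("newly ransomware-linked:", cve)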

Microsoft Leads in Vulnerabilities Exploited by Ransomware Groups

Of the 59 CVEs that flipped to “known” exploitation by ransomware groups last year, 27% were Microsoft vulnerabilities, Thorpe said. Just over a third (34%) involved edge and network device CVEs, and 39% were CVEs predating 2023. Some 41% of the flips occurred in a single month, May 2025. The fastest time-to-ransomware flip was one day, while the longest lag between CISA KEV addition and the change to “known” ransomware exploitation status was 1,353 days. The most common vulnerability type among the flipped CVEs was authentication bypass, at 14% of occurrences.

Ransomware Groups Target Edge Devices

Edge devices accounted for a high number of the flipped vulnerabilities, Thorpe said. Fortinet, Ivanti, Palo Alto Networks, and Check Point edge devices were among those affected by the flipped CVEs. “Ransomware operators are building playbooks around your perimeter,” he said. Thorpe said that 19 of the 59 flipped vulnerabilities “target network security appliances, the very devices deployed to protect organizations.” But he added: “Legacy bugs show up too; Adobe Reader vulnerabilities from years ago suddenly became ransomware-relevant.” Authentication bypasses and RCE vulnerabilities were the most common, “as ransomware operators prioritize ‘get in and go’ attack chains.” The breakdown by vendor of the 59 vulnerabilities “shouldn't surprise anyone,” he said. Microsoft was responsible for 16 of the flipped CVEs, affecting SharePoint, Print Spooler, Group Policy, Mark-of-the-Web bypasses, and more. Ivanti products were affected by 6 of the flipped CVEs, Fortinet by 5 (with FortiOS SSL-VPN heap overflows standing out), and Palo Alto Networks and Zimbra were each affected by 3 of the CVEs. “Ransomware operators are economic actors after all,” Thorpe said. “They invest in exploit development for platforms with high deployment and high-value access. Firewalls, VPN concentrators, and email servers fit that profile perfectly.” He also noted that the pace of vulnerability exploitation by ransomware groups accelerated in 2025. “Today, ransomware operators are integrating fresh exploits into their playbooks faster than defenders are patching,” he said. Thorpe created an RSS feed to track the flipped vulnerabilities; it’s updated hourly.
  •  

Ransomware Attacks Have Soared 30% in Recent Months

Ransomware Attacks 2026

Ransomware attacks have soared 30% since late last year, and they’ve continued that trend so far in 2026, with many of the attacks affecting software and manufacturing supply chains. Those are some of the takeaways of new research published by Cyble today, which also looked at the top ransomware groups, significant ransomware attacks, new ransomware groups, and recommended cyber defenses. Ransomware groups claimed 2,018 attacks in the last three months of 2025, averaging just under 673 a month to end a record-setting year. The elevated attack levels continued in January 2026, as the threat groups claimed 679 ransomware victims. In the first nine months of 2025, ransomware groups claimed an average of 512 victims a month, so the recent trend has been more than 30% above that, Cyble noted. Below is Cyble’s chart of ransomware attacks by month since 2021, which shows a sustained uptrend since mid-2025.
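A quick back-of-the-envelope check of the figures Cyble cites (using only the numbers reported above) bears out the “more than 30%” claim:

# Back-of-the-envelope check of the figures reported above (Cyble's numbers).
q4_2025_claims = 2018         # attacks claimed Oct-Dec 2025
jan_2026_claims = 679         # victims claimed in January 2026
early_2025_monthly_avg = 512  # average monthly victims, Jan-Sep 2025

q4_monthly_avg = q4_2025_claims / 3
print(f"Q4 2025 monthly average: {q4_monthly_avg:.1f}")  # ~672.7, "just under 673"
print(f"Q4 2025 vs early 2025: +{q4_monthly_avg / early_2025_monthly_avg - 1:.1%}")   # ~+31.4%
print(f"Jan 2026 vs early 2025: +{jan_2026_claims / early_2025_monthly_avg - 1:.1%}")  # ~+32.6%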

Qilin Remains Top Ransomware Group as CL0P Returns

Qilin was once again the top ransomware group, claiming 115 victims in January. CL0P was second with 93 victims after claiming “scores of victims” in recent weeks in an as-yet unspecified campaign. Akira remained among the leaders with 76 attacks, and newcomers Sinobi and The Gentlemen rounded out the top five (chart below). (Chart: Top ransomware groups, January 2026. Source: Cyble.) “As CL0P tends to claim victims in clusters, such as its exploitation of Oracle E-Business Suite flaws that helped drive supply chain attacks to records in October, new campaigns by the group are noteworthy,” Cyble said. Victims in the latest campaign have included 11 Australia-based companies spanning a range of sectors such as IT, banking and financial services (BFSI), construction, hospitality, professional services, and healthcare. Other recent CL0P victims have included “a U.S.-based IT services and staffing company, a global hotel company, a major media firm, a UK payment processing company, and a Canada-based mining company engaged in platinum group metals production,” Cyble said. The U.S. once again led all countries in ransomware attacks (chart below), while the UK and Australia faced a higher-than-normal attack volume. “CL0P’s recent campaign was a factor in both of those increases,” Cyble said. (Chart: Ransomware attacks by country, January 2026. Source: Cyble.) Construction, professional services and manufacturing remain opportunistic targets for threat actors, while the IT industry also remains a favorite target of ransomware groups, “likely due to the rich target the sector represents and the potential to pivot into downstream customer environments,” Cyble said (chart below). (Chart: Ransomware attacks by industry, January 2026. Source: Cyble.)

Ransomware Attacks Hit the Supply Chain

Cyble documented 10 significant ransomware attacks from January in its blog post, many of which had supply chain implications. One was an Everest ransomware group compromise of “a major U.S. manufacturer of telecommunications networking equipment ... Everest claims the data includes PDF documents containing sensitive engineering materials, such as electrical schematics, block diagrams, and service subsystem documentation.” Sinobi claimed a breach of an India-based IT services company. “Samples shared by the attackers indicate access to internal infrastructure, including Microsoft Hyper-V servers, multiple virtual machines, backups, and storage volumes,” Cyble said. A Rhysida ransomware group attack on a U.S. life sciences and biotechnology instrumentation company allegedly exposed sensitive information such as engineering blueprints and project documentation. A RansomHouse attack on a China-based electronics manufacturer serving technology and automotive companies may have exposed “extensive proprietary engineering and production-related data,” and “data associated with multiple major technology and automotive companies.” An INC Ransom attack on a Hong Kong–based components manufacturer for the global electronics and automotive industries may have exposed “client-related information associated with more than a dozen major global brands, plus confidential contracts and project documentation for at least three major IT companies.” Cyble also documented the rise of three new ransomware groups: Green Blood, DataKeeper and MonoLock, with DataKeeper and MonoLock releasing details on technical and payment features aimed at attracting ransomware affiliates to their operations.
  •  

Mountain View Shuts Down Flock Safety ALPR Cameras After Year-Long Unrestricted Data Access

Flock Safety ALPR cameras

Mountain View’s decision to shut down its automated license plate reader program is a reminder of an uncomfortable truth: surveillance technology is only as trustworthy as the systems—and vendors—behind it. This week, Police Chief Mike Canfield announced that all Flock Safety ALPR cameras in Mountain View have been turned off, effective immediately. The move pauses the city’s pilot program until the City Council reviews its future at a February 24 meeting. The decision comes after the police department discovered that hundreds of unauthorized law enforcement agencies had been able to search Mountain View’s license plate camera data for more than a year—without the city’s awareness. For a tool that was sold to the public as tightly controlled and privacy-focused, this is a serious breach of trust.

Flock Safety ALPR Cameras Shut Down Over Data Access Failures

In his message to the community, Chief Canfield made it clear that while the Flock Safety ALPR pilot program had shown value in solving crimes, he no longer has confidence in the vendor. “I personally no longer have confidence in this particular vendor,” Canfield wrote, citing failures in transparency and access control. The most troubling issue, according to the police chief, was the discovery that out-of-state agencies had been able to search Mountain View’s license plate data—something that should never have been possible under state law or city policy. This wasn’t a minor technical glitch. It was a breakdown in oversight, accountability, and vendor responsibility.

Automated License Plate Readers Under Growing National Scrutiny

Automatic license plate readers, or ALPR surveillance cameras, have become one of the most controversial policing technologies in the United States. These cameras capture images of passing vehicles, including license plate numbers, make, and model. The information is stored and cross-checked with databases to flag stolen cars or vehicles tied to investigations. Supporters argue that ALPRs help law enforcement respond faster and solve crimes more efficiently. But critics have long warned that ALPR systems can easily become tools of mass surveillance—especially when data-sharing controls are weak. That concern has intensified under the Trump administration, as reports have emerged of license plate cameras being used for immigration enforcement and even reproductive healthcare-related investigations. Mountain View’s case shows exactly why the debate isn’t going away.

Mountain View Police Violated Its Own ALPR Policies

According to disclosures made this week, the Mountain View Police Department unintentionally violated its own policies by allowing statewide and national access to its ALPR data. Chief Canfield admitted that “statewide lookup” had been enabled since the program began 17 months ago, meaning agencies across California could search Mountain View’s license plate records without prior authorization. Even more alarming, “national lookup” was reportedly turned on for three months in 2024, allowing agencies across the country to access the city’s data. State law prohibits sharing ALPR information with out-of-state agencies, especially for immigration enforcement purposes. So how did it happen? Canfield was blunt: “Why wasn’t it caught sooner? I couldn’t tell you.” That answer won’t reassure residents who were promised strict safeguards.

Community Trust Matters More Than Surveillance Tools

Chief Canfield’s message repeatedly emphasized one point: technology cannot replace trust. “Community trust is more important than any individual tool,” he wrote. That statement deserves attention. Police departments across the country have adopted surveillance systems with the promise of safety, only to discover later that the systems operate with far less control than advertised. When a vendor fails to disclose access loopholes—or when law enforcement fails to detect them—the public pays the price. Canfield acknowledged residents’ anger and frustration, offering an apology and stating that transparency is essential for community policing. It’s a rare moment of accountability in a space where surveillance expansion often happens quietly.

Flock Safety Faces Questions About Transparency and Oversight

Mountain View’s ALPR program began in May 2024, when the City Council approved a contract with Flock Safety, a surveillance technology company. Beginning in August 2024, the city installed cameras at major entry and exit points. By January 2026, Mountain View had 30 Flock cameras operating. Now, the entire program is paused. Flock spokesperson Paris Lewbel said the company would address the concerns directly with the police chief, but the damage may already be done. This incident raises a bigger question: should private companies be trusted to manage sensitive surveillance infrastructure in the first place?

What Happens Next for the Flock Safety ALPR Program?

The City Council will now decide whether Mountain View continues with the Flock contract, modifies the program, or shuts it down permanently. But the broader lesson is already clear. ALPR surveillance cameras may offer law enforcement real investigative value, but without airtight safeguards, they risk becoming tools of unchecked monitoring. Mountain View’s shutdown is not just a local story—it’s part of a national reckoning over how much surveillance is too much, and whether public safety can ever justify the loss of privacy without full accountability.
  •  

Lakelands Public Health Confirms Cyberattack, Says Sensitive Data Unaffected

Lakelands Public Health cyberattack

Lakelands Public Health has confirmed that it is actively responding to a cyberattack discovered on January 29, 2026, which affected some of its internal systems. The organization is sharing information about the Lakelands Public Health cyberattack incident proactively to maintain transparency and public trust.  Immediately after detecting the breach, Lakelands Public Health implemented its incident response protocols, secured affected systems, and engaged a leading cybersecurity firm to support the investigation, containment, and recovery efforts. Experts are working closely with the organization to ensure that all systems are restored safely and efficiently.  While restoration efforts are underway, some programs and services may experience temporary disruptions. The organization has committed to directly contacting any individuals or partners affected by interruptions. 

Critical Public Health Data Remains Secure 

Initial investigations indicate that systems managing sensitive public health information, including infectious disease data, immunization records, and sexual health information, were not impacted by the Lakelands Public Health cyberattack. Lakelands Public Health has emphasized that protecting personal information remains a top priority as it continues essential public health operations.  Dr. Thomas Piggott, Medical Officer of Health and Chief Executive Officer of Lakelands Public Health, said, 
“Our priority response to this event is protecting the information entrusted to us and maintaining continuity of critical public health services. By taking a proactive approach and engaging specialized expertise, we are working diligently to restore systems and keep our community informed.” 
The organization serves Peterborough city and county, Northumberland and Haliburton counties, Kawartha Lakes, and the First Nations communities of Curve Lake and Alderville. The cyberattack prompted a review of all systems that could potentially be affected, ensuring that any vulnerabilities are mitigated. 

Lakelands Public Health Cyberattack Investigation

Lakelands Public Health has noted that the investigation into the cyberattack is ongoing. While no personal or health information appears to have been compromised, the organization has committed to alerting affected parties should any issues arise as the review continues.  Officials have advised that during the restoration period, certain programs and services may remain temporarily offline, and affected individuals will receive direct notifications.  The health unit is also closely monitoring its IT infrastructure for unusual activity, and administrators are implementing additional safeguards, including enhanced network monitoring and access controls. These measures are aimed at minimizing risk and ensuring the integrity of public health data during the recovery process. 

Proactive Measures Strengthen Cybersecurity for Lakelands Public Health 

Residents, partners, and staff are encouraged to remain patient and vigilant as Lakelands Public Health continues to prioritize security, transparency, and the continuity of services. Updates regarding the cyberattack and ongoing recovery efforts are available at LakelandsPH.ca.  In response to the incident, Lakelands Public Health has reinforced its commitment to cybersecurity. By engaging specialized expertise and deploying additional monitoring and response tools, the organization aims to reduce the risk of future incidents.  Dr. Piggott reinforced the importance of public confidence, stating that the organization will continue to communicate openly and ensure that all necessary steps are taken to protect sensitive information while maintaining public health services without interruption. 
  •  

Foxit Releases Security Updates for PDF Editor Cloud XSS Vulnerabilities

Foxit PDF Editor

Foxit Software has released security updates addressing multiple cross-site scripting (XSS) vulnerabilities affecting Foxit PDF Editor Cloud and Foxit eSign, closing gaps that could have allowed attackers to execute arbitrary JavaScript within a user’s browser. The patches were issued as part of Foxit’s ongoing security and stability improvements, with the most recent update for Foxit PDF Editor Cloud released on February 3, 2026.  The vulnerabilities stem from weaknesses in input validation and output encoding within specific features of Foxit PDF Editor Cloud. According to Foxit’s official advisory, attackers could exploit these flaws when users interacted with specially crafted file attachments or manipulated layer names inside PDF documents. In such cases, untrusted input could be embedded directly into the application’s HTML structure without proper sanitization, enabling malicious script execution.  The advisory states that the update includes security and stability improvements, and that no manual action is required beyond ensuring the software is up to date. 

Details of Foxit PDF Editor Vulnerabilities CVE-2026-1591 and CVE-2026-1592 

Two vulnerabilities were identified in Foxit PDF Editor Cloud: CVE-2026-1591 and CVE-2026-1592. Both issues fall under Cross-Site Scripting (CWE-79) and carry a Moderate severity rating, with a CVSS v3.0 score of 6.3. The vulnerabilities affect the File Attachments list and Layers panel, where attackers could inject crafted payloads into file names or layer names.  CVE-2026-1591, considered the primary issue, allows attackers to exploit insufficient input validation and improper output encoding to execute arbitrary JavaScript in a user’s browser. CVE-2026-1592 presents the same risk through similar attack vectors and conditions. Both vulnerabilities were discovered and reported by security researcher Novee.  Although exploitation requires user interaction, the impact can be significant. Attackers must convince authenticated users to access specially crafted attachments or layer configurations. Once triggered, the malicious JavaScript runs within the browser context, potentially enabling session hijacking, exposure of sensitive data from open PDF documents, or redirection to attacker-controlled websites. 
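As a generic illustration of the encoding weakness behind CWE-79 (this is not Foxit’s code; the function name and markup are hypothetical), the sketch below shows how HTML-escaping an untrusted display string such as a file or layer name turns a crafted payload into inert text:

# Generic CWE-79 mitigation sketch -- not Foxit's implementation. Untrusted
# display strings (file names, layer names) are HTML-encoded before being
# embedded in markup, so crafted names render as text rather than script.
from html import escape

def render_attachment_row(file_name: str) -> str:
    # escape() encodes <, >, &, and quotes, so a payload in the name cannot
    # break out of the surrounding markup and execute in the browser.
    return f'<li class="attachment">{escape(file_name, quote=True)}</li>'

print(render_attachment_row("report.pdf"))
print(render_attachment_row('<img src=x onerror=alert(1)>.pdf'))
# The second call prints &lt;img src=x onerror=alert(1)&gt;.pdf -- displayed, not executed.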

Enterprise Risk and Attack Surface Considerations 

The attack surface is particularly relevant in enterprise environments where Foxit PDF Editor is widely used for document collaboration and editing. Employees often handle PDFs originating from external partners, customers, or public sources, increasing the likelihood of exposure to crafted payloads.  In addition to Foxit PDF Editor Cloud, Foxit also addressed a related XSS vulnerability affecting Foxit eSign, tracked as CVE-2025-66523. This flaw carries a CVSS score of 6.1 and occurs due to improper handling of URL parameters in specially crafted links.   When authenticated users visit these links, untrusted input may be embedded into JavaScript code and HTML attributes without adequate encoding, creating opportunities for privilege escalation and cross-domain data theft. The patch for Foxit eSign was released on January 15, 2026. 

Patches, Mitigation, and Security Guidance 

Foxit confirmed that CVE-2026-1591, CVE-2026-1592, and CVE-2025-66523 have all been fully patched. The fixes include improved input validation and output encoding mechanisms designed to prevent malicious script injection. Updates for Foxit PDF Editor Cloud are deployed automatically or available through standard update mechanisms, requiring no additional configuration.  Organizations using Foxit PDF Editor Cloud and Foxit eSign should confirm that their systems are running the latest versions. Administrators are also advised to monitor for unusual JavaScript execution, unexpected PDF editor behavior, or anomalies in application logs.  For environments handling sensitive documents, additional controls may help reduce risk. These include limiting PDF editing to trusted networks, enforcing browser-based content security policies, and restricting access to untrusted attachments. End users should remain cautious when opening PDF files from unknown sources and avoid clicking suspicious links within eSign workflows. 
  •  

Spain to Ban Social Media Platforms for Kids as Global Trend Grows

Spain Ban Social Media Platforms

Spain is preparing to take one of the strongest steps yet in Europe’s growing push to regulate the digital world for young people. Spain will ban social media platforms for children under the age of 16, a move Prime Minister Pedro Sanchez framed as necessary to protect minors from what he called the “digital Wild West.” This is not just another policy announcement. The decision reflects a wider global shift: governments are finally admitting that social media has become too powerful, too unregulated, and too harmful for children to navigate alone.

Spain to Ban Social Media Platforms for Children Under the Age of 16

Speaking at the World Government Summit in Dubai, Sanchez said Spain will require social media platforms to implement strict age verification systems, ensuring that children under 16 cannot access these services freely. “Social media has become a failed state,” Sanchez declared, arguing that laws are ignored and harmful behavior is tolerated online. The ban on social media platforms for children under the age of 16 is being positioned as a child safety measure, but it is also a direct challenge to tech companies that have long avoided accountability. Sanchez’s language was blunt, and honestly, refreshing. For years, platforms have marketed themselves as neutral spaces while profiting from algorithms that amplify outrage, addictive scrolling, and harmful content. Spain’s message is clear: enough is enough.

Social Media Ban and Executive Accountability

Spain is not stopping at age limits. Sanchez also announced a new bill expected next week that would hold social media executives personally accountable for illegal and hateful content. That is a significant escalation. A social media ban alone may restrict access, but forcing executives to face consequences could change platform behavior at its core. The era of tech leaders hiding behind “we’re just a platform” excuses may finally be coming to an end. This makes Spain’s approach one of the most aggressive in Europe so far.

France Joins the Global Social Media Ban Movement

Spain is not acting in isolation. On February 3, 2026, French lawmakers approved their own social media ban for children under 15. The bill passed by a wide margin in the National Assembly and is expected to take effect in September, at the start of the next school year. French President Emmanuel Macron strongly backed the move, saying: “Our children’s brains are not for sale… Their dreams must not be dictated by algorithms.” That statement captures the heart of this debate. Social media is not just entertainment anymore. It is an attention economy designed to hook young minds early, shaping behavior, self-image, and even mental health. France’s decision adds momentum to the idea that a global social media ban for children may soon become the norm rather than the exception.

Australia’s World-First Social Media Ban for Children Under 16

The strongest example so far comes from Australia, which implemented a world-first social media ban for children under 16 in December 2025. The ban covered major platforms including:
  • Facebook
  • Instagram
  • TikTok
  • Snapchat
  • Reddit
  • X
  • YouTube
  • Twitch
Messaging apps like WhatsApp were exempt, acknowledging that communication tools are different from algorithm-driven feeds. Since enforcement began, companies have revoked access to around 4.7 million accounts linked to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australia’s case shows that enforcement is possible, even at scale, through ID checks, third-party age estimation tools, and data inference. Yes, some children try to bypass restrictions. But the broader impact is undeniable: governments can intervene when platforms fail to self-regulate.

UK Exploring Similar Social Media Ban Measures

The United Kingdom is now considering its own restrictions. Prime Minister Keir Starmer recently said the government is exploring a social media ban for children aged 15 and under, alongside stricter age verification and limits on addictive features. The UK’s discussion highlights another truth: this is no longer just about content moderation. It’s about the mental wellbeing of an entire generation growing up inside algorithmic systems.

Is a Social Media Ban Globally for Children the Future?

Spain’s move, combined with France, Australia, and the UK, signals a clear global trend. For years, social media companies promised safety tools, parental controls, and community guidelines. Yet harmful content, cyberbullying, predatory behavior, and addictive design have continued to spread. The reality is uncomfortable: platforms were never built with children in mind. They were built for engagement, profit, and data. A global social media ban for children may not be perfect, but it is becoming a political and social necessity. Spain’s decision to ban social media platforms for children under the age of 16 is not just about restricting access. It is about redefining digital childhood, reclaiming accountability, and admitting that the online world cannot remain lawless. The digital Wild West era may finally be ending.
  •  

French Police Raid X Offices as Grok Investigations Grow


French police raided the offices of the X social media platform today as European investigations grew into nonconsensual sexual deepfakes and potential child sexual abuse material (CSAM) generated by X’s Grok AI chatbot. A statement (in French) from the Paris prosecutor’s office suggested that Grok’s dissemination of Holocaust denial content may also be an issue in the Grok investigations. X owner Elon Musk and former CEO Linda Yaccarino were issued “summonses for voluntary interviews” on April 20, along with X employees the same week. Europol, which is assisting in the investigation, said in a statement that the investigation is “in relation to the proliferation of illegal content, notably the production of deepfakes, child sexual abuse material, and content contesting crimes against humanity. ... The investigation concerns a range of suspected criminal offences linked to the functioning and use of the platform, including the dissemination of illegal content and other forms of online criminal activity.” The French action comes amid a growing UK probe into Grok’s use of nonconsensual sexual imagery, and last month the EU launched its own investigation into the allegations. Meanwhile, a new Reuters report suggests that X’s attempts to curb Grok’s abuses are failing. “While Grok’s public X account is no longer producing the same flood of sexualized imagery, the Grok chatbot continues to do so when prompted, even after being warned that the subjects were vulnerable or would be humiliated by the pictures,” Reuters wrote in a report published today.

French Prosecutor Calls X Investigation ‘Constructive’

The French prosecutor’s statement said the investigation “is, at this stage, part of a constructive approach, with the objective of ultimately guaranteeing the X platform's compliance with French laws, insofar as it operates in French territory” (translated from the French). The investigation initially began in January 2025, the statement said, and “was broadened following other reports denouncing the functioning of Grok on the X platform, which led to the dissemination of Holocaust denial content and sexually explicit deepfakes.” The investigation concerns seven “criminal offenses,” according to the Paris prosecutor’s statement:
  • Complicity in the possession of images of minors of a child pornography nature
  • Complicity in the dissemination, offering, or making available of images of minors of a child pornography nature by an organized group
  • Violation of the right to image (sexual deepfakes)
  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Tampering with the operation of an automated data processing system by an organized group
  • Administration of an illicit online platform by an organized group
The Paris prosecutor’s office deleted its X account after announcing the investigation.

Grok Investigations in the UK Grow

In the UK, the Information Commissioner’s Office (ICO) announced that it was launching an investigation into Grok abuses, on the same day the UK Ofcom communications services regulator said its own authority to investigate chatbots may be limited. William Malcolm, ICO's Executive Director for Regulatory Risk & Innovation, said in a statement: “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.” “Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights,” Malcolm added. “Where we find obligations have not been met, we will take action to protect the public.” Ilia Kolochenko, CEO at ImmuniWeb and a cybersecurity law attorney, said in a statement “The patience of regulators is not infinite: similar investigations are already pending even in California, let alone the EU. Moreover, some countries have already temporarily restricted or threatened to restrict access to X’s AI chatbot and more bans are probably coming very soon.” “Hopefully X will take these alarming signals seriously and urgently implement the necessary security guardrails to prevent misuse and abuse of its AI technology,” Kolochenko added. “Otherwise, X may simply disappear as a company under the snowballing pressure from the authorities and a looming avalanche of individual lawsuits.”
  •  

France Approves Social Media Ban for Children Under 15 Amid Global Trend

social media ban for children France

French lawmakers have approved a social media ban for children under 15, a move aimed at protecting young people from harmful online content. The bill, which also restricts mobile phone use in high schools, was passed by a 130-21 vote in the National Assembly and is expected to take effect at the start of the next school year in September. French President Emmanuel Macron has called for the legislation to be fast-tracked, and it will now be reviewed by the Senate. “Banning social media for those under 15: this is what scientists recommend, and this is what the French people are overwhelmingly calling for,” Macron said. “Our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Their dreams must not be dictated by algorithms.”

Why France Introduced a Social Media Ban for Children

The new social media ban for children in France is part of a broader effort to address the negative effects of excessive screen time and harmful content. Studies show that one in two French teenagers spends between two and five hours daily on smartphones, with 58% of children aged 12 to 17 actively using social networks. Health experts warn that prolonged social media use can lead to reduced self-esteem, exposure to risky behaviors such as self-harm or substance abuse, and mental health challenges. Some families in France have even taken legal action against platforms like TikTok over teen suicides allegedly linked to harmful online content. The French legislation carefully exempts educational resources, online encyclopedias, and platforms for open-source software, ensuring children can still access learning and development tools safely.

Lessons From Australia’s Social Media Ban for Children

France’s move mirrors global trends. In December 2025, Australia implemented a social media ban for children under 16, covering major platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, Threads, X, YouTube, and Twitch. Messaging apps like WhatsApp were exempt. Since the ban, social media companies have revoked access to about 4.7 million accounts identified as belonging to children. Meta alone removed nearly 550,000 accounts the day after the ban took effect. Australian officials said the measures restore children’s online safety and prevent predatory social media practices. Platforms comply with the ban through age verification methods such as ID checks, third-party age estimation technologies, or inference from existing account data. While some children attempted to bypass restrictions, the ban is considered a significant step in protecting children online.

UK Considers Following France and Australia

The UK is also exploring similar measures. Prime Minister Keir Starmer recently said the government is considering a social media ban for children aged 15 and under, along with stricter age verification, phone curfews, and restrictions on addictive platform features. The UK’s move comes amid growing concern about the mental wellbeing and safety of children online.

Global Shift Toward Child Cyber Safety

The introduction of a social media ban for children in France, alongside Australia’s implementation and the UK’s proposal, highlights a global trend toward protecting minors in the digital age. These measures aim to balance access to educational and creative tools while shielding children from online harm and excessive screen time. As more countries consider social media regulations for minors, the focus is clear: ensuring cyber safety, supporting mental health, and giving children the chance to enjoy a safe and healthy online experience.
  •  

Critical vLLM Flaw Exposes Millions of AI Servers to Remote Code Execution

vLLM

A newly disclosed security flaw has placed millions of AI servers at risk after researchers identified a critical vulnerability in vLLM, a widely deployed Python package for serving large language models. The issue, tracked as CVE-2026-22778 (GHSA-4r2x-xpjr-7cvv), enables remote code execution (RCE) by submitting a malicious video URL to a vulnerable vLLM API endpoint. The vulnerability affects vLLM versions 0.8.3 through 0.14.0 and was patched in version 0.14.1. The disclosure was released as breaking news and is still developing, with additional technical details expected as the investigation continues. Due to vLLM’s scale of adoption, reportedly exceeding three million downloads per month, the impact of CVE-2026-22778 is considered severe.

What Is vLLM and Why CVE-2026-22778 Matters 

vLLM is a high-throughput, memory-efficient inference engine designed to serve large language models in production environments. It is commonly used to address performance bottlenecks associated with traditional LLM serving, including slow inference speeds, poor GPU utilization, and limited concurrency. Compared to general-purpose local runners such as Ollama, vLLM is frequently deployed in high-load environments where scalability and throughput are critical. Because vLLM is often exposed through APIs and used to process untrusted user input, vulnerabilities like CVE-2026-22778 increase the attack surface. Any organization running vLLM with video or multimodal model support enabled is potentially affected. OX Security customers identified as vulnerable were notified and instructed to update their deployments.

Impact: Full Server Takeover via Remote Code Execution 

CVE-2026-22778 allows attackers to achieve RCE by sending a specially crafted video link to a vLLM multimodal endpoint. Successful exploitation can result in arbitrary command execution on the underlying server. From there, attackers may exfiltrate data, pivot laterally within the environment, or fully compromise connected systems.  The vulnerability does not require authentication beyond access to the exposed API, making internet-facing deployments particularly at risk. Because vLLM is commonly used in clustered or GPU-backed environments, the blast radius of a single exploited instance may extend well beyond one server. 

Technical Analysis 

The root cause of CVE-2026-22778 is a chained exploit combining an information disclosure bug with a heap overflow that ultimately leads to remote code execution. According to OX Security, the first stage involves bypassing ASLR protections through memory disclosure. When an invalid image is submitted to a multimodal vLLM endpoint, the Python Imaging Library (PIL) raises an error indicating it cannot identify the image file.   In vulnerable versions, this error message includes a heap memory address. That address is located before libc in memory, reducing the ASLR search space and making exploitation more reliable. The patched code sanitizes these error messages to prevent leaking heap addresses.  With the leaked address available, the attacker proceeds to the second vulnerability. vLLM relies on OpenCV for video decoding, and OpenCV bundles FFmpeg 5.1.x. That FFmpeg release contains a heap overflow flaw in its JPEG2000 decoder.  JPEG2000 images use separate buffers for color channels: a large buffer for the Y (luma) channel and smaller buffers for the U and V (chroma) channels. The decoder incorrectly trusts the image’s cdef (channel definition) box, allowing channels to be remapped without validating buffer sizes. This means large Y channel data can be written into a smaller U buffer.  Because the attacker controls both the image geometry and the channel mapping, they can precisely control how much data overflows and which heap objects are overwritten. By abusing internal JPEG2000 headers and crafting specific channel values, the overflow can overwrite adjacent heap memory, including function pointers. Execution can then be redirected to a libc function such as system(), resulting in full RCE. 
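To make the first stage of the fix concrete, here is a minimal, hypothetical handler (not vLLM’s actual code) showing the sanitization pattern described above: catch the Pillow error and return a fixed message rather than echoing the library’s exception text, which is where the leaked address lived.

# Minimal sketch of the error-sanitization pattern described above; this is a
# hypothetical handler, not vLLM's actual code. Pillow's UnidentifiedImageError
# message embeds the repr of the input object (and therefore a memory address),
# so raw exception text should never be echoed back to API clients.
from io import BytesIO
from PIL import Image, UnidentifiedImageError

def load_user_image(data: bytes) -> Image.Image:
    try:
        return Image.open(BytesIO(data))
    except UnidentifiedImageError:
        # Fixed, generic message: no str(exc), no object reprs, no addresses.
        raise ValueError("unsupported or corrupt image") from None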

Affected Versions and Recommended Actions 

The following vLLM Python package versions are affected: 
  • Affected versions: vLLM >= 0.8.3 and < 0.14.1
  • Fixed version: vLLM 0.14.1
Organizations are strongly advised to update immediately to vLLM 0.14.1, which includes an updated OpenCV release addressing the JPEG2000 decoder flaw. If upgrading is not immediately feasible, disabling video model functionality in production environments is recommended until patching can be completed.  CVE-2026-22778 demonstrates how vulnerabilities in third-party media processing libraries can cascade into critical RCE flaws in AI infrastructure. For teams operating vLLM at scale, prompt remediation and careful review of exposed multimodal endpoints are essential to reducing risk. 
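For teams that want a quick way to confirm whether a given environment falls in the reported range, a small check along these lines may help (a sketch that assumes the third-party packaging library is installed; version boundaries are taken from the advisory details above):

# Quick local check against the affected range reported for CVE-2026-22778.
# Assumes the third-party "packaging" library is available.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

AFFECTED_MIN = Version("0.8.3")   # first affected release per the advisory
FIXED = Version("0.14.1")         # patched release

def vllm_cve_2026_22778_exposure() -> str:
    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        return "vLLM is not installed in this environment"
    if AFFECTED_MIN <= installed < FIXED:
        return f"vLLM {installed} is in the affected range; upgrade to {FIXED} or later"
    return f"vLLM {installed} is outside the reported affected range"

if __name__ == "__main__":
    print(vllm_cve_2026_22778_exposure())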
  •  

Lt Gen (Dr) Rajesh Pant to Lead Webinar on AI-Driven Cyber Threats — Register Free Now

ai cybersecurity webinar February 2026

Cyble and The Cyber Express have announced a high-impact AI cybersecurity webinar for February 2026, bringing urgent focus to the growing convergence of AI-driven cybercrime, ransomware escalation, and hacktivism-led disruption. Titled “AI, Ransomware & Hacktivism: The Cyber Risk Shift Most Leaders Are Failing to See,” the session will feature Lt Gen (Dr) Rajesh Pant, Chairman, Cyber Security Association of India and Former National Cyber Security Coordinator, Government of India. The Zoom webinar will take place on Tuesday, 24 February 2026, at 4:00 PM IST, moderated by Mihir Bagwe, Principal Correspondent, The Cyber Express. Registration is now open with FREE seats available, but slots are limited and filling quickly. Register Now (FREE, Limited Seats): [Insert Registration Link Here]

Bonus for Registered Attendees: Annual Threat Landscape Report 2025

All registered attendees will receive a downloadable copy of the Annual Threat Landscape Report 2025. The 2025 threat landscape shows ransomware, hacktivism, and AI-enabled attacks continuing to scale despite global law enforcement disruptions. Based on millions of observations across dark web and open web sources, spanning industries, regions, and sectors, the report reveals:
  • How attackers adapted
  • Where defenses failed
  • Which threats are set to persist into 2026
This makes the webinar a valuable learning and intelligence opportunity as organizations plan their cybersecurity strategies for 2026.

AI Cybersecurity Webinar February 2026: Why This Session Matters Now

The webinar comes at a critical moment as the global cyber threat environment rapidly evolves under the influence of AI. Ransomware groups are increasingly using AI to automate targeting, improve evasion, and scale attacks across industries. At the same time, hacktivist campaigns are merging with organized cybercrime, creating hybrid threats that challenge both enterprise security teams and national infrastructure defenses. The rise of these combined risks is shaping the future of cybersecurity in 2026, and leaders who fail to adapt now may face severe consequences in the year ahead.

Featuring Lt Gen (Dr) Rajesh Pant at the AI Ransomware Webinar February 2026

The upcoming webinar will offer rare leadership-level insights from Lt Gen (Dr) Rajesh Pant, Chairman of the Cyber Security Association of India and former National Cyber Security Coordinator, Government of India. With decades of experience guiding national cyber preparedness and responding to global threat dynamics, Dr. Pant will share frontline perspectives on how AI is reshaping ransomware operations and hacktivism-driven cyber disruption.

What This AI Ransomware Webinar February 2026 Covers

The session will focus on the cyber risk shifts most leaders are still underestimating. Key discussion points include:
  • How threat actors are using AI to expand ransomware campaigns
  • Why hacktivism is converging with cybercrime networks
  • The most dangerous cyber risk trends heading into 2026
  • What CISOs must prioritize now to avoid reactive defenses later
  • How leadership, policy, and execution often fail to align
The webinar will also explore evolving activity across the AI-enabled hacktivism landscape, where tactics are accelerating rapidly.

Here's Why You Should Attend This AI Cybersecurity Webinar February 2026

The webinar is designed for CISOs, cyber risk leaders, security professionals, and decision-makers who need clarity on what comes next. By attending, participants will gain:
  • Strategic understanding of AI-powered ransomware evolution
  • Insights into the hacktivism-cybercrime overlap
  • Practical guidance for preparing enterprise defenses for 2026
  • Direct perspectives from one of India’s top cyber leaders
For professionals tracking AI-enabled hacktivism, this session provides essential context and actionable takeaways. Register Now: Cybersecurity Webinar February 2026 (FREE, Limited Seats). FREE Registration | Limited Seats | Filling Quickly. Don’t miss this essential discussion on the future of AI-driven cyber threats. Register Now (FREE)
  •  

Berchem School Hit by Cyberattack as Hackers Target Parents With €50 Ransom Demand

cyberattack on Berchem school

A cyberattack on Berchem school has raised serious concerns after hackers demanded ransom money not only from the institution but also directly from students’ families. The Berchem school cyberattack incident occurred at the secondary school Onze-Lieve-Vrouwinstituut Pulhof (OLV Pulhof), where attackers disrupted servers and later threatened to release sensitive information unless payments were made. The case, confirmed by the public prosecutor’s office and first reported by ATV, highlights the growing threat of ransomware attacks on schools, where cybercriminals increasingly target educational institutions due to their reliance on digital systems and the sensitive data they store.

Cyberattack on Berchem School Disrupted Servers

The Berchem school hacking incident took place shortly after the Christmas holidays, in early January. According to reports, the school’s servers were taken offline, causing disruption to internal systems. Hackers reportedly demanded a ransom from the school soon after the breach. However, the institution refused to comply with the demands. This decision appears to have triggered an escalation in the attackers’ strategy, shifting pressure onto parents.

School Files Police Complaint After Ransom Demand

Following the cyberattack on Berchem school, OLV Pulhof acted quickly by contacting law enforcement. The school filed a formal complaint against unknown persons and brought in the police’s Regional Computer Crime Unit (RCCU) to respond to the incident. In addition to involving authorities, the school also moved to secure its digital infrastructure. Out of concern for student safety and data protection, the institution reportedly set up a new, secure network environment soon after the breach. The incident is now under investigation by the Federal Judicial Police.

Hackers Target Parents With €50 Per Child Ransom Demand

This week, the cybercriminals expanded their attack by sending threatening messages directly to parents of students. The hackers demanded a ransom of 50 euros per child, warning that private information such as addresses or photos could be released if the payment was not made. A student described the situation, saying that the school required everyone to change passwords and warned students not to click on suspicious links. “We had to change all our passwords at school, otherwise they would release our addresses or photos,” the student said. Another student added that their father received an email demanding payment, which caused fear and uncertainty. “My dad also got an email last night. That scares me a little. They were asking for 50 euros per child.” This tactic reflects a disturbing trend in school cyberattacks, where criminals attempt to exploit families emotionally and financially.

Parents Advised Not to Pay and Not to Click

The school has strongly advised parents not to respond to the ransom demands. Families were told not to pay, and more importantly, not to click on any links or attachments included in the hackers’ communications, as these could lead to further compromise or malware infections. Cybersecurity experts generally warn against paying ransoms, as it does not guarantee that stolen data will be deleted or that systems will be restored. Paying can also encourage attackers to continue targeting schools and vulnerable communities.

Classes Continue Despite Cybersecurity Incident

Despite the attack, lessons at OLV Pulhof have continued. While the school’s servers were initially down, it appears that temporary solutions and new systems allowed teaching to proceed. However, the full consequences of the hacking have not yet been disclosed. It remains unclear what data may have been accessed or whether any personal information was stolen. Educational institutions often store sensitive records, including student details, contact information, and internal documents, making them attractive targets for cybercriminal groups.

Rising Concern Over Ransomware Attacks on Schools

The cyberattack on the Berchem secondary school is part of a wider pattern of increasing cybercrime targeting schools across Europe. Schools often face limited cybersecurity budgets, older IT systems, and large networks of users, making them easier to infiltrate than larger corporate organizations. Attacks like this demonstrate how ransomware incidents can go beyond technical disruption, affecting families and creating fear in local communities.

Investigation Ongoing

Authorities have not yet identified who is behind the attack. The Federal Judicial Police continue to investigate, while the school works to strengthen its systems and protect students and staff. For now, parents are being urged to remain cautious, avoid engaging with the attackers, and report any suspicious communications to law enforcement. The Berchem school cyberattack serves as a reminder that cybersecurity in schools is no longer optional but essential for protecting students, families, and the education system itself.
  •  

Benefits of Executive Monitoring Platforms for Business Growth


When a CEO's deepfake appears in a fraudulent investor call, when stolen credentials surface on dark web marketplaces, or when executive impersonation attempts trick employees into wire transfers, the damage isn't just technical—it's existential. Yet most organizations treat executive protection as an afterthought, if they think about it at all, instead of leveraging Executive Monitoring Platforms to detect and mitigate these threats proactively. Here's the uncomfortable reality: your executives aren't just high-value employees. They're walking attack vectors. Their social media presence, their public speaking engagements, their digital footprints across platforms—all of it creates opportunities for threat actors. And unlike technical vulnerabilities that can be patched, executive exposure is permanent, cumulative, and growing by the day. Executives understand visibility as a business necessity for leadership, brand building, and investor confidence. What they often lack is executive security intelligence that shows how attackers weaponize that visibility. The question isn't whether your leadership team needs executive monitoring. It's whether you can afford not to have it.

The Executive Blind Spot Nobody Talks About

Traditional security frameworks focus on perimeter defense, endpoint protection, and network monitoring. Executive monitoring exists in a different dimension entirely—one that bridges digital risk, physical security, and reputational management in ways most security teams aren't equipped to handle. Consider what attackers see when they target executives: comprehensive LinkedIn profiles detailing career histories and professional networks, conference schedules announcing travel plans weeks in advance, published interviews revealing decision-making processes and strategic priorities, social media posts exposing family members and personal interests, and professional email addresses easily harvested for spear-phishing campaigns. This isn't reconnaissance requiring sophisticated hacking. It's open-source intelligence gathering anyone can perform in an afternoon. The real vulnerability is that executives themselves rarely understand their exposure. They view public visibility as part of the job—necessary for thought leadership, investor relations, and business development. They're not wrong. But they're also not thinking like attackers.

Why Executive Threats Are Business Continuity Issues

A compromised server gets fixed. A breached database gets contained. But when executives become attack targets, the damage radiates through the organization in ways that don't show up in incident reports. Business email compromise attacks targeting executives cost organizations an average of $4.1 million per incident. That's not counting the reputational damage, the eroded stakeholder trust, or the board-level questions about why leadership wasn't better protected. Deepfake technology has matured to the point where realistic video and audio impersonations can be generated in hours, not days. When a fake CEO video circulates making false claims about company performance, markets react before PR teams can even draft responses. Executive credential leaks create cascading risks. Unlike typical employee accounts, executive credentials often have elevated privileges, access to sensitive strategic information, and the authority to approve high-value transactions. A single compromised executive account can become the fulcrum for devastating attacks. This is where standard security tools fall short. They protect infrastructure—but they don’t deliver real-time executive protection. They don’t monitor the dark web for leaked executive credentials, track impersonation accounts on social platforms, or identify deepfakes before they go viral. That gap is precisely what executive monitoring solutions are designed to fill.

The Growth Multiplier Effect

Here's the business case that gets overlooked in security discussions: executive monitoring doesn't just prevent damage—it enables growth. When leaders can engage publicly with confidence, thought leadership accelerates. When executives travel internationally backed by executive protection services, deal-making and partnerships move faster. When boards know that leadership exposure is continuously monitored, governance concerns diminish and strategic focus increases. Organizations with robust executive monitoring platforms demonstrate operational maturity that resonates with investors, partners, and enterprise clients. It signals that security isn't just an IT function—it's embedded in how the business operates at the highest levels. For companies pursuing M&A activity, executive protection becomes due diligence table stakes. Acquiring companies want assurance that leadership teams come without hidden security liabilities. The velocity of business decisions improves when executives aren't second-guessing their digital exposure. Strategic communications happen more freely. Competitive intelligence can be gathered more aggressively. Innovation discussions occur with less fear of leakage.

What Effective Executive Monitoring Actually Looks Like

The difference between security theater and genuine protection is specificity. Generic threat intelligence doesn't translate to executive protection. What matters is real-time monitoring across the specific vectors where executive threats emerge. Effective platforms monitor dark web forums and cybercrime marketplaces for executive PII leaks, tracking when credentials, personal data, or sensitive information surfaces in underground channels. They deploy AI-driven deepfake detection across social media and video platforms, identifying manipulated content before it gains distribution. Social media impersonation tracking identifies fake accounts masquerading as executives, often used for business email compromise setup. Compromised credential monitoring alerts when executive email addresses or passwords appear in breach databases, enabling immediate password resets before exploitation.

The challenge is scale and speed. Manual monitoring can't keep pace with how quickly threats emerge and spread. By the time a security analyst discovers an executive impersonation account, it may have already been used to contact employees or partners. This is where platforms like Cyble's Executive Monitoring solution demonstrate the value of automation paired with human expertise. The platform combines real-time alerts delivered via email, SMS, or WhatsApp with AI-powered threat detection that identifies deepfakes, impersonations, and credential leaks across surface web, deep web, and dark web sources. It provides unified dashboard visibility that consolidates executive threats into a single view rather than fragmenting them across multiple tools, and integrates physical security intelligence for executives traveling to high-risk locations with contextualized threat assessments.

What separates effective solutions from basic monitoring is context. Alerting about every potential threat creates noise. Understanding which threats pose genuine risk to specific executives based on their role, public profile, and current activities—that's intelligence. Cyble's approach emphasizes actionable insights over data dumps. When an executive's credentials appear in a breach, the platform doesn't just alert—it provides context about the source, potential impact, and recommended response actions. When deepfakes are detected, automated takedown processes can be initiated, removing fraudulent content before it spreads widely.
Also read: How Cyble is Leading the Fight Against Deepfakes with Real-Time Detection & Takedowns
Instead of flooding teams with noise, Cyble provides insight into severity, relevance, and recommended actions—turning raw data into Executive Security Intelligence.
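
To make the credential-monitoring idea above concrete, here is a minimal Python sketch that checks whether a given password appears in known breach corpora using the public Have I Been Pwned range API (a k-anonymity lookup in which only the first five characters of the password's SHA-1 hash leave the machine). This illustrates the general concept only; it is not a description of how Cyble or any other commercial platform performs credential monitoring.

```python
# Minimal, hypothetical credential-exposure check using the public HIBP range API.
# Only the first 5 hex characters of the SHA-1 hash are sent over the network.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times this password appears in known breach data (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Each response line looks like "HASH_SUFFIX:COUNT"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("correct horse battery staple")
    print("exposed in known breaches" if hits else "not found in known breaches", hits)
```

In practice, monitoring for exposed executive email addresses and credentials relies on breach-data feeds and dark web sources rather than a single public API, but the workflow is similar: normalize or hash the identifier, query the corpus, and trigger an alert or password reset on a hit.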

The ROI Nobody Calculates

Traditional security investments justify themselves through prevented breaches and avoided downtime. Executive monitoring ROI is harder to quantify precisely because it's impossible to measure attacks that never happened due to deterrence and early intervention. But consider the inverse calculation: what's the cost of not having it? A single successful executive impersonation attack costs millions. A leaked executive credential that enables a broader breach amplifies damage exponentially. A deepfake crisis that damages brand reputation takes years to repair. The question shifts from "can we justify the investment" to "can we justify the exposure." Organizations serious about growth recognize that executive security is growth infrastructure, not a cost center. It's the same logic that drives investments in executive coaching, strategic advisors, and leadership development. You're protecting and amplifying the most valuable assets in the organization—the people making decisions.

Building Protection That Scales With Ambition

The final insight that separates mature organizations from reactive ones is that executive monitoring isn't static. As companies grow, executive profiles rise. As leadership becomes more publicly visible, attack surfaces expand. As strategic importance increases, threat actor interest intensifies. Effective senior executive threat protection must scale alongside ambition. Scalable executive protection means platforms that grow with organizational complexity, handling increased numbers of monitored executives as leadership teams expand. They adapt to evolving threat vectors, continuously updating detection capabilities as attack techniques mature. They integrate with existing security infrastructure rather than creating isolated silos, and provide graduated protection levels matching executive risk profiles rather than one-size-fits-all approaches. This requires platforms built on threat intelligence foundations, not bolt-on features added to existing security suites. Cyble's Executive Monitoring exists within a broader threat intelligence ecosystem that includes dark web monitoring, brand protection, and attack surface management. This integration means executive threats aren't isolated signals—they're correlated with broader organizational risk patterns. When an executive's name appears in dark web discussions alongside mentions of your company's infrastructure, that correlation matters. When brand impersonation campaigns coincide with executive travel to specific regions, that context informs protective measures.

The Strategic Imperative

Executive monitoring represents a fundamental shift in how organizations think about security. It acknowledges that protecting infrastructure isn't enough when people are targets. It recognizes that reputational risk and operational risk intertwine at the leadership level. It accepts that digital threats demand digital surveillance, not just digital defenses. For organizations pursuing growth, executive protection isn't optional anymore. It's foundational. The businesses that will dominate their markets in the coming decade aren't just those with the best products or strongest financials—they're the ones whose leadership can operate with confidence, visibility, and strategic aggression because their digital exposure is being actively managed. The threat landscape has evolved. Executive protection must evolve with it. The question is whether your organization will adapt proactively or learn these lessons the expensive way.
Interested in exploring how executive monitoring can strengthen your leadership protection and enable strategic growth? Learn more about comprehensive executive threat intelligence solutions at Cyble.com.
  •  

Russian APT28 Exploits Zero-Day Hours After Microsoft Discloses Office Vulnerability


Ukraine's cyber defenders warn Russian hackers weaponized a Microsoft zero-day within 24 hours of public disclosure, targeting government agencies with malicious documents delivering Covenant framework backdoors.

Russian state-sponsored hacking group APT28 exploited a critical Microsoft Office zero-day vulnerability, tracked as CVE-2026-21509, less than a day after the vendor publicly disclosed the flaw, launching targeted attacks against Ukrainian government agencies and European Union institutions.

Ukraine's Computer Emergency Response Team detected exploitation attempts that began on January 27—just one day after Microsoft published details about CVE-2026-21509.

Microsoft had acknowledged active exploitation when it disclosed the flaw on January 26, but it withheld details about the threat actors, so it remains unclear whether the vendor was referring to this campaign or a different one. Either way, the speed at which APT28 deployed customized attacks underscores how narrow a window defenders have to patch critical vulnerabilities.

Also read: APT28’s Recent Campaign Combined Steganography, Cloud C2 into a Modular Infection Chain

CERT-UA discovered a malicious DOC file titled "Consultation_Topics_Ukraine(Final).doc" containing the CVE-2026-21509 exploit on January 29. Metadata revealed attackers created the document on January 27 at 07:43 UTC. The file masqueraded as materials related to Committee of Permanent Representatives to the European Union consultations on Ukraine's situation.

[caption id="attachment_109153" align="aligncenter" width="700"]APT28, Russia, Microsoft Office, Word, CERT-UA, Backdoor Word file laced with malware (Source: CERT-UA)[/caption]

On the same day, attackers impersonated Ukraine's Ukrhydrometeorological Center, distributing emails with an attached DOC file named "BULLETEN_H.doc" to more than 60 email addresses. Recipients primarily included Ukrainian central executive government agencies, representing a coordinated campaign against critical government infrastructure.

The attack chain begins when victims open malicious documents using Microsoft Office. The exploit establishes network connections to external resources using the WebDAV protocol—a file sharing protocol that extends HTTP to enable collaborative editing. The connection downloads a shortcut file containing program code designed to retrieve and execute additional malicious payloads.

[caption id="attachment_109150" align="aligncenter" width="600"] Exploit chain. (Source CERT-UA)[/caption]

Successful execution creates a DLL file "EhStoreShell.dll" disguised as a legitimate "Enhanced Storage Shell Extension" library, along with an image file "SplashScreen.png" containing shellcode. Attackers implement COM hijacking by modifying Windows registry values for a specific CLSID identifier, a technique that allows malicious code to execute when legitimate Windows components load.

The malware creates a scheduled task named "OneDriveHealth" that executes periodically. When triggered, the task terminates and relaunches the Windows Explorer process. Because of the COM hijacking modification, Explorer automatically loads the malicious EhStoreShell.dll file, which then executes shellcode from the image file to deploy the Covenant framework on compromised systems.
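
For defenders, the persistence artifacts described above (the "OneDriveHealth" scheduled task and a per-user COM registration pointing at EhStoreShell.dll) lend themselves to simple host-based hunting. The Python sketch below is a hypothetical illustration only: it assumes a Windows host and checks the common per-user COM-hijacking location under HKCU\Software\Classes\CLSID, since the exact CLSID modified in this campaign is documented in the CERT-UA advisory rather than reproduced here.

```python
# Hypothetical hunting sketch (not CERT-UA tooling). It looks for the two persistence
# artifacts reported in this campaign: a scheduled task named "OneDriveHealth" and a
# per-user COM InprocServer32 entry referencing EhStoreShell.dll. Windows-only.
import subprocess
import winreg

SUSPECT_TASK = "OneDriveHealth"
SUSPECT_DLL = "ehstoreshell.dll"

def task_exists(name: str) -> bool:
    """Return True if a scheduled task with the given name is registered."""
    result = subprocess.run(["schtasks", "/Query", "/TN", name],
                            capture_output=True, text=True)
    return result.returncode == 0

def find_com_hijacks(dll_fragment: str):
    """Scan HKCU\\Software\\Classes\\CLSID for InprocServer32 defaults naming the DLL."""
    hits = []
    try:
        root = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Classes\CLSID")
    except OSError:
        return hits  # no per-user CLSID hive on this profile
    index = 0
    while True:
        try:
            clsid = winreg.EnumKey(root, index)
        except OSError:
            break  # no more subkeys
        index += 1
        try:
            with winreg.OpenKey(root, clsid + r"\InprocServer32") as server:
                value, _ = winreg.QueryValueEx(server, "")  # default value = DLL path
            if dll_fragment in str(value).lower():
                hits.append((clsid, value))
        except OSError:
            continue  # this CLSID has no InprocServer32 subkey
    return hits

if __name__ == "__main__":
    if task_exists(SUSPECT_TASK):
        print(f"[!] Scheduled task '{SUSPECT_TASK}' is present")
    for clsid, path in find_com_hijacks(SUSPECT_DLL):
        print(f"[!] Possible COM hijack: {clsid} -> {path}")
```

Any match here is a starting point for investigation rather than proof of compromise, since legitimate software also registers per-user COM servers and scheduled tasks.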

Covenant is a post-exploitation framework similar to Cobalt Strike that provides attackers persistent command-and-control access. In this campaign, APT28 configured Covenant to use Filen.io, a legitimate cloud storage service, as command-and-control infrastructure. This tactic of routing traffic through a trusted service, sometimes described as living off trusted sites, makes malicious traffic appear legitimate and harder to detect.

CERT-UA discovered three additional malicious documents using similar exploits in late January 2026. Analysis of embedded URL structures and other technical indicators revealed these documents targeted organizations in EU countries. In one case, attackers registered a domain name on January 30, 2026—the same day they deployed it in attacks—demonstrating the operation's speed and agility.

"It is obvious that in the near future, including due to the inertia of the process or impossibility of users updating the Microsoft Office suite and/or using recommended protection mechanisms, the number of cyberattacks using the described vulnerability will begin to increase," CERT-UA warned in its advisory.

Microsoft released an emergency fix for CVE-2026-21509, but many organizations struggle to rapidly deploy patches across enterprise environments. The vulnerability affects multiple Microsoft Office products, creating a broad attack surface that threat actors will continue exploiting as long as unpatched systems remain accessible.

Read: Microsoft Releases Emergency Fix for Exploited Office Zero-Day

CERT-UA attributes the campaign to UAC-0001, the agency's designation for APT28, also known as Fancy Bear or Forest Blizzard. The group operates on behalf of Russia's GRU military intelligence agency and has conducted extensive operations targeting Ukraine since Russia's 2022 invasion. APT28 previously exploited Microsoft vulnerabilities within hours of disclosure, demonstrating consistent capability to rapidly weaponize newly discovered flaws.

CERT-UA recommends organizations immediately implement mitigation measures outlined in Microsoft's advisory, particularly Windows registry modifications that prevent exploitation. The agency specifically urges blocking or monitoring network connections to Filen cloud storage infrastructure, providing lists of domain names and IP addresses in its indicators of compromise section.
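
As a minimal illustration of the "monitor connections to Filen infrastructure" advice, the sketch below scans an exported DNS query log (assumed here to contain one queried hostname per line, a hypothetical format) and flags lookups of filen.io domains. A real deployment would instead load the full domain and IP indicator lists from the CERT-UA advisory and feed them into firewall, proxy, or SIEM rules.

```python
# Hypothetical sketch: flag DNS lookups of Filen cloud storage domains in a plain-text
# log of queried hostnames (one per line). Extend SUSPECT_SUFFIXES with the indicators
# of compromise published by CERT-UA.
import sys

SUSPECT_SUFFIXES = ("filen.io",)

def flag_suspect_lookups(path: str) -> None:
    with open(path, encoding="utf-8") as log:
        for lineno, line in enumerate(log, start=1):
            host = line.strip().lower().rstrip(".")
            if any(host == s or host.endswith("." + s) for s in SUSPECT_SUFFIXES):
                print(f"line {lineno}: suspicious lookup -> {host}")

if __name__ == "__main__":
    flag_suspect_lookups(sys.argv[1] if len(sys.argv) > 1 else "dns_queries.log")
```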

  •  

Britain and Japan Join Forces on Cybersecurity and Strategic Minerals


Japan and Britain have agreed to expand cooperation on cybersecurity and critical mineral supply chains, framing the move as a strategic response to intensifying geopolitical, economic, and technological pressures. The agreement was confirmed during British Prime Minister Keir Starmer’s overnight visit to Tokyo, where leaders from both countries reaffirmed their commitment to collective security and economic resilience.

At a joint news conference in Tokyo, Starmer said the timing of his visit was shaped by mounting global instability. “Geopolitical, economic, and technological shocks are literally shaking the world,” he said, adding that he and Japanese Prime Minister Sanae Takaichi had agreed to strengthen collective security across the Atlantic and the Indo-Pacific. Central to those efforts is the launch of a new cyber strategic partnership intended “to improve our cybersecurity to protect our economy,” placing cybersecurity at the core of bilateral cooperation between Japan and the UK.

Starmer’s Tokyo stop came immediately after he visited Beijing, where he met Chinese President Xi Jinping and agreed to seek a long-term, stable “strategic partnership.”

Britain and Japanese Cybersecurity Strategy Also Includes Minerals and Supply Chain Resilience 

Alongside the cybersecurity partnership, leaders from both nations focused on the strategic importance of critical minerals, which are essential for advanced manufacturing, clean energy technologies, and defense systems. Prime Minister Takaichi pointed to growing concerns over global export restrictions, stressing the urgency of cooperation among trusted partners. “We agreed that the like-minded countries must work together” to strengthen supply chain resilience, she said.

For both Britain and Japan, securing access to critical minerals has become a national security issue as much as an economic one. Disruptions to supply chains could affect everything from digital infrastructure to defense readiness, making cooperation between Tokyo and London a key pillar of broader economic resilience. The bilateral discussions took place as Japan faces heightened tensions with China, particularly after comments by Takaichi regarding possible Japanese involvement if China were to take military action against Taiwan, the self-governing island claimed by Beijing. These tensions have added urgency to Japan and Britain’s efforts to diversify supply chains and reinforce strategic partnerships.

Wider Security Alignments Across Europe and the Indo-Pacific 

The Tokyo talks unfolded against a backdrop of expanding international security cooperation. According to The Associated Press, Japan and the European Union announced a new security and defense partnership the previous day, marking the first such agreement between the EU and an Indo-Pacific country. Japanese Foreign Minister Takeshi Iwaya and EU foreign policy chief Josep Borrell said the partnership aims to strengthen military ties through joint exercises and increased exchanges between defense industries.

Borrell, speaking in Tokyo, described the global environment in stark terms. “We live in a very dangerous world. We live in a world of growing rivalries, climate accidents, and threats of war,” he said, arguing that “partnerships among friends” are the only effective response. He called the EU-Japan agreement “a historical and very timely step given the situation in both of our regions.” The partnership includes cooperation on cybersecurity and space defense, reinforcing the shared view that digital and hybrid threats are central to modern security challenges.

Borrell’s visit to Japan was part of a broader East Asia tour that also included South Korea, reflecting the EU’s increasing engagement in the Indo-Pacific. The tour comes as China and Russia expand joint military activities and North Korea deepens its cooperation with Moscow, including sending troops to Russia. The Tokyo discussions followed North Korea’s test launch of what is believed to be a new type of intercontinental ballistic missile. Iwaya and Borrell expressed “grave concern” over Russia’s growing military cooperation with North Korea, including troop deployments and arms transfers, and reiterated their commitment to supporting Ukraine while condemning Russian aggression.
  •  

Union Budget 2026–27: India Bets Big on Cloud, AI, and Cyber Resilience


When Union Finance Minister Nirmala Sitharaman of India presented the Union Budget 2026–27 on February 1, it became clear that this year’s financial roadmap is not only about fiscal numbers; it is also about shaping the infrastructure of India’s digital future. The India Budget 2026 sends a strong and confident message: India wants to lead in the next phase of global growth, and that leadership will be built on AI, cloud, data centres, semiconductors, and cybersecurity. What stands out in Budget 2026 is the long-term thinking. Instead of short-term incentives or fragmented digital schemes, the government is laying down policy signals that stretch decades ahead, a rare and important move in the technology sector.

Budget 2026 Recognises Digital Infrastructure as Economic Infrastructure

For years, digital growth was often discussed as an enabler. In Budget 2026, digital infrastructure is being treated as core infrastructure, just like roads, railways, or energy. Union Minister for Electronics and IT Ashwini Vaishnaw rightly pointed out that AI data centres are now part of the foundation layer of modern economies. Without compute power, cloud capacity, and reliable digital networks, AI ambitions remain theoretical. India already has nearly USD 70 billion in investments underway in this space, with another USD 90 billion announced. That alone reflects how quickly the global market is betting on India’s data centre potential. But Budget 2026 goes further.

The Tax Holiday Till 2047 Is a Bold Signal

One of the most defining announcements in India Budget 2026 is the proposed tax holiday till 2047 for foreign cloud companies providing services globally using Indian data centres. This is more than a tax incentive — it is a strategic invitation. By offering policy clarity over two decades, India is telling global cloud players: build here, scale here, and serve the world from here. In a world where cloud infrastructure is becoming as geopolitically important as energy supply chains, this move positions India as not just a consumer of cloud services, but a serious global hosting hub. The safe harbour provision of 15% on cost further strengthens confidence for companies operating through related entities. This long-term stability may prove to be one of the smartest digital policy bets India has made in years.

IT Services Get the Relief They Have Long Needed

India’s IT services industry has always been a powerhouse, with exports now exceeding USD 220 billion. Yet, the sector has often faced complexity in tax compliance and transfer pricing frameworks. Budget 2026–27 addresses this in a practical way. By grouping software development, IT-enabled services, KPO, and contract R&D under one unified category — Information Technology Services — the government is acknowledging how interconnected these segments truly are. The proposed safe harbour margin of 15.5% and the jump in the threshold from Rs. 300 crore to Rs. 2,000 crore are not just technical reforms. They reduce friction, encourage growth, and allow companies to focus more on innovation than paperwork. Even more importantly, approvals through an automated, rule-driven process remove uncertainty — something businesses value as much as incentives.

Semiconductor Mission 2.0 Reflects Strategic Continuity

The announcement of India Semiconductor Mission (ISM) 2.0, with an allocation of Rs. 1,000 crore, reinforces that India is serious about building supply chain resilience. Semiconductors are no longer just an industrial component — they are a strategic necessity. ISM 2.0’s focus on full-stack Indian IP, equipment production, and skilled workforce development reflects a long-term push toward self-reliance, not isolation. It is about India becoming a meaningful contributor to global technology manufacturing, not just an importer.

Electronics Manufacturing Momentum Is Clearly Building

The expansion of the Electronics Components Manufacturing Scheme (ECMS), with the outlay proposed to rise to Rs. 40,000 crore, shows that India wants to capture the manufacturing opportunity created by global supply chain shifts. Investment commitments already exceeding targets indicate that industry is responding. This is one area where Budget 2026 could create a multiplier effect — across jobs, exports, innovation, and ecosystem development.

Industry Response Highlights the Bigger Picture

The technology and cybersecurity industry has largely welcomed the direction of Budget 2026, especially its long-term focus on cloud infrastructure, AI readiness, and digital resilience. Pinkesh Kotecha, Chairman and CEO of Ishan Technologies, noted that the Budget puts strong backing behind India’s infrastructure ambitions.
“Union Budget 2026 puts hard numbers behind India’s digital infrastructure ambition,” he said, pointing to the tax holiday till 2047 for global cloud providers using Indian data centres and the safe harbour provisions for IT services. According to him, these steps position India not only as a large digital market, but also as “a global hosting hub.”
He also stressed that as AI workloads grow, the need for secure, high-availability connectivity will become just as important as compute and storage. Cybersecurity leaders have echoed similar views. Major Vineet Kumar, Founder and Global President of CyberPeace, called the Budget a strong signal that India’s growth and security priorities are now deeply connected.
“India’s growth ambitions are now inseparable from its digital and security foundations,” he said.
He added that the focus on AI, cloud, and deep-tech infrastructure makes cybersecurity a core national and economic requirement, not a secondary concern. From the banking and services perspective, Manish S., Head of Trade Finance Implementation at Standard Chartered India, highlighted the opportunities the Budget creates for professionals and businesses.
“India’s Budget 2026–27 supports services with fiscal incentives for foreign cloud firms, a data centre push, GCC support and skilling commitments,”
he said, encouraging professionals to upskill in cloud, AI, data engineering, and cybersecurity to stay relevant in the evolving ecosystem. Infrastructure providers also see long-term impact. Subhasis Majumdar, Managing Director of Vertiv India, described the tax holiday as a major competitiveness boost.
“The long-term tax holiday for foreign cloud companies until 2047 is a game-changing move,”
he said, adding that it will attract large global investments and create a multiplier effect across power, cooling, and critical digital infrastructure. Sujata Seshadrinathan, Co-Founder and Director at Basiz Fund Service, also welcomed the Budget’s balanced approach to advanced technology adoption. She noted that the government has recognised both the benefits and challenges of emerging technologies like AI, including ecological concerns and labour displacement. She highlighted that the focus on skilling, reskilling, and DeepTech-led inclusive growth is “a push in the right direction.” Together, these reactions reflect a shared view across industry: Budget 2026 is not just supporting technology growth, but actively shaping the foundation for India’s long-term digital and cyber future.

Budget 2026 Sets the Stage for India’s Digital Decades

Overall, Budget 2026 feels less like an annual budget and more like a policy blueprint for India’s digital future. The focus on AI infrastructure, cloud investments, IT simplification, semiconductor capability, and cybersecurity readiness suggests India is preparing not just for the next fiscal year — but for the next generation. The foundation is being laid. The opportunity is clear. The next step will be execution — because if these measures translate into real infrastructure, skilled talent, and secure digital systems, India Budget 2026 could be remembered as the moment India firmly positioned itself as a global digital powerhouse.
  •