UK MPs face rise in phishing attacks on messaging apps

11 December 2025 at 13:58

Hackers include Russia-based actors targeting WhatsApp and Signal accounts, parliamentary authorities warn

MPs are facing rising numbers of phishing attacks and Russia-based actors are actively targeting the WhatsApp and Signal accounts of politicians and officials, UK parliamentary authorities have warned.

MPs, peers and officials are being asked to step up their cybersecurity after a continued rise in attacks that have involved messages pretending to be from the app’s support team, asking a user to enter an access code, click a link or scan a QR code.

Fraudulent gambling network may actually be something more nefarious

3 December 2025 at 12:23

A sprawling infrastructure that has been bilking unsuspecting people through fraudulent gambling websites for 14 years is likely a dual operation run by a nation-state-sponsored group that is targeting government and private-industry organizations in the US and Europe, researchers said Wednesday.

Researchers have previously tracked smaller pieces of the enormous infrastructure. Last month, security firm Sucuri reported that the operation seeks out and compromises poorly configured websites running the WordPress CMS. In January, Imperva said the attackers also scan for and exploit web apps built with the PHP programming language that have existing webshells or vulnerabilities. Once the weaknesses are exploited, the attackers install GSocket, a backdoor they use to maintain control of compromised servers and host gambling web content on them.

All of the gambling sites target Indonesian-speaking visitors. Because Indonesian law prohibits gambling, many people in that country are drawn to illicit services. Most of the 236,433 attacker-owned domains serving the gambling sites are hosted on Cloudflare, while most of the 1,481 hijacked subdomains were hosted on Amazon Web Services, Azure, and GitHub.

IP Camera Hacking Scandal: South Korea Targets Exploitative Video Network

3 December 2025 at 01:56

The National Investigation Headquarters of the National Police Agency has arrested four suspects in a major IP camera hacking case that resulted in the theft and sale of sensitive video footage from more than 120,000 devices. Police said the suspects edited the stolen footage and distributed illegally filmed material and other sexual exploitation material on an overseas website, causing serious privacy violations for victims. Authorities have launched wider investigations into website operators, content buyers, and viewers, and have begun large-scale victim protection efforts to stop further harm.

IP Camera Hacking Suspects Sold Stolen Video Files

According to police, the four suspects, identified as B, C, D, and E, carried out extensive hacking activities targeting tens of thousands of IP cameras installed in homes and businesses. Many cameras were protected with weak passwords, such as repeated characters or simple number sequences.
  • Suspect B hacked around 30,000 cameras, edited the stolen footage into 545 videos, and earned virtual assets worth about 35 million won.
  • Suspect C created 648 files from around 70,000 hacked devices, earning about 18 million won.
  • Their videos made up 62% of all content uploaded on the illegal overseas website (Site A) in the past year.
  • Suspect D hacked about 15,000 cameras and stored child and youth sexual exploitation material.
  • Suspect E hacked 136 cameras but did not distribute any content.
Police said that no profits remained at the time of arrest, and the case has been forwarded to the National Tax Service for additional legal action.

Police Investigating Operators, Purchasers, and Viewers of Illegally Filmed Material

The investigation extends to the operator of Site A, which hosted illegally filmed material from victims in several countries. Police are working with foreign investigative agencies to identify and take action against the operator. Individuals who purchased sexually exploitative material, including illegally filmed material, are also under investigation. Three buyers have already been arrested. The police confirmed that viewers of such material will also face legal consequences under the Sexual Violence Punishment Act. To prevent further exposure, police have asked the Broadcasting Media and Communications Deliberation Committee to block access to Site A and are coordinating with international partners to shut down the platform.

Security Measures Issued After Large-Scale IP Camera Hacking Damage

Investigators have directly notified victims through visits, phone calls, and letters, guiding them on how to change passwords and secure their devices. The police are working with the Ministry of Science and ICT and major telecom companies to identify vulnerable IP cameras and inform users quickly. Users are being advised to strengthen passwords, enable two-factor authentication, and keep device software updated. Additionally, the Personal Information Protection Commission is assisting in identifying high-risk cases to prevent further leaks of sensitive videos.
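
The weak patterns cited in the case, repeated characters and simple number sequences, are straightforward to screen for. The following is a minimal, hypothetical Python sketch of such a check; the function name and thresholds are illustrative and are not taken from the investigation or from any official guidance.

```python
def is_weak_camera_password(password: str) -> bool:
    """Flag passwords matching the weak patterns cited in the case (illustrative thresholds)."""
    if len(password) < 10:
        return True                                # too short to resist brute force
    if len(set(password)) == 1:
        return True                                # repeated characters, e.g. "1111111111"
    ascending = "0123456789" * 3
    if password.isdigit() and (password in ascending or password in ascending[::-1]):
        return True                                # simple number sequences, e.g. "1234567890"
    return False

if __name__ == "__main__":
    for candidate in ["1111111111", "1234567890", "Kitchen-Cam!2025-lilac"]:
        print(candidate, "->", "weak" if is_weak_camera_password(candidate) else "ok")
```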

Protection for Victims and Strong Action Against Secondary Harm

Authorities are prioritizing support for victims of illegally filmed material and sexual exploitation material. Victims can receive counseling, assistance with deleting harmful content, and help blocking its spread through the Digital Sex Crime Victim Support Center. Police stressed that strict action will also be taken against individuals who repost, share, or store such material. Park Woo-hyun, Cyber Investigation Director at the National Police Agency, emphasized the seriousness of these crimes, stating: “IP camera hacking and sexually exploitative material, including illegally filmed content, cause enormous pain to victims, and we will actively work to eradicate these crimes through strong investigation.” He added that illegal filming, and even possessing such videos, is a serious crime that will be investigated firmly and without hesitation.

Poland Arrests Russian Suspected of Hacking E-Commerce Databases Across Europe

27 November 2025 at 14:21

Polish authorities arrested a 23-year-old Russian citizen on November 16 after investigators linked him to unauthorized intrusions into e-commerce platforms that gave him access to databases containing personal data and transaction histories of customers across Poland and potentially other European Union member states. The suspect, who illegally crossed Poland's border in 2022 before obtaining refugee status in 2023, now faces three months of pre-trial detention as prosecutors examine connections to broader cybercrime operations targeting European infrastructure.

Officers from the Central Bureau for Combating Cybercrime detained the Russian national after gathering evidence confirming he operated without required authorization from online shop operators, breaching security protections to access IT systems and databases before interfering with their structure.

Expanding Investigation Into European Cyberattacks

Polish Interior Minister Marcin Kierwinski announced the arrest Thursday, stating that investigators established the suspect may have connections to additional cybercriminal activities targeting companies operating across Poland and EU member states. Prosecutors are currently verifying the scope of potential damages inflicted on victims of these cyberattacks.

According to Polish news outlets, the man was detained in Wroclaw where he had been living, with investigators saying he infiltrated a major e-commerce platform's database, gaining unauthorized access to almost one million customer records including personal data and transaction histories.

The District Court in Krakow approved prosecutors' request for three-month detention, with officials indicating additional arrests are likely as the investigation widens. Authorities are analyzing whether stolen data was used, sold, or transferred to groups outside Poland, including potential connections to organized cybercrime or state-backed networks.

Pattern of Russian Hybrid Warfare

The arrest occurs amid heightened tensions as Poland reports intensifying cyberattacks and sabotage attempts that officials believe are linked to Russian intelligence services. Poland has arrested 55 people on suspicion of sabotage and espionage over the past three years, all of them charged under Article 130 of the penal code, which covers espionage and sabotage.

The case represents part of a broader pattern of hostile cyber operations. Poland and other European nations have intensified surveillance of potential Russian cyberattacks and sabotage efforts since Moscow's full-scale invasion of Ukraine in 2022, monitoring suspected arson attacks and strikes on critical infrastructure across the region.

Polish cybersecurity officials previously warned the country remains a constant target of pro-Russian hackers responding to Warsaw's support for Ukraine. Strategic, energy, and military enterprises face particular risk, with attacks intensifying through DDoS operations, ransomware, phishing campaigns, and website impersonation designed to collect personal data and spread disinformation.

The Central Bureau for Combating Cybercrime emphasized that the investigation remains active and is expected to widen, with prosecutors continuing to gather evidence about the full extent of the suspect's activities and potential co-conspirators.

Also read: DDoS-for-Hire Empire Dismantled as Poland Arrests Four, U.S. Seizes Nine Domains

How to know if your Asus router is one of thousands hacked by China-state hackers

21 November 2025 at 17:05

Thousands of Asus routers have been hacked and are under the control of a suspected China-state group that has yet to reveal its intentions for the mass compromise, researchers said.

The hacking spree is either primarily or exclusively targeting seven models of Asus routers, all of which are no longer supported by the manufacturer, meaning they no longer receive security patches, researchers from SecurityScorecard said. So far, it’s unclear what the attackers do after gaining control of the devices. SecurityScorecard has named the operation WrtHug.

Staying off the radar

SecurityScorecard said it suspects the compromised devices are being used similarly to those found in ORB (operational relay box) networks, which hackers use primarily to conceal their identities while conducting espionage.

Chinese Hackers Weaponize Claude AI to Execute First Autonomous Cyber Espionage Campaign at Scale

14 November 2025 at 02:11

The AI executed thousands of requests, often several per second.

That tempo, physically impossible for human operators to sustain and spread across simultaneous intrusions targeting 30 global organizations, marks what Anthropic researchers now confirm as the first documented case of a large-scale cyberattack executed without substantial human intervention.

In the last two weeks of September, a Chinese state-sponsored group, designated GTG-1002 by Anthropic, manipulated Claude Code to autonomously conduct reconnaissance, exploit vulnerabilities, harvest credentials, move laterally through networks, and exfiltrate sensitive data, with human operators directing just 10 to 20% of tactical operations.

The campaign represents a fundamental shift in threat actor capabilities. Where previous AI-assisted attacks required humans directing operations step-by-step, this espionage operation demonstrated the AI autonomously discovering vulnerabilities in targets selected by human operators, successfully exploiting them in live operations, then performing wide-ranging post-exploitation activities including analysis, lateral movement, privilege escalation, data access, and exfiltration.

Social Engineering the AI Model

The threat actors bypassed Claude's extensive safety training through sophisticated social engineering. Operators claimed they represented legitimate cybersecurity firms conducting defensive penetration testing, convincing the AI model to engage in offensive operations under false pretenses.

The attackers developed a custom orchestration framework using Claude Code and the open-standard Model Context Protocol to decompose complex multi-stage attacks into discrete technical tasks. Each task appeared legitimate when evaluated in isolation, including vulnerability scanning, credential validation, data extraction, and lateral movement.

By presenting these operations as routine technical requests through carefully crafted prompts, the threat actor induced Claude to execute individual components of attack chains without access to broader malicious context. The sustained nature of the attack eventually triggered detection, but this role-playing technique allowed operations to proceed long enough to launch the full campaign.

Unprecedented Autonomous Attack Lifecycle

Claude conducted nearly autonomous reconnaissance, using browser automation to systematically catalog target infrastructure, analyze authentication mechanisms, and identify potential vulnerabilities simultaneously across multiple targets. The AI maintained a separate operational context for each active campaign.

[Image: The lifecycle of the cyberattack. (Image source: Anthropic)]

In one validated successful compromise, Claude autonomously discovered internal services, mapped complete network topology across multiple IP ranges, and identified high-value systems including databases and workflow orchestration platforms. Similar autonomous enumeration occurred against other targets, with the AI independently cataloging hundreds of discovered services and endpoints.

Exploitation proceeded through automated testing with Claude independently generating attack payloads tailored to discovered vulnerabilities, executing testing through remote command interfaces, and analyzing responses to determine exploitability without human direction. Human operators maintained strategic oversight only at critical decision gates, including approving progression from reconnaissance to active exploitation and authorizing use of harvested credentials.

Upon receiving authorization, Claude executed systematic credential collection across targeted networks, querying internal services, extracting authentication certificates, and testing harvested credentials autonomously. The AI independently determined which credentials provided access to which services, mapping privilege levels and access boundaries.

Intelligence Extraction at Machine Speed

Collection operations demonstrated the most extensive AI autonomy. Against one targeted technology company, Claude independently queried databases, extracted data, parsed results to identify proprietary information, and categorized findings by intelligence value without human analysis.

In documented database extraction operations spanning two to six hours, Claude authenticated with harvested credentials, mapped database structure, queried user account tables, extracted password hashes, identified high-privilege accounts, created persistent backdoor user accounts, downloaded complete results, parsed extracted data for intelligence value, and generated summary reports. Human operators reviewed findings and approved final exfiltration targets in just five to twenty minutes.

The operational infrastructure relied overwhelmingly on open-source penetration testing tools orchestrated through custom automation frameworks built around Model Context Protocol servers. Peak activity included thousands of requests at sustained rates of multiple operations per second, confirming the AI was actively analyzing stolen information rather than generating explanatory content for human review.

AI Hallucination Limitation

An important operational limitation emerged during the investigation. Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that did not work or identifying critical discoveries that proved to be publicly available information.

This AI hallucination in offensive security contexts required careful validation of all claimed results. Anthropic researchers assess this remains an obstacle to fully autonomous cyberattacks, though the limitation did not prevent the campaign from achieving multiple successful intrusions against major technology corporations, financial institutions, chemical manufacturing companies, and government agencies.

Anthropic's Response

Upon detecting the activity, Anthropic immediately launched a ten-day investigation to map the operation's full extent. The company banned accounts as they were identified, notified affected entities, and coordinated with authorities.

Anthropic implemented multiple defensive enhancements including expanded detection capabilities, improved cyber-focused classifiers, prototyped proactive early detection systems for autonomous cyber attacks, and developed new techniques for investigating large-scale distributed cyber operations.

This represents a significant escalation from Anthropic's June 2025 "vibe hacking" findings where humans remained very much in the loop directing operations.

Read: Hacker Used Claude AI to Automate Reconnaissance, Harvest Credentials and Penetrate Networks

Anthropic said the cybersecurity community needs to assume a fundamental change has occurred. Security teams must experiment with applying AI for defense in areas including SOC automation, threat detection, vulnerability assessment, and incident response. The company notes that the same capabilities enabling these attacks make Claude crucial for cyber defense, with Anthropic's own Threat Intelligence team using Claude extensively to analyze enormous amounts of data generated during this investigation.

Rigged Poker Games

6 November 2025 at 07:02

The Department of Justice has indicted thirty-one people over the high-tech rigging of high-stakes poker games.

In a typical legitimate poker game, a dealer uses a shuffling machine to shuffle the cards randomly before dealing them to all the players in a particular order. As set forth in the indictment, the rigged games used altered shuffling machines that contained hidden technology allowing the machines to read all the cards in the deck. Because the cards were always dealt in a particular order to the players at the table, the machines could determine which player would have the winning hand. This information was transmitted to an off-site member of the conspiracy, who then transmitted that information via cellphone back to a member of the conspiracy who was playing at the table, referred to as the “Quarterback” or “Driver.” The Quarterback then secretly signaled this information (usually by prearranged signals like touching certain chips or other items on the table) to other co-conspirators playing at the table, who were also participants in the scheme. Collectively, the Quarterback and other players in on the scheme (i.e., the cheating team) used this information to win poker games against unwitting victims, who sometimes lost tens or hundreds of thousands of dollars at a time. The defendants used other cheating technology as well, such as a chip tray analyzer (essentially, a poker chip tray that also secretly read all cards using hidden cameras), an x-ray table that could read cards face down on the table, and special contact lenses or eyeglasses that could read pre-marked cards.

News articles.

AI Summarization Optimization

3 November 2025 at 07:05

These days, the most important meeting attendee isn’t a person: It’s the AI notetaker.

This system assigns action items and determines the importance of what is said. If it becomes necessary to revisit the facts of the meeting, its summary is treated as impartial evidence.

But clever meeting attendees can manipulate this system’s record by speaking more to what the underlying AI weights for summarization and importance than to their colleagues. As a result, you can expect some meeting attendees to use language more likely to be captured in summaries, timing their interventions strategically, repeating key points, and employing formulaic phrasing that AI models are more likely to pick up on. Welcome to the world of AI summarization optimization (AISO).

Optimizing for algorithmic manipulation

AI summarization optimization has a well-known precursor: SEO.

Search-engine optimization is as old as the World Wide Web. The idea is straightforward: Search engines scour the internet digesting every possible page, with the goal of serving the best results to every possible query. The objective for a content creator, company, or cause is to optimize for the algorithm search engines have developed to determine their webpage rankings for those queries. That requires writing for two audiences at once: human readers and the search-engine crawlers indexing content. Techniques to do this effectively are passed around like trade secrets, and a $75 billion industry offers SEO services to organizations of all sizes.

More recently, researchers have documented techniques for influencing AI responses, including large-language model optimization (LLMO) and generative engine optimization (GEO). Tricks include content optimization—adding citations and statistics—and adversarial approaches: using specially crafted text sequences. These techniques often target sources that LLMs heavily reference, such as Reddit, which is claimed to be cited in 40% of AI-generated responses. The effectiveness and real-world applicability of these methods remains limited and largely experimental, although there is substantial evidence that countries such as Russia are actively pursuing this.

AI summarization optimization follows the same logic on a smaller scale. Human participants in a meeting may want a certain fact highlighted in the record, or their perspective to be reflected as the authoritative one. Rather than persuading colleagues directly, they adapt their speech for the notetaker that will later define the “official” summary. For example:

  • “The main factor in last quarter’s delay was supply chain disruption.”
  • “The key outcome was overwhelmingly positive client feedback.”
  • “Our takeaway here is in alignment moving forward.”
  • “What matters here is the efficiency gains, not the temporary cost overrun.”

The techniques are subtle. They employ high-signal phrases such as “key takeaway” and “action item,” keep statements short and clear, and repeat them when possible. They also use contrastive framing (“this, not that”), and speak early in the meeting or at transition points.

Once spoken words are transcribed, they enter the model’s input. Cue phrases—and even transcription errors—can steer what makes it into the summary. In many tools, the output format itself is also a signal: Summarizers often offer sections such as “Key Takeaways” or “Action Items,” so language that mirrors those headings is more likely to be included. In effect, well-chosen phrases function as implicit markers that guide the AI toward inclusion.
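
To make the mechanism concrete, here is a minimal Python sketch of the kind of naive extractive scorer these cues exploit; the phrase list, weights, and thresholds are illustrative assumptions, not the behavior of any particular meeting tool.

```python
# Naive extractive summarizer: score each transcript sentence, keep the top k.
CUE_PHRASES = ["key takeaway", "action item", "the main factor", "what matters here"]

def score(sentence: str, position: int, total: int) -> float:
    s = sentence.lower()
    cue_bonus = sum(2.0 for phrase in CUE_PHRASES if phrase in s)       # mirrors summary headings
    # early- and late-position content tends to be overweighted relative to the middle
    position_bonus = 1.0 if position < total * 0.2 or position > total * 0.8 else 0.0
    brevity_bonus = 0.5 if len(s.split()) <= 20 else 0.0                # short, clear statements
    return cue_bonus + position_bonus + brevity_bonus

def summarize(transcript: list[str], k: int = 3) -> list[str]:
    ranked = sorted(
        enumerate(transcript),
        key=lambda item: score(item[1], item[0], len(transcript)),
        reverse=True,
    )
    return [sentence for _, sentence in ranked[:k]]

if __name__ == "__main__":
    meeting = [
        "We spent most of the sprint fixing the billing bug.",
        "The key takeaway is that client feedback was overwhelmingly positive.",
        "There was a long discussion about the cost overrun.",
        "Action item: Dana will draft the Q3 plan by Friday.",
    ]
    print(summarize(meeting, k=2))
```

A short statement that opens with “the key takeaway” early in the meeting scores on every term above, which is exactly the advantage an AISO-aware speaker is playing for.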

Research confirms this. Early AI summarization research showed that models trained to reconstruct summary-style sentences systematically overweight such content. Models over-rely on early-position content in news. And models often overweight statements at the start or end of a transcript, underweighting the middle. Recent work further confirms vulnerability to phrasing-based manipulation: models cannot reliably distinguish embedded instructions from ordinary content, especially when phrasing mimics salient cues.

How to combat AISO

If AISO becomes common, three forms of defense will emerge. First, meeting participants will exert social pressure on one another. When researchers secretly deployed AI bots in Reddit’s r/changemyview community, users and moderators responded with strong backlash calling it “psychological manipulation.” Anyone using obvious AI-gaming phrases may face similar disapproval.

Second, organizations will start governing meeting behavior using AI: risk assessments and access restrictions before the meetings even start, detection of AISO techniques in meetings, and validation and auditing after the meetings.

Third, AI summarizers will have their own technical countermeasures. For example, the AI security company CloudSEK recommends content sanitization to strip suspicious inputs, prompt filtering to detect meta-instructions and excessive repetition, context window balancing to weight repeated content less heavily, and user warnings showing content provenance.

Broader defenses could draw from security and AI safety research: preprocessing content to detect dangerous patterns, consensus approaches requiring consistency thresholds, self-reflection techniques to detect manipulative content, and human oversight protocols for critical decisions. Meeting-specific systems could implement additional defenses: tagging inputs by provenance, weighting content by speaker role or centrality with sentence-level importance scoring, and discounting high-signal phrases while favoring consensus over fervor.
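
As an illustration of the last two ideas, discounting high-signal phrases and favoring consensus over fervor, here is a minimal Python sketch; the phrase list, penalty, and bonus values are illustrative assumptions rather than any vendor's actual countermeasure.

```python
HIGH_SIGNAL = ["key takeaway", "action item", "the main factor", "what matters here"]

def defended_score(sentence: str, distinct_speakers: int) -> float:
    """Score a candidate summary sentence: penalize cue phrases, reward claims echoed by several speakers."""
    s = sentence.lower()
    penalty = sum(0.5 for phrase in HIGH_SIGNAL if phrase in s)   # discount high-signal phrasing
    consensus_bonus = 0.25 * max(distinct_speakers - 1, 0)        # favor consensus over one loud voice
    return 1.0 - penalty + consensus_bonus

if __name__ == "__main__":
    # A cue-laden claim made by one speaker vs. a plain claim echoed by three speakers.
    print(defended_score("The key takeaway is overwhelmingly positive client feedback.", 1))
    print(defended_score("Client feedback was mixed but improving.", 3))
```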

Reshaping human behavior

AI summarization optimization is a small, subtle shift, but it illustrates how the adoption of AI is reshaping human behavior in unexpected ways. The potential implications are quietly profound.

Meetings—humanity’s most fundamental collaborative ritual—are being silently reengineered by those who understand the algorithm’s preferences. The articulate are gaining an invisible advantage over the wise. Adversarial thinking is becoming routine, embedded in the most ordinary workplace rituals, and, as AI becomes embedded in organizational life, strategic interactions with AI notetakers and summarizers may soon be a necessary executive skill for navigating corporate culture.

AI summarization optimization illustrates how quickly humans adapt communication strategies to new technologies. As AI becomes more embedded in workplace communication, recognizing these emerging patterns may prove increasingly important.

This essay was written with Gadi Evron, and originally appeared in CSO.

Microsoft Digital Defense Report 2025: Extortion and Ransomware Lead Global Cybercrime Surge

The newly released Microsoft Digital Defense Report 2025 reveals new data on global cyber threats. According to the report, more than half of all cyberattacks with known motives, 52%, are driven by extortion and ransomware. In contrast, espionage accounts for only 4%, a shift toward financially motivated cybercrime rather than state-sponsored operations.

Published on October 22, 2025, the report stresses that today’s attackers are largely opportunistic criminals seeking monetary gain rather than geopolitical advantage. The findings show that in 80% of incidents, attackers aimed primarily to steal data. This trend highlights the universality of the threat, as organizations across every industry face mounting pressure to protect sensitive information against both small-scale criminals and organized syndicates.

Digital Defense Report 2025: Data Behind the Threat 

Microsoft’s digital infrastructure gives it a unique vantage point on global cybercrime trends. Each day, the company processes over 100 trillion signals, blocks approximately 4.5 million new malware attempts, analyzes 38 million identity-risk detections, and scans 5 billion emails for phishing and malicious content.

Automation and widely available hacking tools have enabled attackers to scale operations faster than ever. The report warns that artificial intelligence (AI) is now accelerating this process, making phishing lures, fake websites, and social-engineering content more convincing and harder to detect.

A major takeaway from the Digital Defense Report is that cybersecurity can no longer be viewed as a purely technical issue. It must be treated as a strategic business priority. The report urges leaders to integrate security into every layer of digital transformation, arguing that modern defenses are essential for long-term resilience.

For individual users, Microsoft recommends the use of multi-factor authentication (MFA), especially phishing-resistant MFA, which can block over 99% of identity-based attacks, even when criminals have stolen valid credentials.

Regional Focus: Urgency in the Adriatic 

Tomislav Vračić, NTO for Microsoft’s Europe South Multi-country Cluster, emphasized the growing urgency across Southeast Europe: “Across the Adriatic region, the urgency to strengthen cybersecurity awareness and readiness has never been greater,” Vračić said. “As digital transformation accelerates in Croatia, Slovenia, Serbia, Albania, Bulgaria, and neighboring markets, both public and private sectors must act decisively to safeguard critical infrastructure and citizen trust. Proactive defense is a strategic imperative for securing our shared digital future.”

The report highlights that hospitals, schools, and local governments are frequent targets of ransomware and data-theft campaigns. These institutions often lack the resources to recover quickly, which makes them appealing targets. The fallout is severe, ranging from delayed medical care to disrupted education and halted public services. Because operational continuity is so critical in these sectors, attackers often succeed in extorting quick payments.

Modernization Is Non-Negotiable 

Outdated security systems are no longer enough. The Digital Defense Report stresses that modernization, strong public-private collaboration, and shared threat intelligence are key to countering today’s cybercrime landscape. Governments and industries must work together to reinforce defense infrastructure before the next major wave of ransomware and data-theft attacks.

While financially motivated actors dominate, nation-state attacks continue to pose serious risks. The report identifies:
  • China, expanding its operations across industries and NGOs by exploiting vulnerable devices for covert access.
  • Iran, targeting logistics companies in Europe and the Persian Gulf, likely in an effort to disrupt trade.
  • Russia, extending operations beyond Ukraine and focusing on businesses in smaller NATO countries as potential entry points into larger networks.
  • North Korea, combining espionage and profit motives, often using overseas IT workers whose earnings are sent back to the regime.

Autonomous AI Hacking and the Future of Cybersecurity

10 October 2025 at 07:06

AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything.

Over the summer, hackers proved the concept, industry institutionalized it, and criminals operationalized it. In June, AI company XBOW took the top spot on HackerOne’s US leaderboard after submitting over 1,000 new vulnerabilities in just a few months. In August, the seven teams competing in DARPA’s AI Cyber Challenge collectively found 54 new vulnerabilities in a target system, in four hours (of compute). Also in August, Google announced that its Big Sleep AI found dozens of new vulnerabilities in open-source projects.

It gets worse. In July, Ukraine’s CERT discovered a piece of Russian malware that used an LLM to automate the cyberattack process, generating both system reconnaissance and data theft commands in real time. In August, Anthropic reported that it had disrupted a threat actor that used Claude, Anthropic’s AI model, to automate the entire cyberattack process. It was an impressive use of the AI, which performed network reconnaissance, penetrated networks, and harvested victims’ credentials. The AI was able to figure out which data to steal, how much money to extort out of the victims, and how to best write extortion emails.

Another hacker used Claude to create and market his own ransomware, complete with “advanced evasion capabilities, encryption, and anti-recovery mechanisms.” And in September, Check Point reported on hackers using HexStrike-AI to create autonomous agents that can scan, exploit, and persist inside target networks. Also in September, a research team showed how they can quickly and easily reproduce hundreds of vulnerabilities from public information. These tools are increasingly free for anyone to use. Villager, a recently released AI pentesting tool from Chinese company Cyberspike, uses the DeepSeek model to completely automate attack chains.

This is all well beyond AI’s capabilities in 2016, at DARPA’s Cyber Grand Challenge. The annual Chinese AI hacking challenge, Robot Hacking Games, might be on this level, but little is known outside of China.

Tipping point on the horizon

AI agents now rival and sometimes surpass even elite human hackers in sophistication. They automate operations at machine speed and global scale. The scope of their capabilities allows these AI agents to fully automate a criminal’s simple command to maximize profit, or to structure advanced attacks to a government’s precise specifications, such as avoiding detection.

In this future, attack capabilities could accelerate beyond our individual and collective capability to handle. We have long taken it for granted that we have time to patch systems after vulnerabilities become known, or that withholding vulnerability details prevents attackers from exploiting them. This is no longer the case.

The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We’re potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance.

The most skilled will likely retain an edge for now. But AI agents don’t have to be better than a human at a task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. And there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws, AI can turn rare expertise into commodity capabilities and give average criminals an outsized advantage.

The AI-assisted evolution of cyberdefense

AI technologies can benefit defenders as well. We don’t know how the different technologies of cyber-offense and cyber-defense will be amenable to AI enhancement, but we can extrapolate a possible series of overlapping developments.

Phase One: The Transformation of the Vulnerability Researcher. AI-based hacking benefits defenders as well as attackers. In this scenario, AI empowers defenders to do more. It simplifies capabilities, providing far more people the ability to perform previously complex tasks, and empowers researchers previously busy with these tasks to accelerate or move beyond them, freeing time to work on problems that require human creativity. History suggests a pattern. Reverse engineering was a laborious manual process until tools such as IDA Pro made the capability available to many. AI vulnerability discovery could follow a similar trajectory, evolving through scriptable interfaces, automated workflows, and automated research before reaching broad accessibility.

Phase Two: The Emergence of VulnOps. Between research breakthroughs and enterprise adoption, a new discipline might emerge: VulnOps. Large research teams are already building operational pipelines around their tooling. Their evolution could mirror how DevOps professionalized software delivery. In this scenario, specialized research tools become developer products. These products may emerge as a SaaS platform, or some internal operational framework, or something entirely different. Think of it as AI-assisted vulnerability research available to everyone, at scale, repeatable, and integrated into enterprise operations.

Phase Three: The Disruption of the Enterprise Software Model. If enterprises adopt AI-powered security the way they adopted continuous integration/continuous delivery (CI/CD), several paths open up. AI vulnerability discovery could become a built-in stage in delivery pipelines. We can envision a world where AI vulnerability discovery becomes an integral part of the software development process, where vulnerabilities are automatically patched even before reaching production—a shift we might call continuous discovery/continuous repair (CD/CR). Third-party risk management (TPRM) offers a natural adoption route, lower-risk vendor testing, integration into procurement and certification gates, and a proving ground before wider rollout.
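
A minimal, hypothetical sketch of what such a pipeline stage could look like follows. The run_ai_vuln_scan function is a stand-in for whatever AI discovery tool an organization adopts, and the severity policy is an assumption for illustration, not something the essay prescribes.

```python
import sys

SEVERITY_BLOCKLIST = {"high", "critical"}   # assumed release policy; adjust per organization

def run_ai_vuln_scan(target_dir: str) -> list[dict]:
    """Stand-in for an AI-assisted vulnerability discovery tool; returns findings as dicts."""
    # Kept empty so the sketch runs as-is; a real pipeline would invoke the adopted scanner here.
    return []

def gate(findings: list[dict]) -> int:
    """Fail the pipeline stage (non-zero exit) if any blocking finding is present."""
    blocking = [f for f in findings if f.get("severity") in SEVERITY_BLOCKLIST]
    for f in blocking:
        print(f"BLOCKING: {f.get('id')} - {f.get('title')}")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(run_ai_vuln_scan(".")))
```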

Phase Four: The Self-Healing Network. If organizations can independently discover and patch vulnerabilities in running software, they will not have to wait for vendors to issue fixes. Building in-house research teams is costly, but AI agents could perform such discovery and generate patches for many kinds of code, including third-party and vendor products. Organizations may develop independent capabilities that create and deploy third-party patches on vendor timelines, extending the current trend of independent open-source patching. This would increase security, but having customers patch software without vendor approval raises questions about patch correctness, compatibility, liability, right-to-repair, and long-term vendor relationships.

These are all speculations. Maybe AI-enhanced cyberattacks won’t evolve the ways we fear. Maybe AI-enhanced cyberdefense will give us capabilities we can’t yet anticipate. What will surprise us most might not be the paths we can see, but the ones we can’t imagine yet.

This essay was written with Heather Adkins and Gadi Evron, and originally appeared in CSO.

“A dare, a challenge, a bit of fun:” Children are hacking their own schools’ systems, says study

16 September 2025 at 06:20

As if ransomware wasn’t enough of a security problem for the sector, educational institutions also need to worry about their own students, a recent study shows.

Last week, the UK Information Commissioner’s Office (ICO) published a report about the “insider threat of students”. Here are a few key points:

  • Over half of school insider cyberattacks were caused by students.
  • Almost a third of insider attack incidents caused by students involved guessing weak passwords or finding them jotted down on bits of paper.
  • Teen hackers are not breaking in, they are logging in.

The conclusion of the ICO is that:

“Children are hacking into their schools’ computer systems – and it may set them up for a life of cyber crime.”

The ICO examined a total of 215 personal data breach reports caused by insider attacks in the education sector between January 2022 and August 2024. It found that students were responsible for 57% of them, and that students were behind 97% of the incidents caused by stolen login details.

The British National Crime Agency (NCA) reported on a survey of children aged 10-16, which showed that 20% engage in behaviors that violate the Computer Misuse Act, the law that criminalizes unauthorized access to computer systems and data. It adds a warning:

“The consequences of committing Computer Misuse Act offences are serious. In addition to being arrested and potentially given a criminal record, those caught can have their phone or computer taken away from them, risk expulsion from school, and face limits on their internet use, career opportunities and international travel.”

The reasons that children provided for hacking included dares, notoriety, financial gain, revenge and rivalries. Security experts also mention cases of students altering grades or using staff credentials.

While the ICO report highlights a troubling trend in the UK, US data shows that American schools face similar problems. A March 2025 Center for Internet Security survey found that 82% of K-12 schools experienced a cyber incident between July 2023 and December 2024, and security analysts say students pose an insider threat to the education sector.

In one high-profile US prosecution, a 19-year-old faced charges in connection with the 2024 PowerSchool compromise, which exposed millions of student and teacher records. That incident led to extortion attempts against districts and caused major operational disruption.

While it may seem less harmful, student hacking can have consequences just as serious as something like a ransomware attack, ultimately spilling the personal data of students and teachers.

As Heather Toomey, Principal Cyber Specialist at the ICO put it:

“What starts out as a dare, a challenge, a bit of fun in a school setting can ultimately lead to children taking part in damaging attacks on organisations or critical infrastructure.”

Parents and schools need to warn children about the possible implications, no matter how innocently it starts. And stricter control of staff and teacher credentials could prevent many of these incidents, given that 30% of them were caused by stolen login details.

Protecting yourself or your children after a data breach

There are some actions you can take if you are, or suspect you or your children may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you; a minimal generation sketch follows this list.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online and helps you recover after.
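
As a concrete illustration of the password advice above (and of what a password manager does for you automatically), here is a minimal Python sketch; the 20-character length and the character set are illustrative choices, not a recommendation from the article.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())   # prints a different 20-character password on every run
```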