How phishers hide banking scams behind free Cloudflare Pages

8 December 2025 at 10:26

During a recent investigation, we uncovered a phishing operation that combines free hosting on developer platforms with compromised legitimate websites to build convincing banking and insurance login portals. These fake pages don’t just grab a username and password; they also ask for answers to secret questions and other “backup” data that attackers can use to bypass multi-factor authentication and account recovery protections.

Instead of sending stolen data to a traditional command-and-control server, the kit forwards every submission to a Telegram bot. That gives the attackers a live feed of fresh logins they can use right away. It also sidesteps many domain-based blocking strategies and makes swapping infrastructure very easy.

Phishing groups increasingly use services like Cloudflare Pages (*.pages.dev) to host their fake portals, sometimes copying a real login screen almost pixel for pixel. In this case, the actors spun up subdomains impersonating financial and healthcare providers. The first one we found was impersonating the heartland bank Arvest.

Fake Arvest login page

On closer inspection, the phishing site shows visitors two “failed login” screens, prompts for security questions, and then sends all credentials and answers to a Telegram bot.

Comparing their infrastructure with other sites, we found one impersonating a much more widely known brand: United Healthcare.

HealthSafe ID overpayment refund

In this case, the phishers abused a compromised website as a redirector. Attackers took over a legitimate-looking domain like biancalentinidesigns[.]com and saddled it with long, obscure paths for phishing or redirection. Emails link to the real domain first, which then forwards the victim to the active Cloudflare Pages phishing site. Messages containing a familiar or benign-looking domain are more likely to slip past spam filters than links that go straight to an obviously new cloud-hosted subdomain.
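
For defenders who want to flag links matching this pattern, a minimal Python sketch using the `requests` library might look like the following. The hosting suffixes are an illustrative list, and note that `requests` only follows HTTP-level redirects, not JavaScript-based ones:

```python
from urllib.parse import urlparse

import requests

# Illustrative list of free developer-hosting suffixes abused for phishing.
SUSPECT_SUFFIXES = (".pages.dev", ".netlify.app", ".workers.dev")

def lands_on_free_hosting(url: str) -> bool:
    """Follow HTTP redirects and flag links that end up on free developer hosting.

    Note: requests follows HTTP redirects only, so a JavaScript redirect on the
    compromised site would need a headless browser to catch.
    """
    final_url = requests.get(url, timeout=10, allow_redirects=True).url
    host = urlparse(final_url).hostname or ""
    return host.endswith(SUSPECT_SUFFIXES)
```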

Cloud-based hosting also makes takedowns harder. If one *.pages.dev hostname gets reported and removed, attackers can quickly deploy the same kit under another random subdomain and resume operations.

The phishing kit at the heart of this campaign follows a multi-step pattern designed to look like a normal sign-in flow while extracting as much sensitive data as possible.

Instead of using a regular form submission to a visible backend, JavaScript harvests the fields and bundles them into a message sent straight to the Telegram API. That message can include the victim’s IP address, user agent, and all captured fields, giving criminals a tidy snapshot they can use to bypass defenses or sign in from a similar environment.

The exfiltration mechanism is one of the most worrying parts. Rather than pushing credentials to a single hosted panel, the kit posts them into one or more Telegram chats using bot tokens and chat IDs hardcoded in the JavaScript. As soon as a victim submits a form, the operator receives a message in their Telegram client with the details, ready for immediate use or resale.

This approach offers several advantages for the attackers: they can change bots and chat IDs frequently, they do not need to maintain their own server, and many security controls pay less attention to traffic that looks like a normal connection to a well-known messaging platform. Cycling multiple bots and chats gives them redundancy if one token is reported and revoked.
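
Because the bot token and chat ID have to ship inside the page’s JavaScript, they also make a handy detection hook. Here is a minimal Python sketch, assuming the standard Telegram bot token shape (a numeric bot ID, a colon, then an alphanumeric secret); the regex bounds are deliberately loose:

```python
import re

# Telegram bot tokens look like "<numeric bot id>:<alphanumeric secret>";
# phishing kits often embed one, plus a chat_id, directly in page JavaScript.
BOT_TOKEN_RE = re.compile(r"\b\d{6,12}:[A-Za-z0-9_-]{30,40}\b")
API_CALL_RE = re.compile(r"api\.telegram\.org/bot", re.IGNORECASE)

def scan_for_telegram_exfil(page_source: str) -> dict:
    """Return indicators of Telegram-based credential exfiltration in page source."""
    return {
        "telegram_api_calls": bool(API_CALL_RE.search(page_source)),
        "hardcoded_bot_tokens": BOT_TOKEN_RE.findall(page_source),
    }
```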

What an attack might look like

Putting all the pieces together, a victim’s experience in this kind of campaign often looks like this:

  • They receive a phishing email about banking or health benefits: “Your online banking access is restricted,” or “Urgent: United Health benefits update.”
  • The link points to a legitimate but compromised site, using a long or strange path that does not raise instant suspicion.
  • That hacked site redirects, silently or after a brief delay, to a *.pages.dev phishing site that looks almost identical to the impersonated brand.
  • After entering their username and password, the victim sees an error or extra verification step and is asked to provide answers to secret questions or more personal and financial information.
  • Behind the scenes, each submitted field is captured in JavaScript and sent to a Telegram bot, where the attacker can use or sell it immediately.

From the victim’s point of view, nothing seems unusual beyond an odd-looking link and a failed sign-in. For the attackers, the mix of free hosting, compromised redirectors, and Telegram-based exfiltration gives them speed, scale, and resilience.

The bigger trend behind this campaign is clear: by leaning on free web hosting and mainstream messaging platforms, phishing actors avoid many of the choke points defenders used to rely on, like single malicious IPs or obviously shady domains. Spinning up new infrastructure is cheap, fast, and largely invisible to victims.

How to stay safe

Education and a healthy dose of skepticism are key to staying safe. A few habits can help you avoid these portals:

  • Always check the full domain name, not just the logo or page design. Banks and health insurers don’t host sign-in pages on generic developer domains like *.pages.dev or *.netlify.app, or on strange paths on unrelated sites.
  • Don’t click sign-in or benefits links in unsolicited emails or texts. Instead, go to the institution’s site via a bookmark or by typing the address yourself.
  • Treat surprise “extra security” prompts after a failed login with caution, especially if they ask for answers to security questions, card numbers, or email passwords.
  • If anything about the link, timing, or requested information feels wrong, stop and contact the provider using trusted contact information from their official site.
  • Use an up-to-date anti-malware solution with a web protection component.

Pro tip: Malwarebytes’ free Browser Guard extension blocked these websites.

Browser Guard Phishing block


The Cloudflare Outage May Be a Security Roadmap

19 November 2025 at 09:07

An intermittent outage at Cloudflare on Tuesday briefly knocked many of the Internet’s top destinations offline. Some affected Cloudflare customers were able to pivot away from the platform temporarily so that visitors could still access their websites. But security experts say doing so may have also triggered an impromptu network penetration test for organizations that have come to rely on Cloudflare to block many types of abusive and malicious traffic.

At around 6:30 EST/11:30 UTC on Nov. 18, Cloudflare’s status page acknowledged the company was experiencing “an internal service degradation.” After several hours of Cloudflare services coming back up and failing again, many websites behind Cloudflare found they could not migrate away from using the company’s services because the Cloudflare portal was unreachable and/or because they also were getting their domain name system (DNS) services from Cloudflare.

However, some customers did manage to pivot their domains away from Cloudflare during the outage. And many of those organizations probably need to take a closer look at their web application firewall (WAF) logs during that time, said Aaron Turner, a faculty member at IANS Research.

Turner said Cloudflare’s WAF does a good job filtering out malicious traffic that matches any one of the top ten types of application-layer attacks, including credential stuffing, cross-site scripting, SQL injection, bot attacks and API abuse. But he said this outage might be a good opportunity for Cloudflare customers to better understand how their own app and website defenses may be failing without Cloudflare’s help.

“Your developers could have been lazy in the past for SQL injection because Cloudflare stopped that stuff at the edge,” Turner said. “Maybe you didn’t have the best security QA [quality assurance] for certain things because Cloudflare was the control layer to compensate for that.”

Turner said one company he’s working with saw a huge increase in log volume during the outage and is still trying to figure out what was “legit malicious” versus just noise.

“It looks like there was about an eight hour window when several high-profile sites decided to bypass Cloudflare for the sake of availability,” Turner said. “Many companies have essentially relied on Cloudflare for the OWASP Top Ten [web application vulnerabilities] and a whole range of bot blocking. How much badness could have happened in that window? Any organization that made that decision needs to look closely at any exposed infrastructure to see if they have someone persisting after they’ve switched back to Cloudflare protections.”

Turner said some cybercrime groups likely noticed when an online merchant they normally stalk stopped using Cloudflare’s services during the outage.

“Let’s say you were an attacker, trying to grind your way into a target, but you felt that Cloudflare was in the way in the past,” he said. “Then you see through DNS changes that the target has eliminated Cloudflare from their web stack due to the outage. You’re now going to launch a whole bunch of new attacks because the protective layer is no longer in place.”

Nicole Scott, senior product marketing manager at the McLean, Va.-based Replica Cyber, called yesterday’s outage “a free tabletop exercise, whether you meant to run one or not.”

“That few-hour window was a live stress test of how your organization routes around its own control plane and shadow IT blossoms under the sunlamp of time pressure,” Scott said in a post on LinkedIn. “Yes, look at the traffic that hit you while protections were weakened. But also look hard at the behavior inside your org.”

Scott said organizations seeking security insights from the Cloudflare outage should ask themselves:

1. What was turned off or bypassed (WAF, bot protections, geo blocks), and for how long?
2. What emergency DNS or routing changes were made, and who approved them?
3. Did people shift work to personal devices, home Wi-Fi, or unsanctioned Software-as-a-Service providers to get around the outage?
4. Did anyone stand up new services, tunnels, or vendor accounts “just for now”?
5. Is there a plan to unwind those changes, or are they now permanent workarounds?
6. For the next incident, what’s the intentional fallback plan, instead of decentralized improvisation?

In a postmortem published Tuesday evening, Cloudflare said the disruption was not caused, directly or indirectly, by a cyberattack or malicious activity of any kind.

“Instead, it was triggered by a change to one of our database systems’ permissions which caused the database to output multiple entries into a ‘feature file’ used by our Bot Management system,” Cloudflare CEO Matthew Prince wrote. “That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.”

Cloudflare estimates that roughly 20 percent of websites use its services, and with much of the modern web relying heavily on a handful of other cloud providers including AWS and Azure, even a brief outage at one of these platforms can create a single point of failure for many organizations.

Martin Greenfield, CEO at the IT consultancy Quod Orbis, said Tuesday’s outage was another reminder that many organizations may be putting too many of their eggs in one basket.

“There are several practical and overdue fixes,” Greenfield advised. “Split your estate. Spread WAF and DDoS protection across multiple zones. Use multi-vendor DNS. Segment applications so a single provider outage doesn’t cascade. And continuously monitor controls to detect single-vendor dependency.”
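
Greenfield’s multi-vendor DNS point is easy to self-audit. A minimal sketch using the third-party dnspython package; example.com is a placeholder, and grouping NS hosts by their last two labels is a crude heuristic that misses multi-label registries like .co.uk:

```python
import dns.resolver  # pip install dnspython

def ns_providers(domain: str) -> set[str]:
    """Rough heuristic: group a zone's NS hosts by their last two labels."""
    providers = set()
    for record in dns.resolver.resolve(domain, "NS"):
        host = str(record.target).rstrip(".")
        providers.add(".".join(host.split(".")[-2:]))  # e.g. "cloudflare.com"
    return providers

if __name__ == "__main__":
    found = ns_providers("example.com")  # placeholder domain
    if len(found) < 2:
        print(f"Single DNS provider detected ({found}): a potential single point of failure")
```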

Cloudflare Outage or Cyberattack? The Real Reason Behind the Massive Disruption

19 November 2025 at 01:29

A major Cloudflare outage struck on 18 November 2025, beginning at 11:20 UTC and spreading across its global network within minutes. Although the issue initially looked like a large-scale cyberattack on Cloudflare, it was later confirmed to be an internal configuration error that disrupted the company’s core traffic-routing systems.

According to Cloudflare, the disruption began when one of the company’s database systems generated incorrect data and published it across the network. The problem stemmed from altered permissions in a ClickHouse database cluster, which inadvertently caused the system to output duplicate rows into a “feature file” used by Cloudflare’s Bot Management module. The feature file, normally stable in size, doubled unexpectedly. Once this oversized file propagated across Cloudflare’s machines, the software responsible for distributing global traffic encountered a hard limit and failed.

This internal malfunction translated into widespread HTTP 5xx errors for users trying to reach websites that rely on Cloudflare’s network. A screenshot shared by the company showed the generic error page millions of users saw during the outage. Cloudflare initially suspected that the symptoms resembled a hyper-scale DDoS attack, a concern shaped partly by recent “Aisuru” attack campaigns, raising fears of a potential cyberattack on Cloudflare. The company later clarified that “the issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind.”

Once engineers discovered the faulty feature file, they halted its propagation and reinserted an earlier, stable version. Core traffic began recovering by 14:30 UTC, and Cloudflare reported full restoration of all systems by 17:06 UTC. “Given Cloudflare’s importance in the Internet ecosystem, any outage of any of our systems is unacceptable,” the company wrote, noting that the incident was “deeply painful to every member of our team.”

Why the System Failed During the Cloudflare Outage 

The root cause of the Cloudflare outage originated with a permissions change applied at 11:05 UTC. Cloudflare engineers were in the process of improving how distributed queries run in ClickHouse. Historically, internal processes assumed that metadata queries returned results only from the “default” database. The new permissions change allowed these queries to also surface metadata from the underlying “r0” database.

A machine learning–related query, used to build the Bot Management feature configuration file, combined metadata from both locations without filtering database names. The oversight caused the file to double in size as duplicate features were added. Bot Management modules preallocate memory based on a strict feature limit of 200 entries; the malformed file exceeded this threshold, triggering a Rust panic within the proxy system.

Because Cloudflare’s core proxy (called FL, or “Frontline”) touches nearly every request on the network, the failure cascaded quickly. The newer version of the proxy system, FL2, also encountered 5xx errors. Legacy FL systems did not crash, but they produced invalid bot scores, defaulting everything to zero and potentially leading to false positives for customers who blocked bot traffic.
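
To make the failure mode concrete, here is a minimal Python sketch of the logic described above. Cloudflare’s actual code is Rust inside the FL proxy and the feature counts below are illustrative, but the shape is the same: a hard preallocated limit that treats an oversized file as fatal.

```python
FEATURE_LIMIT = 200  # the proxy preallocates memory for at most 200 features

def load_feature_file(features: list[str]) -> list[str]:
    """Sketch of a loader with a hard limit; exceeding it is fatal, like the Rust panic."""
    if len(features) > FEATURE_LIMIT:
        raise RuntimeError(f"{len(features)} features exceeds limit of {FEATURE_LIMIT}")
    return features

normal = [f"feature_{i}" for i in range(150)]  # illustrative pre-incident count
duplicated = normal * 2                        # the unfiltered query emitted each row twice
load_feature_file(duplicated)                  # raises, mirroring the proxy crash
```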

Systems Impacted 

The Cloudflare outage disrupted multiple services: 
  • Core CDN and security services returned widespread HTTP 5xx errors. 
  • Turnstile, Cloudflare’s verification system, failed to load, preventing many users from logging into the Cloudflare dashboard. 
  • Workers KV experienced a sharp increase in error rates until engineers applied a bypass patch at 13:04 UTC, stabilizing dependent services.
  • Cloudflare Access experienced authentication failures from the start of the incident. Existing sessions remained valid, but new attempts failed and returned error pages. 
  • Email Security continued processing email but temporarily lost access to an IP reputation source, slightly reducing spam-detection accuracy. 

Cloudflare also noted latency spikes across its CDN during the incident as debugging and observability tools consumed excess CPU while attempting to analyze the errors.

Complicating the investigation further, Cloudflare’s external status page briefly went offline, despite being hosted completely outside Cloudflare’s network, adding to internal suspicion that an attacker might be targeting multiple systems simultaneously. This coincidence reinforced early fears of a potential Cloudflare cyberattack, though the theory was later dismissed.

Post-Incident Actions and Next Steps 

After restoring service, Cloudflare implemented a series of fixes: strengthening configuration protection, improving kill-switch controls, refining proxy error-handling, and preventing diagnostic tools from overwhelming system resources. The company described the event as its most serious outage since 2019, noting that while it briefly raised concerns about a potential cyberattack on Cloudflare, the root cause was purely internal.

Microsoft Fends Off Massive DDoS Attack by Aisuru Botnet Operators

18 November 2025 at 14:30

Microsoft mitigated what it called a record-breaking DDoS attack by a bad actor using the Aisuru botnet, a collection of about 300,000 infected IoT devices. The scale of both the attack and the botnet behind it is the latest example of a DDoS environment that continues to scale in pace with the internet.


Cloudflare Scrubs Aisuru Botnet from Top Domains List

5 November 2025 at 21:04

For the past week, domains associated with the massive Aisuru botnet have repeatedly usurped Amazon, Apple, Google and Microsoft in Cloudflare’s public ranking of the most frequently requested websites. Cloudflare responded by redacting Aisuru domain names from their top websites list. The chief executive at Cloudflare says Aisuru’s overlords are using the botnet to boost their malicious domain rankings, while simultaneously attacking the company’s domain name system (DNS) service.

The #1 and #3 positions in this chart are Aisuru botnet controllers with their full domain names redacted. Source: radar.cloudflare.com.

Aisuru is a rapidly growing botnet comprising hundreds of thousands of hacked Internet of Things (IoT) devices, such as poorly secured Internet routers and security cameras. The botnet has increased in size and firepower significantly since its debut in 2024, demonstrating the ability to launch record distributed denial-of-service (DDoS) attacks nearing 30 terabits of data per second.

Until recently, Aisuru’s malicious code instructed all infected systems to use DNS servers from Google — specifically, the servers at 8.8.8.8. But in early October, Aisuru switched to invoking Cloudflare’s main DNS server — 1.1.1.1 — and over the past week domains used by Aisuru to control infected systems started populating Cloudflare’s top domain rankings.

As screenshots of Aisuru domains claiming two of the Top 10 positions ping-ponged across social media, many feared this was yet another sign that an already untamable botnet was running completely amok. One Aisuru botnet domain that sat prominently for days at #1 on the list was someone’s street address in Massachusetts followed by “.com”. Other Aisuru domains mimicked those belonging to major cloud providers.

Cloudflare tried to address these security, brand confusion and privacy concerns by partially redacting the malicious domains, and adding a warning at the top of its rankings:

“Note that the top 100 domains and trending domains lists include domains with organic activity as well as domains with emerging malicious behavior.”

Cloudflare CEO Matthew Prince told KrebsOnSecurity the company’s domain ranking system is fairly simplistic, and that it merely measures the volume of DNS queries to 1.1.1.1.

“The attacker is just generating a ton of requests, maybe to influence the ranking but also to attack our DNS service,” Prince said, adding that Cloudflare has heard reports of other large public DNS services seeing similar uptick in attacks. “We’re fixing the ranking to make it smarter. And, in the meantime, redacting any sites we classify as malware.”
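
A toy model shows why a ranking driven purely by raw query volume is so easy to game; the domain names and counts here are hypothetical:

```python
from collections import Counter

# Toy model: rank domains purely by raw query volume, as the original list did.
query_log = (
    ["google.com"] * 900_000             # many clients, organic traffic
    + ["apple.com"] * 700_000
    + ["botnet-c2.example"] * 1_500_000  # one botnet hammering its controller
)
ranking = Counter(query_log).most_common(3)
print(ranking)  # the C2 domain "wins" despite having no human visitors
```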

Renee Burton, vice president of threat intel at the DNS security firm Infoblox, said many people erroneously assumed that the skewed Cloudflare domain rankings meant there were more bot-infected devices than there were regular devices querying sites like Google and Apple and Microsoft.

“Cloudflare’s documentation is clear — they know that when it comes to ranking domains you have to make choices on how to normalize things,” Burton wrote on LinkedIn. “There are many aspects that are simply out of your control. Why is it hard? Because reasons. TTL values, caching, prefetching, architecture, load balancing. Things that have shared control between the domain owner and everything in between.”

Alex Greenland is CEO of the anti-phishing and security firm Epi. Greenland said he understands the technical reason why Aisuru botnet domains are showing up in Cloudflare’s rankings (those rankings are based on DNS query volume, not actual web visits). But he said they’re still not meant to be there.

“It’s a failure on Cloudflare’s part, and reveals a compromise of the trust and integrity of their rankings,” he said.

Greenland said Cloudflare planned for its Domain Rankings to list the most popular domains as used by human users, and it was never meant to be a raw calculation of query frequency or traffic volume going through their 1.1.1.1 DNS resolver.

“They spelled out how their popularity algorithm is designed to reflect real human use and exclude automated traffic (they said they’re good at this),” Greenland wrote on LinkedIn. “So something has evidently gone wrong internally. We should have two rankings: one representing trust and real human use, and another derived from raw DNS volume.”

Why might it be a good idea to wholly separate malicious domains from the list? Greenland notes that Cloudflare Domain Rankings see widespread use for trust and safety determination, by browsers, DNS resolvers, safe browsing APIs and things like TRANCO.

“TRANCO is a respected open source list of the top million domains, and Cloudflare Radar is one of their five data providers,” he continued. “So there can be serious knock-on effects when a malicious domain features in Cloudflare’s top 10/100/1000/million. To many people and systems, the top 10 and 100 are naively considered safe and trusted, even though algorithmically-defined top-N lists will always be somewhat crude.”

Over this past week, Cloudflare started redacting portions of the malicious Aisuru domains from its Top Domains list, leaving only their domain suffix visible. Sometime in the past 24 hours, Cloudflare appears to have begun hiding the malicious Aisuru domains entirely from the web version of that list. However, downloading a spreadsheet of the current Top 200 domains from Cloudflare Radar shows an Aisuru domain still at the very top.

According to Cloudflare’s website, the majority of DNS queries to the top Aisuru domains — nearly 52 percent — originated from the United States. This tracks with my reporting from early October, which found Aisuru was drawing most of its firepower from IoT devices hosted on U.S. Internet providers like AT&T, Comcast and Verizon.

Experts tracking Aisuru say the botnet relies on well more than a hundred control servers, and that for the moment at least most of those domains are registered in the .su top-level domain (TLD). Dot-su is the TLD assigned to the former Soviet Union (.su’s Wikipedia page notes the TLD was created just 15 months before the Soviet Union dissolved).

A Cloudflare blog post from October 27 found that .su had the highest “DNS magnitude” of any TLD, referring to a metric estimating the popularity of a TLD based on the number of unique networks querying Cloudflare’s 1.1.1.1 resolver. The report concluded that the top .su hostnames were associated with a popular online world-building game, and that more than half of the queries for that TLD came from the United States, Brazil and Germany [it’s worth noting that servers for the world-building game Minecraft were some of Aisuru’s most frequent targets].
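
The report doesn’t publish the exact formula, but a simplified reading of “DNS magnitude”, counting the distinct client networks (say, /24s) seen querying each TLD rather than raw query volume, might look like this sketch:

```python
import ipaddress
from collections import defaultdict

def tld_network_counts(queries: list[tuple[str, str]]) -> dict[str, int]:
    """Count distinct client /24 networks querying each TLD.

    A simplified reading of the "DNS magnitude" idea; Cloudflare's actual
    metric is more involved than this sketch.
    """
    networks = defaultdict(set)
    for client_ip, qname in queries:
        tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
        net = ipaddress.ip_network(f"{client_ip}/24", strict=False)
        networks[tld].add(net)
    return {tld: len(nets) for tld, nets in networks.items()}

print(tld_network_counts([("198.51.100.7", "game-server.su"), ("203.0.113.9", "example.com")]))
```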

A simple and crude way to detect Aisuru bot activity on a network may be to set an alert on any systems attempting to contact domains ending in .su. This TLD is frequently abused for cybercrime and by cybercrime forums and services, and blocking access to it entirely is unlikely to raise any legitimate complaints.
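
As a sketch of that crude detection in Python, assuming a resolver log with the queried name in the last whitespace-separated field (adjust for your resolver’s real format; the log path is hypothetical):

```python
def alert_on_su_lookups(log_path: str) -> None:
    """Print an alert for any DNS query log line whose queried name ends in .su."""
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if not fields:
                continue
            qname = fields[-1].rstrip(".").lower()
            if qname.endswith(".su"):
                print(f"ALERT possible Aisuru activity: {line.strip()}")

# alert_on_su_lookups("/var/log/dns/queries.log")  # hypothetical log location
```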

Perplexity AI ignores no-crawling rules on websites, crawls them anyway

6 August 2025 at 08:45

Imagine putting up a no-trespassing sign for people walking their dogs, and then finding out that one person dresses up their Great Dane as a calf and walks it on your grounds.

Well, that’s sort of what AI answer engine Perplexity has been doing by evading the no-crawl directives of websites, according to Cloudflare.

The no-trespassing sign in this case would be a robots.txt file—a small text file placed on a website that tells search engines and other automated tools (often called “bots” or “crawlers”) which pages or sections of the site they are allowed to access and which parts they should not visit.
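
Python’s standard library even ships a parser for these files, which shows how a well-behaved crawler is supposed to check before fetching; example.com and the URL path here are placeholders:

```python
from urllib import robotparser

# Illustrative robots.txt of the kind site owners used to refuse Perplexity's crawlers:
#   User-agent: PerplexityBot
#   Disallow: /
#   User-agent: Perplexity-User
#   Disallow: /

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the live file

# A well-behaved crawler asks before fetching; ignoring the answer is the issue at hand.
print(rp.can_fetch("PerplexityBot", "https://example.com/private/page"))
```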

Cloudflare started an investigation after customers complained that Perplexity accessed their content despite the fact that they had disallowed Perplexity crawling activity in their robots.txt files. They had also created Web Application Firewall (WAF) rules to specifically block both of Perplexity’s declared crawlers: PerplexityBot and Perplexity-User.

So Cloudflare set up some test domains and queried Perplexity about them to see if it would return the information that it shouldn’t be able to access. What the researchers found was surprising, because it looks like Perplexity uses a very deliberate system to bypass the no-crawling directives.

The researchers discovered that when Perplexity’s declared crawler was blocked, it switched to a user agent designed to impersonate Google Chrome on macOS, to look like a regular user. The User-Agent string gives the website you’re visiting some basic information about your browser and operating system, so the site can provide an experience optimized for that setup. The Perplexity crawler camouflaged by this User-Agent string used regularly changing IP addresses outside of Perplexity’s official IP range, allowing it to successfully access the content it wasn’t supposed to.
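
Because a User-Agent header is trivially forged, the only robust server-side check is whether the source IP belongs to the crawler operator’s published ranges, much as sites verify legitimate search engine bots. A minimal sketch; the CIDR below is a placeholder, not Perplexity’s actual range:

```python
import ipaddress

# Placeholder CIDR; substitute the ranges a crawler operator actually publishes.
DECLARED_CRAWLER_RANGES = [ipaddress.ip_network("192.0.2.0/24")]

def claims_match_network(user_agent: str, client_ip: str) -> bool:
    """A UA header is trivially forged; trust it only if the source IP backs it up."""
    if "PerplexityBot" not in user_agent and "Perplexity-User" not in user_agent:
        return True  # not claiming to be the crawler; nothing to verify here
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in DECLARED_CRAWLER_RANGES)

# Note: a browser-like UA from rotating, unlisted IPs, as Cloudflare observed,
# slips past this check entirely, which is why behavioral fingerprinting was needed.
```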

I wondered about Perplexity’s view on the whole thing, so I asked it whether it knew what a robots.txt file was for and why it was bad to ignore one. It returned the following:

Perplexity answers why it's wrong to bypass robots.txt files

Why is it wrong to ignore robots.txt?

  • Respect for Privacy and Security: Some parts of a website may contain private, sensitive, or unfinished content not meant for public search engines. Ignoring robots.txt can expose data that should remain private.
  • Fair Resource Usage: Crawling websites takes up bandwidth and server resources. By telling robots where not to go, website owners can prevent wasteful use of these resources, ensuring the site runs smoothly for real users.
  • Legal and Ethical Standards: Ignoring robots.txt can be considered unethical, and in certain circumstances, it might even break terms of service or data protection regulations depending on what information is accessed and how it’s used.
  • Trust and Reputation: If a robot repeatedly ignores robots.txt, it can be banned or blacklisted by websites, and its reputation could suffer in the public and professional community.

Perplexity’s official reaction is somewhat different. It hides behind the distinction between a regular bot and an AI agent that fetches pages on behalf of a user to answer a specific question.

The bluster around this issue reveals that Cloudflare’s leadership is either dangerously misinformed on the basics of AI, or simply more flair than cloud. https://t.co/NgliGZCspP

— Perplexity (@perplexity_ai) August 5, 2025


Perplexity reasons that:

“Modern AI assistants work fundamentally differently from traditional web crawling. When you ask Perplexity a question that requires current information—say, “What are the latest reviews for that new restaurant?”—the AI doesn’t already have that information sitting in a database somewhere. Instead, it goes to the relevant websites, reads the content, and brings back a summary tailored to your specific question.

This is fundamentally different from traditional web crawling, in which crawlers systematically visit millions of pages to build massive databases, whether anyone asked for that specific information or not.”

Although I see Perplexity’s point, and there is a big difference between crawling websites to gather as much information as you can and seeking to answer a specific question for one user, the decision whether to allow either is up to the website owner. And there should be no need for sneaking around.

So why not create a User-Agent string that tells website owners “this is just a short visit to find some specific information,” to distinguish it from actual crawlers that siphon up every bit they can find, and then let the website owners decide whether they will allow it or not?

Either way, this discussion seems far from over, and with the rise of AI agents we will probably see problems arise that were not on the radar before we all started using AI.



KrebsOnSecurity Hit With Near-Record 6.3 Tbps DDoS

20 May 2025 at 17:30

KrebsOnSecurity last week was hit by a near record distributed denial-of-service (DDoS) attack that clocked in at more than 6.3 terabits of data per second (a terabit is one trillion bits of data). The brief attack appears to have been a test run for a massive new Internet of Things (IoT) botnet capable of launching crippling digital assaults that few web destinations can withstand. Read on for more about the botnet, the attack, and the apparent creator of this global menace.

For reference, the 6.3 Tbps attack last week was ten times the size of the assault launched against this site in 2016 by the Mirai IoT botnet, which held KrebsOnSecurity offline for nearly four days. The 2016 assault was so large that Akamai, which was providing pro bono DDoS protection for KrebsOnSecurity at the time, asked me to leave their service because the attack was causing problems for their paying customers.

Since the Mirai attack, KrebsOnSecurity.com has been behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content. Google Security Engineer Damian Menscher told KrebsOnSecurity the May 12 attack was the largest Google has ever handled. In terms of sheer size, it is second only to a very similar attack that Cloudflare mitigated and wrote about in April.

After comparing notes with Cloudflare, Menscher said the botnet that launched both attacks bears the fingerprints of Aisuru, a digital siege machine that first surfaced less than a year ago. Menscher said the attack on KrebsOnSecurity lasted less than a minute, hurling large UDP data packets at random ports at a rate of approximately 585 million data packets per second.

“It was the type of attack normally designed to overwhelm network links,” Menscher said, referring to the throughput connections between and among various Internet service providers (ISPs). “For most companies, this size of attack would kill them.”
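
A quick back-of-the-envelope check of those two figures shows why the packets qualify as large: at 6.3 Tbps and roughly 585 million packets per second, each packet averages about 1,350 bytes, close to the 1,500-byte Ethernet MTU.

```python
bits_per_second = 6.3e12      # 6.3 Tbps
packets_per_second = 585e6    # ~585 million packets per second

bytes_per_packet = bits_per_second / packets_per_second / 8
print(f"~{bytes_per_packet:.0f} bytes per packet")  # ~1346 bytes, near the Ethernet MTU
```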

A graph depicting the 6.5 Tbps attack mitigated by Cloudflare in April 2025. Image: Cloudflare.

The Aisuru botnet comprises a globally-dispersed collection of hacked IoT devices, including routers, digital video recorders and other systems that are commandeered via default passwords or software vulnerabilities. As documented by researchers at QiAnXin XLab, the botnet was first identified in an August 2024 attack on a large gaming platform.

Aisuru reportedly went quiet after that exposure, only to reappear in November with even more firepower and software exploits. In a January 2025 report, XLab found the new and improved Aisuru (a.k.a. “Airashi”) had incorporated a previously unknown zero-day vulnerability in Cambium Networks cnPilot routers.

NOT FORKING AROUND

The people behind the Aisuru botnet have been peddling access to their DDoS machine in public Telegram chat channels that are closely monitored by multiple security firms. In August 2024, the botnet was rented out in subscription tiers ranging from $150 per day to $600 per week, offering attacks of up to two terabits per second.

“You may not attack any measurement walls, healthcare facilities, schools or government sites,” read a notice posted on Telegram by the Aisuru botnet owners in August 2024.

Interested parties were told to contact the Telegram handle “@yfork” to purchase a subscription. The account @yfork previously used the nickname “Forky,” an identity that has been posting to public DDoS-focused Telegram channels since 2021.

According to the FBI, Forky’s DDoS-for-hire domains have been seized in multiple law enforcement operations over the years. Last year, Forky said on Telegram he was selling the domain stresser[.]best, which saw its servers seized by the FBI in 2022 as part of an ongoing international law enforcement effort aimed at diminishing the supply of and demand for DDoS-for-hire services.

“The operator of this service, who calls himself ‘Forky,’ operates a Telegram channel to advertise features and communicate with current and prospective DDoS customers,” reads an FBI seizure warrant (PDF) issued for stresser[.]best. The FBI warrant stated that on the same day the seizures were announced, Forky posted a link to a story on this blog that detailed the domain seizure operation, adding the comment, “We are buying our new domains right now.”

A screenshot from the FBI’s seizure warrant for Forky’s DDoS-for-hire domains shows Forky announcing the resurrection of their service at new domains.

Approximately ten hours later, Forky posted again, including a screenshot of the stresser[.]best user dashboard, instructing customers to use their saved passwords for the old website on the new one.

A review of Forky’s posts to public Telegram channels — as indexed by the cyber intelligence firms Unit 221B and Flashpoint — reveals a 21-year-old individual who claims to reside in Brazil [full disclosure: Flashpoint is currently an advertiser on this blog].

Since late 2022, Forky’s posts have frequently promoted a DDoS mitigation company and ISP that he operates called botshield[.]io. The Botshield website is connected to a business entity registered in the United Kingdom called Botshield LTD, which lists a 21-year-old woman from Sao Paulo, Brazil as the director. Internet routing records indicate Botshield (AS213613) currently controls several hundred Internet addresses that were allocated to the company earlier this year.

Domaintools.com reports that botshield[.]io was registered in July 2022 to a Kaike Southier Leite in Sao Paulo. A LinkedIn profile by the same name says this individual is a network specialist from Brazil who works in “the planning and implementation of robust network infrastructures, with a focus on security, DDoS mitigation, colocation and cloud server services.”

MEET FORKY

Image: Jaclyn Vernace / Shutterstock.com.

In his posts to public Telegram chat channels, Forky has hardly attempted to conceal his whereabouts or identity. In countless chat conversations indexed by Unit 221B, Forky could be seen talking about everyday life in Brazil, often remarking on the extremely low or high prices in Brazil for a range of goods, from computer and networking gear to narcotics and food.

Reached via Telegram, Forky claimed he was “not involved in this type of illegal actions for years now,” and that the project had been taken over by other unspecified developers. Forky initially told KrebsOnSecurity he had been out of the botnet scene for years, only to concede this wasn’t true when presented with public posts on Telegram from late last year that clearly showed otherwise.

Forky denied being involved in the attack on KrebsOnSecurity, but acknowledged that he helped to develop and market the Aisuru botnet. Forky claims he is now merely a staff member for the Aisuru botnet team, and that he stopped running the botnet roughly two months ago after starting a family. Forky also said the woman named as director of Botshield is related to him.

Forky offered equivocal, evasive responses to a number of questions about the Aisuru botnet and his business endeavors. But on one point he was crystal clear:

“I have zero fear about you, the FBI, or Interpol,” Forky said, asserting that he is now almost entirely focused on his hosting business, Botshield.

Forky declined to discuss the makeup of his ISP’s clientele, or to clarify whether Botshield was more of a hosting provider or a DDoS mitigation firm. However, Forky has posted on Telegram about Botshield successfully mitigating large DDoS attacks launched against other DDoS-for-hire services.

DomainTools finds the same Sao Paulo street address in the registration records for botshield[.]io was used to register several other domains, including cant-mitigate[.]us. The email address in the WHOIS records for that domain is forkcontato@gmail.com, which DomainTools says was used to register the domain for the now-defunct DDoS-for-hire service stresser[.]us, one of the domains seized in the FBI’s 2023 crackdown.

On May 8, 2023, the U.S. Department of Justice announced the seizure of stresser[.]us, along with a dozen other domains offering DDoS services. The DOJ said ten of the 13 domains were reincarnations of services that were seized during a prior sweep in December, which targeted 48 top stresser services (also known as “booters”).

Forky claimed he could find out who attacked my site with Aisuru. But when pressed a day later on the question, Forky said he’d come up empty-handed.

“I tried to ask around, all the big guys are not retarded enough to attack you,” Forky explained in an interview on Telegram. “I didn’t have anything to do with it. But you are welcome to write the story and try to put the blame on me.”

THE GHOST OF MIRAI

The 6.3 Tbps attack last week caused no visible disruption to this site, in part because it was so brief — lasting approximately 45 seconds. DDoS attacks of such magnitude and brevity typically are produced when botnet operators wish to test or demonstrate their firepower for the benefit of potential buyers. Indeed, Google’s Menscher said it is likely that both the May 12 attack and the slightly larger 6.5 Tbps attack against Cloudflare last month were simply tests of the same botnet’s capabilities.

In many ways, the threat posed by the Aisuru/Airashi botnet is reminiscent of Mirai, an innovative IoT malware strain that emerged in the summer of 2016 and successfully out-competed virtually all other IoT malware strains in existence at the time.

As first revealed by KrebsOnSecurity in January 2017, the Mirai authors were two U.S. men who co-ran a DDoS mitigation service — even as they were selling far more lucrative DDoS-for-hire services using the most powerful botnet on the planet.

Less than a week after the Mirai botnet was used in a days-long DDoS against KrebsOnSecurity, the Mirai authors published the source code to their botnet so that they would not be the only ones in possession of it in the event of their arrest by federal investigators.

Ironically, the leaking of the Mirai source is precisely what led to the eventual unmasking and arrest of the Mirai authors, who went on to serve probation sentences that required them to consult with FBI investigators on DDoS investigations. But that leak also rapidly led to the creation of dozens of Mirai botnet clones, many of which were harnessed to fuel their own powerful DDoS-for-hire services.

Menscher told KrebsOnSecurity that as counterintuitive as it may sound, the Internet as a whole would probably be better off if the source code for Aisuru became public knowledge. After all, he said, the people behind Aisuru are in constant competition with other IoT botnet operators who are all striving to commandeer a finite number of vulnerable IoT devices globally.

Such a development would almost certainly cause a proliferation of Aisuru botnet clones, he said, but at least then the overall firepower from each individual botnet would be greatly diminished — or at least within range of the mitigation capabilities of most DDoS protection providers.

Barring a source code leak, Menscher said, it would be nice if someone published the full list of software exploits being used by the Aisuru operators to grow their botnet so quickly.

“Part of the reason Mirai was so dangerous was that it effectively took out competing botnets,” he said. “This attack somehow managed to compromise all these boxes that nobody else knows about. Ideally, we’d want to see that fragmented out, so that no [individual botnet operator] controls too much.”
