New York is contemplating a bill that adds surveillance to 3D printers:
New York’s 2026-2027 executive budget bill (S.9005 / A.10005) includes language that should alarm every maker, educator, and small manufacturer in the state. Buried in Part C is a provision requiring all 3D printers sold or delivered in New York to include “blocking technology.” This is defined as software or firmware that scans every print file through a “firearms blueprint detection algorithm” and refuses to print anything it flags as a potential firearm or firearm component.
I get the policy goals here, but the solution just won’t work. It’s the same problem as DRM: trying to prevent general-purpose computers from doing specific things. Cory Doctorow wrote about it in 2018 and—more generally—spoke about it in 2011.
At Rapid7, we track a wide range of threats targeting cloud environments, where a frequent objective is hijacking victim infrastructure to host phishing or spam campaigns. Beyond the obvious security risks, this approach allows threat actors to offload their operational costs onto the target company, often resulting in significant, unwanted bills for services the victim never intended to use.
Rapid7 recently investigated a cloud abuse incident in which threat actors leveraged compromised AWS credentials to deploy phishing and spam infrastructure using AWS WorkMail, bypassing the anti-abuse controls normally enforced by AWS Simple Email Service (SES). AWS SES is a general-purpose, API-driven email platform intended for application-generated email such as transactional notifications and marketing messages. This approach allows the threat actor to leverage Amazon’s high sender reputation to masquerade as a valid business entity, with the ability to send email directly from victim-owned AWS infrastructure. Generating minimal service-attributed telemetry also makes threat actor activity difficult to distinguish from routine activity. Any organization with exposed AWS credentials and permissive Identity and Access Management (IAM) policies is potentially at risk, particularly those without guardrails or monitoring around WorkMail and SES configuration.
In this post, we analyze a real-world incident observed by our MDR team in which threat actors abused native AWS email services to build phishing and spam infrastructure inside a compromised cloud environment. We will reconstruct the attacker’s progression from credential validation and IAM reconnaissance to bypassing Amazon SES safeguards by pivoting to AWS WorkMail. Along the way, we highlight how legitimate service abstractions can be leveraged to evade detection, examine the resulting logging and attribution gaps, and outline practical detection and prevention strategies defenders can use to identify and disrupt similar cloud-native abuse.
Background: AWS WorkMail and its key components
AWS WorkMail is a fully managed business email and calendaring service that allows organizations to operate corporate mailboxes without deploying or maintaining their own mail servers. It supports standard email protocols such as IMAP and SMTP, as well as common desktop and mobile clients, making it a lightweight, pay-as-you-go alternative for teams already operating within AWS.
To understand the activities performed by threat actors in the incident, it’s important to first introduce several core concepts within AWS WorkMail.
Organization
An Organization is the top-level container in WorkMail. It represents an isolated email environment that holds all users, groups, and domains. Each WorkMail organization is region-specific and operates independently, which allows attackers to create disposable, self-contained email infrastructures with minimal setup.
Users
Users represent individual mail-enabled identities within a WorkMail organization. After a user is created using the “workmail:CreateUser” API call, a mailbox can be assigned via a “workmail:RegisterToWorkMail” API call. Once registered, the user can authenticate to the AWS WorkMail web client or connect via standard email protocols and immediately begin sending and receiving email.
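For illustration, a minimal boto3 sketch of this two-step provisioning flow might look like the following; the organization ID, user name, password, and email address are placeholders rather than values from any real environment.

# Hypothetical sketch: provisioning a WorkMail user and mailbox with boto3.
# Organization ID, names, password, and domain below are placeholders.
import boto3

workmail = boto3.client("workmail", region_name="us-east-1")

# Create a mail-enabled identity inside an existing WorkMail organization
user = workmail.create_user(
    OrganizationId="m-exampleorgid",      # placeholder organization ID
    Name="marketing",
    DisplayName="Marketing",
    Password="Example-Passw0rd!",         # placeholder credential
)

# Assign a mailbox so the user can send and receive email immediately
workmail.register_to_work_mail(
    OrganizationId="m-exampleorgid",
    EntityId=user["UserId"],
    Email="marketing@example.com",
)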
Groups
Groups are collections of users that can receive email on behalf of multiple members. They are typically used for distribution lists or shared inboxes and can simplify bulk message delivery or internal coordination within a WorkMail organization.
Domains
Domains define the email address namespace used by a WorkMail organization (e.g., @example.com). Before a domain can be used, ownership must be verified. This verification process leverages the standard domain verification mechanism of Amazon Simple Email Service, typically via DNS records. Once verified, the domain can be actively used for sending and receiving email, enabling threat actors to operate from attacker-controlled, but seemingly legitimate, domains.
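As a rough illustration of the SES-backed verification workflow that WorkMail relies on (the domain below is a placeholder), the underlying calls can be made with boto3 like this:

# Hypothetical sketch of SES domain verification; the domain is a placeholder.
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Request verification for the domain; the returned token must be published
# as a TXT record in the domain's DNS zone before verification completes.
token = ses.verify_domain_identity(Domain="example.com")["VerificationToken"]

# Request DKIM tokens, which are published as CNAME records for DKIM signing.
dkim_tokens = ses.verify_domain_dkim(Domain="example.com")["DkimTokens"]

print("TXT verification token:", token)
print("DKIM CNAME tokens:", dkim_tokens)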
Attack analysis
The diagram below contains a graphical representation of the key events carried out by the attackers throughout the attack, starting with initial access actions, continuing through privilege escalation, and ending with the achievement of objectives.
Figure 1: Graphical visualization of the attack
Initial access
The compromise began with the exposure of long-term AWS access keys. The first indication of malicious activity was an “sts:GetCallerIdentity” API call with the User-Agent set to “TruffleHog Firefox.” This strongly suggests the use of TruffleHog, a tool commonly leveraged by adversaries to discover and validate leaked credentials from sources such as GitHub, GitLab, and public S3 buckets. Rapid7 has frequently observed TruffleHog usage in active campaigns, including activity attributed to groups such as the Crimson Collective.
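The validation step itself is trivial. A minimal boto3 equivalent of the sts:GetCallerIdentity check, shown here with placeholder credentials, would look something like this:

# Minimal sketch of the kind of credential check TruffleHog (or an attacker
# validating a leaked key by hand) performs; the key pair is a placeholder.
import boto3

sts = boto3.client(
    "sts",
    aws_access_key_id="AKIAEXAMPLEKEYID",           # leaked long-term key (placeholder)
    aws_secret_access_key="exampleSecretAccessKey",  # placeholder secret
)

# GetCallerIdentity succeeds for any valid credentials and reveals the
# account ID and principal ARN without requiring any IAM permissions.
identity = sts.get_caller_identity()
print(identity["Account"], identity["Arn"])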
Several days after this initial credential validation, we observed suspicious activity involving a second IAM user authenticated via long-term access keys. While we cannot conclusively prove that both users were accessed by the same operator, multiple factors suggest they were part of the same intrusion activity. Notably, both authentications originated from the same geographic region, which was anomalous for the victim’s normal operating patterns. Throughout the incident window, access to both accounts was conducted through a rotating set of IP addresses associated primarily with cloud service providers such as Amazon and DigitalOcean. This infrastructure choice is consistent with common adversary tradecraft used to obfuscate true origin and blend into legitimate cloud-to-cloud traffic.
Figure 2: Example TruffleHog output showing discovered credentials for Google Cloud Platform (GCP)
Discovery phase and privilege escalation
Following initial access, the first compromised user was used to perform basic environment discovery via native AWS APIs. These attempts repeatedly resulted in AccessDenied errors, indicating that the exposed credentials were constrained by limited permissions. The activity was conducted using the AWS command-line interface (CLI), suggesting hands-on, interactive exploration by the threat actor rather than automated tooling.
After encountering these limitations, the adversary shifted activity to the second set of compromised credentials, which possessed significantly broader permissions. With this user, enumeration became more deliberate and structured. The actor began with iam:ListUsers API calls to understand the identity landscape and then used a technique of intentionally triggering API errors to confirm specific permissions without making persistent changes.
As part of this broader discovery effort, the actor also queried Amazon SES to assess its current configuration and readiness for abuse. Specifically, they executed ses:GetAccount and ses:ListIdentities. These calls allowed the adversary to quickly map the operational status of SES within the account. The ses:ListIdentities API call was used to determine whether any verified identities (domains or email addresses) already existed that could be immediately leveraged for sending mail; none were present at the time. In parallel, ses:GetAccount was used to identify whether the account was operating in the SES sandbox, which would impose strict sending limits and require additional steps before large-scale email campaigns could be launched.
This SES-focused reconnaissance indicates early intent to abuse email-sending capabilities and demonstrates how attackers can efficiently evaluate service readiness using only a small number of low-noise management API calls.
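For context, a rough boto3 sketch of these readiness checks might look like the following (the GetAccount call is exposed through the SESv2 client; output handling here is purely illustrative):

# Hypothetical sketch of the SES readiness checks described above.
import boto3

ses = boto3.client("ses", region_name="us-east-1")
sesv2 = boto3.client("sesv2", region_name="us-east-1")

# Are any domains or addresses already verified and ready to send from?
identities = ses.list_identities()["Identities"]

# Is the account still in the SES sandbox, and what are its sending limits?
account = sesv2.get_account()
in_sandbox = not account["ProductionAccessEnabled"]
quota = account["SendQuota"]

print("Verified identities:", identities)
print("Sandboxed:", in_sandbox, "Max 24h send:", quota["Max24HourSend"])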
Returning to the permission-probing technique: for example, the actor attempted to create an IAM user that already existed. The resulting error response confirmed possession of iam:CreateUser permissions without successfully creating a new entity:
{
"userAgent": "aws-cli/1.22.34 Python/3.10.12 Linux/5.15.0-113-generic botocore/1.23.34",
"errorCode": "EntityAlreadyExistsException",
"errorMessage": "User with name xxxx already exists."
}
Listing 1: Part of the iam:CreateUser CloudTrail log
A similar validation was performed using iam:CreateLoginProfile. By supplying a password that violated the account’s password policy, the actor received a PasswordPolicyViolationException, confirming their ability to create console login profiles:
{
"userAgent": "aws-cli/1.22.34 Python/3.10.12 Linux/5.15.0-113-generic botocore/1.23.34",
"errorCode": "PasswordPolicyViolationException",
"errorMessage": "Password should have at least one uppercase letter"
}
Listing 2: Part of the iam:CreateLoginProfile CloudTrail log
After validating the scope of their privileges, the adversary created a new IAM user, attached the AWS managed policy “AdministratorAccess”, and established a login profile to enable AWS Management Console access. This marked a transition from CLI-based reconnaissance to full GUI-based control, providing unrestricted access and setting the stage for subsequent operational activity.
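A hypothetical boto3 reconstruction of those three steps, using a placeholder user name and password rather than the actual values from the incident, could look like this:

# Illustrative reconstruction of the privilege-escalation steps described
# above; the user name and password are placeholders, not incident values.
import boto3

iam = boto3.client("iam")

# Create a new IAM user under attacker control
iam.create_user(UserName="svc-backup-task")  # placeholder name

# Attach the AWS managed AdministratorAccess policy for unrestricted access
iam.attach_user_policy(
    UserName="svc-backup-task",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# Add a console login profile so the actor can pivot to the Management Console
iam.create_login_profile(
    UserName="svc-backup-task",
    Password="Example-Passw0rd!",   # placeholder password
    PasswordResetRequired=False,
)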
Action on objectives: Preparing email infrastructure for abuse
By the end of the discovery phase, the threat actor had established two critical facts:
No verified identities existed in Amazon Simple Email Service (SES).
The account remained restricted by the SES sandbox.
The SES sandbox is explicitly designed to limit fraud and abuse, and its restrictions effectively prevent meaningful phishing or spam campaigns. While an account remains in the sandbox, the following controls apply:
Emails can only be sent to verified identities (email addresses or domains) or the SES mailbox simulator.
A maximum of 200 messages per 24-hour period.
A maximum sending rate of 1 message per second.
These constraints made SES unsuitable for immediate abuse at scale. Rather than abandoning the service, the attacker initiated a process to legitimize higher-volume email sending.
First, they opened a support case with AWS requesting removal from the SES sandbox. In parallel, they requested a substantial increase to the daily sending quota, setting it to 100,000 emails per day, using the servicequotas:RequestServiceQuotaIncrease API call.
Listing 3: Request parameters from RequestServiceQuotaIncrease API call
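For illustration only, a boto3 sketch of a quota-increase request of this kind might resemble the following; the quota code shown is a placeholder, not a value taken from the incident.

# Illustrative sketch of a service quota increase request.
import boto3

quotas = boto3.client("service-quotas", region_name="us-east-1")

response = quotas.request_service_quota_increase(
    ServiceCode="ses",          # Amazon SES
    QuotaCode="L-EXAMPLE0",     # placeholder code for the daily sending quota
    DesiredValue=100000.0,      # requested 100,000 emails per day
)
print(response["RequestedQuota"]["Status"])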
During this waiting period, the actor focused on persistence and stealth. Multiple IAM users were created. These usernames were deliberately chosen to resemble region- or service-scoped automation accounts rather than human operators. To further reduce suspicion during IAM audits, the attacker attached narrowly scoped, SES-only policies to these users instead of broad administrative permissions. This approach allowed them to preserve operational access while minimizing obvious indicators of compromise such as over-privileged identities.
At this stage, the attacker had effectively prepared the account for large-scale email abuse, but they did not wait for AWS approval to proceed.
Bypassing SES controls by abusing AWS WorkMail
Rather than remaining idle while SES sandbox removal and quota increases were pending, the attacker pivoted to AWS WorkMail, which offers an alternative email-sending pathway with significantly fewer upfront restrictions.
Using the workmail:CreateOrganization API, the threat actor created multiple WorkMail organizations. They then initiated domain verification workflows for domains designed to appear legitimate and business-like (a minimal sketch of this provisioning flow appears after the list), including:
cloth-prelove[.]me
ipad-service-london[.]com
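As referenced above, a minimal boto3 sketch of the organization creation and domain registration step, with a placeholder alias and domain, might look like the following:

# Hypothetical sketch of WorkMail organization creation and domain
# registration via boto3; the alias and domain are placeholders.
import boto3

workmail = boto3.client("workmail", region_name="us-east-1")

# Create a disposable, self-contained email environment
org = workmail.create_organization(Alias="example-org-alias")
org_id = org["OrganizationId"]

# Register an attacker-controlled domain with the organization; this is
# what triggers the SES-backed verification workflow described below.
workmail.register_mail_domain(
    OrganizationId=org_id,
    DomainName="example-business-domain.com",  # placeholder domain
)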
Domain verification was performed through ses:VerifyDomainIdentity and ses:VerifyDomainDkim, with the calls originating from workmail.amazonaws.com. This highlights an important nuance for defenders: although SES APIs are involved, the activity is driven by WorkMail provisioning rather than traditional SES email campaigns.
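As a rough starting point for hunting this pattern (a sketch, not production detection logic), a defender could query CloudTrail for recent VerifyDomainIdentity events and flag those whose source is the WorkMail service:

# Defender-side sketch: find recent SES domain verification events and flag
# those driven by the WorkMail service rather than a client IP address.
import json
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "VerifyDomainIdentity"}],
    StartTime=start,
    EndTime=end,
)["Events"]

for e in events:
    detail = json.loads(e["CloudTrailEvent"])
    # WorkMail-driven calls show the service, not a client IP, as the source
    if "workmail" in detail.get("sourceIPAddress", ""):
        print("WorkMail-driven domain verification:",
              detail.get("eventTime"), detail.get("requestParameters"))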
Once domain verification was completed, the actor created multiple mailbox users directly within WorkMail, such as:
service@ipad-service-london[.]com
marketing@ipad-service-london[.]com
These accounts served two purposes. First, they established persistence at the application layer, independent of IAM. Second, they provided credible sender identities for phishing and spam operations, closely resembling legitimate corporate email addresses.
There were also AWS directory service events logged by CloudTrail that show new aliases created for the new sender domains, using the victim’s directory tenant:
CreateAlias
AuthorizeApplication
This pivot is particularly impactful because AWS WorkMail does not implement a sandbox model comparable to SES. Emails can be sent immediately to external, unverified recipients. Additionally, WorkMail supports significantly higher sending volumes than SES sandbox limits. While Rapid7 has not empirically validated the maximum throughput, AWS documentation cites a default upper limit of 100,000 external recipients per day per organization, aggregated across all users.
Email sending methods and logging gaps
The attacker had two viable options for sending email through WorkMail:
1. Web interface: Emails sent through the AWS WorkMail web client may surface indirectly in CloudTrail as “ses:SendRawEmail” events. These events are generated because WorkMail uses Amazon Simple Email Service (SES) as its underlying mail transport, even though the messages are composed and sent entirely through the WorkMail application.
While these events are not attributed to an IAM principal, they do expose several pieces of valuable metadata within the “requestParameters” field — most notably the sender’s email address and associated SES identity. This allows defenders to link outbound email activity to specific WorkMail users and recently verified domains, even in the absence of traditional application or message-level logs.
One notable limitation of these “ses:SendRawEmail” events is the absence of a true client source IP address. Because emails sent via the WorkMail web interface are executed by an AWS-managed service on behalf of the mailbox user, CloudTrail records the “sourceIPAddress” as “workmail.<region>.amazonaws.com” rather than the originating IP address of the actor’s browser session. This effectively obscures the attacker’s true network origin and prevents defenders from correlating email-sending activity with suspicious IP ranges, TOR exit nodes, or previously observed intrusion infrastructure.
Listing 4: SendRawEmail event logged after an email is sent via the AWS WorkMail web interface
While limited, this telemetry can still be valuable for correlating suspicious sending behavior with recently created WorkMail users or newly verified domains.
2. SMTP access: Alternatively, the attacker can authenticate directly to WorkMail’s SMTP endpoint and send messages programmatically. Emails sent via SMTP do not generate CloudTrail events, even when SES data events are enabled, creating a significant blind spot for defenders.
An example Python script used to send email through WorkMail SMTP is shown below:
import smtplib
from email.message import EmailMessage

# Configuration
SMTP_SERVER = "smtp.mail.us-east-1.awsapps.com"
SMTP_PORT = 465
EMAIL_ADDRESS = "email@example.com"
EMAIL_PASSWORD = "****"

# Create the message
msg = EmailMessage()
msg["Subject"] = "WorkMail SMTP"
msg["From"] = EMAIL_ADDRESS
msg["To"] = "<unverified_email>"
msg.set_content("Email Delivered to an Unverified Email via AWS WorkMail")

# Send the email
try:
    with smtplib.SMTP_SSL(SMTP_SERVER, SMTP_PORT) as smtp:
        smtp.login(EMAIL_ADDRESS, EMAIL_PASSWORD)
        smtp.send_message(msg)
    print("Email sent successfully!")
except Exception as e:
    print(f"Error: {e}")
Listing 5: Example script sending messages through AWS WorkMail via SMTP
From an attacker’s perspective, this method is ideal: higher volume, immediate external reach, and minimal centralized logging. From a defender’s perspective, it underscores the importance of monitoring WorkMail organization creation, domain verification events, and mailbox provisioning, as these actions often precede phishing activity that will never be visible in CloudTrail.
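As a concrete but illustrative starting point, the same CloudTrail lookup approach can be pointed at the WorkMail provisioning events discussed in this post; the time window and output handling here are placeholders for whatever your detection pipeline expects.

# Sketch of the monitoring idea above: surface recent WorkMail provisioning
# activity from CloudTrail management events. CreateUser is also emitted by
# IAM, so each event's source is checked for the WorkMail service.
import boto3
from datetime import datetime, timedelta, timezone

WATCHED_EVENTS = ["CreateOrganization", "CreateUser", "RegisterToWorkMail"]

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

for name in WATCHED_EVENTS:
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}],
        StartTime=start,
        EndTime=end,
    )["Events"]
    for e in events:
        if "workmail" in e.get("EventSource", ""):
            print(name, e["EventTime"], e.get("Username"))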
Conclusion
This incident illustrates how threat actors can abuse higher-level AWS services to deploy phishing and spam infrastructure closely resembling legitimate enterprise usage. While AWS WorkMail is not designed to support bulk email operations, attackers can still leverage it as an interim capability alongside Amazon SES. By abusing WorkMail’s authenticated mailboxes and relaxed upfront controls, adversaries can begin sending lower volumes of email immediately — well before SES is moved out of the sandbox and higher sending quotas are approved. This staged approach allows attackers to establish sender reputation, validate infrastructure, and maintain operational momentum while bypassing many of the friction points intentionally built into SES.
To mitigate this class of abuse, organizations should combine preventive guardrails with focused detection. Where AWS WorkMail is not required, its use should be explicitly blocked using AWS Organizations Service Control Policies (SCPs) to prevent organization creation and mailbox provisioning. In environments where WorkMail is needed, IAM policies should enforce strict least-privilege access and treat WorkMail and SES administration as privileged operations subject to monitoring and approval. Finally, organizations should reduce the likelihood of initial access by implementing secure development and operational practices — such as secret scanning in code repositories, regular key rotation, and minimizing long-term access keys — to limit the impact of credential leakage and prevent attackers from converting compromised credentials into scalable email abuse.
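As an illustration of the SCP guardrail described above (the policy name and description are placeholders, and the resulting policy still needs to be attached to the appropriate root or organizational unit, for example via attach_policy), a deny-all-WorkMail policy could be created like this:

# Illustrative sketch of an SCP that blocks AWS WorkMail usage org-wide
# where the service is not required. Name and description are placeholders.
import json
import boto3

deny_workmail_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyWorkMail",
            "Effect": "Deny",
            "Action": ["workmail:*"],
            "Resource": "*",
        }
    ],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="deny-workmail",                    # placeholder name
    Description="Block AWS WorkMail usage",  # placeholder description
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_workmail_scp),
)
print(policy["Policy"]["PolicySummary"]["Id"])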
MITRE ATT&CK techniques
Tactic: Initial Access
Technique: Valid Accounts: Cloud Accounts (T1078.004)
Details: The attacker authenticated to AWS using exposed long-term access keys validated with sts:GetCallerIdentity

Tactic: Persistence
Technique: Create Account: Cloud Account (T1136.003)
Details: The attacker created multiple IAM users and AWS WorkMail mailbox users to maintain persistent access; the attacker also attached the AdministratorAccess managed policy to a newly created IAM user

Tactic: Discovery
Technique: Cloud Infrastructure Discovery (T1580)
Details: The attacker enumerated IAM users and assessed Amazon SES configuration and sandbox status via API calls

Tactic: Impact
Technique: Resource Hijacking: Cloud Service Hijacking (T1496.004)
Details: The attacker abused AWS WorkMail and SES to send high-volume phishing and spam emails from the victim account
Indicators of compromise (IOCs)
139.59.117[.]125
3.0.205[.]202
54.151.176[.]0
Note: IP addresses 3.0.205[.]202 and 54.151.176[.]0 are Amazon-owned, so care should be taken when applying IP blocks.
Rapid7 customers
InsightIDR and Managed Detection and Response (MDR) customers have existing detection coverage through Rapid7’s expansive library of detection rules. These detections are deployed and will alert on the behaviors described in this technical analysis.
Just weeks after Australia rolled out the world’s first nationwide social media ban for children under 16, the British government has signaled it may follow a similar path. On Monday, Prime Minister Keir Starmer said the UK is considering a social media ban for children aged 15 and under, warning that “no option is off the table” as ministers confront growing concerns about young people’s online wellbeing.
The move places the British government’s proposed social media ban at the center of a broader national debate about the role of technology in childhood.
Officials said they are studying a wide range of measures, including tougher age checks, phone curfews, restrictions on addictive platform features, and potentially raising the digital age of consent.
UK Explores Stricter Limits on Social Media for Children
In a Substack post on Tuesday, Starmer said that for many children, social media has become “a world of endless scrolling, anxiety and comparison.” “Being a child should not be about constant judgement from strangers or the pressure to perform for likes,” he wrote.
Alongside the possible ban, the government has launched a formal consultation on children’s use of technology. The review will examine whether a social media ban for children would be effective and, if introduced, how it could be enforced. Ministers will also look at improving age assurance technology and limiting design features such as “infinite scrolling” and “streaks,” which officials say encourage compulsive use.
The consultation will be backed by a nationwide conversation with parents, young people, and civil society groups. The government said it would respond to the consultation in the summer.
Learning from Australia’s Unprecedented Move
British ministers are set to visit Australia to “learn first-hand from their approach,” referencing Canberra’s decision to ban social media for children under 16. The Australian law, which took effect on December 10, requires platforms such as Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads, and YouTube to block underage users or face fines of up to AU$32 million.
Prime Minister Anthony Albanese made clear why his government acted. “Social media is doing harm to our kids, and I’m calling time on it,” he said. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.”
Parents and children are not penalized under the Australian rules; enforcement targets technology companies.
Early figures suggest significant impact. Australia’s eSafety Commissioner Julie Inman-Grant said 4.7 million social media accounts were deactivated in the first week of the policy. To put that in context, there are about 2.5 million Australians aged eight to 15.
“This is exactly what we hoped for and expected: early wins through focused deactivations,” she said, adding that “absolute perfection is not a realistic goal,” but the law aims to delay exposure, reduce harm, and set a clear social norm.
UK Consultation and School Phone Bans
The UK’s proposals go beyond a possible social media ban. The government said it will examine raising the digital age of consent, introducing phone curfews, and restricting addictive platform features. It also announced tougher guidance for schools, making it clear that pupils should not have access to mobile phones during lessons, breaks, or lunch.
Ofsted inspectors will now check whether mobile phone bans are properly enforced during school inspections. Schools struggling to implement bans will receive one-to-one support from Attendance and Behaviour Hub schools.
Although nearly all UK schools already have phone policies—99.9% of primary schools and 90% of secondary schools—58% of secondary pupils reported phones being used without permission in some lessons.
Education Secretary Bridget Phillipson said: “Mobile phones have no place in schools. No ifs, no buts.”
Building on Existing Online Safety Laws
Technology Secretary Liz Kendall said the government is prepared to take further action beyond the Online Safety Act.
“These laws were never meant to be the end point, and we know parents still have serious concerns,” she said. “We are determined to ensure technology enriches children’s lives, not harms them.”
The Online Safety Act has already introduced age checks for adult sites and strengthened rules around harmful content. The government said children encountering age checks online has risen from 30% to 47%, and 58% of parents believe the measures are improving safety.
The British government’s proposed social media ban would build on this framework, focusing on features that drive excessive use regardless of content. Officials said evidence from around the world will be examined as they consider whether a UK-wide social media ban for children could work in practice.
As Australia’s experience begins to unfold, the UK is positioning itself to decide whether similar restrictions could reshape how children engage with digital platforms. The consultation marks the start of what ministers describe as a long-term effort to ensure young people develop a healthier relationship with technology.
Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators’ copyright, and their products are triggering mass unemployment as well as new types of clinical psychoses. In a moment where Congress is seemingly unable to act to pass any meaningful consumer protections or market regulations, why would we hamstring the one entity evidently capable of doing so—the states? States that have already enacted consumer protections and other AI regulations, like California, and those actively debating them, like Massachusetts, were alarmed. Seventeen Republican governors wrote a letter decrying the idea, and it was ultimately killed in a rare vote of bipartisan near-unanimity.
The idea is back. Before Thanksgiving, a House Republican leader suggested they might slip it into the annual defense spending bill. Then, a draft document leaked outlining the Trump administration’s intent to enforce the state regulatory ban through executive powers. An outpouring of opposition (including from some Republican state leaders) beat back that notion for a few weeks, but on Monday, Trump posted on social media that the promised Executive Order is indeed coming soon. That would put a growing cohort of states, including California and New York, as well as Republican strongholds like Utah and Texas, in jeopardy.
The constellation of motivations behind this proposal is clear: conservative ideology, cash, and China.
The intellectual argument in favor of the moratorium is that “freedom”-killing state regulation on AI would create a patchwork that would be difficult for AI companies to comply with, which would slow the pace of innovation needed to win an AI arms race with China. AI companies and their investors have been aggressively peddling this narrative for years now, and are increasingly backing it with exorbitant lobbying dollars. It’s a handy argument, useful not only to kill regulatory constraints, but also—companies hope—to win federal bailouts and energy subsidies.
Citizens should parse that argument from their own point of view, not Big Tech’s. Preventing states from regulating AI means that those companies get to tell Washington what they want, but your state representatives are powerless to represent your own interests. Which freedom is more important to you: the freedom for a few near-monopolies to profit from AI, or the freedom for you and your neighbors to demand protections from its abuses?
There is an element of this that is more partisan than ideological. Vice President J.D. Vance argued that federal preemption is needed to prevent “progressive” states from controlling AI’s future. This is an indicator of creeping polarization, where Democrats decry the monopolism, bias, and harms attendant to corporate AI and Republicans reflexively take the opposite side. It doesn’t help that some in the parties also have direct financial interests in the AI supply chain.
But this does not need to be a partisan wedge issue: both Democrats and Republicans have strong reasons to support state-level AI legislation. Everyone shares an interest in protecting consumers from harm created by Big Tech companies. In leading the charge to kill Cruz’s initial AI moratorium proposal, Republican Senator Marsha Blackburn explained that “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives … we can’t block states from making laws that protect their citizens.” More recently, Florida Governor Ron DeSantis has said he wants to regulate AI in his state.
The often-heard complaint that it is hard to comply with a patchwork of state regulations rings hollow. Pretty much every other consumer-facing industry has managed to deal with local regulation—automobiles, children’s toys, food, and drugs—and those regulations have been effective consumer protections. The AI industry includes some of the most valuable companies globally and has demonstrated the ability to comply with differing regulations around the world, including the EU’s AI and data privacy regulations, substantially more onerous than those so far adopted by US states. If we can’t leverage state regulatory power to shape the AI industry, to what industry could it possibly apply?
The regulatory superpower that states have here is not size and force, but rather speed and locality. We need the “laboratories of democracy” to experiment with different types of regulation that fit the specific needs and interests of their constituents and evolve responsively to the concerns they raise, especially in such a consequential and rapidly changing area such as AI.
We should embrace the ability of regulation to be a driver—not a limiter—of innovation. Regulations don’t restrict companies from building better products or making more profit; they help channel that innovation in specific ways that protect the public interest. Drug safety regulations don’t prevent pharma companies from inventing drugs; they force them to invent drugs that are safe and efficacious. States can direct private innovation to serve the public.
But, most importantly, regulations are needed to prevent the most dangerous impact of AI today: the concentration of power associated with trillion-dollar AI companies and the power-amplifying technologies they are producing. We outline the specific ways that the use of AI in governance can disrupt existing balances of power, and how to steer those applications towards more equitable balances, in our new book, Rewiring Democracy. In the nearly complete absence of Congressional action on AI over the years, it has swept the world’s attention; it has become clear that states are the only effective policy levers we have against that concentration of power.
Instead of impeding states from regulating AI, the federal government should support them to drive AI innovation. If proponents of a moratorium worry that the private sector won’t deliver what they think is needed to compete in the new global economy, then we should engage government to help generate AI innovations that serve the public and solve the problems most important to people. Following the lead of countries like Switzerland, France, and Singapore, the US could invest in developing and deploying AI models designed as public goods: transparent, open, and useful for tasks in public administration and governance.
Maybe you don’t trust the federal government to build or operate an AI tool that acts in the public interest? We don’t either. States are a much better place for this innovation to happen because they are closer to the people, they are charged with delivering most government services, they are better aligned with local political sentiments, and they have achieved greater trust. They’re where we can test, iterate, compare, and contrast regulatory approaches that could inform eventual and better federal policy. And, while the costs of training and operating performant AI tools like large language models have declined precipitously, the federal government can play a valuable role here in funding cash-strapped states to lead this kind of innovation.
This essay was written with Nathan E. Sanders, and originally appeared in Gizmodo.
The cycle of vulnerability disclosure and weaponization has shattered records once again. According to new threat intelligence from Amazon Web Services (AWS), state-sponsored hacking groups linked to China began actively exploiting "React2Shell," a critical vulnerability in popular web development frameworks, mere hours after its public disclosure.
The React2Shell vulnerability, tracked as CVE-2025-55182, affects React Server Components in React 19.x and Next.js versions 15.x and 16.x when using the App Router. The flaw carries the maximum severity score of 10.0 on the CVSS scale, enabling unauthenticated remote code execution (RCE).
The Rapid Weaponization Race
The vulnerability was publicly disclosed on Wednesday, December 3. AWS threat intelligence teams, monitoring their MadPot honeypot infrastructure, detected exploitation attempts almost immediately.
The threat actors identified in the flurry of activity are linked to known China state-nexus cyber espionage groups, including:
Earth Lamia: Known for targeting financial services, logistics, and government organizations across Latin America, the Middle East, and Southeast Asia.
Jackpot Panda: A group typically focused on East and Southeast Asian entities, often aligned with domestic security interests.
"China continues to be the most prolific source of state-sponsored cyber threat activity, with threat actors routinely operationalizing public exploits within hours or days of disclosure," stated an AWS Security Blog post announcing the findings.
The speed of the operation underscored that the window between public disclosure and active attack is now measured in minutes, not days.
The AWS analysis also revealed a crucial insight into modern state-nexus tactics: threat groups are prioritizing volume and speed over technical accuracy.
Investigators observed that many attackers were attempting to use readily available, but often flawed, public Proof-of-Concept (PoC) exploits pulled from the GitHub security community. These PoCs frequently demonstrated fundamental technical misunderstandings of the flaw.
Despite the technical inadequacy, threat actors are aggressively throwing these PoCs at thousands of targets in a "volume-based approach," hoping to catch the small percentage of vulnerable configurations. This generates significant noise in logs but successfully maximizes their chances of finding an exploitable weak link.
Furthermore, attackers were not limiting their focus, simultaneously attempting to exploit other recent vulnerabilities, demonstrating a systematic, multi-pronged campaign to compromise targets as quickly as possible.
Call for Patching
While AWS has deployed automated protections for its managed services and customers using AWS WAF, the company is issuing an urgent warning to any entity running React or Next.js applications in their own environments (such as Amazon EC2 or containers).
The primary mitigation remains immediate patching.
"These protections aren't substitutes for patching," AWS warned. Developers must consult the official React and Next.js security advisories and update vulnerable applications immediately to prevent state-sponsored groups from gaining RCE access to their environments.
CVE-2025-55182 enables an attacker to achieve unauthenticated Remote Code Execution (RCE) in vulnerable versions of the following packages:
react-server-dom-webpack
react-server-dom-parcel
react-server-dom-turbopack
AWS's findings tell a cautionary tale: in today's environment, a vulnerability with a CVSS 10.0 rating becomes a national security emergency the moment it hits the public domain.
This is crazy. Lawmakers in several US states are contemplating banning VPNs, because…think of the children!
As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” both to implement an age verification system and to block access by users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing, potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.
The EFF link explains why this is a terrible idea.
Every great product experience starts with a smooth beginning. But in the world of cloud security, onboarding can sometimes feel like an obstacle course. Detailed fine-grained Identity and Access Management (IAM) configurations, lengthy deployment steps, and manual permission setups can turn what should be an exciting first impression into a tedious chore.
That’s changing. Rapid7 has enhanced the onboarding experience for Exposure Command and InsightCloudSec by integrating with AWS IAM temporary delegation - a new AWS capability that lets customers approve deployment access directly in the AWS console. The result? A faster, simpler, and more secure path to getting up and running in the cloud.
Why onboarding matters - and why it often fails
The first minutes with a new platform matter. It’s the difference between “this is amazing” and “I’ll come back to it later.”
In cloud environments, setup usually involves multiple AWS services - compute, storage, networking, access management - all of which must be configured precisely to maintain security. Traditionally, customers have had to manually create IAM roles, adjust trust relationships, and fine-tune permissions just to let a partner solution like Rapid7 deploy resources.
It’s not just time-consuming; it’s error-prone. Misconfigured roles can cause deployment failures or unnecessary security risk. Support teams spend hours walking customers through the process, and the friction delays time-to-value. When scaling across dozens or hundreds of AWS accounts, those delays multiply fast.
Meet AWS IAM temporary delegation: What it is and why it matters
AWS IAM temporary delegation simplifies the entire setup journey. It allows trusted partners like Rapid7 to automate deployment securely - but only after the customer grants explicit, time-bound approval.
Here’s how it works: When you initiate onboarding from within Rapid7’s interface, you’re redirected to the AWS console. There, you can review the exact permissions Rapid7 is requesting and how long access will last. Once approved, AWS provides Rapid7 with temporary credentials to complete the setup. After the time window expires, that access ends automatically.
No long-term IAM keys, no manual role creation, and no guesswork. Customers stay in control, with full visibility and auditability. It’s automation with accountability built in.
How Rapid7 is putting this into action
With the latest release, Rapid7 has integrated this capability directly into Exposure Command and InsightCloudSec, creating a guided onboarding experience that happens almost entirely inside the Rapid7 interface.
Here’s the new flow:
Customers configure deployment options in Rapid7’s InsightCloudSec environment.
A temporary delegation request appears via an AWS console pop-up.
An authorized AWS user reviews and approves the request.
Rapid7 automatically deploys the necessary resources on the customer’s behalf.
This streamlined workflow eliminates dozens of manual steps and reduces onboarding time from hours to minutes. It’s faster, simpler, and still fully aligned with AWS’s strict security model.
Speed, simplicity, and security
This integration hits the sweet spot between automation and trust:
Speed: Customers can start realizing value from Rapid7’s cloud security solutions in minutes instead of days.
Simplicity: The UI-driven process means no wrestling with IAM policies or JSON templates.
Security: Access is temporary and permission-scoped. Customers retain complete oversight through the AWS console and CloudTrail logs.
For organizations with compliance or security governance requirements, this is the ideal balance: operational efficiency without compromising control.
Beyond onboarding: What this says about Rapid7 and AWS alignment
This update isn’t just about faster onboarding. It’s a glimpse into Rapid7’s broader partnership with AWS. Rapid7 has long been an AWS Advanced Tier Partner, building integrations that help customers manage security across cloud-native environments. From leveraging AWS telemetry in MXDR to integrating with AWS services like CloudTrail and GuardDuty, Rapid7’s platform has been designed to meet customers where they already operate within AWS.
By adopting AWS IAM temporary delegation early, Rapid7 reinforces its commitment to cloud-first innovation and shared responsibility principles. Customers get the assurance that their onboarding, deployment, and operations all align with AWS security best practices.
What this means for customers
If you’re deploying Rapid7 Exposure Command (Advanced or Ultimate) or InsightCloudSec on AWS, here’s what to expect:
A guided onboarding experience that automates AWS resource setup.
A faster, less error-prone workflow that still keeps you in control.
The ability for authorized users to approve temporary access requests directly in the AWS console.
Before onboarding, make sure someone in your organization has the permissions to approve delegation requests. After deployment, review your CloudTrail logs as part of normal governance; you’ll see every action logged and time-bounded.
Value from day one
Onboarding shouldn’t be a hurdle. And now with AWS IAM Temporary Delegation and Rapid7’s enhanced experience, it no longer is. Together, AWS and Rapid7 have reimagined what “getting started” looks like in the cloud - faster, more intuitive, and just as secure as you need it to be.
It’s one more way Rapid7 is helping organizations unlock value from day one, while staying aligned with AWS’s best practices for identity, access, and automation.
See how easy secure onboarding can be. Explore Rapid7’s listings for Exposure Command and InsightCloudSec straight from the AWS Marketplace.
Managing network security in a dynamic cloud environment is a constant challenge. As traffic volume grows and threat actors evolve their tactics, organizations need protection that can scale effortlessly while delivering robust, intelligent defense. That's where a service like AWS Network Firewall becomes essential, and we’re excited to partner with AWS to make it even more powerful.
What is AWS Network Firewall?
AWS Network Firewall (AWS NWF) is a managed service that provides essential, auto-scaling network protections for Amazon Virtual Private Clouds (VPCs). While its flexible rules engine offers granular control, defining and maintaining the right rules to defend against evolving threats is a complex and resource-intensive task.
Manually creating and updating rules often leads to coverage gaps and creates significant operational overhead. To simplify this process and empower teams to act with confidence, Rapid7 is proud to announce the availability of Curated Intelligence Rules for AWS Network Firewall. As an AWS partner, we convert our curated intelligence on Indicators of Compromise (IOCs) into high-quality rule groups, delivering expert-vetted threat intelligence directly within your native AWS experience.
Harnessing industry-leading threat intelligence
In the world of threat intelligence, more isn’t always better. Too many low-fidelity alerts generate noise, distract analysts, and leave teams chasing false positives. At Rapid7, our approach is different. We focus on delivering high-fidelity intelligence, enabling customers to zero in on the threats most relevant to their unique environments.
Rapid7 Curated Intelligence Rules embody this same approach, and are built on three key principles:
Focus on quality over quantity - Rules emphasize meaningful, low-noise detection directly aligned with current, real-world threats, significantly reducing alert fatigue.
Curated global intelligence - Rule sets are powered by high-quality, region-specific data from unique sources, providing unparalleled visibility and context for actionable detections.
Dynamic and self-cleaning rule sets - Threat intelligence is not static. Using Rapid7’s proprietary Decay Scoring system, rules are automatically retired when an IOC passes a certain threshold, ensuring the delivered intelligence is always fresh, relevant, and current.
We’re launching with two distinct rule sets, each designed to address today’s most pressing threats:
Advanced Persistent Threat (APT) campaigns: Targets the subtle and persistent techniques used by state-sponsored and sophisticated threat actors.
Ransomware & cybercrime: Focuses on the tools, infrastructure, and indicators associated with financially motivated attacks.
These rule sets are updated daily to ensure you have the most current protections. Furthermore, our intelligence is dynamic. When an IOC passes a certain threshold in our proprietary Decay Scoring system, we remove it from the rule set. This process guarantees that the intelligence you receive is always current and actionable, significantly reducing alert fatigue.
The operational advantage
These Curated Intelligence Rules deliver immediate and tangible value, allowing your team to:
Automate threat protection: Reduce overhead with curated, continuously updated detections delivered natively within AWS Network Firewall.
Adopt protections faster: Deploy protections powered by Rapid7 Labs intelligence with just a few clicks in the console.
Maintain predictable operations: Rely on AWS-validated updates, clear rule group metadata, and transparent per-GB metering.
Common use cases addressed
Our rule sets provide practical defense against a wide range of attack scenarios. You can:
Block command and control (C2) communication from known malware families
Detect network reconnaissance activity associated with advanced persistent threats
Prevent data exfiltration to malicious domains linked to cybercrime groups
Identify and stop the download of malware payloads from compromised websites
Alert on traffic to newly registered domains used in malicious activities
Get started with Curated Intelligence Rules for AWS NFW today
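As a rough illustration of what adoption can look like programmatically (the rule group ARN below is a placeholder, not the actual Rapid7 listing; use the ARN shown for your subscribed rule group in the AWS Network Firewall console), a firewall policy can reference a managed stateful rule group like this:

# Illustrative sketch only: referencing a managed stateful rule group from an
# AWS Network Firewall policy with boto3. ARN and names are placeholders.
import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

RULE_GROUP_ARN = (
    "arn:aws:network-firewall:us-east-1:aws-managed:"
    "stateful-rulegroup/ExamplePlaceholderRuleGroup"  # placeholder ARN
)

nfw.create_firewall_policy(
    FirewallPolicyName="curated-intel-policy",  # placeholder name
    FirewallPolicy={
        # Send non-matching stateless traffic on to the stateful engine
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [{"ResourceArn": RULE_GROUP_ARN}],
    },
    Description="Firewall policy referencing a curated intelligence rule group",
)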
We are delighted to announce that Rapid7 has launched a new platform region in India, hosted in the Amazon Web Services (AWS) cloud region with the API name ap-south-2.
This follows an announcement in March 2025, when Rapid7 announced plans for expansion in India, including the opening of a new Global Capability Center (GCC) in Pune to serve as an innovation hub and Security Operations Center (SOC).
The GCC opened in April 2025, quickly followed by dedicated events in the country, to demonstrate our commitment to our partners and customers in the region. Three Security Day events took place in May, in Mumbai, Delhi, and Bangalore. These events brought together key stakeholders from the world of commerce, academia, and government to explore our advancements in Continuous Threat Exposure Management (CTEM) and Managed Extended Detection and Response (MXDR).
“Expanding into India is a critical step in accelerating Rapid7’s investments in security operations leadership and customer-centric innovation,” said Corey Thomas, chairman and CEO of Rapid7. “Innovation thrives when multi-dimensional teams come together to solve complex challenges, and this new hub strengthens our ability to deliver the most adaptive, predictive, and responsive cybersecurity solutions to customers worldwide. Establishing a security operations center in Pune also enhances our ability to scale threat detection and response globally while connecting the exceptional technical talent in the region to impactful career opportunities. We are excited to grow a world-class team in India that will play a pivotal role in shaping the future of cybersecurity.”
Rapid7 expands to 8 AWS platform regions
Today, Rapid7 operates in eight platform regions (us-east-1, us-east-2, us-west-1, ap-northeast-1, ap-southeast-2, ca-central-1, eu-central-1, govcloud).
These regions allow our customers to meet their data sovereignty requirements by choosing where their sensitive security data is hosted. We have extended this capability to ap-south-2 and me-central-1 to process additional data and serve more customers with region requirements we have not previously been able to meet.
What this means for Rapid7 customers in India
This gives our customers in India the ability to access and store data in the India region for our Exposure Management product family.
Exposure Command combines complete attack surface visibility with high-fidelity risk context and insight into your organization’s security posture, aggregating findings from both Rapid7’s native exposure detection capabilities and the third-party exposure and enrichment sources you’ve already got in place, allowing you to:
Extend risk coverage to cloud environments with real-time agentless assessment
Zero in on exposures and vulnerabilities with threat-aware risk context
Continuously assess your attack surface, validate exposures, and receive actionable remediation guidance
Efficiently operationalize your exposure management program and automate enforcement of security and compliance policies with native, no-code automation
Former DoJ attorney John Carlin writes about hackback, which he defines thus: “A hack back is a type of cyber response that incorporates a counterattack designed to proactively engage with, disable, or collect evidence about an attacker. Although hack backs can take on various forms, they are—by definition—not passive defensive measures.”
His conclusion:
As the law currently stands, specific forms of purely defensive measures are authorized so long as they affect only the victim’s system or data.
At the other end of the spectrum, offensive measures that involve accessing or otherwise causing damage or loss to the hacker’s systems are likely prohibited, absent government oversight or authorization. And even then parties should proceed with caution in light of the heightened risks of misattribution, collateral damage, and retaliation.
As for the broad range of other hack back tactics that fall in the middle of active defense and offensive measures, private parties should continue to engage in these tactics only with government oversight or authorization. These measures exist within a legal gray area and would likely benefit from amendments to the CFAA and CISA that clarify and carve out the parameters of authorization for specific self-defense measures. But in the absence of amendments or clarification on the scope of those laws, private actors can seek governmental authorization through an array of channels, whether they be partnering with law enforcement or seeking authorization to engage in more offensive tactics from the courts in connection with private litigation.