A Climate Supercomputer Is Getting New Bosses. It’s Not Clear Who.

© Caine Delacy for The New York Times


© Adam Eastland/Alamy
Physics research drives technological innovation, from medical imaging to data processing, write Dr Phil Bull and Prof Chris Clarkson; plus letters from Tim Gershon and Vincenzo Vagnoni, and Prof Paul Howarth
Your article (UK ‘could lose generation of scientists’ with cuts to projects and research facilities, 6 February) is right to highlight the serious consequences of proposed 30% funding cuts on the next generation of physics and astronomy researchers. The proposals also risk a generational destruction of the country’s ability to produce skilled graduates, retain specialist knowledge, and support physical science in industrial and educational settings.
This comes against a backdrop of wider threats to university finances, from rising costs to declining international student numbers. An estimated one in four UK physics departments are already at risk of closure, and recent cuts and delays to Science and Technology Facilities Council (STFC) grants have further depleted finances and will result in the loss of some highly skilled technical staff.
© Photograph: Murdo MacLeod/The Guardian


© Cassandra Klos for The New York Times
Darktrace researchers caught a sample of malware created with AI and LLMs to exploit the high-profile React2Shell vulnerability, putting defenders on notice that the technology lets even lesser-skilled hackers create malicious code and build complex exploit frameworks.
The post Hackers Use LLM to Create React2Shell Malware, the Latest Example of AI-Generated Threat appeared first on Security Boulevard.

© Dr. Axelle Delaunay
The conversation around AI security is full of anxiety. Every week, new headlines warn of jailbreaks, prompt injection, agents gone rogue, and the rise of LLM-enabled cybercrime. It’s easy to come away with the impression that AI is fundamentally uncontrollable and dangerous, and therefore something we need to lock down before it gets out of hand.
But as a security practitioner, I wasn’t convinced. Most of these warnings are based on hypothetical examples or carefully engineered demos. They raise important questions, but rarely answer the most basic one: What does the real attack surface of today’s AI systems actually look like?
So instead of offering another opinion, I ran the numbers.
To ground the conversation in reality, I focused on MCP, the Model Context Protocol. This framework is widely used to help language models interact with tools, APIs, and external systems. It’s open source, replicated across many environments, and built for practical integration. That makes it an ideal test case for understanding actual exposure.
No adversarial prompting. No artificial exploits. Just a measurement of what real MCP servers expose. We used SDK import analysis to locate active repositories, filtered out those that wouldn’t run, and examined the tool schemas to understand what each was capable of.
The MCP servers that met our criteria showed a familiar pattern. They exposed well-understood primitives used throughout modern software systems.
Observed capability classes:
Filesystem access
HTTP requests
Database queries
Local script or process execution
Orchestration and tool chaining
Read-only API search
These are not exotic capabilities unique to AI. They’re already embedded in cloud automation, infrastructure-as-code, and modern DevOps stacks. MCP simply gives them structure.
One of the most unexpected findings was the rarity of arbitrary code execution. Despite warnings in the media, this turned out to be the least common capability among all operational MCP servers analyzed.
This matters. It suggests that real-world deployments of AI tooling are not as reckless as some narratives claim. The most common issues are the ones we’ve known for years: weak defaults, excessive permissions, and poor input handling. There’s no mystery there (and that’s encouraging).
The problem arises when those primitives are combined. Individually, most of the MCP servers we studied were low risk. But when orchestration enters the picture, the attack surface expands.
Some real-world examples we observed:
HTTP fetch + filesystem write = persistence or content injection
Database query + orchestration = stealthy exfiltration
Filesystem write + planning = poisoned output or config hijacking
HTTP + planning + execution = multi-stage agent attacks
These combinations reflect what adversaries already do in non-AI environments. MCP just reduces friction in putting the pieces together.
The focus on constraining the model via schema and architecture is essential for 'secure by design,' yet a critical counterpoint must be considered as the industry evolves: we may not be able to stop many insecure AI applications (e.g., those built on architectures like OpenClaw or Claude Code) from shipping with insecure design choices. In turn, that insecure design path could force security teams to rely on non-deterministic, 'best effort' prompt injection defenses to prevent data exfiltration and remote code execution, rather than influencing developers toward inherently secure application design.
While the secure boundary is the schema, and we must influence application developers to adopt secure-by-design principles, the future suggests there will be many cases where this influence fails. This means security leaders must also prepare for a hybrid reality of championing architectural security while also building and operating robust, best effort runtime defenses to manage the fallout from the inevitable wave of insecure AI applications.
As we embed AI deeper into operational systems, the control points change. Historically, we validated inputs at the UI layer, enforced roles through IAM, and wrapped logic in application code.
With AI agents, those controls now live in:
The orchestration layer
Tool composition workflows
Schema contracts
Execution sandboxes
Security needs to follow the shift. That means auditing tool chains, setting strict schema policies, isolating execution contexts, and applying existing practices like least privilege and defense in depth to this new architecture.
Security and architecture leaders can start applying pressure in the right places today:
Map AI tooling to known primitives
Don’t treat these systems as unknowns. Most expose capabilities like file handling, HTTP fetches, or basic shell commands - all of which are familiar territory for teams leveraging threat intelligence effectively.
Assess schema design before worrying about prompts
The schema defines what tools the AI can call and how. Poorly scoped parameters, such as unbounded URLs or file paths, are far more dangerous than clever prompts.
Limit orchestration where possible
Composability increases risk. If orchestration is required, monitor it like critical automation infrastructure.
Audit your environment for capability sprawl
Look for AI-connected services that may expose multiple sensitive capabilities together. Risk scales when these tools are combined.
Apply existing enterprise controls
Network segmentation, credential scoping, logging, and behavioral detections still work. Least privilege access is especially relevant in AI-integrated environments where tool chaining can escalate access unintentionally. AI requires adaptation, not reinvention.
This blog condenses findings from my recent research, where I set out to answer a straightforward question: what are AI systems actually exposing in the real world today? Instead of relying on hypotheticals or fear-driven narratives, I looked at real, runnable Model Context Protocol (MCP) servers and measured their exposed capabilities and architectural design.
If you're looking for the technical deep dive, including methodology, data sets, and schema-level breakdowns, you can read the original research published on HackerNoon. You can also explore more of our ongoing threat analysis and security research on the Rapid7 Research Hub.
The bottom line: AI introduces complexity and scale, but the fundamental security principles remain the same. The real challenge is whether security teams can adapt traditional controls to new environments and influence developers toward inherently secure application design, rather than being forced to rely on non-deterministic, 'best effort' defenses like prompt injection mitigation.

Do you feel popular? There are people on the Internet who want to know all about you! Unfortunately, they don't have the best of intentions, but Google has some handy tools to address that, and they've gotten an upgrade today. The "Results About You" tool can now detect and remove more of your personal information. Plus, the tool for removing non-consensual explicit imagery (NCEI) is faster to use. All you have to do is tell Google your personal details first—that seems safe, right?
With today's upgrade, Results About You gains the ability to find and remove pages that include ID numbers like your passport number, driver's license number, and Social Security number. You can add these to Google's ongoing scans from the settings in Results About You; just click the ID numbers section to enable detection.
Naturally, Google has to know what it's looking for to remove it. So you need to provide at least part of those numbers. Google asks for the full driver's license number, which is fine, as it's not as sensitive. For your passport and SSN, you only need the last four digits, which is enough for Google to find the full numbers on webpages.


© Aurich Lawson

© Edu Bayer for The New York Times

© Bok Jin Kim, Chang W. Lee/The New York Times

© Fabio Consoli


© Angela Weiss/Agence France-Presse — Getty Images

© Tetiana Dzhafarova/Agence France-Presse — Getty Images
It is one of the most powerful involuntary actions the human body can perform. But is a big sneeze a sign of illness, pollution or something else entirely?
How worried should we be about a sneeze? It depends who you ask. In the Odyssey, Telemachus sneezes after Penelope’s prayer that her husband will soon be home to sort out her house-sitting suitors – which she sees as a good omen for team Odysseus, and very bad news for the suitors. In the Anabasis, Xenophon takes a sneeze from a soldier as godly confirmation that his army can fight their way back to their own territory – great news for them – while St Augustine notes, somewhat disapprovingly, that people of his era tend to go back to bed if they sneeze while putting on their slippers. But is a sneeze an omen of anything apart from pathogens, pollen or – possibly – air pollution?
“It’s a physical response to get rid of something that’s irritating your body,” says Sheena Cruickshank, an immunologist and professor at the University of Manchester. “Alongside the obvious nasal hairs that a few people choose to trim, all of us have cilia, or microscopic hairs in our noses that can move and sense things of their own accord. And so if anything gets trapped by the cilia, that triggers a reaction to your nerve endings that says: ‘Right, let’s get rid of this.’ And that triggers a sneeze.”
© Composite: Guardian Design; deeepblue/Getty Images


© Aurelien Bergot for The New York Times

Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you'll find some key caveats ahead.
On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company's Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.
Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the AI model agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.


© akinbostanci via Getty Images

© Chang W. Lee/The New York Times
When Rapid7 published its analysis of the Chrysalis backdoor linked to a compromise of Notepad++ update infrastructure, it raised understandable questions from customers and security teams. The investigation showed that attackers did not exploit a flaw in the application itself. Instead, they compromised the hosting infrastructure used to deliver updates, allowing a highly targeted group to selectively distribute a previously undocumented backdoor associated with the Lotus Blossom APT.
Subsequent reporting from outlets including BleepingComputer, The Register, SecurityWeek, and The Hacker News has helped clarify the scope of the incident. What’s clear is that this was a supply chain attack against distribution infrastructure, not source code. The attackers maintained access for months, redirected update traffic selectively, and limited delivery of the Chrysalis payload to specific targets, helping them stay hidden and focused on espionage rather than mass compromise.
This incident highlights how modern supply chain attacks have evolved. Rather than targeting application code, attackers abused shared hosting infrastructure and weaknesses in update verification to quietly deliver malware. The broader takeaway is that supply chain risk now extends well beyond build systems and repositories. Update mechanisms, hosting providers, and distribution paths have become attractive targets, especially when they sit outside an organization’s direct control.
Based on public statements from the Notepad++ maintainer and independent reporting, there is no evidence that the application’s source code or core development process was compromised. The risk stemmed from the update delivery infrastructure, reinforcing that even trusted software can become a delivery mechanism when upstream systems are abused.
Rapid7 was the first to publish attribution linking this activity to Lotus Blossom, a Chinese state-aligned advanced persistent threat (APT) group. Based on our analysis, we assess with moderate confidence that this group is responsible for the Notepad++ infrastructure compromise and the deployment of the Chrysalis backdoor.
Lotus Blossom has been active since at least 2009 and is known for long-running espionage campaigns targeting government, telecommunications, aviation, critical infrastructure, and media organizations, primarily across Southeast Asia and, more recently, Latin America.
The tactics, tooling, and infrastructure used in this campaign - including the abuse of update infrastructure, selective targeting, and the deployment of custom malware - are consistent with the group’s historical tradecraft. As with any attribution, this conclusion is based on observed behaviors and intelligence correlations, not a single, definitive indicator.
Based on what we know today, there are several immediate actions organizations should take:
Check and update Notepad++ installations. Ensure any instances are running the latest version, which includes improved certificate and signature verification.
Review historical telemetry. Even though attacker infrastructure has been taken down, organizations should scan logs and environments going back to October 2025 for indicators of compromise associated with this campaign.
Hunt, don’t just scan. This activity was selective and low‑volume. Absence of alerts does not guarantee absence of compromise.
Use available intelligence. Rapid7 Intelligence Hub customers have access to the Chrysalis campaign intelligence, along with follow‑up indicators provided by partners such as Kaspersky, to support targeted hunting across endpoints and network telemetry.
This incident is a case study in how trust is exploited in modern environments. The attackers didn’t rely on zero days or noisy malware. They abused update workflows, hosting relationships, and assumptions about trusted software. That same approach applies across countless tools and platforms used daily inside enterprise environments.
It also reinforces a broader trend we’ve seen over the last year: attackers are patient, selective, and focused on long‑term access rather than immediate impact. That has implications for detection strategies, incident response planning, and supply chain risk management.
For defenders, this incident reinforces several lessons:
Supply chain security must include distribution and hosting infrastructure, not just source code.
Update mechanisms should enforce strong signature and metadata validation by default.
Shared hosting environments represent an often overlooked risk, especially for widely deployed tools.
Trust in software must be continuously validated, not assumed.
The Chrysalis incident is not just about a single tool or a single campaign. It reflects a broader shift in how advanced threat actors think about access, persistence, and trust. Software supply chains are no longer just a development concern. They are an operational and security concern that extends into hosting providers, update mechanisms, and the assumptions organizations make about what is “safe.”
As attackers continue to favor selective targeting and long‑term access over noisy, large‑scale compromise, defenders need to adapt accordingly. That means moving beyond basic scanning, validating trust continuously, and treating update and distribution infrastructure as part of the attack surface.
If you’d like to hear directly from the researchers behind this discovery, watch the full Chrysalis: Inside the Supply Chain Compromise of Notepad++ webinar, now available on BrightTALK. In this detailed session, Christiaan Beek (Senior Director, Threat Analytics) and Steve Edwards (Director, Threat Intel & Detection Engineering) walk through the full attack chain, from initial compromise to malware behavior, attribution to Lotus Blossom, and what organizations can do right now to assess exposure and strengthen supply chain security. [Watch Now]


© Chang W. Lee/The New York Times

© Chris Wattie for The New York Times
Rapid7 Labs, together with the Rapid7 MDR team, has uncovered a sophisticated campaign attributed to the Chinese APT group Lotus Blossom. Active since 2009, the group is known for its targeted espionage campaigns primarily impacting organizations across Southeast Asia and more recently Central America, focusing on government, telecom, aviation, critical infrastructure, and media sectors.
Our investigation identified a security incident stemming from a sophisticated compromise of the infrastructure hosting Notepad++, which was subsequently used to deliver a previously undocumented custom backdoor that we have dubbed Chrysalis.
Beyond the discovery of the new implant, forensic evidence led us to uncover several custom loaders in the wild. One sample, “ConsoleApplication2.exe”, stands out for its use of Microsoft Warbird, a complex code protection framework, to hide shellcode execution. This blog provides a deep technical analysis of Chrysalis, the Warbird loader, and the broader tactic of mixing straightforward loaders with obscure, undocumented system calls.
Forensic analysis conducted by the MDR team suggests that the initial access vector aligns with publicly disclosed abuse of the Notepad++ distribution infrastructure. While reporting references both plugin replacement and updater-related mechanisms, no definitive artifacts were identified to confirm exploitation of either. The only confirmed behavior is that execution of “notepad++.exe”, and subsequently “GUP.exe”, preceded the execution of a suspicious process, “update.exe”, which was downloaded from 95.179.213.0.

Analysis of “update.exe” shows the file is actually an NSIS installer, a tool commonly used by Chinese APT groups to deliver initial payloads.
The extracted NSIS installer files include BluetoothService.exe, a renamed Bitdefender Submission Wizard used for DLL sideloading, along with the additional components described below.
The installation script creates a new directory, “Bluetooth”, in the “%AppData%” folder, copies the remaining files there, sets the directory’s HIDDEN attribute, and executes BluetoothService.exe.
Shortly after the execution of BluetoothService.exe, which is actually a renamed legitimate Bitdefender Submission Wizard abused for DLL sideloading, a malicious log.dll was placed alongside the executable, causing it to be loaded instead of the legitimate library. Two exported functions from log.dll are called by Bitdefender Submission Wizard: LogInit and LogWrite.
LogInit loads BluetoothService into the memory of the running process.
LogWrite has a more sophisticated goal – to decrypt and execute the shellcode.
The decryption routine implements a custom runtime decryption mechanism used to unpack encrypted data in memory. It derives key material from a previously calculated hash value and applies a stream-cipher-like algorithm rather than standard cryptographic APIs. At a high level, the routine relies on a linear congruential generator with the standard constants 0x19660D and 0x3C6EF35F, combined with several basic data transformation steps, to recover the plaintext payload.
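As a rough sketch only - not the recovered routine - a keystream decryptor built around an LCG with the observed constants could look like the following. Deriving the seed from the previously computed hash and mixing with a plain XOR are assumptions made for illustration.

#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: LCG-driven keystream decryption using the observed
 * constants 0x19660D and 0x3C6EF35F. The seed derivation and the XOR-only
 * mixing are assumptions, not the malware's exact transformation steps. */
void lcg_stream_decrypt(uint8_t *buf, size_t len, uint32_t seed_from_hash)
{
    uint32_t state = seed_from_hash;            /* key material from the prior hash */
    for (size_t i = 0; i < len; i++) {
        state = state * 0x19660D + 0x3C6EF35F;  /* LCG step */
        buf[i] ^= (uint8_t)(state >> 24);       /* use the high byte as keystream */
    }
}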
Once decrypted, the payload replaces the original buffer and all temporary memory is released. Execution is then transferred to this newly decrypted stage, which is treated as executable code and invoked with a predefined set of arguments, including runtime context and resolved API information.

Log.dll implements an API hashing subroutine to resolve required APIs during execution, reducing the likelihood of detection by antivirus and other security solutions.
The hashing algorithm hashes export names using FNV-1a (offset basis 0x811C9DC5 and prime 0x1000193 observed), applies a MurmurHash-style avalanche finalizer (constant 0x85EBCA6B observed), and compares the result to a salted target hash.
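To make that concrete, the sketch below reproduces the two hashing stages with the observed constants. Only the FNV-1a constants and 0x85EBCA6B are confirmed from the sample; the second finalizer constant, the shift amounts, and the way the salt is combined follow standard MurmurHash3 practice and are assumptions.

#include <stdint.h>

/* FNV-1a over the export name (observed offset basis and prime). */
static uint32_t fnv1a(const char *name)
{
    uint32_t h = 0x811C9DC5;
    while (*name) {
        h ^= (uint8_t)*name++;
        h *= 0x1000193;
    }
    return h;
}

/* MurmurHash3-style avalanche finalizer. 0x85EBCA6B is observed;
 * 0xC2B2AE35 and the shifts are assumed from the standard fmix32. */
static uint32_t avalanche(uint32_t h)
{
    h ^= h >> 16;
    h *= 0x85EBCA6B;
    h ^= h >> 13;
    h *= 0xC2B2AE35;
    h ^= h >> 16;
    return h;
}

/* Compare a candidate export name against a salted target hash
 * (how the salt is folded in is an assumption). */
int matches_target(const char *export_name, uint32_t salt, uint32_t target)
{
    return avalanche(fnv1a(export_name) ^ salt) == target;
}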
The shellcode, once decrypted by log.dll, is a custom, feature-rich backdoor we've named “Chrysalis”. Its wide array of capabilities indicates it is a sophisticated, long-term tool, not a simple throwaway utility. It uses legitimate binaries to sideload a crafted DLL with a generic name, which makes simple filename-based detection unreliable. It relies on custom API hashing in both the loader and the main module, each with its own resolution logic. This is paired with layered obfuscation and a fairly structured approach to C2 communication. Overall, the sample looks like something that has been actively developed over time, and we’ll be keeping an eye on this family and any future variants that show up.
Once execution passes to the decrypted shellcode from log.dll, the malware begins by decrypting the main module using a simple combination of XOR, addition, and subtraction operations with the hardcoded key gQ2JR&9;. The decryption routine, cleaned up as a small C function, is shown below:

#include <windows.h>

/* 8-byte repeating key; the array deliberately has no NUL terminator. */
static const char XORKey[8] = "gQ2JR&9;";

void decrypt_module(const BYTE *encrypted, BYTE *decrypted,
                    DWORD BufferPosition, DWORD size)
{
    DWORD pos = BufferPosition;
    for (DWORD counter = 0; counter < size; counter++, pos++) {
        BYTE k = (BYTE)XORKey[counter & 7];  /* cycle through the 8-byte key */
        BYTE x = encrypted[pos];
        x = x + k;                           /* add key byte      */
        x = x ^ k;                           /* XOR with key byte */
        x = x - k;                           /* subtract key byte */
        decrypted[pos] = x;
    }
}
The XOR routine is invoked five times in total, suggesting a section layout similar to the PE format. Following decryption, the malware proceeds to yet another round of dynamic IAT resolution, using LoadLibraryA to acquire a handle to Kernel32.dll and GetProcAddress to resolve exports. Once exports are resolved, execution jumps to the main module.
The decrypted module is a reflective PE-like module that executes the MSVC CRT initialization sequence before transferring control to the program’s main entry point. Once in the Main function, the malware dynamically loads DLLs in the following order: oleaut32.dll, advapi32.dll, shlwapi.dll, user32.dll, wininet.dll, ole32.dll and shell32.dll.
The names of the targeted DLLs are constructed at runtime using two separate subroutines. These subroutines implement a custom, position-dependent character obfuscation scheme: each character is transformed using a combination of bit rotations, conditional XOR operations, and index-based arithmetic, ensuring that identical characters encrypt differently depending on their position. The second routine reverses this process at runtime, reconstructing the original plaintext string just before it is used. The purpose of these two functions is not only to conceal strings, but also to intentionally complicate static analysis and hinder signature-based detection.
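For illustration only, the pair of routines below shows the general shape of such a scheme - an index-dependent rotation, a conditional XOR, and index-based arithmetic, with an exact inverse applied at runtime. The specific transforms and constants in the sample differ; this is not the recovered algorithm.

#include <stdint.h>
#include <stddef.h>

static uint8_t rol8(uint8_t v, unsigned r) { r &= 7; return (uint8_t)((v << r) | (v >> (8 - r))); }
static uint8_t ror8(uint8_t v, unsigned r) { r &= 7; return (uint8_t)((v >> r) | (v << (8 - r))); }

/* Encode: identical characters produce different output bytes at
 * different offsets because every step depends on the index i. */
void obfuscate(uint8_t *s, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t c = rol8(s[i], (unsigned)(i % 7) + 1);  /* index-based rotation */
        if (i & 1) c ^= 0x5A;                           /* conditional XOR (constant assumed) */
        s[i] = (uint8_t)(c + (uint8_t)i);               /* index-based arithmetic */
    }
}

/* Decode: the exact inverse, run just before the string is used. */
void deobfuscate(uint8_t *s, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t c = (uint8_t)(s[i] - (uint8_t)i);
        if (i & 1) c ^= 0x5A;
        s[i] = ror8(c, (unsigned)(i % 7) + 1);
    }
}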
After the DLL name is reconstructed, the Main module implements another, more sophisticated API hashing routine.

The first difference between this and the API hashing routine used by the loader is that this subroutine accepts only a single argument: the hash of the target API. To obtain the DLL handle, the malware walks the PEB to reach the InMemoryOrderModuleList, then parses each module’s export table, skipping the main executable, until it resolves the desired API. Instead of relying on common hashing algorithms, the routine employs multi-stage arithmetic mixing with MurmurHash-style finalization constants. API names are processed in 4-byte blocks using multiple rotation and multiplication steps, followed by a final diffusion phase before comparison with the supplied hash. This design significantly complicates static recovery of resolved APIs and reduces the effectiveness of traditional signature-based detection. As a fallback, the resolver supports direct resolution via GetProcAddress if the target hash is not found through the hashing method. The pointer to GetProcAddress is obtained earlier, during the “main module preparation” stage.
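For readers unfamiliar with this resolution technique, the simplified sketch below walks InMemoryOrderModuleList via the documented winternl.h structures on x64 (MSVC); export-table parsing and the custom hash comparison described above are omitted, and this is illustrative code, not the malware's resolver.

#include <windows.h>
#include <winternl.h>
#include <intrin.h>
#include <stdio.h>

/* Enumerate loaded modules by walking the PEB's InMemoryOrderModuleList.
 * The malware pairs this walk with export-table parsing and its custom
 * block-based hash; both steps are left out here for brevity. */
void list_loaded_modules(void)
{
    PPEB peb = (PPEB)__readgsqword(0x60);   /* x64: PEB pointer from GS:[0x60] */
    LIST_ENTRY *head = &peb->Ldr->InMemoryOrderModuleList;

    for (LIST_ENTRY *cur = head->Flink; cur != head; cur = cur->Flink) {
        LDR_DATA_TABLE_ENTRY *mod =
            CONTAINING_RECORD(cur, LDR_DATA_TABLE_ENTRY, InMemoryOrderLinks);
        /* The first entry is typically the main executable, which the
         * malware's resolver skips before parsing export tables. */
        wprintf(L"%.*s at %p\n",
                (int)(mod->FullDllName.Length / sizeof(WCHAR)),
                mod->FullDllName.Buffer, mod->DllBase);
    }
}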
The next step in the malware’s execution is to decrypt its configuration. The encrypted configuration is stored in the BluetoothService file at offset 0x30808 with a size of 0x980 bytes, and is decrypted with RC4 using the key qwhvb^435h&*7. Decryption revealed the C2 details discussed below.
The URL structure of the C2 is interesting, especially the section /a/chat/s/{GUID}, which appears to be identical to the format used by DeepSeek API chat endpoints. It looks like the actor is mimicking this traffic to stay under the radar.
The decrypted configuration doesn’t give much useful information besides the C2: the module name is too generic and the user agent belongs to the Google Chrome browser. The URL resolves to 61.4.102.97, an IP address based in Malaysia. At the time of writing, no other file has been observed communicating with this IP and URL.
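For reference, the configuration blob can be recovered with a plain RC4 routine like the generic implementation below (textbook RC4, not code lifted from the sample); the observed key and blob parameters are noted in the usage comment.

#include <stdint.h>
#include <stddef.h>

/* Generic RC4: key scheduling (KSA) followed by the keystream XOR (PRGA). */
void rc4(const uint8_t *key, size_t keylen, uint8_t *data, size_t len)
{
    uint8_t S[256];
    for (int i = 0; i < 256; i++) S[i] = (uint8_t)i;

    for (int i = 0, j = 0; i < 256; i++) {            /* KSA */
        j = (j + S[i] + key[i % keylen]) & 0xFF;
        uint8_t t = S[i]; S[i] = S[j]; S[j] = t;
    }
    for (size_t n = 0, i = 0, j = 0; n < len; n++) {  /* PRGA */
        i = (i + 1) & 0xFF;
        j = (j + S[i]) & 0xFF;
        uint8_t t = S[i]; S[i] = S[j]; S[j] = t;
        data[n] ^= S[(S[i] + S[j]) & 0xFF];
    }
}

/* Usage: read 0x980 bytes at offset 0x30808 of BluetoothService, then
 * rc4((const uint8_t *)"qwhvb^435h&*7", 13, blob, 0x980); */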
To determine its next course of action, the malware checks the command-line arguments shown in the table below and chooses one of four potential paths. If the number of command-line arguments is greater than two, the process exits. If there is no additional argument, persistence is set up, primarily via service creation, with the registry as a fallback mechanism. The supported arguments are summarized below:
| Argument | Mode | Action |
|---|---|---|
| (None) | Installation | Installs persistence (Service or Registry) pointing to binary with -i flag, then terminates. |
| -i | Launcher | Spawns a new instance of itself with the -k flag via ShellExecuteA, then terminates. |
| -k | Payload | Skips installation checks and executes the main malicious logic (C2 & Shellcode). |
With the expected arguments present, the malware proceeds to its primary functionality: gathering information about the infected asset and initiating communication with the C2.
A mutex Global\\Jdhfv_1.0.1 is registered to enforce single-instance execution on the host. If it already exists, the malware terminates. If the check is clear, information gathering begins by querying the following: current time, installed AVs, OS version, user name, and computer name. Next, the computer name, user name, OS version, and the string 1.01 are concatenated, and the result is hashed using FNV-1a. This value is later turned into its decimal ASCII representation and most likely used as a unique identifier of the infected host.
The final buffer uses a dot as a delimiter and follows this pattern:
<UniqueID>.<ComputerName>.<UserName>.<OSVersion>.<127.0.0.1>.<AVs>.<DateAndTime>
The last piece of information, the string 4Q, is added to the beginning of the buffer. The buffer is then RC4-encrypted with the key vAuig34%^325hGV.
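As a hedged sketch of how the fingerprint and beacon buffer fit together (a 32-bit FNV-1a is assumed, since the report does not state the hash width, and the field values below are placeholders), the construction looks roughly like this before the RC4 step with key vAuig34%^325hGV:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* 32-bit FNV-1a (width assumed). */
static uint32_t fnv1a32(const char *s)
{
    uint32_t h = 0x811C9DC5;
    while (*s) { h ^= (uint8_t)*s++; h *= 0x1000193; }
    return h;
}

int main(void)
{
    /* Placeholder host data standing in for the queried values. */
    const char *computer = "DESKTOP-EXAMPLE", *user = "alice",
               *os = "10.0.19045", *avs = "Defender",
               *dt = "2025-11-01 10:00:00";
    char seed[512], beacon[1024];

    /* ComputerName + UserName + OSVersion + "1.01" -> FNV-1a -> decimal ID. */
    snprintf(seed, sizeof seed, "%s%s%s1.01", computer, user, os);
    uint32_t id = fnv1a32(seed);

    /* "4Q" prefix (exact concatenation assumed), then the dot-delimited
     * fields in the order described above. RC4 would be applied next. */
    snprintf(beacon, sizeof beacon, "4Q%" PRIu32 ".%s.%s.%s.127.0.0.1.%s.%s",
             id, computer, user, os, avs, dt);
    puts(beacon);
    return 0;
}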
Following data encryption, the malware establishes an internet connection using the previously mentioned user agent and the C2 api.skycloudcenter.com over port 443. Data is transferred via HttpSendRequestA using the POST method. The response from the server is read into a temporary buffer, which is later decrypted using the same key, vAuig34%^325hGV.
Note: The C2 server was already offline during the initial analysis, preventing recovery of any network data. As a result, and due to the complexity of the malware, parts of the following analysis may contain minor inaccuracies.
The response from the C2 undergoes multiple checks before further processing. First, the HTTP response code is compared against the hardcoded value 200 (0xC8), indicating a successful request, followed by a validation of the associated WinInet handle to ensure no error occurred. The malware then verifies the integrity of the received payload, and execution proceeds only if at least one valid structure is detected. Next, the malware inspects the response data for a small tag that determines what to do next. The tag is used as the condition of a switch statement with 16 possible cases. The default case simply sets a flag to TRUE, which causes execution to jump out of the switch entirely. The other cases include the following options:
| Char representation | Hex representation | Purpose |
|---|---|---|
| 4T | 0x3454 | Spawn interactive shell |
| 4U | 0x3455 | Send ‘OK’ to C2 |
| 4V | 0x3456 | Create process |
| 4W | 0x3457 | Write file to disk |
| 4X | 0x3458 | Write chunk to open file |
| 4Y | 0x3459 | Read & send data |
| 4Z | 0x345A | Break from switch |
| 4\\ | 0x345C | Uninstall / Clean up |
| 4] | 0x345D | Sleep |
| 4_ | 0x345F | Get info about logical drives |
| 4` | 0x3460 | Enumerate files information |
| 4a | 0x3461 | Delete file |
| 4b | 0x3462 | Create directory |
| 4c | 0x3463 | Get file from C2 |
| 4d | 0x3464 | Send file to C2 |
4T - The malware implements a fully interactive cmd.exe reverse shell using redirected pipes. Incoming commands from the C2 are converted from UTF‑8 to the system OEM code page before being written to the shell’s standard input, while a dedicated thread continuously reads shell output, converts it from the OEM code page (obtained via the GetOEMCP API) to UTF‑8, and forwards the result back to the C2.
4V - This option allows remote process execution by invoking CreateProcessW on a C2-supplied command line and relaying execution status back to the C2.
4W - This option implements a remote file write capability, parsing a structured response containing a destination path and file contents, converting encodings as necessary, writing the data to disk, and returning a formatted status message to the command-and-control server.
4X - Similar to the previous switch, it supports a remote file-write capability, allowing the C2 to drop arbitrary files on the victim system by supplying a UTF-8 filename and associated data blob.
4Y - This switch implements a remote file-read capability. It opens the specified file, retrieves its size, reads the entire contents into memory, and transmits the data back to the C2.
4\\ - The option implements a full self-removal mechanism. It deletes auxiliary payload files, removes persistence artifacts from both the Windows Service registry hive and the Run key, generates and executes a temporary batch file u.bat to delete the running executable after termination, and finally removes the batch script itself.
4_ - Here the malware enumerates information about logical drives using the GetLogicalDriveStringsA and GetDriveTypeA APIs and sends the information back to the C2.
4` - This switch option shares similarities with the previously analyzed data exfiltration function, 4Y. However, its primary purpose differs. Instead of transmitting preexisting data, it enumerates files within a specified directory, collects per-file metadata (timestamps, size, and filename), serializes the results into a custom buffer format, and sends the aggregated listing to the C2.
4a - 4b - 4c - 4d - In the last four cases, the malware implements a custom file transfer protocol over its C2 channel. Commands 4a and 4b act as control messages used to initialize file download and upload operations respectively, including file paths, offsets, and size validation. Once initialized, the actual data transfer occurs in a chunked fashion using commands 4c (download) and 4d (upload). Each chunk is wrapped in a fixed-size 40-byte response structure, validated for a successful HTTP status and correct structure count before processing. Transfers continue until the C2 signals completion via a non-zero termination flag, at which point file handles and buffers are released.
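To summarize the dispatch structure described above, the sketch below shows how a two-character tag maps to the hex values in the table and drives a switch; how the tag is actually parsed out of the decrypted 40-byte response structures is an assumption, and the handlers are stubbed out.

#include <stdint.h>
#include <stddef.h>

typedef struct { const uint8_t *data; size_t len; } c2_response;  /* illustrative */

static int dispatch(const c2_response *r)
{
    if (r->len < 2) return 0;
    /* '4T' -> 0x3454, '4U' -> 0x3455, ... (byte order assumed). */
    uint16_t tag = (uint16_t)((r->data[0] << 8) | r->data[1]);

    switch (tag) {
    case 0x3454: /* 4T: spawn interactive cmd.exe shell */              break;
    case 0x3455: /* 4U: send 'OK' to the C2 */                          break;
    case 0x3456: /* 4V: CreateProcessW on a C2-supplied command line */ break;
    case 0x3457: /* 4W: write file to disk */                           break;
    case 0x345C: /* 4\ : uninstall / clean up */                        break;
    /* ... remaining handlers from the table elided ... */
    default:     /* unknown tag: set the flag and leave the switch */   return 0;
    }
    return 1;
}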
During the initial forensic analysis of the affected asset, Rapid7’s MDR team observed execution of the following command:
C:\ProgramData\USOShared\svchost.exe -nostdlib -run C:\ProgramData\USOShared\conf.c
The folder “USOShared” retrieved from the infected asset didn’t contain svchost.exe, but it did contain “libtcc.dll” and “conf.c”. The hash of the binary didn’t match any known legitimate version, but the command-line arguments and the associated “libtcc.dll” suggested that svchost.exe is in fact a renamed Tiny C Compiler. To confirm this, we replicated the attacker’s steps and successfully loaded the shellcode from “conf.c” into the memory of “tcc.exe”, confirming our hypothesis.
The C source file contains a fixed-size (836-byte) char buffer of shellcode bytes, which is later cast to a function pointer and invoked. The shellcode is consistent with the 32-bit version of Metasploit’s block_api.
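A hedged reconstruction of the general shape of such a conf.c is shown below; the buffer contents are placeholders rather than the real shellcode, and the pattern works under "tcc -run" because the compiled image is mapped in memory with execute permission (a conventionally built binary would first need VirtualAlloc/VirtualProtect).

/* Sketch of the conf.c loader pattern: a byte buffer cast to a function
 * pointer and invoked. The real sample holds 836 bytes of Metasploit
 * block_api shellcode; the bytes below are placeholders. */
unsigned char code[836] = {
    0x00, /* ... placeholder ... */
};

int main(void)
{
    ((void (*)(void))code)();   /* cast the buffer to a function pointer and call it */
    return 0;
}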
The shellcode loads Wininet.dll using LoadLibraryA, resolves Internet-related APIs such as InternetConnectA and HttpSendRequestA, and downloads a file from api.wiresguard.com/users/admin. The file is read into a newly allocated buffer, and execution is then transferred to the start of the 2000-byte second-stage shellcode.
This stub is responsible for decrypting the next payload layer and transferring execution to it. It uses a rolling XOR-based decryption loop before jumping directly to the decrypted code.
A quick look into the decrypted buffer revealed an interesting blob with a repeated string CRAZY, hinting at an additional XORed layer, later confirmed by a quick test.
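The repeated CRAZY string is the classic tell of a repeating-key XOR: wherever the underlying plaintext is a run of zero bytes, the key itself shows through (0 XOR k = k). A minimal test of that hypothesis looks like the following; the key alignment is an assumption.

#include <stdint.h>
#include <stddef.h>

/* Repeating-key XOR with the observed key "CRAZY". Applying it a second
 * time undoes it, which is how the extra layer was confirmed. */
void xor_crazy(uint8_t *buf, size_t len)
{
    static const char key[] = "CRAZY";            /* 5-byte repeating key */
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (uint8_t)key[i % (sizeof key - 1)];
}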
Parsing of the decrypted configuration data confirms that the retrieved shellcode is a Cobalt Strike (CS) HTTPS beacon with the http-get URL api.wiresguard.com/update/v1 and the http-post URL api.wiresguard.com/api/FileUpload/submit.
Analysis of the initial evidence revealed a consistent execution chain: a loader embedding Metasploit block_api shellcode that downloads a Cobalt Strike beacon. The unique decryption stub and configuration XOR key CRAZY allowed us to pivot into an external hunt, uncovering additional loader variants.
In the last year, four similar files were uploaded to public repositories.
Loader 1:
SHA-256: 0a9b8df968df41920b6ff07785cbfebe8bda29e6b512c94a3b2a83d10014d2fd
Shellcode SHA-256: 4c2ea8193f4a5db63b897a2d3ce127cc5d89687f380b97a1d91e0c8db542e4f8
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4472.114 Safari/537.36
URL hosting CS beacon: http://59[.]110.7.32:8880/uffhxpSy
CS http-get URL: http://59[.]110.7.32:8880/api/getBasicInfo/v1
CS http-post URL: http://59[.]110.7.32:8880/api/Metadata/submit

Loader 2:
SHA-256: e7cd605568c38bd6e0aba31045e1633205d0598c607a855e2e1bca4cca1c6eda
Shellcode SHA-256: 078a9e5c6c787e5532a7e728720cbafee9021bfec4a30e3c2be110748d7c43c5
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4472.114 Safari/537.36
URL hosting CS beacon: http://124[.]222.137.114:9999/3yZR31VK
CS http-get URL: http://124[.]222.137.114:9999/api/updateStatus/v1
CS http-post URL: http://124[.]222.137.114:9999/api/Info/submit

Loader 3 (ConsoleApplication2.exe):
SHA-256: b4169a831292e245ebdffedd5820584d73b129411546e7d3eccf4663d5fc5be3
Shellcode SHA-256: 7add554a98d3a99b319f2127688356c1283ed073a084805f14e33b4f6a6126fd
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36
URL hosting CS beacon: https://api[.]wiresguard[.]com/users/system
CS http-get URL: https://api[.]wiresguard[.]com/api/getInfo/v1
CS http-post URL: https://api[.]wiresguard[.]com/api/Info/submit

Loader 4 (s047t5g.exe):
SHA-256: fcc2765305bcd213b7558025b2039df2265c3e0b6401e4833123c461df2de51a
Shellcode SHA-256: 7add554a98d3a99b319f2127688356c1283ed073a084805f14e33b4f6a6126fd
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36
URL hosting CS beacon: https://api[.]wiresguard[.]com/users/system
CS http-get URL: https://api[.]wiresguard[.]com/api/getInfo/v1
CS http-post URL: https://api[.]wiresguard[.]com/api/Info/submit
Of all the loaders we analyzed, Loader 3 piqued our interest for three reasons: its shellcode encryption technique, its execution method, and a C2 almost identical to the beacon found on the infected asset. All the previous samples used a fairly common technique to execute their shellcode - decrypt the embedded shellcode in user space, change the memory region's protection to an executable state, and invoke the decrypted code via CreateThread / CreateRemoteThread; Loader 3 (original name “ConsoleApplication2.exe”) breaks from this approach.
At first glance, the logic of the sample is straightforward: load the DLL clipc.dll, overwrite its first 0x490 bytes, change the protection to PAGE_EXECUTE_READ (0x20), and then invoke NtQuerySystemInformation. Two interesting notes to highlight here: the bytes copied into the memory region of clipc.dll are not valid shellcode, and NtQuerySystemInformation is documented to “retrieve the specified system information”, not to execute code.
Looking into the copied data reveals two “magic numbers”, DEADBEEF and CAFEAFE, but nothing else. However, the execution of the shellcode is somehow successful, so what’s going on?
According to the official documentation, the first parameter of NtQuerySystemInformation is of type SYSTEM_INFORMATION_CLASS, which specifies the category of system information to be queried. During static analysis in IDA Pro, this parameter was initially identified as SystemExtendedProcessInformation|0x80, but looking for this value in MSDN and other public references didn’t provide any explanation of how execution was achieved. Searching for the original value passed to the function (0xB9), however, uncovered something interesting: a blog post by DownWithUp covering Microsoft Warbird, which can be described as an internal code protection and obfuscation framework. These resources confirm that IDA misinterpreted the argument, which should be SystemCodeFlowTransition, the information class required to invoke Warbird functionality. DownWithUp’s blog post also enumerates the possible Warbird operations.
Referring to the snippet we saw from “ConsoleApplication2.exe”, the operation is equal to WbHeapExecuteCall, which answers how the shellcode gained execution. Thanks to the work of other researchers, we also know that this technique only works if the code resides within the memory of a Microsoft-signed binary, which explains why clipc.dll was used. The blog post from cirosec also links to their PoC of this technique, which is nearly an exact replica of “ConsoleApplication2.exe”, hinting that the author of “ConsoleApplication2.exe” simply copied it and modified it to execute Metasploit block_api shellcode instead of the benign calc payload from the PoC. Comparing the Cobalt Strike beacon configurations delivered via “conf.c” and “ConsoleApplication2.exe” revealed shared traits between the two, most notably the domain, public key, and process injection technique.
Attribution is primarily based on strong similarities between the initial loader observed in this intrusion and previously published Symantec research, particularly the use of a renamed “Bitdefender Submission Wizard” to side-load “log.dll” for decrypting and executing an additional payload.
In addition, the similarity between the execution chain of “conf.c” retrieved from the infected asset and that of the other loaders we found, supported by the identical public key extracted from the CS beacons delivered through “conf.c” and “ConsoleApplication2.exe”, suggests with moderate confidence that the threat actor behind this campaign is likely Lotus Blossom.
The discovery of the Chrysalis backdoor and the Warbird loader highlights an evolution in Lotus Blossom's capabilities. While the group continues to rely on proven techniques like DLL sideloading and service persistence, their multi-layered shellcode loader and integration of undocumented system call behavior (NtQuerySystemInformation with the SystemCodeFlowTransition class) mark a clear shift toward more resilient and stealthy tradecraft.
What stands out is the mix of tools: the deployment of custom malware (Chrysalis) alongside commodity frameworks like Metasploit and Cobalt Strike, together with the rapid adaptation of public research (specifically the abuse of Microsoft Warbird). This demonstrates that Lotus Blossom is actively updating their playbook to stay ahead of modern detection.
InsightIDR and Managed Detection and Response customers have existing detection coverage through Rapid7's expansive library of detection rules. Suspicious Process - Child of Notepad++ Updater (gup.exe) and Suspicious Process - Chrysalis Backdoor are two examples of deployed detections that will alert on behavior related to Chrysalis. Rapid7 will also continue to iterate detections as new variants emerge, giving customers continuous protection without manual tuning.
Customers using Rapid7’s Intelligence Hub gain direct access to Chrysalis backdoor, Metasploit loaders and Cobalt Strike IOCs, including any future indicators as they are identified.
| Filename | SHA-256 |
|---|---|
| update.exe | a511be5164dc1122fb5a7daa3eef9467e43d8458425b15a640235796006590c9 |
| [NSIS.nsi] | 8ea8b83645fba6e23d48075a0d3fc73ad2ba515b4536710cda4f1f232718f53e |
| BluetoothService.exe | 2da00de67720f5f13b17e9d985fe70f10f153da60c9ab1086fe58f069a156924 |
| BluetoothService | 77bfea78def679aa1117f569a35e8fd1542df21f7e00e27f192c907e61d63a2e |
| log.dll | 3bdc4c0637591533f1d4198a72a33426c01f69bd2e15ceee547866f65e26b7ad |
| u.bat | 9276594e73cda1c69b7d265b3f08dc8fa84bf2d6599086b9acc0bb3745146600 |
| conf.c | f4d829739f2d6ba7e3ede83dad428a0ced1a703ec582fc73a4eee3df3704629a |
| libtcc.dll | 4a52570eeaf9d27722377865df312e295a7a23c3b6eb991944c2ecd707cc9906 |
| admin | 831e1ea13a1bd405f5bda2b9d8f2265f7b1db6c668dd2165ccc8a9c4c15ea7dd |
| loader1 | 0a9b8df968df41920b6ff07785cbfebe8bda29e6b512c94a3b2a83d10014d2fd |
| uffhxpSy | 4c2ea8193f4a5db63b897a2d3ce127cc5d89687f380b97a1d91e0c8db542e4f8 |
| loader2 | e7cd605568c38bd6e0aba31045e1633205d0598c607a855e2e1bca4cca1c6eda |
| 3yzr31vk | 078a9e5c6c787e5532a7e728720cbafee9021bfec4a30e3c2be110748d7c43c5 |
| ConsoleApplication2.exe | b4169a831292e245ebdffedd5820584d73b129411546e7d3eccf4663d5fc5be3 |
| system | 7add554a98d3a99b319f2127688356c1283ed073a084805f14e33b4f6a6126fd |
| s047t5g.exe | fcc2765305bcd213b7558025b2039df2265c3e0b6401e4833123c461df2de51a |
| Network indicator |
|---|
| 95.179.213.0 |
| api[.]skycloudcenter[.]com |
| api[.]wiresguard[.]com |
| 61.4.102.97 |
| 59.110.7.32 |
| 124.222.137.114 |
| ATT&CK ID | Name |
|---|---|
| T1204.002 | User Execution: Malicious File |
| T1036 | Masquerading |
| T1027 | Obfuscated Files or Information |
| T1027.007 | Obfuscated Files or Information: Dynamic API Resolution |
| T1140 | Deobfuscate/Decode Files or Information |
| T1574.002 | DLL Side-Loading |
| T1106 | Native API |
| T1055 | Process Injection |
| T1620 | Reflective Code Loading |
| T1059.003 | Command and Scripting Interpreter: Windows Command Shell |
| T1083 | File and Directory Discovery |
| T1005 | Data from Local System |
| T1105 | Ingress Tool Transfer |
| T1041 | Exfiltration Over C2 Channel |
| T1071.001 | Application Layer Protocol: Web Protocols (HTTP/HTTPS) |
| T1573 | Encrypted Channel |
| T1547.001 | Boot or Logon Autostart Execution: Registry Run Keys |
| T1543.003 | Create or Modify System Process: Windows Service |
| T1480.002 | Execution Guardrails: Mutual Exclusion |
| T1070.004 | Indicator Removal on Host: File Deletion |
*IOCs contributed by @AIexGP on X.
Rapid7 recommends updating to the latest version of Notepad++. In addition, the IOCs provided above and within Rapid7 Intelligence Hub can be used to hunt within your logs during the timeframe of June through November 2025, as this is when the backdoor activity is known to have taken place.
Catch Inside Chrysalis, Rapid7's webinar led by Christiaan Beek, on-demand via BrightTALK.

DataDome blocked 16M+ bot requests from 3.9M IPs targeting a global sports organization's ticket sales. See how we stopped industrial-scale scalpers.
The post How DataDome Stopped Millions of Ticket Scalping Bots Targeting a Global Sports Organization appeared first on Security Boulevard.

© Melody Baran/University of California-San Diego-Scripps Institution of Oceanography, via Associated Press

© Alamy

© Chang W. Lee/The New York Times
At this point, we've all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information. Despite the prevalence of these stories, though, it's hard to know just how often users are being manipulated. Are these tales of AI harms anecdotal outliers or signs of a frighteningly common problem?
Anthropic took a stab at answering that question this week, releasing a paper studying the potential for what it calls "disempowering patterns" across 1.5 million anonymized real-world conversations with its Claude AI model. While the results show that these kinds of manipulative patterns are relatively rare as a percentage of all AI conversations, they still represent a potentially large problem on an absolute basis.
In the newly published paper "Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage," researchers from Anthropic and the University of Toronto try to quantify the potential for a specific set of "user disempowering" harms by identifying three primary ways that a chatbot can negatively impact a user's thoughts or actions:


© Getty Images

© Tierney L. Cross/The New York Times
In early 2026, Moltbot, a new AI personal assistant, went viral. GitGuardian detected 200+ leaked secrets related to it, including from healthcare and fintech companies. Our contribution to Moltbot: a skill that turns secret scanning into a conversational prompt, letting users ask "is this safe?"
The post Moltbot Personal Assistant Goes Viral—And So Do Your Secrets appeared first on Security Boulevard.

On Tuesday, OpenAI released a free AI-powered workspace for scientists. It's called Prism, and it has drawn immediate skepticism from researchers who fear the tool will accelerate the already overwhelming flood of low-quality papers into scientific journals. The launch coincides with growing alarm among publishers about what many are calling "AI slop" in academic publishing.
To be clear, Prism is a writing and formatting tool, not a system for conducting research itself, though OpenAI's broader pitch blurs that line.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor (a standard used for typesetting documents), allowing researchers to draft papers, generate citations, create diagrams from whiteboard sketches, and collaborate with co-authors in real time. The tool is free for anyone with a ChatGPT account.


© Moor Studio via Getty Images

© Chang W. Lee/The New York Times

It can be hard sometimes to keep up with the deluge of generative AI in Google products. Even if you try to avoid it all, there are some features that still manage to get in your face. Case in point: AI Overviews. This AI-powered search experience has a reputation for getting things wrong, but you may notice some improvements soon. Google says AI Overviews is being upgraded to the latest Gemini 3 models with a more conversational bent.
In just the last year, Google has radically expanded the number of searches on which you get an AI Overview at the top. Today, the chatbot will almost always have an answer for your query, which has relied mostly on models in Google's Gemini 2.5 family. There was nothing wrong with Gemini 2.5 as generative AI models go, but Gemini 3 is a little better by every metric.
There are, of course, multiple versions of Gemini 3, and Google doesn't like to be specific about which ones appear in your searches. What Google does say is that AI Overviews chooses the right model for the job. So if you're searching for something simple for which there are a lot of valid sources, AI Overviews may manifest something like Gemini 3 Flash without running through a ton of reasoning tokens. For a complex "long tail" query, it could step up the thinking or move to Gemini 3 Pro (for paying subscribers).


At first glance, an email address ending in .eu.org looks trustworthy. It feels institutional, maybe even official. Many people implicitly associate it with Europe, nonprofits, or established organizations.
That assumption is wrong more often than you might expect.
Because the domain looks legitimate, it has become attractive to fraudsters.
The post You see an email ending in .eu.org. Must be legit, right? appeared first on Security Boulevard.
