
Winamp to “open up” its source code

16 May 2024 at 18:50

Winamp has announced that on 24 September 2024, the application’s source code will be open to developers worldwide.

Winamp will open up its code for the player used on Windows, enabling the entire community to participate in its development. This is an invitation to global collaboration, where developers worldwide can contribute their expertise, ideas, and passion to help this iconic software evolve.

↫ Winamp press release

Nice, I guess, but twenty years too late to be of any relevance. At least it’ll be great for software preservation.

But what’s up with the odd language used in the press release, and the weirdly specific date that’s months from now? They really seem to want to avoid the term “open source”, which makes me think this is going to be one of those cases where they hope the community will work for them for free without actually using a real open source license. You know, those schemes that always – no exception – fail.

Velociraptor 0.7.2 Release: Digging Deeper than Ever with EWF Support, Dynamic DNS and More

By: Rapid7
30 April 2024 at 10:29

By Dr. Mike Cohen and Carlos Canto

Rapid7 is very excited to announce that version 0.7.2 of Velociraptor is now fully available for download.

In this post we’ll discuss some of the interesting new features.

EWF Support

Velociraptor has offered the ability to analyze dead disk images for some time. Although we don’t need to analyze disk images very often, it comes up occasionally.

Previously, Velociraptor only supported analysis of DD images (AKA “raw images”). However, most people acquire images with standard acquisition software, which typically compresses them using the common EWF format.

In this 0.7.2 release, Velociraptor supports EWF (AKA E01) format using the ewf accessor. This allows Velociraptor to analyze E01 image sets.

To analyze dead disk images use the following steps:

  1. Create a remapping configuration that maps the disk accessors into the E01 image. This automatically diverts VQL functions that look at the filesystem into the image instead of using the host’s filesystem. In this release you can just point the --add_windows_disk option to the first disk of the EWF disk set (the other parts are expected to be in the same directory and will be automatically loaded).
    The following command creates a remapping file by recognizing the Windows partition in the disk image.

$ velociraptor-v0.72-rc1-linux-amd64 deaddisk \
    --add_windows_disk=/tmp/e01/image.E01 /tmp/remapping.yaml -v

  2. Next we launch a client with the remapping file. This causes any VQL queries that access the filesystem to read from the image instead of the host. Other than that, the client looks like a regular client and will connect to the Velociraptor server just like any other client. To ensure that this client is unique, you can override the writeback location (where the client id is stored) to a new file.

$ velociraptor-v0.72-rc1-linux-amd64 --remap /tmp/remapping.yaml \
    --config ~/client.config.yaml client -v \
    --config.client-writeback-linux=/tmp/remapping.writeback.yaml


Allow remapping clients to use SSH accessor

Sometimes we can’t deploy the Velociraptor client on a remote system. (For example, it might be an edge device like an embedded Linux system or it may not be directly supported by Velociraptor.)

In version 0.7.1, Velociraptor introduced the ssh accessor which allows VQL queries to use a remote ssh connection to access remote files.

This release adds the ability to apply remapping, in a similar way to the dead disk image method above, to run a Virtual Client which connects to the remote system via SSH and emulates filesystem access over the SFTP protocol.

To use this feature you can write a remapping file that maps the ssh accessor instead of the file and auto accessors:

remappings:
- type: permissions
  permissions:
  - COLLECT_CLIENT
  - FILESYSTEM_READ
  - READ_RESULTS
  - MACHINE_STATE
- type: impersonation
  os: linux
  hostname: RemoteSSH
- type: mount
  scope: |
    LET SSH_CONFIG <= dict(hostname='localhost:22',
       username='test',
       private_key=read_file(filename='/home/test/.ssh/id_rsa'))
  from:
    accessor: ssh
  "on":
    accessor: auto
    path_type: linux
- type: mount
  scope: |
    LET SSH_CONFIG <= dict(hostname='localhost:22',
       username='test',
       private_key=read_file(filename='/home/test/.ssh/id_rsa'))
  from:
    accessor: ssh
  "on":
    accessor: file
    path_type: linux

Now you can start a client with this remapping file to virtualize access to the remote system via SSH.

$ velociraptor-v0.72-rc1-linux-amd64 --remap /tmp/remap_ssh.yaml \
    --config client.config.yaml client -v \
    --config.client-writeback-linux=/tmp/remapping.writeback_ssh.yaml \
    --config.client-local-buffer-disk-size=0


GUI Changes

The GUI has been significantly improved in this release.

Undo/Redo for notebook cells

Velociraptor offers an easy way to experiment and explore data with VQL queries in the notebook interface. Naturally, exploring the data requires going back and forth between different VQL queries.

In this release, Velociraptor keeps several versions of each VQL cell (by default 5), so as users explore different queries they can easily undo and redo them. This makes exploring data much quicker, since you can go back to a previous version instantly.


Hunt view GUI is now paged

Previously, hunts were presented in a table with limited size. In this release, the hunt table is paged and searchable/sortable. This brings the hunts table into line with the other tables in the interface and allows an unlimited number of hunts to be viewable in the system.


Secret Management

Many Velociraptor plugins require secrets to operate. For example, the ssh accessor requires a private key or password to log into the remote system. Similarly the s3 or smb accessors require credentials to upload to the remote file servers. Many connections made over the http_client() plugin require authorization – for example an API key to send Slack messages or query remote services like Virus Total.

Previously, plugins that required credentials needed those credentials to be passed as arguments to the plugin. For example, the upload_s3() plugin requires AWS S3 credentials to be passed in as parameters.

This poses a problem for the Velociraptor artifact writer: how do you safely provide credentials to the VQL query in a way that does not expose them to every user of the Velociraptor GUI? If the credentials are passed as artifact parameters, they are visible in the query logs, requests, etc.

This release introduces Secrets as a first class concept within VQL. A Secret is a specific data object (key/value pairs) given a name which is used to configure credentials for certain plugins:

  1. A Secret has a name which we use to refer to it in plugins.
  2. Secrets have a type to ensure their data makes sense to the intended plugin. For example a secret needs certain fields for consumption by the s3 accessor or the http_client() plugin.
  3. Secrets are shared with certain users (or are public). This controls who can use the secret within the GUI.
  4. The GUI is careful to not allow VQL to read the secrets directly. The secrets are used by the VQL plugins internally and are not exposed to VQL users (like notebooks or artifacts).

Let’s work through an example of how Secrets can be managed within Velociraptor. In this example we store credentials for the ssh accessor to allow users to glob() a remote filesystem within the notebook.

First we will select manage server secrets from the welcome page.


Next we will choose the SSH PrivateKey secret type and add a new secret.


This will use the secret template that corresponds to SSH private keys. The acceptable fields are shown in the GUI, along with a validation VQL condition used to ensure the secret is properly populated. We will name the secret DevMachine to remind us that this secret allows access to our development system. Note that the hostname requires both the IP address (or DNS name) and the port.


Next we will share the secret with some GUI users.


We can view the list of users that are able to use the secret within the GUI.


Now we can use the new secret by simply referring to it by name:


Not only is this more secure, it is also more convenient, since we don’t need to remember the details of each secret to be able to use it. For example, the http_client() plugin will fill in the URL, headers, cookies, etc. directly from the secret without us needing to bother with the details.

WARNING: Although secrets are designed to control access to the raw credential by preventing users from directly accessing the secrets' contents, those secrets are still written to disk. This means that GUI users with direct filesystem access can simply read the secrets from the disk.

We recommend not granting untrusted users elevated server permissions like EXECVE or Filesystem Read, as these can be used to bypass the security measures placed on secrets.

Server improvements

Implemented Websocket based communication mechanism

One of the most important differences between Velociraptor and some older remote DFIR frameworks such as GRR is the fact that Velociraptor maintains a constant, low latency connection to the server. This allows Velociraptor clients to respond immediately without needing to wait for polling on the server.

To enhance compatibility with a variety of network configurations (MITM proxies, transparent proxies, etc.), Velociraptor has stuck to simple HTTP-based communication protocols. To keep a constant connection, Velociraptor uses the long-poll method, keeping HTTP POST operations open for a long time.

However as the Internet evolves and newer protocols become commonly used by major sites, the older HTTP based communication method has proven more difficult to use. For example, we found that certain layer 7 load balancers interfere with the long poll method by introducing buffering to the connection. This severely degrades communications between client and server (Velociraptor falls back to a polling method in this case).

On the other hand, modern protocols are more widely used, so we found that modern load balancers and proxies already support standard low latency communications protocols such as Web Sockets.

In the 0.7.2 release, Velociraptor introduces support for websockets as a communications protocol. The websocket protocol is designed for low latency and low overhead continuous communications methods between clients and server (and is already used by most major social media platforms, for example). Therefore, this new method should be better supported by network infrastructure as well as being more efficient.

To use the new websocket protocol, simply set the client’s server URL to use the wss:// scheme, for example:

Client:
  server_urls:
  - wss://velociraptor.example.com:8000/

You can use both https and wss URLs at the same time; Velociraptor will switch from one scheme to the other if one becomes unavailable.

Dynamic DNS providers

Velociraptor has the capability to adjust DNS records by itself (AKA Dynamic DNS). This saves users the hassle of running a dedicated dynamic DNS client such as ddclient.

Traditionally we used Google Domains as our default Dynamic DNS provider, but Google decided to shut down this service abruptly, forcing us to switch to alternative providers.

The 0.7.2 release has now switched to CloudFlare as our default preferred Dynamic DNS provider. We also added noip.com as a second option.

Setting up CloudFlare as your preferred dynamic DNS provider requires the following steps:

  1. Sign into CloudFlare and buy a domain name.
  2. Go to https://dash.cloudflare.com/profile/api-tokens to generate an API token. Select Edit Zone DNS in the API Token templates.

You will need to grant the “Edit” permission on Zone DNS and include the specific zone name you want to manage. The zone name is the domain you purchased, e.g. “example.com”. You will then be able to set the hostname under that domain, e.g. “velociraptor.example.com”.


Using this information you can now create the dyndns configuration:

Frontend:
  ...
  dyn_dns:
    type: cloudflare
    api_token: XXXYYYZZZ
    zone_name: example.com

Make sure the Frontend.Hostname field is set to the correct hostname to update, for example:

Frontend:
  hostname: velociraptor.example.com

This is the hostname that will be updated.
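
Putting the two fragments together, the relevant part of the server configuration might look like this (the token and names are the placeholder values from above):

Frontend:
  hostname: velociraptor.example.com   # the DNS record Velociraptor keeps updated
  dyn_dns:
    type: cloudflare
    api_token: XXXYYYZZZ
    zone_name: example.com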

Enhanced proxy support

Velociraptor is often deployed into complex enterprise networks. Such networks are often locked down with complicated controls (such as MITM inspection proxies or automated proxy configurations) which Velociraptor needs to support.

Velociraptor already supports MITM proxies, but the proxy configuration was previously inflexible: the proxy could be set or unset, but there was no finer-grained control over which proxy to use for different URLs. This made deployment difficult on changing network topologies (such as roaming users).

The 0.7.2 release introduces more flexible proxy configuration. It is now possible to specify which proxy to use for which URL, based on a set of regular expressions:

Client:
  proxy_config:
    http: http://192.168.1.1:3128/
    proxy_url_regexp:
      "^https://www.google.com/": ""
      "^https://.+example.com": "https://proxy.example.com:3128/"

The above configuration means:

  1. By default, connect through http://192.168.1.1:3128/ for all URLs (including https).
  2. Except for www.google.com, which will be connected to directly.
  3. Any URLs in the example.com domain will be forwarded through https://proxy.example.com:3128/.

This proxy configuration can apply to the Client section or the Frontend section to control the server’s configuration.
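
As a rough sketch, the same block can be placed under the Frontend section to control how the server makes outbound connections (addresses carried over from the client example above, purely illustrative):

Frontend:
  proxy_config:
    http: http://192.168.1.1:3128/
    proxy_url_regexp:
      "^https://.+example.com": "https://proxy.example.com:3128/"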

Additionally, Velociraptor now supports a Proxy Auto Configuration (PAC) file. If a PAC file is specified, then the other configuration directives are ignored and all configuration comes from the PAC file. The PAC file can also be read from disk using the file:// URL scheme, or even provided within the configuration file using a data: URL.

Client:
  proxy_config:
    pac: http://www.example.com/wpad.dat

Note that the PAC file must obviously be accessible without a proxy.
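
For reference, a PAC file is just a small JavaScript function named FindProxyForURL. A minimal sketch, with placeholder hostnames that are not taken from the release notes, might look like this:

function FindProxyForURL(url, host) {
  // Route example.com traffic through the proxy, everything else directly.
  if (dnsDomainIs(host, "example.com"))
    return "PROXY proxy.example.com:3128";
  return "DIRECT";
}

If serving the file over HTTP is impractical, the same content could be referenced locally, e.g. pac: file:///etc/velociraptor/wpad.dat (a hypothetical path).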

Other notable features

Other interesting improvements include:

Process memory access on macOS

On macOS we can now use proc_yara() to scan process memory. This should work provided your TCT profile grants the get-task-allow, proc_info-allow and task_for_pid-allow entitlements. For example, the following plist is needed at a minimum:

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
  <dict>
    <key>com.apple.springboard.debugapplications</key>
    <true/>
    <key>get-task-allow</key>
    <true/>
    <key>proc_info-allow</key>
    <true/>
    <key>task_for_pid-allow</key>
    <true/>
  </dict>
</plist>
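
Once the entitlements are in place, a query along the following lines should work. This is only a rough sketch: the process name and the inline rule are placeholders, and it assumes proc_yara() takes the usual pid and rules arguments alongside pslist().

LET MyRules <= 'rule Example { strings: $a = "velociraptor" condition: $a }'

SELECT * FROM foreach(
  row={ SELECT Pid, Name FROM pslist() WHERE Name =~ 'Finder' },
  query={ SELECT * FROM proc_yara(pid=Pid, rules=MyRules) })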

Multipart uploaders to http_client()

Sometimes servers require uploaded files to be encoded using the multipart/form-data method. Previously it was possible to upload files with the http_client() plugin by constructing the relevant request using pure VQL string-building operations.

However this approach is limited by available memory and is not suitable for larger files. It is also non-intuitive for users.

This release adds the files parameter to the http_client() plugin. This simplifies uploading multiple files and automatically streams those files without memory buffering - allowing very large files to be uploaded this way.

For example:

SELECT *
FROM http_client(
    url='http://localhost:8002/test/',
    method='POST',
    files=dict(file='file.txt', key='file', path='/etc/passwd', accessor="file"))

Here the files can be an array of dicts with the following fields:

  • file: The name of the file that will be stored on the server
  • key: The name of the form element that will receive the file
  • path: This is an OSPath object that we open and stream into the form.
  • accessor: Any accessor required for the path.

Yara plugin can now accept compiled rules

The yara() plugin was upgraded to Yara version 4.5.0 and now also supports compiled yara rules. You can compile yara rules with the yarac compiler to produce a binary rule file, then simply pass the compiled binary data to the yara() plugin’s rules parameter.
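
As a rough sketch (file paths are placeholders), the rules could be compiled on the command line and then passed to the plugin via read_file():

$ yarac rules.yar /tmp/rules.yarc

SELECT * FROM yara(
    rules=read_file(filename='/tmp/rules.yarc'),
    files='/usr/bin/ssh')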

WARNING: We do not recommend using compiled yara rules because of their practical limitations:

  1. The compiled rules are not portable and must be used on exactly the same version of the yara library as the compiler that created them (currently 4.5.0).
  2. Compiled yara rules are much larger than the text rules.

Compiled yara rules offer no benefit over text-based rules, except perhaps being harder to decompile. That is the main reason to use them: to try to hide the rules (e.g. for commercial reasons).

Conclusions

There are many more new features and bug fixes in the 0.7.2 release. If you’re interested in any of these new features, why not take Velociraptor for a spin by downloading it from our release page? It’s available for free on GitHub under an open-source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing velociraptor-discuss@googlegroups.com. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting our web and social media channels.

Using Legitimate GitHub URLs for Malware

22 April 2024 at 11:26

Interesting social-engineering attack vector:

McAfee released a report on a new LUA malware loader distributed through what appeared to be a legitimate Microsoft GitHub repository for the “C++ Library Manager for Windows, Linux, and MacOS,” known as vcpkg.

The attacker is exploiting a property of GitHub: comments to a particular repo can contain files, and those files will be associated with the project in the URL.

What this means is that someone can upload malware and “attach” it to a legitimate and trusted project.

As the file’s URL contains the name of the repository the comment was created in, and as almost every software company uses GitHub, this flaw can allow threat actors to develop extraordinarily crafty and trustworthy lures.

For example, a threat actor could upload a malware executable in NVIDIA’s driver installer repo that pretends to be a new driver fixing issues in a popular game. Or a threat actor could upload a file in a comment to the Google Chromium source code and pretend it’s a new test version of the web browser.

These URLs would also appear to belong to the company’s repositories, making them far more trustworthy.
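
For context, files attached to issue or pull-request comments are served from the repository’s own namespace, which is why such lures appear to come from the trusted project. The URLs have roughly this shape (identifiers here are illustrative):

https://github.com/microsoft/vcpkg/files/<id>/<filename>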

Other Attempts to Take Over Open Source Projects

18 April 2024 at 07:06

After the XZ Utils discovery, people have been examining other open-source projects. Surprising no one, the incident is not unique:

The OpenJS Foundation Cross Project Council received a suspicious series of emails with similar messages, bearing different names and overlapping GitHub-associated emails. These emails implored OpenJS to take action to update one of its popular JavaScript projects to “address any critical vulnerabilities,” yet cited no specifics. The email author(s) wanted OpenJS to designate them as a new maintainer of the project despite having little prior involvement. This approach bears strong resemblance to the manner in which “Jia Tan” positioned themselves in the XZ/liblzma backdoor.

[…]

The OpenJS team also recognized a similar suspicious pattern in two other popular JavaScript projects not hosted by its Foundation, and immediately flagged the potential security concerns to respective OpenJS leaders, and the Cybersecurity and Infrastructure Security Agency (CISA) within the United States Department of Homeland Security (DHS).

The article includes a list of suspicious patterns, and another list of security best practices.

Backdoor in XZ Utils That Almost Happened

11 April 2024 at 07:01

Last week, the Internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global Internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it.

Programmers dislike doing extra work. If they can find already-written code that does what they want, they’re going to use it rather than recreate the functionality. These code repositories, called libraries, are hosted on sites like GitHub. There are libraries for everything: displaying objects in 3D, spell-checking, performing complex mathematics, managing an e-commerce shopping cart, moving files around the Internet—everything. Libraries are essential to modern programming; they’re the building blocks of complex software. The modularity they provide makes software projects tractable. Everything you use contains dozens of these libraries: some commercial, some open source and freely available. They are essential to the functionality of the finished software. And to its security.

You’ve likely never heard of an open-source library called XZ Utils, but it’s on hundreds of millions of computers. It’s probably on yours. It’s certainly in whatever corporate or organizational network you use. It’s a freely available library that does data compression. It’s important, in the same way that hundreds of other similar obscure libraries are important.

Many open-source libraries, like XZ Utils, are maintained by volunteers. In the case of XZ Utils, it’s one person, named Lasse Collin. He has been in charge of XZ Utils since he wrote it in 2009. And, at least in 2022, he’s had some “longterm mental health issues.” (To be clear, he is not to blame in this story. This is a systems problem.)

Beginning in at least 2021, Collin was personally targeted. We don’t know by whom, but we have account names: Jia Tan, Jigar Kumar, Dennis Ens. They’re not real names. They pressured Collin to transfer control over XZ Utils. In early 2023, they succeeded. Tan spent the year slowly incorporating a backdoor into XZ Utils: disabling systems that might discover his actions, laying the groundwork, and finally adding the complete backdoor earlier this year. On March 25, Hans Jansen—another fake name—tried to push the various Unix systems to upgrade to the new version of XZ Utils.

And everyone was poised to do so. It’s a routine update. In the span of a few weeks, it would have been part of both Debian and Red Hat Linux, which run on the vast majority of servers on the Internet. But on March 29, another unpaid volunteer, Andres Freund—a real person who works for Microsoft but who was doing this in his spare time—noticed something weird about how much processing the new version of XZ Utils was doing. It’s the sort of thing that could be easily overlooked, and even more easily ignored. But for whatever reason, Freund tracked down the weirdness and discovered the backdoor.

It’s a masterful piece of work. It affects the SSH remote login protocol, basically by adding a hidden piece of functionality that requires a specific key to enable. Someone with that key can use the backdoored SSH to upload and execute an arbitrary piece of code on the target machine. SSH runs as root, so that code could have done anything. Let your imagination run wild.

This isn’t something a hacker just whips up. This backdoor is the result of a years-long engineering effort. The ways the code evades detection in source form, how it lies dormant and undetectable until activated, and its immense power and flexibility give credence to the widely held assumption that a major nation-state is behind this.

If it hadn’t been discovered, it probably would have eventually ended up on every computer and server on the Internet. Though it’s unclear whether the backdoor would have affected Windows and macOS, it would have worked on Linux. Remember in 2020, when Russia planted a backdoor into SolarWinds that affected 14,000 networks? That seemed like a lot, but this would have been orders of magnitude more damaging. And again, the catastrophe was averted only because a volunteer stumbled on it. And it was possible in the first place only because the first unpaid volunteer, someone who turned out to be a national security single point of failure, was personally targeted and exploited by a foreign actor.

This is no way to run critical national infrastructure. And yet, here we are. This was an attack on our software supply chain. This attack subverted software dependencies. The SolarWinds attack targeted the update process. Other attacks target system design, development, and deployment. Such attacks are becoming increasingly common and effective, and also are increasingly the weapon of choice of nation-states.

It’s impossible to count how many of these single points of failure are in our computer systems. And there’s no way to know how many of the unpaid and unappreciated maintainers of critical software libraries are vulnerable to pressure. (Again, don’t blame them. Blame the industry that is happy to exploit their unpaid labor.) Or how many more have accidentally created exploitable vulnerabilities. How many other coercion attempts are ongoing? A dozen? A hundred? It seems impossible that the XZ Utils operation was a unique instance.

Solutions are hard. Banning open source won’t work; it’s precisely because XZ Utils is open source that an engineer discovered the problem in time. Banning software libraries won’t work, either; modern software can’t function without them. For years, security engineers have been pushing something called a “software bill of materials”: an ingredients list of sorts so that when one of these packages is compromised, network owners at least know if they’re vulnerable. The industry hates this idea and has been fighting it for years, but perhaps the tide is turning.

The fundamental problem is that tech companies dislike spending extra money even more than programmers dislike doing extra work. If there’s free software out there, they are going to use it—and they’re not going to do much in-house security testing. Easier software development equals lower costs equals more profits. The market economy rewards this sort of insecurity.

We need some sustainable ways to fund open-source projects that become de facto critical infrastructure. Public shaming can help here. The Open Source Security Foundation (OSSF), founded in 2022 after another critical vulnerability in an open-source library—Log4j—was discovered, addresses this problem. The big tech companies pledged $30 million in funding after the critical Log4j supply chain vulnerability, but they never delivered. And they are still happy to make use of all this free labor and free resources, as a recent Microsoft anecdote indicates. The companies benefiting from these freely available libraries need to actually step up, and the government can force them to.

There’s a lot of tech that could be applied to this problem, if corporations were willing to spend the money. Liabilities will help. The Cybersecurity and Infrastructure Security Agency’s (CISA’s) “secure by design” initiative will help, and CISA is finally partnering with OSSF on this problem. Certainly the security of these libraries needs to be part of any broad government cybersecurity initiative.

We got extraordinarily lucky this time, but maybe we can learn from the catastrophe that didn’t happen. Like the power grid, communications network, and transportation systems, the software supply chain is critical infrastructure, part of national security, and vulnerable to foreign attack. The US government needs to recognize this as a national security problem and start treating it as such.

This essay originally appeared in Lawfare.

Redis’ license change and forking are a mess that everybody can feel bad about

2 April 2024 at 16:28

Redis, a tremendously popular tool for storing data in-memory rather than in a database, recently switched its licensing from an open source BSD license to both a Source Available License and a Server Side Public License (SSPL).

The software project and company supporting it were fairly clear in why they did this. Redis CEO Rowan Trollope wrote on March 20 that while Redis and volunteers sponsored the bulk of the project’s code development, “the majority of Redis’ commercial sales are channeled through the largest cloud service providers, who commoditize Redis’ investments and its open source community.” Clarifying a bit, “cloud service providers hosting Redis offerings will no longer be permitted to use the source code of Redis free of charge.”

[…]

This generated a lot of discussion, blowback, and action. The biggest thing was a fork of the Redis project, Valkey, that is backed by The Linux Foundation and, critically, also Amazon Web Services, Google Cloud, Oracle, Ericsson, and Snap Inc. Valkey is “fully open source,” Linux Foundation execs note, with the kind of BSD-3-Clause license Redis sported until recently. You might note the exception of Microsoft from that list of fork fans.

↫ Kevin Purdy at Ars Technica

Moves like this never go down well.

XZ Utils Backdoor

2 April 2024 at 14:50

The cybersecurity world got really lucky last week. An intentionally placed backdoor in XZ Utils, an open-source compression utility, was pretty much accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From Ars Technica:

Malicious code added to XZ Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware.

It was an incredibly complex backdoor. Installing it was a multi-year process that seems to have involved social engineering the lone unpaid engineer in charge of the utility. More from Ars Technica:

In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprintf function with a variant that has long been recognized as less secure. No one noticed at the time.

The following year, JiaT75 submitted a patch over the XZ Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of XZ Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

There’s a lot more. The sophistication of both the exploit and the process to get it into the software project scream nation-state operation. It’s reminiscent of SolarWinds, although (1) it would have been much, much worse, and (2) we got really, really lucky.

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.
