Supply Chain Attacks
Ryan Koch (00:03.512)
Today I want to talk about something that I think is really important for anyone who works in, around, or with technology in the public sector to understand. Supply chain attacks. I know that might sound like a niche cybersecurity topic, and if you're someone on the policy side or the program side, you might be thinking, okay, that sounds like an IT thing. But I'd argue this is one of those topics where the technology, policy, and real world consequences are so intertwined
that everyone needs at least a working understanding of it. So let's build that knowledge up here together today.
What is a supply chain attack? The simplest way I can describe it is imagine one day you go out and buy a bottle of water and you get real unlucky. The water makes you feel sick. To you in that moment, the thing at fault must just be the water you drink, right? But a lot of things go into preparing it before it comes to quench your thirst. The labels were printed, the plastic was molded into some shape, and maybe the caps that get put on the end
came from some other supplier before they were used. As all those things came together, the water was added to the bottle. What if the caps that came on in that last step were contaminated? And what if that contamination wasn't noticed before you drank from the bottle? That scenario is an analog to what a technology supply chain attack might be like.
Modern software is assembled from hundreds or even thousands of components: open source libraries, third-party services, vendor tools. Research suggests that open source code can make up somewhere between 70 and 90% of a modern application. Every one of those components is a trust decision, much like the water bottle company trusting the caps.
Ryan Koch (02:06.626)
When a developer includes a library in a project, they're trusting that the people who maintain it, the systems they use to build it, and the channels they use to distribute it haven't been compromised by some malicious actor. A supply chain attack exploits that trust. The attacker compromises something along that upstream path. Maybe it's a popular software library, a build system, a vendor's software update mechanism. And they use that relationship to reach downstream into potentially thousands of victim organizations. The victims don't get hacked in the sense that you might see in the movies, right? They just install a software update. They update a dependency. They do the stuff that they're supposed to do. And that's how the malicious code finds its way inside. Now, why should we care about this, you might be wondering?
There's a few reasons. First is amplification, as we've been talking about. One compromise in one of these chains can reach a huge number of victims very quickly. Second is detection. It can be really, really difficult to detect these kinds of things because the bad code arrives through a channel that is meant to be trustworthy. It could be a signed software package update. It could be from a well-known NPM package, you know, for your JavaScript folks. And these things then blend in with normal activity. Your security software, your scanning tools aren't looking for that to be something malicious. And that's something that makes this ever more dangerous. Third, the code that comes in through a supply chain attack may already have elevated privileges because of its entry point. It's running as the thing you built, or in the build tools you have.
So there's no need for someone to kind of take that step where they get into a lower level thing and try to escalate because the nature of the attack got them there already. The scale of this problem is also growing quickly. Supply chain attacks more than doubled in 2025. Global losses hit an estimated $60 billion. Over 70 % of organizations reported experiencing at least one supply chain related security incident.
Ryan Koch (04:32.962)
So hearing all that, I think we can safely say this isn't a theoretical concern we're talking about here today. But how do these things tend to play out? Let's walk through a couple of common patterns where there's some interesting bits to pick up. One common way is through classic stolen credentials. An attacker gets hold of a package maintainer's login somehow, or a long-lived access token they use for a build system, and they use that to publish a poisoned version of what would otherwise be a legitimate software package. From the outside, things look fine. The package has the same name, the maintainers haven't obviously lost control of it, the version number probably changed, but it's not obvious that there's been tampering. Another way is social engineering, which can be very sophisticated.
A very striking example of this is the XZ Utils backdoor that came to light in 2024. XZ Utils is a compression library used across a lot of Linux distributions, so you can imagine it's something that ends up in a lot of really interesting and important infrastructure. The attackers spent two years making legitimate contributions to the project. They built up credibility, they sought to earn trust, and eventually used sock puppet accounts to help pressure the maintainer into giving them shared access to the project. And once they had that trusted position, after investing all that time, they used it to insert a backdoor that could have given them access to millions of SSH-enabled systems worldwide. SSH is a remote access tool, so you can access computers that are far away. And what's interesting is that it was caught by accident. A Microsoft engineer
happened to notice a 500 millisecond latency increase during some testing and chose to investigate why that change in performance happened, which is what led to the whole thing being uncovered.
Ryan Koch (06:40.641)
Another pattern is attacking the build system itself. A big example of that is the SolarWinds incident, which came to light in December of 2020. Russian-affiliated attackers compromised the build infrastructure for the SolarWinds Orion platform and put a backdoor into its legitimate software update channel. Those updates went out to 18,000 different organizations, including those within the US federal government, Fortune 500 companies, and the cybersecurity firm FireEye, which happened to be the organization that discovered the breach. The attackers managed to operate undetected for nine months. The source code itself was clean; the malicious thing, the poison, for lack of a better way to put it, was introduced during the build steps in the build system.
Ryan Koch (07:34.392)
There's another example that I want to spend a little bit of time on, both because it's recent and because it shows the ever-increasing sophistication that we're seeing in this threat. At the end of March 2026, just days ago as I record this, the Axios NPM package was compromised. If you're not in the JavaScript world, Axios is a very widely used HTTP client library.
It has over 83 million weekly downloads. It's all over the place. People's front end applications, their backend services, enterprise systems. It's very popular. The attackers compromised the account of the primary maintainer of this project and managed to publish not one, but two malicious versions of it. What's interesting is that they injected a fake dependency that contaminated the kind of post install script process, the things that run after install.
And what that process then did, the poisoned one, is drop a cross-platform remote access trojan. And they had pre-built payloads for macOS, Windows, and Linux. Both release branches were hit within 39 minutes, and after execution, the malware would delete itself, replace its own package manifest with a clean version, basically, you know, work to cover the tracks, as it were. The level of planning here is what's super notable, though.
It's not something that could have been done opportunistically. And security researchers at Elastic found that the macOS payload shared significant overlap with malware attributed to a North Korean threat group. So it's possible that this is something that is a state-sponsored actor attacking one of the world's most popular JavaScript packages. And so folks might be listening to this thinking, well, my agency doesn't use JavaScript.
But the specific language isn't what's important here. It's the pattern: a build process or a maintainer account gets attacked. The same thing can happen to projects using Python, Java, .NET, whatever tech stack you happen to use.
Ryan Koch (09:48.344)
So this is the threat landscape. What can you realistically do if you're someone at an organization listening to this? I'm not gonna pretend there's some super easy, obvious answer where you just do this one thing and you're done, because there really isn't. But there are some meaningful practices you can adopt to reduce your risk. First is the concept of a software bill of materials. Think of it as an ingredients list for your software. It's a structured inventory of all the components, libraries,
dependencies with their version numbers. Often package managers will have a lock file or something of that nature with this information in it. It's worth then keeping track of it. This is a kind of thing that can be automated and potentially can interact with things like your GRC tools, your governance, risk and compliance tools.
For example, in incidents like the Log4j incident, organizations that had adopted this practice were almost immediately able to identify their exposure, while those that hadn't then had to kind of figure out, is this a thing we have installed in our tools or is it not?
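As a sketch of that exposure check, here's what querying an SBOM-style inventory might look like in JavaScript. The package names, version numbers, and function names here are all hypothetical, purely for illustration; a real inventory would come from a lock file or SBOM tool.

```javascript
// Minimal sketch: given a flattened dependency inventory (the kind of
// data a lock file or SBOM gives you), report every entry matching a
// known-compromised package and version list. All names are hypothetical.
function findExposure(inventory, badPackage, badVersions) {
  return inventory.filter(
    (dep) => dep.name === badPackage && badVersions.includes(dep.version)
  );
}

// An inventory like what you might extract from a package-lock.json
const inventory = [
  { name: "some-http-client", version: "1.7.2" },
  { name: "left-pad", version: "1.3.0" },
  { name: "some-http-client", version: "2.3.4" }, // pulled in transitively
];

// Suppose an advisory names some-http-client 2.3.4 as compromised:
const hits = findExposure(inventory, "some-http-client", ["2.3.4"]);
console.log(hits.length); // 1 match: you're exposed, time to pin or roll back
```

The point isn't the code itself, it's that when the inventory already exists, answering "are we exposed?" becomes a lookup instead of an investigation.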
Next is dependency management overall. You can pin your versions, you can use lock files, like I mentioned. You don't need to automatically pull the latest version of everything. There's even research suggesting that waiting seven days before adopting a new package version might have prevented eight out of ten of the major supply chain attacks that happened in 2025. That's a fairly simple thing one can do, with maybe a big payoff.
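That seven-day cooldown idea is simple enough to sketch directly. This is a hypothetical policy helper, not a real npm feature; the function name and dates are mine.

```javascript
// Minimal sketch of a "cooldown" policy: only adopt a package version
// once it has been public for at least N days, so that compromised
// releases have time to be discovered and pulled before you install them.
const COOLDOWN_DAYS = 7;

function oldEnoughToAdopt(publishedAt, now = new Date()) {
  const ageMs = now.getTime() - new Date(publishedAt).getTime();
  return ageMs >= COOLDOWN_DAYS * 24 * 60 * 60 * 1000;
}

// A release published yesterday fails the policy...
console.log(oldEnoughToAdopt("2025-06-01", new Date("2025-06-02"))); // false
// ...while one published two weeks ago passes.
console.log(oldEnoughToAdopt("2025-06-01", new Date("2025-06-15"))); // true
```

In practice you'd wire a check like this into your dependency update automation, gating upgrades rather than running it by hand.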
Third, pay attention to your CI/CD platforms. Build systems are high-value targets. You should enforce code provenance, use reproducible builds where you can, and review lifecycle scripts in your dependencies. The Axios attack, for example, happened entirely through a post-install hook and a transitive dependency. If those hooks were disabled or reviewed by default,
Ryan Koch (12:01.483)
that attack vector is less likely to be exploited. Fourth, consider your identity and access management. Phishing-resistant multi-factor authentication is important on your developer and service accounts. You want to get rid of long-lived access tokens where you can. Going back to the Axios compromise again, that happened because of a long-lived NPM token. If you can get rid of those, that's a little bit of attack surface that you can reduce.
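The lifecycle-script review mentioned a moment ago can be sketched too. The function below flags the hooks that run automatically on install, which is the channel the Axios attack used; the manifest and names here are hypothetical. (npm also supports turning these off wholesale with an `ignore-scripts=true` line in `.npmrc`.)

```javascript
// Minimal sketch: flag lifecycle scripts declared in a dependency's
// package.json manifest. preinstall/install/postinstall run automatically
// when the package is installed, so they deserve review; other scripts
// (test, build, etc.) only run when invoked explicitly.
const LIFECYCLE_HOOKS = ["preinstall", "install", "postinstall"];

function riskyScripts(manifest) {
  const scripts = manifest.scripts || {};
  return LIFECYCLE_HOOKS.filter((hook) => hook in scripts);
}

// A hypothetical dependency's manifest:
const manifest = {
  name: "some-dependency",
  scripts: {
    postinstall: "node ./setup.js", // runs automatically after install
    test: "jest",                   // harmless: only runs on demand
  },
};

console.log(riskyScripts(manifest)); // ["postinstall"]
```

A check like this, run over everything in `node_modules`, gives you a short list of packages whose install-time behavior is worth a human look.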
Finally, for organizations that rely on a lot of external managed service providers, where vendor risk is a real part of the picture, you maybe don't want to rely just on annual questionnaires and contract clauses to manage that risk. It should be something where you're continuously monitoring what's going on with their solution. And you really want the ability to rapidly isolate a vendor integration if it becomes compromised.
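That "rapidly isolate a vendor" idea often takes the shape of a kill switch: every call to the external integration goes through a wrapper that one flag can shut off. Here's a minimal sketch; all the names are hypothetical, not any specific product's API.

```javascript
// Minimal sketch of a vendor kill switch. Flipping one flag cuts off
// the integration and the code degrades gracefully instead of calling out.
const vendorEnabled = { paymentsVendor: true };

function callVendor(name, fn, fallback) {
  if (!vendorEnabled[name]) {
    return fallback; // vendor isolated: don't touch the external system
  }
  return fn();
}

// Normal operation: the vendor call runs.
console.log(callVendor("paymentsVendor", () => "charged", "queued")); // "charged"

// Incident response: flip the flag, and the integration is cut off
// without a code change or redeploy.
vendorEnabled.paymentsVendor = false;
console.log(callVendor("paymentsVendor", () => "charged", "queued")); // "queued"
```

The design point is that isolation becomes an operational action, on the order of minutes, rather than an engineering project you start after the breach notification arrives.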
Ryan Koch (13:06.881)
I want to close with a few bigger-picture thoughts, because I think there are some themes here that go beyond the technical stuff. Everything we talked about today comes back to exploiting trust. Trust in a package maintainer, trust in a vendor, trust in an update mechanism. That's a fundamentally different kind of problem than a traditional "here's a cybersecurity vulnerability, I need to mitigate it with a control," and it requires a somewhat different way of thinking.
There can be real structural gaps between how fast these things move and how fast our governance and risk management processes operate. Governance can often be said to move at document speed, while attacks move at machine speed. Closing that gap is going to take more than just finding a shiny tool. It takes changes to how we think about procurement, vendor management, and what it means to trust
the software you run on the systems at your organizations. I recently watched a talk by AJ Yawn on approaching GRC, governance, risk, and compliance, as an engineering discipline: taking it beyond manual evidence collection toward engineered systems, automations, integrations. I'll make sure I link it in the show notes, as I think the ideas there are really valuable in this context and in others.
And it's certainly worth the brain cycles for y'all to take a peek at or listen to.
This is an area that's evolving fast, and I think it's one that anyone working in civic tech, government IT, or policy might want to have on their radar.
Ryan Koch (14:51.658)
And that's going to do it for today's episode. Thank you for listening in. If you found this useful, please share it with a friend, a colleague, your cat, or whoever in your life enjoys listening to podcasts.