
A Developer’s View: How Attackers Can Infect Open Source Codebases


November 22, 2021

Most open source projects welcome contributions from anyone. That’s one of the key strengths of open source development as a whole – the fact that any developer can help build it.

But this permissiveness can also breed risks. If open source projects don’t adequately vet new contributors and validate their code, their open-door policies can become a vector for attackers to sneak malicious code into their repositories. This is bad not just for the projects themselves, but also for any third parties that incorporate vulnerable open source code into their own codebases.

Here’s a look at how attackers can exploit open source projects, and what that means for developers who depend on open source code to help build their own projects.

Attacker Infiltration of Open Source Projects

Unlike most proprietary codebases, which are not accessible to the public at large, open source projects typically allow anyone to contribute code to them, and even go out of their way to make it easy to do so. After all, part of the reason why platforms like GitHub have become massively popular for hosting open source projects is the ease with which users of those platforms can access open source code, modify it or extend it, and push their changes back into the main codebase.

This doesn’t mean that anyone can push arbitrary code into an open source codebase with no vetting at all. Most projects carefully review proposed contributions from coders who haven’t worked with the project before to ensure that the contributions meet the project’s standards for code quality and security. They also look at new contributors’ backgrounds to check that they have an established track record of solid contributions and coding experience. This vetting helps keep bad code from sneaking into the codebase.

However, the rigor of the vetting process for new contributors can vary widely from one open source project to another. Large, mature projects led by veteran coders tend to enforce high standards. But smaller projects, or those that are not managed well, may do a poorer job of keeping track of who they allow onto their teams of contributors.

That means that coders with malicious intent may be able to slip past the gatekeeping process that is supposed to protect open source projects.

Keep in mind that attackers need not contribute plainly malicious code in order to infiltrate open source projects in this way. They could merely propose a change that quietly introduces an exploitable condition – improper input validation, say, or an overlooked bounds check – that opens the door to an attack against the application.

This means that malicious contributions aren’t always easy to detect. Even experienced coders may fail to notice the vulnerabilities that attackers have intentionally buried within code they propose to contribute to an open source project.
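
To make that concrete, here’s a minimal, hypothetical sketch in TypeScript – the function names and the public/ directory are invented for illustration – of the kind of change that can read as a harmless cleanup in review while quietly removing an input-validation check:

```typescript
import * as fs from "fs";
import * as path from "path";

const PUBLIC_DIR = path.resolve("public");

// Original version: rejects any request that resolves outside PUBLIC_DIR.
export function readPublicFileSafe(requestedName: string): Buffer {
  const resolved = path.resolve(PUBLIC_DIR, requestedName);
  if (!resolved.startsWith(PUBLIC_DIR + path.sep)) {
    throw new Error("Invalid file name: " + requestedName);
  }
  return fs.readFileSync(resolved);
}

// "Refactored" version a malicious contributor might propose. It looks like a
// tidy-up, but dropping the startsWith() check lets a request such as
// "../../etc/passwd" escape the public directory (a classic path traversal).
export function readPublicFileUnsafe(requestedName: string): Buffer {
  return fs.readFileSync(path.resolve(PUBLIC_DIR, requestedName));
}
```

In a busy review queue, deleting a one-line check like that is easy to read as simplification rather than sabotage.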

Malicious Insiders

Complicating matters further is the fact that, once a developer has been accepted as a trusted member of an open source community, his or her activity within the codebase may not be monitored very closely.

That means that an attacker could potentially make valid contributions initially in order to gain the trust of peers, then start adding malicious code to the codebase without being closely watched.

If you think scenarios like this sound rare, think again. GitHub has reported that nearly 20 percent of the security bugs found in code on its platform were planted deliberately by malicious actors. Similar issues have occurred in the npm registry, where attackers have uploaded malicious packages with names nearly identical to those of legitimate ones – a technique known as typosquatting – in order to trick developers into installing them.
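
To see how little it takes for a typosquatted package to slip in, consider this hypothetical sketch (the misspelled import is shown commented out and is meant purely to illustrate the pattern, not to point at any specific package):

```typescript
// Legitimate import of the popular lodash utility library:
import { merge } from "lodash";

// A typosquatted look-alike that a rushed developer might install by mistake.
// The misspelling below simply illustrates the pattern attackers rely on:
// import { merge } from "lodahs";

// Either import would compile and appear to work. That is what makes
// typosquatting effective: the malicious package can mimic the real API
// while running extra code at install or import time.
export function combineConfig(defaults: object, overrides: object): object {
  return merge({}, defaults, overrides);
}
```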

Malicious Code’s Impact on Third Parties

It’s not just developers and users of the open source tool or platform itself who are harmed when attackers find their way into an open source project. Third-party developers who incorporate code from an open source project into their own, internal codebases are also at risk.

In other words, if you are building an application or tool for use inside your own company, and you import some open source code to help implement it, you run the risk that an exploit or vulnerability lurking inside that code will make its way into your own application.

Protecting Against Attacks in Open Source

What can you do to protect your business from the risks that arise when attackers infiltrate open source codebases?

You could choose not to use open source, of course, but that would mean shooting yourself in the foot. Open source is a great resource that, when used properly, allows developers to build applications faster and more cost-effectively than they could if they had to implement everything themselves.

A better approach is to use open source when you need it, but to be sure that you get open source code from trusted, mature projects. As noted above, these projects are more likely to perform the thorough vetting necessary to keep hackers out of their codebases.

Still, no matter how much you trust your code’s source, it’s still important to perform your own checks using tools like Checkmarx Software Composition Analysis (SCA), which scans codebases for vulnerabilities and other security issues. Even well-managed open source projects may allow insecure code to slip through their gates, and you can only deploy your applications with confidence if you know that you’ve done your own validation to protect against security risks.
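
If you want an extra layer of validation on top of an SCA scan, even a small homegrown check can help. The sketch below is illustrative only – it assumes an npm project with a version 1 package-lock.json (one that has a top-level dependencies map) – and flags lockfile entries that lack an integrity hash or resolve outside the official npm registry:

```typescript
import * as fs from "fs";

interface LockDependency {
  resolved?: string;
  integrity?: string;
  dependencies?: Record<string, LockDependency>;
}

interface LockFile {
  dependencies?: Record<string, LockDependency>;
}

// Walk the lockfile's dependency tree and report entries that either lack an
// integrity hash or are fetched from somewhere other than the npm registry.
function audit(deps: Record<string, LockDependency> | undefined, prefix = ""): string[] {
  const findings: string[] = [];
  for (const [name, dep] of Object.entries(deps ?? {})) {
    const label = prefix ? `${prefix} > ${name}` : name;
    if (!dep.integrity) {
      findings.push(`${label}: no integrity hash recorded`);
    }
    if (dep.resolved && !dep.resolved.startsWith("https://registry.npmjs.org/")) {
      findings.push(`${label}: resolved from ${dep.resolved}`);
    }
    findings.push(...audit(dep.dependencies, label));
  }
  return findings;
}

const lock: LockFile = JSON.parse(fs.readFileSync("package-lock.json", "utf8"));
const findings = audit(lock.dependencies);

if (findings.length > 0) {
  console.error("Suspicious lockfile entries:\n" + findings.join("\n"));
  process.exit(1);
} else {
  console.log("All lockfile entries carry integrity hashes from the npm registry.");
}
```

A check like this is no substitute for a dedicated SCA tool, which also tracks known vulnerabilities and license risks, but it adds one more hurdle between a tampered dependency and your production systems.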

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.

Download our Ultimate Guide to SCA here.
