How Rust’s standard library was vulnerable for years and nobody noticed

Sergey "Shnatsel" Davidoff
Aug 18, 2018 · 10 min read


Rust is a new systems programming language that prides itself on memory safety and speed. The gist of it is that if you write code in Rust, it goes as fast as C or C++, but you do not get the mysterious intermittent crashes in production or the horrific security vulnerabilities that plague those two.

That is, until you explicitly opt in to that kind of thing. Uh oh.

Wait, what?

You see, Rust provides safe abstractions that let you do useful stuff without having to deal with the complexities of memory layouts and other low-level arcana. But dealing with those things is necessary to run code on modern hardware, so something has to deal with it. In memory-safe languages like Python or Go this is usually handled by the language runtime — and Rust is no exception.

In Rust, the nitty-gritty of hazardous memory accesses is handled by the standard library. It implements basic building blocks such as vectors, which expose a safe interface to the outside but perform potentially unsafe operations internally. To do that, the library explicitly opts in to potentially unsafe operations (read: barely reproducible crashes, security vulnerabilities) by annotating a block with unsafe, like this: unsafe { Dragons::hatch(); }
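To make the pattern concrete, here is a minimal sketch of my own (not the real Vec internals) of a safe API built on top of an unsafe operation. The soundness of the unsafe block hinges entirely on the bounds check next to it; get the bookkeeping wrong and you have a buffer overflow:

```rust
// A toy fixed-capacity vector: safe interface outside,
// unsafe operations inside.
pub struct TinyVec {
    buf: Box<[u8]>, // heap buffer of a fixed size
    len: usize,     // invariant: len <= buf.len()
}

impl TinyVec {
    pub fn with_capacity(cap: usize) -> Self {
        TinyVec { buf: vec![0u8; cap].into_boxed_slice(), len: 0 }
    }

    /// Safe to call: the length check below upholds the invariant.
    pub fn push(&mut self, value: u8) -> Result<(), u8> {
        if self.len == self.buf.len() {
            return Err(value); // full; a real Vec would reallocate here
        }
        // Sound only because of the check above. Remove the check and
        // this becomes an out-of-bounds write of caller-chosen data.
        unsafe { *self.buf.get_unchecked_mut(self.len) = value; }
        self.len += 1;
        Ok(())
    }
}
```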

However, Rust is different from languages like Python or Go in that it lets you use unsafe outside the standard library. On one hand, this means that you can write a library in Rust and call into it from other languages, e.g. Python. Language bindings are unsafe by design, so the ability to write such code in Rust is a major advantage over other memory-safe languages such as Go. On the other hand, this opens the floodgates for injudicious use of unsafe. In fact, a couple of months ago a promising library caught some flak for engaging in precisely this sort of thing. So when I was trying to gauge whether Rust actually delivers on its promise of memory safety, that’s where I started.
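For illustration, here is a minimal sketch of such a binding, calling libc’s strlen (which every C runtime ships). The extern declaration is a promise the Rust compiler cannot verify, so the call site has to be marked unsafe:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declare a function from the C standard library.
// The signature is taken entirely on faith.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let s = CString::new("hello").unwrap();
    // We, not the compiler, guarantee that the pointer is valid
    // and NUL-terminated; hence the unsafe block.
    let len = unsafe { strlen(s.as_ptr()) };
    println!("strlen says: {}", len);
}
```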

I spent a month messing with popular Rust libraries and then described my findings in Auditing popular Rust crates: how a one-line unsafe has nearly ruined everything. The TL;DR version is that Rust crates do sometimes use unsafe when it’s not absolutely necessary, and bugs that lead to denial of service are abundant, but after poking six different crates I failed to get an actual exploit.

Clearly, I had to kick it up a notch.

Ultimate Power, Bad Guys Only

There is a highly effective technique for discovering vulnerabilities that I haven’t applied to Rust yet. It beats everything else by a long shot, and can be used only by the bad guys who want to break stuff, not the good guys who fix it. It’s… searching the bug tracker.

You see, most people writing code in C or C++ are not actually security-minded. They just want their code to work and go fast. When they encounter a bug that makes the program output garbage or crash, they simply fix it and go investigate the next bug. What else is there to do?

Well, it turns out that in C and C++ many of those bugs are caused by mistakes in memory management. It’s exactly these bugs that present remote code execution vulnerabilities, and that safe Rust is designed to prevent. The proper way to handle them is to file them in a database called Common Vulnerabilities and Exposures (CVE for short) so that people who care about security are alerted and can ship fixes to users. In practice such bugs are silently fixed in the next release at best, or remain open for years at worst, until either someone discovers them independently or the bug is caught powering some kind of malware in the wild.

This leaves a lot of security vulnerabilities in plain sight on the public bug tracker, neatly documented, just waiting for someone to come along and weaponize them.

I particularly like the example of such a bug in libjpeg that was discovered in 2003 but not recognized as a security issue. The patch to fix it ended up in limbo until 2013, at which point it was incorporated into an update so obscure that nobody received it anyway. The fix did not even get a changelog entry. The bug was independently rediscovered later in 2013 by Michal Zalewski, the author of afl-fuzz, and 10 years after the vulnerability was first discovered, the fix at last shipped.

That is, for 10 years anyone who bothered to just scroll through the bug tracker could steal cookies and passwords out of your web browser with nothing more than an image and a bit of JavaScript.

Touché.

The worst part is, bugs that are already fixed are not eligible for bug bounties. So the Bugtracker Search technique will not get you bug bounty money; it will, however, get you real exploits for production systems. This is why it’s unrivaled if you want to break stuff, and useless if you want to fix it and not go broke in the process.

Also, getting maintainers to take your “this is a security vulnerability” comments seriously can be problematic, and actually exploiting the bug to prove it can be a lot of work, which further discourages pro bono applications of this technique.

Into the woods

Actually applying the Bugtracker Search™ to Rust code was even easier than I expected. It turns out GitHub lets you search across all projects written in a certain language, so I just typed “unsound” into the search box, selected “Rust” as the language, and off we went! Bugs, bugs everywhere!
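For the curious, the query amounts to something along these lines (the exact URL parameters are my guess at GitHub’s search syntax):

```
https://github.com/search?q=unsound+language:Rust&type=issues
```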

I did not have much time to spare at the time, so typing “crash” instead of “unsound” into the search box is left as an exercise for the reader. Also, I only searched for open bugs in recently updated projects and ignored the standard library (those guys gotta be responsible, right?).

This got me my first Rust zero-day exploit! It had been discovered two months before I found it through GitHub search, and it comes with its own blog post, albeit one focusing on performance. After I pointed out that it was a security vulnerability, the crate maintainer fixed it within two hours, and then backported the fix to every affected release series, even though the crate is still on 0.x.x versions. Kudos!

Still, actually exploiting this bug in practice is tricky. It would be a good candidate for exploit chaining, but it’s hard to use by itself.

Okay, that was not ultimate enough. Time to kick it up another notch.

In the belly of the beast

At this point we’re looking for something that is straightforward to exploit (something like a buffer overflow with attacker-controlled data) and has not been recognized as a security vulnerability yet.

It doesn’t matter if the bug is fixed in the latest version of the code: people have little incentive to update as long as whatever they’re using works for them, and a very clear incentive not to upgrade, because the version they’re running is known to work well while the latest update is not.

So even if there is an update that fixed the issue, a lot of people will not actually install it, because there is no reason to — unless it is marked as a security update.

I was contemplating my course of action when I accidentally stumbled upon a Reddit thread discussing the history of vulnerabilities in the Rust standard library, which pointed out this gem:

seg fault pushing on either side of a VecDeque
https://github.com/rust-lang/rust/issues/44800

This is a buffer overflow bug in the standard library’s implementation of a double-ended queue. The data written out of bounds is controlled by the attacker. This makes it a good candidate for a remote code execution exploit.

The bug affects Rust versions 1.3 to 1.21 inclusive. It causes a crash that is relatively easy to observe, yet it went unnoticed for two years. It did not even get a changelog entry in the release that fixed it. No CVE was filed for this vulnerability.
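The exact reproduction is in the issue above; the general shape of the trigger was a VecDeque::reserve() call followed by pushes, with the flawed capacity bookkeeping letting a push write past the end of the buffer. Here is a hypothetical sketch of that shape (not the reproducer from the issue; on a fixed Rust, 1.22 and later, it simply runs to completion):

```rust
use std::collections::VecDeque;

fn main() {
    // Hypothetical sketch of the bug class. On an affected Rust
    // (1.3 to 1.21), flawed capacity handling around reserve()
    // could let subsequent pushes write out of bounds.
    let mut deque: VecDeque<u8> = VecDeque::with_capacity(8);
    deque.push_back(1);
    deque.reserve(100); // the buggy code path lived here
    for byte in 0..100u8 {
        // In an attack, these would be attacker-controlled bytes.
        deque.push_back(byte);
        deque.push_front(byte);
    }
    println!("survived, len = {}", deque.len());
}
```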

As a result, Debian Stable still ships vulnerable Rust versions for some architectures. I expect many enterprise users to have vulnerable versions as well.

As usual, bad guys win.

Whooops!

I did not expect to find something like this in the standard library because Rust has a very well thought out and responsible security policy (other projects, take note!), and the Rust security team consists of people who regularly work on the compiler and standard library. The fix should not have gone unnoticed.

I have contacted the Rust security team about the issue, asking them to make an announcement and file a CVE. The reply was:

Hey Sergey,

This was fixed way back in September; we don’t support old Rusts. As such, it’s not really eligible for this kind of thing at this point, as far as our current policies goes. I’ll bring it up at a future core team meeting, just in case.

And then, shortly:

<snip>

We talked about this Wednesday evening.

- We do want to change our policy here
  - The current policy is that we only support the latest Rust
  - The general feeling is “if it’s important enough for a point release, it’s important enough for a CVE”
  - This specific patch does seem like it should have gotten more attention at the time
  - This stuff also obviously ties into LTS stuff as well
- We don’t have the time or inclination to work on updating this policy until after the [2018] edition ships
  - We’d rather take the time to get it right, but don’t have the time right now

Okay, I have to admit that this sounds reasonable.

They have subsequently reaffirmed that they have no intention to file a CVE for this issue, so I went ahead and applied for one myself via http://iwantacve.org/. This is supposed to involve a confirmation by email, and I have yet to hear back. I have no clue how long this will take.

Update: this issue has been assigned CVE-2018-1000657.

Bugs, bugs everywhere!

This exposes a bigger issue with the standard library: insufficient verification. If this bug — which is relatively easy to observe! — has gone unnoticed for two years, surely something like it is still lurking in the depths of the standard library?

This problem is not unique to Rust. For example, Erlang — that funky language that people use to program systems with 99.9999999% uptime (no, that’s not an exaggeration) — has repeatedly shipped with a broken implementation of the Map data structure in its standard library. There is a fascinating series of four articles detailing the systematic approach used to discover those issues.

To actually deliver on its safety guarantees, the Rust standard library needs dramatically better testing and verification procedures. Some of its primitives were mathematically proven to be correct as part of the RustBelt project, but that did not extend to the implementations of data structures.

One way to do that would be to use the same approach as was used for verifying the Map structure in Erlang: build a model of the behavior of the structure in question, automatically generate tests based on it, and verify that the outputs of the model and the implementation match for automatically generated inputs. Rust already has the tooling for this in the form of QuickCheck and proptest.
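As a sketch of what that could look like with proptest as a dev-dependency (the Op type and the strategies below are my own names, and a plain Vec serves as the trivially correct model): generate random sequences of operations, run them against both a VecDeque and the model, and assert that the observable behavior matches at every step:

```rust
use proptest::prelude::*;
use std::collections::VecDeque;

// The operations this model covers; a real harness would also
// include reserve(), iteration, indexing and so on.
#[derive(Clone, Debug)]
enum Op {
    PushBack(u8),
    PushFront(u8),
    PopBack,
    PopFront,
}

fn op() -> impl Strategy<Value = Op> {
    prop_oneof![
        any::<u8>().prop_map(Op::PushBack),
        any::<u8>().prop_map(Op::PushFront),
        Just(Op::PopBack),
        Just(Op::PopFront),
    ]
}

proptest! {
    #[test]
    fn vecdeque_matches_model(ops in proptest::collection::vec(op(), 0..512)) {
        let mut real: VecDeque<u8> = VecDeque::new();
        let mut model: Vec<u8> = Vec::new(); // trivially correct reference

        for op in ops {
            match op {
                Op::PushBack(x) => { real.push_back(x); model.push(x); }
                Op::PushFront(x) => { real.push_front(x); model.insert(0, x); }
                Op::PopBack => { prop_assert_eq!(real.pop_back(), model.pop()); }
                Op::PopFront => {
                    let expected =
                        if model.is_empty() { None } else { Some(model.remove(0)) };
                    prop_assert_eq!(real.pop_front(), expected);
                }
            }
            // After every step the two must agree element for element.
            prop_assert!(real.iter().eq(model.iter()));
        }
    }
}
```

Any divergence (or outright crash) points straight at the VecDeque implementation, and proptest will shrink the failing operation sequence down to a minimal one.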

Another way to verify the implementations is to use a symbolic execution framework such as KLEE or SAW. They work by analyzing the code and figuring out all possible program states for all possible execution paths. This lets you either generate inputs that trigger faulty behavior or make sure that certain behavior is impossible. Sadly, neither of those tools supports recent versions of Rust.

Alas, both of those approaches are time-consuming and would require a coordinated effort. It’s not something one can do for the entire standard library over a couple of weekends — otherwise I’d be opening a pull request against the Rust standard library by now instead of writing this article.

Oh, and before you bring out the pitchforks and denounce Rust for all eternity: for reference, the Python runtime racks up about 5 remote code execution vulnerabilities per year. And that’s just the ones that were discovered and got a CVE! How many were silently fixed or still lurk in the depths of the Python runtime? Only the bad guys know.

Everything is broken

I once reported a buffer overflow in a popular C library that is used in one of the major web browsers. It was a textbook example of a security vulnerability, and could be triggered simply by opening a webpage. I was told that the bug had been silently fixed in a subsequent release that nobody had upgraded to yet. When I asked the maintainers to file a CVE, they said that if they filed one for every such bug they fixed, they’d never get any actual work done.

Oh, and the worst thing? The vulnerability I’ve reported in that library was found by a fully automated tool in less than a day. All I did to discover the vulnerability was basically point and click. Imagine how many more exploitable bugs a dedicated security expert could discover!

This was when I actually understood and internalized that everything is broken.

The horrifying thing for me is that I still use that web browser. It’s not like I have any alternatives — every practical web browser relies on a huge mess of C code. And it is evident that humans are unable to write secure C code, unless they swear off dynamic memory allocation altogether.

This is why I’m so hopeful about Rust. It is the only language in existence that could really, truly, completely and utterly supplant C and C++ while providing memory safety. There is a mathematical proof of correctness for a practical subset of safe Rust and even some inherently unsafe standard library primitives, and ongoing work on expanding it to cover even more of the language.

So we know that safe Rust actually works. The really hard theoretical problems are solved. But the inherently unsafe parts of the implementation, such as the language runtime, could use more attention.

Update: Brian Troutwine has kicked off a project to validate Rust standard library primitives using QuickCheck! Check out bughunt-rust on GitHub, and join the hunt!
