How I’ve found a vulnerability in a popular Rust crate (and you can too)
I have recently discovered a zero-day vulnerability in a fairly popular and well-designed Rust crate. In this article I’m going to discuss how I did it and why it wasn’t discovered earlier, and introduce a new tool, libdiffuzz, that I’ve created for the job. A recently discovered vulnerability in the Rust standard library makes a cameo appearance.
In my earlier article about a one-line unsafe block that has nearly ruined everything I explained how I used fuzzing to look for vulnerabilities in widely used Rust code. However, the titular one-line unsafe was found not through an automated process, but by manually reading the code. Why didn’t fuzzers discover it?
Fuzzers work by feeding your program random input and seeing what happens. They only detect that something is wrong if the program crashes. So to get fuzzers to actually discover memory issues that lead to vulnerabilities, you need some way to notice improper handling of memory when it happens. There have been many attempts to build such tools over the years, but the most practical and popular one is Address Sanitizer. It reliably detects all sorts of bugs, is supported by the Rust compiler out of the box, and is in fact enabled by default in one of the Rust fuzzers, cargo-fuzz.
There is a tool that can detect reads from uninitialized memory, called Memory Sanitizer, but it currently doesn’t work with the Rust standard library. So unless you avoid the standard library entirely, there is no tool that lets you detect reads from uninitialized memory in Rust.
Well, bummer. That means I’ll have to build one.
The birth of libdiffuzz
Since I’m only interested in memory disclosure vulnerabilities, i.e. cases when contents of uninitialized memory show up in the program output, it should be sufficient to run the same operation twice and compare the results. If a program has decompressed the same zip file twice and got different results, that usually means that contents of uninitialized memory have shown up in the output.
With that in mind, I’ve written a simple test program that reads from uninitialized memory and tried to detect it using the “run twice, compare results” technique. I wanted to be able to check if results differ between runs at a glance without comparing huge amounts of data by hand, so this is what I ended up with:
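Something along these lines: a sketch reconstructed from the description, where `sum_uninitialized` is the name used later in the text and the other details (buffer size, sum type) are my assumptions.

```rust
// Deliberately buggy test program: `set_len` exposes the uninitialized
// contents of a freshly allocated buffer without ever writing to it.
fn sum_uninitialized() -> u64 {
    let mut vec: Vec<u8> = Vec::with_capacity(1024);
    // UNSOUND: claims the buffer is filled, but nothing was written.
    unsafe { vec.set_len(1024); }
    vec.iter().map(|&byte| byte as u64).sum()
}

fn main() {
    let first = sum_uninitialized();
    let second = sum_uninitialized();
    // If uninitialized memory leaks into the result, the two sums
    // should differ; panicking here is our detection signal.
    assert_eq!(first, second, "Use of uninitialized memory detected!");
    println!("No discrepancy detected");
}
```

Summing the buffer lets you compare two runs at a glance instead of eyeballing a kilobyte of garbage.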
This program will panic if the use of uninitialized memory is detected. Our goal here is to get it to panic reliably — we know it’s buggy, we just need to be able to detect the bug automatically.
It turns out it’s not that easy, because what uninitialized memory actually contains varies depending on the memory allocator in use. And no matter what memory allocator I tried, I couldn’t get it to crash. When built with Rust’s default jemalloc, `sum_uninitialized()` would always return 0. When built with the system allocator (as in the code above), the return value would differ between different runs of the process, but not between different invocations of the function within the same process. I have even tried AFL’s libdislocator, which is basically a poor man’s Address Sanitizer implemented as a memory allocator (which makes it usable on black-box binaries), and even that didn’t work: my `sum_uninitialized()` always produced a stable result.
At this point I’ve (mentally) screamed “How hard can it be?!”, opened the source code of libdislocator and trivially patched it to fill every allocated buffer with a value that’s incremented on every allocation instead of a constant value. And it worked! This test program started crashing!
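libdiffuzz itself is a patched libdislocator written in C, but the core trick can be sketched in Rust as a custom global allocator. The names below are mine and this is not libdiffuzz’s actual code, just an illustration of the incrementing-fill idea:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicU8, Ordering};

// Every allocation gets poisoned with a different fill byte, so code that
// reads memory it never wrote produces different results on every run,
// which a "run twice, compare results" harness can then detect.
struct PoisonAlloc;

static FILL: AtomicU8 = AtomicU8::new(1);

unsafe impl GlobalAlloc for PoisonAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ptr = System.alloc(layout);
        if !ptr.is_null() {
            // The key change: an incrementing fill value instead of
            // libdislocator's constant one.
            let fill = FILL.fetch_add(1, Ordering::Relaxed);
            std::ptr::write_bytes(ptr, fill, layout.size());
        }
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
    }
}

#[global_allocator]
static ALLOC: PoisonAlloc = PoisonAlloc;

fn main() {
    // Properly initialized allocations are unaffected by the poisoning.
    let v = vec![1u8, 2, 3];
    println!("sum = {}", v.iter().sum::<u8>());
}
```

Because each allocation starts with different garbage, `sum_uninitialized()` from the test program above returns a different value on every call, and the `assert_eq!` finally fires.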
From the lab to real world
Armed with my newly-minted abomination I went looking for a prospective real-world target to use it on. I’ve picked claxon, a FLAC decoder written in Rust, for a few reasons:
- Code that does nontrivial binary parsing is the poster child for security vulnerabilities in memory management
- It contains 8 unsafe blocks per ~2000 lines of code, which is entirely too many for my liking and cannot possibly be unexploitable in a library that does complicated binary parsing (seriously, don’t do that)
- The author claimed that the library has been extensively fuzzed
- I have already fuzzed it myself for about 1 billion executions total, so I already had a bunch of automatically generated files that exercise many different execution paths: a great starting point for further fuzzing
- Nobody has looked for this particular class of vulnerabilities in Claxon before; the only tool that could detect it, Memory Sanitizer, would not have worked with Claxon because it uses the Rust standard library
- This code has defied me before (see point 4) and I took that as a challenge
So I’ve thrown together a fuzz target that decoded the same file twice and checked that the results were identical (if you’re craving fancy words, call this “differential fuzzing”), plugged it into AFL and left it running overnight. And lo and behold, I woke up to 3 automatically discovered crashes!
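The harness boils down to a comparison like the one below. `assert_deterministic` is a name I made up; in the real target the closure decodes the input file with Claxon and the whole thing sits inside AFL’s `fuzz!` macro rather than computing a toy value:

```rust
use std::fmt::Debug;

// The "run twice, compare results" check at the heart of the harness.
fn assert_deterministic<T: PartialEq + Debug>(run: impl Fn() -> T) {
    let first = run();
    let second = run();
    // Under libdiffuzz, uninitialized bytes leaking into the output make
    // the two runs differ, turning a silent bug into a crash AFL can see.
    assert_eq!(
        first, second,
        "results differ between runs: possible uninitialized memory read"
    );
}

fn main() {
    // A deterministic computation passes the check.
    assert_deterministic(|| (0..10).sum::<i32>());
    println!("deterministic");
}
```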
And just as expected, the crashes were indeed happening on the `assert!()` that compared the results of the two subsequent runs. They occurred only under libdiffuzz; they went completely unnoticed otherwise.
I have reported the vulnerability to the crate maintainer, who promptly investigated and fixed it, then audited the rest of the code for similar bugs and added a fuzzing target similar to mine as a CI job. Swift handling of security vulnerabilities by maintainers is always great to see, and Claxon’s maintainer went above and beyond the call of duty.
Side note: it later turned out that I forgot to disable checksum verification in Claxon, so most inputs generated by the fuzzer were rejected early because of a checksum mismatch (random data doesn’t have a valid CRC16 in it, duh). But thanks to the sheer number of inputs AFL threw at Claxon, it generated some files with a valid CRC16 anyway, by pure luck. To give you some context: AFL tests roughly 1 billion inputs per day on my mid-range CPU.
I’ve opened a PR to automatically disable checksum verification in Claxon during fuzzing so we wouldn’t have to deal with it anymore. With checksums disabled it only takes a few minutes to discover the bug using libdiffuzz.
I have also tried fuzzing with AFL + libdiffuzz on lodepng-rust and miniz-oxide, but got nothing. lodepng-rust was created as a largely automated translation of a C codebase where these issues had already been discovered with AFL, and miniz-oxide actually comes with a “run twice, compare results” fuzz harness that compares the Rust and C implementations. For those projects it was mostly about not triggering false alarms.
However, the entire rest of the Rust ecosystem has probably never been fuzzed with anything that could detect use of uninitialized memory. So if you want to claim some zero-day vulnerability discoveries to your name, just pick a crate that has `unsafe` blocks in it, ideally with something like `vec.set_len()`, and give it a spin in a “run twice, compare results” fuzzing harness with libdiffuzz. There should be plenty of low-hanging fruit because nobody’s tried picking any of it yet.
I have published a cleaned-up version of my tool on GitHub; check it out if you want to learn more or give it a spin: https://github.com/Shnatsel/libdiffuzz
Why didn’t Rust prevent this?
The short answer is “Because people have deliberately opted out of its safety guarantees.” But why did they opt out?
In Claxon it was for the sake of optimization. Here’s the commit that introduced unsafe code:
Do not fill buffer with useless zeros initially · ruuda/claxon@cfeb761
Note that before this commit the buffer was diligently initialized with zeroes using `buffer.extend(repeat(0).take(new_len - len));`, quite a mouthful! Not only is that complicated, it’s also slow: it compiles into something like a loop that fills the allocated memory with zeroes.
Other than the obvious issue of being somewhat slow on normal inputs, it can get excruciatingly slow on deliberately malformed inputs, which can be used to mount a denial-of-service attack. Even if the implementation were perfectly efficient and used the full memory bandwidth (roughly 100 GB/s for DDR4), filling the entire 64-bit address space would take about 180,000,000 seconds, or nearly 6 years. Even with memory usage limits it’s still not pretty, because a single file can do this over and over and over again.
However, modern operating systems let you request already-zeroed memory, which is not only roughly 4x faster in my tests, but also asynchronous and lazy: even if you allocate a lot of such memory, zeroing it will not block your program until you actually try to access the relevant parts of it.
Can you ask your OS to do that from Rust? Yes! `std::vec::from_elem()` will simply request zeroed memory from the OS if you pass `0` as the element to fill the vector with. This function is not public, but it’s what the `vec!` macro desugars into, so the fastest way to initialize a vector of size `max_len` is actually `vec![0; max_len]`. After switching Claxon from using uninitialized memory to this macro, there was no measurable performance difference.
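Here is the contrast in miniature; the buffer size is made up, and the equality check simply confirms both paths produce the same zeroed contents:

```rust
use std::iter::repeat;

fn main() {
    let len: usize = 0;
    let new_len: usize = 4096;

    // The old way: compiles down to a loop writing zeroes one at a time.
    let mut slow: Vec<i32> = Vec::new();
    slow.extend(repeat(0).take(new_len - len));

    // The fast, safe way: `vec!` desugars to `std::vec::from_elem`, which
    // requests already-zeroed memory from the OS when the element is 0.
    let fast: Vec<i32> = vec![0; new_len];

    assert_eq!(slow, fast);
    println!("both buffers hold {} zeroes", fast.len());
}
```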
Sadly, none of this is documented. The `vec!` macro is used all over the place in the `Vec` documentation, but it does not mention that this is by far the fastest way to safely initialize a vector, or discuss efficient initialization at all.
Documenting the fastest way to safely initialize a vector would have prevented this vulnerability.
I have opened an issue against Rust to document this more clearly.
But wait, it gets weirder
I have also investigated the vulnerability in `inflate`, discussed here. `inflate` was not actually exploitable, since the code calling the vulnerable function was structured in such a way that it never passed the specific values required to exploit it. Still, the vulnerable function is an example of a security bug in real-world code.
Unsafe code was used in `inflate` because there was no way to accomplish what they needed both safely and efficiently. I have written a detailed analysis of it on the Rust internals forum, which I will not duplicate here.
I have also included a proposal for a safe abstraction that would prevent such issues in the future. The day after writing the proposal I started contemplating how I would go about implementing it, and then found that somebody had already written and posted a working prototype. Overnight. I didn’t even have to do anything. God, I love the Rust community.
In that thread Scott McMurray has brought up a similar function in the standard library, which could be used to solve the problem if it were generalized a bit. Then he took a closer look at it and realized that the standard library function was vulnerable too:
[stable] std: Check for overflow in `str::repeat` by alexcrichton · Pull Request #54397 ·…
This is the second-ever security vulnerability in the standard library. In case you’ve missed it, I’ve written an article detailing the first one.
Just like the first stdlib vulnerability, this one was introduced during refactoring. Unlike the first one, it does not require a sequence of specific function calls, and would have been easily discovered via fuzzing if anyone had actually fuzzed that particular function.
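A trivial check in that spirit, as my own sketch: on a fixed standard library the overflow in `str::repeat` is caught and turned into a clean panic instead of a heap buffer overflow.

```rust
fn main() {
    // `"ab".repeat(n)` needs 2 * n bytes; this value makes the
    // capacity multiplication overflow `usize`.
    let huge = usize::MAX / 2 + 2;
    let result = std::panic::catch_unwind(|| "ab".repeat(huge));
    // Post-fix stdlib: a controlled "capacity overflow" panic,
    // which a fuzzer would have flagged immediately.
    assert!(result.is_err());
    println!("overflow was caught");
}
```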
This led me to contemplate automatically generating fuzzing harnesses for the standard library functions, but I haven’t gotten around to actually prototyping that yet.
First things first: if you haven’t fuzzed your code yet, you should. Doesn’t have to be with libdiffuzz either — most bugs and almost all really severe vulnerabilities can be discovered without it. In Rust it’s stupidly easy and won’t take you more than 15 minutes to set up.
My pet libdiffuzz might also be of use. Feel free to borrow it and subject your unsafe code to its unrelenting jaws.
However, fuzzing won’t find all of the bugs. Do not rely on it as proof that your 2-line `unsafe` block is actually secure! And even if it is secure now, someone will refactor it later and it will become exploitable, just like what happened in the standard library.
So if you can help it, try to refactor your unsafe code into safe code. And if you can’t, post on the Rust internals forum and describe what’s slow or what kind of safe abstractions you’re missing. For example, the lewton crate is 100% safe code because it has upstreamed its only unsafe function into the standard library, where it gets a lot more eyeballs. And it’s beneficial to others too: I have recently used this very function at work without having to worry about auditing a transmute myself.
Also, there is a project to verify the implementations of data structures in Rust standard library, and it could use all the help it can get. And if you’re interested in auto-generating fuzzing harnesses for stateless stdlib functions, let me know. I can handle generating fuzz harnesses, but I could use some help with listing stdlib functions and parsing parameter types.