Smoke-testing Rust HTTP clients

Sergey "Shnatsel" Davidoff
Jan 16, 2020 · 19 min read

Back in 2014 I was fetching the frontpages of the top million websites to scan them for a particular vulnerability. Not only did I find 99.9% of websites vulnerable to a trivial attack, I also found that the curl command was randomly crashing with a segmentation fault, indicating a likely vulnerability in libcurl — the HTTP client library that the whole world seems to depend on.

By that time I was already disillusioned with the security of software written in C and with the willingness of maintainers to fix it, so I never followed up on the bug. However, this year I decided to repeat the test with software written in a language that’s less broken by design: Rust.

Here’s how 7 different HTTP clients fared.

Update: this article was published in January 2020 and no longer represents the current state of affairs.

Baseline

Before we start talking about specific software, let’s define what we’re comparing it against. I’m going to hold all software to the standard of being actually dependable and maybe even secure — which is an incredibly high bar that the vast majority of software currently in use fails to meet. To wit:

The Linux kernel gets thousands of potentially exploitable memory safety bugs per year that are largely ignored — or at best silently fixed without any kind of security announcement, so the fixes don’t make it into Linux distributions and are later found powering exploits in the wild.

libcurl is fairly benign by comparison, with only 9 publicly reported security bugs per year (no matter how you count). Which is, you know, a new exploit every couple of months. But those are just the vulnerabilities that were properly disclosed and widely publicized; dozens more are silently fixed every year, so all you need to do to find an exploit is look through the commit log. Don’t believe me? Here is a probably-exploitable bug that is still unpatched in the latest release of libcurl. You’re welcome.

And in case you’re wondering, this trick works for every open-source C library. Although if you want the exploit to work for the next 10 years, look at the bug tracker instead.

Oh, and all of that was just for libcurl itself. Underneath it there has to be a TLS implementation, which is usually OpenSSL.

OpenSSL is infamous for Heartbleed, but it has had numerous other bugs and keeps accumulating more. The codebase contains a custom reimplementation of most standard library functions, which makes it intractable to security analysis and exploit mitigation techniques. The quality of the code and documentation is said to be such that if there were a state agency program to sabotage publicly available cryptography, OpenSSL would be its crown jewel. There are even multiple forks trying to fix OpenSSL, none of which have gained wide adoption.

I could go on like that about almost any widely used piece of C code. We live in a world where all software is broken. And while I wish the maintainers of some projects were more diligent, that would not fix the underlying problems: the complexity of practical codebases exceeds the human ability to reason about it, and humans inevitably make mistakes. When you’re writing software in a memory-unsafe language such as C, any trivial mistake can lead to a security vulnerability. This makes writing secure software in C about as easy as performing an appendectomy on yourself. This one guy did it once in Antarctica, why can’t you?

Even if you don’t care about security, every one of those bugs is a really tricky reliability issue too — even the non-exploitable ones! They’re the kind of issues that get you paged in the middle of the night to deal with them in production and then completely fail to reproduce in a test environment. The best kind.

“Hopelessly broken” is the baseline I will be comparing software against.

Yet there is still hope: C is no longer the only language you can write performant and reusable software in. Rust is the new kid on the block that does everything C can, exactly as fast, but makes the computer perform all the safety checks instead of requiring humans to think about them. The vulnerabilities that plague C codebases are impossible in Rust!

…unless you explicitly opt in to unsafe operations for some parts of the code, at which point you’re effectively back to the good old C in those places. But at least you can do that rarely and only when you really need to. Right?

Methodology

The test I ran was the simplest workload imaginable — one that curl has already failed: read a URL from the command line, fetch it and exit. No content parsing, no async I/O, no connection reuse, no authentication, nothing. Just the simplest possible thing in a real-world setting.

The default TLS backend was used for every library. The binary using each library (with code looking roughly like the sketch below) was built with Address Sanitizer, so that we’d notice a memory error if it happened instead of hoping the OS would suspect something is wrong and kill the process.
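
For illustration, here is a minimal sketch of the shape each wrapper took — this one uses ureq’s blocking API (assuming ureq 2.x), but every client got its own equivalent:

```rust
// A sketch of the per-library test binary: read a URL from argv,
// fetch it, print the body, exit. Nothing else.
fn main() {
    let url = std::env::args().nth(1).expect("usage: fetch <url>");
    match ureq::get(&url).call() {
        Ok(response) => {
            // Report body-read errors instead of swallowing them.
            match response.into_string() {
                Ok(body) => println!("{}", body),
                Err(e) => eprintln!("error reading body: {}", e),
            }
        }
        Err(e) => eprintln!("request failed: {}", e),
    }
}
```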

The list of top million websites is from Majestic-12. I’ve used 50 concurrent connections, which is fairly conservative — you can have way more than that on a mid-range desktop, let alone a server, but it should prevent us from being mistaken for a DDoS attack. One such run takes about 8 hours with the HTTP connection timeout set to 5 seconds.

I’ve used Google Cloud to run this (which conveniently provides $300 free credit), but it should also work from any regular server or public VPN. I do not recommend doing this from a plain old consumer ISP without protection — they tend to frown upon tons and tons of HTTPS connections.

I also briefly looked through the dependencies of each client to get an idea of the amount of unsafe code it relies on and what kind of failure modes I can expect.

reqwest

reqwest is the premier Rust HTTP client. Its number of downloads on crates.io leaves everything else in the dust. It rides on top of a pure-Rust HTTP stack (with the notable exception of relying on OpenSSL by default, although rustls is an option), and it just had a new major release that migrated it to futures 0.3 to support async/await.

First things first: it didn’t segfault!

I am actually impressed, because I had really low expectations going into this. cargo-geiger output on reqwest’s dependency tree does not instill confidence. The HTTP stack it relies on contains copies of standard library functions with the safety checks disabled, and something labeled as “A set of types for representing HTTP requests and responses” also contains a bespoke HashMap implementation — 1,500 LoC, 32 unsafe blocks and its own DoS protection, because “it’s faster” — without ever mentioning any of that in the README. That code also seems to predate the migration of std::HashMap to a faster implementation, so it’s not clear whether all of that custom code is even useful anymore; it could be harmful for all I can tell, since I couldn’t find any benchmarks comparing it against the standard library.

I can’t help but wonder how many more bespoke unsafe primitives lurk in that HTTP stack.

Writing bespoke unsafe code is a bad idea for the same reason why writing anything important in C is a bad idea: all human-written code has bugs, but when you have bugs in unsafe code they tend to be exploitable vulnerabilities. The aforementioned HashMap code was written by humans, so it is no exception:

  1. https://github.com/hyperium/http/issues/354
  2. https://github.com/hyperium/http/issues/355

The maintainers have not filed security advisories for these two bugs despite my call to do so (granted, you need some rather unusual client code to trigger them), but at least they were taken seriously and fixed within days, which is already incredibly responsible by C standards.

I’m not sure why this HTTP stack is not getting a publicized exploit every couple of months. Are exploits truly more rare than that? Or is it because nobody’s looking for them? Or perhaps they’re just getting silently fixed and we simply never learn about them?

Anyway, reqwest didn’t segfault on a basic smoke test, which beats the state of the art from 5 years ago. It didn’t really work, though: 6% of the time it downloaded and printed the data, then hung. I had to wrap my test binary in timeout to keep things going.

That hang turned out to be a known deadlock. It’s not really surprising that it happened, given how immature the Rust async/await ecosystem is — async/await was stabilized literally two months ago, and even the compiler itself currently fails to uphold its safety guarantees. Plus the bug could easily be in some dependency, not in reqwest itself. What is surprising to me is that they issued a new stable release with a known deadlock.

The previous release (0.9 series) has been in use for a while, so it shouldn’t have such glaring bugs. But if you don’t need to have thousands of HTTP connections open at the same time, use something simpler. Like, way simpler, without any async in it.

ureq

Minimal request library in rust.

Motivation: Minimal dependency tree, Obvious API

This, this is what my bitter, jaded eye likes to see.

ureq does not do any fancy cooperative multitasking or async I/O — just plain old blocking I/O that you can stuff into threads if you want concurrency.

You won’t be pulling your hair trying to catch a deadlock that happens on one request out of 100,000. Properly handling backpressure is a breeze. And the threaded architecture scales to a few hundred concurrent connections just fine. This is what the go-to HTTP client should look like. The use of async I/O on the client should be a weird thing that you resort to for niche use cases, not the default.
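
To make the concurrency model concrete, here is a minimal sketch (assuming ureq 2.x) of blocking calls fanned out over plain threads:

```rust
use std::thread;

// Blocking I/O scaled out with worker threads: no executor, no wakers.
// A hang or a panic is confined to a single thread. For a million URLs
// you'd bound this with a thread pool; one thread per URL is fine for
// the few hundred concurrent connections discussed above.
fn fetch_all(urls: Vec<String>) {
    let handles: Vec<_> = urls
        .into_iter()
        .map(|url| {
            thread::spawn(move || match ureq::get(&url).call() {
                Ok(response) => println!("{} -> {}", url, response.status()),
                Err(e) => eprintln!("{} failed: {}", url, e),
            })
        })
        .collect();
    for handle in handles {
        let _ = handle.join(); // Err here means the worker panicked
    }
}
```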

According to cargo-geiger, ureq relies on 10 times less unsafe Rust than reqwest, and the only scary thing in its dependency tree is SmallVec — which, granted, is still kinda scary, but at least it’s the only such thing in there. Plus the subset of the SmallVec API actually in use looks easy enough to replace with safe code. I’ve tried swapping SmallVec for the 100% safe TinyVec and there was no difference in performance, so maybe I should open a PR for that.

ureq also decisively ditches OpenSSL for rustls. Using rustls may or may not be a good idea depending on what exactly you’re doing — it has not been audited for attacks such as SMACK, which have nothing to do with memory safety — but at least the Rust type system makes such mistakes easier to avoid.

As could have been expected, no segfaults! Also no deadlocks, because there is basically nowhere for them to come from. I did find a couple of panics, though: one in ureq’s DNS handling on 13 websites out of a million (0.0013%), and an assertion failure leading to a panic in ring’s RSA validation on 7 websites (0.0007%), which leads me to believe that the Rust parts of ring are less robust than I had hoped. It sure could use some fuzzing.

I’m sure there are plenty more panics in there — I’ve only tested valid inputs, and feeding it invalid data should cause a whole lot more. Fortunately, panics are designed to be possible to handle and recover from, so these conditions can be reasonably planned for, unlike deadlocks.
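
Here is roughly what planning for panics can look like — a sketch, assuming ureq 2.x and the default unwinding panic strategy:

```rust
use std::panic;

// Contain a panicking request: one bad response takes out one request,
// not the whole crawler. Only works with panic = "unwind" (the default),
// not panic = "abort".
fn fetch_guarded(url: &str) -> Option<String> {
    let url = url.to_owned();
    panic::catch_unwind(move || ureq::get(&url).call().ok()?.into_string().ok())
        .unwrap_or(None) // a panic inside the client becomes a None
}
```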

There are also three unsafe blocks in ureq itself that I’m not sure are necessary. I would have to audit them before using ureq for anything serious. A quick glance at the bug tracker also reveals that connection pooling is not really usable yet, but if you don’t use that you should be good.

The only glaring omission I see in ureq is that it doesn’t allow setting a timeout for the completion of the whole request, so if the remote host keeps trickling data, the connection stays open forever. This is a great vector for denial-of-service attacks: if you can get ureq to open connections to URLs you control, you can have your server keep those connections open forever and easily exhaust some resource on the client — thread pool, network ports, RAM, whatever runs out first.
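
For illustration, here is a sketch of bolting a whole-request deadline onto a blocking client from the outside (assuming ureq 2.x; fetch_with_deadline is a hypothetical helper of mine, not ureq API) — note that it papers over the symptom rather than fixing it:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Impose an overall deadline by racing the request against a timer.
// The flaw: on timeout the worker thread and its connection leak,
// which is exactly the resource exhaustion described above.
fn fetch_with_deadline(url: &str, deadline: Duration) -> Option<String> {
    let url = url.to_owned();
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let body = ureq::get(&url)
            .call()
            .ok()
            .and_then(|response| response.into_string().ok());
        let _ = tx.send(body); // the receiver may be gone; ignore the error
    });
    rx.recv_timeout(deadline).ok().flatten()
}
```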

isahc

isahc provides an idiomatic Rust interface to libcurl. Yeah, you’re talking to the devil, but it’s the devil everyone has already struck a deal with, so it’s probably fine! Plus it supports the deprecated and broken crypto protocol from 20 years ago!

Jokes aside, I am glad it exists, because it provides maximum interoperability and sometimes you need that above everything else — e.g. if you have to integrate with a weird legacy system and all connections are already inside a secure VPN such as WireGuard.

The API exposed by isahc does feel pretty nice and Rusty. But I did run into an interesting gotcha: I was getting HTTP/2 protocol errors even though I had disabled HTTP/2. It turns out that disabling the http2 feature doesn’t actually disable HTTP/2.
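
As far as I can tell, the way to reliably get HTTP/1.1 is runtime configuration rather than the cargo feature — a sketch, assuming isahc’s VersionNegotiation config option:

```rust
use isahc::config::VersionNegotiation;
use isahc::prelude::*;

// Force HTTP/1.1 through version negotiation at runtime instead of
// relying on the `http2` cargo feature, which doesn't disable HTTP/2.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = isahc::HttpClient::builder()
        .version_negotiation(VersionNegotiation::http11())
        .build()?;
    let mut response = client.get("https://example.com")?;
    println!("{}", response.text()?);
    Ok(())
}
```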

I didn’t expect any fireworks from within libcurl this time around: Google started continuously fuzzing it in 2017 as part of OSS-Fuzz and has found some 75 crashing bugs since then, so the really low-hanging fruit detectable by a basic smoke test should have been picked by now.

You’d think the cargo-geiger output would be totally benign, since all the complexity is inside libcurl, but no. isahc pulls in the same crate from reqwest’s stack that is advertised as “HTTP types” but actually contains a bespoke HashMap implementation. And that crate in turn pulls in yet another bespoke and highly unsafe data structure with an equally misleading description.

Test results are promising: no segfaults! This is quite impressive because the bindings to libcurl’s C API are inherently unsafe and leave plenty of room for error. On the flip side, I might not be seeing the full picture because I’ve only compiled isahc with Address Sanitizer, not libcurl. If the bindings have caused some kind of memory corruption in libcurl data structures, this setup wouldn’t detect it.

I also didn’t see any other kind of runtime malfunction, which is a first!

However, I have some gripes not with the library itself but with the tech stack it relies on, just like with reqwest. Specifically, the curl-sys crate may choose to use its own bundled version of libcurl even if explicitly asked not to do that. This amounts to loading a gun, pointing it at your foot while you’re not looking and sticking a timer on the trigger.

You see, if the user has explicitly requested use of the shared library, they may reasonably assume that bug fixes or security patches in the system-wide library will also apply to the Rust application. But curl-sys may choose to violate that assumption depending on the build environment: if development headers for libcurl are not present, or if http2 support is requested and the system-wide libcurl is too old, curl-sys will statically link its own version of libcurl instead of erroring out.

Now your code will not only unexpectedly receive no security patches — making it trivially exploitable by the very next publicly known vulnerability — it will also be running a different version of libcurl, in a wildly different configuration than everything else, when you don’t expect it to.

For example, you may think you’ve made resource leaks impossible by setting a default connection timeout in libcurl — and then curl-sys smuggles in a totally different libcurl behind your back, stealthily reopening that can of worms. Plus, the probability of there being no version-specific bugs or behavioral differences between versions is basically nil, so good luck debugging that!

Bye-bye, security patches. Hello, debugging inexplicable production outages at 3AM.
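
One defensive measure — a sketch, assuming the curl crate’s Version API — is to log at startup which libcurl the binary actually ended up with, so a silently bundled copy doesn’t go unnoticed:

```rust
// Report the libcurl the process is really linked against, so a
// statically bundled copy shows up in the logs instead of surprising
// you in production.
fn main() {
    let info = curl::Version::get();
    eprintln!("running against libcurl {}", info.version());
    if let Some(ssl) = info.ssl_version() {
        eprintln!("TLS backend: {}", ssl);
    }
}
```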

Initially I had a hard time communicating to the maintainer that this is an issue, but they have eventually conceded that this could be improved and that PRs fixing this would be welcome.

I won’t even spend time commenting on the timeliness of security updates to the bundled libcurl (weeks after CVE disclosure) or the existence of RustSec advisories for such updates (none).

http_req

Simple HTTP client with built-in HTTPS support. Currently it’s in heavy development and may frequently change.

No bespoke unsafe code, OpenSSL or rustls at your option, and no other dependencies with unsafe code in them!

No segfaults (duh!), one panic, some hangs… wait, those seem to be legitimate hangs: the server is just submitting chunks of data really slowly! Then why didn’t ureq also hang here? Well, let’s report that to ureq.

Other than the panic, this looks like my idea of a perfect library — which sounds too good to be true. And it is: http_req is so basic that there’s hardly anything in there to break. For example, it doesn’t follow redirects, so it never even got to the frontpages of most websites I threw at it; it only saw a basic redirect response. There is also no support for setting a timeout for the entire request, so it’s susceptible to denial-of-service attacks.

In a nutshell: too basic for most uses right now, but perhaps something to keep an eye on.

attohttpc

I know I will have trouble telling anyone about this crate. Naming it something like “hytt” or “ht2p” would make it much easier to refer to.

It uses blocking I/O, which makes it a potential contender for being the go-to solution. The feature set is roughly on par with ureq, except it supports some features that are basically optimizations (like compression) and doesn’t support some features that are strictly required for some use cases (like cookies). Sadly it has no rustls option, so you’re locked into OpenSSL.

Dependencies are mostly sane and minimal, with one exception. That bespoke HashMap advertised as “HTTP types” is in here too, and it of course pulls in that other data structure — 8,000 lines of code, full of unsafe. Using that thing makes sense if you want to pass small chunks of HTTP data across threads without knowing the access patterns in advance (so, basically, async code), but attohttpc does none of that insanity: it parses the entire response on the same thread. So all attohttpc gets for its trouble is a slowdown from non-contiguous data layout and atomic reference counting, plus an extra 8,000 LoC of unsafe code added to its attack surface. This crate would be much better off using the standard library Vec.
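
For comparison, here is a sketch of the plain approach — the hypothetical read_body below reads any blocking response body into a single contiguous, 100%-safe Vec<u8>:

```rust
use std::io::Read;

// Read the whole body into one contiguous Vec<u8>; `limit` caps the
// size as a guard against unbounded responses. No unsafe, no atomics,
// no fragmented buffers.
fn read_body<R: Read>(reader: R, limit: u64) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    reader.take(limit).read_to_end(&mut buf)?;
    Ok(buf)
}
```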

Initially attohttpc lacked any kind of timeouts, but the maintainers have implemented connection and read timeouts upon my request. A timeout for the entire request is still missing though, so DoS attacks remain possible.

A test of the version with timeouts triggered no segfaults, but did reveal one intermittent panic and caused a few expected hangs due to the absence of a timeout for the entire request.

minreq

Simple, minimal-dependency HTTP client. The library has a very minimal API, so you’ll probably know everything you need to after reading a few examples.

Exactly what it says on the tin! No scary dependencies whatsoever, HTTPS via rustls and optional JSON through serde_json. The API is extremely minimal, and judging by the description that’s on purpose, so I do not expect this to be a viable choice in any kind of serious project: you will eventually need something not covered by the API and have to migrate to another library.

The documentation claimed support for a timeout covering the entire request, but upon reading the code I found that this was not the case. Fortunately, it has since been fixed, though the fix may come with a significant performance hit.
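
Here is what that looks like in use — a sketch, assuming minreq 2.x, where with_timeout takes seconds:

```rust
// A whole-request timeout via minreq's API: with_timeout covers the
// entire request, which is the behavior discussed above.
fn main() -> Result<(), minreq::Error> {
    let response = minreq::get("https://example.com")
        .with_timeout(5)
        .send()?;
    println!("{}", response.as_str()?);
    Ok(())
}
```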

The test revealed no segfaults and two panics; I can’t really comment on hangs because I tested it before the timeout fix.

cabot

No, it’s not a certificate authority bot, it’s an HTTP client. And I thought “attohttpc” was a bad name!

It uses async I/O, but what makes it distinct from reqwest is that it uses an entirely different async I/O stack with dramatically less unsafe code. And by virtue of being a different implementation it’s also going to exhibit new and exciting failure modes!

Sadly, cabot is currently too basic to exhibit anything exciting. It doesn’t even follow redirects, so it actually processed less than half of the websites I threw at it. It also supports no timeouts of any kind, which makes it unusable in automated systems, where no user can decide they’ve had enough and kill the process.

The amount of unsafe code in the dependencies is at about half that of reqwest. There is nothing particularly egregious in there, except for two things. One, there is a hard dependency on pretty_env_logger, which alone pulls in more unsafe code than all of ureq’s dependencies combined. Two, it uses regex to parse some parts of HTTP — which is a bad idea, although given that it uses Rust’s supposedly DoS-resilient regex library rather than PCRE, it’s not a terrible one. Fortunately, both of these issues look simple enough to fix, and with those dependencies ditched cabot would go down to a third of reqwest’s unsafe code.

The test results are basically in line with what I expected: no segfaults, but one frequent panic. I also got 25,000 hangs, and I have no idea whether those were deadlocks or the upstream simply not responding in time, because cabot doesn’t support any kind of timeouts.

So nothing to get excited about in here yet, but perhaps it will evolve into a viable contender for the weird use cases that require async I/O.

Not tested

surf

surf is a common interface on top of isahc and the HTTP stack that’s underlying reqwest. I’ve already tried both backends, so not much to see here.

yukikaze

yukikaze is built on basically the same stack as reqwest, but provides a different API on top. Since all the complaints I had about reqwest were actually about the underlying HTTP stack, they also apply here.

awc

An HTTP client built on the Actix framework. The HTTP stack seems to be shared with actix-web. It uses async I/O and thus relies on Rust’s immature async/await ecosystem, which brings up the same issues with complexity and exciting new failure modes as reqwest. It definitely should not be your first choice when shopping for an HTTP client. And the cargo-geiger output is not comforting either.

A quick glance at the dependencies reveals that it relies on actix-service, which underpins all of Actix and has a bespoke and unsound Cell implementation. For example, this method violates memory safety by handing out multiple mutable references to the same data, which can lead to e.g. a use-after-free vulnerability. I have reported the issue to the maintainers, but they have refused to investigate it.

There are no comments on their bespoke Cell implementation — not only no comments justifying why it’s needed, but no comments at all. So I dug through the commit log to see why they rolled their own unsafe primitive instead of using Rc<RefCell>, which would do the exact same thing, except safely. Here’s the commit message justifying the change:

add custom cell

That’s it.
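
For reference, here is a sketch of what the safe equivalent looks like: Rc<RefCell<T>> gives you shared, mutable, single-threaded state with the borrow rules enforced at runtime, so overlapping mutable borrows panic instead of silently aliasing memory.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Shared mutable state without unsafe: Rc provides shared ownership,
// RefCell checks the borrow rules at runtime. Two simultaneous mutable
// borrows would panic rather than hand out aliasing references.
fn main() {
    let shared = Rc::new(RefCell::new(Vec::new()));
    let alias = Rc::clone(&shared);

    shared.borrow_mut().push(1);
    alias.borrow_mut().push(2); // fine: the previous borrow already ended

    assert_eq!(*shared.borrow(), vec![1, 2]);
}
```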

And while I probably could pressure the maintainers into fixing this particular bug (or maybe even dropping their bespoke cell altogether, if I’m really lucky!), that will do nothing to improve their general approach to safety which determines the shape of all their future code. So I just give up and admit that I can’t put my money on the security or reliability of this thing.

I want to highlight that Actix is not unique in having its own bespoke and unsound unsafe primitives — the HTTP stack that’s underlying reqwest is facing largely the same issues, although its maintainers are much more willing to fix reported bugs. And the solution to this problem is very simple: stop making bespoke unsafe primitives, because for everything other than toy projects reliability trumps performance.

I need an HTTP client…

So which library would I use if I needed one right now?

Well, I’ve found serious bugs in every single one of them, so none are usable as-is. Plus I would do more research than this before committing to one.

First I would check if the problem can be solved with blocking I/O, and if so, look no further. Clients with blocking I/O are simple, dependable, and boring — they have nothing in them that could break in an interesting way. I won’t be pulling my hair when debugging and nobody’s going to be woken up in the middle of the night because something has deadlocked in production.

Based on this cursory glance it seems that ureq and attohttpc could be hammered into a usable shape in a week or so each (assuming you’re willing to stick a panic catcher on them), plus however long it will take to add rustls as an optional TLS backend to attohttpc if you want to get rid of OpenSSL.

I’m letting the panics slide because they’re not a DoS vector: panics are trivial to guard against, especially if you’re spawning threads, and there is almost nothing in ureq’s dependency tree that could crash the entire process, save for a panic-while-panicking.

But what if I had a use case that is not served by the clients with blocking I/O?

It pains me to say this, but… I wouldn’t go with reqwest. Credit where it’s due: unlike curl, it’s not hopelessly broken, and developers of the underlying async runtime go above and beyond due diligence in some respects. But first, the async/await ecosystem as a whole is still immature, and second, I won’t be able to trust reqwest’s underlying HTTP stack until they ditch most of their bespoke unsafe primitives.

Sadly, isahc is not a great candidate either. For starters, I would still need to rely on the immature async ecosystem if I needed high performance, and could run into the same deadlocks as with reqwest (or cabot?). Also, when something goes wrong in libcurl, it brings down the entire process, aborting all requests currently in flight (unlike ureq or attohttpc, where you can abort just one), so the DoS resilience of libcurl is basically nil while the attack surface is enormous. Which is why there is no way I’m linking libcurl+OpenSSL into my main process — otherwise a crash would take down not only all in-flight requests but the entirety of my backend as well. So I’d have to put all the HTTP fetching code into a separate process, sandbox it and communicate with it over something like RPC… Ewww.

Actually, why am I even trying to do this in Rust at this point? If I’m breaking the HTTP fetching into a separate process anyway, I might as well go for a mature async I/O stack in there.

I’m not sure how the HTTP client in Go is implemented, but at least Go gets async I/O mostly right, so that’s worth a look. And it comes with its own TLS implementation, so we won’t be stuck with OpenSSL. On the other hand, Go makes concurrency very easy to mess up (and no, channels are not a solution) and error handling in Go is a minefield, but hopefully fetching webpages would be simple enough not to run into these issues?

Alternatively, Erlang has a mature async I/O stack, its reliability is unparalleled, and it makes it really hard to mess up concurrency. But I don’t know if its HTTP client specifically is any good, and it may be hard to find people familiar with Erlang, so the decision to use it should not be taken lightly.

Conclusion

The place of the go-to Rust HTTP client is sadly vacant. Clients with async I/O will never be able to fill it due to the sheer operational complexity they bring, and none of the clients with blocking I/O have enough features to be viable contenders.

Instead of a single good HTTP client with blocking I/O, Rust has at least 4 passable ones. I wish their maintainers got together and made one good client instead.

Two months in, the async/await ecosystem is as immature as you’d expect. If I needed an HTTP client with async I/O right now, I’d use a different memory-safe language.

Only libraries written in C and Rust can be integrated into code written in other languages, and the C libraries that the entire world relies on are hopelessly broken. Rust libraries are also broken, but not hopelessly so. Let’s fix them and usher in a new era of performant, secure and reliable software.
