Security as Rust 2019 goal

Sergey "Shnatsel" Davidoff
Jan 18, 2019 · 8 min read
Our vision for Rust. Image courtesy of Sreejith K.

Note: I am publishing this on behalf of the Secure Code Working Group because we do not have a WG blog established yet. Multiple people have contributed to this post.

The Rust Secure Code Working Group is a bunch of curious people hanging out in a public chat on the Internet. You are welcome to join us on Zulip.

Our mission is to make it easy to write secure code in Rust.

We have the following goals for the Rust language and ecosystem:

  • Most tasks shouldn’t require dangerous features such as unsafe. This includes FFI.
  • Mistakes in security-critical code should be easily caught by machines or, failing that, humans aided by machines.
  • It should be clear to programmers how to perform security-sensitive tasks.
  • Security-critical code that Rust programmers rely on should be bug-free.

This article details the areas we have agreed are especially critical and would like to see improved in 2019.

Security updates

Safe Rust eliminates entire classes of security bugs, which makes it very promising for security-critical applications such as web servers. However, even memory-safe code may contain logic bugs leading to security breaches. No code is perfect, so security bugs will occur, and are already occurring.

Rust needs a mechanism to deliver security updates to any kind of production deployments in a timely manner. This involves finding good answers to the following questions:

  1. If you run Rust code in production, how do you get notified that you need to apply a security update? How do you set up a pipeline to apply these updates automatically? This is exacerbated by Rust’s static linking, since every affected program needs to be updated individually, even if a vulnerability is in a transitive dependency. We need solutions both for software installed via cargo install and for software deployed via the complex pipelines used for production servers.
  2. How should fixes for bugs in the compiler or standard library be applied? Currently there is no “rebuild everything that was ever installed” command in Cargo. Also, how do we notify people that they need to rebuild everything? What if the code is non-trivially deployed, like a shared library linked into a program written in another language?
  3. How should security updates to statically linked C libraries be handled? What if the build targets Windows, where the only reasonable way to build against C libraries is to bundle them with the -sys crate? Should the maintainer of a Rust -sys crate be responsible for security updates to the C code, and if so, how do we make that manageable for the maintainer?

Prior art

The RustSec project hosts a Rust security advisory database and provides a command-line tool that checks Cargo.lock for vulnerable dependencies. This is a great start, but you currently need to run it manually on each of your projects, and doing that every day is impractical. It also doesn’t handle compiled binaries.

There is also a tool to cross-reference the crates.io index with the RustSec database. It has identified, for example, a crate with 2500+ downloads per month that depends on a grossly outdated and trivially exploitable version of OpenSSL. Right now crates.io itself does not present this info in any way, so the crate in question may keep accumulating unsuspecting users.

An RFC for some of this functionality was proposed in 2016 but shelved. The issues already in the RustSec database are proof that it is needed. Reviving it is being discussed here.

The Rust compiler encodes the rustc, LLVM, and standard library versions into every binary it produces. This makes it easy to check binaries for vulnerable stdlib versions, regardless of deployment method. However, the versions of the other libraries used to compile the binary are not encoded.

The Update Framework provides protocols for resilient and timely delivery of security updates, which is harder than it sounds. An implementation of it in Rust is in progress.

Use of unsafe code

Many widely used libraries use unsafe code where it is not strictly necessary. Typically this is done for performance reasons, i.e. there is currently no abstraction that achieves the goal both safely and efficiently.

The goal here is to reduce or eliminate the use of unsafe code throughout the ecosystem wherever it is not strictly necessary, without regressing correctness or performance. The action items for that include:

  1. Investigate why exactly people resort to unsafe code on a case-by-case basis. Compile a list of case studies so that we can identify missing safe abstractions or idioms.
  2. Try to rewrite unsafe code as safe code without regressing performance. Document the patterns and anti-patterns, and create guidelines and/or Clippy warnings based on them.
  3. Create safe abstractions to serve common cases that are currently served by unsafe code, such as copying a part of a slice into itself (see the sketch after this list).
  4. Prioritize language and compiler work items that allow better verification at compilation stage, such as better bounds check elision or const generics.
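
As a sketch of item 3 above: copying part of a slice into another, possibly overlapping, part of the same slice used to mean reaching for unsafe ptr::copy. A bounds-checked method expressing exactly this, slice::copy_within, was later added to the standard library; a minimal example:

```rust
/// Shift the contents of `buf` left by `by` positions, in place.
/// Panics if `by > buf.len()`.
fn shift_left(buf: &mut [u8], by: usize) {
    // Safe, bounds-checked replacement for the old unsafe idiom
    // `ptr::copy(buf.as_ptr().add(by), buf.as_mut_ptr(), buf.len() - by)`;
    // it compiles down to a memmove.
    buf.copy_within(by.., 0);
}
```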

The Rust ecosystem is fairly large these days, so there is a lot of code to cover. Perhaps a community effort akin to the libs blitz is required.

Prior art

Non-lexical lifetimes, which landed in the 2018 edition of Rust, made the borrow checker smarter, reducing the need to resort to unsafe code. Kudos to everyone involved!

Verification of standard library

The Rust standard library is a truly impressive piece of engineering. It sets the bar for Rust API design and incorporates the latest advances in algorithms and data structures, with more on the way.

Due to its role as the foundation of the language, providing essential safe abstractions over the hardware, it is also full of unsafe code.

Two serious vulnerabilities have been discovered in libstd to date. Another was introduced but reverted before release because it was so bad that it caused crashes even on valid data. All of these were introduced during optimization or refactoring, and all had passed manual code review.

The fact that humans are no good at analyzing unsafe code is the very reason for Rust’s existence. We need computers to assist in verification of Rust’s standard library.

There are several ways to go about that:

  1. Static analysis would be a relatively cheap and scalable way to gain more confidence in the code. Rust is much more amenable to static analysis than C/C++ or dynamically typed languages, but there is no go-to security-oriented static analyzer yet.
  2. Fuzzing or parametric testing could also scale well, assuming fuzzing harnesses could be automatically generated based on type definitions of stdlib functions. It would not find all the bugs, but it is easy to run continuously and feasible to scale to the entirety of the standard library with little maintenance burden.
  3. Formal verification methods provide greater assurance of correctness, but require more effort and introduce a non-trivial maintenance burden. Even though verifying the entirety of the standard library this way is probably not practical at this time, it would be great to apply them to its most essential parts.

One of the already discovered vulnerabilities was trivial and would have been flagged by a static analyzer or easily discovered via fuzzing — if any of those were actually employed.

Prior art

Parametric testing is easy to use in Rust, with two mature frameworks available: QuickCheck, inspired by the Haskell tool of the same name, and Proptest, inspired by Hypothesis.
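
For a flavor of what this looks like in practice, here is a minimal Proptest sketch; the round-trip property is an illustrative stand-in for whatever invariant your own code is supposed to uphold:

```rust
use proptest::prelude::*;

proptest! {
    // Proptest generates many random inputs and, on failure,
    // shrinks them down to a minimal counterexample.
    #[test]
    fn format_parse_roundtrip(n in any::<u32>()) {
        // Illustrative property: formatting an integer and
        // parsing it back must be lossless.
        prop_assert_eq!(n.to_string().parse::<u32>().unwrap(), n);
    }
}
```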

The guided-fuzzing trifecta of AFL, libFuzzer, and honggfuzz is already adapted to work with Rust and takes about 15 minutes to deploy. The trophy case is quite impressive.
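
To show how little ceremony that deployment involves, here is roughly the skeleton that cargo fuzz init generates, with a single line filled in; std::str::from_utf8 stands in for whatever parser you actually want to fuzz:

```rust
// fuzz/fuzz_targets/fuzz_target_1.rs — a libFuzzer target via cargo-fuzz.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Feed arbitrary bytes to the code under test: the fuzzer flags
    // panics, and sanitizers catch memory errors in unsafe code.
    let _ = std::str::from_utf8(data);
});
```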

A new fuzzer called Angora has just been released; according to its authors, it is vastly superior to the AFL-inspired status quo. It is itself written in Rust, but cannot fuzz Rust code yet.

Bughunt-rust has experimented with probabilistic model checking to verify standard library data structures, inspired by similar work in Erlang, but using guided fuzzers instead of QuickCheck’s RNG to improve coverage.

Fuzzing relies on dynamic analyzers to detect issues. Rust supports the venerable LLVM sanitizers, although Address Sanitizer currently requires some workarounds and nobody is really sure how to use Memory Sanitizer, which has led some people to build custom tooling instead. There is also a Rust-specific tool called MIRI, but so far it supports only a subset of Rust and does not compose well with fuzzing.
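
As a small illustration of what these dynamic tools exist to catch, the following program compiles and may even appear to run correctly, yet it has undefined behaviour that MIRI (via cargo miri run on a nightly toolchain) reports immediately:

```rust
fn main() {
    let v = vec![1u8, 2, 3];
    let p = v.as_ptr();
    drop(v); // the heap allocation is freed here
    // Use-after-free: reading through a dangling pointer is undefined
    // behaviour. Safe Rust cannot express this; unsafe code can, silently.
    let first = unsafe { *p };
    println!("{}", first);
}
```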

Clippy is the go-to heuristic static analyzer for Rust, although it does not have many safety lints yet. MIRAI is a sound static analyzer for Rust based on the theory of abstract interpretation, but it is in the early stages of development.

On the formal verification front, the RustBelt project has proven certain properties of the Rust type system and verified the correctness of several standard library primitives. The SMACK software verification toolchain works with Rust and has been used to find real bugs, but it does not take advantage of the Rust type system, which makes it somewhat cumbersome to use.

Some promising work, known as Prusti, has been done on proving the absence of overflows and panics, and even user-defined properties, on unmodified Rust code, without manually writing proofs in a verification language. It only works with a subset of safe Rust so far, but the prospect of formally verifying properties of Rust code with little to no additional effort is very exciting.
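
To make “little to no additional effort” concrete, here is a sketch of what a Prusti-style specification looks like, using attributes from the prusti-contracts crate; treat the exact syntax as an approximation, since it may differ between versions:

```rust
use prusti_contracts::*;

// Prusti checks these contracts statically, along with the absence of
// panics and arithmetic overflow in the body; no proof script is needed.
#[requires(lo <= hi)]
#[ensures(lo <= result && result <= hi)]
fn midpoint(lo: u32, hi: u32) -> u32 {
    lo + (hi - lo) / 2 // cannot overflow, given the precondition
}
```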

Code authentication and trust

Trust in third-party code is a hot topic right now due to the recent event-stream incident in NodeJS. Ironically, security researchers warned about exactly this years ago.

This is an important problem, and there is work being done on that front. For example, something like cargo-crev may solve it in some cases. But trust towards external code is an unsolved problem in general, even in programming languages with built-in sandboxing capabilities.

As such, we do not expect it to be completely solved in 2019. However, there are improvements that we can make right now.

Adopting better code authentication practices is one. Someone is going to get their account compromised sooner or later, and the recent ESLint compromise is quite illustrative of why a strategy for mitigating this is needed. Even basics such as requiring signatures from several maintainers to upload a package are currently not supported.

This was brought up as early as 2014; the attitude towards it is generally positive, and some work is being done in this direction, but nobody has stepped up to implement the remaining parts yet.

We need your help!

Some of the items we’ve listed require participation from core Rust teams, but most of them really don’t.

This is where you come in.

Rust is a community-driven language. We are just random people on the Internet coming together to work on a shared goal.

If you feel that these goals are worthwhile, pick an interesting item from the WG issue tracker and see if you can help. After all, it takes more than a village to build a successful programming language.

And stop by to say hello on Zulip!
