Like many developers, I have been interested in Rust for quite some time. Not only because it appears in so many headlines on Hacker News, or because of the novel approach the language takes to safety and performance, but also because people seem to talk about it with a particular sense of love and admiration. On top of that, Rust is of particular interest to me because it shares many of the same goals and features as my go-to language: Swift. Since I’ve recently taken the time to try out Rust in some small personal projects, I wanted to document my impressions of the language, especially in how it compares to Swift.

The Big Picture

Rust and Swift have a lot in common: they are both compiled languages with powerful, modern type systems and a focus on safety. Features like algebraic data types and first-class handling of optional values help move many classes of errors from runtime to compile time in both languages.

So how do these languages differ? The best way I can characterize the difference is:

Swift makes it easy to write safe code.
Rust makes it difficult to write unsafe code.

Those two statements might sound equivalent, but there is an important distinction. Both languages have tools to achieve safety, but they make different trade-offs in how they get there: Swift prioritizes ergonomics at the expense of performance, while Rust prioritizes performance at the expense of ergonomics.

The Trade-off: Performance vs Ergonomics

This difference in priorities shows up most clearly in how the two languages approach memory management. I’ll start with Rust, because its approach to memory management is one of its unique selling points.

In Rust, memory is primarily managed statically (yes, there are other modes of memory management like reference counting, but we’ll ignore those for now). This means the Rust compiler analyzes your program and, according to a set of rules, decides when memory should be allocated and released.
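To make that concrete, here is a minimal sketch of scope-based release. The `Buffer` type and the `DROP_LOG` it writes to are my own illustrative inventions: implementing `Drop` just lets us observe exactly when the compiler decides each value is released.

```rust
use std::sync::Mutex;

// Illustrative log of deallocations, so we can observe drop order.
static DROP_LOG: Mutex<Vec<&'static str>> = Mutex::new(Vec::new());

// A hypothetical resource type; its Drop impl records when it is freed.
struct Buffer {
    name: &'static str,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        DROP_LOG.lock().unwrap().push(self.name);
    }
}

// Returns a snapshot of the log taken just before the function returns.
fn drop_order() -> Vec<&'static str> {
    DROP_LOG.lock().unwrap().clear(); // reset so runs are repeatable
    let _outer = Buffer { name: "outer" };
    {
        let _inner = Buffer { name: "inner" };
        // `_inner` is released right here, at the end of its scope --
        // no garbage collector, no reference counts.
    }
    // `_outer` is released when the function returns, after this snapshot.
    DROP_LOG.lock().unwrap().clone()
}

fn main() {
    let during = drop_order();
    assert_eq!(during, ["inner"]); // only the inner value was freed so far
    assert_eq!(*DROP_LOG.lock().unwrap(), ["inner", "outer"]);
    println!("drops happened in scope order: {:?}", DROP_LOG.lock().unwrap());
}
```

The key point is that both release points are decided entirely at compile time, from the structure of the code.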

In order to deliver safety, Rust uses a novel strategy called borrow checking. In practice this means that, as a programmer, every time you pass a value around by reference, you have to specify whether that reference is mutable or immutable. The compiler then enforces a set of rules (any number of immutable references, or exactly one mutable reference, but never both at once) to ensure that you cannot mutate a single piece of memory from two places at once, making it provable that your safe Rust code has no data races.
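A small sketch of that aliasing rule in action (the `bump_head` helper is my own illustrative name):

```rust
// Takes an exclusive (&mut) borrow so it can mutate the vector.
fn bump_head(scores: &mut Vec<i32>) {
    *scores.first_mut().expect("non-empty") += 1;
}

fn main() {
    let mut scores = vec![10, 20, 30];

    // Any number of shared (&) borrows may coexist:
    let first = &scores[0];
    let second = &scores[1];
    println!("{first} {second}"); // last use of the shared borrows

    // An exclusive borrow is legal only once the shared ones have ended:
    bump_head(&mut scores);

    // Using `first` after this point would be a compile error: its shared
    // borrow would then overlap the exclusive borrow taken just above.
    assert_eq!(scores, [11, 20, 30]);
}
```

Notice that nothing here runs at runtime to enforce the rules; the compiler simply rejects programs that break them.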

This approach has some very beneficial properties with respect to memory usage and performance. Borrow checking can be very parsimonious with memory, since it generally avoids copying values. It also avoids the performance overhead of a solution like garbage collection, since the work is being done at compile time rather than runtime.

However, it does come with some drawbacks as far as ease of use. Due to the nature of ownership, some design patterns simply do not translate to Rust. For instance, it’s not trivial to implement something like a doubly linked list or mutable global state. This likely becomes more intuitive with time, and workarounds exist for these cases, but Rust certainly imposes limitations on the programmer that are not present in other languages.
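As an example of what such a workaround looks like, here is a sketch of the usual safe-Rust approach to a doubly linked list (the `Node` type and `make_pair` helper are my own illustrative names): shared ownership via `Rc<RefCell<...>>` for the forward link, and a non-owning `Weak` pointer for the backward link so the two nodes don’t keep each other alive forever.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,   // owning forward link
    prev: Option<Weak<RefCell<Node>>>, // non-owning backward link
}

// Builds a two-node list: first <-> second.
fn make_pair(a: i32, b: i32) -> (Rc<RefCell<Node>>, Rc<RefCell<Node>>) {
    let first = Rc::new(RefCell::new(Node { value: a, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: b, next: None, prev: None }));
    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));
    (first, second)
}

fn main() {
    let (first, second) = make_pair(1, 2);
    // Walk forward, then backward through the links.
    let fwd = first.borrow().next.as_ref().unwrap().borrow().value;
    let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap().borrow().value;
    assert_eq!((fwd, back), (2, 1));
}
```

Workable, but clearly noisier than the equivalent pointer juggling in a language without ownership rules, which is exactly the trade-off being described.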

While it’s not talked about as often as Rust’s, Swift also has an interesting story when it comes to memory management.

Swift has two fundamental kinds of types: reference types and value types. In general, reference types are heap-allocated and managed by reference counting. This means that at runtime, the number of references to an object is tracked, and the object is deallocated when the count reaches zero. Reference counting in Swift is always atomic, meaning every change to a reference count is performed with an atomic CPU operation. This eliminates the possibility of a reference being mistakenly freed in a multi-threaded application, but it comes at a significant performance cost, since atomic operations are considerably more expensive than plain increments and decrements.

Rust also has tools for reference counting and atomic reference counting, but these are opt-in rather than being the default.
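The opt-in nature of Rust’s reference counting can be sketched as follows (the helper function names are my own): `Rc` uses cheap non-atomic counts but cannot cross threads, while `Arc` pays for atomic counts in exchange for thread safety.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Rc: non-atomic reference counts; cheap, but single-threaded only
// (Rc is not Send, so the compiler forbids moving it across threads).
fn rc_count_demo() -> (usize, usize) {
    let local = Rc::new(String::from("single-threaded"));
    let clone = Rc::clone(&local); // bumps the count, no atomic op needed
    let while_cloned = Rc::strong_count(&local);
    drop(clone);
    (while_cloned, Rc::strong_count(&local))
}

// Arc: atomic reference counts; slightly costlier, but safe to share
// across threads.
fn arc_sum_across_threads() -> i32 {
    let shared = Arc::new(vec![1, 2, 3]);
    let worker = {
        let shared = Arc::clone(&shared); // atomic count bump
        thread::spawn(move || shared.iter().sum::<i32>())
    };
    worker.join().unwrap()
}

fn main() {
    assert_eq!(rc_count_demo(), (2, 1));
    assert_eq!(arc_sum_across_threads(), 6);
}
```

In Swift you get the `Arc`-style atomic behavior everywhere by default; in Rust you only pay for it where you ask for it.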

Value types, by contrast, are generally stack-allocated, and their memory is managed statically. However, value types in Swift behave quite differently from how Rust handles memory. In Swift, value types have copy semantics: every time a value is assigned to a new variable or passed to a function, you conceptually get an independent copy. For larger types like the standard library’s collections, this is optimized with “copy-on-write”: the underlying storage is shared between copies until one of them is mutated, at which point the actual copy is made.

This behavior achieves some of the same goals as Rust’s borrow checking: as a programmer, you generally never have to worry about a value changing mysteriously due to some unexpected side effect elsewhere in the program. It also requires a bit less cognitive load, since whole classes of ownership-related compile-time errors in Rust simply do not exist in Swift. However, it does come at a cost: those additional copies take extra memory and CPU cycles.

In Rust it’s also possible to copy values as a way to silence borrow-checker errors, but this does add visual noise, since copies have to be requested explicitly.
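That explicit copy looks like this (the `consume` function is my own illustrative example of an API that takes ownership):

```rust
// Takes ownership of its argument, so the caller gives the vector up.
fn consume(names: Vec<String>) -> usize {
    names.len()
}

fn main() {
    let names = vec![String::from("a"), String::from("b")];
    let n1 = consume(names.clone()); // explicit copy: visible noise, visible cost
    let n2 = consume(names);         // original moved here; using `names` again
                                     // afterwards would be a compile error
    assert_eq!((n1, n2), (2, 2));
}
```

Swift would make that copy for you silently; Rust insists you write `.clone()`, which is noisier but keeps the cost on the page.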

So here we have a good example of the trade-offs made by the two languages. Swift makes some broad assumptions about how memory should be managed while still maintaining a level of safety, a bit like a C++ programmer following best practices before giving much thought to optimization. This makes it very easy to jump in and write code without worrying about low-level details, while still getting some basic run-time safety and correctness guarantees you would not get in a language like Python or even Go. However, it comes with performance cliffs that are easy to fall off of without realizing it until you run your program. It is possible to write high-performance Swift code, but doing so often requires careful profiling and optimization.

Rust, on the other hand, gives you precise tools for specifying how memory should be managed, and then places hard restrictions on how you use them in order to rule out unsafe behavior. This gives you very good performance characteristics right out of the box, but it requires you to take on the additional cognitive overhead of ensuring that all the rules are followed.

My takeaway from this has been that while these languages share some goals, they have fundamentally different characteristics that lend themselves to different use cases. Rust, for example, seems the clear choice for something like embedded development, where optimal use of memory and CPU cycles is extremely important, and where the code-compile-run loop may be slower, so it’s valuable to catch every possible issue at compile time. Swift, meanwhile, might be a better choice for something like data science or serverless logic, where performance is a secondary concern and it’s valuable to work close to the problem domain without having to consider a lot of the low-level details.

In any case, I will be very interested to follow both of these languages in the future, and I will follow this post with more observations about the comparison between Swift and Rust.