Frustrated? It's not you, it's Rust


Learning Rust is... an experience. An emotional journey. I've rarely been more frustrated than in my first few months of trying to learn Rust.

What makes it worse is that it doesn't matter how much prior experience you have, in Java, C#, C or C++ or otherwise - it'll still be unnerving.

In fact, more experience probably makes it worse! The habits have settled in deeper, and there's a certain expectation that, by now, you should be able to get that done in a shorter amount of time.

Maybe, after years of successfully shipping code, you don't have quite the same curiosity, the same candor and willingness to feel "lost" that you did back when you started.

Learning Rust makes you feel like a beginner again - why is this so hard? This doesn't feel like it should be that hard. I've done similar things before. I know what I want. Now I just need to... make it happen.

I'm going to keep including introductions like these in all my beginner-level articles, because they're very important: if you're picking up Rust, expect roadblocks. Telling you that "you'll be up to speed in no time" would be a flat out lie, and I'm not big on lying.

There is, however, a very good reason why learning Rust is so hard. When you switch from another language to Rust, you're not switching from French to Spanish - you're not just learning new vocabulary so that you can say the same things, only spelled and pronounced differently.

You're learning new vocabulary and learning to talk about topics you've never had to discuss before. You're learning a completely new communication style. And speech (spoken or written) is so fundamental to so many of us, starting over is extremely unsettling.

You encounter problems that you cannot frame using any of your prior knowledge. Writing Rust involves playing by a set of rules that you won't be able to describe by analogy to other languages. Which adds another layer of difficulty on top: often, you won't even be able to describe what's wrong well enough to ask for help.

General-purpose search engines are fairly useless when it comes to solving Rust issues. Your best bet is pretty much the Rust compiler itself, and its diagnostics. That, or, biting the bullet and accepting that you'll have to go back and read some more beginner-level material before you can come back to what it is you were trying to do, and have an "ahAH!" moment.

The compiler can only go so far, though - because not only is it, too, confronted with the difficulty of explaining concepts that have no equivalent in other languages, but also: it's working from your code, not your mind.

And when you take what's in your mind and put it into code, well, details get lost - and while those may not matter in other languages, in Rust, they matter very much.

You see, the crux of the problem is...

You're smarter than Rust

I'm not kidding!

This is especially true if you've done a lot of dynamic typing / weak typing work.

Coming from a language such as Python, Ruby, or JavaScript, you're used to writing functions that look like this:

// (not actually valid Rust)

fn add(a, b) {
    a + b
}

And when you have a function like that, you know to only call it on things that can be added together. Numbers, for example. And you know better than to try and call it with something like... objects, or dictionaries, because then the result might be nonsensical.

Cool bear's hot tip

In JavaScript, for example, the following:

console.log({} + {});

Prints:

[object Object][object Object]

Rust, however, is not that smart. First off, it really wants everything to have a type:

// (does not compile)

fn add(a: TypeA, b: TypeB) -> TypeResult {
    a + b
}

And we can't just conjure types out of thin air.

We can pick an existing type, like i32:

// works fine!

fn add(a: i32, b: i32) -> i32 {
    a + b
}

But if we want to make our add function work for any two things that can be added, we have to make our function generic - which is its own rabbit hole.

// (still doesn't compile)

fn add<T>(a: T, b: T) -> T {
    a + b
}

But that example still doesn't compile:

error[E0369]: cannot add `T` to `T`
 --> src/main.rs:6:7
  |
6 |     a + b
  |     - ^ - T
  |     |
  |     T
  |
help: consider restricting type parameter `T`
  |
5 | fn add<T: std::ops::Add<Output = T>>(a: T, b: T) -> T {
  |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^

The "help" section there is right on the money - the core of the issue is that Rust won't let us add two things, unless it knows for sure that they can be added.

use std::ops::Add;

fn main() {
    println!("ten = {}", add(4, 6));
}

fn add<T>(a: T, b: T) -> T
where
    T: Add<Output = T>,
{
    a + b
}
Cool bear:

Whoa hey, that escalated quickly. What's all that syntax?

Don't worry about it for now.

Now that we've followed directions, it works, finally:

$ cargo run --quiet
ten = 10

So - you're smarter than Rust. Rust only knows exactly what you tell it. And you better be clear about what you mean, too!

But there's an upside: spending the time to carefully describe to Rust what it is you mean, prevents a lot of errors. It prevents entire classes of errors.

In that simple example, it's fairly obvious: since Rust doesn't have implicit coercion of all types to string by default, you'll never end up with an accidental [object Object][object Object].

And, most importantly, if you were to publish your add function as part of a crate (an npm package, a gem, an.. egg? or a wheel? y'all, is Python alright?), no one else could end up with an accidental [object Object][object Object], too.

Because the types are not just advisory - they're part of the interface of your library, even when it's used as part of another project.
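To make that concrete, here's a small made-up example of the kind of mistake a downstream user simply can't make (the commented-out line is the one the compiler would reject):

use std::ops::Add;

fn add<T>(a: T, b: T) -> T
where
    T: Add<Output = T>,
{
    a + b
}

fn main() {
    // This is fine: i32 implements Add.
    dbg!(add(4, 6));

    // This would not compile: `&str` does not implement `Add`,
    // so the mistake is caught before the program ever runs.
    // dbg!(add("[object ", "Object]"));
}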

What did we learn?

"add() only takes values that can be added together" is an invariant.

If you're used to more dynamic / weakly-typed languages, you've been "maintaining invariants" for a long time - possibly without ever having to use the word "invariant".

You can also call it an "assumption" - for the entire duration of a call to add, we assume that a and b can be added together. It's an "invariant", in the sense that it can never change. If at some point, either a or b become values that cannot be added together, then our code will be wrong.

There is a more technical term for "wrong", too - maintaining invariants is maintaining "soundness". Code that breaks invariants is called "unsound".

In Rust, instead of keeping invariants in mind, we keep them directly in the code. This allows the compiler to enforce them at compile time.

You can also think of an invariant as a "permanent assertion". C code, for example, tends to contain a lot of runtime assertions - if we reached this part of the code, then "ptr" must not be NULL.

The prize, though, is to try and prevent invalid programs from compiling in the first place - to catch the problem as early as possible. And to only resort to runtime errors for problems that are too hard to describe, or for situations involving uncontrolled user input.
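To illustrate the difference with a small made-up example (not from the original article): here's the same invariant - "there has to be a name here" - first as a runtime assertion, then encoded in the type itself:

// The runtime-assertion style: the invariant is checked every
// time the function runs, and breaking it means a panic.
fn greet_checked(name: Option<&str>) {
    let name = name.expect("name must be present");
    println!("hello, {}", name);
}

// The invariant moved into the type: a `&str` can't be "missing",
// so there is nothing left to assert - the compiler enforces it.
fn greet(name: &str) {
    println!("hello, {}", name);
}

fn main() {
    greet_checked(Some("reader"));
    greet("reader");
}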

Rust won't guess, but it will deduce

You may be taking issue with that previous example. You may find yourself wanting to argue that a function like this:

// (again, does not compile)

fn add<T>(a: T, b: T) -> T {
    a + b
}

...has all the information required for Rust to constrain the type T itself, so that add can only be called with values that can be added together.

After all, Rust is able to deduce some things by itself. If you do:

fn get_some_numbers() -> Vec<usize> {
    vec![1, 2, 3]
}

fn main() {
    let v = get_some_numbers();
}

...then Rust is able to tell that v is of type Vec<usize>.

You don't have to spell it out, like this:

fn get_some_numbers() -> Vec<usize> {
    vec![1, 2, 3]
}

fn main() {
    // this `let` binding now has an explicit type:
    let v: Vec<usize> = get_some_numbers();
}

Rust also knows about other things.

For example, this C program compiles fine:

#include <stdint.h>
#include <stdio.h>

char *humanize_number(size_t n) {
    switch (n) {
        case 0:
            return "zero";
        case 1:
            return "one";
        case 2:
            return "two";
    }
}

int main() {
    printf("0 = %s\n", humanize_number(0));
    printf("1 = %s\n", humanize_number(1));
    printf("2 = %s\n", humanize_number(2));
    printf("3 = %s\n", humanize_number(3));
    return 0;
}

And crashes at runtime:

$ gcc main.c -o main && ./main
0 = zero
1 = one
2 = two
[1]    148103 segmentation fault (core dumped)  ./main

The C compiler knows something is wrong with this code. If we ask its opinion with -Wall, it'll tell us:

$ gcc -Wall main.c -o main
main.c: In function ‘humanize_number’:
main.c:13:1: warning: control reaches end of non-void function [-Wreturn-type]
   13 | }
      | ^

A similar Rust program will simply not compile:

fn main() {
    println!("0 = {}", humanize_number(0));
    println!("1 = {}", humanize_number(1));
    println!("2 = {}", humanize_number(2));
    println!("3 = {}", humanize_number(3));
}

fn humanize_number(n: usize) -> &'static str {
    match n {
        0 => "zero",
        1 => "one",
        2 => "two",
    }
}
error[E0004]: non-exhaustive patterns: `_` not covered
 --> src/main.rs:9:11
  |
9 |     match n {
  |           ^ pattern `_` not covered
  |
  = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms
  = note: the matched value is of type `usize`

Even if no one ever called humanize_number with a value other than 0, 1, or 2, it wouldn't matter to Rust. It simply won't let you compile that code as-is.

Since values of type usize can range from 0 to 4 billion (on 32-bit), or 0 to 18 quintillion (that's 18 billion billion), it wants you to make sure every case is handled.

Either by stopping the program yourself:

fn humanize_number(n: usize) -> &'static str {
    match n {
        0 => "zero",
        1 => "one",
        2 => "two",
        _ => panic!("n is too large"),
    }
}

Or returning a fallback value:

fn humanize_number(n: usize) -> &'static str {
    match n {
        0 => "zero",
        1 => "one",
        2 => "two",
        _ => "a big number",
    }
}

Or by choosing a return type that lets us signal a failure condition:

struct NumberTooBig;

fn humanize_number(n: usize) -> Result<&'static str, NumberTooBig> {
    match n {
        0 => Ok("zero"),
        1 => Ok("one"),
        2 => Ok("two"),
        _ => Err(NumberTooBig),
    }
}

...which will force the caller to handle that case themselves:

fn main() {
    println!("0 = {}", humanize_number(0).unwrap_or("a big number"));
    println!("1 = {}", humanize_number(1).unwrap_or("a big number"));
    println!("2 = {}", humanize_number(2).unwrap_or("a big number"));
    println!("3 = {}", humanize_number(3).unwrap_or("a big number"));
}
$ cargo run --quiet
0 = zero
1 = one
2 = two
3 = a big number

Why doesn't Rust want to let us write code that works in some cases, but not others? Because an immediate segmentation fault is kind of the best we can hope for in that case.

The problem becomes much more serious if we actually store the result of humanize_number somewhere, and use it later. Or if we end up passing it to a function that expects a valid string. All sorts of invariants will be broken then, and who knows what could happen?

Cool bear:

Ohh, I know! I know what could happen.

You could accidentally give everyone super-user access.

Yeah. Or leak private customer data. Or have a surgery robot go haywire. Lots of bad things could happen.

But that doesn't answer our original question: why can Rust deduce the type of v here:

fn main() {
    // deduced to be of type `Vec<u8>`
    let v = vec![0u8, 3u8, 5u8];
}

...but it won't deduce the bounds on type T here:

fn add<T>(a: T, b: T) -> T {
    a + b
}

Well, for starters - specifying types and bounds on those types is not just useful for callers of a function.

i.e., it doesn't only prevent this:

use std::ops::Add;

fn main() {
    let a = vec![0, 1];
    let b = vec![2, 3];
    // !!! calling `add` on values that can't be added together
    let c = add(a, b);
}

fn add<T>(a: T, b: T) -> T
where
    T: Add<Output = T>,
{
    a + b
}
error[E0277]: cannot add `std::vec::Vec<{integer}>` to `std::vec::Vec<{integer}>`
  --> src/main.rs:6:13
   |
6  |     let c = add(a, b);
   |             ^^^ no implementation for `std::vec::Vec<{integer}> + std::vec::Vec<{integer}>`
...
9  | fn add<T>(a: T, b: T) -> T
   |    --- required by a bound in this
10 | where
11 |     T: Add<Output = T>,
   |        --------------- required by this bound in `add`
   |
   = help: the trait `std::ops::Add` is not implemented for `std::vec::Vec<{integer}>`

It's also useful within the callee - i.e., the function we're writing:

use std::ops::Add;

fn main() {
    let fourteen = add(7, 7);
    dbg!(fourteen);
}

fn add<T>(a: T, b: T) -> T
where
    T: Add<Output = T>,
{
    // subtracting b from a, but we only asked for types
    // that we can add!
    a - b
}
cargo check --quiet
error[E0369]: cannot subtract `T` from `T`
  --> src/main.rs:12:7
   |
12 |     a - b
   |     - ^ - T
   |     |
   |     T
   |
help: consider further restricting this bound
   |
10 |     T: Add<Output = T> + std::ops::Sub<Output = T>,
   |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^

But the real reason is to avoid constructs that are undecidable.

We're getting dangerously close to flirting with academic papers at this point, so let's go for an example immediately.

Rust has an Into trait, that describes the ability of a type to be converted to another type. It's distinct from casting (the as operator); you actually have to call the into() method:

fn main() {
    let a: u8 = 255;

    let b: u16 = a.into();
    let c: u32 = a.into();
    let d: u64 = a.into();

    dbg!(a, b, c, d);
}
$ cargo run --quiet
[src/main.rs:8] a = 255
[src/main.rs:8] b = 255
[src/main.rs:8] c = 255
[src/main.rs:8] d = 255

In this code sample, a is an unsigned 8-bit integer, and we convert it to an unsigned 16-bit integer, an unsigned 32-bit integer and an unsigned 64-bit integer, all using the same method: Into::into.

Which means that Into::into can return different types, not only depending on what type the receiver is (it's a u8 in all three calls), but also depending on what type is expected.

In other words, Into::into is "generic over its return type".

But now we have a conundrum.

Cool bear:

A what?

An, uh, "opportunity to get into trouble".

Consider the following code:

fn main() {
    let a: u8 = 255;

    let b = a.into();
    println!("b = {}", b);
}

What should the type of b be?

The Rust compiler is wondering, as well:

$ cargo run --quiet
error[E0282]: type annotations needed
 --> src/main.rs:4:9
  |
4 |     let b = a.into();
  |         ^ consider giving `b` a type

Clearly, we need a type, let's call it B, for which there exists an impl Into<B> for u8 (the type of a), and also an impl Display for B, since we use it in a println! call.

But there are multiple such types - u16, u32, u64, u128, i16, i32, i64 and i128 would all work great.

Cool bear's hot tip

Note that i8 would not work, as it cannot represent all possible u8 values. In that case, we'd have to use the TryInto trait, which represents the ability to "try to convert", a fallible operation.
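For the curious, here's roughly what that looks like (a quick sketch, not from the original article):

use std::convert::TryInto;

fn main() {
    let a: u8 = 255;
    // 255 doesn't fit in an `i8`, so `try_into` returns an `Err`:
    let b: Result<i8, _> = a.try_into();
    assert!(b.is_err());

    // 127 does fit, so this conversion succeeds:
    let c: u8 = 127;
    let d: i8 = c.try_into().unwrap();
    dbg!(d);
}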

So which one should be used? Rust refuses to guess.

Since we're on the topic of integer types, there is one notable exception to that rule. In this code:

fn main() {
    let v = vec![1, 2, 3];
}

We get a Vec<i32>. Integer literals don't start out with a specific type - the compiler tracks them as {integer}. If a specific type is expected, they can become u64, i8, or whatever else - but if not, they default to i32. Floating-point literals (like 0.0) default to f64.
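Here's that defaulting in action (a tiny sketch):

fn main() {
    // no other constraints: this literal defaults to i32
    let a = 1;
    // the expected type is known, so this literal becomes a u8
    let b: u8 = 1;
    // floating-point literals default to f64
    let c = 1.0;
    dbg!(a, b, c);
}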

For everything else, we need to spell things out.

What did we learn?

The Rust compiler has a lot of knowledge about types, their possible values, and the things they're capable of (to a large extent: the traits they implement).

It uses that knowledge all the time, to deduce the type of variable bindings, of literals, and type arguments (the T in fn add<T>).

However, there is a limit to the amount of deducing the Rust compiler will do. When it's starting to look too much like guessing, it will ask for more explicit instructions - type annotations.

Beyond integer types

Consider the following example program:

// (doesn't compile)

struct Wolf {}

impl Wolf {
    fn greet(&self) {
        println!("awoooo");
    }
}

struct Lizard {}

impl Lizard {
    fn greet(&self) {
        println!("*chirp chirp*");
    }
}

fn acquire_pet<T>(comfy: bool) -> T {
    if comfy {
        Wolf {}
    } else {
        Lizard {}
    }
}

fn main() {
    let pet = acquire_pet(true);
    pet.greet();
}

(Yeah, lizards make noise).

This doesn't compile. One of the errors is as follows:

error[E0282]: type annotations needed
  --> src/main.rs:27:5
   |
26 |     let pet = acquire_pet(true);
   |         --- consider giving `pet` a type
27 |     pet.greet();
   |     ^^^ cannot infer type
   |
   = note: type must be known at this point

...but even if we do give pet a type:

fn main() {
    let pet: Wolf = acquire_pet(true);
    pet.greet();
}

...we're still left with those errors:

error[E0308]: mismatched types
  --> src/main.rs:19:9
   |
17 | fn acquire_pet<T>(comfy: bool) -> T {
   |                -                  - expected `T` because of return type
   |                |
   |                this type parameter
18 |     if comfy {
19 |         Wolf {}
   |         ^^^^^^^ expected type parameter `T`, found struct `Wolf`
   |
   = note: expected type parameter `T`
                      found struct `Wolf`

error[E0308]: mismatched types
  --> src/main.rs:21:9
   |
17 | fn acquire_pet<T>(comfy: bool) -> T {
   |                -                  - expected `T` because of return type
   |                |
   |                this type parameter
...
21 |         Lizard {}
   |         ^^^^^^^^^ expected type parameter `T`, found struct `Lizard`
   |
   = note: expected type parameter `T`
                      found struct `Lizard`

What's the problem now? acquire_pet is generic - clearly it can return different types. We call it with true, so clearly, it should return a Wolf, and we also expect a Wolf (that's the type we gave our pet binding in the main function).

What gives?

Well, this particular case is decidable, but what happens if we do this?

fn ask_comfy_preference() -> bool {
    println!("Do you like comfy pets? (yes or no)");
    let mut answer = String::new();
    std::io::stdin().read_line(&mut answer).unwrap();

    match answer.trim() {
        "yes" => true,
        "no" => false,
        _ => {
            panic!("Sorry, I did not understand your answer: {:?}", answer);
        }
    }
}

fn main() {
    let comfy = ask_comfy_preference();
    let pet = acquire_pet(comfy);
    pet.greet();
}

Now the type of pet depends on user input. This would be no problem at all in a language with dynamic typing. But here, there's no duck to quack or walk like.

It doesn't matter that Wolf and Lizard both have a greet method. Their structural similarity is not at all relevant.

The only thing that matters is the contracts various parts of the code have agreed to uphold.

There is a type in the Rust standard library that lets us return "anything". Well, it's a trait: Any.

// bad code ahoy

fn acquire_pet(comfy: bool) -> dyn std::any::Any {
    if comfy {
        Wolf {}
    } else {
        Lizard {}
    }
}
Cool bear's hot tip

The dyn keyword is needed here since the 2018 Rust Edition.

In dyn T, T is the trait (just a contract - a list of methods, some characteristics etc.) and dyn T is a "trait object", which contains both:

  • An object for which the trait T is implemented
  • A vtable containing the address of each method required by T, implemented for that object's type.

This doesn't work - we can't just use a trait as a return type like that.

Trying to compile that code gives you a lot of advice.

Among those, it says: "if all the returned values were of the same type you could use impl std::any::Any as the return type".

If? Aren't they? Let's try it:

fn acquire_pet(comfy: bool) -> impl std::any::Any {
    if comfy {
        Wolf {}
    } else {
        Lizard {}
    }
}

In this version, we promise to return a concrete type that implements Any. We just don't want to name it. This is handy in a lot of cases.
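As an aside, here's a hypothetical example (not from the article) of where that comes in handy: returning an iterator without having to spell out its gnarly concrete type.

fn evens_up_to(n: u32) -> impl Iterator<Item = u32> {
    // the concrete type here is something like
    // `Filter<Range<u32>, {closure}>` - nobody wants to write that out
    (0..n).filter(|x| x % 2 == 0)
}

fn main() {
    for x in evens_up_to(10) {
        println!("{}", x);
    }
}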

But it doesn't solve our problem:

cargo check --quiet
error[E0308]: `if` and `else` have incompatible types
  --> src/main.rs:21:9
   |
18 | /     if comfy {
19 | |         Wolf {}
   | |         ------- expected because of this
20 | |     } else {
21 | |         Lizard {}
   | |         ^^^^^^^^^ expected struct `Wolf`, found struct `Lizard`
22 | |     }
   | |_____- `if` and `else` have incompatible types

Because even though we're not spelling out the concrete return type (just that it implements the Any trait), the compiler still has to figure out a single concrete type, from the function's signature (its argument types) and the code that's inside it.

And right now, it can't figure out if the concrete type should be struct Wolf, or struct Lizard.

The compiler did suggest two actual solutions, though: to either return a boxed trait object instead, or to make an enum with a variant for each returned type.

We'll go with the first one:

fn acquire_pet(comfy: bool) -> Box<dyn std::any::Any> {
    if comfy {
        Box::new(Wolf {})
    } else {
        Box::new(Lizard {})
    }
}
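For completeness, here's roughly what the second suggestion - the enum - would look like (a sketch; the rest of the article sticks with the boxed trait object):

struct Wolf {}

impl Wolf {
    fn greet(&self) {
        println!("awoooo");
    }
}

struct Lizard {}

impl Lizard {
    fn greet(&self) {
        println!("*chirp chirp*");
    }
}

// One variant per concrete type we might return:
enum Pet {
    Wolf(Wolf),
    Lizard(Lizard),
}

fn acquire_pet(comfy: bool) -> Pet {
    if comfy {
        Pet::Wolf(Wolf {})
    } else {
        Pet::Lizard(Lizard {})
    }
}

fn main() {
    match acquire_pet(true) {
        Pet::Wolf(w) => w.greet(),
        Pet::Lizard(l) => l.greet(),
    }
}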

See, the problem with returning "a Wolf" or "a Lizard" is that those types may have completely different sizes.

So when program execution reaches this point:

    let pet = acquire_pet(comfy);

...we should reserve enough memory to store the pet, on the stack.

Cool bear:

Or, with suitable optimizations, part of it can even be stored in registers.

Right. Point is, we need to know what the actual type is - how big it is, what kind of fields it has, etc.

But if we return a Box<dyn Any>, we're simply returning the address of a value whose type implements Any. A Box<dyn Any> is just a pointer (a "fat" pointer, really: the address of the value, plus the address of a vtable), and its size is known at compile time.
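You can see both halves of that argument with std::mem::size_of (a small sketch; Lizard is given a made-up field here just so the sizes differ):

use std::any::Any;
use std::mem::size_of;

struct Wolf {}

struct Lizard {
    scales: [u8; 64],
}

fn main() {
    // Concrete types can have completely different sizes...
    dbg!(size_of::<Wolf>()); // 0 bytes
    dbg!(size_of::<Lizard>()); // 64 bytes
    // ...but a boxed trait object always has the same, statically
    // known size: a data pointer plus a vtable pointer.
    dbg!(size_of::<Box<dyn Any>>());
}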

But our program still doesn't compile (a recurring theme...):

error[E0599]: no method named `greet` found for struct `std::boxed::Box<dyn std::any::Any>` in the current scope
  --> src/main.rs:40:9
   |
40 |     pet.greet();
   |         ^^^^^ method not found in `std::boxed::Box<dyn std::any::Any>`

This time though, the answer is clear - we're returning the address of something that implements Any.

But Any doesn't promise anything!

Its only required method is type_id, so we can do that:

fn main() {
    let comfy = ask_comfy_preference();
    let pet = acquire_pet(comfy);
    println!("We got a {:?}", pet.type_id());
}
$ cargo run --quiet
Do you like comfy pets? (yes or no)
yes
We got a TypeId { t: 13993700938491603631 }
$ cargo run --quiet
Do you like comfy pets? (yes or no)
no
We got a TypeId { t: 8639049246320250335 }

Another thing we can do is try to downcast the resulting value into a specific concrete type, like Wolf or Lizard:

fn main() {
    let comfy = ask_comfy_preference();
    let pet = acquire_pet(comfy);

    if let Some(wolf) = pet.downcast_ref::<Wolf>() {
        wolf.greet();
    } else if let Some(lizard) = pet.downcast_ref::<Lizard>() {
        lizard.greet();
    } else {
        println!("we don't know about this friend yet");
    }
}
$ cargo run --quiet
Do you like comfy pets? (yes or no)
yes
awoooo

As things stand, we're asking for less than we need.

What we need is for acquire_pet to promise it'll return something with a greet method. And we can express that by making a trait:

trait Greet {
    fn greet(&self);
}

And implementing it for Wolf and Lizard:

impl Greet for Wolf {
    fn greet(&self) {
        println!("awoooo");
    }
}

impl Greet for Lizard {
    fn greet(&self) {
        println!("*chirp chirp*");
    }
}

And then changing the signature of acquire_pet to promise we'll return something that implements Greet:

fn acquire_pet(comfy: bool) -> Box<dyn Greet> {
    if comfy {
        Box::new(Wolf {})
    } else {
        Box::new(Lizard {})
    }
}

And finally, this version of main works:

fn main() {
    let comfy = ask_comfy_preference();
    let pet = acquire_pet(comfy);
    pet.greet();
}

We can even get fancy with bounds: we can ask for values that can be greeted and also cloned.

fn greet_clones<P>(pet: &P)
where
    P: Clone + Greet,
{
    for _ in 0..3 {
        let clone = pet.clone();
        clone.greet();
    }
}

To get this to work, we'll have to implement Clone on our Wolf and Lizard types. This can be done easily with the derive attribute, which generates the impl Clone for T block for us, as long as all our fields are also Clone:

#[derive(Clone)]
struct Wolf {}

#[derive(Clone)]
struct Lizard {}

And now, we can do this:

fn main() {
    let wolf = Wolf {};
    greet_clones(&wolf);
}
$ cargo run --quiet
awoooo
awoooo
awoooo

But, and this is what I'm getting to, we can't do this:

fn main() {
    let pet = acquire_pet(ask_comfy_preference());
    greet_clones(pet.as_ref());
}
$ cargo run --quiet
error[E0277]: the trait bound `dyn Greet: std::clone::Clone` is not satisfied
  --> src/main.rs:59:18
   |
33 | fn greet_clones<P>(pet: &P)
   |    ------------ required by a bound in this
34 | where
35 |     P: Clone + Greet,
   |        ----- required by this bound in `greet_clones`
...
59 |     greet_clones(pet.as_ref());
   |                  ^^^^^^^^^^^^ the trait `std::clone::Clone` is not implemented for `dyn Greet`

...because acquire_pet only promises to return something that implements Greet, not Clone! Even though it (currently) only ever returns values of types that implement both.

So, we can constrain our acquire_pet method further - we can tell the Rust compiler more about our intentions:

// (doesn't actually work)

fn acquire_pet(comfy: bool) -> Box<dyn Greet + Clone> {
    if comfy {
        Box::new(Wolf {})
    } else {
        Box::new(Lizard {})
    }
}

Well, that particular way doesn't work:

error[E0225]: only auto traits can be used as additional traits in a trait object
  --> src/main.rs:25:48
   |
25 | fn acquire_pet(comfy: bool) -> Box<dyn Greet + Clone> {
   |                                        -----   ^^^^^
   |                                        |       |
   |                                        |       additional non-auto trait
   |                                        |       trait alias used in trait object type (additional use)
   |                                        first non-auto trait
   |                                        trait alias used in trait object type (first use)

But we can find a way:

// (still doesn't work)

trait GreetClone: Greet + Clone {}

fn acquire_pet(comfy: bool) -> Box<dyn GreetClone> {
    if comfy {
        Box::new(Wolf {})
    } else {
        Box::new(Lizard {})
    }
}

Except that way doesn't work either:

cargo run --quiet
error[E0038]: the trait `GreetClone` cannot be made into an object
  --> src/main.rs:27:32
   |
25 | trait GreetClone: Greet + Clone {}
   |       ----------          ----- ...because it requires `Self: Sized`
   |       |
   |       this trait cannot be made into an object...
26 |
27 | fn acquire_pet(comfy: bool) -> Box<dyn GreetClone> {
   |                                ^^^^^^^^^^^^^^^^^^^ the trait `GreetClone` cannot be made into an object

And to understand why, we have to do some more thinking.

Unsized types and trait objects

We've seen before that this doesn't work:

fn acquire_pet(comfy: bool) -> dyn std::any::Any {
    if comfy {
        Wolf {}
    } else {
        Lizard {}
    }
}

Because in that code, we have to know how much memory to reserve for pet:

fn main() {
    let pet = acquire_pet(true);
}

And now that our intuition about this exists, we can learn about the vocabulary we need to express that constraint: "locals" (such as pet) cannot be "unsized". And "trait objects" (dyn T) are "unsized".
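In code, that vocabulary looks something like this (a quick sketch using the Display trait, purely as an example):

use std::fmt::Display;

fn main() {
    // This would not compile: `dyn Display` is unsized, and locals
    // can't be unsized - the compiler wouldn't know how much stack
    // space to reserve for `x`.
    // let x: dyn Display = 42;

    // Behind a pointer it's fine: the Box itself has a known size.
    let x: Box<dyn Display> = Box::new(42);
    println!("{}", x);
}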

How is this relevant to Clone?

Well, let's try to make our own MyClone trait:

trait MyClone {
    fn my_clone(&self) -> Self;
}

my_clone takes a reference to a value. References are actually pointers, and we've seen that pointers are sized. If it took self by value, it wouldn't really be cloning: it would consume the value we're trying to clone in the first place.

But it returns Self. If Wolf implements MyClone, then wolf.my_clone() returns a Wolf.

So, Wolf has a certain size, we can build a value of type Wolf on the stack:

let wolf = Wolf {};

We can also "box it", ie. store it on the heap, and just hold a pointer to it:

let wolf = Box::new(Wolf {});

That's all fine. We can even call .my_clone() on it:

let wolf = Box::new(Wolf {});
let wolf2 = wolf.as_ref().my_clone();

What's not fine is if we hide the concrete type. If all we know about what's inside the Box is that it implements MyClone.

impl MyClone for Wolf {
    fn my_clone(&self) -> Self {
        // luckily `Wolf` has no fields right now,
        // so our implementation is trivial - just
        // construct another wolf.
        Self {}
    }
}

fn main() {
    let pet = Box::new(Wolf {}) as Box<dyn MyClone>;
    let pet2 = pet.my_clone();
}

This makes the Rust compiler very flustered.

It's trying to tell us a lot of things at the same time:

error[E0038]: the trait `MyClone` cannot be made into an object
  --> src/main.rs:70:36
   |
31 | trait MyClone {
   |       ------- this trait cannot be made into an object...
32 |     fn my_clone(&self) -> Self;
   |                           ---- ...because method `my_clone` references the `Self` type in its return type
...
70 |     let pet = Box::new(Wolf {}) as Box<dyn MyClone>;
   |                                    ^^^^^^^^^^^^^^^^ the trait `MyClone` cannot be made into an object
   |
   = help: consider moving `my_clone` to another trait

This says: due to the way the MyClone trait is defined, we can never hold on to values of type dyn MyClone. Even through a Box.

We then have two other instances of that particular error (highlighting different parts of the code), and then this:

error[E0277]: the size for values of type `dyn MyClone` cannot be known at compilation time
  --> src/main.rs:71:9
   |
71 |     let pet2 = pet.my_clone();
   |         ^^^^ doesn't have a size known at compile-time
   |
   = help: the trait `std::marker::Sized` is not implemented for `dyn MyClone`
   = note: to learn more, visit <https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
   = note: all local variables must have a statically known size
   = help: unsized locals are gated as an unstable feature

We've just now learned what this means. We cannot have "unsized locals". We must know how much memory to reserve for a local.

It also mentions the Sized marker trait. We can't implement Sized ourselves (the compiler decides which types are Sized), but we can require it, by making it a supertrait of MyClone:

trait MyClone: Sized {
    fn my_clone(&self) -> Self;
}

And then we get slightly different variants of the same error, like:

error[E0038]: the trait `MyClone` cannot be made into an object
  --> src/main.rs:70:36
   |
31 | trait MyClone: Sized {
   |       -------  ----- ...because it requires `Self: Sized`
   |       |
   |       this trait cannot be made into an object...
...
70 |     let pet = Box::new(Wolf {}) as Box<dyn MyClone>;
   |                                    ^^^^^^^^^^^^^^^^ the trait `MyClone` cannot be made into an object

error: the `my_clone` method cannot be invoked on a trait object
  --> src/main.rs:71:20
   |
31 | trait MyClone: Sized {
   |                ----- this has a `Sized` requirement
...
71 |     let pet2 = pet.my_clone();
   |                    ^^^^^^^^

Does this mean we can never invoke my_clone? No, we still can!

This is perfectly fine:

fn main() {
    let pet = Wolf {};
    let pet2 = pet.my_clone();
}

And so is this:

fn main() {
    let pet = Box::new(Wolf {});
    let pet2 = pet.my_clone();
}

We just cannot invoke it on a "trait object", a value of type dyn MyClone.

And this is the exact error we had with our GreetClone trait:

trait GreetClone: Greet + MyClone {}

fn acquire_pet(comfy: bool) -> Box<dyn GreetClone> {
    if comfy {
        Box::new(Wolf {})
    } else {
        Box::new(Lizard {})
    }
}
cargo run --quiet
error[E0038]: the trait `GreetClone` cannot be made into an object
  --> src/main.rs:27:32
   |
25 | trait GreetClone: Greet + Clone {}
   |       ----------          ----- ...because it requires `Self: Sized`
   |       |
   |       this trait cannot be made into an object...
26 |
27 | fn acquire_pet(comfy: bool) -> Box<dyn GreetClone> {
   |                                ^^^^^^^^^^^^^^^^^^^ the trait `GreetClone` cannot be made into an object

The whole problem is that we're trying to return Self, which might have any size, depending on which concrete type is implementing MyClone.

If we're willing to get our hands dirty... there's a way around it. We can definitely get around Rust's limitations and return a pointer to some value on the heap.

trait MyClone {
    unsafe fn clone_ptr(&self) -> *mut ();
}

And of course, we have to implement it for both Wolf and Lizard:

impl MyClone for Wolf {
    unsafe fn clone_ptr(&self) -> *mut () {
        Box::into_raw(Box::new(self.clone())) as _
    }
}

impl MyClone for Lizard {
    unsafe fn clone_ptr(&self) -> *mut () {
        Box::into_raw(Box::new(self.clone())) as _
    }
}

And since we don't actually want to be dealing with raw pointers, we can make a helper method, in another trait, which we'll implement automatically for all types that also implement MyClone - even unsized types.

trait MyCloneExt {
    fn clone_box(&self) -> Box<Self>;
}

impl<T> MyCloneExt for T
where
    T: MyClone + ?Sized,
{
    fn clone_box(&self) -> Box<Self> {
        // avert your gaze for a few lines...
        let mut fat_ptr = self as *const Self;
        unsafe {
            let data_ptr = &mut fat_ptr as *mut *const T as *mut *mut ();
            assert_eq!(*data_ptr as *const (), self as *const T as *const ());
            *data_ptr = <T as MyClone>::clone_ptr(self);
        }
        // ...there we go
        unsafe { Box::from_raw(fat_ptr as *mut Self) }
    }
}

This is fairly advanced trickery, so, don't worry about it too much.

Point is - now, MyClone is "trait object safe" (it doesn't require Sized, it doesn't refer to Self), so we can use it as a super trait of GreetClone:

trait GreetClone: Greet + MyClone {}

And then use the trait object type dyn GreetClone as a return type in acquire_pet:

fn acquire_pet(comfy: bool) -> Box<dyn GreetClone> {
    if comfy {
        Box::new(Wolf {})
    } else {
        Box::new(Lizard {})
    }
}

Then change greet_clones to take MyClone + Greet rather than Clone + Greet:

fn greet_clones<P>(pet: &P)
where
    P: MyClone + Greet + ?Sized,
{
    for _ in 0..3 {
        let clone = pet.clone_box();
        clone.greet();
    }
}

And finally, at long last, use it from main:

fn main() {
    let pet = acquire_pet(ask_comfy_preference());
    greet_clones(pet.as_ref());
}
$ cargo run --quiet
Do you like comfy pets? (yes or no)
yes
awoooo
awoooo
awoooo

I didn't come up with all this trickery by myself: it's straight from the dyn-clone crate, which you should just use if you ever need to do this particular thing.
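For reference, here's roughly what using dyn-clone looks like - a sketch based on the crate's documented pattern; double-check the crate's own docs for the current API:

// Cargo.toml: dyn-clone = "1"
use dyn_clone::DynClone;

trait Greet: DynClone {
    fn greet(&self);
}

// Generates the `impl Clone for Box<dyn Greet>` for us:
dyn_clone::clone_trait_object!(Greet);

#[derive(Clone)]
struct Wolf {}

impl Greet for Wolf {
    fn greet(&self) {
        println!("awoooo");
    }
}

fn main() {
    let pet: Box<dyn Greet> = Box::new(Wolf {});
    let pet2 = pet.clone();
    pet2.greet();
}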

Let me repeat this: you don't have to come up with trickery like that by yourself. I just chose this particularly gnarly example to demonstrate that, really, I mean it: it's not you, it's Rust.

What did we learn?

Rust needs to be confident that invariants will not be violated. It needs to be convinced that your code is sound.

In some cases it gets tricky. Tricky enough to warrant using an additional crate just so you don't have to deal with the dirty details yourself.

In particular, data structures are especially difficult to implement in Rust, and that's one of the things experienced developers (who are used to just rolling their own in other languages) end up finding out sooner rather than later.

Lifetimes

I know, I know, I just wrote about lifetimes, and again before that and again before that. But lifetimes are one of the concepts of Rust that underpin the entire language.

And everyone understands lifetimes in their own time - there's not one single explanation that'll work for everyone. I could keep making up explanations my whole life, and there'd still be folks who don't quite see it yet.

This time, I'm trying it from the angle: "you're smarter than Rust".

Take this C99 program:

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

struct State {
    int a;
    int b;
};

void *t1_work(void *arg) {
    struct State *state = (struct State*) arg;

    while (state->a != 0) {
        printf("a = %d\n", state->a);
        state->a--;
        sleep(1);
    }
    return NULL;
}

void *t2_work(void *arg) {
    struct State *state = (struct State*) arg;

    while (state->b != 0) {
        printf("b = %d\n", state->b);
        state->b--;
        sleep(1);
    }
    return NULL;
}

int main() {
    struct State state = { .a = 3, .b = 3 };

    pthread_t t1, t2;
    pthread_create(&t1, NULL, t1_work, &state);
    pthread_create(&t2, NULL, t2_work, &state);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    return 0;
}

The code is a tad verbose, but the idea is: we have a single, shared struct State instance, and we let two threads operate on it.

The program compiles and runs fine.

$ gcc -Wall main.c -o main -lpthread && ./main
a = 3
b = 3
b = 2
a = 2
b = 1
a = 1

We can do a literal translation of that program to Rust, using raw pointers:

// oh god, please don't copy paste this

use std::{thread, time::Duration};

struct State {
    a: i32,
    b: i32,
}

fn t1_work(arg: usize) {
    let state = arg as *mut State;

    while unsafe { (*state).a } != 0 {
        println!("a = {}", unsafe { (*state).a });
        unsafe { (*state).a -= 1 };
        thread::sleep(Duration::from_secs(1));
    }
}

fn t2_work(arg: usize) {
    let state = arg as *mut State;

    while unsafe { (*state).b } != 0 {
        println!("b = {}", unsafe { (*state).b });
        unsafe { (*state).b -= 1 };
        thread::sleep(Duration::from_secs(1));
    }
}

fn main() {
    let mut state = State { a: 3, b: 3 };
    let state_ptr = &mut state as *mut _ as usize;

    let t1 = thread::spawn(move || t1_work(state_ptr));
    let t2 = thread::spawn(move || t2_work(state_ptr));

    t1.join().unwrap();
    t2.join().unwrap();
}

And it'll work exactly the same way:

$ cargo run --quiet
a = 3
b = 3
a = 2
b = 2
a = 1
b = 1

But that is unsafe Rust. It's not what you would want to write if you wanted to benefit from the memory safety guarantees that Rust offers.

Because, for example, things like that might happen:

fn main() {
    let state_ptr = {
        let mut state = State { a: 3, b: 3 };
        &mut state as *mut _ as usize
    };

    // `state_ptr` is now dangling
    let t1 = thread::spawn(move || t1_work(state_ptr));
    let t2 = thread::spawn(move || t2_work(state_ptr));

    t1.join().unwrap();
    t2.join().unwrap();
}

Want to see what a release build of that version shows?

$ cargo run --quiet --release

Nothing.

If I run it in GDB, it shows me nonsense, like:

[New Thread 0x7ffff7d7f700 (LWP 178699)]
a = -9808
[New Thread 0x7ffff7b7e700 (LWP 178700)]
b = 32767
[Thread 0x7ffff7b7e700 (LWP 178700) exited]
[Thread 0x7ffff7d7f700 (LWP 178699) exited]

Or, instead of silent data corruption, you could have a data race:

fn main() {
    let mut state = State { a: 3, b: 3 };
    let state_ptr = &mut state as *mut _ as usize;

    let t1 = thread::spawn(move || t1_work(state_ptr));
    // look closely...
    let t2 = thread::spawn(move || t1_work(state_ptr));

    t1.join().unwrap();
    t2.join().unwrap();
}
$ cargo run --quiet
a = 3
a = 2
a = 1
a = 1
a = -1
a = -2
a = -3
a = -3
a = -5
a = -5
a = -7
a = -7
^C

I was lucky to see the elusive data race the first time I ran it. But the second time...

$ cargo run --quiet
a = 3
a = 2
a = 1

Everything appeared to work fine.

So - that's not safe Rust. Safe Rust ensures memory safety and prevents data races, which we've just seen examples of.

So for example, if we remove some of the naughty from the code:

fn t1_work(state: *mut State) {
    // omitted
}

fn t2_work(state: *mut State) {
    // omitted
}

fn main() {
    let mut state = State { a: 3, b: 3 };
    // was `usize`, now a `*mut State`
    let state_ptr = &mut state as *mut _;

    let t1 = thread::spawn(move || t1_work(state_ptr));
    let t2 = thread::spawn(move || t1_work(state_ptr));

    t1.join().unwrap();
    t2.join().unwrap();
}

...then the compiler has enough information to avert the disaster. It's not a number (usize) we're passing... it's a pointer!

And you can't just send pointers to other threads willy-nilly.

error[E0277]: `*mut State` cannot be sent between threads safely
   --> src/main.rs:29:14
    |
29  |     let t1 = thread::spawn(move || t1_work(state_ptr));
    |              ^^^^^^^^^^^^^ -------------------------- within this `[closure@src/main.rs:29:28: 29:54 state_ptr:*mut State]`
    |              |
    |              `*mut State` cannot be sent between threads safely
    |
   ::: /home/amos/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/thread/mod.rs:616:8
    |
616 |     F: Send + 'static,
    |        ---- required by this bound in `std::thread::spawn`
    |
    = help: within `[closure@src/main.rs:29:28: 29:54 state_ptr:*mut State]`, the trait `std::marker::Send` is not implemented for `*mut State`
    = note: required because it appears within the type `[closure@src/main.rs:29:28: 29:54 state_ptr:*mut State]`

What you can send is a Mutex.

// (this doesn't work)

use std::{sync::Mutex, thread, time::Duration};

struct State {
    a: i32,
    b: i32,
}

fn t1_work(state: &Mutex<&mut State>) {
    let mut state = state.lock().unwrap();

    while state.a != 0 {
        println!("a = {}", state.a);
        state.a -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

fn t2_work(state: &Mutex<&mut State>) {
    let mut state = state.lock().unwrap();

    while state.b != 0 {
        println!("b = {}", state.b);
        state.b -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

fn main() {
    let mut state = State { a: 3, b: 3 };
    let m = Mutex::new(&mut state);

    let t1 = thread::spawn(|| t1_work(&m));
    let t2 = thread::spawn(|| t2_work(&m));

    t1.join().unwrap();
    t2.join().unwrap();
}

But then we have other issues:

error[E0597]: `state` does not live long enough
  --> src/main.rs:30:24
   |
30 |     let m = Mutex::new(&mut state);
   |             -----------^^^^^^^^^^-
   |             |          |
   |             |          borrowed value does not live long enough
   |             argument requires that `state` is borrowed for `'static`
...
37 | }
   | - `state` dropped here while still borrowed

error[E0373]: closure may outlive the current function, but it borrows `m`, which is owned by the current function
  --> src/main.rs:32:28
   |
32 |     let t1 = thread::spawn(|| t1_work(&m));
   |                            ^^          - `m` is borrowed here
   |                            |
   |                            may outlive borrowed value `m`
   |
note: function requires argument type to outlive `'static`
  --> src/main.rs:32:14
   |
32 |     let t1 = thread::spawn(|| t1_work(&m));
   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
help: to force the closure to take ownership of `m` (and any other referenced variables), use the `move` keyword
   |
32 |     let t1 = thread::spawn(move || t1_work(&m));
   |                            ^^^^^^^

Why does this happen? Because we don't respect the contract that thread::spawn asks for.

thread::spawn takes a function, and intends to run it till death do us part. So, it wants that function (and everything it captures) to have the lifetime 'static - in other words, it wants it to "never die".

But the function we're passing to thread::spawn is a closure, and it borrows (captures a reference to) some locals from main.

Again - we're smarter than Rust here. Or, more accurately: we have more knowledge. We know that we wait for both threads to finish before returning from main.

Cool bear:

Mhh...

What's that?

Cool bear:

Nothing, nothing, keep going.

So, since we know that, we can kick some doors down and have it our way:

fn t1_work(state: TrustMeItsFine) {
    // omitted
}

fn t2_work(state: TrustMeItsFine) {
    // omitted
}

struct TrustMeItsFine(*const Mutex<&'static mut State>);

impl std::ops::Deref for TrustMeItsFine {
    type Target = Mutex<&'static mut State>;

    fn deref(&self) -> &Self::Target {
        unsafe { self.0.as_ref().unwrap() }
    }
}

unsafe impl Send for TrustMeItsFine {}

fn main() {
    println!("Doing the work...");
    work();

    println!("Waiting a bit...");
    thread::sleep(Duration::from_secs(2));
    println!("Okay, bye now!");
}

fn work() {
    let state = Box::leak(Box::new(State { a: 3, b: 3 }));
    let m = Mutex::new(state);

    let t1_arg = TrustMeItsFine(&m as *const _);
    let t1 = thread::spawn(move || t1_work(t1_arg));
    let t2_arg = TrustMeItsFine(&m as *const _);
    let t2 = thread::spawn(move || t2_work(t2_arg));

    t1.join().unwrap();
    t2.join().unwrap();
}

This works perfectly fine!

$ cargo run --quiet
Doing the work...
a = 3
a = 2
a = 1
b = 3
b = 2
b = 1
Waiting a bit...
Okay, bye now!

Because we know exactly what our code does, we don't have to follow Rust's rigid rules, and we can just..

Cool bear:

Hoooooold on.

Hold on a minute.

What?

Cool bear:

You said "since we wait for both threads to end" (via join), then we're fine.

Are you sure about that?

Well yeah, look:

fn work() {
    let state = Box::leak(Box::new(State { a: 3, b: 3 }));
    // the Mutex is constructed here:
    let m = Mutex::new(state);

    let t1_arg = TrustMeItsFine(&m as *const _);
    let t1 = thread::spawn(move || t1_work(t1_arg));
    let t2_arg = TrustMeItsFine(&m as *const _);
    let t2 = thread::spawn(move || t2_work(t2_arg));

    // we're waiting for both threads here:
    t1.join().unwrap();
    t2.join().unwrap();

    // and here, the Mutex is freed
}

Looks okay to me?

Cool bear:

Yeah, but consider this:

fn t1_work(state: TrustMeItsFine) {
    panic!("uh oh");

    let mut state = state.lock().unwrap();

    while state.a != 0 {
        println!("a = {}", state.a);
        state.a -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

So what? The entire app will just crash and burn as soon as we panic.

Cool bear:

And now this:

fn main() {
    println!("Doing the work...");
    std::panic::catch_unwind(|| {
        work();
    }).ok();

    println!("Waiting a bit...");
    thread::sleep(Duration::from_secs(2));
    println!("Okay, bye now!");
}

Oh.

$ cargo run --quiet
Doing the work...
thread '<unnamed>' panicked at 'uh oh', src/main.rs:11:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
b = 3
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/main.rs:64:5
Waiting a bit...
b = 21849
[1]    185366 segmentation fault (core dumped)  cargo run --quiet

Uh oh.

Yeah okay maybe Rust has a point.

Maybe we shouldn't be trusted, and maybe it's not fine. And that's a recurring theme when writing unsafe code.

So let's just not write unsafe code.

We have a couple of options here. We could make our state reference-counted, so that as long as either thread is alive, the state stays alive.

use std::{
    sync::{Arc, Mutex},
    thread,
    time::Duration,
};

struct State {
    a: i32,
    b: i32,
}

fn t1_work(state: Arc<Mutex<State>>) {
    let mut state = state.lock().unwrap();

    while state.a != 0 {
        println!("a = {}", state.a);
        state.a -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

fn t2_work(state: Arc<Mutex<State>>) {
    let mut state = state.lock().unwrap();

    while state.b != 0 {
        println!("b = {}", state.b);
        state.b -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

fn work() {
    let state1 = Arc::new(Mutex::new(State { a: 3, b: 3 }));
    let state2 = state1.clone();

    let t1 = thread::spawn(move || t1_work(state1));
    let t2 = thread::spawn(move || t2_work(state2));

    t1.join().unwrap();
    t2.join().unwrap();
}

// omitted: main

And now, there's no compiler errors left (and no unsafe code, either!)

$ cargo run --quiet
Doing the work...
a = 3
a = 2
a = 1
b = 3
b = 2
b = 1
Waiting a bit...
Okay, bye now!

Of course, uh, this is not really what we were going for. We have a single lock for the entire State, and we acquire it for the entire duration of either thread's life.

We could fix it like that:

fn t1_work(state: Arc<Mutex<State>>) {
    while state.lock().unwrap().a != 0 {
        println!("a = {}", state.lock().unwrap().a);
        state.lock().unwrap().a -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

fn t2_work(state: Arc<Mutex<State>>) {
    while state.lock().unwrap().b != 0 {
        println!("b = {}", state.lock().unwrap().b);
        state.lock().unwrap().b -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

And, although a bit verbose, it works:

$ cargo run --quiet
Doing the work...
b = 3
a = 3
b = 2
a = 2
b = 1
a = 1
Waiting a bit...
Okay, bye now!

But now - well, not now, but later, when we increase the number of threads - we're going to have a "lock contention" problem.

Even though we're only locking the Mutex whenever we actually need to read from, or write to it, that's still a lot of locking for a single lock. If we have a couple hundred threads, and if we remove the sleep, we're definitely going to start feeling it.

So, what can we do? Use AtomicI32? Seems a bit silly there, doesn't it?
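(If you're curious, here's roughly what that would look like - a sketch; it only works because our state happens to be two plain integers, which is exactly why it's not the general lesson here.)

use std::sync::{
    atomic::{AtomicI32, Ordering},
    Arc,
};
use std::{thread, time::Duration};

struct State {
    a: AtomicI32,
    b: AtomicI32,
}

fn t1_work(state: Arc<State>) {
    while state.a.load(Ordering::SeqCst) != 0 {
        println!("a = {}", state.a.load(Ordering::SeqCst));
        state.a.fetch_sub(1, Ordering::SeqCst);
        thread::sleep(Duration::from_secs(1));
    }
}

fn t2_work(state: Arc<State>) {
    while state.b.load(Ordering::SeqCst) != 0 {
        println!("b = {}", state.b.load(Ordering::SeqCst));
        state.b.fetch_sub(1, Ordering::SeqCst);
        thread::sleep(Duration::from_secs(1));
    }
}

fn main() {
    let state = Arc::new(State {
        a: AtomicI32::new(3),
        b: AtomicI32::new(3),
    });

    let (s1, s2) = (state.clone(), state.clone());
    let t1 = thread::spawn(move || t1_work(s1));
    let t2 = thread::spawn(move || t2_work(s2));

    t1.join().unwrap();
    t2.join().unwrap();
}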

Here's my proposal: The first thing we want to be able to do is to have our worker threads borrow from the state. Borrow mutably, even.

There is a crate for that, and its name is crossbeam.

$ cargo add crossbeam
    Updating 'https://github.com/rust-lang/crates.io-index' index
      Adding crossbeam v0.7.3 to dependencies
fn t1_work(state: &Mutex<&mut State>) {
    // omitted (same code)
}

fn t2_work(state: &Mutex<&mut State>) {
    // same here
}

fn work() {
    let mut state = State { a: 3, b: 3 };
    let m = Mutex::new(&mut state);

    crossbeam::scope(|s| {
        s.spawn(|_| t1_work(&m));
        s.spawn(|_| t2_work(&m));
    })
    .unwrap();
}

How does this differ from std::thread::spawn? Well, now our threads are scoped. We know that when crossbeam::scope returns, all the threads we've spawned will have terminated. Even if some of them panic.

So, threads in a crossbeam scope can borrow from their environment (here, from one of work's locals).

Now, we no longer need an std::sync::Arc - we no longer do any reference counting. Which means our program is more efficient. Hurray!

We still have a bunch of locking though:

fn t1_work(state: &Mutex<&mut State>) {
    while state.lock().unwrap().a != 0 {
        println!("a = {}", state.lock().unwrap().a);
        state.lock().unwrap().a -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

How do we fix that?

Well - we know in our heart that t1_work only ever accesses state.a and t2_work only accesses state.b... but we haven't told the Rust compiler that.

Cool bear:

But again, can't it see it?

It can see it from the contents of t1_work and the contents of t2_work, but it isn't using that information to deduce anything. Because t1_work might actually be in another crate - and then all we have to go by is its type signature.

So, what if, instead of taking a mutable reference to the entire State, we only took a mutable reference to the fields we wanted?

fn t1_work(a: &mut i32) {
    while *a != 0 {
        println!("a = {}", a);
        *a -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

fn t2_work(b: &mut i32) {
    while *b != 0 {
        println!("b = {}", b);
        *b -= 1;
        thread::sleep(Duration::from_secs(1));
    }
}

Then we wouldn't even need a Mutex, since we can mutably borrow non-overlapping parts of state at the same time:

fn work() {
    let mut state = State { a: 3, b: 3 };
    // this is fine
    let a = &mut state.a;
    // this is also fine
    let b = &mut state.b;
    // there's nothing left to borrow from `state` at this point

    crossbeam::scope(|s| {
        s.spawn(|_| t1_work(a));
        s.spawn(|_| t2_work(b));
    })
    .unwrap();
}

Note that this approach generalizes well. For example, let's say we need to borrow multiple fields from our State struct:

use std::{thread, time::Duration};

struct State {
    a: i32,
    b: i32,
    max: i32,
}

fn work(name: &str, counter: &mut i32, max: &i32) {
    while *counter < *max {
        println!("{} = {}", name, counter);
        *counter += 1;
        thread::sleep(Duration::from_secs(1));
    }
}

fn main() {
    let mut state = State { a: 0, b: 0, max: 3 };
    let a = &mut state.a;
    let b = &mut state.b;
    let max = &state.max;

    crossbeam::scope(|s| {
        s.spawn(|_| work("a", a, max));
        s.spawn(|_| work("b", b, max));
    })
    .unwrap();
}
$ cargo run --quiet
b = 0
a = 0
b = 1
a = 1
b = 2
a = 2
What did we learn?

In Rust, we don't tend to think of "state" as a monolith. It's not just one big class (or one big struct) that everything feeds back into.

Because the set of "valid Rust programs" (programs that compile) is severely constrained by Rust's rules - lifetimes, marker traits, etc. - it is often necessary to rethink the structure of a program just to get it to compile.

It's an essential part of "thinking in Rust".

In many cases, splitting application state in several separate structs helps a lot. More granularity helps expressing the actual lifetime constraints needed for the program to borrow-check.

Foundational learning takes time

Finally, I'd like to leave you with some words of encouragement.

Even if you try your darndest to learn Rust, and focus real hard, chances are, it'll take you some time (and a few tries) to "get it".

And is there really such a thing as "getting it"? Rust is an entirely new type of game. There's definitely a pro scene already, but there is still much to be done. The language is evolving, and we, collectively, haven't yet figured out everything that new way of thinking unlocks.

So start small, don't get discouraged, and just keep at it!
