Day 5 (Advent of Code 2022)
Part 1
The day 5 challenge actually looks fun!
Our input looks like this:
[D]
[N] [C]
[Z] [M] [P]
1 2 3
move 1 from 2 to 1
move 3 from 1 to 3
move 2 from 2 to 1
move 1 from 1 to 2
Which is a visual representation of stacks, and so, for once, we have some serious parsing to do, and that means I finally have a good reason to bust out the nom crate.
$ cargo add nom@7
Updating crates.io index
Adding nom v7 to dependencies.
Features as of v7.0.0:
+ alloc
+ std
- docsrs
We're going to need a bunch of `use` statements; here they are in bulk:
use nom::{
branch::alt,
bytes::complete::{tag, take},
combinator::{all_consuming, map, opt},
sequence::{delimited, preceded},
Finish, IResult,
};
(This matters because nom has both `bytes::streaming` and `bytes::complete`. We have all the input here, so we don't need to write a streaming parser.)
First we'll want to parse the crates: a crate is `[c]`, where `c` is a `char` (it could be a `u8`, but let's waste a little memory to make our lives slightly comfier).
#[derive(Debug)]
struct Crate(char);
fn parse_crate(i: &str) -> IResult<&str, Crate> {
let first_char = |s: &str| Crate(s.chars().next().unwrap());
let f = delimited(tag("["), take(1_usize), tag("]"));
map(f, first_char)(i)
}
Equally important is a hole: that's just three spaces.
fn parse_hole(i: &str) -> IResult<&str, ()> {
// `drop` takes a value and returns nothing, which is
// perfect for our case
map(tag("   "), drop)(i)
}
We must be able to parse both crates and holes, so let's have a function that does either and returns an `Option<Crate>`:
fn parse_crate_or_hole(i: &str) -> IResult<&str, Option<Crate>> {
alt((map(parse_crate, Some), map(parse_hole, |_| None)))(i)
}
And then, one that parses a full line:
fn parse_crate_line(i: &str) -> IResult<&str, Vec<Option<Crate>>> {
let (mut i, c) = parse_crate_or_hole(i)?;
let mut v = vec![c];
loop {
let (next_i, maybe_c) = opt(preceded(tag(" "), parse_crate_or_hole))(i)?;
match maybe_c {
Some(c) => v.push(c),
None => break,
}
i = next_i;
}
Ok((i, v))
}
Finally, we can use it like so:
fn main() {
let mut crate_lines = vec![];
for line in include_str!("sample-input.txt").lines() {
if let Ok((_rest, crate_line)) = all_consuming(parse_crate_line)(line).finish() {
crate_lines.push(crate_line);
}
}
for line in &crate_lines {
println!("{line:?}");
}
}
This gives us, for the sample input:
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.40s
Running `target/debug/day5`
[None, Some(Crate('D')), None]
[Some(Crate('N')), Some(Crate('C')), None]
[Some(Crate('Z')), Some(Crate('M')), Some(Crate('P'))]
Which is a pretty good start! However, the problem has us consider those crates in "piles", which are columns, and we have lines/rows. So, we need a "transpose" implementation.
Here's a Rust tip: the language has been around for a while; you don't need to code everything yourself! I did a quick web search for "transpose vec of vecs in rust" and stumbled upon this StackOverflow answer, which we can use as-is:
fn transpose<T>(v: Vec<Vec<T>>) -> Vec<Vec<T>> {
assert!(!v.is_empty());
let len = v[0].len();
let mut iters: Vec<_> = v.into_iter().map(|n| n.into_iter()).collect();
(0..len)
.map(|_| {
iters
.iter_mut()
.map(|n| n.next().unwrap())
.collect::<Vec<T>>()
})
.collect()
}
Our `main` function then becomes:
fn main() {
let mut crate_lines = vec![];
for line in include_str!("sample-input.txt").lines() {
if let Ok((_rest, crate_line)) = all_consuming(parse_crate_line)(line).finish() {
crate_lines.push(crate_line);
}
}
let crate_columns = transpose(crate_lines);
for col in &crate_columns {
println!("{col:?}");
}
}
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.47s
Running `target/debug/day5`
[None, Some(Crate('N')), Some(Crate('Z'))]
[Some(Crate('D')), Some(Crate('C')), Some(Crate('M'))]
[None, None, Some(Crate('P'))]
Now they're arranged by columns, but we still have the holes! We can get rid of those by having `transpose` accept a `Vec<Vec<Option<T>>>` and switching from `map` to `filter_map`, which already expects an `Option<T>` and will simply ignore the `None` items.
// 👇
fn transpose<T>(v: Vec<Vec<Option<T>>>) -> Vec<Vec<T>> {
assert!(!v.is_empty());
let len = v[0].len();
let mut iters: Vec<_> = v.into_iter().map(|n| n.into_iter()).collect();
(0..len)
.map(|_| {
iters
.iter_mut()
// 👇
.filter_map(|n| n.next().unwrap())
.collect::<Vec<T>>()
})
.collect()
}
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.43s
Running `target/debug/day5`
[Crate('N'), Crate('Z')]
[Crate('D'), Crate('C'), Crate('M')]
[Crate('P')]
That's better. But there's still an issue: we have those in "top to bottom order". Because we're going to need to pop crates off of some piles and plop them over the top of some other piles (that's what the next few lines are instructing us to do), it would be more convenient to have them in the reverse order:
// 👇 renamed, to better indicate functionality
fn transpose_rev<T>(v: Vec<Vec<Option<T>>>) -> Vec<Vec<T>> {
assert!(!v.is_empty());
let len = v[0].len();
let mut iters: Vec<_> = v.into_iter().map(|n| n.into_iter()).collect();
(0..len)
.map(|_| {
iters
.iter_mut()
// 👇
.rev()
.filter_map(|n| n.next().unwrap())
.collect::<Vec<T>>()
})
.collect()
}
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.46s
Running `target/debug/day5`
[Crate('Z'), Crate('N')]
[Crate('M'), Crate('C'), Crate('D')]
[Crate('P')]
Let's clean up the `Debug` output a bit:
use std::fmt;

struct Crate(char);
impl fmt::Debug for Crate {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.0)
}
}
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.48s
Running `target/debug/day5`
[Z, N]
[M, C, D]
[P]
And compare with our input:
[D]
[N] [C]
[Z] [M] [P]
1 2 3
move 1 from 2 to 1
move 3 from 1 to 3
move 2 from 2 to 1
move 1 from 1 to 2
Shape rotators of the world, unite! I think this looks okay.
Next up, we should parse instructions. We have decimal numbers (this needs two more imports: `map_res` from `nom::combinator` and `take_while1` from `nom::bytes::complete`):
fn parse_number(i: &str) -> IResult<&str, usize> {
map_res(take_while1(|c: char| c.is_ascii_digit()), |s: &str| {
s.parse::<usize>()
})(i)
}
The instructions use 1-indexed pile numbers, but our `Vec` of piles is 0-indexed, so:
// convert from 1-indexed to 0-indexed
fn parse_pile_number(i: &str) -> IResult<&str, usize> {
map(parse_number, |i| i - 1)(i)
}
And then full instructions (`tuple` comes from `nom::sequence`):
#[derive(Debug)]
struct Instruction {
quantity: usize,
src: usize,
dst: usize,
}
fn parse_instruction(i: &str) -> IResult<&str, Instruction> {
map(
tuple((
preceded(tag("move "), parse_number),
preceded(tag(" from "), parse_pile_number),
preceded(tag(" to "), parse_pile_number),
)),
|(quantity, src, dst)| Instruction { quantity, src, dst },
)(i)
}
Our main function becomes:
fn main() {
let mut lines = include_str!("sample-input.txt").lines();
let crate_lines: Vec<_> = (&mut lines)
.map_while(|line| {
all_consuming(parse_crate_line)(line)
.finish()
.ok()
.map(|(_, line)| line)
})
.collect();
let crate_columns = transpose_rev(crate_lines);
for col in &crate_columns {
println!("{col:?}");
}
// we've consumed the "numbers line" but not the separating line
assert!(lines.next().unwrap().is_empty());
let instructions: Vec<_> = lines
.map(|line| all_consuming(parse_instruction)(line).finish().unwrap().1)
.collect();
for ins in &instructions {
println!("{ins:?}");
}
}
Note that our error handling here is minimal. You can make it better if you want!
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.42s
Running `target/debug/day5`
[Z, N]
[M, C, D]
[P]
Instruction { quantity: 1, src: 1, dst: 0 }
Instruction { quantity: 3, src: 0, dst: 2 }
Instruction { quantity: 2, src: 1, dst: 0 }
Instruction { quantity: 1, src: 0, dst: 1 }
It's time to apply instructions, one by one! For convenience, we'll make `Crate` `Copy`:
#[derive(Clone, Copy)]
struct Crate(char);
And we'll make a `Piles` type, with a `Debug` impl:
struct Piles(Vec<Vec<Crate>>);
impl fmt::Debug for Piles {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
for (i, pile) in self.0.iter().enumerate() {
writeln!(f, "Pile {}: {:?}", i, pile)?;
}
Ok(())
}
}
So we can easily dump it:
let piles = Piles(transpose_rev(crate_lines));
println!("{piles:?}");
$ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.00s
Running `target/debug/day5`
Pile 0: [Z, N]
Pile 1: [M, C, D]
Pile 2: [P]
Instruction { quantity: 1, src: 1, dst: 0 }
Instruction { quantity: 3, src: 0, dst: 2 }
Instruction { quantity: 2, src: 1, dst: 0 }
Instruction { quantity: 1, src: 0, dst: 1 }
And now we can have an `apply` method on `Piles`:
impl Piles {
fn apply(&mut self, ins: Instruction) {
for _ in 0..ins.quantity {
let el = self.0[ins.src].pop().unwrap();
self.0[ins.dst].push(el);
}
}
}
And our `main` function can apply instructions as we parse them:
fn main() {
let mut lines = include_str!("sample-input.txt").lines();
let crate_lines: Vec<_> = (&mut lines)
.map_while(|line| {
all_consuming(parse_crate_line)(line)
.finish()
.ok()
.map(|(_, line)| line)
})
.collect();
let mut piles = Piles(transpose_rev(crate_lines));
println!("{piles:?}");
// we've consumed the "numbers line" but not the separating line
assert!(lines.next().unwrap().is_empty());
for ins in lines.map(|line| all_consuming(parse_instruction)(line).finish().unwrap().1) {
println!("{ins:?}");
piles.apply(ins);
println!("{piles:?}");
}
}
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.50s
Running `target/debug/day5`
Pile 0: [Z, N]
Pile 1: [M, C, D]
Pile 2: [P]
Instruction { quantity: 1, src: 1, dst: 0 }
Pile 0: [Z, N, D]
Pile 1: [M, C]
Pile 2: [P]
Instruction { quantity: 3, src: 0, dst: 2 }
Pile 0: []
Pile 1: [M, C]
Pile 2: [P, D, N, Z]
Instruction { quantity: 2, src: 1, dst: 0 }
Pile 0: [C, M]
Pile 1: []
Pile 2: [P, D, N, Z]
Instruction { quantity: 1, src: 0, dst: 1 }
Pile 0: [C]
Pile 1: [M]
Pile 2: [P, D, N, Z]
Finally, the answer has to be what's on top of each pile: for us, that's the last item of the `Vec` (since we are in "bottom to top" order):
println!(
"answer = {}",
piles.0.iter().map(|pile| pile.last().unwrap()).join("")
);
Joining iterators is not functionality that's in the Rust standard library: we could collect to a `Vec` first if we wanted to stick with libstd, but I don't mind reaching for itertools again, so:
$ cargo add itertools
(cut)
use itertools::Itertools;
Now `.join` works on iterators, and all that's left is to implement `Display` for `Crate`, which we can just forward to its `Debug` impl:
impl fmt::Display for Crate {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt::Debug::fmt(self, f)
}
}
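As an aside, had we wanted to stay std-only, collecting the top chars straight into a `String` would also work. A minimal sketch:

```rust
fn main() {
    // hypothetical pile tops, as produced by `pile.last()`
    let tops = ['C', 'M', 'Z'];
    // String implements FromIterator<&char>, so no itertools needed
    let answer: String = tops.iter().collect();
    assert_eq!(answer, "CMZ");
}
```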
$ cargo run
Compiling either v1.8.0
Compiling itertools v0.10.5
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 1.72s
Running `target/debug/day5`
Pile 0: [Z, N]
Pile 1: [M, C, D]
Pile 2: [P]
Instruction { quantity: 1, src: 1, dst: 0 }
Pile 0: [Z, N, D]
Pile 1: [M, C]
Pile 2: [P]
Instruction { quantity: 3, src: 0, dst: 2 }
Pile 0: []
Pile 1: [M, C]
Pile 2: [P, D, N, Z]
Instruction { quantity: 2, src: 1, dst: 0 }
Pile 0: [C, M]
Pile 1: []
Pile 2: [P, D, N, Z]
Instruction { quantity: 1, src: 0, dst: 1 }
Pile 0: [C]
Pile 1: [M]
Pile 2: [P, D, N, Z]
answer = CMZ
That does give the right answer for the sample input... and for my real input! Neat!
Part 2
In part 2, we discover that in fact, when applying an instruction, we shouldn't lift crates off of the source pile and onto the destination pile one by one, but all together, preserving their order!
This is a simple change:
impl Piles {
fn apply(&mut self, ins: Instruction) {
// N.B: `crate` is a keyword
for krate in (0..ins.quantity)
.map(|_| self.0[ins.src].pop().unwrap())
.rev()
{
self.0[ins.dst].push(krate);
}
}
}
Or... it would be, if it compiled:
$ cargo check --quiet
error[E0501]: cannot borrow `self.0` as mutable because previous closure requires unique access
--> src/main.rs:60:13
|
56 | for krate in (0..ins.quantity)
| ______________________-
57 | | .map(|_| self.0[ins.src].pop().unwrap())
| | --- ------ first borrow occurs due to use of `self.0` in closure
| | |
| | closure construction occurs here
58 | | .rev()
| |__________________- first borrow later used here
59 | {
60 | self.0[ins.dst].push(krate);
| ^^^^^^ second borrow occurs here
For more information about this error, try `rustc --explain E0501`.
error: could not compile `day5` due to previous error
The problem here is that we cannot hold two mutable references into the same `Vec`. This is actually fine, since they're (supposedly!) references to different items of the `Vec`; it's just something the borrow checker doesn't understand.
There's a bunch of ways to solve this, which is why it's interesting to discuss.
Borrow checker limitations and workarounds
It's worth noting that the borrow checker is already quite smart. I won't get into everything here, but, I just want to note that, for example, this compiles:
#[derive(Default)]
struct S {
a: u32,
b: u32,
}
let mut s = S::default();
let a = &mut s.a;
let b = &mut s.b;
*a = 1;
*b = 2;
*a = 3;
*b = 4;
Here, we're borrowing mutably two different fields from the same struct. That's no problem at all!
Things are different with a `Vec`, or even an array:
let mut arr = [12_usize, 34];
let a = &mut arr[0];
let b = &mut arr[1];
*a = 1;
*b = 2;
*a = 3;
*b = 4;
$ cargo check --quiet
error[E0499]: cannot borrow `arr[_]` as mutable more than once at a time
--> src/main.rs:65:17
|
64 | let a = &mut arr[0];
| ----------- first mutable borrow occurs here
65 | let b = &mut arr[1];
| ^^^^^^^^^^^ second mutable borrow occurs here
66 | *a = 1;
| ------ first borrow later used here
|
= help: consider using `.split_at_mut(position)` or similar method to obtain two mutable non-overlapping sub-slices
For more information about this error, try `rustc --explain E0499`.
error: could not compile `day5` due to previous error
The compiler does give a suggestion though: that's our first workaround. It is obviously correct to hold two mutable references to different positions of an array/slice/vec, so we are able to achieve that with only safe code.
It's just that the API is a bit awkward for what we're trying to do:
// this is `[12, 34]`
let mut arr = [12_usize, 34];
let (xx, yy) = arr.split_at_mut(1);
// `xx` is now `[12]` and `yy` is `[34]`
let a = &mut xx[0];
let b = &mut yy[0];
*a = 1;
*b = 2;
*a = 3;
*b = 4;
"Why this is even an issue" might be especially puzzling if you're coming at Rust with a C/C++ mindset. You'd expect to just be able to mutate whatever, being careful enough not to accidentally create data races and such. And that's something "unsafe Rust" lets you do. It doesn't change the normal rules, though, so this still doesn't compile:
unsafe {
let mut arr = [12_usize, 34];
let a = &mut arr[0];
let b = &mut arr[1];
*a = 1;
*b = 2;
*a = 3;
*b = 4;
}
$ cargo check --quiet
error[E0499]: cannot borrow `arr[_]` as mutable more than once at a time
--> src/main.rs:66:21
|
65 | let a = &mut arr[0];
| ----------- first mutable borrow occurs here
66 | let b = &mut arr[1];
| ^^^^^^^^^^^ second mutable borrow occurs here
67 | *a = 1;
| ------ first borrow later used here
|
= help: consider using `.split_at_mut(position)` or similar method to obtain two mutable non-overlapping sub-slices
warning: unnecessary `unsafe` block
--> src/main.rs:63:9
|
63 | unsafe {
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
For more information about this error, try `rustc --explain E0499`.
error: could not compile `day5` due to previous error; 1 warning emitted
Which is to say: the borrow checker still runs on unsafe code. The compiler even helpfully points out that, since we're not doing unsafe things inside that block (merely illegal things), it's unnecessary and should be removed.
This is important, because the `unsafe` keyword denotes parts of the code that warrant careful inspection, since it's where we can accidentally invoke UB (undefined behavior), the proverbial nasal demons.
Here's something unsafe: raw pointers!
unsafe {
let mut arr = [12_usize, 34];
let arr_ptr = arr.as_mut_ptr();
let a_index = 0;
let b_index = 1;
let a = &mut *arr_ptr.add(a_index);
let b = &mut *arr_ptr.add(b_index);
*a = 1;
*b = 2;
*a = 3;
*b = 4;
}
This code is fine. Probably. But this one isn't:
unsafe {
let mut arr = [12_usize, 34];
let arr_ptr = arr.as_mut_ptr();
let a_index = 0;
let b_index = 0;
let a = &mut *arr_ptr.add(a_index);
let b = &mut *arr_ptr.add(b_index);
*a = 1;
*b = 2;
*a = 3;
*b = 4;
}
Can you tell why?
We're creating two mutable references to the same memory location. And that's not just "potentially trouble down the line, if we have accesses to the two references from different threads", it's insta-UB and everything goes: the compiler may delete your program or shave your head if it so wishes. Nothing is guaranteed anymore.
Which is why, if we're going to do it this way, we'll want to build a safe abstraction on top of it:
pub trait BorrowTwoMut<T> {
fn borrow_two_mut(&mut self, a: usize, b: usize) -> (&mut T, &mut T);
}
impl<T> BorrowTwoMut<T> for [T] {
fn borrow_two_mut(&mut self, a: usize, b: usize) -> (&mut T, &mut T) {
assert!(a != b);
unsafe {
let ptr = self.as_mut_ptr();
let a_ptr = ptr.add(a);
let b_ptr = ptr.add(b);
(&mut *a_ptr, &mut *b_ptr)
}
}
}
And now, we can do this:
let mut arr = [12_usize, 34];
let (a, b) = arr.borrow_two_mut(0, 1);
*a = 1;
*b = 2;
*a = 3;
*b = 4;
It'll fail at runtime if you pass the same two indices. It's perfect!
Uh... no it's not?
Consider this:
let mut arr = [12_usize, 34];
// 👇
let (a, b) = arr.borrow_two_mut(8, 9);
*a = 1;
*b = 2;
*a = 3;
*b = 4;
Uh oh, that's right! I forgot about bounds checks. Those are done when indexing into slices (you can use unchecked versions if that's really your bottleneck, but chances are, it's really not).
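For comparison, the safe APIs do the bounds checking for us. A quick standalone sketch:

```rust
fn main() {
    let arr = [12_usize, 34];
    // indexing with `[]` would panic on an out-of-bounds access;
    // `get` returns None instead
    assert_eq!(arr.get(8), None);
    assert_eq!(arr.get(1), Some(&34));
}
```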
Here's a more correct version:
impl<T> BorrowTwoMut<T> for [T] {
fn borrow_two_mut(&mut self, a: usize, b: usize) -> (&mut T, &mut T) {
assert!(a != b);
assert!(a < self.len());
assert!(b < self.len());
unsafe {
let ptr = self.as_mut_ptr();
let a_ptr = ptr.add(a);
let b_ptr = ptr.add(b);
(&mut *a_ptr, &mut *b_ptr)
}
}
}
I say "more correct" because there might still be bugs. I don't know! I don't write a lot of unsafe code — I don't have to.
In fact, I'd be much more comfortable with this implementation:
impl<T> BorrowTwoMut<T> for [T] {
fn borrow_two_mut(&mut self, a: usize, b: usize) -> (&mut T, &mut T) {
assert!(a != b);
if a < b {
let (l, r) = self.split_at_mut(b);
(&mut l[a], &mut r[0])
} else {
let (l, r) = self.split_at_mut(a);
(&mut r[0], &mut l[b])
}
}
}
And some tests, of course:
#[cfg(test)]
mod tests {
use crate::BorrowTwoMut;
#[test]
fn borrow_two_mut_works() {
let mut arr = [0, 1, 2, 3, 4, 5];
{
let (a, b) = arr.borrow_two_mut(1, 4);
assert_eq!(*a, 1);
assert_eq!(*b, 4);
}
{
let (a, b) = arr.borrow_two_mut(5, 2);
assert_eq!(*a, 5);
assert_eq!(*b, 2);
}
}
#[test]
#[should_panic]
fn borrow_two_mut_panics_on_same_index() {
let mut arr = [0, 1, 2, 3, 4, 5];
let (_, _) = arr.borrow_two_mut(1, 1);
}
}
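Another safe, std-only workaround, by the way (just a sketch, not what we'll use below): temporarily `std::mem::take` the source pile out of the `Vec`, so the two mutable borrows never coexist:

```rust
fn main() {
    let mut piles: Vec<Vec<char>> = vec![vec!['Z', 'N'], vec!['M', 'C', 'D'], vec!['P']];
    let (src, dst, quantity) = (1, 0, 1);
    // take the source pile out, leaving an empty Vec in its place;
    // `piles` is now free to be borrowed mutably for the destination
    let mut src_pile = std::mem::take(&mut piles[src]);
    for _ in 0..quantity {
        let krate = src_pile.pop().unwrap();
        piles[dst].push(krate);
    }
    // put the source pile back
    piles[src] = src_pile;
    assert_eq!(piles[0], vec!['Z', 'N', 'D']);
    assert_eq!(piles[1], vec!['M', 'C']);
}
```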
Anyway, that's not the only way for us to get out of this pickle.
We can use a RefCell to do borrow-checking at runtime instead:
use std::cell::RefCell;
struct Piles(Vec<RefCell<Vec<Crate>>>);
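Before adapting the rest of the code, here's a standalone sketch of the tradeoff: the aliasing rules don't go away, they're just checked at runtime:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);
    let first = cell.borrow_mut();
    // a second mutable borrow is an error caught at runtime,
    // not at compile time (`borrow_mut()` would panic here)
    assert!(cell.try_borrow_mut().is_err());
    drop(first);
    // once the first borrow is gone, we can borrow again
    assert!(cell.try_borrow_mut().is_ok());
}
```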
We have to change some code:
// in main
let mut piles = Piles(
transpose_rev(crate_lines)
.into_iter()
.map(RefCell::new)
.collect(),
);
println!("{piles:?}");
// then, later
println!(
"answer = {}",
piles
.0
.iter()
.map(|pile| *pile.borrow().last().unwrap())
.join("")
);
And:
impl Piles {
fn apply(&mut self, ins: Instruction) {
// N.B: `crate` is a keyword
for krate in (0..ins.quantity)
.map(|_| self.0[ins.src].borrow_mut().pop().unwrap())
.rev()
{
self.0[ins.dst].borrow_mut().push(krate);
}
}
}
But it does work!
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.52s
Running `target/debug/day5`
Pile 0: RefCell { value: [Z, N] }
Pile 1: RefCell { value: [M, C, D] }
Pile 2: RefCell { value: [P] }
Instruction { quantity: 1, src: 1, dst: 0 }
Pile 0: RefCell { value: [Z, N, D] }
Pile 1: RefCell { value: [M, C] }
Pile 2: RefCell { value: [P] }
Instruction { quantity: 3, src: 0, dst: 2 }
Pile 0: RefCell { value: [] }
Pile 1: RefCell { value: [M, C] }
Pile 2: RefCell { value: [P, D, N, Z] }
Instruction { quantity: 2, src: 1, dst: 0 }
Pile 0: RefCell { value: [C, M] }
Pile 1: RefCell { value: [] }
Pile 2: RefCell { value: [P, D, N, Z] }
Instruction { quantity: 1, src: 0, dst: 1 }
Pile 0: RefCell { value: [C] }
Pile 1: RefCell { value: [M] }
Pile 2: RefCell { value: [P, D, N, Z] }
answer = CMZ
...wait, no it doesn't. Well I mean it builds, but it doesn't do the right thing!
In fact, `.rev()` doesn't seem to do anything here...
Well, I suppose if you iterate (0, 1, 2) or (2, 1, 0), and then lazily map that to `some_vec.pop()`... it makes no difference.
No, yeah, I can see the reasoning, I just... mhh. It feels wrong. But I get it.
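Here's a standalone sketch of that reasoning: `.rev()` on the lazy `map` doesn't change the order in which `pop()` runs, whereas collecting first does:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    // lazy: pop() runs once per item *as it's demanded*, so even
    // "reversed" we still pop 3, then 2, then 1
    let lazy: Vec<_> = (0..3).map(|_| v.pop().unwrap()).rev().collect();
    assert_eq!(lazy, vec![3, 2, 1]);

    let mut v = vec![1, 2, 3];
    // eager: collect forces the pops (3, 2, 1) first, then we reverse
    let eager: Vec<_> = (0..3)
        .map(|_| v.pop().unwrap())
        .collect::<Vec<_>>()
        .into_iter()
        .rev()
        .collect();
    assert_eq!(eager, vec![1, 2, 3]);
}
```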
So, huh, I suppose... we can move on to my next solution, which is simply:
impl Piles {
fn apply(&mut self, ins: Instruction) {
// N.B: `crate` is a keyword
for krate in (0..ins.quantity)
.map(|_| self.0[ins.src].pop().unwrap())
.collect::<Vec<_>>()
.into_iter()
.rev()
{
self.0[ins.dst].push(krate);
}
}
}
That way, we're forcing evaluation of the first `map` early, and then iterating those results in reverse order.
That code (which is what I wanted to do in the first place) does give the correct result:
$ cargo run
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished dev [unoptimized + debuginfo] target(s) in 0.57s
Running `target/debug/day5`
Pile 0: [Z, N]
Pile 1: [M, C, D]
Pile 2: [P]
Instruction { quantity: 1, src: 1, dst: 0 }
Pile 0: [Z, N, D]
Pile 1: [M, C]
Pile 2: [P]
Instruction { quantity: 3, src: 0, dst: 2 }
Pile 0: []
Pile 1: [M, C]
Pile 2: [P, Z, N, D]
Instruction { quantity: 2, src: 1, dst: 0 }
Pile 0: [M, C]
Pile 1: []
Pile 2: [P, Z, N, D]
Instruction { quantity: 1, src: 0, dst: 1 }
Pile 0: [M]
Pile 1: [C]
Pile 2: [P, Z, N, D]
answer = MCD
Heap allocation hate club
And I can hear you from here: "But Amos, that's an unnecessary heap allocation! Can't we get rid of it?"
And of course we can. I skimmed through my input and found that we never need to move more than 64 items, so, if you insist, we can allocate that on the stack instead:
$ cargo add smallvec
Updating crates.io index
Adding smallvec v1.10.0 to dependencies.
(cut)
impl Piles {
fn apply(&mut self, ins: Instruction) {
// N.B: `crate` is a keyword
for krate in (0..ins.quantity)
.map(|_| self.0[ins.src].pop().unwrap())
.collect::<SmallVec<[_; 64]>>()
.into_iter()
.rev()
{
self.0[ins.dst].push(krate);
}
}
}
Now let's track how many allocations we saved with tracking-allocator.
$ cargo add tracking-allocator@0.4 --no-default-features
Updating crates.io index
Adding tracking-allocator v0.4 to dependencies.
Features as of v0.4.0:
- tracing
- tracing-compat
- tracing-subscriber
Then, above main:
use std::alloc::System;
use tracking_allocator::{
AllocationGroupId, AllocationGroupToken, AllocationRegistry, AllocationTracker, Allocator,
};
#[global_allocator]
static GLOBAL: Allocator<System> = Allocator::system();
struct StdoutTracker;
impl AllocationTracker for StdoutTracker {
fn allocated(
&self,
addr: usize,
object_size: usize,
wrapped_size: usize,
group_id: AllocationGroupId,
) {
println!(
"allocation -> addr=0x{:0x} object_size={} wrapped_size={} group_id={:?}",
addr, object_size, wrapped_size, group_id
);
}
fn deallocated(
&self,
addr: usize,
object_size: usize,
wrapped_size: usize,
source_group_id: AllocationGroupId,
current_group_id: AllocationGroupId,
) {
println!(
"deallocation -> addr=0x{:0x} object_size={} wrapped_size={} source_group_id={:?} current_group_id={:?}",
addr, object_size, wrapped_size, source_group_id, current_group_id
);
}
}
And then, in `main`:
AllocationRegistry::set_global_tracker(StdoutTracker)
.expect("no other global tracker should be set yet");
AllocationRegistry::enable_tracking();
for ins in lines.map(|line| all_consuming(parse_instruction)(line).finish().unwrap().1) {
println!("{ins:?}");
piles.apply(ins);
println!("{piles:?}");
}
AllocationRegistry::disable_tracking();
This generates output like:
$ cargo run --release
Finished release [optimized] target(s) in 0.00s
Running `target/release/day5`
Pile 0: [Q, S, W, C, Z, V, F, T]
Pile 1: [Q, R, B]
Pile 2: [B, Z, T, Q, P, M, S]
Pile 3: [D, V, F, R, Q, H]
Pile 4: [J, G, L, D, B, S, T, P]
Pile 5: [W, R, T, Z]
Pile 6: [H, Q, M, N, S, F, R, J]
Pile 7: [R, N, F, H, W]
Pile 8: [J, Z, T, Q, P, R, B]
Instruction { quantity: 3, src: 7, dst: 1 }
allocation -> addr=0x55d54070f2b8 object_size=12 wrapped_size=24 group_id=AllocationGroupId(1)
allocation -> addr=0x55d54070f308 object_size=32 wrapped_size=40 group_id=AllocationGroupId(1)
deallocation -> addr=0x55d54070f2b8 object_size=12 wrapped_size=24 source_group_id=AllocationGroupId(1) current_group_id=AllocationGroupId(1)
Pile 0: [Q, S, W, C, Z, V, F, T]
Pile 1: [Q, R, B, F, H, W]
Pile 2: [B, Z, T, Q, P, M, S]
Pile 3: [D, V, F, R, Q, H]
Pile 4: [J, G, L, D, B, S, T, P]
Pile 5: [W, R, T, Z]
Pile 6: [H, Q, M, N, S, F, R, J]
Pile 7: [R, N]
Pile 8: [J, Z, T, Q, P, R, B]
Instruction { quantity: 3, src: 0, dst: 4 }
allocation -> addr=0x55d54070f2b8 object_size=12 wrapped_size=24 group_id=AllocationGroupId(1)
allocation -> addr=0x55d54070ef08 object_size=64 wrapped_size=72 group_id=AllocationGroupId(1)
deallocation -> addr=0x55d54070f2b8 object_size=12 wrapped_size=24 source_group_id=AllocationGroupId(1) current_group_id=AllocationGroupId(1)
Pile 0: [Q, S, W, C, Z]
Pile 1: [Q, R, B, F, H, W]
Pile 2: [B, Z, T, Q, P, M, S]
Pile 3: [D, V, F, R, Q, H]
Pile 4: [J, G, L, D, B, S, T, P, V, F, T]
Pile 5: [W, R, T, Z]
Pile 6: [H, Q, M, N, S, F, R, J]
Pile 7: [R, N]
Pile 8: [J, Z, T, Q, P, R, B]
And we can count allocations:
$ cargo run --release | grep -E '^allocation' | wc -l
Finished release [optimized] target(s) in 0.00s
Running `target/release/day5`
532
That was with `Vec`; let's do `SmallVec`:
$ cargo run --release | grep -E '^allocation' | wc -l
Compiling day5 v0.1.0 (/home/amos/bearcove/aoc2022/day5)
Finished release [optimized] target(s) in 0.62s
Running `target/release/day5`
29
Wait, 29? Not zero? Let's find out where the allocation happened:
impl AllocationTracker for StdoutTracker {
fn allocated(
&self,
addr: usize,
object_size: usize,
wrapped_size: usize,
group_id: AllocationGroupId,
) {
println!(
"allocation -> addr=0x{:0x} object_size={} wrapped_size={} group_id={:?}",
addr, object_size, wrapped_size, group_id
);
// 👇
panic!();
}
// etc.
}
$ RUST_BACKTRACE=1 cargo run --release
Finished release [optimized] target(s) in 0.00s
Running `target/release/day5`
Pile 0: [Q, S, W, C, Z, V, F, T]
Pile 1: [Q, R, B]
Pile 2: [B, Z, T, Q, P, M, S]
Pile 3: [D, V, F, R, Q, H]
Pile 4: [J, G, L, D, B, S, T, P]
Pile 5: [W, R, T, Z]
Pile 6: [H, Q, M, N, S, F, R, J]
Pile 7: [R, N, F, H, W]
Pile 8: [J, Z, T, Q, P, R, B]
Instruction { quantity: 3, src: 7, dst: 1 }
allocation -> addr=0x562feb2eb2b8 object_size=32 wrapped_size=40 group_id=AllocationGroupId(1)
thread 'main' panicked at 'explicit panic', src/main.rs:31:9
stack backtrace:
0: rust_begin_unwind
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:142:14
2: core::panicking::panic
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:48:5
3: <day5::StdoutTracker as tracking_allocator::AllocationTracker>::allocated
4: <tracking_allocator::allocator::Allocator<A> as core::alloc::global::GlobalAlloc>::alloc
5: __rg_realloc
6: alloc::raw_vec::finish_grow
7: alloc::raw_vec::RawVec<T,A>::reserve_for_push
8: day5::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
We get an even better stack trace if we turn up debug information for our release build:
[package]
name = "day5"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
itertools = "0.10.5"
nom = "7"
smallvec = "1.10.0"
tracking-allocator = { version = "0.4", default-features = false }
# 👇 new!
[profile.release]
debug = 1
$ RUST_BACKTRACE=1 cargo run --release
Finished release [optimized + debuginfo] target(s) in 0.16s
Running `target/release/day5`
Pile 0: [Q, S, W, C, Z, V, F, T]
Pile 1: [Q, R, B]
Pile 2: [B, Z, T, Q, P, M, S]
Pile 3: [D, V, F, R, Q, H]
Pile 4: [J, G, L, D, B, S, T, P]
Pile 5: [W, R, T, Z]
Pile 6: [H, Q, M, N, S, F, R, J]
Pile 7: [R, N, F, H, W]
Pile 8: [J, Z, T, Q, P, R, B]
Instruction { quantity: 3, src: 7, dst: 1 }
allocation -> addr=0x5619efa782b8 object_size=32 wrapped_size=40 group_id=AllocationGroupId(1)
thread 'main' panicked at 'explicit panic', src/main.rs:31:9
stack backtrace:
0: rust_begin_unwind
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:142:14
2: core::panicking::panic
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:48:5
3: <day5::StdoutTracker as tracking_allocator::AllocationTracker>::allocated
at ./src/main.rs:31:9
4: <tracking_allocator::allocator::Allocator<A> as core::alloc::global::GlobalAlloc>::alloc::{{closure}}
at /home/amos/.cargo/registry/src/github.com-1ecc6299db9ec823/tracking-allocator-0.4.0/src/allocator.rs:85:21
5: tracking_allocator::token::try_with_suspended_allocation_group::{{closure}}
at /home/amos/.cargo/registry/src/github.com-1ecc6299db9ec823/tracking-allocator-0.4.0/src/token.rs:268:17
6: std::thread::local::LocalKey<T>::try_with
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/thread/local.rs:445:16
7: tracking_allocator::token::try_with_suspended_allocation_group
at /home/amos/.cargo/registry/src/github.com-1ecc6299db9ec823/tracking-allocator-0.4.0/src/token.rs:261:13
8: <tracking_allocator::allocator::Allocator<A> as core::alloc::global::GlobalAlloc>::alloc
at /home/amos/.cargo/registry/src/github.com-1ecc6299db9ec823/tracking-allocator-0.4.0/src/allocator.rs:74:13
9: core::alloc::global::GlobalAlloc::realloc
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/alloc/global.rs:264:32
10: __rg_realloc
at ./src/main.rs:15:16
11: alloc::alloc::realloc
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/alloc.rs:132:14
12: alloc::alloc::Global::grow_impl
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/alloc.rs:209:31
13: <alloc::alloc::Global as core::alloc::Allocator>::grow
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/alloc.rs:262:18
14: alloc::raw_vec::finish_grow
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/raw_vec.rs:466:13
15: alloc::raw_vec::RawVec<T,A>::grow_amortized
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/raw_vec.rs:400:19
16: alloc::raw_vec::RawVec<T,A>::reserve_for_push
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/raw_vec.rs:298:24
17: alloc::vec::Vec<T,A>::push
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/alloc/src/vec/mod.rs:1768:13
18: day5::Piles::apply
at ./src/main.rs:107:13
19: day5::main
at ./src/main.rs:74:9
20: core::ops::function::FnOnce::call_once
at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/ops/function.rs:248:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
And... yeah, it happens when we actually push onto another pile:
impl Piles {
fn apply(&mut self, ins: Instruction) {
// N.B: `crate` is a keyword
for krate in (0..ins.quantity)
.map(|_| self.0[ins.src].pop().unwrap())
.collect::<SmallVec<[_; 256]>>()
// .collect::<Vec<_>>()
.into_iter()
.rev()
{
// 👇 here!
self.0[ins.dst].push(krate);
}
}
}
Very well — we can simply reserve capacity for our piles, trading memory usage for speed:
fn transpose_rev<T>(v: Vec<Vec<Option<T>>>) -> Vec<Vec<T>> {
assert!(!v.is_empty());
let len = v[0].len();
let mut iters: Vec<_> = v.into_iter().map(|n| n.into_iter()).collect();
(0..len)
.map(|_| {
// 👇
let mut v = Vec::with_capacity(256); // just to be on the safe side
iters
.iter_mut()
.rev()
.filter_map(|n| n.next().unwrap())
// 👇
.collect_into(&mut v);
v
})
.collect()
}
Whoops, collect_into is nightly-only (as of Rust 1.65), but we can use extend instead:
fn transpose_rev<T>(v: Vec<Vec<Option<T>>>) -> Vec<Vec<T>> {
assert!(!v.is_empty());
let len = v[0].len();
let mut iters: Vec<_> = v.into_iter().map(|n| n.into_iter()).collect();
(0..len)
.map(|_| {
let mut v = Vec::with_capacity(256); // just to be on the safe side
v.extend(iters.iter_mut().rev().filter_map(|n| n.next().unwrap()));
v
})
.collect()
}
And voila! No more heap allocations!
$ cargo run --release | grep -E '^allocation' | wc -l
Finished release [optimized + debuginfo] target(s) in 0.00s
Running `target/release/day5`
0
Happy now?
Cool bear's hot tip
Vec::with_capacity still allocates on the heap, but it does so before we start tracking allocations. We got rid of the re-allocations that used to happen while moving the crates around, not the initial allocations.
Removing all heap allocations is a fool's errand anyway. Don't do it! Except as a challenge... maybe.
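To make cool bear's point concrete, here's a small std-only sketch (not from the article itself) showing that a Vec created with with_capacity keeps the same buffer until it outgrows that capacity:

```rust
// Returns whether the buffer moved while filling up to capacity,
// and whether capacity grew after one extra push.
fn check_realloc() -> (bool, bool) {
    let mut v: Vec<u32> = Vec::with_capacity(256);
    let ptr_before = v.as_ptr();

    // fill right up to capacity: no re-allocation happens here
    while v.len() < v.capacity() {
        v.push(0);
    }
    let moved_while_filling = v.as_ptr() != ptr_before;

    // one more push outgrows the buffer and forces a re-allocation
    let cap_before = v.capacity();
    v.push(0);
    let grew_after = v.capacity() > cap_before;

    (moved_while_filling, grew_after)
}

fn main() {
    assert_eq!(check_realloc(), (false, true));
    println!("no re-allocation until capacity is exceeded");
}
```

The initial with_capacity call is itself one heap allocation, which is exactly what the tip above is getting at.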
Reader suggestion: use nom's digit1 parser
Instead of this:
fn parse_number(i: &str) -> IResult<&str, usize> {
map_res(take_while1(|c: char| c.is_ascii_digit()), |s: &str| {
s.parse::<usize>()
})(i)
}
We can use nom's built-in digit1 parser.
fn parse_number(i: &str) -> IResult<&str, usize> {
map_res(digit1, |s: &str| s.parse::<usize>())(i)
}
Which is simpler and shorter!
Thanks to salameme on GitHub!
Reader suggestion: use nom's number parser
Simpler still, we can use nom's number parsers (though none of them outputs usize directly):
fn parse_number(i: &str) -> IResult<&str, usize> {
map(nom::character::complete::u32, |n| n as _)(i)
}
Thanks to afdbcreid on Reddit for this suggestion.
Reader suggestion: use nom's separated_list1 combinator
In parse_crate_line, instead of our complicated loop:
fn parse_crate_line(i: &str) -> IResult<&str, Vec<Option<Crate>>> {
let (mut i, c) = parse_crate_or_hole(i)?;
let mut v = vec![c];
loop {
let (next_i, maybe_c) = opt(preceded(tag(" "), parse_crate_or_hole))(i)?;
match maybe_c {
Some(c) => v.push(c),
None => break,
}
i = next_i;
}
Ok((i, v))
}
We can use nom's separated_list1 combinator:
fn parse_crate_line(i: &str) -> IResult<&str, Vec<Option<Crate>>> {
separated_list1(tag(" "), parse_crate_or_hole)(i)
}
Thanks to afdbcreid on Reddit for this suggestion.
Reader suggestion: Iterator::by_ref
When we're parsing crate lines, we don't want to move out of the iterator, since we still have more to parse afterwards. So, instead of borrowing it with (&mut lines):
// 👇
let crate_lines: Vec<_> = (&mut lines)
.map_while(|line| {
all_consuming(parse_crate_line)(line)
.finish()
.ok()
.map(|(_, line)| line)
})
.collect();
We can use lines.by_ref():
let crate_lines: Vec<_> = lines
// 👇
.by_ref()
.map_while(|line| {
all_consuming(parse_crate_line)(line)
.finish()
.ok()
.map(|(_, line)| line)
})
.collect();
Thanks to afdbcreid on Reddit for this suggestion.
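Here's a minimal std-only illustration of by_ref (the uppercase-letters check is a made-up stand-in for the crate-line parser): the first pass only borrows the iterator, so we can keep consuming from it afterwards:

```rust
// Splits the "header" (lines of uppercase letters, standing in for
// crate lines) from the rest, using by_ref so the iterator survives
// the first pass.
fn split_off_header(input: &str) -> (Vec<&str>, Vec<&str>) {
    let mut lines = input.lines();
    let header: Vec<&str> = lines
        .by_ref() // 👈 borrow instead of move
        .map_while(|l| l.chars().all(|c| c.is_ascii_uppercase()).then_some(l))
        .collect();
    // `lines` is still usable here; note that map_while has already
    // consumed the first non-matching line
    let rest: Vec<&str> = lines.collect();
    (header, rest)
}

fn main() {
    let (header, rest) = split_off_header("AB\nCD\n1 2\nmove 1");
    assert_eq!(header, ["AB", "CD"]);
    assert_eq!(rest, ["move 1"]);
}
```

One subtlety worth knowing: map_while eats the first line that fails the check (here the "1 2" line), just like the article's version eats the numbers line.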
Reader suggestion: use nightly Rust's get_many_mut
We need to switch to Rust nightly for this, so let's create a rust-toolchain.toml file that pins a recent nightly:
[toolchain]
channel = "nightly-2022-12-06"
And then we can use get_many_mut instead of our BorrowTwoMut trait:
// at the top of the file
#![feature(get_many_mut)]
impl Piles {
fn apply(&mut self, ins: Instruction) {
let [src, dst] = self
.0
.get_many_mut([ins.src, ins.dst])
.expect("out of bounds / overlapping src/dst stacks");
(0..ins.quantity)
.map(|_| src.pop().unwrap())
.collect::<SmallVec<[_; 64]>>()
.into_iter()
.rev()
.for_each(|c| dst.push(c));
}
}
Thanks to afdbcreid on Reddit for this suggestion. The original was slightly more involved, but it overlapped with a suggestion posted earlier, so I separated out just the get_many_mut part.
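If you'd rather stay on stable Rust, the same "two disjoint mutable references" shape can be sketched with split_at_mut. This helper (two_muts is my own stand-in name, not the article's BorrowTwoMut trait) splits the slice at the larger index so the two borrows can't overlap:

```rust
// Hands out mutable references to two distinct indices by splitting
// the slice into two non-overlapping halves at the larger index.
fn two_muts<T>(slice: &mut [T], a: usize, b: usize) -> (&mut T, &mut T) {
    assert_ne!(a, b, "src and dst must not overlap");
    if a < b {
        let (left, right) = slice.split_at_mut(b);
        (&mut left[a], &mut right[0])
    } else {
        let (left, right) = slice.split_at_mut(a);
        (&mut right[0], &mut left[b])
    }
}

fn main() {
    let mut piles = vec![vec![1, 2, 3], vec![4]];
    let (src, dst) = two_muts(&mut piles, 0, 1);
    dst.push(src.pop().unwrap());
    assert_eq!(piles, vec![vec![1, 2], vec![4, 3]]);
}
```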
Reader suggestion: use drain and extend
Instead of popping each item from the end of the source stack, we can use drain with a range to drain only the part we need to move. Then, we can use extend on the destination stack to add everything from that iterator:
impl Piles {
fn apply(&mut self, ins: Instruction) {
let [src, dst] = self
.0
.get_many_mut([ins.src, ins.dst])
.expect("out of bounds / overlapping src/dst stacks");
dst.extend(src.drain((src.len() - ins.quantity)..))
}
}
This also works for part 1 if you call rev on the iterator returned by drain.
Thanks to usr_bin_nya and afdbcreid on Reddit, and probably others, for pointing this out.
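Here's a self-contained sketch of the drain-plus-extend move covering both parts of the puzzle (the function name and the preserve_order flag are mine, not the article's):

```rust
// Moves the top `n` crates from `src` to `dst`.
// preserve_order = true matches part 2 (the whole chunk moves at once);
// false matches part 1 (one crate at a time, so the order reverses).
fn move_crates(src: &mut Vec<char>, dst: &mut Vec<char>, n: usize, preserve_order: bool) {
    let range = (src.len() - n)..;
    if preserve_order {
        dst.extend(src.drain(range));
    } else {
        // Drain is a DoubleEndedIterator, so .rev() just works
        dst.extend(src.drain(range).rev());
    }
}

fn main() {
    let mut src = vec!['Z', 'N', 'D'];
    let mut dst = vec!['M', 'C'];
    move_crates(&mut src, &mut dst, 2, true);
    assert_eq!(src, vec!['Z']);
    assert_eq!(dst, vec!['M', 'C', 'N', 'D']);
}
```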
Reader suggestion: collect into a String
Several folks, including psychonoff on Reddit, suggested that although, yes, Iterator::join isn't in the standard library, in this case we can simply collect our Iterator<Item = char> into a String:
This code:
println!(
"answer = {}",
piles.0.iter().map(|pile| *pile.last().unwrap()).join("")
);
Becomes:
println!(
"answer = {}",
piles
.0
.iter()
// this accesses the inner char 👇 (we could also use `.to_string()`)
.map(|pile| pile.last().unwrap().0)
.collect::<String>()
);
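This works because String implements FromIterator&lt;char&gt;, so collect can target it directly. A small standalone version (with made-up pile data, using plain chars rather than the Crate wrapper):

```rust
// Builds the answer string from the top crate of each pile.
fn tops(piles: &[Vec<char>]) -> String {
    piles
        .iter()
        .map(|pile| *pile.last().unwrap())
        .collect() // String: FromIterator<char>
}

fn main() {
    let piles = vec![vec!['Z', 'N', 'C'], vec!['M'], vec!['P', 'D']];
    assert_eq!(tops(&piles), "CMD");
}
```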