Thanks to my sponsors: Scott Steele, Braidon Whatley, callym, Matěj Volf, you got maiL, Jonathan Adams, ofrighil, Jack Duvall, Matt Jackson, Dylan Anthony, Christoph Grabo, Herman J. Radtke III, Miguel Piedrafita, Borys Minaiev, Radu Matei, Sean Bryant, Corey Alexander, C J Silverio, Horváth-Lázár Péter, Ivo Murrell and 230 more
State of the fasterthanlime 2024
It's time for some personal and professional news!
TL;DR: I started a podcast with James, I'm stable on antidepressants, I'm giving a P99 CONF talk about my Rust/io_uring/HTTP work, I'm trying on "they/them" as pronouns, I'm open-sourcing merde_json, rubicon and others, I got a divorce in 2023, I found a new business model.
Now that we're on the same page: let's unpack this a bit!
A podcast (with James!)
This is the only time-sensitive thing, the one I'm supposed to promote today because, woo, big launch! Go check out the website, on which you'll find links to Spotify, Apple Podcasts, or simply an RSS feed.
The first episode is about how I was wrong about Rust build times: remember when, among other things, I said you should break everything down into small crates? (in Why is my Rust build so slow?).
Well, that didn't work out. Some other advice from this article is still current, there's been some noteworthy news re: linkers, and rustc backends, and on the podcast we get a chance to review all that!
I've stayed away from podcasts most of my life because I didn't see the point, and because starting a podcast is a meme at this point, especially for white people in their 30s.
Then I binged Kill James Bond, and then Well There's Your Problem, and agreed that podcasts can be good — they pair especially well with manual labor if your brain doesn't deal well with being too idle.
I started thinking about potential formats: didn't want to just "show up and try to be funny", didn't want to just "do the news", didn't want to try to be yet another "Rust podcast" (the ones we already have are plenty good!), or have on the exact same guests everyone's already had.
That's why I was really excited when James approached me to start one.
He explained that he wanted a place to talk about concrete work we're both doing in the open-source software / Rust / embedded hardware space: to not only bounce ideas off of each other, but to hype it up to others, gathering eyeballs and perhaps, in time, funding.
For example, James has been building the postcard cinematic universe, with an emphasis on RPC lately, in the service of some bigger projects I'm not going to "spoil" here.
On my side, I've been working on an HTTP/1+2 implementation in Rust that leverages io_uring and kTLS, named fluke — work that was sponsored by fly.io, then Shopify.
I've also open-sourced a lightweight alternative to serde_json called merde_json, a library to enable a particular dynamic linking pattern in Rust called rubicon, and I'm maintaining what I think is the best Rust zip implementation, rc-zip (because it's sans-io).
(Check out Misia's other work if you like skulls! She's extremely talented.)
I've also moved off of fly.io and onto my own infrastructure over the past 18 months (building a CDN with kubernetes is surprisingly nice, actually — not bad for 85EUR/month) — and there are so many other things to write up.
I'm sitting on a mountain of drafts and side projects, and the Self-Directed Research Podcast drives me, every week, to pick one of them up and get them to a place where I can talk about them for 20 minutes.
It's like journaling, but instead of 750 words looking back at me, it's James going on a tangent about how AppleTalk was way ahead of its time.
And when I'm not the host (relaxing!), I get free lessons in embedded stuff! Interviewing someone competent and passionate about their area of expertise is a fantasy of mine, and I get to indulge a little bit when James is presenting.
I've already felt the positive effects of doing this podcast personally: every week I need to finish a thing (it's like a permanent game jam!), and that's great practice for me, because, well, gestures at non-existent publishing schedule for YouTube and articles.
I mean, there's other reasons for the publishing gaps, but yeah.
So yeah! I'm excited. This podcast would be a good thing for us both and the wider community even if nobody listened to it.
But, you know. Feel free to eavesdrop (over here).
👋 Also: we're actively looking for corporate sponsors — shoot me an e-mail if you'd like us to do a 30-second ad read for your company/product. If it's a good fit, I'll happily send you a media kit and make it happen!
All the personal news
I don't want to spend too long on any of these, but since I did post about it on social media a tiny bit, just for the record: I got divorced last year.
A friendly one, as far as divorces go! We went for drinks after signing and posted a "just divorced" selfie.
I'm super proud of my ex-wife and me for being super reasonable and kind throughout the whole thing, even though it hurt like a motherfucker because neither of us was pretending to have feelings.
While this was happening, due to significant rippling life changes (new place, living alone for the first time in forever, financial stress), I started being genuinely worried about my mental health.
Up until then, I had been able to "manage" my depression (in a more or less healthy way), and felt compelled to stick around, even if it wasn't "for me". This stopped working.
With a bit of encouragement, I ended up seeking help, got on anti-depressants (Escitalopram 10mg, for the data-oriented), and after a rocky start, things got significantly easier.
In a "fuck, how many years have I wasted" kinda way.
I thought I was broken! I could've been happy, emotionally stable, full of energy and productive all this time?
Grief set in: it took some time to accept all this.
It's now eight months later, and as I'm trying to slowly lower the dosage, I can feel that it's a challenge to be all those things without medication.
But I know it's there if I need it. I know there's something that works for me, and that makes everything stop feeling so damn hard when it seems so easy for "everyone else".
With a lot more time and space for myself over the past year, I've had the chance to think about who I am!
After years of not feeling particularly attached to the "man" / "male" / "masculine" descriptors (and not feeling much kinship with "men" in general), I've decided to start using "they/them" pronouns.
I love a good convention, so you can query my pronouns over DNS:
```
lith on main via 🦀 v1.80.0
❯ doggo pronouns.fasterthanli.me TXT
NAME                       TYPE  CLASS  TTL     ADDRESS      NAMESERVER
pronouns.fasterthanli.me.  TXT   IN     21600s  "they/them"  100.100.100.100:53
```
It's not a huge deal for me — "he/him" felt wrong, so I'm switching to something more neutral! I don't care much about gender (mine at least): the bigger deal is that I feel comfortable saying that in public.
(I'm not sure what to do about other languages: I don't like "iel" in French, so I guess I'll still be using "il/lui" for the time being)
If you refer to me as "him" out of habit, nobody will spontaneously combust. Just do your best! If you catch yourself using the wrong pronoun, correct yourself quickly and move on. As long as you're not persistently using the wrong pronoun on purpose to make some sort of point, you're golden. Thanks!
We're so back
Personal life and "permanent nerd-snipe state" aside, one reason I've been holding off on publishing more things is because I was unhappy with the state of my website — steadily moving to the lower-right corner of this graph:
This blog went through jekyll, nanoc, hugo, zola, and finally its own Rust codebase in 2019, when it was still rather inadvisable to build something like that in Rust. I mean, we already had some form of async/await, but a lot of frameworks were still uncooked.
I kept adding things to the blog over the years: a pluggable sponsor backend (adding GitHub Sponsors to hedge against Patreon imploding — they did fire their entire security team a while back, and their API is still on life support), an ever-improving image pipeline, video encoding, etc.
(That's right, I cloned the YouTube player for fun, serving H.264, VP9 and AV1, up to 4K@60 — I ended up delisting it, but I plan on bringing it back!)
And you know what happened?
What everyone complains about re: big Rust projects. The compile times skyrocketed. And link times, too. Even in debug mode, even with debug = 1, even with all LTO disabled, even with incremental builds, and, yes, even with a codebase split into different crates: as soon as you pull in certain crates, it's game over.
I've been hard at work fixing this over the past... a lot of months, and I have SO MANY tales to tell.
I went from building Docker images with a plain old Dockerfile to using nix, to now using Earthly.
I tried out Hashicorp Nomad and resolved to learn Kubernetes (through k3s), along with helm. I waded through Artifact Hub to find a proper Postgres controller, and some solution to back up my local volumes to object storage.
That's right: while some were betting on whether I was secretly a beautiful trans woman, the truth was much more terrible still: I was a devops all along!
I learned Terraform, I let it do what it's good at, and convinced myself to just use Ansible for the rest, fighting off the urge to rewrite it in Rust!
The trafficDistribution field (alpha as of Kubernetes 1.30) was the last piece of the puzzle when it came to actually having my setup behave like a proper CDN: Route53 routes traffic to a geographically-close edge node, and that edge node proxies traffic to a topologically-close service — hopefully local, unless it's unhealthy/overloaded, in which case it goes to a neighbor instead!
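For the curious, here's roughly what that looks like on a Service manifest — a minimal sketch with made-up names (the trafficDistribution field and its PreferClose value are real as of Kubernetes 1.30, the rest is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edge-proxy   # hypothetical name
spec:
  selector:
    app: edge-proxy
  ports:
    - port: 80
      targetPort: 8080
  # Prefer endpoints topologically close to the client (e.g. same
  # zone); traffic spills over to other endpoints if none are ready.
  trafficDistribution: PreferClose
```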
I have zero-downtime deployments now (no automatic rollbacks though — I just scp manifest files to a directory k3s is watching), and thanks to some other changes I made, I'm able to ship a code change around the world in 58 seconds, and that includes:
- Building the changes in CI
- Packaging it as a container image and pushing it
- Pushing the updated k8s manifest
- Having all nodes pull the updated image
- Creating new containers, waiting for them to be healthy
- Destroying the old containers
Just like in my wildest dreams, I have a setup that most companies would be jealous of.
(Of course you can deploy changes in 2 seconds if you're deploying Lua, PHP, or most other interpreted languages. Not JavaScript though — despite competition in the package manager and bundler field, every time I pick one up I want to bang my head repeatedly on something hard.)
And the latency is... not bad, for 85EUR a month! And most of that is the dedicated server that runs the control plane (and is standing by for video encodes).
85EUR is a lot more than the "things are still cheap" tier of any serverless offering, but it's also a hard cap on how much I'll pay every month. It's my compromise between "shovel money to $serverless" and "run everything off of one small VPS", and it works great, provided you invest in your ops significantly.
I've completely redone the site layout as well: it's using a system font stack (looks great on macOS, rip Linux users, sorry for that) so it's lighter on the network: the only webfont left is Berkeley Mono, as a variable font, for code and stuff, which weighs about 32KB.
(I lied, my patched Iosevka font is still getting pulled in at the moment because it has NerdFonts icons which I used in some places).
I can do colors in the terminal now!
```
lith on main via 🦀 v1.80.0
❯ cargo check
    Checking lith v4.0.0 (/Users/amos/bearcove/lith/app)
error[E0425]: cannot find value `blah` in this scope
  --> app/src/main.rs:23:13
   |
23 |     let a = blah;
   |             ^^^^ not found in this scope

For more information about this error, try `rustc --explain E0425`.
error: could not compile `lith` (bin "lith") due to 1 previous error
```
I went back to draw.io for diagrams because I found a way to render them to SVG that doesn't involve running Chrome headless: just Node.js.
The resulting SVGs only really play well with browsers (and macOS's Preview app, bless its soul) but that's where they're meant to be seen anyway!
In that new pipeline, text is finally selectable in SVGs: they use @font-face to pull the fonts from my website instead of having every bit of text converted to a path. I also wrote a custom minifier (svgo broke those files and svgcleaner is abandoned).
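To sketch the idea — the font URL below is made up, the real one lives on my CDN — an SVG can carry a @font-face rule in an embedded stylesheet, so the browser renders `<text>` elements (selectable, copy-pastable) instead of pre-outlined paths:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 40">
  <style>
    /* Hypothetical font URL: load the webfont rather than
       converting every glyph to a path at export time */
    @font-face {
      font-family: "Berkeley Mono";
      src: url("https://cdn.example.com/berkeley-mono.woff2") format("woff2");
    }
    text { font-family: "Berkeley Mono", monospace; font-size: 14px; }
  </style>
  <!-- Real, selectable text -->
  <text x="10" y="25">fn main() {}</text>
</svg>
```

This is also why the files only play well with renderers that speak CSS, like browsers: anything that ignores the stylesheet falls back to whatever monospace font it has.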
All the asset processing happens on-demand now: I'm no longer committing png+jpg+webp+avif to my website's content repository. It's all served from https://cdn.fasterthanli.me (a cookie-less subdomain), transcoded from JPEG-XL to AVIF or WebP.
(Transcoded images are cached in-memory, on disk, and to S3 storage)
Thankfully there's no need to serve JPEG or PNG in the year 2024 (unless you truly need lossless compression, but I don't), and as a nice surprise, Safari will use JPEG-XL if it's there, so I serve it too!
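The server-side decision boils down to plain content negotiation on the Accept header. Here's a minimal sketch of that logic — the function name, the substring matching, and the priority order are my assumptions, not the site's actual code (a real parser would also honor q-values):

```rust
/// Pick an output image format based on the browser's Accept header.
/// Hypothetical helper: prefer JPEG-XL, then AVIF, then WebP.
fn pick_format(accept: &str) -> &'static str {
    // Naive substring check; enough to illustrate the idea.
    for candidate in ["image/jxl", "image/avif", "image/webp"] {
        if accept.contains(candidate) {
            // Strip the "image/" prefix to get the short format name.
            return &candidate["image/".len()..];
        }
    }
    // Default when the browser advertises none of the above.
    "webp"
}

fn main() {
    // A Safari-style header advertising JPEG-XL gets JPEG-XL.
    assert_eq!(pick_format("image/jxl,image/avif,image/webp,*/*"), "jxl");
    // A browser that only advertises AVIF and WebP gets AVIF.
    assert_eq!(pick_format("image/avif,image/webp,*/*;q=0.8"), "avif");
    println!("ok");
}
```

The transcoded result is then cached (in-memory, on disk, and in S3), so the negotiation cost is only paid on a cache miss.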
Wouldn't it be fun if I was bragging about all of this and my site immediately collapsed under moderate load, like it did in the past? (see I won free load testing)
It would. But I wouldn't lose sleep over it — even if all my stuff goes down because I pissed off the wrong script kiddie, you can still find it on YouTube, various mirrors, and rogue translations of it to Japanese. This website is best effort, and there are still too many pages that do SQL queries when they shouldn't.
I'm just happy my site is in a state where I can deploy it again and it takes 3 seconds from when I hit Cmd+S to when I see the changes in the browser — all thanks to... dynamic linking!
(Again, check out Misia's work — it's usually... not rivers!)
My project rubicon enables a certain form of dynamic linking with Rust. It enables crates to export their thread-locals and "process-locals" (statics) so that other copies of the same code, in separate shared objects, can import it.
Among other things, this makes crates like tokio, parking_lot, eyre, and tracing work across shared objects, even though there's "wasteful" code duplication.
The result is a project that doesn't take half a minute to link, no matter how many weird modules I add: I choose exactly where the dynamic boundary lies. I've done so much "trait object safety" work that I've developed a special kind of hatred for the current Rust limitations, but... it works.
And there's a ton of layer re-use across container images, which keeps deploys nice and fast!
I plan to write in-depth about all of this. But I've been thinking long and hard about financial stability.
And that's how I came up with my new business model!
A new business model
Well I'm not strictly speaking a business — I mean, I do contract work now and then (RowZero paid me to improve rc-zip and that was good!), but my main focus is on content creation: making articles and videos, and I'd like it to stay that way as much as possible.
I want knowledge to be freely accessible, including the content I make: my old model, of making some articles available in "early access" to sponsors and patrons, was good!
But a two-week exclusivity window is... not long.
I simply do not have anything "sizable enough to warrant signing up for the 10EUR/month tier" every two weeks.
So, here's my new model.
I will make two kinds of content:
- Articles: free for everyone from day one
- Features: free in video form, sponsor-exclusive as text, for 6 months
Sometimes there's something time-sensitive! A new crate just came out, some Rust feature finally stabilized, sometimes I want to write something that can end up on some news aggregator so that people know what's going on — that's regular articles.
For features, I put a bit more work in: they can be 60-minute-plus reads, and I often want to play on the visual side. But I also understand that anyone seriously trying to learn from them wants to scroll through at their own rhythm, copy and paste code, even get access to entire Git repos with working examples.
That's a lot of work for me. So I'm excited about this new model, where:
- I like making videos — it's a medium I enjoy. Even though it takes time, I'm getting better at it, finding faster workflows, etc.
- Videos "pay for themselves" through YouTube ads, YouTube premium, sponsored segments, etc. — I don't even pay for hosting, it makes sense to have them free
- The text format is something that makes a ton of sense, but it's extra work, and so, I want to present it as a premium feature, for "paying customers"
...and everyone still eventually gets everything for free with a 6-month delay: enough that hopefully, at any given time, there's a solid backlog of features that could motivate someone to throw me 10EUR, even if just to binge everything in one month.
I still have some things to prepare on the infrastructure side to make everything happen, so I wouldn't blame you if you waited for the first feature to drop to support me, but: your support counts now more than ever.
A huge shout-out to all my current and past sponsors, you can check them out on Patreon and GitHub Sponsors.
Corporate sponsorships
Ever since I became fully independent, my work has been supported by corporate sponsors: at first fly.io, and then Shopify.
(I'm lucky to be able to present the result of that work, fluke, at P99 CONF later this year.)
This has allowed me to take the time to get my shit together, both personally and professionally.
I'm thankful to my corporate sponsors for this, and I am actively looking for further sponsorships (along with renewing past ones): they have provided me with extra security that let me experiment with projects like rubicon, which in my opinion will, in time, completely transform the Rust development experience for a lot of people.
(Of course, my long-term goal is to make rubicon completely unnecessary, and to support this workflow at the cargo/rustc level, which requires more work, requiring, in turn, you guessed it: more funding.)
That's it!
And that's all the news I have for you today.
Go enjoy the podcast now!