Generating a docker image with nix
There it is. The final installment.
Over the course of this series, we've built a very useful Rust web service that shows us colored ASCII art cats, packaged it with docker, and deployed it to https://fly.io.
We did all that without using nix at all, and then in the last few chapters, we've learned to use nix, and now it's time to tell docker build goodbye, along with this whole-ass Dockerfile:
# syntax = docker/dockerfile:1.4
################################################################################
FROM ubuntu:20.04 AS base
################################################################################
FROM base AS builder
# Install compile-time dependencies
RUN set -eux; \
apt update; \
apt install -y --no-install-recommends \
# 👇 new!
openssh-client git-core curl ca-certificates gcc libc6-dev pkg-config libssl-dev \
;
# Install rustup
RUN set -eux; \
curl --location --fail \
"https://static.rust-lang.org/rustup/dist/x86_64-unknown-linux-gnu/rustup-init" \
--output rustup-init; \
chmod +x rustup-init; \
./rustup-init -y --no-modify-path --default-toolchain stable; \
rm rustup-init;
# Add rustup to path, check that it works
ENV PATH=${PATH}:/root/.cargo/bin
RUN set -eux; \
rustup --version;
# Copy sources and build them
WORKDIR /app
COPY src src
COPY .cargo .cargo
COPY Cargo.toml Cargo.lock rust-toolchain.toml ./
RUN mkdir -p ~/.ssh/ && ssh-keyscan ssh.shipyard.rs >> ~/.ssh/known_hosts
RUN --mount=type=cache,target=/root/.rustup \
--mount=type=cache,target=/root/.cargo/registry \
--mount=type=cache,target=/root/.cargo/git \
--mount=type=cache,target=/app/target \
--mount=type=ssh \
# 👇 new!
--mount=type=secret,id=shipyard-token \
set -eux; \
export CARGO_REGISTRIES_CATSCII_TOKEN=$(cat /run/secrets/shipyard-token); \
rustc --version; \
cargo build --release; \
objcopy --compress-debug-sections ./target/release/catscii ./catscii
################################################################################
FROM base AS app
SHELL ["/bin/bash", "-c"]
# Install run-time dependencies, remove extra APT files afterwards.
# This must be done in the same `RUN` command, otherwise it doesn't help
# to reduce the image size.
RUN set -eux; \
apt update; \
apt install -y --no-install-recommends \
ca-certificates \
; \
apt clean autoclean; \
apt autoremove --yes; \
# Note: 👇 this only works because of the `SHELL` instruction above.
rm -rf /var/lib/{apt,dpkg,cache,log}/
# Copy app from builder
WORKDIR /app
COPY --from=builder /app/catscii .
# Copy Geolite2 database
RUN mkdir /db
COPY ./db/GeoLite2-Country.mmdb /db/
CMD ["/app/catscii"]
So, let's begin with:
$ git rm --force Dockerfile
rm 'Dockerfile'
$ git commit --message "Ahhhh."
[main ab28bcd] Ahhhh.
1 file changed, 76 deletions(-)
delete mode 100644 Dockerfile
And let out a sigh of relief.
Building catscii with nix build
Before we can build a Docker image, we need to be able to build an executable.
Remember when we learned nix, how we were able to do nix build? Well, it's time to do that for catscii.
But luckily, we've set up everything we need, so it'll be painless.
We'll use the crane nix library to build it.
The plan is to:
- add it as an input
- call one of its functions
- assign that to something under packages in our flake output
Good? Good.
Let's start by adding it:
# in `flake.nix`
{
inputs = {
crane = {
url = "github:ipetkov/crane";
};
# omitted: other inputs
};
# new: `crane` is in that list now 👇
outputs = { self, nixpkgs, flake-utils, rust-overlay, crane }:
# omitted: body of `outputs`
}
Before we go any further, vibe check:
$ nix flake metadata
warning: Git tree '/home/amos/catscii' is dirty
warning: updating lock file '/home/amos/catscii/flake.lock':
• Added input 'crane':
'github:ipetkov/crane/34633dd0d7ff2226627647cf44a5a197ca652c7d' (2023-02-19)
• Added input 'crane/flake-compat':
'github:edolstra/flake-compat/35bb57c0c8d8b62bbfd284272c928ceb64ddbde9' (2023-01-17)
• Added input 'crane/flake-utils':
'github:numtide/flake-utils/3db36a8b464d0c4532ba1c7dda728f4576d6d073' (2023-02-13)
• Added input 'crane/nixpkgs':
'github:NixOS/nixpkgs/6d33e5e14fd12f99ba621683ae90cebadda753ca' (2023-02-15)
• Added input 'crane/rust-overlay':
'github:oxalica/rust-overlay/a619538647bd03e3ee1d7b947f7c11ff289b376e' (2023-02-15)
• Added input 'crane/rust-overlay/flake-utils':
follows 'crane/flake-utils'
• Added input 'crane/rust-overlay/nixpkgs':
follows 'crane/nixpkgs'
warning: Git tree '/home/amos/catscii' is dirty
Resolved URL: git+file:///home/amos/catscii
Locked URL: git+file:///home/amos/catscii
Path: /nix/store/11p3avrd17fr78fl8qz05x1jjfn0dd6f-source
Last modified: 2023-02-19 19:19:54
Inputs:
├───crane: github:ipetkov/crane/34633dd0d7ff2226627647cf44a5a197ca652c7d
│ ├───flake-compat: github:edolstra/flake-compat/35bb57c0c8d8b62bbfd284272c928ceb64ddbde9
│ ├───flake-utils: github:numtide/flake-utils/3db36a8b464d0c4532ba1c7dda728f4576d6d073
│ ├───nixpkgs: github:NixOS/nixpkgs/6d33e5e14fd12f99ba621683ae90cebadda753ca
│ └───rust-overlay: github:oxalica/rust-overlay/a619538647bd03e3ee1d7b947f7c11ff289b376e
│ ├───flake-utils follows input 'crane/flake-utils'
│ └───nixpkgs follows input 'crane/nixpkgs'
├───flake-utils: github:numtide/flake-utils/3db36a8b464d0c4532ba1c7dda728f4576d6d073
├───nixpkgs: github:NixOS/nixpkgs/28319deb5ab05458d9cd5c7d99e1a24ec2e8fc4b
└───rust-overlay: github:oxalica/rust-overlay/3bab7ae4a80de02377005d611dc4b0a13082aa7c
├───flake-utils follows input 'flake-utils'
└───nixpkgs follows input 'nixpkgs'
Okay! crane depends on nixpkgs, rust-overlay, and flake-utils, which we already have, so let's deduplicate those:
# in inputs
crane = {
url = "github:ipetkov/crane";
inputs = {
nixpkgs.follows = "nixpkgs";
rust-overlay.follows = "rust-overlay";
flake-utils.follows = "flake-utils";
};
};
$ nix flake metadata
warning: Git tree '/home/amos/catscii' is dirty
Resolved URL: git+file:///home/amos/catscii
Locked URL: git+file:///home/amos/catscii
Path: /nix/store/k5rvn7vp6pfhk291r55r8g2ga7w7546w-source
Last modified: 2023-02-19 19:19:54
Inputs:
├───crane: github:ipetkov/crane/34633dd0d7ff2226627647cf44a5a197ca652c7d
│ ├───flake-compat: github:edolstra/flake-compat/35bb57c0c8d8b62bbfd284272c928ceb64ddbde9
│ ├───flake-utils follows input 'flake-utils'
│ ├───nixpkgs follows input 'nixpkgs'
│ └───rust-overlay follows input 'rust-overlay'
├───flake-utils: github:numtide/flake-utils/3db36a8b464d0c4532ba1c7dda728f4576d6d073
├───nixpkgs: github:NixOS/nixpkgs/28319deb5ab05458d9cd5c7d99e1a24ec2e8fc4b
└───rust-overlay: github:oxalica/rust-overlay/3bab7ae4a80de02377005d611dc4b0a13082aa7c
├───flake-utils follows input 'flake-utils'
└───nixpkgs follows input 'nixpkgs'
Alright! That's better.
Let's keep going:
# in flake.nix
{
inputs = {
# cut
};
outputs = { self, nixpkgs, flake-utils, rust-overlay, crane }:
flake-utils.lib.eachDefaultSystem
(system:
let
overlays = [ (import rust-overlay) ];
pkgs = import nixpkgs {
inherit system overlays;
};
rustToolchain = pkgs.pkgsBuildHost.rust-bin.fromRustupToolchainFile ./rust-toolchain.toml;
# this is how we can tell crane to use our toolchain!
craneLib = (crane.mkLib pkgs).overrideToolchain rustToolchain;
# cf. https://crane.dev/API.html#libcleancargosource
src = craneLib.cleanCargoSource ./.;
# as before
nativeBuildInputs = with pkgs; [ rustToolchain pkg-config ];
buildInputs = with pkgs; [ openssl sqlite ];
# because we'll use it for both `cargoArtifacts` and `bin`
commonArgs = {
inherit src buildInputs nativeBuildInputs;
};
cargoArtifacts = craneLib.buildDepsOnly commonArgs;
# remember, `set1 // set2` does a shallow merge:
bin = craneLib.buildPackage (commonArgs // {
inherit cargoArtifacts;
});
in
with pkgs;
{
packages =
{
# that way we can build `bin` specifically,
# but it's also the default.
inherit bin;
default = bin;
};
devShells.default = mkShell {
# instead of passing `buildInputs` / `nativeBuildInputs`,
# we refer to an existing derivation here
inputsFrom = [ bin ];
};
}
);
}
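Quick refresher on that // operator, by the way: it merges two attribute sets, with the right-hand side winning whenever a key appears in both (and it doesn't recurse into nested sets, hence "shallow"). For example:
# `//` is Nix's attribute set update operator:
{ a = 1; b = 2; } // { b = 3; c = 4; }
# evaluates to: { a = 1; b = 3; c = 4; }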
And that's it! We're good to go!
$ nix build
warning: Git tree '/home/amos/catscii' is dirty
error: not sure how to download crates from registry+ssh://git@ssh.shipyard.rs/catscii/crate-index.git.
for example, this can be resolved with:
craneLib = crane.lib.${system}.appendCrateRegistries [
(lib.registryFromDownloadUrl {
dl = "https://crates.io/api/v1/crates";
indexUrl = "https://github.com/rust-lang/crates.io-index";
})
# Or, alternatively
(lib.registryFromGitIndex {
url = "https://github.com/Hirevo/alexandrie-index";
rev = "90df25daf291d402d1ded8c32c23d5e1498c6725";
})
];
# Then use the new craneLib instance as you would normally
craneLib.buildPackage {
# ...
}
(use '--show-trace' to show detailed location information)
Oh, um. Almost. We do have a private registry in there, which I did on purpose, specifically to show how we could solve that problem with Docker and with crane.
Because we already have a secrets/shipyard-token file (protected by git-crypt), it's actually a fairly easy situation to get out of. Crane even has a whole docs page about it.
Replace the craneLib = line with these:
craneLibWithoutRegistry = (crane.mkLib pkgs).overrideToolchain rustToolchain;
shipyardToken = builtins.readFile ./secrets/shipyard-token;
craneLib = craneLibWithoutRegistry.appendCrateRegistries [
(craneLibWithoutRegistry.registryFromDownloadUrl {
dl = "https://crates.shipyard.rs/api/v1/crates";
indexUrl = "ssh://git@ssh.shipyard.rs/catscii/crate-index.git";
fetchurlExtraArgs = {
curlOptsList = [ "-H" "user-agent: shipyard ${shipyardToken}" ];
};
})
];
And this time, we're actually good to go!
Cool bear's hot tip
If this seems like a terrible no-good workaround until some standard is decided and implemented everywhere, that's because it is!
You can check the tracking issue to see where the discussion is at by the time you actually read this.
Anyway, with that, nix build actually does work:
$ nix build
(cut)
And in result, we have our binary!
$ ls --long --header --almost-all result
lrwxrwxrwx 1 amos amos 57 Feb 19 19:47 result -> /nix/store/dvfh98mvqish3jsih29rs3r45qn3yfpf-catscii-0.1.0
$ tree -ah result
[ 57] result
└── [4.0K] bin
└── [8.7M] catscii
1 directory, 1 file
Which only depends on things provided by nix:
$ ldd ./result/bin/catscii
linux-vdso.so.1 (0x00007ffd31d2b000)
libssl.so.3 => /nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/lib/libssl.so.3 (0x00007f7fb1362000)
libcrypto.so.3 => /nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/lib/libcrypto.so.3 (0x00007f7fb0600000)
libsqlite3.so.0 => /nix/store/zqf9r5d9yv4ccv64ja8xjn8kasgqg3cy-sqlite-3.40.1/lib/libsqlite3.so.0 (0x00007f7fb0ab1000)
libgcc_s.so.1 => /nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224/lib/libgcc_s.so.1 (0x00007f7fb1348000)
libm.so.6 => /nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224/lib/libm.so.6 (0x00007f7fb0520000)
libc.so.6 => /nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224/lib/libc.so.6 (0x00007f7fb0200000)
libdl.so.2 => /nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224/lib/libdl.so.2 (0x00007f7fb1341000)
libpthread.so.0 => /nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224/lib/libpthread.so.0 (0x00007f7fb133c000)
libz.so.1 => /nix/store/9dz5lmff9ywas225g6cpn34s0wbldnxa-zlib-1.2.13/lib/libz.so.1 (0x00007f7fb131e000)
/nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224/lib/ld-linux-x86-64.so.2 => /nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224/lib64/ld-linux-x86-64.so.2 (0x00007f7fb1411000)
And runs fine!
$ ./result/bin/catscii
{"timestamp":"2023-02-19T18:49:06.782034Z","level":"INFO","fields":{"message":"Creating honey client","log.target":"libhoney::client","log.module_path":"libhoney::client","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/client.rs","log.line":78},"target":"libhoney::client"}
{"timestamp":"2023-02-19T18:49:06.782112Z","level":"INFO","fields":{"message":"transmission starting","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":124},"target":"libhoney::transmission"}
{"timestamp":"2023-02-19T18:49:06.812601Z","level":"INFO","fields":{"message":"Listening on 0.0.0.0:8080"},"target":"catscii"}
^C{"timestamp":"2023-02-19T18:49:11.819055Z","level":"WARN","fields":{"message":"Initiating graceful shutdown"},"target":"catscii"}
Generating a docker image
Here it is! The last thing we have to do.
nix comes with facilities to generate Docker images, which we should really call OCI images, without using Docker at all: those images are, after all, "just" tarballs with a well-known structure and some JSON manifests.
Let's first try to make a docker image with just our binary: we know it won't be enough, since we need to have the geo-ip database in there, but it's a start.
In our outputs' let block, next to bin, we can define a new key:
# in flake.nix
{
inputs = {
# cut
};
outputs = { self, nixpkgs, flake-utils, rust-overlay, crane }:
flake-utils.lib.eachDefaultSystem
(system:
let
# omitted: most keys
bin = craneLib.buildPackage (commonArgs // {
inherit cargoArtifacts;
});
# new! 👇
dockerImage = pkgs.dockerTools.buildImage {
name = "catscii";
tag = "latest";
copyToRoot = [ bin ];
config = {
Cmd = [ "${bin}/bin/catscii" ];
};
};
in
with pkgs;
{
packages =
{
# new package: 👇
inherit bin dockerImage;
default = bin;
};
devShells.default = mkShell {
inputsFrom = [ bin ];
};
}
);
}
Let's try building it and see what we got in there!
$ nix build .#dockerImage
warning: Git tree '/home/amos/catscii' is dirty
Huh, that was quick!
Yeah! The bin target was already built, and since everything is content-addressed and none of the inputs changed, it really did just generate the Docker image from that.
Let's see what the result file is now:
$ file result
result: symbolic link to /nix/store/c8g0an3rabk3i0zivrxcpnivkmy15sq5-docker-image-catscii.tar.gz
A symbolic link to a tarball. We can load this into our local docker registry with docker load:
$ docker load < result
Command 'docker' not found, but can be installed with:
sudo snap install docker # version 20.10.17, or
sudo apt install docker.io # version 20.10.16-0ubuntu1
sudo apt install podman-docker # version 3.4.4+ds1-1ubuntu1
See 'snap info docker' for additional versions.
...ah, we haven't installed docker yet! If you needed proof that nix truly can build Docker images without docker, I guess this is it.
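In fact, since the image really is just a tarball, we can peek inside it without any Docker tooling at all - listing it should show the layer tarball plus a manifest.json and the image config (output omitted here):
$ tar -tzvf result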
In a real-world scenario, I'd probably want to have docker installed locally to be able to run the image, but for now let's look at it another way: with the dive utility.
Let's add it to our dev shell:
devShells.default = mkShell
{
inputsFrom = [ bin ];
# new! 👇
buildInputs = with pkgs; [ dive ];
};
For some reason, pressing Enter didn't do the trick for me this time; I had to close the shell and open a new one.
Now, dive generally takes a name:tag reference, which it looks up in the local Docker registry. But we don't have Docker! It also supports the docker-archive:// protocol, but that requires .tar files, whereas we have a .tar.gz file.
So, let's gunzip it first:
$ gunzip --stdout result > /tmp/image.tar && dive docker-archive:///tmp/image.tar
Image Source: docker-archive:///tmp/image.tar
Fetching image... (this can take a while for large images)
Analyzing image...
Building cache...
And we're presented with a TUI (text-user interface), which we've actually already used in Part 6:
Wait... it's only 58MB?
Yes it is! Remember how we struggled to get it below 100MB when we wrote the Dockerfile by hand? Well, here's our first try, and it's almost half.
There's still a couple things missing from there, but nothing too big.
Note that we could've tried harder with the Dockerfile: the base Ubuntu system takes up some space, and we could've used a distroless image as a base - they even have an example for Rust!
But we would have to figure out how to ship sqlite, for example. I'm sure there's a way, I just... I'm just not sure why I'd bother, since nix lets us solve both the "build a binary that can run anywhere" problem and the "generate a small docker image while we're at it" problem.
Running our app locally with Docker
Before we deploy it to production, let's make sure it runs locally, which is one of the big selling points of Docker.
Digression: a dev shell docker?
Installing the latest Docker on Ubuntu was a bit of a hassle last time.
Can we just add docker to our dev shell and see if it works?
Out of curiosity, I tried that. First off, it's not a small download (half a gig). Secondly, when you install Docker via Ubuntu packages, it does a couple things, like set up a systemd service to start the docker daemon, and set up user groups.
Without that, we have to start the daemon ourselves, and it's more subtle than you'd think.
The daemon does need to be running, so docker info doesn't work out of the box:
$ docker info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.9.1)
compose: Docker Compose (Docker Inc., 2.15.1)
Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
Starting the docker daemon as an unprivileged user is possible in theory (it's called "rootless mode"), but it's not the default, so just running dockerd doesn't work:
$ dockerd
INFO[2023-02-21T14:12:51.135602597+01:00] Starting up
dockerd needs to be started with root privileges. To run dockerd in rootless mode as an unprivileged user, see https://docs.docker.com/go/rootless/
There's a dockerd-rootless executable in $PATH, but it ends up failing to find newuidmap.
$ dockerd-rootless
(cut)
+ exec rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /nix/store/g57cgil5aznzdnkimd8g3zq3avpfdlfv-moby-20.10.23/libexec/docker/dockerd-rootless.sh
[rootlesskit:parent] error: failed to setup UID/GID map: newuidmap 73837 [0 1000 1 1 100000 65536] failed: : exec: "newuidmap": executable file not found in $PATH
That executable is in nixpkgs#shadow, but adding it doesn't work any better:
$ dockerd-rootless
(cut)
+ exec rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /nix/store/g57cgil5aznzdnkimd8g3zq3avpfdlfv-moby-20.10.23/libexec/docker/dockerd-rootless.sh
[rootlesskit:parent] error: failed to setup UID/GID map: newuidmap 86183 [0 1000 1 1 100000 65536] failed: newuidmap: write to uid_map failed: Operation not permitted
I suppose this all works swimmingly on NixOS, but it's a bit more delicate intruding into another distribution like that.
Anyway, we can run dockerd as root:
$ sudo dockerd
sudo: dockerd: command not found
Well, not like that - sudo sanitizes the environment, at least on Ubuntu. You can configure that, but let's not:
$ sudo $(which dockerd)
(cut: lots of INFO lines)
INFO[2023-02-21T14:20:30.740944094+01:00] API listen on /var/run/docker.sock
Now, the docker daemon is running, but a non-root user cannot talk to it:
$ docker info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.9.1)
compose: Docker Compose (Docker Inc., 2.15.1)
Server:
ERROR: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info": dial unix /var/run/docker.sock: connect: permission denied
As root though, we can!
$ sudo $(which docker) info
/nix/store/561wgc73s0x1250hrgp7jm22hhv7yfln-bash-5.2-p15/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.9.1)
compose: Docker Compose (Docker Inc., 2.15.1)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.23
(cut: tons of other info)
Violently chown-ing /var/run/docker.sock is an option, although not a great one:
$ sudo chown --recursive amos:amos /var/run/docker.sock
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:6e8b6f026e0b9c419ea0fd02d3905dd0952ad1feea67543f525c73a0a790fefb
Status: Downloaded newer image for hello-world:latest
/nix/store/561wgc73s0x1250hrgp7jm22hhv7yfln-bash-5.2-p15/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Really, you're probably better off installing Docker from Ubuntu packages. But if you really wanted to have docker in your dev shell, this is how you could do it.
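For reference, the Ubuntu-packages route boils down to something like this (a sketch - there's also Docker's own apt repository, and you'd need to log out and back in for the group change to take effect):
$ sudo apt install docker.io
$ sudo usermod -aG docker $USER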
Loading the image into the local docker registry
So, as promised, we can load our built image into the local docker registry with docker load:
$ docker load < result
77128d0a21af: Loading layer [==================================================>] 59.17MB/59.17MB
Loaded image: catscii:latest
And then we can try to run it!
$ docker run --rm --publish 8080:8080 catscii:latest
/nix/store/561wgc73s0x1250hrgp7jm22hhv7yfln-bash-5.2-p15/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
thread 'main' panicked at '$SENTRY_DSN must be set: NotPresent', src/main.rs:40:37
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Ah yes! We must set a few environment variables.
If we had to do this a lot, a neat way to do it would be to:
- Split those environment variables from .envrc into .env
- Use the dotenv instruction to load the .env from the .envrc
- Pass --env-file to docker run, or set up a docker-compose file (as sketched below)
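Here's a minimal sketch of what that could look like - the values are placeholders, and you'd want to keep .env out of git (or protect it with git-crypt, like our other secrets):
# .env - one VAR=value per line
SENTRY_DSN=...
HONEYCOMB_API_KEY=...
ANALYTICS_DB=...
# .envrc - direnv's stdlib has a dotenv function that loads it
dotenv
# and then, from a shell:
$ docker run --rm --env-file .env --publish 8080:8080 catscii:latest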
But for now, let's just use --env (-e) to pass a few environment variables by hand:
$ docker run --rm --env SENTRY_DSN --env HONEYCOMB_API_KEY --env GEOLITE2_COUNTRY_DB --env ANALYTICS_DB --publish 8080:8080 catscii:latest
/nix/store/561wgc73s0x1250hrgp7jm22hhv7yfln-bash-5.2-p15/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
{"timestamp":"2023-02-21T13:31:04.480284Z","level":"INFO","fields":{"message":"Creating honey client","log.target":"libhoney::client","log.module_path":"libhoney::client","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/client.rs","log.line":78},"target":"libhoney::client"}
{"timestamp":"2023-02-21T13:31:04.480386Z","level":"INFO","fields":{"message":"transmission starting","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":124},"target":"libhoney::transmission"}
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: MaxMindDb(IoError("No such file or directory (os error 2)"))', src/main.rs:59:75
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Ah, containerization in action: it can't read the geoip country database because... it's stuck in its little container, and db/GeoLite2-Country.mmdb is outside of it.
We need a tiny bit more nix to fix this.
Shipping the geoip database
So, we already have the geoip database locally - we could just copy it into the image from the nix flake.
Mhh before we do that though - surely someone has already packaged that database into nixpkgs, right?
Ehhh I don't know if the license terms permit th- oh would you look at that, there it is, as clash-geoip.
Well, I suppose we can try!
dockerImage = pkgs.dockerTools.buildImage {
name = "catscii";
tag = "latest";
copyToRoot = [ bin ];
config = {
Cmd = [ "${bin}/bin/catscii" ];
# new 👇
Env = with pkgs; [ "GEOLITE2_COUNTRY_DB=${clash-geoip}/etc/clash/Country.mmdb" ];
};
};
If the casing of Cmd and Env seems weirdly out of place, it's because they map to the JSON format expected by the Docker Engine API, and Docker is written in Go, where public fields are UpperCamelCase.
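Concretely, the image's config JSON ends up with something roughly like this in it (store paths elided):
{
  "config": {
    "Cmd": [ "/nix/store/...-catscii-0.1.0/bin/catscii" ],
    "Env": [ "GEOLITE2_COUNTRY_DB=/nix/store/...-clash-geoip-.../etc/clash/Country.mmdb" ]
  }
}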
Nice bit of trivia there but, wait, we can refer to any package? Just like that?
I'm doing what instinctively feels right!
Let's see if it actually works:
$ nix build .#dockerImage
warning: Git tree '/home/amos/catscii' is dirty
error: Package ‘clash-geoip-20230112’ in /nix/store/d5hx18py56yr4r1qzsx83q3idwxn6b19-source/pkgs/data/misc/clash-geoip/default.nix:26 has an unfree license (‘unfree’), refusing to evaluate.
a) To temporarily allow unfree packages, you can use an environment variable
for a single invocation of the nix tools.
$ export NIXPKGS_ALLOW_UNFREE=1
Note: For `nix shell`, `nix build`, `nix develop` or any other Nix 2.4+
(Flake) command, `--impure` must be passed in order to read this
environment variable.
b) For `nixos-rebuild` you can set
{ nixpkgs.config.allowUnfree = true; }
in configuration.nix to override this.
Alternatively you can configure a predicate to allow specific packages:
{ nixpkgs.config.allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [
"clash-geoip"
];
}
c) For `nix-env`, `nix-build`, `nix-shell` or any other Nix command you can add
{ allowUnfree = true; }
to ~/.config/nixpkgs/config.nix.
(use '--show-trace' to show detailed location information)
Ah! Right. It has an unfree license.
Exporting the environment variable and using --impure seems... icky. Alternative "b)" seems to only apply to NixOS, not nixpkgs, and I don't love the idea of changing the global config either... maybe we can fix it in the flake itself?
pkgs = import nixpkgs {
inherit system overlays;
# 👇 yolo
config.allowUnfree = true;
};
$ nix build .#dockerImage
warning: Git tree '/home/amos/catscii' is dirty
Hey, it built!
$ docker load < result
098ab327aa12: Loading layer [==================================================>] 64.83MB/64.83MB
The image catscii:latest already exists, renaming the old one with ID sha256:b78e239c264a2fdf4d6fb63b827abdc2515ae5f4ef3825b7f623bb6ee9cf3b5a to empty string
Loaded image: catscii:latest
Hey, it loaded!
# IMPORTANT NOTE: we're no longer exporting `GEOLITE2_COUNTRY_DB` - it's baked
# into the Docker image
$ docker run --rm --env SENTRY_DSN --env HONEYCOMB_API_KEY --env ANALYTICS_DB --publish 8080:8080 catscii:latest
/nix/store/561wgc73s0x1250hrgp7jm22hhv7yfln-bash-5.2-p15/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
{"timestamp":"2023-02-21T13:49:06.476209Z","level":"INFO","fields":{"message":"Creating honey client","log.target":"libhoney::client","log.module_path":"libhoney::client","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/client.rs","log.line":78},"target":"libhoney::client"}
{"timestamp":"2023-02-21T13:49:06.476284Z","level":"INFO","fields":{"message":"transmission starting","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":124},"target":"libhoney::transmission"}
{"timestamp":"2023-02-21T13:49:06.481455Z","level":"INFO","fields":{"message":"Listening on 0.0.0.0:8080"},"target":"catscii"}
Hey, it ran!
$ curl http://localhost:8080/; echo
Something went wrong
Hey, something went wrong!
Debugging what went wrong
The app is running in sort of a weird environment here: it's a release build, so sentry thinks it's "production", but it also has Sentry & Honeycomb keys for "development".
The codepath we're hitting is in root_get_inner, here:
// in `catscii/src/main.rs`
async fn root_get_inner(state: ServerState) -> Response<BoxBody> {
let tracer = global::tracer("");
match get_cat_ascii_art(&state.client)
.with_context(Context::current_with_span(
tracer.start("get_cat_ascii_art"),
))
.await
{
Ok(art) => (
StatusCode::OK,
[(header::CONTENT_TYPE, "text/html; charset=utf-8")],
art,
)
.into_response(),
Err(e) => {
get_active_span(|span| {
span.set_status(Status::Error {
description: format!("{e}").into(),
})
});
(StatusCode::INTERNAL_SERVER_ERROR, "Something went wrong").into_response()
}
}
}
We're not printing anything useful to stdout or stderr, but we are attaching the error to the active opentelemetry span.
So if we open Honeycomb, we should be able to see it.
And yet, I see nothing.
Running the service again with a little more logging enabled reveals the source of the problem:
$ docker run --rm --env RUST_LOG=libhoney::transmission=trace --env SENTRY_DSN --env HONEYCOMB_API_KEY --publish 8080:8080 catscii:latest
/nix/store/561wgc73s0x1250hrgp7jm22hhv7yfln-bash-5.2-p15/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
{"timestamp":"2023-02-21T13:56:45.083939Z","level":"INFO","fields":{"message":"transmission starting","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":124},"target":"libhoney::transmission"}
{"timestamp":"2023-02-21T13:56:50.244910Z","level":"TRACE","fields":{"message":"Processing data event for dataset `catscii`","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":286},"target":"libhoney::transmission"}
{"timestamp":"2023-02-21T13:56:50.244965Z","level":"TRACE","fields":{"message":"Processing data event for dataset `catscii`","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":286},"target":"libhoney::transmission"}
{"timestamp":"2023-02-21T13:56:50.244983Z","level":"TRACE","fields":{"message":"Processing data event for dataset `catscii`","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":286},"target":"libhoney::transmission"}
{"timestamp":"2023-02-21T13:56:50.345329Z","level":"TRACE","fields":{"message":"Timer expired with 3 event(s)","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":326},"target":"libhoney::transmission"}
{"timestamp":"2023-02-21T13:56:50.345552Z","level":"TRACE","fields":{"message":"Sending payload: [\n EventData {\n data: {\n \"service.name\": String(\"unknown_service\"),\n \"trace.span_id\": String(\"8dae7119ba0e603b\"),\n \"duration_ms\": Number(273),\n \"start_time\": String(\"2023-02-21T13:56:49.971Z\"),\n \"trace.trace_id\": String(\"4f0d211abaf6d3def49db1e8c68c19a1\"),\n \"trace.parent_id\": String(\"5181dfe0551a078b\"),\n \"name\": String(\"api_headers\"),\n \"span.kind\": String(\"internal\"),\n \"response.status_code\": Number(0),\n },\n time: 2023-02-21T13:56:49.971Z,\n samplerate: 1,\n },\n EventData {\n data: {\n \"service.name\": String(\"unknown_service\"),\n \"trace.span_id\": String(\"5181dfe0551a078b\"),\n \"duration_ms\": Number(273),\n \"start_time\": String(\"2023-02-21T13:56:49.971Z\"),\n \"trace.trace_id\": String(\"4f0d211abaf6d3def49db1e8c68c19a1\"),\n \"trace.parent_id\": String(\"c5402bd23db803af\"),\n \"name\": String(\"get_cat_ascii_art\"),\n \"span.kind\": String(\"internal\"),\n \"response.status_code\": Number(0),\n },\n time: 2023-02-21T13:56:49.971Z,\n samplerate: 1,\n },\n EventData {\n data: {\n \"service.name\": String(\"unknown_service\"),\n \"error\": Bool(true),\n \"trace.span_id\": String(\"c5402bd23db803af\"),\n \"duration_ms\": Number(273),\n \"start_time\": String(\"2023-02-21T13:56:49.971Z\"),\n \"trace.trace_id\": String(\"4f0d211abaf6d3def49db1e8c68c19a1\"),\n \"status.message\": String(\"error sending request for url (https://cataas.com/cat): error trying to connect: error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs), error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs), error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs), error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs), error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs), error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1889: (unable to get local issuer certificate)\"),\n \"user_agent\": String(\"curl/7.85.0\"),\n \"name\": String(\"root_get\"),\n \"span.kind\": String(\"internal\"),\n \"response.status_code\": Number(2),\n },\n time: 2023-02-21T13:56:49.971Z,\n samplerate: 1,\n 
},\n]","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":426},"target":"libhoney::transmission"}
(This happened while running curl http://localhost:8080 from another shell).
In case your brain doesn't have a built-in JSON.parse function yet, here's a human-friendlier version of this:
error sending request for url (https://cataas.com/cat)
error trying to connect
error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs)
error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs)
error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs)
error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs)
error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):providers/implementations/storemgmt/file_store.c:267:calling stat(/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs)
error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1889
(unable to get local issuer certificate)
Oh! It can't find the CA certificates to verify cataas.com's certificate!
Why yes, exactly!
When establishing a TLS connection, a lot of things happen (as wonderfully explained in The Illustrated TLS 1.3 Connection), but an important one is: the server presents a chain of certificates, starting with a certificate for the domain in question (here, cataas.com), and ending with something you trust.
At the time of this writing, the certificate chain for cataas.com goes:
- cataas.com
- R3
- ISRG Root X1
And how do we know we trust "ISRG Root X1", a CA ("Certificate Authority") from Let's Encrypt?
Well, we have a file on disk that says so!
❯ openssl x509 -in /etc/ssl/certs/ISRG_Root_X1.pem -nocert -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
82:10:cf:b0:d2:40:e3:59:44:63:e0:bb:63:82:8b:00
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = US, O = Internet Security Research Group, CN = ISRG Root X1
Validity
Not Before: Jun 4 11:04:38 2015 GMT
Not After : Jun 4 11:04:38 2035 GMT
Subject: C = US, O = Internet Security Research Group, CN = ISRG Root X1
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (4096 bit)
Modulus:
00:ad:e8:24:73:f4:14:37:f3:9b:9e:2b:57:28:1c:
87:be:dc:b7:df:38:90:8c:6e:3c:e6:57:a0:78:f7:
75:c2:a2:fe:f5:6a:6e:f6:00:4f:28:db:de:68:86:
6c:44:93:b6:b1:63:fd:14:12:6b:bf:1f:d2:ea:31:
9b:21:7e:d1:33:3c:ba:48:f5:dd:79:df:b3:b8:ff:
12:f1:21:9a:4b:c1:8a:86:71:69:4a:66:66:6c:8f:
7e:3c:70:bf:ad:29:22:06:f3:e4:c0:e6:80:ae:e2:
4b:8f:b7:99:7e:94:03:9f:d3:47:97:7c:99:48:23:
53:e8:38:ae:4f:0a:6f:83:2e:d1:49:57:8c:80:74:
b6:da:2f:d0:38:8d:7b:03:70:21:1b:75:f2:30:3c:
fa:8f:ae:dd:da:63:ab:eb:16:4f:c2:8e:11:4b:7e:
cf:0b:e8:ff:b5:77:2e:f4:b2:7b:4a:e0:4c:12:25:
0c:70:8d:03:29:a0:e1:53:24:ec:13:d9:ee:19:bf:
10:b3:4a:8c:3f:89:a3:61:51:de:ac:87:07:94:f4:
63:71:ec:2e:e2:6f:5b:98:81:e1:89:5c:34:79:6c:
76:ef:3b:90:62:79:e6:db:a4:9a:2f:26:c5:d0:10:
e1:0e:de:d9:10:8e:16:fb:b7:f7:a8:f7:c7:e5:02:
07:98:8f:36:08:95:e7:e2:37:96:0d:36:75:9e:fb:
0e:72:b1:1d:9b:bc:03:f9:49:05:d8:81:dd:05:b4:
2a:d6:41:e9:ac:01:76:95:0a:0f:d8:df:d5:bd:12:
1f:35:2f:28:17:6c:d2:98:c1:a8:09:64:77:6e:47:
37:ba:ce:ac:59:5e:68:9d:7f:72:d6:89:c5:06:41:
29:3e:59:3e:dd:26:f5:24:c9:11:a7:5a:a3:4c:40:
1f:46:a1:99:b5:a7:3a:51:6e:86:3b:9e:7d:72:a7:
12:05:78:59:ed:3e:51:78:15:0b:03:8f:8d:d0:2f:
05:b2:3e:7b:4a:1c:4b:73:05:12:fc:c6:ea:e0:50:
13:7c:43:93:74:b3:ca:74:e7:8e:1f:01:08:d0:30:
d4:5b:71:36:b4:07:ba:c1:30:30:5c:48:b7:82:3b:
98:a6:7d:60:8a:a2:a3:29:82:cc:ba:bd:83:04:1b:
a2:83:03:41:a1:d6:05:f1:1b:c2:b6:f0:a8:7c:86:
3b:46:a8:48:2a:88:dc:76:9a:76:bf:1f:6a:a5:3d:
19:8f:eb:38:f3:64:de:c8:2b:0d:0a:28:ff:f7:db:
e2:15:42:d4:22:d0:27:5d:e1:79:fe:18:e7:70:88:
ad:4e:e6:d9:8b:3a:c6:dd:27:51:6e:ff:bc:64:f5:
33:43:4f
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
79:B4:59:E6:7B:B6:E5:E4:01:73:80:08:88:C8:1A:58:F6:E9:9B:6E
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
55:1f:58:a9:bc:b2:a8:50:d0:0c:b1:d8:1a:69:20:27:29:08:
ac:61:75:5c:8a:6e:f8:82:e5:69:2f:d5:f6:56:4b:b9:b8:73:
10:59:d3:21:97:7e:e7:4c:71:fb:b2:d2:60:ad:39:a8:0b:ea:
17:21:56:85:f1:50:0e:59:eb:ce:e0:59:e9:ba:c9:15:ef:86:
9d:8f:84:80:f6:e4:e9:91:90:dc:17:9b:62:1b:45:f0:66:95:
d2:7c:6f:c2:ea:3b:ef:1f:cf:cb:d6:ae:27:f1:a9:b0:c8:ae:
fd:7d:7e:9a:fa:22:04:eb:ff:d9:7f:ea:91:2b:22:b1:17:0e:
8f:f2:8a:34:5b:58:d8:fc:01:c9:54:b9:b8:26:cc:8a:88:33:
89:4c:2d:84:3c:82:df:ee:96:57:05:ba:2c:bb:f7:c4:b7:c7:
4e:3b:82:be:31:c8:22:73:73:92:d1:c2:80:a4:39:39:10:33:
23:82:4c:3c:9f:86:b2:55:98:1d:be:29:86:8c:22:9b:9e:e2:
6b:3b:57:3a:82:70:4d:dc:09:c7:89:cb:0a:07:4d:6c:e8:5d:
8e:c9:ef:ce:ab:c7:bb:b5:2b:4e:45:d6:4a:d0:26:cc:e5:72:
ca:08:6a:a5:95:e3:15:a1:f7:a4:ed:c9:2c:5f:a5:fb:ff:ac:
28:02:2e:be:d7:7b:bb:e3:71:7b:90:16:d3:07:5e:46:53:7c:
37:07:42:8c:d3:c4:96:9c:d5:99:b5:2a:e0:95:1a:80:48:ae:
4c:39:07:ce:cc:47:a4:52:95:2b:ba:b8:fb:ad:d2:33:53:7d:
e5:1d:4d:6d:d5:a1:b1:c7:42:6f:e6:40:27:35:5c:a3:28:b7:
07:8d:e7:8d:33:90:e7:23:9f:fb:50:9c:79:6c:46:d5:b4:15:
b3:96:6e:7e:9b:0c:96:3a:b8:52:2d:3f:d6:5b:e1:fb:08:c2:
84:fe:24:a8:a3:89:da:ac:6a:e1:18:2a:b1:a8:43:61:5b:d3:
1f:dc:3b:8d:76:f2:2d:e8:8d:75:df:17:33:6c:3d:53:fb:7b:
cb:41:5f:ff:dc:a2:d0:61:38:e1:96:b8:ac:5d:8b:37:d7:75:
d5:33:c0:99:11:ae:9d:41:c1:72:75:84:be:02:41:42:5f:67:
24:48:94:d1:9b:27:be:07:3f:b9:b8:4f:81:74:51:e1:7a:b7:
ed:9d:23:e2:be:e0:d5:28:04:13:3c:31:03:9e:dd:7a:6c:8f:
c6:07:18:c6:7f:de:47:8e:3f:28:9e:04:06:cf:a5:54:34:77:
bd:ec:89:9b:e9:17:43:df:5b:db:5f:fe:8e:1e:57:a2:cd:40:
9d:7e:62:22:da:de:18:27
That's why, if you do:
$ curl --head https://cataas.com/
HTTP/1.1 404 Not Found
Server: nginx
Date: Wed, 13 Mar 2024 18:40:20 GMT
Connection: keep-alive
Set-Cookie: session=41cd42f1-552a-479a-91a1-2e0ddb837e3e; Path=/; SameSite=Strict; Secure; HttpOnly
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: X-Requested-With, Content-Type, Accept, Origin, Authorization
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
...it works!
But when our program inside the Docker image tries to do so, it does not work.
In fact, even outside the Docker image it doesn't work, because it links against a build of openssl that looks for CA certificates not in /etc/ssl/certs (like an Ubuntu package would), but in /nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8/etc/ssl/certs (like a nix package would).
So, long story short, we need to pull that into the Docker image:
dockerImage = pkgs.dockerTools.buildImage {
name = "catscii";
tag = "latest";
# new! 👇
copyToRoot = [ bin pkgs.cacert ];
config = {
Cmd = [ "${bin}/bin/catscii" ];
Env = with pkgs; [ "GEOLITE2_COUNTRY_DB=${clash-geoip}/etc/clash/Country.mmdb" ];
};
};
Wait wait, but why didn't we see it in Honeycomb?
Well, how do you think events are sent to Honeycomb?
Over HTTPS? Oh.... Ohhhhhhhhhhh!
Yeah.
Streaming docker images
Let's build this once again:
$ nix build .#dockerImage && docker load < result
warning: Git tree '/home/amos/catscii' is dirty
a0d69db324f4: Loading layer [==================================================>] 65.81MB/65.81MB
The image catscii:latest already exists, renaming the old one with ID sha256:6bd980414ca2a765952467b7b7667c892567c824a36ddb1065d911a2f822cce9 to empty string
Loaded image: catscii:latest
Say, this seems a little wasteful, no? Writing the whole image to disk, only to import it back into the docker registry right after?
You're right! We can live with that, of course, but say we didn't want to - say we wanted to really obsess over it - we could switch to streamLayeredImage.
Let's do that now!
# was `buildImage` 👇
dockerImage = pkgs.dockerTools.streamLayeredImage {
name = "catscii";
tag = "latest";
# 👇 was `copyToRoot`
contents = [ bin pkgs.cacert ];
config = {
Cmd = [ "${bin}/bin/catscii" ];
Env = with pkgs; [ "GEOLITE2_COUNTRY_DB=${clash-geoip}/etc/clash/Country.mmdb" ];
};
};
Loading the image into the docker registry is now a bit different! result is no longer a gzipped tarball (.tar.gz), but it is a script that streams out an uncompressed tarball (.tar). So, we can call it and pipe the output to docker load.
$ nix build .#dockerImage && ./result | docker load
warning: Git tree '/home/amos/catscii' is dirty
No 'fromImage' provided
Creating layer 1 from paths: ['/nix/store/jdjpni8kq3i95dj1d49nlf9m10wl0kqq-libunistring-1.0']
Creating layer 2 from paths: ['/nix/store/na1irnycfp8z5mab0g5jvrnhnscsaqsb-libidn2-2.3.2']
Creating layer 3 from paths: ['/nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224']
Creating layer 4 from paths: ['/nix/store/9dz5lmff9ywas225g6cpn34s0wbldnxa-zlib-1.2.13']
Creating layer 5 from paths: ['/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8']
Creating layer 6 from paths: ['/nix/store/zqf9r5d9yv4ccv64ja8xjn8kasgqg3cy-sqlite-3.40.1']
Creating layer 7 from paths: ['/nix/store/dvfh98mvqish3jsih29rs3r45qn3yfpf-catscii-0.1.0']
Creating layer 8 from paths: ['/nix/store/l1amkbsx8mqv11b4mg98f9lyw5nkrcbv-clash-geoip-20230112']
Creating layer 9 from paths: ['/nix/store/0mm127hlf3rsdph4p34385qxsrph20rp-nss-cacert-3.86']
Creating layer 10 with customisation...
Adding manifests...
Done.
0b99676422a9: Loading layer [==================================================>] 1.812MB/1.812MB
dc71b1c4d9c2: Loading layer [==================================================>] 297kB/297kB
541c11727f9e: Loading layer [==================================================>] 30.77MB/30.77MB
d86a062e6369: Loading layer [==================================================>] 143.4kB/143.4kB
e21cc889e6d8: Loading layer [==================================================>] 6.482MB/6.482MB
26544d61db4d: Loading layer [==================================================>] 1.485MB/1.485MB
1af5c770e073: Loading layer [==================================================>] 9.114MB/9.114MB
9c32b3e8b0f5: Loading layer [==================================================>] 5.663MB/5.663MB
77d56c064e6a: Loading layer [==================================================>] 501.8kB/501.8kB
314943a5b800: Loading layer [==================================================>] 10.24kB/10.24kB
The image catscii:latest already exists, renaming the old one with ID sha256:37a94963597827d5b7ceda467d44a501c38121bb7f5df736aec2f7cb0e9c18ca to empty string
Loaded image: catscii:latest
You'll notice this image has layers (we can control how many layers it has with maxLayers), so pushing updates should be faster, since dependencies like openssl, glibc, zlib etc. seldom change!
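If you ever wanted to play with that knob, it's just another attribute on the same call - here's a sketch (the 120 is an arbitrary number I picked; the default is documented in nixpkgs):
dockerImage = pkgs.dockerTools.streamLayeredImage {
  name = "catscii";
  tag = "latest";
  contents = [ bin pkgs.cacert ];
  # 👇 hypothetical: allow more store paths to get their own layer
  maxLayers = 120;
  config = {
    Cmd = [ "${bin}/bin/catscii" ];
    Env = with pkgs; [ "GEOLITE2_COUNTRY_DB=${clash-geoip}/etc/clash/Country.mmdb" ];
  };
};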
Note that our image is still delightfully small: dive catscii:latest shows a total image size of 55MB. Adding ca-certs only cost us 486 kB.
Let's run it just like we did before:
$ docker run --rm -e SENTRY_DSN -e HONEYCOMB_API_KEY -e ANALYTICS_DB -p 8080:8080 catscii:latest
/nix/store/561wgc73s0x1250hrgp7jm22hhv7yfln-bash-5.2-p15/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
{"timestamp":"2023-02-21T14:33:52.577377Z","level":"INFO","fields":{"message":"Creating honey client","log.target":"libhoney::client","log.module_path":"libhoney::client","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/client.rs","log.line":78},"target":"libhoney::client"}
{"timestamp":"2023-02-21T14:33:52.577573Z","level":"INFO","fields":{"message":"transmission starting","log.target":"libhoney::transmission","log.module_path":"libhoney::transmission","log.file":"/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/2b916d13eeef5c4c978dd7cdbe0195cb3a2cb3e77889acf9f85433ddf74a8e14/libhoney-rust-0.1.6/src/transmission.rs","log.line":124},"target":"libhoney::transmission"}
{"timestamp":"2023-02-21T14:33:52.645842Z","level":"INFO","fields":{"message":"Listening on 0.0.0.0:8080"},"target":"catscii"}
And finally, we have our sweet ascii art cats again!
Deploying to fly.io
Deploying works much the same as it did before - we still have to opt out of fly.io's "remote builders", since we really do want to push a local image there.
Our Justfile could look like this:
# just manual: https://github.com/casey/just#readme
_default:
    just --list

docker:
    #!/bin/bash -eux
    nix build .#dockerImage
    ./result | docker load

deploy:
    just docker
    fly deploy --local-only
But of course, we'll want to add just and flyctl to our dev shell first:
devShells.default = mkShell
{
inputsFrom = [ bin ];
# note: I ended up keeping `docker` in my dev shell 🤷
buildInputs = with pkgs; [ dive docker shadow flyctl just ];
};
Then, fly auth login:
$ fly auth login
(cut)
And... let's just check our fly.toml one last time... oh, right! We have that in there:
[env]
GEOLITE2_COUNTRY_DB = "/db/GeoLite2-Country.mmdb" # 🗑️ remove me!
ANALYTICS_DB = "/db/analytics.db"
We should remove that GEOLITE2_COUNTRY_DB line from there - that environment variable is baked into the Docker image now.
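Which leaves us with just:
[env]
ANALYTICS_DB = "/db/analytics.db"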
And then we're good to go!
$ just deploy
just docker
+ nix build .#dockerImage
warning: Git tree '/home/amos/catscii' is dirty
+ ./result
+ docker load
No 'fromImage' provided
Creating layer 1 from paths: ['/nix/store/jdjpni8kq3i95dj1d49nlf9m10wl0kqq-libunistring-1.0']
Creating layer 2 from paths: ['/nix/store/na1irnycfp8z5mab0g5jvrnhnscsaqsb-libidn2-2.3.2']
Creating layer 3 from paths: ['/nix/store/lqz6hmd86viw83f9qll2ip87jhb7p1ah-glibc-2.35-224']
Creating layer 4 from paths: ['/nix/store/9dz5lmff9ywas225g6cpn34s0wbldnxa-zlib-1.2.13']
Creating layer 5 from paths: ['/nix/store/dsf1m9azqqz6c3nqj9yk0nnardqmaia0-openssl-3.0.8']
Creating layer 6 from paths: ['/nix/store/zqf9r5d9yv4ccv64ja8xjn8kasgqg3cy-sqlite-3.40.1']
Creating layer 7 from paths: ['/nix/store/dvfh98mvqish3jsih29rs3r45qn3yfpf-catscii-0.1.0']
Creating layer 8 from paths: ['/nix/store/l1amkbsx8mqv11b4mg98f9lyw5nkrcbv-clash-geoip-20230112']
Creating layer 9 from paths: ['/nix/store/0mm127hlf3rsdph4p34385qxsrph20rp-nss-cacert-3.86']
Creating layer 10 with customisation...
Adding manifests...
Done.
Loaded image: catscii:latest
fly deploy --local-only
Update available 0.0.456 -> v0.0.462.
Run "flyctl version update" to upgrade.
==> Verifying app config
--> Verified app config
==> Building image
Searching for image 'catscii' locally...
image found: sha256:faf87069d1dbea4b141353485cfbd3f5f7b5ef3f0c1e3f8e37eb4ac119f78890
==> Pushing image to fly
The push refers to repository [registry.fly.io/old-frost-6294]
314943a5b800: Pushed
77d56c064e6a: Pushed
9c32b3e8b0f5: Pushed
1af5c770e073: Pushed
26544d61db4d: Pushed
e21cc889e6d8: Pushed
d86a062e6369: Pushed
541c11727f9e: Pushed
dc71b1c4d9c2: Pushed
0b99676422a9: Pushed
deployment-01GST6J50SRWXCE0MK0W78G2D3: digest: sha256:107885f7edc0b67b5dea9843b324fc3d6977a3697f9f6d74a183075aa792f325 size: 2417
--> Pushing image done
==> Creating release
--> release v10 created
--> You can detach the terminal anytime without stopping the deployment
==> Monitoring deployment
Logs: https://fly.io/apps/old-frost-6294/monitoring
v10 is being deployed
1 desired, 1 placed, 0 healthy, 0 unhealthy [restarts: 1]