All color is best-effort

This is a dual feature! It's available as a video too. Watch on YouTube

I do not come to you with answers today, but rather some observations and a lot of questions.

The weird glitch

Recently I was editing some video and I noticed this:

A screenshot of the video, there are visible circles at various places in the image. Some of them are black, some of them are white. The image itself shows some blue and white text composited on a blurry background, which doesn’t really matter for this, and there’s a horizontal red line near the top of the image. It’s very confusing.

Not what the finger is pointing at — the dots.

Here are the separate layers this image is made up of: the background is a stock image I’ve licensed from Envato Elements:

A picture of a canyon, darker than you’d expect.

Because I use it as a background image, I’ve cranked down the exposure in the Color tab:

The global exposure for that image is set to -3 as opposed to the default of 0.

And for a little added style and some additional readability for subtitles, I’ve added a tilt-shift blur:

A screenshot of the DaVinci Resolve interface in the Edit tab. We can see there’s an OpenFX effect enabled called Tilt-Shift Blur. The preview is annotated with blurry and less blurry text.

On top of that, we have some text captured from Safari as a transparent image by right-clicking on an element in the Inspector and picking “Capture Screenshot”, one of my favorite tricks as of late:

A screenshot of the Safari Web Inspector with the context menu enabled. The Elements tab, there’s a highlighted capture screenshot item in the context menu.

However, the transparency is not complete. GitHub’s CSS has tables with an opaque background. So I added an additional 3D keyer to remove the background:

Those two layers composited already show some strangeness:

Now the transparent text is on top of the background. And although it’s very hard to see, there is some glare around the edges of the text: some white dots.

Cool bear

Uhh I can hardly see anything.

Very well — enhance!

A zoomed in detail of the same picture, with red arrows pointing at the artifacts.

It’s still kind of subtle, but you can see the dots.

Cool bear

Are those dots present without the 3D Keyer?

No, they’re not:

The same image without the 3D keyer showing that there are no dots this time around.

They are also not here on the Fusion tab at all:

A screenshot of DaVinci Resolve with the Fusion tab opened, zoomed in to the same text showing that there are no dots here.

The dots are bad when playback is using full resolution:

A screen capture with full playback resolution, we see dots around the edges of the letters.

But things get a lot weirder once you scale down. This is half resolution:

The same picture as half resolution, we’re starting to see some star-like patterns and color aberration where we had those white dots.

And this is quarter resolution:

The artifacts have gotten even worse.

I have investigated this for half a day and I have good news and bad news.

The good news is: unpausing makes all the artifacts disappear!

Cool bear

Ah, great! So surely that means it’s not present in the export, yes?

Well bear, what do you think the bad news is?

Cool bear

Ah f-

A screenshot of QuickTime Player showing the glitches.mov file with the same dots.

Even if I considered those artifacts acceptable, the Tilt-Shift Blur effect makes them impossible to ignore: it blows up every single point into those large colored circles. Even with a strength of zero, it turns those dots into streaks that are reminiscent of memory corruption:

Disappointingly, I don’t know what is causing this particular problem.

There’s a whole bunch of things that don’t solve it: the 3D Keyer can work in different color spaces:

In behavior options, the 3D keyer has a color space picker with YUV, HSL, HSP, Lab.

…and has a bunch of knobs you can turn:

A bunch of knobs from the 3D keyer effect, including Chroma tolerance, Chroma softness, Chroma tilt, Chroma shift, Chroma rotate, Luma low, Luma high, Low softness, High softness, Pre-filter, Clean black, Clean white, Black clip, White clip, Blur radius, In/Out ratio, Matte shape and output.

But none of them truly fix it. The other keyers exhibit the same issue, here’s the Ultra Keyer instead:

The same dots and streaks are showing.

Same with the Luma Keyer, the Chroma Keyer, etc. No amount of checking “Pre-divide / post-multiply” makes a difference. Manually adding an “Alpha multiply” node does jack shit. Same with “Gamut limiter” — there goes my theory that it’s just outputting colors outside the gamut but that… because it might be using f32 values internally, uhhh something happened?

But nope.

Playing with color spaces

Cool bear

What about the source PNG file, is it in some weird colorspace?

Bear, it’s in the most vanilla colorspace you can imagine: sRGB.

The macOS color space information for that PNG image, showing that the PNG file is sRGB IEC61966-2.1

Cool bear

What’s with the numbers?

Oh that’s just the IEC standard for it:

A screenshot of the IEC 61966-2-1 standard shop page.

Among other things, the standard (published in 1999, amended in 2003) specifies the proper transfer function to use, which is almost like gamma = 2.2, but not quite:

The two solid curves are:

  • gamma 2.2 (top)
  • gamma 2.4 (bottom)

The dotted curve is sRGB IEC61966-2.1

Own work, graphed via Desmos

Close enough though: even if some part of the pipeline used “simplified sRGB” by doing gamma 2.2 instead of using the IEC curve, we wouldn’t see that.

Cool bear

What would we see?

In that case, probably nothing - but when we use the wrong color profile, things look, well, wrong. They can look washed out, they can look too saturated, too dark, too bright, any of these.

Believe it or not, HDR is the main reason I’m using iPhones all around now. By default, my iPhone 14 with the Camera app will shoot HDR video.

You can disable HDR in the settings:

The HDR Video toggle in iPhone settings.

…but personally, I prefer shooting with the Blackmagic Camera app for iPhone, which calls color spaces by their actual names:

The Blackmagic Camera color picker, showing Rec.709, Rec.2020 - HDR, and P3 D65.

I took 4 short videos in quick succession and dragged all those in DaVinci Resolve just to find out what would happen:

A collage of the four videos: top-left is the default Camera app on iPhone 14, which gives us H.265 Main 10, BT.2020 HLG. Top-right is the Rec.709 setting in Blackmagic Cam, which gives us H.265 Main, HD. The P3 D65 setting gives us H.265 Main, P3 D65, and the Rec.2020 - HDR setting gives us H.265 Main 10, BT.2020 HLG.

Cool bear

They look different.

Indeed they do! First off, there’s most likely an exposure / white balance / etc. mismatch between the built-in Camera app (which had everything on full auto) and the Blackmagic Camera app, but even just looking at the three pieces of Blackmagic footage, things look different, even on this sRGB screenshot, shown on my DCI-P3 display.

CIE Chromaticity diagram

If we switch to Resolve’s Color tab, we can see which colors are actually used in each image, via CIE Chromaticity diagrams.

Rec.709:

A CIE chromaticity graph for the Rec.709 footage. It’s hard to describe if you’ve never seen one before — there’s an X axis and a Y axis, then some sort of cave-hole outline in white. Inside there, a much smaller triangle, labelled Rec.709, and inside that triangle, a constellation of colored dots that correspond to the colors actually being used in the footage.

P3-D65:

A chromaticity graph for the P3-D65 footage. It looks largely the same, although there’s less “noise” outside the main blob of colors.

Rec.2020-HLG:

The chromaticity graph for Rec.2020-HLG footage. This time, the color blob is clearly cut off at the top-right edge of the triangle.

iPhone Camera app:

There are colors all over the Rec.709 triangle and it feels like they want to get out, too — there’s a lot of points on the bottom edge, in the purples, and a clear cut-off top-right near the reds.

The “horseshoe” shape in white represents the visible spectrum, and the triangle labelled “Rec.709” represents our color space: which colors of the visible spectrum we’re able to “represent”, or “encode”.

The circle is the “white point” — Rec.709, P3-D65 and Rec.2020-HLG are all using the same white point, D65, which…

…is intended to represent average daylight and has a correlated colour temperature of approximately 6500 K. CIE standard illuminant D65 should be used in all colorimetric calculations requiring representative daylight, unless there are specific reasons for using a different illuminant. Variations in the relative spectral power distribution of daylight are known to occur, particularly in the ultraviolet spectral region, as a function of season, time of day, and geographic location.

ISO 10526:1999/CIE S005/E-1998, CIE Standard Illuminants for Colorimetry

Without agonizing too much about the science of it, we can see a couple interesting things: the Rec.709 footage feels very “noisy” for some reason:

Zoomed-in chromaticity diagram for the Rec.709 footage — there are a bunch of dots astray from the main blob.

I have no idea if it’s normal or not, but we don’t have the same thing in the P3-D65 footage:

Same detail but on the P3-D65 footage, which has relatively few points outside the main blob of color.

So… maybe it’s an encoding artifact? Not sure.

But the thing that’s hard to ignore is on the Rec.2020 chromaticity diagram. If you crank the gain on an audio track, you can see it clip: the peaks get flattened to the minimum and maximum representable values, as if “clipped” with scissors:

Similarly, we can feel that our Rec.2020 footage feels a little cramped in our Rec.709 color space: it wants to get out!

Detail of the Rec.2020 footage chromaticity diagram, with an arrow showing where the oranges are clearly “clipped” by the edge of the Rec.709 triangle.

But that’s not even what feels wrong with this comparison… the Rec.709 footage looks “less wrong” to me — the Rec.2020 one has some colors that look… off. The floor is too bright. The cat’s fur becomes too bright too quick.

Cool bear

Hey, that’s the gamma curve!

Right!!! The whole reason we have a “gamma curve” in the first place is to spend our bits wisely.

If all you had was 16 shades of grey (sorry E.L James), which would you choose?

Linear, or sRGB?

(JavaScript is required for this)
Cool bear

Oooh, I’ll take the sRGB ones any day.

Exactly — with a “linear” progression (in terms of light emission), the shades become “too bright” much too quickly.

Cool bear Cool Bear's hot tip

You can refer to What every coder should know about gamma or the Krita docs about Gamma and Linear if you want to dive more in-depth.

And that’s what the curve I showed earlier is all about:

The sRGB IEC61966-2.1 reverse OETF

Own work, graphed via Desmos

Reading this graph, we can see that an encoded value of 0.1 is barely equivalent to 0.01 lightness. To obtain 0.1 lightness, we have to go all the way to 0.35!

If we weren’t using “gamma curves” like these, colors would look a lot worse, especially given how long we’ve been using 8-bit color.
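To make that graph reading concrete, here’s a small sketch of the sRGB decode curve, with the constants from the IEC definition (which we’ll look at properly in a moment):

```rust
// The sRGB "reverse OETF": encoded signal -> linear lightness.
// Constants from IEC 61966-2-1.
fn srgb_decode(u: f64) -> f64 {
    if u <= 0.04045 {
        u / 12.92
    } else {
        ((u + 0.055) / 1.055).powf(2.4)
    }
}

fn main() {
    // An encoded value of 0.1 is only about 0.01 linear lightness:
    println!("{:.3}", srgb_decode(0.10)); // prints 0.010
    // To reach 0.1 linear lightness, we need to encode ~0.35:
    println!("{:.3}", srgb_decode(0.35)); // prints 0.100
}
```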

Our first transfer function

Before things get a lot more confusing, let’s ask ourselves: what is the exact name of the transfer function we just plotted?

As far as I can tell, it’s a “reverse OETF”.

An OETF (opto-electronic transfer function) is used when capturing/encoding: The camera sensor detects some amount of light and we need to figure out which integer value to encode it as in the video signal.

Cool bear

Think sRGB, Rec.709, Rec.2020-HLG, Rec.2020-PQ, etc.

An EOTF (electro-optical transfer function) is used by monitors/displays, taking a signal and deciding how much light to emit for each integer value of the signal (more or less).

Cool bear

Think Display P3, sRGB, Adobe RGB, etc. — monitor color profiles:

A list of various color profiles supported by my display on macOS. There’s LG Ultrafine, the model of my monitor, ACES, Adobe RGB DisplayP3, various Rec.2020, Rec.709 derivatives, sRGB and some others.

Finally, an OOTF is the composition of the OETF and the EOTF and we do not need to worry about it at all — check the Wikipedia page on transfer functions in imaging if you really must.

Really, the only one we have to care about is the “OETF”, which has been applied in our camera, and which we need to reverse in order to know how much light actually hit the sensor.

For sRGB, the OETF is defined as:

$$E(v) = \begin{cases} A v & v \le V \\ (1+C)\,v^{1/\Gamma} - C & v > V \end{cases}$$

where $V = 0.0031308$, $A = 12.92$, $C = 0.055$, and $\Gamma = 2.4$

v is the lightness value (how much light hit the camera sensor), and E(v) is the encoded signal (the RGB value we put in the video file).

Cool bear Cool Bear's hot tip

We called it E(v), with E for “encoding”, and for consistency with its counterpart D(u) below.

The “reverse OETF” is then:

$$D(u) = \begin{cases} \dfrac{u}{A} & u \le U \\ \left(\dfrac{u+C}{1+C}\right)^{\Gamma} & u > U \end{cases}$$

where $U = 0.04045$, and $A$, $C$, and $\Gamma$ are the same as for $E(v)$.

Cool bear Cool Bear's hot tip

Called D(u) for “decoding”, you guessed it.

We could’ve called it $E^{-1}(u)$, but eh.

How do we know we got the inverse right?

Although not very rigorous, plotting them allows us to see a symmetry across the y=x axis.

It looks a little like an almond!

The sRGB OETF (top), a diagonal (straight), and the reverse OETF (👉👈)

Own work, graphed with Desmos

Another way is to do D(E(v)) and simplify to see if we fall back to v.

First the linear part:

$$\begin{aligned}
D(u) &= \frac{u}{A}, \qquad E(v) = A v \\
D(E(v)) &= \frac{E(v)}{A} && \text{apply } D \\
&= \frac{A v}{A} && \text{apply } E \\
&= v && \text{cancel out the } A \text{ terms}
\end{aligned}$$

Then the exponential part:

$$\begin{aligned}
D(u) &= \left(\frac{u+C}{1+C}\right)^{\Gamma}, \qquad E(v) = (1+C)\,v^{1/\Gamma} - C \\
D(E(v)) &= \left(\frac{E(v)+C}{1+C}\right)^{\Gamma} && \text{apply } D \\
&= \left(\frac{(1+C)\,v^{1/\Gamma} - C + C}{1+C}\right)^{\Gamma} && \text{apply } E \\
&= \left(\frac{(1+C)\,v^{1/\Gamma}}{1+C}\right)^{\Gamma} && \text{cancel out the } C \text{ terms} \\
&= \left(v^{1/\Gamma}\right)^{\Gamma} && \text{cancel out the } (1+C) \text{ terms} \\
&= v && \text{cancel out the exponents}
\end{aligned}$$

And finally, the breakpoint for $D(u)$ was at $U = 0.04045$, which gives us:

For the linear segment:

$$D(U) = \frac{U}{A} = \frac{0.04045}{12.92} \approx 0.0031308$$

For the exponential segment:

$$D(U) = \left(\frac{U+C}{1+C}\right)^{\Gamma} = \left(\frac{0.04045+0.055}{1.055}\right)^{2.4} = \left(\frac{0.09545}{1.055}\right)^{2.4} \approx 0.0031308$$

Yup, looks like we got it right — that explains where the breakpoint for $E(v)$, $V = 0.0031308$, comes from.
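Not rigorous either, but we can also check the round trip numerically. A quick sketch with the constants from above:

```rust
// sRGB OETF (encode) and its reverse (decode), constants as above.
const A: f64 = 12.92;
const C: f64 = 0.055;
const GAMMA: f64 = 2.4;
const V_BREAK: f64 = 0.0031308; // breakpoint for E(v)
const U_BREAK: f64 = 0.04045; // breakpoint for D(u)

fn encode(v: f64) -> f64 {
    if v <= V_BREAK { A * v } else { (1.0 + C) * v.powf(1.0 / GAMMA) - C }
}

fn decode(u: f64) -> f64 {
    if u <= U_BREAK { u / A } else { ((u + C) / (1.0 + C)).powf(GAMMA) }
}

fn main() {
    // D(E(v)) should give v back, within floating-point error:
    for v in [0.0, 0.001, 0.0031308, 0.18, 0.5, 1.0] {
        assert!((decode(encode(v)) - v).abs() < 1e-12);
    }
    println!("all round trips check out");
}
```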

Parade scope

Back to our footage, there’s another fun color visualization we can look at: the “Parade”. This displays red, green, and blue values (here on a scale of 0 to 1023, mimicking 10-bit encoding, although that’s configurable) from left to right.

Our Rec.709 footage, which looks “good”, has a good spread:

A parade scope for the rec 709 footage. Values range from 1000 to 128, with a concentration between 900 and 512. Red is brighter than green which is brighter than blue.

Our Rec.2020 footage however, seems squished at the top of the scale:

A parade chart for the Rec 2020 footage. Almost all values are close to the maximum of 1023. The clear lines we saw on the Rec.709 graph are now concentrated between 1023 and 900.

Luckily, there’s a slider for that!

But here’s the thing: the footage looks fine when reviewing it on my iPhone:

Our Rec.2020 footage, opened in Blackmagic Camera’s Media tab, looking fine.

It’s not “blown out”, or anything.

Cool bear

Well, since we’re getting technical… what color space is that screenshot in?

Good question! I AirPlay’d the screenshot to myself and got an sRGB JPEG file. It definitely looks more “vivid” on my iPhone screen than on the webpage. I’m not actually sure iPhones can take HDR screenshots?

But the point is: the iPhone did something to that Rec.2020 footage. It’s showing more colors to me on the iPhone screen, but it’s able to save an sRGB version as a screenshot that doesn’t look as wrong as what I had when I dragged the video file on my DaVinci Resolve timeline.

In fact, my Mac can do “that” too: opening the footage in QuickTime shows “correct” colors as well:

A quicktime window superimposed on DaVinci Resolve. The quicktime window has the correct colors, and the DaVinci Resolve window has blown out colors (too bright).

And what “that” is, is tone mapping.

More transfer functions

How do we know for sure how color is encoded into our H.265 files?

The macOS Finder gave us three numbers for each of these files:

  • 1-1-1 for Blackmagic Cam in “Rec.709” mode
  • 12-1-6 for Blackmagic Cam in “P3 D65” mode
  • 9-18-9 for Blackmagic Cam in “Rec.2020 - HDR” mode

Those numbers tell us all we need to know, and are standardized in the ITU-T H.265 Recommendation, which is a free download, and I have gone through its 728 pages to fish out the relevant information.

The first number is for color primaries. Notable numbers include:

  • 1 for Rec. ITU-R BT.709-6 and IEC 61966-2-1 sRGB or sYCC (HD but SDR video content)
  • 4 for Rec. ITU-R BT.601-7 625 (PAL)
  • 5 for Rec. ITU-R BT.601-7 525 (NTSC)
  • 9 for Rec. ITU-R BT.2020-2 and Rec. ITU-R BT.2100-2 (HDR)
  • 12 for SMPTE ST 2113 “P3D65” (hey that sounds familiar)

These are the xy coordinates of what the color space considers red, green, and blue, in the CIE 1931 color space.

An annotated graph of the CIE 1931 color space showing the Rec.709 and Rec.2020 gamuts, along with their color primaries.

Again, the horseshoe is the entire visible spectrum and the triangles are “color gamuts” that represent which colors we can actually encode.

The Rec.2020 gamut (thick circles) is larger, it covers more of the visible spectrum than Rec.709 (thin circles) — that’s why the cloud of points in this chromaticity graph feels cramped: it’s actually Rec.2020 footage that we’ve clamped to Rec.709: we have to throw away some of the colors that were encoded into the original footage.
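As a side note, the first number of those Finder triples is simple enough to decode by hand. Here’s a sketch of a lookup (codes from Annex E of the H.265 spec; the shortened names are mine):

```rust
// Map an H.265 colour_primaries code to a (shortened) name.
// Only the notable values listed above; everything else falls through.
fn primaries_name(code: u8) -> &'static str {
    match code {
        1 => "BT.709 / sRGB",
        4 => "BT.601-7 625 (PAL)",
        5 => "BT.601-7 525 (NTSC)",
        9 => "BT.2020 / BT.2100 (HDR)",
        12 => "SMPTE ST 2113 P3D65",
        _ => "other / unspecified",
    }
}

fn main() {
    // First numbers of the triples from earlier: 1-1-1, 12-1-6, 9-18-9
    for code in [1u8, 12, 9] {
        println!("{code} => {}", primaries_name(code));
    }
}
```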

Cool bear

Wait a minute, I have several questions.

Go right ahead.

Cool bear

First: why didn’t we just pick a super large gamut that covers the entire visible spectrum? Why is Rec. 709 so small to begin with?

The short answer is 8-bit color: we talked about spending our bits wisely and it’s time to visualize it:

(JavaScript is required for this)

Without gamma curves, all of our possible encoded colors are concentrated near the illuminant, and there’s a lot of gaps near pure red, green, and blue colors.

But if you check “Apply TRC”, you can see all the discrete colors spread out away from the illuminant, and closer to the color primaries. This is valid for Rec.709 and Rec.2020 just as well.

And that’s what the second number is for! TRC for Transfer characteristics, which really is “transfer functions”, or “tone response curve” (also TRC), or “gamma curve”, yada yada.

Well. It’s not that simple. Note 5 says, paraphrasing:

some values of transfer_characteristics are defined in terms of a reference OETF, and others are defined in terms of a reference EOTF, according to the convention that has been applied in other Specifications.

In the cases of Rec. ITU-R BT.709-6 and Rec. ITU-R BT.2020-2 (which can be indicated by transfer_characteristics equal to 1, 6, 14, or 15), although the value is defined in terms of a reference OETF, a suggested corresponding reference EOTF characteristic function for flat panel displays used in HDTV studio production has been specified in Rec. ITU-R BT.1886-0.

Here’s the ITU-R BT.1886 EOTF in question:

$$L = a \left(\max[(V + b),\, 0]\right)^{\gamma}$$

(Source: ITU-R BT.1886 on Wikipedia)

V is the “input video signal level”, in the range [0,1], so far so good — we can divide our 8-bit values by 255, or our 10-bit values by 1023, no worries there.

L is the screen luminance in cd/m² (candelas per square meter, also called nits), which, uhhh, where did my beautiful normalized [0,1] light-linear value go? This is much too real-world for me. γ is 2.4, which is unsurprising for Rec.709 / SDR (see below).

Cool bear

Also, γ is just lower-case Γ, the Greek letter “gamma”

$a = \left(L_W^{1/\gamma} - L_B^{1/\gamma}\right)^{\gamma}$ is user gain, previously known as “contrast”, because, that’s right, EOTFs are for displays.

Cool bear Cool Bear's hot tip

Mnemonics:

  • EOTF = Electro-optical TF = (electric ➡️ optic)
  • OETF = Opto-electrical TF = (optic ➡️ electric)

Displays are optical, video files are electrical.

$b = \dfrac{L_B^{1/\gamma}}{L_W^{1/\gamma} - L_B^{1/\gamma}}$ is user black level lift, aka “brightness”.

Finally, $L_W$ and $L_B$ are the screen luminance for white and black, respectively, also in cd/m².
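Putting the pieces together, here’s a sketch of the BT.1886 EOTF, with a and b derived from a display’s white and black luminance. The 100-nit / 0-nit display below is just a convenient example, not something the footage dictates:

```rust
// BT.1886 EOTF: video signal level V in [0, 1] -> luminance L in cd/m².
fn bt1886(v: f64, lw: f64, lb: f64) -> f64 {
    const GAMMA: f64 = 2.4;
    let lw_g = lw.powf(1.0 / GAMMA);
    let lb_g = lb.powf(1.0 / GAMMA);
    let a = (lw_g - lb_g).powf(GAMMA); // user gain ("contrast")
    let b = lb_g / (lw_g - lb_g); // user black level lift ("brightness")
    a * (v + b).max(0.0).powf(GAMMA)
}

fn main() {
    // A 100-nit display with perfect blacks:
    println!("{:.1} nits", bt1886(1.0, 100.0, 0.0)); // full signal: 100.0 nits
    println!("{:.1} nits", bt1886(0.5, 100.0, 0.0)); // half signal, well below 50
    println!("{:.1} nits", bt1886(0.0, 100.0, 0.0)); // zero signal: 0.0 nits
}
```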

So yeah. What we plotted earlier isn’t the EOTF, it was a reverse OETF.

Cool bear

I hope someone other than you had that question.

Notable transfer_characteristics values for H.265 include 1, for Rec. ITU-R BT.709-6.

They give this OETF:

$$V = \begin{cases} \alpha L_c^{0.45} - (\alpha - 1) & 1 \ge L_c \ge \beta \\ 4.500\, L_c & \beta > L_c \ge 0 \end{cases}$$

How do we know it’s an OETF? Because its input is $L_c$, a “linear optical intensity with a nominal real-valued range of 0 to 1”.

The values $\alpha$ and $\beta$ are constants chosen so that the two curve segments meet at the breakpoint.

For TRC 1, 6, 11, 14 and 15 we have:

$$\beta = 0.018053968510807 \qquad \alpha = 1 + 5.5\beta = 1.099296826809442$$

Cool bear

Can we plot some of these? I’m getting a little lost in the theory.

Sure! Let me just rewrite the OETF in a way that’s a little more consistent with our sRGB work from earlier:

$$E(v) = \begin{cases} A v & v \le V \\ (1+C)\,v^{1/\Gamma} - C & v > V \end{cases}$$

where:

  • $V = 0.018053968510807$
  • $C = \alpha - 1 = 0.099296826809442$
  • $\Gamma = \frac{1}{0.45}$
  • $A = 4.5$
Cool bear

Whoa hey that’s exactly the same formula as sRGB!

Yup, only with a different slope $A$ for the linear bit, a different breakpoint $V$, a different constant $C$, and $\Gamma = 1/0.45 \approx 2.2$ rather than $2.4$.

Here’s a plot of the sRGB gamma curve, zoomed into $v \in [0, 0.01]$

The sRGB OETF's linear segment (solid) and the start of its exponential segment (dotted)

Own work, graphed with Desmos

And here’s the Rec. ITU-R BT.709-6 curve, which we’ll hereafter lovingly refer to as just BT.709 (BT for “broadcast television”):

The BT.709 OETF's linear segment (solid) and the start of its exponential segment (dotted)

Own work, graphed with Desmos
Cool bear

Whoaaaa. Same shape, different constants.
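Same shape indeed, which means one parameterized function can cover both. A sketch (the struct and field names are mine, not from any spec):

```rust
// One piecewise "gamma curve" shape, parameterized:
// E(v) = A·v for v ≤ V, (1+C)·v^(1/Γ) − C for v > V.
struct GammaCurve {
    a: f64,       // linear-segment slope
    v_break: f64, // breakpoint V
    c: f64,       // offset constant
    gamma: f64,   // Γ
}

impl GammaCurve {
    fn encode(&self, v: f64) -> f64 {
        if v <= self.v_break {
            self.a * v
        } else {
            (1.0 + self.c) * v.powf(1.0 / self.gamma) - self.c
        }
    }
}

const SRGB: GammaCurve =
    GammaCurve { a: 12.92, v_break: 0.0031308, c: 0.055, gamma: 2.4 };
const BT709: GammaCurve =
    GammaCurve { a: 4.5, v_break: 0.018053968510807, c: 0.099296826809442, gamma: 1.0 / 0.45 };

fn main() {
    // The two segments should meet at each curve's breakpoint:
    for (name, curve) in [("sRGB", &SRGB), ("BT.709", &BT709)] {
        let v = curve.v_break;
        let linear = curve.a * v;
        let exponential = (1.0 + curve.c) * v.powf(1.0 / curve.gamma) - curve.c;
        println!("{name}: {linear:.6} vs {exponential:.6}");
        assert!((linear - exponential).abs() < 1e-5);
    }
}
```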

Moving on to HDR, value 18 refers to ARIB STD-B67, also Rec. ITU-R BT.2100-2 HLG, which we’ll simply refer to as “HLG” moving forward, for Hybrid log-gamma.

The definition given in the H.265 spec V10 is:

$$V = \begin{cases} a \ln(12 L_c - b) + c & 1 \ge L_c > \frac{1}{12} \\ \sqrt{3}\, L_c^{0.5} & \frac{1}{12} \ge L_c \ge 0 \end{cases}$$

Where $a = 0.17883277$, $b = 0.28466892$, and $c = 0.55991073$.

Cool bear

Can uhhh.. can we make that more readable?

With pleasure:

$$E(v) = \begin{cases} A v^{1/\Gamma} & v \le V \\ a \ln(12v - b) + c & v > V \end{cases}$$

Where $V = \frac{1}{12}$, $A = \sqrt{3}$, $\Gamma = 2$, and the same $a$, $b$, and $c$ values already mentioned.

Cool bear

Hey! There’s no linear segment! And what’s with the twelves?

Well! Because in spirit, lightness levels are no longer $v \in [0, 1]$, they’re $v \in [0, 12]$.

That’s right, HDR is not just 10-bit (or 12-bit), it’s also brighter colors.
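Here’s the HLG OETF as a sketch, including a check that the square-root and log segments meet at v = 1/12, where the encoded signal comes out to exactly 0.5:

```rust
// HLG OETF, constants from ARIB STD-B67 / Rec. ITU-R BT.2100.
fn hlg_encode(v: f64) -> f64 {
    const A: f64 = 0.17883277;
    const B: f64 = 0.28466892;
    const C: f64 = 0.55991073;
    if v <= 1.0 / 12.0 {
        (3.0 * v).sqrt() // same as sqrt(3) * v^0.5
    } else {
        A * (12.0 * v - B).ln() + C
    }
}

fn main() {
    // At the breakpoint: sqrt(3/12) = sqrt(1/4) = 0.5, exactly.
    println!("{:.4}", hlg_encode(1.0 / 12.0)); // prints 0.5000
    // And the full 12× range still encodes to (just about) 1.0:
    println!("{:.4}", hlg_encode(1.0)); // prints 1.0000
}
```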

How white is your white?

There’s a cool demo called Wanna see a whiter white that embeds an HDR video to show you that, when you say #ffffff on a webpage (or rgb(255, 255, 255)), you’re really referring to “the brightest SDR white”, which really depends on your display.

Me, I have a pair of LG 27UP85NP-W computer displays at home, which sport the “VESA DisplayHDR(TM) 400” badge, meaning they should be able to reach all the way to 400 nits (remember, nits are just cd/m², candelas per square meter), and they’ve been tested to reach 413, so that’s cool.

But that’s the absolute highest they’ll go.

Right now I have them using the DCI-P3 color profile:

A list of the color profiles available on my computer display, with DCI-P3 shown selected.

And I have the “High Dynamic Range” option left unchecked in macOS’s Display settings:

The macOS settings app showing my two LG ultrafine screens using display P3 color profile and with high dynamic range checked off.

So… how white is my white? I don’t actually know! Because I don’t have a luminance meter, like the Konica Minolta LS-150:

Cool bear

Foot-lamberts???? Good gravy.

So I can’t measure how much light my screen puts out: I can only compare various shades with my eyes, which adapt quickly to various lighting conditions!

In fact, that’s precisely why sRGB is traditionally displayed with a gamma of around 2.2, suitable for a typical office setting, Rec.709 is displayed with gamma 2.4, for darker living rooms in the evening, and DCI-P3 is displayed with gamma 2.6, for near-dark movie theaters! (DCI for “Digital Cinema Initiative”):

From top to bottom: a gamma 2.2 curve, a gamma 2.4 curve, and a gamma 2.6 curve.

Own work, graphed with Desmos

The higher the display gamma, the deeper the blacks — in other terms, it “improves contrast”.
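A quick way to feel the “deeper blacks” claim is to run the same dark-ish signal through each display gamma. Since $0.1^\gamma = 10^{-\gamma}$, the numbers come out conveniently round:

```rust
fn main() {
    let signal: f64 = 0.1; // a dark grey, as encoded
    for gamma in [2.2, 2.4, 2.6] {
        // A higher display gamma pushes the same signal closer to black.
        println!("gamma {gamma}: {:.5}", signal.powf(gamma));
    }
    // gamma 2.2: 0.00631, gamma 2.4: 0.00398, gamma 2.6: 0.00251
}
```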

But, and that’s where things get confusing, this display gamma is not the same as the encoding gamma that’s used in the OETFs we’ve seen.

The sRGB OETF is pretty close to a Gamma of 2.2:

sRGB OETF (solid) versus gamma 2.2 curve (dotted)

Own work, graphed with Desmos

The Rec.709 OETF, even though it uses $\Gamma = 1/0.45 \approx 2.2$ for the exponential part, is overall better approximated by gamma 1.96:

Rec.709 OETF (solid) versus gamma 1.96 curve (dotted)

Own work, graphed with Desmos

The gamma used in the OETF, again, is really just there to make sure we make the most out of our bits, whereas the display gamma is usually adjustable on computer monitors and TVs, to suit the overall lighting environment: it’s all about contrast.

(JavaScript is required for this)

Gamma is such a fascinating operation for me, because it always maps back to [0,1], as opposed to something like lift:

(JavaScript is required for this)

Or gain:

(JavaScript is required for this)
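If the interactive demos don’t load, here’s the gist in code form. The `lift` and `gain` helpers are hypothetical, written with the common definitions — the point is that gamma is the only one of the three that keeps both endpoints pinned to [0, 1]:

```rust
fn gamma(v: f64, g: f64) -> f64 { v.powf(g) }
fn lift(v: f64, l: f64) -> f64 { v + l * (1.0 - v) } // raises the black point
fn gain(v: f64, g: f64) -> f64 { v * g }             // scales everything, white included

fn main() {
    // Gamma always maps [0, 1] back onto [0, 1]: the endpoints stay put.
    assert_eq!(gamma(0.0, 2.2), 0.0);
    assert_eq!(gamma(1.0, 2.2), 1.0);
    // Lift moves black up...
    println!("lift(0.0, 0.1) = {}", lift(0.0, 0.1)); // prints 0.1
    // ...and gain pushes white out of range.
    println!("gain(1.0, 1.2) = {}", gain(1.0, 1.2)); // prints 1.2
}
```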

Conclusion

I left this article in that state for months — working on other projects in the meantime.

At some point, I thought I’d discovered that telling DaVinci Resolve to use an HDR color space while editing made the stars go away. But I never could really put my finger on what was going wrong.

And then it happened again:

A screenshot of DaVinci Resolve showing the same kind of artifacts.

Like before, it’s barely noticeable in Full resolution:

A few dots around letters

And impossible to ignore in Half resolution:

The full star pattern

I discovered that in the Color tab, increasing the gamma slider makes the problem less visible (although it also changes the colors), and decreasing the gamma slider makes it much, much worse:

I even found a way to isolate just the artifacts, so you can see a night full of stars:

My conclusion was that a gamut mapping node works around the problem, since that’s how I dealt with it for my unsynn video:

A gamut mapping node after the 3D keyer node

But you know what? It doesn’t work anymore. My workaround workaroundn’t.

And so, the mystery remains complete. It’s probably a floating point thing. But which one? Who knows. Not me! Not me.
