Lies we tell ourselves to keep using Golang
In the two years since I posted I want off Mr Golang's Wild Ride, it has made the rounds time and time again: on Reddit, on Lobste.rs, on HackerNews, and elsewhere.
And every time, it elicits the same responses:
- You talk about Windows: that's not what Go is good at! (Also, who cares?)
- This is very one-sided: you're not talking about the good sides of Go!
- You don't understand the compromises Go makes.
- Large companies use Go, so it can't be that bad!
- Modelling problems "correctly" is too costly, so caring about correctness is moot.
- Correctness is a spectrum, Go lets you trade some for development speed.
- Your go-to is Rust, which also has shortcomings, so your argument is invalid.
- etc.
There's also a vocal portion of commenters who wholeheartedly agree with the rant, but let's focus on unpacking the apparent conflict here.
I'll first spend a short amount of time pointing out clearly disingenuous arguments, to get them out of the way, and then I'll move on to the fairer comments, addressing them as best I can.
The author is a platypus
When you don't want to hear something, one easy way to not have to think about it at all is to convince yourself that whoever is saying it is incompetent, or that they have ulterior motives.
For example, the top comment on HackerNews right now starts like this:
The author fundamentally misunderstands language design.
As an impostor syndrome enthusiast, I would normally be sympathetic to such comments. However, it is a lazy and dismissive way to consider any sort of feedback.
It doesn't take much skill to notice a problem.
In fact, as developers get more and more senior, they tend to ignore more and more problems, because they've gotten so used to them. That's the way it's always been done; they've learned to live with those problems, and they've stopped questioning them.
Junior developers however, get to look at everything again with a fresh pair of eyes: they haven't learned to ignore all the quirks yet, so it feels uncomfortable to them, and they tend to question it (if they're made to feel safe enough to voice their concerns).
This alone is an extremely compelling reason to hire junior developers, which I wish more companies would do, instead of banking on the fact that "seniors can get up-to-speed with our current mess faster".
As it happens, I am not a junior developer, far from it. One way or another, over the past 12 years, seven different companies have found an excuse to pay me enough money to cover rent and then some.
I did, in fact, design a language all the way back in 2009 (when I was a wee programmer baby), focused mainly on syntactic sugar over C. At the time it was deemed interesting enough to warrant an invitation to OSCON (my first time in Portland, Oregon, the capital of grunge, coffee, poor weather and whiteness), where I got to meet other young and not-so-young whippersnappers (working on Io, Ioke, Wren, JRuby, Clojure, D, Go, etc.).
It was a very interesting conference: I'm still deeply ashamed by the presentation I gave, but I remember fondly the time an audience member asked the Go team "why did you choose to ignore any research about type systems since the 1970s?" I didn't fully understand the implications at the time, but I sure do now.
I have since thoroughly lost interest in my language, because I've started caring about semantics a lot more than syntax, which is why I also haven't looked at Zig, Nim, Odin, etc: I am no longer interested in "a better C".
But all of that is completely irrelevant. It doesn't matter who points out that "maybe we shouldn't hit ourselves in the head with a rake repeatedly": that feedback ought to be taken under advisement no matter who it comes from.
Mom smokes, so it's probably okay
One of the least effective ways to shop for technologies (which CTOs, VPs of engineering, principals, senior staff and staff engineers need to do regularly) is to look at what other companies are using.
It is a great way to discover technologies to evaluate (that or checking ThoughtWorks' Tech Radar), but it's far from enough.
A piece from company X on "how they used technology Y" will very rarely reflect the true cost of adopting that technology. By the time the engineers behind the post have been bullied into filling out the company's tech blog, after months of an uphill battle, the decision has been made, and there's no going back.
This kind of blog doesn't lend itself to coming out and admitting that mistakes were made. It's supposed to make the company look good. It's supposed to attract new hires. It's supposed to help us stay relevant.
Typically, scathing indictments of technologies come from individuals who have simply decided that they, as a person, can afford to make a lot of people angry. Companies typically cannot.
There are some exceptions: Tailscale's blog is refreshingly candid, for example. But when reading articles like netaddr.IP: a new IP address type for Go, or Hey linker, can you spare a meg? you can react in different ways.
You can be impressed that very smart folks are using Go, right now, and that they have gone all the way to Davy Jones' Locker and back to solve complex problems that ultimately help deliver value to customers.
Or you can be horrified, as you realize that those complex problems only exist because Go is being used. Those complex problems would not exist in other languages, not even in C, which I can definitely not be accused of shilling for (and would not recommend as a Go replacement).
A lot of the pain in the netaddr.IP article is caused by:
- Go not having sum types — making it really awkward to have a type that is "either an IPv4 address or an IPv6 address" (a sketch of the usual workaround follows this list)
- Go choosing which data structures you need — in this case, it's the one-size-fits-all slice, for which you pay 24 bytes on 64-bit machines.
- Go not letting you do operator overloading, harkening back to the Java days where a == b isn't the same as a.equals(b)
- Go's lack of support for immutable data — the only way to prevent something from being mutated is to only hand out copies of it, and to be very careful to not mutate it in the code that actually has access to the inner bits.
- Go's unwillingness to let you make an opaque "newtype". The only way to do it is to make a separate package and use interfaces for indirection, which is costly and awkward.
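To make the first bullet concrete, here's a minimal sketch of what "either an IPv4 address or an IPv6 address" tends to look like without sum types (a hypothetical IP type, not the actual netaddr.IP implementation): a struct plus a tag field, where nothing stops the tag and the payload from disagreeing.

package main

import "fmt"

// Without sum types, the "one of two variants" idea degrades into a
// struct with a boolean tag; every method must re-check the tag by hand.
type IP struct {
	is6 bool
	v4  [4]byte
	v6  [16]byte
}

func (ip IP) String() string {
	if ip.is6 {
		return fmt.Sprintf("%x::%x", ip.v6[:2], ip.v6[14:])
	}
	return fmt.Sprintf("%d.%d.%d.%d", ip.v4[0], ip.v4[1], ip.v4[2], ip.v4[3])
}

func main() {
	ip := IP{v4: [4]byte{127, 0, 0, 1}} // is6 left at its zero value means "IPv4"
	fmt.Println(ip)
}

With sum types, the invalid combinations simply wouldn't be constructible in the first place.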
Unless you're out for confirmation bias, that whole article is a very compelling argument against using Go for that specific problem.
And yet Tailscale is using it. Are they wrong? Not necessarily! Because their team is made up of a bunch of Go experts. As evidenced by the other article, about the Go linker.
Because they're Go experts, they know the cost of using Go upfront, and they're equipped to make the decision whether or not it's worth it. They know how Go works deep down (something Go marketing pinky-swears you never need to worry about, why do you ask?), so if they hit edge cases, they can dive into it, fix it, and wait for their fix to be upstreamed (if ever).
But chances are, this is not you. This is not your org. You are not Google either, and you cannot afford to build a whole new type system on top of Go just to make your project (Kubernetes) work at all.
The good parts
But okay - Tailscale's usage of Go is pretty out there still. Just like my 2020 piece about Windows raised an army of "but that's not what Go is good for" objections, you could dismiss Tailscale's posts as "well that's on you for wanting to ship stuff on iOS / doing low-level network stuff".
Fair enough! Okay. Let's talk about what makes Go compelling.
Go is a pretty good async runtime, with opinionated defaults, a state-of-the-art garbage collector with two knobs, and tooling that would make C developers jealous, if they bothered looking outside their bubble.
This also describes Node.js from the very start (which is essentially libuv + V8), and I believe it also describes "modern Java", with APIs like NIO. I haven't checked what's happening in Java land too closely, though, so if you're looking for an easy inaccuracy to ignore this whole article, there you go: that's a freebie.
Because the async runtime is core to the language, it comes with tooling that does make Rust developers jealous! I talk about it in Request coalescing in async Rust, for example.
Go makes it easy to dump backtraces (stack traces) for all running goroutines in a way tokio doesn't, at this time. It is also able to detect deadlocks, it comes with its own profiler, it seemingly lets you not worry about the color of functions, etc.
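For what it's worth, getting that all-goroutines dump is a one-liner with the standard library (and sending SIGQUIT to a running Go program prints a similar dump for free). A toy sketch, not taken from any real service:

package main

import (
	"os"
	"runtime/pprof"
	"time"
)

func main() {
	// Park a few goroutines so the dump has something to show.
	for i := 0; i < 3; i++ {
		go func() { time.Sleep(time.Hour) }()
	}
	time.Sleep(100 * time.Millisecond)

	// Write a stack trace for every live goroutine to stdout.
	pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
}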
Go's tooling around package management, refactoring, cross-compiling, etc., is easy to pick up and easy to love — and certainly feels at first like a definite improvement over the many person-hours lost to the whims of pkg-config, autotools, CMake, etc. Until you reach some of the arbitrary limitations that simply do not matter to the Go team, and then you're on your own.
All those and more explain why many, including me, were originally enticed by it: enough to write piles and piles of it, until its shortcomings finally become impossible to ignore, by which point it's too late. You've made your bed, and now you've got to make yourself feel okay about lying in it.
But one really good bit does not a platform make.
The really convenient async runtime is not the only thing you adopted. You also adopted a very custom toolchain, a build system, a calling convention, a single GC (whether it works for you or not), the set of included batteries, some of which you CAN swap out, but the rest of the ecosystem won't, and most importantly, you adopted a language that happened by accident.
I will grant you that caring too much about something is grounds for suspicion. It is no secret that a large part of what comes out of academia is woefully inapplicable in the industry at this time: it is easy to lose oneself in the abstract, and come up with convoluted schemes to solve problems that do not really exist for anyone else.
I imagine this is the way some folks feel about Rust.
But caring too little about something is dangerous too.
Evidently, the Go team didn't want to design a language. What they really liked was their async runtime. And they wanted to be able to implement TCP, and HTTP, and TLS, and HTTP/2, and DNS, etc., on top of it. And then web services on top of all of that.
And so they didn't. They didn't design a language. It sorta just "happened".
Because it needed to be familiar to "Googlers, fresh out of school, who probably learned some Java/C/C++/Python" (Rob Pike, Lang NEXT 2014), it borrowed from all of these.
Just like C, it doesn't concern itself with error handling at all. Everything is a big furry ball of mutable state, and it's on you to add ifs and elses to VERY CAREFULLY (and very manually) ensure that you do not propagate invalid data.
Just like Java, it tries to erase the distinction between "value" and "reference", and so it's impossible to tell from the callsite if something is getting mutated or not:
import "fmt"
type A struct {
Value int
}
func main() {
a := A{Value: 1}
a.Change()
fmt.Printf("a.Value = %d\n", a.Value)
}
Depending on whether the signature for Change is this:
func (a A) Change() {
a.Value = 2
}
Or this:
func (a *A) Change() {
a.Value = 2
}
...the local a in main will either get mutated or not.
And since, just like C and Java, you do not get to decide what is mutable and what is immutable (the const keyword in C is essentially advisory, kinda), passing a reference to something (to avoid a costly copy, for example) is fraught with risk, like it getting mutated from under you, or it being held somewhere forever, preventing it from being freed (a lesser, but very real, problem).
Go fails to prevent many other classes of errors: it makes it easy to accidentally copy a mutex, rendering it completely ineffective, or to leave struct fields uninitialized (or rather, initialized to their zero value), resulting in countless logic errors.
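Here's what the mutex pitfall looks like in a minimal sketch (hypothetical Counter type): a value receiver silently copies the lock along with the struct, so the lock protects nothing. go vet flags this; the compiler does not.

package main

import (
	"fmt"
	"sync"
)

type Counter struct {
	mu sync.Mutex
	n  int
}

// Value receiver: every call operates on a copy of Counter, including a
// copy of the mutex, so there is no mutual exclusion at all.
func (c Counter) Incr() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++ // mutates the copy, not the caller's value
}

func main() {
	var c Counter
	c.Incr()
	c.Incr()
	fmt.Println(c.n) // prints 0
}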
Taken in isolation, each of these and more can be dismissed as "just a thing to be careful about". And breaking down an argument to its smallest pieces, rebutting them one by one, is a self-defense tactic used by those who cannot afford to adjust their position in the slightest.
Which makes perfect sense, because Go is really hard to move away from.
Go is an island
Unless you use cgo (but cgo is not Go), you are living in the Plan 9 cinematic universe.
The Go toolchain does not use the assembly language everyone else knows about. It does not use the linkers everyone else knows about. It does not let you use the debuggers everyone knows about, the memory checkers everyone knows about, or the calling conventions everyone else has agreed to suffer, in the interest of interoperability.
Go is closer to closed-world languages than it is to C or C++. Even Node.js, Python and Ruby are not as hostile to FFI.
To a large extent, this is a feature: being different is the point. And it comes with its benefits. Being able to profile the internals of the TLS and HTTP stacks the same way you do your business logic is fantastic. (Whereas in dynamic languages, the stack trace stops at OpenSSL). And that code takes full advantage of the lack of function coloring: it can let the runtime worry about non-blocking I/O and scheduling.
But it comes at a terrible cost, too. There is excellent tooling out there for many things, which you cannot use with Go (you can use it for the cgo parts, but again, you should not use cgo if you want the Real Go Experience). All the "institutional knowledge" there is lost, and must be relearned from scratch.
It also makes it extremely hard to integrate Go with anything else, whether it's upstream (calling C from Go) or downstream (calling Go from Ruby). Both these scenarios involve cgo, or, if you're unreasonably brave, a terrifying hack.
Note: as of Go 1.13, binary-only packages are no longer supported
Making Go play nice with another language (any other language) is really hard. Calling C from Go, nevermind the cost of crossing the FFI boundary, involves manual descriptor tracking, so as to not break the GC. (WebAssembly had the same problem before reference types!)
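For illustration, here's roughly what the smallest "calling C from Go" program looks like (the greet function in the preamble is made up for the example). Every value crossing the boundary is copied and freed by hand, and the cgo pointer-passing rules restrict what Go memory C is allowed to see:

package main

/*
#include <stdio.h>
#include <stdlib.h>

static void greet(const char* name) { printf("hello, %s\n", name); }
*/
import "C"

import "unsafe"

func main() {
	// CString allocates C memory and copies the Go string into it;
	// forgetting the free leaks, since the Go GC does not manage it.
	name := C.CString("cgo")
	defer C.free(unsafe.Pointer(name))
	C.greet(name)
}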
Calling Go from anything involves shoving the whole Go runtime (GC included) into whatever you're running: expect a very large static library and all the operational burden of running Go code as a regular executable.
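In that direction, the usual route is a C-ABI shared library. A minimal sketch (hypothetical Add function), built with go build -buildmode=c-shared -o libadd.so, which produces a header plus a library that drags the whole Go runtime along:

package main

import "C"

// Add has a C-compatible signature; the generated header lets C, Python
// (ctypes), Ruby (FFI), and friends call into it.

//export Add
func Add(a, b C.int) C.int {
	return a + b
}

// main is required by -buildmode=c-shared, but the host never calls it.
func main() {}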
After spending years doing those FFI dances in both directions, I've reached the conclusion that the only good boundary with Go is a network boundary.
Integrating with Go is relatively painless if you can afford to pay the latency cost of doing RPC over TCP (whether it's a REST-ish HTTP/1 API, something like JSON-RPC, a more complicated scheme like gRPC, etc.). It's also the only way to make sure it doesn't "infect" your whole codebase.
But even that is costly: you need to maintain invariants on both sides of the boundary. In Rust, one would typically reach for something like serde for that, which, combined with sum types and the lack of zero values, lets you make reasonably sure that what you're holding is what you think you're holding: if a number is zero, it was meant to be zero, it wasn't just missing.
(All this goes out the window if you use a serialization format like protobuf, which has all the drawbacks of Go's type system and none of the advantages).
That still leaves you with the Go side of things, where unless you use some sort of validation package religiously, you need to be ever vigilant not to let bad data slip in, because the compiler does nothing to help you maintain those invariants.
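Here's a small sketch of what that vigilance looks like in practice (hypothetical Transfer type): after json.Unmarshal, a field that was missing and a field that was deliberately zero look exactly the same, so every check is hand-written:

package main

import (
	"encoding/json"
	"fmt"
)

type Transfer struct {
	AmountCents int64  `json:"amount_cents"`
	Currency    string `json:"currency"`
}

func main() {
	var t Transfer
	// "amount_cents" is missing entirely, but Unmarshal leaves it at its
	// zero value and reports no error.
	if err := json.Unmarshal([]byte(`{"currency": "EUR"}`), &t); err != nil {
		panic(err)
	}

	// Hand-rolled validation, to be remembered at every boundary:
	if t.Currency == "" || t.AmountCents <= 0 {
		fmt.Println("reject: missing or zero fields (was that zero intentional?)")
		return
	}
	fmt.Println("transfer ok:", t.AmountCents, t.Currency)
}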
And that brings us to the larger overall problem of the Go culture.
All or nothing (so let's do nothing)
I've mentioned "leaving struct fields uninitialized". This happens easily when you make a code change from something like this:
package main
import "log"
type Params struct {
a int32
}
func work(p Params) {
log.Printf("Working with a=%v", p.a)
}
func main() {
work(Params{
a: 47,
})
}
To something like this:
package main
import "log"
type Params struct {
a int32
b int32
}
func work(p Params) {
log.Printf("Working with a=%v, b=%v", p.a, p.b)
}
func main() {
work(Params{
a: 47,
})
}
That second program prints this:
2009/11/10 23:00:00 Working with a=47, b=0
We've essentially changed the function signature, but forgot to update a callsite. This doesn't bother the compiler at all.
Oddly enough, if our function was structured like this:
package main
import "log"
func work(a int32, b int32) {
log.Printf("Working with a=%v, b=%v", p.a, p.b)
}
func main() {
work(47)
}
Then we'd get a compile error:
./prog.go:10:7: not enough arguments in call to work
have (number)
want (int32, int32)
Go build failed.
Why does the Go compiler suddenly care if we provide explicit values now? If the language was self-consistent, it would let me omit both parameters, and just default to zero.
Because one of the tenets of Go is that zero values are good, actually.
See, they let you go fast. If you did mean for b to be zero, you can just not specify it.
And sometimes it works fine, because zero values do mean something:
package main
import "log"
type Container struct {
Items []int32
}
func (c *Container) Inspect() {
log.Printf("We have %v items", len(c.Items))
}
func main() {
var c Container
c.Inspect()
}
2009/11/10 23:00:00 We have 0 items
Program exited.
This is fine! Because the []int32 slice is actually a reference type, and its zero value is nil, and len(nil) just returns zero, because "obviously", a nil slice is empty.
And sometimes it's not fine, because zero values don't mean what you think they mean:
package main
type Container struct {
Items map[string]int32
}
func (c *Container) Insert(key string, value int32) {
c.Items[key] = value
}
func main() {
var c Container
c.Insert("number", 32)
}
panic: assignment to entry in nil map
goroutine 1 [running]:
main.(*Container).Insert(...)
/tmp/sandbox115204525/prog.go:8
main.main()
/tmp/sandbox115204525/prog.go:13 +0x2e
Program exited.
In that case, you should've initialized the map first (which is also actually a reference type), with make, or with a map literal.
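The usual fix is a constructor that performs the make call, which is a convention the compiler does nothing to enforce. A minimal sketch:

package main

import "log"

type Container struct {
	Items map[string]int32
}

// NewContainer initializes the map so Insert is safe to call. Nothing
// stops a caller from writing `var c Container` and skipping it entirely.
func NewContainer() *Container {
	return &Container{Items: make(map[string]int32)}
}

func (c *Container) Insert(key string, value int32) {
	c.Items[key] = value
}

func main() {
	c := NewContainer()
	c.Insert("number", 32)
	log.Printf("We have %v items", len(c.Items))
}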
That alone is enough to cause incidents and outages that wake people up at night, but everything gets worse real fast when you consider the Channel Axioms:
- A send to a nil channel blocks forever
- A receive from a nil channel blocks forever
- A send to a closed channel panics
- A receive from a closed channel returns the zero value immediately
Because there had to be a meaning for nil channels, this is what was picked. Good thing there's pprof to find those deadlocks!
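The first axiom is easy to reproduce in a toy program:

package main

// A send on a nil channel blocks forever. With no other goroutines left to
// run, the runtime at least detects the deadlock and crashes; in a busy
// server, that goroutine would just hang silently instead.
func main() {
	var ch chan int // declared but never made: ch is nil
	ch <- 42        // fatal error: all goroutines are asleep - deadlock!
}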
And because there's no way to "move" out of values, there has to be a meaning for receiving from and sending to closed channels, too, because even after you close them you can still interact with them.
(Whereas in a language like Rust, a channel closes when its Sender is dropped, which only happens when nobody can touch it again, ever. The same probably applies to C++ and a bunch of other languages, this is not new stuff).
"Zero values have meaning" is naive, and clearly untrue when you consider the inputs of, like... almost everything. There's so many situations when values need to be "one of these known options, and nothing else", and that's where sum types come in (in Rust, that's enums).
And Go's response to that is: just be careful. Just like C's response before it.
Just don't access the return value if you haven't checked the error value. Just have a half-dozen people carefully review each trivial code change to make sure you're not accidentally propagating a nil, zero, or empty string way too deep into your system.
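Nothing ties the return value to the error, so the discipline is entirely manual, as in this sketch:

package main

import (
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("does-not-exist.txt")
	// If this check is forgotten (or subtly wrong), f is nil and the calls
	// below panic at runtime; the compiler has no opinion either way.
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()
	fmt.Println("opened:", f.Name())
}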
It's just another thing to watch out for.
It's not like you can prevent all problems anyway.
That is true! There's a ton of things to watch out for, always. Something as simple as downloading a file to disk... isn't! At all!
And you can write logic errors in just about every language! And if you try hard enough I'm sure you can drive a train straight into a tree! It's just much easier with a car.
The fallacy here is that because it is impossible to solve everything, we shouldn't even attempt to solve some of it. By that same logic, it's always worthless to support any individual financially, because it does nothing to help every other individual who's struggling.
And this is another self-defense tactic: to refuse to consider anything but the most extreme version of a position, and point out how ridiculous it is (ignoring the fact that nobody is actually defending that ridiculous, extreme position).
So let's talk about that position.
"Rust is perfect and you're all idiots"
I so wish that was how I felt, because it would be so much simpler to explain.
That fantasy version of my argument is so easy to defeat, too. "How come you use Linux then? That's written in C". "Unsafe Rust is incredibly hard to write correctly, how do you feel about that?"
The success of Go is due in large part to it having batteries included and opinionated defaults.
The success of Rust is due in large part to it being easy to adopt piecemeal and playing nice with others.
They are both success stories, just very different ones.
If the boogeyman is to be believed, "Rust shills" would have everyone immediately throw away everything, and replace it with The Only Good Language Out there.
This is so very far from what's happening in the real world, it's tragic.
Firefox is largely a C++ codebase, but ships several crucial components in Rust. The Android project recently reimplemented its entire Bluetooth stack in Rust. Rust cryptography code has found its way into Python, Rust HTTP code has found its way into curl (as one of many available backends), and the Linux kernel Rust patches are looking better every round.
None of these are without challenges, and none of the people involved are denying said challenges. But all of these are incremental and pragmatic, very progressively porting parts to a safer language where it makes sense.
We are very far from a "throwing the baby out with the bathwater" approach. The Rust codegen backend literally everyone uses is a mountain of C++ code (LLVM). The alternatives are not competitors by any stretch of the imagination, except maybe for another mountain of C++ code.
The most hardcore Rust users are the most vocal about issues like build times, the lack of certain language features (I just want GATs!), and all the other shortcomings everyone else is also talking about.
And they're also the first to be on the lookout for other, newer languages, that tackle the same kind of problems, but do it even better.
But as with the "questioning your credentials" angle, this is all irrelevant. The current trends could be dangerous snake oil and we could have literally no decent alternative, and it would still be worth talking about. No matter who raises the point!
Creating false dichotomies isn't going to help resolve any of this.
Folks who develop an allergic reaction to "big balls of mutable state without sum types" tend to gravitate towards languages that give them control over mutability and lifetimes, and let them build abstractions. That the pair of languages in question happens to often be Go and Rust is immaterial. Sometimes it's C and Haskell. Sometimes it's ECMAScript and Elixir. I can't speak to those, but they do happen.
You don't have to choose between "going fast" and "modelling literally every last detail of the problem space". And you're not stuck doing one or the other if you choose Go or Rust.
You can, at great cost, write extremely careful Go code that stays far away from stringly-typed values and constantly checks invariants — you just get no help from the compiler whatsoever.
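Here's roughly what that careful style looks like (hypothetical Port type): an unexported field plus a validating constructor, with nothing stopping anyone from bypassing both:

package main

import (
	"errors"
	"fmt"
)

// Port is a "careful Go" newtype: the field is unexported and the blessed
// way in is NewPort. The zero value Port{} remains constructible, though,
// and the compiler won't object.
type Port struct {
	n uint16
}

func NewPort(n int) (Port, error) {
	if n < 1 || n > 65535 {
		return Port{}, errors.New("port out of range")
	}
	return Port{n: uint16(n)}, nil
}

func main() {
	p, err := NewPort(8080)
	if err != nil {
		panic(err)
	}
	fmt.Println("port:", p.n)

	var zero Port // invariant silently violated, no help from the compiler
	fmt.Println("port:", zero.n)
}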
And you can, fairly easily, decide not to care about a whole bunch of cases when writing Rust code. For example, if you're not writing a low-level command-line utility like ls, you can decide to only care about paths that are valid UTF-8 strings by using camino.
When handling errors, it is extremely common to list a few options we do care about and want to do special handling for, and shove everything else into an "Other" or "Internal" or "Unknown" variant, which we can flesh out later as needed, when reviewing logs.
The "correct" way to assume an optional value is set, is to assert that it is,
not to use it regardless. That's the difference between calling json.Unmarshal
and crossing your fingers, and calling
unwrap()
on an Option<T>
.
And it's so much easier to do it correctly when the type system lets you spell out what the options are — even when it's as simple as "ok" or "not ok".
Which brings me to the next argument, by far the most reasonable of the bunch.
Go as a prototyping/starter language
We've reached the fifth stage of grief: acceptance.
Fine. It may well be that Go is not adequate for production services unless your shop is literally made up of Go experts (Tailscale) or you have infinite money to spend on engineering costs (Google).
But surely there's still a place for it.
After all, Go is an easy language to pick up (because it's so small, right?), and a lot of folks have learned it by now, so it's easy to recruit Go developers, so we can get lots of them on the cheap and just uhhh prototype a few systems?
And then later when things get hard (as they always do at scale) we'll either rewrite it to something else, or we'll bring in experts, we'll figure something out.
Except there is no such thing as throwaway code.
All engineering organizations I've ever seen are EXTREMELY rewrite-averse, and for good reason! They take time, orchestrating a seamless transition is hard, details get lost in the shuffle, you're not shipping new features while you're doing that, you have to retrain your staff to be effective at the new thing, etc.
Tons of good, compelling reasons.
So very few things eventually end up being rewritten. And as more and more components get written in Go, there's more and more reason to keep doing that: not because it's working particularly well for you, but because interacting with the existing codebases from literally anything else is so painful (except over the network, and even then... see "Go is an island" above).
So things essentially never improve. All the Go pitfalls, all the things the language and compiler don't help you prevent, are an issue for everyone, fresh or experienced. Linters help some, but can never do quite as much as the compiler does for languages that took these problems seriously to begin with. And they slow down development, cutting into the "fast development" promise.
All the complexity that doesn't live in the language now lives in your codebase. All the invariants you don't have to spell out using types, you now have to spell out using code: the signal-to-noise ratio of your (very large) codebases is extremely poor.
Because it has been decided that abstractions are for academics and fools, and all you really need is slices and maps and channels and funcs and structs, it becomes extremely hard to follow what any program is doing at a high level, because everywhere you look, you get bogged down in imperative code doing trivial data manipulation or error propagation.
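A generic sketch of that shape (hypothetical config loader, not lifted from any real codebase): one interesting call surrounded by hand-written error propagation and zero-value checks:

package config

import (
	"encoding/json"
	"fmt"
	"os"
)

type Config struct {
	ListenAddr string `json:"listen_addr"`
}

// Load reads and parses a config file. The actual logic is one Unmarshal
// call; the rest is manual plumbing the compiler can't write for you.
func Load(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading %s: %w", path, err)
	}
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, fmt.Errorf("parsing %s: %w", path, err)
	}
	if cfg.ListenAddr == "" {
		return nil, fmt.Errorf("%s: listen_addr is required", path)
	}
	return &cfg, nil
}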
Because function signatures don't tell you much of anything (does this mutate data? does it hold onto it? is a zero value there okay? does it start a goroutine? can that channel be nil? what types can I really pass for this interface{} param?), you rely on documentation, which is costly to update, and costlier still not to update, resulting in more and more bugs.
The very reason I don't consider Go a language "suitable for beginners" is precisely that its compiler accepts so much code that is very clearly wrong.
It takes a lot of experience about everything around the language, everything Go willfully leaves as an exercise to the writer, to write semi-decent Go code, and even then, I consider it more effort than it's worth.
The "worse is better" debate was never about some people wanting to feel superior by adding needless complexity, then mastering it.
Quite the contrary, it's an admission that humans suck at maintaining invariants. All of us. But we are capable of building tools that can help us do that. And focusing our efforts on that has an upfront cost, but that cost is well worth it.
I thought we'd moved past the notion that "programming is typing on a keyboard" long ago, but when I keep reading "but it's fast to write lots of Go!", I'm not so sure.
Inherent complexity does not go away if you close your eyes.
When you choose not to care about complexity, you're merely pushing it onto other developers in your org, ops people, your customers, someone. Now they have to work around your assumptions to make sure everything keeps running smoothly.
And nowadays, I'm often that someone, and I'm tired of it.
Because there is a lot to like in Go at first, because it's so easy to pick up, but so hard to move away from, and because the cost of choosing it in the first place reveals itself slowly over time, and compounds, only becoming unbearable when it's much too late, this is not a discussion we can afford to ignore as an industry.
Until we demand better of our tools, we are doomed to be woken up in the middle of the night, over and over again, because some nil value slipped in where it never should have.
It's the Billion Dollar Mistake all over again.
What did we learn?
Here's a list of lies we tell ourselves to keep using Golang:
- Others use it, so it must be good for us too
- Everyone who has concerns about it is an elitist jerk
- Its attractive async runtime and GC make up for everything else
- Every language design flaw is ok in isolation, and ok in aggregate too
- We can overcome these by "just being careful" or adding more linters/eyeballs
- Because it's easy to write, it's easy to develop production software with
- Because the language is simple, everything else is, too
- We can do just a little of it, or just at first, or we can move away from it easily
- We can always rewrite it later