Despite their peaceful appearance, game developers actually lead thrilling lives! Here are three things I learned (or re-learned) yesterday that I'd like to share with you, in the form of assumptions that turned out to be false.

VSync is relatively straightforward. Right?

As an obsessive-compulsive, bipolar, perfectionist game dev, getting your game to run smoothly on all kinds of operating system, graphics card, and driver combinations is something of a holy grail. Many look for it, but let's be honest here, it never really turns out as expected.

See, it would all be easier if 1000 / 60 were a round number. Except it's not. It's actually 16.6_ - that is, 16 dot 6 ad infinitum. Our lives as gamedevs would also be significantly easier if default OS timer resolutions were finer than, say, one millisecond.

But, Amos, I hear you say, have you never heard of QueryPerformanceCounter? Sub-millisecond is totally a thing! Well, sure. But as it turns out, it doesn't even really matter.
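For reference, sub-millisecond timing with QueryPerformanceCounter looks something like this - a minimal sketch (now_ms is my own helper name, and the snippets below reuse it):

#include <windows.h>

/* milliseconds since some arbitrary epoch, with sub-millisecond precision */
static double now_ms(void) {
    static LARGE_INTEGER freq; /* zero until first call */
    LARGE_INTEGER t;
    if (freq.QuadPart == 0) QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t);
    return (double)t.QuadPart * 1000.0 / (double)freq.QuadPart;
}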

When you want your game to run at a smooth 60 frames per second, aka "twice the framerate of Tomb Raider on XBox One", there is one obvious solution that comes to mind. Provided your game can do all its computations and rendering in less than 16ms, surely you can just measure how much time has passed since the last frame and sleep for the remainder, right? So let's say it takes about 8ms to update and render a frame.
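In code, the naive version looks something like this - a sketch building on now_ms() above; update_game and render_frame are hypothetical stand-ins for your engine:

void update_game(double dt) { (void)dt; /* game logic here */ }
void render_frame(void)     { /* draw calls here */ }

int main(void) {
    const double target = 1000.0 / 60.0; /* 16.666... ms - not a round number */
    for (;;) {
        double start = now_ms();
        update_game(target / 1000.0);
        render_frame();
        double elapsed = now_ms() - start;    /* ~8ms in our example */
        if (elapsed < target)
            Sleep((DWORD)(target - elapsed)); /* may sleep less, may sleep more */
    }
}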

Then you could just sleep for... about 8ms, and everything would be fine, right? Right? Wrong. See, you can either enable VSync or disable it. If you disable VSync, you'll have tearing problems: tearing happens when what is shown on the screen is a mix of several frames: the previous one, and the one currently being drawn. Sometimes it gets even worse.

And if you enable VSync - aka Vertical Synchronization - then screen updates are synchronized with the monitor's refresh rate. Let's say we live in a perfect world and every player of your game ever has a 60Hz monitor; that should line up perfectly with your 60FPS game engine, right? RIGHT?

Well, no. First off, as mentioned earlier, 16.6666 is not a whole number. Then, measuring the number of milliseconds since program start is not all that precise either. And third, sleeping for a given number of milliseconds is not guaranteed: it might sleep less, or it might sleep more. And what does the driver say when render + game logic + sleep takes more than 16.666ms?

Well, well, well, Mr. Gamedev. I see you've taken more than 1000 / 60 ms to compute a frame. Would you like me to skip one, maybe?

That's what happens. You skip frames, not because your game is too slow, not because the hardware is too slow, just because the operating system / GPU driver is too dumb. And then the game doesn't look smooth, and then you can kiss your IGF nomination goodbye.

But then there are other solutions! How about we let VSync pace us instead: enable it, measure how long each frame actually took, and run however many fixed-timestep game updates fit in that time?
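That's the classic accumulator scheme. A sketch, again reusing the helpers from above (whether the buffer swap actually blocks on VSync is, of course, the whole question):

/* inside main(), replacing the naive loop above */
const double dt = 1000.0 / 60.0;   /* fixed update step */
double accumulator = 0.0;
double last = now_ms();
for (;;) {
    double current = now_ms();
    accumulator += current - last; /* supposedly ~16.6ms per frame... */
    last = current;
    while (accumulator >= dt) {    /* run as many updates as fit */
        update_game(dt / 1000.0);
        accumulator -= dt;
    }
    render_frame();                /* swap blocks on VSync - in theory */
}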

That sounds great, right? Except 60Hz is a lie. If you do that, chances are you'll notice the "frame time" oscillates not just between 16ms and 18ms as you'd expect (that's the best case), but between 13ms and 21ms on some hardware/OS/driver combinations!

And then what happens?

And then your whole game looks even worse than before because sometimes there's one game update, sometimes there are none, and sometimes there are two.

My solution

It's not perfect, but it works for us so far: instead of targeting 60 UPS (game updates per second), I'm targeting 120 UPS. Up to two game updates are done per rendered frame, never more. It looks smooth, and as an added bonus, the physics simulation is more precise than before (things go fast!).
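Here's roughly what that looks like, continuing the sketches above (what to do with leftover accumulated time isn't specified; dropping it is one option):

/* 120 UPS variant: 8.333ms updates, at most two per rendered frame */
const double dt = 1000.0 / 120.0;
double accumulator = 0.0;
double last = now_ms();
for (;;) {
    double current = now_ms();
    accumulator += current - last;
    last = current;
    int steps = 0;
    while (accumulator >= dt && steps < 2) { /* zero, one, or two updates */
        update_game(dt / 1000.0);
        accumulator -= dt;
        steps++;
    }
    if (accumulator > dt)
        accumulator = dt;          /* drop leftover time instead of spiraling */
    render_frame();
}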

One interesting thing to note about VSync on OSX: when a window is completely hidden, OSX doesn't bother synchronizing the render framerate with the monitor's refresh rate - it goes up to 400FPS or more behind your back. That explains why, for the longest time, XBMC would make my MacBook Pro go hot and steamy when it was in the background, seemingly inactive. Fun times!

Let's move on to the second surprise.

MinGW is basically GCC. Right?

Well... yes and no. Mostly yes, but what I (re-)discovered yesterday was that including windows.h makes a certain number of words "reserved", as in "they're #define'd to some value and you can't ever use them in a declaration again".

In my case, I was compiling something that used cairo and discovered that a whole batch of cairo's symbols were affected.

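near and far are the classic offenders of this kind: windows.h really does #define both, so an innocent-looking declaration like this won't compile under MinGW (which is the whole point):

#include <windows.h>

/* windef.h does `#define near` (to nothing), so after preprocessing
   this line reads `int = 5;` and the compiler chokes on it */
int near = 5;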

Let's see just how many macros windows.h pulls in:

$ i686-w64-mingw32-gcc -dM -E - < /dev/null \
    | fgrep '#define' | wc -l
240
$ echo '#include <windows.h>' | i686-w64-mingw32-gcc -dM -E - \
    | fgrep '#define' | wc -l
21154

Oh.

Cairo is some dependency-heavy GLib stuff. Right?

Cairo is that awesome open-source vector drawing library - the first thing everyone reaches for to do vector graphics, unless they were born and raised on Windows, or they're Mozilla, or some crazy Russian guy (rest in peace, you beautiful bastard).

It's also the stuff underneath much of the GTK library, you know, the UI library used by Gnome and a ton of other stuff? What do you mean it's not the year of the Linux desktop? Get out there already!

Anyway, my understanding was that Cairo was rather dependency-heavy. The last time I used it for a game (I think it was Lonely Planet - ah, the memories!), getting Cairo, Pixman, Freetype, libpng, GDK, GTK - oh wait, GLib too - to compile was kind of a nightmare.

But I have grown, and instead of fearing the autotools, I have now come to embrace them, quirks and all. And here's the beautiful result: thanks to the autotools everyone hates (and careful project design from the cairo authors), if you use Cairo to draw to an image surface, all you need is cairo and pixman. That's it.

For the curious ones, I've put up a gist of how to do it. It looks something like this:

./configure --enable-xlib=no --enable-xcb=no --enable-xcb-shm=no \
  --enable-win32=no --enable-quartz=no --enable-script=no --enable-ft=no \
  --enable-fc=no --enable-ps=no --enable-pdf=no --enable-svg=no \
  --enable-gobject=no --enable-trace=no --enable-interpreter=no \
  --enable-png=no

Basically, disable ALL the things. Once everything compiled, on Linux64, the dynamic libraries for cairo and pixman weighed about 11MB - an "ahAH! I was right! It was bad!" moment for me. But then I realized I had forgotten to strip them, which brought them down to about 600KB and 400KB respectively.
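If you haven't done it before, stripping is a one-liner (the library file names vary by version; these are illustrative):

$ strip --strip-unneeded libcairo.so.2 libpixman-1.so.0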

So, sure, I won't be able to use Cairo's text rendering functions, nor export to PDF, nor render directly to a Win32 window; I won't be loading PNGs through cairo, and I won't be tracing cairo calls and replaying them, and so on. But 1MB to pay for gorgeous vector graphics that I can integrate seamlessly into my game via a streaming OpenGL texture? Worth it.
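For the curious, the streaming part looks roughly like this - a sketch under my own assumptions (a GL context and an RGBA texture of the right size already exist; none of this comes from the gist):

#include <cairo.h>
#include <GL/gl.h>

/* Draw a frame of vector graphics with cairo, then stream the pixels
   into an existing OpenGL texture. tex is assumed to be w by h pixels. */
void upload_cairo_frame(GLuint tex, int w, int h) {
    cairo_surface_t *surf =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, w, h);
    cairo_t *cr = cairo_create(surf);

    /* gorgeous vector graphics go here */
    cairo_set_source_rgb(cr, 1.0, 0.4, 0.0);
    cairo_arc(cr, w / 2.0, h / 2.0, h / 3.0, 0, 2 * 3.14159265358979);
    cairo_fill(cr);

    cairo_surface_flush(surf);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* cairo's ARGB32 comes out as BGRA bytes on little-endian machines */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_BGRA, GL_UNSIGNED_BYTE,
                    cairo_image_surface_get_data(surf));

    cairo_destroy(cr);
    cairo_surface_destroy(surf);
}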

Have a nice week-end!