I started working on the itch.io desktop app over 4 years ago.

It has arguably been my main project ever since, along with companion projects like butler, capsule and many smaller libraries.

I'm fuzzy on the initial history, but I remember the codebase went through a lot of changes. As early as 2014, the whole codebase was ported from vanilla JavaScript to TypeScript. In 2016, I released a timeline of all the changes. In 2018, I released a postmortem for v25.

For this article, I want to focus specifically on the structure of the app, Electron's two-process nature, data flow, Redux usage, and the butler daemon.

The original model

In the beginning, itch was a "pure" Electron app. It downloaded files using node's http API, and extracted zip files with node-unzip (now defunct).

It became obvious pretty quickly that node-unzip was not going to cut it. It only supported .zip files, and it was easy to find .zip files it couldn't even extract.

Soon enough, I switched to 7-zip to extract archives. I was excited at first - this meant we could support .7z files! And .rar files! And other formats still. This seemed promising.

There were a few "wrappers" over 7-zip for node, but none of them did quite what I wanted. I wanted to show a proper progress bar, which meant:

- knowing the total uncompressed size of the archive before extraction started, and
- knowing how many bytes had been extracted so far, while extraction was in progress.

With 7-zip, this involved parsing its output.

And let me tell you, 7-zip's output is not machine-friendly:

Shell session
$ 7z l butler.zip

7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.utf8,Utf16=on,HugeFiles=on,64 bits,2 CPUs Intel(R) Core(TM)2 Duo CPU     P8700  @ 2.53GHz (1067A),ASM)

Scanning the drive for archives:
1 file, 9782793 bytes (9554 KiB)

Listing archive: butler.zip

--
Path = butler.zip
Type = zip
Physical Size = 9782793

   Date      Time    Attr         Size   Compressed  Name
------------------- ----- ------------ ------------  ------------------------
2019-08-13 19:37:26 .....      2273248       957583  7z.so
2019-08-13 19:37:27 .....     21413416      8756569  butler
2019-08-13 19:37:27 .....       193584        68245  libc7zip.so
------------------- ----- ------------ ------------  ------------------------
2019-08-13 19:37:27           23880248      9782397  3 files

Were spaces in file names a challenge to handle correctly? You bet! Was output inconsistent across platforms (Windows / Linux / macOS)? Absolutely.

In addition to these issues, 7-zip didn't always handle non-ASCII entry names properly. This was going to become a recurring theme in the years to follow.

Back then, I was still the proud owner of a not-yet-burned-out MacBook Pro, so I was familiar with The Unarchiver:

The Unarchiver's homepage at the time of this writing (not an ad).

The Unarchiver supported a lot of archive formats, and it even had a JSON output mode! I still had to compute total file sizes myself, but this was a marked improvement. It also had better text encoding detection, which came in really handy because those archives up on itch.io come from a variety of folks and archivers.

There was one... small... problem. I only had builds of The Unarchiver (well, its command-line counterpart) for macOS. And it was a pretty mac-y app, as in, it was written in Objective-C and required some frameworks.

But only a few days after discovering GNUstep, I had Windows & Linux builds up and running.

Shell session
$ lsar -j ./butler.zip                 
{     
  "lsarFormatVersion": 2,
  "lsarContents": [              
    {                        
      "ZipLocalDate": 1326290093,  
      "ZipCRC32": 1198917504,         
      "XADFileName": "7z.so", 
      "XADCompressionName": "Deflate",
      "ZipExtractVersion": 20,                                                
      "ZipFlags": 8,               
      "ZipFileAttributes": -2115174400, 
      "XADPosixPermissions": 33261,
      "ZipCompressionMethod": 8,
      "XADDataLength": 957583, 
      "ZipOSName": "Unix",                                                    
      "XADDataOffset": 44,
      "XADLastModificationDate": "2019-08-13 18:37:26 +0200",
      "ZipOS": 3,                
      "XADFileSize": 2273248,
      "XADCompressedSize": 957583,
      "XADIndex": 0
    },                       
    {                 
      "ZipLocalDate": 1326290093,
      "ZipCRC32": 726726454,
      "XADFileName": "butler",     
      "XADCompressionName": "Deflate",
      "ZipExtractVersion": 20,
      "ZipFlags": 8,
      "ZipFileAttributes": -2115174400, 
      "XADPosixPermissions": 33261,
(etc.)

And life was good. Well, sort of.

The lsar tool from The Unarchiver supported JSON output, but the unar tool didn't - so, back to text parsing we went.

Shell session
$ unar ./butler.zip
./butler.zip: Zip
  7z.so  (2273248 B)... OK.
  butler  (21413416 B)... OK.
  libc7zip.so  (193584 B)... OK.
Successfully extracted to "butler".

That output wasn't too machine-friendly either, but it was better. Worst case scenario, our progress bar was inaccurate, but the whole archive ended up being extracted anyway.

But wait, there's more

I started working on butler pretty early too; its first commit happened only a month after my first commit on itch.

I had gotten pretty frustrated with trying to download files using the Node.js APIs. Measuring progress was a headache. Pausing and resuming was, too. Any file system operation was a huge pain and, on Electron, introduced bugs that were annoying to find and work around.

The best example is probably when we noticed the "uninstall" functionality was broken. But only some of the time, and only for some games. Also, if you restarted the app, then you could uninstall the offending game just fine. What was going on?

Well, Electron apps tend to ship with ".asar archives". Instead of having a folder structure like this:

- MyApp/
  - electron.exe
  - dependency.dll
  - resources/
    - app/
      - index.js
      - foo.js
      - bar.js

They have a folder structure like this:

- MyApp/
  - electron.exe
  - dependency.dll
  - resources/
    - app.asar

And from the perspective of an Electron app, the latter is transparently equivalent to the former when you use node's fs API.

With some caveats, of course - anything inside an .asar archive is read-only, but apart from that, it behaves pretty much exactly as if it were an actual folder.
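
For example, from inside the app, reading through an archive works just like reading through a directory (the path below is made up for illustration):

TypeScript code
// Illustrative: with Electron's patched fs, paths that go *through* an
// .asar archive behave like paths through a regular folder (for reads).
import { readFileSync, readdirSync } from "fs";

const base = "/path/to/MyApp/resources/app.asar"; // hypothetical path
console.log(readdirSync(base)); // lists entries inside the archive
const js = readFileSync(`${base}/index.js`, "utf8"); // reads a file out of it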

Which is fine, except what is the purpose of the itch app (itself an Electron app)? To install games. And what technology is sometimes used to package up HTML5 games as native games? Electron.

So the itch app downloaded a .zip file (hooking through node streams to show a progress bar), called lsar to know its uncompressed size, called unar and parsed its streaming output to show another progress bar, and then used the node fs API to "configure" the game, which mostly involved poking a bunch of files until we could find something to launch (an executable, an index.html file, etc.)

And while it was configuring, it happily walked down any .asar files shipped with the game. Which had the side-effect of locking the .asar file for the lifetime of the itch app. Thus making it impossible to remove until the itch app was restarted.

How fun, right? How delicious.

Of course, there's a workaround, which is to use the original-fs module instead of the fs module. And there's another host of problems associated with that - of course - because I wasn't using the fs module directly. I was using fstream. Or fs-extra. Or a promisified version of whatever random npm package was recommended back then. And those just required fs.

Of course, there's a workaround for that too - you can patch Node.js's require mechanism to substitute one module for another. And, oh yes, you better believe I shipped that workaround in production. What else was I to do???
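
For flavor, here's roughly what that kind of require() substitution looks like - a sketch, not the exact code that shipped:

TypeScript code
// A sketch of require() substitution. Module._load is Node's internal
// loader hook; original-fs is Electron's untouched fs module, the one
// that bypasses all the .asar magic.
import Module from "module";

const originalLoad = (Module as any)._load;
(Module as any)._load = function (request: string, ...rest: any[]) {
  // Any dependency asking for "fs" silently gets "original-fs" instead.
  const target = request === "fs" ? "original-fs" : request;
  return originalLoad.call(this, target, ...rest);
};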

Most of my experience with Electron and Node APIs can be summarized as "please, I have a family. I'll do anything you want, please have mercy".

But my attempts at humanizing myself in the face of the JavaScript ecosystem proved futile.

Enter butler

So, I was tired of fighting with Node and Electron APIs all the time and I thought, hey, you know what language is good with networks and files? Go. And I was right (sort of).

So I moved download functionality to butler, in the form of a simple command:

Shell session
$ butler dl http://archive.ubuntu.com/ubuntu/dists/eoan/main/installer-amd64/current/images/netboot/mini.iso ./mini.iso
Resuming (36.77 MiB + 36.23 MiB = 73.00 MiB) download from Apache/2.4.18 (Ubuntu) at archive.ubuntu.com
▐▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░▌  56.70% 

Learning from my past experience, I made butler have a JSON output mode:

Shell session
$ butler dl --json http://archive.ubuntu.com/ubuntu/dists/eoan/main/installer-amd64/current/images/netboot/mini.iso ./mini.iso
{"level":"info","message":"Resuming (41.62 MiB + 31.38 MiB = 73.00 MiB) download from Apache/2.4.18 (Ubuntu) at archive.ubuntu.com","time":1574446702,"type":"log"}
{"bps":0,"eta":0,"progress":0.5963806256856004,"time":1574446702,"type":"progress"}
{"bps":4764701.552129273,"eta":5.895,"progress":0.6335970212335456,"time":1574446703,"type":"progress"}
{"bps":4764701.552129273,"eta":5.322,"progress":0.6692214861308059,"time":1574446703,"type":"progress"}
{"bps":5436019.697367432,"eta":4.516,"progress":0.7054579747866278,"time":1574446704,"type":"progress"}
{"bps":5436019.697367432,"eta":3.943,"progress":0.7427981128431347,"time":1574446704,"type":"progress"}

This was easy to parse: it estimated the current download speed and remaining time, and progress events were emitted regularly (but not too often, so as not to spend too much time updating the UI). It handled resuming downloads gracefully - and more importantly, I never had to look at node streams code ever again in my life. Well. Close enough, anyway.
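
Consuming that output from Node is about as simple as it gets. A minimal sketch (the inputs and the progress callback are made up):

TypeScript code
// A minimal sketch of driving `butler dl --json`: spawn the process and
// treat each stdout line as one self-contained JSON event.
import { spawn } from "child_process";
import { createInterface } from "readline";

const url = "https://example.com/file.iso"; // hypothetical
const dest = "./file.iso";
const onProgress = (p: number, bps: number, eta: number) =>
  console.log(`${(p * 100).toFixed(2)}% - ${bps} B/s - ETA ${eta}s`);

const butler = spawn("butler", ["dl", "--json", url, dest]);
const lines = createInterface({ input: butler.stdout! });

lines.on("line", (line) => {
  const event = JSON.parse(line);
  switch (event.type) {
    case "progress":
      onProgress(event.progress, event.bps, event.eta);
      break;
    case "log":
      console.log(`[butler] ${event.message}`);
      break;
  }
});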

This worked so well, I started moving other functionality over to butler.

It turns out the standard Go package for reading zip files is pretty dynamite.

Shell session
$ butler unzip ./butler.zip --json
{"level":"info","message":"• Extracting zip ./butler.zip to .","time":1574447089,"type":"log"}
{"level":"info","message":"Using 1 workers","time":1574447089,"type":"log"}
{"time":1574447090,"type":"result","value":{"type":"entry","path":"7z.so"}}
{"bps":0,"eta":0,"progress":0.9912272267859195,"time":1574447090,"type":"progress"}
{"time":1574447090,"type":"result","value":{"type":"entry","path":"butler"}}
{"time":1574447090,"type":"result","value":{"type":"entry","path":"libc7zip.so"}}
{"level":"info","message":"Extracted 0 dirs, 3 files, 0 symlinks, 22.77 MiB at 44.98 MiB/s/s","time":1574447090,"type":"log"}

By providing an unzip command, I could have full control over the extraction process: first list the archive, then compute the total size by reading data structures, and use that to emit events that reported our progress, speed, and ETA for the whole extraction, not just individual entries.

Since I even controlled how an individual entry was extracted, I was able to make the progress bar smoother. A large game file (1GB+) previously meant a stuck progress bar until it was fully extracted. Now, progress events would come in every half-second, no matter what.

Diffing and patching

In January of 2016, I started working on diffing & patching. Adam from Finji had contacted us about running a private beta round on itch.io. He expected they would be pushing frequent updates, and although I was pretty happy with how the itch app installed games, the way it upgraded them was unsatisfactory.

When all you have are .zip archives of all versions of a game, the best you can do is, well, download the next archive to disk, extract it fully, and swap it with the old installed folder. This meant that, worst-case scenario, you needed 3x the size of the game in disk space: once for the old build, once for the new archive, and once for the new build.

So I looked at diffing algorithms and found rsync. I've already done a whole write-up on diffing & patching in 2017, so you can go read that if you want more details.

So, a few new commands appeared in butler: diff to create patches, and apply to apply them. push was a convenience command to diff and upload at the same time (this was meant for the Finji folks; at the time of this writing, over two hundred thousand builds have been pushed to itch.io with it).

dl was still used to download patch files. All of these had JSON output modes, so their progress could be monitored from the itch app and displayed in the UI.

But the whole process (calling butler dl, calling butler apply, configuring, etc.) was still driven by itch, the Electron app. This ended up changing too.

A tale of three processes

Somewhere around v23, this is how the app roughly worked:

First, the Electron app started up. By which I mean: a copy of a heavily-patched node executable started up, read a bunch of JavaScript files from ./resources/app.asar, and started executing them in what we'll call the main process.

Quickly after, that main process created a BrowserWindow, which involved at least two extra processes (one that only does rendering, another doing whatever else a browser needs to do) - both belonging to a heavily-patched version of Chromium, modified - among other things - to play nicely with node's event loop.

For simplicity, we'll refer to all the Chromium-adjacent / UI-focused processes as "the renderer process".

In March of 2016, I had picked Redux to manage state throughout the app. I liked the basic principle:

- the entire application state lives in a single store,
- the only way to change it is to dispatch actions, and
- pure functions called reducers compute the next state from the previous state and an action.

It's a fascinating rabbit hole to go down. You can combine multiple reducers, so that each of them is responsible for a different part of the state tree.
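
If you've never seen the pattern, here it is in miniature - illustrative only, these aren't the app's actual reducers:

TypeScript code
// Redux in miniature: each reducer owns one slice of the state tree,
// and combineReducers assembles them into a single store.
import { AnyAction, combineReducers, createStore } from "redux";

interface DownloadsState {
  progress: number;
}

function downloads(
  state: DownloadsState = { progress: 0 },
  action: AnyAction
): DownloadsState {
  switch (action.type) {
    case "DOWNLOAD_PROGRESS":
      return { ...state, progress: action.progress };
    default:
      return state;
  }
}

function tasks(state: string[] = [], action: AnyAction): string[] {
  return action.type === "TASK_STARTED" ? [...state, action.name] : state;
}

const store = createStore(combineReducers({ downloads, tasks }));
store.dispatch({ type: "DOWNLOAD_PROGRESS", progress: 0.5 });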

And v23's state was indeed divided into a few sections:

TypeScript code
/**
 * The entire application state, following the redux philosophy
 */
export interface IState {
    history: IHistoryState;
    modals: IModalsState;
    globalMarket: IGlobalMarketState;
    market: IUserMarketState;
    system: ISystemState;
    setup: ISetupState;
    rememberedSessions: IRememberedSessionsState;
    session: ISessionState;
    i18n: II18nState;
    ui: IUIState;
    selfUpdate: ISelfUpdateState;
    preferences: IPreferencesState;
    tasks: ITasksState;
    downloads: IDownloadsState;
    status: IStatusState;
    gameUpdates: IGameUpdatesState;
}

Some of these are subdivided into further subsections, etc. There are several upsides to this model. Any change in state has to go through a reducer, so it's something you can easily log. A bunch of tooling has been developed around Redux, like interactive debuggers that allow... time travel.

Another nice thing about that design was that the main process and the renderer process shared state.

When a new browser window was opened, it received a snapshot of the current application state. Whenever an action was dispatched in either process, it was sent via IPC (Inter-Process Communication) over to the other process, and the same set of reducers ran on both sides, making sure the state was always consistent.
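
On the renderer side, that forwarding can be sketched as a Redux middleware (the channel name and loop-guard flag here are made up; redux-electron-store did something along these lines):

TypeScript code
// Mirror every locally-dispatched action to the other process over IPC,
// and dispatch whatever comes back - flagged, so actions don't bounce
// between processes forever.
import { ipcRenderer } from "electron";
import { Middleware } from "redux";

export const syncMiddleware: Middleware = (store) => {
  ipcRenderer.on("redux-action", (_event, action) => {
    store.dispatch({ ...action, fromOtherProcess: true });
  });

  return (next) => (action: any) => {
    if (!action.fromOtherProcess) {
      ipcRenderer.send("redux-action", action);
    }
    return next(action);
  };
};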

This meant that, if there was some code to drive the installation of a game from the main process, and it dispatched actions so that the app state contained information like: what are we installing right now? How far along are we? How much time is left? Then we could use that information from the renderer process, directly in React components, using react-redux.

This inter-process synchronization was achieved through the redux-electron-store module, which, believe it or not, broke in the middle of a QA session right before a release. (I ended up forking it soon after, because I needed it to behave slightly differently.)

At the time, I wrote an ASCII art diagram to explain what both processes were doing:

                                    ||
                                    ||
         NODE.JS SIDE               ||                CHROMIUM SIDE
         aka 'metal'                ||                aka 'chrome'
  (process.type === 'browser')      ||         (process.type === 'renderer')
       _______________              ||            ___________________
      [               ]             ||           [                   ]
      [ browser store ] ---------- diff ------>> [ renderer store(s) ]
      [_______________]             ||           [___________________]
              ^                     ||                    ^
compose into  |                     ||                    | read from
              |                     ||                    |           
   ___________|____________         ||            ________|_________
  [                        ]        ||           [                  ]
  [  reactors + reducers   ] <<-- actions ------ [ react components ]
  [________________________]        ||           [__________________]
              |                     ||                    |
interact with |                     ||                    | render to
              v                     ||                    v
            [ OS ]                  ||                 [ DOM ]
                                    ||               
     (stuff like windowing,         ||           (which we can only touch
     file system, etc., things      ||           from this process)
     HTML5 sandboxes us out of      ||
     and that still need to run     ||
     even when the chromium window  ||
      is entirely closed)            ||
                                    ||
                                    || <- process barrier, only
                                    ||    things that cross it are JSON
                                    ||    payloads sent asynchronously via IPC

This worked, somewhat. But side effects were always kind of a pain.

Redux was all good and fine as long as events dispatched actions which changed the state which was then rendered via React. But it didn't provide the opportunity to do things like launch external processes in response to actions being dispatched.

There were several competing solutions for side effects in Redux. They all involved colorful words like "sagas" and "thunks", and most of them broke in hard-to-diagnose ways and were rewritten several times - which I didn't have the patience for - so I ended up doing my own thing, which was relatively simple.

Code in either process could subscribe to actions, and then decide to, uh, "do side effects" in response to them. Usually, during the course of those side effects, other actions were dispatched by what I started calling "reactors", to update the application's state to reflect the current status of the task (like installing a game).
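
In sketch form - the registration API here is made up, but this is the shape of it:

TypeScript code
// A sketch of the "reactor" idea: subscribe to an action type, perform
// side effects, and dispatch further actions so the state tree stays
// honest about the task's progress.
type Action = { type: string; [key: string]: any };
type Dispatch = (action: Action) => void;
type Handler = (action: Action, dispatch: Dispatch) => Promise<void>;

const reactors = new Map<string, Handler>();
const reactor = (type: string, handler: Handler) => reactors.set(type, handler);

// Called for every dispatched action, after the reducers have run.
async function runReactors(action: Action, dispatch: Dispatch) {
  await reactors.get(action.type)?.(action, dispatch);
}

reactor("QUEUE_INSTALL", async (action, dispatch) => {
  dispatch({ type: "INSTALL_STARTED", gameId: action.gameId });
  // ...spawn butler here, turn its JSON events into progress actions...
  dispatch({ type: "INSTALL_PROGRESS", gameId: action.gameId, progress: 0.5 });
  dispatch({ type: "INSTALL_DONE", gameId: action.gameId });
});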

But that's not the only way in which the two processes communicated.

butlerd

At some point, I realized that so much functionality had shifted over to butler, it probably made sense to have a single instance of butler running throughout the app's entire lifetime.

I introduced the "daemon mode" for butler.

Shell session
$ butler daemon --json --dbpath ~/.config/itch/db/butler.db
{"level":"info","message":"[db prepare] [debug] Current DB version is 1542741863","time":1574451746,"type":"log"}
{"level":"info","message":"[db prepare] [debug] Latest migration is   1542741863","time":1574451746,"type":"log"}
{"level":"info","message":"[db prepare] [debug] No migrations to run","time":1574451746,"type":"log"}
{"secret":"(redacted)","tcp":{"address":"127.0.0.1:38401"},"time":1574451746,"type":"butlerd/listen-notification"}

So now three processes were required to make the app run. I'm not going to comment on whether this is wasteful, that's not really the point of this article. The point was, this worked well.

And as butler had started being responsible for other tasks, like making API requests to the itch.io server, I wanted a way for both the main and the renderer process to be able to make requests to butlerd.

So I figured, well, the renderer process is pretty much Chromium, it's fairly good at HTTP, and so is Go, so maybe something can be arranged here?

The problem is that the kind of requests one makes to butlerd aren't RESTful requests. They're not one-off questions; they're more like conversations. In one conversation, itch might ask butlerd to drive the downloads queue, and butlerd keeps sending back progress notifications for as long as that task runs. In another conversation, happening at the same time, itch might be asking about installed games. Meanwhile, the first conversation can go completely quiet - this would happen if downloads were paused - and as soon as they're started up again, the progress notifications resume.

In other words, the information exchanged between butlerd and itch is more like Redux messages than it is like REST request-responses.
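
Each conversation is carried over JSON-RPC: requests, responses, and notifications flowing in both directions over a single connection. The method names below are made up, but the shapes are standard JSON-RPC 2.0:

TypeScript code
// A request expects exactly one response, matched by id:
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "Install.Queue", // hypothetical method name
  params: { gameId: 123 },
};

// A notification has no id and expects no response - either side can
// emit these at any time, which is what makes it a conversation:
const progress = {
  jsonrpc: "2.0",
  method: "Progress",
  params: { progress: 0.42, eta: 12.5, bps: 1_500_000 },
};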

So there's no 1-1 mapping from butlerd JSON-RPC requests and notifications to HTTP requests. But I made it work anyway, using Server-sent events.

So for each conversation, itch would generate a "conversation ID" and make a request that would turn into a stream of server-sent events: messages from butlerd arrived as events on that stream, while messages from itch went out as separate HTTP requests tagged with the same conversation ID.

And this had advantages! Like for example, all the messages exchanged between the renderer process and butlerd would be visible in the Chrome DevTools, just like other requests.

One issue was, each itch->butlerd conversation cost us a full TCP connection. So I added HTTP/2 support to butler, which Electron (well, Chromium) took full advantage of - it was now multiplexing requests over fewer, persistent connections. Latency decreased.

I ended up abandoning the HTTP transport for two reasons. First, it ended up being a complicated scheme: there were several non-trivial bugs in the handling of server-sent events alongside regular requests, and it was only useful for the renderer process, since the main process was using the regular TCP transport.

Second, I added a way for itch to upgrade its version of butler seamlessly: while running, it would spin up the new version of butler, connect to it, and send new requests to it; the old one would spin down as soon as it was done handling all the pending requests.

But, what I can only assume is a bug in someone's HTTP/2 implementation caused connections to the older butler instance to never close. (It worked fine with HTTP/1.1).

I had also been developing debugging tools for butlerd-over-TCP that I wanted to use to inspect traffic from both the main and the renderer processes - and with the HTTP transport, I couldn't do that. So I eventually dropped the HTTP transport.

But it's not possible for a regular webpage to make a TCP connection. In the web security model, that's a big no-no. You do end up speaking TCP, but only through higher-level protocols, like HTTP or WebRTC. In Electron, however, you can enable node integration, and then you can require node modules directly from the renderer process.

So, in v25, the app works more or less as follows: the main process makes butlerd requests related to installing, updating, configuring, and launching games, while the renderer process makes butlerd requests related to what games you currently own, what games are currently installed, what collections you have, etc., to show all of that in the UI.

I mentioned that the main and renderer processes communicated in another way, and that's Electron's built-in IPC. In an Electron app, you can send messages from main to renderer and back using their own protocol. That's what redux-electron-store was built upon, but it's also usable directly.

Electron also allows requiring "remote" modules - so that you can, from the renderer, call methods remotely on a main-side object. The problem is that this is all blocking, and when you want a responsive UI, you tend to avoid doing that, save for exceptional situations like sending the initial state snapshot (when you're not even rendering yet anyway).

Mixing native and web views

Early on when designing the itch app, it was decided that we didn't want to spend the time re-developing all views on both web and native. Some things would always be served by the website, whereas others would be rendered by the itch app, with a desktop-like interface that supports offline usage.

So, early on, I started showing parts of the website in the app. In non-essential places, mind you - it was still a requirement that you could pick a remembered itch.io account, view installed games, and launch them, even if you didn't have an internet connection. So some views needed to be "native":

The Installed Games view in itch v25. It works offline.

Game pages are hybrid in v25, with some native controls at the bottom:

The game page view in itch v25. The top part is served from the website, but the bottom part works offline.

After some experiments, a browser-like design was picked, where you'd always have an address bar on top, with the traditional back/forward and reload controls.

This involved implementing a custom history and navigation controller, as some of the pages you could visit were native (rendered directly in the app's shell using React), and some of them were web (rendered using a webview). The history allowed both to coexist, and the controller allowed navigating seamlessly between the two:

The history shows both native and web pages.

However, this was vastly more complicated than anticipated. The webview tag has not been the most stable thing in the world. Not only does it come with restrictions, its navigation events have always been limited. But then, the problem it's trying to solve is complicated too.

Navigation in a webview is not just "making an HTTP request to another page" (and then more requests for its resources). You can navigate to another page without a load - using the pushState family of APIs. You can even navigate without pushState - by clicking an anchor link.

Those are treated separately by the Electron webContents API (different events), and some of them don't show up in the window.history property. Or sometimes they do, but the events fired are not what you'd expect. Things get even more complicated with events like replaceState.

I'm fuzzy on the details because it has been a while, but I tried for several weeks to recreate Electron's navigation controller purely from WebContents events, and I failed - so I had to fall back to comparing their history with my history and trying to reconcile the two whenever various events fired.

For posterity, you can see the solution I ended up shipping on GitHub.

No, I'm not proud of it.

Embracing a browser-like design

While mixing native views with web views was a powerful paradigm (since it gave the app access to a lot of features we don't have the resources to implement twice), the navigation was a whole mess, as I've just described. Note that I glossed over many, many subtle problems I've had to work around.

A few weeks ago I wanted to add one simple feature to the app. And due to its structure, I had to re-read and re-understand at least eight different files, and modify at least four of them. I wasn't happy with that. I've always tried to document what it is I do on the itch app, even though an overwhelming portion of the work has been done solo, and, even just between me and me, I don't think this is sustainable.

So, I started thinking of ways to simplify the app.

What I really wanted to do was just strip away the browser completely. I wanted to make the app do only what we need a native app for: show your library of games, install, update, configure, launch, and uninstall them - and that's it. For anything else, you would've used the website.

I also would've campaigned to improve integration between the website and the app - to allow, for example, queuing the install of a game in the app directly from the website, which some other platforms already allow.

But after much internal discussion, that's... not happening. Folks have gotten used to this way of doing things, and I'd have to make a bunch of folks angry and sad to change it. So I'm not going to change that.

What I am changing though, is the internal structure of the app.

Previously (up until v25), if you were looking at a "native page", like the library, there was only a single "web view", loaded off the file:// protocol, which worked offline, like so:

And if you navigated away to a webpage, the main area would become a <webview/> tag, like so:

So in some way, the address in the first picture is a "lie". It's not rendered as a regular web page; it's just a bit of application state that makes the overall application UI render something other than a webview in its place.

That's why we need a history that's completely separate from Electron/Chromium's history.

But Electron actually lets you register custom protocols. In many different ways, in fact. itch v25 does register itself as "handling the itch: protocol", but that too is a half-lie: it just means that whenever you click an itch: link in another app, a request goes to the OS saying "Hey, does anyone know what to do with an itch: link?", to which the OS replies "Yes yup uh-huh, there's this itch app over there, let me launch it with some command-line arguments".

(And if the app is already open, the second instance sends a message to the first and closes immediately. It even works in a slightly different way on macOS because, well, macOS - but that's the gist of it.)

Internally however, up until itch v25, itch:// wasn't registered as a protocol inside the app.

But if we think of views like itch://library, itch://collections, etc. as "regular web pages", then we can have a design where we always have two web views: one for the "chrome" (the sidebar, the address bar, the navigation buttons, etc.), and one for the actual content.

Cool bear's hot tip

Firefox's "chrome" (the "outer" interface) is implemented entirely with JavaScript and the DOM btw. I don't know what Chromium does, but I'm fairly sure it's not too far from that either.

And this does simplify things. Now there's always a <webview> tag, it just sometimes loads content off of an actual HTTP server, and sometimes it loads content off of our custom protocol, but regardless it's just rendering HTML, executing JavaScript, and making the best out of bad CSS stylesheets.
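
Registering that protocol in the main process looks something like this (registerFileProtocol is Electron's actual API; the page mapping is made up):

TypeScript code
// A minimal sketch of serving itch:// pages from local files. The
// itch://library -> library.html mapping is made up for illustration.
import { app, protocol } from "electron";
import { join } from "path";

app.whenReady().then(() => {
  protocol.registerFileProtocol("itch", (request, callback) => {
    const { hostname } = new URL(request.url); // "library", "collections", ...
    callback({ path: join(__dirname, "pages", `${hostname}.html`) });
  });
});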

Isolated data flow

This does raise a few questions though.

Now that native views (like Library, Collections, etc.) are just "regular web pages"... where do they get their information from? We can't afford to enable node integration anymore, because otherwise any page you navigate to (including http ones) would get access to the internals of the app, and that would be very, very bad for security.

(Like, random web pages having access to your whole hard disk bad.)

Which means communication mechanisms like node integration, remote modules, and Electron's built-in IPC are completely out of the picture.

One thing immediately comes to mind: we're basically building an actual web app now, served off of itch://something, so why not make AJAX/XHR/fetch requests to itch://something?

And that's half of it. The good news is that we can prevent "random web pages" from making requests to itch:// - even though that custom protocol is registered throughout the lifetime of the app - by only allowing the itch://* origin to make requests.

But we'd like our native views to receive notifications from the main process. And to have conversations with butler - and we've established that doing so over HTTP is annoyingly complicated.

Luckily, there is a web standard suitable for establishing bidirectional communication channels, and it is... WebSocket.

And WebSocket is also good for security, because, again, we can prevent "random web pages" from establishing connections to our WebSocket server. The renderer process can send requests over WebSocket to the main process, which is now the only conduit to butlerd, so we have a centralized place to log all messages.

The new design

So, the tentative new design for the itch app is as follows:

So pages like itch://library, when they're loaded, now have to make a fetch request to some itch:// URL to get the address of the WebSocket server, then establish a WebSocket connection.

The address of the WebSocket server is cached in sessionStorage, because it doesn't change throughout the lifetime of the app. Then, the renderer process and the main process exchange information - but only what they need, not the entire state of the app as before, when Redux was used.
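
In code, the bootstrap looks something like this (the itch:// endpoint name is hypothetical):

TypeScript code
// A sketch of how a native page gets its WebSocket connection to the
// main process.
async function connectToMain(): Promise<WebSocket> {
  let address = sessionStorage.getItem("wsAddress");
  if (!address) {
    const res = await fetch("itch://api/websocket-address");
    address = ((await res.json()) as { address: string }).address;
    sessionStorage.setItem("wsAddress", address); // stable for the app's lifetime
  }
  const socket = new WebSocket(address);
  await new Promise<void>((resolve, reject) => {
    socket.onopen = () => resolve();
    socket.onerror = () => reject(new Error("could not reach main process"));
  });
  return socket;
}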

So does it work? Yeah, it looks promising:

What you see in the video above uses very little of the v25 codebase. The whole structure was recreated almost from scratch (save for some components like IconButton).

The idea now is to port views from v25 into this new codebase. Instead of using a single Redux store and node integration, they'll have to work as "a regular web app", making fetch and WebSocket requests.

Instead of using higher-order components and automatic data binding with react-redux, I'm using React hooks.

The current code isn't even that filthy - although, keep in mind it's not at feature parity yet. But here's the webview you see in the video for example:

TypeScript code
import React, { useContext, useEffect, useRef, useState } from "react";
import { WebviewTag } from "electron";
// SocketContext, packets, Navigation, and WebviewContainer are
// app-internal modules, not shown here.

export const Webview = () => {
  const socket = useContext(SocketContext);
  const viewRef = useRef<WebviewTag>(null);
  const [url, setUrl] = useState("");
  const [title, setTitle] = useState("");
  const [loading, setLoading] = useState(false);

  useEffect(() => {
    const wv = viewRef.current;
    if (wv) {
      wv.addEventListener("load-commit", ev => {
        if (ev.isMainFrame) {
          setUrl(ev.url);
        }
      });
      wv.addEventListener("page-title-updated", ev => {
        setTitle(ev.title);
      });
      wv.addEventListener("did-start-loading", ev => {
        setLoading(true);
      });
      wv.addEventListener("did-stop-loading", ev => {
        setLoading(false);
      });
    }
  }, [viewRef]);

  useEffect(() => {
    if (socket) {
      return socket.listen(packets.navigate, ({ href }) => {
        let wv = viewRef.current;
        if (wv) {
          wv.loadURL(href);
        }
      });
    }
    return undefined;
  }, [socket]);

  return (
    <WebviewContainer>
      <Navigation viewRef={viewRef} title={title} url={url} loading={loading} />
      <webview src="itch://games/3" ref={viewRef} />
    </WebviewContainer>
  );
};

The React component tree is such that every "native page" (now a regular web page served off of itch:) has access to a SocketContext, which allows communicating with the main process. Here, we just listen for a navigate event.

All the <webview/> events are now usable and work across itch: and http(s): pages. React hooks make it easy to subscribe/unsubscribe to events cleanly, without having to use higher-order components like before. I'm no longer running away from local state, and although the main process holds a bunch of global state, not all renderer-side components need to know everything about it.

I'll let y'all know how it works out!