Posts from ·neil·

A new game by PleasingFungus: Silicon Zeroes

Our own PleasingFungus, now mostly retired from Crawl development, has a new game out on Steam. Silicon Zeroes is a puzzle game about digital logic and CPU design. Here is a review of the game.

If you think Silicon Zeroes is a little too practical and you’d rather work with things like queue automata, you might be interested in PF’s older game Manufactoria.

Webtiles architecture

Author’s note: I originally posted this earlier this year to the roguelikedev subreddit. Then I deleted it in my periodic social media purge, but thought this should be preserved somewhere.

DCSS webtiles does all the “engine” stuff on the server side. Its architecture, and most of the client-side and middleware code, was developed by our devteam member edlothiol (Florian Diebold) starting in 2011. There are three components (and a fourth non-webtiles component for online console play over ssh):

  • The Crawl binary (written in C++) does everything the console version does, including rendering ASCII to the console. It *also* opens a Unix-domain socket which it uses to send information about the visible game state to the web server, as a JSON stream. One copy of Crawl runs for each player.
  • The webtiles server is a webserver written in Python (using the Tornado framework). On one end, it communicates with the Javascript client through the web; on the other, it launches the Crawl binary and talks to it through the socket and through Crawl’s stdin. The server is responsible for keeping track of who is playing or watching which games, and based on that forwarding JSON messages from Crawl to the appropriate clients, and converting client messages into game input. The server also handles in-game chat, and serves static files such as tilesheets and javascript over HTTP.
  • The client is written in Javascript. It establishes a websockets connection to the server, and speaks JSON over this connection. Playing a game versus watching it uses the same client code. The client takes the JSON messages coming from the server and uses them (along with the tilesheets) to determine what to render in its canvas. Input from the user is translated into a JSON message and sent to the server. Many things you would think can be handled on the client side, like scrolling a multi-page menu or displaying the inventory, actually go through the server, so that watchers can see what the player sees even before the full command is issued. One exception is the “line reader” (used whenever we ask the user to type something): there, the input editing uses an ordinary HTML text field; only when the user presses <Enter> are its contents sent to the server as input.
  • Console games replace the server with (an old version of) dgamelaunch, the software developed by paxed (who is now a member of the nethack devteam, congrats). This is a really thin wrapper around the game: it runs Crawl, sends the game’s stdout to both a ttyrec and to dgamelaunch’s stdout, and sends its own stdin to Crawl’s stdin (this is why console players can play robotfindskitten, or boggle or atc, on some of the servers: the wrapper is so thin it works with pretty much any game). It is the responsibility of sshd or telnetd to actually do the network communications: as far as dgamelaunch is concerned, it’s just talking to a tty. Console watchers are fed a stream of the player’s ttyrec. On most servers dgamelaunch runs the webtiles version of the Crawl binary, and tells it to make a (Unix-domain socket) connection to the webtiles server as well. That way webtiles watchers can connect to and watch games being controlled from the console; the other direction is handled by having the webtiles server also make ttyrecs. dgamelaunch has its own messaging system, entirely separate from webtiles chat.
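The server’s routing job can be sketched in a few lines of Python. This is a toy model, not the real Tornado code: the class, field, and message names here are all made up for illustration. The two essential duties are fanning each game’s JSON stream out to the player and every watcher, and turning client “input” messages into bytes destined for Crawl’s stdin.

```python
import json

class GameSession:
    """Toy model of one running game as the webtiles server sees it."""

    def __init__(self, player):
        self.player = player          # username of the player
        self.watchers = set()         # usernames of spectators
        self.stdin_buffer = []        # what would be written to Crawl's stdin

    def recipients(self):
        # Player and watchers receive the same JSON stream, which is why
        # menus and inventory screens go through the server.
        return {self.player} | self.watchers

    def handle_game_message(self, raw):
        # One JSON message from the Crawl binary, fanned out unchanged
        # to every connected client for this game.
        msg = json.loads(raw)
        return {who: msg for who in self.recipients()}

    def handle_client_input(self, raw):
        # A client "input" message becomes keystrokes on Crawl's stdin.
        msg = json.loads(raw)
        if msg.get("msg") == "input":
            self.stdin_buffer.append(msg["text"])
```

For example, if neil is playing and edlothiol is watching, a map-update message from the binary is delivered to both, while a keypress from neil’s client lands only in the game’s stdin buffer.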

Hidden information, such as the contents of out-of-sight map cells or the identity of unidentified objects, never leaves the Crawl binary. Even before Webtiles existed, we kept most public and private information separate: there are classes for map_knowledge and monster_info entirely separate from the actual map and monster data. Items do use the same type, but have a function to strip off unknown info.
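The public/private split for items can be illustrated with a small sketch (field names hypothetical; Crawl’s real code is C++ and considerably richer). The point is that the full item record never leaves the binary; only the player-known view is ever serialized into the JSON stream.

```python
def known_item_info(item):
    """Return only the fields the player is allowed to see."""
    info = {"glyph": item["glyph"], "base_type": item["base_type"]}
    if item["identified"]:
        info["sub_type"] = item["sub_type"]   # e.g. which potion this is
    else:
        info["sub_type"] = "unknown"          # true sub_type stays server-side
    return info
```

An unidentified potion of curing, for instance, leaves the binary as nothing more specific than “an unknown potion”.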

Lame ASCII-art diagram. As in Crawl, § represents a cloud: the Internet.

                 SERVER SIDE   §   PLAYER SIDE
          unix socket     websockets
            (JSON)           (JSON)
      CRAWL <----> WEBTILES <--§--> CLIENT
        ^   <-----          ---§-->
        |    stdin           HTTP
        |   (UTF-8)      (images, JS, etc)
        |                      §
        |                      §
        `--> DGL <---> SSHD <--§--> SSH
           stdin/stdout        §
         (UTF-8 + curses)      §

DCSS 0.13 beta (and Tournament schedule)

Crawl 0.13 is in beta!

You can test out the upcoming release on CSZO, CAO, or CLAN. We hope to have the other servers updated soon. Please report any bugs you find on Mantis.

The 0.13 release is scheduled for 11 October. The tournament will begin at 20:00 UTC on Friday 11 October and continue for 16 days (until 20:00 UTC Sunday, 27 October). Check here for more announcements, including links to the tournament rules.

Update: You may be interested in the 0.13 changelog.

Save compatibility in DCSS

A few days ago on IRC, blackcustard asked me to do a blog post on a particular save-compatibility fix I made about a month ago, because it’s an interesting (I prefer “dirty”) kludge. I should start, however, with an introduction to how save compatibility works in Crawl.

Version tags

For save compatibility purposes, the actual version number (for example, 0.12.2 or 0.13-a0-2578-ged92a99) makes almost no difference. Instead, there are three version numbers (which we call “tags” for historic reasons) associated with each save and with each version of crawl: the character format, major version, and minor version. In current trunk these are 0, 34, and 44.

  • The character format tag is stored in the “character info” section of the save: the basic information such as species and level that is displayed in the save browser. If this section is changed in an incompatible way, the character format version would be incremented. However, Crawl would not even see the old saves; this would be rather bad (they might accidentally be overwritten, for example), so we have never made such a change: the format is still version 0. Appending new information to this section requires only incrementing the minor tag (see below), which is much safer.
  • The major version tag is stored in each section of the save (each level, character, transiting monsters, and so on). Changing this version indicates that Crawl is no longer compatible with the old save: the save will be coloured red in the save browser and cannot be loaded. The major tag is never incremented within a release (e.g. between 0.12.0 and 0.12.1), only in trunk. We try to avoid doing this when we can, because it means trunk players will be unable to transfer their saves to newer versions to get bug fixes; it furthermore means that local players will have to keep the previous stable release installed until they have finished their games.

    The major tag has been incremented 34 times, but the rate has slowed down significantly. It was incremented several times in 0.8 development, once in 0.11 development, and once early in 0.12 development. Through heroic effort, kilobyte was able to restore save compatibility between 0.11 and 0.12 despite the major tag change. Therefore, current trunk can load saves from 0.11 and 0.12 releases, all but the earliest 0.12 development versions, and all 0.13 development versions.

  • The minor version tag is also stored in each section of the save, and is used to indicate changes that require special handling for loading old saves. The save-loading code is full of checks that say “only do this if the save’s minor tag is greater than N” or “initialize this new field if the minor tag is less than N”. Older versions of Crawl will refuse to load a save with a too-new minor tag (that is, we do not provide forward compatibility between versions), but newer versions will see the old tag and apply the appropriate fixups on load (which makes the save incompatible with the old version of crawl).

    The minor tag is reset to zero whenever the major tag changes. A major bump provides an opportunity to clean up the old save-compatibility code, since the old saves won’t be loadable anyway; and to remove or rearrange enumeration values that could not change without breaking compatibility. The minor tag has been incremented 44 times since the last major tag (0.12-a0-109-ged95631, in August 2012).
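The “initialize this new field if the minor tag is less than N” pattern can be sketched as follows. This is Python pseudocode, not Crawl’s real C++ unmarshalling code, and the tag name and value are invented for illustration; the shape of the check is the point.

```python
TAG_MINOR_MONSTER_SPELLS = 27   # made-up tag value for illustration

class Reader:
    """Toy stand-in for the sequential save-file reader."""
    def __init__(self, values):
        self._values = iter(values)
    def read_int(self):
        return next(self._values)

def unmarshall_monster(reader, minor_tag):
    mon = {"type": reader.read_int(), "hp": reader.read_int()}
    if minor_tag >= TAG_MINOR_MONSTER_SPELLS:
        # Newer saves carry a spell count followed by that many spell ids.
        count = reader.read_int()
        mon["spells"] = [reader.read_int() for _ in range(count)]
    else:
        # Fixup for old saves: initialize the field the save doesn't have.
        # Reading it anyway would desync the reader, as with spectral
        # weapons below.
        mon["spells"] = []
    return mon
```

Note what goes wrong without the version check: the loader would consume integers that actually belong to the next part of the save, and everything after that point would be misread.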

The upshot of this is that, for the most part, online players can keep transferring their saves to the latest version of Crawl. In the rare event that we change the major version tag (once a year maybe), the servers keep those saves on the old version, so the players can continue even if they don’t get bug fixes.

Among the situations that require compatibility code (and hence incrementing the minor version tag) are new fields or data structures added to monsters: trying to load those from the old save would either cause corruption, or crash. Likewise, any change to enumerations like monster type, spell, dungeon feature, other than adding a new value at the end, requires at least a minor version bump and code to shift all the old values.
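The enum-shifting case can be sketched too. Suppose, hypothetically, that a new monster type was inserted mid-enum at value 12 when the minor tag was bumped to some value TAG_MINOR_NEW_MONSTER (both numbers made up): every saved value at or above the insertion point is then off by one in older saves.

```python
TAG_MINOR_NEW_MONSTER = 30   # made-up tag value for illustration

def fixup_monster_type(saved_type, minor_tag):
    # Saves written before the insertion have every later value off by one.
    if minor_tag < TAG_MINOR_NEW_MONSTER and saved_type >= 12:
        return saved_type + 1
    return saved_type
```

Forgetting this fixup is exactly how the Spectral Weapon enum bug described below arose: old values silently load as the wrong monster.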

Bugs: the difference between theory and practice

All the rules about how to preserve save compatibility, and what does and doesn’t need special handling, can be kind of complex (see docs/develop/save_compatibility.txt). It’s no wonder, then, that occasionally there are problems. The development of Spectral Weapon introduced two save compatibility issues at different times:

  1. Spectral Weapon was written before elemental wellspring and polymoth, but was merged into trunk later. The merge, unfortunately, left spectral weapon’s monster enum in its old place, before the two “older” monsters. Thus an elemental wellspring from an old save could be loaded as a spectral weapon (probably resulting in a crash), or a polymoth could be loaded as an elemental wellspring (no crash, but a wellspring with incorrect statistics and no spells).
  2. Spectral weapons got a ghost_demon structure to store their statistics (AC, attacks, etc), which differ from one spectral weapon to the next. An old save would not have this structure, but the code would try to load it anyway, getting out of sync and causing a crash when some later part of the save was misread.

At some point later we started noticing these apparently-corrupt saves that would crash on load. After some investigation we found that they seemed to all have a “spectral weapon” on the incorrect level. That let us track down the second, newer bug. But without the version tags, how to detect and fix it?

Fixing the bugs

Kilobyte made a first pass at fixing the ghost_demon problem. For potentially affected versions (remember that because of the missing tag, this check caught some good versions as well), he used read-ahead to determine whether it looked like the saved SW had a ghost_demon structure. If it did not, the weapon was replaced with a placeholder “ghost” monster.

However, after kilobyte made but did not commit his patch, we discovered the other problem: sometimes neither the player, nor any ghosts on the level, had the Spectral Weapon spell. More digging showed that the spectral weapon had Primal Wave and Summon Water Elementals, and led us to the enum problem. To fix that, we needed some way to tell which monster it was supposed to be. We noticed that all three of the affected monsters (SW, Wellspring, and Polymoth) had different speeds, which happens to be stored with the monster rather than being loaded from the monster class. Then qoala pointed out that the ghost_demon change to spectral weapon happened in the very same batch of commits as the one that changed its speed from 25 to 30. We could therefore kill two birds with one stone by using the monster speed to determine the true monster type, and didn’t need the look-ahead.

Ultimately, for saves from affected versions (between minor tags 38 and 39), for any monster claiming to be one of those three, we check the monster’s saved speed. If it is 10, it must be a wellspring; if 12 it must be a polymoth; if 30 it must be a correct spectral weapon; and if 25 it must be a spectral weapon with a missing ghost_demon structure (so we remove it, leaving a placeholder ghost behind). However, haste, slow, and friends affect the speeds stored with the monster; we therefore have to check for numbers like 15 (hasted wellspring), 16 or 17 (slowed old spectral weapon), and so on. Fortunately none of the three base speeds is 2/3 or 3/2 of any of the others, so we are still able to distinguish on the basis of speed alone.
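The disambiguation logic can be sketched in Python (the real fix is C++; the enum names here are made up). Each of the three base speeds (wellspring 10, polymoth 12, fixed spectral weapon 30, broken old spectral weapon 25) is expanded to include its hasted (×3/2) and slowed (×2/3) values, rounding either way; a saved speed then identifies exactly one monster.

```python
MONS_SPECTRAL_WEAPON, MONS_WELLSPRING, MONS_POLYMOTH = range(3)

def speed_variants(base):
    # The base speed plus its hasted and slowed values, rounded both ways.
    return {base,
            base * 3 // 2, (base * 3 + 1) // 2,   # hasted (x3/2)
            base * 2 // 3, (base * 2 + 2) // 3}   # slowed (x2/3)

def true_monster_type(saved_speed):
    """Return (actual monster type, whether ghost_demon is missing)."""
    if saved_speed in speed_variants(10):
        return MONS_WELLSPRING, False
    if saved_speed in speed_variants(12):
        return MONS_POLYMOTH, False
    if saved_speed in speed_variants(30):
        return MONS_SPECTRAL_WEAPON, False   # saved after the fix
    if saved_speed in speed_variants(25):
        return MONS_SPECTRAL_WEAPON, True    # old save, ghost_demon absent
    raise ValueError("speed %d matches none of the three" % saved_speed)
```

Because none of the three base speeds is 3/2 or 2/3 of another, the four variant sets are pairwise disjoint, so speed alone suffices.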

For all the gory details, see commit 0.13-a0-2175-ga079a5c. One day we will break compatibility again and increment the major version; then this kludge can return to the ether from which it came. Until then, enjoy! :)


CSZO downtime over

Edit 24 June (later): It’s back up now, and now in Tampa, Florida.

The server will be down for a data center migration most of Sunday, 23 June. If you’re looking for another US server, you can play on instead.

Edit 24 June: The ISP had various problems with the move, so cszo is still offline. I’m in contact with them to get the remaining issues resolved.


Even more euroservers!

Another European server is now available: (RHF), maintained by joosa. Old-timers may remember RHF from a few years ago; it is back, this time with webtiles support. RHF is located in Finland and serves both webtiles and console for trunk and stable versions. Games are reported on IRC by the bot Ruffell, and are also known to Sequell and the scoring pages.

Many thanks to joosa!

  • Also known as RHF.
  • Located in Finland, Europe.
  • It serves the latest released version of Dungeon Crawl Stone Soup.
  • It also serves the latest development version and is very regularly updated.
  • Additionally Dungeon Sprint, Zot Defense, and the Tutorial.
  • Access via SSH (port 22): username “rl” – no password required; you can use the SSH key (PuTTY key or Unix key) if you want.
  • Access via WebSocket: WebTiles
  • Accounts and save files for SSH & WebSocket are shared.

New European webtiles server:

A new server is available, hosted by Aleksi and maintained by TZer0. It is located in Germany and serves both webtiles and console for trunk and stable versions. Games are reported on IRC by the bot Lantell, and are also known to Sequell.

Many thanks to Aleksi and TZer0!

  • Also known as CLAN.
  • Located in Falkenstein/Chemnitz, Germany, Europe.
  • It serves the latest released and the previous version of Dungeon Crawl Stone Soup.
  • It also serves the latest development version and is very regularly updated.
  • Additionally Dungeon Sprint, Zot Defense, and the Tutorial.
  • Access via SSH (port 22): username “terminal” – password “terminal” or SSH key (PuTTY key or Unix key).
  • Access via WebSocket: WebTiles (port 8080)
  • Accounts and save files for SSH & WebSocket are shared.
  • Morgues, rc files, and so on are available online.

dpeg and rax on Roguelike Radio

The latest episode of Roguelike Radio covers player competitions in roguelikes. Among the panelists are our own dpeg and rax discussing the DCSS tournament.

New server:

As mentioned in MarvinPA’s announcement of the CAO downtime, a new public Crawl server, CSZO for short, is available.  The server, located in West Chester, Pennsylvania in the eastern United States, hosts both webtiles and console Crawl (ssh with username “crawl” and either the CAO key (putty version) or the password “crawlingtotheusa”).  CSZO runs trunk, 0.11, and 0.10, including sprint, zotdef, and the tutorial. Both trunk and stable versions are updated daily.

Games are shared between webtiles and ssh, and can furthermore be watched across interfaces (e.g. watch an ssh game in webtiles; chatting across interfaces doesn’t work that well for now, though).  You can even watch yourself in webtiles while playing ASCII, or vice versa, to get the best of both interfaces.

Ttyrecs and morgues are available for browsing. Games and milestones are recorded by Sequell and reported in ##crawl on freenode by the new bot Sizzell; this also means that CSZO games will count for the upcoming tournament. Note that the user database is separate from CDO and CAO, so you will have to register an account for CSZO.

I plan to at some point publish the configurations, git branches, and instructions online to make it a little easier to set up a new server.  Eventually we’d like to make this all available as a Debian package or something similar, but first things first :)