25 Feb 2020
Long ago, in the distant past, Curtis introduced the idea of kelvin
versioning in an informal blog post about Urbit. Imagining
the idea of an ancient and long-frozen form of Martian computing, he described
this versioning scheme as follows:
Some standards are extensible or versionable, but some are not. ASCII, for
instance, is perma-frozen. So is IPv4 (its relationship to IPv6 is little
more than nominal - if they were really the same protocol, they’d have the
same ethertype). Moreover, many standards render themselves incompatible in
practice through excessive enthusiasm for extensibility. They may not be
perma-frozen, but they probably should be.
The true, Martian way to perma-freeze a system is what I call Kelvin
versioning. In Kelvin versioning, releases count down by integer degrees
Kelvin. At absolute zero, the system can no longer be changed. At 1K, one
more modification is possible. And so on. For instance, Nock is at 9K. It
might change, though it probably won’t. Nouns themselves are at 0K - it is
impossible to imagine changing anything about those three sentences.
Understood in this way, kelvin versioning is very simple. One simply counts
downwards, and at absolute zero (i.e. 0K) no other releases are legal. It is
no more than a versioning scheme designed for abstract components that should
eventually freeze.
Many years later, the Urbit blog described kelvin versioning once more in the
post Toward a Frozen Operating System. This presented a significant
refinement of the original scheme, introducing both recursive and so-called
“telescoping” mechanics to it:
The right way for this trunk to approach absolute zero is to “telescope” its
Kelvin versions. The rules of telescoping are simple:
If tool B sits on platform A, either both A and B must be at absolute zero,
or B must be warmer than A.
Whenever the temperature of A (the platform) declines, the temperature of B
(the tool) must also decline.
B must state the version of A it was developed against. A, when loading B,
must state its own current version, and the warmest version of itself with
which it’s backward-compatible.
Of course, if B itself is a platform on which some higher-level tool C
depends, it must follow the same constraints recursively.
This is more or less a complete characterisation of kelvin versioning, but it’s
still not quite precise enough. If one looks at other versioning schemes that
try to communicate some specific semantic content (the most obvious example
being semver), it’s clear that they take great pains to be formal and
precise about their mechanics.
Experience has demonstrated to me that such formality is necessary. Even the
excerpt above has proven to be ambiguous or underspecified re: the details of
various situations or corner cases that one might run into. These confusions
can be resolved by a rigorous protocol specification, which, in this case, isn’t
very difficult to put together.
Kelvin versioning and its use in Urbit is the subject of the currently-evolving
UP9, recent proposed updates to which have not yet been ratified. The
following is my own personal take on and simple formal specification of kelvin
versioning – I believe it resolves any major ambiguities that the original
descriptions may have introduced.
Kelvin Versioning (Specification)
(The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be
interpreted as described in RFC 2119.)
For any component A following kelvin versioning,
- A’s version SHALL be a nonnegative integer.
- A, at any specific version, MUST NOT be modified after release.
- At version 0, new versions of A MUST NOT be released.
- New releases of A MUST be assigned a new version, and this version MUST be
  strictly less than the previous one.
- If A supports another component B that also follows kelvin versioning, then:
  - Either both A and B MUST be at version 0, or B’s version MUST be
    strictly greater than A’s version.
  - If a new version of A is released and that version supports B, then a new
    version of B MUST be released.
These rules apply recursively for any kelvin-versioned component C that is
supported by B, and so on.
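To make the constraints concrete, here’s a minimal Haskell sketch of how one
might check them (the types and names here are purely illustrative – my own,
rather than any sort of official tooling):

-- A kelvin-versioned component: its current version, plus the components it
-- directly supports.
data Component = Component
  { version  :: Integer
  , supports :: [Component]
  }

-- A state is legal if every version is nonnegative and, for each supported
-- component, either both sit at 0K or the supported component is strictly
-- warmer.  The check applies recursively down the stack.
legal :: Component -> Bool
legal (Component v bs) =
     v >= 0
  && all (\b -> (v == 0 && version b == 0) || version b > v) bs
  && all legal bs

-- A new release must not happen at 0K, must be strictly cooler than the
-- previous release, must land in a legal state, and must cool everything it
-- supports (assuming, for simplicity, that the old and new releases support
-- the same components in the same order).
legalRelease :: Component -> Component -> Bool
legalRelease old new =
     version old > 0
  && version new < version old
  && legal new
  && and (zipWith legalRelease (supports old) (supports new))

For the running example in the next section, legal (Component 10 [Component 20
[Component 21 [], Component 30 []]]) comes out True, and the State 3 situation
below, in which a new release of C would land it at the same temperature as B,
is exactly the sort of thing legal rejects.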
Examples
Examples are particularly useful here, so let me go through a few.
Let’s take the following four components, sitting in three layers, as a running
example. Here’s our initial state:

A 10K
B 20K
C 21K
D 30K
So we have A at 10K supporting B at 20K. B in turn supports both C at 21K and
D at 30K.
State 1
Imagine we have some patches lying around for D and want to release a new
version of it. That’s easy to do; we push out a new version of D. In this
case it will have version one less than 30, i.e. 29K:
A 10K
B 20K
C 21K
D 29K <-- cools from 30K to 29K
Easy peasy. This is the most trivial example.
The only possible point of confusion here is: well, what kind of change
warrants a version decrement? And the answer is: any (released) change
whatsoever. Anything with an associated kelvin version is immutable after
being released at that version, analogous to how things are done in any other
versioning scheme.
State 2
For a second example, imagine that we now have completed a major refactoring
of A and want to release a new version of that.
Since A supports B, releasing a new version of A obligates us to release a new
version of B as well. And since B supports both C and D, we are obligated,
recursively, to release new versions of those to boot.
The total effect of a new A release is thus the following:
A 9K <-- cools from 10K to 9K
B 19K <-- cools from 20K to 19K
C 20K <-- cools from 21K to 20K
D 28K <-- cools from 29K to 28K
This demonstrates the recursive mechanic of kelvin versioning.
An interesting effect of the above mechanic, as described in Toward a Frozen
Operating System, is that anything that depends on (say) A, B, and C only
needs to express its dependency on some version of C. Depending on C at e.g.
20K implicitly specifies a dependency on its supporting component, B, at 19K,
and then A at 9K as well (since any change to A or B must also result in a
change to C).
State 3
Now imagine that someone has contributed a performance enhancement to C, and
we’d like to release a new version of that.
The interesting thing here is that we’re prohibited from releasing a new
version of C. Recall our current state:
A 9K
B 19K
C 20K <-- one degree K warmer than B
D 28K
Releasing a new version of C would require us to cool it by at least one
kelvin, resulting in the warmest possible version of 19K. But since its
supporting component, B, is already at 19K, this would constitute an illegal
state under kelvin versioning. A supporting component must always be strictly
cooler than anything it supports, or be at absolute zero conjointly with
anything it supports.
This illustrates the so-called telescoping mechanic of kelvin versioning – one
is to imagine one of those handheld telescopes made of segments that flatten
into each other when collapsed.
State 4
But now, say that we’re finally going to release our new API for B. We release
a new version of B, this one at 18K, which obligates us to in turn release new
versions of C and D:
A 9K
B 18K <-- cools from 19K to 18K
C 19K <-- cools from 20K to 19K
D 27K <-- cools from 28K to 27K
In particular, the new version of B gives us the necessary space to release a
new version of C, and, indeed, obligates us to release a new version of it. In
releasing C at 19K, presumably we’d include the performance enhancement that we
were prohibited from releasing in State 3.
State 5
A final example that’s simple, but useful to illustrate explicitly, involves
introducing a new component, or replacing a component entirely.
For example: say that we’ve decided to deprecate C and D and replace them with
a single new component, E, supported by B. This is as easy as it sounds:
A 9K
B 18K
E 40K <-- initial release at 40K
We just swap in E at the desired initial kelvin version. The initial kelvin
can be chosen arbitrarily; the only restriction is that it be warmer than the
component that supports it (or be at absolute zero conjointly with it).
It’s important to remember that, in this component-resolution of kelvin
versioning, there is no notion of the “total temperature” of the stack. Some
third party could write another component, F, supported by E, with initial
version at 1000K, for example. It doesn’t introduce any extra burden or
responsibility on the maintainers of components A through E.
Collective Kelvin Versioning
So – all that is well and good for what I’ll call the component-level
mechanics of kelvin versioning. But it’s useful to touch on a related
construct, that of collectively versioning a stack of kelvin-versioned
components. This minor innovation on Curtis’s original idea was put together
by my colleague Philip Monk and me.
If you have a collection of kelvin-versioned things, e.g. the things in our
initial state from the prior examples:

A 10K
B 20K
C 21K
D 30K
then you may want to release all these things, together, as some abstract
thing. Notably, this happens in the case of the Urbit kernel, where the stack
consists of a functional VM, an unapologetically amathematical purely
functional programming language, special-purpose kernel modules, etc.
It’s useful to be able to describe the whole kernel with a single version
number.
To do this in a consistent way, you can select one component in your stack to
serve as a primary index of sorts, and then capture everything it supports via
a patch-like, monotonically decreasing “fractional temperature” suffix.
This is best illustrated via example. If we choose B as our primary index in
the initial state above, for example, we could version the stack collectively
as 20.9K. B provides the 20K, and everything it supports is just lumped into
the “patch version” 9.
If we then consider the example given in State 1, i.e.:

A 10K
B 20K
C 21K
D 29K

in which D has cooled by a degree kelvin, then we can version this stack
collectively as 20.8K. If we were to then release yet another version of D,
this one at 28K, then we could release the stack collectively as 20.7K. And
so on.
There is no strictly prescribed schedule for decreasing the fractional
temperature, but the following is recommended:
.9, .8, .7, .., .1, .01, .001, .0001, ..
Similarly, the fractional temperature should reset to .9 whenever the primary
index cools. If we consider State 2, for example, where a new release of A
led to every other component in the stack cooling, we had this:

A 9K
B 19K
C 20K
D 28K
Note that B has cooled by a kelvin, so we would version this stack collectively
as 19.9K. The primary index has decreased by a kelvin, and the fractional
temperature has been reset to .9.
While I think examples illustrate this collective scheme most clearly, after my
spiel about the pitfalls of ambiguity it would be remiss of me not to include
a more formal spec:
Collective Kelvin Versioning (Specification)
(The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be
interpreted as described in RFC 2119.)
For a collection of kelvin-versioned components K:
- K’s version SHALL be characterised by a primary index, chosen from a
  component in K, and a real number in the interval [0, 1) (the
  “fractional temperature”), determined by all components that the primary
  index component supports.
  The fractional temperature MAY be 0 only if the primary index’s version
  is 0.
- K, at any particular version, MUST NOT be modified after release.
- At primary index version 0 and fractional temperature 0, new versions of K
  MUST NOT be released.
- New releases of K MUST be assigned a new version, and this version MUST be
  strictly less than the previous one.
- When a new release of K includes new versions of any component supported by
  the primary index, but not a new version of the primary index proper, its
  fractional temperature MUST be less than that of the previous version.
  Given a constant primary index version, fractional temperatures
  corresponding to new releases SHOULD decrease according to the following
  schedule:
  .9, .8, .7, .., .1, .01, .001, .0001, ..
- When a new release of K includes a new version of the primary index, the
  fractional temperature SHOULD be reset to .9.
- New versions of K MAY be indexed by components other than the primary
  index (i.e., K may be “reindexed” at any point). However, the newly chosen
  component MUST either be colder than the primary index it replaces, or be
  at version 0 conjointly with the primary index it replaces.
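As with the component-level rules, a small Haskell sketch may help pin the
collective mechanics down (again, these names are mine and purely
illustrative):

import Data.Ratio ((%))

-- A collective version: the kelvin version of the primary index, plus a
-- fractional temperature in [0, 1).
data Collective = Collective
  { primary    :: Integer
  , fractional :: Rational
  } deriving (Eq, Show)

-- The recommended schedule of fractional temperatures, for a constant
-- primary index version: .9, .8, .., .1, .01, .001, ..
schedule :: [Rational]
schedule = map (% 10) [9, 8 .. 1] ++ [1 % (10 ^ n) | n <- [(2 :: Integer) ..]]

-- The next collective version: a release that includes a new version of the
-- primary index (which must of course be cooler) resets the fractional
-- temperature to .9; any other release just steps down the schedule.  (No
-- attempt is made here to handle absolute zero, where no further releases
-- are legal anyway.)
next :: Collective -> Maybe Integer -> Collective
next _                (Just p') = Collective p' (9 % 10)
next (Collective p f) Nothing   = Collective p (head (dropWhile (>= f) schedule))

Indexing on B, the initial state from the earlier examples corresponds to
Collective 20 (9 % 10), i.e. 20.9K; applying next with Nothing gives 20.8K,
matching the State 1 release, and applying it with Just 19 gives 19.9K,
matching the State 2 release.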
Etc.
In my experience, the major concern in adopting a kelvin versioning scheme is
that one will accidentally initialise everything with a set of temperatures
(i.e. versions) that are too cold (i.e. too close to 0), and thus burn through
too many version numbers too quickly on the path to freezing. To alleviate
this, it helps to remember that one has an infinite number of release
candidates available for every component at every temperature.
The convention around release candidates is just to append a suffix to the
next release version along the lines of .rc1, .rc2, etc. If a component is
currently at 20K, say, its next release candidates would be 19.rc1, 19.rc2,
and so on, before a final release at 19K proper. One should feel comfortable
using these liberally, iterating through release candidates as necessary
before finally committing to a new version at a properly cooler temperature.
The applications that might want to adopt kelvin versioning are probably pretty
limited, and may indeed even be restricted to the Urbit kernel itself (Urbit
has been described by some as “that operating system with a kernel that
eventually reaches absolute zero under kelvin versioning”). Nonetheless, I
believe this scheme is certainly more than a mere marketing gimmick or what
have you, and, at minimum, it makes for an interesting change of pace from
semver.
11 Feb 2020
(UPDATE 2024/09/08: while hosting your own mailserver is not covered
in this post, I recommend you check out Simple NixOS Mailserver
for a borderline trivial way to do it.)
A couple of people recently asked about my email setup, so I figured it might
be best to simply document some of it here.
I run my own mail server for jtobin.io, plus another domain or two, and usually
wind up interacting with gmail for work. I use offlineimap to fetch and sync
mail with these remote servers, msmtp and msmtpq to send mail, mutt as my MUA,
notmuch for search, and tarsnap for backups.
There are other details; vim for writing emails, urlview for dealing with URLs,
w3m for viewing HTML, pass for password storage, etc. etc. But the
mail setup proper is as above.
I’ll just spell out some of the major config below, focusing on the setup that
works with gmail, since that’s probably of broader appeal. You can get all the
software for it from nixpkgs via the following:
mutt offlineimap msmtp notmuch notmuch-mutt
offlineimap
offlineimap is used to sync local and remote email; I use it to manually grab
emails occasionally throughout the day. You could of course set it up to run
automatically as a cron job or what have you, but I like deliberately fetching
my email only when I actually want to deal with it.
Here’s a tweaked version of one of my .offlineimaprc files:
[general]
accounts = work
[Account work]
localrepository = work-local
remoterepository = work-remote
postsynchook = notmuch new
[Repository work-local]
type = Maildir
localfolders = ~/mail/work
sep = /
restoreatime = no
[Repository work-remote]
type = Gmail
remoteuser = FIXME_user@domain.tld
remotepass = FIXME_google_app_password
realdelete = no
ssl = yes
sslcacertfile = /usr/local/etc/openssl/cert.pem
folderfilter = lambda folder: folder not in\
['[Gmail]/All Mail', '[Gmail]/Important', '[Gmail]/Starred']
You should be able to figure out the gist of this. Pay particular attention to
the ‘remoteuser’, ‘remotepass’, and ‘folderfilter’ options. For ‘remotepass’
in particular you’ll want to generate an app-specific password from Google.
The ‘folderfilter’ option lets you specify the gmail folders that you actually
want to sync; folder in [..] and folder not in [..] are probably all you’ll
want here.
If you don’t want to store your password in cleartext, and instead want to
grab it from an encrypted store, you can use the ‘remotepasseval’ option. I
don’t bother with this for Google accounts that have app-specific passwords,
but do for others.
This involves a little bit of extra setup. First, you can make some Python
functions available to the config file with ‘pythonfile’:
[general]
accounts = work
pythonfile = ~/.offlineimap.py
Here’s a version of that file that I keep, which grabs the desired password from
pass(1):
#! /usr/bin/env python2
from subprocess import check_output
def get_pass():
return check_output("pass FIXME_PASSWORD", shell=True).strip("\n")
Then you can just call the get_pass function in ‘remotepasseval’ back in
.offlineimaprc:
[Repository work-remote]
type = Gmail
remoteuser = FIXME_user@domain.tld
remotepasseval = get_pass()
realdelete = no
ssl = yes
sslcacertfile = /usr/local/etc/openssl/cert.pem
folderfilter = lambda folder: folder not in\
['[Gmail]/All Mail', '[Gmail]/Important', '[Gmail]/Starred']
When you’ve got this set up, you should just be able to run offlineimap to
fetch your email. If you maintain multiple configuration files, you can point
at a particular one using -c, e.g. offlineimap -c .offlineimaprc-foo.
msmtp, msmtpq
msmtp is used to send emails. It’s a very simple SMTP client. Here’s a
version of my .msmtprc:
defaults
auth on
tls on
tls_starttls on
tls_trust_file /usr/local/etc/openssl/cert.pem
logfile ~/.msmtp.log
account work
host smtp.gmail.com
port 587
from FIXME_user@domain.tld
user FIXME_user@domain.tld
password FIXME_google_app_password
account default: work
Again, very simple.
You can do a similar thing here if you don’t want to store passwords in
cleartext. Just use ‘passwordeval’ and the desired shell command directly,
e.g.:
account work
host smtp.gmail.com
port 587
from FIXME_user@domain.tld
user FIXME_user@domain.tld
passwordeval "pass FIXME_PASSWORD"
I occasionally like to work offline, so I use msmtpq to queue up emails to send
later. Normally you don’t have to deal with any of this directly, but
occasionally it’s nice to be able to check the queue. You can do that with
msmtp-queue -d:
$ msmtp-queue -d
no mail in queue
If there is something stuck in the queue, you can force it to send with
msmtp-queue -r or -R. FWIW, this has happened to me while interacting with
gmail under a VPN in the past.
mutt
Mutt is a fantastic MUA. Its tagline is “all mail clients suck, this one just
sucks less,” but I really love mutt. It may come as a surprise that working
with email can be a pleasure, especially if you’re accustomed to working with
clunky webmail UIs, but mutt makes it so.
Here’s a pruned-down version of one of my .muttrc files:
set realname = "MyReal Name"
set from = "user@domain.tld"
set use_from = yes
set envelope_from = yes
set mbox_type = Maildir
set sendmail = "~/.nix-profile/bin/msmtpq -a work"
set sendmail_wait = -1
set folder = "~/mail/work"
set spoolfile = "+INBOX"
set record = "+[Gmail]/Sent Mail"
set postponed = "+[Gmail]/Drafts"
set smtp_pass = "FIXME_google_app_password"
set imap_pass = "FIXME_google_app_password"
set signature = "~/.mutt/.signature-work"
set editor = "vim"
set sort = threads
set sort_aux = reverse-last-date-received
set pgp_default_key = "my_default_pgp@key"
set crypt_use_gpgme = yes
set crypt_autosign = yes
set crypt_replysign = yes
set crypt_replyencrypt = yes
set crypt_replysignencrypted = yes
bind index gg first-entry
bind index G last-entry
bind index B imap-fetch-mail
bind index - collapse-thread
bind index _ collapse-all
set alias_file = ~/.mutt/aliases
set sort_alias = alias
set reverse_alias = yes
source $alias_file
auto_view text/html
alternative_order text/plain text/enriched text/html
subscribe my_favourite@mailing.list
macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>i" \
"notmuch: search mail"
macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: reconstruct thread"
macro index l "<enter-command>unset wait_key<enter><shell-escape>read -p 'notmuch query: ' x; echo \$x >~/.cache/mutt_terms<enter><limit>~i \"\`notmuch search --output=messages \$(cat ~/.cache/mutt_terms) | head -n 600 | perl -le '@a=<>;chomp@a;s/\^id:// for@a;$,=\"|\";print@a'\`\"<enter>" "show only messages matching a notmuch pattern"
# patch rendering
# https://karelzak.blogspot.com/2010/02/highlighted-patches-inside-mutt.html
color body green default "^diff \-.*"
color body green default "^index [a-f0-9].*"
color body green default "^\-\-\- .*"
color body green default "^[\+]{3} .*"
color body cyan default "^[\+][^\+]+.*"
color body red default "^\-[^\-]+.*"
color body brightblue default "^@@ .*"
# vim: ft=muttrc:
Some comments on all that:
set mbox_type = Maildir
set sendmail = "~/.nix-profile/bin/msmtpq -a work"
set sendmail_wait = -1
set folder = "~/mail/work"
set spoolfile = "+INBOX"
set record = "+[Gmail]/Sent Mail"
set postponed = "+[Gmail]/Drafts"
Note here that we’re specifying msmtpq as our sendmail program. The -a work
option here refers to the account defined in your .msmtprc file, so if you
change the name of it there, you have to do it here as well. Ditto for the
folder.
(If you’re tweaking these config files for your own use, I’d recommend just
substituting all instances of ‘work’ with your own preferred account name.)
The negative ‘sendmail_wait’ value handles queueing mails up appropriately
when offline, IIRC.
set smtp_pass = "FIXME_google_app_password"
set imap_pass = "FIXME_google_app_password"
Here are the usual cleartext app passwords. If you want to store them
encrypted, there’s a standard method for doing that: add the following to the
top of your .muttrc:
source "gpg -d ~/.mutt/my-passwords.gpg |"
where ~/.mutt/my-passwords.gpg should contain the above smtp_pass and imap_pass
assignments, encrypted to your desired key.
Continuing with the file at hand:
set signature = "~/.mutt/.signature-work"
set editor = "vim"
These should be self-explanatory. The signature file should just contain the
signature you want appended to your mails (it will be appended under a pair of
dashes). And if you want to use some other editor to compose your emails, just
specify it here.
set pgp_default_key = "my_default_pgp@key"
set crypt_use_gpgme = yes
set crypt_autosign = yes
set crypt_replysign = yes
set crypt_replyencrypt = yes
set crypt_replysignencrypted = yes
Mutt is one of the few programs that has great built-in support for PGP. It
can easily encrypt, decrypt, and sign messages, grab public keys, etc. Here
you can see that I’ve set it to autosign messages, reply to encrypted messages
with encrypted messages, and so on.
bind index gg first-entry
bind index G last-entry
bind index B imap-fetch-mail
bind index - collapse-thread
bind index _ collapse-all
These are a few key bindings that I find helpful. The first bunch are familiar
to vim users and are useful for navigating around; the last two are really
useful for collapsing or expanding the view of your mailbox.
set alias_file = ~/.mutt/aliases
set sort_alias = alias
set reverse_alias = yes
source $alias_file
auto_view text/html
alternative_order text/plain text/enriched text/html
The alias file lets you define common shortcuts for single or multiple
addresses. I get a lot of use out of multiple address aliases, e.g.:
alias chums socrates@ago.ra, plato@acade.my, aristotle@lyce.um
The MIME type stuff below the alias config is just a sane set of defaults for
viewing common mail formats.
subscribe my_favourite@mailing.list
Mutt makes interacting with mailing lists very easy just by default, but you
can also indicate addresses that you’re subscribed to, as above, to unlock a
few extra features for them (‘list reply’ being a central one). To tell mutt
that you’re subscribed to haskell-cafe, for example, you’d use:
subscribe haskell-cafe@haskell.org
The three longer macros that follow are for notmuch. I really only find myself
using the last one, ‘l’, for search. FWIW, notmuch’s search functionality is
fantastic; I’ve found it to be more useful than gmail’s web UI search, I think.
The patch rendering stuff at the end is just a collection of heuristics for
rendering in-body .patch files well. This is useful if you subscribe to a
patch-heavy mailing list, e.g. LKML or git@vger.kernel.org, or if you just
want to be able to communicate better about diffs in your day-to-day emails
with your buddies.
Fin
There are obviously endless ways you can configure all this stuff, especially
mutt, and common usage patterns that you’ll quickly find yourself falling into.
But whatever you find those to be, the above should at least get you up and
running pretty quickly with 80% of the desired feature set.
04 Feb 2019
In my last post I first introduced hnock, a little interpreter
for Nock, and then demonstrated it on a hand-rolled decrement function.
In this post I’ll look at how one can handle the same (contrived, but
illustrative) task in Hoon.
Hoon is the higher- or application-level programming language for working with
Arvo, the operating system of Urbit. The best way I can
describe it is something like “Haskell meets C meets J meets the environment is
always explicit.”
As a typed, functional language, Hoon feels surprisingly low-level. One is
never allocating or deallocating memory explicitly when programming in Hoon,
but the experience somehow feels similar to working in C. The idea is that the
language should be simple and straightforward and support a fairly limited
level of abstraction. There are the usual low-level functional idioms (map,
reduce, etc.), as well as a structural type system to keep the programmer
honest, but at its core, Hoon is something of a functional Go (a language
which, I happen to think, is not good).
It’s not a complex language, like Scala or Rust, nor a language that overtly
supports sky-high abstraction, like Haskell or Idris. Hoon is supposed to
exist at a sweet spot for getting work done. And I am at least willing to buy
the argument that it is pretty good for getting work done in Urbit.
Recall our naïve decrement function in Haskell. It looked like this:
dec :: Integer -> Integer
dec m =
let loop n
| succ n == m = n
| otherwise = loop (succ n)
in loop 0
Let’s look at a number of ways to write this in Hoon, showing off some of the
most important Hoon programming concepts in the process.
Cores
Here’s a Hoon version of decrement. Note that to the uninitiated, Hoon looks
gnarly:
|= m=@
=/ n=@ 0
=/ loop
|%
++ recur
?: =(+(n) m)
n
recur(n +(n))
--
recur:loop
We can read it as follows:
- Define a function that takes an argument, ‘m’, having type atom (recall
that an atom is an unsigned integer).
- Define a local variable called ‘n’, having type atom and value 0, and add it
to the environment (or, if you recall our Nock terminology, to the
subject).
- Define a local variable called ‘loop’, with precise definition to follow, and
add it to the environment.
- ‘loop’ is a core, i.e. more or less a named collection of functions.
Define one such function (or arm), ‘recur’, that checks to see if the
increment of ‘n’ is equal to ‘m’, returning ‘n’ if so, and calling itself,
except with the value of ‘n’ in the environment changed to ‘n + 1’, if not.
- Evaluate ‘recur’ as defined in ‘loop’.
(To test this, you can enter the Hoon line-by-line into the Arvo dojo.
Just preface it with something like =core-dec to give it a name, and call it
via e.g. (core-dec 20).)
Hoon may appear to be a write-only language, though I’ve found this to not
necessarily be the case (just to note, at present I’ve read more Hoon code than
I’ve written). Good Hoon has a terse and very vertical style. The principle
that keeps it readable is that, roughly, each line should contain one important
logical operation. These operations are denoted by runes, the =/ and ?:
and similar ASCII digraphs sprinkled along the left hand columns of the above
example. This makes it look similar to e.g. J – a language I have
long loved, but never mastered – although in J the rough ‘one operator per
line’ convention is not typically in play.
In addition to the standard digraph runes, there is also a healthy dose of
‘irregular’ syntax in most Hoon code for simple operations that one uses
frequently. Examples used above include =(a b) for equality testing, +(n)
for incrementing an atom, and foo(a b) for evaluating ‘foo’ with the value of
‘a’ in the environment changed to ‘b’. Each of these could be replaced with a
more standard rune-based expression, though for such operations the extra
verbosity is not usually warranted.
Cores like ‘loop’ seem, to me, to be the mainstay workhorse of Hoon
programming. A core is more or less a structure, or object, or dictionary, or
whatever, of functions. One defines them liberally, constructs a subject (i.e.
environment) to suit, and then evaluates them, or some part of them, against
the subject.
To be more precise, a core is a Nock expression; like every non-atomic value in
Nock, it is a tree. Starting from the cell [l r], the left subtree, ‘l’, is
a tree of Nock formulas (i.e. the functions, like ‘recur’, defined in the
core). The right subtree, ‘r’, is all the data required to evaluate those Nock
formulas. The traditional name for the left subtree, ‘l’, is the battery of
the core; the traditional name for the right subtree is the payload.
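For the Haskell-inclined, here’s a toy model of that shape (my own
illustrative types, not hnock’s or anything official):

data Noun = Atom Integer | Cell Noun Noun
  deriving (Eq, Show)

-- A core is a cell [battery payload]: the battery (the head, at tree address
-- 2) is a tree of formulas, and the payload (the tail, at tree address 3) is
-- all the data needed to evaluate them.
battery, payload :: Noun -> Maybe Noun
battery (Cell l _) = Just l
battery _          = Nothing
payload (Cell _ r) = Just r
payload _          = Nothing

-- Tree addressing in general: address 1 is the whole noun, 2 and 3 are the
-- head and tail, and larger addresses recurse through the parent axis.  So
-- in a cell [f [n m]], 'n' sits at address 6 and 'm' at address 7.
slot :: Integer -> Noun -> Maybe Noun
slot 1 n          = Just n
slot 2 (Cell l _) = Just l
slot 3 (Cell _ r) = Just r
slot a n
  | a > 3         = slot (a `div` 2) n >>= slot (if even a then 2 else 3)
  | otherwise     = Nothing

The [0 6] and [0 7] formulas in the Nock below are exactly slot 6 and slot 7
lookups into the subject.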
One is always building up a local environment in Hoon and then evaluating some
value against it. Aside from the arm ‘recur’, the core ‘loop’ also contains in
its payload the values ‘m’ and ‘n’. The expression ‘recur:loop’ – irregular
syntax for =< recur loop – means “use ‘loop’ as the environment and
evaluate ‘recur’.” Et voilà, that’s how we get our decrement.
You’ll note that this should feel very similar to the way we defined decrement
in Nock. Our hand-assembled Nock code, slightly cleaned up, looked like this:
[8
[1 0]
8
[1
6
[5 [4 0 6] [0 7]]
[0 6]
2 [[0 2] [4 0 6] [0 7]] [0 2]
]
2 [0 1] [0 2]
]
This formula, when evaluated against an atom subject, creates another subject
from it, defining a ‘loop’ analogue that looks in specific addresses in the
subject for itself, as well as the ‘m’ and ‘n’ variables, such that it produces
the decrement of the original subject. Our Hoon code does much the same –
every ‘top-level’ rune expression adds something to the subject, until we get
to the final expression, ‘recur:loop’, which evaluates ‘recur’ against the
subject, ‘loop’.
The advantage of Hoon, in comparison to Nock, is that we can work with names,
instead of raw tree addresses, as well as with higher-level abstractions like
cores. The difference between Hoon and Nock really is like the difference
between C and assembly!
For what it’s worth, here is the compiled Nock corresponding to our above
decrement function:
[8
[1 0]
8
[8
[1
6
[5 [4 0 6] 0 30]
[0 6]
9 2 10 [6 4 0 6] 0 1
]
0 1
]
7 [0 2] 9 2 0 1
]
It’s similar, though not identical, to our hand-rolled Nock. In particular,
you can see that it is adding a constant conditional formula, including the
familiar equality check, to the subject (note that the equality check, using
Nock-5, refers to address 30 instead of 7 – presumably this is because I have
more junk floating around in my dojo subject). Additionally, the formulas
using Nock-9 and Nock-10 reduce to Nock-2 and Nock-0, just like our hand-rolled
code does.
But our Hoon is doing more than the bespoke Nock version did, so we’re not
getting quite the same code. Worth noting is the ‘extra’ use of Nock-8, which
is presumably required because I’ve defined both ‘recur’, the looping function,
and ‘loop’, the core to hold it, and the hand-rolled Nock obviously didn’t
involve a core.
Doors
Here’s another way to write decrement, using another fundamental Hoon
construct, the door:
|= m=@
=/ loop
|_ n=@
++ recur
?: =(+(n) m)
n
~(recur ..recur +(n))
--
~(recur loop 0)
A door is a core that takes an argument. Here we’ve used the |_ rune,
instead of |%, to define ‘loop’, and note that it takes ‘n’ as an argument.
So instead of ‘n’ being defined external to the core, as it was in the previous
example, here we have to specify it explicitly when we call ‘recur’. Note that
this is more similar to our Haskell example, in which ‘loop’ was defined as a
function taking ‘n’ as an argument.
The two other novel things here are the ~(recur ..recur +(n)) and ~(recur
loop 0) expressions, which actually turn out to be mostly the same thing. The
syntax:

~(arm door argument)

is irregular, and means “evaluate ‘arm’ in ‘door’ using ‘argument’”. So in the
last line, ~(recur loop 0) means “evaluate ‘recur’ in ‘loop’ with n set to 0.”
In the definition of ‘recur’, on the other hand, we need to refer to the door
that contains it, but are in the very process of defining that thing. The
‘..recur’ syntax means “the door that contains ‘recur’,” and is useful for
exactly this task, given we can’t yet refer to ‘loop’. The syntax ~(recur
..recur +(n)) means “evaluate ‘recur’ in its parent door with n set to n + 1.”
Let’s check the compiled Nock of this version:
[8
[8
[1 0]
[1
6
[5 [4 0 6] 0 30]
[0 6]
8
[0 1]
9 2 10 [6 7 [0 3] 4 0 6] 0 2
]
0 1
]
8
[0 2]
9 2 10 [6 7 [0 3] 1 0] 0 2
]
There’s even more going on here than in our core-implemented decrement, but a
door is a core with extra structure, so that’s to be expected.
Hoon has special support, though, for one-armed doors. This is precisely how
functions (also called gates or traps, depending on the context) are
implemented in Hoon. The following is probably the most idiomatic version of
naïve decrement:
|= m=@
=/ n 0
|-
?: =(+(n) m)
n
$(n +(n))
The |= rune that we’ve been using throughout these examples really defines a
door, taking the specified argument, with a single arm called ‘$’. The |-
rune here does the same, except it immediately calls the ‘$’ arm after defining
it. The last line, $(n +(n)), is analogous to the recur(n +(n)) line in
our first example: it evaluates the ‘$’ arm, except changing the value of ‘n’
to ‘n + 1’ in the environment.
(Note that there are two ‘$’ arms defined in the above code – one via the use
of |=, and one via the use of |-. But there is no confusion as to which
one we mean, since the latter has been the latest to be added to the subject.
Additions to the subject are always prepended in Hoon – i.e. they are
placed at address 2. As the topmost ‘$’ in the subject is the one that
corresponds to |-, it is resolved first.)
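If it helps, the ‘latest addition wins’ behaviour can be pictured with a
throwaway sketch (purely illustrative; Hoon’s actual wing resolution searches
the subject tree, not an association list):

-- Model the subject as a stack of named bindings, newest first.
type Subject a = [(String, a)]

-- Adding to the subject puts the new binding on top (at address 2, so to
-- speak).
push :: String -> a -> Subject a -> Subject a
push name value subject = (name, value) : subject

-- Resolution returns the topmost match.
resolve :: String -> Subject a -> Maybe a
resolve = lookup

-- e.g. resolve "$" (push "$" "the |- arm" (push "$" "the |= arm" []))
--        == Just "the |- arm"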
The compiled Nock for this version looks like the following:
[8
[1 0]
8
[1
6
[5 [4 0 6] 0 30]
[0 6]
9 2 10 [6 4 0 6] 0 1
]
9 2 0 1
]
And it is possible (see the appendix) to show that, modulo some different
addressing, this reduces exactly to our hand-rolled Nock code.
UPDATE: my colleague Ted Blackman, an actual Hoon programmer, recommended
the following as a slightly more idiomatic version of naïve decrement:
=| n=@
|= m=@
^- @
?: =(+(n) m)
n
$(n +(n))
Note that here we’re declaring ‘n’ outside of the gate itself by using another
rune, =|, that gives the variable a default value based on its type (an
atom’s default value is 0). There’s also an explicit type cast via ^- @,
indicating that the gate produces an atom (like type signatures in Haskell, it
is considered good practice to include these, even though they may not strictly
be required).
Declaring ‘n’ outside the gate is interesting. It has an imperative feel,
as if one were writing the code in Python, or were using a monad like ‘State’
or a ‘PrimMonad’ in Haskell. Like in the Haskell case, we aren’t actually
doing any mutation here, of course – we’re creating new subjects to evaluate
each iteration of our Nock formula against. And the resulting Nock is very
succinct:
[6
[5 [4 0 14] 0 6]
[0 14]
9 2 10 [14 4 0 14] 0 1
]
Basic Generators
If you tested the above examples, I instructed you to do so by typing them into
Arvo’s dojo. I’ve come to believe that, in general, this is a poor way to
teach Hoon. It should be avoided for all but the most introductory examples
(such as the ones I’ve provided here).
If you’ve learned Haskell, you are familiar with the REPL provided by GHCi, the
Glasgow Haskell Compiler’s interpreter. Code running in GHCi is implicitly
running in the IO monad, and I think this leads to confusion amongst newcomers
who must then mentally separate “Haskell in GHC” from “Haskell in GHCi.”
I think there is a similar problem in Hoon. Expressions entered into the dojo
implicitly grow or shrink or otherwise manipulate the dojo’s subject, which is
not, in general, available to standalone Hoon programs. Such standalone Hoon
programs are called generators. In general, they’re what you will use when
working in Hoon and Arvo.
There are four kinds of generators: naked, %say, %ask, and %get. In this
post we’ll just look at the first two; the last couple are out of scope, for
now.
Naked Generators
The simplest kind of generator is the ‘naked’ generator, which just exists in
a file somewhere in your Urbit’s “desk.” If you save the following as
naive-decrement.hoon in an Urbit’s home/gen directory, for example:
|= m=@
=/ n 0
|-
?: =(+(n) m)
n
$(n +(n))
Then you’ll be able to run it in a dojo via:
~zod:dojo> +naive-decrement 20
19
A naked generator can only be a simple function (technically, a gate) that
produces a noun. It has no access to any external environment – it’s
basically just a self-contained function in a file. It must have an argument,
and it must have only one argument; to pass multiple values to a naked
generator, one must use a cell.
Say Generators
Hoon is a purely functional language but, unlike Haskell, it has no IO
monad to demarcate I/O effects. Hoon programs do not produce effects on their
own at all – instead, they construct nouns that tell Arvo how to produce
some effect or other.
A %say generator (where %say is a symbol) produces a noun, but it can also
make use of provided environment data (e.g. date information, entropy, etc.).
The idea is that the generator has a specific structure that Arvo knows how to
handle, in order to supply it with the requisite information. Specifically,
%say generators have the structure:
:- %say
|= [<environment data> <list of arguments> <list of optional arguments>]
:- %noun
<code>
I’ll avoid discussing what a list is in Hoon at the moment, and we won’t
actually use any environment data in any examples here. But if you dump the
following in home/gen/naive-decrement.hoon, for example:
:- %say
|= [* [m=@ ~] ~]
:- %noun
=/ n 0
|-
?: =(+(n) m)
n
$(n +(n))
you can call it from the dojo via the same mechanism as before:
~zod:dojo> +naive-decrement 20
19
The generator itself actually returns a particularly-structured noun: a cell
with the symbol %say as its head, and a gate returning a pair of the symbol
%noun and a noun as its tail. The %noun symbol describes the data produced
by the generator. Note that none of this structure is displayed when
evaluating the generator in the dojo – we just get the noun itself – though
this behaviour is dojo-dependent.
I think one should get in the habit of writing %say generators for almost all
Hoon code, even if a simple naked generator or throwaway dojo command would do
the trick. They are so important for getting things done in Hoon that it helps
to learn about and start using them sooner rather than later.
Fin
I’ve introduced Hoon and given a brief tour of what I think are some of the
most important tools for getting work done in the language. Cores, doors, and
gates will get you plenty far, and early exposure to generators, in the form of
the basic naked and %say variants, will help you avoid the habit of
programming in the dojo, and get you writing more practically-structured Hoon
code from the get-go.
I haven’t had time in this post to describe Hoon’s type system, which is
another very important topic when it comes to getting work done in the
language. I’ll probably write one more to create a small trilogy of sorts –
stay tuned.
Appendix
Let’s demonstrate that the compiled Nock code from our idiomatic,
gate-implemented decrement reduces to the same as our hand-rolled Nock, save
different address use. Recall that the compiled Nock code was:
[8
[1 0]
8
[1
6
[5 [4 0 6] 0 30]
[0 6]
9 2 10 [6 4 0 6] 0 1
]
9 2 0 1
]
An easy reduction is from Nock-9 to Nock-2. Note that *[a 9 b c] is the same
as *[*[a c] 2 [0 1] 0 b]. When ‘c’ is [0 1], we have that *[a c] = a,
such that *[a 9 b [0 1]] is the same as *[a 2 [0 1] 0 b], i.e. that the
formula [9 b [0 1]] is the same as the formula [2 [0 1] 0 b]. We can thus
reduce the use of Nock-9 on the last line to:
[8
[1 0]
8
[1
6
[5 [4 0 6] 0 30]
[0 6]
9 2 10 [6 4 0 6] 0 1
]
2 [0 1] 0 2
]
The remaining formula involving Nock-9 evaluates [10 [6 4 0 6] 0 1] against
the subject, and then evaluates [2 [0 1] [0 2]] against the result. Note
that, for some subject ‘a’, we have:
*[a 10 [6 4 0 6] 0 1]
= #[6 *[a 4 0 6] *[a 0 1]]
= #[6 *[a 4 0 6] a]
= #[3 [*[a 4 0 6] /[7 a]] a]
= #[1 [/[2 a] [*[a 4 0 6] /[7 a]]] a]
= [/[2 a] [*[a 4 0 6] /[7 a]]]
= [*[a 0 2] [*[a 4 0 6] *[a 0 7]]]
= *[a [0 2] [4 0 6] [0 7]]
such that [10 [6 4 0 6] 0 1] = [[0 2] [4 0 6] [0 7]]. And for
c = [[0 2] [4 0 6] [0 7]] and some subject ‘a’, we have:
*[a 9 2 c]
= *[*[a c] 2 [0 1] 0 2]
and for b = [2 [0 1] 0 2]:
*[*[a c] b]
= *[a 7 c b]
= *[a 7 [[0 2] [4 0 6] [0 7]] [2 [0 1] 0 2]]
such that:
[9 2 [0 2] [4 0 6] [0 7]] = [7 [[0 2] [4 0 6] [0 7]] [2 [0 1] 0 2]]
Now. Note that for any subject ‘a’ we have:
*[a 7 [[0 2] [4 0 6] [0 7]] [2 [0 1] 0 2]]
= *[a 7 [[0 2] [4 0 6] [0 7]] *[a 0 2]]
since *[a 2 [0 1] 0 2] = *[a *[a 0 2]]. Thus, we can reduce:
*[a 7 [[0 2] [4 0 6] [0 7]] *[a 0 2]]
= *[*[a [0 2] [4 0 6] [0 7]] *[a 0 2]]
= *[a 2 [[0 2] [4 0 6] [0 7]] [0 2]]
such that
[7 [[0 2] [4 0 6] [0 7]] [2 [0 1] 0 2]] = [2 [[0 2] [4 0 6] [0 7]] [0 2]]
and so, finally, we can reduce the compiled Nock to:
[8
[1 0]
8
[1
6
[5 [4 0 6] 0 30]
[0 6]
2 [[0 2] [4 0 6] [0 7]] 0 2
]
2 [0 1] 0 2
]
which, aside from the use of the dojo-assigned address 30 (and any reduction
errors on this author’s part), is the same as our hand-rolled Nock.