Implementing the Giry Monad
13 Feb 2017

In my last post I went over the categorical and measure-theoretic foundations of the Giry monad, the ‘canonical’ probability monad that operates on the level of probability measures.
In this post I’ll pick up from where I left off and talk about a neat and faithful (if impractical) implementation of the Giry monad that one can put together in Haskell.
Measure, Integral, and Continuation
So. For a quick review, we’ve established the Giry monad as a triple \((\mathcal{P}, \mu, \eta)\), where \(\mathcal{P}\) is an endofunctor on the category of measurable spaces \(\textbf{Meas}\), \(\mu\) is a marginalizing integration operation defined by:
\[\mu(\rho)(A) = \int_{\mathcal{P}(M)} \left\{\lambda \nu . \int_M \chi_A d \nu \right\} d \rho\]and \(\eta\) is a monoidal identity, defined by the Dirac measure at a point:
\[\eta(x)(A) = \chi_A(x).\]

How do we actually implement this beast? If we’re looking to be suitably general then it is unlikely that we’re going to be able to easily represent something like a \(\sigma\)-algebra over some space of measures on a computer, so that route is sort of a non-starter.
But it can be done. The key to implementing a general-purpose Giry monad is to notice that the fundamental operation involved in it is integration, and that we can avoid working with \(\sigma\)-algebras and measurable spaces directly if we focus on dealing with measurable functions instead of measurable sets.
Consider the integration map on measurable functions \(\tau_f\) that we’ve been using this whole time. For some measurable function \(f\), \(\tau_f\) takes a measure on some measurable space \(M = (X, \mathcal{X})\) and uses it to integrate \(f\) over \(X\). In other words:
\[\tau_f(\nu) = \int_X f d\nu.\]

A measurable function \(f\) here has type \(X \to \mathbb{R}\), so a measure \(\nu\) - viewed through integration, as the map \(f \mapsto \int_X f d\nu\) - has corresponding type \((X \to \mathbb{R}) \to \mathbb{R}\).
This might look familiar to you; it’s very similar to the type signature for a continuation:
newtype Cont a r = Cont ((a -> r) -> r)
Indeed, if we restrict the carrier type of ‘Cont’ to the reals, we can be really faithful to the type:
newtype Integral a = Integral ((a -> Double) -> Double)
Now, let’s overload notation and call the integration map \(\tau_f\) itself a measure. That is, \(\tau_f\) is a mapping \(\nu \mapsto \int_{X}fd\nu\), so we’ll just interpret the notation \(\nu(f)\) to mean the same thing - \(\int_{X}fd\nu\). This is convenient because we can dispense with \(\tau\) and just pretend measures can be applied directly to measurable functions. There’s no way we can get confused here; measures operate on sets, not functions, so notation like \(\nu(f)\) is not currently in use. We just set \(\nu(f) = \tau_f(\nu)\) and that’s that. Let’s rename the ‘Integral’ type to match:
newtype Measure a = Measure ((a -> Double) -> Double)
We can extract a very nice shallowly-embedded language for integration here, the core of which is a single term:
integrate :: (a -> Double) -> Measure a -> Double
integrate f (Measure nu) = nu f
Note that this is the same way we’d express integration mathematically; we specify that we want to integrate a measurable function \(f\) with respect to some measure \(\nu\):
\[\int f d\nu = \texttt{integrate f nu}.\]

The only subtle difference here is that we don’t specify the space we’re integrating over in the integral expression - instead, we’ll bake that into the definitions of the measures themselves when we create them. Details in a bit.
What’s interesting here is that the Giry monad is the continuation monad with the carrier type restricted to the reals. This isn’t surprising when you think about what’s going on here - we’re representing measures as integration procedures, that is, programs that take a measurable function as input and then compute its integral in some particular way. A measure, as we’ve implemented it here, is just a ‘program with a missing piece’. And this is exactly the essence of the continuation monad in Haskell.
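To make that connection concrete, here’s a small sketch (the primed names are mine, not from any library) showing that the ‘Measure’ type above is just ‘Cont’ from the transformers package with its result type pinned to ‘Double’:

import Control.Monad.Trans.Cont (Cont, runCont)

-- 'Measure' is 'Cont' with the result type fixed at Double.
type Measure' a = Cont Double a

-- Integration is just running the continuation against the integrand.
integrate' :: (a -> Double) -> Measure' a -> Double
integrate' f nu = runCont nu f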
Typeclass Instances
We can fill out the functor, applicative, and monad instances mechanically by reference to a standard continuation monad implementation, and each instance gives us some familiar conceptual structure or operation on probability measures. Let’s take a look.
The functor instance lets us transform the support of a measurable space while keeping its density structure invariant. If we have:
\[\nu(f) = \int_X f d\nu\]then mapping a measurable function over the measure corresponds to:
\[(\texttt{fmap} \, g \, \nu)(f) = \int_{X} (f \circ g) d\nu.\]

The functor structure allows us to precisely express a pushforward measure or distribution of \(\nu\) under \(g\). It lets us ‘adapt’ a measure to other measurable spaces, just like a good functor should.
In Haskell, the functor instance corresponds exactly to the math:
instance Functor Measure where
  fmap g nu = Measure $ \f ->
    integrate (f . g) nu
The monad instance is exactly the Giry monad structure that we developed previously, and it allows us to sequence probability measures together by marginalizing one into another. We’ll write it in terms of bind, of course, which went like:
\[(\rho \gg\!\!= g)(f) = \int_M \left\{\lambda m . \int_N f dg(m) \right\} d \rho.\]

The Haskell translation is verbatim:
instance Monad Measure where
  return x = Measure (\f -> f x)
  rho >>= g = Measure $ \f ->
    integrate (\m -> integrate f (g m)) rho
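As a quick sanity check on the semantics (a sketch with a hypothetical synonym, not part of any library): ‘return’ really is the Dirac measure \(\eta\) from before, since integrating any \(f\) against it simply evaluates \(f\) at the point.

dirac :: a -> Measure a
dirac = return

-- By definition, integrate f (dirac x) reduces to f x.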
Finally there’s the Applicative instance, which as I mentioned in the last post is sort of conceptually weird here. So in the spirit of that comment, I’m going to dodge any formal justification for now and just use the following instance which works in practice:
instance Applicative Measure where
  pure x = Measure (\f -> f x)
  Measure g <*> Measure h = Measure $ \f ->
    g (\k -> h (f . k))
Conceptual Example
It’s worth taking a look at an example of how things should conceivably work here. Consider the following probabilistic model:
\[\begin{align*} \pi & \sim \text{beta}(\alpha, \beta) \\ \mu \, | \, \pi & \sim \text{binomial}(n, \pi) \end{align*}\]

It’s a standard hierarchical presentation. A ‘compound’ measure can be obtained here by marginalizing over the beta measure \(\pi\), and that’s called the beta-binomial measure. Let’s find it.
The beta distribution has support on the \([0, 1]\) subset of the reals, and the binomial distribution with argument \(n\) has support on the \(\{0, \ldots, n\}\) subset of the integers, so we know that things should proceed like so:
\[\begin{align*} \psi(f) & = (\pi \gg\!\!= \mu)(f) \\ & = \int_{\mathbb{R}} \left\{\lambda p . \int_{\mathbb{Z}} f d\mu(p) \right\} d \pi. \end{align*}\]

Eliding some theory of integration, I can tell you that \(\pi\) is absolutely continuous with respect to Lebesgue measure and that \(\mu(p)\) is absolutely continuous w/respect to counting measure for appropriate \(p\). So, \(\pi\) admits a density \(d\pi/dx = g_\pi\) and \(\mu(p)\) admits a density \(d\mu(p)/d\# = g_{\mu(p)}\), defined as:
\[g_\pi(p \, | \, \alpha, \beta) = \frac{1}{B(\alpha, \beta)} p^{\alpha - 1} (1 - p)^{\beta - 1}\]and
\[g_{\mu(p)}(x \, | \, n, p) = \binom{n}{x} p^x (1 - p)^{n - x}\]respectively, for \(B\) the beta function and \(\binom{n}{x}\) a binomial coefficient. Again, we can reduce the integral as follows, transforming the outermost integral into a standard Riemann integral and the innermost integral into a simple sum of products:
\[\psi(f) = \int_{0}^{1} \left\{ \lambda p . \, g_{\pi}(p \, | \, \alpha, \beta) \sum_{z \in \{0, \ldots, n\}} f(z) \, g_{\mu(p)}(z \, | \, n, p) \right\} dx\]

where \(dx\) denotes Lebesgue measure. I could expand this further or simplify things a little more (the beta and binomial are conjugates) but you get the point, which is that we have a way to evaluate the integral.
What is really required here then is to be able to encode into the definitions of measures like \(\pi\) and \(\mu(p)\) the method of integration to use when evaluating them. For measures absolutely continuous w/respect to Lebesgue measure, we can use the Riemann integral over the reals. For measures absolutely continuous w/respect to counting measure, we can use a sum of products. In both cases, we’ll also need to supply the density or mass function by which the integral should be evaluated.
Creating Measures
Recall that we are representing measures as integration procedures. So to create one is to define a program by which we’ll perform integration.
Let’s start with the conceptually simpler case of a probability measure that’s absolutely continuous with respect to counting measure. We need to provide a support (the region for which probability is greater than 0) and a probability mass function (so that we can weight every point appropriately). Then we just want to integrate a function by evaluating it at every point in the support, multiplying the result by that point’s probability mass, and summing everything up. In code, this translates to:
fromMassFunction :: (a -> Double) -> [a] -> Measure a
fromMassFunction f support = Measure $ \g ->
  foldl' (\acc x -> acc + f x * g x) 0 support
So if we want to construct a binomial measure, we can do that like so (where choose comes from Numeric.SpecFunctions):
binomial :: Int -> Double -> Measure Int
binomial n p = fromMassFunction (pmf n p) [0..n] where
  pmf n p x
    | x < 0 || n < x = 0
    | otherwise = choose n x * p ^^ x * (1 - p) ^^ (n - x)
The second example involves measures over the real line that are absolutely continuous with respect to Lebesgue measure. In this case we want to evaluate a Riemann integral over the entire real line, which is going to necessitate approximation on our part. There are a bunch of methods out there for approximating integrals, but a simple one for one-dimensional problems like this is quadrature, which Ed Kmett has handily packaged up in his integration package:
fromDensityFunction :: (Double -> Double) -> Measure Double
fromDensityFunction d = Measure $ \f ->
    quadratureTanhSinh (\x -> f x * d x)
  where
    quadratureTanhSinh = result . last . everywhere trap
Here we’re using quadrature to approximate the integral, but otherwise it has a similar form to ‘fromMassFunction’. The difference here is that we’re integrating over the entire real line, and so don’t have to supply a support explicitly.
We can use this to create a beta measure (where logBeta again comes from Numeric.SpecFunctions):
beta :: Double -> Double -> Measure Double
beta a b = fromDensityFunction (density a b) where
  density a b p
    | p < 0 || p > 1 = 0
    | otherwise = 1 / exp (logBeta a b) * p ** (a - 1) * (1 - p) ** (b - 1)
Note that since we’re going to be integrating over the entire real line and the beta distribution has support only over \([0, 1]\), we need to implicitly define the support here by specifying which regions of the domain will lead to a density of 0.
In any case, now that we’ve constructed those things we can just use a monadic bind to create the beta-binomial measure we described before. It masks a lot of under-the-hood complexity.
betaBinomial :: Int -> Double -> Double -> Measure Int
betaBinomial n a b = beta a b >>= binomial n
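Equivalently, and maybe more suggestively, we can write the same thing in do-notation (betaBinomial' here is just a throwaway name for the identical measure):

betaBinomial' :: Int -> Double -> Double -> Measure Int
betaBinomial' n a b = do
  p <- beta a b     -- marginalize over the beta measure..
  binomial n p      -- ..by binding its support into the binomial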
There are a couple of other useful ways to create measures, but the most notable is to use a sample in order to create an empirical measure. This is equivalent to passing in a specific support for which the mass function assigns equal probability to every element; I’ll use Gabriel Gonzalez’s foldl package here as it’s pretty elegant:
fromSample :: Foldable f => f a -> Measure a
fromSample = Measure . flip weightedAverage

weightedAverage :: (Foldable f, Fractional r) => (a -> r) -> f a -> r
weightedAverage f = Foldl.fold (weightedAverageFold f) where
  weightedAverageFold :: Fractional r => (a -> r) -> Fold a r
  weightedAverageFold f = Foldl.premap f averageFold

  averageFold :: Fractional a => Fold a a
  averageFold = (/) <$> Foldl.sum <*> Foldl.genericLength
Using ‘fromSample’ you can create an empirical measure using just about anything you’d like:
data Foo = Foo | Bar | Baz
foos :: [Foo]
foos = [Foo, Foo, Bar, Foo, Baz, Foo, Bar, Foo, Foo, Foo, Bar]
nu :: Measure Foo
nu = fromSample foos
Though I won’t demonstrate it here, you can use this approach to also create measures from sampling functions or random variables that use a source of randomness - just draw a sample from the function and pipe the result into ‘fromSample’.
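For illustration, here’s a rough sketch of that idea using plain System.Random (the names and the choice of a uniform sampler are mine; in practice you might reach for something like mwc-random instead):

import System.Random (newStdGen, randomRs)

-- Build an empirical measure from n draws of a uniform sampler.  The
-- resulting measure is only as good as the sample it was built from.
empiricalUniform :: Int -> IO (Measure Double)
empiricalUniform n = do
  g <- newStdGen
  pure (fromSample (take n (randomRs (0, 1) g)))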
Querying Measures
To query a measure is to simply get some result out of it, and we do that by integrating some measurable function against it. The easiest thing to do is to just take a straightforward expectation by integrating the identity function; for example, here’s the expected value of a beta(10, 10) measure:
> integrate id (beta 10 10)
0.49999999999501316
The expected value of a beta(\(\alpha\), \(\beta\)) distribution is \(\alpha / (\alpha + \beta)\), so we can verify analytically that the result should be 0.5. We observe a bit of numerical imprecision here because, if you’ll recall, we’re just approximating the integral via quadrature. For measures created via ‘fromMassFunction’ we don’t need to use quadrature, so we won’t observe the same kind of approximation error. Here’s the expected value of a binomial(10, 0.5) measure, for example:
> integrate fromIntegral (binomial 10 0.5)
5.0
Note here that we’re integrating the ‘fromIntegral’ function against the binomial measure. This is because the binomial measure is defined over the integers, rather than the reals, and we always need to evaluate to a real when we integrate. That’s part of the definition of a measure!
Let’s calculate the expectation of the beta-binomial distribution with \(n = 10\), \(\alpha = 1\), and \(\beta = 8\):
> integrate fromIntegral (betaBinomial 10 1 8)
1.108635884924813
Neato. And since we can integrate like this, we can really compute any of the moments of a measure. The first raw moment is what we’ve been doing here, and is called the expectation:
expectation :: Measure Double -> Double
expectation = integrate id
The second (central) moment is the variance. Here I mean variance in the moment-based sense, rather than as the possibly better-known sample variance:
variance :: Measure Double -> Double
variance nu = integrate (^ 2) nu - expectation nu ^ 2
The variance of a binomial(\(n\), \(p\)) distribution is known to be \(np(1-p)\), so for \(n = 10\) and \(p = 0.5\) we should get 2.5:
> variance (binomial 10 0.5)
<interactive>:87:11: error:
• Couldn't match type ‘Int’ with ‘Double’
Expected type: Measure Double
Actual type: Measure Int
• In the first argument of ‘variance’, namely ‘(binomial 10 0.5)’
In the expression: variance (binomial 10 0.5)
In an equation for ‘it’: it = variance (binomial 10 0.5)
Ahhh, but remember: the binomial measure is defined over the integers, so we can’t integrate it directly. No matter - the functorial structure allows us to adapt it to any other measurable space via a measurable function:
> variance (fmap fromIntegral (binomial 10 0.5))
2.5
Expectation and variance (and other moments) are pretty well-known, but you can do more exotic things as well. You can calculate the moment generating function for a measure, for example:
momentGeneratingFunction :: Measure Double -> Double -> Double
momentGeneratingFunction nu t = integrate (\x -> exp (t * x)) nu
and the cumulant generating function follows naturally:
cumulantGeneratingFunction :: Measure Double -> Double -> Double
cumulantGeneratingFunction nu = log . momentGeneratingFunction nu
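As a sketch of how you might use these (not something from the original library), the MGF of a binomial(\(n\), \(p\)) measure evaluated at \(t\) should agree with the closed form \((1 - p + p e^t)^n\), up to floating-point error:

mgfAgrees :: Int -> Double -> Double -> Bool
mgfAgrees n p t = abs (lhs - rhs) <= 1e-9 * max 1 (abs rhs) where
  lhs = momentGeneratingFunction (fmap fromIntegral (binomial n p)) t
  rhs = (1 - p + p * exp t) ^ n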
A particularly useful construct is the cumulative distribution function for a measure, which calculates the probability of the region at or below some number:
cdf :: Measure Double -> Double -> Double
cdf nu x = integrate (negativeInfinity `to` x) nu
negativeInfinity :: Double
negativeInfinity = negate (1 / 0)
to :: (Num a, Ord a) => a -> a -> a -> a
to a b x
  | x >= a && x <= b = 1
  | otherwise = 0
The beta(2, 2) distribution is symmetric around its mean 0.5, so the probability of the region \([0, 0.5]\) should itself be 0.5. This checks out as expected, modulo approximation error due to quadrature:
> cdf (beta 2 2) 0.5
0.4951814897381374
Similarly for measurable spaces without any notion of order, there’s a simple CDF analogue that calculates the probability of a region that contains the given points:
containing :: (Num a, Eq b) => [b] -> b -> a
containing xs x
  | x `elem` xs = 1
  | otherwise = 0
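Here’s a usage sketch: the probability that a binomial(10, 0.5) variate lands in the region \(\{4, 5, 6\}\), which analytically works out to \((210 + 252 + 210)/1024 = 0.65625\):

middleMass :: Double
middleMass = integrate (containing [4, 5, 6]) (binomial 10 0.5)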
And probably the least interesting query of all is the simple ‘volume’, which calculates the total measure of a space. For any probability measure this must obviously be one, so it can at least be used as a quick sanity check:
volume :: Measure Double -> Double
volume = integrate (const 1)
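And as a sketch of that sanity check, both of the following should evaluate to 1 - the first up to quadrature error, the second up to floating-point error:

volumeChecks :: (Double, Double)
volumeChecks =
  ( volume (beta 2 2)
  , volume (fmap fromIntegral (binomial 10 0.5))
  )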
Convolution and Friends
I mentioned in the last post that applicativeness corresponds to independence in some sense, and that independent measures over the same measurable space can be convolved together, à la:
\[(\nu + \zeta)(f) = \int_{M}\int_{M}f(x + y)d\nu(x)d\zeta(y)\]for measures \(\nu\) and \(\zeta\) on \(M\). In Haskell-land it’s well-known that any applicative instance gives you a free ‘Num’ instance, and the story is no different here:
instance Num a => Num (Measure a) where
  (+) = liftA2 (+)
  (-) = liftA2 (-)
  (*) = liftA2 (*)
  abs = fmap abs
  signum = fmap signum
  fromInteger = pure . fromInteger
There are a few neat ways to demonstrate this kind of thing. Let’s use a Gaussian measure here as a running example:
gaussian :: Double -> Double -> Measure Double
gaussian m s = fromDensityFunction (density m s) where
  density m s x
    | s <= 0 = 0
    | otherwise =
        1 / (s * sqrt (2 * pi)) * exp (negate ((x - m) ^^ 2) / (2 * (s ^^ 2)))
First, consider a chi-squared measure with \(k\) degrees of freedom. We could create this directly using a density function, but instead we can represent it by summing up independent squared Gaussian measures:
chisq :: Int -> Measure Double
chisq k = sum (replicate k normal) where
  normal = fmap (^ 2) (gaussian 0 1)
To sanity check the result, we can compute the mean and variance of a \(\chi^2(2)\) measure, which should be \(k\) and \(2k\) respectively for \(k = 2\):
> expectation (chisq 2)
2.0000000000000004
> variance (chisq 2)
4.0
As a second example, consider a product of independent Gaussian measures. This is a trickier distribution to deal with analytically (see here), but we can use some well-known identities for general independent measures in order to verify our results. For any independent measures \(\mu\) and \(\nu\), we have:
\[\mathbb{E}(\mu\nu) = \mathbb{E}\mu \mathbb{E}\nu\]and
\[\text{var}(\mu\nu) = \text{var}(\mu)\text{var}(\nu) + \text{var}(\mu)(\mathbb{E}\nu)^2 + \text{var}(\nu)(\mathbb{E}\mu)^2\]for the expectation and variance of their product. So for a product of independent Gaussians w/parameters (1, 2) and (2, 3) respectively, we expect to see 2 for its expectation and 61 for its variance:
> expectation (gaussian 1 2 * gaussian 2 3)
2.0000000000000001
> variance (gaussian 1 2 * gaussian 2 3)
61.00000000000003
Woop!
Wrapping Up
And there you have it, a continuation-based implementation of the Giry monad. You can find a bunch of code with similar functionality to this packaged up in my old measurable library on GitHub if you’d like to play around with the concepts.
That library has accumulated a few stars since I first pushed it up in 2013. I think a lot of people are curious about these weird measure things, and this framework at least gives you the ability to play around with a representation for measures directly. I found it particularly useful for really grokking, say, that integrating some function \(f\) against a probability measure \(\nu\) is identical to integrating the identity function against the probability measure \(\texttt{fmap} \, f \, \nu\). And there are a few similar concepts there that I find really pop out when you play with measures directly, rather than when one just works with them on paper.
But let me now tell you why the Giry monad sucks in practice.
Take a look at this integral expression, which arises from a monadic bind:
\[(\nu \gg\!\!= \mu)(f) = \int_{M} \left\{\lambda m . \int_{M} f d\mu(m) \right\} d \nu.\]

For simplicity, let’s assume that \(M\) is discrete and has cardinality \(|M|\). This means that the integral reduces to
\[(\nu \gg\!\!= \mu)(f) = \underbrace{\sum_{m \in M} d\nu(m) \underbrace{ \sum_{n \in M} f(n) d\mu(m)(n) }_{O(|M|)}}_{O(|M|)}\]for \(d\mu(m)\) and \(d\nu\) the appropriate Radon-Nikodym derivatives. You can see that the total number of operations involved in the integral is \(O(|M|^2)\), and indeed, for \(p\) monadic binds the computational complexity of evaluating all the integrals involved is on the order of \(|M|^{p}\) - that is, exponential in the number of binds. It was no coincidence that I demonstrated a variance calculation for a \(\chi^2(2)\) distribution instead of for a \(\chi^2(10)\).
This isn’t really much of a surprise - the cottage industry of approximating integrals exists because integration is hard in practice, and integration is surely best avoided whenever one can get away with doing so. Vikash Mansinghka’s quote on this topic is fitting: “don’t calculate probabilities - sample good guesses.” I’ll also add: relegate the measures to measure theory, where they seem to belong.
The Giry monad is a lovely abstract construction for formalizing the monadic structure of probability, and as canonical probabilistic objects, measures and integrals are tremendously useful when working theoretically. But they’re a complete non-starter when it comes to getting anything nontrivial done in practice. For that, there are far more useful representations for probability distributions in Haskell - notably, the sampling function or random variable representation found in things like mwc-probability/mwc-random-monad and random-fu, or even better, the structural representation based on free or operational monads like I’ve written about before, or that you can find in something like monad-bayes.
The intuitions gleaned from playing with the Giry monad carry over precisely to other representations for the probability monad. In all cases, ‘return’ will correspond, semantically, to constructing a Dirac distribution at a point, while ‘bind’ will correspond to a marginalizing operator. The same is true for the underlying (applicative) functor structure: ‘fmap’ always corresponds to a density-preserving transformation of the support, while applicativeness corresponds to independence (yielding convolution, etc.). And you have to admit, the connection to continuations is pretty cool.
There is clearly some connection to the codensity monad as well, but I think I’ll let someone else figure out the specifics of that one. Something something right-Kan extension..