Many people find the move from beginner to intermediate Haskeller hard because it means mastering the dreaded monads. But in actuality this is more of a documentation problem than a theory problem: when beginners search the internet for examples, they mostly find explanations of what monads are and how they work. In fact, beginner code will almost always use monads without the author even knowing it! The purpose of this article is to go over how to use the more basic monads, not what they are or how they work. If you want to understand the theory, there are many other resources on the internet for you to read.

The Maybe Monad

For each monad in this post I’ll go over how to think about it, what its do notation means, and why we want to use it. A Haskeller at any level of proficiency will run up against the Maybe data type; the difference is that beginners tend to run away from the type by pattern matching on it instead of using the monad instance, like this:
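A small, hypothetical example of the contrast (the function names here are mine, purely for illustration):

```haskell
-- Beginner style: match on every Maybe by hand.
addMaybes :: Maybe Int -> Maybe Int -> Maybe Int
addMaybes mx my =
  case mx of
    Nothing -> Nothing
    Just x  -> case my of
                 Nothing -> Nothing
                 Just y  -> Just (x + y)

-- Monadic style: let the Maybe instance handle the plumbing.
addMaybes' :: Maybe Int -> Maybe Int -> Maybe Int
addMaybes' mx my = do
  x <- mx
  y <- my
  return (x + y)
```

Both functions behave identically; the second just says what it means.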

Now, to execute a program in our language we need to write a sem function that evaluates the program and spits out a DVal, a value in our semantic domain. To do so we’ll need some way of keeping track of variable bindings, and some way of representing an error. For errors we’ll use a Maybe, and for state we’ll just use a simple Map from Data.Map.
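A sketch of what those definitions might look like; the constructor names are my own guesses, since the article’s exact AST may differ:

```haskell
import qualified Data.Map as M

-- The syntax of our little language: literals, addition, let bindings,
-- variable references, function definitions, and application.
data Expr
  = Lit Int
  | Add Expr Expr
  | Let String Expr Expr   -- Let x e in'
  | Ref String
  | Fun String Expr
  | App Expr Expr
  deriving (Eq, Show)

-- Values in our semantic domain: numbers and functions.
data DVal
  = DNum Int
  | DFun String Expr
  deriving (Eq, Show)

-- The variable store: a simple map from names to values.
type Env a = M.Map String a
```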

Now we can define sem using case statements and pattern matching. This will probably look very familiar if you’ve taken an introductory programming languages course.
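A sketch of such a sem function, assuming an AST with constructors Lit, Add, Let, Ref, Fun, and App, semantic values DNum and DFun, and a Map-based store (all guessed names, as before):

```haskell
sem :: Expr -> Env DVal -> Maybe DVal
sem (Lit i)      _  = Just (DNum i)
sem (Add l r)    st =
  case sem l st of
    Just (DNum i) ->
      case sem r st of
        Just (DNum j) -> Just (DNum (i + j))
        _             -> Nothing
    _ -> Nothing
sem (Let x e in') st =
  case sem e st of
    Just v  -> sem in' (M.insert x v st)   -- bind x, then evaluate in'
    Nothing -> Nothing
sem (Ref x)      st = M.lookup x st
sem (Fun x body) _  = Just (DFun x body)
sem (App f a)    st =
  case sem f st of
    Just (DFun x body) ->
      case sem a st of
        Just v  -> sem body (M.insert x v st)
        _       -> Nothing
    _ -> Nothing
```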

When we see a literal we simply wrap it in our Maybe DVal type. When we want to perform addition we recursively evaluate the left hand side and the right hand side; if we get values back, we perform the addition and wrap the result in our Maybe DVal type, and if something goes wrong we return a Nothing to represent the error. The Let construct is interesting: we evaluate the expression our variable x will be bound to, which is e in this example. If we get a value v back, we manipulate our store to map our variable x to our value v, and then recursively evaluate the in' expression, where the x variable will presumably be referenced. It may not look like it, but this pattern of carrying around a state in a function parameter and recursing with an updated state is actually a state monad hiding in plain sight. If we see a variable reference Ref then we simply look up its value from the store and return it. For a function definition we simply mirror the function in our semantic domain, and finally, application proceeds almost identically to addition; the only difference is that we deconstruct the function definition through pattern matching and recur into the function body.

That code will work just fine, but it is not exactly elegant, and we aren’t utilizing the Maybe monad as effectively as we could be. We would be better served to use do notation to take advantage of the Maybe monad directly. Here is what the sem function would look like using do notation:
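A sketch of that rewrite, using the same guessed constructors as before:

```haskell
semMaybe :: Expr -> Env DVal -> Maybe DVal
semMaybe (Lit i)      _  = return (DNum i)
semMaybe (Add l r)    st = do
  DNum i <- semMaybe l st      -- a failed match here yields Nothing
  DNum j <- semMaybe r st
  return (DNum (i + j))
semMaybe (Let x e in') st = do
  v <- semMaybe e st
  semMaybe in' (M.insert x v st)
semMaybe (Ref x)      st = M.lookup x st
semMaybe (Fun x body) _  = return (DFun x body)
semMaybe (App f a)    st = do
  DFun x body <- semMaybe f st
  v <- semMaybe a st
  semMaybe body (M.insert x v st)
```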

Whenever we talk about a monad we need to think about the computational context it encapsulates. For the Maybe monad, that context is failure. What this means is that when we are in do notation for a Maybe, if any line returns a Nothing then the entire computation returns Nothing. Which is quite nice! For example, in the Add case above, if our left hand side or right hand side returns something we have not pattern matched on, then, because we are in do notation for a Maybe, the entire result becomes Nothing. Another important detail of do notation is that every line must have type m a for the same monad m. Thus, for this example, you’ll notice that we never have to pattern match on a Just constructor to get at the result of a recursive call: the bind hidden behind the <- arrow unwraps it for us. So in this example the type m a in our do notation unifies to Maybe DVal, where m is the Maybe monad and our value a is DVal.

The State Monad

Our semMaybe is a direct improvement over the first sem function. Not only is the code cleaner, it also describes the desired computation more succinctly than the previous version. However, we are still carrying around a peculiar input to our function, namely our variable store. It would be much better if we didn’t have to explicitly pass the store around and could instead just reference it when needed. This is the computational context that the State monad encapsulates. Let’s step away from our little language and consider some simpler examples. First, let’s build a stateful loop that counts to ten and returns something; those who are imperatively inclined should find this example enlightening. We’ll start small, with an increment function:
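A minimal sketch of such an increment function, using Control.Monad.State from the mtl library:

```haskell
import Control.Monad.State

-- Read the current state, then store that state plus one.
increment :: State Int ()
increment = do
  n <- get
  put (n + 1)
```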

The increment function calls two convenient functions from the mtl library, namely get and put. Let’s take a peek at their type signatures:
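These come from the MonadState class in mtl:

```haskell
get :: MonadState s m => m s
put :: MonadState s m => s -> m ()
```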

get simply returns the state, m s, and put takes a new state s, replaces the old state with it, and returns a unit value. We can also use a convenient function called modify to avoid these get and put calls, like so:
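The same increment, rewritten with modify:

```haskell
import Control.Monad.State

-- modify applies a function to the current state in one step.
increment :: State Int ()
increment = modify (+1)
```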

As always let’s look at the type signature:
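From mtl’s MonadState class:

```haskell
modify :: MonadState s m => (s -> s) -> m ()
```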

So now we can see how this works: modify is a higher-order function that takes some function (s -> s) from state to state and applies it to the current state. In our case that function simply adds one.

Do notation for a State monad threads the state through every line. What this means is that we do not need to explicitly pass the state around, get it, put it, and apply functions to it. Instead, we can operate on the state implicitly through the do notation. Here is what I mean in code:
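For example, a function that increments three times, assuming the increment function from above, and never touching get or put itself:

```haskell
-- Each call to increment sees the state left behind by the previous call.
incBy3 :: State Int ()
incBy3 = do
  increment
  increment
  increment
```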

You’ll notice that the incBy3 function never gets or puts the state; that is all hidden in the increment function. What is beautiful about this is that we have a very declarative language for describing what our code does. We do not need to deal with any of the under-the-hood details of getting the state and incrementing it manually; we simply call increment and rely on do notation to ensure that the state the second increment sees is the result of the first increment call. Just as do notation in a Maybe monad encapsulates failure, so every line could fail or succeed, do notation for a State monad encapsulates stateful computation, where every line can read or modify the state.

Now that we can increment our state, let’s write a loop and return something once our state reaches, say, 10.
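A first attempt, again assuming the increment function from above:

```haskell
-- Increment, then check whether we've reached 10 yet.
countToTen :: State Int String
countToTen = do
  increment
  n <- get
  if n >= 10
    then return "done!"
    else countToTen
```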

In order to run anything in the State monad we must use one of three functions: runState, evalState, or execState. Here are their type signatures:
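```haskell
runState  :: State s a -> s -> (a, s)
evalState :: State s a -> s -> a
execState :: State s a -> s -> s
```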

So you can see that runState takes a stateful computation State s a and an initial state s, and returns a tuple holding the final value of the computation, a, and the final state, s. execState and evalState are similar functions, but instead of returning a tuple, execState returns only the final state s and evalState only the final value a.

Let’s see if our function actually works now:
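With the countToTen sketched above, a GHCi session would look something like this:

```haskell
λ> runState countToTen 0
("done!",10)
```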

But what happens if we send in an initial state that is 10 or more?
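With the definition above, starting at 10 still performs one increment before the check fires:

```haskell
λ> runState countToTen 10
("done!",11)
```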

Ah, it looks like we are incrementing the state before we perform the check, which is extra computation we don’t necessarily want. Let’s implement a fix:
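One fix is to check before incrementing; a sketch, still assuming the increment function from above:

```haskell
countToTen :: State Int String
countToTen = do
  n <- get
  if n >= 10
    then return "done!"
    else do            -- nested do block, same State monad
      increment
      countToTen
```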

Much better. You’ll notice that I’ve nested do notation here. This is fine as long as the monad of each do block is the same; if it changes, you’ll need some function to convert the inner do block’s monad to the outer one (this is often a lift function). Alright, let’s test it out:
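Running the fixed version from GHCi:

```haskell
λ> runState countToTen 10
("done!",10)
λ> runState countToTen 0
("done!",10)
```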

Great! It looks like it works as intended. Hopefully you’ve seen that there really is nothing to be scared of with the State monad; it is a valuable and powerful tool in our toolbox, and we should use it when it makes sense.

Combining the Maybe and State Monads

We’ve seen the result of rewriting our semantic function using a Maybe monad, and we know that we have a State monad hiding in plain sight, so one might want to combine these two to gain the benefits of both. Such a person may try something like this:
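A sketch of that tempting-but-awkward approach; I’m guessing at the BadEnv definition from the discussion that follows:

```haskell
-- A State monad whose *result* type is a Maybe: the two effects
-- don't actually cooperate this way.
type BadEnv a = State (Env DVal) (Maybe a)

semBad :: Expr -> BadEnv DVal
semBad (Lit i)   = return (Just (DNum i))
semBad (Add l r) = do
  ml <- semBad l
  mr <- semBad r
  case (ml, mr) of   -- right back to matching on Maybe by hand...
    (Just (DNum i), Just (DNum j)) -> return (Just (DNum (i + j)))
    _                              -> return Nothing
-- ...and it only gets worse for Let, Ref, and App.
```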

Even though it seems reasonable to set your State monad’s return value to a Maybe a, you won’t get far with this approach. Not only are you back to pattern matching on the Maybe constructors (which we wanted to avoid in the first place), but when you go to call modify, Haskell will get confused: your store maps variables to DVals, but you are now handling Maybe DVals, so you’ll have to deconstruct the Maybe type manually, which is exactly what we were trying to avoid by using the Maybe monad in the first place!

The appropriate way to do this is with a monad transformer stack. I won’t go into a ton of detail on transformers here, but I’ll show how to construct one for this case. Instead of our BadEnv type we’ll use the StateT monad transformer to embed our Maybe in our State monad, like so:
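Concretely, the new type of our semantic function becomes something like:

```haskell
-- State on the outside, Maybe on the inside:
-- morally, Expr -> Env DVal -> Maybe (DVal, Env DVal)
sem' :: Expr -> StateT (Env DVal) Maybe DVal
```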

The StateT monad transformer takes a state s, an inner monad m, and a return type a, and produces a monad transformer stack that embeds the inner monad into a state monad. These type variables are apparent in the runStateT, evalStateT, and execStateT functions:
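```haskell
runStateT  :: StateT s m a -> s -> m (a, s)
evalStateT :: Monad m => StateT s m a -> s -> m a
execStateT :: Monad m => StateT s m a -> s -> m s
```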

So now, instead of having a State s a, we have a StateT s m a, which means that when we run the StateT monad transformer we’ll get our inner monad back as output, which in this case is a Maybe.

Alright, let’s write our sem function using our monad transformer stack:
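A sketch of sem', once again using my guessed constructor names:

```haskell
sem' :: Expr -> StateT (Env DVal) Maybe DVal
sem' (Lit i)   = return (DNum i)   -- return lifts into the whole stack
sem' (Add l r) = do
  DNum i <- sem' l
  DNum j <- sem' r
  return (DNum (i + j))
sem' (Let x b e) = do
  v <- sem' b
  modify (M.insert x v)            -- bind x in the store
  sem' e
sem' (Ref x) = do
  st <- get
  lift (M.lookup x st)             -- Maybe DVal lifted into the stack
sem' (Fun x body) = return (DFun x body)
sem' (App f a) = do
  DFun x e <- sem' f
  v <- sem' a
  modify (M.insert x v)
  sem' e
```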

Ah, quite nice! Let’s step through it case by case. For literals we just return our literal after wrapping it in a DVal. It is important to note that this return lifts a non-monadic DVal into the monad transformer stack: it doesn’t just lift the value into a Maybe, or into a State monad, it lifts it into the StateT s Maybe stack. The Add case is identical to the Maybe monad case. The Let case recursively evaluates the b expression that we will bind x to; as long as that succeeds, we add the binding of x to the result of b to our state with a call to modify, and finally recur into the “in” portion of the let binding, which in this case is the e variable. To grab a reference from our store we simply get the state, look the variable up, and return it. You’ll notice the use of a function called lift here. lift is used constantly with monad transformer stacks because it takes a value in the inner monad and lifts it into the outer monad. In this case our call to M.lookup returns a Maybe value, but because our do notation is for the outer monad, every line must have a StateT type, and Haskell will throw a type error because it cannot match a Maybe with a StateT. To fix this, we simply call lift on the Maybe. Here is the type signature for lift:
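```haskell
lift :: (MonadTrans t, Monad m) => m a -> t m a
```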

where m is the inner monad and t is the outer monad transformer. In our case m = Maybe and t = StateT (Env DVal), so the type of lift in the sem' function above will unify to:
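```haskell
lift :: Maybe a -> StateT (Env DVal) Maybe a
```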

Moving on, we have function definition, which is identical to the Maybe monad case, and finally function application. For function application we check that the left hand side evaluates to a function and that the right hand side evaluates to a value. If both do, we add the binding of x to v to our state and recur into the function body e. Now our sem' function takes only a program in our language and possibly returns a value if everything goes well; a much clearer type signature than our previous sem function had. Let’s just make sure that we can actually evaluate a program in our language:
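For example, binding x to 2 and then computing x + 3, using my guessed constructors:

```haskell
λ> runStateT (sem' (Let "x" (Lit 2) (Add (Ref "x") (Lit 3)))) M.empty
Just (DNum 5,fromList [("x",DNum 2)])
```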

As you can see, we got back a Maybe (DVal, Env DVal). As you construct deeper monad transformer stacks you’ll find yourself having to unwrap each successive monad from the top down. Here our top monad was StateT and our bottom monad was Maybe, so we needed to unwrap the StateT with runStateT before we could access the Maybe. For monads that do not have a Show instance, such as the Reader monad, you would need to call runReader on the result of the runStateT (or execStateT, or evalStateT) call, depending on what you want to do.

But what happens if we have a type error in our program:
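Say we try to apply a number as if it were a function; the pattern match in the App case fails, which in this stack produces a Nothing:

```haskell
λ> runStateT (sem' (App (Lit 1) (Lit 2))) M.empty
Nothing
```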

Just as expected, we receive a Nothing back because of our type error. So not only do we have more expressive types (types that describe the computation of a function), but we also have more declarative, easier to understand code, and we never had to manually thread our store or pattern match on Maybe constructors to get there.