diff --git a/docs/a-fistful-of-monads.html b/docs/a-fistful-of-monads.html
index 590b0a5..b336165 100644
--- a/docs/a-fistful-of-monads.html
+++ b/docs/a-fistful-of-monads.html
@@ -31,357 +31,230 @@
-

A Fistful of Monads

- -

-When we first talked about functors, we saw that they were a useful concept for -values that can be mapped over. Then, we took that concept one step further by -introducing applicative functors, which allow us to view values of certain data -types as values with contexts and use normal functions on those values while -preserving the meaning of those contexts. -

- -

-In this chapter, we'll learn about -monads, which are just beefed up applicative functors, much like -applicative functors are only beefed up functors. -

- -more cool than u - -

-When we started off with functors, we saw that it's possible to map functions -over various data types. We saw that for this purpose, the Functor -type class was introduced and it had us asking the question: when we have a -function of type a -> b and some data type -f a, how do we map that function over the data type -to end up with f b? We saw how to map something over -a Maybe a, a list [a], an IO -a etc. We even saw how to map a function a -> b -over other functions of type r -> a to get -functions of type r -> b. To answer this question -of how to map a function over some data type, all we had to do was look at the type -of fmap: -

- -
-fmap :: (Functor f) => (a -> b) -> f a -> f b
-
- -

-And then make it work for our data type by writing the appropriate Functor instance. -

- -

-Then we saw a possible improvement of functors and said, hey, what if that -function a -> b is already wrapped inside a -functor value? Like, what if we have Just (*3), how -do we apply that to Just 5? What if we don't want to -apply it to Just 5 but to a Nothing instead? Or if we have [(*2),(+4)], -how would we apply that to [1,2,3]? How would that -work even? For this, the Applicative type class was -introduced, in which we wanted the answer to the following type: -

- -
-(<*>) :: (Applicative f) => f (a -> b) -> f a -> f b
-
- -

-We also saw that we can take a normal value and wrap it inside a data type. For -instance, we can take a -1 and wrap it so that it becomes a -Just 1. Or we can make it into a [1]. Or an I/O action that does nothing and just yields -1. The function that does this is called pure. -

- -

-Like we said, an applicative value can be seen as a value with an added context. -A fancy value, to put it in technical terms. For instance, the character 'a' is just a normal character, whereas Just 'a' has some added context. Instead of a Char, we have a Maybe Char, -which tells us that its value might be a character, but it could also be an -absence of a character. -

- -

-It was neat to see how the Applicative type class -allowed us to use normal functions on these values with context and how that -context was preserved. Observe: -

- -
-ghci> (*) <$> Just 2 <*> Just 8
+

A Fistful of Monads

+

When we first talked about functors, we saw that they were a useful +concept for values that can be mapped over. Then, we took that concept +one step further by introducing applicative functors, which allow us to +view values of certain data types as values with contexts and use normal +functions on those values while preserving the meaning of those +contexts.

+

In this chapter, we’ll learn about monads, which are just beefed up +applicative functors, much like applicative functors are only beefed up +functors.

+more cool than u +

When we started off with functors, we saw that it’s possible to map +functions over various data types. We saw that for this purpose, the +Functor type class was introduced and it had us asking the +question: when we have a function of type a -> b and +some data type f a, how do we map that function over the +data type to end up with f b? We saw how to map something +over a Maybe a, a list [a], an +IO a etc. We even saw how to map a function +a -> b over other functions of type +r -> a to get functions of type r -> b. +To answer this question of how to map a function over some data type, +all we had to do was look at the type of fmap:

+
fmap :: (Functor f) => (a -> b) -> f a -> f b
+

And then make it work for our data type by writing the appropriate +Functor instance.
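To recall how that goes, here is a minimal sketch of such an instance for a hypothetical single-field wrapper type (the name Wrapper is made up for illustration; it is not from this chapter):

```haskell
-- A hypothetical one-value wrapper type, made up for illustration.
data Wrapper a = Wrapper a deriving (Show, Eq)

-- fmap reaches inside the wrapper and applies the function to the value.
instance Functor Wrapper where
    fmap f (Wrapper x) = Wrapper (f x)
```

With this instance, fmap (+1) (Wrapper 1) evaluates to Wrapper 2, just like fmap over a Just.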

+

Then we saw a possible improvement of functors and said, hey, what if +that function a -> b is already wrapped inside a functor +value? Like, what if we have Just (*3), how do we apply +that to Just 5? What if we don’t want to apply it to +Just 5 but to a Nothing instead? Or if we have +[(*2),(+4)], how would we apply that to +[1,2,3]? How would that work even? For this, the +Applicative type class was introduced, in which we wanted +the answer to the following type:

+
(<*>) :: (Applicative f) => f (a -> b) -> f a -> f b
+

We also saw that we can take a normal value and wrap it inside a data +type. For instance, we can take a 1 and wrap it so that it +becomes a Just 1. Or we can make it into a +[1]. Or an I/O action that does nothing and just yields +1. The function that does this is called +pure.
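As a quick sketch of that (nothing here beyond the standard Prelude; the value names are made up):

```haskell
-- pure puts a value in a minimal default context.
wrappedMaybe :: Maybe Int
wrappedMaybe = pure 1    -- Just 1

wrappedList :: [Int]
wrappedList = pure 1     -- [1]

wrappedIO :: IO Int
wrappedIO = pure 1       -- an I/O action that does nothing and yields 1
```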

+

Like we said, an applicative value can be seen as a value with an +added context. A fancy value, to put it in technical terms. For +instance, the character 'a' is just a normal character, +whereas Just 'a' has some added context. Instead of a +Char, we have a Maybe Char, which tells us +that its value might be a character, but it could also be an absence of +a character.

+

It was neat to see how the Applicative type class +allowed us to use normal functions on these values with context and how +that context was preserved. Observe:

+
ghci> (*) <$> Just 2 <*> Just 8
 Just 16
-ghci> (++) <$> Just "klingon" <*> Nothing
+ghci> (++) <$> Just "klingon" <*> Nothing
 Nothing
 ghci> (-) <$> [3,4] <*> [1,2,3]
-[2,1,0,3,2,1]
-
- -

-Ah, cool, so now that we treat them as applicative values, Maybe a values represent computations that might have -failed, [a] values represent computations that have -several results (non-deterministic computations), IO -a values represent values that have side-effects, etc. -

- -

-Monads are a natural extension of applicative functors and with them we're -concerned with this: if you have a value with a context, m -a, how do you apply to it a function that takes a normal a and returns a value with a context? That is, how do you -apply a function of type a -> m b to a value of -type m a? So essentially, we will want this function: -

- -
-(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
-
- -

-If we have a fancy value and a function that takes a normal value but -returns a fancy value, how do we feed that fancy value into the function? This -is the main question that we will concern ourselves when dealing with monads. We write -m a instead of f a because -the m stands for Monad, but monads are just -applicative functors that support >>=. The ->>= function is pronounced as bind. -

- -

-When we have a normal value a and a normal function -a -> b it's really easy to feed the value to the -function — you just apply the function to the value normally and that's -it. But when we're dealing with values that come with certain contexts, it takes -a bit of thinking to see how these fancy values are fed to functions and how to -take into account their behavior, but you'll see that it's easy as one two -three. -

- - -

Getting our feet wet with Maybe

- -monads, grasshoppa - -

-Now that we have a vague idea of what monads are about, let's see if we can make -that idea a bit less vague. -

- -

-Much to no one's surprise, Maybe is a monad, so let's -explore it a bit more and see if we can combine it with what we know about -monads. -

- +[2,1,0,3,2,1]
+

Ah, cool, so now that we treat them as applicative values, +Maybe a values represent computations that might have +failed, [a] values represent computations that have several +results (non-deterministic computations), IO a values +represent values that have side-effects, etc.

+

Monads are a natural extension of applicative functors and with them +we’re concerned with this: if you have a value with a context, +m a, how do you apply to it a function that takes a normal +a and returns a value with a context? That is, how do you +apply a function of type a -> m b to a value of type +m a? So essentially, we will want this function:

+
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
+

If we have a fancy value and a function that takes a normal +value but returns a fancy value, how do we feed that fancy value into +the function? This is the main question that we will concern +ourselves with when dealing with monads. We write m a instead of +f a because the m stands for +Monad, but monads are just applicative functors that +support >>=. The >>= function is +pronounced as bind.

+

When we have a normal value a and a normal function +a -> b, it’s really easy to feed the value to the +function — you just apply the function to the value normally and that’s +it. But when we’re dealing with values that come with certain contexts, +it takes a bit of thinking to see how these fancy values are fed to +functions and how to take their behavior into account, but you’ll see +that it’s as easy as one, two, three.
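The contrast can be sketched with Maybe, whose standard Prelude instance already supports >>= (the value names here are made up):

```haskell
-- Normal value, normal function: plain application.
plain :: Int
plain = (\x -> x + 1) 5                        -- 6

-- Fancy value, function returning a fancy value: we use >>=.
fancy :: Maybe Int
fancy = Just 5 >>= \x -> Just (x + 1)          -- Just 6

-- The context is respected: feeding a Nothing yields a Nothing.
stillFancy :: Maybe Int
stillFancy = Nothing >>= \x -> Just (x + 1)    -- Nothing
```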

+

Getting our feet wet with +Maybe

+monads, grasshoppa +

Now that we have a vague idea of what monads are about, let’s see if +we can make that idea a bit less vague.

+

Much to no one’s surprise, Maybe is a monad, so let’s +explore it a bit more and see if we can combine it with what we know +about monads.

-Make sure you understand applicatives at this point. It's good if you have a -feel for how the various Applicative instances work -and what kind of computations they represent, because monads are nothing more -than taking our existing applicative knowledge and upgrading it. +

Make sure you understand applicatives +at this point. It’s good if you have a feel for how the various +Applicative instances work and what kind of computations +they represent, because monads are nothing more than taking our existing +applicative knowledge and upgrading it.

- -

-A value of type Maybe a represents a value of type -a with the context of possible failure attached. A -value of Just "dharma" means that the string "dharma" is there whereas a value of -Nothing represents its absence, or if you look at the -string as the result of a computation, it means that the computation has failed. -

- -

-When we looked at Maybe as a functor, we saw that if -we want to fmap a function over it, it gets mapped -over the insides if it's a Just value, otherwise the -Nothing is kept because there's nothing to map it -over! -

- -

-Like this: -

- -
-ghci> fmap (++"!") (Just "wisdom")
-Just "wisdom!"
-ghci> fmap (++"!") Nothing
-Nothing
-
- -

-As an applicative functor, it functions similarly. However, applicatives also -have the function wrapped. Maybe is an applicative -functor in such a way that when we use <*> to -apply a function inside a Maybe to a value that's -inside a Maybe, they both have to be Just values for the result to be a Just value, otherwise the result is Nothing. It makes sense because if you're missing either -the function or the thing you're applying it to, you can't make something up out -of thin air, so you have to propagate the failure: -

- -
-ghci> Just (+3) <*> Just 3
+

A value of type Maybe a represents a value of type +a with the context of possible failure attached. A value of +Just "dharma" means that the string "dharma" +is there whereas a value of Nothing represents its absence, +or if you look at the string as the result of a computation, it means +that the computation has failed.

+

When we looked at Maybe as a functor, we saw that if we +want to fmap a function over it, it gets mapped over the +insides if it’s a Just value, otherwise the +Nothing is kept because there’s nothing to map it over!

+

Like this:

+
ghci> fmap (++"!") (Just "wisdom")
+Just "wisdom!"
+ghci> fmap (++"!") Nothing
+Nothing
+

As an applicative functor, it functions similarly. However, +applicatives also have the function wrapped. Maybe is an +applicative functor in such a way that when we use +<*> to apply a function inside a Maybe +to a value that’s inside a Maybe, they both have to be +Just values for the result to be a Just value, +otherwise the result is Nothing. It makes sense because if +you’re missing either the function or the thing you’re applying it to, +you can’t make something up out of thin air, so you have to propagate +the failure:

+
ghci> Just (+3) <*> Just 3
 Just 6
-ghci> Nothing <*> Just "greed"
+ghci> Nothing <*> Just "greed"
 Nothing
 ghci> Just ord <*> Nothing
-Nothing
-
- -

-When we use the applicative style to have normal functions act on Maybe values, it's similar. All the values have to be Just values, otherwise it's all for Nothing! -

- -
-ghci> max <$> Just 3 <*> Just 6
+Nothing
+

When we use the applicative style to have normal functions act on +Maybe values, it’s similar. All the values have to be +Just values, otherwise it’s all for +Nothing!

+
ghci> max <$> Just 3 <*> Just 6
 Just 6
 ghci> max <$> Just 3 <*> Nothing
-Nothing
-
- -

-And now, let's think about how we would do >>= -for Maybe. Like we said, >>= -takes a monadic value, and a function that takes a normal value and returns a -monadic value and manages to apply that function to the monadic value. How does -it do that, if the function takes a normal value? Well, to do that, it has to -take into account the context of that monadic value. -

- -

-In this case, >>= would take a Maybe a value and a function of type a -> Maybe b -and somehow apply the function to the Maybe a. To -figure out how it does that, we can use the intuition that we have from Maybe being an applicative functor. Let's say that we have -a function \x -> Just (x+1). It takes a number, -adds 1 to it and wraps it in a Just: -

- - -
-ghci> (\x -> Just (x+1)) 1
+Nothing
+

And now, let’s think about how we would do >>= for +Maybe. Like we said, >>= takes a monadic +value, and a function that takes a normal value and returns a monadic +value and manages to apply that function to the monadic value. How does +it do that, if the function takes a normal value? Well, to do that, it +has to take into account the context of that monadic value.

+

In this case, >>= would take a +Maybe a value and a function of type +a -> Maybe b and somehow apply the function to the +Maybe a. To figure out how it does that, we can use the +intuition that we have from Maybe being an applicative +functor. Let’s say that we have a function +\x -> Just (x+1). It takes a number, adds 1 +to it and wraps it in a Just:

+
ghci> (\x -> Just (x+1)) 1
 Just 2
 ghci> (\x -> Just (x+1)) 100
-Just 101
-
- -

-If we feed it 1, it evaluates to Just 2. If we give it the number 100, the result is Just 101. -Very straightforward. Now here's the kicker: how do we feed a -Maybe value to this function? If we think about how -Maybe acts as an applicative functor, answering this -is pretty easy. If we feed it a Just value, take what's inside -the Just and apply the function to it. If give it -a Nothing, hmm, well, then we're left with a -function but Nothing to apply it to. In that case, -let's just do what we did before and say that the result is Nothing. -

- -

-Instead of calling it >>=, let's call it -applyMaybe for now. It takes a Maybe a and a function that returns a Maybe b and manages to apply that function to the Maybe a. Here it is in code: -

- -
-applyMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
+Just 101
+

If we feed it 1, it evaluates to Just 2. If +we give it the number 100, the result is +Just 101. Very straightforward. Now here’s the kicker: how +do we feed a Maybe value to this function? If we think +about how Maybe acts as an applicative functor, answering +this is pretty easy. If we feed it a Just value, we take +what’s inside the Just and apply the function to it. If +we give it a Nothing, hmm, well, then we’re left with a +function but Nothing to apply it to. In that case, let’s +just do what we did before and say that the result is +Nothing.

+

Instead of calling it >>=, let’s call it +applyMaybe for now. It takes a Maybe a and a +function that returns a Maybe b and manages to apply that +function to the Maybe a. Here it is in code:

+
applyMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
 applyMaybe Nothing f  = Nothing
-applyMaybe (Just x) f = f x
-
- -

-Okay, now let's play with it for a bit. We'll use it as an infix function so that -the Maybe value is on the left side and the function on -the right: -

- -
-ghci> Just 3 `applyMaybe` \x -> Just (x+1)
+applyMaybe (Just x) f = f x
+

Okay, now let’s play with it for a bit. We’ll use it as an infix +function so that the Maybe value is on the left side and +the function on the right:

+
ghci> Just 3 `applyMaybe` \x -> Just (x+1)
 Just 4
-ghci> Just "smile" `applyMaybe` \x -> Just (x ++ " :)")
-Just "smile :)"
+ghci> Just "smile" `applyMaybe` \x -> Just (x ++ " :)")
+Just "smile :)"
 ghci> Nothing `applyMaybe` \x -> Just (x+1)
 Nothing
-ghci> Nothing `applyMaybe` \x -> Just (x ++ " :)")
-Nothing
-
- -

-In the above example, we see that when we used applyMaybe with a Just value and a function, the function simply got applied to the value inside the Just. When we tried to use it with a Nothing, the whole result was Nothing. -What about if the function returns a Nothing? Let's -see: -

- -
-ghci> Just 3 `applyMaybe` \x -> if x > 2 then Just x else Nothing
+ghci> Nothing `applyMaybe` \x -> Just (x ++ " :)")
+Nothing
+

In the above example, we see that when we used +applyMaybe with a Just value and a function, +the function simply got applied to the value inside the +Just. When we tried to use it with a Nothing, +the whole result was Nothing. What if the function +returns a Nothing? Let’s see:

+
ghci> Just 3 `applyMaybe` \x -> if x > 2 then Just x else Nothing
 Just 3
 ghci> Just 1 `applyMaybe` \x -> if x > 2 then Just x else Nothing
-Nothing
-
- -

-Just what we expected. If the monadic value on the left is a Nothing, the whole thing is Nothing. -And if the function on the right returns a Nothing, -the result is Nothing again. This is very similar to when -we used Maybe as an applicative and we got a Nothing result if somewhere in there was a Nothing. -

- -

-It looks like that for Maybe, we've figured out how -to take a fancy value and feed it to a function that takes a normal value and -returns a fancy one. We did this by keeping in mind that a Maybe value represents a computation that might have -failed. -

- -

-You might be asking yourself, how is this useful? It may seem like applicative -functors are stronger than monads, since applicative functors allow us to take a -normal function and make it operate on values with contexts. We'll see that -monads can do that as well because they're an upgrade of applicative functors, -and that they can also do some cool stuff that applicative functors can't. -

- -

-We'll come back to Maybe in a minute, but first, -let's check out the type class that belongs to monads. -

- - -

The Monad type class

- -

-Just like functors have the Functor type class and applicative -functors have the Applicative type class, -monads come with their own type class: Monad! Wow, -who would have thought? This is what the type class looks like: -

- -
-class Monad m where
+Nothing
+

Just what we expected. If the monadic value on the left is a +Nothing, the whole thing is Nothing. And if +the function on the right returns a Nothing, the result is +Nothing again. This is very similar to when we used +Maybe as an applicative and we got a Nothing +result if somewhere in there was a Nothing.
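Because applyMaybe returns a Maybe itself, these checks chain: the first Nothing anywhere makes the whole chain Nothing. A self-contained sketch, repeating the applyMaybe definition from above:

```haskell
applyMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
applyMaybe Nothing f  = Nothing
applyMaybe (Just x) f = f x

-- Both steps succeed, so the whole chain succeeds.
chained :: Maybe Int
chained = Just 3 `applyMaybe` (\x -> if x > 2 then Just x else Nothing)
                 `applyMaybe` (\x -> Just (x * 10))         -- Just 30

-- The first step fails, so the second never gets a value to work on.
shortCircuited :: Maybe Int
shortCircuited = Just 1 `applyMaybe` (\x -> if x > 2 then Just x else Nothing)
                        `applyMaybe` (\x -> Just (x * 10))  -- Nothing
```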

+

It looks like, for Maybe, we’ve figured out how to +take a fancy value and feed it to a function that takes a normal value +and returns a fancy one. We did this by keeping in mind that a +Maybe value represents a computation that might have +failed.

+

You might be asking yourself, how is this useful? It may seem like +applicative functors are stronger than monads, since applicative +functors allow us to take a normal function and make it operate on +values with contexts. We’ll see that monads can do that as well because +they’re an upgrade of applicative functors, and that they can also do +some cool stuff that applicative functors can’t.

+

We’ll come back to Maybe in a minute, but first, let’s +check out the type class that belongs to monads.

+

The Monad type class

+

Just like functors have the Functor type class and +applicative functors have the Applicative type class, +monads come with their own type class: Monad! Wow, who +would have thought? This is what the type class looks like:

+
class Monad m where
     return :: a -> m a
 
     (>>=) :: m a -> (a -> m b) -> m b
@@ -390,304 +263,191 @@ 

The Monad type class

    (>>) :: m a -> m b -> m b
    x >> y = x >>= \_ -> y
 
    fail :: String -> m a
-    fail msg = error msg
- -this is you on monads -

-Let's start with the first line. It says class Monad m where. -But wait, didn't we say that monads are just beefed up applicative functors? Shouldn't -there be a class constraint in there along the lines of class (Applicative m) = > Monad m where so that a type -has to be an applicative functor first before it can be made a monad? Well, -there should, but when Haskell was made, it hadn't occurred to people that -applicative functors are a good fit for Haskell so they weren't in there. But -rest assured, every monad is an applicative functor, even if the Monad class declaration doesn't say so. -

- -

-The first function that the Monad type class defines -is return. It's the same as pure, only with a different name. Its type is (Monad m) => a -> m a. It takes a value and puts it -in a minimal default context that still holds that value. In other words, it -takes something and wraps it in a monad. It always does the same thing as the -pure function from the Applicative type class, which means we're already -acquainted with return. We already used return when doing I/O. We used it to take a value and make -a bogus I/O action that does nothing but yield that value. For Maybe it takes a value and wraps it in a Just. -

- + fail msg = error msg
+this is you on monads +

Let’s start with the first line. It says +class Monad m where. But wait, didn’t we say that monads +are just beefed up applicative functors? Shouldn’t there be a class +constraint in there along the lines of +class (Applicative m) => Monad m where so that a type +has to be an applicative functor first before it can be made a monad? +Well, there should, but when Haskell was made, it hadn’t occurred to +people that applicative functors are a good fit for Haskell, so they +weren’t in there. But rest assured, every monad is an applicative +functor, even if the Monad class declaration doesn’t say +so.

+

The first function that the Monad type class defines is +return. It’s the same as pure, only with a +different name. Its type is (Monad m) => a -> m a. It +takes a value and puts it in a minimal default context that still holds +that value. In other words, it takes something and wraps it in a monad. +It always does the same thing as the pure function from the +Applicative type class, which means we’re already +acquainted with return. We already used return +when doing I/O. We used it to take a value and make a bogus I/O action +that does nothing but yield that value. For Maybe it takes +a value and wraps it in a Just.

-Just a reminder: return is nothing like the -return that's in most other languages. It doesn't end -function execution or anything, it just takes a normal value and puts it in a -context. +

Just a reminder: return is nothing like the +return that’s in most other languages. It doesn’t end +function execution or anything, it just takes a normal value and puts it +in a context.

- -hmmm yaes - -

-The next function is >>=, or bind. It's like -function application, only instead of taking a normal value and feeding it to a -normal function, it takes a monadic value (that is, a value with a context) and -feeds it to a function that takes a normal value but returns a monadic value. -

- -

-Next up, we have >>. We won't pay too much -attention to it for now because it comes with a default implementation and we -pretty much never implement it when making Monad -instances. -

- -

-The final function of the Monad type class is -fail. We never use it explicitly in our code. Instead, it's used by Haskell to enable failure in a special syntactic construct for monads that we'll meet later. We don't need to concern ourselves with fail too much for now. -

- -

-Now that we know what the Monad type class looks -like, let's take a look at how Maybe is an instance -of Monad! -

- -
-instance Monad Maybe where
+hmmm yaes
+

The next function is >>=, or bind. It’s like +function application, only instead of taking a normal value and feeding +it to a normal function, it takes a monadic value (that is, a value with +a context) and feeds it to a function that takes a normal value but +returns a monadic value.

+

Next up, we have >>. We won’t pay too much +attention to it for now because it comes with a default implementation +and we pretty much never implement it when making Monad +instances.
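Even so, it is worth seeing what that default gives us for Maybe; a short sketch using only the Prelude (value names made up):

```haskell
-- x >> y throws away x's result but keeps its context.
keepsRight :: Maybe Int
keepsRight = Just 3 >> Just 4    -- Just 4

failsEarly :: Maybe Int
failsEarly = Nothing >> Just 4   -- Nothing

failsLate :: Maybe Int
failsLate = Just 3 >> Nothing    -- Nothing
```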

+

The final function of the Monad type class is +fail. We never use it explicitly in our code. Instead, it’s +used by Haskell to enable failure in a special syntactic construct for +monads that we’ll meet later. We don’t need to concern ourselves with +fail too much for now.

+

Now that we know what the Monad type class looks like, +let’s take a look at how Maybe is an instance of +Monad!

+
instance Monad Maybe where
     return x = Just x
     Nothing >>= f = Nothing
     Just x >>= f  = f x
-    fail _ = Nothing
-
- -

-return is the same as pure, so that one's a no-brainer. We do what we did in the -Applicative type class and wrap it in a Just. -

- -

-The >>= function is the same as our -applyMaybe. When feeding the Maybe a to our function, we keep in mind the context and -return a Nothing if the value on the left is Nothing because if there's no value then there's no way to -apply our function to it. If it's a Just we take -what's inside and apply f to it. -

- -

-We can play around with Maybe as a monad: -

- -
-ghci> return "WHAT" :: Maybe String
-Just "WHAT"
+    fail _ = Nothing
+

return is the same as pure, so that one’s a +no-brainer. We do what we did in the Applicative type class +and wrap it in a Just.

+

The >>= function is the same as our +applyMaybe. When feeding the Maybe a to our +function, we keep in mind the context and return a Nothing +if the value on the left is Nothing because if there’s no +value then there’s no way to apply our function to it. If it’s a +Just we take what’s inside and apply f to +it.

+

We can play around with Maybe as a monad:

+
ghci> return "WHAT" :: Maybe String
+Just "WHAT"
 ghci> Just 9 >>= \x -> return (x*10)
 Just 90
 ghci> Nothing >>= \x -> return (x*10)
-Nothing
-
- -

-Nothing new or exciting on the first line since we already used pure with Maybe and we know that return is just pure with a -different name. The next two lines showcase >>= -a bit more. -

- -

-Notice how when we fed Just 9 to the function \x -> return (x*10), the x -took on the value 9 inside the function. It seems as -though we were able to extract the value from a Maybe -without pattern-matching. And we still didn't lose the context of our Maybe value, because when it's Nothing, -the result of using >>= will be Nothing as well. -

- - -

Walk the line

- -pierre - -

-Now that we know how to feed a Maybe a value to a -function of type -a -> Maybe b while taking into account the context -of possible failure, let's see how we can use >>= repeatedly to handle computations of several -Maybe a values. -

- -

-Pierre has decided to take a break from his job at the fish farm and try -tightrope walking. He's not that bad at it, but he does have one problem: birds -keep landing on his balancing pole! They come and they take a short rest, chat -with their avian friends and then take off in search of breadcrumbs. This -wouldn't bother him so much if the number of birds on the left side of the pole -was always equal to the number of birds on the right side. But sometimes, all -the birds decide that they like one side better and so they throw him off -balance, which results in an embarrassing tumble for Pierre (he's using a safety -net). -

- -

-Let's say that he keeps his balance if the number of birds on the left side of -the pole and on the right side of the pole is within three. So if there's -one bird on the right side and four birds on the left side, he's okay. But if a -fifth bird lands on the left side, then he loses his balance and takes a dive. -

- -

-We're going to simulate birds landing on and flying away from the pole and see -if Pierre is still at it after a certain number of birdy arrivals and -departures. For instance, we want to see what happens to Pierre if first one -bird arrives on the left side, then four birds occupy the right side and then -the bird that was on the left side decides to fly away. -

- -

-We can represent the pole with a simple pair of integers. The first component -will signify the number of birds on the left side and the second component the -number of birds on the right side: -

- -
-type Birds = Int
-type Pole = (Birds,Birds)
-
- -

-First we made a type synonym for Int, called -Birds, because we're using integers to represent -how many birds there are. And then we made a type synonym (Birds,Birds) and we called it Pole (not to be -confused with a person of Polish descent). -

- -

-Next up, how about we make a function that takes a number of birds and lands -them on one side of the pole. Here are the functions: -

- -
-landLeft :: Birds -> Pole -> Pole
+Nothing
+

Nothing new or exciting on the first line since we already used +pure with Maybe and we know that +return is just pure with a different name. The +next two lines showcase >>= a bit more.

+

Notice how when we fed Just 9 to the function +\x -> return (x*10), the x took on the +value 9 inside the function. It seems as though we were +able to extract the value from a Maybe without +pattern-matching. And we still didn’t lose the context of our +Maybe value, because when it’s Nothing, the +result of using >>= will be Nothing as +well.
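Since >>= hands the unwrapped value to the lambda, nothing stops that lambda's body from using >>= again; a sketch with the plain Prelude Maybe instance (value names made up):

```haskell
-- Each lambda sees the bare value from the previous step.
twoBinds :: Maybe Int
twoBinds = Just 9 >>= \x -> return (x * 10) >>= \y -> return (y + 1)   -- Just 91

-- A Nothing at the front makes the whole chain Nothing.
noBinds :: Maybe Int
noBinds = Nothing >>= \x -> return (x * 10) >>= \y -> return (y + 1)   -- Nothing
```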

+

Walk the line

+pierre +

Now that we know how to feed a Maybe a value to a +function of type a -> Maybe b while taking into account +the context of possible failure, let’s see how we can use +>>= repeatedly to handle computations of several +Maybe a values.

+

Pierre has decided to take a break from his job at the fish farm and +try tightrope walking. He’s not that bad at it, but he does have one +problem: birds keep landing on his balancing pole! They come and they +take a short rest, chat with their avian friends and then take off in +search of breadcrumbs. This wouldn’t bother him so much if the number of +birds on the left side of the pole was always equal to the number of +birds on the right side. But sometimes, all the birds decide that they +like one side better and so they throw him off balance, which results in +an embarrassing tumble for Pierre (he’s using a safety net).

+

Let’s say that he keeps his balance if the number of birds on the +left side of the pole and on the right side of the pole is within three. +So if there’s one bird on the right side and four birds on the left +side, he’s okay. But if a fifth bird lands on the left side, then he +loses his balance and takes a dive.

+

We’re going to simulate birds landing on and flying away from the +pole and see if Pierre is still at it after a certain number of birdy +arrivals and departures. For instance, we want to see what happens to +Pierre if first one bird arrives on the left side, then four birds +occupy the right side and then the bird that was on the left side +decides to fly away.

+

We can represent the pole with a simple pair of integers. The first +component will signify the number of birds on the left side and the +second component the number of birds on the right side:

+
type Birds = Int
+type Pole = (Birds,Birds)
+

First we made a type synonym for Int, called
+Birds, because we’re using integers to represent how many
+birds there are. And then we made a type synonym for
+(Birds,Birds) and we called it Pole (not to be
+confused with a person of Polish descent).

+

Next up, how about we make a function that takes a number of birds +and lands them on one side of the pole. Here are the functions:

+
landLeft :: Birds -> Pole -> Pole
 landLeft n (left,right) = (left + n,right)
 
 landRight :: Birds -> Pole -> Pole
-landRight n (left,right) = (left,right + n)
-
- -

-Pretty straightforward stuff. Let's try them out: -

- -
-ghci> landLeft 2 (0,0)
+landRight n (left,right) = (left,right + n)
+

Pretty straightforward stuff. Let’s try them out:

+
ghci> landLeft 2 (0,0)
 (2,0)
 ghci> landRight 1 (1,2)
 (1,3)
 ghci> landRight (-1) (1,2)
-(1,1)
-
- -

-To make birds fly away we just had a negative number of birds land on one side. Because -landing a bird on the Pole returns a Pole, we can chain applications of landLeft and landRight: -

- -
-ghci> landLeft 2 (landRight 1 (landLeft 1 (0,0)))
-(3,1)
-
- -

-When we apply the function landLeft 1 to (0,0) we get (1,0). Then, we land a bird on the right side, resulting -in (1,1). Finally two birds land on the left -side, resulting in (3,1). We apply a function to -something by first writing the function and then writing its parameter, but here -it would be better if the pole went first and then the landing function. If we -make a function like this: -

- -
-x -: f = f x
-
- -

-We can apply functions by first writing the parameter and then the function: -

- -
-ghci> 100 -: (*3)
+(1,1)
+

To make birds fly away we just had a negative number of birds land on +one side. Because landing a bird on the Pole returns a +Pole, we can chain applications of landLeft +and landRight:

+
ghci> landLeft 2 (landRight 1 (landLeft 1 (0,0)))
+(3,1)
+

When we apply the function landLeft 1 to +(0,0) we get (1,0). Then, we land a bird on +the right side, resulting in (1,1). Finally two birds land +on the left side, resulting in (3,1). We apply a function +to something by first writing the function and then writing its +parameter, but here it would be better if the pole went first and then +the landing function. If we make a function like this:

+
x -: f = f x
+

We can apply functions by first writing the parameter and then the +function:

+
ghci> 100 -: (*3)
 300
 ghci> True -: not
 False
 ghci> (0,0) -: landLeft 2
-(2,0)
-
- -

-By using this, we can repeatedly land birds on the pole in a more readable -manner: -

- -
-ghci> (0,0) -: landLeft 1 -: landRight 1 -: landLeft 2
-(3,1)
-
- -

-Pretty cool! This example is equivalent to the one before where we repeatedly
-landed birds on the pole, only it looks neater. Here, it's more obvious that we
-start off with (0,0) and then land one bird on the
-left, then one on the right and finally two on the left.
-

- -

So far so good, but -what happens if 10 birds land on one side? -

- -
-ghci> landLeft 10 (0,3)
-(10,3)
-
- -

-10 birds on the left side and only 3 on the right? That's sure to send poor -Pierre falling through the air! This is pretty obvious here but what if we had a -sequence of landings like this: -

- -
-ghci> (0,0) -: landLeft 1 -: landRight 4 -: landLeft (-1) -: landRight (-2)
-(0,2)
-
- -

-It might seem like everything is okay but if you follow the steps here, you'll
-see that at one time there are 4 birds on the right side and no birds on the
-left! To fix this, we have to take another look at our landLeft and landRight
-functions. From what we've seen, we want these functions to be able to fail.
-That is, we want them to return a new pole if the balance is okay but fail if
-the birds land in a lopsided manner. And what better way to add a context of
-failure to a value than by using Maybe! Let's rework
-these functions:
-

- -
-landLeft :: Birds -> Pole -> Maybe Pole
+(2,0)
+

By using this, we can repeatedly land birds on the pole in a more +readable manner:

+
ghci> (0,0) -: landLeft 1 -: landRight 1 -: landLeft 2
+(3,1)
+

Pretty cool! This example is equivalent to the one before where we
+repeatedly landed birds on the pole, only it looks neater. Here, it’s
+more obvious that we start off with (0,0) and then land one
+bird on the left, then one on the right and finally two on the
+left.
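The pieces so far fit in one small module; here's a self-contained sketch (the -: operator, as defined above, is just reverse function application):

```haskell
type Birds = Int
type Pole  = (Birds, Birds)

landLeft :: Birds -> Pole -> Pole
landLeft n (left, right) = (left + n, right)

landRight :: Birds -> Pole -> Pole
landRight n (left, right) = (left, right + n)

-- Reverse application: write the value first, then the function.
(-:) :: a -> (a -> b) -> b
x -: f = f x

main :: IO ()
main = print ((0,0) -: landLeft 1 -: landRight 1 -: landLeft 2)  -- (3,1)
```

Because the operator defaults to left associativity, the pole flows left to right through each landing, just like in the ghci session.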

+

So far so good, but what happens if 10 birds land on one side?

+
ghci> landLeft 10 (0,3)
+(10,3)
+

10 birds on the left side and only 3 on the right? That’s sure to +send poor Pierre falling through the air! This is pretty obvious here +but what if we had a sequence of landings like this:

+
ghci> (0,0) -: landLeft 1 -: landRight 4 -: landLeft (-1) -: landRight (-2)
+(0,2)
+

It might seem like everything is okay but if you follow the steps
+here, you’ll see that at one time there are 4 birds on the right side
+and no birds on the left! To fix this, we have to take another look at
+our landLeft and landRight functions. From
+what we’ve seen, we want these functions to be able to fail. That is, we
+want them to return a new pole if the balance is okay but fail if the
+birds land in a lopsided manner. And what better way to add a context of
+failure to a value than by using Maybe! Let’s rework these
+functions:

+
landLeft :: Birds -> Pole -> Maybe Pole
 landLeft n (left,right)
     | abs ((left + n) - right) < 4 = Just (left + n, right)
     | otherwise                    = Nothing
@@ -695,505 +455,310 @@ 

Walk the line

landRight :: Birds -> Pole -> Maybe Pole
landRight n (left,right)
    | abs (left - (right + n)) < 4 = Just (left, right + n)
-    | otherwise                    = Nothing
-
- -

-Instead of returning a Pole these functions now -return a Maybe Pole. They still take the number of -birds and the old pole as before, but then they check if landing that many birds -on the pole would throw Pierre off balance. We use guards to check if the -difference between the number of birds on the new pole is less than 4. If it is, -we wrap the new pole in a Just and return that. If it -isn't, we return a Nothing, indicating failure. -

- -

-Let's give these babies a go: -

- -
-ghci> landLeft 2 (0,0)
+    | otherwise                    = Nothing
+

Instead of returning a Pole these functions now return a +Maybe Pole. They still take the number of birds and the old +pole as before, but then they check if landing that many birds on the +pole would throw Pierre off balance. We use guards to check if the +difference between the number of birds on the new pole is less than 4. +If it is, we wrap the new pole in a Just and return that. +If it isn’t, we return a Nothing, indicating failure.

+

Let’s give these babies a go:

+
ghci> landLeft 2 (0,0)
 Just (2,0)
 ghci> landLeft 10 (0,3)
-Nothing
-
- -

-Nice! When we land birds without throwing Pierre off balance, we get a new pole -wrapped in a Just. But when many more birds end up on -one side of the pole, we get a Nothing. This is cool, -but we seem to have lost the ability to repeatedly land birds on the pole. We -can't do landLeft 1 (landRight 1 (0,0)) anymore -because when we apply landRight 1 to (0,0), we don't get a Pole, but -a Maybe Pole. landLeft 1 -takes a Pole and not a Maybe -Pole. -

- -

-We need a way of taking a Maybe Pole and feeding it -to a function that takes a Pole and returns a Maybe Pole. Luckily, we have >>=, -which does just that for Maybe. Let's give it a go: -

- -
-ghci> landRight 1 (0,0) >>= landLeft 2
-Just (2,1)
-
- -

-Remember, landLeft 2 has a type of Pole -> Maybe Pole. We couldn't just feed it the Maybe Pole that is the result of landRight 1 (0,0), so we use >>= to take that value with a context and give -it to landLeft 2. >>= does indeed allow us to treat the Maybe value as a value with context because if we feed a -Nothing into landLeft 2, -the result is Nothing and the failure is propagated: -

- -
-ghci> Nothing >>= landLeft 2
-Nothing
-
- -

-With this, we can now chain landings that may fail because >>= allows us to feed a -monadic value to a function that takes a normal one. -

- -

-Here's a sequence of birdy landings: -

- -
-ghci> return (0,0) >>= landRight 2 >>= landLeft 2 >>= landRight 2
-Just (2,4)
-
- -

-At the beginning, we used return to take a pole and -wrap it in a Just. We could have just applied landRight 2 to (0,0), it would -have been the same, but this way we can be more consistent by using >>= -for every function. -Just (0,0) gets fed to landRight -2, resulting in Just (0,2). This, in turn, -gets fed to landLeft 2, resulting in Just (2,2), and so on. -

- -

-Remember this example from before we introduced failure into Pierre's routine: -

- -
-ghci> (0,0) -: landLeft 1 -: landRight 4 -: landLeft (-1) -: landRight (-2)
-(0,2)
-
- -

-It didn't simulate his interaction with birds very well because in the middle -there his balance was off but the result didn't reflect that. But let's give -that a go now by using monadic application (>>=) -instead of normal application: -

- -
-ghci> return (0,0) >>= landLeft 1 >>= landRight 4 >>= landLeft (-1) >>= landRight (-2)
-Nothing
-
-
-
-

-Awesome. The final result represents failure, which is what we expected. Let's -see how this result was obtained. First, return puts - (0,0) into a default context, making it a Just (0,0). Then, Just (0,0) >>= - landLeft 1 happens. Since the Just (0,0) is a Just value, landLeft 1 gets applied - to (0,0), resulting in a Just - (1,0), because the birds are still relatively balanced. Next, Just (1,0) >>= landRight 4 takes place and the - result is Just (1,4) as the balance of the birds is - still intact, although just barely. Just (1,4) gets - fed to landLeft (-1). This means that landLeft (-1) (1,4) takes place. Now because of how landLeft works, this results in a Nothing, because the resulting pole is off balance. Now - that we have a Nothing, it gets fed to landRight (-2), but because it's a Nothing, the result is automatically Nothing, as we have nothing to apply landRight (-2) to. -

- -

-We couldn't have achieved this by just using Maybe -as an applicative. If you try it, you'll get stuck, because applicative functors -don't allow for the applicative values to interact with each other very much. -They can, at best, be used as parameters to a function by using the applicative -style. The applicative operators will fetch their results and feed them to the -function in a manner appropriate for each applicative and then put the final -applicative value together, but there isn't that much interaction going on -between them. Here, however, each step relies on the previous one's result. On -every landing, the possible result from the previous one is examined and the -pole is checked for balance. This determines whether the landing will succeed or -fail. -

- -

-We may also devise a function that ignores the current number of birds on the -balancing pole and just makes Pierre slip and fall. We can call it banana: -

- -
-banana :: Pole -> Maybe Pole
-banana _ = Nothing
-
- -

-Now we can chain it together with our bird landings. It will always cause our walker -to fall, because it ignores whatever's passed to it and always returns a -failure. Check it: -

- -
-ghci> return (0,0) >>= landLeft 1 >>= banana >>= landRight 1
-Nothing
-
- -

-The value Just (1,0) gets fed to banana, but it produces a Nothing, which -causes everything to result in a Nothing. How -unfortunate! -

- -

-Instead of making functions that ignore their input and just return a -predetermined monadic value, we can use the >> -function, whose default implementation is this: -

- -
-(>>) :: (Monad m) => m a -> m b -> m b
-m >> n = m >>= \_ -> n
-
- -

-Normally, passing some value to a function that ignores its parameter and always -just returns some predetermined value would always result in that predetermined value. With -monads however, their context and meaning has to be considered as well. Here's -how >> acts with Maybe: -

- -
-ghci> Nothing >> Just 3
+Nothing
+

Nice! When we land birds without throwing Pierre off balance, we get +a new pole wrapped in a Just. But when many more birds end +up on one side of the pole, we get a Nothing. This is cool, +but we seem to have lost the ability to repeatedly land birds on the +pole. We can’t do landLeft 1 (landRight 1 (0,0)) anymore +because when we apply landRight 1 to (0,0), we +don’t get a Pole, but a Maybe Pole. +landLeft 1 takes a Pole and not a +Maybe Pole.

+

We need a way of taking a Maybe Pole and feeding it to a +function that takes a Pole and returns a +Maybe Pole. Luckily, we have >>=, which +does just that for Maybe. Let’s give it a go:

+
ghci> landRight 1 (0,0) >>= landLeft 2
+Just (2,1)
+

Remember, landLeft 2 has a type of +Pole -> Maybe Pole. We couldn’t just feed it the +Maybe Pole that is the result of +landRight 1 (0,0), so we use >>= to take +that value with a context and give it to landLeft 2. +>>= does indeed allow us to treat the +Maybe value as a value with context because if we feed a +Nothing into landLeft 2, the result is +Nothing and the failure is propagated:

+
ghci> Nothing >>= landLeft 2
+Nothing
+

With this, we can now chain landings that may fail because +>>= allows us to feed a monadic value to a function +that takes a normal one.

+

Here’s a sequence of birdy landings:

+
ghci> return (0,0) >>= landRight 2 >>= landLeft 2 >>= landRight 2
+Just (2,4)
+

At the beginning, we used return to take a pole and wrap +it in a Just. We could have just applied +landRight 2 to (0,0), it would have been the +same, but this way we can be more consistent by using +>>= for every function. Just (0,0) gets +fed to landRight 2, resulting in Just (0,2). +This, in turn, gets fed to landLeft 2, resulting in +Just (2,2), and so on.

+

Remember this example from before we introduced failure into Pierre’s +routine:

+
ghci> (0,0) -: landLeft 1 -: landRight 4 -: landLeft (-1) -: landRight (-2)
+(0,2)
+

It didn’t simulate his interaction with birds very well because in +the middle there his balance was off but the result didn’t reflect that. +But let’s give that a go now by using monadic application +(>>=) instead of normal application:

+
ghci> return (0,0) >>= landLeft 1 >>= landRight 4 >>= landLeft (-1) >>= landRight (-2)
+Nothing
+iama banana +

Awesome. The final result represents failure, which is what we +expected. Let’s see how this result was obtained. First, +return puts (0,0) into a default context, +making it a Just (0,0). Then, +Just (0,0) >>= landLeft 1 happens. Since the +Just (0,0) is a Just value, +landLeft 1 gets applied to (0,0), resulting in +a Just (1,0), because the birds are still relatively +balanced. Next, Just (1,0) >>= landRight 4 takes +place and the result is Just (1,4) as the balance of the +birds is still intact, although just barely. Just (1,4) +gets fed to landLeft (-1). This means that +landLeft (-1) (1,4) takes place. Now because of how +landLeft works, this results in a Nothing, +because the resulting pole is off balance. Now that we have a +Nothing, it gets fed to landRight (-2), but +because it’s a Nothing, the result is automatically +Nothing, as we have nothing to apply +landRight (-2) to.
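The whole chain can be verified end to end; here's a self-contained sketch with the Maybe-returning landing functions from above:

```haskell
type Birds = Int
type Pole  = (Birds, Birds)

landLeft :: Birds -> Pole -> Maybe Pole
landLeft n (left, right)
    | abs ((left + n) - right) < 4 = Just (left + n, right)
    | otherwise                    = Nothing

landRight :: Birds -> Pole -> Maybe Pole
landRight n (left, right)
    | abs (left - (right + n)) < 4 = Just (left, right + n)
    | otherwise                    = Nothing

main :: IO ()
main = do
    -- A balanced sequence succeeds:
    print (return (0,0) >>= landRight 2 >>= landLeft 2 >>= landRight 2)  -- Just (2,4)
    -- The lopsided sequence fails at landLeft (-1) and stays Nothing:
    print (return (0,0) >>= landLeft 1 >>= landRight 4
                        >>= landLeft (-1) >>= landRight (-2))            -- Nothing
</imports>
```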

+

We couldn’t have achieved this by just using Maybe as an +applicative. If you try it, you’ll get stuck, because applicative +functors don’t allow for the applicative values to interact with each +other very much. They can, at best, be used as parameters to a function +by using the applicative style. The applicative operators will fetch +their results and feed them to the function in a manner appropriate for +each applicative and then put the final applicative value together, but +there isn’t that much interaction going on between them. Here, however, +each step relies on the previous one’s result. On every landing, the +possible result from the previous one is examined and the pole is +checked for balance. This determines whether the landing will succeed or +fail.

+

We may also devise a function that ignores the current number of +birds on the balancing pole and just makes Pierre slip and fall. We can +call it banana:

+
banana :: Pole -> Maybe Pole
+banana _ = Nothing
+

Now we can chain it together with our bird landings. It will always +cause our walker to fall, because it ignores whatever’s passed to it and +always returns a failure. Check it:

+
ghci> return (0,0) >>= landLeft 1 >>= banana >>= landRight 1
+Nothing
+

The value Just (1,0) gets fed to banana, +but it produces a Nothing, which causes everything to +result in a Nothing. How unfortunate!

+

Instead of making functions that ignore their input and just return a +predetermined monadic value, we can use the >> +function, whose default implementation is this:

+
(>>) :: (Monad m) => m a -> m b -> m b
+m >> n = m >>= \_ -> n
+

Normally, passing some value to a function that ignores its parameter +and always just returns some predetermined value would always result in +that predetermined value. With monads however, their context and meaning +has to be considered as well. Here’s how >> acts with +Maybe:

+
ghci> Nothing >> Just 3
 Nothing
 ghci> Just 3 >> Just 4
 Just 4
 ghci> Just 3 >> Nothing
-Nothing
-
- -

-If you replace >> with >>= -\_ ->, it's easy to see why it acts like it does. -

- -

-We can replace our banana function in the chain with a >> and then a Nothing: -

- -
-ghci> return (0,0) >>= landLeft 1 >> Nothing >>= landRight 1
-Nothing
-
- -

-There we go, guaranteed and obvious failure! -

- -

-It's also worth taking a look at what this would look like if we hadn't made the
-clever choice of treating Maybe values as values with
-a failure context and feeding them to functions like we did. Here's what a
-series of bird landings would look like:
-

- -
-routine :: Maybe Pole
+Nothing
+

If you replace >> with +>>= \_ ->, it’s easy to see why it acts like it +does.
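Doing that replacement by hand is easy to check; here's a small sketch confirming that >> and >>= with an ignored parameter agree for Maybe:

```haskell
main :: IO ()
main = do
    print (Nothing >> Just 3)                  -- Nothing
    print (Just 3 >> Just 4)                   -- Just 4
    print (Just 3 >> (Nothing :: Maybe Int))   -- Nothing
    -- Because m >> n = m >>= \_ -> n, both spellings must agree:
    print ((Just 3 >> Just 4) == (Just 3 >>= \_ -> Just 4))  -- True
```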

+

We can replace our banana function in the chain with a +>> and then a Nothing:

+
ghci> return (0,0) >>= landLeft 1 >> Nothing >>= landRight 1
+Nothing
+

There we go, guaranteed and obvious failure!

+

It’s also worth taking a look at what this would look like if we
+hadn’t made the clever choice of treating Maybe values as
+values with a failure context and feeding them to functions like we did.
+Here’s what a series of bird landings would look like:

+
routine :: Maybe Pole
 routine = case landLeft 1 (0,0) of
     Nothing -> Nothing
     Just pole1 -> case landRight 4 pole1 of
         Nothing -> Nothing
         Just pole2 -> case landLeft 2 pole2 of
             Nothing -> Nothing
-            Just pole3 -> landLeft 1 pole3
-
-
-
-

-We land a bird on the left and then we examine the possibility of failure and -the possibility of success. In the case of failure, we return a Nothing. In the case of success, we land birds on the -right and then do the same thing all over again. Converting this monstrosity -into a neat chain of monadic applications with >>= -is a classic example of how the Maybe monad saves us -a lot of time when we have to successively do computations that are based on -computations that might have failed. -

- -

-Notice how the Maybe implementation of >>= features exactly this logic of seeing if a value -is Nothing and if it is, returning a Nothing right away and if it isn't, going forward with what's -inside the Just. -

- -

-In this section, we took some functions that we had and saw that they would work -better if the values that they returned supported failure. By turning those -values into Maybe values and replacing normal -function application with >>=, we got a -mechanism for handling failure pretty much for free, because >>= is supposed to preserve the context of the value -to which it applies functions. In this case, the context was that our values -were values with failure and so when we applied functions to such values, the -possibility of failure was always taken into account. -

- - -

do notation

- -

-Monads in Haskell are so useful that they got their own special syntax called -do notation. We've already encountered do notation when we were doing I/O and there we said that -it was for gluing together several I/O actions into one. Well, as it turns out, -do notation isn't just for IO, but can be used -for any monad. Its principle is still the same: gluing together monadic -values in sequence. We're going to take a look at how do notation works and why it's useful. -

- -

-Consider this familiar example of monadic application: -

- -
-ghci> Just 3 >>= (\x -> Just (show x ++ "!"))
-Just "3!"
-
- -

-Been there, done that. Feeding a monadic value to a function that returns one, -no big deal. Notice how when we do this, x becomes -3 inside the lambda. Once we're inside that lambda, -it's just a normal value rather than a monadic value. Now, what if we had another ->>= -inside that function? Check this out: -

- -
-ghci> Just 3 >>= (\x -> Just "!" >>= (\y -> Just (show x ++ y)))
-Just "3!"
-
- -

-Ah, a nested use of >>=! In the outermost -lambda, we feed Just "!" to the lambda \y -> Just (show x ++ y). Inside this lambda, the y becomes "!". x is still 3 because we got it -from the outer lambda. All this sort of reminds me of the following expression: -

- -
-ghci> let x = 3; y = "!" in show x ++ y
-"3!"
-
- -

-The main difference between these two is that the values in the former example -are monadic. They're values with a failure context. We can replace any of them -with a failure: -

- -
-ghci> Nothing >>= (\x -> Just "!" >>= (\y -> Just (show x ++ y)))
+            Just pole3 -> landLeft 1 pole3
+

We land a bird on the left and then we examine the possibility of +failure and the possibility of success. In the case of failure, we +return a Nothing. In the case of success, we land birds on +the right and then do the same thing all over again. Converting this +monstrosity into a neat chain of monadic applications with +>>= is a classic example of how the +Maybe monad saves us a lot of time when we have to +successively do computations that are based on computations that might +have failed.

+

Notice how the Maybe implementation of +>>= features exactly this logic of seeing if a value +is Nothing and if it is, returning a Nothing +right away and if it isn’t, going forward with what’s inside the +Just.
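Here's that logic written out as a standalone function (a sketch of what the real Prelude instance does; the name bindMaybe is ours, chosen to avoid clashing with the Prelude's >>=):

```haskell
-- bindMaybe is our own name for the logic described above; the real
-- (>>=) for Maybe in the Prelude behaves the same way.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

main :: IO ()
main = do
    print (Just 3  `bindMaybe` (\x -> Just (x + 1)))           -- Just 4
    print (Nothing `bindMaybe` (\x -> Just (x + (1 :: Int))))  -- Nothing
```

The Nothing case never looks at the function at all, which is exactly the short-circuiting we saw in the case-expression version of routine.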

+

In this section, we took some functions that we had and saw that they +would work better if the values that they returned supported failure. By +turning those values into Maybe values and replacing normal +function application with >>=, we got a mechanism for +handling failure pretty much for free, because >>= is +supposed to preserve the context of the value to which it applies +functions. In this case, the context was that our values were values +with failure and so when we applied functions to such values, the +possibility of failure was always taken into account.

+

do notation

+

Monads in Haskell are so useful that they got their own special +syntax called do notation. We’ve already encountered +do notation when we were doing I/O and there we said that +it was for gluing together several I/O actions into one. Well, as it +turns out, do notation isn’t just for IO, but +can be used for any monad. Its principle is still the same: gluing +together monadic values in sequence. We’re going to take a look at how +do notation works and why it’s useful.

+

Consider this familiar example of monadic application:

+
ghci> Just 3 >>= (\x -> Just (show x ++ "!"))
+Just "3!"
+

Been there, done that. Feeding a monadic value to a function that +returns one, no big deal. Notice how when we do this, x +becomes 3 inside the lambda. Once we’re inside that lambda, +it’s just a normal value rather than a monadic value. Now, what if we +had another >>= inside that function? Check this +out:

+
ghci> Just 3 >>= (\x -> Just "!" >>= (\y -> Just (show x ++ y)))
+Just "3!"
+

Ah, a nested use of >>=! In the outermost lambda, +we feed Just "!" to the lambda +\y -> Just (show x ++ y). Inside this lambda, the +y becomes "!". x is still +3 because we got it from the outer lambda. All this sort of +reminds me of the following expression:

+
ghci> let x = 3; y = "!" in show x ++ y
+"3!"
+

The main difference between these two is that the values in the +former example are monadic. They’re values with a failure context. We +can replace any of them with a failure:

+
ghci> Nothing >>= (\x -> Just "!" >>= (\y -> Just (show x ++ y)))
 Nothing
 ghci> Just 3 >>= (\x -> Nothing >>= (\y -> Just (show x ++ y)))
 Nothing
-ghci> Just 3 >>= (\x -> Just "!" >>= (\y -> Nothing))
-Nothing
-
- -

-In the first line, feeding a Nothing to a function -naturally results in a Nothing. In the second line, -we feed Just 3 to a function and the x becomes 3, but then we feed a -Nothing to the inner lambda and the result of that is -Nothing, which causes the outer lambda to produce -Nothing as well. So this is sort of like assigning -values to variables in let expressions, only that the -values in question are monadic values. -

- -

-To further illustrate this point, let's write this in a script and have each -Maybe value take up its own line: -

- -
-foo :: Maybe String
+ghci> Just 3 >>= (\x -> Just "!" >>= (\y -> Nothing))
+Nothing
+

In the first line, feeding a Nothing to a function +naturally results in a Nothing. In the second line, we feed +Just 3 to a function and the x becomes +3, but then we feed a Nothing to the inner +lambda and the result of that is Nothing, which causes the +outer lambda to produce Nothing as well. So this is sort of +like assigning values to variables in let expressions, only +that the values in question are monadic values.

+

To further illustrate this point, let’s write this in a script and +have each Maybe value take up its own line:

+
foo :: Maybe String
 foo = Just 3   >>= (\x ->
-      Just "!" >>= (\y ->
-      Just (show x ++ y)))
-
- -

-To save us from writing all these annoying lambdas, Haskell gives us -do notation. It allows us to write the previous piece -of code like this: -

- -
-foo :: Maybe String
+      Just "!" >>= (\y ->
+      Just (show x ++ y)))
+

To save us from writing all these annoying lambdas, Haskell gives us +do notation. It allows us to write the previous piece of +code like this:

+
foo :: Maybe String
 foo = do
     x <- Just 3
-    y <- Just "!"
-    Just (show x ++ y)
-
-
-
-

-It would seem as though we've gained the ability to temporarily extract things -from Maybe values without having to check if the -Maybe values are Just -values or Nothing values at every step. How cool! If any of the -values that we try to extract from are Nothing, the -whole do expression will result in a Nothing. We're yanking out their (possibly existing) values -and letting >>= worry about the context that -comes with those values. It's important to remember that -do expressions are just different syntax for chaining -monadic values. -

- -

-In a do expression, every line is a monadic value. To -inspect its result, we use <-. If we have a -Maybe String and we bind it with <- to a variable, that variable will be a String, just like when we used >>= -to feed monadic values to lambdas. The last monadic value in a do expression, like Just (show x ++ y) here, -can't be used with <- to bind its result, because that -wouldn't make sense if we translated the do -expression -back to a chain of >>= applications. Rather, -its result is the result of the whole glued up monadic value, taking into -account the possible failure of any of the previous ones. -

- -

-For instance, examine the following line: -

- -
-ghci> Just 9 >>= (\x -> Just (x > 8))
-Just True
-
- -

-Because the left parameter of >>= is a -Just value, the lambda is applied to 9 and the result is a Just True. If we -rewrite this in do notation, we get: -

- -
-marySue :: Maybe Bool
+    y <- Just "!"
+    Just (show x ++ y)
+

It would seem as though we’ve gained the ability to temporarily +extract things from Maybe values without having to check if +the Maybe values are Just values or +Nothing values at every step. How cool! If any of the +values that we try to extract from are Nothing, the whole +do expression will result in a Nothing. We’re +yanking out their (possibly existing) values and letting +>>= worry about the context that comes with those +values. It’s important to remember that do expressions are +just different syntax for chaining monadic values.
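We can convince ourselves that the two spellings of foo agree; a self-contained sketch:

```haskell
-- foo written with do notation, as in the text.
fooDo :: Maybe String
fooDo = do
    x <- Just 3
    y <- Just "!"
    Just (show x ++ y)

-- The same thing written out with nested >>= and lambdas.
fooBind :: Maybe String
fooBind = Just 3   >>= (\x ->
          Just "!" >>= (\y ->
          Just (show x ++ y)))

main :: IO ()
main = print (fooDo == fooBind)  -- True
```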

+

In a do expression, every line is a monadic value. To +inspect its result, we use <-. If we have a +Maybe String and we bind it with <- to a +variable, that variable will be a String, just like when we +used >>= to feed monadic values to lambdas. The last +monadic value in a do expression, like +Just (show x ++ y) here, can’t be used with +<- to bind its result, because that wouldn’t make sense +if we translated the do expression back to a chain of +>>= applications. Rather, its result is the result of +the whole glued up monadic value, taking into account the possible +failure of any of the previous ones.

+

For instance, examine the following line:

+
ghci> Just 9 >>= (\x -> Just (x > 8))
+Just True
+

Because the left parameter of >>= is a +Just value, the lambda is applied to 9 and the +result is a Just True. If we rewrite this in +do notation, we get:

+
marySue :: Maybe Bool
 marySue = do
     x <- Just 9
-    Just (x > 8)
-
- -

-If we compare these two, it's easy to see why the result of the whole monadic -value is the result of the last monadic value in the do -expression with all the previous ones chained into it. -

- -

-Our tightrope walker's routine can also be expressed with do
-notation. landLeft and landRight
-take a number of birds and a pole and produce a pole wrapped in a Just, unless the tightrope walker
-slips, in which case a Nothing is produced. We used
->>= to chain successive steps because each one
-relied on the previous one and each one had an added context of possible
-failure. Here's two birds landing on the left side, then two birds landing on
-the right and then one bird landing on the left:
-

- -
-routine :: Maybe Pole
+    Just (x > 8)
+

If we compare these two, it’s easy to see why the result of the whole +monadic value is the result of the last monadic value in the +do expression with all the previous ones chained into +it.
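That comparison can also be checked mechanically; a sketch asserting that the do version equals its hand-desugared >>= chain:

```haskell
-- The do version from the text and its hand-desugared >>= chain.
marySue :: Maybe Bool
marySue = do
    x <- Just 9
    Just (x > 8)

marySue' :: Maybe Bool
marySue' = Just 9 >>= (\x -> Just (x > 8))

main :: IO ()
main = print (marySue == marySue')  -- True
```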

+

Our tightrope walker’s routine can also be expressed with do
+notation. landLeft and landRight take a number
+of birds and a pole and produce a pole wrapped in a Just,
+unless the tightrope walker slips, in which case a Nothing is
+produced. We used >>= to chain successive steps
+because each one relied on the previous one and each one had an added
+context of possible failure. Here’s two birds landing on the left side,
+then two birds landing on the right and then one bird landing on the
+left:

+
routine :: Maybe Pole
 routine = do
     start <- return (0,0)
     first <- landLeft 2 start
     second <- landRight 2 first
-    landLeft 1 second
-
- -

-Let's see if he succeeds: -

- -
-ghci> routine
-Just (3,2)
-
- -

-He does! Great. When we were doing these routines by explicitly writing
->>=, we usually said something like
-return (0,0) >>= landLeft 2, because
-landLeft 2 is a function that returns a
-Maybe value. With do
-expressions however, each line must feature a monadic value. So we explicitly pass
-the previous Pole to the landLeft and
-landRight functions. If we examined the variables to
-which we bound our Maybe values, start would be (0,0), first would be (2,0) and so on.
-

- -

-Because do expressions are written line by line, they may look -like imperative code to some people. But the thing is, they're just sequential, as each value in -each line relies on the result of the previous ones, along with their contexts -(in this case, whether they succeeded or failed). -

- -

Again, let's take a look at what this piece of code would look like if we hadn't -used the monadic aspects of Maybe:

- -
-routine :: Maybe Pole
+    landLeft 1 second
+

Let’s see if he succeeds:

+
ghci> routine
+Just (3,2)
+

He does! Great. When we were doing these routines by explicitly +writing >>=, we usually said something like +return (0,0) >>= landLeft 2, because +landLeft 2 is a function that returns a Maybe +value. With do expressions, however, each line must feature +a monadic value. So we explicitly pass the previous Pole to +the landLeft and landRight functions. If we +examined the variables to which we bound our Maybe values, +start would be (0,0), first would +be (2,0) and so on.

+

Because do expressions are written line by line, they +may look like imperative code to some people. But the thing is, they’re +just sequential, as each value in each line relies on the result of the +previous ones, along with their contexts (in this case, whether they +succeeded or failed).

+

Again, let’s take a look at what this piece of code would look like +if we hadn’t used the monadic aspects of Maybe:

+
routine :: Maybe Pole
 routine =
     case Just (0,0) of
         Nothing -> Nothing
@@ -1201,935 +766,565 @@ 

do notation

         Nothing -> Nothing
         Just first -> case landRight 2 first of
             Nothing -> Nothing
-                Just second -> landLeft 1 second
- -

-See how in the case of success, the tuple inside Just -(0,0) becomes start, the result of -landLeft 2 start becomes first, etc. -

- -

-If we want to throw the Pierre a banana peel in do -notation, we can do the following: -

- -
-routine :: Maybe Pole
+                Just second -> landLeft 1 second
+

See how in the case of success, the tuple inside +Just (0,0) becomes start, the result of +landLeft 2 start becomes first, etc.

+

If we want to throw Pierre a banana peel in do +notation, we can do the following:

+
routine :: Maybe Pole
 routine = do
     start <- return (0,0)
     first <- landLeft 2 start
     Nothing
     second <- landRight 2 first
-    landLeft 1 second
-
- -

-When we write a line in do notation without binding -the monadic value with <-, it's just like putting ->> after the monadic value whose result we -want to ignore. We sequence the monadic value but we ignore its result because -we don't care what it is and it's prettier than writing -_ <- Nothing, which is equivalent to the above. -

- -

-When to use do notation and when to explicitly use ->>= is up to you. I think this example lends -itself to explicitly writing >>= because each -step relies specifically on the result of the previous one. With do notation, we had to specifically write on which pole the -birds are landing, but every time we used that came directly before. But still, -it gave us some insight into do notation. -

- -

-In do notation, when we bind monadic values to -names, we can utilize pattern matching, just like in let -expressions and function parameters. Here's an example of pattern matching in a -do expression: -

- -
-justH :: Maybe Char
+    landLeft 1 second
+

When we write a line in do notation without binding the +monadic value with <-, it’s just like putting +>> after the monadic value whose result we want to +ignore. We sequence the monadic value but we ignore its result because +we don’t care what it is and it’s prettier than writing +_ <- Nothing, which is equivalent to the above.
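To see the effect concretely, here is a minimal sketch (the names banana and bananaBind are ours, not the chapter's) showing that an unbound Nothing line aborts everything after it:

```haskell
-- A hypothetical example: an unbound Nothing line sequences in a failure,
-- so the whole do block evaluates to Nothing.
banana :: Maybe Int
banana = do
    x <- Just 3
    Nothing          -- same as writing _ <- Nothing
    return x

-- The explicit >>= / >> spelling that the do block desugars to.
bananaBind :: Maybe Int
bananaBind = Just 3 >>= \x -> Nothing >> return x
```

Both evaluate to Nothing: once the failure is sequenced in, no later line can recover it.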

+

When to use do notation and when to explicitly use +>>= is up to you. I think this example lends itself +to explicitly writing >>= because each step relies +specifically on the result of the previous one. With do +notation, we had to specifically write on which pole the birds are +landing, but every time, it was the pole that came directly before. But still, it +gave us some insight into do notation.

+

In do notation, when we bind monadic values to names, we +can utilize pattern matching, just like in let expressions +and function parameters. Here’s an example of pattern matching in a +do expression:

+
justH :: Maybe Char
 justH = do
-    (x:xs) <- Just "hello"
-    return x
-
- -

-We use pattern matching to get the first character of the string "hello" -and then we present it as the result. So justH evaluates to -Just 'h'. -

- -

-What if this pattern matching were to -fail? When matching on a pattern in a function fails, the next pattern is -matched. If the matching falls through all the patterns for a given function, an -error is thrown and our program crashes. On the other hand, failed pattern -matching in let expressions results in an error being -produced right away, because the mechanism of falling through patterns isn't -present in let expressions. When pattern matching -fails in a do expression, the fail function is called. It's part of the Monad type class and it enables failed pattern matching to -result in a failure in the context of the current monad instead of making our -program crash. Its default implementation is this: -

- -
-fail :: (Monad m) => String -> m a
-fail msg = error msg
-
- -

-So by default it does make our program crash, but monads that incorporate a -context of possible failure (like Maybe) usually -implement it on their own. For Maybe, its implemented -like so: -

- -
-fail _ = Nothing
-
- -

-It ignores the error message and makes a Nothing. So -when pattern matching fails in a Maybe value that's -written in do notation, the whole value results in a -Nothing. This is preferable to having our program -crash. Here's a do expression with a pattern that's -bound to fail: -

- -
-wopwop :: Maybe Char
+    (x:xs) <- Just "hello"
+    return x
+

We use pattern matching to get the first character of the string +"hello" and then we present it as the result. So +justH evaluates to Just 'h'.

+

What if this pattern matching were to fail? When matching on a +pattern in a function fails, the next pattern is matched. If the +matching falls through all the patterns for a given function, an error +is thrown and our program crashes. On the other hand, failed pattern +matching in let expressions results in an error being +produced right away, because the mechanism of falling through patterns +isn’t present in let expressions. When pattern matching +fails in a do expression, the fail function is +called. It’s part of the Monad type class and it enables +failed pattern matching to result in a failure in the context of the +current monad instead of making our program crash. Its default +implementation is this:

+
fail :: (Monad m) => String -> m a
+fail msg = error msg
+

So by default it does make our program crash, but monads that +incorporate a context of possible failure (like Maybe) +usually implement it on their own. For Maybe, it’s +implemented like so:

+
fail _ = Nothing
+

It ignores the error message and makes a Nothing. So +when pattern matching fails in a Maybe value that’s written +in do notation, the whole value results in a +Nothing. This is preferable to having our program crash. +Here’s a do expression with a pattern that’s bound to +fail:

+
wopwop :: Maybe Char
 wopwop = do
-    (x:xs) <- Just ""
-    return x
-
- -

-The pattern matching fails, so the effect is the same as if the whole line with -the pattern was replaced with a Nothing. Let's try -this out: -

- -
-ghci> wopwop
-Nothing
-
- -

-The failed pattern matching has caused a failure within the context of our -monad instead of causing a program-wide failure, which is pretty neat. -

- - -

The list monad

-dead cat -

-So far, we've seen how Maybe values can be viewed as -values with a failure context and how we can incorporate failure handling into -our code by using >>= to feed them to functions. -In this section, we're going to take a look at how to use the monadic aspects of -lists to bring non-determinism into our code in a clear and readable manner. -

- -

-We've already talked about how lists represent non-deterministic values -when they're used as applicatives. A value like 5 -is deterministic. It has only one result and we know exactly what it is. On the -other hand, a value like [3,8,9] contains several -results, so we can view it as one value that is actually many values at the same -time. Using lists as applicative functors showcases this -non-determinism nicely: -

- -
-ghci> (*) <$> [1,2,3] <*> [10,100,1000]
-[10,100,1000,20,200,2000,30,300,3000]
-
- -

-All the possible combinations of multiplying elements from the left list with -elements from the right list are included in the resulting list. When dealing -with non-determinism, there are many choices that we can make, so we just try -all of them, and so the result is a non-deterministic value as well, only it has -many more results. -

- -

-This context of non-determinism translates to monads very nicely. Let's go ahead -and see what the Monad instance for lists looks like: -

- -
-instance Monad [] where
+    (x:xs) <- Just ""
+    return x
+

The pattern matching fails, so the effect is the same as if the whole +line with the pattern was replaced with a Nothing. Let’s +try this out:

+
ghci> wopwop
+Nothing
+

The failed pattern matching has caused a failure within the context +of our monad instead of causing a program-wide failure, which is pretty +neat.
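As a small sketch of this behavior (firstChar is our name, not the chapter's), the same pattern match either succeeds or fails monadically depending on the input:

```haskell
-- Pattern match failure in a Maybe do block calls fail, which for Maybe
-- just produces Nothing instead of crashing the program.
firstChar :: String -> Maybe Char
firstChar s = do
    (x:_) <- Just s   -- fails for "", yielding Nothing
    return x
```

So firstChar "hello" is Just 'h', while firstChar "" is Nothing rather than a runtime error.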

+

The list monad

+dead cat +

So far, we’ve seen how Maybe values can be viewed as +values with a failure context and how we can incorporate failure +handling into our code by using >>= to feed them to +functions. In this section, we’re going to take a look at how to use the +monadic aspects of lists to bring non-determinism into our code in a +clear and readable manner.

+

We’ve already talked about how lists represent non-deterministic +values when they’re used as applicatives. A value like 5 is +deterministic. It has only one result and we know exactly what it is. On +the other hand, a value like [3,8,9] contains several +results, so we can view it as one value that is actually many values at +the same time. Using lists as applicative functors showcases this +non-determinism nicely:

+
ghci> (*) <$> [1,2,3] <*> [10,100,1000]
+[10,100,1000,20,200,2000,30,300,3000]
+

All the possible combinations of multiplying elements from the left +list with elements from the right list are included in the resulting +list. When dealing with non-determinism, there are many choices that we +can make, so we just try all of them, and so the result is a +non-deterministic value as well, only it has many more results.

+

This context of non-determinism translates to monads very nicely. +Let’s go ahead and see what the Monad instance for lists +looks like:

+
instance Monad [] where
     return x = [x]
     xs >>= f = concat (map f xs)
-    fail _ = []
-
- -

-return does the same thing as pure, so we should already be familiar with -return for lists. It takes a value and puts it in a -minimal default context that still yields that value. In other words, it makes a -list that has only that one value as its result. This is useful for when we want -to just wrap a normal value into a list so that it can interact with -non-deterministic values. -

- -

-To understand how >>= works for lists, it's -best if we take a look at it in action to gain some intuition first. >>= is about taking a value with a context (a monadic -value) and feeding it to a function that takes a normal value and returns one -that has context. If that function just produced a normal value instead of one -with a context, >>= wouldn't be so useful -because after one use, the context would be lost. Anyway, let's try feeding a -non-deterministic value to a function: -

- -
-ghci> [3,4,5] >>= \x -> [x,-x]
-[3,-3,4,-4,5,-5]
-
- -

-When we used >>= with Maybe, the monadic value was fed into the function while -taking care of possible failures. Here, it takes care of non-determinism for us. -[3,4,5] is a non-deterministic value and we feed it -into a function that returns a non-deterministic value as well. The result is -also non-deterministic, and it features all the possible results of taking -elements from the list [3,4,5] and passing them to -the function \x -> [x,-x]. This function takes a -number and produces two results: one negated and one that's unchanged. So when -we use >>= to feed this list to the function, -every number is negated and also kept unchanged. The x -from the lambda takes on every value from the list that's fed to it. -

- -

-To see how this is achieved, we can just follow the implementation. First, we -start off with the list [3,4,5]. Then, we map the -lambda over it and the result is the following: -

- -
-[[3,-3],[4,-4],[5,-5]]
-
- -

-The lambda is applied to every element and we get a list of lists. Finally, we -just flatten the list and voila! We've applied a non-deterministic function to a -non-deterministic value! -

- -

-Non-determinism also includes support for failure. The empty list [] is pretty much the equivalent of Nothing, because it signifies the absence of a result. -That's why failing is just defined as the empty list. The error message gets -thrown away. Let's play around with lists that fail: -

- -
-ghci> [] >>= \x -> ["bad","mad","rad"]
+    fail _ = []
+

return does the same thing as pure, so we +should already be familiar with return for lists. It takes +a value and puts it in a minimal default context that still yields that +value. In other words, it makes a list that has only that one value as +its result. This is useful for when we want to just wrap a normal value +into a list so that it can interact with non-deterministic values.

+

To understand how >>= works for lists, it’s best +if we take a look at it in action to gain some intuition first. +>>= is about taking a value with a context (a monadic +value) and feeding it to a function that takes a normal value and +returns one that has context. If that function just produced a normal +value instead of one with a context, >>= wouldn’t be +so useful because after one use, the context would be lost. Anyway, +let’s try feeding a non-deterministic value to a function:

+
ghci> [3,4,5] >>= \x -> [x,-x]
+[3,-3,4,-4,5,-5]
+

When we used >>= with Maybe, the +monadic value was fed into the function while taking care of possible +failures. Here, it takes care of non-determinism for us. +[3,4,5] is a non-deterministic value and we feed it into a +function that returns a non-deterministic value as well. The result is +also non-deterministic, and it features all the possible results of +taking elements from the list [3,4,5] and passing them to +the function \x -> [x,-x]. This function takes a number +and produces two results: one negated and one that’s unchanged. So when +we use >>= to feed this list to the function, every +number is negated and also kept unchanged. The x from the +lambda takes on every value from the list that’s fed to it.

+

To see how this is achieved, we can just follow the implementation. +First, we start off with the list [3,4,5]. Then, we map the +lambda over it and the result is the following:

+
[[3,-3],[4,-4],[5,-5]]
+

The lambda is applied to every element and we get a list of lists. +Finally, we just flatten the list and voila! We’ve applied a +non-deterministic function to a non-deterministic value!
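Following the implementation by hand can be checked directly; this sketch names the intermediate stages (the names are ours):

```haskell
-- For lists, xs >>= f is concat (map f xs): map the function, then flatten.
mapped :: [[Int]]
mapped = map (\x -> [x, -x]) [3,4,5]   -- [[3,-3],[4,-4],[5,-5]]

flattened :: [Int]
flattened = concat mapped              -- [3,-3,4,-4,5,-5]

direct :: [Int]
direct = [3,4,5] >>= \x -> [x, -x]    -- the same result in one go
```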

+

Non-determinism also includes support for failure. The empty list +[] is pretty much the equivalent of Nothing, +because it signifies the absence of a result. That’s why failing is just +defined as the empty list. The error message gets thrown away. Let’s +play around with lists that fail:

+
ghci> [] >>= \x -> ["bad","mad","rad"]
 []
 ghci> [1,2,3] >>= \x -> []
-[]
-
- -

-In the first line, an empty list is fed into the lambda. Because the list has no -elements, none of them can be passed to the function and so the result is an -empty list. This is similar to feeding Nothing to a -function. In the second line, each element gets passed to the function, but the -element is ignored and the function just returns an empty list. Because the -function fails for every element that goes in it, the result is a failure. -

- -

-Just like with Maybe values, we can chain several -lists with >>=, propagating the -non-determinism: -

- -
-ghci> [1,2] >>= \n -> ['a','b'] >>= \ch -> return (n,ch)
-[(1,'a'),(1,'b'),(2,'a'),(2,'b')]
-
- -concatmap - -

-The list [1,2] gets bound to n and ['a','b'] gets bound to -ch. Then, we do return (n,ch) (or [(n,ch)]), -which means taking a pair of (n,ch) and putting it in -a default minimal context. In this case, it's making the smallest possible list -that still presents (n,ch) as the result and features -as little non-determinism as possible. Its effect on the context is minimal. -What we're saying here is this: for every element in [1,2], -go over every element in ['a','b'] and produce a -tuple of one element from each list. -

- -

-Generally speaking, because return takes a value and -wraps it in a minimal context, it doesn't have any extra effect (like failing -in Maybe or resulting in more non-determinism for -lists) but it does present something as its result. -

- +[]
+

In the first line, an empty list is fed into the lambda. Because the +list has no elements, none of them can be passed to the function and so +the result is an empty list. This is similar to feeding +Nothing to a function. In the second line, each element +gets passed to the function, but the element is ignored and the function +just returns an empty list. Because the function fails for every element +that goes in it, the result is a failure.

+

Just like with Maybe values, we can chain several lists +with >>=, propagating the non-determinism:

+
ghci> [1,2] >>= \n -> ['a','b'] >>= \ch -> return (n,ch)
+[(1,'a'),(1,'b'),(2,'a'),(2,'b')]
+concatmap +

The list [1,2] gets bound to n and +['a','b'] gets bound to ch. Then, we do +return (n,ch) (or [(n,ch)]), which means +taking a pair of (n,ch) and putting it in a default minimal +context. In this case, it’s making the smallest possible list that still +presents (n,ch) as the result and features as little +non-determinism as possible. Its effect on the context is minimal. What +we’re saying here is this: for every element in [1,2], go +over every element in ['a','b'] and produce a tuple of one +element from each list.

+

Generally speaking, because return takes a value and +wraps it in a minimal context, it doesn’t have any extra effect (like +failing in Maybe or resulting in more non-determinism for +lists) but it does present something as its result.

-When you have non-deterministic values interacting, you can view their -computation as a tree where every possible result in a list represents a -separate branch. +

When you have non-deterministic values interacting, you can view +their computation as a tree where every possible result in a list +represents a separate branch.

- -

-Here's the previous expression rewritten in do notation: -

- - -
-listOfTuples :: [(Int,Char)]
+

Here’s the previous expression rewritten in do +notation:

+
listOfTuples :: [(Int,Char)]
 listOfTuples = do
     n <- [1,2]
-    ch <- ['a','b']
-    return (n,ch)
-
- -

-This makes it a bit more obvious that n takes on -every value from [1,2] and ch takes on every value from ['a','b']. Just like with Maybe, -we're extracting the elements from the monadic values and treating them like -normal values and >>= takes care of the context -for us. The context in this case is non-determinism. -

- -

-Using lists with do notation really reminds me of -something we've seen before. Check out the following piece of code: -

- -
-ghci> [ (n,ch) | n <- [1,2], ch <- ['a','b'] ]
-[(1,'a'),(1,'b'),(2,'a'),(2,'b')]
-
- -

-Yes! List comprehensions! In our do notation example, -n became every result from [1,2] -and for every such result, ch was assigned a result -from ['a','b'] and then the final line put (n,ch) into a default context (a singleton list) to present -it as the result without introducing any additional non-determinism. In this -list comprehension, the same thing happened, only we didn't have to write -return at the end to present (n,ch) as the result because the output part of a list -comprehension did that for us. -

- -

-In fact, list comprehensions are just syntactic sugar for using lists as -monads. In the end, list comprehensions and lists in do -notation translate to using >>= to do -computations that feature non-determinism. -

- -

-List comprehensions allow us to filter our output. For instance, we can filter a -list of numbers to search only for that numbers whose digits contain a 7: -

- -
-ghci> [ x | x <- [1..50], '7' `elem` show x ]
-[7,17,27,37,47]
-
- -

-We apply show to x to turn -our number into a string and then we check if the character '7' is part of that string. Pretty clever. To see how -filtering in list comprehensions translates to the list monad, we have to check -out the guard function and the MonadPlus type class. The MonadPlus -type class is for monads that can also act as monoids. Here's its definition: -

- -
-class Monad m => MonadPlus m where
+    ch <- ['a','b']
+    return (n,ch)
+

This makes it a bit more obvious that n takes on every +value from [1,2] and ch takes on every value +from ['a','b']. Just like with Maybe, we’re +extracting the elements from the monadic values and treating them like +normal values and >>= takes care of the context for +us. The context in this case is non-determinism.

+

Using lists with do notation really reminds me of +something we’ve seen before. Check out the following piece of code:

+
ghci> [ (n,ch) | n <- [1,2], ch <- ['a','b'] ]
+[(1,'a'),(1,'b'),(2,'a'),(2,'b')]
+

Yes! List comprehensions! In our do notation example, +n became every result from [1,2] and for every +such result, ch was assigned a result from +['a','b'] and then the final line put (n,ch) +into a default context (a singleton list) to present it as the result +without introducing any additional non-determinism. In this list +comprehension, the same thing happened, only we didn’t have to write +return at the end to present (n,ch) as the +result because the output part of a list comprehension did that for +us.

+

In fact, list comprehensions are just syntactic sugar for using lists +as monads. In the end, list comprehensions and lists in do +notation translate to using >>= to do computations +that feature non-determinism.
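So the same non-deterministic computation can be spelled three ways; a quick sketch (the names are ours) confirms they agree:

```haskell
viaBind, viaDo, viaComp :: [(Int, Char)]
-- explicit >>= chaining
viaBind = [1,2] >>= \n -> ['a','b'] >>= \ch -> return (n, ch)
-- do notation
viaDo = do
    n  <- [1,2]
    ch <- ['a','b']
    return (n, ch)
-- list comprehension: the output part plays the role of return
viaComp = [ (n, ch) | n <- [1,2], ch <- ['a','b'] ]
```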

+

List comprehensions allow us to filter our output. For instance, we +can filter a list of numbers to keep only those numbers whose +digits contain a 7:

+
ghci> [ x | x <- [1..50], '7' `elem` show x ]
+[7,17,27,37,47]
+

We apply show to x to turn our number into +a string and then we check if the character '7' is part of +that string. Pretty clever. To see how filtering in list comprehensions +translates to the list monad, we have to check out the +guard function and the MonadPlus type class. +The MonadPlus type class is for monads that can also act as +monoids. Here’s its definition:

+
class Monad m => MonadPlus m where
     mzero :: m a
-    mplus :: m a -> m a -> m a
-
- -

-mzero is synonymous to mempty -from the Monoid type class and mplus corresponds to mappend. -Because lists are monoids as well as monads, they can be made an instance of -this type class: -

- -
-instance MonadPlus [] where
+    mplus :: m a -> m a -> m a
+

mzero is synonymous with mempty from the +Monoid type class and mplus corresponds to +mappend. Because lists are monoids as well as monads, they +can be made an instance of this type class:

+
instance MonadPlus [] where
     mzero = []
-    mplus = (++)
-
- -

-For lists mzero represents a non-deterministic -computation that has no results at all — a failed computation. mplus joins two non-deterministic values into one. The -guard function is defined like this: -

- -
-guard :: (MonadPlus m) => Bool -> m ()
+    mplus = (++)
+

For lists mzero represents a non-deterministic +computation that has no results at all — a failed computation. +mplus joins two non-deterministic values into one. The +guard function is defined like this:

+
guard :: (MonadPlus m) => Bool -> m ()
 guard True = return ()
-guard False = mzero
-
- -

-It takes a boolean value and if it's True, takes a () and puts it in a minimal default context -that still succeeds. Otherwise, it makes a failed monadic value. Here it is in -action: -

- -
-ghci> guard (5 > 2) :: Maybe ()
+guard False = mzero
+

It takes a boolean value and if it’s True, takes a +() and puts it in a minimal default context that still +succeeds. Otherwise, it makes a failed monadic value. Here it is in +action:

+
ghci> guard (5 > 2) :: Maybe ()
 Just ()
 ghci> guard (1 > 2) :: Maybe ()
 Nothing
 ghci> guard (5 > 2) :: [()]
 [()]
 ghci> guard (1 > 2) :: [()]
-[]
-
- -

-Looks interesting, but how is it useful? In the list monad, we use it to filter -out non-deterministic computations. Observe: -

- -
-ghci> [1..50] >>= (\x -> guard ('7' `elem` show x) >> return x)
-[7,17,27,37,47]
-
- -

-The result here is the same as the result of our previous list comprehension. -How does guard achieve this? Let's first -see how guard functions in conjunction with >>: -

- -
-ghci> guard (5 > 2) >> return "cool" :: [String]
-["cool"]
-ghci> guard (1 > 2) >> return "cool" :: [String]
-[]
-
- -

-If guard succeeds, the result contained within it -is an empty tuple. So then, we use >> to ignore -that empty tuple and present something else as the result. However, if guard fails, then so will the return later on, because feeding an empty list to a -function with >>= always results in an empty -list. A guard basically says: if this boolean is -False then produce a failure right here, otherwise -make a successful value that has a dummy result of () -inside it. All this does is to allow the computation to continue. -

- -

-Here's the previous example rewritten in do notation: -

- -
-sevensOnly :: [Int]
+[]
+

Looks interesting, but how is it useful? In the list monad, we use it +to filter out non-deterministic computations. Observe:

+
ghci> [1..50] >>= (\x -> guard ('7' `elem` show x) >> return x)
+[7,17,27,37,47]
+

The result here is the same as the result of our previous list +comprehension. How does guard achieve this? Let’s first see +how guard functions in conjunction with +>>:

+
ghci> guard (5 > 2) >> return "cool" :: [String]
+["cool"]
+ghci> guard (1 > 2) >> return "cool" :: [String]
+[]
+

If guard succeeds, the result contained within it is an +empty tuple. So then, we use >> to ignore that empty +tuple and present something else as the result. However, if +guard fails, then so will the return later on, +because feeding an empty list to a function with >>= +always results in an empty list. A guard basically says: if +this boolean is False then produce a failure right here, +otherwise make a successful value that has a dummy result of +() inside it. All this does is to allow the computation to +continue.
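In the list monad, guard b >> return x therefore collapses to either [x] or []; here is a sketch comparing the two spellings (byHand is our name for the expanded form):

```haskell
import Control.Monad (guard)

-- guard produces [()] or [], so >> return x keeps or discards x.
withGuard :: Int -> [Int]
withGuard x = guard ('7' `elem` show x) >> return x

-- What that amounts to for lists: succeed with x, or fail with [].
byHand :: Int -> [Int]
byHand x = if '7' `elem` show x then [x] else []
```

For example, withGuard 7 is [7] and withGuard 8 is [], exactly like byHand.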

+

Here’s the previous example rewritten in do +notation:

+
sevensOnly :: [Int]
 sevensOnly = do
     x <- [1..50]
-    guard ('7' `elem` show x)
-    return x
-
- -

-Had we forgotten to present x as the final result by -using return, the resulting list would just be a list -of empty tuples. Here's this again in the form of a list comprehension: -

- -
-ghci> [ x | x <- [1..50], '7' `elem` show x ]
-[7,17,27,37,47]
-
- -

-So filtering in list comprehensions is the same as using guard. -

- -

A knight's quest

-

-Here's a problem that really lends itself to being solved with non-determinism. -Say you have a chess board and only one knight piece on it. We want to find out -if the knight can reach a certain position in three moves. We'll just use a pair -of numbers to represent the knight's position on the chess board. The first -number will determine the column he's in and the second number will determine -the row. -

- -hee haw im a horse - -

-Let's make a type synonym for the knight's current position on the chess board: -

- -
-type KnightPos = (Int,Int)
-
- -

-So let's say that the knight starts at (6,2). Can he -get to (6,1) in exactly three moves? Let's see. If we -start off at (6,2) what's the best move to make next? -I know, how about all of them! We have non-determinism at our disposal, so -instead of picking one move, let's just pick all of them at once. Here's a -function that takes the knight's position and returns all of its next moves: -

- -
-moveKnight :: KnightPos -> [KnightPos]
+    guard ('7' `elem` show x)
+    return x
+

Had we forgotten to present x as the final result by +using return, the resulting list would just be a list of +empty tuples. Here’s this again in the form of a list comprehension:

+
ghci> [ x | x <- [1..50], '7' `elem` show x ]
+[7,17,27,37,47]
+

So filtering in list comprehensions is the same as using +guard.

+

A knight’s quest

+

Here’s a problem that really lends itself to being solved with +non-determinism. Say you have a chess board and only one knight piece on +it. We want to find out if the knight can reach a certain position in +three moves. We’ll just use a pair of numbers to represent the knight’s +position on the chess board. The first number will determine the column +he’s in and the second number will determine the row.

+hee haw im a horse +

Let’s make a type synonym for the knight’s current position on the +chess board:

+
type KnightPos = (Int,Int)
+

So let’s say that the knight starts at (6,2). Can he get +to (6,1) in exactly three moves? Let’s see. If we start off +at (6,2) what’s the best move to make next? I know, how +about all of them! We have non-determinism at our disposal, so instead +of picking one move, let’s just pick all of them at once. Here’s a +function that takes the knight’s position and returns all of its next +moves:

+
moveKnight :: KnightPos -> [KnightPos]
 moveKnight (c,r) = do
-    (c',r') <- [(c+2,r-1),(c+2,r+1),(c-2,r-1),(c-2,r+1)
+    (c',r') <- [(c+2,r-1),(c+2,r+1),(c-2,r-1),(c-2,r+1)
                ,(c+1,r-2),(c+1,r+2),(c-1,r-2),(c-1,r+2)
                ]
-    guard (c' `elem` [1..8] && r' `elem` [1..8])
-    return (c',r')
-
- -

-The knight can always take one step horizontally or vertically and two steps -horizontally or vertically but its movement has to be both horizontal and -vertical. (c',r') takes on every value from the list -of movements and then guard makes sure that the new -move, (c',r') is still on the board. If it it's not, -it produces an empty list, which causes a failure and return (c',r') isn't carried out for that position. -

- -

-This function can also be written without the use of lists as a monad, but we -did it here just for kicks. Here is the same function done with filter: -

- -
-moveKnight :: KnightPos -> [KnightPos]
+    guard (c' `elem` [1..8] && r' `elem` [1..8])
+    return (c',r')
+

The knight always takes one step along one axis (horizontal or vertical) and +two steps along the other, so its movement is always both +horizontal and vertical. (c',r') takes on every value from +the list of movements and then guard makes sure that the +new position, (c',r'), is still on the board. If it’s not, it +produces an empty list, which causes a failure and +return (c',r') isn’t carried out for that position.

+

This function can also be written without the use of lists as a +monad, but we did it here just for kicks. Here is the same function done +with filter:

+
moveKnight :: KnightPos -> [KnightPos]
 moveKnight (c,r) = filter onBoard
     [(c+2,r-1),(c+2,r+1),(c-2,r-1),(c-2,r+1)
     ,(c+1,r-2),(c+1,r+2),(c-1,r-2),(c-1,r+2)
     ]
-    where onBoard (c,r) = c `elem` [1..8] && r `elem` [1..8]
-
- -

-Both of these do the same thing, so pick one that you think looks nicer. Let's -give it a whirl: -

- -
-ghci> moveKnight (6,2)
+    where onBoard (c,r) = c `elem` [1..8] && r `elem` [1..8]
+

Both of these do the same thing, so pick one that you think looks +nicer. Let’s give it a whirl:

+
ghci> moveKnight (6,2)
 [(8,1),(8,3),(4,1),(4,3),(7,4),(5,4)]
 ghci> moveKnight (8,1)
-[(6,2),(7,3)]
-
- -

-Works like a charm! We take one position and we just carry out all the possible -moves at once, so to speak. So now that we have a non-deterministic next position, -we just use >>= to feed it to moveKnight. Here's a function that takes a position and -returns all the positions that you can reach from it in three moves: -

- -
-in3 :: KnightPos -> [KnightPos]
+[(6,2),(7,3)]
+

Works like a charm! We take one position and we just carry out all +the possible moves at once, so to speak. So now that we have a +non-deterministic next position, we just use >>= to +feed it to moveKnight. Here’s a function that takes a +position and returns all the positions that you can reach from it in +three moves:

+
in3 :: KnightPos -> [KnightPos]
 in3 start = do
     first <- moveKnight start
     second <- moveKnight first
-    moveKnight second
-
- -

-If you pass it (6,2), the resulting list is quite -big, because if there are several ways to reach some position in three moves, -it crops up in the list several times. The above without do notation: -

- - -
-in3 start = return start >>= moveKnight >>= moveKnight >>= moveKnight
-
- -

-Using >>= once gives us all possible moves from -the start and then when we use >>= the second -time, for every possible first move, every possible next move is computed, and -the same goes for the last move. -

- -

-Putting a value in a default context by applying return -to it and then feeding it to a function with >>= -is the same as just normally applying the function to that value, but we did it -here anyway for style. -

- -

-Now, let's make a function that takes two positions and tells us if you can -get from one to the other in exactly three steps: -

- -
-canReachIn3 :: KnightPos -> KnightPos -> Bool
-canReachIn3 start end = end `elem` in3 start
-
- -

-We generate all the possible positions in three steps and then we see if the -position we're looking for is among them. So let's see if we can get from -(6,2) to (6,1) in three -moves: -

- -
-ghci> (6,2) `canReachIn3` (6,1)
-True
-
- -

-Yes! How about from (6,2) to (7,3)? -

- -
-ghci> (6,2) `canReachIn3` (7,3)
-False
-
- -

-No! As an exercise, you can change this function so that when you can reach one -position from the other, it tells you which moves to take. Later on, we'll see -how to modify this function so that we also pass it the number of moves to take -instead of that number being hardcoded like it is now. -

- - -

Monad laws

- -the court finds you guilty of peeing all over
-everything - -

-Just like applicative functors, and functors before them, monads come with a few -laws that all monad instances must abide by. Just because something is made an -instance of the Monad type class doesn't mean that -it's a monad, it just means that it was made an instance of a type class. For -a type to truly be a monad, the monad laws must hold for that type. These laws -allow us to make reasonable assumptions about the type and its behavior. -

- -

-Haskell allows any type to be an instance of any type class as long as the types check -out. It can't check if the monad laws hold for a type though, so if we're making -a new instance of the Monad type class, we have to be -reasonably sure that all is well with the monad laws for that type. We can rely -on the types that come with the standard library to satisfy the laws, but later -when we go about making our own monads, we're going to have to manually check -the if the laws hold. But don't worry, they're not complicated. -

- -

Left identity

- -

-The first monad law states that if we take a value, put it in a default -context with return and then feed it to a function by -using >>=, it's the same as just taking the -value and applying the function to it. To put it formally: -

- + moveKnight second
+

If you pass it (6,2), the resulting list is quite big, +because if there are several ways to reach some position in three moves, +it crops up in the list several times. The above without do +notation:

+
in3 start = return start >>= moveKnight >>= moveKnight >>= moveKnight
+

Using >>= once gives us all possible moves from +the start and then when we use >>= the second time, +for every possible first move, every possible next move is computed, and +the same goes for the last move.

+

Putting a value in a default context by applying return +to it and then feeding it to a function with >>= is +the same as just normally applying the function to that value, but we +did it here anyway for style.

+

Now, let’s make a function that takes two positions and tells us if +you can get from one to the other in exactly three steps:

+
canReachIn3 :: KnightPos -> KnightPos -> Bool
+canReachIn3 start end = end `elem` in3 start
+

We generate all the possible positions in three steps and then we see +if the position we’re looking for is among them. So let’s see if we can +get from (6,2) to (6,1) in three moves:

+
ghci> (6,2) `canReachIn3` (6,1)
+True
+

Yes! How about from (6,2) to (7,3)?

+
ghci> (6,2) `canReachIn3` (7,3)
+False
+

No! As an exercise, you can change this function so that when you can +reach one position from the other, it tells you which moves to take. +Later on, we’ll see how to modify this function so that we also pass it +the number of moves to take instead of that number being hardcoded like +it is now.
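One way to sketch that exercise (a sketch, not the book's solution: `in3Paths` and `reachIn3Via` are names made up here, and `KnightPos` and `moveKnight` are repeated so the snippet stands alone) is to have the list monad carry whole move sequences instead of just end positions:

```haskell
type KnightPos = (Int, Int)

-- Same moveKnight as above, repeated so this sketch is self-contained.
moveKnight :: KnightPos -> [KnightPos]
moveKnight (c,r) = filter onBoard
    [(c+2,r-1),(c+2,r+1),(c-2,r-1),(c-2,r+1)
    ,(c+1,r-2),(c+1,r+2),(c-1,r-2),(c-1,r+2)
    ]
  where onBoard (c',r') = c' `elem` [1..8] && r' `elem` [1..8]

-- Like in3, but each result is the full three-move path, not just the end.
in3Paths :: KnightPos -> [[KnightPos]]
in3Paths start = do
    first  <- moveKnight start
    second <- moveKnight first
    third  <- moveKnight second
    return [first, second, third]

-- All three-move paths from start that finish at end.
reachIn3Via :: KnightPos -> KnightPos -> [[KnightPos]]
reachIn3Via start end = filter ((== end) . last) (in3Paths start)
```

Any element of `reachIn3Via (6,2) (6,1)` is then a concrete sequence of three moves answering the exercise.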

+

Monad laws

+ +

Just like applicative functors, and functors before them, monads come +with a few laws that all monad instances must abide by. Just because +something is made an instance of the Monad type class +doesn’t mean that it’s a monad, it just means that it was made an +instance of a type class. For a type to truly be a monad, the monad laws +must hold for that type. These laws allow us to make reasonable +assumptions about the type and its behavior.

+

Haskell allows any type to be an instance of any type class as long +as the types check out. It can’t check if the monad laws hold for a type +though, so if we’re making a new instance of the Monad type +class, we have to be reasonably sure that all is well with the monad +laws for that type. We can rely on the types that come with the standard +library to satisfy the laws, but later when we go about making our own +monads, we’re going to have to manually check if the laws hold. But +don’t worry, they’re not complicated.

+

Left identity

+

The first monad law states that if we take a value, put it in a +default context with return and then feed it to a function +by using >>=, it’s the same as just taking the value +and applying the function to it. To put it formally:
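Written as an equation, with x any value and f any function that returns a monadic value, the law reads:

```haskell
return x >>= f  ==  f x
```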

- -

-If you look at monadic values as values with a context and return as taking a value and putting it in a default -minimal context that still presents that value as its result, it makes sense, -because if that context is really minimal, feeding this monadic value to a -function shouldn't be much different than just applying the function to the -normal value, and indeed it isn't different at all. -

- -

-For the Maybe monad return -is defined as Just. The Maybe -monad is all about possible failure, and if we have a value and want to put it -in such a context, it makes sense that we treat it as a successful computation -because, well, we know what the value is. Here's some return usage with Maybe: -

- -
-ghci> return 3 >>= (\x -> Just (x+100000))
+

If you look at monadic values as values with a context and +return as taking a value and putting it in a default +minimal context that still presents that value as its result, it makes +sense, because if that context is really minimal, feeding this monadic +value to a function shouldn’t be much different than just applying the +function to the normal value, and indeed it isn’t different at all.

+

For the Maybe monad return is defined as +Just. The Maybe monad is all about possible +failure, and if we have a value and want to put it in such a context, it +makes sense that we treat it as a successful computation because, well, +we know what the value is. Here’s some return usage with +Maybe:

+
ghci> return 3 >>= (\x -> Just (x+100000))
 Just 100003
 ghci> (\x -> Just (x+100000)) 3
-Just 100003
-
- -

-For the list monad return puts something in a -singleton list. The >>= implementation for -lists goes over all the values in the list and applies the function to them, but -since there's only one value in a singleton list, it's the same as applying the -function to that value: -

- -
-ghci> return "WoM" >>= (\x -> [x,x,x])
-["WoM","WoM","WoM"]
-ghci> (\x -> [x,x,x]) "WoM"
-["WoM","WoM","WoM"]
-
- -

-We said that for IO, using return makes an I/O action that has no side-effects but -just presents a value as its result. So it makes sense that this law holds for -IO as well. -

- -

Right identity

- -

-The second law states that if we have a monadic value and we use >>= to feed it to return, -the result is our original monadic value. Formally: -

- +Just 100003
+

For the list monad return puts something in a singleton +list. The >>= implementation for lists goes over all +the values in the list and applies the function to them, but since +there’s only one value in a singleton list, it’s the same as applying +the function to that value:

+
ghci> return "WoM" >>= (\x -> [x,x,x])
+["WoM","WoM","WoM"]
+ghci> (\x -> [x,x,x]) "WoM"
+["WoM","WoM","WoM"]
+

We said that for IO, using return makes an +I/O action that has no side-effects but just presents a value as its +result. So it makes sense that this law holds for IO as +well.

+

Right identity

+

The second law states that if we have a monadic value and we use +>>= to feed it to return, the result is +our original monadic value. Formally:
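As an equation, with m any monadic value:

```haskell
m >>= return  ==  m
```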

- -

-This one might be a bit less obvious than the first one, but let's take a look -at why it should hold. When we feed monadic values to functions by using >>=, those functions take normal values and return -monadic ones. return is also one such function, if -you consider its type. Like we said, return puts a -value in a minimal context that still presents that value as its result. This -means that, for instance, for Maybe, it doesn't -introduce any failure and for lists, it doesn't introduce any extra -non-determinism. Here's a test run for a few monads: -

- -
-ghci> Just "move on up" >>= (\x -> return x)
-Just "move on up"
+

This one might be a bit less obvious than the first one, but let’s +take a look at why it should hold. When we feed monadic values to +functions by using >>=, those functions take normal +values and return monadic ones. return is also one such +function, if you consider its type. Like we said, return +puts a value in a minimal context that still presents that value as its +result. This means that, for instance, for Maybe, it +doesn’t introduce any failure and for lists, it doesn’t introduce any +extra non-determinism. Here’s a test run for a few monads:

+
ghci> Just "move on up" >>= (\x -> return x)
+Just "move on up"
 ghci> [1,2,3,4] >>= (\x -> return x)
 [1,2,3,4]
-ghci> putStrLn "Wah!" >>= (\x -> return x)
-Wah!
-
- -

-If we take a closer look at the list example, the implementation for ->>= is: -

- -
-xs >>= f = concat (map f xs)
-
- -

-So when we feed [1,2,3,4] to -return, first return -gets mapped over [1,2,3,4], resulting in -[[1],[2],[3],[4]] and then this gets concatenated and -we have our original list. -

- -

-Left identity and right identity are basically laws that describe how return should behave. It's an important function for -making normal values into monadic ones and it wouldn't be good if the monadic -value that it produced did a lot of other stuff. -

- -

Associativity

- -

-The final monad law says that when we have a chain of monadic function -applications with >>=, it shouldn't matter how -they're nested. Formally written: -

- +ghci> putStrLn "Wah!" >>= (\x -> return x) +Wah!
+

If we take a closer look at the list example, the implementation for +>>= is:

+
xs >>= f = concat (map f xs)
+

So when we feed [1,2,3,4] to return, first +return gets mapped over [1,2,3,4], resulting +in [[1],[2],[3],[4]] and then this gets concatenated and we +have our original list.
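That mapping-then-concatenating step can be spot-checked with a tiny standalone sketch (bindList is a made-up name for the list implementation of >>= quoted above):

```haskell
-- The list >>= from above, under a name of our own so nothing clashes.
bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concat (map f xs)

-- Mapping return over [1,2,3,4] gives [[1],[2],[3],[4]];
-- concat flattens it back to the original list (right identity).
check :: Bool
check = bindList [1,2,3,4] return == [1,2,3,4]
```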

+

Left identity and right identity are basically laws that describe how +return should behave. It’s an important function for making +normal values into monadic ones and it wouldn’t be good if the monadic +value that it produced did a lot of other stuff.

+

Associativity

+

The final monad law says that when we have a chain of monadic +function applications with >>=, it shouldn’t matter +how they’re nested. Formally written:
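As an equation:

```haskell
(m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
```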

- -

-Hmmm, now what's going on here? We have one monadic value, m and two monadic functions f -and g. When we're doing (m ->>= f) >>= g, we're feeding m to f, which results in a monadic value. Then, we feed that monadic value to g. -In the expression m >>= (\x -> f x >>= g), we take a -monadic value and we feed it to a function that feeds the result of f x to g. It's not easy to see -how those two are equal, so let's take a look at an example that makes this -equality a bit clearer. -

- -

-Remember when we had our tightrope walker Pierre walk a rope while birds landed -on his balancing pole? To simulate birds landing on his balancing pole, we -made a chain of several functions that might produce failure: -

- -
-ghci> return (0,0) >>= landRight 2 >>= landLeft 2 >>= landRight 2
-Just (2,4)
-
- -

-We started with Just (0,0) and then bound that value -to the next monadic function, landRight 2. The result -of that was another monadic value which got bound into the next -monadic function, and so on. If we were to explicitly parenthesize this, we'd -write: -

- -
-ghci> ((return (0,0) >>= landRight 2) >>= landLeft 2) >>= landRight 2
-Just (2,4)
-
- -

-But we can also write the routine like this: -

- -
-return (0,0) >>= (\x ->
+

Hmmm, now what’s going on here? We have one monadic value, +m and two monadic functions f and +g. When we’re doing +(m >>= f) >>= g, we’re feeding m +to f, which results in a monadic value. Then, we feed that +monadic value to g. In the expression +m >>= (\x -> f x >>= g), we take a monadic +value and we feed it to a function that feeds the result of +f x to g. It’s not easy to see how those two +are equal, so let’s take a look at an example that makes this equality a +bit clearer.

+

Remember when we had our tightrope walker Pierre walk a rope while +birds landed on his balancing pole? To simulate birds landing on his +balancing pole, we made a chain of several functions that might produce +failure:

+
ghci> return (0,0) >>= landRight 2 >>= landLeft 2 >>= landRight 2
+Just (2,4)
+

We started with Just (0,0) and then bound that value to +the next monadic function, landRight 2. The result of that +was another monadic value which got bound into the next monadic +function, and so on. If we were to explicitly parenthesize this, we’d +write:

+
ghci> ((return (0,0) >>= landRight 2) >>= landLeft 2) >>= landRight 2
+Just (2,4)
+

But we can also write the routine like this:

+
return (0,0) >>= (\x ->
 landRight 2 x >>= (\y ->
 landLeft 2 y >>= (\z ->
-landRight 2 z)))
-
- -

-return (0,0) is the same as -Just (0,0) and when we feed it to the lambda, -the x becomes (0,0). -landRight takes a number of birds and a pole (a tuple -of numbers) and that's what it gets passed. This results in a Just (0,2) and when we feed this to the next lambda, -y is (0,2). This goes on -until the final bird landing produces a Just (2,4), -which is indeed the result of the whole expression. -

- -

-So it doesn't matter how you nest feeding values to monadic functions, what -matters is their meaning. Here's another way to look at this law: consider -composing two functions, f and g. Composing two functions is implemented like so: -

- -
-(.) :: (b -> c) -> (a -> b) -> (a -> c)
-f . g = (\x -> f (g x))
-
- -

-If the type of g is a -> b -and the type of f is b -> c, -we arrange them into a new function which has a type of a -> c, so that its parameter is passed between those -functions. Now what if those two functions were monadic, that is, what if the -values they returned were monadic values? If we had a function of type -a -> m b, we couldn't just pass its result to a -function of type b -> m c, because that function -accepts a normal b, not a monadic one. We could -however, use >>= to make that happen. So by -using >>=, we can compose two monadic -functions: -

- -
-(<=<) :: (Monad m) => (b -> m c) -> (a -> m b) -> (a -> m c)
-f <=< g = (\x -> g x >>= f)
-
- -

-So now we can compose two monadic functions: -

- -
-ghci> let f x = [x,-x]
+landRight 2 z)))
+

return (0,0) is the same as Just (0,0) and +when we feed it to the lambda, the x becomes +(0,0). landRight takes a number of birds and a +pole (a tuple of numbers) and that’s what it gets passed. This results +in a Just (0,2) and when we feed this to the next lambda, +y is (0,2). This goes on until the final bird +landing produces a Just (2,4), which is indeed the result +of the whole expression.
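Here is a self-contained version of that check (the Pole, landLeft and landRight definitions are recalled from the tightrope-walker section earlier in the chapter, including its balance threshold of four birds):

```haskell
type Birds = Int
type Pole  = (Birds, Birds)

-- Birds land on the left side; Pierre falls if the sides differ by 4 or more.
landLeft :: Birds -> Pole -> Maybe Pole
landLeft n (left, right)
    | abs ((left + n) - right) < 4 = Just (left + n, right)
    | otherwise                    = Nothing

-- Birds land on the right side, same balance rule.
landRight :: Birds -> Pole -> Maybe Pole
landRight n (left, right)
    | abs (left - (right + n)) < 4 = Just (left, right + n)
    | otherwise                    = Nothing

-- The same routine, parenthesized to the left and nested with lambdas.
leftNested :: Maybe Pole
leftNested = ((return (0,0) >>= landRight 2) >>= landLeft 2) >>= landRight 2

lambdaNested :: Maybe Pole
lambdaNested = return (0,0) >>= (\x ->
               landRight 2 x >>= (\y ->
               landLeft 2 y >>= (\z ->
               landRight 2 z)))
```

Both leftNested and lambdaNested evaluate to Just (2,4), which is the associativity law at work on a concrete chain.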

+

So it doesn’t matter how you nest feeding values to monadic +functions, what matters is their meaning. Here’s another way to look at +this law: consider composing two functions, f and +g. Composing two functions is implemented like so:

+
(.) :: (b -> c) -> (a -> b) -> (a -> c)
+f . g = (\x -> f (g x))
+

If the type of g is a -> b and the type +of f is b -> c, we arrange them into a new +function which has a type of a -> c, so that its +parameter is passed between those functions. Now what if those two +functions were monadic, that is, what if the values they returned were +monadic values? If we had a function of type a -> m b, +we couldn’t just pass its result to a function of type +b -> m c, because that function accepts a normal +b, not a monadic one. We could, however, use +>>= to make that happen. So by using +>>=, we can compose two monadic functions:

+
(<=<) :: (Monad m) => (b -> m c) -> (a -> m b) -> (a -> m c)
+f <=< g = (\x -> g x >>= f)
+

So now we can compose two monadic functions:

+
ghci> let f x = [x,-x]
 ghci> let g x = [x*3,x*2]
 ghci> let h = f <=< g
 ghci> h 3
-[9,-9,6,-6]
-
- -

-Cool. So what does that have to do with the associativity law? Well, when we -look at the law as a law of compositions, it states that -f <=< (g <=< h) should be the same as -(f <=< g) <=< h. This is just another -way of saying that for monads, the nesting of operations shouldn't matter. -

- -

-If we translate the first two laws to use <=<, -then the left identity law states that for every monadic function f, -f <=< return -is the same as writing just f and the right identity law -says that return <=< f is also no different from -f. -

- -

-This is very similar to how if f -is a normal function, (f . g) . h is the same as f . (g . h), f . id -is always the same as f and id . -f -is also just f. -

- -

-In this chapter, we took a look at the basics of monads and learned how the -Maybe monad and the list monad work. In the next -chapter, we'll take a look at a whole bunch of other cool monads and we'll also -learn how to make our own. -

+[9,-9,6,-6]
+

Cool. So what does that have to do with the associativity law? Well, +when we look at the law as a law of compositions, it states that f <=< (g <=< h) should be the same +as (f <=< g) <=< h. This is +just another way of saying that for monads, the nesting of operations +shouldn’t matter.

+

If we translate the first two laws to use <=<, +then the left identity law states that for every monadic function +f, f <=< return is the +same as writing just f and the right +identity law says that return <=< f +is also no different from f.
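A quick concrete check of the two translated laws, using a throwaway monadic function f of our own (the <=< operator lives in Control.Monad):

```haskell
import Control.Monad ((<=<))

f :: Int -> [Int]
f x = [x, -x]

-- Left identity as composition: f <=< return behaves like f.
leftId :: Bool
leftId = (f <=< return) 3 == f 3

-- Right identity as composition: return <=< f behaves like f.
rightId :: Bool
rightId = (return <=< f) 3 == f 3
```

(f <=< return) 3, (return <=< f) 3 and f 3 all come out to [3,-3].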

+

This is very similar to how if f is a normal function, +(f . g) . h is the same as f . (g . h), +f . id is always the same as f and +id . f is also just f.

+

In this chapter, we took a look at the basics of monads and learned +how the Maybe monad and the list monad work. In the next +chapter, we’ll take a look at a whole bunch of other cool monads and +we’ll also learn how to make our own.

Input and Output

-

poor dog

+poor dog

We’ve mentioned that Haskell is a purely functional language. Whereas in imperative languages you usually get things done by giving the computer a series of steps to execute, functional programming is more of @@ -68,8 +68,8 @@

Input and Output

of all the things that purity offers, like laziness, robustness and modularity while efficiently communicating with the outside world.

Hello, world!

-

HELLO!

+HELLO!

Up until now, we’ve always loaded our functions into GHCI to test them out and play with them. We’ve also explored the standard library functions that way. But now, after eight or so chapters, we’re finally @@ -152,8 +152,8 @@

Hello, world!

getLine.

ghci> :t getLine
 getLine :: IO String
-

luggage

+luggage

Aha, o-kay. getLine is an I/O action that contains a result type of String. That makes sense, because it will wait for the user to input something at the terminal and then that @@ -672,8 +672,8 @@

Hello, world!

function that takes a string and returns an I/O action. That I/O action will, when performed, print beautiful poetry to your terminal.

Files and streams

-

streams

+streams

getChar is an I/O action that reads a single character from the terminal. getLine is an I/O action that reads a line from the terminal. These two are pretty straightforward and most @@ -968,8 +968,8 @@

Files and streams

type FilePath = String

IOMode is a type that’s defined like this:

data IOMode = ReadMode | WriteMode | AppendMode | ReadWriteMode
-

A FILE IN A CAKE!!!

+A FILE IN A CAKE!!!

Just like our type that represents the seven possible values for the days of the week, this type is an enumeration that represents what we want to do with our opened file. Very simple. Just note that this type @@ -1048,8 +1048,8 @@

Files and streams

result <- f handle hClose handle return result -

butter toast

+butter toast

We know the result will be an I/O action so we can just start off with a do. First we open the file and get a handle from it. Then, we apply handle to our function to get back the I/O @@ -1319,8 +1319,8 @@

Files and streams

$ cat todo.txt Take salad out of the oven

Command line arguments

-

COMMAND LINE ARGUMENTS!!! ARGH

+COMMAND LINE ARGUMENTS!!! ARGH

Dealing with command line arguments is pretty much a necessity if you want to make a script or application that runs on a terminal. Luckily, Haskell’s standard library has a nice way of getting command line @@ -1540,8 +1540,8 @@

Command line arguments

hClose tempHandle removeFile fileName renameFile tempName fileName -

fresh baked salad

+fresh baked salad

To summarize our solution: we made a dispatch association that maps from commands to functions that take some command line arguments and return an I/O action. We see what the command is and based on that we @@ -1587,9 +1587,9 @@

Command line arguments

input and if there is erroneous input, perform the error reporting I/O action. Another way is to use exceptions, which we will meet soon.

Randomness

-

+alt="this picture is the ultimate source of randomness and wackiness">

Many times while programming, you need to get some random data. Maybe you’re making a game where a die needs to be thrown or you need to generate some test data to test out your program. There are a lot of @@ -1860,8 +1860,8 @@

Randomness

then putStrLn "You are correct!" else putStrLn $ "Sorry, it was " ++ show randNumber askForNumber newGen -

jack of diamonds

+jack of diamonds

We make a function askForNumber, which takes a random number generator and returns an I/O action that will prompt the user for a number and tell him if he guessed it right. In that function, we first @@ -1933,9 +1933,9 @@

Randomness

less stuff in main and also provides us with a function that we can reuse easily.

Bytestrings

-

+alt="like normal string, only they byte … what a pedestrian pun this is">

Lists are a cool and useful data structure. So far, we’ve used them pretty much everywhere. There are a multitude of functions that operate on them and Haskell’s laziness allows us to exchange the for and while @@ -2121,8 +2121,8 @@

Bytestrings

write programs by using normal strings and then convert them to use bytestrings if the performance is not satisfactory.

Exceptions

-

timberr!!!!

+timberr!!!!

All languages have procedures, functions, and pieces of code that might fail in some way. That’s just a fact of life. Different languages have different ways of handling those failures. In C, we usually use @@ -2166,9 +2166,9 @@

Exceptions

*** Exception: divide by zero ghci> head [] *** Exception: Prelude.head: empty list -

+alt="Stop right there, criminal scum! Nobody breaks the law on my watch! Now pay your fine or it’s off to jail.">

Pure code can throw exceptions, but they can only be caught in the I/O part of our code (when we’re inside a do block that goes into main). That’s because you don’t know when (or if) @@ -2252,8 +2252,8 @@

Exceptions

which then decides what to do. So the final result is an I/O action that will either act the same as the first parameter or it will do what the handler tells it if the first I/O action throws an exception.

-

non sequitor

+non sequitur

If you’re familiar with try-catch blocks in languages like Java or Python, the catch function is similar to them. The first parameter is the thing to try, kind of like the stuff in the diff --git a/markdown/generated_html/introduction.html b/markdown/generated_html/introduction.html index 147f497..21544c1 100644 --- a/markdown/generated_html/introduction.html +++ b/markdown/generated_html/introduction.html @@ -46,8 +46,8 @@

About this tutorial

came falling into place. So this is an attempt at adding another useful resource for learning Haskell so you have a bigger chance of finding one you like.

-

bird

+bird

This tutorial is aimed at people who have experience in imperative programming languages (C, C++, Java, Python …) but haven’t programmed in a functional language before (Haskell, ML, OCaml …). Although I bet that @@ -68,7 +68,7 @@

About this tutorial

So what’s Haskell?

fx Haskell is a purely +width="150" height="146" alt="fx"> Haskell is a purely functional programming language. In imperative languages you get things done by giving the computer a sequence of tasks and then it executes them. While executing them, it can change state. For instance, @@ -93,7 +93,7 @@

So what’s Haskell?

is correct and then build more complex functions by gluing simple functions together.

lazy Haskell is lazy. +width="240" height="209" alt="lazy"> Haskell is lazy. That means that unless specifically told otherwise, Haskell won’t execute functions and calculate things until it’s really forced to show you a result. That goes well with referential transparency and it allows @@ -118,7 +118,7 @@

So what’s Haskell?

language you can just take some initial data and efficiently transform and mend it so it resembles what you want at the end.

boat Haskell is statically +width="160" height="153" alt="boat"> Haskell is statically typed. When you compile your program, the compiler knows which piece of code is a number, which is a string and so on. That means that a lot of possible errors are caught at compile time. If you try to add diff --git a/markdown/generated_html/making-our-own-types-and-typeclasses.html b/markdown/generated_html/making-our-own-types-and-typeclasses.html index 8d1497e..846bfb0 100644 --- a/markdown/generated_html/making-our-own-types-and-typeclasses.html +++ b/markdown/generated_html/making-our-own-types-and-typeclasses.html @@ -54,9 +54,9 @@

Algebraic data types intro

In a similar fashion, we can think of the Int type as being defined like this:

data Int = -2147483648 | -2147483647 | ... | -1 | 0 | 1 | 2 | ... | 2147483647
-

caveman

+class="left" width="220" height="215" alt="caveman">

The first and last value constructors are the minimum and maximum possible values of Int. It’s not actually defined like this, the ellipses are here because we omitted a heapload of numbers, so @@ -216,9 +216,9 @@

Algebraic data types intro

abstract in such a way that we hide their implementation. Also, whoever uses our module can’t pattern match against the value constructors.

Record syntax

-

record

+class="right" width="208" height="97" alt="record">

OK, we’ve been tasked with creating a data type that describes a person. The info that we want to store about that person is: first name, last name, age, height, phone number, and favorite ice-cream flavor. I @@ -319,9 +319,9 @@

Type parameters

get a clear picture of what type parameters work like in action, let’s take a look at how a type we’ve already met is implemented.

data Maybe a = Nothing | Just a
-

yeti

+class="left" width="209" height="260" alt="yeti">

The a here is the type parameter. And because there’s a type parameter involved, we call Maybe a type constructor. Depending on what we want this data type to hold when it’s not @@ -412,9 +412,9 @@

Type parameters

Car "Ford" "Mustang" 1967 :: (Num t) => Car [Char] [Char] t ghci> :t Car "Ford" "Mustang" "nineteen sixty seven" Car "Ford" "Mustang" "nineteen sixty seven" :: Car [Char] [Char] [Char] -

meekrat

+class="right" width="150" height="267" alt="meekrat">

In real life though, we’d end up using Car String String Int most of the time and so it would seem that parameterizing the Car type isn’t really worth it. We @@ -502,8 +502,8 @@

Type parameters

ghci> Vector 2 9 3 `vectMult` (Vector 4 9 5 `scalarMult` Vector 9 2 4) Vector 148 666 222

Derived instances

-

gob

+gob

In the Typeclasses 101 section, we explained the basics of typeclasses. We explained that a @@ -710,9 +710,9 @@

Type synonyms

reading our code and documentation. Here’s how the standard library defines String as a synonym for [Char].

type String = [Char]
-

chicken

+class="left" width="169" height="225" alt="chicken">

We’ve introduced the type keyword. The keyword might be misleading to some, because we’re not actually making anything new (we did that with the data keyword), but we’re just making a @@ -782,9 +782,9 @@

Type synonyms

Map Int String or if we’re dealin’ with one of them polymorphic functions, [a] or (Ord a) => Maybe a and stuff. And like, sometimes me and -my buddies say that Maybe is a type, but we don’t mean that, -cause every idiot knows Maybe is a type constructor. When I -apply an extra type to Maybe, like +my buddies say that Maybe is a type, but we don’t mean +that, cause every idiot knows Maybe is a type constructor. +When I apply an extra type to Maybe, like Maybe String, then I have a concrete type. You know, values can only have types that are concrete types! So in conclusion, live fast, love hard and don’t let anybody else use your comb!

@@ -920,9 +920,9 @@

Type synonyms

then we wouldn’t know why we couldn’t get the code. But now, we have information about the failure in our result type.

Recursive data structures

-

the fonz

+class="right" width="168" height="301" alt="the fonz">

As we’ve seen, a constructor in an algebraic data type can have several (or none at all) fields and each field must be of some concrete type. With that in mind, we can make types whose constructors have @@ -1023,9 +1023,9 @@

Recursive data structures

like that, normal prefix constructors or stuff like 8 or 'a', which are basically constructors for the numeric and character types, respectively.

-

binary search tree

+class="left" width="323" height="225" alt="binary search tree">

Now, we’re going to implement a binary search tree. If you’re not familiar with binary search trees from languages like C, here’s what they are: an element points to two elements, one on its left @@ -1139,9 +1139,9 @@

Recursive data structures

boolean values and weekday enumerations to binary search trees and more!

Typeclasses 102

-

tweet

+class="right" width="175" height="480" alt="tweet">

So far, we’ve learned about some of the standard Haskell typeclasses and we’ve seen which types are in them. We’ve also learned how to automatically make our own types instances of the standard typeclasses @@ -1375,9 +1375,9 @@

Typeclasses 102

show you the type declaration of a function. I think that’s pretty cool.

A yes-no typeclass

-

yesno

+class="left" width="201" height="111" alt="yesno">

In JavaScript and some other weakly typed languages, you can put almost anything inside an if expression. For example, you can do all of the following: if (0) alert("YEAH!") else alert("NO!"),
@@ -1499,9 +1499,9 @@
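The typeclass this section builds to mimic that JavaScript truthiness looks roughly like this — a sketch of the chapter's YesNo class, reconstructed from memory:

```haskell
class YesNo a where
    yesno :: a -> Bool

-- Numbers: anything non-zero is truthy, like in JavaScript.
instance YesNo Int where
    yesno 0 = False
    yesno _ = True

-- Lists (and therefore strings): empty means false-ish.
instance YesNo [a] where
    yesno [] = False
    yesno _  = True

-- Bool already is its own truth value.
instance YesNo Bool where
    yesno = id
```

With these instances, `yesno 0` and `yesno ""` behave the way the if (0) example above does.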

The Functor typeclass

than to see how it’s implemented? Let’s take a peek.

class Functor f where
     fmap :: (a -> b) -> f a -> f b
-

I AM FUNCTOOOOR!!!

+class="right" width="220" height="441" alt="I AM FUNCTOOOOR!!!">

Alright. We see that it defines one function, fmap, and doesn’t provide any default implementation for it. The type of fmap is interesting. In the definitions of typeclasses so
@@ -1673,9 +1673,9 @@
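To see that fmap type in action, here is a Functor instance for a homemade one-parameter type. Box is made up purely for illustration (it is not from the book; Maybe's Prelude instance works the same way):

```haskell
-- A trivial container holding exactly one value.
data Box a = Box a deriving (Show, Eq)

-- fmap :: (a -> b) -> Box a -> Box b: apply the function inside the box.
instance Functor Box where
    fmap f (Box x) = Box (f x)
```

The instance just unwraps, applies, and rewraps — exactly the shape `(a -> b) -> f a -> f b` demands.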

The Functor typeclass

functor laws in more detail in one of the next chapters.

Kinds and some type-foo

-

TYPE FOO MASTER

+class="right" width="287" height="400" alt="TYPE FOO MASTER">

Type constructors take other types as parameters to eventually produce concrete types. That kind of reminds me of functions, which take values as parameters to produce values. We’ve seen that type
diff --git a/markdown/generated_html/modules.html b/markdown/generated_html/modules.html
index 0265bd5..163da8c 100644
--- a/markdown/generated_html/modules.html
+++ b/markdown/generated_html/modules.html
@@ -33,8 +33,8 @@

Modules

Loading modules

-

modules

+modules

A Haskell module is a collection of related functions, types and typeclasses. A Haskell program is a collection of modules where the main module loads up the other modules and then uses the functions defined in
@@ -169,8 +169,8 @@

Data.List

When we transpose these three lists, the third powers are then in the first row, the second powers in the second one and so on. Mapping sum to that produces our desired result.
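The transpose-then-sum trick described above can be sketched like this; the coefficient lists are the chapter's polynomial example reproduced from memory, and addPolys is a hypothetical helper name:

```haskell
import Data.List (transpose)

-- Each inner list holds one polynomial's coefficients, highest power first.
-- transpose lines up coefficients of equal power; map sum adds each column.
addPolys :: [[Int]] -> [Int]
addPolys = map sum . transpose
```

So `addPolys [[0,3,5,9],[10,0,0,9],[8,5,1,-1]]` adds the three polynomials coefficient-wise.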

-

shopping lists

+shopping lists

foldl' and foldl1' are stricter versions of their respective lazy incarnations. When using lazy folds on really big lists,
@@ -638,8 +638,8 @@
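A quick sketch of the situation the text is describing — summing a big list with the strict fold so no huge chain of deferred computations builds up (bigSum is a hypothetical name for illustration):

```haskell
import Data.List (foldl')

-- Strict left fold: the accumulator is forced at each step, so summing
-- a million numbers doesn't pile up a million lazy thunks.
bigSum :: Integer
bigSum = foldl' (+) 0 [1..1000000]
```

With the lazy `foldl` the same expression can blow the stack; `foldl'` evaluates as it goes.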

Data.List

By functions that take an ordering function, you usually do compare `on` something.

Data.Char

-

lego char

+lego char

The Data.Char module does what its name suggests. It exports functions that deal with characters. It’s also helpful when filtering and mapping over strings because they’re just lists of
@@ -873,8 +873,8 @@
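Because strings are just lists of characters, Data.Char functions slot straight into map and filter. A small sketch (shout and digitsOnly are hypothetical helper names, not from the module):

```haskell
import Data.Char (toUpper, isDigit)

-- Uppercase every character of a string.
shout :: String -> String
shout = map toUpper

-- Keep only the digit characters.
digitsOnly :: String -> String
digitsOnly = filter isDigit
```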

Data.Map

Just "555-2938"
ghci> findKey "christopher" phoneBook
Nothing
-

legomap

+legomap

Works like a charm! If we have the friend’s phone number, we Just get the number, otherwise we get Nothing.

@@ -984,8 +984,8 @@

Data.Map

fromListWith is a cool little function. It acts like fromList, only it doesn’t discard duplicate keys but it uses a function supplied to it to decide what to
-do with them. Let’s say that a friend can have several numbers and we have
-an association list set up like this.
+do with them. Let’s say that a friend can have several numbers and we
+have an association list set up like this.

phoneBook =
     [("amelia","555-2938")
     ,("amelia","342-2492")
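Here is a runnable sketch of the fromListWith idea the hunk describes, with a shortened association list and a hypothetical helper name (phoneBookToMap); the combining function joins duplicate keys' numbers into one string:

```haskell
import qualified Data.Map as Map
import Data.List (isInfixOf)

phoneBook :: [(String, String)]
phoneBook =
    [("amelia","555-2938")
    ,("amelia","342-2492")
    ,("freddy","205-2928")
    ]

-- fromListWith merges duplicate keys with the supplied function
-- instead of discarding earlier entries.
phoneBookToMap :: [(String, String)] -> Map.Map String String
phoneBookToMap = Map.fromListWith (\n1 n2 -> n1 ++ ", " ++ n2)
```

Looking up "amelia" in the resulting map yields a string containing both of her numbers.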
@@ -1035,8 +1035,8 @@ 

Data.Map

see a complete list of functions in the documentation.

Data.Set

-

legosets

+legosets

The Data.Set module offers us, well, sets. Like sets from mathematics. Sets are kind of like a cross between lists and maps. All the elements in a set are unique. And because they’re internally
@@ -1138,8 +1138,8 @@

Data.Set

lists but as you can see, nub preserves the ordering of the list’s elements, while setNub does not.
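The setNub being contrasted with nub here is a one-liner over Data.Set — a sketch matching the chapter's definition, from memory:

```haskell
import qualified Data.Set as Set

-- Deduplicate via a set: O(n log n) instead of nub's O(n^2),
-- but the result comes out in ascending order, not original order.
setNub :: (Ord a) => [a] -> [a]
setNub xs = Set.toList $ Set.fromList xs
```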

Making our own modules

-

making modules

+making modules

We’ve looked at some cool modules so far, but how do we make our own module? Almost every programming language enables you to split your code up into several files and Haskell is no different. When making programs,
diff --git a/markdown/generated_html/recursion.html b/markdown/generated_html/recursion.html
index 85ad224..426ee35 100644
--- a/markdown/generated_html/recursion.html
+++ b/markdown/generated_html/recursion.html
@@ -33,8 +33,8 @@

Recursion

Hello recursion!

-

SOVIET RUSSIA

+SOVIET RUSSIA

We mention recursion briefly in the previous chapter. In this chapter, we’ll take a closer look at recursion, why it’s important to Haskell and how we can work out very concise and elegant solutions to
@@ -129,8 +129,8 @@

Maximum awesome

maximum' (x:xs) = max x (maximum' xs)

How’s that for elegant! In essence, the maximum of a list is the max of the first element and the maximum of the tail.
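The hunk shows only the final equation of maximum'; the full definition the chapter arrives at is roughly this (reconstructed from the book's approach):

```haskell
-- The maximum of a list is the bigger of its head and the
-- maximum of its tail; a singleton list is its own maximum.
maximum' :: (Ord a) => [a] -> a
maximum' []     = error "maximum of empty list"
maximum' [x]    = x
maximum' (x:xs) = max x (maximum' xs)
```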

-

max

+max

A few more recursive functions

Now that we know how to generally think recursively, let’s implement
@@ -170,8 +170,8 @@

A few more recursive functions

    | n <= 0 = []
take' _ [] = []
take' n (x:xs) = x : take' (n-1) xs
-

painter

+painter

The first pattern specifies that if we try to take a 0 or negative number of elements, we get an empty list. Notice that we’re using _ to match the list because we don’t really care what it is @@ -254,8 +254,8 @@

Quick, sort!

child for Haskell. Therefore, let’s implement it here, even though implementing quicksort in Haskell is considered really cheesy because everyone does it to showcase how elegant Haskell is.

-

quickman

+quickman

So, the type signature is going to be quicksort :: (Ord a) => [a] -> [a]. No surprises there. The edge condition? Empty list, as is expected. A sorted empty
@@ -296,8 +296,8 @@
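The implementation the paragraph is leading up to looks like this — the chapter's quicksort, reproduced from memory, so minor naming details may differ:

```haskell
-- Pick the head as the pivot, sort everything smaller-or-equal
-- and everything bigger recursively, and glue the pieces together.
quicksort :: (Ord a) => [a] -> [a]
quicksort [] = []
quicksort (x:xs) =
    let smallerSorted = quicksort [a | a <- xs, a <= x]
        biggerSorted  = quicksort [a | a <- xs, a > x]
    in  smallerSorted ++ [x] ++ biggerSorted
```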

Quick, sort!

the same function. Eventually, we’ll break it up so much that we reach empty lists and an empty list is already sorted in a way, by virtue of being empty. Here’s an illustration:

-

quicksort

+quicksort

An element that is in place and won’t move anymore is represented in orange. If you read them from left to right, you’ll see the sorted list. Although we chose
@@ -321,8 +321,8 @@

Thinking recursively

element of the list times the product of the rest of the list. The length of a list is one plus the length of the tail of the list. Ekcetera, ekcetera …
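The two recursive definitions just described in words — a product is the head times the product of the tail, a length is one plus the length of the tail — come out like this as code (a sketch in the chapter's prime-suffix naming style):

```haskell
-- The product of an empty list is 1, the multiplicative identity.
product' :: (Num a) => [a] -> a
product' []     = 1
product' (x:xs) = x * product' xs

-- The length of an empty list is 0.
length' :: [a] -> Int
length' []     = 0
length' (_:xs) = 1 + length' xs
```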

-

brain

+brain

Of course, these also have edge cases. Usually the edge case is some scenario where a recursive application doesn’t make sense. When dealing with lists, the edge case is most often the empty list. If you’re
diff --git a/markdown/generated_html/starting-out.html b/markdown/generated_html/starting-out.html
index add4522..675a08c 100644
--- a/markdown/generated_html/starting-out.html
+++ b/markdown/generated_html/starting-out.html
@@ -34,7 +34,7 @@

Starting Out

Ready, set, go!

egg Alright, let’s get started! If
+width="214" height="187" alt="egg"> Alright, let’s get started! If
you’re the sort of horrible person who doesn’t read introductions to things and you skipped it, you might want to read the last section in the introduction anyway because it explains what you need to follow this
@@ -120,7 +120,7 @@

Ready, set, go!

functions that aren’t used with numbers are prefix functions. Let’s take a look at them.

phoen Functions are usually prefix, so
+width="160" height="161" alt="phoen"> Functions are usually prefix, so
from now on we won’t explicitly state that a function is of the prefix form, we’ll just assume it. In most imperative languages, functions are called by writing the function name and then writing its parameters in
@@ -230,8 +230,8 @@

Baby’s first functions

doubleSmallNumber x = if x > 100
                         then x
                         else x*2
-

this is you

+this is you

Right here we introduced Haskell’s if statement. You’re probably familiar with if statements from other languages. The difference between Haskell’s if statement and if statements in imperative languages is that
@@ -270,7 +270,7 @@

Baby’s first functions

interchangeably.

An intro to lists

BUY A DOG Much like shopping lists in
+width="150" height="103" alt="BUY A DOG"> Much like shopping lists in
the real world, lists in Haskell are very useful. It’s the most used data structure and it can be used in a multitude of different ways to model and solve a whole bunch of problems. Lists are SO awesome. In this
@@ -384,8 +384,8 @@

An intro to lists

ghci> init [5,4,3,2,1]
 [5,4,3,2]

If we think of a list as a monster, here’s what’s what.

-

list monster

+list monster

But what happens if we try to get the head of an empty list?

ghci> head []
 *** Exception: Prelude.head: empty list
@@ -463,7 +463,7 @@

An intro to lists

href="modules.html#data-list">later.

Texas ranges

draw What if we want a list of all
+width="200" height="258" alt="draw"> What if we want a list of all
numbers between 1 and 20? Sure, we could just type them all out but obviously that’s not a solution for gentlemen who demand excellence from their programming languages. Instead, we’ll use ranges. Ranges are a way
@@ -533,12 +533,12 @@

Texas ranges

[10,10,10].

I’m a list comprehension

frog If you’ve ever taken a course in
+width="180" height="156" alt="frog"> If you’ve ever taken a course in
mathematics, you’ve probably run into set comprehensions. They’re normally used for building more specific sets out of general sets. A basic comprehension for a set that contains the first ten even natural numbers is
. The part before the pipe is called the output
+alt="set notation">. The part before the pipe is called the output
function, x is the variable, N is the input set and x <= 10 is the predicate. That means that the set contains the doubles of all natural numbers that satisfy the
@@ -638,8 +638,8 @@
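That set comprehension translates almost symbol for symbol into the list comprehension the chapter uses (firstTenEvens is a hypothetical name for the expression):

```haskell
-- { 2x | x ∈ N, x <= 10 } as a Haskell list comprehension.
firstTenEvens :: [Int]
firstTenEvens = [x*2 | x <- [1..10]]
```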

I’m a list comprehension

not in GHCI, it’s better to split longer list comprehensions across multiple lines, especially if they’re nested.

Tuples

-

tuples

+tuples

In some ways, tuples are like lists — they are a way to store several values into a single value. However, there are a few fundamental differences. A list of numbers is a list of numbers. That’s its type and @@ -733,8 +733,8 @@

Tuples

infinite lists:

ghci> zip [1..] ["apple", "orange", "cherry", "mango"]
 [(1,"apple"),(2,"orange"),(3,"cherry"),(4,"mango")]
-

look at meee

+look at meee

Here’s a problem that combines tuples and list comprehensions: which right triangle that has integers for all sides and all sides equal to or smaller than 10 has a perimeter of 24? First, let’s try generating all
diff --git a/markdown/generated_html/syntax-in-functions.html b/markdown/generated_html/syntax-in-functions.html
index 4b84390..d538e02 100644
--- a/markdown/generated_html/syntax-in-functions.html
+++ b/markdown/generated_html/syntax-in-functions.html
@@ -34,8 +34,8 @@
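The solution that triangle problem builds up to can be sketched in one comprehension (reconstructed from the book's approach; the b <= c, a <= b bounds avoid listing the same triangle twice):

```haskell
-- Generate candidate side triples with all sides <= 10, then filter
-- for the Pythagorean condition and a perimeter of 24.
rightTriangles :: [(Int, Int, Int)]
rightTriangles =
    [ (a, b, c) | c <- [1..10], b <- [1..c], a <- [1..b]
    , a^2 + b^2 == c^2
    , a + b + c == 24 ]
```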

Syntax in Functions

Pattern matching

-

four!

+four!

This chapter will cover some of Haskell’s cool syntactic constructs and we’ll start with pattern matching. Pattern matching consists of specifying patterns to which some data should conform and then checking
@@ -255,8 +255,8 @@

Pattern matching

(xs ++ [x,y,z]) or just (xs ++ [x]), but because of the nature of lists, you can’t do that.

Guards, guards!

-

guards

+guards

Whereas patterns are a way of making sure a value conforms to some form and deconstructing it, guards are a way of testing whether some property of a value (or several of them) are true or false. That sounds
@@ -440,8 +440,8 @@

Let it be

let sideArea = 2 * pi * r * h
    topArea = pi * r ^2
in sideArea + 2 * topArea
-

let it be

+let it be

The form is let <bindings> in <expression>. The names that you define in the let part are accessible to the expression after the in part. As you can see, we could have
@@ -516,8 +516,8 @@

Let it be

way, the function body is closer to its name and type declaration and to some that’s more readable.

Case expressions

-

case

+case

Many imperative languages (C, C++, Java, etc.) have case syntax and if you’ve ever programmed in them, you probably know what it’s about. It’s about taking a variable and then executing blocks of code for
diff --git a/markdown/generated_html/types-and-typeclasses.html b/markdown/generated_html/types-and-typeclasses.html
index db74b96..9c965fc 100644
--- a/markdown/generated_html/types-and-typeclasses.html
+++ b/markdown/generated_html/types-and-typeclasses.html
@@ -33,8 +33,8 @@

Types and Typeclasses

Believe the type

-

moo

+moo

Previously we mentioned that Haskell has a static type system. The type of every expression is known at compile time, which leads to safer code. If you write a program where you try to divide a boolean type with
@@ -67,7 +67,7 @@

Believe the type

ghci> :t 4 == 5 4 == 5 :: Bool

bomb Here we see that doing
+width="171" height="144" alt="bomb"> Here we see that doing
:t on an expression prints out the expression followed by :: and its type. :: is read as “has type of”. Explicit types are always denoted with the first letter in capital case.
@@ -158,7 +158,7 @@

Type variables

ghci> :t head
 head :: [a] -> a

box Hmmm! What is this a?
+width="130" height="93" alt="box"> Hmmm! What is this a?
Is it a type? Remember that we previously stated that types are written in capital case, so it can’t exactly be a type. Because it’s not in capital case it’s actually a type variable. That means
@@ -183,8 +183,8 @@

Type variables

different types. It just states that the first component’s type and the return value’s type are the same.

Typeclasses 101

-

class

+class

A typeclass is a sort of interface that defines some behavior. If a type is a part of a typeclass, that means that it supports and implements the behavior the typeclass describes. A lot of people coming
diff --git a/markdown/generated_html/zippers.html b/markdown/generated_html/zippers.html
index ee10f5c..9b8f25c 100644
--- a/markdown/generated_html/zippers.html
+++ b/markdown/generated_html/zippers.html
@@ -30,8 +30,8 @@

Zippers

-

hi im chet

+hi im chet

While Haskell’s purity comes with a whole bunch of benefits, it makes us tackle some problems differently than we would in impure languages. Because of referential transparency, one value is as good as another in
@@ -87,8 +87,8 @@

Taking a walk

) )

And here’s this tree represented graphically:

-

polly says her back hurts

+polly says her back hurts

Notice that W in the tree there? Say we want to change it into a P. How would we go about doing that? Well, one way would be to pattern match on our tree until we find the element
@@ -157,8 +157,8 @@

Taking a walk

subtree, one that allows us to efficiently switch focus to subtrees that are nearby.

A trail of breadcrumbs

-

whoop dee doo

+whoop dee doo

Okay, so for focusing on a subtree, we want something better than just a list of directions that we always follow from the root of our tree. Would it help if we start at the root of the tree and move either
@@ -185,8 +185,8 @@

A trail of breadcrumbs

freeTree and go right and then left:

ghci> goLeft (goRight (freeTree, []))
 (Node 'W' (Node 'C' Empty Empty) (Node 'R' Empty Empty),[L,R])
-

almostthere

+almostthere

Okay, so now we have a tree that has 'W' in its root and 'C' in the root of its left subtree and 'R' in the root of its right subtree. The breadcrumbs are [L,R],
@@ -268,8 +268,8 @@

Going back up

goUp :: (Tree a, Breadcrumbs a) -> (Tree a, Breadcrumbs a)
 goUp (t, LeftCrumb x r:bs) = (Node x t r, bs)
 goUp (t, RightCrumb x l:bs) = (Node x l t, bs)
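The goUp equations above only make sense together with the tree and crumb types and the two downward moves, so here is the whole kit in one self-contained sketch (reconstructed from the chapter's definitions; Eq derived only so the round trip below can be checked):

```haskell
data Tree a = Empty | Node a (Tree a) (Tree a) deriving (Show, Eq)

-- A crumb remembers the parent's value and the subtree we did NOT visit.
data Crumb a = LeftCrumb a (Tree a) | RightCrumb a (Tree a) deriving (Show, Eq)
type Breadcrumbs a = [Crumb a]

goLeft, goRight, goUp :: (Tree a, Breadcrumbs a) -> (Tree a, Breadcrumbs a)
-- Moving down stores a crumb; moving up consumes one to rebuild the parent.
goLeft  (Node x l r, bs) = (l, LeftCrumb  x r : bs)
goRight (Node x l r, bs) = (r, RightCrumb x l : bs)
goUp (t, LeftCrumb  x r : bs) = (Node x t r, bs)
goUp (t, RightCrumb x l : bs) = (Node x l t, bs)
```

Note that goLeft and goUp are still partial here — stepping left off an Empty tree or up from the root crashes, which is exactly the problem the "Watch your step" section fixes with Maybe.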
-

asstronaut

+asstronaut

We’re focusing on the tree t and we check what the latest Crumb is. If it’s a LeftCrumb, then we construct a new tree where our tree t is the left subtree
@@ -368,8 +368,8 @@

Focusing on lists

href="making-our-own-types-and-typeclasses.html#recursive-data-structures">implemented our own lists, we defined our data type like so:

data List a = Empty | Cons a (List a) deriving (Show, Read, Eq, Ord)
-

the best damn thing

+the best damn thing

Contrast this with our definition of our binary tree and it’s easy to see how lists can be viewed as trees where each node has only one subtree.

@@ -479,8 +479,8 @@

A very simple file system

]

That’s actually what my disk contains right now.

A zipper for our file system

-

spongedisk

+spongedisk

Now that we have a file system, all we need is a zipper so we can zip and zoom around it and add, modify and remove files as well as folders. Like with binary trees and lists, we’re going to be leaving breadcrumbs
@@ -540,8 +540,8 @@

A zipper for our file system

focuses on the file with the given name. That file has to be in the current focused folder. This function doesn’t search all over the place, it just looks at the current folder.

-

wow cool great

+wow cool great

First we use break to break the list of items in a folder into those that precede the file that we’re searching for and those that come after it. If you remember, break takes a
@@ -617,8 +617,8 @@

Watch your step

subtree:

goLeft :: Zipper a -> Zipper a
 goLeft (Node x l r, bs) = (l, LeftCrumb x r:bs)
-

falling for you

+falling for you

But what if the tree we’re stepping off from is an empty tree? That is, what if it’s not a Node, but an Empty? In this case, we’d get a runtime error because the pattern match would fail
diff --git a/markdown/generated_md/chapters.md b/markdown/generated_md/chapters.md
index 1c9b6bf..cd9051d 100644
--- a/markdown/generated_md/chapters.md
+++ b/markdown/generated_md/chapters.md
@@ -89,5 +89,5 @@
* [A very simple file system](zippers.html#a-very-simple-file-system)
* [Watch your step](zippers.html#watch-your-step)
-This work is licensed under a [Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License](https://creativecommons.org/licenses/by-nc-sa/3.0/){rel=license rel=nofollow} because I couldn't find a license with an even longer name.
+This work is licensed under a [Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License](https://creativecommons.org/licenses/by-nc-sa/3.0/){rel=license} because I couldn't find a license with an even longer name.
diff --git a/markdown/source_md/chapters_foot.md b/markdown/source_md/chapters_foot.md
index 939def4..1818275 100644
--- a/markdown/source_md/chapters_foot.md
+++ b/markdown/source_md/chapters_foot.md
@@ -1,3 +1,3 @@
-This work is licensed under a [Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License](https://creativecommons.org/licenses/by-nc-sa/3.0/){rel=license rel=nofollow} because I couldn't find a license with an even longer name.
+This work is licensed under a [Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License](https://creativecommons.org/licenses/by-nc-sa/3.0/){rel=license} because I couldn't find a license with an even longer name.
diff --git a/markdown/source_md/faq.md b/markdown/source_md/faq.md
index d7cd10d..f69c3c3 100644
--- a/markdown/source_md/faq.md
+++ b/markdown/source_md/faq.md
@@ -4,7 +4,7 @@
## Can I put this tutorial on my site or change it or whatever?
-Sure, it's licensed under a [creative commons](https://creativecommons.org/licenses/by-nc-sa/3.0/){rel=nofollow} license, so you can share and change this, as long as you do it with a smile on your face and for non-commercial purposes.
+Sure, it's licensed under a [creative commons](https://creativecommons.org/licenses/by-nc-sa/3.0/){rel=license} license, so you can share and change this, as long as you do it with a smile on your face and for non-commercial purposes.

## Do you recommend any other Haskell reading material?

diff --git a/markdown/source_md/input-and-output.md b/markdown/source_md/input-and-output.md
index 4a03e2f..422bc5a 100644
--- a/markdown/source_md/input-and-output.md
+++ b/markdown/source_md/input-and-output.md
@@ -897,7 +897,7 @@ Same goes for writing to the terminal, it's kind of like writing to a file.
We can call these two files `stdout` and `stdin`, meaning *standard output* and *standard input*, respectively.
Keeping that in mind, we'll see that writing to and reading from files is very much like writing to the standard output and reading from the standard input.
-We'll start off with a really simple program that opens a file called *girlfriend.txt*, which contains a verse from Avril Lavigne's #1 hit *Girlfriend*, and just prints it out to the terminal.
+We'll start off with a really simple program that opens a file called *girlfriend.txt*, which contains a verse from Avril Lavigne's #1 hit *Girlfriend*, and just prints it out to the terminal.
Here's *girlfriend.txt*:

```{.plain}
diff --git a/markdown/source_md/introduction.md b/markdown/source_md/introduction.md
index 1c3c694..561697e 100644
--- a/markdown/source_md/introduction.md
+++ b/markdown/source_md/introduction.md
@@ -15,7 +15,7 @@ So this is an attempt at adding another useful resource for learning Haskell so
![bird](assets/images/introduction/bird.png){.left width=230 height=192}
-This tutorial is aimed at people who have experience in imperative programming languages (C, C++, Java, Python …) but haven't programmed in a functional language before (Haskell, ML, OCaml …).
+This tutorial is aimed at people who have experience in imperative programming languages (C, C++, Java, Python ...) but haven't programmed in a functional language before (Haskell, ML, OCaml ...).
Although I bet that even if you don't have any significant programming experience, a smart person such as yourself will be able to follow along and learn Haskell.
The channel #haskell on the Libera.Chat network is a great place to ask questions if you're feeling stuck.
@@ -25,7 +25,7 @@ I failed to learn Haskell approximately 2 times before finally grasping it because
But then once it just "clicked" and after getting over that initial hurdle, it was pretty much smooth sailing.
I guess what I'm trying to say is: Haskell is great and if you're interested in programming you should really learn it even if it seems weird at first.
Learning Haskell is much like learning to program for the first time --- it's fun!
-It forces you to think differently, which brings us to the next section …
+It forces you to think differently, which brings us to the next section ...
## So what's Haskell? {#so-whats-haskell}
diff --git a/markdown/source_md/starting-out.md b/markdown/source_md/starting-out.md
index 9a989ab..94ffa50 100644
--- a/markdown/source_md/starting-out.md
+++ b/markdown/source_md/starting-out.md
@@ -345,7 +345,7 @@ ghci> [9.4,33.2,96.2,11.2,23.25] !! 1
But if you try to get the sixth element from a list that only has four elements, you'll get an error so be careful!
Lists can also contain lists.
-They can also contain lists that contain lists that contain lists …
+They can also contain lists that contain lists that contain lists ...
```{.haskell: .ghci}
ghci> b = [[1,2,3,4],[5,3,3,3],[1,2,2,3,4],[1,2,3]]
@@ -698,7 +698,7 @@ ghci> [ x*y | x <- [2,5,10], y <- [8,10,11], x*y > 50]
[55,80,100,110]
```
-How about a list comprehension that combines a list of adjectives and a list of nouns … for epic hilarity.
+How about a list comprehension that combines a list of adjectives and a list of nouns ... for epic hilarity.
```{.haskell: .ghci}
ghci> nouns = ["hobo","frog","pope"]
diff --git a/markdown/source_md/types-and-typeclasses.md b/markdown/source_md/types-and-typeclasses.md
index 0822673..3a02965 100644
--- a/markdown/source_md/types-and-typeclasses.md
+++ b/markdown/source_md/types-and-typeclasses.md
@@ -92,7 +92,7 @@ It's used for whole numbers.
`Int` is bounded, which means that it has a minimum and a maximum value.
Usually on 32-bit machines the maximum possible `Int` is 2147483647 and the minimum is -2147483648.
-`Integer`{.label .type} stands for, er … also integer.
+`Integer`{.label .type} stands for, er ... also integer.
The main difference is that it's not bounded so it can be used to represent really really big numbers.
I mean like really big.
`Int`, however, is more efficient.
@@ -163,7 +163,7 @@ This is much like generics in other languages, only in Haskell it's much more po
Functions that have type variables are called **polymorphic functions**.
The type declaration of `head` states that it takes a list of any type and returns one element of that type.
-Although type variables can have names longer than one character, we usually give them names of a, b, c, d …
+Although type variables can have names longer than one character, we usually give them names of a, b, c, d ...
Remember `fst`?
It returns the first component of a pair.