For a long time, one of the things I didn't understand was the monad. I read quite a few tutorials about it and gained working knowledge. But, I still wasn't satisfied. I wasn't satisfied because I felt they were missing something fundamental. Almost as if playing Taboo. I didn't know what it was either. So, until recently, the discomfort remained.
En route to enlightenment I came across the works of Gottfried Wilhelm Leibniz. Aside from co-inventing calculus and devising the notation we still use today, he also coined the term "monad" and built the philosophy known as Monadology around it. Within eight bullets of the text, it clicked in my mind and I felt enlightened about monads.
Leibniz describes monads as the true atoms: indivisible particles that serve as components for larger things. Monads are not altered by externalities; the only things that modify them are internal processes. They also have qualities, and they differ from each other based on those qualities. This is just to pick out the relevant basics for this discussion.
Let's consider how Haskell defines the monad. Any type that supports the following operations from the Monad class is a monad:
(>>=) :: forall a b. m a -> (a -> m b) -> m b
(>>) :: forall a b. m a -> m b -> m b
The first operator takes a monadic value and a function. It "unravels" the monad and passes the value inside it to the function. The function then returns a new monadic value, which the operator returns as well. The second operator is similar, except that it takes a second monadic value directly and discards the result of the first: `a >> b` behaves like `a >>= \_ -> b`. So, that's it, anything implementing this will be a monad. The question is, how does this stem from Leibniz's work?
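To see the two operators in action on a concrete monad, here is a small sketch using the familiar Maybe monad from the Prelude. The helper `safeDiv` and the names `chained` and `sequenced` are mine, invented for this illustration only.

```haskell
-- A hypothetical helper: division that fails gracefully in the Maybe monad.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- (>>=) "unravels" the value inside the first monad and feeds it onward.
chained :: Maybe Int
chained = safeDiv 10 2 >>= \x -> safeDiv x 1   -- Just 5

-- (>>) runs the first action only for its effect and discards its result.
sequenced :: Maybe Int
sequenced = safeDiv 10 2 >> safeDiv 8 4        -- Just 2

main :: IO ()
main = do
  print chained
  print sequenced
```

If either step produced `Nothing`, the whole compound would be `Nothing`; that decision is made internally by Maybe's `(>>=)`, with no outside intervention.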
These operators are the connectors, the glue that holds these monad particles together to form a composite substance. In Haskell we've provided two monads to the operator: one as the first parameter and another as the return value of the function in the second parameter. Neither monad can be split into pieces or affected by any externalities. They only expose their qualities through the parameters that we give them, and all the decisions remain internal. Notice that we've not done anything to execute the monads; we've merely connected them.
Let's look at an example. Below is code that will ask for my name and print out a "Hello" string with my name in it. Quite simple.
(putStr "What's your name? ") >>
(getLine) >>=
(\x -> putStr $ "Hello " ++ x ++ "\n")
I highlight the monads by surrounding them with parentheses. There are three: one, print a question; two, get the response; and three, print a string with the response. Each of those items is complete in itself, but together they make up a compound thing that consists of the three monads. The compound is the thing that has the quality of performing all of the operations and nothing else.
This is a simple compound that logically can seem like a chain of events; however, it can be an arbitrary graph, with loops and with decisions made internally based on the inputs you provide to the compound. To execute the compound, you merely provide input (empty, in this case) and ask for the output. In order for the compound to produce the output, the "Hello" string, it must first get a line. But in order to get a line, it must first print a string, working backwards through the chain.
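As a sketch of such an internal decision, the greeting can branch on the input it receives. The helper `reply` and its wording are my own, not part of the original example; the decision lives in a pure function, while the compound merely connects it to the input.

```haskell
-- A pure, hypothetical helper: the decision is made internally,
-- based only on the value handed to it.
reply :: String -> String
reply name
  | null name = "Hello, stranger!\n"
  | otherwise = "Hello " ++ name ++ "!\n"

-- The compound branches on whatever getLine delivers at run time.
compound :: IO ()
compound =
  putStr "What's your name? " >>
  getLine >>= \name ->
  putStr (reply name)

main :: IO ()
main = putStr (reply "Leibniz")  -- exercising the pure decision directly
```

Note that `compound` is defined but never executed here; it remains an inert value until something demands its output.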
This is an incredibly powerful concept, because your program does nothing until you start demanding output. This means that you can pass these compounds around without evaluating them until you are ready. I could send this example to any function that accepts such a compound and have it executed, or attached to other compounds.
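A minimal sketch of that idea, with the hypothetical names `greetings` and `runAll`: the actions are built and stored as ordinary values, and nothing prints until we finally demand it.

```haskell
-- Compounds are ordinary values: build them now, run them later (or never).
greetings :: [IO ()]
greetings =
  [ putStr "Hello "
  , putStr "monadic "
  , putStr "world\n"
  ]

-- Nothing above has executed yet; sequence_ merely glues the actions
-- together into one larger compound.
runAll :: [IO ()] -> IO ()
runAll = sequence_

main :: IO ()
main = runAll greetings   -- only now is the output demanded
```

We could just as easily have passed `greetings` to some other function, reordered it, or attached further actions before ever running it.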
Here's another person not happy about existing monad tutorials: Stephen Diehl. I'm very glad that people are writing those tutorials, but I believe this is the core that every monad tutorial should start with. All the practical applications seem so much simpler once you understand the philosophy behind the monad.