Sunday, 24 November 2013

Why we write.

For no particular reason, a thought kept bouncing around: why do people write? Let's put on the logical hat and make a list. The reasons below are fundamental ones, each containing many more detailed reasons; these specific ones are listed because the author felt they should not be generalized any further.

Memorialization seems to be the easiest reason and arguably the biggest reason. We want to remember things. Something happened? Write it down for future reference. Thought of a great idea? Write it down.

This, however, can be a tricky one due to all sorts of biases involved. An event happens and time passes until it is written down. With time the accuracy degrades along some decay function. Then there is the problem of biases and points of view: the recorded event is at best only as accurate as the author's observation. The author's skill in describing the events can also bring the accuracy into question. Elizabeth Loftus talks about the creation of false memories, which involves no malicious intent but throws even more uncertainty into the mix.

Maintaining trust in the record is an incredibly difficult undertaking. My theory is that most of us just close our eyes and pretend everything is OK unless something obvious stands out. Of course, as a society we have put in place various means of alleviating the problem of trust: references, language standards and peer reviews. All of these reduce to some form of trust in a person. They probably work well, assuming that most people are not malicious in nature.
Organization is an easy one as well. With thousands of things happening all at once, there is a good chance you can't keep track of them all. This is probably closely tied to memorialization but with a different purpose.

It is a lot easier to trust the accuracy of this type of writing because the entities being described have either been documented somewhere else (shifting the validation away from the writing in question) or they are ideas created by the author. Ideas created by the author can be assumed to be 100% accurate because the writing in question is the first instance where the idea enters the world. The only other place the idea exists is the author's head, against which we cannot compare.
Discussion with yourself or others. This one is not so obvious, at least not until one thinks about the question. We write letters to discuss things with others. But, we also write diaries and notes to keep track of what we've thought of in order to follow the steps of logic. Discussion is very similar to organization, as in organization of thought. However, it deserves its own mention due to the difference of intent.

The intent in writing for the sake of discussion is to show a trail of thought. Probably everyone can remember that C follows B follows A, but what if there are 30 steps? That requires writing things down, perhaps with the author as the only audience. For example, one of the purposes of this blog is to help the author organize his thoughts on experimentation.
Art: some people like to write for the sake of writing. Something about the word play drives people to come up with elaborate combinations that have nothing but artistic value.

In almost every case a piece of writing will contain several of these forms. In some cases the art is pervasive through the entire piece, while in others the forms occupy mutually exclusive parts of it. When the author puts their art into the writing, the readers enjoy it more.

Friday, 10 May 2013

Then I finally understood Monads

Haskell is a fascinating language. It is a clearly imperfect culmination of years of careful research by the programming languages community. There are many things I like about it: currying, the functional style, the type system, etc. There are also many things I don't fully understand, mostly due to lack of experience.

For a long time, one of the things I didn't understand was the monad. I read quite a few tutorials about it and gained working knowledge. But, I still wasn't satisfied. I wasn't satisfied because I felt they were missing something fundamental. Almost as if playing Taboo. I didn't know what it was either. So, until recently, the discomfort remained.

En route to enlightenment I came across the works of Gottfried Wilhelm Leibniz. Aside from inventing calculus and defining the symbols we still use today, he is also the inventor of the monad and the subsequent philosophy known as Monadology. Within eight bullets of the text, it clicked in my mind and I felt enlightened about monads.

Leibniz talks of monads as the true atoms because these are the particles that are indivisible and serve as components for larger things. Monads are not altered by externalities, rather the only thing that modifies them are internal processes. Also, they have qualities and they differ from each other based on those qualities. This is just to pick out the relevant basics for this discussion.

Let's consider how Haskell defines the monad. A monad is anything that supports the following functions from the Monad class:

  (>>=)       :: forall a b. m a -> (a -> m b) -> m b
  (>>)        :: forall a b. m a -> m b -> m b 

The first operator takes a monad and a function. It "unravels" the monad and passes the inner value to the function. The function then returns a monad, which the operator returns as well. The second operator is similar except that it discards the inner value: instead of a function, it simply takes the next monad. So, that's it, anything implementing this will be a monad. The question is, how does this stem from Leibniz's work?
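As a quick sanity check, here is a minimal sketch showing that the second operator can be derived from the first by ignoring the unravelled value (the name thenDiscard is made up for illustration):

```haskell
-- (>>) derived from (>>=): run the first computation,
-- ignore its result, continue with the second.
thenDiscard :: Monad m => m a -> m b -> m b
thenDiscard m k = m >>= \_ -> k

-- With Maybe as the monad, Nothing short-circuits the chain.
demo1 :: Maybe Int
demo1 = Just 1 `thenDiscard` Just 2   -- Just 2

demo2 :: Maybe Int
demo2 = Nothing `thenDiscard` Just 2  -- Nothing
```

With Maybe as the monad, all the "decisions" stay internal: a Nothing anywhere in the chain quietly stops the rest.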

These operators are the connectors, the glue that keeps these monad particles together to form a composite substance. In Haskell we've provided two monads to the operator: one as the first parameter and another as the return value of the function in the second parameter. Neither monad can be split into pieces or affected by any externalities. They expose their qualities only through the parameters we give them, while all the decisions remain internal. Notice that we've not done anything to execute the monads, we've merely connected them.

Let's look at an example. Below is code that will ask for my name and print out a "Hello" string with my name in it. Quite simple.

  (putStr "What's your name? ") >> 
     (getLine) >>= 
       (\x -> putStr $ "Hello " ++ x ++ "\n")

I highlight the monads by surrounding them with parentheses. There are three: one, print a question; two, get the response; and three, print a string with the response. Each of those items is complete in itself, but together they make up a compound thing that consists of the three monads. The compound is the thing that has the quality of performing all of the operations and nothing else.

This is a simple compound that logically seems like a chain of events, but it can be an arbitrary graph, with loops and decisions made internally based on the inputs you provide to the compound. To execute the compound, you merely provide input (empty, in this case) and ask for the output. In order for the compound to produce the output - the "Hello" string - it must first get a line. But in order to get a line, it must first print a string. The chain is followed backwards.

This is an incredibly powerful concept because, actually, your program is not doing anything until you start demanding output. This means that you can pass around these compounds without evaluating until you are ready. Potentially I can send this example to any function that accepts monads and have it executed or attached to other compounds.
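A small self-contained sketch of that idea (an IORef stands in for stdin so it runs without input; the names are illustrative): the compound is bound to a name, passed around as an ordinary value, and only runs when demanded - here, twice.

```haskell
import Data.IORef

-- greet builds a compound of connected actions; nothing runs
-- until someone executes it.
greet :: IORef String -> IO ()
greet nameRef =
  readIORef nameRef >>= \name ->
    putStr ("Hello " ++ name ++ "\n")

main :: IO ()
main = do
  ref <- newIORef "world"
  let compound = greet ref  -- merely connected, not yet executed
  compound                  -- executed on demand...
  compound                  -- ...and it can run again
```

Note that `let compound = greet ref` performs no I/O at all; the action is just a value until it appears as a statement to be run.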

Here's another person not happy about existing monad tutorials: Stephen Diehl. I'm very happy that people are writing those tutorials but I believe that this is the core that every monad tutorial should start with. All the practical applications just seem so much simpler with the understanding of the philosophy behind the monad.

Tuesday, 23 April 2013

More on CSP

CSP is an interesting language because it lets one model potentially complex states through fairly simple means. As we saw in the last entry, you simply have to list how events follow each other. By building sequences of events you're actually drawing a state transition diagram. In such a diagram the transition is the event. For example, here's a diagram of the Blog process we defined earlier:

Each bubble represents a process: Blog1 = writeup -> post -> Blog, and so on. In this case we just have one chain of events. It is also possible to have multiple chains of events, so we can define Blog as follows:

  Blog = idea -> (writeup -> post -> Blog
                  [] talk -> writeup -> post -> Blog)

In this case there is a concept of choice. Once I have an idea I can either talk about the idea or I can immediately write it up. The notation [] says that the choice is external, meaning that it is not up to this process to decide which path to go down. CSP does not care; it is there to represent all paths. So, a decision can be made to talk if I have a friend around. If not, then I go straight to writing up my idea. I can also make it an internal decision with the |~| symbol.

Let's define a talkative friend:

  Friend = enterhouse -> talk -> exithouse -> Friend

Notice that after exiting the house he's still my friend. Friend and blogging interact through talking which we, of course, represent in the following process:

  Interaction = Friend [| talk |] Blog

Again, this is a very powerful concept of process composition. This interaction essentially says that the Friend cannot talk until Blog is able to talk. Because our processes are recursive, it is possible that posts will be made without talk ever happening. A quiet world - how nice. The Interaction process can be represented through a block diagram like this:

It seems so simple, but actually there is a lot being abstracted behind the scenes should this be implemented. Those are two processes that have to wait until both are ready to perform the talk event, so there has to be synchronization going on. To me, the mind-warping experience happened when I imagined that this is implemented in hardware and talk is actually a wire connecting two components. It should also be noted that talk remains an external event - perhaps a spider needs to jump out for the conversation to begin.

For those who have spent many years programming, the tendency is to think of talk as an event produced by Friend and an event produced by Blog. It seems that the two processes are sending things to each other, which raises the question of what happens if both produce talk at the same time. I struggled with that idea for a while, until I realized that I was thinking about it all wrong. It seems so obvious now. In this system of representing concurrency we do not consider events happening at the same time; rather, the processes put themselves into a state where they are ready to accept events. This means that there is always some sequence of events that the system must be able to handle. CSP is the language that allows the user to consider every possible sequence of events to check the constructed model.
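To make "every possible sequence of events" concrete, here is a toy trace enumerator in Haskell - a sketch, not real CSP semantics: events are plain strings, Choice stands for the external choice [], and parTraces enumerates the traces of P [| a |] Q up to a depth bound.

```haskell
-- A toy trace model of the processes above.
data Proc = Stop
          | Prefix String Proc   -- event -> P
          | Choice Proc Proc     -- external choice []

-- The events a process is immediately ready to perform,
-- paired with the process that follows each one.
steps :: Proc -> [(String, Proc)]
steps Stop         = []
steps (Prefix e p) = [(e, p)]
steps (Choice p q) = steps p ++ steps q

-- One step of P [| a |] Q: events in a must happen jointly,
-- everything else interleaves.
parSteps :: [String] -> Proc -> Proc -> [(String, (Proc, Proc))]
parSteps a p q =
     [ (e, (p', q')) | (e, p') <- steps p, e `elem` a
                     , (e', q') <- steps q, e' == e ]
  ++ [ (e, (p', q)) | (e, p') <- steps p, e `notElem` a ]
  ++ [ (e, (p, q')) | (e, q') <- steps q, e `notElem` a ]

-- Every trace of the composition, up to a depth bound.
parTraces :: Int -> [String] -> Proc -> Proc -> [[String]]
parTraces 0 _ _ _ = [[]]
parTraces n a p q =
  [] : [ e : t | (e, (p', q')) <- parSteps a p q
               , t <- parTraces (n - 1) a p' q' ]

friend, blog :: Proc
friend = Prefix "enterhouse" (Prefix "talk" (Prefix "exithouse" friend))
blog   = Prefix "idea"
           (Choice (Prefix "writeup" (Prefix "post" blog))
                   (Prefix "talk" (Prefix "writeup" (Prefix "post" blog))))
```

Checking that ["enterhouse","idea","talk"] is in parTraces 4 ["talk"] friend blog confirms the joint talk event, while ["enterhouse","talk"] is absent: the Friend really cannot talk until Blog is ready. A quiet ["idea","writeup","post"] is there too.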

Tuesday, 9 April 2013

Communicating Sequential Processes

CSP for short. It's a language for describing processes. For example:

   Blog = idea -> writeup -> post -> Blog

is a process that describes how one writes posts. Notice that this says nothing about how things happen. All this shows is the sequence of events. An important feature of CSP is the ability to compose processes. Imagine that you do your thinking with tea:

 Thinking = tea -> writeup -> Thinking

After you had some tea, you do your idea write up and then go back to drinking tea. To show that you drink tea while writing blogs the two processes can be combined:

 Blogging = Thinking [| writeup |] Blog

This becomes a new process where the events idea, tea and post each happen in a defined sequence but independently of the other process. However, writeup must be done in synchronization. This means that once an idea happens, the Blog process will effectively block and wait until process Thinking arrives at event writeup.
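As a loose analogy (threads and MVars, not CSP semantics; all the names are made up), that blocking can be sketched as a rendezvous between two Haskell threads:

```haskell
import Control.Concurrent

main :: IO ()
main = do
  ready    <- newEmptyMVar   -- Blog: "I have reached writeup"
  done     <- newEmptyMVar   -- Thinking: "writeup is finished"
  finished <- newEmptyMVar   -- Blog has printed its last event
  putStrLn "tea"             -- Thinking runs in the main thread
  _ <- forkIO $ do           -- the Blog process
         putStrLn "idea"
         putMVar ready ()    -- arrive at writeup and wait
         takeMVar done
         putStrLn "post"
         putMVar finished ()
  takeMVar ready             -- Thinking blocks until Blog is ready
  putStrLn "writeup"         -- the shared event happens once
  putMVar done ()
  takeMVar finished
```

The MVars enforce a happens-before order between the two threads, so the events always come out as tea, idea, writeup, post.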

Being able to compose processes is a very powerful concept. Visually each of the processes can be represented in a state transition diagram. Obviously, things get a lot more complex once you try to put them together even for something as simple as Blogging.

There are some advanced tools that help with writing CSP, such as FDR (Failures, Divergences and Refinements) and ProBE. Also, Using CSP is a great but dense book that covers just about everything a beginning user will need.

Saturday, 6 April 2013


Posterous, I'm sad to see you go. We've had a rare few encounters, but sadly you decided to close your doors. It was a real pleasure, however. My favorite feature was the blog-by-email. I know that Blogger has it too, but it's not quite as good because I have no chance of remembering the special email address.

I've had several attempts with blogging, but generally I find it strange to write something without a specific audience. On the other hand, I need a creative outlet - one of which is writing. Do you have any suggestions on blog writing?

I will be using this blog in an attempt to organize my thoughts. So, if you do follow this, then expect to see things like tutorials and explanations of disparate things that will - hopefully - become coherent at some point.