In Part 1, we started out with just the operation of functional composition, and derived the Monoids: composable functions of a single input and output parameter type.
In Part 2, we generalised these, by allowing as much type variation as practicable. Retaining only the stricture that a given function's input be obtainable from the amplified output of its predecessor, we got ourselves the Monads.
In Part 3, we "discovered" the relationships between the monadic bind operation, the SelectMany method of LINQ, and C#'s way cool Query Comprehension from and select syntax.
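To recap that last point concretely, here's a minimal sketch of the correspondence. The `Maybe<T>` type and its members below are my own illustrative construction, not part of the BCL; the only real machinery is that the C# compiler translates a multi-`from` query into a call to whatever suitably shaped `SelectMany` it can find.

```csharp
using System;

// A minimal Maybe<T> "amplified type" (hypothetical; not part of the BCL).
public class Maybe<T>
{
    public bool HasValue { get; private set; }
    public T Value { get; private set; }

    private Maybe() { }
    private Maybe(T value) { HasValue = true; Value = value; }

    public static Maybe<T> Just(T value) { return new Maybe<T>(value); }
    public static readonly Maybe<T> Nothing = new Maybe<T>();
}

public static class MaybeExtensions
{
    // Monadic bind, named and shaped so that query syntax can find it.
    public static Maybe<TResult> SelectMany<T, TCollection, TResult>(
        this Maybe<T> source,
        Func<T, Maybe<TCollection>> bind,
        Func<T, TCollection, TResult> project)
    {
        if (!source.HasValue) return Maybe<TResult>.Nothing;
        var inner = bind(source.Value);
        if (!inner.HasValue) return Maybe<TResult>.Nothing;
        return Maybe<TResult>.Just(project(source.Value, inner.Value));
    }
}

class Demo
{
    static void Main()
    {
        var result =
            from x in Maybe<int>.Just(6)
            from y in Maybe<int>.Just(7)
            select x * y;           // compiled into a SelectMany call
        Console.WriteLine(result.HasValue ? result.Value.ToString() : "Nothing");
        // prints 42
    }
}
```

If either `from` clause yields `Nothing`, the whole comprehension short-circuits to `Nothing` without the `select` ever running: exactly the monadic bind behaviour from Part 2, dressed in query clothing.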
There are a couple of candidates for Part 4. One article, on parsers (another favourite monad example), is almost complete. Another, on continuations, is also on the starting blocks. To be honest, the subject of continuations is of much more interest to people working in areas like language compiler design, and other more esoteric, academic research, than to us average app-writing Joes. On the other hand, continuations certainly provide the best available proof that the applications of monads aren't confined to the "amplified data" related contexts we've looked at so far.
So I reckon it'll probably be parsers next, with continuations making an appearance after them. Meantime however, while I'm polishing off my parsers, I thought we might stop for a little bit of perspective.
What's The Point?
The earlier articles in this series have all been repeatedly revised, and usually extended, since their initial publication. The reason for this has been the absolute failure of people to stop asking me, "What's the point of this functional stuff?" Or, to rephrase that: my own personal failure to provide sufficient background.
I thought I was being clever! - revealing just enough about the functional motivation to cover each item (monoid, monad, or comprehension) as it emerged. In my experience, exposing a procedural or object-oriented programmer to even the very jagged word "Haskell" alone would send them running to the bar to order the next round. And while that behaviour has its own undeniable appeal (and obvious applications), it doesn't address the issues that functional techniques do.
So, in a nutshell:
- To make efficient use of multiple cores, we must parallel program.
- Threading is difficult.
- But multiple cores are not the future: they're here now. One week ago, Intel revealed the Knights Corner, pictured above: a fifty-core, 22nm processor. Try to program that without parallelism, and your code will never use more than 2% of the available hardware. Do you consider that acceptable?
- The big, error-prone problem with parallel code is the controlled synchronisation of state.
- Functional techniques sidestep this issue extremely effectively, quite simply by avoiding mutable state.
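As a small illustration of that last bullet, here's a sketch using .NET's PLINQ. Because the lambda below is a pure function of its input - it reads and writes no shared variables - the runtime is free to spread the work across however many cores are available, with no locks anywhere in our code:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // A pure function: its output depends only on its input.
        Func<int, long> square = n => (long)n * n;

        // No mutable shared state means PLINQ can partition the range
        // across all available cores without any synchronisation on our part.
        long total = Enumerable.Range(1, 1000)
                               .AsParallel()
                               .Select(square)
                               .Sum();

        Console.WriteLine(total); // prints 333833500
    }
}
```

Try writing the equivalent with a shared accumulator variable and explicit threads, and you're immediately into locks, races, and all the synchronisation hazards the bullet points above describe.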
Monadic design quickly comes into this equation. We now need to rediscover how to program an entire swathe of tasks that had previously made apparently irreducible use of mutable state. These situations include input, output, and concurrency; synchronous and asynchronous exceptions; and interfacing with libraries written in other languages, functional or otherwise. Haskell (and most other functional) programmers know these as "the awkward squad", and are fully conversant with their common solution, the I/O monad.
Back On The C# Track
So there it is, that's how I see our immediate multicore future panning out. This little series will continue for a bit, attempting to introduce a procedural, object-oriented audience to functional techniques, using the methods and facilities that have recently become available in C#.
We used to claim that object-oriented programming could be done in almost any language. In fact I once proved this to myself, many decades back, by writing a commercial ATE system (Automated Test Equipment) in a purely object-oriented style, using nothing but the Microsoft procedural GW-Basic interpreter. Yes, seriously! The result was extremely satisfying, and an undeniable air of maintainability exuded from its conventions, when compared with operationally similar procedural scripts from other jobs. But it was also notably verbose, tedious in the extreme to write, and certainly could not be said to represent an experience which I would ever wish to repeat.
Today, we seem to be saying that functional coding can be practised in certain suitably enhanced O-O languages, such as C#. And it's equally true. It's just that you often have to jump through some rather ugly syntactic hoops, to arrive at an implementation that would be infinitely more tidy, elegant and pleasing to the eye and brain, if executed using a purpose-designed and built, function-oriented language.
Yes, it's fascinating that monadic C# functions can be composed just as readily as monoidal ones, despite the apparent impedance mismatch between their inputs and outputs. And yes, it's initially impressive to see these constructions massaged into (query) comprehension syntax, certainly an improvement on fluent method chaining. But that only goes so far, and even what it does achieve is seen by many as nothing more than a misuse of a comprehension syntax originally designed exclusively for SQL-style queries, rather than fully generalised relational domains.
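To make that composition contrast concrete, here's a small sketch. All the names below (`Then`, `Bind`, `SafeRoot`, `SafeReciprocal100`) are mine, purely illustrative, with `int?` standing in as the amplified type: a monoidal `Then` glues `int -> int` functions directly, while the monadic `Bind` bridges the mismatch between a plain `int` input and the previous step's amplified `int?` output.

```csharp
using System;

public static class KleisliDemo
{
    // Monoidal composition: int -> int functions join with no fuss.
    public static Func<int, int> Then(Func<int, int> f, Func<int, int> g)
    {
        return x => g(f(x));
    }

    // Monadic (Kleisli) composition over the amplified type int?:
    // the second function's plain int input is extracted, when present,
    // from the first function's amplified int? output.
    public static Func<int, int?> Bind(Func<int, int?> f, Func<int, int?> g)
    {
        return x =>
        {
            var y = f(x);
            return y.HasValue ? g(y.Value) : null;
        };
    }

    public static int? SafeRoot(int n)
    {
        return n >= 0 ? (int?)(int)Math.Sqrt(n) : null;
    }

    public static int? SafeReciprocal100(int n)
    {
        return n != 0 ? (int?)(100 / n) : null;
    }

    static void Main()
    {
        var inc2 = Then(x => x + 1, x => x * 2);
        Console.WriteLine(inc2(3));            // prints 8

        var both = Bind(SafeRoot, SafeReciprocal100);
        Console.WriteLine(both(25));           // sqrt 25 = 5, then 100/5: prints 20
        Console.WriteLine(both(-1) == null);   // first step fails: prints True
    }
}
```

The failure case is the interesting one: `both(-1)` never calls `SafeReciprocal100` at all, because `Bind` short-circuits on the empty amplified value, which is precisely the plumbing that the query comprehension syntax hides from view.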
There are strict and severe limits to what can be accomplished with ad hoc functional extensions to O-O languages generally, and C# in particular. For that reason, I give this series two, maybe three further articles at most, before its inevitable defection to the wonders of F#.
Update: Eric White has just published Recursive Descent Parser: A Simple Grammar, which is part 3 of his series on "using LINQ to write a recursive-descent parser for SpreadsheetML formulas" [sic]. I'm going to wait until the next two parts come out, as there seems to be every possibility that Eric is about to cover exactly the same ground as (and who knows, maybe even do it a little better than?) me.