Concurrent/Parallel Programming - The Next Generation
Thursday, January 5, 2006
"But the bottom line is, for application programmers, don't get into this space unless you're pretty sure you know what you're doing and you're willing to wrestle with some seriously difficult debugging."

Or Bill de hÓra:
"Distributed programming is hard (and with it comes the truly hard matters of cache invalidation and naming)."

Or Herb Sutter and James Larus - here:
"Probably the greatest cost of concurrency is that concurrency really is hard: The programming model, meaning the model in the programmer's head that he needs to reason reliably about his program, is much harder than it is for sequential control flow."

and here:
"But concurrency is hard. Not only are today's languages and tools inadequate to transform applications into parallel programs, but also it is difficult to find parallelism in mainstream applications, and - worst of all - concurrency requires programmers to think in a way humans find difficult."

And yet, almost everyone (see here, here, and here for examples) is saying:
"Concurrency has long been touted as the "next big thing" and "the way of the future," but for the past 30 years, mainstream software development has been able to ignore it. Our parallel future has finally arrived: new machines will be parallel machines, and this will require major changes in the way we develop software."

So, how are developers going to deliver the next generation of concurrent applications and (most importantly) how will Lisp fit into all of this? First of all, let's start with some basics. In my previous articles on this subject (see here, here, here, here, here), I've gone through some background and definitions, so I won't rehash any of that here. However, it is probably a good idea to go over the different concurrent programming paradigms (as taken from the book Concepts, Techniques, and Models of Computer Programming):
- Shared-state Concurrency: This approach assumes multiple threads of execution operating on shared data using locks, monitors, and transactions. It is the typical Java approach to concurrency and it is also the most widespread form of concurrency. However, shared-state concurrency is also the most complicated model to program against, as it requires fine-grained synchronization and scheduling between threads.
- Message-passing Concurrency: Uses asynchronous messaging to communicate between multiple threads over a specified port. This is the model used (very successfully) by Erlang. It provides a coarser level of interaction between threads but is easier to program against.
- Declarative/Dataflow Concurrency: This approach is data-driven and extends sequential declarative programming with multiple threads of execution. It is a simpler approach than #1 and #2 and is very useful for certain types of problems; however, it is not widely used.
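To make the contrast between the first two paradigms concrete, here is a small Python sketch (my own toy example, not from any of the sources quoted above): the same counting job done shared-state style with a lock, and message-passing style with a queue.

```python
import threading
import queue

# Shared-state style: threads mutate one counter, guarded by a lock.
counter = 0
lock = threading.Lock()

def shared_state_worker(n):
    global counter
    for _ in range(n):
        with lock:          # forget this lock and updates can be silently lost
            counter += 1

# Message-passing style: workers share nothing; they send results over a
# queue and a single consumer does all the accumulation.
def message_passing_worker(n, outbox):
    outbox.put(n)           # communicate by message, not by shared memory

def run_both():
    global counter
    counter = 0
    threads = [threading.Thread(target=shared_state_worker, args=(1000,))
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()

    outbox = queue.Queue()
    senders = [threading.Thread(target=message_passing_worker, args=(1000, outbox))
               for _ in range(4)]
    for t in senders: t.start()
    for t in senders: t.join()
    total = sum(outbox.get() for _ in range(4))
    return counter, total

print(run_both())  # (4000, 4000)
```

Note how the correctness of the shared-state version hinges on a single `with lock:` line, while the message-passing version has no shared mutable state at all - which is exactly why the latter is easier to program against.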
I mentioned above that the Declarative/Dataflow Concurrency model is very useful for certain types of problems but that it is not widely used. This looks likely to change as others analyze what Google has been doing recently in the concurrent/parallel programming space. I interviewed with Google a while ago and was intrigued when one guy said that the person in the cube next to him had a whole wall of lisp books and that there were a lot of ex-lispers in the company (of course I already knew about one of those lispers!). Well, it looks like some lispy ideas are starting to percolate out of Google now too. MapReduce is a library that Google have developed for "automatic parallelization and distribution of large-scale computation". The MapReduce "abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages". Wikipedia provides a definition:
"MapReduce is a programming tool developed by Google in C++, in which parallel computations over large (> 1 terabyte) data sets are performed. The terminology of "Map" and "Reduce", and their general idea, is borrowed from functional programming languages' use of the constructs map and reduce, and from features of array programming languages."

Conceptually, MapReduce fits into the Declarative/Dataflow Concurrency paradigm described above. There is a Google paper that describes what it does in more detail; however, the diagram from that paper and the abstract tell a lot:
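To get a feel for the programming model, here is a toy, single-machine Python sketch of the map/shuffle/reduce phases (my own illustration of the idea - the real thing distributes these phases across a cluster):

```python
from itertools import groupby
from operator import itemgetter

def map_reduce(inputs, mapper, reducer):
    # Map phase: each input record yields (key, value) pairs.
    intermediate = []
    for record in inputs:
        intermediate.extend(mapper(record))
    # Shuffle phase: bring all values with the same intermediate key together.
    intermediate.sort(key=itemgetter(0))
    # Reduce phase: merge all values associated with each key.
    return {key: reducer(key, [v for _, v in pairs])
            for key, pairs in groupby(intermediate, key=itemgetter(0))}

# Word count, the canonical MapReduce example.
def wc_mapper(line):
    return [(word, 1) for word in line.split()]

def wc_reducer(word, counts):
    return sum(counts)

docs = ["the cat sat", "the cat ran"]
print(map_reduce(docs, wc_mapper, wc_reducer))
# {'cat': 2, 'ran': 1, 'sat': 1, 'the': 2}
```

Because the mapper looks at one record at a time and the reducer only sees values grouped by key, both phases can be farmed out to many machines without the user's code changing at all.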
"MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day."

Google have recently introduced (internally) a DSL called Sawzall that is designed to be used with MapReduce for data analysis. The paper describes the rationale for creating a Domain-Specific Language instead of using a general-purpose programming language:

"Why put a language above MapReduce? MapReduce is very effective; what's missing? And why a new language? Why not just attach an existing language such as Python to MapReduce?

The usual reasons for creating a special-purpose language apply here. Notation customized to a particular problem domain can make programs clearer, more compact, and more expressive. Capturing the aggregators in the language (and its run-time) means that the programmer never has to provide one, unlike when using MapReduce. It also has led to some elegant programs as well as a comfortable way to think about data processing problems in large distributed data sets. Also, support for protocol buffers and other domain-specific types simplifies programming at a lower level. Overall, Sawzall programs tend to be around 10 to 20 times shorter than the equivalent MapReduce programs in C++ and significantly easier to write.

Other advantages of a custom language include the ability to add domain-specific features, custom debugging and profiling interfaces, and so on.

The original motivation, however, was completely different: parallelism. Separating out the aggregators and providing no other inter-record analysis maximizes the opportunity to distribute processing across records. It also provides a model for distributed processing, which in turn encourages users to think about the problem in a different light. In an existing language such as Awk or Python, users would be tempted to write the aggregators in that language, which would be difficult to parallelize. Even if one provided a clean interface and library for aggregators in these languages, seasoned users would want to roll their own sometimes, which could introduce dramatic performance problems."

The paper goes on to say "Although it has been deployed only for about 18 months, Sawzall has become one of the most widely used programming languages at Google". This is surely a pretty good testimonial for using a DSL for concurrent/parallel programming! So, maybe you're interested in MapReduce and would like to explore some code but you don't work for Google? Oy vey, what to do? ;-) There are actually a number of different things one can do.
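Before moving on, that aggregator idea is worth making concrete. Here is a rough Python sketch (the names and structure are mine, not Sawzall's) of why emit-only, per-record logic parallelizes so well: aggregator tables built on separate shards of the data can simply be merged afterwards.

```python
# Sawzall-style aggregation, sketched in Python (a toy, not Sawzall's API).
# Per-record logic only *emits* values; the aggregator tables own all
# cross-record state, so records can be processed in any order, on any machine.

class SumTable:
    def __init__(self): self.value = 0
    def emit(self, v): self.value += v       # commutative and associative
    def merge(self, other): self.value += other.value

def process_record(record, count, total):
    # Per-record analysis: no access to other records, no shared state.
    count.emit(1)
    total.emit(record)

# Simulate two machines each processing a shard, then merging their tables.
shard_a, shard_b = [1.0, 2.0], [3.0, 4.0]
count_a, total_a = SumTable(), SumTable()
count_b, total_b = SumTable(), SumTable()
for r in shard_a: process_record(r, count_a, total_a)
for r in shard_b: process_record(r, count_b, total_b)
count_a.merge(count_b)
total_a.merge(total_b)
print(count_a.value, total_a.value)  # 4 10.0
```

If users could write arbitrary cross-record code instead of emitting into tables like these, the run-time would lose exactly this freedom to shard and merge - which is the parallelism argument the quote makes.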
A non-Google version of MapReduce has already been implemented for Nutch (an open-source web search library), so that is one way some experimenting could be done. However, it's not necessary to look at a direct port of MapReduce in order to play around with the ideas underlying it. Conceptually, the approach is similar (at least in some respects) to "blackboard systems". The following diagram illustrates the approach that blackboard systems take, utilizing a shared data store and a controller (along with specific "knowledge sources" or "expert programs") to allow multiple threads/processes to cooperatively work on a solution.
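To make the blackboard idea concrete, here is a minimal Python sketch (my own toy, not GBBopen's API): knowledge sources watch a shared data store, and a controller schedules whichever one can currently make progress.

```python
# A minimal blackboard sketch: shared data store + knowledge sources +
# controller. The "experts" (Doubler, Labeller) are hypothetical examples.

class Blackboard:
    """Shared data store that all knowledge sources read from and write to."""
    def __init__(self, problem):
        self.data = dict(problem)

class Doubler:
    # Hypothetical expert: turns a raw number into a doubled one.
    def can_contribute(self, bb):
        return "raw" in bb.data and "doubled" not in bb.data
    def contribute(self, bb):
        bb.data["doubled"] = bb.data["raw"] * 2

class Labeller:
    # Hypothetical expert: can only act once the Doubler has run.
    def can_contribute(self, bb):
        return "doubled" in bb.data and "label" not in bb.data
    def contribute(self, bb):
        bb.data["label"] = "result=%d" % bb.data["doubled"]

def controller(bb, sources):
    # The controller repeatedly activates any source that can make progress,
    # until no knowledge source has anything left to add.
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.can_contribute(bb):
                ks.contribute(bb)
                progress = True
    return bb.data

bb = Blackboard({"raw": 21})
print(controller(bb, [Labeller(), Doubler()]))
# {'raw': 21, 'doubled': 42, 'label': 'result=42'}
```

Note that the Labeller is listed first yet still runs last: the controller's scheduling, driven by the state of the shared store, decides the order - which is the essence of the cooperative, data-driven style the diagram illustrates.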
If you want to have a play around with a blackboard system in Lisp to experiment with the concepts, you can download the open source GBBopen library (the above diagram was taken from the GBBopen documentation).
There are interesting times ahead in the area of parallel/concurrent programming. I get excited when I think about the historical background of Lisp in concurrent/parallel processing, the unique ability of Lisp to easily create Domain-Specific Languages, and the need for some new solutions in the concurrent/parallel programming space. I'm sure there are a lot of opportunities for Lisp here!