However, I believe the actor model is not the *best* possible approach to message passing. The model is rather intricate, with monitors, links, handles, timeouts, priorities, groups, and so on. To me it seems a bit like the OO of concurrency: well-designed and insightful, but needlessly complicated compared to a more general and powerful paradigm such as generic programming.
In my eyes, the well-defined failure semantics of links and monitors do not complicate the design; rather, they make the important aspect of error handling explicit. Moreover, priorities, links, and monitors are all *opt-in* and orthogonal to each other: a user who doesn't need them can simply ignore them. To stick with your analogy, this kind of modular behavior sounds to me like exactly what you'd expect from "concurrent generic programming."
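To make the opt-in point concrete, here is a toy sketch of monitoring as nothing more than an optional subscription to a termination notification. This is deliberately *not* libcppa's API; `toy_actor` and everything around it are hypothetical, just to show that the concept costs nothing for those who ignore it:

#include <exception>
#include <functional>
#include <iostream>
#include <stdexcept>
#include <thread>
#include <utility>

// Opt-in monitoring: the actor body runs on its own thread; whoever
// cares about its fate passes a callback, everyone else passes nothing.
class toy_actor {
public:
    using monitor_fn = std::function<void(std::exception_ptr)>;

    explicit toy_actor(std::function<void()> body, monitor_fn monitor = {})
      : thread_([body = std::move(body), monitor = std::move(monitor)] {
            std::exception_ptr err;
            try { body(); } catch (...) { err = std::current_exception(); }
            if (monitor) monitor(err);  // failures are delivered only if requested
        }) {}

    ~toy_actor() { thread_.join(); }

private:
    std::thread thread_;
};

int main() {
    // No monitor: behaves like a plain fire-and-forget worker.
    toy_actor quiet([] { /* ... */ });

    // With monitor: the failure is delivered to the one who asked for it.
    toy_actor watched([] { throw std::runtime_error("disk on fire"); },
                      [](std::exception_ptr err) {
                          std::cout << (err ? "worker failed\n" : "worker done\n");
                      });
}

A real actor system is of course richer than this, but the opt-in nature is the same: ignoring monitors leaves the rest of the model untouched.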
I think the *right* design would be a concurrent equivalent of generic programming, where the only fundamental building blocks should be a well-designed statically typed SPSC queue, move semantics, a low-level thread launching utility (such as boost::thread) and a concise generic EDSL for the linking of nodes with queues.
To me, the notion of *right* is very subjective. For example, I personally don't want threads to be the concurrency building block in my application. I would like to run only as many threads as I have cores on my machine, with a scheduler that maps logical tasks onto that thread pool. Today, a thread is what C++ programmers choose as their concurrency primitive, but it is a hardware abstraction and does not scale: you cannot spawn millions of threads efficiently, while your application may offer a much higher degree of logical parallelism, for whatever notion of task you choose. We also have to appreciate that other languages have had tremendous success with the actor model. Scala/Akka, Clojure, and Erlang all show that this is an industrial-strength abstraction not only of concurrency but also of network transparency. (When programming cloud/cluster applications, one has to consider the latter; see below.)
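To make the thread-versus-task distinction concrete, here is a rough standard-C++ sketch of a scheduler on top of a fixed pool of worker threads. The `thread_pool` class and the numbers are purely illustrative and have nothing to do with libcppa's internals:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class thread_pool {
public:
    explicit thread_pool(unsigned n = std::thread::hardware_concurrency()) {
        if (n == 0) n = 1;                       // hardware_concurrency() may report 0
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

    ~thread_pool() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();       // drain remaining tasks, then stop
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(mtx_);
                cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty()) return;      // only happens once done_ is set
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();                              // run the logical task outside the lock
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main() {
    thread_pool pool;                            // one worker per core
    for (int i = 0; i < 1000000; ++i)            // but a million logical tasks
        pool.submit([i] { /* a tiny unit of work */ (void)i; });
}                                                // destructor drains and joins

A production scheduler would use work stealing rather than a single locked queue, but the point stands: logical tasks are cheap, operating-system threads are not.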
start(readfile(input) | runlengthenc | huffmanenc | writefile(output));
You describe a classic pipes-and-filters notion of concurrency here, where presumably you'd expect your data to flow asynchronously through the filters. Effectively, this is just syntactic sugar for message passing, where nodes represent actors taking one type of message, transforming it, and spitting out another (except for the sink). Such an EDSL is orthogonal to the underlying mechanism for message passing.
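To illustrate that equivalence, here is a rough sketch, in plain standard C++ with nothing libcppa-specific, of how a pipeline like the one you wrote desugars into threads connected by queues. The `channel` type, the single stand-in filter, and the sentinel-based shutdown are just one possible set of choices:

#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <string>
#include <thread>

// A deliberately simple blocking queue standing in for the SPSC queue
// a real design would use; std::nullopt marks the end of the stream.
template <class T>
class channel {
public:
    void push(std::optional<T> v) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            q_.push(std::move(v));
        }
        cv_.notify_one();
    }
    std::optional<T> pop() {
        std::unique_lock<std::mutex> lk(mtx_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        auto v = std::move(q_.front());
        q_.pop();
        return v;
    }
private:
    std::queue<std::optional<T>> q_;
    std::mutex mtx_;
    std::condition_variable cv_;
};

int main() {
    channel<std::string> raw, encoded;           // each '|' becomes a queue

    // source, playing the role of readfile(input)
    std::thread source([&] {
        for (auto s : {"aaab", "cccc", "ab"})
            raw.push(std::string(s));
        raw.push(std::nullopt);
    });

    // filter, a stand-in for runlengthenc
    std::thread filter([&] {
        while (auto msg = raw.pop()) {
            std::string out;
            for (std::size_t i = 0; i < msg->size();) {
                std::size_t j = i;
                while (j < msg->size() && (*msg)[j] == (*msg)[i]) ++j;
                out += (*msg)[i] + std::to_string(j - i);
                i = j;
            }
            encoded.push(out);
        }
        encoded.push(std::nullopt);              // propagate end-of-stream
    });

    // sink, playing the role of writefile(output)
    std::thread sink([&] {
        while (auto msg = encoded.pop())
            std::cout << *msg << '\n';
    });

    source.join();
    filter.join();
    sink.join();
}

An EDSL with operator| would merely generate this kind of wiring, which is why I consider the pipeline syntax orthogonal to the message-passing substrate underneath.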
I dislike the option to throw out static typing but I realise that's a matter of taste.
Yeah, I agree with you. Static typing is what makes C++ powerful. We have to understand, though, that from the perspective of a single actor, message handling is *always* type-safe. It only becomes an issue when you build larger systems and want to check whether the protocols of communicating actors actually match. libcppa offers that as well; it just requires more boilerplate. For rapid prototyping, I can see why one might prefer a weaker notion of protocol compatibility.
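Here is a small sketch, in plain C++17 rather than libcppa syntax, of what I mean by "locally always type-safe": if an actor's set of messages is closed (modeled below as a std::variant), the compiler verifies that the handler covers every case. Whether a *sender* actually speaks that protocol is the separate question:

#include <iostream>
#include <variant>

struct compute { int value; };
struct shutdown {};
using message = std::variant<compute, shutdown>;   // the actor's closed protocol

struct handler {
    void operator()(const compute& msg) const {
        std::cout << "result: " << msg.value * 2 << '\n';
    }
    void operator()(const shutdown&) const {
        std::cout << "shutting down\n";
    }
    // Removing either overload makes std::visit fail to compile, i.e. the
    // actor's own message handling cannot silently become incomplete.
};

int main() {
    for (message m : {message{compute{21}}, message{shutdown{}}})
        std::visit(handler{}, m);
}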
I'm a bit skeptical about the necessity and usefulness of built-in network transparency, but you might be able to convince me that it needs to be there.
I feel quite the opposite: network transparency is an essential aspect of any message-passing abstraction. When developing cluster-scale applications, I would like to write my application logic once and treat deployment as an orthogonal problem. Wiring components together without having to touch their implementation is a *huge* advantage. It enables complex and dynamic behaviors in distributed systems, for example spawning new nodes when the system senses a compute bottleneck.
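As a rough illustration of what I mean by treating deployment as orthogonal, consider the following sketch. All names here (`peer`, `local_peer`, `remote_peer`) are hypothetical and the remote path is a stub rather than real serialization; the point is only that the application logic never changes:

#include <iostream>
#include <string>
#include <utility>

struct peer {                                    // location-transparent handle
    virtual ~peer() = default;
    virtual void send(const std::string& msg) = 0;
};

struct local_peer : peer {
    void send(const std::string& msg) override {
        std::cout << "[local] " << msg << '\n';  // e.g. enqueue in-process
    }
};

struct remote_peer : peer {
    explicit remote_peer(std::string host) : host_(std::move(host)) {}
    void send(const std::string& msg) override {
        std::cout << "[to " << host_ << "] " << msg << '\n';  // stub for serialization + socket
    }
    std::string host_;
};

// Application logic: written once, oblivious to where its peer lives.
void run_pipeline(peer& downstream) {
    downstream.send("chunk 1");
    downstream.send("chunk 2");
}

int main() {
    local_peer same_process;
    remote_peer other_node("worker-17.example");
    run_pipeline(same_process);                  // single-machine deployment
    run_pipeline(other_node);                    // cluster deployment, same logic
}

Matthias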