Dear Gavin, thank you, too, for your nice feedback!
Looks like a nice concept at first glance (I haven't looked at the implementation), although some of those known issues (particularly the limited lifespan) would probably need to be solved before it could be accepted.
It clearly is not yet ready, I know this. I just want to get feedback as early as possible so I can steer in the right direction from the beginning.
It's not a completely new concept, of course; one of the original future proposals included a future_stream that was essentially a recursive future -- whenever it completed it produced both a value and another future for the next value. This didn't make it into Boost (or the Standard) in the end, though I assume many people have reimplemented something similar on their own.
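From memory, the shape was roughly like this (an illustrative sketch only; the names are mine, not the actual proposal's interface):

#include <future>
#include <memory>

// Each completed step yields one value plus a future for the next step.
template <typename T>
struct stream_node;

template <typename T>
using future_stream = std::future<std::shared_ptr<stream_node<T>>>;

template <typename T>
struct stream_node {
    T value;                // the value produced at this step
    future_stream<T> next;  // future that will deliver the following value
};

// A consumer walks the chain: wait for a node, use its value, follow 'next'.
// A null node pointer signals the end of the stream.
template <typename T, typename F>
void consume(future_stream<T> s, F f) {
    for (auto node = s.get(); node; node = s.get()) {
        f(node->value);
        s = std::move(node->next);
    }
}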
This sounds like a very similar concept indeed, but I cannot find anything about it on the web. Maybe the future_queue is a bit different in the sense that one can have multiple ready values in the queue, just like in any normal FIFO queue.
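To illustrate what I mean, roughly (the interface is still preliminary and the names may change):

#include "future_queue.hpp"   // preliminary header name

int main() {
    // Unlike a plain future, which holds at most one value, a future_queue
    // buffers multiple ready values like an ordinary FIFO of fixed capacity.
    cppext::future_queue<int> q(8);   // space for 8 unread values

    q.push(1);                        // several values can be ready at once
    q.push(2);
    q.push(3);

    int v;
    while(q.pop(v)) { /* consume v */ }   // drain everything that is ready
}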
Having said that, the usual thinking on implementing blockable lockfree queues doesn't involve futures per se, but rather focuses on using an "eventcount" to provide the blocking behaviour as a wrapper around the core data structure itself, so it can be used with multiple different kinds of queues -- lockfree is usually a balancing act between tradeoffs, so different use cases benefit most from different structures.
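A rough sketch of that pattern, heavily simplified (default seq_cst everywhere, not production quality):

#include <atomic>
#include <condition_variable>
#include <mutex>

// Consumers announce their intent to wait, re-check the queue, and only then
// block; producers bump the epoch and wake waiters after pushing.
class eventcount {
    std::atomic<unsigned> epoch_{0};
    std::atomic<unsigned> waiters_{0};
    std::mutex m_;
    std::condition_variable cv_;
public:
    unsigned prepare_wait() {
        waiters_.fetch_add(1);
        return epoch_.load();
    }
    void cancel_wait() { waiters_.fetch_sub(1); }
    void wait(unsigned ticket) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&]{ return epoch_.load() != ticket; });
        waiters_.fetch_sub(1);
    }
    void notify() {
        epoch_.fetch_add(1);
        if(waiters_.load() != 0) {
            std::lock_guard<std::mutex> lk(m_);
            cv_.notify_all();
        }
    }
};

// Typical use around any lockfree queue Q:
//   consumer: if(Q.pop(item)) return;                          // fast path
//             for(;;) {
//                 auto t = ec.prepare_wait();
//                 if(Q.pop(item)) { ec.cancel_wait(); return; }
//                 ec.wait(t);
//             }
//   producer: Q.push(item); ec.notify();

The point is that the blocking machinery stays outside the queue itself, so the same wrapper works for whichever lockfree structure fits the use case.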
I know that, but I think with the future_queue approach one can do much more (and it is also simpler to use, since you don't have to bother with wrapping the queue and dealing with counters). I think the important features are continuations, when_any and (added today) when_all. With these one can build something like "processing pipelines" without even having to explicitly manage the creation of threads. I will prepare a more advanced example that shows better what I mean, and I will let you know when it's ready.
Cheers, Martin
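P.S.: To already give a rough idea of the kind of pipeline I mean, here is a sketch (names and signatures are preliminary and will likely change):

#include "future_queue.hpp"   // preliminary header name

int main() {
    // Two producer threads (not shown) push raw readings into their queues.
    cppext::future_queue<double> sensorA(100);
    cppext::future_queue<double> sensorB(100);

    // A continuation processes each value as soon as it arrives and delivers
    // the result through another queue, so stages chain together without the
    // user explicitly creating a thread for this step.
    auto calibratedA = sensorA.then<double>([](double x) { return 2.0*x; });
    auto calibratedB = sensorB.then<double>([](double x) { return x - 0.5; });

    // when_any tells the consumer which queue has a new value ready,
    // when_all fires once every queue has one (e.g. to combine matching
    // readings from both sensors).
    auto whoIsReady = when_any(calibratedA, calibratedB);
    auto bothReady  = when_all(calibratedA, calibratedB);

    // A single consumer loop can then drive the whole pipeline.
    size_t index;
    for(;;) {
        whoIsReady.pop_wait(index);   // blocks until one of the inputs has data
        // handle the new value from the queue identified by 'index' ...
    }
}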