Having said that, the usual thinking on implementing blockable lockfree queues doesn't involve futures per se, but rather focuses on using an "eventcount" to provide the blocking behaviour as a wrapper around the core data structure itself, so it can be used with multiple different kinds of queues -- lockfree design is usually a balancing act between competing tradeoffs, so different use cases benefit most from different structures.
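A minimal sketch of that idea (assuming C++17 and boost::lockfree::queue; the names EventCount and blocking_pop are made up here, and a production eventcount such as the one in folly keeps the producer path considerably cheaper than this):

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <boost/lockfree/queue.hpp>

// Simplified eventcount: consumers announce they are about to sleep, re-check
// the queue, and only then block; producers bump an epoch and wake sleepers.
class EventCount {
 public:
  unsigned prepare_wait() {                 // consumer: announce intent to sleep
    waiters_.fetch_add(1);
    return epoch_.load();
  }
  void cancel_wait() { waiters_.fetch_sub(1); }
  void wait(unsigned ticket) {              // consumer: sleep until the epoch changes
    std::unique_lock<std::mutex> lk(mutex_);
    cv_.wait(lk, [&] { return epoch_.load() != ticket; });
    lk.unlock();
    waiters_.fetch_sub(1);
  }
  void notify() {                           // producer: called after every push
    epoch_.fetch_add(1);
    if (waiters_.load() != 0) {             // fast path: nobody sleeping, no syscall
      std::lock_guard<std::mutex> lk(mutex_);
      cv_.notify_all();
    }
  }
 private:
  std::atomic<unsigned> epoch_{0};
  std::atomic<int> waiters_{0};
  std::mutex mutex_;
  std::condition_variable cv_;
};

// Blocking pop wrapped around an unmodified lockfree queue.
template <typename T>
T blocking_pop(boost::lockfree::queue<T>& q, EventCount& ec) {
  T value;
  for (;;) {
    if (q.pop(value)) return value;                         // fast path, never blocks
    unsigned ticket = ec.prepare_wait();
    if (q.pop(value)) { ec.cancel_wait(); return value; }   // re-check avoids lost wakeups
    ec.wait(ticket);
  }
}
```

The producer side is just `q.push(x); ec.notify();`, and the same wrapper works unchanged with any other lockfree queue.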
On second thought, maybe I could even change the implementation of the future_queue to internally use e.g. the well-known boost::lockfree queues together with a semaphore to implement the blocking behaviour. I am not 100% sure yet, but I think all current features could be implemented that way. Do you think this would improve the implementation? It might be a bit simpler and maybe even more efficient in some cases.
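For what it's worth, a rough sketch of what that could look like (assuming C++20 std::counting_semaphore; the class and function names are made up, and none of the future_queue extras such as continuations are covered):

```cpp
#include <cstddef>
#include <semaphore>
#include <boost/lockfree/queue.hpp>

// Blocking pop on top of boost::lockfree::queue: the semaphore simply counts
// the elements currently available in the queue.
template <typename T>
class semaphore_queue {
 public:
  explicit semaphore_queue(std::size_t capacity) : queue_(capacity) {}

  bool push(const T& value) {
    if (!queue_.push(value)) return false;  // queue full
    sem_.release();                         // announce one more available element
    return true;
  }

  T pop_wait() {
    sem_.acquire();                         // block until an element is available
    T value;
    while (!queue_.pop(value)) {}           // expected to succeed on the first attempt
    return value;
  }

 private:
  boost::lockfree::queue<T> queue_;
  std::counting_semaphore<> sem_{0};
};
```

Since release() is only called after a successful push(), every granted acquire() corresponds to an element that is already in the queue, so the pop loop is essentially defensive.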
One point to consider: even if the queue is lockfree, the semaphore operations may internally acquire locks (maybe not in user space, but in the OS). I double-checked the kernel-space implementations on Linux and Darwin some time ago: posting/notifying a semaphore is not lockfree on these platforms, as some code paths involve (spin)locks. So if lockfree behaviour is required to produce values from a real-time context, one may have to poll a lockfree queue instead. Cheers, Tim
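A minimal sketch of that polling approach (boost::lockfree::spsc_queue is used here as an example, and the sleep interval is an arbitrary placeholder; the real-time side only ever calls push):

```cpp
#include <chrono>
#include <thread>
#include <boost/lockfree/spsc_queue.hpp>

// Real-time producer: push only -- no semaphore post, no locks, no syscalls.
inline bool rt_produce(boost::lockfree::spsc_queue<float>& q, float sample) {
  return q.push(sample);  // false if the ring buffer is full (caller handles the overrun)
}

// Non-real-time consumer: poll the queue instead of blocking on a semaphore.
inline float consume_polling(boost::lockfree::spsc_queue<float>& q) {
  float sample;
  while (!q.pop(sample)) {
    std::this_thread::sleep_for(std::chrono::microseconds(100));  // latency vs. CPU tradeoff
  }
  return sample;
}
```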