On 25 May 2015 at 21:04, Peter Dimov wrote:
I've got everything working except the sequence:
promise<int> p; p.set_value(5); return p.get_future().get();
This should reduce to a single mov $5, %eax, but currently it does not; I'm just about to go and experiment to see why.
I'm really struggling to see how all that could work. Where is the result stored? In the promise? Wouldn't this require the promise to outlive the future<>? This doesn't hold in general. How is it to be guaranteed?
The promise storage is an unrestricted union holding any of the types returnable by the future, or a future<T> *, or a shared_ptr.
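By way of illustration only - the member set and names below are invented for this sketch, not the actual lightweight-futures code - that storage has roughly this shape:

    #include <memory>

    template <class T> class future;     // hypothetical future type, for illustration

    // An unrestricted union needs user-provided special member functions; the
    // enclosing promise would construct/destroy the active member manually
    // (placement new / explicit destructor calls) and track which one is live.
    template <class T>
    union promise_storage
    {
        T value;                          // a type returnable by the future, stored inline
        future<T> *attached_future;       // pointer to the future, once one is attached
        std::shared_ptr<void> shared;     // fallback shared state
        promise_storage() {}              // no member is constructed here
        ~promise_storage() {}             // nor destroyed here
    };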
In seastar, we achieved this by reserving space for the result in both future and promise, and by having future and promise track each other (so they can survive moves).
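As I read that scheme (this is my own sketch, not seastar's code), each side holds a pointer to the other, repairs it on every move, and detaches in its destructor:

    template <class T> class promise;

    template <class T>
    class future
    {
        promise<T> *p_ = nullptr;
        // in the real thing, space for the result would also be reserved here
    public:
        future() = default;
        future(future &&o) noexcept;
        ~future();
        friend class promise<T>;
        // move assignment, get(), wait() etc. omitted
    };

    template <class T>
    class promise
    {
        future<T> *f_ = nullptr;
        // ... and here
    public:
        promise() = default;
        promise(promise &&o) noexcept;
        ~promise();
        friend class future<T>;
        // move assignment, set_value() etc. omitted
    };

    // On every move, repair the partner's back-pointer so the pair survives moves.
    template <class T>
    future<T>::future(future &&o) noexcept : p_(o.p_)
    {
        o.p_ = nullptr;
        if (p_) p_->f_ = this;
    }

    template <class T>
    promise<T>::promise(promise &&o) noexcept : f_(o.f_)
    {
        o.f_ = nullptr;
        if (f_) f_->p_ = this;
    }

    // On destruction, detach from the partner if still attached.
    template <class T>
    future<T>::~future() { if (p_) p_->f_ = nullptr; }

    template <class T>
    promise<T>::~promise() { if (f_) f_->p_ = nullptr; }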
Doesn't this introduce a race between ~promise and ~future? ~future checks if( promise_ ), ~promise checks if( future_ ), and things get ugly.
Fixing that requires synchronization, which makes mov $5, %eax impossible.
In my implementation, if you never call promise.get_future() you never get synchronisation: the constexpr folding has the compiler elide all of it. Even though clang spills a lot of ops, it still spills no synchronisation. MSVC unfortunately spills everything, but no synchronisation would ever be executed by the CPU. Likewise, if you call promise.set_value() before promise.get_future() you never get synchronisation, as futures are single shot.

I took a very simple synchronisation solution - a spinlock in both promise and future. Both are locked before any changes happen in either, if synchronisation has been switched on. Another optimisation I have yet to do is to detach the future-promise pair after the value has been set, to avoid unnecessary synchronisation.
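As an illustration only - the class and member names are invented here and the real implementation is considerably more involved - the two-spinlock idea is roughly:

    #include <atomic>
    #include <utility>

    // Trivial test-and-set spinlock.
    struct spinlock
    {
        std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
        void lock()   { while (flag_.test_and_set(std::memory_order_acquire)) { } }
        void unlock() { flag_.clear(std::memory_order_release); }
    };

    template <class T> class lite_promise;

    template <class T>
    class lite_future
    {
        spinlock lock_;
        friend class lite_promise<T>;
        // result storage, wait(), get() etc. omitted
    };

    template <class T>
    class lite_promise
    {
        spinlock lock_;
        lite_future<T> *future_ = nullptr;   // non-null only after get_future()
        T value_{};                          // simplified: value-only storage
        bool has_value_ = false;

    public:
        void set_value(T v)
        {
            if (!future_)                    // synchronisation never switched on,
            {                                // so nothing here for the optimiser to keep
                value_ = std::move(v);
                has_value_ = true;
                return;
            }
            lock_.lock();                    // lock both sides before changing either
            future_->lock_.lock();
            value_ = std::move(v);
            has_value_ = true;
            future_->lock_.unlock();
            lock_.unlock();
        }
        // get_future() would attach a lite_future and switch synchronisation on
    };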
Giovanni Piero Deretta wrote:
So, the future/promise pair can be optimized out if the work can be completed synchronously (i.e. immediately or at get time). But then, why use a future at all? What is the use case you are trying to optimize for? Do you have an example?
For me, the main purpose is making AFIO allocate four malloc/frees per op instead of eight. I also make heavy use of make_ready_future(), and by definition that is now optimally fast.
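For readers unfamiliar with the call, make_ready_future() just builds a future that already holds its value. Written over std::future it has the Concurrency TS shape sketched below - note that std::promise still allocates shared state here, whereas the whole point of the lightweight design is that this same shape can store the value inline and fold away entirely:

    #include <future>
    #include <type_traits>
    #include <utility>

    // Semantics-only sketch of make_ready_future() over std::future.
    template <class T>
    std::future<typename std::decay<T>::type> make_ready_future(T &&v)
    {
        std::promise<typename std::decay<T>::type> p;
        p.set_value(std::forward<T>(v));
        return p.get_future();
    }

    int main()
    {
        return make_ready_future(5).get();   // the sequence Peter quoted at the top
    }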
Also, as I mentioned, the lion's share of the future implementation is actually reusable as a monadic transport. That's currently a monad.
I believe that trying to design a future that can fulfill everybody's requirements is a lost cause. The C++ way is to define concepts and algorithms that work on concepts. The types we want to generalize are std::future, expected, possibly optional, and all the other futures that have been cropping up in the meantime. The algorithms are of course those required for composition: then, when_all, when_any, plus probably get and wait.
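Purely as an illustration of "algorithms that work on concepts" - the trait and its requirement set are invented here, and in 2015 this has to be spelled as a C++14 detection trait rather than a language concept:

    #include <future>
    #include <type_traits>
    #include <utility>

    // Detects whether T looks future-like: has wait() and get(). A fuller concept
    // would also cover then(), when_all(), when_any() and the expected/optional family.
    template <class T, class = void>
    struct is_future_like : std::false_type {};

    template <class T>
    struct is_future_like<T, decltype(
        std::declval<T&>().wait(),
        void(std::declval<T&>().get())
    )> : std::true_type {};

    static_assert(is_future_like<std::future<int>>::value,
                  "std::future models the minimal requirements");
    static_assert(!is_future_like<int>::value, "int does not");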
I think it might not be a lost cause in a world with concepts and especially modules. Until then it is going to be unacceptably slow, and that is years away, and I need this now. Regarding heterogeneous future-type wait composition: yes, this is hoped to be the foundation for such an eventual outcome. That's more Vicente's boat than mine though.
We could just take a page from Haskell and call the concept Monad, but maybe we want something more specific, like Result.
I'll take that as a vote for result<T>. Bjorn suggested holder<T> and value<T> as well. I think the former is too generic, and the latter suggests the thing is a value and can decay into one without get() - or rather, that some might think it can.
Then, in addition to the algorithms themselves, there is certainly space in boost for a library for helping build custom futures, a-la boost.iterator.
I originally intended that; my second round of experiments showed it could be very exciting. But I chose to bow out in favour of something I can deliver in weeks rather than months - I need all this ready by mid-June so I can start retrofitting AFIO. Besides, I think simpler may well win the race: more complex means more compiler quirks, and I am dealing with enough of those already!

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/