Niall- On 14:48 Tue 25 Aug, Niall Douglas wrote:
On 25 Aug 2015 at 8:26, Andreas Schäfer wrote:
Monad is designed to be absolutely as lightweight as possible, and is per-commit CI tested to ensure it generates no more than X opcodes per operation.
Traditional complexities such as O(N) make no sense for Monad: every operation it performs is constant time. You're really testing how friendly Monad is to the compiler optimiser (which is very friendly).
OK, but how is the number of opcodes relevant in any practical setting? I for one expect the compiler to generate fast code. And if that means that one horribly slow instruction gets replaced by 10 fast ones, then so be it. I'd suggest setting up performance tests which measure time or throughput for sets of typical workloads.
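To make that concrete, here is a rough sketch of the kind of throughput measurement I have in mind; it uses plain std::promise/std::future as a stand-in, since I don't have Monad's API in front of me:

#include <chrono>
#include <cstdio>
#include <future>

int main()
{
    // Time a large number of promise/future value transports and report
    // throughput, rather than counting opcodes.
    const int iterations = 1000000;
    auto begin = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < iterations; ++i)
    {
        std::promise<int> p;
        std::future<int> f = p.get_future();
        p.set_value(i);
        (void) f.get();
    }
    auto end = std::chrono::high_resolution_clock::now();
    double secs = std::chrono::duration<double>(end - begin).count();
    std::printf("%.0f value transports per second\n", iterations / secs);
}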
I have those too, of course. Have a look at https://raw.githubusercontent.com/ned14/boost.monad/master/Readme.md, bottom of the document.
Thanks. Is the benchmark code for this available online so I can see what was actually measured?
I'm still unsure what "51 opcodes <= Value transport <= 32 opcodes" is supposed to mean.
Min/max code bloat for a monad<int> or future<int>.
So the minimum is 51 and the maximum is 32?
Since you mentioned that monads and futures are basically identical: why don't you just use std::future?
AFIO has never been able to return a plain std::future because of the lack of standardised wait composure in current C++, i.e. when_all(futures ...). It returns an afio::future<> which internally keeps a unique integer which looks up stored continuations in an unordered_map. This lets AFIO implement continuations on top of std::future.
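Very roughly, the idea looks like this (a heavily simplified sketch, not the actual AFIO code; continuation_store and afio_future_sketch are names made up for illustration):

#include <cstddef>
#include <functional>
#include <future>
#include <mutex>
#include <unordered_map>
#include <utility>

// Hypothetical sketch: a central registry mapping each future's integer id
// to its stored continuation. A mutex guards the map - this is the central
// locking referred to further down.
struct continuation_store
{
    std::mutex lock;
    std::unordered_map<std::size_t, std::function<void()>> continuations;
    std::size_t next_id = 0;
};
static continuation_store store;

// A future-like wrapper: a plain std::future plus the id used to look up
// and run its continuation once the dispatcher makes the future ready.
template<class T> struct afio_future_sketch
{
    std::future<T> inner;
    std::size_t id;

    template<class F> void then(F &&callback)
    {
        std::lock_guard<std::mutex> g(store.lock);
        store.continuations[id] = std::forward<F>(callback);
    }
};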
OK, this explains why you're using afio::future instead of std::future, but not why afio::future relies on afio::monad instead of std::future. AFAICS basic_monad doesn't add much over future's API, except for get_or() and friends. But those functions aren't being used anyway, right?
Monad's basic_future<Policy> is the base for all future-ish and future-like future types. You can make whole families of customised future types with any semantics you like, so long as they are future-ish. One such custom future type is afio::future.
Monad's basic_future<Policy> is a refinement of basic_monad<Policy> i.e. inherits directly. Anything implementing basic_future<Policy> therefore also implements basic_monad<Policy>.
i.e. all lightweight futures are also lightweight monads. Just asynchronous ones.
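In shape it looks something like this (an exposition-only sketch; the real Monad declarations carry a lot more in the policy, e.g. the error and exception transporting types):

// Exposition-only sketch of the shape of the hierarchy.
template<class Policy> class basic_monad
{
protected:
    typename Policy::value_type _value;
    // get(), get_or() and friends are implemented in terms of the Policy.
};

template<class Policy> class basic_future : public basic_monad<Policy>
{
    // adds the asynchronous parts: wait(), then(), the promise linkage, ...
};

// A custom future type is then just a choice of policy (hypothetical names):
struct my_policy { using value_type = int; };
using my_future = basic_future<my_policy>;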
So this architecture makes it easy to define different "future-ish" types. Got it. I've seen code blocks like this...
#define BOOST_MONAD_FUTURE_NAME_POSTFIX
#define BOOST_MONAD_FUTURE_POLICY_ERROR_TYPE stl11::error_code
#define BOOST_MONAD_FUTURE_POLICY_EXCEPTION_TYPE std::exception_ptr
#include "detail/future_policy.ipp"

#define BOOST_MONAD_FUTURE_NAME_POSTFIX _result
#define BOOST_MONAD_FUTURE_POLICY_ERROR_TYPE std::error_code
#include "detail/future_policy.ipp"

#define BOOST_MONAD_FUTURE_NAME_POSTFIX _option
#include "detail/future_policy.ipp"
...which, if I'm not mistaken, define future, future_result and future_option. Same for monad, monad_result and monad_option. Of these, only future and monad seem to be used in boost.monad, and only future is actually in use by boost.afio. This leads me to the question: why this complicated architecture, if you only really rely on future?
One big win is universal wait composure. You can feed any combination of custom future types into the same when_all() or when_any() function, and they'll compose.
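For illustration only, think of something shaped like this trivial, blocking version (naive_when_all is just a made-up name; Monad's real when_all() is non-blocking and returns a future of the results):

#include <future>
#include <string>
#include <tuple>

// Naive, blocking stand-in for when_all(): it accepts any mix of future-ish
// types (anything with a .get()) and composes their results into a tuple.
template<class... Futures>
auto naive_when_all(Futures &... futures)
    -> std::tuple<decltype(futures.get())...>
{
    return std::tuple<decltype(futures.get())...>(futures.get()...);
}

int main()
{
    std::future<int> a = std::async([] { return 42; });
    std::future<std::string> b = std::async([] { return std::string("hi"); });
    auto results = naive_when_all(a, b);  // two different future types compose
    (void) results;
}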
Monad contains a full Concurrency TS implementation, works everywhere in any configuration, and currently has far lower overheads than any existing STL implementation (roughly 50% lower).
That's a big claim. Do you have performance tests to back it up? Against which implementations did you compare your results?
See the benchmarks posted earlier. I compared to libstdc++ 5.1 and Dinkumware VS2015.
Do you happen to have a link handy? I must have missed that post.
There are known bugs and todo items in Monad, as you'd expect of a library only started in June, not least that future<void>.get() does not return void. I know of no serious bugs though, and AFIO is working well with Monad so far.
That means AFIO can drop the internal unordered_map and become a pure Concurrency TS implementation with no workarounds and, more importantly, no central locking at all. That will eliminate the last remaining synchronisation in AFIO, making it a pure wait-free solution.
So, is AFIO composable with other parallel code? Could I use it along with other threads?
Absolutely! The whole point is generic reusability, which is why I'm about to embark on such a large internal refactor to align more closely with the current Concurrency and Coroutines TSs. Code you write using AFIO right now will fit hand in glove with the C++ of the next five years - you are future-proofed with the API before review today.
Regarding thread safety of AFIO objects, everything should be thread safe and can be used concurrently by multiple threads. You may not get desirable outcomes, of course: closing a handle during a write on that handle from another thread will likely cause data loss, for obvious reasons.
Thanks!
-Andreas

--
Andreas Schäfer
HPC and Grid Computing
Department of Computer Science 3
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
+49 9131 85-27910
PGP/GPG key via keyserver
http://www.libgeodecomp.org