On 25 Aug 2015 at 16:32, Andreas Schäfer wrote:
I have those too, of course. Have a look at https://raw.githubusercontent.com/ned14/boost.monad/master/Readme.md, bottom of the document.
Thanks. Is the benchmark code for this available online so I can see what was actually measured?
Of course: https://github.com/ned14/boost.monad/blob/master/test/unittests.cpp
I'm still unsure what "51 opcodes <= Value transport <= 32 opcodes" is supposed to mean.
Min/max code bloat for a monad<int> or future<int>.
So the minimum is 51 and the maximum is 32?
Correct. That nonsensical outcome comes from how the measurement is defined. The minimum code bloat _should_ come from the known-memory case, where the compiler knows for a fact that memory holds a particular value and can therefore prune the branching down to no code output at all. The maximum code bloat is the unknown-memory case, where memory has unknown state and the compiler must generate all possible branches for it. Compiler optimisers are not as deterministic as perhaps they should be, so you get this weird outcome where the compiler generates more bloat for the known case than for the unknown case. Which is, strictly speaking, a compiler optimiser bug.
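[For readers unfamiliar with what the two cases look like, here is a minimal sketch using a hypothetical toy_monad type; this is not boost.monad's actual API, just an illustration of known-memory vs unknown-memory value transport:

#include <stdexcept>

// Hypothetical stand-in for a lightweight monad<int>.
template <class T>
struct toy_monad {
  bool has_value;
  T value;
  T get() const {
    if (!has_value)                     // branch the optimiser tries to prune
      throw std::logic_error("empty monad");
    return value;
  }
};

// Known-memory case: the optimiser can see has_value == true, so it should
// emit little more than "return 5".
int known() {
  toy_monad<int> m{true, 5};
  return m.get();
}

// Unknown-memory case: the state arrives from outside the function, so all
// branches (including the throw path) must be generated.
int unknown(const toy_monad<int>& m) {
  return m.get();
}

The opcode counts in the Readme are presumably measured over functions of roughly this shape.]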
i.e. all lightweight futures are also lightweight monads. Just asynchronous ones.
So this architecture makes it easy to define different "future-ish" types. Got it. I've seen code blocks like this...
#define BOOST_MONAD_FUTURE_NAME_POSTFIX
#define BOOST_MONAD_FUTURE_POLICY_ERROR_TYPE stl11::error_code
#define BOOST_MONAD_FUTURE_POLICY_EXCEPTION_TYPE std::exception_ptr
#include "detail/future_policy.ipp"

#define BOOST_MONAD_FUTURE_NAME_POSTFIX _result
#define BOOST_MONAD_FUTURE_POLICY_ERROR_TYPE std::error_code
#include "detail/future_policy.ipp"

#define BOOST_MONAD_FUTURE_NAME_POSTFIX _option
#include "detail/future_policy.ipp"
...which, if I'm not mistaken, define future, future_result and future_option. The same goes for monad, monad_result and monad_option. Of these, only future and monad seem to be used in boost.monad, and only future is actually in use by boost.afio.
Yep, that's code which stamps out many policy class implementations using the preprocessor: it reincludes the same base policy implementation repeatedly, with macros reconfiguring it each time. I got bored of maintaining many separate policy classes, so I automated them from a single source.
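[For illustration, here is a hypothetical, much-simplified sketch of the reinclusion technique. The macro and file names below are invented; boost.monad's real detail/future_policy.ipp is considerably more involved:

// ---- my_policy.ipp (hypothetical reincludable fragment) ----
// The includer sets MY_NAME_POSTFIX (possibly empty) and optionally
// MY_ERROR_TYPE; each re-inclusion stamps out a differently named,
// differently configured struct.
#define MY_GLUE2(a, b) a##b
#define MY_GLUE(a, b) MY_GLUE2(a, b)

template <class T>
struct MY_GLUE(future, MY_NAME_POSTFIX) {
#ifdef MY_ERROR_TYPE
  MY_ERROR_TYPE error;   // only present when an error type was configured
#endif
  T value;
};

#undef MY_GLUE
#undef MY_GLUE2
#undef MY_NAME_POSTFIX
#ifdef MY_ERROR_TYPE
#undef MY_ERROR_TYPE
#endif

// ---- consumer.cpp ----
#include <system_error>

#define MY_NAME_POSTFIX                 // empty postfix -> future<T>
#define MY_ERROR_TYPE std::error_code
#include "my_policy.ipp"

#define MY_NAME_POSTFIX _option         // -> future_option<T>, no error member
#include "my_policy.ipp"

int main() {
  future<int> f{std::error_code{}, 5};  // has the configured error member
  future_option<int> o{7};              // stamped out without it
  return f.value + o.value;
}

Each #include of the fragment produces a new class whose name and members are controlled purely by the macros defined beforehand, which is the same idea the BOOST_MONAD_FUTURE_* block above uses.]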
This leads me to the question: why this complicated architecture, if you only really rely on future?
Monad is designed as a useful library on its own. AFIO only uses part of it.
Monad contains a full Concurrency TS implementation, works everywhere in any configuration, and currently has far lower overheads than any existing STL implementation (about 50% lower).
That's a big claim. Do you have performance tests to back it up? With which implementations did you compare your results?
See the benchmarks posted earlier. I compared to libstdc++ 5.1 and Dinkumware VS2015.
Do you happen to have a link handy? I must have missed that post.
End of the Readme file!

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/