Niall Douglas
[...]
I think a table with MPL98 forms on one side and Hana equivalents on the other would be an enormous help with the learning curve.
Will do.
Also, regarding formal review, I personally would feel uncomfortable accepting a library that only works with a single version of clang. I would feel much happier if you got trunk GCC working, even if that means workarounds.
That would mean a lot of workarounds. TBH, I think the "right" thing to do is to push the compiler folks to support C++14 (and without bugs, plz) as soon as possible. The reason I am so unwilling to do workarounds is that _the whole point_ of Hana is that it's cutting edge. Whenever you remove a C++14 feature, Hana goes back to the stone-age performance of Fusion/MPL and becomes much, much less usable. Why not use Fusion/MPL in that case?
BTW, some of those graphs you showed at C++Now with time and space benchmarks would be really useful in the docs, maybe in an Appendix. When MSVC eventually gets good enough that Hana could be ported to it (VS2015?), I think it would be fascinating to see the differences. I'm sure Microsoft's compiler team would also view Hana as an excellent test of future MSVCs; indeed, Stephan might even adopt Hana as an internal conformance test for the team to aim for.
Yup, I have a benchmark suite for Boost.Hana, like I had for the MPL11. I have started integrating it with the documentation, but I'm not sure what the best way of doing it is, so I did not push forward on that. Basically, I have a benchmark for almost every operation of almost every sequence supported by Hana (including those adapted from external libraries), but I'm not sure yet how to group them in the documentation (per operation? per sequence? per type class?). The problem is made worse by two things:

- It only makes sense to benchmark components that are isomorphic. For example, what does it mean to benchmark a std::tuple against a mpl::vector? Not much, because the price you pay for std::tuple buys you the ability to hold values, whereas mpl::vector can only hold types. We don't want to compare apples with oranges, and the grouping of the benchmarks should reflect that.

- How do we handle different compilers? Right now, all benchmarks are produced only with Clang, which is OK because it's the only compiler that can compile the library. When there is more than one compiler, how do we generate the benchmarks for all of them, and how do we integrate them into the documentation?
I'd also like to see unit testing that verified that the current compiler being tested has a time and space benchmark curve matching what is expected. It is too easy for code to slip in or the compilers themselves to gain a bug which creates pathological metaprogramming performance. Better to have Travis CI trap that for you than head scratching and surprises later.
I thought about doing this, but I didn't because I thought automating it would be a HUGE undertaking. I think we're better off integrating the benchmarks into the documentation; then, when something is really weird, we just have to look at the generated benchmarks and see what's wrong. If someone can suggest a way to do it automatically that won't take me weeks to set up, I'm interested. Also, I wouldn't want to fall into the trap of testing the compiler; testing Hana is a large enough task as it is (341 tests + 165 examples as we speak).
I'd like to see some mention in the docs of how to use Hana with that metaprogramming debugger from that German fellow. He presented it at C++Now.
I'll think about something; that's a good idea. Thanks.
Finally, there are ways and means for Doxygen docs to be automatically converted into BoostBook docs. You'll need to investigate those before starting a formal review. Tip: look into how Geometry/AFIO do the Doxygen conversion; it's brittle, but it is easier than the alternatives.
Is it mandatory for a Boost library to have BoostBook documentation? I'd like to stay as mainstream as possible in the tools I use and reduce the number of steps in the build/documentation process for the sake of simplicity. Is there a gain in generating the documentation in BoostBook?

Regards,
Louis