Re: [boost] [Boost-announce] [Boost] [Hana] Formal review for Hana next week (June 10th)
[Correction: 10th June is a Wednesday, not Monday.]

Dear Boost community,

The formal review of Louis Dionne's Hana library starts Wednesday, 10th June and ends on 24th June.

Hana is a header-only library for C++ metaprogramming that provides facilities for computations on both types and values. It provides a superset of the functionality of Boost.MPL and Boost.Fusion, but with more expressiveness, faster compilation times, and faster (or equal) run times.

To dive right into examples, please see the Quick start section of the library's documentation: http://ldionne.com/hana/index.html#tutorial-quickstart

Hana makes use of C++14 language features and thus requires a C++14-conforming compiler. It is recommended you evaluate it with Clang 3.5 or higher.[1]

Hana's source code is available on GitHub: https://github.com/ldionne/hana
Full documentation is also viewable on GitHub: http://ldionne.github.io/hana
To read the documentation offline: git clone http://github.com/ldionne/hana --branch=gh-pages doc/gh-pages

For a gentle introduction to Hana, please see:
1. C++Now 2015: http://ldionne.github.io/hana-cppnow-2015 (slides)
2. CppCon 2014: https://youtu.be/L2SktfaJPuU (video), http://ldionne.github.io/hana-cppcon-2014 (slides)

We encourage your participation in this review. At a minimum, kindly state:
- Whether you believe the library should be accepted into Boost
- Your name
- Your knowledge of the problem domain

You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's:
  * Design
  * Implementation
  * Documentation
  * Tests
  * Usefulness
- Did you attempt to use the library? If so:
  * Which compiler(s)?
  * What was the experience? Any problems?
- How much effort did you put into your evaluation of the review?

[1] A note for Windows users: as mentioned, Hana requires a C++14-conforming compiler. If you would like to try it, a VM with Linux and Clang 3.5 is a fairly painless option. Some users have also reported success using Clang 3.5 on Windows. If you would like assistance configuring the former option, feel free to reach out to us.

Best,
Glen
On 5 Jun 2015 at 16:57, Glen Fernandes wrote:
- Whether you believe the library should be accepted into Boost
I vote unconditional acceptance.
- Your name
Niall Douglas.
- Your knowledge of the problem domain.
Very little. Moreover, the type of C++ programming used in its implementation is a major chore for me to write, and a lot of it is frankly beyond me (as you'll see in my soon-to-be-presented optimally lightweight monad<T>). I am particularly hoping that Hana will eventually free me from ever having to do it by hand again, once Visual Studio can compile Hana.
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's:
  * Design
There is still a bit to go to match STL naming conventions, though it has improved enormously over earlier versions. For example, the STL uses empty(), not is_empty(); is_empty() in the STL means something different. I'd also *hugely* prefer it if Hana matched, name for name, the name choices in Ranges v3. For example, it's group() in Hana, but group_by() in Ranges v3. That would lessen the cognitive load for people using both together - which I suspect in the longer term will be many, if not most. It would also increase the chances of Hana entering the standard C++ library as a compile-time version of Ranges. It also provides a way of telling the naming bikeshedders to sod off, as whatever Eric has chosen is what you'll choose, period. I also think all the Concepts need to match naming with Eric's, and that eventually both libraries should use identical Concept implementations (which I assume would then be upgraded with Concepts Lite on supporting compilers). I'd suggest therefore breaking the Concepts stuff out into a modular part easily replaceable in the future with a new library shared by Hana and Ranges.
* Implementation
I won't comment on this as I am not really qualified to say. It looks fine.
* Documentation
I agree with other reviewers that the code examples need a hana:: namespace qualifier before all uses of Hana facilities. I quite like the Doxygen prettification; perhaps it could match the Boost colour scheme more? One problem is slow page load times on IE, especially for the first page, but also for every click in the left-hand table of contents. I also see no graphs displaying in either IE or Chrome. BTW, for my lightweight monad<T> I have some Python which counts the x64 ops generated by a code example and causes the CI commit to fail if limits are exceeded, if you're interested for the benchmarking and/or making hard promises about runtime costs.
* Tests
Tests should be capable of using BOOST_CHECK and BOOST_REQUIRE as well as static_assert. It should be switchable. You can then feed the Boost.Test XML results into the regression test tooling.
* Usefulness
Very useful.
- Did you attempt to use the library? If so:
  * Which compiler(s)?
  * What was the experience? Any problems?
None.
- How much effort did you put into your evaluation of the review?
I've been reviewing the project regularly for over a year now, and have been known to write Louis private emails with ideas :)

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
On Tue, Jun 16, 2015 at 7:35 PM, Niall Douglas wrote:
On 5 Jun 2015 at 16:57, Glen Fernandes wrote:
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's:
  * Design
There is still a bit to go to match STL naming conventions, though it has improved enormously over earlier versions. For example, the STL uses empty(), not is_empty(); is_empty() in the STL means something different.
I'd also *hugely* prefer if Hana matched, name for name, the name choices in Ranges v3. For example, it's group() in Hana, but group_by() in Ranges v3. That would lessen the cognitive load for people using both together - which I suspect in the longer term will be many if not most. It also would increase the chances of Hana entering the standard C++ library as a compile-time version of Ranges.
+1

Zach
On 5 Jun 2015 at 16:57, Glen Fernandes wrote:
[...]
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's:
  * Design
There is still a bit to go to match STL naming conventions, though it has improved enormously over earlier versions. For example, the STL uses empty(), not is_empty(); is_empty() in the STL means something different.
I'd also *hugely* prefer if Hana matched, name for name, the name choices in Ranges v3. For example, it's group() in Hana, but group_by() in Ranges v3. That would lessen the cognitive load for people using both together - which I suspect in the longer term will be many if not most. It also would increase the chances of Hana entering the standard C++ library as a compile-time version of Ranges.
Actually, a lot of the names are very close. I just checked Eric's range library and there are a __lot__ of resemblances. I guess there was a common inspiration (Haskell?) or a lot of luck. I'll try to converge even more towards his naming.
It also provides a way of telling the naming bike shedders to sod off, as whatever Eric has chosen is what you'll choose period.
lol
I also think all the Concepts need to match naming with Eric's, and eventually in the future that both libraries use identical Concept implementations (which I assume would then be upgraded with Concepts Lite on supporting compilers). I'd suggest therefore breaking the Concepts stuff out into a modular part easily replaceable in the future with a new library shared by Hana and Ranges.
That's a noble goal, but it is completely impossible without some redesign of Range-v3's concepts. Range-v3 is based on runtime concepts like ForwardRange, which is itself based on ForwardIterator. There is just no way heterogeneous containers can be made to model these things, even if we were to have an iterator-based design like Fusion. The problem here lies in compile-time vs runtime. But I could be wrong, and maybe Eric can tell us more about that.
* Implementation
I won't comment on this as I am not really qualified to say. It looks fine.
* Documentation
I agree with other reviewers that the code examples need a hana:: namespace qualifier before all uses of hana stuff.
I quite like the Doxygen prettification; perhaps it could match the Boost colour scheme more? One problem is slow page load times on IE, especially for the first page, but also for every click in the left-hand table of contents. I also see no graphs displaying in either IE or Chrome.
I can see the charts on Google Chrome. Perhaps you reloaded the page a ton of times? If so, there's a limit on the number of queries one can do to fetch the data sets from GitHub. After something like 50 reloads, you have to wait one hour. It shouldn't be a problem for most people, though.
BTW for my lightweight monad<T> I have some python which counts x64 ops generated by a code example and gets the CI commit to fail if limits are exceeded, if you're interested for the benchmarking and/or making hard promises about runtime costs.
I might be interested; where's that code?
* Tests
Tests should be capable of using BOOST_CHECK and BOOST_REQUIRE as well as static_assert. It should be switchable. You can then feed the Boost.Test XML results into the regression test tooling.
What about tests that fail/pass at compile-time? How does the MPL handle that? Also, passing a BOOST_CHECK assertion does not necessarily mean anything for Hana, since sometimes what we're checking is not only that something is true, but that something is true __at compile-time__. How is the MPL integrated with the regression test tooling? I think that is closer to what Hana might need?
* Usefulness
Very useful.
- Did you attempt to use the library? If so:
  * Which compiler(s)?
  * What was the experience? Any problems?
None.
- How much effort did you put into your evaluation of the review?
I've been reviewing the project for over a year now regularly, and have been known to write Louis private emails with ideas :)
Thanks a lot for your review and comments during the past year; you've been providing me with invaluable feedback and I appreciate that.

Regards,
Louis
On 17 Jun 2015 at 9:36, Louis Dionne wrote:
Actually, a lot of the names are very close. I just checked Eric's range library and there are a __lot__ of resemblances. I guess there was a common inspiration (Haskell?) or a lot of luck. I'll try to converge even more towards his naming.
Great minds think alike! But no, more seriously: at a high level, Hana could become "Ranges for compile time" and fit hand in glove with Ranges for run time. This is why I think David Sankel's preference for smaller, reusable, single-purpose, lower-level solutions is misguided. I'd ordinarily agree with that assessment of his, by the way, in 98% of cases, but not in this one specific case. I think Hana could potentially be standards material; indeed, I have thought this since you first argued for it instead of an MPL11. If you can match Eric's algorithms as closely as you can, and indeed if Eric can match your algorithms as closely as he can, I think Hana could be in a C++22.
I also think all the Concepts need to match naming with Eric's, and eventually in the future that both libraries use identical Concept implementations (which I assume would then be upgraded with Concepts Lite on supporting compilers). I'd suggest therefore breaking the Concepts stuff out into a modular part easily replaceable in the future with a new library shared by Hana and Ranges.
That's a noble goal, but it is completely impossible without some redesign of Range-v3's concepts. Range-v3 is based on runtime concepts like ForwardRange, which is itself based on ForwardIterator. There is just no way heterogeneous containers can be made to model these things, even if we were to have an iterator-based design like Fusion. The problem here lies in compile-time vs runtime. But I could be wrong, and maybe Eric can tell us more about that.
I can appreciate that Ranges' Concepts emulation right now may not be able to fit. But Ranges' Concepts Lite surely must be able to fit, by definition. That said, I've never used a Concept in my life, so I am probably not understanding what you mean by runtime concepts. I know Ranges extends Iterators, but unless I missed something, I had thought that Ranges only did that for backwards compatibility, and that Ranges could be used purely functionally. It's those pure functional parts I refer to: I am imagining a world where for compile-time functional programming you reach for Hana, and for run-time functional programming you reach for Ranges. Both are opposites, but simultaneously the same thing, if that makes sense.
I can see the charts on Google Chrome. Perhaps you reloaded the page a ton of times? If so, there's a limit on the number of queries one can do to fetch the data sets from GitHub. After something like 50 reloads, you have to wait one hour. It shouldn't be a problem for most people, though.
Sigh. It's working today. Wasn't before.
BTW for my lightweight monad<T> I have some python which counts x64 ops generated by a code example and gets the CI commit to fail if limits are exceeded, if you're interested for the benchmarking and/or making hard promises about runtime costs.
I might be interested; where's that code?
Have a look at https://github.com/ned14/boost.spinlock/tree/master/test/constexprs. The key files are:

* All the *.cpp files, each of which is a test case.
* with_clang_gcc.sh and with_msvc.bat - These scripts compile every *.cpp file into an object file, then disassemble it. If you add -g to the compiler flags, you'll get interleaved source + assembler, very handy for seeing which source is causing opcodes to appear.
* count_opcodes.py - This is the world's worst x64 opcode counter. I *really* don't want to have to write a full assembler parser in Python, so this nasty hack script tries to inline all function calls in your chosen example function test1(). This step is necessary because the compiler doesn't merely output the code you compile, but also the headers you drag in, and you need some postprocessing to extract just the parts you care about. By "world's worst" I mean it gets confused very easily, and will fatal exit if it gets itself into a loop. If you keep your test cases small, and always compile to x64 not x86, it generally works well enough.

The with_clang_gcc and with_msvc scripts output two things:

1. A CSV history of every opcode count for all past builds. You can feed that to Jenkins to plot as a graph, or just use it to debug when you broke something. I've personally found the CSV history much more useful than originally expected.
2. A JUnit XML unit test results file with the pass/fail status for each test, the opcode count, and, just for fun, a dump of the assembler produced. This displays very prettily in Jenkins, and you can have Jenkins email you when you broke something.
* Tests
Tests should be capable of using BOOST_CHECK and BOOST_REQUIRE as well as static_assert. It should be switchable. You can then feed the Boost.Test XML results into the regression test tooling.
What about tests that fail/pass at compile-time? How does the MPL handle that? Also, passing a BOOST_CHECK assertion does not necessarily mean anything for Hana, since sometimes what we're checking is not only that something is true, but that something is true __at compile-time__. How is the MPL integrated with the regression test tooling? I think that is closer to what Hana might need?
Strictly speaking, you should use compile-fail in Boost.Build, i.e. have a suite of test case programs where, if one doesn't fail to compile with the right error, that itself is a failure. Or its equivalent in CMake. I assume that if you get into Boost, you'll need to convert to Boost.Build anyway. However, I suspect that a large chunk of your tests don't strictly speaking need to be compile-time failures. They could be switched with a macro to runtime, and therefore output XML for the regression tester to show.
Thanks a lot for your review and comments during the past year; you've been providing me with invaluable feedback and I appreciate that.
Thank you, Louis, for taking the time and very substantial effort to bring us Hana. I have spent the last three weeks or so template metaprogramming for my lightweight monad<T>, and it has reminded me how much I dislike template metaprogramming. I take my hat off to you.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
participants (4)
- Glen Fernandes
- Louis Dionne
- Niall Douglas
- Zach Laine