Re: [boost] [Hana] Formal review
On Wed, Jun 10, 2015 at 3:19 AM, Glen Fernandes wrote:
- Whether you believe the library should be accepted into Boost * Conditions for acceptance
Yes, I think it should be accepted into Boost. The one condition is that the library works with release versions of at least two different mainstream C++ compilers. More on that later...
- Your name
David J. Sankel
- Your knowledge of the problem domain.
I've written numerous libraries that make use of Boost.MPL and Boost.Fusion. I also have quite a bit of functional programming expertise.
You are strongly encouraged to also provide additional information: - What is your evaluation of the library's: * Design
The core technique of combining value and type expressions is solid and makes metaprogramming easier and, as a bonus, improves compilation speeds. My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
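For readers unfamiliar with the technique, here is a minimal sketch of what "combining value and type expressions" looks like in practice. It uses Hana's documented tuple_t/filter/bool_c interface; the spellings follow current releases and may differ slightly from the snapshot under review.

    #include <boost/hana.hpp>
    #include <type_traits>
    namespace hana = boost::hana;

    int main() {
        // Types become ordinary values: a tuple of hana::type_c objects.
        auto types = hana::tuple_t<int, char*, double, void*>;

        // Selecting the pointer types is a plain function call with a generic
        // lambda, not a hand-rolled recursive template.
        auto pointers = hana::filter(types, [](auto t) {
            return hana::bool_c<std::is_pointer<typename decltype(t)::type>::value>;
        });

        // The whole computation happens at compile time.
        static_assert(pointers == hana::tuple_t<char*, void*>, "");
    }

The same call syntax works unchanged on tuples of runtime values.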
* Implementation
The code itself looks to be well structured and well documented. Unfortunately hana only works with one compiler: clang. While I agree that Boost shouldn't need to support Visual C++ 6.0 anymore, I believe this is going too far in the opposite direction. The home page states that boost libraries "are intended to be widely useful, and usable across a broad spectrum of applications". I've always interpreted that statement to be in a practical rather than theoretical sense and I don't think hana meets that criteria. Many other Boost authors have made heroic efforts to meet that criteria and the reputation of Boost is due, in no small part, to those efforts. I do appreciate the argument that making use of new features encourages compiler implementers to implement them. I maintain, however, that this isn't Boost's job. Boost provides high quality libraries that the every-day Joe C++ developer can benefit from. That being my position on the issue, my acceptance vote is conditional on hana supporting at least two released versions of mainstream compilers. Given that gcc support seems pretty close, that shouldn't be hard to achieve.
* Documentation
Looks good.
* Tests
I didn't evaluate these. I tried, but failed, to build the code and filed related issues. I'm assuming the problems are minor and straightforward to fix.
* Usefulness
Maybe in a couple years for me. Without VS 2015 support at least, I'm going to be waiting a while.
- Did you attempt to use the library?
Yes. Started working through the getting started examples.
If so: * Which compiler(s)
clang 3.6.1
* What was the experience? Any problems?
Nothing more to add. I filed issues for all problems encountered and they seemed minor.
- How much effort did you put into your evaluation of the review?
I attended both of Louis's talks, skimmed through the documentation, attempted to build, and read portions of the code.
-- David Sankel
Le 17/06/15 21:55, David Sankel a écrit :
On Wed, Jun 10, 2015 at 3:19 AM, Glen Fernandes wrote:
- Whether you believe the library should be accepted into Boost * Conditions for acceptance
Yes, I think it should be accepted into Boost. The one condition is that the library works with release versions of at least two different mainstream C++ compilers. More on that later...
I understand that we need at least two compilers, but I suppose this could be achieved by the time the library is ready.
You are strongly encouraged to also provide additional information: - What is your evaluation of the library's: * Design
The core technique of combining value and type expressions is solid and makes metaprogramming easier and, as a bonus, improves compilation speeds.
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
I agree that there is the Core of the library and the different types and algorithms. I agree with having highly composable types and algorithms, but why do you prefer them split into several libraries? Is it because having them in the same library allows the author to take shortcuts that shouldn't be used? Is it because it would be difficult to add other libraries using the Core in a coherent way? Is it because we need time to review each one of the Concepts, types and algorithms? Or, ....
* Implementation
The code itself looks to be well structured and well documented.
Unfortunately hana only works with one compiler: clang. While I agree that Boost shouldn't need to support Visual C++ 6.0 anymore, I believe this is going too far in the opposite direction.
Why? We have discussed this a lot of times, and it is up to the library author to state what compilers the library supports.
The home page states that boost libraries "are intended to be widely useful, and usable across a broad spectrum of applications". I've always interpreted that statement to be in a practical rather than theoretical sense and I don't think hana meets that criteria.
The intent of the library is to be widely useful, and it will be. Let's let time do its job.
Many other Boost authors have made heroic efforts to meet that criteria and the reputation of Boost is due, in no small part, to those efforts.
You are right that some of us are spending a lot of time trying to cover multiple non-conforming compilers and also multiple C++ versions. I believe that this is one of the things slowing the growth of Boost.
I do appreciate the argument that making use of new features encourages compiler implementers to implement them. I maintain, however, that this isn't Boost's job. Boost provides high quality libraries that the every-day Joe C++ developer can benefit from.
Let's build them for the every-day Joe C++ developer of tomorrow. Stabilizing a library takes time, but we need the library to be used in order to learn more. Being in Boost would help achieve that.
That being my position on the issue, my acceptance vote is conditional on hana supporting at least two released versions of mainstream compilers. Given that gcc support seems pretty close, that shouldn't be hard to achieve.
I agree that at least two compilers seems a minimum.
Best,
Vicente
On Wed, Jun 17, 2015 at 3:29 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 17/06/15 21:55, David Sankel a écrit :
On Wed, Jun 10, 2015 at 3:19 AM, Glen Fernandes wrote:
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's: * Design
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
I agree that there is the Core of the library and the different types and algorithms. I agree with having highly composable types and algorithms, but why do you prefer them split into several libraries?
Thanks for the great question, Vicente.
Is it because we need time to review each one of the Concepts, types and algorithms?
That's a bit of a strawman, but yes, I'd like to review core hana (which is mostly what is used in the examples) and then review major libraries built on hana (such as the Haskell typeclass stack that is currently included). My main interest in a simpler release is that we'll get more people attempting alternative strategies for, say, doing an applicative/monad/etc. stack built on hana. That experience and competition will be better for the C++ communities we serve. My secondary interest in a smaller release is related to the flip-side of Parkinson's law of triviality. Perhaps it should be called reactor-building? hana is very large and I think there are few who are going to do all its subcomponents justice in a review. I know I'm not.
* Implementation
The code itself looks to be well structured and well documented.
Unfortunately hana only works with one compiler: clang. While I agree that Boost shouldn't need to support Visual C++ 6.0 anymore, I believe this is going too far in the opposite direction.
Why? We have discussed this a lot of times, and it is up to the library author to state what compilers the library supports.
Given your comment below ("I agree that at least two compilers seems a minimum.") it seems that we agree both that a line needs to be drawn and where it should be drawn in this particular instance. While it is up to the library author which compilers to support, it is up to us reviewers to decide if their proposed choices are acceptable.
The home page states that boost libraries "are intended to be widely useful, and usable across a broad spectrum of applications". I've always interpreted that statement to be in a practical rather than theoretical sense and I don't think hana meets that criteria.
The intent of the library is to be widely useful, and it will be. Let's let time do its job.
The boost home page does not state that boost libraries "are intended to *eventually* be widely useful, and usable across a broad spectrum of applications". It's a subtle, but big difference. One sentence is attractive to the majority of companies who make real software, and the other is not. C++, more or less, is a language for engineers who make multi-platform, large-scale, high-performance, long-lived applications and libraries. I like that Boost has been a library collection for these folks and hope it stays that way.
Many other Boost authors have made heroic efforts to meet that criteria and the reputation of Boost is due, in no small part, to those efforts.
You are right that some of us are spending a lot of time trying to cover multiple non-conforming compilers and also multiple C++ versions. I believe that this is one of the things slowing the growth of Boost.
I think you must not mean growth in adoption because wide compatibility is one of the biggest arguments in favor of adopting boost in a company. What kind of growth do you mean?
I do appreciate the argument that making use of new features encourages compiler implementers to implement them. I maintain, however, that this isn't Boost's job. Boost provides high quality libraries that the every-day Joe C++ developer can benefit from.
Let's build them for the every-day Joe C++ developer of tomorrow. Stabilizing a library takes time, but we need the library to be used in order to learn more. Being in Boost would help achieve that.
I agree that a library being in boost will imply more usage. People use new libraries in Boost in a large part because they trust in the quality of libraries in boost. That reputation was earned, in turn, because quality libraries [from an engineering perspective] have wide compatibility. hana, with one supported compiler, isn't there yet.
-- David
Le 18/06/15 05:59, David Sankel a écrit :
On Wed, Jun 17, 2015 at 3:29 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 17/06/15 21:55, David Sankel a écrit :
On Wed, Jun 10, 2015 at 3:19 AM, Glen Fernandes
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's: * Design
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
I agree that there is the Core of the library and the different types and algorithms. I agree with having highly composable types and algorithms, but why do you prefer them split into several libraries?
Thanks for the great question, Vicente.
Is it because we need time to review each one of the Concepts, types and algorithms?
That's a bit of a strawman, but yes, I'd like to review core hana (which is mostly what is used in the examples) and then review major libraries built on hana (such as the Haskell typeclass stack that is currently included).
My main interest in a simpler release is that we'll get more people attempting alternative strategies for, say, doing an applicative/monad/etc. stack built on hana. That experience and competition will be better for the C++ communities we serve.
My secondary interest in a smaller release is related to the flip-side of Parkinson's law of triviality. Perhaps it should be called reactor-building? hana is very large and I think there are few who are going to do all its subcomponents justice in a review. I know I'm not.
I agree that I have not taken the time to review every concept and algorithm. And I don't share the design of some of them. The question is whether having them in Boost.Hana prevents others from proposing their own stack of Haskell-like type classes. We have not discussed very much the way the mapping of a type to a concept is currently done. I preferred the old way of mapping all the operations at once. I believe that there will be a problem in identifying where the Core of Hana is. David, Louis, do you have a clear idea of what this Hana/Core could be?
* Implementation
The code itself looks to be well structured and well documented. Unfortunately hana only works with one compiler: clang. While I agree that Boost shouldn't need to support Visual C++ 6.0 anymore, I believe this is going too far in the opposite direction.
Why? We have discussed this a lot of times, and it is up to the library author to state what compilers the library supports.
Given your comment below ("I agree that at least two compilers seems a minimum.") it seems that we agree both that a line needs to be drawn and where it should be drawn in this particular instance.
While it is up to the library author which compilers to support, it is up to us reviewers to decide if their proposed choices are acceptable.
I would say it is up to the review manager ;-)
The home page states that boost
libraries "are intended to be widely useful, and usable across a broad spectrum of applications". I've always interpreted that statement to be in a practical rather than theoretical sense and I don't think hana meets that criteria.
The intent of the library is to be widely useful, and it will be. Let's let time do its job.
The boost home page does not state that boost libraries "are intended to *eventually* be widely useful, and usable across a broad spectrum of applications". It's a subtle, but big difference. One sentence is attractive to the majority of companies who make real software, and the other is not.
C++, more or less, is a language for engineers who make multi-platform, large-scale, high-performance, long-lived applications and libraries. I like that Boost has been a library collection for these folks and hope it stays that way.
What is wrong with having Boost libraries that some companies cannot use because they do not work with the compilers they use? I find that it is a good thing if there is at least one company that can use them. And Boost is not only for companies. There are a lot of people who have more freedom in their choice of compiler version than a company does, and who would profit from having Hana in Boost.
Many other Boost authors have made heroic efforts to meet that
criteria and the reputation of Boost is due, in no small part, to those efforts.
You are right that some of us are spending a lot of time trying to cover multiple non-conforming compilers and also multiple C++ versions. I believe that this is one of the things slowing the growth of Boost.
I think you must not mean growth in adoption because wide compatibility is one of the biggest arguments in favor of adopting boost in a company. What kind of growth do you mean?
I mean growth in more useful libraries. Each company has its own criteria about which third-party libraries can be used. Portability is one of them, but not the only one. And as I said, there is more in the world than companies.
I do appreciate the argument that making use of new features encourages compiler implementers to implement them. I maintain, however, that this isn't Boost's job. Boost provides high quality libraries that the every-day Joe C++ developer can benefit from.
Let's build them for the every-day Joe C++ developer of tomorrow. Stabilizing a library takes time, but we need the library to be used in order to learn more. Being in Boost would help achieve that.
I agree that a library being in boost will imply more usage. People use new libraries in Boost in a large part because they trust in the quality of libraries in boost. That reputation was earned, in turn, because quality libraries [from an engineering perspective] have wide compatibility. hana, with one supported compiler, isn't there yet.
Are you suggesting postponing the inclusion of Hana for 3-6 months? I will not be against freezing the inclusion of Hana in a Boost release until there are at least two supported compilers. It could, however, be included on the develop branch in order to be ready to be included on that day ;-)
Vicente
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Vicente J. Botet Escriba
Sent: 18 June 2015 07:06
To: boost@lists.boost.org
Subject: Re: [boost] [Hana] Formal review
Le 18/06/15 05:59, David Sankel a écrit :
On Wed, Jun 17, 2015 at 3:29 PM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
Are you suggesting postponing the inclusion of Hana for 3-6 months? I will not be against freezing the inclusion of Hana in a Boost release until there are at least two supported compilers. It could, however, be included on the develop branch in order to be ready to be included on that day ;-)
Let's not be unnecessarily bureaucratic about this. If users can make use of nearly all of Hana on one compiler, that's good enough for me. There have always been things that don't work on all compilers, right from the start. And this is still the case - many libraries don't show all-green on the test matrix. Apart from library documentation that may advise "don't even think about using compiler Y", there is always the test matrix to guide users about what is likely to work: What You See Is What You Get. But in the end it is always "Suck It and See".
Paul
---
Paul A. Bristow
Prizet Farmhouse, Kendal, UK LA8 8AB
+44 (0) 1539 561830
Every reviewer and contributor to this discussion is, of course, entitled to their view of:
- What makes a library useful (including how many compilers are supported)?
- What the standard for Boost libraries should be (minimum compiler support)?
While it may be beneficial to discuss those two questions (perhaps even independent of Hana's review, for all of Boost, and with a different subject), I just want to make certain that nobody is under the impression that Hana is ineligible for review because of compiler support.
Here is my take on it:
1. The current requirements for Boost libraries do advise authors of only:[1]
   a. "Aim for ISO Standard C++"
   b. "There is no requirement that a library run on C++ compilers which do not conform to the ISO standard."
   c. "There is no requirement that a library run on any particular C++ compiler. Boost contributors often try to ensure their libraries work with popular compilers."
2. Compiler vendors today are more actively trying to conform to the standard. In my mind it is a question of "when" g++ will support the necessary features, not "if". These aren't contentious things that anyone is concerned will never be supported. (e.g. It is not like we're back in 2003 and someone has submitted a library that is littered with unconditional use of the 'export' keyword.)
3. Usefulness is more important than [current] compiler support. If [future] g++ 5.3 and clang 3.5 users get to enjoy useful libraries, these libraries can drive language conformance in other compiler vendors. (Useful and popular Boost libraries driving minimum C++ language feature support in compiler vendors is also an appealing thought.)
Glen
[1] http://www.boost.org/development/requirements.html
On June 18, 2015 1:20:39 PM EDT, Glen Fernandes wrote:
Here is my take on it:
1. The current requirements for Boost libraries do advise authors of only:[1]
   a. "Aim for ISO Standard C++"
   b. "There is no requirement that a library run on C++ compilers which do not conform to the ISO standard."
   c. "There is no requirement that a library run on any particular C++ compiler. Boost contributors often try to ensure their libraries work with popular compilers."
You omitted an important paragraph:
"Since there is no absolute way to prove portability, many boost submissions demonstrate practical portability by compiling and executing correctly with two different C++ compilers, often under different operating systems. Otherwise reviewers may disbelieve that porting is in fact practical."
___
Rob
(Sent from my portable computation engine)
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Rob Stewart
Sent: 18 June 2015 18:43
To: boost@lists.boost.org
Subject: Re: [boost] [Hana] Formal review
On June 18, 2015 1:20:39 PM EDT, Glen Fernandes wrote:
Here is my take on it:
1. The current requirements for Boost libraries do advise authors of only:[1]
   a. "Aim for ISO Standard C++"
   b. "There is no requirement that a library run on C++ compilers which do not conform to the ISO standard."
   c. "There is no requirement that a library run on any particular C++ compiler. Boost contributors often try to ensure their libraries work with popular compilers."
You omitted an important paragraph:
"Since there is no absolute way to prove portability, many boost submissions demonstrate practical portability by compiling and executing correctly with two different C++ compilers, often under different operating systems. Otherwise reviewers may disbelieve that porting is in fact practical."
Most Boost submissions do indeed do this - but this one is pushing compiler technology so hard that it doesn't work - yet. This not-qualified non-reviewer still believes that Hana is portable (and can be made so). This is a chicken-and-egg issue - I believe it will probably improve compilers if we accept Hana into Boost.
Paul
---
Paul A. Bristow
Prizet Farmhouse, Kendal, UK LA8 8AB
+44 (0) 1539 561830
On 17 Jun 2015 at 21:59, David Sankel wrote:
That's a bit of a strawman, but yes, I'd like to review core hana (which is mostly what is used in the examples) and then review major libraries built on hana (such as the Haskell typeclass stack that is currently included).
That's a fair point. Are you proposing a Hana core and a Hana applications split? I see particular value in this if a Hana core library can become MSVC compatible much sooner than a Hana applications library. For me, the lack of MSVC support - even with winclang getting ever closer to replacing MSVC - is a showstopper to me using Hana at all in my own code. And from last month onwards I stopped supporting VS2013 in my new code, so I'm hardly being backwards. There is precedent for a library not supporting MSVC initially on entering Boost with support being added later, though if I remember correctly it hasn't been common since VS2003 which was the first MSVC with partial template specialisation.
The boost home page does not state that boost libraries "are intended to *eventually* be widely useful, and usable across a broad spectrum of applications". It's a subtle, but big difference. One sentence is attractive to the majority of companies who make real software, and the other is not.
C++, more or less, is a language for engineers who make multi-platform, large-scale, high-performance, long-lived applications and libraries. I like that Boost has been a library collection for these folks and hope it stays that way.
Also a fair point. However, probably a majority of Boost users are on toolsets at least a decade old, and won't be able even to conduct feasibility studies of C++11 libraries for years yet. If you leave the Boost ecosystem, things diverge: some users are on bleeding-edge clang-only C++, others on a pre-98 C++ level. I'd say there are a lot more of the latter, especially in games and embedded systems, than the former.
Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Niall Douglas writes:
On 17 Jun 2015 at 21:59, David Sankel wrote:
That's a bit of a strawman, but yes, I'd like to review core hana (which is mostly what is used in the examples) and then review major libraries built on hana (such as the Haskell typeclass stack that is currently included).
That's a fair point. Are you proposing a Hana core and a Hana applications split?
I see particular value in this if a Hana core library can become MSVC compatible much sooner than a Hana applications library. For me, the lack of MSVC support - even with winclang getting ever closer to replacing MSVC - is a showstopper to me using Hana at all in my own code. And from last month onwards I stopped supporting VS2013 in my new code, so I'm hardly being backwards.
I'm sorry, but I fear MSVC support will not be possible until they have better standard compliance. Metaprogramming libraries have always been hard on compilers. Hana, being in addition a _modern_ metaprogramming library, is hard even on the most cutting-edge compilers. There's just no way Hana (even if split into a core sub-library) could work on MSVC. The part that's hard on the compiler is not the concepts, it's the implementation itself.
[...]
Regards, Louis
On 6/18/15 3:55 AM, David Sankel wrote:
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's: * Design
The core technique of combining value and type expressions is solid and makes metaprogramming easier and, as a bonus, improves compilation speeds.
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
I still need to find time to make a formal review, but allow me to concur with this sentiment. Hana is a nice library, well implemented and executed. I'd vote yes for its acceptance into boost. But will I use it? That's a negative. I share the same opinion as Peter Dimov and Eric Niebler that C++11 makes it very (extremely!) easy to do TMP. I'd prefer to use a very small TMP library like Peter's or Eric's with a very small (close to zero) conceptual overhead, or none at all. My current inclination is to use Eric's Tiny Metaprogramming Library. In my opinion, less is more in modern post-C++11 TMP.
Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
Le 18/06/15 02:38, Joel de Guzman a écrit :
On 6/18/15 3:55 AM, David Sankel wrote:
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's: * Design
The core technique of combining value and type expressions is solid and makes metaprogramming easier and, as a bonus, improves compilation speeds.
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
I still need to find time to make a formal review, but allow me to concur with this sentiment. Hana is a nice library, well implemented and executed. I'd vote yes for its acceptance into boost. But will I use it? That's a negative.
Why? The interface? The compile-time performance? Is it because Hana is too generic?
I share the same opinion as Peter Dimov and Eric Niebler that C++11 makes it very (extremely!) easy to do TMP. I'd prefer to use a very small TMP library like Peter's or Eric's with a very small (close to zero) conceptual overhead, or none at all. My current inclination is to use Eric's Tiny Metaprogramming Library. In my opinion, less is more in modern post-C++11 TMP.
I like Eric's and Peter's meta-programming approaches. They are close to what we have been doing in meta-programming for years, and the use of C++11 (C++14) makes them shorter and more elegant. As I already said in other threads (GSoC), I want a C++11 meta-programming library like Meta in Boost. I would also be interested in a comparison of the compile-time performance of Meta and Hana.
Vicente
On 6/18/15 1:43 PM, Vicente J. Botet Escriba wrote:
Le 18/06/15 02:38, Joel de Guzman a écrit :
On 6/18/15 3:55 AM, David Sankel wrote:
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's: * Design
The core technique of combining value and type expressions is solid and makes metaprogramming easier and, as a bonus, improves compilation speeds.
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
I still need to find time to make a formal review, but allow me to concur with this sentiment. Hana is a nice library, well implemented and executed. I'd vote yes for its acceptance into boost. But will I use it? That's a negative.
Why? The interface? The compile-time performance? Is it because Hana is too generic?
As I said below:
I share the same opinion as Peter Dimov and Eric Niebler that C++11 makes it very (extremely!) easy to do TMP. I'd prefer to use a very small TMP library like Peter's or Eric's with a very small (close to zero) conceptual overhead, or none at all. My current inclination is to use Eric's Tiny Metaprogramming Library. In my opinion, less is more in modern post-C++11 TMP.
A very minimal subset is all that I need, no more. C++14 is rich enough to do TMP easily unlike before when MPL and Fusion were invented. Perhaps you don't even need a TMP library anymore! As a matter of fact, Thomas Heller and I are working on a phoenix-lite experiment with zero TMP library dependencies (only uses std, nothing more). Compile times? Blink of an eye!
Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
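For concreteness, the kind of "small, close-to-zero-overhead" TMP being referred to fits in a few lines of C++11. The sketch below uses illustrative names; it is not Peter Dimov's or Eric Niebler's actual code.

    #include <type_traits>

    // A tiny type list and one algorithm over it.
    template <class... Ts> struct type_list {};

    template <template <class> class F, class L> struct transform;
    template <template <class> class F, class... Ts>
    struct transform<F, type_list<Ts...>> {
        // Apply the metafunction F to every element of the list.
        using type = type_list<typename F<Ts>::type...>;
    };

    template <template <class> class F, class L>
    using transform_t = typename transform<F, L>::type;

    // Usage: map std::add_pointer over a list of types.
    static_assert(std::is_same<
        transform_t<std::add_pointer, type_list<int, char, double>>,
        type_list<int*, char*, double*>
    >::value, "");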
On 6/17/15 11:57 PM, Joel de Guzman wrote:
A very minimal subset is all that I need, no more. C++14 is rich enough to do TMP easily unlike before when MPL and Fusion were invented. Perhaps you don't even need a TMP library anymore! As a matter of fact, Thomas Heller and I are working on a phoenix-lite experiment with zero TMP library dependencies (only uses std, nothing more). Compile times? Blink of an eye!
I've been a long-time user of MPL. Now I'm getting to know C++11. From reading this list - in particular Peter Dimov's "Simple C++ TMP" - I'm coming to believe that this is the simplest, lowest-overhead approach for me. Peter's document is another masterpiece. I would like to see it reformatted in a way which is easier to use as a reference - sort of a C++11+ TMP "cheatsheet". I'm actually considering doing this as a personal exercise in order to really understand and internalize it.
Robert Ramey
Joel de Guzman writes:
[...]
A very minimal subset is all that I need, no more. C++14 is rich enough to do TMP easily unlike before when MPL and Fusion were invented. Perhaps you don't even need a TMP library anymore! As a matter of fact, Thomas Heller and I are working on a phoenix-lite experiment with zero TMP library dependencies (only uses std, nothing more). Compile times? Blink of an eye!
Joel, I'd be curious to see this phoenix-lite experiment. In particular, I'd be curious to give a shot at it with Hana when I get some time. It might be one of those examples that really do not benefit from a TMP library, but so far the code always shrinks in size when you use Hana instead of something else (or nothing). If you're using std::tuple, the compile-times should also go down by at least a bit. Regards, Louis
On 6/19/15 3:07 AM, Louis Dionne wrote:
Joel de Guzman writes:
[...]
A very minimal subset is all that I need, no more. C++14 is rich enough to do TMP easily unlike before when MPL and Fusion were invented. Perhaps you don't even need a TMP library anymore! As a matter of fact, Thomas Heller and I are working on a phoenix-lite experiment with zero TMP library dependencies (only uses std, nothing more). Compile times? Blink of an eye!
Joel, I'd be curious to see this phoenix-lite experiment. In particular, I'd be curious to give a shot at it with Hana when I get some time. It might be one of those examples that really do not benefit from a TMP library, but so far the code always shrinks in size when you use Hana instead of something else (or nothing). If you're using std::tuple, the compile-times should also go down by at least a bit.
It's not just about compile time, no. What's more important to me is the conceptual overhead of using another library to do simple things. TMP in what were once TMP-heavy libraries used to take a significant amount of code in C++03. Not anymore. Then of course another factor is minimizing dependencies. The fewer dependencies the better. Zero is ideal. There are other issues, such as debuggability of TMP code using the lambda trick, if you are still using that for CT efficiency, but I guess I need to dive deeper to give a real review. Not being able to debug TMP code is a showstopper for me.
Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
Joel de Guzman writes:
On 6/19/15 3:07 AM, Louis Dionne wrote:
[...]
It's not just about compile time, no. What's more important to me is the conceptual overhead of using another library to do simple things. TMP in what were once TMP-heavy libraries used to take a significant amount of code in C++03. Not anymore. Then of course another factor is minimizing dependencies. The fewer dependencies the better. Zero is ideal.
I understand your point about minimizing dependencies. However, the same goes for any kind of library, TMP or not. At some point, one must draw a line and accept having external dependencies, or eternally reimplement everything from scratch. I guess you are the only one who can decide where that line should be drawn for your own projects, and that's OK. However, even though you may be content with the ease of writing TMP code in C++14, I think you might be surprised to see how much shorter it could be if you used Hana. This is, for example, the case of Zach's Units-BLAS library. It was really quite short with C++14 only, but it was even shorter with Hana. And the code was written with a higher level of abstraction. And in that case, there was even a compile-time speedup over std::tuple + handwritten stuff. Anyway, I'd like to have a look at your Phoenix-lite project, to see if it could be written using Hana, and how so. It is also definitely possible that no gains can be obtained from using Hana in that project, in which case that would give me a good example of what _not_ to use Hana for.
There are other issues, such as debuggability of TMP code using the lambda trick, if you are still using that for CT efficiency, but I guess I need to dive deeper to give a real review. Not being able to debug TMP code is a showstopper for me.
I'm not using the lambda trick anymore because of shabby support for generic lambdas and the lack of constexpr lambdas. Regards, Louis
On 6/19/15 9:12 AM, Louis Dionne wrote:
Joel de Guzman writes:
On 6/19/15 3:07 AM, Louis Dionne wrote:
[...]
It's not just about compile time, no. What's more important to me is the conceptual overhead of using another library to do simple things. TMP in what were once TMP-heavy libraries used to take a significant amount of code in C++03. Not anymore. Then of course another factor is minimizing dependencies. The fewer dependencies the better. Zero is ideal.
I understand your point about minimizing dependencies. However, the same goes for any kind of library, TMP or not. At some point, one must draw a line and accept having external dependencies, or eternally reimplement everything from scratch. I guess you are the only one who can decide where that line should be drawn for your own projects, and that's OK.
However, even though you may be content with the ease of writing TMP code in C++14, I think you might be surprised to see how much shorter it could be if you used Hana. This is, for example, the case of Zach's Units-BLAS library. It was really quite short with C++14 only, but it was even shorter with Hana. And the code was written with a higher level of abstraction. And in that case, there was even a compile-time speedup over std::tuple + handwritten stuff.
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers. So where is the speedup? is it because of std::tuple? If so, why don't you decouple your nice tuple implementation and offer it separately? Or is it the handwritten stuff? If so why? Why can't Zacc use the same tricks that you used in Hana?
Anyway, I'd like to have a look at your Phoenix-lite project, to see if it could be written using Hana, and how so. It is also definitely possible that no gains can be obtained from using Hana in that project, in which case that would give me a good example of what _not_ to use Hana for.
There are other issues, such as debuggability of TMP code using the lambda trick, if you are still using that for CT efficiency, but I guess I need to dive deeper to give a real review. Not being able to debug TMP code is a showstopper for me.
I'm not using the lambda trick anymore because of shabby support for generic lambdas and the lack of constexpr lambdas.
Ok, so what new tricks are you using to speed up compile time then? In my experience, the main reason for excessive compile time is the long type names. How are you able to overcome that?
Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
On 06/19/2015 06:35 AM, Joel de Guzman wrote:
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers. So where
It seems to me that most of your general arguments against using Hana could be equally applied against using Spirit.
On 6/19/15 7:08 PM, Bjorn Reese wrote:
On 06/19/2015 06:35 AM, Joel de Guzman wrote:
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers. So where
It seems to me that most of your general arguments against using Hana could be equally applied against using Spirit.
Yes! And that's exactly what I am doing with X3 :-) Minimizing dependencies. Regards, -- Joel de Guzman http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
On 6/20/15 6:06 AM, Joel de Guzman wrote:
On 6/19/15 7:08 PM, Bjorn Reese wrote:
On 06/19/2015 06:35 AM, Joel de Guzman wrote:
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers. So where
It seems to me that most of your general arguments against using Hana could be equally applied against using Spirit.
Yes! And that's exactly what I am doing with X3 :-) Minimizing dependencies.
Let me clarify, if that is too terse. I am *not* against using a TMP library. It's just that my preference now is for simpler libraries; as simple as possible, but still providing most (95+%) of the functionalities. That's what we are aiming for with X3. Less is more, simpler is better. Most of TMP usage can be distilled in a single header file (again referring to Eric's and Peter's works). If I can do the same with X3, that would be super!
Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
Joel de Guzman wrote:
Let me clarify, if that is too terse. I am *not* against using a TMP library. It's just that my preference now is for simpler libraries; as simple as possible, but still providing most (95+%) of the functionalities. That's what we are aiming for with X3. Less is more, simpler is better. Most of TMP usage can be distilled in a single header file (again referring to Eric's and Peter's works).
Truth be told, I'm not a big fan of single header libraries. A single header does have its benefits, but on the other hand, every change causes a recompilation of everything using anything from the library. Whereas if the library is split into one header per component, client code using mp_this is not affected when mp_that.hpp changes. So I wouldn't hold the number of headers in itself against Louis. Fine-grained is not without its uses.
On 6/22/15 6:06 AM, Peter Dimov wrote:
Joel de Guzman wrote:
Let me clarify, if that is too terse. I am *not* against using a TMP library. It's just that my preference now is for simpler libraries; as simple as possible, but still providing most (95+%) of the functionalities. That's what we are aiming for with X3. Less is more, simpler is better. Most of TMP usage can be distilled in a single header file (again referring to Eric's and Peter's works).
Truth be told, I'm not a big fan of single header libraries. A single header does have its benefits, but on the other hand, every change causes a recompilation of everything using anything from the library. Whereas if the library is split into one header per component, client code using mp_this is not affected when mp_that.hpp changes. So I wouldn't hold the number of headers in itself against Louis. Fine-grained is not without its uses.
Understood. I too did not like single headers, and I know exactly what you mean. I've split all my headers to be as fine-grained as possible. I've come to realize, however, that in some cases this is good! There are cases where, in most uses, you will certainly need all the core functionality anyway, so it makes sense to just group it all together. I'm totally fine with a few header files plus another forwarding header that groups the core -- same thing. What's essential is keeping the core to a minimum.
Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
Understood. I too did not like single headers, and I know exactly what you mean. I've split all my headers to be as fine-grained as possible. I've come to realize, however, that in some cases this is good! There are cases where, in most uses, you will certainly need all the core functionality anyway, so it makes sense to just group it all together. I'm totally fine with a few header files plus another forwarding header that groups the core -- same thing. What's essential is keeping the core to a minimum.
You can have both: split headers, then aggregate those into bigger headers, and then aggregate those aggregations into an all-in-one #include. However when headers are too fine grained it's almost impossible for the user of a library to know what should be included.
Edouard wrote:
However when headers are too fine grained it's almost impossible for the user of a library to know what should be included.
With properly fine-grained headers, it's trivial to know what should be included - if you use, for instance, mp_transform, you include mp_transform.hpp. That's overkill if many mp_ things are one-liners, of course. But my point is that "not knowing what to include" is not one of the problems with "too fine grained".
Le 22/06/15 12:42, Peter Dimov a écrit :
Edouard wrote:
However when headers are too fine grained it's almost impossible for the user of a library to know what should be included.
With properly fine-grained headers, it's trivial to know what should be included - if you use, for instance, mp_transform, you include mp_transform.hpp. That's overkill if many mp_ things are one-liners, of course. But my point is that "not knowing what to include" is not one of the problems with "too fine grained".
I like fine-grained headers because they state explicitly the dependencies between the different classes and/or functions.
Vicente
On Thu, Jun 18, 2015 at 11:35 PM, Joel de Guzman wrote:
On 6/19/15 9:12 AM, Louis Dionne wrote:
Joel de Guzman writes:
On 6/19/15 3:07 AM, Louis Dionne wrote:
[...]
It's not just about compile time, no. What's more important to me is the conceptual overhead of using another library to do simple things. TMP in what were once TMP-heavy libraries used to take a significant amount of code in C++03. Not anymore. Then of course another factor is minimizing dependencies. The fewer dependencies the better. Zero is ideal.
I understand your point about minimizing dependencies. However, the same goes for any kind of library, TMP or not. At some point, one must draw a line and accept having external dependencies, or eternally reimplement everything from scratch. I guess you are the only one who can decide where that line should be drawn for your own projects, and that's OK.
However, even though you may be content with the ease of writing TMP code in C++14, I think you might be surprised to see how much shorter it could be if you used Hana. This is, for example, the case of Zach's Units-BLAS library. It was really quite short with C++14 only, but it was even shorter with Hana. And the code was written with a higher level of abstraction. And in that case, there was even a compile-time speedup over std::tuple + handwritten stuff.
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers. So where is the speedup? is it because of std::tuple? If so, why don't you decouple your nice tuple implementation and offer it separately? Or is it the handwritten stuff? If so why? Why can't Zacc use the same tricks that you used in Hana?
I was using effectively the same trick in at least one place, and more ad hoc ones in others. However, after partially converting to Hana, I could see a smaller, and thus more maintainable, code base. As for the "32k header mega library", if it builds faster than the alternative and I don't need to understand much of that interface anyway, I find I don't really care. Zach
Joel de Guzman writes:
On 6/19/15 9:12 AM, Louis Dionne wrote:
[...]
However, even though you may be content with the ease of writing TMP code in C++14, I think you might be surprised to see how much shorter it could be if you used Hana. This is, for example, the case of Zach's Units-BLAS library. It was really quite short with C++14 only, but it was even shorter with Hana. And the code was written with a higher level of abstraction. And in that case, there was even a compile-time speedup over std::tuple + handwritten stuff.
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers.
Regarding the "32k header mega library" thing, I'd like to precise that it's
not as bad as it seems. First, part of it is just documentation. Second, that's
a _real_ 32 kLOC, not a 100 kLOC of dependencies hidden behind a 5 kLOC library.
Hana as no dependencies except the
So where is the speedup? is it because of std::tuple? If so, why don't you decouple your nice tuple implementation and offer it separately? Or is it the handwritten stuff? If so why?
The speedup is partly the tuple implementation, but also the tight coupling
between some algorithms and that tuple implementation. There are also a lot
of small decisions you can take in your library to reduce the compile times,
even if individually they seem to only have a minor impact. For example, I
decided to use static_cast
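One example of the kind of micro-decision being hinted at - assuming it is the well-known trick of forwarding with a plain cast instead of std::forward, which may not be what Hana actually does - is sketched below.

    // Each static_cast is a no-op for the compiler, whereas std::forward adds
    // one more function template instantiation per forwarded argument.
    // Illustration only; not Hana's actual source.
    template <typename F, typename... Args>
    constexpr decltype(auto) invoke_forwarded(F&& f, Args&&... args) {
        // Behaves exactly like f(std::forward<Args>(args)...).
        return static_cast<F&&>(f)(static_cast<Args&&>(args)...);
    }

    int main() {
        auto add = [](int a, int b) { return a + b; };
        return invoke_forwarded(add, 2, 3) == 5 ? 0 : 1;
    }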
Why can't Zach use the same tricks that you used in Hana?
He can, but then he'll be rewriting quite a bit of code he wishes he didn't have to rewrite. It is also unlikely that he'd end up with something as fast, for the simple reason that he does not want to spend three days optimizing the compile-time of an algorithm, like I sometimes do. That's nothing new; you spend a lot of time making a library really good at what it does, and then other people use it. And another day, you'll use one of their libraries.
Anyway, I'd like to have a look at your Phoenix-lite project, to see if it could be written using Hana, and how so. It is also definitely possible that no gains can be obtained from using Hana in that project, in which case that would give me a good example of what _not_ to use Hana for.
There are other issues, such as debuggability of TMP code using the lambda trick, if you are still using that for CT efficiency, but I guess I need to dive deeper to give a real review. Not being able to debug TMP code is a showstopper for me.
I'm not using the lambda trick anymore because of shabby support for generic lambdas and the lack of constexpr lambdas.
Ok, so what new tricks are you using to speed up compile time then? In my experience, the main reason for excessive compile time are the long type names. How are you able to overcome that?
Nope, the type names are still long AFAICT. Hana is not magic; it won't give you good compile-times and good error messages suddenly. We're still in C++. However, it tries very hard to be clever whenever it can, and if you write your code in a half-decent way, you should end up with something OK at compile-time. Of course, I expect writing a complex metaprogram with Hana will also result in long compile-times, but my goal is to make it faster or on-par with something you would write yourself. To do much better, we would probably need a compiler-provided closure type. Basically, std::tuple as a compiler intrinsic. I think this could be lightning fast, but we're not there yet. Regards, Louis
However, even though you may be content with the ease of writing TMP code in C++14, I think you might be surprised to see how much shorter it could be if you used Hana. This is, for example, the case of Zach's Units-BLAS library. It was really quite short with C++14 only, but it was even shorter with Hana. And the code was written with a higher level of abstraction. And in that case, there was even a compile-time speedup over std::tuple + handwritten stuff.
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers.
Louis Dionne respondeth:
Regarding the "32k header mega library" thing, I'd like to precise that it's not as bad as it seems. First, part of it is just documentation. Second, that's a _real_ 32 kLOC, not a 100 kLOC of dependencies hidden behind a 5 kLOC library. Hana as no dependencies except the
, <utility> and <cstddef> headers, which you probably already use anyway. In comparison, including almost any other Boost library will pull in a lot more than 32 kLOCs in dependencies.
This.
For TMP libraries, and particularly for their application in large production codebases, I conclude the following pattern tends to hold: MORE IS LESS.
Paraphrased, this would be, "More reusable code in libraries means less
application-specific code is necessary, so the program is more feature-rich
and functionality evolves better over time."
Explanation: A "richer" (well-defined/implemented) library tends to
provide non-linear value improvements over a "smaller" library.
In large codebases, a "more-complete" library is superior because:
(a) (Real-world) Corner cases are addressed. When confronted with a given
corner case that is unaddressed by the limited ambitions of the smaller
library, you write adapter layers which can be brittle or expensive in many
ways, and which possibly don't really do what you wanted. These tend to be
inconsistently adapted/used across the codebase, which is bad, because it
should have been centralized somehow (such as through a reusable library).
(b) It becomes unnecessary to also rely upon other (overlapping) libraries
that usually offer a different metaphor, but which sufficiently overlap
with functionality as to cause confusion among developers (because there is
now, "more than one way" to do some things).
(c) Efficiencies are possible within the library implementation. Those
additional corner cases can be addressed and short-circuited, but as we all
know, for TMP this often relies upon more (partial-)specializations
(requiring more code).
I like Hana's approach because it is unifying, and presents a single
consistent model. Like Zach said, I don't see it as "big" -- I only care
about that small interface that is relevant for my needs, and my
production-code is smaller/less.
That 32K includes extensive comments (which can be stripped if necessary), but I think it's particularly awesome that it is truly uncoupled from almost everything (needing only the <type_traits>, <utility> and <cstddef> headers mentioned above).
To do much better, we would probably need a compiler-provided closure type. Basically, std::tuple as a compiler intrinsic. I think this could be lightning fast, but we're not there yet.
Hey, that's a really great idea. I could come up with lots of uses for that. --charley
On 6/19/15 11:45 PM, charleyb123 . wrote:
However, even though you may be content with the ease of writing TMP code in C++14, I think you might be surprised to see how much shorter it could be if you used Hana. This is, for example, the case of Zach's Units-BLAS library. It was really quite short with C++14 only, but it was even shorter with Hana. And the code was written with a higher level of abstraction. And in that case, there was even a compile-time speedup over std::tuple + handwritten stuff.
"really quite short" is good enough if the cost is having to depend on a "32k header mega library" as David notes. Not to mention having to learn another library for me and for all future maintainers.
Louis Dionne respondeth:
Regarding the "32k header mega library" thing, I'd like to precise that it's not as bad as it seems. First, part of it is just documentation. Second, that's a _real_ 32 kLOC, not a 100 kLOC of dependencies hidden behind a 5 kLOC library. Hana as no dependencies except the
, <utility> and <cstddef> headers, which you probably already use anyway. In comparison, including almost any other Boost library will pull in a lot more than 32 kLOCs in dependencies. This.
For TMP libraries, and particularly for their application in large production codebases, I conclude the following pattern tends to hold: MORE IS LESS.
Paraphrased, this would be, "More reusable code in libraries means less application-specific code is necessary, so the program is more feature-rich and functionality evolves better over time."
Explanation: A "richer" (well-defined/implemented) library tends to provide non-linear value improvements over a "smaller" library.
In large codebases, a "more-complete" library is superior because:
[snip] Well, that sums it up. So let us just agree to disagree. I'll state my opinion again: less is more in post-C++11 TMP. 98% of TMP can be done in a simple single header file, as was done by Peter Dimov and Eric Niebler, with a very simple, straightforward interface. 98% is more than enough for me. I've written heavy TMP most of my life as a C++ programmer, and the weight of the code was because of the limitations of C++03 (and prior!). Let me just make this clear: I am for Hana's acceptance into boost. It is cool and well implemented. Many people will find a use for it. It's just not for me.
Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
Joel de Guzman
[...]
Let me just make this clear: I am for Hana's acceptance into Boost. It is cool and well implemented. Many people will find a use for it. It's just not for me.
I'm fine with that, really. But just to be clear, would you use a TMP library that would be basically Hana core:
- an efficient tuple implementation
- optimized algorithms (filter, transform, for_each, nothing fancy)
- no fancy FP concepts
- a couple of header files, no more

Would you use that, or would you still prefer to DIY?

Regards,
Louis
On 6/20/15 6:57 AM, Louis Dionne wrote:
Joel de Guzman
writes: [...]
Let me just make this clear: I am for Hana's acceptance into Boost. It is cool and well implemented. Many people will find a use for it. It's just not for me.
I'm fine with that, really. But just to be clear, would you use a TMP library that would be basically Hana core:
- an efficient tuple implementation
- optimized algorithms (filter, transform, for_each, nothing fancy)
- no fancy FP concepts
- a couple of header files, no more
Would you use that, or would you still prefer to DIY?
Yes! Definitely! Isn't that what this thread, started by David, is all about? Let me quote:

My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.

Make it as simple as Eric's or Peter's libs, since that's what you will be up against. "A couple of header files" will be fine, but a single header file would be super cool! (P.S. phoenix-lite is a single header file)

Regards,
--
Joel de Guzman
http://www.ciere.com
http://boost-spirit.com
http://www.cycfi.com/
On 6/17/2015 3:55 PM, David Sankel wrote:
On Wed, Jun 10, 2015 at 3:19 AM, Glen Fernandes
wrote: - Whether you believe the library should be accepted into Boost * Conditions for acceptance
Yes, I think it should be accepted into Boost. The one condition is that the library works with release versions of at least two different mainstream C++ compilers. More on that later...
- Your name
David J. Sankel
- Your knowledge of the problem domain.
I've written numerous libraries that make use of Boost.MPL and Boost.Fusion. I also have quite a bit of functional programming expertise.
You are strongly encouraged to also provide additional information: - What is your evaluation of the library's: * Design
The core technique of combining value and type expressions is solid and makes metaprogramming easier and, as a bonus, improves compilation speeds.
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
* Implementation
The code itself looks to be well structured and well documented.
Unfortunately hana only works with one compiler: clang. While I agree that Boost shouldn't need to support Visual C++ 6.0 anymore, I believe this is going too far in the opposite direction. The home page states that boost libraries "are intended to be widely useful, and usable across a broad spectrum of applications". I've always interpreted that statement to be in a practical rather than theoretical sense and I don't think hana meets that criteria. Many other Boost authors have made heroic efforts to meet that criteria and the reputation of Boost is due, in no small part, to those efforts.
I do appreciate the argument that making use of new features encourages compiler implementers to implement them. I maintain, however, that this isn't Boost's job. Boost provides high quality libraries that the every-day Joe C++ developer can benefit from.
That being my position on the issue, my acceptance vote is conditional on hana supporting at least two released versions of mainstream compilers. Given that gcc support seems pretty close, that shouldn't be hard to achieve.
I have not reviewed Hana yet but I feel the need to comment about any arbitrary number of compilers a library must work under. I completely disagree with the notion that any such number would be the cause of accepting or not accepting a library as part of Boost. If a library is found useful and works according to the latest version of the C++ standard that should be more than enough for that library to be accepted into Boost as part of the review process pending individual reviews.
On Wed, Jun 17, 2015 at 11:23 PM, Edward Diener
I have not reviewed Hana yet but I feel the need to comment about any arbitrary number of compilers a library must work under. I completely disagree with the notion that any such number would be the cause of accepting or not accepting a library as part of Boost. If a library is found useful and works according to the latest version of the C++ standard that should be more than enough for that library to be accepted into Boost as part of the review process pending individual reviews.
+1 Zach
On 6/17/15 9:23 PM, Edward Diener wrote:
If a library is found useful and works according to the latest version of the C++ standard that should be more than enough for that library to be accepted into Boost as part of the review process pending individual reviews.
This is long-standing and explicit Boost policy -- it's somewhere on the Boost website; too tedious to find right now. Robert Ramey
David Sankel
On Wed, Jun 10, 2015 at 3:19 AM, Glen Fernandes
wrote: [...]
You are strongly encouraged to also provide additional information: - What is your evaluation of the library's: * Design
The core technique of combining value and type expressions is solid and makes metaprogramming easier and, as a bonus, improves compilation speeds.
I'd just like to point out that representing types as values is not what makes compilation faster. It is actually the opposite: representing type-level computations at the type level is indeed faster, since the compiler has less work to do. What makes Hana faster than Fusion/MPL is the usage of modern C++14 techniques and also a design built around the compiler's execution model rather than the machine's execution model (like the STL/Fusion/MPL with iterators).

For pure type-level computations (which are mostly a myth in C++14, because you usually don't need them that much), the fastest way to go would be to use something like the MPL11 I presented in 2014, or Meta (if its implementation is clever enough). Basically, an MPL (pure-type), but using C++11/14 techniques. However, then you're back to the great fun of writing type-level computations like you did in C++03.

My gamble with Hana is that for the very few times you actually need type-level computations, taking a small compile-time hit is worth the expressiveness gain and the ease of interfacing with your actual heterogeneous runtime code (written also with Hana), which is what's important at the end of the day.
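For readers unfamiliar with the technique under discussion, here is a minimal hand-rolled sketch of "types as values"; the names are hypothetical and this is not Hana's actual implementation:

    #include <type_traits>

    // A value whose only information is the type it carries.
    template <typename T> struct type_ { using type = T; };
    template <typename T> constexpr type_<T> type_c{};

    // An ordinary function now plays the role of a metafunction.
    template <typename T>
    constexpr type_<T*> add_pointer(type_<T>) { return {}; }

    static_assert(std::is_same<decltype(add_pointer(type_c<int>))::type,
                               int*>::value, "");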
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
Yes, it would be possible. That would mean (non-exhaustively):

Pros:
- No conceptual overhead (it would be dumb tuples + algorithms, nothing else)
- Slightly faster compile times (smaller library)

Cons:
- No real separation in concepts. This is the flipside of having no conceptual overhead.
- No interoperation with other libraries (STL/Fusion/MPL)
- No (or very little) possibility of extending the library by users. Don't even try defining your own sequence, it wouldn't be supported.

Hana advertises itself as a "standard library for metaprogramming". If you look at Eric's Range library, for example, he also has to define a truckload of concepts to get this done. My feeling is that in order to do something _cleanly_, one must sometimes aim for something larger than the strict minimum that can technically get the job done. That's what I went for.

However, it is clear that for some users doing simple stuff Hana's concepts will just be annoying. Hence, there is definitely value in a library that would basically implement Hana's core, but only that. Shooting from the hip, that would include:
- An efficient tuple implementation (std::tuple usually sucks, sorry)
- Efficient algorithms on this tuple type
- A way to wrap types into values
- Extended integral constants with operators
- Maybe, but just maybe, a compile-time Optional

That could either be a different library (I could consider taking this over in the future), or splitting Hana in such a way that this core content can be accessed without any conceptual/compile-time overhead. However, this is either a non-trivial redesign of the library or a new library completely, so it is part of a "future plan".
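For illustration, here is a short sketch of what using just that core might look like, written against Hana's existing public API (hana::make_tuple, hana::transform, hana::for_each, hana::length); it is an illustrative program, not an excerpt from the library's documentation:

    #include <boost/hana.hpp>
    #include <string>
    namespace hana = boost::hana;

    int main() {
        // An efficient tuple mixing a runtime value, a wrapped type and an
        // integral constant.
        auto xs = hana::make_tuple(hana::int_c<3>, hana::type_c<char>,
                                   std::string{"boost"});

        // Plain algorithms: transform and for_each, nothing fancy.
        auto wrapped = hana::transform(xs, [](auto const& x) {
            return hana::make_tuple(x);
        });
        hana::for_each(wrapped, [](auto const&) { /* use each element */ });

        // The length is still known at compile time.
        static_assert(hana::length(wrapped) == hana::size_c<3u>, "");
    }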
* Implementation
The code itself looks to be well structured and well documented.
Unfortunately hana only works with one compiler: clang. While I agree that Boost shouldn't need to support Visual C++ 6.0 anymore, I believe this is going too far in the opposite direction. The home page states that boost libraries "are intended to be widely useful, and usable across a broad spectrum of applications". I've always interpreted that statement to be in a practical rather than theoretical sense and I don't think hana meets that criteria. Many other Boost authors have made heroic efforts to meet that criteria and the reputation of Boost is due, in no small part, to those efforts.
I do appreciate the argument that making use of new features encourages compiler implementers to implement them. I maintain, however, that this isn't Boost's job. Boost provides high quality libraries that the every-day Joe C++ developer can benefit from.
That being my position on the issue, my acceptance vote is conditional on hana supporting at least two released versions of mainstream compilers. Given that gcc support seems pretty close, that shouldn't be hard to achieve.
[Be warned, I'm getting slightly emotive below.]

It's not about me, it's about them. Seriously, go look at GCC's or Clang's bug trackers. Since I seem to be the only user doing "real" stuff with new C++14 features, I'm also the one finding a lot of the bugs. But of course, I don't weigh much in the balance when paying customers(*) ask for bugs to be fixed, or when more obvious C++03 bugs appear. The most important thing stopping Hana from becoming more mature (and higher quality) is the lack of users and the lack of support from compilers. Both can be fixed __very quickly__ by entering Boost, or we could also sit back and enjoy the ride for the next year or so.

(*) I waited for this bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65719 to be fixed for more than 2 months. The bug was fixed 2 days after someone I know filed the same bug against RedHat. The dev who fixed the bug worked for RedHat, and the fix was 2 lines of code in GCC. That could be a coincidence, but I doubt it.

What I'm asking for is not rock solid perfect C++14 support; I simply need stuff like this

    int main() {
        [](auto x) -> decltype((void)x) { };
    }

to compile on GCC (try it). Oh and by the way, if I don't find these bugs now, you will find them yourself when you use (even trivial) C++14 features in those compilers. So we're all together in the same boat, except for MSVC users who seem to enjoy swimming along.
* Usefulness
Maybe in a couple years for me. Without VS 2015 support at least, I'm going to be waiting a while.
- Did you attempt to use the library?
Yes. Started working through the getting started examples.
If so: * Which compiler(s)
clang 3.6.1
* What was the experience? Any problems?
Nothing more to add. I filed issues for all problems encountered and they seemed minor.
For the reference of others, the problems are related to the lack of testing on libstdc++ on Linux. I'm working to fix those errors, and also test on libstdc++ on Travis. Thanks a lot for the review, David. I'll try to get your compilation errors fixed ASAP. Regards, Louis
[Louis Dionne]
Oh and by the way, if I don't find these bugs now, you will find them yourself when you use (even trivial) C++14 features in those compilers. So we're all together in the same boat, except for MSVC users who seem to enjoy swimming along.
We pay special attention to Boost-blocking bugs. If you've submitted a compiler or library bug that's important to Boost, feel free to contact me and I'll ping the appropriate dev. STL
Stephan T. Lavavej
[Louis Dionne]
Oh and by the way, if I don't find these bugs now, you will find them yourself when you use (even trivial) C++14 features in those compilers. So we're all together in the same boat, except for MSVC users who seem to enjoy swimming along.
We pay special attention to Boost-blocking bugs. If you've submitted a compiler or library bug that's important to Boost, feel free to contact me and I'll ping the appropriate dev.
I'm glad to hear about this, thanks. Unfortunately, everything I've done in the past year or so has been solely on Clang (and bits on GCC trunk) because of C++14. But I guess your advice will come in handy when/if I port Hana to MSVC, once full C++14 support is reached. Regards, Louis
My one question, as I read through the implementation, is "can the core benefits of this library be achieved with a simpler 'light' version of this implementation?". While I appreciate the attempt to encode a Haskell-style typeclass hierarchy, I feel like that is not the core competency of hana and should be a separate library and discussion. As it is, this is a 32k header mega library. I'd prefer several small, highly-targeted, highly-composable libraries.
Yes, it would be possible.
Well, you are already planning to move to Fit instead of what's currently in the Functional part. Also, the way you define concepts is very interesting and might be a good library on its own, since other libraries that don't need metaprogramming may want to use it. Of course, it's best to see what happens as the library grows. I believe something similar happened to Spirit: it was one library and then split into other libraries (such as Phoenix and Fusion).
However, it is clear that for some users doing simple stuff Hana's concepts will just be annoying. Hence, there is definitely value in a library that would basically implement Hana's core, but only that. Shooting from the hip, that would include:
Well, the Fit library provides a lot of this core capability, I believe. It has a focus on being lightweight instead of providing something highly generic.
- An efficient tuple implementation (std::tuple usually sucks, sorry)
The Fit library already provides an efficient tuple implementation. Additionally, it is also empty-optimized on compilers that support it (currently only Clang does). However, with the goal of being lightweight, it doesn't provide indexed accessors directly. They could be implemented easily using `args`; for more sophisticated needs like that, Hana should just be used instead.
- Efficient algorithms on this tuple type
Yes, and the Fit library provides the components to easily do algorithms on
sequences. `tuple_cat` can be implemented in one line:
auto tuple_cat = unpack(construct<std::tuple>());
As well as `for_each`:
auto for_each = [](auto&& seq, auto f) { unpack(by(f))(seq); };
And `transform`:
auto transform = [](auto&& seq, auto f) { return unpack(by(f, construct<std::tuple>()))(seq); };
And `fold`:
auto fold = [](auto&& seq, auto f) { return unpack(compress(f))(seq); };
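For readers who prefer to see the machinery spelled out, here is a self-contained plain-C++14 sketch of what a tuple for_each boils down to; this is hand-rolled code, not Fit's implementation:

    #include <tuple>
    #include <utility>
    #include <cstddef>
    #include <iostream>

    template <class Tuple, class F, std::size_t... I>
    void for_each_impl(Tuple& t, F f, std::index_sequence<I...>) {
        // Expand the index pack; the braced array guarantees left-to-right order.
        int dummy[] = {0, (f(std::get<I>(t)), 0)...};
        (void)dummy;
    }

    template <class Tuple, class F>
    void for_each(Tuple t, F f) {
        for_each_impl(t, f,
            std::make_index_sequence<std::tuple_size<Tuple>::value>{});
    }

    int main() {
        for_each(std::make_tuple(1, 2.5, "three"),
                 [](auto const& x) { std::cout << x << ' '; });
    }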
Plus, it's extensible and could be adapted to work with Fusion sequences as
well:
template<class Sequence>
struct unpack_sequence
- A way to wrap types into values
I think this is fairly trivial as well. There doesn't seem to be a need for a completely separate library:

    template<class T> struct type { using type = T; };
    template<class T> type<T> decltype_(T&&) { return {}; }
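A quick usage check of the snippet above (assuming it is in scope, along with <type_traits>); the value 42 is just an arbitrary example:

    // decltype_(42) yields type<int>, whose nested ::type is int.
    static_assert(std::is_same<decltype(decltype_(42))::type, int>::value, "");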
- Extended integral constants with operators
The Tick library also provides extended integral constants with operators.

Paul
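To illustrate what "extended integral constants with operators" means, here is a minimal hand-rolled sketch (not Tick's or Hana's actual code): arithmetic on the constants stays entirely in the type.

    #include <type_traits>

    template <class T, T v>
    struct ic : std::integral_constant<T, v> {};

    // Adding two constants yields another constant; no runtime work involved.
    template <class T, T a, T b>
    constexpr ic<T, a + b> operator+(ic<T, a>, ic<T, b>) { return {}; }

    static_assert(decltype(ic<int, 1>{} + ic<int, 2>{})::value == 3, "");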
participants (17)
- Bjorn Reese
- charleyb123 .
- David Sankel
- edouard@fausse.info
- Edward Diener
- Glen Fernandes
- Joel de Guzman
- Louis Dionne
- Niall Douglas
- Paul A. Bristow
- Paul Fultz II
- Peter Dimov
- Rob Stewart
- Robert Ramey
- Stephan T. Lavavej
- Vicente J. Botet Escriba
- Zach Laine