[GSoC] [Boost.Hana] Formal review request
Dear Boost,

It has been a while since I've given news about my GSoC project, Boost.Hana[1]. Things are going very well and I uploaded the library to the Boost Incubator. I think it is time to push the library forward for review. As the author, I know everything that's left to be done and polished before I can be fully satisfied. Still, I believe the library is worthwhile in its current state, as it implements a superset of the functionality found in Boost.MPL and Boost.Fusion. I also think the library will benefit greatly from a larger user base and more feedback. Here are some caveats:

- The library requires a full C++14 compiler, so only Clang 3.5 can compile the unit tests right now. However, compilers will eventually catch up. Also, pushing cutting-edge libraries forward might motivate compilers to support C++14 ASAP, which is probably a good thing.

- The library is not fully stable yet, so interface changes are to be expected. I don't see this as a problem as long as this is documented, especially since I expect Hana to be used mostly for toying around for at least a while. I could be mistaken.

So unless someone thinks the library isn't ready or would get rejected right away in its current state for reason X, I am requesting a formal review for Boost.Hana.

Regards, Louis

[1]: http://github.com/ldionne/hana
On 07/28/2014 10:20 AM, Louis Dionne wrote:
So unless someone thinks the library isn't ready or would get rejected right away in its current state for reason X, I am requesting a formal review for Boost.Hana.
Regards, Louis
I see reference docs as generated by Doxygen, but I don't see user docs. Am I missing it? That would certainly be a show-stopper. \e
On Mon, Jul 28, 2014 at 11:33 AM, Eric Niebler wrote:
I see reference docs as generated by Doxygen, but I don't see user docs. Am I missing it? That would certainly be a show-stopper.
\e
His user documentation is also via Doxygen (and looks fairly impressive to me): http://ldionne.github.io/hana/ Glen
Eric Niebler writes:
On 07/28/2014 10:20 AM, Louis Dionne wrote:
So unless someone thinks the library isn't ready or would get rejected right away in its current state for reason X, I am requesting a formal review for Boost.Hana.
Regards, Louis
I see reference docs as generated by Doxygen, but I don't see user docs. Am I missing it? That would certainly be a show-stopper.
As Glen points out, the tutorial is also written with Doxygen. You have to click on "Boost.Hana Manual" in the side bar at the left. It should also be the landing page when you enter the documentation at: http://ldionne.github.io/hana/

The tutorial goes through the basic concepts used everywhere in the library and then explains how to use the reference documentation (accessible by clicking "Reference" on the left panel). Perhaps you have already seen that but it did not qualify as a tutorial to you? That would be valid criticism.

The documentation was built with Boost.Fusion as a model, i.e. a short primer and then a thorough reference with examples of how to use each component. I thought this was the best way to go given the nature of the library (a lot of general purpose utilities and very little room for surprise).

Please let me know if you think the documentation is not adequate (or anything else for that matter).

Regards, Louis
On 07/28/2014 12:17 PM, Louis Dionne wrote:
Eric Niebler writes:
On 07/28/2014 10:20 AM, Louis Dionne wrote:
So unless someone thinks the library isn't ready or would get rejected right away in its current state for reason X, I am requesting a formal review for Boost.Hana.
Regards, Louis
I see reference docs as generated by Doxygen, but I don't see user docs. Am I missing it? That would certainly be a show-stopper.
As Glen points out, the tutorial is also written with Doxygen. You have to click on "Boost.Hana Manual" in the side bar at the left. It should also be the landing page when you enter the documentation at:
Ah! Nice, thanks.
On 7/28/2014 3:17 PM, Louis Dionne wrote:
Eric Niebler writes:
On 07/28/2014 10:20 AM, Louis Dionne wrote:
So unless someone thinks the library isn't ready or would get rejected right away in its current state for reason X, I am requesting a formal review for Boost.Hana.
Regards, Louis
I see reference docs as generated by Doxygen, but I don't see user docs. Am I missing it? That would certainly be a show-stopper.
As Glen points out, the tutorial is also written with Doxygen. You have to click on "Boost.Hana Manual" in the side bar at the left.
I do not see this side bar on the left of your GitHub page.
It should also be the landing page when you enter the documentation at:
I see this online. But in your instructions on your GitHub page you say there is an offline version in the doc/gh-pages directory of your library, but the index.html there only shows the Doxygen documentation.
The tutorial goes through the basic concepts used everywhere in the library and then explains how to use the reference documentation (accessible by clicking "Reference" on the left panel). Perhaps you have already seen that but it did not qualify as a tutorial to you? That would be valid criticism.
The documentation was built with Boost.Fusion as a model, i.e. a short primer and then a thorough reference with examples of how to use each component. I thought this was the best way to go given the nature of the library (a lot of general purpose utilities and very little room for surprise).
Please let me know if you think the documentation is not adequate (or anything else for that matter).
Regards, Louis
Edward Diener writes:
On 7/28/2014 3:17 PM, Louis Dionne wrote:
[...]
As Glen points out, the tutorial is also written with Doxygen. You have to click on "Boost.Hana Manual" in the side bar at the left.
I do not see this side bar on the left of your GitHub page.
The side bar is in the documentation at http://ldionne.github.io/hana , not on the GitHub page of the project itself.
It should also be the landing page when you enter the documentation at:
I see this online. But in your instructions on your GitHub page you say there is an offline version in the doc/gh-pages directory of your library, but the index.html there only shows the Doxygen documentation.
Just to make sure; did you do

    git submodule update --init --remote

at the root of your local clone, as instructed in the README? This checks out the doc/gh-pages submodule at its latest version. What I suspect you did is clone the project and then `git submodule init`, which would have left you with some fairly old version of the documentation.

The `--remote` option must be added because the master branch only tracks the submodule at a given commit. I know of two solutions for this:

1. Use `git submodule update --init --remote` instead of the usual `git submodule update --init` to check out the latest version of the documentation.

2. Update the commit of the submodule referenced by the master branch every time we regenerate the documentation in order to make `git submodule update --init` equivalent to `git submodule update --init --remote`.

I went for the first option because I did not want to make a commit in master each time I updated the documentation. What I'll try to do for now is change the contents of doc/gh-pages that you get by default and put a note saying "Here's the command you should do if you want the documentation offline". Does that seem reasonable?

Regards, Louis
On 7/28/2014 4:37 PM, Louis Dionne wrote:
Edward Diener writes:
On 7/28/2014 3:17 PM, Louis Dionne wrote:
[...]
As Glen points out, the tutorial is also written with Doxygen. You have to click on "Boost.Hana Manual" in the side bar at the left.
I do not see this side bar on the left of your GitHub page.
The side bar is in the documentation at http://ldionne.github.io/hana , not on the GitHub page of the project itself.
I do see that.
It should also be the landing page when you enter the documentation at:
I see this online. But in your instructions on your GitHub page you say there is an offline version in the doc/gh-pages directory of your library, but the index.html there only shows the Doxygen documentation.
Just to make sure; did you do
git submodule update --init --remote
C:\Programming\VersionControl\modular-boost\libs\hana>git submodule update --init --remote
Submodule path 'doc/gh-pages': checked out '68817a886f0f13d286b28f27e4462694b37522b9'

I do now see the full manual. This is what I need to understand your library. Just a doxygen reference with examples never does anything for me <g>.
at the root of your local clone, as instructed in the README? This checks out the doc/gh-pages submodule at its latest version. What I suspect you did is clone the project and then `git submodule init`, which would have left you with some fairly old version of the documentation.
The `--remote` option must be added because the master branch only tracks the submodule at a given commit. I know of two solutions for this:
1. Use `git submodule update --init --remote` instead of the usual `git submodule update --init` to check out the latest version of the documentation.
2. Update the commit of the submodule referenced by the master branch every time we regenerate the documentation in order to make `git submodule update --init` equivalent to `git submodule update --init --remote`.
I went for the first option because I did not want to make a commit in master each time I updated the documentation. What I'll try to do for now is change the contents of doc/gh-pages that you get by default and put a note saying
"Here's the command you should do if you want the documentation offline"
Does that seem reasonable?
A separate branch with the latest full documentation, maybe called 'doc', might be clearer.
Edward Diener writes:
[...]
A separate branch with the latest full documentation, maybe called 'doc', might be clearer.
There is such a branch, and it's called `gh-pages`. It's the way of doing things for projects on GitHub. The problem with providing the documentation _only_ through a branch is that you have to check it out (and hence overwrite the current working directory) to see it. So what I did is create such a branch and then make it available through a submodule, so you can check out that branch in a subdirectory. Louis
Louis Dionne writes:
Edward Diener writes:
[...]
A separate branch with the latest full documentation, maybe called 'doc', might be clearer.
There is such a branch, and it's called `gh-pages`. It's the way of doing things for projects on GitHub. The problem with providing the documentation _only_ through a branch is that you have to check it out (and hence overwrite the current working directory) to see it. So what I did is create such a branch and then make it available through a submodule, so you can check out that branch in a subdirectory.
Okay, so I made some changes and here's how it works now:

1. If you clone the repository as instructed in the README, everything just works.

2. If you clone the repository and then just check out the submodule, but not at its latest version, you get an empty directory with a README telling you exactly the commands to get the latest documentation.

3. When I release stable versions of the library, the default checkout of the documentation submodule will be the documentation for that version of the library instead of an empty directory with a README, as one would expect.

Louis
On 7/28/2014 8:26 PM, Louis Dionne wrote:
Louis Dionne writes:
Edward Diener writes:
[...]
A separate branch with the latest full documentation, maybe called 'doc', might be clearer.
There is such a branch, and it's called `gh-pages`. It's the way of doing things for projects on GitHub. The problem with providing the documentation _only_ through a branch is that you have to check it out (and hence overwrite the current working directory) to see it. So what I did is create such a branch and then make it available through a submodule, so you can check out that branch in a subdirectory.
Okay, so I made some changes and here's how it works now:
1. If you clone the repository as instructed in the README, everything just works.
2. If you clone the repository and then just check out the submodule, but not at its latest version, you get an empty directory with a README telling you exactly the commands to get the latest documentation.
3. When I release stable versions of the library, the default checkout of the documentation submodule will be the documentation for that version of the library instead of an empty directory with a README, as one would expect.
This is reasonable.

But I do not understand from your documentation how your library relates to Boost MPL. The Boost MPL library is about manipulating and creating types at compile time, and creating logic paths, again at compile time, to manipulate types. All of this is encapsulated by the MPL's notion of metafunctions to do compile-time programming. But I cannot get from your documentation any MPL equivalence of how any of this is done with your library. Is your library meant to duplicate/replace this MPL functionality in some way? Or is it meant to do something else entirely, not related to compile-time programming?

I am only asking this because I had been told that your library is at the least a replacement for MPL compile-time type manipulation functionality using C++11 on up.
On 28 Jul 2014 at 23:03, Edward Diener wrote:
But I do not understand from your documentation how your library relates to Boost MPL. The Boost MPL library is about manipulating and creating types at compile time, and creating logic paths, again at compile time, to manipulate types. All of this is encapsulated by the MPL's notion of metafunctions to do compile-time programming. But I cannot get from your documentation any MPL equivalence of how any of this is done with your library. Is your library meant to duplicate/replace this MPL functionality in some way? Or is it meant to do something else entirely, not related to compile-time programming?
I am only asking this because I had been told that your library is at the least a replacement for MPL compile-time type manipulation functionality using C++11 on up.
I think a table with MPL98 forms on one side and Hana equivalents on the other would be an enormous help with the learning curve.

Also, regarding formal review, I personally would feel uncomfortable accepting a library that only works with a single version of clang. I would feel much happier if you got trunk GCC working, even if that means workarounds.

BTW some of those graphs you had in C++ Now showing time and space benchmarks of performance would be really useful in the docs, maybe in an Appendix. When MSVC eventually gets good enough that Hana could be ported to it (VS2015?) I think it would be fascinating to see the differences. I'm sure Microsoft's compiler team would also view Hana as an excellent test of future MSVCs, indeed maybe Stephan could have Hana used as an internal test of conformance for the team to aim for.

I'd also like to see unit testing that verifies that the current compiler being tested has a time and space benchmark curve matching what is expected. It is too easy for code to slip in, or for the compilers themselves to gain a bug, which creates pathological metaprogramming performance. Better to have Travis CI trap that for you than head scratching and surprises later.

I'd like to see some mention in the docs of how to use Hana with that metaprogramming debugger from that German fellow. He presented at a C++ Now.

Finally, there are ways and means for doxygen docs to automatically convert into BoostBook docs. You'll need to investigate those before starting a formal review. Tip: look into how Geometry/AFIO does the doxygen conversion, it's brittle but it is easier than the others.

Niall

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 29 Jul 2014 at 11:04, Niall Douglas wrote:
I'd like to see some mention in the docs of how to use Hana with that metaprogramming debugger from that German fellow. He presented at a C++ Now.
Apologies, the fellow is in fact Hungarian. Actually, there are no fewer than *two* such Hungarians, both with tools solving the problem in different ways. One is Zoltán Porkoláb, the other is Ábel Sinkovics. The former presented at C++ Now 2013, the latter at C++ Now 2014.

Thanks to Benedek for correcting me.

Niall

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
The documentation is awesome, thanks! I liked the inline discussions that relate the library to Fusion and MPL, and in particular your use of variable templates (e.g. type<>).
I would feel much happier if you got trunk GCC working, even if that means workarounds.
I'd rather wait till GCC offers basic C++14 support for variable templates and relaxed constexpr before even attempting this, since otherwise this looks like a major undertaking for little win: those who can use gcc-trunk can probably also use clang-trunk, and anyhow these users are a minority anyway. IMO compilers will get there in due time.
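For readers unfamiliar with the C++14 features in question, here is a minimal sketch of the variable template idiom behind something like type<>; the definitions are illustrative only, not Hana's actual ones:

    #include <type_traits>

    // A class template that wraps a type so it can travel as a value.
    template <typename T>
    struct basic_type {
        using type = T;
    };

    // A C++14 variable template: type<int> is an *object* representing
    // the type int, so it can be passed to and returned from functions.
    template <typename T>
    constexpr basic_type<T> type{};

    // A constexpr function with C++14 return type deduction that
    // computes on such objects.
    template <typename T>
    constexpr auto add_pointer(basic_type<T>) {
        return type<T*>;
    }

    static_assert(std::is_same<
        decltype(add_pointer(type<int>))::type, int*
    >::value, "");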
On 07/29/2014 11:10 AM, Gonzalo BG wrote:
The documentation is awesome, thanks! I liked the inline discussions that relate the library to Fusion and MPL, and in particular your use of variable templates (e.g. type<>).
I would feel much happier if you got trunk GCC working, even if that means workarounds.
I'd rather wait till GCC offers basic C++14 support for variable templates and relaxed constexpr before even attempting this, since otherwise this looks like a major undertaking for little win: those who can use gcc-trunk can probably also use clang-trunk, and anyhow these users are a minority anyway. IMO compilers will get there in due time.
I agree. I can certainly understand Louis' reluctance. All that effort to adapt to old compilers would be useless as soon as those compilers upgrade, and Hana would provide them (as already mentioned) with more incentive to upgrade.

[snip]
Niall Douglas writes:
[...]
I think a table with MPL98 forms on one side and Hana equivalents on the other would be an enormous help with the learning curve.
Will do.
Also, regarding formal review, I personally would feel uncomfortable accepting a library that only works with a single version of clang. I would feel much happier if you got trunk GCC working, even if that means workarounds.
That would mean a lot of workarounds. TBH, I think the "right" thing to do is to push the compiler folks to support C++14 (and without bugs, plz) as soon as possible. The reason I am so unwilling to do workarounds is that _the whole point_ of Hana is that it's cutting edge. Whenever you remove a C++14 feature, Hana goes back to the stone-age performance of Fusion/MPL and becomes much, much less usable. Why not use Fusion/MPL in that case?
BTW some of those graphs you had in C++ Now showing time and space benchmarks of performance would be really useful in the docs, maybe in an Appendix. When MSVC eventually gets good enough that Hana could be ported to it (VS2015?) I think it would be fascinating to see the differences. I'm sure Microsoft's compiler team would also view Hana as an excellent test of future MSVCs, indeed maybe Stephan could have Hana used as an internal test of conformance for the team to aim for.
Yup, I have a benchmark suite for Boost.Hana, like I had for the MPL11. I have started integrating them with the documentation, but I'm not sure what's the best way of doing it, so I did not push forward on that. Basically, I got a benchmark for almost every operation of almost every sequence that's supported by Hana (including those adapted from external libraries), but I'm not sure yet how to group them in the documentation (per operation? per sequence? per type class?). The problem is made worse by two things:

- It only makes sense to benchmark components that are isomorphic. For example, what does it mean to benchmark a std::tuple against a mpl::vector? Not much, because the price you pay for std::tuple gives you the ability to hold values, whereas mpl::vector can only hold types. We don't want to compare apples with oranges, and the grouping of benchmarks should be influenced by that.

- How do we handle different compilers? Right now, all benchmarks are produced only on Clang, which is OK because it's the only compiler that can compile the library. When there is more than one compiler, how do we generate the benchmarks for all of them, and how do we integrate the benchmarks in the documentation?
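To make the apples-and-oranges point concrete, here is a minimal sketch using plain standard/Boost components, nothing Hana-specific:

    #include <boost/mpl/vector.hpp>
    #include <tuple>

    // mpl::vector holds types only; it has no runtime representation,
    // so there is nothing to construct or store.
    using types = boost::mpl::vector<int, char, double>;

    // std::tuple holds actual values; you pay, at compile time and at
    // run time, for the ability to store and access them.
    std::tuple<int, char, double> values{1, 'a', 3.14};

    // Benchmarking one against the other therefore compares different
    // capabilities, not two implementations of the same concept.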
I'd also like to see unit testing that verifies that the current compiler being tested has a time and space benchmark curve matching what is expected. It is too easy for code to slip in, or for the compilers themselves to gain a bug, which creates pathological metaprogramming performance. Better to have Travis CI trap that for you than head scratching and surprises later.
I thought about doing this, but I did not because I thought it was a HUGE undertaking to automate it. I think we're better off integrating the benchmarks in the documentation and then when something is really weird, we just have to look at the generated benchmarks and see what's wrong. If someone can suggest a way to do it automatically that won't take me weeks to set up, I'm interested.

Also, I wouldn't want to slip into the trap of testing the compiler; testing Hana is a large enough task as it is (341 tests + 165 examples as we speak).
I'd like to see some mention in the docs of how to use Hana with that metaprogramming debugger from that German fellow. He presented at a C++ Now.
I'll think about something; that's a good idea. Thanks.
Finally, there are ways and means for doxygen docs to automatically convert into BoostBook docs. You'll need to investigate those before starting a formal review. Tip: look into how Geometry/AFIO does the doxygen conversion, it's brittle but it is easier than the others.
Is it mandatory for a Boost library to have BoostBook documentation? I'd like to stay as mainstream as possible in the tools I use and reduce the number of steps in the build/documentation process for the sake of simplicity. Is there a gain in generating the documentation in BoostBook? Regards, Louis
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Louis Dionne Sent: 29 July 2014 18:03 To: boost@lists.boost.org Subject: Re: [boost] [GSoC] [Boost.Hana] Formal review request

Is it mandatory for a Boost library to have BoostBook documentation? I'd like to stay as mainstream as possible in the tools I use and reduce the number of steps in the build/documentation process for the sake of simplicity. Is there a gain in generating the documentation in BoostBook?
It certainly isn't compulsory, but it gives it a Boosty look'n'feel that will probably make it easier for Boost users to navigate. For Boost, Quickbook, Boostbook, Doxygen *is* mainstream ;-)

IMO, the nicest docs start with Quickbook, an easy but very powerful 'markup language'. You can use all your Doxygen comments in your code to provide a C++ reference section with no extra work. And you can prepare automatic indexes that help users find their way around.

Tell me if I can help.

Paul

--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
On Tue, Jul 29, 2014 at 10:02 AM, Louis Dionne wrote:
That would mean a lot of workarounds. TBH, I think the "right" thing to do is to push the compiler folks to support C++14 (and without bugs, plz) as soon as possible. The reason I am so unwilling to do workarounds is that _the whole point_ of Hana is that it's cutting edge. Whenever you remove a C++14 feature, Hana goes back to the stone-age performance of Fusion/MPL and becomes much, much less usable. Why not use Fusion/MPL in that case?
I agree.
Is it mandatory for a Boost library to have BoostBook documentation? I'd like to stay as mainstream as possible in the tools I use and reduce the number of steps in the build/documentation process for the sake of simplicity. Is there a gain in generating the documentation in BoostBook?
It is not mandatory as far as I know. [It was rightly mandatory for Boost.Align because my documentation was essentially a single giant index.html :-)] In my opinion your current documentation looks nicer, and more modern, than most existing Boost library documentation. The only case I can see for BoostBook would be visual consistency with existing Boost documentation.

Glen
On 29 Jul 2014 at 17:02, Louis Dionne wrote:
Also, regarding formal review, I personally would feel uncomfortable accepting a library that only works with a single version of clang. I would feel much happier if you got trunk GCC working, even if that means workarounds.
That would mean a lot of workarounds.
Not necessarily. You could wait until GCC trunk catches up instead, helping it along by adding however many bug reports as is necessary. Timing a formal review well is often as important as the quality of your library.

I always feel heebie-jeebies about code which works only on one compiler. For me to vote yes for a Boost library to enter Boost I need to feel it is well tested and reliable and has had all its kinks knocked out. I struggle to see me feeling this with a library which can't be tested widely.

Instead of heading straight into the community review queue, perhaps a few rounds of intermediate informal reviews like this one? I'd particularly like to see Eric and Joel's opinion of your library so far too.
Basically, I got a benchmark for almost every operation of almost every sequence that's supported by Hana (including those adapted from external libraries), but I'm not sure yet of how to group them in the documentation (per operation? per sequence? per type class?).
The problem is made worse by two things:
- It only makes sense to benchmark components that are isomorphic. For example, what does it mean to benchmark a std::tuple against a mpl::vector? Not much, because the price you pay for std::tuple gives you the ability to hold values, whereas mpl::vector can only hold types. We don't want to compare apples with oranges, and the grouping of benchmarks should be influenced by that.
I guess it's like when choosing between list, map and vector from the STL. All are "equivalent", but you need to know how each might scale out for what you need. Most users won't just know automatically. My problem always with MPL98 was I had no idea what was fast or slow for my use cases.
- How do we handle different compilers? Right now, all benchmarks are produced only on Clang, which is OK because it's the only compiler that can compile the library. When there is more than one compiler, how do we generate the benchmarks for all of them, and how do we integrate the benchmarks in the documentation?
You're right this is more of a name and shame thing between AST compilers. MSVC is very different though, and you may eventually need a separate line labelled "MSVC" just for it.
I'd also like to see unit testing that verifies that the current compiler being tested has a time and space benchmark curve matching what is expected. It is too easy for code to slip in, or for the compilers themselves to gain a bug, which creates pathological metaprogramming performance. Better to have Travis CI trap that for you than head scratching and surprises later.
I thought about doing this, but I did not because I thought it was a HUGE undertaking to automate it.
No, it's easier than you think. Have a look at https://ci.nedprod.com/ whose default dashboard shows a graph labelled "RUDP performance". This tracks performance of a build over time to ensure performance doesn't regress. All you need is for your performance test tool to output some CSV, a Jenkins Plot plugin does the rest.
I think we're better off integrating the benchmarks in the documentation and then when something is really weird, we just have to look at the generated benchmarks and see what's wrong. If someone can suggest a way to do it automatically that won't take me weeks to set up, I'm interested.
Mastering Jenkins takes months, but once mastered configuring all sorts of test scenarios becomes trivial. I'd actively merge your Jenkins/Travis output into your docs too, it is nowadays an online world.
Also, I wouldn't want to slip into the trap of testing the compiler; testing Hana is a large enough task as it is (341 tests + 165 examples as we speak).
I think testing modern C++ is always testing the compiler. It's why I develop first on MSVC: I stand a good chance of porting that to GCC or clang, less so the other way round.
Finally, there are ways and means for doxygen docs to automatically convert into BoostBook docs. You'll need to investigate those before starting a formal review. Tip: look into how Geometry/AFIO does the doxygen conversion, it's brittle but it is easier than the others.
Is it mandatory for a Boost library to have BoostBook documentation? I'd like to stay as mainstream as possible in the tools I use and reduce the number of steps in the build/documentation process for the sake of simplicity. Is there a gain in generating the documentation in BoostBook?
As you'll find when formal review comes, to pass isn't about how good your library is, it's about eliminating as many rational objections others can think of. BoostBook format documentation is a very easy way of crossing permanently off a raft of potential objections. Other categories of objection will be far harder to address, trust me. I suspect the hardest will probably be the "so what?" objection in all the guises it manifests, as it always is for any library which is particularly novel.

All that said, I have little love for the Boost documentation format, I think it outdated, brittle, and inflexible, and it does a lousy job of automating reference documentation. But it could be a lot worse, and I have been forced in the past to use a lot worse. Besides, you'll invest five days or so of wishing pain on those responsible for the tools, and once it's working you'll never need touch it again. I did find it took some months to find and fix all the corner cases in the doc output though, and even now PDF generation from AFIO's docs are a joke due to the long template strings.

Anyway, it's up to you. BTW, I've noticed that when peer review managers volunteer to manage they tend to favour ones with BoostBook docs. I think they also think it's another problem they don't have to think about during managing.

Niall

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Niall Douglas writes:
On 29 Jul 2014 at 17:02, Louis Dionne wrote:
Also, regarding formal review, I personally would feel uncomfortable accepting a library that only works with a single version of clang. I would feel much happier if you got trunk GCC working, even if that means workarounds.
That would mean a lot of workarounds.
Not necessarily. You could wait until GCC trunk catches up instead, helping it along by adding however many bug reports as is necessary. Timing a formal review well is often as important as the quality of your library.
I always feel heebie-jeebies about code which works only on one compiler. For me to vote yes for a Boost library to enter Boost I need to feel it is well tested and reliable and has had all its kinks knocked out. I struggle to see me feeling this with a library which can't be tested widely.
Instead of heading straight into the community review queue, perhaps a few rounds of intermediate informal reviews like this one?
That's ok with me.
I'd particularly like to see Eric and Joel's opinion of your library so far too.
I'd like that too.
[...]
My problem always with MPL98 was I had no idea what was fast or slow for my use cases.
Lol. Number one tip if you want to improve your compile-time performance with the MPL: do not _ever_ use mpl::vector, use mpl::list instead.
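To illustrate the tip, the swap really is drop-in for basic queries; the performance claim is the author's experience, not something this snippet measures:

    #include <boost/mpl/front.hpp>
    #include <boost/mpl/list.hpp>
    #include <boost/mpl/size.hpp>
    #include <type_traits>

    // mpl::list is a compile-time forward list; for front-insertion and
    // traversal-heavy algorithms it typically instantiates fewer
    // templates than mpl::vector.
    using xs = boost::mpl::list<int, char, double>;

    static_assert(boost::mpl::size<xs>::value == 3, "");
    static_assert(std::is_same<boost::mpl::front<xs>::type, int>::value, "");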
[...]
I thought about doing this, but I did not because I thought it was a HUGE undertaking to automate it.
No, it's easier than you think. Have a look at https://ci.nedprod.com/ whose default dashboard shows a graph labelled "RUDP performance". This tracks performance of a build over time to ensure performance doesn't regress. All you need is for your performance test tool to output some CSV, a Jenkins Plot plugin does the rest.
That's pretty cool!
I think we're better off integrating the benchmarks in the documentation and then when something is really weird, we just have to look at the generated benchmarks and see what's wrong. If someone can suggest a way to do it automatically that won't take me weeks to set up, I'm interested.
Mastering Jenkins takes months, but once mastered configuring all sorts of test scenarios becomes trivial. I'd actively merge your Jenkins/Travis output into your docs too, it is nowadays an online world.
I don't have months, but if someone is willing to help I'll collaborate. The current build system is set up with CMake; surely it integrates easily with Jenkins?
[...]
As you'll find when formal review comes, to pass isn't about how good your library is, it's about eliminating as many rational objections others can think of.
I hope it's at least _a bit_ about how good the library is. :)
[...]
Besides, you'll invest five days or so of wishing pain on those responsible for the tools, and once it's working you'll never need touch it again. I did find it took some months to find and fix all the corner cases in the doc output though, and even now PDF generation from AFIO's docs are a joke due to the long template strings.
If I spend 5 days on improving the current documentation, I'll have the best freakin' documentation you could ever wish to have in Boost. I'll favor doing that before BoostBook, and hopefully the quality of the resulting documentation clears up a lot of objections.
Anyway, it's up to you. BTW, I've noticed that when peer review managers volunteer to manage they tend to favour ones with BoostBook docs. I think they also think it's another problem they don't have to think about during managing.
Regards, Louis
On 29 Jul 2014 at 19:13, louis_dionne wrote:
No, it's easier than you think. Have a look at https://ci.nedprod.com/ whose default dashboard shows a graph labelled "RUDP performance". This tracks performance of a build over time to ensure performance doesn't regress. All you need is for your performance test tool to output some CSV, a Jenkins Plot plugin does the rest.
That's pretty cool!
I'm getting ever keener on traffic light dashboards the older I become, probably due to failing memory :). The dashboard at https://ci.nedprod.com/view/Boost.AFIO/ tells me everything I need to know about the present state of AFIO.
Mastering Jenkins takes months, but once mastered configuring all sorts of test scenarios becomes trivial. I'd actively merge your Jenkins/Travis output into your docs too, it is nowadays an online world.
I don't have months, but if someone is willing to help I'll collaborate. The current build system is set up with CMake; surely it integrates easily with Jenkins?
Indeed it does, but you've just raised another likely problem for peer review. Some will argue that you need to use Boost.Build throughout. It's a bit like the BoostBook docs here, it's about ticking off boxes.

Regarding modern CI testing for Boost, well there isn't any, you're expected to provide your own. Travis is clunky, limited but free, either myself or Antony can help you with that. Jenkins is vastly more powerful, but you'll need a machine permanently visible to the internet for maximum usefulness. If you're planning to stay in software development for the next few years, you'll find the effort in setting up a personal Jenkins install easily repays in productivity and an enormous improvement in the quality of code you ship (tip: start with a hypervisor platform on a dedicated machine, it'll save you tons of time during upgrades later. I personally use a Linux distro called Proxmox, it makes managing VMs easy). For example, I am for my day job tracking down a timing bug which occurs less than 4% of the time, to find it I have Jenkins run a soak test every ten minutes and filter out when it fails.

Jenkins is enormously capable and flexible, but is also a badly designed piece of software with a very non-obviously steep learning curve and a counter-intuitive UI. Once you have learned all the stuff they don't document, it's great. For example, all those Boost.AFIO per-target jobs listed at the dashboard above are all automagically generated for me and the VM targets are orchestrated via managed scripts. I originally made the naïve mistake most make of manually configuring separate jobs and then the next most-naïve mistake of thinking the matrix builder is actually useful and not horrendously broken.
As you'll find when formal review comes, to pass isn't about how good your library is, it's about eliminating as many rational objections others can think of.
I hope it's at least _a bit_ about how good the library is. :)
It's very similar in practice to peer review for academic papers at the top journals. Yes, it does have something to do with quality, but fashionable topics play a big part, as does current research funding imperatives, as does ticking many invisible cultural boxes you only know about with experience, as does a bit of luck in which reviewers you get and the mood they are in. You've got the fashionable topics part in spades at least.
Besides, you'll invest five days or so of wishing pain on those responsible for the tools, and once it's working you'll never need touch it again. I did find it took some months to find and fix all the corner cases in the doc output though, and even now PDF generation from AFIO's docs are a joke due to the long template strings.
If I spend 5 days on improving the current documentation, I'll have the best freakin' documentation you could ever wish to have in Boost. I'll favor doing that before BoostBook, and hopefully the quality of the resulting documentation clears up a lot of objections.
The tools for converting doxygen to BoostBook only do reasonably well with reference documentation. They do a poor job with explanatory sections or tutorials. For those you'll probably have to manually rewrite the doxygen markup into quickbook markup.

Niall

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Niall Douglas Sent: 30 July 2014 14:51 To: boost@lists.boost.org Subject: Re: [boost] [GSoC] [Boost.Hana] Formal review request
On 29 Jul 2014 at 19:13, louis_dionne wrote:
No, it's easier than you think. Have a look at https://ci.nedprod.com/ whose default dashboard shows a graph labelled "RUDP performance". This tracks performance of a build over time to ensure performance doesn't regress. All you need is for your performance test tool to output some CSV, a Jenkins Plot plugin does the rest.
That's pretty cool!
I'm getting ever keener on traffic light dashboards the older I become, probably due to failing memory :). The dashboard at https://ci.nedprod.com/view/Boost.AFIO/ tells me everything I need to know about the present state of AFIO.
Mastering Jenkins takes months, but once mastered configuring all sorts of test scenarios becomes trivial. I'd actively merge your Jenkins/Travis output into your docs too, it is nowadays an online world.
I don't have months, but if someone is willing to help I'll collaborate. The current build system is set up with CMake; surely it integrates easily with Jenkins?
Indeed it does, but you've just raised another likely problem for peer review. Some will argue that you need to use Boost.Build throughout. It's a bit like the BoostBook docs here, it's about ticking off boxes.
Regarding modern CI testing for Boost, well there isn't any, you're expected to provide your own. Travis is clunky, limited but free, either myself or Antony can help you with that. Jenkins is vastly more powerful, but you'll need a machine permanently visible to the internet for maximum usefulness. If you're planning to stay in software development for the next few years, you'll find the effort in setting up a personal Jenkins install easily repays in productivity and an enormous improvement in the quality of code you ship (tip: start with a hypervisor platform on a dedicated machine, it'll save you tons of time during upgrades later. I personally use a Linux distro called Proxmox, it makes managing VMs easy). For example, I am for my day job tracking down a timing bug which occurs less than 4% of the time, to find it I have Jenkins run a soak test every ten minutes and filter out when it fails.
Jenkins is enormously capable and flexible, but is also a badly designed piece of software with a very non-obviously steep learning curve and a counter-intuitive UI. Once you have learned all the stuff they don't document, it's great. For example, all those Boost.AFIO per-target jobs listed at the dashboard above are all automagically generated for me and the VM targets are orchestrated via managed scripts, I originally made the naïve mistake most make of manually configuring separate jobs and then the next most-naive mistake of thinking the matrix builder is actually useful and not horrendously broken.
As you'll find when formal review comes, to pass isn't about how good your library is, it's about eliminating as many rational objections others can think of.
I hope it's at least _a bit_ about how good the library is. :)
It's very similar in practice to peer review for academic papers at the top journals. Yes, it does have something to do with quality, but fashionable topics play a big part, as does current research funding imperatives, as does ticking many invisible cultural boxes you only know about with experience, as does a bit of luck in which reviewers you get and the mood they are in. You've got the fashionable topics part in spades at least.
Besides, you'll invest five days or so of wishing pain on those responsible for the tools, and once it's working you'll never need touch it again. I did find it took some months to find and fix all the corner cases in the doc output though, and even now PDF generation from AFIO's docs are a joke due to the long template strings.
If I spend 5 days on improving the current documentation, I'll have the best freakin' documentation you could ever wish to have in Boost. I'll favor doing that before BoostBook, and hopefully the quality of the resulting documentation clears up a lot of objections.
The tools for converting doxygen to BoostBook only do reasonably well with reference documentation. They do a poor job with explanatory sections or tutorials. For those you'll probably have to manually rewrite the doxygen markup into quickbook markup.
For those libraries that use the Quickbook/Doxygen/Autoindex toolchain, it is assumed that Quickbook is used for the tutorial part.

For me, the killer argument for using Quickbook is the ability to use *code snippets*. These ensure that the bits of code you show have actually been through the (current) compiler.

I see no problems converting to Quickbook. Conversion is not too painful - especially if you have done it before. However, I don't think you should worry about the docs now - they are fine for a review. IMO your priority is to get some real-life users able to make an informed review.

Paul

PS For me, the library name is a big turn off - it doesn't say what the library does. And, for me, the names of functions are enough to condemn the library as unacceptable for Boost. This really, really impressive and obviously really useful library is for C++ users, not Haskell users.

--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
Paul A. Bristow writes:
[...]
For me, the killer argument for using Quickbook is the ability to use *code snippets*. These ensure that the bits of code you show have actually been through the (current) compiler.
That's what I currently do with Doxygen; all the snippets in the reference are taken from the example/ subdirectory of the project, and all those files are compiled and run just like the other unit tests. It's really great.
I see no problems converting to Quickbook. Conversion is not too painful - especially if you have done it before.
However, I don't think you should worry about the docs now - they are fine for a review.
IMO your priority is to get some real-life users able to make an informed review.
Agreed.
Paul
PS For me, the library name is a big turn off - it doesn't say what the library does.
Heterogeneous combiNAtors. I agree the mapping is not as direct as, say, MPL, but it still beats Spirit, Phoenix and Proto (unless those are non-obvious acronyms).
And, for me, the names of functions are enough to condemn the library as unacceptable for Boost.
I have reasons to be uncomfortable with changing names in the library:

- While names are inconsistent with the usual C++, they are consistent inside the library. Changing one name can break this consistency if I'm not careful. Further, there are names which just don't have a C++-friendly name, so I can't hope to change _all_ the names. We'll have to deal with either 100% FP names or 50% C++ / 50% FP, but 100% C++ just can't be done.

- Some C++ names imply some kind of mutation of the structure. Since Hana does not do any mutation, I must be careful not to choose a name that suggests something that's not the reality (see the sketch below).
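To illustrate the second point, a minimal sketch using std::tuple; the name `appended` is hypothetical, chosen precisely because a name like push_back would suggest in-place mutation:

    #include <tuple>
    #include <utility>

    // Functional-style "insertion": nothing is mutated; a *new* sequence
    // is returned, so a mutation-flavoured name would be misleading.
    template <typename Tuple, typename X>
    constexpr auto appended(Tuple xs, X x) {
        return std::tuple_cat(std::move(xs), std::make_tuple(std::move(x)));
    }

    // auto xs = std::make_tuple(1, 'a');
    // auto ys = appended(xs, 3.14); // xs is unchanged; ys has 3 elements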
This really, really impressive and obviously really useful library is for C++ users, not Haskell users.
Louis
On Wed, 30 Jul 2014 13:19:44 -0700, Louis Dionne wrote:
Paul A. Bristow writes:
[snip]
And, for me, the names of functions are enough to condemn the library as unacceptable for Boost.
I have reasons to be uncomfortable with changing names in the library:
- While names are inconsistent with the usual C++, they are consistent inside the library. Changing one name can break this consistency if I'm not careful. Further, there are names which just don't have a C++-friendly name, so I can't hope to change _all_ the names. We'll have to deal with either 100% FP names or 50% C++ / 50% FP, but 100% C++ just can't be done.
- Some C++ names imply some kind of mutation of the structure. Since Hana does not do any mutation, I must be careful not to choose a name that suggests something that's not the reality.
If you haven't/aren't already doing so, you can explicitly document:

1) That this is an FP library.

2) Why it is so and how this impacts the naming and design. Explicitly, why a non-FP approach would make for a sub-par library.

3) When any unfamiliar (to C++ programmers) names are first introduced, provide (the approximate?) C++ mapping.

A part of the documentation may just be about educating an FP-illiterate audience. It would also help if some of the C++14 concepts were explained, or at the least their use highlighted with appropriate links to references.

One last thought: as Joe-everyday programmer, why should I care about this library? What does it allow me to do (or do easier) that I wasn't able to do before?

My thoughts, Mostafa
Mostafa writes:
On Wed, 30 Jul 2014 13:19:44 -0700, Louis Dionne wrote:
[...]
If you haven't/aren't already doing so, you can explicitly document:
1) That this is an FP library.
2) Why it is so and how this impacts the naming and design. Explicitly, why a non-FP approach would make for a sub-par library.
3) When any unfamiliar (to C++ programmers) names are first introduced, provide (the approximate?) C++ mapping.
Hmmm. Yes, it definitely makes sense to at least put a warning sign and to give rationales. That would basically do the job I'm currently doing on this list, but once and for all. Will do.
A part of the documentation may just be about educating an FP-illiterate audience. It would also help if some of the C++14 concepts were explained, or at the least their use highlighted with appropriate links to references.
I won't write a tutorial on C++14, but I'll put relevant links when I use something that's new in the language. Thanks for the suggestion, I hadn't thought of that.
One last thought: as Joe-everyday programmer, why should I care about this library? What does it allow me to do (or do easier) that I wasn't able to do before?
If you either use MPL/Fusion or do any non-trivial std::tuple manipulation, Hana will reduce the complexity of your metaprogramming code by giving you high level abstractions at low compile-time cost. For example, instead of doing like those folks did here http://stackoverflow.com/a/20441189/627587 just to unpack a std::tuple into a variadic function, you would write

    unpack(f, std::make_tuple(...)) == f(...)

with Hana. And it's going to compile just as fast as the hand-written version. If you're willing to use the tuple provided by Hana, it's going to compile much, much faster. If you don't do any kind of metaprogramming, Hana is not for you.
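For contrast, here is roughly what the hand-written C++14 version from that Stack Overflow answer looks like; this is a sketch of the general technique, not Hana's implementation:

    #include <cstddef>
    #include <tuple>
    #include <utility>

    template <typename F, typename Tuple, std::size_t ...i>
    constexpr decltype(auto) unpack_impl(F&& f, Tuple&& xs,
                                         std::index_sequence<i...>) {
        // Expand the tuple's indices and forward each element to f.
        return std::forward<F>(f)(std::get<i>(std::forward<Tuple>(xs))...);
    }

    template <typename F, typename Tuple>
    constexpr decltype(auto) unpack(F&& f, Tuple&& xs) {
        return unpack_impl(std::forward<F>(f), std::forward<Tuple>(xs),
            std::make_index_sequence<
                std::tuple_size<std::decay_t<Tuple>>::value>{});
    }

    // unpack([](int a, char b) { return a + int(b); },
    //        std::make_tuple(1, 'a'));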
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Louis Dionne Sent: 30 July 2014 21:20 To: boost@lists.boost.org Subject: Re: [boost] [GSoC] [Boost.Hana] Formal review request
Paul A. Bristow writes:
[...]
For me, the killer argument for using Quickbook is the ability to use *code snippets*. These ensure that the bits of code you show have actually been through the (current) compiler.
That's what I currently do with Doxygen; all the snippets in the reference are taken from the example/ subdirectory of the project, and all those files are compiled and run just like the other unit tests. It's really great.
Excellent! (Are they *live* links, updating the docs when you modify the example?)

How about an alphabetic index of functions, names, words, concepts, ideas ...?

We are used to using the index in books, but they are less common in e-docs. Despite hyperlinks, navigation and search are still troublesome. I find the index really useful when dealing with the monster Boost.Math docs - and I wrote some of it!
PS For me, the library name is a big turn off - it doesn't say what the library does.
Heterogeneous combiNAtors
I see. But I still don't like it (nor do I like Fusion or Proto, but that's the author's right).

Internal function names are much more important. As Eric wisely remarks, "Names are Hard", but "Names are Important!" Saying *why* is going to be vital.

Paul

--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
Paul A. Bristow writes:
[...]
That's what I currently do with Doxygen; all the snippets in the reference are taken from the example/ subdirectory of the project, and all those files are compiled and run just like the other unit tests. It's really great.
Excellent! (Are they *live* links, updating the docs when you modify the example?)
Well, I currently update the documentation by hand:

    make gh-pages.update
    cd doc/gh-pages && git push

The first command regenerates the doc and creates the commit on the gh-pages branch, and then I push it after a quick validation on my part. So the examples are always up-to-date, but the whole process could surely be more automatic. Something like a post commit hook could regenerate the doc, but I don't know how to set that up and I've more urgent things to do right now.
How about an alphabetic index of functions, names, words, concepts, ideas ...?
We are used to using the index in books, but they are less common in e-docs. Despite hyperlinks, navigation and search are still troublesome. I find the index really useful when dealing with the monster Boost.Math docs - and I wrote some of it!
I disabled the index because I thought the documentation was best browsed by using the type classes structure. Of course, this implies that one knows which methods are in which type class, and that's circular. I'll see if the Doxygen-generated index makes some sense and I'll re-enable it if it does.
PS For me, the library name is a big turn off - it doesn't say what the library does.
Heterogeneous combiNAtors
I see. But I still don't like it (nor do I like Fusion or Proto, but that's the author's right).
Internal function names are much more important.
As Eric wisely remarks, "Names are Hard", but "Names are Important!"
Saying *why* is going to be vital.
Yes, I'll document that design decision and I'll change some names when it does not break the internal consistency and makes it easier for C++ers. Louis
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Louis Dionne Sent: 31 July 2014 14:16 To: boost@lists.boost.org Subject: Re: [boost] [GSoC] [Boost.Hana] Formal review request
Paul A. Bristow writes:
Excellent! (Are they *live* links, updating the docs when you modify the example?)
Well, I currently update the documentation by hand:
make gh-pages.update
cd doc/gh-pages && git push
The first command regenerates the doc and creates the commit on the gh-pages branch, and then I push it after a quick validation on my part. So the examples are always up-to-date, but the whole process could surely be more automatic. Something like a post commit hook could regenerate the doc, but I don't know how to set that up and I've more urgent things to do right now.
That isn't quite what Quickbook does - it marks the start and end of the snippet in the example .cpp. So What You Mark is What You Get - always. No eyeballing!
How about an alphabetic index of functions, names, words, concepts, ideas ...?
We are used to using the index in books, but they are less common in e-docs. Despite hyperlinks, navigation and search are still troublesome. I find the index really useful when dealing with the monster Boost.Math docs - and I wrote some of it!
I disabled the index because I thought the documentation was best browsed by using the type classes structure. Of course, this implies that one knows which methods are in which type class, and that's circular. I'll see if the Doxygen-generated index makes some sense and I'll re-enable it if it does.
I recommend this - it makes the docs bigger, but who cares? But the Doxygen index isn't an index like a book's - to words etc. in the body of the text. So if you don't know the name of the function, you are stuck. Most of us are blindly using a Famous Search Engine - with mixed results in my experience.

However, your docs are at least as good as most, and I think this issue of finding things is still largely unsolved. I'd concentrate on other issues.

Good luck

Paul
Paul A. Bristow writes:
[...]
The first command regenerates the doc and creates the commit on the gh-pages branch, and then I push it after a quick validation on my part. So the examples are always up-to-date, but the whole process could surely be more automatic. Something like a post commit hook could regenerate the doc, but I don't know how to set that up and I've more urgent things to do right now.
That isn't quite what Quickbook does - it marks the start and end of the snippet in the example .cpp.
Yes, Doxygen does the same. I meant I examined the output to make sure I did not screw things up myself, not to make sure the right example code was included. My comment blocks look like

    /*!
     * Brief description
     *
     * Detailed description
     *
     * @snippet example/foobar.cpp some_tag
     */

and then, I have in example/foobar.cpp:

    //! [some_tag]
    some example code here
    //! [some_tag]

Doxygen will include verbatim everything inside both tags. I'm pretty sure we were talking about the same thing.
[...]
However, your docs are at least as good as most, and I think this issue of finding things is still largely unsolved.
I'd concentrate on other issues.
I'll keep on improving the documentation and check off TODOs on my list (it has grown quite large with all the discussions).
Good luck
Thanks! Louis
Edward Diener writes:
[...]
But I do not understand from your documentation how your library relates to Boost MPL. The Boost MPL library is about manipulating and creating types at compile time, and creating logic paths, again at compile time, to manipulate types. All of this is encapsulated by the MPL's notion of metafunctions to do compile-time programming. But I cannot get from your documentation any MPL equivalence of how any of this is done with your library. Is your library meant to duplicate/replace this MPL functionality in some way? Or is it meant to do something else entirely, not related to compile-time programming?
I had started to reply with a lengthy explanation of how it works, but that would not fix the problem because it is now clear that my documentation _is_ the problem. I'll improve it, make it more explicit and then post back.
I am only asking this because I had been told that your library is at the least a replacement for MPL compile-time type manipulation functionality using C++11 on up.
It is, and it's also a replacement for Fusion and hence many other metaprogramming utilities. In particular, I'll write a cheatsheet showing how to do everything in the MPL and everything in Fusion with Hana. Thank you for taking the time to read the documentation (I assume you did) and give me your feedback. Regards, Louis
On 7/29/2014 12:23 PM, Louis Dionne wrote:
Edward Diener
writes: [...]
But I do not understand from your documentation how your library relates to Boost MPL. The Boost MPL library is about manipulating and creating types at compile time, and creating logic paths again at compile time to manipulate types. All of this is encapsulated by the MPL's notion of metafunctions to do compile-time programming. But I cannot get from your documentation any of the MPL equivalence of how any of this is done with your library. Is your library meant to duplicate/replace this MPL functionality in some way? Or is it meant to do something else entirely not related to compile-time programming?
I had started to reply with a lengthy explanation of how it works, but that would not fix the problem because it is now clear that my documentation _is_ the problem. I'll improve it, make it more explicit and then post back.
I am only asking this because I had been told that your library is at the least a replacement for MPL compile-time type manipulation functionality using C++11 on up.
It is, and it's also a replacement for Fusion and hence many other metaprogramming utilities. In particular, I'll write a cheatsheet showing how to do everything in the MPL and everything in Fusion with Hana.
Thank you for taking the time to read the documentation (I assume you did) and give me your feedback.
Please consider also that you are using terminology in the present documentation which does not relate to C++ very well, although it may relate to Haskell for all I know. If you rework the documentation please use C++ terminology for things. As an example you referred to Type<T> as an object when in C++ it is a class. You also refer to type classes and data classes, but C++ has no specific meaning for either of those terms. Also, while it is useful to give syntactical examples, you should explain what is occurring in C++ and not in some hypothetical Haskell-like functional programming language which a C++ programmer does not know. In other words, no matter how brilliant or clever your library or your constructs are, you need to explain what functionality accomplishes what, and not just that some syntax does something while another similar syntax does something else entirely. Because in actual use, if a user of your library chooses some syntax without understanding what the underlying functionality actually accomplishes, it is way too easy to do the wrong thing in more complicated scenarios than what your examples provide.
Edward Diener
[...]
Please consider also that you are using terminology in the present documentation which does not relate to C++ very well, although it may relate to Haskell for all I know. If you rework the documentation please use C++ terminology for things. As an example you referred to Type<T> as an object when in C++ it is a class.
I am confused; `Type<T>` does not exist in Hana, so I doubt I mention it anywhere unless I made an error. There's `type<T>`, which is a variable template, and `Type`, which is a C++ struct.
You also refer to type classes and data classes but C++ has no specific meaning for either of those terms.
What did people say when they first heard about:

- Proto transforms
- MPL metafunctions / metafunction classes
- MPL/Fusion tags
- any domain-specific term introduced in a library (Spirit has a lot of them)

Surely C++ had no specific meaning for any of those terms before they were introduced. With each library comes some terminology, and "type class" and "data type" are just that; new concepts for which names had to be chosen. That being said, type classes feel like C++ concepts and data types feel like Fusion/MPL tags, so I'll seriously consider renaming them to that if it can make people happier. I'm a bit worried about renaming "type classes" to "concepts" though, as it could lead to confusion. What do you think?
Also while it is useful to give syntactical examples you should explain what is occurring in C++ and not in some hypothetical Haskell-like functional programming language which a C++ programmer does not know. In other words no matter how brilliant or clever your library or your constructs are, you need to explain what functionality does what things and not just that some syntax does something but another similar syntax does something else entirely.
Are you referring to something specific in the current documentation, or is this just general advice?
Because in actual use if a user of your library chooses to use some syntax, without understanding what the underlying functionality actually accomplishes, it is way too easy to do the wrong thing in more complicated scenarios than what your examples provide.
Regards, Louis
On 7/29/2014 9:38 PM, Louis Dionne wrote:
Edward Diener
writes: [...]
Please consider also that you are using terminology in the present documentation which does not relate to C++ very well, although it may relate to Haskell for all I know. If you rework the documentation please use C++ terminology for things. As an example you referred to Type<T> as an object when in C++ it is a class.
I am confused; `Type<T>` does not exist in Hana, so I doubt I mention it anywhere unless I made an error. There's `type<T>`, which is a variable template, and `Type`, which is a C++ struct.
Yes it is 'type<T>' and not 'Type<T>'. But... "First, type is a variable template, and type<T> is an object representing the C++ type T." The syntax 'type<T>' normally means a class in C++.
You also refer to type classes and data classes but C++ has no specific meaning for either of those terms.
What did people say when they first heard about:

- Proto transforms
- MPL metafunctions / metafunction classes
- MPL/Fusion tags
- any domain-specific term introduced in a library (Spirit has a lot of them)
Surely C++ had no specific meaning for any of those terms before they were introduced. With each library comes some terminology, and "type class" and "data type" are just that; new concepts for which names had to be chosen.
That being said, type classes feel like C++ concepts and data types feel like Fusion/MPL tags, so I'll seriously consider renaming them to that if it can make people happier. I'm a bit worried about renaming "type classes" to "concepts" though, as it could lead to confusion. What do you think?
I have no objection to what you call them. I just felt that you should explain them as thoroughly as you feel possible (remember, you may know what you have designed, but to others this is new ground) before examples and not after.
Also while it is useful to give syntactical examples you should explain what is occurring in C++ and not in some hypothetical Haskell-like functional programming language which a C++ programmer does not know. In other words no matter how brilliant or clever your library or your constructs are, you need to explain what functionality does what things and not just that some syntax does something but another similar syntax does something else entirely.
Are you referring to something specific in the current documentation, or is this just general advice?
General advice. It mainly reflects that your method of explaining the conceptual elements of your syntax is very difficult for me, but may be welcomed by most others. I am just mentally incapable of understanding syntactical examples before I understand thoroughly the conceptual elements which the example is about. I will reread the doc again to see if I can get anywhere. Thanks for your efforts !
Because in actual use if a user of your library chooses to use some syntax, without understanding what the underlying functionality actually accomplishes, it is way too easy to do the wrong thing in more complicated scenarios than what your examples provide.
2014-07-30 11:47 GMT+08:00 Edward Diener
On 7/29/2014 9:38 PM, Louis Dionne wrote:
Edward Diener
writes: [...]
Please consider also that you are using terminology in the present documentation which does not relate to C++ very well, although it may relate to Haskell for all I know. If you rework the documentation please use C++ terminology for things. As an example you referred to Type<T> as an object when in C++ it is a class.
I am confused; `Type<T>` does not exist in Hana, so I doubt I mention it anywhere unless I made an error. There's `type<T>`, which is a variable template, and `Type`, which is a C++ struct.
Yes it is 'type<T>' and not 'Type<T>'. But...
"First, type is a variable template, and type<T> is an object representing the C++ type T."
The syntax 'type<T>' normally means a class in C++.
I believe it is an object in the C++ meaning; variable templates are a new feature in C++1y: http://en.wikipedia.org/wiki/C%2B%2B14#Variable_templates
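For reference, the canonical example from the linked page shows that a variable template names an object, not a class (a plain C++14 sketch):

    // The canonical C++14 variable template example: pi<T> is a family of
    // constexpr *objects*, one per instantiation of T - not a class template.
    template <typename T>
    constexpr T pi = T(3.1415926535897932385);

    static_assert(pi<int> == 3, "pi<int> is an object of type int");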
Edward Diener
Edward Diener
writes: [...]
Please consider also that you are using terminology in the present documentation which does not relate to C++ very well, although it may relate to Haskell for all I know. If you rework the documentation please use C++ terminology for things. As an example you referred to Type<T> as an object when in C++ it is a class.

On 7/29/2014 9:38 PM, Louis Dionne wrote:
I am confused; `Type<T>` does not exist in Hana, so I doubt I mention it anywhere unless I made an error. There's `type<T>`, which is a variable template, and `Type`, which is a C++ struct.
Yes it is 'type<T>' and not 'Type<T>'. But...
"First, type is a variable template, and type<T> is an object representing the C++ type T."
The syntax 'type<T>' normally means a class in C++.
As Tongari points out correctly, `type<T>` is a variable template, as in the new feature in C++14. Specifically, it is declared as:

    template <typename T>
    constexpr unspecified type{};

Then, `type<T>` is an object equivalent to `unspecified{}`. Perhaps this was the cause of your confusion?
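To illustrate the distinction with a minimal sketch (not Hana's actual implementation; `Type` below is just a stand-in for the unspecified type):

    // Minimal sketch, not Hana's actual implementation: `Type<T>` is a class,
    // while `type<T>` is an *object* of that class.
    template <typename T>
    struct Type { using type = T; };

    template <typename T>
    constexpr Type<T> type{};

    constexpr auto t = type<int>;  // t is an object carrying the type int
    using U = decltype(t)::type;   // U is int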
You also refer to type classes and data classes but C++ has no specific meaning for either of those terms.
What did people say when they first heard about:

- Proto transforms
- MPL metafunctions / metafunction classes
- MPL/Fusion tags
- any domain-specific term introduced in a library (Spirit has a lot of them)
Surely C++ had no specific meaning for any of those terms before they were introduced. With each library comes some terminology, and "type class" and "data type" are just that; new concepts for which names had to be chosen.
That being said, type classes feel like C++ concepts and data types feel like Fusion/MPL tags, so I'll seriously consider renaming them to that if it can make people happier. I'm a bit worried about renaming "type classes" to "concepts" though, as it could lead to confusion. What do you think?
I have no objection to what you call them. I just felt that you should explain them as thoroughly as you feel possible (remember, you may know what you have designed, but to others this is new ground) before examples and not after.
I'll see if there are some places where I throw some code without enough explanation/context and try to make it clearer.
[...]
General advice. It mainly reflects that your method of explaining the conceptual elements of your syntax is very difficult for me, but may be welcomed by most others. I am just mentally incapable of understanding syntactical examples before I understand thoroughly the conceptual elements which the example is about.
I will reread the doc again to see if I can get anywhere. Thanks for your efforts !
Give me a couple of days to work on what has been discussed in the last few days, and then give it a shot. Thank you a lot for your criticism, it is much appreciated. Louis
Le 30/07/14 05:47, Edward Diener a écrit :
On 7/29/2014 9:38 PM, Louis Dionne wrote:
Edward Diener
writes: [...]
Please consider also that you are using terminology in the present documentation which does not relate to C++ very well, although it may relate to Haskell for all I know. If you rework the documentation please use C++ terminology for things. As an example you referred to Type<T> as an object when in C++ it is a class.
I am confused; `Type<T>` does not exist in Hana, so I doubt I mention it anywhere unless I made an error. There's `type<T>`, which is a variable template, and `Type`, which is a C++ struct.
Yes it is 'type<T>' and not 'Type<T>'. But...
"First, type is a variable template, and type<T> is an object representing the C++ type T."
The syntax 'type<T>' normally means a class in C++. type<int> is a class that represents the type int. I find the name type<T> appropriate.
Vicente
Le 21/09/14 16:40, Vicente J. Botet Escriba a écrit :
Le 30/07/14 05:47, Edward Diener a écrit :
On 7/29/2014 9:38 PM, Louis Dionne wrote:
Edward Diener
writes: [...]
Please consider also that you are using terminology in the present documentation which does not relate to C++ very well, although it may relate to Haskell for all I know. If you rework the documentation please use C++ terminology for things. As an example you referred to Type<T> as an object when in C++ it is a class.
I am confused; `Type<T>` does not exist in Hana, so I doubt I mention it anywhere unless I made an error. There's `type<T>`, which is a variable template, and `Type`, which is a C++ struct.
Yes it is 'type<T>' and not 'Type<T>'. But...
"First, type is a variable template, and type<T> is an object representing the C++ type T."
The syntax 'type<T>' normally means a class in C++. type<int> is a class that represents the type int. I find the name type<T> appropriate.
My bad, it seems that type<int> is a variable representing the type int. Vicente
Jul 29, 2014; 3:04am Niall Douglas On 28 Jul 2014 at 23:03, Edward Diener wrote:
But I do not understand from your documentation how your library relates to Boost MPL. The Boost MPL library is about manipulating and creating types at compile time, and creating logic paths again at compile time to manipulate types. All of this is encapsulated by the MPL's notion of metafunctions to do compile-time programming. But I cannot get from your documentation any of the MPL equivalence of how any of this is done with your library. Is your library meant to duplicate/replace this MPL functionality in some way? Or is it meant to do something else entirely not related to compile-time programming?
I am only asking this because I had been told that your library is at the least a replacement for MPL compile-time type manipulation functionality using C++11 on up.
I think a table with MPL98 forms on one side and Hana equivalents on the other would be an enormous help with the learning curve.
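To make the suggestion concrete, such a cheatsheet could contain rows like the ones sketched below. The Hana spellings are assumptions based on its documentation at the time; the interface was explicitly still in flux.

    // Hypothetical cheatsheet rows (the Hana spellings are assumptions and
    // may not match the library exactly):
    //
    //   MPL:  mpl::vector<int, char, void*>
    //   Hana: list(type<int>, type<char>, type<void*>)
    //
    //   MPL:  mpl::transform<Sequence, F>::type
    //   Hana: fmap(f, sequence)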
+1
Also, regarding formal review, I personally would feel uncomfortable accepting a library that only works with a single version of clang. I would feel much happier if you got trunk GCC working, even if that means workarounds.
-1 I would disagree. Boost has always required conformity with the latest C++ standard - current compilers be damned. Of course a number of libraries don't make much sense if they're not widely compilable, and those authors have accommodated this. But a library such as this doesn't require such wide compatibility to be useful. No one is going to rip out the MPL in a current library and replace it with Hana. A newer app or library will (should) be using a current compiler and just Hana from the start. So I think any requirement/request to support some older compiler is needlessly burdensome and not worth the effort. Were Hana to get accepted, it wouldn't appear in boost for at least a year, and by that time C++14 should be available to anyone who wants to depend upon it.
I'd also like to see unit testing that verified that the current compiler being tested has a time and space benchmark curve matching what is expected. It is too easy for code to slip in or the compilers themselves to gain a bug which creates pathological metaprogramming performance. Better to have Travis CI trap that for you than head scratching and surprises later.
I would also like to see a test dashboard of some sort implemented. Especially since it's unclear which compilers can compile this thing.
I'd like to see some mention in the docs of how to use Hana with that metaprogramming debugger from that German fellow. He presented at C++Now.
lol - well there's lots of things I'd like to see - but I think it's not a great idea to request/require some feature/support which depends upon something that isn't in boost and might require significant work to add. Let's make it work before we make it better. Jul 29, 2014; 9:10am Gonzalo BG wrote
The documentation is awesome, thanks! I liked the inline discussions that relate the library with Fusion and MPL, and in particular your use of variable templates (e.g. type<>).
awesome is way too generous. needs work. On Tue, Jul 29, 2014 at 10:02 AM, Louis Dionne wrote:
Is it mandatory for a Boost library to have BoostBook documentation? I'd like to stay as mainstream as possible in the tools I use and reduce the number of steps in the build/documentation process for the sake of simplicity. Is there a gain in generating the documentation in BoostBook?
It's not mandatory as far as I know either. Boost documentation is all over the place - from terrible to incredible. And documentation tools have evolved. Library authors haven't often migrated tools once the documentation is done. Your Doxygen is a lot better than most I've seen.

Misc notes regarding boost documentation.

a) In the beginning, boost documentation was just html - a lot of it still is just that - e.g. the serialization library.

b) Then came along BoostBook. Very attractive because it decoupled documentation content from the formatting, which allows the creation of PDF and HTML from the same "source". It was on the DocBook "wave" and seemed the way of the future. The tool chain was a pain to set up (and likely still is), but once set up it works pretty well.

c) But BoostBook was a pain to edit - then came along QuickBook, which defined its own markup (yet another one!) but produced DocBook source. This addressed the BoostBook editing pain but kept the BoostBook advantages. The first workable combination which addressed most needs.

d) Some libraries (e.g. Geometry, Units) have incorporated Doxygen into this mix due to the convenience of keeping the reference information along with the code. Basically this amounts to a poor man's literate programming system. This is a lot easier for the programmer than trying to maintain two separate markups.

e) Somewhere along the line Eric Niebler let loose an email along the lines of "I can't stand it any more" criticizing boost documentation. Nothing much came of it, but it did affect me and convinced me that what we needed was more formal usage of C++ concepts and a requirement that new libraries implement and document them where appropriate. I've been flogging this when/where I can - most recently in the Boost Incubator pages. So far I haven't made much headway, but I'm only 66 so this won't go away for some time yet.

Looking at your documentation, I realize now that you've implemented C++ concepts under the name TypeClasses (sounds vaguely familiar from the 20 times I read the Haskell book trying to figure out what a monad is). And you have fairly extensive information about what TypeClass means - it means C++ concept / type requirement. I would have much preferred that you had explicitly used the boost concept check library. I don't know if your code actually checks the concepts. And I have questions about particular concepts: e.g. Searchable - as opposed to more basic traits - we'll arm wrestle over that later.

Given that boost has had no explicit requirements for documentation toolsets, it would be unfair to start imposing them ex post facto. So I would say if you can get the content of the documentation to pass the review, we could live with your Doxygen version - even though I personally don't like it.

So the more I look at this the more I like it, and the more work I think it needs.

Robert Ramey
Robert Ramey
[...]
I'd also like to see unit testing that verified that the current compiler being tested has a time and space benchmark curve matching what is expected. It is too easy for code to slip in or the compilers themselves to gain a bug which creates pathological metaprogramming performance. Better to have Travis CI trap that for you than head scratching and surprises later.
I would also like to see a test dashboard of some sort implemented.
I wanted to do it, but then I could not register on my.cdash.org. After a couple of failed attempts, I went back to fixing other stuff. I'll try again.
Especially since it's unclear which compilers can compile this thing.
It's very clear which compilers can compile it; only Clang trunk, and you have to ask politely or it segfaults. :)
[...]
Jul 29, 2014; 9:10am Gonzalo BG wrote
The documentation is awesome, thanks! I liked the inline discussions that relate the library with Fusion and MPL, and in particular your use of variable templates (e.g. type<>).
awesome is way too generous. needs work.
Agreed, I'm working on it.
On Tue, Jul 29, 2014 at 10:02 AM, Louis Dionne wrote:
Is it mandatory for a Boost library to have BoostBook documentation? I'd like to stay as mainstream as possible in the tools I use and reduce the number of steps in the build/documentation process for the sake of simplicity. Is there a gain in generating the documentation in BoostBook?
It's not mandatory as far as I know either. Boost documentation is all over the place - from terrible to incredible. And documentation tools have evolved. Library authors haven't often migrated tools once the documentation is done. Your Doxygen is a lot better than most I've seen.
If it's not mandatory, I'd rather spend time making the documentation better than wrestling with a new system to generate it.
[...]
Looking at your documentation, I realize now that you've implemented C++ concepts under the name TypeClasses (sounds vaguely familiar from the 20 times I read the Haskell book trying to figure out what a monad is). And you have fairly extensive information about what TypeClass means - it means C++ concept / type requirement.
I would have much preferred that you had explicitly used boost concept check library. I don't know if your code actually checks the concepts.
Hana type classes and C++ concepts are _related_, but the resemblance stops there. In terms of functionality, Hana type classes are much closer to the Fusion and MPL tag dispatching systems, but 10x more powerful. Using the ConceptCheck library was out of the question for two reasons:

- I don't need its functionality
- Even if I had needed its functionality, Hana is a core library, a library used to build other libraries. I want to keep the dependencies as low as possible, and I have managed to depend on nothing at all (right now the standard library isn't even used, but that might change).

Here are some reasons for me taking such a radical 0-dependency approach:

+ The additional work to remove dependencies was almost trivial; it's basically one small header file.
+ Contrast this with the cyclic dependency problems introduced by the MPL and other core libraries, and I'm glad Hana is standalone.
+ Every time you include a Boost header, it includes many other headers. That's a sad reality and it's the price to pay for portability and reuse, I understand this. However, Hana has about 5k SLOC and it needs to stay small to keep the include times low.
And I have questions about particular concepts: e.g. Searchable - as opposed to more basic traits - we'll arm wrestle over that later.
I'm especially satisfied with that type class; it's not present in Haskell, and it makes it possible to search infinite structures by defining only two methods. Those familiar with Haskell will observe that Searchable contains the part of Haskell's Foldable which actually uses laziness.
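To illustrate the laziness argument with a toy sketch (this is not Hana's interface): a search inspects elements only until the first match, so a conceptually infinite structure is searchable whenever a match exists.

    // Toy sketch, not Hana's interface: searching the conceptually infinite
    // sequence n, n+1, n+2, ... inspects elements only up to the first match,
    // so the search terminates whenever a match exists.
    template <typename Pred>
    int find_from(int n, Pred pred) {
        return pred(n) ? n : find_from(n + 1, pred);
    }

    // find_from(0, [](int n) { return n * n > 50; }) yields 8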
Given that boost has had no explicit requirements for documentation toolsets, it would be unfair to start imposing them ex post facto. So I would say if you can get the content of the documentation to pass the review, we could live with your Doxygen version - even though I personally don't like it.
So the more I look at this the more I like it, and the more work I think it needs.
Thanks for the comments! Louis
On 29 Jul 2014 at 13:41, Robert Ramey wrote:
of it, but it did affect me and convinced me that what we needed was more formal usage of C++ concepts and a requirement that new libraries implement and document them where appropriate. I've been flogging this when/where I can - most recently in the Boost Incubator pages. So far I haven't made much headway, but I'm only 66 so this won't go away for some time yet.
Everywhere I've seen Concepts used I've found it significantly worsens the documentation for them because it provides an excuse to even further scatter important information across even more pages. ASIO is a classic example here. I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help.
Looking at your documentation, I realize now that you've implemented C++ concepts under the name TypeClasses (sounds vaguely familiar from the 20 times I read the haskel book trying to figure out what a monad is). And you have fairly extensive information about what TypeClass means - it means C++ concept / type requirement. I would have much preferred that you had explicitly used boost concept check library.
No, his are much more useful and functional, unsurprising as he really pushes out constexpr and generic lambdas. And he's left the door open to eventually use C++17 concepts if I understand correctly. I think he's done great here, quite astonishing for someone without decades of experience under his belt. I certainly think his approach of benchmarking all implementation options before beginning has paid off hugely in the design. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 07/29/2014 05:14 PM, Niall Douglas wrote:
I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help.
Wow, I couldn't disagree more. I can't imagine how the standard algorithms would be specified without the use of concepts like "RandomAccessIterator", for instance. Clustering requirements into meaningful abstractions and assigning them names makes it possible to document library interfaces without an explosion of verbosity and repetition. \e
Eric Niebler-4 wrote
On 07/29/2014 05:14 PM, Niall Douglas wrote:
I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help.
Wow, I couldn't disagree more. I can't imagine how the standard algorithms would be specified without the use of concepts like "RandomAccessIterator", for instance. Clustering requirements into meaningful abstractions and assigning them names makes it possible to document library interfaces without an explosion of verbosity and repetition.
+10

Usage of concepts is greatly:

a) misunderstood
b) misunderestimated as to their value in design AND documentation
d) The word "concepts" is a big contributor to the problem - substitute "type requirements" or "type constraints" for concepts.
c) usage of concepts is much confused with implementation of concepts. Usage of "type constraints" doesn't require any special support from C++. static_assert with type traits is usually all that is necessary.
e) recent papers using examples such as "Sortable" add more bad advice and confusion.

I've included a page in the Boost Incubator to promote my views on the subject - if anyone cares.

http://rrsd.com/blincubator.com/advice_concepts/

The lack of "type constraints" in documentation and code is a big contributor to problems in boost documentation, software design and implementation.

Robert Ramey
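As a concrete illustration of item c), here is what such a static_assert-based type constraint can look like (a sketch; `my_sort` is a made-up name):

    // Item c) in practice: a "type constraint" enforced with static_assert
    // and standard type traits only. my_sort is a made-up name.
    #include <iterator>
    #include <type_traits>

    template <typename It>
    void my_sort(It first, It last) {
        static_assert(
            std::is_base_of<
                std::random_access_iterator_tag,
                typename std::iterator_traits<It>::iterator_category>::value,
            "my_sort requires random access iterators");
        // ... sorting logic elided ...
    }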
On 2014-07-31 19:54, Robert Ramey wrote:
Eric Niebler-4 wrote
On 07/29/2014 05:14 PM, Niall Douglas wrote:
I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help. Wow, I couldn't disagree more. I can't imagine how the standard algorithms would be specified without the use of concepts like "RandomAccessIterator", for instance. Clustering requirements into meaningful abstractions and assigning them names makes it possible to document library interfaces without an explosion of verbosity and repetition. +10 +10
Can't wait till Concepts Lite are there!
Usage of concepts is greatly:
a) misunderstood b) misunderestimated as to their value in design AND documentation
Right, last year, I heard several people basically asking "why on earth would anyone want this?" after listening to a talk about Concepts Lite. It took me about an hour at dinner to convince some of them that it might be worth looking into the topic a bit more...
d) The word "concepts" is a big contributor to the problem - substitute "type requirements" or "type constraints" for concepts. I like "type constraints" best. c) usage of concepts is much confused with implementation of concepts. Usage of "type constraints" doesn't require any special support from C++. static_assert with type traits is usually all that is necessary. e) recent papers using examples such as "Sortable" add more bad advice and confusion.
I've included a page in the Boost Incubator to promote my views on the subject - if anyone cares.
http://rrsd.com/blincubator.com/advice_concepts/ Very nice compilation!
I am impressed by Boost Concept Check Library (the second link to it is broken in your document, btw). But I feel rather helpless when stumbling over problems (for example, your code does not compile with the admittedly ancient boost 1.46 on my machine and I have no idea what to do with the compile errors). I am therefore rather sticking with static_assert to enforce constraints with friendly error messages for the time being until Concepts Lite are available as TS or part of the standard. Cheers, Roland
On 1 Aug 2014 at 11:39, Roland Bock wrote:
Usage of concepts is greatly:
a) misunderstood b) misunderestimated as to their value in design AND documentation Right, last year, I heard several people basically asking "why on earth would anyone want this?" after listening to a talk about Concepts Lite. It took me a about an hour at dinner to convince some of them that it might be worth looking into the topic a bit more...
d) The word "concepts" is a big contributor to the problem - substitute "type requirements" or "type constraints" for concepts. I like "type constraints" best.
FYI Concepts Lite isn't purely a type constraint system. It can also be used to type specialise in a much more general way than partial type specialisation and eliminate a great many std::enable_if<> or equivalents. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
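As a rough sketch of what Niall describes, in the draft Concepts Lite syntax of the time (the exact spelling was still moving, and the concept names here are assumptions), two constrained overloads replace enable_if and the more specialized one wins:

    // Rough sketch in draft Concepts Lite syntax; ForwardIterator and
    // RandomAccessIterator are assumed concept names.
    template <typename I>
        requires ForwardIterator<I>()
    void advance(I i, int n);   // chosen for plain forward iterators

    template <typename I>
        requires RandomAccessIterator<I>()
    void advance(I i, int n);   // preferred when I is also random access;
                                // no std::enable_if<> gymnastics required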
FYI Concepts Lite isn't purely a type constraint system. It can also be used to type specialise in a much more general way than partial type specialisation and eliminate a great many std::enable_if<> or equivalents.
Concepts Lite provides native support for type traits and enable_if, which it can use to try and improve overloading. Of course, this has the disadvantage of preventing specializations on the type traits (or `concept bool` as they are called in the proposal). However, conditional overloading is a very simple way to solve the overloading problem, and doesn't prevent specializations. Also, with Concepts Lite you would still use an equivalent of enable_if; it will just be either a `requires` clause or a replacement of the template or type parameter.
I am therefore rather sticking with static_assert to enforce constraints with friendly error messages for the time being until Concepts Lite are available as TS or part of the standard.
Using static_assert to enforce constraints can become problematic when used with concept predicates. That is because static_assert causes a hard error. So, when it's combined with function overloading, I would get a friendly error message rather than the compiler calling the alternative function. This can be worked around by specializing the trait, which is possible with Tick, but not with `concept bool`. A lot of these problems will start showing up as more people start using concept predicates in C++11 and beyond. The difference between a hard error and a template constraint is not really fully understood or utilized by many libraries. So to be prepared for the future you should use enable_if, which is a template constraint, instead of static_assert, which just produces an error. Most modern compilers will produce nice friendly messages for enable_if.
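A minimal sketch of the difference being described, with a hypothetical `is_serializable` trait: the static_assert version hard-errors as soon as its overload is picked, while the enable_if version simply drops out of the overload set.

    // Sketch of the difference; is_serializable is a hypothetical trait.
    #include <type_traits>

    template <typename T> struct is_serializable : std::false_type {};

    // Hard error: once this overload is selected, the static_assert fires
    // and compilation stops - no alternative overload gets a chance.
    template <typename T>
    void save(T const& x) {
        static_assert(is_serializable<T>::value, "T is not serializable");
        // ...
    }

    // Template constraint: when the trait is false, this overload silently
    // drops out of the overload set and another one can be chosen instead.
    template <typename T,
              typename std::enable_if<is_serializable<T>::value, int>::type = 0>
    void save_constrained(T const& x);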
On 2014-08-01 13:27, pfultz2 wrote:
I am therefore rather sticking with static_assert to enforce constraints with friendly error messages for the time being until Concepts Lite are available as TS or part of the standard.

Using static_assert to enforce constraints can become problematic when used with concept predicates. That is because static_assert causes a hard error. So, when it's combined with function overloading, I would get a friendly error message rather than the compiler calling the alternative function. This can be worked around by specializing the trait, which is possible with Tick, but not with `concept bool`.

Sure. My answer was probably too short. I use enable_if, too, of course, and I use "partial specialization" of functions via a helper struct, etc.
A lot of these problems will start showing up as more people start using concept predicates in C++11 and beyond. The difference between a hard error and a template constraint is not really fully understood or utilized by many libraries. So to be prepared for the future you should use enable_if, which is a template constraint, instead of static_assert, which just produces an error. Most modern compilers will produce nice friendly messages for enable_if.
I don't agree. There are situations where static assert is just perfect. Some things just must not happen. If they occur, it is an error. I use static_assert to catch those. I am using it in sqlpp11 and wouldn't want to express everything with enable_if instead. Of course, enable_if also has its place and I use it, but much less than before.
On 1/08/2014 9:56 PM, Roland Bock wrote:
I don't agree. There are situations where static assert is just perfect. Some things just must not happen. If they occur, it is an error. I use static_assert to catch those. I am using it in sqlpp11 and wouldn't want to express everything with enable_if instead.
Of course, enable_if also has its place and I use it, but much less than before.
+1 Some days you want SFINAE, some days you want SFIAE. --- Michael
Roland Bock-2 wrote
On 2014-07-31 19:54, Robert Ramey wrote:
Eric Niebler-4 wrote
On 07/29/2014 05:14 PM, Niall Douglas wrote:
I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help. Wow, I couldn't disagree more. I can't imagine how the standard algorithms would be specified without the use of concepts like "RandomAccessIterator", for instance. Clustering requirements into meaningful abstractions and assigning them names makes it possible to document library interfaces without an explosion of verbosity and repetition. +10 +10
Can't wait till Concepts Lite are there!
Usage of concepts is greatly:
a) misunderstood b) misunderestimated as to their value in design AND documentation
Right, last year, I heard several people basically asking "why on earth would anyone want this?" after listening to a talk about Concepts Lite. It took me about an hour at dinner to convince some of them that it might be worth looking into the topic a bit more...
d) The word "concepts" is a big contributor to the problem - substitute "type requirements" or "type constraints" for concepts.
I like "type constraints" best.
c) usage of concepts is much confused with implementation of concepts. Usage of "type constraints" doesn't require any special support from C++. static_assert with type traits is usually all that is necessary. e) recent papers using examples such as "Sortable" add more bad advice and confusion.
I've included a page in the Boost Incubator to promote my views on the subject - if anyone cares.
Very nice compilation!
I am impressed by Boost Concept Check Library (the second link to it is broken in your document, btw). But I feel rather helpless when stumbling over problems (for example, your code does not compile with the admittedly ancient boost 1.46 on my machine and I have no idea what to do with the compile errors).
I am therefore rather sticking with static_assert to enforce constraints with friendly error messages for the time being until Concepts Lite are available as TS or part of the standard.
I want to actively promote the usage of "template parameter type requirements" in Boost Library design. And I'm very interested in getting help on this. As a start, I'd be happy to receive suggestions for amendments / enhancements to the page on the incubator: http://rrsd.com/blincubator.com/advice_concepts . These suggestions / enhancements can be posted as comments to the page. In particular, I'm interested in the following:

a) promoting the usage of the term "type parameter constraints" (or something similar) over the very confusing term "concepts"
b) settlement of "best practice in the real world" for implementation of this idea. I recommended Boost Concept Check, which isn't bad for a start but isn't perfect either. Ideally I would like to see it used where it's a good fit, and an alternative for the cases where it isn't.
c) perhaps a recommendation as to whether "type constraint classes" should be used as members or base classes for checking.
d) the promotion of the idea that type reference documentation look more like STL documentation and explicitly reference "type constraint" pages.

I'm intrigued by the mention of enable_if. I would like to know how that might be used in this context. I'm also wondering if it would be applicable in places where I've used BOOST_STATIC_WARNING, which has been useful - but unreliable.

Robert Ramey
On 08/01/2014 08:22 AM, Robert Ramey wrote:
b) settlement of "best practice in the real world" of implementation of this idea. I recommended Boost Concept Check. Which isn't bad for a start, but isn't perfect either. I ideally I would like to see this used where its a good fit and an alternative for cases where it isn't.
Boost Concept Check is horribly dated and very limited, IMO. For my work
on a new range library[*], I built a new concept check library for
C++11. You define a concept like:
namespace concepts
{
    struct BidirectionalIterator
      : refines<ForwardIterator>
    {
        template<typename I>
        auto requires_(I i) -> decltype(
            concepts::valid_expr(
                concepts::model_of<Derived>(
                    category_t<I>{},
                    std::bidirectional_iterator_tag{}),
                concepts::has_type<I&>(--i),
                concepts::has_type<I>(i--),
                concepts::same_type(*i, *i--)
            ));
    };
}

template<typename I>
using BidirectionalIterator =
    concepts::models<concepts::BidirectionalIterator, I>;
Then you can use it like a constexpr Boolean function in enable_if,
which can be conveniently wrapped in a simple macro like:
template<typename I,
    CONCEPT_REQUIRES_(BidirectionalIterator<I>())>
void advance(I i, iterator_difference_t<I> n)
{
    // ...
}
And to answer the inevitable question, I'm not opposed to getting this into Boost, but it's pretty far down my list of priorities right now. If someone wanted to take this work and run with it, I'd be overjoyed.
I have actually built a separate library based on this. Currently it's named Tick[1], but I'm planning on boostifying it and then renaming it to something like Boost.ConceptTraits. One of the big differences between Tick's concept traits and Range-V3's concept traits right now is that Tick doesn't support tag dispatching, since I still want to allow for specialisations (I have found this quite useful). So instead, conditional overloading should be used. However, if people prefer, I could add support for tag dispatching as well.

[1]: https://github.com/pfultz2/Tick
On 8/1/2014 12:46 PM, Eric Niebler wrote:
On 08/01/2014 08:22 AM, Robert Ramey wrote:
b) settlement of "best practice in the real world" of implementation of this idea. I recommended Boost Concept Check. Which isn't bad for a start, but isn't perfect either. I ideally I would like to see this used where its a good fit and an alternative for cases where it isn't.
Boost Concept Check is horribly dated and very limited, IMO. For my work on a new range library[*], I built a new concept check library for C++11. You define a concept like:
namespace concepts
{
    struct BidirectionalIterator
      : refines<ForwardIterator>
    {
        template<typename I>
        auto requires_(I i) -> decltype(
            concepts::valid_expr(
                concepts::model_of<Derived>(
                    category_t<I>{},
                    std::bidirectional_iterator_tag{}),
                concepts::has_type<I&>(--i),
                concepts::has_type<I>(i--),
                concepts::same_type(*i, *i--)
            ));
    };
}

template<typename I>
using BidirectionalIterator =
    concepts::models<concepts::BidirectionalIterator, I>;
Then you can use it like a constexpr Boolean function in enable_if, which can be conveniently wrapped in a simple macro like:
template<typename I,
    CONCEPT_REQUIRES_(BidirectionalIterator<I>())>
void advance(I i, iterator_difference_t<I> n)
{
    // ...
}

There's even a form that you can use on (non-template) member functions without causing a hard error:
template<typename I>
struct wrapper
{
    void next() { ... }

    CONCEPT_REQUIRES(BidirectionalIterator<I>())
    void prev() { ... }
};
Types like concepts::BidirectionalIterator can be used like tags for the sake of tag dispatching, a poor man's concept-based overloading.
I highly recommend working this way if your compiler is man enough. My range effort would have been DOA without it.
And to answer the inevitable question, I'm not opposed to getting this into Boost, but it's pretty far down my list of priorities right now. If someone wanted to take this work and run with it, I'd be overjoyed.
I seem to remember Matt Calabrese working on a Concepts-like library for C++ which needed a very C++11 compliant compiler. But I cannot remember what it was called, where it is, or what has become of it. When Boost Hana was mentioned I was also thinking if Mr. Dionne was aware of this previous work. Now you mention your own library. It seems that somewhere along the line there could be some confluence of ideas that would be more useful than the current concept check library.
Edward Diener
[...]
I seem to remember Matt Calabrese working on a Concepts-like library for C++ which needed a very C++11 compliant compiler. But I cannot remember what it was called, where it is, or what has become of it. When Boost Hana was mentioned I was also thinking if Mr. Dionne was aware of this previous work. Now you mention your own library. It seems that somewhere along the line there could be some confluence of ideas that would be more useful than the current concept check library.
I was aware of Boost.ConceptCheck and Eric's concepts for his range library, but nothing else. By the way, here's a general comment, not a reply to your particular message: Hana type classes should be seen as a glorified Fusion tag dispatching system before anything else. They are useful mainly because they are used in conjunction with "data types", which are the same as Fusion tags. All of this was built to make it easier to work with objects of heterogeneous types. The goal was never to define a general framework for concepts, only to make a Fusion tag dispatching system that was more flexible and in which defining new sequences was easier. I'll have to clarify this in the documentation, which, I agree, is misleading in this regard. Regards, Louis
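To make the comparison concrete, here is a Fusion/MPL-style tag dispatching system in miniature (made-up names; this is not Hana's actual machinery):

    // Miniature tag dispatching sketch with made-up names - not Hana's
    // actual machinery. Operations dispatch on a "data type" tag.
    struct MyList;                // a "data type" tag, like a Fusion tag

    struct my_list_obj {
        using tag = MyList;       // associates the object with its data type
        int first;
    };

    template <typename Tag>
    struct head_impl;             // specialized once per data type

    template <>
    struct head_impl<MyList> {
        template <typename Xs>
        static auto apply(Xs const& xs) { return xs.first; }
    };

    template <typename Xs>
    auto head(Xs const& xs) {     // generic front end: look up the tag and
        return head_impl<typename Xs::tag>::apply(xs);  // forward to the impl
    }

    // head(my_list_obj{42}) == 42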
On 1 Aug 2014 at 9:46, Eric Niebler wrote:
On 08/01/2014 08:22 AM, Robert Ramey wrote:
b) settlement of "best practice in the real world" of implementation of this idea. I recommended Boost Concept Check. Which isn't bad for a start, but isn't perfect either. I ideally I would like to see this used where its a good fit and an alternative for cases where it isn't.
Boost Concept Check is horribly dated and very limited, IMO. For my work on a new range library[*], I built a new concept check library for C++11. You define a concept like:
namespace concepts
{
    struct BidirectionalIterator
      : refines<ForwardIterator>
    {
        template<typename I>
        auto requires_(I i) -> decltype(
            concepts::valid_expr(
                concepts::model_of<Derived>(
                    category_t<I>{},
                    std::bidirectional_iterator_tag{}),
                concepts::has_type<I&>(--i),
                concepts::has_type<I>(i--),
                concepts::same_type(*i, *i--)
            ));
    };
}

template<typename I>
using BidirectionalIterator =
    concepts::models<concepts::BidirectionalIterator, I>;
Then you can use it like a constexpr Boolean function in enable_if, which can be conveniently wrapped in a simple macro like:
template<typename I,
    CONCEPT_REQUIRES_(BidirectionalIterator<I>())>
void advance(I i, iterator_difference_t<I> n)
{
    // ...
}
This is neat, but it still isn't really what I want. What I want is for the tedium of writing STL iterators and containers to go away. I want the compiler to tell me how and why my containers and iterators are incomplete and/or wrong where possible without them being instantiated first, but if instantiated, why they're incomplete and/or wrong with the instantiated type(s).

I then want to flip that on its head, and have the compiler generate the STL boilerplate for me, so for example I ask the compiler to generate for me the following variants of STL containers all from the same abstract base meta-container:

  concurrent_unordered_set
  concurrent_unordered_map
  concurrent_unordered_multimap

... by me specifying the absolute minimum physical typing necessary to delineate the differences, and then the compiler inferring the rest by stripping it from the system's unordered_map implementation for me so I don't have to.

I appreciate that such a language isn't C++, either today or tomorrow. But that is what I would like one day, and it surely is feasible given enough resources thrown at it. And besides, such functionality really would be very cool.

Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 2014-08-01 18:46, Eric Niebler wrote:
Then you can use it like a constexpr Boolean function in enable_if, which can be conveniently wrapped in a simple macro like:
template<typename I,
    CONCEPT_REQUIRES_(BidirectionalIterator<I>())>
void advance(I i, iterator_difference_t<I> n)
{
    // ...
}

Really nice! But won't work with variadic templates, would it? Since I cannot do something like

template<class... T,
    CONCEPT_REQUIRES_(/* some constraint on T... */)>
auto foo(T&&... xs) { ... }
Really nice! But won't work with variadic templates, would it? Since I cannot do something like
template<class... T,
    CONCEPT_REQUIRES_(/* some constraint on T... */)>
auto foo(T&&... xs) { ... }

It will work for functions. Not for classes though.
On 2014-08-02 23:32, pfultz2 wrote:
Really nice! But won't work with variadic templates, would it? Since I cannot do something like
template<class... T,
    CONCEPT_REQUIRES_(/* some constraint on T... */)>
auto foo(T&&... xs) { ... }

It will work for functions. Not for classes though.

Could you post an example? Is the concept check done in the return type as with enable_if?
Could you post an example? Is the concept check done in the return type as with enable_if?
In C++11, we can use a default template parameter, rather than the return
type, which is what the `CONCEPT_REQUIRES_` uses. So for example, we can
write:
template<class... T,
    CONCEPT_REQUIRES_(/* some constraint on T... */)>
auto foo(T&&... xs) { ... }
On 2014-08-03 15:56, pfultz2 wrote:
Could you post an example? Is the concept check done in the return type as with enable_if? In C++11, we can use a default template parameter, rather than the return type, which is what the `CONCEPT_REQUIRES_` uses. So for example, we can write:
template<class... T,
    CONCEPT_REQUIRES_(/* some constraint on T... */)>
auto foo(T&&... xs) { ... }

This works as long as all the variadic template types are to be deduced by the compiler.
Thanks for the explanation. Good to know!
For classes this won't work:
// ERROR
template<class... T,
    CONCEPT_REQUIRES_(/* some constraint on T... */)>
struct foo { ... };

Because you can't have a default template parameter after the variadic template parameters in this context (you can when using specializations).
Of course, you can't use `CONCEPT_REQUIRES_` for class specialization. In my library, I do provide a `TICK_CLASS_REQUIRES` which can be used for class specializations. Note that since it relies on `enable_if` it won't resolve ambiguities like Concepts Lite will. For example:
template<typename T, class Enable = void>
struct foo;

template<typename T>
struct foo<T, TICK_CLASS_REQUIRES(RandomAccessIterator<T>())>
{ ... };

// Here we have to add a check for `RandomAccessIterator` to avoid ambiguities
template<typename T>
struct foo<T, TICK_CLASS_REQUIRES(ForwardIterator<T>() && !RandomAccessIterator<T>())>
{ ... };

Perhaps there is a way to do this using tag dispatching, I'm not sure.
Thanks and regards, Roland
Eric Niebler-4 wrote
On 08/01/2014 08:22 AM, Robert Ramey wrote:
b) settlement of "best practice in the real world" of implementation of this idea. I recommended Boost Concept Check. Which isn't bad for a start, but isn't perfect either. I ideally I would like to see this used where its a good fit and an alternative for cases where it isn't.
Boost Concept Check is horribly dated and very limited, IMO. For my work on a new range library[*], I built a new concept check library for C++11. ...
I took a look at this to satisfy my curiosity. It's interesting but I have a big problem with it. I'm trying to promote the simplest view of "type constraints" which can possibly work. Your solution does more - at the cost of requiring a lot more effort to understand and use correctly (there is no documentation about it either). Then there is the fact that it requires a compiler "man enough" (hmmm - why not woman enough?) to use. So I don't think it's a good vehicle for promoting the concept of type constraints.

I would just like every library to invoke a compile time assert whenever I instantiate a template with type arguments which won't work. Boost Concept Check can do this. In fact, maybe BCC is overkill. I seriously considered recommending making concept checking classes with just static_assert. But in the end I opted for recommending BCC:

a) It has good documentation. This is fairly easy to understand - once the confusion regarding the misleading word "concept" is cleared away - which I've been trying to do.
b) It has examples.
c) It works with all known compiler versions.
d) It's very easy to use.

So I'm sticking to my original advice.

Robert Ramey
I'm trying to promote the simplest view of "type constraints" which can possibly work. Your solution does more - at the cost of requiring a lot more effort to understand and use correctly (there is no documentation about it either). Then there is the fact that it requires a compiler "man enough" (hmmm - why not woman enough?) to use. So I don't think it's a good vehicle for promoting the concept of type constraints.
I don't think Eric's solution requires more to understand than using Boost.ConceptCheck, unless you want to do more, such as overloading. Also, my library version of this does have documentation and has been tested and works on gcc 4.6/4.7/4.8/4.9 and clang 3.4 (I do plan to submit a boostified version soon).
I would just like every library to invoke a compile time assert whenever I instantiate a template with type arguments which won't work. Boost Concept Check can do this. In fact, maybe BCC is overkill. I seriously considered recommending making concept checking classes with just static_assert. But in the end I opted for recommending BCC.
Yes, of course, static assertions can help improve error messages and it's better than nothing, but it is not the same as template constraints. This will become more obvious as more people start using concept predicates. Ideally, if a library is targeting C++11 they could perhaps use concept traits rather than Boost.ConceptCheck.
On 2014-08-02 23:57, pfultz2 wrote:
I would just like every library to invoke a compile time assert whenever I instantiate a template with type arguments which won't work. Boost Concept Check can do this. In fact, maybe BCC is overkill. I seriously considered recommending making concept checking classes with just static_assert. But in the end I opted for recommending BCC.

Yes, of course, static assertions can help improve error messages and it's better than nothing, but it is not the same as template constraints.

True, but static_assert covers the following cases at least and is really simple:
a) You have one implementation (class or function) and arguments are
either valid or not:
template<...>
struct X
{
static_assert(check1, "failed check1");
...
};
b) You have one or more full or partial class specializations for
specific classes/templates/values:
Basically as above, but if you want to prohibit the default case or
certain specializations, here's a nice, minimalistic helper to do so [1]:
namespace detail
{
    template<typename... T>
    struct wrong
    {
        using type = std::false_type;
    };
}

template<typename... T>
using wrong_t = typename detail::wrong<T...>::type;

It can be used to defer static_assert until a template is instantiated which is an error since it is prohibited:

// disabled default case, slightly shortened from [2]
template<typename Context, typename T, typename Enable = void>
struct serializer_t
{
    static_assert(wrong_t<T>::value, "missing serializer specialization");
};
On 3/08/2014 5:58 PM, Roland Bock wrote:
namespace detail
{
    template<typename... T>
    struct wrong
    {
        using type = std::false_type;
    };
}

template<typename... T>
using wrong_t = typename detail::wrong<T...>::type;

It can be used to defer static_assert until a template is instantiated which is an error since it is prohibited:

// disabled default case, slightly shortened from [2]
template<typename Context, typename T, typename Enable = void>
struct serializer_t
{
    static_assert(wrong_t<T>::value, "missing serializer specialization");
};
I saw this in your "template toffees" talk and I wondered then whether
it wouldn't be simpler to do this:
template<typename Context, typename T, bool deferred_false = false>
struct serializer_t
{
    static_assert(deferred_false, "missing serializer specialization");
};
On 2014-08-03 11:05, Michael Shepanski wrote:
On 3/08/2014 5:58 PM, Roland Bock wrote:
namespace detail
{
    template<typename... T>
    struct wrong
    {
        using type = std::false_type;
    };
}

template<typename... T>
using wrong_t = typename detail::wrong<T...>::type;

It can be used to defer static_assert until a template is instantiated which is an error since it is prohibited:

// disabled default case, slightly shortened from [2]
template<typename Context, typename T, typename Enable = void>
struct serializer_t
{
    static_assert(wrong_t<T>::value, "missing serializer specialization");
};

I saw this in your "template toffees" talk and I wondered then whether it wouldn't be simpler to do this:

template<typename Context, typename T, bool deferred_false = false>
struct serializer_t
{
    static_assert(deferred_false, "missing serializer specialization");
};
Sure, that works too, but it introduces a template parameter, which does not really belong there. The technique with the wrong_t can be used without adding such artifacts to the interface. Regards, Roland
b) You have one or more full or partial class specializations for specific classes/templates/values:
Basically as above, but if you want to prohibit the default case or certain specializations, here's a nice, minimalistic helper to do so
This is a perfect example of when *not* to use static_assert. You should
leave
the class undefined:
template
On 2014-08-03 15:37, pfultz2 wrote:
[snip]

This is a perfect example of when *not* to use static_assert. You should leave the class undefined:

template <typename Context, typename T, typename Enable = void>
struct serializer_t;

Then the compiler will already give a simple and informative error about the class being undefined, which is almost the same error you put in the static_assert.

I don't see it that way, because

*) even though I as a library author /might/ be able to interpret the compiler message in the same way (and I admit, I am not), a user has no way of knowing what's going on. Is there an include missing which might contain the definition of the struct? Is it a library error? Or am I using the library in a way that I should not? The static assert on the other hand sends a clear message, which can even be documented (whereas compiler messages will change from version to version).

*) the Enable parameter is an artifact, a trick, and yes, I know it and I use it, but it is really only there for technical reasons. If I can do without it, I will.

Furthermore, say I want to know if there is a valid serializer at compile-time and if not then choose a different method. So, I create a predicate to detect a valid serializer:

TICK_TRAIT(has_serializer)
{
    template <class Context, class T>
    auto requires_(Context ctx, const T& x) -> decltype(
        has_type<serializer_t<Context, T>>(),
        Context::_(x, ctx)
    );
};

Unfortunately, this won't work because of the hard error, and it won't work with Concepts Lite either.

Right. static_assert does not mix with enable_if or similar stuff, since it is a hard error. In the case of sqlpp's serializer, it would not make much sense to check that. And if someone really, really wanted it, I could add a flag for him to check, since in reality, the static_assert happens in a method of the struct. I shortened the code a bit for the mailing list. But yes, that is a drawback of static_assert.

BTW: When I start to use Concepts Lite, I expect at least 90% of the static asserts to go away. Most of them are just used because they are better than the current alternatives IMO.

static_assert can be used to check invariants outside of type requirements, but in general, it should not be used for type requirements.

I'd say static_assert is not the golden hammer for type requirements, but it can be used in quite a few cases to enforce type requirements.

Cheers,
Roland
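(To make the trade-off concrete, a compilable sketch using a plain void_t-style detector rather than TICK's actual API -- all names illustrative. With the undefined-primary approach the failed probe is a soft substitution failure; had the primary contained the static_assert, the same probe would be a hard error:)

#include <type_traits>
#include <utility>

template <typename...>
struct voider { using type = void; };
template <typename... T>
using void_t = typename voider<T...>::type;

// Undefined primary -- SFINAE-friendly:
template <typename T>
struct serializer;  // no definition

template <>
struct serializer<int>
{
    static void apply(int) {}
};

template <typename T, typename = void>
struct has_serializer : std::false_type {};

template <typename T>
struct has_serializer<T,
    void_t<decltype(serializer<T>::apply(std::declval<T>()))>>
    : std::true_type {};

static_assert(has_serializer<int>::value, "int has a serializer");
static_assert(!has_serializer<double>::value,
              "double has none -- a soft failure, not a compile error");

// Had the primary instead been defined with
// static_assert(wrong_t<T>::value, ...), instantiating
// serializer<double> inside the decltype above would have been a
// hard error that no enable_if/void_t could swallow.

int main() {}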
Roland Bock-2 wrote
[snip]
I see using static_assert as a real practical method for many users. But just saying one can use static_assert to implement type constraints is not quite the same as showing users how to do it. I would encourage you to enhance http://rrsd.com/blincubator.com/advice_concepts/ with this information. You can either post a comment or send it directly to me.

My real goal is to convince the C++ community that any library which includes templates as part of the user interface must have explicit type constraints in order to be considered for formal review. I want to see type constraints be understood and appreciated by the C++ user community in general rather than a small group who spend years disputing how to implement the idea.

Let's start making things better right now!

Robert Ramey
On 2014-08-03 17:48, Robert Ramey wrote:
[snip]
I see using static_assert as a real practical method for many users. But just saying one can use static_assert to implement type constraints is not quite the same as showing users how to do it. I would encourage you to enhance http://rrsd.com/blincubator.com/advice_concepts/ with this information. You can either post a comment or send it directly to me.
[snip]

Will do (may take a few days though).
On 3 Aug 2014 at 8:48, Robert Ramey wrote:
My real goal is to convince the C++ community that any library which includes templates as part of the user interface must have explicit type constraints in order to be considered for formal review. I want to see type constraints be understood and appreciated by the C++ user community in general rather than a small group who spend years disputing how to implement the idea.
I strongly disagree with this idea if by "type constraints" you mean "error out if it doesn't fit", which apparently you do. That isn't really concepts at all in my opinion, it's just some sort of super static assert.

We are still in the very early days of concepts; language support isn't even in there yet - it is like trying to standardise exception handling before the language gained support. Once the TS is finished and accepted and we know exactly what will be in the language, then is the time to start thinking about formal review requirements, i.e. at least two or three years from now.

Niall

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Niall Douglas wrote
On 3 Aug 2014 at 8:48, Robert Ramey wrote:
My real goal is to convince the C++ community that any library which includes templates as part of the user interface must have explicit type constraints in order to be considered for formal review. I want to see type constraints be understood and appreciated by the C++ user community in general rather than a small group who spend years disputing how to implement the idea.
I strongly disagree with this idea if by "type constraints" you mean "error out if it doesn't fit" which apparently you do.
Correct - that is EXACTLY what I mean. Function parameters have specific types, and if a function is invoked with parameters of other types it's a compile time error. The same is (or should be) true for template parameters. That is, what constitutes an acceptable parameter should be explicitly stated and checked at compile time. Otherwise there is no way to demonstrate program correctness. This has been the focus and intent of C++ concepts (type constraints) from the very beginning (around 2000). My view on what C++ concepts are is widely held:
http://en.cppreference.com/w/cpp/concept
http://en.wikipedia.org/wiki/Concepts_(C%2B%2B)
C++ standard paragraph 17.5.1.3
That isn't really concepts at all in my opinion, it's just some sort of super static assert.
lol - maybe that's all it is. Using a parameter which doesn't fit the stated type requirements creates a compile time error. Using the term "super static assert" isn't really wrong. One could call all compile time errors "super static asserts" if he wanted to and not be totally wrong. But the term "C++ concept" has already been defined and is widely used, so it's tough for you to decide that it means something else. Having said that, I would be pleased if we started using the term "type constraints" instead. Had we done that before, we probably wouldn't be having this discussion.
We are still in the very early days of concepts, language support isn't even in there yet - it is like trying to standardise exception handling before the language gained support. Once the TS is finished and accepted and we know exactly what will be in the language, then is the time to start thinking about formal review requirements. i.e. at least two or three years from now.
Language support for C++ concepts is a side issue and a distraction. Note that the "concepts" themselves (e.g. DefaultConstructible) are in fact defined in C++11 even though we don't have language support to enforce them; see http://en.cppreference.com/w/cpp/types/is_default_constructible . These can be used to enforce type requirements as well as to dispatch based on type requirements using enable_if.

This discussion is about the following:
a) what are C++ concepts.
b) what role do/should they play in library design
c) what role do/should they play in library documentation
d) are current library implementations sufficient to be of use.

Robert Ramey
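(A minimal sketch of the two uses Robert describes -- enforcing a requirement with a standard trait plus static_assert, and dispatching on the same trait with enable_if; the class and function names are illustrative only:)

#include <type_traits>

// Enforce: reject non-conforming arguments at instantiation time.
template <typename T>
struct registry_entry
{
    static_assert(std::is_default_constructible<T>::value,
                  "registry_entry<T> requires a default constructible T");
    T value{};
};

// Dispatch: select an implementation based on the same requirement.
template <typename T>
typename std::enable_if<std::is_default_constructible<T>::value, T>::type
make_value() { return T(); }

template <typename T>
typename std::enable_if<!std::is_default_constructible<T>::value, T>::type
make_value() = delete;  // no alternative construction path in this sketch

int main()
{
    registry_entry<int> e;        // fine: int is default constructible
    int i = make_value<int>();    // picks the enabled overload
    (void)e; (void)i;
}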
On 2014-08-03 22:16, Niall Douglas wrote:
On 3 Aug 2014 at 8:48, Robert Ramey wrote:
My real goal is to convince the C++ community that any library which includes templates as part of the user interface must have explicit type constraints in order to be considered for formal review. I want to see type constraints be understood and appreciated by the C++ user community in general rather than a small group who spend years disputing how to implement the idea.

I strongly disagree with this idea if by "type constraints" you mean "error out if it doesn't fit", which apparently you do.

Hmm? If the arguments do not fit, an error is required. The kind of error is debatable. As Paul points out, a hard error like static_assert is too strong if you want to use it with SFINAE.
But there has to be an error of some kind if the arguments don't fit.
That isn't really concepts at all in my opinion, it's just some sort of super static assert.

It really depends on the kind of error you produce. Afaict Concepts Lite will create SFINAE-compatible errors.
Cheers, Roland
On 8/2/2014 11:24 AM, Robert Ramey wrote:
Eric Niebler-4 wrote
Boost Concept Check is horribly dated and very limited, IMO. For my work on a new range library[*], I built a new concept check library for C++11. ...
I took a look at this to satisfy my curiosity. It's interesting but I have a big problem with it.
<snip>
But in the end I opted for recommending BCC.
a) It has good documentation. This is fairly easy to understand - once the confusion regarding the misleading word "concept" is cleared away - which I've been trying to do.
b) it has examples.
c) It works with all known compiler versions.
d) It's very easy to use.
So I'm sticking to my original advice.
Oh, of course! I wasn't suggesting that you needed to change your recommendation. I was merely pointing out the limits of Boost.Concept_check and stumping for an effort to replace it with something more modern, capable, and ConceptsLite-ish in flavor. My code can be the basis for such a replacement. \e
On 31 Jul 2014 at 10:11, Eric Niebler wrote:
On 07/29/2014 05:14 PM, Niall Douglas wrote:
I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help.
Wow, I couldn't disagree more. I can't imagine how the standard algorithms would be specified without the use of concepts like "RandomAccessIterator", for instance. Clustering requirements into meaningful abstractions and assigning them names makes it possible to document library interfaces without an explosion of verbosity and repetition.
Oh I know programmers similar to how you visualise code in your head would agree with you absolutely. But you must remember programmers like me don't see C++ as really in fact having types nor classes nor concepts - I just see methods of programming the compiler to output patterns of assembler code, and I think primarily in terms of chains of assembler instructions. My type of programmer learns just enough of the verbiage to understand C++ Now presentations by programmers such as you (and I haven't failed to learn something new from yours yet), but we'll never think like you any more than you'll think like us.

None of this is a bad thing of course, diversity of approach and all. But to me and people like me a RandomAccessIterator is a pointer and is little different to a ForwardIterator. I care about the difference only in so far as I can use it to get the compiler to generate code one way or another according to need. I furthermore care about the difference only in so far as it will get later maintainers and team members to behave one way instead of another. Past that I feel no issue reinterpret casting STL internal types to bypass the type system if that gets me the assembler output I want and doesn't create maintenance debt later.

It's the essential difference between language-focused coders and err ... mongrel coders? I have to admit I'm not sure what to call myself really. Either way, I see ConceptCheck as a half baked feature giving me nothing useful but bloat and complexity, significantly adding mess to documentation and steepening my learning curve. I absolutely can't wait for language support for Concepts, and will use them with a vengeance when they turn up as they're another great tool for bending the compiler in new ways, but until it's fully baked as a language feature they get in my way. And hence I don't use them personally, and groan every time I'm faced with code by someone who has (no offence intended here, we all have our own personal likes and dislikes, and I entirely understand your opinions on this and respect them, I just don't have those opinions myself).

Niall

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 7/31/2014 6:28 PM, Niall Douglas wrote:
[snip]
With all due respect to your super-practical low-level approach, I think you must know that you are very much in the minority. Practical programming, as you reflect on above, certainly has its advantages, but without knowing what one can do with a programming language I see it as impossible to design robust code that is both understandable and usable to others and effective in accomplishing its goal.

I do not think 'concepts', aka largely 'type constraints' as Robert has aptly renamed it in this discussion, is a panacea for every template programming problem. But understanding the 'domain' of what a 'type' entails is just as important as understanding the 'domain' of some parameter in a function call. Without documentation of such things the user of your constructs has no idea what will work and what will not work, and programming then becomes an endlessly wasteful game of trial and error.

Nor is that really 'language-focused' programming. It's just programming that takes into account that others must usually be able to use what you create, else it is worthwhile only to yourself and your own solution to a problem.
I'm a bit worried that:

- Hana exposes a C++1y library-based implementation of concepts on its interface (typeclasses). Would moving it to concepts (once we get language support in 2015/16) introduce a big breaking change?

Then, Range-v3, TICK, Hana, and others are all using _different_ C++11/1y library-based implementations of concept checking, which I guess means that:

- there is need for such a library, and
- Boost.ConceptCheck is for whatever reason not good enough in the C++11/1y world yet.
On Fri, Aug 1, 2014 at 2:26 AM, Edward Diener wrote:
[snip]
On 1 Aug 2014 at 11:02, Gonzalo BG wrote:
I'm a bit worried that: - Hana exposes a C++1y library-based implementation of concepts on its interface (typeclasses). Would moving it to concepts (once we get language-support in 2015/16) introduce a big breaking change?
If I have read Louis' work correctly, then no, there should be no major changes. Louis looks like he has intentionally designed things so that C++ concepts can be used by the internal implementation if available, but it's okay if not - though he can probably answer better than I.
Then, Range-v3, TICK, Hana, and others are all using _different_ C++11/1y library-based implementations of concept checking which I guess means that: - there is need for such a library, and - Boost.ConceptCheck is for whatever reason not good enough in C++11/1y world yet.
Agreed on the need. God help anyone designing such a beastie though. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
I'm a bit worried that: - Hana exposes a C++1y library-based implementation of concepts on its interface (typeclasses). Would moving it to concepts (once we get language-support in 2015/16) introduce a big breaking change?
Concepts are not being added in the near future to C++. Concepts lite is proposed to add concept predicates to C++, which is very different.
Then, Range-v3, TICK, Hana, and others are all using _different_ C++11/1y library-based implementations of concept checking which I guess means that: - there is need for such a library, and - Boost.ConceptCheck is for whatever reason not good enough in C++11/1y world yet.
Hana does implement a form of concepts in C++. The Tick library only implements a form of concept predicates, which was based on what is done in Range-v3. Of course, it might make sense to make Hana's typeclass a separate library (although it should be called 'concepts' instead). Boost.ConceptCheck, on the other hand, is only good for improving error messages. It is very inadequate, and can be very problematic when combined with concept predicates.
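(A rough illustration of the distinction, in plain C++11 rather than Tick's or range-v3's actual APIs: a "concept predicate" is just a compile-time boolean that can drive enable_if, as opposed to a language-level constraint. All names are illustrative:)

#include <type_traits>
#include <utility>

// A concept predicate: a trait yielding a compile-time boolean.
template <typename T, typename = void>
struct is_incrementable : std::false_type {};

template <typename T>
struct is_incrementable<T, decltype(void(++std::declval<T&>()))>
    : std::true_type {};

// It composes with enable_if for overload selection:
template <typename T>
typename std::enable_if<is_incrementable<T>::value>::type
advance_one(T& x) { ++x; }

struct no_inc {};  // a type without operator++

static_assert(is_incrementable<int>::value, "");
static_assert(!is_incrementable<no_inc>::value, "");

int main() {}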
On 1 Aug 2014 at 3:55, pfultz2 wrote:
I'm a bit worried that: - Hana exposes a C++1y library-based implementation of concepts on its interface (typeclasses). Would moving it to concepts (once we get language-support in 2015/16) introduce a big breaking change?
Concepts are not being added in the near future to C++. Concepts lite is proposed to add concept predicates to C++, which is very different.
There is a bit more to it than just predicates though. Lots of other stuff changes its rules slightly too in response. I do agree with you that "concept predicates" is a more accurate description than "concepts" though - I like to think of the TS as proposing a minimum set of tools with which one could build a Concepts library without murdering the compiler every compile round. I thank Andrew for helping me come to that realisation while writing my C++ Now 2014 position paper. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 31 Jul 2014 at 20:26, Edward Diener wrote:
On 07/29/2014 05:14 PM, Niall Douglas wrote:
I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help.
Wow, I couldn't disagree more. I can't imagine how the standard algorithms would be specified without the use of concepts like "RandomAccessIterator", for instance. Clustering requirements into meaningful abstractions and assigning them names makes it possible to document library interfaces without an explosion of verbosity and repetition.
Oh I know programmers similar to how you visualise code in your head would agree with you absolutely. But you must remember programmers like me don't see C++ as really in fact having types nor classes nor concepts - I just see methods of programming the compiler to output patterns of assembler code, and I think primarily in terms of chains of assembler instructions. [snip] It's the essential difference between language-focused coders and err ... mongrel coders? I have to admit I'm not sure what to call myself really. Either way, I see ConceptCheck as a half baked feature giving me nothing useful but bloat and complexity and significantly adding mess to documentation and steepening my learning curve.
With all due respect to your super-practical low-level approach I think you must know that you are very much in the minority. Practical programming, as you reflect on above, certainly has its advantages but without knowing what one can do with a programming language I see it as impossible to design robust code that is both understandable and usable to others and effective in accomplishing its goal.
Practical programming isn't the right term ... I'm going to borrow a term Artur Laksberg used during my interview at Microsoft, let's call it "data flow centric programming". At scale, those patterns of assembler outputs start to look like stretchy sheets, and they come together to form a sort of hills and valleys. You could say that each CPU core is rather like a river trying to follow a path of least resistance amongst that topology, and our job as programmers is to apply transformations to the topology to yield new flows of river. It is rare that applying any transformation isn't a zero-sum outcome, and that is where skill and experience come in. When visualised in that way, there is no essential difference between programming languages, or operating systems, or the humans working on those systems. They're all programmed the same way. Advantages include agnosticism towards languages, OSs, ideologies ... disadvantages include an inability to relate, communicate or otherwise transmit meaning to other engineers, except those of a similar bent. Most engineers accept the difference as a good thing adding an uncertain value, but the occasional one feels it as a vital threat that must be destroyed at all costs, which is unfortunate. That can introduce politics into engineering, which is usually a bad thing.
I do not think 'concepts', aka largely 'type constraints' as Robert has aptly renamed it in this discussion, is a panacea for every template programming problem. But understanding the 'domain' of what a 'type' entails is just as important as understanding the 'domain' of some parameter in a function call. Without documentation of such things the user of your constructs has no idea what will work and what will not work, and programming then becomes an endlessly wasteful game of trial and error.
Nor is that really 'language-focused' programming. It's just programming that takes into account that others must usually be able to use what you create, else it is worthwhile only to yourself and your own solution to a problem.
You're making the enormous assumption that all worthwhile programmers think like you and have the same value sets you do, e.g. that domains have any relevance to parameter types at all, or that outstanding quality code cannot be written without a language-based approach.

One of my biggest problems with Boost code and its documentation (and the C++ 11/14 STL additions) is that meaning as well as implementation is scattered all over the place in fragments. Languagey type programmers grok that much easier than I or most on StackOverflow do, probably because they instantly spot the nouns and the verbs and type systems are natural to them.

Me personally, I'm old fashioned and I'd like my API documentation to tell me what the API actually does and how it fits with other APIs like a man page does, rather than a single line comment and many hyperlinks to all the semi-concept types being input and output. I also want to see an essay at the beginning on the philosophy of the design (what I normally call a design rationale), because that will tell me how it all is intended to hang together. After all, as a result of the lack of direct language support there are many Concept check designs, each incommensurate with the others; hell, even the STL itself has at least two if not three separate ways of Concept checking, none of which are fully baked either.

In truth, despite your assertion of me being in a minority, I suspect my sort of programmer is probably the majority in C++ if not in Boost. If you want a pure or elegant type system you wouldn't choose C++ for example - in fact, if you cross off the list of reasons, the only reasons that *anyone* would intentionally choose C++ are down to (i) potential bare metal performance, (ii) it is also C, which is useful for low level work, and (iii) the tools and ecosystem are mature. That ought to bias against languagey type programmers, though for obvious reasons the best of the field, whom I would assume would like to congregate here or on ISO, are much more likely to be languagey type programmers as they are interested in C++ as a language.

One of my great hopes for Hana, and why I am so excited by it, is that we might finally get a reasonably good standardised way of doing abstract base templates for template classes. Concept check implementations have tended to use class inheritance in the past due to lack of language support to do otherwise, while libraries such as Fusion have used a ton of hand written metaprogramming which makes them very slow and brittle. Hana has been written to be fast, which solves my first objection to using Fusion in any production code; the enormous question still remaining is how brittle or not it will be to use in real world code. Can I use Hana to stamp out entire families of STL iterator so I no longer have to write my own, for example? Will it be easy for me to adjust entire families of STL iterator from the top, in the middle and at the bottom without tons of compiler error spew? Will the users of my iterator never notice that I used Hana underneath? Those are open questions of course, and it will be some years before I can use Hana in any code expected to be portable sadly :(

Niall

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On August 1, 2014 5:57:30 AM EDT, Niall Douglas wrote:

On 31 Jul 2014 at 20:26, Edward Diener wrote:

On 07/29/2014 05:14 PM, Niall Douglas wrote:

I'm all for Concepts as in compiler enforced ones, and I'll add them to AFIO when and only when C++ gets them. But for documentation they don't help.

Wow, I couldn't disagree more. I can't imagine how the standard algorithms would be specified without the use of concepts like "RandomAccessIterator", for instance. Clustering requirements into meaningful abstractions and assigning them names makes it possible to document library interfaces without an explosion of verbosity and repetition.

Oh I know programmers similar to how you visualise code in your head would agree with you absolutely. But you must remember programmers like me don't see C++ as really in fact having types nor classes nor concepts - I just see methods of programming the compiler to output patterns of assembler code, and I think primarily in terms of chains of assembler instructions.
I have to assume you're using hyperbole to make your point. C++ has types and it has classes, so I can't believe you perceive it otherwise. OTOH, I can imagine that you see those things merely as levers and knobs to produce the desired machine code.
[snip] It's the essential difference between language-focused coders and err ... mongrel coders? I have to admit I'm not sure what to call myself really.
You use abstractions whenever you discuss real world objects. In fact, you use them in any natural language conversation. I'm sure you use them in programming conversations, too. Consequently, I think you're overstating your case.
Either way, I see ConceptCheck as a half baked feature giving me nothing useful but bloat and complexity and significantly adding mess to documentation and steepening my learning curve.
Concepts are so much noise in introductory documentation. However, for detailed and reference docs, they are indispensable to avoid repetition and to reveal similarities and differences among the types on offer by a library. I wonder if your complaint is more about their use in specific cases than about them in general.
With all due respect to your super-practical low-level approach I think you must know that you are very much in the minority. Practical programming, as you reflect on above, certainly has its advantages but without knowing what one can do with a programming language I see it as impossible to design robust code that is both understandable and usable to others and effective in accomplishing its goal.
Practical programming isn't the right term ... I'm going to borrow a term Artur Laksberg used during my interview at Microsoft, let's call it "data flow centric programming". At scale, those patterns of assembler outputs start to look like stretchy sheets, and they come together to form a sort of hills and valleys. You could say that each CPU core is rather like a river trying to follow a path of least resistance amongst that topology, and our job as programmer is to apply transformations to the topology to yield new flows of river. It is rare that applying any transformation isn't a zero-sum outcome, and that is where skill and experience comes in.
When visualised in that way, there is no essential difference between programming languages, or operating systems, or the humans working on those systems. They're all programmed the same way.
Advantages include agnosticism towards languages, OSs, idelogies ... disadvantages include inability to relate, communicate or otherwise transmit meaning to other engineers, except those of a similar bent.
Your view of programming languages and libraries as means to an end doesn't mean you don't have to know the languages and libraries you happen to use. Documentation uses natural and programming languages to convey information. Human communication uses abstractions for efficiency. Good abstractions are helpful, even to someone like you, aren't they?
Most engineers accept the difference as a good thing adding an uncertain value, but the occasional one feels it as a vital threat that must be destroyed at all costs, which is unfortunate. That can introduce politics into engineering, which is usually a bad thing.
Differences are often beneficial. They can also be a source of friction. However, to my knowledge, I've never before encountered anyone with your view of programming in over 30 years of programming who wasn't using assembler.
I do not think 'concepts', aka largely 'type constraints' as Robert has aptly renamed it in this discussion, is a panacea for every template programming problem. But understanding the 'domain' of what a 'type' entails is just as important as understanding the 'domain' of some parameter in a function call. Without documentation of such things the user of your constructs has no idea what will work and what will not work, and programming then becomes an endlessly wasteful game of trial and error.
I don't think anyone has suggested that concepts solve all documentation problems.
Nor is that really 'language-focused' programming. It's just programming that takes into account that others must usually be able to use what you create, else it is worthwhile only to yourself and your own solution to a problem.
That's the purpose of good naming, clean APIs, and good documentation generally.
You're making the enormous assumption that all worthwhile programmers think like you and have the same value sets you do e.g. that domains have any relevance to parameter types at all, or that outstanding quality code cannot be written without a language-based approach.
I suspect, like me, he's never encountered someone like you. While the specific does not the general make, it certainly informs one's views.
One of my biggest problems with Boost code and its documentation (and the C++ 11/14 STL additions) is that meaning as well as implementation is scattered all over the place in fragments. Languagey type programmers grok that much easier than I or most on StackOverflow do, probably because they instantly spot the nouns and the verbs and type systems are natural to them.
I'm not sure where I fit in your taxonomy, but I dislike Boost.Filesystem's documentation because Beman wrote it as if it was to be submitted for inclusion in the Standard, while providing little other information. I read the Standards, and I can extract beneficial information from them without a lot of difficulty, but it isn't helpful when trying to learn about something for the first time. The same can be said for a manpage, which you mentioned below. They are focused on giving reference details, not on providing tutorial and introductory information, though they do take a step away from standardese, which is helpful.
Me personally, I'm old fashioned and I'd like my API documentation to tell me what the API actually does and how it fits with other APIs like a man page does rather than a single line comment and many hyperlinks to all the semi-concept types being input and output.
I want both. The former is the contract, the latter is the summary.
I also want to see an essay at the beginning on the philosophy of the design (what I normally call a design rationale), because that will tell me how it all is intended to hang together.
As do I.
After all, as a result of the lack of direct language support there are many Concept check designs, each incommensurate with the others, hell even the STL itself has at least two if not three separate ways of Concept checking, none of which are fully baked either.
Flawed concept checking does not negate the value of concepts for the documentation. It just makes them less helpful at compile time.
In truth, despite your assertion of me being in a minority, I suspect my sort of programmer is probably the majority in C++ if not in Boost.
Setting aside your view of languages as the means to the end of bits flowing over a landscape, I agree.
If you want a pure or elegant type system you wouldn't choose C++ for example - in fact, if you cross off the list of reasons, the only reasons that *anyone* would intentionally choose C++ is down to (i) potential bare metal performance and (ii) it is also C which is useful for low level work and (iii) the tools and ecosystem are mature. That ought to bias against languagey type programmers, though

That's a misguided view. I choose C++ because it's multiparadigm. It provides low level tools, OO, meta-programming, etc.
for obvious reasons the best of the field whom I would assume would like to congregate here or on ISO are much more likely to be languagey type programmers as they are interested in C++ as a language.
I'm interested in improving my and others' ability to use C++ effectively.

___
Rob
(Sent from my portable computation engine)
On 3 Aug 2014 at 12:58, Rob Stewart wrote:
Oh I know programmers similar to how you visualise code in your head would agree with you absolutely. But you must remember programmers like me don't see C++ as really in fact having types nor classes nor concepts - I just see methods of programming the compiler to output patterns of assembler code, and I think primarily in terms of chains of assembler instructions.
I have to assume you're using hyperbole to make your point.
I did make use of argumentum ad absurdum, as otherwise no one would notice what I just said. People already filter out 75% of what you write, forcing you to repeat everything you say thrice. As I earn hourly, I can't afford that; writing email literally costs me money.
C++ has types and it has classes, so I can't believe you perceive it otherwise. OTOH, I can imagine that you see those things merely as levers and knobs to produce the desired machine code.
You actually took the words out of my mouth - yes, levers and knobs is a good example, or a superior assembler macro implementation is a better one.
[snip] It's the essential difference between language-focused coders and err ... mongrel coders? I have to admit I'm not sure what to call myself really.
You use abstractions whenever you discuss real world objects. In fact, you use them in any natural language conversation. I'm sure you use them in programming conversations, too. Consequently, I think you're overstating your case.
I clarified what I meant in a subsequent email, but I was trying to explain that there are two big classes of programmer, one who thinks in terms of language and symbols and grammar, and the other who thinks in terms of stretchy sheets. That's also an argumentum ad absurdum, there are certainly more than just two classes, but hopefully you get what I mean.
Either way, I see ConceptCheck as a half baked feature giving me nothing useful but bloat and complexity and significantly adding mess to documentation and steepening my learning curve.
Concepts are so much noise in introductory documentation. However, for detailed and reference docs, they are indispensable to avoid repetition and to reveal similarities and differences among the types on offer by a library. I wonder if your complaint is more about their use in specific cases than about them in general.
You make a *very* valid point here. My biggest complaint is how they are used as an excuse for bad documentation; you are absolutely right on the button on that. My second, and much more minor, complaint is that they can make code more brittle than it should be, because I don't think class inheritance nor SFINAE is the right way to implement policy/trait/concepts when improved language support is the only right way there - but that doesn't bother me anything like as much, because I can simply go use an alternative library.
Practical programming isn't the right term ... I'm going to borrow a term Artur Laksberg used during my interview at Microsoft, let's call it "data flow centric programming". At scale, those patterns of assembler outputs start to look like stretchy sheets, and they come together to form a sort of hills and valleys. You could say that each CPU core is rather like a river trying to follow a path of least resistance amongst that topology, and our job as programmer is to apply transformations to the topology to yield new flows of river. It is rare that applying any transformation isn't a zero-sum outcome, and that is where skill and experience comes in.
When visualised in that way, there is no essential difference between programming languages, or operating systems, or the humans working on those systems. They're all programmed the same way.
Advantages include agnosticism towards languages, OSs, idelogies ... disadvantages include inability to relate, communicate or otherwise transmit meaning to other engineers, except those of a similar bent.
Your view of programming languages and libraries as means to an end doesn't mean you don't have to know the languages and libraries you happen to use. Documentation uses natural and programming languages to convey information. Human communication uses abstractions for efficiency. Good abstractions are helpful, even to someone like you, aren't they?
Of course. I just used a ton of similes to explain myself above, which are the essence of abstraction. Abstraction basically *is* computer programming, perhaps even more so than maths.

What I was trying to convey was that some programmers get upset by the ugliness of breaking the purity of the type system, and it innately bothers them. I, on the other hand, have no problem with the monstrosity at https://github.com/ned14/nedtries/blob/master/nedtrie.h#L1853 which I would imagine would appall any right minded C++ person here (for those curious, it's a generic STL container wrapper which subverts that container to become a bitwise trie indexed container; the code is spectacularly evil, and includes casting references through intermediate placeholder types and back into their originals and lots of other similar fun; note that I explicitly warn people in the documentation not to use it).

This is the difference I mean. Language/symbols/grammar programmers get upset at people using reinterpret_cast *at* *all* because it's seen, like goto, as taboo. Me, well, I don't care about that, or about goto either, which I do use from time to time (it's sometimes the best solution available). What gets compilers to jump right in a maintainable way is all that matters to me (note I don't view the nedtries monstrosity as being maintainable; it was supposed to be a one-shot effort, and I was quite mistaken on the popularity of that code sadly).
Most engineers accept the difference as a good thing adding an uncertain value, but the occasional one feels it as a vital threat that must be destroyed at all costs, which is unfortunate. That can introduce politics into engineering, which is usually a bad thing.
Differences are often beneficial. They can also be a source of friction. However, to my knowledge, I've never before encountered anyone with your view of programming in over 30 years of programming who wasn't using assembler.
Well ... I still think of myself as primarily an assembler programmer, but I learned not to say so on my resume quite early on. I learned to program in 6502, moved onto ARM, then adopted C as a more powerful macro language with the added benefit of being portable. That naturally evolved into C++ as an even more powerful macro engine, and metaprogramming as even more powerful again, and I still debug mostly using the disassembly view, which is why I still like Visual Studio so much (it has a much better disassembly based debugger than Eclipse). That way of seeing things is of course how a compiler writer sees things, and I found porting clang to QNX in my spare time whilst at BlackBerry effortless, which some others thought too good to be possible or feasible or indeed wise.

This doesn't mean I can't write in other languages, which I'd call "interpreter" languages and which include BBC Basic, Python, Javascript, XSLT etc. They all have interpreter dynamics, i.e. avoid creating and destroying things and you've probably got high performance. In 2011, as part of creating a cloudy Web 2.0 startup, I wrote the main product in a combination of Python, C#, Javascript and Java, all of which was surprisingly quick and was far above the performance needed. If anything, in hindsight, I should have taken more shortcuts actually.
You're making the enormous assumption that all worthwhile programmers think like you and have the same value sets you do e.g. that domains have any relevance to parameter types at all, or that outstanding quality code cannot be written without a language-based approach.
I suspect, like me, he's never encountered someone like you. While the specific does not the general make, it certainly informs one's views.
Many read my repeated posts here over these past eighteen months and have asked me personally why I bother. Your reply is why I do. It is still worth writing back here. Also, I would not have my current employment if it were not for those who read this mailing list and thought something of my posts, and I am extremely grateful for that.
I'm not sure where I fit in your taxonomy, but I dislike Boost.Filesystem's documentation because Beman wrote it as if it was to be submitted for inclusion in the Standard, while providing little other information. I read the Standards, and I can extract beneficial information from them without a lot of difficulty, but it isn't helpful when trying to learn about something for the first time.
I do see what you mean. However, Filesystem is a very thin abstraction over POSIX. It contains few surprises, apart from the occasional bug. That significantly reduces my demand of its documentation - almost all of it can suffice with a single line and a few usage examples.

Where Filesystem seriously falls down in my opinion is its lack of API guarantees. For example, a big bugbear of mine is: what are its guarantees of metadata consistency on a filesystem experiencing rapid change? If I read metadata about a file, and during the (let's say) five API calls used to read metadata that metadata changes, you end up with inconsistent metadata. I want the relevant APIs to state how consistent they are on various operating systems. For example, POSIX lets us have stat, which is atomic, and all major POSIX implementations also provide a limited atomic directory enumeration API. Win32 provides lots of itty bitty calls which are not atomic, but going direct to the NT kernel gives you not just stat but a whole load of extra lovely stuff, like being able to atomically fetch the stat for all files matching a glob pattern in a directory. I have tried to raise this with Beman before, but seen nothing about it. This sort of stuff affects how useful Filesystem is in the real world. Without guarantees Filesystem is not useful for database code etc.
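(To make the consistency concern concrete: file_size and last_write_time are real Boost.Filesystem calls, but the snapshot struct below is purely illustrative -- the point is that the two reads are separate syscalls:)

#include <boost/filesystem.hpp>
#include <cstdint>
#include <ctime>

namespace fs = boost::filesystem;

struct file_meta
{
    std::uintmax_t size;
    std::time_t    mtime;
};

// Two separate syscalls: a writer can modify the file between them,
// so size and mtime may describe two different versions of the file.
file_meta read_meta(const fs::path& p)
{
    file_meta m;
    m.size  = fs::file_size(p);        // call 1
    m.mtime = fs::last_write_time(p);  // call 2, not atomic with call 1
    return m;
}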
I also want to see an essay at the beginning on the philosophy of the design (what I normally call a design rationale), because that will tell me how it all is intended to hang together.
As do I.
After all, as a result of the lack of direct language support there are many Concept check designs, each incommensurate with the others, hell even the STL itself has at least two if not three separate ways of Concept checking, none of which are fully baked either.
Flawed concept checking does not negate the value of concepts for the documentation. It just makes them less helpful at compile time.
Ok, if you can deliver me documentation where the concept documentation appears at every point of use in a non-obtrusive way (let's say a wee '+' you can expand or hover over or something), I'll retract any objection I have to the use of concept checks in other people's code. You may need to give me a while to come around with my own code, but I think you were right earlier on: my single biggest objection is the laziness it introduces into documentation.
In truth, despite your assertion of me being in a minority, I suspect my sort of programmer is probably the majority in C++ if not in Boost.
Setting aside your view of languages as the means to the end of bits flowing over a landscape, I agree.
FYI Artur Laksberg in Microsoft has a similar view to my own on flows of data. We talked about our similarity of vision at the pub during my interview there :)
If you want a pure or elegant type system you wouldn't choose C++, for example. In fact, if you cross reasons off the list, the only reasons *anyone* would intentionally choose C++ come down to (i) potential bare metal performance, (ii) it is also C, which is useful for low level work, and (iii) the tools and ecosystem are mature. That ought to bias against languagey type programmers, though
That's a misguided view. I choose C++ because it's multiparadigm. It provides low level tools, OO, meta-programming, etc.
I can agree with that statement too.
for obvious reasons the best of the field, whom I would assume like to congregate here or on ISO, are much more likely to be languagey type programmers, as they are interested in C++ as a language.
I'm interested in improving my and others' ability to use C++ effectively.
And fittingly, we end on absolute agreement with one another. It's one of the reasons I volunteered to act as Boost GSoC admin, to help in getting more young blood in here which is sorely needed. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 03/08/2014 08:37 p.m., Niall Douglas wrote:
On 3 Aug 2014 at 12:58, Rob Stewart wrote:
I have to assume you're using hyperbole to make your point.
I did make use of argumentum ad absurdum, as otherwise no one would notice what I just said. People already filter out 75% of what you write, forcing you to repeat everything you say thrice. As I earn hourly, I can't afford that; writing email literally costs me money.
The same applies to people reading what you write. In consideration to them, you could write emails to be concrete, concise and to the point. It would be a win-win situation! Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
Awesome library! Here are some of my thoughts on the library.

It seems like you are bringing Haskell to C++. However, in the past, functional libraries seem to have struggled with acceptance in Boost. It seems like it would be nicer to name and organize things in a way that is familiar to C++ programmers. One of the things that made it easy for me when I first learned Boost.MPL was that it was built around the STL interface. I understand that in Boost.Hana you don't use iterators for performance, but it still could be organized in a way that is more familiar to C++ programmers. (Some places I've worked, there was resistance to even using Boost.Fusion, which is built around familiar C++ interfaces. Imagine if I tried to introduce Boost.Hana.)

Here's a list of names off the top of my head that could use more C++-like names:

foldl = fold
foldr = reverse_fold
fmap = transform
cons = push_front
scons = push_back
datatype = tag, tag_of
head = front
last = back
typeclass = concept

Also, most C++ programmers are used to seeing libraries divided into containers and algorithms, whereas Boost.Hana seems to be built around Haskell-like interfaces. As C++ programmers, it's difficult to find where to look for functionality. Ideally, it would be nice to start with simple concepts, such as iteration, and then have algorithms built on top of that. And then have a mechanism to overload algorithms to improve the performance for certain sequences (for example, `mpl::vector` algorithms could be specialized to improve compile-time performance). It seems like you can overload the algorithms in your library, but it's spread across different typeclasses. And then the `List` typeclass has a bunch of algorithms in it. Since `List` is heavy, I can't split the specializations of these algorithms across several header files, for example. It would be better to have smaller and simpler concepts. However, the specializations of these concepts should be for advanced use of the library. It should start with some simple concepts to get people to start using the library, and then if someone wants to create an optimized `reverse_fold`, they could.

It would be nice to see the docs divided into something like:

- The basic type concepts (such as `type<>`, integral constants, metafunctions, etc.)
- The basic sequence concepts: what concepts are necessary to define a hana sequence, and the different sequences
- Accessing data from a hana sequence, such as `at`, `front`, `back`
- Views or adaptors on sequences, such as filter, transform
- Algorithms on sequences, such as `fold`, `max`, etc.
- Extending Boost.Hana

Those are some thoughts after reading through the library. Perhaps I missed some design goals you have.
pfultz2
Awesome library! Here are some of my thoughts on the library.
It seems like you are bringing Haskell to C++. However, in the past, functional libraries seem to have struggled with acceptance in Boost. It seems like it would be nicer to name and organize things in a way that is familiar to C++ programmers. One of the things that made it easy for me when I first learned Boost.MPL was that it was built around the STL interface. I understand that in Boost.Hana you don't use iterators for performance,
That is slightly inaccurate. I don't use iterators because other abstractions provide much more expressiveness and are much more general than iterators. Since performance is not an argument in favor of iterators in our case (as you rightly observe), they are not used. The #1 reason is expressiveness.
but it still could be organized in a way that is more familiar to C++ programmers (Some places I've worked, there was resistance to even using Boost.Fusion, which is built around familiar C++ interfaces. Imagine if I tried to introduce Boost.Hana.)
Hana is fundamentally functional, it uses rather abstract concepts and that's what makes it so powerful. Truth is, "Sequence" and "Container" concepts are easy to understand but they only get you so far; they are not general enough to be used as building blocks. I did try using both.
Here's a list of names off the top of my head that could use more C++-like names:
foldl = fold
foldr = reverse_fold
fmap = transform
cons = push_front
scons = push_back
datatype = tag, tag_of
head = front
last = back
typeclass = concept
I agree that using more C++ish names could make it more tempting for people to adopt the library. I'll think about providing aliases.
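If the algorithms are ordinary function objects, as they appear to be, such aliases could be as small as the following sketch; the header name and the namespace are assumptions:

    #include <boost/hana.hpp>  // header name assumed

    // Hypothetical C++-flavoured aliases for Hana's algorithms.
    namespace hana_aliases {
        constexpr auto transform    = boost::hana::fmap;
        constexpr auto fold         = boost::hana::foldl;
        constexpr auto reverse_fold = boost::hana::foldr;
    }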
Also, most C++ programmers are used to seeing libraries divided into containers and algorithms, whereas Boost.Hana seems to be built around Haskell-like interfaces.
Hana is actually built around concepts (type classes) and models of those concepts (data types). Apart from the names, which I agree could have been chosen differently, it's the same "design paradigm" as usual generic C++ libraries.
As C++ programmers, it's difficult to find where to look for functionality.
Improving the documentation should make it easier for people to find the functionality they want.
Ideally, it would be nice to start with simple concepts, such as iteration, and then have algorithms built on top of that.
I'm not sure what you mean by that. Do you mean change the structure of type classes, or how they are introduced?
And then have a mechanism to overload algorithms to improve the performance for certain sequences (for example, `mpl::vector` algorithms could be specialized to improve compile-time performance).
The current type class system allows you to do just that, and it's used extensively in the implementation. This is explained in the tutorial section on type classes, when I give an example with `std::string` specializing `Printable` to make `to_string` more efficient. Should I clarify this section?
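For readers who haven't reached that tutorial section, the shape of such a specialization is roughly as follows, mirroring the `Foldable::instance` pattern shown later in this thread; the exact `Printable` interface here is an assumption, not a quote from the docs:

    #include <string>

    namespace boost { namespace hana {
        // Hypothetical: make std::string's to_string the identity
        // instead of going through a generic conversion.
        template <>
        struct Printable::instance<std::string> : Printable::mcd {
            static std::string to_string_impl(std::string const& s) {
                return s;
            }
        };
    }}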
It seems like you can overload the algorithms in your library, but it's spread across different typeclasses. And then the `List` typeclass has a bunch of algorithms in it. Since `List` is heavy, I can't split the specializations of these algorithms across several header files, for example. It would be better to have smaller and simpler concepts.
There are two things here. First, the decision not to split the library into a thousand headers was considered a feature, because it meant that you did not need to include countless headers to get some functionality. Since the library is very lightweight, it does not cause a compile-time performance problem to include the whole list.hpp header. Second, I agree that the List type class is heavy, but that's because it's more like a data type really. The only reason I made it a type class is because I wanted to give `fusion::vector`, `std::tuple` and friends the same methods as `hana::list`, for free. The laws of List force every instance to be isomorphic to `hana::list`, so there is no reason why you would want to instantiate it. Basically, if you're instantiating List, you've just created a new container that does _exactly_ the same as `hana::list`, and you would be better off using this one in that case. This is why I don't see `List` being such a large type class as a big problem.
However, the specializations of these concepts should be for advanced use of the library. It should start with some simple concepts to get people to start using the library, and then if someone wants to create an optimized `reverse_fold`, they could.
I take it that you're talking about the order in which things are presented in the documentation. I think that expanding the tutorial into something closer to the quick start will be beneficial for people without a FP background, and I'm taking a note to do it. That section will come after the quick start and before the "type classes" section, and will present the most concrete and important type classes used all the time (Comparable, Functor, Foldable, Iterable). People who want to read about Applicatives, Monads and Traversables will have the reference for that.
It would be nice to see the docs divided into something like:
- The basic type concepts (such as `type<>`, integral constants, metafunctions, etc.)
- The basic sequence concepts: what concepts are necessary to define a hana sequence, and the different sequences
- Accessing data from a hana sequence, such as `at`, `front`, `back`
- Views or adaptors on sequences, such as filter, transform
- Algorithms on sequences, such as `fold`, `max`, etc
- Extending Boost.Hana
Ok, my previous paragraph is basically that. I agree with you, and I think this is a better way of presenting things.
Those are some thoughts after reading through the library. Perhaps I missed some design goals you have.
Thanks so much for your comments. Louis
On Wed, Jul 30, 2014 at 1:19 AM, Louis Dionne
pfultz2
writes: Here's a list of names off the top of my head that could use more C++-like names:
foldl = fold
foldr = reverse_fold
fmap = transform
cons = push_front
scons = push_back
datatype = tag, tag_of
head = front
last = back
typeclass = concept
I agree that using more C++ish names could make it more tempting for people to adopt the library. I'll think about providing aliases.
I agree with Paul, FWIW. (As an average C++ dev who only dabbled in MPL and Fusion, with little FP experience, and who finds cons and scons really "esoteric".)
Also, most C++ programmers are used to seeing libraries divided into containers and algorithms, whereas Boost.Hana seems to be built around Haskell-like interfaces.
Hana is actually built around concepts (type classes) and models of those concepts (data types). Apart from the names, which I agree could have been chosen differently, it's the same "design paradigm" as usual generic C++ libraries.
Names matter of course. And the closer you stay to C++ lore, as Paul mentioned, the better IMHO. And I'm not sure aliases are the way to go, since having two names for the same thing only makes code reading difficult when it uses the "other" name one's not used to. It's your library of course; I'd never -1 it based on the names you prefer to use. Thanks, --DD
That is slightly inaccurate. I don't use iterators because other abstractions provide much more expressiveness and are much more general than iterators. Since performance is not an argument in favor of iterators in our case (as you rightly observe), they are not used. The #1 reason is expressiveness.
I understand how it is simpler to define sequences using `head`/`tail`, but how is it more expressive?
Hana is fundamentally functional, it uses rather abstract concepts and that's what makes it so powerful. Truth is, "Sequence" and "Container" concepts are easy to understand but they only get you so far; they are not general enough to be used as building blocks.
I believe Alex Stepanov would like to have a word with you ;)
I did try using both.
What limitation did you run into separating "Sequences" and "Algorithms"?
Also, most C++ programmers are used to seeing libraries divided into containers and algorithms, whereas Boost.Hana seems to be built around Haskell-like interfaces.
Hana is actually built around concepts (type classes) and models of those concepts (data types). Apart from the names, which I agree could have been chosen differently, it's the same "design paradigm" as usual generic C++ libraries.
Yet it uses a lot of foreign functional concepts. Despite what Google says, Boost libraries are mostly built around C++-like 'interfaces'. I don't think it's bad to introduce some functional interfaces, but I know that in the past, libraries that were very functional were rejected from Boost because many people didn't grasp their purpose. So introducing a lot of functional interfaces on the surface of a library can hinder its adoption. Also, concepts that have default methods are not very common. In general, a C++ concept defines a core set of functionality that can be used to extend a library.
Ideally, it would be nice to start with simple concepts, such as iteration, and then have algorithms built on top of that.
I'm not sure what you mean by that. Do you mean change the structure of type classes, or how they are introduced?
Yes, change the structure of the typeclasses. For example, `Iterable` has `for_each` included (which is defaulted). However, you could separate `for_each` out as an algorithm that requires an `Iterable`. Then have a separate mechanism to override the algorithms, such as for variadic templates:

    template <template <class...> class Sequence, class... Ts>
    struct algorithm<Sequence<Ts...>> {
        template <class F>
        static void call(Sequence<Ts...>& s, F f) { ... }
    };

The same can be done for the accessors as well in the `Iterable` typeclass.
There are two things here. First, the decision not to split the library into a thousand headers was considered a feature, because it meant that you did not need to include countless headers to get some functionality. Since the library is very lightweight, it does not cause a compile-time performance problem to include the whole list.hpp header.
Even so, in your implementation of list.hpp it may not be a performance hit to include everything together, but perhaps as a user I could be adapting a type where each of those methods is heavyweight, so I would want to separate them out into separate headers.
Second, I agree that the List type class is heavy, but that's because it's more like a data type really. The only reason I made it a type class is because I wanted to give `fusion::vector`, `std::tuple` and friends the same methods as `hana::list`, for free. The laws of List force every instance to be isomorphic to `hana::list`, so there is no reason why you would want to instantiate it.
Just as non-member functions are preferred when they only access public members, I believe the same advice can be applied here. Each method that has a default implementation could be made a separate function (which could be designed to be overloaded as well).
On 7/30/2014 9:10 AM, pfultz2 wrote:
On 7/29/2014 4:19 PM, Louis Dionne wrote:
Hana is fundamentally functional, it uses rather abstract concepts and that's what makes it so powerful. Truth is, "Sequence" and "Container" concepts are easy to understand but they only get you so far; they are not general enough to be used as building blocks.
I believe Alex Stepanov would like to have a word with you ;)
"Container" isn't really a concept in the STL. "Iterator" is. And the concept of iterator was derived from studying runtime algorithms as executed by hardware. There's no reason to think that studying compile-time algorithms as executed by modern C++ compilers would lead to the same abstractions. \e
Then again, if the interface of hana::fmap is nothing like the interface to std::accumulate, it could lead to confusion. The MPL names work in part because MPL is STL-ish design (containers/iterators/algorithms).
Just throwing that out there. No strong feelings. Naming Is Hard.
Well, fmap is nothing like accumulate. However, transform is like fmap (at least how it is used in Boost.Fusion, Boost.MPL, and Boost.Range). The only difference is that in Hana, fmap can go beyond just sequences. It could be applied to boost::optional, boost::future, pointers, etc. (although you could just adapt these types as sequences), but transform still works in a similar manner, so I don't see a problem with calling it transform.
Hana is fundamentally functional, it uses rather abstract concepts and that's what makes it so powerful. Truth is, "Sequence" and "Container" concepts are easy to understand but they only get you so far; they are not general enough to be used as building blocks.
I believe Alex Stepanov would like to have a word with you ;)
"Container" isn't really a concept in the STL. "Iterator" is. And the concept of iterator was derived from studying runtime algorithms as executed by hardware. There's no reason to think that studying compile-time algorithms as executed by modern C++ compilers would lead to the same abstractions.
I agree. I was referring to being able to make algorithms and sequences orthogonal, which should still be possible at compile-time.
pfultz2
That is slightly inaccurate. I don't use iterators because other abstractions provide much more expressiveness and are much more general than iterators. Since performance is not an argument in favor of iterators in our case (as you rightly observe), they are not used. The #1 reason is expressiveness.
I understand how it is simpler to define sequences using `head`/`tail`, but how is it more expressive?
Of course it's not more expressive in a formal way because you can achieve the same things with both. I was speaking about the generality of Hana type classes, e.g. instead of having iterator categories I have Functor, Foldable, Iterable and Traversable. These are more general concepts and they can be used more widely than iterators. For example, Maybe is like a compile-time std::optional. It is easy to make it a Functor, but it would seem wrong to try and define an iterator over a Maybe. Not that it can't be done, just that it feels wrong.
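To illustrate that last point, mapping over a Maybe involves no iteration at all; a sketch, assuming Hana's `just`, `nothing` and `fmap` are in scope:

    auto twice = [](auto x) { return x + x; };
    // fmap(twice, just(3)) == just(6)   -- the stored value is transformed
    // fmap(twice, nothing) == nothing   -- no value, no iteration, no error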
Hana is fundamentally functional, it uses rather abstract concepts and that's what makes it so powerful. Truth is, "Sequence" and "Container" concepts are easy to understand but they only get you so far; they are not general enough to be used as building blocks.
I believe Alex Stepanov would like to have a word with you ;)
I was speaking in the context of metaprogramming. Whether iterators are suited for this or that in another context is a different debate I'm neither qualified nor willing to have.
I did try using both.
What limitation did you run into separating "Sequences" and "Algorithms"?
I said Sequences and Containers, not Sequences and Algorithms. Splitting sequences and algorithms is fine, and that's what's done in Hana. Related algorithms are bundled into type classes and sequences (or more generally data types) must define a fixed set of primitives to be usable with those algorithms. When I first started working on the MPL11, I took all the concepts from the MPL and tried to split it so it would work without iterators. That's when I ran into problems. I don't remember the specifics because that was a while ago, but I could look into my old notes if you really want details.
[...]
Yet it uses a lot of foreign functional concepts. Despite what Google says, Boost libraries are mostly built around C++-like 'interfaces'. I don't think it's bad to introduce some functional interfaces, but I know that in the past, libraries that were very functional were rejected from Boost because many people didn't grasp their purpose. So introducing a lot of functional interfaces on the surface of a library can hinder its adoption.
That is sad, because the functional concepts are exactly what makes Hana so powerful. Some problems are well suited for functional programming, and some are not. Metaprogramming is __inherently__ functional, and it's about time people embrace it.
Also, concepts that have default methods are not very common. In general, a C++ concept defines a core set of functionality that can be used to extend a library.
Ideally, it would be nice to start with simple concepts, such as iteration, and then have algorithms built on top of that.
I'm not sure what you mean by that. Do you mean change the structure of type classes, or how they are introduced?
Yes, change the structure of the typeclasses. For example, `Iterable` has `for_each` included (which is defaulted). However, you could separate `for_each` out as an algorithm that requires an `Iterable`. Then have a separate mechanism to override the algorithms, such as for variadic templates:
    template <template <class...> class Sequence, class... Ts>
    struct algorithm<Sequence<Ts...>> {
        template <class F>
        static void call(Sequence<Ts...>& s, F f) { ... }
    };

The same can be done for the accessors as well in the `Iterable` typeclass.
I could rip out all methods except their minimal complete definition from every type class, and have those other methods be "algorithms that require instances of the type class". However, you can now only have a single minimal complete definition for each type class, which is less flexible. Let's pretend that you can somehow have multiple MCDs. Now, the default definition of the algorithms that were ripped out can't depend on that MCD, because they are out of the type class. That's another loss of flexibility. All in all, I see no gains and only losses in ripping out methods from the type classes.

You seem to be asking for flexibility. I designed every single bit of the type class dispatching system so that:

1. Every single algorithm can be overridden, provided you instantiate the type class.
2. You always get the maximum number of algorithms for as few definitions as possible.
3. You do not _ever_ have to duplicate the definition of an algorithm.
4. Depending on the basis operations you provide when instantiating a type class, it is possible that the defaults will be more or less performant.

If you can explain to me what your proposal allows that can't be done right now, I'm open for discussion. That being said, and based on your specific example for variadic templates, I think you're wondering why there are no efficient operations for variadic sequences. It's because I haven't had the time to add it, but here's what's planned:

    // Minimal complete definition: unpack
    struct Foldable::unpack_mcd {
        // Now, I can implement foldr, foldl, sum and whatnot _super_
        // efficiently, and any sequence-like thing that can unpack its
        // content into a variadic function is qualified to get it.
    };
There are two things here. First, the decision not to split the library into a thousand headers was considered a feature, because it meant that you did not need to include countless headers to get some functionality. Since the library is very lightweight, it does not cause a compile-time performance problem to include the whole list.hpp header.
Even so, in your implementation of list.hpp it may not be a performance hit to include everything together, but perhaps as a user I could be adapting a type where each of those methods is heavyweight, so I would want to separate them out into separate headers.
Ahhh, now I understand what you mean. I think that's possible and I'll reply once I've tried it. Basically, I'll try to split the adaptor for Fusion into a separate header and see if it works.
Second, I agree that the List type class is heavy, but that's because it's more like a data type really. The only reason I made it a type class is because I wanted to give `fusion::vector`, `std::tuple` and friends the same methods as `hana::list`, for free. The laws of List force every instance to be isomorphic to `hana::list`, so there is no reason why you would want to instantiate it.
Just as non-member functions are preferred when they only access public members, I believe the same advice can be applied here. Each method that has a default implementation could be made a separate function (which could be designed to be overloaded as well).
I really don't think the same advice can be applied here, and I don't see why it should. Type classes and regular structs are conceptually very different entities, even if they are implemented in the same way. Louis
Louis Dionne
pfultz2
writes: [...]
There are two things here. First, the decision not to split the library into a thousand headers was considered a feature, because it meant that you did not need to include countless headers to get some functionality. Since the library is very lightweight, it does not cause a compile-time performance problem to include the whole list.hpp header.
Even so, in your implementation of list.hpp it may not be a performance hit to include everything together, but perhaps as a user I could be adapting a type where each of those methods is heavyweight, so I would want to separate them out into separate headers.
Ahhh, now I understand what you mean. I think that's possible and I'll reply once I've tried it. Basically, I'll try to split the adaptor for Fusion into a separate header and see if it works.
Ok, I tried it in a sandbox and here's what you could do:

    // In some forward declaration header.
    template <>
    struct Foldable::instance<YourDatatype> : Foldable::mcd {
        template ...
I could rip out all methods except their minimal complete definition from every type class, and have those other methods be "algorithms that require instances of the type class". However, you can now only have a single minimal complete definition for each type class, which is less flexible. Let's pretend that you can somehow have multiple MCDs. Now, the default definition of the algorithms that were ripped out can't depend on that MCD, because they are out of the type class. That's another loss of flexibility. All in all, I see no gains and only losses in ripping out methods from the type classes.
Of course, you don't have to rip out every default method. It makes sense to keep some when you have multiple MCDs.
If you can explain to me what your proposal allows that can't be done right now, I'm open for discussion.
Of course, you can achieve that already in your library, but what I'm proposing is to separate the two purposes you have for implementing typeclasses. Instead, the user would implement a typeclass to fulfill type requirements, and would overload an algorithm to provide optimizations.
I really don't think the same advice can be applied here, and I don't see why it should. Type classes and regular structs are conceptually very different entities, even if they are implemented in the same way.
Perhaps the advice doesn't directly apply. However, bloated typeclasses seem to be bad design. C++ concepts are never bloated like this, and neither are Haskell typeclasses.
I agree that `bind` isn't a good choice. I'll think of something, but `apply` and `compute` are out of the question because Monads are not computations; that's just one metaphor.
Also, I think it would be better to call the `Monad` concept `Computation` or something like that, since monad doesn't mean anything at all outside the FP community.
First, I find Computation to be too reductive. Second, I don't want to rename a concept that's well known in FP and unknown in C++ to something else that's unknown in C++; I'd rather keep the well known FP word. Anyway, the day when all C++ programmers will know Monads is coming, but DQMOT.
The definition of a monad from wikipedia is a structure that represents computations defined as sequences of steps, so calling it `Computation` makes sense. Also, the definition and the name are simple and clear to a lot of non-FP programmers. In contrast, calling it a monad and then defining it as an applicative with the ability to flatten values that were lifted more than once is pretty meaningless to most C++ programmers. Monads as computations is just one metaphor, but it is the metaphor most familiar to C++ programmers. Or do you think there is a better metaphor for C++ programmers?

Here are some other thoughts after looking at it more:

- A lot of functions can be implemented with just `Iterable`, such as fmap, concat, cons, filter, init, take, take_while, take_until, etc. Or am I missing something?

- The parameters to the functions are backwards from what most C++ programmers are used to, which will be a source of endless confusion. I assume they are in this order because of their usefulness for currying inside applicatives. Perhaps instead you could have another function adapter that rotates the first parameter to the last parameter, and then still keep the C++ convention of putting the sequence first.

- It's important to note that the `decltype_` function will only work for constexpr-friendly types.

- There seems to be a lot of copying by value where it should use perfect forwarding. Has this been tested with non-copyable types and expensive-to-copy types as well?
pfultz2
I could rip out all methods except their minimal complete definition from every type class, and have those other methods be "algorithms that require instances of the type class". However, you can now only have a single minimal complete definition for each type class, which is less flexible. Let's pretend that you can somehow have multiple MCDs. Now, the default definition of the algorithms that were ripped out can't depend on that MCD, because they are out of the type class. That's another loss of flexibility. All in all, I see no gains and only losses in ripping out methods from the type classes.
Of course, you don't have to rip out every default method. It makes sense to keep some when you have multiple MCDs.
If you can explain to me what your proposal allows that can't be done right now, I'm open for discussion.
Of course, you can achieve that already in your library, but what I'm proposing is to separate the two purposes you have for implementing typeclasses. Instead, the user would implement a typeclass to fulfill type requirements, and would overload an algorithm to provide optimizations.
If the methods are separated from the type classes, they can't have a default definition which depends on the MCD that's used. For example, let's pretend I have two MCDs for Foldable; `fold_mcd` and `unpack_mcd`. `fold_mcd` requires both `foldl` and `foldr`, and `unpack_mcd` requires `unpack`. Now, there are multiple ways to implement `sum`. Two of them are:

    auto sum = [](auto xs) {
        return foldl(_+_, int_<0>, xs);
    };

    auto sum = [](auto xs) {
        return unpack(some_very_fast_sum_on_variadic_packs, xs);
    };

where `some_very_fast_sum_on_variadic_packs` would put the `xs` in an array and then return an `int_<...>`. However, in `fold_mcd`, `unpack` is implemented inefficiently, and in `unpack_mcd`, `fold` is decent but it is still likely not as efficient as a user-provided one. Which implementation for `sum` do I choose? If I pick the first one, it's going to be suboptimal with objects that used `unpack_mcd`. If I pick the second one, it's going to be suboptimal with objects that used `fold_mcd`.
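As a concrete, if simplistic, stand-in for `some_very_fast_sum_on_variadic_packs` (a sketch, not the planned implementation), the pack really can just be dumped into an array:

    // C++14: sum a parameter pack with a single pack expansion instead
    // of a recursive chain of folds.
    template <typename ...Xs>
    constexpr int sum_of_pack(Xs ...xs) {
        int values[] = {xs..., 0};  // trailing 0 keeps an empty pack legal
        int total = 0;
        for (int v : values)
            total += v;             // loops are allowed in C++14 constexpr
        return total;
    }

    static_assert(sum_of_pack(1, 2, 3) == 6, "");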
I really don't think the same advice can be applied here, and I don't see why it should. Type classes and regular structs are conceptually very different entities, even if they are implemented in the same way.
Perhaps the advice doesn't directly apply. However, bloated typeclasses seem to be bad design. C++ concepts are never bloated like this, and neither are Haskell typeclasses.
Honestly, I fail to see why that would be bad design __as long as__:

1. The type class is not actually two type classes (i.e. the type class is really _one_ bundle of related operations).
2. The MCD(s) are kept minimal.

If you have a type class with a lot of methods but these two points are respected, IMO you just found a sweet spot where you could express a whole lot of things with only a couple of base methods (the MCDs). If you go look at the Foldable type class in Haskell, you'll see that there are a bunch of related functions provided with the type class, yet they are not included in it. My opinion is that they might just as well be included in the type class, as you could then redefine them for improved performance. I just searched online for a rationale or at least some insight about this decision, but I did not find anything.
[...]
The definition of a monad from wikipedia is a structure that represents computations defined as sequences of steps, so calling it `Computation` makes sense. Also, the definition and the name are simple and clear to a lot of non-FP programmers. In contrast, calling it a monad and then defining it as an applicative with the ability to flatten values that were lifted more than once is pretty meaningless to most C++ programmers. Monads as computations is just one metaphor, but it is the metaphor most familiar to C++ programmers. Or do you think there is a better metaphor for C++ programmers?
Right, but then Monads can also be seen as an abstract category theoretical construction (a Functor with two natural transformations). I'm not saying this is the right way for programmers to see it (probably isn't), but I do think that dumbing it down to "it's a computation" is blindly taking refuge in a metaphor. Lists are Monads, Maybes are Monads; I definitely don't think of these as some kind of computation. That being said, Monads are a common hurdle for people learning FP (I myself am super new to this stuff, BTW) and I'm not sure changing the name would do any good. To grok Monads, you have to bang your head a bit, not think of them as one particular metaphor[1]. Also, FWIW, I think that defining Monads with `join` (as in Hana) instead of `bind` (as in Haskell) makes them easier to understand, but that's just me.
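For the record, the identity behind that remark is standard Monad algebra rather than anything Hana-specific: `bind` is recoverable from `join` and `fmap`, so a `join`-based definition loses nothing.

    // bind(m, f) == join(fmap(f, m)): map the monadic function over the
    // monad, then flatten the doubly-wrapped result.
    // (Sketch; assumes join and fmap are in scope.)
    auto bind_from_join = [](auto m, auto f) {
        return join(fmap(f, m));
    };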
Here are some other thoughts after looking at it more:
- A lot of functions can be implemented with just `Iterable`, such as fmap, concat, cons, filter, init, take, take_while, take_until, etc. Or am I missing something?
Yup, you're missing `cons` and `nil`. But you're right that `List` can be refactored, and I plan to do it. For example, `filter` can be implemented if you give me `nil` and a `Monad`, which makes a `MonadZero` (a Monad with a neutral element):

    // pseudo code
    auto filter = [](auto pred, auto xs) {
        auto go = [=](auto x) {
            return pred(x) ? lift(x) : nil;
        };
        return join(fmap(go, xs));
    };

You get `fmap` from Functor, `lift` from Applicative and `nil` from MonadZero. Then you can filter Maybes, with `nil` being `nothing`!
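Spelled out with a usage sketch (my example, assuming the same names as above):

    auto is_even = [](auto x) { return x % 2 == 0; };
    // filter(is_even, just(4)) == just(4)   -- the value passes the predicate
    // filter(is_even, just(3)) == nothing   -- filtered out, Maybe-style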
- The parameters to the functions are backwards from what most C++ programmers are used to, which will be a source of endless confusion. I assume they are in this order because of their usefulness for currying inside applicatives. Perhaps instead you could have another function adapter that rotates the first parameter to the last parameter, and then still keep the C++ convention of putting the sequence first.
The reason this was done at the beginning is that I respected the order of the arguments of the Haskell functions with the same name. I had no reason to change it, so I left it that way. I assume the Haskell order is optimized to make currying easier. That being said, I now see it as a major PITA because it turns out that I don't curry much and I have to write stuff like:

    // ewww
    all([](auto x) {
        // some predicate, maybe on many lines
    }, the_sequence);

So I plan on reversing the order of most function arguments in the next days:

    // ahhh! much better!
    all(the_sequence, [](auto x) {
        // some predicate, maybe on many lines
    });
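For reference, the adapter Paul suggests above might look something like this; the name and the forwarding details are assumptions, not anything in Hana:

    #include <utility>

    // Rotate the first argument to the last position: callers can write
    // rotate_args(all)(the_sequence, pred) while the implementation keeps
    // the currying-friendly predicate-first order.
    auto rotate_args = [](auto f) {
        return [f](auto&& first, auto&&... rest) {
            return f(std::forward<decltype(rest)>(rest)...,
                     std::forward<decltype(first)>(first));
        };
    };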
- It's important to note that the `decltype_` function will only work for constexpr-friendly types.
I don't get it? Can you please expand?
- There seems to be a lot of copying by value where it should use perfect forwarding. Has this been tested with non-copyable types and expensive-to-copy types as well?
Nope, that's on my TODO list. I plan on writing runtime benchmarks, but for now I have put all runtime performance considerations to the side. For the time being, if you have expensive-to-copy types, you might want to use `std::ref` if that gives you the semantics you're looking for, and otherwise you're out of luck. Regards, Louis

[1]: http://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/
If the methods are separated from the type classes, they can't have a default definition which depends on the MCD that's used. For example, let's pretend I have two MCDs for Foldable; `fold_mcd` and `unpack_mcd`. `fold_mcd` requires both `foldl` and `foldr`, and `unpack_mcd` requires `unpack`. Now, there are multiple ways to implement `sum`. Two of them are:
    auto sum = [](auto xs) {
        return foldl(_+_, int_<0>, xs);
    };

    auto sum = [](auto xs) {
        return unpack(some_very_fast_sum_on_variadic_packs, xs);
    };

where `some_very_fast_sum_on_variadic_packs` would put the `xs` in an array and then return an `int_<...>`. However, in `fold_mcd`, `unpack` is implemented inefficiently, and in `unpack_mcd`, `fold` is decent but it is still likely not as efficient as a user-provided one. Which implementation for `sum` do I choose? If I pick the first one, it's going to be suboptimal with objects that used `unpack_mcd`. If I pick the second one, it's going to be suboptimal with objects that used `fold_mcd`.
But as a user, if I were defining my own `sum` function, how would I do it? I can't change the typeclass, since it is in another library. Is there a way for me as a user to optimize my `sum` function based on which MCD was implemented? And if so, couldn't the library do the same?
If you go look at the Foldable type class in Haskell, you'll see that there are a bunch of related functions provided with the type class, yet they are not included in it. My opinion is that they might just as well be included in the type class, as you could then redefine them for improved performance. I just searched online for a rationale or at least some insight about this decision, but I did not find anything.
I think the rationale is similar to the rationale for using non-member functions. One reason is consistency. People are going to add new algorithms, but they won't be added to the typeclass. Furthermore, if they want to allow overloading the algorithm for optimization, they will create new typeclasses. So now you have two different ways to accomplish the same thing. Another reason is that it will make the typeclass simpler and improve encapsulation. A typeclass is defined by the minimum necessary, and not by another 50 algorithms.
That being said, Monads are a common hurdle for people learning FP (I myself am super new to this stuff, BTW) and I'm not sure changing the name would do any good. To grok Monads, you have to bang your head a bit, not think of them as one particular metaphor[1]. Also, FWIW, I think that defining Monads with `join` (as in Hana) instead of `bind` (as in Haskell) makes them easier to understand, but that's just me.
I do agree that `join` is easier than `bind`.
Yup, you're missing `cons` and `nil`. But you're right that `List` can be refactored, and I plan to do it. For example, `filter` can be implemented if you give me `nil` and a `Monad`, which makes a `MonadZero` (a Monad with a neutral element):
    // pseudo code
    auto filter = [](auto pred, auto xs) {
        auto go = [=](auto x) {
            return pred(x) ? lift(x) : nil;
        };
        return join(fmap(go, xs));
    };
You get `fmap` from Functor, `lift` from Applicative and `nil` from MonadZero. Then you can filter Maybes, with `nil` being `nothing`!
Awesome.
- It's important to note that the `decltype_` function will only work for constexpr-friendly types.
I don't get it? Can you please expand?
I should clarify that I'm referring to the use of `decltype_` within the context of constexpr. It will work outside of that. So, for example, if I were to static_assert that two types were the same, as a simple example:

    template <class T>
    void foo(T x) {
        auto y = bar();
        static_assert(decltype_(x) == decltype_(y), "Not matching types");
    }

This won't work if the types are not literal types or don't have a constexpr constructor. It will fail even if `decltype_` were to take the expression by reference. Ultimately, I believe the language should be changed to allow for this.
pfultz2
[...]
But as a user, if I were defining my own `sum` function, how would I do it? I can't change the typeclass, since it is in another library. Is there a way for me as a user to optimize my `sum` function based on which MCD was implemented? And if so, couldn't the library do the same?
Well, if you define the `sum` function, that's because you are defining an instance of the corresponding type class (Foldable). Hence, you are already choosing the MCD:

    namespace boost { namespace hana {
        template <>
        struct Foldable::instance<YourDatatype>
            : Foldable::the_mcd_you_want
        {
            // minimal complete definition here

            template <typename Xs>
            static constexpr auto sum_impl(Xs xs) {
                // your custom sum implementation
            }
        };
    }} // end namespace boost::hana

Am I missing your point?
If you go look at the Foldable type class in Haskell, you'll see that there are a bunch of related functions provided with the type class, yet they are not included in it. My opinion is that they might just as well be included in the type class, as you could then redefine them for improved performance. I just searched online for a rationale or at least some insight about this decision, but I did not find anything.
I think the rationale is similar to the rationale for using non-member functions. One reason is consistency. People are going to add new algorithms, but they won't be added to the typeclass. Furthermore, if they want to allow overloading the algorithm for optimization, they will create new typeclasses. So now you have two different ways to accomplish the same thing.
The truth is that I think users should not feel the need to add methods to existing type classes. If a method can be implemented in a type class _and_ has a general utility, then it should be added to the type class for everyone to benefit. If, however, you need more "structure" than provided by existing type classes to do something, then you create a new type class which carries that additional "structure" and in which you can implement more operations. That's how I see it.
Another reason is that it will make the typeclass simpler and improve encapsulation. A typeclass is defined by the minimum necessary, and not by another 50 algorithms.
I disagree. Type classes are _already_ defined by their minimal complete definition(s). While this is not the case in the current documentation, I'd like to actually document an equivalent implementation for each method using only methods in a minimal complete definition. That would make it more obvious that type classes are defined by their MCDs.
- It's important to note that the `decltype_` function will only work for constexpr-friendly types.
I don't get it? Can you please expand?
I should clarify that I'm referring to the use of `decltype_` within the context of constexpr. It will work outside of that. So, for example, if I were to static_assert that two types were the same, as a simple example:
    template <class T>
    void foo(T x) {
        auto y = bar();
        static_assert(decltype_(x) == decltype_(y), "Not matching types");
    }

This won't work if the types are not literal types or don't have a constexpr constructor. It will fail even if `decltype_` were to take the expression by reference. Ultimately, I believe the language should be changed to allow for this.
It should work; Hana was designed exactly to deal with that. Here's what happens (that's going to be in the tutorial also):

1. decltype_(x) == decltype_(y) returns a bool_<true or false>
2. bool_<b> has a constexpr conversion to bool defined as:

    template <bool b>
    struct bool_type {
        constexpr operator bool() const { return b; }
    };

Since the conversion is always constexpr, it's used in the `static_assert` and it works. Now, it does not __actually__ work because of what I think is a bug in Clang. For it to work, you have to define a dummy object like that:

    template <typename X, typename Y>
    void foo(X x, Y y) {
        auto dummy_result = decltype_(x) == decltype_(y);
        static_assert(dummy_result, "");
    }

And that will compile. Am I exploiting some hole in Clang or is this correct w.r.t. C++14? I'm unfortunately not really a standards guy, so if someone can help here that'd be helpful.
Well, if you define the `sum` function, that's because you are defining an instance of the corresponding type class (Foldable). Hence, you are already choosing the MCD:
    namespace boost { namespace hana {
        template <>
        struct Foldable::instance<YourDatatype>
            : Foldable::the_mcd_you_want
        {
            // minimal complete definition here

            template <typename Xs>
            static constexpr auto sum_impl(Xs xs) {
                // your custom sum implementation
            }
        };
    }} // end namespace boost::hana
Am I missing your point?
Yes, I shouldn't have used `sum` as an example. So say another library wants to implement `xor` or `bitand`: can they use different MCDs to optimize it, like `sum` does? And if they can, then `sum` could do the same thing as well. If they cannot, then this should be fixed. It's not possible to think of every single fold algorithm and put it into a typeclass beforehand.
The truth is that I think users should not feel the need to add methods to existing type classes. If a method can be implemented in a type class _and_ has a general utility, then it should be added to the type class for everyone to benefit. If, however, you need more "structure" than provided by existing type classes to do something, then you create a new type class which carries that additional "structure" and in which you can implement more operations. That's how I see it.
It should be possible to build on top of your library to create general utilities without requiring the user to patch your library.
It should work; Hana was designed exactly to deal with that. Here's what happens (that's going to be in the tutorial also):
1. decltype_(x) == decltype_(y) returns a bool_<true or false>
2. bool_<b> has a constexpr conversion to bool defined as:

    template <bool b>
    struct bool_type {
        constexpr operator bool() const { return b; }
    };

Since the conversion is always constexpr, it's used in the `static_assert` and it works. Now, it does not __actually__ work because of what I think is a bug in Clang. For it to work, you have to define a dummy object like that:

    template <typename X, typename Y>
    void foo(X x, Y y) {
        auto dummy_result = decltype_(x) == decltype_(y);
        static_assert(dummy_result, "");
    }

And that will compile. Am I exploiting some hole in Clang or is this correct w.r.t. C++14? I'm unfortunately not really a standards guy, so if someone can help here that'd be helpful.
According to my understanding of the language, if you try to call `foo` with a `std::vector` it will fail, since `std::vector` is not a literal type nor is it constexpr-constructible. If it does work on Clang using the dummy variable trick, then it looks like you are exposing a hole in Clang. Of course, I could be wrong.
pfultz2
Well, if you define the `sum` function, that's because you are defining an instance of the corresponding type class (Foldable). Hence, you are already choosing the MCD:
    namespace boost { namespace hana {
        template <>
        struct Foldable::instance<YourDatatype>
            : Foldable::the_mcd_you_want
        {
            // minimal complete definition here

            template <typename Xs>
            static constexpr auto sum_impl(Xs xs) {
                // your custom sum implementation
            }
        };
    }} // end namespace boost::hana
Am I missing your point?
Yes, I shouldn't have used `sum` as an example. So say another library wants to implement `xor` or `bitand`: can they use different MCDs to optimize it, like `sum` does? And if they can, then `sum` could do the same thing as well. If they cannot, then this should be fixed. It's not possible to think of every single fold algorithm and put it into a typeclass beforehand.
I don't see a way of doing that right now; you have to put the method in the type class to be able to give it a default definition which depends on the MCD. I'll try to see if that can be achieved without re-designing the whole type class system.
[...]
It should work; Hana was designed exactly to deal with that. Here's what happens (that's going to be in the tutorial also):
1. decltype_(x) == decltype_(y) returns a bool_<true or false>
2. bool_<b> has a constexpr conversion to bool defined as:

    template <bool b>
    struct bool_type {
        constexpr operator bool() const { return b; }
    };

Since the conversion is always constexpr, it's used in the `static_assert` and it works. Now, it does not __actually__ work because of what I think is a bug in Clang. For it to work, you have to define a dummy object like that:

    template <typename X, typename Y>
    void foo(X x, Y y) {
        auto dummy_result = decltype_(x) == decltype_(y);
        static_assert(dummy_result, "");
    }

And that will compile. Am I exploiting some hole in Clang or is this correct w.r.t. C++14? I'm unfortunately not really a standards guy, so if someone can help here that'd be helpful.
According to my understanding of the language, if you try to call `foo` with a `std::vector` it will fail, since `std::vector` is not a literal type nor is it constexpr-constructible. If it does work on Clang using the dummy variable trick, then it looks like you are exposing a hole in Clang. Of course, I could be wrong.
Supposing this is not valid C++, then the following is equivalent (but uglier to write):

    template ...
On 7/29/2014 2:17 PM, pfultz2 wrote:
Here's a list of names off the top of my head that could use more C++-like names:
foldl = fold
foldr = reverse_fold
fmap = transform
cons = push_front
scons = push_back
datatype = tag, tag_of
head = front
last = back
typeclass = concept
Then again, if the interface of hana::fmap is nothing like the interface to std::accumulate, it could lead to confusion. The MPL names work in part because MPL is STL-ish design (containers/iterators/algorithms). Just throwing that out there. No strong feelings. Naming Is Hard. \e
Then again, if the interface of hana::fmap is nothing like the interface to std::accumulate, it could lead to confusion. The MPL names work in part because MPL is STL-ish design (containers/iterators/algorithms).
Just throwing that out there. No strong feelings. Naming Is Hard.
Also, I would like to mention that some names don't have exact equivalents in C++. For example, `bind` could be called `flat_transform`; however, that really only makes sense for sequences. Although, I do think it would be better not to call it `bind`, since it already means something else to C++ programmers, so another name could avoid confusion. I'm not sure what that name would be (perhaps `apply` or `compute`?). Also, I think it would be better to call the `Monad` concept `Computation` or something like that, since monad doesn't mean anything at all outside the FP community. Just some additional thoughts and suggestions. Naming, of course, is hard.
pfultz2
Then again, if the interface of hana::fmap is nothing like the interface to std::accumulate, it could lead to confusion. The MPL names work in part because MPL is STL-ish design (containers/iterators/algorithms).
Just throwing that out there. No strong feelings. Naming Is Hard.
Also, I would like to mention that some names don't have exact equivalents in C++. For example, `bind` could be called `flat_transform`; however, that really only makes sense for sequences. Although, I do think it would be better not to call it `bind`, since it already means something else to C++ programmers, so another name could avoid confusion. I'm not sure what that name would be (perhaps `apply` or `compute`?).
I agree that `bind` isn't a good choice. I'll think of something, but `apply` and `compute` are out of the question because Monads are not computations; that's just one metaphor.
Also, I think it would be better to call the `Monad` concept `Computation` or something like that, since monad doesn't mean anything at all outside the FP community.
First, I find Computation to be too reductive. Second, I don't want to rename a concept that's well known in FP and unknown in C++ to something else that's unknown in C++; I'd rather keep the well known FP word. Anyway, the day when all C++ programmers will know Monads is coming, but DQMOT.
Just some additional thoughts and suggestions. Naming, of course, is hard.
Yes it is! Louis
On 07/30/2014 11:53 PM, Louis Dionne wrote:
I agree that `bind` isn't a good choice. I'll think of something, but `apply` and `compute` are out of the question because Monads are not computations; that's just one metaphor.
I do not associate 'apply' with computation. The documentation of hana::bind states that it "Appl[ies] a function returning a monad to the value(s) inside a monad", so 'apply' does not seem like such a bad name.
Bjorn Reese
On 07/30/2014 11:53 PM, Louis Dionne wrote:
I agree that `bind` isn't a good choice. I'll think of something, but `apply` and `compute` are out of the question because Monads are not computations; that's just one metaphor.
I do not associate 'apply' with computation. The documentation of hana::bind states that it "Appl[ies] a function returning a monad to the value(s) inside a monad", so 'apply' does not seem like such a bad name.
`apply` is already used for the same purpose as in the MPL:

    apply(f, args...) == f(args...)

Furthermore, `bind` is used as follows:

    bind(bind(bind(monad, monadic_f), monadic_g), monadic_h)

For it to be named `apply`, I'd at least have to reverse the order of the arguments so we can say "apply a monadic function to a monad". I'll think about another name for `bind` but I'm really not sure `apply` is better suited.

-------------------

Oh and I just thought about that, but `ap` is also used to apply a function inside an applicative to arguments inside applicatives. So I'm even less sure about `apply` for Monads, since Monads are Applicatives. So you'd have

    ap(function_inside_monad, argument_inside_monad)
    apply(function_returning_a_monad, argument_inside_monad)

and I think we're in for some confusion if we go with that.

Louis
On 31/07/2014 10:27 a.m., Louis Dionne wrote:
Bjorn Reese
writes: On 07/30/2014 11:53 PM, Louis Dionne wrote:
I agree that `bind` isn't a good choice. I'll think of something, but `apply` and `compute` are out of the question because Monads are not computations; that's just one metaphor.
I do not associate 'apply' with computation. The documentation of hana::bind states that it "Appl[ies] a function returning a monad to the value(s) inside a monad", so 'apply' does not seem like such a bad name.
`apply` is already used for the same purpose as in the MPL:
apply(f, args...) == f(args...)
`apply` is also the name chosen by the Library Fundamentals TS for calling a function with a tuple of arguments: https://rawgit.com/cplusplus/fundamentals-ts/n4023/fundamentals-ts.html#tupl... Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
Agustín K-ballo Bergé
[...]
`apply` is also the name chosen by the Library Fundamentals TS for calling a function with a tuple of arguments:
https://rawgit.com/cplusplus/fundamentals-ts/n4023/fundamentals-ts.html#tupl...
I use `unpack` for this, and I think it is more consistent both with the previous use of `apply` in the MPL and with the expected meaning of `apply`. That's somewhat subjective, though. Is there any chance the Fundamentals TS can be influenced? Louis
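[For readers unfamiliar with `unpack`, here is a standalone sketch of the semantics being referred to: expanding a tuple's elements into a plain function call. This is not Hana's implementation, and the argument order is illustrative only.]

    #include <cstddef>
    #include <iostream>
    #include <tuple>
    #include <utility>

    template <typename Tuple, typename F, std::size_t ...I>
    auto unpack_impl(Tuple&& t, F&& f, std::index_sequence<I...>) {
        return std::forward<F>(f)(std::get<I>(std::forward<Tuple>(t))...);
    }

    // unpack(make_tuple(xs...), f) == f(xs...)
    template <typename Tuple, typename F>
    auto unpack(Tuple&& t, F&& f) {
        using Indices = std::make_index_sequence<
            std::tuple_size<typename std::decay<Tuple>::type>::value>;
        return unpack_impl(std::forward<Tuple>(t), std::forward<F>(f), Indices{});
    }

    int main() {
        auto sum3 = [](int a, int b, int c) { return a + b + c; };
        std::cout << unpack(std::make_tuple(1, 2, 3), sum3) << '\n'; // prints 6
    }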
Hi Louis,
Louis Dionne wrote:
Dear Boost,
It has been a while since I've given news about my GSoC project, Boost.Hana[1]. Things are going very well and I uploaded the library to the Boost Incubator. I think it is time to push the library forward for review.
I've looked through much of the documentation, and this looks really good. I was impressed with your post on improved performance using lambdas, but I was concerned about it being very cumbersome. I really like the syntax you've developed.
Regarding performance, I note that you have several graphs in the reference section showing (generally) good performance scaling. I think it would be useful to add a section to the manual speaking generally about performance. One common criticism of metaprogramming is the time it takes to compile and link. Addressing it would certainly help someone understand more benefits of the library without having to follow the boost list.
I have a few other suggestions about the documentation that I can send along, if you're looking for that at this point.
As the author, I know everything that's left to be done and polished before I can be fully satisfied. Still, I believe the library is worthwhile in its current state as it implements a superset of the functionality found in the Boost.MPL and Boost.Fusion. I also think the library will benefit greatly from a larger user base and more feedback.
It sounds like you're already planning to do this, but mentioning type_list<>, even if it's essentially just list(type<>...), would help show that this is a valid replacement for MPL. Do you plan to implement things like MPL's vector_c? I ask because a C++14 units library could be much nicer than the (already nice) Boost.Units library. Using Hana could also be nice for compile times.
Here are some caveats: - The library requires a full C++14 compiler, so only Clang 3.5 can compile the unit tests right now. However, compilers will eventually catch up. Also, pushing cutting edge libraries forward might motivate compilers to support C++14 ASAP, which is probably a good thing.
I would have no problem with a C++14-only requirement. It would definitely slow adoption, but I'd rather the code stay more pure and performant. As others have noted, we have fallbacks. My only concern would be the performance on other compilers once they implement the necessary features. Is there some assurance that the lambda trick would work (i.e. be fast) on g++? I look forward to playing with this in about a month. I'll definitely post a review if and when a review occurs. Thanks, Nate
Nathan Crookston
Hi Louis,
[...]
Regarding performance, I note that you have several graphs in the reference section showing (generally) good performance scaling. I think it would be useful to add a section to the manual speaking generally about performance. One common criticism of metaprogramming is the time it takes to compile and link. Addressing it would certainly help someone understand more benefits of the library without having to follow the boost list.
I agree that a section on performance could be useful. At least explaining roughly how performance is measured at compile-time would be nice to have. At the same time, I'm inclined to think that people who don't know about this stuff are not C++ power users and should not have to worry about it anyway, while those who do know don't need it explained to them. Also, one goal of Hana was to make metaprogramming efficient enough that you would not actually have to care about this stuff anymore. I'll add a section on performance, or at least on how you can maximize performance with Hana, but I don't know what level of detail I'll go into, because we could probably write a small book on compile-time performance.
I have a few other suggestions about the documentation that I can send along, if you're looking for that at this point.
Definitely; please shoot anything you've got.
[...]
It sounds like you're already planning to do this, but mentioning type_list<>, even if it's essentially just list(type<>...) would help show that this is a valid replacement for MPL.
I did not mention it anywhere because I'd like to see this one go. It's there for performance reasons only, but it's possible to implement those optimizations in `list` without introducing a new container. I've started writing a cheat sheet to translate between the MPL and Hana; this should pretty much give you a mechanical transformation for all algorithms written with the MPL.
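[As a taste of what such a translation looks like, here is one entry in the spirit of that cheat sheet, written with present-day Hana spellings (`transform`, `tuple_t`, `metafunction`), which postdate this discussion and may differ from the 2014 interface:]

    #include <boost/hana.hpp>
    #include <type_traits>
    namespace hana = boost::hana;

    int main() {
        // MPL:  mpl::transform<mpl::vector<int, char>,
        //                      std::add_pointer<mpl::_1>>::type
        // Hana:
        static_assert(hana::transform(hana::tuple_t<int, char>,
                                      hana::metafunction<std::add_pointer>)
                          == hana::tuple_t<int*, char*>, "");
    }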
Do you plan to implement things like MPL's vector_c? I ask because a C++14 units library could be much nicer than the (already nice) boost units library. Using Hana could be nice for compile times.
See integer_list. However, like type_list, I'd like to see this one go. Note that optimizations for homogeneous sequences are not implemented yet (so integer_list is not implemented with an internal always-constexpr array).
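[To illustrate the kind of optimization meant here, the following is a hypothetical sketch, not Hana's integer_list, of a homogeneous sequence backed by a constexpr array, where an algorithm like summing becomes a plain C++14 constexpr loop instead of a recursive template instantiation:]

    #include <cstddef>

    template <typename T, std::size_t N>
    struct integer_list {
        T data[N];

        constexpr T sum() const {
            T total{};
            for (std::size_t i = 0; i < N; ++i)
                total += data[i];  // C++14 relaxed constexpr allows the loop
            return total;
        }
    };

    template <typename T, typename ...Ts>
    constexpr integer_list<T, sizeof...(Ts) + 1> make_integer_list(T x, Ts ...xs) {
        return {{x, xs...}};
    }

    int main() {
        constexpr auto xs = make_integer_list(1, 2, 3, 4);
        static_assert(xs.sum() == 10, "computed entirely at compile time");
    }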
[...]
My only concern would be the performance on other compilers once they implement the necessary features. Is there some assurance that the lambda trick would work (i.e. be fast) on g++?
For this, we'll have to run the benchmarks on GCC. The benchmark suite is hooked up in the CMake-generated build system and the benchmarks will run on GCC with 2-3 modifications to a Ruby gem I use internally.
I look forward to playing with this in about a month. I'll definitely post a review if and when a review occurs.
Thanks! Louis
On 07/28/2014 12:20 PM, Louis Dionne wrote:
Dear Boost,
It has been a while since I've given news about my GSoC project, Boost.Hana[1]. [snip] [1]: http://github.com/ldionne/hana
Hi Louis, I looked at http://ldionne.github.io/hana/index.html but saw no mention of the `let-expressions` that were mentioned here: http://article.gmane.org/gmane.comp.lib.boost.devel/245231 Did I miss where they are documented, or was there some problem implementing them and, consequently, they were left out? -regards, Larry
Larry Evans
[...]
I looked at:
http://ldionne.github.io/hana/index.html
but saw no mention of the `let-expressions` that was mentioned here:
http://article.gmane.org/gmane.comp.lib.boost.devel/245231
Did I miss where they are documented or was there some problem in implementing them and, consequently, they were left out?
That was for the MPL11. With Hana, we use generic C++14 lambdas instead, so we don't need that emulation anymore. Regards, Louis
On 07/29/2014 04:54 PM, Louis Dionne wrote:
Larry Evans
[...]
I looked at:
http://ldionne.github.io/hana/index.html
but saw no mention of the `let-expressions` that was mentioned here:
http://article.gmane.org/gmane.comp.lib.boost.devel/245231
Did I miss where they are documented or was there some problem in implementing them and, consequently, they were left out?
That was for the MPL11. With Hana, we use generic C++14 lambdas instead, so we don't need that emulation anymore.
Regards, Louis
So, let me check to see if I understand how generic C++14 lambdas would be used to emulate let-expressions. The let-expressions I'm talking about are described here:

    http://docs.racket-lang.org/reference/let.html

Generic C++14 lambda expressions are described here:

    https://isocpp.org/wiki/faq/cpp14-language#generic-lambdas

The correspondence between the two, IIUC, is that the let-expression:

    (let ([id1 val-expr1]
          [id2 val-expr2]
          [id3 val-expr3]
          ...)
      body-expr)

would be, using generic lambdas:

    [](auto id1, auto id2, auto id3) { return body-expr; }
        (val-expr1, val-expr2, val-expr3...)

Is that about right? TIA. -regards, Larry
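[For concreteness, a minimal compilable instance of this correspondence might look like this; the names and values are of course illustrative:]

    #include <iostream>

    int main() {
        // Racket:  (let ([x 1] [y 2]) (+ x y))
        // C++14:   an immediately-invoked generic lambda
        auto result = [](auto x, auto y) { return x + y; }(1, 2);
        std::cout << result << '\n'; // prints 3
    }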
Larry Evans
[...]
The correspondence between the two, IIUC, is the let-expresion:
( let ( [id1 val-expr1] [id2 val-expr2] [id3 val-expr3] ... ) body-expr )
would be, using generic-lambdas:
[] ( auto id1 , auto id2 , auto id3 ) { return body-expr } ( val-expr1 , val-expr2 , val-expr3 ... )
Is that about right?
That would be it, but you could also use the lambda capture. If I remember correctly, my motivation for let-expressions was to define branches inline when I had conditionals. Here's what I do in Hana when I want to branch on a (possibly compile-time) condition:

    auto result = eval_if(condition,
        [](auto _) { return then_branch; },
        [](auto _) { return else_branch; }
    );

I can use the lambda capture if there's something I need to capture in either branch. Now, to explain the dummy argument: since we use lambdas, we achieve one level of laziness, namely runtime laziness. Only the branch chosen by the condition is executed inside eval_if, so all is good. However, since we also want to support heterogeneous branches, that is, branches whose well-formedness might depend on the value of the condition, we use a dummy argument to delay template instantiation inside the branches. Let's pretend our branches are functions of some arbitrary x:

    auto result = eval_if(condition,
        [](auto _) { return then_branch(_(x)); },
        [](auto _) { return else_branch(_(x)); }
    );

Because the compiler does not know what _ is until a branch is actually called, it can't instantiate a (possibly invalid) branch eagerly. The trick we play on the compiler is that eval_if always calls the chosen branch with an identity function, so _(x) is actually just x; not knowing that until the last moment forces the compiler to wait before instantiating the whole expression. This is long, but I hope it clarifies why, IMO, we don't need let-expressions anymore. Louis
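[A self-contained sketch of the trick described above might look as follows. This is not Hana's actual eval_if, just a minimal reimplementation for a std::true_type/std::false_type condition, with illustrative branch bodies:]

    #include <iostream>
    #include <type_traits>
    #include <utility>

    // The identity function the chosen branch is called with.
    struct identity {
        template <typename T>
        T&& operator()(T&& x) const { return std::forward<T>(x); }
    };

    // Only the selected branch is invoked, so only its body is instantiated.
    template <typename Then, typename Else>
    auto eval_if(std::true_type, Then then_, Else) { return then_(identity{}); }

    template <typename Then, typename Else>
    auto eval_if(std::false_type, Then, Else else_) { return else_(identity{}); }

    template <typename T>
    auto describe(T x) {
        return eval_if(std::integral_constant<bool, (sizeof(T) > 4)>{},
            [=](auto _) { return _(x) * 2; },    // body instantiated only for "big" T
            [=](auto _) { return _(x) + 100; }   // body instantiated only for "small" T
        );
    }

    int main() {
        std::cout << describe(3) << '\n';    // sizeof(int) <= 4 here: prints 103
        std::cout << describe(3.0) << '\n';  // sizeof(double) > 4: prints 6
    }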
On Mon, Jul 28, 2014 at 05:20:01PM +0000, Louis Dionne wrote:
Dear Boost,
It has been a while since I've given news about my GSoC project, Boost.Hana[1]. Things are going very well and I uploaded the library to the Boost Incubator. I think it is time to push the library forward for review.
As the author, I know everything that's left to be done and polished before I can be fully satisfied. Still, I believe the library is worthwhile in its current state as it implements a superset of the functionality found in the Boost.MPL and Boost.Fusion. I also think the library will benefit greatly from a larger user base and more feedback.
Here are some caveats: - The library requires a full C++14 compiler, so only Clang 3.5 can compile the unit tests right now.
Hi Louis, I believe the clang C++ language support page claims that clang-3.4 fully supports C++14: http://clang.llvm.org/cxx_status.html Could you please explain why I will need clang-3.5 to work with your library? thanks. Karen
However, compilers will eventually catch up. Also, pushing cutting edge libraries forward might motivate compilers to support C++14 ASAP, which is probably a good thing.
- The library is not fully stable yet, so interface changes are to be expected. I don't see this as a problem as long as this is documented, especially since I expect Hana to be used mostly for toying around for at least a while. I could be mistaken.
So unless someone thinks the library isn't ready or would get rejected right away in its current state for reason X, I am requesting a formal review for Boost.Hana.
Regards, Louis
[1]: http://github.com/ldionne/hana
-- Karen Shaeffer, Neuralscape Services
Be aware: If you see an obstacle in your path, that obstacle is your path. Zen proverb
Karen Shaeffer
[...]
Hi Louis,
I believe the clang C++ language support page claims that clang-3.4 fully supports C++14.
http://clang.llvm.org/cxx_status.html
Could you please explain why I will need clang-3.5 to work with your library?
Clang 3.4 has several C++14-related bugs that are fixed in 3.5, so it segfaults almost instantaneously when you feed it Hana. Regards, Louis
participants (22)
- Agustín K-ballo Bergé
- Bjorn Reese
- Dominique Devienne
- Edward Diener
- Eric Niebler
- Glen Fernandes
- Gonzalo BG
- Karen Shaeffer
- Larry Evans
- Louis Dionne
- louis_dionne
- Michael Shepanski
- Mostafa
- Nathan Crookston
- Niall Douglas
- Paul A. Bristow
- pfultz2
- Rob Stewart
- Robert Ramey
- Roland Bock
- TONGARI J
- Vicente J. Botet Escriba