Niall Douglas wrote:
On 29 Jul 2014 at 17:02, Louis Dionne wrote:
Also, regarding formal review, I personally would feel uncomfortable accepting a library that only works with a single version of clang. I would feel much happier if you got trunk GCC working, even if that means workarounds.
That would mean a lot of workarounds.
Not necessarily. You could wait until GCC trunk catches up instead, helping it along by filing as many bug reports as necessary. Timing a formal review well is often as important as the quality of your library.
I always get the heebie-jeebies about code which works on only one compiler. For me to vote yes on a library entering Boost, I need to feel it is well tested and reliable and has had all its kinks knocked out. I struggle to see myself feeling that about a library which can't be tested widely.
Instead of heading straight into the community review queue, perhaps a few rounds of intermediate informal reviews like this one?
That's ok with me.
I'd particularly like to see Eric's and Joel's opinions of your library so far, too.
I'd like that too.
[...]
My problem with MPL98 was always that I had no idea what was fast or slow for my use cases.
Lol. Number one tip if you want to improve your compile-time performance with the MPL: do not _ever_ use mpl::vector; use mpl::list instead.
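To make that concrete, here is a minimal sketch (the grow metafunction is invented on the spot for illustration; exact numbers depend on your compiler and MPL configuration):

    // A rough way to feel the difference: build a sequence of N types via
    // repeated push_front and time the *compilation* (e.g. `time g++ -c`).
    #include <boost/mpl/list.hpp>
    #include <boost/mpl/push_front.hpp>
    #include <boost/mpl/vector.hpp>

    namespace mpl = boost::mpl;

    // Prepends N distinct array types onto an arbitrary MPL sequence.
    template <typename Seq, int N>
    struct grow
        : grow<typename mpl::push_front<Seq, int[N]>::type, N - 1>
    { };

    template <typename Seq>
    struct grow<Seq, 0> { typedef Seq type; };

    // Compile one of these at a time and compare wall-clock compile times.
    typedef grow<mpl::vector<>, 20>::type with_vector;
    typedef grow<mpl::list<>,   20>::type with_list;

The gap widens as you increase N, because list is a plain cons structure with O(1) push_front, while vector works much harder per operation (and may need its preprocessed size limits raised to grow further).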
[...]
I thought about doing this, but I did not because I thought it was a HUGE undertaking to automate it.
No, it's easier than you think. Have a look at https://ci.nedprod.com/, whose default dashboard shows a graph labelled "RUDP performance". This tracks the performance of a build over time to ensure performance doesn't regress. All you need is for your performance test tool to output some CSV; the Jenkins Plot plugin does the rest.
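For instance, a hedged sketch of what such a test tool could look like (the workload, the "benchmark.csv" name, and the "elapsed_us" label are all placeholders; if I remember the Plot plugin's CSV format right, the first row holds the series labels and the second row holds the values for the current build):

    // Sketch of a performance test emitting CSV for the Jenkins Plot plugin.
    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        typedef std::chrono::steady_clock clock_type;

        std::vector<int> data(1000000, 1);          // stand-in workload
        clock_type::time_point start = clock_type::now();
        long long sum = std::accumulate(data.begin(), data.end(), 0LL);
        clock_type::time_point stop = clock_type::now();

        long long us = std::chrono::duration_cast<
            std::chrono::microseconds>(stop - start).count();

        // First row: series labels. Second row: this build's values.
        std::ofstream csv("benchmark.csv");
        csv << "elapsed_us\n" << us << "\n";

        std::cout << "elapsed: " << us << " us (sum = " << sum << ")\n";
    }

Point the Plot plugin at that file in the job configuration and each build contributes one data point to the graph.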
That's pretty cool!
I think we're better off integrating the benchmarks in the documentation and then when something is really weird, we just have to look at the generated benchmarks and see what's wrong. If someone can suggest a way to do it automatically that won't take me weeks to set up, I'm interested.
Mastering Jenkins takes months, but once mastered, configuring all sorts of test scenarios becomes trivial. I'd actively merge your Jenkins/Travis output into your docs too; it's an online world nowadays.
I don't have months, but if someone is willing to help, I'll collaborate. The current build system is set up with CMake; surely it integrates easily with Jenkins?
[...]
As you'll find when the formal review comes, passing isn't about how good your library is; it's about eliminating as many rational objections as others can think of.
I hope it's at least _a bit_ about how good the library is. :)
[...]
Besides, you'll invest five days or so wishing pain on those responsible for the tools, and once it's working you'll never need to touch it again. I did find it took some months to find and fix all the corner cases in the doc output, though, and even now PDF generation from AFIO's docs is a joke due to the long template strings.
If I spend 5 days improving the current documentation, I'll have the best freakin' documentation you could ever wish to have in Boost. I'll favor doing that before BoostBook, and hopefully the quality of the resulting documentation will clear up a lot of objections.
Anyway, it's up to you. BTW, I've noticed that when peer review managers volunteer to manage, they tend to favour libraries with BoostBook docs. I think they see it as one less problem to worry about while managing.
Regards,
Louis