And the other one, probably the one people actually wanted ..
N
------- Forwarded message follows -------
Date sent: Mon, 19 May 2014 10:04:22 -0700 (PDT)
From: Robert Ramey
> 2. Everyone recognises there are serious problems with process, everything from how you were treated, Stephen, when you tried cleaning out cruft and only really Dave supported you,
This effort may have been well intended - but it was ill conceived and had major flaws in approach and execution. I don't want to re-debate this here, so let's just agree not to use this as an example of anything.

> right through to the fact that peer review simply doesn't work any more.
There is wide agreement that there are problems here. The Boost Library Incubator (www.blincubator.com) is a major effort to address this. We'll see if it helps.

> The Boost community has become quite selfish in recent years, as you would expect from those with a vested interest in the status quo before evolution. And for the record, I have no problem with those vested interests keeping their existing Boost,
It's hard to attribute motives to a person - much less to an organization.
> but I personally want to be as far away from C++ 03 as soon as possible.
Not everyone has that luxury.

> 3. All the interesting new C++ 11 libraries you find around the internet have zero interest in trying to join Boost, with a very few honorable exceptions. That speaks volumes, to me at least.
Without seeing a list of these, it's hard to comment.

> Which suggests to me a need for a new Boost which attempts to apply the benefits of hindsight to what we learned with Boost v1.
Hmmm - not to me. Note that the problem of varying C++ versions has existed since the beginning of Boost. Not differing standards, but the fact that no compiler implemented the whole standard and all had bugs. This was addressed by a couple of things:

a) Boost.Config, which abstracted all the platforms to a common interface.

b) An explicit policy that no library is required to be backward compatible or compilable by any buggy or out of date platform. Many authors have made their libraries backward/buggy compatible in the interests of getting wider usage - but that has not been a Boost requirement.

c) Though it has not been explicitly stated (maybe it should be), all libraries should be (and I believe they are) buildable and testable with any compiler which meets the latest standard.

> Now, it is extremely clear from this conference there is little appetite for that, but I think later on this year we're going to have three C++11 libraries in the queue,
> no problem. And here is what I think should be in a fork of Boost:

A fork of Boost would be a spectacularly bad idea - it would mean the end of Boost as we know it.
> 1. Minimum required compiler feature set will be VS2014's.
Hmm - I'm not sure what this means. I believe that all libraries currently build and test under the latest C++ compilers.
> No use of Boost STL permitted where the C++ 11 STL provides a feature.
This is for the library review process to determine.

> 2. cmake instead of Boost.Build.

There is mixed history regarding Boost tools. In some cases things have worked well, in other cases less so. A big problem is that Boost has imposed which tools must be used. It seems you're comfortable with this - but just think the tool choices need to be updated. I think this is the wrong approach and has contributed to our recent difficulties in evolving to the current software development landscape. I believe we will eventually want to pull back from tool selection specifics and move toward specifying tool requirements. More or less like the Boost documentation toolset situation: there is no specific requirement as far as the toolset goes, but it's widely agreed that documentation is required and that it meet some sort of standard (though that standard hasn't really been explicitly defined - we all think we know what it is).
> 3. Eliminate peer review in favour of a suite of automated libclang based AST analysers. Instead of persuading people to review libraries, persuade them to review and improve the AST analysers.
I'm trying to restrain myself, but this idea is so naive as to be ridiculous.
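To make the proposal in point 3 concrete: a minimal sketch of one such analyser, assuming libclang's Python bindings (clang.cindex) are available, might look like the following. The particular rule it encodes - flagging using-directives written directly in a header - is only a hypothetical example of the kind of check reviewers could automate.

    # header_hygiene.py - hypothetical sketch of a libclang-based "automated reviewer"
    # that flags 'using namespace' directives written directly in a header file.
    import sys
    import clang.cindex as ci

    def check_header(path, extra_args=("-std=c++11",)):
        index = ci.Index.create()
        tu = index.parse(path, args=list(extra_args))
        findings = []
        for cursor in tu.cursor.walk_preorder():
            # Skip anything pulled in from #included files.
            if cursor.location.file is None or cursor.location.file.name != path:
                continue
            if cursor.kind == ci.CursorKind.USING_DIRECTIVE:
                findings.append((cursor.location.line, "using-directive in a header"))
        return findings

    if __name__ == "__main__":
        for header in sys.argv[1:]:
            for line, message in check_header(header):
                print("%s:%d: %s" % (header, line, message))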
> 4. Mandatory cross platform per-commit CI with unit testing exceeding 95% coverage. We don't care what unit test library is used so long as it can output results Jenkins can understand.
These kinds of mandates are a bad idea. No one can predict the future with the precision necessary to make them.
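For what it's worth, "results Jenkins can understand" in practice usually means JUnit-style XML, which nearly every CI system ingests. A minimal sketch of emitting that from whatever harness a library already uses, written against only the Python standard library (the test names and results below are made up), could be:

    # junit_report.py - sketch: turn (test name, passed, message) tuples into
    # JUnit-style XML so Jenkins (or any other CI system) can display the results.
    import xml.etree.ElementTree as ET

    def write_junit(results, suite_name, out_path):
        suite = ET.Element("testsuite", name=suite_name,
                           tests=str(len(results)),
                           failures=str(sum(1 for _, ok, _ in results if not ok)))
        for name, ok, message in results:
            case = ET.SubElement(suite, "testcase", classname=suite_name, name=name)
            if not ok:
                failure = ET.SubElement(case, "failure", message=message)
                failure.text = message
        ET.ElementTree(suite).write(out_path, encoding="utf-8", xml_declaration=True)

    if __name__ == "__main__":
        # Made-up results purely for illustration.
        fake = [("construct_empty", True, ""),
                ("copy_throws_midway", False, "strong guarantee violated")]
        write_junit(fake, "proposed.library", "results.xml")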
> 5. Mandatory all green thread, memory, UB sanitisers and clean valgrind. All also tested per-commit.
lol - see above.
> 6. Mandatory CI testing for exception safety. I am hoping a clang rewriter can basically patch all exception throws and have them randomly throw for testing.
again - not realistic given our variety of environments.
> 7. Per-library source distributions instead of a monolithic blob. This implies some dependency management,
I agree with this, though the scale of the effort seems way underestimated - as usual.
> but cmake makes that much easier.
CMake is not nearly up to this job. In fact, I haven't seen any tool that can actually work for dependency management in our context. Actually, other than John Maddock's post on the problem, I haven't seen any posts which demonstrate an understanding of the problems with this.
> It also means we can eliminate the release cycle because each library does its own release cycle,
OK - we've got that now with modular Boost, whenever anyone merges their develop branch to master. We do need to agree on a per-library versioning scheme though.

> and the correct (i.e. tested) version of dependencies are included into each per-library source distro.
This is never going to be all that reliable.

> This solves the version lock problem currently plaguing git-ised Boost, at the cost of pushing the version lock problem onto users [1].
I don't know what this means.
> BTW I want to see a soak test of the unit tests for 24 hours be all green before a release.
For now I would say that a Boost release would be:

a) All master branches are tagged with Boost.1.xx.

b) All tests are run on this tag - if everything passes, Boost 1.xx becomes the release and a git export is used to create the tar, zip, etc.

The purpose of this is to catch errors where some library has changed its interface and broken dependent libraries. I think (though I'm not sure) that this is in line with your idea above.
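To sketch the mechanics of a) and b) above - the repository list, tag name and test command here are placeholders for illustration, not Boost's actual tooling:

    # release_sketch.py - illustrative sketch of the tag / test / export sequence.
    import subprocess

    LIBRARIES = ["libs/config", "libs/serialization"]   # hypothetical library checkouts
    TAG = "Boost.1.xx"                                   # release tag from step a)
    TEST_COMMAND = ["./run_all_tests.sh"]                # placeholder for the full test run

    def release():
        # a) tag every library's master branch with the release name
        for repo in LIBRARIES:
            subprocess.run(["git", "tag", TAG, "master"], cwd=repo, check=True)

        # b) run all tests against the tagged state; only a green run becomes the release
        if subprocess.run(TEST_COMMAND).returncode != 0:
            raise SystemExit("tests failed - %s is not a release" % TAG)

        # export each tagged library as a plain source archive (no .git metadata)
        for repo in LIBRARIES:
            archive = "%s-%s.tar.gz" % (repo.split("/")[-1], TAG)
            subprocess.run(["git", "archive", "--format=tar.gz", "-o", archive, TAG],
                           cwd=repo, check=True)

    if __name__ == "__main__":
        release()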
> 8. Reusable utilities in a submitted library need merging into some common utilities library which follows the STL conventions. Other than that, no source code, naming conventions, namespace or anything else needs converting or changing. We are looking for very high quality C++ libraries, nothing more. Obviously if someone hopes for a library to enter the C++ 17 STL they'll need much more rigour, but that's up to them.
I don't understand what you're getting at here. But .. I think we will evolve a classification scheme which distinguishes levels of dependency: a) std, b) boost::core (e.g. config, mpl), c) boost::application (e.g. odeint, serialization?). Basically, to be able to keep out circular dependencies and dependency explosion.

> 9. There is no longer an "in" or an "out" for distribution. I'm thinking of a scorecard page where member libraries are ranked by how high quality they are according to all the automated review, so when I say "mandatory" above, I simply mean they don't get to appear on the main downloads page without that precondition. All submitted libraries do appear though, just ranked very low if their quality is low. I would hope all this is generated from a database and requires very little human input.
This has merit but the explanation isn't concrete enough. Here's my take on this. Boost has a number of aspects: a) lists, communication and community, b) library certification via reviews and acceptance, c) creation of a monolithic "product", d) testing, e) deployment. In the future I would expect to see something different: a) community - same as above, b) library certification via reviews and acceptance - same as above, c) deployment of a closed subset. Subsets would be selected either by users when they download (e.g. cygwin) or by third parties (e.g. some embedded systems vendor includes a subset with his IDE). Hopefully Boost could then better focus on what only Boost can do - certify some specific level of quality.

> 10. BoostBook documentation using the Boost look and feel becomes mandatory.
> I've had enough with library authors thinking they can improve on BoostBook's output with things like using Comic Sans as the font or weird colour schemes throughout.
LOL - I hear you. But it's not a great idea to make a big deal about this stuff. We can't really do a good job, and we would spend huge amounts of time trying to agree on stuff which is less important. To grow, Boost has to narrow its focus - not expand it.
> So, basically, in some areas requirements are significantly tightened. In other areas requirements are significantly loosened. The aims of the above ten items are as follows:
> 1. To reduce the amount of human work involved in maintaining Boost.
I think your specific suggestions would expand the amount of work required to maintain a working Boost by a very large amount. Actually, just agreeing on what to do would be huge. And it would result in bad decisions being taken, since only a few could have the time to participate.

> What Dave and Daniel had to do for the git conversion last year was far above what anyone should ever feel they need to do.
lol - right - should we have not made the conversion?
> 2. To decentralise Boost, letting it scale up far better.
Correct.

> 3. To provide a gradual process of entering Boost which authors can chip away at slowly, with automated scripts slowly improving their rankings, instead of the current "lumps" of high intensity output and the review approved/failed scheme currently used.
Not clear what you mean - but if you look at the Boost Library Incubator, it has exactly that idea. Reviews can accumulate until it becomes clear that a library should be accepted or rejected. It doesn't replace the final review. Maybe someday it might, but I doubt it.

> 4. To incentivise authors to maintain their libraries as the quality bar is improved - the automated scripts will start to drop library rankings as the quality is raised, thus dropping that library in the overall rankings. Rather more importantly, it provides a natural "deprecation" mechanism, something sorely lacking in my opinion from present Boost.
This is also on the right track. I think the idea of a "deployment subset" as stated above will address this. Eventually I expect that some libraries (e.g. those which have std equivalents) would be dropped from standard deployments. They would still be approved Boost libraries though. I would deprecate the word "deprecate" as it has a pejorative flavor, and rather characterize these libraries as "not part of the standard (or minimal, or core, or ...) deployment". That is, the set of deployed libraries would not be the same as the set of "boost certified" libraries, as it is now. In fact, I'd like to see Boost get out of the deployment business altogether eventually.

Robert Ramey
> Thoughts?
> [1]: If this model proves popular and we can get a regular donation stream going, we could deploy an automated SHA stamper which, through unit test iteration, figures out which SHAs in which combinations of libraries are compatible. Then you could safely mix library A's distro with library B's distro. I'd imagine Debian upstream et al might be coaxed into providing for this as they need one unified set of compatible libraries.
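The "SHA stamper" footnote amounts to a brute-force search over per-library revisions. A toy sketch of that search - the candidate SHAs, checkout() and run_tests() hooks below are hypothetical stand-ins for pinning a git checkout and driving the CI run - might be:

    # sha_stamper.py - toy sketch of footnote [1]: search combinations of per-library
    # SHAs for sets whose combined unit test run is all green.
    import itertools

    def compatible_combinations(candidates, checkout, run_tests):
        """Yield {library: sha} dicts whose combined test run passes.

        candidates maps library name -> list of candidate SHAs; checkout() and
        run_tests() are supplied by the caller (in real use they would pin a git
        checkout and run the CI suite)."""
        names = sorted(candidates)
        for shas in itertools.product(*(candidates[n] for n in names)):
            combo = dict(zip(names, shas))
            for name, sha in combo.items():
                checkout(name, sha)
            if run_tests():
                yield combo

    if __name__ == "__main__":
        # Made-up candidate SHAs and a fake "everything passes" harness, for illustration.
        candidates = {"libraryA": ["a1f3c90", "b72e511"],
                      "libraryB": ["9c04d2e", "0ddfa37"]}
        for combo in compatible_combinations(candidates,
                                             checkout=lambda lib, sha: None,
                                             run_tests=lambda: True):
            print("known-good set:", combo)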
> Niall
> --
> ned Productions Limited Consulting
> http://www.nedproductions.biz/
> http://ie.linkedin.com/in/nialldouglas/
------- End of forwarded message -------
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/