[peer review queue tardiness] Cleaning out the Boost review queue
I've been reviewing the Boost Formal Review queue as part of preparing my C++ Now presentation, and I observe the following things.

Observation 1. The following review submissions have not been updated by their authors in over two years and therefore count as unmaintained. Libraries abandoned before peer review are even more undesirable than abandoned official Boost libraries, so I suggest they be removed from the review queue until their authors bring them up to date:

* Join (last update 2009)
* Block pointer (last update: well, it's still in the SVN sandbox)
* Singularity (last update 2011)
* Extended Complex Numbers (last update 2012)
* Array (last update 2012)
* Countertree (last update 2012; I see a github import in 2013 but no new commits)
* Process (last update 2012). This is a particularly useful library I have used myself in production code, and I see a number of mirrors of it on github, each with varying patches and bug fixes. As a minimum, somebody needs to create a canonical github project for Process and merge the fixes from the many github copies. Otherwise, I'm sorry, but this library needs to be considered abandoned and removed from the review queue.

It may be the case that the links on the review queue wiki page are simply out of date. If so, they need to be updated. Otherwise a purge of the queue is very desirable, so that the queue accurately reflects ready-to-use new Boost libraries.

Observation 2. Edit Distance (Algorithm) by Erik Erlandson shouldn't be in the queue. What he's done is add a new algorithm to Boost.Algorithm, so the maintainer for Algorithm (Marshall?) should either merge his contribution or reject it as inappropriate or badly implemented. Failing that, Edit Distance should be spun into a standalone library instead of forking Algorithm. Right now it is supplied quite literally as a fork of the Algorithm github repository.
It should be a pull request, yet I don't see it in the pull requests for boostorg/algorithm, either open or closed. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 03/30/2015 05:12 PM, Niall Douglas wrote:
Observation 2: Edit Distance (Algorithm) by Erik Erlandson shouldn't be in the queue. What he's done is to add a new algorithm to Boost.Algorithm, so the maintainer for Algorithm (Marshall?) should either merge his contribution or reject it as inappropriate or badly implemented.
It ended up in the review queue because the maintainer was unresponsive to inquiries on both this mailing-list and in private.
FWIW Block pointer and Process can be found in the incubator. I'm guessing that the authors haven't totally given up hope. Array? Isn't that a Boost library (and now a standard library component) already? Robert Ramey -- View this message in context: http://boost.2283326.n4.nabble.com/peer-review-queue-tardiness-Cleaning-out-... Sent from the Boost - Dev mailing list archive at Nabble.com.
On 30 Mar 2015 at 15:41, Robert Ramey wrote:
FWIW Block pointer and Process can be found in the incubator. I'm guessing that the authors haven't totally given up hope. Array? isn't that a boost library (and now a standard library component) already?
I believe that libraries in the review queue ought to be "Boost ready", and if they are not then they should not be in the queue. That means:

1. Configured as a Boost module according to modular Boost. Any library not updated since the 1.56 release, which was the first modular release, surely fails this.

2. Known to be working perfectly and passing all unit tests on recent compilers configured into C++ 11 and C++ 14 modes.

3. Known to be working perfectly and passing all unit tests with the latest Boost release.

As you know Robert, I would personally make it mandatory for Travis CI to be testing the above three requirements per commit against latest Boost if a library wishes to be in the review queue (actually I'd ask for a whole lot more, but it isn't as free of cost as Travis). I don't think this asks much of the author. Antony has done a great job at making a generic Travis script for Boost libraries which just drops in ready to go.

I might add that all the very recently added libraries to the queue I examined _do_ use Travis, though to what depth I do not know. Nevertheless, I find this a very welcome improvement that many of the official Boost libraries could do with.
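The per-commit check described above might look something like the following minimal sketch of a .travis.yml. The library name "mylib", the clone layout, and the b2 invocations are all assumptions for illustration; this is not Antony's actual script.

```yaml
# Hypothetical .travis.yml for a queued library "mylib" -- layout,
# repository names and b2 invocations are assumptions, not a canonical script.
language: cpp
compiler:
  - gcc
  - clang
install:
  # Build the candidate library inside the current modular Boost superproject
  - git clone --depth 1 https://github.com/boostorg/boost.git boost-root
  - cd boost-root
  - git submodule update --init
  - cp -r "$TRAVIS_BUILD_DIR" libs/mylib
  - ./bootstrap.sh
  - ./b2 headers          # regenerate the modular header links
script:
  # Requirements 2 and 3: all unit tests must pass against latest Boost,
  # in both C++11 and C++14 modes
  - ./b2 libs/mylib/test toolset=$CC cxxflags=-std=c++11
  - ./b2 libs/mylib/test toolset=$CC cxxflags=-std=c++14
```

A library whose tests live under libs/mylib/test in the standard modular layout would get the three requirements checked on every push, for both compilers in the matrix.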
Niall Douglas wrote
On 30 Mar 2015 at 15:41, Robert Ramey wrote:
FWIW Block pointer and Process can be found in the incubator. I'm guessing that the authors haven't totally given up hope. Array? isn't that a boost library (and now a standard library component) already?
I believe that libraries in the review queue ought to be "Boost ready", and if they are not then they should not be in the queue. That means:
1. Configured as a Boost module according to modular Boost. Any library not updated since the 1.56 release which was the first modular release surely fails this.
2. Known to be working perfectly and passing all unit tests on recent compilers configured into C++ 11 and C++ 14 modes.
3. Known to be working perfectly and passing all unit tests with the latest Boost release.
As you know Robert, I would personally make it mandatory for Travis CI to be testing the above three requirements per commit against latest Boost if a library wishes to be in the review queue (actually I'd ask for a whole lot more, but it isn't as free of cost as Travis). I don't think this asks much of the author. Antony has done a great job at making a generic Travis script for Boost libraries which just drops in ready to go.
Insisting on the above would be a significant policy change for Boost. "Cleaning out the Review Queue" based on this policy would be quite a change. I would like to hear what the Review Wizard and other members of the boost community think about this. I don't think anyone can impose such a change unilaterally.
I might add that all the very recently added libraries to the queue I examined _do_ use Travis, though to what depth I do not know. Nevertheless, I find this a very welcome improvement that many of the official Boost libraries could do with.
I'm certainly in favor of improving our testing coverage. But I'm not convinced that Travis is the way to do it. In any case, this would be a separate topic (I think). Robert Ramey
On 30 Mar 2015 at 18:33, Robert Ramey wrote:
Insisting on the above would be a significant policy change for Boost.
I think the policy that libraries in the review queue are maintained has always been policy, as we don't admit unmaintained libraries to Boost. In fact, to my knowledge, we have never even admitted libraries maintained by non-individuals. The introduction of the CMT was, to my knowledge, the first time anything other than an individual person ever maintained a Boost library. I therefore think that the review queue should be purged of unmaintained libraries appropriately, once the authors have been notified and given a chance to update their submissions of course.
"Cleaning out the Review Queue" based on this policy would be quite a change. I would like to hear what the Review Wizard and other members of the boost community think about this. I don't think anyone can impose such a change unilaterally.
As I mentioned, the newer libraries in the queue mostly have Travis enabled (from my quick scan), so it appears it's happening anyway for new libraries. Some of the libraries in the queue have been there for many years now, but their maintainers have done a great job keeping them up to date. QVM and Nowide are amongst the oldest, yet are well maintained: for example, Nowide just gained a STL11-only implementation, so no more Boost dependency. I think after a purge of the unmaintained libraries, requiring Travis CI testing might only affect three to four libraries in there, two of which I just mentioned. The bigger ask might actually be putting their submissions onto github, as Travis doesn't work well with non-github hosting for open source projects.
I might add that all the very recently added libraries to the queue I examined _do_ use Travis, though to what depth I do not know. Nevertheless, I find this a very welcome improvement that many of the official Boost libraries could do with.
I'm certainly in favor of improving our testing coverage. But I'm not convinced that travis is the way to do it. In any case, this would be a separate topic (I think).
Travis only has four big pros:

1. It's free of cost.

2. It is very flexible, if awkward. Want to test some weird version of Boost with some weird version of libstdc++ and do a curl push of results to some RESTful API on your own custom server farm? No problem.

3. It's the only access to OS X CI testing for almost everyone, as OS X costs a bomb for licensing. BTW, I developed a system for remotely debugging a segfaulting OS X unit test on Travis; anyone who needs it, ping me.

4. It's very well integrated with other open source CI web tooling such as Coveralls. CI status integration with github, especially pull requests, is excellent.

Everything else about Travis is negative compared to alternatives, including a lousy UI which gets fussy very quickly, and no (easy) way of debugging except endless git commit and push cycles. That said, for testing that it compiles and passes unit tests for some recent clang + GCC + Boost, it really is very good indeed. If you're already on github and your Boost library is modular, it's a few hours of your time at most.
On Mon, Mar 30, 2015 at 7:17 PM, Niall Douglas
QVM and Nowide are amongst the oldest, yet are well maintained
Speaking of which -- anyone familiar with the subject matter care to volunteer as a review manager for QVM? :) Emil
On 03/30/2015 09:33 PM, Robert Ramey wrote:
Niall Douglas wrote
On 30 Mar 2015 at 15:41, Robert Ramey wrote:
FWIW Block pointer and Process can be found in the incubator. I'm guessing that the authors haven't totally given up hope. Array? isn't that a boost library (and now a standard library component) already?
I believe that libraries in the review queue ought to be "Boost ready", and if they are not then they should not be in the queue. That means:
1. Configured as a Boost module according to modular Boost. Any library not updated since the 1.56 release which was the first modular release surely fails this.
2. Known to be working perfectly and passing all unit tests on recent compilers configured into C++ 11 and C++ 14 modes.
3. Known to be working perfectly and passing all unit tests with the latest Boost release.
As you know Robert, I would personally make it mandatory for Travis CI to be testing the above three requirements per commit against latest Boost if a library wishes to be in the review queue (actually I'd ask for a whole lot more, but it isn't as free of cost as Travis). I don't think this asks much of the author. Antony has done a great job at making a generic Travis script for Boost libraries which just drops in ready to go.
Insisting on the above would be a significant policy change for Boost. "Cleaning out the Review Queue" based on this policy would be quite a change. I would like to hear what the Review Wizard and other members of the boost community think about this. I don't think anyone can impose such a change unilaterally. ... Robert Ramey
I think I can speak for both Ron and myself when I say that our opinion has been that we run the review system the Boost community wants to have much more than we decide what the review system should be.

Historically, we have long had libraries that were in the queue, but not scheduled for a review, that were not yet Boost ready. In a sense they were placeholders that made it clear someone was trying to create a library for task X. The queue was where they could be visible to a large fraction of the community and start discussions about implementation decisions and other useful questions. The more recent creation of the incubator probably makes this practice outdated, and I expect newer libraries under development will receive more valuable attention in the incubator than in the queue. As a community member, I would support moving such libraries out of the queue and into the incubator, but preferably with developer agreement.

Testing policy is a more difficult question in my mind. It has not been the history of Boost to require any specific testing infrastructure (or most other sorts of infrastructure) for libraries. There have been times when this non-requirement has increased complexity, but there have also been times when experimentation has found better solutions. As a community member, I'm wary of forcing standardization, and would need some pretty persuasive arguments to support it.

One thing I strongly support is community discussions on how to improve our review process. Thanks to Niall for starting one, and to all the other participants, as well.

John Phillips
Review Wizard
John Phillips wrote
The more recent creation of the incubator probably makes this practice outdated, and I expect newer libraries under development will receive more valuable attention in the incubator than in the queue. As a community member, I would support moving such libraries out of the queue and into the incubator, but preferably with developer agreement.
FWIW I searched the queue, made efforts to track down the authors and sent them specific invitations to submit information on their libraries to the incubator. The majority complied; those libraries are in the incubator right now.
Testing policy is a more difficult question in my mind. It has not been the history of Boost to require any specific testing infrastructure (or most other sorts of infrastructure) for libraries. There have been times when this non-requirement has increased complexity, but there have also been times when experimentation has found better solutions. As a community member, I'm wary of forcing standardization, and would need some pretty persuasive arguments to support it.
Requirements for the incubator were designed to reflect those of Boost. Personally, I agree with the above that the current requirements as far as testing is concerned are appropriate. Robert Ramey
On 1 Apr 2015 at 23:31, John Phillips wrote:
I think I can speak for both Ron and myself when I say that our opinion has been that we run the review system the Boost community wants to have much more than we decide what the review system should be.
Historically, we have long had libraries that were in the queue, but not scheduled for a review, that were not yet Boost ready. In a sense they were placeholders that made it clear someone was trying to create a library for task X. The queue was where they could be visible to a large fraction of the community and start discussions about implementation decisions and other useful questions.
I had thought that the historical placeholder page was https://svn.boost.org/trac/boost/wiki/ReviewScheduleLibraries? Whereas the page at http://www.boost.org/community/review_schedule.html was for supposedly "Boost ready" libraries?
The more recent creation of the incubator probably makes this practice outdated, and I expect newer libraries under development will receive more valuable attention in the incubator than in the queue. As a community member, I would support moving such libraries out of the queue and into the incubator, but preferably with developer agreement.
I would far prefer a scoreboard based system which shows a ranked list of libraries by quality score. Auto generated from a database, of course. I also think that Boost 2.0 should be about being a single stop portal for "Boost quality" libraries rather than Boost libraries. I was recently working with eggs.variant for example, and that is Boost quality written to Boost guidelines and yet I understand there is zero interest in it entering Boost, despite it being superior to Boost.Variant in almost every way. Same goes for HPX and plenty more. Anyway, I'll elaborate during the Boost 2.0 talk.
Testing policy is a more difficult question in my mind. It has not been the history of Boost to require any specific testing infrastructure (or most other sorts of infrastructure) for libraries. There have been times when this non-requirement has increased complexity, but there have also been times when experimentation has found better solutions. As a community member, I'm wary of forcing standardization, and would need some pretty persuasive arguments to support it.
Requiring Travis support does not cause the exclusion of any other form of testing, it just sets an absolute minimum bar to pass - a minimum bar that I might add is increasingly becoming the bar for ALL open source projects, so by not requiring it Boost looks behind the times to the wider open source community. It does require people to use github which is a bit locked in alright, but the huge advantage of git is that you can have github auto mirror a git repo elsewhere to avail of the github-specific free tooling.
One thing I strongly support is community discussions on how to improve our review process. Thanks to Niall for starting one, and to all the other participants, as well.
By the end of this week I'll post the list of forthcoming C++ 11/14 only Boost libraries I'll be reviewing in my C++ Now talk. I am currently building a "traffic light" matrix of "Boost readiness" of the C++ 11/14 only libraries that I know of, ordered by closeness to entering Boost. I think it'll be surprisingly interesting to the list; I was a bit surprised myself.

What would be really great is if the formal review schedule at http://www.boost.org/community/review_schedule.html could be enhanced to also show the same traffic light matrix of "Boost readiness" of its entrants. Actually, I'd personally like such a traffic light matrix for *all* Boost libraries, because it would illuminate just how badly maintained some of them are (and hopefully encourage their timely removal).
Niall Douglas wrote
The more recent creation of the incubator probably makes this practice outdated, and I expect newer libraries under development will receive more valuable attention in the incubator than in the queue. As a community member, I would support moving such libraries out of the queue and into the incubator, but preferably with developer agreement.
I would far prefer a scoreboard based system which shows a ranked list of libraries by quality score. Auto generated from a database, of course.
FWIW the boost library incubator already has this implemented. Reviewers fill out the traditional boost review "form" (a standard email) with their comments on different aspects. The only new wrinkle is that they attach 1-5 stars to each aspect. When a library has some number of reviews (5?) the review summary shows the average (or median) stars rating (e.g. 3 1/2). It's all in there. But since there are only a whopping total of 2 reviews, it hasn't been really visible. (It's also possible that there might be some php bug lurking in there.)

But I don't believe that all reviews should be weighted equally, so I don't see this system as supplanting the boost review process (which I see as the soul of boost) but rather reinforcing it by:

a) increasing the number of reviews by de-coupling the review from some narrow specific time period.

b) tying the reviews to the library "forever", as the reviews contain very useful information regarding rationale for design choices and listings of issues to be considered. Currently all the information gets lost and has to be re-discovered when someone else needs to understand and/or maintain the library.

c) expediting the review process. The "pre-reviews" will often smoke out deal breakers in a library which isn't really ready yet. The serialization library flunked its first review. I concluded that if I had had more feedback earlier, I might have saved myself and everyone else a lot of pain. (In spite of the fact that I uploaded 27 different versions of the library in the course of this saga.) Basically, if a library has 5 very positive rave reviews or 5 awful ones, the review itself is going to be a no-brainer and we can just move on to the more difficult cases.

d) getting feedback to library authors earlier so that they can fix things earlier. This will save everyone lots of time and enhance the quality of libraries being reviewed and hence the chances of success.
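The star summary described above amounts to something like the following sketch; the five-review threshold and the rounding are assumptions taken from the text, and the aspect names are invented.

```python
# Sketch of the incubator's star summary: each review attaches 1-5 stars
# per aspect; once enough reviews accumulate, the summary shows a
# per-aspect average (a median would work the same way).
from statistics import mean

MIN_REVIEWS = 5  # threshold before a summary is shown, as suggested above

def star_summary(reviews):
    """reviews: list of {aspect: stars} dicts -> {aspect: average} or None."""
    if len(reviews) < MIN_REVIEWS:
        return None  # too few reviews to summarize yet
    aspects = reviews[0].keys()
    return {a: round(mean(r[a] for r in reviews), 1) for a in aspects}

reviews = [{"design": d, "docs": doc}
           for d, doc in [(4, 3), (5, 4), (3, 3), (4, 5), (4, 4)]]
print(star_summary(reviews))  # {'design': 4.0, 'docs': 3.8}
```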
Naturally I'm disappointed that so far the site has only garnered a whopping two reviews. But as should be apparent, I'm not giving up on this until I'm successful. So in order to save myself and everyone else even more aggravation, as well as to reduce wasted space on this list, I urge everyone who has knowledge of and/or interest in some library on the incubator to write a review and encourage other parties to do the same.
I also think that Boost 2.0 should be about being a single stop portal for "Boost quality" libraries rather than Boost libraries. I was recently working with eggs.variant for example, and that is Boost quality written to Boost guidelines and yet I understand there is zero interest in it entering Boost, despite it being superior to Boost.Variant in almost every way. Same goes for HPX and plenty more.
LOL - it should be pretty apparent that this is the goal of the incubator. Please don't let the cat out of the bag. There will be a HUGE announcement at C++Now.
Anyway, I'll elaborate during the Boost 2.0 talk.
Hey - I thought I was giving this talk !
What would be really great is if the formal review schedule at http://www.boost.org/community/review_schedule.html could be enhanced to also show the same traffic light matrix of "Boost readiness" of its entrants.
Tweaks of this nature are relatively easy to add to the incubator should some consensus occur.
Actually, I'd personally like such a traffic light matrix for *all* Boost libraries, because it would illuminate just how badly maintained some of them are (and hopefully encourage their timely removal).
You're preaching to the choir here. There's lots of fertile ground here. One thing I would like to see right now would be for the Review Wizard (maybe after running it by the steering committee or other influential boosters) to impose the requirement that any library to be reviewed be on the incubator. This would be the first official connection between Boost itself and the incubator. I think the time is right for this now. Robert Ramey
On 2 Apr 2015 at 7:57, Robert Ramey wrote:
Naturally I'm disappointed that so far the site has only garnered a whopping two reviews. But as should be apparent, I'm not giving up on this until I'm successful. So in order to save myself and everyone else even more aggravation, as well as to reduce wasted space on this list, I urge everyone who has knowledge of and/or interest in some library on the incubator to write a review and encourage other parties to do the same.
I'm sure you remember I want most of the scores to be generated by automated scripts on a daily basis. That tightens the feedback loop between each commit improving the code and its ranking to optimal. Not that I have any problem with some scores being manual, _so_ _long_ as those expire after let's say 250 commits reaching master branch.
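The expiry policy Niall proposes (manual scores lapsing after roughly 250 commits to master, automated scores regenerated daily) could look like the following sketch; all function and criterion names are illustrative, not from any existing system.

```python
# Sketch of the scoring policy described above: automated scores are
# recomputed on every run, while manually awarded scores expire once a
# set number of commits (here 250, as suggested) have landed on master
# since the score was given.

MANUAL_SCORE_LIFETIME = 250  # commits on master before a manual score lapses

def effective_scores(auto_scores, manual_scores, commits_on_master):
    """Merge fresh automated scores with still-valid manual ones.

    auto_scores:       {criterion: value}, regenerated daily by scripts
    manual_scores:     {criterion: (value, commit_count_when_awarded)}
    commits_on_master: current commit count on the master branch
    """
    merged = dict(auto_scores)
    for criterion, (value, awarded_at) in manual_scores.items():
        if commits_on_master - awarded_at < MANUAL_SCORE_LIFETIME:
            merged[criterion] = value  # manual score still current
        # otherwise the stale manual score is simply dropped
    return merged

scores = effective_scores(
    auto_scores={"unit_tests": 5, "coverage": 3},
    manual_scores={"api_design": (4, 100), "docs": (5, 800)},
    commits_on_master=1000,
)
print(scores)  # {'unit_tests': 5, 'coverage': 3, 'docs': 5}
```

The "api_design" score, awarded 900 commits ago, has lapsed; "docs", awarded 200 commits ago, still counts.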
I also think that Boost 2.0 should be about being a single stop portal for "Boost quality" libraries rather than Boost libraries. I was recently working with eggs.variant for example, and that is Boost quality written to Boost guidelines and yet I understand there is zero interest in it entering Boost, despite it being superior to Boost.Variant in almost every way. Same goes for HPX and plenty more.
LOL - it should be pretty apparent that this is the goal of the incubator. Please don't let the cat out the bag. There will be a HUGE announcement at C++Now.
I am as usual completely out of the loop Robert.
Anyway, I'll elaborate during the Boost 2.0 talk.
Hey - I thought I was giving this talk !
You are. And you'll see what I'll say as soon as you show me what you'll say. I was hoping we'd both be fully aware in advance of what the other will say; that should save time for everybody. My elaboration is only about five slides, and will only contest, in a factual rather than argumentative fashion, the points in your main presentation I disagree with.
What would be really great is if the formal review schedule at http://www.boost.org/community/review_schedule.html could be enhanced to also show the same traffic light matrix of "Boost readiness" of its entrants.
Tweaks of this nature are relatively easy to add to the incubator should some consensus occur.
As a proof of concept, I think yes. It is just web form => database => HTML page, very 1990s web. Though as I'm sure you'll agree, it's far harder than it should be with Wordpress. As a test-bot-driven system, by which I mean a v2 of the existing test regression results submission system, it's nowhere close. A full fat system would calculate hundreds of scores per commit. Those scores need to aggregate into traffic lights on a dashboard in a fair, representative way that has consensus behind it. Every year a stakeholder analysis needs to happen to figure out a new set of equations for the rankings so there is continual improvement. Quite bluntly, I don't think Wordpress is up to it, Robert. I actually don't think Wordpress is up to the current incubator either; it's the wrong CMS for the task at hand.
Actually, I'd personally like such a traffic light matrix for *all* Boost libraries, because it would illuminate just how badly maintained some of them are (and hopefully encourage their timely removal).
You're preaching to the choir here. There's lots of fertile ground here.
We are consistently moving closer to a common position no doubt. The main technical differences are on scalability. I essentially want as little human involvement as possible so things really can scale out. I think you think that loses the whole point of Boost - the human review.
One thing I would like to see right now would be for review wizard (maybe after running it by the steering committee or other influential boosters) to impose the requirement that any library to be reviewed be on the incubator. This would be the first official connection between Boost itself and the incubator. I think the time is right for this now.
I think reviews on the incubator are unworkable. Wordpress is the wrong tool for discussing code. Github's per line and per commit discussion system is considerably better. As I've suggested before, some AJAX which asks github for all the comments and aggregates them onto the incubator makes enormous sense.

Someone has to write that though, and it's not a trivial bit of work. At least 200 hours to write something which (a) displays the comments coherently with code expansion, (b) doesn't overload the github api (i.e. caches locally), and (c) allows two way commenting, so you can comment either on github or on the incubator and comments appear on both. And that 200 hours doesn't include a voting system. A custom view of github source code with a custom commenting and ranking system is possible, but now you're talking 500 hours at least.
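The aggregation step Niall describes could be sketched as follows. The comment shape mirrors what GitHub's pull request review comments endpoint (GET /repos/{owner}/{repo}/pulls/{number}/comments) returns — each comment carries a file "path", a diff "position", a "body", and a "user" — while the fetch-and-cache part (point b above) is deliberately omitted; the sample data is invented.

```python
# Sketch: group per-line review comments by file for display on another
# site. A real version would fetch the JSON from the GitHub API and
# cache it locally to avoid overloading the API; here we work on a
# hypothetical already-fetched sample.
from collections import defaultdict

def group_comments_by_file(comments):
    """Return {path: [(position, author, body), ...]}, sorted by position."""
    grouped = defaultdict(list)
    for c in comments:
        grouped[c["path"]].append((c["position"], c["user"]["login"], c["body"]))
    return {path: sorted(items) for path, items in grouped.items()}

sample = [
    {"path": "include/lib.hpp", "position": 42,
     "user": {"login": "reviewer1"}, "body": "Prefer noexcept here."},
    {"path": "include/lib.hpp", "position": 7,
     "user": {"login": "reviewer2"}, "body": "Missing include guard?"},
    {"path": "test/run.cpp", "position": 3,
     "user": {"login": "reviewer1"}, "body": "Add a C++14 test case."},
]
for path, items in group_comments_by_file(sample).items():
    print(path, len(items))
```

Displaying the grouped comments alongside the code, and pushing replies back, is where the estimated hundreds of hours would actually go.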
Niall Douglas wrote
Hey - I thought I was giving this talk !
You are. And you'll see what I'll say as soon as you show me what you'll say.
I've committed to previewing my presentation to any parties which are interested enough in the topic to delve deeper into the subject (of course this includes you!). The conference also has provision for lightning talks, which anyone can present on short notice. In addition there are other presentations which touch on related themes, as well as a "Future of Boost" session. So I'm confident that we'll all get to have our say.
As a proof of concept I think yes. It is just web form => database => HTML page, very 1990s web. Though as I'm sure you'll agree it's far harder than it should be with Wordpress.
Quite bluntly, I don't think Wordpress is up to it Robert. I actually don't think Wordpress is up to the current incubator either, it's the wrong CMS for the task at hand.
I spent some significant time looking at and experimenting with alternatives. When one makes "toy" applications they all look good. Delving deeper, they all had problems covering the breadth of applicability that the incubator requires. I definitely have my share of complaints about wordpress. But I don't think any other alternative would have been better. The incubator contains about 1000 lines of php code and 28 active wordpress plug-ins. It's a pain to figure the stuff out, but once one does, it works reliably. So I don't regret the choice.
We are consistently moving closer to a common position no doubt. The main technical differences are on scalability. I essentially want as little human involvement as possible so things really can scale out. I think you think that loses the whole point of Boost - the human review.
correct.
One thing I would like to see right now would be for review wizard (maybe after running it by the steering committee or other influential boosters) to impose the requirement that any library to be reviewed be on the incubator. This would be the first official connection between Boost itself and the incubator. I think the time is right for this now.
I think reviews on the incubator are unworkable. Wordpress is the wrong tool for discussing code.
Github's per line and per commit discussion system is considerably better. ... snip .. but now you're talking 500 hours at least.
The incubator's review setup is meant to implement the current practices of Boost Reviews. I believe it does that in an effective way. You're advocating a whole different way of reviewing libraries. That's fine, but it has nothing to do with the incubator. Should Boost change its way of reviewing/certifying libraries, then the question of implementing the new system would be wide open.
On 2 Apr 2015 at 11:56, Robert Ramey wrote:
You're advocating a whole different way of reviewing libraries. That's fine, but it has nothing to do with the incubator. Should Boost change its way of reviewing/certifying libraries, then the question of implementing the new system would be wide open.
I'll tell you my ideal outcome, and it's what I'll pitch after your talk.

When a user thinks "I need a (C++) library to do X", they instantly think of http://choose.boost.org/. On that page is a set of user selectable fields which lets the user choose any ranking criteria they want, with results displayed with any detail columns of their choice. In the live populated results of ranked libraries, a button marked "Download" downloads a ready to go tarball of that library and all its dependencies. Another button marked "Live Trial" opens an online web compiler with that library preinstalled. Anyone can add any library to the database using a simple form. Boost's value add is that all results are scored by the same (hopefully high quality) rules, so if we choose the rules well then the best libraries bubble to the top, and the worst appear at the bottom.

As you correctly point out, that is a completely different Boost to the current one. But then it *is* a "Boost 2.0". And note that one of the ranking criteria can be "has passed a peer review", in which case you've just selected Boost 1.x type libraries only for the results to display.

Before you say this is pretty much what the Incubator does, I'll ask this: does the present implementation of the Incubator scale to 1,000 C++ libraries? All being repeatedly updated on a daily basis by automated Jenkins and Travis CI instances? What about multiple versions of those libraries, as surely C++ 11/14 only versions of well known libraries are coming while the 03 editions remain supported? Indeed, if BindLib proves popular, we'll even be seeing API versioning become popular, so library X may depend on vA to vD of the library Y API, but won't work with anything else.
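The ranked-scoreboard idea could be sketched roughly like this: each library carries per-criterion scores in a database, the user picks weights and filters (such as "has passed a peer review"), and libraries are ranked by weighted total. All library names, criteria, and weights here are invented for the example; nothing like this exists at the URL above.

```python
# Minimal sketch of ranking libraries by a user-chosen weighted score,
# with an optional "has passed a peer review" filter.

def rank_libraries(libraries, weights, require_peer_review=False):
    """Return libraries sorted by descending weighted score."""
    def total(lib):
        return sum(weights.get(k, 0) * v for k, v in lib["scores"].items())
    candidates = [l for l in libraries
                  if not require_peer_review or l["peer_reviewed"]]
    return sorted(candidates, key=total, reverse=True)

libs = [
    {"name": "LibA", "peer_reviewed": True,
     "scores": {"tests": 5, "docs": 3, "maintenance": 4}},
    {"name": "LibB", "peer_reviewed": False,
     "scores": {"tests": 4, "docs": 5, "maintenance": 5}},
]
weights = {"tests": 2, "docs": 1, "maintenance": 1}

print([l["name"] for l in rank_libraries(libs, weights)])        # ['LibB', 'LibA']
print([l["name"] for l in rank_libraries(libs, weights, True)])  # ['LibA']
```

With the peer-review filter switched on, the results collapse to exactly the Boost-1.x-style libraries, which is the point made above.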
Niall Douglas wrote
On 2 Apr 2015 at 11:56, Robert Ramey wrote:
You're advocating a whole different way of reviewing libraries. That's fine - but it has nothing to do with the incubator. Should Boost change its way of reviewing/certifying libraries, then the question of implementing the new system would be wide open.
I'll tell you my ideal outcome, and it's what I'll pitch after your talk.
I was thinking we'd keep building the suspense until the conference, so I won't go into detail here - but just a couple of points.
As you correctly point out, that is a completely different Boost to the current one. But then it *is* a "Boost 2.0".
Fair enough.
Before you say this is pretty much what the Incubator does,
LOL - wasn't going to say that.
I'll ask this: does the present implementation of the Incubator scale to 1,000 C++ libraries? ... snip ..
I don't think the capacity of the incubator is going to be insufficient for any vision Boost can realistically expect to agree on and implement in my lifetime (but then I'm 67). I just don't think it will ever come up as an issue. My experience in implementing the incubator has led me to a few conclusions:

a) wordpress is slow - at least compared to what we expect here at Boost. This is a feature of the way wordpress is implemented and won't change as long as wordpress is being used.

b) a) doesn't matter. It's a facade over a git server (usually github) and some other servers. Right now we have 25 libraries and maybe 50 visits a day. I see it a long time before that increases by a factor of 100 - and even then I'm pretty sure it could easily handle the load. And as I said - I'll be dead by the time that happens.

c) wordpress has a number of interesting features for this app (the incubator - not the "Niall Biter"):

1) wordpress is pretty good with its "marketplace" of add-ons. Like most software, most of these are crappy and incomplete, so I have to test on average about 5 to keep 1. BTW this is pretty much my experience with C++ libraries as well, so it's not really a wordpress thing. In any case it's quite inspiring as far as seeing where I'd like to see Boost go. In fact, if I have nothing else to do I might do a lightning talk "What Boost can learn from wordpress". I'm lobbying to have the lightning talks moved to the Aspen Meadows Bar - but so far without success.

2) wordpress claims that 25% of all websites on the planet run on wordpress. It might be true.

3) I went to a local word camp - organized by ? - I don't know. Very fun - hippies, braless females, the works. Party like it's 1969!!!! Way too much fun for C++ programmers.

d) PHP surprised me by being much better than I thought it was going to be. A great thing it has is that it's well documented - as good as CPP Reference - and it lets users annotate the documentation with all the little gotchas they discover - an indispensable feature. We could learn from this.

e) If someone had nothing else to do, he could make one kickass CMS by just re-implementing wordpress in C++ with an API for add-ons as shared libraries. Like COM all over again. In today's market - easily worth several billion dollars. Just the savings in heat energy would likely eliminate the global warming problem.

f) The number one problem with the incubator is..... finding libraries which meet the minimal standards of the incubator - which are significantly lower than Boost standards. The incubator standards require some tests, some documentation, and some working code - that's about it. If you troll the web for C++ code, not 1 "library" in 1000 meets even these minimal standards. (Don't get me started on concepts.)

All of the above certainly influences my views about what we want and can expect to accomplish. This may well be the most fun C++Now since we had keynoters Linus Torvalds and David Abrahams give a demonstration of pair programming.

Robert Ramey -- View this message in context: http://boost.2283326.n4.nabble.com/peer-review-queue-tardiness-Cleaning-out-... Sent from the Boost - Dev mailing list archive at Nabble.com.
On 04/02/2015 04:13 PM, Niall Douglas wrote:
The more recent creation of the incubator probably makes this practice outdated, and I expect newer libraries under development will receive more valuable attention in the incubator than in the queue. As a community member, I would support moving such libraries out of the queue and into the incubator, but preferably with developer agreement.
I would far prefer a scoreboard based system which shows a ranked list of libraries by quality score. Auto generated from a database, of course.
It would seem that automatic scoring of libraries is both hard technically, and likely to result in arguments.
Testing policy is a more difficult question in my mind. It has not been the history of Boost to require any specific testing infrastructure (or most other sorts of infrastructure) for libraries. There have been times when this non-requirement has increased complexity, but there have also been times when experimentation has found better solutions. As a community member, I'm wary of forcing standardization, and would need some pretty persuasive arguments to support it.
Requiring Travis support does not cause the exclusion of any other form of testing, it just sets an absolute minimum bar to pass - a minimum bar that I might add is increasingly becoming the bar for ALL open source projects, so by not requiring it Boost looks behind the times to the wider open source community.
While Travis is a convenient tool for some purposes, it's just a tool to run scripts on commits. Having .travis.yml in a repository does not say much about the quality of the code per se. Did you mean a more specific suggestion? Also, it does not support Windows, as far as I know, whereas Boost is all about portable C++, which very much includes Windows. -- Vladimir Prus CodeSourcery / Mentor Embedded http://vladimirprus.com
On 2 Apr 2015 at 18:09, Vladimir Prus wrote:
I would far prefer a scoreboard based system which shows a ranked list of libraries by quality score. Auto generated from a database, of course.
It would seem that automatic scoring of libraries is both hard technically, and likely to result in arguments.
You're thinking bigger than me. I was thinking automatic clang AST tests such as:

* Identifier naming conventions followed: 0 to 100, made up of:
  * Macro naming conventions
  * Type naming conventions
  * enum naming conventions

... and so on. I'm thinking the really basic, really uncontroversial stuff. Once all those are done, *then* you might start thinking about more complex analysis. But I'd do all the simple scoring tests first - the "box ticking" tests for qualifying for Boost entry.
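For a sense of how mechanical such "box ticking" scoring could be, here is a crude illustrative sketch. It is a toy only: real tooling would walk the clang AST (e.g. via libclang) rather than use regexes, and the conventions checked here are hypothetical examples, not actual Boost policy.

```python
# Toy regex-based naming-convention scorer. Rules are illustrative only:
# macros ALL_CAPS, class/struct names lower_case. A real checker would
# use libclang's AST instead of regexes.
import re

def naming_score(source: str) -> int:
    """Return 0-100: the fraction of checked identifiers that conform."""
    checks = []
    # Macro naming convention: #define names should be ALL_CAPS.
    for m in re.finditer(r'^\s*#\s*define\s+(\w+)', source, re.M):
        checks.append(re.fullmatch(r'[A-Z][A-Z0-9_]*', m.group(1)) is not None)
    # Type naming convention: class/struct names should be lower_case.
    for m in re.finditer(r'\b(?:class|struct)\s+(\w+)', source):
        checks.append(re.fullmatch(r'[a-z][a-z0-9_]*', m.group(1)) is not None)
    if not checks:
        return 100  # nothing to score
    return round(100 * sum(checks) / len(checks))

sample = """
#define BOOST_FOO_VERSION 1
#define bad_macro 2
class my_widget {};
struct BadType {};
"""
print(naming_score(sample))  # 2 of 4 identifiers conform -> 50
```

A library's per-category scores would then be aggregated into the 0-100 figures shown on the scoreboard.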
Requiring Travis support does not cause the exclusion of any other form of testing, it just sets an absolute minimum bar to pass - a minimum bar that I might add is increasingly becoming the bar for ALL open source projects, so by not requiring it Boost looks behind the times to the wider open source community.
While Travis is a convenient tool for some purposes, it's just a tool to run scripts on commits. Having .travis.yml in a repository does not say much about the quality of the code per se. Did you mean a more specific suggestion?
By examining .travis.yml I can tell within a beat if a library is maintained or not, even if no other source changes have occurred. The simple question is "does this library compile and pass its unit tests with the latest Boost release?". If the author updates .travis.yml with the latest Boost releases, then the answer is yes, this library is maintained. Even better if .travis.yml auto-fetches the latest Boost release as Antony's script does. Travis itself isn't hugely important. It's what support for it, or any other CI, signifies.
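To illustrate how mechanical that "is it maintained?" check can be, here is a toy sketch. It is hedged throughout: the "recent release" numbers and the sample .travis.yml contents are invented for the example, and in practice the check is little more than a grep.

```python
# Toy heuristic: does a .travis.yml build matrix mention a recent Boost
# release? The release numbers below are invented for illustration.
import re

RECENT_RELEASES = {"1.57.0", "1.58.0"}  # hypothetical current releases

def looks_maintained(travis_yml: str) -> bool:
    """True if the CI config mentions any recent Boost release."""
    mentioned = set(re.findall(r'\b1\.\d{2}\.\d\b', travis_yml))
    return bool(mentioned & RECENT_RELEASES)

fresh = "env:\n  - BOOST_VERSION=1.58.0\n  - BOOST_VERSION=1.55.0\n"
stale = "env:\n  - BOOST_VERSION=1.49.0\n"
print(looks_maintained(fresh), looks_maintained(stale))  # True False
```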
Also, it does not support Windows, for all I know, whereas Boost is all about portable C++, which very much includes Windows.
Travis support is about an absolute minimum bar. I personally believe that when searching for an open source library to solve a problem, if there is no CI testing when Travis is free, that sends a very strong message about the lack of diligence of the authors of that library. Therefore, if Boost is about finding quality C++ libraries, those libraries need CI testing. As libraries in the review queue don't appear in the regression testing for Boost, that implies they ought to get CI testing from somewhere. And Travis is free of cost. Though if someone wants to set up a proper Jenkins install, that's much better again. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Niall, thanks for clarifications, I indeed initially misunderstood your suggestion. What you propose, a certain minimum level of quality/testing to demand, seems quite reasonable to me. On 04/02/2015 08:51 PM, Niall Douglas wrote:
On 2 Apr 2015 at 18:09, Vladimir Prus wrote:
I would far prefer a scoreboard based system which shows a ranked list of libraries by quality score. Auto generated from a database, of course.
It would seem that automatic scoring of libraries is both hard technically, and likely to result in arguments.
You're thinking bigger than me. I was thinking automatic clang AST tests such as:
* Identifier naming conventions followed: 0 to 100, made up of: * Macro naming conventions * Type naming conventions * enum naming conventions
... and so on.
I'm thinking the really basic, really uncontroversial stuff. Once all those are done, *then* you might start thinking about more complex analysis. But I'd do all the simple scoring tests first - the "box ticking" tests for qualifying for Boost entry.
Requiring Travis support does not cause the exclusion of any other form of testing, it just sets an absolute minimum bar to pass - a minimum bar that I might add is increasingly becoming the bar for ALL open source projects, so by not requiring it Boost looks behind the times to the wider open source community.
While Travis is a convenient tool for some purposes, it's just a tool to run scripts on commits. Having .travis.yml in a repository does not say much about the quality of the code per se. Did you mean a more specific suggestion?
By examining .travis.yml I can tell within a beat if a library is maintained or not, even if no other source changes have occurred. The simple question is "does this library compile and pass its unit tests with the latest Boost release?". If the author updates .travis.yml with latest Boost releases then the answer is yes, this library is maintained. Even better if travis.yml auto fetches the latest Boost release as Antony's script does.
Travis itself isn't hugely important. It's what support for it, or any other CI, signifies.
Also, it does not support Windows, for all I know, whereas Boost is all about portable C++, which very much includes Windows.
Travis support is about an absolute minimum bar. I personally believe that when searching for an open source library to solve a problem that if there is no CI testing in there when Travis is free, that sends a very strong message about the lack of diligence of the authors of that library.
Therefore if Boost is about finding quality C++ libraries, those libraries need CI testing in there. As libraries in the review queue don't appear on the regression testing for Boost, that implies that they ought to get some CI testing from somewhere. And Travis is free of cost. Though if someone wants to set up a proper Jenkins install, that's much better again.
Niall
-- Vladimir Prus CodeSourcery / Mentor Embedded http://vladimirprus.com
On 2 Apr 2015 at 21:59, Vladimir Prus wrote:
thanks for clarifications, I indeed initially misunderstood your suggestion. What you propose, a certain minimum level of quality/testing to demand, seems quite reasonable to me.
Just to clarify a bit further, somewhere in my archives I drew up a list of 21 things a Boost library must have which were probably easy to check using libclang: naming conventions, proper use of a DECL macro for visibility, proper use of the virtual keyword and so on. My only real concern is why keep such tooling Boost-only when it could be contributed to the clang static analyser. It's also boring and tedious work writing and debugging such "style checkers". I'm also surprised that no corporate sponsor has sponsored such tooling yet, and that makes me doubt they are as easy to implement as I think. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 04/02/2015 10:40 PM, Niall Douglas wrote:
On 2 Apr 2015 at 21:59, Vladimir Prus wrote:
thanks for clarifications, I indeed initially misunderstood your suggestion. What you propose, a certain minimum level of quality/testing to demand, seems quite reasonable to me.
Just to clarify a bit further, somewhere in my archives I drew up a list of 21 things a Boost library must have which were probably easy to check using libclang. Naming conventions, proper use of a DECL macro for visibility, proper use of the virtual keyword and so on.
My only real concern is why keep such tooling Boost-only when it could be contributed to the clang static analyser. It's also boring and tedious work writing and debugging such "style checkers". I'm also surprised that no corporate sponsor has sponsored such tooling yet, and that makes me doubt they are as easy to implement as I think.
I'm not really surprised about that part - selling tools is generally hard, especially targeting less tangible aspects like quality, especially addressing third-party open-source products. It's not like corporations have budgets specifically for helping open-source projects they use. - Volodya -- Vladimir Prus CodeSourcery / Mentor Embedded http://vladimirprus.com
On 3 Apr 2015 at 13:14, Vladimir Prus wrote:
I'm also surprised that no corporate sponsor has sponsored such tooling yet, and that makes me doubt they are as easy to implement as I think.
I'm not really surprised about that part - selling tools is generally hard, especially targeting less tangible aspects like quality, especially addressing third-party open-source products. It's not like corporations have budgets specifically for helping open-source projects they use.
You speak the truth here. For some odd reason, tooling is seen as a cost overhead instead of a productivity investment. Expenditure is unwisely allocated as a result, and I've seen that attitude at almost every corporate employer I've ever worked for. One of the few orgs to really understand that tooling is a productivity investment, not a cost overhead, is Microsoft, and you can see that in their excellent Visual Studio, which is effectively given away for free nowadays but pays off hugely in saving the time of anyone using it (i.e. the rest of the org, everyone in the ecosystem).

We're guilty of it here at Boost too though. The SC has been clear on not being willing to fund paid work on Boost, though they kindly made an exception for GSoC student extensions. I can see their rationale on this - some open source orgs are really marketing, branding and funding platforms, and as much as I think systems programming (including C++) direly needs one of those, I can see the arguments against funding major feature work.

However, tooling I think is different. Crappy tooling is a productivity drag on everybody, so it's a public good and therefore needs community funding, as no one person will fund it alone given the individual benefit to cost. It would be super great if everybody affected chipped in $5 to a fund to get a given piece of tooling fixed, but the SC did not indicate enormous enthusiasm for what would effectively be a bug bounty system. The big problem of course is choosing someone to do the work for the bounty, and then others might feel excluded, and then it might get political and aggravated and unpleasant. Essentially it could be a lot of admin aggro for little gain. In short, there are no easy fixes, and it's always easier to do nothing.

Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
2015-03-30 21:31 GMT-03:00 Niall Douglas
Antony has done a great job at making a generic Travis script for Boost libraries which just drops in ready to go.
Where can I find such work? -- Vinícius dos Santos Oliveira https://about.me/vinipsmaker
On 30 Mar 2015 at 23:07, Vinícius dos Santos Oliveira wrote:
Antony has done a great job at making a generic Travis script for Boost libraries which just drops in ready to go.
Where can I find such work?
I'd say https://github.com/apolukhin/Boost.DLL/blob/develop/.travis.yml is a very good bet. Antony's script runs rings around my own. His is lovely and generic. Mine is a nasty hack job. Plus his does everything mine can, and lots more. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 31 Mar 2015 at 3:21, Niall Douglas wrote:
I'd say https://github.com/apolukhin/Boost.DLL/blob/develop/.travis.yml is a very good bet.
Antony's script runs rings around my own. His is lovely and generic. Mine is a nasty hack job. Plus his does everything mine can, and lots more.
I would add though that often you don't want to test against Boost HEAD as Antony's script does, but rather some Boost release. I use one of three methods here:

(i) simply apt-get install libboost-dev on travis; that'll fetch a fairly ancient but stable Boost.

(ii) do add-apt-repository -y ppa:boost-latest/ppa first and then install Boost; this gets a fairly new (currently 1.55) Boost.

(iii) wget https://github.com/ned14/boost-release/archive/master.zip, which is always the latest Boost stable release, and unpack it, symlinking your project into libs and running ./b2 headers to link it in.

Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
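For illustration, option (ii) might look something like this in a .travis.yml. This is a sketch only: the package name (libboost1.55-all-dev was the name in the boost-latest PPA at the time) and the script line are assumptions about a hypothetical project, not part of Antony's script.

```yaml
# Hedged sketch of option (ii): install a recent packaged Boost from the
# boost-latest PPA before building. Package name and the build/test
# command are illustrative and depend on your project.
language: cpp
compiler: gcc
before_install:
  - sudo add-apt-repository -y ppa:boost-latest/ppa
  - sudo apt-get update -qq
install:
  - sudo apt-get install -qq libboost1.55-all-dev
script:
  - ./build_and_run_tests.sh   # placeholder for your actual test driver
```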
On 31/03/2015 04:21, Niall Douglas wrote:
On 30 Mar 2015 at 23:07, Vinícius dos Santos Oliveira wrote:
Antony has done a great job at making a generic Travis script for Boost libraries which just drops in ready to go.
Where can I find such work?
I'd say https://github.com/apolukhin/Boost.DLL/blob/develop/.travis.yml is a very good bet.
There is also a wiki page explaining how to use it https://svn.boost.org/trac/boost/wiki/TravisCoverals MAT.
On 31 Mar 2015 at 6:47, Mathieu Champlon wrote:
There is also a wiki page explaining how to use it https://svn.boost.org/trac/boost/wiki/TravisCoverals
Why is there only one 'l' in the page name? Antony's script in DLL can also call Coverity Free. Coverity's big advantage over other tools is being able to see across translation units. I keep meaning to add it to AFIO's per commit CI testing. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On Tue, 31 Mar 2015 00:41:51 +0200, Robert Ramey
FWIW Block pointer and Process can be found in the incubator. I'm guessing that the authors haven't totally given up hope.
I'm fine if Process is removed from the review queue (I'm the one who asked for the library to be reviewed). The library can still be found on GitHub (https://github.com/BorisSchaeling/boost-process is still the latest version; the code hasn't been changed for two years). I'm also fine if Process remains in the incubator. It would be great if others pick up the library where it was left off (lots of people have worked on Process since 2006!). The library has pretty good code coverage, and there shouldn't be a single unit test failing (otherwise please open a ticket on GitHub; I try to pass on the library in a good state). Boris
On 3/30/15, Niall Douglas
I've been reviewing the Boost Formal Review queue as part of preparing my C++ Now presentation and I observe the following things:
Observation 1. The following review submissions have not been updated by their authors in over two years and therefore count as unmaintained. Abandoned libraries before peer review are even more undesirable than abandoned official Boost libraries, so I suggest they be removed from the review queue until their authors bring them up to date:
* Array (last update 2012)
Array is a multidimensional version of boost/std array that hasn't required much updating since the last commit. Locally, dependencies on other Boost libs have been removed, and I'm open to suggestions for improvements either on github or if/when it gets to review. Brian -- www.maidsafe.net
On 31 Mar 2015 at 22:54, Brian Smith wrote:
Array is a multidimensional version of boost/std array that hasn't required much updating since the last commit. Locally, dependencies on other Boost libs have been removed, and I'm open to suggestions for improvements either on github or if/when it gets to review.
Any chance of getting its docs to appear as HTML on a site somewhere? Every other library in the queue has a link to its docs. Many people use github's own website publisher: simply push the HTML to a branch called gh-pages after enabling it. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
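The gh-pages workflow is only a few git commands. A minimal sketch, run here inside a throwaway repository so it's self-contained; in a real library repo you'd run the same commands in your existing clone, copy in your rendered documentation instead of the placeholder index.html, and finish with a push.

```shell
# Sketch of publishing HTML docs via a gh-pages branch, in a throwaway
# repo. The sample index.html is a placeholder; in a real repo the final
# step would be: git push origin gh-pages
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "initial commit"

# Orphan branch: starts with no history and no files.
git checkout -q --orphan gh-pages
git rm -r -f -q --ignore-unmatch .
echo '<html><body>docs go here</body></html>' > index.html
git add index.html
git commit -q -m "Publish documentation"

git branch --list gh-pages    # confirms the branch now exists
```

Once pushed, GitHub serves the branch contents as a static site for the repository.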
On 4/1/15, Niall Douglas
Any chance of getting its docs to appear as html on a site somewhere? Every other library in the queue has a link to its docs.
Many people use github's own website publisher. Simply push the html to a branch called gh-pages after enabling.
I've created a gh-pages branch with a link to the documentation at, https://github.com/BrianJSmith/Array/tree/gh-pages. Thanks for the hint Niall. Brian -- www.maidsafe.net
On 03/30/2015 11:12 AM, Niall Douglas wrote:
I've been reviewing the Boost Formal Review queue as part of preparing my C++ Now presentation and I observe the following things:
Observation 1. The following review submissions have not been updated by their authors in over two years and therefore count as unmaintained. Abandoned libraries before peer review are even more undesirable than abandoned official Boost libraries, so I suggest they be removed from the review queue until their authors bring them up to date:
* Join (last update 2009)
* Block pointer (last update: well, it's still in the SVN sandbox)
* Singularity (last update 2011)
* Extended Complex Numbers (last update 2012)
* Array (last update 2012)
* Countertree (last update 2012, I see a github import in 2013 but no new commits)
* Process (last update 2012). ... Niall
As a friendly reminder, one thing that leads to libraries stagnating on the queue is that no one volunteers to be the Review Manager for them. So, if you have some experience with Boost, and you are interested in one or more of the libraries on the queue, please contact the author, and Ron and me, and volunteer. The authors pour substantial sweat and cogitation into these libraries, and will be overjoyed by your effort to help them move forward. John
On 4/1/2015 11:37 PM, John Phillips wrote:
On 03/30/2015 11:12 AM, Niall Douglas wrote:
I've been reviewing the Boost Formal Review queue as part of preparing my C++ Now presentation and I observe the following things:
Observation 1. The following review submissions have not been updated by their authors in over two years and therefore count as unmaintained. Abandoned libraries before peer review are even more undesirable than abandoned official Boost libraries, so I suggest they be removed from the review queue until their authors bring them up to date:
* Join (last update 2009)
* Block pointer (last update: well, it's still in the SVN sandbox)
* Singularity (last update 2011)
* Extended Complex Numbers (last update 2012)
* Array (last update 2012)
* Countertree (last update 2012, I see a github import in 2013 but no new commits)
* Process (last update 2012). ... Niall
As a friendly reminder, one thing that leads to libraries stagnating on the queue is that no one volunteers to be the Review Manager for them. So, if you have some experience with Boost, and you are interested in one or more of the libraries on the queue, please contact the author, and Ron and me, and volunteer.
The authors pour substantial sweat and cogitation into these libraries, and will be overjoyed by your effort to help them move forward.
A useful idea might be a list of people willing to serve as library review managers. You and Ron could keep the list and periodically ask those on it whether they will have time within the next nnn months to serve as review manager and, if so, for what library on the review queue. If you establish such a list, and are willing to ask people to be on it, I will gladly volunteer.

What does bother me is this: I agree with you that the stagnation of the review queue comes largely from a lack of review managers, not from poor quality libraries in the queue, but as soon as this is mentioned, many people agree it is holding up the review process yet none of those people are themselves willing to be review managers. If there were such an official list it might encourage people to sign up for it, and at least when they are contacted periodically they can then decide if they will have the time to serve. Sort of like the jury system, but totally voluntary.
On 1 Apr 2015 at 23:37, John Phillips wrote:
As a friendly reminder, one thing that leads to libraries stagnating on the queue is that no one volunteers to be the Review Manager for them. So, if you have some experience with Boost, and you are interested in one or more of the libraries on the queue, please contact the author, and Ron and me, and volunteer.
The authors pour substantial sweat and cogitation into these libraries, and will be overjoyed by your effort to help them move forward.
I still think that requiring anyone submitting a library for review to first act as review manager for another library would be a very wise strategy. I don't think it introduces the conflict of interest others think, and even if it does, movement is better than stagnation. Both myself and Antony have served as review manager for other libraries since submitting our libraries. The present situation is frustrating, though I'd imagine for Emil it is even worse, seeing as he's been waiting a year longer and yet has been doing all the work a library maintainer does, except without the recognition or visibility of being included into official Boost.

One of the things I was going to recommend at Robert's Boost 2.0 talk at C++ Now was that if a Boost-ready library does not see a review after three years, and during that time it has remained maintained to the same quality as a Boost library, it should enter Boost regardless. Whilst peer review is important, it is impractical for very niche libraries, and where the quality of implementation, documentation, testing, maintenance and the maintainer are all up to Boost standards, repeatedly demonstrated over a three year period, then peer review is in my opinion dispensable. Similarly, if an existing library is not substantially maintained for three years, especially if its maintainer has vanished, it gets dropped from Boost regardless. Obviously notification of automatic additions and automatic removals would form part of the release notes for the two preceding major releases.

Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On April 2, 2015 8:49:28 AM EDT, Niall Douglas
One of the things I was going to recommend at Robert's Boost 2.0 talk at C++ Now was that if a Boost-ready library does not see a review after three years, and during that time it has remained maintained to the same quality as a Boost library, it should enter Boost regardless. Whilst peer review is important, it is impractical for very niche libraries, and where the quality of implementation, documentation, testing, maintenance and the maintainer are all up to Boost standards repeatedly demonstrated over a three year period then peer review is in my opinion dispensable.
We determine that a library is up to Boost standard through the peer review process. ___ Rob (Sent from my portable computation engine)
On Thu, Apr 2, 2015 at 4:56 PM, Rob Stewart
We determine that a library is up to Boost standard through the peer review process.
Which might not be ideal and might not scale as Boost gets bigger... -- Olaf
On Thu, Apr 2, 2015 at 5:49 AM, Niall Douglas
On 1 Apr 2015 at 23:37, John Phillips wrote: I still think that requiring anyone submitting a library for review must first act as review manager for another library would be a very wise strategy.
IMO acting as a review manager shouldn't be something one does because he must.
Both myself and Antony have served as review manager for other libraries since submitting our libraries. The present situation is frustrating, though I'd imagine for Emil it is even worse seeing as he's been waiting a year longer, and yet has been doing all the work a library maintainer does except without the recognition or visibility of being included into Boost official.
Doesn't this simply mean that there isn't enough interest in the library within the Boost community? :)
Whilst peer review is important, it is impractical for very niche libraries
Should niche libraries be part of Boost? In the case of QVM I like to think that a generic quaternion/vector/matrix library is not *that* niche but the evidence seems to show that it is. Regardless I don't feel that the Boost community owes me a review. :) -- Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode
On 2 Apr 2015 at 11:08, Emil Dotchevski wrote:
Both myself and Antony have served as review manager for other libraries since submitting our libraries. The present situation is frustrating, though I'd imagine for Emil it is even worse seeing as he's been waiting a year longer, and yet has been doing all the work a library maintainer does except without the recognition or visibility of being included into Boost official.
Doesn't this simply mean that there isn't enough interest in the library within the Boost community? :)
It would appear so. Another way of saying this is that your library is so niche that very few consider themselves competent to review manage it. I, for one, wouldn't even try.
Whilst peer review is important, it is impractical for very niche libraries
Should niche libraries be part of Boost?
That's part of the wider debate, definitely. Is quality what we want for the Boost brand, or is popularity? I'm in the former camp, mainly because as the standard library grows it is necessarily the case that the low-hanging fruit is picked, and subsequent libraries must be more niche and less popular. Therefore, to grow and evolve, Boost I believe should aim for quality, not popularity.
In the case of QVM I like to think that a generic quaternion/vector/matrix library is not *that* niche but the evidence seems to show that it is. Regardless I don't feel that the Boost community owes me a review. :)
I don't think the community does, no. I do think that someone seeking a review manager needs to first serve as a review manager in return. Otherwise it's bad karma. For the record, when I originally considered which library to review manage, I did genuinely try to select the longest waiting Boost-ready library first, which was yours. Problem is, I don't really understand what makes your library good or bad - I have no experience programming such maths on a computer you see; all my experience was on paper at university. So I ended up on TypeIndex, a topic I understood very well. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On Thu, Apr 2, 2015 at 12:08 PM, Niall Douglas
On 2 Apr 2015 at 11:08, Emil Dotchevski wrote:
Should niche libraries be part of Boost?
That's part of the wider debate, definitely. Is quality what we want for the Boost brand, or is popularity?
I'm in the former camp, mainly because as the standard library grows it is necessarily the case that the low hanging fruit is picked and subsequent libraries must be more niche, and less popular. Therefore, to grow and evolve, I believe Boost should aim for quality, not popularity.
I am too, which is why I haven't retracted QVM from the review queue.
In the case of QVM I like to think that a generic quaternion/vector/matrix library is not *that* niche but the evidence seems to show that it is. Regardless I don't feel that the Boost community owes me a review. :)
I don't think the community does, no.
I do think that someone seeking a review manager needs to first serve as a review manager in return. Otherwise it's bad karma.
I'm guilty as charged. :) My concern is that while encouraging experts to act as review managers is a good thing, encouraging developers of arbitrary experience (there is no screening process for submitting a library for Boost review) to act as review managers probably isn't. -- Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode
On 2 Apr 2015 at 12:42, Emil Dotchevski wrote:
My concern is that while encouraging experts to act as review managers is a good thing, encouraging developers of arbitrary experience (there is no screening process for submitting a library for Boost review) to act as review managers probably isn't.
I know what you mean. However, acting as review manager requires quite a different skill set. All you really need is the ability to tell whether a point in an expert review has merit or not, and therefore to weight it appropriately in the report and recommendation. You don't need to be an expert yourself, just "expert aware", if that makes sense. And if a review manager wrote a report which made no sense, people would call them on it. Besides, looking at the libraries in the formal review queue, as I have been doing a lot recently, there isn't one whose author isn't well above average. I think Boost library review queue submission is probably highly self-selecting - your average programmer isn't willing to sacrifice the blood and treasure it takes to submit a library for review. Niall
On 4/2/2015 4:09 PM, Niall Douglas wrote:
On 2 Apr 2015 at 12:42, Emil Dotchevski wrote:
My concern is that while encouraging experts to act as review managers is a good thing, encouraging developers of arbitrary experience (there is no screening process for submitting a library for Boost review) to act as review managers probably isn't.
I know what you mean.
However, acting as review manager requires quite a different skill set. All you really need is the ability to tell whether a point in an expert review has merit or not, and therefore to weight it appropriately in the report and recommendation. You don't need to be an expert yourself, just "expert aware", if that makes sense. And if a review manager wrote a report which made no sense, people would call them on it.
+1 That is what is disappointing: so few people are willing to be a review manager.
Emil Dotchevski-3 wrote
In the case of QVM I like to think
While we're on the subject, I was hoping to see QVM submitted to the incubator. Robert Ramey
Emil Dotchevski wrote:
On Thu, Apr 2, 2015 at 5:49 AM, Niall Douglas
wrote: Both myself and Antony have served as review manager for other libraries since submitting our libraries. The present situation is frustrating, though I'd imagine for Emil it is even worse seeing as he's been waiting a year longer, and yet has been doing all the work a library maintainer does except without the recognition or visibility of being included into Boost official.
Doesn't this simply mean that there isn't enough interest in the library within the Boost community? :)
Actually there is interest. For quite a long time we've been considering using QVM in/with Geometry.
Whilst peer review is important, it is impractical for very niche libraries
Should niche libraries be part of Boost? In the case of QVM I like to think that a generic quaternion/vector/matrix library is not *that* niche but the evidence seems to show that it is. Regardless I don't feel that the Boost community owes me a review. :)
I promised you I would be a manager some time ago, so at least I owe you that. :) Therefore I'd like to volunteer. Regards, Adam
Thank you for volunteering to manage the review of QVM. I have added you to the review schedule.
Best,
Ron
Thank you Adam and Ron!
Emil
participants (15)
- Adam Wulkiewicz
- Bjorn Reese
- Boris Schäling
- Brian Smith
- Edward Diener
- Emil Dotchevski
- John Phillips
- Mathieu Champlon
- Niall Douglas
- Olaf van der Spek
- Rob Stewart
- Robert Ramey
- Ron Garcia
- Vinícius dos Santos Oliveira
- Vladimir Prus