Review quality (was: [stacktrace] review (changing vote to NO))
On Mon, Dec 26, 2016 at 3:58 AM, Vladimir Batov
On 2016-12-26 10:41, Andrey Semashev wrote:
Acknowledging the need for a particular functionality is not the same as accepting a particular implementation.
Put bluntly, users do not care about implementation.
Please, don't speak for others. I, personally, very much care about implementation, and I can see other reviewers took a look at the implementation as well. I think making a judgment on library quality without regard to implementation quality is irresponsible and short-sighted. That kind of approach tends to result in "5 minute hack" solutions of obviously poor quality. But that's just my opinion.
They care about the functionality provided. I advocate giving the lib a "foot in the door". That, and far better feedback from the users, will be a strong incentive for the author to keep working and improving the lib. You seem to prefer an all-or-nothing approach. I believe it's unrealistic and bad. You might disagree.
First, nowhere in my message did I advocate an "all-or-nothing approach". What I advocated for is a fair review, outlining all the ups and downs of the library; and if the reviewer feels the downs are significant enough, he would be right to vote for rejection. Second, it's not like the library does not exist before it is accepted. Users and the author have every opportunity to try the library in the field, if they want to. There is blincubator.com as well.
By turning a blind eye to a design flaw of a presented library you're doing a disservice to the library author and the library users. Instead, by giving constructive feedback (positive and negative), especially if supported by real-world experience, you help improve the library. In fact, this feedback is the most valuable reward the author gets from the review. And let me say, there's nothing humiliating in having a library rejected during a review.
Well, I do not get scolded often these days. So, it is quite "refreshing" to learn that I am "turning a blind eye on a design" and "doing a disservice". I thought I was merely expressing my humble opinion... but probably I got it all wrong.
Well, maybe you shouldn't have taken my message personally then. It wasn't meant to scold anyone. If anything, it was written with the intent to provide another (mine) perspective on what a review is and to object to your notion of a library review as a humiliation. That last part, actually, was rather offensive on your side and is what triggered my reply in the first place.
As for the feedback, I do not see voting "no" on something useful as much of a feedback. Indeed, the author does get initial feedback during a review. However, not all of that feedback is constructive: some is valid, some subjective, some capricious. And once the library is voted out, the feedback stops.
The review entails a discussion. Some reviewers get convinced. Some review points get refined, and the author gets convinced. Sometimes review votes get changed. And it's not like the votes are final - the actual decision is taken by the review manager, and he can weigh the points made in reviews and discussion at his discretion. And it's not like the library disappears after the review. If it's interesting and useful, it'll have its users, and they will no doubt provide feedback.
Real valuable feedback will come when the lib is in Boost. People'll start using the lib (I would) in their real projects asking for this and that.
Real world usage produces valuable feedback, no doubt about that. But Boost acceptance is not a pre-requisite for real world usage.
There are things that can be fixed post-review. ... There are other things that cannot be fixed easily and would probably require changing library design. Those changes often affect library API and usage patterns, which warrants the need of a new review of the reworked library.
Yes, all that sounds wonderful... on paper. In real life as a user I'll take what's available first. Then, if that is improved in the next version, I'll take it gladly. If design changes and gives me more or fixes something, I'll accept that.
I doubt you'd be so easy to accept the changes if that required you to rewrite half of your code. :)
I do not remember Spirit transitioning from V1 to not backward-compatible V2 being re-reviewed. I can be wrong though.
I don't remember the review either. But Spirit v2 was added in addition to Spirit Classic (aka v1), which is still available in Boost. It was more like a new library was added. Should it have been reviewed? In my opinion, at least a mini-review should have been performed. Luckily, the Boost.Spirit dev team has rich expertise in the domain and a lot of experience gained with v1, so v2 was a success.
In those cases it's better to reject the library so that the new, improved version is presented later.
Again, wonderful... in theory. The reality IMO is different because: 1) "improving" works better with the real user feedback that you deny the author by voting "no"; 2) "later" might never come, as the review process is quite exhausting/draining and not everyone wants to experience it again; you might say "too bad for him"; I'll say it's bad for the users as well, as they end up with your good intentions and no library; 3) "later" might be too late, as by that time the user will have something else in place.
Again, it's not like the library doesn't exist outside Boost. If it fits you, go right ahead, so 1 is not true. But Boost does set a certain bar of quality, and the review process exists precisely to maintain that bar. And I'll take your 2 and 3 any time if it results in a high quality library in Boost.
Consider also that whenever a library is accepted into Boost, the matter of backward compatibility appears. If the accepted library is somehow seriously flawed, that flaw may be difficult and painful to fix in the future.
"difficult", "painful", "backward compatibility"... Come on. I am sure you've been in the industry for a while, have "survived" many upgrades/updates, etc. We all know it's not an issue. We update, adjust, move on.
Let's just say it can be an issue.
On Mon, Dec 26, 2016 at 3:54 AM, Robert Ramey
On 12/25/16 3:41 PM, Andrey Semashev wrote:
Consider also that whenever a library is accepted into Boost, the matter of backward compatibility appears.
How so? Certainly there is no requirement that boost libraries support older versions of C++ and/or compilers - and indeed many don't. I don't think I see what you're referring to here.
I think you misunderstood. I wasn't speaking of C++ versions backward compatibility. I was speaking of the library backward compatibility. AFAIK, making backward incompatible changes is still considered an undesirable practice.
If the accepted library is somehow seriously flawed,
Then of course it should be rejected. Of course, reaching a consensus as to whether it's flawed might not be so easy.
Right, that was my point.
I checked this page and don't see where it says anything relevant to your points.
Maybe you missed this section: http://www.boost.org/community/reviews.html#Comments <quote> Your comments may be brief or lengthy, but basically the Review Manager needs your evaluation of the library. If you identify problems along the way, please note if they are minor, serious, or showstoppers. The goal of a Boost library review is to improve the library through constructive criticism, and at the end a decision must be made: is the library good enough at this point to accept into Boost? If not, we hope to have provided enough constructive criticism for it to be improved and accepted at a later time. The Serialization library is a good example of how constructive criticism resulted in revisions resulting in an excellent library that was accepted in its second review. </quote>
On 12/25/16 7:18 PM, Andrey Semashev wrote:
Second, it's not like the library does not exist before it is accepted. Users and the author have every opportunity to try the library in the field, if they want to. There is blincubator.com as well.
Indeed, the ability to make a library visible in a convenient way for usage and experimentation in advance of the formal review was one of the main goals of the incubator. The hope was that the authors would get enough feedback to detect and make adjustments for obvious issues in advance of the formal review, and that this would make the review process run smoother and diminish the number of libraries rejected in the review process.

To my disappointment it hasn't worked out that way. Libraries get very little feedback on the blincubator, or anywhere else for that matter. I understand this, as it's actually a fair bit of work to review a library. But that doesn't keep me from being disappointed.

Library authors are anxious to get their library on to the review queue and feel compelled to find a reviewer to accept the task. I understand this as well. But still I'd like to see more "pre-review" feedback. And a few authors have declined to post their library on the blincubator at all. I'm sure they have their reasons, but I'm disappointed that they don't find it compelling or necessary.

I should say I received very little feedback on my safe numerics library. BUT I found it to be very, very useful. It made me realize that I had to make a strong case for the necessity of such a library. In hindsight it's incredible that this had never occurred to me. Up to that point I had always assumed that the whole world was anxiously awaiting my solution to the very obvious and glaring problem with computing which has been around since its inception.

So I am interested in receiving feedback on the incubator. On the other hand, making changes to the incubator requires understanding of modern web tools, which are very, very depressing to use for the capable C++ programmer. Oh well.

Robert Ramey
On 12/25/16 7:18 PM, Andrey Semashev wrote:
Second, it's not like the library does not exist before it is accepted. Users and the author have every opportunity to try the library in the field, if they want to. There is blincubator.com as well.
Indeed, the ability to make a library visible in a convenient way for usage and experimentation in advance of the formal review was one of the main goals of the incubator. The hope was that the authors would get enough feedback to detect and make adjustments for obvious issues in advance of the formal review. The hope was that this would make the review process run smoother and diminish the number of libraries rejected in the review process.
To my disappointment it hasn't worked out that way. Libraries get very little feedback on the blincubator, or anywhere else for that matter. I understand this, as it's actually a fair bit of work to review a library. But that doesn't keep me from being disappointed.
Library authors are anxious to get their library on to the review queue and feel compelled to find a reviewer to accept the task. I understand this as well. But still I'd like to see more "pre-review" feedback.
On 26.12.2016 at 17:39, Robert Ramey wrote the above.
Since I recently went through the review process, I would like to share my thoughts on that: it is very, very difficult to get feedback before the review. This does, however, make sense from the perspective of the reviewer, since he doesn't want to invest hours into a library that might look completely different. That leads to the rather unfortunate situation that library authors are operating in the dark before the review; you actually only have the feedback of one person, the review manager. Secondly, it seems to me that the conditions of acceptance are not clearly defined, i.e. what an approving review actually means. Some say yes because they think the design is alright; others say no because the implementation is not as good as they wish. For me the criteria would be: it solves a problem, the design is sound, the implementation works, and we have sufficient reason to believe that further improvements will not break code using it. I think this could be stated more clearly, i.e. at which point a library should be accepted into Boost.
And a few authors have declined to post their library on the blincubator at all. I'm sure they have their reasons, but I'm disappointed that they don't find it compelling or necessary.
I should say I received very little feedback on my safe numerics library. BUT I found it to be very, very useful. It made me realize that I had to make a strong case for the necessity of such a library. In hindsight it's incredible that this had never occurred to me. Up to that point I had always assumed that the whole world was anxiously awaiting my solution to the very obvious and glaring problem with computing which has been around since its inception.
So I am interested in receiving feedback on the incubator. On the other hand, making changes to the incubator requires understanding of modern web tools, which are very, very depressing to use for the capable C++ programmer. Oh well.
I think the incubator can only solve that problem if there is some incentive to actually use it. Currently, the way I've seen it, it is only a list of libraries that people have proposed for Boost. That is not meant to say it's bad, but all the interaction goes on on the mailing lists, and thus there is not much attention put on the incubator. At least by me.

Here's what I would do, which might create an incentive for people to pay more attention. The libraries on the incubator should be in a late phase of their development, thus they have to be filtered: they need to solve a problem and have documentation and tests. As a condition, there might be a poll; if 3 or 5 Boost contributors approve, the library is added. Then it has a time limit of, let's say, one year. If it doesn't reach review by then, or get an extension approved, it gets removed, thus keeping the incubator current. If you look at the incubator list, there are just way too many libraries that don't have a review date. Boost.Process was on it for 2 years without anything moving. I don't have a problem with development being paused, but if you have an incubator, it should be making progress.

So the way I'd see it: a library in the incubator is partially usable and in active development. That way the libraries in the incubator would actually be interesting; instead of a list of planned developments it would be a list of stuff I could try out. Thus there would be more interest, I'd think. Now if we had that approach, we could go one step further: have a boost-incubator release/github branch, which would give libraries in the incubator more publicity (if put on the boost.org website).

Btw.: the incubator is also outdated; Hana, Fiber and Metaparse are already released with Boost.
Robert Ramey
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Hi,
Btw.: the incubator is also outdated, hana, fiber, metaparse are already released with boost.
this is also something that I noticed; it gave me a bad feeling that the blincubator is not really maintained. Still, I found rather good advice on the incubator for my library "histogram", i.e. everything in the "Requirements" and "Advice" sections. I suppose that the blincubator needs a dedicated manager, someone who promotes new contributions on the mailing list and asks on the list for (pre-)reviews. Everything needs a push, I guess. In principle, I see potential in the blincubator. Non-experts could play a greater part in (pre-)reviewing projects on the blincubator. A non-expert can still apply a checklist and see whether all the rules are followed. The checklist could be implemented in the page as a form. Best regards, Hans
Here's what I would do, which might create an incentive for people to pay more attention:
For many years now I have been a champion of an additional boost-testing distribution in addition to the main distribution. To join boost-testing one simply sends a pull request adding your github repo to the superrepo. A hook script verifies the new repo compiles and passes all its unit tests against the most recently released boost distro. If after that your github repo sees no updates to its master branch in a month, it gets auto expunged from the boost-testing superrepo by a script. *After* a boost main distro release, boost-testing is run against the just released Boost from thence onwards. One month later all boost-testing subprojects passing all their tests are assembled by a script into a boost-testing release distro and automatically published. This proposal was not smiled upon in the past, so it hasn't happened. But if someone were to just go ahead and do it anyway, it might just get itself some legs and become inevitable. Niall
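The selection rule Niall sketches (pass all tests against the latest Boost release, stay active, or get expunged) could be reduced to a small policy function. This is a hypothetical sketch: the superrepo, the 30-day threshold, and the data shapes are illustrative, not an existing Boost tool.

```python
from datetime import datetime, timedelta

def select_for_distro(repos, now, max_idle_days=30):
    """Decide which subprojects stay in a hypothetical boost-testing
    superrepo: each must pass all its unit tests against the most
    recently released Boost and have had a master-branch update within
    max_idle_days; stale or failing repos are auto-expunged."""
    keep = []
    for name, info in repos.items():
        idle = now - info["last_master_update"]
        if info["tests_pass"] and idle <= timedelta(days=max_idle_days):
            keep.append(name)
    return sorted(keep)
```

A script assembling the boost-testing release distro would then simply package whatever this function returns.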
On 02 Jan 2017, at 14:48, Niall Douglas
wrote: For many years now I have been a champion of an additional boost-testing distribution in addition to the main distribution. To join boost-testing one simply sends a pull request adding your github repo to the superrepo. A hook script verifies the new repo compiles and passes all its unit tests against the most recently released boost distro. If after that your github repo sees no updates to its master branch in a month, it gets auto expunged from the boost-testing superrepo by a script.
That sounds pretty cool, because it automatises a lot of the mundane things.
On 02/01/2017 13:58, Hans Dembinski wrote:
On 02 Jan 2017, at 14:48, Niall Douglas
wrote: For many years now I have been a champion of an additional boost-testing distribution in addition to the main distribution. To join boost-testing one simply sends a pull request adding your github repo to the superrepo. A hook script verifies the new repo compiles and passes all its unit tests against the most recently released boost distro. If after that your github repo sees no updates to its master branch in a month, it gets auto expunged from the boost-testing superrepo by a script.
That sounds pretty cool, because it automatises a lot of the mundane things.
I'm very keen on automation, but I am also aware of the false quality issues it can generate.

For example, my boost-lite based libraries have a cronjob script run at midnight GMT which looks at the commits done that day on the develop branch. For each new commit, it does a RESTful query of the CDash for the project, asking if that commit SHA passed all its update/configure/build/test/packaging CTest stages. If it did, it merges that SHA from develop into the master branch, and pushes master back to the github repo. That way the master branch always refers to a SHA where everything passed on all CI-tested platforms (i.e. Travis and Appveyor).

Now that sounds cool and everything, but in fact the quality of the master branch for proposed Boost.Outcome was quite subpar. For at least a month, if anybody tried using the Boost.Outcome master branch they would have found it broken, because the subrepo for the docs had an invalid SHA. If they then tried downloading the tarball generated by the cronjob script for every "all passing" SHA, they would have found it entirely missing, because the auto old-releases purge script had deleted the script doing the tarballing.

Human-based release management, when done well, is always higher quality than automated release management. To date Boost has done releases by hand. Also, automation requires a human janitor, and to date it's been easier to find reliable release-managing humans than to find reliable automation janitors willing to work for zero money.

Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
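The core of the nightly promotion Niall describes can be expressed as: among the day's develop commits, find the newest SHA where every CTest stage passed on every CI platform. This is an illustrative sketch, not Niall's actual script; the stage names and the result format (one dict per CI platform, as might be fetched from a CDash-style REST API) are assumptions.

```python
def latest_green_sha(commits, ci_results,
                     stages=("update", "configure", "build", "test", "packaging")):
    """Given the day's develop-branch commits (oldest first) and per-SHA
    CI results, return the newest SHA where every stage passed on every
    platform, or None if no commit qualifies. The cronjob would then
    merge that SHA from develop into master."""
    for sha in reversed(commits):
        runs = ci_results.get(sha, [])
        if runs and all(run.get(stage) == "passed"
                        for run in runs for stage in stages):
            return sha
    return None
```

As Niall's Boost.Outcome anecdote shows, a green answer from this check is necessary but not sufficient: it only covers what the CI stages actually test.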
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Niall Douglas Sent: 02 January 2017 14:19 To: boost@lists.boost.org Subject: Re: [boost] Review quality [ was stack trace review]
On 02/01/2017 13:58, Hans Dembinski wrote:
On 02 Jan 2017, at 14:48, Niall Douglas
wrote: For many years now I have been a champion of an additional boost-testing distribution in addition to the main distribution. To join boost-testing one simply sends a pull request adding your github repo to the superrepo. A hook script verifies the new repo compiles and passes all its unit tests against the most recently released boost distro. If after that your github repo sees no updates to its master branch in a month, it gets auto expunged from the boost-testing superrepo by a script.
That sounds pretty cool, because it automatises a lot of the mundane things.
Sounds too clever by far ;-) If not all tests pass, that doesn't mean the library is so useless that it should be deleted; perhaps it's OK on some platforms? But I've been keen on an 'accepted as candidate for Boost' distribution for many years, and I would still like to see this adopted. It would lead to better (and less acrimonious) reviews, because we would not be expecting perfection from day one. Too few people are reviewing 'real-life' usage. We need more users, and that won't happen until we have a two-stage acceptance process.
I'm very keen on automation, but I am also aware of the false quality issues it can generate. Human based release management when done well is always higher quality than automated release management. To date Boost has done releases by hand. Also, automation requires a human janitor, and to date it's been easier to find reliable release managing humans than to find reliable automation janitors willing to work for zero money.
On this can't we trust the author to move from his develop branch to master when he thinks fit? Keep It Simple Sir? Paul --- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
If all tests don't pass, this doesn't mean it is so useless that it should be deleted, perhaps OK on some platforms?
Unfortunately that depends on your tests and what they mean. For example, Outcome has special workarounds for Visual Studio 2015 because it has a broken template variables implementation. If you try Outcome on Visual Studio 2017 RC, it blows up in a completely new way: both the special workaround AND the ISO C++ paths cause an ICE. Right now Outcome is entirely unusable on VS 15 (yay). So in this circumstance your failing tests don't indicate a problem in your code, but rather in the compiler. Without a human to judge that, you can't argue for deletion outright. You CAN argue that your library should not be supplied to users of that particular compiler version, though.
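The conclusion above - per-platform test results should gate per-platform availability, not the library's existence - fits in a tiny policy function. A hedged sketch with illustrative names:

```python
def shipping_decision(test_results):
    """Map per-platform test results (platform name -> all tests passed)
    to a shipping decision: the library is supplied only on platforms
    where its tests pass, and dropped entirely only if it passes
    nowhere. A failure on one compiler (e.g. an ICE) thus never deletes
    the library outright."""
    good = sorted(p for p, passed in test_results.items() if passed)
    return good, len(good) == 0
```

A human still has to decide whether a platform failure is the library's fault or the compiler's, but this at least keeps the automated consequence proportionate.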
But I've been keen on an 'accepted as candidate for Boost' distribution for many years, and I would still like to see this adopted. It would lead to better (and less acrimonious) reviews because we are not expecting perfection from day one. Too few people are reviewing 'real-life' usage. We need more users and that won't happen until we have a two-stage acceptance process.
Exactly my position on the matter. Too many reviews are made without usage experience. Some libraries have been accepted that never should have been (Iostreams). Some have been rejected that should never have been (too many to list).
On this can't we trust the author to move from his develop branch to master when he thinks fit?
Keep It Simple Sir?
My libraries have a three quality level branch system. Develop, or other experimental branches, is where the commits happen. Master is what passes all the tests on the CIs. Stable is when I consider a particular master SHA as being of superb quality. I make use of github restricted push enforcement, so even I, the owner, cannot push any SHAs to master branch which did not pass all tests on another branch. It keeps me from being lazy :)

The reason for the three quality levels is all the other automation scripting. You need a quality level in between stable and develop so all the projects dependent on you have something to test against in their CI passes, in order to stamp their own master branches with updated git submodule SHAs. That flags when you've made an API breaking change in an upstream dependency, BUT without breaking the downstream dependency, which is still pinned to an older but still working earlier SHA. Also, when you fix the thing you broke, the downstream dependencies will automatically update themselves to your latest SHA, no manual effort needed.

Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
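The downstream half of this scheme - advance a pinned submodule SHA only once your own CI passes against it - might be sketched like this. The function and data shapes are hypothetical illustrations, not Niall's actual scripts:

```python
def update_pins(pins, upstream_master, downstream_ci_ok):
    """Each upstream dependency is pinned to a known-good SHA. A pin
    advances to the upstream's current master SHA only when the
    downstream's own CI passes against that SHA; otherwise the old,
    still-working pin is kept, so an upstream API break never breaks
    the downstream. Returns the new pin map without mutating the old."""
    new_pins = dict(pins)
    for dep, sha in upstream_master.items():
        if sha != pins.get(dep) and downstream_ci_ok(dep, sha):
            new_pins[dep] = sha
    return new_pins
```

When the upstream fix lands and CI goes green again, the next run of this logic picks up the new SHA automatically, which is exactly the "no manual effort needed" property described above.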
On 1/2/17 7:41 AM, Paul A. Bristow wrote:
But I've been keen on an 'accepted as candidate for Boost' distribution for many years, and I would still like to see this adopted.
How would this be different from having a library placed into the review queue by the review wizard? I don't know if the review wizard puts every request in or if he does some sort of checking. The incubator does have requirements, but they are pretty easy to meet. No one has complained that they are too strict. Basically, I don't think a new designation would add anything.
It would lead to better (and less acrimonious) reviews because we are not expecting perfection from day one.
FWIW - I don't think the reviews are all that acrimonious. But maybe that's just me.
Too few people are reviewing 'real-life' usage.
We need more users and that won't happen until we have a two-stage acceptance process.
Well, we sort of have a two-stage process now: Stage I = incubator, Stage II = reviewed. I don't know how many users actually use libraries in the incubator (I'd love to get statistics on that, but github doesn't have download stats). For any library in the incubator, there's a button you press which shows a graph of the number of times people have brought up the library page. If one had nothing else to do and was a wordpress/php guru, he could clone the library directly from the library page and gather statistics on that. In any case, I think we have enough process. We just need to use it more.
On this can't we trust the author to move from his develop branch to master when he thinks fit?
Right. But I think we need an iteration/evolution in the test and modularization procedures of boost
Keep It Simple Sir?
LOL always
Paul
--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
On 2017-01-03 06:24, Robert Ramey wrote:
On 1/2/17 7:41 AM, Paul A. Bristow wrote: ...
It would lead to better (and less acrimonious) reviews because we are not expecting perfection from day one.
FWIW - I don't think the reviews are all that acrimonious.
I have to side with Paul here, as from what I've seen people do tend to expect everything on a plate from the get-go.
Too few people are reviewing 'real-life' usage. We need more users and that won't happen until we have a two-stage acceptance process.
Well we sort of have a two-stage process now.
Stage I = incubator, Stage II = reviewed.
The problem with the incubator IMO is that it does not provide any guarantee whatsoever that the library will be accepted/around/maintained in the future. The deployment requirements might well be different for other people but my situation is that we simply cannot include an external library/dependency without such a guarantee. The burden/impact of retiring/replacing a no-longer-supported library is likely to be unacceptably high.
On 1/2/17 12:08 PM, Vladimir Batov wrote:
On 2017-01-03 06:24, Robert Ramey wrote:
On 1/2/17 7:41 AM, Paul A. Bristow wrote: ...
It would lead to better (and less acrimonious) reviews because we are not expecting perfection from day one.
FWIW - I don't think the reviews are all that acrimonious.
I have to side with Paul here, as from what I've seen people do tend to expect everything on a plate from the get-go.
Too few people are reviewing 'real-life' usage. We need more users and that won't happen until we have a two-stage acceptance process.
Well we sort of have a two-stage process now.
Stage I = incubator, Stage II = reviewed.
The problem with the incubator IMO is that it does not provide any guarantee whatsoever that the library will be accepted/around/maintained in the future.
No one - not even Boost - can make such a guarantee.
The deployment requirements might well be different for other people but my situation is that we simply cannot include an external library/dependency without such a guarantee.
I do not think a real-world product can depend on anything outside its own organization. This is the motivation behind open source code.
The burden/impact of retiring/replacing a no-longer-supported library is likely to be unacceptably high.
Here is the way a Boost - or any other - library should be used:

0) determine that a library is suitable to one's situation
1) clone the library(ies) to one's local system
2) if the libraries require building - build them
3) run tests on all libraries used
4) build and link the product/application
5) run the application tests

On "upgrade" of libraries or tools:

1) update the libraries which need it
2) run tests on all libraries
3) re-build the app and re-run the app tests

So in no way should you be depending on something outside of your control. You should depend only on your local copy. This is true for any library, accepted into Boost or not!!!

Now - I believe that library users do not run the test suites of the libraries they use. I believe this because no one in 14 years has complained about a serialization library test failing other than on the Boost test matrix. This is very, very shortsighted. Of course this is not just Boost but everywhere. Test systems, including Boost's, do not make this easy. It's a big problem for code quality.

So, short answer: do not depend on code that you do not test and keep a copy of on your own system, regardless of where it comes from - Boost, the incubator or anywhere else. Then there is the standard library which compiler vendors ship. They don't include a test suite - or even post a test matrix. Actually, I think they just use Boost to test their compilers - oh well.

Robert Ramey
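The upgrade discipline in the steps above boils down to a gate: rebuild the application only if every locally-kept library still passes its own test suite. A minimal sketch of that gate, where run_tests stands in for whatever command actually drives each library's suite:

```python
def safe_to_upgrade(libraries, run_tests):
    """After updating the local copies of the libraries, re-run every
    library's test suite; report whether it is safe to proceed to
    rebuilding the application, and which libraries failed."""
    failures = [lib for lib in libraries if not run_tests(lib)]
    return len(failures) == 0, failures
```

The point of structuring it this way is that the decision rests entirely on tests run against one's own local copies - nothing outside one's control is consulted.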
On Mon, Jan 2, 2017 at 12:35 PM, Robert Ramey
On 1/2/17 12:08 PM, Vladimir Batov wrote:
On 2017-01-03 06:24, Robert Ramey wrote:
On 1/2/17 7:41 AM, Paul A. Bristow wrote: ...
It would lead to better (and less acrimonious) reviews because we are not expecting perfection from day one.
FWIW - I don't think the reviews are all that acrimonious.
I have to side with Paul here, as from what I've seen people do tend to expect everything on a plate from the get-go.
Too few people are reviewing 'real-life' usage.
We need more users and that won't happen until we have a two-stage acceptance process.
Well we sort of have a two-stage process now.
Stage I = incubator, Stage II = reviewed.
The problem with the incubator IMO is that it does not provide any guarantee whatsoever that the library will be accepted/around/maintained in the future.
No one - not even Boost - can make such a guarantee.
+1 Can you afford the risk of a Boost library (or any other library) not being maintained and bug-free in the future? Nobody but the user can answer this question, because even if the risk could be evaluated objectively, the risk tolerance is user-specific. Most definitely this is not a matter the review process should be concerned with. Emil
On 2017-01-03 07:35, Robert Ramey wrote:
On 1/2/17 12:08 PM, Vladimir Batov wrote:
... The problem with the incubator IMO is that it does not provide any guarantee whatsoever that the library will be accepted/around/maintained in the future.
No one - not even boost - can make such a guarantee.
Oh, come on. Of course, nothing in life is guaranteed. I can't guarantee I wake up tomorrow morning. But surely we all understand what I was trying to say. Is there really anything to debate/discuss here?
The deployment requirements might well be different for other people but my situation is that we simply cannot include an external library/dependency without such a guarantee.
I do not think a real world product can depend on anything outside its own organization. This is the motivation behind open source code.
I simply described the "real-world" project/product I am involved in. I've been doing it for quite some time and as far as I can remember all those "real-world" projects were using external libs and Boost has always been one of them. Forgive my saying but to say "I do not think a real world product can depend on anything outside it's own organization" seems very much out of touch with this very real-world... unless I misunderstood/misinterpreted it.
The burden/impact of retiring/replacing a no-longer-supported library is likely to be unacceptably high.
Here is the way a Boost - or any other library should be used.
0) determine that a library is suitable to one's situation 1) clone the library(ies) to one's local system. 2) If the libraries require building - build them 3) run tests on all libraries used. 4) build and link product/application 5) run application tests
On "upgrade" of libraries or tools
1) update the libraries which need it 2) run tests on all libraries 3) re-build the app and re-run the app tests
So in no way should you be depending on something outside of your control. You should depend only on your local copy. This is true for any library accepted into boost or not!!!
There seems to be a misunderstanding, as I was not referring to a local copy. That is not an issue. It appears obvious to me that an external lib (local copy or not) is outside of my control... unless I am prepared to take responsibility for maintaining the lib... which is not an option. With Boost that risk is minimal (practically non-existent); with the incubator that risk is quite high. Others might disagree.
On 1/2/17 1:48 PM, Vladimir Batov wrote:
On 2017-01-03 07:35, Robert Ramey wrote:
On 1/2/17 12:08 PM, Vladimir Batov wrote:
... The problem with the incubator IMO is that it does not provide any guarantee whatsoever that the library will be accepted/around/maintained in the future.
No one - not even boost - can make such a guarantee.
Oh, come on. Of course, nothing in life is guaranteed. I can't guarantee I wake up tomorrow morning. But surely we all understand what I was trying to say. Is there really anything to debate/discuss here?
Hmmm - actually there is. If you use a boost library, there's a real-world chance that it may stop working in the future. This could occur because you upgrade your compiler, or because someone "improves" the official version so that it doesn't work for you any longer. Of course it's much more likely for something like this to occur with some other library - in the incubator or otherwise. You seem to suggest that this is not a problem with boost libraries while it is with others. My view is that it's a problem with all code and libraries; it's just that, due to a higher standard, it may be much less of a problem with boost libraries than it is with others. But my view is that this concern never goes away, and this fact should be built into the development procedures of any application which depends on boost or any other library. I don't think that depending on boost or on some component in the incubator or anywhere else is all that different. I trust no one.
I simply described the "real-world" project/product I am involved in. I've been doing it for quite some time and as far as I can remember all those "real-world" projects were using external libs and Boost has always been one of them. Forgive my saying but to say "I do not think a real world product can depend on anything outside it's own organization" seems very much out of touch with this very real-world... unless I misunderstood/misinterpreted it.
Let me clarify. Of course we rely on externally produced code. But if you ship it, you still have to be responsible for it. The only way you can do this now is to run the tests yourself. I realize that people don't do this, and I maintain that they are wrong not to do it. But if one accepts the view that he should run the tests on every library he includes in his own product, then the distinction between using a library accepted by review, one in the incubator, or one from anywhere else goes away.
It appears obvious to me that an external lib (local copy or not) is outside of my control...
Agreed.
unless I am prepared to take responsibility for maintaining the lib... which is not an option.
Unfortunately, you can't evade responsibility for the promises made for your final product. Hence you can't evade responsibility for the functioning of the libraries you use. Currently, the only way you can credibly claim you've fulfilled those responsibilities is to run tests on the libraries. Libraries in boost and in the incubator are required to have tests (unlike any other source code collections). So this is possible in either case.
With Boost that risk is minimal (practically non-existent); with incubator that risk is quite high. Others might disagree.
Certainly one would hope that libraries in boost have fewer bugs. But since I believe that all code should be tested by myself before I ship it as part of a product, I can't accept/reject libraries on a case by case basis. This is what I recommend that anybody do. Sort of off-topic.
unless I am prepared to take responsibility for maintaining the lib... which is not an option.
Actually it IS an option. Take a look at the Boost Library Official Maintainer (BLOM) program. If you really depend on some library which no longer has a maintainer, your company can take on that responsibility and gain some benefits besides. The theory is that since you have to run tests anyway, and perhaps apply bug fixes in the normal course of your work, you might as well take on the job officially and get an inside track in getting your fixes into the official version, and maybe some free promotion/publicity for your organization. Robert Ramey
Robert, I do agree with your points below about the importance of tests and duties and the ever-present risks associated with deployment of external libs. Still, I am not quite sure how it all stemmed from my fairly simple (and I thought non-controversial) view that the incubator carries those risks to a prohibitively greater degree than Boost proper. That is why I personally did not see the incubator being considered/evaluated for my particular projects and probably not taking off on a larger scale. Incidentally, that incubator-related view of mine does not immediately seem related to the original "Review quality" topic. So, it might well be that I was first to veer off. :-) Apologies.
On 1/2/17 5:14 PM, Vladimir Batov wrote:
Robert,
I do agree with your points below about importance of tests and duties and ever-present risks associated with deployments of external libs. Still, I am not quite sure how it all stemmed from my fairly simple (and I thought non-controversial) view that the incubator had those risks prohibitively greater compared to the Boost proper.
I felt that this misunderstood the role I had envisioned for the incubator, and I wanted to take the opportunity to clarify this. It has always been my hope that those who aspire to contribute to boost would have a place where they could get information on how to make a boost-quality library and a place to get feedback on their efforts. From personal experience I know that even a little feedback before the review helps the author immensely, and helps the review process by helping the author prepare.
That is why I personally did not see the incubator being considered/evaluated for my particular projects and probably not taking off on a larger scale.
Of course that's up to you. At last count I believe that 50 libraries have been submitted to the incubator, and some number of these have been accepted into boost. Of course I'd like to see it more successful on a larger scale, but it's not nothing. I know that the authors who make submissions to the incubator aspire to meet the same standards that accepted libraries do, and a good number have been accepted. So if the incubator were to contain something that one needs, I think it would be smart to consider it as one would an accepted boost library. The only difference is that you should review and test it. But wait, you should be doing that with accepted boost libraries as well. That's my point. Basically the only difference between a library in the incubator and one accepted into boost is that some small number of people (in some cases only two) have certified the library for acceptance into boost. Just keep that in mind.
Accidentally, that incubator-related view of mine does not immediately seem related to the original "Review quality" topic. So, it might well be that I was first to veer off. :-) Apologies.
Oh no - I'm always the one to veer off. I'm well known for this. Robert Ramey
participants (8)
- Andrey Semashev
- Emil Dotchevski
- Hans Dembinski
- Klemens Morgenstern
- Niall Douglas
- Paul A. Bristow
- Robert Ramey
- Vladimir Batov