Nat Goodspeed-2 wrote:
On Mon, Jul 21, 2014 at 7:02 AM, Paul A. Bristow <pbristow@.u-net> wrote:
Speaking from the viewpoint of "a little knowledge" (in other words, beware!) I read through parts of the Boost.Build documentation a few weeks ago and was a bit ashamed of the unkind remarks I myself have made about it from time to time. It seems to me that it wouldn't be too unreasonable to post a few cookbook Jamfile.v2 files for a hypothetical simple Boost.Foo library, then for a somewhat less simple Boost.Bar library: code and tests and documentation. (Let's just stipulate that as your library's build requirements grow in complexity, you must eventually bite the bullet and learn Boost.Build yourself.)
I've been using Safe Numerics as a canonical example to illustrate how I've envisioned that authors use this site. For this purpose I've used CMake as described in the Simple Tools section of the website. After years of dealing with Boost Build, I believe the CMake solution is far superior in this context. However, there is absolutely no reason that one can't include JAM files on the site as well. This would permit those interested in using Boost Build to test Safe Numerics to do so. More importantly, it would provide a simple canonical example for library authors who want to use Boost Build. If you or anyone else wants to provide such files (with lots of comments!), I'll gladly include them, along with updated text on the website referring to them.
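For concreteness, here is roughly the shape of the minimal CMake setup the Simple Tools section describes. This is only a sketch with hypothetical names (boost_foo, test/test_foo.cpp), not the actual Safe Numerics build files:

    # CMakeLists.txt for a hypothetical header-only library "boost_foo".
    # Names and layout are placeholders, not the real Safe Numerics project.
    cmake_minimum_required(VERSION 3.0)
    project(boost_foo CXX)

    # Header-only: nothing to compile, just publish the include path.
    add_library(boost_foo INTERFACE)
    target_include_directories(boost_foo INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}/include)

    # One executable per test, each registered with CTest.
    enable_testing()
    add_executable(test_foo test/test_foo.cpp)
    target_link_libraries(test_foo boost_foo)
    add_test(NAME test_foo COMMAND test_foo)

A commented Jamfile.v2 doing the same job could sit right alongside it for authors who prefer Boost Build.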
But a new library author, whose requirements are presumably still pretty straightforward, could copy, tweak, iterate and ask questions until s/he gets results.
Again, I wouldn't recommend this to new submitters - but if that works for them, I'm all for it.

Now we enter the realm of fantasy. ... <snip> The above speculation is intended to address the question: how could we provide online test results and documentation for a candidate library that uses Boost.Build, without having to commit generated files to the repository? Consider it an existence proof: there does seem to be at least one way.

My view is that Boost testing doesn't scale. When it was conceived there were far fewer libraries than there are now. Testing takes longer than it used to, but we have faster machines, so things have managed to keep up. Still, I feel reluctant to add more tests (portable binary archives, for example) for fear of slowing the system down even more. How could we get to 500 libraries with the current system?

So my vision is entirely different. In my world, each user tests the libraries he's going to use on his own machine and posts the results to a common dashboard - one per library (a rough sketch of what that submission step looks like is appended below). The advantages are:

a) it scales without limit
b) it doesn't require any Boost infrastructure
c) it automatically tests the platform/OS combinations that users actually use, rather than having special Boost testers select the platforms they want to test with
d) it doesn't waste time testing platform/OS combinations that no one is actually using
e) there is already free infrastructure available - CDash and several CI websites
f) it's ready to start using now and, in fact, the incubator is already using it!

Were Boost to evolve to use this approach, the only thing we would need would be a website listing all the links to the dashboards. (Hmmm - that's what the incubator is).

Robert Ramey
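For anyone curious what the per-user submission step could look like, here is a rough sketch using CMake/CTest and CDash. The project name and dashboard address below are placeholders, not the incubator's actual configuration, and it assumes the library's CMakeLists.txt calls include(CTest). A CTestConfig.cmake next to the top-level CMakeLists.txt tells CTest where to send results:

    # CTestConfig.cmake - placeholder dashboard settings, not the incubator's real ones.
    set(CTEST_PROJECT_NAME "BoostFoo")
    set(CTEST_NIGHTLY_START_TIME "00:00:00 UTC")
    set(CTEST_DROP_METHOD "http")
    set(CTEST_DROP_SITE "my.cdash.org")
    set(CTEST_DROP_LOCATION "/submit.php?project=BoostFoo")
    set(CTEST_DROP_SITE_CDASH TRUE)

A user who wants to contribute results configures and builds as usual, then runs:

    cmake . && cmake --build .
    ctest -D Experimental    # configure, build, run the tests, submit to the dashboard

Each submission then shows up as its own row on the library's CDash page, typically labelled with the submitter's OS and compiler - which is exactly the "test the platforms users actually use" property described above.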