What would make tool authors happier..
Warning.. Some of this may sound contentious at times.. But that's life ;-) There are a number of disparate discussions going on at the moment that can be summarized as: * A tool author suggesting a change (to structure, or process, or something else) that would make writing tools easier and less error prone. * Lots of responses about how such a change would be disruptive to current practices. * Some responses about how such a change is beneficial to the future of Boost, its growth and possible real modular future. * Some more responses about how the tool authors should just account for all the special cases of the current practices and still bring about the vaunted modular future. So I want to summarize some key changes, as I understand them, that I think will make movement towards a better tooling and development future less painful for the minuscule minority of the people who write the tools that aim to bring that future into existence. First the key aspects of the changes are: 1. Normalized and regular directory structure of libraries, tools, and root. 2. Minimal non-library content and structure. 3. Stricter rules and guidelines for library authors to follow regarding the needs of the infrastructure tools. Some of the specific changes some of us have mentioned that touch on the above aspects are.. * Making the combined library headers go into boost-root/include. See separate post from Peter Dimov about this. This touches on #1 as it makes for a user and tool expected location. That is, users are accustomed to looking for a top level include. And having such a location would reduce the documentation and instruction needed to point them away from their intuitive behavior. External tools also expect to find such a directory for a library, which is what a monolithic Boost looks like to tools. Hence it would make it easier to use, incorporate, author, etc tools. * What I've mentioned as "flattening" of libraries. Which touches on #1 and #3 above. 
This has multiple parts/meanings so let me explain each one separately.

First, it means banning git sub-modules outside of directly under boost-root/libs and boost-root/tools. Dealing with git sub-modules is difficult enough even when it's just a handful of sub-modules. But dealing with the 130 sub-modules we have now is an insanity-inducing experience when there needs to be handling for special locations. Currently the only culprit is "libs/numeric". And I understand that it was done this way because that was the old structure converted to the modular git form. But I think it's time to truly abandon the old structures as they are standing in the way of better tools and user experience. As an example of user experience, I once tried to set up my own subset of Boost as a git repo that had all the libraries and tools needed for the documentation toolchain. It was excruciating to manually discover what all those libraries were, particularly because of not initially knowing about this aspect of our current sub-modules.

Second, it means formalizing and regulating the top-level structure of libraries. For the longest time we've had an accepted top-level structure. Unfortunately library authors have added to that top-level structure, for example to manage "sub-libraries" or "sub-parts" of their library... Which is understandable. But it makes life more difficult for the tools that rely on the structure assertions. For example, currently the testing scripts rely on people updating a single listing file at "boost-root/status/Jamfile.v2", when in an ideal world the test tools would be able to automatically discover that information. Practically it means that currently that Jamfile lists 127 test dirs, but a cursory discovery of test dirs turns up as many as 197.

As far as following the top-level library structure..
There are currently 279 files and directories at the library top level that are not in the accepted set (and I'm already excluding things like readmes and changelogs, even though changelogs should be in docs). But do note.. I'm not suggesting that we immediately ban the old structure, but that we start discussing what the needs of library authors are, and that we come up with a consistent and enforced structure that can be relied upon by users and tools.

Some changes that this may be the first time I bring up..

* Remove as much as possible from boost-root. This is mostly #2 above.

First, I would like to remove the "boost-root/more" directory. Its contents are docs of various kinds and should be placed in "boost-root/doc", in the website, in tool docs, or in library docs. While doing that I'm sure we will likely also rewrite/clean up those documents to better reflect the present.

Second, I would like to move as many of the root build files as possible to a "boost-root/build" directory. Ideally they would all be moved, but it may not be possible, or practical, to do so. This would mirror the general top-level structure and hence make it a bit more intuitive for new authors and users to find.

Third, I would like to clean up the various CSS and HTML sources at the root, either down to a smaller number (do we really need index.htm *and* index.html?), or moved to the doc directory. I know.. This isn't exactly in the vein of what makes it easier for tool developers. But it does make it easier for users to initially navigate (because there's less noise for them to look at).

Last, I would like to re/move the "boost-root/status" directory. Two options I'm considering are moving it to be "boost-root/test", to match the usual name for testing scripts, or removing it and replacing it with equivalent logic in the regression tools (i.e. move the functionality to the separate regression git repo).
I believe that all those changes will help in moving us toward what I'm starting to call a "comprehensive" Boost release. Such a release would not follow the monolithic structure we currently have of needing the big include directory. Instead it would be a plain collection of the individual libraries and tools that users can enable/install from the comprehensive package they download. Or possibly use directly, if they add the individual libraries to their project search paths (and obviously build the ones that need it). Such a "comprehensive" release would make the release process, and the tools driving it, much simpler and would almost certainly increase the frequency of comprehensive releases (possibly even to an almost weekly or daily occurrence).

Last note.. It's never too late to fix problems, even if there's work needed to adjust to the fixes. After all, programmers are accustomed to change. We deal with it and move on. So if we can make changes that reduce our future pain, we should make those changes.

--
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
* Making the combined library headers go into boost-root/include. See separate post from Peter Dimov about this.
This touches on #1 as it makes for a user and tool expected location. That is, users are accustomed to looking for a top level include. And having such a location would reduce the documentation and instruction needed to point them away from their intuitive behavior. External tools also expect to find such a directory for a library, which is what a monolithic Boost looks like to tools. Hence it would make it easier to use, incorporate, author, etc tools.

If we're going to do this, just do it, and take the consequences later ;)
Second, it means formalizing and regulating the top level structure of libraries. For the longest time we've had an accepted top level structure. Unfortunately library authors have added to that top level structure. For example to manage "sub-libraries" or "sub-parts" of their library... Which is understandable. But it makes life more difficult for the tools that rely on the structure assertions. For example currently the testing scripts rely on people updating a single listing file at "boost-root/status/Jamfile.v2". When in an ideal world the test tools would be able to automatically discover that information. Practically it means that currently that Jamfile lists 127 test dirs. But a cursory discovery of test dirs goes up to a possible 197. As far as following the top-level library structure.. There are currently 279 files and directories at the library top level that are not in the accepted set (and I'm already excluding things like readmes and changelogs, even though changelogs should be in docs).

+1
Boost libraries should be following the common structure - as should the tools of course.
Last, I would like to re/move the "boost-root/status" directory. Two options I'm considering are moving to be "boost-root/test" to match the usual name for testing scripts. Or removing it and replacing it with equivalent logic in the regression tools (i.e. move the functionality to the separate regression git repo).

+1
Not so controversial after all, John.
Rene Rivera wrote:
* What I've mentioned as "flattening" of libraries.
Specifically, this means that

1. the headers live in libs/*/include (already implemented)
2. the test Jamfiles live in libs/*/test
3. the build Jamfiles live in libs/*/build

This is _almost_ the case today, and making this an official and enforced policy will not be as disruptive as it seems. The only offenders are (I think) libs/numeric/* and libs/function_types/build.

There's also

4. the documentation root is libs/*/index.html and the documentation lives (as a general rule) in libs/*/doc/html

which is a bigger change, as currently some libraries place their documentation in $BOOST_ROOT/doc.
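A convention like this is easy to machine-check. As a rough sketch (not an actual Boost tool; the helper name and demo paths are made up), a library directory could be verified like so:

```shell
#!/bin/sh
# Sketch: verify one library directory against the flattened layout
# above. Hypothetical helper, not a real Boost tool; build/ is left
# out of the required set since header-only libraries don't need it.
check_flat_layout() {
    lib="$1"; rc=0
    for sub in include test; do
        [ -d "$lib/$sub" ] || { echo "$lib: missing $sub/"; rc=1; }
    done
    return $rc
}

# Demo on a synthetic tree:
mkdir -p /tmp/flat-demo/libs/good/include /tmp/flat-demo/libs/good/test
mkdir -p /tmp/flat-demo/libs/bad/include
check_flat_layout /tmp/flat-demo/libs/good && echo "good: conforming"
check_flat_layout /tmp/flat-demo/libs/bad  || echo "bad: nonconforming"
```

A real checker would also want to confirm the Jamfiles exist inside test/ and build/, but the shape of the check is the same.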
On Tue, Jun 2, 2015 at 3:18 PM, Peter Dimov
Rene Rivera wrote:
* What I've mentioned as "flattening" of libraries.
Specifically, this means that
1. the headers live in libs/*/include (already implemented) 2. the test Jamfiles live in libs/*/test 3. the build Jamfiles live in libs/*/build
This is _almost_ the case today, and making this an official and enforced policy will not be as disruptive as it seems. The only offenders are (I think) libs/numeric/* and libs/function_types/build.
There's also
4. the documentation root is libs/*/index.html and the documentation lives (as a general rule) in libs/*/doc/html
And..

5. the source lives in libs/*/src.

The set of libraries that have extraneous files/dirs are roughly: algorithm, align, asio, assert, bind, chrono, compatibility, compute, concept_check, config, container, context, conversion, convert, core, crc, date_time, disjoint_sets, dynamic_bitset, endian, filesystem, format, functional, fusion, geometry, gil, heap, interprocess, intrusive, lexical_cast, locale, log, math, move, mpl, multi_index, multiprecision, numeric, phoenix, polygon, predef (yes my own lib, I know), property_tree, proto, python, random, random, regex, serialization, smart_ptr, sort, spirit, statechart, static_assert, test, thread, tokenizer, type_index, type_traits, typeof, units, unordered, utility, uuid, variant, wave, and xpressive.

--
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
On Tue, Jun 2, 2015 at 3:39 PM, Rene Rivera
On Tue, Jun 2, 2015 at 3:18 PM, Peter Dimov
wrote: Rene Rivera wrote:
* What I've mentioned as "flattening" of libraries.
Specifically, this means that
1. the headers live in libs/*/include (already implemented) 2. the test Jamfiles live in libs/*/test 3. the build Jamfiles live in libs/*/build
This is _almost_ the case today, and making this an official and enforced policy will not be as disruptive as it seems. The only offenders are (I think) libs/numeric/* and libs/function_types/build.
There's also
4. the documentation root is libs/*/index.html and the documentation lives (as a general rule) in libs/*/doc/html
And.. 5. the source lives in libs/*/src.
The set of libraries that have extraneous files/dirs are roughly: algorithm, align, asio, assert, bind, chrono, compatibility, compute, concept_check, config, container, context, conversion, convert, core, crc, date_time, disjoint_sets, dynamic_bitset, endian, filesystem, format, functional, fusion, geometry, gil, heap, interprocess, intrusive, lexical_cast, locale, log, math, move, mpl, multi_index, multiprecision, numeric, phoenix, polygon, predef (yes my own lib, I know), property_tree, proto, python, random, random, regex, serialization, smart_ptr, sort, spirit, statechart, static_assert, test, thread, tokenizer, type_index, type_traits, typeof, units, unordered, utility, uuid, variant, wave, and xpressive.
PS. The quick search I used is:
cd boost-root
ls -d1 libs/*/* | grep -E -v "[/](build|doc|example|include|meta|src|test|index.html|README|README.md|ChangeLog)$"
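The same quick search can be written with find, which treats the names literally (the grep pattern above leaves the dots in "index.html" etc. as regex wildcards). A sketch against a made-up tree, since the real boost-root isn't assumed here:

```shell
#!/bin/sh
# Sketch: list top-level library entries outside the accepted set,
# using find instead of ls+grep. The accepted-name set mirrors the
# grep above; the demo tree and its paths are made up.
mkdir -p /tmp/find-demo/libs/example_lib/include \
         /tmp/find-demo/libs/example_lib/stray_dir
cd /tmp/find-demo
find libs -mindepth 2 -maxdepth 2 \
    ! -name build ! -name doc ! -name example ! -name include \
    ! -name meta ! -name src ! -name test \
    ! -name index.html ! -name README ! -name README.md ! -name ChangeLog
# prints: libs/example_lib/stray_dir
```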
--
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
Rene Rivera wrote:
The set of libraries that have extraneous files/dirs are roughly: algorithm, align, asio, assert, bind, chrono, compatibility, compute, concept_check, config, container, context, conversion, convert, core, crc, date_time, disjoint_sets, dynamic_bitset, endian, filesystem, format, functional, fusion, geometry, gil, heap, interprocess, intrusive, lexical_cast, locale, log, math, move, mpl, multi_index, multiprecision, numeric, phoenix, polygon, predef (yes my own lib, I know), property_tree, proto, python, random, random, regex, serialization, smart_ptr, sort, spirit, statechart, static_assert, test, thread, tokenizer, type_index, type_traits, typeof, units, unordered, utility, uuid, variant, wave, and xpressive.
I don't consider extraneous files/dirs within library directories a problem as they neither confuse tools nor interfere with modularization.
On Tue, Jun 2, 2015 at 4:05 PM, Peter Dimov
Rene Rivera wrote:
The set of libraries that have extraneous files/dirs are roughly: algorithm, align, asio, assert, bind, chrono, compatibility, compute, concept_check, config, container, context, conversion, convert, core, crc, date_time, disjoint_sets, dynamic_bitset, endian, filesystem, format, functional, fusion, geometry, gil, heap, interprocess, intrusive, lexical_cast, locale, log, math, move, mpl, multi_index, multiprecision, numeric, phoenix, polygon, predef (yes my own lib, I know), property_tree, proto, python, random, random, regex, serialization, smart_ptr, sort, spirit, statechart, static_assert, test, thread, tokenizer, type_index, type_traits, typeof, units, unordered, utility, uuid, variant, wave, and xpressive.
I don't consider extraneous files/dirs within library directories a problem as they neither confuse tools nor interfere with modularization.
Right. Which is why I said "roughly". But for many of the above I'm not actually sure if they are OK or not. Which is also why I want to discuss what all those files and dirs are. I want to be sure we aren't missing tests we should be running. Or documentation that is being misplaced. Or whether there are additional directories we should document (and enforce) because they are useful to have.

For example, I see "tools" is used in a few places. Should we document what should/might go in there? I see CMake & VS files.. Should we suggest/require that those go in the build dir? So that we can generally tell users that for *all* libraries they should look in the library build directory. We also have a few libraries that have source, docs, and headers in the top-level dir. We've traditionally allowed that for "small" libraries. Should we stop allowing that? And so on for other types of files and dirs.

I.e. I don't mind so much that they are there.. Just that we don't have documentation for why they are there. So that users and future authors (and some tools) have an easier time.

--
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
The set of libraries that have extraneous files/dirs are roughly: algorithm, align, asio, assert, bind, chrono, compatibility, compute, concept_check, config, container, context, conversion, convert, core, crc, date_time, disjoint_sets, dynamic_bitset, endian, filesystem, format, functional, fusion, geometry, gil, heap, interprocess, intrusive, lexical_cast, locale, log, math, move, mpl, multi_index, multiprecision, numeric, phoenix, polygon, predef (yes my own lib, I know), property_tree, proto, python, random, random, regex, serialization, smart_ptr, sort, spirit, statechart, static_assert, test, thread, tokenizer, type_index, type_traits, typeof, units, unordered, utility, uuid, variant, wave, and xpressive.
Since there are several of mine listed there, these typically have:

tools: Stuff to help me maintain the library, look away now ;)

example(s): Examples for end users, these are typically built into the documentation, so I guess they could go under doc/ at a pinch, but I feel they are more "discoverable" if they're in a top level directory. The Jamfile in test/ references the one in example/ so these get built as part of the tests, even though they're not "tests" as such.

config: Common directory for build time configuration. Could go under build/config I guess.

performance: Performance tests. Not built/run as part of the test suite as they take *much* too long.

There are a few other stray directories in Math that should probably be tidied up when I get the time, but they should be harmless I would have thought.

John.
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of John Maddock Sent: 03 June 2015 17:29 To: boost@lists.boost.org Subject: Re: [boost] What would make tool authors happier..
The set of libraries that have extraneous files/dirs are roughly: algorithm, align, asio, assert, bind, chrono, compatibility, compute, concept_check, config, container, context, conversion, convert, core, crc, date_time, disjoint_sets, dynamic_bitset, endian, filesystem, format, functional, fusion, geometry, gil, heap, interprocess, intrusive, lexical_cast, locale, log, math, move, mpl, multi_index, multiprecision, numeric, phoenix, polygon, predef (yes my own lib, I know), property_tree, proto, python, random, random, regex, serialization, smart_ptr, sort, spirit, statechart, static_assert, test, thread, tokenizer, type_index, type_traits, typeof, units, unordered, utility, uuid, variant, wave, and xpressive.
Since there are several of mine listed there, these typically have:
tools: Stuff to help me maintain the library, look away now ;)

example(s): Examples for end users, these are typically built into the documentation, so I guess they could go under doc/ at a pinch, but I feel they are more "discoverable" if they're in a top level directory. The Jamfile in test/ references the one in example/ so these get built as part of the tests, even though they're not "tests" as such.

config: Common directory for build time configuration. Could go under build/config I guess.

performance: Performance tests. Not built/run as part of the test suite as they take *much* too long.
There are a few other stray directories in Math that should probably be tidied up when I get the time, but they should be harmless I would have thought.
I think the key thing is always having the following folders

/include
/...
/test
/doc and /doc/html
/example
index.html to redirect, usually to /doc/html/index.html

It's when these are missing or elsewhere that causes trouble? (Other folders are pretty harmless and optional (no /src or /build for a header-only library)?).

My 2p

Paul
(Other folders are pretty harmless and optional (no /src or /build for a header-only library)?).
I think Rene's ultimate goal is to be able to verify that nothing is accidentally left out of testing, that and automated code coverage analysis are certainly worthy goals, the question is how we get there and whether greater discipline is required to get there. John.
4. the documentation root is libs/*/index.html and the documentation lives (as a general rule) in libs/*/doc/html
... and, I forgot, library documentation that requires building has a Jamfile in libs/*/doc. If there's no Jamfile, the documentation is presumed to be already checked into Git in HTML form.
I frankly don't understand why this is so hard. Shouldn't the build/test of all of boost just be the sequence of build/test of each library? In a short time I made the following proof of concept:

# Bash shell script - run from boost root

# set debug output
#set -x
# set extended globs
shopt -s extglob
# expand null file lists to ... null file lists
shopt -s nullglob

function walk_subdirectories () {
    for dir in */
    do
        # drop into the directory
        cd $dir >/dev/null
        # if it's the subdirectory that interests us
        if test ${dir%/} = "$1"
        then
            # invoke the requested command inside the directory
            echo $PWD
            echo $2
        else
            # check subdirectories
            walk_subdirectories "$1" "$2"
        fi
        cd ..
    done
}

Then I can run all the library build/tests with:

. build.sh

then

cd libs
walk_subdirectories test 'echo b2 ...'
cd ..

Or if I want to create test tables for all the libraries I can use

cd libs
walk_subdirectories test 'echo library_status ...'
cd ..

Or if I want to invoke the build test for the libraries which support CMake I can

cd libs
walk_subdirectories CMake 'echo b2 ...'
cd ..

Of course to build all the tools I'd just move to another directory. Etc.

Note that this already handles the multi-level libraries. Also I realize that it fails to include Boost.Test because of a naming conflict, so let's not start spitballing this on that basis. Also note that this is trivial to debug and fix by anyone if something fails. The same can't be said for other boost tools.

I realize that one might not want to use shell scripts - though they work pretty well here. If you're building this with C++ or bjam or whatever, it shouldn't be that much more complicated than this. I can't help but believe that all our build/test/release stuff is over-engineered.

Robert Ramey
On Wed, Jun 3, 2015 at 3:46 PM, Robert Ramey
I frankly don't understand why this is so hard.
Shouldn't the build/test of all of boost just be the sequence of build/test of each library?
In a short time I following proof of concept:
[cut]
Then I can run all the library build/tests with: . build.sh
then
cd libs walk_subdirectories test 'echo b2 ...' cd ..
Or if I want to create test tables for all the libraries I can use
cd libs walk_subdirectories test 'echo library_status ...' cd ..
Of if I want to invoke the build test for the libraries which support CMake I can
cd libs walk_subdirectories CMake 'echo b2 ...' cd ..
Of course to build all the tools I'd just move to another dirctory.
etc.
Note that this already handles the multi-level libraries. Also I realize that it fails to include boost test because of a naming conflict so let's not start spitballing this on this basis. Also note that this is trivial to debug and fix by anyone if something fails. The same can't be said for other boost tools.
I realize that one might not want to use shell scripts - though they work pretty well here. If you're building this with C++ or bjam or whatever, it shouldn't be that much more complicated than this. I can't help but believe that all our build/test/release stuff is over engineered.
No matter how bright anyone is, an automated approach, as above, can't account for human nature. In particular your approach..

First, it misses the following test directories (ones that are currently listed for testing): libs/concept_check, libs/container/example, libs/core/test/swap, libs/disjoint_sets, libs/dynamic_bitset, libs/hash/test/extra, libs/interprocess/example, libs/move/example, libs/regex/example, libs/static_assert, libs/unordered/test/unordered, libs/unordered/test/exception, libs/wave/test/build.

Second, it adds the following not-to-be-tested (maybe.. as we don't really know the intent of the authors) directories: libs/chrono/stopwatches/test, libs/compute/test (this looks like a truly missing tested lib but I can't be sure without asking the compute author), libs/config/test/link/test, libs/filesystem/example/test, libs/functional/hash/test (I'm shaking my fist towards the functional authors!), libs/gil/io/test, libs/gil/numeric/test, libs/gil/toolbox/test.

And oh how painful it was to have to visually compare two large lists of test dirs to discover that information manually!

--
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
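That visual comparison can be scripted away with sort(1) and comm(1). A sketch with tiny, hypothetical directory lists standing in for the real 127- and 197-entry ones:

```shell
#!/bin/sh
# Sketch: diff the "listed for testing" dirs against the "discovered"
# dirs with comm(1) instead of comparing by eye. The two lists here
# are made-up two-entry examples.
mkdir -p /tmp/comm-demo && cd /tmp/comm-demo
printf '%s\n' libs/concept_check libs/regex/test > listed.txt
printf '%s\n' libs/gil/io/test   libs/regex/test > discovered.txt
sort -o listed.txt listed.txt         # comm requires sorted input
sort -o discovered.txt discovered.txt
echo "listed but not discovered:"
comm -23 listed.txt discovered.txt    # only in listed.txt
echo "discovered but not listed:"
comm -13 listed.txt discovered.txt    # only in discovered.txt
```

Fed with the real Jamfile listing and a find-based discovery, the same two comm calls would produce both mismatch lists in one shot.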
On 6/3/15 2:21 PM, Rene Rivera wrote:
No matter how bright anyone is an automated approach, as above, can't account for human nature. In particular your approach..
First misses the following test directories (ones that are currently listed for testing): libs/concept_check,
Doesn't have a test directory.

libs/container/example,

Hmmm - I wouldn't expect that to get tested. I have that directory in the serialization library and it never gets tested even though it includes a Jamfile.v2.

libs/core/test/swap,

Hmm, core includes a Jamfile.v2; I'm not sure why that wouldn't get tested.

libs/disjoint_sets, libs/dynamic_bitset,
libs/hash/test/extra, libs/interprocess/example, libs/move/example, libs/regex/example, libs/static_assert, libs/unordered/test/unordered, libs/unordered/test/exception, libs/wave/test/build.
Second it adds the following not to be tested (maybe.. as we don't really know the intent of the authors) directories: libs/chrono/stopwatches/test, libs/compute/test (this looks like a true missing tested lib but I can't be sure without asking the compute author), libs/config/test/link/test, libs/filesystem/example/test, libs/functional/hash/test (I'm shaking my fist towards the functional authors!), libs/gil/io/test, libs/gil/numeric/test, libs/gil/toolbox/test.
And oh how painful it was to visually have to compare two large lists of test dirs to discover manually that information!
well, maybe I'm just buying your argument that library authors must adhere to some reasonable conventions if they want to get their stuff tested. I think that's your original point. You pointed out as an example the multi-level directory in numeric. I guess that misled me.

Sooooo - I'm going to support your contention that it's totally reasonable and indeed necessary that library authors adhere to some reasonable conventions if they want their libraries to build and test in boost. All we have to do is agree on these conventions. Here's my starting point:

Libraries should have the following directories with jamfiles in them

build
test
doc

Libraries can be nested if they adhere to the above convention.

... add your own list here

So we can agree on that. (I think).

Now I've got another issue. Why can't we just run the local testing setup and upload the results somewhere? Right now I'd like users to be able to run b2 in the directories which interest them, and upload summary test results to a server which would display them. This would mean that testing would be much more widespread.

The current system requires that one download a python script which ... does what? It looks like it downloads a large number of other python scripts which then do a whole bunch of other stuff. My view is that this is and always has been misguided.

a) It's too complex
b) too hard to understand and maintain
c) isn't usable for the casual user
d) requires a large amount of resources by the tester
e) which he can't figure out in advance
f) and does a lot of unknown stuff

Wouldn't it be much easier to do something like the following:

a) pick a library
b) run b2
c) run process jam log
d) run X which walks the tree and produces a pair of files - like library_status does
e) ftp the files to "someplace"
f) the reporting system would consolidate the results and display them

This would be much more flexible and easier. It would be much easier to maintain as well.
Of course this is somewhat speculative as it's not clear to me how the python scripts work, and it's not clear how to see them without actually running the tests myself.

I've been very happy with this whole system for many years.

Robert Ramey
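Robert's a)-f) flow could be glued together in a few lines of shell. The sketch below only echoes each step (a dry run), since b2, process_jam_log, and library_status need a real Boost tree; the function name and the upload wording are hypothetical glue, not an actual tool:

```shell
#!/bin/sh
# Dry-run sketch of the proposed per-library flow: each step is
# echoed rather than executed. Tool names (b2, process_jam_log,
# library_status) come from the thread; the glue is illustrative.
run_library_flow() {
    lib="$1"
    echo "a) cd libs/$lib/test"
    echo "b) b2 >bjam.log 2>&1"
    echo "c) process_jam_log < bjam.log"
    echo "d) library_status results.html links.html"
    echo "e) upload results.html links.html to the report server"
}

run_library_flow serialization
```

Turning the echoes into real commands is the easy part; as the rest of the thread shows, most of the complexity hides in error handling and result consolidation.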
On Wed, Jun 3, 2015 at 5:46 PM, Robert Ramey
On 6/3/15 2:21 PM, Rene Rivera wrote:
No matter how bright anyone is an automated approach, as above, can't account for human nature.
well, maybe I'm just buying your argument that library authors must adhere to some reasonable conventions if they want to get their stuff tested. I think that's your original point.
Yes, that's a big point.
You pointed out as an example the multi-level directory in numeric. I guess that mislead me.
Sorry about that.. It was easier to just count and post numbers than go through the pain of enumerating individual examples.

Sooooo - I'm going to support your contention that it's totally reasonable and indeed necessary that library authors adhere to some reasonable conventions if they want their libraries to build and test in boost. All we have to do is agree on these conventions. Here's my starting point: Libraries should have the following directories with jamfiles in them: build test doc

Which is what we've already agreed to for more than a decade.

Libraries can be nested if they adhere to the above convention.

Yep.

... add your own list here

Peter, John, and I have some more specific ones for the list.

So we can agree on that. (I think).

We, as in Peter, John, you, and I, can ;-) Can't speak for others. But I would think there's some disagreement given the current variety of structures.

Now I've got another issue. Why can't we just run the local testing setup and upload the results somewhere.
It's not impossible, or even hard.
Right now I'd like users to be able to run b2 in the directories which interest them, and upload summary test results to a server which would display them.
You've mentioned this desire before :-)
This would mean that testing would be much more widespread.
It may be more widespread. But it likely will not be more varied.

The current system requires that one download a python script which ... does what?
Mostly it does a lot of error checking, fault tolerance, and handling of options (although I keep working to remove the options that are no longer used in the hope that things will get simpler).
it looks like it downloads a large number of other python scripts which then do a whole bunch of other stuff.
It downloads 3 other Python files (I could reduce that to 2 now.. just haven't gotten to it). But it also downloads the current Boost Build, process_jam_log (the C++ program), and of course Boost itself. Although it does clone the regression repo to get the related process_jam_log sources and build files.
My view is that this is and always has been misguided.
a) It's too complex
Yes, but it used to be worse.
b) too hard to understand and maintain.
Yes, but I'm trying to fix that. From multiple fronts, including investigating the option of bypassing it entirely.
c) isn't usable for the casual user
It's not meant to be as it's a serious commitment of resources to be tester for Boost. But it's easy enough to read the brief instructions and run it without knowing what it actually does.
d) requires a large amount of resources by the test
The test system resources are minuscule compared to the resources to run the tests themselves (i.e. if you just ran b2 in the status dir).
e) which he can't figure out in advance
We have experimentally arrived at resource numbers for full testing (the test scripts themselves use so little they would likely run on a modern smart phone).
f) and does a lot of unknown stuff.
I gave a presentation long ago at BoostCon #2 (IIRC), yes that far back, saying what gets done. And it does less now than it used to. But can be summarized as: 1) downloads the test system, 2) downloads Boost, 3) builds the test system, 4) builds and tests Boost, 5) processes the results, 6) uploads to a server.
Wouldn't it be much easier to do something like the following:
0) download Boost (and deal with errors and proxies)
a) pick a library
a.2) build b2
a.3) install/setup b2 and your toolset, and possibly a device, simulator, or VM
b) run b2
b.2) download process_jam_log (and deal with errors and proxies)
b.3) build process_jam_log
c) run process_jam_log
c.2) download X if it's not part of Boost
c.3) build X
d) run X, which walks the tree and produces a pair of files - like library_status does.
...It would need to produce a considerably more information-laden file than what library_status does to be useful to library authors (like what one of those Python scripts above currently does). But I understand your point.

e) ftp the files to "someplace"
Like it does now.
f) the reporting system would consolidate the results and display them.
Like it does now. And of course don't forget to add a bunch of error handling (remember, it's the internet; errors are everywhere) and proxy options.

This would be much more flexible and easier. It would be much easier to maintain as well.
What part would be more flexible? I don't see how it would be easier to maintain. The code would be easier to maintain? The servers easier to maintain? The report consolidation servers and code would be easier?

Of course this is somewhat speculative as it's not clear to me how the python scripts work, and it's not clear how to see them without actually running the tests myself.
They pretty much work just like you described, but with more automation glue :-)

I've been very happy with this whole system for many years.

Thank you.. But I haven't been happy with it. It works, but it has many drawbacks (none of which you mentioned). The biggest being that it suffers from low resilience. And in an ideal world I would rather see a testing system in which an individual Boost library..

1) The author registers to be tested on a cloud CI system (like Travis-CI and Appveyor)
2) Has a configuration that is standard for such testing across all of the Boost libraries. Such a configuration would:
a) Automatically clone the appropriate git branch (including develop, master, PRs, whatever else you want)
b) Download the latest, small, single test script, which would be set up from the common configuration to run for each of the cloud CI test steps.
c) Download & install required software (for example, the version of gcc, clang, etc. it's going to test with).
d) Download the latest Boost Build (and build + install it)
e) Download (aka git clone) the dependent Boost libraries (master by default, but could be any branch)
f) Run the tests with b2.
g) As part of (f), b2 "itself" would upload test results to a cloud results system live (which would process the results live and present them live)

Anyway.. Test system engineering was not actually the substance of the thread. But if you want to see 1, 2.a, 2.b, 2.c, 2.d, and 2.f in action you can take a look at:

https://ci.appveyor.com/project/boostorg/predef
https://travis-ci.org/boostorg/predef
https://github.com/boostorg/predef/blob/develop/appveyor.yml
https://github.com/boostorg/predef/blob/develop/.travis.yml
https://github.com/boostorg/regression/blob/develop/ci/src/script.py

Note, the script.py is going to get smaller soon, as there's extra code in it I thought I needed as I implemented this over the past two weeks.
-- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net -- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
I've been very happy with this whole system for many years.
LOL - sorry I meant to say "unhappy"
Thank you.. But I haven't been happy with it.
So we're in agreement again. It works, but it has many
drawbacks (none of which you mentioned). The biggest being that it suffers from low resilience. And in an ideal world I would rather see a testing system in which an individual Boost library..
1) The author registers to be tested on a cloud CI system (like Travis-CI and Appveyor)
2) Has a configuration that is standard for such testing across all of the Boost libraries. Such a configuration would:
a) Automatically clone the appropriate git branch (including develop, master, PRs, whatever else you want)
b) Download the latest, small, single test script, which would be set up from the common configuration to run for each of the cloud CI test steps.
c) Download & install required software (for example, the version of gcc, clang, etc. it's going to test with).
d) Download the latest Boost Build (and build + install it)
e) Download (aka git clone) the dependent Boost libraries (master by default, but could be any branch)
f) Run the tests with b2.
g) As part of (f), b2 "itself" would upload test results to a cloud results system live (which would process the results live and present them live)
How about something much simpler:

1) clone or update the boost super project
2) bootstrap.sh to create binaries - if he hasn't already
3) run b2 headers
4) cd to any library he wants to test
5) run the test script - a little more complicated than the library_status one - leaves two files - a test result table and html text
6) ftp the test_result tables to "someplace"
7) if desired, run library_status to display the test result table

This would be immensely simpler than the current system - basically because it does less. It would:

1) Permit and encourage each user to test the libraries he's going to use, on the platforms he's going to use them on, and to upload the results for those libraries.
2) Be easy for users of non-accepted boost libraries to use. That is, once one cloned a non-boost library into the right place, it could be tested just as the boost libraries are.
3) Provide no separate testing procedure for official testers vs library developers - the same system for everyone. Much simpler.
4) Not require much in the way of scripts, and not require python.
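The simpler per-library procedure described above can be expressed as a small driver. This is purely illustrative: the command names (bootstrap.sh, b2, the super-project layout) come from the thread, but the driver itself, its function name, and its dry-run behaviour are my assumptions, not any actual Boost regression script.

```python
import subprocess

def plan_library_test(boost_root, library, dry_run=True):
    """Compose the ordered (cwd, command) steps for testing one library.

    Hypothetical sketch only: with dry_run=True (the default) the steps are
    returned without being executed, so the sequence can be inspected.
    """
    steps = [
        # 1) clone or update the super-project module for this library
        (boost_root, ["git", "submodule", "update", "--init", f"libs/{library}"]),
        # 2) bootstrap to create the b2 binary (if not already built)
        (boost_root, ["./bootstrap.sh"]),
        # 3) recreate the combined header tree
        (boost_root, ["./b2", "headers"]),
        # 4) run the library's own tests from its test directory
        (f"{boost_root}/libs/{library}/test", ["../../../b2"]),
    ]
    if not dry_run:
        for cwd, cmd in steps:
            subprocess.run(cmd, cwd=cwd, check=True)
    return steps
```

Uploading the result tables "someplace" would be a further step; it is omitted here because the destination and transport were left open in the thread.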
Anyway.. Test system engineering was not actually the substance of the thread.
I see that now. It never occurred to me that you would somehow try to accommodate gratuitous deviations from our traditional/standard directory structure. I think you made a mistake going down this path in the first place. Actually, I think you should stop doing that now and only support the standard layout. Anything that doesn't get tested is the library maintenance department's problem. (We're working on that as a separate initiative.)

But this subject is very important to me - I'm sort of amazed that most everyone else seems content with the current setup.

But if you want to see 1, 2.a, 2.b, 2.c, 2.d, and 2.f in action you
can take a look at:
https://ci.appveyor.com/project/boostorg/predef
https://travis-ci.org/boostorg/predef
https://github.com/boostorg/predef/blob/develop/appveyor.yml
https://github.com/boostorg/predef/blob/develop/.travis.yml
https://github.com/boostorg/regression/blob/develop/ci/src/script.py
Note, the script.py is going to get smaller soon, as there's extra code in it I thought I needed as I implemented this over the past two weeks.
Hmmm - looks like we're on divergent paths here.
On Thu, Jun 4, 2015 at 12:36 AM, Robert Ramey
I've been very happy with this whole system for many years.
LOL - sorry I meant to say "unhappy"
Thank you.. But I haven't been happy with it.
So we're in agreement again.
:-)
I see that now. It never occurred to me that you would somehow try to accommodate gratuitous deviations from our traditional/standard directory structure. I think you made a mistake going down this path in the first place.
It actually wasn't my mistake.. I inherited the "bad test list". And have been dealing with it for a long time now. But never caring enough to bring down the hammer. Now I've reached the point where people are asking why stuff is so fragile, unreliable, uninformative, etc. And my answer is now "you made it that way", and now I'm trying to fix it and make it better.
But this subject is very important to me - I'm sort of amazed that most everyone else seems content with the current setup.
It's important to about 4 or 5 of us only, it seems :-( I'm not surprised though. I've learned that there's a distinct disdain among programmers for valuing infrastructure, as it's not a "sexy" endeavor.

But if you want to see 1, 2.a, 2.b, 2.c, 2.d, and 2.f in action you
can take a look at:
https://ci.appveyor.com/project/boostorg/predef
https://travis-ci.org/boostorg/predef
https://github.com/boostorg/predef/blob/develop/appveyor.yml
https://github.com/boostorg/predef/blob/develop/.travis.yml
https://github.com/boostorg/regression/blob/develop/ci/src/script.py
Note, the script.py is going to get smaller soon, as there's extra code in it I thought I needed as I implemented this over the past two weeks.
Hmmm - looks like we're on divergent paths here.
Not really.. I was only referring to a replacement for the current system. I see your path as an addition worth doing also. And some of the steps (the ones not implemented yet) will help in making your ideal easier to operate in tandem with the CI testing.

-- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net -- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
On 6/4/15 7:15 AM, Rene Rivera wrote:
But this subject is very important to me - I'm sort of amazed that most everyone else seems content with the current setup.
It's important to about 4 or 5 of us only it seems :-( I'm not surprised though. I've learned that there's a distinct disdain for programmers to value infrastructure. As it's not a "sexy" endeavor.
Totally off topic, but I just wanted to get it off my chest. This is therapy for me. I suppose you're right. Maybe infrastructure isn't sexy. But dealing with bad infrastructure is a HUGE time waster. I can't see how it is that I seem to be the only one who gets frustrated with this. BTW it's not just Boost. In many of the organizations I've worked for it's even worse. I just can't wrap my head around this. Don't get me started on documentation.

Robert Ramey
On Thu, Jun 4, 2015 at 1:46 AM, Robert Ramey
Libraries should have the following directories with jamfiles in them
build
build can be missing if the library does not need compiling.
test doc
The most important one is missing: include. I would go as far as proposing this single directory as the sign of a library root directory. If that looks too generic (e.g. if include directories are expected to be inside examples or tests), maybe meta is suitable for that as well.
Libraries can be nested if they adhere to the above convention.
+1. I think nesting should not be prohibited.
On 6/4/15 12:32 AM, Andrey Semashev wrote:
On Thu, Jun 4, 2015 at 1:46 AM, Robert Ramey
wrote: Libraries should have the following directories with jamfiles in them
build
build can be missing if the library does not need compiling.
test doc
The most important one is missing: include.
Of course, this convention is needed to support the b2 headers capability. I think that:

a) we've had enough discussion
b) all interested parties have had opportunity to comment
c) we have a consensus that enforcing a few rules which have been in place for many years will make tooling much simpler
d) Rene should announce that this policy update will take effect after ... and thereafter, libraries which don't conform to the convention won't be automatically tested.
e) and request that library maintainers make the necessary updates.

Thanks for giving us the opportunity to have some say.

Robert Ramey
On Thu, Jun 4, 2015 at 8:13 AM, Robert Ramey
a) we've had enough discussion
b) all interested parties have had opportunity to comment
c) we have a consensus that enforcing a few rules which have been in place for many years will make tooling much simpler
d) Rene should announce that this policy update will take effect after ... and thereafter, libraries which don't conform to the convention won't be automatically tested.
e) and request that library maintainers make the necessary updates.
Agreed.

Thanks for giving us the opportunity to have some say.

Welcome.

-- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net -- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
On 6/4/15 12:32 AM, Andrey Semashev wrote:
On Thu, Jun 4, 2015 at 1:46 AM, Robert Ramey
wrote: Libraries should have the following directories with jamfiles in them
build
build can be missing if the library does not need compiling.
test doc
also I would expand this slightly to:

build (if required)
test
doc
jamfile.v2 (if required)
html

I've always disliked having documentation in one place if it is built with BoostBook and in another place if it is built some other way. So either use the above, or copy all the handcrafted html to the "other place". I prefer the above because it keeps all of the library under one directory.

Robert Ramey
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Robert Ramey Sent: 04 June 2015 14:17 To: boost@lists.boost.org Subject: Re: [boost] What would make tool authors happier..
On 6/4/15 12:32 AM, Andrey Semashev wrote:
On Thu, Jun 4, 2015 at 1:46 AM, Robert Ramey
wrote: Libraries should have the following directories with jamfiles in them
build
build can be missing if the library does not need compiling.
test doc
also I would expand this slightly to:

build (if required)
test
doc
jamfile.v2 (if required)
html

I've always disliked having documentation in one place if it is built with BoostBook and in another place if it is built some other way. So either use the above, or copy all the handcrafted html to the "other place". I prefer the above because it keeps all of the library under one directory.
Wherever it is, surely there should be a /libs/some_library/index.html redirect to get to wherever the real stuff is. (and there is no reason why not to have another index.html in the /doc folder as well - in case users expect it there). Paul --- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830
On 06/04/2015 10:47 AM, Paul A. Bristow wrote:
Wherever it is, surely there should be a
/libs/some_library/index.html
redirect to get to wherever the real stuff is. (and there is no reason why not to have another index.html in the /doc folder as well - in case users expect it there).
Paul
I don't want to bike-shed this discussion, but I have seen this come up a few times and decided to ask: Why /libs/some_library/index.html ? /libs/some_library/doc/index.html makes great sense. By convention, an index.html at the library's root directory could mean the documentation index, but since we have a doc dir, shouldn't the canonical place be there?

michael

-- Michael Caisse ciere consulting ciere.com
Michael Caisse wrote:
I don't want to bike-shed this discussion but I have seen this come up a few times and decided to ask:
Why /libs/some_library/index.html ?
Opening libs/some_library with a browser has always been the canonical way to get the documentation for some_library, on the website and off.
On June 4, 2015 3:05:46 PM EDT, Peter Dimov
Michael Caisse wrote:
Why /libs/some_library/index.html ?
Opening libs/some_library with a browser has always been the canonical way to get the documentation for some_library, on the website and off.
Exactly. I use it frequently. ___ Rob (Sent from my portable computation engine)
On 2 June 2015 at 19:08, Rene Rivera
First, it means is banning of git sub-modules outside of directly under boost-root/libs and boost-root/tools. Dealing with git sub-modules is difficult enough even when it's just a handful of sub-modules.
I've had no problem dealing with the numeric modules. The output of 'git config -f .gitmodules -l' contains pretty much everything you need to know. You can get a list of submodule paths using:

git config -f $BOOST_ROOT/.gitmodules -l | sed -n 's/submodule\.\([^.]*\)\.path=\(.*\)/\2/p'

If you're trying to find modules by walking the filesystem, you'll just create problems for yourself. FWIW I think most of what you've written about is a waste of time, but knock yourself out.
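For tools written in Python rather than shell, the same information can be pulled out of .gitmodules directly: it is an INI-style file that the standard library's configparser handles. A rough equivalent of the git config | sed pipeline above (the function name is mine, not an existing tool):

```python
import configparser

def submodule_paths(gitmodules_text):
    """Return the submodule paths declared in the text of a .gitmodules file."""
    parser = configparser.ConfigParser()
    parser.read_string(gitmodules_text)
    return [
        section["path"]
        for name, section in parser.items()
        # sections look like [submodule "libs/config"]; this skips DEFAULT
        if name.startswith('submodule "') and "path" in section
    ]
```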
Daniel James wrote:
I've had no problem dealing with the numeric modules. The output of 'git config -f .gitmodules -l' contains pretty much everything you need to know.
...
If you're trying to find modules by walking the filesystem, you'll just create problems for yourself.
Walking the filesystem produces better results in the following cases:

- when you have manually placed a (proposed) library in libs/ that is not yet a submodule
- when you have a Boost directory structure that has no Git metadata
- or when you have a Boost directory that doesn't have all the libraries in libs/ but just a subset

It's convenient for tools to be able to handle these cases and to not be limited to what .gitmodules says. (At other times it can indeed be convenient for tools to look at .gitmodules and not the filesystem, but such is life.)

If we're going to make modular/subset releases work, it would indeed be beneficial for all tools to look at libs/* and "just work" on whatever is there. I'd argue that it would even be necessary. We could use a separate manifest file to avoid looking at libs/* just to be contrary, but I see no point in doing so.
On 4 June 2015 at 19:56, Peter Dimov
Daniel James wrote:
I've had no problem dealing with the numeric modules. The output of 'git config -f .gitmodules -l' contains pretty much everything you need to know.
...
If you're trying to find modules by walking the filesystem, you'll just create problems for yourself.
Walking the filesystem produces better results in the following cases:
- when you have manually placed a (proposed) library in libs/ that is not yet a submodule
- when you have a Boost directory structure that has no Git metadata
In these cases, why does your tool need to understand git modules?
- or when you have a Boost directory that doesn't have all the libraries in libs/ but just a subset
That's easily dealt with. You just check that the module is present.
Daniel James wrote:
On 4 June 2015 at 19:56, Peter Dimov
wrote: ... Walking the filesystem produces better results in the following cases:
- when you have manually placed a (proposed) library in libs/ that is not yet a submodule
- when you have a Boost directory structure that has no Git metadata
In these cases, why does your tool need to understand git modules?
It doesn't.
On 4 June 2015 at 21:30, Peter Dimov
Daniel James wrote:
On 4 June 2015 at 19:56, Peter Dimov
wrote: ...
Walking the filesystem produces better results in the following cases:
- when you have manually placed a (proposed) library in libs/ that is not yet a submodule
- when you have a Boost directory structure that has no Git metadata
In these cases, why does your tool need to understand git modules?
It doesn't.
So why does it matter where the modules are?
Daniel James wrote:
On 4 June 2015 at 21:30, Peter Dimov
wrote: Daniel James wrote:
On 4 June 2015 at 19:56, Peter Dimov
wrote: ...
Walking the filesystem produces better results in the following cases:
- when you have manually placed a (proposed) library in libs/ that is not yet a submodule
- when you have a Boost directory structure that has no Git metadata
In these cases, why does your tool need to understand git modules?
It doesn't.
So why does it matter where the modules are?
It doesn't matter where the modules are. It matters where the libraries are. To determine that, you walk the filesystem. It's much simpler if the directory structure is regular and libraries are libs/*.

So for example, you want to find all headers - you look at libs/*/include. You want to build all libraries that require building - you look for libs/*/build/Jamfile*. You want to test all libraries - you look for libs/*/test/Jamfile*. You want the failure markup - you look for libs/*/test/failure-markup.xml. You want the changelog - you look for libs/*/doc/changes.xml (or whatever).

The first two already work, although they do need to account for a few special cases. I can't think of a use case that would need to be concerned with where the Git modules are, or with whether there are Git modules at all.
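A minimal sketch of that filesystem walk, using the presence of an include/ subdirectory as the marker of a library root (as proposed elsewhere in the thread) and recursing into grouping directories such as libs/numeric to pick up nested libraries. The function is illustrative, not an existing Boost tool:

```python
import os

def find_libraries(libs_root):
    """Yield library directories under libs_root, including nested ones."""
    for entry in sorted(os.listdir(libs_root)):
        path = os.path.join(libs_root, entry)
        if not os.path.isdir(path):
            continue
        if os.path.isdir(os.path.join(path, "include")):
            yield path  # a library root: it has its own include/
        else:
            # maybe a grouping dir like libs/numeric; look one level deeper
            yield from find_libraries(path)
```

From each library found this way, the other conventional locations follow mechanically: build/Jamfile*, test/Jamfile*, doc/, and so on.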
On 4 June 2015 at 22:06, Peter Dimov
It doesn't matter where the modules are. It matters where the libraries are. To determine that, you walk the filesystem. It's much simpler if the directory structure is regular and libraries are libs/*. So for example you want to find all headers - you look at libs/*/include. You want to build all libraries that require building - you look for libs/*/build/Jamfile*. You want to test all libraries - you look for libs/*/test/Jamfile*. You want the failure markup - you look for libs/*/test/failure-markup.xml. You want the changelog - you look for libs/*/doc/changes.xml (or whatever).
The first two already work, although they do need to account for a few special cases.
Multiple people have mentioned the need for nested libraries. I think they need to be supported.
I can't think of a use case that would need to be concerned with where the Git modules are, or with whether there are Git modules at all.
The commit bot I wrote obviously has to use them, some of my other tools pull data from bare repositories so that I can quickly deal with different branches, a build tool might need to deal with them if it wants to do something like link to the module's current version. My original post was a reply to a comment about the difficulty of dealing with git modules. If you're not concerned with git modules, why did you reply to it?
Daniel James wrote:
My original post was a reply to a comment about the difficulty of dealing with git modules. If you're not concerned with git modules, why did you reply to it?
Because, from context, it seemed to me that Rene didn't actually mean git modules as such, but libraries. But I may've been wrong.
On Thu, Jun 4, 2015 at 5:55 PM, Peter Dimov
Daniel James wrote:
My original post was a reply to a comment about the difficulty of dealing
with git modules. If you're not concerned with git modules, why did you reply to it?
Because, from context, it seemed to me that Rene didn't actually mean git modules as such, but libraries. But I may've been wrong.
I guess I meant both.. I've run into one case, two times now, of trying to create a git repo that is a subset of Boost but otherwise has the same structure. Here's one of them: <https://github.com/grafikrobot/boost-doctools>. It was a chore to create because it went something like this: a) build the doctools (quickbook) and get errors, b) look at the missing header errors and determine which library was missing, c) find the library git repo, d) add the submodule for it at root/libs, e) repeat. This would be something I would highly consider automating the next time around. But step (d) was actually hard, because the library did not always need to go under libs, but instead sometimes needed to go some further level deep, and I made various mistakes that were hard to recover from because I don't know enough about recovering from git submodule errors to make it "easy".

Although what's funny is that I've done a subset of Boost as part of a git repo three times now. The third time, my memory of the previous two times made me choose to avoid that pain. Instead I wrote a Boost Build extension that made the requirement of having the Boost structure go away.

The other experience is having started thinking about modular testing of libraries, and realizing that searching around for where the libraries that need to get tested are is not obvious. But I've already mentioned this case at length, so I won't bore you further with it ;-)

-- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net -- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
On 5 June 2015 at 03:50, Rene Rivera
I've run into one case, two times now, of trying to create a git repo that is a subset of Boost but otherwise has the same structure. Here's one of them: <https://github.com/grafikrobot/boost-doctools>. It was a chore to create because it went something like this: a) build the doctools (quickbook) and get errors, b) look at the missing header errors and determine which library was missing, c) find the library git repo, d) add the submodule for it at root/libs, e) repeat. This would be something I would highly consider automating the next time around. But step (d) was actually hard, because the library did not always need to go under libs, but instead sometimes needed to go some further level deep, and I made various mistakes that were hard to recover from because I don't know enough about recovering from git submodule errors to make it "easy". Although what's funny is that I've done a subset of Boost as part of a git repo three times now. The third time, my memory of the previous two times made me choose to avoid that pain. Instead I wrote a Boost Build extension that made the requirement of having the Boost structure go away.
I'd see that as something that's just too painful to do, regardless of module locations. But if you must, instead of adding the modules to a repo, I'd create a clean clone of the super project and only initialise the modules that I need. That way you can quickly grab all the numeric modules using:

git submodule update --init -- libs/numeric/

And pretend they're a single module. Which means you'll get a few extra modules - but only a few.

Since Boost Build scans for include files, is it possible to get it to list all the files it thinks are required? I realise it's not entirely accurate because of macros, but hopefully it would be good enough. Or perhaps get it to run 'g++ -H' or 'cl /showIncludes' to tell which files were used. With that information it shouldn't be that hard to work out which modules are required.
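The 'g++ -H' idea can be sketched as a small parser: -H prints every included header to stderr, one per line, prefixed by dots indicating include depth. Given a precomputed map from boost/ header paths to modules (which a tool would build by scanning libs/*/include), the required modules fall out. Both the function and the map are hypothetical illustrations:

```python
def modules_from_gcc_H(stderr_text, header_to_module):
    """Return the set of modules whose headers appear in `g++ -H` output."""
    modules = set()
    for line in stderr_text.splitlines():
        if not line.startswith("."):
            continue  # -H header lines start with one dot per include depth
        path = line.lstrip(". ")
        # normalize to the boost/... tail of the path, if present
        idx = path.find("boost/")
        if idx != -1:
            module = header_to_module.get(path[idx:])
            if module is not None:
                modules.add(module)
    return modules
```

The result is approximate for exactly the reason given above: headers included only under certain macro configurations won't appear in a single compiler run.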
Daniel James wrote: ...
On 5 June 2015 at 03:50, Rene Rivera
wrote: ... It was a chore to create because it went something like this: a) build the doctools (quickbook) and get errors, b) look at the mising header errors and determine which library was missing, c) find the library git repo, d) add the submodule for it at root/libs, e) repeat. ...
Since Boost Build scans for include files, is it possible to get it to list all the files it thinks are required? I realise it's not entirely accurate because of macros, but hopefully it would be good enough.
boostdep can do that, but it doesn't support modules in tools/ as a starting point. bcp can also output an include report, but it only lists headers, whereas boostdep lists modules. The easiest way right now is probably to symlink libs/quickbook to tools/quickbook and then run "boostdep --secondary quickbook".
participants (9)

- Andrey Semashev
- Daniel James
- John Maddock
- Michael Caisse
- Paul A. Bristow
- Peter Dimov
- Rene Rivera
- Rob Stewart
- Robert Ramey