On 4/30/23 18:37, Robert Ramey via Boost wrote:
> The implementation of Boost CI follows the normal pattern for this kind of functionality. It entails downloading the latest version of the library to be tested and its dependencies (or maybe all of Boost?) from the repo, building everything from scratch, running all the tests, and logging the output thereof. This is done for each compiler/environment to be tested. Other than the raw console logs, there is no presentation of the test results (e.g. the Boost test matrix).
>
> This is extremely unsatisfactory.
>
> a) It's ridiculously resource-intensive, rebuilding everything all the time.
Rebuilding everything is good. Incremental builds are fragile.
> b) It takes forever unless one has many more servers than we have.
>
> c) It builds and tests even the most trivial check-in, like adding a period to a documentation page.
You can mark commits to skip CI, if you want.
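For example, if the repository's CI runs on GitHub Actions, a marker in the commit message is enough to suppress the run (a sketch; other CI services use similar but not identical markers):

    # a doc-only change (hypothetical example); GitHub Actions skips
    # the workflow run when the head commit message contains [skip ci]
    git commit -m "Fix typo in docs [skip ci]"
    git push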
> I would suggest we throw the whole thing out and consider a different approach, more along the lines of the one used by the CMake test server. How CMake does it is described in full in the CMake documentation. Basically it works like this:
>
> I run my CMake tests locally. This means that tests are not run on a central server, but on some user's configuration. This spreads the load.
>
> Then I run a special CMake target which takes the latest results and posts them to a server. The results are not posted automatically; I have to ask for it, so it only happens when I know that a change is not trivial.
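Concretely, this is the CTest/CDash dashboard-client workflow (a sketch, assuming the project enables CTest and points at a dashboard server via CTestConfig.cmake):

    # configure, build, and test locally, then submit the results
    # to the configured dashboard server (CDash)
    ctest -D Experimental

    # equivalently, via the target that include(CTest) creates
    cmake --build . --target Experimental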
> It also means that all configurations actually in use (and only those configurations) are tested and the results logged.
>
> The server can then be queried, on demand, to produce a table of results by configuration, compiler, library, etc. The CMake people maintain a basic server where one can post one's results, but its query facility isn't really complete enough for Boost's needs and would have to be upgraded to Boost's requirements. Actually, we have several applications which might serve as a starting point: my library_status app, which I use to produce a table of results, and the app which produces the Boost library test matrix.
>
> All in all, this would produce a system which would actually be useful.
As a developer, I would like to know relatively soon (in a matter of minutes to hours) that the commit I pushed works on all configurations I care to support, so that I can merge it to master. I do not want to wait until someone bothers to check out and test this commit on every supported system. You cannot expect that to happen any time soon, and you'll be lucky to hear a bug report before a release deadline, when it is already too late to fix it.