status/expected_results.xml must go!
In a discussion on the Steering Committee mailing list, Robert Ramey wrote:
I'm still having a problem with a newer library which supports only a more recent version of C++, such as C++11. Our testing system doesn't permit me to say "don't test this library with compilers which don't support C++11". With our current system, anyone who looks at test results will see failures for compilers which don't support the version the library requires. This is incorrect and misleading to potential users of the library, and it discourages authors from submitting libraries which depend on modern C++ features. It also wastes a lot of testing time generating this bogus information. I would ask that the powers that be try to convince the maintainers of the Boost testing setup to address this.
The current way we mark up expected failures is via boost-root/status/expected_results.xml. That creates two problems:

* It is a maintenance nightmare as more and more libraries are added to Boost. This is library-specific information and so should be part of each module, not the super-project.

* As Robert points out, we shouldn't even be running tests that are bound to fail anyhow. It would seem that each library needs to be able to tell bjam/b2 to skip certain tests, or even all tests, under certain conditions.

As Marshall Clow has been pointing out, we desperately need someone to take over the regression test reporting maintenance and redesign it to meet our current needs.

--Beman
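For illustration: Boost.Build already supports conditional requirements in a test Jamfile, so a library can tell b2 not to build particular tests for particular toolsets, along the lines Beman describes. This is a minimal sketch; the test name and toolset versions are placeholders, not taken from any real library:

    # test/Jamfile.v2 (sketch): skip a C++11-only test on old toolsets
    # instead of letting it fail and pollute the test results.
    import testing ;

    run cpp11_only_test.cpp
        : : :   # no arguments, no input files
          # Conditional requirements: do not even build this test with
          # compilers known to lack the needed C++11 support.
          <toolset>msvc-9.0:<build>no
          <toolset>gcc-4.2:<build>no
        ;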
* It is a maintenance nightmare as more and more libraries are added to Boost. This is library-specific information and so should be part of each module, not the super-project.
+1
* As Robert points out, we shouldn't even be running tests that are bound to fail anyhow. It would seem that each library needs to be able to tell bjam/b2 to skip certain tests, or even all tests, under certain conditions.
We have that now: take a look at the multiprecision tests and how conditional dependencies (GMP, MPFR, etc.) are handled. If Robert needs a hand to set this up for serialization, I'm willing to help, as I've been there and done this sort of thing already. John.
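The pattern John refers to can be sketched as follows; the target and file names are illustrative rather than copied from the actual multiprecision Jamfile. A tiny probe program exercises the optional dependency, and check-target-builds turns a probe failure into <build>no, so the dependent tests are skipped instead of reported as failures:

    # test/Jamfile.v2 (sketch): conditionally skip tests that need GMP.
    import testing ;

    lib gmp ;   # searched system library, resolved at link time

    # Probe target: builds only where <gmp.h> and libgmp are available.
    exe has_gmp : has_gmp.cpp gmp ;
    explicit has_gmp ;

    run test_gmp_backend.cpp
        : : :
          # If the probe fails to build, mark the test <build>no so it
          # is skipped rather than showing up as a bogus failure.
          [ check-target-builds has_gmp "GMP available" : <library>gmp : <build>no ]
        ;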
Just to add to that: it should be possible for Boost.Config to provide boilerplate rules to check for one or more features at build time. I'll have an experiment, as potentially we can simplify (and centralize) this kind of checking a lot. John.
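What such a boilerplate rule might look like is sketched below; nothing like this existed at the time of this message, so the rule name, probe targets, and feature names are all hypothetical. The idea is to expand a list of feature names into build-time checks, each of which compiles a one-line probe and yields <build>no when the probe fails:

    # Hypothetical Boost.Config checks Jamfile sketch.
    rule requires ( names + )
    {
        local result ;
        for local name in $(names)
        {
            # has_$(name).cpp would be a one-line probe (declared as a
            # target elsewhere) that fails to compile when the feature
            # is missing, e.g.:
            #   #ifdef BOOST_NO_CXX11_VARIADIC_TEMPLATES
            #   #  error "no variadic templates"
            #   #endif
            result += [ check-target-builds has_$(name) $(name) : : <build>no ] ;
        }
        return $(result) ;
    }

    # A library's test Jamfile could then write:
    #   run my_cpp11_test.cpp : : : [ requires cxx11_variadic_templates ] ;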
participants (2)
- Beman Dawes
- John Maddock