On 11/05/2015 16:46, Adam Wulkiewicz wrote:
Hi,
Recently the regression summary was upgraded. The cells corresponding to failing tests can now be named comp, link, run or fail, so we can see the nature of a failure without going into the details. This is great: many thanks!
See, e.g. the tests for Math: http://www.boost.org/development/tests/develop/developer/math.html
The last category of failures (generic "fail") covers the others or unknown ones. Currently the "Lib" errors fall into this category. There are some in Math, e.g.: http://www.boost.org/development/tests/develop/developer/output/CrystaX-NET-...
That's a weird one - if you follow the links it actually says that linking the lib succeeded - which leaves me wondering what actually went wrong?
So we could mark them explicitly as "lib" failures, and then the other/unknown failures could be marked as "unkn" or left as they are now - "fail". This makes sense if there can be other "types" of failures, e.g. caused by some error in the testing scripts, Build or Regression, for which the reason is somehow unknown. Though I haven't seen any.
I propose to go further with this and to mark, in a special way, the compilation failures for which we know the reason of the failure:
- "file" - file too big (reported on Windows, e.g. http://www.boost.org/development/tests/develop/developer/output/MinGW-w64-4-...)
- "time" - time limit exceeded (e.g. http://www.boost.org/development/tests/develop/developer/output/teeks99-03b-...)
I have seen these in many libraries and I think it would be very useful for libraries with many tests, like Math or Geometry.
+1, time limit exceptions are a big issue for the Math and Multiprecision libs... and yes, I've already spent a lot of time refactoring to make them smaller, but they still persist. I suspect many of these are caused by virtual machines with insufficient RAM allocated, that then end up thrashing the HD: many of the tests that time out compile really pretty quickly here (even on obsolete hardware).
The other types of failures I can think of aren't that important to me personally, but maybe someone else would appreciate them, e.g. compiler developers:
- "ierr" or "cerr" - internal compiler error or segmentation fault during compilation (e.g. http://www.boost.org/development/tests/develop/developer/output/PNNL-RHEL6_6... or http://www.boost.org/development/tests/develop/developer/output/teeks99-03h-...)
- "segf" - segmentation fault during a run (http://www.boost.org/development/tests/develop/developer/output/BenPope%20x8...)

+1, internal compiler errors aren't really our fault, are they? It would be good to be able to screen them out easily.
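Detecting these markers would presumably come down to pattern-matching the captured toolset output. A minimal Python sketch of the idea - the patterns, marker names and function name here are hypothetical illustrations, not the actual report-generation code, and real toolsets would need their exact messages added:

```python
import re

# Hypothetical message patterns; the real report scripts would need the
# exact diagnostics emitted by each compiler/toolset.
FAILURE_PATTERNS = [
    ("file", re.compile(r"[Ff]ile too big")),
    ("time", re.compile(r"[Tt]ime limit exceeded")),
    ("ierr", re.compile(r"[Ii]nternal compiler error")),
    ("segf", re.compile(r"[Ss]egmentation fault")),
]

def classify_failure(log_text):
    """Return a short failure marker for a test log, or "unkn" if the
    reason cannot be recognized."""
    for marker, pattern in FAILURE_PATTERNS:
        if pattern.search(log_text):
            return marker
    return "unkn"
```

Note that the order of the patterns matters: an internal compiler error that itself reports a segmentation fault would still be classified as "ierr", which seems like the more useful marker.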
Furthermore I propose to use various colors for different failure reasons.
You can see the examples here: https://github.com/awulkiew/summary-enhancer https://github.com/awulkiew/summary-enhancer/tree/master/example/pages
Of course the colors could be changed; we should be able to modify them in the master.css file. In the examples above they vary a lot, to make a clear distinction between them. But if you like the yellow color, then all of the new failure types could be yellowish with a slight accent of another color, e.g. compilation errors could be yellow with an orange accent and the "time"/"file" errors with a green accent.
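For illustration, such an accent scheme might look something like the following in CSS - the class names and color values below are made up for the sketch; the real selectors in master.css will differ:

```css
/* Hypothetical selectors and colors, only to illustrate the idea of a
   common yellowish base with per-reason accents. */
.comp-fail { background-color: #ffd24d; } /* compile error: yellow/orange accent */
.time-fail { background-color: #d4e157; } /* time limit exceeded: green accent */
.file-fail { background-color: #cddc39; } /* file too big: stronger green accent */
.ierr-fail { background-color: #ffab40; } /* internal compiler error */
```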
+1 on the use of color, again it helps us screen out what's explained and what's not. John.