On 18 Sep 2014 at 17:40, Antony Polukhin wrote:
Can we enable automated test coverage using Coveralls for a Boost repo?
Yes. This should also be mandatory for all Boost libraries. It's free, so nobody has an excuse.
We should really add a Boost wiki page on setting this stuff up, and strongly hint on the community review page that it is nearly a requirement for new libraries. I can help write it if you'd like to start it, Antony?
Having such a wiki page would be good.
I've finished writing a generic .travis.yml draft file that is suitable for almost any library that uses Boost: https://github.com/apolukhin/variant/blob/travisci/.travis.yml There's some 'sed' black magic to send only the current library's files to the Coveralls site. The fast recursive git cloning is taken from the run.py script.
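The filtering idea can be sketched roughly like this. It is a minimal, self-contained simulation: the file names and the boost/variant/ path are made-up stand-ins for whatever the real gcov output contains, and grep here stands in for the sed expression the actual .travis.yml uses.

```shell
#!/bin/sh
# Hedged sketch: keep only coverage reports that belong to the current
# library before uploading to Coveralls. Files and paths are stand-ins;
# the real .travis.yml uses a sed expression instead of this grep.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Simulated gcov reports: one for an in-library header, one from a dependency
printf 'File boost/variant/variant.hpp\n' > variant.gcov
printf 'File boost/smart_ptr/shared_ptr.hpp\n' > shared_ptr.gcov

# Delete any report that does not reference this library's headers
for f in *.gcov; do
  grep -q 'boost/variant/' "$f" || rm -f "$f"
done

ls *.gcov
```

After the loop only the in-library report survives, so the coverage numbers reflect the library itself rather than all of Boost.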
You may find my Travis script inspirational: https://github.com/BoostGSoC13/boost.afio/blob/master/.travis.yml In particular, I find the time spent cloning all of Boost takes away valuable unit testing time, especially when you're running valgrind or the thread sanitiser, so I keep an automatically updated copy of Boost releases at https://github.com/ned14/boost-release. What I do on my Jenkins CI is extract only that, delete from libs/ the libraries I want trunk for, and symbolically link in the trunk submodules for just those libraries. A quick b2 headers later and it's good. I also skip the Travis gem and use curl :) Note that my script is heavily based on Daniel Pfeifer's, so it's not all mine.
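The release-plus-trunk trick above can be sketched like so. The directory names are local stand-ins so the sketch runs anywhere; a real script would extract the boost-release tree, use an actual trunk submodule checkout, and finish with ./b2 headers.

```shell
#!/bin/sh
# Hedged sketch: start from a prebuilt Boost release tree, then swap in a
# trunk checkout only for the library under test. All paths are stand-ins
# created locally so the sketch is runnable; a real script would extract
# the boost-release archive and run ./b2 headers at the end.
set -e
sandbox=$(mktemp -d)
cd "$sandbox"

# Stand-in for the extracted boost-release tree
mkdir -p boost-release/libs/variant boost-release/libs/smart_ptr

# Stand-in for a trunk submodule checkout of the one library we want fresh
mkdir -p trunk/variant

# Delete the release copy and symbolically link in the trunk checkout
rm -rf boost-release/libs/variant
ln -s "$sandbox/trunk/variant" boost-release/libs/variant

# In a real script, the last step would be: ./b2 headers
ls -l boost-release/libs
```

The payoff is that only the libraries you actually want at trunk are cloned, which leaves more of the Travis time budget for the tests themselves.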
I've also made a draft of a README.md file with a results table: https://github.com/apolukhin/variant/tree/travisci.
Nice. The only other thing is that you should really disambiguate by compiler version and platform. For example: https://boostgsoc13.github.io/boost.afio/.
We could start writing the wiki. Do you know where to start?
I guess go to https://svn.boost.org/trac/boost/wiki and start a page.
It would be good to hear more opinions on TravisCI + Coveralls before we start adding .travis.yml files to all the libraries.
Yes, I think that where the full unit test library is being used, a summary of failing versus passing tests should be recorded. Coverage can be great, but if unit tests don't return failure for a problem it can get overlooked. For example, I patched Boost.Expected to spit out unit test results, and Jenkins makes this nice table: https://ci.nedprod.com/view/Boost%20Thread-Expected-Permit/job/Boost.Expected%20Test%20Linux%20GCC%204.8/lastCompletedBuild/testReport/

I also think a valgrind pass needs to happen, plus a thread sanitiser pass, plus a clang static analysis pass, for all libraries. Unless they have big red fail marks all over them, libraries won't get fixed. If we ever get Windows on Travis: I was surprised how good the MSVC static analyser has become, and we should have a pass with that too.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
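Those extra passes could look roughly like the commands below. This is a hedged sketch that just prints the commands a CI job might run, since the tools may not be installed everywhere; the test binary path, source glob, and b2 target are hypothetical placeholders.

```shell
#!/bin/sh
# Hedged sketch: the analysis passes suggested above, printed as the
# commands a CI job might run. TEST_BIN, the source glob, and the b2
# target are hypothetical placeholders, not a real library's layout.
set -e
TEST_BIN=./bin/test_all

PASSES="valgrind --error-exitcode=1 $TEST_BIN
clang++ -std=c++11 -g -fsanitize=thread libs/mylib/src/*.cpp -o tsan_tests && ./tsan_tests
scan-build b2 toolset=clang libs/mylib/test"

echo "$PASSES"
```

The important design point is that each pass returns a nonzero exit code on failure (valgrind via --error-exitcode=1, the sanitised binary by aborting, scan-build via the underlying build), so the CI job goes red instead of the problem being overlooked.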