On 27 Jul 2017, at 22:29, Edward Diener via Boost wrote:
On 7/27/2017 12:58 PM, Florent Castelli via Boost wrote:
On Jul 27, 2017 16:41, "Edward Diener via Boost" wrote:
On 7/27/2017 8:43 AM, Florent Castelli via Boost wrote:
On 26/07/2017 20:49, Edward Diener via Boost wrote:
Following John Maddock's appeal for practical solutions related to the move to CMake, I would like to know what the CMake equivalent is to the Boost Build unit test functionality.
In other words, what do I write in CMake in order to do the Boost Build compile, compile-fail, link, link-fail, run, and run-fail unit tests?
In my own Boost-CMake project, I have implemented regular "RUN" in a way that looks similar to the original Boost Build using functions: https://github.com/Orphis/boost-cmake/blob/master/libs/system.cmake
What is RUN supposed to be in your link above? An equivalent of run from Boost Build; see https://github.com/boostorg/system/blob/develop/test/Jamfile.v2
Please show the code for RUN in CMake. To ease the transition from Boost Build to CMake, we really need CMake equivalents to the unit testing rules of Boost Build at the very least.
Let's start over. Basically, you end up with the following equivalents.
Run means compiling and linking an executable, then running it and checking that the return code indicates success. It could translate to:
add_executable(test_name test_file.cpp)
target_link_libraries(test_name PRIVATE boost_library)
add_test(NAME test_name COMMAND test_name)
But if you do that, you need two steps to run the test. First you compile the program (using a regular "make test_name" or even "cmake --build . --target test_name") and then you run your test through ctest with "ctest . -R test_name" (or you run all the tests without the filter). The problem there is that you have two commands, and ctest won't capture compilation or linking errors.
To work around that, you can use this instead:
add_test(NAME test_name COMMAND cmake --build . --target test_name && $<TARGET_FILE:test_name>)
Other commands should be easy enough to implement as well, but I haven't
had the interest in doing that in my project yet (which predates the SC decision).
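As a sketch only (not necessarily what the linked repository does), the same build-then-run chaining could also be expressed with CTest fixtures (CMake 3.7 or later) instead of relying on shell-style && chaining. The helper name boost_test_run and its arguments are hypothetical; any extra arguments are assumed to be libraries to link:

function(boost_test_run name source)
  # EXCLUDE_FROM_ALL keeps the test target out of the regular build.
  add_executable(${name} EXCLUDE_FROM_ALL ${source})
  target_link_libraries(${name} PRIVATE ${ARGN})

  # Step 1: compile and link the test binary from within CTest.
  add_test(NAME ${name}.build
           COMMAND ${CMAKE_COMMAND} --build ${CMAKE_BINARY_DIR}
                   --target ${name} --config $<CONFIG>)
  set_tests_properties(${name}.build PROPERTIES FIXTURES_SETUP ${name})

  # Step 2: run the binary; CTest will not run it if the build step failed.
  add_test(NAME ${name} COMMAND ${name})
  set_tests_properties(${name} PROPERTIES FIXTURES_REQUIRED ${name})
endfunction()

A run-fail variant would then just set WILL_FAIL TRUE on the second test.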
It is possible to do "compile" by creating a regular static library target with that file and then invoking "cmake --build . --target <compile test name>". Compile-fail tests can be done similarly, checking that the build fails. Link and link-fail can be done the same way using actual binaries that are built but not run.
This is a poor solution. Could you please explain why? Using CMake directly to build the sources with the same flags as set in the toolchain is not controversial, I'd say. It actually works better than other solutions I've seen that try to reproduce these features.
I said it was a poor solution because all the Boost Build 'compile' rule does is try to compile one or more source files. Success means there are no compilation errors; failure means there is at least one. Why should one have to create an actual library to do this, especially as the source(s) may contain a main() function, and in fact usually do?
I think the right solution, as suggested to me by someone else, is to create a CMake OBJECT library, which as I understand it is not really a library but just OBJECT files, and then run a test based on that. But of course I do not know if CMake supports that.
The thing is, static libraries are guaranteed to work and fairly standard; object libraries are a bit finicky and may not work depending on the generator. Using one or the other is merely an optimisation though, since a static library is just an archive of object files after all. Similar to the code above, you could implement them with this code:

add_library(test_name STATIC test_file.cpp)   # or OBJECT instead of STATIC
target_link_libraries(test_name PRIVATE boost_library)
add_test(NAME test_name COMMAND cmake --build . --target test_name)

In this code, ctest will just build the target and capture any compilation error. Easy enough. It doesn't matter whether there is a function called main or not, since nothing is linked; the compiler won't care for the most part about the names of the functions (well, main has some special semantics, but I doubt any test would rely on that).

For completeness, here's a possible implementation of "LINK". I understand this one links the source files into a binary but doesn't run it. This is simple CMake code as well:

add_executable(test_name test_file.cpp)
add_test(NAME test_name COMMAND cmake --build . --target test_name)

In this case, CMake will build and link the files from the ctest invocation.

To support the failing variants of all the above, you need this:

set_tests_properties(test_name PROPERTIES WILL_FAIL TRUE)

Note that if you expect a test to fail when run but still succeed at linking, that's not directly supported by the code above and some changes would need to be made.

Overall, that's a lot of boilerplate for each test, which is why you need some convenience functions to keep the build files clean and simple, with a clear intent. In complex test cases it will certainly be possible for people to use CMake commands directly, but that should be rare.
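To give an idea of what such convenience functions might look like, here is a sketch for the compile and compile-fail cases; the names boost_test_compile and boost_test_compile_fail are made up for this example, and extra arguments are assumed to be libraries to link:

function(boost_test_compile name source)
  # EXCLUDE_FROM_ALL keeps the target out of the normal build; it is only
  # built when the corresponding ctest entry runs.
  add_library(${name} STATIC EXCLUDE_FROM_ALL ${source})
  target_link_libraries(${name} PRIVATE ${ARGN})
  add_test(NAME ${name}
           COMMAND ${CMAKE_COMMAND} --build ${CMAKE_BINARY_DIR}
                   --target ${name} --config $<CONFIG>)
endfunction()

function(boost_test_compile_fail name source)
  boost_test_compile(${name} ${source} ${ARGN})
  # The test passes only if building the target fails.
  set_tests_properties(${name} PROPERTIES WILL_FAIL TRUE)
endfunction()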
The difference is making sure you have a clear separation between building Boost itself and running the tests. I found out that failures are pretty common with those tests, and many of them will not run without errors on all supported compilers and platforms, so isolation is quite important.
Again I need to mention that the tests are run for header-only libraries, which are not "built" at all in most cases. That's irrelevant for CMake. You still end up building the tests; they will depend on the header-only library and its dependencies.
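For what it's worth, a header-only library would typically be modelled as an INTERFACE target, and the test then compiles against its usage requirements. A minimal sketch, with made-up target and path names:

add_library(boost_headeronly INTERFACE)
target_include_directories(boost_headeronly INTERFACE include)

# The test itself is still compiled and linked; it merely inherits the
# include paths (and any other usage requirements) from the INTERFACE target.
add_executable(headeronly_test test/headeronly_test.cpp)
target_link_libraries(headeronly_test PRIVATE boost_headeronly)
add_test(NAME headeronly_test COMMAND headeronly_test)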
The problem is that some build tools do not support parallel invocations (Ninja doesn't, for example; see https://groups.google.com/forum/#!topic/ninja-build/4VP7whvWSH8 ), and thus running the tests would need to be linearized or moved to make, which should support that scenario better.
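If the tests themselves invoke cmake --build and the generator cannot cope with concurrent invocations in the same build tree, one partial mitigation is to mark the affected tests as serial; test_name below is a placeholder:

# RUN_SERIAL prevents this test from running concurrently with any other test.
set_tests_properties(test_name PROPERTIES RUN_SERIAL TRUE)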
That is the end-user's problem. I am just interested in Boost solving its own problem of moving from the Boost Build testing rules to something in CMake which does the same thing. Not really, because if a solution that works correctly and does perfect forwarding of build flags can't be parallelized in some cases, it would need fixing. And if a solution doesn't do perfect forwarding of compiler settings and requirements, it's not really acceptable either.
Why wouldn't each test have the correct flags whether the tests run in parallel or not? And why would you want the tests to run in parallel rather than linearized?