On 23/04/16 at 06:34, Paul Fultz II wrote:
[snip]
Then to set each one to the same name you can use the OUTPUT_NAME property:
set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
Exactly, so you artificially make CMake think that two different targets should end up with the same name on the filesystem. This does not work on Windows, for instance, because the import .lib of the shared library gets overwritten by the static one. This is not exactly a solution, but rather a hack (or workaround).
It's neither; it is an optimization, because the user could just build the library twice, once shared and once static. Of course, this type of optimization mainly affects system maintainers, so everyday users of cmake don't see this as a big problem.
Yet having the same output name, in case you build twice, leads to undefined behaviour (the .lib gets overwritten), and it is not natively supported by CMake (using e.g. CMAKE_ARCHIVE_OUTPUT_DIRECTORY to make the distinction does not work on its own).
We can of course iterate further (set the output folder per type, etc).
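For instance, one such iteration could look like this (a sketch, not tested across all generators): keep the same OUTPUT_NAME, but give the static target its own archive output directory so that on Windows the static .lib does not overwrite the shared target's import .lib:

```cmake
add_library(MyLib_shared SHARED ${SOURCES})
add_library(MyLib_static STATIC ${SOURCES})
set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
set_target_properties(MyLib_static PROPERTIES
    OUTPUT_NAME MyLib
    # On Windows both the static library and the shared library's import
    # library would be named MyLib.lib; putting the static one in a
    # subdirectory keeps the two from clobbering each other.
    ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/static)
```

This still has to be repeated (or wrapped in a function) for every library, which is exactly the kind of per-project orchestration being discussed.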
- having a set of dependencies that is not driven by a high level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms,
How does this not target all platforms?
Do I have a centralized (or virtualized, like inside vagga/docker or virtualenv) and official package manager on Win32 or OSX? I know tools exist (brew, chocolatey, etc). What about the other platforms (Android)? What about cross-compilation?
There are bpm, cget, conan, and hunter, to name a few that are cross-platform and target all platforms.
You missed the "official" and "centralized" parts. apt/dpkg or yum are official and centralized package managers; cget is not. Why should it be official and centralized? Because 1/ the official one is usually unique, or at least all official ones can work together (new Ubuntu for instance), and 2/ centralized because if we end up having several package managers, it becomes a mess (e.g. apt vs. pip-installed packages), as those do not communicate with each other. Example: I have a pip python package compiled against the system's openCV, and then I update openCV on the system. Also, I can definitely see a problem in supporting yet another tool: what would happen to boost if cget were "deprecated"? Example: Fink/MacPorts/Homebrew.
and, in my opinion, it just translates the same problem to another layer. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so it is implicit knowledge). What this process does is put X and Y at the same level of knowledge: a flattened set of packages. BJam already does the same, but at the compilation/build step, and without the burden of extra package management (updating upstream Y for instance, when Y can be a set of many packages), and obviously in a confined, repeatable and isolated development environment. But maybe you have something else in mind.
I don’t follow this at all. For example, when I want to build the hmr library here: https://github.com/pfultz2/hmr
All I have to do after cloning it is `cget build`; it will then go and grab the dependencies, because they have been listed in the requirements.txt file.
Then I am dependent on another tool, cget, maintained by ... you :) Also, from the previous thread, if my project does not have the "standard" cget layout, then cget will not work (yet?).
There are no layout requirements. It just requires a CMakeLists.txt at the top level (which all cmake-supported libraries have), but the library can be organized however you like.
The part "It just requires a CMakeLists.txt at the top level" is by definition a layout requirement, which contradicts the part "There are no layout requirements". Also, "(which all cmake-supported libraries have)" is not a requirement of CMake itself; it is just good practice.
I also need another file, "requirements.txt", that I need to maintain externally to the build system.
But building and installing the dependencies is external to the build system anyway. The requirements.txt just lets you automate this process.
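For illustration only (the package names and URL here are made up; check cget's documentation for the exact syntax), a requirements.txt is just a list of package sources, one per line, which cget resolves and installs before the build:

```
zlib,http://zlib.net/zlib.tar.gz
pfultz2/Fit
./extern/my-local-lib
```

So the file mixes named URL sources, GitHub user/repo shorthands, and local directories, and each entry is itself expected to be a plain cmake-buildable package.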
I do that often for my python packages, and it is "easy" but sometimes difficult to stabilize, especially with a complex dependency graph (and there can be conflicting versions, etc). I can see good things in cget; I can also see weak points.
Currently cget doesn’t handle versions. I plan to support channels in the future, which can support versions and resolve dependencies using a SAT solver (which pip does not do).
A SAT solver, interesting... why would I need that complexity for solving dependencies? I see versions as a "range of possible", which makes a (possibly empty) intersection of half spaces.
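That view can be sketched in a few lines (a hypothetical illustration, not any tool's actual logic): each constraint such as `>= 1.2` or `< 2.0` cuts the version line in half, and a single package's requirements are satisfiable iff the intersection of those half spaces is non-empty.

```python
# Hypothetical sketch: version constraints as an intersection of half spaces
# on a totally ordered version line. Versions are tuples, e.g. (1, 2) == 1.2.

def intersect(constraints):
    """Reduce constraints like ('>=', (1, 2)) or ('<', (2, 0)) to one interval.

    Returns (low, high) with low inclusive and high exclusive,
    or None if the intersection is empty.
    """
    low, high = (0,), None  # start from the whole version line
    for op, ver in constraints:
        if op == ">=":
            low = max(low, ver)        # tighten the lower bound
        elif op == "<":
            high = ver if high is None else min(high, ver)  # tighten the upper
    if high is not None and low >= high:
        return None  # empty intersection: conflicting requirements
    return (low, high)

# Two compatible constraints narrow the interval:
print(intersect([(">=", (1, 2)), ("<", (2, 0))]))   # ((1, 2), (2, 0))
# Conflicting constraints yield an empty set:
print(intersect([(">=", (2, 1)), ("<", (2, 0))]))   # None
```

Interval intersection alone stops working once constraints couple across packages (A needs B < 2, C needs B >= 2, and several versions of A and C are candidates); that combinatorial choice is where a SAT solver earns its keep.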
What I am saying is that you delegate the complexity to another layer, call it cget or Catkin or pip. And developers would also have to do the packaging, which is not easy (and needs a whole infrastructure to get right, like PPAs or pypi).
The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn’t scale.
Right now it scales pretty well with BJam.
BTW, is cget able to work offline?
Yes.
Good :)
To me this is a highly non-trivial task to do with CMake, and it ends up in half-baked solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and just makes things harder for everyone.
Cmake already handles packaging and finding dependencies; cget just provides the mechanism to retrieve the packages using the standard cmake process. This is why you can use it to install zlib or even blas, as it doesn’t require an extra dependency management system.
Well, I really cannot tell for cget. CMake finds things that are installed in expected locations, for instance; otherwise the FIND_PATHS should be indicated (and propagated through the dependency graph).
It sets the CMAKE_PREFIX_PATH (and a few other variables), which cmake uses to find libraries.
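By hand, the same mechanism looks like this (the paths are illustrative); since CMAKE_PREFIX_PATH is a semicolon-separated list, several install roots can be given at once and find_package searches them in order:

```
cmake -DCMAKE_PREFIX_PATH="$HOME/cget;/opt/prefixA;/opt/prefixB" ..
```

cget effectively prepends its own prefix to that list, so downstream projects find what it installed without any cget-specific logic in their CMakeLists.txt.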
What if we need conflicting CMAKE_PREFIX_PATH? eg one for openCV and another one for Qt?
What if, for instance, it needs an updated/downgraded version of the upstream? How does cget manage that?
`cget -U` will replace the current version.
Does that downgrade as well?
Is there an equivalent to virtualenv? Right now for boost, I clone the superproject, and the artifacts and dependencies are confined within this clone (up to doxygen, docbook etc).
By default it installs everything in the local directory `cget`, but this can be changed by using the `--prefix` flag or setting the `CGET_PREFIX` environment variable.
- I can continue... such as target subset selection. It is doable with CMake with, I think, some umbrella projects, but again this is hard to maintain and requires high-level orchestration. Take only the tests, for instance: suppose I do not want to compile them in a first step, and then I change my mind and want to run a subset of them. What I also want is to not waste my time waiting for a billion files to compile; I just want the minimal compilation. So it comes to mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
I usually add the tests using this (I believe Boost.Hana does the same):
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} -VV -C ${CMAKE_CFG_INTDIR})
function(add_test_executable TEST_NAME)
    add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
    if(WIN32)
        add_test(NAME ${TEST_NAME}
            WORKING_DIRECTORY ${LIBRARY_OUTPUT_PATH}
            COMMAND ${TEST_NAME}${CMAKE_EXECUTABLE_SUFFIX})
    else()
        add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})
    endif()
    add_dependencies(check ${TEST_NAME})
    set_tests_properties(${TEST_NAME} PROPERTIES
        FAIL_REGULAR_EXPRESSION "FAILED")
endfunction(add_test_executable)
Then when I want to build the library I just run `cmake --build .`, and when I want to run the tests, I run `cmake --build . --target check`. Now if I want to run just one of the tests I can do `cmake --build . --target test_name && ./test_name` just as easily. I have never had the need to run a subset of tests; that usually arises when there are nested projects, but it is easily avoided when the project is separated into components.
You are strengthening my point: you write an umbrella target for your purpose. My example with the tests was a trap: if you run `cmake --build . --target check` you end up building all the tests. To have finer granularity, you would have to write an `add_test_executable_PROJECTX`, etc. BJam knows how to do that, also with e.g. a STATIC version of some upstream library, defined at the point it is consumed (and not at the point it is declared/defined), and built only if needed, without the need for some mumbo-jumbo with object files.
I don’t see how that is something that cmake doesn’t do either.
Let me (try to) explain my point with an "analogy" with templates vs overloads:
What cmake can do is declare possibly N x M combinations:
--------
targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);
--------
and then consume a subset of the declared combinations:
--------
targetA(variantX, compilation_optionsY);
--------
with 1 <= X <= N, 1 <= Y <= M.
What BJam can do is:
--------
template
What I am saying is that it is indeed possible (I also know solutions), but this is not native to cmake.
Yes, it's possible, and a module would help make it possible in a simpler way, although I don’t know how common it is to group tests. In general, I usually just focus on one test or all the tests.
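Such a module could be a small extension of the function above (a sketch; the `check_<component>` target naming and test-name prefixing are my own invention, not an existing cmake feature):

```cmake
# Per-component umbrella targets: `cmake --build . --target check_core`
# builds and runs only the tests registered under the "core" component.
function(add_component_test COMPONENT TEST_NAME)
    add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
    # Prefix the ctest name with the component so -R can select it.
    add_test(NAME ${COMPONENT}.${TEST_NAME} COMMAND ${TEST_NAME})
    if(NOT TARGET check_${COMPONENT})
        add_custom_target(check_${COMPONENT}
            COMMAND ${CMAKE_CTEST_COMMAND} -R "^${COMPONENT}\\.")
    endif()
    # Ensure the component's test binaries are built before ctest runs.
    add_dependencies(check_${COMPONENT} ${TEST_NAME})
endfunction()
```

The grouping is still declared where the test is defined, not where it is consumed, so it does not reproduce BJam's consume-site behaviour; it only gives the per-component granularity.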
Tests were an example, and sometimes we end up doing things that are not common. At least I know that CMake or BJam do not tell me what to do; they offer the tools/language, and it is up to me to implement things the way I need.
Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
Sure. Or ... BJam could be given some more care and visibility, like another GSoC track?
But it's not entirely technology that is missing; it's the community that is missing, and I don’t think a GSoC will help create a large community for boost build.
That is true. I see it as a chicken-and-egg problem too, and we have to start somewhere.
Where BJam will always lose is the ability to generate IDE environments natively, and this is a major reason why cmake will have a livelier community. I believe that a BJam-to-cmake translation is possible, but even in that case, BJam will live in the shadow of cmake.
Yep, and instead of competing with cmake, boost could collaborate with cmake and would have a much larger impact.
Maybe the CMake people are interested, but I do not see to what extent. They are de facto limited by the capabilities of the IDEs.