On 23/04/16 at 19:19, Paul Fultz II wrote:
Yet having the same output name when you build twice leads to undefined behaviour (the .lib gets overwritten), and distinguishing the outputs is not natively supported by CMake (using e.g. CMAKE_ARCHIVE_OUTPUT_DIRECTORY to make the distinction does not work on its own).
Yes, that's just for Windows, which would need special treatment.
Which is "just" one platform that is targeted, and "just" one use case not covered by what you propose.
You missed the "official" and "centralized" parts. apt/dpkg or yum are official and centralized package managers; cget is not. Why should it be official and centralized?
Bpm wouldn’t be official and centralized?
I do not know what BPM is.
Because 1/ official usually means there is just one, or at least all the official ones can work together (new Ubuntu, for instance); 2/ centralized because if we end up having several package managers, it becomes a mess (e.g. apt vs. pip-installed packages), as they do not communicate with each other. Example: I have a pip Python package compiled against the system openCV, and then I update openCV on the system.
But that's the same problem with boost now. If a boost library depended on openCV and the system updated openCV, the user would have to rebuild boost; with some form of packaging system, only a small set of libraries needs to be rebuilt.
Yet at least we do not leave the developer/user with the false impression that he installed something properly.
Also, I can definitely see a problem in supporting another tool. What would happen to boost if cget were "deprecated"?
Cget is open source. Also, it's fairly non-intrusive, so it can be easily replaced by another tool if necessary.
There are a lot of dead open-source projects.
Example: Fink/MacPorts/Homebrew. [snip] The part "It just requires a CMakeLists.txt at the top level" is by definition a layout requirement, which contradicts the part "There is no layout requirements". Also, the "(which all cmake-supported libraries have)" part is not a requirement of CMake itself; it is just a "good practice".
It is a requirement of cmake. If I call `cmake some-dir` then a CMakeLists.txt needs to be in `some-dir`. So then cget just clones the repository (or unpacks a tar file, or copies a directory on your computer) and calls cmake on that directory. There are no special layout requirements.
I know how cmake works. From what I understand, your requirement is to have a top-level CMakeLists.txt. This is not a CMake requirement (as I can have references to a parent dir in my CMakeLists.txt).
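To be concrete, a minimal sketch (layout and file names are hypothetical) of a CMakeLists.txt that lives in a subdirectory and reaches back into the parent directory:
--------
# repo/cmake/CMakeLists.txt -- configured with `cmake path/to/repo/cmake`
cmake_minimum_required(VERSION 3.0)
project(targetA CXX)

# The sources live one level above the directory holding this file.
add_library(targetA ${CMAKE_CURRENT_SOURCE_DIR}/../src/a.cpp)
target_include_directories(targetA PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/../include)
--------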
[snip] SAT solver, interesting... why would I need that complexity for solving dependencies? I see versions as a "range of possible", which makes a (possibly empty) intersection of half-spaces.
A SAT solver is what most package managers (such as dpkg) use to resolve constraints.
I stand corrected. Yet this is something cget does not have.
What I am saying is that you delegate the complexity to another layer, call it cget, Catkin, or pip. And developers would also have to do the packaging, which is not easy (and needs a whole infrastructure to get right, like PPAs or PyPI).
The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn’t scale.
Right now it scales pretty well with BJam.
The fact that I need to download the entire boost tree to build and test Hana using bjam suggests it doesn't scale at all.
You have a point. But packaging is not the purpose of the boost superproject.
What if we need conflicting CMAKE_PREFIX_PATH settings, e.g. one for openCV and another one for Qt?
CMAKE_PREFIX_PATH is a list.
Right; apparently that was not the case in 3.0.
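For the record, a minimal sketch of passing several prefixes at once (the install prefixes below are made up):
--------
# On the command line:
#   cmake -DCMAKE_PREFIX_PATH="/opt/opencv;/opt/qt5" path/to/source
# or from a CMakeLists.txt / toolchain file:
list(APPEND CMAKE_PREFIX_PATH "/opt/opencv" "/opt/qt5")
find_package(OpenCV REQUIRED)
find_package(Qt5 COMPONENTS Core REQUIRED)
--------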
[snip]
I don't see how that is something that cmake doesn't do either.
Let me (try to) explain my point with an "analogy" with templates vs overloads:
What cmake can do is:
--------
declare the (possibly N*M) combinations explicitly
targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);
--------
and then consume a subset of the declared combinations:
--------
targetA(variantX, compilation_optionsY);
--------
with 1 <= X <= N, 1 <= Y <= M.

What BJam can do is:
--------
template
targetA(variants, compilation_options);
--------
and then consume any:
--------
targetA(variantX, compilation_optionsY);
--------
with the same flexibility as templates: the instantiation that generates a version of targetA is defined at the point where it is consumed.
I do not follow this analogy at all.
I felt smart when I made this analogy. And this is still the case :) BJam defines metatargets (or target functions), which are fundamentally different from plain targets: see http://www.boost.org/build/doc/html/bbv2/overview/build_process.html. Properties associated with CMake targets are static. They may be computed by generating functions (https://cmake.org/cmake/help/v3.3/manual/cmake-generator-expressions.7.html), yet that is less powerful. I see it like targetA(f(variants, compilation_options)), which I believe BJam can do (maybe with a less sexy syntax...).
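To make the comparison concrete, here is a minimal sketch (target name and definitions are made up) of what CMake offers on this front: a single declared target whose properties are fixed at declaration and can only branch through generator expressions evaluated at generation time:
--------
add_library(targetA src/a.cpp)

# The property is attached once, at declaration time; only its value can
# vary, based on generation-time information such as the configuration.
target_compile_definitions(targetA PRIVATE
    $<$<CONFIG:Debug>:TARGETA_DEBUG_VARIANT>
    $<$<CONFIG:Release>:TARGETA_RELEASE_VARIANT>)
--------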
If you do not see to what extent this is useful, please compare the overload vs. template approaches in C++.
Cmake is a fairly dynamic language, so I don’t think it is as limited as you think.
We have diverging opinions. I have been using cmake for more than 10 years now, and I do not feel like I am missing some big part of it. I feel more like I am still on the learning curve of BJam (although it would be a risky choice for my other projects... but it is interesting).
What I am saying is that it is indeed possible, and I know of solutions, but this is not native to cmake.
Yes, it's possible, and a module would help make it possible in a simpler way, although I don't know how common it is to group tests. In general, I usually just focus on one test or on all the tests.
Tests were just an example, and sometimes we end up doing things that are not common. At least I know that CMake and BJam do not tell me what to do; they offer the tools/language, and it is up to me to implement things the way I need.
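As an aside, a minimal sketch of one way to group tests with plain CMake/CTest (the test names, commands, and label are made up), so that a subset can be run with `ctest -L`:
--------
enable_testing()
add_test(NAME targetA.unit COMMAND test_unit)
add_test(NAME targetA.perf COMMAND test_perf)

# Tag a subset of tests so that it can be run as a group: `ctest -L nightly`
set_tests_properties(targetA.perf PROPERTIES LABELS "nightly")
--------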
Yes, and the nice thing about cmake is that it leads you to a simpler, more modular design to solve the problem, instead of trying to link in 20 different library targets that are variations of shared and static builds of the same library.
I do not see any problem for boost, which is the scope here. My opinion is this: *if* a CMake solution is "production ready" for boost, then let's continue the discussion. Right now, you have exposed the "range of possible", while I have tried to point out what is expected.