On 26.09.18 20:52, Stefan Seefeld via Boost wrote:
On 2018-09-26 02:29 PM, Raffi Enficiaud via Boost wrote:
On 26.09.18 20:09, Stefan Seefeld via Boost wrote:
On 2018-09-26 01:59 PM, Raffi Enficiaud via Boost wrote:
* create a cmake build tree for only a subset of boost. Say you want to compile only library X, which depends on libraries Y and Z: then only X, Y and Z will be added to the cmake project. This is already in place (a sketch follows below).
* create a stub from cmake that automatically checks out the required dependencies. This still needs to be added, but should be easy to do.
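As a minimal sketch of the first point, assuming a superproject layout in which every library lives under libs/<name> with its own CMakeLists.txt (the names X, Y, Z are placeholders):

    # Top-level CMakeLists.txt for a subset build: only the requested
    # library X and its in-Boost dependencies Y and Z are added; the
    # remaining libraries are never configured.
    cmake_minimum_required(VERSION 3.5)
    project(boost_subset)

    set(BOOST_SUBSET x y z)           # X plus its transitive dependencies
    foreach(lib IN LISTS BOOST_SUBSET)
      add_subdirectory(libs/${lib})   # each library defines its own targets
    endforeach()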
Is this second use case a good scenario for you?
A use-case I'd like to see supported:
* allow an individual project to be built against prerequisite boost libraries that are pre-installed on the system (no matter whether that installation was done manually or using some system package management, as is common on Linux).
This is a precondition for considering Boost project repos as truly independent, as the two use-cases you suggest above would still imply a dependency at the source-repo level, rather than at the package level.
I am sure that if I dig deep into the mail archive I will find some details about what you need, but right now it is not very clear to me.
So, is this what you want:
1. you do e.g. a sudo apt-get install boost-X
2. you clone boost-Y, which requires boost-X
3. you develop your local clone of boost-Y, which links against the boost-X installed on your system
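For step 3, a minimal sketch of what the boost-Y clone's CMakeLists.txt might contain, assuming boost-X is one of the libraries for which CMake's find_package already provides imported Boost:: targets (filesystem stands in for the real name here):

    # Hypothetical CMakeLists.txt of the boost-Y clone: it picks up the
    # boost-X that was installed system-wide in step 1.
    cmake_minimum_required(VERSION 3.5)
    project(boost_y CXX)

    # Boost::filesystem stands in for whatever boost-X really is.
    find_package(Boost REQUIRED COMPONENTS filesystem)

    add_library(boost_y src/y.cpp)
    target_link_libraries(boost_y PUBLIC Boost::filesystem)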
Is that what you are describing? If not, you can stop reading...
It is, exactly.
In that case, we need versioning.
You are a couple of steps ahead of me. All I want is to be able to run a build of my project such that it picks up its Boost dependencies from some other location, no matter how they got there. They may have got there by my running `b2 install` on those prerequisite libs first. Versioning only comes into it once I want to make claims about my (Boost) library's compatibility with those versions of prerequisites. But I don't even want to make any such claims yet; I only want to do a build!
But I believe this is a bad practice. I do not know if I should elaborate...
IMO it will make the life of developers really hard for the following reasons:
1. First of all, the dependencies change over time: for the system libraries you might have, for instance, X<-Y<-Z, while for the local clone you have X<-Q<-Z. This ends up in weird scenarios: you checked out only Y and Z, but you actually need Q and not Y.
2. Imagine you have library X<-Y<-Z again, and you work on X and Z. You may then have 2 copies of "Z" (system + clone) with different versions. We can say that one takes precedence over the other, but you will still end up with an inconsistent chain of dependencies, and the developer can silently source/link the wrong "Y" or "Z" ("Y" on the system links to the user's "Z", etc.).
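To make the second hazard concrete, a rough sketch, assuming each library exported a hypothetical boost_z CMake package: whichever copy of "Z" the search path yields first wins, and nothing warns that the system "Y" was built against the other one.

    # Two copies of "Z" are visible: the system one under /usr and the
    # developer's clone staged under ${CMAKE_SOURCE_DIR}/stage.
    list(INSERT CMAKE_PREFIX_PATH 0 "${CMAKE_SOURCE_DIR}/stage")
    find_package(boost_z CONFIG REQUIRED)  # silently resolves to one copy only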
Why are you making things so complicated? Of course, if I have two versions of library 'Z', I'm entirely on my own to make sure I don't mix them up in downstream dependencies. But why do we even have to discuss this?
Sure, but you do not see the dependency graph in its full extent, and this is a good thing: you want to see your direct dependencies, because those are the ones you directly manage. As soon as you (or any tool) start unrolling the chains of dependencies, it may become messy. Taking the previous examples -- very simple ones, I have to say -- you may not even be aware of the second copy of library "Z".
All I want is the ability to work on a Boost library project, compiling it against some other prerequisite (versioned) libraries, some of which may be part of Boost.
Yes, I understand. Do you mind having the ones that are part of boost - but only the ones you need for working on your library - checked out/cloned on request?
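A rough sketch of what such a clone-on-request stub could look like; the DOWNLOAD_MISSING_DEPS switch and the require_boost_lib() helper are made-up names, not an existing feature, and only the boostorg GitHub layout is assumed:

    # Hypothetical stub: clone a missing in-Boost dependency into the
    # source tree before adding it to the build.
    option(DOWNLOAD_MISSING_DEPS "Clone missing Boost dependencies on demand" ON)

    function(require_boost_lib name)
      set(lib_dir "${CMAKE_SOURCE_DIR}/libs/${name}")
      if(NOT EXISTS "${lib_dir}" AND DOWNLOAD_MISSING_DEPS)
        execute_process(
          COMMAND git clone --depth 1
                  "https://github.com/boostorg/${name}.git" "${lib_dir}"
          RESULT_VARIABLE rc)
        if(NOT rc EQUAL 0)
          message(FATAL_ERROR "Could not clone boostorg/${name}")
        endif()
      endif()
      add_subdirectory("${lib_dir}" "${CMAKE_BINARY_DIR}/libs/${name}")
    endfunction()

A library's CMakeLists.txt would then call require_boost_lib() once per direct in-Boost dependency, and only those would ever be cloned.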
None of this is rocket science, and it has been a rather common use-case throughout the industry. The only "new" thing here is my asking to consider two Boost libraries as separate entities, rather than the whole bunch of >150 Boost projects as a single monolithic entity.
The industry struggles to find a good tool for managing 3rd party dependencies, especially in a multi-platform environment. I agree that none of this is rocket science, yet every single tool I have seen has its limitations and rarely plays well with the other tools in place. Also, every dependency you add comes with its own set of constraints, and each added constraint multiplies with all the previous ones.

What I was saying before is the following: if I know that libX depends on libY, I can make it such that by cloning or setting up libX, libY is fetched automatically. If I need external libraries, I can specify them explicitly from the command line, or use facilities such as cmake's find_package to get them from the OS or any other location.

I am fine with:

    cmake -DLIB_GIL=ON -DLIB_JPEG_LOCATION=/some/path/on/disk

where everything needed by GIL within boost is checked out automatically, and external libraries are found on the OS (default) or taken from another location (explicitly on the command line).

I am almost fine with:

    cmake -DLIB_GIL=ON -DLIB_X=/some/path/on/disk

where X is a dependency of GIL. Not checking out X directly from the boost git has many side effects I do not want to deal with.
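A minimal sketch of how those two invocations might be wired up on the CMake side; LIB_GIL and LIB_JPEG_LOCATION are the hypothetical switches from the command lines above, not options that exist today:

    option(LIB_GIL "Build Boost.GIL and its in-Boost dependencies" OFF)
    set(LIB_JPEG_LOCATION "" CACHE PATH "Optional prefix of an external libjpeg")

    if(LIB_GIL)
      # External dependency: taken from the OS by default, or from the
      # prefix given explicitly on the command line.
      if(LIB_JPEG_LOCATION)
        list(INSERT CMAKE_PREFIX_PATH 0 "${LIB_JPEG_LOCATION}")
      endif()
      find_package(JPEG REQUIRED)   # standard CMake FindJPEG module

      # The in-Boost dependencies of GIL would be added (or cloned on
      # request) here, e.g. via the require_boost_lib() stub above.
    endif()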
And just for avoidance of doubt (and to avoid this discussion going even deeper into the rat-hole): I'm *not* asking for individual Boost libraries to follow separate release cycles. But yes, we should also start talking about versioning as well as backward-compatibility (i.e., API and ABI stability). That, too, is a rather common concern in the industry. It just so happens that Boost has been managing to ignore it so far.
I precisely *do not want* any of those :) (see for instance the recent discussion on making boost invisible). It should not be the concern of C++ developers to deal with packaging and other ABI/system-level stuff. I want tools to support my C++ development. I am happy with a simple, humble system that does the job. That is why I am asking.

Raffi