proposal - modularize Boost build system
One of Boost's bottlenecks is its monolithic build infrastructure. It would be beneficial for many different use cases (testing, packaging, etc.) to decouple the building of Boost libraries (well, those that require building, see the listing in http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html...) such that they may optionally be built stand-alone.

At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".

What would be beneficial to many would be a workflow like this:

1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
5) Invoke a command to install it
6) Invoke a command to package it (optional)

While it's of course already possible to do all the above by adding support for another build infrastructure (case in point: Boost.Python right now uses SCons for this very reason), this means duplication of effort, as Boost as a whole is still built, tested (and even packaged for binary packages) using Boost.Build, meaning I need to maintain two sets of build infrastructure.

This proposal thus has two parts:

1) Make the nested build logic independent of the outer bits, so individual libraries can be built stand-alone (for example, using b2 by adding a `--with-boost` set of options to point to the location of the prerequisite parts of Boost).
2) Define a clear interface the outer build logic will use to invoke the nested build commands.

Note that for my own case above (Boost.Python), providing 2) would be enough, i.e. having a way for me to "plug in" my SCons commands to build and test Boost.Python would obsolete the existing Boost.Build logic as defined in Boost.Python (as well as the prerequisite parts in Boost.Build's "python" module). However, as my proposal is *not* actually advocating to move away from Boost.Build, but rather to modularize it, I think 1) is essential to let everyone else (who may not be inclined to use anything other than Boost.Build) also take advantage of modularization.

While I'd rather avoid delving into the technical details of how this could possibly be implemented (if you really must, *please* do so in a new thread !), let me outline a few use-cases that would be made possible by the above:

* individual projects would be free to switch to their preferred infrastructure, including CMake, SCons, etc., if they so wish
* individual projects could be much more easily developed and contributed to
* individual projects could be much more easily tested, notably in CI environments
* individual projects could be much more easily packaged

All of the above advantages are *huge*, and reflect real-world needs, and the technical issues to be solved are all minor. The question really is whether there is enough will to move in that direction.

I'd be very happy to participate in the work needed to implement this. But first we need to agree that this is where we want to go.

Thanks,
Stefan

--
      ...ich hab' noch einen Koffer in Berlin...
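As a rough command-line sketch of the workflow being proposed (purely illustrative - neither the `--with-boost` option nor the install/package commands exist in b2 today, so the spellings below are assumptions, not current features):

  # assumes the prerequisite parts of Boost are already installed under /usr/local
  git clone https://github.com/boostorg/python.git   # 2) check out a single repository
  cd python
  b2 --with-boost=/usr/local                          # 3) build (hypothetical option)
  b2 --with-boost=/usr/local test                     # 4) test
  b2 --with-boost=/usr/local install                  # 5) install
  b2 --with-boost=/usr/local package                  # 6) package (optional)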
Stefan Seefeld wrote:
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
It's actually possible to do this with Boost.Build, but I want to talk about something else here.

You keep wishing for that, and you keep missing the point. This is antithetical to the original Boost spirit.

The idea of Boost is that you test your library not against the last Boost release, or against whatever old Boost release happens to be installed. The idea is that you test against the current development state, which today means the develop branch.

Yes, this makes things less convenient for you, because it means that people's changes break your build. This is on purpose. It is how Boost has achieved its track record of stability and quality.

This is part of the price you pay for being accepted as part of Boost - the duty to act as an integration test for your Boost dependencies. This is beneficial for you in the long term, because you can detect breaking changes in your dependencies before they get shipped. If you only test against 1.56, and 1.64 breaks your library, you won't hear about it before 1.72. This does you no good, and it does your users no good. You WANT to know if changes in 1.65's SmartPtr would break you BEFORE 1.65 gets released.

This is not theoretical. At one point in the past, certain changes in enable_shared_from_this would have broken Boost.Python. Without it being there to catch this fact, they would have gone into a release, because nobody else was affected.

TL;DR Boost is tested as a unit, which ensures higher quality. This is deliberate. It's not a bad habit that needs to be broken.
On 6/18/17 10:22 AM, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
It's actually possible to do this with Boost.Build, but I want to talk about something else here.
You keep wishing for that, and you keep missing the point. This is antithetical to the original Boost spirit.
Wow - I don't get this at all.
The idea of Boost is that you test your library not against the last Boost release, or against whatever old Boost release happens to be installed. The idea is that you test against the current development state, which today means the develop branch.
I think this is a very bad idea and that it creates lots of problems. I've advocated for years that we should not do this. I've pulled back on this for the last few years because:

a) I can't convince anyone else I'm right
b) Since git lets me easily select the master branch for the boost libraries I'm not working on, I can easily and unobtrusively test against the latest release. So the problem doesn't concern me personally any more.
Yes, this makes things less convenient for you, because it means that people's changes break your build. This is on purpose. It is how Boost has achieved its track record of stability and quality.
LOL - not because of that but in spite of that. And what are users expected to do? One thing that gets a little lost in this discussion is the distinction between boost library developers and users of boost libraries in their current applications. I think some of the CMake motivation is to help our users.
This is part of the price you pay for being accepted as part of Boost - the duty to act as an integration test for your Boost dependencies. This is beneficial for you in the long term, because you can detect breaking changes in your dependencies before they get shipped. If you only test against 1.56, and 1.64 breaks your library, you won't hear about it before 1.72. This does you no good, and it does your users no good. You WANT to know if changes in 1.65's SmartPtr would break you BEFORE 1.65 gets released.
yada, yada, yada. Testing is an experiment. If you change all the variables at once it's an uncontrolled experiment and the results are useless and often misleading. Change one variable at a time - your code.
This is not theoretical. At one point in the past, certain changes in enable_shared_from_this would have broken Boost.Python. Without it being there to catch this fact, they would have gone into a release, because nobody else was affected.
Depending on source code from one library build to be the test of another library is not really a test.
TL;DR Boost is tested as a unit, which ensures higher quality. This is deliberate. It's not a bad habit that needs to be broken.
This means we're testing against implementation rather than specification. It lowers the standard of quality by permitting "well, it passes the test" to count as an argument. It is actually a disincentive for developers to write more tests: the more tests you write, the more you get bogged down in other people's poor code. It's part of the motivation for those who want to diminish library dependencies, even to the point of importing source code from other libraries into our own distribution. But the fundamental problem is that it doesn't scale. Boost cannot grow if the whole damn thing has to be released as a unit.

Robert Ramey
On 18.06.2017 13:22, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
It's actually possible to do this with Boost.Build, but I want to talk about something else here.
Can you elaborate ? "Possible" in the sense of "it's just a matter of some programming" ? Or am I missing something ? The last few times I asked (specifically for the Boost.Python project) I was told some of the required functionality was being worked on (on some Boost.Build development branch), but wasn't actually on either "master" or "develop".
You keep wishing for that, and you keep missing the point. This is antithetical to the original Boost spirit.
Sorry, not sure what "the point" is in your reply. And not sure why you mention the "original Boost spirit". I'm not asking what it takes to conform to some spirit or other, I'm asking whether a specific use-case that is obviously important to me (and to many others I gather, even though few people express it the way I do) could be supported. Are you telling me that it can't, because it's not in line with some "original Boost spirit" ??
The idea of Boost is that you test your library not against the last Boost release, or against whatever old Boost release happens to be installed. The idea is that you test against the current development state, which today means the develop branch.
While I think that this is a problem in its own right (which we could argue about in a separate thread), let me clarify: I'm not suggesting that the current workflow should be abandoned. I'm asking for another workflow to be supported. I think both can coexist if that is useful.
Yes, this makes things less convenient for you, because it means that people's changes break your build. This is on purpose. It is how Boost has achieved its track record of stability and quality.
I'm not worried about stability (as far as Boost.Python's prerequisites are concerned, at least). I'm worried about scalability. It just doesn't work. Or it works badly.
This is part of the price you pay for being accepted as part of Boost - the duty to act as an integration test for your Boost dependencies. This is beneficial for you in the long term, because you can detect breaking changes in your dependencies before they get shipped. If you only test against 1.56, and 1.64 breaks your library, you won't hear about it before 1.72. This does you no good, and it does your users no good. You WANT to know if changes in 1.65's SmartPtr would break you BEFORE 1.65 gets released.
I've replicated this paragraph and answered it in a separate thread, to keep this discussion on-topic.
This is not theoretical. At one point in the past, certain changes in enable_shared_from_this would have broken Boost.Python. Without it being there to catch this fact, they would have gone into a release, because nobody else was affected.
I understand, and appreciate the role downstream projects have as "integration tests" for Boost libraries. Again, I'm not saying this isn't useful to do. I'm asking for support for a separate workflow.
TL;DR Boost is tested as a unit, which ensures higher quality. This is deliberate. It's not a bad habit that needs to be broken.
I understand and disagree. "Boost as a unit" just doesn't work any longer. I think it's possible to support the kind of integration testing you have in mind, while at the same time breaking Boost up into more autonomous components.

Stefan

--
      ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
I'm asking whether a specific use-case that is obviously important to me (and to many others I gather, even though few people express it the way I do) could be supported.
What I use is this:

https://github.com/boostorg/system/blob/develop/.travis.yml#L288

This is not quite what you want, because it doesn't use the system-installed Boost, it git clones one. But it only clones the minimum subset necessary for the tests (of, in this case, Boost.System) to run.

Now admittedly in Boost.System's case this is just

$ python tools/boostdep/depinst/depinst.py system
Installing module core
Installing module predef
Installing module winapi
Installing module assert
Installing module config

taking 4 seconds, and in Boost.Python's case you'll need much more than that. But it's (a) still not the whole Boost and (b) is only dependent on what you #include, not on Boost's overall size.

Appendix A, Boost.Python test dependencies:

C:\Projects\boost-git\boost>dist\bin\boostdep --test python

Test dependencies for python:

assert
bind
config
conversion
core
detail
foreach
function
graph
integer
iterator
lexical_cast
mpl
numeric~conversion
preprocessor
property_map
smart_ptr
static_assert
tuple
type_traits
utility
throw_exception (from conversion)
typeof (from conversion)
range (from foreach)
move (from function)
type_index (from function)
algorithm (from graph)
any (from graph)
array (from graph)
bimap (from graph)
concept_check (from graph)
disjoint_sets (from graph)
functional (from graph)
graph_parallel (from graph)
math (from graph)
multi_index (from graph)
optional (from graph)
parameter (from graph)
property_tree (from graph)
random (from graph)
regex (from graph)
serialization (from graph)
spirit (from graph)
test (from graph)
tti (from graph)
unordered (from graph)
xpressive (from graph)
function_types (from iterator)
fusion (from iterator)
container (from lexical_cast)
predef (from mpl)
mpi (from property_map)
exception (from algorithm)
lambda (from bimap)
(unknown) (from container)
intrusive (from container)
dynamic_bitset (from graph_parallel)
filesystem (from graph_parallel)
variant (from graph_parallel)
atomic (from math)
format (from property_tree)
system (from random)
io (from serialization)
endian (from spirit)
iostreams (from spirit)
locale (from spirit)
phoenix (from spirit)
pool (from spirit)
proto (from spirit)
thread (from spirit)
timer (from test)
winapi (from system)
chrono (from thread)
date_time (from thread)
ratio (from chrono)
tokenizer (from date_time)
rational (from ratio)
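For Boost.Python the analogous depinst invocation would presumably be the one below (not timed here); the modules it would pull in are the ones listed in Appendix A:

  $ python tools/boostdep/depinst/depinst.py python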
Stefan Seefeld wrote:
I'm asking whether a specific use-case that is obviously important to me (and to many others I gather, even though few people express it the way I do) could be supported.
What I use is this:
https://github.com/boostorg/system/blob/develop/.travis.yml#L288
This is not quite what you want, because it doesn't use the system-installed Boost, ...
What you want is this:

git clone --depth=1 -b develop https://github.com/boostorg/boost.git
cd boost
git submodule update --init libs/config
git submodule update --init tools/build
git submodule update --init libs/python
./bootstrap.sh
cd libs/python
../../b2 test include=include

This works for me on CentOS 7, using the glorious preinstalled Boost 1.53. Two tests fail, exec and import_. (And on second thought, it should work even without include=include, and it does.)

For Travis, replace the `git submodule update --init libs/python` with a copy of the already checked out repo, as I do in the file referenced above. Unless you hit the CI job limit, I'd still recommend the depinst-based one though.
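The recipe generalizes to other libraries; a minimal sketch (the script name is arbitrary), assuming the library follows the usual layout - a libs/<name> submodule with a test/ subproject - and, like the recipe above, relying on a preinstalled Boost to supply any remaining headers:

  #!/bin/sh
  # usage: ./standalone-test.sh python
  lib=$1
  git clone --depth=1 -b develop https://github.com/boostorg/boost.git
  cd boost || exit 1
  # pull in only the build system, config, and the library under test
  git submodule update --init libs/config tools/build "libs/$lib"
  ./bootstrap.sh
  cd "libs/$lib" || exit 1
  ../../b2 test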
On 6/18/17 9:09 AM, Stefan Seefeld via Boost wrote:
One of Boost's bottlenecks is its monolithic build infrastructure. It would be beneficial for many different use cases (testing, packaging, etc.) to decouple the building of Boost libraries (well, those that require building, see the listing in http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html...) such that they may optionally be built stand-alone.
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
sort of a Boost without the libraries - a Boost developers kit?
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
5) Invoke a command to install it
Hmmm - I don't know what this would actually do other than just copy the built library to some "special place".
6) Invoke a command to package it (optional)
After all this time I have no idea what "packaging" means in this context. I never could figure out what CPack is for or what it would do for me.
While it's of course already possible to do all the above by adding support for another build infrastructure (point in case: Boost.Python right now uses SCons for this very reason), this means duplication of effort, as Boost as a whole is still built, tested (and even packaged for binary packages) using Boost.Build, meaning I need to maintain two sets of build infrastructure.
This proposal thus has two parts:
1) Make the nested build logic independent of the outer bits, so individual libraries can be built stand-alone (for example, using b2 by adding a `--with-boost` set of options to point to the location of the prerequisite parts of Boost).
Hmmm - too fuzzy. For me, I just move to the library's test directory and invoke b2, and it builds and runs the tests. Then I just copy the built library from the bin.v2 tree to wherever I want, so I'm done. I'm thinking this is addressed in the current system.
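Spelled out for, say, the serialization library, that in-tree workflow looks roughly like this (the exact bin.v2 subdirectories and library file name depend on toolset, variant and platform, so the copy step is only indicative):

  cd libs/serialization/test
  ../../../b2                  # builds the library and its dependencies, then runs the tests
  # copy the built library out of the superproject's bin.v2 tree
  find ../../../bin.v2/libs/serialization/build -name 'libboost_serialization.*' \
       -exec cp {} /wherever/i/want/ \;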
2) Define a clear interface the outer build logic will use to invoke the nested build commands.
too fuzzy. Needs to be more specific.
Note that for my own case above (Boost.Python), providing 2) would be enough, i.e. by having a way for me to "plug in" my SCons commands to build and test Boost.Python could obsolete the existing Boost.Build logic as defined in Boost.Python (as well as the prerequisite parts in Boost.Build's "python" module).
In practice I invoke b2 with my own shell script, which builds/runs the tests and generates my own cool test matrix - library_status. Doesn't that mean that a library build/test can't be done from just about any system, including a Windows IDE, CMake, etc.?
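A bare-bones version of such a wrapper might look like the following; the toolset is an assumption and the library_status invocation itself is deliberately left out, since its options aren't shown here:

  #!/bin/sh
  # build and run the serialization tests, keeping the b2 output for post-processing
  cd libs/serialization/test || exit 1
  ../../../b2 toolset=gcc 2>&1 | tee bjam.log
  # library_status (from tools/regression) would then turn the results into an HTML test matrix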
However, as my proposal is *not* actually advocating to move away from Boost.Build, but rather to modularize it, I think 1) is essential to let everyone else (who may not be inclined to use anything other than Boost.Build) to also take advantage of modularization.
While I'd rather avoid delving into the technical details of how this could possibly be implemented (if you really must, *please* do so in a new thread !), let me outline a few use-cases that would be made possible by the above:
* individual projects would be free to switch to their preferred infrastructure, including CMake, SCons, etc., if they so wish
* individual projects could be much more easily developed and contributed to
* individual projects could be much more easily tested, notably in CI environments
* individual projects could be much more easily packaged
All of the above advantages are *huge*, and reflect real-world needs, and the technical issues to solve these are all minor. The question really is whether there is enough will to move into that direction.
I'd be very happy to participate in the work needed to implement this.
LOL - you'll regret this.
But first we need to agree that this is where we want to go.
LOL - way too ambitious. How about suggesting some incremental enhancements which would support decoupling?

Example: libraries which want to support a CMake build should have a CMakeLists.txt script in the library root. Libraries which support a Boost.Build build should have a Jamfile in the library root, etc. There is no reason that a library cannot support more than one build system if the library maintainer is willing to do the work. Bjam in the boost root would invoke bjam on the libraries which support bjam. A CMakeLists.txt in the boost root would invoke build/test for all libraries with a CMakeLists.txt in their root. Bjam/CMake would inhabit parallel universes - perhaps both simultaneously. So boost would in effect be a practical application of ideas developed in the area of quantum mechanics. For now, Bjam would be required - the others would be optional. In the future, this requirement might be relaxed.

FWIW I actually need both Bjam and CMake. CMake can only do a half-assed test/build of the serialization library, BUT only CMake can build my Xcode IDE project. An Xcode project has at least 1000 settings.

All in all I like this proposal and approach. It will let CMake enthusiasts do their thing to create something that is useful to me and to users without making my life miserable. I'm actually skeptical that they can do this, but this doesn't prevent me from rooting for them.

Robert Ramey
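The dispatch idea can be sketched as a small top-level driver; everything here (the _build directory name, the file tests, the loop itself) is illustrative only, not a proposal for an actual script:

  #!/bin/sh
  # from the boost root: build each library with whichever build system(s) it provides
  for lib in libs/*/; do
      if [ -f "${lib}CMakeLists.txt" ]; then
          ( cd "$lib" && cmake -S . -B _build && cmake --build _build )
      fi
      if [ -f "${lib}build/Jamfile.v2" ]; then
          ./b2 "${lib}build"
      fi
  done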
On 18.06.2017 13:33, Robert Ramey via Boost wrote:
On 6/18/17 9:09 AM, Stefan Seefeld via Boost wrote:
One of Boost's bottlenecks is its monolithic build infrastructure. It would be beneficial for many different use cases (testing, packaging, etc.) to decouple the building of Boost libraries (well, those that require building, see the listing in http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html...)
such that they may optionally be built stand-alone.
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
sort of a Boost without the libraries - a Boost developers kit?
Call it what you want. I thought traditionally it has been named "Boost core". (But let's not have a bikeshed discussion about it ;-) )
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
5) Invoke a command to install it
Hmmm - I don't know what this would actually do other than just copy the built library to some "special place"
...as well as headers, documentation, etc. This is all part of the normal Unix development culture.
6) Invoke a command to package it (optional)
After all this time I have no idea what "packaging" means in this context. I never could figure out what CPack is for or what it would do for me.
Likewise. There are many different packaging formats in use, depending on the OS (e.g., "Linux distribution"), and someone needs to build those packages. It would be helpful to provide some infrastructure for that, as otherwise (i.e., in the current state), different OSes break up Boost in different and incompatible ways, making it even harder for developers using Boost to write portable code.
While it's of course already possible to do all the above by adding support for another build infrastructure (point in case: Boost.Python right now uses SCons for this very reason), this means duplication of effort, as Boost as a whole is still built, tested (and even packaged for binary packages) using Boost.Build, meaning I need to maintain two sets of build infrastructure.
This proposal thus has two parts:
1) Make the nested build logic independent of the outer bits, so individual libraries can be built stand-alone (for example, using b2 by adding a `--with-boost` set of options to point to the location of the prerequisite parts of Boost).
Hmmm - too fuzzy. For me, I just move to the library directory test and invoke b2 and it builds and runs the tests.
By "the directory" you are surely referring to the library directory within the superproject repo, right ? Here is again what I wrote as my ideal use-case:
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
which right now does not work.
Then I just copy the built library from the bin.v2 tree to whereever I want so I'm done. I'm thinking this is addressed in the current system.
See above.
2) Define a clear interface the outer build logic will use to invoke the nested build commands.
too fuzzy. Needs to be more specific.
It may appear fuzzy to you because of the above. Once you see what I mean in 1), 2) should become clear. Stefan -- ...ich hab' noch einen Koffer in Berlin...
Le 18.06.17 à 18:09, Stefan Seefeld via Boost a écrit :
One of Boost's bottlenecks is its monolithic build infrastructure. It would be beneficial for many different use cases (testing, packaging, etc.) to decouple the building of Boost libraries (well, those that require building, see the listing in http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html...) such that they may optionally be built stand-alone.
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
If you want to easily distribute boost libraries independently, having boost preinstalled (or already cloned, or already downloaded from a release) is not exactly what I would like to see as a user. So my user story would be:

* I have a compiler, and the prerequisites for building boost.whatever, which are stated by the boost.whatever readme.txt file; that should be light enough, or at least the minimal thing needed for building on the platform I am running.
* I git clone
* I build, and possibly run the tests to feel confident.

From my user perspective, all the rest is "implementation details". This is the type of build/run I can see for other popular libraries, and I wish we get there at some point with boost. As a library maintainer, I can say that boost.test is having a hard time competing with other unit testing libraries such as Catch or google test, not because of the number of files in boost.test itself, but because you need to pull a full boost release to run it, which is another order of magnitude.

Packaging is out of the user story in my opinion, and there are nice people very good at this. Also, if installation is easy, packaging is less important I would say (IMO). Installation depends and can be complicated: do we want this installation to live with another existing one (like the superproject, or the one shipped with e.g. a Debian package)? Do we want to support installations in user folders? etc., etc. When I look at all the options that PIP INSTALL provides me with, I am sometimes lost.

Now if I put on my developer hat, it seems to me that there is a different approach to take depending on the environment the library is built from:

1. I build from within the boost super project
2. I build by considering boost.whatever as an entry point, and possibly point to an existing boost installation for dependencies
3. I build on a naked system where only the build tools are installed.

What I describe above is the very simplest use case, and even with that it is not clear to me which approach to take.

Also, I would like to point out the annoyance for users who have to deal with the possibly many tools I have to install. You mention SCons, and I recall having looked at it in early 2004 and never having looked at it since. As a user, I do not want to deal with a tool that is very specific to one library. If I need to install yet-another-tool just to satisfy some library, then sometimes this is just a no-go (sometimes not: I am currently looking at Bazel ... but I am kind of forced to, to build TensorFlow). The CMake thread mentioned earlier already touched on this, and CMake is gathering some consensus also because it is de facto already installed on many of the systems developers use today. So I see CMake not only as a tool for building, testing etc., but also as the extra tool of least annoyance.

There is also something a bit awkward I wanted to point out: we just forgot to mention one of the most important tools so far: the **compiler**. We cannot achieve any modularity if there is no agreement on what should be supported, and this is definitely the most important variable to take into account in the dependency graph of the boost.whatever libraries. If I consider only boost.test, a lot of dependencies can be removed just by requiring C++11, which in turn would dramatically lower the overhead for a user of the library.

Raffi
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
5) Invoke a command to install it
6) Invoke a command to package it (optional)
While it's of course already possible to do all the above by adding support for another build infrastructure (point in case: Boost.Python right now uses SCons for this very reason), this means duplication of effort, as Boost as a whole is still built, tested (and even packaged for binary packages) using Boost.Build, meaning I need to maintain two sets of build infrastructure.
This proposal thus has two parts:
1) Make the nested build logic independent of the outer bits, so individual libraries can be built stand-alone (for example, using b2 by adding a `--with-boost` set of options to point to the location of the prerequisite parts of Boost).
2) Define a clear interface the outer build logic will use to invoke the nested build commands.
Note that for my own case above (Boost.Python), providing 2) would be enough, i.e. by having a way for me to "plug in" my SCons commands to build and test Boost.Python could obsolete the existing Boost.Build logic as defined in Boost.Python (as well as the prerequisite parts in Boost.Build's "python" module).
However, as my proposal is *not* actually advocating to move away from Boost.Build, but rather to modularize it, I think 1) is essential to let everyone else (who may not be inclined to use anything other than Boost.Build) to also take advantage of modularization.
While I'd rather avoid delving into the technical details of how this could possibly be implemented (if you really must, *please* do so in a new thread !), let me outline a few use-cases that would be made possible by the above:
* individual projects would be free to switch to their preferred infrastructure, including CMake, SCons, etc., if they so wish
* individual projects could be much more easily developed and contributed to
* individual projects could be much more easily tested, notably in CI environments
* individual projects could be much more easily packaged
All of the above advantages are *huge*, and reflect real-world needs, and the technical issues to solve these are all minor. The question really is whether there is enough will to move into that direction.
I'd be very happy to participate in the work needed to implement this. But first we need to agree that this is where we want to go.
Thanks, Stefan
On 18.06.2017 14:48, Raffi Enficiaud via Boost wrote:
Le 18.06.17 à 18:09, Stefan Seefeld via Boost a écrit :
One of Boost's bottlenecks is its monolithic build infrastructure. It would be beneficial for many different use cases (testing, packaging, etc.) to decouple the building of Boost libraries (well, those that require building, see the listing in http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html...)
such that they may optionally be built stand-alone.
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
If you want to easily distribute boost libraries independently, having boost preinstalled (or already cloned or already downloaded from a release) is not exactly what I would like to see as a user.
You are right. As was pointed out before, "user" stands for two distinct roles:

* a "downstream boost developer" using an "upstream boost library"
* developers of other software using any boost library

My proposal is more targeted to benefit the former, while independent release cycles would obviously also impact the latter. But decoupling releases is an entirely separate discussion, which I'd rather not get into in the context of this proposal.
As a library maintainer, I can say that boost.test is having hard time to compete with other unit testing libraries such as Catch or google test, not because of the number of files in boost.test itself, because you need to pull a full boost release to run it, which is another level of magnitude.
Precisely, this is exactly one of the things modularization may improve.
Packaging is out of the user story to my opinion, and there are nice people very good at this. Also if installation is easy, packaging is less important I would say (IMO).
Not true in practice. Consider a developer (of the second category above) on Linux. Different distributions use different ways to split Boost libraries into packages, so to be portable, I need to check for them all. That's a huge maintenance burden which could be solved if some basic support for packaging were built right into the Boost build system (be it Boost.Build or something else). Of course you could argue that in many real-world cases even that use-case isn't very common, as lots of commercial software is built by cloning Boost and building it in-house. I think that's very unfortunate, if unavoidable, given the current state of things. See my other proposal (about stability / compatibility) for how to improve things on that front.
There is also something a bit awkward I wanted to indicate, but we just forgot to mention one of the most important tools so far: the **compiler**.
We cannot achieve any modularity if there is no agreement on what should be supported, and this definitely is the most important variable to take into account in the dependency graph of the boost.whatever libraries. If I consider only boost.test, a lot of dependencies can just be removed just by considering C++11, which in turn would lower dramatically the user overhead in order to use the library.
True. But again, let's not kill the discussion by widening the scope further. For the sake of the discussion let's assume that the compiler is a pre-defined constant, rather than an additional (almost free) parameter. Stefan -- ...ich hab' noch einen Koffer in Berlin...
Le 18.06.17 à 21:05, Stefan Seefeld via Boost a écrit :
On 18.06.2017 14:48, Raffi Enficiaud via Boost wrote:
Le 18.06.17 à 18:09, Stefan Seefeld via Boost a écrit :
One of Boost's bottlenecks is its monolithic build infrastructure. It would be beneficial for many different use cases (testing, packaging, etc.) to decouple the building of Boost libraries (well, those that require building, see the listing in http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html...)
such that they may optionally be built stand-alone.
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
If you want to easily distribute boost libraries independently, having boost preinstalled (or already cloned or already downloaded from a release) is not exactly what I would like to see as a user.
You are right. As was pointed out before, "user" stands for two distinct roles:
* "downstream boost developer" using an "upstream boost library" * developers of other software using any boost library
My proposal is more targeted to benefit the former. While independent release cycles would obviously also impact the latter. But decoupling releases is an entirely separate discussion, which I'd rather not get into in the context of this proposal.
As a library maintainer, I can say that boost.test is having hard time to compete with other unit testing libraries such as Catch or google test, not because of the number of files in boost.test itself, because you need to pull a full boost release to run it, which is another level of magnitude.
Precisely, this is exactly one of the things modularization may improve.
Packaging is out of the user story to my opinion, and there are nice people very good at this. Also if installation is easy, packaging is less important I would say (IMO).
Not true in practice. Consider a developer (of the second category above) on Linux. Different distributions use different ways to split Boost libraries into packages, so to be portable, I need to check for them all. That's a huge maintenance burden which could be solved if some basic support for packaging would be built right into the Boost build system (be it Boost.Build or something else).
Maybe I misunderstood, but I thought that you were suggesting that we include the possibility to package boost or any of its components "easily" in boost itself.

First of all, my understanding of packaging is: I can create "packages" that indicate their dependencies and the way they can be built, such that I can install a library on my OS and remove it afterwards (like .deb, pip install, and to some extent brew). To me this goes to the distros'/OS packagers' desk, like the Debian packagers or brew people. Also, for some of them, whatever good packaging support you provide them with, they will just not use it and use their own system instead, because it is stable and they are comfortable with it. So whatever initiative we take within boost concerning that is, IMO, a bit useless. I see packaging as hard; I do not want to do it, not even start thinking of it :)

Also, for users, packaging is a function taking as first parameter the OS version, rather than the boost version. There are things like PPAs, but this is not really helping the end user, I have to say.
Of course you could argue that in many real-world cases even that use-case isn't very common, as lots of commercial software is built by cloning Boost and building that in-house. I think that's very unfortunate if unavoidable, given the current state of things. See my other proposal (about stability / compatibility) for how to improve things on that front.
Well, after almost two decades of boost, I think that by now (at least on linux) people know that there is no ABI compatibility among minor versions of boost, but only between patch versions (and since there are barely any patch releases ...). Which means that the way boost makes version numbers is just different from the "convention" (whatever convention means).

To sum up, I would rather limit the discussion to the "installation" level. As a user of boost.whatever, I want to install boost.whatever once and use it many times. I think this impacts the two notions of users you mentioned.
There is also something a bit awkward I wanted to indicate, but we just forgot to mention one of the most important tools so far: the **compiler**.
We cannot achieve any modularity if there is no agreement on what should be supported, and this definitely is the most important variable to take into account in the dependency graph of the boost.whatever libraries. If I consider only boost.test, a lot of dependencies can just be removed just by considering C++11, which in turn would lower dramatically the user overhead in order to use the library.
True. But again, let's not kill the discussion by widening the scope further. For the sake of the discussion let's assume that the compiler is a pre-defined constant, rather than an additional (almost free) parameter.
Right. Let's call the pruning of the dependency graph an optimization.
On 6/18/2017 12:09 PM, Stefan Seefeld via Boost wrote:
One of Boost's bottlenecks is its monolithic build infrastructure. It would be beneficial for many different use cases (testing, packaging, etc.) to decouple the building of Boost libraries (well, those that require building, see the listing in http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html...) such that they may optionally be built stand-alone.
At present, building a Boost library requires the entire (super-)repository to be checked out, and the build logic itself involves traversing the entire source tree, looking for a "Jamroot".
What would be beneficial to many would be a workflow like this:
1) Have a development environment with (some version of) Boost pre-installed (or at least the parts that the library being built depends on).
2) Check out a single Boost repository (e.g., https://github.com/boostorg/python)
3) Invoke a command to build it (if there is anything to build)
4) Invoke a command to test it
5) Invoke a command to install it
6) Invoke a command to package it (optional)
While it's of course already possible to do all the above by adding support for another build infrastructure (point in case: Boost.Python right now uses SCons for this very reason), this means duplication of effort, as Boost as a whole is still built, tested (and even packaged for binary packages) using Boost.Build, meaning I need to maintain two sets of build infrastructure.
This proposal thus has two parts:
1) Make the nested build logic independent of the outer bits, so individual libraries can be built stand-alone (for example, using b2 by adding a `--with-boost` set of options to point to the location of the prerequisite parts of Boost).
2) Define a clear interface the outer build logic will use to invoke the nested build commands.
Note that for my own case above (Boost.Python), providing 2) would be enough, i.e. by having a way for me to "plug in" my SCons commands to build and test Boost.Python could obsolete the existing Boost.Build logic as defined in Boost.Python (as well as the prerequisite parts in Boost.Build's "python" module).
However, as my proposal is *not* actually advocating to move away from Boost.Build, but rather to modularize it, I think 1) is essential to let everyone else (who may not be inclined to use anything other than Boost.Build) to also take advantage of modularization.
While I'd rather avoid delving into the technical details of how this could possibly be implemented (if you really must, *please* do so in a new thread !), let me outline a few use-cases that would be made possible by the above:
* individual projects would be free to switch to their preferred infrastructure, including CMake, SCons, etc., if they so wish
* individual projects could be much more easily developed and contributed to
* individual projects could be much more easily tested, notably in CI environments
* individual projects could be much more easily packaged
All of the above advantages are *huge*, and reflect real-world needs, and the technical issues to solve these are all minor. The question really is whether there is enough will to move into that direction.
I'd be very happy to participate in the work needed to implement this. But first we need to agree that this is where we want to go.
A serious problem to consider, whenever anyone speaks of a modularized Boost where a library can be distributed on its own, is library dependencies. This problem exists for all libraries, not just Boost. But most libraries have their own dependency system, usually based on whatever OS that library is being used on, whereas Boost libraries almost always intend to be cross-platform. How Boost should solve this problem is for me the bottleneck of distributing a particular Boost library.

When I speak of dependencies I am not just speaking of library X depending on other libraries A, B, and C etc. I am also speaking of library X depending on particular versions of libraries A, B, C etc. But since the only versioning system Boost has is a single version number for a Boost release, and since there is no way a library can check even that single version number of a Boost release either at compile or run-time, Boost libraries have no way to check versioning of other individual Boost libraries on which a library may depend.

If you say, "I am going to distribute my library X with particular releases of libraries A, B, and C etc. with which I know my library X will work correctly", you then have end-users of your library who may have multiple copies of libraries X, A, B, and C etc. on their systems, running each set within its own environment, not knowing how to identify each set of libraries, and hoping they can run these things and avoid the well-known shared library hell which has plagued end-users for years.

Let's be realistic, this is a real problem which only some sort of Boost individual library versioning system for starters can hope to solve.
Thanks, Stefan
On 6/18/17 1:34 PM, Edward Diener via Boost wrote:
Let's be realistic, this is a real problem which only some sort of Boost individual library versioning system for starters can hope to solve.
If I may be so bold as to summarize your point:

In order to distribute boost as individual libraries as opposed to a monolithic set, individual library versioning will sooner or later have to be adopted.

I think this is indisputable. But I don't think we have to worry about it in practice. Whatever we do, it will take some time to get there and, if we ever do get there, I think that adding this feature won't be a big problem. Of course if we don't ever get there, we've got nothing to worry about.

Robert Ramey
On 18/06/2017 22:45, Robert Ramey via Boost wrote:
On 6/18/17 1:34 PM, Edward Diener via Boost wrote:
Let's be realistic, this is a real problem which only some sort of Boost individual library versioning system for starters can hope to solve.
If I may be so bold as to summarize your point:
In order to distribute boost as individual libraries as opposed to a monolithic set, individual library versioning will sooner or later have to be adopted.
I think this is indisputable. But I don't think we have to worry about it in practice. Whatever we do, it will take some time to get there and, if we ever do get there, I think that adding this feature won't be a big problem. Of course if we don't ever get there, we've got nothing to worry about.
I respectfully disagree.

If you want to release your library individually, then it's no longer Boost. Boost is a coherent library collection. You are free to release your library individually, just like ASIO does. You'll need to somehow solve dependency and compatibility issues on other boost libraries yourself.

If git clone is huge for Boost, then it's a git user problem, because it's a decentralized VCS; just use a shallow clone.

If we further modularize Boost libraries, then someone will propose that each library should choose its VCS, bug system and mailing list. I don't like each library using a different build tool (CMake, SCons, etc...). I like the fact that I can write a test jamfile that triggers the creation of any dependent library just because all of them use bjam, and other Boost libraries are designed to act friendly with my library.

If you want to have a Boost library you need to maintain the style and rules of Boost. If you want to be a standalone library then you can already do that, but don't call it Boost. If we want to say those standalone libraries are somewhat related to Boost, then let's invent another name and define more relaxed rules for them.

Best,

Ion
On 6/18/17 2:15 PM, Ion Gaztañaga via Boost wrote:
If you want to release your library individually, then it's no longer Boost.
Hmmm - now we're getting down to "what is boost" ....
Boost is a coherent library collection.
some might differ on this point
You are free to release your library individually, just like ASIO does.
thank you
You'll need to somehow solve dependency and compatibility issues on other boost libraries yourself.
Hmmm - honestly, boost doesn't do as much in this area as one would think. Bjam does handle dependencies - true. Compatibility is managed through testing. I don't think anyone has suggested changing any of this. I think (though I'm not actually sure) that this discussion is about facilitating the usage of CMake by boost users who don't want to be boost developers but rather "just" boost users.
If git clone is huge for Boost, then it's a git user problem, because it's a decentralized VCS, just use a shallow clone.
If we further modularize Boost libraries, then someone will propose that each library should choose its VCS, bug system and mailing list.
Actually, I already proposed this some time ago. In fact we already have much of that. For example, for bugs some libraries use GitHub issues while others use the traditional system. Each library chooses its own documentation tools. The git submodule implementation could be seen as each library having its own VCS, just tied together at the top.
I don't like each library to use a different build tool (CMake, SCons, etc...) I like the fact that I can write my test jamfile triggers the creation of any dependent library just because all of them use bjam and
we're not talking about what you want to do as a boost developer. You can do whatever you want. The question is should you, boost or anyone else tell developers of other libraries what they should do?
other Boost libraries are designed to act friendly with my library.
Right - but only at the source code and local build level. For users using a portion of boost in their apps, they don't see it this way.
If you want to have a Boost library you need to maintain the style and rules of Boost.
Hmm - boost has a lot of rules related to the source code, directory structure, requirements for tests, etc. I don't see this as being impacted.
If you want to be a standalone library then you can already do that, but don't call it Boost. If we want to say those standalone libraries are somewhat related to Boost, then let's invent another name and define more relaxed rule for them.
If one of the promoters of CMake wants to make a "thing" which incorporates the most recent version of the boost source code by reference, I wouldn't object. They could call it "modular boost". But I doubt they'll do it. It's really only an appealing idea if someone else does the actual work.

Robert Ramey
Best,
Ion
On 19/06/2017 0:05, Robert Ramey via Boost wrote:
Actually, I already proposed this some time ago. In fact we already have much of that. For example, for bugs some libraries use git issues while others use the traditional system. Each library chooses it's own documentation tools. The git submodule implementation could be seen as each library having it's own VCS just tied together at the top.
We have both the old and the new bug system, but no library uses Bugzilla for that. That said, I would like to migrate all the old issues from Trac to GitHub. I prefer a trac-style bug management system instead of pull requests, but unification has many advantages. Common tools help maintenance and collaboration between boost developers, and any abandoned library can be rescued because all the tools used are familiar. They help reviews.

Documentation is a different issue; we can't compare the coupling between libraries to the different documentation styles. In any case, many Boost libraries use Quickbook or Boostbook, which IMHO should be encouraged.
I don't like each library to use a different build tool (CMake, SCons, etc...) I like the fact that I can write my test jamfile triggers the creation of any dependent library just because all of them use bjam and
we're not talking about what you want to do as a boost developer. You can do whatever you want. The question is should you, boost or anyone else tell developers of other libraries what they should do?
Boost is a voluntary organization. Organizations have rules that try to help the goal of the organization. If you want to call your library "Boost" you need to follow the rules. This makes a lot of sense to me. I could release my library directly on GitHub, but by following the Boost rules my library is interoperable with other libraries, and my dependencies don't break often. It's not as flexible as I might want, but it has a lot of advantages.

If each library is free to choose its build system, release dates, tracking system... what's the point of naming it "Boost"? Just because they were reviewed on the boost mailing list? I need to build several Boost libraries as my library depends on them. Do I need to learn different build tools just to run my tests?
other Boost libraries are designed to act friendly with my library.
Right - but only at the source code and local build level. For users using a portion of boost in their apps, they don't see it this way.
If you want to have a Boost library you need to maintain the style and rules of Boost.
Hmm - boost has a lot of rules related to the source code, directory structure, requirements for tests, etc. I don't see this as being impacted.
The impact on how other libraries build, name, find dependencies, ... is much more important to my library than how the source code of those libraries is written. I understand that some aspects of Boost don't work as desired/expected, but I doubt any "modularization" that allows different build/trac systems will solve them. IMHO the entropy can only increase. Best, Ion
On 6/18/2017 5:15 PM, Ion Gaztañaga via Boost wrote:
On 18/06/2017 22:45, Robert Ramey via Boost wrote:
On 6/18/17 1:34 PM, Edward Diener via Boost wrote:
Let's be realistic, this is a real problem which only some sort of Boost individual library versioning system for starters can hope to solve.
If I may be so bold as to summarize your point:
In order to distribute boost as individual libraries as opposed to a monolithic set, individual library versioning will sooner or later have to be adopted.
I think this is indisputable. But I don't think we have to worry about it in practice. Whatever we do, it will take some time to get there and, if we ever do get there, I think that adding this feature won't be a big problem. Of course if we don't ever get there, we've got nothing to worry about.
I respectfully disagree.
If you want to release your library individually, then it's no longer Boost. Boost is a coherent library collection. You are free to release your library individually, just like ASIO does. You'll need to somehow solve dependency and compatibility issues on other boost libraries yourself.
If you think about this a little more deeply I think you will understand that this methodology will not work. As in:

1) You release Boost library X which uses some version of Boost library A.
2) Boost library Y is released which uses a different version of Boost library A.
3) The end-user attempts to use library X and library Y in the same TU.

This has absolutely nothing to do with you releasing library X and "somehow solve dependency and compatibility issues on other boost libraries yourself". The current equivalent to this with monolithic Boost is the end-user using Boost 1.63 and Boost 1.64 together in his own module ( library or executable ). The end-user knows enough not to do this because for either 1.63 or 1.64 all libraries are "guaranteed" to work in tandem. But this can't happen with individual Boost libraries ( and their dependencies ) being released by themselves without any form of versioning.
If a git clone of Boost is huge, then that's a git usage problem: git is a decentralized VCS, so just use a shallow clone.
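For reference, a shallow clone of the super-project is a one-liner; this is only a sketch of the point being made, and the --shallow-submodules option needs a reasonably recent git:

    # fetch only the latest revision of the super-project and its submodules
    git clone --depth 1 --recursive --shallow-submodules https://github.com/boostorg/boost.git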
If we further modularize Boost libraries, then someone will propose that each library should choose its VCS, bug system and mailing list.
I don't like each library using a different build tool (CMake, SCons, etc.). I like the fact that my test jamfile can trigger the build of any dependent library, just because all of them use bjam and other Boost libraries are designed to play nicely with my library.
If you want to have a Boost library you need to maintain the style and rules of Boost. If you want to be a standalone library then you can already do that, but don't call it Boost. If we want to say those standalone libraries are somewhat related to Boost, then let's invent another name and define more relaxed rules for them.
Best,
Ion
On 18.06.2017 16:45, Robert Ramey via Boost wrote:
On 6/18/17 1:34 PM, Edward Diener via Boost wrote:
Let's be realistic, this is a real problem which only some sort of Boost individual library versioning system for starters can hope to solve.
If I may be so bold as to summarize your point:
In order to distribute boost as individual libraries as opposed to a monolithic set, individual library versioning will sooner or later have to be adopted.
I think this is indisputable. But I don't think we have to worry about it in practice. Whatever we do, it will take some time to get there and, if we ever do get there, I think that adding this feature won't be a big problem. Of course, if we don't ever get there, we've got nothing to worry about.
I watch (somewhat in horror, I have to admit) the follow-up mails as they predictably derail into unmanageable scenarios. So I'd like to point out that my proposal in no way implies any particular release policy, i.e. whether individual libraries are released independently or not. So when you dive into that discussion, please be aware that it's entirely orthogonal to the proposal at hand. Whether or not Boost libraries are released as independent entities has no bearing on the usefulness or feasibility of modularizing Boost. Thank you, Stefan -- ...ich hab' noch einen Koffer in Berlin...
On 6/19/2017 1:21 AM, Stefan Seefeld via Boost wrote:
On 18.06.2017 16:45, Robert Ramey via Boost wrote:
On 6/18/17 1:34 PM, Edward Diener via Boost wrote:
Let's be realistic, this is a real problem which only some sort of Boost individual library versioning system for starters can hope to solve.
If I may be so bold as to summarize your point:
In order to distribute boost as individual libraries as opposed to a monolithic set, individual library versioning will sooner or later have to be adopted.
I think this is indisputable. But I don't think we have to worry about it in practice. Whatever we do, it will take some time to get there and, if we ever do get there, I think that adding this feature won't be a big problem. Of course, if we don't ever get there, we've got nothing to worry about.
I watch (somewhat in horror, I have to admit) the follow-up mails as they predictably derail into unmanageable scenarios. So I'd like to point out that my proposal in no way implies any particular release policy, i.e. whether individual libraries are released independently or not. So when you dive into that discussion, please be aware that it's entirely orthogonal to the proposal at hand. Whether or not Boost libraries are released as independent entities has no bearing on the usefulness or feasibility of modularizing Boost.
You did mention in your OP:

5) Invoke a command to install it
6) Invoke a command to package it (optional)

Maybe you need to be more specific about what you mean in each case. It sure sounds to me, by the 2 items above, as if you meant to suggest that you could distribute an individual Boost library ( and its dependencies ) separately from the current monolithic Boost tree. If so, I am suggesting that without a very well worked out versioning system for individual Boost libraries such a plan will end up with serious problems for the end-user.
Thank you, Stefan
Edward Diener wrote:
You did mention in your OP:
5) Invoke a command to install it
Ah, I forgot about those, thanks Edward.

./b2 --with-python stage

There is a subtlety here, one which might not matter in practice. This libboost_python will be compiled against the checked-out Config, instead of the 1.53 preinstalled one. I'm pretty sure that this will not cause any problems, but if you want to be strict, you'll have to murder -rf libs/config/include before starting the test/stage procedure. The reason for this is that currently a part of Boost.Build depends on a part of Boost.Config (for the feature checks such as cxx11_smart_ptr, which I see Boost.Python uses), so libs/config needs to be checked out for tools/build to work. But the actual C++ portion of Boost.Config isn't needed; we could use the preinstalled one.
6) Invoke a command to package it (optional)
On your own here, sorry.
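For concreteness, a shell sketch of the kind of partial checkout and staging Peter describes above. The submodule list is a guess at the minimum b2 needs here (tools/build, plus libs/config for the config checks, plus the library itself), with the remaining headers expected to come from the preinstalled Boost; it is not an official recipe and may need adjusting:

    # partial super-project checkout, then stage only Boost.Python
    git clone --depth 1 https://github.com/boostorg/boost.git
    cd boost
    git submodule update --init tools/build libs/config libs/python
    ./bootstrap.sh
    ./b2 --with-python stage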
On 19.06.2017 09:32, Edward Diener via Boost wrote:
You did mention in your OP:
5) Invoke a command to install it
6) Invoke a command to package it (optional)
Maybe you need to be more specific about what you mean in each case. It sure sounds to me, by the 2 items above, as if you meant to suggest that you could distribute an individual Boost library ( and its dependencies ) separately from the current monolithic Boost tree.
I did indeed.
If so, I am suggesting that without a very well worked out versioning system for individual Boost libraries such a plan will end up with serious problems for the end-user.
Distributing Boost libraries separately doesn't imply that I don't respect Boost's releases or version numbers. Packaging and releasing are orthogonal concepts. See for example https://apps.fedoraproject.org/packages/boost and https://packages.debian.org/jessie/libboost1.55-dev. Fedora and Debian are two popular Linux distributions, and they do of course provide separate Boost packages. But given the lack of guidelines (or infrastructure) to package Boost components (which includes not just libraries, but also the tools to build them and their documentation), their packaging approaches differ... Stefan -- ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
Packaging and releasing are orthogonal concepts.
Not really. Releasing separately from Boost means that you now have two Boost versions to communicate: the version of Boost.Python, and the version of the preinstalled Boost against which it was built (I built, f.ex. libboost-python-1.65.0-1.53.0 in my CentOS 7 case.) This consequently affects your downstream dependencies, which also have to include these two versions, in addition to their own. So when a library X depends on Boost.Python and it's compiled against Boost.Python 1.65.0-1.53.0, its version would now be X-1.17-1.65.0-1.53.0. This is not compatible with Y-2.1-1.63.0-1.53.0, even though both are built in the same CentOS 7 environment.
On 19.06.2017 10:32, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
Packaging and releasing are orthogonal concepts.
Not really. Releasing separately from Boost means that you now have two Boost versions to communicate: the version of Boost.Python, and the version of the preinstalled Boost against which it was built (I built, f.ex. libboost-python-1.65.0-1.53.0 in my CentOS 7 case.)
This consequently affects your downstream dependencies, which also have to include these two versions, in addition to their own. So when a library X depends on Boost.Python and it's compiled against Boost.Python 1.65.0-1.53.0, its version would now be X-1.17-1.65.0-1.53.0. This is not compatible with Y-2.1-1.63.0-1.53.0, even though both are built in the same CentOS 7 environment.
I have no idea why you are mentioning all this. How often do I need to repeat that this proposal is about modularization, not about decoupling the release process of individual Boost components ? Stefan -- ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
I have no idea why you are mentioning all this. How often do I need to repeat that this proposal is about modularization, not about decoupling the release process of individual Boost components ?
All right. What, specifically, do you want further modularized? Did you even read what I wrote in my other posts?
On 19.06.2017 10:59, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
I have no idea why you are mentioning all this. How often do I need to repeat that this proposal is about modularization, not about decoupling the release process of individual Boost components ?
All right. What, specifically, do you want further modularized? Did you even read what I wrote in my other posts?
I did, yes. Sorry for not specifically acknowledging that. I know about these tricks (I use something similar to set up my Boost.Python CI environment), but consider them hacks as they don't really solve the fundamental requests, which are:

* I want to be able to build a given Boost library stand-alone, with nothing but the library's repo being checked out. (In other words: prerequisite Boost components should be assumed pre-installed.)
* I want to be able to choose the build (, test, etc.) infrastructure used for my library, and make sure that when Boost is built as a whole, that build infrastructure is used.

I can't stress this enough: this second point is 99% non-technical. It's about letting project maintainers decide how they develop (build, test, etc.), so we don't have to have these "let's all switch from tool A to tool B" discussions any longer. The choice between b2, cmake, scons should be made per project, rather than by the whole Boost organization at once. The only technical aspect of this is the definition of the interface between super-project and sub-project, i.e. the mechanism by which Boost.Build invokes the library-specific build systems for those of us who want to continue building Boost as a whole. Stefan -- ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
I know about these tricks (I use something similar to set up my Boost.Python CI environment), but consider them hacks as they don't really solve the fundamental requests, which are:
* I want to be able to build a given Boost library stand-alone, with nothing but the library's repo being checked out. (In other words: prerequisite Boost components should be assumed pre-installed.)
Why is this so important for you? The difference is just one superproject shell and one tools/build. Is this a matter of principle, or are there technical reasons?

Anyway. Boost.Build actually supports this use case:

sudo yum install boost-jam
sudo yum install boost-build
git clone --depth=1 https://github.com/boostorg/python
cd python
touch Jamroot
bjam test

The first error here is

IMPORT error: rule "requires" unknown in module "../../config/checks/config"

because we don't have Boost.Config checked out as a sibling here. When I comment out your use of ../../config/checks/config in test/Jamfile, it errors out with

rule numpy_test unknown

Trying to build with

bjam build

fails with

rule py-version unknown

Both of these errors are because these rules don't exist in the 1.53 python.jam. So frankly, I'm not sure how you suggest this needs to be addressed. On one hand, you want to use the system Boost.Build, and on the other, your Jamfiles depend on features that aren't present in it. This simply cannot work, your desires are contradictory.
On 19.06.2017 12:32, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
I know about these tricks (I use something similar to set up my Boost.Python CI environment), but consider them hacks as they don't really solve the fundamental requests, which are:
* I want to be able to build a given Boost library stand-alone, with nothing but the library's repo being checked out. (In other words: prerequisite Boost components should be assumed pre-installed.)
Why is this so important for you? The difference is just one superproject shell and one tools/build. Is this a matter of principle, or are there technical reasons?
Both. I want to control the environment in which my library is being built (, tested, etc.), and thus I don't want to fetch that environment from a repository and build it on-the-fly.
Anyway.
Boost.Build actually supports this use case:
sudo yum install boost-jam
sudo yum install boost-build
git clone --depth=1 https://github.com/boostorg/python
cd python
touch Jamroot
bjam test
The first error here is
IMPORT error: rule "requires" unknown in module "../../config/checks/config"
because we don't have Boost.Config checked out as a sibling here.
When I comment out your use of ../../config/checks/config in test/Jamfile, it errors out with
rule numpy_test unknown
Trying to build with
bjam build
fails with
rule py-version unknown
Both of these errors are because these rules don't exist in the 1.53 python.jam.
So frankly, I'm not sure how you suggest this needs to be addressed.
The numpy_test rule should probably have been added to the local Jamfile (i.e., be part of Boost.Python, rather than Boost.Build). I'm not sure about the config checks. Arguably they are part of the build system, and thus should be included in Boost.Build, rather than a separate Boost library.
On one hand, you want to use the system Boost.Build, and on the other, your Jamfiles depend on features that aren't present in it. This simply cannot work, your desires are contradictory.
Not sure what your point is. I think it's perfectly normal that on some platforms the default (system) version of a package is too old, so a newer version needs to be pulled in from another repo ("testing", perhaps ?). That doesn't invalidate my desire to work with system packages though, rather than development versions. Stefan -- ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
On one hand, you want to use the system Boost.Build, and on the other, your Jamfiles depend on features that aren't present in it. This simply cannot work, your desires are contradictory.
Not sure what your point is. I think it's perfectly normal that on some platforms the default (system) version of a package is too old, so a newer version needs to be pulled in from another repo ("testing", perhaps ?). That doesn't invalidate my desire to work with system packages though, rather than development versions.
My point is that the reason it doesn't work is that your Jamfiles are using features not present in the system Boost.Build. You were complaining that we need to make it work. But there's no deficiency to be addressed here. It already works. So what's the complaint?
On 19.06.2017 13:00, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
On one hand, you want to use the system Boost.Build, and on the other, your Jamfiles depend on features that aren't present in it. This simply cannot work, your desires are contradictory.
Not sure what your point is. I think it's perfectly normal that on some platforms the default (system) version of a package is too old, so a newer version needs to be pulled in from another repo ("testing", perhaps ?). That doesn't invalidate my desire to work with system packages though, rather than development versions.
My point is that the reason it doesn't work is that your Jamfiles are using features not present in the system Boost.Build.
You were complaining that we need to make it work. But there's no deficiency to be addressed here. It already works.
So what's the complaint?
I'm not "complaining", I'm *proposing* an architectural change. Again: As a boost library maintainer I want to decouple my library from the rest of Boost to be able to * build it as a unit (i.e., with everything else being fixed as a prerequisite, rather than being built on-the-fly) * use tools of my own choice to build, test, package, issue-track, document (etc., etc.) my library, so we won't need to agree on this scale whether to use tool A or tool B. (Recall the question triggering this proposal was the proposal *for the entirely of Boost libraries* to switch from Boost.Build to CMake.) I'd be more than happy to learn that this is already possible, at which point I'd write an article (a wiki page, say) to document how to do it. So let's assume that creating a "Jamroot" file in my library's root directory is all it takes to let b2 build my library stand-alone. Then what about the second point, which was:
2) Define a clear interface the outer build logic will use to invoke the nested build commands.
In other words, what does that Jamroot file need to contain at a minimum, to satisfy the global build processes (i.e., the ones used to build Boost as a whole, including building release docs etc.) ? There are globally called rules such as "boost-install", "boostrelease", etc. that seem to be required. And what about parameters such as build variants or toolchain versions ? How can I intercept those such that I can call my own (local) build logic ? Is that documented anywhere ? Stefan -- ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
Then what about the second point, which was:
2) Define a clear interface the outer build logic will use to invoke the nested build commands.
The clear interface at present is that you need to have a Jamfile. Building using something else as part of the global build is not supported, unless you somehow invoke this something else from your Jamfile. But this would imply that whatever something else you pick - for instance, SCons - would now become a prerequisite for building Boost.
In other words, what does that Jamroot file need to contain at a minimum, to satisfy the global build processes (i.e., the ones used to build Boost as a whole, including building release docs etc.) ?
The global build process currently in use requires you to not have a Jamroot.
There are globally called rules such as "boost-install", "boostrelease", etc. that seem to be required.
Yes, if you want to use the boost-install rule, it won't work without the global Boost Jamroot. I'm not entirely clear on what we're talking about here though. Are we in the "git clone boostorg/python" standalone case yet, or are we in "git clone --recursive boostorg/boost", except you want to use SCons for libs/python instead?
On 19.06.2017 14:58, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
Then what about the second point, which was:
2) Define a clear interface the outer build logic will use to invoke the nested build commands.
The clear interface at present is that you need to have a Jamfile. Building using something else as part of the global build is not supported, unless you somehow invoke this something else from your Jamfile. But this would imply that whatever something else you pick - for instance, SCons - would now become a prerequisite for building Boost.
In other words, what does that Jamroot file need to contain at a minimum, to satisfy the global build processes (i.e., the ones used to build Boost as a whole, including building release docs etc.) ?
The global build process currently in use requires you to not have a Jamroot.
There are globally called rules such as "boost-install", "boostrelease", etc. that seem to be required.
Yes, if you want to use the boost-install rule, it won't work without the global Boost Jamroot.
I'm not entirely clear on what we're talking about here though. Are we in the "git clone boostorg/python" standalone case yet, or are we in "git clone --recursive boostorg/boost", except you want to use SCons for libs/python instead?
*Sigh*. Am I really expressing myself that poorly ? I want more autonomy / independence for individual libraries. I want to be able to build them stand-alone, which you tell me already works if I have a top-level Jamroot in my repo, except you then tell me that I in fact may not, because the global build process requires me not to have one.

I also want to be able to pick my own build (etc.) tools, not in addition to Boost.Build, but instead of it. I understand that right now that's not supported, which is why I'm writing this proposal. What would it take for Boost to support individual libraries to be built with anything else ? What requirements would that "anything" have to meet, and how would it interact with the existing infrastructure to work ? Is that such a strange request ? Stefan -- ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
What would it take for Boost to support individual libraries to be built with anything else ?
In what scenario? Standalone, or as part of the Boost release? If standalone, it's up to you to support whatever you like. If as part of the release, this would mean that everyone who wants to build a Boost release would now need to have your preferred build system installed. Currently, we don't require anything else, as Boost.Build is part of the release. So this would be a significant regression in usability.
On 19.06.2017 15:48, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
What would it take for Boost to support individual libraries to be built with anything else ?
In what scenario? Standalone, or as part of the Boost release?
Both, as the goal is not to add more infrastructure, but to replace it.
If standalone, it's up to you to support whatever you like.
If as part of the release, this would mean that everyone who wants to build a Boost release would now need to have your preferred build system installed. Currently, we don't require anything else, as Boost.Build is part of the release. So this would be a significant regression in usability.
I understand. This is a bit of a vicious circle: Right now Boost is always built as a whole, so lots of people do it. In a modular Boost world, fewer people would build all of boost, as it's much easier to build just the libraries people need. Stefan -- ...ich hab' noch einen Koffer in Berlin...
On Mon, Jun 19, 2017 at 2:57 PM, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
On 19.06.2017 15:48, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
What would it take for Boost to support individual libraries to be built with anything else ?
In what scenario? Standalone, or as part of the Boost release?
Both, as the goal is not to add more infrastructure, but to replace it.
If standalone, it's up to you to support whatever you like.
If as part of the release, this would mean that everyone who wants to build a Boost release would now need to have your preferred build system installed. Currently, we don't require anything else, as Boost.Build is part of the release. So this would be a significant regression in usability.
I understand. This is a bit of a vicious circle: Right now Boost is always built as a whole, so lots of people do it. In a modular Boost world, fewer people would build all of boost, as it's much easier to build just the libraries people need.
I don't see it as much of a circle. As long as boost is monolithic, we need to stick with one tool (b2 or cmake or whatever). After it splits into a modular structure and no one needs/wants to build it all at once, then we could open up other tools. I don't mind installing one or two prerequisites on my build machines, but if each library has its own (conflicting?!?) requirements that'd get unworkable.

However, just because we go modular doesn't mean that we should throw open the door to each library maintainer doing whatever they want. It might not be ideal for each library, but there is some benefit to standardizing on tools. Other organizations put requirements on disparate projects. For a long time (not sure if this is still the case) the Apache project required all its member projects to use SVN for source control, and those projects are a lot less homogeneous than ours. Tom
On 19.06.2017 17:28, Tom Kent via Boost wrote:
On Mon, Jun 19, 2017 at 2:57 PM, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
On 19.06.2017 15:48, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
What would it take for Boost to support individual libraries to be built with anything else ?

In what scenario? Standalone, or as part of the Boost release?

Both, as the goal is not to add more infrastructure, but to replace it.
If standalone, it's up to you to support whatever you like.
If as part of the release, this would mean that everyone who wants to build a Boost release would now need to have your preferred build system installed. Currently, we don't require anything else, as Boost.Build is part of the release. So this would be a significant regression in usability.

I understand. This is a bit of a vicious circle: Right now Boost is always built as a whole, so lots of people do it. In a modular Boost world, fewer people would build all of boost, as it's much easier to build just the libraries people need.
I don't see it as much of a circle. As long as boost is monolithic, we need to stick with one tool (b2 or cmake or whatever). After it splits into a modular structure and no one needs/wants to build it all at once, then we could open up other tools. I don't mind installing one or two prerequisites on my build machines, but if each library has its own (conflicting?!?) requirements that'd get unworkable.
I agree. So, please consider my use-case of "I want to use my own build tool" purely as an illustration of why modularization is useful. The main goal of this modularization proposal remains to break the build process up, so that a top-level `./b2` invocation would do little more than iterate over all Boost libraries and invoke some build command there (the details of which remain to be determined).
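Purely as an illustration of that idea, a minimal sketch of such a top-level driver might look like this; the per-library build.sh entry point is hypothetical (no such convention exists today), and the real mechanism would presumably live inside Boost.Build:

    # hypothetical driver: iterate over the library tree and delegate the build
    for lib in libs/*/ ; do
        if [ -x "$lib/build.sh" ] ; then   # per-library entry point (hypothetical)
            ( cd "$lib" && ./build.sh "$@" ) || exit 1
        fi
    done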
However, just because we go modular doesn't mean that we should throw open the door to each library maintainer doing whatever they want. It might not be ideal for each library, but there is some benefit to standardizing on tools. Other organizations put requirements on disparate projects. For a long time (not sure if this is still the case) the Apache project required all its member projects to use SVN for source control, and those projects are a lot less homogeneous than ours.
Understood. One reason I'm writing this proposal is to question whether the level of homogeneity that we currently have (and require) is actually enforceable (or even desirable) for a project the size of Boost. It may well have been the right choice when Boost only consisted of a handful of libraries. But nowadays, the question is at least worth being asked again. Stefan -- ...ich hab' noch einen Koffer in Berlin...
I'd be happy with boost easily supporting some way of cross compiling that does not involve undocumented features and writing a bunch of custom jam files that show up in random locations. I'd be ecstatic if there was a way to integrate it with an existing cmake build system. Heck, I'd be happy with a clearly defined procedure so that I could write a build file for some libraries. Some libraries build pretty easily (python); some are so difficult and pull in so many dependencies that it is faster to rewrite the code than to figure out how to install them independently (Log). As a user, I don't see why this is such a hard ask. If a tool like cmake can't be easily supported for individual libs then the build system is quite frankly unmaintainable. Is cmake really so exotic today that supporting it as an option is prohibitive?
On Mon, Jun 19, 2017 at 1:48 PM, Peter Dimov via Boost
Stefan Seefeld wrote:
What would it take for Boost to support individual libraries to be built with anything else ?
In what scenario? Standalone, or as part of the Boost release?
If standalone, it's up to you to support whatever you like.
If as part of the release, this would mean that everyone who wants to build a Boost release would now need to have your preferred build system installed. Currently, we don't require anything else, as Boost.Build is part of the release. So this would be a significant regression in usability.
On Mon, 19 Jun 2017 at 15:39 Stefan Seefeld via Boost
I also want to be able to pick my own build (etc.) tools, not in addition to Boost.Build, but instead of it. I understand that right now that's not supported, which is why I'm writing this proposal. What would it take for Boost to support individual libraries to be built with anything else ? What requirements would that "anything" have to meet, and how would it interact with the existing infrastructure to work ? Is that such a strange request ?
I strongly disagree with allowing that to happen. I'm a fan of the current model of distributing boost because of how I've seen it distributed at large companies. Every company I have worked at has maintained its own custom, internal build system. To integrate boost into any of those systems has been pretty easy -- it involves writing a script to bootstrap, invoke b2, copy headers and libs, and possibly generate some synthetic targets to let the custom build system know how to find boost. In some cases I've had to modify boost build to support weird proprietary compiler variants -- which was a pain -- but imagine if every library was doing its own thing? Now in order to use your library, I need to get approval for the licence on the build system, then I need to make the build system work and integrate it with whatever our custom setup is. Then possibly I need to patch it to work with whatever special compiler we're currently using, and I potentially need to do that more than once.

For me, the simplicity of monolithic (or at least unified) boost is worth a huge amount and I would really hate to lose that. I feel the same about the licensing issue that came up a few weeks ago, for the same reasons -- getting approval to use and distribute boost at a large company is vastly simpler if things are consistent. -- chris
I also want to be able to pick my own build (etc.) tools, not in addition to Boost.Build, but instead of it. I understand that right now that's not supported, which is why I'm writing this proposal. What would it take for Boost to support individual libraries to be built with anything else ? What requirements would that "anything" have to meet, and how would it interact with the existing infrastructure to work ? Is that such a strange request ?
Absolutely anything at all? You cannot do integration testing if every library uses something different. You can't even do a single build and install everything. IMO there does have to be a common build system for that stuff (whatever that may be), if authors want to ship with some other build system as well, then that's just fine too. John.
On 20.06.2017 04:17, John Maddock via Boost wrote:
I also want to be able to pick my own build (etc.) tools, not in addition to Boost.Build, but instead of it. I understand that right now that's not supported, which is why I'm writing this proposal. What would it take for Boost to support individual libraries to be built with anything else ? What requirements would that "anything" have to meet, and how would it interact with the existing infrastructure to work ? Is that such a strange request ?
Absolutely anything at all?
(I'm not sure I understand what you mean. I'm specifically asking about requirements that would restrict that "anything". So no, not absolutely anything.)
You cannot do integration testing if every library uses something different. You can't even do a single build and install everything.
I think that's part of my point: At this point in time, who actually needs the entirety of Boost built and installed as a single entity, other than by habit ? There are so many different libraries, targeting different audiences. Is there anybody using all of them ? Would it really hurt anyone if they had to install Boost.MPI, Boost.Compute, and Boost.Python (to name a few domain-specific ones) separately ?
IMO there does have to be a common build system for that stuff (whatever that may be), if authors want to ship with some other build system as well, then that's just fine too.
John.
Stefan -- ...ich hab' noch einen Koffer in Berlin...
On Tue, Jun 20, 2017 at 7:02 AM, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
I think that's part of my point: At this point in time, who actually needs the entirety of Boost built and installed as a single entity, other than by habit ? There are so many different libraries, targeting different audiences. Is there anybody using all of them ? Would it really hurt anyone if they had to install Boost.MPI, Boost.Compute, and Boost.Python (to name a few domain-specific ones) separately ?
There have been a number of people who've expressed, in precisely these cmake/modular threads, the experience of and need to use Boost *only* as a single entity - people who, I would point out, I've never seen post before. So it tells you something about how strong their position is. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net -- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
On 20.06.2017 08:22, Rene Rivera via Boost wrote:
On Tue, Jun 20, 2017 at 7:02 AM, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
I think that's part of my point: At this point in time, who actually needs the entirety of Boost built and installed as a single entity, other than by habit ? There are so many different libraries, targeting different audiences. Is there anybody using all of them ? Would it really hurt anyone if they had to install Boost.MPI, Boost.Compute, and Boost.Python (to name a few domain-specific ones) separately ?
There have been a number of people who've expressed, in precisely these cmake/modular threads, the experience of and need to use Boost *only* as a single entity - people who, I would point out, I've never seen post before. So it tells you something about how strong their position is.

Yes, of course ! I do understand the advantage of Boost being a single entity. And to some there is just that advantage, as they don't have to deal with the disadvantage(s). So in the end it's a balancing act where we have to weigh the different arguments.
I still think we are getting ahead of ourselves: my proposal wasn't (and still isn't) about replacing build systems (even though it is definitely motivated by that option); it's about modularizing the process, to make it easier to build components (i.e., libraries) stand-alone. Once that is possible, and once people actually start to build (and use) components separately, our perspective on what Boost is and how it is being used may change. Stefan -- ...ich hab' noch einen Koffer in Berlin...
Stefan Seefeld wrote:
Why is this so important for you? The difference is just one superproject shell and one tools/build. Is this a matter of principle, or are there technical reasons?
Both. I want to control the environment in which my library is being built (, tested, etc.), and thus I don't want to fetch that environment from a repository and build it on-the-fly.
...
I think it's perfectly normal that on some platforms the default (system) version of a package is too old, so a newer version needs to be pulled in from another repo ("testing", perhaps ?).
So you consider it perfectly normal to pull boost.build 1.64 from some "testing" repo, but completely unacceptable to check out the boost-1.64.0 tag of github.com/boostorg/build? I'm sorry, but this makes no sense to me.
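For reference, checking out that tag stand-alone is itself a one-liner; this is only a sketch, using the boost-1.64.0 tag Peter names:

    # fetch just the tagged Boost.Build sources, nothing else
    git clone --depth 1 --branch boost-1.64.0 https://github.com/boostorg/build.git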
On Mon, 2017-06-19 at 19:32 +0300, Peter Dimov via Boost wrote:
Stefan Seefeld wrote:
I know about these tricks (I use something similar to set up my Boost.Python CI environment), but consider them hacks as they don't really solve the fundamental requests, which are:
* I want to be able to build a given Boost library stand-alone, with nothing but the library's repo being checked out. (In other words: prerequisite Boost components should be assumed pre-installed.)
Why is this so important for you? The difference is just one superproject shell and one tools/build. Is this a matter of principle, or are there technical reasons?
Anyway.
Boost.Build actually supports this use case:
sudo yum install boost-jam
sudo yum install boost-build
git clone --depth=1 https://github.com/boostorg/python
cd python
touch Jamroot
bjam test
The first error here is
IMPORT error: rule "requires" unknown in module "../../config/checks/config"
because we don't have Boost.Config checked out as a sibling here.
Instead of hard-coding paths to the build modules, it would be nice if it searched for the modules instead, and then it could fall back on the hard-coded paths when the search fails. Ideally, instead of inventing a search algorithm to find the modules, pkgconfig could be used here. So when Boost.Config is installed, the .pc would add a variable that is the location of its bjam files for consumption:

prefix=<install-location>
bjam_dir=${prefix}/bjam_files

Name: boost_config

And then you can call `pkg-config boost_config --variable=bjam_dir` and it will give the directory of the bjam modules installed with boost_config.
paul wrote:
The first error here is
IMPORT error: rule "requires" unknown in module "../../config/checks/config"
because we don't have Boost.Config checked out as a sibling here.
Instead of hard-coding paths to the build modules, it would be nice if it searched for the modules instead, and then it could fall back on the hard-coded paths when the search fails.
Ideally, instead of inventing a search algorithm to find the modules, pkgconfig could be used here. So when boost config is installed, the .pc would add a variable that is the location of its bjam files for consumption: ...
This won't work. The system Boost.Config installation on CentOS 7 is just the headers, as part of (yum) package boost. There are no bjam files anywhere to be found, no pkgconfig, no boost_config module.
On 6/19/17 8:46 AM, Stefan Seefeld via Boost wrote:
* I want to be able to build a given Boost library stand-alone, with nothing but the library's repo being checked out. (In other words: prerequisite Boost components should be assumed pre-installed.) * I want to be able to choose the build (, test, etc.) infrastructure used for my library, and make sure that when Boost is built as a whole, that build infrastructure is used.
Hmmmm - I'm thinking I'm doing that already. Here's what I do:

a) I have a modular boost clone on my machine - set to the master branch.
b) I also have in the same tree libraries which I'm working on, with the branch set to develop. These might be libraries already in boost, like boost.rational or boost.serialization. They might also be totally new libraries/applications which are not in boost at all.
c) Working from my shell, I move to the directory which interests me. It's currently boost/libs/safe_numerics/test, but it could be anywhere which has a Jamfile.v2.
d) Then I invoke b2*, which builds anything dependent and then the targets in the local Jamfile.v2.
e) So then I've got exactly what I need with pretty much no hassle - except getting the switches to bjam right. That last bit is a hassle, as I test with various toolsets.

* Actually, rather than invoking b2 directly, I invoke ../../../library_status.sh, which runs b2 and then produces my own very cool html global table of all the tests I've run, by compiler, build variant, link variant, etc.

Sooooo - I'm not getting what we're missing here. And I'm not getting what the CMake advocates want either. Actually I'm not understanding what anyone (but me) wants which we don't already have, except an easier way to do what we're already doing.

BTW I do use CMake, because I like to use an IDE for edit, test, etc. Using an IDE without this was a painful maintenance nightmare. Fortunately, making a CMake file to drive my IDE is pretty simple. You can see what I did in the github develop branch of the safe_numerics or serialization library. The Jamfile.v2 scripts are actually even simpler - just a list of tests to run. Robert Ramey
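A shell sketch of the per-library workflow Robert describes, assuming the super-project has been bootstrapped so that b2 sits at the boost root; the toolset and variant values are illustrative:

    # build and run one library's tests in-tree; dependent libraries get built on demand
    cd boost/libs/safe_numerics/test
    ../../../b2 toolset=gcc variant=debug,release
    # or produce the HTML summary table via the script Robert mentions
    ../../../library_status.sh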
On 06/19/17 17:32, Peter Dimov via Boost wrote:
Releasing separately from Boost means that you now have two Boost versions to communicate: the version of Boost.Python, and the version of the preinstalled Boost against which it was built (I built, f.ex. libboost-python-1.65.0-1.53.0 in my CentOS 7 case.)
This consequently affects your downstream dependencies, which also have to include these two versions, in addition to their own. So when a library X depends on Boost.Python and it's compiled against Boost.Python 1.65.0-1.53.0, its version would now be X-1.17-1.65.0-1.53.0. This is not compatible with Y-2.1-1.63.0-1.53.0, even though both are built in the same CentOS 7 environment.
I think you unnecessarily complicate things. Yes, naturally lower level dependencies define the effective set of software that is required for the upper level software to build, run and be tested. But it doesn't mean those dependencies need to be reflected in version numbers (they don't) or that the upper level software doesn't work with different versions of the dependencies. Normally, when you release a piece of software, you declare the minimum versions of the prerequisites - preferably, the oldest versions that were successfully tested. IMHO, formal dependency management is a task related to packaging, which is specific to the packaging system. I don't think Boost should be doing this work beyond documenting the dependencies. Let packagers deal with the technical part of enforcing these dependencies with the means they have at their disposal.
On 18 June 2017 at 23:34, Edward Diener via Boost
When I speak of dependencies I am not just speaking of library X depending on other libraries A, B, and C etc. I am also speaking of library X depending on particular versions of library A, B, C etc. But since the only versioning system Boost has is a single version number for a Boost release, and since there is no way a library can check even that single version number of a Boost release either at compile or run-time, Boost libraries have no way to check versioning of other individual Boost libraries on which a library may depend.
If you say "I am going to distribute my library X with particular releases of libraries A, B, and C etc. with which I know my library X will work correctly", you then have end-users of your library who may have multiple copies of libraries X, A, B, and C etc. on their systems, ...
and: Robert Ramey wrote: "In order to distribute boost as individual libraries as opposed to a monolithic set, individual library versioning will sooner or later have to be adopted." This is exactly how Rust's Cargo works. Developers can specify the version (or version and above) against which their library should be built. I'm not claiming Cargo is perfect, but it's really pretty good at this and creates complete transparency as to what the dependencies are. Having multiple versions around is a consequence of this, though. Cargo just works! Bar errors that did not come out in testing, this guarantees stability for every library individually, which in its turn guarantees overall stability. This is pretty neat!
Let's be realistic, this is a real problem which only some sort of Boost individual library versioning system for starters can hope to solve.
The build system should address/include the versioning, Cargo should be taken as a model, me thinks. degski -- "*Ihre sogenannte Religion wirkt bloß wie ein Opiat reizend, betäubend, Schmerzen aus Schwäche stillend.*" - Novalis 1798
participants (14)
- Andrey Semashev
- Chris Glover
- degski
- Edward Diener
- Gary Furnish
- Ion Gaztañaga
- John Maddock
- paul
- Peter Dimov
- Raffi Enficiaud
- Rene Rivera
- Robert Ramey
- Stefan Seefeld
- Tom Kent