To modularize, or not to modularize. What is the plan?
Bringing this up because of the recent discussion in the 1.70.1 thread. And of course because this is the week that a certain BSC meets. Hence I thought I would clarify what I've meant by a modular Boost all this time.

A modular Boost, to me, means a Boost that is first and foremost a collection of independently consumable C++ libraries. That would have the following aspects to it:

* An end user or library author can obtain a single Boost library of their choice and use it in their project, assuming they also obtain the appropriate dependencies of that library.
* BoostOrg would not produce a monolithic, combined, merged, etc. distribution.
* BoostOrg would produce collectively tested milestone modular distributions.

Sounds lovely, right? ...I'll leave the discussion of merits to responses herein ;-)

What would it take to reach that modular goal? Why do I keep saying we've been working on this for ages and ages? Briefly, here's what it would take to get there (not in any particular order):

* Abandon the single header include tree.
* Abandon the monolithic build infrastructure.
* Ban relative use of inter-library dependencies.
* Explicit declaration of inter-library dependencies.
* Strict normalized library layout.
* Remove, and ban, dependency cycles at the inter-library user-consumable granularity.

There are probably more items that I've forgotten above. But this should be enough to converse about.

--
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
On 7/05/2019 11:47, Rene Rivera wrote:
* Abandon the single header include tree.
* Abandon the monolithic build infrastructure.
I'm not sure those are strictly necessary, as long as the system can
cope with a "partial checkout" and can ignore missing submodules.
Removing the single include directory would just break existing code and
documentation, with not really any particular benefit that I can see?
Though there's a big caveat with missing modules -- if the user does a
#include and Boost.Optional wasn't installed, they might get an older Boost.Optional from the OS packages rather than an error, which is probably wrong (because mixing versions unintentionally is bad).
* Ban relative use of inter-library dependencies.
I think this is already the case; a library that uses Boost.Config does
#include it rather than anything else, for example. Or do you have some specific counter-examples?
* Explicit declaration of inter-library dependencies.
This can get a little tricky in some cases, as previously discussed.
For example, if Optional depends on Serialization only if you include a
specific header file (which is not included by the default) -- is that a dependency or not?
* Strict normalized library layout.
Doesn't this already exist?
Gavin Lambert wrote:
Though there's a big caveat with missing modules -- if the user does a #include
and Boost.Optional wasn't installed, they might get an older Boost.Optional from the OS packages rather than an error, which is probably wrong (because mixing versions unintentionally is bad).
This does happen in practice, and yes, it's bad.
On Mon, May 6, 2019 at 7:13 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 7/05/2019 11:47, Rene Rivera wrote:
* Abandon the single header include tree.
* Abandon the monolithic build infrastructure.
I'm not sure those are strictly necessary, as long as the system can cope with a "partial checkout" and can ignore missing submodules.
What system are you thinking of? Does it already exist?

Removing the single include directory would just break existing code and documentation, with not really any particular benefit that I can see?
How would it break existing code? And what code would it break? I ask because I know for a fact that it doesn't break existing code. As not having a single include directory works perfectly well with the modular Conan Boost packages.
Though there's a big caveat with missing modules -- if the user does a #include
and Boost.Optional wasn't installed, they might get an older Boost.Optional from the OS packages rather than an error, which is probably wrong (because mixing versions unintentionally is bad).
This is a problem at present even if there are no missing files. And it is not a problem specific to Boost; it happens with any C or C++ library. There are software engineering methods and tools designed to deal with that problem. But having a modular Boost would actually make it easier to deal with, as we could then include a version header for each library. And the library could check that the versions of its dependencies satisfy its requirements and error otherwise.
This argues that instead of omitting header files entirely, when a submodule is missing the build process should actually install stub headers that #error out ... which means that the build system needs to know what header files to create even when the submodule isn't installed, which is tricky, *especially* for users in the habit of including individual headers from the library rather than a convenience composite header.
I don't see that as a conclusion.

(And before you suggest that removing the single header directory solves this: it doesn't, at least not after the OS packages are updated to the new version's include paths as well.)
Haha.. didn't even think of it ;-)
* Ban relative use of inter-library dependencies.
I think this is already the case; a library that uses Boost.Config does #include
(or ) rather than anything else, for example. Or do you have some specific counter-examples?
Oh, yes, I have examples. Examples I've run across from creating the Conan modular Boost packages. You made the mistake of thinking only of header files. There are also source files and build files to consider. Although currently I can only find one such occurrence. So this seems to have improved in the past year.
* Explicit declaration of inter-library dependencies.
This can get a little tricky in some cases, as previously discussed. For example, if Optional depends on Serialization only if you include a specific header file (which is not included by the default
) -- is that a dependency or not?
It is.
* Strict normalized library layout.
Doesn't this already exist?
Key word is strict. Some libraries don't follow the current one. And the current layout is fuzzy in some respects.
On 7/05/2019 15:57, Rene Rivera wrote:
I'm not sure those are strictly necessary, as long as the system can cope with a "partial checkout" and can ignore missing submodules.
What system are you thinking of? Does it already exist?
B2 already exists, yes.
Removing the single include directory would just break existing code and documentation, with not really any particular benefit that I can see?
How would it break existing code? And what code would it break? I ask because I know for a fact that it doesn't break existing code. As not having a single include directory works perfectly well with the modular Conan Boost packages.
As long as the <boost/> prefix is retained, then it's probably ok. I was thinking that doing something else was being proposed. Although adding >100 library paths to the include path if someone does want to (or happens to) use all of Boost doesn't particularly strike me as an improvement.
This can get a little tricky in some cases, as previously discussed. For example, if Optional depends on Serialization only if you include a specific header file (which is not included by the default
) -- is that a dependency or not?

It is.
This is where circular dependencies and too-eager dependencies come from, though. If a user wants to use Optional without downloading Serialization, they would never include that header file, and thus Optional does not "really" depend on Serialization for their usage, thus it should not be downloaded. If the user wants to use both, then they're probably using Serialization elsewhere already, so they would already be downloading both, and can then use that header. So nothing actually needs to consider Serialization a dependency of Optional -- both are simply dependencies of the consuming app/library.
Sent: Tuesday, 7 May 2019 at 08:03 From: "Gavin Lambert via Boost"
On 7/05/2019 15:57, Rene Rivera wrote: [...]
This can get a little tricky in some cases, as previously discussed. For example, if Optional depends on Serialization only if you include a specific header file (which is not included by the default
) -- is that a dependency or not?

It is.
This is where circular dependencies and too-eager dependencies come from, though.
If a user wants to use Optional without downloading Serialization, they would never include that header file, and thus Optional does not "really" depend on Serialization for their usage, thus it should not be downloaded.
If the user wants to use both, then they're probably using Serialization elsewhere already, so they would already be downloading both, and can then use that header. So nothing actually needs to consider Serialization a dependency of Optional -- both are simply dependencies of the consuming app/library.
I'm with Gavin on this. If the dependency is well contained in a separate header file (in particular, not included by a library source file or by boost/libname.hpp) and doesn't provide any functionality beyond integration with serialization, then serialization is not really a usage requirement of - in this example - Boost.Optional.

Just to be clear though: such integration headers are a special case. I don't agree with Robert that dependencies should generally be tracked on a per-file basis. In particular not if that requires the user to use yet another specialized tool. The latter is also why I'd prefer if every library explicitly states its dependencies in a simple-to-parse file that is checked into the repository (of course the maintainer may use a tool to automatically create/update that file): Boost should make it as easy as possible to be used with/consumed by existing tools (i.e. package managers like conan and vcpkg) and not require yet another specialized tool to be used in a modular fashion.

Mike
Rene Rivera wrote:
But having a modular Boost would actually make it easier to deal with that problem. As we could then include a version header for each library. And the library could check that the versions of its dependencies satisfy their requirements and error otherwise.
I am so not looking forward to this. It will be a disaster, if we're lucky.
On Tue, May 7, 2019 at 7:42 AM Peter Dimov via Boost
Rene Rivera wrote:
But having a modular Boost would actually make it easier to deal with that problem. As we could then include a version header for each library. And the library could check that the versions of its dependencies satisfy their requirements and error otherwise.
I am so not looking forward to this. It will be a disaster, if we're lucky.
It is also unnecessary; a distribution system like conan would manage downloading direct and transitive dependencies. The version of a boost module would be stored in a conanfile (if we used conan), not in a C++ header; though it could be in both if we felt it was useful, or, since conan uses python, it could read the version from the module's version file so there is one source of truth for the version. There are also multiple projects that inspect boost repositories for dependencies. We could leverage these to automatically maintain a list of dependencies in each repository's meta/ directory, as well as describe optional sub-packages and which headers belong to those. There should be no need to manually generate or manage dependencies... sure, you can do it, but how do you know that you got it right without a clean-room build with 100% unit test coverage? - Jim
On 5/6/19 4:47 PM, Rene Rivera via Boost wrote:
Bringing this up because of the recent discussion in the 1.70.1 thread. And of course because this is the week that a certain BSC meets. Hence I thought I would clarify what I've meant my a modular Boost all this time.
A modular Boost, to me, means a Boost that first and foremost a collection of independently consumable C++ libraries.
Right

That would have the following
aspects to it:
* An end user or library author can obtain a single Boost library of their choice and use it in their project, assuming they also obtain the appropriate dependencies of that library.
Check
* BoostOrg would not produce a monolithic, combined, merged, etc distribution.
I don't see it as necessary for Boost to give this up. That is, I don't see the current setup as conflicting with the ability to just download the libraries he wants.
* BoostOrg would produce collectively tested milestone modular distributions.
I don't think anything has to change here.
Sounds lovely right? ...I'll leave the discussion of merits to responses herein ;-)
I envision the construction of a tool which just goes to github and downloads a list of boost libraries. For each library the download process makes a simple transform to a standalone directory for that library. Similar to what the global distribution currently looks like, except for one library at a time. This would be useful right away. In Vinnie's 1.70.1 situation, one could put this to use right away: a) the user changes the name of the current beast directory to beast-1.70. b) downloads the latest from the master into the new beast directory.
What would it take to reach that modular goal? Why do I keep saying we've been working on this for ages and ages? Briefly here's what it would take to get there (not in any particular order):
* Abandon the single header include tree.
* Abandon the monolithic build infrastructure.
One would need a more "stand alone" tool for non-header only libraries. But I presume lots of users would just compile the *.cpp files into their app or build their own DLL. Ideally, the library package would/should contain a CMake script to do this.
* Ban relative use of inter-library dependencies.
I don't think that's possible. But I don't think that's necessary. The only thing is that a user would need to install the dependent libraries he needs. One could try to make a tool to do this - but I've argued that that is a fool's errand. Rather than argue that any more, I could just imagine the user does the following:

a) Adds a boost header to his project.
b) "Installs" that header as above.
c) Tries to build his project.
d) If something is missing - repeat from a) for the missing thing.

At the end of that process, he has a minimal subset of boost required to support his project. If someone has nothing else to do, he could write a tool which generates a list of dependencies for a given app as a text file. Then the user could do most of them in one shot. This would likely be a minor enhancement of BCP or a similar program. But the result would be the same. BTW - the user already incorporates non-boost libraries into his project using this same procedure. Ideally any dependency checking tools would work on these as well.
* Explicit declaration of inter-library dependencies.
I don't think this is necessary.
* Strict normalized library layout.
I don't think this should be necessary. But I'm aware that some libraries don't follow convention regarding header layout. So either those have to change or the "downloader" tool would have to smart enough to sort those out. I don't recommend the latter option.
* Remove, and ban, dependency cycles at the inter-library user consumable granularity.
I don't think this is necessary. If one is following a chain of headers rather than a chain of modules - there are no cycles.
There's probably more items that I've forgotten above. But this should be enough to converse about.
LOL - ya think? I think your concept of "modularized boost" is at least broadly similar to mine. To summarize, the only things we would need:

a) We need a tool to download/transform one boost library at a time.
b) Optionally, it would be nice to have a dependency listing tool. FYI this is more difficult than it looks, since the user doesn't have all the boost libraries on his machine. Such a tool would have to trawl the boost master on github or some online database of header summaries.
c) There would likely need to be a separate directory - boost tools - with a couple of things in it. Some stuff would be moved from boost root to boost/tools. So the user wouldn't need the root on his machine.
d) A good written explanation for users who want to do this.

What we don't need to do is re-organize current boost development/testing etc. This is merely an alternative deployment concept. Boost developers would not be affected. That's it, just three or four simple things. And no disruption of the current setup.

Robert Ramey
On 7/05/2019 13:16, Robert Ramey wrote:
I don't think that's possible. But I don't think that's necessary. The only thing is that a user would need to install the dependent libraries he needs. One could try to make a tool to do this - but I've argued that that is a fool's errand. Rather than argue that any more, I could just imagine the user does the following:

a) Adds a boost header to his project.
b) "Installs" that header as above.
c) Tries to build his project.
d) If something is missing - repeat from a) for the missing thing.
at the end of that process, he has a minimal subset of boost required to support his project.
The problem with this algorithm is when (c) works when it shouldn't, because it found another version of the library somewhere else that happens to sufficiently resemble the version of the library that was actually intended. This can lead to weird runtime behaviour, mysterious future compilation failures, or "it works for me" but not someone else. The latter two should hopefully lead to discovery of the problem eventually, but the first can be quite dangerous.
To summarize, the only things we would need:
a) we need a tool to download/transform one boost library at a time.
Transform in what way?
b) Optionally, it would be nice to have a dependency listing tool. FYI this is more difficult than it looks, since the user doesn't have all the boost libraries on his machine. Such a tool would have to trawl the boost master on github or some online database of header summaries.
Most compilers can generate a list of all the headers included by a C++ file, recursively. (And so can boostdep.) The trick is that there are optional dependencies so you kinda have to start with what the user app is actually including on a header-by-header basis rather than a whole-library basis. Or at least when declaring whole-library-level dependencies it should only consider dependencies included (recursively) by the top-level convenience header file, and not those included by optional extra files, unless the app actually uses them. If a header is not found or if an obviously-Boost-library header file is found outside of the Boost library root, it probably needs to be downloaded.
c) There would likely need to be separate directory - boost tools with a couple of things in it. Some stuff would be moved from boost root to boost/tools. So the user wouldn't need the root on his machine.
I don't see why it would need to be any more complex than downloading the existing Boost superproject root and doing the equivalent of "git submodule update --init" on only the specified submodules, then running a b2 build/stage cycle (modified to cope with missing modules). Either by actually using git, or doing some equivalent with tarball/zip snapshots.
On 5/6/19 8:31 PM, Gavin Lambert via Boost wrote:
On 7/05/2019 13:16, Robert Ramey wrote:
I don't think that's possible. But I don't think that's necessary. The only thing is that a user would need to install the dependent libraries he needs. One could try to make a tool to do this - but I've argued that that is a fool's errand. Rather than argue that any more, I could just imagine the user does the following:

a) Adds a boost header to his project.
b) "Installs" that header as above.
c) Tries to build his project.
d) If something is missing - repeat from a) for the missing thing.
at the end of that process, he has a minimal subset of boost required to support his project.
The problem with this algorithm is when (c) works when it shouldn't, because it found another version of the library somewhere else that happens to sufficiently resemble the version of the library that was actually intended.
No it won't. The user has some procedure for building his project. That procedure specifies where to find libraries. It doesn't look for them willy-nilly all over his machine or the net or anywhere else. It includes only what he specifies.
To summarize, the only things we would need:
a) we need a tool to download/transform one boost library at a time.
Transform in what way?
The current directory structure for boost looks like:

boost_root/
  libs/
    safe_numerics/
      include/
        boost/
          safe_numerics/
            safe_integer.hpp

But when boost is delivered to the user, the same data is organized as:

boost_root/
  boost/
    safe_numerics/
      safe_integer.hpp

This is done by the release process. Those of us who are developing with the master branch use "b2 headers" to create a bunch of file links which constitute a map from the first version to the second version.
b) Optionally, it would be nice to have a dependency listing tool. FYI this is more difficult than it looks, since the user doesn't have all the boost libraries on his machine. Such a tool would have to trawl the boost master on github or some online database of header summaries.
Most compilers can generate a list of all the headers included by a C++ file, recursively. (And so can boostdep.) The trick is that there are optional dependencies so you kinda have to start with what the user app is actually including on a header-by-header basis rather than a whole-library basis.
Exactly - that's what I specified above. The procedure follows the include files, not some idea of library dependencies.
Or at least when declaring whole-library-level dependencies it should only consider dependencies included (recursively) by the top-level convenience header file, and not those included by optional extra files, unless the app actually uses them.
This procedure does not in any way depend on any notion of whole library dependencies. I've discussed this many many times. Most recently a few days ago on this list. I've maintained that it is not a valid notion.
If a header is not found or if an obviously-Boost-library header file is found outside of the Boost library root, it probably needs to be downloaded.
Of course. This is true regardless of whether it's a boost library or any other library. If my app needs something not on my machine, I have to find it and download it.
c) There would likely need to be separate directory - boost tools with a couple of things in it. Some stuff would be moved from boost root to boost/tools. So the user wouldn't need the root on his machine.
I don't see why it would need to be any more complex than downloading the existing Boost superproject root and doing the equivalent of "git submodule update --init" on only the specified submodules, then running a b2 build/stage cycle (modified to cope with missing modules).

Some would disagree with you.
Either by actually using git, or doing some equivalent with tarball/zip snapshots.
Right, I've made no assumption about the download process or the repository or anything else. It's just a method which guarantees that one has what he needs, and only what he needs, to build his project. Robert Ramey
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On 7/05/2019 15:55, Robert Ramey wrote:
The problem with this algorithm is when (c) works when it shouldn't, because it found another version of the library somewhere else that happens to sufficiently resemble the version of the library that was actually intended.
No it won't. The user has some procedure for building his project. That procedure specifies where to find libraries. It doesn't look for them willy-nilly all over his machine or the net or anywhere else. It includes only what he specifies.
It usually will look in the system include directories, which (on Linux) will quite often include some older version of Boost. It's possible to tell it to not do that, but it's not the default and would require some heroics to do it successfully. I doubt it's safe to assume that everybody is willing to do this.
This current directory structure for boost looks like:
boost_root/
  libs/
    safe_numerics/
      include/
        boost/
          safe_numerics/
            safe_integer.hpp
But when boost is delivered to the user the same data is organized as
boost_root/
  boost/
    safe_numerics/
      safe_integer.hpp
This is done by the release process. Those of us who are developing with the master branch use "b2 headers" to create a bunch of file links which constitute a map from the first version to the second version.
There's no particular reason why a b2 build couldn't do the same thing -- in fact it probably already does. So whenever the user runs the "get me a Boost library" command (whatever that turns out to be) it probably has to run a b2 build anyway, so that will just happen.
I don't see why it would need to be any more complex than downloading the existing Boost superproject root and doing the equivalent of "git submodule update --init" on only the specified submodules, then running a b2 build/stage cycle (modified to cope with missing modules).
Some would disagree with you.
To clarify, I don't mean that "regular users" should have to do a full git clone (if they wanted one, they could already do that). But there are ways to tell git to only download a specific tree without any history, or to just use snapshot archives instead of using git itself.
On 5/6/19 11:15 PM, Gavin Lambert via Boost wrote:
There's no particular reason why a b2 build couldn't do the same thing -- in fact it probably already does.
Right - that's what "b2 headers" does. b2 headers creates a structure of links which maps the original to that which the user uses. But this is not a good match for users' needs. And users don't want to have to deal with b2. This is why my proposal doesn't add any new burden to the user.
So whenever the user runs the "get me a Boost library" command (whatever that turns out to be) it probably has to run a b2 build anyway, so that will just happen.
I'm not seeing this at all. I don't think any solution which requires a user to download, build and execute b2 is going to fly.
I don't see why it would need to be any more complex than downloading the existing Boost superproject root and doing the equivalent of "git submodule update --init" on only the specified submodules, then running a b2 build/stage cycle (modified to cope with missing modules).
Some would disagree with you.
To clarify, I don't mean that "regular users" should have to do a full git clone (if they wanted one, they could already do that). But there are ways to tell git to only download a specific tree without any history, or to just use snapshot archives instead of using git itself.
Hmmm - I can't understand all that the above does and how it does it. I would hope it would be easy to invoke: get me boost library X and place it in directory location Y. This would boil down to invoking a git clone and making the translation from the boost development tree to the simpler tree structure that users currently get. If there is a git command that can do this - great! Please post it here. Robert Ramey
Robert Ramey wrote:
Get me boost library X and place in directory location Y.
This is at present roughly achievable with the following sequence of commands:

git clone --depth 1 https://github.com/boostorg/boost.git
cd boost
git submodule update --init tools/boostdep libs/$X
python tools/boostdep/depinst/depinst.py -X test $X
./bootstrap.sh
./b2 --prefix=$Y --with-$X install

But that's an entirely different use case. This is

Boost Github -> user

as opposed to

Boost Github -> Boost release -> package manager (apt, conan, vcpkg) -> user
On 5/7/19 7:49 AM, Peter Dimov via Boost wrote:
Robert Ramey wrote:
Get me boost library X and place in directory location Y.
This is at present roughly achievable with the following sequence of commands:
git clone --depth 1 https://github.com/boostorg/boost.git
cd boost
git submodule update --init tools/boostdep libs/$X
python tools/boostdep/depinst/depinst.py -X test $X
./bootstrap.sh
./b2 --prefix=$Y --with-$X install
hmmm - looks like we're getting somewhere!
But that's an entirely different use case. This is
Boost Github -> user
as opposed to
Boost Github -> Boost release -> package manager (apt, conan, vcpkg) -> user
"Entirely different use case"? I asked for "Get me boost library X and place in directory location Y" and it looks like you responded to it. So my case is different than what, exactly? How is this different from what many users would find useful? I'm clearly missing something here. Robert Ramey
On 5/7/19 7:49 AM, Peter Dimov via Boost wrote:
Robert Ramey wrote:
Get me boost library X and place in directory location Y.
This is at present roughly achievable with the following sequence of commands:
git clone --depth 1 https://github.com/boostorg/boost.git
cd boost
git submodule update --init tools/boostdep libs/$X
python tools/boostdep/depinst/depinst.py -X test $X
./bootstrap.sh
./b2 --prefix=$Y --with-$X install
hmmmm - looking at this more carefully:

a) I have to clone the whole of boost including the superproject. That's exactly what I'm trying to avoid.
b) I have to build and invoke b2. Another thing I want to avoid.

I guess the simplest would be to:

a) Create a directory named boost.
b) Clone the library(s) into this local boost directory and manually move each library's include/boost/... subdirectory into my newly created boost directory. Optionally I can create links to do the same thing.
c) Add the path to this directory to the INCLUDE path.
d) Compile and link my app.
e) I will discover some other missing libraries; repeat the above.
f) Until my app builds.

At this point I will have a minimally dependent build of my application. No tools, no hassle. It's a pain, but it's a one-time thing. I only need to do it when I add a new boost library to my app. If I used the linking option above and I'm still hooked to the boost repos, I can selectively or collectively pull all the latest updates, or I can switch builds between master/develop at any time. This is extra sauce for those who don't want to make their lives too simple.

Robert Ramey
But that's an entirely different use case. This is
Boost Github -> user
as opposed to
Boost Github -> Boost release -> package manager (apt, conan, vcpkg) -> user
On 8/05/2019 07:31, Robert Ramey wrote:
git clone --depth 1 https://github.com/boostorg/boost.git
cd boost
git submodule update --init tools/boostdep libs/$X
python tools/boostdep/depinst/depinst.py -X test $X
./bootstrap.sh
./b2 --prefix=$Y --with-$X install
a) I have to clone the whole of boost including the superproject. That's exactly what I'm trying to avoid.
Actually no, that's not what the above does. You are cloning the superproject with --depth 1 (which means no history, just the current tree). So it's basically no different from downloading a snapshot archive with a bit of extra .git metadata. The rest are just empty subdirectories until you run the submodule update. And the submodule update is fetching only those named submodules, not all of Boost.

By default the above grabs current master, but you can also add -b to request a specific branch (or release tag). So for example, if you just want Boost 1.70, you can:

git clone --depth 1 -b boost-1.70.0 --no-tags https://github.com/boostorg/boost.git

You can also pass --depth 1 to the submodule update when fetching specific libraries. And add -g '--depth 1' to the depinst command as well. Or fetch the dependencies manually with more submodule updates rather than using depinst; it might still be a bit too eager at fetching dependencies as-is, since it's looking at all the library files. Or if you don't use --depth 1, then you get a larger download, but you can easily move between different versions and update to new versions in the future.
b) i have to build and invoke b2. Another thing i want to avoid.
If you're using any non-header-only library (and large chunks of Boost fall into that category, such as Boost.Thread), then you can't avoid that.
b) clone the library(s) into this local boost directory manually move the libraries include/boost/.. subdirectory into my newly created boost directory. Optionally I can create links to do the same thing.
Or you could run b2 headers, rather than making life complicated for yourself.
No tools, no hassle. It's a pain, but it's a one-time thing. I only need to do it when I add a new Boost library to my app.
Or when you update any of them, or switch versions. It's hardly a one time thing.
On Tue, May 7, 2019 at 10:59 AM Peter Dimov via Boost
Robert Ramey wrote:
Get me boost library X and place in directory location Y.
This is at present roughly achievable with the following sequence of commands:
git clone --depth 1 https://github.com/boostorg/boost.git
cd boost
git submodule update --init tools/boostdep libs/$X
python tools/boostdep/depinst/depinst.py -X test $X
./bootstrap.sh
./b2 --prefix=$Y --with-$X install
But that's an entirely different use case. This is
Boost Github -> user
as opposed to
Boost Github -> Boost release -> package manager (apt, conan, vcpkg) -> user
More accurately,
On 8/05/2019 01:55, Robert Ramey wrote:
I'm not seeing this at all. I don't think any solution which requires a user to download, build and execute b2 is going to fly.
Unless you want to limit yourself to the subset of Boost which is header-only, or commit to supplying binary packages for every conceivable combination of compiler and platform, then I don't think that there is any alternative.
On 2019-05-07 7:46 p.m., Gavin Lambert via Boost wrote:
On 8/05/2019 01:55, Robert Ramey wrote:
I'm not seeing this at all. I don't think any solution which requires a user to download, build and execute b2 is going to fly.
Unless you want to limit yourself to the subset of Boost which is header-only, or commit to supplying binary packages for every conceivable combination of compiler and platform, then I don't think that there is any alternative.
There clearly is: third parties providing binary packages. That's the common practice with Linux distributions, but certainly not limited to those. Ask google to list others. Stefan -- ...ich hab' noch einen Koffer in Berlin...
On Tue, May 7, 2019 at 6:47 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 8/05/2019 01:55, Robert Ramey wrote:
I'm not seeing this at all. I don't think any solution which requires a user to download, build and execute b2 is going to fly.
Unless you want to limit yourself to the subset of Boost which is header-only, or commit to supplying binary packages for every conceivable combination of compiler and platform, then I don't think that there is any alternative.
Sure there is.. You could add all the source files from the standard source directory and plop them into your own build tool. And hit the "big red build button". -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
On 8/05/2019 12:23, Rene Rivera wrote:
On 8/05/2019 01:55, Robert Ramey wrote:
I'm not seeing this at all. I don't think any solution which requires a user to download, build and execute b2 is going to fly.
Unless you want to limit yourself to the subset of Boost which is header-only, or commit to supplying binary packages for every conceivable combination of compiler and platform, then I don't think that there is any alternative.
Sure there is.. You could add all the source files from the standard source directory and plop them into your own build tool. And hit the "big red build button".
And that won't work because it will be using the wrong settings. Given that current Boost release archives already require users to run b2 in almost all cases, I don't see why this aversion exists.
On Tue, May 7, 2019 at 7:38 PM Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 8/05/2019 01:55, Robert Ramey wrote:
I'm not seeing this at all. I don't think any solution which requires a user to download, build and execute b2 is going to fly.
Unless you want to limit yourself to the subset of Boost which is header-only, or commit to supplying binary packages for every conceivable combination of compiler and platform, then I don't think that there is any alternative.
On 8/05/2019 12:23, Rene Rivera wrote:

Sure there is.. You could add all the source files from the standard source directory and plop them into your own build tool. And hit the "big red build button".
And that won't work because it will be using the wrong settings.
There are no wrong settings. If your library can't work with the settings users choose to provide, it's a problem with your library.

Given that current Boost release archives already require users to run b2 in almost all cases, I don't see why this aversion exists.
I obviously don't have an aversion to supporting use of b2. But I've always advocated for the freedom of users to select whatever tools they choose. Because requiring that they run b2, or any specific build system, limits the set of possible users. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
Rene Rivera wrote:
Sure there is.. You could add all the source files from the standard source directory and plop them into your own build tool. And hit the "big red build button".
You'll also need to figure out the include path first somehow, and remember to define WHATEVER_DYN_LINK if building a DLL. That's for the simple stuff, good luck with Boost.Context.
On Tue, May 7, 2019 at 8:40 PM Peter Dimov via Boost
Rene Rivera wrote:
Sure there is.. You could add all the source files from the standard source directory and plop them into your own build tool. And hit the "big red build button".
You'll also need to figure out the include path first somehow,
That's the "it's in a standard location" aspect.
and remember to define WHATEVER_DYN_LINK if building a DLL. That's for the simple stuff,
Oh, sure. But that's stuff that's documented. You don't specifically need b2 for that.
good luck with Boost.Context.
Yes, that's a particularly problematic library. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
Rene Rivera wrote:
On Tue, May 7, 2019 at 8:40 PM Peter Dimov via Boost
wrote: ... You'll also need to figure out the include path first somehow,
That's the "it's in a standard location" aspect.
But we abandoned the standard location at step one:
* Abandon the single header include tree.
Now each library contains its headers, so when building libX, the include directories of its dependencies need to be in the include path. This will happen automatically if building with b2 or CMake, assuming that the dependencies - including the header-only ones - are properly "linked to". Not when just adding a bag of sources to a project though.
On Tue, May 7, 2019 at 9:12 PM Peter Dimov via Boost
Rene Rivera wrote:
On Tue, May 7, 2019 at 8:40 PM Peter Dimov via Boost
wrote: ... You'll also need to figure out the include path first somehow,
That's the "it's in a standard location" aspect.
But we abandoned the standard location at step one:
* Abandon the single header include tree.
Now each library contains its headers, so when building libX, the include directories of its dependencies need to be in the include path.
That's still a standard location. Just not the merged single standard location that requires preprocessing to use.
This will happen automatically if building with b2 or CMake, assuming that the dependencies - including the header-only ones - are properly "linked to". Not when just adding a bag of sources to a project though.
True.. But users already do similar operations when adding Boost and other libraries. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
On 8/05/2019 14:33, Rene Rivera wrote:
On Tue, May 7, 2019 at 9:12 PM Peter Dimov wrote:
This will happen automatically if building with b2 or CMake, assuming that the dependencies - including the header-only ones - are properly "linked to". Not when just adding a bag of sources to a project though.
True.. But users already do similar operations when adding Boost and other libraries.
Currently, most users just say "I want to use Boost" and add one include path and one library path.

In a modular Boost world, perhaps a user might say "I want to use Boost.Thread", and add one include path and one library path. Except that this won't work without the "preprocessing". I don't think that this user would want to add the 62 other libraries (I counted) that are apparently required (according to boostdep/depinst) in order to use Boost.Thread.

(Granted, it's probably being overly pessimistic -- I don't see why Boost.Thread should depend on Boost.Regex, for example -- but that's where we are right now.)
On 5/7/19 9:27 PM, Gavin Lambert via Boost wrote:
On 8/05/2019 14:33, Rene Rivera wrote:
In a modular Boost world, perhaps a user might say "I want to use Boost.Thread", and add one include path and one library path.
Except that this won't work without the "preprocessing". I don't think that this user would want to add the 62 other libraries (I counted) that are apparently required (according to boostdep/depinst) in order to use Boost.Thread.
(Granted, it's probably being overly pessimistic -- I don't see why Boost.Thread should depend on Boost.Regex, for example -- but that's where we are right now.)
Right - that IS where we are. Our assertion (library by library) of dependency leads to the conclusion that 62 other libraries are necessary when in fact they are not. This assertion leads to demonstrably incorrect conclusions. Thus it must be false. This logic is incontestable. Robert Ramey
Robert Ramey wrote:
Right - that IS where we are. Our assertion (library by library) of dependency leads to the conclusion that 62 other libraries are necessary when in fact they are not.
Whereas only 33 are needed, assuming that you don't need to build the library. If you do, 39.
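Counts like these come from computing the transitive closure over each library's dependencies, whether declared explicitly or recovered by scanning. A minimal sketch of the closure computation (the dependency map below is a made-up illustration, not Boost's real graph):

```python
# Compute the transitive dependency closure of one library, given an
# explicit per-library dependency declaration map.
# NOTE: this map is illustrative only, not Boost's actual graph.
deps = {
    "thread": ["config", "system", "chrono"],
    "chrono": ["config", "ratio"],
    "system": ["config"],
    "ratio":  ["config"],
    "config": [],
}

def closure(lib, deps):
    """Return the set of libraries transitively required by `lib`."""
    seen = set()
    stack = [lib]
    while stack:
        for dep in deps[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(closure("thread", deps)))
# → ['chrono', 'config', 'ratio', 'system']
```

Whether the answer is 33, 39, or 62 libraries then depends entirely on how precise the input map is — which is the argument for explicit author-declared dependencies rather than header scanning.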
On 2019-05-08 12:27 a.m., Gavin Lambert via Boost wrote:
On 8/05/2019 14:33, Rene Rivera wrote:
On Tue, May 7, 2019 at 9:12 PM Peter Dimov wrote:
This will happen automatically if building with b2 or CMake, assuming that the dependencies - including the header-only ones - are properly "linked to". Not when just adding a bag of sources to a project though.
True.. But users already do similar operations when adding Boost and other libraries.
Currently, most users just say "I want to use Boost" and add one include path and one library path.
In a modular Boost world, perhaps a user might say "I want to use Boost.Thread", and add one include path and one library path.
Except that this won't work without the "preprocessing". I don't think that this user would want to add the 62 other libraries (I counted) that are apparently required (according to boostdep/depinst) in order to use Boost.Thread.
(Granted, it's probably being overly pessimistic -- I don't see why Boost.Thread should depend on Boost.Regex, for example -- but that's where we are right now.)
I agree. But this isn't new, and both B2 and CMake (to name just those two) can already handle transitive requirements. So it's already possible for a user to say "I want Boost.Thread, so give me all the compiler flags I need" and have everything else sorted out automagically. No preprocessing has to be involved. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...
On Tuesday, May 7, 2019 11:27:01 PM CDT Gavin Lambert via Boost wrote:
Currently, most users just say "I want to use Boost" and add one include path and one library path.
Yep. From a user point of view, "modularization" seems like only pain. Naively, I'm wondering why the modular people don't just go and use github for their library? There's no reason to develop on boost.org if it is painful for you. -Steve
On Fri, 10 May 2019 at 09:30, Steve Robbins via Boost
On Tuesday, May 7, 2019 11:27:01 PM CDT Gavin Lambert via Boost wrote:
Currently, most users just say "I want to use Boost" and add one include path and one library path.
Yep. From a user point of view, "modularization" seems like only pain.
The discussion so far leaves me with similar impression.
Naively, I'm wondering why the modular people don't just go and use github for their library?
If we could only dance like Go :-)

$ boost install github.com/boostorg/hello

import ( github.com/boostorg/hello )

Best regards, -- Mateusz Loskot, http://mateusz.loskot.net
On May 10, 2019, at 11:11, Mateusz Loskot via Boost wrote:

$ boost install github.com/boostorg/hello

import ( github.com/boostorg/hello )
On my system, I have Boost installed by MacPorts, and it has its own system to manage the optional components. One of them is MPI, and the resulting libraries depend on MPI. I wonder how this affects the optimists' assumptions?

The second observation is that the Boost dylibs have no version information whatsoever on them!!!

/opt/local/lib/libboost_log-mt.dylib:
    /opt/local/lib/libboost_log-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libboost_atomic-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libboost_chrono-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libboost_thread-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libboost_date_time-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libboost_filesystem-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libboost_system-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libboost_regex-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
    /opt/local/lib/libicudata.58.dylib (compatibility version 58.0.0, current version 58.2.0)
    /opt/local/lib/libicui18n.58.dylib (compatibility version 58.0.0, current version 58.2.0)
    /opt/local/lib/libicuuc.58.dylib (compatibility version 58.0.0, current version 58.2.0)
    /opt/local/lib/openmpi-devel-mp/libmpi_cxx.40.dylib (compatibility version 41.0.0, current version 41.0.0)
    /opt/local/lib/openmpi-devel-mp/libmpi.40.dylib (compatibility version 41.0.0, current version 41.1.0)
    /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 400.9.4)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.250.1)
On Fri, May 10, 2019 at 2:30 AM Steve Robbins via Boost < boost@lists.boost.org> wrote:
On Tuesday, May 7, 2019 11:27:01 PM CDT Gavin Lambert via Boost wrote:
Currently, most users just say "I want to use Boost" and add one include path and one library path.
Yep. From a user point of view, "modularization" seems like only pain.
Must be that we talk to different users. The interactions I get are more commonly "I want to use Boost Beast", "Does Boost have X?" (implication being that's all they want), and of course the eternal "I can't use Boost, it's too big.". -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
Rene Rivera wrote:
Must be that we talk to different users. The interactions I get are more commonly "I want to use Boost Beast", "Does Boost have X?" (implication being that's all they want), and of course the eternal "I can't use Boost, it's too big.".
These are nothing compared to the interactions we'll be getting if we force people to declare their header-only dependencies. This will break literally thousands of packages downstream. For better or worse, today, after `apt install libboost-dev` and/or `find_package(Boost)`, you can #include any header-only Boost library and use it, without going into specifics. PS the other day I got the equivalent of "can't use mp11, it's too big." Something tells me that a "modular" Boost will still remain "too big".
On 10 May 2019, at 14:12, Peter Dimov via Boost
wrote: PS the other day I got the equivalent of "can't use mp11, it's too big." Something tells me that a "modular" Boost will still remain "too big".
A single anecdata point from one type of user: I’ve been introducing Boost into large corporates for over 15 years. Typically one privileged person/account brings a release through the corporate firewall (so that’s one person needing to do one approved download). Then a fairly stable standard build b2-based script is run to create binaries for the compilers+compile flags we care about. All libraries are built. The headers and libs are then dumped somewhere internal for the development teams to use. The Boost part of those teams’ CMakeLists.txts are pretty straightforward: one line for includes and another for a lib dir. To be honest the prospect of a modular Boost gives me the jitters - am I going to replace one open-source-use approval, one download, one build, one include path and one lib path by 100 of each of those things?? Now some might argue that my problem should be with “silly” corporate policies rather than modular Boost. But many Boost users can do nothing to change those policies. Regards, Pete
In the end, all truly useful libraries are hierarchical, not flat. Some library components share needs. For example, variants, tuples, optionals and strings are useful almost everywhere. Hierarchical libraries encourage re-use of lower-level components. If you make Boost truly flat, it means requiring an implementation of (say) string in every component. That's daft. Flattening Boost is the wrong direction.

Example: Beast is built on Asio, which is built on System. Two truly brilliant components, which would be much less useful if built in isolation.
On Fri, 10 May 2019, 15:12 Peter Dimov via Boost,
Rene Rivera wrote:
Must be that we talk to different users. The interactions I get are more commonly "I want to use Boost Beast", "Does Boost have X?" (implication being that's all they want), and of course the eternal "I can't use Boost, it's too big.".
These are nothing compared to the interactions we'll be getting if we force people to declare their header-only dependencies. This will break literally thousands of packages downstream. For better or worse, today, after `apt install libboost-dev` and/or `find_package(Boost)`, you can #include any header-only Boost library and use it, without going into specifics.
PS the other day I got the equivalent of "can't use mp11, it's too big." Something tells me that a "modular" Boost will still remain "too big".
On Tue, May 7, 2019 at 7:47 PM Gavin Lambert via Boost
On 8/05/2019 01:55, Robert Ramey wrote:
I'm not seeing this at all. I don't think any solution which requires a user to download, build and execute b2 is going to fly.
Unless you want to limit yourself to the subset of Boost which is header-only, or commit to supplying binary packages for every conceivable combination of compiler and platform, then I don't think that there is any alternative.
This is a strong argument to move as many repositories as possible to header-only. It would eliminate the mess of needing to deliver all the variants of binaries on platforms that require it. - Jim
On Mon, May 6, 2019 at 8:17 PM Robert Ramey via Boost
On 5/6/19 4:47 PM, Rene Rivera via Boost wrote:
* BoostOrg would not produce a monolithic, combined, merged, etc distribution.
I don't see it as necessary for Boost to give this up. That is, I don't see the current setup as conflicting with the ability to just download the libraries he wants.
It conflicts in that the modular arrangement is not the same as the current monolithic arrangement.
* BoostOrg would produce collectively tested milestone modular
distributions.
I don't think anything has to change here.
The change would be in holding on to the belief that the single header dir and top level build is the face we should be putting forward to users.
Sounds lovely right? ...I'll leave the discussion of merits to responses
herein ;-)
I envision the construction of a tool which just goes to github and downloads a list of boost libraries.
Can we just stop trying to build more Boost specific tools?
For each library the download process makes a simple transform to a standalone directory for that library. Similar to what the global distribution currently looks like, except for one library at a time.
You've just created a combinatorial explosion of distributions which you'll need to test, as users will complain when their particular combination doesn't work.

This would be useful right away. In Vinnie's 1.70.1 situation, one could put this to use right away.

a) The user changes the name of the current beast directory to beast-1.70.
b) Downloads the latest from master into the new beast directory.
In my ideal modular view, Vinnie would publish a new Beast library version 1.70.1, along with the requirement that it can be used with its dependencies at 1.70.0. Users would obtain that new version with their existing package management method. Hence, not that different from what you posit, but not tied to any particular Boost custom tool or arrangement.
What would it take to reach that modular goal? Why do I keep saying we've
been working on this for ages and ages? Briefly here's what it would take to get there (not in any particular order):
* Abandon the single header include tree. * Abandon the monolithic build infrastructure.
One would need a more "stand alone" tool for non-header only libraries.
Why? And not sure what you mean by that?
But I presume lots of users would just compile the *.cpp files into their app or build their own DLL. Ideally, the library package would/should contain a CMake script to do this.
Yes, they could make use of whatever build system the library provides support for. Or they could get it from an established package manager that supports their method of building.
* Ban relative use of inter-library dependencies.
I don't think that's possible.
Actually it seems we are almost there on that point, as I could only find one instance of this problem in the latest release. But it was only a quick search, so there may be more.
But I don't think that's necessary.
Really? How would you resolve a library referring to another one with a path like "../../thread" instead of "/boost/thread" (or equivalent).
The only thing is that a user would need to install the dependent libraries he needs.
Right, they could, say, use a package manager or "manual" downloads of what they need.
One could try to make a tool to do this
Yes, they are called package and dependency managers. There are many of them.
- but I've argued that that is a fools errand.
Okay two things.. 1. Are you saying that package manager developers are fools? 2. Ah, earlier you say we should write a custom Boost tool to create custom distributions, and now you say it's a fools errand. Did you just call yourself a fool? ;-)
Rather than argue that any more. I could just imagine user does the following:
a) Adds a boost header to his project. b) "installs" that header as above c) tries to build his project d) If something missing - call a) for the missing thing
at the end of that process, he has a minimal subset of boost required to support his project.
If someone has nothing else to do, he could write a tool which generates a list of dependencies for a given app as a text file. Then the user could do most of them in one shot. This would likely be a minor enhancement of BCP or a similar program. But the result would be the same.
Similar tools already exist.. they are called Package & Dependency Managers. We should probably support them by making Boost easier to package, by making Boost modular.

BTW - the user already incorporates non-Boost libraries into his project using this same procedure. Ideally any dependency checking tools would work on these as well.
They do.
* Explicit declaration of inter-library dependencies.
I don't think this is necessary.
Sure, it's not required. There are tools to do this job for you. But they aren't as precise as an explicit declaration from the author. And being explicit helps those existing tools by making their job as easy as reading that explicit list and avoiding all the work of scanning and guessing. Improving everyone's lives accordingly with reduced complexity.
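The "scanning and guessing" that such tools do can be illustrated in a few lines: map #include <boost/...> directives back to candidate library names. This sketch is illustrative only; real tools like boostdep must also handle headers whose top-level directory does not match the owning library's name, which is exactly why the guess can be imprecise:

```python
# Sketch: guess Boost library dependencies of a source file by mapping
# "#include <boost/xxx/...>" lines to the top-level directory "xxx".
# This naive dir-name == library-name assumption is the imprecision
# that explicit author declarations would remove.
import re

INCLUDE_RE = re.compile(r'#\s*include\s*[<"]boost/([^/>."]+)')

def guess_boost_deps(source_text):
    """Return the set of Boost library names referenced by #include lines."""
    return set(INCLUDE_RE.findall(source_text))

code = '''
#include <boost/thread/thread.hpp>
#include <boost/chrono.hpp>
#include <vector>
'''
print(sorted(guess_boost_deps(code)))
# → ['chrono', 'thread']
```

With an explicit dependency list per library, a tool reduces to reading that list; the scanner above is only needed as a fallback or a consistency check.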
* Strict normalized library layout.
I don't think this should be necessary. But I'm aware that some libraries don't follow convention regarding header layout. So either those have to change, or the "downloader" tool would have to be smart enough to sort those out. I don't recommend the latter option.
Wait, another tool to write? Oh, wait, no.. You mean make the same tool even smarter. And hence prone to mistakes.
* Remove, and ban, dependency cycles at the inter-library user consumable
granularity.
I don't think this is necessary. If one is following a chain of headers rather than a chain of modules - there are no cycles.
Sure, that's a true statement given your supposition. But let's face it, the reality is that everyone except Robert Ramey thinks in terms of module granularity. All the tools we use are written in those terms. And we do it at the module granularity because it is useful to think about them that way. It simplifies things like configuring the tools we use, how we advertise the libraries, how we track the libraries in source control repositories, documentation, bug tracking, and so on.
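Banning cycles at library granularity is also mechanically checkable once dependencies are declared explicitly. A sketch using a depth-first search with three-coloring (the example graph is illustrative; the serialization/spirit pair is just a stand-in for any mutually dependent libraries):

```python
# Sketch: detect a dependency cycle at library granularity via DFS
# three-coloring. Returns one cycle as a list, or None if acyclic.
def find_cycle(deps):
    WHITE, GREY, BLACK = 0, 1, 2            # unvisited / in progress / done
    color = {lib: WHITE for lib in deps}

    def dfs(lib, path):
        color[lib] = GREY
        path.append(lib)
        for dep in deps[lib]:
            if color[dep] == GREY:          # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if color[dep] == WHITE:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        path.pop()
        color[lib] = BLACK
        return None

    for lib in deps:
        if color[lib] == WHITE:
            cycle = dfs(lib, [])
            if cycle:
                return cycle
    return None

deps = {"serialization": ["spirit"], "spirit": ["serialization"], "config": []}
print(find_cycle(deps))
# → ['serialization', 'spirit', 'serialization']
```

Run as a CI gate over the declared dependency lists, a check like this is what would let "remove, and ban, dependency cycles" actually stay enforced.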
There's probably more items that I've forgotten above. But this should be
enough to converse about.
LOL - ya think?
Haha, there's only so much time in the world.. especially when you are waiting for friends to arrive for the usual Monday night political gathering :-)

I think your concept of "modularized boost" is at least broadly similar to mine.
To summarize, the only things we would need:
a) we need a tool to download/transform one boost library at a time.
Those already exist. We should facilitate their existence.

b) Optionally, it would be nice to have a dependency listing tool. FYI, this is more difficult than it looks, since the user doesn't have all the Boost libraries on his machine. Such a tool would have to trawl the Boost master on GitHub or some online database of header summaries.
Those also exist. For example the Conan package manger can produce a list or graph of such dependencies in various format.
c) There would likely need to be a separate directory - boost tools - with a couple of things in it. Some stuff would be moved from the boost root to boost/tools, so the user wouldn't need the root on his machine.
Yep. But it would likely be minimal, as we would lean on the existing, external tools ecosystem.

d) A good written explanation for users who want to do this.

We always need documentation. There can never be enough of it!

What we don't need to do is re-organize current boost/development/testing etc. This is merely an alternative deployment concept. Boost developers would not be affected.
Well... I would implore that we do not support more than one deployment arrangement (multiple "channels" is perfectly okay, though). But a sans-root arrangement actually makes development easier, in that it facilitates individual library testing. It would be considerably easier to just clone your one library and test it against any particular existing versions of other Boost libraries, without the juggling with git submodules.

That's it. Just three or four simple things. And no disruption of the current setup.

I feel so much better that you think it's simple. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
On 5/6/19 9:40 PM, Rene Rivera via Boost wrote:
On Mon, May 6, 2019 at 8:17 PM Robert Ramey via Boost
wrote: On 5/6/19 4:47 PM, Rene Rivera via Boost wrote:
* BoostOrg would not produce a monolithic, combined, merged, etc distribution.
I don't see it as necessary for Boost to give this up. That is, I don't see the current setup as conflicting with the ability to just download the libraries he wants.
It conflicts in that the modular arrangement is not the same as the current monolithic arrangement.
* BoostOrg would produce collectively tested milestone modular
distributions.
I don't think anything has to change here.
The change would be in holding on to the belief that the single header dir and top level build is the face we should be putting forward to users.
Sounds lovely right? ...I'll leave the discussion of merits to responses
herein ;-)
I envision the construction of a tool which just goes to github and downloads a list of boost libraries.
Can we just stop trying to build more Boost specific tools?
For each library the download process makes a simple transform to a standalone directory for that library. Similar to what the global distribution currently looks like except for one library at at time.
You've just created an combinatorial explosion of distributions which you'll need to test. As users will complain when their particular combination doesn't work.
This would be useful right away. In vinnies 1.70.1 situation, one could
put this to use right away.
a)the user changes the name of the current beast directory to beast-1.70. b)downloads the latest from the master into the new beast directory.
I my ideal modular view Vinnie would publish a new Beast library version of 1.70.1. Along with the requirement that it's can be used with its dependencies of 1.70.0. Users would obtain that new version with their existing package management method. Hence, not that different with how you posit, but not tied to any particular Boost custom tool. Or arrangement.
What would it take to reach that modular goal? Why do I keep saying we've
been working on this for ages and ages? Briefly here's what it would take to get there (not in any particular order):
* Abandon the single header include tree. * Abandon the monolithic build infrastructure.
One would need a more "stand alone" tool for non-header only libraries.
Why? And not sure what you mean by that?
But I presume lots of users would just compile the *.cpp files into their app or build their own DLL. Ideally, the library package would/should contain a CMake script to do this.
Yes, they could make use of whatever build system the library provides support for. Or they could get it from an established package manager that supports their method of building.
* Ban relative use of inter-library dependencies.
I don't think that's possible.
Actually it seems we are almost there on that point. As I could only find one instance in the latest release of this problem. But it was a quick search only. So there may be more.
But I don't think that's necessary.
Really? How would you resolve a library referring to another one with a path like "../../thread" instead of "/boost/thread" (or equivalent).
The only thing is that a user would need to install the dependent libraries he needs.
Right, they could say, use a package manager or "manua"l downloads of what they need.
One could try to make a tool to do this
Yes, they are called package and dependency managers. There are many of them.
- but I've argued that that is a fools errand.
Okay two things.. 1. Are you saying that package manager developers are fools? 2. Ah, earlier you say we should write a custom Boost tool to create custom distributions, and now you say it's a fools errand. Did you just call yourself a fool? ;-)
Rather than argue that any more. I could just imagine user does the following:
a) Adds a boost header to his project. b) "installs" that header as above c) tries to build his project d) If something missing - call a) for the missing thing
at the end of that process, he has a minimal subset of boost required to support his project.
If someone has nothing else to do he could write a tool which does generates a list of dependencies for a given app as a text file. Then the user would do most of them in one shot. This would likely be an minor enhancement of BCP or similar program. But the result would be the same.
Similar tools already exist.. they are called Package & Dependency Managers. We should probably support them by making Boost easier to package by making Boost modular.
BTW - the user already incorporates non boost libraries into his project
using this same procedure. Ideally any dependency checking tools would work on these as well.
They do.
* Explicit declaration of inter-library dependencies.
I don't think this is necessary.
Sure, it's not required. There are tools to do this job for you. But they aren't as precise as an explicit declaration from the author. And being explicit helps those existing tools by making their job as easy as reading that explicit list and avoiding all the work of scanning and guessing. Improving everyone's lives accordingly with reduced complexity.
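To make the "scanning and guessing" concrete, here is a minimal Python sketch of what such a tool does: derive a module-level dependency list by scanning a library's headers for canonical `#include <boost/...>` directives, roughly what boostdep automates. The paths and module names are illustrative only; real tools handle many more edge cases.

```python
import re
from pathlib import Path

# Matches the first path component after <boost/, e.g. "thread" in
# <boost/thread/mutex.hpp> or "config.hpp" in <boost/config.hpp>.
INCLUDE_RE = re.compile(r'#\s*include\s*<boost/([^/>]+)')

def scan_dependencies(include_dir):
    """Return the set of top-level boost/<name> components included."""
    deps = set()
    for header in Path(include_dir).rglob("*.hpp"):
        for line in header.read_text(errors="ignore").splitlines():
            m = INCLUDE_RE.search(line)
            if m:
                # Strip a trailing .hpp for flat headers like <boost/config.hpp>.
                deps.add(m.group(1).removesuffix(".hpp"))
    return deps
```

An explicit dependency list maintained by the author would let tools skip this scan entirely and just read the declared list.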
* Strict normalized library layout.
I don't think this should be necessary. But I'm aware that some libraries don't follow convention regarding header layout. So either those have to change or the "downloader" tool would have to be smart enough to sort those out. I don't recommend the latter option.
Wait, another tool to write? Oh, wait, no.. You mean make the same tool even smarter. And hence prone to mistakes.
* Remove, and ban, dependency cycles at the inter-library user consumable
granularity.
I don't think this is necessary. If one is following a chain of headers rather than a chain of modules - there are no cycles.
Sure, that's a true statement given your supposition. But let's face it, the reality is that everyone except Robert Ramey thinks in terms of module granularity. All the tools we use are written in those terms. And we do it at the module granularity because it is useful to think about them that way. It simplifies things like configuring the tools we use, how we advertise the libraries, how we track the libraries in source control repositories, documentation, bug tracking, and so on.
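To illustrate why cycles at module granularity matter for packaging, here is a small Python sketch that finds a cycle in a module-level dependency map via depth-first search. The dependency data shown is invented for illustration (it is not a claim that these particular Boost libraries form a real cycle); real input would come from a tool such as boostdep.

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of modules (first == last), or None."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {m: WHITE for m in deps}

    def visit(m, path):
        color[m] = GREY
        path.append(m)
        for d in deps.get(m, ()):
            if color.get(d, WHITE) == GREY:
                # Found a back edge: slice out the cycle from the path.
                return path[path.index(d):] + [d]
            if color.get(d, WHITE) == WHITE and d in deps:
                cycle = visit(d, path)
                if cycle:
                    return cycle
        color[m] = BLACK
        path.pop()
        return None

    for m in deps:
        if color[m] == WHITE:
            cycle = visit(m, [])
            if cycle:
                return cycle
    return None
```

A package manager cannot install "A, which needs B, which needs A" one module at a time, which is why cycles have to be broken at this granularity even if no single chain of headers is circular.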
There's probably more items that I've forgotten above. But this should be
enough to converse about.
LOL - ya think?
Haha, there's only so much time in the world.. especially when you are waiting for friends to arrive for the usual Monday night political gathering :-)
I think your concept of "modularized boost" is at least broadly
similar to mine.
To summarize, the only things we would need:
a) we need a tool to download/transform one boost library at a time.
Those already exist. We should facilitate their existence.
Great. I didn't realize that there was already a method for downloading a Boost library and transforming the directory arrangement from the Boost development one to the one that users use.
b) optionally, it would be nice to have a dependency listing tool. FYI
this is more difficult than it looks since the user doesn't have all the Boost libraries on his machine. Such a tool would have to trawl the Boost master on GitHub or some online database of header summaries.
Those also exist. For example the Conan package manager can produce a list or graph of such dependencies in various formats.
Double great - just pick one.
c) There would likely need to be a separate directory - boost/tools - with a couple of things in it. Some stuff would be moved from the boost root to boost/tools. So the user wouldn't need the root on his machine.
Yep. But likely would be minimal as we would lean on existing, external, tools ecosystem.
d) A good written explanation for users who want to do this.
We always need documentation. There can never be enough of it!
Our problem isn't that we don't have enough documentation. It's that we write tools and procedures that lack sufficient conceptual integrity to document. We only discover that when we write the documentation and by that time it's too late to re-design.
What we don't need to do is re-organize current
boost/development/testing etc. This is merely an alternative deployment concept. Boost developers would not be affected.
Well... I would implore that we not support more than one deployment arrangement (multiple "channels" are perfectly okay though). But a sans-root arrangement actually makes development easier in that it facilitates individual library testing. It would be considerably easier to just clone your one library and test it against any particular existing versions of other Boost libraries.. without juggling git submodules.
That's really an orthogonal question. You and your team are free to evolve the current global/build/test in a way that you see fit. But in the mean time, we can just use the current one. It doesn't need to change. The current infrastructure is already working and in use for many years. It works fine for many people. I'm suggesting an additional method of deployment for special cases.
That's it. Just three or four simple things. And no disruption of current
setup.
I feel so much better that you think it's simple.
LOL - it's simple until everyone starts mucking with it. Basically a case like Vinnie's would be handled by just being able to produce a special version of the source that constitutes Vinnie's library. If we can't do that, then what would be the point of trying to make something more ambitious?
On 2019-05-06 9:16 p.m., Robert Ramey via Boost wrote:
On 5/6/19 4:47 PM, Rene Rivera via Boost wrote:
Sounds lovely right? ...I'll leave the discussion of merits to responses herein ;-)
I envision the construction of a tool which just goes to github and downloads a list of boost libraries. For each library the download process makes a simple transform to a standalone directory for that library. Similar to what the global distribution currently looks like except for one library at at time.
No, no, no. Let's not turn that into yet another tools discussion. This isn't about tooling, it's about how we want to run the project (or projects, as it stands) as a community. Before we can write tools, we need to agree on how we want to work with (i.e., contribute to, build, maintain) the projects. Stefan -- ...ich hab' noch einen Koffer in Berlin...
On 2019-05-06 7:47 p.m., Rene Rivera via Boost wrote:
What would it take to reach that modular goal? Why do I keep saying we've been working on this for ages and ages? Briefly here's what it would take to get there (not in any particular order):
* Abandon the single header include tree. * Abandon the monolithic build infrastructure. * Ban relative use of inter-library dependencies. * Explicit declaration of inter-library dependencies. * Strict normalized library layout. * Remove, and ban, dependency cycles at the inter-library user consumable granularity.
There's probably more items that I've forgotten above. But this should be enough to converse about.
Indeed. For example documentation and other metadata (issue trackers, release notes, etc., etc.) that we may want to syndicate via the main boost.org portal, but which should likewise not be generated in a monolithic way. The best part about modularization is that we don't have to switch from black to white in a single atomic transaction. Rather, we can take one library at a time and apply the above rules. In fact, I imagine it to be possible to draw a graph that shows inbound (prerequisites) and outbound (dependencies) connections, so we can rank libraries such that we work our way from the outside inwards. The ones that no other boost libraries depend on, and which have the fewest boost prerequisites, can be converted first. This work can in fact spread over multiple release cycles, so we don't have to complete all of this within four months. Stefan -- ...ich hab' noch einen Koffer in Berlin...
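The "outside inwards" ranking described above can be sketched in a few lines of Python: order libraries so that those no other library depends on, and with the fewest prerequisites, are converted first. The dependency map below is invented for illustration; real input would come from one of the dependency tools mentioned later in this thread.

```python
def conversion_order(deps):
    """Rank modules for conversion: fewest dependents first, then fewest prerequisites.

    `deps` maps each module to the list of modules it depends on.
    """
    # Count inbound edges: how many modules depend on each module.
    dependents = {m: 0 for m in deps}
    for m, ds in deps.items():
        for d in ds:
            dependents[d] = dependents.get(d, 0) + 1
    # Leaf consumers (nothing depends on them) sort first; core
    # libraries that everything depends on sort last.
    return sorted(deps, key=lambda m: (dependents.get(m, 0), len(deps[m])))
```

This is only a heuristic ranking, not a strict topological sort, but it matches the idea of spreading the conversion over multiple release cycles, starting at the edges of the graph.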
Gesendet: Dienstag, 07. Mai 2019 um 06:59 Uhr Von: "Stefan Seefeld via Boost"
[...] Rather, we can take one library at a time and apply the above rules. In fact, I imagine it to be possible to draw a graph that shows inbound (prerequisites) and outbound (dependencies) connections, so we can rank libraries such that we work our way from the outside inwards. The ones that no other boost libraries depend on, and which have the fewest boost prerequisites, can be converted first.
I guess you can use any of those tools (maybe slightly extended) to do that? https://github.com/jeking3/boost-deptree https://github.com/Mike-Devel/boost_dep_graph (self-plug) https://github.com/boostorg/boostdep Mike
Stefan
Gesendet: Dienstag, 07. Mai 2019 um 01:47 Uhr Von: "Rene Rivera via Boost"
* Ban relative use of inter-library dependencies.
I'd go one step further: Ideally, every inter-library include would go through #include
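Enforcing the "no relative inter-library includes" rule could be mechanized with a small lint pass. Here is a hedged Python sketch that flags any #include climbing out of the current library with "..". The directory walked and file patterns are hypothetical; a real check would also cover build files, not just sources.

```python
import re
from pathlib import Path

# Flags includes like #include "../../thread/include/boost/thread.hpp"
# while leaving canonical #include <boost/thread.hpp> alone.
RELATIVE_RE = re.compile(r'#\s*include\s*["<][^">]*\.\./')

def find_relative_includes(root):
    """Return (file, line number, line) for each relative include under root."""
    hits = []
    for src in Path(root).rglob("*.[hc]pp"):
        for n, line in enumerate(src.read_text(errors="ignore").splitlines(), 1):
            if RELATIVE_RE.search(line):
                hits.append((str(src), n, line.strip()))
    return hits
```

Run in CI over each library's include/ and src/ directories, this would make the ban self-enforcing rather than relying on occasional manual searches.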
-- -- Rene Rivera
On Mon, May 6, 2019 at 7:54 PM Rene Rivera via Boost
A modular Boost, to me, means a Boost that first and foremost a collection of independently consumable C++ libraries. ... What would it take to reach that modular goal? Why do I keep saying we've been working on this for ages and ages? Briefly here's what it would take to get there (not in any particular order):
* Abandon the single header include tree. * Abandon the monolithic build infrastructure. * Ban relative use of inter-library dependencies. * Explicit declaration of inter-library dependencies. * Strict normalized library layout. * Remove, and ban, dependency cycles at the inter-library user consumable granularity.
I would also add:

* the ability to have optional sub-modules (for example boost uuid only depends on serialization if you want to use it)
* all direct dependencies must be automatically generated - nobody should need to manually modify the list of dependencies for a module (this is safe because your pull request will identify when the build system file with the dependencies has changed, and alert you to the fact that you added or removed a dependency)

It would be easier in the beginning if everyone agreed to target one distribution system, such as conan, which is capable of handling the direct dependencies, capable of downloading all direct and transitive dependencies, and can generate additional build system details for consuming what got downloaded (in make and cmake, and others). For example:

$ conan install boost_uuid/1.69.0@bincrafters/stable -g cmake -g txt -g compiler_args
...
Generator compiler_args created conanbuildinfo.args
Generator txt created conanbuildinfo.txt
Generator cmake created conanbuildinfo.cmake

This will download a lot of stuff into ~/.conan, but the paths inside are known only to conan. The resulting files allow build systems to consume what was downloaded. I would then want to extend it further with optional recipes, to add a boost_uuid/.../stable_optional_serialization which would bring in serialization and the uuid serialization header. We could then use conan to perform the build as well, or use another build system to do it.

This effort would lend itself well to conversion to cmake as an actual build system as well. I would recommend a standalone boost-cmake repository containing cmake scripts that all other repositories can use for common, normalized build operations across all the repositories. Once this works with conan it would then be possible to add other build systems, but I think it's important to choose one system and get it working.
Conan has already done their own work on packaging. We can even teach the CI jobs in Travis and Appveyor to build and push updates into bintray for conan to deliver; for example a fast-forward merge and push into a specially named branch like "release/" could do all of that automatically. This may allow simplification of the branching model, where folks still work on develop all the time.

We would need to find a better way to overlay local changes to other repositories on top of the dependencies that are downloaded and managed. With python I use a pypi package called pypi-server which allows me to locally publish an update to a package, and then when I run tests on another module in a venv (with tox, usually) it will pick up my locally published package update. This lets me stage changes that must touch more than one repository. Perhaps we could do something similar. Boost could maintain a JFrog server we could all use for this purpose (I don't know if JFrog offers hosted JFrog services yet). - Jim
Gesendet: Dienstag, 07. Mai 2019 um 13:57 Uhr Von: "James E. King III via Boost"
It would be easier in the beginning if everyone agreed to target one distribution system, such as conan, which is capable of handling the direct dependencies, capable of downloading all direct and transitive dependencies, and can generate additional build system details for consuming what got downloaded (in make and cmake, and others).
“Don't package your libraries, write packagable libraries!” https://www.youtube.com/watch?v=sBP17HQAQjk Imho the question should not be "How can system XY be used to distribute boost in a modular fashion?", but "What can be done to make it easier for systems *like XY* to distribute boost in a modular fashion?" I.e. I don't think boost should start to depend on any particular package manager for its distribution, and even less so for its build system support. It should simply make sure that distribution via any package manager requires as little special treatment as possible. E.g. (as Rene already mentioned) via providing dependency information in an easy to parse format, following a standard directory layout, keeping build logic simple and having no dependency cycles. Most of this is already the case and again, I think most "dependencies" on serialization can simply be ignored, but there are other ones that are probably more important to address (e.g. between math and multiprecision, but I haven't looked into the details). Mike
On 19-05-07 07:57:35, James E. King III via Boost wrote:
On Mon, May 6, 2019 at 7:54 PM Rene Rivera via Boost
wrote: A modular Boost, to me, means a Boost that first and foremost a collection of independently consumable C++ libraries. ... What would it take to reach that modular goal? Why do I keep saying we've been working on this for ages and ages? Briefly here's what it would take to get there (not in any particular order):
* Abandon the single header include tree. * Abandon the monolithic build infrastructure. * Ban relative use of inter-library dependencies. * Explicit declaration of inter-library dependencies. * Strict normalized library layout. * Remove, and ban, dependency cycles at the inter-library user consumable granularity.
I would also add:
[...] It would be easier in the beginning if everyone agreed to target one distribution system, such as conan, which is capable of handling the direct dependencies, capable of downloading all direct and transitive dependencies, and can generate additional build system details for consuming what got downloaded (in make and cmake, and others).
And it's still incapable of supporting VS2019 (as I checked a few days ago)... My point being: please do not bind Boost to any specific package manager, no matter how awesome one may seem to be. Instead, I'll quote Rene: "We should probably support them [P&DM] by making Boost easier to package by making Boost modular." Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Fingerprint=C081 EA1B 4AFB 7C19 38BA 9C88 928D 7C2A BB2A C1F2
On Tue, May 7, 2019 at 8:41 PM Peter Dimov via Boost
Rene Rivera wrote:
* Ban relative use of inter-library dependencies.
What does this mean?
It means no doing "../../lib/thread/.." from some other lib to get at the build files or anything else. I'm almost certain we don't do that for headers. We used to do it a fair amount in the jamfiles. But I only found one of those yesterday. Most likely this will fall out naturally from being modular anyway. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
Rene Rivera wrote:
On Tue, May 7, 2019 at 8:41 PM Peter Dimov via Boost
wrote: Rene Rivera wrote:
* Ban relative use of inter-library dependencies.
What does this mean?
It means no doing "../../lib/thread/.." from some other lib to get at the build files or anything else.
How will the libraries refer to each other? <library>/boost//libname, target_link_libraries(Boost::libname), and rely on someone to have made these available somehow? (If we're dropping the superproject.)
On Tue, May 7, 2019 at 9:06 PM Peter Dimov via Boost
On Tue, May 7, 2019 at 8:41 PM Peter Dimov via Boost
wrote: Rene Rivera wrote:
* Ban relative use of inter-library dependencies.
What does this mean?
It means no doing "../../lib/thread/.." from some other lib to get at
Rene Rivera wrote: the
build files or anything else.
How will the libraries refer to each other? <library>/boost//libname,
It could be.
target_link_libraries(Boost::libname), and rely on someone to have made these available somehow?
Yes. For the b2 case I would add something to make it easier than doing `use-project ..`. I don't know what you would do for cmake. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
On 5/7/19 7:05 PM, Peter Dimov via Boost wrote:
Rene Rivera wrote:
On Tue, May 7, 2019 at 8:41 PM Peter Dimov via Boost
wrote: Rene Rivera wrote:
* Ban relative use of inter-library dependencies.
What does this mean?
It means no doing "../../lib/thread/.." from some other lib to get at the build files or anything else.
How will the libraries refer to each other? <library>/boost//libname, target_link_libraries(Boost::libname), and rely on someone to have made these available somehow? (If we're dropping the superproject.)
wouldn't one just replace all instances of "../../lib/thread/.." with
On Tue, May 7, 2019 at 10:06 PM Peter Dimov via Boost
Rene Rivera wrote:
On Tue, May 7, 2019 at 8:41 PM Peter Dimov via Boost
wrote: Rene Rivera wrote:
* Ban relative use of inter-library dependencies.
What does this mean?
It means no doing "../../lib/thread/.." from some other lib to get at the build files or anything else.
How will the libraries refer to each other? <library>/boost//libname, target_link_libraries(Boost::libname), and rely on someone to have made these available somehow? (If we're dropping the superproject.)
Wouldn't we be able to make much simpler builds for each repository if we assume a third party package manager will handle acquiring our dependencies and tell us how to leverage them? If you want to stick with b2 we could add a b2 generator to conan so it would spit out some b2 rules, otherwise we could further simplify things by using cmake builds in each repository instead of b2.

I'd like to suggest we take two repositories, one that has no other boost dependencies, and one that depends on it and only it, both of which are header-only, and convert them to this new potential build environment:

1. Package management in conan
2. CMake build in each repo (header only repos means this build runs tests)
3. depender declares a dependency on the dependee
4. depender cmake build automatically invokes conan to acquire dependee and learn the target(s)

Where things get messy are:

* having to support all the library variants for those repos that still have a library is more difficult
* coordinating changes to multiple repositories requires an extra step of locally publishing to a private conan repo that is always consulted first, unless we come up with a better way to do it locally
* dependencies are managed as part of the build process, whereas before they were all handled together by b2
* build tools like cmake tend not to produce more than one language level or one link variant at a time

Benefits that occur however are:

* No more monolithic release coordination - a huge responsibility and time sink for a select few individuals.
* Versioned dependencies isolate each repository from breaking changes in others.
* Each repository can release on its own schedule (following SemVer rules, of course) - code gets to consumers much faster; defect windows shrink.
* No real need to continue to develop or support b2 (unless cmake is insufficient to cover all potential platforms, which would be surprising), reducing overall maintenance burden and aligning with what is popular.
* Optional dependencies can be separated into different recipes (for example, if you want to use serialization with uuid); otherwise serialization is normally not a dependency, further reducing the standard dependency tree.
* No more need for both a develop and master branch; repositories would be free to manage their release strategies as they see fit. Typical projects create release branches from master when they are ready to stabilize a release.

It's probably worth asking the question - how important is it to enable each repository to release on its own schedule, or to eliminate the monolithic coordination process, or to eliminate the need for use of and continued maintenance of b2, or ...? We could continue to use b2, however, and just train it to work with the package manager. b2 would have to be an external thing someone already has installed (like cmake). That's a less radical change and may have more potential to succeed. - Jim
Gesendet: Mittwoch, 08. Mai 2019 um 14:24 Uhr Von: "James E. King III via Boost"
4. depender cmake build automatically invokes conan to acquire dependee and learn the target(s)
Why is conan invocation necessary? I thought they had a non-intrusive mode by now. Hardcoding conan into the buildscript sounds like it will make using/developing boost a pain for everyone not using that package manager, or am I missing something? Mike
On Wed, May 8, 2019 at 7:24 AM James E. King III via Boost < boost@lists.boost.org> wrote:
On Tue, May 7, 2019 at 10:06 PM Peter Dimov via Boost
wrote: Rene Rivera wrote:
On Tue, May 7, 2019 at 8:41 PM Peter Dimov via Boost
wrote: Rene Rivera wrote:
* Ban relative use of inter-library dependencies.
What does this mean?
It means no doing "../../lib/thread/.." from some other lib to get at
the
build files or anything else.
How will the libraries refer to each other? <library>/boost//libname, target_link_libraries(Boost::libname), and rely on someone to have made these available somehow? (If we're dropping the superproject.)
Wouldn't we be able to make much simpler builds for each repository if we assume a third party package manager will handle acquiring our dependencies and tell us how to leverage them?
Yes.
If you want to stick with b2 we could add a b2 generator to conan so it would spit out some b2 rules,
That already exists < https://docs.conan.io/en/latest/reference/generators/b2.html>.
otherwise we could further simplify things by using cmake builds in each repository instead of b2.
Sure.. Actually you wouldn't care what build system a particular library author used. As you could use whatever build system you prefer to both produce and consume the libraries. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
Rene Rivera wrote:
Sure.. Actually you wouldn't care what build system a particular library author used. As you could use whatever build system you prefer to both produce and consume the libraries.
I really don't understand how this is supposed to work. What do we put in the repo, and how will this enable building a Boost library with whatever build system?
On Wed, May 8, 2019 at 9:21 AM Peter Dimov via Boost
Rene Rivera wrote:
Sure.. Actually you wouldn't care what build system a particular library author used. As you could use whatever build system you prefer to both produce and consume the libraries.
I really don't understand how this is supposed to work. What do we put in the repo, and how will this enable building a Boost library with whatever build system?
I'm not, James is though, saying we should do this.. just that it's possible. But one way to do it is to agree on an API for building, testing, etc. Such an API would be up for design. It could be we have bash/bat/etc, or it could be a single build system, or it could be a single package manager that supports the use case. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
On 19-05-08 09:35:49, Rene Rivera via Boost wrote:
On Wed, May 8, 2019 at 9:21 AM Peter Dimov via Boost
wrote: Rene Rivera wrote:
Sure.. Actually you wouldn't care what build system a particular library author used. As you could use whatever build system you prefer to both produce and consume the libraries.
I really don't understand how this is supposed to work. What do we put in the repo, and how will this enable building a Boost library with whatever build system?
I'm not, James is though, saying we should do this.. just that it's possible. But one way to do it is to agree on an API for building, testing, etc. Such an API would be up for design. It could be we have bash/bat/etc, or it could be a single build system, or it could be a single package manager that supports the use case.
And an API to integrate documentation into the global book at boost.org <day-dreaming> It could be as 'simple' as agreement on a common format individual libraries produce. It is submitted to boost.org, consumed by processors to aggregate or produce single unified-looking documentation. </day-dreaming> Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Fingerprint=C081 EA1B 4AFB 7C19 38BA 9C88 928D 7C2A BB2A C1F2
AMDG On 5/8/19 12:24 PM, Mateusz Loskot via Boost wrote:
And, API to integrate documentation into the global book at boost.org
<day-dreaming> It could be as 'simple' as agreement on common format individual libraries produce.
The common format already exists. It's called html.
It is submitted to boost.org, consumed by processors to aggregate or produce single unified-looking documentation. </day-dreaming>
In Christ, Steven Watanabe
Rene Rivera wrote:
Sure.. Actually you wouldn't care what build system a particular library author used. As you could use whatever build system you prefer to both produce and consume the libraries. ... But one way to do it is to agree on an API for building, testing, etc. Such an API would be up for design. It could be we have bash/bat/etc, or it could be a single build system, or it could be a single package manager that supports the use case.
I can't help but notice the similarity with CMake here. It also seems to have started with this goal - to provide a "portable" project description so that one could then use one's preferred build system, after cmake -G "My Preferred Build System". Didn't quite turn out that way though.
On 2019-05-09 8:43 a.m., Peter Dimov via Boost wrote:
Rene Rivera wrote:
Sure.. Actually you wouldn't care what build system a particular library author used. As you could use whatever build system you prefer to both produce and consume the libraries. ... But one way to do it is to agree on an API for building, testing, etc. Such an API would be up for design. It could be we have bash/bat/etc, or it could be a single build system, or it could be a single package manager that supports the use case.
I can't help but notice the similarity with CMake here. It also seems to have started with this goal - to provide a "portable" project description so that one could then use one's preferred build system, after cmake -G "My Preferred Build System". Didn't quite turn out that way though.
Right, but rather than provide the required meta information in a portable way, CMake is (or has become, I don't know its history) a totally invasive wrapper tool. Quite the anti-pattern, in fact. But that doesn't mean that the idea of a portable (and tool-agnostic) interface is wrong, does it ? Stefan -- ...ich hab' noch einen Koffer in Berlin...
Before this drifts of into nothingness: How about starting with those points from Rene's list:
* Ban relative use of inter-library dependencies. * Explicit declaration of inter-library dependencies. * Strict normalized library layout.
Numbers 1 and 3 are probably already fulfilled by most (all?) libraries and it is "just" a matter of cleaning up a few exceptions. @Rene: Do you have an opinion on how exactly that normalized layout should look (other than what is already mandated by boost)? More importantly: Is there any library that violates it? Does this cover moving the "numerics" libraries into "boost/libs"? Number 2 is useful regardless of whether boost goes full modular or not (it e.g. can be used by depinst.py, package managers that provide modular boost, and also by cmake or other build files). Best Mike
participants (14)

- Gavin Lambert
- James E. King III
- Kostas Savvidis
- Mateusz Loskot
- Mike
- Pete Bartlett
- Peter Dimov
- Rene Rivera
- Richard Hodges
- Robert Ramey
- stefan
- Stefan Seefeld
- Steve Robbins
- Steven Watanabe