On 4/11/17 6:29 PM, Steven Watanabe via Boost wrote:
AMDG
On 04/11/2017 07:10 PM, Robert Ramey via Boost wrote:
On 4/11/17 4:10 PM, Gavin Lambert via Boost wrote:
While I agree with this practice in general, in the specific case of Boost libraries using ../include relative paths is not a good idea, in my view.
The problem with this is the way the source is repackaged as a monolithic zip/tarball -- all of the include directories are removed and replaced with a "boost" folder that combines the include directories from all libraries.
I think it's more accurate to say that it creates a directory structure of file links which gives the appearance of a monolithic boost "product". b2 headers is the magic command which creates this structure.
Gavin is correct. Most of us here work from git, but the actual release archive doesn't use b2 headers.
Wow - it's amazing that I never knew that. In fact, this would never have crossed my mind.
Thus an end user who only uses this zip/tarball version of Boost cannot build your tests/examples without modifying the source to use <boost/...> includes instead -- so this is what you should have used to begin with.
OK - I see this now.
Actually, my whole motivation was for users to be able to build the examples and run the tests without having to create links to all the headers in the main boost directory. b2 headers only does this for libraries which have been "installed" into the Boost tree. There is no tool/mechanism for doing such a thing with non-Boost libraries.
Boost.Build can technically handle it. The actual implementation of b2 headers doesn't care whether the headers are part of the Boost tree or not. (Of course, whether that's a good idea is debatable).
Of course. Now that I understand how "modular boost" is distributed (for the first time ever), I can see what the problem is. My way of using boost is:

a) clone the boost super project from github - takes a couple of minutes.
b) run b2 headers
c) I'm done if I'm not using compiled libraries - the common case today
d) I run b2 on any libraries that need building
e) I run b2 in the test directory of any library I'm suspicious of - which is basically all of them that I'm going to use.

This is much easier than downloading, unzipping, then probably having to rebuild anyway because something is always out of sync. The whole process is tedious, time consuming and error prone.

When I want to try a new library (say from the incubator) I want to:

a) clone the library to some directory.
b) This directory is most likely inside the project I'm working on, for which I need the library in the first place. Since the project is often in an IDE which has variables set for includes etc., this is pretty simple.
c) I poke around the documentation in more detail to try to use it in my own project. Unfortunately, many/most libraries don't include the html documentation, so I have to find it somewhere. This annoys the hell out of me since I have to look on the net, and what I find might be slightly out of sync. Now I have to depend on a net connection to do actual development - another annoying thing. Anyway, I just utter a few curse words and move on.
d) I might or might not run the tests/examples.
e) I'll hack my project to use the new library and see if it solves my problem. If after an hour I don't feel I'm making progress, I just want to delete it and try the next one.

That's the experience I want to have, and I almost have it now. After considering all this, I'm thinking we should just drop the zipfile distribution. The whole focus of a "release" should be to assign the magic tag to master in github - "Release 1.63".
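The git-based workflow described above can be sketched as a short shell session. The commands follow the standard Boost superproject conventions (bootstrap.sh, b2 headers, --with-<library>); the choice of filesystem as the library being built and tested is purely illustrative:

```shell
# Sketch of the git-based workflow, assuming git and a C++ toolchain
# are installed. Library names below are illustrative examples.

# a) Clone the Boost superproject with all library submodules.
git clone --recursive https://github.com/boostorg/boost.git
cd boost

# b) Build the b2 engine, then create the links that form the
#    monolithic boost/ header directory.
./bootstrap.sh
./b2 headers

# c) Header-only use needs nothing further.
# d) Build only the compiled libraries actually required, e.g.:
./b2 --with-filesystem --with-system

# e) Run a library's tests before trusting it, e.g.:
cd libs/filesystem/test
../../../b2
```

Note that a fresh clone with all submodules is large; the couple of minutes quoted above assumes a reasonably fast connection.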
If someone else wants to build and distribute the zipfiles, let them do it, but I don't think we should promote it as the preferred way for users to acquire boost. By adopting this point of view, and with a couple of small changes (e.g. requiring html documentation inside each project), we would have a "modular boost" which is much easier to maintain and work with. Robert Ramey