Header Inclusion practices
For as long as I can remember, it's been my practice to write program code
to minimize dependencies on the environment in which it is built. By
environment I mean things external to the code itself: environment
variables, command-line options, directory context, etc. As part of this I
use rules for header file inclusion:
a) use #include "header.hpp" for files in the same directory as the
current source file.
b) use #include "other_directory/header.hpp" for files which are known
to be in a specific place relative to the current file. This shows up
in things like #include "../include/header.hpp" for tests and examples.
c) use #include <header.hpp> for files which are found by looking in
directories listed in "-I" switches and environment variables (INCLUDE).
d) use #include <iostream> for standard library components.
AMDG On 04/11/2017 11:55 AM, Robert Ramey via Boost wrote:
// example using the safe numerics library
#include <iostream>
#include <limits>
#include
#include "../include/cpp.hpp"
#include "../include/exception.hpp"
#include "../include/safe_integer.hpp"
#include "../include/safe_range.hpp"
This has raised consternation in some quarters - but I don't see anything wrong with it. It basically means that only the
My opinion on this specific case is that examples should match what a user would expect to write, i.e. they should not rely on the fact that the example code is inside the safe_numerics library. For test code, I don't really care how you write it as long as it works.
As far as I know this question has never been asked before and I'm curious to know what others might have to say about this.
In Christ, Steven Watanabe
On 04/11/17 20:55, Robert Ramey via Boost wrote:
// example using the safe numerics library
#include <iostream>
#include <limits>
#include
#include "../include/cpp.hpp"
#include "../include/exception.hpp"
#include "../include/safe_integer.hpp"
#include "../include/safe_range.hpp"
This has raised consternation in some quarters - but I don't see anything wrong with it. It basically means that only the
As far as I know this question has never been asked before and I'm curious to know what others might have to say about this.
I'm not sure I understood the question. But in the example above I would
rather avoid relative paths. The example should assume it's external to
the library, so that the user is able to just copy/paste the code and
start playing with it. It should also demonstrate how the user is
supposed to use the library, and the user will most likely add
an include path for the library to compile his code.
Actually, I try to never ever use relative paths in source code, except
for the trivial case #include "header.h", where header.h is in the same
directory. The reason for this is that relative paths obfuscate
dependencies between different parts of the program.
Bad:
|
|-apples
| |-apple.h
|
|-oranges
|-orange.h
apple.h:
#include "../oranges/orange.h" // why the dependency?
Good:
|
|-common
| |-oranges
| | |-orange.h
|
|-apples
|-apple.h
apple.h:
#include <oranges/orange.h>
On 11/04/2017 at 19:55, Robert Ramey via Boost wrote:
For as long as I can remember, it's been my practice to write program code to minimize dependencies on the environment in which it is built. By environment I mean things external to the code itself: environment variables, command-line options, directory context, etc. As part of this I use rules for header file inclusion:
a) use #include "header.hpp" for files in the same directory as the current source file.
b) use #include "other_directory/header.hpp" for files which are known to be in a specific place relative to the current file. This shows up in things like #include "../include/header.hpp" for tests and examples.
c) use #include <header.hpp> for files which are found by looking in directories listed in "-I" switches and environment variables (INCLUDE). I generally try not to depend on environment variables, as I always forget to set them, or even how to set them. Come to think of it, I don't know how my build system finds the Boost libraries; I presume it's through some IDE/Bjam/CMake setting which I can never remember.
d) use #include <iostream> for standard library components. Presumably these are routed to some directory relative to the compiler.
So some of my source files look like:
// interval.hpp header for safe numerics library
#include <limits>
#include <cassert>
#include
#include <array>
#include
#include
#include
#include "utility.hpp" // log
#include "checked_result.hpp"
#include "checked.hpp"
and
// example using the safe numerics library
#include <iostream>
#include <limits>
#include
#include "../include/cpp.hpp"
#include "../include/exception.hpp"
#include "../include/safe_integer.hpp"
#include "../include/safe_range.hpp"
This has raised consternation in some quarters - but I don't see anything wrong with it. It basically means that only the
As far as I know this question has never been asked before and I'm curious to know what others might have to say about this.
Hi,
Using
#include "whatever.hpp"
makes it impossible to test a .cpp file against a mocked whatever.hpp. If you use instead
#include <whatever.hpp>
// example using the safe numerics library
#include <iostream>
#include <limits>
#include
#include "../include/cpp.hpp"
#include "../include/exception.hpp"
#include "../include/safe_integer.hpp"
#include "../include/safe_range.hpp"
This is in your ~/example directory, right in the base directory of the library, right? In which case your example code is confusing. It should be:
#include "../include/boost/safe_numerics/safe_range.hpp"
... or something like that, if you're going to be consistent.
This has raised consternation in some quarters - but I don't see anything wrong with it. It basically means that only the
As far as I know this question has never been asked before and I'm curious to know what others might have to say about this.
I think there are big changes coming to how new C++ libraries #include stuff, if you want to transparently support Modules, ABI version pinning and precompiled headers at least. You'll be seeing (to my knowledge) the first C++ Modules-ready Boost-format library up for review with Outcome at the end of May. You'll be glad to know it ICEs MSVC handily, but in theory it will work one day when the compiler gets debugged :)
Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 4/11/17 2:11 PM, Niall Douglas via Boost wrote:
In which case your example code is confusing. It should be:
#include "../include/boost/safe_numerics/safe_range.hpp"
... or something like that if you're going to be consistent.
While the project is/was in the incubator, I just used the structure safe_numerics/include/header.hpp, as I didn't want to use the word "boost" all over the place. When I first made it, I didn't expect it to actually become a boost library - it was supposed to be an example of how I intended the incubator to be used. Now that it's going to be a boost library, that of course has to be changed.
Robert Ramey
On 12/04/2017 05:55, Robert Ramey via Boost wrote:
// example using the safe numerics library
#include <iostream>
#include <limits>
#include
#include "../include/cpp.hpp"
#include "../include/exception.hpp"
#include "../include/safe_integer.hpp"
#include "../include/safe_range.hpp"
This has raised consternation in some quarters - but I don't see anything wrong with it. It basically means that only the
While I agree with this practice in general, in the specific case of
Boost libraries using ../include relative paths is not a good idea, in
my view.
The problem with this is the way the source is repackaged as a
monolithic zip/tarball -- all of the include directories are removed and
replaced with a "boost" folder that combines the include directories
from all libraries.
Thus an end user who only uses this zip/tarball version of Boost cannot
build your tests/examples without modifying the source to use
<boost/...> includes instead -- so this is what you should have used to begin with.
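The repackaging Gavin describes can be simulated in a few lines; the library names (alpha, beta) are made up, and this only models the directory flattening, not the real release tooling.

```shell
# In git layout each library has its own include/ tree; the release
# archive merges them all into a single top-level boost/ directory,
# so per-library "../include" paths no longer exist.
mkdir -p libs/alpha/include/boost/alpha libs/beta/include/boost/beta release
printf '// alpha\n' > libs/alpha/include/boost/alpha/alpha.hpp
printf '// beta\n'  > libs/beta/include/boost/beta/beta.hpp
for inc in libs/*/include; do cp -R "$inc"/. release/; done
find release -name '*.hpp' | sort
# release/boost/alpha/alpha.hpp
# release/boost/beta/beta.hpp
```

After flattening, only `-I release` plus `#include <boost/alpha/alpha.hpp>`-style paths work, which is why the relative-path examples break for tarball users.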
On 4/11/17 4:10 PM, Gavin Lambert via Boost wrote:
While I agree with this practice in general, in the specific case of Boost libraries using ../include relative paths is not a good idea, in my view.
The problem with this is the way the source is repackaged as a monolithic zip/tarball -- all of the include directories are removed and replaced with a "boost" folder that combines the include directories from all libraries.
I think it's more accurate to say that it creates a directory structure of file links which creates the appearance of a monolithic boost "product". b2 headers is the magic command which creates this structure.
Thus an end user who only uses this zip/tarball version of Boost cannot build your tests/examples without modifying the source to use <boost/...> includes instead -- so this is what you should have used to begin with.
Actually, my whole motivation was for users to be able to build the examples and run tests without having to create links to all the headers in the main boost directory. b2 headers only does this for libraries which have been "installed" into the boost tree. There is no tool/mechanism for doing such a thing with non-boost libraries. The current approach permitted a library user to build and run the examples without adding the library to the boost tree and running b2 headers, and without adding an -I switch or environment variable. But the downside is that I have these sort of funky path names for included header files.
I would like one to be able to easily install/uninstall a particular library without the whole b2 headers back and forth. I would actually be happy with just adding another -I switch for each library I use directly; it's typically a small number. But that doesn't deal with the indirectly used boost libraries.
After I move it to the boost subprojects, I can just follow the normal rules, so in practical terms it's not a huge issue for this one library. I'm really questioning the idea of depending on b2 headers and having the directory structure inside a library include the "boost" subdirectory - which has no siblings.
Having said that, as long as the library isn't actually part of the Boost distribution yet it might be more convenient to keep relative paths (since the headers don't get moved), but it's one more thing that needs to be fixed up later.
I won't save anything by fixing it sooner, so I'll leave it for last as I already have a lot to do.
Robert Ramey
AMDG On 04/11/2017 07:10 PM, Robert Ramey via Boost wrote:
On 4/11/17 4:10 PM, Gavin Lambert via Boost wrote:
While I agree with this practice in general, in the specific case of Boost libraries using ../include relative paths is not a good idea, in my view.
The problem with this is the way the source is repackaged as a monolithic zip/tarball -- all of the include directories are removed and replaced with a "boost" folder that combines the include directories from all libraries.
I think it's more accurate to say that it creates a directory structure of file links which creates the appearance of a monolithic boost "product". b2 headers is the magic command which creates this structure.
Gavin is correct. Most of us here work from git, but the actual release archive doesn't use b2 headers.
Thus an end user who only uses this zip/tarball version of Boost cannot build your tests/examples without modifying the source to use <boost/...> includes instead -- so this is what you should have used to begin with.
Actually, my whole motivation was for users to be able to build the examples and run tests without having to create links to all the headers in the main boost directory. b2 headers only does this for libraries which have been "installed" into the boost tree. There is no tool/mechanism for doing such a thing with non-boost libraries.
Boost.Build can technically handle it. The actual implementation of b2 headers doesn't care whether the headers are part of the Boost tree or not. (Of course, whether that's a good idea is debatable). In Christ, Steven Watanabe
On 4/11/17 6:29 PM, Steven Watanabe via Boost wrote:
AMDG
On 04/11/2017 07:10 PM, Robert Ramey via Boost wrote:
On 4/11/17 4:10 PM, Gavin Lambert via Boost wrote:
While I agree with this practice in general, in the specific case of Boost libraries using ../include relative paths is not a good idea, in my view.
The problem with this is the way the source is repackaged as a monolithic zip/tarball -- all of the include directories are removed and replaced with a "boost" folder that combines the include directories from all libraries.
I think it's more accurate to say that it creates a directory structure of file links which creates the appearance of a monolithic boost "product". b2 headers is the magic command which creates this structure.
Gavin is correct. Most of us here work from git, but the actual release archive doesn't use b2 headers.
Wow - it's amazing that I never knew that. In fact, this would never have crossed my mind.
Thus an end user who only uses this zip/tarball version of Boost cannot build your tests/examples without modifying the source to use <boost/...> includes instead -- so this is what you should have used to begin with.
OK - I see this now.
Actually, my whole motivation was for users to be able to build the examples and run tests without having to create links to all the headers in the main boost directory. b2 headers only does this for libraries which have been "installed" into the boost tree. There is no tool/mechanism for doing such a thing with non-boost libraries.
Boost.Build can technically handle it. The actual implementation of b2 headers doesn't care whether the headers are part of the Boost tree or not. (Of course, whether that's a good idea is debatable).
Of course. Now that I understand how "modular boost" is distributed (for the first time ever), I can see what the problem is. My way of using boost is:
a) clone the boost super-project from github - takes a couple of minutes.
b) run b2 headers
c) I'm done if I'm not using compiled libraries - the common case today
d) I run b2 on any libraries that need building
e) I run b2 in the test directory of any library I'm suspicious of - which is basically all of them that I'm going to use.
This is much easier than downloading and unzipping, then having to probably rebuild anyway because something is always out of sync. The whole process is tedious, time consuming and error prone.
When I want to try a new library (say from the incubator) I want to:
a) clone the library to some directory.
b) This directory is most likely inside the project I'm working on, for which I need the library in the first place. Since the project is often in an IDE which has set variables for includes etc., this is pretty simple.
c) I poke around the documentation in more detail to try to use it in my own project. Unfortunately, many/most libraries don't include the html documentation, so I have to look for it somewhere. This annoys the hell out of me since I have to look on the net, and what I find might be slightly out of sync. Now I have to depend on a net connection to do actual development - another annoying thing. Anyway, I just utter a few curse words and move on.
d) I might or might not run the tests/examples.
e) I'll hack my project to use the new library and see if it solves my problem. If after an hour I don't feel I'm making progress, I just want to delete it and try the next one.
That's the experience I want to have, and I almost have it now. After considering all this, I'm thinking we should just drop the zipfile distribution. The whole focus on "release" should be assigning the magic tag to master in github - "Release 1.63".
If someone else wants to build and distribute the zipfiles, let them do it, but I don't think we should promote it as the preferred way for users to acquire boost. By adopting this point of view, and a couple of small changes (e.g. requiring html documentation inside each project), we would have a "modular boost" which is much easier to maintain and work with.
Robert Ramey
On Apr 12, 2017, at 9:34 AM, Robert Ramey via Boost
wrote:
After considering all this, I'm thinking we should just drop the zipfile distribution. The whole focus on "release" should be assigning the magic tag to master in github - "Release 1.63". If someone else wants to build and distribute the zipfiles, let them do it, but I don't think we should promote it as the preferred way for users to acquire boost. By adopting this point of view, and a couple of small changes (e.g. requiring html documentation inside each project), we would have a "modular boost" which is much easier to maintain and work with.
I strongly disagree with this sentiment. The purpose of a release is to provide some guarantees that the things being released are vetted, work smoothly together, etc. If the process devolves to just tagging, then the burden of verification is transferred to the users, and the entire exercise loses one of its most valuable elements: assurance of quality. There are already too many examples of projects with no formal release process. These provide no formal quality control, potentially lead to inconsistencies, and increase the burden on users. I feel that Boost would be taking a large step backward by adopting similar approaches.
Cheers,
Brook
On 12 April 2017 at 09:34, Robert Ramey via Boost
zipfile distribution. The whole focus on "release" should be assigning the magic tag to master in github - "Release 1.63".
This seems a great idea. Some time ago, on this list, I was getting dissed for claiming that an average windows developer was able to open a developer command prompt and launch a boost build from there. The common opinion seemed to be that that's not to be expected. But now we leap to the other end: everybody should install and learn git, notoriously obscure and alien to windows users, in order to build and use boost! Or do you mean I should just download the snapshot zip-file on Github?
... build and distribute the zipfiles ...
You make it sound very complicated.
By adopting this point of view, and a couple small changes (e.g. requiring html documentation inside each project) we would have a "modular boost" which is much easier to maintain and work with.
I think the distribution of (7-)zip files, with check-sums, is the way to fix (as in "this is it") a release. Unless all the interdependencies between libraries are removed, I don't see how boost can ever be modular in a meaningful way.
degski
--
"Your so-called religion acts merely like an opiate: stimulating, numbing, soothing pain out of weakness." - Novalis 1798
On 4/12/17 9:16 AM, degski via Boost wrote:
On 12 April 2017 at 09:34, Robert Ramey via Boost
wrote:
After considering all this, I'm thinking we should just drop the zipfile distribution. The whole focus on "release" should be assigning the magic tag to master in github - "Release 1.63".
This seems a great idea. Some time ago, on this list, I was getting dissed for claiming that an average windows developer was able to open a developer command prompt and launch a boost build from there. The common opinion seemed to be that that's not to be expected.
But now we leap to the other end, everybody should install and learn git, notoriously obscure and alien to windows users, in order to build and use boost! Or do you mean I should just download the snapshot zip-file on Github?
LOL - now you've reminded me that I use SourceTree for navigating git. This has made git itself, with its ridiculous command line syntax, invisible to me. I see that this distorted my vision here.
... build and distribute the zipfiles ...
You make it sound very complicated.
It's not that it's not doable; it just seems more awkward than using SourceTree to hook into the git repo.
By adopting this point of view, and a couple small changes (e.g. requiring html documentation inside each project) we would have a "modular boost" which is much easier to maintain and work with.
Unless all the interdependencies between libraries are removed, I don't see how boost can ever be modular in a meaningful way.
I don't see this. My expectation is that one will clone the whole boost library tree and, one by one, add on other non-boost libraries. So for me, the situation doesn't come up.
Robert Ramey
On 12 April 2017 at 10:46, Robert Ramey via Boost
On 04/12/2017 06:16 PM, degski via Boost wrote:
But now we leap to the other end, everybody should install and learn git, notoriously obscure and alien to windows users, in order to build and use boost! Or do you mean I should just download the snapshot zip-file on Github?
I don't know - if git is so "notoriously obscure and alien" then how about Boost.Build? :) FWIW, the problem with Windows is not git but simply its "unique" development environment.
... build and distribute the zipfiles ...
Yes. Boost is not only distributed by boost.org. All distributions, Linux and BSD and others, are also distributing Boost. It is of great help to the users of these distributions to make certain that all of them at least start with a consistent upstream version. - Adam
On 13 April 2017 at 13:41, Adam Majer via Boost
I don't know, if git is such "notoriously obscure and alien" then how about Boost.Build ? :)
Even the experts here seem to struggle from time to time.
degski
On Wed, Apr 12, 2017 at 5:34 PM, Robert Ramey via Boost
On 4/11/17 6:29 PM, Steven Watanabe via Boost wrote: Now that I understand how "modular boost" is distributed (for the first time ever), I can see what the problem is. My way of using boost is:
a) clone the boost super-project from github - takes a couple of minutes.
b) run b2 headers
c) I'm done if I'm not using compiled libraries - the common case today
d) I run b2 on any libraries that need building
e) I run b2 in the test directory of any library I'm suspicious of - which is basically all of them that I'm going to use.
This is much easier than downloading and unzipping, then having to probably rebuild anyway because something is always out of sync. The whole process is tedious, time consuming and error prone.
How so? Download, unzip and rename top dir to boost is about as simple as a) and b) if you have git installed and MUCH easier if you don't. -- Olaf
On Tue, Apr 11, 2017 at 7:55 PM, Robert Ramey via Boost
a) use #include "header.hpp" for files in the same directory as the current source file.
b) use #include "other directory/header.hpp" for files which are known to be in a specific place relative to the current file. This shows up in things like: #include "../include/header.hpp" for tests and examples.
I'm not sure about #include "..." in library code, especially not #include "../..." as it breaks when you move the file.
#include "../include/cpp.hpp" #include "../include/exception.hpp" #include "../include/safe_integer.hpp" #include "../include/safe_range.hpp"
This too breaks when you move the example / test .cpp.
The names are too generic as well. Is a normal user supposed to do
#include <exception.hpp>
? Your lib might not be the only one with an exception.hpp file.
On 4/12/17 12:09 AM, Olaf van der Spek via Boost wrote:
On Tue, Apr 11, 2017 at 7:55 PM, Robert Ramey via Boost
wrote: a) use #include "header.hpp" for files in the same directory as the current source file.
b) use #include "other directory/header.hpp" for files which are known to be in a specific place relative to the current file. This shows up in things like: #include "../include/header.hpp" for tests and examples.
I'm not sure about #include "..." in library code, especially not #include "../..." as it breaks when you move the file.
#include "../include/cpp.hpp" #include "../include/exception.hpp" #include "../include/safe_integer.hpp" #include "../include/safe_range.hpp"
This too breaks when you move the example / test .cpp. The names are too generic as well. Is a normal user supposed to do
#include <exception.hpp>
? Your lib might not be the only one with an exception.hpp file.
Hmmm, I'm not seeing this. Actually, my motivation is precisely to avoid breaking things when files are moved. When the library is moved, the tests still build and run without having to change switches, environment variables, etc.
Robert Ramey
On Wed, Apr 12, 2017 at 5:38 PM, Robert Ramey via Boost
Hmmm, I'm not seeing this. Actually, my motivation is precisely to avoid breaking things when files are moved. When the library is moved, the tests still build and run without having to change switches, environment variables, etc.
If you move the including file into a subdirectory or parent directory, it'll no longer be able to do the relative include, unless you move the entire library / tree.
-- Olaf
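Olaf's point can be demonstrated in a few lines; the layout (rel/include, rel/example) and the header contents are made up for illustration. A "../include/..." include is anchored to the including file's own location, so moving that file one level deeper breaks the build:

```shell
mkdir -p rel/include rel/example/sub
printf '#ifndef H_HPP\n#define H_HPP\ninline int n() { return 7; }\n#endif\n' > rel/include/h.hpp
cat > rel/example/demo.cpp <<'EOF'
#include "../include/h.hpp"
#include <cstdio>
int main() { std::printf("%d\n", n()); }
EOF
c++ -o rel/ok rel/example/demo.cpp && ./rel/ok    # builds and prints 7
cp rel/example/demo.cpp rel/example/sub/          # "move" it one level deeper
c++ -o rel/broken rel/example/sub/demo.cpp 2>/dev/null \
  || echo "relative include no longer resolves"
```

Moving the whole rel/ tree as a unit, by contrast, keeps the relative path valid, which is Robert's counter-argument.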
On 4/12/17 11:40 PM, Olaf van der Spek via Boost wrote:
On Wed, Apr 12, 2017 at 5:38 PM, Robert Ramey via Boost
wrote:
Hmmm, I'm not seeing this. Actually, my motivation is precisely to avoid breaking things when files are moved. When the library is moved, the tests still build and run without having to change switches, environment variables, etc.
If you move the including file into a subdirectory or parent directory, it'll no longer be able to do the relative include, unless you move the entire library / tree.
Right - which is what I recommend. I see the library as something which should be handled as a unit. I would like to see users:
a) move the library around as a package.
b) easily run tests and examples when they first download the library.
c) easily re-run tests and examples anytime their environment changes - new compiler version, etc.
d) easily remove the library from their system should it fail to address their needs.
e) I've been very disappointed that users of libraries don't run the test suites of the libraries they use. I don't think this is currently as easy as it should be. I would hope to see this change.
f) The whole exercise of getting all the libraries in the master branch to a synchronized "releasable" state is a huge amount of work which is of little value - at least to me.
In any case, you've all convinced me that my way of doing things isn't going to be attractive for most people, so we don't have to discuss it anymore. There's no problem for me to just continue to sync up with the most recently released master from time to time when it is convenient for me to do so. I would like to be able to browse library documentation directly on my own machine rather than going through boost.org as I do now.
Robert Ramey
participants (10)
- Adam Majer
- Andrey Semashev
- Brook Milligan
- degski
- Gavin Lambert
- Niall Douglas
- Olaf van der Spek
- Robert Ramey
- Steven Watanabe
- Vicente J. Botet Escriba