On Mon, 2017-06-26 at 17:34 -0400, Stefan Seefeld wrote:
On 26.06.2017 17:21, paul wrote:
On Mon, 2017-06-26 at 15:46 -0400, Stefan Seefeld wrote:
On 26.06.2017 13:36, paul wrote:
On Sun, 2017-06-25 at 13:35 -0400, Stefan Seefeld wrote:
It's precisely the lack of encapsulation that causes this overhead. I'd be happy to include additional files in my library if it wasn't for the implied maintenance cost.
Yes, I would like the maintenance cost to be just adding source files to a list somewhere. Of course, for header-only libraries it's even easier.
Although, there are libraries like Boost.Python or Boost.Context that have more complicated build infrastructure, but the nice thing about cmake is that there is a much larger community to help with the maintenance cost rather than relying on a few Boost.Build gurus.
That's definitely true. But ultimately, it comes down to the maintainer or the library's own developer community. Whenever users try to build Boost.Python and run into issues, they are submitting issues to *our* tracker, and I hate having to tell them to go ask for help in a different community because I'm unable to help myself.
Not exactly. The user may have a build problem, but the likelihood that they (or another user reading the issue) know enough cmake to contribute a fix is much higher than the likelihood of them knowing enough bjam to provide one.
Oh, you are arguing about bjam vs. cmake. I wasn't. (I would agree that a well-known tool is better than an obscure one). My point is about having to maintain (and answer questions about) two infrastructures rather than one.
I don't think it's realistic to assume that Boost as a whole will switch, at least not over a short period of time. If there is any lesson to be learned from past changes to Boost, it's that such a move will take a very long time, if it finishes at all.
Therefore I think it's much better to leave the decision to switch to individual project maintainers, as on that scale a change is much quicker to implement, if that's what the (project-) community decides to do. And for that to be possible individual projects need more autonomy, and the whole build infrastructure needs to be modular. Thus, what I propose is a little script that iterates over all boost components, invoking some well-defined entry-point on each, which *may* run `b2` locally, or it may invoke `cmake`.
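To make the proposed protocol concrete, here is a minimal sketch of such a driver. The entry-point name (`build.sh`) and its arguments (toolchain, staging directory) are hypothetical placeholders for whatever protocol the community agrees on; the fake component scripts stand in for per-project builds that may internally call `b2` or `cmake`:

```shell
#!/bin/sh
# Sketch only: set up two fake components, each exposing the agreed-upon
# entry point "build.sh" (a hypothetical name for this illustration).
mkdir -p demo/libs/system demo/libs/python
cat > demo/libs/system/build.sh <<'EOF'
#!/bin/sh
# A real component would invoke b2 or cmake here.
echo "system: built with $1 into $2"
EOF
cat > demo/libs/python/build.sh <<'EOF'
#!/bin/sh
echo "python: built with $1 into $2"
EOF
chmod +x demo/libs/*/build.sh

# The driver: iterate over all components, invoking the entry point
# with the toolchain and the directory where artefacts should land.
for component in demo/libs/*/; do
    "$component/build.sh" gcc "$PWD/stage"
done
```

Each component is free to implement `build.sh` however it likes, as long as the arguments and the location of the generated artefacts follow the agreed-upon protocol.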
That's not realistic at all. If I need to use Boost.System, I will write `find_package(boost_system)`, but if Boost.System builds with b2, that won't be available. There are other problems as well: Boost.Build can build all variants in one build tree, whereas cmake requires separate build trees. Mixing the two philosophies just makes things confusing for users.
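For illustration, this is roughly what a downstream consumer would write if Boost.System shipped a cmake package config; the package name `boost_system` and the imported target `Boost::system` are assumptions for the sake of the example, not an established convention:

```cmake
cmake_minimum_required(VERSION 3.5)
project(consumer)

# This fails at configure time if Boost.System was built with b2 only,
# because no package configuration file gets installed in that case.
find_package(boost_system REQUIRED)

add_executable(app main.cpp)
target_link_libraries(app Boost::system)
```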
Of course there needs to be an agreed-upon protocol (what arguments the script needs to accept, what the expected outcome of a build is, i.e. where the generated artefacts go, etc.). I strongly believe that such an approach would be far more realistic and efficient, and would generate less friction and administrative overhead, than a wholesale move from bjam to cmake.
We could build a meta-build system that generates builds for other build systems, but that's what cmake already is, so for cmake it would be a meta-meta-build system. Maybe it would be better to just create a b2 generator in cmake.