Announcement: Faber, a new build system based on bjam
Hello,

about a year ago I started to experiment with a new Python frontend for Boost.Build. After many iterations of prototyping, things are starting to fall into place and the project is stabilizing.

I'm thus happy to announce the release of Faber version 0.2:

code: https://github.com/stefanseefeld/faber
docs: https://stefanseefeld.github.io/faber

While Faber retains most of the features of Boost.Build, it has been redesigned from the ground up. bjam is still used as the scheduling engine, but everything else is written in Python. In particular, Jamfiles are replaced by fabscripts, which are essentially Python scripts. The project contains a range of examples demonstrating various simple use cases, from a basic "hello world" to demos involving autoconf-style config checks and unit testing.

I have added build logic to Boost.Python to use Faber on Travis-CI as well as AppVeyor, which also serves as a good litmus test for Faber's capabilities.

I'd be very interested in feedback as well as contributions. Perhaps it might one day become possible to integrate Faber with other efforts to add a Python frontend to Boost.Build.

Regards,
Stefan

--
...ich hab' noch einen Koffer in Berlin...
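To give a flavour of what replaces a Jamfile, here is the "hello world" fabscript from the Faber examples (it is quoted in full later in this thread). Note this is a Faber configuration fragment, not a standalone Python program: `action`, `rule`, `notfile` and `always` are names Faber provides to fabscripts.

```python
# The Faber "hello world" fabscript, as quoted later in this thread.
# `action`, `rule`, `notfile` and `always` are provided by Faber;
# $(<) expands to the targets of an action, $(>) to its sources.

# define some actions
compile = action('c++.compile', 'c++ -c -o $(<) $(>)')
link = action('c++.link', 'c++ -o $(<) $(>)')

# bind artefacts to sources using the above recipes
obj = rule(compile, 'hello.o', 'hello.cpp')
bin = rule(link, 'hello', obj)
test = rule(action('run_test', './$(>)'), 'test', bin, attrs=notfile|always)

default = bin
```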
Not overly happy about the name - I think v20.0 or so of Faber, now into its sixth century of existence, is just about good enough! But obviously I don't have sole claim to the name.

Long-time list lurker,
Jools (Faber)

On 10/11/2017 14:40, Stefan Seefeld via Boost wrote:
Hello,
about a year ago I started to experiment with a new Python frontend for Boost.Build. After many iterations of prototyping, things are starting to fall into place and the project is stabilizing.
I'm thus happy to announce the release of Faber version 0.2:
code: https://github.com/stefanseefeld/faber
docs: https://stefanseefeld.github.io/faber
While Faber retains most of the features of Boost.Build, it has been redesigned from the ground up. bjam is still used as the scheduling engine, but everything else is written in Python. In particular, Jamfiles are replaced by fabscripts, which are essentially Python scripts. The project contains a range of examples demonstrating various simple use cases, from a basic "hello world" to demos involving autoconf-style config checks and unit testing.
I have added build logic to Boost.Python to use Faber on Travis-CI as well as AppVeyor, which also is a good litmus test for Faber's capabilities.
I'd be very interested in feedback as well as contributions. Perhaps it might become possible one day to integrate Faber with other efforts to add a Python frontend to Boost.Build.
Regards,
Stefan
The hard questions:

1. Can it cross-compile to iOS, Android, OSX, Linux and Windows out of the box? (i.e. without me having to specify any magic command-line options, environment variables or write any nasty scripts in some new syntax)
2. Can it identify, download and build dependencies automatically, using the correct toolset?
3. Will it create install scripts?
4. Will it package executables and libraries for later consumption?
5. Will it build and deploy directly into Docker?

These are the only questions I have regarding a build engine.

At the moment I use CMake with Hunter and Polly. Although it has a hideous syntax, this combination at least fulfils the basic requirements of a C++ cross-compiling build system in the modern age. Currently nothing else does.
On 10 Nov 2017, at 15:40, Stefan Seefeld via Boost wrote:

Hello,
about a year ago I started to experiment with a new Python frontend for Boost.Build. After many iterations of prototyping, things are starting to fall into place and the project is stabilizing.
I'm thus happy to announce the release of Faber version 0.2:
code: https://github.com/stefanseefeld/faber
docs: https://stefanseefeld.github.io/faber
While Faber retains most of the features of Boost.Build, it has been redesigned from the ground up. bjam is still used as the scheduling engine, but everything else is written in Python. In particular, Jamfiles are replaced by fabscripts, which are essentially Python scripts. The project contains a range of examples demonstrating various simple use cases, from a basic "hello world" to demos involving autoconf-style config checks and unit testing.
I have added build logic to Boost.Python to use Faber on Travis-CI as well as AppVeyor, which also is a good litmus test for Faber's capabilities.
I'd be very interested in feedback as well as contributions. Perhaps it might become possible one day to integrate Faber with other efforts to add a Python frontend to Boost.Build.
Regards,
Stefan
--
...ich hab' noch einen Koffer in Berlin...
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On 21.11.2017 15:02, Richard Hodges via Boost wrote:
The hard questions:
1. Can it cross-compile to iOS, Android, OSX, Linux and Windows out of the box? (i.e. without me having to specify any magic command-line options, environment variables or write any nasty scripts in some new syntax)
Yes. (In principle, that is: So far I have focused on the design and infrastructure. I know that it works, having (cross-)compiled with gcc, clang, msvc, on Linux and Windows. There are still a lot of holes that need to be filled by people who have access to the respective platforms and tools.)
2. Can it identify, download and build dependencies automatically, using the correct toolset?
I'm not a fan of automatic downloads, though I don't see any reason why such functionality couldn't be layered on top. All that is needed is a convention for storing and handling the associated meta-information.
3. Will it create install scripts?
Likewise: adding packaging logic is (mostly) just a matter of adding package meta-information as well as some tooling. The design fully supports that. (It might be a good idea to add some sample package generation logic to the next release, based on which other package formats could be added later.)
4. Will it package executables and libraries for later consumption?
Same answer.
5. will it build and deploy directly into docker?
Can you describe the workflow you have in mind? I'd expect the above package building to be the missing link. Everything beyond that seems out of scope for a build tool.
These are the only questions I have regarding a build engine.
At the moment I use CMake with Hunter and Polly. Although it has a hideous syntax, this combination at least fulfils the basic requirements of a C++ cross-compiling build system in the modern age.
As I mentioned earlier, Faber should be feature-compatible with Boost.Build. It offers a different frontend language (Python) and simplified logic (no "meta targets" etc.). The main focus is on being easier to use and to extend than Boost.Build. Everything from configuration to testing and packaging is fully in scope. But given that it can be used as a library, I'm sure people can come up with very different use cases and simply embed it into other applications.

In case it isn't obvious: I very much welcome collaboration, so if you want to contribute (be it more tools or even entirely new functionality), I'd be happy to talk.

Stefan

--
...ich hab' noch einen Koffer in Berlin...
Stefan wrote:
In case it isn't obvious: I very much welcome collaboration, so if you want to contribute (be it more tools or even entirely new functionality), I'd be happy to talk.
Nothing would please me more than to be able to dump the horrific syntax of CMake. I have often thought that Python would be the obvious language for a replacement. There was of course a similar tool called SCons once, which is Python also. It seems to have fallen by the wayside.
What has prevented me from making a start on a conversion is having recently used Node and npm. It seems that the rest of the world has converged on JavaScript as the development script tool of choice.

I am strongly of the view that C++ needs a standard tool for build, IDE project generation, toolset selection, dependency management, testing and deployment. I find it deeply disturbing that one cannot simply write a project that pulls in 3 or 4 libraries and then cross-compile it for any target with one command.
It really should be as simple as:

    build install --target=iOS10 --build_dir=[auto] --install_dir=installs/iOS10
    # an auto build dir would be named using a hash of the target toolset, etc.

    build install --target=[this-host] --build_dir=[auto] --install_dir=/usr/local

    build install --target=fedora25-docker --build_dir=[auto] --install_dir=installs/dockers/fedora25

-j
On 21.11.2017 15:02, Richard Hodges via Boost wrote:

The hard questions:

1. Can it cross-compile to iOS, Android, OSX, Linux and Windows out of the box? (i.e. without me having to specify any magic command-line options, environment variables or write any nasty scripts in some new syntax)
Yes. (In principle, that is: So far I have focused on the design and infrastructure. I know that it works, having (cross-)compiled with gcc, clang, msvc, on Linux and Windows. There are still a lot of holes that need to be filled by people who have access to the respective platforms and tools.)
2. Can it identify, download and build dependencies automatically, using the correct toolset?
I'm not a fan of automatic downloads, though I don't see any reason why such functionality couldn't be layered on top. All that is needed is a convention for storing and handling the associated meta-information.
3. Will it create install scripts?
Likewise: adding packaging logic is (mostly) just a matter of adding package meta-information as well as some tooling. The design fully supports that. (It might be a good idea to add some sample package generation logic to the next release, based on which other package formats could be added later.)
4. Will it package executables and libraries for later consumption?
Same answer.
5. will it build and deploy directly into docker?
Can you describe the workflow you have in mind? I'd expect the above package building to be the missing link. Everything beyond that seems out of scope for a build tool.
These are the only questions I have regarding a build engine.
At the moment I use CMake with Hunter and Polly. Although it has a hideous syntax, this combination at least fulfils the basic requirements of a C++ cross-compiling build system in the modern age.
As I mentioned earlier, Faber should be feature-compatible with Boost.Build. It offers a different frontend language (Python) and simplified logic (no "meta targets" etc.). The main focus is on being easier to use and to extend than Boost.Build. Everything from configuration to testing and packaging is fully in scope. But given that it can be used as a library, I'm sure people can come up with very different use cases and simply embed it into other applications.
In case it isn't obvious: I very much welcome collaboration, so if you want to contribute (be it more tools or even entirely new functionality), I'd be happy to talk.
Stefan
--
...ich hab' noch einen Koffer in Berlin...
Hi,

this is an answer to Richard's mail, but I am mostly addressing Stefan.
On 22. Nov 2017, at 09:00, Richard Hodges via Boost wrote:

Stefan wrote:
In case it isn't obvious: I very much welcome collaboration, so if you want to contribute (be it more tools or even entirely new functionality), I'd be happy to talk.
Nothing would please me more than to be able to dump the horrific syntax of cmake. I have often thought that python would be the obvious language for a replacement. There was of course a similar tool called SCONS once, which is python also. It seems to have fallen by the wayside.
I am not a big fan of CMake and I do like Python very much, but the success of CMake shows me once again that people have surprising needs; they are usually not concerned about how powerful the tool is for the technical expert. Mostly it is about making simple things simple and saving time for the average guy.

Here is my list of why I believe CMake is so popular (ordered by perceived priority):

1) it is a high-level language for describing build trees; it allows me to ignore much of the technical low-level stuff, like how to call the compiler with the right options
2) the commands have nice long names which make CMake build scripts almost self-documenting
3) it ships with a maintained collection of scripts to configure and include external libraries
4) there is a company behind it which continuously adapts CMake to its user base and advertises its use
5) the documentation is ok
6) it is comparably fast (SCons was rejected because it is slow, AFAIK), especially when you use the Ninja backend
7) it has an ncurses-based configuration interface
8) it produces pretty output

Since CMake already has an impressive following, this adds the most important item at the top of the list:

0) people already know the tool and don't have to learn it

Boost.Build does not offer points 0-4 and 7, and neither does Faber, it seems. The Hello World example in the Faber docs reminds me a lot of Makefiles, because of the $(<) and $(>) syntax:

    # define some actions
    compile = action('c++.compile', 'c++ -c -o $(<) $(>)')
    link = action('c++.link', 'c++ -o $(<) $(>)')

    # bind artefacts to sources using the above recipes
    obj = rule(compile, 'hello.o', 'hello.cpp')
    bin = rule(link, 'hello', obj)
    test = rule(action('run_test', './$(>)'), 'test', bin, attrs=notfile|always)

    default = bin

This looks rather mathematical and abstract. I appreciate math, but many people don't. The syntax is very terse, which I think is not good.
For build scripts, I think that CMake has a point with its long and descriptive names. I touch a build script only very rarely. If I touch it very rarely, I want it to be very easy to read, because I will have forgotten all the little details of how it works after a few months. I learned CMake basically by looking at build scripts from other people, not by studying the documentation from the ground up. This is incredibly useful; nobody likes to study manuals.

We had this discussion about the pros and cons of CMake a while ago, and it seems that nobody loves it, but it still seems like a useful compromise for many people. I don't see how Faber can compete with this, and I would prefer if we moved Boost to CMake.

Best regards,
Hans
On Wed, Nov 22, 2017 at 5:12 AM, Hans Dembinski via Boost <boost@lists.boost.org> wrote:
Hi,
this is an answer to Richard's mail, but I am mostly addressing Stefan.
On 22. Nov 2017, at 09:00, Richard Hodges via Boost <boost@lists.boost.org> wrote:
Stefan wrote:
In case it isn't obvious: I very much welcome collaboration, so if you want to contribute (be it more tools or even entirely new functionality), I'd be happy to talk.
Nothing would please me more than to be able to dump the horrific syntax of cmake. I have often thought that python would be the obvious language for a replacement. There was of course a similar tool called SCONS once, which is python also. It seems to have fallen by the wayside.
I am not a big fan of CMake and I do like Python very much, but the success of CMake shows me once again that people have surprising needs; they are usually not concerned about how powerful the tool is for the technical expert. Mostly it is about making simple things simple and saving time for the average guy.
Here is my list of why I believe CMake is so popular (ordered by perceived priority):

1) it is a high-level language for describing build trees; it allows me to ignore much of the technical low-level stuff, like how to call the compiler with the right options
2) the commands have nice long names which make CMake build scripts almost self-documenting
3) it ships with a maintained collection of scripts to configure and include external libraries
4) there is a company behind it which continuously adapts CMake to its user base and advertises its use
5) the documentation is ok
6) it is comparably fast (SCons was rejected because it is slow, AFAIK), especially when you use the Ninja backend
7) it has an ncurses-based configuration interface
8) it produces pretty output

Since CMake already has an impressive following, this adds the most important item at the top of the list:

0) people already know the tool and don't have to learn it
Boost.Build does not offer points 0-4 and 7, and neither does Faber, it seems. The Hello World example in the Faber docs reminds me a lot of Makefiles, because of the $(<) and $(>) syntax:

    # define some actions
    compile = action('c++.compile', 'c++ -c -o $(<) $(>)')
    link = action('c++.link', 'c++ -o $(<) $(>)')

    # bind artefacts to sources using the above recipes
    obj = rule(compile, 'hello.o', 'hello.cpp')
    bin = rule(link, 'hello', obj)
    test = rule(action('run_test', './$(>)'), 'test', bin, attrs=notfile|always)

    default = bin

This looks rather mathematical and abstract. I appreciate math, but many people don't. The syntax is very terse, which I think is not good. For build scripts, I think that CMake has a point with its long and descriptive names. I touch a build script only very rarely. If I touch it very rarely, I want it to be very easy to read, because I will have forgotten all the little details of how it works after a few months. I learned CMake basically by looking at build scripts from other people, not by studying the documentation from the ground up. This is incredibly useful; nobody likes to study manuals.
We had this discussion about the pros and cons of CMake a while ago, and it seems that nobody loves it, but it still seems like a useful compromise for many people. I don't see how Faber can compete with this, and I would prefer if we move Boost to CMake.
Best regards, Hans
I thought the decision was already made to move to CMake, based on an announcement that was made a couple of months ago?

It's true that CMake is sometimes lacking. For example, it has no built-in support on Windows to select a static versus dynamic runtime (one can set the compile flags at generation time, however), nor does it do a good job of packaging up PDB files when building install targets for build types that include debug info. That said, I have used it for a long time now (over 10 years) and find it to be the most complete and easiest to understand/use cross-platform build system. Regarding package management, the conan C/C++ package manager team has a project for CMake integration.

Hans summarized the good points nicely above, but missed a couple of key points I find useful: CMake produces parallel-build-capable build scripts, and it can generate both Eclipse and Visual Studio projects. Many open-source projects already use CMake (which is not limited to just building C/C++, by the way), and given the active and robust ecosystem that exists around it, CMake will be my tool of choice for some time to come.

- Jim
On 22.11.2017 05:12, Hans Dembinski via Boost wrote:
Hi,
this is an answer to Richard's mail, but I am mostly addressing Stefan.
Let me reply to both your mails here...
On 22. Nov 2017, at 09:00, Richard Hodges via Boost wrote:

Stefan wrote:

In case it isn't obvious: I very much welcome collaboration, so if you want to contribute (be it more tools or even entirely new functionality), I'd be happy to talk.

Nothing would please me more than to be able to dump the horrific syntax of CMake. I have often thought that Python would be the obvious language for a replacement. There was of course a similar tool called SCons once, which is Python also. It seems to have fallen by the wayside.
Not at all. SCons is very much alive (and I have contributed to it in the past, even mentored a GSoC student for it). There are fundamental differences, though, and (as you mention) I think its interface is quite unpythonic, unfortunately, despite the superficial fact that it uses Python for its SConscripts.
I am not a big fan of CMake and I do like Python very much, but the success of CMake shows me once again that people have surprising needs; they are usually not concerned about how powerful the tool is for the technical expert. Mostly it is about making simple things simple and saving time for the average guy.
Yes. But despite its popularity, I disagree with CMake's approach on a very fundamental level (it being a build system generator, rather than a build system), as it makes things oh so much more complex. Everything works well until it doesn't, at which point all hell breaks loose. That's very typical for macro-based languages (think LaTeX, m4). And it had to reinvent its own language, in a rather ad-hoc manner (i.e. it started with a simple declarative syntax, but then had to expand on that to cover other use cases). "Now you have two problems" comes to mind (if I may paraphrase).
Here is my list of why I believe CMake is so popular (ordered by perceived priority):

1) it is a high-level language for describing build trees; it allows me to ignore much of the technical low-level stuff, like how to call the compiler with the right options
2) the commands have nice long names which make CMake build scripts almost self-documenting
3) it ships with a maintained collection of scripts to configure and include external libraries
4) there is a company behind it which continuously adapts CMake to its user base and advertises its use
5) the documentation is ok
6) it is comparably fast (SCons was rejected because it is slow, AFAIK), especially when you use the Ninja backend
7) it has an ncurses-based configuration interface
8) it produces pretty output

Since CMake already has an impressive following, this adds the most important item at the top of the list:

0) people already know the tool and don't have to learn it
Boost.Build does not offer points 0-4 and 7
I'd contest that. Boost.Build does hide most details about tool internals (how to invoke the compiler, say), until you need to plug in your own tools.
and neither does Faber, it seems. The Hello World example in the Faber docs reminds me a lot of Makefiles, because of the $(<) and $(>) syntax.
Yes, intentionally so. (Well, also because that's what bjam uses, and I didn't see any reason to change that.) A scheduling tool that invokes external tools to update artefacts needs some kind of DSL, and I think constructs such as $(>) seem rather intuitive. The fact that `make` (as the de-facto standard in UNIX land) uses the same language can only help.
    # define some actions
    compile = action('c++.compile', 'c++ -c -o $(<) $(>)')
    link = action('c++.link', 'c++ -o $(<) $(>)')

    # bind artefacts to sources using the above recipes
    obj = rule(compile, 'hello.o', 'hello.cpp')
    bin = rule(link, 'hello', obj)
    test = rule(action('run_test', './$(>)'), 'test', bin, attrs=notfile|always)

    default = bin

This looks rather mathematical and abstract. I appreciate math, but many people don't. The syntax is very terse, which I think is not good. For build scripts, I think that CMake has a point with its long and descriptive names. I touch a build script only very rarely. If I touch it very rarely, I want it to be very easy to read, because I will have forgotten all the little details of how it works after a few months. I learned CMake basically by looking at build scripts from other people, not by studying the documentation from the ground up. This is incredibly useful; nobody likes to study manuals.
Thanks for your feedback. I may rethink how I document Faber. As a developer, I typically much prefer a bottom-up approach, starting with the lowest details, then adding layer upon layer of abstraction. So what you are looking for is likely present in a higher layer. Have a look at one of the other examples:

    from faber.artefacts.binary import binary

    greet = module('greet')
    hello = binary('hello', ['hello.cpp', greet.greet])
    rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)

    default = hello

(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...), which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build.
We had this discussion about the pros and cons of CMake a while ago, and it seems that nobody loves it, but it still seems like a useful compromise for many people. I don't see how Faber can compete with this, and I would prefer if we move Boost to CMake.
As much as I'd love for people to adopt Faber, I'm not proposing that Boost (as a whole) move to any tool in particular. I have expressed this view in the past, and I don't want to distract this thread (about Faber) with discussions about Boost's strategy moving forward. The reason I propose Faber here is as a fork (or, in a way, a new frontend) of Boost.Build, so people who have been struggling with Boost.Build in the past may want to look at this as a possible alternative.

Stefan

--
...ich hab' noch einen Koffer in Berlin...
On Wed, Nov 22, 2017 at 2:43 PM, Stefan Seefeld via Boost <boost@lists.boost.org> wrote:
On 22.11.2017 05:12, Hans Dembinski via Boost wrote:
and neither does Faber, it seems. The Hello World example in the Faber docs reminds me a lot of Makefiles, because of the $(<) and $(>) syntax.
Yes, intentionally so. (Well, also because that's what bjam uses, and I didn't see any reason to change that.) A scheduling tool that invokes external tools to update artefacts needs some kind of DSL, and I think constructs such as $(>) seem rather intuitive. The fact that `make` (as the de-facto standard in UNIX land) uses the same language can only help.
    # define some actions
    compile = action('c++.compile', 'c++ -c -o $(<) $(>)')
Like Hans, I've also never been fond of $(<) or $(>). You invoke Make heritage here for the terseness, while previously you justified the Boost.Build rewrite into Faber on clarity grounds. You can't have it both ways, Stefan ;)
From a Shell perspective, $(<) evokes input to me, and $(>) output, the reverse of what they likely mean above, given the -o. Playing devil's advocate I guess.
I did have a look at the doc when you announced it, and was quickly turned off by the syntax, to be honest. My $0.02. --DD
On 22.11.2017 09:33, Dominique Devienne via Boost wrote:
On Wed, Nov 22, 2017 at 2:43 PM, Stefan Seefeld via Boost <boost@lists.boost.org> wrote:
On 22.11.2017 05:12, Hans Dembinski via Boost wrote:
and neither does Faber, it seems. The Hello World example in the Faber docs reminds me a lot of Makefiles, because of the $(<) and $(>) syntax.
Yes, intentionally so. (Well, also because that's what bjam uses, and I didn't see any reason to change that.) A scheduling tool that invokes external tools to update artefacts needs some kind of DSL, and I think constructs such as $(>) seem rather intuitive. The fact that `make` (as the de-facto standard in UNIX land) uses the same language can only help.
    # define some actions
    compile = action('c++.compile', 'c++ -c -o $(<) $(>)')

Like Hans, I've also never been fond of $(<) or $(>). You invoke Make heritage here for the terseness, while previously you justified the Boost.Build rewrite into Faber on clarity grounds. You can't have it both ways, Stefan ;)
From a Shell perspective, $(<) evokes input to me, and $(>) output, the reverse of what they likely mean above, given the -o. Playing devil's advocate I guess.
From the C++ community I had expected a different reaction. Or perhaps the above should be spelled '<<' and '>>'? :-)

But more seriously, I had hoped the discussion would focus more on general design than on spelling. I expect the majority of users will never see (nor care about) how actions are defined, as they will merely use pre-defined built-in actions such as `c++.compile`, `fileutils.copy`, or `python.run`.
Stefan

--
...ich hab' noch einen Koffer in Berlin...
On 22. Nov 2017, at 15:42, Stefan Seefeld via Boost wrote:

    # define some actions
    compile = action('c++.compile', 'c++ -c -o $(<) $(>)')

Like Hans, I've also never been fond of $(<) or $(>). You invoke Make heritage here for the terseness, while previously you justified the Boost.Build rewrite into Faber on clarity grounds. You can't have it both ways, Stefan ;)
From a Shell perspective, $(<) evokes input to me, and $(>) output, the reverse of what they likely mean above, given the -o. Playing devil's advocate I guess.
From the C++ community I had expected a different reaction. Or perhaps the above should be spelled '<<' and '>>'? :-)
You could maybe use $(in) and $(out). There is no ambiguity then and there are not many more characters to type either.
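The suggestion is easy to picture with a toy model of placeholder expansion. This is only a sketch for illustration, not Faber's actual implementation: it expands both the bjam-style $(<)/$(>) (targets/sources) and the proposed $(out)/$(in) spellings to the same command line.

```python
# Toy model of action-command placeholder expansion. NOT taken from
# Faber's source; it only illustrates the two spellings discussed:
# bjam-style $(<) (targets) / $(>) (sources), and $(out) / $(in).

def expand(command, targets, sources):
    """Expand placeholders in an action's command line."""
    substitutions = {
        '$(<)': targets, '$(out)': targets,   # targets / outputs
        '$(>)': sources, '$(in)': sources,    # sources / inputs
    }
    for placeholder, names in substitutions.items():
        command = command.replace(placeholder, ' '.join(names))
    return command

# Both spellings produce the same command line:
print(expand('c++ -c -o $(<) $(>)', ['hello.o'], ['hello.cpp']))
print(expand('c++ -c -o $(out) $(in)', ['hello.o'], ['hello.cpp']))
# both print: c++ -c -o hello.o hello.cpp
```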
But more seriously, I had hoped the discussion would focus more on general design than on spelling.
I value code readability (which includes avoiding ambiguity) a lot, as you can see in the other thread concerning the histogram library. It is not just bike-shedding if there is an alternative that is less ambiguous and easier to read.
I expect the majority of users will never see (nor care about) how actions are defined, as they will merely use pre-defined built-in actions such as `c++.compile`, `fileutils.copy`, or `python.run`.
I don't know about that. And also, why not make life more pleasant for the power user as well, who needs custom actions? Ideally, software appeals to both the casual user and the power user.
On 22. Nov 2017, at 14:43, Stefan Seefeld via Boost wrote:

On 22. Nov 2017, at 09:00, Richard Hodges via Boost wrote:

Stefan wrote:

In case it isn't obvious: I very much welcome collaboration, so if you want to contribute (be it more tools or even entirely new functionality), I'd be happy to talk.

Nothing would please me more than to be able to dump the horrific syntax of CMake. I have often thought that Python would be the obvious language for a replacement. There was of course a similar tool called SCons once, which is Python also. It seems to have fallen by the wayside.
Not at all. SCons is very much alive (and I have contributed to it in the past, even mentored a GSoC student for it). There are fundamental differences, though, and (as you mention) I think its interface is quite unpythonic, unfortunately, despite the superficial fact that it uses Python for its SConscripts.
I was referring to the fact that big open-source projects moved away from SCons to CMake, notably KDE. This article nicely explains some issues with SCons: https://lwn.net/Articles/188693/

I think that using an established scripting language to write build scripts is a great idea. I like that about Faber and about SCons, and I think it is bad that CMake effectively established yet another scripting language. The annoying thing about SConscripts is that they look like Python, but the behaviour is different, since the order of statements is not maintained. It looks like Python, but it behaves like a Makefile, where only the dependencies of the statements matter and not the order in the code. This is inconsistent.
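The order-independence described here is inherent to dependency-driven tools: the schedule is derived from the dependency graph, not from where statements appear in the script. A generic sketch of the idea (not SCons or Faber internals):

```python
# Why statement order doesn't matter in a dependency-driven build script:
# the build order comes from a topological sort of the dependency graph,
# not from the order in which rules were declared.

def schedule(rules):
    """Return a build order for {target: [dependencies]} using a
    depth-first post-order traversal (topological sort)."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in rules.get(node, []):  # leaves (plain sources) have no rule
            visit(dep)
        order.append(node)

    for target in rules:
        visit(target)
    return order

# Declared "backwards": the binary before the object file it needs.
rules = {
    'hello':   ['hello.o'],
    'hello.o': ['hello.cpp'],
}
print(schedule(rules))  # ['hello.cpp', 'hello.o', 'hello']
```

Declaring the rules in the opposite order yields the same schedule, which is exactly the Makefile-like behaviour (and the source of the "looks like Python, behaves like a Makefile" confusion).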
I am not a big fan of CMake and I do like Python very much, but the success of CMake shows me once again that people have surprising needs; they are usually not concerned about how powerful the tool is for the technical expert. Mostly it is about making simple things simple and saving time for the average guy.

Yes. But despite its popularity, I disagree with CMake's approach on a very fundamental level (it being a build system generator, rather than a build system), as it makes things oh so much more complex. Everything works well until it doesn't, at which point all hell breaks loose. That's very typical for macro-based languages (think LaTeX, m4). And it had to reinvent its own language, in a rather ad-hoc manner (i.e. it started with a simple declarative syntax, but then had to expand on that to cover other use cases). "Now you have two problems" comes to mind (if I may paraphrase).
I agree with you on all points, like most people would. Like I said, I am not in love with CMake either.
Here is my list why I believe CMake is so popular (ordered by perceived priority): 1) it is a high-level language for describing build trees, it allows me to ignore much of the technical low-level stuff, like how to call the compiler with the right options 2) the commands have nice long names which makes CMake build scripts almost self-documenting 3) it ships with a maintained collection of scripts to configure and include external libraries 4) there is a company behind it which continuously adapts cmake to its user base and advertises its use 5) the documentation is ok 6) it is comparably fast (Scons was rejected because it is slow, AFAIK), especially when you use the Ninja backend 7) it has a ncurses-based configuration interface 8) it produces pretty output
Since CMake already has an impressive following, this adds the most important item at the top of the list:

0) people already know the tool and don't have to learn it
Boost.Build does not offer points 0-4 and 7.
I'd contest that. Boost.Build does hide most details about tool internals (how to invoke the compiler, say), until you need to plug in your own tools.
Ok, you are right.
# define some actions
compile = action('c++.compile', 'c++ -c -o $(<) $(>)')
link = action('c++.link', 'c++ -o $(<) $(>)')

# bind artefacts to sources using the above recipes
obj = rule(compile, 'hello.o', 'hello.cpp')
bin = rule(link, 'hello', obj)
test = rule(action('run_test', './$(>)'), 'test', bin, attrs=notfile|always)
default = bin

This looks rather mathematical and abstract. I appreciate math, but many people don't. The syntax is very terse, which I think is not good. For build scripts, I think that CMake has a point with its long and descriptive names. I touch a build script only very rarely. If I touch it very rarely, I want it to be very easy to read, because I will have forgotten all the little details of how it works after a few months. I learned CMake basically by looking at build scripts from other people, not by studying the documentation from the ground up. This is incredibly useful; nobody likes to study manuals.
Thanks for your feedback. I may rethink how I document Faber. As a developer, I typically much prefer a bottom-up approach, starting with the lowest details, then adding layer upon layer of abstraction. So what you are looking for is likely present in a higher layer. Have a look at one of the other examples:
Yes, I think providing high-level examples first is better to draw people in.
from faber.artefacts.binary import binary
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)
default = hello
(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...), which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build.
This example remains cryptic.

from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing".

Then, I have to type many redundant things here. Note the many occurrences of greet in these two lines:

greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])

It seems like hello is a binary which depends on 'hello.cpp' and the module greet. Why the latter?

The rule to make a test is very cryptic. The action takes several positional arguments, and I can only guess what each positional argument does. I am also critical about this in bjam. With a syntax that uses a lot of positional arguments, you need to read the documentation to figure out what is going on. If you are lucky, the author provided comments for each positional argument, but then one might as well use keywords, which are self-documenting. This is what CMake does well, IMHO.

Best regards, Hans
On 29.11.2017 07:59, Hans Dembinski wrote:
from faber.artefacts.binary import binary
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)
default = hello
(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...), which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build.
This example remains cryptic.
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing".
Right, it's what "faber" generates (using the same stem even).
Then, I have to type many redundant things here. Note, the many occurrences of greet in these two lines
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
It seems like hello is a binary which depends on 'hello.cpp' and the module greet. Why the latter?
"hello" is a binary built from a "hello.cpp" source file and a "greet" library provided by another ("greet") module (thus, using Pythonic syntax, we refer to the latter as "greet.greet"). If the library had been built by the same module, the above would simply be

greet = library('greet', 'greet.cpp')
hello = binary('hello', ['hello.cpp', greet])

as is in fact done in this example: https://github.com/stefanseefeld/faber/blob/develop/examples/implicit_rules/...
The rule to make a test is very cryptic. The action takes several positional arguments, and I can only guess what each positional argument does.
Rules take at least two (positional) arguments: an action and a name for the target artefact. All other arguments have default values, and thus *may* be given as keyword arguments or as positional arguments, depending on your preference. (But given that a "source" argument is still very common, I chose not to spell it out as "source=hello", for compactness.) As a fabscript author you are of course free to name all your rule arguments, if that helps readability. I'm not inventing anything here, but rather taking the most natural approach possible following Python language rules and idioms.
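The calling convention described here can be sketched in plain Python. The parameter names below are illustrative, not Faber's exact signature: two mandatory positional parameters, everything else optional and nameable at the call site.

```python
# Sketch (illustrative, not Faber's real API): a rule needs an action and
# a target name; everything else has a default and may be passed either
# positionally or by keyword.
def rule(action, target, source=None, attrs=0):
    return (action, target, source, attrs)

# Both spellings are equivalent; the caller chooses the verbosity.
terse = rule('run_test', 'test', 'hello')
explicit = rule('run_test', 'test', source='hello')
```

This is ordinary Python argument passing: the API designer makes only the truly mandatory parameters positional, and the caller decides how self-documenting each call should be.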
I am also critical about this in bjam. By using a syntax that uses a lot of positional arguments, you need to read the documentation to figure out what is going on.
Again, Python allows you to name all arguments. This is up to the caller, not the API designer. As far as the API is concerned, rules have two mandatory arguments, so it wouldn't make sense to make them keyword arguments. But if you prefer some help in drafting your fabscript logic, there are good tools to help interactively editing Python code, including code completion etc. That's the beauty of using Python: we can tap into a fabulous ecosystem of tools and modules, including ipython, jupyter, spyder, etc., etc. In any case, nothing is cast in stone just yet. One reason I decided to publish Faber now was to gather feedback and interest in collaboration, and I expect lots of things to change as we collectively improve upon what's already there.
If you are lucky, the author provided comments for each positional argument, but then one might as well use keywords which are self-documenting. This is what CMake does well, IMHO.
Stefan -- ...ich hab' noch einen Koffer in Berlin...
On 29. Nov 2017, at 14:31, Stefan Seefeld
wrote: On 29.11.2017 07:59, Hans Dembinski wrote:
from faber.artefacts.binary import binary
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)
default = hello
(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...), which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build.
This example remains cryptic.
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing".
Right, it's what "faber" generates (using the same stem even).
:) Fair enough, but it is still not very descriptive. Why use an uncommon Latin word if you could use a common word from day-to-day language? The purpose of language is to transmit information, so it is usually a good idea to use common words that leave no room for ambiguity. Ironically, the other meaning of "artefact" is "any error in the *perception or representation of any information*, introduced by the involved equipment or technique(s)" [Wikipedia].
Then, I have to type many redundant things here. Note, the many occurrences of greet in these two lines
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
It seems like hello is a binary which depends on 'hello.cpp' and the module greet. Why the latter?
"hello" is a binary built from a "hello.cpp" source file and a "greet" library provided by another ("greet") module (thus, using Pythonic syntax, we refer to the latter as "greet.greet"). If the library had been built by the same module, the above would simply be

greet = library('greet', 'greet.cpp')
hello = binary('hello', ['hello.cpp', greet])
as is in fact done in this example: https://github.com/stefanseefeld/faber/blob/develop/examples/implicit_rules/...
I think source code is allowed to be verbose, but it should not be redundant, especially if said redundancy could lead to mistakes. I suppose you run the fabscript through a special interpreter, not just the standard Python interpreter. If so, then you can use this shorthand syntax instead:

greet = library('greet.cpp')

That way, one cannot make a mistake like this:

greet = library('great', 'greet.cpp')

To make the syntax very consistent (the Zen of Python says: "There should be one - preferably only one - obvious way to do it."), you could define all build items like library and binary in this way:

def binary(*inputs, attribute1=default1, attribute2=default2, …):
    ...

All positional arguments would always be inputs of any kind, like a source file or a library. If you always use positional arguments consistently like this, then my complaint about ambiguity is gone, because there is a clear rule which is easy to remember.

Attributes would be passed consistently via keywords. They would have reasonable defaults that Faber picks for me. For example, if I want another file extension for a library than the default for the platform. For libraries, I could specify whether to build a static or shared one. Or if I really don't want to name the library "greet", I could pass the keyword name="great".

This declaration enforces the use of keywords for attributes; positional arguments are not allowed for attributes, which is good for clarity.
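Hans's proposal can be sketched in plain Python. All names, defaults, and the name-inference behaviour below are hypothetical, not Faber's actual API: positional arguments are always inputs, attributes are keyword-only with sensible defaults.

```python
import os

# Hypothetical sketch of the proposed signature (not Faber's real API):
# *inputs collects sources/libraries; attributes are keyword-only.
def library(*inputs, name=None, shared=True):
    if name is None:
        # derive the artefact name from the first source: 'greet.cpp' -> 'greet'
        name = os.path.splitext(os.path.basename(inputs[0]))[0]
    return {'name': name, 'inputs': list(inputs), 'shared': shared}

greet = library('greet.cpp')                # name inferred: 'greet'
great = library('greet.cpp', name='great')  # explicit override via keyword
```

Because the attributes appear after `*inputs`, Python itself rejects any attempt to pass them positionally, which enforces the "inputs are positional, attributes are keywords" rule at the language level.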
The rule to make a test is very cryptic. The action takes several positional arguments, and I can only guess what each positional argument does.
Rules take at least two (positional) arguments: an action and a name for the target artefact. All other arguments have default values, and thus *may* be given as keyword arguments or as positional arguments, depending on your preference. (But given that a "source" argument is still very common, I chose not to spell it out as "source=hello", for compactness.) As a fabscript author you are of course free to name all your rule arguments, if that helps readability. I'm not inventing anything here, but rather taking the most natural approach possible following Python language rules and idioms.
I am also critical about this in bjam. By using a syntax that uses a lot of positional arguments, you need to read the documentation to figure out what is going on.
Again, Python allows you to name all arguments. This is up to the caller, not the API designer. As far as the API is concerned, rules have two mandatory arguments, so it wouldn't make sense to make them keyword arguments.
I hope I explained better above what I had in mind. I agree, of course, that writing things like source="bla" all the time is annoying and superfluous.
But if you prefer some help in drafting your fabscript logic, there are good tools to help interactively editing Python code, including code completion etc. That's the beauty of using Python: we can tap into a fabulous ecosystem of tools and modules, including ipython, jupyter, spyder, etc., etc.
Agreed, that's why I am also in favour of using an established scripting language to describe a build tree. I am sorry that I am so critical, but we have some common ground. All this is meant in a constructive way. Best regards, Hans
This discussion will eventually lead to the realisation that a cross-platform build tool requires separate concepts for (amongst others):

* source files
* libraries (dynamic and static)
* dependencies
* executables built for the target system
* executables built for the build host (i.e. intermediate build tools)
* scopes
* -D macro definitions
* abstractions of compiler options
* abstractions of build-host fundamental operations (environment variables, file existence, file copying, file locks, spawning subprocesses etc.)

… and so on.

To short-circuit the discussion I can offer the following observations:

* bjam, clever as it is, is basically a form of makefile. It will never be a build tool. It's therefore not useful to anyone but Boost maintainers or single-target projects.
* makefiles are great for creating dependency graphs. They are suitable as the output or intermediate stages of a build tool. You build a makefile hierarchy from the build-tool abstractions, given a target toolset and options.

We already have SCons and CMake, which are both awful in their own way. I really think that effort would be better spent retrofitting Python (or JavaScript, or [insert well-maintained scripting language here]) into CMake so that CMake becomes beautiful. Either that, or recreate CMake in Python (or JavaScript), but cleanly, using the combined knowledge of several years of evolution.

Why the CMake team chose to build their own godawful scripting language is a mystery to me. I suspect someone just wanted to write a DSL one day and it all got way out of hand (original poster, please take note!)

R
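Richard's "makefiles as output of a build tool" idea can be sketched as a toy generator. This is purely illustrative; no real build tool describes targets as bare tuples like this, and the Makefile fragment it emits is deliberately minimal.

```python
# Toy sketch: abstract target descriptions in, concrete Makefile text out.
# A real generator would also map abstract features to toolset-specific
# flags before emitting the command lines.
def emit_makefile(targets):
    # targets: list of (name, sources, command) tuples
    lines = []
    for name, sources, command in targets:
        lines.append('%s: %s' % (name, ' '.join(sources)))
        lines.append('\t' + command)
    return '\n'.join(lines)

mk = emit_makefile([('hello.o', ['hello.cpp'],
                     'c++ -c -o hello.o hello.cpp')])
```

The point of the sketch: the dependency graph (make's job) is the easy part; the interesting work is the abstraction layer that produces those concrete command lines for a given toolset and target.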
On 30 Nov 2017, at 11:51, Hans Dembinski via Boost
wrote: On 29. Nov 2017, at 14:31, Stefan Seefeld
wrote: On 29.11.2017 07:59, Hans Dembinski wrote:
from faber.artefacts.binary import binary
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)
default = hello
(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...), which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build.
This example remains cryptic.
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing".
Right, it's what "faber" generates (using the same stem even).
:) Fair enough, but it is still not very descriptive. Why use an uncommon Latin word if you could use a common word from day-to-day language? The purpose of language is to transmit information, so it is usually a good idea to use common words that leave no room for ambiguity.
Ironically, the other meaning of "artefact" is "any error in the *perception or representation of any information*, introduced by the involved equipment or technique(s)" [Wikipedia]
Then, I have to type many redundant things here. Note, the many occurrences of greet in these two lines
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
It seems like hello is a binary which depends on 'hello.cpp' and the module greet. Why the latter?
"hello" is a binary built from a "hello.cpp" source file and a "greet" library provided by another ("greet") module (thus, using Pythonic syntax, we refer to the latter as "greet.greet"). If the library had been built by the same module, the above would simply be

greet = library('greet', 'greet.cpp')
hello = binary('hello', ['hello.cpp', greet])
as is in fact done in this example: https://github.com/stefanseefeld/faber/blob/develop/examples/implicit_rules/...
I think source code is allowed to be verbose, but it should not be redundant, especially if said redundancy could lead to mistakes. I suppose you run the fabscript through a special interpreter, not just the standard Python interpreter. If so, then you can use this shorthand syntax instead:
greet = library('greet.cpp')
That way, one cannot make a mistake like this
greet = library('great', 'greet.cpp')
To make the syntax very consistent (the Zen of Python says: "There should be one - preferably only one - obvious way to do it."), you could define all build items like library and binary in this way:
def binary(*inputs, attribute1=default1, attribute2=default2, …): ...
All positional arguments would always be inputs of any kind, like a source file or a library. If you always use positional arguments consistently like this, then my complaint about ambiguity is gone, because there is a clear rule which is easy to remember.
Attributes would be passed consistently via keywords. They have reasonable defaults that Faber picks for me. Like, if I want another file extension for a library than the default for the platform. For libraries, I could specify whether to build a static or shared one. Or if I really don't want to name the library "greet", I could pass the keyword name="great".
This declaration enforces the use of keywords for attributes; positional arguments are not allowed for attributes, which is good for clarity.
The rule to make a test is very cryptic. The action takes several positional arguments, and I can only guess what each positional argument does.
Rules take at least two (positional) arguments: an action and a name for the target artefact. All other arguments have default values, and thus *may* be given as keyword arguments or as positional arguments, depending on your preference. (But given that a "source" argument is still very common, I chose not to spell it out as "source=hello", for compactness.) As a fabscript author you are of course free to name all your rule arguments, if that helps readability. I'm not inventing anything here, but rather taking the most natural approach possible following Python language rules and idioms.
I am also critical about this in bjam. By using a syntax that uses a lot of positional arguments, you need to read the documentation to figure out what is going on.
Again, Python allows you to name all arguments. This is up to the caller, not the API designer. As far as the API is concerned, rules have two mandatory arguments, so it wouldn't make sense to make them keyword arguments.
I hope I explained better above what I had in mind. I agree, of course, that writing things like source="bla" all the time is annoying and superfluous.
But if you prefer some help in drafting your fabscript logic, there are good tools to help interactively editing Python code, including code completion etc. That's the beauty of using Python: we can tap into a fabulous ecosystem of tools and modules, including ipython, jupyter, spyder, etc., etc.
Agreed, that's why I am also in favour of using an established scripting language to describe a build tree. I am sorry that I am so critical, but we have some common ground. All this is meant in a constructive way.
Best regards, Hans
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On 30.11.2017 06:57, Richard Hodges via Boost wrote:
This discussion will eventually lead to the realisation that a cross-platform builder tool requires separate concepts for (amongst others):
* source files
* libraries (dynamic and static)
* dependencies
* executables built for the target system
* executables built for the build host (i.e. intermediate build tools)
* scopes
* -D macro definitions
* abstractions of compiler options
* abstractions of build host fundamental operations (environment variables, file existence, file copying, file locks, spawning subprocesses etc)
… and so on.
What's your point ? I think everyone understands that.
To short-circuit the discussion I can offer the following observations:
* bjam, clever as it is, is basically a form of makefile. It will never be a build tool. It's therefore not useful to anyone but Boost maintainers or single-target projects.
That sounds like a rather unsubstantiated rant. Can you elaborate on what you mean by that?
* makefiles are great for creating dependency graphs. They are suitable for the output or intermediate stages of a build tool. You build a makefile hierarchy from the build tool abstractions given a target toolset and options.
We already have Scons and CMake, which are both awful in their own way.
I really think that effort would be better spent retrofitting Python (or JavaScript, or [insert well-maintained scripting language here]) into CMake so that CMake becomes beautiful.
Either that, or recreate cmake in python (or javascript), but cleanly, using the combined knowledge of several years of evolution.
Why the cmake team chose to build their own godawful scripting language is a mystery to me. I suspect someone just wanted to write a DSL one day and it all got way out of hand (original poster, please take note!)
R
Sorry, but I still don't understand what you are trying to say. (I don't agree that CMake would be great if it used a better language. I think it is flawed on multiple levels, the central issue probably being that it wants to be a build system generator rather than a build system.) But, to get back to the original topic (which was the Faber announcement): have you looked at it (either the docs or the code)? Can you substantiate your claim that it doesn't meet the points in your shopping list above?

Stefan -- ...ich hab' noch einen Koffer in Berlin...
replies inline On 30 November 2017 at 13:28, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
On 30.11.2017 06:57, Richard Hodges via Boost wrote:
This discussion will eventually lead to the realisation that a cross-platform builder tool requires separate concepts for (amongst others):
* source files
* libraries (dynamic and static)
* dependencies
* executables built for the target system
* executables built for the build host (i.e. intermediate build tools)
* scopes
* -D macro definitions
* abstractions of compiler options
* abstractions of build host fundamental operations (environment variables, file existence, file copying, file locks, spawning subprocesses etc)
… and so on.
What's your point ? I think everyone understands that.
My point is that I think it would be useful to focus on this reality as a priority, rather than on a wrapper around bjam.
To short-circuit the discussion I can offer the following observations:
* bjam, clever as it is, is basically a form of makefile. It will never be a build tool. It's therefore not useful to anyone but Boost maintainers or single-target projects.
That sounds like a rather unsubstantiated rant. Can you elaborate on what you mean by that?
bjam does what makefiles do. It computes and executes a configurable dependency tree. It remains up to the developer to know the specific flags, tools, settings etc. he needs to build for a given target on a given host. This is also true of a makefile. bjam and make are functionally equivalent in that they offer no abstraction in the statement of intent.
* makefiles are great for creating dependency graphs. They are suitable
for the output or intermediate stages of a build tool. You build a makefile hierarchy from the build tool abstractions given a target toolset and options.
We already have Scons and CMake, which are both awful in their own way.
I really think that effort would be better spent retrofitting Python
(or javascript, or [insert well maintained scripting language here]) into cmake so that cmake becomes beautiful.
Either that, or recreate cmake in python (or javascript), but cleanly,
using the combined knowledge of several years of evolution.
Why the cmake team chose to build their own godawful scripting language
is a mystery to me. I suspect someone just wanted to write a DSL one day and it all got way out of hand (original poster, please take note!)
R
Sorry, but I still don't understand what you are trying to say. (I don't agree that CMake would be great if it used a better language. I think it is flawed on multiple levels. The fact that it wants to be a build system generator rather than a build system probably being the central issue.)
The abstract build system generator feature of cmake is what makes it uniquely useful to me (and half* the world)
But, to get back to the original topic (which was the Faber announcement): have you looked at it (either the docs or the code) ? Can you substantiate your claim that it doesn't meet the points in your shopping list above ?
I have looked at the docs and the code. You cannot describe a C++ project in abstract terms with faber, just as you cannot with bjam or make. You still need to know the exact command line options to set for your particular compiler and target system. System discovery of build host and target is very important. This is why GNU autotools was created. The complexity of GNU autotools I suspect was the driver for the creation of SCons and CMake. They are better, but not good enough. We don't need** another make. make et al. are good enough for managing dependencies. We do need** a better, more intuitive means of describing a project and its dependencies in a platform-agnostic manner.

Stefan
--
...ich hab' noch einen Koffer in Berlin...
* "half the world" - is a rough finger-in-the-air estimate of the
population of c++ developers who need more than a simple makefile (i.e. most of them). ** my opinion
Hi Richard, I'm going to use "b2" rather than "bjam" in my reply, as it really is b2 (or "Boost.Build", if you prefer) that we need to look at here, and which Faber draws from. bjam is indeed little more than a "dependency graph manager". B2 layers lots of important concepts over that, including tools and tool abstractions, features and their mapping to (tool-specific) parameters, etc. So I assume you really mean b2 when you criticise "bjam" below... On 30.11.2017 07:50, Richard Hodges via Boost wrote:
replies inline
On 30 November 2017 at 13:28, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
On 30.11.2017 06:57, Richard Hodges via Boost wrote:
This discussion will eventually lead to the realisation that a cross-platform builder tool requires separate concepts for (amongst others):

* source files
* libraries (dynamic and static)
* dependencies
* executables built for the target system
* executables built for the build host (i.e. intermediate build tools)
* scopes
* -D macro definitions
* abstractions of compiler options
* abstractions of build host fundamental operations (environment variables, file existence, file copying, file locks, spawning subprocesses etc)

… and so on.

What's your point ? I think everyone understands that.
My point is that I think it would be useful to focus on this reality in priority to a wrapper around bjam.
That's because b2 already *does* support all of the above concepts. And while they may not be very intuitive to use (hence my focus on a new "frontend"), I believe that on a conceptual level I can reuse most of what b2 has to offer, including features, tools, and much more. So I'd like to ask you to substantiate your claim that these concepts aren't provided or served adequately by b2 / faber.
To short-circuit the discussion I can offer the following observations:
* bjam, clever as it is, is basically a form of makefile. It will never be a build tool. It's therefore not useful to anyone but Boost maintainers or single-target projects.
That sounds like a rather unsubstantiated rant. Can you elaborate on what you mean by that?
bjam does what makefiles do. It computes and executes a configurable dependency tree.
It does much more. It defines features and tools, then automatically detects the build platform and available tools, lets users fine-tune those, and only then maps a platform-agnostic build description (in the form of a set of Jamfiles or fabscripts) to a concrete dependency graph with concrete actions.
It remains up to the developer to know the specific flags, tools, settings etc. he needs to build for a given target on a given host.
You only need to know the specific tools and their argument spelling if the automatic mapping performed by faber doesn't suit you, and you want to fine-tune the specific commands. But the general premise of faber (as well as b2) is that it separates the task of defining tools from the task of defining build logic, as the two are done by quite different sets of people.
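The separation Stefan describes here could be illustrated with a toy feature-to-flag mapping. The table and names below are invented for illustration; they are not Faber's real data structures.

```python
# Toy illustration (not Faber's real API): a tool author defines the
# mapping from abstract features to tool-specific flags once...
GCC_FLAGS = {
    ('optimize', 'speed'): '-O3',
    ('debug', True): '-g',
    ('cxxstd', '14'): '-std=c++14',
}

def command_line(flag_table, features):
    # ...so the build-logic author only states platform-agnostic features,
    # which get translated into concrete options at build time.
    return ['c++'] + [flag_table[f] for f in features if f in flag_table]

# The fabscript author never spells out '-O3' or '-std=c++14' themselves.
cmd = command_line(GCC_FLAGS, [('cxxstd', '14'), ('optimize', 'speed')])
```

A second flag table (say, for MSVC) would let the same feature list produce a completely different command line, which is the whole point of the tool/build-logic split.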
But, to get back to the original topic (which was the Faber announcement): have you looked at it (either the docs or the code) ? Can you substantiate your claim that it doesn't meet the points in your shopping list above ?
I have looked at the docs and the code. You cannot describe a C++ project in abstract terms with faber, just as you cannot with bjam or make. You still need to know the exact command line options to set for your particular compiler and target system.
Huh? Counter-examples:

https://github.com/stefanseefeld/faber/blob/develop/examples/implicit_rules/...
https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...
https://github.com/stefanseefeld/faber/blob/develop/examples/test/fabscript
https://github.com/stefanseefeld/faber/blob/develop/examples/config/fabscrip...

Where do you see any mention of "exact command line options" in these? The first two would even cross-compile out of the box (when invoked with `faber target.arch=something`), and the latter two only can't because they need to execute compiled code, and cross-configuration and cross-testing aren't quite supported yet.

Stefan -- ...ich hab' noch einen Koffer in Berlin...
inline... On 30 November 2017 at 14:18, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
Hi Richard,
I'm going to use "b2" rather than "bjam" in my reply, as it really is b2 (or "Boost.Build", if you prefer) that we need to look at here, and which Faber draws from. bjam is indeed little more than a "dependency graph manager". B2 layers lots of important concepts over that, including tools and tool abstractions, features and their mapping to (tool-specific) parameters, etc. So I assume you really mean b2 when you criticise "bjam" below...
I do mean bjam
On 30.11.2017 07:50, Richard Hodges via Boost wrote:
replies inline
On 30 November 2017 at 13:28, Stefan Seefeld via Boost < boost@lists.boost.org> wrote:
On 30.11.2017 06:57, Richard Hodges via Boost wrote:
This discussion will eventually lead to the realisation that a cross-platform builder tool requires separate concepts for (amongst others):

* source files
* libraries (dynamic and static)
* dependencies
* executables built for the target system
* executables built for the build host (i.e. intermediate build tools)
* scopes
* -D macro definitions
* abstractions of compiler options
* abstractions of build host fundamental operations (environment variables, file existence, file copying, file locks, spawning subprocesses etc)

… and so on.

What's your point ? I think everyone understands that.
My point is that I think it would be useful to focus on this reality in priority to a wrapper around bjam.
That's because b2 already *does* support all of the above concepts. And while they may not be very intuitive to use (hence my focus on a new "frontend"), I believe that on a conceptual level I can reuse most of what b2 has to offer, including features, tools, and much more. So I'd like to ask you to substantiate your claim that these concepts aren't provided or served adequately by b2 / faber.
I would argue that the intuitive way to express this is a script that defines conceptual assertions.
To short-circuit the discussion I can offer the following observations:
* bjam, clever as it is, is basically a form of makefile. It will never be a build tool. It's therefore not useful to anyone but Boost maintainers or single-target projects.
That sounds like a rather unsubstantiated rant. Can you elaborate on what you mean by that?
bjam does what makefiles do. It computes and executes a configurable dependency tree.
It does much more. It defines features and tools, then automatically detects the build platform and available tools, lets users fine-tune those, and only then maps a platform-agnostic build description (in the form of a set of Jamfiles or fabscripts) to a concrete dependency graph with concrete actions.
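The "dependency graph manager" both sides are describing works, at its core, like make: visit a target's dependencies first, then run the target's action. The sketch below is a hypothetical illustration of that scheduling idea only, not bjam's actual algorithm; all names in it are invented for the example.

```python
# Minimal sketch of a dependency-graph scheduler in the spirit of
# make/bjam: post-order traversal, dependencies built before targets,
# each target built exactly once. Illustration only.

def build(target, deps, actions, built=None):
    """Build `target` after all of its dependencies; return build order."""
    if built is None:
        built = []
    for dep in deps.get(target, []):
        build(dep, deps, actions, built)
    if target not in built:
        actions.get(target, lambda: None)()  # run the action, if any
        built.append(target)
    return built

# hello depends on hello.o, which depends on hello.cpp (a source, no action)
deps = {'hello': ['hello.o'], 'hello.o': ['hello.cpp']}
log = []
actions = {'hello.o': lambda: log.append('compile'),
           'hello': lambda: log.append('link')}
order = build('hello', deps, actions)
```

Everything layered on top of this core (features, tools, platform detection) is what distinguishes b2/Faber from the bare scheduler.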
Have you tried getting bjam to correctly build boost for c++14 on iOS for a specific version of iOS, as a universal library? It's a dark art. Have you tried to build for emscripten? (internal rules do not work). I can't even find a reasonable instruction manual to describe how bjam works. Before building a front end for it, perhaps better to document what is already there.
It remains up to the developer to know the specific flags, tools, settings, etc. he needs to build for a given target on a given host.
You only need to know the specific tools and their argument spelling if the automatic mapping performed by faber doesn't suit you and you want to fine-tune the specific commands. But the general premise of faber (as well as b2) is that it separates the task of defining tools from the task of defining build logic, as the two are done by quite different sets of people.
How would you build a protocol buffers .proto file into c++ using the correct build options for the target, have the headers and sources installed to the right places and have these generated files correctly emplaced in the graph? This is fundamental to my use case.
But, to get back to the original topic (which was the Faber announcement): have you looked at it (either the docs or the code) ? Can you substantiate your claim that it doesn't meet the points in your shopping list above ?
I have looked at the docs and the code. You cannot describe a C++ project in abstract terms with faber, just as you cannot with bjam or make. You still need to know the exact command-line options to set for your particular compiler and target system.
Huh ? Counter examples:
https://github.com/stefanseefeld/faber/blob/develop/examples/implicit_rules/fabscript
https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscript
https://github.com/stefanseefeld/faber/blob/develop/examples/test/fabscript
https://github.com/stefanseefeld/faber/blob/develop/examples/config/fabscript
Where do you see any mention of "exact command line options" in these ? The first two would even cross-compile out of the box (when invoked with `faber target.arch=something`), and the latter two can't only because they need to execute compiled code, and cross-configuration and cross-testing aren't quite supported yet.
These examples are utterly simplistic. Nowhere are they setting sysroot, sanitize, warning options, abstract libraries (with embedded include file paths), etc. How would you get these to build as a signed executable on iOS or Android, for example, without a 3-line CC=clang++ <insert magic options list here>?
Stefan
--
...ich hab' noch einen Koffer in Berlin...
On 30.11.2017 08:37, Richard Hodges via Boost wrote:
Have you tried getting bjam to correctly build boost for c++14 on iOS for a specific version of iOS, as a universal library?
It's a dark art.
No doubt.
Have you tried to build for emscripten? (internal rules do not work).
I haven't. But with Faber I explain how to:
* define new file types (e.g. "asm.js")
* define new tools (e.g. "emscripten")
* define implicit rules (e.g. how to build an "asm.js" file from a "C++" file)
And these are the basic ingredients you need to be able to *extend* Faber to support additional tools and targets.
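The extension mechanism described above can be pictured as a registry mapping (source type, target type) pairs to tools: teaching the system a new file type or tool is just adding entries. The names `register_rule` and `find_rule` below are illustrative inventions, not Faber's actual API.

```python
# Hypothetical sketch of an implicit-rule registry: rules map a
# source file type to a target file type via a tool. The function
# names are invented for this example, not Faber's API.

rules = {}  # (source_type, target_type) -> tool name

def register_rule(source_type, target_type, tool):
    rules[(source_type, target_type)] = tool

def find_rule(source_type, target_type):
    return rules.get((source_type, target_type))

# Teach the registry a new target type and tool, as in the emscripten case:
register_rule('c++', 'obj', 'g++')
register_rule('c++', 'asm.js', 'emscripten')
```

With such a table in place, a rule asking for an "asm.js" artefact from a C++ source can look up the transformation instead of spelling out commands.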
I can't even find a reasonable instruction manual to describe how bjam works. Before building a front end for it, perhaps better to document what is already there.
I mentioned earlier that I think we should discuss "b2" rather than "bjam". But it is indeed "bjam" that Faber is a frontend to. Faber is at the same level as b2 / Boost.Build, as it tries to offer the same functionality. And while Faber is a very young project, and its documentation is very sparse, I'm hoping to fill in the missing bits to make it easy for users to find answers to the above questions of how to extend it to support additional languages and tools. I'm skipping your other examples, not because they aren't important, but because the answer is the same: there are many hooks that allow you to plug in platform- and tool-specific extensions and to fine-tune the way Faber operates. It's my hope that as Faber evolves, people will contribute support for additional tools, languages, platforms. There obviously is a lot to be done, so for this to succeed it needs to become a community project. Stefan -- ...ich hab' noch einen Koffer in Berlin...
On Thu, Nov 30, 2017 at 6:37 AM, Richard Hodges via Boost < boost@lists.boost.org> wrote:
Have you tried getting bjam to correctly build boost for c++14 on iOS for a specific version of iOS, as a universal library?
I have in the past when I used to do iOS programming for a living.
It's a dark art.
It is.. But the darkness of it is all Apple's fault. Have you tried to build for emscripten? (internal rules do not work).
Yes. Have you tried using the built-in b2 emscripten toolset? https://github.com/boostorg/build/blob/develop/src/tools/emscripten.jam PS. Feel free to ignore this email, since this thread really should be about Faber and not b2. -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Hans Dembinski via Boost Sent: 30 November 2017 10:52 To: Stefan Seefeld Cc: Hans Dembinski; boost@lists.boost.org Subject: Re: [boost] Announcement: Faber, a new build system based on bjam
On 29. Nov 2017, at 14:31, Stefan Seefeld
wrote: On 29.11.2017 07:59, Hans Dembinski wrote:
from faber.artefacts.binary import binary
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)
default = hello
(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...
https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...),
which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build.
This example remains cryptic.
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing".
Right, it's what "faber" generates (using the same stem even).
:) Fair enough, but it is still not very descriptive. Why use an uncommon latin word if you could use a common word from day-to-day language? The purpose of language is to transmit information, so it is usually a good idea to use common words that leave no room for ambiguity.
Ironically, the other meaning of "artefact" is "any error in the *perception or representation of any information*, introduced by the involved equipment or technique(s)" [Wikipedia]
I'm with Hans on this. The underlying problem is that we have run out of words that mean 'thingamajig'. “When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.” “The question is,” said Alice, “whether you can make words mean so many different things.” “The question is,” said Humpty Dumpty, “which is to be master—that's all.” Every time you want a 'thingamajig', it is a different thing, and you have not only to define it (easy-ish) but to get that definition into the mind of the reader (much more difficult), and until you achieve the latter you will leave the user confused. Choosing a word like 'artefact' that has multiple customary meanings is asking for more confusion. It's tricky because all the words you might choose have another definition already from some other application and so are 'taken' in people's minds. No specific suggestions have popped into my mind. But it would be better to call it 'thing' than artefact! Paul PS '$(>)' really, really turns me off :-(
On 30.11.2017 07:32, Paul A. Bristow via Boost wrote:
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Hans Dembinski via Boost Sent: 30 November 2017 10:52 To: Stefan Seefeld Cc: Hans Dembinski; boost@lists.boost.org Subject: Re: [boost] Announcement: Faber, a new build system based on bjam
On 29. Nov 2017, at 14:31, Stefan Seefeld
wrote: from faber.artefacts.binary import binary
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)
default = hello
(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri... https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...), which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build. This example remains cryptic.
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing". Right, it's what "faber" generates (using the same stem even). :) Fair enough, but it is still not very descriptive. Why use an uncommon latin word if you could use a common word from day-to-day language? The purpose of language is to transmit information, so it is usually a good idea to use common words that leave no room for ambiguity.
Ironically, the other meaning of "artefact" is "any error in the *perception or representation of any information*, introduced by the involved equipment or technique(s)" [Wikipedia] I'm with Hans on this.
[...] (Yes, I'm fully aware of the difficulties of defining and establishing terminology. :-) )
But it would be better to call it 'thing' than artefact!
Ah, no. Because with "artefact" I really use the original (etymologic) meaning: something created. Think of yourself as "homo faber" in that ontology :-)
Paul
PS '$(>)' really, really turns me off :-(
Sorry for that. That's a bit of inheritance from bjam, but can easily be changed. Or we could add aliases such as $(in) and $(out), or whatever people prefer. I have to admit that I'm a bit frustrated that we spend so much time talking about naming and syntax, rather than the more fundamental stuff like design or functionality. Is that merely because it's easy to find syntax issues, while arguing about design requires more time to understand the thing under review ? I really don't want to get stuck in bikeshed discussions after having spent so much effort on Faber's infrastructure to correct what I perceived as major flaws in b2's design. :-( Stefan -- ...ich hab' noch einen Koffer in Berlin...
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Stefan Seefeld via Boost Sent: 30 November 2017 12:54 To: boost@lists.boost.org Cc: Stefan Seefeld Subject: Re: [boost] Announcement: Faber, a new build system based on bjam
On 30.11.2017 07:32, Paul A. Bristow via Boost wrote:
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Hans Dembinski via Boost Sent: 30 November 2017 10:52 To: Stefan Seefeld Cc: Hans Dembinski; boost@lists.boost.org Subject: Re: [boost] Announcement: Faber, a new build system based on bjam
On 29. Nov 2017, at 14:31, Stefan Seefeld
wrote: On 29.11.2017 07:59, Hans Dembinski wrote:
from faber.artefacts.binary import binary
greet = module('greet')
hello = binary('hello', ['hello.cpp', greet.greet])
rule(action('test', '$(>)'), 'test', hello, attrs=notfile|always)
default = hello
(from https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri... https://github.com/stefanseefeld/faber/blob/develop/examples/modular/fabscri...), which uses higher-level artefacts (`binary`, `library`) and doesn't require the user to define his own actions to build. This example remains cryptic.
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing".
Right, it's what "faber" generates (using the same stem even). :) Fair enough, but it is still not very descriptive. Why use an uncommon latin word if you could use a common word from day-to-day language? The purpose of language is to transmit information, so it is usually a good idea to use common words that leave no room for ambiguity.
Ironically, the other meaning of "artefact" is "any error in the *perception or representation of any information*, introduced by the involved equipment or technique(s)" [Wikipedia] I'm with Hans on this. [...]
(Yes, I'm fully aware of the difficulties of defining and establishing terminology. :-) )
But it would be better to call it 'thing' than artefact!
Ah, no. Because with "artefact" I really use the original (etymologic) meaning: something created. Think of yourself as "homo faber" in that ontology :-)
Paul
PS '$(>)' really, really turns me off :-(
Sorry for that. That's a bit of inheritance from bjam, but can easily be changed. Or we could add aliases such as $(in) and $(out), or whatever people prefer.
I have to admit that I'm a bit frustrated that we spend so much time talking about naming and syntax, rather than the more fundamental stuff like design or functionality. Is that merely because it's easy to find syntax issues, while arguing about design requires more time to understand the thing under review ? I really don't want to get stuck in bikeshed discussions after having spent so much effort on Faber's infrastructure to correct what I perceived as major flaws in b2's design. :-(
OK - yes definitely. Sorry for the noise - I can resist anything but temptation ;-) Paul
Hi Hans, let me get right through to the points not yet discussed elsewhere. On 30.11.2017 05:51, Hans Dembinski wrote:
Then, I have to type many redundant things here. Note the many occurrences of greet in these two lines
greet = module('greet') hello = binary('hello', ['hello.cpp', greet.greet])
It seems like hello is a binary which depends on 'hello.cpp' and the module greet. Why the latter?
"hello" is a binary built from a "hello.cpp" source file and a "greet" library provided from another ("greet") module (thus using Pythonic syntax, we refer to the latter as "greet.greet"). If the library would have been built by the same module, the above would simply be
greet = library('greet', 'greet.cpp') hello = binary('hello', ['hello.cpp', greet])
as is in fact done in this example: https://github.com/stefanseefeld/faber/blob/develop/examples/implicit_rules/...
I think source code is allowed to be verbose, but it should not be redundant, especially if said redundancy could lead to mistakes. I suppose you run the fabscript through a special interpreter, not just the standard Python interpreter. (no special interpreter involved here, it's all standard Python)
If so, then you can use this shorthand syntax instead:
greet = library('greet.cpp')
That way, one cannot make a mistake like this
greet = library('great', 'greet.cpp')
Ah, well, now that's a fundamental limitation of Python. In the line greet = library('greet.cpp') you get a variable ("greet") that holds a reference to a "library" object. Note that the object has no notion of being referred to by the name "greet", and thus it doesn't know what name to assign to the compiled and linked library, unless I provide that explicitly. Thus hello = binary(['hello.cpp', greet]) likewise wouldn't know how to name the binary file, and with the 'greet' library object still having no name, the compiler couldn't refer to it either. I was actually thinking of ways to merge the two (the name of the variable, and the intrinsic names of the library and binary artefacts) to avoid that perceived redundancy, but got into all kinds of additional complexities trying that. The main point being that I really want to take advantage of this being written in a well-established language, and using its idioms. SCons is a very good (or bad, actually) example of what happens if you technically use an established language without paying tribute to its idioms and established practices. I don't want to repeat that error.
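The limitation being described is easy to demonstrate in plain Python: an object has no access to the variable name it happens to be bound to, so the artefact name has to be passed explicitly. The `Library` class below is a hypothetical stand-in for illustration, not Faber's actual class.

```python
# Plain-Python demonstration of why `greet = library('greet.cpp')`
# cannot infer the name "greet": objects don't know their binding
# names, so the name must be an explicit argument.

class Library:
    def __init__(self, name, *sources):
        self.name = name        # explicit: nothing tells the constructor
        self.sources = sources  # which variable the result is assigned to

greet = Library('greet', 'greet.cpp')
alias = greet  # two names now refer to one object -- which would be "the" name?
```

Since `alias` and `greet` are the same object, there is no well-defined "variable name" the constructor could recover, which is why the explicit first argument is unavoidable.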
To make the syntax very consistent (the Zen of Python says: "There should be one - preferably only one - obvious way to do it."), you could define all build items like library and binary in this way:
def binary(*inputs, attribute1=default1, attribute2=default2, …): ...
All positional arguments would always be inputs of any kind, like a source file or a library. If you always use positional arguments consistently like this, then my complaint about ambiguity is gone, because there is a clear rule which is easy to remember.
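In Python 3 the suggested signature already enforces the convention: any parameter declared after `*inputs` is keyword-only, and a bare `*` does the same for functions with fixed positional parameters. The functions below are illustrative sketches of that language feature, not Faber's API.

```python
# Parameters after *inputs are keyword-only in Python 3, so attributes
# can never be passed positionally by accident. Illustrative only.

def binary(*inputs, name=None, shared=False):
    # all positional arguments are inputs; attributes must be keywords
    return {'inputs': inputs, 'name': name, 'shared': shared}

b = binary('hello.cpp', 'greet.cpp', name='hello', shared=True)

# The bare `*` enforces the same rule for functions with a fixed
# number of positional parameters:
def rule(action, target, *, source=None, attrs=0):
    return (action, target, source, attrs)

try:
    rule('ln', 'hello', 'hello.o')  # third positional argument is rejected
    keyword_enforced = False
except TypeError:
    keyword_enforced = True
```

This is the mechanism referred to later in the thread when Stefan agrees to "explicitly forbid positional arguments for anything but the required arguments".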
Perhaps you should think this through for a while longer (the way I have thought about it for months :-) ). I believe you will come to a very similar conclusion as I have.
Attributes would be passed consistently via keywords. They have reasonable defaults that Faber picks for me. Like, if I want another file extension for a library than the default for the platform. For libraries, I could specify whether to build a static or shared one. Or if I really don't want to name the library "greet", I could pass the keyword name="great".
This declaration enforces the use of keywords for attributes; positional arguments are not allowed for attributes, which is good for clarity.
That's actually a good point: I can change the definition of these functions (`rule`, artefact constructors, etc.) to explicitly forbid positional arguments for anything but the required arguments. I agree that would prevent certain errors. Thanks for the tip !
The rule to make a test is very cryptic. The action takes several positional arguments, and I can only guess what each positional argument does.
rules take at least two (positional) arguments (an action and a name for the target artefact). All other arguments have default values, and thus *may* be given as keyword arguments or as positional arguments, depending on your preference. (But given that a "source" argument is still very common, I just chose not to spell it out as "source=hello", for compactness.) As a fabscript author you are of course free to name all your rule arguments, if that helps readability. I'm not inventing anything here; I'm taking the most natural approach possible, following Python language rules and idioms.
I am also critical about this in bjam. By using a syntax that uses a lot of positional arguments, you need to read the documentation to figure out what is going on.
Again, Python allows you to name all arguments. This is up to the caller, not the API designer. As far as the API is concerned, rules have two mandatory arguments, so it wouldn't make sense to make them keyword arguments.
I hope I explained better above what I had in mind. I agree, of course, that writing things like source="bla" all the time is annoying and superfluous.
But if you prefer some help in drafting your fabscript logic, there are good tools to help interactively editing Python code, including code completion etc. That's the beauty of using Python: we can tap into a fabulous ecosystem of tools and modules, including ipython, jupyter, spyder, etc., etc.
Agreed, that's why I am also in favour of using an established scripting language to describe a build tree. I am sorry that I am so critical, but we have some common ground. All this is meant in a constructive way.
OK, great to hear. I'm taking it in a constructive way. Stefan -- ...ich hab' noch einen Koffer in Berlin...
On 30. Nov 2017, at 14:38, Stefan Seefeld
wrote: Ah, well, now that's a fundamental limitation of Python. […]
Ok, if you use a standard Python interpreter, then it wouldn't work... except perhaps with really evil hacks.
I was actually thinking of ways to merge the two (the name of the variable, and the intrinsic names of the library and binary artefacts) to avoid that perceived redundancy, but got in all kinds of additional complexities trying that. The main point being that I really want to take advantage of this being written in a well established language, and using its idioms. SCons is a very good (or bad, actually) example of what happens if you technically use an established language without paying tribute to its idioms and established practices. I don't want to repeat that error.
You are right, of course. I am glad to hear that you thought about it. I don't have any other ideas right now.
To make the syntax very consistent (the Zen of Python says: "There should be one - preferably only one - obvious way to do it."), you could define all build items like library and binary in this way:
def binary(*inputs, attribute1=default1, attribute2=default2, …): ...
All positional arguments would always be inputs of any kind, like a source file or a library. If you always use positional arguments consistently like this, then my complaint about ambiguity is gone, because there is a clear rule which is easy to remember.
Perhaps you should think this through for a while longer (the way I have thought about it for months :-) ). I believe you will come to a very similar conclusion as I have.
Well, I am not an expert on how to design a build system, since I have never done that. This sounds a bit like you don't want to explain and discuss this further. That's fine with me, but it would be a pity. I am sure everything makes sense to you, but as the creator you will invariably get "betriebsblind" (blinkered by routine); it happens to everyone. A review is a chance to get a fresh new perspective.
I know next to nothing about b2 and bjam, but I do understand Makefiles, I played with SCons for a while, and I am pretty fluent in CMake. I am very fluent in Python and I love the Zen of Python, which every programmer should put up in their office IMHO. I am also not above admitting that I am but a simple guy who just wants to have a consistent build system that makes sense, is fun to use, and easy to extend. CMake mostly fails in the latter regard IMHO.
Perhaps more importantly, CMake becomes more ugly with every version, as they try to cram in more and more features without really re-designing the whole thing (reminds me of C++). If this trend continues and eventually enough people get annoyed with CMake, then there would be a market for a new build system. A niche that Faber could potentially fill (although I believe this is hard for any competitor, as explained a few mails back).
Concerning the different perspective; like Richard, I don't understand why you despise that CMake is a build system generator rather than a build system. I don't know why they did that, it seems to make things more complicated for them, but it is not harming me, the user.
Attributes would be passed consistently via keywords. They have reasonable defaults that Faber picks for me. Like, if I want another file extension for a library than the default for the platform. For libraries, I could specify whether to build a static or shared one. Or if I really don't want to name the library "greet", I could pass the keyword name="great".
This declaration enforces the use of keywords for attributes; positional arguments are not allowed for attributes, which is good for clarity. That's actually a good point: I can change the definition of these functions (`rule`, artefact constructors, etc.) to explicitly forbid positional arguments for anything but the required arguments. I agree that would prevent certain errors. Thanks for the tip !
At least I was able to provide something useful :). Best regards, Hans
On 01.12.2017 15:42, Hans Dembinski wrote:
On 30. Nov 2017, at 14:38, Stefan Seefeld
wrote: Ah, well, now that's a fundamental limitation of Python. […] Ok, if you use a standard Python interpreter, then it wouldn't work... except perhaps with really evil hacks.
Exactly. And while I do use a few tricks (yes, Python also provides its bag of meta-programming tricks ! ;-) ), I very deliberately limited the number of tricks I use. One trick you may appreciate is the use of "virtual actions", allowing you to use the abstract "cxx.compile" in a fabscript, but then override that with e.g. "gxx.compile" by invoking `faber cxx.name=gxx`. Enabling that required a bit of magic (I use metaclasses for both actions and tools to do that), but I thought that in this particular instance it's well worth it.
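The "virtual action" idea can be sketched without metaclasses as a name-based lookup: an abstract action like "cxx.compile" resolves to a concrete tool chosen at invocation time, in the spirit of `faber cxx.name=gxx`. This is an illustration of the dispatch concept only; Faber's actual implementation uses metaclasses, and the tool commands below are invented for the example.

```python
# Sketch of virtual-action dispatch: "cxx.compile" resolves to a
# concrete tool selected by an override such as cxx.name=gxx.
# Illustration only, not Faber's implementation.

tools = {'gxx':     {'compile': 'g++ -c'},
         'clangxx': {'compile': 'clang++ -c'}}

def resolve(action, overrides):
    """Map an abstract 'namespace.action' to a concrete tool's command."""
    namespace, name = action.split('.')
    concrete = overrides.get(namespace + '.name', namespace)
    return tools[concrete][name]

cmd = resolve('cxx.compile', {'cxx.name': 'gxx'})
```

The fabscript stays abstract ("cxx.compile") while the command line chooses the concrete toolchain, which is the property the metaclass machinery buys.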
I was actually thinking of ways to merge the two (the name of the variable, and the intrinsic names of the library and binary artefacts) to avoid that perceived redundancy, but got in all kinds of additional complexities trying that. The main point being that I really want to take advantage of this being written in a well established language, and using its idioms. SCons is a very good (or bad, actually) example of what happens if you technically use an established language without paying tribute to its idioms and established practices. I don't want to repeat that error. You are right, of course. I am glad to hear that you thought about it. I don't have any other ideas right now.
To make the syntax very consistent (the Zen of Python says: "There should be one - preferably only one - obvious way to do it."), you could define all build items like library and binary in this way:
def binary(*inputs, attribute1=default1, attribute2=default2, …): ...
All positional arguments would always be inputs of any kind, like a source file or a library. If you always use positional arguments consistently like this, then my complaint about ambiguity is gone, because there is a clear rule which is easy to remember. Perhaps you should think this through for a while longer (the way I have thought about it for months :-) ). I believe you will come to a very similar conclusion as I have. Well, I am not an expert on how to design a build system, since I have never done that. This sounds a bit like you don't want to explain and discuss this further.
Actually, I'd be happy to. But a) there may be better places than the main Boost ML to do that and b) I'm really looking forward to a review of the workflow as a whole, the functionality, etc., rather than getting stuck in questions about whether "artefact" is a useful term, or whether "$(<)" is a good way to refer to the target of an action.
That's fine with me, but it would be a pity. I am sure everything makes sense to you, but as the creator you will invariably get "betriebsblind", it happens to everyone. A review is a chance to get a fresh new perspective.
I totally agree, and I very much appreciate people taking the time to review Faber and provide feedback.
I know next to nothing about b2 and bjam, but I do understand Makefiles, I played with SCons for a while, and I am pretty fluent in CMake. I am very fluent in Python and I love the Zen of Python, which every programmer should put up in their office IMHO. I am also not above admitting, that I am but a simple guy who just wants to have a consistent build system that makes sense, is fun to use, and easy to extend. CMake mostly fails in the latter regard IMHO.
Perhaps more importantly, CMake becomes more ugly with every version, as they try to cram in more and more features without really re-designing the whole thing (reminds me of C++). If this trend continues and eventually enough people get annoyed with CMake, then there would be a market for a new build system. A niche that Faber could potentially fill (although I believe this is hard for any competitor, as explained a few mails back).
Concerning the different perspective; like Richard, I don't understand why you despise that CMake is a build system generator rather than a build system. I don't know why they did that, it seems to make things more complicated for them, but it is not harming me, the user.
The main objection I have is that it requires me to learn two build systems (hence the "now you have two problems"), as CMake just isn't good enough to fully hide the wrapped (target) build system. It's good as long as everything works, but as soon as something breaks, I have to understand far more of the internals than I should have to. That's the fundamental flaw with all macro-based languages. On a more philosophical level, I think that CMake is an attempt to patch over the fact that the underlying build systems aren't portable. So the real fix to that problem obviously is to write such a portable build system that would obviate the need for something like CMake. I just happen to believe that Faber does that. (For the avoidance of doubt: b2 attempts the same, but falls short in terms of usability.)
Attributes would be passed consistently via keywords. They have reasonable defaults that Faber picks for me. Like, if I want another file extension for a library than the default for the platform. For libraries, I could specify whether to build a static or shared one. Or if I really don't want to name the library "greet", I could pass the keyword name="great".
This declaration enforces the use of keywords for attributes, positional arguments are not allowed for attributes, which is a good for clarity. That's actually a good point: I can change the definition of these functions (`rule`, artefact constructors, etc.) to explicitly forbid positional arguments for anything but the required arguments. I agree that would prevent certain errors. Thanks for the tip ! At least I was able to provide something useful :).
I invite you to download Faber and play with the examples. There are plenty of ways to make more useful contributions ! ;-) Stefan -- ...ich hab' noch einen Koffer in Berlin...
On 01/12/2017 22:09, Stefan Seefeld via Boost wrote:
On a more philosophical level, I think that CMake is an attempt to patch over the fact that the underlying build systems aren't portable. So the real fix to that problem obviously is to write such a portable build system that would obviate the need for something like CMake.
No, I think this is missing the major selling point of why people use CMake. It's glue which can integrate with all the major extant build systems on all the major platforms.
As a library and program author, my end users all want to have my software integrate with their existing systems. They wouldn't be happy if I dictated they use some spiffy but nonstandard and incompatible system. But that's basically what Boost does with b2, and this tool. Some users want to build with make on Unix. Others want to build and edit within Visual Studio on Windows. Or use CLion. Or Xcode, or Eclipse. Or use MinGW or Cygwin on Windows, with the tool of their choice. Or use Ninja on Windows or Unix because they want fast builds. There are lots of different requirements and preferences. CMake satisfies them very well. It might not be the prettiest language, or the most cleanly designed system, but it works. For all of these cases, and more. And as the software author, I write the build logic once and don't have to personally care about all the systems I don't use or dislike, aside from some CI testing. I can't do any of that with b2, and presumably this tool as well.
Making a "portable build system" ignores the fact that none of these people *care* about the build system being portable. They care that it works with *their* tools and workflows with a minimum of hassle and the maximum benefit to themselves. People who are tied to a specific IDE aren't going to care that the build system is "portable". They care that it works with *their* setup. Likewise people who want to use make or ninja, or whatever. CMake the tool is portable, so the problem is solved. I write the CMake build, they use the tool *they* want. Not the one I dictate.
A grand unified portable build system isn't a bad idea in and of itself. But it's not something which any tool will realise in practice in the real world. It's a pipe dream.
We are all tied into various existing tools and systems, and we're not going to throw all that out and replace it. In many cases, those decisions aren't ours to make; my end users are mostly in different companies and institutions with their own policies and requirements (often they don't have a say in them either). Having something that works with all the different existing systems is more desirable than replacing them. And that's why I use CMake: so that I can produce software releases that each end user can use in the way they need. It doesn't matter that it's a bit ugly and complex, because the value it provides is solving the real-world practical integration problems which other tools do not, and if you want to replace CMake then you need to be solving problems at that level as well. If you get to that point you might find that your system becomes a bit ugly and complex too; that's the price for some of CMake's backward compatibility guarantees. All successful and widely used systems build up some degree of cruft, and CMake is no exception in that regard.

Honestly, if a fraction of the effort that went into b2 and this new system went into producing pkg-config and cmake configuration files for all the boost components, that would be an invaluable improvement to the ability of the boost libraries to be used portably and flexibly by other projects. Because for myself and many others, this is **much more of a portability hindrance than any other factor**. It doesn't matter how wonderful the build system is if the end product of the build can't easily be consumed by anyone else. I've mentioned before on the list just how large a burden is imposed on downstream users in the absence of this information, and it would be greatly appreciated if the real-world integration problems could be given a higher priority. We already have enough build systems to deal with.

Regards,
Roger
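The consumer-side integration being asked for here could be sketched roughly as follows. This assumes Boost shipped CMake package-configuration files exporting namespaced imported targets, which it did not at the time of this thread, so the component and target names below are hypothetical:

```cmake
# Hypothetical consumer CMakeLists.txt: what "cmake configuration files
# for all the boost components" would buy a downstream project.
cmake_minimum_required(VERSION 3.5)
project(consumer CXX)

# Config mode: locate the package's own exported configuration
# instead of relying on a hand-maintained FindBoost module.
find_package(Boost CONFIG REQUIRED COMPONENTS filesystem)

add_executable(app main.cpp)

# An imported target carries its include directories, compile options
# and transitive dependencies with it, so one line is enough:
target_link_libraries(app PRIVATE Boost::filesystem)
```

A pkg-config `.pc` file per library would play the same role for Makefile-based consumers.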
2017-12-02 0:14 GMT+01:00 Roger Leigh via Boost
On 01/12/2017 22:09, Stefan Seefeld via Boost wrote:
[...]
While I disagree on the CMake-being-ugly part (for me the ugly ones were always the syntaxes of Makefiles, autotools templates, b2 files, and build systems like Maven or any other XML- or YAML-based build system, but that's probably just my odd personal taste and what I'm used to), I strongly agree with the rest of it.

I've always preferred CMake exactly because I was able to use the Code::Blocks or Visual Studio IDE while others were free to torture themselves with Eclipse, and I was able to simply produce build scripts that worked on Linux, Unix and Windows without caring, most of the time, which build system was used underneath (plus the benefits of having testing, continuous integration and packaging almost without extra effort). Also, most of the time I find CMakeLists.txt files and CMake scripts readable and intuitive, since the commands are like mnemonics to me, unlike some other hieroglyphic syntaxes - unless somebody goes out of their way to make them unreadable.

In the end it has nothing to do with the build system. The added value is exactly the abstraction and the level of control, which allows me to either work with targets and required compiler-feature abstractions, or to easily set per-platform/per-toolchain compiler flags if I really need to - it doesn't limit me too much, but it also doesn't provide a general-purpose programming language in which everybody could wreak havoc and write an entire program inside build-system scripts. For me it's simply well balanced in every aspect, and readable, as it doesn't try to do everything and instead knows how to delegate to other systems what I need to get my job done.

Regards,
Domen
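The two levels of control mentioned above can be illustrated with a short sketch (the target and source names are made up for illustration):

```cmake
# Sketch: high-level feature abstraction vs. low-level per-toolchain flags.
add_library(my_lib STATIC my_lib.cpp)

# High level: request a language feature and let CMake derive the
# appropriate flag (e.g. -std=c++11) for whichever compiler is active.
target_compile_features(my_lib PUBLIC cxx_constexpr)

# Low level: per-platform/per-toolchain compiler flags when really needed.
if(MSVC)
  target_compile_options(my_lib PRIVATE /W4)
else()
  target_compile_options(my_lib PRIVATE -Wall -Wextra)
endif()
```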
On 01.12.2017 18:14, Roger Leigh via Boost wrote:
On 01/12/2017 22:09, Stefan Seefeld via Boost wrote:
On a more philosophical level, I think that CMake is an attempt to patch over the fact that the underlying build systems aren't portable. So the real fix to that problem, obviously, is to write a portable build system that would obviate the need for something like CMake.
No, I think this is missing the major selling point of why people use CMake. It's glue which can integrate with all the major extant build systems on all the major platforms.
Yes, no doubt, that's the selling point. But does it truly deliver on this promise? As I said, it works until it fails. And when it fails, who will have to help users figure out what the cryptic error messages mean?

The problem is of course all about division of labour. Some people are platform experts, others are application domain experts. So ideally they collaborate: the latter write abstract and portable build logic for their respective applications (or libraries, as the case may be), while the former make sure that this high-level, platform-agnostic logic maps correctly to platform-specific tool invocations. And while this separation of work is sound and useful, the way this is spelled out with CMake injects an extra layer in the middle, so the whole system requires expertise in three domains:

* the application domain
* the target platform (including the target build system)
* the CMake mapping logic

In an ideal world CMake would hide everything else underneath it. But in reality, the encapsulation leaks like a sieve, so users are forced to understand not only the target logic but also the mapping logic.
As a library and program author, my end users all want to have my software integrate with their existing systems. They wouldn't be happy if I dictated they use some spiffy but nonstandard and incompatible system. But that's basically what Boost does with b2, and this tool.
"Integration" may mean different things. On Unix, dependencies are typically dealt with via (binary) installations, i.e. tools like autoconf or pkg-config (which you mention yourself) are useful for detecting those and providing the flags I need to build *my* code using such third-party libraries. If you really want to build my library from source, you'll have to use the build logic of my choice; there is no way around that, as I can't maintain a build system I don't understand. But then again, who would want to build my library from source, other than potential contributors who are willing to understand the build system (which hopefully isn't too cryptic, hence my work on "this tool", as you prefer to name it, rather than b2)?
Some users want to build with make on Unix. Others want to build and edit within Visual Studio on Windows. Or use CLion. Or Xcode, or Eclipse. Or use MinGW or Cygwin on Windows, with the tool of their choice.
Now you are mixing different categories of things. Building with different compilers (the MSVC command line tools, mingw, cygwin, etc.) should of course work (and does, with b2 and Faber). That's what "portable build system" means. But using different build *systems* (including IDEs such as Visual Studio or Eclipse) is an entirely different thing. If people want build integration with other tools, they need to maintain that logic themselves.

I have seen enough error reports that I have no clue how to resolve to come to the conclusion that CMake goes after the problem in a very wrong way.

Regards,
Stefan

--
      ...ich hab' noch einen Koffer in Berlin...
2017-12-02 1:00 GMT+01:00 Stefan Seefeld via Boost
On 01.12.2017 18:14, Roger Leigh via Boost wrote:
On 01/12/2017 22:09, Stefan Seefeld via Boost wrote:
[...]
Yes, no doubt, that's the selling point. But does it truly deliver on this promise ? As I said, it works until it fails. And when it fails, who will have to help users figuring out what the cryptic error messages mean ? The problem is of course all about division of labour. Some people are platform experts, others are application domain experts. So ideally they collaborate in that the latter write abstract and portable build logic for their respective applications (or libraries as the case may be), while the former make sure that this high-level platform-agnostic logic maps correctly to platform-specific tool invocations.
There is another side of the coin here. I tend to move away from libraries which use custom build tools (Python-based or b2) if at all possible - and even between autotools and CMake projects I'll always prefer the CMake ones: both may fail, but with CMake + Makefiles I can fix the problem myself, while with autotools I can't (and I really hate to ask what's wrong, and hate a bit less googling what the solution is). CMake allows me to use the tools that I'm familiar with (e.g. Make instead of Ninja). It's the difference between being able to fix it myself, and whining to others or simply abandoning the library (with Boost the last option is a bit hard from time to time). There are of course people who don't know the build system on their platform of choice, but I'd guess that they'd use binaries if at all possible rather than venture into compiling the library at all.
And while this separation of work is sound and useful, the way this is spelled out with CMake injects an extra layer in the middle, so the whole system requires expertise in three domains:
* the application domain
* the target platform (including target build system)
* the CMake mapping logic
In an ideal world CMake would hide everything else underneath it. But in reality, the encapsulation leaks like a sieve, so users are forced not only to understand the target logic but also the mapping logic.
I may have too little experience, but from where I stand you are exaggerating this point quite a lot (though I do agree that Boost libraries quite often push compilers and error reports to the limit, so it may be a valid exception).
As a library and program author, my end users all want to have my software integrate with their existing systems. They wouldn't be happy if I dictated they use some spiffy but nonstandard and incompatible system. But that's basically what Boost does with b2, and this tool.
"Integration" may mean different things. In Unix dependencies are typically dealt with via (binary) installations, i.e. tools like autoconf or pkg-config (which you mention yourself) are useful in detecting those, and providing flags I need to use to build *my* code using such third-party libraries. If you really want to build my library from source, you'll have to use the build logic of my choice, there is no way around that, as I can't maintain a build system I don't understand. But then again, who would want to build my library from source, other than potential contributors who are willing to understand the build system (which hopefully isn't too cryptic, hence my work on "this tool", as you prefer to name it, rather than using b2).
One reason would be that my distro doesn't have the latest version and I need it. And no, I wouldn't bother to learn a new build system just for the sake of contributing to one project (unless really forced to), and I keep wondering how many others are like me.
Some users want to build with make on Unix. Others want to build and edit within Visual Studio on Windows. Or use CLion. Or Xcode, or Eclipse. Or use MinGW or Cygwin on Windows, with the tool of their choice.
Now you are mixing different categories of things. Building with different compilers (the MSVC command line tools, mingw, cygwin, etc.) should of course work (and does, with b2 and Faber). That's what "portable build system" means. But using different build *systems* (including IDEs such as Visual Studio or Eclipse) is an entirely different thing. If people want build integration with other tools, they need to maintain that logic themselves. I have seen enough error reports which I don't have any clue how to resolve that I came to the conclusion that CMake goes after the problem in a very wrong way.
Again I disagree here. This puts an overhead on me that my meta build system could solve, but can't, due to the choices of other authors. My point is that I like that different build systems exist and compete with each other, but I want a meta build system to support them, so that when I choose one over another I don't have to reinvent the entire build structure of my projects.

Regards,
Domen
2017-12-02 9:58 GMT+01:00 Domen Vrankar
2017-12-02 1:00 GMT+01:00 Stefan Seefeld via Boost
[...]
Stefan, I have one suggestion - maybe a stupid one, but that's for you to decide...

Since Faber is meant to be a cross-platform build system and CMake is a build system generator, you could perhaps start by competing with other build systems by attempting to integrate Faber into CMake as yet another build system alongside Makefiles, Ninja...

This way people like me could get used to your build system (if it proves that it has real advantages over the already existing alternatives), and you can see which features/workflows are missing from Faber and which could be improved in CMake, and which would perhaps map one-to-one between Faber and CMake. This way Faber gets more publicity and real-world experience, can better show its worth compared to other build systems, and people get a bit more familiar with Faber before you try to force their switch from a CMake+Faber combo to Faber only. An added bonus is that you only compete with other build systems, instead of trying to convince the uninitiated like myself to prefer a build system to the CMake build system generator.

C++ evolved on top of C, and CMake evolved on top of existing build systems, so I don't find it a bad idea to hitch a ride on CMake with Faber and improve them both by doing that.

Regards,
Domen
Hi Domen, On 02.12.2017 15:58, Domen Vrankar wrote:
Stefan I have one suggestion - maybe a stupid one but that's for you to decide...
Since Faber is meant to be a cross-platform build system and CMake is a build system generator, you could perhaps start by competing with other build systems by attempting to integrate Faber into CMake as yet another build system alongside Makefiles, Ninja...
What would be the point of that? Do CMake users really care what build system "backend" is being used? I thought the goal was for them to only interact with CMake itself? I expect Faber to get most of its publicity from its simple and portable interface, which wouldn't even be visible if it were used as a CMake backend.

Moreover, CMake as a build script generator produces extremely ugly and unreadable Makefiles. That's in the nature of the macro-language approach: the Makefile has degenerated into an intermediate representation, not something for a human eye to dwell on. Doing that with Faber would be entirely pointless, I think.

Though I may misunderstand either what you are suggesting, or even the way(s) in which users use CMake.
C++ evolved on top of C, CMake evolved on top of existing build systems so I don't find it a bad idea to hitch a ride on CMake with Faber and improve them both by doing that.
Not everyone agrees that it was to C++'s advantage that it was (initially) promoted as "C with objects". But let's not digress. I'm (obviously) not against the idea of layering new frontends over Faber. In fact, I have designed it to be usable as a library, precisely so people can extend Faber with frontends (for example graphical ones). But I think I'd spend my time more wisely focusing on Faber itself, its missing features, documentation, etc., and let those who like working with CMake add and improve bindings to build system backends. Stefan -- ...ich hab' noch einen Koffer in Berlin...
2017-12-02 22:34 GMT+01:00 Stefan Seefeld
[...]
What would be the point of that ? Do CMake users really care what build system "backend" is being used ? I thought the goal was for them to only interact with CMake itself ?
In this thread I've read that a build system is better than a meta build system because it removes one layer of complexity - that that one extra layer is the really problematic part, or something like that:
Yes. But despite its popularity, I disagree with CMake's approach on a very fundamental level (it being a build system generator, rather than a build system), as it makes things oh so much more complex. Everything works well until it doesn't, at which point all hell breaks loose. That's very typical for macro-based languages (think latex, m4). And it had to reinvent its own language, in a rather ad-hoc manner (i.e. it started with a simple declarative syntax, but then had to expand on that to cover other use-cases). "Now you have two problems." comes to mind (if I may paraphrase).
Most of the time when using CMake I don't care which build system is used - most of the time I don't even care which compiler or operating system it is used on. So the advantage of having a true build system rather than a meta build system is lost on me in 99% of cases. Which brings us to:
I expect Faber to get most publicity from its simple and portable interface, which wouldn't even be visible if it were used as a CMake backend. Moreover, CMake as a build script generator produces extremely ugly and unreadable Makefiles. That's in the nature of the approach of a macro-language. The Makefile has degenerated into an intermediate representation, not something for a human eye to dwell on. Doing that with Faber would be entirely pointless, I think.
I had to look at the generated Makefiles 3 or 4 times in the past 8 years, so it's nice to see something at least a bit familiar there - but it doesn't really matter, as usually I just have to look at the line in the generated file that failed, and more often than not that line is quite readable, since it just calls the compiler with certain flags or some external program. So with Faber I would expect to see a generated file with:

    gxx11 = gxx(name='g++11', features=cxxflags('--std=c++11'))
    hello = binary('hello', 'hello.cpp')

which would possibly be simpler to read. And for now this is the only "advantage" that I see in a build system that does the abstraction job of CMake, compared to using Makefiles in combination with a meta build system... I'm simply trying to see the benefits in Faber, and unfortunately that's the most that I came up with...

That's why I suggested the integration attempt. I was hoping that you could see a benefit in such a merger that would go beyond reimplementing CMake with a slightly different syntax and less abstraction, from what I can see from Faber's docs. I'm trying to see in Faber what you see, to make it worth your development time, and to be honest, after going through the documentation and this thread, I'm failing to see it. Your reply just added to my doubts, because if the only two benefits are not generating intermediate build files (which I extremely rarely look at) and a slightly different syntax, that simply doesn't cut it for me... If you'd manage to integrate it into CMake as a back-end that could become the only non-legacy back-end for C++ on all platforms in a couple of years, that would possibly make this something more than a "reimplementation of CMake with a small new twist".

Though I may misunderstand either what you are suggesting, or even the way(s) in which users use CMake.
I find the logic of spawning new computer languages with one new feature just for the sake of it really annoying, as it makes my life considerably harder (hype-driven development logic...), and the same goes for reinventing tools with only minor differences but a large impact on code-ecosystem fragmentation. On the other hand, I'm trying to keep as open a mind as possible, and that was my main reason for the suggestion. If CMake didn't exist and didn't have its fair share of the market, I'd say that Faber is a great idea on its own; but as it stands, I don't see it as an advancement, but more as a hype-because-I-can thing that would just fragment a small portion of the landscape a bit further, making my life harder while not adding any great change. I would really like to see in it what you obviously manage to see...
C++ evolved on top of C, CMake evolved on top of existing build systems so I don't find it a bad idea to hitch a ride on CMake with Faber and improve them both by doing that.
Not everyone agrees that it was to C++'s advantage that it was (initially) promoted as "C with objects". But let's not digress. I'm (obviously) not against the idea of layering new frontends over Faber. In fact, I have designed it to be usable as a library, precisely so people can extend Faber with frontends (for example graphical ones). But I think I'd spend my time more wisely focusing on Faber itself, its missing features, documentation, etc., and let those who like working with CMake add and improve bindings to build system backends.
http://www.stroustrup.com/bs_faq.html#whyC

Opinions differ, but my point was just the "C++ would have been stillborn." part. Everybody sees things from their own perspective, and only time tells which choices were feasible enough to make something survive.

To finish, I'd like to give you an example of how most of my CMakeLists.txt files look (they are quite similar, but most of the time at a more abstract level, where possible, than what Faber does):
#--------------------
cmake_minimum_required(VERSION "3.10.0" FATAL_ERROR)

set(CMAKE_CXX_STANDARD 14)

# search for dependencies
find_package(x1 1.2 REQUIRED CONFIG) # some package supporting cmake import targets
find_package(Boost REQUIRED) # some package for which somebody provided a find module (the Boost find module is messy, as Boost doesn't export a CMake-friendly targets file, but at least it's provided by CMake out of the box...)

# add sub projects/components
add_subdirectory(subproject_1)

# build a library
add_library(my_lib SHARED
  lib_stuff.cpp
  lib_stuff_2.cpp)
target_include_directories(my_lib
  PUBLIC
    $
Domen Vrankar wrote:
Since Faber is meant to be a cross-platform build system and CMake is a build system generator, you could perhaps start by competing with other build systems by attempting to integrate Faber into CMake as yet another build system alongside Makefiles, Ninja...
This makes no sense because Faber is an alternative to CMake. If you still have to use CMake, there's no point in using Faber. In other words, Faber competes with CMake, not with the CMake backends. Faber is not a backend, it's a frontend.
2017-12-03 1:02 GMT+01:00 Peter Dimov via Boost
Domen Vrankar wrote:
Since Faber is meant to be a cross-platform build system and CMake is a build system generator, you could perhaps start by competing with other build systems by attempting to integrate Faber into CMake as yet another build system alongside Makefiles, Ninja...
This makes no sense because Faber is an alternative to CMake. If you still have to use CMake, there's no point in using Faber.
In other words, Faber competes with CMake, not with the CMake backends. Faber is not a backend, it's a frontend.
And as I said, as a front end it doesn't really add anything worth mentioning - it just puts different make-up on it. If I have two ice creams that taste the same, I just waste more energy without getting any benefit - alternatives have to provide considerable, meaningful differences, at least at the start, before they converge on each other as they steal ideas from one another. I was just hoping that there is something non-obvious that makes the two a bit more different than creating a new tool just for the sake of it, and competing with Makefiles/Ninja instead of with CMake's front-end syntactic sugar would possibly be that hidden non-obvious thing that would get my attention - a far-fetched hope that was proven wrong.

Regards,
Domen
On 02.12.2017 19:12, Domen Vrankar via Boost wrote:
And as I said as a front end it doesn't really add anything worth mentioning - just puts a different make up on it.
Domen, no need to defend CMake. If you like it, by all means, keep using it. I have described at length what I think is wrong with CMake, and I know lots of people who agree with me (even if they don't think that Faber is the solution to those problems). I don't intend to convince CMake lovers to switch away from their tool of choice, but I do offer something for those who need a portable build system but aren't satisfied with either CMake or b2. Stefan -- ...ich hab' noch einen Koffer in Berlin...
2017-12-03 1:20 GMT+01:00 Stefan Seefeld via Boost
On 02.12.2017 19:12, Domen Vrankar via Boost wrote:
And as I said as a front end it doesn't really add anything worth mentioning - just puts a different make up on it.
Domen,
no need to defend CMake. If you like it, by all means, keep using it. I have described at length what I think is wrong with CMake, and I know lots of people who agree with me (even if they don't think that Faber is the solution to those problems). I don't intend to convince CMake lovers to switch away from their tool of choice, but I do offer something for those who need a portable build system but aren't satisfied with either CMake or b2.
You misunderstand me. I'm not defending CMake. Quite frankly, I wouldn't care if, at the point in time when CMake was created and started getting popular, Faber had been created instead and CMake were presented here now - I would "defend" Faber in that case. What I am against is the idea of creating alternatives without a large benefit (that's how I've seen Java compared to C++ all these years... development-landscape fragmentation instead of improving what exists in a compatible way). Every alternative that I stumble across makes my life harder, so I really like alternatives only when they are really meaningful and not just a matter of taste.

At work I'm using CMake. A few weeks ago I had to decide between the Botan and Cryptopp libraries, and I first saw Botan's "C++11 library" advertisement, so I thought I'd give it a try. Then I saw it has a non-CMake build system and got the "python not found" error message... So I downloaded Cryptopp, saw the CMakeLists.txt file, ran the build and noticed that target importing doesn't work - I found out that the CMake support is community supported, but I didn't have a problem fixing it and decided that I'll probably contribute a fix once I have the time. After that I just deleted Botan, as I knew that both can do what I need. Creating a new build system, without a real advantage that couldn't fairly easily be added to an already existing (meta) build system, is from where I stand just another obstacle which I'll have to learn to avoid, without getting anything more in return than I'd get if the author had used a common solution instead.

From Boost I always used the libraries, as I didn't have good alternatives. It uses b2, so I even thought about using it instead of CMake for my own projects (that was 5 years ago, if I remember correctly). Then I skimmed through the documentation and decided against it - I figured out that it'd be harder to explain to others than CMake and wouldn't make my life easier, as we still used some libraries that were CMake based. So since I just had to compile it for AIX and had the instructions/patches from IBM's site, I just built it and was OK with that.
I would have liked if b2 would create CMake import files and that would be it. Since b2 didn't get enough popularity outside Boost and CMake already provides a find package script for it it didn't change much for me anyway. When the "Boost moving to CMake" announcement came I just thought to myself "Interesting. I hope that now they'll finally provide a target import file for CMake" and that was it. But creating a new build system that could potentially become more popular and really fragment my workflow is a completely different thing - and that's what I'm against... I'm a bit afraid that what b2 didn't manage cause Faber would. Regards, Domen
2017-12-03 1:47 GMT+01:00 Domen Vrankar
2017-12-03 1:20 GMT+01:00 Stefan Seefeld via Boost
On 02.12.2017 19:12, Domen Vrankar via Boost wrote:
And as I said, as a front end it doesn't really add anything worth mentioning - it just puts different makeup on it.
Domen,
no need to defend CMake. If you like it, by all means, keep using it. I have described at length what I think is wrong with CMake, and I know lots of people who agree with me (even if they don't think that Faber is the solution to those problems). I don't intend to convince CMake lovers to switch away from their tool of choice, but I do offer something for those who need a portable build system but aren't satisfied with either CMake or b2.
You misunderstand me. I'm not defending CMake. Quite frankly, if Faber had been created back when CMake was created and started getting popular, and it were CMake being presented here now, I would "defend" Faber in that case.
What I am defending/am against is the idea of creating alternatives without a large benefit (that's how I have seen Java compared to C++ all these years... fragmenting the development landscape instead of improving the existing one in a compatible way). Every alternative that I stumble across makes my life harder, so I only welcome alternatives when they are really meaningful and not just a matter of taste.
At work I'm using CMake. A few weeks ago I had to decide between the Botan and Cryptopp libraries. First I saw Botan's "C++11 library" advertisement, so I thought I'd give it a try. Then I saw it has a non-CMake build system and got a "python not found" error message... So I downloaded Cryptopp, saw the CMakeLists.txt file, ran the build and noticed that target importing doesn't work. I found out that Cryptopp's CMake support is community-maintained, but I didn't have a problem fixing it and decided that I'll probably contribute a fix once I have the time. After that I just deleted Botan, as I knew that both can do what I need.
Creating a new build system without a real advantage that couldn't fairly easily be added to an already existing (meta) build system is, from where I stand, just another obstacle I'll have to learn to avoid, without getting anything more in return than I'd get if the author had used a common solution instead.
From Boost I always used just the libraries, as I didn't have good alternatives. Boost uses b2, so I even thought about using it instead of CMake for my own projects (that was 5 years ago, if I remember correctly). Then I skimmed through the documentation and decided against it: I figured that it'd be harder to explain to others than CMake and wouldn't make my life easier, as we still used some libraries that were CMake based. Since I just had to compile Boost for AIX and had the instructions/patches from IBM's site, I simply built it and was OK with that. I would have liked it if b2 generated CMake import files, and that would be it.
Since b2 didn't get enough popularity outside Boost, and CMake already provides a find-package script for Boost, it didn't change much for me anyway. When the "Boost moving to CMake" announcement came I just thought to myself, "Interesting. I hope that now they'll finally provide a target import file for CMake", and that was it.
But creating a new build system that could potentially become more popular and really fragment my workflow is a completely different thing - and that's what I'm against... I'm a bit afraid that Faber might cause the fragmentation that b2 never managed to.
Sorry for another "how about" mail... I've thought a bit further about Faber, what it tries to do and why I dislike it, and:

How about adding commands to Faber that generate import target files for CMake, and perhaps some additional interoperability features that wouldn't make my life harder (perhaps even contributing something to CMake in order to achieve that), for the case where two projects use different (meta) build systems? That way you keep your experimental tool, which could someday really become better than CMake and a new de facto standard, while not making things harder and more divided than they need to be.

I just don't want Faber to pretend that it lives inside a bubble and that CMake doesn't exist, or is a competitor that you don't want to know about or interoperate with - just make them interoperable and see if it survives. And I'll somehow try to convince myself to install another dependency (Python) just for the sake of using it when either Boost or other non-Boost libraries start using it. Faber is years too late to not interoperate with CMake and expect people not to care about that. Regards, Domen
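To make the suggestion above concrete: here is a minimal, purely illustrative sketch (not Faber's actual API, and the helper name is made up) of what "generating a CMake import file" could mean. A build tool that already knows a library's name, built artefact path, and include directory can emit a `<Name>Config.cmake` exposing an IMPORTED target, so CMake-based consumers only need `find_package(Foo)` and `target_link_libraries(... Foo::Foo)`.

```python
from pathlib import Path

# Hypothetical illustration: emit a minimal CMake package-config file
# for an already-built static library, exposing an IMPORTED target.
CONFIG_TEMPLATE = """\
add_library({name}::{name} STATIC IMPORTED)
set_target_properties({name}::{name} PROPERTIES
  IMPORTED_LOCATION "{libpath}"
  INTERFACE_INCLUDE_DIRECTORIES "{incdir}")
"""

def write_cmake_config(name, libpath, incdir, outdir):
    """Write <name>Config.cmake into outdir and return its path."""
    out = Path(outdir) / f"{name}Config.cmake"
    out.write_text(CONFIG_TEMPLATE.format(name=name, libpath=libpath,
                                          incdir=incdir))
    return out

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as tmp:
        cfg = write_cmake_config("Foo", "/usr/local/lib/libfoo.a",
                                 "/usr/local/include", tmp)
        print(cfg.read_text())
```

A real implementation would also need version files and per-configuration locations, but even this much would let a CMake project consume a Faber-built library without caring which tool built it.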
On 03.12.2017 05:02, Domen Vrankar wrote:
How about you add ...
How about you contribute patches?
Faber is years too late to not inter operate with CMake and expect people not to care about that.
I think you misunderstand something fundamental about the dynamics of Open Source projects. People who care will certainly get involved and give this project a direction they deem useful. Just complaining and throwing around advice doesn't have that effect. Stefan -- ...ich hab' noch einen Koffer in Berlin...
2017-12-03 13:52 GMT+01:00 Stefan Seefeld
On 03.12.2017 05:02, Domen Vrankar wrote:
How about you add ...
How about you contribute patches?
As long as it doesn't start popping up all over the place, I don't care enough about it to contribute to it. From where I stand, at the moment I'd be glad if it didn't even exist. It doesn't solve any of my problems but adds to them instead, so why would I bother with it any more than to either convince you to stop ignoring the existing solution, or convince others to forget about it and let it sink?
Faber is years too late to not inter operate with CMake and expect people not to care about that.
I think you misunderstand something fundamental about the dynamics of Open Source projects. People who care will certainly get involved and give this project a direction they deem useful. Just complaining and throwing around advice doesn't have that effect.
I understand the dynamics behind open source and know that most things I hear about are hyped long before they are useful. But after the hype is gone they either fall into the forgotten basket, or they did something right. CMake evolved, reinvented itself and so on, so it survived and prospered; b2 didn't adapt, didn't become popular, and is now slowly going down the drain. For Faber I hope that it either burns out before it becomes my problem, or succeeds enough that investing in learning it and making patches for it would be feasible. As it currently stands it's just something that could give me a headache in the future, so I don't see a reason to contribute in order for it to succeed. If I were the author of something like Faber I'd either keep it as my hobby project and never bother announcing it on a high-traffic mailing list, or I'd try to make it work nicely with the current big players. But it's always the author's choice where the project is going, and the contributor's choice whether he cares enough to invest coding time. Regards, Domen
On 02/12/17 21:34, Stefan Seefeld via Boost wrote:
Hi Domen,
On 02.12.2017 15:58, Domen Vrankar wrote:
Stefan I have one suggestion - maybe a stupid one but that's for you to decide...
Since Faber is meant to be a cross platform build system and CMake is a build system generator you could perhaps start by competing with other build systems by attempting to integrate Faber into CMake as yet another build system along side Makefiles, Ninja...
What would be the point of that? Do CMake users really care what build system "backend" is being used? I thought the goal was for them to only interact with CMake itself? I expect Faber to get most publicity from its simple and portable interface, which wouldn't even be visible if it were used as a CMake backend.
The main point of this would be to lower the barrier to entry for using Faber. If I wasn't using CMake, I would never have had a reason to try out the Ninja generator, and I would have stuck with plain old Makefiles. The cost of switching build systems for moderately large projects is huge, and switching by rewriting all the build logic is a big task, particularly when you can't know up front whether it's going to be worth the effort, or what the full cost/benefit of migrating will be. I've experienced this first hand with project conversions to the autotools in the early 2000s and to CMake over the last few years. It turned out Ninja was a pretty good build tool, and is much faster than make, so I got to use a nice tool which I'd otherwise have ignored, irrespective of its merits, because I couldn't justify rewriting everything from scratch.

Being able to generate Faber build configuration with CMake provides similar possibilities. You can get exposure and real-world usage with existing big projects, without projects having to commit huge resources to wholesale conversion. This gets you extra testing and exposure, and it lets other projects experiment with Faber when realistically they would not be able to even think about it otherwise, due to the cost.

It might be the case that the stuff CMake generates isn't as æsthetically pleasing or as efficient as hand-written files; this is certainly the case for Ninja, and a bit for make as well. But when the alternative is not using the tool at all, it's a compromise I can live with.

You are correct that often we don't care about the CMake generator in use, if all we want is to build stuff. But sometimes choosing a specific backend is useful. I use Ninja over make when the system has enough memory and CPUs to benefit from it, and have it peg several dozen cores to the max (it can make small systems suffer horribly). I use various IDE generators when I need to use an IDE.
If Faber has specific advantages which differentiate it from all the others, then CMake support would let users opt into using those distinguishing features when they need them. Please do think about it. It's an effective way to get Faber more exposure, by allowing all the thousands of projects using CMake to build with it. Regards, Roger
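Roger's point about interchangeable backends can be sketched with a toy "generator" (purely illustrative; this is not CMake's or Faber's actual code): a meta-build system keeps one abstract description of a target, and each backend is just a different serialization of it. Emitting Faber fabscripts would, in principle, be one more such serializer alongside Makefiles and Ninja files.

```python
# Toy illustration of a meta-build "generator": one abstract target
# description, two backend emitters. Target and rule names are made up.
target = {
    "output": "hello",
    "inputs": ["hello.o", "main.o"],
    "command": "c++ -o hello hello.o main.o",
}

def emit_make(t):
    # Classic Makefile rule: "output: inputs", then a tab-indented recipe.
    return f"{t['output']}: {' '.join(t['inputs'])}\n\t{t['command']}\n"

def emit_ninja(t):
    # Ninja separates the rule (command template) from the build statement.
    return ("rule link\n  command = c++ -o $out $in\n"
            f"build {t['output']}: link {' '.join(t['inputs'])}\n")

if __name__ == "__main__":
    print(emit_make(target))
    print(emit_ninja(target))
```

Because the abstract description carries everything the backends need, a project maintained in the meta-build system can try a new backend essentially for free, which is exactly the low-cost experimentation path being argued for here.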
On 4 December 2017 at 23:59, Roger Leigh via Boost
I concur. I use the CLion IDE on Windows, Linux and Mac. It gives a consistent, reliable, useful interface on all systems, without the horror of using open source editors (none of which since Emacs ever work). It uses CMake files as its project files. If Faber doesn't support this, or CLion doesn't support Faber, I'll never use Faber. Improve CMake, please don't unleash yet another C++ build tool on the world. We only need one - that works. R
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On 05.12.2017 06:41, Richard Hodges via Boost wrote:
Improve CMake, please don't unleash yet another c++ build tool on the world. We only need one - that works.
We should all have contributed to MS-DOS to make it better, back then. Look what a mess we are in now... Stefan -- ...ich hab' noch einen Koffer in Berlin...
On 5 December 2017 at 13:20, Stefan Seefeld via Boost wrote:
On 05.12.2017 06:41, Richard Hodges via Boost wrote:
Improve CMake, please don't unleash yet another c++ build tool on the world. We only need one - that works.
We should all have contributed to MS-DOS to make it better, back then. Look what a mess we are in now...

The mess is because Microsoft developed their own rather than using a version of Unix.
On 05.12.2017 07:29, Richard Hodges via Boost wrote:
On 5 December 2017 at 13:20, Stefan Seefeld via Boost wrote:
On 05.12.2017 06:41, Richard Hodges via Boost wrote:
Improve CMake, please don't unleash yet another c++ build tool on the world. We only need one - that works.
We should all have contributed to MS-DOS to make it better, back then. Look what a mess we are in now...
The mess is because Microsoft developed their own rather than using a version of Unix.

You are of course right. But that is not really the point I was trying to make. :-)
Stefan -- ...ich hab' noch einen Koffer in Berlin...
On 30/11/2017 01:59, Hans Dembinski wrote:
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing".
A little more context gives it meaning (though with the US spelling):
https://www.google.co.nz/search?q=build+artifact It's a reasonably well-known phrase, I thought.
On 1. Dec 2017, at 07:11, Gavin Lambert via Boost wrote:
On 30/11/2017 01:59, Hans Dembinski wrote:
from faber.artefacts...: artefacts? The term "artefact" is very general and non-descriptive. The first definition provided by Google is essentially "human-made thing". A little more context gives it meaning (though with the US spelling):
https://www.google.co.nz/search?q=build+artifact
It's a reasonably well-known phrase, I thought.
The first link in this Google search is (at least in my search bubble) https://en.wikipedia.org/wiki/Artifact_(software_development): "An artifact occasionally may be used to refer to the released code (in the case of a code library) or released executable (in the case of a program) produced, but the more common usage is in referring to the byproducts of software development rather than the product itself." The quote says it. In addition, the article lists many other meanings of "artifact", and the word itself has many other meanings in other contexts, like in compression or archeology. It is not a well-defined term. A word that has so many meanings means nothing in the end. I believe CMake calls "targets" what Faber calls "artefacts". The meaning of "target" is pretty clear to me in the context of a build system. I think I will stop here...
On Wed, Nov 22, 2017 at 3:00 AM, Richard Hodges via Boost < boost@lists.boost.org> wrote:
I am strongly of the view that C++ needs a standard tool for build, IDE project generation, toolset selection, dependency management, testing and deployment. I find it deeply disturbing that one cannot simply write a project that pulls in 3 or 4 libraries and then cross-compile it for any target with one command.
FWIW, the C++ standards committee is in the process of setting up a new Study Group for Tooling. Titus Winters will be the chair. I suspect it will initially focus on tooling to support P0684, "C++ Stability, Velocity, and Deployment Plans" (see https://wg21.link/p0684), but it is a sign of increased interest in the whole C++ ecosystem. --Beman
participants (13)
- Beman Dawes
- Domen Vrankar
- Dominique Devienne
- Gavin Lambert
- Hans Dembinski
- James E. King, III
- Julian Faber
- Paul A. Bristow
- Peter Dimov
- Rene Rivera
- Richard Hodges
- Roger Leigh
- Stefan Seefeld