On 11/6/23 20:30, Robert Ramey via Boost wrote:
On 11/6/23 5:13 AM, Andrey Semashev via Boost wrote:
Now, if you mean compiler bugs and such that affect your library, then the failure is legitimate, as it will likely affect your users that also use that compiler.
The solution here would be to implement workarounds in the library or,
Boost libraries are only required to support standard C++ at the time they are first released. That's all. For a large library, it's a huge amount of work to address all the problems in all the compiler implementations.
Supporting only strictly conforming compilers isn't practical, because there are no such compilers in the real world. If you want your library to be useful as more than an academic exercise, you will have to face real users running real compilers, with all their bugs and missing C++ features.
if not feasible, declare that compiler unsupported
(preferably, in the library docs) and remove it from testing.
Or just leave the failure in the test matrix - it's effectively self-documenting.
No, it's not. Not for an outside viewer.
The test matrix shows all the tests x all the environments. One can easily see if any failure is general or isolated to a particular environment. The current CI just registers pass/fail for the whole library and all the environments.
The only benefit that the test matrix provides is the breakdown by individual tests.
That is an indispensable benefit.
Evidently, I can manage without. In return I get much more valuable benefits, like faster turnaround, testing PRs and feature branches, notifications, and most importantly, being able to see the build/test logs (which were often missing last time I used the test matrix).
With CI, you typically have to search through the logs to know which test has failed. You do get the breakdown by jobs (which are equivalent to environments). It may not be presented the same way as in the matrix, and you may prefer one presentation or the other, but the information is there.
This is a hugely time-consuming exercise!
Not really. Ctrl+F, type "...fail", it takes about 5 seconds for me.
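For what it's worth, that search can also be scripted. A sketch using the GitHub CLI; the run id and repository are placeholders:

    # Fetch a CI run's log and search it for b2's "...fail" marker.
    gh run view 123456789 --log -R boostorg/some_lib | grep -n "\.\.\.fail"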
Sometimes someone will suggest skipping a particular test for a particular library so the CI comes off clean. This is basically hiding the error. Users considering using a library in their own environment are misled into thinking that the library works everywhere - which is demonstrably untrue. It's a fraud.
As I said above, it's not about hiding a failure and trying to deceive the user.
LOL - it's hiding the failure so the developer can convince himself that there are no issues.
That would be a pointless exercise, as he would rather soon discover many new things about himself and his library from his users. Or not, which is worse, because that would mean he has no users left.
It's about supporting or not supporting a configuration. A CI is supposed to test the configurations that you intend to support and highlight problems with those configurations, should they appear. The configurations you don't support need not be present in the CI, as you don't care whether they are failing or not.
I don't see it that way at all. I see it as a display of the current status of the library in light of the environment we're stuck in. It's about providing visibility as to the current state of the library. We write to the standard - not to individual compiler quirks.
Again, if you want your libraries to be practically useful, you do need to take compiler quirks into account, however irritating it may be.
Furthermore, as a user, CI (or the test matrix) is not going to be the first place where I look to see whether the library supports my configuration.
why not?
My first choice would probably be the documentation,
Which is generally not maintained / updated to reflect test output. Especially when the test script has been updated to hide the failures.
and
if it lacks the information, I will probably try the library myself.
Glad you have the time for that.
The reason why CI or the test matrix is not useful for this is because, as a user, I have no idea what the test results show. If a test is failing, I have no idea what it is testing, and why it is failing. Is the core functionality broken? Is it some corner case that I will never hit? Is it some intermittent failure e.g. due to network issues? Even discovering the testing configuration (compiler version, installed libraries, build options, etc.) may not be a trivial task for an outside user - even less trivial if the test results are in a less commonly used CI service.
Actually, the current test matrix has a facility whereby one can click on a failing cell and it opens a new tab with the output from that particular test. Very quick and useful.
Useful to whom? Certainly not to an outside user, who has no idea of your tests and what the failure signifies. At most, he gathers that your library doesn't work on that compiler, which may or may not be true.
Bottom line is, CI (or the test matrix) is a tool primarily for the library maintainer, who can interpret its results and act accordingly.
which is not me.
For me, as a maintainer, CI does a better job than the test matrix, despite the shortcomings it has. And I'm not denying that there are shortcomings.
Good for you.
However, CI vs. test matrix is a bit off-topic in this discussion.
Hmmm - it is the discussion. My original point is that CI is useless for improving library quality. The test matrix is better but the scripts are not being maintained.
As I've suggested elsewhere, maybe the solution is to:
a) make a script to be run before release which verifies that the master and develop branches are in sync (a minimal version is sketched below);
b) OR maybe eliminate the develop branch entirely;
c) add test-matrix-like output to the CI.
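For illustration, the check in a) can be quite small. A sketch, assuming the library is hosted under boostorg; "some_lib" is a placeholder name:

    #!/bin/sh
    # Sketch: fail if a library's develop branch has commits that are not
    # on master, i.e. the two branches are out of sync.
    git clone -q https://github.com/boostorg/some_lib.git
    cd some_lib || exit 1
    ahead=$(git rev-list --count origin/master..origin/develop)
    if [ "$ahead" -ne 0 ]; then
        echo "some_lib: develop is $ahead commit(s) ahead of master" >&2
        exit 1
    fi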
I would be ok with a), but this is one more burden on the release managers, who probably already have their plate full. I think it would be better to have a more distributed solution that everyone would be able to use on their own, or some sort of automation. For b), it looks like people still find develop useful, so removing it probably won't be acceptable. As for c), the CI is an external service not under our control. I'm not sure how one would transform its UI into something like the test matrix. Perhaps through its public API? In any case, it doesn't look trivial, and given that everyone (except you) seems content with the current UI, it is unlikely that this will be implemented any time soon.
Technically, you can configure CI to test however you like, including testing your feature branches and PRs against master. You can even test both against develop and master, if you like, although it will double the testing time. You write the CI scripts, including checkout commands, so you decide.
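For example, the checkout portion of such a CI script, testing the current feature branch or PR against the master superproject, might look roughly like this. This is a sketch only: "mylib" is a placeholder library name, and the usual boostdep-based layout is assumed:

    # Clone the superproject at master and graft the library under test into it.
    git clone -b master --depth 1 https://github.com/boostorg/boost.git boost-root
    cd boost-root
    git submodule update --init --depth 1 tools/boostdep
    mkdir -p libs/mylib && cp -r "$GITHUB_WORKSPACE"/. libs/mylib
    python tools/boostdep/depinst/depinst.py mylib   # fetch its dependencies
    ./bootstrap.sh && ./b2 libs/mylib/test           # build and run the tests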
The development overhead now includes CI scripting. Add that to Git, GitHub, b2, CMake, documentation tools, and assorted other tools one needs to keep up with, and it's a real burden on developers. No wonder we have difficulty attracting more Boost developers!
Well, you can't have your cake and eat it too. Nothing is free; testing flexibility has its price too. Also, I do not think the need to know how to set up GitHub Actions CI is a significant factor in scaring off new developers; it's not a factor at all. People are much more likely to be familiar with GHA than with our test matrix, Boost.Build, and review process.
However, as long as our workflow includes the develop branch, it doesn't make sense to test against master, as you will be merging your changes into develop, not master. If you want to test whether your changes will not break master, you could create a PR merging develop to master and run the CI on that, but that is separate from feature branches and other PRs that target develop.
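If you go that route, the GitHub CLI can open such a throwaway PR in one command (a sketch; the title and body are arbitrary):

    # Open a draft PR merging develop into master, purely to trigger the CI.
    gh pr create --base master --head develop --draft \
        --title "CI check: develop -> master" --body "Do not merge."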
But then you merge the feature branch to develop, right?
No.
Yet what you're saying below sounds like "yes".
a) I test the feature branch on my own machine - no CI (see the sketch after this list). Review test-matrix-like output. For Boost dependencies I test against master.
b) Then I merge the feature branch onto my local develop branch - again using the master branch for other Boost dependencies.
c) Then I push the changes on my local copy of the develop branch to the GitHub version.
d) Watch the test matrix. Unfortunately, this tests against the develop branch of other Boost libraries, so sometimes it reveals a bug / dependency issue in another Boost library. Then I have to harass the current maintainer - if there is one - or post a message here.
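Steps a)-c), expressed as shell commands. A rough sketch: "mylib" and "feature/foo" are placeholders, and the local superproject is assumed to already have the other libraries checked out on master:

    cd boost-root/libs/mylib
    git checkout feature/foo     # a) test the feature branch locally, no CI
    ../../b2 test                #    run the library's tests, inspect the output
    git checkout develop
    git merge feature/foo        # b) merge the feature branch onto local develop
    ../../b2 test                #    retest the merged result
    git push origin develop      # c) push local develop to GitHub, then
                                 # d) watch the test matrix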