I have no solution for this, but I note that we have neither CI nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.

I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.

Any thoughts/solutions?

Cheers, John.
On 1/23/21 2:20 PM, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
Any thoughts/solutions?
As a maintainer of Boost.Atomic, I would appreciate more hardware architectures, especially ARM. The real hardware is not mandatory for this, as it could be emulated with QEMU, although the performance would suffer (but could still be acceptable). Currently, I'm running this setup locally. The question is who is going to run these QEMU VMs in the cloud and how much it will cost.
On Sat, Jan 23, 2021 at 6:14 AM Andrey Semashev via Boost < boost@lists.boost.org> wrote:
On 1/23/21 2:20 PM, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
Any thoughts/solutions?
As a maintainer of Boost.Atomic, I would appreciate more hardware architectures, especially ARM. The real hardware is not mandatory for this, as it could be emulated with QEMU, although the performance would suffer (but could still be acceptable). Currently, I'm running this setup locally. The question is who is going to run these QEMU VMs in the cloud and how much it will cost.
I've got a couple of Raspberry Pi 4's that are running tests (slowly, it takes 20+ hrs to run the test suite... any earlier models just didn't have enough RAM). Look at the teeks99-05* (armv71/armhf) and teeks99-06* (aarch64). If anyone has access to a RISC-V development board, I'd like to get my hands on one of those to start running too. I've debated trying QEMU for more variety, but that has always taken a back seat to getting the working compilers providing results more quickly.

At one point we had some person/group/company that was targeting Android and had the tests running. That seems like a big hole in what we test... being the most widely used computing platform and all. Not sure what would be needed to get that going again.

Tom
On 1/23/21 3:47 PM, Tom Kent via Boost wrote:
I've got a couple raspberry pi 4's that are running tests (slowly, takes 20+hrs to run the test suite...any earlier models just didn't have enough ram). Look at the teeks99-05* (armv71/armhf) and teeks99-06* (aarch64).
I appreciate your and all the other testers' efforts in running the test suite, but I must confess that I pay almost no attention to the official test matrix these days because:
- No notifications of build/test failures or test completions. Having to manually visit the page from time to time is a problem, especially given the...
- Slow turnaround times. As I remember, for some testers the turnaround time was days and weeks. For others it was better, but still not as good as AppVeyor, which usually sends the notification within a few hours of the commit.
- Problematic debugging. Often the report shows a failure, but the error log is not accessible. This seems to be a long-standing problem. This makes the whole testing process pointless, as I cannot do anything about the failures.
I wish the current testing infrastructure were replaced with something more modern, CI-style, as I don't believe the above issues will be fixed any time soon.
If anyone has access to a RISC-V development board, I'd like to get my hands on one of those to start running too.
I've debated trying QEMU for more variety, but that has always taken a back seat to getting the working compilers providing results more quickly.
Given that compile times probably dominate the run times, I wonder if it would be better to do a cross-compile on x86 and then run on the real hardware or a QEMU VM, in terms of test turnaround. I realize that this setup is much more complex, but it could be the game changer.
Andrey Semashev via Boost said: (by the date of Sat, 23 Jan 2021 18:38:04 +0300)
- No notifications of build/test failures or test completions. Having to manually visit the page from time to time is a problem, especially given the...
you might want to switch to the GitLab package. No need to use their site; it is also packaged e.g. for Debian, to use on any server. Recently I configured CI for ppc64el and arm64 [1]; there is a full log in the pipeline [2] and an email alert in case of failure.

I needed to install qemu-user-static version above 5.0-14 on the host:

    apt-get install binfmt-support qemu-user-static

Then this command starts docker inside the emulator:

    docker run -it --rm multiarch/debian-debootstrap:ppc64el-bullseye bash

best regards
Janek

[1] https://gitlab.com/yade-dev/trunk/-/merge_requests/588
[2] e.g. https://gitlab.com/yade-dev/trunk/-/pipelines/245472012 click the "Show complete raw" icon.
-- # Janek Kozicki http://janek.kozicki.pl/
On 23/01/2021 15:38, Andrey Semashev via Boost wrote:
On 1/23/21 3:47 PM, Tom Kent via Boost wrote:
I've got a couple raspberry pi 4's that are running tests (slowly, takes 20+hrs to run the test suite...any earlier models just didn't have enough ram). Look at the teeks99-05* (armv71/armhf) and teeks99-06* (aarch64).
I appreciate your and all other testers efforts in running the test suite, but I must confess that I pay almost no attention to the official test matrix these days because:
- No notifications of build/test failures or test completions. Having to manually visit the page from time to time is a problem, especially given the...
- Slow turnaround times. As I remember, for some testers the turnaround time was days and weeks. For others it was better, but still not as good as AppVeyor, which usually sends the notification in a few hours after the commit.
- Problematic debugging. Often the report shows a failure, but the error log is not accessible. This seems to be a long standing problem. This makes the whole testing process pointless, as I cannot do anything about the failures.
I wish the current testing infrastructure was replaced with something more modern, CI-style, as I don't believe the above issues will be fixed any time soon.
This is much how I feel - I confess to having been a CI Luddite when it first appeared, but there's no question it does streamline things.
On 1/23/21 7:38 AM, Andrey Semashev via Boost wrote:
On 1/23/21 3:47 PM, Tom Kent via Boost wrote:
I've got a couple raspberry pi 4's that are running tests (slowly, takes 20+hrs to run the test suite...any earlier models just didn't have enough ram). Look at the teeks99-05* (armv71/armhf) and teeks99-06* (aarch64).
I appreciate your and all other testers efforts in running the test suite, but I must confess that I pay almost no attention to the official test matrix these days because:
The boost test matrix is the most complete and reliable display of the current state of the boost packages. I do run the more modern CI stuff, but it's often failing for some issue or another totally unrelated to my library. The test matrix is the gold standard as far as I'm concerned.
- No notifications of build/test failures or test completions. Having to manually visit the page from time to time is a problem, especially given the...
This bothers me not at all.
- Slow turnaround times. As I remember, for some testers the turnaround time was days and weeks. For others it was better, but still not as good as AppVeyor, which usually sends the notification in a few hours after the commit.
- Problematic debugging. Often the report shows a failure, but the error log is not accessible. This seems to be a long standing problem. This makes the whole testing process pointless, as I cannot do anything about the failures.
I have difficulties sifting through the test output on all platforms. (I've been roundly ridiculed for this complaint. But it means nothing to me - I wear their ridicule as a badge of honor.) I have my own solution which I run on my own machine - library_status - which presents a table that is actually more useful to me than the official boost one, not to mention the Appveyor one. Now if I could get library_status to run as part of the CI solutions ...
I wish the current testing infrastructure was replaced with something more modern, CI-style, as I don't believe the above issues will be fixed any time soon.
I've made a worthy proposal for that (to be used in addition to the current boost test matrix). Again, got a lot of ridicule on that one too.
On 1/23/21 10:07 PM, Robert Ramey via Boost wrote:
On 1/23/21 7:38 AM, Andrey Semashev via Boost wrote:
On 1/23/21 3:47 PM, Tom Kent via Boost wrote:
I've got a couple raspberry pi 4's that are running tests (slowly, takes 20+hrs to run the test suite...any earlier models just didn't have enough ram). Look at the teeks99-05* (armv71/armhf) and teeks99-06* (aarch64).
I appreciate your and all other testers efforts in running the test suite, but I must confess that I pay almost no attention to the official test matrix these days because:
The boost test matrix is the most complete and reliable display of the current state of the the boost packages.
Sure, if you want to see the status of the whole Boost. I, as a library maintainer, am more interested in my specific library status, after having pushed a commit. This information is not readily provided by the test matrix.
I do run the more moder CI stuff, but it's often failing for some issue or another totally unrelated to my library. It's the gold standard as far as I'm concerned.
In my experience, the official test matrix is not more reliable than CI, when it comes to random failures. More than once I've seen failures caused by configuration errors on a tester's machine (e.g. compiler executable not found). There were also weird failures for no apparent reason, which turned out to be the result of updates on the tester's machine. CI images are more stable, and you can also install necessary dependencies for testing. There are some quirks, and the installation can fail from time to time, but in general I would say the CI errors are more actionable.
- Problematic debugging. Often the report shows a failure, but the error log is not accessible. This seems to be a long standing problem. This makes the whole testing process pointless, as I cannot do anything about the failures.
I have difficulties sifting through the test output on all platforms.
The problem is not too much output (that wouldn't be a problem at all). The problem is that you often get no output at all.
(I've been roundly ridiculed for this complaint. But it means nothing to me - I wear their ridicule as a badge of honor.) I have my own solution which I run on my own machine - library_status which presents a table which is actually more useful to me than the official boost one not to mention the Appveyor one. Now If I could get library_status to run as part of the CI solutions ...
A short status table might be nice, but that is not my complaint. I can do without such a table just fine. I can't do without the build and test output in case of failure.
I wish the current testing infrastructure was replaced with something more modern, CI-style, as I don't believe the above issues will be fixed any time soon.
I've made a worthy proposal for that (to be used in addition to the current boost test matrix). Again, got a lot of ridicule on that one too.
I haven't seen this, sorry.
On Sat, Jan 23, 2021 at 1:32 PM Andrey Semashev via Boost < boost@lists.boost.org> wrote:
On 1/23/21 10:07 PM, Robert Ramey via Boost wrote:
On 1/23/21 7:38 AM, Andrey Semashev via Boost wrote:
On 1/23/21 3:47 PM, Tom Kent via Boost wrote:
I've got a couple raspberry pi 4's that are running tests (slowly,
takes
20+hrs to run the test suite...any earlier models just didn't have enough ram). Look at the teeks99-05* (armv71/armhf) and teeks99-06* (aarch64).
I appreciate your and all other testers efforts in running the test suite, but I must confess that I pay almost no attention to the official test matrix these days because:
The boost test matrix is the most complete and reliable display of the current state of the the boost packages.
Sure, if you want to see the status of the whole Boost. I, as a library maintainer, am more interested in my specific library status, after having pushed a commit. This information is not readily provided by the test matrix.
I do run the more moder CI stuff, but it's often failing for some issue or another totally unrelated to my library. It's the gold standard as far as I'm concerned.
In my experience, the official test matrix is not more reliable than CI, when it comes to random failures. More than once I've seen failures caused by configuration errors on tester's machine (e.g. compiler executable not found). There were also weird failures for no apparent reason, which turned out to be the result of updates on the tester's machine. CI images are more stable, and you also can install necessary dependencies for testing. There are some quirks, and the installation can fail from time to time, but in general I would say the CI errors are more actionable.
I agree that the current setup is far from ideal, and there are lots of off-the-shelf CI setups that library authors *should* be depending on for per-commit testing. However, there are two aspects that are often missed in per-library CI testing:
1. Integration with the rest of Boost. If a change is made in one of the libraries up near the top of the dependency graph, how does the library author know if it breaks some feature in something that depends on it? For most changes, especially ones that don't affect the library's API, I'd hope this isn't common. However, since Boost is an integrated product, as a community it is something we should think about.
2. Most CI setups I've seen run a very limited number of compilers. Boost's matrix has dozens of different compiler versions, several different sets of compiler options, and multiple architectures (this used to be more vibrant).
I think there is a need for both types of testing in Boost.
- Problematic debugging. Often the report shows a failure, but the error log is not accessible. This seems to be a long standing problem. This makes the whole testing process pointless, as I cannot do anything about the failures.
I have difficulties sifting through the test output on all platforms.
The problem is not too much output (that wouldn't be a problem at all). The problem is that you often get no output at all.
There are lots of problems with the current regression test setup. Output is one of them. Runners with bad configs are another. <aside>We also have users with bad configs; depending on the config problem, Boost library authors need to do a much better job at making config problems apparent to users.</aside>
From my side, the actual regression test tools are barely-held-together-with-duct-tape bad. They only work in Python 2, with lots of wasted time on git overhead and log processing, and janky FTP uploads.
(I've been roundly ridiculed for this complaint. But it means nothing to me - I wear their ridicule as a badge of honor.) I have my own solution which I run on my own machine - library_status which presents a table which is actually more useful to me than the official boost one not to mention the Appveyor one. Now If I could get library_status to run as part of the CI solutions ...
A short status table might be nice, but that is not my complaint. I can do without such a table just fine. I can't do without the build and test output in case of failure.
I wish the current testing infrastructure was replaced with something more modern, CI-style, as I don't believe the above issues will be fixed any time soon.
I've made a worthy proposal for that (to be used in addition to the current boost test matrix). Again, got a lot of ridicule on that one too.
I haven't seen this, sorry.
I'd love to see proposals for:
1. Fixing the Boost-wide regression test system.
2. Getting CI best practices for Boost that can be easily pulled into a library's standalone testing system.
Tom
I've got a couple raspberry pi 4's that are running tests (slowly, takes 20+hrs to run the test suite...any earlier models just didn't have enough ram). Look at the teeks99-05* (armv71/armhf) and teeks99-06* (aarch64).

Thanks Tom, I had missed those!
If anyone has access to a RISC-V development board, I'd like to get my hands on one of those to start running too.
I've debated trying QEMU for more variety, but that has always taken a back seat to getting the working compilers providing results more quickly.
At one point we had some person/group/company that was targeting android and had the tests running. That seems like a big hole in what we test....being the most widely used computing platform and all. Not sure what would be needed to get that going again.
Tom
On Sat, 23 Jan 2021, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
https://www.boost.org/development/testing.html does not link to explanations on how to add testers, which is not very encouraging. The bottom still literally says "Revised $Date$", so maybe that page is dead.

Why not use the gcc testfarm? Despite the name, it isn't at all restricted to gcc. It has some aarch64, sparc64, ppc64, etc. Of course you shouldn't abuse it by running a CI on every commit, but running the test suite once a week on aarch64 should be no problem, I believe. An advantage is that developers would have access to the platform, so they would have an easier time reproducing issues than with other testers.
-- Marc Glisse
On Sat, Jan 23, 2021 at 7:10 AM Marc Glisse via Boost
On Sat, 23 Jan 2021, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
https://www.boost.org/development/testing.html does not link to explanations on how to add testers, not very encouraging. The bottom still says literally "Revised $Date$" so maybe that page is dead.
Why not use the gcc testfarm? Despite the name, it isn't at all restricted to gcc. It has some aarch64, sparc64, ppc64, etc. Of course you shouldn't abuse it by running a CI on every commit, but running the testsuite once a week on aarch64 should be no problem I believe. An advantage is that developers would have access to the platform, so they would have an easier time reproducing issues than with other testers.
Interesting, the GCC Compile Farm (https://gcc.gnu.org/wiki/CompileFarm) looks like it has quite a few architectures. I applied for an account there. I think it would be pretty easy to get some boost test jobs running weekly-ish across different architectures. I know it is hosted by the GCC people, but do they have qualms about running our tests against clang too? I can ask on that list, just wondering if you have any knowledge. Tom
On Sat, 23 Jan 2021 at 17:58, Tom Kent via Boost
On Sat, Jan 23, 2021 at 7:10 AM Marc Glisse via Boost
wrote: On Sat, 23 Jan 2021, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
https://www.boost.org/development/testing.html does not link to explanations on how to add testers, not very encouraging. The bottom still says literally "Revised $Date$" so maybe that page is dead.
Why not use the gcc testfarm? Despite the name, it isn't at all restricted to gcc. It has some aarch64, sparc64, ppc64, etc. Of course you shouldn't abuse it by running a CI on every commit, but running the testsuite once a week on aarch64 should be no problem I believe. An advantage is that developers would have access to the platform, so they would have an easier time reproducing issues than with other testers.
Interesting, the GCC Compile Farm (https://gcc.gnu.org/wiki/CompileFarm) looks like it has quite a few architectures. I applied for an account there. I think it would be pretty easy to get some boost test jobs running weekly-ish across different architectures.
I know it is hosted by the GCC people, but do they have qualms about running our tests against clang too? I can ask on that list, just wondering if you have any knowledge.
I'm not speaking for GCC as a project, but no such qualms for me. Be civil, don't hoard the machines, nice your stuff out of the way if you can. The farm is predominantly for GCC/libstdc++, be a civilized house-guest if you use the farm. ;)
John Maddock wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86.
Travis has support for ARM64, ppc64le, s390x. Of those, only the last one is big-endian. See e.g. the first four entries of https://travis-ci.org/github/boostorg/endian/builds/750954473.
On 23/01/2021 15:47, Peter Dimov via Boost wrote:
John Maddock wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86.
Travis has support for ARM64, ppc64le, s390x. Of those, only the last one is big-endian.
See e.g. the first four entries of https://travis-ci.org/github/boostorg/endian/builds/750954473.
Thanks Peter, I confess I hadn't realized they'd expanded to these other architectures.

Longer term, I could never see how Travis or the other free providers could afford to do what they're doing and not be overwhelmed.
John Maddock wrote:
Thanks Peter, I confess I hadn't realized they'd expanded to these other architectures.
Longer term, I confess I could never see how Travis or the other free providers could afford to do what they're doing and not be overwhelmed.
In this specific case I suppose they are using IBM-donated computing power which is a more sustainable "business model" than paying MacStadium or whoever. Github Actions is very usable and covers the common cases (gcc-4.7 to -10, clang 3.5 to 11, msvc 14.1 and 14.2, macos-10.15) so Travis can be used only for the rest (https://travis-ci.org/github/boostorg/core/builds/755272662). To enable Github Actions for your repo, just copy e.g. https://github.com/boostorg/core/blob/develop/.github/workflows/ci.yml into the same directory. No other steps are needed.
On 23/01/2021 17:48, Peter Dimov via Boost wrote:
John Maddock wrote:
Thanks Peter, I confess I hadn't realized they'd expanded to these other architectures.
Longer term, I confess I could never see how Travis or the other free providers could afford to do what they're doing and not be overwhelmed.
In this specific case I suppose they are using IBM-donated computing power which is a more sustainable "business model" than paying MacStadium or whoever.
Github Actions is very usable and covers the common cases (gcc-4.7 to -10, clang 3.5 to 11, msvc 14.1 and 14.2, macos-10.15) so Travis can be used only for the rest (https://travis-ci.org/github/boostorg/core/builds/755272662).
To enable Github Actions for your repo, just copy e.g. https://github.com/boostorg/core/blob/develop/.github/workflows/ci.yml into the same directory. No other steps are needed.
Nod. Math and Multiprecision are slowly getting their CIs updated, which is what prompted the original message; we will probably try to balance the permutations between the different services to keep things as speedy as possible (and avoid hogging too much of any one service's time). And yes, GHA are super quick - color me impressed! :)
John.
John Maddock wrote:
Nod. Math and Multiprecision are slowly getting their CI's updated which is what prompted the original message, we will probably try to balance the permutations between the different services to keep things as speedy as possible (and avoid hogging too much of any one services time). And yes, GHA are super quick - color me impressed! :)
Interesting, so it has msvc-14.0 now as well, didn't know that.
On 23/01/2021 17:48, Peter Dimov via Boost wrote:
Github Actions is very usable and covers the common cases (gcc-4.7 to -10, clang 3.5 to 11, msvc 14.1 and 14.2, macos-10.15) so Travis can be used only for the rest (https://travis-ci.org/github/boostorg/core/builds/755272662).
To enable Github Actions for your repo, just copy e.g. https://github.com/boostorg/core/blob/develop/.github/workflows/ci.yml into the same directory. No other steps are needed.
Github Actions is a convenience wrap of Azure Pipelines. You can use Azure Pipelines directly if you wish.

Azure Pipelines is free for open source projects and it comes with runners for:
- Windows Server 2019 x86/x64/ARM/ARM64
- Windows Server 2016 x86/x64/ARM/ARM64
- Ubuntu 20.04 x86/x64/ARM/ARM64
- Ubuntu 18.04 x64
- Ubuntu 16.04 x64
- Mac OS X 10.14 x64
- Mac OS X 10.15 x64

It's a bit more work to write up all the integration scripting over GA, but you definitely have the above architectures available free of cost for CI. There are Github Actions apps which will template the integration with Azure Pipelines for you.

I would be very surprised if ARM runners for Github Actions don't appear at some point relatively soon, but for now, my project CI simply invokes the ARM cross compiler to ensure all my code builds and links correctly for ARM Linux, but does not currently run the unit tests for ARM.

I would also mention that if your project is in vcpkg (Boost is), then it gets compiled by Azure Pipelines for almost all of the above platforms. That isn't running the unit tests of course, but it's better than nothing.

Niall
On 24/01/2021 00:21, Niall Douglas via Boost wrote:
On 23/01/2021 17:48, Peter Dimov via Boost wrote:
Github Actions is very usable and covers the common cases (gcc-4.7 to -10, clang 3.5 to 11, msvc 14.1 and 14.2, macos-10.15) so Travis can be used only for the rest (https://travis-ci.org/github/boostorg/core/builds/755272662).
To enable Github Actions for your repo, just copy e.g. https://github.com/boostorg/core/blob/develop/.github/workflows/ci.yml into the same directory. No other steps are needed.
Github Actions is a convenience wrap of Azure Pipelines. You can use Azure Pipelines directly if you wish.
Azure Pipelines is free for open source projects and it comes with runners for:
- Windows Server 2019 x86/x64/ARM/ARM64
- Windows Server 2016 x86/x64/ARM/ARM64
- Ubuntu 20.04 x86/x64/ARM/ARM64
- Ubuntu 18.04 x64
- Ubuntu 16.04 x64
- Mac OS X 10.14 x64
- Mac OS X 10.15 x64
Even with the help of a famous-web-search-engine I couldn't find that information anywhere on the Azure website... just saying. But many thanks for the information, that's most useful (or will be if I can find the docs).
It's a bit more work to write up all the integration scripting over GA, but you definitely have the above architectures available for free of cost for CI. There are Github Actions apps which will template the integration with Azure Pipelines for you.
I would be very surprised if ARM runners for Github Actions don't appear at some point relatively soon, but for now, my project CI simply invokes the ARM cross compiler to ensure all my code builds and links correctly for ARM Linux, but does not currently run the unit tests for ARM.
I would also mention that if your project is in vcpkg (Boost is), then it gets compiled by Azure Pipelines for almost all of the above platforms. That isn't running the unit tests of course, but it's better than nothing.
Niall
On 24/01/2021 09:44, John Maddock via Boost wrote:
Github Actions is a convenience wrap of Azure Pipelines. You can use Azure Pipelines directly if you wish.
Azure Pipelines is free for open source projects and it comes with runners for:
- Windows Server 2019 x86/x64/ARM/ARM64
- Windows Server 2016 x86/x64/ARM/ARM64
- Ubuntu 20.04 x86/x64/ARM/ARM64
- Ubuntu 18.04 x64
- Ubuntu 16.04 x64
- Mac OS X 10.14 x64
- Mac OS X 10.15 x64
Even with the help of a famous-web-search-engine I couldn't find that information anywhere on the Azure website... just saying.
It was composed by me from multiple sources, including personal experience of programming GA and submitting pull requests for AP-based CI open source projects. Doing my own famous web searching to provide links to support my claims above:

https://azure.microsoft.com/en-us/services/devops/pipelines/
Scroll to the very bottom, you'll see there are 10 parallel jobs with unlimited minutes for open source projects. They actually provide additional stuff too free of cost for open source, some of the enterprisey features e.g. 2Gb of build artifact storage.

https://devblogs.microsoft.com/devops/top-5-open-source-features-in-azure-pi...
You can examine libgit2 for how to use Azure Pipelines for CI on github.

https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=a...
Here is a list of all the OS platforms supported, as I listed above. You can actually prod GA to use OS platforms which GA says it doesn't support, but is in the list above. Obviously you're on your own if you do that.

https://azure.microsoft.com/en-us/updates/azure-devops-pipelines-introduces-...
There's the announcement of ARM64 hardware. I actually couldn't find any proof that the ARM64 hardware they provide supports running ARM32 binaries; some ARM64 designs don't have the requisite hardware. Plus, the ARM32 implemented by ARM64 isn't "true" ARM32, e.g. with NEON being IEEE 754 incomplete and so on. Something I haven't tried is whether qemu can emulate "legacy ARM" on ARM64, including all quirks, in a performant fashion. I'd like to think it would.
But many thanks for the information, that's most useful (or will be if I can find the docs).
Hopefully the above helps. A huge short circuit to the learning curve is to find open source projects on github already using Azure Pipelines, and simply lift whatever they're doing.

Github Actions has hugely closed the gap with Azure Pipelines in 2020, such that for most people implementing CI afresh, directly using Azure Pipelines is excess work when you can just "poke through" GA into the underlying AP if you need to, and otherwise remain within GA. GA is open source on github, so you can see how it converts your GA scripts into AP, and indeed you can then poke through into officially unsupported AP capabilities from GA. Stackoverflow is also an enormous help for unofficial hacks, as are various issue trackers on github.

For me personally, last Autumn I invested about a month of my precious outside-of-work free time into permanently leaving Travis and Appveyor for Github Actions, given that Travis was shortly going to become unusable, which it then became. I ended up with a solution as exemplified by https://github.com/ned14/llfio, whereby if all CI tests pass on all platforms, it automatically publishes a new prerelease of prebuilt binaries to github and tags the git commit as "all tests passed". It also does cool stuff like summarise all your unit tests per pull request, e.g. https://github.com/ned14/llfio/pull/68 (unhide the comments to see the tables). This, in my opinion, is very cool. From time to time I choose a particular prerelease which I think particularly stable, and promote it to latest full release, merging that commit to master branch. I'm very happy with this setup, though I am now 100% dependent on Microsoft for everything.

Feel free to lift from my .github/workflows GA scripting as you see fit.

Niall
On Sun, Jan 24, 2021 at 8:24 AM Niall Douglas via Boost < boost@lists.boost.org> wrote:
On 24/01/2021 09:44, John Maddock via Boost wrote:
Github Actions is a convenience wrap of Azure Pipelines. You can use Azure Pipelines directly if you wish.
Azure Pipelines is free for open source projects and it comes with runners for:
- Windows Server 2019 x86/x64/ARM/ARM64
- Windows Server 2016 x86/x64/ARM/ARM64
- Ubuntu 20.04 x86/x64/ARM/ARM64
- Ubuntu 18.04 x64
- Ubuntu 16.04 x64
- Mac OS X 10.14 x64
- Mac OS X 10.15 x64
Even with the help of a famous-web-search-engine I couldn't find that information anywhere on the Azure website... just saying.
It was composed by me from multiple sources, including personal experience of programming GA and submitting pull requests for AP-based CI open source projects.
What Niall failed to mention is that the ARM architectures are not for Microsoft hosted agents. Here's what the very minimal docs < https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-171-update#additional-agent-platform-arm64> says about it: You can now run your self-hosted agents on Linux/ARM64. We added
Linux/ARM64 to the list of supported platforms for the Azure Pipelines agent. Although the code changes were minimal, a lot of behind-the-scenes work had to be completed first, and we're excited to announce its release!
For Microsoft hosted agents, which is what most of us are interested in, you only get x64 (not even pure x86). As Sam Darwin referenced in his blog post, I have an always in-progress CI test bed for C++ that people can use to get started with various cloud CI providers <https://github.com/bfgroup/ci_playground>. I'll see if I can find non-x64 support in something out there (other than Travis).
-- René Ferdinand Rivera Morell -- Don't Assume Anything -- No Supone Nada -- Robot Dreams - http://robot-dreams.net
On 24/01/2021 20:25, René Ferdinand Rivera Morell wrote:
It was composed by me from multiple sources, including personal experience of programming GA and submitting pull requests for AP-based CI open source projects.
What Niall failed to mention is that the ARM architectures are not for Microsoft hosted agents.
I didn't actually know that Microsoft weren't exposing their ARM64 Azure VMs to the public yet, sorry. I was also confused by some having wired in AWS Graviton2 instances as self-hosted Azure agents as a stand-in. That's expensive unless you're already spending on AWS; alternatives include:
- OVH will do you a fully dedicated dual core ARM Cortex A9 @ 1 Ghz with 2Gb of RAM for €60/year, if you can snag one available. Note that emulation on a fast x64 is not especially slower than this CPU.
- Ikoula will do you a fully dedicated quad core Raspberry Pi 4 @ 1.5 Ghz with 4Gb of RAM for €60/year. Not much storage however.
- Mythic Beasts will do you a fully dedicated quad core Raspberry Pi 4 @ 1.5 Ghz with 4Gb of RAM for £72.50/year + £2/10Gb/year of storage.
- You can colocate your own RaspPi at https://raspberry-hosting.com/en for €36/year. You could easily build a high end model for under €100 complete with fast USB3 connected SSD.
I have no connection with any of the above providers. I have no experience with any of the above providers, except the OVH Cortex A9 which I rented for a while for a job I had to do at the time many years ago. It worked well enough, though it was slow relative even to an Intel Atom.
Niall
On 1/24/2021 3:25 PM, René Ferdinand Rivera Morell via Boost wrote:
On Sun, Jan 24, 2021 at 8:24 AM Niall Douglas via Boost < boost@lists.boost.org> wrote:
On 24/01/2021 09:44, John Maddock via Boost wrote:
Github Actions is a convenience wrap of Azure Pipelines. You can use Azure Pipelines directly if you wish.
Azure Pipelines is free for open source projects and it comes with runners for:
- Windows Server 2019 x86/x64/ARM/ARM64
- Windows Server 2016 x86/x64/ARM/ARM64
- Ubuntu 20.04 x86/x64/ARM/ARM64
- Ubuntu 18.04 x64
- Ubuntu 16.04 x64
- Mac OS X 10.14 x64
- Mac OS X 10.15 x64
Even with the help of a famous-web-search-engine I couldn't find that information anywhere on the Azure website... just saying.
It was composed by me from multiple sources, including personal experience of programming GA and submitting pull requests for AP-based CI open source projects.
What Niall failed to mention is that the ARM architectures are not for Microsoft hosted agents. Here's what the very minimal docs < https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-171-update#additional-agent-platform-arm64> says about it:
You can now run your self-hosted agents on Linux/ARM64. We added
Linux/ARM64 to the list of supported platforms for the Azure Pipelines agent. Although the code changes were minimal, a lot of behind-the-scenes work had to be completed first, and we're excited to announce its release!
For Microsoft hosted agents, which is what most of us are interested in, you only get x64 (not even pure x86). As Sam Darwin referenced in his blog post I have an always in-progress CI test bed for C++ that people can use to get started with various cloud CI providers < https://github.com/bfgroup/ci_playground>. I'll see if I can find non-x64 support in something out there (other than Travis).
How does this compare with Boost CI, https://github.com/boostorg/boost-ci? I am utterly confused by all the CI stuff for Boost and am really tired of spending time worrying about it all. If there were some common solution for all Boost libraries which was really easy to use, I would use it, but all I see is that this CI stuff takes a large amount of time to understand, and I would rather spend time programming than trying to decipher CI testing for a Boost library.
On Mon, Jan 25, 2021 at 1:24 AM Edward Diener via Boost < boost@lists.boost.org> wrote:
On 1/24/2021 3:25 PM, René Ferdinand Rivera Morell via Boost wrote:
As Sam Darwin referenced in his blog post I have an always in-progress CI test bed for C++ that people can use to get started with various cloud CI providers < https://github.com/bfgroup/ci_playground>. I'll see if I can find non-x64 support in something out there (other than Travis).
How does this compare with Boost CI https://github.com/boostorg/boost-ci.
CI playground is decidedly not Boost-centric. It's just a place to get the minimal CI for C++ working, which is the part many first-time users of CI fail at. It's been the base for parts of boost-ci though.
I would rather spend time programming than time trying to decipher CI testing for a Boost library.
Don't we all :-) But testing in the C++ ecosystem forces us into uncomfortable positions. -- -- René Ferdinand Rivera Morell -- Don't Assume Anything -- No Supone Nada -- Robot Dreams - http://robot-dreams.net
On Sat, Jan 23, 2021 at 3:21 AM John Maddock via Boost
I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
I'm not really a fan of the Boost development test matrix, because it has a low signal to noise ratio. That is, often there are errors reported which are spurious, or due to a misconfiguration. And when the errors are real, the log is often unhelpful and lacks sufficient context to diagnose and treat the problem. On the other hand I very much used to like Travis, and Drone CI is a close approximation to Travis. That said, the C++ Alliance Drone CI instance is destined to become the dedicated Boost Drone CI service (we are still working out the kinks). This service will be available for all Boost libraries. Sam has been working on integration scripts to get existing Boost repositories to build on it. If there are requests for other platforms, I believe we can add them using the virtual emulation (qemu?). Sam Darwin is the lead on this and he monitors the mailing list so feel free to reach out and make requests. Thanks
On 1/23/2021 6:20 AM, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
Any thoughts/solutions?
Is there information anywhere showing which CPUs/platforms the major compilers (vc++, clang, gcc, Intel C++, Oracle C++) run on, along with how to run CI tests (AppVeyor, Travis CI, ???) on those CPUs/platforms? I have always personally found that figuring out how the major CIs work takes a great deal of setup effort for a Boost library for little additional gain.
On 23/01/2021 16:22, Edward Diener via Boost wrote:
On 1/23/2021 6:20 AM, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
Any thoughts/solutions?
Is there information anywhere showing which CPUs/platforms the major compilers ( vc++, clang, gcc, Intel C++, Oracle C++ ) run on, along with how to run CI tests ( appveyor, travis CI, ??? ) on those CPUs/platforms ?

No, and this is the issue - every CI provider has their own arcane syntax and list of supported OS images.
I have always personally found that figuring out how the major CIs work is a great deal of effort setting up for a Boost library for little additional gain.
John Maddock wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
For a while in 2017-2018 I was running ARM64 tests on Scaleway hardware which appeared in the test matrix. I had the impression no-one ever took any interest. It would not be difficult to set this up on AWS ARM instances, though there would be some costs involved. Interestingly AWS now also has x86 Macs though they aren't cheap. If anyone fancies something even more exotic to experiment with, how about Emscripten / Wasm? In my limited experience, header-only Boost seems to work OK though I do get a few warnings. It is an interesting platform these days because it is 32-bit. I think that running tests on it could be quite a challenge! Regards, Phil.
> The compiler list has shrunk to msvc/clang/gcc as well.
> I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
> Any thoughts/solutions?

Yes. Numerous... thoughts I guess. Thanks for raising this issue. Having spent half a lifetime in the exciting embedded world, it's fascinating to see how "big" some of these "little" microcontrollers have become, and how well certain embedded compilers are actually rising to the challenge of keeping up with modern C++ progress. This post brings thoughts (back) to mind:
* How cool would it be to investigate more which part(s) of Boost lend themselves well (or not) to deeply embedded systems.
* How to leverage more of the power of cross-compiling.
* How to work toward more "green" CI philosophies.

Regarding these, you can already handily cross-build directly on your Ubuntu pipeline with arm-none-eabi for the architectures of "famous stars" like RPI and BBB. Little modifications in bjam could handle cross compilers installed with sudo apt. In fact, I recently set up some cross builds for arm-none-eabi-gcc on GitHub Actions. OK, great... you say, build, but not run. They do not run at that point, but you find out lots of preliminary problems even trying to build the software on the cross-compiler. Cross-compiling will in general make your code more portable.

On the other note, I personally find that systems such as Raspberry Pi and BeagleBone do, in fact, have comparatively high power consumption for the actual yielded CPU power that is attained from them. With relatively outdated cores, comparatively small caches and predominantly off-board RAM, they are at a disadvantage. Fast and modern microcontroller-based systems with today's state-of-the-art embedded controllers could be much more energy efficient. This would be more on the research end of things: how to make a fast, future-ready, greener embedded CI cluster...

Kind regards, Chris
On 1/23/2021 6:20 AM, John Maddock via Boost wrote:
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
Any thoughts/solutions?
My main thought is that, while you are correct about not testing on something other than Intel x64, from the programmer's viewpoint he is writing code in C++ and not for a particular platform/architecture. At the most he may be writing code with attention to a compiler or OS, but hardly ever does a particular CPU itself come into play unless lower level assembly code, tailored to a CPU, is involved. So realistically the programmer does not care about a CPU at all. Even if a test were to fail on some other platform/architecture, where it normally passes on Intel x64, what would a programmer do about it? Probably very little. Maybe, at the very most, report a problem to the compiler running on that platform/architecture that there is something wrong somewhere. But for the vast, vast majority of times such a problem would hardly indicate anything wrong with the code itself. I offer all this up as a possibly valid reason why testing a Boost library on some other platform/architecture, other than the usual Intel x64 on Mac/Linux/Windows, is not going to be a big priority for any Boost library developer/maintainer.
On 26/01/2021 9:44 am, Edward Diener wrote:
My main thought is that, while you are correct about not testing on something other than Intel x64, from the programmers viewpoint he is writing code in C++ and not for a particular platform/architecture. At the most he may be writing code with attention to a compiler or OS, but hardly ever does a particular CPU itself come into play unless lower level assembly code, tailored to a CPU, comes into play. So realistically the programmer does not care about a CPU at all. Even if a test were to fail on some other platform/architecture, where it normally passes on Intel x64, what would a programmer do about it ? Probably very little. Maybe, at the very most, report a problem to the compiler running on that platform/architecture that their is something wrong somewhere. But for the vast, vast majority of times such a problem would hardly indicate anything wrong with the code itself. I offer all this up as a possibly valid reason why testing a Boost library on some other platform/architecture, other than the usual Intel x64 on Mac/Linux/Windows, is not going to be a big priority for any Boost library developer/maintainer.
This isn't really a good way of thinking. Despite C/C++ supposedly being portable languages, the truth is that each platform and compiler have their own quirks -- mostly stemming from implementation-defined parts of the language, including such fundamental things as the sizes of primitive types and overall endianness, along with some higher-level concepts such as whether certain operations are lock-free or not, etc. (And even, in theory, whether two's complement encoding is used; although I'm not aware of any modern platforms at least where that's not the case, they certainly did exist in the history of the language.) While a library author perhaps doesn't have to care about such differences quite as much as an application author does (as the implementation-defined differences are more significant in the multi-threading arena, and libraries usually stay out of that space), they're not something that can be ignored entirely. It is absolutely possible to write code that works perfectly on Intel x64 but is utterly broken on other architectures (or worse: mostly works except for some corner cases, or works but with pathologic performance), and that is the fault of the code, not of the compiler or platform. But even outside of that, it is a useful exercise to run code through multiple compilers, as each have a different suite of warnings, and trying to get code to compile warning-clean in most compilers can lead to an overall improvement in quality (although not always).
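To make that concrete, here is a minimal, purely illustrative sketch (hypothetical assertions, not from any particular library) of how such platform assumptions can be surfaced at compile time instead of being discovered later on an untested architecture:

    #include <atomic>
    #include <bit>        // std::endian (C++20)
    #include <climits>
    #include <cstdint>

    // Hypothetical checks documenting assumptions that hold on a typical
    // x86-64 build but not necessarily on other targets.
    static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
    static_assert(sizeof(void*) == 8, "this code path assumes a 64-bit target");
    static_assert(std::endian::native == std::endian::little,
                  "the (hypothetical) wire format assumes a little-endian host");
    static_assert(std::atomic<std::uint64_t>::is_always_lock_free,
                  "64-bit atomics may fall back to locks on some 32-bit targets");

None of these will ever fire on the usual x64 CI runners, which is exactly why an ARM or s390x job in the matrix is worth having.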
On 25/01/2021 20:44, Edward Diener via Boost wrote:
I offer all this up as a possibly valid reason why testing a Boost library on some other platform/architecture, other than the usual Intel x64 on Mac/Linux/Windows, is not going to be a big priority for any Boost library developer/maintainer.
I would take the view that over half of all computing devices where C++ is likely to run are ARM or AArch64. Therefore one ought to be targeting one's code at those preferentially to other architectures. You're right that for high level libraries, C++ is generally very portable. But for low level libraries, and any high level libraries which depend on those low level libraries, there can be some _very_ nasty surprises e.g. ARM does not implement all of IEEE 754, and ARM is strict about use of acquire-release atomics as well as alignment in a way x64 is not. Therefore, in my opinion, if your code works well on ARM, it's very likely to work on x64. But the reverse is not true. Niall
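For example, a minimal sketch (hypothetical buffer-parsing helpers, assuming nothing beyond standard C++) of the kind of alignment trap that passes on x86/x64 yet can bite on ARM:

    #include <cstdint>
    #include <cstring>

    // Hypothetical: extract a 32-bit field from a packed byte buffer.
    std::uint32_t read_u32_unsafe(const unsigned char* p)
    {
        // Undefined behaviour when p is not suitably aligned; x86/x64
        // hardware usually tolerates it, ARM may fault or give a
        // surprising result depending on the generated instructions.
        return *reinterpret_cast<const std::uint32_t*>(p);
    }

    std::uint32_t read_u32_portable(const unsigned char* p)
    {
        std::uint32_t v;
        std::memcpy(&v, p, sizeof v);  // well-defined on every architecture
        return v;
    }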
On 1/25/2021 4:36 PM, Niall Douglas via Boost wrote:
On 25/01/2021 20:44, Edward Diener via Boost wrote:
I offer all this up as a possibly valid reason why testing a Boost library on some other platform/architecture, other than the usual Intel x64 on Mac/Linux/Windows, is not going to be a big priority for any Boost library developer/maintainer.
I would take the view that over half of all computing devices where C++ is likely to run are ARM or AArch64.
Therefore one ought to be targeting one's code at those preferentially to other architectures.
You're right that for high level libraries, C++ is generally very portable. But for low level libraries, and any high level libraries which depend on those low level libraries, there can be some _very_ nasty surprises e.g. ARM does not implement all of IEEE 754, and ARM is strict about use of acquire-release atomics as well as alignment in a way x64 is not.
Therefore, in my opinion, if your code works well on ARM, it's very likely to work on x64. But the reverse is not true.
Please name the Boost low level libraries which have specific code aimed at the platform/architecture combination. I am not talking about code for just Mac or Linux or Solaris or Windows but code that actually does something different when run on Intel or ARM or AArch64 etc. I still imagine that if such Boost libraries exist there are still very, very few Boost libraries with dependence on such code. I am not arguing that testing on non-Intel is in any way wrong but simply that very, very few libraries should be impacted by different architectures in any way.
On 26/01/2021 10:53 am, Edward Diener wrote:
Please name the Boost low level libraries which have specific code aimed at the platform/architecture combination. I am not talking about code for just Mac or Linux or Solaris or Windows but code that actually does something different when run on Intel or ARM or AArch64 etc. I still imagine that if such Boost libraries exist there are still very, very few Boost libraries with dependence on such code. I am not arguing that testing on non-Intel is in any way wrong but simply that very, very few libraries should be impacted by different architectures in any way.
Boost.Atomic (and consequently Boost.Lockfree too) is the obvious one (that Niall already hinted at), but parts of Boost.Thread also apply. Add to that list other low-level libraries such as Boost.Endian, Boost.Coroutine[2], and Boost.Fiber as well. There are also some surprise gotchas in other libraries that do their own spinlocks or pointer-packing, such as Boost.SmartPtr and likely others. Meanwhile other libraries like Boost.Serialization (and consumers of same) also make their own assumptions about things like endianness and type structure, which may not matter too much in isolation but becomes very important if you're intending to use it as a portable network or disk format.
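As one sketch of what "becomes very important" looks like in practice (hypothetical record layout, not taken from any Boost library), Boost.Endian can pin a disk or network format to one byte order regardless of the host:

    #include <boost/endian/conversion.hpp>
    #include <cstdint>
    #include <cstring>

    // Hypothetical 6-byte on-disk record; copying the struct itself would
    // bake the host's endianness and padding into the file format.
    struct record { std::uint32_t id; std::uint16_t flags; };

    void encode(const record& r, unsigned char out[6])
    {
        const std::uint32_t id    = boost::endian::native_to_little(r.id);
        const std::uint16_t flags = boost::endian::native_to_little(r.flags);
        std::memcpy(out,     &id,    sizeof id);
        std::memcpy(out + 4, &flags, sizeof flags);
    }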
Am Di., 26. Jan. 2021 um 00:08 Uhr schrieb Gavin Lambert via Boost < boost@lists.boost.org>:
Add to that list other low-level libraries such as Boost.Endian, Boost.Coroutine[2], and Boost.Fiber as well.
boost.coroutine/boost.coroutine2 and boost.fiber depend on boost.context; boost.context itself contains assembler for several architectures.
On 25/01/2021 23:08, Gavin Lambert via Boost wrote:
On 26/01/2021 10:53 am, Edward Diener wrote:
Please name the Boost low level libraries which have specific code aimed at the platform/architecture combination. I am not talking about code for just Mac or Linux or Solaris or Windows but code that actually does something different when run on Intel or ARM or AArch64 etc. I still imagine that if such Boost libraries exist there are still very, very few Boost libraries with dependence on such code. I am not arguing that testing on non-Intel is in any way wrong but simply that very, very few libraries should be impacted by different architectures in any way.
Boost.Atomic (and consequently Boost.Lockfree too) is the obvious one (that Niall already hinted at), but parts of Boost.Thread also apply.
Add to that list other low-level libraries such as Boost.Endian, Boost.Coroutine[2], and Boost.Fiber as well.
There are also some surprise gotchas in other libraries that do their own spinlocks or pointer-packing, such as Boost.SmartPtr and likely others.
Meanwhile other libraries like Boost.Serialization (and consumers of same) also make their own assumptions about things like endianness and type structure, which may not matter too much in isolation but becomes very important if you're intending to use it as a portable network or disk format.
There are even more subtle problems than that. Consider this bit of code I encountered recently. This was a trivially copyable struct whose contents were initialised at construction to all bits one, as if by memset(this, 0xff, sizeof(T)). The struct's members were set, or not set, by code depending on what happens at runtime. If a member was set by code, its value would not be all bits one. The logical code to write therefore was this:

    // All bits one representation is a negative NaN under IEEE 754
    static constexpr float FLOAT_UNSET = -__builtin_nanf("0xffffff");

    struct Foo
    {
        float x{FLOAT_UNSET};
        constexpr Foo() {}  // this is constexpr
    };

    ...
    if(foo.x == FLOAT_UNSET) ...

The above code works absolutely fine if, and only if, your compiler is x86/x64/ARMv8 and the target is x86/x64/ARMv8. If your compiler, OR your target, is ARMv7, all bets are off.

Why? Because ARMv7 doesn't fully implement NaN. So, if the compiler were x64/ARMv8 and the target were ARMv7, IF the compiler executes the code consteval, you get an all-bits-one float, but if instead it executes the code at runtime, you get some other NaN float. This is because consteval-executed expressions are by definition what the compiler itself experiences, which may be quite different from what the target architecture experiences.

Subtle stuff like the above causes all sorts of fun in the real world. x64 is extremely benign and tolerant compared to ARMv7, which is probably bigger in terms of total code execution space nowadays. So, restating what I said earlier, I think good engineering means you ensure your C++ works well on ARMv7, and if it works well there, you have an excellent chance of it working well on x64 and ARMv8. The reverse is not true.

And incidentally, ARMv7 will be a huge chunk of market share for decades to come. Most of the mid to low end microcontrollers are ARMv7, and they will likely continue to displace PIC and AVR CPUs. Those didn't run C++ well, so we never really experienced much of our userbase trying to, say, run Boost on them. However, all ARMv7 CPUs run C++ very well, so it would be a great surprise if more copies of Boost don't start getting shipped on billions of lower-end embedded systems in the coming decade.

Niall
On 26/01/2021 09:08, Dominique Devienne via Boost wrote:
On Tue, Jan 26, 2021 at 9:53 AM Niall Douglas via Boost < boost@lists.boost.org> wrote:
if(foo.x == FLOAT_UNSET) ...
Since two NaNs never compare equal, at runtime at least, isn't this code wrong in the first place?
Apologies, I wrote that example code when drinking the first coffee of the morning. You're right, it's wrong in the literal sense, so consider it meant in the figurative sense. If it were to be literally correct:

    if(0 == memcmp(&foo.x, &FLOAT_UNSET, sizeof(float))) ...

So here, if FLOAT_UNSET is instanced by the compiler via consteval, it appears as all bits one. But if it is instanced at runtime, you get who knows what. (As a guess, on ARMv7 not all the bits would be 1.)

To write correctly portable code, if the architecture is two's complement (now required in recent C++), you might instead write:

    static constexpr uint32_t ALL_BITS_ONE = (uint32_t) -1;
    if(0 == memcmp(&foo.x, &ALL_BITS_ONE, sizeof(float))) ...

Or, depending on semantics, isnan(foo.x) might be sufficient.

Niall
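Putting that correction together into a self-contained form (a sketch only; the names are hypothetical and this is not the original code), the bit-pattern comparison avoids the NaN != NaN trap entirely:

    #include <cstdint>
    #include <cstring>

    struct Foo {
        float x;
        Foo() { std::memset(&x, 0xff, sizeof x); }  // all-bits-one "unset" marker
    };

    bool is_unset(const Foo& foo) {
        static constexpr std::uint32_t ALL_BITS_ONE = static_cast<std::uint32_t>(-1);
        return std::memcmp(&foo.x, &ALL_BITS_ONE, sizeof(float)) == 0;
    }

Depending on the semantics wanted, std::isnan(foo.x) may be the more robust test, since the exact NaN payload that survives a runtime store is not guaranteed on ARMv7.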
On 1/25/2021 6:08 PM, Gavin Lambert via Boost wrote:
On 26/01/2021 10:53 am, Edward Diener wrote:
Please name the Boost low level libraries which have specific code aimed at the platform/architecture combination. I am not talking about code for just Mac or Linux or Solaris or Windows but code that actually does something different when run on Intel or ARM or AArch64 etc. I still imagine that if such Boost libraries exist there are still very, very few Boost libraries with dependence on such code. I am not arguing that testing on non-Intel is in any way wrong but simply that very, very few libraries should be impacted by different architectures in any way.
Boost.Atomic (and consequently Boost.Lockfree too) is the obvious one (that Niall already hinted at), but parts of Boost.Thread also apply.
Add to that list other low-level libraries such as Boost.Endian, Boost.Coroutine[2], and Boost.Fiber as well.
There are also some surprise gotchas in other libraries that do their own spinlocks or pointer-packing, such as Boost.SmartPtr and likely others.
Meanwhile other libraries like Boost.Serialization (and consumers of same) also make their own assumptions about things like endianness and type structure, which may not matter too much in isolation but becomes very important if you're intending to use it as a portable network or disk format.
I think then it is important for the libraries in which some of the code is tailored to a platform/architecture to test on other platform/architectures than Intel, but for the large number of Boost libraries which have no direct dependencies on a platform/architecture I do not see that it is of much importance. Simply because library XXX uses Boost.SmartPtr does not mean that testing on ARM is going to show anything of use for library XXX, which testing Boost.SmartPtr on ARM would not itself show.
On 1/27/21 7:38 AM, Edward Diener via Boost wrote:
On 1/25/2021 6:08 PM, Gavin Lambert via Boost wrote:
On 26/01/2021 10:53 am, Edward Diener wrote:
Please name the Boost low level libraries which have specific code aimed at the platform/architecture combination. I am not talking about code for just Mac or Linux or Solaris or Windows but code that actually does something different when run on Intel or ARM or AArch64 etc. I still imagine that if such Boost libraries exist there are still very, very few Boost libraries with dependence on such code. I am not arguing that testing on non-Intel is in any way wrong but simply that very, very few libraries should be impacted by different architectures in any way.
Boost.Atomic (and consequently Boost.Lockfree too) is the obvious one (that Niall already hinted at), but parts of Boost.Thread also apply.
Add to that list other low-level libraries such as Boost.Endian, Boost.Coroutine[2], and Boost.Fiber as well.
There are also some surprise gotchas in other libraries that do their own spinlocks or pointer-packing, such as Boost.SmartPtr and likely others.
Meanwhile other libraries like Boost.Serialization (and consumers of same) also make their own assumptions about things like endianness and type structure, which may not matter too much in isolation but becomes very important if you're intending to use it as a portable network or disk format.
I think then it is important for the libraries in which some of the code is tailored to a platform/architecture to test on other platform/architectures than Intel, but for the large number of Boost libraries which have no direct dependencies on a platform/architecture I do not see that it is of much importance. Simply because library XXX uses Boost.SmartPtr does not mean that testing on ARM is going to show anything of use for library XXX, which testing Boost.SmartPtr on ARM would not itself show.
Memory ordering correctness cannot be tested in the low level libraries, it must be tested at the place of use, i.e. in the downstream libraries. Assumptions about alignment, endianness and overflow semantics and FP specifics are also not limited to lower level libraries and tend to crop up at any level.
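For instance, pointer packing of the kind mentioned above is usually written in a downstream component, and it quietly assumes a minimum allocation alignment (a hedged sketch, not taken from any Boost library). The low-level allocator can be perfectly correct; it is the consumer's assumption that is architecture- and ABI-sensitive, which is where the testing has to happen:

    #include <cstdint>

    struct Node {
        Node* next;
        int value;
    };
    static_assert(alignof(Node) >= 4, "tagging needs two spare low bits");

    // Stash a 2-bit tag in the low bits of the pointer.
    inline std::uintptr_t pack(Node* p, unsigned tag) {
        return reinterpret_cast<std::uintptr_t>(p) | (tag & 0x3u);
    }
    inline Node* unpack(std::uintptr_t v) {
        return reinterpret_cast<Node*>(v & ~std::uintptr_t(0x3u));
    }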
On 1/27/2021 3:41 AM, Andrey Semashev via Boost wrote:
On 1/27/21 7:38 AM, Edward Diener via Boost wrote:
On 1/25/2021 6:08 PM, Gavin Lambert via Boost wrote:
On 26/01/2021 10:53 am, Edward Diener wrote:
Please name the Boost low level libraries which have specific code aimed at the platform/architecture combination. I am not talking about code for just Mac or Linux or Solaris or Windows but code that actually does something different when run on Intel or ARM or AArch64 etc. I still imagine that if such Boost libraries exist there are still very, very few Boost libraries with dependence on such code. I am not arguing that testing on non-Intel is in any way wrong but simply that very, very few libraries should be impacted by different architectures in any way.
Boost.Atomic (and consequently Boost.Lockfree too) is the obvious one (that Niall already hinted at), but parts of Boost.Thread also apply.
Add to that list other low-level libraries such as Boost.Endian, Boost.Coroutine[2], and Boost.Fiber as well.
There are also some surprise gotchas in other libraries that do their own spinlocks or pointer-packing, such as Boost.SmartPtr and likely others.
Meanwhile other libraries like Boost.Serialization (and consumers of same) also make their own assumptions about things like endianness and type structure, which may not matter too much in isolation but becomes very important if you're intending to use it as a portable network or disk format.
I think then it is important for the libraries in which some of the code is tailored to a platform/architecture to test on other platform/architectures than Intel, but for the large number of Boost libraries which have no direct dependencies on a platform/architecture I do not see that it is of much importance. Simply because library XXX uses Boost.SmartPtr does not mean that testing on ARM is going to show anything of use for library XXX, which testing Boost.SmartPtr on ARM would not itself show.
Memory ordering correctness cannot be tested in the low level libraries, it must be tested at the place of use, i.e. in the downstream libraries.
Assumptions about alignment, endianness and overflow semantics and FP specifics are also not limited to lower level libraries and tend to crop up at any level.
Granted! But I think that the vast majority of Boost libraries do not deal in these issues. For the ones that do, testing with different platforms/architectures is important.
Edward Diener wrote:
On 1/27/2021 3:41 AM, Andrey Semashev via Boost wrote:
Assumptions about alignment, endianness and overflow semantics and FP specifics are also not limited to lower level libraries and tend to crop up at any level.
Granted! But I think that the vast majority of Boost libraries do not deal in these issues.
It's surprisingly easy for platform-specific assumptions to sneak into ostensibly portable code. If it's not tested, it's not guaranteed to work.
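A classic example of such an assumption sneaking in (hypothetical code, not from any Boost library): plain char is signed on x86 Linux but unsigned in the ARM Linux ABI, so a test like this compiles cleanly everywhere and simply never fires on ARM:

    // Counts bytes with the high bit set -- but only where plain char is signed.
    int count_high_bytes(const char* s) {
        int n = 0;
        for (; *s; ++s)
            if (*s < 0)   // always false where char is unsigned (e.g. ARM Linux)
                ++n;
        return n;
    }

    // Portable version: be explicit about the signedness you mean.
    int count_high_bytes_portable(const char* s) {
        int n = 0;
        for (; *s; ++s)
            if (static_cast<unsigned char>(*s) >= 0x80u)
                ++n;
        return n;
    }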
On 28/01/2021 6:03 am, Edward Diener wrote:
Assumptions about alignment, endianness and overflow semantics and FP specifics are also not limited to lower level libraries and tend to crop up at any level.
Granted! But I think that the vast majority of Boost libraries do not deal in these issues. For the ones that do, testing with different platforms/architectures is important.
The sizes of int, long, pointers, size_t, and time_t (among others) are implementation-defined, as are such common shortcuts as ((uint32_t) -1) assumed equivalent to 0xFFFFFFFFu (which is not guaranteed), and to some extent casting between signed and unsigned integers in general. It's basically impossible to write any non-contrived C/C++ code that doesn't rely on implementation-defined behaviour *somewhere*. As such it seems like a good idea to test on as many implementations as reasonably feasible.
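One cheap mitigation, if a library does rely on any of these, is to state the assumption so a port fails loudly at compile time rather than subtly at runtime (a sketch; the particular asserts are only examples and are expected to fail on some platforms -- that is the point):

    #include <cstddef>
    #include <ctime>

    static_assert(sizeof(long) == 8,
                  "assumes LP64; fails on Windows (LLP64) and 32-bit targets");
    static_assert(sizeof(void*) == sizeof(std::size_t),
                  "assumes pointer and size_t widths match");
    static_assert(sizeof(std::time_t) >= 8,
                  "assumes 64-bit time_t; not guaranteed on older 32-bit ABIs");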
((uint32_t) -1) assumed equivalent to 0xFFFFFFFFu (which is not guaranteed)
Is it not? IIRC, by the standard C++ the above is equivalent to `(uint32_t)((uint32_t)0u - 1)`, which must give 0xFFFFFFFF.
It is. To be exact, `(uint32_t) -1` is defined to be `2^32 - 1` by the C++ standard, and that is indeed `0xFFFFFFFFu`. Note that this is only true for conversions *to* unsigned types, but C++20 might have changed that too.
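The value side of this is easy to pin down in code (a trivial check, under the standard conversion rule just described):

    #include <cstdint>
    static_assert(static_cast<std::uint32_t>(-1) == 0xFFFFFFFFu,
                  "conversion to unsigned is defined modulo 2^32");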
On 28/01/2021 08:16, Alexander Grund via Boost wrote:
((uint32_t) -1) assumed equivalent to 0xFFFFFFFFu (which is not guaranteed)
Is it not? IIRC, by the standard C++ the above is equivalent to `(uint32_t)((uint32_t)0u - 1)`, which must give 0xFFFFFFFF.
It is. To be exact, `(uint32_t) -1` is defined to be `2^32 - 1` by the C++ standard, and that is indeed `0xFFFFFFFFu`. Note that this is only true for conversions *to* unsigned types, but C++20 might have changed that too.
Just to be clear here, and to summarise a discussion about this on Slack: (uint32_t) -1 generates a 32-bit unsigned integer with the object VALUE of 0xffffffff, but not necessarily the object storage REPRESENTATION of 0xffffffff. In other words:

    uint32_t *x = ..., *y = ...;
    *x = (uint32_t) -1;
    assert(*x == 0xffffffff);       // always true
    memset(y, 0xff, 4);
    assert(0 == memcmp(x, y, 4));   // not necessarily true

That last assertion is true on all the major CPU architectures and, if I'm blunt, any that I personally care about supporting. But it may not be true according to the standard, e.g. one could theoretically implement a C++ abstract machine which encrypts all storage, so no object representation in storage ever has a one-to-one correspondence to object value. Such a C++ implementation would be very interesting, to see how badly all my C++ code breaks, but otherwise would not be useful, I suspect.

Niall
Just got a merge request with the "add drone config" message and further explanations:

Drone is a continuous integration framework similar to Travis CI. The C++ Alliance https://cppalliance.org/ is offering a hosted Drone server for Boost libraries. Please refer to https://github.com/CPPAlliance/drone-ci for more information and instructions.

I have not heard of Drone and have not seen any discussion wrt integrating it into Boost. Any suggestions, clarifications if the request should be merged? Tnx.
On Thu, Jan 28, 2021 at 11:27 AM Vladimir Batov via Boost
I have not heard of Drone and have not seen any discussion wrt integrating it into Boost. Any suggestions, clarifications if the request should be merged? Tnx.
Well, that's up to every author. No one is forcing libraries to use the C++ Alliance Drone CI instance. And if you do accept the pull request, there is no requirement to stop using your other CI integrations. In other words there is only benefit, with no cost (except that if you decide you no longer want the integration, you have to revert the commit).

The C++ Alliance, as part of its mission to contribute to the health of C++ through open source, has undertaken a project to build its own hosted CI system as an alternative to Travis, which is no longer usable for Boost in a practical sense. The benefit of our solution is that we can scale up the hardware and make sure it is dedicated only to Boost. It doesn't have to be exclusive - we are also working on submitting Github Actions integrations for all Boost repositories.

Sam Darwin can answer specific questions; he is in charge of the integration.

Thanks
Hi Vladimir,

As Vinnie mentioned, Travis has become very slow and/or unusable. We are working on this project to offer alternatives, including a self-hosted, scalable Drone system. Here is an example page: https://drone.cpp.al/boostorg/beast/

On Thu, Jan 28, 2021 at 1:56 PM Vinnie Falco via Boost <boost@lists.boost.org> wrote:
On Thu, Jan 28, 2021 at 11:27 AM Vladimir Batov via Boost wrote:
I have not heard of Drone and have not seen any discussion wrt integrating it into Boost. Any suggestions, clarifications if the request should be merged? Tnx.
Well, that's up to every author. No one is forcing libraries to use the C++ Alliance Drone CI instance. And if you do accept the pull request, there is no requirement to stop using your other CI integrations. In other words there is only benefit, with no cost (except that if you decide you no longer want the integration, you have to revert the commit).
The C++ Alliance, as part of its mission to contribute to the health of C++ through open source, has undertaken a project to build its own hosted CI system as an alternative to Travis, which is no longer usable for Boost in a practical sense. The benefit of our solution is that we can scale up the hardware and make sure it is dedicated only to Boost. It doesn't have to be exclusive - we are also working on submitting Github Actions integrations for all Boost repositories.
Sam Darwin can answer specific questions, he is in charge of the integration.
Thanks
Vinnie, thank you for your prompt reply and explanations. Much appreciated. So, the answer is a gentle and "non-forcing" yes. :-) ... and it seems people are aware of the development. Will merge then. Tnx again. Appreciated. On 29/1/21 6:55 am, Vinnie Falco wrote:
On Thu, Jan 28, 2021 at 11:27 AM Vladimir Batov via Boost wrote:
I have not heard of Drone and have not seen any discussion wrt integrating it into Boost. Any suggestions, clarifications if the request should be merged? Tnx.
Well, that's up to every author. No one is forcing libraries to use the C++ Alliance Drone CI instance. And if you do accept the pull request, there is no requirement to stop using your other CI integrations. In other words there is only benefit, with no cost (except that if you decide you no longer want the integration, you have to revert the commit). The C++ Alliance, as part of its mission to contribute to the health of C++ through open source, has undertaken a project to build its own hosted CI system as an alternative to Travis, which is no longer usable for Boost in a practical sense. The benefit of our solution is that we can scale up the hardware and make sure it is dedicated only to Boost. It doesn't have to be exclusive - we are also working on submitting Github Actions integrations for all Boost repositories.
Sam Darwin can answer specific questions, he is in charge of the integration.
Thanks
Meanwhile other libraries like Boost.Serialization (and consumers of same) also make their own assumptions about things like endianness and type structure, which may not matter too much in isolation but becomes very important if you're intending to use it as a portable network or disk format.
I think then it is important for the libraries in which some of the code is tailored to a platform/architecture to test on other platform/architectures than Intel, but for the large number of Boost libraries which have no direct dependencies on a platform/architecture I do not see that it is of much importance. Simply because library XXX uses Boost.SmartPtr does not mean that testing on ARM is going to show anything of use for library XXX, which testing Boost.SmartPtr on ARM would not itself show.
Fully agree with the first point, as usual: every branch needs to/should be tested. However, bugs usually show up in complex scenarios, so testing the downstream lib might show up bugs which the author/maintainer of the lower-level lib didn't think about. So there is still value in testing that. Weekly is likely enough to not burn too many resources.

However, the question still is: how/where can those (low level) libs be tested on other hardware? Travis is basically dead for us and setting up CI is hard (see prior discussion), so testing Boost as a whole on many archs, OSes, ... might still be the easiest solution, as it means a single setup to be maintained (see the integration tester approach, which I'd say needs some love).

Alex
On Mon, 25 Jan 2021 at 22:54, Edward Diener via Boost <boost@lists.boost.org> wrote:
Please name the Boost low level libraries which have specific code aimed at the platform/architecture combination.
boost.context incorporates assembler on several architectures:
- i386 / x86_64
- mips
- powerpc32 / powerpc64
- arm32 / arm64
- riscv
- s390x
A detailed description (combination of architecture + binary format + ABI) is in the documentation: https://www.boost.org/doc/libs/1_75_0/libs/context/doc/html/context/architec...
boost.coroutine / boost.coroutine2 / boost.fiber are C++ only but utilize boost.context.
Hi John,
I'd like to get VxWorks results up there eventually. At Wind River we've been shipping Boost with VxWorks for a couple of years now, so we are doing nightly compiles with ARM/clang and PowerPC/gcc, and hope to add a RISC-V/clang reference board at some point.

We've also released some SDKs under a non-commercial license https://labs.windriver.com/ . So getting the build support upstreamed would allow academic users to use any version of Boost with the SDKs.

I've gotten stalled a couple of times submitting patches for the build support, mostly because I'm still not a jam expert. But I hope to get back to it sometime this year. Once the public version of Boost has up-to-date build support, I've already worked out how to run the test harness with QEMU.
Brian Kuhl
On Sat, 23 Jan 2021 at 06:21, John Maddock via Boost
I have no solution for this, but I note that neither do we have CI, nor tests on https://www.boost.org/development/tests/develop/developer/summary.html that aren't Intel x86. The compiler list has shrunk to msvc/clang/gcc as well.
I note that at least in theory, other platforms/architectures could be integrated into Drone CI (either the CppAlliance one, or our own), but someone would have to offer to host the clients running the tests.
Any thoughts/solutions?
Cheers, John.
participants (21)
- Alexander Grund
- Andrey Semashev
- Brian Kuhl
- Christopher Kormanyos
- Dominique Devienne
- Edward Diener
- Gavin Lambert
- Janek Kozicki
- John Maddock
- Marc Glisse
- Niall Douglas
- Oliver Kowalke
- Peter Dimov
- Phil Endecott
- René Ferdinand Rivera Morell
- Robert Ramey
- Sam Darwin
- Tom Kent
- Ville Voutilainen
- Vinnie Falco
- Vladimir Batov