On 9/21/2020 1:29 AM, Gavin Lambert via Boost wrote:
On 21/09/2020 16:20, Edward Diener wrote:
Even if it were 16 GB, these CIs limiting memory to 4 GB or 6 GB seems like a disservice to me.
While virtual memory is non-linear and weird (and virtual machine memory even more so), it still roughly boils down to however much physical RAM they can (affordably) fit into the host, divided by how many VMs they want to run per host.
Giving each VM a larger memory allocation means fewer concurrent VMs fit on the same hardware, so you either need more hardware or fewer concurrent users. And more hardware is expensive, especially at VM-host-server scale.
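As a rough back-of-the-envelope illustration (the host figures here are purely hypothetical, not anything the providers have published): a host with 192 GB of physical RAM split across 32 concurrent VMs leaves about 6 GB per VM, while raising the allocation to 12 GB per VM means the same host can only serve 16 VMs at a time.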
And these CIs have a lot of users who don't pay for anything.
So they have to draw a line in the sand somewhere. They have apparently decided that those particular limits work for most of their users (or at least the ones that they care sufficiently about).
It's possible that they have some dedicated servers for paying customers with looser limits. But it's also possible that they don't, if it hasn't been an issue in the past.
I understand that it is a free resource for Boost testing, that it may not offer much in the way of virtual memory, and that there are time limits on how long something can run. But nearly all my tests for intensive functionality in Boost PP and VMD run fine to completion on my local computer, using gcc, clang, and VC++ on Windows or Linux, whereas the CI testing with Appveyor and Travis CI produces numerous "out of memory" and "process killed on timeout" errors which prevent successful CI test results.

So for my purposes these CI facilities are, on the whole, almost useless for telling me whether some change is valid or not. That's fine. I will just pay little attention to the overall result of the CI tests, since because of those limitations they do not reflect real-world results.