On Wed, 16 Nov 2022 at 04:22, René Ferdinand Rivera Morell wrote:
I can't answer the CMake question for you, but... I need to explain one issue that will hopefully dissuade you from using tarballs. Many months ago we had the default Boost CI pull the regular tarballs for testing. That had an interesting effect: it created a fair bit of download traffic from the jFrog repository that hosts those tarballs. Everything was great. CI would download Boost really fast, and get all of it without problems. Or so we thought. One day everything stopped working with download errors. We contacted jFrog about it. They looked and saw that we had hit a data cap limit.
I'm pretty sure that we use waaaaaaaay more bandwidth with `git clone` than we would by caching the tarball, though: https://github.com/actions/cache But if jFrog is the weak point and "GitHub just works": what about simply publishing the tarball as a release on GitHub (or at least this special one with CMake support)? Compare https://github.com/boostorg/boost/releases/tag/boost-1.80.0 with https://github.com/wxWidgets/wxWidgets/releases/tag/v3.2.1 I really like the way wxWidgets does its releases. (And GitHub Actions would then already "have all the build resources in house".)
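To make the caching idea concrete, a workflow step could look roughly like this. This is only a sketch: the cache key, file name, and the tarball URL are placeholders I made up, not an actual released artifact.

```yaml
# Sketch only: cache the Boost tarball between CI runs so the upstream
# host is hit once per release instead of on every job.
- name: Cache Boost tarball
  id: cache-boost
  uses: actions/cache@v3
  with:
    path: boost_1_80_0.tar.gz
    key: boost-tarball-1.80.0   # bump the key when the Boost version changes

- name: Download Boost tarball on cache miss
  if: steps.cache-boost.outputs.cache-hit != 'true'
  run: curl -fL -o boost_1_80_0.tar.gz "$BOOST_TARBALL_URL"  # URL left as a placeholder
```

On every run after the first, the tarball comes from the cache and the download step is skipped entirely.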
At that point we started changing the various CI methods to selectively git clone Boost (there's this great tool that fetches just the projects you need). And we haven't had download problems since.
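If the tool in question is boostdep's depinst script, the selective clone looks roughly like this (the branch and module names are only examples, and I'm assuming the current layout of the boostorg/boost superproject):

```shell
# Sketch: clone the Boost superproject shallowly, then pull in only the
# submodules a given library needs (here: dll), instead of all of them.
git clone -b boost-1.80.0 --depth 1 https://github.com/boostorg/boost.git
cd boost
git submodule update --init tools/boostdep    # dependency-resolution tool
git submodule update --init libs/dll          # the library we actually want
python tools/boostdep/depinst/depinst.py dll  # fetch dll's transitive dependencies
```

The point of my complaint below is that even this "selective" approach ends up pulling in dozens of submodules once transitive dependencies are resolved.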
There are a gazillion issues with "git clone" that I see:

* I really like the idea of selectively using just the modules I need. However, I only need "dll" and "uuid", and yet I have to download 71 submodules (maybe one or two fewer after our code cleanup, but at least back when we needed "filesystem system date_time regex" we had to download all 71). That kind of defeats the whole purpose and doesn't save a single bit of bandwidth. See also https://github.com/boostorg/cmake/issues/26#issuecomment-1286919419

* GitHub is well optimized for downloading your own git sources. As an example, a GitHub action needs 2 seconds to clone it all, including all the large files. I tried doing the same on AWS CodeBuild and it takes 5 minutes. Every. Single. Job. And that's for our own sources. I have absolutely no idea how to optimize that for something that gets fetched via FetchContent (other than installing Boost with `apt install`, which is feasible on Linux, but not on Windows). I uploaded a custom Windows image for AWS CodeBuild with Boost preinstalled, but AWS needs 15 minutes to load the Windows image, making the CI so slow that I find it useless. Fetching Boost via FetchContent takes roughly 2-3 extra minutes on GitHub Actions. Fetching 100 MB from cache, on the other hand, would be instant.

* CMake has a known issue where it keeps re-fetching everything from git "all the time", even after you have already downloaded all the sources: https://gitlab.kitware.com/cmake/cmake/-/issues/21146

Mojca