I guess I'm confused. My understanding is that libraries are considered good candidates for the Boost collection based on meeting some or all of certain criteria:

* They offer useful, novel functionality not found elsewhere
* The API is superior to other libraries that do similar things
* The implementation is exceptionally performant
* They solve a familiar problem in a particularly elegant fashion
* The library is already popular and has field experience
* The library offers C++ standard functionality for older compilers

But some of the chatter on the mailing list suggests that the bar for a Boost library candidate is lower: that a library just needs to basically work, even if it duplicates functionality found elsewhere, or even if it does not have anything that one might hold up as an example of exceptional engineering.

Boost libraries used to be cutting edge, to such an extent that they were adopted into the C++ Standard. And now the progress is in reverse. The Standard introduces a new component, and the Boost library follows (Boost.Charconv for example). In other cases I see libraries with few to no users limping into reviews, or absent discussions which question whether or not the bar for excellence is exceeded. When I used to participate in wg21 I complained about the "direct to standard" pipeline, where people would just write papers for the sake of it with no example code or real-world user experience. I have to wonder if we are not cultivating a "direct to Boost" pipeline by having relaxed or poorly-defined acceptance criteria.

There are thousands of open-source libraries for doing things in a fashion that is "good enough." If libraries are not held to technical excellence and high standards during the review process, then what is the value proposition of the collection? If libraries of average design and implementation are seen as passable for acceptance, then how would this inspire any authors to strive for excellence?
When I write a library there are always two factors in my mind: 1. that the use-case is compelling, and 2. that I can bring something to the table which demonstrates technical excellence. These are the libraries that I have contributed thus far and my process for designing them:

1. Boost.Beast

This offers something never seen before, which is a generic approach to HTTP and Websocket, based on Asio, with models and algorithms that are designed well (of course, it still has its flaws). The library already had many users ahead of the review, and was being used to transact economic value in a live product. I will note that my initial motivation for Beast was that the websocketpp library API was just terrible, and the alternatives even worse.

2. Boost.JSON

When I needed JSON I looked at the alternatives, and while RapidJSON was clearly the best in terms of performance, I felt that it had several major design flaws in its API. Boost.JSON pretty much copies all of the good ideas from RapidJSON, especially the performant implementation, and offers a redesigned API that follows C++ best practices with respect to copy and move, exceptions, and error codes. Furthermore I added the streaming parser, and I introduced a novel means of managing memory resources which I believe corrects a design flaw in the standard. This library already had commercial users and lots of feedback before going into review.

3. Boost.URL

Beast users regularly asked for URL parsing. There are some regex patterns floating around and some Spirit-based parsers, and also some URL libraries that do the "five std::string data members" thing. I wanted something excellent, but the first two iterations of an API turned out to be crap and I gave up, as I do not like publishing mediocre libraries. Some time passed, and with renewed energy I tried again, and then I got it right.
The URL is stored in its serialized form (just one string), allowing the possibility of having a url_view type in addition to the mutable container. I copied much of the API from a popular JS URL lib because, well, hey, why duplicate the effort if someone has already gone through the trouble :) Boost.URL offers particularly clean and robust interfaces for reading and modifying the values of each part of the URL, both with and without the URL-encoding applied. The mechanism for lazily iterating the path and query sequences is novel. And it offers a strong invariant: the container will always contain a valid URL. The library further innovates by making public its parsing framework, which is optimized for the type of grammars found in the Internet RFCs, such as HTTP and the fields used in Websocket handshakes. Boost.URL had a bunch of users before the review was conducted.

4. Boost.StaticString

This one is just a straightforward string container whose storage is fixed in capacity as a data member. Container libraries are often unexciting, but they are necessary. StaticString follows all best practices for modern C++ containers.

When I look at a proposed library I try to figure out what is great about it, how well it performs for its users (or even: does it have any users?), what part of the API is exceptionally well designed and ergonomic, but most importantly I want to ask: what makes this library stand out to the extent that it should be part of the library collection? What aspects of the library, if viewed by someone learning C++ or interested in improving their design skills, are inspirational?

Is this overly demanding or exclusionary? Am I overthinking things? Should we be asking more of these types of questions and requiring better answers? What are the criteria for determining if a library is good enough to become part of the collection?

Thanks
On Wed, 27 Mar 2024 at 14:47, Vinnie Falco via Boost
Boost libraries used to be cutting edge, to such an extent that they were adopted into the C++ Standard. And now the progress is in reverse. The Standard introduces a new component, and the Boost library follows (Boost.Charconv for example).
It is simpler now for most people to just write papers than to write a complete, documented, unit-tested library and then engage in a Boost review. With this role change, Boost becomes especially attractive to libraries that won't find their way into the standard, such as Boost.Redis, Boost.MySql, etc.
In other cases I see libraries with few to no users limping into reviews, or absent discussions which question whether or not the bar for excellence is exceeded.
It is hard to know how many users a library has, especially if the docs are great and the library has no bugs. Most users won't even leave a star on GitHub.
What are the criteria for determining if a library is good enough to become part of the collection?
The review process? This is why it is so important to have a qualified review manager. Marcelo
On 3/27/24 8:22 AM, Marcelo Zimbres Silva via Boost wrote:
It is hard to know how many users a library has, especially if the docs are great and the library has no bugs. Most users won't even leave a star on GitHub.
I would like to see suggestions as to how we could get usage statistics on boost libraries. Robert Ramey
On Wed, Mar 27, 2024 at 2:34 PM Robert Ramey via Boost < boost@lists.boost.org> wrote:
I would like to see suggestions as to how we could get usage statistics on boost libraries.
Yes, this is an important area of analysis for which no perfect solution exists. However, researchers at The C++ Alliance have developed an experimental technique which offers hope for newly submitted libraries. It works like this:

* We maintain an inventory of unique names that are not found on the web
* New libraries will be assigned a unique name from our inventory
* A C++ Alliance cloud application will trawl the web for references to the name

For example, the next library which is accepted may be given the name Boost.Zissifrak, a term which nets zero search results using Google. Using this method we can indirectly measure the popularity of the library over time by observing the evolution of the number of matching search results. Thanks
On 3/28/24 17:01, Vinnie Falco via Boost wrote:
On Wed, Mar 27, 2024 at 2:34 PM Robert Ramey via Boost < boost@lists.boost.org> wrote:
I would like to see suggestions as to how we could get usage statistics on boost libraries.
Yes, this is an important area of analysis for which no perfect solution exists. However, researchers at The C++ Alliance have developed an experimental technique which offers hope for newly submitted libraries. It works like this:
* We maintain an inventory of unique names that are not found on the web
* New libraries will be assigned a unique name from our inventory
* A C++ Alliance cloud application will trawl the web for references to the name
For example, the next library which is accepted may be given the name Boost.Zissifrak, a term which nets zero search results using Google. Using this method we can indirectly measure the popularity of the library over time by observing the evolution of the number of matching search results.
Are you being serious? You might as well generate UUIDs for library names. Please don't. Preferably, a library name should reflect its purpose, and, if possible, use recognizable terminology in its domain.
On 3/28/24 7:01 AM, Vinnie Falco via Boost wrote:
On Wed, Mar 27, 2024 at 2:34 PM Robert Ramey via Boost < boost@lists.boost.org> wrote:
I would like to see suggestions as to how we could get usage statistics on boost libraries.
Yes, this is an important area of analysis for which no perfect solution exists. However, researchers at The C++ Alliance have developed an experimental technique which offers hope for newly submitted libraries.
Actually, I was thinking more of existing libraries. Case in point: I get relatively little feedback on the Boost serialization library. It's been 20+ years. Initially it was almost alone, but now there are lots of alternatives. Serialization has never been proposed for the standard. It has few "stars" on GitHub. I'm wondering if it's even used any more. It's becoming increasingly out of sync with evolving Boost tools and hence more effort to maintain. Perhaps it's time to deprecate it or remove it from Boost. Robert Ramey
On Thu, Mar 28, 2024 at 11:46 AM Robert Ramey via Boost < boost@lists.boost.org> wrote:
Actually, I was thinking more of existing libraries.
Not to worry. Technicians at The C++ Alliance are working on a mostly portable solution which uses the compiler's pragma message facility to embed a small piece of HTML containing a tracking cookie: #pragma message( "<html><script>gtag('config', 'GA_TRACKING_ID', { 'anonymize_ip': true });</script><body></body></html>" ) Every time your library is compiled, the C++ Alliance cloud services will be contacted to collect non personally-identifiable information about what is being compiled, the versions of Boost C++ being used, and the quality of your implementation. Currently this capability only works when the program in question is built using an IDE such as Visual Studio or XCode, but we are investigating patches to bjam to enable Internet communication during compilation. Stay tuned! Thanks
On Thu, Mar 28, 2024 at 3:42 PM Vinnie Falco via Boost
but we are investigating patches to bjam to enable Internet communication during compilation.
I've actually thought of implementing that in the past. Something about privacy and confidential information held me back. -- -- René Ferdinand Rivera Morell -- Don't Assume Anything -- No Supone Nada -- Robot Dreams - http://robot-dreams.net
On 3/28/24 23:42, Vinnie Falco via Boost wrote:
On Thu, Mar 28, 2024 at 11:46 AM Robert Ramey via Boost < boost@lists.boost.org> wrote:
Actually, I was thinking more of existing libraries.
Not to worry. Technicians at The C++ Alliance are working on a mostly portable solution which uses the compiler's pragma message facility to embed a small piece of HTML containing a tracking cookie:
#pragma message( "<html><script>gtag('config', 'GA_TRACKING_ID', { 'anonymize_ip': true });</script><body></body></html>" )
Every time your library is compiled, the C++ Alliance cloud services will be contacted to collect non personally-identifiable information about what is being compiled, the versions of Boost C++ being used, and the quality of your implementation. Currently this capability only works when the program in question is built using an IDE such as Visual Studio or XCode but we are investigating patches to bjam to enable Internet communication during compilation.
This would be a massive privacy breach, if it works. If this ever comes to reality, please announce it on this list so that I never use those libraries.
On 3/28/24 5:38 PM, Andrey Semashev via Boost wrote:
On 3/28/24 23:42, Vinnie Falco via Boost wrote:
On Thu, Mar 28, 2024 at 11:46 AM Robert Ramey via Boost < boost@lists.boost.org> wrote:
Actually, I was thinking more of existing libraries.
Not to worry. Technicians at The C++ Alliance are working on a mostly portable solution which uses the compiler's pragma message facility to embed a small piece of HTML containing a tracking cookie:
#pragma message( "<html><script>gtag('config', 'GA_TRACKING_ID', { 'anonymize_ip': true });</script><body></body></html>" )
Every time your library is compiled, the C++ Alliance cloud services will be contacted to collect non personally-identifiable information about what is being compiled, the versions of Boost C++ being used, and the quality of your implementation. Currently this capability only works when the program in question is built using an IDE such as Visual Studio or XCode but we are investigating patches to bjam to enable Internet communication during compilation.
This would be a massive privacy breach, if it works. If this ever comes to reality, please announce it on this list so that I never use those libraries.
+1 I can't imagine anyone willing to use a library which did that Robert Ramey
On Thu, Mar 28, 2024 at 9:56 PM Robert Ramey wrote:
On 3/28/24 5:38 PM, Andrey Semashev via Boost wrote:
On 3/28/24 23:42, Vinnie Falco via Boost wrote:
Not to worry. Technicians at The C++ Alliance are working on a mostly portable solution which uses the compiler's pragma message facility to embed a small piece of HTML containing a tracking cookie:
#pragma message( "<html><script>gtag('config', 'GA_TRACKING_ID', { 'anonymize_ip': true });</script><body></body></html>" )
Every time your library is compiled, the C++ Alliance cloud services will be contacted to collect non personally-identifiable information about what is being compiled, the versions of Boost C++ being used, and the quality of your implementation. Currently this capability only works when the program in question is built using an IDE such as Visual Studio or XCode but we are investigating patches to bjam to enable Internet communication during compilation.
This would be a massive privacy breach, if it works. If this ever comes to reality, please announce it on this list so that I never use those libraries.
+1 I can't imagine anyone willing to use a library which did that
It's too late. If your email client has opened that message above, the C++ Alliance is already inside your computer. Glen
On Thu, Mar 28, 2024 at 5:38 PM Andrey Semashev via Boost < boost@lists.boost.org> wrote:
This would be a massive privacy breach, if it works. If this ever comes to reality, please announce it on this list so that I never use those libraries.
Some of our engineers doing pure analytical C++ research have taken note that while compilation is inherently parallel when there are multiple TUs, the linking stage is not. In effect the linker is single threaded. To help compensate developers for having their anonymized compilation usage statistics uploaded into the cloud, we are developing a client-side cryptocurrency mining service which utilizes those additional, idle CPUs during the linker stage to solve proof-of-work for popular blockchains. I was thinking a 50/50 split on profits. Thanks
On 01.04.24 23:28, Vinnie Falco via Boost wrote:
On Thu, Mar 28, 2024 at 5:38 PM Andrey Semashev via Boost < boost@lists.boost.org> wrote:
This would be a massive privacy breach, if it works. If this ever comes to reality, please announce it on this list so that I never use those libraries.
Some of our engineers doing pure analytical C++ research have taken note that while compilation is inherently parallel when there are multiple TUs, the linking stage is not. In effect the linker is single threaded. To help compensate developers for having their anonymized compilation usage statistics uploaded into the cloud, we are developing a client-side cryptocurrency mining service which utilizes those additional, idle CPUs during the linker stage to solve proof-of-work for popular blockchains. I was thinking a 50/50 split on profits.
April fools? Much less funny now that I am reading it on the 2nd. -- Rainer Deyke (rainerd@eldwood.com)
On Tue, Apr 2, 2024 at 9:05 AM Rainer Deyke via Boost
April fools? Much less funny now that I am reading it on the 2nd.
I'm not really a traditionalist when it comes to observing "April Fools" but I will note that the humor is not in my satirical postings but rather, the serious nature of some of the replies :) Thanks
On 4/2/24 19:18, Vinnie Falco via Boost wrote:
On Tue, Apr 2, 2024 at 9:05 AM Rainer Deyke via Boost wrote:
April fools? Much less funny now that I am reading it on the 2nd.
I'm not really a traditionalist when it comes to observing "April Fools" but I will note that the humor is not in my satirical postings but rather, the serious nature of some of the replies :)
Thing is, with how software is designed these days, I wouldn't be surprised if some of the posts you made were true. So color me boring, but I'm not amused.
On 28/03/2024 21:42, Vinnie Falco via Boost wrote:
Not to worry. Technicians at The C++ Alliance are working on a mostly portable solution which uses the compiler's pragma message facility to embed a small piece of HTML containing a tracking cookie:
#pragma message( "<html><script>gtag('config', 'GA_TRACKING_ID', { 'anonymize_ip': true });</script><body></body></html>" )
Every time your library is compiled, the C++ Alliance cloud services will be contacted to collect non personally-identifiable information about what is being compiled, the versions of Boost C++ being used, and the quality of your implementation. Currently this capability only works when the program in question is built using an IDE such as Visual Studio or XCode but we are investigating patches to bjam to enable Internet communication during compilation.
I'll build without internet connection in this case. Daniele Lupo
Hi Robert, I’m sorry you are feeling that Boost.Serialization might be EOL. I actually do use it and have found its stability to be a great selling point over the years. I confess that I’m not a retro-compiler person so I personally don’t use any old compat code, but particularly in this case I expect many others might. Of course, it is your decision I suppose, but it would be a great loss were the library to be deprecated. Thanks for your steady work on it. Cheers, Brook
On Mar 28, 2024, at 12:46 PM, Robert Ramey via Boost wrote:
On 3/28/24 7:01 AM, Vinnie Falco via Boost wrote:
On Wed, Mar 27, 2024 at 2:34 PM Robert Ramey via Boost < boost@lists.boost.org> wrote:
I would like to see suggestions as to how we could get usage statistics on boost libraries.
Yes, this is an important area of analysis for which no perfect solution exists. However, researchers at The C++ Alliance have developed an experimental technique which offers hope for newly submitted libraries.
Actually, I was thinking more of existing libraries. Case in point: I get relatively little feedback on the Boost serialization library. It's been 20+ years. Initially it was almost alone, but now there are lots of alternatives. Serialization has never been proposed for the standard. It has few "stars" on GitHub. I'm wondering if it's even used any more. It's becoming increasingly out of sync with evolving Boost tools and hence more effort to maintain. Perhaps it's time to deprecate it or remove it from Boost.
Robert Ramey
Boost.MPI is based on Boost.Serialization. And I know a few people who use Boost.MPI (starting with me). I'm OK with moving to an alternative if there are suggestions, but I will need some time. It also means that Boost.MPI would have to be removed from Boost (as having a dependency outside of Boost and std would be an issue, I guess), but that can be considered if that's the only reason for extra maintenance work. Cheers ---- Alain Miniussi DSI, Pôles Calcul et Genie Log. Observatoire de la Côte d'Azur Tél. : +33609650665 ----- On Mar 28, 2024, at 10:23 PM, Brook Milligan via Boost boost@lists.boost.org wrote:
Hi Robert,
I’m sorry you are feeling that Boost.Serialization might be EOL. I actually do use it and have found its stability to be a great selling point over the years. I confess that I’m not a retro-compiler person so I personally don’t use any old compat code, but particularly in this case I expect many others might. Of course, it is your decision I suppose, but it would be a great loss were the library to be deprecated.
Thanks for your steady work on it.
Cheers, Brook
On Mar 28, 2024, at 12:46 PM, Robert Ramey via Boost wrote:
On 3/28/24 7:01 AM, Vinnie Falco via Boost wrote:
On Wed, Mar 27, 2024 at 2:34 PM Robert Ramey via Boost < boost@lists.boost.org> wrote:
I would like to see suggestions as to how we could get usage statistics on boost libraries.
Yes, this is an important area of analysis for which no perfect solution exists. However, researchers at The C++ Alliance have developed an experimental technique which offers hope for newly submitted libraries.
Actually, I was thinking more of existing libraries. Case in point: I get relatively little feedback on the Boost serialization library. It's been 20+ years. Initially it was almost alone, but now there are lots of alternatives. Serialization has never been proposed for the standard. It has few "stars" on GitHub. I'm wondering if it's even used any more. It's becoming increasingly out of sync with evolving Boost tools and hence more effort to maintain. Perhaps it's time to deprecate it or remove it from Boost.
Robert Ramey
On 3/28/24 2:23 PM, Brook Milligan via Boost wrote:
Hi Robert,
I’m sorry you are feeling that Boost.Serialization might be EOL. I actually do use it and have found its stability to be a great selling point over the years. I confess that I’m not a retro-compiler person so I personally don’t use any old compat code, but particularly in this case I expect many others might. Of course, it is your decision I suppose, but it would be a great loss were the library to be deprecated.
Thanks for your steady work on it.
You're welcome. I don't think it's so much the serialization library. I'm guessing that a lot of libraries have these concerns. But many older libraries are not maintained, so no one raises the issue. Or maybe they are so well written that they never need maintenance or evolution. Or maybe no one uses them any more. Without some real data, there's no way to tell. Robert Ramey
On Fri, Mar 29, 2024 at 04:18, Robert Ramey via Boost
Or maybe they are so well written that they never need maintenance or evolution. Or maybe no one uses them any more. Without some real data, there's no way to tell.
Hello Robert, Your Boost serialization is in the first case. For sure! This is my opinion but I shared it. Best regards, -- Marc Viala
On Thu, Mar 28, 2024 at 8:18 PM Robert Ramey via Boost < boost@lists.boost.org> wrote:
On 3/28/24 2:23 PM, Brook Milligan via Boost wrote:
Hi Robert,
I’m sorry you are feeling that Boost.Serialization might be EOL. I actually do use it and have found its stability to be a great selling point over the years. I confess that I’m not a retro-compiler person so I personally don’t use any old compat code, but particularly in this case I expect many others might. Of course, it is your decision I suppose, but it would be a great loss were the library to be deprecated.
Thanks for your steady work on it.
You're welcome.
I don't think it's so much the serialization library. I'm guessing that a lot of libraries have these concerns. But many older libraries are not maintained, so no one raises the issue. Or maybe they are so well written that they never need maintenance or evolution. Or maybe no one uses them any more. Without some real data, there's no way to tell.
My take is that it's time for a serialization 2.0 built on C++26 reflection. This is the perfect domain to test the reflection facilities being proposed for C++26. There's a fork of Clang that supports it, so it's technically possible. Jeff
Actually, I was thinking more of existing libraries. Case in point. I get relatively little feedback on the boost serialization library. It's been 20+ years. Initially it was almost alone. But now there are lots of alternatives. Serialization has never been proposed for the standard. It has few "stars" on github. I'm wondering if it's even used any more. It's becoming increasingly out of sync with evolving boost tools and hence more effort to maintain. Perhaps it's time to deprecated it or remove it from boost.
Hello Robert, I can confirm that here in our production code, since 2005, we have been intensively using your Boost serialization library. It was very well designed, and its interoperability with other Boost libraries (utilities, multi_index, bimap...) is very valuable. From my point of view, its deprecation would really be a mess. I am very sorry you have that feeling. Best regards, -- Marc Viala
On Wed, Mar 27, 2024 at 9:47 PM Vinnie Falco via Boost
I guess I'm confused. My understanding is that libraries are considered good candidates for the Boost collection based on meeting some or all of certain criteria:
* They offer useful, novel functionality not found elsewhere
* The API is superior to other libraries that do similar things
* The implementation is exceptionally performant
* They solve a familiar problem in a particularly elegant fashion
* The library is already popular and has field experience
* The library offers C++ standard functionality for older compilers
If by "My understanding is" you mean to imply that those are general rules, you are mistaken. I know for a fact that the latter two would not get consent. Especially your criteria for popularity & field experience seem wrong to me. Let's just take boost.json: that would have gotten my endorsement with zero users, because it's just obviously a useful thing to have. Also, why does it need to be exceptional? If there were amazing json libraries and boost.json was just better in its error handling and a better fit for boost, it wouldn't be exceptional? Why would that hurt inclusion into boost? You can have your preferences and thus withhold an endorsement and recommend rejection during the review.
But some of the chatter on the mailing list suggests that the bar for a Boost library candidate is lower. That a library just needs to basically work, even if it duplicates functionality found elsewhere, or even if it does not have anything that one might hold up as an example of exceptional engineering.
Correct. Being a candidate gives the library author the opportunity to receive feedback and prepare for the actual review. Given that preparing for the boost review includes a lot of work that is only useful if the library gets accepted into boost, I find it a good idea to receive an endorsement early.
Boost libraries used to be cutting edge, to such an extent that they were adopted into the C++ Standard. And now the progress is in reverse. The Standard introduces a new component, and the Boost library follows (Boost.Charconv for example). In other cases I see libraries with few to no users limping into reviews, or absent discussions which question whether or not the bar for excellence is exceeded. When I used to participate in wg21 I complained about the "direct to standard" pipeline, where people would just write papers for the sake of it with no example code or real-world user experience. I have to wonder if we are not cultivating a "direct to Boost" pipeline by having relaxed or poorly-defined acceptance criteria.
It sounds like you'd like boost to be cutting edge yet have libraries that have field experience and a user base. Pick one. [...]
When I look at a proposed library I try to figure out what is great about it, how well it performs for its users (or even, does it have any users?), what part of the API is exceptionally well designed and ergonomic, but most importantly I want to ask: what makes this library stand out to the extent that it should be part of the library collection? What aspects of the library, if viewed by someone learning C++ or interested in improving their design skills, are inspirational?
Again, why does a library need to stand out? Why isn't it good enough to solve a real world problem in an excellent fashion? Why does it need to be exceptional, which implies it eclipses all competitors?
Is this overly demanding or exclusionary? Am I overthinking things? Should we be asking more of these types of questions and requiring better answers?
You should ask more targeted questions and accept answers. In the case of the sqlite discussion your question was "Why isn't this more like SoCi?". You completely ignored me pointing out the category mistake and all the features sqlite has that set it apart from any other sqlite client library. I think category mistakes and lack of understanding of scope can lead to many frustrating discussions here. I am pretty sure that you're familiar with people asking why beast is such a complicated version of Python's requests. The same happened during the mysql, redis and mustache reviews. I think it's important to ask two questions: 1. Based on the scope the author chose, does this library make sense in boost? 2. Is the execution great? I don't think it's illegal to question the scope, but it's usually not helpful. The author usually has a good idea of what should be in a library, like you limiting boost.json to not include other document formats, because a user can just provide his own parser. It's a choice that makes sense, so both criteria are fulfilled. I think a lot of tension arises when people don't consider the intended scope of a library, but rather assume the library's scope must be congruent with their use-case.
What are the criteria for determining if a library is good enough to become part of the collection?
It passes review. That's the criterion, and it always will be.
On 27.03.24 14:47, Vinnie Falco via Boost wrote:
Boost libraries used to be cutting edge, to such an extent that they were adopted into the C++ Standard. And now the progress is in reverse. The Standard introduces a new component, and the Boost library follows (Boost.Charconv for example). In other cases I see libraries with few to no users limping into reviews, or absent discussions which question whether or not the bar for excellence is exceeded. When I used to participate in wg21 I complained about the "direct to standard" pipeline, where people would just write papers for the sake of it with no example code or real-world user experience. I have to wonder if we are not cultivating a "direct to Boost" pipeline by having relaxed or poorly-defined acceptance criteria.

I don't see "direct to Boost" as a problem in the same way as "direct to standard". If anything, "direct to Boost" provides a compelling alternative to "direct to standard". If people are skipping Boost and going directly to the standard because it's easier to get into the standard than to get into Boost, that's a problem.
As it stands, Boost is already very much of a mixed bag. Some libraries represent the state of the art of C++ library development, some once did when they were released but have fallen behind, and some never did. I'm happy to accept new libraries that raise the average quality of Boost, and as more and more old libraries fall into obsolescence, this becomes an increasingly low bar to clear.

-- Rainer Deyke (rainerd@eldwood.com)
On Thu, Mar 28, 2024 at 12:00 AM Rainer Deyke via Boost
On 27.03.24 14:47, Vinnie Falco via Boost wrote:
Boost libraries used to be cutting edge, to such an extent that they were adopted into the C++ Standard. And now the progress is in reverse. The Standard introduces a new component, and the Boost library follows (Boost.Charconv for example). In other cases I see libraries with few to no users limping into reviews, or absent discussions which question whether or not the bar for excellence is exceeded. When I used to participate in wg21 I complained about the "direct to standard" pipeline, where people would just write papers for the sake of it with no example code or real-world user experience. I have to wonder if we are not cultivating a "direct to Boost" pipeline by having relaxed or poorly-defined acceptance criteria.

I don't see "direct to Boost" as a problem in the same way as "direct to standard". If anything, "direct to Boost" provides a compelling alternative to "direct to standard". If people are skipping Boost and going directly to the standard because it's easier to get into the standard than to get into Boost, that's a problem.
As it stands, Boost is already very much of a mixed bag. Some libraries represent the state of the art of C++ library development, some once did when they were released but have fallen behind, and some never did. I'm happy to accept new libraries that raise the average quality of Boost, and as more and more old libraries fall into obsolescence, this becomes an increasingly low bar to clear.
Was there ever a discussion about creating a process for removal of unmaintained libraries? If not, I think there should be.
Was there ever a discussion about creating a process for removal of unmaintained libraries? If not, I think there should be.
It does come up every now and again. So far as I know, the only thing we have ever managed to remove was TR1 - and only because the author (that would be me) actively pushed for it ;) We should learn to be more ruthless on that front, IMO. John.
On 3/27/24 19:37, John Maddock via Boost wrote:
Was there ever a discussion about creating a process for removal of unmaintained libraries? If not, I think there should be.
It does come up every now and again, so far as I know the only thing we have ever managed to remove was TR1 - and only because the author (that would be me) actively pushed for it ;)
We should learn to be more ruthless on that front IMO.
The biggest obstacle to removing any library is that the library may have users. This is true regardless of the perceived quality or "modern-ness" of the library.
On 27/03/2024 17:47, Andrey Semashev via Boost wrote:
The biggest obstacle to removing any library is that the library may have users. This is true regardless of the perceived quality or "modern-ness" of the library.
If Boost remains stuck with this, no libraries will ever be removed.

In my opinion, at some point it's necessary to say clearly and loudly "this library will be deprecated in Boost 1.87.0 and removed in 1.90.0". Users of the library will have time to update their code, and if it's some legacy code that cannot be changed, they will simply not update Boost anymore in their environment, remaining stuck on the last version that supports that library. It's always possible, if necessary, to make a patch release for that version. For example, if Boost is updated to 1.95.0 and we discover a severe bug, it's always possible to release a 1.89.1, the last release that supports the removed library. But only if needed. It should also be possible to define the oldest supported version, saying that bugs in versions newer than it will be patched as I've said, while older versions are out of support.

For example, for smart pointers (I don't say that we need to remove it, it's only an example) we can write on the site and in the documentation:

- this library is deprecated since version 1.87.0
- this library will be removed in version 1.91.0

And also:

- The oldest version of Boost actively supported is 1.84.0 (which means that it's possible to have a 1.84.1, but not a 1.83.1).

This way it's possible to:

- Remove old libraries (i.e. smart pointers, since they are supported in C++11)
- Give people that use deprecated libraries time to update their code
- Support people that cannot update their code for any reason, for a defined period of time.

Regards

Daniele Lupo
On 27/03/2024 17:04, Daniele Lupo via Boost wrote:
On 27/03/2024 17:47, Andrey Semashev via Boost wrote:
The biggest obstacle to removing any library is that the library may have users. This is true regardless of the perceived quality or "modern-ness" of the library.
If boost remains stuck with this, no libraries will ever be removed.
In my opinion, at some point it's necessary to say clearly and loudly "this library will be deprecated in Boost 1.87.0 and removed in 1.90.0". Users of the library will have time to update their code, and if it's some legacy code that cannot be changed, they will simply not update Boost anymore in their environment, remaining stuck on the last version that supports that library. It's always possible, if necessary, to make a patch release for that version. For example, if Boost is updated to 1.95.0 and we discover a severe bug, it's always possible to release a 1.89.1, the last release that supports the removed library. But only if needed. It should also be possible to define the oldest supported version, saying that bugs in versions newer than it will be patched as I've said, while older versions are out of support.
For example, for smart pointers (I don't say that we need to remove it, it's only an example) we can write in the site and in the documentation
- this library is deprecated since version 1.87.0
- this library will be removed in version 1.91.0
And also
- The oldest version of Boost actively supported is the 1.84.0 (that means that it's possible to have 1.84.1, but not 1.83.1).
This way it's possible to:
- Remove old libraries (i.e. smart pointers, since they are supported in C++11)
- Give people that use deprecated libraries time to update their code
- Support people that cannot update their code for any reason, for a defined period of time.
Right, but also we can leave the GitHub repos in place, and folks can download and use the "last known good version" on top of a later Boost if they wish. Just my 2c... John.
On Fri, Mar 29, 2024 at 3:00 PM Daniele Lupo via Boost <boost@lists.boost.org> wrote:
On 27/03/2024 17:47, Andrey Semashev via Boost wrote:
The biggest obstacle to removing any library is that the library may have users. This is true regardless of the perceived quality or "modern-ness" of the library.
If boost remains stuck with this, no libraries will ever be removed.
This way it's possible to:
- Remove old libraries (i.e. smart pointers, since they are supported in C++11)
- Give people that use deprecated libraries time to update their code
- Support people that cannot update their code for any reason, for a defined period of time.
Regards
Daniele Lupo
And kill projects that target older C++03 platforms? Don't maintain, update or improve, but remove? If somebody wants a smart pointer that is consistent across compiler versions, the Boost ones are a very good option. Be responsible to those who use your code. Don't break it unless it is absolutely necessary.
And kill projects that target older C++03 platforms? Don't maintain, update or improve, but remove?
If somebody wants a smart pointer that is consistent across compiler versions, the Boost ones are a very good option.
Be responsible to those who use your code. Don't break it unless it is absolutely necessary.
In my opinion, yes.

Let's start from a point: nobody will kill C++03 code on older platforms and with older compilers; it will continue to work properly with existing Boost versions. Simply, the old code will not be able to use, at some point, the latest Boost version, and that's also ok, since new Boost releases will use code more modern than C++03, so simply upgrading the library will break the old code.

My idea, as I've said, is to deprecate old libraries at some point, remove them later in a future version, and maintain old Boost versions for bugfixing. It's something that many other libraries do. At some point, to avoid boilerplate in the library/framework, some deprecated features must be removed, and there will be some API breakage. Even C++ did it with auto_ptr, for example; why should Boost not do it? Why should Boost rely on a bunch of configuration macros to write and maintain code that could be developed in a much cleaner way?

More importantly, Boost already did it: in version 1.31 the Compose library was deprecated, and in 1.32 it was removed. Individual libraries have had breaking changes during their history, like Spirit. I'm simply saying that to maintain the code, and to make development of new libraries easier, we should start to think about cleaning up the actual code base a bit. It's something that every project must do, at some point.

And I will repeat again: this does not imply that old code will break. Simply, old code developed with a specific version of Boost cannot be compiled with a more recent version of it. If the developer finds some Boost bug, it's always possible to create a patch release, provided the Boost version they use is more recent than the last supported one in my proposal. If your code uses a really old Boost version, maybe it's not possible to upgrade to the latest version for free, but that happens with all other libraries too, and with compilers.
So I don't think that with this we'll break code, and I've said clearly that smart pointers were only an example and not a real candidate for removal; that is something the community should decide. But I'm convinced that sooner or later this work must be done, and in my experience sooner is better than later. Regards Daniele Lupo
On 4/2/24 13:46, Daniele Lupo via Boost wrote:
And kill projects that target older C++03 platforms? Don't maintain, update or improve, but remove?
If somebody wants some smart pointer that is consistent across compiler versions that boost ones are a very good case.
Be responsible to ones who use your code. Don't break it unless it is absolutely necessary
In my opinion, yes.
Let's start from a point: nobody will kill C++03 code on older platforms and with older compilers; it will continue to work properly with existing Boost versions. Simply, the old code will not be able to use, at some point, the latest Boost version, and that's also ok,
No, not ok.

Incompatibility with newer Boost releases means that the code is no longer compatible with other code that *requires* the newer Boost. For example, a library that uses boost::shared_ptr will be incompatible with an application or another library that requires a Boost version where boost::shared_ptr has been removed.

There is also an issue of shipping code that requires an older Boost version in a Linux distro (and probably other OS distros), because typically distros only ship one Boost version system-wide. This means that either distro maintainers now have to ship multiple Boost versions (which is a maintenance and technical problem) or the code in question needs to be removed from the distro and manually built by users who need it (which is, again, a maintenance and technical problem shifted downstream).

Note that building old Boost on a newer system may be problematic by itself due to updated dependencies. For example, if the old Boost was only compatible with OpenSSL 1.0 and the newer system has migrated to OpenSSL 3.0, you won't be able to build without extensive patching.

(By "technical problem" above I mean that two different Boost versions may not be possible to install on the same system using the standard package manager. Yes, shared libraries can coexist via version suffixes in library names, but this doesn't work for headers and static libraries. Fixing this would require renaming libraries and header directories and would break builds of everything downstream, so basically won't happen.)
My idea, as I've said, is to deprecate old libraries at some point, remove them later in a future version, and maintain old boost version for bugfixing.
That's also problematic. The current Boost workflow is not well suited for maintenance releases, let alone maintenance of older branches of Boost. There are no older branches. Every Boost release is made from master, and represents its current state, with all bug fixes, new features and removals, should those happen. Then there is the added maintenance burden of the older branches, if we were to create them.

So there's another problem with the "old code uses old Boost" approach: we don't do point releases. I'm going to leave the discussion on whether this is good or bad aside, but this means if there's a bug or security vulnerability in the older Boost version, the old code is either stuck with it or has to patch its Boost itself.

So my main point is that "old code uses old Boost" is a myth in practice.
On 02/04/2024 13:20, Andrey Semashev via Boost wrote:
No, not ok.
Incompatibility with newer Boost releases means that the code is no longer compatible with other code that *requires* the newer Boost. For example, a library that uses boost::shared_ptr will be incompatible with an application or another library that requires Boost version where boost::shared_ptr has been removed.
Sorry, but I don't see the point here. What you're saying is true, but it's the norm. If I write a program that links to Qt4, it cannot be linked with another program that uses Qt6. Nobody complains about it. If you want to link to another library/interface, you need to hide the libraries that you use, for example with Pimpl, or expose the dependency if you need it in your public API and require that the other library use the same version.

That's the reason for which, for example, some commercial software for which you can write plugins gives you an SDK with the libraries that the company uses for building the program, forcing you to use the same versions. MAK does it with VR-Forces (you can find the Qt version that they use on their site, so you can download and use it), and Unreal Engine does the same, including all its dependencies in a ThirdParty folder that you download when you clone its repository.

Also, incompatibilities can arise even when a library is not removed, but only updated, in API or in ABI. So this is a non-problem for me: if you want to use code that uses a specific Boost (or other library) version, you must use that version, or anyway a compatible version.
There is also an issue of shipping the code that requires an older Boost version in a Linux distro (and probably other OS distros), because typically distros only ship one Boost version system-wide. This means that either distro maintainers now have to ship multiple Boost versions (which is a maintenance and technical problem) or the code in question needs to be removed from distro and manually built by users who need it (which is, again, a maintenance and technical problem shifted downstream).
I'm not an expert in this, so you probably have a point here, but at the moment I'm looking at the Ubuntu distro, and I can see that in the repository there are two different versions of Boost: apt-cache search --names-only "libboost(.*?)-all-dev" returns version 1.74 and version 1.71. So it's possible (and I think mandatory) for a distro to maintain and use different versions of a library. If you want to install programs from the repository that use different versions of Boost, they will install the corresponding dependencies.
Note that building old Boost on a newer system may be problematic by itself due to updated dependencies. For example, if the old Boost was only compatible with OpenSSL 1.0 and the newer system has migrated to OpenSSL 3.0, you won't be able to build without extensive patching.
Again, that's not a problem in my opinion. This is the same dependency problem, handled and solved in many ways. For example, with vcpkg you should be able to download a specific version of a library and build compatible versions of its dependencies. I don't know if this is already done for Boost, but it works with other libraries. If you need to work with specific versions of a library, especially an old one, you're usually not using the system-wide installed libraries, but old ones, so it's normal to use them together. If you use the old Boost version, you should also use the old OpenSSL version. I don't see the problem. I see more problems in trying to maintain this overwhelming compatibility between many library versions.
(By "technical problem" above I mean that two different Boost versions may not be possible to install on the same system using the standard package manager. Yes, shared libraries can coexist via version suffixes in library names, but this doesn't work for headers and static libraries. Fixing this would require renaming libraries and header directories and would break builds of everything downstream, so basically won't happen.)
My idea, as I've said, is to deprecate old libraries at some point, remove them later in a future version, and maintain old Boost versions for bugfixing.

That's also problematic. The current Boost workflow is not well suited for maintenance releases, let alone maintenance of older branches of Boost. There are no older branches. Every Boost release is made from master, and represents its current state, with all bug fixes, new features and removals, should those happen.

The branch can be unique, but we can have different tags. If we are at version 1.90 and there's a problem in 1.88, we can simply create a new branch starting from the 1.88 release commit, named for example "xxx-hotfix" where xxx is the issue; solve it, test it, and commit. Then we create the new tag 1.88.1 on the fixed commit and, if necessary, we merge this commit into the main branch, or into later releases (creating also a 1.89.1 in this case). It's something that can be done easily, in a way similar to what the git workflow does. I don't think that this can break the existing development logic; it's simply an addition that must be used only if necessary. I don't expect that this will happen often, as it's not the normal development workflow, but in the few cases where it's necessary, we know what to do.

It's true that if I try to install both of them I get errors, like you say, and this is correct, but it can be solved somehow. Since there are breaking changes, for me it's ok to update the major version of Boost every time a library is removed, from boost1 to boost2 for example, and then the maintainers should be able to handle this. It's a bit of work, you're right, but it's no different from the kind of work that they already do for other libraries. If I want to develop software with the system-wide libraries, I need to select one version and use it. If I want to use another version, I need to install the correct one with a package manager. I can see some problems here, you're right, but nothing that cannot be solved with a bit of configuration.
Then there is the added maintenance burden of the older branches, if we were to create them.
So there's another problem to the "old code uses old Boost" approach: we don't do point releases. I'm going to leave the discussion on whether this is good or bad aside, but this means if there's a bug or security vulnerability in the older Boost version, the old code is either stuck with it or has to patch their Boost themselves.
So my main point is that "old code uses old Boost" is a myth in practice.
I don't think that it's a myth. It can be achieved easily by adapting the development a bit, putting tags on the correct commits and creating hotfix branches like many other projects already do, for example following "A successful Git branching model" (https://nvie.com/posts/a-successful-git-branching-model/). If people don't want to do it because they're not used to doing it, fine by me, but it's not a technical issue. If you decide that we don't need this, it's fine by me, but in my experience at some point every project needs refactoring and re-thinking, removing old and unused code and trying to modernize it a bit. The more we postpone this step, the more problems we will have. Regards Daniele Lupo
On 4/2/24 17:27, Daniele Lupo via Boost wrote:
On 02/04/2024 13:20, Andrey Semashev via Boost wrote:
No, not ok.
Incompatibility with newer Boost releases means that the code is no longer compatible with other code that *requires* the newer Boost. For example, a library that uses boost::shared_ptr will be incompatible with an application or another library that requires Boost version where boost::shared_ptr has been removed.
Sorry, but I don't see the point here. What you're saying it's true, but it's the normality. If I write a program that links to Qt4, cannot be linked to another program that uses Qt6. Nobody complains it. If you want to link to another library/interface, you need to hide the libraries that you uses, for example with Pimpl, or expose the depedency if you need it in your public API and you have to require that the other library will use the same version.
Pimpl is not the solution, at least not on Linux, where publicly visible symbols are linked across all modules in the process. Even if you hide Boost X components behind Pimpl in one module, you still can't use Boost Y in other parts of the process, unless you explicitly hide all symbols coming from Boost X and Y. This means no interaction involving Boost types, including exceptions, is allowed between the two parts of the program.
That's the reason for which, for example, some commercial software for which you can write plugins, gives the SDK to you, with the libraries that the company uses for building the program, forcing you to use the same version. MAK does it with Vr-Forces (you can find the qt version that they use in their site so you can download and use it), Unreal Engine does the same including all its dependencies in a ThirdParty folder that you download when you clone its repository.
Right, that's the problem with binary SDKs. That's usually not the problem with open source software, since you can usually just rebuild it with the dependencies you want. But this becomes problematic if you have two dependencies that require different Boost versions.
Also, the incompatibilities can arise even when the library is not removed, but only updated, in API or in ABI. So this is a no-problem for me: if you want to use code that use a specific boost (or other libraries) version, you must use that version or anyway a compatible version.
If API of some Boost library is updated, usage of that API in other Boost libraries is also updated. So when you take a new Boost release, you're receiving a package that works. It may require updating downstream code, but nonetheless, you get a single Boost version that works, as opposed to having multiple Boost versions, each working in some but not all downstream projects.
There is also an issue of shipping the code that requires an older Boost version in a Linux distro (and probably other OS distros), because typically distros only ship one Boost version system-wide. This means that either distro maintainers now have to ship multiple Boost versions (which is a maintenance and technical problem) or the code in question needs to be removed from distro and manually built by users who need it (which is, again, a maintenance and technical problem shifted downstream).
I'm not an expert in this, so you probably have a point here, but at the moment I'm looking at Ubuntu distro, and I can see that in the repository there are two different versions of boost:
apt-cache search --names-only "libboost(.*?)-all-dev"
returns version 1.74 and version 1.71.
Not on my system (it only shows 1.74 for me on Kubuntu 22.04). I vaguely remember there was a time when two Boost versions were packaged, but you couldn't install both versions of -dev packages at the same time, for the reasons I described. This allows you to build software that depends on *one* Boost version at a time, but not both. Meaning, you still cannot combine the two versions in one project.
Note that building old Boost on a newer system may be problematic by itself due to updated dependencies. For example, if the old Boost was only compatible with OpenSSL 1.0 and the newer system has migrated to OpenSSL 3.0, you won't be able to build without extensive patching.
Again, that's not a problem in my opinion. This is the same dependency problem, handled and solved in many ways. For example, with vcpkg you should be able to download a specific version of a library and build compatible versions of its dependencies. I don't know if this is already done for Boost, but it works with other libraries. If you need to work with specific versions of a library, especially an old one, you're usually not using the system-wide installed libraries, but old ones, so it's normal to use them together. If you use the old Boost version, you should also use the old OpenSSL version. I don't see the problem. I see more problems in trying to maintain this overwhelming compatibility between many library versions.
The problem is that the old OpenSSL version is no longer shipped by the distro. Distro maintainers are generally not happy about having to drag multiple versions of the same software, as it has an ongoing maintenance cost.
My idea, as I've said, is to deprecate old libraries at some point, remove them later in a future version, and maintain old boost version for bugfixing. That's also problematic. The current Boost workflow is not well suited for maintenance releases, let alone maintenance of older branches of Boost. There are no older branches. Every Boost release is made from master, and represents its current state, with all bug fixes, new features and removals, should those happen. The branch can be unique, but we can have different tags. If we are at version 1.90, and there's a problem in 1.88, we can simply create a new branch starting from the 1.88 release commit, named for example "xxx-hotfix", where xxx is the issue, solve it, test it, and commit.
It's not only about a branch in the library; it requires changing the logic of how the superproject works, how testing is done across all libraries, and probably how release packaging is done. There's an organizational part as well, e.g. how often we make point releases and who's going to manage them. It's not impossible to do, but it surely doesn't look trivial either.
On Tue, Apr 2, 2024 at 5:35 PM Daniele Lupo via Boost
On 02/04/2024 13:20, Andrey Semashev via Boost wrote:
No, not ok.
Incompatibility with newer Boost releases means that the code is no longer compatible with other code that *requires* the newer Boost. For example, a library that uses boost::shared_ptr will be incompatible with an application or another library that requires Boost version where boost::shared_ptr has been removed.
Sorry, but I don't see the point here. What you're saying it's true, but it's the normality. If I write a program that links to Qt4, cannot be linked to another program that uses Qt6. Nobody complains it.
I wish Boost were nearly as compatible as Qt. Seriously. Qt has very good API compatibility between major releases and binary compatibility across an entire major version. Boost is horrible at that. It was one of the reasons I barely use Boost nowadays: you can never know when it breaks something. I have experienced so many issues with compilation against different versions of Boost (not in my code) that it is really a big problem. Qt does an amazing job of keeping its ABI stable; Boost can't even keep a barely stable API. Artyom
Artyom Beilis via Boost
Qt has very good API compatibility between major releases and binary compatibility across entire major version.
That was not my experience at all. I've ported a very simple Qt application (the Kconfig GUI in the Linux kernel) from Qt5 to Qt6 and it required quite a few changes[1], mostly because a lot of APIs deprecated in Qt5 have been dropped in Qt6. Maybe this is the case if you diligently migrate off deprecated stuff as soon as it's marked as such, but definitely not if you follow the standard "don't fix what ain't broken" methodology. [1] https://lore.kernel.org/lkml/20230809114231.2533523-2-boris@codesynthesis.co...
On 11/04/2024 16:27, Artyom Beilis via Boost wrote:
I wish Boost was nearly as compatible as Qt. Seriously.
Qt has very good API compatibility between major releases and binary compatibility across an entire major version. Boost is horrible at that.
It was one of the reasons I barely use Boost nowadays because you can never know when it breaks something.
I have experienced so many issues with compilation against different versions of boost (not in my code) that it is really a big problem.
Qt does an amazing job of keeping its ABI stable; Boost can't even keep a barely stable API.
Artyom
I don't, instead. Forcing ABI compatibility has created various issues in the C++ standard, for example the performance of std::regex, or the existence of std::jthread. I mean, ABI compatibility is good, but in my opinion it should be no more important than API compatibility or performance. If I'm developing a project, I use a specific version of Boost. If for some reason I need to update it, I simply rebuild the project; if the API is compatible I only need to rebuild the project without changing code, and it works; I don't care about ABI.

Qt obtains its ABI compatibility not because the layout of classes remains the same, but through the Pimpl idiom, which adds a lot of indirection, everywhere. This can be good for widgets, where the performance loss is negligible, but in cases where performance is important the Pimpl impact can be noticeable. I prefer performance over ABI compatibility in this case. Also, compatibility forces you not to update or refactor the interface of classes, creating very big classes that cannot be refactored easily (I mean, look how many things QString does, for example).

My idea is that library compatibility between different versions should be:

- ABI compatibility between patch versions (1.84.0 -> 1.84.1)
- API compatibility between minor versions (1.84.0 -> 1.85.0)
- API can break between major versions (1.84.0 -> 2.0.0)

This can ensure a good tradeoff between improvement of a library/framework and compatibility with existing code, in my opinion.

Daniele
Andrey Semashev wrote:
So my main point is that "old code uses old Boost" is a myth in practice.
It's not a myth; old code does use old Boost. But at the same time, old code often needs to upgrade its Boost if it upgrades something else, such as compiler or OS, because the old Boost sometimes breaks.
On 4/2/24 18:24, Peter Dimov via Boost wrote:
Andrey Semashev wrote:
So my main point is that "old code uses old Boost" is a myth in practice.
It's not a myth; old code does use old Boost. But at the same time, old code often needs to upgrade its Boost if it upgrades something else, such as compiler or OS, because the old Boost sometimes breaks.
If the old code can be rebuilt with the newer Boost, that's not what I call "old code uses old Boost". It's when you can't rebuild, or put another way, when there is a non-trivial amount of work that needs to be done to port the old code to the newer Boost. And the myth is that such "old code uses old Boost" state is practically sustainable for that old code. Either the old code gets phased out (which can be painful for its users) or it has to be ported to the newer Boost (or away from Boost) to remain relevant and available to users.
Daniele Lupo wrote:
This way it's possible to:
- Remove old libraries (i.e. smart pointers, since they are supported in C++11)
Yeah, that (the "i.e." part) is not happening in the foreseeable future. But in principle, were it to happen, one requirement for removing a library would be for it to have no dependents in Boost. Obviously, its removal would break everything that depends on it. https://pdimov.github.io/boostdep-report/develop/smart_ptr.html#reverse-depe... In addition to that, we can use the reverse dependencies to gauge the probability of the library still being used outside of Boost. If we use it, someone else probably does, too.
On 27.03.24 18:04, Daniele Lupo via Boost wrote:
On 27/03/2024 17:47, Andrey Semashev via Boost wrote:
The biggest obstacle to removing any library is that the library may have users. This is true regardless of the perceived quality or "modern-ness" of the library.
If boost remains stuck with this, no libraries will ever be removed.
In my opinion, at some point it's necessary to say clearly and loudly "this library will be deprecated in Boost 1.87.0 and removed in 1.90.0". Users of the library will have time to update their code, and if it's legacy code that cannot be changed, they simply will not update Boost in that environment anymore, staying on the last version that supports the library. It's always possible, if necessary, to publish a patch release for that version. For example, if Boost has moved on to 1.95.0 and we discover a severe bug, it's always possible to release a 1.89.1, if that is the last release that supports the removed library - but only if needed. It should also be possible to define the oldest actively supported version, meaning that bugs in versions newer than it will be patched as described, while older versions are out of support.
For example, for smart pointers (I don't say that we need to remove it, it's only an example) we can write in the site and in the documentation
- this library is deprecated since version 1.87.0
- this library will be removed in version 1.91.0
And also
- The oldest version of Boost actively supported is the 1.84.0 (that means that it's possible to have 1.84.1, but not 1.83.1).
This way it's possible to:
- Remove old libraries (i.e. smart pointers, since they are supported in C++11)
- Give time to people that use deprecated libraries to update their code
- Support people that cannot update the code for any reason for a defined period of time.
I strongly feel that certain old libraries should be deprecated. I also strongly feel that these libraries should never be removed - or at least, not without a Boost 2.0 release in a boost2 namespace that can exist side by side with Boost 1.x.
Deprecation should mean that there is broad agreement that the library should not be used in new code. One corollary is that new code should have better alternatives available. Forcing people to pin their legacy code to old versions of Boost goes directly against that goal.
Example: library X2 is introduced to Boost as a better replacement for library X1, and library X1 is deprecated. I have a project that uses X1. Refactoring existing code to use X2 is not viable, so the project is pinned to the latest version of Boost that still includes X1. Then X3 is introduced and X2 is deprecated, but I am forced to continue using X2 because my project is pinned to a version of Boost that does not include X3.
-- Rainer Deyke (rainerd@eldwood.com)
For example, for smart pointers (I don't say that we need to remove it, it's only an example) we can write in the site and in the documentation
- this library is deprecated since version 1.87.0
- this library will be removed in version 1.91.0
I strongly feel that certain old libraries should be deprecated. I also strongly feel that these libraries should never be removed - or at least, not without a Boost 2.0 release in a boost2 namespace that can exist side by side with Boost 1.x.
I recently got all of our C++ building in C++20 mode. I had the thought that it would be nice to have a C++20 "spin" of Boost that only included things relevant for "modern" C++. So things like smart pointers would _not_ be there, because we encourage use of standard language and library features where possible.
However, I also have a counter-argument to the same idea. We prefer Boost random because it produces the same stream of numbers across platforms. It's an exception to our general rule of using std where possible.
Perhaps a concept of "core" Boost and "extras", then? Things in extras would include some description of why they might not be recommended for new code. And then there is a #define to limit things to core.
- Nigel Stewart
Nigel Stewart via Boost
I had the thought that it would be nice to have a C++20 "spin" of Boost that only included things relevant for "modern" C++. So things like smart pointers would not be there because we encourage use of standard language and library features where possible.
- Nigel Stewart
I recently moved _to_ Boost smart pointers in a C++20 project, feeling that they are superior to the std variants - in particular because the Boost shared pointer supports the concept of local_shared_ptr, which does not use atomics. It seems to be a useful extension to std. https://www.boost.org/doc/libs/1_84_0/libs/smart_ptr/doc/html/smart_ptr.html... Jakob
It's funny that SmartPtr is being brought up so much because SmartPtr contains a few key things not found in the STL. First and foremost, intrusive_ptr. I've built an entire I/O runtime using intrusive_ptr as it's more suited for low-level C APIs than the two word smart pointer impls are. For that reason, SmartPtr should never really die or go away or attempt to be replaced. SmartPtr is also the only place you'll find `allocate_unique`. - Christian
Christian Mazakas wrote:
It's funny that SmartPtr is being brought up so much because SmartPtr contains a few key things not found in the STL. First and foremost, intrusive_ptr.
I've built an entire I/O runtime using intrusive_ptr as it's more suited for low-level C APIs than the two-word smart pointer impls are.
For that reason, SmartPtr should never really die or go away or attempt to be replaced. SmartPtr is also the only place you'll find `allocate_unique`.
I don't understand the hate for SmartPtr either. In addition to things that aren't
in std at all, such as
* intrusive_ptr
* local_shared_ptr
* enable_shared_from
* allocate_unique
* owner_equal_to
* owner_hash
(and probably others that don't come to mind at the moment)
some features aren't part of C++11 but have only been added in later standards,
such as
* shared_ptr
On Sun, Mar 31, 2024 at 6:08 PM Peter Dimov via Boost
Christian Mazakas wrote:
It's funny that SmartPtr is being brought up so much because SmartPtr contains a few key things not found in the STL. First and foremost, intrusive_ptr.
I've built an entire I/O runtime using intrusive_ptr as it's more suited for low-level C APIs than the two-word smart pointer impls are.
For that reason, SmartPtr should never really die or go away or attempt to be replaced. SmartPtr is also the only place you'll find `allocate_unique`.
I don't understand the hate for SmartPtr either. In addition to things that aren't in std at all, such as
* intrusive_ptr
* local_shared_ptr
* enable_shared_from
* allocate_unique
* owner_equal_to
* owner_hash
(and probably others that don't come to mind at the moment)
My guess would be that >80% of users of Boost SmartPtr are just annoyed that, 13 years after C++11, they have a mix of Boost and std in their codebase. In real codebases people often do not bother to upgrade old code; they just start using new stuff, so you end up with a mix of boost and std. The advanced features you mention are probably not used in most codebases, and/or people are not familiar with them. Not everybody is a power user. Actually, TIL about local_shared_ptr - and I have seen people reimplement local_shared_ptr because they wanted to avoid the overhead of atomic increments :)
On Sun, Mar 31, 2024 at 11:04 AM Ivan Matek via Boost
My guess would be that >80% of users of boost SmartPtr are just annoyed that 13 y after C++11 they have mix of Boost and std in their codebase. Because in real codebases often people do not bother to upgrade old code, they just start using new stuff so you end up with mix of boost and std.
That is probably true, but removing Boost.SmartPtr does nothing to solve it. If the maintainers of a code base are bothered by a mixture of smart pointer types, they must refactor the code base in question to use only one type instead. boost::shared_ptr and std::shared_ptr are almost identical, so an automated search and replace would likely suffice. Thanks
Am 31.03.2024 um 21:17 schrieb Vinnie Falco via Boost:
On Sun, Mar 31, 2024 at 11:04 AM Ivan Matek via Boost wrote:
My guess would be that >80% of users of boost SmartPtr are just annoyed that 13 y after C++11 they have mix of Boost and std in their codebase. Because in real codebases often people do not bother to upgrade old code, they just start using new stuff so you end up with mix of boost and std.
Right, this mixture causes avoidable costs on various metrics.
That is probably true, but removing Boost.SmartPtr does nothing to solve it. If the maintainers of a code base are bothered by a mixture of smart pointer types, they must refactor the code base in question to use only one type instead. boost::shared_ptr and std::shared_ptr are almost identical, so an automated search and replace would likely suffice.
And possibly fall into a trap related to the operators: *only* the (in)equality operators have the same behaviour as their std:: counterparts. The other relational operators are missing in boost::shared_ptr (by design!) but exist for std::shared_ptr, with different behaviour. While both boost:: and std:: flavoured shared_ptrs are designed to be usable as keys in associative containers, they order differently and have a different definition of equivalence. Other Boost libraries (e.g. Signals2) depend on that. AFAIK, this isn't brought to attention anywhere (but what do I know). Thanks, Dani -- PGP/GPG: 2CCB 3ECB 0954 5CD3 B0DB 6AA0 BA03 56A1 2C4638C5
Daniela Engert wrote:
And possibly fall into a trap related to operators: *only* the (in)equality operators have the same behaviour as their std:: counterpart, the other relational operators are either missing in boost::shared_ptr (by design!) but exist for std::shared_ptr, and have different behaviour. While both boost:: and std:: flavoured shared_ptrs are designed to be used as keys in associative containers, they order differently and have a different definition of equivalence. Other Boost libraries (like e.g. Signals2) depend on that. AFAIK, this isn't brought to attention anywhere (but what do I know).
The committee didn't like my operator< so they changed it, and added the rest of the relationals. Removal would have been a better choice - there are hardly any legitimate uses of p >= q. But it is what it is. I can't "fix" this now because it will potentially introduce silent breakage into a lot of existing code. The only thing I can do is delete operator< for a decade or so until everyone stops using it, then maybe add the standard behavior.
On 01/04/2024 07:36, Daniela Engert via Boost wrote:
Please read my mail again. The deprecation of smart pointers was only an example; I didn't mean that I want to deprecate it. I'm sorry if I was not clear, but I simply needed an example of a library, and the smart pointer one was the first that came to my mind. Again, sorry for the confusion about it.
Daniele Lupo
On Sun, Mar 31, 2024 at 9:17 PM Vinnie Falco via Boost < boost@lists.boost.org> wrote:
That is probably true, but removing Boost.SmartPtr does nothing to solve it. If the maintainers of a code base are bothered by a mixture of smart pointer types, they must refactor the code base in question to use only one type instead. boost::shared_ptr and std::shared_ptr are almost identical, so an automated search and replace would likely suffice.
Thanks
Well, it forces people to do it. :) I have seen a lot of codebases fix parts of their technical debt just because they had to, e.g. they wanted to upgrade Boost or the compiler, and now some UB or deprecated code needed to be fixed.
If you do not care about people who do not care about their code, that is fair, but there is one more thing...
Removing stuff has a positive PR effect. As a user I have the feeling that additions to Boost are always amazing, but there is no cleanup, and that gives Boost a bad image. I know people have done amazing work on existing libraries like Unordered, but I really have no idea why boost::array was not removed 5+ years ago.
This is nothing against boost::array; if it were not a great library it would not be in std::, but for me the natural process would be to remove it from Boost.
Or why is
#include
Ivan Matek wrote:
Well it forces people to do it. :) I have seem a lot of codebases fix parts of their technical debt just because they had to, e.g.. they wanted to upgrade boost or compiler and now some UB or deprecated code needed to be fixed. If you do not care about people who do not care about their code that is fair, but there is one more thing...
"Everyone in the world should be denied the Boost smart pointers because it will force us to fix our code base" is not a serious attitude to adopt.
On Mon, Apr 1, 2024 at 5:11 PM Peter Dimov via Boost
"Everyone in the world should be denied the Boost smart pointers because it will force us to fix our code base" is not a serious attitude to adopt.
This has been discussed 1000x times; sensible people disagree. IIRC Alphabet gave up on C++ for a similar reason, although you may think that ABI and API are two totally different things. Also, there is nothing preventing people who do not wish to adapt from keeping Boost 1.70 or whatever version has shared_ptr.
I am not some fanatic who will upgrade the --std version in prod as soon as a new compiler lands, but I really see no value in enabling people to not upgrade to std::shared_ptr for 10 years. In the long run you are not doing them a favor by enabling that kind of behavior. Sure, there is one enlightened dev in some big company who would suffer if you removed his access to shared_ptr, but those cases are the exception. Most developers have had access to std::shared_ptr for 5+ years, and there is little reason for them to keep using the Boost version. To be clear, here I am talking about shared_ptr only; as I mentioned, I have seen people reimplement local_shared_ptr, so there is value in functionality not in std::.
Also, as I mentioned, the bigger issue is that this policy makes Boost suffer from the same bad image that C++ has. Nothing ever gets removed, even if it is totally obsolete. auto_ptr still compiles with --std=c++23 in gcc (although it emits a warning). Why would you enable this kind of terrible code in 2024? If some companies do not want to invest in upgrading, that is their problem, not a problem for the entire community.
tl;dr: I really do not understand why there is this expectation that people can upgrade the --std or Boost version and not do any work to fix their code.
Ivan Matek wrote:
Also there is nothing preventing those people who do not wish to adapt to keep using boost 1.70 or whatever version has shared_ptr.
We are not talking about shared_ptr here, but about everything else in the smart pointers library that has no standard equivalent. But even if we were talking about just shared_ptr, how would I develop it further? Fork Boost from 1.70 and proceed there? If you want Boost without SmartPtr, delete it from your copy and carry on.
On 4/1/24 19:01, Ivan Matek via Boost wrote:
Also I mentioned bigger issue is that this policy makes boost suffer from same bad image that C++ has. Nothing ever gets removed, even if it is totally obsolete. auto_ptr still compiles with --std=c++23 in gcc(although it emits a warning). Like why would you enable this kind of terrible code in 2024?
I have first-hand experience with the above. We have a C++03 library that isn't going to be upgraded any time soon, if ever, which uses auto_ptr all over the place, including in public interfaces. Our code base is C++17, and we're also using other libraries that require C++17, and it's not unrealistic that we move beyond that in the not too distant future (more likely, by being forced to by one of our dependencies). So yeah, like it or not, auto_ptr and lots of other deprecated and frowned-upon code still exist in 2024, and will likely exist for many years to come. Which is why I think removing stuff from the standard library is wrong.
If some companies do not want to invest into upgrading that is their problem, not a problem for entire community.
For the life of me, I don't get why "the entire community" has an issue with auto_ptr, or boost::shared_ptr or whatever. Is there someone forcing you to use these components when you can use the more "modern" alternatives? You argue that everyone using auto_ptr must up and rewrite their code at once, and yet you have a problem with converting your code from boost::shared_ptr to std::shared_ptr. I wonder why. This "let's destroy C++03 and live in the C++Latest wonderland" really sounds like a religious mantra sometimes, while the pragmatic truth is that you often use what you must, and for that to be even possible the damn components must exist in the first place.
This "let's destroy C++03 and live in the C++Latest wonderland" really sounds like a religious mantra sometimes, while the pragmatic truth is that you often use what you must, and for that to be even possible the damn components must exist in the first place.
A sensible compromise would be to document parts of Boost as, for example, „in new code, prefer std::shared_ptr over boost::shared_ptr“, and to use std::shared_ptr inside Boost if it is available, to avoid compiling two sets of code doing the same thing. -- Dr. Arno Schödl, CTO, think-cell Software GmbH (https://www.think-cell.com)
Ivan Matek wrote:
Or why is
#include
still available in latest boost?
We should probably do something about that. Unfortunately, `#define BOOST_FOREACH(VAR, COL) for(VAR: COL)` doesn't work (of course), because as it turns out BOOST_FOREACH supports iteration over string literals and std::pair
On Mon, Apr 1, 2024 at 7:22 PM Peter Dimov via Boost
Ivan Matek wrote:
Or why is
#include
still available in latest boost?
We should probably do something about that. Unfortunately, `#define BOOST_FOREACH(VAR, COL) for(VAR: COL)` doesn't work (of course) because as it turns out BOOST_FOREACH supports iteration over string literals and std::pair
My mistake, I should have checked everything it supports before bringing it up as an example; as I said, the last time I used it was 10+ years ago and in very vanilla use cases.
This works with BOOST_FOREACH, but not with the language one.
#include <set>
#include <print>                 // C++23
#include <boost/foreach.hpp>

std::multiset<int> ms{1,2,2,3,3,3,4,4,4,4,5,5,5,5,5};
auto gimme_five = ms.equal_range(5);

BOOST_FOREACH(int i, gimme_five) {
    std::print("{}", i);
}

for (int i : gimme_five) { // does not compile: std::pair is not a range
    std::print("{}", i);
}
I would still obviously not do this in prod, because despite how cute it is, I would rather have a wrapper that knows how to rangify a pair.
Ivan Matek wrote:
As for the other comments: as I have said, this has been discussed 1000x. I did not expect to change anybody's mind, and now I kind of feel bad for wasting people's time, since these discussions rarely change anybody's mind. But at least one nice thing is that I fully agree with Arno, so there is at least some benefit to this discussion :)
What we need to do is to systematically transition Boost libraries to standard components. The first and necessary step for this was dropping C++03, and we finally achieved liftoff there; now we need to finish the job. (I've on my TODO list migrating CRC off of boost::array et al in 1.86, for instance.) This is going to happen, eventually. Sooner, if we get PRs.
On 02/04/2024 00:27, Peter Dimov via Boost wrote:
Ivan Matek wrote:
As for other comments: as I have said this has been discussed 1000x, I did not expect to change anybody's mind, and now I kind of feel bad for wasting people's time since these discussions rarely change anybody's mind, but at least I think one nice thing is that I fully agree with Arno so there is at least some benefit to this discussion :)
What we need to do is to systematically transition Boost libraries to standard components. The first and necessary step for this was dropping C++03, and we finally achieved liftoff there; now we need to finish the job. (I've on my TODO list migrating CRC off of boost::array et al in 1.86, for instance.)
This is going to happen, eventually. Sooner, if we get PRs.
This sounds like a good thing, though no small job: in one of my idler moments I wondered who was still using Boost.StaticAssert - well it turns out nearly everyone, including some newer libraries that I'd assumed were at least C++11 anyway. John.
John Maddock wrote:
This sounds like a good thing, though no small job: in one of my idler moments I wondered who was still using Boost.StaticAssert - well it turns out nearly everyone, including some newer libraries that I'd assumed were at least C++11 anyway.
The "problem" with StaticAssert is that BOOST_STATIC_ASSERT is still useful in C++11 (and 14), because `static_assert` without a message is C++17. So a simple replacement doesn't suffice, and libraries generally need their own local (and trivial)

#define BOOST_LIBNAME_STATIC_ASSERT(...) static_assert(__VA_ARGS__, #__VA_ARGS__)

Had we dropped C++03 wholesale as I proposed, we'd have been able to just move BOOST_STATIC_ASSERT to Boost.Config with the above definition and leave boost/static_assert.hpp a stub header, allowing libraries to just remove the include. As is, though, we're still a hodgepodge of C++03 and C++11, so the above plan won't work as is. :-) Maybe we can have a trivial C++03 definition in Boost.Config as well? Although looking at the various implementations in boost/static_assert.hpp, maybe not.
When I look at a proposed library I try to figure out what is great about it, how well it performs for its users (or even, does it have any users?), what part of the API is exceptionally well designed and ergonomic, but most importantly I want to ask: what makes this library stand out to the extent that it should be part of the library collection? What aspects of the library, if viewed by someone learning C++ or interested in improving their design skills, are inspirational?
Is this overly demanding or exclusionary? Am I overthinking things? Should we be asking more of these types of questions and requiring better answers?
What is the criteria for determining if a library is good enough to become part of the collection?
We muddle through and try to keep the quality up. I wonder how much this has to do with maturity: when Boost began, the std library had little to offer outside of the STL, GitHub didn't exist, and there was just no coherent ecosystem for C++ libraries - or much quality either. Boost pushed us all to try to think and do better, and no doubt that has had a knock-on effect on quality elsewhere too.
Strangely, the bar has actually been raised in many ways: when we began, we were dealing with really quite small libraries which didn't need vast amounts of work to produce. Now that the low-hanging fruit has been picked off, both here and in the std, we're left with the hard problems which require a lot more effort to complete. A lot more effort to review, too - we always got lots of reviews for the one-page libraries, and not so many for the big ones!
Perhaps one thing we should encourage again is more experimentation at the bleeding edge - what can we do only with the very latest std features that's better, simpler and more performant than before?
Curiously yours, John.
Am 27.03.2024 um 14:47 schrieb Vinnie Falco via Boost:
I guess I'm confused. My understanding is that libraries are considered good candidates for the Boost collection based on meeting some or all of certain criteria:
* They offer useful, novel functionality not found elsewhere
* The API is superior to other libraries that do similar things
* The implementation is exceptionally performant
* Solving a familiar problem in a particularly elegant fashion
* The library is already popular and has field experience
* The library offers C++ standard functionality for older compilers
That's a diverse collection of criteria that may or may not guide adoption of new libraries into Boost. As a long-term user, contributor, and maintainer of our in-house flavour of Boost, I want to share *my* point of view to answer your question. In my opinion, Boost is lacking a vision (the original one doesn't seem to be alive anymore). Is Boost
- a framework? This means Boost.x uses facilities from Boost.y even when there is a viable alternative in the standard library (or a non-Boost library with the same API), but rather insists on the Boost.y dependency.
- a polyfill? This means Boost.x strives to depend on standard library facilities as much as possible - at least as a configuration option.
- a loose collection of libraries that share a common moniker? This means that every library developer does what they see fit according to their assessment of the state of the ecosystem.
- a showcase of avantgarde, high-quality C++ building blocks that every developer is proud to use, and can look at to learn about good C++ for their own advancement?
If it's the first, then be clear about that. And tell prospective users that they should expect huge Boost dependency chains. My teammates get *all* of Boost from our in-house distribution, because that's where they end up anyway to some degree if they decide to take advantage of one of the heavyweight libraries.
If it's the second, then the question about the minimum supported C++ standard or compiler toolset becomes moot. But then you have to think hard about the adoption of new libraries, because you most likely want to backport them. And each individual library needs to provide configuration options to let go of other Boost libraries if the consumer decides so (like std::variant instead of boost::variant, std::shared_ptr instead of boost::shared_ptr, vanilla Asio instead of boost::asio). If the user would rather accept performance hits by using standard facilities over allegedly superior Boost alternatives, that's fine. Every additional dependency has a cost, most certainly in compile times.
If it's the third, then you may keep the status quo. Everything goes. Consumers need to assess whether a certain Boost library is worth the effort to depend on over the projected course of their project.
If it's the last option, go for it! Don't accept "me too" libraries. Require recent toolsets. Require the current language standard. Show what you can do with contemporary C++. The bar for adoption needs to be high.
When I look at the projects in the company that I work for, I see a decline in the use of Boost. This is despite the fact that our build system makes Boost available *by default*; it's an implicit dependency nobody has to care about! There are projects that no longer use Boost because of the noticeable increase in compile times. There are projects that replace Boost libraries with non-Boost libraries that have no or much shorter dependency chains. There are projects that spend serious effort to create alternatives to existing Boost libraries. Because of all of that, I'll drop as much from the next distribution as I possibly can and divert my time elsewhere. I don't like that, but this is how things are in 2024. Thanks, Dani
On 27/03/2024 17:07, Daniela Engert via Boost wrote:
As a long-term user, contributor, and maintainer of our in-house flavour of Boost, I want to share *my* point of view to answer your question. In my opinion, Boost is lacking a vision (the original one doesn't seem to be alive anymore).
Is Boost
- a framework? This means Boost.x uses facilities from Boost.y even when there is a viable alternative in the standard library (or a non-Boost library with the same API), but rather insists on the Boost.y dependency.
- a polyfill? This means Boost.x strives to depend on standard library facilities as much as possible - at least as a configuration option.
- a loose collection of libraries that share a common moniker? This means that every library developer does what they see fit according to their assessment of the state of the ecosystem.
- a showcase of avantgarde, high-quality C++ building blocks that every developer is proud to use, and can look at to learn about good C++ for their own advancement?
Can't it be all four simultaneously? There are plenty of libraries in Boost which were state of the art and cool when introduced, but look stale and tired from today's perspective. And that's a good thing, it means we're improving and getting better. Niall
On Wed, Mar 27, 2024 at 10:07 AM Daniela Engert
That's a diverse collection of criteria that may or may not guide adoption of new libraries into Boost.
To clarify, I was suggesting that some of the criteria be met, not necessarily all.
I want to share *my* point of view to answer your question. In my opinion, Boost is lacking a vision (the original one doesn't seem to be alive anymore).
Ah, yes, thank you so much for this. It neatly expresses my thinking on the matter. Some may say the review process determines what belongs in Boost. But there is no documented vision nor mailing list discussion one can point to for guidance. Every reviewer has their own opinions, and the result is enormous variance in quality or focus.
- a loose collection of libraries that share a common moniker? This means that every library developer does what they see fit according to their assessment of the state of the ecosystem.
It seems to me that this is the most accurate description of the current state of affairs.
When I look at the projects in the company that I work for, I see a decline in the use of Boost.
As you noted, there is a cost that comes with using Boost in a project. I understood this right away, as I tended to avoid Boost in the past unless absolutely necessary. When I design my libraries, this cost is always front of mind. By making my library a Boost library, I am expecting users to accept that Boost has its own error_code and other types that duplicate std functionality. Users also accept the varying quality and the big footprint in terms of the number of additional files. I try to create a library that is so good, whose functionality cannot be found elsewhere at similar quality, that users will accept the costs of integrating Boost in exchange for having access to it. If the bar for technical excellence is not high with respect to which libraries are considered good candidates for Boost, then users will not find the value proposition compelling. In other words, no one is going to put up with the costs of integrating Boost just to have access to a bunch of mid-tier libraries they could easily get elsewhere without Boost. Thanks
Daniela Engert wrote:
Is Boost
- a framework? This means Boost.x uses facilities from Boost.y even when there is a viable alternative in the standard library (or a non-Boost library with the same API), but rather insists on the Boost.y dependency.
- a polyfill? This means Boost.x strives to depend on standard library facilities as much as possible - at least as a configuration option.
- a loose collection of libraries that share a common moniker? This means that every library developer does what they see fit according to their assessment of the state of the ecosystem.
- a showcase of avantgarde, high-quality C++ building blocks that every developer is proud to use, and can look at to learn about good C++ for their own advancement?
All of the above.

(1) It's not possible to use a standard library feature introduced in e.g. C++17 in a Boost library that supports e.g. C++14, and "a configuration option" doesn't work well for reasons already explained numerous times here. In addition to that, sometimes Boost components offer additional functionality over the standard ones, so libraries can prefer them even when the previous sentence doesn't apply. This is not particularly unique in the C++ world - every code base does it.

(2) Boost.Compat is a "polyfill", but Boost as a whole obviously is not. What does Boost.JSON "polyfill"?

(3) Yes, this has always been the case.

(4) Yes, although this wasn't a requirement in the "original Boost".

But first and foremost, Boost is a mechanism by which we, volunteer C++ developers, provide value to the C++ community. This has more than one aspect; Boost provides well designed and tested libraries that have good platform coverage, but Boost also "comes preinstalled" for many, avoiding the need to separately bring in disparate C++ dependencies.
On Wed, Mar 27, 2024 at 3:47 PM Vinnie Falco via Boost < boost@lists.boost.org> wrote:
I guess I'm confused. My understanding is that libraries are considered good candidates for the Boost collection based on meeting some or all of certain criteria:
* They offer useful, novel functionality not found elsewhere
* The API is superior to other libraries that do similar things
* The implementation is exceptionally performant
* They solve a familiar problem in a particularly elegant fashion
* The library is already popular and has field experience
* The library offers C++ standard functionality for older compilers
I can't say Boost is about excellence. Far from it. I'd rather call it a repository of useful and not-so-useful tools:
- Some of them become true legends and go into the standard (like shared_ptr, regex, thread).
- Some are highly influential but sometimes tricky and evolving, like Boost.Path.
- Some bring exceptional concepts and are highly useful, like Boost.Asio, but suffer from issues that come from being too cutting edge: code that takes forever to compile, horrible error messages, and a hard time for the average Joe maintaining the code (why are Unix domain sockets and TCP sockets different classes, why so much template-based code?).
- Some come with a good basic idea but try to be too C++-nerdy, which makes them too limited and not as useful (like Boost.Beast, which I personally voted no on, but that is because I have actual experience building a useful web framework, CppCMS).
- Some are truly weird concepts that only C++ nerds will be using, like Spirit. (I mean, Bison is a way better tool for any practical purpose - but it isn't as nerdy as Spirit.)
- Some are just plainly useful and nice, like Boost.UUID, but nothing exceptional or that can't be done easily with other tools.

Sometimes Boost libraries go too far with template metaprogramming concepts and create something that looks very useful on paper but is a true pain in the ... for a normal programmer. Being myself a Boost contributor (in the past), I actually use Boost far less, especially since C++11/14 became very common, since using Boost has its own drawbacks, like breaking APIs and ABIs, a horrible build system, and other stuff that makes Boost highly exceptional but also highly problematic in many cases. For me, Boost is what made C++11/C++14 really useful today, and a playground for something that can be highly useful in the future. I wish that new Boost libraries concentrated more on usability and simplicity instead of fancy stuff. But Boost is what it is, for good and bad. Artyom
On Wed, Mar 27, 2024 at 1:41 PM Artyom Beilis via Boost < boost@lists.boost.org> wrote:
- Some come with a good basic idea but try to be too C++-nerdy, which makes them too limited and not as useful (like Boost.Beast, which I personally voted no on, but that is because I have actual experience building a useful web framework, CppCMS).
Beast is to a C++ CMS or web framework what raw sockets are to networking. The intention of Beast was only to achieve one quantum increment of progress in the direction of bringing HTTP and Websocket to C++. I totally hear you that you wish it did more, and I wish that as well. To this end we are working on the spiritual successor to Beast, which is actually a collection of five libraries:

* https://github.com/cppalliance/buffers
* https://github.com/cppalliance/http_proto
* https://github.com/cppalliance/http_io
* https://github.com/cppalliance/websocket_proto (not created yet)
* https://github.com/cppalliance/websocket_io (not created yet)

These libraries are similar to Beast in that they offer HTTP and Websocket functionality, but with the following differences:

* Designed largely without templates
* Optimized memory usage pattern
* Protocols are encapsulated in a separate "sans-io" library (https://sans-io.readthedocs.io/)
* Corrects flaws discovered in Beast's APIs
* More HTTP and Websocket features are "in scope" than before

This won't be a full-blown CMS but it will be more than what Beast does. Thanks
On Thu, Mar 28, 2024 at 3:14 PM Vinnie Falco via Boost < boost@lists.boost.org> wrote:
* More HTTP and Websocket features are "in scope" than before
Are you still limiting yourself to HTTP 1.1, or is HTTP 2, and possibly HTTP 3 in the future, in scope? What about encryption? Standard ones are gzip and brotli for HTTP. Or the recent [Zstd][1] one? --DD [1]: https://chromestatus.com/feature/6186023867908096
On Thu, Mar 28, 2024 at 7:19 AM Dominique Devienne
Are you still limiting yourself to HTTP 1.1, or is HTTP 2, and possibly HTTP 3 in the future, in scope?
Yes this will be HTTP/1 only. Implementing the newer protocols is an enormous undertaking and doesn't offer the same value for the amount of work needed. It is also a thousand times less fun, as these protocols were designed for the needs of big tech companies and not someone who just wants to write a small, simple network program.
What about encryption? Standard ones are gzip and brotli for HTTP. Or the recent [Zstd][1] one? --DD
I think you mean compression, and yes - automatic addition and removal of transfer-encoding and content-encoding are IN SCOPE! :) And this will work without requiring an #ifdef to indicate whether you have ZLIB support (this way there is only one version of the static or dynamic http-proto library instead of many depending on macros). There's a run-time dependency injection thing going on to support this. Thanks
Boost libraries used to be cutting edge, to such an extent that they were adopted into the C++ Standard. And now the progress is in reverse. The Standard introduces a new component, and the Boost library follows (Boost.Charconv for example).
Does a component need to be new to be valuable? You are correct that Charconv does not implement a new or novel interface. In fact, to the user it's really just two functions. The value proposition is that it's quite good at what it does and is available today. The library will benefit users of other libraries like Boost.MySQL and Boost.JSON at low cost to those maintainers: https://github.com/boostorg/json/pull/993. One of the big pushes for C++26 is BLAS. I'll bet if you look hard enough you can find a box of FORTRAN punch cards with a reference implementation on it. Because it's not new, is it not valuable? There is still quite a bit of research going on in the problem space: https://arxiv.org/abs/2210.10173. The users of Boost are looking to solve whatever problem they are paid/want to solve. Offering novel components is cool and all, but offering well-engineered solutions to pervasive and mundane problems (like parsing numbers or matrix algebra) is valuable in its own right. Matt
One of the big pushes for C++26 is BLAS. I'll bet if you look hard enough you can find a box of FORTRAN punch cards with a reference implementation on it. Because it's not new is not valuable? There is still quite a bit of research going on in the problem space: https://arxiv.org/abs/2210.10173.
But why? There is a standard interface, cblas, that has several highly efficient implementations, like the open-source OpenBLAS or the proprietary MKL. Boost.uBLAS is actually a quite useless library for real computational contexts because it is an order of magnitude slower than something like OpenBLAS (at least when I tested it). Why should this be a part of the C++ standard when there is a highly established infrastructure that does it for you? If so, just wrap cblas with nice C++ containers and you are done. Artyom
On Thursday, March 28th, 2024 at 9:54 AM, Artyom Beilis via Boost
One of the big pushes for C++26 is BLAS. I'll bet if you look hard enough you can find a box of FORTRAN punch cards with a reference implementation on it. Because it's not new is not valuable? There is still quite a bit of research going on in the problem space: https://arxiv.org/abs/2210.10173.
But why?
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p1673r11.html#why-i.... For sure the STLs will not have the fastest possible implementation, but it will be broadly useful even though it is nowhere near novel. Matt
There is still quite a bit of research going on in the problem space: https://arxiv.org/abs/2210.10173.
These algorithms, while theoretically interesting, are rarely practical, because while asymptotically faster, they have very large constants. And if you do want to manage huge matrices, it is much more practical to use a GPU. Artyom
On Wed, Mar 27, 2024 at 11:53 PM Matt Borland
You are correct that Charconv does not implement a new, or novel interface. In fact to the user it's really just 2 functions. The value proposition is that it's quite good at what it does and is available today.
Yes, and in my opinion Charconv is an example of an ideal Boost library candidate. It gives users of older versions of C++ the new std library feature.
On Wed, Mar 27, 2024 at 11:53 PM Matt Borland via Boost < boost@lists.boost.org> wrote:
One of the big pushes for C++26 is BLAS. I'll bet if you look hard enough you can find a box of FORTRAN punch cards with a reference implementation on it. Because it's not new is not valuable? There is still quite a bit of research going on in the problem space: https://arxiv.org/abs/2210.10173.
I'll just note that BLAS is 'already done' in C++26: https://eel.is/c++draft/linalg.algs.blas1 https://eel.is/c++draft/linalg.algs.blas2 Like special math functions, many applications won't need these tools, but if you do, it's super handy not to have to wrap some other library or Fortran itself.

The users of Boost are looking to solve whatever problem they are paid/want to solve. Offering novel components is cool and all, but offering well-engineered solutions to pervasive and mundane problems (like parsing numbers or matrix algebra) is valuable in its own right.
Exactly. And one old phrase is that the committee should be 'standardizing existing practice'. This is standardizing decades old research and development. Jeff
On Tue, Apr 2, 2024 at 8:58 AM Jeff Garland via Boost
I'll just note that BLAS is 'already done' in c++26 ... Like special math functions, many applications won't need these tools, but if you do it's super handy not to have to wrap some other library or Fortran itself.
I think the bar for inclusion in the standard library needs to be higher than "super handy." Everything added to the standard creates an added and perpetually recurring cost, because subsequent features need to harmonise with a growing set of already existing facilities. In the past and now more recently I have heard "you can already get networking as an external library." One could say the same for this BLAS. Why does the C++ Standard now have BLAS and yet still cannot connect to the Internet, with no capability to do so coming anytime soon? Unlike BLAS, which has little to no second-order effects (that is, new libraries whose interfaces are built on this std facility), networking is the opposite. C++ desperately needs a standard networking facility, as users are currently deprived of the rich ecosystem of external, derivative network libraries that is common in other languages. Too often the justification for library-only features in the standard comes down to one or both of: 1. this is "useful" 2. avoid the need for package managers. These discussions of convenience and utility never consider the opportunity cost, that is, what we are sacrificing in order to have these things, likely because a true cost-benefit analysis would make papers more difficult to push through. There is never much in terms of quantitative analysis: how many people would benefit, compared to the alternative (which is always simply to download and use a third-party library)? Thanks
On Tue, Apr 2, 2024 at 9:27 AM Vinnie Falco
On Tue, Apr 2, 2024 at 8:58 AM Jeff Garland via Boost < boost@lists.boost.org> wrote:
I'll just note that BLAS is 'already done' in c++26 ... Like special math functions, many applications won't need these tools, but if you do it's super handy not to have to wrap some other library or Fortran itself.
I think the bar for inclusion in the standard library needs to be higher than "super handy." Everything added to the standard creates an added and perpetually recurring cost, because subsequent features need to harmonise with a growing set of already existing facilities.
Well, that would require the standard to be self-consistent, which it is not. Just to pick one that was brought up earlier: charconv is wildly inconsistent with (until recently) to_string and some other std APIs. It's also a terrible API that *always* has to be wrapped in my projects. The thing is, it's indeed the best facility for doing conversions because of its performance, based on modern algorithms. Anyway, I disagree that there's some 'massive cost' to having something in the standard -- that said, I agree that my preference is to see things that all C++ users need every day get priority.
In the past and now more recently I have heard "you can already get networking as an external library." One could say the same for this BLAS. Why does the C++ Standard now have BLAS and yet still cannot connect to the Internet, with no capability to do so coming anytime soon?
In the end, it has BLAS because a group of people were willing to put in the time needed to get the facility standardized and there's not really any disagreement about the approach. Networking, unfortunately, isn't in the same boat.
Unlike BLAS, which has little to no second order effects (that is, new libraries whose interfaces are built on this std facility) networking is the opposite. C++ desperately needs a standard networking facility, as users are currently deprived of the rich ecosystem of external, derivative network libraries that are common in other languages.
Too often the justification for library-only features in the standard comes down to one or both of:
1. this is "useful" 2. avoid the need for package managers
These discussions of convenience and utility never consider the opportunity cost, that is, what we are sacrificing in order to have these things, likely because a true cost-benefit analysis would make papers more difficult to push through. There is never much in terms of quantitative analysis: how many people would benefit, compared to the alternative (which is always simply to download and use a third-party library)?
Well, these things are explicitly discussed -- but that doesn't mean that there will be agreement -- or that Vinnie's (or Jeff's) priorities will be the ones chosen. Even how to do quantitative analysis becomes a point of discussion -- as it should. For example, just because there's a thousand GitHub repos that implement something doesn't mean it's popular -- that could just be the standard college challenge. And finally, you have to have people willing to make unreasonable time sacrifices to get anything substantial into the standard -- even if we all want it there. Jeff
participants (26)
- Alain O' Miniussi
- Andrey Semashev
- Arno Schoedl
- Artyom Beilis
- Boris Kolpackov
- Brook Milligan
- Christian Mazakas
- Daniela Engert
- Daniele Lupo
- Dominique Devienne
- Glen Fernandes
- Ivan Matek
- Jakob Lövhall
- Jeff Garland
- John Maddock
- Klemens Morgenstern
- Marc Viala
- Marcelo Zimbres Silva
- Matt Borland
- Niall Douglas
- Nigel Stewart
- Peter Dimov
- Rainer Deyke
- René Ferdinand Rivera Morell
- Robert Ramey
- Vinnie Falco