What is this "Beman Project Development"?
I got the email below from discourse.boost.org (I had subscribed long ago), so getting something was not unexpected. What I'm wondering about is this:

Q: What is this new "Beman Project Development" thing about?
A: It's something from C++Now, initiated by The Boost Foundation (DBA).

Q: Did they get permission from the Beman family to use Beman Dawes' name this way?
A: ??

Q: Even if they had, is their use of Beman's name in good taste?
---------- Forwarded message ---------
From: Jeff Garland via Beman Project
Date: Fri, May 3, 2024 at 10:19 AM
Subject: [Beman Project] [Beman Project Development] Becoming a member of the beman-project github

Please send your GitHub id to @Jeff-Garland (https://discourse.boost.org/u/jeff-garland), @dsankel (https://discourse.boost.org/u/dsankel), or @bretbrownjr (https://discourse.boost.org/u/bretbrownjr) via DM.
Visit Topic (https://discourse.boost.org/t/becoming-a-member-of-the-beman-project-github/77/1) or reply to this email to respond.
--
René Ferdinand Rivera Morell
-- Don't Assume Anything -- No Supone Nada
-- Robot Dreams - http://robot-dreams.net
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On Fri, May 3, 2024 at 11:50 AM, Peter Dimov wrote:

> Q: Even if they had, is their use of Beman's name in good taste?

Taste is subjective. So all I can say is that I wouldn't use his name this way.
On 5/3/24 19:45, René Ferdinand Rivera Morell via Boost wrote:
> Q: What is this new "Beman Project Development" thing about?
> A: It's something from C++Now, initiated by The Boost Foundation (DBA).
>
> Q: Did they get permission from the Beman family to use Beman Dawes' name this way?
> A: ??
Deface?
On 03/05/2024 17:45, René Ferdinand Rivera Morell via Boost wrote:
> Q: What is this new "Beman Project Development" thing about?
> A: It's something from C++Now, initiated by The Boost Foundation (DBA).
>
> Q: Did they get permission from the Beman family to use Beman Dawes' name this way?
> A: ??
There is also a website: https://www.bemanproject.org/

And a GitHub org with projects: https://github.com/beman-project

And here is some more documentation, which looks like this is intended to be a successor/replacement of Boost: https://github.com/beman-project/beman/wiki/Governance-Documents and https://github.com/beman-project/beman/wiki/Mission-Statement.

I see a number of long-standing Boost folk involved.

I am very much out of the loop on all things C++ recently (indeed, I now write C, not C++, for the day job), so obviously things have been a'happening in the background, resources allocated, decisions taken. It would be nice if some of those long-standing Boost folk could explain some more.

I might add, speaking personally, that I don't think the problem is getting a library into a fit state for standardisation, but rather that how library standardisation works at WG21 is not fit for purpose, in my opinion. In other words, the problem isn't a technical one; it's a _process_ and _political_ problem, in my opinion. So, in my opinion, I think they'll be tilting at windmills. Other standards bodies don't have those problems, so I really think WG21 needs to change how it implements library standardisation.

All that said, I wish the best of luck to the Beman Project people in their efforts, and to WG21 in general.

Niall
On Tue, May 7, 2024 at 9:33 AM Niall Douglas wrote:
> There is also a website: https://www.bemanproject.org/
> And a GitHub org with projects: https://github.com/beman-project
It's a separate project that won't interfere with the Boost C++ libraries in any way. The "discourse.boost.org" might cause confusion but I believe that is only temporary and will eventually be on a different domain.
On Tue, May 7, 2024 at 10:25 AM, Glen Fernandes via Boost <boost@lists.boost.org> wrote:

> It's a separate project that won't interfere with the Boost C++ libraries in any way. The "discourse.boost.org" might cause confusion but I believe that is only temporary and will eventually be on a different domain.
This is correct. discourse.boost.org will be moved to discourse.bemanproject.org.
Niall Douglas wrote:
> I see a number of long-standing Boost folk involved.
>
> It would be nice if some of those long-standing Boost folk could explain some more.
I agree with Niall. It would be good if those behind the new project can explain what they are trying to do and what the Boost Foundation's role is. It's not like we can't easily draw and share our own conclusions in the absence of such explanation, but it would still be good to hear them out first.
On 07/05/2024 16:11, Peter Dimov via Boost wrote:
> I agree with Niall. It would be good if those behind the new project can explain what they are trying to do and what the Boost Foundation's role is. It's not like we can't easily draw and share our own conclusions in the absence of such explanation, but it would still be good to hear them out first.
I was just about to say "aren't you **on** the Boost Foundation board?", and just then there was an email from Sankel to confirm that you are.

Are you saying that you - who serve on the Boost Foundation board - were unaware of this effort and cannot explain much about it?

For starters: How was the name chosen? What is wrong with Boost for this role? Is there going to be a different library review process than Boost's, and if so, why, and what is sought to be changed about Boost's process?

This must be the sixth to tenth attempt at a "standards-focused library collection", all of which petered out because the committee did not elect to standardise more than a tiny portion of what was proposed (and even then, often with many design changes which broke compatibility with the wider collection). What makes this different from all the previous attempts, which in many cases caused the authors to leave C++ entirely after experiencing the standardisation process?

Niall
Niall Douglas wrote:
> I was just about to say "aren't you **on** the Boost Foundation board?", and just then there was an email from Sankel to confirm that you are.
>
> Are you saying that you - who serve on the Boost Foundation board - were unaware of this effort and cannot explain much about it?
A good and well-deserved question. While I am indeed on the board, my meeting attendance record is far from perfect, so I was indeed not very aware of this initiative. But even if I were aware of it, it's not my initiative, so I shouldn't be the one explaining it.
On Tue, May 7, 2024 at 6:33 AM, Niall Douglas via Boost <boost@lists.boost.org> wrote:

> I don't think the problem is getting a library into a fit state for standardisation, but rather how library standardisation works at WG21 is not fit for purpose, in my opinion. In other words, the problem isn't a technical one, it's a _process_ and _political_ problem, in my opinion.
I agree with Niall here. The structure of WG21 creates perverse incentives, producing outcomes which are not aligned with the needs of the wider C++ community. For example: "the standard library can't connect to the internet."

Thanks
On Tue, 7 May 2024 at 20:16, Vinnie Falco via Boost wrote:

> The structure of WG21 creates perverse incentives, producing outcomes which are not aligned with the needs of the wider C++ community. For example: "the standard library can't connect to the internet."
I have never heard of such an incentive being expressed in WG21, and I have attended rather more of its meetings than the two people who make just slightly questionable claims about what WG21's library standardization process is fit for, considering how much experience they (don't) have about it. :)

The mission statement of that project sounds fine: reference implementations for standard library proposals, early reviews. There's nothing there not to like. Sounds like a highly valuable service.

I do agree with the name being questionable.
On Thu, May 9, 2024 at 1:56 PM, Ville Voutilainen <ville.voutilainen@gmail.com> wrote:

> I have never heard of such an incentive being expressed in WG21, and I have attended rather more of its meetings than the two people who make just slightly questionable claims about what WG21's library standardization process is fit for, considering how much experience they (don't) have about it. :)
Many people have experiences and yet do not learn from them or otherwise grow. I believe wisdom is the result of pain combined with insight. The pain occurs when reality collides with your internal model of the world. And, with sufficient humility, the insights which follow the experience of pain produce wisdom.

True statements about WG21 (and, to be fair, all organizations which share a similar bureaucratic structure) can be made without attending even a single meeting. It all comes down to incentives. Paper writers are incentivized to get their paper through, which is not quite the same as serving the needs of the wider C++ community.

Take, for example, writing popular libraries and applications, which in fact I do have a lot of experience with. I have to convince the entire world that my library is useful, fit for purpose, and better than competitors. The moment my library stops doing these things, new users will go elsewhere, and existing users may complain and then seek alternatives. Even if my library is great, anyone can come along, without my permission, and produce something which is even better. And users can easily switch (well, there is a bit of work involved in that still). I can't force anyone to use my library. Every single user has to be individually convinced to use my work product.

This is not a theoretical scenario. My libraries Boost.Beast and Boost.JSON were both written with the intention of being better than the then-current best of class (websocketpp and RapidJSON respectively). One of our projects, called Mr. Docs, aims to replace Doxygen as a superior solution for C++ (it is based on the tip of clang/llvm). When someone uses one of my libraries in a commercial product, their economic success now depends on the quality of my library: its documentation, performance, lack of defects, and the timeliness of fixes and improvements. In other words, they have skin in the game.

Let's compare that with getting a library-only feature into the standard. A paper writer need not even provide a working implementation, and when they do, there are no particular requirements in terms of how widespread its usage is. All they need to do is convince a small group of people present in the room to vote yes, and eventually convince the larger WG21 body to vote yes. By the very nature of the rules by which WG21 conducts business, it is unavoidable that the progress of papers depends less on technical merit and more on the author's ability to navigate the bureaucracy. As Gor famously said to Niall on a bench while eating a sandwich, "90% of the work of getting coroutines through was social, not technical" (paraphrased, sourced from reddit).

The people who vote yes to papers are not accountable to anyone except the rest of WG21, which is a much smaller group than the community of C++ users and corporations which utilize libraries written by others (including the standard library). If someone votes yes and a feature later turns out to be a dud, the person who voted yes faces no consequences. They have no skin in the game. However, if someone votes no, there is a consequence: the authors of the paper may now have a different opinion of the person who voted no. In other words, the entire process is plagued with politics which interfere with technical excellence.

Once a paper is accepted and a feature makes it into the standard, the author has no further obligation, or even incentive, to improve upon the feature. "I'm the author of an accepted C++ language feature" has sufficient value that people may write papers primarily for their own popularity and not out of a particular need. That WG21 has emphasized "participation" regardless of the credentials, skill, or experience of the participants exacerbates this considerably. There's even a program where random unknown individuals can attend WG21 meetings through the Boost Foundation.

The bureaucratic structure of WG21 is not capable of responding to timely challenges. For example, this business of "memory safety" is something that WG21 cannot hope to ever truly address. It requires, for lack of a better term, a "strong executive branch": that is, individuals who are imbued with decision-making power. There is a comparison to the political system in the United States, with the Executive versus Congress. WG21 is comparable to the House and Senate, which are slow, deliberative bodies that require voting to achieve consensus, while the Executive branch can act quickly, responding to immediate threats. WG21 can't respond to immediate threats such as the government mandating memory safety.
--
Regards, Vinnie
Follow me on GitHub: https://github.com/vinniefalco
On Thu, May 9, 2024 at 3:07 PM, Vinnie Falco wrote:

> ...
I pressed send by accident... continuing...

When people vote on proposals, it is based on the honor system whether or not they have read the paper, whether they are qualified, whether they are knowledgeable about the domain, and whether they are voting in good faith. WG21 has to simply trust that someone who votes yes is not doing so because the author voted yes, or will vote yes, to their own paper. WG21 has to simply trust that when someone votes no, they are doing so because they believe it is the best technical decision, and not because they simply don't like the author's politics. Or criminal record. Or because they or their company intend to introduce a new competing paper in the future.

WG21 leaves questions of conflicts of interest, political horse trading, and non-technical voting up to the honor system. No one who votes is vetted for their talent, industry experience, and so on. Peter Dimov, probably the smartest guy on the planet, had his metaprogramming library turned away by people with far less talent. The consequence is that we do not have Peter's library, but instead the promise of a better metaprogramming standard library component that has yet to be written.

I do not have this luxury when I write libraries. I can't go up to, say, a large corporation, and convince them that I'm a really great guy who should just be trusted. People have to opt in to my library, unlike the standard, where, after a relatively small group of people vote, all vendors who produce standard libraries are compelled to add it. And every C++ user who uses that standard library now has it pushed on them. No one "forces" you to use standard library components, but the appearance of a feature or particular API in the standard library creates enormous resistance to alternatives, because there is value in having a normalized API which is bundled with the compiler.

LEWG is now notorious for having people write "direct-to-standard" proposals.
That is, people find it far easier to just write a paper and socially engineer their way through WG21's tyranny of democracy than to invest the blood, sweat, and tears of writing a popular library. It has literally been said that "going through Boost is more work than going through LEWG". In other words, the bar for technical excellence in LEWG is lower than it is for Boost. I, for one, am glad that I am not someone who "have [sic] attended rather more of its meetings than the two people who make just slightly questionable claims about what WG21's library standardization process is fit for", because if I were, then I would be responsible for that lowering of the bar.

As I believe that complaining without offering solutions is equivalent to "whining", I propose a simple solution. Eliminate LEWG; have library-only components go straight to LWG, like it used to be; ensure a process where people voting on papers are actual subject matter experts and not patsies or confederates; and require that these library-only components already have some level of adoption and represent the state of the art. Note that under this scheme, we would already have the Networking TS in the standard. And the standard library would be able to connect to the Internet.

Thanks
On Fri, 10 May 2024 at 01:25, Vinnie Falco wrote:

> Note that under this scheme, we would already have the Networking TS in the standard. And the standard library would be able to connect to the Internet.
While there are various pieces of what you wrote that I find incorrect or that I disagree with, this one has the particular problem that standardizing the Networking TS wouldn't have made the standard library able to connect to the Internet, because the Networking TS doesn't have that ability.

In general... there are various points that you raise that the project that is the topic of this thread can apparently help with.
On Thu, May 9, 2024 at 3:34 PM, Ville Voutilainen <ville.voutilainen@gmail.com> wrote:

> standardizing the Networking TS wouldn't have made the standard library able to connect to the Internet, because the Networking TS doesn't have that ability.
The last draft of N4734, from 2018, shows the types `basic_stream_socket`, `ip::basic_resolver`, and `ip::tcp` on pages 140, 206, and 212 respectively. These allow you to resolve a network name to an IP address, connect to the address, and communicate using TCP/IP. The Networking TS also supports UDP. It is true that SSL/TLS streams are missing, but N4734 is certainly capable of "connecting to the Internet", while C++23 and C++26 certainly are not.

It seems obvious to me that the standard library needs a foundational networking component which identically mirrors the functionality of POSIX sockets, but with a modern interface approach and a reasonable solution for satisfying all desired flavors of asynchrony. POSIX sockets are battle-tested and proven by 36 years of experience. Yet the collective genius of WG21's democracy-flavored consensus algorithm has concluded that it can do better. There isn't even yet a paper proposing a way to connect two endpoints using TCP/IP, aside from the rejected Networking TS. We do have a paper that proposes WG21 investigate, look into, and ponder the idea of maybe modeling some future API on a message-passing system which, according to rumors, is popular in some circles.

> In general.. there are various points that you raise that the project that is the topic of this thread can apparently help with.
I am supportive of any new external project, including the Beman Project, because the principals have skin in the game. They are risking their time and reputation on something whose success is not guaranteed. I think the rest of the Boost community should cut them some slack and give the project the time it needs to get its resources, such as the website, the forum (or mailing list), the GitHub repositories, the mission statement, and other exposition, fully developed and deployed.

However, I do not see how this solves the problem. LEWG would have to mandate that library-only proposals go through the Beman Project for external review and field-experience gathering. What incentive does the chair of LEWG have to do this? If anything, this decreases the power of the chair and transfers it to the Beman Project.

Ironically, I liked it better when Beman was around and involved in WG21, before LEWG existed. You had to go through LWG. And let me tell you, those LWG people were immensely qualified to make those decisions, as they are language lawyers, wordsmiths, and incredibly knowledgeable. Now things have to go through LEWG-I first, and then LEWG, before landing in LWG. No offense, but the real engineering talent is in LWG, not LEWG. So now, by the time a proposal gets to LWG, all the design choices have been made, and WG21 has tied the hands of the people most qualified to reject bad things or make necessary design corrections.

The real problem is that the people who run the system are not aware it is broken, and derive social benefits from said system. No bureaucrat ever takes actions to reduce their own power (except maybe George Washington, who refused to run for president again). Some other people might argue that while the system is broken, it isn't clear what a better system looks like. Regardless, I would like to be able to connect to the Internet using the standard library in my lifetime, and it isn't immediately obvious that this will be possible.

Thanks

--
Regards, Vinnie
Follow me on GitHub: https://github.com/vinniefalco
Vinnie Falco wrote:
> However I do not see how this solves the problem. LEWG would have to mandate that library-only proposals go through the Beman Project for external review and field experience gathering. What incentive does the chair of LEWG have to do this? If anything this decreases the power of the chair and transfers it to the Beman Project.
The answer to this question is pretty simple. The chair of LEWG is Inbal Levi, one of the project leads of the so-called Beman Project.
On Thu, May 9, 2024 at 4:06 PM, Peter Dimov via Boost wrote:

> The answer to this question is pretty simple. The chair of LEWG is Inbal Levi, one of the project leads of the so-called Beman Project.
Well, that is great news. As its first official library, the Beman Project should (with or without your assistance) adopt Boost.Mp11 into the new collection. Then they should take the mp11 paper, which was rejected in LEWG-I (or was it LEWG?), and bring it up to date. And finally submit the paper and accompanying implementation, which has received over two years of field experience.

This is quite literally the highest-quality library the Beman Project will ever see. And the best thing about it is that all the work has already been done. The library is already written, the paper is already written, and it comes from one of the brightest minds in the C++ world.

If mp11 can't get into the standard, well, I think that pretty much proves everything I've been saying. And there is no point in having a Beman Project, a Boost2, or any other initiative which is not capable of getting through something obviously useful and correct. If mp11 is successful, we might consider a larger work which is even more useful, has a paper written, and has ten times more field experience. Asio comes to mind.

Thanks
On Thu, May 9, 2024 at 3:34 PM, Ville Voutilainen <ville.voutilainen@gmail.com> wrote:

> While there's various pieces of what you wrote that I find incorrect or something I disagree with
I would be happy to hear about how or why the points I raised are incorrect. It is true that I have attended far fewer meetings than you, so perhaps I am drawing incorrect conclusions. To make things easier, I will rephrase some of my issues as questions:

1. What qualifications, skills, or experience are required to attend a meeting?
2. Are all votes counted equally, regardless of skill or experience?
3. What mechanism ensures that votes are cast strictly on technical merit?
4. What stops people from voting when they haven't read the paper?
5. What stops people from voting when they don't understand the paper?
6. What stops people from voting when they don't understand the domain?
7. What protocol detects or prevents conflicts of interest?
8. What system discourages horse trading, i.e. exchanging votes ("you vote for me, I vote for you")?
9. What are the measurable benefits of open attendance ("everyone should come to meetings")?
9.a. Should my girlfriend, who does not know C++, attend the meetings? Should she vote? Why?
10. What forces discourage bad ideas and encourage the good ones?
11. What stops bad ideas from getting passed ("trust me bro")?
12. What ensures good ideas or needed features eventually arrive (tragedy of the commons)?
13. What retrospectives measure the performance of WG21 consensus decisions quantitatively?

To further elaborate on number 11, using the same paradigm as my previous replies: in the external library "market", if the ideas in your library are good, then people will use it. The merit of ideas in the external library ecosystem is measured quantitatively by the adoption of code. More people integrate your library when it is good, and people ignore your library when it is bad. In the WG21 process, there is no system of measurement except people's opinion: "I think this paper is good, therefore the paper must be good, because I am an expert in such matters."
When someone's paper goes into the standard, it is forced onto every developer's computer, because vendors must include it with the toolchain. APIs in the standard library are thus offered a privileged position: they do not need to first become popular with the wider C++ community before everyone is eventually forced to download them. This of course attracts some unsavory folks who prefer to derive benefit from the work of others ("do what I say, because I know better"). It is unfortunate that these folks seem to also have an uncanny knack for navigating bureaucracies and social engineering (at which I admit I am fabulously inept).

What I love about Boost is the absence of politics. You can't bribe your way into Boost, and once your library is in, no one tells you what to do (a "federated" model [1]). You still have to satisfy users, or else people will stop using your library, despite it being in Boost. This of course leads to another problem ("Boost has unmaintained or old libraries that don't work well"), but I vastly prefer this outcome, as it does not chisel into permanence a growing archaeological record of bad group decisions ("muh ABI compatibility"). No one forces you to use Boost, yet because Boost offers such compelling utility, it is bundled with many operating system distributions.

There is probably a political analogy lurking in here, where Boost's federated model reflects the benefits of capitalism, free markets, and competition, while WG21's socialist model reflects the ills of a command economy, complete with apparatchiks, the inefficient allocation of resources, and the failure to meet the needs and wants of consumers (C++ users).

I wasn't there for it, but I guess the genius of Beman was setting up the federated structure of Boost, which, for better or for worse, has motivated individuals to engage in the highest level of C++ charity: to write a library, subject it to review, and continue to maintain it for the benefit of everyone under the Boost Software License, without the use of force under the color of ISO standardization authority. When Boost library components were subsequently adopted into the standard, it was done without force; they made it in on their technical merit and field experience. This seems quite different from how the process works now, and I wish we could return to it.

Thanks

[1] https://en.wikipedia.org/wiki/Federation - Here, Boost is a federation, and individual libraries in Boost are self-governing states which have autonomy over their internal affairs.
I am watching this discussion from the sidelines, and I am only a little involved in standardization. But judging from the outside, I think Vinnie has a point. Some recent additions to the standard made questionable design choices; had the libraries been implemented and widely used prior to standardization, as in Boost, different design choices might have been made.
Some examples:
- The distinction between views and containers is academic, and after standardizing ranges, the distinction had to be watered down to make ranges practical.
- std::format is a parallel development to ranges, outputting into a sink instead of being itself conceptually a range of characters.
- The distinction between weak and strong ordering is academic; I have never found a good use case in our code where making the distinction would be beneficial.
- Neither std::weak_ordering nor std::strong_ordering is an enum, and not being able to switch over them is a pain.
I am all for implementing and trying things out first before we commit to a design. Boost was a very good staging area for that.
Arno
--
Dr. Arno Schödl
CTO
We are looking for C++ Developers: https://www.think-cell.com/career/dev
think-cell Software GmbH (https://www.think-cell.com)
Leipziger Str. 51, 10117 Berlin, Germany
Main phone +49 30 6664731-0 | US toll-free +1 800 891 8091
Amtsgericht Berlin-Charlottenburg HRB 180042
Directors: Alexander von Fritsch, Christoph Hobo
Please refer to our privacy policy (https://www.think-cell.com/privacy) on how we protect your personal data. schoedl@think-cell.com | +49 30 6664731-0

On 10. May 2024, at 00:25, Vinnie Falco via Boost wrote:
On Thu, May 9, 2024 at 3:07 PM Vinnie Falco
wrote: ...
I pressed send by accident... continuing...
When people vote on proposals, it is based on the honor system whether or not they have read the paper, if they are qualified, if they are knowledgeable about the domain, and if they are voting in good faith. WG21 has to simply trust that someone who votes yes is not doing so because the author voted yes or will vote yes to their own paper. WG21 has to simply trust that when someone votes no, they are doing so because they believe it is the best technical decision and not because they simply don't like the author's politics. Or criminal record. Or because they or their company intends to introduce a new competing paper in the future.
WG21 leaves questions of conflicts of interest, political horse trading, non-technical voting up to the honor system. No one who votes is vetted for their talent, industry experience, and so on. Peter Dimov, probably the smartest guy on the planet, had his metaprogramming library turned away by people with far less talent. The consequence is we do not have Peter's library, but instead the promise of a better metaprogramming standard library component that has yet to be written.
I do not have this luxury when I write libraries. I can't go up to, say, a large corporation and convince them that I'm a really great guy who should just be trusted. People have to opt in to my library, unlike the standard, where after a relatively small group of people vote, all vendors who produce standard libraries are compelled to add it. And every C++ user who uses that standard library now has that pushed on them. No one "forces" you to use standard library components. But the appearance of a feature or particular API in the standard library creates enormous resistance to alternatives, because there is value in having a normalized API which is bundled with the compiler.
LEWG is now notorious for having people write "direct-to-standard" proposals. That is, people find it far easier to just write a paper and socially engineer their way through WG21's tyranny of democracy than to invest the blood, sweat, and tears of writing a popular library. It has literally been said "going through Boost is more work than going through LEWG." In other words, the bar for technical excellence in LEWG is lower than it is for Boost. I, for one, am glad that I am not someone who "have [sic] attended rather more of its meetings than the two people who make just slightly questionable claims about what WG21's library standardization process is fit for", because if I were, then I would be responsible for that lowering of the bar.
As I believe that complaining without offering solutions is equivalent to "whining", I propose a simple solution. Eliminate LEWG, have library-only components go straight to LWG as they used to, ensure a process where people voting on papers are actual subject matter experts and not patsies or confederates, and require that these library-only components already have some level of adoption and represent the state of the art. Note that under this scheme, we would already have the Networking TS in the standard. And the standard library would be able to connect to the Internet.
Thanks
On Thu, May 9, 2024 at 7:13 PM Arno Schoedl wrote:

> I am watching this discussion from the sidelines, and I am little involved in the standardization. But I think judging from the outside, Vinnie has a point.

Thank you.

> Some recent additions to the standard made questionable design choices, which if a library had been implemented and widely used prior to standardization like in Boost, design choices may have been made differently. Some examples:
I'm trying to be very careful here and not just drag up obvious examples like the loss of the Networking TS. Doing so is easy but hasn't produced results for me in the past. Instead I prefer to look at behavioral incentives and feedback mechanisms. In my previous posts on this subject I alluded to perfect competition as the behavior and feedback system which rewards good libraries and punishes bad ones.

What WG21 system incentivizes technical excellence? How is technical excellence measured? What WG21 process ensures that the good solutions are brought forward? If a mistake is made, what is the process for discovering the mistake and fixing the system? In business this is called a "postmortem analysis" [1] and it is crucial for helping projects get back on track if they veer off course.

The committee process does not incentivize anyone to do such things. No one can get "fired" from the committee unless they basically commit a crime like assault. Imagine a business that cannot fire employees, or even deploy metrics for calculating whether their workers are effective! WG21 is the equivalent of a totally dysfunctional business. It can't respond to market pressure, it can't go bankrupt, it doesn't know when it makes mistakes, and even if it did, it can't correct them (muh ABI). There are no requirements placed on attendees. No one has to submit a resume; they are just "hired" on the spot, determine their own role in the "company", and their performance is never measured or held to account. This of course often attracts a certain kind of person, who craves acceptance, enjoys the illusion of productivity afforded by make-work, and cannot withstand the pressure of being held to a particular standard.

This is why I no longer bother to argue about any undesirable outcomes of the WG21 process in particular. It is because the system itself is set up with the wrong incentives. There is nothing to reward technical excellence and discourage poor engineering.
Important features like networking are viewed as a land-grab riddled with political intrigue and jockeying for position.

For all their drawbacks, stackful coroutines are an incredibly important tool in the programming toolbox. They don't require changes to the language, the rules surrounding lifetime are dead simple to understand, and for a broad range of use-cases they are perfectly adequate. The underlying technology, which is the ability to save and restore the context, is perfectly understood, and there is overwhelming field experience. In other words, stackful coroutines are a solved problem. There is little to no controversy surrounding them. The paper to add stackful coroutines to C++ has been around for eleven years: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3708.pdf

This is just one instance which illustrates why WG21 is not set up for consistent success. Many such examples exist.

[1] https://www.linkedin.com/pulse/importance-post-mortem-analysis-chad-m-peterm...
Vinnie Falco wrote:

> For all its drawbacks, stackful coroutines are an incredibly important tool in the programming toolbox. They don't require changes to the language, the rules surrounding lifetime are dead simple to understand, and for a broad range of use-cases they are perfectly adequate. The underlying technology, which is the ability to save and restore the context, is perfectly understood and there is overwhelming field experience. In other words, stackful coroutines are a solved problem. There is little to no controversy surrounding them.

Absolutely agreed. The current coroutines have their place, but that we effectively need to keep two copies of every generic algorithm around, one plain and one coroutinified, just in case somewhere deep inside the call stack a caller-supplied lambda wants to yield, is very unsatisfactory.

Arno
On Fri, May 10, 2024 at 1:25 PM Arno Schoedl via Boost wrote:
For all its drawbacks, stackful coroutines are an incredibly important tool in the programming toolbox. They don't require changes to the language, the rules surrounding lifetime are dead simple to understand, and for a broad range of use-cases they are perfectly adequate. The underlying technology, which is the ability to save and restore the context, is perfectly understood and there is overwhelming field experience. In other words, stackful coroutines are a solved problem. There is little to no controversy surrounding them.
Absolutely agreed. The current coroutines have their place but that we effectively need to keep two copies of every generic algorithm around, one plain and one coroutinified, just in case somewhere deep inside the callstack a caller-supplied lambda wants to yield is very unsatisfactory.
Do you have a code example of that? I'm able to wrap stackful coroutines into std::coroutine_handle's, so I'd like to see if there's some UB-infused solution to your problem.
Arno
Do you have a code example of that? I'm able to wrap stackful coroutines into std::coroutine_handle's, so I'd like to see if there's some UB-infused solution to your problem.
Not right now, on vacation. But let's say you want a line std::for_each(range, …) to be part of a generator coroutine, where … is itself a coroutine that yields. Is there a way to avoid std::for_each having to be a coroutine?

Arno
On Fri, May 10, 2024 at 12:22 AM Arno Schoedl via Boost <boost@lists.boost.org> wrote:
Is there a way to avoid std::for_each having to be a coroutine?
No. This blog post elegantly relates the problem with this design (I think): https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

The viral nature of co_await was understood in 2015 and described in P0158R0: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0158r0.html

Most of the problems predicted in P0158R0 before coroutines made it into the standard are now being reported independently by users. As I noted in my previous replies, there is no process in WG21 which revisits major language decisions to learn from mistakes. In fact... I remember when the author was lobbying hard for coroutines: he was giving talks, going to lunches and dinners, posting on reddit, posting on the official C++ Language Slack workspace, hanging out in Discord, and so on. But after coroutines made it into the standard he disappeared faster than my ex-wife after I declared bankruptcy. He unlocked the achievement, I suppose.

If it sounds like I'm picking on the author, I am not. If he didn't do it, someone else would have. The WG21 process incentivizes this behavior: land a high-profile feature desired by a corporate sponsor, and game over. It isn't a coincidence that C++ coroutines harmonize perfectly with their C# equivalent (Microsoft). Here's a blog post from a user describing their experience. If you look at items 1, 2, and 3 under 'Conclusion', you can see that this was exactly predicted in opposition papers pre-acceptance.

Thanks
On Fri, May 10, 2024 at 4:32 PM Vinnie Falco via Boost
On Fri, May 10, 2024 at 12:22 AM Arno Schoedl via Boost < boost@lists.boost.org> wrote:
Is there a way to avoid std::for_each having to be a coroutine?
No. This blog post elegantly relates the problem with this design (I think).
https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
The viral nature of co_await was understood in 2015 and described in P0158R0:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0158r0.html
Most of the problems predicted in P0158R0 before coroutines made it into the standard are now being reported independently by users. As I noted in my previous replies, there is no process in WG21 which revisits major language decisions to learn from mistakes. In fact... I remember when the author was lobbying hard for coroutines he was giving talks, going to lunches and dinners, posting on reddit, posting on the Official C++ Language Slack Workspace, hanging out in Discord, and so on. But after coroutines made it into the standard he disappeared faster than my ex-wife after I declared bankruptcy. He unlocked the achievement I suppose.
Heh, I reached out to Gor when I wrote my coroutine library; never heard a thing. You'd think he would be interested in how his C++ feature is used. I don't think they're really problems so much as design choices, or rather implications of stackless coroutines (or resumable functions). The problem is more that stackful coroutines (or fibers) are most likely not being considered now because "we got coroutines already", so we'll need to make do with Boost.Context, which is a black box for the compiler. The particular design issues I have with coroutines are minor things, like no support for noexcept. It seems like they didn't bother updating std::coroutine_traits after noexcept became part of the function signature.
On Fri, May 10, 2024 at 8:25 AM, Arno Schoedl via Boost wrote:
Absolutely agreed. The current coroutines have their place but that we effectively need to keep two copies of every generic algorithm around, one plain and one coroutinified, just in case somewhere deep inside the callstack a caller-supplied lambda wants to yield is very unsatisfactory.
I was reliably informed in this paper (https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0171r0.html#island...) that the problem you are describing does not in fact exist.
On 09/05/2024 21:56, Ville Voutilainen via Boost wrote:
On Tue, 7 May 2024 at 20:16, Vinnie Falco via Boost
wrote: I don't think the problem is getting a library into a fit state for standardisation, but rather how library standardisation works at WG21 is not fit for purpose in my opinion. In other words, the problem isn't a technical one, it's a _process_ and _political_ problem, in my opinion.
I agree with Niall here. The structure of WG21 creates perverse incentives, producing outcomes which are not aligned with the needs of the wider C++ community. For example "the standard library can't connect to the internet."
I have never heard of such an incentive being expressed in WG21, and I have attended rather more of its meetings than the two people who make just slightly questionable claims about what WG21's library standardization process is fit for, considering how much experience they (don't) have about it. :)
Firstly, Vinnie's added comment I wouldn't agree with, having seen the sausage being made in five years of attending meetings. However, seeing as you're publicly calling me out on my lack of total meetings attended compared to you, have you considered that perhaps you have attended so many meetings you have become blind to how different things could be? To put it bluntly: you are no longer able to see clearly due to institutionalization?

Back when I proposed a whole bunch of ways library standardisation could be done differently than at present - having seen for myself how the sausage is made over multiple years - if I remember rightly you were not keen on them. I have memories of you stating words to the effect of "that's not how things are done round these parts". To which I counter - and will continue to counter - why not? WG21 already does lots of things in ways which aren't strictly within ISO's rules. It's the last remaining modern programming language under ISO. I think they'll yield because they want to keep us. And if they don't, there's always the POSIX option. Leaving ISO has had no negative effect I've noticed on their standardisation work; if anything, the opposite.

Maybe you do genuinely believe the current process is optimal and cannot be improved upon. Fair enough. I think C++ could do a lot better if there were more will to stop trying to solve non-technical problems with technical solutions, which those in WG21 keep doing.
The mission statement of that project sounds fine, reference implementations for standard library proposals, early reviews. There's nothing there not to like. Sounds like a highly valuable service.
I wish its authors the best of luck in their endeavours. However, it's another example in my opinion of solving process problems with technical solutions. Axe meet rock. If their project was more about changing and transforming incentives across the ecosystem, that would get me a lot more optimistic. Processes only change when enough people buy into a new process, and for that the incentives need to shift. Niall
On Fri, 10 May 2024 at 02:11, Niall Douglas via Boost
Maybe you do genuinely believe the current process is optimal and cannot be improved upon.
No, I don't. I, however, don't agree with your assessments of what exactly is wrong with it and how to fix it. And I will suggest to everyone else to take your assessments on that matter with a grain of salt, and certainly not as pure objective facts. But since you suggest I'm blind, and suggest things like the quoted bit - which is just made up and not backed by anything I have ever said or done - have you ever considered the possibility that your views on such matters might be clouded by various proposals of yours not being "just waved through", and that somehow being anything but the fault of any problems in those proposals?
On 10/05/2024 00:23, Ville Voutilainen wrote:
On Fri, 10 May 2024 at 02:11, Niall Douglas via Boost
wrote: Maybe you do genuinely believe the current process is optimal and cannot be improved upon.
But since you suggest I'm blind and suggest things like the quoted bit, which is just made up and not backed by anything I have ever said or done, have you ever considered the possibility that your views on such matters might be clouded by various proposals of yours not being "just waved through", and that somehow being anything but the fault of any problems in those proposals?
It is true I have been spectacularly unsuccessful at WG21. I have not affected one single word of normative text since I began attending meetings. The entire sum of my words affected would come from the next merge of the next C standard, where I have been far more effective with considerably less work invested. Unsurprisingly, that's where I'll be moving after the C 26 IS ships, as I'll have far more effect on the C++ standard from WG14 than I ever will from WG21.

As to "just waved through", there is a point there. Everybody attending will have an opinion on the stuff they think was just waved through without enough scrutiny, and the stuff which got too much scrutiny and got dropped through attrition and exhaustion. I think that concentrates on the wrong thing, because all of that is the result of the process chosen. We could choose a different process and change that dynamic entirely.

I definitely think far too much emphasis is placed by WG21 on library features which tick committee boxes such as "this is immediately compatible with the abstract machine" rather than on what is actually useful to the ecosystem or end users. Most of my proposals which made no forward progress were due to demands that I first get improvements to the abstract machine through EWG to support things like memory mapped files, or virtual memory, or copy-on-write memory.

What annoys me there is that WG21 has the power to declare that library APIs have magic powers. So just drizzle the magical powers over the APIs, draw a line under it, ship it. End users get portable memory mapped files with standardised semantics, the ecosystem benefits, everybody wins. WG21 wasn't willing to do that here, so I gave up. I have better uses of my time than fiddling with the abstract machine, for which, by the way, it was demanded that I fork a production compiler to implement the new abstract machine. I don't have that kind of free time, sorry.
Of my two papers remaining, one has a pretty good chance of making it (path_view). The other has been repeatedly and publicly declared by multiple people to be one of the finest library designs in C++ they've ever seen (std::error), but WG21 can't get over there being virtual functions in there. After three years of broken promises that WG21 will "do something" about ABI guarantees in libraries so the virtual functions can be replaced with something else, they just keep circling the drain about "virtual functions be bad". So maybe that will die too; we have three meetings left to find out.

I'm of the opinion that carefully designed virtual functions are nothing like as bad as vast quantities of templates solidifying ABI. We continue to standardise vast tracts of new templates, repeating the std::regex mistake. Yet a virtual function is somehow _worse_, and I'm sorry, I just don't get it. Especially as there really is currently no better alternative to virtual functions for what they do, whereas most templates can be designed out entirely, and WG21 could impose a "minimum possible templates" rule going forth if it chose.

I've come to the conclusion that I'm the wrong type of engineer for WG21. Or, maybe better put, the kind of engineering I do does not fit what WG21 thinks is good engineering. A culture and fit problem. That's okay, I'll go where I'm more effective instead.

Niall
Niall Douglas wrote:
It is true I have been spectacularly unsuccessful at WG21. I have not affected one single word of normative text since I began attending meetings. The entire sum of my words affected would come from the next merge of the next C standard, where I have been far more effective with very considerably less work invested. Unsurprisingly, that's where I'll be moving to after the C 26 IS ships, as I'll have far more effect on the C++ standard from WG14 than I ever will from WG21.
What did you get into the C standard?
On 10/05/2024 13:41, Peter Dimov via Boost wrote:
Niall Douglas wrote:
It is true I have been spectacularly unsuccessful at WG21. I have not affected one single word of normative text since I began attending meetings. The entire sum of my words affected would come from the next merge of the next C standard, where I have been far more effective with very considerably less work invested. Unsurprisingly, that's where I'll be moving to after the C 26 IS ships, as I'll have far more effect on the C++ standard from WG14 than I ever will from WG21.
What did you get into the C standard?
My biggest single direct contribution is fixing `fopen()`. I got defect resolutions merged into C23, and I have more improvements hopefully coming in C2y. Once I depart WG21, I have my magnum opus of standardisation coming: modernising signal handling, which will be mainly a WG14-based effort. Once I achieve that, I'll be moving on from standards (and will probably be quite close to retirement by then).

But it's wider than that. People on WG14 incorporate my advice and feedback, including into wording. Several papers which were merged into the standard have included changes I suggested. I've also made suggestions about direction, and got consensus from the committee about that direction instead of being ignored or shouted at. It's nice to be listened to, and to be taken seriously enough that people act on my suggestions.

Myself and Boost have had a confrontational relationship in the past, but in my opinion (you and others will disagree), y'all, after arguing heavily with me at the time, a few years later went ahead and quietly implemented almost everything I suggested. So I'm good with Boost at the present time - I spoke, you listened, you eventually implemented much of it. Rock on! Maybe WG21 will do what I suggest today a decade hence. I hope so.

Niall
Niall Douglas wrote:
Myself and Boost have had a confrontational relationship in the past, but in my opinion (you and other will disagree), y'all after arguing heavily with me at the time then a few years later went ahead and quietly implemented almost everything I suggested. So I'm good with Boost at the present time - I spoke, you listened, you eventually implemented much of it. Rock on!
I have to admit that I can't at the moment think of any feature about which I argued with you when you proposed it, and then went ahead and quietly implemented later.

It's true that you had already proposed https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1196r0.html but I'm not sure I actually knew that when I implemented this, and even if I did, I definitely don't remember arguing with you about it.

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1197r0.html is - maybe - something you had proposed earlier, but not to Boost and not in this form; the only discussion you were involved in about that (that I know about) is the SG14 one, in which I hadn't participated. I only read the paper https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0824r0.pdf but I think that at this point I've already heard enough complaints about the std::string use in
On 10/05/2024 16:26, Peter Dimov via Boost wrote:
Niall Douglas wrote:
Myself and Boost have had a confrontational relationship in the past, but in my opinion (you and other will disagree), y'all after arguing heavily with me at the time then a few years later went ahead and quietly implemented almost everything I suggested. So I'm good with Boost at the present time - I spoke, you listened, you eventually implemented much of it. Rock on!
I have to admit that I can't at the moment think of any feature about which I argued with you when you proposed it, and then went ahead and quietly implemented later.
You've done lots of stuff yourself personally, but so have most of the rest of the core Boost maintainers. Stuff I suggested back in 2012-2015. Stuff which didn't win me much love here at the time. The biggest two of my early suggestions that you (as a group) have since implemented, and that I was most appreciating recently, are the cmake build support and the much, much better dependency and isolation graph between bits of Boost. Recently I hacked out Boost.Context, stripped it of the bits I didn't need, reused the cmake you wrote for it, and voila: a standalone reusable component. Took me minutes, not hours or days. **Yay** Boost.

This isn't an isolated case. At my last dayjob we had a fork of Boost with all the symbols changed so it wouldn't clash with Boost. Generating that fork became increasingly easier over time as you and the other core Boost maintainers kept fixing and improving stuff. All very small, tedious, boring little stuff. But it really adds up over time and makes a very big difference. **Yay** Boost.
It's also true that system::result is very similar to outcome::result (and std::expected), but I don't think I argued with you about having that, either.
I'm perfectly willing to concede that you were there first. It's the arguing against part that I have no recollection of.
All that came much later, and to be fair, the std::error design wasn't my own; it was derived from a group of people on SG14, which you then iterated upon for Boost.System. Which is exactly how software engineering excellence is achieved: always grab the best bits from anything you see. Yay Boost.

Niall
Fwiw, I think C++ as a language has never been in a better place. There are so many great things in the language now, like coroutines, concepts, placeholder NTTPs, CNTTPs, and designated initializers. The list goes on and on. I think it's hard to argue against the direction in which C++ is evolving as a core language.

I would say that LEWG's output is less than optimal, and this is because LEWG can't thrive without a Boost and Boost can't thrive without LEWG's participation. They need each other, and Project Beman is an attempt to return to that form. If it actually brings LEWG members back to a Boost-like place, that's only a positive for C++.

re: coroutines vs fibers

Stackless coroutines are a better fit for standardization. The exemplar of the success of stackful coroutines is Go. Many other GC'd languages have fiber implementations as well, like Kotlin/Java. The thing to note here is that these languages also have incredibly sophisticated runtimes that enable them to make efficient use of these constructs. Systems languages have to use system APIs to create fibers, which is hard to retrofit against the C++ abstract machine. Stackless coroutines are a simple front-end transformation, so they're easy to fit against C++'s abstract machine, which is why they were kind of a slam dunk to standardize.

And I think they're actually quite good as well. I've written async networking implementations in both Rust and C++, and personally I think C++'s are better, solely for how they handle resuming an awaiting task. I don't think the function coloring thing is an issue in practice, or at least it's never something I've run up against, even when I worked full time in Node. I've seen a lot of Rustaceans complain equally about this, but I think it's because they're trying to shoehorn a design that just isn't workable with the tools available, and they're refusing to compromise.
One thing I've seen that surprised me was the contortions a Gopher will go through to guarantee a suspension point. It's interesting that in practice, the implicit suspension of a routine becomes undesirable: at some point a user wants to *guarantee* a suspension, and an explicit `co_await` or `.await` gives the developer a sense of psychological security.

In short, it's a good thing Asio wasn't standardized as it was, because it included async. It should've just been blocking I/O only. The other problem is that its interfaces are heavily coded against *existing* networking system APIs, and now that we have io_uring, we see that the evolution of operating systems introduces backwards-incompatible API breaks. It's also a good thing fibers weren't standardized, because they're hard to get right in the context of an IS and an abstract machine. If fibers are really such a good idea, then they'll emerge as the successor in the free market of libraries.

- Christian
Niall Douglas wrote:
Peter Dimov wrote:
Niall Douglas wrote:
It is true I have been spectacularly unsuccessful at WG21. I have not affected a single word of normative text since I began attending meetings. The entire sum of the words I have affected will come from the next revision of the C standard, where I have been far more effective with very considerably less work invested.
What did you get into the C standard?
My biggest single direct contribution is fixing `fopen()`. I got defect resolutions merged into C23, and I have more improvements hopefully coming in C2y. Once I depart WG21, I have my magnum opus of standardisation coming:
In other words, so far, just fixing fopen()?

Glen
Niall Douglas wrote:
Of my two papers remaining, one has a pretty good chance of making it (path_view). The other has been repeatedly and publicly declared by multiple people as one of the finest library designs in C++ they've ever seen (std::error) but WG21 can't get over there being virtual functions in there. After three years of broken promises that WG21 will "do something" about ABI guarantees in libraries so the virtual functions can be replaced with something else, they just keep circling the drain about "virtual functions be bad".
So maybe that will die too; we have three meetings left to find out. I'm of the opinion that carefully designed virtual functions are nothing like as bad as vast quantities of templates solidifying ABI. We continue to standardise vast tracts of new templates, repeating the std::regex mistake. Yet a virtual function is somehow _worse_, and I'm sorry, I just don't get it. Especially as there really is currently no better alternative to virtual functions for what they do, whereas most templates can be designed out entirely, and WG21 could impose a "minimum possible templates" rule going forward if it chose.
Normal use of virtual functions, as in the std::error_code/std::error_category case,
makes it absolutely impossible to develop the API within stable ABI constraints.
That's why all my papers about these were rejected. You can't add a virtual
function, because there's no way to reliably know whether the virtual function
is there. (As the object file where the vtable happened to be emitted could have
been compiled against an older C++ standard.)
There is a way to specify things (from the start) which makes API evolution
possible, and it's to add a version number to the interface:
    struct error_category
    {
        virtual long version() const noexcept { return 201103L; }

        // more virtuals
    };

Then in error_code you can do

    {
        if( cat_->version() >= 202300L )
        {
            // use virtual functions added in C++23
        }
        else
        {
            // don't
        }
    }
Of course we can no longer do this for std::error_category.
Also of course, this has the overhead of twice the number of virtual calls.
I have, in my "things to propose" folder, a very unfinished proposal for
virtual data members, which can eliminate the overhead. But in any case,
you have to put the version in the spec when the interface is first
standardized, and so far LEWG/LWG haven't really displayed interest in
the "how we can specify our interfaces in a manner that would make it
possible to evolve them under ABI stability constraints" question.
(Or if they have, I don't know about it.)
There's actually a way in which we can evolve std::error_category, as it
stands today, and it's to derive error_category2 from it, and then use

    {
        if( auto p = dynamic_cast<error_category2 const*>( cat_ ) )
        {
            // use virtual functions from error_category2
        }
        else
        {
            // don't
        }
    }
On 10/05/2024 16:04, Peter Dimov via Boost wrote:
There is a way to specify things (from the start) which makes API evolution possible, and it's to add a version number to the interface:
You're now getting into that "broken promises to do something about ABI which would allow us to avoid standardising virtual functions" which I mentioned earlier.

Your ideas are a good start, but TBH the work some of us did three years ago is better. Myself, Ben Craig and a few others did some work on seamless versioning of vtables in a non-breaking way that would solve the ABI evolution problem for vptr-based objects. I vaguely remember a clever hack to fix RTTI sucking so badly most of the time too, without breaking backwards compatibility.

The charitable interpretation of what happened next is that covid occurred, it got forgotten and the boil went out of it, then other WG21 stuff happened, other priorities appeared, and it got dropped.

The non-charitable interpretation is that it was realised this would have to get through EWG, and just about everybody thought they'd rather hammer nails through their fingers than endure that. So it died.
There's actually a way in which we can evolve std::error_category, as it stands today, and it's to derive error_category2 from it, and then use
    {
        if( auto p = dynamic_cast<error_category2 const*>( cat_ ) )
        {
            // use virtual functions from error_category2
        }
        else
        {
            // don't
        }
    }

There's nothing wrong with that _in principle_, but in practice dynamic_cast is abysmally slow (without a good reason, arguably).
Yes, you're on a similar track to what we were on. In my contribution to the work I intentionally copied what Microsoft COM does to evolve its vtable-based APIs (which are deliberately compatible with MSVC's vtable format), as that is clearly the established precedent, and standards are supposed to standardise existing practice.

But just thinking about presenting that at EWG, and the completely stupid FUD comments you'd have to endure from certain members of that working group just because this stuff is COM and/or Microsoft ... and then the demands to "prove your proposal" by forking a production compiler ... and then enduring sermons from those who think they know all about COM, and that the alternative design they dreamt up yesterday is clearly much better than thirty years of proven track record ... well, I wasn't willing, and neither was anybody else.

So unfortunately I don't think anything will change here until the processes WG21 uses change. To make myself very clear: if it's established existing practice, it should be automatically greenlit, and it should be very, very, VERY hard to stop it no matter what the arguments against. There should be a very strong presumption in favour of standardisation of existing practice without redesign by committee, emphasising the "without redesign by committee" part.

Niall
Niall Douglas wrote:
On 10/05/2024 16:04, Peter Dimov via Boost wrote:
There is a way to specify things (from the start) which makes API evolution possible, and it's to add a version number to the interface:
You're now getting into that "broken promises to do something about ABI which would allow us to avoid standardising virtual functions" which I mentioned earlier.
Your ideas are a good start, but TBH the work some of us did three years ago are better. Myself, Ben Craig and a few others did some work on seamless versioning of vtables in a non-breaking way that would solve the ABI evolution problem for vptr based objects. I vaguely remember a clever hack to fix RTTI sucking so badly most of the time too, without breaking backwards compatibility.
The charitable interpretation of what happened next is that covid occurred, and it got forgotten/the boil went out of it, and then other WG21 stuff happened and then other priorities appeared and it got dropped.
The non-charitable interpretation of what happened next is it was realised that this would have to get through EWG, and just about everybody thought they'd rather hammer nails through their fingers than endure that. So it died.
Did you (as in, you, Ben Craig, or one of the others) ever write this up in some form? I would certainly be interested in reading about it. I remember your having a paper about Microsoft COM, but if memory serves, I found it a bit difficult to understand. In particular, it wasn't quite clear what exactly was being proposed; as in, what needs to change, in language or library, and how.
Yes you're on a similar track to what we were on.
Well, that's certainly encouraging. :-)
On 10/05/2024 16:45, Peter Dimov via Boost wrote:
Niall Douglas wrote:
Did you (as in, you, Ben Craig, or one of the others) ever write this up in some form? I would certainly be interested in reading about it.
My memory is that it was mostly written up in private email exchanges amongst a small group. I wasn't always included in the discussion, either, so I only saw parts of the debate and design. I do remember seeing at one point a private github repo with some code, and me writing an email saying something like "no no no, that's all wrong, do X, Y and Z instead" :)

In any case, I'm the wrong person to ask about this, as I wasn't invited into the main group who did most of the running. Ben, I think, would be a much better bet. I'll CC him into this now.
I remember your having a paper about Microsoft COM, but if memory serves, I found it a bit difficult to understand. In particular, it wasn't quite clear what exactly was being proposed; as in, what needs to change, in language or library, and how.
That was a very, very long time ago. That was the thing I built for BlackBerry while I was working there: basically a modernised Microsoft COM, with a fully working implementation. It would have solved lots of future C++ ABI upgrade problems in BB10.

Unfortunately, my colleagues at BlackBerry at the time reacted very similarly to how COM was received within Microsoft when it was first presented. COM only became what it did because a senior manager in Office went ahead and adopted it anyway despite the negative reaction. That forced everybody else to use it, and before you knew it, everything on Windows had to be COM. And then the COM designers were showered with accolades and given cash bonuses in gratitude.

But yes, you're right, I do think Microsoft COM is underrated, and if C++ adopted it into the language, it would make a very substantial improvement to C++. Politically impossible, unfortunately. Gaby dos Reis tried to effectively propose a modernised COM for C++ Modules originally, and to say that went down like a lead balloon would be putting it mildly.

Niall
On Fri, 10 May 2024 at 18:59, Niall Douglas via Boost wrote:
On 10/05/2024 16:45, Peter Dimov via Boost wrote:
Niall Douglas wrote:
Did you (as in, you, Ben Craig, or one of the others) ever write this up in some form? I would certainly be interested in reading about it.
My memory is that it was mostly written up in private email exchanges amongst a small group. I wasn't always included in the discussion, either, so I only saw parts of the debate and design.
I do remember seeing at one point a private github repo with some code, and me writing an email saying something like "no no no that's all wrong, do X, Y and Z instead" :)
In any case I'm the wrong person to ask about this, as I wasn't invited into the main group who did most of the running. Ben I think would be a much better bet. I'll CC him there now to this.
I remember your having a paper about Microsoft COM, but if memory serves, I found it a bit difficult to understand. In particular, it wasn't quite clear what exactly was being proposed; as in, what needs to change, in language or library, and how.
That was a very, very long time ago.
That was the thing I built for BlackBerry while I was working there. Basically a modernised Microsoft COM. Fully working implementation. Would have solved lots of future C++ ABI upgrade problems in BB10.
Unfortunately, my colleagues at BlackBerry at the time reacted very similarly to how COM was reacted to in Microsoft when it was presented. COM only became what it did because a senior manager in Office went ahead and adopted it anyway despite the negative reaction. And that forced everybody else to use it, and before you knew it, everything on Windows had to be COM. And then the COM designers were being showered with accolades and given cheques of cash bonuses in gratitude.
But yes you're right, I do think Microsoft COM is underrated and if C++ adopted it into the language, it would make a very substantial improvement to C++. Politically impossible, unfortunately. Gaby dos Reis
Well, you know... that sort of mechanism is not going to solve ABI issues now and forever. I have written one for another phone vendor, and they shipped it on a couple hundred million devices. I didn't make it interface-only; it allowed you to get from an interface to a concrete non-virtual API, because that's an inevitable practical need for quite many users. And for quite some other users, it's an unbearable burden to have to go through interface indirections for an API that you never ended up needing to extend in the ways that the purported ABI freedoms provided by such a beast allow.

Are you going to make std::string a COM type with interface indirections? tuple? pair?
tried to effectively propose a modernised COM for C++ Modules originally, and to say that went down like a lead balloon would be putting it mildly.
I have absolutely no idea what you're talking about there. I cannot find a Modules proposal that does anything of the sort. Please point to the proposal you think you're talking about.
On 10/05/2024 23:01, Ville Voutilainen wrote:
Are you going to make std::string a COM type with interface indirections? tuple? pair?
You are correct that ABI resiliency has historically come with a runtime overhead penalty. If a linker can devirtualise final classes, it certainly can de-indirect interface indirections. The prototype I built at BlackBerry didn't implement the runtime de-COMification stage, but it did emit all the necessary metadata so a linker stage could do so in linear time.

The way I had it, if compatible toolchains were used on both sides of the ABI boundary, you got linking as now. If they differed, you got varying amounts of runtime overhead generated, with the worst case being two toolchains with different calling conventions.

As to whether all types which touch a nu-COM boundary should be required to be nu-COMed themselves ... I guess it depends on the impact on link times. If Modules can be implemented in a way where link times are bounded, then nu-COMifying everything is surely doable in bounded time. It is hard to do better than speculation here, but for sure, for Debug builds we can have fast links, and only for optimised builds do we invest more time in linking. I think it would work.
tried to effectively propose a modernised COM for C++ Modules originally, and to say that went down like a lead balloon would be putting it mildly.
I have absolutely no idea what you're talking about there. I cannot find a Modules proposal that does anything of the sort. Please point to the proposal you think you're talking about.
I don't think it ever made it as far as a formal WG21 paper. Gaby described it to me over dinner one evening, otherwise I wouldn't know about it either.

The essence of it was https://github.com/GabrielDosReis/ipr: your compiled C++ Module would be a shared library binary with an IPR interface description embedded in it. Your C++ process runtime loader would no longer use the ELF or PE symbol tables to link the shared library binaries into a process; it would parse the IPR for the precompiled C++ Modules in the link database, assemble them into a graph, generate any thunk code needed between them, and thus birth the process. This is why LLFIO works before main() gets called, incidentally; I originally intended it to be used to bootstrap C++ Modules based C++ programs into existence.

From what he told me, some within C++ tooling did not like there being a standard binary interface for Modules. As in, over-my-dead-body showstopper red-line no. So that got dropped very early on as politically infeasible to standardise. Plus, historically speaking, WG21 has not told implementations exactly how to implement tooling, so it would have been a big land grab and a big ask to dictate to all implementations "all C++ Modules shall be implemented in this EXACT binary format". The C++ Modules we eventually got, I think you'll agree, were very considerably watered down even after the first WG21 papers appeared, so it was probably right that it was too big an ask of WG21 to standardise C++ Modules as a newly modernised COM of toolchain-independent binary objects. No point proposing something impossible.

Niall
participants (13)

- Andrey Semashev
- Arno Schoedl
- Christian Mazakas
- David Sankel
- Glen Fernandes
- Joaquín M López Muñoz
- Klemens Morgenstern
- Niall Douglas
- Peter Dimov
- René Ferdinand Rivera Morell
- Ville Voutilainen
- Vinnie Falco
- Дмитрий Архипов