On Tue, Apr 12, 2022 at 17:38, Marcelo Zimbres Silva <mzimbres@gmail.com> wrote:
>> That's not a library. That's an application.
> That is why I pointed you at the specific line in the file, where the chat_participant class is defined, and not to the application as a whole.
The abstraction is built for the specific application only. It'll carry its policies built in.
>> A NodeJS application, for instance, will have an http.createServer() call and a callback that gets called at each new request. How, then, do you answer questions such as "how do I defer the acceptance of new connections during high-load scenarios?".
> That's a general question to a problem I am not trying to solve. Discussing this here will only make things more confusing.
Boost.Asio won't solve specific problems either. The rule is: just don't make it worse. If you just follow Boost.Asio's style, you don't need to solve this problem, and nobody will suffer.
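To make the "active party" point concrete: in an initiation-style API, deferring acceptance is simply not initiating the next accept. Below is a minimal, hypothetical sketch in plain C++ (no real sockets; `load`, `backlog` and `maybe_accept_next()` are stand-ins I made up for the `async_accept` initiation/completion cycle):

```cpp
#include <queue>

// Hypothetical simulation: in Boost.Asio style the user initiates each
// accept explicitly, so "defer acceptance under high load" is simply
// "don't call async_accept again until load drops".
struct server {
    int load = 0;            // pretend load metric
    int accepted = 0;
    std::queue<int> backlog; // pretend kernel backlog of connections

    // Stands in for the async_accept completion handler.
    void on_accept(int /*fd*/) {
        ++accepted;
        ++load;              // each connection adds load
        maybe_accept_next(); // the user is the active party
    }

    // The user decides whether to initiate the next accept at all.
    void maybe_accept_next() {
        if (load >= 2) return; // high load: initiate nothing, defer
        if (backlog.empty()) return;
        int fd = backlog.front();
        backlog.pop();
        on_accept(fd);         // stands in for async_accept + completion
    }
};
```

No framework hook or policy knob is needed: back-pressure falls out of the user owning the initiation.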
>> Boost.Asio OTOH never suffered from such problems.
> Of course, not even Boost.Beast, which is built on top of Asio, suffers from this problem, as it provides only a low-level HTTP library.
Well, I've built an HTTP library higher level than Beast, and Boost.Asio's style didn't force me to go lower-level, not even a bit. However, let's take examples from somewhere else. This one even has bugs with respect to executors & completion tokens (in the past I've successfully wrapped AZMQ's functions to fix their bugs): https://github.com/zeromq/azmq

Check one of their examples: https://github.com/zeromq/azmq/blob/master/doc/examples/actor/main.cpp

AZMQ won't make you deal with explicit queueing. AZMQ won't even make you worry about connection ordering or disconnections (we can call async_read() before the remote endpoint is configured). AZMQ even deals with subscriptions, all under a simple socket API that is very much Boost.Asio-like (you'll even find set_option() functions just like in Boost.Asio's native sockets).
>>> - At the time you call deliver(msg) there may be an ongoing write, in which case the message has to be queued and sent only after the ongoing write completes.
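For reference, that queueing discipline can be sketched without any I/O. This is a hedged simulation, not the real chat example (which uses net::async_write on a deque of messages); here `on_write_complete()` stands in for the async_write completion handler:

```cpp
#include <deque>
#include <string>

// Simplified simulation of the chat example's deliver() pattern: if a
// write is in flight, only enqueue; otherwise start a write. The next
// write is started from the completion handler of the previous one, so
// writes never interleave and the stream is never corrupted.
class writer {
public:
    // deliver() may be called by the user at any time.
    void deliver(const std::string& msg) {
        bool write_in_progress = !queue_.empty();
        queue_.push_back(msg);
        if (!write_in_progress)
            do_write();
    }

    // Stands in for the async_write completion handler.
    void on_write_complete() {
        queue_.pop_front();
        if (!queue_.empty())
            do_write(); // chain the next write
    }

    const std::string& sent() const { return sent_; }
    bool idle() const { return queue_.empty(); }

private:
    void do_write() {
        // In the real code this would be net::async_write(socket_, ...).
        sent_ += queue_.front();
    }

    std::deque<std::string> queue_;
    std::string sent_;
};
```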
>> You can do the same with async_*() functions. There are multiple approaches. As an example:
>> https://sourceforge.net/p/axiomq/code/ci/master/tree/include/axiomq/basic_qu...
> Although I don't really want to comment on code I am not familiar with, I will, as after a short glance I spotted many problems:
> 1. It uses two calls to async_write whereas in Chris' example there is only one.
> 2. It tries to do the same thing as the deliver() function I pointed you at, but poorly. You see, it has to call async_write again in the completion of the previous async_write when the queue is not empty.
You're writing to a stream. The function correctly serializes access to the stream by ordering writes and avoiding a corrupt stream. Each time you call async_write(), post semantics are used, and then you can have multiple concurrent CPS chains (in a fiber world, it'd be easy to explain: you'd allow multiple fibers).

> 3. It uses callbacks, btw., which you were arguing against.
You're missing the point: "consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request." That's not hard to follow, and examples were plenty.

Before Boost.Asio 1.54 we didn't have completion tokens. Everything was callbacks at every layer. However, they're a different kind of callback. Boost.Asio refers to them as completion handlers. They're one-shot callbacks, not signal handlers that keep being called for unrelated events. They only deal with the completion of an event that you explicitly initiated. Completion tokens are just syntax sugar for that. That's why it maps so well to coroutines. They're CPS chains. They poorly emulate threads/fibers[1], but many of the concepts still apply by analogy.

One of the very few places where Boost.Asio uses callbacks that are not part of CPS chains is this: https://www.boost.org/doc/libs/1_72_0/doc/html/boost_asio/reference/ssl__str...

However, set_verify_callback() is more like the composition of an algorithm (as happens with std::find_if()). set_verify_callback() by itself won't schedule new operations. set_verify_callback() is not part of a framework that will make "the user a passive party in the design, who only has to react to incoming requests". Boost.Asio has a style, and you're not following it.

> 4. It doesn't seem to handle concurrency correctly by means of Asio primitives, e.g. strands. It uses a lock-free queue. I don't think any IO object should do this.
That's one of the approaches: a thread-safe socket (you don't need to wrap it in a strand). I've said earlier that many approaches exist. I'm giving an example at every step of every comment.
std::string msg;
co_await net::async_read(socket1, net::dynamic_buffer(msg), use_awaitable);
session2->async_deliver(msg, net::detached);
You only need to check whether async_deliver() clones the buffer (if it doesn't, then you can clone it yourself before the call).
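A sketch of that caller-side cloning, under the assumption that async_deliver() only borrows the buffer. Everything here (`fake_session`, `deliver_cloned`) is a hypothetical stand-in; the point is keeping the bytes alive in a shared_ptr captured by the completion handler:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <string>

// Hypothetical session whose async_deliver() does NOT copy the bytes;
// like a net::const_buffer, it only keeps the pointer and size.
struct fake_session {
    std::function<void()> pending; // stands in for the outstanding op
    std::string last_sent;

    void async_deliver(const char* data, std::size_t n,
                       std::function<void()> on_done) {
        pending = [this, data, n, on_done] {
            last_sent.assign(data, n); // "writes" the borrowed bytes
            on_done();
        };
    }
};

// Caller-side cloning: move the message into a shared_ptr so it stays
// alive until the handler runs, even after the caller's copy is gone.
inline void deliver_cloned(fake_session& s, std::string msg) {
    auto owned = std::make_shared<std::string>(std::move(msg));
    s.async_deliver(owned->data(), owned->size(),
                    [owned] { /* capture keeps the buffer alive */ });
}
```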
>> Right now you're forcing all your users to go through the same policy. That's the mindset of an application, not a library.
> No. I don't have any policy; I don't really know what you are talking about. I am only presenting
> - async_connect
> - async_write
> - async_read
> - and two timer.async_wait
> as a single composed operation to the user, so that they don't have to do that themselves, each one of them.
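As an aside, the shape being described (one initiating function, one final completion, sub-steps hidden inside) can be sketched without the network. `fake_connect`, `fake_write` and `fake_read` below are hypothetical stand-ins for the real async_connect/async_write/async_read sub-operations, and `async_run_once` is an invented name, not Aedis' API:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <vector>

using callback = std::function<void(const std::string&)>;

// Each fake sub-operation "completes" by invoking its handler.
inline void fake_connect(callback cb) { cb("connected"); }
inline void fake_write(callback cb)   { cb("wrote"); }
inline void fake_read(callback cb)    { cb("read"); }

// The composed operation: the user sees a single initiating function
// and a single final handler; sequencing the sub-steps (each one
// started from the previous step's completion) is an internal detail.
inline void
async_run_once(std::function<void(std::vector<std::string>)> on_done) {
    auto log = std::make_shared<std::vector<std::string>>();
    fake_connect([log, on_done](const std::string& a) {
        log->push_back(a);
        fake_write([log, on_done](const std::string& b) {
            log->push_back(b);
            fake_read([log, on_done](const std::string& c) {
                log->push_back(c);
                on_done(*log); // one completion for the whole chain
            });
        });
    });
}
```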
Okay. My impression was that Aedis does this in a loop, as in:

for (;;) {
    // ... read(...)
    // call one callback and repeat the loop
}

Is Aedis' pattern different (as in a single-shot operation where the callback is called only once)? I think it's worth clarifying its behavior before we continue our discussion, so I haven't addressed all of your comments (maybe there's nothing to address, depending on the answer here). Does the user call async_run() in a loop, or does async_run() have a loop inside already?

[1] https://www.boost.org/doc/libs/1_72_0/doc/html/boost_asio/overview/core/hand...

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/