On Tue, 12 Apr 2022 at 06:28, Vinícius dos Santos Oliveira
This class deviates a lot from Boost.Asio's style. I'll borrow an explanation that was already given before:
[...] makes the user a passive party in the design, who only has to react to incoming requests. I suggest that you consider a design that is closer to the Boost.Asio design. Let the user become the active party, who asks explicitly for the next request.
-- https://lists.boost.org/Archives/boost/2014/03/212072.php
IMO, he is confusing *Boost.Asio design* with *high- vs low-level design*. Let us have a look at this example from Asio itself:

https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea29...

The public API of the chat_session is

   class chat_session {
   public:
      void start();
      void deliver(const std::string& msg);
   };

One could also erroneously think that the deliver() function above is "not following the Asio style" because it is not an async function and has no completion token. But in fact, it has to be this way for a couple of reasons:

- At the time you call deliver(msg) there may be an ongoing write, in which case the message has to be queued and sent only after the ongoing write completes.

- Users should be able to call deliver() from inside other coroutines. That wouldn't work well if it were an async_ function. Think, for example, of two chat sessions sending messages to one another:

   coroutine() // Session1
   {
      for (;;) {
         std::string msg;
         co_await net::async_read(socket1, net::dynamic_buffer(msg), ...);

         // Wrong.
         co_await session2->async_deliver(msg);
      }
   }

  Now if session2 becomes unresponsive, so does session1, which is undesirable. The read operation should never be blocked by other IO operations.

These are some of the reasons why deliver() has a simple implementation:

   void deliver(const std::string& msg)
   {
      write_msgs_.push_back(msg);
      timer_.cancel_one();
   }

i.e. it pushes the message into a queue and signals the writer coroutine, which can then write it. This is also what my send() and send_range() functions do.

Now to the start() function: as you can see in the link, it spawns two coroutines, one that keeps reading from the socket and one that keeps waiting for new messages to be written. Notice that it doesn't communicate errors to the user. In Aedis, I reworked this function so it can communicate errors and renamed it to async_run. For that to be possible I use a parallel group:

https://github.com/mzimbres/aedis/blob/27b3bb89fbbec6acd8268f839f310e8d2b5a1...
The explanation above should also make it clear why, one way or another, a high-level API that loops on async_read and async_write will end up having a callback: users must somehow get their code called after each operation completes.
Now, why does it matter? Take a look at this post: https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-a...
Nathaniel goes a long way describing many problems with Python network APIs; he just fails to pinpoint their origin. Because he misses the origin of the problem, he obsesses in later blog posts over something he calls "structured concurrency", which I suggest you ignore.
Anyway, the origin of, and the solution to, all the problems with Python "callback" APIs is very simple: just follow the Boost.Asio style and you'll be covered (and right now you aren't).
That is not to say that callbacks are forbidden. Boost.Asio itself uses callbacks: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/ssl__str.... But do notice the subtlety here: when Boost.Asio uses a callback it's more like std::find_if() taking a function object. Boost.Asio won't turn you into "a passive party in the design, who only has to react to incoming requests"; it will keep you as the active party at all times.
Meanwhile, the problems described by Nathaniel are more like callbacks acting almost as signal handlers: a global "CPU time" resource that calls a callback in between any of the scheduler's rounds, belonging to no user's CPS chain sequence[1].
[1] Continuation-passing style just emulates a user-level thread, so we could borrow analogies from the thread world too if we wanted, such as SIGEV_SIGNAL/SIGEV_THREAD.
I believe this has also been addressed above.

Marcelo