[boost.async] Some questions
Hi Everyone,

I would like to thank Klemens for writing and sharing this library. I am sure the community needs a higher-level library for asynchronous computations based on C++20 coroutines, and Boost.Async addresses that need. I would like to ask a couple of questions to better understand the design goals and the scope of the library.

Q1. Is single-threaded-ness the design goal? Or is the plan to enable multi-threaded use cases? There is a GitHub issue (https://github.com/klemens-morgenstern/async/issues/19) suggesting the latter.

Q2. Reading through the docs, I get the impression that Boost.Async is either a wrapper over ASIO, or an isolated usage for a generator. They seem like two unrelated worlds. The former requires a hard prerequisite of having the Boost.ASIO library (with Boost version at least 1.82). The latter can hardly be called "asynchronous": generators, the way I understand them, are perfectly synchronous. Is this impression correct?

The following are more remarks regarding the choice of names.

Q3. The docs say that `promise` and `task` differ primarily by their level of eagerness, but the names do not seem to reflect this. What is the rationale or the intuition behind those names?

R1. I find the name `use_op` uninformative. The examples in the documentation suggest that it is an *adapter*: it changes the Boost.ASIO interface into the Boost.Async interface. Is that correct?

Regards,
&rzej;
On Sun, Aug 13, 2023 at 5:33 PM Andrzej Krzemienski via Boost wrote:
Hi Everyone, I would like to thank Klemens for writing and sharing this library. I am sure the community needs a higher level library for asynchronous computations based on C++20 coroutines. Boost.Async addresses the need. I would like to ask a couple of questions to better understand the design goals and the scope of the library.
Thanks for looking into the library.
Q1. Is single-threaded-ness the design goal? Or is the plan to enable multi-threaded use cases? There is a GitHub issue ( https://github.com/klemens-morgenstern/async/issues/19) suggesting the latter.
I think most of the use cases for asynchronous code are best served being single-threaded. That is, you have a single IO thread and offload intense work (e.g. complex calculations) onto a thread pool. You can have multiple threads using async, but they can't interact safely with each other. I am considering adding support for that, but that's already possible by using asio's concurrent_channel, so it's not high on the priority list.
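The pattern described here, one event-loop thread with heavy computation offloaded to a worker pool, is language-agnostic. A minimal sketch in Python's asyncio (chosen only because it is compact and self-contained; with Boost.Async one would use an asio thread pool for the same purpose):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def heavy(n):
    # stand-in for a CPU-intensive computation that would block the IO thread
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as pool:
        # the event loop stays on a single thread; the heavy work runs on
        # a pool worker and is awaited like any other asynchronous operation
        result = await loop.run_in_executor(pool, heavy, 1000)
    return result

print(asyncio.run(main()))
```

The IO thread never blocks on `heavy`; it merely awaits the pool's result, which is the division of labor described above.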
Q2. Reading through the docs, I get an impression that Boost.Async is "Either a wrapper over ASIO, or an isolated usage for a generator". They seem like two unrelated worlds. The former requires a hard prerequisite of having a Boost.ASIO library (with Boost version at least 1.82). The latter can hardly be called "asynchronous". Generators, the way I understand them, are perfectly synchronous. Is this impression correct?
It is not. Generators are also async, i.e. they run on an event loop and can do asynchronous co_awaits. I also don't think "wrapper" is the correct term; async is like any coroutine library in that it needs an event loop (in some languages one is built in). So it uses asio's, which I considered the best choice.
The following are more remarks regarding the choice of names.
Q3. The docs say that `promise` and `task` differ primarily by their level of eagerness. But the names do not seem to reflect this. What is the rationale or the intuition behind those names?
Promises are eager in JS; tasks are lazy in Python. It can also be remembered like this: if you make a promise, you should see it through eagerly, while a task can wait until it's scheduled.
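The Python half of the analogy can be observed directly: calling a coroutine function there does not run its body, and the resulting object stays lazy until scheduled, exactly the `task` behavior (JS promises, by contrast, start executing immediately on construction). An illustrative sketch:

```python
import asyncio

ran = []

async def work():
    # the side effect lets us observe when the body actually runs
    ran.append(True)
    return 42

coro = work()                # nothing has executed yet: lazy, task-like
assert ran == []
result = asyncio.run(coro)   # the body runs only once scheduled/awaited
assert result == 42
assert ran == [True]
```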
R1. I find the name `use_op` uninformative. The examples in documentation suggest that it is an *adapter*: they change the Boost.ASIO interface into the Boost.Async interface. Is that correct?
Technically, into an expression that can be used with co_await from any coroutine. All of asio's completion tokens are *adapters* in a sense, e.g. use_future adapts it into a std::future. The type returned is an implementation of `op`, so `use_op` matches the asio pattern here.
Regards, &rzej;
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Sun, 13 Aug 2023 at 11:55 Klemens Morgenstern <klemensdavidmorgenstern@gmail.com> wrote:
On Sun, Aug 13, 2023 at 5:33 PM Andrzej Krzemienski via Boost
wrote:

Hi Everyone, I would like to thank Klemens for writing and sharing this library. I am sure the community needs a higher level library for asynchronous computations based on C++20 coroutines. Boost.Async addresses the need. I would like to ask a couple of questions to better understand the design goals and the scope of the library.
Thanks for looking into the library.
Q1. Is single-threaded-ness the design goal? Or is the plan to enable multi-threaded use cases? There is a GitHub issue ( https://github.com/klemens-morgenstern/async/issues/19) suggesting the latter.
I think most of the use cases for asynchronous code are best served being single threaded. That is you have a single IO thread and offload intense work (e.g. complex calculations) onto a thread pool.
You can have multiple threads using async, but they can't interact safely with each other. I am considering adding support for that, but that's already possible by using asio's concurrent_channel. So it's not high on the priority list.
I understand the priorities, and they seem fine to me. Still, I would like to get a picture of the scope of this library. The docs say "simple single threaded asynchronicity", which seems to imply "no multithreading by design". Your response above is more like "ultimately both single-threaded and multi-threaded, but only single-threaded for now". Either option is fine, but it would be easier for potential users if one of these was indicated clearly.
Q2. Reading through the docs, I get an impression that Boost.Async is "Either a wrapper over ASIO, or an isolated usage for a generator". They seem like two unrelated worlds. The former requires a hard prerequisite of having a Boost.ASIO library (with Boost version at least 1.82). The latter can hardly be called "asynchronous". Generators, the way I understand them, are perfectly synchronous. Is this impression correct?
It is not. Generators are also async, i.e. they run on an event loop and can do asynchronous co_awaits.
I also don't think "wrapper" is the correct term; async is like any coroutine library in that it needs an event loop (some of which are built into the language). So it uses asio's which I considered the best choice.
Two follow-up questions, to understand better. One: is this library prepared to work with any event loop other than Boost.Asio (or standalone ASIO)? If so, is there an example somewhere? Two: does the `generator` example from the docs also involve the ASIO executor?
The following are more remarks regarding the choice of names.
Q3. The docs say that `promise` and `task` differ primarily by their level of eagerness. But the names do not seem to reflect this. What is the rationale or the intuition behind those names?
Promises are eager in JS, tasks lazy in Python.
The above explanation should be included as notes in the docs of the corresponding types.
Can also be remembered like this: If you make a promise you should see it through eagerly, while a task can wait until it's scheduled.
The explanation for a `task` works for me. But I do not understand the analogy with the promise.
R1. I find the name `use_op` uninformative. The examples in documentation suggest that it is an *adapter*: it changes the Boost.ASIO interface into the Boost.Async interface. Is that correct?
Technically into an expression that can be used with co_await from any coroutine.
All of asio's completion tokens are *adapters* in a sense, e.g. use_future adapts it into a std::future. The type returned is an implementation of `op`, so `use_op` matches the asio pattern here.
I agree that all ASIO's completion tokens can be thought of as *adapters*. What I am missing is a clear indication that `use_op` is a *completion token* in the ASIO model, even with a link to the ASIO docs for completion tokens.

OK, now I also get that `op` is an Awaitable that wraps an ASIO operation. Is this statement precise and correct? (Does ASIO use the term "operation"?) Still, the name `op` is very short and not informative. Does it need to be that short? Is it expected to be used very often?

And this one:

    async::use_op.as_default_on()

Is it not too clever? op_adapter() would be shorter and have a dedicated signature that could be separately documented.
Regards, &rzej;
Sun, 13 Aug 2023 at 11:32 Andrzej Krzemienski wrote:
One more question. This interface of async::generator, taking two parameters, where one can not only generate values from the generator but also obtain values: is there a real-life use case for this?
I'd say major languages like Python and JS allow for this, too. So if you're coming from these, it makes sense.
Thu, 17 Aug 2023 at 23:55 Ruben Perez wrote:
One more question. This interface of async::generator, taking two parameters, where one can not only generate values from the generator but also obtain values: is there a real-life use case for this?

I'd say major languages like Python and JS allow for this, too. So if you're coming from these, it makes sense.
Thanks, but still, could someone show a plausible real-life example of this written in Boost.Async? I am not familiar with Python's or JS's coroutines. But do they have an *identical* interface?

When I was trying to come up with an example, I found the results surprising:

    auto output1 = co_await generator(input1);
    auto output2 = co_await generator(input2);

I expected that this instruction would mean "take input2, suspend, and when resumed return the value computed from input2". But because the implementation in the coroutine has to read:

    auto next_input = co_yield compute(input);

the consequence is that the co_awaits actually mean "take input2, suspend, and when resumed return the value computed from input1". Maybe I am doing something wrong; I would like to be corrected. The argument that other languages have it is not a valid one for me. I would still like to know if this has a use case when implemented as it is with C++ coroutines.

I enclose my example, where I tried to model a producer and consumer situation, and concluded that I couldn't.

Regards,
&rzej;
On Fri, Aug 18, 2023 at 6:51 AM Andrzej Krzemienski via Boost wrote:
Thu, 17 Aug 2023 at 23:55 Ruben Perez wrote:

One more question. This interface of async::generator, taking two parameters, where one can not only generate values from the generator but also obtain values: is there a real-life use case for this?

I'd say major languages like Python and JS allow for this, too. So if you're coming from these, it makes sense.
Thanks, but still, could someone show a plausible real-life example of this written in Boost.Async? I am not familiar with Python's or JS's coroutines. But do they have an *identical* interface?
Not identical, you need to call `send` in python, instead of operator().
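For readers unfamiliar with Python's interface, a small sketch of the `send`-based exchange; the `compute` function here is a hypothetical stand-in for whatever work the generator does per input:

```python
def compute(x):
    # placeholder for the per-input work
    return x * 10

def client():
    result = None
    while True:
        x = yield result      # receive the next input pushed in via send()
        result = compute(x)

g = client()
next(g)                       # prime: Python generators are lazy, so we must
                              # advance to the first yield before sending
assert g.send(1) == 10        # push input 1, get back compute(1)
assert g.send(2) == 20        # the output corresponds to the input just sent
```

Note that laziness plus the explicit priming step is exactly what sidesteps the "where does input1 come from" question discussed below.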
When I was trying to come up with an example, I found the results surprising:
    auto output1 = co_await generator(input1);
    auto output2 = co_await generator(input2);
I expected that this instruction would mean "take input2, suspend, and when resumed return value computed from input2". But because the implementation in the coroutine has to read:
auto next_input = co_yield compute(input);
The consequence is that the co_awaits actually mean "take input2, suspend, and when resumed return value computed from input1".
Generators have this "weird" kind of overlap by their nature. They can be made lazy, but then the inner workings get utterly confusing too, because where does the input1 come from before the co_yield? I.e. in your example:

    async::generator client(Task t)
    {
        for (int i = 0; i < 100; ++i)
        {
            std::cout << "processing: " << t.value << std::endl;
            t = co_yield Result{ std::format("result-{}-{}", i, t.value) };
        }
        co_return Result{ std::format("result-{}-{}", 100, t.value) };
    }

When I do the first co_await g(t), where does `t` go? You're in the co_yield using the t passed in through the argument list, so it's not clear what's going on either. I actually did this in asio::experimental::coro, and I found it worse.
Maybe I am doing something wrong, I would like to be corrected. The argument that other languages have it is not a valid one for me. I would still like to know if this has a use case when implemented as it is with C++ coroutines.
I think you're just looking for a lazy generator and I made it eager. There's no reason I couldn't support both.
I enclose my example, where I tried to model a producer and consumer situation, and concluded that I couldn't.
Regards, &rzej;
On Fri, Aug 18, 2023, 03:36 Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
On Fri, Aug 18, 2023 at 6:51 AM Andrzej Krzemienski via Boost
wrote:

Thu, 17 Aug 2023 at 23:55 Ruben Perez wrote:
One more question. This interface of async::generator, taking two parameters, where one can not only generate values from the generator, but also obtain values: is there a real-life use case for this?
I'd say major languages like Python and JS allow for this, too. So if you're coming from these, it makes sense.
Thanks, but still, could someone show a plausible real-life example of this written in Boost.Async? I am not familiar with Python's or JS's coroutines. But do they have an *identical* interface?
Not identical, you need to call `send` in python, instead of operator().
When I was trying to come up with an example, I found the results surprising:
    auto output1 = co_await generator(input1);
    auto output2 = co_await generator(input2);
I expected that this instruction would mean "take input2, suspend, and when resumed return value computed from input2". But because the implementation in the coroutine has to read:
auto next_input = co_yield compute(input);
The consequence is that the co_awaits actually mean "take input2, suspend, and when resumed return value computed from input1".
generators have this "weird" kind of overlap by their nature.
I agree with the observation. I mean, the weirdness comes into play when we employ the mechanism for injecting values into the generator. This is why I am asking for any use case that would be served by this feature. My hypothesis is that it is useless: useless in Python and JS, and now it is copied into Boost.Async. I may be wrong about this. This is why an example of a plausible use case would help prove me wrong.

They can be made lazy but then the inner workings get utterly confusing too, because where does the input1 come from before the co_yield?
i.e. in your example:

    async::generator client(Task t)
    {
        for (int i = 0; i < 100; ++i)
        {
            std::cout << "processing: " << t.value << std::endl;
            t = co_yield Result{ std::format("result-{}-{}", i, t.value) };
        }
        co_return Result{ std::format("result-{}-{}", 100, t.value) };
    }

When I do the first co_await g(t), where does `t` go? You're in the co_yield using the t passed in through the argument list, so it's not clear what's going on either. I actually did this in asio::experimental::coro, and I found it worse.
I agree that my example is confusing. I was trying to find any application for the feature (of injecting values into a generator) and I failed. Hence my hypothesis that it serves no use case.
There might be an option to support this with a runtime option, e.g.:

    async::generator client()
    {
        auto t = co_await async::this_coro::initial; // wait for the first co_await & make the generator lazy
        for (int i = 0; i < 100; ++i)
        {
            std::cout << "processing: " << t.value << std::endl;
            t = co_yield Result{ std::format("result-{}-{}", i, t.value) };
        }
        co_return Result{ std::format("result-{}-{}", 100, t.value) };
    }
You are describing a potential new feature, right?
Maybe I am doing something wrong, I would like to be corrected. The argument that other languages have it is not a valid one for me. I would still like to know if this has a use case when implemented as it is with C++ coroutines.
I think you're just looking for a lazy generator and I made it eager. There's no reason I couldn't support both.
I wasn't really requesting a lazy generator (but maybe it is useful). I just want to understand if there is any known use case for an eager generator that has values injected into it. (Because if there isn't, this would be a basis for criticizing this portion of the library interface.)

Regards,
&rzej;
I enclose my example, where I tried to model a producer and consumer situation, and concluded that I couldn't.
Regards, &rzej;
I agree with the observation. I mean, the weirdness comes into play when we employ the mechanism for injecting values into the generator. This is why I am asking for any use case that would be served by this feature. My hypothesis is that it is useless: useless in Python and JS, and now it is copied into Boost.Async. I may be wrong about this. This is why an example of a plausible use case would help prove me wrong.
Well, the simplest example would be a state machine: you push in the transition and get the current state back when awaiting it. "Simple" of course isn't simple here, because state machines need a certain complexity to be useful.
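The push-a-transition, get-a-state-back shape can be sketched with a toy turnstile; Python generators are used here only for compactness, and all names are illustrative, not Boost.Async API. With a lazy (primed) generator, the value that comes back is the state after the transition just sent:

```python
def turnstile():
    # push in an event ("coin"/"push"); get back the state after the transition
    state = "locked"
    while True:
        event = yield state
        if state == "locked" and event == "coin":
            state = "unlocked"
        elif state == "unlocked" and event == "push":
            state = "locked"
        # any other event leaves the state unchanged

sm = turnstile()
next(sm)                              # prime; initial state is "locked"
assert sm.send("coin") == "unlocked"  # valid transition
assert sm.send("push") == "locked"    # pushing through re-locks it
assert sm.send("push") == "locked"    # invalid event: state unchanged
```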
I wasn't really requesting a lazy generator (but maybe it is useful).
Just FYI: I can add that as a runtime option to the existing generator. A PR is open.
Fri, 18 Aug 2023 at 13:42 Klemens Morgenstern <klemensdavidmorgenstern@gmail.com> wrote:
I agree with the observation. I mean, the weirdness comes into play when we employ the mechanism for injecting values into the generator. This is why I am asking for any use case that would be served by this feature. My hypothesis is that it is useless: useless in Python and JS, and now it is copied into Boost.Async. I may be wrong about this. This is why an example of a plausible use case would help prove me wrong.
Well the simplest example would be a statemachine. You push in the transition and get the current state back when awaiting it.
"Simple" of course isn't simple here, because state machines need a certain complexity to be useful.
I am sorry, maybe my imagination is lacking, but from the description above I do not see how this would work. The way I understand a state machine, if there is an object representing one and I pass it a new event (transition), I would expect it to return the state after the transition. No? Is there some example online that you can point me to, so I could better understand such a use case? Maybe the "examples" section of Boost.Async would benefit from such an example.
I wasn't really requesting a lazy generator (but maybe it is useful).
Just FYI: I can add that as a runtime option to the existing generator. A PR is open.
Thanks, &rzej;
On Fri, Aug 18, 2023, 04:36 Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
On Fri, Aug 18, 2023 at 6:51 AM Andrzej Krzemienski via Boost wrote:

Thu, 17 Aug 2023 at 23:55 Ruben Perez wrote:
There might be an option to support this with a runtime option, e.g.:

    async::generator client()
    {
        auto t = co_await async::this_coro::initial; // wait for the first co_await & make the generator lazy
        for (int i = 0; i < 100; ++i)
        {
            std::cout << "processing: " << t.value << std::endl;
            t = co_yield Result{ std::format("result-{}-{}", i, t.value) };
        }
        co_return Result{ std::format("result-{}-{}", 100, t.value) };
    }

Maybe I am doing something wrong, I would like to be corrected. The argument that other languages have it is not a valid one for me. I would still like to know if this has a use case when implemented as it is with C++ coroutines.
I think you're just looking for a lazy generator and I made it eager. There's no reason I couldn't support both.
There's still one thing I don't understand, though. The docs say the difference between the promise and the task type is in eagerness/laziness. But for the generator, in the PR, laziness is enabled "manually" with a co_await instruction. Couldn't the same be applied to the promise? Is there value in encoding the laziness property in the type? If so, why would the same argument not apply to the generator?

Regards,
&rzej;
I enclose my example, where I tried to model a producer and consumer situation, and concluded that I couldn't.
Regards, &rzej;
There's still one thing I don't understand, though. The docs say the difference between the promise and the task type is in eagerness/laziness. But for the generator, in the PR, laziness is enabled "manually" with a co_await instruction. Couldn't the same be applied to the promise? Is there value in encoding the laziness property in the type? If so, why would the same argument not apply to the generator?
In theory, yes. But there's a bit more to it. Tasks take their executor when they first get resumed, which allows them to get spawned onto another executor, unlike promises. That's also reflected in their memory allocation. The promise starts running immediately and might resume something else (that awaits it) on completion. The generator will always get co_awaited and resumed by its caller, by default on the first co_yield, so making the laziness a run-time parameter with `co_await initial` is basically a co_yield that doesn't yield a value. So the change is minor: it just changes the timing, and the overhead is minimal. But note that I didn't merge it yet, so I am not 100% sure either.
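The "co_yield that doesn't yield a value" can be pictured via Python's closest equivalent: a bare first `yield` that only receives the initial input and produces nothing, making laziness a property of the body rather than of the type. Illustrative only, not Boost.Async API:

```python
def lazy_client():
    x = yield            # "initial" suspension: receive the first input,
                         # produce no value - laziness decided inside the body
    while True:
        x = yield x * 10

g = lazy_client()
next(g)                  # advance to the initial suspension point
assert g.send(1) == 10   # first real exchange: input 1 -> output 10
assert g.send(2) == 20
```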
participants (3)
- Andrzej Krzemienski
- Klemens Morgenstern
- Ruben Perez