library to support async/await pattern
Is there interest for a coroutine-based library to make asynchronous APIs easier to deal with?

Typically, for each task there is cruft: chaining callbacks, managing intermediate state and error codes (since exceptions don't fit this model). The code flow gets inverted and becomes difficult to follow.

Recent versions of F# and C# solve this problem. They implement an await operator that effectively suspends the executing method until a task completes. The compiler takes care of transforming subsequent code into a continuation. Everything runs on the main thread, with asynchronous methods spending most time awaiting. N3328 proposes resumable functions of this kind in C++.

For an immediate solution we could leverage the Boost.Context/Coroutine library. The resulting code may look like this:

    try {
        task = do_a_async(...);
        // yield until task done
        task.await();
    } catch (const some_exception& e) {
        // exceptions arrive in awaiting context
    }

    // normal code flow
    for (auto& task : tasks1) {
        task.await();
    }

    taskAny = await_any(tasks2);
    taskAny.await();
    ...

There needs to be a representation for Awaitable tasks (similar to std::future but non-blocking). The other requirement is to have a Scheduler (run loop) in order to weave between coroutines.

Benefits:
- normal code flow: plain conditionals, loops, exceptions, RAII
- algorithm state tracked on coroutine stack
- async tasks are composable
- any async API can be wrapped

Cons:
- must wrap async APIs (e.g. Boost.Asio)
- needs std::exception_ptr to dispatch exceptions
- stackful coroutines are sometimes difficult to debug

I wrote an open-source library that does this: https://github.com/vmilea/CppAwait. It's far from Boost style, but the concept looks sane. For a comparison between async patterns please see: https://github.com/vmilea/CppAwait/blob/master/Examples/ex_stockClient.cpp

Making a Boost version would involve serious redesign. So, is this worth pursuing?

Thanks!
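To make the "code flow gets inverted" point concrete, here is a small, self-contained illustration. The API names (connect_async, read_line_async, fetch_quote) are made up for this sketch; they are not from CppAwait or Boost, and the stubs complete immediately just so the example runs:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <system_error>

    // Hypothetical callback-based API (stubs that complete immediately;
    // a real API would complete later from an event loop).
    void connect_async(const std::string& /*host*/,
                       std::function<void(std::error_code)> handler)
    {
        handler(std::error_code()); // success
    }

    void read_line_async(std::function<void(std::error_code, std::string)> handler)
    {
        handler(std::error_code(), "ACME 12.34");
    }

    // Callback style: the flow is inverted and errors are propagated by hand.
    void fetch_quote(std::function<void(std::error_code, std::string)> done)
    {
        connect_async("stock-server", [done](std::error_code ec) {
            if (ec) { done(ec, ""); return; }
            read_line_async([done](std::error_code ec2, std::string line) {
                if (ec2) { done(ec2, ""); return; }
                done(std::error_code(), line);
            });
        });
    }

    // Await style (what the proposed library aims for), sketched as pseudocode:
    //
    //     connect("stock-server").await();            // may throw
    //     std::string line = read_line().await();     // sequential, RAII-friendly
    //
    int main()
    {
        fetch_quote([](std::error_code ec, std::string line) {
            if (ec) std::cerr << "error: " << ec.message() << "\n";
            else    std::cout << "quote: " << line << "\n";
        });
    }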
How does it compare to boost.fiber (github.com/olk/boost-fiber)?
I think Boost.Fiber tries to simulate thread semantics on top of
coroutines. So it has a very different goal.
CppAwait implements a task-based asynchronous pattern. Every asynchronous
operation is exposed as an abstract task. The focus is on being able to
compose tasks and to express complex async work in plain sequential code.
The closest C++ library I know of is Protothreads (https://github.com/benhoyt/protothreads-cpp). It's based on Duff's device, so it's very limited.
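Roughly, a protothread records its last yield point as a line number and jumps back to it through a switch. A minimal sketch of that trick (hypothetical macros, not the actual Protothreads API), showing both the mechanism and its main limitation, namely that locals don't survive a yield and must be hoisted into a struct:

    #include <cstdio>

    // Minimal protothread-style macros (a sketch of the Duff's device trick).
    #define PT_BEGIN(state)  switch (state) { case 0:
    #define PT_YIELD(state)  do { state = __LINE__; return false; \
                                  case __LINE__:; } while (0)
    #define PT_END()         } return true

    struct Counter {
        int state = 0;
        int i = 0;   // locals must be stored here: the real stack is
                     // unwound every time the protothread returns

        bool run() {             // returns true when finished
            PT_BEGIN(state);
            for (i = 0; i < 3; ++i) {
                std::printf("tick %d\n", i);
                PT_YIELD(state); // record the resume point, return to caller
            }
            PT_END();
        }
    };

    int main() {
        Counter c;
        while (!c.run()) { }     // drive it to completion: prints tick 0..2
    }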
These .NET overviews cover the core concept well:
http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx
http://msdn.microsoft.com/en-us/library/hh873175.aspx
And don't forget CppAwait examples for a quick intro.
On Thu, Apr 18, 2013 at 9:22 PM, Oliver Kowalke
How does it compare to boost.fiber (github.com/olk/boost-fiber)?
On 18/04/13 20:22, Oliver Kowalke wrote:
How does it compare to boost.fiber (github.com/olk/boost-fiber)?
This should be compared with Boost.Task, shouldn't it? https://github.com/olk/boost-task

Best,
Vicente
2013/4/19 Vicente J. Botet Escriba
On 18/04/13 20:22, Oliver Kowalke wrote:
How does it compare to boost.fiber (github.com/olk/boost-fiber)?
This should be compared with Boost.Task, shouldn't it?
https://github.com/olk/boost-task
Best, Vicente
hmm - yes, I was confused by the await keyword. boost.tasks hides how the task is executed (cooperatively scheduled etc.)

best regards,
Oliver
I've started looking over Boost.Task, impressive design! Some things are
hard to tell because of incomplete code.
Please let me know if my understanding is correct:
When run from a pool thread, boost::tasks::handle::get() / wait() don't block the current thread. Instead they suspend the current task until the result becomes available. I assume each task has its own fcontext_t. Then wait() from a _fiber_ pool would behave like the proposed 'await'.
I'm not sure how a boost::task wrapper would be implemented over the Asio API. Run it as a subtask and trigger some condition variable in the Asio callback?
Also, I noticed a couple of missing features:
- can't configure stack size per task
- no waitfor_any is quite a limitation
Best regards,
Valentin
2013/4/20 Valentin Milea
I've started looking over Boost.Task, impressive design! Some things are hard to tell because of incomplete code.
yes - I'm struggling to find enough time (I have to implement interface V2 for coroutine, finalize fiber, refactor context for checkpointing ...)
When run from a pool thread, boost::tasks::handle::get() / wait() don't block the current thread. Instead they suspend the current task until the result becomes available. I assume each task has its own fcontext_t. Then wait() from a _fiber_ pool would behave like the proposed 'await'.
correct
I'm not sure how a boost::task wrapper would be implemented over the Asio API. Run it as a subtask and trigger some condition variable in the Asio callback?
I wasn't thinking about an asio wrapper - it might require a new library (maybe worth investigating). I suspect that boost.task would fit here (as a wrapper for cooperative operations wrapping boost.asio).
Also, I noticed a couple of missing features:
- can't configure stack size per task
will be added - and for gcc the stack will grow on demand
- no waitfor_any is quite a limitation
yes

best regards,
Oliver
On Sat, Apr 20, 2013 at 12:03 PM, Oliver Kowalke
2013/4/20 Valentin Milea
[...]
I'm not sure how a boost::task wrapper would be implemented over the Asio API. Run it as a subtask and trigger some condition variable in the Asio callback?
I wasn't thinking about an asio wrapper - it might require a new library (maybe worth investigating). I suspect that boost.task would fit here (as a wrapper for cooperative operations wrapping boost.asio).
I do not believe in wrappers. There are too many async libraries out
there to write wrappers for each one.
A good coroutine/task library should be able to work with existing
APIs transparently. I like a future/promise approach, something like
this:
future
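Giovanni's example appears truncated in the archive. As a rough sketch of the future/promise approach he describes, an adapter over a plain Boost.Asio operation might look like this (async_wait_future is a made-up helper name for illustration):

    #include <boost/asio.hpp>
    #include <boost/system/system_error.hpp>
    #include <exception>
    #include <future>
    #include <iostream>
    #include <memory>

    // Adapt a callback-based Asio operation into a std::future without
    // touching the Asio API itself: the completion handler fulfills a promise.
    std::future<void> async_wait_future(boost::asio::deadline_timer& timer)
    {
        auto p = std::make_shared<std::promise<void>>();
        std::future<void> f = p->get_future();

        timer.async_wait([p](const boost::system::error_code& ec) {
            if (ec)
                p->set_exception(std::make_exception_ptr(
                    boost::system::system_error(ec)));   // error code -> exception
            else
                p->set_value();
        });
        return f;
    }

    int main()
    {
        boost::asio::io_service io;
        boost::asio::deadline_timer timer(io, boost::posix_time::seconds(1));

        std::future<void> done = async_wait_future(timer);
        io.run();      // single-threaded event loop drives the operation
        done.get();    // rethrows boost::system::system_error on failure
        std::cout << "timer fired\n";
    }

Note that in a single-threaded program the io_service must be run before (or while) blocking on the future, otherwise get() would deadlock; an await-style library replaces that blocking get() with a suspension back into the run loop.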
I do not believe in wrappers. There are too many async libraries out there to write wrappers for each one.
I agree the focus should be on a generic solution. However, wrappers can offer better abstractions. With CppAwait you can take some blocking Asio code and make it asynchronous almost mechanically. This is because the wrapper takes care of converting error codes into exceptions, and deals with task interruption. The benefits become more obvious as you start to compose tasks.

So, I think exposing every async operation as a boost::task is good. Hand-writing wrappers for each async function is bad.

Best regards,
Valentin
Is there interest for a coroutine-based library to make asynchronous APIs easier to deal with?
Typically, for each task there is cruft: chaining callbacks, managing intermediate state and error codes (since exceptions don't fit this model). The code flow gets inverted and becomes difficult to follow.
Recent versions of F# and C# solve this problem. They implement an await operator that effectively suspends the executing method until a task completes. The compiler takes care of transforming subsequent code into a continuation. Everything runs on the main thread, with asynchronous methods spending most time awaiting. N3328 proposes resumable functions of this kind in C++.
For an immediate solution we could leverage the Boost.Context/Coroutine library. The resulting code may look like this:
    try {
        task = do_a_async(...);
        // yield until task done
        task.await();
    } catch (const some_exception& e) {
        // exceptions arrive in awaiting context
    }

    // normal code flow
    for (auto& task : tasks1) {
        task.await();
    }

    taskAny = await_any(tasks2);
    taskAny.await();
    ...
FWIW, HPX provides all this and more (https://github.com/STEllAR-GROUP/hpx/). It's well aligned with the Standard's semantics and exposes an interface very close to what you're showing above. As a bonus all of this is available in distributed scenarios as well (remote thread scheduling and synchronization).
There needs to be a representation for Awaitable tasks (similar to std::future but non-blocking). The other requirement is to have a Scheduler (run loop) in order to weave between coroutines.
We strongly believe that we don't need a new construct for this. Threads and futures are exactly the abstraction to be used for this as well. In HPX, hpx::thread (full semantic equivalence to std::thread, except that it represents a task) and hpx::future expose semantics similar to those you're describing:
hpx::future<int> f = hpx::async([](){ return 42; });
BOOST_ASSERT(f.get() == 42);
Additionally, HPX implements N3558 (A Standardized Representation of Asynchronous Operations, http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3558.pdf), which allows writing:
hpx::future<int> f1 = hpx::async([](){ return 42; });
hpx::future<int> f2 = hpx::async([](){ return 43; });
tuple
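The rest of that example appears to have been lost in the archive. For reference, composition via when_all as proposed in N3558 looks roughly like the sketch below (std-style names taken from the paper, not compilable against a 2013 standard library; HPX provides equivalents in the hpx namespace, whose exact spelling may differ):

    // Sketch of N3558-style composition (proposal interface).
    future<int> f1 = async([]{ return 42; });
    future<int> f2 = async([]{ return 43; });

    // when_all yields a future that becomes ready once all inputs are ready;
    // its value is a tuple of the now-ready input futures.
    future<std::tuple<future<int>, future<int>>> all = when_all(f1, f2);

    auto readied = all.get();
    int a = std::get<0>(readied).get();   // 42
    int b = std::get<1>(readied).get();   // 43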
participants (5)
- Giovanni Piero Deretta
- Hartmut Kaiser
- Oliver Kowalke
- Valentin Milea
- Vicente J. Botet Escriba