[Async] Review of proposed Boost.Async begins
Dear Boost,

It is my great pleasure to announce the beginning of the peer review of proposed Boost.Async, which is hoped to become the most popular way of programming C++ coroutines in future C++. The peer review shall run between Tuesday 8th August and Friday 18th August.

As many of you have found, until now there has been no good, standard, easy way of writing C++ coroutine code that works with multiple third-party codebases and foreign asynchronous design patterns. ASIO's current coroutine support is tightly coupled to ASIO, and it rejects foreign awaitables. Libunifex is dense to master and non-obvious to make work well with foreign asynchronous design patterns, not helped by a lack of documentation, a problem which also afflicts stdexec, which likewise has a steep learning curve. CppCoro is probably the current best-in-class solution for easy fire-and-forget C++ coroutine programming, but it isn't particularly portable and was designed long before C++ coroutine standardisation was completed.

Klemens very kindly stepped up and created for us an open-source, ground-up reimplementation of the best ideas of hitherto proprietary commercial C++ coroutine libraries, taking in experience and feedback from multiple domain experts to create an easy-to-use C++ coroutine support library with excellent support for foreign asynchronous design patterns. For example, if you want your coroutine to suspend while a stackful fiber runs and then resume when that fiber has done something – AND that fiber implementation is unknown to Boost.Async – this is made as easy as possible. If your C++ coroutine code wants to await on unknown foreign awaitables, that generally "just works" with no added effort. This makes tying together multiple third-party dependencies into a solution based on C++ coroutines much easier than before.

The industry-standard executor is ASIO, so Boost.Async uses Boost.ASIO as its base executor. To keep things simple and performant, Boost.Async requires each pool of things it works with to execute on a single kernel thread at a time – though there can be a pool of things per kernel thread, or an ASIO strand can ensure execution never occurs on multiple kernel threads at once. The basic concurrency primitives of promise, generator and task are provided, with the fundamental operations of select, join and gather. Some more complex async support is supplied: Go-type channels, and async teardown. Performance is superior to both stackful coroutines and ASIO's own C++ coroutine support, sometimes far superior.

You can read the documentation at https://klemens.dev/async/ and study or try out the code at https://github.com/klemens-morgenstern/async.

Anyone with experience using C++ coroutines is welcome to contribute a review at the Boost mailing list (https://lists.boost.org/mailman/listinfo.cgi/boost), here at this Reddit page, via email to me personally, or any other mechanism where I, the review manager, will see it. In your review please state at the end whether you recommend acceptance, acceptance with conditions, or rejection. Please state your experience with C++ coroutines and ASIO, and how many hours you spent on the review.

Thanks in advance for your time and reviews!

Niall
Hi Niall,

I just found a small typo. In the TOC: "polymoprhic memory resource" -> "polymorphic memory resource".

Otherwise, I'll try it out!

Bye,
Georg
On Tue, 8 Aug 2023 at 07:33, Niall Douglas via Boost wrote:
<snip>
Is there a single-header version of this library? One that could be tested in Compiler Explorer? Regards, &rzej;
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On Sun, Aug 13, 2023, 1:53 AM Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
There is not and the library is also not header only.
On Sun, 13 Aug 2023 at 03:06, Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
There is not and the library is also not header only.
Oh, I see. Thanks.

It is a shame, though. Usually Boost candidates provide these. They are not useful for developing real apps, but they help in the review process.

During Boost reviews I often demo a library at my work. This is a way to (a) popularize a library and (b) obtain additional feedback. Due to various corporate restrictions, often the only way to do it is via Compiler Explorer.

Of course, this is not a requirement for a library author. You have already gone through a lot of effort to bring the library to a good shape. But maybe someone on the list can suggest how to make a live demo of a candidate library easily accessible online. I think Boost (as a platform for library authors) should provide a tool or instructions for making such live tests available.

Regards,
&rzej;
On Mon, Aug 14, 2023 at 9:46 AM Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
It is a shame, though. Usually Boost candidates provide these. They are not useful for developing real apps, but they help in the review process.
Consider opening an issue in the new documentation repository to make sure this gets mentioned: https://github.com/cppalliance/site-docs/issues Or, if you are feeling spicy you might submit a pull request to the corresponding docs :) Thanks
On Mon, 14 Aug 2023 at 19:44, Vinnie Falco wrote:
Thanks for the suggestion. Here's the issue report: https://github.com/cppalliance/site-docs/issues/130 Regards, &rzej;
On Tue, 8 Aug 2023 at 07:33, Niall Douglas via Boost
<snip>
My recommendation is to ACCEPT Boost.Async into Boost. More details about my decision can be found in another thread in this ML. The key points of my decision are:

1. It was straightforward to integrate Boost.Redis with Boost.Async. This is also shown in the report from Ruben Perez in this ML.

2. The docs are well written and read linearly. That is in part because they do not have to dig into material unrelated to coroutines, which is important for beginners.

3. The library has good defaults: single-threaded contexts, automatic installation of signal handlers, and a co_main entry point. This reduces the cognitive burden for beginners, who now won't be tempted into using multi-threaded contexts and strands. This is likely to result in better performance by default on anything that is IO-bound.

4. I did not measure this myself, but the docs claim it has faster channels, for example, by assuming a single-threaded context. I found this very surprising. I must say, channels are a very important element of any asynchronous programming. Any kind of improvement here is welcome.

5. It offers ways (async_ready) of skipping unnecessary suspensions. I believe this will open the door to countless optimization opportunities that avoid rescheduling on the Asio event loop, such as the one I mentioned in the other thread where async_read_until has enough bytes in the buffer and does not need to perform any IO.

6. It uses widespread names like select and join that are more easily recognized by people without an Asio background, and it offers asynchronous facilities (with) that aren't available in Asio.

Given all the points above, I am confident Boost.Async can become the right place for prototyping and starting new projects that want to experiment with coroutines. This is going to be important as high-level Asio libraries pop up in Boost and elsewhere.
One of my main concerns when I saw the start of the review was that Boost.Async hasn't had much adoption yet; in fact, its author has only just finished writing it, at least in a form that can be reviewed. However, it also does not seem right to me to let this library wait years before engaging in a review.

I am not making my acceptance conditional, but I would strongly recommend the author provide more examples of how to write your own awaitables. This seems to be one of the strongest arguments for this library, but I don't have experience with them and could neither appreciate this feature in the docs nor get a feeling for its real importance.

The last thing I would like to mention is that Klemens, the author of this proposed library, was the review manager of Boost.Redis, of which I am the author. The review manager should take this into consideration before issuing his final decision, since it might be seen as a conflict of interest. He is free to not take my review into consideration if he judges so.

Thanks again to Niall for managing this review and Klemens for submitting it.

Marcelo
Hi all,
This is my review of the proposed Boost.Async.
- Does this library bring real benefit to C++ developers for real-world
use cases?
I strongly believe so. There is no de-facto standard on C++20 I/O coroutines,
and this library can achieve this. It defines simple primitives that are
well-known in almost all other modern languages.
Writing async code using the universal Asio model is currently extremely
hard, time-consuming and error-prone. This library can lift this burden.
Coroutines are now in the standard, so it's likely that we will see a rise
in demand for this kind of library in the forthcoming years. This library
anticipates this and puts Boost in a good position regarding I/O.
- Do you have an application for this library?
I have already written a real-time chat application with it. I envision this
library as the seed of an async ecosystem for C++, similar to Javascript's
promises or Python's asyncio. I think most use cases will involve
high-performance web/networking server applications.
- Does the API match with current best practices?
It does for most high-level classes and utilities. This includes promise,
task, generator, select, gather, join, co_main, use_op and detached. They
model well-known concepts and are easy to use.
I think this library uses exceptions too much. For instance, channel::read()
and channel::write() will always throw an exception on cancellation. Considering
that select may cancel all but the first-to-complete awaitable, you may end up
having exceptions thrown and caught in your regular code flow, which IMO
is a design error. I think this must be addressed before acceptance.
Tracked by https://github.com/klemens-morgenstern/async/issues/76
Is exposing enable_await_allocator, enable_await_executor
and friends necessary? It increases the API surface of the library,
which makes evolving more difficult. These seem to be geared towards
implementing your own coroutine types. I'd place these in an experimental
namespace or hide them completely. If they are to be exposed,
we need more docs about this.
What's the rationale behind BOOST_ASYNC_USE_STD_PMR and friends? Being a
C++20 library, why is std::pmr not enough? From the Jamfile, the library is
built only with std::pmr by default. This looks like a source of problems -
you need to explain to users how to build your library with Boost.Container pmr
and without it. Config macros like these increase maintenance effort a lot.
I've also noticed that boost::async::this_coro::executor_t is an alias for
boost::asio::this_coro::executor_t. I think this is confusing - if
I got the implementation right, no functionality from Asio is used here,
but just the name. Can we not define an async::executor_t?
- Is the documentation helpful and clear?
The discussion is more or less clear for the high-level classes and utilities,
including promise, task, generator, select, gather, join and co_main.
use_op assumes too much Asio knowledge. It should at least mention that it's
a completion token, and link to Asio's page on those.
I'm missing a proper reference for some functions. While promise, task or
wait_group have these, select, gather and join only show examples, and not
the actual signatures of the functions.
The section on implementing hand-coded operations needs some expansion, too.
async::handler and async::completion_handler are not shown in the reference,
which they should be.
I've had a hard time navigating the docs, since they are single-page.
I'm a little bit concerned about the duplication between code and docs - having
docs generated from the source code makes sure that there are no
inconsistencies.
I think the current setup violates the DRY principle. Not having in-source
comments makes usage harder, too, since in modern IDEs it's common to go
to the entity's definition, and it's useful to have some comments there.
I almost missed the examples that are not included in the docs. I'd advocate
including all of them in the docs, since they add valuable information that is
easy to miss. The Python one needs more comments, BTW.
- Did you try to use it? What problems or surprises did you encounter?
I migrated a real-time chat application I'm writing from stackful coroutines
(i.e. boost::asio::spawn) to Boost.Async coroutines. In general, my experience
was positive, with some gotchas. I summarized my experience in this email:
https://lists.boost.org/Archives/boost/2023/08/254921.php
- What is your evaluation of the implementation?
I have only looked at parts of it. I like the part where promise, task and
friends inherit from a common set of classes to implement individual features
(enable_await_allocator, enable_await_executor and friends).
I feel I found quite a few bugs while testing. Concretely,
https://github.com/klemens-morgenstern/async/issues/68 made the library
unusable. The author has fixed most of them already (thanks Klemens).
I think more testing is required to cover cases like the one above.
I can't see any regression tests covering the different pmr B2 features
and BOOST_ASYNC_CUSTOM_EXECUTOR, either.
- Are you knowledgeable about the subject?
I've implemented generic, async code using Boost.Asio in the form of a
MySQL client (currently Boost.MySQL). I'm not an expert in C++20 coroutines,
but I know enough to understand the implementation. I've used
asio::yield_context and similar coroutines in other languages thoroughly.
- How much time did you spend evaluating the library?
I've built the library, read the source code and used it to implement a
web server. I've dedicated a little less than 20 hours to the full process.
- Please explicitly state that you either _accept_ or _reject_ the inclusion
of this library into boost.
I think Boost.Async should be _CONDITIONALLY ACCEPTED_ into Boost, provided that
the following is addressed:
- It should be possible to use generators without exceptions
(https://github.com/klemens-morgenstern/async/issues/76)
- All build errors and bugs reported during testing should be fixed and
proper regression tests added for each of them.
- BOOST_ASYNC_USE_STD_PMR macros are either removed or regression tested.
- enable_await_allocator, enable_await_executor and friends are either
hidden or expanded in the docs.
Disclaimer: both Klemens and I are associated with the C++ Alliance. I'm
writing this review trying to be as impartial as possible.
Thanks Klemens for submitting and Niall for managing the review.
Regards,
Ruben.
On Fri, Aug 18, 2023 at 5:59 AM Ruben Perez via Boost
Hi all,
This is my review of the proposed Boost.Async.
Thank you so much for taking the time to review my library!
What's the rationale behind BOOST_ASYNC_USE_STD_PMR and friends? Being a C++20 library, why is std::pmr not enough? From the Jamfile, the library is built only with std::pmr by default. This looks like a source of problems - you need to explain to users how to build your library with Boost.Container pmr and without it. Config macros like these increase maintenance effort a lot.
That's because clang only added pmr support in version 16, but has had decent coroutine support for several versions. Additionally, some people feel clang's quality has gone downhill, so they're stuck on clang-14.
I've also noticed that boost::async::this_coro::executor_t is an alias for boost::asio::this_coro::executor_t. I think this is confusing - if I got the implementation right, no functionality from Asio is used here, but just the name. Can we not define an async::executor_t?
It can be done rather easily. I thought it didn't hurt, and accidentally using asio::this_coro::executor would still work. These types are more like pseudo-keywords, so I didn't want to make them restrictive.
- Please explicitly state that you either _accept_ or _reject_ the inclusion of this library into boost.
I think Boost.Async should be _CONDITIONALLY ACCEPTED_ into Boost, provided that the following is addressed:
- It should be possible to use generators without exceptions (https://github.com/klemens-morgenstern/async/issues/76)
I assume you meant channels?
- All build errors and bugs reported during testing should be fixed and proper regression tests added for each of them. - BOOST_ASYNC_USE_STD_PMR macros are either removed or regression tested. - enable_await_allocator, enable_await_executor and friends are either hidden or expanded in the docs.
Disclaimer: both Klemens and I are associated with the C++ Alliance. I'm writing this review trying to be as impartial as possible.
Thanks Klemens for submitting and Niall for managing the review.
Thank you.
Regards, Ruben.
On Thu, Aug 17, 2023 at 5:25 PM Klemens Morgenstern via Boost < boost@lists.boost.org> wrote:
What's the rationale behind BOOST_ASYNC_USE_STD_PMR and friends?
I see that Boost.Async is using allocators in some way but I have not looked at the library or its documentation. Was this something that was shoved in "just because" or will I find some examples of how to use the library with allocators to achieve specific goals such as improving performance? A discussion of when custom allocators are helpful? Any sorts of benchmarks or list of tradeoffs? Some analysis on common patterns for how the library allocates memory and how to optimize it? I find the knee-jerk practice of "adding allocator support" to a library without rationale or rigor, simply to achieve a checkmark on a list of features, to be a poor engineering principle. Thanks
On Fri, Aug 18, 2023 at 10:08 PM Vinnie Falco
C++20 coroutines need to allocate their coroutine frame somehow. Since the library is single-threaded, it makes sense to optimize for that, and so async uses std::pmr (or boost::container::pmr for clang < 16) for this. In most cases it just provides a thread_local std::pmr::unsynchronized_pool_resource for the coroutine frame, and for async operations it uses a small monotonic_buffer for the associated allocator.

To visualize this, let's say you have a promise like this:

    promise<int> dummy() { co_return 42; }

Whenever you call dummy(), there's a non-zero chance your compiler will allocate its coroutine frame, because compiler optimizations cannot always elide it yet. Thus if you just write

    co_await dummy();
    co_await dummy();
    co_await dummy();

you may get three allocations and deallocations of the same size on the same thread. Using a thread_local resource helps minimize that and avoids locking. I did not want to roll my own thread_local solution like asio's awaitables do, which is why std::pmr was the obvious choice. Note that because it's async I cannot assume the allocations happen in a stack-like pattern, as asio does; I can just assume they're on one thread, for thread, main & run. For spawn no such resource is used, as one can spawn onto a strand. But a user might spawn onto a single-threaded io_context, in which case he can use an unsynchronized resource manually. I do not expect many customizations here, as async just does the right thing.

TL;DR: async is not really exposing allocator support; it uses std::pmr to optimize its own allocations for the single-threaded environment. Most users shouldn't ever need to touch this.
To Vinnie's point though, are there actual measurements demonstrating that using PMR here is a gain over not using it? Otherwise, we're essentially adding a conditional dependency on Container for no real gain which is a net negative for users. - Christian
participants (8)
- Andrzej Krzemienski
- Christian Mazakas
- Georg Gast
- Klemens Morgenstern
- Marcelo Zimbres Silva
- Niall Douglas
- Ruben Perez
- Vinnie Falco