Re: [boost] [gsoc-2013] Boost.Thread/ThreadPool project
On 29/04/13 21:35, Niall Douglas wrote:
No major changes to the C++11 futures API are needed. Rather, a different but compatible, optimized implementation of future<> gets returned when used from a central asynchronous dispatcher. So the C++ standard remains in force, only the underlying implementation is more optimal.
Could you explain to me what a central asynchronous dispatcher is?
An asynchronous procedure call implementation. Here's Microsoft's: http://msdn.microsoft.com/en-us/library/windows/desktop/ms681951(v=vs.85).aspx. Here's QNX's: http://www.qnx.com/developers/articles/article_870_1.html. Boost.ASIO is the same thing for Boost and C++ but implemented by application code. It is *not* like asynchronous POSIX signals, which are a nearly useless subset of APCs.
If not, could you please elaborate on what kind of optimizations can be obtained?
If you have transactional memory in particular, you gain multi-CAS and the ability to (un)lock large ranges of CAS locks atomically, and a central dispatcher design can create batch lists of threading primitive operations and execute the batch at once as a transaction. Without TM, you really need the kernel to provide a batch syscall for threading primitives to see large performance gains.
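Without TM, the closest stdlib analogue to "(un)locking a large range of locks atomically" is deadlock-free multi-lock acquisition. A minimal sketch of the batching idea, using only the standard library (the three-lock batch and all names here are illustrative, not AFIO's code; a TM-capable dispatcher could instead commit such a batch as a single transaction):

```cpp
#include <mutex>

// Three resources, each guarded by its own lock (illustrative names).
std::mutex lock_a, lock_b, lock_c;
int shared_total = 0;

// Acquire the whole range of locks as one deadlock-free batch, update
// the shared state, and release them all together as the scope unwinds.
int run_batched_update() {
    std::scoped_lock batch(lock_a, lock_b, lock_c);  // C++17; std::lock() pre-17
    shared_total += 3;  // the "batched" critical section
    return shared_total;
}
```

std::scoped_lock/std::lock only give deadlock-freedom, not the single-transaction commit TM would provide, which is why a batch syscall or TM is needed for the larger gains described above.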
Futures come in because you'd naturally use std::packaged_task<> with Boost.ASIO. It returns a future. Could you point me to the Networking paper proposal that has packaged_task<> on its user interface? I would expect this to be an implementation detail.
For reference, the AFSIO/AFIO project is *not* a threadpool. It's a batch asynchronous execution engine based on Boost.ASIO that lets you chain, in vast numbers, huge arrays of std::function<> whose returns are fetched using std::future<>, to be executed asynchronously according to specified dependencies, e.g. if A and B and C, then D, then E-G. That sort of thing.
You misunderstand me. *If* you want Boost.ASIO to dispatch unknown std::function<>, *then* std::packaged_task<> is the most obvious route forwards. And the correct way to pass a value from one execution context to another is std::future<>. This includes the single threaded case, e.g. when going through a C++ -> C -> C++ transition as a stack unwinds.
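To illustrate the packaged_task/future pattern described above, here is a sketch with a hand-rolled queue standing in for boost::asio::io_service (the names post_task, run and work_queue are illustrative, not ASIO's API). A std::packaged_task wraps an arbitrary std::function-style callable; the caller keeps the future, and the dispatcher runs the task later, possibly on the very same thread:

```cpp
#include <deque>
#include <functional>
#include <future>
#include <memory>

// Stand-in for an io_service-style dispatcher queue.
std::deque<std::function<void()>> work_queue;

// Wrap the callable in a packaged_task, enqueue it, hand back the future.
std::future<int> post_task(std::function<int()> f) {
    auto task = std::make_shared<std::packaged_task<int()>>(std::move(f));
    std::future<int> result = task->get_future();
    work_queue.push_back([task] { (*task)(); });
    return result;
}

// Stand-in for io_service::run(): drain the queue on the calling thread.
void run() {
    while (!work_queue.empty()) {
        auto work = std::move(work_queue.front());
        work_queue.pop_front();
        work();
    }
}
```

Note that the future transports the value (or exception) back even though everything here happens on one thread, which is exactly the single-threaded case mentioned above.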
So the thread pool you need is an internal one that is adapted to your particular needs, isn't it?
I think you don't understand what Boost.ASIO is or how it works. Boost.ASIO's core is boost::asio::io_service. That is its dispatcher implementation, with each dispatch execution context being executed via boost::asio::io_service::run(), which is effectively an event loop. Third parties then enqueue items to be dispatched using boost::asio::io_service::post(). You don't have to run Boost.ASIO using multiple threads: it can be single threaded. So is my thread pool an internal one adapted to my particular needs? In the sense that I need not use threads at all, yes. In the sense that boost::asio::io_service's use scenario is inviolate, no. Boost.ASIO has its API, and you have to use it. And Boost.ASIO's API will be similar to the TR2 networking API. There is nothing stopping a person merging a Boost.ASIO managed thread pool with a traditional thread pool. I would struggle to see the use case though - this would be an excellent thought project actually, one which would benefit any Boost.ThreadPool.
One thing presently implemented on that engine is asynchronous file i/o, but in the next month or two you'll hopefully see batch parallel SHA256, using 4-SHA256 SSE2 and NEON implementations, also added to the asynchronous engine. The idea is that the engine is fairly generic for anywhere you need to chain lots of coroutine-type items together (not that it supports Boost.Coroutine yet). v1 isn't particularly generic nor optimal, but I'm hoping with feedback from Boost that v2 in a few years' time will be much improved.
As Oliver noted you could take a look at Boost.Fiber and Boost.Task.
Right now anything Boost.Context based can't have multiple contexts simultaneously entering the kernel, apart from on QNX and possibly Hurd. Therefore, for the time being, full fat threads are the only thing being considered. If kernel support ever includes coroutines or fibres, that will be eagerly added (I like coroutines, I use them a lot in Python). Note that on async i/o capable platforms, multiple threads are used solely for non-async APIs like batch directory creation and batch chmod. Normal i/o gets multiplexed using completion handlers. Currently on Windows, for example, threads are barely used at all.

Niall

---
Opinions expressed here are my own and do not necessarily represent those of BlackBerry Inc.
2013/4/30 Niall Douglas
As Oliver noted you could take a look at Boost.Fiber and Boost.Task.
Right now anything Boost.Context based can't have multiple contexts simultaneously entering the kernel apart from on QNX and possibly Hurd. Therefore, for the time being, full fat threads are the only thing being considered. If kernel support ever includes coroutines or fibres, that will be eagerly added (I like coroutines, I use them a lot on Python).
The idea of coroutines/fibers is that the context switch does not involve the kernel - usually a context switch with coroutines on x86 takes ca. 40 cycles, while switches between threads (which require syscalls into the kernel) take ca. 1000 CPU cycles.
On 30/04/13 17:16, Niall Douglas wrote:
No major changes to the C++11 futures API are needed. Rather, a different but compatible, optimized implementation of future<> gets returned when used from a central asynchronous dispatcher. So the C++ standard remains in force, only the underlying implementation is more optimal.
Could you explain to me what a central asynchronous dispatcher is?
An asynchronous procedure call implementation. Here's Microsoft's: http://msdn.microsoft.com/en-us/library/windows/desktop/ms681951(v=vs.85).aspx. Here's QNX's: http://www.qnx.com/developers/articles/article_870_1.html. Boost.ASIO is the same thing for Boost and C++ but implemented by application code. It is *not* like asynchronous POSIX signals, which are a nearly useless subset of APCs.
I don't see the term "central asynchronous dispatcher" used in any of the links. Could you clarify what it is?
If not, please could you elaborate what kind of optimizations can be obtained? If you have transactional memory in particular, you gain multi-CAS and the ability to (un)lock large ranges of CAS locks atomically, and a central dispatcher design can create batch lists of threading primitive operations and execute the batch at once as a transaction. Without TM, you really need the kernel to provide a batch syscall for threading primitives to see large performance gains. I'm really lost.
Futures come in because you'd naturally use std::packaged_task<> with Boost.ASIO. It returns a future. Could you point me to the Networking paper proposal that has packaged_task<> on its user interface. I would expect this to be an implementation detail. You misunderstand me. *If* you want Boost.ASIO to dispatch unknown std::function<>, *then* std::packaged_task<> is the most obvious route forwards.
And the correct way to pass a value from one execution context to another is std::future<>. This includes the single threaded case, e.g. when going through a C++ -> C -> C++ transition as a stack unwinds. This is the way a third-party library can do it using the external std interfaces, but a C++1y proposal could define an interface that cannot be implemented using the external future<> interface, and let the library implementor use some internals of his future<> implementation.
For reference, the AFSIO/AFIO project is *not* a threadpool. It's a batch asynchronous execution engine based on Boost.ASIO that lets you chain, in vast numbers, huge arrays of std::function<> whose returns are fetched using std::future<>, to be executed asynchronously according to specified dependencies, e.g. if A and B and C, then D, then E-G. That sort of thing. So the thread pool you need is an internal one that is adapted to your particular needs, isn't it? I think you don't understand what Boost.ASIO is or how it works. I confirm. Boost.ASIO's core is boost::asio::io_service. That is its dispatcher implementation, with each dispatch execution context being executed via boost::asio::io_service::run(), which is effectively an event loop. Third parties then enqueue items to be dispatched using boost::asio::io_service::post(). You don't have to run Boost.ASIO using multiple threads: it can be single threaded.
So is my thread pool an internal one adapted to my particular needs? In the sense that I need not use threads at all, yes. In the sense that boost::asio::io_service's use scenario is inviolate, no. Boost.ASIO has its API, and you have to use it. And Boost.ASIO's API will be similar to the TR2 networking API.
There is nothing stopping a person merging a Boost.ASIO managed thread pool with a traditional thread pool. I would struggle to see the use case though - this would be an excellent thought project actually, one which would benefit any Boost.ThreadPool.
I could not comment until I understand what Boost.ASIO provides and how it can interact with thread pools :(

Best,
Vicente
participants (3)
- Niall Douglas
- Oliver Kowalke
- Vicente J. Botet Escriba