Re: [boost] [gsoc-2013] Boost.Thread/ThreadPool project
On 01/05/13 20:39, Niall Douglas wrote:
I don't see the term "central asynchronous dispatcher" used in any of the links. Could you clarify what it is? I have to admit I'm struggling to see where your block is, but Dave Abrahams often found the same with me, so it must be me. It is just that I don't know the term. As I don't know the domain (the words) you are talking about, we can stop the exchange here if you prefer, or you could continue explaining to me the things I don't know/understand.
Don't worry, it's almost certainly me, as native English speakers rarely understand me either. Besides, from when I lived in Spain I remember only too well the problem with domain terminology, like when a dentist told me I had a "del gasto", which I took to mean he wanted money because his rent was high :). I'll keep going as long as you need. We'll get there eventually.
The "central" means one execution context does the dispatch. The "asynchronous" means that callbacks are processed by recipients asynchronously to the dispatcher. And a dispatcher, well, dispatches. To be sure I understand what you are saying, could you tell me how a decentralized asynchronous dispatcher behaves? What are the differences?
A centralized dispatcher dispatches from one execution context to many. A decentralized dispatcher dispatches from many execution contexts to one or more contexts. An example of the latter is cloud apps, actually, where millions of apps will dispatch events to a load-balanced core cluster of servers, i.e. many => few. By execution context I mean a thread, a fibre, a coroutine, a handler or even a closure. Generally, if it's got its own distinct stack separate from other stacks, it's an execution context.
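To make the "one context does all the dispatch" shape concrete, here is a minimal sketch in plain C++11 (a hypothetical illustration, not code from any Boost library; the names central_dispatcher and dispatch_sum are invented for this example). Many producer contexts post work, but callbacks are always invoked from exactly one dispatching thread:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical centralized dispatcher: any number of execution
// contexts may post() work, but a single worker thread performs
// every dispatch.
class central_dispatcher {
    std::queue<std::function<void()>> queue_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool stopping_ = false;
    std::thread worker_;  // declared last so the other members exist first
public:
    central_dispatcher() : worker_([this] { run(); }) {}
    ~central_dispatcher() {
        { std::lock_guard<std::mutex> g(mtx_); stopping_ = true; }
        cv_.notify_one();
        worker_.join();  // drains remaining work before returning
    }
    // Callable from many execution contexts (the "many" side).
    void post(std::function<void()> fn) {
        { std::lock_guard<std::mutex> g(mtx_); queue_.push(std::move(fn)); }
        cv_.notify_one();
    }
private:
    void run() {  // the single dispatching context (the "one" side)
        std::unique_lock<std::mutex> lk(mtx_);
        for (;;) {
            cv_.wait(lk, [this] { return stopping_ || !queue_.empty(); });
            if (queue_.empty()) return;  // stopping and fully drained
            auto fn = std::move(queue_.front());
            queue_.pop();
            lk.unlock();
            fn();  // dispatch happens in exactly one context
            lk.lock();
        }
    }
};

// Post n closures and let the dispatcher sum 1..n.
int dispatch_sum(int n) {
    std::atomic<int> sum{0};
    {
        central_dispatcher d;
        for (int i = 1; i <= n; ++i)
            d.post([&sum, i] { sum += i; });
    }  // destructor drains the queue, then joins the worker
    return sum.load();
}
```

A decentralized dispatcher would invert this shape: many contexts would each run their own dispatch loop, funnelling into one or a few recipients.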
I'd take a reasonable guess that MCAS (multi-word compare-and-swap) will benefit architectures with centralized kernel-based threading primitives, such as Windows NT, more than architectures with decentralized threading primitives, such as POSIX. That said, I could see Linux using TM for batched multi-kernel-futex operations, though I'd have to say implementing that without breakage would be daunting.
Are you saying that current TM technology should be much more useful for centralized than decentralized asynchronous dispatchers?
I'm saying my suspicion is that. Others on SG5 may disagree (they disagree with me about most of my opinions). It's still very early days for TM in C++.
Does this mean that the whole application would improve its performance by always using centralized asynchronous dispatchers?
I'm saying it's a likely optimization case. Other than a recompile, the application need not know the difference.
You may find N3562, "Executors and schedulers", co-proposed in March 2013 for Bristol by Google and Microsoft, of use in understanding the relation between thread pools and Boost.ASIO (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3562.pdf). I know this proposal. Could you explain to me how it is related to Boost.Asio? Could you point me to where in the Asio documentation I could find related information?
I think your other post is more relevant, so I'll answer there. But agreed that Asio's documentation assumes the user already understands how it works. If you come from an NT kernel, VMS or libuv programming background, it's obvious, but much less so for everyone else.
My proposed Boost.AFIO library is an implementation of that same N3562 idea, albeit one that extends Boost.ASIO, which is IMHO more C++-y, whereas Google and Microsoft have gone with proprietary implementations. They also include timing and many other general-purpose features, and mine does not (currently), as it is mainly aimed at maximising input/output to storage with highly jittery, random latency.
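The connection between N3562 and Asio is easiest to see from the executor shape itself. The sketch below is my loose reading of the paper, not its actual text: N3562's core abstraction is an executor whose one essential operation submits a closure for later execution, and its loop_executor variant queues closures until the owner runs them, which is the same post-then-run pattern as boost::asio::io_service::post() and io_service::run(). Implementation details here are guesses for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <queue>

// Loose sketch of the N3562 executor concept: submit a closure now,
// execute it later, in whatever context the executor owns.
class executor {
public:
    virtual ~executor() = default;
    virtual void add(std::function<void()> closure) = 0;
    virtual std::size_t num_pending_closures() const = 0;
};

// Single-threaded queueing executor, loosely modelled on the paper's
// loop_executor. Closures accumulate until loop() drains them --
// the analogue of io_service::post() followed by io_service::run().
class loop_executor : public executor {
    std::queue<std::function<void()>> pending_;
public:
    void add(std::function<void()> closure) override {
        pending_.push(std::move(closure));
    }
    std::size_t num_pending_closures() const override {
        return pending_.size();
    }
    void loop() {  // drain and invoke every queued closure, in order
        while (!pending_.empty()) {
            auto fn = std::move(pending_.front());
            pending_.pop();
            fn();
        }
    }
};
```

Seen this way, an Asio io_service already is an executor in the N3562 sense; the proposal mostly standardizes the interface around that existing pattern.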
Thanks Niall for all the explanations and sorry if all this is trivial for you.
These types of discussion always add to the collective good.

Niall

---
Opinions expressed here are my own and do not necessarily represent those of BlackBerry Inc.
participants (1)
- Niall Douglas