What's so cool about Boost.MPI?
Just wanted to call some attention to this article I wrote about Boost.MPI:
http://daveabrahams.com/2010/09/03/whats-so-cool-about-boost-mpi/
I think it might be useful to link that from the Boost.MPI docs, too.
--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
On 18/10/10 23:41, David Abrahams wrote:
Just wanted to call some attention to this article I wrote about Boost.MPI:
http://daveabrahams.com/2010/09/03/whats-so-cool-about-boost-mpi/
I think it might be useful to link that from the Boost.MPI docs, too.
While the library seems nice (it bridges serialization with MPI), I don't really see anything in there that makes me think it's "so cool". So after reading it, I felt a bit like the question "What's so cool about Boost.MPI?" remained unanswered.
Hi Mathias,
On Tue, Oct 19, 2010 at 21:59, Mathias Gaunard wrote:
http://daveabrahams.com/2010/09/03/whats-so-cool-about-boost-mpi/
I think it might be useful to link that from the Boost.MPI docs, too.
While the library seems nice (it bridges serialization with MPI), I don't really see anything in there that makes me think it's "so cool".
So after reading it, I felt a bit like the question "What's so cool about Boost.MPI?" remained unanswered.
My suggestion is that, rather than asking this question, you use MPI as you normally would, and if you reach a point where something feels tedious, then see if having Boost.MPI as an easier interface will help you. Basically, I think it is a matter of "taste", and there's no reason to jump into Boost.MPI. In the end, you might end up feeling perfectly comfortable with MPI as-is, and that's OK. I don't think it is something that is for everyone...
Ray
On 19-10-2010 14:59, Mathias Gaunard wrote:
On 18/10/10 23:41, David Abrahams wrote:
Just wanted to call some attention to this article I wrote about Boost.MPI:
http://daveabrahams.com/2010/09/03/whats-so-cool-about-boost-mpi/
I think it might be useful to link that from the Boost.MPI docs, too.
While the library seems nice (it bridges serialization with MPI), I don't really see anything in there that makes me think it's "so cool".
So after reading it, I felt a bit like the question "What's so cool about Boost.MPI?" remained unanswered.
The semi-automatic, but very easy to maintain, way of creating/maintaining the MPI type maps (via Boost.Serialization), IIUC. -Thorsten
At Tue, 19 Oct 2010 16:26:04 +0200, Thorsten Ottosen wrote:
On 19-10-2010 14:59, Mathias Gaunard wrote:
On 18/10/10 23:41, David Abrahams wrote:
Just wanted to call some attention to this article I wrote about Boost.MPI:
http://daveabrahams.com/2010/09/03/whats-so-cool-about-boost-mpi/
I think it might be useful to link that from the Boost.MPI docs, too.
While the library seems nice (it bridges serialization with MPI), I don't really see anything in there that makes me think it's "so cool".
So after reading it, I felt a bit like the question "What's so cool about Boost.MPI?" remained unanswered.
The semi-automatic, but very easy to maintain, way of creating/maintaining the MPI type maps (via Boost.Serialization), IIUC.
Which yields huge performance benefits and makes practical some computations that might otherwise not be, due to resource constraints and the difficulty of generating type maps for arbitrary data structures. Do I need to be more explicit about that, or is it simply "not that cool"?
--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
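For concreteness, here is a minimal sketch of how a type map comes from serialization; the Particle type and its fields are hypothetical, but serialize() and BOOST_IS_MPI_DATATYPE are the actual Boost.Serialization/Boost.MPI hooks:

    #include <boost/mpi.hpp>

    // Hypothetical fixed-layout type used for illustration.
    struct Particle {
        double x, y, z;
        int id;

        // One function drives both serialization and the MPI type map.
        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/) {
            ar & x & y & z & id;
        }
    };

    // Declares that Particle has a fixed layout, so Boost.MPI builds a
    // native MPI datatype from serialize() once and then sends instances
    // without any packing or copying.
    BOOST_IS_MPI_DATATYPE(Particle)

    int main(int argc, char* argv[]) {
        boost::mpi::environment env(argc, argv);
        boost::mpi::communicator world;   // run with at least 2 processes
        if (world.rank() == 0) {
            Particle p = {1.0, 2.0, 3.0, 42};
            world.send(1, 0, p);          // destination rank 1, tag 0
        } else if (world.rank() == 1) {
            Particle p;
            world.recv(0, 0, p);          // matching source 0, tag 0
        }
    }

If Particle later grows a field, only the serialize() line changes and the type map follows automatically; that is the maintainability point Thorsten is making.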
On 10/19/2010 05:59 AM, Mathias Gaunard wrote:
On 18/10/10 23:41, David Abrahams wrote:
Just wanted to call some attention to this article I wrote about Boost.MPI:
http://daveabrahams.com/2010/09/03/whats-so-cool-about-boost-mpi/
I think it might be useful to link that from the Boost.MPI docs, too.
While the library seems nice (it bridges serialization with MPI), I don't really see anything in there that makes me think it's "so cool".
So after reading it, I felt a bit like the question "What's so cool about Boost.MPI?" remained unanswered.
I didn't feel that way! What I got was that MPI, while VERY cool in itself, is a bit of a pain to use with data structures, and if you change the data structure you have to go through the pain all over again. Then comes Captain Caveman! (JK.) Boost.MPI to the rescue: it automates all that, so that 1) you don't have the pain of getting MPI to work with your data structures, and 2) you don't have the pain of someone later changing the data structure, everything breaking, and someone having to figure out why. Then, to add meta-coolness (aka elegance), it does it in an optimal way. Sick!
Patrick
Hi All, I am using Boost.Asio to do secure socket communication (asio/ssl). My code compiles and works fine on Windows. On Mac, even after linking with "libcrypto.dylib" and "libssl.0.9.8.dylib", I am getting the following link errors: "_SSL_library_init" and "_SSL_load_error_strings", referenced from: boost::asio::ssl::detail::openssl_init<true>::do_init::do_init() in ... Am I supposed to link any other library? Am I missing something? System details: Mac OS X version 10.6.4, Xcode version 3.2.3.
Thanks, Akhilesh
Can somebody explain to me what is so great about MPI or Boost.MPI? When I read about it, it seems to provide the same functionality as read/write from/to a file descriptor/socket. I really don't need a library for things like that.
"David Abrahams"
Just wanted to call some attention to this article I wrote about Boost.MPI:
http://daveabrahams.com/2010/09/03/whats-so-cool-about-boost-mpi/
I think it might be useful to link that from the Boost.MPI docs, too.
--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
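Riccardo's answer below covers this in depth, but one concrete difference from raw sockets is worth a sketch: MPI's collective operations. A minimal example (the per-rank values are made up) of a parallel sum with boost::mpi::reduce, which with plain sockets you would have to hand-roll as a fan-in protocol:

    #include <boost/mpi.hpp>
    #include <functional>   // std::plus
    #include <iostream>

    int main(int argc, char* argv[]) {
        boost::mpi::environment env(argc, argv);
        boost::mpi::communicator world;

        double local = 1.0 + world.rank();   // hypothetical per-process result
        double total = 0.0;

        // Combines every rank's value at root rank 0 with the given operation;
        // the MPI implementation picks an efficient (typically tree-based) plan.
        boost::mpi::reduce(world, local, total, std::plus<double>(), 0);

        if (world.rank() == 0)
            std::cout << "sum over " << world.size() << " ranks: " << total << "\n";
    }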
On Fri, Nov 5, 2010 at 3:51 PM, Peter Foelsche wrote:
Can somebody explain to me, what is so great about mpi or boost/mpi?
Daunting task, but I'll give it a try :-)
MPI is a library interface[1] to ease some common communication patterns in scientific/HPC software. In particular, MPI provides library functions for:
1. Reliable delivery of messages between two concurrent processes; there are functions for both synchronous and asynchronous communication.
2. Collective communication among the running processes: e.g., one process broadcasting a message to all others; all processes broadcasting to all; scattering a large array across all running processes, etc.
3. "Remote shared memory": a process can expose a memory buffer that all others can directly read/write.
4. Parallel I/O, where many processes access (possibly interleaved) portions of a file independently.
These are non-trivial features to implement on top of a standard TCP socket interface, and MPI implementations are mostly focused on performance: they can usually take advantage of special hardware (e.g., Infiniband or Myrinet interconnects) and use the native protocols (instead of TCP/IP) to provide faster communication.
A "message" in usual socket programming is just a flat array of bytes. A "message" in MPI is a (potentially non-contiguous) portion of a C or FORTRAN data structure; MPI can handle C structures, multi-dimensional C-style arrays, and multi-dimensional arrays of C structures (containing multi-dimensional arrays, etc.). However, since MPI is implemented as a library, you have to describe to MPI the in-memory layout of the data structures you want to send as a message, thus "duplicating" work that is done by the compiler. You can imagine that this quickly gets tedious and unwieldy once you start having non-trivial data structures, possibly with members of unknown length (e.g., a list).
This is where Boost.MPI comes to the rescue: if you can provide Boost.Serialization support for your classes, you can send them as MPI messages with no extra effort: no need for lengthy calls to MPI_Type_create_struct() etc. And it does this with minimal or even no overhead! (Plus you also get a nice C++ interface to MPI functionality, whereas the MPI C calls are quite low-level.)
.. [1] MPI proper is a specification; there are several implementations of the spec (e.g., OpenMPI, MPICH, MVAPICH, plus many vendor/proprietary ones), but they are all compatible at the source level.
Best regards,
Riccardo
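To make the type-map tedium concrete, compare the two approaches on a small example; the Sample type is hypothetical, but the calls are the standard MPI and Boost.MPI/Boost.Serialization APIs. First, raw MPI, where the struct's layout is described by hand:

    // Raw MPI: the layout must be described call by call, and kept in
    // sync with the struct definition forever after.
    #include <mpi.h>
    #include <cstddef>   // offsetof

    struct Sample { double coords[3]; int id; };

    MPI_Datatype make_sample_type() {
        int          blocklens[2] = {3, 1};
        MPI_Aint     displs[2]    = { offsetof(Sample, coords),
                                      offsetof(Sample, id) };
        MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_INT };
        MPI_Datatype sample_type;
        MPI_Type_create_struct(2, blocklens, displs, types, &sample_type);
        MPI_Type_commit(&sample_type);
        return sample_type;
    }

And a sketch of the Boost.MPI side, where serialize() is the layout description and a member of unknown length (Riccardo's std::list case) needs no special handling:

    // Boost.MPI: the same Sample, now with a variable-length member.
    #include <boost/mpi.hpp>
    #include <boost/serialization/list.hpp>   // archive support for std::list
    #include <list>

    struct Sample {
        double coords[3];
        int id;
        std::list<double> history;            // unknown length: fine

        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/) {
            ar & coords & id & history;
        }
    };
    // world.send(1, 0, s) / world.recv(0, 0, s) now work unchanged.

Such code is typically built with mpic++ and linked against the boost_mpi and boost_serialization libraries.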
On 5 Nov 2010, at 20:04, Riccardo Murri wrote:
[full quote of Riccardo's summary snipped; see above]
Nice summary! Matthias
On Sat, Nov 6, 2010 at 3:37 AM, Matthias Troyer wrote:
Nice summary!
Agreed. Can anyone suggest changes to my article that would obviate the need for a second explanation?
--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
On Sun, Nov 7, 2010 at 1:47 AM, Dave Abrahams wrote:
On Sat, Nov 6, 2010 at 3:37 AM, Matthias Troyer wrote:
Nice summary!
Agreed. Can anyone suggest changes to my article that would obviate the need for a second explanation?
A backgrounder on Beowulf clusters, parallel programming, and high-performance computing could frame Boost.MPI and MPI better. It's sad that not everyone knows how to program clusters of workstations to turn them into parallel supercomputers. An article like this is a good way of introducing the "art" of massively parallel computing and the importance of high-performance computing at large. That would be my suggestion, at least, as a one-time heavy user of MPI and later of Boost.MPI for solving large problems.
--
Dean Michael Berris
deanberris.com
Thank you, Riccardo. Your post was very helpful. Regards/AL
On 11/6/2010 12:04 AM, Riccardo Murri wrote:
On Fri, Nov 5, 2010 at 3:51 PM, Peter Foelsche wrote:
Can somebody explain to me, what is so great about mpi or boost/mpi?
Daunting task, but I'll give it a try :-)
[rest of the summary snipped]
participants (12)
- alapex0310@gmail.com
- Dave Abrahams
- David Abrahams
- Dean Michael Berris
- Kumar, Akhilesh
- Mathias Gaunard
- Matthias Troyer
- Patrick Horgan
- Peter Foelsche
- Raymond Wan
- Riccardo Murri
- Thorsten Ottosen