Hi Julio -
I may be completely wrong, but I was under the impression that when
a send call happens, serialization magic occurs that builds an
MPI_Datatype, and that by then handing the data to MPI_Send etc.
we avoid an extra copy?
But perhaps that won't work in my case. I doubt that MPI_Recv is
capable of rebuilding a complex hierarchy, including pointers,
using operator new, etc. Perhaps you have to have a fully
instantiated object of the same kind in order to use this
functionality with MPI_Recv?
I have a virtual message hierarchy, and the messages (or shared_ptrs
of messages) perform virtual dispatch upon being recv'd. Is there
anything performance-wise to be gained by using boost::mpi for
send/recv/broadcast? Or is the MPI_Datatype performance gain only
applicable to classes that have (perhaps complex, but) concrete layout
with object instantiation on the stack?
It seems that if I can't get the MPI_Datatype benefit for my types, I
may be better off maintaining my own buffers for serialization, so I
can potentially lower the number of memory allocations.
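To illustrate the own-buffer idea, here is a minimal sketch of packing into a reusable byte buffer (the Message type and pack/unpack helpers are hypothetical, just to show the shape; clearing the vector between messages keeps its allocation, and the resulting bytes could then go through MPI_Send as MPI_BYTE):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical trivially-copyable message; illustrative only.
struct Message {
    std::int32_t id;
    double payload;
};

// Append the raw bytes of a trivially-copyable value to a reusable buffer.
template <typename T>
void pack(std::vector<char>& buf, const T& v) {
    const char* p = reinterpret_cast<const char*>(&v);
    buf.insert(buf.end(), p, p + sizeof(T));
}

// Read a value back out of the buffer, advancing the offset.
template <typename T>
T unpack(const std::vector<char>& buf, std::size_t& off) {
    T v;
    std::memcpy(&v, buf.data() + off, sizeof(T));
    off += sizeof(T);
    return v;
}
```

Between messages, buf.clear() resets the size but keeps the capacity, so repeated sends reuse the same allocation.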
Thanks,
Brian
On Thu, Sep 6, 2012 at 3:29 AM, Júlio Hoffimann wrote:
Brian,
You can think of Boost.MPI as a very well-designed wrapper. All it does is call the underlying C implementation (Open MPI, MPICH, and others) when the types are covered by the MPI standard.
On the other hand, I agree with you: maybe it would be possible to specialize a template for std::vector<T> that handles it as a raw buffer. Does anyone have an opinion about this?
When I have time, I'll think it through carefully and see if I can contribute a patch.
Regards, Júlio.
2012/9/6 Brian Budge
Okay. I can do that. I was just wondering if there was a trick to make it happen under the hood. I'm curious as to why Bcast doesn't get called by boost::mpi::broadcast for non-trivial types.
Thanks, Brian
On Wed, Sep 5, 2012 at 7:00 PM, Júlio Hoffimann wrote:
Hi Brian,
If I understood correctly, you're actually doing something like:

std::vector<char> gigaVec;
// ... fill gigaVec ...
MPI_Bcast(&gigaVec[0], gigaVec.size(), MPI_CHAR, root, comm);
and want to replace that by boost::mpi::broadcast, is that correct?
Just do it the same way; if the element type of the container is an MPI type, you're guaranteed that the underlying MPI implementation will be called.
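For example, a minimal sketch of the Boost.MPI equivalent might look like the following (this assumes an MPI environment, i.e. it must be run under mpirun, and uses the pointer/count overload of boost::mpi::broadcast; the buffer size and root rank are placeholders):

```cpp
#include <boost/mpi.hpp>
#include <vector>

int main(int argc, char* argv[]) {
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;

    // std::vector<char>'s element type maps to an MPI datatype,
    // so this can forward to MPI_Bcast underneath.
    std::vector<char> gigaVec(1 << 20);  // placeholder size
    boost::mpi::broadcast(world, &gigaVec[0], gigaVec.size(), 0);
    return 0;
}
```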
Regards, Júlio.
_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users