Hi all,
Thanks a lot for your suggestions. It made a lot of things clearer. My proposal was to add advanced matrix operations that were missing from uBLAS. From the discussions it seems that it isn't a great idea to reimplement everything in uBLAS itself, as there can be other options giving much better performance.
@karen - I am not specifically interested in machine learning. I want to contribute to uBLAS to improve its performance and usability in applications.
@david - The discussions led me to think that adding an OpenBLAS/LAPACK backend for advanced matrix operations could be a really good idea. It would boost performance and make some hardware acceleration possible (depending on the BLAS implementation). It would be nice to have some interface with them. Could this be a potential GSoC project anyone would like to mentor?
Regards
Shikhar Srivastava
On 22-Jan-2018 10:59 PM, "Karen Shaeffer via Boost" wrote:
On Mon, Jan 22, 2018 at 11:16:55AM +0100, Hans Dembinski via Boost wrote:
Hi David, Rajaditya, Artyom, Shikhar,
On 21. Jan 2018, at 13:00, David Bellot via Boost <boost@lists.boost.org> wrote:
- yes, uBLAS is slow. Not only does it need support for hardware acceleration, but it also needs to be more modern (C++11/14/17), and its architecture could be much improved. This would help with integrating hardware acceleration, IMHO.
I am surprised that no one here mentioned Eigen, an IMHO excellent high-level header-only C++ library to do matrix-vector calculations.
http://eigen.tuxfamily.org/index.php?title=Main_Page
It uses expression templates to re-structure the expressions written by users to fast optimised code. It supports both matrices with dimensions known at compile-time and at run-time, and supports mixed versions, e.g. one dimension fixed the other dynamic.
Hello, TensorFlow uses Eigen.
If Shikhar is most interested in machine learning libraries, TensorFlow might have some GSoC projects.
Karen.
According to their own (admittedly somewhat outdated) benchmarks, they are extremely competitive, either on par with or even beating $$$ closed-source libraries.
http://eigen.tuxfamily.org/index.php?title=Benchmark
Eigen tracks its own performance, so you don't need to worry about it
getting slower, but perhaps the others have gotten faster in the meantime.
http://eigen.tuxfamily.org/index.php?title=Performance_monitoring
I have used the library in projects and it is very convenient. You just write your expressions as you would on paper and let Eigen generate the fastest possible code from them. The only thing I dislike is that Eigen does not play well with C++11's "auto", because they use the assignment to a matrix or vector to trigger the evaluation of the expression template.
I haven't done it myself, but Eigen seems to have a good strategy for
adding new features, too. People can start to develop their stuff in an "unsupported" subdirectory, which is not meant for production, but the code is nevertheless distributed with Eigen. This helps to expose the experimental code to more adventurous users. Eventually, a project that has passed the test of time can move from "unsupported" to the main library.
Best regards, Hans
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost --- end quoted text ---
-- Karen Shaeffer, Neuralscape Services "The subconscious mind is driven by your deeply held beliefs -- not your deeply held desires."