[GSOC2015] uBLAS Matrix Solver Project
Hi,

My name is Raj and I am a PhD student in Computer Graphics. I am interested in tackling the uBLAS Matrix Solver problem, and in order to write my proposal I am looking for input on which of the following algorithms would be most useful to prospective users of the Boost numeric library. Here is a categorized list of candidates that would bring uBLAS up to date with established libraries like Eigen and Armadillo. Please let me know your preferences.

*David Bellot*: As a potential mentor, do you have any specific additions or deletions for this list? This could also be useful for other candidates pursuing this project.

*DENSE SOLVERS AND DECOMPOSITIONS*:

1) *QR Decomposition* - *(Must have)* For orthogonalization of column spaces and solutions to linear systems. (Bonus: also rank-revealing.)
2) *Cholesky Decomposition* - *(Must have)* For symmetric positive-definite systems, often encountered in PDE/FEM applications.
3) *Householder Method* - Reduction to tridiagonal form for eigensolvers.

*SPARSE SOLVERS AND PRECONDITIONERS*:

1) *Conjugate Gradient* - *(Must have)* For symmetric positive-definite systems, this is the Krylov-subspace method of choice. Both plain and preconditioned variants need to be implemented to address convergence issues. (A minimal sketch of the iteration follows at the end of this message.)
2) *BiCGSTAB* - *(Needs introspection)* For non-symmetric systems.
3) *Incomplete Cholesky Decomposition* - *(Good to have)* For symmetric positive-definite sparse matrices; to be used as a preconditioner extending (1) to preconditioned CG methods.
4) *Jacobi Preconditioner* - *(Must have)* As a prerequisite for (1).

*EIGEN DECOMPOSITION MODULES (DENSE ONLY)*:

1) *Symmetric Eigenvalues* - *(Must have)* Like the SSYEV routine in LAPACK: first reduce to tridiagonal form using Householder reflections, then apply the QR algorithm for eigenvalue computation.
2) *Non-symmetric Eigenvalues* - *(Good to have)* Like the SGEEV routine in LAPACK, using the Schur decomposition as an intermediate step.
3) *Generalized Eigenvalues* - *(Needs introspection)* I use this a lot in my research and it is a good thing to have.

Computing eigendecompositions of sparse matrices needs special, robust numerical treatment (implicitly restarted Arnoldi iterations) and may be treated as an optional extension.
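To make item (1) of the sparse list concrete, here is a minimal sketch of the plain conjugate gradient iteration. The function name is hypothetical (nothing like it exists in uBLAS today), and a dense matrix is used only for illustration; the same loop works unchanged with a ublas::compressed_matrix.

```cpp
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <cmath>

namespace ublas = boost::numeric::ublas;

// Hypothetical helper: plain (unpreconditioned) CG for a symmetric
// positive-definite system A*x = b.
ublas::vector<double> conjugate_gradient(const ublas::matrix<double>& A,
                                         const ublas::vector<double>& b,
                                         double tol = 1e-10,
                                         std::size_t max_iter = 1000)
{
    ublas::vector<double> x = ublas::zero_vector<double>(b.size()); // x0 = 0
    ublas::vector<double> r = b;   // residual r0 = b - A*x0 = b
    ublas::vector<double> p = r;   // first search direction
    double rs_old = ublas::inner_prod(r, r);

    for (std::size_t k = 0; k < max_iter && std::sqrt(rs_old) > tol; ++k) {
        const ublas::vector<double> Ap = ublas::prod(A, p);
        const double alpha = rs_old / ublas::inner_prod(p, Ap);
        x += alpha * p;                  // step along the search direction
        r -= alpha * Ap;                 // update the residual
        const double rs_new = ublas::inner_prod(r, r);
        p = r + (rs_new / rs_old) * p;   // next A-conjugate direction
        rs_old = rs_new;
    }
    return x;
}
```

A Jacobi preconditioner would simply scale the residual by the inverse of diag(A) before forming the next direction, which is why it is listed as a prerequisite for the preconditioned variant.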
On 06 Mar 2015, at 04:50, Rajaditya Mukherjee wrote:
> Hi, my name is Raj and I am a PhD student in Computer Graphics. [...]
Hi Raj,

For me, finishing the "Boost numeric bindings", which would allow using ATLAS and other BLAS engines as a backend, would be most useful. Having that interface would give me high-performance linear algebra.

From http://mathema.tician.de : "Boost Bindings is a bindings library (not just) for Boost.Ublas. It offers an easy way of calling BLAS, LAPACK, UMFPACK, MUMPS and many other mature legacy numerical codes from within C++."

Code in the sandbox: https://svn.boost.org/svn/boost/sandbox/numeric_bindings/
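For readers unfamiliar with the sandbox bindings, the usage shape is roughly as follows. This is a sketch only: the header paths and overloads have shifted over the bindings' history, so treat everything below as an assumption about the sandbox API rather than a definitive interface.

```cpp
#include <boost/numeric/ublas/matrix.hpp>
// Sandbox numeric_bindings headers (assumed paths; they differ by version):
#include <boost/numeric/bindings/lapack/gesv.hpp>
#include <boost/numeric/bindings/traits/ublas_matrix.hpp>

namespace ublas  = boost::numeric::ublas;
namespace lapack = boost::numeric::bindings::lapack;

int main()
{
    // LAPACK expects column-major storage.
    ublas::matrix<double, ublas::column_major> A(3, 3);
    ublas::matrix<double, ublas::column_major> B(3, 1); // right-hand side(s)
    // ... fill A and B ...
    // Solve A*X = B; B is overwritten with the solution, A with its
    // LU factors (assumed two-argument driver overload).
    lapack::gesv(A, B);
}
```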
Hi Raj, I think you have a pretty good list there. It has a good set of fundamental methods that are very useful and reasonably easy to implement, and you are avoiding (for now) the more advanced methods that are very tricky to implement (and that very few people have implemented at all, since most people bind to legacy Fortran routines instead). Here are a few notes:
1) *QR Decomposition* - *(Must have)* For orthogonalization of column spaces and solutions to linear systems. (Bonus: also rank-revealing.)
QR decomp is obviously a must-have, but I would say that the rank-revealing version is also a must-have. Ideally, we should be able to cover the full trio for matrix inversion / linear solution: QR, RRQR and SVD. In my experience, the best way to robustly do a general matrix inversion or linear solution is to first try the simplest and fastest method, which is QR decomposition. This can fail due to rank deficiency, at which point you fall back to the next simplest and fastest method that can deal with that deficiency, which is RRQR decomposition. And because there are some rare cases where RRQR will fail to correctly reveal the rank (push all the zeros down), there must be an ultimate, albeit much slower, fall-back method that will always work, which is basically SVD (afaik). In some real-life situations (e.g., inverting the Jacobian of a system), I've had problems where about 1 in 1,000 times the QR would fail, requiring a fall-back to RRQR, and about 1 in 1,000,000 times the RRQR would fail, requiring a further fall-back to SVD. I don't see how one could professionally rely on Boost.uBLAS (alone, without binding to the older Fortran libraries) if those three layers of fall-back are not present.
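Structurally, the three-layer fall-back looks like the sketch below. The three solver functions are hypothetical placeholders (none exist in uBLAS today), and failure is assumed to be signaled with exceptions derived from std::runtime_error; only the control flow is the point here.

```cpp
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <stdexcept>

namespace ublas = boost::numeric::ublas;

// Hypothetical solvers standing in for the three layers described above:
ublas::vector<double> qr_solve  (const ublas::matrix<double>&, const ublas::vector<double>&); // fast, fails on rank deficiency
ublas::vector<double> rrqr_solve(const ublas::matrix<double>&, const ublas::vector<double>&); // handles rank deficiency, rarely fails
ublas::vector<double> svd_solve (const ublas::matrix<double>&, const ublas::vector<double>&); // slow, always works

// Try the cheapest method first; only pay for the more robust ones
// when the cheaper one reports failure.
ublas::vector<double> robust_solve(const ublas::matrix<double>& A,
                                   const ublas::vector<double>& b)
{
    try {
        return qr_solve(A, b);        // roughly 999/1000 cases end here
    } catch (const std::runtime_error&) {
        try {
            return rrqr_solve(A, b);  // rank-deficient cases
        } catch (const std::runtime_error&) {
            return svd_solve(A, b);   // last resort
        }
    }
}
```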
2) *Cholesky Decomposition* - *(Must have)* For symmetric positive-definite systems, often encountered in PDE/FEM applications.
For Cholesky, I think it is important to also include its variants, especially LDL^T and a banded version (for band-symmetric matrices). That's mainly because implementing them is very easy, and they are also very useful when applicable. A minimal sketch of the plain factorization follows below.
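Here is that sketch: a straightforward Cholesky–Crout factorization A = L*L^T over a ublas::matrix. The helper is hypothetical (uBLAS provides no such routine today).

```cpp
#include <boost/numeric/ublas/matrix.hpp>
#include <cmath>
#include <stdexcept>

namespace ublas = boost::numeric::ublas;

// Returns the lower-triangular factor L of a symmetric positive-definite A.
ublas::matrix<double> cholesky(const ublas::matrix<double>& A)
{
    const std::size_t n = A.size1();
    ublas::matrix<double> L = ublas::zero_matrix<double>(n, n);
    for (std::size_t j = 0; j < n; ++j) {
        // Diagonal entry: L(j,j) = sqrt(A(j,j) - sum_k L(j,k)^2).
        double d = A(j, j);
        for (std::size_t k = 0; k < j; ++k)
            d -= L(j, k) * L(j, k);
        if (d <= 0.0)
            throw std::runtime_error("matrix is not positive definite");
        L(j, j) = std::sqrt(d);
        // Sub-diagonal entries of column j.
        for (std::size_t i = j + 1; i < n; ++i) {
            double s = A(i, j);
            for (std::size_t k = 0; k < j; ++k)
                s -= L(i, k) * L(j, k);
            L(i, j) = s / L(j, j);
        }
    }
    return L;
}
```

The LDL^T variant replaces the square root with a division by the diagonal pivot (dropping the positive-definiteness requirement to non-zero pivots), and the banded variant simply restricts the inner loops to the bandwidth.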
*EIGEN DECOMPOSITION MODULES (DENSE ONLY)*:
That's where it gets trickier.

First, you have to implement matrix balancing with exact floating-point arithmetic. This is a deceptively simple algorithm to write, but don't underestimate its difficulty, nor its importance later on.

I think that in order to tackle eigendecompositions, you will need to work your way up through their many building blocks. You have to make sure to have good, streamlined and flexible code for applying the three fundamental transforms (Householder reflections, Givens rotations, and Jacobi rotations); see the Givens sketch at the end of this message. Then, get your hands dirty with the Hessenberg decompositions and reductions. Then, move to the different Schur decompositions. Then, go on to the QR/QZ steps, whose difficulty you should not underestimate. And then, you'll have your eigenvalue solvers.

In my opinion, if you are going to implement any kind of eigenvalue decomposition, you should just do them all. These algorithms are all very much related (Jacobi / QR / QZ, Hessenberg / tridiagonal, Schur / real Schur, etc.), so you'll be better off tackling them together.

As for additional methods, one easy thing to add would be matrix exponentials (Padé scaling-and-squaring), which is sometimes useful in various contexts, especially in system dynamics. But really, for me the holy grail is an algebraic Riccati equation (ARE) solver. I implemented one in my own library, so I know what it entails; it's really not easy to do, and my implementation is still rough around the edges (when numerical conditioning is bad on the matrices involved). If you think eigenvalue decomposition is tricky, AREs are ten times worse (as far as I know, my implementation is one of two that exist, the other being the Fortran code written by the original authors of the algorithm). I don't think an ARE solver is achievable within a GSoC project, but it has a number of building blocks that are useful on their own, like the Schur decomposition and the reductions.

By the way, if you need additional inspiration, you might want to look at my matrix numerical methods code: https://github.com/mikael-s-persson/ReaK/tree/master/src/ReaK/core/lin_alg Most of this code is a bit old and still somewhat rough around the edges, but I've been using it for years with great success.

I would also be happy to help mentor this GSoC project. Things are in a state of flux for me right now (at work), so I'm not exactly sure how much time I could dedicate over the summer.

Cheers, Mikael.

-- Sven Mikael Persson, M.Sc.(Tech.) PhD Candidate and Vanier CGS Scholar, Department of Mechanical Engineering, McGill University
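As an illustration of the first building block above, here is a minimal sketch of a Givens row rotation in uBLAS terms (the helper is hypothetical; uBLAS provides no such routine), using the standard overflow-safe formulation for the rotation coefficients.

```cpp
#include <boost/numeric/ublas/matrix.hpp>
#include <cmath>

namespace ublas = boost::numeric::ublas;

// Zero out A(i, j) by rotating rows j and i (a Givens rotation).
void apply_givens(ublas::matrix<double>& A, std::size_t i, std::size_t j)
{
    const double a = A(j, j); // pivot element
    const double b = A(i, j); // element to annihilate
    double c, s;
    if (b == 0.0) { c = 1.0; s = 0.0; }
    else if (std::abs(b) > std::abs(a)) {
        const double t = -a / b;               // overflow-safe branch
        s = 1.0 / std::sqrt(1.0 + t * t);
        c = s * t;
    } else {
        const double t = -b / a;
        c = 1.0 / std::sqrt(1.0 + t * t);
        s = c * t;
    }
    // Apply the rotation to the two affected rows only.
    for (std::size_t k = 0; k < A.size2(); ++k) {
        const double t1 = A(j, k), t2 = A(i, k);
        A(j, k) = c * t1 - s * t2;
        A(i, k) = s * t1 + c * t2;
    }
}
```

The same two-row update is the workhorse inside Hessenberg QR steps, which is why it pays to get these primitives streamlined before moving up the stack.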
Hi Mikael and Thijs,
Thank you for your responses; I would like to address each of you in turn.
*Thijs*: I am not sure what the algorithmic challenges of such an undertaking are; more specifically, I am concerned about whether it would be substantial enough for a GSoC project. If you think it is something you could mentor me on, I am very open to the idea. I use Intel's high-performance MKL libraries in my day-to-day research, and one of the reasons Eigen is so attractive is that it can be made to call the MKL libraries in the backend with a single preprocessor define. Such functionality would go a long way toward enhancing the usefulness of Boost. Let me know your thoughts on this.
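For concreteness, the Eigen switch mentioned above is a preprocessor define rather than a pragma; this is Eigen's documented mechanism, shown here only for comparison:

```cpp
// Eigen's documented switch for routing supported dense kernels through
// Intel MKL; it must be defined before any Eigen header is included.
#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>
```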
*Mikael*: After reading your observations, I believe a good way to frame the proposal would be to skip the sparse decompositions altogether and focus on a subset of dense decompositions and methods, but test them extensively for numerical stability. As you mentioned, such an endeavour would require implementing quite a few supporting decompositions (for example, a genuinely stable eigensolver requires Schur decompositions, QR steps, and the Hessenberg and tridiagonal reductions).
With that in mind, I would like to propose for this project a complete set of dense eigenvalue solvers with maturity comparable to the Eigen library. What do you think? Regarding mentorship, perhaps you and David can work something out depending on your time commitments. If you have any more questions about my experience with numerical solvers, please let me know.
*Rajaditya Mukherjee*
*3rd Year Graduate Student*
*3rd Year Graduate Student*
Dept. of Computer Science and Engineering
The Ohio State University
Columbus, Ohio
Tel: +1 (614) 271-4439
Email: rajaditya.mukherjee@gmail.com
participants (3): Mikael Persson, Rajaditya Mukherjee, Thijs (M.A.) van den Berg