"Thomas Willhalm"
Hello,
I'm new to ublas, so it's quite possible that I overlooked something obvious. The first thing I tried to calculate was the following line:
numerics::matrix<double> M(dimension,n);
numerics::matrix<double> Covariance = numerics::prod(M,numerics::trans(M))/n;
where dimension is 50 and n is 8000.
Unfortunately, it turned out that the following "stupid" code actually performs better:
numerics::matrix<double> Covariance(dimension,dimension);
for (int i=0; i
What am I doing wrong?
I'm using gcc 2.95.3 under Linux 2.4.18 on a mobile Pentium III, if it matters.
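The archive cuts the hand-written loop off at the first '<'; presumably it was a direct triple loop over the covariance entries. A minimal sketch of such a loop (index order and variable names are assumptions, not the original code):

numerics::matrix<double> Covariance(dimension, dimension);
for (int i = 0; i < dimension; ++i)
    for (int j = 0; j < dimension; ++j) {
        double sum = 0.0;
        for (int k = 0; k < n; ++k)
            sum += M(i, k) * M(j, k);   // row i of M dotted with row j
        Covariance(i, j) = sum / n;
    }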
Dan Muller
How much of a performance difference are you seeing? (I assume that you were using an optimized build.) I expect that you pay *some* price for using nicer abstractions. I've been playing around with uBLAS for a couple of weeks, but haven't really considered its performance yet. I've been re-learning my linear algebra at the same time, so it's been a slow slog. :-)
Kresimir Fresl
Thomas Willhalm wrote:
I'm new to ublas, so it's quite possible that I overlooked something obvious. The first thing I tried to calculate was the following line:
numerics::matrix<double> M(dimension,n);
numerics::matrix<double> Covariance = numerics::prod(M,numerics::trans(M))/n;
where dimension is 50 and n is 8000.
Unfortunately, it turned out that the following "stupid" code actually performs better:
numerics::matrix<double> Covariance(dimension,dimension);
for (int i=0; i
What am I doing wrong?
I'm using gcc 2.95.3 under Linux 2.4.18 on a mobile Pentium III, if it matters.
First of all, it seems that you have a fairly old version of ublas.
The new version is in the namespace `boost::numeric::ublas'.
But I don't think that this influences the performance ;o)
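(For reference, against a current Boost the same computation would be spelled roughly as below; this is only a sketch, and the helper function name is made up:)

#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;

// Covariance of the rows of M, where M is dimension x n and the
// columns are samples; the function name and layout are just an example.
ublas::matrix<double> covariance(const ublas::matrix<double>& M)
{
    const double n = static_cast<double>(M.size2());
    return ublas::prod(M, ublas::trans(M)) / n;
}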
You must define NDEBUG. Otherwise expression templates
are not enabled. You can also try to compile with -O2 or -O3.
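With gcc that amounts to a command line along the lines of the following (the file name is just a placeholder):

g++ -O3 -DNDEBUG -funroll-loops test.cpp -o test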
For the following test program, compiled with gcc 3.2, I got:
without -DNDEBUG:
ublas: 9.36
loop: 5.74
with -DNDEBUG:
ublas: 5.6
loop: 4.37
with -DNDEBUG -O3:
ublas: 0.9
loop: 1.59
with -DNDEBUG -O3 -funroll-loops:
ublas: 0.81
loop: 1.6
Test program:
=============================================
#include <iostream>
#include
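The archive truncates the program at the second include (everything after the '<' is lost). A sketch of an equivalent benchmark, with the timing method, test data and overall structure assumed rather than taken from the original, might look like this:

#include <iostream>
#include <ctime>
#include <cstddef>
#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;

int main()
{
    const std::size_t dimension = 50;
    const std::size_t n = 8000;

    ublas::matrix<double> M(dimension, n);
    for (std::size_t i = 0; i < dimension; ++i)
        for (std::size_t k = 0; k < n; ++k)
            M(i, k) = double(i + k) / n;           // arbitrary test data

    // uBLAS expression
    std::clock_t t0 = std::clock();
    ublas::matrix<double> C1 =
        ublas::prod(M, ublas::trans(M)) / double(n);
    std::clock_t t1 = std::clock();
    std::cout << "ublas: " << double(t1 - t0) / CLOCKS_PER_SEC << '\n';

    // hand-written loop
    ublas::matrix<double> C2(dimension, dimension);
    std::clock_t t2 = std::clock();
    for (std::size_t i = 0; i < dimension; ++i)
        for (std::size_t j = 0; j < dimension; ++j) {
            double sum = 0.0;
            for (std::size_t k = 0; k < n; ++k)
                sum += M(i, k) * M(j, k);
            C2(i, j) = sum / n;
        }
    std::clock_t t3 = std::clock();
    std::cout << "loop:  " << double(t3 - t2) / CLOCKS_PER_SEC << '\n';

    // crude consistency check
    std::cout << "diff(0,0): " << C1(0, 0) - C2(0, 0) << '\n';
    return 0;
}
=============================================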
Thomas Willhalm
Kresimir Fresl wrote:
Thomas Willhalm wrote:
I'm new to ublas, so it's quite possible that I overlooked something obvious. The first thing I tried to calculate was the following line:
numerics::matrix<double> M(dimension,n);
numerics::matrix<double> Covariance = numerics::prod(M,numerics::trans(M))/n;
where dimension is 50 and n is 8000.
Unfortunately, it turned out that the following "stupid" code actually performs better:
numerics::matrix<double> Covariance(dimension,dimension);
for (int i=0; i
What am I doing wrong?
I'm using gcc 2.95.3 under Linux 2.4.18 on a mobile Pentium III, if it matters.
First of all, it seems that you have a fairly old version of ublas. The new version is in the namespace `boost::numeric::ublas'. But I don't think that this influences the performance ;o)
Seems like the reason is that I have an old version of gcc: 2.95.3 doesn't check the namespaces...
You must define NDEBUG. Otherwise expression templates are not enabled.
That's it, thank you very much!
Without -DNDEBUG I had: ublas 5.28, loop 1.33. Now it's: ublas 0.58, loop 0.94.
(Upgrading ublas didn't change much, but the version I had was only some weeks old.)
Thanks again,
Thomas
participants (3)
- Dan Muller
- Kresimir Fresl
- Thomas Willhalm