Has anyone implemented any of these algorithms? In the talk it was said that they are very fast, with very little compromise on accuracy compared to the exact SVD. The code below is very simple. I tested it on an A of size 1000x5000, but I don't get results very close to the SVD, i.e. the SVD is way better. There is some improvement from the RSVD though, so I don't think it's completely broken. What could be the problem?
from numpy import dot
from numpy.linalg import qr, svd
from numpy.random import standard_normal

def rsvd(A, k):
    # Stage 1: find an orthonormal basis Q for the approximate range of A
    n = A.shape[1]
    # Omega = random((n, k))           # uniform samples -- worked even worse
    Omega = standard_normal((n, k))    # Gaussian test matrix
    Y = dot(A, Omega)                  # sample the range of A
    Q, R = qr(Y, mode='reduced')       # economy-size QR
    # Stage 2: exact SVD of the small projected matrix B = Q^T A
    B = dot(Q.T, A)
    Uhat, Sigma, Vt = svd(B, full_matrices=False)
    U = dot(Q, Uhat)                   # lift the left factor back up
    return U, Sigma, Vt
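For reference, a minimal sketch of the comparison described above, assuming the rsvd from the question (the test matrix and the names A_k, A_best are stand-ins):

import numpy as np
from numpy.linalg import norm, svd

A = np.random.standard_normal((1000, 5000))  # stand-in for the real data
k = 20

U, S, Vt = rsvd(A, k)
A_k = np.dot(U * S, Vt)                          # rank-k reconstruction from rsvd

U2, S2, Vt2 = svd(A, full_matrices=False)
A_best = np.dot(U2[:, :k] * S2[:k], Vt2[:k, :])  # optimal rank-k approximation

print(norm(A - A_k) / norm(A))     # randomized error
print(norm(A - A_best) / norm(A))  # exact truncated SVD error (lower bound)

Note that a pure noise matrix like this has a nearly flat spectrum, which is exactly the hard case discussed in the answers below.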
Oliver, if you're interested in exploring the stochastic decomposition algorithm for yourself, you can get inspiration from existing implementations:
As for accuracy, it depends on the spectral properties of your problem. In Natural Language Processing, the spectrum declines very gradually (no sharp drop-off), so you'd need many power iterations for the stochastic algorithm to be reasonably accurate. Even massive oversampling (like drawing several times k random samples) doesn't fully compensate for a slowly decaying spectrum on its own.
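A minimal sketch of what such power iterations can look like, as a variant of the rsvd from the question (the function name, the parameter q, and the per-step re-orthonormalization are my choices following the usual randomized-SVD recipe, not anything stated above):

from numpy import dot
from numpy.linalg import qr, svd
from numpy.random import standard_normal

def rsvd_power(A, k, q=3):
    # Randomized SVD with q power iterations, for slowly decaying spectra.
    n = A.shape[1]
    Y = dot(A, standard_normal((n, k)))
    Q, _ = qr(Y, mode='reduced')
    for _ in range(q):
        # multiply by A A^T and re-orthonormalize to keep the basis stable
        Q, _ = qr(dot(A, dot(A.T, Q)), mode='reduced')
    B = dot(Q.T, A)
    Uhat, Sigma, Vt = svd(B, full_matrices=False)
    return dot(Q, Uhat), Sigma, Vt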
Performance-wise, people just use a standard Gaussian for the random matrix.
Which random numbers are you using? They should be Gaussian with mean 0 and variance 1, or something similar. If they are uniform random numbers on [0, 1], the method gets a lot worse. Also, you should allow for some numerical error by computing the decomposition with the first k+5 or so dimensions instead of k, and then truncating back to k.
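A sketch of that oversampling suggestion applied to the code from the question (the extra-column count p=5 follows the answer; the function name is mine):

from numpy import dot
from numpy.linalg import qr, svd
from numpy.random import standard_normal

def rsvd_oversampled(A, k, p=5):
    # Sample k + p random directions, then truncate the result back to k.
    n = A.shape[1]
    Omega = standard_normal((n, k + p))  # Gaussian, mean 0, variance 1
    Y = dot(A, Omega)
    Q, _ = qr(Y, mode='reduced')
    B = dot(Q.T, A)
    Uhat, Sigma, Vt = svd(B, full_matrices=False)
    return dot(Q, Uhat)[:, :k], Sigma[:k], Vt[:k, :]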
answered Nov 05 '10 at 06:42
Alexandre Passos ♦
When you use Gunnar Martinsson's method to obtain an approximate SVD whose rank is lower than the numerical rank k of the matrix, the accuracy can become very bad; however, the computation becomes faster.
In summary, Gunnar Martinsson's method is meant to be a fast approximation to the exact truncated SVD of the same rank. You can verify this very easily by writing a small program.
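A sketch of such a verification, assuming the rsvd from the question and a synthetic matrix whose spectrum decays quickly (the names and the decay rate here are my choices):

import numpy as np

# build a 1000 x 5000 test matrix with geometrically decaying singular values
m, n, k = 1000, 5000, 20
U0, _ = np.linalg.qr(np.random.standard_normal((m, m)))
V0, _ = np.linalg.qr(np.random.standard_normal((n, m)))
s = 2.0 ** -np.arange(m)
A = np.dot(U0 * s, V0.T)

_, S_r, _ = rsvd(A, k)
S_exact = np.linalg.svd(A, compute_uv=False)[:k]
print(np.max(np.abs(S_r - S_exact)))  # tiny when the spectrum decays fast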
answered Feb 11 '14 at 18:41