I'm studying the kernelized version of classical PCA (Principal Component Analysis).

Let $x_i$, $i = 1, 2, \ldots, n$, be the available training points and

$$R = \frac{1}{n}\sum_{i=1}^{n}\phi(x_i)\,\phi(x_i)^T$$

an estimate of the correlation matrix, where $\phi$ maps points of $X$ into an RKHS $H$. We want to perform the eigendecomposition of $R$, that is, to solve

$$R v = \lambda v.$$

Let's assume that the data are centered (i.e., that $\sum_{i=1}^{n}\phi(x_i) = 0$). By the definition of $R$, it can be shown that $v$ lies in

$$\mathrm{span}\{\phi(x_1), \ldots, \phi(x_n)\}.$$

Indeed,

$$\lambda v = R v = \frac{1}{n}\sum_{i=1}^{n}\phi(x_i)\left(\phi(x_i)^T v\right),$$

and for $\lambda \neq 0$ we can write

$$v = \sum_{i=1}^{n} a_i\,\phi(x_i), \qquad a_i = \frac{\phi(x_i)^T v}{n\lambda}.$$

Now the book says that, combining this last expression with $Rv = \lambda v$, we can see that the problem is equivalent to performing the eigendecomposition of the Gram matrix:

$$K a = n\lambda\,a,$$

where

$$K(i,j) := K(x_i, x_j) = \phi(x_i)^T\phi(x_j), \quad i, j = 1, \ldots, n,$$

and

$$a = [a_1, a_2, \ldots, a_n]^T,$$

with $K(\cdot,\cdot)$ being the adopted kernel function.

I can't see why performing the eigendecomposition of R is "equivalent" to performing that of the Gram matrix. Any help would be greatly appreciated!
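
For what it's worth, the equivalence does check out numerically. Below is a minimal sketch (my own, not from the book) that uses an explicit feature map for the degree-2 homogeneous polynomial kernel, so that both $R$ and the Gram matrix can be formed and their eigendecompositions compared directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=(n, 2))  # n training points in X = R^2

# Explicit feature map for the kernel K(a, b) = (a^T b)^2:
# phi(a) = [a_1^2, a_2^2, sqrt(2) a_1 a_2]
def phi(a):
    return np.array([a[0]**2, a[1]**2, np.sqrt(2) * a[0] * a[1]])

Phi = np.array([phi(p) for p in x])
Phi -= Phi.mean(axis=0)      # center the mapped points, so sum_i phi(x_i) = 0

R = Phi.T @ Phi / n          # correlation matrix estimate (3 x 3 here)
K = Phi @ Phi.T              # Gram matrix, K[i, j] = phi(x_i)^T phi(x_j)

lam_R = np.linalg.eigvalsh(R)   # eigenvalues of R
mu_K = np.linalg.eigvalsh(K)    # eigenvalues of K (K a = n * lambda * a)

# The nonzero eigenvalues agree up to the factor n
print(np.sort(lam_R)[::-1][:3])
print(np.sort(mu_K)[::-1][:3] / n)
```

The two printed triples coincide up to floating-point error, consistent with $Ka = n\lambda a$, but I'd like to see why analytically.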

asked Aug 29 '13 at 10:19

Kiuhnm


One Answer:

Substituting the expression for the eigenvectors $v$ into the eigendecomposition $Rv = \lambda v$ and expanding it in terms of the feature vectors $\phi(x_i)$, you will end up with the expression you are looking for. It is just arithmetic: try writing down the complete matrix product $Rv$ while expressing everything in terms of the $\phi(\cdot)$'s. A sketch of the steps is given below.
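
For completeness, here is that arithmetic filled in (my own sketch, using the notation from the question). Substituting $v = \sum_{j} a_j\,\phi(x_j)$ into $Rv = \lambda v$ and left-multiplying both sides by $\phi(x_k)^T$ gives, for each $k$,

$$\frac{1}{n}\sum_{i=1}^{n}\phi(x_k)^T\phi(x_i)\,\phi(x_i)^T\sum_{j=1}^{n} a_j\,\phi(x_j) = \lambda\,\phi(x_k)^T\sum_{j=1}^{n} a_j\,\phi(x_j).$$

Writing $K(i,j) = \phi(x_i)^T\phi(x_j)$ and stacking the $n$ equations for $k = 1, \ldots, n$:

$$K^2 a = n\lambda\,K a,$$

and when $K$ is invertible this reduces to the Gram-matrix eigenproblem $K a = n\lambda\,a$.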

Max Welling's tutorial has the complete derivation.

answered Aug 29 '13 at 13:13

Rakesh Chalasani


I was missing a key point: I needed to multiply both sides of $Rv = \lambda v$ by $\phi(x_k)^T$. In the end, I can derive the final expression only if $K$ is invertible, but is it invertible?

(Aug 30 '13 at 10:26) Kiuhnm

Yes, K is invertible: any (strictly) positive definite matrix is invertible, and for kernels such as the Gaussian the Gram matrix of distinct points is strictly positive definite. See the check after this thread.

(Aug 31 '13 at 09:51) Rakesh Chalasani
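
To illustrate the invertibility claim above, here is a quick numerical check (my own snippet, assuming a Gaussian kernel with unit bandwidth): the Gram matrix of distinct points has strictly positive eigenvalues, hence it is invertible.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(20, 2))   # 20 distinct points in R^2

# Gaussian (RBF) kernel Gram matrix: K[i, j] = exp(-||x_i - x_j||^2 / 2)
sq_dists = np.sum((x[:, None, :] - x[None, :, :])**2, axis=-1)
K = np.exp(-sq_dists / 2)

# All eigenvalues are strictly positive (up to floating-point error),
# so K is positive definite and therefore invertible.
print(np.linalg.eigvalsh(K).min())
```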