I have vectors x ∈ ℝ^n and I expect some multivariate normal distribution.

I want to normalize the vectors in such a way that y = M (x - b) has mean zero (E[Y] = 0) and unit variance (var(Y) = 1; for n > 1, the identity covariance matrix).

For n=1, that is basically the Standard score.

  1. With M = 1 and b = E[X], I get it mean-normalized but not variance-normalized.

  2. With M being a non-negative symmetric square root of var(X)^{-1}, we get var(Y) = 1. We can choose b as in case 1.
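To make the two cases concrete, here is a minimal sketch (using NumPy, with illustrative sample data) that computes b = E[X] and M as the symmetric non-negative square root of var(X)^{-1} via an eigendecomposition, then checks that var(Y) is the identity:

```python
import numpy as np

# Illustrative data; the distribution parameters here are made up.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([1.0, -2.0], [[2.0, 0.5], [0.5, 1.0]], size=10000)

b = X.mean(axis=0)                # case 1: b = E[X]
Sigma = np.cov(X, rowvar=False)   # estimate of var(X)

# Case 2: symmetric non-negative square root of var(X)^{-1}.
# Sigma = Q diag(w) Q^T  =>  M = Q diag(w^{-1/2}) Q^T
w, Q = np.linalg.eigh(Sigma)
M = Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

Y = (X - b) @ M.T
print(np.round(np.cov(Y, rowvar=False), 2))  # ≈ identity matrix
```

Since M is built from the same sample covariance, M Σ M^T = I holds up to floating-point error, so the whitened sample covariance comes out as the identity.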

I want to estimate M and b based on some running/moving algorithm. For the first case, this is easy:

For every new x, I can use the update rule

b ← x α + b (1 − α)

for some α. I could set α = 1/N and set N ← N + 1 in every iteration, starting with N = 1.
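As a sketch of that update rule with α = 1/N (variable names are illustrative), note that this choice reproduces the exact batch mean:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.normal(size=(1000, 3))  # stream of illustrative sample vectors

b = np.zeros(3)
N = 0
for x in xs:
    N += 1
    alpha = 1.0 / N
    b = x * alpha + b * (1.0 - alpha)  # b <- x*alpha + b*(1-alpha)

# With alpha = 1/N this is exactly the running sample mean.
print(np.allclose(b, xs.mean(axis=0), atol=1e-10))
```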

How would I do something similar for the second case, esp. for M?

I could use a similar update rule to calculate the covariance matrix, e.g.

Σ ← (x − b) (x − b)^T α + Σ (1 − α)

but I'm not sure how well that works in practice (I haven't really seen the formula in that form), or whether it even converges, given that it also uses the moving b.
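A quick empirical sketch of that joint update (with α = 1/N, updating b before Σ in each step; the data here is illustrative) suggests it does converge to var(X), up to a small finite-N bias from using the moving b:

```python
import numpy as np

rng = np.random.default_rng(2)
xs = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.3], [0.3, 1.0]], size=20000)

n = xs.shape[1]
b = np.zeros(n)
Sigma = np.zeros((n, n))
N = 0
for x in xs:
    N += 1
    alpha = 1.0 / N
    b = x * alpha + b * (1.0 - alpha)              # update the moving mean first
    d = x - b                                      # deviation from the updated mean
    Sigma = np.outer(d, d) * alpha + Sigma * (1.0 - alpha)

# Sigma should now be close to the sample covariance of xs.
print(np.round(Sigma, 2))
```

Using the updated b shrinks each term by a factor (N−1)/N on average, so the estimate is slightly biased low for small N but the bias vanishes as N grows (this is essentially a rescaled Welford-style update).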

Also, if I use this, I only have an estimate of var(X), but I still need M. I could probably use another estimation step to get M from Σ (which one?), but I wonder whether I could estimate M more directly.


(I also asked the same question on Math.StackExchange.)

asked May 16 '14 at 08:57


Albert Zeyer

edited May 16 '14 at 09:00
