Revision history
Revision n. 1

Sep 11 '10 at 23:35


Alexandre Passos

This is the (squared) Mahalanobis distance between x and u. exp(-this/2) is proportional to the Gaussian density of x. I don't understand what this has to do with prediction, however, unless you mean Fisher discriminant analysis, where you choose the class that minimizes this distance.

Wasn't your friend talking about Gaussian processes? They use covariance matrices (actually covariance functions, sampled appropriately into a matrix), but in a rather different way.
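To make the first point concrete, here is a minimal sketch (the vectors x, u and the covariance matrix S are made-up example values, not from the question) computing the squared Mahalanobis distance and the corresponding Gaussian density:

```python
import numpy as np

# Made-up example inputs: point x, mean u, covariance matrix S.
x = np.array([1.0, 2.0])
u = np.array([0.0, 0.0])
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])

# Squared Mahalanobis distance: (x - u)^T S^{-1} (x - u).
# np.linalg.solve avoids forming the explicit inverse of S.
diff = x - u
d2 = diff @ np.linalg.solve(S, diff)

# exp(-d2 / 2) is proportional to the Gaussian density N(x; u, S);
# the full density only adds the normalizing constant.
norm_const = 1.0 / np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(S))
density = norm_const * np.exp(-0.5 * d2)
```

In a Fisher-discriminant-style classifier, you would compute d2 against each class's mean and covariance and pick the class with the smallest value.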


User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.