Hi all

I'm dealing with a dataset that has only a few observations per instance, but I have a good estimate of the prior, so Bayesian learning seems like a good fit.

I modeled my prior as a bivariate normal with mean [0, 0]^T and covariance [[16, 0], [0, 27]], and I used the Normal-inverse-Wishart, as it appears to be the correct conjugate choice according to Wikipedia.
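For concreteness, this is roughly how I understand the update (a minimal NumPy sketch of the standard NIW conjugate update; the hyperparameter names kappa0, nu0, Psi0 and the placeholder values are mine, not necessarily the ones I should be using):

```python
import numpy as np

def niw_posterior(X, mu0, kappa0, nu0, Psi0):
    """Conjugate Normal-inverse-Wishart update; X is an (n x d) data matrix."""
    n, d = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)                    # scatter about the sample mean
    kappa_n = kappa0 + n
    nu_n = nu0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n       # posterior expected mean
    diff = (xbar - mu0).reshape(-1, 1)
    Psi_n = Psi0 + S + (kappa0 * n / kappa_n) * (diff @ diff.T)
    return mu_n, kappa_n, nu_n, Psi_n

# Example with made-up data and placeholder hyperparameters:
X = np.random.randn(10, 2)
mu0 = np.zeros(2)
Psi0 = np.array([[16.0, 0.0], [0.0, 27.0]])
mu_n, kappa_n, nu_n, Psi_n = niw_posterior(X, mu0, kappa0=1.0, nu0=4.0, Psi0=Psi0)
```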

The problem is that I get a consistent estimate of the mean, but the covariance estimate doesn't convince me. Is what I get an estimate of the covariance of the data, or is it only the covariance of the distribution of the estimated mean, which I wouldn't need since I only use the expected value of the mean?
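In other words, if I read the formulas right, these are the two quantities I'm trying to tell apart (continuing the sketch above; d = 2 in my case):

```python
def posterior_covariances(kappa_n, nu_n, Psi_n, d=2):
    """Two different things one can read off the NIW posterior."""
    E_Sigma = Psi_n / (nu_n - d - 1)   # posterior mean of the data covariance Sigma
    Cov_mu = E_Sigma / kappa_n         # covariance of the marginal posterior on mu
    return E_Sigma, Cov_mu
```

The first is what I actually want; the second shrinks as more data arrive, so it describes the uncertainty of the mean rather than the spread of the data.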

If it's the latter, how do I estimate the covariance of my data? I am thinking of an approach in which I treat the mean computed before as if it were the known mean and use the inverse-Wishart conjugate prior to estimate the covariance, but I'm not sure whether a statistician would be horrified by this approach (a frequentist statistician certainly would, but maybe it makes sense in a Bayesian setting).
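Concretely, the alternative I have in mind would look something like this (again just a sketch, treating mu_n from above as if it were the known mean):

```python
def iw_posterior_known_mean(X, mu, nu0, Psi0):
    """Inverse-Wishart conjugate update for Sigma when the mean is treated as known."""
    n, d = X.shape
    diffs = X - mu
    Psi_n = Psi0 + diffs.T @ diffs        # scatter about the assumed-known mean
    nu_n = nu0 + n
    E_Sigma = Psi_n / (nu_n - d - 1)      # posterior mean of Sigma
    return nu_n, Psi_n, E_Sigma
```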

By the way, I also posted the procedure I'm using on StackExchange, if you want to have a look.

Thanks.

asked Feb 27 '13 at 04:48 by unziberla

edited Feb 27 '13 at 04:56
