Has anyone here used Correlated Topic Models (Blei & Lafferty 2007)? Do they work any better than LDA in practice? By this I mean: are they fast to train, and do the topics make more sense?

asked Jul 08 '10 at 18:04

aditi

edited Jul 08 '10 at 19:26


2 Answers:

There was a paper from David Blei's group at last NIPS ("Reading Tea Leaves: How Humans Interpret Topic Models", Chang et al., NIPS 2009) that, interestingly enough, showed empirically that correlated topic models fare worse than straight LDA when humans try to interpret the resulting topics.

answered Jul 08 '10 at 20:29

Alexandre Passos ♦

We are playing around with them at the moment. When you say "better", do you mean in terms of predictive performance? If so, we've had mixed experience with them: sometimes they beat plain LDA, sometimes they don't. If you mean training time, we generally find them worse than plain LDA because of the internal optimization iterations one has to run for the variational parameters. Also, we found that the Blei bound in the correlated topic model worked better than the Bouchard bound for large-scale corpora.
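
To make the "internal optimization iterations" and the two bounds concrete, here is a rough sketch in my own notation (not taken from the thread) of the term that makes CTM inference costlier than LDA's closed-form updates. The CTM replaces LDA's Dirichlet prior on topic proportions with a logistic normal, $\eta \sim \mathcal{N}(\mu, \Sigma)$, $\theta_k = e^{\eta_k} / \sum_j e^{\eta_j}$, and under a Gaussian variational posterior $q(\eta_k) = \mathcal{N}(\lambda_k, \nu_k^2)$ the objective contains $\mathbb{E}_q[\log \sum_k e^{\eta_k}]$, which has no closed form. The Blei & Lafferty bound introduces an auxiliary variable $\zeta$:

$$\mathbb{E}_q\Big[\log \sum_k e^{\eta_k}\Big] \;\le\; \zeta^{-1} \sum_k \mathbb{E}_q\big[e^{\eta_k}\big] + \log\zeta - 1 \;=\; \zeta^{-1} \sum_k e^{\lambda_k + \nu_k^2/2} + \log\zeta - 1,$$

with the optimal $\zeta = \sum_k e^{\lambda_k + \nu_k^2/2}$. Even with the bound, the variational means $\lambda_k$ have no closed-form update and are fit with inner conjugate-gradient or Newton steps per document, which is where the extra training time comes from; the Bouchard bound is a quadratic alternative for the same log-sum-exp term.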

Did you have a more specific question about what you meant by "work better in practice"?

answered Jul 08 '10 at 19:17

Jurgen

Thanks, this is the sort of answer I was looking for. I had no specific sense of "work better" in mind, just people's thoughts on using one instead of the other.

(Jul 08 '10 at 19:27) aditi
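
For anyone who wants to run this comparison themselves, below is a minimal sketch. It assumes the tomotopy Python library, which is my choice for illustration; neither answer names a specific implementation, and the corpus, topic count, and iteration count are placeholders. It trains plain LDA and a CTM with identical settings and prints per-word log-likelihood, wall-clock training time, and the top words per topic, which roughly covers the "fast to train" and "topics make more sense" criteria from the question.

    import time
    import tomotopy as tp  # library chosen for illustration; not mentioned in the thread

    # Tiny placeholder corpus; swap in your own tokenized documents.
    docs = [
        "neural network training gradient descent loss".split(),
        "topic model dirichlet latent variable inference".split(),
        "gradient descent optimization convergence rate".split(),
        "latent dirichlet allocation topic proportions corpus".split(),
    ]

    for name, mdl in [("LDA", tp.LDAModel(k=4, seed=1)),
                      ("CTM", tp.CTModel(k=4, seed=1))]:
        for words in docs:
            mdl.add_doc(words)
        start = time.time()
        mdl.train(200)  # same number of training iterations for both models
        elapsed = time.time() - start
        print(f"{name}: per-word log-likelihood={mdl.ll_per_word:.3f}, "
              f"training time={elapsed:.2f}s")
        for k in range(mdl.k):
            top_words = [w for w, _ in mdl.get_topic_words(k, top_n=5)]
            print(f"  topic {k}: {' '.join(top_words)}")

Held-out perplexity would be a fairer predictive comparison than training-set log-likelihood, and, as the first answer points out, automatic metrics and human judgments of topic quality can disagree.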