I have read some papers, such as Collobert (2008), where the words themselves are represented as feature vectors. Can you clarify what those feature vectors look like and how those features help to improve prediction accuracy? An example or explanation with reference to this paper would be much appreciated.

asked Mar 04 '12 at 12:42


Kuri_kuri

edited May 22 '12 at 21:53


Joseph Turian ♦♦


One Answer:

You can visualize these embeddings and download them from Joseph Turian's page on word representations. There is also a link to a GitHub project with code for using these embeddings as features in conditional random fields.
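To make the idea concrete, here is a minimal sketch of what such feature vectors look like. Everything here is a toy assumption: the vocabulary, the 4-dimensional random vectors (real trained embeddings, like the ones on Turian's page, are typically 25-200 dimensional), and the `word_features` helper, which concatenates the embeddings of a word and its neighbors in the style of Collobert & Weston (2008):

```python
import numpy as np

# Toy vocabulary and random embedding table; in practice the rows would be
# trained vectors downloaded from a resource such as Turian's page.
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
dim = 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), dim))  # one row per word

def word_features(sentence, i, window=1):
    """Feature vector for position i: the concatenated embeddings of the
    words in a +/-window context around position i."""
    parts = []
    for j in range(i - window, i + window + 1):
        if 0 <= j < len(sentence):
            idx = vocab.get(sentence[j], vocab["<unk>"])
        else:
            idx = vocab["<unk>"]  # pad positions outside the sentence
        parts.append(embeddings[idx])
    return np.concatenate(parts)

feats = word_features(["the", "cat", "sat"], 1)
print(feats.shape)  # 3 context positions x 4 dimensions = (12,)
```

The intuition for why this helps accuracy: words that behave similarly end up with nearby vectors, so a classifier that has seen "cat" at training time can generalize to "dog" at test time, which discrete one-hot word features cannot do.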

answered Mar 04 '12 at 15:13


Alexandre Passos ♦



User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.