Hi, does anybody think manifold learning would be useful for natural language processing, especially dependency parsing? It seems like a good fit for semi-supervised learning, but it is mostly used in image processing and related fields. Do you think it would scale to NLP tasks?

asked Jun 06 '12 at 07:04

kakashi_

edited Jun 07 '12 at 06:39


One Answer:

Given that Brown clusters, neural language models, and other techniques for learning low-dimensional feature representations do improve the performance of standard dependency parsers, I'd say that some kind of manifold method would probably be useful. The challenge is that most current manifold learning techniques were not designed for the NLP setting, where each instance has only a handful of active features out of a few million.
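To make the sparsity point concrete, here is a minimal sketch (assuming scikit-learn and SciPy; the matrix sizes are made up for illustration). It builds a sparse feature matrix of the kind a dependency parser produces and reduces it with `TruncatedSVD`, which accepts sparse input directly. Classic manifold learners such as `sklearn.manifold.Isomap` would need to densify this matrix and build a neighborhood graph over all instances, which is what breaks down at NLP scale:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# Hypothetical setup: 1,000 instances over 100,000 binary indicator
# features, with only ~10 active features per instance -- the sparse,
# high-dimensional regime typical of dependency parsing.
rng = np.random.RandomState(0)
X = sparse_random(1000, 100_000, density=1e-4, format="csr", random_state=rng)

# TruncatedSVD operates on the sparse matrix directly, so it scales to
# this regime; graph-based manifold methods would first densify X
# (1000 x 100,000 floats) and compute pairwise neighborhoods.
svd = TruncatedSVD(n_components=50, random_state=0)
X_low = svd.fit_transform(X)
print(X_low.shape)  # (1000, 50)
```

This is linear dimensionality reduction rather than true manifold learning, but it illustrates why the sparse-input requirement matters when picking a method for NLP features.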

answered Jun 07 '12 at 09:31

Alexandre Passos ♦


User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.