What NLP problem would you tackle if computers were a million times faster? And how would you model it?

I think one obvious answer is that we'd be able to tackle the web, which to me means parse it and run all the algorithms we have: information extraction, named entity recognition, coreference resolution, temporal order identification, and so on. Taking my search-UI-tinted view, I'd then want to know what kinds of user interfaces and NLP we'd need in order to use that data to give people a general sense of a topic they're interested in. Maybe this is only interesting to me because I'm exploring it on the NYTimes 1987-2007 corpus, but the goal is to move beyond a search box and a list of results and give people a better overview of a topic. How would we frame this in terms of concrete NLP problems? What kind of interface would allow users to explore, or even ask about, such a thing?
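As a rough illustration of the per-document processing this would involve, here is a minimal sketch, assuming spaCy as the toolkit and a made-up pair of documents standing in for corpus articles (both are my assumptions, not part of the original answer). It runs tokenization, tagging, parsing, and named entity recognition over each document and tallies the entities you'd roll up into a topic overview:

```python
import spacy
from collections import Counter

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Hypothetical stand-ins for articles from a corpus like NYTimes 1987-2007
docs = [
    "The New York Times reported that the senator visited Berlin in 1992.",
    "Apple acquired the startup for an undisclosed sum last Tuesday.",
]

# nlp.pipe streams documents through tokenization, tagging, parsing, and NER
entity_counts = Counter()
for doc in nlp.pipe(docs):
    for ent in doc.ents:
        entity_counts[(ent.text, ent.label_)] += 1

# The raw material for a topic overview: entities and how often they occur
for (text, label), count in entity_counts.most_common():
    print(f"{label:<10} {text:<30} {count}")
```

At web scale you'd shard the document stream across machines and layer coreference and temporal ordering on top, but the shape stays the same: per-document annotations aggregated into corpus-level statistics that an interface could let users browse.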
I still think the biggest and most important question that NLP experts need to answer is machine translation. Chris Manning's book has a very nice discussion of why this is important.

If I could vote this answer up more than once, I would. But it's not quite clear that the only bottleneck to human-level machine translation performance is machine speed. You can make some pretty convincing arguments that open-domain MT is an AI-complete problem.
(Jul 08 '10 at 23:24)
Andrew Rosenberg