I have a doubt that I tried to resolve myself, but could not. Please help me understand with a simple example:

In structured prediction, we predict an output structure, i.e., we select the output structure from the set of all possible outputs. I could not understand: how do we get the set of possible outputs?

asked Feb 16 '12 at 09:28

Kuri_kuri


One Answer:

You assume, using what you know about the problem, what the output structures look like. For parsing, for example, the structures can be binary trees whose leaves are the observed words.

answered Feb 16 '12 at 12:27
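As an illustration of what "the set of possible outputs" means for parsing, here is a hypothetical sketch (the function and sentence are mine, not from the thread) that enumerates every binary bracketing of a short sentence — exactly the output space the answer describes:

```python
def bracketings(words):
    """Return all binary trees (as nested tuples) whose leaves are `words`."""
    if len(words) == 1:
        return [words[0]]
    trees = []
    # Try every split point; combine each left subtree with each right subtree.
    for split in range(1, len(words)):
        for left in bracketings(words[:split]):
            for right in bracketings(words[split:]):
                trees.append((left, right))
    return trees

outputs = bracketings(["the", "dog", "barks"])
# Two possible structures: (("the", "dog"), "barks") and ("the", ("dog", "barks"))
```

A structured predictor would then score each candidate tree and return the highest-scoring one; the number of bracketings grows as the Catalan numbers, which is why exhaustive enumeration quickly becomes impractical.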


Alexandre Passos ♦

Let's take the example of Semantic Role Labeling (SRL): we have a parse tree as input, and we extract features from the tree. Based on those features, we try to learn the parameters. For an input parse tree, we also know the arguments that need to be labeled with roles. Say we have 10 different roles and 5 arguments; in this case we can have 10^5 different labelings. Is it a good idea to take every possible output tree as a candidate, calculate the score of each one, and predict the tree with the highest score? I am not entirely clear on this. Can you please explain using the SRL example?

(Feb 16 '12 at 16:07) Kuri_kuri
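The exhaustive approach asked about in the comment above can be sketched as follows. This is a toy illustration: the role inventory and the scoring function are made up, standing in for a learned model.

```python
import itertools

ROLES = [f"R{i}" for i in range(10)]   # hypothetical inventory of 10 roles
N_ARGS = 5                             # 5 arguments to label

def score(labeling):
    # Stand-in for a learned scoring function over (argument, role) features.
    return sum(ROLES.index(role) * (i + 1) for i, role in enumerate(labeling))

# All 10**5 = 100,000 complete labelings, scored exhaustively.
all_labelings = itertools.product(ROLES, repeat=N_ARGS)
best = max(all_labelings, key=score)
```

This works for 100,000 candidates, but the point of the discussion below is that the output space grows exponentially with the number of arguments, so exhaustive scoring does not scale.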

Examining all possible labelings exhaustively will be expensive. History-based models assign a probability to each intermediate decision in the search space, so you can avoid exploring paths with low probability. You could, for example, do a best-first search.

(Feb 22 '12 at 04:50) Joseph Turian ♦♦
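The best-first search suggested above can be sketched with a priority queue over partial labelings. The per-decision costs here are invented placeholders for a history-based model's negated log-probabilities; with non-negative step costs, the first complete labeling popped from the queue is the cheapest one, and most of the exponential space is never expanded.

```python
import heapq

ROLES = ["R0", "R1", "R2"]   # small hypothetical role inventory
N_ARGS = 3                   # arguments to label

def step_cost(position, role):
    # Stand-in for a history-based model's cost (e.g. negated log-probability)
    # of assigning `role` at this position; lower cost = more probable.
    return {"R0": 2.0, "R1": 0.5, "R2": 1.0}[role] * (position + 1)

def best_first():
    # Each queue entry: (cost of the partial labeling, the labeling so far).
    frontier = [(0.0, ())]
    while frontier:
        cost, partial = heapq.heappop(frontier)
        if len(partial) == N_ARGS:
            return partial, cost   # first complete labeling popped is cheapest
        for role in ROLES:
            heapq.heappush(frontier,
                           (cost + step_cost(len(partial), role),
                            partial + (role,)))

labeling, cost = best_first()
```

For tighter pruning one could also use beam search or A* with an admissible heuristic; the queue-based skeleton stays the same.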

powered by OSQA

User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.