Many structured inference algorithms (for example, this paper, page 3, just above Claim 3.2) rely on the loss-augmented inference step (during training) being able to return the top two best structures. I can see that this is trivial if we use a beam-search approach for loss-augmented decoding, but how is it straightforward for other inference problems (e.g., ILP-based inference)? Some sketched examples would be great!
Amir Globerson's group has done some work on the M-best MAP problem which should be relevant. Essentially, the key is to add constraints to your LP/ILP that cut off the MAP already obtained, while ensuring that the new vertices created in your polytope (i.e., vertices which were not there before and hence are not of interest to you) do NOT correspond to the MAP of the resulting problem. http://www.cs.huji.ac.il/~gamir/pubs/fro_glo_nbest.pdf
Just what I needed! Thanks.
(Jan 24 '14 at 21:04)
shyamupa
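For the fully integral (exact ILP) case, the "cut off the MAP" idea from the comment above can be written as a single linear cut; here is a minimal sketch, with y^* denoting the incumbent MAP assignment over binary variables (notation assumed here, not taken from the thread or the paper).

```latex
% Sketch of a "no-good" cut for the exact ILP case: exclude the incumbent MAP
% y^* \in \{0,1\}^d while leaving every other integral vertex feasible.
\[
  \sum_{i \,:\, y^*_i = 1} (1 - y_i) \;+\; \sum_{i \,:\, y^*_i = 0} y_i \;\ge\; 1
\]
% Re-maximizing the (loss-augmented) score subject to this extra constraint gives
% the second-best structure; adding one such cut per solution found yields M-best.
% The linked paper addresses the harder setting where inference runs over an LP
% relaxation, and simply cutting off a vertex can create new fractional vertices --
% the issue the comment above alludes to.
```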
For ILP-based inference with binary variables, you can easily find the best assignment that disagrees with your one-best output on at least one variable. This can be done with a linear constraint that counts how many of the variables active in the optimal solution are also active in the new solution, and requires that count to be at most n-1 (n being the number of active variables in the optimum). I think this is incomplete: you should also consider outputs that disagree with the one-best only by having more variables active (they keep all n of its active variables and add others, so the count stays at n and the constraint wrongly excludes them). But overall, I get the idea. Thanks.
(Jan 24 '14 at 21:00)
shyamupa
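A minimal runnable sketch of the complete exclusion constraint (Hamming distance at least 1 from the incumbent), using PuLP; the variable names, scores, and the toy structural constraint are invented for illustration and are not from the thread.

```python
# Toy second-best ILP sketch using PuLP (pip install pulp).
# The point is the "no-good" cut, which excludes exactly the incumbent
# solution and nothing else.
import pulp

scores = {"a": 3.0, "b": 2.0, "c": -1.0, "d": 0.5}  # hypothetical local scores

x = {k: pulp.LpVariable(f"x_{k}", cat="Binary") for k in scores}

prob = pulp.LpProblem("map_inference", pulp.LpMaximize)
prob += pulp.lpSum(scores[k] * x[k] for k in scores)  # objective
prob += x["a"] + x["c"] <= 1                          # example structural constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
best = {k: int(x[k].value()) for k in scores}
print("1-best:", best)

# Exclude the incumbent: require Hamming distance >= 1 from `best`.
# Note this keeps supersets of the active set feasible, unlike the
# "sum of previously active variables <= n-1" cut discussed above.
prob += (
    pulp.lpSum(1 - x[k] for k in scores if best[k] == 1)
    + pulp.lpSum(x[k] for k in scores if best[k] == 0)
    >= 1
)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
second = {k: int(x[k].value()) for k in scores}
print("2-best:", second)
```

Adding one such cut per solution found and re-solving in a loop gives the M-best list for the exact ILP.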