|
Yesterday's lecture by Hang Li on ranking included some neat ideas. Slides here. One of them, though, still doesn't click with me: the idea of ranking SVMs. The basic idea is to compare entries pairwise, forming the difference X_i - X_{i-1}, and force its score to be above zero via SVM classification. I find it far more intuitive to build a generative model that produces rankings from a distribution over permutations, since then you are working with the entire list at once. In my opinion, with the SVM approach you are highly prone to some kind of overfitting of the rankings, and if you want to rank a new list based on the same factors you cannot use your SVM; you need to generate a new one. What is your opinion on this topic?
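To make sure I understand the pairwise setup, here is a minimal sketch of how I picture it, using scikit-learn's LinearSVC on difference vectors. The toy data, feature dimensions, and choice of LinearSVC are my own assumptions, not anything from the lecture:

```python
# Sketch of the pairwise ranking-SVM idea on made-up data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy data: one query with 6 documents, 4 features each, and graded relevance labels.
X = rng.normal(size=(6, 4))          # feature vectors x_i
y = np.array([3, 2, 2, 1, 0, 0])     # relevance grades (higher = better)

# Build pairwise difference vectors: for every pair (i, j) with y_i > y_j,
# the difference x_i - x_j should be classified as positive (and x_j - x_i as negative).
diffs, signs = [], []
for i in range(len(y)):
    for j in range(len(y)):
        if y[i] > y[j]:
            diffs.append(X[i] - X[j])
            signs.append(+1)
            diffs.append(X[j] - X[i])
            signs.append(-1)

# An ordinary linear SVM on the differences (without an intercept) is the pairwise trick.
svm = LinearSVC(fit_intercept=False, C=1.0)
svm.fit(np.array(diffs), np.array(signs))

# The learned weight vector w defines a scoring function s(x) = w . x,
# so ranking any list with the same features just means sorting by that score.
w = svm.coef_.ravel()
scores = X @ w
print("predicted order:", np.argsort(-scores))
```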
|
There are many listwise methods of learning to rank, including some that use SVMs. RankSVMs can indeed overfit in particular ways (for example, by learning to get the head of the list wrong so it can make fewer ranking errors on the tail of the list), but most of the time they are actually pretty useful, and you can train one really quickly using something like sofia. The Microsoft Research LETOR website keeps a large, up-to-date list of papers on learning to rank, and it provides references to many listwise methods. They are, however, generally discriminative and far more complex than pairwise methods to understand and implement.
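To make the overfitting remark concrete, here is a toy illustration with made-up numbers: a pairwise loss counts every misordered pair equally, so a ranking that is wrong at the head of the list can have fewer pairwise errors than one that is wrong in the tail, even though a top-heavy metric such as NDCG strongly prefers getting the head right.

```python
# Toy comparison of pairwise error count vs. NDCG on two hand-picked rankings.
import numpy as np

grades = {"d0": 3, "d1": 2, "d2": 1, "d3": 1, "d4": 0}   # relevance labels

def pairwise_errors(ranking):
    """Number of document pairs placed in the wrong relative order."""
    return sum(
        1
        for i in range(len(ranking))
        for j in range(i + 1, len(ranking))
        if grades[ranking[i]] < grades[ranking[j]]
    )

def ndcg(ranking):
    """NDCG with the usual (2^rel - 1) / log2(rank + 1) gain and discount."""
    dcg = lambda r: sum((2 ** grades[d] - 1) / np.log2(pos + 2) for pos, d in enumerate(r))
    ideal = sorted(grades, key=grades.get, reverse=True)
    return dcg(ranking) / dcg(ideal)

head_wrong = ["d1", "d0", "d2", "d3", "d4"]   # one misordered pair, but at the top
tail_wrong = ["d0", "d1", "d4", "d2", "d3"]   # two misordered pairs, both near the bottom

for name, r in [("head wrong", head_wrong), ("tail wrong", tail_wrong)]:
    print(f"{name}: pairwise errors = {pairwise_errors(r)}, NDCG = {ndcg(r):.3f}")
```

The "head wrong" ranking has fewer pairwise errors yet a noticeably lower NDCG, which is exactly the trade-off a pairwise objective can be tempted to make.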