Dear metaoptimize community,

I would like to know the easiest way to implement a linear SVM solver when the regularization term dot(w.transposed(), w) (the traditional L2 norm) is replaced by a term dot(dot(w.transposed(), M), w), where M is a user-defined matrix.
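For concreteness, this is presumably the usual soft-margin primal with the quadratic penalty generalized (the hinge loss and the C parameter are the standard SVM ingredients, assumed here rather than stated above):

$$\min_{w,b} \; \tfrac{1}{2}\, w^\top M w \;+\; C \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i (w^\top x_i + b)\bigr)$$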

I have not found an SVM implementation that makes it easy (i.e. as a documented feature) to replace the regularization term. Which implementation/hack would you suggest?

PS: If you wonder why I would need this, you can, for example, consult equation 17 of this paper.

asked Feb 14 '14 at 04:55

Rodrigo Benenson

edited Feb 14 '14 at 04:56


One Answer:

If your matrix in the regularizer is positive definite you can take its square root ( http://en.wikipedia.org/wiki/Square_root_of_a_matrix ) and fold it into the feature vectors of the examples, which should allow you to reduce solving an SVM problem with an arbitrary norm to solving a standard SVM problem. Concretely, if S = M^{1/2} then w^T M w = (Sw)^T (Sw) and w^T x = (Sw)^T (S^{-1} x), so a standard SVM trained on the transformed features S^{-1} x learns v = Sw under the ordinary L2 penalty.
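A minimal sketch of this reduction in Python, assuming NumPy/SciPy/scikit-learn are available (X, y, M and the toy data below are illustrative, not from the thread):

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.svm import LinearSVC

# Toy data (illustrative only): n samples, d features, binary labels.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.1 * rng.normal(size=n) > 0, 1, -1)

# A user-defined positive-definite regularizer matrix M.
A = rng.normal(size=(d, d))
M = A.T @ A + 0.1 * np.eye(d)

S = np.real(sqrtm(M))          # symmetric square root: S @ S ~= M
S_pinv = np.linalg.pinv(S)     # pseudo-inverse covers the semidefinite case

# With v = S w:  w'Mw = v'v  and  w'x = v'(S^-1 x)  (S is symmetric),
# so a standard SVM on S^-1-transformed features regularizes exactly v'v.
X_t = X @ S_pinv

clf = LinearSVC(C=1.0).fit(X_t, y)
v = clf.coef_.ravel()

# Map the learned weights back to the original feature space.
w = S_pinv @ v
assert np.allclose(X_t @ v, X @ w)   # same decision values either way
```

Using the pseudo-inverse rather than a plain inverse, as the comment below notes, also handles the case where M is only positive semidefinite.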

answered Feb 15 '14 at 01:32

Alexandre Passos ♦

That did the trick indeed: I used the pseudo-inverse of the square root to transform the data, and then transformed the learned weights back into the original data domain. Thanks for the suggestion!

(Feb 19 '14 at 16:51) Rodrigo Benenson