The paper "A Generalized Representer Theorem(Bernhard Scholkopf)" provide a representer theorem for followed optimization formulation:
it states that the solution of this objective has this form( f(.) = sum_i a_i K(x_i,.) ) However, we all know that a SVM objective is like this
(1-y_i(wx_i+b))_+ can be a specific L(f), but how can ||w||^2 be a specific g(||f||_H)? It's confusing! Do you have an answer? thank you! |
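For concreteness, here is a minimal sketch of the identification being asked about (my notation, not from the post: assume f(x) = <w, phi(x)> for a feature map phi with K(x, x') = <phi(x), phi(x')>, so that ||f||_H = ||w||):

    L((x_1, y_1, f(x_1)), ..., (x_m, y_m, f(x_m))) = C sum_i (1 - y_i (f(x_i) + b))_+
    g(||f||_H) = (1/2) ||f||_H^2 = (1/2) ||w||^2

Here g(t) = t^2/2 is strictly monotonically increasing on [0, inf), so the assumption on g is satisfied; the offset b is the part that needs the paper's semiparametric extension of the theorem.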
The second equation you've shown is the primal SVM objective; it can be reformulated as a dual objective using the representer theorem applied to the first formulation. Any book on SVMs will have it, e.g. Learning with Kernels by Schölkopf and Smola; it is probably on Wikipedia too.

Thanks! It's still confusing that in the representer theorem a solution hat{f} can only be found in the RKHS (generated by a positive definite kernel). What if the solution f is not in the RKHS?
(Dec 15 '13 at 10:50)
etali
If I understand your question correctly: the representer theorem states that the solution has to be in the RKHS; the minimization is taken over f in H to begin with, so functions outside the RKHS are not candidates at all.
(Dec 16 '13 at 19:37)
digdug
I see. Thank you! I have another question: when a new formulation comes up, how do you judge whether the representer theorem can be applied to it?
(Dec 17 '13 at 03:22)
etali
Not sure what you mean by new formulation, but lots of methods can be expressed in terms of positive definite kernels, like linear and logistic regression, PCA, etc.
(Dec 19 '13 at 01:47)
digdug
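As a concrete illustration of this point, here is a small numerical sketch (my own, not from the thread; names and data are made up): ordinary ridge regression kernelized with the linear kernel, showing that the expansion f(.) = sum_i a_i K(x_i, .) predicted by the representer theorem recovers exactly the primal solution.

    # Assumed example: ridge regression as an instance of the representer theorem.
    # With L = squared loss and g(||f||_H) = lam * ||f||_H^2, the minimizer is
    # f(.) = sum_i a_i K(x_i, .) with a = (K + lam*I)^{-1} y.
    import numpy as np

    rng = np.random.default_rng(0)
    m, d, lam = 50, 3, 0.1
    X = rng.normal(size=(m, d))                      # m samples, d features
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=m)

    # Dual / kernel solution with the linear kernel K(x, x') = <x, x'>.
    K = X @ X.T
    a = np.linalg.solve(K + lam * np.eye(m), y)

    # Primal solution of the same objective: min_w ||Xw - y||^2 + lam * ||w||^2.
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    # Representer theorem: f(x) = sum_i a_i <x_i, x> = <X^T a, x>,
    # so X^T a should coincide with the primal weight vector w.
    print(np.allclose(X.T @ a, w))                   # prints True

Swapping in a different loss (e.g. the logistic loss) keeps the same expansion form; only the coefficients a_i change, which is what makes kernelized versions of those other methods possible.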
Actually, I mean there is a model with an objective function. How do you judge whether the representer theorem can be applied to this objective function? I see that in least squares, logistic regression and SVM there is a dot product of w (the parameters) and x (the sample) in their objective functions, so the representer theorem can easily be applied to them? But what if there is no dot product in the objective function?
(Dec 19 '13 at 07:42)
etali
If there is no dot product then I don't know, but many commonly used methods can be expressed in that form anyway.
(Dec 21 '13 at 17:35)
digdug