From the megam documentation:

    The Gaussian prior is equivalent to a squared L2 regularizer. Hence the
    following two command-line parameters will do:

    -lambda <float>   specify the precision of the Gaussian prior (default: 1)
    -tune             tune lambda using repeated optimizations (starts with the
                      specified -lambda value and drops it by half each time
                      until the optimal dev error rate is achieved)

I am calling megam via NLTK. Is there any way to specify these parameters, or should I chuck megam and go for LogisticRegression in scikits.learn with an L2 penalty?
(May 12 '11 at 13:06)
Dexter
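For the scikits.learn route, here is a minimal sketch (it uses the modern sklearn package name and toy placeholder data; the C values are arbitrary). Note that sklearn parameterizes the penalty as C, the inverse of the regularization strength, so roughly C ~ 1/lambda, and a cross-validated search over C plays a role similar to megam's -tune:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    # Toy data standing in for features extracted via NLTK (placeholder).
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    # L2-penalized logistic regression; larger lambda (stronger Gaussian
    # prior) corresponds to smaller C, roughly C ~ 1/lambda.
    clf = LogisticRegression(penalty="l2", C=1.0)
    clf.fit(X, y)

    # Rough analogue of megam's -tune: pick C by cross-validated grid search.
    search = GridSearchCV(
        LogisticRegression(penalty="l2"),
        {"C": [0.01, 0.1, 1.0, 10.0, 100.0]},
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_)

(May 12 '11 at 13:40)
Dexter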
Read the source, Luke :)
(May 13 '11 at 05:07)
ogrisel
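Taking the hint and reading nltk/classify/megam.py and nltk/classify/maxent.py: NLTK's megam wrapper appears to expose the prior through a gaussian_prior_sigma keyword, which it translates into megam's -lambda option (the precision of a Gaussian is 1/sigma**2). A minimal sketch, assuming the megam binary is installed and the training tokens are placeholders:

    from nltk.classify import MaxentClassifier
    # from nltk.classify.megam import config_megam
    # config_megam("/path/to/megam")  # only needed if megam is not on PATH

    # Placeholder (featureset, label) pairs in NLTK's usual training format;
    # build these from your own corpus.
    train_toks = [({"word": "good"}, "pos"), ({"word": "bad"}, "neg")]

    # gaussian_prior_sigma is forwarded to megam as its -lambda option,
    # with lambda = 1 / sigma**2 (assumption based on reading the source).
    classifier = MaxentClassifier.train(
        train_toks, algorithm="megam", gaussian_prior_sigma=1.0
    )

(May 13 '11 at 09:12)
Dexter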