Hi,

I am working on implementing a Logistic Regression model, using the newton-cg and lbfgs optimisers provided by scipy as the backend. I find that problems in which I fit the intercept are about 50% slower than those in which I don't (fitting the intercept is equivalent to appending a column of ones to X).

I am guessing that the reason for this is the difference in scale between the other features in the training data and the column of ones. Is there any way I can precondition/scale the intercept so that this difference is nullified and the fit becomes faster? Or is my guess that the fit-intercept case is slower for this reason completely wrong?
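For reference, a minimal sketch of the setup being described: the intercept folded in as an explicit column of ones, optionally multiplied by a preconditioning factor `s` (with `s = 1` reproducing the plain column of ones), and the weights fitted with `scipy.optimize.minimize`. The data, the regularization strength `lam`, and the scaling factor `s` here are all illustrative assumptions, not from the original post.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic, roughly separable data (illustrative only).
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.7 + rng.normal(scale=0.1, size=n) > 0).astype(float)

def make_problem(X, y, s=1.0, lam=1.0):
    """Build L2-regularized logistic loss/gradient for X augmented with a
    column of constant value `s` (the scaled intercept column)."""
    Xa = np.hstack([X, np.full((X.shape[0], 1), s)])
    ypm = 2 * y - 1  # labels in {-1, +1}

    def loss(w):
        # sum_i log(1 + exp(-y_i * x_i @ w)) + (lam/2) ||w||^2
        # (regularizing the intercept too, for simplicity of the sketch)
        return np.logaddexp(0, -ypm * (Xa @ w)).sum() + 0.5 * lam * w @ w

    def grad(w):
        p = 1.0 / (1.0 + np.exp(ypm * (Xa @ w)))
        return -(Xa * (ypm * p)[:, None]).sum(axis=0) + lam * w

    return loss, grad, Xa

loss, grad, Xa = make_problem(X, y, s=1.0)
res = minimize(loss, np.zeros(X.shape[1] + 1), jac=grad, method="L-BFGS-B")
print(res.nit, res.fun)
```

Comparing `res.nit` across different values of `s` (or with the intercept column dropped entirely) is one way to check whether the scale of the ones column is actually what slows the solver down.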

asked Jul 19 '14 at 13:50


Manoj Kumar

