Hi, I am implementing a logistic regression model using the newton-cg and lbfgs optimizers provided by SciPy as the backend. I find that problems in which I fit the intercept are roughly 50% slower than those in which I don't (fitting the intercept is equivalent to appending a column of ones to X). My guess is that the slowdown comes from the difference in scale between the column of ones and the other features in the training data. Is there any way I can precondition/scale the intercept column so that this difference is nullified and the fit becomes faster? Or is my guess about why the fit-intercept case is slower completely wrong?
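To make the question concrete, here is a minimal sketch of the setup I mean: logistic regression fitted with `scipy.optimize.minimize`, with the intercept handled by appending a column of ones to X, and a second fit where that column is multiplied by a scale factor `s` (the factor and all variable names here are just illustrative, not from any particular library). Since scaling the intercept column by `s` only rescales the corresponding coefficient by `1/s`, the two fits should recover the same intercept:

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    # numerically safe logistic function
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def nll_and_grad(w, X, y):
    # mean negative log-likelihood of logistic regression and its gradient
    p = sigmoid(X @ w)
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (sigmoid(X @ true_w + 0.5) > rng.random(n)).astype(float)

# fit_intercept=True analogue: append a column of ones to X
Xa = np.hstack([X, np.ones((n, 1))])
res = minimize(nll_and_grad, np.zeros(d + 1), args=(Xa, y),
               jac=True, method="L-BFGS-B")

# hypothetical preconditioning: scale the intercept column by s,
# then undo the scaling on the recovered coefficient
s = 10.0
Xs = np.hstack([X, np.full((n, 1), s)])
res_s = minimize(nll_and_grad, np.zeros(d + 1), args=(Xs, y),
                 jac=True, method="L-BFGS-B")
intercept_plain = res.x[-1]
intercept_scaled = res_s.x[-1] * s
```

The question is whether choosing `s` to match the scale of the other columns (or some other preconditioning of that coordinate) actually improves the optimizer's convergence, or whether the slowdown has a different cause.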