From a practical perspective, what you really want to know is whether you are overfitting the training data. A good way to test this is to compare your mean error on the training set with your mean error on the test set. If the two are about equal, you are probably not overfitting (though you may still be underfitting). If your training error is significantly lower than your test error, that indicates overfitting. Ideally, your model should be as complex as possible (e.g., the regularization should be as weak as possible) while keeping the training and test errors roughly equal to each other.
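As a rough illustration, here is a minimal sketch of that train-vs-test comparison. It uses polynomial degree as a stand-in for model complexity (higher degree ≈ weaker regularization); the data, degrees, and seed are made up for the example:

```python
import numpy as np

# Synthetic data for illustration: y = sin(x) + noise.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 60)
y = np.sin(x) + rng.normal(0, 0.3, 60)

# Simple train/test split.
x_train, y_train = x[:40], y[:40]
x_test, y_test = x[40:], y[40:]

def mse(coeffs, xs, ys):
    """Mean squared error of a fitted polynomial on (xs, ys)."""
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2)

# Fit polynomials of increasing degree and compare errors.
results = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    results[degree] = (mse(coeffs, x_train, y_train),
                       mse(coeffs, x_test, y_test))
    print(f"degree {degree:2d}: train MSE = {results[degree][0]:.4f}, "
          f"test MSE = {results[degree][1]:.4f}")
```

Training error can only go down as the degree grows (each higher-degree model nests the lower-degree ones), so the thing to watch is the gap: when test error stops tracking training error and starts growing, you have crossed into overfitting.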
answered Feb 22 '12 at 11:05 by Kevin Canini