I'd like to train a multi-class classifier and have a doubt regarding the procedure for training, validating and testing it. Here's my understanding of what should be done:
Firstly, is this procedure correct? If it is, then which model should be used to assess the classifier's performance on the test set (step 7)? Since the output of step 6 would be k different models (one per iteration of the cross-validation process), which one should I use for step 7? -A

PS: I wanted to make steps 3-6 look like sub-steps of step 2, but the weird formatting options wouldn't let me do that!
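In code, roughly what I have in mind (a minimal sketch; scikit-learn, a placeholder logistic-regression classifier, k = 5 and a toy parameter grid are all my own assumptions, not part of the procedure itself):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, train_test_split

# Placeholder data; any multi-class dataset would do.
X, y = load_iris(return_X_y=True)

# Hold out 20% as the final test set; never touch it while tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)

param_grid = [0.01, 0.1, 1.0, 10.0]            # placeholder hyperparameter values
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# k-fold cross-validation on the 80% training portion only.
cv_scores = {}
for C in param_grid:
    fold_scores = []
    for tr_idx, va_idx in cv.split(X_train, y_train):
        model = LogisticRegression(C=C, max_iter=1000)
        model.fit(X_train[tr_idx], y_train[tr_idx])
        fold_scores.append(model.score(X_train[va_idx], y_train[va_idx]))
    cv_scores[C] = np.mean(fold_scores)

best_C = max(cv_scores, key=cv_scores.get)

# Retrain a single new model on the whole training set with the best parameters.
final_model = LogisticRegression(C=best_C, max_iter=1000)
final_model.fit(X_train, y_train)

# Report performance once, on the untouched 20% test set.
print("test accuracy:", final_model.score(X_test, y_test))
```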
6) Calculate the average and the standard error of that average (i.e. the standard deviation of the k test errors divided by sqrt(k-1)), and use it to fine-tune the classifier parameters. The standard error gives you error bars, which tell you whether your parameter test profile is trustworthy or just noise (see the sketch after this list).

7) Retrain a new model on the whole training set (80%) with the best parameters identified in step 6.

8) Assess the performance of that best model configuration on the test set (20% of the original data).
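A small sketch of the error-bar computation in step 6, assuming NumPy and hypothetical fold errors (the population-standard-deviation / sqrt(k-1) form written above is mathematically the same as the sample-standard-deviation / sqrt(k) form used here):

```python
import numpy as np

fold_errors = np.array([0.12, 0.15, 0.11, 0.14, 0.13])   # hypothetical k = 5 fold test errors
k = len(fold_errors)

mean_error = fold_errors.mean()
# Standard error of the mean: sample std / sqrt(k),
# equivalent to population std / sqrt(k-1) as in step 6.
std_error = fold_errors.std(ddof=1) / np.sqrt(k)

# If two parameter settings differ by much less than their error bars,
# the difference is probably noise rather than a real improvement.
print(f"CV error: {mean_error:.3f} +/- {std_error:.3f}")
```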
How do I use early stopping when training an MLP? Each fold will reach its minimum validation error at a different epoch. For the final training (step 7), when do I stop?

(Sep 09 '13 at 11:57)
Ng0323
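To illustrate the issue in the comment above, here is a minimal sketch, assuming scikit-learn's MLPClassifier and the same 80/20 split as before: it records the epoch with the lowest validation error in each fold, and then, as one common heuristic (my own assumption, not something stated in this thread), trains the final model for roughly the average of those epochs.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data and 80/20 split, as in the earlier sketch.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)
classes = np.unique(y_train)

def best_epoch_for_fold(X_tr, y_tr, X_va, y_va, max_epochs=200):
    """Train one epoch at a time and return the epoch with the lowest validation error."""
    mlp = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
    best_err, best_epoch = np.inf, 0
    for epoch in range(1, max_epochs + 1):
        mlp.partial_fit(X_tr, y_tr, classes=classes)   # one pass over the fold's training data
        err = 1.0 - mlp.score(X_va, y_va)
        if err < best_err:
            best_err, best_epoch = err, epoch
    return best_epoch

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
best_epochs = [
    best_epoch_for_fold(X_train[tr], y_train[tr], X_train[va], y_train[va])
    for tr, va in cv.split(X_train, y_train)
]
print("best epoch per fold:", best_epochs)   # typically differs from fold to fold

# Heuristic (assumed, not from the thread): train the final model for about the
# average of the per-fold best epochs, since no validation data is left once the
# whole 80% training set is used for the final fit.
final_epochs = int(round(np.mean(best_epochs)))
final_mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=final_epochs, random_state=0)
final_mlp.fit(X_train, y_train)
print("test accuracy:", final_mlp.score(X_test, y_test))
```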