I haven't found any literature on the application of Random Forests to MNIST, CIFAR, STL-10, etc., so I thought I'd try them on the permutation-invariant MNIST benchmark myself.

In R, I tried the randomForest package, with a call of the form randomForest(x_train, factor(y_train)) and default settings.
This ran for 2 hours and got a 2.8% test error.

I also tried scikit-learn's RandomForestClassifier.
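A minimal, self-contained sketch of that run (the fetch_openml loading step and the exact tree count are assumptions; the tenfold runtime gap against the 200-tree run mentioned below suggests the slow run used n_estimators=2000):

    from sklearn.datasets import fetch_openml
    from sklearn.ensemble import RandomForestClassifier

    # mnist_784 keeps the standard split: first 60000 train, last 10000 test.
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    X_train, y_train = X[:60000], y[:60000]
    X_test, y_test = X[60000:], y[60000:]

    # A large forest on raw pixels; default max_features (sqrt of 784 = 28).
    clf = RandomForestClassifier(n_estimators=2000)
    clf.fit(X_train, y_train)
    print("test error: %.2f%%" % (100 * (1 - clf.score(X_test, y_test))))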
After 70 minutes, I got a 2.9% test error, but with n_estimators=200 instead, I got a 2.8% test error after just 7 minutes.

With OpenCV, I tried its random trees implementation.
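A sketch of that attempt against the old OpenCV 2.x Python binding (cv2.RTrees and cv2.CV_ROW_SAMPLE; in OpenCV 3+ the same model moved to cv2.ml.RTrees_create with cv2.ml.ROW_SAMPLE). The placeholder arrays stand in for the real MNIST data:

    import cv2          # OpenCV 2.x API; the RTrees binding changed in 3.x
    import numpy as np

    # Placeholders for the real data: 28x28 grayscale images, digit labels.
    images = np.random.rand(1000, 28, 28).astype(np.float32)
    labels = np.random.randint(0, 10, 1000)

    rf = cv2.RTrees()
    # CV_ROW_SAMPLE: each row of the training matrix is one sample.
    rf.train(images.reshape(-1, 28 ** 2),
             cv2.CV_ROW_SAMPLE,
             labels.astype(np.int32))

    # The 2.x binding predicts one sample at a time.
    preds = np.array([rf.predict(row) for row in images.reshape(-1, 28 ** 2)])

(Whether this binding treats an integer response as categorical without an explicit varType mask is unclear to me, so it may well be fitting a regressor rather than a classifier.)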
This ran for 6.5 minutes.

For neural networks, the state of the art on the permutation-invariant MNIST benchmark is 0.8% test error, although training would probably take more than 2 hours on one CPU.

Is it possible to do much better than the 2.8% test error on MNIST using Random Forests? I thought the general consensus was that Random Forests are usually at least as good as kernel SVMs, which I believe can get a 1.4% test error.