These days, deep learning methods are usually the state of the art on most if not all ML benchmarks: MNIST, CIFAR, STL-10, ImageNet, TIMIT. However, Random Forests are still widely used in practice and presumably have their advantages. How well do they do on the popular ML benchmarks? (Published papers usually focus on the absolute winners, so I had difficulty finding relevant results) |