Just googling for CIFAR-100, I can't turn up any work on it other than Alex Krizhevsky's.
You can find a crowd-sourced list at http://rodrigob.github.com/are_we_there_yet/build/classification_datasets_results.html#43494641522d313030 (new entries welcome!)
I'm pretty sure there was some work from other people in Hinton's group, but I'm not sure who it was. The dataset really isn't used much, though, so I recommend you use another one if you're trying to benchmark your system. Why do you want to use this particular dataset?
CIFAR-100 is used in the Transfer Learning Challenge for the NIPS workshops this year. I'm entering the challenge and am trying to get an idea of how competitive my submission is.
Ian Goodfellow (Oct 22 '11 at 11:11)
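Since several error numbers get quoted below, here's a minimal sketch of how one might load the python-pickled CIFAR-100 release and score top-1 error on the fine labels. The file layout (a 'test' file with 'data' and 'fine_labels' keys) follows the dataset's README; my_model in the usage comment is purely hypothetical.

    import pickle
    import numpy as np

    def load_cifar100_split(path):
        # The CIFAR-100 'train'/'test' files are Python 2 pickles;
        # encoding='latin1' lets Python 3 read them.
        with open(path, 'rb') as f:
            d = pickle.load(f, encoding='latin1')
        images = d['data'].reshape(-1, 3, 32, 32)   # (N, 3, 32, 32) uint8
        labels = np.asarray(d['fine_labels'])       # 100-way fine labels
        return images, labels

    def top1_error(predictions, labels):
        # Fraction of wrong predictions; accuracy = 1 - error,
        # so e.g. 53% accuracy corresponds to 47% error.
        return float(np.mean(predictions != labels))

    # Hypothetical usage:
    # images, labels = load_cifar100_split('cifar-100-python/test')
    # print(top1_error(my_model.predict(images), labels))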
Alex only had results on CIFAR-10. I asked him whether he ever ran on CIFAR-100 and he said no. Anyway... I'm getting around 50% error on that data set. I believe the best result on CIFAR-10 is around 25% error (in a paper from Andrew Ng's group).
The best published result on CIFAR-10 is Coates and Ng, ICML 2011, at 81.5% accuracy (i.e., 18.5% error).
You should really email Alex Krizhevsky and ask him to post a table of the best results he knows of for both datasets. I suspect he has unpublished CIFAR-10 results that are much better than the 18.5% error, although that isn't what you asked about.
I asked Alex last month, and he didn't have any results on CIFAR-100 then.
@Ian: I'm curious too... what accuracy are you getting on CIFAR-100? I believe my best so far is 53% accuracy.
@Laurens: I got 52.6%, but that was using extra unlabeled data, not just the standard CIFAR-100 training set. I have only run one experiment though, so I don't know much about the effect of hyperparameters, or whether the extra unlabeled data was important.
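For concreteness, one generic way extra unlabeled data gets used is self-training (pseudo-labeling). This is just an illustrative sketch of that idea, not a description of the experiment above; the logistic-regression classifier and the 0.9 confidence cutoff are arbitrary choices.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_training_round(X_lab, y_lab, X_unlab, threshold=0.9):
        # Fit on the labeled set, then promote unlabeled points
        # the classifier is confident about into the labeled set.
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_lab, y_lab)
        pred = clf.predict(X_unlab)                    # hard pseudo-labels
        conf = clf.predict_proba(X_unlab).max(axis=1)  # model confidence
        keep = conf >= threshold                       # arbitrary cutoff
        X_new = np.vstack([X_lab, X_unlab[keep]])
        y_new = np.concatenate([y_lab, pred[keep]])
        return X_new, y_new, clf

Repeating this for a few rounds grows the effective training set with the model's own confident predictions.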
Is there any paper that does semi-supervised learning on CIFAR-10?