A more ambitious image classification dataset: CIFAR-100
Keras tutorial, CIFAR-100
Objectives
We now turn to the more difficult problem of classifying RGB images into one of 100 classes with the CIFAR-100 dataset. CIFAR-100 consists of 60000 32x32 colour images in 100 classes, with 600 images per class; there are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses, so each image comes with a “fine” label (the class to which it belongs) and a “coarse” label (the superclass to which it belongs). Coarse labels include, for example, fruits, fish, aquatic mammals, vehicles, …, while fine labels include seal, whale, orchid, bicycle, bus, … Keras provides functions to download and load the CIFAR-100 dataset automatically.
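As a starting point, the dataset can be loaded through the built-in Keras loader; a minimal sketch (the `label_mode` argument selects between the fine and coarse labels):

```python
from tensorflow.keras.datasets import cifar100

# Download (on first call) and load CIFAR-100 with the 100 fine-grained labels;
# pass label_mode="coarse" to get the 20 superclass labels instead.
(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode="fine")

print(x_train.shape)  # (50000, 32, 32, 3)
print(x_test.shape)   # (10000, 32, 32, 3)
```

The images are returned as uint8 arrays in [0, 255], so remember to cast and rescale them before training.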
Classical data augmentation schemes for CIFAR-100 include:
- feature-wise standardization
- horizontal flip
- zero padding of 4 pixels on each side, with random crops of 32x32.
For the last augmentation, you can make use of the width_shift_range and height_shift_range arguments, together with fill_mode="constant" and cval=0.
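Putting these three augmentations together, a minimal sketch with Keras' ImageDataGenerator (passing an integer to width_shift_range/height_shift_range makes the shifts be interpreted in pixels, which approximates the 4-pixel-pad-and-crop scheme):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    featurewise_center=True,              # feature-wise standardization:
    featurewise_std_normalization=True,   # subtract mean, divide by std
    horizontal_flip=True,                 # random horizontal flips
    width_shift_range=4,                  # random shifts of up to 4 pixels,
    height_shift_range=4,                 # emulating pad-4 + random 32x32 crop
    fill_mode="constant",
    cval=0.0,                             # pad revealed borders with zeros
)

# featurewise_* options need dataset statistics before training:
# datagen.fit(x_train)
# model.fit(datagen.flow(x_train, y_train, batch_size=128), ...)
```

Note that `datagen.fit` must be called on the training set so the feature-wise mean and standard deviation are available at transform time.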
Below is a list of recent papers published on arXiv; I propose that you try reimplementing their architectures and training setups:
- MobileNet (Howard et al., 2017) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- SqueezeNet (Iandola et al., 2016) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- DenseNet (G. Huang, Liu, & Weinberger, 2016) Densely Connected Convolutional Networks
- WideResNet (Zagoruyko & Komodakis, 2016) Wide Residual Networks
- Xception (Chollet, 2016) Xception: Deep Learning with Depthwise Separable Convolutions
- NASNet (Zoph, Vasudevan, Shlens, & Le, 2017) Learning Transferable Architectures for Scalable Image Recognition
The following papers are trickier to implement :
- ShuffleNet (Zhang, Zhou, Lin, & Sun, 2017) ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
- ResNet with Stochastic Depth (G. Huang et al., 2016) Deep Networks with Stochastic Depth
- Shake-Shake (Gastaldi, 2017) Shake-Shake regularization
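What makes stochastic depth tricky is that the residual branch must be randomly dropped at training time but rescaled by its survival probability at test time. A minimal sketch of such a layer (a simplified version of the paper's scheme, with a single fixed survival probability rather than the linearly decaying schedule):

```python
import tensorflow as tf

class StochasticDepthResidual(tf.keras.layers.Layer):
    """Residual add that randomly drops the transform branch during training."""

    def __init__(self, survival_prob=0.8, **kwargs):
        super().__init__(**kwargs)
        self.survival_prob = survival_prob

    def call(self, inputs, training=None):
        shortcut, branch = inputs
        if training:
            # Bernoulli gate: keep the whole branch with probability survival_prob
            gate = tf.cast(
                tf.random.uniform([]) < self.survival_prob, branch.dtype
            )
            return shortcut + gate * branch
        # At test time, scale the branch by its expected survival probability
        return shortcut + self.survival_prob * branch
```

A residual block would then be wired as `StochasticDepthResidual(p)([shortcut, branch])` in place of the usual `Add()` layer.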
If you wish to get an idea of the state of the art in 2015 on CIFAR-100, I invite you to visit the classification scores website.