Image Classification for Vehicle Type Dataset Using State-of-the-art Convolutional Neural Network Architecture

Author(s): Yian Seo, Kyung-shik Shin

Author(s): A. Ferreyra-Ramirez, C. Aviles-Cruz, E. Rodriguez-Martinez, J. Villegas-Cortez, A. Zuñiga-Lopez

IoT, 2021, Vol 2 (2), pp. 222-235
Author(s): Guillaume Coiffier, Ghouthi Boukli Hacene, Vincent Gripon

Deep Neural Networks are state-of-the-art in a large number of machine learning challenges. However, to reach the best performance they require a very large number of parameters. Indeed, typical deep convolutional architectures present an increasing number of feature maps as the network gets deeper, while the spatial resolution of the inputs is reduced through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations is performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network to its fullest, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to a maximal parameter factorization. In addition, normalization, non-linearities, downsampling operations, and shortcut connections ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with less than 40 k parameters in total, 74.3% on CIFAR-100 with less than 600 k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15 M parameters. However, the proposed method typically requires more computations than existing counterparts.
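To make the recursive idea concrete, the following PyTorch code is a minimal sketch of a ThriftyNet-style model, not the authors' implementation: a single shared convolution is applied repeatedly, interleaved with per-iteration batch normalization, a ReLU non-linearity, shortcut additions, and occasional max-pooling for downsampling. The class name, filter count, number of iterations, and downsampling schedule are illustrative assumptions.

```python
# A minimal sketch of a ThriftyNet-style model: one convolutional layer reused
# recursively, with normalization, non-linearity, shortcuts and downsampling.
# Hyperparameters below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThriftyNetSketch(nn.Module):
    def __init__(self, n_filters=64, n_iterations=12, downsample_at=(3, 6, 9),
                 in_channels=3, n_classes=10):
        super().__init__()
        # Embed the input image into the shared feature space.
        self.embed = nn.Conv2d(in_channels, n_filters, kernel_size=3, padding=1)
        # The single convolution reused at every iteration: this is where
        # almost all of the parameters live.
        self.shared_conv = nn.Conv2d(n_filters, n_filters, kernel_size=3,
                                     padding=1, bias=False)
        # One BatchNorm per iteration (cheap in parameters) keeps activations stable.
        self.norms = nn.ModuleList([nn.BatchNorm2d(n_filters)
                                    for _ in range(n_iterations)])
        self.downsample_at = set(downsample_at)
        self.n_iterations = n_iterations
        self.classifier = nn.Linear(n_filters, n_classes)

    def forward(self, x):
        h = self.embed(x)
        for t in range(self.n_iterations):
            # Recursive use of the same convolution, with a shortcut connection.
            h = h + F.relu(self.norms[t](self.shared_conv(h)))
            if t in self.downsample_at:
                # Spatial resolution shrinks while the parameter count stays fixed.
                h = F.max_pool2d(h, kernel_size=2)
        # Global average pooling followed by a small linear classifier.
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)
        return self.classifier(h)


if __name__ == "__main__":
    model = ThriftyNetSketch()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params}")          # roughly 40 k with these defaults
    out = model(torch.randn(2, 3, 32, 32))    # a CIFAR-10 sized batch
    print(out.shape)                          # torch.Size([2, 10])
```

With the default settings above, the sketch has on the order of 40 k parameters, the same ballpark as the CIFAR-10 budget quoted in the abstract, because the single shared 3x3 convolution accounts for almost all of them regardless of how many iterations are run.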


2017, Vol 2 (2)
Author(s): Ardian Yusuf Wicaksono, Nanik Suciati, Chastine Fatichah, Keiichi Uchimura, Gou Koutaki
