Static Block Floating-Point Quantization for Convolutional Neural Networks on FPGA

Author(s):  
Hongxiang Fan ◽  
Gang Wang ◽  
Martin Ferianc ◽  
Xinyu Niu ◽  
Wayne Luk
2021 ◽  
Vol 11 (5) ◽  
pp. 2092
Author(s):  
Hong Hai Hoang ◽  
Hoang Hieu Trinh

In this paper, we examine the effect of long skip connections in convolutional neural networks (CNNs) for image (surface-defect) classification. The standard popular models apply only short skip connections inside blocks (layers of the same size). We apply the long version of the residual connection to several proposed models, aiming to reuse spatial information from layers close to the input that would otherwise be lost. In some models, depthwise separable convolution is used instead of traditional convolution in order to reduce both the number of parameters and the number of floating-point operations (FLOPs). Comparative experiments with the newly upgraded models and several popular models were carried out on different datasets, including a Bamboo strips dataset and a reduced version of ImageNet. The modified version of DenseNet-121 (which we call MDenseNet-121) achieves higher validation accuracy while having about 75% of the weights and FLOPs of the original DenseNet-121.
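The parameter saving from depthwise separable convolution mentioned above is easy to verify arithmetically. The following is a minimal sketch (not taken from the paper) that counts the weights of a standard convolution versus its depthwise separable replacement; the layer sizes chosen are illustrative assumptions.

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: each of the c_out output channels has a
    # k x k kernel spanning all c_in input channels.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k kernel per input channel, followed by
    # a 1x1 pointwise convolution that mixes the channels.
    return k * k * c_in + c_in * c_out

# Illustrative example: a 3x3 layer with 128 input and 128 output channels.
standard = conv_params(3, 128, 128)                   # 147456
separable = depthwise_separable_params(3, 128, 128)   # 17536
print(f"reduction: {standard / separable:.1f}x")      # roughly 8.4x
```

For typical layer widths the reduction approaches k*k, which is why MobileNet-style models rely on this factorization.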


2020 ◽  
Vol 10 (6) ◽  
pp. 2096 ◽  
Author(s):  
Minjun Jeon ◽  
Young-Seob Jeong

Scene text detection is the task of detecting word boxes in given images. The accuracy of text detection has been greatly improved by deep learning models, especially convolutional neural networks. Previous studies commonly aimed at developing more accurate models, but those models became computationally heavy and less efficient. In this paper, we propose a new efficient model for text detection. The proposed model, namely the Compact and Accurate Scene Text detector (CAST), consists of MobileNetV2 as a backbone and a balanced decoder. Unlike previous studies that used standard convolutional layers as a decoder, we carefully design the balanced decoder. Through experiments with three well-known datasets, we demonstrate that the balanced decoder and the proposed CAST are efficient and effective. CAST was about 1.1x worse in terms of the F1 score, but 30∼115x better in terms of floating-point operations (FLOPs).
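FLOPs comparisons like the one above are usually obtained by summing per-layer operation counts. As a hedged sketch (the exact accounting used in the paper is not stated here), the following estimates the FLOPs of a single convolutional layer under the common convention that one multiply-accumulate counts as two operations; the layer dimensions are made-up examples.

```python
def conv_flops(k, c_in, c_out, h_out, w_out):
    # Each of the h_out * w_out * c_out output elements requires
    # k * k * c_in multiply-accumulates; one MAC = 2 FLOPs.
    return 2 * k * k * c_in * c_out * h_out * w_out

# Illustrative example: a 3x3 conv, 32 -> 64 channels, 128x128 output map.
flops = conv_flops(3, 32, 64, 128, 128)
print(f"{flops / 1e9:.2f} GFLOPs")  # prints "0.60 GFLOPs"
```

Summing such per-layer estimates over two architectures gives the kind of 30∼115x efficiency ratios reported for CAST versus heavier detectors.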


Author(s):  
Vincent W.-S. Tseng ◽  
Sourav Bhattacharya ◽  
Javier Fernández Marqués ◽  
Milad Alizadeh ◽  
Catherine Tong ◽  
...  

We propose Deterministic Binary Filters, an approach to convolutional neural networks that learns weighting coefficients over a predefined orthogonal binary basis instead of learning the convolutional filters directly. This approach results in model architectures with significantly fewer parameters (4x to 16x) and smaller model sizes (32x, due to the use of binary rather than floating-point precision). We show that our deterministic filter design can be integrated into well-known network architectures (such as ResNet and SqueezeNet) with as little as 2% loss of accuracy (on datasets such as CIFAR-10). On ImageNet, they result in a 3x model-size reduction compared to sub-megabyte binary networks while reaching comparable accuracy levels.
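To make the idea of a fixed orthogonal binary basis concrete, here is a small sketch. It is an assumption for illustration only: the paper's actual basis construction may differ, but a Sylvester-type Hadamard matrix is a standard example of mutually orthogonal ±1 vectors, and a filter is then just a learned linear combination of those fixed vectors.

```python
def hadamard(n):
    # Sylvester construction: rows are mutually orthogonal +/-1
    # vectors; n must be a power of two.
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def reconstruct_filter(coeffs, basis):
    # Only the coefficients are learned; the binary basis vectors
    # (e.g. flattened 2x2 filter patterns) stay fixed.
    n = len(basis[0])
    return [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(n)]

basis = hadamard(4)  # four orthogonal binary vectors of length 4
filt = reconstruct_filter([0.5, -0.25, 0.1, 0.0], basis)
```

Since only the coefficients need floating-point storage while the basis is binary and shared, the parameter and model-size savings quoted above follow directly.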


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks composed of an image-restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image together with an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated when the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
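One common way to realize a two-input design like the one described is to append the scalar degradation parameter to the image features before the classification head. The toy linear classifier below is purely illustrative (the paper's network is a deep CNN, and the feature sizes and weights here are invented), but it shows how the extra input lets the decision depend on the degradation level.

```python
def classify(image_features, degradation_param, weights, biases):
    # Append the degradation parameter (e.g. a noise level or JPEG
    # quality factor) to the feature vector, then score each class
    # with a linear layer and return the argmax class index.
    x = list(image_features) + [degradation_param]
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    return scores.index(max(scores))

# Hypothetical 2-class head over 2 features + 1 degradation input:
# class 1's score depends only on the degradation parameter.
W = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
b = [0.0, 0.0]
print(classify([0.2, 0.5], 1.0, W, b))  # prints 1
print(classify([0.2, 0.5], 0.0, W, b))  # prints 0
```

In the real architecture the parameter would typically be broadcast into a feature map or injected at an intermediate layer, but the conditioning principle is the same.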

