SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity Through Low-Bit Quantization

Author(s): Shijie Cao ◽ Lingxiao Ma ◽ Wencong Xiao ◽ Chen Zhang ◽ Yunxin Liu ◽ ...
2021 ◽ Vol 1757 (1) ◽ pp. 012047
Author(s): Xiaozhong Liu ◽ Zaixing Wang ◽ Lijun Zheng ◽ Jinhui Gao
2020 ◽ Vol 2020 ◽ pp. 1-12
Author(s): Feng Wang ◽ Shanshan Huang ◽ Chao Liang

Sensing the external complex electromagnetic environment is an important function of cognitive radar, and the concept of cognition has attracted wide attention in the radar field since it was proposed. In this paper, a novel method based on a multidimensional feature map and a convolutional neural network (CNN) is proposed to realize automatic modulation classification of jamming entering the cognitive radar system. The multidimensional feature map consists of two envelope maps, taken before and after pulse compression, and a time-frequency map of the receiving-beam signal. By drawing the one-dimensional envelopes on a two-dimensional plane and quantizing the time-frequency data onto a two-dimensional plane, the combination of the three planes (the multidimensional feature map) is treated as a single picture. A CNN-based algorithm with a linear kernel that senses the three planes simultaneously is selected to perform jamming classification. The classification of jamming types such as noise frequency modulation jamming, noise amplitude modulation jamming, slice jamming, and dense repeat jamming is validated by computer simulation. A performance comparison of convolutional kernels of different sizes demonstrates the advantage of the linear kernel.
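The construction described above (drawing each 1-D envelope on a 2-D plane, stacking the three planes into one "picture", and scanning it with a linear, i.e. 1 × k, kernel that spans all planes at once) can be sketched as follows. The helper names, the plane height, and the averaging kernel weights are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def render_envelope(envelope, height=64):
    """Draw a 1-D envelope as a 2-D binary plane (hypothetical helper)."""
    env = np.asarray(envelope, dtype=float)
    span = env.max() - env.min()
    env = (env - env.min()) / (span + 1e-12)          # normalize to [0, 1]
    rows = np.round(env * (height - 1)).astype(int)   # row index per sample
    plane = np.zeros((height, env.size))
    plane[rows, np.arange(env.size)] = 1.0            # light one pixel per column
    return plane

def linear_kernel_response(picture, kernel_width=5):
    """Slide a linear (1 x kernel_width) kernel across a stacked 3-plane picture.

    `picture` has shape (3, H, W); the kernel spans all three planes at once,
    which mirrors the 'sensing the three planes simultaneously' idea. The
    uniform averaging weights stand in for learned weights.
    """
    c, h, w = picture.shape
    kernel = np.ones((c, 1, kernel_width)) / (c * kernel_width)
    out = np.empty((h, w - kernel_width + 1))
    for j in range(out.shape[1]):
        window = picture[:, :, j:j + kernel_width]     # (c, h, kernel_width)
        out[:, j] = (window * kernel).sum(axis=(0, 2)) # broadcast over rows
    return out
```

In this sketch the "picture" is simply the channel-stack of the two envelope planes and the time-frequency plane; a real classifier would learn the kernel weights rather than average.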


2020 ◽ Vol 64 (2) ◽ pp. 20507-1-20507-10 ◽ Author(s): Hee-Jin Yu ◽ Chang-Hwan Son ◽ Dong Hyuk Lee

Abstract Traditional approaches to leaf disease identification rely on handcrafted features such as colors and textures for feature extraction, and may therefore be limited in extracting abundant and discriminative features. Deep learning approaches have recently been introduced to overcome these shortcomings, but they reuse existing models such as VGG and ResNet. Their discriminative power can be further increased because they lack a spatial attention mechanism to predict the background and spot areas (i.e., local areas with leaf diseases). Therefore, a new deep learning architecture, hereafter referred to as the region-of-interest-aware deep convolutional neural network (ROI-aware DCNN), is proposed to make deep features more discriminative and increase classification performance. The primary idea is that leaf disease symptoms appear in the leaf area, whereas the background contains no useful information about leaf diseases. To realize this, two subnetworks are designed. One is the ROI subnetwork, which provides more discriminative features by separating the background, leaf, and spot areas in the feature map; the other is the classification subnetwork, which increases classification accuracy. To train the ROI-aware DCNN, the ROI subnetwork is first learned with a new image set containing ground-truth images in which the background, leaf area, and spot area are delineated. Subsequently, the entire network is trained end-to-end, connecting the ROI subnetwork to the classification subnetwork through a concatenation layer. The experimental results confirm that the proposed ROI-aware DCNN increases discriminative power by predicting which areas of the feature map are more important for leaf disease identification. The results show that the proposed method surpasses conventional state-of-the-art methods such as VGG, ResNet, SqueezeNet, the bilinear model, and multiscale-based deep feature extraction and pooling.
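The two-subnetwork design (an ROI head predicting background/leaf/spot per pixel, concatenated back into the features for classification) can be sketched as a shape-level flow. Everything below — the 1 × 1-convolution weights, the global average pooling, and the class count — is a hypothetical stand-in for the real ROI-aware DCNN layers, shown only to illustrate the concatenation:

```python
import numpy as np

rng = np.random.default_rng(0)

def roi_subnetwork(feature_map):
    """Hypothetical ROI head: per-pixel scores for background / leaf / spot.

    A 1x1 convolution over an (H, W, C) feature map is just a matmul over the
    channel axis; softmax turns the scores into region probabilities.
    """
    h, w, c = feature_map.shape
    weights = rng.standard_normal((c, 3)) * 0.01
    logits = feature_map.reshape(-1, c) @ weights
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)          # softmax over 3 regions
    return probs.reshape(h, w, 3)

def classify(feature_map, roi_probs, n_classes=4):
    """Concatenation layer + a toy classification subnetwork.

    The ROI probabilities are appended as extra channels, then globally
    average-pooled and passed through a linear layer (placeholder weights).
    """
    fused = np.concatenate([feature_map, roi_probs], axis=-1)
    pooled = fused.mean(axis=(0, 1))                  # global average pooling
    w = rng.standard_normal((pooled.size, n_classes)) * 0.01
    return pooled @ w
```

The point of the sketch is the data flow: the ROI predictions re-enter the classifier through the concatenation, so spot-area evidence can weight the final decision.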


2019 ◽ Vol 9 (14) ◽ pp. 2917 ◽ Author(s): Yan Chen ◽ Chengming Zhang ◽ Shouyi Wang ◽ Jianping Li ◽ Feng Li ◽ ...

Satellite remote sensing has become a mainstream approach for extracting crop spatial distribution. Extracting crop spatial distribution information from high-resolution remote sensing images with a convolutional neural network (CNN) while keeping field edges fine remains a challenge. Based on the characteristics of crop areas in Gaofen 2 (GF-2) images, this paper proposes an improved CNN to extract fine crop areas. The CNN comprises a feature extractor and a classifier. The feature extractor employs a spectral feature extraction unit to generate spectral features and five coding-decoding-pair units to generate features at five levels. A linear model fuses the features of the different levels, and the fusion result is up-sampled to obtain a feature map whose structure matches the input image. The classifier uses this feature map to perform pixel-by-pixel classification. In this study, the SegNet and RefineNet models and 21 GF-2 images of Feicheng County, Shandong Province, China, were chosen for comparison experiments. Our approach achieved an accuracy of 93.26%, higher than the existing SegNet (78.12%) and RefineNet (86.54%) models. This demonstrates the superiority of the proposed method in extracting crop spatial distribution information from GF-2 remote sensing images.
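The linear fusion step (upsampling each level's feature map to the finest resolution, then taking a weighted sum) can be illustrated with a minimal sketch. Three levels are used here instead of five for brevity, nearest-neighbour upsampling is an assumption, and the fusion weights are placeholders for the paper's learned linear model:

```python
import numpy as np

def upsample_nearest(feat, factor):
    """Nearest-neighbour upsampling of an (H, W) feature plane."""
    return np.repeat(np.repeat(feat, factor, axis=0), factor, axis=1)

def fuse_levels(levels, weights):
    """Linear fusion of multi-level features.

    Each coarser level is upsampled to the finest level's (square) resolution
    and the planes are combined as a weighted sum, standing in for the
    linear model described in the abstract.
    """
    target = max(f.shape[0] for f in levels)
    fused = np.zeros((target, target))
    for feat, w in zip(levels, weights):
        fused += w * upsample_nearest(feat, target // feat.shape[0])
    return fused
```

A fused map at the input's full resolution is what allows the subsequent pixel-by-pixel classification to keep edges fine.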

