Scene Classification of Optical Remote Sensing Images Based on CNN Automatic Transfer

Author(s):  
Jicheng Quan ◽  
Chen Wu ◽  
Hongwei Wang ◽  
Zhiqiang Wang
2021 ◽  
Vol 58 (2) ◽  
pp. 0210001
Author(s):  
Wang Peng ◽  
Liu Rui ◽  
Xin Xuejing ◽  
Liu Peidong

2018 ◽  
Vol 2018 (16) ◽  
pp. 1650-1657
Author(s):  
Xu Jiaqing ◽  
Lv Qi ◽  
Liu Hongjun ◽  
He Jie

2018 ◽  
Vol 06 (11) ◽  
pp. 185-193
Author(s):  
Feng’an Zhao ◽  
Xiongmei Zhang ◽  
Xiaodong Mu ◽  
Zhaoxiang Yi ◽  
Zhou Yang

2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Linyi Li ◽  
Tingbao Xu ◽  
Yun Chen

In recent years, the spatial resolution of remote sensing images has improved greatly. However, a higher spatial resolution does not always lead to better automatic scene classification. Visual attention is an important characteristic of the human visual system and can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed that extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high-resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high-resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 imagery. FC-VAF achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high-resolution scene classification and thereby advances digital image analysis research and the applications of high-resolution remote sensing images.
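The abstract does not spell out the FC-VAF algorithm, but the general idea of extracting visual attention features through a multiscale process can be sketched as follows. This is a hypothetical illustration, not the paper's method: it builds an average-pooling pyramid and uses center-surround differences between scales as per-scale attention statistics; the function name and pooling scheme are assumptions.

```python
import numpy as np

def multiscale_attention_features(image, levels=3):
    """Hypothetical sketch of multiscale visual-attention feature extraction:
    build an average pyramid and summarize center-surround differences.
    (The exact FC-VAF algorithm is not given in the abstract.)"""
    pyramid = [image.astype(float)]
    for _ in range(levels):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        # 2x2 average pooling forms the next, coarser scale
        coarse = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    features = []
    for fine, coarse in zip(pyramid, pyramid[1:]):
        # upsample the coarse scale by pixel repetition, then take the
        # mean absolute center-surround difference as one attention statistic
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        m, n = min(fine.shape[0], up.shape[0]), min(fine.shape[1], up.shape[1])
        features.append(np.abs(fine[:m, :n] - up[:m, :n]).mean())
    return np.array(features)

scene = np.random.rand(64, 64)          # stand-in for one gray-scale scene patch
feats = multiscale_attention_features(scene, levels=3)
print(feats.shape)  # (3,) — one attention statistic per decomposition level
```

The number of pyramid levels plays the same role as the wavelet decomposition level discussed in the abstract: each extra level adds one coarser-scale feature.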


2019 ◽  
Vol 11 (17) ◽  
pp. 1996 ◽  
Author(s):  
Zhu ◽  
Yan ◽  
Mo ◽  
Liu

Scene classification of high-resolution remote sensing images (HRRSI) is one of the most important means of land-cover classification. Deep learning techniques, especially the convolutional neural network (CNN), have been widely applied to the scene classification of HRRSI thanks to advances in graphics processing units (GPUs). However, such networks tend to extract features from whole images rather than from discriminative regions. A visual attention mechanism can force the CNN to focus on discriminative regions, but it may suffer from intra-class diversity and repeated texture. Motivated by these problems, we propose an attention-based deep feature fusion (ADFF) framework that comprises three parts: attention maps generated by Gradient-weighted Class Activation Mapping (Grad-CAM), multiplicative fusion of deep features, and a center-based cross-entropy loss function. First, we feed the attention maps generated by Grad-CAM to the network as an explicit input, forcing it to concentrate on discriminative regions. Then, deep features derived from the original images and from the attention maps are fused multiplicatively, improving the ability to distinguish scenes with repeated texture while emphasizing salient regions. Finally, a center-based cross-entropy loss function, which combines the cross-entropy loss and the center loss, is used to backpropagate the fused features, reducing the effect of intra-class diversity on feature representations. The proposed ADFF architecture is tested on three benchmark datasets to show its performance in scene classification. The experiments confirm that the proposed method outperforms most competitive scene classification methods, with an average overall accuracy of 94% under different training ratios.
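Two of the three ADFF components named in the abstract, multiplicative feature fusion and the center-based cross-entropy loss, can be sketched in a framework-agnostic way. The sketch below is an assumption-laden illustration in NumPy, not the authors' implementation: the function names, the 0.5 factor in the center loss, and the weighting parameter `lam` are conventional choices, and class centers are passed in rather than learned.

```python
import numpy as np

def multiplicative_fusion(feat_image, feat_attention):
    """Element-wise (multiplicative) fusion of deep features from the
    original image and from the Grad-CAM attention branch (sketch)."""
    return feat_image * feat_attention

def center_based_cross_entropy(features, logits, labels, centers, lam=0.5):
    """Cross-entropy plus center loss, as combined in the abstract.
    `centers` holds one (assumed fixed) center per class, shape
    [num_classes, dim]; `lam` weights the center-loss term (assumed)."""
    # numerically stable softmax cross-entropy
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # center loss: squared distance of each feature to its class center
    center = 0.5 * ((features - centers[labels]) ** 2).sum(axis=1).mean()
    return ce + lam * center

rng = np.random.default_rng(0)
f_img, f_att = rng.random((4, 8)), rng.random((4, 8))   # toy deep features
fused = multiplicative_fusion(f_img, f_att)
logits = rng.random((4, 3))
labels = np.array([0, 1, 2, 0])
centers = rng.random((3, 8))
loss = center_based_cross_entropy(fused, logits, labels, centers)
print(fused.shape, float(loss))
```

In a real pipeline the centers would be updated alongside the network weights, so that the center-loss term pulls same-class features together and counters intra-class diversity.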


2019 ◽  
Vol 13 (04) ◽  
pp. 1
Author(s):  
Xin Zhang ◽  
Yongcheng Wang ◽  
Ning Zhang ◽  
Dongdong Xu ◽  
Bo Chen ◽  
...  
