A Multi-Scale Feature Aggregation Network Based on Channel-Spatial Attention for Remote Sensing Scene Classification

Author(s):  
Ming Li, Lin Lei, Xiao Li, Yuli Sun
2021, Vol 13 (13), pp. 2532
Author(s):  
Joseph Kim, Mingmin Chi

In real applications, it is often necessary to classify new, unseen classes that are absent from the training dataset. To address this problem, few-shot learning methods are typically adopted to recognize new categories from only a few (out-of-bag) labeled samples, together with the known classes available in the (large-scale) training dataset. Unlike common scene classification images captured by CCD (charge-coupled device) cameras, remote sensing scene classification datasets tend to contain plentiful texture features rather than shape features. It is therefore important to extract more informative texture semantic features from a limited number of labeled input images. In this paper, a multi-scale feature fusion network for few-shot remote sensing scene classification, denoted SAFFNet, is proposed by integrating a novel self-attention feature selection module. Unlike a pyramidal feature hierarchy for object detection, the informative representations of the images at different receptive fields are automatically selected and re-weighted for feature fusion after a refining network and a global pooling operation. The feature weights can be fine-tuned on the support set of the few-shot learning task. The proposed model is evaluated on three publicly available datasets for few-shot remote sensing scene classification. Experimental results demonstrate that SAFFNet significantly improves few-shot classification accuracy compared with other few-shot methods and a typical multi-scale feature fusion network.
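The re-weighting idea in this abstract — pooled features from several receptive fields scored by attention and fused into one representation — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: in SAFFNet the per-scale scores would come from the self-attention feature selection module and be fine-tuned on the support set, whereas here they are passed in directly.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_multiscale(features, scores):
    """Re-weight globally pooled multi-scale features by softmax
    attention weights and sum them into one fused representation.

    features: list of 1-D arrays, one pooled vector per scale
    scores:   1-D array of attention scores, one per scale
    """
    weights = softmax(scores)
    return sum(w * f for w, f in zip(weights, features))

# toy example: three scales, each pooled to a 4-d vector
feats = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
fused = fuse_multiscale(feats, np.zeros(3))
# equal scores -> equal weights -> plain average of the three scales
```

With equal scores the fusion degenerates to an average; learned (fine-tuned) scores let the network emphasize whichever receptive field is most informative for the few-shot task.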


2021, Vol 32 (2)
Author(s):
Mehrdad Sheoiby, Sadegh Aliakbarian, Saeed Anwar, Lars Petersson

2021, Vol 13 (3), pp. 433
Author(s):
Junge Shen, Tong Zhang, Yichen Wang, Ruxin Wang, Qi Wang, ...

Remote sensing images contain complex backgrounds and multi-scale objects, which makes scene classification challenging. Performance depends strongly on the capacity of the scene representation as well as the discriminability of the classifier. Although multiple models together possess better properties than a single model in these respects, the fusion strategy for these models is the key to maximizing the final accuracy. In this paper, we construct a novel dual-model architecture with a grouping-attention-fusion strategy to improve scene classification performance. Specifically, the model employs two different convolutional neural networks (CNNs) for feature extraction, and the grouping-attention-fusion strategy fuses the features of the two CNNs in a fine-grained, multi-scale manner, enhancing the resulting scene representation. Moreover, to address the issue of similar appearances between different scenes, we develop a loss function that encourages small intra-class diversity and large inter-class distances. Extensive experiments are conducted on four scene classification datasets: the UCM land-use dataset, the WHU-RS19 dataset, the AID dataset, and the OPTIMAL-31 dataset. The experimental results demonstrate the superiority of the proposed method over state-of-the-art approaches.
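The loss described here — small intra-class diversity, large inter-class distance — is in the spirit of center-loss and margin-based formulations. The following NumPy sketch illustrates that structure under that assumption; the function name, the margin value, and the use of class means as centers are hypothetical, not the paper's exact definition.

```python
import numpy as np

def intra_inter_loss(features, labels, margin=1.0):
    """Toy embedding loss: pull samples toward their class mean
    (intra-class compactness) and penalize pairs of class means
    that sit closer than `margin` (inter-class separation)."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    centers = {c: features[labels == c].mean(axis=0) for c in classes}

    # intra-class term: mean squared distance to own class center
    intra = np.mean([np.sum((f - centers[c]) ** 2)
                     for f, c in zip(features, labels)])

    # inter-class term: hinge penalty on centers closer than the margin
    inter, pairs = 0.0, 0
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            d = np.linalg.norm(centers[ci] - centers[cj])
            inter += max(0.0, margin - d) ** 2
            pairs += 1
    return intra + inter / max(pairs, 1)
```

When every sample coincides with its class mean and the means are farther apart than the margin, the loss is zero; spreading samples within a class, or pushing two class means together, raises it, which is exactly the behavior the abstract's loss is designed to encourage.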


Author(s):
Xu Tang, Weiquan Lin, Chao Liu, Xiao Han, Wenjing Wang, ...

2021, pp. 229-244
Author(s):
Karina O. M. Bogdan, Guilherme A. S. Megeto, Rovilson Leal, Gustavo Souza, Augusto C. Valente, ...
