Early Labeled and Small Loss Selection Semi-Supervised Learning Method for Remote Sensing Image Scene Classification

2021 · Vol 13 (20) · pp. 4039
Author(s): Ye Tian, Yuxin Dong, Guisheng Yin

The classification of aerial scenes has been extensively studied as foundational work in remote sensing image processing and interpretation. However, the performance of deep-neural-network-based remote sensing scene classification is limited by the number of labeled samples. To alleviate the demand for massive labeled datasets, various methods have been proposed that apply semi-supervised learning to train a classifier on both labeled and unlabeled samples. Given the complex contextual relationships and large spatial differences in remote sensing scenes, however, existing semi-supervised methods introduce varying amounts of incorrectly labeled samples when pseudo-labeling unlabeled data; when the number of labeled samples is small, this noise degrades the generalization performance of the model. In this article, we propose a novel semi-supervised learning method with early labeling and small-loss selection. First, the model learns the characteristics of simple samples in the early stage of training, and multiple early models are used to screen out a small number of unlabeled samples for pseudo-labeling based on this characteristic. Then, the model is trained in a semi-supervised manner on the combination of labeled, pseudo-labeled, and unlabeled samples. During training, small-loss selection is used to further eliminate some of the noisily labeled samples and improve the recognition accuracy of the model. Finally, to verify the effectiveness of the proposed method, we compare it with several state-of-the-art semi-supervised classification methods. The results show that when only a few labeled samples are available for remote sensing scene classification, our method consistently outperforms previous methods.
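The small-loss selection step rests on a well-known observation: deep networks fit clean samples before noisy ones, so samples with small training loss are more likely to carry correct (pseudo-)labels. A minimal sketch of that selection rule, with an assumed function name and a keep-ratio parameter not specified in the abstract:

```python
import numpy as np

def small_loss_select(losses, keep_ratio):
    """Return indices of the `keep_ratio` fraction of samples with the
    smallest per-sample loss, treating them as likely-clean.
    Illustrative sketch; the interface is an assumption, not the paper's API."""
    losses = np.asarray(losses)
    n_keep = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:n_keep]

# Example: keep the half of a mini-batch with the smallest loss.
batch_losses = [0.1, 2.3, 0.05, 1.7, 0.4, 0.9]
kept = small_loss_select(batch_losses, 0.5)  # selects indices 2, 0, 4
```

In practice the kept indices would mask the pseudo-labeled batch before the loss is backpropagated, so high-loss (likely mislabeled) samples do not update the model.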

2020 · Vol 12 (20) · pp. 3276
Author(s): Zhicheng Zhao, Ze Luo, Jian Li, Can Chen, Yingchao Piao

In recent years, the development of convolutional neural networks (CNNs) has driven continuous progress in scene classification of remote sensing images. Compared with natural image datasets, however, remote sensing scene images are more difficult to acquire, so remote sensing image datasets are generally small. In addition, remote sensing scenes pose many problems related to small objects and complex backgrounds, presenting great challenges for CNN-based recognition methods. In this article, to improve the feature extraction and generalization abilities of such models and to make better use of the information contained in the original remote sensing images, we introduce a multitask learning framework that combines self-supervised learning with scene classification. Unlike previous multitask methods, we adopt a new mixup loss strategy that combines the two tasks with a dynamic weight. The proposed framework enables a deep neural network to learn more discriminative features without increasing the number of parameters. Comprehensive experiments were conducted on four representative remote sensing scene classification datasets. We achieved state-of-the-art performance, with average accuracies of 94.21%, 96.89%, 99.11%, and 98.98% on the NWPU, AID, UC Merced, and WHU-RS19 datasets, respectively. The experimental results and visualizations show that our method learns more discriminative features and simultaneously encodes orientation information while effectively improving the accuracy of remote sensing scene classification.
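The core of such a multitask setup is a single total loss that blends the scene-classification term and the self-supervised term with a weight that changes over training. The abstract does not give the exact schedule, so the linear annealing below is purely an illustrative assumption:

```python
def combined_loss(cls_loss, ssl_loss, epoch, total_epochs):
    """Blend the scene-classification loss and the self-supervised loss
    with a dynamic weight. The linear schedule (weight shifting toward
    classification as training progresses) is an assumption for
    illustration; the paper's actual mixup weighting may differ."""
    lam = epoch / total_epochs          # grows from 0 to 1 over training
    return lam * cls_loss + (1.0 - lam) * ssl_loss
```

Early in training the self-supervised term dominates, encouraging generic feature learning; later the classification term takes over, specializing the shared backbone without adding any parameters.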


Sensors · 2021 · Vol 21 (14) · pp. 4867
Author(s): Lu Chen, Hongjun Wang, Xianghao Meng

With the development of science and technology, neural networks have become an effective tool in image processing and are playing an increasingly important role in remote sensing image processing. However, training neural networks requires a large sample database, so expanding datasets from limited samples has gradually become a research hotspot. The emergence of the generative adversarial network (GAN) provides new ideas for data expansion, but traditional GANs either require large amounts of input data or lack detail in the generated images. In this paper, we modify a shuffle attention network and introduce it into a GAN to generate higher-quality images from limited inputs. In addition, we improve the existing resize method and propose an equal-stretch resize method to solve the image distortion caused by differing input sizes. In the experiments, we also embed the recently proposed coordinate attention (CA) module into the backbone network as a control test. Qualitative inspection and six quantitative evaluation indexes were used to assess the results, which show that, compared with other GANs used for image generation, the modified Shuffle Attention GAN proposed in this paper generates more refined, high-quality, and diversified aircraft images with more detailed object features under limited datasets.
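The distortion the equal-stretch resize addresses arises when images of different aspect ratios are naively stretched to a fixed square input. One common way to realize the idea, sketched here under the assumption that both axes should be scaled by the same factor (the paper's exact method may differ), is to pad the image to a square first and only then resize:

```python
import numpy as np

def equal_stretch_resize(img, size):
    """Pad the image to a square by replicating its edge values, then
    resize, so both axes share one scale factor and the object keeps its
    aspect ratio. Sketch of the idea only; nearest-neighbour resampling
    is used here to stay dependency-free."""
    h, w = img.shape[:2]
    side = max(h, w)
    pad_h, pad_w = side - h, side - w
    padded = np.pad(
        img,
        ((pad_h // 2, pad_h - pad_h // 2),
         (pad_w // 2, pad_w - pad_w // 2)) + ((0, 0),) * (img.ndim - 2),
        mode="edge",
    )
    # Nearest-neighbour sampling of the square image down/up to `size`.
    idx = (np.arange(size) * side / size).astype(int)
    return padded[idx][:, idx]
```

A 10x20 input, for example, is padded to 20x20 before resampling, so an aircraft in the frame is not squashed along one axis the way a direct 10x20-to-square stretch would squash it.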
