RESEARCH ON SE-INCEPTION IN HIGH-RESOLUTION REMOTE SENSING IMAGE CLASSIFICATION

Author(s): Z. L. Cai, Q. Weng, S. Z. Ye

Abstract. With the deepening of research and cross-disciplinary fusion in modern remote sensing imagery, the classification of high-spatial-resolution remote sensing images has captured the attention of researchers in the field. However, because high-resolution remote sensing images exhibit severe "same object, different spectrum" and "same spectrum, different object" phenomena, traditional classification strategies struggle to handle this challenge. In this paper, a remote sensing image scene classification model based on SENet and Inception-V3 is proposed, built on deep learning and a transfer learning strategy. The model first adds a dropout layer before the fully connected layer of the original Inception-V3 model to reduce over-fitting. We then embed the SENet module into the Inception-V3 model to optimize network performance. Global average pooling is used as the squeeze operation, and two fully connected layers construct a bottleneck structure. The resulting model is more non-linear, better fits the complex correlations between channels, and greatly reduces the number of parameters and the amount of computation. During training, a transfer learning strategy is adopted to make full use of existing models and knowledge and to improve training efficiency, finally yielding the scene classification results. Experimental results on the AID high-resolution remote sensing scene images show that SE-Inception converges faster and trains more stably than the original Inception-V3. Compared with traditional methods and other deep learning networks, the improved model proposed in this paper achieves a clear gain in accuracy.
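As a rough illustration of the two modifications the abstract describes, the sketch below (PyTorch, not the authors' code) shows a squeeze-and-excitation block built from global average pooling followed by a two-layer fully connected bottleneck, and a dropout layer inserted before the final fully connected layer of a torchvision Inception-V3. The reduction ratio and dropout rate are assumed values, and the abstract does not specify exactly where the SE blocks are embedded inside Inception-V3.

```python
# Minimal sketch of the SE-Inception idea described above (not the authors' code).
# Assumptions: reduction ratio r=16, dropout p=0.5, SE applied to a 4-D feature map.
import torch
import torch.nn as nn
from torchvision import models


class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pooling + two-FC bottleneck."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pooling
        self.excite = nn.Sequential(                    # excitation: FC bottleneck
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                    # channel-wise reweighting


def build_se_inception(num_classes: int, dropout_p: float = 0.5) -> nn.Module:
    """Pretrained Inception-V3 with dropout added before the final FC layer."""
    model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
    model.fc = nn.Sequential(
        nn.Dropout(p=dropout_p),
        nn.Linear(model.fc.in_features, num_classes),
    )
    return model
```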

2021, Vol. 87 (8), pp. 577-591
Author(s): Fengpeng Li, Jiabao Li, Wei Han, Ruyi Feng, Lizhe Wang

Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised deep learning representation methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. To address this issue, an unsupervised deep learning representation method for high-resolution remote sensing image scene classification is proposed in this work. The proposed method, based on contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images), thereby obtaining class-specific representations of the input data without any supervised information. The classifier uses features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to define the space of each category, and then makes predictions in the testing procedure using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
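The abstract does not give the exact objective, but a common way to realize this kind of channel-based contrastive learning is an NT-Xent (InfoNCE-style) loss over normalized embeddings. The sketch below is an illustrative PyTorch version under that assumption, where z1 and z2 are embeddings of two color-channel views of the same batch of images.

```python
# Illustrative NT-Xent-style contrastive loss (an assumption; the paper's exact loss
# is not specified in the abstract). z1, z2: (N, D) embeddings of two views per image.
import torch
import torch.nn.functional as F


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm embeddings
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive for sample i is its other view: i+N (or i-N for the second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                 # pull positives, push negatives
```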


2019, Vol. 9 (10), pp. 2028
Author(s): Xin Zhang, Yongcheng Wang, Ning Zhang, Dongdong Xu, Bo Chen

One of the challenges in the field of remote sensing is how to automatically identify and classify high-resolution remote sensing images. A number of approaches have been proposed, but methods based on low-level and middle-level visual features have limitations. This paper therefore adopts deep learning to classify scenes of high-resolution remote sensing images and learn their semantic information. Most existing convolutional neural network methods fine-tune existing models through transfer learning, while relatively few works design new convolutional neural networks for the existing high-resolution remote sensing image datasets. In this context, this paper proposes a multi-view scaling strategy and a new convolutional neural network, named RFPNet, based on residual blocks and a strategy for fusing pooling layer maps, and uses optimization methods to make the network more robust. Experiments were conducted on two benchmark remote sensing image datasets. On the UC Merced dataset, the test accuracy, precision, recall, and F1-score all exceed 93%; on the SIRI-WHU dataset, they all exceed 91%. Compared with existing methods, including traditional methods and several deep learning methods for scene classification of high-resolution remote sensing images, the proposed method achieves higher accuracy and robustness.
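The abstract names residual blocks and fusion of pooling layer maps as RFPNet's building blocks without detailing them. The following PyTorch sketch illustrates one plausible reading: a standard residual block plus a layer that fuses max-pooled and average-pooled feature maps by channel concatenation. The shapes and the concatenation choice are assumptions, not the published architecture.

```python
# Plausible illustration of the named building blocks (assumed details, not RFPNet itself).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Standard two-convolution residual block with an identity shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(x + self.body(x))               # shortcut connection


class FusedPooling(nn.Module):
    """Fuse max-pooled and average-pooled maps by channel concatenation (assumed fusion)."""

    def __init__(self, kernel_size: int = 2, stride: int = 2):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size, stride)
        self.avg_pool = nn.AvgPool2d(kernel_size, stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.max_pool(x), self.avg_pool(x)], dim=1)
```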


2019, Vol. 12 (1), pp. 86
Author(s): Rafael Pires de Lima, Kurt Marfurt

Remote-sensing image scene classification can provide significant value, ranging from forest fire monitoring to land-use and land-cover classification. From the first aerial photographs of the early 20th century to the satellite imagery of today, the amount of remote-sensing data has grown geometrically, and at ever higher resolution. The need to analyze these modern digital data has motivated research into accelerating remote-sensing image classification. Fortunately, great advances have been made by the computer vision community in classifying natural images, or photographs taken with an ordinary camera. Natural image datasets can contain millions of samples and are therefore amenable to deep-learning techniques. Many fields of science, remote sensing included, have been able to exploit the success of natural image classification by convolutional neural network models through a technique commonly called transfer learning. We provide a systematic review of the application of transfer learning to scene classification using different datasets and different deep-learning models. We evaluate how the specialization of convolutional neural network models affects the transfer learning process by splitting the original models at different points. As expected, we find that the choice of hyperparameters used to train the model has a significant influence on the final performance of the models. Curiously, we find that transfer learning from models trained on larger, more generic natural-image datasets outperformed transfer learning from models trained directly on smaller remotely sensed datasets. Nonetheless, the results show that transfer learning provides a powerful tool for remote-sensing scene classification.
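As a rough sketch of the kind of split-point experiment described here (not the review's actual code), the snippet below truncates a pretrained torchvision ResNet-50 after a chosen child block, freezes the retained layers, and attaches a new classification head. The split index, pooling, and head size are illustrative assumptions.

```python
# Illustrative split-point transfer learning (assumed setup, not the paper's experiments).
import torch
import torch.nn as nn
from torchvision import models


def split_transfer_model(split_index: int, num_classes: int) -> nn.Module:
    """Keep a pretrained ResNet-50 up to `split_index` child blocks, freeze them,
    and train only a new global-pooling + linear head on the remote sensing scenes."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    children = list(backbone.children())[:-2]            # drop the original avgpool and fc
    trunk = nn.Sequential(*children[:split_index])       # truncate at the split point
    for p in trunk.parameters():
        p.requires_grad = False                          # freeze the transferred layers

    # Infer the channel count at the split point with a dummy forward pass.
    with torch.no_grad():
        feat_channels = trunk(torch.zeros(1, 3, 224, 224)).shape[1]

    head = nn.Sequential(
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(feat_channels, num_classes),
    )
    return nn.Sequential(trunk, head)
```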

