Research on Scene Classification Method of High-Resolution Remote Sensing Images Based on RFPNet

2019, Vol 9 (10), pp. 2028
Author(s): Xin Zhang, Yongcheng Wang, Ning Zhang, Dongdong Xu, Bo Chen

One of the challenges in remote sensing is automatically identifying and classifying high-resolution remote sensing images. A number of approaches have been proposed, but methods based on low-level and middle-level visual features have limitations. This paper therefore adopts deep learning to classify scenes of high-resolution remote sensing images and learn semantic information. Most existing convolutional neural network methods apply transfer learning to an existing model, while relatively few works design new convolutional neural networks tailored to existing high-resolution remote sensing image datasets. In this context, this paper proposes a multi-view scaling strategy and a new convolutional neural network, named RFPNet, built from residual blocks and a strategy for fusing pooling-layer feature maps, and uses optimization methods to make the network more robust. Experiments were conducted on two benchmark remote sensing image datasets. On the UC Merced dataset, the test accuracy, precision, recall, and F1-score all exceed 93%; on the SIRI-WHU dataset, they all exceed 91%. Compared with existing methods, including traditional methods and several deep learning methods for scene classification of high-resolution remote sensing images, the proposed method achieves higher accuracy and robustness.
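
The abstract names two architectural ingredients: residual blocks and a fusion of pooling-layer feature maps. The sketch below illustrates both in PyTorch; the layer sizes, the two-stage layout, and the concatenation-based fusion are assumptions made for illustration, not the authors' exact RFPNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity shortcut

class PoolingFusionNet(nn.Module):
    """Toy classifier: residual stages whose globally pooled maps are fused before the head."""
    def __init__(self, num_classes=21):  # UC Merced has 21 scene classes
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        self.stage1 = ResidualBlock(32)
        self.stage2 = ResidualBlock(32)
        self.fc = nn.Linear(32 * 2, num_classes)

    def forward(self, x):
        x = F.relu(self.stem(x))
        f1 = self.stage1(x)
        f2 = self.stage2(F.max_pool2d(f1, 2))
        # fuse globally pooled maps from both stages
        v = torch.cat([F.adaptive_avg_pool2d(f1, 1).flatten(1),
                       F.adaptive_avg_pool2d(f2, 1).flatten(1)], dim=1)
        return self.fc(v)
```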

2021, Vol 87 (8), pp. 577-591
Author(s): Fengpeng Li, Jiabao Li, Wei Han, Ruyi Feng, Lizhe Wang

Inspired by the outstanding achievements of deep learning, supervised deep representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. To address this issue, this work proposes an unsupervised deep representation method for high-resolution remote sensing image scene classification. The proposed contrastive learning method narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses the features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to define the space of each category, and then makes predictions on the test data using linear regression. Compared with existing unsupervised deep representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the mid-scale aerial image data set, and the large-scale NWPU-RESISC45 data set.
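
The core idea above, color channels of the same image as positive views and channels from different images as negatives, can be written as a small contrastive loss. The sketch below is a hypothetical illustration: the choice of channels, the temperature, and the single-channel encoder are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def channel_contrastive_loss(encoder, images, temperature=0.1):
    """images: (N, 3, H, W); encoder maps a single-channel view to an (N, D) embedding."""
    view_a = images[:, 0:1]                   # red channel as one view
    view_b = images[:, 1:2]                   # green channel as the other view
    za = F.normalize(encoder(view_a), dim=1)
    zb = F.normalize(encoder(view_b), dim=1)
    logits = za @ zb.t() / temperature        # (N, N) pairwise similarities
    targets = torch.arange(images.size(0), device=images.device)
    # diagonal entries are positive pairs; off-diagonal entries are negatives
    return F.cross_entropy(logits, targets)
```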


2021, Vol 26 (1), pp. 200-215
Author(s): Muhammad Alam, Jian-Feng Wang, Cong Guangpei, LV Yunrong, Yuanfang Chen

Abstract. In recent years, the success of deep learning in natural scene image processing has boosted its application to the analysis of remote sensing images. In this paper, we apply convolutional neural networks (CNNs) to the semantic segmentation of remote sensing images. We improve the encoder-decoder CNN structure SegNet with index pooling, together with U-Net, to make both suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages in segmenting different objects. In addition, we propose an integrated algorithm that combines the two models. Experimental results show that the integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieves better segmentation than either model alone.
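
The abstract does not spell out the integration rule, so the sketch below shows one plausible way two segmentation models can be combined: average their class probability maps, but keep the prediction of whichever model is assumed to handle a given class better. The per-class assignment and the averaging fallback are illustrative assumptions only.

```python
import numpy as np

def integrate_predictions(prob_segnet, prob_unet, classes_from_segnet):
    """prob_*: (C, H, W) softmax maps; classes_from_segnet: class indices
    for which the SegNet-style model is preferred."""
    fused = (prob_segnet + prob_unet) / 2.0   # default: average the two models
    for c in classes_from_segnet:
        fused[c] = prob_segnet[c]             # override classes where SegNet is preferred
    return fused.argmax(axis=0)               # (H, W) label map

# usage sketch (hypothetical class indices):
# label_map = integrate_predictions(p_segnet, p_unet, classes_from_segnet=[1, 3])
```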


2022
Author(s): Md. Sarkar Hasanuzzaman

Abstract. Hyperspectral imaging is a versatile and powerful technology for gathering geo-data, and planes and satellites equipped with hyperspectral cameras are currently the leading contenders for large-scale imaging projects. Addressing the shortcomings of traditional methods for detecting sparse representations of multispectral images, this paper proposes a wireless sensor network (WSN)-based single-hyperspectral-image super-resolution method built on a deep residual convolutional neural network. We propose a strategy that merges cheaper multispectral sensors to achieve hyperspectral-like spectral resolution while maintaining the WSN's spatial resolution. The method studies and mines the nonlinear relationship between low-resolution and high-resolution remote sensing images, constructs a deep residual convolutional neural network that connects multiple residual blocks in series, and removes some unnecessary modules. For this purpose, a decision support system is used to pass the outcome to the next layer. Finally, this paper fully explores the similarities between natural images and hyperspectral images, uses natural image samples to train the convolutional neural network, and then uses transfer learning to apply the trained model to the super-resolution problem of high-resolution remote sensing images, thereby alleviating the lack of training samples. The proposed approach is evaluated by comparing different algorithms on data sets collected in situ and via remote sensing. The experimental results show that the method performs well and obtains better super-resolution effects.
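
A deep residual super-resolution network of the kind described, residual blocks connected in series followed by an upsampling step, can be sketched as below. The block count, channel width, and scale factor are illustrative assumptions rather than the paper's exact architecture.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # residual connection

class ResidualSRNet(nn.Module):
    def __init__(self, in_ch=3, ch=64, n_blocks=8, scale=2):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.upsample = nn.Sequential(
            nn.Conv2d(ch, ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),            # sub-pixel upsampling
            nn.Conv2d(ch, in_ch, 3, padding=1))

    def forward(self, lr):
        feat = self.head(lr)
        return self.upsample(self.blocks(feat) + feat)
```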


2020, Vol 12 (21), pp. 3547
Author(s): Yuanyuan Ren, Xianfeng Zhang, Yongjian Ma, Qiyuan Yang, Chuanjian Wang, ...

Remote sensing image segmentation with imbalanced samples remains one of the most important issues. Typically, a high-resolution remote sensing image has high spatial resolution but low spectral resolution, complex large-scale land covers, small inter-class differences for some land covers, a vague foreground, and an imbalanced distribution of samples. Traditional machine learning algorithms have limitations in extracting deep image features and in handling sample imbalance. In this paper, we propose an improved fully convolutional neural network based on DeepLab V3+, with a loss-function-based solution to sample imbalance. We select Sentinel-2 remote sensing images covering Yuli County, Bayingolin Mongol Autonomous Prefecture, Xinjiang Uygur Autonomous Region, China as the data source, and build a typical regional image dataset through data augmentation. The experimental results show that the improved DeepLab V3+ model can not only utilize the spectral information of high-resolution remote sensing images but also exploit their rich spatial information. The classification accuracy of the proposed method on the test dataset reaches 97.97%, the mean Intersection-over-Union reaches 87.74%, and the Kappa coefficient is 0.9587. The work provides methodological guidance for correcting sample imbalance, and the established data resource can serve as a reference for further study.
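
One common loss-function-based treatment of sample imbalance is to weight each class inversely to its pixel frequency in the training set. The sketch below shows such a weighting; the specific weighting rule and the class counts are assumptions for illustration, not necessarily the loss used in the paper.

```python
import torch
import torch.nn as nn

def class_weights_from_counts(pixel_counts):
    """pixel_counts: 1-D tensor of per-class pixel totals in the training set."""
    freq = pixel_counts.float() / pixel_counts.sum()
    return 1.0 / torch.log(1.02 + freq)   # rare classes receive larger weights

counts = torch.tensor([5_000_000, 200_000, 50_000])  # hypothetical three-class counts
criterion = nn.CrossEntropyLoss(weight=class_weights_from_counts(counts))
# loss = criterion(logits, labels)  # logits: (N, C, H, W), labels: (N, H, W)
```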


Author(s): Y. Dai, J. S. Xiao, B. S. Yi, J. F. Lei, Z. Y. Du

Abstract. Aiming at multi-class artificial object detection in remote sensing images, a detection framework based on deep learning is used to extract and localize the numerous targets in very high resolution remote sensing images. To realize rapid and efficient detection of typical artificial targets, this paper proposes an end-to-end multi-category object detection method based on a convolutional neural network that addresses several challenges, including dense objects and objects with arbitrary orientation and large aspect ratios. Specifically, the feature extraction process is improved by utilizing a more advanced backbone network with deeper layers and by combining multiple feature maps, including high-resolution feature maps with more location details and low-resolution feature maps with highly abstracted information. A Rotating Region Proposal Network is adopted into the Faster R-CNN framework to generate candidate object-like regions with different orientations and to improve sensitivity to dense and cluttered objects. A rotation factor is added to the region proposal network to control the angle of the generated anchor boxes and to cover enough orientations of typical man-made objects. Meanwhile, the misalignment caused by the two quantization operations in the pooling process is eliminated, and a convolution layer is appended before the fully connected layer of the final classification network to reduce the feature parameters and avoid overfitting. Compared with current generic object detection methods, the proposed algorithm focuses on arbitrarily oriented and dense artificial targets in remote sensing images. After comprehensive evaluation against several state-of-the-art object detection algorithms, our method proves effective at detecting multi-class artificial objects. Experiments on the public DOTA dataset demonstrate that the proposed method, which combines the powerful features extracted by the improved convolutional neural network with multi-scale features and a rotating region proposal network, is more accurate.
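
The rotating region proposal idea adds an angle term to the usual anchor parameterization, so each feature-map cell enumerates anchors over scales, aspect ratios, and orientations. The sketch below illustrates that enumeration; the specific scales, ratios, and angle set are assumptions, not the paper's exact settings.

```python
import itertools
import numpy as np

def rotated_anchors(feat_h, feat_w, stride=16,
                    scales=(32, 64, 128),
                    ratios=(0.5, 1.0, 2.0),
                    angles=(-60, -30, 0, 30, 60, 90)):
    anchors = []
    for i, j in itertools.product(range(feat_h), range(feat_w)):
        cx, cy = (j + 0.5) * stride, (i + 0.5) * stride   # anchor centre in image coordinates
        for s, r, a in itertools.product(scales, ratios, angles):
            w, h = s * np.sqrt(r), s / np.sqrt(r)
            anchors.append((cx, cy, w, h, a))             # (x, y, w, h, angle)
    return np.array(anchors, dtype=np.float32)

# e.g. a 38x38 feature map yields 38 * 38 * 3 * 3 * 6 = 77,976 candidate rotated anchors
```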


Author(s): S. Jiang, H. Zhao, W. Wu, Q. Tan

High resolution remote sensing (HRRS) image scene classification aims to label an image with a specific semantic category. HRRS images contain more details of ground objects and their spatial distribution patterns than low spatial resolution images. Scene classification can bridge the gap between low-level features and high-level semantics, and it can be applied to urban planning, target detection, and other fields. This paper proposes a novel framework for HRRS image scene classification that combines a convolutional neural network (CNN) and XGBoost, using the CNN as a feature extractor and XGBoost as the classifier. The framework is evaluated on two HRRS image datasets, the UC-Merced dataset and the NWPU-RESISC45 dataset, achieving satisfactory accuracies of 95.57% and 83.35%, respectively. The experimental results show that the framework is effective for remote sensing image classification. Furthermore, we believe the framework will be practical for further HRRS scene classification, since it requires less training time.
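
The CNN-as-feature-extractor plus XGBoost-as-classifier pipeline can be sketched as below. The backbone choice (an ImageNet-pretrained ResNet-50) and the XGBoost hyperparameters are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torchvision.models as models
import xgboost as xgb

backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()   # keep the 2048-d globally pooled features
backbone.eval()

@torch.no_grad()
def extract_features(batch):        # batch: (N, 3, 224, 224) image tensor
    return backbone(batch).cpu().numpy()

# X_train = extract_features(train_images); y_train = train_labels
# clf = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
# clf.fit(X_train, y_train)
# accuracy = clf.score(extract_features(test_images), test_labels)
```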

