REMOTE SENSING SCENE CLASSIFICATION USING MULTIPLE PYRAMID POOLING

Author(s):  
Y. Yao ◽  
H. Zhao ◽  
D. Huang ◽  
Q. Tan

Remote sensing image scene classification has gained remarkable attention due to its versatile use in applications such as geospatial object detection, ground object information extraction, and environment monitoring. A scene contains not only information about the ground objects but also the spatial relationships between those objects and their environment. With the rapid growth of remote sensing image data, the need for automatic annotation of image scenes has become more urgent. This paper proposes a new framework for high-resolution remote sensing image scene classification based on a convolutional neural network. To remove the requirement of a fixed-size input image, a multiple pyramid pooling strategy is inserted between the convolutional layers and the fully connected layers. The fixed-size features generated by the multiple pyramid pooling layer are then flattened into a one-dimensional fixed-length vector and fed into the fully connected layers. Our method generates a fixed-length representation regardless of image size while achieving higher classification accuracy. On the UC-Merced and NWPU-RESISC45 datasets, our framework achieved satisfactory accuracies of 93.24% and 88.62%, respectively.
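As an illustration of the pooling strategy described above, the following is a minimal PyTorch sketch of a pyramid pooling layer that turns feature maps of arbitrary spatial size into a fixed-length vector. The class name `PyramidPooling`, the bin sizes (1, 2, 4), and the use of adaptive max pooling are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a pyramid pooling layer of the kind described above.
# Bin sizes and the choice of max pooling are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, bin_sizes=(1, 2, 4)):
        super().__init__()
        self.bin_sizes = bin_sizes

    def forward(self, x):
        # x: (N, C, H, W) feature maps of arbitrary spatial size.
        pooled = []
        for bins in self.bin_sizes:
            # Adaptive pooling yields a bins x bins grid regardless of H, W.
            p = F.adaptive_max_pool2d(x, output_size=bins)
            pooled.append(p.flatten(start_dim=1))   # (N, C * bins * bins)
        # Fixed-length vector: C * sum(bins^2), independent of input size.
        return torch.cat(pooled, dim=1)

# Usage: features from different input sizes collapse to the same vector length.
spp = PyramidPooling()
for size in (224, 300):
    feats = torch.randn(2, 512, size // 32, size // 32)  # hypothetical backbone output
    print(spp(feats).shape)                              # torch.Size([2, 10752]) in both cases
```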

2021 ◽  
Vol 13 (22) ◽  
pp. 4542
Author(s):  
Qingwen Li ◽  
Dongmei Yan ◽  
Wanrong Wu

The complexity of scene images makes research on remote-sensing image scene classification challenging. With the wide application of deep learning in recent years, many remote-sensing scene classification methods using a convolutional neural network (CNN) have emerged. Current CNNs usually output global information by integrating the deep features extracted by the convolutional layers through the fully connected layers; however, the global information extracted in this way is not comprehensive. This paper proposes an improved remote-sensing image scene classification method based on a global self-attention module to address this problem. The global information is derived from the deep features extracted by the CNN. To better express the semantic information of the remote-sensing image, a multi-head self-attention module is introduced for global information augmentation. Meanwhile, a local perception unit is utilized to improve the self-attention module's representation capability for local objects. The proposed method's effectiveness is validated through comparative experiments with various training ratios and different scales on public datasets (UC Merced, AID, and NWPU-RESISC45). The precision of our proposed model is significantly improved compared to other remote-sensing image scene classification methods.
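A minimal sketch of the general idea, applying multi-head self-attention to CNN feature maps for global augmentation, is shown below. The class name `GlobalSelfAttention`, the head count, the embedding size, and the depthwise convolution standing in for the local perception unit are assumptions, not the paper's exact module.

```python
# Sketch: global self-attention over CNN feature maps, with a depthwise conv
# as an assumed stand-in for the local perception unit described above.
import torch
import torch.nn as nn

class GlobalSelfAttention(nn.Module):
    def __init__(self, channels=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # Depthwise conv injects local context before attention (assumption).
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)

    def forward(self, x):
        n, c, h, w = x.shape
        x = x + self.local(x)                      # local perception
        tokens = x.flatten(2).transpose(1, 2)      # (N, H*W, C) token sequence
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)      # residual + layer norm
        return tokens.transpose(1, 2).reshape(n, c, h, w)

# Usage on a hypothetical backbone output.
module = GlobalSelfAttention(channels=256, num_heads=8)
print(module(torch.randn(2, 256, 14, 14)).shape)   # torch.Size([2, 256, 14, 14])
```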


2021 ◽  
Vol 13 (18) ◽  
pp. 3579
Author(s):  
Junge Shen ◽  
Chi Zhang ◽  
Yu Zheng ◽  
Ruxin Wang

Remote sensing image scene classification is an important task in remote sensing image applications and benefits from the strong performance of deep convolutional neural networks (CNNs). When applying deep models to this task, the challenges are, on the one hand, that targets with highly different scales may exist in the same image and small targets can be lost in the deep feature maps of CNNs; and, on the other hand, that remote sensing image data exhibit high inter-class similarity and high intra-class variance. Both factors can limit the performance of deep models, which motivates us to develop an adaptive decision-level information fusion framework that can be combined with any CNN backbone. Specifically, given a CNN backbone that predicts multiple classification scores from the feature maps of different layers, we develop a pluggable importance factor generator that predicts a factor for each score. The factors measure how confident the scores of the different layers are with respect to the final output. Formally, the final score is obtained by a class-wise, weighted summation of the scores and the corresponding factors. To reduce the co-adaptation effect among the scores of different layers, we propose a stochastic decision-level fusion training strategy that enables each classification score to randomly participate in the decision-level fusion. Experiments on four popular datasets, including the UC Merced Land-Use dataset, the RSSCN7 dataset, the AID dataset, and the NWPU-RESISC45 dataset, demonstrate the superiority of the proposed method over other state-of-the-art methods.
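The sketch below illustrates the fusion scheme described above: per-layer classification scores are weighted by predicted importance factors and summed class-wise, and during training each layer's score may randomly drop out of the fusion. The class name `DecisionFusion`, the per-layer factor heads, the drop probability, and the fallback to an unweighted mean are illustrative assumptions.

```python
# Sketch: decision-level fusion of per-layer scores with importance factors
# and stochastic participation during training (details are assumptions).
import torch
import torch.nn as nn

class DecisionFusion(nn.Module):
    def __init__(self, num_layers=3, num_classes=45, drop_p=0.3):
        super().__init__()
        # One small factor predictor per layer score (assumed design).
        self.factor_heads = nn.ModuleList(
            nn.Sequential(nn.Linear(num_classes, 1), nn.Sigmoid())
            for _ in range(num_layers)
        )
        self.drop_p = drop_p

    def forward(self, scores):
        # scores: list of (N, num_classes) tensors from different layers.
        fused = 0.0
        for head, s in zip(self.factor_heads, scores):
            if self.training and torch.rand(1).item() < self.drop_p:
                continue                           # stochastic fusion: skip this layer
            factor = head(s)                       # (N, 1) confidence factor
            fused = fused + factor * s             # class-wise weighted sum
        if isinstance(fused, float):               # all layers dropped: fall back
            fused = sum(scores) / len(scores)
        return fused

fusion = DecisionFusion()
layer_scores = [torch.randn(4, 45) for _ in range(3)]
print(fusion(layer_scores).shape)                  # torch.Size([4, 45])
```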


2021 ◽  
Vol 336 ◽  
pp. 06030
Author(s):  
Fengbing Jiang ◽  
Fang Li ◽  
Guoliang Yang

Convolutional neural networks for remote sensing image scene classification consume a lot of time and storage space to train, test, and save the model. In this paper, elastic variables are first defined for the convolutional layer filters, and by combining filter elasticity with the batch normalization scaling factor, a compound pruning method for convolutional neural networks is proposed. Only the pruning-rate hyperparameter needs to be adjusted during training. During training, the performance of the model can be further improved by means of transfer learning. Algorithm tests are carried out on the NWPU-RESISC45 remote sensing image dataset to verify the effectiveness of the proposed method. According to the experimental results, the proposed method can not only effectively reduce the number of model parameters and the amount of computation, but also maintain the accuracy of the algorithm in remote sensing image classification.
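A minimal sketch of a compound channel-pruning criterion in this spirit is given below: batch-normalization scaling factors are combined with a per-filter statistic, and channels below a threshold set by the pruning rate are marked for removal. The function `channel_prune_mask` and the use of the filters' mean absolute weight as a stand-in for "elasticity" are assumptions; the paper's actual definition of filter elasticity may differ.

```python
# Sketch: prune mask from BN scaling factors combined with an assumed
# per-filter statistic; only the prune rate is exposed as a hyperparameter.
import torch
import torch.nn as nn

def channel_prune_mask(conv: nn.Conv2d, bn: nn.BatchNorm2d, prune_rate: float):
    """Return a boolean mask of output channels to keep."""
    gamma = bn.weight.detach().abs()                               # BN scaling factor per channel
    elasticity = conv.weight.detach().abs().mean(dim=(1, 2, 3))    # assumed filter statistic
    score = gamma * elasticity                                     # compound importance score
    k = int(prune_rate * score.numel())
    if k == 0:
        return torch.ones_like(score, dtype=torch.bool)
    threshold = score.kthvalue(k).values
    return score > threshold

# Usage: mark the weakest 30% of channels of a conv/BN pair for removal.
conv, bn = nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128)
mask = channel_prune_mask(conv, bn, prune_rate=0.3)
print(mask.sum().item(), "of", mask.numel(), "channels kept")
```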


2019 ◽  
Vol 11 (5) ◽  
pp. 494 ◽  
Author(s):  
Wei Zhang ◽  
Ping Tang ◽  
Lijun Zhao

Remote sensing image scene classification is one of the most challenging problems in understanding high-resolution remote sensing images. Deep learning techniques, especially the convolutional neural network (CNN), have improved the performance of remote sensing image scene classification due to their powerful capability for feature learning and reasoning. However, several fully connected layers are always added to the end of CNN models, which is not efficient in capturing the hierarchical structure of the entities in the images and does not fully consider the spatial information that is important to classification. Fortunately, the capsule network (CapsNet), a novel architecture that uses a group of neurons as a capsule (vector) in place of the single neuron of traditional networks and can encode the properties and spatial information of image features to achieve equivariance, has become an active research direction in the classification field in the past two years. Motivated by this idea, this paper proposes an effective remote sensing image scene classification architecture named CNN-CapsNet to make full use of the merits of these two models: CNN and CapsNet. First, a CNN without fully connected layers is used as an initial feature map extractor; specifically, a deep CNN model pretrained on the ImageNet dataset is selected as the feature extractor in this paper. Then, the initial feature maps are fed into a newly designed CapsNet to obtain the final classification result. The proposed architecture is extensively evaluated on three challenging public benchmark remote sensing image datasets: the UC Merced Land-Use dataset with 21 scene categories, the AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that the proposed method achieves competitive classification performance compared with state-of-the-art methods.
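A structural sketch of such a CNN-to-capsule pipeline is shown below: a truncated CNN backbone feeds primary capsules, which are mapped to one capsule per scene class whose length serves as the class score. The class name `CNNCapsNet`, the capsule dimensions, the shared routing projection (in place of full dynamic routing), and the VGG-16 backbone with `weights=None` are simplifying assumptions, not the authors' exact design.

```python
# Structural sketch of a CNN + capsule pipeline (simplified routing; details assumed).
import torch
import torch.nn as nn
from torchvision.models import vgg16

def squash(v, dim=-1):
    # Squashing non-linearity: keeps direction, maps length into [0, 1).
    norm_sq = (v ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / (norm_sq.sqrt() + 1e-8)

class CNNCapsNet(nn.Module):
    def __init__(self, num_classes=21, prim_dim=8, cls_dim=16):
        super().__init__()
        # weights=None avoids a download here; in practice the backbone is ImageNet-pretrained.
        self.backbone = vgg16(weights=None).features
        self.primary = nn.Conv2d(512, 32 * prim_dim, kernel_size=3, padding=1)
        # Full CapsNet uses one transform per (capsule type, class) pair with dynamic
        # routing; a shared linear map keeps this sketch short (assumption).
        self.route = nn.Linear(prim_dim, num_classes * cls_dim)
        self.prim_dim, self.cls_dim, self.num_classes = prim_dim, cls_dim, num_classes

    def forward(self, x):
        f = self.backbone(x)                                  # (N, 512, H, W)
        p = self.primary(f)                                   # (N, 32*prim_dim, H, W)
        n = p.shape[0]
        p = squash(p.view(n, -1, self.prim_dim))              # (N, num_caps, prim_dim)
        votes = self.route(p).view(n, -1, self.num_classes, self.cls_dim)
        cls_caps = squash(votes.mean(dim=1))                  # (N, num_classes, cls_dim)
        return cls_caps.norm(dim=-1)                          # capsule length = class score

model = CNNCapsNet(num_classes=21)
print(model(torch.randn(1, 3, 224, 224)).shape)               # torch.Size([1, 21])
```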


2021 ◽  
Vol 11 (19) ◽  
pp. 9204
Author(s):  
Xinyi Ma ◽  
Zhifeng Xiao ◽  
Hong-sik Yun ◽  
Seung-Jun Lee

High-resolution remote sensing image scene classification is a challenging visual task due to the large intra-class variance and small inter-class variance between categories. To accurately recognize the scene categories, it is essential to learn discriminative features from both global and local critical regions. Recent efforts focus on encouraging the network to learn multigranularity features by destroying the spatial information of the input image at different scales, which introduces meaningless edges that are harmful to training. In this study, we propose a novel method named Semantic Multigranularity Feature Learning Network (SMGFL-Net) for remote sensing image scene classification. The core idea is to learn both global and multigranularity local features from rearranged intermediate feature maps, thus eliminating the meaningless edges. These features are then fused for the final prediction. Our proposed framework is compared with a collection of state-of-the-art (SOTA) methods on two fine-grained remote sensing image scene datasets, the NWPU-RESISC45 and Aerial Image Dataset (AID). We justify several design choices, including the branch granularities, fusion strategies, pooling operations, and the necessity of feature map rearrangement, through a comparative study. Moreover, the overall results show that SMGFL-Net consistently outperforms its peers in classification accuracy, and the superiority is more apparent with less training data, demonstrating the efficacy of our feature learning approach.
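The core operation, rearranging an intermediate feature map at a chosen granularity rather than shuffling input pixels, can be sketched as below. The function name `rearrange_feature_map`, the granularity value, and the uniform random permutation of blocks are assumptions for illustration.

```python
# Sketch: randomly permute granularity x granularity blocks of a feature map,
# so multigranularity features are learned without cutting the input image.
import torch

def rearrange_feature_map(x: torch.Tensor, granularity: int) -> torch.Tensor:
    """Randomly permute granularity x granularity blocks of a (N, C, H, W) map."""
    n, c, h, w = x.shape
    assert h % granularity == 0 and w % granularity == 0
    bh, bw = h // granularity, w // granularity
    # Split into blocks: (N, C, g, bh, g, bw) -> (N, C, g*g, bh, bw).
    blocks = (x.view(n, c, granularity, bh, granularity, bw)
                .permute(0, 1, 2, 4, 3, 5)
                .reshape(n, c, granularity * granularity, bh, bw))
    perm = torch.randperm(granularity * granularity)
    blocks = blocks[:, :, perm]                    # shuffle block order
    # Reassemble into the original spatial layout.
    return (blocks.view(n, c, granularity, granularity, bh, bw)
                  .permute(0, 1, 2, 4, 3, 5)
                  .reshape(n, c, h, w))

# Usage: shuffle a 14x14 map as 2x2 blocks of 7x7.
fm = torch.randn(2, 256, 14, 14)
print(rearrange_feature_map(fm, granularity=2).shape)   # torch.Size([2, 256, 14, 14])
```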


2021 ◽  
Vol 13 (4) ◽  
pp. 1917
Author(s):  
Alma Elizabeth Thuestad ◽  
Ole Risbøl ◽  
Jan Ingolf Kleppe ◽  
Stine Barlindhaug ◽  
Elin Rose Myrvoll

What can remote sensing contribute to archaeological surveying in subarctic and arctic landscapes? The pros and cons of remote sensing data vary, as do the areas of utilization and methodological approaches. We assessed the applicability of remote sensing for archaeological surveying of northern landscapes, using airborne laser scanning (LiDAR) and satellite and aerial images to map archaeological features as a basis for (a) assessing the pros and cons of the different approaches and (b) assessing the potential detection rate of remote sensing. Interpretation of the images and of a LiDAR-based bare-earth digital terrain model (DTM) was based on visual analyses aided by processing and visualization techniques. A total of 368 features were identified in the aerial images, 437 in the satellite images, and 1186 in the DTM. LiDAR yielded the best results, especially for hunting pits, while image data proved suitable for dwellings and settlement sites. Feature characteristics proved to be a key factor for detectability, in both LiDAR and image data. This study has shown that LiDAR and remote sensing image data are highly applicable for archaeological surveying in northern landscapes, and that a multi-sensor approach contributes to high detection rates. Our results have improved the inventory of archaeological sites in a non-destructive and minimally invasive manner.


2020 ◽  
Vol 17 (6) ◽  
pp. 968-972 ◽  
Author(s):  
Tianyu Wei ◽  
Jue Wang ◽  
Wenchao Liu ◽  
He Chen ◽  
Hao Shi

2021 ◽  
Vol 13 (10) ◽  
pp. 1950
Author(s):  
Cuiping Shi ◽  
Xin Zhao ◽  
Liguo Wang

In recent years, with the rapid development of computer vision, increasing attention has been paid to remote sensing image scene classification. To improve classification performance, many studies have increased the depth of convolutional neural networks (CNNs) and expanded the width of the network to extract more deep features, thereby increasing the complexity of the model. To address this problem, in this paper we propose a lightweight convolutional neural network based on attention-oriented multi-branch feature fusion (AMB-CNN) for remote sensing image scene classification. First, we propose two convolution combination modules for feature extraction, through which the deep features of images can be fully extracted by the cooperation of multiple convolutions. Then, the weights of the features are calculated, and the extracted deep features are fed to the attention mechanism for further feature extraction. Next, all of the extracted features are fused by multiple branches. Finally, depthwise separable convolution and asymmetric convolution are used to greatly reduce the number of parameters. The experimental results show that, compared with some state-of-the-art methods, the proposed method retains a clear advantage in classification accuracy while using very few parameters.
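The two parameter-saving convolutions mentioned above can be sketched as follows; channel counts and kernel sizes are illustrative assumptions, not the AMB-CNN configuration.

```python
# Sketch: depthwise separable and asymmetric convolutions vs. a standard 3x3
# convolution, compared by parameter count (illustrative sizes).
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, k=3):
    # Per-channel spatial filtering followed by a 1x1 pointwise mix.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, 1),
    )

def asymmetric(in_ch, out_ch, k=3):
    # A kxk kernel factorized into 1xk and kx1 convolutions.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, k // 2)),
        nn.Conv2d(out_ch, out_ch, (k, 1), padding=(k // 2, 0)),
    )

def count_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(128, 128, 3, padding=1)
print(count_params(standard))                        # standard 3x3 convolution
print(count_params(depthwise_separable(128, 128)))   # far fewer parameters
print(count_params(asymmetric(128, 128)))            # roughly two thirds of the standard count
x = torch.randn(1, 128, 28, 28)
print(depthwise_separable(128, 128)(x).shape, asymmetric(128, 128)(x).shape)
```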

