Cross-Domain Scene Classification Based on a Spatial Generalized Neural Architecture Search for High Spatial Resolution Remote Sensing Images

2021 ◽  
Vol 13 (17) ◽  
pp. 3460
Author(s):  
Yuling Chen ◽  
Wentao Teng ◽  
Zhen Li ◽  
Qiqi Zhu ◽  
Qingfeng Guan

By labelling high spatial resolution (HSR) images with specific semantic classes according to geographical properties, scene classification has proven to be an effective method for HSR remote sensing image semantic interpretation. Deep learning is widely applied in HSR remote sensing scene classification. Most deep-learning-based scene classification methods assume that the training and test data come from the same dataset or obey similar feature distributions. In practical application scenarios, however, this assumption is difficult to guarantee, and repeating data annotation and network design for each new dataset is time-consuming and labor-intensive. Neural architecture search (NAS) can automate the process of redesigning the baseline network, but traditional NAS lacks the ability to generalize to different settings and tasks. In this paper, a novel neural architecture search framework, the spatial generalization neural architecture search (SGNAS) framework, is proposed. The framework applies spatially generalized NAS to cross-domain scene classification of HSR images to bridge the domain gap. The proposed SGNAS can automatically search for an architecture suitable for HSR image scene classification, and the searched network follows design principles similar to those of manually designed networks, which allows it to migrate to different tasks. To obtain a simple, low-dimensional search space, the traditional NAS search space was optimized using a human-in-the-loop method, and the optimized search space was then generalized to extend it to different tasks. The experimental results demonstrate that the network searched by the SGNAS framework generalizes well and is effective for cross-domain scene classification of HSR images, in terms of both accuracy and time efficiency.
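To make the idea concrete, the sketch below illustrates the general pattern of searching over a deliberately small, low-dimensional architecture space on a source scene dataset and then carrying the selected architecture to a target domain. It is a minimal illustration, not the SGNAS algorithm: the dataset paths, the two search variables (depth, width), the random-search budget, and the proxy training length are all illustrative assumptions.

```python
# Minimal random-search sketch over a tiny architecture space (NOT SGNAS).
# Paths, search space, and budgets are placeholders for illustration only.
import random
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def build_candidate(depth, width, num_classes):
    """Plain convolutional stack defined by two search variables."""
    layers, in_ch = [], 3
    for _ in range(depth):
        layers += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, num_classes)]
    return nn.Sequential(*layers)

def quick_score(model, loader, steps=50):
    """Very short proxy training run; returns the final-batch accuracy."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    acc = 0.0
    for step, (x, y) in enumerate(loader):
        if step >= steps:
            break
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        acc = (model(x).argmax(1) == y).float().mean().item()
    return acc

tf = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
# Hypothetical HSR scene datasets arranged as class-labelled image folders.
source = datasets.ImageFolder("data/source_scenes", transform=tf)
src_loader = DataLoader(source, batch_size=32, shuffle=True)

best, best_cfg = -1.0, None
for _ in range(10):  # tiny random-search budget
    cfg = {"depth": random.choice([2, 3, 4]), "width": random.choice([32, 64, 128])}
    score = quick_score(build_candidate(num_classes=len(source.classes), **cfg), src_loader)
    if score > best:
        best, best_cfg = score, cfg
print("best config found on the source domain:", best_cfg)
# The selected architecture would then be retrained or fine-tuned on the
# target-domain scenes for cross-domain evaluation.
```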

2021 ◽  
pp. 107949
Author(s):  
Yifan Fan ◽  
Xiaotian Ding ◽  
Jindong Wu ◽  
Jian Ge ◽  
Yuguo Li

Forests ◽  
2019 ◽  
Vol 10 (11) ◽  
pp. 1047 ◽  
Author(s):  
Ying Sun ◽  
Jianfeng Huang ◽  
Zurui Ao ◽  
Dazhao Lao ◽  
Qinchuan Xin

The monitoring of tree species diversity is important for forest or wetland ecosystem service maintenance or resource management. Remote sensing is an efficient alternative to traditional field work to map tree species diversity over large areas. Previous studies have used light detection and ranging (LiDAR) and imaging spectroscopy (hyperspectral or multispectral remote sensing) for species richness prediction. The recent development of very high spatial resolution (VHR) RGB images has enabled detailed characterization of canopies and forest structures. In this study, we developed a three-step workflow for mapping tree species diversity, the aim of which was to increase knowledge of tree species diversity assessment using deep learning in a tropical wetland (Haizhu Wetland) in South China based on VHR-RGB images and LiDAR points. Firstly, individual trees were detected based on a canopy height model (CHM, derived from LiDAR points) by the local-maxima-based method in the FUSION software (Version 3.70, Seattle, USA). Then, tree species at the individual tree level were identified via a patch-based image input method, which cropped the RGB images into small patches (the individually detected trees) based on the tree apexes detected. Three different deep learning methods (i.e., AlexNet, VGG16, and ResNet50) were modified to classify the tree species, as they can make good use of the spatial context information. Finally, four diversity indices, namely, the Margalef richness index, the Shannon–Wiener diversity index, the Simpson diversity index, and the Pielou evenness index, were calculated from fixed 30 × 30 m subsets for assessment. In the classification phase, VGG16 had the best performance, with an overall accuracy of 73.25% for 18 tree species. Based on the classification results, mapping of tree species diversity showed reasonable agreement with field survey data (Margalef: R² = 0.4562, root-mean-square error (RMSE) = 0.5629; Shannon–Wiener: R² = 0.7948, RMSE = 0.7202; Simpson: R² = 0.7907, RMSE = 0.1038; Pielou: R² = 0.5875, RMSE = 0.3053). While challenges remain for individual tree detection and species classification, the deep-learning-based solution shows potential for mapping tree species diversity.
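The four indices named above have standard textbook formulations that can be computed directly from the per-plot species counts produced by the classification step. The sketch below assumes the counts are tallied within each 30 × 30 m subset after individual-tree classification; the exact formulations used in the paper may differ slightly (for instance, Simpson's index is shown here in its common 1 − Σpᵢ² form), and the species names in the usage example are purely illustrative.

```python
# Standard forms of the Margalef, Shannon-Wiener, Simpson, and Pielou indices,
# computed from predicted species labels for one plot (illustrative sketch).
import math
from collections import Counter

def diversity_indices(species_labels):
    """species_labels: list of predicted species, one entry per detected tree."""
    counts = Counter(species_labels)
    N = sum(counts.values())                 # total number of trees in the plot
    S = len(counts)                          # number of species (richness)
    p = [c / N for c in counts.values()]     # relative abundances

    margalef = (S - 1) / math.log(N) if N > 1 else 0.0
    shannon = -sum(pi * math.log(pi) for pi in p)
    simpson = 1.0 - sum(pi * pi for pi in p)          # Gini-Simpson form
    pielou = shannon / math.log(S) if S > 1 else 0.0  # evenness
    return {"Margalef": margalef, "Shannon-Wiener": shannon,
            "Simpson": simpson, "Pielou": pielou}

# Example: hypothetical species labels predicted for one 30 x 30 m plot
print(diversity_indices(["Ficus", "Ficus", "Bambusa", "Cinnamomum", "Ficus"]))
```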


2021 ◽  
Author(s):  
Rajagopal T K P ◽  
Sakthi G ◽  
Prakash J

Hyperspectral remote sensing image classification is a widely used method for scene analysis of high spatial resolution remote sensing data, and classification is a critical task in remote sensing processing. Because different materials reflect differently in particular spectral bands, traditional pixel-wise classifiers identify and classify materials on the basis of their spectral curves (pixels). Owing to the high dimensionality of high spatial resolution remote sensing data and the limited number of labelled samples, such images tend to suffer from the Hughes phenomenon, which can pose a serious problem. To overcome this small-sample problem, several learning methods, such as the Support Vector Machine (SVM) and other kernel-based methods, have recently been introduced for remote sensing image classification and have shown good performance. In this work, an SVM with a Radial Basis Function (RBF) kernel is employed, together with a feature learning approach for hyperspectral image classification based on Convolutional Neural Networks (CNNs). Experimental results on various hyperspectral image datasets indicate that the proposed method achieves better classification performance than traditional methods such as the SVM with an RBF kernel, as well as conventional deep-learning-based methods (CNN).
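For reference, the snippet below is a minimal sketch of the baseline pixel-wise SVM-with-RBF-kernel setup the abstract refers to, using scikit-learn. The hyperspectral cube shape, band count, number of classes, training fraction, and hyperparameters are illustrative assumptions, not the authors' settings; random data stands in for a real scene.

```python
# Minimal pixel-wise SVM (RBF kernel) classification sketch for a
# hyperspectral cube; all sizes and hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assume a hyperspectral cube of shape (rows, cols, bands) and a label map
# of shape (rows, cols); here synthetic data stands in for a real scene.
rng = np.random.default_rng(0)
cube = rng.random((100, 100, 200))             # 200 spectral bands
labels = rng.integers(0, 9, size=(100, 100))   # 9 land-cover classes

X = cube.reshape(-1, cube.shape[-1])           # one spectral curve per pixel
y = labels.ravel()

# Small labelled subset to mimic the limited-sample (Hughes) setting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.05, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```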

