Deep Learning Spatial-Spectral Classification of Remote Sensing Images by Applying Morphology-Based Differential Extinction Profile (DEP)

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2893
Author(s):  
Nafiseh Kakhani ◽  
Mehdi Mokhtarzade ◽  
Mohammad Javad Valadan Zoej

As remote sensing technology has improved, the spatial resolution of satellite images has become finer, enabling precise analysis of small, complex objects in a scene. Thus, the need for new, efficient algorithms such as spatial-spectral classification methods is growing. One of the most successful approaches is based on the extinction profile (EP), which can extract contextual information from remote sensing data. Moreover, deep learning classifiers have drawn attention in the remote sensing community in the past few years, and recent progress has shown their effectiveness at solving different problems, particularly segmentation tasks. This paper proposes a novel approach based on a new concept, the differential extinction profile (DEP), which yields an input feature vector containing both spectral and spatial information. The input vector is then fed into a proposed straightforward deep-learning-based classifier to produce a thematic map. The approach is evaluated on two urban datasets from the Pleiades and WorldView-2 satellites. To demonstrate the capabilities of the suggested approach, the final results are compared with those of other classification strategies using different input vectors and common classifiers such as support vector machines (SVM) and random forests (RF). The proposed approach shows significant improvement in terms of three criteria: overall accuracy, Kappa coefficient, and total disagreement.
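A minimal sketch of the classification stage described above, assuming the DEP features have already been computed per pixel (the morphology-based DEP computation itself is not shown); the feature dimensions and layer sizes are illustrative only:

```python
# Spectral-spatial classification sketch: per-pixel spectral bands are stacked
# with precomputed DEP features and fed to a small MLP. Dimensions are hypothetical.
import torch
import torch.nn as nn

n_bands, n_dep, n_classes = 8, 16, 5           # assumed dimensions

class SpectralSpatialMLP(nn.Module):
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

spectral = torch.rand(1024, n_bands)           # per-pixel reflectances
dep = torch.rand(1024, n_dep)                  # per-pixel DEP responses (placeholder)
x = torch.cat([spectral, dep], dim=1)          # joint spectral-spatial input vector
logits = SpectralSpatialMLP(n_bands + n_dep, n_classes)(x)   # one label per pixel
```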

2020 ◽  
pp. 35
Author(s):  
M. Campos-Taberner ◽  
F.J. García-Haro ◽  
B. Martínez ◽  
M.A. Gilabert

The use of deep learning techniques for remote sensing applications has recently increased. These algorithms have proven successful in parameter estimation and image classification. However, little effort has been made to make them understandable, leading to their implementation as "black boxes". This work aims to evaluate the performance and clarify the operation of a deep learning algorithm based on a bidirectional recurrent network of long short-term memory units (2-BiLSTM). Land use classification in the Valencian Community based on Sentinel-2 image time series, in the framework of the common agricultural policy (CAP), is used as an example. It is verified that the accuracy of the deep learning technique is superior (98.6% overall success) to that of other algorithms such as decision trees (DT), k-nearest neighbors (k-NN), neural networks (NN), support vector machines (SVM), and random forests (RF). The performance of the classifier has been studied as a function of time and of the predictors used. It is concluded that, in the study area, the most relevant information used by the network in the classification comes from the summer images and from the spectral and spatial information derived from the red and near-infrared bands. These results open the door to new studies in the field of explainable deep learning in remote sensing applications.
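As a rough illustration of the kind of classifier described, a stacked bidirectional LSTM over per-parcel Sentinel-2 time series is sketched below; the hidden sizes, number of dates, and band count are assumptions, not the paper's exact 2-BiLSTM configuration:

```python
# Two-layer bidirectional LSTM classifier over a multi-date spectral time series.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # classify from the last time step

# e.g. 20 acquisition dates, 10 Sentinel-2 bands, 15 land-use classes (assumed)
x = torch.rand(32, 20, 10)
logits = BiLSTMClassifier(10, 15)(x)
```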


2021 ◽  
Vol 13 (15) ◽  
pp. 2903
Author(s):  
Wancheng Tao ◽  
Zixuan Xie ◽  
Ying Zhang ◽  
Jiayu Li ◽  
Fu Xuan ◽  
...  

Black soil is one of the most productive soils, with a high organic matter content. Crop residue cover is important for protecting black soil by alleviating soil erosion and increasing soil organic carbon. Accurately mapping crop residue covered areas from remote sensing images makes it possible to monitor the protection of black soil at the regional scale. Considering the inhomogeneity and randomness resulting from differences in human management, high spatial resolution Chinese GF-1 B/D imagery and the developed MSCU-net+C deep learning method are used to map the corn residue covered area (CRCA) in this study. The developed MSCU-net+C combines a multiscale convolution group (MSCG), a global loss function, and the Convolutional Block Attention Module (CBAM) with U-net, followed by a fully connected conditional random field (FCCRF). The effectiveness of the proposed MSCU-net+C is validated by ablation and comparison experiments for mapping CRCA in Lishu County, Jilin Province, China. The accuracy assessment shows that the developed MSCU-net+C improves the CRCA classification accuracy from IOUAVG = 0.8604 and KappaAVG = 0.8864 for U-net to IOUAVG = 0.9081 and KappaAVG = 0.9258. Our network variants and other deep semantic segmentation networks (MU-net, GU-net, MSCU-net, SegNet, and Dlv3+) improve IOUAVG/KappaAVG over U-net by 0.0091/0.0058, 0.0133/0.0091, 0.044/0.0345, 0.0104/0.0069, and 0.0107/0.0072, respectively. The IOUAVG/KappaAVG classification accuracies of traditional machine learning methods, including the support vector machine (SVM) and neural network (NN), are 0.576/0.5526 and 0.6417/0.6482, respectively. These results show that the developed MSCU-net+C can be used to map CRCA for monitoring black soil protection.
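One plausible reading of the multiscale convolution group (MSCG) is a set of parallel convolutions with different kernel sizes whose outputs are fused; the sketch below follows that assumption (CBAM, the global loss, and FCCRF post-processing are omitted):

```python
# Multiscale convolution group sketch: parallel branches with different
# receptive fields, concatenated and fused by a 1x1 convolution.
import torch
import torch.nn as nn

class MultiScaleConvGroup(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)                 # assumed kernel sizes
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)

x = torch.rand(1, 4, 64, 64)                   # e.g. a 4-band GF-1 patch (placeholder)
y = MultiScaleConvGroup(4, 32)(x)              # (1, 32, 64, 64)
```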


2020 ◽  
Vol 12 (5) ◽  
pp. 832 ◽  
Author(s):  
Chunhua Liao ◽  
Jinfei Wang ◽  
Qinghua Xie ◽  
Ayman Al Baz ◽  
Xiaodong Huang ◽  
...  

Annual crop inventory information is important for many agricultural applications and government statistics. The synergistic use of multi-temporal polarimetric synthetic aperture radar (SAR) and available multispectral remote sensing data can reduce temporal gaps and provide the spectral and polarimetric information of the crops, which is effective for crop classification in areas with frequent cloud interference. The main objectives of this study are to develop a deep learning model to map agricultural areas using multi-temporal full polarimetric SAR and multispectral remote sensing data, and to evaluate the influence of different input features on the performance of deep learning methods in crop classification. In this study, a one-dimensional convolutional neural network (Conv1D) was proposed and tested on multi-temporal RADARSAT-2 and VENµS data for crop classification. Compared with the Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN) and non-deep learning methods including XGBoost, Random Forest (RF), and Support Vector Machine (SVM), the Conv1D performed best when the multi-temporal RADARSAT-2 data (Pauli decomposition or coherency matrix) and VENµS multispectral data were fused by the Minimum Noise Fraction (MNF) transformation. The Pauli decomposition and coherency matrix gave similar overall accuracy (OA) for Conv1D when fused with the VENµS data by the MNF transformation (OA = 96.65 ± 1.03% and 96.72 ± 0.77%). The MNF transformation improved the OA and F-score for most classes when Conv1D was used. The results reveal that the coherency matrix has great potential in crop classification and that the MNF transformation of multi-temporal RADARSAT-2 and VENµS data can enhance the performance of Conv1D.
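A minimal sketch of a Conv1D classifier over a fused per-sample feature vector (e.g. stacked MNF components), with illustrative filter counts, kernel sizes, and class count rather than the paper's configuration:

```python
# 1D CNN over a fused multi-temporal feature vector treated as a 1D signal.
import torch
import torch.nn as nn

class Conv1DClassifier(nn.Module):
    def __init__(self, n_channels, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, length)
        return self.head(self.features(x).squeeze(-1))

x = torch.rand(16, 1, 30)                      # 30 fused MNF features per sample (assumed)
logits = Conv1DClassifier(1, 8)(x)             # 8 crop classes (assumed)
```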


Symmetry ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 28
Author(s):  
Ziqiang Yao ◽  
Jinlu Jia ◽  
Yurong Qian

Cloud detection plays a vital role in remote sensing data preprocessing. Traditional cloud detection algorithms have difficulty with feature extraction and thus produce poor detection results when processing remote sensing images with uneven cloud distribution and complex surface backgrounds. To achieve better detection results, a cloud detection method with multi-scale feature extraction and a content-aware reassembly network (MCNet) is proposed. Using pyramid convolution and channel attention mechanisms to enhance the model's feature extraction capability, MCNet can fully extract the spatial and channel information of clouds in an image. Content-aware reassembly is used to ensure that upsampling in the network recovers sufficient deep semantic information, improving the model's cloud detection performance. The experimental results show that the proposed MCNet model achieves good results in cloud detection tasks.
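The abstract does not specify the channel attention design; a squeeze-and-excitation style block is one common form, sketched below under that assumption (pyramid convolution and content-aware reassembly are not shown):

```python
# Squeeze-and-excitation style channel attention: global pooling produces a
# per-channel weight that rescales the feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                           # reweight feature channels

feat = torch.rand(2, 64, 128, 128)
out = ChannelAttention(64)(feat)
```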


Information ◽  
2020 ◽  
Vol 11 (7) ◽  
pp. 365
Author(s):  
Chenming Xu ◽  
Yunlong Mao

This study introduces a software-based traffic congestion monitoring system. Transportation systems control the traffic between cities all over the world, and congestion occurs not only in cities but also on highways and elsewhere; current systems perform poorly in areas without monitoring infrastructure. To overcome the limitations of the current traffic system in obtaining road data and to expand its visual range, the system uses remote sensing data as the data source for judging congestion. Since some remote sensing data must be kept confidential, protecting it during the deep learning training process is a problem that needs to be solved. Compared with general deep learning training methods, this study provides a federated learning method to identify vehicle targets in remote sensing images, solving the problem of data privacy in training on remote sensing data. The experiment takes remote sensing image datasets of Los Angeles Road and Washington Road as training samples; the training results achieve an accuracy of about 85%, and the estimated processing time per image can be as low as 0.047 s. In the final experimental results, the system can automatically identify vehicle targets in the remote sensing images to detect congestion.
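A toy sketch of the federated idea: each data owner trains locally on its private imagery and only model weights are averaged centrally (FedAvg-style aggregation). The aggregation below is generic and not the study's specific protocol; the detector architecture and client training loop are placeholders:

```python
# Federated averaging of locally trained models: the server never sees raw
# remote sensing tiles, only the clients' model parameters.
import copy
import torch

def federated_average(client_models):
    """Average the parameters of locally trained models (FedAvg)."""
    global_model = copy.deepcopy(client_models[0])
    global_state = global_model.state_dict()
    for key in global_state:
        stacked = torch.stack([m.state_dict()[key].float()
                               for m in client_models])
        global_state[key] = stacked.mean(dim=0)
    global_model.load_state_dict(global_state)
    return global_model

# usage sketch: after each round, clients send their trained copies and the
# server broadcasts federated_average(clients) back to them.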


2021 ◽  
Vol 13 (8) ◽  
pp. 1507
Author(s):  
Haibo Wang ◽  
Jianchao Qi ◽  
Yufei Lei ◽  
Jun Wu ◽  
Bo Li ◽  
...  

Automatic detection of newly constructed building areas (NCBAs) plays an important role in ecological environment monitoring, urban management, and urban planning. Compared with low- and middle-resolution remote sensing images, high-resolution remote sensing images are superior in spatial resolution and in displaying refined spatial details. Yet their spectral heterogeneity and complexity have impeded change detection research for high-resolution remote sensing images. As generalized machine learning (including deep learning) technologies advance, the efficiency and accuracy of ground-object recognition in remote sensing have substantially improved, providing a new solution for change detection in high-resolution remote sensing images. To this end, this study proposes a refined NCBA detection method based on generalized machine learning, consisting of four parts: (1) pre-processing; (2) candidate NCBAs are obtained from bi-temporal building masks acquired by deep learning semantic segmentation and then registered one by one; (3) rules and a support vector machine (SVM) are jointly adopted to classify NCBAs into high, medium, and low confidence; and (4) the final NCBA vectors are obtained by post-processing. In addition, area-based and pixel-based methods are adopted for accuracy assessment. Firstly, the proposed method is applied to three groups of GF1 images covering the urban fringe areas of Jinan, with experimental results divided into three categories: high, high-medium, and high-medium-low confidence. The results show that NCBAs of high confidence achieve the highest F1 score and the best overall effect, so only NCBAs of high confidence are taken as the final detection result. Specifically, in NCBA detection for the three groups of GF1 images in Jinan, the mean Recall of the area-based and pixel-based assessment methods reaches around 77% and 91%, respectively, the mean Pixel Accuracy (PA) 88% and 92%, and the mean F1 82% and 91%, confirming the effectiveness of this method on GF1. Similarly, the proposed method is applied to two groups of ZY302 images in Xi'an and Kunming, whose F1 scores are also above 90%, confirming the effectiveness of this method on ZY302. It can be concluded that area registration improves registration efficiency, and that the joint use of prior rules and an SVM classifier with probability features can avoid over-detection and missed detection of NCBAs. In practical applications, this method contributes to automatic NCBA detection from high-resolution remote sensing images.
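A hedged sketch of step (3)'s confidence-tier idea: an SVM with probability output scores each candidate area, and thresholds split candidates into high, medium, and low confidence. The feature set, labels, and threshold values here are assumptions, not the paper's rules:

```python
# SVM with probability estimates used to assign candidate NCBAs to confidence tiers.
import numpy as np
from sklearn.svm import SVC

X_train = np.random.rand(200, 6)               # e.g. spectral/shape features (placeholder)
y_train = np.random.randint(0, 2, 200)         # 1 = newly constructed building
svm = SVC(probability=True).fit(X_train, y_train)

X_cand = np.random.rand(50, 6)                  # candidate NCBAs from bi-temporal masks
p_ncba = svm.predict_proba(X_cand)[:, 1]        # probability of being a true NCBA

confidence = np.where(p_ncba > 0.8, "high",
              np.where(p_ncba > 0.5, "medium", "low"))   # assumed thresholds
```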


2021 ◽  
Vol 13 (3) ◽  
pp. 504
Author(s):  
Wanting Yang ◽  
Xianfeng Zhang ◽  
Peng Luo

The collapse of buildings caused by earthquakes can lead to large losses of life and property. Rapid assessment of building damage with remote sensing image data can support emergency rescue. However, current studies indicate that only a limited sample set can usually be obtained from remote sensing images immediately following an earthquake. Consequently, the difficulty of preparing sufficient training samples constrains the generalization of models for identifying earthquake-damaged buildings. To produce a deep learning network model with strong generalization, this study adjusted four Convolutional Neural Network (CNN) models for extracting damaged building information and compared their performance. A sample dataset of damaged buildings was constructed using multiple disaster images retrieved from the xBD dataset. Using satellite and aerial remote sensing data obtained after the 2008 Wenchuan earthquake, we examined the geographic and data transferability of the deep network model pre-trained on the xBD dataset. The results show that a network model pre-trained with samples generated from multiple disaster remote sensing images can accurately extract collapsed building information from satellite remote sensing data. Among the adjusted CNN models tested in the study, the adjusted DenseNet121 was the most robust. Transfer learning solved the problem of poor adaptability of the network model to remote sensing images acquired by different platforms and could identify disaster-damaged buildings properly. These results provide a solution for the rapid extraction of earthquake-damaged building information based on a deep learning network model.
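A minimal transfer-learning sketch in the spirit of the study, using torchvision's DenseNet121 with ImageNet weights as a stand-in for the xBD pre-training; the backbone is frozen and the classifier head replaced before fine-tuning on a small set of post-earthquake patches:

```python
# Fine-tuning a pre-trained DenseNet121 for a two-class damaged/intact task.
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")   # stand-in for xBD pre-training
for param in model.features.parameters():
    param.requires_grad = False                        # freeze convolutional backbone
model.classifier = nn.Linear(model.classifier.in_features, 2)  # damaged / intact head
# fine-tune model.classifier on the limited post-event samples as usual.
```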


2021 ◽  
Author(s):  
Yao Li

There is a growing demand for constructing complete and accurate landslide maps and inventories over wide areas, which has led to explosive growth in studies of extraction algorithms based on remote sensing images. To the best of our knowledge, no study has focused on deep learning-based methods for landslide detection on hyperspectral images. We propose a deep learning framework with constraints to detect landslides in hyperspectral imagery. The framework consists of two steps. First, a deep belief network is employed to extract the spectral-spatial features of a landslide. Second, we insert the high-level features and constraints into a logistic regression classifier for verifying the landslide. Experimental results demonstrate that the framework can achieve higher overall accuracy than traditional hyperspectral image classification methods. The precision of landslide detection on the whole image obtained by the proposed method reaches 97.91%, whereas the precisions of the linear support vector machine, spectral information divergence, and spectral angle match are 94.36%, 84.50%, and 86.44%, respectively. This article also reveals that the high-level feature extraction system has significant potential for landslide detection, especially with multi-source remote sensing.
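A sketch of the second stage only: high-level features (random placeholders here, standing in for deep belief network outputs) are passed to a logistic regression classifier that flags landslide pixels; the constraints mentioned in the abstract are not modeled:

```python
# Logistic regression over deep features for landslide verification.
import numpy as np
from sklearn.linear_model import LogisticRegression

deep_features = np.random.rand(5000, 50)        # DBN features per pixel (placeholder)
labels = np.random.randint(0, 2, 5000)          # 1 = landslide
clf = LogisticRegression(max_iter=1000).fit(deep_features, labels)
landslide_prob = clf.predict_proba(deep_features)[:, 1]   # per-pixel landslide probability
```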


2019 ◽  
Vol 8 (2) ◽  
pp. 3960-3963

In this paper, we report exploratory experiments using a deep learning convolutional neural network framework to classify crops into cotton, sugarcane, and mulberry. Earth Observing-1 Hyperion hyperspectral remote sensing data are used as the input, and structured data are extracted from the hyperspectral data using a remote sensing tool. An analytical assessment shows that the convolutional neural network (CNN) gives higher accuracy than the classical support vector machine (SVM) and random forest methods. The accuracy of SVM is 75%, the accuracy of random forest classification is 78%, and the accuracy of the CNN using the Adam optimizer is 99.3% with a loss of 2.74%. The CNN using RMSProp gives the same accuracy of 99.3% with a loss of 4.43%. The identified crop information will be used for estimating crop production and for deciding market strategies.
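An illustrative sketch of training the same small CNN over per-pixel spectra with either Adam or RMSprop; the architecture, band count, and learning rate are assumptions, not the paper's exact network:

```python
# Small 1D CNN over per-pixel Hyperion spectra, switchable between optimizers.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 3),                          # cotton / sugarcane / mulberry
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
# optimizer = torch.optim.RMSprop(net.parameters(), lr=1e-3)   # alternative optimizer
spectra = torch.rand(64, 1, 200)               # 200 spectral bands per pixel (assumed)
loss = nn.CrossEntropyLoss()(net(spectra), torch.randint(0, 3, (64,)))
```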


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

In recent years, the success of deep learning in natural scene image processing has boosted its application in the analysis of remote sensing images. In this paper, we apply Convolutional Neural Networks (CNN) to the semantic segmentation of remote sensing images. We adapt the encoder-decoder CNN structures SegNet (with index pooling) and U-net to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages in the segmentation of different objects. In addition, we propose an integrated algorithm that combines the two models. Experimental results show that the presented integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieve better segmentation than either model alone.
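One possible form of the integration step is to fuse the two models' per-class probability maps and take the argmax, sketched below; the actual integration rule in the paper may differ (e.g. per-class model selection):

```python
# Fusing SegNet and U-net outputs by averaging softmax probabilities per pixel.
import torch

def fuse_predictions(logits_segnet, logits_unet):
    probs = (torch.softmax(logits_segnet, dim=1) +
             torch.softmax(logits_unet, dim=1)) / 2
    return probs.argmax(dim=1)                  # (batch, H, W) label map

logits_a = torch.rand(1, 6, 256, 256)           # 6 target classes (assumed), model A
logits_b = torch.rand(1, 6, 256, 256)           # same shape from model B
label_map = fuse_predictions(logits_a, logits_b)
```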

