Research on Parallel Detection Technology of Remote Sensing Object Based on Deep Learning

Author(s):  
Chengguang Zhang ◽  
Xuebo Zhang ◽  
Min Jiang
2019 ◽  
Vol 15 (5) ◽  
pp. 391-395 ◽  
Author(s):  
Min Wang ◽  
Jin-yong Chen ◽  
Gang Wang ◽  
Feng Gao ◽  
Kang Sun ◽  
...  

2020 ◽  
Author(s):  
Benjamin Aubrey Robson ◽  
Tobias Bolch ◽  
Shelley MacDonell ◽  
Daniel Hölbling ◽  
Philip Rastner ◽  
...  

Rock glaciers are an important, but often overlooked, component of the cryosphere and are one of the few visible manifestations of permafrost. In certain parts of the world, rock glaciers can contribute up to 30% of catchment streamflow. Remote sensing has permitted the creation of rock glacier inventories for large regions; however, because rock glaciers are spectrally similar to the surrounding material, such inventories are typically compiled through manual interpretation of remote sensing data, which is both time-consuming and subjective. Here, we present a method that combines deep learning (convolutional neural networks, or CNNs) and object-based image analysis (OBIA) into one workflow based on freely available Sentinel-2 imagery, Sentinel-1 interferometric coherence, and a digital elevation model. CNNs work by identifying recurring patterns and textures and produce a heatmap in which each pixel indicates the probability that it belongs to a rock glacier. Using OBIA, we segment the datasets, classify objects based on their heatmap values as well as morphological and spatial characteristics, and convert the raw probability heatmap generated by the deep learning into rock glacier polygons. We analysed two distinct catchments: the La Laguna catchment in the Chilean semi-arid Andes and the Poiqu catchment on the Tibetan Plateau. In total, our method mapped 72% of the rock glaciers across both catchments, although many of the individual rock glacier polygons contained false positives from texturally similar features, such as debris flows, avalanche deposits, or fluvial material, which caused the user’s accuracy to be moderate (64-69%) even though the producer’s accuracy was higher (75%). We repeated our method on very-high-resolution Pléiades satellite imagery (resampled to 2 m resolution) for a subset of the Poiqu catchment to ascertain what difference image resolution makes. We found that working at a higher spatial resolution had little influence on the user’s accuracy (an increase of 3%), yet as smaller landforms were mapped, the producer’s accuracy rose by 13% to 88%. By running all processing within an object-based analysis, it was possible both to generate the deep learning heatmap and to automate some of the post-processing through image segmentation and object reshaping. Given the difficulty of differentiating rock glaciers using image spectra alone, deep learning offers a feasible method for automated mapping of rock glaciers at large regional scales.
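
A minimal illustrative sketch of the two-stage idea described above: a small fully convolutional network stands in for the trained CNN that outputs a per-pixel rock-glacier probability heatmap, and a simple threshold-plus-size filter stands in for the OBIA rule set that turns the heatmap into candidate objects. The toy network, thresholds, and band stack are assumptions for illustration, not the authors' published workflow.

```python
# Sketch of: CNN probability heatmap -> object-based filtering (assumed, simplified).
import numpy as np
import torch
import torch.nn as nn
from skimage import measure

class ToyHeatmapCNN(nn.Module):
    """Tiny fully convolutional network standing in for the trained CNN."""
    def __init__(self, in_bands=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),  # per-pixel probability
        )

    def forward(self, x):
        return self.net(x)

def heatmap_to_objects(heatmap, prob_thresh=0.5, min_area_px=500):
    """Threshold the probability heatmap and keep objects that pass a
    simple size filter (a stand-in for the OBIA rule set)."""
    mask = heatmap > prob_thresh
    labels = measure.label(mask)
    keep = np.zeros_like(mask, dtype=bool)
    for region in measure.regionprops(labels):
        if region.area >= min_area_px:          # drop small false positives
            keep[labels == region.label] = True
    return keep

if __name__ == "__main__":
    # Fake input stack: e.g. Sentinel-2 bands + Sentinel-1 coherence + DEM slope.
    stack = torch.rand(1, 6, 256, 256)
    model = ToyHeatmapCNN(in_bands=6)
    with torch.no_grad():
        heatmap = model(stack)[0, 0].numpy()
    candidate_mask = heatmap_to_objects(heatmap)
    print("candidate rock-glacier pixels:", int(candidate_mask.sum()))
```

In the study itself, the object filtering also uses morphological and spatial attributes (and the objects are reshaped into polygons); the size filter here is only a placeholder for that rule set.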


Author(s):  
Sumit Kaur

Abstract: Deep learning is an emerging research area in the machine learning and pattern recognition fields, introduced with the goal of moving machine learning closer to one of its original objectives: artificial intelligence. It attempts to mimic the human brain, which can process and learn from complex input data and solve many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that model high-level abstractions in data and learn hierarchical representations for classification. In recent years, it has attracted much attention owing to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.
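
As a hedged illustration of the hierarchical representation learning mentioned above, the sketch below stacks convolutional layers that learn increasingly abstract features of multispectral patches and tops them with a classifier. Layer sizes, band count, and the number of classes are assumptions, not taken from any surveyed model.

```python
# Minimal patch-level scene classifier illustrating hierarchical feature learning.
import torch
import torch.nn as nn

class SceneClassifier(nn.Module):
    def __init__(self, in_bands=4, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(           # low- to high-level features
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)

if __name__ == "__main__":
    patches = torch.rand(8, 4, 64, 64)            # 8 multispectral patches (assumed shape)
    logits = SceneClassifier()(patches)
    print(logits.shape)                            # torch.Size([8, 10])
```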


2019 ◽  
Vol 16 (9) ◽  
pp. 1343-1347 ◽  
Author(s):  
Yibo Sun ◽  
Qiaolin Zeng ◽  
Bing Geng ◽  
Xinwen Lin ◽  
Bilige Sude ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3848
Author(s):  
Wei Cui ◽  
Meng Yao ◽  
Yuanjie Hao ◽  
Ziwei Wang ◽  
Xin He ◽  
...  

Pixel-based semantic segmentation models cannot effectively represent geographic objects and their topological relationships; consequently, in semantic segmentation of remote sensing images, they suffer from salt-and-pepper effects and cannot achieve high accuracy. To address these problems, object-based models such as graph neural networks (GNNs) are considered. However, traditional GNNs directly use similarity or spatial correlations between nodes to aggregate node information, which relies too heavily on the contextual information of the sample. This contextual information is often distorted, which reduces node classification accuracy. To solve this problem, a knowledge- and geo-object-based graph convolutional network (KGGCN) is proposed. The KGGCN uses superpixel blocks as the nodes of the graph network and combines prior knowledge with spatial correlations during information aggregation. By incorporating prior knowledge obtained from all samples in the study area, the receptive field of a node is extended from its sample context to the whole study area, effectively overcoming the distortion of the sample context. Experiments demonstrate that our model improves accuracy by 3.7% over the baseline Cluster GCN model and by 4.1% over U-Net.
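
The sketch below is a simplified, assumed illustration of the aggregation mechanism described above: superpixel nodes exchange features over an adjacency that blends spatial neighbourhood with a prior-knowledge affinity term. It is not the published KGGCN architecture; the mixing rule, weights, and dimensions are illustrative.

```python
# Illustrative graph convolution over superpixel nodes with a mixed adjacency.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Row-normalise the adjacency, then aggregate neighbour features.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        adj_norm = adj / deg
        return torch.relu(self.linear(adj_norm @ x))

def mix_adjacency(spatial_adj, prior_adj, alpha=0.7):
    """Blend spatial adjacency with a prior-knowledge affinity (assumed form)."""
    return alpha * spatial_adj + (1.0 - alpha) * prior_adj

if __name__ == "__main__":
    n_nodes, feat_dim = 50, 16
    x = torch.rand(n_nodes, feat_dim)               # superpixel node features
    spatial = (torch.rand(n_nodes, n_nodes) > 0.9).float()
    spatial = ((spatial + spatial.T + torch.eye(n_nodes)) > 0).float()
    prior = torch.rand(n_nodes, n_nodes)             # stand-in prior affinity
    adj = mix_adjacency(spatial, prior)
    layer = SimpleGCNLayer(feat_dim, 8)
    print(layer(x, adj).shape)                        # torch.Size([50, 8])
```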


2021 ◽  
Vol 13 (15) ◽  
pp. 2883
Author(s):  
Gwanggil Jeon

Remote sensing is a fundamental tool for comprehending the earth and supporting human–earth communications [...]


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

Abstract: In recent years, the success of deep learning in natural scene image processing has boosted its application to the analysis of remote sensing images. In this paper, we apply convolutional neural networks (CNNs) to the semantic segmentation of remote sensing images. We adapt the encoder-decoder CNN architectures SegNet (with index pooling) and U-Net to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages in segmenting different objects. In addition, we propose an integrated algorithm that combines the two models. Experimental results show that the presented integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieves better segmentation than either model alone.
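
The abstract does not specify how the two models are integrated; one straightforward, assumed realisation is to fuse the per-pixel class-probability maps of the two networks with per-class weights reflecting which model is trusted for which object type, as sketched below.

```python
# Assumed fusion of two segmentation models' softmax outputs (not the paper's exact rule).
import numpy as np

def fuse_predictions(prob_a, prob_b, class_weights_a=None):
    """Fuse two softmax outputs of shape (classes, H, W).

    class_weights_a[c] is the trust placed in model A for class c
    (model B gets the complement). Defaults to an equal-weight average.
    """
    n_classes = prob_a.shape[0]
    if class_weights_a is None:
        class_weights_a = np.full(n_classes, 0.5)
    w = np.asarray(class_weights_a).reshape(n_classes, 1, 1)
    fused = w * prob_a + (1.0 - w) * prob_b
    return fused.argmax(axis=0)                    # per-pixel class labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((5, 128, 128)); a /= a.sum(axis=0, keepdims=True)
    b = rng.random((5, 128, 128)); b /= b.sum(axis=0, keepdims=True)
    # e.g. trust model A more for classes 0 and 3 (illustrative weights).
    labels = fuse_predictions(a, b, [0.8, 0.5, 0.5, 0.7, 0.4])
    print(labels.shape, labels.dtype)
```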


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Abstract: Urban area mapping is an important application of remote sensing that aims at estimating both the extent of and changes in land cover within urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) remote sensing data is the strong similarity of highly vegetated urban areas and oriented urban targets to actual vegetation, which leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), were implemented along with the deep learning model DeepLabv3+ for semantic segmentation of Polarimetric SAR (PolSAR) data. It is generally assumed that a large dataset is required to successfully train any deep learning model, but a major issue in SAR-based remote sensing is the unavailability of a large benchmark labeled dataset for training deep learning algorithms from scratch. In the current work, it is shown that, using transfer learning, the pre-trained deep learning model DeepLabv3+ outperforms the machine learning algorithms for the land use and land cover (LULC) classification task even with a small dataset. The highest pixel accuracy of 87.78% and an overall pixel accuracy of 85.65% were achieved with DeepLabv3+; Random Forest performed best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trailed with overall accuracies of 77.01% and 76.47%, respectively. The highest precision of 0.9228 was recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
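
A minimal sketch of the transfer-learning setup described above: start from a segmentation model pretrained on natural images and replace its classification head for a small set of LULC classes, freezing the backbone so that only the head is fine-tuned on the small dataset. torchvision ships DeepLabv3 rather than DeepLabv3+, so the model below is a stand-in; the class list and input channels are illustrative assumptions.

```python
# Transfer learning with a pretrained segmentation model (DeepLabv3 as a stand-in).
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4  # e.g. urban, vegetation, water, bare ground (assumed classes)

model = deeplabv3_resnet50(weights="DEFAULT")       # pretrained on natural images
# Replace the final 1x1 convolution so the head predicts our LULC classes.
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

# Freeze the backbone so that only the segmentation head is fine-tuned.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()

# PolSAR-derived features mapped onto a 3-channel input (assumed preprocessing).
x = torch.rand(2, 3, 256, 256)
y = torch.randint(0, NUM_CLASSES, (2, 256, 256))

model.train()
out = model(x)["out"]                               # (N, NUM_CLASSES, H, W)
loss = criterion(out, y)
loss.backward()
optimizer.step()
print(float(loss))
```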


2021 ◽  
Vol 14 (13) ◽  
Author(s):  
Ratna Kumari Vemuri ◽  
Pundru Chandra Shaker Reddy ◽  
B S Puneeth Kumar ◽  
Jayavadivel Ravi ◽  
Sudhir Sharma ◽  
...  
