Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters

2018, Vol. 10 (1), pp. 144
Author(s): Yongyang Xu, Liang Wu, Zhong Xie, Zhanlong Chen

2020, Vol. 12 (15), pp. 2426
Author(s): Alin-Ionuț Pleșoianu, Mihai-Sorin Stupariu, Ionuț Șandric, Ileana Pătru-Stupariu, Lucian Drăguț

Traditional methods for individual tree-crown (ITC) detection (image classification, segmentation, template matching, etc.) applied to very high-resolution remote sensing imagery have been shown to struggle across disparate landscape types and image resolutions due to scale problems and information complexity. Deep learning promises to overcome these shortcomings thanks to its superior performance and versatility, with reported detection rates of ~90%. However, such models still have limited transferability across study areas because of differing tree conditions (e.g., isolated trees vs. compact forests) and/or input-data resolutions. This study introduces a highly replicable deep learning ensemble design for ITC detection and species classification based on the established single shot detector (SSD) model. The ensemble design varies the input data fed to the SSD models and combines their output predictions with a voting strategy. Very high-resolution unmanned aerial vehicle (UAV) imagery, aerial remote sensing imagery, and elevation data are used in different combinations to test the performance of the ensemble models in three study sites with highly contrasting spatial patterns. The results show that the ensemble models perform better than any single SSD model, regardless of the local tree conditions or image resolution. Detection performance and accuracy improved by 3–18% with as few as two participating single models, regardless of the study site. However, when more than two models were included, the performance of the ensemble models improved only slightly or even dropped.
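The abstract does not spell out the voting strategy; as a rough illustration only (function names, the IoU threshold, and the two-vote requirement below are assumptions, not details from the paper), a box-level vote across the SSD ensemble members could be sketched in Python as follows:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def vote_detections(model_outputs, iou_thr=0.5, min_votes=2):
    """Keep a detection only if at least `min_votes` distinct ensemble members
    predicted an overlapping box of the same class.

    model_outputs: one list per SSD model, each holding (box, class_id, score).
    Returns the surviving detections as (box, class_id, score) tuples.
    """
    # Flatten all detections, remembering which model produced each one.
    flat = [(m, box, cls, score)
            for m, dets in enumerate(model_outputs)
            for box, cls, score in dets]
    flat.sort(key=lambda d: d[3], reverse=True)  # most confident first

    kept, used = [], [False] * len(flat)
    for i, (mi, bi, ci, si) in enumerate(flat):
        if used[i]:
            continue
        # Group overlapping, same-class detections and count distinct voters.
        group, voters = [i], {mi}
        for j in range(i + 1, len(flat)):
            mj, bj, cj, _ = flat[j]
            if not used[j] and cj == ci and iou(bi, bj) >= iou_thr:
                group.append(j)
                voters.add(mj)
        if len(voters) >= min_votes:  # enough models agree on this tree crown
            kept.append((bi, ci, max(flat[j][3] for j in group)))
        for j in group:               # consume the whole group either way
            used[j] = True
    return kept
```

Here `model_outputs` would hold each single model's post-processed detections on the same image tile; requiring agreement from at least two models reflects the reported gain from ensembles of only two participants.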


2018, Vol. 10 (11), pp. 1768
Author(s): Hui Yang, Penghai Wu, Xuedong Yao, Yanlan Wu, Biao Wang, et al.

Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, present deep learning networks find it difficult to use features from different levels rationally. To tackle this problem, a novel network based on DenseNets and the attention mechanism was proposed, called the dense-attention network (DAN). The DAN contains an encoder and a decoder, composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder–decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN achieved higher scores than other deep learning methods (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIoU)), together with less training and response time and a higher quality value.
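The abstract names a spatial attention fusion module between the encoder and decoder but gives no implementation details; the PyTorch sketch below (module and argument names are illustrative assumptions, not the authors' code) shows one common form of such a block, in which the higher-level feature produces a spatial weight map that suppresses noisy low-level responses before fusion:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttentionFusion(nn.Module):
    """Illustrative fusion block (hypothetical names, not the authors' code):
    the high-level decoder feature produces a spatial attention map that
    reweights the low-level encoder feature before the two are fused."""

    def __init__(self, low_channels, high_channels, out_channels):
        super().__init__()
        # 1x1 conv + sigmoid turns the high-level feature into a [0, 1] map.
        self.attention = nn.Sequential(
            nn.Conv2d(high_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(low_channels + high_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, low_feat, high_feat):
        # Upsample the coarse high-level feature to the low-level resolution.
        high_feat = F.interpolate(high_feat, size=low_feat.shape[2:],
                                  mode="bilinear", align_corners=False)
        attn = self.attention(high_feat)   # B x 1 x H x W weight map
        low_feat = low_feat * attn         # suppress noisy low-level responses
        return self.fuse(torch.cat([low_feat, high_feat], dim=1))
```

A block like this would sit at each decoder stage, taking the skip-connected encoder feature as `low_feat` and the upsampled decoder feature as `high_feat`.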


2016, Vol. 8 (10), pp. 814
Author(s): ZhiYong Lv, Haiqing He, Jón Benediktsson, Hong Huang
