High-resolution lidar data for the Chilkat Ridge area, Alaska

2020 ◽  
Author(s):  
R.D. Daanen ◽  
Katreen Wikstrom Jones

2021 ◽  
Vol 13 (13) ◽  
pp. 2473
Author(s):  
Qinglie Yuan ◽  
Helmi Zulhaidi Mohd Shafri ◽  
Aidi Hizami Alias ◽  
Shaiful Jahari Hashim

Automatic building extraction has been applied in many domains, yet it remains a challenging problem because of complex scenes and multiscale objects. Deep learning algorithms, especially fully convolutional neural networks (FCNs), have shown more robust feature extraction ability than traditional remote sensing data processing methods. However, hierarchical features from encoders with a fixed receptive field are weak at capturing global semantic information. Local features in multiscale subregions cannot establish contextual interdependence and correlation, especially over large building areas, which often produces fragmentary extraction results owing to intra-class feature variability. In addition, low-level features carry accurate, fine-grained spatial information for small building structures but lack refinement and selection, and the semantic gap across feature levels hinders feature fusion. To address these problems, this paper proposes an FCN framework based on the residual network and provides a training pattern for multimodal data that combines the advantages of high-resolution aerial images and LiDAR data for building extraction. Two novel modules are proposed for the optimization and integration of multiscale and across-level features. In particular, a multiscale context optimization module is designed to adaptively generate feature representations for different subregions and effectively aggregate global context. A semantic-guided spatial attention mechanism is introduced to refine shallow features and alleviate the semantic gap. Finally, hierarchical features are fused via a feature pyramid network. Compared with other state-of-the-art methods, experimental results demonstrate superior performance, with 93.19% IoU and 97.56% OA on the WHU dataset and 94.72% IoU and 97.84% OA on the Boston dataset, showing that the proposed network improves accuracy and achieves better performance for building extraction.
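
The abstract does not include code, but the two modules it names follow a recognizable pattern. Below is a minimal PyTorch sketch of that pattern under stated assumptions: the class names, channel sizes, and pooling bins are illustrative choices, not the authors' published implementation.

```python
# Illustrative sketch only: module names, bin sizes, and channel widths
# are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleContext(nn.Module):
    """Pool the deepest encoder features over several subregion sizes,
    project each pooled map, and fuse them to aggregate global context."""
    def __init__(self, channels, bins=(1, 2, 4, 8)):
        super().__init__()
        branch_ch = channels // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),                      # subregion pooling
                nn.Conv2d(channels, branch_ch, 1, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for b in bins
        )
        self.fuse = nn.Conv2d(channels + branch_ch * len(bins),
                              channels, 3, padding=1, bias=False)

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample each pooled branch back to the input resolution and fuse.
        ctx = [F.interpolate(s(x), size=(h, w), mode="bilinear",
                             align_corners=False) for s in self.stages]
        return self.fuse(torch.cat([x] + ctx, dim=1))

class SemanticGuidedAttention(nn.Module):
    """Use a deep (semantic) feature map to produce a spatial attention
    mask that refines a shallow (fine-grained) feature map before fusion."""
    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(high_ch, low_ch, 1, bias=False),
            nn.BatchNorm2d(low_ch),
            nn.Sigmoid(),
        )

    def forward(self, low, high):
        mask = F.interpolate(self.gate(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        return low * mask  # suppress noisy shallow activations

if __name__ == "__main__":
    feats = torch.randn(1, 512, 16, 16)   # deepest encoder features
    low = torch.randn(1, 64, 128, 128)    # shallow encoder features
    print(MultiscaleContext(512)(feats).shape)
    print(SemanticGuidedAttention(64, 512)(low, feats).shape)
```

In this reading, the refined shallow features and the context-enriched deep features would then be merged by the feature pyramid network the abstract mentions.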


2018 ◽  
Vol 10 (9) ◽  
pp. 1459 ◽  
Author(s):  
Ying Sun ◽  
Xinchang Zhang ◽  
Xiaoyang Zhao ◽  
Qinchuan Xin

Identifying and extracting building boundaries from remote sensing data has been one of the hot topics in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method that has been widely used in building boundary extraction, but it often produces biased boundaries where trees and background mix with buildings. Although classification methods can mitigate this by separating buildings from other objects, they often suffer from unavoidable salt-and-pepper artifacts. In this paper, we combine robust classification with convolutional neural networks (CNNs) and the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN construction process, whereas the second detects building footprints with a CNN and then applies the ACM as post-processing. Assessments conducted at three levels demonstrate that the proposed methods efficiently extract building boundaries in five test scenes from two datasets. The achieved mean accuracies in terms of the F1 score for the first type (and the second type) of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions are effective at extracting building boundaries from high-resolution optical images and LiDAR data.
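
The second experiment type (CNN detection followed by ACM post-processing) can be sketched with off-the-shelf tools. The sketch below uses scikit-image's morphological Chan-Vese contour as a stand-in for the authors' ACM formulation, and a synthetic probability map standing in for CNN output; both substitutions are assumptions made for illustration.

```python
# Hedged sketch of the CNN-then-ACM pipeline: not the paper's code.
# scikit-image's morphological_chan_vese stands in for the ACM.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def refine_with_acm(prob_map: np.ndarray, threshold: float = 0.5,
                    iterations: int = 50) -> np.ndarray:
    """Refine a CNN building-probability map with an active contour.

    prob_map: HxW array in [0, 1] from the CNN's sigmoid/softmax head.
    Returns a binary building mask with contour-smoothed boundaries.
    """
    # Initialize the level set from the CNN's coarse footprint detection.
    init = prob_map > threshold
    # Evolve the contour on the probability surface; the smoothing term
    # regularizes jagged edges caused by tree/background mixtures.
    mask = morphological_chan_vese(prob_map, iterations,
                                   init_level_set=init, smoothing=2)
    return mask.astype(bool)

if __name__ == "__main__":
    # Synthetic stand-in for CNN output: noisy background, one building.
    rng = np.random.default_rng(0)
    prob = np.clip(rng.normal(0.2, 0.1, (128, 128)), 0, 1)
    prob[40:90, 30:100] = np.clip(rng.normal(0.9, 0.05, (50, 70)), 0, 1)
    building_mask = refine_with_acm(prob)
    print("building pixels:", int(building_mask.sum()))
```

Initializing the level set from the CNN detection, rather than from a generic checkerboard, is what ties the two stages together: the network supplies the coarse footprint and the contour supplies the boundary regularity.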


2017 ◽  
Vol 205 ◽  
pp. 1216-1230 ◽  
Author(s):  
D. Lingfors ◽  
J.M. Bright ◽  
N.A. Engerer ◽  
J. Ahlberg ◽  
S. Killinger ◽  
...  

2006 ◽  
Vol 88 (3-4) ◽  
pp. 160-172 ◽  
Author(s):  
Rou-Fei Chen ◽  
Kuo-Jen Chang ◽  
Jacques Angelier ◽  
Yu-Chang Chan ◽  
Benoît Deffontaines ◽  
...  
