Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features

2021 ◽  
Vol 181 ◽  
pp. 129-147
Author(s):  
Bai Zhu ◽  
Yuanxin Ye ◽  
Liang Zhou ◽  
Zhilin Li ◽  
Gaofei Yin

Abstract. Co-registration of aerial imagery and Light Detection and Ranging (LiDAR) data is quite challenging because the different imaging mechanisms cause significant geometric and radiometric distortions between such data. To tackle this problem, this paper proposes an automatic registration method based on structural features and three-dimensional (3D) phase correlation. In the proposed method, the LiDAR point cloud is first transformed into an intensity map, which is used as the reference image. Then, the FAST operator is employed to extract uniformly distributed interest points in the aerial image by a partition strategy, and a local geometric correction based on the collinearity equations eliminates scale and rotation differences between the images. Subsequently, a robust structural feature descriptor is built from dense gradient features, and 3D phase correlation is used to detect control points (CPs) between the aerial images and LiDAR data in the frequency domain, where matching is accelerated by the 3D Fast Fourier Transform (FFT). Finally, the obtained CPs are used to correct the exterior orientation elements and thereby co-register the aerial images and LiDAR data. Experiments with two datasets of aerial images and LiDAR data show that the proposed method is faster and more robust than state-of-the-art methods.
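The core of the frequency-domain matching step is classical phase correlation: the cross-power spectrum of two images, normalized to unit magnitude, inverse-transforms to a sharp peak at their relative translation. A minimal 2D numpy sketch of that idea (the paper's method is a 3D variant built on dense structural feature descriptors; the function name and the 2D simplification are illustrative assumptions):

```python
import numpy as np

def phase_correlation(image_a, image_b):
    """Estimate the (row, col) translation that maps image_b onto image_a.

    2D sketch of FFT-based phase correlation; the paper applies the same
    principle in 3D over structural feature volumes.
    """
    Fa = np.fft.fft2(image_a)
    Fb = np.fft.fft2(image_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint correspond to negative (wrapped) shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Because the match is computed for all translations at once via the FFT, the cost is O(N log N) in the number of pixels, which is the source of the speed-up the abstract reports.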


2016 ◽  
Vol 8 (12) ◽  
pp. 1030 ◽  
Author(s):  
Shouji Du ◽  
Yunsheng Zhang ◽  
Rongjun Qin ◽  
Zhihua Yang ◽  
Zhengrong Zou ◽  
...  

2021 ◽  
Vol 13 (13) ◽  
pp. 2473
Author(s):  
Qinglie Yuan ◽  
Helmi Zulhaidi Mohd Shafri ◽  
Aidi Hizami Alias ◽  
Shaiful Jahari Hashim

Automatic building extraction has been applied in many domains. It remains a challenging problem because of complex scenes and multiscale objects. Deep learning algorithms, especially fully convolutional neural networks (FCNs), have shown stronger feature extraction ability than traditional remote sensing data processing methods. However, hierarchical features from encoders with a fixed receptive field are weak at capturing global semantic information. Local features in multiscale subregions cannot model contextual interdependence and correlation, especially for large building areas, which can produce fragmentary extraction results due to intra-class feature variability. In addition, low-level features carry accurate, fine-grained spatial information for small building structures but lack refinement and selection, and the semantic gap between cross-level features hinders feature fusion. To address these problems, this paper proposes an FCN framework based on the residual network and provides a training pattern for multi-modal data that combines the advantages of high-resolution aerial images and LiDAR data for building extraction. Two novel modules are proposed for the optimization and integration of multiscale and cross-level features. In particular, a multiscale context optimization module is designed to adaptively generate feature representations for different subregions and effectively aggregate global context, and a semantic-guided spatial attention mechanism is introduced to refine shallow features and alleviate the semantic gap. Finally, hierarchical features are fused via a feature pyramid network. Experimental results demonstrate superior performance over other state-of-the-art methods, with 93.19% IoU and 97.56% OA on the WHU dataset and 94.72% IoU and 97.84% OA on the Boston dataset, showing that the proposed network improves accuracy and achieves better performance for building extraction.
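Stripped of learned weights, the semantic-guided spatial attention idea amounts to gating fine-grained shallow features with a spatial map derived from deeper, semantically richer features, so that low-level detail is kept where the network believes a building is. A minimal numpy sketch under that reading (the function name, nearest-neighbour upsampling, and channel-mean gating are illustrative assumptions, not the paper's learned convolutional module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_guided_attention(shallow, deep):
    """Gate shallow features (C1, H, W) with an attention map derived
    from coarser deep features (C2, H/k, W/k). Numpy sketch only; the
    paper learns this gating with convolutions."""
    fh = shallow.shape[1] // deep.shape[1]
    fw = shallow.shape[2] // deep.shape[2]
    # nearest-neighbour upsample the coarse semantic map to shallow resolution
    up = deep.repeat(fh, axis=1).repeat(fw, axis=2)
    # collapse channels into a single spatial attention map in (0, 1)
    attn = sigmoid(up.mean(axis=0, keepdims=True))
    return shallow * attn  # broadcasts over the shallow channels
```

The gate suppresses shallow responses in regions the deep features mark as background, which is one plausible way the module narrows the semantic gap before pyramid fusion.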


2020 ◽  
Vol 12 (11) ◽  
pp. 1820
Author(s):  
Raoul Blackman ◽  
Fei Yuan

Urban forests provide ecosystem services; tree canopy cover is the basic quantification of those services. Ground assessment of the urban forest is limited; with continued refinement, remote sensing can become an essential tool for analyzing the urban forest. This study addresses three research questions that are essential for urban forest management using remote sensing: (1) Can object-based image analysis (OBIA) and non-image classification methods (such as random-point-based evaluation) accurately determine urban canopy coverage from high-spatial-resolution aerial images? (2) Is it possible to assess the impact of natural disturbances, in addition to other factors (such as urban development), on urban canopy changes in the classification map created by OBIA? (3) How can Light Detection and Ranging (LiDAR) data and technology be used to extract urban canopy metrics accurately and effectively? The urban forest canopy area and location within the City of St Peter, Minnesota (MN) boundary between 1938 and 2019 were delineated using both OBIA and random-point-based methods with high-spatial-resolution aerial images. Impacts of natural disasters, such as the 1998 tornado and tree diseases, on the urban canopy cover area were examined. Finally, LiDAR data were used to determine the height, density, crown area, diameter, and volume of the urban forest canopy. Both OBIA and random-point methods gave accurate canopy coverage results. OBIA is relatively more time-consuming and requires specialist knowledge, whereas the random-point-based method only gives the total coverage of the classes without locational information. Canopy change caused by the tornado was discernible in the OBIA-based canopy classification maps, while the change due to diseases was undetectable. To accurately extract urban canopy metrics as well as tree locations, dense LiDAR point cloud data collected in the leaf-on season, together with algorithms or software developed specifically for urban forest analysis with LiDAR data, are needed.
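Several of the canopy metrics mentioned (height, density) reduce to simple statistics over height-normalized LiDAR returns once ground points have been removed. A hedged sketch of that computation (the 2 m canopy threshold, function name, and metric set are common conventions chosen for illustration, not this study's exact workflow):

```python
import numpy as np

def canopy_metrics(points, height_break=2.0):
    """Basic canopy metrics from height-normalized LiDAR returns.

    `points` is an (N, 3) array of x, y, z with z = height above ground
    in metres. Sketch only; dedicated urban-forest tools add crown
    segmentation, diameter, and volume models on top of this.
    """
    z = np.asarray(points, dtype=float)[:, 2]
    canopy = z[z >= height_break]            # returns above the canopy threshold
    return {
        "max_height": float(z.max()),
        "mean_canopy_height": float(canopy.mean()) if canopy.size else 0.0,
        "p95_height": float(np.percentile(z, 95)),
        "canopy_density": float(canopy.size / z.size),  # canopy cover proxy
    }
```

The density value here is a per-return proxy for canopy cover; area-based cover additionally requires rasterizing returns onto a grid.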


2017 ◽  
Vol 194 ◽  
pp. 437-446 ◽  
Author(s):  
Rubén Valbuena ◽  
Matti Maltamo ◽  
Lauri Mehtätalo ◽  
Petteri Packalen
