Semiautomatically Register MMS LiDAR Points and Panoramic Image Sequence Using Road Lamp and Lane

2019 ◽  
Vol 85 (11) ◽  
pp. 829-840
Author(s):  
Ningning Zhu ◽  
Yonghong Jia ◽  
Xia Huang

We propose using feature points of road lamps and lanes to register mobile mapping system (MMS) LiDAR points and panoramic image sequences. Road lamps and lanes are common objects on roads with regular spatial distributions, so the registration method has wide applicability and high precision. First, the road lamps and lanes were extracted from the LiDAR points by horizontal gridding and reflectance intensity, and their endpoints were optimized to serve as feature points. Second, the feature points were projected onto the panoramic image using initial parameters, and corresponding feature points were extracted near the projection locations. Third, the direct linear transformation method was used to solve the registration model and eliminate mismatched feature points. In the experiments, we compare the accuracy of our registration method with other methods on a sequence of panoramic images. The results show that our method is effective: its registration error is below 10 pixels for every image and averages 5.84 pixels over all 31 panoramic images (4000 × 8000 pixels), much lower than the 56.24 pixels obtained by the original registration method.
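The direct linear transformation (DLT) step can be illustrated with a minimal sketch. The abstract applies DLT to a panoramic registration model; the snippet below shows only the generic linear solve for a standard 3×4 pinhole projection matrix from 3D-2D correspondences, as an assumption-laden stand-in for the authors' formulation (function names and the pinhole model are illustrative, not from the paper).

```python
import numpy as np

def dlt_solve(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 3D-2D pairs.

    X: (n, 3) object points; x: (n, 2) image points.
    Builds the standard 2n x 12 DLT design matrix and solves A p = 0
    by SVD; the right singular vector of the smallest singular value
    is the stacked projection matrix.
    """
    n = X.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xh = np.append(X[i], 1.0)      # homogeneous 3D point
        u, v = x[i]
        A[2 * i, 4:8] = -Xh            # row from u-component of x cross PX
        A[2 * i, 8:12] = v * Xh
        A[2 * i + 1, 0:4] = Xh         # row from v-component
        A[2 * i + 1, 8:12] = -u * Xh
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Apply P to (n, 3) points and dehomogenise to pixel coordinates."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]
```

Mismatched correspondences could then be flagged by their reprojection residuals under the estimated matrix, which is in the spirit of the elimination step described above.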

Author(s):  
N. Zhu ◽  
B. Yang ◽  
Y. Jia

Abstract. We propose using a relative orientation model (ROM) of panoramic images to register MMS LiDAR points and panoramic image sequences, which gives the approach wide applicability. Feature points, extracted and matched from panoramic image pairs, are used to solve the relative position and attitude parameters in the ROM; then, combined with the absolute position and attitude parameters of the initial panoramic image, the MMS LiDAR points and the panoramic image sequence are registered. First, we propose the position/attitude ROM (PA-ROM) and the attitude ROM (A-ROM) of panoramic images, which apply when both position and attitude parameters are unknown and when only the attitude parameters are unknown, respectively. Second, we automatically extract and match feature points from panoramic image pairs using the SURF algorithm; because mismatched points degrade the registration accuracy, the RANSAC algorithm and the ROM are used to select the best matching points automatically. Finally, we manually select feature points from the MMS LiDAR points and panoramic image sequence as checkpoints and compare the registration accuracy of continuous and discontinuous panoramic image pairs. The results show that the MMS LiDAR points and panoramic image sequence are registered accurately based on the ROM (7.36 and 3.75 pixels in datasets I and II). Moreover, our registration method operates on image pairs alone, without involving the LiDAR points, so it is suitable for more road scenes.
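The RANSAC pruning step is generic enough to sketch. As a deliberately simplified stand-in for scoring candidates against the full ROM, the toy below rejects mismatches under a 2D translation model between matched keypoints (the translation model, threshold, and iteration count are illustrative assumptions, not the paper's procedure).

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, rng=None):
    """Toy RANSAC: find the 2D translation relating matched keypoints.

    src, dst: (n, 2) matched coordinates. A single match hypothesises
    t = dst[i] - src[i]; matches within `tol` pixels of that model
    vote as inliers. Returns (t, inlier_mask) for the best-supported
    model, with t refit on the consensus set.
    """
    rng = rng or np.random.default_rng(0)
    best_mask = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]
        mask = np.linalg.norm(dst - (src + t), axis=1) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    # refit on the inlier consensus set for a less noisy estimate
    t = (dst[best_mask] - src[best_mask]).mean(axis=0)
    return t, best_mask
```

In the paper the model being voted on is the relative orientation of the panoramic pair rather than a translation, but the hypothesise-score-refit loop is the same.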


2021 ◽  
Vol 87 (12) ◽  
pp. 913-922
Author(s):  
Ningning Zhu ◽  
Bisheng Yang ◽  
Zhen Dong ◽  
Chi Chen ◽  
Xia Huang ◽  
...  

To register mobile mapping system (MMS) lidar points and panoramic-image sequences, a relative orientation model of panoramic images (PROM) is proposed. The PROM is suitable for cases in which attitude or orientation parameters are unknown in the panoramic-image sequence. First, feature points are extracted and matched from panoramic-image pairs using the SURF algorithm. Second, these matched feature points are used to solve the relative attitude parameters in the PROM. Then, combining the PROM with the absolute position and attitude parameters of the initial panoramic image, the MMS lidar points and panoramic-image sequence are registered. Finally, the registration accuracy of the PROM method is assessed using corresponding points manually selected from the MMS lidar points and panoramic-image sequence. The results show that three types of MMS data sources are registered accurately based on the proposed registration method. Our method transforms the registration of panoramic images and lidar points into image feature-point matching, which makes it suitable for diverse road scenes compared with existing methods.
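A rough sketch of the attitude-only case: equirectangular pixels map to unit bearing vectors on the sphere, and if the baseline between two panoramas is neglected (a simplifying assumption, not the paper's full PROM), the relative rotation follows from corresponding bearings by an SVD (Kabsch) fit.

```python
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Equirectangular pixel -> unit bearing vector on the sphere."""
    lon = (u / width) * 2.0 * np.pi - np.pi     # column -> longitude
    lat = np.pi / 2.0 - (v / height) * np.pi    # row -> latitude
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def relative_rotation(b1, b2):
    """Best-fit rotation R with b2_i ~ R @ b1_i (Kabsch / SVD).

    b1, b2: (n, 3) corresponding unit bearings from the two panoramas.
    """
    H = b1.T @ b2                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])       # guard against reflections
    return Vt.T @ D @ U.T
```

With the rotation between consecutive panoramas in hand, chaining from the absolute attitude of the initial image (as the abstract describes) gives each frame's attitude.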


Author(s):  
Tingting Cui ◽  
Shunping Ji ◽  
Jie Shan ◽  
Jianya Gong ◽  
Kejian Liu

For multi-sensor integrated systems such as a mobile mapping system (MMS), data fusion at the sensor level, i.e., 2D-3D registration between the optical camera and LiDAR, is a prerequisite for higher-level fusion and further applications. This paper proposes a line-based registration method for panoramic images and LiDAR point clouds collected by an MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic cameras. Finally, the proposed registration method is evaluated for the two camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, though the latter is more reliable.
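Under the ideal spherical model mentioned above, projection reduces to the longitude and latitude of the viewing direction; a minimal sketch follows (the equirectangular pixel layout and axis conventions are assumptions about the image format, and the rigorous model the paper prefers adds per-sensor corrections not shown here).

```python
import numpy as np

def sphere_project(X, width, height):
    """Ideal spherical panoramic model: camera-frame point -> pixel.

    Longitude maps linearly to the column and latitude to the row, so
    a 3D point is imaged wherever its direction pierces the unit
    sphere centred on the camera.
    """
    x, y, z = X
    lon = np.arctan2(y, x)                   # [-pi, pi]
    lat = np.arcsin(z / np.linalg.norm(X))   # [-pi/2, pi/2]
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v
```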


2020 ◽  
Vol 9 (11) ◽  
pp. 689
Author(s):  
Jhe-Syuan Lai ◽  
Yu-Chi Peng ◽  
Min-Jhen Chang ◽  
Jun-Yi Huang

The present researchers took multistation-based panoramic images and imported the processed images into a virtual tour platform to create webpages and a virtual reality environment. The integrated multimedia platform aims to assist students in a surveying practice course. A questionnaire survey was conducted to evaluate the platform’s usefulness to students, and its design was modified according to respondents’ feedback. Panoramic photos were taken using a full-frame digital single-lens reflex camera with an ultra-wide-angle zoom lens mounted on a panoramic instrument. The camera took photos at various angles, generating a visual field with horizontal and vertical viewing angles close to 360°. Multiple overlapping images were stitched to form a complete panoramic image for each capturing station. Image stitching entails extracting feature points to verify the correspondence between the same feature point in different images (i.e., tie points). By calculating the root mean square error of a stitched image, we determined the stitching quality and modified the tie point location when necessary. The root mean square errors of nearly all panoramas were lower than 5 pixels, meeting the recommended stitching standard. Additionally, 92% of the respondents (n = 62) considered the platform helpful for their surveying practice course. We also discussed and provided suggestions for the improvement of panoramic image quality, camera parameter settings, and panoramic image processing.
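The stitching-quality check described above boils down to an RMSE over tie-point residuals; a minimal sketch (the exact residual definition used by the stitching software is an assumption):

```python
import numpy as np

def stitch_rmse(tie_src, tie_dst):
    """RMSE (pixels) between tie-point positions after stitching.

    tie_src: (n, 2) tie points re-projected from one image into the
    panorama; tie_dst: (n, 2) the same features measured in the
    overlapping image. Values under ~5 px meet the standard quoted
    in the abstract.
    """
    residuals = tie_src - tie_dst
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```

When the RMSE exceeds the threshold, the offending tie points can be relocated and the panorama re-stitched, as the abstract describes.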


Author(s):  
Linying Zhou ◽  
Zhou Zhou ◽  
Hang Ning

Road detection from aerial images is still a challenging task, since it is heavily influenced by spectral reflectance, shadows, and occlusions. To increase road detection accuracy, this paper studies a method for road detection using a GAC model with edge feature extraction and segmentation. First, edge features are extracted using the proposed gradient magnitude with the Canny operator. Then, a reconstructed gradient map is used in the watershed transformation, whose segmentation provides the initial contour. Last, combining the edge features and the initial contour, the boundary stopping function is applied in the GAC model, yielding the final road boundary. Experimental results, compared with other methods under the F-measure, show that the proposed method achieves satisfying results.
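The gradient-magnitude step feeding the edge extraction can be sketched in pure NumPy; the Sobel kernels below are a common choice, though the paper's "proposed gradient magnitude" may differ.

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (pure NumPy).

    A stand-in for the gradient step of the Canny pipeline: correlate
    with the horizontal and vertical Sobel kernels (edge-padded) and
    combine the two responses.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```

A full Canny stage would follow this with non-maximum suppression and hysteresis thresholding; only the magnitude map is shown here.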


Author(s):  
Luciano Augusto Cano Martins ◽  
Eduarda Helena Leandro Nascimento ◽  
Hugo Gaêta-Araujo ◽  
Matheus L Oliveira ◽  
Deborah Queiroz Freitas

Objective: To map the shape, location, and thickness of the focal trough of a panoramic radiography device with a multilayer imaging program. Methods: An acrylic plate (148 × 148 × 3 mm) containing 1156 holes distributed in a matrix of 34 × 34 rows was placed in the OP300 Maxio at the levels of the maxilla and mandible. Twenty metal spheres (3.5 mm in diameter) were placed on the holes of the plate in 15 different arrangements, and panoramic images were acquired for each arrangement at 66 kV, 8 mA, and an exposure time of 16 s. The resulting panoramic radiographs from the five image layers were exported, the horizontal and vertical dimensions of the metal spheres were measured in all images using the ImageJ software, and the magnification and distortion rates of the spheres were calculated. All metal spheres presenting a magnification rate lower than 30% in both vertical and horizontal dimensions and a distortion rate lower than 10% were considered to map the focal troughs of each of the five image layers. Results: All panoramic image layers had a curved shape ranging from 39° to 51° for both dental arches and varied in position and thickness. The anterior region of the maxilla was anteriorly displaced compared with the anterior region of the mandible for all layers. Image layers were thicker at the level of the mandible than at the level of the maxilla; also, inner layers were thinner and outer layers were thicker. Conclusion: All image layers in the studied panoramic radiography device had a curved shape and varied in position and thickness. The anterior region of the maxilla was anteriorly displaced compared with that of the mandible for all layers.
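One plausible formulation of the magnification and distortion rates used to threshold the spheres (the exact formulas are assumptions; the abstract does not spell them out):

```python
def magnification_rate(measured_mm, true_mm):
    """Percent enlargement of a sphere's image relative to its true
    diameter; computed separately for each measured dimension."""
    return (measured_mm - true_mm) / true_mm * 100.0

def distortion_rate(horizontal_mm, vertical_mm):
    """Percent difference between the two measured dimensions; unequal
    horizontal/vertical magnification distorts the sphere's image."""
    return (abs(horizontal_mm - vertical_mm)
            / min(horizontal_mm, vertical_mm) * 100.0)
```

Under the study's criteria, a sphere would count toward the focal trough only if both dimensions give a magnification rate below 30% and the distortion rate is below 10%.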


Author(s):  
C. Platias ◽  
M. Vakalopoulou ◽  
K. Karantzalos

In this paper we propose a deformable registration framework for high-resolution satellite video data that can automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov random field (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the former converging in a few minutes and the latter in a few seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.


Author(s):  
J. Shao ◽  
W. Zhang ◽  
Y. Zhu ◽  
A. Shen

Images carry rich color information, which can help promote the recognition and classification of point clouds, and registration is an important step in the combined application of images and point clouds. To give LiDAR point clouds rich texture and color information, this paper investigates a fast registration method for point clouds and sequence images based on a ground-based LiDAR system. First, the transformation matrix of one of the sequence images is calculated from the 2D image and the LiDAR point cloud; second, the relative position and attitude relationships among the multi-angle sequence images are used to calculate all transformation matrices in the horizontal direction; last, the registration of the point cloud and the sequence images is completed based on the collinearity condition of the image point, the projective center, and the LiDAR point. The experimental results show that the method is simple and fast, the stitching error between adjacent images is small, and the overall registration accuracy is high, so the method can be used in engineering applications.
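The collinearity condition invoked in the last step states that the object point, the projective center, and the image point lie on one straight line; for a frame camera it reduces to the familiar photogrammetric equations sketched below (a generic sketch with the photogrammetric sign convention, not the paper's multi-angle variant).

```python
import numpy as np

def collinearity_project(X, Xs, R, f):
    """Collinearity equations for a frame camera.

    X: (n, 3) object points; Xs: (3,) projective centre; R: (3, 3)
    rotation from object to image space; f: focal length. Returns
    (n, 2) image coordinates with the principal point at the origin
    (camera looking down the -Z axis, as in photogrammetry).
    """
    d = (X - Xs) @ R.T               # camera-frame coordinates
    x = -f * d[:, 0] / d[:, 2]
    y = -f * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])
```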


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

Abstract. The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points and affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the higher registration accuracy and fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
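The first step, producing an intensity image from the point cloud, can be sketched as a simple x-y rasterisation (the grid orientation, cell size, and mean-intensity rule are assumptions; the paper's projection may differ).

```python
import numpy as np

def intensity_image(points, intensity, cell=0.1):
    """Rasterise a LiDAR cloud into a 2D intensity image.

    points: (n, 3) x/y/z coordinates; intensity: (n,) return
    intensity. Points are binned on an x-y grid of `cell` metres and
    each pixel takes the mean intensity of the points falling in it
    (0 where no point falls).
    """
    xy = points[:, :2]
    idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    img = np.zeros(shape)
    cnt = np.zeros(shape)
    np.add.at(img, (idx[:, 0], idx[:, 1]), intensity)   # sum per cell
    np.add.at(cnt, (idx[:, 0], idx[:, 1]), 1)           # count per cell
    np.divide(img, cnt, out=img, where=cnt > 0)         # mean where occupied
    return img
```

Standard image feature matching can then run between this intensity image and the optical image, which is the conversion the abstract describes.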

