PERSPECTIVE INTENSITY IMAGES FOR CO-REGISTRATION OF TERRESTRIAL LASER SCANNER AND DIGITAL CAMERA

Author(s):  
Yubin Liang ◽  
Yan Qiu ◽  
Tiejun Cui

Co-registration of a terrestrial laser scanner and a digital camera has been an important research topic, since visually appealing and measurable models of scanned objects can be reconstructed by combining point clouds and digital images. This paper presents an approach for co-registration of a terrestrial laser scanner and a digital camera. A perspective intensity image of the point cloud is first generated using the collinearity equations. Corner points are then extracted from the generated perspective intensity image and the camera image. The fundamental matrix F is estimated from several interactively selected tie points and used to obtain more matches with RANSAC. The 3D coordinates of all matched tie points are obtained directly or estimated using the least squares method. The robustness and effectiveness of the presented methodology are demonstrated by the experimental results. The methods presented in this work may also be used for automatic registration of terrestrial laser scanning point clouds.
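The matching pipeline described above hinges on the epipolar constraint. As an illustration (not the authors' implementation), a minimal normalized eight-point estimate of F in NumPy might look as follows; in practice it would be wrapped in a RANSAC loop seeded by the interactively selected tie points:

```python
import numpy as np

def normalize(pts):
    # Hartley normalization: centre the points and scale the mean
    # distance from the origin to sqrt(2) for numerical stability.
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    # Estimate F such that x2h^T F x1h = 0 for all correspondences.
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = np.linalg.svd(F)        # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2
    F = T2.T @ F @ T1                   # undo the normalization
    return F / np.linalg.norm(F)
```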


Author(s):  
C. L. Lau ◽  
S. Halim ◽  
M. Zulkepli ◽  
A. M. Azwan ◽  
W. L. Tang ◽  
...  

The extraction of true terrain points from unstructured laser point cloud data is an important step in producing an accurate digital terrain model (DTM). However, most spatial filtering methods use only geometric data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available from some scanners. The objective of this study is therefore to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some terrestrial laser scanners (TLS) for terrain extraction. Data acquisition was conducted at a mini replica landscape at the Universiti Teknologi Malaysia (UTM) Skudai campus using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. Coloured points whose values fall within the corresponding preset spectral thresholds are identified as belonging to that specific feature class. This terrain extraction process was implemented in Matlab. The results demonstrate that a passive image with higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor leads to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
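The thresholding step can be sketched as below. The threshold values here are hypothetical placeholders; the study derives its own from spectral analysis of sample classes, and its implementation is in Matlab rather than Python:

```python
import numpy as np

# Hypothetical per-channel RGB ranges for a "terrain" class; real
# values would come from spectral analysis of training samples.
TERRAIN_RANGE = {"r": (90, 160), "g": (70, 140), "b": (50, 120)}

def extract_class(points_rgb, ranges=TERRAIN_RANGE):
    """points_rgb: (N, 6) array of x, y, z, r, g, b.
    Returns the subset whose colour falls inside all three ranges."""
    r, g, b = points_rgb[:, 3], points_rgb[:, 4], points_rgb[:, 5]
    mask = ((ranges["r"][0] <= r) & (r <= ranges["r"][1]) &
            (ranges["g"][0] <= g) & (g <= ranges["g"][1]) &
            (ranges["b"][0] <= b) & (b <= ranges["b"][1]))
    return points_rgb[mask]
```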


Author(s):  
S. Peterson ◽  
J. Lopez ◽  
R. Munjy

A small unmanned aerial vehicle (UAV) with survey-grade GNSS positioning is used to produce a point cloud for topographic mapping and 3D reconstruction. The objective of this study is to assess the accuracy of a UAV imagery-derived point cloud by comparing it with a point cloud generated by terrestrial laser scanning (TLS). Imagery was collected over a 320 m by 320 m area with undulating terrain, containing 80 ground control points. A SenseFly eBee Plus fixed-wing platform with PPK positioning, a 10.6 mm focal length and a 20 MP digital camera was used to fly the area. Pix4Dmapper, a computer-vision-based commercial software package, was used to process a photogrammetric block constrained by 5 GCPs, obtaining cm-level RMSE at the remaining 75 checkpoints. Based on the results of automatic aerial triangulation, a point cloud and a digital surface model (DSM) (2.5 cm/pixel) were generated and their accuracy assessed. A bias of less than 1 pixel was observed in elevations from the UAV DSM at the checkpoints. 31 registered TLS scans made up a point cloud of the same area with an observed horizontal root mean square error (RMSE) of 0.006 m and negligible vertical RMSE. Comparisons were made between fitted planes of extracted roof features of 2 buildings and along a centreline profile of a road in both the UAV and TLS point clouds. The comparisons showed an average bias of +8 cm, with the UAV point cloud too high in two of the features. No bias was observed in the roof features of the southernmost building.
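A minimal sketch of the roof-plane comparison idea (not the authors' workflow): fit a least-squares plane to one cloud's roof points, then read the bias as the mean signed point-to-plane distance of the other cloud:

```python
import numpy as np

def fit_plane(pts):
    # Least-squares plane through a point set: centroid plus the
    # normal from the smallest singular vector of the centred data.
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    n = Vt[-1]
    return c, n / np.linalg.norm(n)

def plane_bias(ref_pts, test_pts):
    # Mean signed distance of test_pts from the plane fitted to ref_pts.
    c, n = fit_plane(ref_pts)
    return float(((test_pts - c) @ n).mean())
```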


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least squares fitting. Unfortunately, when assessing the goodness of fit of a surface approximation on a real dataset, only the noisy point cloud is available as a reference: (i) a low root mean squared error (RMSE) can indicate overfitting, i.e., a fitting of the noise, and should accordingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-splines reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once the object is scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-splines local refinement open the door for further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
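The RMSE ambiguity described above can be reproduced with a toy 1D least-squares fit (a polynomial stand-in, not T-splines): the more flexible model always lowers the RMSE against the noisy observations, and only a known reference reveals whether that gain is genuine detail or fitted noise:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
truth = np.sin(2 * np.pi * x)                  # known reference (1D stand-in)
noisy = truth + rng.normal(0.0, 0.2, x.size)   # "scanned" observations

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

fits = {deg: np.polyval(np.polyfit(x, noisy, deg), x) for deg in (3, 9)}
# Against the noisy data, the higher-degree model always scores at least
# as well; only RMSE against the known reference exposes fitted noise.
for deg, fit in fits.items():
    print(deg, round(rmse(fit, noisy), 3), round(rmse(fit, truth), 3))
```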


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to any disturbance, while also aiding in its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways in which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance its accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated against the output of a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
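A minimal 1D sketch of the bilateral filtering idea (illustrative parameters, not those of the study): each sample is replaced by a weighted mean of its neighbours, with weights that decay both with spatial distance and with difference in value, so measurement noise is smoothed while a genuine deflection step is preserved:

```python
import numpy as np

def bilateral_1d(z, sigma_s=3.0, sigma_r=0.5):
    """Bilateral filter on a 1D profile: spatial Gaussian weight times
    a range (value-difference) Gaussian weight, so edges survive."""
    n = z.size
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2)) *
             np.exp(-((z - z[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * z) / np.sum(w)
    return out
```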


2013 ◽  
Vol 405-408 ◽  
pp. 3032-3036
Author(s):  
Yi Bo Sun ◽  
Xin Qi Zheng ◽  
Zong Ren Jia ◽  
Gang Ai

At present, most commercial 3D laser scanning measurement systems are designed for large areas and large scenes, but few show advantages in small areas or small scenes. To address this shortcoming, we designed a light, small mobile 3D laser scanning system that integrates GPS, INS, a laser scanner, a digital camera and other sensors to generate point cloud data of the target through data filtering and fusion. The system can be mounted on an airborne or terrestrial small mobile platform, enabling point cloud data to be acquired rapidly and a real 3D model to be reconstructed. Compared to existing mobile 3D laser scanning systems, the system we designed offers high precision with lower cost, smaller hardware and greater flexibility.


2019 ◽  
Vol 13 (2) ◽  
pp. 105-134 ◽  
Author(s):  
Mohammad Omidalizarandi ◽  
Boris Kargoll ◽  
Jens-André Paffenholz ◽  
Ingo Neumann

In the last two decades, the integration of a terrestrial laser scanner (TLS) and digital photogrammetry, besides other sensor integrations, has received considerable attention for deformation monitoring of natural or man-made structures. Typically, a TLS is used for area-based deformation analysis. A high-resolution digital camera may be attached on top of the TLS to increase the accuracy and completeness of the deformation analysis by optimally combining point or line features extracted both from three-dimensional (3D) point clouds and from images captured at different epochs. For this purpose, the external calibration parameters between the TLS and the digital camera need to be determined precisely. The camera calibration and internal TLS calibration are commonly carried out in advance in a laboratory environment. The focus of this research is the highly accurate and robust estimation of the external calibration parameters between the fused sensors using signalised target points. The observables are the image measurements, the 3D point clouds, and the horizontal angle readings of the TLS. In addition, laser tracker observations are used for validation. The functional models are based on space resection in photogrammetry using the collinearity condition equations, the 3D Helmert transformation and a constraint equation, which are solved in a rigorous bundle adjustment procedure. Three different adjustment procedures are developed and implemented: (1) an expectation maximization (EM) algorithm that solves a Gauss-Helmert model (GHM) with grouped t-distributed random deviations, (2) a novel EM algorithm that solves a corresponding quasi-Gauss-Markov model (qGMM) with t-distributed pseudo-misclosures, and (3) a classical least-squares procedure that solves the GHM with variance components and outlier removal. The comparison of the results demonstrates the precise, reliable, accurate and robust estimation of the parameters, in particular by the second and third procedures in comparison to the first one. In addition, the results show that the second procedure is computationally more efficient than the other two.
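The rigorous EM-based adjustments are beyond a short sketch, but the rigid (rotation and translation) core of the 3D Helmert transformation has a closed-form least-squares solution via SVD, the Kabsch/Procrustes method. A minimal NumPy version, assuming no scale difference between the frames:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t with
    dst ≈ src @ R.T + t (Kabsch/Procrustes); a minimal stand-in for
    the rigid part of a 3D Helmert transformation (scale fixed to 1)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```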


2020 ◽  
Author(s):  
Moritz Bruggisser ◽  
Johannes Otepka ◽  
Norbert Pfeifer ◽  
Markus Hollaus

Unmanned aerial vehicle-borne laser scanning (ULS) allows time-efficient acquisition of high-resolution point clouds over regional extents at moderate cost. The quality of ULS point clouds facilitates the 3D modelling of individual tree stems, which opens new possibilities in the context of forest monitoring and management. In our study, we developed and tested an algorithm that allows (i) the autonomous detection of potential stem locations within the point clouds, (ii) the estimation of the diameter at breast height (DBH) and (iii) the reconstruction of the tree stem. In our experiments on point clouds from a RIEGL miniVUX-1DL and a VUX-1UAV, we could automatically detect 91.0% and 77.6%, respectively, of the stems within our study area. The DBH could be modelled with biases of 3.1 cm and 1.1 cm, respectively, from the two point cloud sets, with respective detection rates of 80.6% and 61.2% of the trees present in the field inventory. The lowest 12 m of the tree stem could be reconstructed with absolute stem diameter differences below 5 cm and 2 cm, respectively, compared to stem diameters from a terrestrial laser scanning point cloud. The accuracy for larger tree stems was generally higher than for smaller trees. Furthermore, we observed only a small influence of the completeness with which a stem is covered with points, as long as half of the stem circumference was captured. Likewise, the absolute point count did not impact the accuracy but, in contrast, was critical to the completeness with which a scene could be reconstructed. The precision of the laser scanner, on the other hand, was a key factor for the accuracy of the stem diameter estimation.
The findings of this study are highly relevant for flight planning and sensor selection in future ULS acquisition missions in the context of forest inventories.
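A common way to estimate DBH from a stem cross-section is a least-squares circle fit. The sketch below uses the algebraic Kasa fit (an assumption; the abstract does not specify the fitting method) and works from a partial arc, consistent with the finding that half the circumference suffices:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D points from a
    stem slice; returns centre (cx, cy) and radius r (DBH = 2 * r).
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(x.size)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```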


2014 ◽  
Vol 638-640 ◽  
pp. 2160-2163
Author(s):  
Gui Hua Cang ◽  
Jian Ping Yue

Fusion of close range photogrammetry (CRP) and terrestrial laser scanning (TLS) technology has been a hot topic in the field of building reconstruction, and there are many ways to realize the fusion of the two kinds of data. In this paper, we propose a method for 3D-2D data registration based on the Scale Invariant Feature Transform (SIFT) algorithm and range intensity data. A 3D terrestrial laser scanner and a digital camera are different sensors, which leads to large differences between the intensity image (derived from the range intensity data) and the colour image. Traditional image matching methods cannot be applied to register such images. This paper focuses on the feasibility and practicability of the SIFT algorithm for matching such different images. The results show that the principle of the SIFT method is suitable for the registration of the two kinds of images.
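SIFT descriptor extraction itself is beyond a short sketch, but the matching stage such a registration relies on can be illustrated: given descriptor arrays from the intensity image and the colour image, nearest-neighbour matching with Lowe's ratio test keeps only distinctive matches (a generic sketch, not the authors' code):

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: keep a match
    only if the best distance is clearly smaller than the second best.
    desc1: (N, D), desc2: (M, D) descriptor arrays (e.g. 128-D SIFT),
    with M >= 2. Returns a list of (index_in_desc1, index_in_desc2)."""
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, j))
    return matches
```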


Author(s):  
G. Tran ◽  
D. Nguyen ◽  
M. Milenkovic ◽  
N. Pfeifer

Full-waveform (FWF) LiDAR (Light Detection and Ranging) systems have the advantage of recording the entire backscattered signal of each emitted laser pulse, compared to conventional airborne discrete-return laser scanner systems. FWF systems can provide point clouds that contain extra attributes such as amplitude and echo width. In this study, FWF data collected in 2010 for Eisenstadt, a city in the eastern part of Austria, was used to classify four main classes: buildings, trees, waterbody and ground, by employing a decision tree. Point density, echo ratio, echo width, normalised digital surface model and point cloud roughness are the main inputs for classification. The accuracy of the final results, in terms of correctness and completeness measures, was assessed by comparing the classified output to a knowledge-based labelling of the points. Completeness and correctness between 90% and 97% were reached, depending on the class. While such results and methods have been presented before, we additionally investigate the transferability of the classification method (features, thresholds …) to another urban FWF LiDAR point cloud. Our conclusion is that, of the features used, only echo width requires new thresholds. A data-driven adaptation of thresholds is suggested.
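A decision tree over these features can be sketched as a few nested threshold rules; the thresholds and the water rule below are hypothetical placeholders (the study derives its own, and stresses that at least the echo width threshold must be re-derived per dataset):

```python
# Toy decision tree in the spirit of the described classifier.
# Hypothetical feature units: ndsm in metres, echo_ratio in percent,
# echo_width in nanoseconds, point_density in points per square metre.
def classify_point(ndsm, echo_ratio, echo_width, roughness, point_density):
    if ndsm < 0.5:
        # Near-terrain points: sparse returns suggest water absorption.
        return "water" if point_density < 1.0 else "ground"
    if echo_width > 2.0:           # broadened echoes: vegetation
        return "tree"
    if echo_ratio > 95.0 and roughness < 0.1:
        return "building"          # solid, smooth elevated surface
    return "tree"
```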

