2D lidar to kinematic chain calibration using planar features of indoor scenes

Author(s):  
Bernardo Lourenço ◽  
Tiago Madeira ◽  
Paulo Dias ◽  
Vitor M. Ferreira Santos ◽  
Miguel Oliveira

Purpose 2D laser rangefinders (LRFs) are commonly used sensors in the field of robotics, as they provide accurate range measurements with high angular resolution. These sensors can be coupled with mechanical units which, by granting an additional degree of freedom to the movement of the LRF, enable the 3D perception of a scene. To be successful, this reconstruction procedure requires an accurate estimate of the extrinsic transformation between the LRF and the motorized system. Design/methodology/approach In this work, a calibration procedure is proposed to estimate this transformation. The method does not require a predefined marker (commonly used despite its numerous disadvantages), as it uses planar features present in the acquired point clouds. Findings Qualitative inspections show that the proposed method significantly reduces the artifacts that typically appear in point clouds because of inaccurate calibrations. Furthermore, quantitative results and comparisons with a high-resolution 3D scanner demonstrate that the calibrated point cloud represents the geometries present in the scene with much higher accuracy than the uncalibrated point cloud. Practical implications The last key point of this work is the comparison of two laser scanners: the lemonbot (the authors') and a commercial FARO scanner. Despite being almost ten times cheaper, the lemonbot achieved similar results in terms of geometric accuracy. Originality/value This work describes a novel calibration technique that is easy to implement and able to achieve accurate results. One of its key features is the use of planes to calibrate the extrinsic transformation.
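A minimal sketch of the kind of plane-based cost such a calibration could minimize, not the authors' exact formulation: each 2D scan point is assumed to be tagged with the joint rotation at which it was acquired and with the index of the planar patch (wall, floor, ...) it lies on, and the six extrinsic parameters are found by driving the point-to-plane distances to zero. All function and variable names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def residuals(x, pts_lrf, joint_rots, plane_ids, planes):
    """Point-to-plane distances as a function of the LRF-to-joint extrinsics.

    x          : 6-vector, rotation (axis-angle) and translation of the LRF
    pts_lrf    : (N, 3) points in the LRF frame (z = 0 for a 2D scanner)
    joint_rots : (N, 3, 3) rotation of the motorized joint for each point
                 (a pure-rotation joint is assumed here for brevity)
    plane_ids  : (N,) index of the plane each point belongs to
    planes     : (P, 4) plane parameters [nx, ny, nz, d] with |n| = 1
    """
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    # transform every scan point into the fixed frame of the kinematic chain
    pts_fixed = np.einsum('nij,nj->ni', joint_rots, pts_lrf @ R.T + t)
    n = planes[plane_ids, :3]
    d = planes[plane_ids, 3]
    return np.einsum('ni,ni->n', pts_fixed, n) + d   # signed distances

# x0 = np.zeros(6)   # start from the nominal mounting
# sol = least_squares(residuals, x0, args=(pts, rots, ids, planes))
```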

2021 ◽  
Vol 11 (4) ◽  
pp. 1465
Author(s):  
Rocio Mora ◽  
Jose Antonio Martín-Jiménez ◽  
Susana Lagüela ◽  
Diego González-Aguilera

The total and automatic digitalization of indoor spaces in 3D represents a great advance for building maintenance and construction tasks, which currently require site visits and manual work. Terrestrial laser scanners (TLS) have been widely used for these tasks, although the acquisition methodology with TLS systems is time consuming, and each point cloud is acquired in a different coordinate system, so the user has to post-process the data to clean it and obtain a single point cloud of the whole scenario. This paper presents a solution for the automatic data acquisition and registration of point clouds from indoor scenes, designed for point clouds acquired with a terrestrial laser scanner (TLS) mounted on an unmanned ground vehicle (UGV). The methodology developed allows the generation of one complete, dense 3D point cloud consisting of the acquired point clouds registered in the same coordinate system, reaching an accuracy below 1 cm in section dimensions and below 1.5 cm in wall thickness, which makes it valid for quality control in building works. Two study cases corresponding to building works were chosen for the validation of the method, showing the applicability of the methodology developed for tasks related to the control of the evolution of the construction.
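As an illustration of the registration step, the sketch below aligns two consecutive TLS scans into a common coordinate system with point-to-plane ICP using Open3D. This is only a stand-in for the paper's own pipeline, which additionally automates the acquisition; the initial guess `T_init` (for example, derived from the UGV pose) and the voxel size are assumptions.

```python
import numpy as np
import open3d as o3d

def register_pair(source_path, target_path, T_init=np.eye(4), voxel=0.02):
    """Return the 4x4 transform mapping the source scan into the target frame."""
    src = o3d.io.read_point_cloud(source_path)
    tgt = o3d.io.read_point_cloud(target_path)
    src_d, tgt_d = src.voxel_down_sample(voxel), tgt.voxel_down_sample(voxel)
    for pc in (src_d, tgt_d):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src_d, tgt_d,
        max_correspondence_distance=3 * voxel,
        init=T_init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```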


2018 ◽  
Vol 10 (11) ◽  
pp. 1754 ◽  
Author(s):  
Shayan Nikoohemat ◽  
Michael Peter ◽  
Sander Oude Elberink ◽  
George Vosselman

Data acquisition with Indoor Mobile Laser Scanners (IMLS) is quick, low-cost and accurate for indoor 3D modeling. Besides a point cloud, an IMLS also provides the trajectory of the mobile scanner. We analyze this trajectory jointly with the point cloud to support the labeling of points in noisy, highly reflective and cluttered indoor scenes. An adjacency-graph-based method is presented for detecting and labeling permanent structures, such as walls, floors, ceilings, and stairs. Through occlusion reasoning and the use of the trajectory as a set of scanner positions, gaps are discriminated from real openings in the data. Furthermore, a voxel-based method is applied to label navigable space and separate it from obstacles. The results show that 80% of the doors and 85% of the rooms are correctly detected, and most of the walls and openings are reconstructed. The experimental outcomes indicate that the trajectory of MLS systems plays an essential role in the understanding of indoor scenes.
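A minimal sketch of the voxel-based labeling idea, not the paper's implementation: occupied voxels come from the point cloud, and a floor cell is marked navigable when the column above it is free up to an assumed agent height. Voxel size, agent height and the single-floor assumption are illustrative simplifications.

```python
import numpy as np

def label_navigable(points, floor_z, voxel=0.1, agent_height=1.8):
    """points: (N, 3) indoor point cloud; floor_z: approximate floor elevation."""
    idx = np.floor(points / voxel).astype(int)
    idx -= idx.min(axis=0)                      # shift to non-negative voxel indices
    dims = idx.max(axis=0) + 1
    occupied = np.zeros(dims, dtype=bool)
    occupied[tuple(idx.T)] = True

    # approximate voxel layer of the floor (off-by-one tolerated in this sketch)
    floor_k = int(np.floor((floor_z - points[:, 2].min()) / voxel))
    h = int(np.ceil(agent_height / voxel))
    # a floor cell is navigable if it is occupied (supported) and the column above is free
    column_free = ~occupied[:, :, floor_k + 1: floor_k + 1 + h].any(axis=2)
    navigable = occupied[:, :, floor_k] & column_free
    return navigable                            # 2D grid of walkable cells
```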


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shapes using a smaller number of control points than the conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least-squares fitting. Unfortunately, with a real dataset only the noisy point cloud itself is available for assessing the goodness of fit of the surface approximation: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To judge the approximation reliably, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door for further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
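The overfitting issue described above can be illustrated with a small sketch: a surface fitted to the noisy scan is evaluated both against the scan (apparent RMSE) and against the known reference surface (true error). A SciPy smoothing B-spline stands in here for the T-spline fit, which SciPy does not provide; variable names and the smoothing parameter are assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_and_assess(x, y, z_scan, z_ref, smoothing):
    """x, y, z_scan: scattered noisy scan heights; z_ref: reference height at (x, y)."""
    spline = SmoothBivariateSpline(x, y, z_scan, s=smoothing)
    z_fit = spline.ev(x, y)
    rmse_scan = np.sqrt(np.mean((z_fit - z_scan) ** 2))  # low value may just mean the noise was fitted
    rmse_ref = np.sqrt(np.mean((z_fit - z_ref) ** 2))    # error w.r.t. the known reference surface
    return rmse_scan, rmse_ref
```

Sweeping the smoothing parameter and watching the two values diverge is the simplest way to see why a low RMSE against the scan alone is not a reliable quality measure.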


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to a disturbance, while also aiding its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data in order to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
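A minimal sketch of the bilateral filtering step, applied here to a depth image from the depth camera rather than to the point cloud directly; the kernel size and the spatial/range sigmas are illustrative values, not those used in the study.

```python
import numpy as np
import cv2

def smooth_depth(depth_m, d=9, sigma_color=0.01, sigma_space=5.0):
    """depth_m: float32 depth image in metres.

    Edge-preserving smoothing: noise on flat surfaces (such as the beam face) is
    suppressed while the depth steps at the beam edges are kept, which is what makes
    millimetre-level deflection estimates feasible from a consumer depth camera.
    """
    return cv2.bilateralFilter(depth_m.astype(np.float32), d, sigma_color, sigma_space)
```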


Author(s):  
M. Franzini ◽  
V. Casella ◽  
P. Marchese ◽  
M. Marini ◽  
G. Della Porta ◽  
...  

Abstract. Recent years have seen a gradual transition from terrestrial to aerial surveying, thanks to the development of UAVs and of sensors designed for them. Many sectors have benefited from this change, among them geology: drones are flexible, cost-efficient and can support outcrop surveying in many difficult situations, such as inaccessible, steep and high rock faces. The experience acquired in terrestrial surveying, with total stations, GNSS or terrestrial laser scanners (TLS), has not yet been completely transferred to UAV acquisition. Hence, quality comparisons are still needed. The present paper is framed in this perspective, aiming to evaluate the quality of the point clouds generated by a UAV in a geological context; the analysis was conducted by comparing the UAV product with the homologous product acquired with a TLS system. Exploiting modern semantic classification, based on eigenfeatures and a support vector machine (SVM), the two point clouds were compared in terms of density and mutual distance. The UAV survey proved its usefulness in this situation, providing a uniform density distribution over the whole area and producing a point cloud of a quality comparable with that of the more traditional TLS system.
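A sketch of the eigenfeature-based classification mentioned above: eigenvalues of the local covariance of each point's neighbourhood yield linearity, planarity and scattering features, which feed an SVM. The neighbourhood size and the exact feature set are assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.svm import SVC

def eigenfeatures(points, k=20):
    """points: (N, 3) array; returns per-point [linearity, planarity, scattering]."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, idx in enumerate(nn):
        ev = np.linalg.eigvalsh(np.cov(points[idx].T))   # ascending: l1 <= l2 <= l3
        l1, l2, l3 = np.maximum(ev, 1e-12)
        feats[i] = [(l3 - l2) / l3,   # linearity
                    (l2 - l1) / l3,   # planarity
                    l1 / l3]          # scattering
    return feats

# clf = SVC(kernel='rbf').fit(eigenfeatures(train_pts), train_labels)
# labels_uav = clf.predict(eigenfeatures(uav_pts))
```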


Author(s):  
Ravinder Singh ◽  
Archana Khurana ◽  
Sunil Kumar

Purpose This study aims to develop an optimized 3D laser point cloud reconstruction using a descent gradient algorithm. Precise and accurate reconstruction of a 3D laser point cloud of a complex environment/object is a key solution for many industries, such as construction, gaming, automobiles, aerial navigation, architecture and automation. A 2D laser scanner, together with a servo motor/pan-tilt unit/inertial measurement unit, is used to generate a 3D point cloud (of the environment, an object or both) by acquiring real-time data from the sensors. However, while generating the 3D laser point cloud, problems related to time synchronization between the laser and the servomotor and to torque variation in the servomotor arise, which cause misalignment in stacking the 2D laser scans. Because of this misalignment, the 2D laser scans are stacked with erroneous angular and position information from the servomotor, and the resulting 3D laser point cloud becomes distorted and inconsistent for measuring the dimensions of objects. Design/methodology/approach This paper presents a modified 3D laser system, assembled from a 2D laser scanner coupled with a servomotor (Dynamixel motor), for developing an efficient 3D laser point cloud through the implementation of an optimization technique: the descent gradient filter (DGT). The proposed approach reduces the cost function (error) in the angular and position coordinates of the servo motor caused by torque variation and time synchronization, which enhances the accuracy of the 3D point cloud map for the accurate measurement of object dimensions. Findings Various real-world experiments were performed with the proposed DGT filter linked to the laser scanner and servomotor, and an improvement of 6.5 per cent in measuring object dimensions accurately was obtained when compared with conventional approaches for generating a 3D laser point cloud. Originality/value The proposed technique may be applicable to various industrial applications based on robotic arms (such as painting, welding and cutting) in the automobile industry, to the optimized measurement of objects, to efficient mobile robot navigation, and to the precise 3D reconstruction of environments/objects in construction, architecture, airborne and aerial navigation applications.
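A generic sketch of the descent-gradient idea behind the DGT filter, not the authors' implementation: the servomotor angle and timing offsets are corrected by following the numerical gradient of a misalignment cost. The cost function itself (how misalignment of the stacked scans is scored) is application specific and is passed in by the caller; the learning rate and iteration count are placeholders.

```python
import numpy as np

def descent_gradient(cost, x0, lr=1e-3, eps=1e-6, iters=500):
    """cost: callable mapping an offset vector x -> scalar misalignment error."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # central-difference estimate of the gradient of the cost at x
        grad = np.array([(cost(x + eps * e) - cost(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x -= lr * grad
    return x

# offsets = descent_gradient(misalignment_error, x0=np.zeros(2))  # e.g. [d_angle, d_time]
```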


2020 ◽  
Author(s):  
Moritz Bruggisser ◽  
Johannes Otepka ◽  
Norbert Pfeifer ◽  
Markus Hollaus

Unmanned aerial vehicle-borne laser scanning (ULS) allows the time-efficient acquisition of high-resolution point clouds over regional extents at moderate costs. The quality of ULS point clouds facilitates the 3D modelling of individual tree stems, which opens new possibilities in the context of forest monitoring and management. In our study, we developed and tested an algorithm which allows for i) the autonomous detection of potential stem locations within the point clouds, ii) the estimation of the diameter at breast height (DBH) and iii) the reconstruction of the tree stem. In our experiments on point clouds from a RIEGL miniVUX-1DL and a VUX-1UAV, we could automatically detect 91.0% and 77.6% of the stems within our study area, respectively. The DBH could be modelled with biases of 3.1 cm and 1.1 cm, respectively, from the two point cloud sets, with respective detection rates of 80.6% and 61.2% of the trees present in the field inventory. The lowest 12 m of the tree stem could be reconstructed with absolute stem diameter differences below 5 cm and 2 cm, respectively, compared to stem diameters derived from a terrestrial laser scanning point cloud. The accuracy for larger tree stems was generally higher than that for smaller trees. Furthermore, we observed only a small influence of the completeness with which a stem is covered with points, as long as half of the stem circumference was captured. Likewise, the absolute point count did not impact the accuracy, but, in contrast, was critical to the completeness with which a scene could be reconstructed. The precision of the laser scanner, on the other hand, was a key factor for the accuracy of the stem diameter estimation. The findings of this study are highly relevant for the flight planning and the sensor selection of future ULS acquisition missions in the context of forest inventories.
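A sketch of one possible DBH estimate from a ULS point cloud, not the algorithm of the study: a thin slice of stem points around breast height is extracted and an algebraic (Kasa) least-squares circle is fitted to its x/y coordinates. The slice height and thickness follow the usual forestry convention of 1.3 m and are assumptions here.

```python
import numpy as np

def dbh_from_slice(stem_points, ground_z, breast_height=1.3, slice_half=0.05):
    """stem_points: (N, 3) points of one detected stem; returns the diameter in metres."""
    z = stem_points[:, 2] - ground_z
    sl = stem_points[np.abs(z - breast_height) < slice_half]
    x, y = sl[:, 0], sl[:, 1]
    # Kasa fit: solve [x y 1] [a b c]^T = x^2 + y^2 in the least-squares sense,
    # where a = 2*cx, b = 2*cy, c = r^2 - cx^2 - cy^2
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2, b / 2
    radius = np.sqrt(c + cx**2 + cy**2)
    return 2 * radius
```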


Author(s):  
K. Thoeni ◽  
A. Giacomini ◽  
R. Murtagh ◽  
E. Kniest

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to camera type and to establish the minimum camera requirements for obtaining results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), an iPhone 4S (8 Mp), a Panasonic Lumix LX5 (9.5 Mp), a Panasonic Lumix ZS20 (14.1 Mp) and a Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall about 6 m high and 20 m long. The wall is partly smooth, with some evident geological features such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured with a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify, as objectively as possible, the quality of the multi-view 3D reconstruction results obtained with the various cameras and to evaluate their applicability to geotechnical problems.
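The cloud-to-cloud comparison carried out in CloudCompare can be approximated with a few lines: for every point of a camera-derived cloud, the distance to its nearest TLS point is taken as its deviation from the ground truth. (CloudCompare's C2C tool additionally offers local surface modelling; plain nearest-neighbour distances are used here for brevity.)

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(camera_pts, tls_pts):
    """Return one deviation value (metres) per camera point, against the TLS cloud."""
    dists, _ = cKDTree(tls_pts).query(camera_pts, k=1)
    return dists

# d = cloud_to_cloud(cam, tls)
# print(f"mean {d.mean() * 1000:.1f} mm, std {d.std() * 1000:.1f} mm")
```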


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Ming Guo ◽  
Bingnan Yan ◽  
Tengfei Zhou ◽  
Deng Pan ◽  
Guoli Wang

To obtain high-precision measurement data using vehicle-borne light detection and ranging (LiDAR) scanning (VLS) systems, calibration is necessary before a data acquisition mission. Thus, a novel calibration method based on a homemade target ball is proposed to estimate the system mounting parameters, which refer to the rotational and translational offsets between the LiDAR sensor and the inertial measurement unit (IMU) in orientation and position. Firstly, the spherical point cloud is fitted to a sphere to extract the coordinates of the centre; each scan line on the sphere is fitted to a section of the sphere to calculate the distance ratio from the centre to the nearest two sections, and the attitude and trajectory parameters at the centre are obtained by linear interpolation. Then, the true coordinates of the centre of the sphere are determined by measuring, with a total station, the coordinates of the reflector placed directly above the target ball. Finally, the three rotation parameters and three translation parameters are calculated by two least-squares adjustments. Comparisons of the point cloud before and after calibration, and of the calibrated point cloud with a point cloud obtained by a terrestrial laser scanner, show that the accuracy improved significantly after calibration. The experiment indicates that the calibration method based on the homemade target ball can effectively improve the accuracy of the point cloud, which can promote VLS development and applications.
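A sketch of the first step only, estimating the target-ball centre by fitting a sphere to its scanned points with a linear least-squares formulation; the subsequent trajectory interpolation and the two adjustments for the six mounting parameters are not reproduced here. Function and variable names are illustrative.

```python
import numpy as np

def fit_sphere(points):
    """points: (N, 3) LiDAR returns on the target ball; returns (centre, radius)."""
    x, y, z = points.T
    # x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + (r^2 - |c|^2), linear in the unknowns
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    b = x**2 + y**2 + z**2
    cx, cy, cz, d = np.linalg.lstsq(A, b, rcond=None)[0]
    centre = np.array([cx, cy, cz])
    radius = np.sqrt(d + centre @ centre)
    return centre, radius
```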


Author(s):  
G. Tran ◽  
D. Nguyen ◽  
M. Milenkovic ◽  
N. Pfeifer

Full-waveform (FWF) LiDAR (Light Detection and Ranging) systems have the advantage of recording the entire backscattered signal of each emitted laser pulse, compared to conventional airborne discrete-return laser scanner systems. FWF systems can provide point clouds which contain extra attributes such as amplitude and echo width. In this study, an FWF dataset collected in 2010 over Eisenstadt, a city in eastern Austria, was used to classify four main classes: buildings, trees, water bodies and ground, by employing a decision tree. Point density, echo ratio, echo width, normalised digital surface model and point cloud roughness are the main inputs for classification. The accuracy of the final results was assessed with correctness and completeness measures, obtained by comparing the classified output to a knowledge-based labelling of the points. Completeness and correctness between 90% and 97% were reached, depending on the class. While such results and methods have been presented before, we additionally investigate the transferability of the classification method (features, thresholds …) to another urban FWF LiDAR point cloud. Our conclusion is that, of the features used, only echo width requires new thresholds. A data-driven adaptation of thresholds is suggested.
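An illustrative, simplified version of such a threshold-based decision tree: the feature set matches the abstract (echo ratio, echo width, normalised DSM, roughness), but every threshold value below is a placeholder rather than one used in the study; echo width, in particular, is the feature the authors note must be re-tuned for a new dataset.

```python
import numpy as np

def classify(echo_ratio, echo_width, ndsm, roughness):
    """Inputs are per-point arrays of equal length; returns a per-point label array."""
    labels = np.full(echo_ratio.shape, 'ground', dtype=object)
    high = ndsm > 2.0                                            # clearly above the terrain
    labels[high & (echo_width > 4.0)] = 'tree'                   # broadened echoes: vegetation
    labels[high & (echo_width <= 4.0) & (echo_ratio > 0.8)] = 'building'
    labels[~high & (roughness < 0.02) & (echo_ratio < 0.1)] = 'water'
    return labels
```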

