STOCHASTIC SURFACE MESH RECONSTRUCTION

Author(s):  
M. Ozendi ◽  
D. Akca ◽  
H. Topan

A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure that takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the smallest errors are used in the surface triangulation; the remaining ones are automatically discarded.
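As a hedged illustration of the second step, the sketch below filters points by an error-ellipsoid quality measure before triangulating. The per-point covariance matrices, the `max_semi_axis` threshold and the 2.5D Delaunay triangulation are assumptions made for the example, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): discard points with large error
# ellipsoids, then triangulate the remaining points.
import numpy as np
from scipy.spatial import Delaunay

def quality_from_covariance(cov):
    """Largest semi-axis of the error ellipsoid (sqrt of the largest eigenvalue)."""
    eigvals = np.linalg.eigvalsh(cov)            # ascending order
    return np.sqrt(eigvals[-1])

def filter_and_triangulate(points, covariances, max_semi_axis=0.01):
    """Keep only points whose largest error-ellipsoid semi-axis is below the
    threshold (1 cm here, an illustrative value), then triangulate in XY."""
    quality = np.array([quality_from_covariance(c) for c in covariances])
    kept = points[quality < max_semi_axis]
    tri = Delaunay(kept[:, :2])                  # 2.5D triangulation of kept points
    return kept, tri.simplices
```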

2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least-squares fitting. Unfortunately, when the goodness of fit of the surface approximation is assessed on a real dataset, only a noisy point cloud is available as a reference: (i) a low root mean squared error (RMSE) can indicate overfitting, i.e., a fitting of the noise, and should therefore be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To judge the approximation fairly, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once the object is scanned under different configurations, the goodness of fit of the approximation can be assessed for a noisy and potentially gappy point cloud and compared with the traditional but less flexible NURBS. The local refinement offered by T-splines opens the door to further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
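The RMSE argument can be made concrete with a minimal sketch, assuming the fitted surface, the noisy scan and the printed reference surface can all be evaluated on a common grid; the array names (`fit_z`, `noisy_z`, `ref_z`) are illustrative, not from the paper.

```python
# Minimal sketch: two RMSE values for a fitted surface. A low RMSE against the
# noisy scan alone may simply indicate overfitting; the RMSE against the known
# (3D-printed) reference surface measures the actual approximation error.
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def goodness_of_fit(fit_z, noisy_z, ref_z):
    rmse_noise = rmse(fit_z, noisy_z)   # fit vs. noisy observations
    rmse_ref = rmse(fit_z, ref_z)       # fit vs. known reference surface
    return rmse_noise, rmse_ref
```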


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to any disturbance, while also aiding its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the principal ways in which a 3D point cloud dataset can be generated, and 3D cameras are also commonly used to produce a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance their accuracy and to increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the output of a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
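A minimal sketch of a bilateral filter applied to point cloud data is shown below; it assumes deflection is measured along z and smooths z with a spatial kernel over XY and a range kernel over z differences. The kernel widths (`sigma_s`, `sigma_r`) and the neighbourhood radius are illustrative values, not those used in the study.

```python
# Minimal sketch (assumed form, not the paper's code): bilateral filtering of
# the z coordinate of a point cloud.
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter_z(points, sigma_s=0.01, sigma_r=0.005, radius=0.03):
    tree = cKDTree(points[:, :2])
    filtered = points.copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p[:2], r=radius)
        nbrs = points[idx]
        d_s = np.linalg.norm(nbrs[:, :2] - p[:2], axis=1)   # spatial distance in XY
        d_r = nbrs[:, 2] - p[2]                              # depth (range) difference
        w = np.exp(-d_s**2 / (2 * sigma_s**2)) * np.exp(-d_r**2 / (2 * sigma_r**2))
        filtered[i, 2] = np.sum(w * nbrs[:, 2]) / np.sum(w)  # weight of p itself is 1
    return filtered
```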


Author(s):  
M. Franzini ◽  
V. Casella ◽  
P. Marchese ◽  
M. Marini ◽  
G. Della Porta ◽  
...  

Abstract. Recent years have seen a gradual transition from terrestrial to aerial surveying, thanks to the development of UAVs and the sensors designed for them. Many sectors have benefited from this change, among them geology: drones are flexible, cost-efficient and can support outcrop surveying in many difficult situations, such as inaccessible, steep and high rock faces. The experience acquired in terrestrial surveying, with total stations, GNSS or terrestrial laser scanners (TLS), has not yet been completely transferred to UAV acquisition; hence, quality comparisons are still needed. The present paper is framed in this perspective, aiming to evaluate the quality of the point clouds generated by a UAV in a geological context; the analysis was conducted by comparing the UAV product with the homologous one acquired with a TLS system. Exploiting modern semantic classification, based on eigenfeatures and a support vector machine (SVM), the two point clouds were compared in terms of density and mutual distance. The UAV survey proved its usefulness in this situation, providing a uniform density distribution over the whole area and producing a point cloud whose quality is comparable with that of the more traditional TLS systems.
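A minimal sketch of the eigenfeature-plus-SVM idea follows, assuming labelled training points are available; the neighbourhood size `k` and the feature definitions (linearity, planarity, sphericity) are common choices, not necessarily those of the authors.

```python
# Minimal sketch (illustrative, not the authors' pipeline): per-point covariance
# eigenvalues yield geometric features, which feed a support vector machine.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.svm import SVC

def eigenfeatures(points, k=20):
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        cov = np.cov(points[idx].T)
        l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))   # l1 >= l2 >= l3
        feats[i] = [(l1 - l2) / l1,   # linearity
                    (l2 - l3) / l1,   # planarity
                    l3 / l1]          # sphericity
    return feats

# Usage (assumed arrays): train on labelled points, predict on the other cloud.
# clf = SVC(kernel="rbf").fit(eigenfeatures(tls_xyz), tls_labels)
# uav_pred = clf.predict(eigenfeatures(uav_xyz))
```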


Author(s):  
Bernardo Lourenço ◽  
Tiago Madeira ◽  
Paulo Dias ◽  
Vitor M. Ferreira Santos ◽  
Miguel Oliveira

Purpose: 2D laser rangefinders (LRFs) are commonly used sensors in the field of robotics, as they provide accurate range measurements with high angular resolution. These sensors can be coupled with mechanical units which, by granting an additional degree of freedom to the movement of the LRF, enable the 3D perception of a scene. To be successful, this reconstruction procedure requires evaluating, with high accuracy, the extrinsic transformation between the LRF and the motorized system.
Design/methodology/approach: In this work, a calibration procedure is proposed to evaluate this transformation. The method does not require a predefined marker (commonly used despite its numerous disadvantages), as it uses planar features in the acquired point clouds.
Findings: Qualitative inspections show that the proposed method significantly reduces the artifacts that typically appear in point clouds because of inaccurate calibrations. Furthermore, quantitative results and comparisons with a high-resolution 3D scanner demonstrate that the calibrated point cloud represents the geometries present in the scene with much higher accuracy than the uncalibrated point cloud.
Practical implications: The last key point of this work is the comparison of two laser scanners: the lemonbot (the authors' own) and a commercial FARO scanner. Despite being almost ten times cheaper, the authors' laser scanner was able to achieve similar results in terms of geometric accuracy.
Originality/value: This work describes a novel calibration technique that is easy to implement and able to achieve accurate results. One of its key features is the use of planes to calibrate the extrinsic transformation.
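The plane-based calibration principle can be sketched as follows, under the assumption that planar segments have been extracted and that the motor rotation of each scan is known; the parameterization and the Nelder-Mead optimizer are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of plane-based extrinsic calibration: find the LRF-to-unit
# transform that makes points belonging to the same physical plane as planar
# as possible.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

def plane_residual(points):
    """Sum of squared distances of the points to their best-fit plane (via SVD)."""
    centered = points - points.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[-1] ** 2

def cost(params, plane_groups):
    """params = [rx, ry, rz, tx, ty, tz]: candidate LRF-to-unit extrinsic.
    plane_groups: each group is a list of (points_lrf, motor_R) pairs whose
    points all lie on one physical plane; motor_R is the known rotation of the
    motorized unit for that scan (extrinsic applied first, motor rotation after)."""
    rot = R.from_euler("xyz", params[:3]).as_matrix()
    t = np.asarray(params[3:])
    total = 0.0
    for group in plane_groups:
        world = np.vstack([(pts @ rot.T + t) @ mR.T for pts, mR in group])
        total += plane_residual(world)
    return total

# extrinsic = minimize(cost, x0=np.zeros(6), args=(plane_groups,),
#                      method="Nelder-Mead").x
```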


2020 ◽  
Author(s):  
Moritz Bruggisser ◽  
Johannes Otepka ◽  
Norbert Pfeifer ◽  
Markus Hollaus

Unmanned aerial vehicle-borne laser scanning (ULS) allows time-efficient acquisition of high-resolution point clouds over regional extents at moderate cost. The quality of ULS point clouds facilitates the 3D modelling of individual tree stems, which opens new possibilities in the context of forest monitoring and management. In our study, we developed and tested an algorithm which allows for (i) the autonomous detection of potential stem locations within the point clouds, (ii) the estimation of the diameter at breast height (DBH) and (iii) the reconstruction of the tree stem. In our experiments on point clouds from a RIEGL miniVUX-1DL and a VUX-1UAV, respectively, we could automatically detect 91.0% and 77.6% of the stems within our study area. The DBH could be modelled with biases of 3.1 cm and 1.1 cm, respectively, from the two point cloud sets, with respective detection rates of 80.6% and 61.2% of the trees present in the field inventory. The lowest 12 m of the tree stem could be reconstructed with absolute stem diameter differences below 5 cm and 2 cm, respectively, compared to stem diameters derived from a terrestrial laser scanning point cloud. The accuracy for larger tree stems was generally higher than that for smaller trees. Furthermore, we found only a small influence of the completeness with which a stem is covered with points, as long as half of the stem circumference was captured. Likewise, the absolute point count did not affect the accuracy but was, in contrast, critical to the completeness with which a scene could be reconstructed. The precision of the laser scanner, on the other hand, was a key factor for the accuracy of the stem diameter estimation. The findings of this study are highly relevant for flight planning and sensor selection in future ULS acquisition missions in the context of forest inventories.
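A hedged sketch of the DBH step is given below, assuming a ground height is known and a thin slice of stem points at breast height is available; the Kasa circle fit and the ±5 cm slice width are illustrative choices, not necessarily those of the algorithm described above.

```python
# Minimal sketch (illustrative): estimate DBH by fitting a circle to a thin
# horizontal slice of stem points at 1.3 m above ground.
import numpy as np

def fit_circle_xy(xy):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def estimate_dbh(stem_points, ground_z, slice_half_width=0.05):
    """DBH from points within ±5 cm of breast height (1.3 m above ground)."""
    z = stem_points[:, 2] - ground_z
    slice_pts = stem_points[np.abs(z - 1.3) < slice_half_width]
    _, _, r = fit_circle_xy(slice_pts[:, :2])
    return 2.0 * r
```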


Author(s):  
K. Thoeni ◽  
A. Giacomini ◽  
R. Murtagh ◽  
E. Kniest

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall about 6 m high and 20 m long. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured with a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate their applicability to geotechnical problems.
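A minimal sketch of the cloud-to-cloud comparison is given below, assuming both clouds are already georeferenced in the same frame; it mirrors the spirit of the CloudCompare C2C distance rather than reproducing it exactly.

```python
# Minimal sketch: nearest-neighbour distances of a camera-derived point cloud
# against the TLS cloud taken as ground truth.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(camera_cloud, tls_cloud):
    """For each camera-derived point, distance to its nearest TLS point."""
    tree = cKDTree(tls_cloud)
    dist, _ = tree.query(camera_cloud, k=1)
    return dist

# Summary statistics that could be reported per camera (assumed arrays):
# d = cloud_to_cloud_distance(cam_xyz, tls_xyz)
# print(np.mean(d), np.std(d), np.percentile(d, 95))
```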


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Ming Guo ◽  
Bingnan Yan ◽  
Tengfei Zhou ◽  
Deng Pan ◽  
Guoli Wang

To obtain high-precision measurement data with vehicle-borne light detection and ranging (LiDAR) scanning (VLS) systems, calibration is necessary before a data acquisition mission. Thus, a novel calibration method based on a homemade target ball is proposed to estimate the system mounting parameters, i.e., the rotational and translational offsets between the LiDAR sensor and the inertial measurement unit (IMU) in orientation and position. First, the spherical point cloud is fitted to a sphere to extract the coordinates of its centre; each scan line on the sphere is fitted to a section of the sphere to calculate the distance ratio from the centre to the two nearest sections, and the attitude and trajectory parameters of the centre are calculated by linear interpolation. Then, the true coordinates of the centre of the sphere are obtained by measuring, with a total station, the coordinates of the reflector placed directly above the target ball. Finally, the three rotation parameters and three translation parameters are calculated by two least-squares adjustments. Comparisons of the point cloud before and after calibration, and of the calibrated point clouds with a point cloud obtained by a terrestrial laser scanner, show that the accuracy improved significantly after calibration. The experiment indicates that the calibration method based on the homemade target ball can effectively improve the accuracy of the point cloud, which can promote VLS development and applications.
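The first step can be illustrated with a minimal linear least-squares sphere fit; this is a generic sketch of sphere-centre extraction, not the authors' code.

```python
# Minimal sketch: extract the centre of the target ball by fitting a sphere to
# its point cloud with a linear least-squares formulation.
import numpy as np

def fit_sphere(points):
    """Returns (centre, radius) of the best-fit sphere through the points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones(len(x))])
    b = x**2 + y**2 + z**2
    (cx, cy, cz, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = np.array([cx, cy, cz])
    radius = np.sqrt(c + centre @ centre)
    return centre, radius
```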


Author(s):  
G. Tran ◽  
D. Nguyen ◽  
M. Milenkovic ◽  
N. Pfeifer

Full-waveform (FWF) LiDAR (light detection and ranging) systems have the advantage of recording the entire backscattered signal of each emitted laser pulse, compared to conventional airborne discrete-return laser scanner systems. FWF systems can provide point clouds that contain extra attributes such as amplitude and echo width. In this study, an FWF dataset collected in 2010 over Eisenstadt, a city in the eastern part of Austria, was used to classify four main classes: buildings, trees, water bodies and ground, by employing a decision tree. Point density, echo ratio, echo width, the normalised digital surface model and point cloud roughness are the main inputs for classification. The accuracy of the final results, in terms of correctness and completeness measures, was assessed by comparing the classified output with a knowledge-based labelling of the points. Completeness and correctness between 90% and 97% were reached, depending on the class. While such results and methods have been presented before, we additionally investigate the transferability of the classification method (features, thresholds …) to another urban FWF lidar point cloud. Our conclusion is that, of the features used, only echo width requires new thresholds. A data-driven adaptation of thresholds is suggested.
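As a hedged illustration, the sketch below trains a shallow decision tree on the named features; the sklearn classifier, the class encoding and `max_depth` are assumptions made for the example, and the paper's actual thresholds are not reproduced.

```python
# Minimal sketch (illustrative, not the authors' rules): a decision tree trained
# on per-point FWF features against a reference labelling.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# features: columns = point density, echo ratio, echo width, nDSM, roughness
# labels:   0 = ground, 1 = building, 2 = tree, 3 = water (reference labelling)
def train_fwf_classifier(features, labels):
    clf = DecisionTreeClassifier(max_depth=5)   # shallow tree keeps the rules readable
    clf.fit(features, labels)
    return clf

# Transferability check suggested by the paper: when moving to a second urban
# FWF point cloud, only the echo-width thresholds would need re-deriving.
```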


Author(s):  
Z. Koppanyi ◽  
C. K. Toth

Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane, and the point clouds captured at different times were used for heading estimation. After stating the problem and specifying the equation of motion used to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated. The first requires a reference model and estimates the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is applied between consecutive point clouds to determine the horizontal translation of the captured aircraft body. Three ICP variants were compared, namely the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP; the 2-DoF 3D ICP was found to provide the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane. The three methods were compared using three test datasets distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and third methods give robust and accurate results at 40 m object distance and at ~12 knots for a small Cessna airplane.
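A minimal sketch of a 2-DoF 3D ICP iteration follows, estimating only the horizontal translation between consecutive scans; the nearest-neighbour correspondence search and the fixed iteration count are illustrative simplifications, not the exact variant tested in the paper.

```python
# Minimal sketch of 2-DoF ICP: rotation and vertical offset are held fixed, and
# only the XY translation between two consecutive scans is estimated.
import numpy as np
from scipy.spatial import cKDTree

def icp_2dof(source, target, iterations=20):
    """Estimate the XY translation aligning `source` to `target` (both Nx3)."""
    t = np.zeros(2)
    tree = cKDTree(target)
    for _ in range(iterations):
        moved = source + np.array([t[0], t[1], 0.0])
        _, idx = tree.query(moved, k=1)           # nearest-neighbour correspondences
        delta = target[idx, :2] - moved[:, :2]    # residuals in the horizontal plane
        t += delta.mean(axis=0)                   # closed-form translation update
    return t
```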


Author(s):  
Van Sinh Nguyen ◽  
Manh Ha Tran ◽  
Ba Cong Nhan

Surface reconstruction of a 3D point cloud converts a cloud of 3D points into a triangular mesh, approximating the discrete point cloud by a continuous or smooth surface depending on the input data and the user's application. In this paper, we propose a complete method for reconstructing an elevation surface from 3D point clouds. The method consists of three steps. In the first step, we triangulate an elevation surface of a 3D point cloud structured in a 3D grid. In the second step, we remove the outward triangles to deal with concave regions on the boundary of the triangular mesh. In the third step, we reconstruct the surface by filling the holes of the triangular mesh. Our method triangulates the surface very quickly and preserves the topology and characteristics of the input surface after reconstruction.
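The first step can be illustrated with a minimal 2.5D triangulation sketch; the use of scipy's Delaunay triangulation over the XY coordinates is an assumption made for the example, and the boundary trimming and hole filling of the later steps are not shown.

```python
# Minimal sketch of the first step: 2.5D triangulation of an elevation point
# cloud by triangulating its XY coordinates.
import numpy as np
from scipy.spatial import Delaunay

def triangulate_elevation(points):
    """points: Nx3 array of an elevation surface (one z per (x, y))."""
    tri = Delaunay(points[:, :2])        # triangulate in the horizontal plane
    return tri.simplices                 # triangle vertex indices into `points`
```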

