DETECTION AND PURGING OF SPECULAR REFLECTIVE AND TRANSPARENT OBJECT INFLUENCES IN 3D RANGE MEASUREMENTS

Author(s):  
R. Koch ◽  
S. May ◽  
A. Nüchter

3D laser scanners are favoured sensors for mapping in mobile service robotics in indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed, since they have a high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the case of specular reflective and transparent objects, e.g., mirrors, windows, and shiny metals, the laser measurements become corrupted. Depending on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, and a measurement of a reflected object. It is important to detect such situations in order to handle these corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in point clouds of a multi-echo laser scanner. Furthermore, it filters point clouds from the influences of such objects and extracts the object properties for further investigation. Reflective objects are identified based on an Iterative Closest Point (ICP) algorithm. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the type of the surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first was made in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror. This paper demonstrates that the detection of specular reflective and transparent objects in 3D is possible for single scans, and that it is more reliable in 3D than in 2D. Nevertheless, collecting the data of multiple scans and post-filtering them as soon as the object has been bypassed should be pursued. This is why future work concentrates on implementing a post-filter module. Besides, the aim is to improve the discrimination between specular reflective and transparent objects.
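
The masking step described above can be illustrated with a minimal sketch (Python, hypothetical function and parameter names): once a candidate reflective or transparent surface has been fitted as a plane, e.g. from ICP-identified surface candidates, each point is labelled as lying on the surface, behind it (a mirrored or transmitted return), or as a valid measurement. This is only an illustration, not the authors' implementation.

```python
import numpy as np

def classify_against_surface(points, plane_point, plane_normal, tol=0.02):
    """Label points relative to a detected reflective/transparent surface plane.

    points:       (N, 3) array of Cartesian scan points
    plane_point:  a point on the fitted surface plane
    plane_normal: unit normal of the plane, oriented towards the scanner
    tol:          thickness tolerance in metres for 'on surface'
    Returns an (N,) array of labels: 'valid', 'surface' or 'behind'.
    """
    normal = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of every point to the surface plane.
    d = (points - plane_point) @ normal
    labels = np.full(len(points), "valid", dtype=object)
    labels[np.abs(d) <= tol] = "surface"   # hits on the object plane
    labels[d < -tol] = "behind"            # mirrored or transmitted returns
    return labels

# Example: mask out corrupted points before forwarding to a mapping module.
pts = np.random.rand(1000, 3) * 10.0
labels = classify_against_surface(pts, plane_point=np.array([5.0, 0.0, 0.0]),
                                  plane_normal=np.array([-1.0, 0.0, 0.0]))
clean = pts[labels == "valid"]
```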

Author(s):  
D. Costantino ◽  
M. G. Angelini ◽  
F. Settembrini

The paper presents a software package dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (a spin-off of Politecnico di Bari), which allows point clouds of several tens of millions of points to be viewed, even on systems that are not of very high performance. The elaborations are carried out on the whole point cloud, while only part of it is displayed in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc.), maths (BLAS, EIGEN), computational geometry (CGAL, Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, Point Cloud Library), advanced data structures (BOOST), etc. ICV incorporates a number of features such as, for example, cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potentialities of the software have been tested by carrying out a photogrammetric survey of Castel del Monte, for which a previous laser scanner survey made from the ground by the same authors was already available. For the aerial photogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, over 800 photos were acquired in just over 15 minutes, with a coverage of not less than 80% at a planned speed of about 90 knots.
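
The strategy of elaborating the full cloud while displaying only a thinned subset can be sketched, purely as an illustration and not as the ICV implementation, with a simple voxel-grid decimation (hypothetical function name):

```python
import numpy as np

def voxel_decimate(points, voxel_size=0.05):
    """Keep one representative point (the centroid) per voxel.

    points:     (N, 3) array of Cartesian coordinates
    voxel_size: edge length of the cubic voxels in metres
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average them.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

# Decimate a large cloud to a coarser version suitable for interactive display.
display_cloud = voxel_decimate(np.random.rand(1_000_000, 3) * 100.0, voxel_size=0.1)
```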


2021 ◽  
Vol 13 (24) ◽  
pp. 13526
Author(s):  
Vicente Bayarri ◽  
Elena Castillo ◽  
Sergio Ripoll ◽  
Miguel A. Sebastián

There is a growing demand for measurements of natural and built elements, with quantifiable accuracy and reliability, within various fields of application. Measurements from a 3D terrestrial laser scanner come as a point cloud, from which different types of surfaces, such as spheres or planes, can be modelled. Due to occlusions and/or the limited field of view, it is seldom possible to survey a complete feature from one location, and information has to be acquired from multiple points of view and later co-registered and geo-referenced to obtain a consistent coordinate system. The aim of this paper is not to match point clouds, but to show a methodology to adjust 3D TLS data, following traditional topo-geodetic methods, by modelling references such as calibrated spheres and checker-boards and generating a 3D trilateration network from them in order to derive accuracy and reliability measurements and a post-adjustment statistical analysis. The method tries to find the function that best fits the measured data, taking into account not only that the measurements made in the field are not perfect, but also that each one of them has a different deviation depending on the adjustment of each reference, so they have to be weighted accordingly.
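
The weighting strategy described above corresponds to a standard weighted least-squares adjustment. The sketch below (hypothetical names; a linearized design matrix is assumed) shows how per-observation deviations from the reference fits can enter the adjustment as weights. It is a generic illustration, not the authors' exact formulation.

```python
import numpy as np

def weighted_least_squares(A, l, sigma):
    """Solve the Gauss-Markov model l = A x + e with weights 1/sigma^2.

    A:     (n, u) linearized design matrix of the trilateration network
    l:     (n,) observed-minus-computed distances
    sigma: (n,) a-priori standard deviation of each observation,
           e.g. taken from the fit of each sphere / checker-board target
    Returns the estimated parameters, their covariance and the residuals.
    """
    W = np.diag(1.0 / sigma**2)
    N = A.T @ W @ A                      # normal matrix
    x_hat = np.linalg.solve(N, A.T @ W @ l)
    v = l - A @ x_hat                    # residuals for statistical testing
    dof = A.shape[0] - A.shape[1]
    s0_sq = (v @ W @ v) / dof            # a-posteriori variance factor
    cov_x = s0_sq * np.linalg.inv(N)
    return x_hat, cov_x, v
```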


2021 ◽  
Vol 15 (3) ◽  
pp. 324-333
Author(s):  
Kenta Ohno ◽  
Hiroaki Date ◽  
Satoshi Kanai

Recently, three-dimensional (3D) laser scanning technology using terrestrial laser scanners (TLS) has been widely used in the fields of plant manufacturing, civil engineering and construction, and surveying. It is desirable for the operator to be able to immediately and intuitively confirm the scanned point cloud in order to reduce unscanned regions and acquire scanned point clouds of high quality. Therefore, in this study, we developed a method to superimpose the point cloud on the actual environment to assist environmental 3D laser measurements, allowing the operator to check the scanned point cloud or unscanned regions in real time using the camera image. The method included extracting correspondences between the camera image and an image generated from the point cloud while considering unscanned regions, estimating the camera position and attitude in the point cloud by sampling correspondence points, and superimposing the scanned point cloud and unscanned regions on the camera image. When the proposed method was applied to two types of environments, namely a boiler room and a university office, the estimated camera position and attitude had a mean position error of approximately 150 mm and a mean attitude error of approximately 1°, while the scanned point cloud and unscanned regions were superimposed on the camera image on a tablet PC at a rate of approximately 1 fps.
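
The superimposition step can be illustrated with a standard pinhole projection: once the camera position and attitude have been estimated in the point-cloud frame, the scanned points are projected into the camera image. The sketch below uses hypothetical names and is a generic illustration, not the authors' implementation.

```python
import numpy as np

def project_points(points_world, R, t, K, image_size):
    """Project scanned 3D points into the camera image with a pinhole model.

    points_world: (N, 3) points in the point-cloud (world) frame
    R, t:         rotation (3x3) and translation (3,) of the world-to-camera
                  transform, i.e. the estimated camera attitude and position
    K:            (3, 3) camera intrinsic matrix
    image_size:   (width, height) in pixels
    Returns pixel coordinates of the points that fall inside the image.
    """
    p_cam = points_world @ R.T + t
    in_front = p_cam[:, 2] > 0                 # keep points in front of the camera
    p_img = p_cam[in_front] @ K.T
    uv = p_img[:, :2] / p_img[:, 2:3]          # perspective division
    w, h = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside]
```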


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than the conventional non-uniform rational B-spline (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economic in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least-squares fitting. Unfortunately, when it comes to assessing the goodness of fit of the surface approximation with a real dataset, only a noisy point cloud can be approximated: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should correspondingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door to further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
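
Because the printed reference surface is entirely known, the goodness of fit can be quantified by the RMSE of the approximation against a dense sampling of that reference. A minimal sketch follows (hypothetical names; the nearest-neighbour distance is only an approximation of the point-to-surface distance and not necessarily the metric used by the authors):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_rmse(fitted_points, reference_points):
    """RMSE of a fitted surface against a densely sampled reference surface.

    fitted_points:    (N, 3) points sampled from the T-spline / NURBS approximation
    reference_points: (M, 3) points sampled from the known (printed) reference surface
    The distance of each fitted point to its nearest reference sample approximates
    the point-to-surface distance when the reference sampling is dense.
    """
    tree = cKDTree(reference_points)
    d, _ = tree.query(fitted_points)
    return np.sqrt(np.mean(d**2))
```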


2021 ◽  
Vol 5 (1) ◽  
pp. 59
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural or engineering structures and high-resolution mapping of terrain. The noise of the observations cannot be assumed to correspond strictly to white noise: besides being heteroscedastic, correlations between observations are likely to appear due to the high scanning rate. Unfortunately, while the variance can sometimes be modeled based on physical or empirical considerations, the correlations are more often neglected. Trustworthy knowledge of them is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-spline surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximated surface in order to estimate parameters such as the normal vector or the control points. In this contribution, we show how the residuals of the surface approximation can be used to derive the correlation structure of the noise of the observations. We estimate the correlation parameters using the Whittle maximum likelihood and use simulations and real data to validate our methodology. Using the least-squares adjustment as a “filter of the geometry” paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
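
As a minimal illustration of the Whittle step, the sketch below fits an AR(1) spectral density to the periodogram of the surface-fit residuals. The AR(1) assumption and the function names are illustrative and not necessarily the correlation model used by the authors.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_ar1(residuals):
    """Estimate an AR(1) correlation parameter from surface-fit residuals
    with the Whittle (frequency-domain) maximum likelihood.

    residuals: 1D array of least-squares residuals along the scan line.
    Returns the estimated AR(1) coefficient phi in (-1, 1).
    """
    x = residuals - residuals.mean()
    n = len(x)
    # Periodogram at the Fourier frequencies, excluding frequency zero.
    freqs = 2.0 * np.pi * np.arange(1, n // 2 + 1) / n
    I = np.abs(np.fft.rfft(x)[1:n // 2 + 1])**2 / (2.0 * np.pi * n)

    def neg_whittle(phi):
        # AR(1) spectral shape; the innovation variance is profiled out.
        f = 1.0 / (1.0 - 2.0 * phi * np.cos(freqs) + phi**2)
        sigma2 = np.mean(I / f)
        return np.sum(np.log(sigma2 * f) + I / (sigma2 * f))

    return minimize_scalar(neg_whittle, bounds=(-0.99, 0.99), method="bounded").x
```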


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2263
Author(s):  
Haileleol Tibebu ◽  
Jamie Roche ◽  
Varuna De Silva ◽  
Ahmet Kondoz

Creating an accurate awareness of the environment using laser scanners is a major challenge in the robotics and automotive industries. LiDAR (light detection and ranging) is a powerful laser scanner that provides a detailed map of the environment. However, efficient and accurate mapping of the environment has yet to be achieved, as most modern environments contain glass, which is invisible to LiDAR. In this paper, a method to effectively detect and localise glass using LiDAR sensors is proposed. This new approach is based on the variation of range measurements between neighbouring point clouds, using a two-step filter. The first filter examines the change in the standard deviation of neighbouring clouds. The second filter uses the change in distance and intensity between neighbouring pulses to refine the results from the first filter and estimate the glass profile width before updating the Cartesian coordinates and range measurement reported by the instrument. Test results demonstrate the detection and localisation of glass and the elimination of errors caused by glass in occupancy grid maps. This novel method detects frameless glass from a long range with an accuracy of 96.2% and does not depend on an intensity peak.
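
The two-step filter can be sketched as follows for a single LiDAR sweep. The thresholds and function names are illustrative placeholders, not the criteria derived in the paper.

```python
import numpy as np

def glass_candidates(ranges, intensities, window=5,
                     std_jump=0.15, range_jump=0.5, intensity_jump=200):
    """Flag beam indices of a single LiDAR sweep that may correspond to glass.

    ranges, intensities: 1D arrays over neighbouring beams of one scan
    window:              number of neighbouring beams used for the local statistics
    """
    n = len(ranges)
    flags = np.zeros(n, dtype=bool)
    half = window // 2
    for i in range(half, n - half - 1):
        left = ranges[i - half:i + 1]
        right = ranges[i:i + half + 1]
        # Step 1: abrupt change in the local standard deviation of the range.
        if abs(np.std(right) - np.std(left)) < std_jump:
            continue
        # Step 2: refine with the change in range and intensity between neighbours.
        if (abs(ranges[i + 1] - ranges[i]) > range_jump and
                abs(intensities[i + 1] - intensities[i]) > intensity_jump):
            flags[i] = True
    return flags
```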


Author(s):  
Guillermo Oliver ◽  
Pablo Gil ◽  
Jose F. Gomez ◽  
Fernando Torres

In this paper, we present a robotic workcell for task automation in footwear manufacturing, covering sole digitization, glue dispensing, and sole manipulation from different places within the factory plant. We aim to make progress towards shoe industry 4.0. To achieve this, we have implemented a novel sole grasping method, compatible with soles of different shapes, sizes, and materials, that exploits the particular characteristics of these objects. Our proposal works well both with low-density point clouds from a single RGBD camera and with dense point clouds obtained from a laser scanner digitizer. In both cases, the method computes antipodal grasping points from visual data and does not require previous recognition of the sole. It relies on sole contour extraction using concave hulls and on measuring the curvature of contour areas. Our method was tested both in a simulated environment and in real manufacturing conditions at the INESCOP facilities, processing 20 soles with different sizes and characteristics. Grasps were performed in two different configurations, obtaining an average success rate of 97.5% for real grasps of soles without a heel made of materials of low or medium flexibility. In both cases, the grasping method was tested without tactile control throughout the task.
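
The antipodal-point idea can be sketched on a 2D sole contour: pairs of contour points whose outward normals are nearly opposite and aligned with the line joining them are candidate grasps. The following is an illustrative sketch with hypothetical names, not the authors' implementation.

```python
import numpy as np

def antipodal_pairs(contour, angle_tol_deg=10.0):
    """Find candidate antipodal grasp pairs on a closed 2D sole contour.

    contour: (N, 2) ordered points of the (concave-hull) contour,
             assumed counter-clockwise so the rotated tangents point outwards.
    Returns index pairs (i, j) whose outward normals are nearly opposite and
    whose connecting line is nearly aligned with both normals.
    """
    # Tangents by central differences on the closed contour, then a -90° rotation.
    tangents = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)

    cos_tol = np.cos(np.radians(angle_tol_deg))
    pairs = []
    for i in range(len(contour)):
        for j in range(i + 1, len(contour)):
            if normals[i] @ normals[j] > -cos_tol:
                continue                      # normals not close to opposite
            axis = contour[j] - contour[i]
            axis /= np.linalg.norm(axis)
            # The grasp axis must lie close to both contact normals.
            if abs(axis @ normals[i]) > cos_tol and abs(axis @ normals[j]) > cos_tol:
                pairs.append((i, j))
    return pairs
```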


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how a structure reacts to any disturbance, while also aiding in its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the main ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the output of a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
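
As an illustration of the bilateral filtering applied to the sensor data, the sketch below filters a depth image while preserving edges. The parameters and function name are illustrative assumptions, not those used in the study.

```python
import numpy as np

def bilateral_filter_depth(depth, radius=3, sigma_s=2.0, sigma_r=0.01):
    """Edge-preserving bilateral filter on a depth image (float values in metres).

    depth:   (H, W) array of depth values
    radius:  half-size of the square filter window in pixels
    sigma_s: spatial standard deviation (pixels)
    sigma_r: range standard deviation (metres); small values preserve edges
    """
    H, W = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(depth, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            weights = spatial * np.exp(-(patch - depth[i, j])**2 / (2.0 * sigma_r**2))
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```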


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4227
Author(s):  
Nicolás Jacob-Loyola ◽  
Felipe Muñoz-La Rivera ◽  
Rodrigo F. Herrera ◽  
Edison Atencio

The physical progress of a construction project is monitored by an inspector responsible for verifying and backing up progress information, usually through site photography. Progress monitoring has improved thanks to advances in image acquisition, computer vision, and the development of unmanned aerial vehicles (UAVs). However, no comprehensive and simple methodology exists to guide practitioners and facilitate the use of these methods. This research provides recommendations for the periodic recording of the physical progress of a construction site through the manual operation of UAVs and the use of point clouds obtained with photogrammetric techniques. The scheduled progress is then compared with the actual progress in a 4D BIM environment. The methodology was applied to the construction of a reinforced concrete residential building. The results showed that the methodology is effective for UAV operation on the work site and that the photogrammetric visual records are useful for monitoring physical progress and communicating the work performed to the project stakeholders.


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3347
Author(s):  
Zhishuang Yang ◽  
Bo Tan ◽  
Huikun Pei ◽  
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing. It is quite a challenge when facing complex observed scenes and irregular point distributions. In order to reduce the computational burden of point-based classification methods and improve classification accuracy, we present a segmentation and multi-scale convolutional neural network-based classification method. Firstly, a three-step region-growing segmentation method was proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method was used to transform the 3D neighborhood features of a point into a 2D image. Finally, the feature images were treated as the input of a multi-scale convolutional neural network for training and testing. In order to compare performance with existing approaches, we evaluated our framework using the International Society for Photogrammetry and Remote Sensing Working Group II/4 (ISPRS WG II/4) 3D labeling benchmark tests. The experimental results, with 84.9% overall accuracy and an average F1 score of 69.2%, show satisfactory performance compared with all participating approaches analyzed.
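
The feature-image idea can be sketched as follows: the 3D neighbourhood of a point is rasterized into a small 2D grid whose pixel values carry a per-point attribute (height is used here as an assumed example). This is an illustrative sketch, not the authors' exact feature-image definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def feature_image(points, heights, center, size=16, radius=5.0):
    """Turn the 3D neighbourhood of an ALS point into a 2D feature image.

    points:  (N, 3) ALS point coordinates
    heights: (N,) per-point attribute used as the pixel value (e.g. height above ground)
    center:  (3,) the point (or segment centroid) being classified
    size:    image width/height in pixels
    radius:  neighbourhood radius in metres
    Each pixel stores the maximum attribute of the points falling into that cell.
    """
    tree = cKDTree(points[:, :2])
    idx = tree.query_ball_point(center[:2], radius)
    img = np.zeros((size, size))
    for k in idx:
        # Map the horizontal offset from the centre point to a pixel cell.
        u = int((points[k, 0] - center[0] + radius) / (2 * radius) * (size - 1))
        v = int((points[k, 1] - center[1] + radius) / (2 * radius) * (size - 1))
        img[v, u] = max(img[v, u], heights[k])
    return img
```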

