Automated Accuracy Assessment of a Mobile Mapping System with Lightweight Laser Scanning and MEMS Sensors

2021 ◽  
Vol 11 (3) ◽  
pp. 1007
Author(s):  
Kaleel Al-Durgham ◽  
Derek D. Lichti ◽  
Eunju Kwak ◽  
Ryan Dixon

The accuracy assessment of mobile mapping system (MMS) outputs usually relies on manual labor to inspect the quality of a vast amount of collected geospatial data. This paper presents an automated framework for the accuracy assessment and quality inspection of point cloud data collected by MMSs operating with lightweight laser scanners and consumer-grade microelectromechanical systems (MEMS) sensors. A new, large-scale test facility has been established in a challenging navigation environment (a downtown area) to support the analyses conducted in this research. MMS point cloud data are divided into short time slices for comparison with the higher-accuracy terrestrial laser scanner (TLS) point cloud of the test facility. MMS data quality is quantified by the results of registering the point cloud of each slice with the TLS datasets. Experiments on multiple land-vehicle MMS point cloud datasets using a lightweight laser scanner and three different MEMS devices are presented to demonstrate the effectiveness of the proposed method. The mean accuracy of a consumer-grade MEMS device (<$100) was found to be 1.13 ± 0.47 m. The mean accuracy of two commercial MEMS devices (>$100) was in the range of 0.48 ± 0.23 m to 0.85 ± 0.52 m. The method presented herein can be straightforwardly implemented and adopted for the accuracy assessment of other MMS types, such as unmanned aerial vehicles (UAVs).
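The slice-wise registration check is straightforward to prototype. Below is a minimal sketch, assuming the Open3D library and an already georeferenced TLS reference cloud; the slice length, minimum point count, and correspondence distance are illustrative assumptions rather than the authors' parameters, and a point-to-point ICP stands in for whatever registration variant the paper uses.

```python
# Hypothetical sketch of the slice-wise accuracy check described above.
import numpy as np
import open3d as o3d

def assess_slice_accuracy(mms_points, mms_times, tls_cloud, slice_sec=5.0):
    """Register each short time slice of MMS points to the TLS reference
    and report the translation magnitude as a per-slice accuracy proxy."""
    errors = []
    t0, t1 = mms_times.min(), mms_times.max()
    for start in np.arange(t0, t1, slice_sec):
        mask = (mms_times >= start) & (mms_times < start + slice_sec)
        if mask.sum() < 100:          # skip sparse slices (assumed cut-off)
            continue
        slice_pcd = o3d.geometry.PointCloud(
            o3d.utility.Vector3dVector(mms_points[mask]))
        result = o3d.pipelines.registration.registration_icp(
            slice_pcd, tls_cloud, max_correspondence_distance=2.0,
            estimation_method=o3d.pipelines.registration
                .TransformationEstimationPointToPoint())
        # magnitude of the correcting translation = apparent slice error
        errors.append(np.linalg.norm(result.transformation[:3, 3]))
    return np.mean(errors), np.std(errors)   # compare as mean ± std
```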

Author(s):  
Y. Li ◽  
M. Sakamoto ◽  
T. Shinohara ◽  
T. Satoh

In recent years, extensive research has been conducted to automatically generate high-accuracy and high-precision road orthophotos using images and laser point cloud data acquired from a mobile mapping system (MMS). However, it is necessary to mask out non-road objects such as vehicles, bicycles, pedestrians, and their shadows in MMS images in order to eliminate erroneous textures from the road orthophoto. Hence, we propose a novel Faster R-CNN-based model for automatically and accurately detecting the regions of vehicles and their shadows in MMS images. The experimental results show that the maximum recall of the proposed model was high, at 0.963 (intersection-over-union > 0.7), and that the model could identify the regions of vehicles and their shadows accurately and robustly in MMS images, even when they contain varied vehicles, different shadow directions, and partial occlusions. Furthermore, it was confirmed that the quality of a road orthophoto generated using vehicle and shadow masks was significantly improved compared with orthophotos generated using no masks or vehicle masks only.
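As a rough illustration of the masking step, the sketch below uses a torchvision Faster R-CNN with two foreground classes standing in for the authors' vehicle-and-shadow detector; the class count, score threshold, and zero-fill masking are assumptions for demonstration only.

```python
# Minimal sketch: detect vehicle/shadow boxes and blank them out so a
# downstream orthophoto texturing stage ignores those pixels.
import torch
import torchvision

# assumed: background + vehicle + shadow -> 3 classes, fine-tuned weights
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=3)
model.eval()

@torch.no_grad()
def mask_detections(image, score_thresh=0.7):
    """image: float tensor [3, H, W]; returns a copy with detected
    vehicle/shadow regions zeroed out."""
    pred = model([image])[0]          # dict with boxes, labels, scores
    masked = image.clone()
    for box, score in zip(pred["boxes"], pred["scores"]):
        if score >= score_thresh:
            x0, y0, x1, y1 = box.int().tolist()
            masked[:, y0:y1, x0:x1] = 0.0   # exclude region from texturing
    return masked
```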


Author(s):  
M. Kuschnerus ◽  
D. Schröder ◽  
R. Lindenbergh

Abstract. The advancement of permanently measuring laser scanners has opened up a wide range of new applications, but has also led to the need for more advanced approaches to error quantification and correction. Time-dependent and systematic error influences may only become visible in data from quasi-permanent measurements. During a scan experiment in February/March 2020, point clouds were acquired every thirty minutes with a Riegl VZ-2000 laser scanner, and various other sensors (inclination sensors, a weather station, and GNSS sensors) were used to survey the environment of the laser scanner and the study site. Using this measurement configuration, our aim is to identify apparent displacements in multi-temporal scans due to systematic error influences and to investigate data quality for the assessment of geomorphic changes in coastal regions. We analyse scan data collected around two storm events, Ciara (09/02/2020) and Yulia (22/02/2020), and derive the impact of heavy storms on the point cloud data through comparison with the collected auxiliary data. To investigate the systematic residuals in data acquired by permanent laser scanning, we extracted several stable flat surfaces from the point cloud data. From a plane fitted through the respective surface in each scan, we estimated the mean displacement of each plane with the respective root mean square errors. Inclination sensors, internal and external, recorded pitch and roll values during each scan. We derived a mean inclination per scan (in pitch and roll) and the standard deviation from the mean as a measure of the stability of the laser scanner during each scan. Evaluation of the data recorded by the weather station, together with knowledge of the movement behaviour, allows us to derive possible causes of displacements and/or noise, as well as correction models. The results are compared to independent measurements from GNSS sensors for validation. For wind speeds of 10 m/s and higher, movements of the scanner considerably increase the noise level in the point cloud data.
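The plane-based displacement estimate can be sketched in a few lines of NumPy. The following is a minimal example, assuming each stable surface is available as an N×3 array of points per scan; it fits a plane by SVD and reports the signed shift of each epoch's centroid along the reference normal, together with the RMSE of those shifts.

```python
# Sketch of the plane-based displacement check: fit a plane to a stable
# surface patch in each epoch and measure the centroid shift along the
# reference normal. Variable names are illustrative.
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 array; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]            # smallest singular vector = normal

def displacement_series(reference_patch, epoch_patches):
    c0, n0 = fit_plane(reference_patch)
    shifts = []
    for patch in epoch_patches:
        c, _ = fit_plane(patch)
        shifts.append(np.dot(c - c0, n0))   # signed offset along normal
    shifts = np.array(shifts)
    rmse = np.sqrt(np.mean(shifts ** 2))
    return shifts.mean(), rmse
```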


Author(s):  
Torben Peters ◽  
Claus Brenner

Abstract. We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 renderings of point cloud–image pairs. The dataset was recorded using our mobile mapping system, with capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we are even able to predict realistic-looking views for different seasons from identical input point clouds.
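As a loose sketch of the conditioning idea (not the authors' architecture), a pix2pix-style generator in PyTorch can take a rasterized point cloud plus a broadcast day-of-year channel; all layer sizes here are illustrative assumptions.

```python
# Illustrative conditioning sketch: concatenate a normalised recording-date
# plane to the point-cloud raster before the encoder-decoder.
import torch
import torch.nn as nn

class DateConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # input: point-cloud raster (3 ch) + broadcast day-of-year (1 ch)
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, pc_raster, day_of_year):
        # broadcast the normalised date to a full image plane
        b, _, h, w = pc_raster.shape
        date_plane = (day_of_year / 365.0).view(b, 1, 1, 1).expand(b, 1, h, w)
        return self.net(torch.cat([pc_raster, date_plane], dim=1))
```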


2019 ◽  
Vol 11 (23) ◽  
pp. 2737 ◽  
Author(s):  
Minsu Kim ◽  
Seonkyung Park ◽  
Jeffrey Danielson ◽  
Jeffrey Irwin ◽  
Gregory Stensaas ◽  
...  

The traditional practice for assessing accuracy in lidar data involves calculating RMSEz (the root mean square error of the vertical component). Accuracy assessment of lidar point clouds in full 3D (three dimensions) is not routinely performed. The main challenge in assessing accuracy in full 3D is how to identify the conjugate point of a ground-surveyed checkpoint in the lidar point cloud with the smallest possible uncertainty value. The relatively coarse point spacing of airborne lidar data makes it challenging to determine a conjugate point accurately. As a result, a substantial unwanted error is added to the inherent positional uncertainty of the lidar data. Unless this additional error is kept small enough, the 3D accuracy assessment result will not properly represent the inherent uncertainty. We call this added error "external uncertainty," which is associated with conjugate point identification. This research developed a general external uncertainty model based on three-plane intersections that accounts for several factors (sensor precision, feature dimension, and point density). This method can be used for lidar point cloud data spanning a wide range of sensor qualities, point densities, and sizes of the features of interest. The external uncertainty model was derived as a semi-analytical function that takes the number of points on a plane as an input. It is a normalized general function that can be scaled by the smooth surface precision (SSP) of a lidar system. This general uncertainty model provides a quantitative guideline on the required conditions for the conjugate point based on the geometric features. Applications of the external uncertainty model were demonstrated using various lidar point cloud data from the U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) library to determine the valid conditions for a conjugate point derived from three-plane modeling.
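The three-plane intersection at the heart of the conjugate-point definition reduces to a 3×3 linear system. A minimal NumPy sketch, with planes given as unit normals and offsets (n·x = d), might look as follows; the degeneracy threshold and example planes are assumptions.

```python
# Illustrative sketch: intersect three fitted planes to derive a
# conjugate point for a ground-surveyed checkpoint.
import numpy as np

def three_plane_point(normals, offsets):
    """Intersect three planes n_i . x = d_i; raises if nearly parallel."""
    N = np.asarray(normals, dtype=float)       # 3x3 matrix of unit normals
    d = np.asarray(offsets, dtype=float)
    if abs(np.linalg.det(N)) < 1e-6:           # near-degenerate geometry
        raise ValueError("planes are close to parallel")
    return np.linalg.solve(N, d)

# e.g. three mutually orthogonal planes of a building corner:
corner = three_plane_point(
    normals=[[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    offsets=[10.0, 2.0, 5.0])                  # -> array([ 2.,  5., 10.])
```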


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3908 ◽  
Author(s):  
Pavan Kumar B. N. ◽  
Ashok Kumar Patil ◽  
Chethana B. ◽  
Young Ho Chai

Acquiring 3D point cloud data (PCD) with a laser scanner and aligning it with a video frame is a new approach that is efficient for retrofitting comprehensive objects in heavy pipeline industrial facilities. This work contributes a generic framework for interactive retrofitting in a virtual environment and an unmanned aerial vehicle (UAV)-based sensory setup design to acquire PCD. The framework adopts a 4-in-1 alignment that uses a point cloud registration algorithm to align the pre-processed PCD with the partial PCD, and a frame-by-frame registration method for video alignment. This work also proposes a virtual interactive retrofitting framework that uses pre-defined 3D computer-aided design (CAD) models with a customized graphical user interface (GUI) and visualization of the 4-in-1 aligned video scene from a UAV camera in a desktop environment. Trials were carried out using the proposed framework in a real environment at a water treatment facility. A qualitative and quantitative study was conducted with participants to evaluate the performance of the proposed framework, using an appropriate questionnaire and a retrofitting task-oriented experiment. Overall, it was found that the proposed framework could be a solution for interactive 3D CAD model retrofitting on a combination of PCD acquired by the UAV sensory setup and real-time video from its camera in heavy industrial facilities.
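One ingredient of such a 4-in-1 alignment is projecting the acquired PCD into each video frame to check the overlay. A hedged NumPy sketch using a standard pinhole model is shown below; the camera pose and intrinsics are assumed inputs, not part of the authors' published pipeline.

```python
# Sketch: project world-frame PCD into a UAV video frame with a pinhole
# camera model so the PCD/video overlay can be inspected frame by frame.
import numpy as np

def project_pcd(points_world, R, t, K):
    """Project Nx3 world points into pixel coordinates.
    R, t: camera pose (world -> camera); K: 3x3 intrinsic matrix."""
    cam = R @ points_world.T + t.reshape(3, 1)      # camera coordinates, 3xN
    cam = cam[:, cam[2] > 0]                        # keep points in front
    pix = K @ (cam / cam[2])                        # perspective division
    return pix[:2].T                                # (u, v) per point
```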


2014 ◽  
Vol 644-650 ◽  
pp. 2656-2660
Author(s):  
Yao Cheng ◽  
Guang Xue Chen ◽  
Chen Chen ◽  
Jiang Ping Yuan

In the process of 3D printing, stereo image acquisition is the basis and premise of 3D modeling, so it is important to study acquisition methods and techniques. This paper studies the acquisition of point cloud data from a hand model using the handheld REVscan laser scanner and its processing with the reverse-engineering software Geomagic Studio. Using the captured object model, we can greatly improve efficiency and accuracy, as well as shorten the 3D printing cycle. This helps achieve the transmission of 3D printing data without geographical restrictions, truly realizing the concept of "What You See Is What You Get".

