Geometric Quality Indicators for Scanned Point Clouds

2012 ◽  
Vol 523-524 ◽  
pp. 901-906
Author(s):  
Hiromasa Suzuki ◽  
Yutaka Ohtake ◽  
Shusaku Shibata ◽  
Takashi Michikawa

We propose geometric quality indicators for evaluating the quality of point clouds (sets of scanned points). These indicators represent aspects of quality often considered in practical scanning procedures, such as cloud thickness and cloud density. We defined the indicators mathematically and developed software to compute them, then conducted experiments to evaluate the indicators for point clouds obtained by scanning the same samples with different types of surface scanners and scanning procedures. The results showed that the indicators are capable of highlighting various aspects of point cloud quality.
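A density indicator of the kind described could be sketched as follows; this is a minimal illustration assuming a k-nearest-neighbour definition of local density (the abstract does not give the authors' exact formulation, so the function and its parameters are hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density(points, k=8):
    """Per-point density indicator: inverse of the mean distance to the
    k nearest neighbours (a hypothetical definition for illustration)."""
    tree = cKDTree(points)
    # query k+1 neighbours because the nearest neighbour is the point itself
    dists, _ = tree.query(points, k=k + 1)
    return 1.0 / dists[:, 1:].mean(axis=1)

# A tight cluster should score a higher density than a diffuse one.
rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.01, size=(200, 3))
sparse = rng.normal(5.0, 0.5, size=(200, 3))
d = local_density(np.vstack([dense, sparse]))
print(d[:200].mean() > d[200:].mean())
```

A thickness indicator could be built analogously from point-to-fitted-surface distances.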

Author(s):  
J. Markiewicz ◽  
D. Zawieska ◽  
P. Podlasiak

This paper presents an analysis of source photogrammetric data in relation to the examination of verticality in a monumental tower. In the proposed data processing methodology, the geometric quality of the point clouds relating to the monumental tower of the castle in Iłża was established by using terrestrial laser scanning (Z+F 5006h, Leica C10), terrestrial photographs and digital images sourced via unmanned aerial vehicles (UAV) (Leica Aibot X6 Hexacopter). Tests were performed using original software developed by the authors, which allows for the automation of 3D point cloud processing. The software also facilitates the verification of the verticality of the tower and the assessment of the quality of the utilized data.
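A verticality check of this kind could, in principle, look like the sketch below: slice the tower point cloud horizontally, fit a line through the slice centroids, and report its angle from the vertical. The function, the slicing scheme and the synthetic data are assumptions for illustration; the authors' own software is not described in the abstract.

```python
import numpy as np

def tower_tilt_deg(points, n_slices=10):
    """Estimate a tower's inclination: fit a line through the centroids
    of horizontal slices and measure its angle from the vertical axis."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_slices + 1)
    centroids = np.array([points[(z >= lo) & (z < hi)].mean(axis=0)
                          for lo, hi in zip(edges[:-1], edges[1:])])
    d = centroids - centroids.mean(axis=0)
    axis = np.linalg.svd(d, full_matrices=False)[2][0]  # principal direction
    cos_tilt = abs(axis[2]) / np.linalg.norm(axis)
    return float(np.degrees(np.arccos(np.clip(cos_tilt, 0.0, 1.0))))

# Synthetic tower: a cylinder whose axis leans 2 cm per metre of height.
zs = np.linspace(0.0, 10.0, 200)
theta = np.linspace(0.0, 20.0 * np.pi, 200)
pts = np.column_stack([0.02 * zs + 0.5 * np.cos(theta),
                       0.5 * np.sin(theta), zs])
print(round(tower_tilt_deg(pts), 2))  # close to atan(0.02), about 1.15 degrees
```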


Author(s):  
V. Walter ◽  
M. Kölle ◽  
D. Collmar ◽  
Y. Zhang

Abstract. In this article, we present a two-level approach for the crowd-based collection of vehicles from 3D point clouds. In the first level, crowdworkers are asked to identify the coarse positions of vehicles in 2D rasterized shadings derived from the 3D point cloud. In order to increase the quality of the results, we utilize the wisdom-of-the-crowd principle, which says that averaging multiple estimates of a group of individuals provides an outcome that is often better than most of the underlying estimates, or even better than the best estimate. For this, each crowd job is duplicated 10 times and the multiple results are integrated with the DBSCAN clustering algorithm. In the second level, we use the integrated results as pre-information for extracting small subsets of the 3D point cloud that are then presented to crowdworkers for approximating the included vehicle by means of a Minimum Bounding Box (MBB). Again, the crowd jobs are duplicated 10 times and an average bounding box is calculated from the individual bounding boxes. We discuss the quality of the results of both steps and show that the wisdom of the crowd significantly improves the completeness as well as the geometric quality. With a tenfold acquisition, we achieved a completeness of 93.3 percent and a geometric deviation of less than 1 m for 95 percent of the collected vehicles.
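The first-level integration step, clustering the tenfold-duplicated crowd annotations with DBSCAN and averaging each cluster to one position, might be sketched with scikit-learn as below; the eps/min_samples values and the click coordinates are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical 2D click positions from 10 duplicated crowd jobs:
# two vehicles, each marked 10 times with some scatter, plus one outlier.
rng = np.random.default_rng(1)
veh_a = rng.normal([10.0, 20.0], 0.3, size=(10, 2))
veh_b = rng.normal([40.0, 5.0], 0.3, size=(10, 2))
outlier = np.array([[100.0, 100.0]])
clicks = np.vstack([veh_a, veh_b, outlier])

# Cluster the redundant clicks; eps and min_samples are toy values.
labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(clicks)

# Average each cluster to one integrated vehicle position (label -1 = noise).
centers = [clicks[labels == l].mean(axis=0) for l in set(labels) if l != -1]
print(len(centers))  # two integrated vehicle positions; the outlier is rejected
```

The second-level averaging of the 10 bounding boxes per vehicle follows the same pattern, averaging box corners instead of click positions.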


2021 ◽  
Vol 13 (8) ◽  
pp. 1442
Author(s):  
Kaisen Ma ◽  
Yujiu Xiong ◽  
Fugen Jiang ◽  
Song Chen ◽  
Hua Sun

Detecting and segmenting individual trees in forest ecosystems with high-density and overlapping crowns often results in bias due to the limitations of the commonly used canopy height model (CHM). To address such limitations, this paper proposes a new method to segment individual trees and extract tree structural parameters. The method involves the following key steps: (1) unmanned aerial vehicle (UAV)-scanned, high-density laser point clouds were classified, and a vegetation point cloud density model (VPCDM) was established by analyzing the spatial density distribution of the classified vegetation point cloud in the plane projection; and (2) a local maximum algorithm with an optimal window size was used to detect tree seed points and to extract tree heights, and an improved watershed algorithm was used to extract the tree crowns. The proposed method was tested at three sites with different canopy coverage rates in a pine-dominated forest in northern China. The results showed that (1) the kappa coefficient between the proposed VPCDM and the commonly used CHM was 0.79, indicating that the performance of the VPCDM is comparable to that of the CHM; (2) the local maximum algorithm with the optimal window size could be used to segment individual trees and achieve optimal single-tree segmentation accuracy and detection rates; and (3) compared with the original watershed algorithm, the improved watershed algorithm significantly increased the accuracy of canopy area extraction. In conclusion, the proposed VPCDM may provide an innovative data segmentation model for light detection and ranging (LiDAR)-based high-density point clouds and enhance the accuracy of parameter extraction.
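The seed detection in step (2) can be sketched with a sliding-window maximum filter over a rasterised density model; the raster, window size and threshold below are toy values (the paper optimises the window size, and the subsequent improved watershed step is not shown here).

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_seeds(density_raster, window=5, min_value=1.0):
    """Detect local maxima (candidate tree tops) in a rasterised density
    model: a pixel is a seed if it equals the maximum of its window and
    exceeds a minimum density threshold."""
    local_max = maximum_filter(density_raster, size=window)
    seeds = (density_raster == local_max) & (density_raster >= min_value)
    return np.argwhere(seeds)

# Toy raster with two smooth, well-separated "crowns".
x, y = np.meshgrid(np.arange(40), np.arange(40))
raster = (3.0 * np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 20.0)
          + 2.5 * np.exp(-((x - 28) ** 2 + (y - 30) ** 2) / 20.0))
seeds = detect_seeds(raster, window=7, min_value=1.0)
print(len(seeds))  # one seed per crown
```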


Author(s):  
Y. Cao ◽  
M. Previtali ◽  
M. Scaioni

Abstract. In the wake of the success of Deep Learning Networks (DLN) for image recognition, object detection, shape classification and semantic segmentation, this approach has proven to be both a major breakthrough and an excellent tool in point cloud classification. However, an understanding of how different types of DLN achieve their results is still lacking. In several studies the output of the segmentation/classification process is compared against benchmarks, but the network is treated as a “black-box” and intermediate steps are not deeply analysed. Specifically, the following questions are discussed here: (1) What exactly did a DLN learn from a point cloud? (2) On the basis of what information does a DLN make its decisions? To conduct a quantitative investigation of DLN applied to point clouds, this paper investigates the visual interpretability of the decision-making process. Firstly, we introduce a reconstruction network able to reconstruct and visualise the learned features, in order to address question (1). Then, we propose 3DCAM to indicate the discriminative point cloud regions used by these networks to identify a given category, thus dealing with question (2). By answering these two questions, the paper offers some initial solutions for better understanding the application of DLN to point clouds.
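The class-activation-map idea behind 3DCAM can be illustrated in a few lines of numpy; the feature sizes and weights below are toy numbers, not the paper's network, and the per-point weighted sum is the generic CAM computation rather than the authors' exact method.

```python
import numpy as np

# F: per-point feature activations (N points x C channels) from the layer
# before global pooling; w: classifier weights for one category.
rng = np.random.default_rng(2)
F = rng.random((5, 4))              # 5 points, 4 feature channels
w = np.array([1.0, 0.0, 0.0, 0.0])  # category attends to channel 0 only

# Per-point class activation: channel activations weighted by the
# category's classifier weights. High values mark discriminative points.
cam = F @ w
most_discriminative = int(np.argmax(cam))
print(most_discriminative == int(np.argmax(F[:, 0])))
```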


2015 ◽  
Vol 764-765 ◽  
pp. 1375-1379 ◽  
Author(s):  
Cheng Tiao Hsieh

This paper presents a simple approach utilizing a Kinect-based scanner to create models suitable for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, which usually requires complicated computational processes before it is ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment and surface reconstruction. Each process may require several functions and algorithms to accomplish its specific tasks. For instance, the Iterative Closest Point (ICP) algorithm is frequently used for 3D registration, and the bilateral filter is often used for noise point filtering. This paper attempts to develop a simple Kinect-based scanner and a specific modeling approach that avoids the above complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. Sets of organized point clouds can be generated by the scanner, and these organized point clouds can be aligned precisely by a simple transformation matrix instead of ICP. The surface quality of raw point clouds captured by the Kinect is usually rough. To address this drawback, this paper introduces a solution to obtain a smooth surface model. In addition, these processes have been efficiently implemented using free open-source libraries: VTK, Point Cloud Library and OpenNI.
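The turntable-alignment idea, replacing ICP with a known transformation matrix, can be sketched as follows; the rotation axis, angle convention and the sample point are assumptions for illustration, not details from the paper.

```python
import numpy as np

def turntable_transform(angle_deg):
    """Homogeneous transform for a scan taken after rotating the table by
    angle_deg about the vertical (z) axis. With a calibrated rotation
    table this known matrix replaces an ICP registration step."""
    a = np.radians(angle_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return T

# A scan captured at the 90-degree table position is mapped back into the
# reference frame by rotating it by -90 degrees (under this convention).
scan = np.array([[1.0, 0.0, 0.5, 1.0]])  # one homogeneous point
aligned = (turntable_transform(-90.0) @ scan.T).T
print(np.round(aligned, 6))
```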


2020 ◽  
Vol 6 (9) ◽  
pp. 94
Author(s):  
Magda Alexandra Trujillo-Jiménez ◽  
Pablo Navarro ◽  
Bruno Pazos ◽  
Leonardo Morales ◽  
Virginia Ramallo ◽  
...  

Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstructions or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow and incapable of coping with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained Neural Network architecture. Body2vec is capable of performing human body point cloud reconstruction from videos taken on hand-held devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step, which avoids the generation of spurious points that is usual in photogrammetric reconstruction. A group of 60 persons was filmed with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. Finally, as a gold standard we used anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the resulting point cloud as compared with the LiDAR-based mesh, and of the anthropometric measurements as compared with the actual hip and waist perimeters measured by the anthropometrists. In both contexts, the resulting quality of body2vec is equivalent to the LiDAR reconstruction.
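The effect of the background-removal step can be sketched in miniature: zero out non-body pixels of each video frame before photogrammetric matching, so no feature points (and hence no spurious 3D points) arise from the background. The mask here is hand-made; body2vec would predict it with its trained segmentation network.

```python
import numpy as np

frame = np.full((4, 4, 3), 200, dtype=np.uint8)  # toy RGB frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                            # hypothetical "body" region

# Apply the binary mask per channel; background pixels become zero and
# contribute no features to the downstream photogrammetric reconstruction.
masked = frame * mask[..., None]
print(int(masked.sum()))  # only the 4 body pixels (x 3 channels x 200) remain
```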


2021 ◽  
Vol 293 ◽  
pp. 02031
Author(s):  
Guocheng Qin ◽  
Ling Wang ◽  
YiMei Hou ◽  
HaoRan Gui ◽  
YingHao Jian

The digital twin model of a factory is the basis for the construction of a digital factory, and a factory's engineering systems are complex. The traditional BIM model is not completely consistent with the actual positions of the corresponding components, and it is difficult to use it directly as the digital twin model. In response to this situation, relying on a factory project, the point cloud is used to eliminate the positional deviation between the BIM model and the factory during the construction phase, to improve the efficiency, accuracy and reliability of model adjustment and optimization, and to realize the conversion from the BIM model to the digital twin model. A novel algorithm is developed to quickly detect and evaluate the construction quality of the local structure of the factory, so that the initial deformation data of the structure can be input into the corresponding model and fed back to the construction party for improvement. The results show that the digital twin model, which is highly consistent with the actual locations of the factory components, not only lays a solid foundation for the construction of a digital factory, but also further deepens the integration and application of BIM and point clouds.
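A basic form of the scan-vs-model deviation check could look like this: for each as-designed model point, measure the distance to the nearest scanned point. This is a generic nearest-neighbour stand-in, since the abstract does not specify the paper's algorithm, and the wall geometry is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def component_deviation(scan_points, model_points):
    """For each as-designed model point, the distance to the nearest
    as-built scanned point (a simple cloud-to-model deviation measure)."""
    tree = cKDTree(scan_points)
    dists, _ = tree.query(model_points)
    return dists

# Hypothetical wall component: the as-built scan is shifted 5 cm along x.
model = np.array([[x, 0.0, z] for x in np.linspace(0, 2, 21)
                  for z in np.linspace(0, 3, 31)])
scan = model + np.array([0.05, 0.0, 0.0])
dev = component_deviation(scan, model)
print(round(float(dev.max()), 3))  # every model point is 5 cm from the scan
```

Points whose deviation exceeds a tolerance would then be flagged and fed back for model adjustment or construction improvement.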


Author(s):  
A. Murtiyos ◽  
P. Grussenmeyer ◽  
D. Suwardhi ◽  
W. A. Fadilah ◽  
H. A. Permana ◽  
...  

<p><strong>Abstract.</strong> 3D recording is an important procedure in the conservation of heritage sites. In the past decade, a myriad of 3D sensors with different advantages and disadvantages has appeared on the market. Most notably, laser scanning and photogrammetry have become some of the most used techniques in 3D recording. The integration of these different sensors is an interesting topic, and one which is discussed in this paper. Integration combines two or more datasets with different characteristics to produce a 3D model with the best results. The discussion in this study includes the acquisition, processing, and analysis of the geometric quality of the results of the 3D recording process; starting with the acquisition method, the registration and georeferencing process, up to the integration of the laser scanning and photogrammetry 3D point clouds. The final result of the integration of the two point clouds is a 3D point cloud model that forms a single entity. Some detailed parts of the object of interest draw both geometric and textural information from photogrammetry, while laser scanning provides a point cloud depicting the overall overview of the building. The object used as our case study is Sari Temple, located in the Special Region of Yogyakarta, Indonesia.</p>


Author(s):  
F. Dadras Javan ◽  
M. Savadkouhi

Abstract. In the last few years, Unmanned Aerial Vehicles (UAVs) have frequently been used to acquire high-resolution photogrammetric images and, consequently, to produce Digital Surface Models (DSMs) and orthophotos in a photogrammetric procedure for topography and surface-processing applications. Thermal imaging sensors are mostly used for interpretation and monitoring purposes because of their lower geometric resolution. Yet thermal mapping is becoming more important in civil applications, as thermal sensors can be used in conditions where visible cameras cannot, such as foggy weather and at night. However, the low geometric quality and resolution of thermal images is a main drawback for 3D thermal modelling. This study aims to offer a solution to this problem by generating a thermal 3D model with higher spatial resolution based on the integration of thermal and visible point clouds. This integration leads to a more accurate thermal point cloud and a DEM with higher density and resolution, which is appropriate for 3D thermal modelling. The main steps of this study are: generating thermal and RGB point clouds separately, registering them at two levels (coarse and fine), and finally adding thermal information to the high-resolution RGB point cloud by interpolation. Experimental results are presented as a mesh with more faces (by a factor of 23), which leads to a higher-resolution textured mesh with thermal information.
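The final step, transferring thermal information onto the dense RGB point cloud by interpolation, might be sketched as below. Inverse-distance weighting over the k nearest thermal points is an illustrative interpolation choice; the abstract does not name the authors' specific scheme, and the sample coordinates are toy data.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_thermal(rgb_xyz, thermal_xyz, thermal_values, k=3):
    """Assign a temperature to each point of the dense RGB cloud by
    inverse-distance-weighted interpolation over its k nearest points
    in the sparse thermal cloud."""
    tree = cKDTree(thermal_xyz)
    dists, idx = tree.query(rgb_xyz, k=k)
    w = 1.0 / np.maximum(dists, 1e-9)       # guard against zero distance
    w /= w.sum(axis=1, keepdims=True)       # normalise weights per point
    return (w * thermal_values[idx]).sum(axis=1)

# Two sparse thermal samples on a line; dense RGB points in between.
thermal_xyz = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
thermal_values = np.array([20.0, 30.0])
rgb_xyz = np.array([[0.5, 0.0, 0.0], [0.1, 0.0, 0.0]])
temps = transfer_thermal(rgb_xyz, thermal_xyz, thermal_values, k=2)
print(np.round(temps, 2))
```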


Author(s):  
H.-J. Przybilla ◽  
M. Lindstaedt ◽  
T. Kersten

<p><strong>Abstract.</strong> The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (IXU 1000) was used to capture UAV image data in RAW format. This investigation aims at evaluating the influence of the image data format on point clouds generated by a Dense Image Matching process. Furthermore, the effects of different data filters, which are part of the evaluation programs, were considered. The processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of the generated TIF and JPG data sets. The point clouds generated are the basis for the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height point deviations from the dense point clouds were determined using check points. Differences could be detected between the results generated by the two software packages. The reasons for these differences are the filtering settings used for the generation of dense point clouds. It can also be assumed that there are differences in the point cloud generation algorithms implemented in the two software packages. The slightly compressed JPG image data used for the point cloud generation did not show any significant changes in the quality of the examined point clouds compared to the uncompressed TIF data sets.</p>

