Parameter Reduction and Optimisation for Point Cloud and Occupancy Mapping Algorithms

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7004
Author(s):  
Yu Miao ◽  
Alan Hunter ◽  
Ioannis Georgilas

Occupancy mapping is widely used to generate volumetric 3D environment models from point clouds, informing a robotic platform which parts of the environment are free and which are not. The selection of the parameters that govern the point cloud generation and mapping algorithms affects both the process and the quality of the final map. Although studies on optimising major parameter configurations have been reported in the literature, research on identifying optimal parameter sets that achieve the best occupancy mapping performance remains limited. The current work aims to fill this gap with a two-step principled methodology that first identifies the most significant parameters by conducting Neighbourhood Component Analysis on all parameters and then optimises those parameters using a grid search over the area under the Receiver Operating Characteristic curve. This study is conducted on 20 data sets with specially designed targets, providing precise ground truths for evaluation purposes. The methodology is tested on OctoMap with point clouds created by applying StereoSGBM to the images from a stereo camera. The results clearly indicate that mapping parameters are more important than point cloud generation parameters. Moreover, up to 15% improvement in mapping performance can be achieved over default parameters.
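The second step of the methodology above, a grid search scored by the area under the ROC curve, can be sketched minimally as follows. This is an illustration, not the paper's implementation: the parameter name `res` and the evaluation callback are hypothetical stand-ins for whatever mapping parameters and evaluation data the study actually used.

```python
import itertools
import numpy as np

def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen occupied
    cell scores higher than a randomly chosen free cell."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # strict pairwise wins
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def grid_search(param_grid, evaluate):
    """Exhaustively evaluate every parameter combination and keep the
    one with the highest score (here: AUC against ground truth)."""
    best_score, best_params = -1.0, None
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

In practice `evaluate` would rebuild the occupancy map with the candidate parameters and score its cell-wise occupancy probabilities against the ground-truth targets.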

Author(s):  
H.-J. Przybilla ◽  
M. Lindstaedt ◽  
T. Kersten

Abstract. The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (IXU 1000) was used to capture UAV image data in RAW format. This investigation aims at evaluating the influence of the image data format on point clouds generated by a Dense Image Matching process. Furthermore, the effects of different data filters, which are part of the evaluation programs, were considered. The processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of both the generated TIF and JPG data sets. The point clouds generated are the basis for the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height point deviations from the dense point clouds were determined using check points. Differences could be detected between the results generated by the two software packages. The reasons for these differences are the filtering settings used for the generation of dense point clouds. It can also be assumed that there are differences in the point cloud generation algorithms implemented in the two software packages. The slightly compressed JPG image data used for the point cloud generation did not show any significant changes in the quality of the examined point clouds compared to the uncompressed TIF data sets.
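The area-based comparisons described above boil down to cloud-to-cloud distances against the TLS reference. A minimal brute-force sketch (function names are illustrative; production workflows would use a spatial index rather than an all-pairs distance matrix):

```python
import numpy as np

def cloud_to_cloud_distances(cloud, reference):
    """For each point of the generated cloud, the distance to its nearest
    neighbour in the TLS reference (brute force; fine for small test areas)."""
    cloud = np.asarray(cloud, float)
    reference = np.asarray(reference, float)
    d2 = ((cloud[:, None, :] - reference[None, :, :]) ** 2).sum(axis=-1)
    return np.sqrt(d2.min(axis=1))

def comparison_stats(cloud, reference):
    """Summary statistics of the deviations on a test area."""
    d = cloud_to_cloud_distances(cloud, reference)
    return {"mean": float(d.mean()),
            "rmse": float(np.sqrt((d ** 2).mean())),
            "p95": float(np.percentile(d, 95))}
```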


Author(s):  
F. Condorelli ◽  
R. Higuchi ◽  
S. Nasu ◽  
F. Rinaudo ◽  
H. Sugawara

Abstract. The use of Structure-from-Motion (SfM) algorithms is common practice for obtaining a rapid photogrammetric reconstruction. However, the performance of these algorithms is limited by the fact that, under some conditions, the resulting point clouds present low density. This is the case when processing materials from historical archives, such as photographs and videos, which generate only sparse point clouds due to the lack of the information necessary for photogrammetric reconstruction. This paper explores ways to improve the performance of open-source SfM algorithms in order to guarantee the presence of strategic feature points in the resulting point cloud, even if it is sparse. To reach this objective, a photogrammetric workflow is proposed to process historical images. The first part of the workflow presents a method that allows the manual selection of feature points during the photogrammetric process. The second part evaluates the metric quality of the reconstruction on the basis of a comparison with a point cloud of different density from the sparse point cloud. The workflow was applied to two different case studies. Transformations of the wall paintings of the Karanlık church in Cappadocia were analysed by comparing 3D models resulting from archive photographs with a recent survey. Then a comparison was performed between the states of the Komise building in Japan before and after restoration. The findings show that the method allows metric scaling and evaluation of the model even in poor conditions and when only low-density point clouds are available. Moreover, this tool should be of great use to both art and architecture historians and geomatics experts for studying the evolution of Cultural Heritage.
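Metric scaling of an SfM reconstruction, as used in the workflow above, amounts to estimating the similarity scale between matched points of the sparse cloud and a reference of known scale. A hedged sketch of one standard estimator (the least-squares scale is the ratio of RMS spreads about the centroids; the paper's exact procedure is not specified in the abstract):

```python
import numpy as np

def estimate_scale(sparse_pts, reference_pts):
    """Least-squares similarity scale between two matched point sets:
    the ratio of their RMS spreads about the respective centroids."""
    a = np.asarray(sparse_pts, float)
    b = np.asarray(reference_pts, float)
    a = a - a.mean(axis=0)  # centre both sets so translation drops out
    b = b - b.mean(axis=0)
    return float(np.sqrt((b ** 2).sum() / (a ** 2).sum()))
```

Multiplying the sparse cloud by this factor brings it to the metric scale of the reference, after which deviations can be measured in real units.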


Author(s):  
R. Moritani ◽  
S. Kanai ◽  
H. Date ◽  
Y. Niina ◽  
R. Honma

Abstract. In this paper, we introduce a method for predicting the quality of the dense points generated by the structure-from-motion (SfM) and multi-view stereo (MVS) pipeline, and for selecting low-quality regions among them, to realize high-quality and efficient as-is model reconstruction using only the results of the former step: sparse point clouds and camera poses. The method was shown to estimate the quality of the final dense points, acting as a quality predictor on an approximated model obtained from SfM alone, without requiring the time-consuming MVS process. Moreover, the predictors can be used to select low-quality regions on the approximated model and thereby estimate the next-best camera poses that could improve quality. Furthermore, the method was applied to predicting the quality of dense points generated from image sets of a concrete bridge column and a construction site, and the prediction was validated in far less time than running MVS. Finally, we discuss the correlation between the predictors and the final dense point quality.
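The abstract does not name the concrete predictors, but one quantity computable from exactly the inputs it lists (sparse points plus camera poses) is the triangulation, or parallax, angle; small angles are a classic indicator of noisy dense depth. The sketch below illustrates that style of predictor and is an assumption, not the paper's actual metric:

```python
import numpy as np

def triangulation_angle_deg(point, cam_a, cam_b):
    """Angle between the rays from two camera centres to a sparse point;
    small parallax angles predict poorly constrained (noisy) dense depth."""
    u = np.asarray(cam_a, float) - point
    v = np.asarray(cam_b, float) - point
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```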


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Srinivasa Rao Gadde ◽  
Arnold K. Fulment ◽  
Josephat K. Peter

The sampling plan proposed in this article is a multiple dependent state (MDS) sampling plan, which accepts or rejects a lot based on properties of both the current lot and the preceding lots sampled. The median life of the product under the proposed sampling plan is assured by a time-truncated life test, when the lifetime of the product follows the exponentiated Weibull distribution (EWD). For the proposed plan, optimal parameters such as the number of preceding lots required for deciding whether to accept or reject the current lot, the sample size, and the rejection and acceptance numbers are obtained by the approach of two points on the operating characteristic curve (OC curve). Tables are constructed for various combinations of consumer's and producer's risks and various shape parameters. The proposed MDS sampling plan for the EWD is demonstrated using data from the coronavirus (COVID-19) outbreak in China. The performance of the proposed sampling plan is compared with the existing single-sampling plan (SSP) when the quality of the product follows the EWD.
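The OC curve underlying the two-point design above can be sketched with the standard MDS acceptance probability: accept outright if the defect count d is at most c1, and accept conditionally (when the m preceding lots were all accepted outright) if c1 < d <= c2. The parameterisation below, with the failure probability taken from an exponentiated Weibull CDF at the truncation time, is the textbook form and is assumed rather than copied from the paper:

```python
from math import comb, exp

def ew_cdf(t, alpha, beta, theta):
    """Exponentiated Weibull CDF: F(t) = (1 - exp(-(t/alpha)**beta))**theta."""
    return (1.0 - exp(-(t / alpha) ** beta)) ** theta

def binom_cdf(c, n, p):
    """P(d <= c) for d ~ Binomial(n, p)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def mds_accept_prob(n, c1, c2, m, p):
    """Lot acceptance probability of an MDS plan: accept outright if
    d <= c1; if c1 < d <= c2, accept only when the m preceding lots
    were all accepted outright."""
    pa = binom_cdf(c1, n, p)          # outright acceptance
    pb = binom_cdf(c2, n, p) - pa     # conditional region
    return pa + pb * pa**m
```

The two-point design then searches for (n, c1, c2, m) such that the acceptance probability is at least 1 minus the producer's risk at acceptable quality and at most the consumer's risk at limiting quality.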


2015 ◽  
Vol 764-765 ◽  
pp. 1375-1379 ◽  
Author(s):  
Cheng Tiao Hsieh

This paper aims at presenting a simple approach utilizing a Kinect-based scanner to create models ready for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, which usually requires complicated computational processing before it is ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment, and surface reconstruction. Each process may require several functions and algorithms to accomplish its specific tasks. For instance, the Iterative Closest Point (ICP) algorithm is frequently used for 3D registration, and the bilateral filter is often used for filtering noise points. This paper attempts to develop a simple Kinect-based scanner and a specific modeling approach that avoids the above complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. The scanner generates sets of organized point clouds, which can be aligned precisely by a simple transformation matrix instead of the ICP. The surface quality of raw point clouds captured by Kinect is usually rough. To address this drawback, this paper introduces a solution for obtaining a smooth surface model. In addition, these processes have been developed efficiently with the free open libraries VTK, Point Cloud Library, and OpenNI.
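The key trick above, replacing ICP with a known transformation, works because the turntable angle of each scan is known a priori. A minimal sketch of that alignment (function names are illustrative, and a real setup would also calibrate the table axis position):

```python
import numpy as np

def table_rotation(angle_deg):
    """Rotation about the turntable's vertical (z) axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def merge_scans(scans, step_deg):
    """Align scan k with the known table angle k * step_deg instead of ICP,
    then stack all scans into one cloud."""
    aligned = [np.asarray(pts, float) @ table_rotation(k * step_deg).T
               for k, pts in enumerate(scans)]
    return np.vstack(aligned)
```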


2019 ◽  
Vol 9 (16) ◽  
pp. 3273 ◽  
Author(s):  
Wen-Chung Chang ◽  
Van-Toan Pham

This paper develops a registration architecture for the purpose of estimating the relative pose, including the rotation and the translation, of an object with respect to a model in 3-D space, based on 3-D point clouds captured by a 3-D camera. In particular, this paper addresses the time-consuming problem of 3-D point cloud registration, which is essential for closed-loop industrial automated assembly systems that demand accurate pose estimation in fixed time. Firstly, two different descriptors are developed in order to extract coarse and detailed features of these point cloud data sets for the purpose of creating training data sets according to diversified orientations. Secondly, in order to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture can estimate the rotation between the model point cloud and a data point cloud, followed by the translation estimation based on computing average values. By covering a smaller range of orientation uncertainty than the full range covered by the first CNN model, the second CNN model can precisely estimate the orientation of the 3-D point cloud. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods. Based on these results, the proposed algorithm significantly reduces the estimation time while maintaining high precision.
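Once the CNNs have produced the rotation, the "translation estimation based on computing average values" mentioned above has a closed form: with rotation R fixed, the least-squares translation is the difference of the cloud centroids. A short sketch of that final step (the CNN stages themselves are omitted):

```python
import numpy as np

def estimate_translation(model_pts, data_pts, R):
    """Given the rotation R recovered by the CNN stages, the remaining
    translation follows from the average values (centroids) of the two
    clouds: t = mean(data) - R @ mean(model)."""
    model_pts = np.asarray(model_pts, float)
    data_pts = np.asarray(data_pts, float)
    return data_pts.mean(axis=0) - R @ model_pts.mean(axis=0)
```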


2020 ◽  
Vol 6 (9) ◽  
pp. 94
Author(s):  
Magda Alexandra Trujillo-Jiménez ◽  
Pablo Navarro ◽  
Bruno Pazos ◽  
Leonardo Morales ◽  
Virginia Ramallo ◽  
...  

Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstructions or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow, and incapable of coping with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained Neural Network architecture. Body2vec is capable of performing human body point cloud reconstruction from videos taken on hand-held devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step that avoids the spurious point generation usual in photogrammetric reconstruction. A group of 60 persons was recorded with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. As a gold standard, we used anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the resulting point cloud, as compared with the LiDAR-based mesh, and of the anthropometric measurements, as compared with the actual hip and waist perimeters measured by the anthropometrists. In both contexts, the resulting quality of body2vec is equivalent to that of the LiDAR reconstruction.
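The background removal step described above can be illustrated very simply: once the segmentation network has produced a per-frame binary mask, background pixels are suppressed before the frames enter the photogrammetric pipeline, so no spurious points can be matched there. A hedged sketch (the network itself is out of scope; the masking convention is an assumption):

```python
import numpy as np

def remove_background(frame, mask):
    """Zero out background pixels of an RGB frame before it enters the
    photogrammetric pipeline; `mask` is 1 for body pixels, 0 elsewhere."""
    frame = np.asarray(frame)
    mask = np.asarray(mask)
    # Broadcast the (H, W) mask across the colour channels.
    return frame * mask[..., None]
```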


2021 ◽  
Vol 293 ◽  
pp. 02031
Author(s):  
Guocheng Qin ◽  
Ling Wang ◽  
YiMei Hou ◽  
HaoRan Gui ◽  
YingHao Jian

The digital twin model of a factory is the basis for the construction of a digital factory, and the professional systems of a factory are complex. The traditional BIM model is not completely consistent with the actual positions of the corresponding components, and it is difficult to use it directly in place of a digital twin model. In response to this situation, and relying on a specific factory project, the point cloud is used to eliminate the positional deviation between the BIM model and the factory during the construction phase, to improve the efficiency, accuracy, and reliability of model adjustment and optimization, and to realize the conversion from a BIM model to a digital twin model. A novel algorithm is developed to quickly detect and evaluate the construction quality of the local structure of the factory, so that the initial deformation data of the structure can be input into the corresponding model and fed back to the construction party for improvement. The results show that the digital twin model, which is highly consistent with the actual locations of the factory components, not only lays a solid foundation for the construction of a digital factory but also further deepens the integration and application of BIM and point clouds.
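The core comparison in such a workflow is measuring how far as-built scan points deviate from a BIM component and flagging components that exceed a tolerance. A minimal sketch against a planar face (function names and the 2 cm tolerance are illustrative, not from the paper):

```python
import numpy as np

def deviation_to_face(points, plane_point, plane_normal):
    """Signed distances of as-built scan points to a planar model face."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    return (np.asarray(points, float) - np.asarray(plane_point, float)) @ n

def flag_component(points, plane_point, plane_normal, tol=0.02):
    """Flag a component whose maximum absolute deviation exceeds `tol` (m),
    returning the flag and the worst deviation for feedback."""
    d = np.abs(deviation_to_face(points, plane_point, plane_normal))
    return bool(d.max() > tol), float(d.max())
```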


Author(s):  
N. Tyagur ◽  
M. Hollaus

During the last ten years, mobile laser scanning (MLS) systems have become a very popular and efficient technology for capturing reality in 3D. A 3D laser scanner mounted on top of a moving vehicle (e.g. a car) allows high-precision capture of the environment in a fast way. This technology is mostly used in cities for capturing roads and building facades to create 3D city models. In our work, we used an MLS system in the Moravian Karst, a protected nature reserve in the eastern part of the Czech Republic with steep rocky terrain covered by forests. For the 3D data collection, the Riegl VMX 450, mounted on a car, was used with integrated IMU/GNSS equipment, which provides low-noise, rich, and very dense 3D point clouds.

The aim of this work is to create a digital terrain model (DTM) from several MLS data sets acquired in the neighbourhood of a road. The total lengths of the two covered areas are 3.9 and 6.1 km respectively, with an average width of 100 m. For the DTM generation, a fully automatic, robust, hierarchic approach was applied. The derivation of the DTM is based on combinations of hierarchical interpolation and robust filtering at different resolution levels. For the generation of the final DTMs, different interpolation algorithms are applied to the classified terrain points. The parameters used were determined by explorative analysis, and all MLS data sets were processed with one parameter set. As a result, a highly precise DTM was derived with a high spatial resolution of 0.25 x 0.25 m. The quality of the DTMs was checked by geodetic measurements and visual comparison with the raw point clouds. The high quality of the derived DTM makes it usable for analysing terrain changes and morphological structures. Finally, the derived DTM was compared with the DTM of the Czech Republic (DMR 4G), with a resolution of 5 x 5 m, which was created from airborne laser scanning data. The vertical accuracy of the derived DTMs is around 0.10 m.
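The coarsest level of a hierarchic terrain filter like the one above can be sketched as a grid minimum: partition the points into horizontal cells and keep the lowest point per cell as a candidate ground point, to be refined by robust interpolation at finer levels. This is an illustrative simplification of such approaches, not the paper's actual filter:

```python
import numpy as np

def lowest_point_grid(points, cell=0.25):
    """First, coarse level of a hierarchic terrain filter: keep only the
    lowest point within each cell of a horizontal grid (cell size in m)."""
    pts = np.asarray(points, float)
    # Integer cell index from the x, y coordinates.
    cells = np.floor(pts[:, :2] / cell).astype(int)
    best = {}
    for key, p in zip(map(tuple, cells), pts):
        if key not in best or p[2] < best[key][2]:
            best[key] = p
    return np.array(list(best.values()))
```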

