Study of rapid face modeling technology based on Kinect

Author(s):  
Shan Liu ◽  
Guanghong Gong ◽  
Luhao Xiao ◽  
Mengyuan Sun ◽  
Zhengliang Zhu

This paper improves the point cloud filtering and registration algorithms used in 3D modeling, aiming for smaller sampling error and shorter processing time of point cloud data. Based on collaborative sampling among several Kinect devices, we analyze the deficiencies of current filtering algorithms and apply a novel point cloud filtering method. We then use the Fast Point Feature Histogram (FPFH) algorithm for feature extraction and point cloud registration. Compared with registration using Point Feature Histograms (PFH), our method takes only 9 min for roughly 500,000 points, shortening the alignment time by 47.1%. To measure registration accuracy, we propose an algorithm that calculates the average distance between the coincident parts of two point clouds, and we improve the accuracy to an average distance of 0.7 mm. For surface reconstruction, we adopt the Ball Pivoting algorithm, obtaining a more accurate surface in a shorter time.
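A minimal sketch of one plausible reading of the accuracy metric described above: the mean nearest-neighbour distance over the region where two aligned clouds coincide. The array names and the overlap threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_overlap_distance(aligned_src, target, overlap_threshold=0.005):
    """Average point-to-point distance over the coincident (overlapping) part.

    aligned_src, target : (N, 3) arrays of points in metres.
    overlap_threshold   : points farther than this from the target are treated
                          as non-overlapping and ignored (assumed value).
    """
    tree = cKDTree(target)
    dists, _ = tree.query(aligned_src, k=1)   # nearest target point per source point
    overlap = dists < overlap_threshold       # keep only the coincident region
    return dists[overlap].mean() if overlap.any() else np.inf
```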

2021 ◽  
Author(s):  
Kacper Pluta ◽  
Gisela Domej

The process of transforming point cloud data into high-quality meshes or CAD objects is, in general, not a trivial task. Many problems, such as holes, enclosed pockets, or small tunnels, can occur during the surface reconstruction process, even if the point cloud is of excellent quality. These issues are often difficult to resolve automatically and may require detailed manual adjustments. Nevertheless, in this work, we present a semi-automatic pipeline that requires minimal user-provided input and still allows for high-quality surface reconstruction. Moreover, the presented pipeline can be successfully used by non-specialists and relies only on commonly available tools.

Our pipeline consists of the following main steps: First, a normal field over the point cloud is estimated, and Screened Poisson Surface Reconstruction is applied to obtain the initial mesh. At this stage, the reconstructed mesh usually contains holes, small tunnels, and excess parts – i.e., surface parts that do not correspond to the point cloud geometry. In the next step, we apply morphological and geometrical filtering in order to resolve the problems mentioned before. Some fine details are also removed during the filtering process; however, we show how these can be restored – without reintroducing the problems – using a distance-guided projection. In the last step, the filtered mesh is re-meshed to obtain a high-quality triangular mesh, which, if needed, can be converted to a CAD object represented by a small number of quadrangular NURBS patches.

Our workflow is designed for a point cloud recorded by a laser scanner inside one of seven artificially carved caves resembling chapels, with several niches and passages to the outside of a sandstone hill slope in Georgia. We note that we have not tested the approach on other data. Nevertheless, we believe that a similar pipeline can be applied to other types of point cloud data, e.g., natural caves or mining shafts, geotechnical constructions, rock cliffs, geo-archaeological sites, etc. This workflow was created independently; it is not part of a funded project and does not advertise particular software. The case study's point cloud data was used by courtesy of the Dipartimento di Scienze dell'Ambiente e della Terra of the Università degli Studi di Milano–Bicocca.
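A minimal sketch of the first stage of the pipeline described above (normal estimation followed by Screened Poisson Surface Reconstruction), using Open3D. The file names and parameter values are placeholders, and the morphological/geometric filtering, detail restoration, and re-meshing stages are not reproduced here.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("cave_scan.ply")          # hypothetical input file

# Estimate and consistently orient normals; Poisson needs an oriented normal field.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=20)

# Screened Poisson Surface Reconstruction; densities flag poorly supported vertices.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

# Crude stand-in for removing "excess parts": drop vertices with low point support.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
o3d.io.write_triangle_mesh("cave_mesh.ply", mesh)
```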


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data to image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fast fusion, demonstrating its effectiveness.
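A hedged sketch of the first two steps described above: rasterising a LiDAR point cloud into an intensity image and matching features against an optical image with OpenCV. The grid resolution, file names, and the ORB detector are illustrative choices, and the collinearity-equation solve for the exterior orientation parameters is not reproduced here.

```python
import numpy as np
import cv2

def intensity_image(points_xyzi, cell=0.10):
    """Project points onto the XY plane and keep the max intensity per grid cell."""
    xy, inten = points_xyzi[:, :2], points_xyzi[:, 3]
    mins = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - mins) / cell).astype(int) + 1
    img = np.zeros((rows, cols), dtype=np.float32)
    c, r = ((xy - mins) / cell).astype(int).T
    np.maximum.at(img, (r, c), inten)
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

lidar_img = intensity_image(np.loadtxt("strip.xyzi"))   # hypothetical x y z intensity file
optical = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)

# Feature detection and matching; ORB is used here only as a readily available example.
orb = cv2.ORB_create(4000)
k1, d1 = orb.detectAndCompute(lidar_img, None)
k2, d2 = orb.detectAndCompute(optical, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
```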


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained with them show significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of large amounts of missing data, large variations in point density, and scale differences. The robustness of this method comes from its elimination of noise in the extracted linear features and its 2D incremental registration strategy. Our work makes three main contributions: (1) an end-to-end automatic cross-source point-cloud registration method; (2) an effective way to extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory results efficiently. Moreover, this method can be extended to further point-cloud sources.
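A hedged illustration of the scale-restoration idea: given 2D point pairs already matched between the two sources (for example, endpoints of corresponding linear features), a similarity transform (scale s, rotation R, translation t) can be estimated with the Umeyama/Procrustes closed form. This is a generic estimator, not the authors' incremental strategy.

```python
import numpy as np

def similarity_2d(src, dst):
    """Return s, R (2x2), t (2,) minimising ||dst - (s * R @ src + t)||."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / xs.var(axis=0).sum() # recovered scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t
```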


2021 ◽  
Vol 10 (3) ◽  
pp. 157
Author(s):  
Paul-Mark DiFrancesco ◽  
David A. Bonneau ◽  
D. Jean Hutchinson

Key to the quantification of rockfall hazard is an understanding of its magnitude-frequency behaviour. Remote sensing has allowed for the accurate observation of rockfall activity, with methods being developed for digitally assembling the monitored occurrences into a rockfall database. A prevalent challenge is the quantification of rockfall volume while fully considering the 3D information stored in each of the extracted rockfall point clouds. Surface reconstruction is used to build a 3D digital surface representation, allowing the volume of space occupied by a point cloud to be estimated. Given various point cloud imperfections, it is difficult for existing methods to generate digital surface representations of rockfall with detailed geometry and correct topology. In this study, we tested four computational geometry-based surface reconstruction methods on a database of 3668 rockfalls. The database was derived from a 5-year LiDAR monitoring campaign of an active rock slope in interior British Columbia, Canada. Each method resulted in a different magnitude-frequency distribution of rockfall. The implications of 3D volume estimation were demonstrated using surface mesh visualization, cumulative magnitude-frequency plots, power-law fitting, and projected annual frequencies of rockfall occurrence. The 3D volume estimation methods caused a notable shift in the magnitude-frequency relations, while the power-law scaling parameters remained relatively similar. We determined that the optimal 3D volume calculation approach is a hybrid methodology combining Power Crust reconstruction with Alpha Solid reconstruction. The Alpha Solid approach is used on small-scale point clouds characterized by high curvatures relative to their sampling density, which challenge the Power Crust sampling assumptions.
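A hedged sketch of the magnitude-frequency analysis mentioned above: given per-event rockfall volumes (however they were reconstructed), build the cumulative magnitude-frequency curve and fit a power law N(>=V) = a * V^(-b) by linear regression in log-log space. The variable names, input file, and the 5-year normalisation are illustrative, not the paper's exact procedure.

```python
import numpy as np

def cumulative_mf(volumes_m3, years=5.0):
    """Annual frequency of events with volume >= V, for each observed V."""
    v = np.sort(np.asarray(volumes_m3))
    n_ge = np.arange(len(v), 0, -1)          # count of events at or above each volume
    return v, n_ge / years

def fit_power_law(v, freq):
    """Least-squares fit of log10(freq) = log10(a) - b * log10(V)."""
    slope, log_a = np.polyfit(np.log10(v), np.log10(freq), 1)
    return 10 ** log_a, -slope               # a, b such that N(>=V) = a * V**(-b)

volumes = np.loadtxt("rockfall_volumes.txt") # hypothetical one-volume-per-line file
v, freq = cumulative_mf(volumes)
a, b = fit_power_law(v, freq)
```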


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing capabilities. This study extends the use of two- and three-dimensional detection technologies to underwater applications in order to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Several pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
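A minimal sketch of the kind of pre-processing described above (noise and seabed removal) using Open3D: statistical outlier removal followed by RANSAC plane segmentation, with the dominant plane treated as the seabed. The file name and thresholds are placeholders; this is not the authors' exact procedure.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("bv5000_scan.ply")        # hypothetical sonar point cloud

# Drop sparse acoustic noise.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Fit the dominant plane and keep everything NOT on it (objects above the seabed).
plane, inliers = pcd.segment_plane(distance_threshold=0.05, ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)
o3d.io.write_point_cloud("objects_only.ply", objects)
```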


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to disturbance, while also aiding its visualization. Generally, 3D point clouds are used to determine structural behavioral changes. Light detection and ranging (LiDAR) is one of the principal ways of generating a 3D point cloud dataset; 3D cameras are also commonly used to develop a point cloud covering the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance its accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated against the output of a linear variable differential transformer (LVDT) sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
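A hedged sketch related to the bilateral filtering mentioned above: smoothing a depth-camera frame with OpenCV's bilateral filter before converting it to a point cloud, plus a simple relative-error check of a sensor-derived deflection against a reference (LVDT) value. The file name, filter parameters, and error definition are illustrative assumptions, not the paper's exact processing chain.

```python
import numpy as np
import cv2

depth = np.load("beam_depth_mm.npy").astype(np.float32)   # hypothetical depth frame in mm

# Edge-preserving smoothing: noise is averaged out while depth discontinuities survive.
filtered = cv2.bilateralFilter(depth, d=9, sigmaColor=25, sigmaSpace=9)

def relative_error(measured_mm, reference_mm):
    """Percentage error of a sensor-derived deflection against the LVDT value."""
    return 100.0 * (measured_mm - reference_mm) / reference_mm

print(relative_error(measured_mm=3.2, reference_mm=3.0))  # e.g. about +6.7%, within +/-10%
```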


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. The proposed algorithm has two stages: in the first stage, the measured 3D point cloud data are quickly clustered and the clustered data are segmented into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor's frame of reference, providing a robust flight altitude estimate even in the presence of static and dynamic ground obstacles. We validate our approach on the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera from the provided 3D point clouds. We further validate the approach with a point cloud sensor on board a UAV in several autonomous real flights, closing the altitude control loop with the flight altitude estimated by the proposed method in the presence of various static and dynamic ground obstacles. In addition, the implementation of our approach has been integrated into Aerostack, our open-source software framework for aerial robotics.
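A minimal sketch of the two-stage idea above, under simplifying assumptions: cluster the cloud, fit a plane to each cluster, keep near-horizontal planes, and report the sensor's height above the lowest one as the altitude estimate. The thresholds and the gravity-aligned, z-up sensor frame are assumptions; the authors' algorithm handles dynamic obstacles more carefully.

```python
import numpy as np
import open3d as o3d

def estimate_altitude(points_xyz, horiz_tol_deg=10.0):
    """Height of the sensor above the lowest near-horizontal plane in the cloud."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    labels = np.asarray(pcd.cluster_dbscan(eps=0.3, min_points=30))
    heights = []
    for lab in set(labels.tolist()) - {-1}:              # -1 marks DBSCAN noise
        idx = np.where(labels == lab)[0]
        if len(idx) < 50:
            continue
        cluster = pcd.select_by_index(idx.tolist())
        (a, b, c, d), _ = cluster.segment_plane(0.05, 3, 200)
        tilt = np.degrees(np.arccos(abs(c) / np.linalg.norm((a, b, c))))
        if tilt < horiz_tol_deg:                          # near-horizontal plane
            plane_z = np.asarray(cluster.points)[:, 2].mean()
            if plane_z < 0:                               # plane lies below the sensor (z-up)
                heights.append(-plane_z)
    return max(heights) if heights else None              # distance to the lowest plane
```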


Author(s):  
Franco Spettu ◽  
Simone Teruggi ◽  
Francesco Canali ◽  
Cristiana Achille ◽  
Francesco Fassi

Cultural Heritage (CH) 3D digitisation is attracting increasing attention and importance. Advanced survey techniques provide a 3D point cloud as output, describing even the most complex architectural geometry completely and with a priori established accuracy. These 3D point models are generally used as the basis for producing 2D technical drawings and advanced 3D representations. During the last 12 years, the 3DSurveyGroup (3DSG, Politecnico di Milano) conducted a comprehensive, multi-technique survey, obtaining the full point cloud of Milan Cathedral, from which the 2D technical drawings and the 3D model of the Main Spire were produced; these are used by the Veneranda Fabbrica del Duomo di Milano (VF) to plan its periodic maintenance and inspection activities on the Cathedral. Using the survey product directly to plan VF activities would avoid a lengthy, uneconomical, manual process of extracting 2D and 3D technical drawings. To do so, the unstructured point cloud data must be enriched with semantics, providing a hierarchical structure that can communicate with a powerful, flexible information system able to manage both point clouds and 3D geometries as hybrid models. For this purpose, the point cloud was segmented using a machine-learning algorithm with a multi-level multi-resolution (MLMR) approach in order to obtain a manageable, reliable and repeatable dataset. This reverse engineering process made it possible to identify the main architectural elements directly on the point cloud; these elements are then reorganised into a logical structure inserted into the information system built in the 3DExperience environment developed by Dassault Systèmes.
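A hedged sketch of the general idea behind a supervised point-cloud classification step such as the MLMR approach mentioned above: compute simple per-point geometric features (here, covariance eigenvalue ratios from the local neighbourhood) and train a Random Forest on a labelled subset. The feature choice, neighbourhood size, classifier, and file names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def covariance_features(points, k=30):
    """Per-point linearity/planarity/sphericity from the k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        ev = np.linalg.eigvalsh(np.cov(points[nbrs].T))[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(ev, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

# points: (N, 3) array; labels: per-point class index for a manually annotated subset.
points = np.load("duomo_subset.npy")                    # hypothetical annotated chunk
labels = np.load("duomo_labels.npy")
clf = RandomForestClassifier(n_estimators=100).fit(covariance_features(points), labels)
```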

