A New Kinect-Based Scanning System and its Application

2015 ◽  
Vol 764-765 ◽  
pp. 1375-1379 ◽  
Author(s):  
Cheng Tiao Hsieh

This paper aims at presenting a simple approach that uses a Kinect-based scanner to create models ready for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, which usually requires complicated computational processing before it is ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment and surface reconstruction. Each process may require several functions and algorithms to accomplish its specific task. For instance, the Iterative Closest Point (ICP) algorithm is frequently used for 3D registration, and the bilateral filter is often used for noise filtering. This paper attempts to develop a simple Kinect-based scanner and its specific modeling approach without involving the above complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. The scanner generates a set of organized point clouds, which can be aligned precisely by a simple transformation matrix instead of ICP. The surface quality of raw point clouds captured by a Kinect is usually rough; to address this drawback, the paper introduces a solution for obtaining a smooth surface model. In addition, these processes have been implemented efficiently with free, open-source libraries: VTK, the Point Cloud Library and OpenNI.
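Because the scan positions come from a rotation table with a known angular increment, each organized point cloud can be brought into a common frame by a fixed rotation rather than by ICP. The following minimal Python sketch illustrates the idea; it assumes the turntable axis has been calibrated to coincide with the z-axis of the first scan, and the function names and step angle are illustrative, not taken from the paper.

    import numpy as np

    def rotation_about_z(angle_deg):
        # Rotation matrix for the (assumed) turntable axis, here the z-axis.
        a = np.radians(angle_deg)
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    def align_turntable_scans(scans, step_deg):
        # Bring every scan (an N x 3 array) into the frame of the first scan
        # using the known turntable increment; the sign of the step depends
        # on the rotation direction of the table.
        aligned = [scans[0]]
        for i, pts in enumerate(scans[1:], start=1):
            R = rotation_about_z(-i * step_deg)
            aligned.append(pts @ R.T)
        return np.vstack(aligned)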

Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed with the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.
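One simple form of the geometric assessment is to compare each generated DSM against surveyed check points. The sketch below is a minimal Python illustration of that idea; the raster layout (north-up, origin at the upper-left corner) and all names are assumptions, not details taken from the paper.

    import numpy as np

    def dsm_check_point_errors(dsm, x0, y0, pixel_size, check_points):
        # Vertical errors of a DSM at surveyed check points.
        # dsm: 2-D array of heights, assumed north-up with its origin at the
        # upper-left corner (x0, y0); check_points: iterable of (x, y, z).
        errors = []
        for x, y, z in check_points:
            col = int(round((x - x0) / pixel_size))
            row = int(round((y0 - y) / pixel_size))
            errors.append(dsm[row, col] - z)
        errors = np.asarray(errors)
        rmse = np.sqrt(np.mean(errors ** 2))
        return rmse, errors.mean(), errors.std()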


Author(s):  
Xiaowen Teng ◽  
Guangsheng Zhou ◽  
Yuxuan Wu ◽  
Chenglong Huang ◽  
Wanjing Dong ◽  
...  

The 3D reconstruction method using an RGB-D camera offers a good balance between hardware cost, point cloud quality and automation. However, owing to limitations of the camera's structure and imaging principle, the acquired point cloud suffers from heavy noise and is difficult to register. This paper proposes a three-dimensional reconstruction method using the Azure Kinect to solve these inherent problems. Color images, depth maps and near-infrared images of the target are captured from six perspectives with the Azure Kinect sensor. The binarized 8-bit infrared image is multiplied with the general RGB-D image alignment result provided by Microsoft to remove ghost images and most of the background noise. In order to filter the floating points and outlier noise of the point cloud, a neighborhood maximum filtering method is proposed to filter out abrupt points in the depth map: floating points are removed before the point cloud is generated, and a pass-through filter then removes outlier noise. To address the shortcomings of the classic ICP algorithm, an improved method is proposed: by progressively reducing the size of the down-sampling grid and the distance threshold between corresponding points, the point clouds of the individual views are registered in three passes until a complete color point cloud is obtained. Extensive experiments on rape plants show that the point cloud accuracy obtained by this method is 0.739 mm, a complete scan takes 338.4 seconds, and color reproduction is good. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy at a significantly higher reconstruction speed, with much lower hardware cost and a scanning system that is easy to automate. This research demonstrates a low-cost, high-precision 3D reconstruction technology with the potential to be widely used for non-destructive measurement of crop phenotypes.
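The coarse-to-fine registration schedule (shrinking both the down-sampling grid and the correspondence distance over three passes) can be sketched as follows. This is a minimal Python illustration using the Open3D library as a stand-in for the authors' own implementation; the voxel sizes and distance thresholds are placeholder values.

    import numpy as np
    import open3d as o3d

    def coarse_to_fine_icp(source, target,
                           voxel_sizes=(8.0, 4.0, 2.0),
                           dist_thresholds=(20.0, 10.0, 5.0)):
        # Register source to target in three passes, shrinking the
        # down-sampling grid and the correspondence distance each pass
        # (the numeric values here are illustrative only).
        current = np.eye(4)
        for voxel, dist in zip(voxel_sizes, dist_thresholds):
            src = source.voxel_down_sample(voxel)
            tgt = target.voxel_down_sample(voxel)
            result = o3d.pipelines.registration.registration_icp(
                src, tgt, dist, current,
                o3d.pipelines.registration.TransformationEstimationPointToPoint())
            current = result.transformation
        return current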


Author(s):  
A. Murtiyoso ◽  
P. Grussenmeyer

In the field of 3D heritage documentation, point cloud registration is a relatively common issue. With rising needs for Historic Building Information Models (HBIMs), this issue has become more important, as it determines the quality of the data to be used for HBIM modelling. Furthermore, in the context of historical buildings, it is often interesting to document both the exterior façades and the interior. This paper discusses two approaches to the registration and georeferencing of building exterior and interior point clouds coming from different sensors, namely the independent georeferencing method and the free-network registration and georeferencing method. Building openings (mainly windows) were used to establish common points between the systems. The two methods are compared in terms of geometric quality, and the technical problems encountered in performing them are also discussed. Furthermore, an attempt to automate parts of the workflow using automatic 3D keypoint and feature detection and matching is described in the paper. Results show that while both approaches give similar results, the independent approach requires less work to perform. However, the free-network method has the advantage of being able to compensate for any systematic georeferencing error in either system. As regards the automation attempt, the use of 3D keypoints and features may reduce processing time; however, correct tie-point correspondence filtering remains difficult in the presence of heavy point cloud noise.
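Once common points (for example, corresponding window corners) have been identified in both clouds, the rigid transformation between the two systems can be estimated by a least-squares fit. The sketch below shows the standard SVD-based (Kabsch) solution in Python as an illustration; it is not the specific estimator used in the paper.

    import numpy as np

    def rigid_transform(src, dst):
        # Least-squares rotation R and translation t mapping src onto dst,
        # where src and dst are N x 3 arrays of matched common points
        # (e.g. corners of building openings seen from both systems).
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t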


2020 ◽  
Vol 6 (9) ◽  
pp. 94
Author(s):  
Magda Alexandra Trujillo-Jiménez ◽  
Pablo Navarro ◽  
Bruno Pazos ◽  
Leonardo Morales ◽  
Virginia Ramallo ◽  
...  

Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstruction or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow and incapable of coping with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained neural network architecture. Body2vec performs human body point cloud reconstruction from videos taken on hand-held devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step that avoids the spurious point generation that is usual in photogrammetric reconstruction. A group of 60 persons were recorded with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. As a gold standard, we used anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the resulting point clouds, as compared with the LiDAR-based meshes, and of the derived anthropometric measurements, as compared with the actual hip and waist perimeters measured by the anthropometrists. In both contexts, the resulting quality of body2vec is equivalent to that of the LiDAR reconstruction.
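The background removal step amounts to masking out everything except the body in each video frame before the frames are passed to the photogrammetric pipeline. The following minimal Python sketch assumes the per-frame binary masks have already been produced by the trained segmentation network; the function name and data layout are illustrative, not part of body2vec.

    import numpy as np

    def mask_background(frames, body_masks):
        # Zero out background pixels prior to photogrammetric reconstruction.
        # frames: list of H x W x 3 uint8 images; body_masks: matching H x W
        # boolean arrays (True where the network detected the body).
        masked = []
        for frame, mask in zip(frames, body_masks):
            out = frame.copy()
            out[~mask] = 0          # background removed, body pixels kept
            masked.append(out)
        return masked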


2021 ◽  
Vol 293 ◽  
pp. 02031
Author(s):  
Guocheng Qin ◽  
Ling Wang ◽  
YiMei Hou ◽  
HaoRan Gui ◽  
YingHao Jian

The digital twin model of a factory is the basis for the construction of a digital factory, and the engineering systems of a factory are complex. The traditional BIM model is not completely consistent with the actual positions of the corresponding components, and it is therefore difficult to use it directly as the digital twin model. In response to this situation, and relying on an actual factory project, point clouds are used during the construction phase to eliminate the positional deviation between the BIM model and the factory, to improve the efficiency, accuracy and reliability of model adjustment and optimization, and to realize the conversion from a BIM model to a digital twin model. A novel algorithm is developed to quickly detect and evaluate the construction quality of local structures of the factory, so that the initial deformation data of the structures can be input into the corresponding model and fed back to the construction party for improvement. The results show that the digital twin model, which is highly consistent with the actual locations of the factory components, not only lays a solid foundation for the construction of a digital factory, but also further deepens the integration and application of BIM and point clouds.
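Detecting where the as-built structure deviates from the BIM model reduces, in its simplest form, to computing nearest-neighbour distances from the scan points to points sampled on the model surface and flagging those beyond a tolerance. The sketch below illustrates that idea in Python with SciPy; the tolerance value and the sampling of the BIM surface are assumptions, not the paper's algorithm.

    import numpy as np
    from scipy.spatial import cKDTree

    def component_deviation(scan_points, model_points, tolerance=0.02):
        # scan_points: N x 3 as-built laser scan points of one component;
        # model_points: M x 3 points densely sampled on the BIM surface.
        # Returns summary statistics and the points exceeding the tolerance (m).
        dists, _ = cKDTree(model_points).query(scan_points, k=1)
        outliers = scan_points[dists > tolerance]
        return dists.mean(), dists.max(), outliers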


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Ming Guo ◽  
Bingnan Yan ◽  
Tengfei Zhou ◽  
Deng Pan ◽  
Guoli Wang

To obtain high-precision measurement data using vehicle-borne light detection and ranging (LiDAR) scanning (VLS) systems, calibration is necessary before a data acquisition mission. Thus, a novel calibration method based on a homemade target ball is proposed to estimate the system mounting parameters, which refer to the rotational and translational offsets in orientation and position between the LiDAR sensor and the inertial measurement unit (IMU). First, the spherical point cloud is fitted to a sphere to extract the coordinates of its centre; each scan line on the sphere is fitted to a cross-section of the sphere to compute the distance ratio from the centre to the two nearest sections, and the attitude and trajectory parameters at the centre are calculated by linear interpolation. Then, the true coordinates of the centre of the sphere are obtained by measuring the coordinates of the reflector directly above the target ball with a total station. Finally, three rotation parameters and three translation parameters are calculated by two least-squares adjustments. Comparisons of the point cloud before and after calibration, and of the calibrated point cloud with a point cloud obtained by a terrestrial laser scanner, show that the accuracy improved significantly after calibration. The experiment indicates that the calibration method based on the homemade target ball can effectively improve the accuracy of the point cloud, which can promote VLS development and applications.
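The first step, fitting the scanned spherical point cloud to a sphere, can be written as a linear least-squares problem. The Python sketch below shows that standard formulation as an illustration; it is not the authors' code.

    import numpy as np

    def fit_sphere(points):
        # Linear least-squares sphere fit. Rewriting ||p - c||^2 = r^2 as
        # 2 p.c + (r^2 - ||c||^2) = ||p||^2 gives a linear system in the
        # centre c and the auxiliary unknown k = r^2 - ||c||^2.
        A = np.hstack([2.0 * points, np.ones((len(points), 1))])
        b = np.sum(points ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre = sol[:3]
        radius = np.sqrt(sol[3] + centre @ centre)
        return centre, radius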


Author(s):  
G. Tran ◽  
D. Nguyen ◽  
M. Milenkovic ◽  
N. Pfeifer

Full-waveform (FWF) LiDAR (Light Detection and Ranging) systems have the advantage, compared to conventional airborne discrete-return laser scanners, of recording the entire backscattered signal of each emitted laser pulse. FWF systems can provide point clouds that contain extra attributes such as amplitude and echo width. In this study, a FWF dataset collected in 2010 over Eisenstadt, a city in the eastern part of Austria, was used to classify four main classes: buildings, trees, waterbody and ground, by employing a decision tree. Point density, echo ratio, echo width, the normalised digital surface model and point cloud roughness are the main inputs for classification. The accuracy of the final results, in terms of correctness and completeness measures, was assessed by comparing the classified output to a knowledge-based labelling of the points. Completeness and correctness between 90% and 97% were reached, depending on the class. While such results and methods have been presented before, we additionally investigate the transferability of the classification method (features, thresholds …) to another urban FWF LiDAR point cloud. Our conclusion is that, of the features used, only echo width requires new thresholds. A data-driven adaptation of thresholds is suggested.
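A decision tree over such per-point features reduces to a short cascade of threshold tests. The Python sketch below is purely illustrative: the thresholds are invented placeholders, since the paper's actual values are data-set dependent (and echo width in particular had to be re-tuned for the second data set).

    def classify_point(point_density, echo_ratio, echo_width, ndsm, roughness):
        # Toy rule cascade in the spirit of the paper's decision tree.
        # All thresholds below are placeholders, not the published values.
        if point_density < 1.0:          # laser drop-outs are typical over water
            return "waterbody"
        if ndsm < 0.5:                   # close to the terrain surface
            return "ground"
        if echo_ratio > 0.95 and roughness < 0.1:
            return "building"            # opaque, smooth surfaces
        return "tree"                    # penetrable, rough vegetation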


Author(s):  
H.-J. Przybilla ◽  
M. Lindstaedt ◽  
T. Kersten

The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (IXU 1000) was used to capture UAV image data in RAW format. This investigation aims at evaluating the influence of the image data format on point clouds generated by a Dense Image Matching process. Furthermore, the effects of the different data filters that are part of the evaluation programs were considered. The processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of the generated TIF and JPG data sets. The resulting point clouds are the basis for the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height deviations of the dense point clouds were determined at check points. Differences could be detected between the results generated by the two software packages. The reasons for these differences are the filtering settings used for the generation of the dense point clouds. It can also be assumed that there are differences in the point cloud generation algorithms implemented in the two software packages. The slightly compressed JPG image data used for point cloud generation did not show any significant change in the quality of the examined point clouds compared to the uncompressed TIF data sets.
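The area-based comparison against the terrestrial laser scanning reference can be summarised, for each test area, by the statistics of nearest-neighbour distances from the image-based cloud to the reference cloud. The Python sketch below illustrates this kind of comparison for a TIF-based and a JPG-based cloud; the variable names and the use of SciPy are assumptions, not details of the authors' workflow.

    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_reference_stats(cloud, reference):
        # Unsigned nearest-neighbour distances from a dense image-matching
        # cloud to the TLS reference over one test area (N x 3 arrays).
        d, _ = cKDTree(reference).query(cloud, k=1)
        return d.mean(), d.std(), np.sqrt(np.mean(d ** 2))

    # e.g. compare the TIF- and JPG-based clouds against the same TLS patch:
    # stats_tif = cloud_to_reference_stats(pts_tif, pts_tls)
    # stats_jpg = cloud_to_reference_stats(pts_jpg, pts_tls)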


2021 ◽  
Author(s):  
DIOGO GARCIA ◽  
Andre Souto ◽  
Gustavo Sandri ◽  
Tomas Borges ◽  
Ricardo Queiroz

Geometry-based point cloud compression (G-PCC) has been rapidly evolving in the context of international standards. Despite the inherent scalability of the octree-based geometry description, current G-PCC attribute compression techniques prevent full scalability for compressed point clouds. In this paper, we present a solution that adds scalability to attributes compressed using the region-adaptive hierarchical transform (RAHT), enabling the reconstruction of the point cloud using only a portion of the original bitstream. Without the full geometry information, one cannot compute the weights on which RAHT relies to calculate its coefficients for further levels of detail. In order to overcome this problem, we propose a linear approximation relating the downsampled point cloud to the truncated inverse RAHT coefficients at that same level. The linear relationship parameters are sent as side information. After truncating the bitstream at a point corresponding to a given octree level, we can then recreate the attributes at that level. Tests were carried out, and the results attest to the good approximation quality of the proposed technique.
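At its core, the side information consists of the parameters of a per-level linear fit between the attributes reconstructed from the truncated bitstream and the attributes of the corresponding downsampled cloud. The Python sketch below shows a simple scalar least-squares version of such a fit as an illustration only; it is a loose reading of the abstract, not the scheme as standardised or as implemented by the authors.

    import numpy as np

    def fit_level_scaling(truncated_attrs, downsampled_attrs):
        # Fit y ~ a*x + b, where x are attribute values reconstructed from the
        # truncated inverse RAHT at a given octree level and y are the
        # attributes of the octree-downsampled cloud at that level.
        # The fitted (a, b) would be transmitted as side information.
        A = np.column_stack([truncated_attrs, np.ones(len(truncated_attrs))])
        sol, *_ = np.linalg.lstsq(A, downsampled_attrs, rcond=None)
        a, b = sol
        return a, b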


2011 ◽  
Vol 6 ◽  
pp. 370-375
Author(s):  
Sebastian Vetter ◽  
Gunnar Siedler

Digital stereo-photogrammetry allows the automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarity of stereoscopic image pairs, correlation techniques provide measurements of sub-pixel precision for corresponding image points. With the help of an automated point-search algorithm across the image sets, identical points are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. By using proper filtering strategies, incorrect points are removed and the relative orientation of each stereo model can be computed automatically. With the help of 3D reference points, distances on the object, or a defined camera-base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm makes it possible to scan the object surface automatically. The result is a three-dimensional point cloud whose resolution depends on the image quality. With the integration of the iterative closest point (ICP) algorithm, these partial point clouds are fitted into a total point cloud; in this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. Texturing can be performed automatically using the images that were used to scan the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with full-frame sensors, high accuracy can be achieved. A major advantage is the possibility of controlling the accuracy and quality of the 3D object documentation via the resolution of the images. The procedure described here is implemented in the software Metigo 3D.
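The correlation step mentioned above scores candidate matches between image patches; the integer-pixel peak of the score is then refined to sub-pixel precision, for example by fitting a parabola to the scores around the peak. The Python sketch below shows the normalised cross-correlation of two patches as a minimal illustration; it is not the implementation used in Metigo 3D.

    import numpy as np

    def normalized_cross_correlation(patch_a, patch_b):
        # NCC score in [-1, 1] between two equally sized grey-value patches.
        # Maximising this score over a search window yields the integer
        # disparity, which can then be refined to sub-pixel precision.
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0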

