Efficiently registering scan point clouds of 3D printed parts for shape accuracy assessment and modeling

2020 ◽  
Vol 56 ◽  
pp. 587-597 ◽  
Author(s):  
Nathan Decker ◽  
Yuanxiang Wang ◽  
Qiang Huang


Author(s):
Andrzej Gessner ◽  
Roman Staniek

The publication demonstrates an accuracy assessment method for machine tool body castings that uses an optical scanner and a reference design of the machine tool body. The process makes it possible to assess the shape accuracy of the casting and to determine whether the allowances on all work surfaces are large enough for proper machining in accordance with the construction design. The described method dispenses with an arduous manual operation, marking out, which, depending on the size and complexity of a prototype casting, may take several working shifts. In the case of large and elaborate casts, such as those of machine tool bodies, marking out is often restricted to the first cast produced in a given casting mold. This practice rests on the assumption that casting is reproducible, so there is no need to assess every individual cast. While this approach saves time, it often means that casting errors (allowance shifts or insufficiencies) are detected late, during the actual machining process, which in turn causes considerable losses due to disruption of the work process and often demands cast repair. The aim of the present study is to introduce a new technological premise that dispenses with manual marking out and allows fast verification of cast shapes.
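The allowance check described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes the scan point cloud has already been registered to the reference design, and it checks one planar work surface by signed point-to-plane distance (all function names and values are hypothetical).

```python
# Hypothetical sketch: verify machining allowances on a registered scan.
# Assumes the scan is already aligned to the reference design, so each
# work surface can be checked by signed point-to-plane distance.

def signed_distance(point, plane_point, plane_normal):
    """Signed distance of a point from a plane (unit normal points out of the stock)."""
    return sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))

def allowance_sufficient(scan_points, plane_point, plane_normal, required_allowance):
    """True if every scanned point on this work surface leaves at least the
    required machining allowance above the nominal (finished) plane."""
    distances = [signed_distance(p, plane_point, plane_normal) for p in scan_points]
    return min(distances) >= required_allowance

# Example: nominal finished surface is the z = 0 plane, 3 mm allowance required.
scan = [(0.0, 0.0, 4.1), (10.0, 5.0, 3.6), (20.0, 2.0, 3.2)]
print(allowance_sufficient(scan, (0, 0, 0), (0, 0, 1), 3.0))  # → True
```

A real pipeline would first segment the scan into per-surface patches and register it to the CAD model; the check itself reduces to the minimum signed deviation per surface, as above.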


Author(s):  
T. Guo ◽  
A. Capra ◽  
M. Troyer ◽  
A. Gruen ◽  
A. J. Brooks ◽  
...  

Recent advances in the automation of photogrammetric 3D modelling software have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments, such as underwater, using simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in the single-medium case. This study is part of a larger project on 3D measurements of temporal change in coral cover in tropical waters. It compares the accuracy of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras. A precisely measured calibration frame was placed in the target scene to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) of various shapes were arranged both in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of point cloud generation by comparing the point clouds of the individual objects with the same objects measured by the system camera in air (the best possible values). At a working distance of about 1.5 m, the GoPro camera achieved a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
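The relative-accuracy comparison of two point clouds can be illustrated with a brute-force cloud-to-cloud distance. A hedged sketch, not the authors' pipeline; production code would use a k-d tree for the nearest-neighbour search, and the clouds here are tiny made-up examples.

```python
# Illustrative sketch: relative accuracy of a test point cloud against a
# reference cloud via brute-force nearest-neighbour point-to-point distances.
import math

def nn_distance(p, cloud):
    """Euclidean distance from p to its nearest neighbour in cloud."""
    return min(math.dist(p, q) for q in cloud)

def mean_cloud_distance(test_cloud, reference_cloud):
    """Mean nearest-neighbour distance, a simple relative-accuracy measure."""
    return sum(nn_distance(p, reference_cloud) for p in test_cloud) / len(test_cloud)

reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
test = [(0.001, 0.0, 0.0), (1.0, 0.002, 0.0)]
print(round(mean_cloud_distance(test, reference), 4))  # → 0.0015
```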


2020 ◽  
Vol 9 (3) ◽  
pp. 832 ◽  
Author(s):  
Dave Chamo ◽  
Bilal Msallem ◽  
Neha Sharma ◽  
Soheila Aghlmandi ◽  
Christoph Kunz ◽  
...  

The use of patient-specific implants (PSIs) in craniofacial surgery is often limited by a lack of expertise and/or production costs. Therefore, a simple and cost-efficient template-based fabrication workflow has been developed to overcome these disadvantages. The aim of this study is to assess the accuracy of PSIs made from their original templates. For a representative cranial defect (CRD) and a temporo-orbital defect (TOD), ten PSIs were made from polymethylmethacrylate (PMMA) using computer-aided design (CAD) and three-dimensional (3D) printing technology. These customized implants were measured and compared with their original 3D printed templates. The implants for the CRD showed root mean square (RMS) values ranging from 0.469 to 1.128 mm, with a median RMS (Q1 to Q3) of 0.574 (0.528 to 0.701) mm. Those for the TOD showed RMS values ranging from 0.630 to 1.079 mm, with a median RMS (Q1 to Q3) of 0.843 (0.635 to 0.943) mm. This study demonstrates that highly precise duplication of PSIs can be achieved using this template-molding workflow. Thus, virtually planned implants can be accurately transferred into haptic PSIs. This workflow appears to offer a sophisticated solution for craniofacial reconstruction and continues to prove itself in daily clinical practice.
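The statistics reported above (per-implant RMS, then median with quartiles across implants) can be sketched as follows. Only the two end points of the CRD range come from the study; the intermediate values and the per-point deviations are made up for illustration.

```python
# Sketch of the reported statistics: RMS of surface deviations per implant,
# then median and quartiles over the set of implants.
import math
import statistics

def rms(deviations):
    """Root mean square of surface deviations (mm)."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

# Hypothetical RMS value per implant; only 0.469 and 1.128 are from the study.
implant_rms = [0.469, 0.51, 0.53, 0.55, 0.60, 0.65, 0.70, 0.72, 0.90, 1.128]
q1, median, q3 = statistics.quantiles(implant_rms, n=4)
print(round(median, 3))  # → 0.625
```

Note that `statistics.quantiles` defaults to the exclusive method; the quartile convention used in the paper is not stated, so the exact Q1/Q3 values may differ.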


Drones ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 6 ◽  
Author(s):  
Ryan G. Howell ◽  
Ryan R. Jensen ◽  
Steven L. Petersen ◽  
Randy T. Larsen

In situ measurements of sagebrush have traditionally been expensive and time consuming. Currently, improvements in small Unmanned Aerial System (sUAS) technology make it possible to quantify sagebrush morphology and community structure with high-resolution imagery on western rangelands, especially in sensitive habitat of the Greater sage-grouse (Centrocercus urophasianus). The emergence of photogrammetry algorithms that generate 3D point clouds from true-color imagery can potentially increase the efficiency and accuracy of measuring shrub height in sage-grouse habitat. Our objective was to determine optimal parameters for measuring sagebrush height, including flight altitude, single- vs. double-pass flight, and continuous vs. paused capture. We acquired imagery using a DJI Mavic Pro 2 multi-rotor Unmanned Aerial Vehicle (UAV) equipped with an RGB camera, flown at 30.5, 45, 75, and 120 m, implementing single-pass and double-pass methods with both continuous and paused flight for each photo method. We generated a Digital Surface Model (DSM) from which we derived plant height, and then performed an accuracy assessment using on-the-ground measurements taken at the time of flight. We found high correlation between field-measured and estimated heights, with a mean difference of approximately 10 cm (SE = 0.4 cm) and little variability in accuracy between flights at different altitudes and with different parameters after statistical correction using linear regression. We conclude that higher-altitude flights using a single-pass method are optimal for measuring sagebrush height due to lower data storage and processing time requirements.
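The height-derivation and correction steps above can be sketched in code: plant height as DSM minus bare-ground elevation, then an ordinary least-squares fit of field heights against UAV estimates. All numbers are illustrative, not the study's data.

```python
# Hedged sketch: shrub height from a DSM, then linear-regression correction
# of the UAV estimates against field measurements (pure Python, toy values).

def heights_from_dsm(dsm, ground):
    """Per-cell plant height: surface elevation minus bare-ground elevation."""
    return [s - g for s, g in zip(dsm, ground)]

def linear_fit(x, y):
    """Ordinary least-squares slope a and intercept b for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

estimated = heights_from_dsm([101.3, 100.9, 101.6], [100.5, 100.4, 100.7])
field = [0.9, 0.6, 1.0]  # hypothetical tape-measured heights (m)
a, b = linear_fit(estimated, field)
corrected = [a * h + b for h in estimated]
```

In practice the bare-ground surface would itself come from a classified point cloud or a separate terrain model; the regression is fitted once per survey and applied to all flights.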


Author(s):  
B. Sirmacek ◽  
R. Lindenbergh

Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of such models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method that uses multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ₁ = 0.44 m, σ₁ = 0.071 m) for the iPhone point cloud and (μ₂ = 0.025 m, σ₂ = 0.037 m) for the TLS point cloud. Our experimental results indicate that the proposed automatic 3D model generation framework could be used for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection. However, further insights should first be obtained into the circumstances needed to guarantee successful point cloud generation from smartphone images.
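Two of the quality measures above, the outlier ratio and local roughness, can be sketched as follows. This is an illustration, not the authors' code: roughness is usually the distance to a plane fitted to each point's neighbourhood, which the z-spread proxy below merely stands in for.

```python
# Illustrative sketch: outlier ratio under a distance threshold, and a crude
# local-roughness proxy (std of z over each point's xy-neighbourhood).
import math
import statistics

def outlier_ratio(distances, threshold):
    """Fraction of points whose cloud-to-cloud distance exceeds the threshold."""
    return sum(d > threshold for d in distances) / len(distances)

def local_roughness(points, radius):
    """Per-point roughness: std of z over the neighbourhood within radius
    (a simplification of fitting a local plane and measuring residuals)."""
    rough = []
    for p in points:
        zs = [q[2] for q in points if math.dist(p[:2], q[:2]) <= radius]
        rough.append(statistics.pstdev(zs) if len(zs) > 1 else 0.0)
    return rough
```

A histogram of the `local_roughness` values per cloud then yields the μ and σ statistics quoted above.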


Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to the real-time acquisition of high-resolution geospatial information and the automatic 3D modelling of objects for numerous applications, such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. Overlapping images are first captured using a quadcopter and then processed with the different software packages to generate point clouds and DSMs. To evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.


2020 ◽  
Vol 12 (6) ◽  
pp. 986 ◽  
Author(s):  
Gottfried Mandlburger ◽  
Martin Pfennigbauer ◽  
Roland Schwarz ◽  
Sebastian Flöry ◽  
Lukas Nussbaumer

We present the sensor concept and first performance and accuracy assessment results of a novel lightweight topo-bathymetric laser scanner designed for integration on Unmanned Aerial Vehicles (UAVs), light aircraft, and helicopters. The instrument is particularly well suited for capturing river bathymetry in high spatial resolution as a consequence of (i) the low nominal flying altitude of 50–150 m above ground level, resulting in a laser footprint diameter on the ground of typically 10–30 cm, and (ii) the high pulse repetition rate of up to 200 kHz, yielding a point density on the ground of approximately 20–50 points/m². The instrument features online waveform processing and additionally stores the full waveform within the entire range gate for waveform analysis in post-processing. The sensor was tested in a real-world environment by acquiring data from two freshwater ponds and a 500 m section of the pre-Alpine Pielach River (Lower Austria). The captured underwater points featured a maximum penetration of two times the Secchi depth. On dry land, the 3D point clouds exhibited (i) measurement noise in the range of 1–3 mm; (ii) a fitting precision of redundantly captured flight strips of 1 cm; and (iii) an absolute accuracy of 2–3 cm compared to terrestrially surveyed checkerboard targets. A comparison of the refraction-corrected LiDAR point cloud with independent underwater checkpoints exhibited a maximum deviation of 7.8 cm and revealed a systematic depth-dependent error when using a refraction coefficient of n = 1.36 for time-of-flight correction. The bias is attributed to multi-path effects in the turbid water column (Secchi depth: 1.1 m) caused by forward scattering of the laser signal at suspended particles. Due to the high spatial resolution, good depth performance, and accuracy, the sensor shows high potential for applications in hydrology, fluvial morphology, and hydraulic engineering, including flood simulation, sediment transport modeling, and habitat mapping.
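The refraction correction mentioned above can be sketched for the simplified case of a flat water surface: the beam bends according to Snell's law and travels more slowly in water, so the apparent in-water range is scaled by 1/n. This illustrates the principle only, not the sensor's actual processing chain.

```python
# Hedged sketch of a bathymetric LiDAR refraction correction for a flat
# water surface, using the refraction coefficient n = 1.36 quoted above.
import math

N_WATER = 1.36  # refraction coefficient for time-of-flight correction

def correct_underwater_point(range_in_water, incidence_deg, n=N_WATER):
    """Return (horizontal_offset, depth) of the corrected underwater point.

    range_in_water: apparent slant range measured below the water surface;
    incidence_deg: laser incidence angle in air, measured from the vertical.
    """
    # Snell's law: sin(theta_air) = n * sin(theta_water)
    theta_w = math.asin(math.sin(math.radians(incidence_deg)) / n)
    true_range = range_in_water / n  # slower propagation in water
    return true_range * math.sin(theta_w), true_range * math.cos(theta_w)

x, d = correct_underwater_point(2.0, 15.0)
```

The depth-dependent bias reported in the study is precisely what remains after this kind of geometric correction, attributed to forward scattering in the turbid water column.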


Geosciences ◽  
2019 ◽  
Vol 9 (7) ◽  
pp. 323 ◽  
Author(s):  
Gordana Jakovljevic ◽  
Miro Govedarica ◽  
Flor Alvarez-Taboada ◽  
Vladimir Pajic

The digital elevation model (DEM) is frequently used for the reduction and management of flood risk. Various classification methods have been developed to extract DEMs from point clouds; however, their accuracy and computational efficiency need to be improved. The objectives of this study were as follows: (1) to determine the suitability of a new method for producing DEMs from unmanned aerial vehicle (UAV) and light detection and ranging (LiDAR) data, using raw point cloud classification and ground point filtering based on deep learning with neural networks (NNs); (2) to test the convenience of rebalancing datasets for point cloud classification; (3) to evaluate the effect of land cover class on algorithm performance and elevation accuracy; and (4) to assess the usability of LiDAR and UAV structure from motion (SfM) DEMs in flood risk mapping. In this paper, a new method of raw point cloud classification and ground point filtering based on deep learning using NNs is proposed and tested on LiDAR and UAV data. The NN was trained on approximately 6 million points from which local and global geometric features and intensity data were extracted. Pixel-by-pixel accuracy assessment and visual inspection confirmed that filtering point clouds based on deep learning using NNs is an appropriate technique for ground classification and DEM production: in both the test and validation areas, the ground and non-ground classes achieved high recall (>0.70) and high precision (>0.85), showing that the two classes were well handled by the model. The type of method used for balancing the original dataset did not have a significant influence on the algorithm's accuracy, and it is suggested that none of them be used unless the distributions of the generated and real datasets remain the same. Furthermore, the comparisons between true data and the LiDAR and UAV structure from motion (UAV SfM) point clouds were analyzed, as well as the derived DEMs. The root mean square error (RMSE) and the mean average error (MAE) of the DEM were 0.25 m and 0.05 m, respectively, for LiDAR data, and 0.59 m and –0.28 m, respectively, for UAV data. For all land cover classes, the UAV DEM overestimated the elevation, whereas the LiDAR DEM underestimated it. The accuracy of the LiDAR DEM did not differ significantly across vegetation classes, while for the UAV DEM, the RMSE increased with the height of the vegetation class. The comparison of the inundation areas derived from true, LiDAR, and UAV data for different water levels showed that in all cases the largest differences were obtained for the lowest water level tested, while performance was best for very high water levels. Overall, the approach presented in this work produced DEMs from LiDAR and UAV data with the required accuracy for flood mapping according to European Flood Directive standards. Although LiDAR is the recommended technology for point cloud acquisition, UAV SfM is a suitable alternative in hilly areas.
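The two DEM error measures reported above can be stated compactly. Since the quoted MAE is negative for the UAV data, it appears to be a signed mean error rather than a mean absolute error, and the sketch below (with made-up elevations) computes it that way.

```python
# Minimal sketch of the DEM error measures, from paired (reference, model)
# elevations; the sign of the mean error shows over- or under-estimation.
import math

def rmse(reference, model):
    """Root mean square error of the model elevations (m)."""
    return math.sqrt(sum((m - r) ** 2 for r, m in zip(reference, model)) / len(reference))

def mean_error(reference, model):
    """Signed mean error (m); negative means the model underestimates."""
    return sum(m - r for r, m in zip(reference, model)) / len(reference)
```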

