Surface Reconstruction for Three-Dimensional Rockfall Volumetric Analysis

2019 ◽  
Vol 8 (12) ◽  
pp. 548 ◽  
Author(s):  
David Bonneau ◽  
Paul-Mark DiFrancesco ◽  
D. Jean Hutchinson

Laser scanning is routinely used for the characterization and management of rockfall hazards. A key component of many studies is the ability to use high-resolution topographic datasets for detailed volume estimates. 2.5-dimensional (2.5D) approaches exist to estimate the volume of rockfall events; however, these approaches require rasterization of the point cloud. The resulting 2.5D volume estimates are therefore sensitive to the choice of an appropriate cell size that preserves resolution while minimizing interpolation, especially for lower-volume rockfall events. To overcome the limitations of working with 2.5D raster datasets, surface reconstruction methods originating from the field of computational geometry can be implemented to assess the volume of rockfalls in 3D. In this technical note, the authors address the methods and implications of how the surfaces of 3D rockfall objects, derived from sequential terrestrial laser scans (TLS), are reconstructed for volumetric analysis. The Power Crust, Convex Hull and Alpha-shape algorithms are implemented to reconstruct a synthetic rockfall object generated in Houdini, a procedural modeling and animation software package. The reconstruction algorithms are also applied to three rockfall case studies from the White Canyon, British Columbia, Canada. The authors find that there is a trade-off between accurate surface topology reconstruction and ensuring that the mesh is a watertight manifold, which is required for accurate volumetric estimates. Power Crust is shown to be the most robust algorithm; however, the iterative Alpha-shape approach introduced in the study is also shown to strike a balance between hole-filling and loss of detail.
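As a concrete illustration of the simplest of the three reconstructions, the convex hull volume of a 3D point cloud can be computed directly with SciPy. This is a generic sketch, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_volume(points: np.ndarray) -> float:
    """Volume enclosed by the convex hull of a 3D point cloud
    (in cubic units of the input coordinates)."""
    return ConvexHull(points).volume

# Example: the 8 corners of a unit cube enclose a volume of 1.0.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
print(convex_hull_volume(cube))  # → 1.0
```

Because the convex hull necessarily overestimates the volume of concave objects, concavity-aware methods such as Alpha-shapes and Power Crust are needed for realistic rockfall geometry, which is the trade-off the note examines.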

Micromachines ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 164 ◽
Author(s):  
Dongxu Wu ◽  
Fusheng Liang ◽  
Chengwei Kang ◽  
Fengzhou Fang

Optical interferometry plays an important role in topographical surface measurement and characterization in precision/ultra-precision manufacturing. An appropriate surface reconstruction algorithm is essential for obtaining accurate topography information from the digitized interferograms. However, the performance of a surface reconstruction algorithm in interferometric measurements is influenced by environmental disturbances and system noise. This paper presents a comparative analysis of three algorithms commonly used for coherence envelope detection in vertical scanning interferometry: the centroid method, the fast Fourier transform (FFT) method, and the Hilbert transform (HT) method. Numerical analysis and experimental studies were carried out to evaluate the performance of the different envelope detection algorithms in terms of measurement accuracy, speed, and noise resistance. Step height standards were measured using a developed interferometer, and the step profiles were reconstructed by the different algorithms. The results show that the centroid method has a higher measurement speed than the FFT and HT methods, but it provides acceptable measurement accuracy only at low noise levels. The FFT and HT methods outperform the centroid method in terms of noise immunity and measurement accuracy. While the FFT and HT methods provide similar measurement accuracy, the HT method offers a superior measurement speed compared to the FFT method.
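The HT method can be sketched in a few lines: the magnitude of the analytic signal is the coherence envelope, and its peak locates the surface height at a pixel. The wavelength, coherence length, and scan parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import hilbert

# Simulated vertical-scanning interferogram: a Gaussian coherence envelope
# centred at z0 modulating a carrier fringe.
z = np.linspace(0.0, 10.0, 4001)         # scan positions, micrometres
z0, wavelength, l_c = 5.3, 0.6, 1.2      # envelope centre, carrier, coherence length
signal = np.exp(-((z - z0) / l_c) ** 2) * np.cos(4 * np.pi * (z - z0) / wavelength)

# HT envelope detection: |analytic signal| recovers the coherence envelope;
# its peak gives the surface height at this pixel.
envelope = np.abs(hilbert(signal))
z_surface = z[np.argmax(envelope)]
print(round(z_surface, 2))  # ≈ 5.3
```

In practice the peak position is usually refined by sub-sample interpolation around the envelope maximum, since the scan step limits the raw resolution.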


2021 ◽  
Vol 10 (3) ◽  
pp. 157 ◽
Author(s):  
Paul-Mark DiFrancesco ◽  
David A. Bonneau ◽  
D. Jean Hutchinson

Key to the quantification of rockfall hazard is an understanding of its magnitude-frequency behaviour. Remote sensing has allowed for the accurate observation of rockfall activity, with methods being developed for digitally assembling the monitored occurrences into a rockfall database. A prevalent challenge is the quantification of rockfall volume while fully considering the 3D information stored in each of the extracted rockfall point clouds. Surface reconstruction is utilized to construct a 3D digital surface representation, allowing the volume of space that a point cloud occupies to be estimated. Given various point cloud imperfections, it is difficult for reconstruction methods to generate digital surface representations of rockfall with detailed geometry and correct topology. In this study, we tested four different computational-geometry-based surface reconstruction methods on a database comprising 3668 rockfalls. The database was derived from a 5-year LiDAR monitoring campaign of an active rock slope in interior British Columbia, Canada. Each method resulted in a different magnitude-frequency distribution of rockfall. The implications of 3D volume estimation were demonstrated using surface mesh visualization, cumulative magnitude-frequency plots, power-law fitting, and projected annual frequencies of rockfall occurrence. The 3D volume estimation methods caused a notable shift in the magnitude-frequency relations, while the power-law scaling parameters remained relatively similar. We determined that the optimal 3D volume calculation approach is a hybrid methodology combining the Power Crust reconstruction and the Alpha Solid reconstruction. The Alpha Solid approach is used on small-scale point clouds characterized by high curvature relative to their sampling density, which challenges the Power Crust sampling assumptions.
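The power-law fitting step can be sketched as a least-squares fit of N(≥V) = a·V⁻ᵇ in log-log space. This is a minimal stand-in; the study's exact fitting procedure is not reproduced here, and the catalogue below is synthetic:

```python
import numpy as np

def fit_power_law(volumes: np.ndarray):
    """Fit N(>=V) = a * V**(-b) to a cumulative magnitude-frequency curve.
    Returns (a, b) from a least-squares fit in log-log space."""
    v = np.sort(np.asarray(volumes, float))[::-1]  # volumes, descending
    n_cum = np.arange(1, v.size + 1)               # cumulative count >= each volume
    slope, intercept = np.polyfit(np.log10(v), np.log10(n_cum), 1)
    return 10.0 ** intercept, -slope

# Synthetic catalogue drawn from an exact power law with scaling exponent b = 0.7
# (volumes in m^3, purely illustrative):
k = np.arange(1, 1001)
volumes = 0.01 * (1000.0 / k) ** (1.0 / 0.7)
a, b = fit_power_law(volumes)
print(round(b, 2))  # → 0.7
```

Shifts in the volume estimates move the curve horizontally (changing a) while leaving the slope b nearly unchanged, consistent with the study's observation that the scaling parameters remained relatively similar.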


2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
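The soft-threshold (shrinkage) operator at the core of STF is simple to state: each coefficient is shrunk toward zero by the threshold t. A minimal sketch of the generic operator (the paper's specific TDM-STF combination is not reproduced):

```python
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    """Soft-threshold operator: shrink each coefficient toward zero by t,
    zeroing anything with magnitude below t. Promotes sparsity in
    CS-based reconstruction."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.4, 2.0])
print(soft_threshold(x, 1.0))
```

Applied with t = 1.0, the example maps the input to [-2, 0, 0, 0, 1]: large coefficients survive (shrunk by t) while small ones are set to zero.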


2013 ◽  
Vol 2013 ◽  
pp. 1-14
Author(s):  
Joshua Kim ◽  
Huaiqun Guan ◽  
David Gersten ◽  
Tiezhi Zhang

Tetrahedron beam computed tomography (TBCT) performs volumetric imaging using a stack of fan beams generated by a multiple-pixel X-ray source. While the TBCT system was designed to overcome the scatter and detector issues faced by cone beam computed tomography (CBCT), it still suffers from the same large cone-angle artifacts as CBCT due to the use of approximate reconstruction algorithms. It has been shown that iterative reconstruction algorithms are better able to model irregular system geometries and that algebraic iterative algorithms in particular are able to reduce cone artifacts appearing at large cone angles. In this paper, the SART algorithm is modified for use with the different TBCT geometries and is tested using both simulated projection data and data acquired with the TBCT benchtop system. The modified SART reconstruction algorithms were able to mitigate the effects of using data generated at large cone angles and to reconstruct CT images without introducing artifacts due to either longitudinal or transverse truncation in the data sets. Algebraic iterative reconstruction can be especially useful for dual-source dual-detector TBCT, wherein the cone angle is largest in the center of the field of view.
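The basic SART update on which such modifications build can be written as x ← x + λ·D_c·Aᵀ·D_r·(b − Ax), where D_r and D_c normalize by ray and pixel weight sums. Below is a toy, geometry-agnostic sketch; the paper's adaptations to the TBCT fan-beam stack geometry are not reproduced:

```python
import numpy as np

def sart(A: np.ndarray, b: np.ndarray, n_iter: int = 200, relax: float = 1.0):
    """Single-subset SART for a dense system matrix A (rays x pixels)
    and measured projections b. Returns the reconstructed image x."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    row_sums = A.sum(axis=1)   # total weight along each ray
    col_sums = A.sum(axis=0)   # total weight through each pixel
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums          # ray-wise normalised residual
        x += relax * (A.T @ residual) / col_sums   # back-project and update
    return x

# Tiny consistent system: the iterates converge to the true image.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_hat = sart(A, A @ x_true)
print(np.round(x_hat, 3))  # ≈ [1. 2. 3.]
```

In a real scanner the system matrix is never formed densely; forward and back projections are computed on the fly, which is where the geometry modelling that motivates the paper's modifications enters.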


2015 ◽  
Vol 12 (5) ◽  
pp. 1629-1634 ◽  
Author(s):  
T. Hakala ◽  
O. Nevalainen ◽  
S. Kaasalainen ◽  
R. Mäkipää

Abstract. We present an empirical application of multispectral laser scanning for monitoring seasonal and spatial changes in pine chlorophyll (a + b) content and for upscaling accurate leaf-level chlorophyll measurements to the branch and tree levels. The results show the capability of the new instrument to monitor changes in the shape and physiology of the tree canopy: the spectral indices retrieved from the multispectral point cloud agree with laboratory measurements of chlorophyll a and b content. The approach opens new prospects for replacing destructive and labour-intensive manual sampling with remote observations of tree physiology.
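Spectral indices of the kind retrieved from a multispectral point cloud are typically normalized differences of two backscatter channels. The generic form can be sketched as follows; the channel wavelengths, calibration, and the specific index used in the study are not assumed here:

```python
import numpy as np

def normalized_difference_index(i_a, i_b) -> np.ndarray:
    """Per-point normalized difference of two backscatter intensity
    channels, (I_a - I_b) / (I_a + I_b). With suitably chosen channels,
    such indices track pigment content such as chlorophyll."""
    i_a = np.asarray(i_a, float)
    i_b = np.asarray(i_b, float)
    return (i_a - i_b) / (i_a + i_b)

# Illustrative intensities for four points in two channels:
print(normalized_difference_index([0.6, 0.5, 0.4, 0.3],
                                  [0.2, 0.25, 0.3, 0.3]))
```

Averaging the per-point index over the points belonging to a branch or a whole tree is one straightforward way to upscale leaf-level values, as the study describes.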


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3701 ◽  
Author(s):  
Jin Zheng ◽  
Jinku Li ◽  
Yi Li ◽  
Lihui Peng

Electrical Capacitance Tomography (ECT) image reconstruction has developed over decades and achieved a great deal, but there is still a need for a new theoretical framework to make it better and faster. In recent years, machine learning theory has been introduced into the ECT area to solve the image reconstruction problem. However, there is still no public benchmark dataset in the ECT field for the training and testing of machine learning-based image reconstruction algorithms. A public benchmark dataset can provide a standard framework to evaluate and compare the results of different image reconstruction methods. In this paper, a benchmark dataset for ECT image reconstruction is presented. Like the great contribution of ImageNet, which transformed machine learning research, this benchmark dataset is intended to help the community investigate new image reconstruction algorithms, since the relationship between permittivity distribution and capacitance can be better mapped. In addition, different machine learning-based image reconstruction algorithms can be trained and tested on the unified dataset, and the results can be evaluated and compared under the same standard, making ECT image reconstruction research more open and enabling breakthroughs.
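Evaluating reconstructions "under the same standard" usually means scoring each reconstructed permittivity map against the ground truth with common metrics, such as relative image error and correlation coefficient. A minimal sketch (the dataset's own evaluation protocol is not specified here):

```python
import numpy as np

def image_error(g_rec: np.ndarray, g_true: np.ndarray) -> float:
    """Relative image error ||g_rec - g_true|| / ||g_true||:
    lower is better."""
    return float(np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true))

def correlation(g_rec: np.ndarray, g_true: np.ndarray) -> float:
    """Pearson correlation between reconstructed and true maps:
    closer to 1 is better."""
    return float(np.corrcoef(g_rec.ravel(), g_true.ravel())[0, 1])

# Illustrative 4-pixel ground truth and reconstruction:
g_true = np.array([0.0, 0.0, 1.0, 1.0])
g_rec = np.array([0.1, 0.0, 0.9, 1.0])
print(round(image_error(g_rec, g_true), 3))  # → 0.1
```

Reporting both metrics over every phantom in a shared dataset is what makes results from different reconstruction algorithms directly comparable.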


2021 ◽  
Vol 13 (19) ◽  
pp. 3796 ◽
Author(s):  
Lei Fan ◽  
Yuanzhi Cai

Laser scanning is a popular means of acquiring indoor scene data of buildings for a wide range of applications concerning the indoor environment. During data acquisition, unwanted data points beyond the indoor space of interest can also be recorded due to the presence of openings, such as windows and doors, on walls. For better visualization and further modeling, it is beneficial to filter out those data points, which is often done manually in practice. To automate this process, an efficient image-based filtering approach was explored in this research. In this approach, a binary mask image is created and updated through mathematical morphology operations, hole filling and connectivity analysis. The final mask is used to remove the data points located outside the indoor space of interest. The application of the approach to several point cloud datasets confirms its ability to effectively keep the data points in the indoor space of interest, with an average precision of 99.50%. The application cases also demonstrate the computational efficiency (at most 0.53 s) of the proposed approach.
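The morphology, hole-filling and connectivity steps can be sketched with `scipy.ndimage` on a 2D occupancy image of the projected points. This is a generic illustration; the paper's exact operations and parameters may differ:

```python
import numpy as np
from scipy import ndimage

def indoor_mask(occupancy: np.ndarray, closing_size: int = 3) -> np.ndarray:
    """Binary mask of the indoor space of interest: close small gaps,
    fill holes left by openings (windows, doors), then keep only the
    largest connected region."""
    structure = np.ones((closing_size, closing_size), dtype=bool)
    mask = ndimage.binary_closing(occupancy, structure=structure)
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# Toy occupancy image: a room with an opening plus a stray outdoor return.
img = np.zeros((12, 12), dtype=bool)
img[2:9, 2:9] = True      # room footprint
img[4:6, 4:6] = False     # gap from a window/door opening
img[10, 10] = True        # isolated point beyond the room
mask = indoor_mask(img)
print(mask[4, 4], mask[10, 10])  # → True False
```

Points are then kept or discarded by looking up each point's projected pixel in the final mask, which is what makes the filtering fast.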


2007 ◽  
Vol 94 (8) ◽  
pp. 623-630 ◽  
Author(s):  
Hanns-Christian Gunga ◽  
Tim Suthau ◽  
Anke Bellmann ◽  
Andreas Friedrich ◽  
Thomas Schwanebeck ◽  
...  
