On the fast approximation of point clouds using Chebyshev polynomials

2021 ◽  
Vol 15 (4) ◽  
pp. 305-317
Author(s):  
Sven Weisbrich ◽  
Georgios Malissiovas ◽  
Frank Neitzel

Abstract Suppose a large and dense point cloud of an object with complex geometry is available that can be approximated by a smooth univariate function. In general, for such point clouds the “best” approximation using the method of least squares is usually hard or sometimes even impossible to compute. In most cases, however, a “near-best” approximation is just as good as the “best”, but usually much easier and faster to calculate. Therefore, a fast approach for the approximation of point clouds using Chebyshev polynomials is described, which is based on an interpolation in the Chebyshev points of the second kind. This allows the unknown coefficients of the polynomial to be calculated by means of the fast Fourier transform (FFT), which can be extremely efficient, especially for high-order polynomials. Thus, the focus of the presented approach is not on sparse point clouds or point clouds that can be approximated by functions with few parameters, but rather on large, dense point clouds whose approximation may require even millions of unknown coefficients to be determined.
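
A minimal sketch of the FFT route described above, assuming the point cloud has already been reduced to samples y_j = f(x_j) taken at the Chebyshev points of the second kind x_j = cos(j*pi/N); the function name and the use of NumPy are illustrative, not taken from the paper.

```python
import numpy as np

def chebyshev_coeffs_fft(samples):
    """Chebyshev coefficients of the interpolant through samples taken at the
    second-kind Chebyshev points x_j = cos(j*pi/N), j = 0..N (ordered from 1 to -1)."""
    n = len(samples) - 1
    vals = np.asarray(samples, dtype=float)
    # even (mirror) extension turns the cosine transform into a plain FFT
    ext = np.concatenate([vals, vals[n - 1:0:-1]])   # length 2N
    spectrum = np.real(np.fft.fft(ext)) / n
    coeffs = spectrum[:n + 1].copy()
    coeffs[0] *= 0.5     # first and last coefficients carry a weight of one half
    coeffs[-1] *= 0.5
    return coeffs

# Example: recover x**3 = 0.75*T_1(x) + 0.25*T_3(x) from 4 samples
x = np.cos(np.pi * np.arange(4) / 3)
print(chebyshev_coeffs_fft(x ** 3))   # approximately [0, 0.75, 0, 0.25]
```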

Author(s):  
Tao Peng ◽  
Satyandra K. Gupta

Point cloud construction using digital fringe projection (PCCDFP) is a non-contact technique for acquiring dense point clouds to represent the 3-D shapes of objects. Most existing PCCDFP systems use projection patterns consisting of straight fringes with fixed fringe pitches. In certain situations, such patterns do not give the best results. In our earlier work, we have shown that in some situations, patterns that use curved fringes with spatial pitch variation can significantly improve the process of constructing point clouds. This paper describes algorithms for automatically generating adaptive projection patterns that use curved fringes with spatial pitch variation to provide improved results for the object being measured. In addition, we also describe the supporting algorithms that are needed for utilizing adaptive projection patterns. Both simulation and physical experiments show that adaptive patterns achieve better performance, in terms of measurement accuracy and coverage, than fixed-pitch straight fringe patterns.
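
As a hedged illustration of what "curved fringes with spatial pitch variation" can look like in practice, the sketch below builds phase-shifted sinusoidal patterns whose local pitch is prescribed by a per-pixel map; the function and parameter names are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def adaptive_fringe_patterns(pitch_map, n_shifts=4):
    """Phase-shifted fringe images whose local pitch (pixels per fringe) follows
    pitch_map, an array of shape (height, width). Letting the pitch vary with
    the row index produces curved fringes instead of straight ones."""
    # integrate the local spatial frequency along x to obtain a smooth phase field
    local_freq = 2.0 * np.pi / np.asarray(pitch_map, dtype=float)
    phase = np.cumsum(local_freq, axis=1)
    shifts = 2.0 * np.pi * np.arange(n_shifts) / n_shifts
    # intensities normalised to [0, 1] for projection
    return [0.5 + 0.5 * np.cos(phase + s) for s in shifts]

# Example: pitch growing smoothly from 8 to 32 pixels across a 480 x 640 pattern
pitch = np.tile(np.linspace(8, 32, 640), (480, 1))
patterns = adaptive_fringe_patterns(pitch)
```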


Author(s):  
Jinglu Wang ◽  
Bo Sun ◽  
Yan Lu

In this paper, we address the problem of reconstructing an object’s surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on the image plane of a viewpoint, making the point cloud convolution-friendly and ordered so that it fits into deep network architectures. The point clouds can be easily triangulated into mesh-based surfaces by exploiting the connectivity of the 2D grids. Second, we propose an encoder-decoder network that generates such multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that interprets discrepancy over 3D surfaces, as opposed to 2D projective planes, by resorting to the surface discretization on the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods by a significant margin on challenging datasets.
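
The grid-aligned representation makes the triangulation step straightforward; the sketch below, an illustrative assumption rather than the paper's code, emits the two triangles per grid cell that connect an H x W view-dependent point cloud into a mesh.

```python
import numpy as np

def grid_to_triangles(height, width):
    """Triangle index list for a point cloud stored as a row-major (height x width) grid.

    Each grid cell is split into two triangles by exploiting the 2D grid
    connectivity, so the regressed point cloud can be meshed directly."""
    idx = np.arange(height * width).reshape(height, width)
    a = idx[:-1, :-1].ravel()   # top-left corner of every cell
    b = idx[:-1, 1:].ravel()    # top-right
    c = idx[1:, :-1].ravel()    # bottom-left
    d = idx[1:, 1:].ravel()     # bottom-right
    upper = np.stack([a, b, c], axis=1)
    lower = np.stack([b, d, c], axis=1)
    return np.concatenate([upper, lower])   # shape: (2*(height-1)*(width-1), 3)
```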


Author(s):  
K. Thoeni ◽  
A. Giacomini ◽  
R. Murtagh ◽  
E. Kniest

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured with a total station. These coordinates were then used to georeference all models. A similar number of images was acquired with each camera from distances of approximately 5 to 10 m, depending on its field of view. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which was taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate their applicability to geotechnical problems.
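
A minimal sketch of the cloud-to-cloud check performed in CloudCompare, assuming both clouds are already georeferenced as (N, 3) arrays; the function name and the use of SciPy's KD-tree are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(camera_cloud, tls_cloud):
    """Nearest-neighbour distance from every camera-derived point to the TLS
    reference cloud; the returned values can be used to colour the point cloud
    by its deviation from the ground truth."""
    tree = cKDTree(np.asarray(tls_cloud))
    distances, _ = tree.query(np.asarray(camera_cloud), k=1)
    return distances
```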


Author(s):  
C. Vasilakos ◽  
S. Chatzistamatis ◽  
O. Roussou ◽  
N. Soulakellis

Abstract. Building damage assessment caused by earthquakes is essential during the response phase following a catastrophic event. Modern techniques include terrestrial and aerial photogrammetry based on the Structure from Motion algorithm and laser scanning, with the latter proving superior in accuracy due to its high-density point clouds. However, standardized procedures during emergency surveys often cannot be followed, due to restrictions on outdoor operations because of debris or decrepit buildings, the large presence of civil protection agencies, the expedited deployment of survey teams and the cost of operations. The aim of this paper is to evaluate whether terrestrial photogrammetry based on a handheld amateur DSLR camera can be used to map building damages and structural deformations and to produce facade models with acceptable accuracy compared to the laser scanning technique. The study area is the Vrisa village, Lesvos, Greece, where a Mw 6.3 earthquake occurred on June 12th, 2017. A dense point cloud was created from digital images based on the Structure from Motion algorithm and compared with a dense point cloud acquired by a laser scanner. The distance measurement and the comparison were conducted with the Multiscale Model to Model Cloud Comparison (M3C2) method. According to the results, the mean of the absolute distances between the two clouds is 0.038 m, while 94.9 % of the point distances are less than 0.1 m. Terrestrial photogrammetry proved to be an accurate methodology for rapid earthquake damage assessment, and its products were used by local authorities for the calculation of compensation for property loss.
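
The reported figures correspond to simple summary statistics over the per-point M3C2 distances; a small sketch of that final step is given below, with the 0.1 m threshold taken from the abstract and everything else illustrative.

```python
import numpy as np

def distance_summary(point_distances, threshold=0.1):
    """Mean absolute cloud-to-cloud distance and the share of points whose
    distance stays below the given threshold (0.1 m in the study above)."""
    d = np.abs(np.asarray(point_distances, dtype=float))
    return {
        "mean_abs_distance_m": float(d.mean()),
        "share_below_threshold": float((d < threshold).mean()),
    }
```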


Author(s):  
L. Gézero ◽  
C. Antunes

In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient means of collecting very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a very fast, detailed and easy way, shows the potential of this technology for cartography and the production of large-scale digital terrain models. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed. These differences can range from a few centimetres to several tens of centimetres, mainly in urban and highly vegetated areas, where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. In addition to its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic and uses only information recorded in standard LAS files, without the need for any auxiliary information, in particular regarding the trajectory.
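
A hedged sketch of the core idea of adjusting points continuously along the trajectory via their GPS time stamps; the correction samples and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def apply_trajectory_correction(points_xyz, point_gps_time, corr_time, corr_xyz):
    """Shift every LiDAR point by a trajectory correction interpolated at the
    point's own GPS time (a field stored in standard LAS files).

    corr_time: (M,) increasing GPS times at which corrections were estimated
    corr_xyz:  (M, 3) XYZ corrections of the trajectory at those times"""
    corrected = np.asarray(points_xyz, dtype=float).copy()
    for axis in range(3):
        corrected[:, axis] += np.interp(point_gps_time, corr_time, corr_xyz[:, axis])
    return corrected
```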


Author(s):  
M. Dahaghin ◽  
F. Samadzadegan ◽  
F. Dadras Javan

Abstract. Thermography is a robust method for detecting thermal irregularities on building roofs, one of the main energy dissipation parts of a building. Recently, UAVs have been shown to be useful for gathering 3D thermal data of building roofs. In this context, the low spatial resolution of thermal imagery is a challenge, as it leads to sparse point clouds. This paper suggests the fusion of visible and thermal point clouds to generate a high-resolution thermal point cloud of the building roofs. For this purpose, camera calibration is performed to obtain the interior orientation parameters, and then the thermal and visible point clouds are generated. In the next step, both point clouds are geo-referenced using control points. To extract building roofs from the visible point cloud, CSF ground filtering is applied and the vegetation layer is removed with the RGBVI index. Afterwards, a predefined threshold is applied to the z-component of the normal vectors in order to separate roof facets from walls. Finally, the visible point cloud of the building roofs and the registered thermal point cloud are combined to generate a fused dense point cloud. Results show a mean re-projection error of 0.31 pixels for the thermal camera calibration and a mean absolute distance of 0.2 m for the point cloud registration. The final product is a fused point cloud whose density is up to twice that of the initial thermal point cloud and which has the spatial accuracy of the visible point cloud along with the thermal information of the building roofs.
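
A minimal sketch of the roof/wall separation step, assuming per-point normals are available; the threshold value and the names are illustrative, not taken from the paper.

```python
import numpy as np

def keep_roof_points(points_xyz, normals, nz_threshold=0.7):
    """Keep points whose unit normal points mostly upward (roof facets) and drop
    near-vertical surfaces (walls), using a threshold on the normal's z-component."""
    normals = np.asarray(normals, dtype=float)
    nz = np.abs(normals[:, 2]) / np.linalg.norm(normals, axis=1)
    return np.asarray(points_xyz)[nz > nz_threshold]
```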


Author(s):  
H.-J. Przybilla ◽  
M. Lindstaedt ◽  
T. Kersten

Abstract. The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (IXU 1000) was used to capture UAV image data in RAW format. This investigation aims to evaluate the influence of the image data format on point clouds generated by a Dense Image Matching process. Furthermore, the effects of different data filters, which are part of the evaluation programs, were considered. The processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of both the generated TIF and JPG data sets. The point clouds generated form the basis of the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height deviations of the dense point clouds were determined using check points. Differences could be detected between the results generated by the two software packages. The reasons for these differences are the filtering settings used for the generation of the dense point clouds. It can also be assumed that there are differences in the point cloud generation algorithms implemented in the two software packages. The slightly compressed JPG image data used for the point cloud generation did not show any significant changes in the quality of the examined point clouds compared to the uncompressed TIF data sets.
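
One way to obtain the height deviations at check points mentioned above is sketched below; the neighbourhood radius and the use of a median are assumptions for illustration, not the authors' procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def height_deviations_at_check_points(dense_cloud, check_points, radius=0.05):
    """Z deviation of a dense image-matching point cloud at surveyed check points.

    For every check point, the median height of cloud points within `radius`
    (in XY) is compared with the check point's surveyed height."""
    cloud = np.asarray(dense_cloud, dtype=float)
    tree = cKDTree(cloud[:, :2])
    deviations = []
    for cp in np.asarray(check_points, dtype=float):
        neighbours = tree.query_ball_point(cp[:2], r=radius)
        if neighbours:
            deviations.append(np.median(cloud[neighbours, 2]) - cp[2])
    return np.array(deviations)
```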


2020 ◽  
Vol 34 (07) ◽  
pp. 11596-11603 ◽  
Author(s):  
Minghua Liu ◽  
Lu Sheng ◽  
Sheng Yang ◽  
Jing Shao ◽  
Shi-Min Hu

3D point cloud completion, the task of inferring the complete geometric shape from a partial point cloud, has been attracting attention in the community. To acquire high-fidelity dense point clouds and avoid the uneven distribution, blurred details, or structural loss seen in the results of existing methods, we propose a novel approach that completes the partial point cloud in two stages. Specifically, in the first stage, the approach predicts a complete but coarse-grained point cloud with a collection of parametric surface elements. Then, in the second stage, it merges the coarse-grained prediction with the input point cloud by a novel sampling algorithm. Our method uses a joint loss function to guide the distribution of the points. Extensive experiments verify the effectiveness of our method and demonstrate that it outperforms existing methods in both the Earth Mover's Distance (EMD) and the Chamfer Distance (CD).
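
For reference, a small sketch of the Chamfer Distance used as one of the evaluation metrics, in one of its common squared-distance forms (conventions vary between papers); EMD is omitted here because it requires an optimal assignment solver.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(cloud_a, cloud_b):
    """Symmetric Chamfer Distance between two point sets: the mean squared
    nearest-neighbour distance from A to B plus that from B to A."""
    a = np.asarray(cloud_a, dtype=float)
    b = np.asarray(cloud_b, dtype=float)
    d_ab, _ = cKDTree(b).query(a, k=1)
    d_ba, _ = cKDTree(a).query(b, k=1)
    return float(np.mean(d_ab ** 2) + np.mean(d_ba ** 2))
```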


Author(s):  
Suliman Gargoum ◽  
Karim El-Basyouny

Datasets collected using light detection and ranging (LiDAR) technology often consist of dense point clouds. However, the density of the point cloud can vary depending on several factors, including the capabilities of the data collection equipment, the conditions in which the data are collected, and other factors such as range and angle of incidence. Although variation in point density is expected to influence the quality of the information extracted from LiDAR, the extent to which changes in density affect the extraction is unknown. Understanding such impacts is essential for agencies looking to adopt LiDAR technology and for researchers looking to develop algorithms that extract information from LiDAR. This paper focuses specifically on understanding the impacts of point density on extracting traffic signs from LiDAR datasets. The densities of the point clouds are first reduced using stratified random sampling; traffic signs are then extracted from those datasets at different levels of point density. The precision and accuracy of the detection process were assessed at the different levels of point cloud density and on four different highway segments. In general, it was found that for signs with large panels along the approach on which LiDAR data were collected, reducing the point cloud density by up to 70% of the original point cloud had minimal impact on the sign detection rates. The results of this study provide practical guidance to transportation agencies interested in understanding the tradeoff in price, quality, and coverage when acquiring LiDAR equipment for the inventory of traffic signs on their transportation networks.
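
A hedged sketch of density reduction by stratified random sampling; stratifying along the road axis (approximated here by the X coordinate) and the other parameters are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def stratified_downsample(points_xyz, keep_fraction, n_strata=100, seed=0):
    """Keep the same random fraction of points from every stratum, where strata
    are contiguous slices of the cloud ordered along X (a stand-in for the
    direction of travel)."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points_xyz)
    order = np.argsort(points[:, 0])
    kept = []
    for stratum in np.array_split(order, n_strata):
        if stratum.size == 0:
            continue
        n_keep = max(1, int(round(keep_fraction * len(stratum))))
        kept.append(rng.choice(stratum, size=n_keep, replace=False))
    return points[np.concatenate(kept)]

# e.g. a 70 % density reduction keeps 30 % of the points in every stratum
# reduced = stratified_downsample(cloud, keep_fraction=0.3)
```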


Author(s):  
Y. D. Rajendra ◽  
S. C. Mehrotra ◽  
K. V. Kale ◽  
R. R. Manza ◽  
R. K. Dhumal ◽  
...  

Terrestrial Laser Scanners (TLS) are used to obtain dense point samples of a large object’s surface. TLS is a new and efficient method to digitize a large object or scene. The collected point samples come in different formats and coordinate systems. Several scans are required to cover a large object such as a heritage site. Point cloud registration is an important task that brings the different scans together into a single 3D model in one coordinate system. Point clouds can be registered using one of three approaches, or a combination of them: target based, feature extraction, or point cloud based. For the present study we followed the point cloud based registration approach. We collected partially overlapping 3D point cloud data of the Department of Computer Science & IT (DCSIT) building located in Dr. Babasaheb Ambedkar Marathwada University, Aurangabad. To obtain the complete point cloud of the building we took 12 scans: 4 scans for the exterior and 8 scans for the interior facade data collection. Various algorithms are available in the literature, but Iterative Closest Point (ICP) is the most dominant, and researchers have developed variants of ICP to improve the registration process. The ICP point cloud registration algorithm searches for pairs of nearest points in two adjacent scans and calculates the transformation parameters between them; it has the advantage that no artificial target is required for the registration process. We studied and implemented three variants of the ICP algorithm (Brute Force, KDTree, Partial Matching) in MATLAB. The results show that the implemented version of the ICP algorithm and its variants gives better results, in terms of speed and accuracy of registration, compared with the CloudCompare open-source software.
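
A compact Python sketch of the point-to-point ICP core with a KD-tree nearest-neighbour search (the "KDTree" flavour of the three variants); this is an illustrative restatement of the well-known algorithm, not the authors' MATLAB code.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, n_iter=50, tol=1e-6):
    """Rigid registration of `source` onto `target` (both (N, 3) arrays) by
    iteratively pairing nearest neighbours and solving for the best rotation
    and translation with the SVD (Kabsch) step. Returns (R, t)."""
    tree = cKDTree(target)
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(n_iter):
        # 1. pair every source point with its nearest neighbour in the target scan
        dist, idx = tree.query(src, k=1)
        matched = tgt[idx]
        # 2. closed-form rigid transform between the paired sets (SVD / Kabsch)
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        # 3. apply the increment, accumulate it, and test for convergence
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```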

