Comparison of Depth Camera and Terrestrial Laser Scanner in Monitoring Structural Deflections

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to any disturbance, while also aiding its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance its accuracy and widen the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
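The bilateral filtering step mentioned above can be sketched as follows: a minimal, generic edge-preserving filter on a 2D depth map. This is not the authors' implementation; the window radius and the sigma_s/sigma_r parameters are illustrative assumptions.

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=1.5, sigma_r=2.0):
    """Edge-preserving bilateral filter on a 2D depth map.

    Each output value is a weighted mean of its neighbours, where the
    weights combine spatial closeness (sigma_s) and depth similarity
    (sigma_r), so step edges in the depth map are preserved while
    small-amplitude noise is smoothed away."""
    h, w = depth.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(depth.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - depth[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

Because the range weight collapses for large depth differences, a sharp step (e.g. the edge of a deflected beam against the background) survives the smoothing.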

Author(s):  
C. L. Lau ◽  
S. Halim ◽  
M. Zulkepli ◽  
A. M. Azwan ◽  
W. L. Tang ◽  
...  

The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods rely only on geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available from some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some terrestrial laser scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. Coloured points falling within the corresponding preset spectral threshold were identified as belonging to that specific feature class. This terrain extraction process was implemented in Matlab code developed for the study. The results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor leads to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
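The preset-spectral-threshold classification described above can be sketched as a per-channel RGB band test on a coloured point cloud. This is a generic illustration, not the study's Matlab code; in practice the threshold bounds would be derived from the sampled terrain classes.

```python
import numpy as np

def extract_by_colour(points, rgb, lower, upper):
    """Keep points whose RGB values fall inside a preset spectral threshold.

    points: (N, 3) coordinates; rgb: (N, 3) colours in 0-255.
    lower/upper: per-channel bounds, e.g. estimated from sample patches
    of the terrain class of interest."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # a point passes only if all three channels are inside the bounds
    mask = np.all((rgb >= lower) & (rgb <= upper), axis=1)
    return points[mask], mask
```

As the abstract notes, the usefulness of such a test hinges on the spectral separability of the classes: if the camera's colours are noisy, the bounds of different classes overlap and the mask misclassifies points.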


2019 ◽  
Vol 1 (1) ◽  
pp. 47-60
Author(s):  
Ezil Defri Maharfi ◽  
Taufik Arief ◽  
Diana Purbasari

PT. Bukit Asam, Tbk. is a coal mining company located in Tanjung Enim, Muara Enim Regency, South Sumatra Province. To date, overburden removal volumes have been measured with a Total Station. Measuring a large overburden area with varied surface shapes using a Total Station is considered ineffective because of the time required and the low level of accuracy. Therefore, an instrument is needed that can measure volume quickly and produce detailed, high-density volume data. One option is the Terrestrial Laser Scanner. The measurement method used is the occupation and backsight method, which requires two points of known coordinates: one as the instrument station and one as the backsight reference point. The registration methods used are the occupation and backsight method and the cloud-to-cloud method. The registered point cloud data must then be filtered to remove noise and foreign objects that are not part of the overburden layer. The volume is calculated with the cut and fill method applied to the three-dimensional model built from the point cloud. The calculation results show that the overburden removal volume from December 2017 to May 2018 was 847,937 m3, comprising 255,700 m3 in December 2017, 299,120 m3 in January 2018, 227,543 m3 in February 2018 and 65,572 m3 in March 2018.
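The cut and fill computation mentioned above can be sketched as a differencing of two gridded surfaces built from the registered point clouds. This is a minimal illustration of the general technique, not the software used at the mine; the grid values and cell area are assumed inputs.

```python
import numpy as np

def cut_and_fill(dem_before, dem_after, cell_area):
    """Cut-and-fill volumes between two gridded surfaces (heights in m).

    dem_before / dem_after: 2D arrays of surface elevation on the same grid.
    cell_area: plan area of one grid cell in m^2.
    Returns (cut, fill) volumes in m^3."""
    diff = dem_before - dem_after       # positive where material was removed
    cut = diff[diff > 0].sum() * cell_area
    fill = -diff[diff < 0].sum() * cell_area
    return cut, fill
```

For overburden removal, the cut volume between the monthly surfaces is the quantity of interest; the fill term captures any dumped or redistributed material.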


Author(s):  
F. Sadeghi ◽  
H. Arefi ◽  
A. Fallah ◽  
M. Hahn

Three-dimensional (3D) building modelling has been an interesting research topic for decades, and photogrammetric methods appear to provide the only economic means of acquiring truly 3D city data. Given the enormous development of 3D building reconstruction for applications such as navigation systems, location-based services and urban planning, the need to consider semantic features (such as windows and doors) has become more essential than ever; a 3D block model of a building is therefore no longer sufficient. To reconstruct the façade elements completely, we employed high-density point cloud data obtained from a handheld laser scanner. The advantage of a handheld laser scanner, with its capability of directly acquiring very dense 3D point clouds, is that there is no need to derive three-dimensional data from multiple images using structure-from-motion techniques. This paper presents a grammar-based algorithm for façade reconstruction using handheld laser scanner data. The proposed method is a combination of bottom-up (data-driven) and top-down (model-driven) methods: first, the basic façade elements are extracted in a bottom-up way, and then they serve as prior knowledge for further processing to complete the models, especially in occluded and incomplete areas. The first step of the data-driven modelling is a conditional RANSAC (RANdom SAmple Consensus) algorithm that detects the façade plane in the point cloud data and removes noisy objects such as trees, pedestrians, traffic signs and poles. The façade planes are then divided into three depth layers to detect protrusion, indentation and wall points using a density histogram. Owing to the poor reflection of laser beams from glass, windows appear as holes in the point cloud data and can therefore be distinguished and extracted more easily from the point cloud than the other façade elements.
The next step is rasterizing the indentation layer, which holds the window and door information. After rasterization, morphological operators are applied to remove small irrelevant objects. Horizontal splitting lines are then employed to determine floors, and vertical splitting lines to detect walls, windows, and doors. The wall, window and door elements, named terminals, are clustered during a classification process; each terminal carries its width as a property. Among the terminals, windows and doors are named geometry tiles when defining the vocabulary of the grammar rules. Higher-order structures inferred by grouping the tiles yield the production rules. The rules, together with the three-dimensionally modelled façade elements, constitute a formal grammar named the façade grammar. This grammar holds all the information necessary to reconstruct façades in the style of the given building; thus, it can be used to improve and complete façade reconstruction in areas with no or limited sensor data. Finally, a 3D reconstructed façade model is generated whose geometric size and position accuracy depend on the density of the raw point cloud.
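The plane-detection step can be illustrated with a plain RANSAC plane fit; the paper uses a conditional variant with extra constraints, so the sketch below is only the generic core of the technique.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Plain RANSAC plane fit on an (N, 3) point cloud.

    Repeatedly fits a plane to 3 random points and keeps the candidate
    with the most inliers (points within dist_thresh of the plane).
    Returns a boolean inlier mask; off-plane clutter such as trees or
    pedestrians ends up outside the mask."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                  # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

On façade data, the dominant plane found this way is the wall; the complement (points well in front of or behind it) is discarded as clutter before the depth-layer analysis.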


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3681 ◽  
Author(s):  
Le Zhang ◽  
Jian Sun ◽  
Qiang Zheng

The recognition of three-dimensional (3D) lidar (light detection and ranging) point clouds remains a significant issue in point cloud processing. Traditional point cloud recognition employs the 3D point cloud of the whole object. However, lidar data are a collection of two-and-a-half-dimensional (2.5D) point clouds (each 2.5D point cloud coming from a single view) obtained by scanning the object within a certain field angle. To deal with this problem, we first propose a novel representation that expresses 3D point clouds using 2.5D point clouds from multiple views, and we then generate multi-view 2.5D point cloud data based on the Point Cloud Library (PCL). Subsequently, we design an effective recognition model based on a multi-view convolutional neural network. The model acts directly on the raw 2.5D point clouds from all views and learns a global feature descriptor by fusing the features from all views in a view fusion network. Experiments show that our approach achieves excellent recognition performance without any requirement for three-dimensional reconstruction or preprocessing of the point clouds. In conclusion, this paper effectively addresses the recognition problem of lidar point clouds and provides significant practical value.
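The view fusion idea can be illustrated in miniature: per-view feature descriptors are combined into one global descriptor for the object. The paper learns this fusion with a network; element-wise max pooling is shown here only as a common, simple stand-in.

```python
import numpy as np

def fuse_view_features(view_features):
    """Fuse per-view descriptors into one global descriptor.

    view_features: (V, D) array, one D-dimensional descriptor per view.
    Element-wise max pooling keeps, for each feature dimension, the
    strongest response seen from any view, so the result is invariant
    to the order of the views."""
    return np.max(np.asarray(view_features), axis=0)
```

Order invariance matters because the set of 2.5D views of an object has no natural ordering; a learned fusion network has to preserve the same property.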


2021 ◽  
Vol 13 (12) ◽  
pp. 2332
Author(s):  
Daniel Lamas ◽  
Mario Soilán ◽  
Javier Grandío ◽  
Belén Riveiro

The growing development of data digitalisation methods has increased their demand and applications in the transportation infrastructure field. Currently, mobile mapping systems (MMSs) are among the most popular technologies for the acquisition of infrastructure data, with three-dimensional (3D) point clouds as their main product. In this work, a heuristic-based workflow for semantic segmentation of complex railway environments is presented, in which their most relevant elements are classified, namely, rails, masts, wiring, droppers, traffic lights, and signals. This method takes advantage of existing methodologies in the field for point cloud processing and segmentation, taking into account the geometry and spatial context of each classified element in the railway environment. The method is applied to a 90-kilometre-long railway line and validated against a manual reference on random sections of the case study data. The results are presented and discussed at the object level, differentiated by element type. The F1 scores obtained for each element exceed 85%, and are higher than 99% for rails, the most significant element of the infrastructure. These metrics showcase the quality of the algorithm and prove that the method is efficient for the classification of long and variable railway sections, and for the assisted labelling of point cloud data for future applications based on training supervised learning models.
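A first-pass heuristic of the kind such a workflow builds on can be sketched as a split by height above the track plane; the thresholds below are illustrative assumptions, not values from the paper, and real pipelines refine each group with further geometric tests (linearity for rails, verticality for masts, and so on).

```python
import numpy as np

def split_by_height(points, ground_z=0.0, rail_max=0.3, wiring_min=4.5):
    """Toy first-pass split of a railway point cloud by height.

    points: (N, 3) array. Points close to the track plane are rail
    candidates, points high above it are wiring/catenary candidates,
    and everything in between is left for further classification
    (masts, signals, traffic lights)."""
    h = points[:, 2] - ground_z
    rail = points[h < rail_max]
    wiring = points[h > wiring_min]
    other = points[(h >= rail_max) & (h <= wiring_min)]
    return rail, wiring, other
```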


2021 ◽  
Vol 2107 (1) ◽  
pp. 012003
Author(s):  
N I Boslim ◽  
S A Abdul Shukor ◽  
S N Mohd Isa ◽  
R Wong

3D point clouds are sets of point coordinates that can be obtained using a sensing device such as a Terrestrial Laser Scanner (TLS). Because a TLS collects data at high rates and produces a dense point cloud of its surroundings, segmentation is needed to extract information about the object of interest from the massive point cloud, which contains many other types of objects. The Bell Tower of Tawau, Sabah, was chosen as the object of interest for studying the performance of different classifiers in segmenting the point cloud data. A state-of-the-art TLS was used to collect the data. The aim of this research is to segment the point cloud of the historical building from its scene using two different classifiers and to study their performance. Two classifiers commonly used for segmenting point clouds of objects of interest such as buildings are tested here: Random Forest (RF) and k-Nearest Neighbour (kNN). It was found that the Random Forest classifier performs better than the k-Nearest Neighbour classifier in segmenting the point cloud data representing the historic building.
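The kNN classifier tested above can be sketched in a few lines on per-point feature vectors (e.g. height, intensity, local planarity). This is a generic toy implementation to show the mechanism, not the study's setup or feature set.

```python
import numpy as np

def knn_classify(train_X, train_y, query_X, k=3):
    """Minimal k-nearest-neighbour classifier for per-point features.

    train_X: (N, D) labelled feature vectors; train_y: (N,) labels
    (e.g. 0 = building, 1 = background). Each query point receives the
    majority label among its k nearest training points."""
    labels = []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        labels.append(vals[np.argmax(counts)])
    return np.array(labels)
```

A Random Forest replaces this single distance vote with an ensemble of decision trees over the same features, which is one plausible reason it separates building from non-building points more robustly in the study.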


Author(s):  
K. R. Dayal ◽  
S. Raghavendra ◽  
H. Pande ◽  
P. S. Tiwari ◽  
I. Chauhan

In the recent past, several heritage structures have been destroyed by human-made incidents and natural calamities, causing a great loss to the human race in terms of its cultural achievements. In this context, the importance of documenting such structures to create a substantial database cannot be emphasised enough. The Clock Tower of Dehradun, India, is one such structure. The lack of sufficient information about it in the digital domain justified the need to carry out this study. Thus, an attempt has been made to gauge the possibilities of using open-source 3D tools such as VSfM to quickly and easily obtain point clouds of an object and assess their quality. The photographs were collected using consumer-grade cameras, with reasonable effort made to ensure overlap. Sparse and dense reconstruction were carried out to generate a 3D point cloud model of the tower. A terrestrial laser scanner (TLS) was also used to obtain a point cloud of the tower. The point clouds obtained from the two methods were analysed to understand the quality of the information present, with the TLS-acquired point cloud serving as the benchmark for assessing the VSfM point cloud. They were compared in terms of point density and subjected to a plane-fitting test on sample flat portions of the structure. The plane-fitting test revealed the "planarity" of the point clouds: a Gaussian distribution fit yielded standard deviations of 0.002 and 0.01 for TLS and VSfM, respectively. For more insight, comparisons with Agisoft Photoscan results were also made.
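The plane-fitting test can be sketched as a least-squares plane fit whose residual spread measures "planarity": the flatter the sampled patch appears in the cloud, the smaller the standard deviation of point-to-plane distances. This generic SVD-based version is an illustration, not the authors' processing chain.

```python
import numpy as np

def plane_fit_residual_std(points):
    """Least-squares plane fit via SVD on an (N, 3) patch.

    Returns the standard deviation of point-to-plane distances, a
    simple 'planarity' measure: near zero for a clean flat patch,
    larger for a noisy or non-planar one."""
    centered = points - points.mean(axis=0)
    # the right singular vector with the smallest singular value
    # is the normal of the best-fit plane through the centroid
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    return np.std(centered @ normal)
```

Applied to flat wall portions, this is the statistic whose Gaussian spread the study reports (0.002 for TLS vs 0.01 for VSfM).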


2020 ◽  
Vol 12 (9) ◽  
pp. 1452
Author(s):  
Ming Huang ◽  
Xueyu Wu ◽  
Xianglei Liu ◽  
Tianhang Meng ◽  
Peiyuan Zhu

The shift from two-dimensional symbols to three-dimensional representation of underground cable wells is a developing trend, and three-dimensional (3D) point cloud data are widely used for this purpose due to their high precision. In this study, we utilize the characteristics of 3D terrestrial lidar point cloud data to build a CSG-BRep 3D model of underground cable wells whose spatial topological relationships are fully considered. To simplify the modeling process, the point cloud is first simplified; then its main axis is extracted via an oriented bounding box (OBB), and its orientation is corrected by quaternion rotation. Furthermore, employing an adaptive method, the top point cloud is extracted and projected for boundary extraction. Using the boundary information, we then design the 3D cable well model. Finally, the cable well component model is generated by scanning the original point cloud. The experiments demonstrate that, besides the algorithm being fast, the proposed model is effective at displaying the 3D information of actual cable wells and meets current production demands.
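The quaternion-based orientation correction can be sketched as rotating the point cloud by a unit quaternion via the equivalent rotation matrix. This is the standard quaternion-to-matrix formula, not the paper's code; in the pipeline, the quaternion would be chosen to rotate the OBB main axis onto a coordinate axis.

```python
import numpy as np

def quat_rotate(points, q):
    """Rotate an (N, 3) point cloud by a quaternion q = (w, x, y, z).

    The quaternion is normalised, converted to its equivalent 3x3
    rotation matrix, and applied to every point."""
    w, x, y, z = q / np.linalg.norm(q)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return points @ R.T
```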


2021 ◽  
Vol 9 ◽  
Author(s):  
Zhonglei Mao ◽  
Sheng Hu ◽  
Ninglian Wang ◽  
Yongqing Long

In recent years, low-cost unmanned aerial vehicle (UAV) photogrammetry and terrestrial laser scanner (TLS) techniques have become very important non-contact measurement methods for obtaining topographic data about landslides. However, owing to differences in the types of UAVs and in whether ground control points (GCPs) are set during the measurement, the obtained topographic data for landslides often differ greatly in precision. In this study, two types of UAVs (DJI Mavic Pro and DJI Phantom 4 RTK), with and without GCPs, were used to survey a loess landslide. UAV point clouds and digital surface model (DSM) data for the landslide were obtained. Based on this, we used the Geomorphic Change Detection software (GCD 7.0) and the Multiscale Model-To-Model Cloud Comparison (M3C2) algorithm in the CloudCompare software for comparative analysis and accuracy evaluation of the different point clouds and DSM data obtained using the same and different UAVs. The experimental results show that the DJI Phantom 4 RTK obtained the most accurate landslide terrain data when GCPs were set. In addition, we used a Maptek I-Site 8820 terrestrial laser scanner to obtain higher-precision topographic point cloud data for the Beiguo landslide. However, owing to terrain limitations, some point cloud data were missing in the blind areas of the TLS measurement. To compensate for this scanning defect, we used the iterative closest point (ICP) algorithm in the CloudCompare software to fuse the point clouds obtained using the DJI Phantom 4 RTK with GCPs with those obtained using the TLS. The results demonstrate that after the data fusion, the point clouds not only retained the high-precision characteristics of the original TLS point clouds, but also filled in the blind areas of the TLS data.
This study introduces a novel perspective and technical scheme for the precision evaluation of UAV surveys and the fusion of point cloud data from different sensors in geological hazard surveys.
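The ICP alignment used for the fusion can be illustrated with a minimal point-to-point variant: brute-force nearest-neighbour matching followed by the Kabsch/SVD rigid-transform solve. CloudCompare's implementation is more elaborate (sampling, outlier rejection, convergence criteria), so this is only a sketch of the core idea.

```python
import numpy as np

def icp(src, dst, n_iters=20):
    """Minimal point-to-point ICP aligning src (N, 3) onto dst (M, 3).

    Each iteration matches every source point to its nearest destination
    point, then solves the best rigid transform for those matches via
    the Kabsch/SVD method and applies it. Returns the accumulated
    rotation, translation, and the transformed source cloud."""
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        # best rigid transform for the current correspondences
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

In the fusion scenario, src would be the UAV cloud and dst the TLS cloud (or vice versa); after convergence, the two clouds share one coordinate frame and the UAV points fill the TLS blind areas.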


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than the conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least-squares fitting. Unfortunately, when assessing the goodness of fit of a surface approximation on a real dataset, only the noisy point cloud is available: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should correspondingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-splines reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once the object has been scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-splines local refinement open the door to further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
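The overfitting trade-off described above, where a more flexible basis drives the RMSE down simply by fitting the noise, can be illustrated with a least-squares polynomial surface fit standing in for spline fitting (an illustration only; T-spline fitting itself is beyond a short sketch).

```python
import numpy as np

def fit_surface_rmse(pts, degree):
    """Least-squares polynomial surface fit z = f(x, y) of a given total
    degree; returns the RMSE of the residuals.

    Because the basis of a higher degree contains that of a lower one,
    the RMSE can only decrease as the degree grows, which is exactly why
    RMSE alone cannot distinguish a good fit from overfitted noise."""
    x, y, z = pts.T
    terms = [x**i * y**j
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.stack(terms, axis=1)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = A @ coef - z
    return np.sqrt(np.mean(resid**2))
```

With a known (e.g. 3D-printed) reference surface, the residuals can instead be computed against the truth, separating noise fitting from genuine detail, which is the point of the study's experimental design.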

