Damage Detection of the RC Building in TLS Point Clouds Using 3D Deep Neural Network PointNet++

Author(s): Wanpeng Shao, Ken'Ichi Kakizaki, Shunsuke Araki, Tomohisa Mukai
2021, Vol 6 (4), pp. 8647-8654
Author(s): Qi Wang, Jian Chen, Jianqiang Deng, Xinfang Zhang
2020, pp. 147592172093261
Author(s): Zohreh Mousavi, Sina Varahram, Mir Mohammad Ettefagh, Morteza H. Sadeghi, Seyed Naser Razavi

Structural health monitoring of mechanical systems is essential to avoid their catastrophic failure. In this article, an effective deep neural network is developed for extracting damage-sensitive features from the frequency data of vibration signals for damage detection of mechanical systems in the presence of uncertainties such as modeling errors, measurement errors, and environmental noise. For this purpose, the finite element method is used to analyze a mechanical system (finite element model). Then, vibration experiments are carried out on the laboratory-scale model. Vibration signals of the real intact system are used to update the finite element model and minimize the disparities between the natural frequencies of the finite element model and the real system. Parts of the signals that are not related to the nature of the system are removed using the complete ensemble empirical mode decomposition technique. The frequency domain decomposition method is used to extract frequency data. The proposed deep neural network is trained using frequency data of the finite element model and the real intact state, and is then tested using frequency data of the real system. The proposed network is designed in two stages: pre-training classification based on a deep auto-encoder and a Softmax layer (first stage), and re-training classification based on the backpropagation algorithm for fine-tuning of the network (second stage). The proposed method is validated using a lab-scale offshore jacket structure. The results show that the proposed method can learn features from the frequency data and achieve higher accuracy than other comparative methods.
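For a concrete picture of the two-stage scheme described above, the sketch below pre-trains an auto-encoder on frequency features and then fine-tunes the encoder with a Softmax classification head via backpropagation. It is a minimal illustration under assumptions, not the authors' implementation: the layer sizes, epoch counts, and random stand-in data are all hypothetical.

```python
import torch
import torch.nn as nn

# Minimal two-stage sketch: (1) unsupervised auto-encoder pre-training on
# frequency features, (2) supervised fine-tuning of encoder + Softmax head.
# Dimensions, epochs, and data below are illustrative placeholders.

n_freq, n_hidden, n_classes = 64, 16, 4  # hypothetical sizes

encoder = nn.Sequential(nn.Linear(n_freq, n_hidden), nn.ReLU())
decoder = nn.Sequential(nn.Linear(n_hidden, n_freq))
# CrossEntropyLoss applies the Softmax internally.
classifier = nn.Sequential(encoder, nn.Linear(n_hidden, n_classes))

x_train = torch.randn(256, n_freq)             # stand-in frequency data
y_train = torch.randint(0, n_classes, (256,))  # stand-in damage-state labels

# Stage 1: pre-training with a reconstruction objective
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x_train)), x_train)
    loss.backward()
    opt.step()

# Stage 2: re-training (fine-tune encoder + Softmax layer with backprop)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(classifier(x_train), y_train)
    loss.backward()
    opt.step()
```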


Sensors, 2021, Vol 21 (14), pp. 4734
Author(s): Patryk Mazurek, Tomasz Hachaj

In this paper, we propose a novel approach that enables simultaneous localization and mapping (SLAM) and object recognition from visual sensor data in open environments and is capable of working on sparse point clouds. In the proposed algorithm, ORB-SLAM uses the current and previous monocular video frames to determine the observer's position and to build a cloud of points representing objects in the environment, while a deep neural network uses the current frame to detect and recognize objects (OR). In the next step, the sparse point cloud returned by the SLAM algorithm is compared with the area recognized by the OR network. Because each point in the 3D map has a counterpart in the current frame, points matching the area recognized by the OR algorithm can be filtered out. A clustering algorithm then determines areas in which points are densely distributed in order to locate the spatial positions of the objects detected by OR. Finally, using a principal component analysis (PCA)-based heuristic, we estimate the bounding boxes of the detected objects. The image processing pipeline that uses sparse point clouds generated by SLAM to determine the positions of objects recognized by the deep neural network, together with the PCA heuristic, constitutes the main novelty of our solution. In contrast to state-of-the-art approaches, our algorithm does not require any additional calculations, such as the generation of dense point clouds for object positioning, which greatly simplifies the task. We have evaluated our approach on a large benchmark dataset using various state-of-the-art OR architectures (YOLO, MobileNet, RetinaNet) and clustering algorithms (DBSCAN and OPTICS), obtaining promising results. Both our source code and evaluation datasets are available for download, so our results can be easily reproduced.
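The post-SLAM step described above, clustering the filtered sparse points and fitting an oriented bounding box per cluster with PCA, could look roughly like the sketch below. The point data and the DBSCAN parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

# Sketch: cluster sparse 3D points that fall inside a recognized 2D region,
# then fit an oriented bounding box per cluster via PCA.

points = np.random.rand(500, 3)  # stand-in for SLAM points filtered by the OR mask

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(points)

for cluster_id in set(labels) - {-1}:       # -1 marks DBSCAN noise
    cluster = points[labels == cluster_id]
    pca = PCA(n_components=3).fit(cluster)
    local = pca.transform(cluster)          # rotate into principal axes
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = pca.inverse_transform(((lo + hi) / 2.0).reshape(1, -1))[0]
    extents = hi - lo                       # box side lengths along PCA axes
    axes = pca.components_                  # box orientation (rows = axes)
    print(cluster_id, center, extents)
```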


2021, Vol 13 (18), pp. 3736
Author(s): Sung-Hwan Park, Hyung-Sup Jung, Sunmin Lee, Eun-Sook Kim

The role of forests is increasing because of rapid land use changes worldwide that have implications for ecosystems and the carbon cycle. Therefore, it is necessary to obtain accurate information about forests and build forest inventories. However, it is difficult to assess the internal structure of a forest through 2D remote sensing techniques and fieldwork. In this study, we propose a method for estimating the vertical structure of forests based on full-waveform light detection and ranging (FW LiDAR) data. After pre-processing the LiDAR point clouds, voxel-based tree point density maps were generated by counting the canopy height points in each voxel grid derived from the raster digital terrain model (DTM) and the canopy height points. We applied an unsupervised classification algorithm to the voxel-based tree point density maps and identified seven classes of forest vertical types through profile pattern analysis. The classification accuracy was found to be 72.73% in validation against 11 field investigation sites, which was additionally confirmed through comparative analysis with aerial images. Based on this pre-classification reference map, which is assumed to represent ground truth, a deep neural network (DNN) model was applied to perform the final classification. The accuracy assessment yielded an accuracy of 92.72%, indicating good performance. These results demonstrate the potential of vertical structure estimation for extensive forests using FW LiDAR data and show that the distinction between one-storied and two-storied forests can be clearly represented. This technique is expected to contribute to the efficient and effective management of forests based on accurate information derived from the proposed method.
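The voxel-based density map is the core intermediate product here: canopy-height points (heights above the DTM) are binned into a 3D grid and the per-voxel count becomes the density, from which vertical profiles per column can be extracted for profile pattern analysis. The sketch below assumes illustrative extents and voxel size, not the study's actual values.

```python
import numpy as np

# Sketch of a voxel-based tree point density map: bin canopy-height points
# into a 3D voxel grid and keep the count per voxel as the density.

xyz = np.random.rand(10_000, 3) * [100.0, 100.0, 30.0]  # stand-in canopy points (m)
voxel = 2.0                                             # hypothetical voxel edge (m)

idx = np.floor(xyz / voxel).astype(int)                 # voxel index per point
shape = idx.max(axis=0) + 1
density = np.zeros(shape, dtype=np.int32)
np.add.at(density, tuple(idx.T), 1)                     # count points per voxel

# A vertical profile per (x, y) column supports the profile pattern analysis.
profiles = density.reshape(-1, shape[2])                # columns x height bins
print(density.shape, profiles.shape)
```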


Sensors, 2021, Vol 21 (8), pp. 2731
Author(s): Yunbo Rao, Menghan Zhang, Zhanglin Cheng, Junmin Xue, Jiansu Pu, ...

Accurate segmentation of entity categories is a critical step for 3D scene understanding. This paper presents a fast deep neural network model with a Dense Conditional Random Field (DCRF) as a post-processing method, which can perform accurate semantic segmentation of 3D point cloud scenes. On this basis, a compact but flexible framework is introduced for concurrently segmenting the semantics of point clouds, contributing to more precise segmentation. Moreover, based on the semantic labels, a novel DCRF model is elaborated to refine the segmentation results. In addition, without any sacrifice in accuracy, we optimize the original point cloud data, allowing the network to handle less data. In the experiments, our proposed method is evaluated comprehensively using four indicators, demonstrating its superiority over comparable methods.
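Dense-CRF post-processing of this kind typically runs a few mean-field updates in which each point's class distribution is pulled toward those of nearby points. The sketch below is a simplified stand-in for the paper's DCRF, not its exact model: it uses a single Gaussian kernel over 3D distance, a naive O(N^2) kernel matrix, and illustrative parameters and data.

```python
import numpy as np

# Simplified dense-CRF-style refinement: a few mean-field updates where each
# point's class distribution is smoothed by a Gaussian kernel over distance.

def mean_field_refine(xyz, probs, sigma=0.5, weight=1.0, iters=5):
    # xyz: (N, 3) point positions; probs: (N, C) softmax scores from the net
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(kernel, 0.0)           # exclude self-messages
    unary = -np.log(np.clip(probs, 1e-8, 1.0))
    q = probs.copy()
    for _ in range(iters):
        message = kernel @ q                # neighbors' class beliefs
        logits = -unary + weight * message  # encourage agreement with neighbors
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
    return q.argmax(axis=1)

xyz = np.random.rand(200, 3)                         # illustrative points
probs = np.random.dirichlet(np.ones(5), size=200)    # illustrative net output
labels = mean_field_refine(xyz, probs)
```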


Author(s): R. Huang, Z. Ye, D. Hong, Y. Xu, U. Stilla

In this paper, we propose a framework for obtaining semantic labels of LiDAR point clouds and refining the classification results by combining a deep neural network with a graph-structured smoothing technique. In general, the goal of semantic scene analysis is to assign a semantic label to each point in the point cloud. Although various related studies have been reported, the semantic labeling of point clouds in urban areas remains a challenging task due to the complexity of these areas. In this paper, we address the issues of how to effectively extract features from each point and its local surroundings and how to refine the initial soft labels by considering contextual information in the spatial domain. Specifically, we improve the effectiveness of point cloud classification in two respects. First, instead of utilizing handcrafted features as input for classification and refinement, the local context of a point is embedded into a high-dimensional feature space and classified via a deep neural network (PointNet++), and soft labels are simultaneously obtained as initial results for subsequent refinement. Second, the initial label probability set is improved by taking the context in the spatial domain into consideration through a graph structure, and the final labels are optimized by a graph cuts algorithm. To evaluate the performance of our proposed framework, experiments are conducted on a mobile laser scanning (MLS) point cloud dataset. We demonstrate that our approach achieves higher accuracy than several commonly-used state-of-the-art baselines. The overall accuracy of our proposed method on the TUM dataset reaches 85.38% when labeling eight semantic classes.
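The refinement stage described above combines unary costs (the network's soft labels) with pairwise costs on a spatial graph. As a rough illustration, the sketch below builds a k-NN graph over the points and minimizes a Potts-style energy with iterated conditional modes (ICM), a greedy stand-in for the paper's graph cuts optimizer; the parameters and data are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

# Simplified graph-structured refinement: k-NN graph + ICM over a Potts
# energy (unary from soft labels, pairwise penalty for neighbor disagreement).

def refine_labels(xyz, probs, k=8, pairwise=0.5, iters=10):
    # xyz: (N, 3) positions; probs: (N, C) soft labels from PointNet++
    unary = -np.log(np.clip(probs, 1e-8, 1.0))       # unary energies
    _, nbrs = cKDTree(xyz).query(xyz, k=k + 1)
    nbrs = nbrs[:, 1:]                               # drop each point itself
    labels = probs.argmax(axis=1)
    n_classes = probs.shape[1]
    for _ in range(iters):
        for i in range(len(xyz)):
            # Potts penalty: cost of disagreeing with each neighbor's label
            disagree = (labels[nbrs[i], None] != np.arange(n_classes)).sum(axis=0)
            labels[i] = np.argmin(unary[i] + pairwise * disagree)
    return labels

xyz = np.random.rand(300, 3)                          # illustrative MLS points
probs = np.random.dirichlet(np.ones(8), size=300)     # eight-class soft labels
labels = refine_labels(xyz, probs)
```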

