Intelligent Calibration of Static FEA Computations Based on Terrestrial Laser Scanning

Sensors
2020
Vol 20 (22)
pp. 6439
Author(s):  
Wei Xu ◽  
Xiangyu Bao ◽  
Genglin Chen ◽  
Ingo Neumann

The demand for efficient and accurate finite element analysis (FEA) is growing with the spread of advanced calibration technologies and sensor-based monitoring methods. This research explores a deep learning-based methodology to calibrate FEA results. Reference results from monitoring measurements, e.g., terrestrial laser scanning, help capture the actual behavior of the structure during the static loading process. A long short-term memory (LSTM) network learns the sequence of deviations between the standard FEA computations on the simplified geometry and the refined reference values. The complex patterns governing these deviations are captured effectively during training. The trained network then generates the FEA sequence results for the next adjacent loading steps, and the final FEA computations are calibrated under threshold control. The calibration significantly reduces the mean square errors of the future FEA sequence results and thereby deepens the calibration. Consequently, calibrating FEA computations with deep learning can play a helpful role in predicting and monitoring future structural behavior.
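
As a rough illustration of the workflow described above, the sketch below pairs an LSTM that learns the FEA-versus-reference deviation sequence with a threshold-controlled correction of the next loading step. All names, layer sizes, and the threshold value are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumed names and values, not the authors' code): an LSTM
# learns the deviation sequence between simplified-geometry FEA results and TLS
# reference values; the next FEA step is corrected when the predicted deviation
# exceeds a threshold.
import torch
import torch.nn as nn

class DeviationLSTM(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):               # x: (batch, window, 1) past deviations
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predicted deviation at the next load step

def calibrate_next_step(fea_value, past_deviations, model, threshold=1e-3):
    """Subtract the predicted FEA-vs-reference deviation when it is significant."""
    with torch.no_grad():
        pred_dev = model(past_deviations.view(1, -1, 1)).item()
    return fea_value - pred_dev if abs(pred_dev) > threshold else fea_value
```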

2021
Vol 974 (8)
pp. 2-12
Author(s):  
A.A. Sharafutdinova ◽  
M.J. Bryn

Terrestrial laser scanning and digital information modeling are applied more widely every year to solve practical tasks at various stages of an industrial facility's life cycle. The task of formulating accuracy requirements for terrestrial laser scanning intended for subsequent digital information modeling is therefore becoming increasingly pressing. In this article, we analyze the types of engineering and geodetic work used to solve engineering tasks at various stages of an industrial facility's life cycle in order to derive such accuracy requirements. We also analyze the regulatory and technical documentation that governs this work. Based on this analysis, the relationship between the measurement accuracy characteristics specified in the design, construction, and operational documentation and the mean square errors of point positions is described. The authors propose a scheme for converting the specified measurement accuracy characteristics into mean square errors of point positions for each type of engineering and geodetic work. The results of this study can be used when planning terrestrial laser scanning of industrial facilities. Building on these accuracy requirements for the geodetic work, it becomes possible to formulate a methodology for carrying out each stage of the TLS technological scheme.
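
The transition from documented accuracy characteristics to mean square errors of point positions can be illustrated with a small numeric sketch. The confidence factor t (commonly 2.0-2.5 in geodetic practice) and the equal split between axes are assumptions here, not values taken from the article.

```python
# Numeric sketch of the tolerance-to-MSE transition (the confidence factor t and the
# equal split between axes are assumptions, not values from the article).
import math

def tolerance_to_position_mse(tolerance_mm: float, t: float = 2.0) -> float:
    """Admissible mean square error of a point position for a stated tolerance."""
    return tolerance_mm / t

def position_mse_to_axis_mse(position_mse_mm: float) -> float:
    """Split a 3D position MSE equally between the X, Y and Z components."""
    return position_mse_mm / math.sqrt(3.0)

# Example: a 10 mm construction tolerance -> 5.0 mm position MSE -> ~2.9 mm per axis.
print(position_mse_to_axis_mse(tolerance_to_position_mse(10.0)))
```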


Author(s):  
Saeed Vasebi ◽  
Yeganeh M. Hayeri ◽  
Peter J. Jin

Recently increased computational power and extensive traffic data availability have provided a unique opportunity to re-investigate drivers’ car-following (CF) behavior. Classic CF models assume drivers’ behavior is influenced only by their preceding vehicle. Recent studies have indicated that considering surrounding vehicles’ information (e.g., multiple preceding vehicles) could affect CF models’ performance, but an in-depth investigation of surrounding vehicles’ contribution to CF modeling performance has not been reported in the literature. This study uses a deep-learning model with long short-term memory (LSTM) to investigate to what extent considering surrounding vehicles could improve CF models’ performance. This investigation helps to select the right inputs for traffic flow modeling. Five CF models are compared in this study (i.e., classic, multi-anticipative, adjacent-lanes, following-vehicle, and all-surrounding-vehicles CF models). Performance of the CF models is compared with respect to accuracy, stability, and smoothness of traffic flow. The CF models are trained, validated, and tested on a large publicly available dataset. The average mean square errors (MSEs) for the classic, multi-anticipative, adjacent-lanes, following-vehicle, and all-surrounding-vehicles CF models are 1.58 × 10^−3, 1.54 × 10^−3, 1.56 × 10^−3, 1.61 × 10^−3, and 1.73 × 10^−3, respectively. However, the results show insignificant performance differences between the classic CF model and the multi-anticipative or adjacent-lanes models in terms of accuracy, stability, or smoothness. The following-vehicle CF model shows similar performance to the multi-anticipative model. The all-surrounding-vehicles CF model underperformed all the other models.
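
A minimal sketch of how such a comparison can be set up: the same LSTM architecture is kept fixed while the input feature set changes between the classic (preceding vehicle only) and surrounding-vehicle variants. The feature names and layer sizes below are assumptions, not the study's code.

```python
# Illustrative sketch (assumed structure, not the study's code): the LSTM stays the
# same across CF variants; only the input feature set changes.
import torch
import torch.nn as nn

def build_features(ego, preceding, surrounding=None):
    """Gap/relative-speed features; `surrounding` is an optional list of vehicles."""
    feats = [preceding["gap"], preceding["speed"] - ego["speed"], ego["speed"]]
    for veh in (surrounding or []):      # empty for the classic CF model
        feats += [veh["gap"], veh["speed"] - ego["speed"]]
    return torch.tensor(feats, dtype=torch.float32)

class CarFollowingLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)  # next-step ego speed (or acceleration)

    def forward(self, x):                # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])
```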


2019
Vol 265
pp. 137-144
Author(s):  
T. Jackson ◽  
A. Shenkin ◽  
A. Wellpott ◽  
K. Calders ◽  
N. Origo ◽  
...  

2019
Vol 11 (2)
pp. 211
Author(s):  
Wuming Zhang ◽  
Peng Wan ◽  
Tiejun Wang ◽  
Shangshu Cai ◽  
Yiming Chen ◽  
...  

Tree stem detection is a key step toward retrieving detailed stem attributes from terrestrial laser scanning (TLS) data. Various point-based methods have been proposed for stem point extraction at both the individual tree and plot levels. The main limitation of point-based methods is their high computational demand when dealing with plot-level TLS data. Although segment-based methods can reduce the computational burden and the uncertainties of point cloud classification, their application is largely limited to urban scenes because of the complexity of the algorithms and the conditions of natural forests. Here we propose a novel and simple segment-based method for efficient stem detection at the plot level, based on the curvature feature of the points and connected component segmentation. We tested our method on a public TLS dataset with six forest plots collected for the international TLS benchmarking project in Evo, Finland. Results showed that the mean accuracies of stem point extraction were comparable to state-of-the-art methods (>95%). The accuracies of the stem mapping were also comparable to the methods tested in the international TLS benchmarking project. Additionally, our method was applicable to a wide range of stem forms. In short, the proposed method is accurate and simple; it is a sensible solution for the stem detection of standing trees using TLS data.
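
A minimal sketch of the two ingredients named above, under assumed details: a per-point curvature proxy from the local covariance eigenvalues, followed by grouping of the retained low-curvature points; DBSCAN is used here merely as a stand-in for connected component segmentation.

```python
# Sketch with assumed details: (1) per-point curvature proxy from local covariance
# eigenvalues, (2) keep smooth, trunk-like (low-curvature) points, (3) group them;
# DBSCAN stands in for connected component segmentation.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def curvature_proxy(points, k=20):
    """Surface variation lambda_min / (lambda_1 + lambda_2 + lambda_3) per point."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    curv = np.empty(len(points))
    for i, nn_idx in enumerate(idx):
        eigvals = np.linalg.eigvalsh(np.cov(points[nn_idx].T))  # ascending order
        curv[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return curv

def detect_stem_segments(points, curv_threshold=0.03, eps=0.10, min_pts=50):
    """Return per-point cluster labels for candidate stem segments (-1 = none)."""
    labels = np.full(len(points), -1)
    mask = curvature_proxy(points) < curv_threshold
    if mask.any():
        labels[mask] = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points[mask])
    return labels
```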


2021
Vol 11 (9)
pp. 3838
Author(s):  
Pengfei Zhang ◽  
Fenghua Li ◽  
Rongjian Zhao ◽  
Ruishi Zhou ◽  
Lidong Du ◽  
...  

Today, excessive psychological stress has become a universal threat to humans. Repeated exposure to high stress can heavily affect work and study, and prolonged exposure can even cause cardiovascular disease and cancer. Therefore, monitoring and managing stress are both imperative to reduce the adverse outcomes of excessive psychological stress. Conventional monitoring methods first extract time-domain and frequency-domain characteristics of the RR interval of an electrocardiogram (ECG) and then use machine learning models, such as SVM, random forest, and decision tree, to distinguish the stress level. The biggest limitation of these methods is that at least one minute of ECG data and other signals is required to ensure high accuracy, which greatly restricts real-time application. To achieve real-time stress detection with high accuracy, we propose a framework based on deep learning. The proposed monitoring framework combines convolutional neural networks (CNN) and bidirectional long short-term memory (BiLSTM). To evaluate its performance, we conducted experiments applying conventional methods as a baseline. Data for 34 subjects were collected on a server platform created by our group together with a group at the Institute of Psychology of the Chinese Academy of Sciences. The accuracy of the proposed framework reached 0.865 on three stress levels using a 10 s ECG signal, a 0.228 improvement over conventional methods. The proposed framework is therefore better suited to real-time applications.
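
A hedged sketch of a CNN + BiLSTM classifier of the kind described; the layer sizes, the 250 Hz sampling rate assumed for the 10 s window, and the three output classes are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch of a CNN + BiLSTM stress classifier (layer sizes, 250 Hz sampling and
# the three-class output are assumptions, not the authors' exact configuration).
import torch
import torch.nn as nn

class CNNBiLSTMStress(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(        # local ECG morphology features
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.bilstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                # x: (batch, 1, samples), e.g. a 10 s ECG strip
        f = self.cnn(x).transpose(1, 2)  # -> (batch, time, channels)
        h, _ = self.bilstm(f)
        return self.fc(h[:, -1])         # logits over the stress levels

logits = CNNBiLSTMStress()(torch.randn(1, 1, 2500))  # dummy 10 s segment at 250 Hz
```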


2020
Vol 12 (3)
pp. 352
Author(s):  
WenFang Ye ◽  
Chuang Qian ◽  
Jian Tang ◽  
Hui Liu ◽  
XiaoYun Fan ◽  
...  

The detailed structure information under the forest canopy is important for forestry surveying. As a high-precision environmental sensing and measurement method, terrestrial laser scanning (TLS) is widely used in high-precision forestry surveys. In TLS-based forestry surveys, stem mapping, which focuses on detecting and extracting trunks, is one of the core data processing tasks and the basis for the subsequent calculation of tree attributes; one of the most basic attributes is the diameter at breast height (DBH). This article explores and improves methods for stem mapping and DBH estimation from TLS data. Firstly, an improved 3D stem mapping algorithm that considers the growth direction in random sample consensus (RANSAC) cylinder fitting is proposed to extract and fit individual tree point cloud sections. It constructs the optimal cylinder of the trunk layer by layer and uses the growth direction to establish the trunk buffer of the next layer. Experimental results show that it can effectively remove most of the branches, reduce their interference with trunk discrimination, and improve the completeness of stem extraction by about 36%. Secondly, a robust least squares ellipse fitting method based on the elliptic hypothesis is proposed for DBH estimation. Experimental results show that the DBH estimation accuracy of the proposed method is improved compared with other methods. The mean root mean squared error (RMSE) of the proposed method is 1.14 cm, compared with mean RMSEs of 1.70, 2.03, and 2.14 cm for the other methods. The mean relative accuracy of the proposed method is 95.2%, compared with 92.9%, 91.9%, and 90.9% for the other methods.
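
As a sketch of the DBH-estimation step, the code below shows a robust, iteratively reweighted least-squares fit of a breast-height cross-section slice. For brevity it fits a circle (Kasa parameterization) rather than the paper's ellipse model, but the reweighting that downweights branch and noise points follows the same idea.

```python
# Sketch of a robust, iteratively reweighted least-squares cross-section fit for DBH.
# A circle (Kasa parameterization) is used for brevity; the paper fits an ellipse.
import numpy as np

def robust_circle_dbh(xy, iters=10):
    """Fit x^2 + y^2 = 2ax + 2by + c with outlier downweighting; return 2 * radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    w = np.ones_like(x)
    for _ in range(iters):
        sw = np.sqrt(w)
        (a0, b0, c0), *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        r = np.sqrt(c0 + a0**2 + b0**2)
        resid = np.abs(np.hypot(x - a0, y - b0) - r)    # geometric residuals
        scale = 1.4826 * np.median(resid) + 1e-9        # robust scale (MAD)
        w = 1.0 / (1.0 + (resid / scale) ** 2)          # downweight branch/noise points
    return 2.0 * r
```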


2021
Vol 2021
pp. 1-15
Author(s):  
Yin Zhou ◽  
Daguang Han ◽  
Kaixin Hu ◽  
Guocheng Qin ◽  
Zhongfu Xiang ◽  
...  

The comprehensive utilization of prefabricated components (PCs) is one of the features of industrial construction. Trial assembly is imperative for PCs used in high-rise buildings and large bridges. Virtual trial assembly (VTA) is a preassembly process for PCs in a virtual environment that avoids the time and cost challenges of physical trial assembly. In this study, a general VTA framework linking a point cloud, a building information model (BIM), and the finite element method is proposed. When point clouds are obtained via terrestrial laser scanning, their registration accuracy is the key to building an accurate digital model of the PCs. Accordingly, an accurate registration method based on triangular pyramid markers is proposed; it enables the general registration accuracy of the point clouds to reach the submillimeter scale. Two algorithms, for curved members and bolt holes, are developed for PCs with bolted assembly to reconstruct a precise BIM that can be used directly in finite element analysis. Furthermore, an efficient simulation method for accurately predicting the elastic deformation and initial stress caused by forced assembly is proposed and verified. The proposed VTA method is verified on a reduced-scale steel pipe arch bridge. Experimental results show that the geometric prediction deviation of VTA is less than 1/1800 of the experimental bridge span, and the mean stress predicted via VTA is 90% of the measured mean stress. In general, this research may help improve the industrialization level of building construction.
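
Once the triangular pyramid markers have been located in two scans (marker detection is not shown), the rigid transform aligning the scans can be recovered from the matched marker centroids with the standard SVD-based (Kabsch) solution sketched below; this is a generic illustration, not the authors' registration pipeline.

```python
# Generic SVD-based (Kabsch) rigid alignment from matched marker points; marker
# detection on the triangular pyramid targets is assumed to have been done already.
import numpy as np

def rigid_transform_from_markers(src, dst):
    """Return R, t such that R @ src[i] + t approximately equals dst[i]."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance of the markers
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```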


2021
Vol 5 (1)
Author(s):  
Yinan Wang ◽  
Diane Oyen ◽  
Weihong (Grace) Guo ◽  
Anishi Mehta ◽  
Cory Braker Scott ◽  
...  

Catastrophic failure in brittle materials is often due to the rapid growth and coalescence of cracks aided by high internal stresses. Hence, accurate prediction of the maximum internal stress is critical to predicting the time to failure and improving the fracture resistance and reliability of materials. Existing high-fidelity methods, such as the Finite-Discrete Element Model (FDEM), are limited by their high computational cost. Therefore, to reduce computational cost while preserving accuracy, a deep learning model, StressNet, is proposed to predict the entire sequence of maximum internal stress based on fracture propagation and the initial stress data. More specifically, a Temporal Independent Convolutional Neural Network (TI-CNN) is designed to capture the spatial features of fractures, such as the fracture path and spall regions, and a Bidirectional Long Short-Term Memory (Bi-LSTM) network is adapted to capture the temporal features. By fusing these features, the evolution in time of the maximum internal stress can be accurately predicted. Moreover, an adaptive loss function is designed by dynamically integrating the Mean Squared Error (MSE) and the Mean Absolute Percentage Error (MAPE) to reflect the fluctuations in maximum internal stress. After training, the proposed model computes accurate multi-step predictions of maximum internal stress in approximately 20 seconds, compared with an FDEM run time of 4 h, with an average MAPE of 2% relative to the test data.
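
A hedged sketch of the adaptive-loss idea: blend MSE, which penalizes large absolute errors, with MAPE, which is sensitive to relative errors around fluctuations. The specific weighting rule below, which shifts toward MAPE as the MSE shrinks, is an illustrative assumption rather than the paper's exact formulation.

```python
# Illustrative adaptive loss blending MSE and MAPE; the weighting rule (more MAPE as
# the MSE shrinks) is an assumption, not the paper's exact formulation.
import torch

def adaptive_mse_mape_loss(pred, target, eps=1e-6):
    mse = torch.mean((pred - target) ** 2)
    mape = torch.mean(torch.abs((pred - target) / (target.abs() + eps)))
    alpha = torch.sigmoid(-torch.log(mse + eps))   # weight on MAPE grows as MSE falls
    return alpha * mape + (1.0 - alpha) * mse
```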

