Data-Driven Railway Crosstie Support Condition Prediction Using Deep Residual Neural Network: Algorithm and Application

Author(s):  
Bin Feng ◽  
Zhongyi Liu ◽  
Erol Tutumluer ◽  
Hai Huang

Ballasted track substructure is designed and constructed to provide uniform crosstie support and serve the functions of drainage and load distribution over the trackbed. Poor and nonuniform support conditions can cause excessive crosstie vibration, which negatively affects the crosstie flexural bending behavior. Furthermore, ballast–tie gaps and large contact forces at the crosstie–ballast interface will result in accelerated ballast layer degradation and settlement accumulation. Inspection of crosstie support conditions is therefore necessary, yet very challenging to implement using current methods and technologies. Based on deep learning artificial intelligence techniques and a developed residual neural network (ResNet), this paper introduces an innovative data-driven prediction approach for crosstie support conditions, demonstrated on a full-scale ballasted track laboratory experiment. The discrete element method (DEM) is leveraged to provide training and testing data sets for the proposed prediction model. K-means clustering is applied to establish ballast layer subsections with representative ballast particles and provide additional insights on layer zoning for dynamic behavior trends. When provided with DEM-simulated particle vertical accelerations, the proposed deep learning ResNet could achieve 100% training and 95.8% testing accuracy. Fed with vertical acceleration measurements captured by advanced “SmartRock” sensors from a full-scale ballasted track laboratory experiment, the trained model could successfully reach a high accuracy of 92.0%. Based on the developed deep learning approach and the research findings presented in this paper, the innovative crosstie support condition prediction system is envisioned to provide railroaders with accurate, timely, and repeatable inspection and monitoring opportunities without disrupting railway network operations.
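The abstract does not give the ResNet architecture; as a rough illustration of the setup it describes, the sketch below shows a minimal 1D ResNet-style classifier that maps windows of particle vertical acceleration to support-condition classes. The window length, channel widths, and number of classes are assumptions, not the network used in the paper.

```python
# Minimal sketch (PyTorch) of a 1D residual classifier for crosstie support
# condition prediction from particle vertical acceleration windows.
# Window length, channel widths, and the class count are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)              # identity shortcut (residual connection)

class SupportConditionResNet(nn.Module):
    def __init__(self, n_classes=4):            # assumed number of support conditions
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3),
                                  nn.BatchNorm1d(32), nn.ReLU())
        self.blocks = nn.Sequential(ResBlock1d(32), ResBlock1d(32))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, 1, window_length)
        h = self.blocks(self.stem(x))
        h = h.mean(dim=-1)                       # global average pooling over time
        return self.head(h)

model = SupportConditionResNet()
logits = model(torch.randn(8, 1, 512))           # 8 windows of 512 acceleration samples
print(logits.shape)                              # torch.Size([8, 4])
```

In such a setup, each acceleration window would be labeled with the known support condition of the corresponding DEM simulation or laboratory test.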

2021 ◽  
Author(s):  
Emilio J. R. Coutinho ◽  
Marcelo J. Aqua ◽  
Eduardo Gildin

Abstract Physics-aware machine learning (ML) techniques have been used to endow data-driven proxy models with features closely related to the ones encountered in nature. Examples range from material balance to conservation laws. Physics-based and data-driven reduced-order models, or a combination thereof (hybrid models), can lead to fast, reliable, and interpretable simulations used in many reservoir management workflows. We build on a recently developed deep-learning-based reduced-order modeling framework by adding a new step related to the input-output behavior (e.g., well rates) of the reservoir, rather than matching only the states (e.g., pressure and saturation). A combination of data-driven model reduction strategies and machine learning (deep neural networks, NN) is used here to achieve state and input-output matching simultaneously. In Jin, Liu and Durlofsky (2020), the authors use an NN architecture that predicts the evolution of the state variables by training an autoencoder coupled with a control system approach (Embed to Control, E2C) and adding physical components (loss functions) to the neural network training procedure. In this paper, we extend this idea by adding the simulation model outputs, e.g., well bottom-hole pressures and well flow rates, as data to be used in the training procedure. Additionally, we add a new neural network to the E2C transition model to handle the connections between state variables and model outputs. By doing this, it is possible to estimate the evolution in time of both the state variables and the output variables simultaneously. The proposed method provides a fast and reliable proxy for the simulation output, which can be applied to well-control optimization problems. Being non-intrusive, like other data-driven models, it does not need access to the reservoir simulator's internal structure and can therefore be easily applied to commercial reservoir simulators. We view this as a step analogous to system identification, whereby mappings related to state dynamics, inputs (controls), and measurements (outputs) are obtained. The proposed method is applied to an oil-water model with heterogeneous permeability, 4 injectors, and 5 producer wells. We used 300 sampled well-control sets to train the autoencoder and another set to validate the obtained autoencoder parameters.
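To make the extension concrete, the sketch below illustrates the general pattern of augmenting an E2C-style latent transition model with an output head for well responses (e.g., bottom-hole pressures or rates). The fully connected encoder/decoder, the linear latent transition, and all dimensions are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch (PyTorch): latent surrogate with a state autoencoder, a linear
# control-conditioned transition, and an added output network for well responses.
# Layer sizes, latent dimension, and the transition form are assumptions.
import torch
import torch.nn as nn

class LatentSurrogate(nn.Module):
    def __init__(self, state_dim=2500, control_dim=9, latent_dim=50, output_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, state_dim))
        # linear latent transition conditioned on controls: z' = A z + B u
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)
        self.B = nn.Linear(control_dim, latent_dim, bias=False)
        # added output network: well responses from the latent state and controls
        self.output_net = nn.Sequential(nn.Linear(latent_dim + control_dim, 128),
                                        nn.ReLU(), nn.Linear(128, output_dim))

    def forward(self, state, control):
        z = self.encoder(state)
        z_next = self.A(z) + self.B(control)           # latent dynamics step
        state_next = self.decoder(z_next)              # predicted next state field
        y_next = self.output_net(torch.cat([z_next, control], dim=-1))
        return state_next, y_next

model = LatentSurrogate()
state, controls = torch.randn(4, 2500), torch.randn(4, 9)   # 4 samples of gridded state and well controls
state_next, well_outputs = model(state, controls)
```

Training such a surrogate would combine a state-reconstruction loss with an output-matching loss so that states and well responses are matched simultaneously, as the abstract describes.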


Author(s):  
Huixin Yang ◽  
Xiang Li ◽  
Wei Zhang

Abstract Despite the rapid development of deep learning-based intelligent fault diagnosis methods for rotating machinery, the data-driven approach generally remains a "black box" to researchers, and its internal mechanism has not been sufficiently understood. This weak interpretability significantly impedes further development and application of effective deep neural network-based methods. This paper contributes to understanding how deep learning processes mechanical signals in fault diagnosis problems. The diagnostic knowledge learned by the deep neural network is visualized using neuron activation maximization and saliency map methods. The discriminative features of different machine health conditions are intuitively observed. The relationship between the data-driven methods and well-established conventional fault diagnosis knowledge is confirmed by experimental investigations on two datasets. The results of this study can help researchers understand complex neural networks and increase the reliability of data-driven fault diagnosis models in real engineering cases.
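As an illustration of the saliency map idea used here, the sketch below computes the gradient of a predicted class score with respect to the input vibration signal for a trained diagnosis network; `model` and the input shape are placeholders, not the authors' code.

```python
# Minimal sketch (PyTorch) of a saliency map for a trained fault-diagnosis network:
# |d(class score)/d(input)| highlights which input samples drive the decision.
import torch

def saliency_map(model, signal, target_class=None):
    """signal: tensor of shape (1, channels, length); returns per-sample importance."""
    model.eval()
    signal = signal.clone().requires_grad_(True)
    scores = model(signal)                          # (1, n_classes)
    if target_class is None:
        target_class = scores.argmax(dim=1).item()  # explain the predicted class
    model.zero_grad()
    scores[0, target_class].backward()              # gradient of the class score w.r.t. the input
    return signal.grad.abs().squeeze(0)             # saliency per input sample

# usage: saliency = saliency_map(trained_model, vibration_window)  # placeholder names
```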


2021 ◽  
Author(s):  
Julio Aguilar ◽  
Laura Sandoval ◽  
Arturo Rodriguez ◽  
Sanjay Shantha Kumar ◽  
Jose Terrazas ◽  
...  

Abstract In seeking predictive characterization of ultra-high-temperature materials for hypersonic vehicles, we use a convolutional neural network to characterize the behavior of liquid Al-Sm-X (Hf, Zr, Ti) alloys within a B4C packed bed and to determine the reaction products, which are usually identified with the scanning electron microscope (SEM) or X-ray diffraction (XRD) at ultra-high temperatures (>1600°C). Our ultimate goal is to predict the products formed as liquid Al-Sm-X (Hf, Zr, Ti) alloys infiltrate a B4C packed bed. Material characterization determines the processing path and final species of the reacting infusion, which consists of fluid flow through porous channels, consumption of elemental components, and reactions forming boride and carbide precipitates. Since characterization is time-consuming and requires domain expertise, our approach is to characterize and track these species using a Convolutional Neural Network (CNN) to facilitate and automate image analysis. Although deep learning offers an automated prediction approach, several challenges remain difficult to overcome, including the data required, accuracy, training time, and the computational cost of a CNN. Our approach was to perform high-temperature metal infusion experiments in B4C packed beds over a parametric matrix of cases. We characterized the results using SEM and XRD images and trained and optimized our CNN, which yields an innovative deep-learning-based characterization method compared to traditional practices.
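As a rough illustration of the kind of classifier involved, the sketch below shows a small CNN that assigns SEM image patches to reaction-product classes. The class list (boride, carbide, residual alloy, B4C matrix), patch size, and architecture are assumptions for illustration only, not the network trained in this work.

```python
# Minimal sketch (PyTorch) of a small CNN classifying grayscale SEM patches into
# assumed reaction-product classes; not the network or labels used in the paper.
import torch
import torch.nn as nn

class PhaseCNN(nn.Module):
    def __init__(self, n_classes=4):            # assumed classes: boride, carbide, alloy, B4C
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, 1, H, W) grayscale patches
        return self.classifier(self.features(x).flatten(1))

model = PhaseCNN()
predictions = model(torch.randn(2, 1, 128, 128)).argmax(dim=1)   # predicted phase per patch
```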


2021 ◽  
Vol 7 (2) ◽  
pp. 625-628
Author(s):  
Jan Oldenburg ◽  
Julian Renkewitz ◽  
Michael Stiehm ◽  
Klaus-Peter Schmitz

Abstract It is commonly accepted that the hemodynamic situation is related to cardiovascular diseases as well as clinical post-procedural outcome. In particular, aortic valve stenosis and insufficiency are associated with high-shear flow and increased pressure loss. Furthermore, regurgitation, high shear stress, and regions of stagnant blood flow are presumed to have an impact on the clinical result. Therefore, flow field assessment to characterize the hemodynamic situation is necessary for device evaluation and further design optimization. In-vitro as well as in-silico fluid mechanics methods can be used to investigate the flow through prostheses. In-silico solutions are based on mathematical equations which need to be solved numerically (Computational Fluid Dynamics, CFD). Fundamentally, the flow is physically described by the Navier-Stokes equations. CFD often incurs high computational cost, resulting in long computation times. Techniques based on deep learning are under research to overcome this problem. In this study, we applied a deep learning strategy to estimate fluid flows during peak-systolic, steady-state blood flow through mechanical aortic valves with varying opening angles in randomly generated aortic root geometries. We used a data-driven approach by running 3,500 two-dimensional simulations (CFD). The simulation data serve as training data in a supervised deep learning framework based on convolutional neural networks analogous to the U-Net architecture. We were able to successfully train the neural network using this supervised data-driven approach. The results show that it is feasible to use a neural network to estimate physiological flow fields in the vicinity of prosthetic heart valves (validation error below 0.06) by providing only geometry data (images) to the network. The neural network generates flow field predictions in real time, more than 2,500 times faster than the CFD simulation. Accordingly, there is tremendous potential in the use of AI-based approaches for predicting blood flow through heart valves on the basis of geometry data, especially in applications where fast fluid mechanics predictions are desired.
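As a rough illustration of the geometry-to-flow-field setup described above, the sketch below shows a small U-Net-style encoder-decoder that maps a single-channel geometry image to a two-component velocity field. The depth, channel counts, and resolution are assumptions, not the network trained in the study.

```python
# Minimal sketch (PyTorch) of a U-Net-style surrogate: geometry image in,
# two-component velocity field out. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)               # 64 = upsampled 32 + skip 32
        self.out = nn.Conv2d(32, 2, 1)               # two velocity components (u, v)

    def forward(self, geometry):                     # geometry: (batch, 1, H, W) mask/image
        e1 = self.enc1(geometry)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.out(d1)

flow = MiniUNet()(torch.randn(1, 1, 128, 128))        # predicted (u, v) field at input resolution
```

Once trained on CFD snapshots, a forward pass of such a network replaces a full CFD solve, which is where the reported real-time speedup comes from.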


Energies ◽  
2021 ◽  
Vol 14 (16) ◽  
pp. 5150
Author(s):  
Shiza Mushtaq ◽  
M. M. Manjurul Islam ◽  
Muhammad Sohaib

This paper presents a comprehensive review of the developments made during the past decade in bearing fault diagnosis, bearings being a crucial component of rotating machines. A data-driven fault diagnosis framework consists of data acquisition, feature extraction/feature learning, and decision making based on shallow/deep learning algorithms. In this review paper, various signal processing techniques, classical machine learning approaches, and deep learning algorithms used for bearing fault diagnosis are discussed. Moreover, highlights of the available public datasets that have been widely used in bearing fault diagnosis experiments, such as Case Western Reserve University (CWRU), Paderborn University Bearing, PRONOSTIA, and Intelligent Maintenance Systems (IMS), are discussed in this paper. A comparison is presented of machine learning techniques, such as support vector machines, k-nearest neighbors, and artificial neural networks, and of deep learning algorithms, such as the deep convolutional neural network (CNN), auto-encoder-based deep neural network (AE-DNN), deep belief network (DBN), deep recurrent neural network (RNN), and other deep learning methods, that have been utilized for the diagnosis of bearing faults in rotary machines.
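As a minimal illustration of the classical data-driven pipeline the review describes (data acquisition, feature extraction, shallow-classifier decision making), the sketch below compares an SVM and a k-NN classifier on simple hand-crafted time-domain features. The synthetic signals and feature set are placeholders, not one of the benchmark datasets listed above.

```python
# Minimal sketch (scikit-learn) of a classical bearing fault diagnosis pipeline:
# time-domain features from vibration windows, then shallow classifiers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def extract_features(windows):
    """Simple time-domain statistics per vibration window (RMS, peak, crest factor)."""
    rms = np.sqrt((windows ** 2).mean(axis=1))
    peak = np.abs(windows).max(axis=1)
    crest = peak / rms
    return np.column_stack([rms, peak, crest])

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 2048))               # placeholder vibration windows
labels = rng.integers(0, 4, size=200)                # placeholder fault classes

X = extract_features(signals)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")), ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)     # scale features, then classify
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", model.score(X_te, y_te))
```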


2020 ◽  
Vol 369 ◽  
pp. 113217 ◽  
Author(s):  
Hamid Reza Tamaddon-Jahromi ◽  
Neeraj Kavan Chakshu ◽  
Igor Sazonov ◽  
Llion M. Evans ◽  
Hywel Thomas ◽  
...  

2020 ◽  
Vol 31 (13) ◽  
pp. 1346-1354 ◽  
Author(s):  
Yukiko Nagao ◽  
Mika Sakamoto ◽  
Takumi Chinen ◽  
Yasushi Okada ◽  
Daisuke Takao

By applying convolutional neural network-based classifiers, we demonstrate that cell images can be robustly classified according to cell cycle phases. Combined with Grad-CAM analysis, our approach enables us to extract biological features underlying cellular phenomena of interest in an unbiased and data-driven manner.
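As an illustration of the Grad-CAM step, the sketch below computes a class-discriminative heat map from the activations and gradients of a chosen convolutional layer; `model` and `target_layer` are placeholders for a trained cell-image classifier and one of its late convolutional layers, not the authors' code.

```python
# Minimal sketch (PyTorch) of Grad-CAM: gradient-weighted activations of a chosen
# convolutional layer, upsampled to the input resolution as a heat map.
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, target_class=None):
    """image: (1, C, H, W). Returns a heat map in [0, 1] at the input resolution."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        model.eval()
        scores = model(image)
        cls = scores.argmax(dim=1).item() if target_class is None else target_class
        model.zero_grad()
        scores[0, cls].backward()                                  # backprop the class score
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)        # global-average-pooled gradients
        cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    finally:
        h1.remove()
        h2.remove()
```

Overlaying such a heat map on the cell image is what allows the class-relevant regions, and hence candidate biological features, to be inspected.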


2020 ◽  
Vol 10 (14) ◽  
pp. 4999
Author(s):  
Dongbo Shi ◽  
Lei Sun ◽  
Yonghui Xie

The reliable design of the supercritical carbon dioxide (S-CO2) turbine is at the core of advanced S-CO2 power generation technology. However, the traditional computational fluid dynamics (CFD) method usually applied in S-CO2 turbine design optimization is a solver with high computational cost, high memory requirements, and long run times. In this research, a flexible end-to-end deep learning approach is presented for the off-design performance prediction of the S-CO2 turbine based on physical field reconstruction. Our approach consists of three steps: first, an optimal design of a 60,000 rpm S-CO2 turbine is established. Second, five design variables for off-design analysis are selected to reconstruct the temperature and pressure fields on the blade surface through a deconvolutional neural network. Finally, the power and efficiency of the turbine are predicted by a convolutional neural network from the reconstructed fields. The results show that the prediction approach not only outperforms five classical machine learning models but also reflects the physical mechanisms of turbine design. In addition, once the deep model is well trained, GPU-accelerated calculation can quickly predict the physical fields and performance. This prediction approach requires less human intervention and has the advantages of being universal, flexible, and easy to implement.
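As a rough illustration of the two-stage approach, the sketch below pairs a deconvolutional network that maps the five off-design variables to blade-surface fields with a small CNN that predicts power and efficiency from those fields. Field resolution, channel counts, and layer sizes are assumptions, not the trained models from the paper.

```python
# Minimal sketch (PyTorch) of the two-stage idea: design variables -> reconstructed
# temperature/pressure fields -> predicted power and efficiency. Sizes are assumptions.
import torch
import torch.nn as nn

class FieldReconstructor(nn.Module):
    """Five off-design variables -> 2-channel (temperature, pressure) field, 64x64."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(5, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1))                # 64x64

    def forward(self, design_vars):
        return self.deconv(self.fc(design_vars).view(-1, 128, 8, 8))

class PerformancePredictor(nn.Module):
    """Reconstructed fields -> (power, efficiency)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 2))

    def forward(self, fields):
        return self.net(fields)

design = torch.randn(4, 5)                   # four off-design operating points
fields = FieldReconstructor()(design)        # (4, 2, 64, 64) temperature/pressure maps
power_eff = PerformancePredictor()(fields)   # (4, 2) predicted power and efficiency
```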


2021 ◽  
Vol 11 (22) ◽  
pp. 10935
Author(s):  
Hongju Zhou ◽  
Liping Sun ◽  
Hongwei Zhou ◽  
Man Zhao ◽  
Xinpei Yuan ◽  
...  

The health of trees has become an important issue in forestry. How to detect the health of trees quickly and accurately has become a key area of research for scholars around the world. In this paper, a model for detecting internal defects in living trees is established and analyzed using model-driven theory, and the theoretical fundamentals and implementation of the algorithm are clarified. The location information of the defects inside the trees is obtained by setting a relative permittivity matrix. The data-driven inversion algorithm is realized by using a model-driven algorithm to optimize the deep convolutional neural network, combining the advantages of model-driven and data-driven algorithms. The inversion algorithms compared, the BP neural network inversion algorithm and the model-driven deep learning network inversion algorithm, are analyzed through simulations. The results show that the model-driven deep learning network inversion algorithm maintains a detection accuracy of more than 90% for single defects or homogeneous double defects, while it still achieves a detection accuracy of 78.3% for heterogeneous multiple defects. In the simulations, the single-defect detection time of the model-driven deep learning network inversion algorithm is kept within 0.1 s. Additionally, the proposed method overcomes the high nonlinearity and ill-posedness of electromagnetic inverse scattering and reduces the time cost and computational complexity of detecting internal defects in trees. The results show that resolution and accuracy are improved in the inversion images of the internal defects of trees.
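As a generic illustration of coupling a model-driven step with a data-driven refinement, the sketch below refines a coarse, physics-based permittivity estimate (here a placeholder linear back-projection) with a residual CNN. This is only a sketch of the general pattern, not the specific inversion algorithm developed in the paper.

```python
# Minimal sketch (PyTorch) of a model-driven initial estimate refined by a CNN.
# The back-projection operator, measurement sizes, and network are placeholders.
import torch
import torch.nn as nn

def model_driven_initial_guess(measurements, back_projection):
    """Placeholder model-driven step: project scattered-field data onto the imaging grid."""
    coarse = measurements @ back_projection           # (batch, H*W)
    side = int(coarse.shape[-1] ** 0.5)
    return coarse.view(-1, 1, side, side)

class RefinementCNN(nn.Module):
    """Data-driven step: refine the coarse permittivity map to sharpen defect boundaries."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, coarse_map):
        return coarse_map + self.net(coarse_map)      # residual correction of the physics estimate

measurements = torch.randn(2, 64)                     # placeholder scattered-field data
operator = torch.randn(64, 32 * 32)                   # placeholder back-projection operator
coarse = model_driven_initial_guess(measurements, operator)
refined = RefinementCNN()(coarse)                     # refined permittivity map (2, 1, 32, 32)
```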

