Prediction of Time-Dependent Structural Behavior with Recurrent Neural Networks

Author(s): Wolfgang Graf ◽ Steffen Freitag ◽ Jan-Uwe Sickert ◽ Michael Kaliske

Sensors ◽ 2020 ◽ Vol 20 (16) ◽ pp. 4493
Author(s): Rui Silva ◽ António Araújo

Condition monitoring is a fundamental part of machining and other manufacturing processes in which parts wear out and have to be replaced. Devising proper condition monitoring has concerned many researchers, yet existing approaches still lack robustness and efficiency, most often hindered by system complexity or limited by the inherently noisy signals characteristic of industrial processes. The vast majority of condition monitoring approaches do not take the temporal sequence into account when modelling and hence lose an intrinsic part of the context of a time-dependent process such as cutting. The proposed system uses a multisensory approach to gather information from the cutting process, which is then modelled by a recurrent neural network that captures the evolving pattern of wear over time. The system was tested under realistic cutting conditions, and the results show high effectiveness and accuracy with only a few cutting tests. The use of recurrent neural networks demonstrates the potential of this approach for other time-dependent industrial processes under noisy conditions.
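As a rough illustration of the kind of model this abstract describes, the sketch below maps windows of multisensor cutting signals to a scalar tool-wear estimate with an LSTM. It is not the authors' implementation; the sensor count, window length, layer sizes, and the use of PyTorch are illustrative assumptions.

    # Minimal sketch (assumed architecture, not the paper's code): an LSTM that maps
    # windows of multisensor cutting signals to a tool-wear estimate.
    import torch
    import torch.nn as nn

    class WearRNN(nn.Module):
        def __init__(self, n_sensors=4, hidden_size=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden_size,
                                batch_first=True)
            self.head = nn.Linear(hidden_size, 1)  # scalar wear estimate

        def forward(self, x):                # x: (batch, time, n_sensors)
            out, _ = self.lstm(x)            # hidden states carry the temporal context
            return self.head(out[:, -1, :])  # predict wear from the last time step

    model = WearRNN()
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on a dummy batch: 8 windows, 200 time samples, 4 sensor channels.
    x = torch.randn(8, 200, 4)
    y = torch.rand(8, 1)                     # measured wear values (placeholder data)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

Feeding whole signal windows through the recurrent layer, rather than hand-crafted summary statistics, is what lets such a model exploit the temporal sequence the abstract emphasizes.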


2021 ◽ Vol 15
Author(s): Saeedeh Hashemnia ◽ Lukas Grasse ◽ Shweta Soni ◽ Matthew S. Tata

Recent deep-learning artificial neural networks have shown remarkable success in recognizing natural human speech; however, the reasons for their success are not entirely understood. Their success might stem from the fact that state-of-the-art networks use recurrent layers or dilated convolutional layers, which give the network a time-dependent feature space. The importance of time-dependent features in human cortical mechanisms of speech perception, measured by electroencephalography (EEG) and magnetoencephalography (MEG), has also been of particular recent interest. It is possible that recurrent neural networks (RNNs) achieve their success by emulating aspects of cortical dynamics, albeit through very different computational mechanisms. In that case, we should observe commonalities between the temporal dynamics of deep-learning models, particularly in recurrent layers, and brain electrical activity (EEG) during speech perception. We explored this prediction by presenting the same sentences to both human listeners and the Deep Speech RNN and comparing the temporal dynamics of the EEG and of the RNN units for identical sentences. We tested whether the recently discovered phenomenon of envelope phase tracking in the human EEG is also evident in RNN hidden layers. We furthermore predicted that the clustering of dissimilarity between model representations of pairs of stimuli would be similar in RNN and EEG dynamics. We found that the dynamics of both the recurrent layer of the network and human EEG signals exhibit envelope phase tracking with similar time lags. We also computed the representational distance matrices (RDMs) of brain and network responses to speech stimuli. The model RDMs became more similar to the brain RDM when going from early network layers to later ones, eventually peaking at the recurrent layer. These results suggest that the Deep Speech RNN captures a representation of temporal features of speech in a manner similar to the human brain.
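To make the layerwise RDM comparison concrete, here is a minimal sketch of a representational-distance analysis, not the study's code: dissimilarity between stimuli is taken as 1 minus the Pearson correlation of their feature vectors, and each layer RDM is compared with the brain RDM via the Spearman correlation of their upper triangles. The layer names, feature dimensions, and random data are placeholders.

    # Sketch of a representational-distance analysis (assumed pipeline, placeholder data).
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    def rdm(features):
        # features: (n_stimuli, n_features) -> (n_stimuli, n_stimuli) dissimilarity matrix,
        # using 1 - Pearson correlation between stimulus feature vectors
        return squareform(pdist(features, metric="correlation"))

    def rdm_similarity(rdm_a, rdm_b):
        # compare only the off-diagonal upper triangles of the two RDMs
        iu = np.triu_indices_from(rdm_a, k=1)
        rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
        return rho

    rng = np.random.default_rng(0)
    n_stimuli = 20
    eeg_rdm = rdm(rng.standard_normal((n_stimuli, 64)))            # e.g. 64 EEG channels
    layer_dims = {"conv1": 512, "conv2": 512, "recurrent": 1024}   # illustrative layer sizes
    for name, dim in layer_dims.items():
        layer_rdm = rdm(rng.standard_normal((n_stimuli, dim)))
        print(name, rdm_similarity(eeg_rdm, layer_rdm))

Rank correlation is a common choice for comparing RDMs because it is insensitive to monotonic rescaling of the dissimilarities across very different measurement spaces such as EEG channels and network activations.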


PAMM ◽ 2010 ◽ Vol 10 (1) ◽ pp. 155-156
Author(s): Steffen Freitag ◽ Wolfgang Graf ◽ Michael Kaliske

2011 ◽ Vol 89 (21-22) ◽ pp. 1971-1981
Author(s): S. Freitag ◽ W. Graf ◽ M. Kaliske ◽ J.-U. Sickert
