Remaining Useful Life Estimation Using Neural Ordinary Differential Equations

Author(s):  
Marco Star ◽  
Kristoffer McKee

Data-driven machinery prognostics has seen increasing popularity recently, especially as the effectiveness of deep learning methods has grown. However, deep learning methods lack useful properties such as uncertainty quantification of their outputs, and they are black boxes by nature. Neural ordinary differential equations (NODEs) use neural networks to define differential equations that propagate data from the inputs to the outputs. They can be seen as a continuous generalization of a popular network architecture used for image recognition known as the Residual Network (ResNet). This paper compares the performance of the two networks on machinery prognostics tasks to show the validity of NODEs for machinery prognostics. The comparison is done using NASA’s Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset, which simulates the sensor readings of degrading turbofan engines. To compare both architectures, they are set up as convolutional neural networks and the sensor signals are transformed to the time-frequency domain through the short-time Fourier transform (STFT). The spectrograms from the STFT are the input images to the networks and the output is the estimated remaining useful life (RUL); hence, the task is turned into an image recognition task. The results show that NODEs can compete with state-of-the-art machinery prognostics methods. While the NODE model does not beat the state-of-the-art method, it comes close enough to warrant further research into NODEs. The potential benefits of using NODEs instead of other network architectures are also discussed in this work.
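A minimal sketch of the relationship the abstract describes, not the authors' implementation: a discrete ResNet block applies the update x_{k+1} = x_k + f(x_k), while a neural ODE treats the same f as the right-hand side of dx/dt = f(x, t) and integrates it, here with a simple fixed-step Euler loop. Layer sizes, step count, and the toy input are assumptions.

```python
# Minimal sketch (not the authors' code): the discrete ResNet update
# x_{k+1} = x_k + f(x_k) versus its continuous limit dx/dt = f(x, t),
# integrated here with fixed-step Euler. Layer sizes are illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.f(x)            # discrete step: x_{k+1} = x_k + f(x_k)

class NeuralODEBlock(nn.Module):
    def __init__(self, dim, steps=10):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.steps = steps

    def forward(self, x, t0=0.0, t1=1.0):
        h = (t1 - t0) / self.steps
        for _ in range(self.steps):     # Euler integration of dx/dt = f(x)
            x = x + h * self.f(x)
        return x

x = torch.randn(8, 32)                  # batch of 8 feature vectors
print(ResidualBlock(32)(x).shape, NeuralODEBlock(32)(x).shape)
```

In practice an adaptive ODE solver would replace the Euler loop, which is what gives NODEs their continuous-depth interpretation.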

2020 ◽  
Vol 14 ◽  
Author(s):  
Yaqing Zhang ◽  
Jinling Chen ◽  
Jen Hong Tan ◽  
Yuxuan Chen ◽  
Yunyi Chen ◽  
...  

Emotion is the human brain reacting to objective things. In real life, human emotions are complex and changeable, so research into emotion recognition is of great significance in real life applications. Recently, many deep learning and machine learning methods have been widely applied in emotion recognition based on EEG signals. However, the traditional machine learning method has a major disadvantage in that the feature extraction process is usually cumbersome, which relies heavily on human experts. Then, end-to-end deep learning methods emerged as an effective method to address this disadvantage with the help of raw signal features and time-frequency spectrums. Here, we investigated the application of several deep learning models to the research field of EEG-based emotion recognition, including deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid model of CNN and LSTM (CNN-LSTM). The experiments were carried on the well-known DEAP dataset. Experimental results show that the CNN and CNN-LSTM models had high classification performance in EEG-based emotion recognition, and their accurate extraction rate of RAW data reached 90.12 and 94.17%, respectively. The performance of the DNN model was not as accurate as other models, but the training speed was fast. The LSTM model was not as stable as the CNN and CNN-LSTM models. Moreover, with the same number of parameters, the training speed of the LSTM was much slower and it was difficult to achieve convergence. Additional parameter comparison experiments with other models, including epoch, learning rate, and dropout probability, were also conducted in the paper. Comparison results prove that the DNN model converged to optimal with fewer epochs and a higher learning rate. In contrast, the CNN model needed more epochs to learn. As for dropout probability, reducing the parameters by ~50% each time was appropriate.
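A minimal sketch of a generic CNN-LSTM hybrid of the kind compared above, not the paper's exact architecture: a 1D CNN extracts a feature vector from each EEG window, an LSTM models the window sequence, and a linear head classifies. The channel count, window length, and layer sizes are assumptions.

```python
# Minimal sketch (assumed shapes, not the paper's exact architecture):
# CNN per-window feature extraction -> LSTM over windows -> classifier.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, eeg_channels=32, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(eeg_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),    # one feature vector per window
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):               # x: (batch, windows, channels, samples)
        b, w, c, s = x.shape
        feats = self.cnn(x.view(b * w, c, s)).squeeze(-1).view(b, w, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])

logits = CNNLSTM()(torch.randn(4, 10, 32, 128))   # 4 trials, 10 windows each
print(logits.shape)                                # -> (4, 2)
```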


2016 ◽  
Vol 21 (9) ◽  
pp. 998-1003 ◽  
Author(s):  
Oliver Dürr ◽  
Beate Sick

Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening–based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and was carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.
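A minimal sketch of the reference-style pipeline the abstract compares against, with synthetic data standing in for the handcrafted per-cell features; the feature counts, class count, and split are illustrative assumptions, and feature extraction itself is not shown.

```python
# Minimal sketch of the reference pipeline: precomputed per-cell features
# fed to classical classifiers. Data here is synthetic stand-in only.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))        # 2000 cells x 50 handcrafted features
y = rng.integers(0, 4, size=2000)      # 4 phenotype classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for clf in (SVC(), LinearDiscriminantAnalysis(), RandomForestClassifier()):
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(type(clf).__name__, f"misclassification rate = {1 - acc:.3f}")
```

The deep learning approach replaces the feature-extraction step entirely, learning the representation directly from the multichannel images.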


Author(s):  
Dong-Dong Chen ◽  
Wei Wang ◽  
Wei Gao ◽  
Zhi-Hua Zhou

Deep neural networks have achieved great success in various real applications, but they require a large amount of labeled data for training. In this paper, we propose tri-net, a deep neural network that is able to use massive unlabeled data to help learning with limited labeled data. We consider model initialization, diversity augmentation, and pseudo-label editing simultaneously. In our work, we utilize output smearing to initialize the modules, use fine-tuning on labeled data to augment diversity, and eliminate unstable pseudo-labels to alleviate the influence of suspicious pseudo-labeled data. Experiments show that our method achieves the best performance in comparison with state-of-the-art semi-supervised deep learning methods. In particular, it achieves an 8.30% error rate on CIFAR-10 by using only 4000 labeled examples.
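A minimal sketch of output smearing, the initialization trick named in the abstract, in a simplified form rather than the tri-net implementation: noise is added to the one-hot labels so that each module starts from slightly different targets and the modules begin diverse. The noise scale and clipping are illustrative assumptions.

```python
# Minimal sketch of output smearing (simplified, not the tri-net code):
# each module is initialized on noisy versions of the one-hot labels.
import numpy as np

def smear_labels(y, n_classes, std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    one_hot = np.eye(n_classes)[y]                     # (n, n_classes)
    smeared = one_hot + std * rng.standard_normal(one_hot.shape)
    return np.clip(smeared, 0.0, None)                 # keep targets non-negative

y = np.array([0, 2, 1, 1, 3])
# Three differently-seeded smeared target sets -> three diverse modules.
targets = [smear_labels(y, n_classes=4, seed=s) for s in range(3)]
print(targets[0].round(2))
```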


2019 ◽  
Author(s):  
Alireza Yazdani ◽  
Lu Lu ◽  
Maziar Raissi ◽  
George Em Karniadakis

Abstract
Mathematical models of biological reactions at the system level lead to a set of ordinary differential equations with many unknown parameters that need to be inferred using relatively few experimental measurements. Having a reliable and robust algorithm for parameter inference and prediction of the hidden dynamics has been one of the core subjects in systems biology, and is the focus of this study. We have developed a new systems-biology-informed deep learning algorithm that incorporates the system of ordinary differential equations into the neural networks. Enforcing these equations effectively adds constraints to the optimization procedure that manifest themselves as an imposed structure on the observational data. Using a few scattered and noisy measurements, we are able to infer the dynamics of unobserved species, external forcing, and the unknown model parameters. We have successfully tested the algorithm on three different benchmark problems.

Author summary
The dynamics of systems-biology processes are usually modeled using ordinary differential equations (ODEs), which introduce various unknown parameters that need to be estimated efficiently from noisy measurements of concentration for only a few species. In this work, we present a new “systems-informed neural network” to infer the dynamics of experimentally unobserved species as well as the unknown parameters in the system of equations. By incorporating the system of ODEs into the neural networks, we effectively add constraints to the optimization algorithm, which makes the method robust to noisy and sparse measurements.
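A minimal sketch of the underlying idea rather than the authors' code: a network x(t) is fit to a few noisy measurements of a toy one-species decay ODE, dx/dt = -k x, while the ODE residual is penalized through automatic differentiation and the rate constant k is learned jointly. The network size, learning rate, and synthetic data are assumptions.

```python
# Minimal sketch (toy problem, not the authors' code): fit x_theta(t) to
# sparse noisy data while penalizing the ODE residual dx/dt + k*x, with
# the unknown rate constant k inferred jointly via gradient descent.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
log_k = torch.zeros(1, requires_grad=True)               # unknown parameter k

t_data = torch.tensor([[0.0], [0.5], [1.5]])              # few noisy measurements
x_data = torch.exp(-2.0 * t_data) + 0.01 * torch.randn_like(t_data)
t_col = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    x_pred = net(t_col)
    dxdt = torch.autograd.grad(x_pred.sum(), t_col, create_graph=True)[0]
    loss_ode = ((dxdt + log_k.exp() * x_pred) ** 2).mean()   # enforce dx/dt = -k x
    loss_data = ((net(t_data) - x_data) ** 2).mean()
    (loss_data + loss_ode).backward()
    opt.step()
print("inferred k:", log_k.exp().item())                  # should approach 2.0
```

The systems-biology setting replaces the single toy equation with the full ODE system and adds residual terms for every species, observed or not.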


Author(s):  
Bin He ◽  
Long Liu ◽  
Dong Zhang

Abstract
As a transmission component, the gear has received widespread attention. The remaining useful life (RUL) prediction of gears is critical to the prognostics and health management (PHM) of gear transmission systems. The digital twin (DT) supports gear RUL prediction with the advantages of rich health-information data and accurate health indicators (HI). This paper reviews digital-twin-driven RUL prediction methods for gear performance degradation from two views: physical-model-based and virtual-model-based prediction methods. The physical-model-based view covers prediction models based on gear crack, gear fatigue, gear surface scratch, gear tooth breakage, and gear permanent deformation. The virtual-model-based view covers non-deep-learning methods and deep learning methods. Non-deep-learning methods include the Wiener process, the gamma process, the hidden Markov model (HMM), regression-based models, and the proportional hazards model. Deep learning methods include deep neural networks (DNN), deep belief networks (DBN), convolutional neural networks (CNN), recurrent neural networks (RNN), etc. The review mainly summarizes the performance degradation and life tests of the various models for gears, evaluates the advantages and disadvantages of the various methods, and encourages future work.
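A minimal sketch of one of the non-deep-learning methods listed above, a linear-drift Wiener degradation process: a degradation path is simulated, and the mean RUL at a given cycle is estimated as (threshold - current level) / drift, the mean of the inverse-Gaussian hitting time. The drift, diffusion, and threshold values are illustrative, not from any gear dataset.

```python
# Minimal sketch of a Wiener-process degradation model for RUL estimation.
# Drift, diffusion, and failure threshold are illustrative values only.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, dt, threshold = 0.05, 0.02, 1.0, 5.0     # drift, diffusion, step, limit

# Simulate one degradation path X_t = X_{t-1} + mu*dt + sigma*sqrt(dt)*eps.
x, path = 0.0, []
while x < threshold:
    x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    path.append(x)

# For a linear-drift Wiener process, the mean RUL at level x is
# (threshold - x) / mu (mean of the inverse-Gaussian first-passage time).
current_level = path[50]
rul_estimate = (threshold - current_level) / mu
print(f"cycles run: {len(path)}, estimated mean RUL at cycle 50: {rul_estimate:.1f}")
```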


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1078
Author(s):  
Ibon Merino ◽  
Jon Azpiazu ◽  
Anthony Remazeilles ◽  
Basilio Sierra

Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field to new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D counterparts. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D versions. Using an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing their recognition accuracy. The highest accuracy, 0.9217, is obtained with EfficientNetB0 using extrusion, which is comparable to state-of-the-art methods. We also observed that the transfer approach improved the accuracy of the 3D Inception ResNet version by up to 18% with respect to the score of the 3D approach alone.
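A minimal sketch of one common way to carry 2D pretrained convolution weights into a 3D convolution, often called inflation: the 2D kernel is repeated along the depth axis and rescaled so activations keep a similar magnitude. The paper refers to its transfer as "extrusion" and its exact procedure may differ; the layer sizes and voxel input below are assumptions.

```python
# Minimal sketch: transfer 2D conv weights into a 3D conv by repeating the
# kernel along depth and rescaling ("inflation"); the paper's "extrusion"
# transfer may differ in detail. Layer sizes are illustrative.
import torch
import torch.nn as nn

conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)     # stands in for a pretrained layer
conv3d = nn.Conv3d(3, 64, kernel_size=3, padding=1)

with torch.no_grad():
    w2d = conv2d.weight                                   # (64, 3, 3, 3)
    depth = conv3d.kernel_size[0]
    w3d = w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
    conv3d.weight.copy_(w3d)                              # (64, 3, 3, 3, 3)
    conv3d.bias.copy_(conv2d.bias)

out = conv3d(torch.randn(1, 3, 16, 32, 32))               # toy voxel-grid input
print(out.shape)                                           # -> (1, 64, 16, 32, 32)
```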


Author(s):  
Hoseong Kim ◽  
Jaeguk Hyun ◽  
Hyunjung Yoo ◽  
Chunho Kim ◽  
Hyunho Jeon

Recently, infrared object detection (IOD) has been extensively studied due to the rapid growth of deep neural networks (DNN). Adversarial attacks using imperceptible perturbations can dramatically deteriorate the performance of DNNs. However, most adversarial attack work focuses on visible image recognition (VIR), and there are few methods for IOD. We propose deep learning-based adversarial attacks for IOD by extending several state-of-the-art adversarial attacks for VIR. We validate our claims through comprehensive experiments on two challenging IOD datasets, FLIR and MSOD.
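A minimal sketch of the fast gradient sign method (FGSM), one of the standard VIR attacks this line of work builds on, shown against a toy classifier for brevity rather than the paper's IOD models or datasets; the epsilon budget and model are illustrative assumptions.

```python
# Minimal sketch of FGSM, a standard attack that IOD attacks extend;
# applied here to a toy classifier, not the paper's detection models.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Imperceptible perturbation: one step in the sign of the input gradient.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(1 * 64 * 64, 10))  # toy single-channel classifier
x = torch.rand(2, 1, 64, 64)                                      # stand-in "infrared" images
y = torch.tensor([3, 7])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max().item())                             # perturbation bounded by eps
```

Detection attacks typically replace the classification loss with the detector's combined localization and objectness losses.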


2020 ◽  
Author(s):  
Aikaterini Symeonidi ◽  
Anguelos Nicolaou ◽  
Frank Johannes ◽  
Vincent Christlein

Abstract
Deep learning methods have proved to be powerful classification tools in the fields of structural and functional genomics. In this paper, we introduce a recursive convolutional neural network (RCNN) for the analysis of epigenomic data. We focus on the task of predicting gene expression from the intensity of histone modifications. The proposed RCNN architecture can be applied to data of arbitrary size and has a single meta-parameter that quantifies the model's capacity, making it flexible for experimentation. The proposed architecture outperforms state-of-the-art systems while having several orders of magnitude fewer parameters.


Author(s):  
Jianwen Jiang ◽  
Yuxuan Wei ◽  
Yifan Feng ◽  
Jingxuan Cao ◽  
Yue Gao

In recent years, graph/hypergraph-based deep learning methods have attracted much attention from researchers. These deep learning methods take graph/hypergraph structure as prior knowledge in the model. However, hidden and important relations are not directly represented in the inherent structure. To tackle this issue, we propose a dynamic hypergraph neural networks framework (DHGNN), which is composed of stacked layers of two modules: dynamic hypergraph construction (DHG) and hypergraph convolution (HGC). Considering that the initially constructed hypergraph is probably not a suitable representation of the data, the DHG module dynamically updates the hypergraph structure at each layer. Hypergraph convolution is then introduced to encode high-order data relations in the hypergraph structure. The HGC module includes two phases, vertex convolution and hyperedge convolution, which are designed to aggregate features among vertices and hyperedges, respectively. We have evaluated our method on standard datasets, the Cora citation network and a Microblog dataset. Our method outperforms state-of-the-art methods. More experiments are conducted to demonstrate the effectiveness and robustness of our method under diverse data distributions.
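A minimal sketch of the two-phase message passing the abstract describes (vertex to hyperedge, then hyperedge back to vertex), written with a dense incidence matrix and simple mean aggregation; the actual DHGNN layer, with dynamic hypergraph construction and learned convolution kernels, is more involved. The matrix sizes and the incidence example are assumptions.

```python
# Minimal sketch of vertex -> hyperedge -> vertex aggregation on a
# hypergraph given by a dense incidence matrix H; the real DHGNN layer
# (dynamic construction, learned kernels) is more involved.
import torch

def hypergraph_conv(X, H, W):
    # X: (n_vertices, d_in), H: (n_vertices, n_edges) incidence, W: (d_in, d_out)
    De = H.sum(dim=0).clamp(min=1)                     # hyperedge degrees
    Dv = H.sum(dim=1).clamp(min=1)                     # vertex degrees
    edge_feat = (H.t() @ X) / De.unsqueeze(1)          # hyperedge phase: mean of member vertices
    vertex_feat = (H @ edge_feat) / Dv.unsqueeze(1)    # vertex phase: mean of incident hyperedges
    return torch.relu(vertex_feat @ W)

X = torch.randn(5, 8)                                  # 5 vertices, 8-dim features
H = torch.tensor([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]], dtype=torch.float)
W = torch.randn(8, 16)
print(hypergraph_conv(X, H, W).shape)                  # -> (5, 16)
```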


2021 ◽  
Author(s):  
Nils Schaetti

In the last few years, a machine learning field named deep learning (DL) has improved the results of several challenging tasks, mainly in the field of computer vision. Deep architectures such as convolutional neural networks (CNN) have been shown to be very powerful for computer vision tasks. For tasks related to language and time series, state-of-the-art models such as Long Short-Term Memory (LSTM) have a recurrent component that takes into account the order of inputs and is able to memorise them. Among the tasks related to Natural Language Processing (NLP), an important problem in computational linguistics is authorship attribution, where the goal is to find the true author of a text or, from an author profiling perspective, to extract information such as gender, origin, and socio-economic background. However, few works have tackled the issue of authorship analysis with recurrent neural networks (RNNs). Consequently, we explore in this study the performance of several recurrent neural models, namely Echo State Networks (ESN), LSTM, and Gated Recurrent Units (GRU), on three authorship analysis tasks. The first is the classical authorship attribution task using the Reuters C50 dataset, where models have to predict the true author of a document from a set of candidate authors. The second task is referred to as author profiling, as the model must determine the gender (male/female) of the author of a set of tweets, using the PAN 2017 dataset from the CLEF conference. The third task is referred to as author verification and uses an in-house dataset named SFGram, composed of dozens of science-fiction magazines from the 50s to the 70s. This task is separated into two problems: in the first, the goal is to extract the passages written by a particular author inside a magazine co-written by several dozen authors; in the second, the goal is to find out whether a magazine contains passages written by a particular author. In order for our research to be applicable in authorship studies, we limited the evaluated models to those with a so-called many-to-many architecture. This fulfils a fundamental constraint of the field of stylometry, which is the ability to provide evidence for each prediction made. To evaluate these three models, we defined a set of experiments, performance measures, and hyperparameters that could impact the output. We carried out these experiments with each model and its corresponding hyperparameters. We then used statistical tests to detect significant differences between these models and with respect to state-of-the-art baseline methods in authorship analysis. Our results show that shallow and simple RNNs such as ESNs can be competitive with traditional methods in authorship studies while keeping a learning time usable in practice and a reasonable number of parameters. These properties allow them to outperform much more complex neural models such as LSTMs and GRUs, considered the state of the art in NLP. We also show that pretrained word and character features can be useful for stylometry problems if they are trained on a similar dataset. Consequently, interesting results are achievable on such tasks, where the quantity of data is limited and which are therefore difficult for deep learning methods. We also show that representations based on words and on combinations of three characters (trigrams) are the most effective for this kind of method. Finally, we draw a landscape of possible research paths for the future of neural networks and deep learning methods in the field of authorship analysis.
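A minimal sketch of an Echo State Network, the model family the thesis finds most competitive: a fixed random reservoir updated with a leaky tanh recurrence and a ridge-regression readout trained on the collected states, producing one prediction per time step in the many-to-many spirit described above. The reservoir size, leak rate, spectral radius, and the toy input sequence are assumptions; the thesis' word/trigram encodings are not reproduced.

```python
# Minimal sketch of an Echo State Network: fixed random reservoir, leaky
# tanh updates, ridge-regression readout; toy data stands in for the
# word/trigram features used in the thesis.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out, leak = 20, 200, 3, 0.3

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

def run_reservoir(inputs):
    x, states = np.zeros(n_res), []
    for u in inputs:                                    # inputs: (T, n_in)
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

U = rng.normal(size=(500, n_in))                        # toy input sequence
Y = rng.integers(0, n_out, size=500)                    # toy per-step labels
S = run_reservoir(U)
ridge = 1e-2
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ np.eye(n_out)[Y])
pred = (S @ W_out).argmax(axis=1)                       # one prediction per time step
print("train accuracy:", (pred == Y).mean())
```

Only the readout W_out is trained, which is what keeps the learning time low compared with LSTMs and GRUs.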

