Detecting anomalies in time series data via a deep learning algorithm combining wavelets, neural networks and Hilbert transform

2017 ◽  
Vol 85 ◽  
pp. 292-304 ◽  
Author(s):  
Stratis Kanarachos ◽  
Stavros-Richard G. Christopoulos ◽  
Alexander Chroneos ◽  
Michael E. Fitzpatrick


2019 ◽  
Vol 16 (10) ◽  
pp. 4059-4063
Author(s):  
Ge Li ◽  
Hu Jing ◽  
Chen Guangsheng

To exploit their complementary advantages, wavelet, fractal and statistical methods are integrated to extract classification features from time series. Combining these with the ability of process neural networks to handle time-varying information, we propose a fusion classifier for time series based on process neural networks. By exploiting the capacity of multi-fractal analysis to capture the nonlinear characteristics of time series, the strong adaptability of the wavelet transform to time series data, and the discriminative power of statistical features, we extract classification features from the time series. Additionally, the time-varying input characteristics of process neural networks enable pattern matching of time-varying input information and space-time aggregation. The features extracted by the three methods are fused into the distance calculation between the time-varying inputs and the cluster space of the process neural network. We derive the corresponding learning algorithm for the fused process neural network and optimize the computation of the time series classifier. Finally, we report the performance of the proposed classification method on the Synthetic Control Chart data from the UCI repository and demonstrate its validity and advantages.
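
A minimal sketch of the feature-fusion idea described above, assuming NumPy, SciPy and PyWavelets: wavelet-energy, fractal and statistical descriptors are concatenated into one feature vector, and classification is reduced to a distance to per-class centroids in that fused space (a stand-in for the cluster-space distance inside the process neural network). Function names such as extract_features are illustrative, not from the paper.

```python
# Sketch of wavelet / fractal / statistical feature fusion followed by a
# nearest-centroid ("cluster space") decision rule. Illustrative only.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def hurst_exponent(x):
    """Rough estimate of the Hurst exponent, used here as the fractal feature."""
    lags = range(2, 20)
    tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(tau), 1)
    return slope

def extract_features(x, wavelet="db4", level=3):
    """Fuse wavelet-energy, fractal and statistical descriptors into one vector."""
    coeffs = pywt.wavedec(x, wavelet, level=level)            # wavelet features
    wavelet_energy = [np.sum(c ** 2) for c in coeffs]
    fractal = [hurst_exponent(x)]                             # fractal feature
    stats = [np.mean(x), np.std(x), skew(x), kurtosis(x)]     # statistical features
    return np.concatenate([wavelet_energy, fractal, stats])

def fit_centroids(series_list, labels):
    """One centroid per class in the fused feature space."""
    feats = np.array([extract_features(s) for s in series_list])
    return {c: feats[np.array(labels) == c].mean(axis=0) for c in set(labels)}

def classify(x, centroids):
    """Assign the class whose centroid is closest to the fused feature vector."""
    f = extract_features(x)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```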


2019 ◽  
Vol 11 (12) ◽  
pp. 3489
Author(s):  
Hyungjin Ko ◽  
Jaewook Lee ◽  
Junyoung Byun ◽  
Bumho Son ◽  
Saerom Park

Developing a robust and sustainable system is an important problem when deep learning models are used in real-world applications. Ensemble methods combine diverse models to improve performance and achieve robustness. The analysis of time series data requires dealing with continuously incoming instances; however, most ensemble models suffer when adapting to a change in data distribution. In this study, we therefore propose an on-line ensemble deep learning algorithm that aggregates deep learning models and adjusts the ensemble weights based on loss values. We theoretically demonstrate that the ensemble weights converge to a limiting distribution and thus minimize the average total loss under a new regret measure based on an adversarial assumption. We also present an overall framework for applying the algorithm to time series analysis. In the experiments, we focus on the on-line phase, in which the ensemble models predict binary classes for simulated data as well as financial and non-financial real data. The proposed method outperformed other ensemble approaches. Moreover, our method was not only robust to intentional attacks but also sustainable under changes in data distribution. In the future, the algorithm can be extended to regression and multiclass classification problems.
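
The loss-based weight adjustment can be illustrated with a Hedge-style multiplicative update, sketched below under the assumption that each expert is an already-trained model used as a callable returning the positive-class probability for one instance; the learning rate eta and the log-loss choice are assumptions, not taken from the paper.

```python
# Hedge-style on-line ensemble sketch: experts with high loss are down-weighted
# multiplicatively, which keeps regret against the best expert bounded.
import numpy as np

class OnlineEnsemble:
    def __init__(self, models, eta=0.5):
        self.models = models                           # each model: callable x -> P(y=1 | x)
        self.eta = eta
        self.w = np.ones(len(models)) / len(models)    # uniform initial weights

    def predict_proba(self, x):
        """Weighted average of the experts' positive-class probabilities."""
        p = np.array([m(x) for m in self.models])
        return float(self.w @ p)

    def update(self, x, y):
        """Exponentiated update from per-step log losses, then renormalise."""
        p = np.array([m(x) for m in self.models])
        eps = 1e-12
        losses = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
        self.w *= np.exp(-self.eta * losses)
        self.w /= self.w.sum()                         # weights stay a distribution
```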


2022 ◽  
Vol 4 ◽  
Author(s):  
Lasitha Vidyaratne ◽  
Adam Carpenter ◽  
Tom Powers ◽  
Chris Tennant ◽  
Khan M. Iftekharuddin ◽  
...  

This work investigates the efficacy of deep learning (DL) for classifying C100 superconducting radio-frequency (SRF) cavity faults in the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. CEBAF is a large, high-power continuous wave recirculating linac that utilizes 418 SRF cavities to accelerate electrons up to 12 GeV. Recent upgrades to CEBAF include installation of 11 new cryomodules (88 cavities) equipped with a low-level RF system that records RF time-series data from each cavity at the onset of an RF failure. Typically, subject matter experts (SMEs) analyze this data to determine the fault type and identify the cavity of origin. This information is subsequently used to identify failure trends and to implement corrective measures on the offending cavity. Manual inspection of the large-scale time-series data generated by frequent system failures is tedious and time consuming, which motivates the use of machine learning (ML) to automate the task. This study extends work on a previously developed system based on traditional ML methods (Tennant et al., Phys. Rev. Accel. Beams, 2020, 23, 114601) and investigates the effectiveness of deep learning approaches. The transition to a DL model is driven by the goal of developing a system with sufficiently fast inference that it could be used to predict a fault event and provide actionable information before its onset (on the order of a few hundred milliseconds). Because features are learned, rather than explicitly computed, DL offers a potential advantage over traditional ML. Specifically, two seminal DL architecture types are explored: deep recurrent neural networks (RNN) and deep convolutional neural networks (CNN). We provide a detailed analysis of the performance of individual models using an RF waveform dataset built from past operational runs of CEBAF. In particular, the performance of RNN models incorporating long short-term memory (LSTM) is analyzed along with the CNN performance. Furthermore, comparing these DL models with a state-of-the-art ML fault-classification model shows that the DL architectures obtain similar performance for cavity identification, do not perform quite as well for fault classification, but provide an advantage in inference speed.
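
As a rough illustration of the RNN branch, the PyTorch sketch below shows an LSTM over multi-channel RF waveforms with two output heads, one for the cavity of origin and one for the fault type. The channel count, sequence length and class counts are placeholders rather than the values used at CEBAF.

```python
# Two-headed LSTM classifier over multi-channel RF waveforms (illustrative).
import torch
import torch.nn as nn

class RFWaveformLSTM(nn.Module):
    def __init__(self, n_channels=8, hidden=64, n_cavities=8, n_faults=7):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.cavity_head = nn.Linear(hidden, n_cavities)   # which cavity faulted
        self.fault_head = nn.Linear(hidden, n_faults)       # which fault type

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        last = h[-1]                       # final hidden state of the top layer
        return self.cavity_head(last), self.fault_head(last)

# Example forward pass on a dummy batch of waveform events.
model = RFWaveformLSTM()
waveforms = torch.randn(4, 500, 8)         # 4 events, 500 time steps, 8 channels
cavity_logits, fault_logits = model(waveforms)
```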


Agronomy ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 142 ◽  
Author(s):  
Chi-Hua Chen ◽  
Hsu-Yang Kung ◽  
Feng-Jang Hwang

This editorial introduces the Special Issue, entitled “Deep Learning (DL) Techniques for Agronomy Applications”, of Agronomy. The topics covered in this issue fall into three main parts: (I) DL-based image recognition techniques for agronomy applications, (II) DL-based time series data analysis techniques for agronomy applications, and (III) behavior and strategy analysis for agronomy applications. Three papers on DL-based image recognition techniques for agronomy applications are as follows: (1) “Automatic segmentation and counting of aphid nymphs on leaves using convolutional neural networks,” by Chen et al.; (2) “Estimating body condition score in dairy cows from depth images using convolutional neural networks, transfer learning, and model ensembling techniques,” by Alvarez et al.; and (3) “Development of a mushroom growth measurement system applying deep learning for image recognition,” by Lu et al. One paper on DL-based time series data analysis techniques for agronomy applications is as follows: “LSTM neural network based forecasting model for wheat production in Pakistan,” by Haider et al. One paper on behavior and strategy analysis for agronomy applications is as follows: “Research into the E-learning model of agriculture technology companies: analysis by deep learning,” by Lin et al.


2019 ◽  
Vol 1 (1) ◽  
pp. 312-340 ◽  
Author(s):  
Meike Nauta ◽  
Doina Bucur ◽  
Christin Seifert

Having insight into the causal associations in a complex system facilitates decision making, e.g., for medical treatments, urban infrastructure improvements or financial investments. As the amount of observational data grows, it becomes possible to discover causal relationships between variables by observing their behaviour over time. Existing methods for causal discovery from time series data do not yet exploit the representational power of deep learning. We therefore present the Temporal Causal Discovery Framework (TCDF), a deep learning framework that learns a causal graph structure by discovering causal relationships in observational time series data. TCDF uses attention-based convolutional neural networks combined with a causal validation step. By interpreting the internal parameters of the convolutional networks, TCDF can also discover the time delay between a cause and the occurrence of its effect. Our framework learns temporal causal graphs, which can include confounders and instantaneous effects. Experiments on financial and neuroscientific benchmarks show state-of-the-art performance of TCDF in discovering causal relationships in continuous time series data. Furthermore, we show that TCDF can circumstantially discover the presence of hidden confounders. Our broadly applicable framework can be used to gain novel insights into the causal dependencies in a complex system, which is important for reliable predictions, knowledge discovery and data-driven decision making.
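
A much-reduced sketch of the attention-gated temporal convolution idea: each input series is scaled by a learned attention score before a depthwise causal convolution, so large attention scores flag potential causes and the location of the largest kernel weight hints at the cause-effect delay. This is an illustrative PyTorch reduction, not the authors' released TCDF implementation.

```python
# Attention-gated causal temporal convolution for one target series (illustrative).
import torch
import torch.nn as nn

class AttentionTCN(nn.Module):
    def __init__(self, n_series, kernel_size=4):
        super().__init__()
        self.attention = nn.Parameter(torch.ones(n_series))      # one score per input series
        self.pad = nn.ConstantPad1d((kernel_size - 1, 0), 0.0)   # left padding keeps it causal
        self.conv = nn.Conv1d(n_series, n_series, kernel_size, groups=n_series)
        self.out = nn.Conv1d(n_series, 1, 1)                     # combine into the target

    def forward(self, x):                  # x: (batch, n_series, time)
        gated = x * torch.softmax(self.attention, dim=0).view(1, -1, 1)
        return self.out(self.conv(self.pad(gated)))

    def delays(self):
        """Read the inferred lag of each candidate cause from its kernel weights."""
        k = self.conv.weight.detach().abs().squeeze(1)   # (n_series, kernel_size)
        return (k.shape[1] - 1) - k.argmax(dim=1)        # rightmost tap = lag 0
```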


Author(s):  
Muhammad Faheem Mushtaq ◽  
Urooj Akram ◽  
Muhammad Aamir ◽  
Haseeb Ali ◽  
Muhammad Zulqarnain

Predicting time series is important because many prediction problems, such as health, climate change and weather forecasting, include a time component. Various techniques have been developed over the years to improve forecasting accuracy. This paper presents a review of the prediction of physical time series using neural network models. Neural Networks (NN) have emerged as an effective tool for time series forecasting. Moreover, addressing the problems inherent in time series data calls for a network with a single layer of trainable weights, the Higher Order Neural Network (HONN), which can perform nonlinear input-output mapping. Researchers have therefore focused on HONNs, which have recently been used to broaden input representation spaces. The functional mapping ability of the HONN model has been demonstrated on several time series problems, where it shows advantages over conventional Artificial Neural Networks (ANN). The goal of this review is to make the reader aware of HONNs for physical time series prediction and to highlight their benefits and challenges.
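
To make the idea concrete, the sketch below implements a simple second-order neuron: the input is expanded with pairwise products so that a single layer of trainable weights can realize a nonlinear input-output mapping. The helper names and the tanh/squared-error training step are illustrative assumptions, not drawn from the reviewed papers.

```python
# Second-order (higher-order) neuron sketch: nonlinearity comes from the input
# expansion, so only one layer of weights needs to be trained.
import numpy as np

def second_order_expand(x):
    """Augment the input with all pairwise products x_i * x_j (i <= j)."""
    pairs = [x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))]
    return np.concatenate([x, pairs])

def honn_predict(x, weights, bias):
    """Single trainable layer over the expanded (higher-order) input space."""
    return np.tanh(weights @ second_order_expand(x) + bias)

def honn_step(x, y, weights, bias, lr=0.01):
    """One gradient step on squared error for a single training pair (x, y)."""
    z = second_order_expand(x)
    pred = np.tanh(weights @ z + bias)
    grad = (pred - y) * (1 - pred ** 2)        # d(loss)/d(pre-activation)
    return weights - lr * grad * z, bias - lr * grad
```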


Open Physics ◽  
2021 ◽  
Vol 19 (1) ◽  
pp. 360-374
Author(s):  
Yuan Pei ◽  
Lei Zhenglin ◽  
Zeng Qinghui ◽  
Wu Yixiao ◽  
Lu Yanli ◽  
...  

The load of a refrigerated showcase is nonlinear, non-stationary time series data for which traditional forecasting methods are not well suited, so deep learning algorithms are introduced to predict it. Based on the combined CEEMD–IPSO–LSTM algorithm, this paper builds a refrigerated display cabinet load forecasting model. Comparison with the forecasts of other models shows that the CEEMD–IPSO–LSTM model achieves the highest load forecasting accuracy, with a coefficient of determination of 0.9105. The model constructed in this paper can therefore predict the showcase load and provide a reference for energy saving and consumption reduction in display cabinets.
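
A pipeline sketch of the decomposition-plus-LSTM idea: the load series is decomposed into intrinsic mode functions, each component is forecast one step ahead, and the component forecasts are summed. CEEMDAN from the PyEMD package stands in for CEEMD, the IPSO hyper-parameter search is omitted, and the per-component models are shown untrained; this is not the authors' code.

```python
# Decompose-forecast-recombine sketch for showcase load forecasting (illustrative).
import numpy as np
import torch
import torch.nn as nn
from PyEMD import CEEMDAN

class IMFForecaster(nn.Module):
    """Tiny LSTM that maps a window of one component to its next value."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):                 # window: (batch, steps, 1)
        _, (h, _) = self.lstm(window)
        return self.head(h[-1])

def forecast_load(load, window=24):
    """Decompose the load, forecast each IMF one step ahead, and recombine."""
    imfs = CEEMDAN().ceemdan(np.asarray(load, dtype=float))
    total = 0.0
    for imf in imfs:
        model = IMFForecaster()                # in practice, trained on this IMF
        x = torch.tensor(imf[-window:], dtype=torch.float32).view(1, window, 1)
        total += model(x).item()
    return total
```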


2021 ◽  
Vol 441 ◽  
pp. 161-178
Author(s):  
Philip B. Weerakody ◽  
Kok Wai Wong ◽  
Guanjin Wang ◽  
Wendell Ela

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 120043-120065
Author(s):  
Kukjin Choi ◽  
Jihun Yi ◽  
Changhwa Park ◽  
Sungroh Yoon
