A Novel Deep Learning Approach for Anomaly Detection of Time Series Data

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhiwei Ji ◽  
Jiaheng Gong ◽  
Jiarui Feng

Anomalies in time series, also called “discords,” are abnormal subsequences. Their occurrence may indicate that a fault or disease will develop soon, so novel computational approaches for anomaly detection (discord search) in time series are of great significance for state monitoring and early warning in real-time systems. Previous studies have successfully developed many algorithms for anomaly classification, e.g., in health monitoring, traffic detection, and intrusion detection; however, anomaly detection in time series has not been as well studied. In this paper, we propose a long short-term memory (LSTM)-based anomaly detection method (LSTMAD) for discord search in univariate time series data. LSTMAD learns structural features from normal (non-anomalous) training data and then performs anomaly detection via a statistical strategy based on the prediction error for observed data. In an experimental evaluation using public ECG datasets and real-world datasets, LSTMAD detects anomalies more accurately than existing approaches.
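The statistical strategy described above lends itself to a short illustration. The following is a minimal sketch, not the authors' exact LSTMAD implementation: a Keras LSTM is trained on normal data only to predict the next value of a univariate series, and points whose prediction error exceeds the mean plus k standard deviations of the training errors are flagged. The window length, hidden size, and threshold factor k are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, w):
    """Slice a 1-D array into (input window, next value) pairs."""
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    return X[..., None].astype("float32"), series[w:].astype("float32")

def build_predictor(w, hidden=32):
    """LSTM that predicts the next value from the previous w values."""
    model = models.Sequential([
        layers.Input(shape=(w, 1)),
        layers.LSTM(hidden),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def error_threshold(model, X_train, y_train, k=3.0):
    """Mean + k * std of the prediction errors on normal training data."""
    err = np.abs(model.predict(X_train, verbose=0).ravel() - y_train)
    return err.mean() + k * err.std()

def detect(model, X_obs, y_obs, threshold):
    """Flag observed points whose prediction error exceeds the threshold."""
    err = np.abs(model.predict(X_obs, verbose=0).ravel() - y_obs)
    return err > threshold

# Train on normal (non-anomalous) data only, then score observed data.
normal = np.sin(np.linspace(0, 60, 3000))              # placeholder series
X_train, y_train = make_windows(normal, w=50)
predictor = build_predictor(w=50)
predictor.fit(X_train, y_train, epochs=10, batch_size=64, verbose=0)
threshold = error_threshold(predictor, X_train, y_train)
# flags = detect(predictor, X_obs, y_obs, threshold) on the observed series
```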

Author(s):  
Baoquan Wang ◽  
Tonghai Jiang ◽  
Xi Zhou ◽  
Bo Ma ◽  
Fan Zhao ◽  
...  

For anomaly detection in time series data, supervised methods require labeled data. In existing semi-supervised methods, the range of the outlier factor varies with the data, the model, and time, so a threshold for determining abnormality is difficult to obtain; moreover, computing outlier factors from the other data points in the dataset is computationally expensive. These issues make such methods difficult to apply in practice. This paper proposes a framework named LSTM-VE that uses clustering combined with a visualization method to roughly label normal data and then uses the normal data to train a long short-term memory (LSTM) neural network for semi-supervised anomaly detection. The variance error (VE) of the classification probability sequence for the normal-data category is used as the outlier factor. The framework makes deep-learning-based anomaly detection practical, and using VE avoids the shortcomings of existing outlier factors while achieving better performance. In addition, the framework is easy to extend because the LSTM neural network can be replaced with other classification models. Experiments on labeled and real unlabeled datasets show that the framework outperforms replicator neural networks with reconstruction error (RNN-RS) and has good scalability and practicability.
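Because the abstract does not spell out the exact definition of VE, the following sketch only illustrates the general idea of a variance-based outlier factor computed over a classifier's normal-class probability sequence; the window size and thresholding rule are assumptions for illustration.

```python
import numpy as np

def variance_error(normal_probs, window=50):
    """Sliding-window variance of the classifier's normal-class probabilities."""
    p = np.asarray(normal_probs, dtype=float)
    scores = np.empty_like(p)
    for i in range(len(p)):
        lo = max(0, i - window + 1)
        scores[i] = p[lo:i + 1].var()
    return scores

def flag_outliers(normal_probs, window=50, k=3.0):
    """Flag points whose variance error deviates strongly from its own distribution."""
    ve = variance_error(normal_probs, window)
    return ve > ve.mean() + k * ve.std()

# normal_probs would come from an LSTM classifier trained on roughly labeled
# normal data, e.g. model.predict(windows)[:, normal_class_index].
```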


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Alaa Sagheer ◽  
Mostafa Kotb

Currently, most real-world time series datasets are multivariate and are rich in dynamical information about the underlying system. Such datasets are attracting much attention; therefore, the need for accurate modelling of such high-dimensional datasets is increasing. Recently, the deep architecture of the recurrent neural network (RNN) and its variant, long short-term memory (LSTM), have been proven to be more accurate than traditional statistical methods in modelling time series data. Despite the reported advantages of the deep LSTM model, its performance in modelling multivariate time series (MTS) data has not been satisfactory, particularly when processing highly non-linear and long-interval MTS datasets. The reason is that the supervised learning approach initializes the neurons randomly in such recurrent networks, which prevents the neurons from properly learning the latent features of the correlated variables in the MTS dataset. In this paper, we propose a pre-trained LSTM-based stacked autoencoder (LSTM-SAE) approach, trained in an unsupervised fashion, to replace the random weight initialization strategy adopted in deep LSTM recurrent networks. For evaluation purposes, two different case studies involving real-world datasets are investigated, in which the performance of the proposed approach compares favourably with that of the deep LSTM approach. In addition, the proposed approach outperforms several reference models on the same case studies. Overall, the experimental results clearly show that unsupervised pre-training improves the performance of deep LSTM and leads to better and faster convergence than other models.
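As an illustration of the pre-training idea (not the authors' LSTM-SAE code), the sketch below pre-trains a single LSTM autoencoder layer in Keras on unlabeled sequences and then copies the encoder weights into a supervised forecaster in place of random initialization; the layer sizes and data shapes are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

timesteps, n_features, latent = 20, 4, 16
X = np.random.rand(256, timesteps, n_features).astype("float32")  # placeholder MTS data

# 1) Unsupervised pre-training: an LSTM autoencoder reconstructs its input.
inputs = layers.Input(shape=(timesteps, n_features))
encoded = layers.LSTM(latent)(inputs)
decoded = layers.RepeatVector(timesteps)(encoded)
decoded = layers.LSTM(n_features, return_sequences=True)(decoded)
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# 2) Fine-tuning: the pre-trained encoder weights replace random
#    initialization in a supervised forecaster.
forecaster = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(latent, name="pretrained_lstm"),
    layers.Dense(n_features),      # predict the next multivariate observation
])
forecaster.get_layer("pretrained_lstm").set_weights(
    autoencoder.layers[1].get_weights())
forecaster.compile(optimizer="adam", loss="mse")
```

Stacking deeper models follows the same pattern: each additional LSTM layer is pre-trained on the outputs of the layer below before the whole network is fine-tuned on the supervised task.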


Author(s):  
Qingsong Wen ◽  
Liang Sun ◽  
Fan Yang ◽  
Xiaomin Song ◽  
Jingkun Gao ◽  
...  

Deep learning has recently performed remarkably well on many time series analysis tasks. This superior performance relies heavily on a large amount of training data to avoid overfitting. However, labeled data may be limited in many real-world time series applications, such as classification of medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of training data, data augmentation is crucial to the successful application of deep learning models to time series data. In this paper, we systematically review data augmentation methods for time series. We propose a taxonomy for the reviewed methods and then provide a structured review that highlights their strengths and limitations. We also empirically compare different data augmentation methods on tasks including time series classification, anomaly detection, and forecasting. Finally, we discuss and highlight five future directions to provide useful research guidance.
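For concreteness, the following sketch shows three augmentation transforms that are standard in the time series literature (jittering, scaling, and window slicing); they are not necessarily the exact variants compared in the paper, and the noise levels and window ratio are illustrative assumptions.

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Add Gaussian noise to every time step."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Multiply the whole series by a random per-channel factor."""
    factor = np.random.normal(1.0, sigma, size=(1, x.shape[-1]))
    return x * factor

def window_slice(x, ratio=0.9):
    """Crop a random contiguous window and stretch it back to full length."""
    n = x.shape[0]
    win = int(n * ratio)
    start = np.random.randint(0, n - win + 1)
    sliced = x[start:start + win]
    idx = np.linspace(0, win - 1, num=n)
    return np.stack([np.interp(idx, np.arange(win), sliced[:, c])
                     for c in range(x.shape[-1])], axis=-1)

# x is assumed to have shape (timesteps, channels).
x = np.random.rand(100, 3)
augmented = [jitter(x), scale(x), window_slice(x)]
```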


Processes ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1168
Author(s):  
Wenxiang Guo ◽  
Xiyu Liu ◽  
Laisheng Xiang

Anomaly detection in time series has attracted much attention recently and remains a challenging task. In this paper, a novel deep-learning approach (AL-CNN) that classifies time series as normal or abnormal with little domain knowledge is proposed. The algorithm combines Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) to effectively model the spatial and temporal information contained in time-series data, and Squeeze-and-Excitation techniques are applied for feature recalibration. However, the difficulty of selecting multiple parameters and the long training time of a single model make AL-CNN less effective. To alleviate these challenges, a hybrid dynamic membrane system (HM-AL-CNN), a new distributed and parallel computing model, is designed. We performed a detailed evaluation of the proposed approach on three well-known benchmarks, including the Yahoo S5 datasets. Experiments show that the proposed method achieves robust performance superior to state-of-the-art methods and significantly improves the average of the three evaluation indicators used.
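A minimal Keras sketch of the building blocks named in the abstract, a 1-D CNN followed by Squeeze-and-Excitation recalibration and an LSTM, is shown below; the layer sizes, reduction ratio, and binary output are assumptions rather than the authors' exact AL-CNN configuration, and the membrane-system optimisation is not modelled here.

```python
from tensorflow.keras import layers, models

def se_block(x, ratio=8):
    """Squeeze-and-Excitation: recalibrate channel responses."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)                # squeeze over time
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)   # per-channel weights
    s = layers.Reshape((1, channels))(s)
    return layers.Multiply()([x, s])                      # excitation

def build_model(timesteps, n_features):
    inputs = layers.Input(shape=(timesteps, n_features))
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
    x = se_block(x)                                       # feature recalibration
    x = layers.LSTM(64)(x)                                # temporal modelling
    outputs = layers.Dense(1, activation="sigmoid")(x)    # normal vs. abnormal
    return models.Model(inputs, outputs)

model = build_model(timesteps=128, n_features=1)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```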


2021 ◽  
Vol 13 (1) ◽  
pp. 35-44
Author(s):  
Daniel Vajda ◽  
Adrian Pekar ◽  
Karoly Farkas

The complexity of network infrastructures is growing exponentially, and real-time monitoring of these infrastructures is essential to secure their reliable operation. The concept of telemetry has been introduced in recent years to foster this process by streaming time-series data that contain feature-rich information about the state of network components. In this paper, we focus on a particular application of telemetry: anomaly detection on time-series data. We rigorously examined state-of-the-art anomaly detection methods and observed that none of them suits our requirements, as they typically face several limitations when applied to time-series data. This paper presents Alter-Re2, an improved version of ReRe, a state-of-the-art Long Short-Term Memory-based machine learning algorithm. Through a systematic examination, we demonstrate that by introducing the concepts of ageing and a sliding window, the major limitations of ReRe can be overcome. We assessed the efficacy of Alter-Re2 on ten different datasets and achieved promising results: Alter-Re2 performs three times better on average than ReRe.
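The abstract names two concepts, ageing and a sliding window, without detailing how Alter-Re2 applies them; the sketch below is therefore only a conceptual illustration of an online detector that keeps a sliding window of recent prediction errors and down-weights older ones when computing a dynamic threshold. The window length, decay factor, and threshold rule are assumptions.

```python
from collections import deque
import numpy as np

class AgedWindowDetector:
    def __init__(self, window=200, decay=0.99, k=3.0):
        self.errors = deque(maxlen=window)  # sliding window: old errors drop out
        self.decay = decay                  # ageing: newer errors weigh more
        self.k = k

    def update(self, predicted, observed):
        """Feed one (prediction, observation) pair; return True if anomalous."""
        err = abs(predicted - observed)
        if self.errors:
            n = len(self.errors)
            w = self.decay ** np.arange(n - 1, -1, -1)   # oldest gets smallest weight
            mean = np.average(self.errors, weights=w)
            std = np.sqrt(np.average((np.array(self.errors) - mean) ** 2, weights=w))
            anomalous = err > mean + self.k * (std + 1e-9)
        else:
            anomalous = False
        self.errors.append(err)
        return bool(anomalous)
```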


2016 ◽  
Vol 136 (3) ◽  
pp. 363-372
Author(s):  
Takaaki Nakamura ◽  
Makoto Imamura ◽  
Masashi Tatedoko ◽  
Norio Hirai

Author(s):  
Adil Aslam Mir ◽  
Fatih Vehbi Çelebi ◽  
Muhammad Rafique ◽  
M. R. I. Faruque ◽  
Mayeen Uddin Khandaker ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tuan D. Pham

Automated analysis of physiological time series is utilized in many clinical applications in medicine and the life sciences. Long short-term memory (LSTM) is a deep recurrent neural network architecture used for the classification of time-series data. Here, time–frequency and time–space properties of time series are introduced as a robust tool for LSTM processing of long sequential data in physiology. Based on classification results obtained from two databases of sensor-induced physiological signals, the proposed approach has the potential for (1) achieving very high classification accuracy, (2) greatly reducing the time needed for learning from data, and (3) being cost-effective and comfortable for users in clinical trials by reducing the number of wearable sensors needed for data recording.
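One plausible reading of the time–frequency properties mentioned above is a spectrogram representation fed to the LSTM; the sketch below follows that assumption (it is not the paper's exact pipeline) and shows how a long 1-D physiological signal becomes a much shorter sequence of frequency vectors. The FFT parameters, layer sizes, and two-class output are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram
from tensorflow.keras import layers, models

def to_time_frequency(signal, fs=250, nperseg=64):
    """Turn a long 1-D physiological signal into a (time, frequency) matrix."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    return np.log1p(Sxx).T          # shape: (time frames, frequency bins)

signal = np.random.randn(10_000)    # placeholder for a recorded signal
features = to_time_frequency(signal)

# The LSTM now sees a short sequence of frequency vectors instead of the raw,
# very long sample sequence, which shortens the sequence it has to learn.
model = models.Sequential([
    layers.Input(shape=features.shape),
    layers.LSTM(64),
    layers.Dense(2, activation="softmax"),   # two physiological classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```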

