On-Line Error Detection and Mitigation for Time-Series Data of Cyber-Physical Systems using Deep Learning Based Methods

Author(s):  
Kai Ding ◽  
Sheng Ding ◽  
Andrey Morozov ◽  
Tagir Fabarisov ◽  
Klaus Janschek


2019 ◽
Vol 11 (12) ◽  
pp. 3489
Author(s):  
Hyungjin Ko ◽  
Jaewook Lee ◽  
Junyoung Byun ◽  
Bumho Son ◽  
Saerom Park

Developing robust and sustainable systems is an important problem when deep learning models are used in real-world applications. Ensemble methods combine diverse models to improve performance and achieve robustness. The analysis of time series data requires dealing with continuously incoming instances; however, most ensemble models suffer when adapting to a change in data distribution. Therefore, in this study we propose an on-line ensemble deep learning algorithm that aggregates deep learning models and adjusts the ensemble weights based on the loss values. We theoretically demonstrate that the ensemble weights converge to the limiting distribution and thus minimize the average total loss under a new regret measure based on an adversarial assumption. We also present an overall framework that can be applied to time series analysis. In the experiments, we focused on the on-line phase, in which the ensemble models predict the binary class for simulated data as well as financial and non-financial real data. The proposed method outperformed other ensemble approaches. Moreover, our method was not only robust to intentional attacks but also sustainable under changes in the data distribution. In the future, our algorithm can be extended to regression and multiclass classification problems.
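The loss-based weight adjustment described here is reminiscent of classical exponentially weighted (Hedge-style) aggregation. The sketch below shows such an update for binary classification, assuming per-model losses in [0, 1] and a learning rate eta; the interface and the specific update rule are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def update_ensemble_weights(weights, losses, eta=0.5):
    """Exponentially weighted (Hedge-style) update: models with lower
    loss on the newest instance receive proportionally more weight."""
    w = weights * np.exp(-eta * np.asarray(losses))
    return w / w.sum()  # renormalize so the weights stay a distribution

def ensemble_predict(weights, model_probs):
    """Weighted average of each model's predicted P(class = 1)."""
    return float(np.dot(weights, model_probs))

# Toy on-line loop: three base models, streaming binary labels.
weights = np.ones(3) / 3
for probs, label in [(np.array([0.9, 0.4, 0.6]), 1),
                     (np.array([0.8, 0.3, 0.7]), 1)]:
    print("ensemble P(y=1):", ensemble_predict(weights, probs))
    losses = (probs - label) ** 2          # per-model squared loss in [0, 1]
    weights = update_ensemble_weights(weights, losses)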


Open Physics ◽  
2021 ◽  
Vol 19 (1) ◽  
pp. 360-374
Author(s):  
Yuan Pei ◽  
Lei Zhenglin ◽  
Zeng Qinghui ◽  
Wu Yixiao ◽  
Lu Yanli ◽  
...  

Abstract: The load of a refrigerated showcase is nonlinear and unstable time-series data, to which traditional forecasting methods are not well suited. Deep learning algorithms are therefore introduced to predict the showcase load. Based on the CEEMD–IPSO–LSTM combination algorithm, this paper builds a refrigerated display cabinet load forecasting model. Compared with the forecast results of other models, the CEEMD–IPSO–LSTM model achieves the highest load forecasting accuracy, with a coefficient of determination of 0.9105, which is clearly superior. Compared with the other models, the model constructed in this paper better predicts the load of showcases and can provide a reference for energy saving and consumption reduction of display cabinets.
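As a rough illustration of a decompose-then-predict pipeline of this kind, the sketch below decomposes a load series with CEEMDAN (from the PyEMD package) and fits one small Keras LSTM per intrinsic mode function, summing the per-mode forecasts. The window length, layer sizes, and the substitution of CEEMDAN for CEEMD with IPSO-tuned hyperparameters are all simplifying assumptions.

import numpy as np
from PyEMD import CEEMDAN                      # pip install EMD-signal
import tensorflow as tf

def windows(series, w=24):
    """Sliding windows: X[i] = series[i:i+w], y[i] = series[i+w]."""
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    return X[..., None], series[w:]

def fit_lstm(X, y):
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=20, verbose=0)
    return model

load = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.randn(600)  # stand-in series
imfs = CEEMDAN(trials=20)(load)                # one row per intrinsic mode function
forecast = 0.0
for imf in imfs:                               # model each IMF separately, then sum
    X, y = windows(imf)
    forecast += fit_lstm(X, y).predict(X[-1:], verbose=0)[0, 0]
print("next-step load forecast:", forecast)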


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 120043-120065
Author(s):  
Kukjin Choi ◽  
Jihun Yi ◽  
Changhwa Park ◽  
Sungroh Yoon

2021 ◽  
Vol 13 (3) ◽  
pp. 67
Author(s):  
Eric Hitimana ◽  
Gaurav Bajpai ◽  
Richard Musabe ◽  
Louis Sibomana ◽  
Jayavel Kayalvizhi

Many countries worldwide face challenges in implementing building fire-prevention measures. The most critical issues are the localization, identification, and detection of room occupants. The Internet of Things (IoT), combined with machine learning, has been shown to increase the smartness of buildings by providing real-time data acquisition from sensors and actuators for prediction mechanisms. This paper proposes the implementation of an IoT framework that captures indoor environmental parameters as occupancy multivariate time-series data. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment was conducted in an office room using the multivariate time series as predictors in a regression forecasting problem. The results demonstrate that the developed system can obtain, process, and store environmental information. The collected information was applied to the LSTM algorithm and compared with other machine learning algorithms: Support Vector Machine, Naïve Bayes Network, and Multilayer Perceptron Feed-Forward Network. The outcomes, based on the parametric calibrations, demonstrate that LSTM performs best in the context of the proposed application.
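As a rough sketch of how such multivariate sensor windows could feed an LSTM in a regression forecasting setup, consider the following; the feature set (temperature, humidity, CO2), the 30-sample look-back window, and the network shape are illustrative assumptions rather than the authors' configuration.

import numpy as np
import tensorflow as tf

# Hypothetical indoor readings: temperature, humidity, CO2 (rows = minutes).
readings = np.random.rand(500, 3).astype("float32")
occupancy = np.random.rand(500).astype("float32")   # stand-in target signal

W = 30  # look-back window of 30 one-minute samples
X = np.stack([readings[i:i + W] for i in range(len(readings) - W)])
y = occupancy[W:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(W, 3)),
    tf.keras.layers.Dense(1),                        # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print("predicted occupancy signal:", model.predict(X[:1], verbose=0)[0, 0])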


Over recent years, deep learning has come to be regarded as a primary choice for handling huge amounts of data. With its deeper hidden layers, it surpasses classical methods for outlier detection in wireless sensor networks. The Convolutional Neural Network (CNN), a biologically inspired computational model, is one of the most popular deep learning approaches; it comprises neurons that self-optimize through learning. Electroencephalography (EEG) is a tool used to investigate brain function, and the EEG signal is produced as time-series data. In this paper, we propose a technique that processes the time-series data generated by the sensor nodes, stored in a large dataset, into discrete one-second frames and projects these frames onto 2D map images. A convolutional neural network (CNN) is then trained to classify these frames. The results show improved detection accuracy and are encouraging.
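A hedged sketch of the frame-and-classify idea described above: cut the signal into discrete one-second frames, project each frame onto a small 2D map, and train a CNN on the resulting images. The 256 Hz sampling rate, the 16x16 map size, and the network layout are assumptions made for illustration.

import numpy as np
import tensorflow as tf

FS = 256                                   # assumed sampling rate (samples/second)
signal = np.random.randn(FS * 100).astype("float32")   # stand-in EEG channel
labels = np.random.randint(0, 2, 100)      # one label per one-second frame

# Split into discrete one-second frames, then project each onto a 16x16 "image".
frames = signal.reshape(100, FS)
images = frames.reshape(100, 16, 16, 1)    # 256 samples = 16 x 16 map

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(16, 16, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=5, verbose=0)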


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zulkifli Halim ◽  
Shuhaida Mohamed Shuhidan ◽  
Zuraidah Mohd Sanusi

Purpose
In previous studies of financial distress prediction, deep learning techniques performed better than traditional techniques over time-series data. This study investigates the performance of the deep learning models recurrent neural network, long short-term memory and gated recurrent unit for financial distress prediction among Malaysian public listed corporations over time-series data. This study also compares the performance of logistic regression, support vector machine, neural network, decision tree and the deep learning models on single-year data.

Design/methodology/approach
The data used are the financial data of public listed companies that have been classified as PN17 status (distress) and non-PN17 (not distress) in Malaysia. This study was conducted using the machine learning libraries of the Python programming language.

Findings
The findings indicate that all deep learning models used in this study achieved 90% accuracy and above, with long short-term memory (LSTM) and gated recurrent unit (GRU) reaching 93% accuracy. In addition, the deep learning models consistently performed well compared to the other models over single-year data: LSTM and GRU reached 90% and the recurrent neural network (RNN) 88% accuracy. The results also show that LSTM and GRU obtain better precision and recall than RNN. The findings of this study show that the deep learning approach leads to better performance in financial distress prediction studies. In addition, time-series data should be highlighted in any financial distress prediction study since they have a big impact on credit risk assessment.

Research limitations/implications
The first limitation of this study is that hyperparameter tuning was applied only to the deep learning models. Secondly, the time-series data were used only for the deep learning models, since the other models fit optimally on single-year data.

Practical implications
This study proposes deep learning as a new approach that will lead to better performance in financial distress prediction studies. Besides that, time-series data should be highlighted in any financial distress prediction study since the data have a big impact on the assessment of credit risk.

Originality/value
To the best of the authors' knowledge, this article is the first study to use the gated recurrent unit in financial distress prediction based on time-series data for Malaysian public listed companies. The findings of this study can help financial institutions and investors to find a better and more accurate approach for credit risk assessment.
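As a rough illustration only, the sketch below shows what a GRU classifier over multi-year financial indicators could look like in Keras; the five-year window, eight input ratios, and layer sizes are assumptions for illustration, not the study's tuned configuration.

import numpy as np
import tensorflow as tf

# Stand-in data: 200 companies x 5 years x 8 financial ratios,
# labeled 1 for PN17 (distress) and 0 for non-PN17.
X = np.random.rand(200, 5, 8).astype("float32")
y = np.random.randint(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, input_shape=(5, 8)),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(distress)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.fit(X, y, epochs=10, verbose=0)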


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1078
Author(s):  
Ruxandra Stoean ◽  
Catalin Stoean ◽  
Miguel Atencia ◽  
Roberto Rodríguez-Labrada ◽  
Gonzalo Joya

Uncertainty quantification in deep learning models is especially important for the medical applications of this complex and successful type of neural architectures. One popular technique is Monte Carlo dropout that gives a sample output for a record, which can be measured statistically in terms of average probability and variance for each diagnostic class of the problem. The current paper puts forward a convolutional–long short-term memory network model with a Monte Carlo dropout layer for obtaining information regarding the model uncertainty for saccadic records of all patients. These are next used in assessing the uncertainty of the learning model at the higher level of sets of multiple records (i.e., registers) that are gathered for one patient case by the examining physician towards an accurate diagnosis. Means and standard deviations are additionally calculated for the Monte Carlo uncertainty estimates of groups of predictions. These serve as a new collection where a random forest model can perform both classification and ranking of variable importance. The approach is validated on a real-world problem of classifying electrooculography time series for an early detection of spinocerebellar ataxia 2 and reaches an accuracy of 88.59% in distinguishing between the three classes of patients.
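The Monte Carlo dropout mechanism itself is compact: keep dropout active at inference time and aggregate many stochastic forward passes into a per-class average probability and variance. A minimal sketch, assuming a generic Keras classifier with a dropout layer (the toy input size and three-class head stand in for the saccadic-record features):

import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, n_samples=100):
    """Run n stochastic forward passes with dropout kept active
    (training=True) and return the per-class mean and variance."""
    preds = np.stack([model(x, training=True).numpy()
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)

# Toy model with a dropout layer; real inputs would be saccadic records.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),   # three patient classes
])
mean, var = mc_dropout_predict(model, np.random.rand(1, 10).astype("float32"))
print("average probability per class:", mean)
print("variance per class:", var)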


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1908
Author(s):  
Chao Ma ◽  
Xiaochuan Shi ◽  
Wei Li ◽  
Weiping Zhu

In the past decade, time series data have been generated from various fields at a rapid speed, which offers a huge opportunity for mining valuable knowledge. As a typical task of time series mining, Time Series Classification (TSC) has attracted considerable attention from both researchers and domain experts due to its broad applications, ranging from human activity recognition to smart city governance. Specifically, there is an increasing requirement for performing classification tasks on diverse types of time series data in a timely manner without costly hand-crafted feature engineering. Therefore, in this paper, we propose a framework named Edge4TSC that allows time series to be processed in the edge environment, so that the classification results can be instantly returned to the end-users. Meanwhile, to avoid the costly hand-crafted feature engineering process, deep learning techniques are applied for automatic feature extraction, which shows competitive or even superior performance compared to state-of-the-art TSC solutions. However, because time series present complex patterns, even deep learning models are not capable of achieving satisfactory classification accuracy, which motivated us to explore new time series representation methods to help classifiers further improve the classification accuracy. In the proposed framework Edge4TSC, a new time series representation method based on a binary distribution tree was designed to address the classification accuracy concern in TSC tasks. Comprehensive experiments on six challenging time series datasets in the edge environment firmly validate the framework's generalization ability and classification accuracy improvements, and yield a number of helpful insights.
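The binary distribution tree representation is specific to Edge4TSC and is not reproduced here; as a generic stand-in for the automatic deep-learning feature extraction the framework builds on, the sketch below trains a plain 1D CNN classifier on raw series (dataset shapes, class count, and network sizes are illustrative assumptions).

import numpy as np
import tensorflow as tf

# Stand-in TSC data: 300 univariate series of length 128, 4 classes.
X = np.random.randn(300, 128, 1).astype("float32")
y = np.random.randint(0, 4, 300)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 7, activation="relu", input_shape=(128, 1)),
    tf.keras.layers.GlobalAveragePooling1D(),   # automatic feature extraction
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)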


2020 ◽  
Vol 140 ◽  
pp. 110121 ◽  
Author(s):  
Abdelhafid Zeroual ◽  
Fouzi Harrou ◽  
Abdelkader Dairi ◽  
Ying Sun
