TapNet: Multivariate Time Series Classification with Attentional Prototypical Network

2020 ◽  
Vol 34 (04) ◽  
pp. 6845-6852 ◽  
Author(s):  
Xuchao Zhang ◽  
Yifeng Gao ◽  
Jessica Lin ◽  
Chang-Tien Lu

With the advance of sensor technologies, the Multivariate Time Series Classification (MTSC) problem, perhaps one of the most essential problems in the time series data mining domain, has continuously received a significant amount of attention in recent decades. Traditional time series classification approaches based on Bag-of-Patterns or Time Series Shapelets have difficulty dealing with the huge number of feature candidates generated from high-dimensional multivariate data, but show promising performance even when the training set is small. In contrast, deep learning based methods can learn low-dimensional features efficiently but suffer from a shortage of labelled data. In this paper, we propose a novel MTSC model with an attentional prototype network that combines the strengths of both traditional and deep learning based approaches. Specifically, we design a random group permutation method combined with multi-layer convolutional networks to learn low-dimensional features from multivariate time series data. To handle the issue of limited training labels, we propose a novel attentional prototype network that trains the feature representation based on its distance to class prototypes, even with inadequate data labels. In addition, we extend our model to a semi-supervised setting that utilizes unlabeled data. Extensive experiments on 18 datasets from the public UEA multivariate time series archive, against eight state-of-the-art baseline methods, demonstrate the effectiveness of the proposed model.
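As a rough illustration of the prototype-based classification idea (a minimal sketch, not the authors' TapNet implementation; all layer sizes and names are assumptions), the following PyTorch snippet embeds multivariate series with a small 1D-CNN, builds attention-weighted class prototypes from labelled embeddings, and classifies queries by distance to those prototypes:

```python
# Minimal sketch of a prototypical classifier for multivariate time series.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    def __init__(self, n_channels, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, embed_dim, kernel_size=5, padding=2), nn.BatchNorm1d(embed_dim), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # global pooling over time
        )

    def forward(self, x):                       # x: (batch, channels, time)
        return self.net(x).squeeze(-1)          # (batch, embed_dim)

def attentional_prototypes(embeddings, labels, n_classes, attn):
    """One prototype per class as an attention-weighted mean of its embeddings."""
    protos = []
    for c in range(n_classes):
        e = embeddings[labels == c]             # (n_c, d)
        w = torch.softmax(attn(e).squeeze(-1), dim=0)   # learned relevance per example
        protos.append((w.unsqueeze(-1) * e).sum(dim=0))
    return torch.stack(protos)                  # (n_classes, d)

# Usage sketch: classify by softmax over negative squared distances to prototypes.
encoder, attn = ConvEncoder(n_channels=6), nn.Linear(64, 1)
x_support, y_support = torch.randn(40, 6, 100), torch.randint(0, 3, (40,))
x_query = torch.randn(8, 6, 100)
protos = attentional_prototypes(encoder(x_support), y_support, 3, attn)
logits = -torch.cdist(encoder(x_query), protos) ** 2    # (8, 3)
probs = F.softmax(logits, dim=-1)
```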

2021 ◽  
Vol 13 (3) ◽  
pp. 67
Author(s):  
Eric Hitimana ◽  
Gaurav Bajpai ◽  
Richard Musabe ◽  
Louis Sibomana ◽  
Jayavel Kayalvizhi

Many countries worldwide face challenges in enforcing fire-prevention measures for buildings. The most critical issues are the localization, identification, and detection of room occupants. The Internet of Things (IoT), combined with machine learning, has been shown to increase the smartness of buildings by providing real-time data from sensors and actuators for prediction mechanisms. This paper proposes the implementation of an IoT framework that captures indoor environmental parameters as multivariate time-series data for occupancy estimation. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment is conducted in an office room using the multivariate time series as predictors in a regression forecasting problem. The results demonstrate that the developed system can acquire, process, and store environmental information. The collected data were fed to the LSTM algorithm and compared with other machine learning algorithms: Support Vector Machine, Naïve Bayes, and the Multilayer Perceptron Feed-Forward Network. The outcomes of the parametric calibrations show that LSTM performs best in the context of the proposed application.
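A minimal sketch of the LSTM regression setup described above, assuming an arbitrary window length, sensor count, and layer sizes (the paper's exact configuration is not given here):

```python
# Minimal sketch: LSTM regressor mapping a window of indoor sensor readings
# (e.g. CO2, temperature, humidity) to an occupancy value.
import torch
import torch.nn as nn

class OccupancyLSTM(nn.Module):
    def __init__(self, n_sensors=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # regression from the last hidden state

model = OccupancyLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data.
x = torch.randn(32, 30, 5)             # 32 windows of 30 time steps, 5 sensors
y = torch.rand(32, 1)                  # occupancy target
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```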


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1908
Author(s):  
Chao Ma ◽  
Xiaochuan Shi ◽  
Wei Li ◽  
Weiping Zhu

In the past decade, time series data have been generated from various fields at a rapid pace, which offers a huge opportunity for mining valuable knowledge. As a typical task in time series mining, Time Series Classification (TSC) has attracted lots of attention from both researchers and domain experts due to its broad applications, ranging from human activity recognition to smart city governance. Specifically, there is an increasing need to perform classification tasks on diverse types of time series data in a timely manner without costly hand-crafted feature engineering. Therefore, in this paper, we propose a framework named Edge4TSC that allows time series to be processed in the edge environment, so that classification results can be returned to end-users instantly. Meanwhile, to remove the costly hand-crafted feature engineering process, deep learning techniques are applied for automatic feature extraction, showing competitive or even superior performance compared to state-of-the-art TSC solutions. However, because time series present complex patterns, even deep learning models are not always capable of achieving satisfactory classification accuracy, which motivated us to explore new time series representation methods that help classifiers further improve accuracy. In the proposed Edge4TSC framework, a new time series representation method based on a binary distribution tree was designed to address the classification accuracy concern in TSC tasks. Comprehensive experiments on six challenging time series datasets in the edge environment firmly validate the framework's generalization ability and classification accuracy improvement, and yield a number of helpful insights.
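The abstract does not detail how the binary distribution tree is constructed, so the following numpy sketch shows only one plausible reading, stated as an assumption: the value range of a series is split recursively at its midpoint, and each node records the fraction of points falling in the lower sub-range, yielding a fixed-length distributional feature vector for a downstream classifier:

```python
# Speculative sketch of a binary distribution tree representation (assumed form).
import numpy as np

def binary_distribution_tree(series, depth=3):
    lo, hi = series.min(), series.max()
    features = []

    def recurse(lo, hi, level):
        if level > depth:
            return
        mid = (lo + hi) / 2.0
        features.append(np.mean((series >= lo) & (series < mid)))  # mass of lower half
        recurse(lo, mid, level + 1)     # left child: lower sub-range
        recurse(mid, hi, level + 1)     # right child: upper sub-range

    recurse(lo, hi, 1)
    return np.asarray(features)         # length 2**depth - 1

x = np.sin(np.linspace(0, 10, 200)) + 0.1 * np.random.randn(200)
print(binary_distribution_tree(x, depth=3))   # 7 distributional features
```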


Information ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 288
Author(s):  
Kuiyong Song ◽  
Nianbin Wang ◽  
Hongbin Wang

High-dimensional time series classification is a challenging problem, and distance-based similarity measures are one family of methods for addressing it. This paper proposes a metric learning-based univariate time series classification method (ML-UTSC), which uses a Mahalanobis matrix obtained by metric learning to calculate the local distance between series and combines Dynamic Time Warping (DTW) with nearest neighbor classification to produce the final label. In this method, each univariate time series is represented as a multivariate time series of per-segment mean, variance, and slope values. Next, a three-dimensional Mahalanobis matrix is learned from these data by metric learning. The time series is divided into segments of equal length so that the Mahalanobis matrix can describe the features of the time series data more accurately. Compared with the most effective existing measures, the experimental results show that the proposed algorithm achieves a lower classification error rate on most of the test datasets.
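A minimal numpy sketch of the distance computation described above: each univariate series is summarized per segment by mean, variance, and slope, and DTW is run with a Mahalanobis local distance. The learned 3x3 matrix would come from the metric-learning step, which is not shown; the identity matrix here is only a placeholder:

```python
# Sketch: segment features + DTW with a Mahalanobis local distance.
import numpy as np

def segment_features(x, n_segments=10):
    """Turn a univariate series into an (n_segments, 3) series of mean/variance/slope."""
    feats = []
    for seg in np.array_split(x, n_segments):
        t = np.arange(len(seg))
        slope = np.polyfit(t, seg, 1)[0] if len(seg) > 1 else 0.0
        feats.append([seg.mean(), seg.var(), slope])
    return np.asarray(feats)

def dtw_mahalanobis(a, b, M):
    """DTW between two feature sequences with local distance sqrt((u-v)^T M (u-v))."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = a[i - 1] - b[j - 1]
            cost = np.sqrt(d @ M @ d)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x, y = np.random.randn(100), np.random.randn(120)
M = np.eye(3)                          # placeholder for the learned Mahalanobis matrix
print(dtw_mahalanobis(segment_features(x), segment_features(y), M))
```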


Author(s):  
Pradeep Lall ◽  
Tony Thomas ◽  
Ken Blecker

Abstract This study focuses on feature vector identification and Remaining Useful Life (RUL) estimation of SAC305 solder alloy PCBs of two different configurations under varying conditions of temperature and vibration. The feature vectors are identified using strain signals acquired from four symmetrical locations on the PCB at regular intervals during vibration. Two different types of experiments are employed to characterize the PCB's dynamic changes with varying temperature and acceleration levels. The strain signals acquired during each of these experiments are compared based on both time- and frequency-domain characteristics. Different statistical and frequency-based techniques were used to identify strain signal variations with changes in environmental and loading conditions. The feature vectors for predicting failure at a constant working temperature and load were identified, and, as an extension of this work, the effectiveness of the feature vectors under varying temperature and acceleration levels is investigated. The Remaining Useful Life of the packages was estimated using a deep learning approach based on a Long Short-Term Memory (LSTM) network. This technique can identify the underlying patterns in multivariate time series data that predict the packages' life. The residuals of the autocorrelation function were used as the multivariate time series input to the LSTM deep learning technique to forecast the packages' life at different temperature and acceleration levels during vibration.
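Since the abstract does not spell out how the autocorrelation residuals are formed, the following numpy sketch is an assumption about that feature-construction step: an AR(1)-style prediction is derived from the lag-1 sample autocorrelation of each strain channel, and the residual series from the four gauge locations are stacked into the multivariate sequence fed to the LSTM:

```python
# Speculative sketch: autocorrelation-based residual features from strain signals.
import numpy as np

def autocorr(x, max_lag):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf[:max_lag + 1] / acf[0]

def acf_residuals(x, lag=1):
    """One-step prediction from the lag-`lag` autocorrelation; return the residuals."""
    rho = autocorr(x, lag)[lag]
    pred = rho * x[:-lag]
    return x[lag:] - pred

# Four strain-gauge channels -> (time, 4) residual matrix as LSTM input.
strain = [np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500) for _ in range(4)]
features = np.stack([acf_residuals(s) for s in strain], axis=1)
print(features.shape)                  # (499, 4)
```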


Author(s):  
B. Sushrith et al.

In this paper, the focus is on predicting, before discharge, which patients are going to be readmitted to the hospital, by applying recent deep-learning algorithms to patients' electronic health records, which form time-series data. To begin the study and analysis of the data, this project deployed conventional supervised ML algorithms such as Logistic Regression, Naïve Bayes, Random Forest, and SVM, and compared their performance on different portions of the dataset. The final model uses deep-learning architectures such as RNN and LSTM to improve the prediction results by taking advantage of the time series structure. An additional feature is the use of low-dimensional representations of medical concepts as input to the model. Ultimately, this work tests, validates, and explains the developed system using the MIMIC-III dataset, which contains information on around 38,000 patients and about 61,155 ICU admissions collected over a period of 10 years. This exhaustive dataset is used to train models that provide healthcare workers with proper information regarding discharge and readmission. By identifying, before discharge, patients who are likely to be readmitted to the ICU, these ML and deep learning models help the hospital allocate resources properly and also reduce the financial risk to patients. In order to reduce avoidable ICU readmissions, hospitals have to be able to recognize patients who have a higher risk of ICU readmission; those patients can then remain in the ICU so that they do not run the risk of being admitted back to the hospital, and the hospital resources that would have been spent on an avoidable readmission can be re-allocated to more critical areas that need them. A more effective readmission prediction system can thus play an important role in helping hospitals and ICU doctors identify patients who are going to be readmitted before discharge. To build this system, different ML and deep-learning algorithms are used, and predictive models based on large amounts of data are built to predict which patients will be admitted back to the hospital after discharge.
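A minimal PyTorch sketch of the kind of model described above (vocabulary size, embedding width, and layer sizes are illustrative assumptions, not the authors' configuration): sequences of medical-concept codes pass through a low-dimensional embedding and an LSTM, and the final state is scored for readmission probability:

```python
# Sketch: concept-embedding + LSTM readmission classifier.
import torch
import torch.nn as nn

class ReadmissionLSTM(nn.Module):
    def __init__(self, n_codes=5000, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_codes, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, codes):                       # codes: (batch, seq_len) concept ids
        out, _ = self.lstm(self.embed(codes))
        return torch.sigmoid(self.head(out[:, -1])) # P(readmission)

model = ReadmissionLSTM()
codes = torch.randint(1, 5000, (8, 40))             # 8 patients, 40 coded events each
print(model(codes).squeeze(-1))                      # readmission probabilities
```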


2022 ◽  
Vol 258 (1) ◽  
pp. 12
Author(s):  
Vlad Landa ◽  
Yuval Reuveni

Abstract Space weather phenomena such as solar flares have massive destructive power when they reach a certain magnitude. Here, we explore a deep-learning approach to building a solar flare-forecasting model, while examining its limitations and feature-extraction ability based on the available Geostationary Operational Environmental Satellite (GOES) X-ray time-series data. We present a multilayer 1D convolutional neural network to forecast the occurrence probability of M- and X-class solar flare events at 1, 3, 6, 12, 24, 48, 72, and 96 hr time frames. The forecasting models were trained and evaluated in two different scenarios: (1) random selection and (2) chronological selection, which were afterward compared in terms of common score metrics. Additionally, we compared our results to state-of-the-art flare-forecasting models. The results indicate that (1) when X-ray time-series data are used alone, the suggested model achieves higher scores for X-class flares and similar scores for M-class flares compared to previous studies; (2) the two different scenarios yield opposite results for the X- and M-class flares; and (3) the suggested model, using X-ray time series alone, fails to distinguish between M- and X-class magnitude solar flare events. Furthermore, the scores achieved with the suggested method, obtained solely from X-ray time-series measurements, indicate that substantial information regarding solar activity and physical processes is encapsulated in the data, and augmenting the model with additional data sets, both spatial and temporal, may lead to better predictions while yielding a more comprehensive physical interpretation of solar activity. All source codes are available at https://github.com/vladlanda.
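A minimal PyTorch sketch of a multilayer 1D CNN of the kind described above (filter counts, kernel sizes, and window length are illustrative assumptions; the authors' actual code is in the linked repository):

```python
# Sketch: 1D CNN mapping a window of X-ray flux to a flare probability.
import torch
import torch.nn as nn

class FlareCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                      # x: (batch, 1, window) X-ray flux
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.head(z))     # flare probability for the chosen horizon

model = FlareCNN()
x = torch.rand(4, 1, 720)                      # 4 windows of flux samples
print(model(x).squeeze(-1))
```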


2021 ◽  
Vol 11 (20) ◽  
pp. 9373
Author(s):  
Jie Ju ◽  
Fang-Ai Liu

Deep learning models have been widely used in prediction problems in various scenarios and have shown excellent prediction performance. As a deep learning model, the long short-term memory neural network (LSTM) is powerful at predicting time series data. However, with the advancement of technology, data collection has become easier, and multivariate time series data have emerged. Multivariate time series data are often characterized by a large volume of data, tight timelines, and many related sequences; in real data sets in particular, the behavior of many sequences is affected by changes in other sequences. Interactions between factors, abrupt changes in the data, and other issues seriously impact the prediction accuracy of deep learning models on this type of data. On the other hand, the mutual-influence information between different sequences can be extracted and used as part of the model input to make the prediction results more accurate. Therefore, we propose an ATT-LSTM model. The network applies an attention mechanism to the LSTM to filter the mutual-influence information in the data when predicting multivariate time series, which compensates for the network's weakness in processing such data and greatly improves its accuracy in predicting multivariate time series. To evaluate the model's accuracy, we compare the ATT-LSTM model with six other models on two real multivariate time series data sets using two evaluation metrics: Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The experimental results show that the model achieves an excellent performance improvement compared with the other six models, proving its effectiveness in predicting multivariate time series data.
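A minimal PyTorch sketch (not the authors' exact ATT-LSTM architecture; all sizes are assumptions) of attention applied over LSTM hidden states for multivariate time series prediction, together with the MAE and RMSE metrics used for evaluation:

```python
# Sketch: attention over LSTM hidden states for multivariate time series prediction.
import torch
import torch.nn as nn

class AttLSTM(nn.Module):
    def __init__(self, n_series=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_series, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, n_series)
        h, _ = self.lstm(x)                        # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention weight per time step
        context = (w * h).sum(dim=1)               # weighted context vector
        return self.head(context)

model = AttLSTM()
x, y = torch.randn(16, 24, 8), torch.randn(16, 1)
pred = model(x)
mae = (pred - y).abs().mean()
rmse = torch.sqrt(((pred - y) ** 2).mean())
print(f"MAE={mae.item():.3f}  RMSE={rmse.item():.3f}")
```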

