A deep learning model to effectively capture mutation information in multivariate time series prediction

2020 ◽  
Vol 203 ◽  
pp. 106139 ◽  
Author(s):  
Jun Hu ◽  
Wendong Zheng

Energies ◽
2020 ◽  
Vol 13 (18) ◽  
pp. 4722
Author(s):  
Seok-Jun Bu ◽  
Sung-Bae Cho

Predicting residential energy consumption amounts to forecasting a multivariate time series: from a specific window of several sensor signals, a prediction model extracts features to forecast future consumption. The task remains challenging because of irregular patterns in the data, including hidden correlations between power attributes. To extract these complicated, irregular energy patterns and to selectively learn spatiotemporal features that reduce the translational variance between energy attributes, we propose a deep learning model that combines multi-headed attention with a convolutional recurrent neural network. It exploits attention scores, computed with softmax and dot-product operations inside the network, to model the transient and impulsive nature of energy demand. Experiments on the University of California, Irvine (UCI) household electric power consumption dataset, consisting of 2,075,259 time-series measurements, show that the proposed model reduces prediction error by 31.01% compared with the state-of-the-art deep learning model. In particular, multi-headed attention improves prediction performance by up to 27.91% over single-headed attention.
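
A minimal PyTorch sketch of the idea described in this abstract: convolutional and recurrent layers extract spatiotemporal features from a sensor window, and multi-headed dot-product attention re-weights time steps. All layer sizes and module names are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ConvRecurrentAttention(nn.Module):
    def __init__(self, n_features, conv_channels=32, hidden=64, n_heads=4):
        super().__init__()
        # 1-D convolution extracts local features from the sensor window
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        # LSTM models temporal dependencies across the window
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        # Multi-headed attention: softmax over scaled dot-product scores
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.out = nn.Linear(hidden, 1)  # next-step consumption

    def forward(self, x):                          # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(h)                        # (batch, time, hidden)
        # Self-attention re-weights time steps, capturing transient spikes
        a, _ = self.attn(h, h, h)
        return self.out(a[:, -1])                  # predict from the final step
```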


2020 ◽  
Author(s):  
Dongdong Zhang ◽  
Changchang Yin ◽  
Katherine M. Hunold ◽  
Xiaoqian Jiang ◽  
Jeffrey M. Caterino ◽  
...  

Background: Sepsis, a life-threatening illness caused by the body's response to infection, is a leading cause of death worldwide and has become a global epidemiological burden. Early prediction of sepsis increases the likelihood of survival for septic patients. Methods: The 2019 DII National Data Science Challenge enabled participating teams to develop models for early prediction of sepsis onset from de-identified electronic health records of over 100,000 unique patients. Our task was to predict sepsis onset 4 hours before its diagnosis using basic administrative and demographic information together with time-series vital, lab, and nutrition features. We propose an LSTM-based model with event embedding and time encoding for this time-series prediction task, and we use an attention mechanism and global max pooling to make the deep learning model interpretable. Results: We evaluated the proposed model on two use cases of sepsis onset prediction, achieving AUC scores of 0.940 and 0.845, respectively. Our team, BuckeyeAI, achieved an average AUC of 0.892 and officially ranked #2 out of 30 participants. Conclusions: Our model outperformed models trained on collapsed features (i.e., logistic regression, random forest, and LightGBM). The proposed LSTM-based model handles irregular time intervals by incorporating time encoding, and it is interpretable thanks to the attention mechanism and global max pooling.
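
A minimal PyTorch sketch of the ingredients this abstract names: event embedding, encoding of irregular time gaps, an LSTM, and interpretation via per-step attention weights plus global max pooling. Dimensions and names are illustrative assumptions, not the team's exact model.

```python
import torch
import torch.nn as nn

class SepsisLSTM(nn.Module):
    def __init__(self, n_events, embed_dim=32, hidden=64):
        super().__init__()
        self.event_embed = nn.Embedding(n_events, embed_dim)
        # Encode the (irregular) time gap since the previous event
        self.time_enc = nn.Linear(1, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # per-step attention scores
        self.classify = nn.Linear(2 * hidden, 1)

    def forward(self, events, deltas):   # events: (B, T) int64, deltas: (B, T)
        x = self.event_embed(events) + self.time_enc(deltas.unsqueeze(-1))
        h, _ = self.lstm(x)                        # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)     # interpretable step weights
        attended = (w * h).sum(dim=1)              # attention-pooled summary
        pooled = h.max(dim=1).values               # global max pooling over time
        logit = self.classify(torch.cat([attended, pooled], dim=-1))
        return torch.sigmoid(logit)                # P(sepsis onset within 4 h)
```

The attention weights w indicate which clinical events drove a prediction, and the max-pooling indices point to the most salient time steps, which is the kind of interpretability the abstract refers to.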


Water ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 575
Author(s):  
Zhenghe Li ◽  
Ling Kang ◽  
Liwei Zhou ◽  
Modi Zhu

Recent advances in deep learning, especially long short-term memory (LSTM) networks, provide useful insights into time series prediction problems, beyond the development of time series models themselves. Runoff forecasting is a time series prediction problem that takes a series of past runoff data (water level and discharge series) as input and produces a fixed-length series of future runoff as output. Most previous work has focused on the sufficiency of input data and the structural complexity of deep learning models, while less effort has gone into analyzing data quantity or preprocessing the original input data, such as through time series decomposition, which can better capture the trend of runoff and unlock the potential of deep learning. Mutual information and seasonal-trend decomposition are two useful time series methods for analyzing data quantity and processing the original data. Building on a former study, we propose a deep learning model combined with these time series analysis methods for daily runoff prediction in the middle Yangtze River and analyze its feasibility and usability against frequently used counterpart models. This research also explores how data quality affects the performance of the deep learning model: applying the time series methods yields useful information about the quality and amount of the data we feed to the model. Comparison experiments at two different sites indicate that the proposed model improves the precision of runoff prediction and is easier and more effective for practical application. In short, time series analysis methods can unlock the potential of deep learning in daily runoff prediction and, more broadly, of artificial intelligence in hydrology research.
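
A minimal sketch of the two time series analysis steps named above: seasonal-trend decomposition (here via statsmodels' STL) and mutual-information scoring of lagged inputs (via scikit-learn). The file and column names are hypothetical placeholders, and the choices of period and lag range are assumptions for daily runoff data.

```python
import pandas as pd
from statsmodels.tsa.seasonal import STL
from sklearn.feature_selection import mutual_info_regression

df = pd.read_csv("runoff.csv", parse_dates=["date"], index_col="date")

# 1) Decompose the discharge series into trend/seasonal/residual components,
#    exposing the runoff trend before the series is fed to the LSTM.
res = STL(df["discharge"], period=365).fit()
df["trend"], df["seasonal"] = res.trend, res.seasonal

# 2) Score lagged inputs by mutual information with next-day discharge to
#    judge how much (and which) past data the deep learning model needs.
lags = pd.concat({f"lag_{k}": df["discharge"].shift(k) for k in range(1, 8)}, axis=1)
target = df["discharge"].shift(-1)
valid = lags.dropna().index.intersection(target.dropna().index)
mi = mutual_info_regression(lags.loc[valid], target.loc[valid])
print(dict(zip(lags.columns, mi.round(3))))
```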


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Harjanto Prabowo ◽  
Alam A. Hidayat ◽  
Tjeng Wawan Cenggoro ◽  
Reza Rahutomo ◽  
Kartika Purwandari ◽  
...  

2021 ◽  
Author(s):  
Yuanjun Li ◽  
Satomi Suzuki ◽  
Roland Horne

Knowledge of well connectivity in a reservoir is crucial, especially for early-stage field development and water injection management. However, traditional interference tests can take several weeks or longer, depending on the distance between wells and the hydraulic diffusivity of the reservoir. Therefore, instead of physically shutting in production wells, we can take advantage of deep learning methods to perform virtual interference tests. In this study, we first used historical field data to train the deep learning model, a modified Long- and Short-term Time-series Network (LSTNet). This model combines a convolutional neural network (CNN) to extract short-term local dependency patterns, a recurrent neural network (RNN) to discover long-term patterns in time series trends, and a traditional autoregressive model to alleviate the scale-insensitivity problem. To address the time lag in signal propagation, we employed a skip-recurrent structure that extends the basic RNN by connecting the current state with the state at the time when the flow-rate signal from an adjacent well starts to affect the observation well. In addition, we found that wells connected to the same manifold usually have similar liquid production patterns, which can suggest false causation of subsurface pressure communication. We therefore enhanced model performance by using external feature differences to remove the surface connection from the data, reducing input similarity; this also amplifies weak signals and helps distinguish the input signals. To examine the deep learning model, we used datasets generated from the Norne Field under two different geological settings, sealing and nonsealing fault cases, with production wells placed on the two sides of the fault to test for false-negative predictions. With these improvements and with parameter tuning, the modified LSTNet model successfully indicated well connectivity in the nonsealing cases and revealed the sealing structures in the sealing cases based on the historical data. The deep learning method employed in this work predicts well pressure without hand-crafted features, which are usually designed around particular flow patterns and geological settings; it should therefore be applicable to general cases and more intuitive to use. Furthermore, this virtual interference test with a deep learning framework avoids the production loss of a physical test.
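
A minimal PyTorch sketch of the LSTNet ingredients this abstract describes: a CNN for short-term patterns, a GRU for long-term trends, a skip-GRU that directly connects states a fixed lag apart (the propagation delay between wells), and a linear autoregressive highway. The skip length, window, and layer sizes are illustrative assumptions, not the authors' tuned configuration.

```python
import torch
import torch.nn as nn

class MiniLSTNet(nn.Module):
    def __init__(self, n_wells, window=96, skip=24, channels=32, hidden=50):
        super().__init__()
        self.skip, self.window = skip, window
        self.conv = nn.Conv1d(n_wells, channels, kernel_size=6)   # short-term CNN
        self.gru = nn.GRU(channels, hidden, batch_first=True)     # long-term RNN
        self.gru_skip = nn.GRU(channels, hidden, batch_first=True)
        self.linear = nn.Linear(hidden + skip * hidden, n_wells)
        self.ar = nn.Linear(window, 1)                             # AR highway

    def forward(self, x):                              # x: (B, window, n_wells)
        c = torch.relu(self.conv(x.transpose(1, 2)))   # (B, channels, T')
        c = c.transpose(1, 2)                          # (B, T', channels)
        _, h = self.gru(c)                             # long-term component
        # Skip-recurrent: run a GRU over every skip-th step, so each state is
        # connected to the state `skip` steps earlier (the signal time lag).
        T = c.size(1) - (c.size(1) % self.skip)
        s = c[:, -T:].reshape(c.size(0), T // self.skip, self.skip, -1)
        s = s.transpose(1, 2).reshape(c.size(0) * self.skip, T // self.skip, -1)
        _, hs = self.gru_skip(s)
        hs = hs.squeeze(0).reshape(c.size(0), self.skip * hs.size(-1))
        y = self.linear(torch.cat([h.squeeze(0), hs], dim=1))
        # The per-well AR term keeps outputs sensitive to the raw signal scale,
        # mitigating the scale insensitivity of the neural components.
        return y + self.ar(x.transpose(1, 2)).squeeze(-1)   # (B, n_wells)
```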

