Performance Evaluation of a Deep Learning Model for Time Series Prediction on Android Devices

Author(s): Rika Sato, Masato Oguchi, Saneyasu Yamaguchi, Takeshi Kamiyama
2020
Author(s): Dongdong Zhang, Changchang Yin, Katherine M. Hunold, Xiaoqian Jiang, Jeffrey M. Caterino, ...

Background: Sepsis, a life-threatening condition caused by the body's response to infection, is a leading cause of death worldwide and has become a global epidemiological burden. Early prediction of sepsis increases the likelihood of survival for septic patients. Methods: The 2019 DII National Data Science Challenge provided participating teams with de-identified electronic health records of over 100,000 unique patients to develop models for early prediction of sepsis onset. Our task was to predict sepsis onset 4 hours before its diagnosis, using basic administrative and demographic information together with time-series vital sign, laboratory, and nutrition data as features. We propose an LSTM-based model with event embedding and time encoding for this time-series prediction task, and we use an attention mechanism and global max pooling to make the deep learning model interpretable. Results: We evaluated the proposed model on two use cases of sepsis onset prediction, achieving AUC scores of 0.940 and 0.845, respectively. Our team, BuckeyeAI, achieved an average AUC of 0.892 and officially ranked #2 out of 30 participants. Conclusions: Our model outperformed collapsed (non-sequential) models, i.e., logistic regression, random forest, and LightGBM. The proposed LSTM-based model handles irregular time intervals by incorporating time encoding and is interpretable thanks to the attention mechanism and global max pooling.
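
The abstract describes the architecture only in outline. As a purely illustrative sketch (not the authors' code), the following PyTorch snippet shows how event embedding, time encoding, an LSTM, an attention layer, and global max pooling might be combined for sequence-level risk prediction; all layer names, sizes, and the toy inputs are assumptions.

```python
# Hypothetical sketch of an LSTM with event embedding, time encoding, attention,
# and global max pooling, loosely following the abstract. Dimensions are illustrative.
import torch
import torch.nn as nn

class SepsisLSTM(nn.Module):
    def __init__(self, n_event_types, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.event_embed = nn.Embedding(n_event_types, embed_dim)
        # Encode the (irregular) time gap since the previous event as a learned feature.
        self.time_encode = nn.Linear(1, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)             # per-step attention score
        self.classifier = nn.Linear(hidden_dim * 2, 1)   # attention context + max-pooled features

    def forward(self, events, time_gaps):
        # events:    (batch, seq_len) integer event codes
        # time_gaps: (batch, seq_len) hours since the previous event
        x = self.event_embed(events) + self.time_encode(time_gaps.unsqueeze(-1))
        h, _ = self.lstm(x)                               # (batch, seq_len, hidden_dim)
        weights = torch.softmax(self.attn(h), dim=1)      # attention over time steps
        context = (weights * h).sum(dim=1)                # attention-weighted summary
        pooled = h.max(dim=1).values                      # global max pooling over time
        return torch.sigmoid(self.classifier(torch.cat([context, pooled], dim=-1)))

model = SepsisLSTM(n_event_types=500)
events = torch.randint(0, 500, (8, 48))   # 8 patients, 48 time steps of event codes
gaps = torch.rand(8, 48)                  # hours between consecutive events
risk = model(events, gaps)                # per-patient sepsis-onset probability
```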


Water, 2021, Vol 13 (4), pp. 575
Author(s): Zhenghe Li, Ling Kang, Liwei Zhou, Modi Zhu

Recent advances in deep learning, especially long short-term memory (LSTM) networks, provide useful insights into how to tackle time series prediction problems, beyond the development of a dedicated time series model. Runoff forecasting is a time series prediction problem that takes a series of past runoff data (water level and discharge series) as input and produces a fixed-length series of future runoff as output. Most previous work has focused on the sufficiency of input data and the structural complexity of the deep learning model, while less effort has gone into analyzing data quantity or processing the original input data, such as applying time series decomposition, which can better capture the trend of runoff and unlock the effective potential of deep learning. Mutual information and seasonal-trend decomposition are two useful time series methods for data quantity analysis and original data processing. Building on a previous study, we propose a deep learning model combined with time series analysis methods for daily runoff prediction in the middle Yangtze River and analyze its feasibility and usability against frequently used counterpart models. This research also explores how data quality affects the performance of the deep learning model. Applying these time series methods yields useful information about the quality and amount of data adopted in the deep learning model. Comparison experiments at two different sites indicate that the proposed model improves the precision of runoff prediction and is easier and more effective to apply in practice. In short, time series analysis methods can bring out the potential of deep learning in daily runoff prediction and may unlock the broader potential of artificial intelligence in hydrology research.
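
As a hedged illustration of the preprocessing this abstract refers to, the sketch below applies seasonal-trend decomposition (STL) to a daily runoff series and scores lagged inputs by mutual information before they would be fed to a deep learning model. The synthetic series, annual period, and ten-day lag window are assumptions for demonstration, not the study's data or settings.

```python
# Illustrative sketch: STL decomposition of a daily runoff series and
# mutual-information scoring of candidate lagged inputs, as a preprocessing
# step before a deep learning model. Period and lag choices are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from sklearn.feature_selection import mutual_info_regression

# Synthetic daily runoff (a stand-in for observed discharge data).
rng = np.random.default_rng(0)
days = pd.date_range("2010-01-01", periods=365 * 5, freq="D")
runoff = pd.Series(
    1000 + 300 * np.sin(2 * np.pi * np.arange(len(days)) / 365) + rng.normal(0, 50, len(days)),
    index=days,
)

# 1) Decompose into trend, seasonal, and residual components (annual cycle assumed).
stl = STL(runoff, period=365).fit()
components = pd.DataFrame({"trend": stl.trend, "seasonal": stl.seasonal, "resid": stl.resid})

# 2) Score lagged runoff values by mutual information with the next-day target,
#    to decide how many past days to feed the network.
max_lag = 10
lagged = pd.concat({f"lag_{k}": runoff.shift(k) for k in range(1, max_lag + 1)}, axis=1)
frame = pd.concat([lagged, runoff.rename("target")], axis=1).dropna()
mi = mutual_info_regression(frame.drop(columns="target"), frame["target"])
for name, score in zip(frame.columns[:-1], mi):
    print(f"{name}: MI = {score:.3f}")
```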


IEEE Access, 2021, pp. 1-1
Author(s): Harjanto Prabowo, Alam A. Hidayat, Tjeng Wawan Cenggoro, Reza Rahutomo, Kartika Purwandari, ...

2021
Author(s): Yuanjun Li, Satomi Suzuki, Roland Horne

Abstract Knowledge of well connectivity in a reservoir is crucial, especially for early-stage field development and water injection management. However, traditional interference tests can take several weeks or longer, depending on the distance between wells and the hydraulic diffusivity of the reservoir. Therefore, instead of physically shutting in production wells, we can take advantage of deep learning methods to perform virtual interference tests. In this study, we first used historical field data to train the deep learning model, a modified Long- and Short-term Time-series network (LSTNet). This model combines a convolutional neural network (CNN) to extract short-term local dependency patterns, a recurrent neural network (RNN) to discover long-term patterns in time series trends, and a traditional autoregressive model to alleviate the scale-insensitivity problem. To address the time lag in signal propagation, we employed a skip-recurrent structure that extends the RNN by connecting the current state with a previous state at the point where the flow rate signal from an adjacent well starts to affect the observation well. In addition, we found that wells connected to the same manifold usually have similar liquid production patterns, which can suggest false causation of subsurface pressure communication. We therefore enhanced the model by using external feature differences to remove this surface connection from the data, reducing input similarity; this also amplifies weak signals and helps distinguish the input signals. To examine the deep learning model, we used datasets generated from the Norne Field under two different geological settings: sealing and nonsealing cases. The production wells were placed on two sides of the fault to test for false-negative predictions. With these improvements and with parameter tuning, the modified LSTNet model successfully indicated well connectivity in the nonsealing cases and revealed the sealing structures in the sealing cases based on the historical data. The deep learning method employed in this work predicts well pressure without hand-crafted features, which are usually constructed from flow patterns and geological settings, so it should be applicable to general cases and more intuitive. Furthermore, this virtual interference test with a deep learning framework avoids production loss.
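
For readers unfamiliar with LSTNet, the following simplified PyTorch sketch shows the ingredients named in the abstract: a convolutional layer for short-term patterns, a GRU for long-term trends, a skip-recurrent path that connects the current state to states a fixed lag earlier, and a linear autoregressive term that keeps the output sensitive to input scale. The window length, skip interval, and layer sizes are illustrative assumptions, not the configuration used in the study.

```python
# Simplified, illustrative LSTNet-style model (CNN + GRU + skip-GRU + autoregressive
# term). All sizes, the input window, and the skip length are assumptions.
import torch
import torch.nn as nn

class LSTNetSketch(nn.Module):
    def __init__(self, n_series, skip=24, conv_out=32, kernel=6,
                 rnn_hidden=64, skip_hidden=16, ar_window=24):
        super().__init__()
        self.skip, self.ar_window = skip, ar_window
        self.conv = nn.Conv1d(n_series, conv_out, kernel_size=kernel)
        self.gru = nn.GRU(conv_out, rnn_hidden, batch_first=True)
        self.skip_gru = nn.GRU(conv_out, skip_hidden, batch_first=True)
        self.fc = nn.Linear(rnn_hidden + skip * skip_hidden, n_series)
        self.ar = nn.Linear(ar_window, 1)  # per-series linear autoregression

    def forward(self, x):
        # x: (batch, window, n_series) past rates/pressures for all wells
        batch = x.size(0)
        c = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (batch, T, conv_out)
        _, h = self.gru(c)                                            # (1, batch, rnn_hidden)
        h = h.squeeze(0)

        # Skip-recurrent path: every `skip`-th step feeds a second GRU, so a signal
        # arriving with a fixed lag from an adjacent well reaches the current state.
        T = c.size(1)
        n_chunks = T // self.skip
        s = c[:, T - n_chunks * self.skip:, :]                        # trim to a multiple of skip
        s = s.reshape(batch, n_chunks, self.skip, -1).permute(0, 2, 1, 3)
        s = s.reshape(batch * self.skip, n_chunks, -1)
        _, hs = self.skip_gru(s)
        hs = hs.squeeze(0).reshape(batch, self.skip * hs.size(-1))

        out = self.fc(torch.cat([h, hs], dim=1))                      # nonlinear component

        # Autoregressive component on raw inputs keeps the output scale-sensitive.
        ar_in = x[:, -self.ar_window:, :].permute(0, 2, 1)            # (batch, n_series, ar_window)
        return out + self.ar(ar_in).squeeze(-1)

model = LSTNetSketch(n_series=4)
next_step = model(torch.randn(8, 168, 4))  # predicted next-step value for each of 4 wells
```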


2021, Vol 10 (1)
Author(s): Xu Zhao, Ke Liao, Wei Wang, Junmei Xu, Lingzhong Meng

Abstract Background: Intraoperative physiological monitoring generates a large quantity of time-series data that may be associated with postoperative outcomes. Using a deep learning model based on intraoperative time-series monitoring data to predict postoperative quality of recovery has not been previously reported. Methods: Perioperative data from female patients undergoing laparoscopic hysterectomy were prospectively collected. Deep learning, logistic regression, support vector machine, and random forest models were trained on different datasets and evaluated by 5-fold cross-validation. Quality of recovery on postoperative day 1 was assessed using the Quality of Recovery-15 scale and dichotomized as satisfactory if the score was ≥122 and unsatisfactory if <122. Model discrimination was estimated using the area under the receiver operating characteristic curve (AUROC), and calibration was visualized with calibration plots and appraised by the Brier score. The SHapley Additive exPlanations (SHAP) approach was used to characterize the contributions of the different input features. Results: Data from 699 patients were used for modeling. When using preoperative data only, all four models performed poorly (AUROC ranging from 0.65 to 0.68). Including intraoperative intervention and/or monitoring data improved the performance of the deep learning, logistic regression, and random forest models, but not the support vector machine model. The AUROC of the deep learning model based on intraoperative monitoring data only was 0.77 (95% CI, 0.72–0.81), which was indistinguishable from that based on intraoperative intervention data only (AUROC, 0.79; 95% CI, 0.75–0.82) and from that based on the combined preoperative, intraoperative intervention, and monitoring data (AUROC, 0.81; 95% CI, 0.78–0.83). In contrast, when using intraoperative monitoring data only, the logistic regression model had an AUROC of 0.72 (95% CI, 0.68–0.77) and the random forest model had an AUROC of 0.74 (95% CI, 0.73–0.76). The Brier score of the deep learning model based on intraoperative monitoring data was 0.177, lower than that of the other models. Conclusions: Deep learning based on intraoperative time-series monitoring data can predict post-hysterectomy quality of recovery. The use of intraoperative monitoring data for outcome prediction warrants further investigation. Trial registration: This trial (Identifier: NCT03641625) was registered at ClinicalTrials.gov by the principal investigator, Lingzhong Meng, on August 22, 2018.
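
The evaluation protocol described here (5-fold cross-validation with AUROC for discrimination and the Brier score for calibration) can be illustrated with a generic scikit-learn loop over the baseline classifiers. The data below are synthetic placeholders, not the study's perioperative dataset.

```python
# Illustrative 5-fold cross-validation comparing baseline classifiers by AUROC and
# Brier score, mirroring the evaluation described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Placeholder stand-in for intraoperative monitoring features and the dichotomized
# QoR-15 outcome (1 = satisfactory recovery, score >= 122; 0 = unsatisfactory).
X, y = make_classification(n_samples=699, n_features=30, weights=[0.3], random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    aucs, briers = [], []
    for train_idx, test_idx in cv.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))       # discrimination
        briers.append(brier_score_loss(y[test_idx], prob))  # calibration
    print(f"{name}: AUROC = {np.mean(aucs):.3f}, Brier = {np.mean(briers):.3f}")
```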

