Tourism demand forecasting with time series imaging: A deep learning model
2021 ◽ Vol 90 ◽ pp. 103255
Author(s): Jian-Wu Bi ◽ Hui Li ◽ Zhi-Ping Fan
2021 ◽ pp. 1-17
Author(s): Kun Zhu ◽ Shuai Zhang ◽ Wenyu Zhang ◽ Zhiqiang Zhang

Accurate taxi demand forecasting is essential for estimating changes in demand and making informed decisions. Although deep learning methods have been widely applied to taxi demand forecasting, they neglect the complexity of taxi demand data and the impact of event occurrences, making it hard to model taxi demand effectively in highly dynamic areas (e.g., areas with frequent events). Therefore, to achieve accurate and stable taxi demand forecasting in highly dynamic areas, a novel hybrid deep learning model is proposed in this study. First, to reduce the complexity of the taxi demand time series, seasonal-trend decomposition based on loess (STL) is employed to decompose the time series into three simpler components (i.e., seasonal, trend, and remainder). Then, different forecasting methods are adopted for the different components to obtain robust forecasting results. Moreover, considering the instability and nonlinearity of the remainder component, this study proposes fusing event features (in particular, text data) to capture the unusual fluctuation patterns of the remainder component and address its extreme-value problem. Finally, a genetic algorithm is applied to determine the optimal weights for integrating the forecasting results of the three components into the final taxi demand forecast. The experimental results demonstrate the better accuracy and reliability of the proposed model compared with other baseline forecasting models.
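The decompose-then-recombine pipeline described above can be sketched in a few lines. The sketch below uses a simplified classical decomposition (centered moving-average trend, periodic-mean seasonality) as a stand-in for the paper's loess-based STL, on a synthetic demand series; the period and data are illustrative only.

```python
import numpy as np

def decompose(series, period):
    """Split a series into trend, seasonal, and remainder components.

    A simplified classical decomposition (moving-average trend,
    periodic-mean seasonality) standing in for the loess-based STL
    used in the paper.
    """
    series = np.asarray(series, dtype=float)
    # Centered moving average as the trend estimate.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    # Average each phase of the cycle to get the seasonal component.
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, len(series) // period + 1)[: len(series)]
    seasonal -= seasonal.mean()  # center so it carries no level
    # Remainder absorbs everything the other two components miss.
    remainder = series - trend - seasonal
    return trend, seasonal, remainder

# Synthetic hourly demand: daily cycle + slow trend + noise (illustrative).
rng = np.random.default_rng(0)
t = np.arange(24 * 14)
demand = 100 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)
trend, seasonal, remainder = decompose(demand, period=24)
# The three components sum back to the original series by construction.
assert np.allclose(trend + seasonal + remainder, demand)
```

In the paper, each component is then forecast separately and the three forecasts are combined with genetic-algorithm-tuned weights; the event-text fusion applies to the remainder component.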


IEEE Access ◽ 2021 ◽ pp. 1-1
Author(s): Harjanto Prabowo ◽ Alam A. Hidayat ◽ Tjeng Wawan Cenggoro ◽ Reza Rahutomo ◽ Kartika Purwandari ◽ ...

2021
Author(s): Yuanjun Li ◽ Satomi Suzuki ◽ Roland Horne

Abstract Knowledge of well connectivity in a reservoir is crucial, especially for early-stage field development and water injection management. However, traditional interference tests can often take several weeks or even longer, depending on the distance between wells and the hydraulic diffusivity of the reservoir. Therefore, instead of physically shutting in production wells, we can take advantage of deep learning methods to perform virtual interference tests. In this study, we first used historical field data to train the deep learning model, a modified Long- and Short-term Time-series network (LSTNet). This model combines a Convolutional Neural Network (CNN) to extract short-term local dependency patterns, a Recurrent Neural Network (RNN) to discover long-term patterns in time series trends, and a traditional autoregressive model to alleviate the scale-insensitivity problem. To address the time-lag issue in signal propagation, we employed a skip-recurrent structure that extends the existing RNN structure by connecting the current state with a previous state at the point when the flow rate signal from an adjacent well starts to impact the observation well. In addition, we found that wells connected to the same manifold usually have similar liquid production patterns, which can lead to false causation of subsurface pressure communication. Thus, we enhanced the model performance by using external feature differences to remove the surface connection in the data, thereby reducing input similarity. This enhancement can also amplify weak signals and thus help distinguish the input signals. To examine the deep learning model, we used datasets generated from Norne Field with two different geological settings: sealing and nonsealing cases. The production wells are placed on two sides of the fault to test for false-negative predictions.
With these improvements and with parameter tuning, the modified LSTNet model could successfully indicate well connectivity in the nonsealing cases and reveal the sealing structures in the sealing cases based on the historical data. The deep learning method we employed in this work can predict well pressure without using hand-crafted features, which are usually formed based on flow patterns and geological settings. Thus, this method should be more generally applicable and more intuitive. Furthermore, a virtual interference test with a deep learning framework avoids the production loss incurred by physically shutting in wells.
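The skip-recurrent idea can be illustrated with a minimal numpy sketch: each hidden state is updated from the state a fixed number of steps back, so a known propagation lag is wired into the recurrence directly. This is not the authors' LSTNet implementation (which also includes the CNN and autoregressive parts); the scalar input, tanh recurrence, and weight shapes are illustrative assumptions.

```python
import numpy as np

def skip_recurrent(x, w_in, w_rec, skip):
    """Skip-recurrent pass over a scalar input series.

    The state at step t is updated from the state at step t - skip,
    modeling a fixed lag (e.g. the travel time of a pressure signal
    between wells) directly in the recurrence. Illustrative only.
    """
    hidden = w_rec.shape[0]
    states = np.zeros((len(x), hidden))
    for t in range(len(x)):
        prev = states[t - skip] if t >= skip else np.zeros(hidden)
        states[t] = np.tanh(x[t] * w_in + prev @ w_rec)
    return states

rng = np.random.default_rng(1)
rates = rng.normal(size=50)            # adjacent-well flow-rate signal (synthetic)
w_in = rng.normal(size=4) * 0.1        # input weights, hidden size 4 (assumed)
w_rec = rng.normal(size=(4, 4)) * 0.1  # recurrent weights (assumed)
h = skip_recurrent(rates, w_in, w_rec, skip=6)
assert h.shape == (50, 4)
# Before step `skip`, each state depends on the input alone.
assert np.allclose(h[0], np.tanh(rates[0] * w_in))
```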


2021 ◽ Vol 10 (1)
Author(s): Xu Zhao ◽ Ke Liao ◽ Wei Wang ◽ Junmei Xu ◽ Lingzhong Meng

Abstract Background Intraoperative physiological monitoring generates a large quantity of time-series data that might be associated with postoperative outcomes. Using a deep learning model based on intraoperative time-series monitoring data to predict postoperative quality of recovery has not been previously reported. Methods Perioperative data from female patients having laparoscopic hysterectomy were prospectively collected. Deep learning, logistic regression, support vector machine, and random forest models were trained using different datasets and evaluated by 5-fold cross-validation. The quality of recovery on postoperative day 1 was assessed using the Quality of Recovery-15 scale. The quality of recovery was dichotomized as satisfactory if the score was ≥122 and unsatisfactory if <122. Models' discrimination was estimated using the area under the receiver operating characteristics curve (AUROC). Models' calibration was visualized using the calibration plot and appraised by the Brier score. The SHapley Additive exPlanation (SHAP) approach was used to characterize the contributions of different input features. Results Data from 699 patients were used for modeling. When using preoperative data only, all four models exhibited poor performance (AUROC ranging from 0.65 to 0.68). The inclusion of the intraoperative intervention and/or monitoring data improved the performance of the deep learning, logistic regression, and random forest models but not the support vector machine model. The AUROC of the deep learning model based on the intraoperative monitoring data only was 0.77 (95% CI, 0.72–0.81), which was indistinguishable from that based on the intraoperative intervention data only (AUROC, 0.79; 95% CI, 0.75–0.82) and from that based on the preoperative, intraoperative intervention, and monitoring data combined (AUROC, 0.81; 95% CI, 0.78–0.83).
In contrast, when using the intraoperative monitoring data only, the logistic regression model had an AUROC of 0.72 (95% CI, 0.68–0.77), and the random forest model had an AUROC of 0.74 (95% CI, 0.73–0.76). The Brier score of the deep learning model based on the intraoperative monitoring data was 0.177, which was lower than that of the other models. Conclusions Deep learning based on intraoperative time-series monitoring data can predict post-hysterectomy quality of recovery. The use of intraoperative monitoring data for outcome prediction warrants further investigation. Trial registration This trial (Identifier: NCT03641625) was registered at ClinicalTrials.gov by the principal investigator, Lingzhong Meng, on August 22, 2018.
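The two evaluation metrics used in this study, AUROC (discrimination) and the Brier score (calibration), can be computed from scratch. A minimal sketch with made-up labels and predicted probabilities, not the study's data:

```python
import numpy as np

def auroc(y_true, y_score):
    """Rank-based AUROC: the probability that a randomly chosen
    positive case outscores a randomly chosen negative case."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    # Count pairwise wins; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def brier(y_true, y_prob):
    """Mean squared difference between predicted probability and outcome."""
    return float(np.mean((np.asarray(y_prob) - np.asarray(y_true)) ** 2))

# Hypothetical labels (1 = satisfactory recovery) and model probabilities.
y = np.array([1, 1, 0, 0, 1, 0])
p = np.array([0.9, 0.7, 0.4, 0.2, 0.45, 0.5])
print(round(auroc(y, p), 3))  # -> 0.889 (8 of 9 positive/negative pairs ranked correctly)
print(round(brier(y, p), 3))  # -> 0.142 (lower is better calibrated)
```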


2021 ◽ Vol 2021 ◽ pp. 1-9
Author(s): Jinlai Zhang ◽ Yanmei Meng ◽ Jin Wei ◽ Jie Chen ◽ Johnny Qin

Sugar price forecasting has attracted extensive attention from policymakers due to its significant impact on people's daily lives and markets. In this paper, we present a novel hybrid deep learning model that combines a time series decomposition technique, empirical mode decomposition (EMD), with a hyperparameter optimization algorithm, the Tree-structured Parzen Estimator (TPE), for sugar price forecasting. The effectiveness of the proposed model was demonstrated in a case study on the price of London Sugar Futures. Two experiments were conducted to verify the contributions of EMD and TPE. Moreover, the specific effects of EMD and TPE are analyzed using the Diebold-Mariano (DM) test and the improvement percentage. Finally, empirical results demonstrate that the proposed hybrid model outperforms other models.
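A single EMD sifting iteration can be sketched with numpy, using linear envelopes instead of the cubic splines of full EMD and a synthetic price series. Repeated sifting, the stopping criterion, and the TPE search over the forecaster's hyperparameters are omitted; everything below is illustrative.

```python
import numpy as np

def sift_once(x):
    """One EMD sifting iteration with linear envelopes.

    Full EMD interpolates the extrema with cubic splines and repeats
    sifting until an intrinsic mode function (IMF) criterion is met;
    this simplified step just subtracts the mean of the upper and
    lower envelopes, separating the fast oscillation from the trend.
    """
    t = np.arange(len(x))
    # Interior local maxima and minima.
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    upper = np.interp(t, maxima, x[maxima])  # linear upper envelope
    lower = np.interp(t, minima, x[minima])  # linear lower envelope
    return x - (upper + lower) / 2

# Synthetic "price": fast oscillation riding on a slow trend.
t = np.linspace(0, 1, 400)
price = np.sin(2 * np.pi * 24 * t) + 2 * t
imf = sift_once(price)
assert imf.shape == price.shape
# The slow trend is largely removed, so the candidate IMF is flatter.
assert abs(imf.mean()) < abs(price.mean())
```

Each extracted IMF (and the final residual trend) is then forecast separately, which is where the TPE-driven hyperparameter search enters the paper's pipeline.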


Sensors ◽ 2021 ◽ Vol 21 (24) ◽ pp. 8270
Author(s): Taehwan Kim ◽ Jeongho Park ◽ Juwon Lee ◽ Jooyoung Park

The global adoption of smartphone technology affords many conveniences, and not surprisingly, healthcare applications using smartphones and wearable sensors have received much attention. Among the various potential applications and research related to healthcare, recent studies have focused on recognizing human activities and characterizing human motions, often with wearable sensors whose signals generally take the form of time series. In most studies, these sensor signals are used after pre-processing, e.g., by converting them into an image format, rather than being used directly. Several methods have been used for converting time series data into image formats, such as spectrograms, raw plots, and recurrence plots. In this paper, we deal with the healthcare task of predicting human motion signals obtained from sensors attached to persons. We convert the motion signals into image format with the recurrence plot method and use them as input to a deep learning model. For predicting subsequent motion signals, we utilize a recently introduced deep learning model combining neural networks and the Fourier transform, the Fourier neural operator. The model can be viewed as a Fourier-transform-based extension of a convolutional neural network, and in our experiments, we compare its results to those of a convolutional neural network (CNN) model. The results of the proposed method show better performance than those of the CNN model and, furthermore, we confirm that it can be utilized to detect potential accidental falls more quickly via the predicted motion signals.
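The recurrence plot conversion mentioned above has a compact definition: a binary matrix marking which pairs of time points are close. A minimal sketch with an illustrative threshold eps (the paper's threshold and any embedding step are not specified here):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when |x_i - x_j| < eps.

    Turns a 1-D time series into a 2-D image suitable as input to an
    image model such as a CNN or a Fourier neural operator.
    """
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
    return (dist < eps).astype(np.uint8)

# Illustrative periodic motion signal and threshold.
signal = np.sin(np.linspace(0, 4 * np.pi, 100))
rp = recurrence_plot(signal, eps=0.1)
# Symmetric with an all-ones diagonal; periodicity shows up as
# diagonal stripes in the image.
assert rp.shape == (100, 100)
assert np.array_equal(rp, rp.T)
assert rp.diagonal().all()
```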

