Predicting COVID-19 cases using bidirectional LSTM on multivariate time series

Author(s):  
Ahmed Ben Said ◽  
Abdelkarim Erradi ◽  
Hussein Ahmed Aly ◽  
Abdelmonem Mohamed

Abstract: To assist policymakers in making adequate decisions to stop the spread of the COVID-19 pandemic, accurate forecasting of the disease propagation is of paramount importance. This paper presents a deep learning approach to forecast the cumulative number of COVID-19 cases using a bidirectional Long Short-Term Memory (Bi-LSTM) network applied to multivariate time series. Unlike other forecasting techniques, our proposed approach first groups countries having similar demographic and socioeconomic aspects and health-sector indicators using the K-means clustering algorithm. The cumulative case data of the clustered countries, enriched with data related to lockdown measures, are fed to the bidirectional LSTM to train the forecasting model. We validate the effectiveness of the proposed approach by studying the disease outbreak in Qatar and the proposed model's predictions from December 1st until December 31st, 2020. The quantitative evaluation shows that the proposed technique outperforms state-of-the-art forecasting approaches.
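The country-grouping step can be sketched with a minimal k-means implementation. The country rows and indicator columns below are hypothetical stand-ins for the demographic, socioeconomic, and health-sector indicators the paper clusters on, not the paper's actual data:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: returns (cluster labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centroids, keeping the old one if a cluster empties
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# hypothetical indicators per country: [median age, GDP per capita (k$), beds per 1k]
X = np.array([[42.0, 50.0, 8.0],   # country A
              [41.0, 48.0, 7.5],   # country B
              [19.0, 2.0, 0.5],    # country C
              [20.0, 2.5, 0.6]])   # country D
labels, _ = kmeans(X, k=2)
# A and B fall in one cluster, C and D in the other
```

After grouping, one Bi-LSTM model would be trained per cluster on the pooled case curves of its member countries.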

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 731
Author(s):  
Mengxia Liang ◽  
Xiaolong Wang ◽  
Shaocong Wu

Finding the correlation between stocks is an effective method for screening and adjusting investment portfolios. Most studies measure the similarity between stocks with a single temporal feature or with static, nontemporal features. However, these features are not sufficient to capture phenomena such as price fluctuations that are similar in shape but unequal in length, which may be driven by multiple temporal features. To study stock price volatility comprehensively, the correlation between stocks should be mined from the point of view of multiple features described as time series, such as the closing price. In this paper, a time-sensitive composite similarity model for multivariate time-series correlation analysis, based on dynamic time warping (DTW), is proposed. First, a stock is chosen as the benchmark, and the multivariate time series are segmented by the peaks-and-troughs time-series segmentation (PTS) algorithm. Second, similar stocks are screened out by similarity. Finally, the rate at which stock pairs rise or fall together is used to verify the proposed model's effectiveness. Compared with other models, the composite similarity model brings in multiple temporal features and generalizes to numerical multivariate time series in other fields. The results show that the proposed model is very promising.
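The alignment primitive underlying the composite similarity model is dynamic time warping. A minimal sketch, with an absolute-difference cost and toy sequences chosen for illustration, shows how shapes that are similar but unequal in length still match closely:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping with absolute-difference local cost.

    Returns the minimal cumulative alignment cost between sequences a and b.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# the same shape stretched over more points stays close under DTW
assert dtw_distance([1, 2, 3], [1, 2, 2, 3]) == 0.0
```

A composite model as described would combine such DTW distances over several feature series (closing price, volume, and so on) per PTS segment.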


2021 ◽  
Author(s):  
Fatemehalsadat Madaeni ◽  
Karem Chokmani ◽  
Rachid Lhissou ◽  
Saeid Homayuni ◽  
Yves Gauthier ◽  
...  

Abstract. In cold regions, ice-jam events result in severe flooding due to a rapid rise in water levels upstream of the jam. These floods threaten human safety and damage property and infrastructure because ice-jam floods occur suddenly. Hence, ice-jam prediction tools can provide early warnings that increase response time and minimize the possible damages. However, ice-jam prediction has always been a challenging problem, as no analytical method is available for this purpose. Nonetheless, ice jams form when certain hydro-meteorological conditions occur a few hours to a few days before the event, so the ice-jam prediction problem can be treated as binary multivariate time-series classification. Deep learning techniques have been successfully applied to time-series classification in many fields, such as finance, engineering, weather forecasting, and medicine. In this research, we successfully applied CNN, LSTM, and combined CNN-LSTM networks to ice-jam prediction for all the rivers in Quebec. The results show that the CNN-LSTM model yields the best results in validation and generalization, with F1 scores of 0.82 and 0.91, respectively. This demonstrates that CNN and LSTM models are complementary and that combining them further improves classification.
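The reported F1 scores combine precision and recall for the binary event class. A minimal implementation, with hypothetical labels and predictions (1 = ice-jam event):

```python
def f1_score(y_true, y_pred):
    """F1 for binary labels, as used to report the 0.82 / 0.91 scores."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical validation labels and model predictions
y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
score = f1_score(y_true, y_pred)  # precision 2/3, recall 2/3 -> F1 = 2/3
```

F1 is preferred over plain accuracy here because ice-jam events are rare, so a classifier that never predicts an event can still score high accuracy.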


Author(s):  
Nachiketa Chakraborty

With an explosion of data in the near future from observatories spanning radio to gamma-rays, we have entered the era of time-domain astronomy. Historically, this field has been limited to modeling temporal structure with time-series simulations in energy ranges blessed with excellent statistics, as in X-rays. In addition to ever-increasing volumes and variety of astronomical lightcurves, there is a plethora of different types of transients detected not only across the electromagnetic spectrum but also across multiple messengers, such as counterparts of neutrino and gravitational-wave sources. As a result, precise, fast forecasting and modeling of lightcurves or time series will play a crucial role both in understanding the physical processes and in coordinating multiwavelength and multimessenger campaigns. In this regard, deep learning algorithms such as recurrent neural networks (RNNs) should prove extremely powerful for forecasting, as they have in several other domains. Here we test the performance of a very successful class of RNNs, Long Short-Term Memory (LSTM) networks, on simulated lightcurves. We focus on univariate forecasting of the types of lightcurves typically found in active galactic nuclei (AGN) observations. Specifically, we explore the sensitivity of training and test losses to key parameters of the LSTM network and to data characteristics, namely gaps and complexity measured by the number of Fourier components. We find that LSTMs typically perform better on pink or flicker noise type sources. The key parameters on which performance depends are the LSTM batch size and the gap percentage of the lightcurves. A batch size of 10-30 seems optimal, and the best test and train losses are achieved with under 10% missing data, for both periodic and random gaps in pink noise. The performance is far worse for red noise, which compromises the detectability of transients.
The performance also degrades monotonically with data complexity measured by the number of Fourier components, which is especially relevant for complicated quasi-periodic signals buried under noise. Thus, we show that time-series simulations are excellent guides for the use of RNN-LSTMs in forecasting.
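A lightcurve built from a power-law Fourier spectrum, of the kind used to contrast pink and red noise and to control the gap percentage, can be sketched as below. The synthesis recipe and parameter values are illustrative assumptions, not the paper's exact simulator:

```python
import numpy as np

def power_law_lightcurve(n, beta, n_components, seed=0):
    """Sum of Fourier components with amplitudes following a power-law
    spectrum P(f) ~ f^(-beta): beta = 1 gives pink/flicker noise,
    beta = 2 gives red noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    x = np.zeros(n)
    for k in range(1, n_components + 1):
        f = k / n
        amp = f ** (-beta / 2.0)            # amplitude ~ sqrt(power)
        phase = rng.uniform(0, 2 * np.pi)   # random phase per component
        x += amp * np.sin(2 * np.pi * f * t + phase)
    return x

lc = power_law_lightcurve(512, beta=1.0, n_components=20)  # pink noise
# emulate the "gap percentage" studied in the text by masking ~10% of points
mask = np.random.default_rng(1).random(512) < 0.10
lc_gapped = np.where(mask, np.nan, lc)
```

Raising `beta` to 2 concentrates power in the lowest frequencies, which is the red-noise regime where the text reports much worse forecasting performance.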


2019 ◽  
Vol 11 (13) ◽  
pp. 1619 ◽  
Author(s):  
Zhou Ya’nan ◽  
Luo Jiancheng ◽  
Feng Li ◽  
Zhou Xiaocheng

Spatial features retrieved from satellite data play an important role for improving crop classification. In this study, we proposed a deep-learning-based time-series analysis method to extract and organize spatial features to improve parcel-based crop classification using high-resolution optical images and multi-temporal synthetic aperture radar (SAR) data. Central to this method is the use of multiple deep convolutional networks (DCNs) to extract spatial features and to use the long short-term memory (LSTM) network to organize spatial features. First, a precise farmland parcel map was delineated from optical images. Second, hundreds of spatial features were retrieved using multiple DCNs from preprocessed SAR images and overlaid onto the parcel map to construct multivariate time-series of crop growth for parcels. Third, LSTM-based network structures for organizing these time-series features were constructed to produce a final parcel-based classification map. The method was applied to a dataset of high-resolution ZY-3 optical images and multi-temporal Sentinel-1A SAR data to classify crop types in the Hunan Province of China. The classification results, showing an improvement of greater than 5.0% in overall accuracy relative to methods without spatial features, demonstrated the effectiveness of the proposed method in extracting and organizing spatial features for improving parcel-based crop classification.
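The overlay step, turning per-pixel DCN features into one observation per parcel per acquisition date, can be sketched as a mean aggregation. The parcel ids and feature values below are hypothetical:

```python
import numpy as np

# hypothetical setup: each SAR pixel carries a parcel id and a feature vector
parcel_ids = np.array([0, 0, 1, 1, 1])
pixel_feats = np.array([[1.0, 2.0],
                        [3.0, 4.0],
                        [5.0, 6.0],
                        [7.0, 8.0],
                        [9.0, 10.0]])

def aggregate_by_parcel(ids, feats):
    """Mean feature vector per parcel id (returns dict: id -> vector)."""
    return {i: feats[ids == i].mean(axis=0) for i in np.unique(ids)}

parcel_feats = aggregate_by_parcel(parcel_ids, pixel_feats)
# parcel 0 -> [2.0, 3.0], parcel 1 -> [7.0, 8.0]
```

Repeating this for every SAR acquisition date yields the multivariate time series per parcel that the LSTM then classifies.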


2020 ◽  
Vol 12 (10) ◽  
pp. 4107
Author(s):  
Wafa Shafqat ◽  
Yung-Cheol Byun

The significance of contextual data has been recognized by analysts and specialists in numerous disciplines, such as personalization, information retrieval, ubiquitous and mobile computing, data mining, and management. While substantial research has already been performed in the area of recommender systems, the vast majority of existing approaches focus on recommending the most relevant items to users, usually neglecting additional contextual information such as time, location, weather, or the popularity of different places. Therefore, we proposed a deep Long Short-Term Memory (LSTM) based, context-enriched hierarchical model. The proposed model had two levels of hierarchy, each comprising a deep LSTM network with a different task. At the first level, the LSTM learned from the user's travel history and predicted the next-location probabilities. A contextual learning unit operated between these two levels. This unit extracted the maximum possible contexts related to a location, the user, and the environment, such as weather, climate, and risks, and also estimated other effective parameters, such as the popularity of a location. To avoid feature congestion, XGBoost was used to rank feature importance, and features with no importance were discarded. At the second level, another LSTM network learned these contextual features, embedded with the location probabilities, and produced the top-ranked places. The proposed approach achieved an accuracy of 97.2%, followed by the gated recurrent unit (GRU) at 96.4% and the bidirectional LSTM at 94.2%. We also performed experiments to find the optimal size of travel history for effective recommendations.
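The feature-pruning step between the two LSTM levels, discarding features whose ranked importance is zero, can be sketched as below. The feature names and importance scores are hypothetical; a real pipeline would obtain the scores from a trained XGBoost model:

```python
def prune_features(features, importances, threshold=0.0):
    """Keep only features whose importance exceeds the threshold,
    mimicking the step where zero-importance features are discarded."""
    return [f for f in features if importances.get(f, 0.0) > threshold]

# hypothetical contextual features and importance scores
features = ["weather", "popularity", "risk", "season"]
importances = {"weather": 0.41, "popularity": 0.35, "risk": 0.0, "season": 0.24}
kept = prune_features(features, importances)
# "risk" has zero importance and is dropped
```

Only the surviving features, concatenated with the first-level location probabilities, would be fed to the second-level LSTM.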


2019 ◽  
Vol 8 (9) ◽  
pp. 366 ◽  
Author(s):  
Yong Han ◽  
Cheng Wang ◽  
Yibin Ren ◽  
Shukang Wang ◽  
Huangcheng Zheng ◽  
...  

The accurate prediction of bus passenger flow is key to public transport management and the smart city. A long short-term memory (LSTM) network, a deep learning method for modeling sequences, is an efficient way to capture the time dependency of passenger flow. In recent years, an increasing number of researchers have sought to apply the LSTM model to passenger flow prediction. However, few of them pay attention to the optimization procedure during model training. In this article, we propose a hybrid, optimized LSTM network based on Nesterov-accelerated adaptive moment estimation (Nadam) and the stochastic gradient descent (SGD) algorithm. This method trains the model with high efficiency and accuracy, addressing the inefficient training and misconvergence that occur in complex models. We employ the hybrid optimized LSTM network to predict actual passenger flow in Qingdao, China, and compare the prediction results with those obtained by non-hybrid LSTM models and conventional methods. In particular, the proposed model brings about a 4%–20% performance improvement over non-hybrid LSTM models. We also tried combinations of other optimization algorithms and applications in different models, finding that optimizing the LSTM by switching from Nadam to SGD is the best choice. The sensitivity of the model to its parameters is also explored, which provides guidance for applying this model to bus passenger flow data. The good performance of the proposed model at different temporal and spatial scales shows that it is robust and effective, and it can provide insightful support and guidance for dynamic bus scheduling and regional coordination scheduling.
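The hybrid schedule, training with an adaptive, momentum-style optimizer first and then switching to plain SGD, can be sketched on a toy quadratic objective. This is an illustrative stand-in (a plain momentum update in place of Nadam, and a hand-picked switch epoch), not the paper's implementation:

```python
def train_with_switch(grad, w0, lr=0.1, switch_epoch=50, epochs=100):
    """Hybrid schedule: a momentum-style update for fast early progress,
    then plain SGD for stable late convergence."""
    w, v = w0, 0.0
    for epoch in range(epochs):
        g = grad(w)
        if epoch < switch_epoch:
            v = 0.9 * v + lr * g      # momentum phase (stand-in for Nadam)
            w -= v
        else:
            w -= lr * g               # plain SGD phase
    return w

# minimize f(w) = (w - 3)^2, whose gradient is 2 (w - 3)
w_final = train_with_switch(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

The rationale matches the text: the adaptive phase escapes the slow early plateau quickly, while the SGD phase avoids the misconvergence adaptive methods can exhibit near a minimum.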


Author(s):  
Sawsan Morkos Gharghory

An enhanced recurrent neural network architecture based on Long Short-Term Memory (LSTM) is suggested in this paper for predicting the microclimate inside a greenhouse from its time-series data. The microclimate inside the greenhouse is largely affected by external weather variations and has a great impact on the greenhouse crops and their production. Therefore, it is of great importance to predict the greenhouse microclimate as a preceding stage for the accurate design of a control system that can fulfill the requirements of a suitable environment for the plants and for crop management. The LSTM network is trained and tested on temperature and relative-humidity data measured inside the greenhouse, generated with a mathematical greenhouse model driven by outside weather data over 27 days. To evaluate the prediction accuracy of the suggested LSTM network, different measures, such as Root Mean Square Error (RMSE) and Mean Absolute Error (MAE), are calculated and compared to those of conventional networks in the literature. The simulation results of the LSTM network for forecasting the temperature and relative humidity inside the greenhouse outperform those of the traditional methods. The prediction results for temperature and humidity are approximately 0.16 and 0.62 in terms of RMSE, and 0.11 and 0.4 in terms of MAE, respectively.
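The two reported error measures are standard. A minimal sketch, using hypothetical measured and predicted greenhouse temperatures rather than the paper's data:

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# hypothetical measured vs. predicted greenhouse temperatures (deg C)
measured  = [22.1, 22.4, 23.0, 23.6]
predicted = [22.0, 22.6, 22.9, 23.8]
err_rmse = rmse(measured, predicted)   # ~0.158
err_mae  = mae(measured, predicted)    # ~0.15
```

RMSE penalizes large errors more heavily than MAE, which is why it is the larger of the two whenever the errors are unequal.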


2021 ◽  
Vol 7 ◽  
pp. e534
Author(s):  
Kristoko Dwi Hartomo ◽  
Yessica Nataliani

This paper aims to propose a new model for time-series forecasting that combines forecasting with a clustering algorithm. It introduces a new scheme that improves forecasting results by grouping the time-series data using the k-means clustering algorithm and utilizing the clustering result to obtain the forecast. Since user-defined parameters usually affect the forecasting results, a learning-based procedure is proposed to estimate the parameters used for forecasting; these parameter values are computed within the algorithm simultaneously. Experiments comparing the proposed model with other forecasting algorithms demonstrate good results: it achieves the smallest mean squared error, 13,007.91, and an average improvement rate of 19.83%.
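The overall scheme, assigning a series to its nearest cluster and learning an otherwise user-defined parameter from the data, can be sketched as below. The blend-weight parameter, the cluster prototypes, and the grid search are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def cluster_forecast(history, centroids, alpha=None):
    """Forecast the next value of `history` with help from clustering:
    assign the series to its nearest centroid, then extend the last value
    by the centroid's next step, scaled by a blend weight alpha. If alpha
    is None, it is learned by grid search on in-sample one-step error."""
    n = len(history)
    dists = np.linalg.norm(centroids[:, :n] - history, axis=1)
    c = centroids[dists.argmin()]          # nearest cluster prototype
    trend = c[n] - c[n - 1]                # centroid's next step
    if alpha is None:
        steps = np.diff(c[:n])
        def mse(a):
            preds = history[:-1] + a * steps
            return float(np.mean((preds - history[1:]) ** 2))
        alpha = min(np.linspace(0.0, 1.0, 11), key=mse)
    return float(history[-1] + alpha * trend)

# hypothetical cluster prototypes (rising and falling) and an observed series
centroids = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                      [5.0, 4.0, 3.0, 2.0, 1.0]])
history = np.array([1.1, 2.0, 3.1, 3.9])
forecast = cluster_forecast(history, centroids)  # continues the rise, near 4.8
```

The point of the learning step is that alpha never has to be set by the user: it is estimated from the history at forecast time, mirroring the paper's simultaneous parameter computation.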

