A Deep Learning BiLSTM Encoding-Decoding Model for COVID-19 Pandemic Spread Forecasting

2021 ◽  
Vol 5 (4) ◽  
pp. 175
Author(s):  
Ahmed I. Shahin ◽  
Sultan Almotairi

The COVID-19 pandemic has spread widely, with increasing infection rates, through more than 200 countries. The world's governments need to record confirmed, recovered, and death cases for the present state and to predict future cases. With reliable case predictions, governments can impose opening and closing procedures that save human lives by slowing the pandemic's progression. Several forecasting models for pandemic time series exist, based on statistical processing and machine learning algorithms, and deep learning has proven to be an excellent tool for time series forecasting problems. This paper proposes a deep learning time-series prediction model to forecast confirmed, recovered, and death cases. Our proposed network is based on an encoding–decoding deep learning network, and we optimize the selection of its hyper-parameters. The proposed forecasting model was first applied in Saudi Arabia and then to other countries; our study covers two categories of countries that witnessed different spread waves this year. During our experiments, we compared our proposed model with fourteen other time-series forecasting models (fifteen models in total): three statistical models, three deep learning models, seven machine learning models, and the Prophet model. The accuracy of our proposed forecasting model was assessed using several statistical evaluation criteria: it achieved the lowest error values and the highest R-squared value, 0.99. Our proposed model may help policymakers improve control of the pandemic's spread, and our method can be generalized to other time series forecasting tasks.
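
A minimal sketch of an encoder-decoder recurrent network of this kind, written in Keras. The bidirectional encoder, layer widths, 30-day input window, and 7-day horizon are illustrative assumptions, not the authors' published configuration.

```python
# Sketch: encoder-decoder BiLSTM for multi-step case forecasting (Keras).
# Window length, horizon, and layer widths are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, HORIZON, N_SERIES = 30, 7, 3  # 30-day input; 7-day forecast of confirmed/recovered/death

def build_model():
    inputs = keras.Input(shape=(WINDOW, N_SERIES))
    # Encoder: a bidirectional LSTM compresses the input window into a context vector.
    context = layers.Bidirectional(layers.LSTM(64))(inputs)
    # Decoder: repeat the context for each forecast step, then unroll an LSTM over it.
    x = layers.RepeatVector(HORIZON)(context)
    x = layers.LSTM(64, return_sequences=True)(x)
    outputs = layers.TimeDistributed(layers.Dense(N_SERIES))(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
model.summary()
```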

Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 455 ◽  
Author(s):  
Hongjun Guan ◽  
Zongli Dai ◽  
Shuang Guan ◽  
Aiwu Zhao

In time series forecasting, information presentation directly affects prediction efficiency. Most existing time series forecasting models follow logical rules derived from the relationships between neighboring states, without considering the inconsistency of fluctuations over a related period. In this paper, we propose a new perspective on the prediction problem, in which inconsistency is quantified and regarded as a key characteristic of prediction rules. First, a time series is converted into a fluctuation time series by comparing each data point with the corresponding previous one. Then, the upward trend of each fluctuation is mapped to the truth-membership of a neutrosophic set, while a falsity-membership is used for the downward trend. The information entropy of the high-order fluctuation time series is introduced to describe the inconsistency of historical fluctuations and is mapped to the indeterminacy-membership of the neutrosophic set. Finally, an existing similarity measure for neutrosophic sets is used to find similar states during the forecasting stage, and a weighted arithmetic averaging (WAA) aggregation operator is applied to obtain the forecasting result according to the corresponding similarity. Compared to existing forecasting models, the neutrosophic forecasting model based on information entropy (NFM-IE) can represent both fluctuation trend and fluctuation consistency information. To test its performance, we used the proposed model to forecast several real-world time series: the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), the Shanghai Stock Exchange Composite Index (SHSECI), and the Hang Seng Index (HSI). The experimental results show that the proposed model predicts stably across different datasets, and comparison of its prediction errors with those of other approaches shows that the model has outstanding prediction accuracy and universality.
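
A hedged sketch of the membership construction the abstract describes: fluctuations are mapped to truth and falsity degrees, and the entropy of recent fluctuation directions becomes the indeterminacy. The logistic scaling and the order-3 window are assumptions for illustration, not the paper's exact formulas.

```python
# Sketch: map a series to neutrosophic (truth, indeterminacy, falsity)
# memberships as the abstract describes. Logistic scaling and the
# order-3 window are illustrative assumptions.
import numpy as np

def neutrosophic_memberships(series, order=3):
    fluct = np.diff(series)                      # fluctuation time series
    scale = np.mean(np.abs(fluct)) + 1e-12       # normalizer (assumption)
    truth = 1 / (1 + np.exp(-fluct / scale))     # upward trend -> truth-membership
    falsity = 1 - truth                          # downward trend -> falsity-membership
    indet = np.zeros_like(fluct)
    for t in range(order, len(fluct)):
        window = np.sign(fluct[t - order:t])     # high-order fluctuation directions
        _, counts = np.unique(window, return_counts=True)
        p = counts / counts.sum()
        entropy = -(p * np.log2(p)).sum()        # inconsistency of recent fluctuations
        indet[t] = entropy / np.log2(3)          # normalize to [0, 1] (3 possible signs)
    return truth, indet, falsity
```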


2021 ◽  
Vol 13 (2) ◽  
pp. 744
Author(s):  
Elsa Chaerun Nisa ◽  
Yean-Der Kuan

Over the last few decades, total energy consumption has increased while energy resources have remained limited, so energy demand management is crucial. To address this problem, the prediction and forecasting of water-cooled chiller power consumption using machine learning and deep learning are presented. The prediction models adopted are a thermodynamic model and a multi-layer perceptron (MLP), while the time-series forecasting models adopted are an MLP, a one-dimensional convolutional neural network (1D-CNN), and a long short-term memory (LSTM) network. The models in each group are compared, and the best model in each group is selected for implementation. The data were collected every minute from an academic building at one of the universities in Taiwan. The experimental results demonstrate that the best prediction model is the MLP, with a coefficient of determination (R2) of 0.971, a mean absolute error (MAE) of 0.743 kW, and a root mean square error (RMSE) of 1.157 kW. The time-series forecasting model was retrained every day for three consecutive days on new data to forecast the next minute of power consumption. The best time-series forecasting model is the LSTM, with an R2 of 0.994, an MAE of 0.233 kW, and an RMSE of 1.415 kW. Both the selected MLP and LSTM models produced predicted and forecast values very close to the actual values.
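
For reference, the three reported criteria can be computed as follows; this is a generic scikit-learn sketch with placeholder arrays, not the paper's measurements.

```python
# Sketch: the three evaluation criteria reported above, via scikit-learn.
# y_true / y_pred are placeholder arrays, not the paper's data.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([52.1, 48.7, 50.3, 49.9])   # actual chiller power (kW), illustrative
y_pred = np.array([51.6, 49.5, 50.0, 50.4])   # model output (kW), illustrative

mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print(f"MAE={mae:.3f} kW  RMSE={rmse:.3f} kW  R2={r2:.3f}")
```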


2019 ◽  
pp. 016555151987764
Author(s):  
Ping Wang ◽  
Xiaodan Li ◽  
Renli Wu

Wikipedia is becoming increasingly critical in helping people obtain information and knowledge. Its leading advantage is that users can not only access information but also modify it. However, this presents a challenging issue: how can we measure the quality of a Wikipedia article? Existing approaches assess Wikipedia quality with statistical models or traditional machine learning algorithms, but their performance is not satisfactory; moreover, most fail to extract complete information from articles, which further degrades performance. In this article, we first survey related works and summarise a comprehensive feature framework. Then, state-of-the-art deep learning models are introduced and applied to assess Wikipedia quality. Finally, a comparison between the deep learning models and traditional machine learning models is conducted to validate the effectiveness of the proposed approach. The models are compared extensively in terms of their training and classification performance. Moreover, the importance of each feature and of different feature sets is analysed separately.
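
As one concrete instance of the traditional-ML side of such a comparison, a hedged scikit-learn baseline might look like this; the feature choice, quality labels, and corpus below are placeholders, not the article's setup.

```python
# Sketch: a traditional-ML baseline for article quality classification,
# of the kind the deep models are compared against. Features, labels,
# and corpus are illustrative placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

articles = ["...article wikitext...", "...another article..."]  # placeholder corpus
labels = ["Featured", "Stub"]                                    # placeholder quality classes

baseline = make_pipeline(
    TfidfVectorizer(max_features=20000, ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
baseline.fit(articles, labels)
print(baseline.predict(["a brand-new article to grade"]))
```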


2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

During the last few years, RNN models have been used extensively and have proven effective for sequence and text data. RNNs have achieved state-of-the-art performance in several applications, such as text classification, sequence-to-sequence modelling, and time series forecasting. In this article we review different machine learning and deep learning approaches for text data and examine the results obtained with these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application: sentiment analysis.
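
A minimal example of transfer learning for sentiment analysis, using a pretrained transformer through the Hugging Face pipeline API; the default checkpoint it downloads is an assumption, and any fine-tuned sentiment model would serve.

```python
# Sketch: transfer learning for sentiment analysis via a pretrained
# transformer, as one concrete instance of transfer learning in NLP.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model
print(classifier("The new release fixed every bug I cared about."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```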


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Kinshuk Sengupta ◽  
Praveen Ranjan Srivastava

Abstract Background In medical diagnosis and clinical practice, diagnosing a disease early is crucial for accurate treatment and for lessening the stress on the healthcare system. In medical imaging research, image processing techniques are vital for analyzing and resolving diseases with a high degree of accuracy. This paper establishes a new image classification and segmentation method through simulation techniques, conducted over images of COVID-19 patients in India, introducing the use of Quantum Machine Learning (QML) in medical practice. Methods This study establishes a prototype model for classifying COVID-19, distinguishing it from non-COVID pneumonia signals in computed tomography (CT) images. The simulation work evaluates the use of quantum machine learning algorithms while assessing the efficacy of deep learning models for image classification problems, and thereby establishes the performance quality required for an improved prediction rate when dealing with complex clinical image data exhibiting high biases. Results The study considers a novel algorithmic implementation leveraging a quantum neural network (QNN). The proposed model outperformed conventional deep learning models on the specific classification task. The gain is attributed to the efficiency of quantum simulation and the faster convergence of the network-training optimization, particularly for the large-scale, biased image classification task. The observed model run-time was 52 min on quantum-optimized hardware, versus 1 h 30 min on K80 GPU hardware for a similar sample size. The simulation shows that the QNN outperforms DNN, CNN, and 2D-CNN models by more than 2.92% in accuracy, with an average recall of around 97.7%. Conclusion The results suggest that quantum neural networks outperform deep learning models in the COVID-19 trait classification task with respect to model efficacy and training time. However, further study is needed to evaluate implementation scenarios by integrating the model within medical devices.
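
For orientation, a small variational circuit of the kind a QNN classifier builds on can be sketched with PennyLane; the qubit count, embedding, layer template, and readout below are illustrative assumptions, not the authors' architecture.

```python
# Sketch: a minimal variational quantum circuit for binary scoring,
# written with PennyLane. Architecture choices are assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    qml.AngleEmbedding(features, wires=range(n_qubits))           # encode image features
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))  # trainable layers
    return qml.expval(qml.PauliZ(0))                              # read out a class score

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.random(shape)                # trainable by PennyLane's autograd
features = np.array([0.1, 0.5, 0.2, 0.9])        # placeholder CT-derived features
print(circuit(weights, features))
```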


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, driven by relatively recent developments in big data and deep learning and by increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To provide a review of the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables. Results The specificity of the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. In the comparison between radiology professionals and deep learning algorithms, the pooled specificity and sensitivity were 91% and 81% for the deep learning models and 85% and 73% for the radiology professionals (p < 0.000), respectively. The pooled detection sensitivity was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Radiomic information extracted through machine learning programs from images may not be discernible through visual examination and thus may improve the prognostic and diagnostic value of data sets.
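
A hedged sketch of count-based pooling of sensitivity and specificity across studies' 2x2 tables; real meta-analyses typically use bivariate random-effects models rather than this simple fixed-effect pooling, and the tables below are placeholders, not the review's data.

```python
# Sketch: pooling sensitivity and specificity across studies from their
# 2x2 contingency tables by summing counts. Tables are placeholders.
import numpy as np

# Each row: (TP, FN, TN, FP) from one study's contingency table.
tables = np.array([
    [80, 12, 150, 20],
    [45,  5,  90, 15],
    [60, 10, 120, 18],
])

tp, fn, tn, fp = tables.sum(axis=0)
sensitivity = tp / (tp + fn)   # pooled true-positive rate
specificity = tn / (tn + fp)   # pooled true-negative rate
print(f"pooled sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```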


2021 ◽  
Vol 13 (3) ◽  
pp. 67
Author(s):  
Eric Hitimana ◽  
Gaurav Bajpai ◽  
Richard Musabe ◽  
Louis Sibomana ◽  
Jayavel Kayalvizhi

Many countries worldwide face challenges in enforcing fire-incident prevention measures in buildings. The most critical issues are the localization, identification, and detection of room occupants. The Internet of Things (IoT), together with machine learning, has been shown to increase the smartness of buildings by providing real-time data acquisition through sensors and actuators for prediction mechanisms. This paper proposes the implementation of an IoT framework that captures indoor environmental parameters as multivariate time-series data for occupancy analysis. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment is conducted in an office room using the multivariate time series as predictors in a regression forecasting problem. The results obtained demonstrate that the developed system can obtain, process, and store environmental information. The information collected was applied to the LSTM algorithm and compared with other machine learning algorithms: a Support Vector Machine, a Naïve Bayes Network, and a Multilayer Perceptron Feed-Forward Network. The outcomes, based on the parametric calibrations, demonstrate that the LSTM performs better in the context of the proposed application.
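
A minimal sketch of the data preparation such a system needs: turning minute-by-minute multivariate sensor logs into supervised windows for an LSTM regressor. The window length and target column are assumptions for illustration.

```python
# Sketch: windowing multivariate sensor readings (e.g., CO2, temperature,
# humidity) into supervised samples for an LSTM occupancy regressor.
import numpy as np

def make_windows(readings, window=60):
    """readings: (T, n_sensors) array sampled once per minute."""
    X, y = [], []
    for t in range(window, len(readings)):
        X.append(readings[t - window:t])   # past hour of all sensors
        y.append(readings[t, 0])           # next-minute target (assumed column)
    return np.stack(X), np.array(y)

readings = np.random.rand(500, 3)          # placeholder sensor log
X, y = make_windows(readings)
print(X.shape, y.shape)                    # (440, 60, 3) (440,)
```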


Author(s):  
Nghiem Van Tinh

Over the past 25 years, numerous fuzzy time series forecasting models have been proposed to deal with complex and uncertain problems. The main factors that affect the forecasting results of these models are the partitioning of the universe of discourse, the creation of fuzzy relationship groups, and the defuzzification of forecast output values. This study therefore presents a hybrid fuzzy time series forecasting model combining particle swarm optimization (PSO) and fuzzy C-means (FCM) clustering to address these issues. FCM clustering is used to divide the historical data into initial intervals of unequal size. After the intervals are generated, the historical data are fuzzified into fuzzy sets in order to establish fuzzy relationship groups in chronological order. The information obtained from the fuzzy relationship groups is then used to calculate forecast values with a new defuzzification technique. In addition, to enhance forecasting accuracy, the PSO algorithm is used to find optimal interval lengths in the universe of discourse. The proposed model is applied to forecast three well-known datasets (enrolments at the University of Alabama, the Taiwan Futures Exchange (TAIFEX) data, and yearly deaths in car road accidents in Belgium). These datasets are also examined using other forecasting models available in the literature, and the forecasting results of the proposed model are compared with those produced by the other models. The proposed model is observed to achieve higher forecasting accuracy than its counterparts for both first-order and high-order fuzzy logical relationships.
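
A plain-numpy sketch of the FCM partitioning step: cluster the historical observations, then take midpoints between sorted centers as interval boundaries of unequal width. The cluster count and fuzzifier m are illustrative assumptions, and the sample values are enrolment-style figures, not a claim about the paper's exact data handling.

```python
# Sketch: fuzzy C-means over historical observations to derive
# unequal-width intervals for the universe of discourse.
import numpy as np

def fcm_1d(x, c=7, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))                   # membership degrees
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)  # update cluster centers
    return np.sort(centers)

data = np.array([13055, 13563, 13867, 14696, 15460, 15311, 15603,
                 15861, 16807, 16919, 16388, 15433, 15497], dtype=float)
centers = fcm_1d(data)
bounds = (centers[:-1] + centers[1:]) / 2  # interval boundaries between sorted centers
print(centers, bounds)
```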


2019 ◽  
Vol 175 ◽  
pp. 72-86 ◽  
Author(s):  
Domingos S. de O. Santos Júnior ◽  
João F.L. de Oliveira ◽  
Paulo S.G. de Mattos Neto

2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yanpeng Zhang ◽  
Hua Qu ◽  
Weipeng Wang ◽  
Jihong Zhao

Time series forecasting models based on a linear relationship show great performance. However, because the process of fuzzification is abandoned, these models cannot handle data that are incomplete, imprecise, or ambiguous, as interval-based fuzzy time series models can. This article proposes a novel fuzzy time series forecasting model based on multiple linear regression and time series clustering for forecasting market prices. The proposed model employs a preprocessing step that transforms the set of fuzzy high-order time series into a set of high-order time series using the synthetic minority oversampling technique (SMOTE). A high-order time series clustering algorithm based on the multiple linear regression model is then proposed to cluster the fuzzy time series dataset and to build a linear regression model for each cluster. Forecasts are made by calculating the weighted sum of the linear regression models' results. A learning algorithm is also proposed to train the whole model; it applies an artificial neural network to learn the weights of the linear models. The interval-based fuzzification ensures the capability to deal with uncertainties, while the linear models and the artificial neural network enable the proposed model to learn both linear and nonlinear characteristics. The experimental results show that the proposed model improves the average forecasting accuracy rate and is more suitable for dealing with these uncertainties.
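
A hedged sketch of the core forecasting step: per-cluster linear models combined by a weighted sum. KMeans stands in for the paper's high-order clustering, and plain least squares stands in for the ANN-learned weights; the price series is a placeholder.

```python
# Sketch: per-cluster linear regressions combined by a weighted sum.
# KMeans and least-squares weights are stand-ins for the paper's
# clustering and ANN weight learner.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

order = 3
prices = np.cumsum(np.random.randn(300)) + 100          # placeholder market prices
X = np.stack([prices[i:i + order] for i in range(len(prices) - order)])
y = prices[order:]

k = 4
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
models = [LinearRegression().fit(X[clusters.labels_ == j], y[clusters.labels_ == j])
          for j in range(k)]

preds = np.column_stack([m.predict(X) for m in models])  # each cluster model's forecast
weights, *_ = np.linalg.lstsq(preds, y, rcond=None)      # stand-in for ANN-learned weights
forecast = preds @ weights                               # weighted sum of linear models
print(f"in-sample RMSE: {np.sqrt(np.mean((forecast - y) ** 2)):.3f}")
```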

