Impact of coronavirus disease 2019 on electricity demand and the unit commitment problem: a long–short-term memory-based machine learning approach

2021 ◽  
pp. 1-18
Author(s):  
Marwen Elkamel ◽  
Ali Ahmadian ◽  
Qipeng P. Zheng
Teknika ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 62-67
Author(s):  
Faisal Dharma Adhinata ◽  
Diovianto Putra Rakhmadani

The pandemic has affected many sectors in Indonesia, especially the economy, because of the large-scale social restriction policy imposed to suppress the growth of cases. The daily growth of Covid-19 cases in Indonesia still fluctuates and cannot yet be fully understood. Researchers have recently developed predictions of Covid-19 cases in various countries, one approach being machine learning techniques that predict the daily increase in Covid-19 cases. However, these machine learning techniques produce MSE values in the thousands, a high figure indicating that the predictions still deviate considerably from the actual data. In this study, we propose a deep learning approach using the Long Short-Term Memory (LSTM) method to build a prediction model for the daily increase in Covid-19 cases. The LSTM architecture in this study consists of an LSTM layer, a Dropout layer, and a Dense layer with a linear activation function. Based on various hyperparameter experiments, with 10 neurons, a batch size of 32, and 50 epochs, the model achieved an MSE of 0.0308, an RMSE of 0.1758, and an MAE of 0.13. These results show that the deep learning approach produces smaller error values than the machine learning techniques, much closer to zero.
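
To make the described setup concrete, the following is a minimal Keras sketch of the architecture named in the abstract (an LSTM layer with 10 units, a Dropout layer, and a Dense output with a linear activation, trained with a batch size of 32 for 50 epochs). The 7-day look-back window, the dropout rate, and the placeholder series are assumptions for illustration; the paper's actual preprocessing and data are not reproduced here.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# Assumed look-back window of 7 days and a placeholder for the scaled daily-case series.
WINDOW = 7
series = np.random.rand(300).astype("float32")

# Build sliding windows: each sample is the previous WINDOW days, the target is the next day.
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

model = Sequential([
    layers.LSTM(10, input_shape=(WINDOW, 1)),   # 10 neurons, as in the abstract
    layers.Dropout(0.2),                        # dropout rate is an assumption
    layers.Dense(1, activation="linear"),       # linear activation for regression
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, batch_size=32, epochs=50, verbose=0)
```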


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3678
Author(s):  
Dongwon Lee ◽  
Minji Choi ◽  
Joohyun Lee

In this paper, we propose a prediction algorithm that combines a Long Short-Term Memory (LSTM) network with an attention model to predict the vision coordinates of users watching 360-degree videos in a Virtual Reality (VR) or Augmented Reality (AR) system. Predicting the vision coordinates during video streaming is important when the network condition degrades. Traditional prediction models such as the Moving Average (MA) and Autoregressive Moving Average (ARMA) are linear and therefore cannot capture nonlinear relationships, so deep-learning-based models are increasingly used for nonlinear prediction. We use the LSTM and Gated Recurrent Unit (GRU) methods, both of which originate from Recurrent Neural Networks (RNNs), to predict head position in 360-degree videos, and we add an attention model on top of the LSTM to obtain more accurate results. We compare the performance of the proposed model with other machine learning models, such as the Multi-Layer Perceptron (MLP) and a plain RNN, using the root mean squared error (RMSE) between predicted and real coordinates, and demonstrate that our model predicts the vision coordinates more accurately than the other models across various videos.
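
As a hedged illustration of the attention-over-LSTM idea, the sketch below pools the LSTM hidden-state sequence with additive attention weights before regressing the next head coordinate. The window length, hidden size, and two-dimensional (yaw, pitch) output are assumptions for illustration, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed setup: the last 30 head positions (yaw, pitch) predict the next position.
WINDOW, DIM = 30, 2

inputs = layers.Input(shape=(WINDOW, DIM))
# Return the full hidden-state sequence so attention can weight every time step.
hidden = layers.LSTM(64, return_sequences=True)(inputs)

# Additive attention pooling: score each time step, normalize, and form a context vector.
scores = layers.Dense(1)(hidden)                         # (batch, WINDOW, 1)
weights = layers.Softmax(axis=1)(scores)                 # attention weights over time
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1)         # weighted sum -> (batch, 64)
)([hidden, weights])

outputs = layers.Dense(DIM)(context)                     # predicted (yaw, pitch)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
```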


2020 ◽  
Vol 27 (3) ◽  
pp. 373-389 ◽  
Author(s):  
Ashesh Chattopadhyay ◽  
Pedram Hassanzadeh ◽  
Devika Subramanian

Abstract. In this paper, the performance of three machine-learning methods for predicting short-term evolution and for reproducing the long-term statistics of a multiscale spatiotemporal Lorenz 96 system is examined. The methods are an echo state network (ESN, which is a type of reservoir computing; hereafter RC–ESN), a deep feed-forward artificial neural network (ANN), and a recurrent neural network (RNN) with long short-term memory (LSTM; hereafter RNN–LSTM). This Lorenz 96 system has three tiers of nonlinearly interacting variables representing slow/large-scale (X), intermediate (Y), and fast/small-scale (Z) processes. For training or testing, only X is available; Y and Z are never known or used. We show that RC–ESN substantially outperforms ANN and RNN–LSTM for short-term predictions, e.g., accurately forecasting the chaotic trajectories for hundreds of the numerical solver's time steps, equivalent to several Lyapunov timescales. The RNN–LSTM outperforms the ANN, and both methods show some prediction skill. Furthermore, even after losing the trajectory, data predicted by RC–ESN and RNN–LSTM have probability density functions (pdf's) that closely match the true pdf, even at the tails. The pdf of the data predicted using ANN, however, deviates from the true pdf. Implications, caveats, and applications to data-driven and data-assisted surrogate modeling of complex nonlinear dynamical systems, such as weather and climate, are discussed.
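
For readers unfamiliar with reservoir computing, a minimal NumPy sketch of an echo state network of the RC–ESN type is given below: a fixed random reservoir is driven by the input, and only a linear readout is trained by ridge regression. The reservoir size, spectral radius, regularization, and placeholder input series are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, spectral_radius, ridge = 8, 500, 0.9, 1e-6

# Fixed random input and reservoir weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))   # rescale to the target spectral radius

def run_reservoir(X):
    """Drive the reservoir with inputs X of shape (T, n_in) and collect states (T, n_res)."""
    r = np.zeros(n_res)
    states = np.empty((len(X), n_res))
    for t, x in enumerate(X):
        r = np.tanh(W @ r + W_in @ x)
        states[t] = r
    return states

# Placeholder series standing in for the slow variables X; train a one-step-ahead readout.
X = rng.standard_normal((2000, n_in))
R = run_reservoir(X[:-1])
Y = X[1:]
W_out = np.linalg.solve(R.T @ R + ridge * np.eye(n_res), R.T @ Y)  # ridge regression
pred = R @ W_out                                                   # one-step-ahead predictions
```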


2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Mamunur Rashid ◽  
Minarul Islam ◽  
Norizam Sulaiman ◽  
Bifta Sama Bari ◽  
Ripon Kumar Saha ◽  
...  

2021 ◽  
Vol 1 (1) ◽  
pp. 199-218
Author(s):  
Mostofa Ahsan ◽  
Rahul Gomes ◽  
Md. Minhaz Chowdhury ◽  
Kendall E. Nygard

Machine learning algorithms are becoming very efficient in intrusion detection systems thanks to their real-time response and adaptive learning process. A robust machine learning model can be deployed for anomaly detection by using a comprehensive dataset with multiple attack types. Modern datasets, however, contain many attributes, and such high dimensionality poses a significant challenge to information extraction in terms of time and space complexity. Moreover, having so many attributes can hinder the creation of a decision boundary because of noise in the dataset. Large-scale data with redundant or insignificant features increases computational time and often decreases goodness of fit, which is a critical issue in cybersecurity. In this research, we propose and implement an efficient feature selection algorithm to filter out insignificant variables. Our proposed Dynamic Feature Selector (DFS) uses statistical analysis and feature importance tests to reduce model complexity and improve prediction accuracy. To evaluate DFS, we conducted experiments on two datasets used for cybersecurity research, namely Network Security Laboratory (NSL-KDD) and University of New South Wales (UNSW-NB15). In the meta-learning stage, four algorithms were compared for accuracy estimation, namely Bidirectional Long Short-Term Memory (Bi-LSTM), Gated Recurrent Units, Random Forest, and a proposed combined Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM). For NSL-KDD, experiments revealed an increase in accuracy from 99.54% to 99.64% while reducing the number of one-hot encoded features from 123 to 50. For UNSW-NB15, we observed an increase in accuracy from 90.98% to 92.46% while reducing the feature size from 196 to 47. The proposed approach is thus able to achieve higher accuracy while significantly lowering the number of features required for processing.
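
The DFS implementation itself is not reproduced here; the sketch below only illustrates the general two-stage idea of combining a statistical filter with a feature-importance test, using scikit-learn and placeholder data. The feature counts and thresholds are assumptions chosen to mirror the 123-to-50 reduction reported for NSL-KDD.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Generic sketch of importance-based feature filtering (not the authors' DFS):
# rank one-hot encoded features with a statistical test, then keep only those
# a tree ensemble also finds important.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 123)).astype(float)   # placeholder one-hot features
y = rng.integers(0, 2, size=5000)                        # placeholder attack/benign labels

# Stage 1: statistical filter (mutual information) keeps the top candidates.
filt = SelectKBest(mutual_info_classif, k=80).fit(X, y)
X_stat = filt.transform(X)

# Stage 2: feature-importance test with a random forest prunes the rest.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_stat, y)
keep = np.argsort(rf.feature_importances_)[::-1][:50]    # e.g. keep 50 features
X_reduced = X_stat[:, keep]
print(X_reduced.shape)   # (5000, 50): ready for the Bi-LSTM / CNN-LSTM stage
```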

