Electric Load Forecasting with Deep Machine Learning

2019 ◽  
Vol 16 (8) ◽  
pp. 3404-3409
Author(s):  
Ala Adin Baha Eldin Mustafa Abdelaziz ◽  
Ka Fei Thang ◽  
Jacqueline Lukose

Electrical energy is the most commonly used form of energy in houses, factories, buildings, and agriculture. In recent years, demand for electrical energy has risen due to technological advancements and population growth, so an appropriate forecasting system must be developed to predict this demand as accurately as possible. For this purpose, five models were selected: Bidirectional Long Short-Term Memory (Bi-LSTM), Feed-Forward Neural Network (FFNN), Long Short-Term Memory (LSTM), Nonlinear Auto-Regressive network with eXogenous inputs (NARX), and Multiple Linear Regression (MLR). This paper demonstrates the development of these models in MATLAB, together with an Android mobile application used to visualize and interact with the data. The performance of the selected models was evaluated using the Mean Absolute Percentage Error (MAPE). The historical data used for this evaluation were obtained from Toronto, Canada and Tasmania, Australia; the years 2006 to 2016 were used for training, and the year 2017 was used to test the models' predictions against the historical data. The NARX model had the lowest MAPE for both regions: 1.9% for Toronto, Canada and 2.9% for Tasmania, Australia. Google Cloud is used as the IoT (Internet of Things) platform for the NARX data model; the 2017 datasets are converted to JavaScript Object Notation (JSON) files using JavaScript for data visualization and analysis in the Android mobile application.
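
For reference, the MAPE metric used to compare the five models can be computed as in the minimal Python sketch below; the function name and the toy load values are illustrative only, not taken from the paper.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

# Illustrative values only (not the paper's data): hourly loads in MW.
actual_load = [410.0, 395.5, 388.2, 402.7]
predicted_load = [405.2, 400.1, 384.9, 410.3]
print(f"MAPE: {mape(actual_load, predicted_load):.2f}%")
```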

Author(s):  
Tao Gui ◽  
Qi Zhang ◽  
Lujun Zhao ◽  
Yaosong Lin ◽  
Minlong Peng ◽  
...  

In recent years, long short-term memory (LSTM) has been successfully used to model sequential data of variable length. However, LSTM can still experience difficulty in capturing long-term dependencies. In this work, we tried to alleviate this problem by introducing a dynamic skip connection, which can learn to directly connect two dependent words. Since there is no dependency information in the training data, we propose a novel reinforcement learning-based method to model the dependency relationship and connect dependent words. The proposed model computes the recurrent transition functions based on the skip connections, which provides a dynamic skipping advantage over RNNs that always tackle entire sentences sequentially. Our experimental results on three natural language processing tasks demonstrate that the proposed method can achieve better performance than existing methods. In the number prediction experiment, the proposed model outperformed LSTM with respect to accuracy by nearly 20%.
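
A rough sketch of the dynamic-skip idea is shown below (PyTorch, not the authors' implementation): at each step a small policy network chooses how far back to skip, and the chosen hidden state replaces h_{t-1} as input to the LSTM cell. In the paper the skip distance is sampled and the policy is trained with reinforcement learning; here it is chosen greedily purely for illustration.

```python
import torch
import torch.nn as nn

class DynamicSkipLSTM(nn.Module):
    """Illustrative sketch: an agent picks a skip distance at each step and the
    skipped-to hidden state is fed to the LSTM cell instead of h_{t-1}."""
    def __init__(self, input_size, hidden_size, max_skip=5):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        # Policy scoring skip distances 1..max_skip (REINFORCE would train this).
        self.policy = nn.Linear(hidden_size, max_skip)

    def forward(self, x):                       # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        h = torch.zeros(batch, self.cell.hidden_size)
        c = torch.zeros(batch, self.cell.hidden_size)
        history = [h]                           # hidden states seen so far
        for t in range(seq_len):
            # Greedy skip choice for simplicity; the paper samples and trains it.
            skip = self.policy(h).argmax(dim=-1) + 1            # (batch,)
            h_skip = torch.stack([
                history[max(len(history) - int(skip[b]), 0)][b]
                for b in range(batch)
            ])                                                   # per-example state
            h, c = self.cell(x[t], (h_skip, c))
            history.append(h)
        return h
```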


2020 ◽  
Vol 35 (4) ◽  
pp. 1203-1220 ◽  
Author(s):  
Qidong Yang ◽  
Chia-Ying Lee ◽  
Michael K. Tippett

Rapid intensification (RI) is an outstanding source of error in tropical cyclone (TC) intensity predictions. RI is generally defined as a 24-h increase in TC maximum sustained surface wind speed greater than some threshold, typically 25, 30, or 35 kt (1 kt ≈ 0.51 m s−1). Here, a long short-term memory (LSTM) model for probabilistic RI predictions is developed and evaluated. The variables (features) of the model include storm characteristics (e.g., storm intensity) and environmental variables (e.g., vertical shear) over the previous 48 h. A basin-aware RI prediction model is trained (1981–2009), validated (2010–13), and tested (2014–17) on global data. Models are trained on overlapping 48-h data, which allows multiple training examples for each storm. A challenge is that the data are highly unbalanced in the sense that there are many more non-RI cases than RI cases. To cope with this data imbalance, the synthetic minority-oversampling technique (SMOTE) is used to balance the training data by generating artificial RI cases. Model ensembling is also applied to improve prediction skill further. The model’s Brier skill scores in the Atlantic and eastern North Pacific are higher than those of operational predictions for RI thresholds of 25 and 30 kt and comparable for 35 kt on the independent test data. Composites of the features associated with RI and non-RI situations provide physical insights for how the model discriminates between RI and non-RI cases. Prediction case studies are presented for some recent storms.
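
A hedged illustration of the SMOTE-plus-LSTM setup described above is sketched below, using imbalanced-learn and Keras; the feature counts, shapes, and random data are invented for the example and are not the paper's configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

# Assumed shapes: 8 features observed over the previous 48 h at 6-h steps.
n_steps, n_features = 8, 8
X = np.random.rand(5000, n_steps * n_features)   # placeholder training matrix
y = (np.random.rand(5000) < 0.05).astype(int)    # RI cases are rare (imbalanced)

# SMOTE balances the classes by synthesizing artificial minority (RI) examples.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_bal = X_bal.reshape(-1, n_steps, n_features)   # back to sequences for the LSTM

model = models.Sequential([
    layers.LSTM(32, input_shape=(n_steps, n_features)),
    layers.Dense(1, activation="sigmoid"),       # probabilistic RI prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_bal, y_bal, epochs=5, batch_size=64, verbose=0)
```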


Author(s):  
Tanvi Bhandarkar ◽  
Vardaan K ◽  
Nikhil Satish ◽  
S. Sridhar ◽  
R. Sivakumar ◽  
...  

The prediction of natural calamities such as earthquakes has been an area of interest for a long time, but accurate results in earthquake forecasting have eluded scientists, even leading some to deem accurate forecasting intrinsically impossible. In this paper, an attempt is made to forecast earthquakes and their trends using data from a series of past earthquakes. A type of recurrent neural network called Long Short-Term Memory (LSTM) is used to model the sequence of earthquakes, and the trained model is then used to predict the future trend of earthquakes. An ordinary Feed-Forward Neural Network (FFNN) was applied to the same problem for comparison. The LSTM neural network was found to outperform the FFNN: the R² score of the LSTM is better than the FFNN's by 59%.
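
A minimal sketch of the R² comparison is given below (scikit-learn); the arrays are placeholders, not the paper's earthquake data.

```python
from sklearn.metrics import r2_score

# Placeholder hold-out values; in the paper these would be the observed series
# and the LSTM/FFNN predictions on the test portion of the earthquake data.
y_true = [5.1, 4.8, 6.0, 5.5, 4.9]
y_pred_lstm = [5.0, 4.9, 5.8, 5.6, 5.0]
y_pred_ffnn = [4.6, 5.3, 5.2, 6.0, 4.4]

r2_lstm = r2_score(y_true, y_pred_lstm)
r2_ffnn = r2_score(y_true, y_pred_ffnn)
print(f"LSTM R^2: {r2_lstm:.3f}, FFNN R^2: {r2_ffnn:.3f}")
```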


Energies ◽  
2021 ◽  
Vol 14 (18) ◽  
pp. 5873
Author(s):  
Yuhong Xie ◽  
Yuzuru Ueda ◽  
Masakazu Sugiyama

Load forecasting is an essential task in the operation management of a power system. Electric power companies utilize short-term load forecasting (STLF) technology to make reasonable power generation plans. A forecasting model with low prediction errors helps reduce operating costs and risks for the operators. In recent years, machine learning has become one of the most popular technologies for load forecasting. In this paper, a two-stage STLF model based on long short-term memory (LSTM) and multilayer perceptron (MLP), which improves the forecasting accuracy over the entire time horizon, is proposed. In the first stage, a sequence-to-sequence (seq2seq) architecture, which can take multiple input sequences and thereby extract more features from the historical data than a single-sequence model, is used to make multistep predictions. In the second stage, the MLP is used for residual modification by perceiving information that the LSTM cannot. To construct the model, we collected the electrical load, calendar, and meteorological records of the Kanto region in Japan for four years. Unlike other LSTM-based hybrid architectures, the proposed model uses two independent neural networks instead of making the neural network deeper by concatenating a series of LSTM cells and convolutional neural networks (CNNs). Therefore, the proposed model is easier to train and more interpretable. The seq2seq module performs well in the first few hours of the predictions. The MLP inherits the advantage of the seq2seq module and improves the results by feeding in artificially selected features from both the historical data and information about the target day. Compared to the LSTM-AM model and the single MLP model, the mean absolute percentage error (MAPE) of the proposed model decreases from 2.82% and 2.65%, respectively, to 2%. The results demonstrate that the MLP helps improve the prediction accuracy of the seq2seq module and that the proposed model achieves better performance than other popular models. In addition, this paper reveals why the MLP achieves this improvement.
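
A hedged sketch of the two-stage idea follows (Keras); the horizons, layer sizes, and feature counts are illustrative assumptions, not the paper's configuration.

```python
from tensorflow.keras import layers, models

n_in, n_out, n_feat, n_cal = 48, 24, 4, 10   # assumed horizons and feature counts

# Stage 1: seq2seq LSTM maps the past 48 h of load/weather sequences
# to a 24-step-ahead load forecast.
seq2seq = models.Sequential([
    layers.LSTM(64, input_shape=(n_in, n_feat)),
    layers.RepeatVector(n_out),
    layers.LSTM(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(1)),
])
seq2seq.compile(optimizer="adam", loss="mse")

# Stage 2: an independent MLP corrects the residual of each horizon step using
# hand-selected features (e.g., calendar flags and target-day information).
mlp = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_cal,)),
    layers.Dense(1),                 # residual correction for one horizon step
])
mlp.compile(optimizer="adam", loss="mse")

# At prediction time the two stages combine additively:
# final_forecast = seq2seq(history) + mlp(selected_features)
```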


2021 ◽  
Vol 5 (2) ◽  
pp. 524
Author(s):  
Annisa Farhah ◽  
Anggunmeka Luhur Prasasti ◽  
Marisa W Paryasto

In this modern era, restaurants are becoming very popular, especially in big cities. However, this can lead to crowding or queues of visitors at a restaurant, which should be avoided during the current Covid-19 pandemic, so accurate information that can predict restaurant density is very useful. To predict restaurant density, visitor-count data obtained from one of the restaurants were processed using artificial intelligence in the form of an LSTM (Long Short-Term Memory) based RNN (Recurrent Neural Network). With the best parameters, a learning rate of 0.001 and a maximum of 2000 epochs, the LSTM-based Recurrent Neural Network achieved an MSE of 0.00000278 on the training data and 0.0069 on the test data.
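
A minimal sketch of this kind of training setup is shown below (Keras); the network size and data are placeholders, and only the learning rate of 0.001 and the 2000-epoch limit are taken from the abstract.

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers

# Placeholder visitor-count sequences: 24 past hourly counts -> next hour's count.
X_train = np.random.rand(500, 24, 1)
y_train = np.random.rand(500, 1)

model = models.Sequential([
    layers.LSTM(32, input_shape=(24, 1)),
    layers.Dense(1),
])
# Learning rate 0.001 and up to 2000 epochs, as reported in the abstract.
model.compile(optimizer=optimizers.Adam(learning_rate=0.001), loss="mse")
model.fit(X_train, y_train, epochs=2000, verbose=0)
print("Training MSE:", model.evaluate(X_train, y_train, verbose=0))
```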


2021 ◽  
pp. 1-13
Author(s):  
Joel Suárez-Cansino ◽  
Virgilio López-Morales ◽  
Julio César Ramos-Fernández

Building a good instructional design requires sound organizational management to program and articulate several tasks based, for instance, on time availability, process follow-up, and the social and educational context. Furthermore, learning outcomes are the basis of every educational activity. Thus, based on a predefined ontology that includes the instructional educative model and its characteristics, we propose the use of a Long Short-Term Memory artificial neural network (LSTM) to organize the structure and automate the generation of learning outcomes for a focused instructional design. We present encouraging results in this direction, obtained with an LSTM trained on a small set of learning outcomes predefined by the user and focused on the characteristics of a previously defined educative model.


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 2045
Author(s):  
Xijie Xu ◽  
Xiaoping Rui ◽  
Yonglei Fan ◽  
Tian Yu ◽  
Yiwen Ju

Owing to the importance of coalbed methane (CBM) as a source of energy, it is necessary to predict its future production. However, the production process of CBM is the result of the interaction of many factors, making it difficult to perform accurate simulations through mathematical models. We must therefore rely on the historical data of CBM production to understand its inherent features and predict its future performance. The objective of this paper is to establish a deep learning prediction method for coalbed methane production without considering complex geological factors. In this paper, we propose a multivariate long short-term memory neural network (M-LSTM NN) model to predict CBM production. We tested the performance of this model using the production data of CBM wells in the Panhe Demonstration Area in the Qinshui Basin of China. The production of different CBM wells has similar characteristics in time. We can use the symmetric similarity of the data to transfer the model to the production forecasting of different CBM wells. Our results demonstrate that the M-LSTM NN model, utilizing the historical yield data of CBM as well as other auxiliary information such as casing pressures, water production levels, and bottom hole temperatures (including the highest and lowest temperatures), can predict CBM production successfully while obtaining a mean absolute percentage error (MAPE) of 0.91%. This is an improvement when compared with the traditional LSTM NN model, which has an MAPE of 1.14%. In addition to this, we conducted multi-step predictions at a daily and monthly scale and obtained similar results. It should be noted that with an increase in time lag, the prediction performance became less accurate. At the daily level, the MAPE value increased from 0.24% to 2.09% over 10 successive days. The predictions on the monthly scale also saw an increase in the MAPE value from 2.68% to 5.95% over three months. This tendency suggests that long-term forecasts are more difficult than short-term ones, and more historical data are required to produce more accurate results.
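
A hedged sketch of a multivariate LSTM of this kind is given below (Keras); the feature list follows the abstract, but the window length, layer sizes, and data are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

# Assumed daily inputs per well: gas yield, casing pressure, water production,
# and the highest and lowest bottom-hole temperatures (5 features, 30-day window).
window, n_features = 30, 5
X = np.random.rand(1000, window, n_features)   # placeholder well histories
y = np.random.rand(1000, 1)                    # next-day CBM production

model = models.Sequential([
    layers.LSTM(64, input_shape=(window, n_features)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

# The trained model can then be transferred to other wells with similar
# production behaviour and evaluated with MAPE, as in the paper.
```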


2020 ◽  
Vol 13 (6) ◽  
pp. 1263-1280
Author(s):  
José Joaquín Mesa Jiménez ◽  
Lee Stokes ◽  
Chris Moss ◽  
Qingping Yang ◽  
Valerie N. Livina
