Hybrid nowcasting for solar power plants using satellite-data and Numerical Weather Predictions for (Deep) Machine Learning methods

2021 ◽  
Author(s):  
Fabrizio Ruffini ◽  
Michela Moschella ◽  
Antonio Piazzi

With the expanding penetration of renewable energy in the energy sector, the need for accurate weather and production forecasts keeps growing. Such forecasts are needed by several energy players: plant owners, system operators, and service providers (balancing service providers, energy traders). Several countries already operate near-real-time trading markets, and in a likely future scenario the day-ahead market will give way to markets trading 5-15 minutes ahead. This trend matches the system operators' need to predict, in the very short term, the energy fed into the grid, in order to cope effectively with voltage and congestion problems and to manage ancillary services. Overall, the scenario indicates a compelling need for advanced forecasting techniques.

This article discusses a hybrid solar nowcasting system that predicts energy production from +15 minutes to +3 hours ahead, with a time granularity of 15 minutes. The system combines observed data (especially from satellite) and Numerical Weather Predictions (NWP) in two steps: the first step nowcasts the global horizontal irradiance and the direct normal irradiance, which are then fed into a second model that predicts the energy production. Disentangling the problem in this way lets us improve the two subsystems in parallel.

The core of the weather nowcast model is a deep learning method especially suited for time-series problems, the Long Short-Term Memory network (LSTM). It has been tested over different sites corresponding to different satellite spatial resolutions, weather conditions, and climate regions. The results are compared with several benchmarks, such as the persistence model, the smart persistence model, and ground truth (where available), obtaining typical annual MAE values between 10 and 80 W/m2 over the +15 minutes to +3 hours horizons. Other metrics (MBE, RMSE, and the forecast score) are calculated to gain a deeper view of the results. We also compare results obtained without NWP (computationally expensive) or without ground sensors (not always available in real time) to quantify the benefit of processing those data.

The power production model (fed with the output of the previous step) is a combination of different techniques: decision trees, KNN, and neural networks. Its performance is typically 3-6% annual NMAE, depending on the site. We compare the results with the persistence benchmark and calculate other metrics, such as MBE and NRMSE, to gain a deeper understanding of the results.

Finally, the two-step model is compared with a one-step model, where satellite data are fed directly into a model predicting the power, to compare pros, cons, and performance.
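As a rough illustration of the irradiance-nowcasting step, the sketch below builds a small LSTM that maps a window of recent observations to a 12-step (+15 minutes to +3 hours) irradiance forecast. TensorFlow/Keras, the window length, the feature set, and the layer sizes are all assumptions for illustration; the abstract does not specify the authors' architecture.

```python
# Minimal sketch of the irradiance-nowcasting step, assuming TensorFlow/Keras and
# hypothetical features; the authors' exact architecture is not given in the abstract.
import numpy as np
import tensorflow as tf

N_LAGS = 16        # 4 hours of 15-minute history (assumed window length)
N_HORIZONS = 12    # +15 min ... +3 h, one output per 15-minute step
N_FEATURES = 6     # e.g. satellite-derived GHI/DNI, clear-sky GHI, NWP cloud cover (assumed)

def build_nowcaster() -> tf.keras.Model:
    """LSTM that maps a window of past observations to a 12-step irradiance nowcast."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_LAGS, N_FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_HORIZONS),   # GHI (or DNI) for each horizon, in W/m2
    ])
    model.compile(optimizer="adam", loss="mae")  # MAE matches the headline metric
    return model

if __name__ == "__main__":
    # Synthetic stand-in data, only to show the expected tensor shapes.
    X = np.random.rand(1024, N_LAGS, N_FEATURES).astype("float32")
    y = np.random.rand(1024, N_HORIZONS).astype("float32")
    model = build_nowcaster()
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    print(model.predict(X[:1]).shape)  # (1, 12)
```

In the two-step design described above, the nowcast irradiance would then be passed to the separate power-conversion model (decision trees, KNN, neural networks), so each subsystem can be tuned independently.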

Energies ◽  
2021 ◽  
Vol 14 (18) ◽  
pp. 5865
Author(s):  
Abhnil Amtesh Prasad ◽  
Merlinde Kay

Solar energy production is affected by the attenuation of incoming irradiance by underlying clouds. Often, improvements in the short-term predictability of irradiance using satellite irradiance models can assist grid operators in managing intermittent solar-generated electricity. In this paper, we develop and test a satellite irradiance model with short-term prediction capabilities using cloud motion vectors. Near-real-time visible images from the Himawari-8 satellite are used to derive cloud motion vectors using optical flow estimation techniques. The cloud motion vectors are used to advect pixels to future time horizons for predictions of irradiance at the surface. Firstly, the pixels are converted to a cloud index using historical satellite data, accounting for clear, cloudy and cloud-shadow pixels. Secondly, the cloud index is mapped to the clear sky index using a historical fitting function for the respective sites. Thirdly, the predicted all-sky irradiance is derived by scaling the clear sky irradiance with the clear sky index. Finally, a power conversion model trained at each site converts irradiance to power. The prediction of solar power tested at four sites in Australia, using a one-month benchmark period with 5-minute-ahead predictions, showed that errors were less than 10% at almost 34–60% of predicted times, decreasing to 18–26% of times under live predictions, but the model outperformed persistence on >50% of the days with errors <10% for all sites. Results show that increased latency in satellite images and errors resulting from the conversion of cloud index to irradiance and power can significantly affect the forecasts.
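The cloud-motion-vector step can be sketched with a dense optical-flow estimator and backward advection of the latest field, as below. OpenCV's Farneback method and all numeric settings are illustrative stand-ins; the paper's actual flow estimator, parameters, and cloud-index conversion are not reproduced here.

```python
# Illustrative sketch of cloud-motion-vector extrapolation with dense optical flow
# on two consecutive 8-bit grayscale satellite images; constants are placeholders.
import cv2
import numpy as np

def advect_cloud_field(prev_img: np.ndarray, curr_img: np.ndarray, steps: int) -> np.ndarray:
    """Extrapolate the latest cloud field `steps` frames ahead along the motion vectors."""
    # Parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_img, curr_img, None,
                                        0.5, 3, 25, 3, 5, 1.1, 0)
    h, w = curr_img.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # Backward (semi-Lagrangian) warping: look `steps` flow vectors upstream.
    map_x = grid_x - steps * flow[..., 0]
    map_y = grid_y - steps * flow[..., 1]
    return cv2.remap(curr_img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)

# The advected field would then be converted to a cloud index, mapped to a clear-sky
# index via the site's historical fitting function, and scaled by clear-sky irradiance.
```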


Water ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 1500 ◽  
Author(s):  
Halit Apaydin ◽  
Hajar Feizi ◽  
Mohammad Taghi Sattari ◽  
Muslume Sevba Colak ◽  
Shahaboddin Shamshirband ◽  
...  

Due to the stochastic nature and complexity of flow, as well as the existence of hydrological uncertainties, predicting streamflow into dam reservoirs, especially in semi-arid and arid areas, is essential for the optimal and timely use of surface water resources. In this research, daily streamflow into the Ermenek hydroelectric dam reservoir located in Turkey is simulated using deep recurrent neural network (RNN) architectures, including bidirectional long short-term memory (Bi-LSTM), gated recurrent unit (GRU), long short-term memory (LSTM), and simple recurrent neural networks (simple RNN). For this purpose, daily observational flow data from the period 2012–2018 are used, and all models are coded in the Python programming language. Only lagged values of the streamflow time series are used as model inputs. Then, based on the correlation coefficient (CC), mean absolute error (MAE), root mean square error (RMSE), and Nash–Sutcliffe efficiency coefficient (NS), the results of the deep-learning architectures are compared with one another and with an artificial neural network (ANN) with two hidden layers. Results indicate that the deep-learning RNN methods are more accurate than the ANN. Among the deep-learning methods, LSTM has the best accuracy: it simulates streamflow to the dam reservoir with 90% accuracy in the training stage and 87% accuracy in the testing stage, whereas the ANN reaches 86% and 85%, respectively. Considering that the Ermenek Dam is used for hydroelectric purposes and energy production, modeling inflow in the most realistic way may lead to an increase in energy production and income by optimizing water management. Hence, improvements of even a few percentage points can be extremely useful. According to the results, deep-learning RNN methods can be used for estimating streamflow to the Ermenek Dam reservoir due to their accuracy.
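A minimal sketch of the lagged-input setup and the four evaluation metrics named above (CC, MAE, RMSE, NS) is given below, using NumPy only. The lag count and the synthetic data are illustrative assumptions, not the study's settings.

```python
# Lagged-input construction and evaluation metrics (CC, MAE, RMSE, Nash-Sutcliffe),
# with synthetic data; the 7-day lag window is an assumption for illustration.
import numpy as np

def make_lagged(series: np.ndarray, n_lags: int):
    """Use only past streamflow values as model inputs, as in the study."""
    X = np.stack([series[i:len(series) - n_lags + i] for i in range(n_lags)], axis=1)
    y = series[n_lags:]
    return X, y

def nash_sutcliffe(obs: np.ndarray, sim: np.ndarray) -> float:
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def evaluate(obs: np.ndarray, sim: np.ndarray) -> dict:
    return {
        "CC": np.corrcoef(obs, sim)[0, 1],
        "MAE": np.mean(np.abs(obs - sim)),
        "RMSE": np.sqrt(np.mean((obs - sim) ** 2)),
        "NS": nash_sutcliffe(obs, sim),
    }

if __name__ == "__main__":
    flow = np.random.gamma(2.0, 10.0, size=2000)   # synthetic daily streamflow
    X, y = make_lagged(flow, n_lags=7)             # 7-day history (assumed)
    naive = X[:, -1]                               # persistence-style baseline
    print(evaluate(y, naive))
```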


2019 ◽  
Vol 31 (6) ◽  
pp. 1085-1113 ◽  
Author(s):  
Po-He Tseng ◽  
Núria Armengol Urpi ◽  
Mikhail Lebedev ◽  
Miguel Nicolelis

Although many real-time neural decoding algorithms have been proposed for brain-machine interface (BMI) applications over the years, an optimal, consensual approach remains elusive. Recent advances in deep learning algorithms provide new opportunities for improving the design of BMI decoders, including the use of recurrent artificial neural networks to decode neuronal ensemble activity in real time. Here, we developed a long short-term memory (LSTM) decoder for extracting movement kinematics from the activity of large (N = 134–402) populations of neurons, sampled simultaneously from multiple cortical areas, in rhesus monkeys performing motor tasks. Recorded regions included primary motor, dorsal premotor, supplementary motor, and primary somatosensory cortical areas. The LSTM's capacity to retain information for extended periods of time enabled accurate decoding for tasks that required both movements and periods of immobility. Our LSTM algorithm significantly outperformed the state-of-the-art unscented Kalman filter when applied to three tasks: center-out arm reaching, bimanual reaching, and bipedal walking on a treadmill. Notably, LSTM units exhibited a variety of well-known physiological features of cortical neuronal activity, such as directional tuning and neuronal dynamics across task epochs. The LSTM modeled several key physiological attributes of cortical circuits involved in motor tasks. These findings suggest that LSTM-based approaches could yield a better algorithmic strategy for neuroprostheses that employ BMIs to restore movement in severely disabled patients.
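The decoding setup can be sketched as a sequence-to-sequence regression from binned spike counts to kinematics, as below. TensorFlow/Keras, the bin window, and the layer sizes are assumptions for illustration; the study's exact decoder configuration is not given in the abstract.

```python
# Sketch of an LSTM kinematic decoder: binned spike counts in, 2-D kinematics out.
# Bin window, layer sizes, and the synthetic data are illustrative assumptions.
import numpy as np
import tensorflow as tf

N_NEURONS = 134    # lower end of the reported ensemble sizes
WINDOW = 20        # number of time bins fed to the decoder (assumed)
N_KIN = 2          # e.g. x/y velocity

decoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_NEURONS)),
    tf.keras.layers.LSTM(128, return_sequences=True),               # keep per-bin outputs
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_KIN)),  # kinematics per bin
])
decoder.compile(optimizer="adam", loss="mse")

# Synthetic tensors, only to illustrate shapes: (trials, bins, neurons) -> (trials, bins, kinematics).
spikes = np.random.poisson(1.0, size=(256, WINDOW, N_NEURONS)).astype("float32")
kin = np.random.randn(256, WINDOW, N_KIN).astype("float32")
decoder.fit(spikes, kin, epochs=2, batch_size=32, verbose=0)
```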


Author(s):  
Dejiang Kong ◽  
Fei Wu

The widespread use of positioning technology has made mining people's movements feasible, and plenty of trajectory data have been accumulated. How to efficiently leverage these data for location prediction has become an increasingly popular research topic, as it is fundamental to location-based services (LBS). Existing methods often focus either on long-term (days or months) visit prediction (i.e., the recommendation of points of interest) or on real-time location prediction (i.e., trajectory prediction). In this paper, we are interested in the location prediction problem under a weak real-time condition and aim to predict users' movements in the next minutes or hours. We propose a Spatial-Temporal Long-Short Term Memory (ST-LSTM) model which naturally incorporates spatial-temporal influence into the LSTM to mitigate the problem of data sparsity. Further, we employ a hierarchical extension of the proposed ST-LSTM (HST-LSTM) in an encoder-decoder manner, which models the contextual historical visit information to boost prediction performance. The proposed HST-LSTM is evaluated on a real-world trajectory data set, and the experimental results demonstrate the effectiveness of the proposed model.
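A simplified stand-in for the idea of injecting spatial-temporal intervals into a next-location LSTM is sketched below: the time and distance gaps are simply concatenated with the location embedding rather than entering dedicated gates as in the authors' ST-LSTM. Vocabulary size, sequence length, and dimensions are placeholders (TensorFlow/Keras assumed).

```python
# Simplified next-location predictor: intervals concatenated with location embeddings,
# NOT the authors' ST-LSTM gating. All sizes below are placeholders.
import tensorflow as tf

N_LOCATIONS = 10_000   # size of the location vocabulary (assumed)
SEQ_LEN = 32           # length of the visit history (assumed)

loc_ids = tf.keras.layers.Input(shape=(SEQ_LEN,), dtype="int32")   # visited location IDs
intervals = tf.keras.layers.Input(shape=(SEQ_LEN, 2))              # (time gap, distance gap) per step

emb = tf.keras.layers.Embedding(N_LOCATIONS, 64)(loc_ids)
x = tf.keras.layers.Concatenate(axis=-1)([emb, intervals])
h = tf.keras.layers.LSTM(128)(x)
next_loc = tf.keras.layers.Dense(N_LOCATIONS, activation="softmax")(h)

model = tf.keras.Model([loc_ids, intervals], next_loc)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```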


Author(s):  
Yeo Jin Kim ◽  
Min Chi

We propose a bio-inspired approach named Temporal Belief Memory (TBM) for handling missing data with recurrent neural networks (RNNs). When modeling irregularly observed temporal sequences, conventional RNNs generally ignore the real-time intervals between consecutive observations. TBM is a missing-value imputation method that considers time continuity and captures latent missing patterns based on the irregular real-time intervals of the inputs. We evaluate our TBM approach with real-world electronic health records (EHRs) consisting of 52,919 visits and 4,224,567 events on a task of early prediction of septic shock. We compare TBM against multiple baselines, including both domain experts' rules and the state-of-the-art missing-data handling approach, using both RNNs and long short-term memory (LSTM). The experimental results show that TBM outperforms all the competitive baseline approaches for the septic shock early prediction task.
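The following NumPy sketch illustrates the general idea of interval-aware imputation for irregular sequences: a missing value is filled with the last observation decayed toward the feature mean as the elapsed real time grows. This is a simplified stand-in for that idea, not the authors' Temporal Belief Memory mechanism; the decay rate and inputs are hypothetical.

```python
# Interval-aware imputation sketch (simplified, NOT the authors' TBM): stale values
# are trusted less as the real-time gap since the last observation grows.
import numpy as np

def decay_impute(values, mask, deltas, feature_mean, gamma=0.1):
    """values/mask/deltas: (T, D) arrays; mask=1 where observed, deltas = hours since last observation."""
    values = np.asarray(values, dtype=float)
    feature_mean = np.asarray(feature_mean, dtype=float)
    filled = np.empty_like(values)
    last = feature_mean.copy()                              # start from the population mean
    for t in range(values.shape[0]):
        decay = np.exp(-gamma * deltas[t])                  # confidence in the stale value
        estimate = decay * last + (1.0 - decay) * feature_mean
        filled[t] = np.where(mask[t] == 1, values[t], estimate)
        last = np.where(mask[t] == 1, values[t], last)      # update only where observed
    return filled
```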


2020 ◽  
Vol 196 ◽  
pp. 02007
Author(s):  
Vladimir Mochalov ◽  
Anastasia Mochalova

In this paper, the previously obtained results on recognition of ionograms using deep learning are extended to predict the parameters of the ionosphere. After the ionospheric parameters have been identified on the ionogram using deep learning in real time, we can predict the parameters some time ahead on the basis of the newly obtained data. Examples of predicting the ionospheric parameters using the long short-term memory (LSTM) recurrent neural network architecture are given. The place of the ionospheric parameter prediction block within the system for analyzing ionospheric data using deep learning methods is shown.
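A minimal sketch of recursive one-step-ahead forecasting of a single ionospheric parameter (for example foF2) with an LSTM is shown below, assuming TensorFlow/Keras; the window length and network size are placeholders, since the paper's configuration is not given here.

```python
# Recursive one-step-ahead LSTM forecast of a single ionospheric parameter;
# window length and layer size are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 24  # number of past measurements used per forecast (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

def forecast_ahead(model, history: np.ndarray, n_steps: int) -> np.ndarray:
    """Feed each prediction back in to forecast several steps beyond the last observation."""
    window = history[-WINDOW:].astype("float32").reshape(1, WINDOW, 1)
    out = []
    for _ in range(n_steps):
        nxt = float(model.predict(window, verbose=0)[0, 0])
        out.append(nxt)
        window = np.concatenate([window[:, 1:, :], [[[nxt]]]], axis=1)  # slide the window
    return np.array(out)
```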


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Abdullah Alharbi ◽  
Wael Alosaimi ◽  
Radhya Sahal ◽  
Hager Saleh

A low heart rate carries a risk of death, heart disease, and other cardiovascular diseases. Therefore, monitoring the heart rate is critical for discovering irregularities and detecting health problems early. Rapid technological advancement (e.g., artificial intelligence and stream processing technologies) allows healthcare sectors to consolidate and analyze massive health-based data to discover risks by making more accurate predictions. Therefore, this work proposes a real-time prediction system for heart rate, which helps medical care providers and patients avoid heart-rate risks in real time. The proposed system consists of two phases, namely, an offline phase and an online phase. The offline phase develops the model, comparing different forecasting techniques to find the one with the lowest root mean square error. The heart rate time-series dataset is extracted from the Medical Information Mart for Intensive Care (MIMIC-II). Recurrent neural network (RNN), long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional long short-term memory (BI-LSTM) models are applied to the heart rate time series. For the online phase, Apache Kafka and Apache Spark are used to predict the heart rate in advance based on the best developed model. According to the experimental results, the GRU with three layers recorded the best performance and has consequently been used to predict the heart rate 5 minutes in advance.
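The online phase can be sketched with Spark Structured Streaming reading heart-rate samples from a Kafka topic and scoring each micro-batch with the pre-trained model. The broker address, topic name, message format, and the `predict_next` stub are all hypothetical; the paper's actual deployment is not reproduced here, and the spark-sql-kafka package is assumed to be available.

```python
# Hedged sketch of the online phase: Kafka -> Spark Structured Streaming -> trained model.
# Broker, topic, message format, and predict_next are placeholders, not the paper's setup.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("heart-rate-nowcast").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")   # assumed broker
          .option("subscribe", "heart_rate")                      # assumed topic
          .load()
          .selectExpr("CAST(value AS STRING) AS hr_sample"))      # assumes numeric string payloads

def predict_next(samples):
    """Placeholder for the trained GRU model; a real deployment would load and call it here."""
    return samples[-1]

def score_batch(batch_df, batch_id):
    """Collect the micro-batch and forecast the heart rate 5 minutes ahead."""
    samples = [float(r.hr_sample) for r in batch_df.collect()]
    if samples:
        prediction = predict_next(samples)
        print(f"batch {batch_id}: predicted HR in 5 min = {prediction:.1f}")

query = stream.writeStream.foreachBatch(score_batch).start()
query.awaitTermination()
```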

