Intraday Load Forecasts with Uncertainty

Energies ◽  
2019 ◽  
Vol 12 (10) ◽  
pp. 1833
Author(s):  
David Kozak ◽  
Scott Holladay ◽  
Gregory E. Fasshauer

We provide a comprehensive framework for forecasting five-minute load using Gaussian processes with a positive definite kernel specifically designed for load forecasts. Gaussian processes are probabilistic, enabling us to draw samples from a posterior distribution and provide rigorous uncertainty estimates to complement the point forecast, an important benefit for forecast consumers. As part of the modeling process, we discuss various methods for dimension reduction and explore their use in effectively incorporating weather data into the load forecast. We provide guidance for every step of the modeling process, from model construction through optimization and model combination. We provide results on data from PJM, the largest deregulated wholesale U.S. electricity market, for various periods in 2018. The process is transparent, mathematically motivated, and reproducible. The resulting model provides a probability density over five-minute forecasts for the next 24 h.
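The paper's kernel is purpose-built for load data; as a minimal illustrative sketch (assumed kernel choices and synthetic data, not the authors' model), a composite kernel with a daily periodic term, a smooth local term, and a noise term already yields probabilistic five-minute forecasts:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

# Synthetic stand-in for five-minute load (288 observations per day);
# the real model is trained on PJM data.
rng = np.random.default_rng(0)
t = np.arange(2 * 288, dtype=float)[:, None]               # two days of history
y = 100 + 10 * np.sin(2 * np.pi * t[:, 0] / 288) + rng.standard_normal(len(t))

# Illustrative composite kernel: daily periodicity + smooth local variation
# + observation noise. The paper designs its own positive definite kernel.
kernel = (ExpSineSquared(length_scale=50.0, periodicity=288.0)
          + RBF(length_scale=100.0)
          + WhiteKernel(noise_level=1.0))

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

# Probabilistic forecast for the next 24 h: pointwise mean and standard
# deviation, plus full posterior sample paths.
t_next = np.arange(2 * 288, 3 * 288, dtype=float)[:, None]
mean, std = gp.predict(t_next, return_std=True)
paths = gp.sample_y(t_next, n_samples=5)
```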


Atmosphere ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 953
Author(s):  
Nipun Gunawardena ◽  
Giuliana Pallotta ◽  
Matthew Simpson ◽  
Donald D. Lucas

In the event of an accidental or intentional hazardous material release in the atmosphere, researchers often run physics-based atmospheric transport and dispersion models to predict the extent and variation of the contaminant spread. These predictions are imperfect due to uncertainty propagated from atmospheric model physics (or parameterizations) and from the weather data used as initial conditions. Ensembles of simulations can be used to estimate uncertainty, but running large ensembles is often time-consuming and resource-intensive, even on large supercomputers. In this paper, we present a machine-learning-based method that can quickly emulate spatial deposition patterns from a multi-physics ensemble of dispersion simulations. We use a hybrid linear and logistic regression method that can predict deposition in more than 100,000 grid cells with as few as fifty training examples. Logistic regression provides probabilistic predictions of the presence or absence of hazardous materials, while linear regression predicts the quantity of hazardous materials. The coefficients of the linear regressions also open avenues of exploration regarding interpretability: the presented model can be used to find which physics schemes are most important over different spatial areas. A single regression prediction is on the order of 10,000 times faster than running a weather and dispersion simulation. However, accounting for the number of weather and dispersion simulations needed to train the regressions, the speed-up achieved over the whole ensemble is about 24 times. Ultimately, this work will allow atmospheric researchers to produce potential contamination scenarios with uncertainty estimates faster than previously possible, aiding public servants and first responders.
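A hypothetical sketch of the hybrid scheme described above (shapes, feature encoding, and helper names are assumptions, not the authors' code): per grid cell, a logistic classifier predicts whether any material is deposited, and a linear regression fit only on the "present" runs predicts the amount.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Illustrative sizes: 50 ensemble runs, each encoded as a feature vector of
# physics-scheme choices, mapped to deposition over a flattened grid.
n_runs, n_features, n_cells = 50, 12, 1000
X = np.random.rand(n_runs, n_features)
deposition = np.random.rand(n_runs, n_cells)
deposition[deposition < 0.5] = 0.0            # many cells receive nothing

present = deposition > 0
models = []
for c in range(n_cells):
    y = present[:, c]
    # Logistic part: probability that cell c receives any material at all.
    p = LogisticRegression().fit(X, y) if 0 < y.sum() < n_runs else None
    # Linear part: deposited amount, fit only where material is present.
    q = LinearRegression().fit(X[y], deposition[y, c]) if y.sum() > 1 else None
    models.append((p, q))

def emulate(x_new):
    """Predicted deposition per cell for one new physics configuration."""
    out = np.empty(n_cells)
    for c, (p, q) in enumerate(models):
        hit = p.predict(x_new)[0] if p is not None else present[0, c]
        out[c] = (q.predict(x_new)[0] if q is not None else 0.0) if hit else 0.0
    return out

pred = emulate(np.random.rand(1, n_features))
```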


2019 ◽  
Vol 2 (S1) ◽  
Author(s):  
Cornelia Krome ◽  
Jan Höft ◽  
Volker Sander

Abstract In Germany and many other countries, the energy market has been subject to significant changes. Instead of only a few large-scale producers serving aggregated consumers, a shift towards regenerative energy sources is taking place. Energy systems are increasingly made more flexible by decentralised producers and storage facilities, i.e. many consumers are also producers. Aggregating such producers forms another type of power plant: a virtual power plant. On the basis of aggregated production and consumption, virtual power plants try to make decisions under the conditions of the electricity market or the state of the grid. They are influenced by many different aspects, including the current feed-in, weather data, and the demands of the consumers. A virtual power plant therefore focuses on developing strategies to influence and optimise these factors. To accomplish this, many data sets can and should be analysed in order to interpret and create forecasts for energy systems. Time-series-based analytics are therefore of particular interest for virtual power plants. Classifying the different time series according to generators, consumers, or customer types simplifies processes, so that scalable forecasting solutions can be found. However, the corresponding clusters must first be found efficiently. This paper presents a method for determining clusters of time series: ARIMA models are fitted to each series, and the series are then clustered model-based on their ARIMA parameters and an individual quality measure. In this way, the analysis of generic time series can be simplified, and additional statements can be made with the help of graphical evaluations. To facilitate large-scale virtual power plants, the presented clustering workflow is prepared to run on big-data-capable platforms, e.g. time series stored in Apache Cassandra and analysed through an Apache Spark execution framework. The procedure is illustrated using the day-ahead prices of the electricity market for 2018.
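A minimal sketch of the model-based clustering idea, assuming a fixed ARIMA order and k-means in parameter space (the paper additionally uses an individual quality measure, omitted here):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.cluster import KMeans

# Toy stand-in series; real inputs would be e.g. day-ahead prices or feed-in.
rng = np.random.default_rng(0)
series = [rng.standard_normal(200).cumsum() for _ in range(30)]

# Fit the same ARIMA(p, d, q) structure to every series and use the
# estimated coefficients as the feature vector for clustering.
order = (2, 1, 1)
params = np.array([ARIMA(s, order=order).fit().params for s in series])

# Cluster the series in ARIMA-parameter space.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(params)
```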


Author(s):  
Andrea Bontempelli ◽  
Stefano Teso ◽  
Fausto Giunchiglia ◽  
Andrea Passerini

The ability to learn from human supervision is fundamental for personal assistants and other interactive applications of AI. Two central challenges for deploying interactive learners in the wild are the unreliable nature of the supervision and the varying complexity of the prediction task. We address a simple but representative setting, incremental classification in the wild, where the supervision is noisy and the number of classes grows over time. In order to tackle this task, we propose a redesign of skeptical learning centered around Gaussian Processes (GPs). Skeptical learning is a recent interactive strategy in which, if the machine is sufficiently confident that an example is mislabeled, it asks the annotator to reconsider her feedback. This is often enough to obtain clean supervision. Our redesign, dubbed ISGP, leverages the uncertainty estimates supplied by GPs to better allocate labeling and contradiction queries, especially in the presence of noise. Our experiments on synthetic and real-world data show that, as a result, while the original formulation of skeptical learning produces over-confident models that can fail completely in the wild, ISGP works well at varying levels of noise and as new classes are observed.
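A hypothetical sketch of the basic skeptical step (the confidence threshold, the stubbed interactive helper, and the GP setup are all assumptions; ISGP's actual query-allocation logic is more involved):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def ask_annotator_to_reconsider(x, machine_label, user_label):
    # Hypothetical interactive step: a real system would re-query the human;
    # this stub simply keeps the user's original label.
    return user_label

def skeptical_step(gp, X, y, x_new, y_user, threshold=0.9):
    """One skeptical-learning step: if the GP confidently disagrees with the
    annotator, issue a contradiction query before accepting the example."""
    proba = gp.predict_proba(x_new.reshape(1, -1))[0]
    guess = gp.classes_[np.argmax(proba)]
    if guess != y_user and proba.max() > threshold:
        y_user = ask_annotator_to_reconsider(x_new, guess, y_user)
    X = np.vstack([X, x_new])
    y = np.append(y, y_user)
    return GaussianProcessClassifier().fit(X, y), X, y

# Toy usage on synthetic two-class data.
X0 = np.random.rand(10, 2)
y0 = np.array([0, 1] * 5)
gp0 = GaussianProcessClassifier().fit(X0, y0)
gp1, X1, y1 = skeptical_step(gp0, X0, y0, np.random.rand(2), 1)
```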


2018 ◽  
Vol 29 (1) ◽  
pp. 877-893 ◽  
Author(s):  
Dounia El Bourakadi ◽  
Ali Yahyaouy ◽  
Jaouad Boumhidi

Abstract Renewable energies constitute an alternative to fossil energies for several reasons. The microgrid can be considered the ideal way to integrate a renewable energy source into the production of electricity and to give consumers the opportunity to participate in the electricity market not only as consumers but also as producers. In this paper, we present a multi-agent system based on wind and photovoltaic power prediction using the extreme learning machine algorithm. The algorithm was tested on real weather data from the region of Tetouan City in Morocco. The aim is a microgrid located in Tetouan City composed of different generation units (solar and wind energy combined to increase the efficiency of the system) and storage units (batteries used to keep power available on demand as far as possible). In the proposed architecture, the microgrid can exchange electricity with the main grid; therefore, it can buy or sell electricity. The goal of our multi-agent system is thus to control the amount of power delivered to or taken from the main grid in order to reduce cost and maximize benefit. To address uncertainties in the system, we use fuzzy logic control to manage the flow of energy, ensure the availability of power on demand, and make reasonable decisions about storing or selling electricity.
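An extreme learning machine is a single-hidden-layer network whose hidden weights are random and fixed, so training reduces to one least-squares solve. A minimal sketch on synthetic stand-in weather features (the features, sizes, and activation are assumptions, not the paper's setup):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine for regression: random hidden layer,
    output weights fitted by least squares."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Hidden weights are drawn once at random and never trained.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Output weights via the Moore-Penrose pseudoinverse of H.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage: predict wind power from synthetic weather features.
X = np.random.rand(500, 4)       # e.g. wind speed, temperature, pressure, humidity
y = X[:, 0] ** 3 + 0.1 * np.random.randn(500)
model = ELM(n_hidden=200).fit(X[:400], y[:400])
mse = np.mean((model.predict(X[400:]) - y[400:]) ** 2)
```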


2020 ◽  
Vol 10 (8) ◽  
pp. 2774 ◽  
Author(s):  
Jamal Faraji ◽  
Ahmadreza Abazari ◽  
Masoud Babaei ◽  
S. M. Muyeen ◽  
Mohamed Benbouzid

In recent years, the use of renewable energy sources (RESs) has increased considerably due to their unique capabilities, such as their flexible nature and sustainable energy production. Prosumers, defined as proactive users of RESs and energy storage systems (ESSs), are exploiting economic opportunities related to RESs in the electricity market. The prosumer considered here is contracted to provide specific power for consumers in a neighborhood during the daytime. This study presents optimal scheduling and operation of a prosumer who owns RESs and two different types of ESSs, namely a stationary battery (SB) and a plug-in hybrid electric vehicle (PHEV). Due to the intermittent nature of RESs and their dependency on weather conditions, this study introduces a weather prediction module in the energy management system (EMS) using a feed-forward artificial neural network (FF-ANN). Linear regression between the predicted and real weather data achieves coefficients of 0.96, 0.988, and 0.230 for solar irradiance, temperature, and wind speed, respectively. In addition, this study considers the depreciation cost of the ESSs in the objective function, based on reducing the depth of discharge (DOD). To investigate the effectiveness of the proposed strategy, both the predicted and the real power output of the RESs are used, and a mixed-integer linear programming (MILP) model is used to solve the presented day-ahead optimization problem. Based on the obtained results, the predicted output of the RESs yields an operation cost that differs only slightly (US$0.031) from the operation cost obtained with real weather data, which shows the effectiveness of the proposed EMS. Furthermore, including the ESS depreciation term in the objective function reduces both the prosumer's operation cost and the ESS depreciation cost, improving the daily operation cost of the prosumer by US$0.8647.
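A minimal stand-in for the weather-prediction module (feature layout, network size, and data are assumptions, not the paper's configuration): a feed-forward network mapping lagged weather readings to next-day solar irradiance, temperature, and wind speed, whose outputs would feed the day-ahead MILP.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: 24 lagged weather readings per sample, mapped to
# [irradiance, temperature, wind speed] for the next day.
rng = np.random.default_rng(1)
X = rng.random((1000, 24))
Y = rng.random((1000, 3))

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X), Y)          # multi-output regression

Y_pred = ann.predict(scaler.transform(X[-10:]))   # inputs to the day-ahead MILP
```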


Entropy ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. 1109
Author(s):  
Scott A. Cameron ◽  
Hans C. Eggers ◽  
Steve Kroon

We consider estimating the marginal likelihood in settings with independent and identically distributed (i.i.d.) data. We propose estimating the predictive distributions in a sequential factorization of the marginal likelihood in such settings by using stochastic gradient Markov Chain Monte Carlo techniques. This approach is far more efficient than traditional marginal likelihood estimation techniques such as nested sampling and annealed importance sampling due to its use of mini-batches to approximate the likelihood. Stability of the estimates is provided by an adaptive annealing schedule. The resulting stochastic gradient annealed importance sampling (SGAIS) technique, which is the key contribution of our paper, enables us to estimate the marginal likelihood of a number of models considerably faster than traditional approaches, with no noticeable loss of accuracy. An important benefit of our approach is that the marginal likelihood is calculated in an online fashion as data becomes available, allowing the estimates to be used for applications such as online weighted model combination.
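Concretely, the sequential factorization referred to above is the chain-rule decomposition of the marginal likelihood for i.i.d. data, with each predictive factor estimated from SG-MCMC posterior samples:

```latex
p(x_{1:n}) = \prod_{i=1}^{n} p\left(x_i \mid x_{1:i-1}\right),
\qquad
p\left(x_i \mid x_{1:i-1}\right)
  = \int p(x_i \mid \theta)\, p(\theta \mid x_{1:i-1})\, \mathrm{d}\theta .
```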


Author(s):  
Evgenii Tsymbalov ◽  
Sergei Makarychev ◽  
Alexander Shapeev ◽  
Maxim Panov

Active learning methods for neural networks are usually based on greedy criteria, which ultimately yield a single new design point per evaluation. Such an approach requires either heuristics to sample a batch of design points in one active learning iteration, or retraining the neural network after adding each data point, which is computationally inefficient. Moreover, uncertainty estimates for neural networks are sometimes overconfident for points lying far from the training sample. In this work, we propose to approximate Bayesian neural networks (BNNs) by Gaussian processes (GPs), which allows us to update the uncertainty estimates of predictions efficiently without retraining the neural network, while avoiding overconfident uncertainty predictions for out-of-sample points. In a series of experiments on real-world data, including large-scale problems of chemical and physical modeling, we show the superiority of the proposed approach over state-of-the-art methods.
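A minimal sketch of the computational advantage being exploited (assumed kernel and notation, not the authors' implementation): GP predictive variance depends only on the input locations, so candidate points can be scored and "added" for batch selection in closed form, with no network retraining.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior_var(X_train, X_query, noise=1e-3):
    """GP predictive variance at X_query given training inputs X_train.
    No targets appear here: the variance needs only the inputs."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_query, X_train)
    return rbf(X_query, X_query).diagonal() - np.einsum(
        "ij,ji->i", Ks @ np.linalg.inv(K), Ks.T)

# Greedy batch selection: repeatedly pick the candidate with the largest
# posterior variance, then treat it as observed and refresh the variances.
X_train = np.random.rand(20, 3)
cand = np.random.rand(100, 3)
batch = []
for _ in range(5):
    v = posterior_var(X_train, cand)
    i = int(np.argmax(v))
    batch.append(cand[i])
    X_train = np.vstack([X_train, cand[i]])   # update without any retraining
```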

