Quantifying Uncertainty
Recently Published Documents


TOTAL DOCUMENTS: 504 (FIVE YEARS: 223)
H-INDEX: 40 (FIVE YEARS: 12)

2022 ◽  
Author(s):  
Yufan Zhang ◽  
Honglin Wen ◽  
Qiuwei Wu ◽  
Qian Ai

Prediction intervals (PIs) offer an effective tool for quantifying the uncertainty of loads in distribution systems. Traditional central PIs cannot adapt well to skewed distributions, and their offline training scheme is vulnerable to unforeseen changes in future load patterns. We therefore propose an optimal PI estimation approach that is online and adapts to different data distributions by adaptively determining symmetric or asymmetric probability proportion pairs for the quantiles of the PI bounds. It relies on the online learning ability of reinforcement learning (RL) to integrate two online tasks, i.e., the adaptive selection of probability proportion pairs and quantile prediction, both of which are modeled by neural networks. As such, the quality of the quantile-formed PIs guides the selection of optimal probability proportion pairs, forming a closed loop that improves PI quality. Furthermore, to improve the learning efficiency of the quantile forecasts, a prioritized experience replay (PER) strategy is proposed for the online quantile regression processes. Case studies on both load and net load demonstrate that the proposed method adapts better to the data distribution than the online central PI method. Compared with offline-trained methods, it obtains PIs of better quality and is more robust against concept drift.
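To make the mechanics concrete, below is a minimal sketch of the pinball (quantile) loss that underlies quantile-based PI bounds such as those described above; the variable names and the example probability proportion pair are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss for a single quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# An asymmetric probability proportion pair (tau_lo, tau_hi) for a 90% PI:
# a central PI would use (0.05, 0.95); a skew-adapted pair shifts both
# levels while keeping tau_hi - tau_lo = 0.90 (hypothetical values).
tau_lo, tau_hi = 0.02, 0.92

y_true = np.array([10.0, 12.5, 9.8, 14.1])   # observed loads
lower = np.array([9.0, 11.0, 9.0, 12.0])     # predicted lower-quantile bound
upper = np.array([12.0, 14.0, 11.5, 15.5])   # predicted upper-quantile bound

# Training each bound against its own quantile level yields the PI.
loss = pinball_loss(y_true, lower, tau_lo) + pinball_loss(y_true, upper, tau_hi)
print(f"combined pinball loss: {loss:.4f}")
```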


Geosciences ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 27
Author(s):  
Talha Siddique ◽  
Md Mahmud ◽  
Amy Keesee ◽  
Chigomezyo Ngwira ◽  
Hyunju Connor

With the availability of data and computational technologies in the modern world, machine learning (ML) has emerged as a preferred methodology for data analysis and prediction. While ML holds great promise, the results from such models are not fully reliable due to the challenges introduced by uncertainty. An ML model generates an optimal solution based on its training data. However, if the uncertainty in the data and in the model parameters is not considered, such optimal solutions carry a high risk of failure in real-world deployment. This paper surveys the different approaches used in ML to quantify uncertainty. It also demonstrates the implications of quantifying uncertainty in ML through two case studies focused on space physics. The first case study addresses the classification of auroral images into predefined labels. In the second, the horizontal component of the perturbed magnetic field measured at the Earth’s surface is predicted for the study of Geomagnetically Induced Currents (GICs) by training the model on time series data. In both cases, a Bayesian Neural Network (BNN) was trained to generate predictions along with epistemic and aleatoric uncertainties. Finally, the pros and cons of Gaussian Process Regression (GPR) models and Bayesian Deep Learning (DL) are weighed, and recommendations are given for the models that merit further exploration, with a focus on space weather prediction.
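As an illustration of the kind of output such a model produces, here is a minimal PyTorch sketch that approximates a BNN with Monte Carlo dropout and separates epistemic from aleatoric uncertainty; the architecture, dropout rate, and sample count are assumptions chosen for illustration, not the survey's actual models.

```python
import torch
import torch.nn as nn

# Small regression network whose dropout layers stay active at prediction
# time, approximating a BNN via MC dropout; the head outputs a mean and a
# log-variance so aleatoric noise is learned alongside the fit.
class MCDropoutNet(nn.Module):
    def __init__(self, d_in=8, d_hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(0.1),
        )
        self.mean_head = nn.Linear(d_hidden, 1)
        self.logvar_head = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout stochastic across forward passes
    means, logvars = zip(*(model(x) for _ in range(n_samples)))
    means = torch.stack(means)                          # (n_samples, batch, 1)
    epistemic = means.var(dim=0)                        # spread across passes
    aleatoric = torch.stack(logvars).exp().mean(dim=0)  # learned data noise
    return means.mean(dim=0), epistemic, aleatoric

x = torch.randn(4, 8)
mu, epistemic, aleatoric = predict_with_uncertainty(MCDropoutNet(), x)
```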


2021 ◽  
pp. 1-13
Author(s):  
Mackenzie E. Meyer ◽  
Matthew P. Byrne ◽  
Iain D. Boyd ◽  
Benjamin A. Jorns

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Zahid Khan ◽  
Afrah Al-Bossly ◽  
Mohammed M. A. Almazah ◽  
Fuad S. Alduais

In the absence of a correct distribution theory for complex data, neutrosophic algebra can be very useful for quantifying uncertainty. In applied data analysis, the existing gamma distribution becomes inadequate for some applications when dealing with an imprecise, uncertain, or vague dataset. Most existing works have explored the distributional properties of the gamma distribution under the assumption that the data carry no indeterminacy. Yet the analytical properties of the gamma model in the more realistic setting where the data involve uncertainty remain largely underdeveloped. This paper fills that gap and develops the notion of the neutrosophic gamma distribution (NGD). The proposed distribution is a generalized structure of the existing gamma distribution. The basic distributional properties, including the moments, shape coefficients, and moment generating function (MGF), are established. Several examples emphasize the relevance of the proposed NGD for circumstances with inadequate or ambiguous knowledge of the distributional characteristics. An estimation framework for treating the vague parameters of the NGD is developed, and a Monte Carlo simulation is implemented to examine the performance of the proposed model. Finally, the model is applied to a real dataset containing inaccurate and vague statistical data. The results show that the NGD offers better flexibility in handling real data than the conventional gamma distribution.
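A minimal sketch of the interval-style propagation such a neutrosophic model implies is given below, assuming the indeterminate shape and scale parameters are represented as intervals; the function names and parameter values are hypothetical, and the closed forms used are the standard gamma results E[X] = kθ, Var[X] = kθ², and M(t) = (1 − θt)^(−k).

```python
# Hypothetical interval ("neutrosophic") parameters: shape k in [k_lo, k_hi],
# scale theta in [theta_lo, theta_hi], both strictly positive.
def ngd_moments(k, theta):
    k_lo, k_hi = k
    th_lo, th_hi = theta
    # E[X] = k*theta and Var[X] = k*theta^2 are increasing in both
    # parameters, so interval endpoints map to endpoint evaluations.
    mean = (k_lo * th_lo, k_hi * th_hi)
    var = (k_lo * th_lo**2, k_hi * th_hi**2)
    return mean, var

def ngd_mgf(k, theta, t):
    # M(t) = (1 - theta*t)^(-k), increasing in k and theta for 0 < t < 1/theta_hi.
    k_lo, k_hi = k
    th_lo, th_hi = theta
    assert 0 < t < 1.0 / th_hi, "t outside the MGF's domain of convergence"
    return ((1 - th_lo * t) ** (-k_lo), (1 - th_hi * t) ** (-k_hi))

mean, var = ngd_moments((2.0, 2.5), (1.0, 1.2))
mgf = ngd_mgf((2.0, 2.5), (1.0, 1.2), t=0.5)
print(mean, var, mgf)
```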


2021 ◽  
Vol 18 (6) ◽  
pp. 1022-1034
Author(s):  
Jia Wang ◽  
Fabian Nitschke ◽  
Emmanuel Gaucher ◽  
Thomas Kohl

Conventional methods to estimate the static formation temperature (SFT) require borehole temperature data measured during thermal recovery periods. This can be both economically and technically prohibitive under real operational conditions, especially for high-temperature boreholes. This study investigates the use of temperature logs obtained under injection conditions to determine the SFT through inverse modelling. An adaptive sampling approach based on machine-learning techniques explores the model space efficiently by iteratively proposing samples based on the results of previous runs. Synthetic case studies are conducted with a rigorous evaluation of the factors affecting the quality of SFT estimates for deep hot wells. The results show that using temperature data measured at higher flow rates or after longer injection times can lead to less reliable results. Furthermore, the estimation error exhibits an almost linear dependency on the standard error of the measured borehole temperatures. In addition, potential flow-loss zones in the borehole lead to increased uncertainty in the SFT estimates; consequently, any prior knowledge about the amount of flow loss can improve the estimation accuracy considerably. For formations with thermal gradients that vary with depth, prior information on the depth of the gradient change is necessary to avoid spurious results. The inversion scheme presented is demonstrated to be an efficient tool for quantifying uncertainty in the interpretation of borehole data. Although only temperature data are considered in this work, other types of data, such as flow and transport measurements, can also be included in this method for geophysical and rock physics studies.
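To illustrate the flavor of such an adaptive sampling loop, here is a toy sketch in which a Gaussian-process surrogate of the data misfit iteratively proposes the next candidate thermal gradient via a lower-confidence-bound rule; the forward model, parameter range, and acquisition rule are illustrative assumptions, not the authors' scheme.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical 1-D inverse problem: recover the thermal gradient g (°C/m)
# that best reproduces an "observed" borehole temperature log.
def forward_model(g, depths=np.linspace(0, 3000, 50)):
    return 15.0 + g * depths  # toy linear temperature profile

g_true = 0.03
observed = forward_model(g_true) + np.random.default_rng(0).normal(0, 0.5, 50)

def misfit(g):
    return np.sum((forward_model(g) - observed) ** 2)

# Adaptive sampling: fit a surrogate to the misfit surface, then propose
# the candidate the surrogate currently deems most promising.
samples = list(np.linspace(0.01, 0.06, 5))
values = [misfit(g) for g in samples]
candidates = np.linspace(0.01, 0.06, 500)
for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(
        np.array(samples).reshape(-1, 1), values)
    mu, sigma = gp.predict(candidates.reshape(-1, 1), return_std=True)
    g_next = candidates[np.argmin(mu - 1.0 * sigma)]  # lower confidence bound
    samples.append(g_next)
    values.append(misfit(g_next))

print(f"estimated gradient: {samples[int(np.argmin(values))]:.4f} °C/m")
```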


2021 ◽  
Author(s):  
Jianxiong Shen ◽  
Adria Ruiz ◽  
Antonio Agudo ◽  
Francesc Moreno-Noguer

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Luke J. Harrington ◽  
Carl-Friedrich Schleussner ◽  
Friederike E. L. Otto

High-level assessments of climate change impacts aggregate multiple perils into a common framework, which requires incorporating multiple dimensions of uncertainty. Here we propose a methodology to assess these uncertainties transparently within the ‘Reasons for Concern’ framework, using extreme heat as a case study. We quantitatively discriminate multiple dimensions of uncertainty, including future vulnerability and exposure to changing climate hazards. High risks from extreme heat materialise after 1.5–2 °C of warming, and very high risks between 2 and 3.5 °C. Risks would emerge earlier if global assessments were based on national risk thresholds, underscoring the need for stringent mitigation to limit future extreme heat risks.

