Epistemic Uncertainty, Subjective Probability, and Ancient History

2019, Vol. 50 (1), pp. 91–111
Author(s): Myles Lavan

The subjective interpretation of probability—increasingly influential in other fields—makes probability a useful tool of historical analysis. It provides a framework that can accommodate the significant epistemic uncertainty involved in estimating historical quantities, especially (but not only) regarding periods for which we have limited data. Conceptualizing uncertainty in terms of probability distributions is a useful discipline because it forces historians to consider the degree of uncertainty as well as to identify a most-likely value. It becomes even more useful when multiple uncertain quantities are combined in a single analysis, a common occurrence in ancient history. Though it may appear a radical departure from current practice, it builds upon a probabilism that is already latent in historical reasoning. Most estimates of quantities in ancient history are implicit expressions of probability distributions, insofar as they represent the value judged to be most likely, given the available evidence. But the traditional point-estimate approach leaves historians’ beliefs about the likelihood of other possible values unclear or unexamined.
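As a hedged sketch of what this looks like in practice, the snippet below encodes two uncertain historical quantities as full distributions and combines them by Monte Carlo simulation; the quantities, distribution families, and all numbers are hypothetical illustrations, not estimates from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Two uncertain quantities, each encoded as a distribution rather than a
# point estimate (families and numbers are hypothetical, for illustration only)
population = rng.triangular(left=40_000, mode=60_000, right=100_000, size=N)
proportion = rng.beta(a=2, b=8, size=N)  # skewed belief about an uncertain share

derived = population * proportion  # the combined historical quantity

print(f"most likely (median) : {np.median(derived):,.0f}")
print(f"90% credible interval: {np.percentile(derived, 5):,.0f} "
      f"to {np.percentile(derived, 95):,.0f}")
```

The point is the discipline the abstract describes: combining the two most-likely values alone would hide how wide the interval around the derived quantity really is.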

Author(s): K. J. Beven, S. Almeida, W. P. Aspinall, P. D. Bates, S. Blazkova, ...

Abstract. This paper discusses how epistemic uncertainties are considered in a number of different natural hazard areas, including floods, landslides and debris flows, dam safety, droughts, earthquakes, tsunamis, volcanic ash clouds and pyroclastic flows, and wind storms. In each case it is common practice to treat most uncertainties in the form of aleatory probability distributions, but this may lead to an underestimation of the resulting uncertainties in assessing the hazard, consequences, and risk. It is suggested that such analyses might usefully be extended by looking at different scenarios of assumptions about sources of epistemic uncertainty, with a view to reducing the element of surprise in future hazard occurrences. Since every analysis is necessarily conditional on the assumptions made about the nature of sources of epistemic uncertainty, it is also important to follow the guidelines for good practice suggested in the companion Part 1 by setting out those assumptions in a condition tree.
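A minimal sketch of what "different scenarios of assumptions" might look like for a single hazard: the same flood-frequency model is evaluated under alternative, equally defensible choices of an epistemically uncertain tail parameter, and the spread of exceedance probabilities across scenarios is reported alongside any within-scenario result. The GEV parameters, threshold, and scenario values below are invented for illustration.

```python
from scipy import stats

threshold = 500.0  # hypothetical damaging discharge (m^3/s)

# Alternative assumptions about the GEV shape parameter xi (larger = heavier tail)
scenarios = {"Gumbel (xi=0.0)": 0.0, "moderate (xi=0.1)": 0.1, "heavy (xi=0.25)": 0.25}

for name, xi in scenarios.items():
    dist = stats.genextreme(c=-xi, loc=300.0, scale=80.0)  # scipy uses c = -xi
    print(f"{name:18s}: annual P(Q > {threshold:.0f}) = {dist.sf(threshold):.4f}")
```

The scenario-to-scenario spread is the epistemic part of the answer; reporting only one line would conflate it with the aleatory variability.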


2005, Vol. 35 (3), pp. 319–323
Author(s): Kevin J. Flannelly, Kathleen Galek, George F. Handzo

Although a substantial number of studies have documented the spiritual needs of hospitalized patients, few have examined the prevalence of these needs, and even fewer have attempted to measure the extent to which they are being met. Since chaplains are the primary providers of spiritual care, chaplains' visits to patients would appear to provide a reasonable proxy for the latter. Based on the limited data available, we estimated the proportion of hospitalized patients who are visited by chaplains. Our analyses yielded a point estimate of 20% (±10%), depending on a number of factors.


2011, Vol. 133 (2)
Author(s): Kais Zaman, Mark McDonald, Sankaran Mahadevan

This paper develops and illustrates a probabilistic approach for uncertainty representation and propagation in system analysis, when the information on the uncertain input variables and/or their distribution parameters may be available as either probability distributions or simply intervals (single or multiple). A unique aggregation technique is used to combine multiple interval data and to compute rigorous bounds on the system response cumulative distribution function. The uncertainty described by interval data is represented through a flexible family of probability distributions. Conversion of interval data to a probabilistic format enables the use of computationally efficient methods for probabilistic uncertainty propagation. Two methods are explored for the implementation of the proposed approach, based on (1) sampling and (2) optimization. The sampling-based strategy is more expensive and tends to underestimate the output bounds. The optimization-based methodology improves both aspects. The proposed methods are used to develop new solutions to challenge problems posed by the Sandia epistemic uncertainty workshop (Oberkampf et al., 2004, “Challenge Problems: Uncertainty in System Response Given Uncertain Parameters,” Reliab. Eng. Syst. Saf., 85, pp. 11–19). Results for the challenge problems are compared with earlier solutions.
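The following is an illustrative sketch, not the authors' aggregation technique: given multiple interval observations, rigorous empirical bounds on the CDF follow from placing all mass at the left endpoints (upper bounding CDF) or at the right endpoints (lower bounding CDF), and a monotone response function propagates the bounds by transforming the endpoints. The intervals and the response function are toy values.

```python
import numpy as np

# Toy interval data: each row is [lower, upper] for one observation
intervals = np.array([[1.0, 2.5], [1.8, 3.0], [2.2, 4.1], [0.9, 2.0]])

xs = np.linspace(0.0, 5.0, 201)
upper_cdf = np.array([(intervals[:, 0] <= x).mean() for x in xs])  # mass pushed left
lower_cdf = np.array([(intervals[:, 1] <= x).mean() for x in xs])  # mass pushed right

# For a monotone response y = g(x), bounds propagate through the endpoints
def g(x):
    return x ** 2

y_intervals = np.sort(g(intervals), axis=1)
print("response intervals:\n", y_intervals)
```

Any distribution consistent with the interval data lies between the two bounding CDFs, which is the sense in which the bounds on the system response are "rigorous".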


2019
Author(s): Kathryn M. Rothenhoefer, Tao Hong, Aydin Alikaya, William R. Stauffer

Abstract. Dopamine neurons drive learning by coding reward prediction errors (RPEs), which are formalized as subtractions of predicted values from reward values. Subtractions accommodate point estimate predictions of value, such as the average value. However, point estimate predictions fail to capture many features of choice and learning behaviors. For instance, reaction times and learning rates consistently reflect higher moments of probability distributions. Here, we demonstrate that dopamine RPE responses code probability distributions. We presented monkeys with rewards that were drawn from the tails of normal and uniform reward size distributions to generate rare and common RPEs, respectively. Behavioral choices and pupil diameter measurements indicated that monkeys learned faster and registered greater arousal from rare RPEs, compared to common RPEs of identical magnitudes. Dopamine neuron recordings indicated that rare rewards amplified RPE responses. These results demonstrate that dopamine responses reflect probability distributions and suggest a neural mechanism for the amplified learning and enhanced arousal associated with rare events.
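To make the contrast concrete, here is a hedged toy sketch, with invented numbers, of the difference between a subtractive point-estimate RPE and a distribution-sensitive response: two rewards of identical magnitude yield the same scalar RPE, but a reward drawn from the tail of a normal predicted distribution is rarer, and hence more surprising, than the same reward under a flat uniform prediction.

```python
import numpy as np
from scipy import stats

predicted = stats.norm(loc=0.5, scale=0.1)  # predicted reward-size distribution
reward = 0.75                               # identical reward magnitude in both cases

rpe = reward - predicted.mean()             # subtractive RPE: 0.25 in either case

# Rarity of the same reward under a normal vs a uniform prediction
p_normal = predicted.pdf(reward)                           # deep in the tail
p_uniform = stats.uniform(loc=0.2, scale=0.6).pdf(reward)  # flat density

surprise_normal = -np.log(p_normal)    # rare draw -> large surprise
surprise_uniform = -np.log(p_uniform)  # common draw -> small surprise
print(rpe, surprise_normal, surprise_uniform)
```

The scalar subtraction cannot distinguish the two cases; a response that scales with rarity, as the recordings suggest, can.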


Author(s): Jason Matthew Aughenbaugh, Scott Duncan, Christiaan J. J. Paredis, Bert Bras

There is growing acceptance in the design community that two types of uncertainty exist: inherent variability and uncertainty that results from a lack of knowledge, which is variously referred to as imprecision, incertitude, irreducible uncertainty, and epistemic uncertainty. There is much less agreement on the appropriate means of representing and computing with these types of uncertainty. Probability bounds analysis (PBA) is a method that represents uncertainty using upper and lower cumulative probability distributions. These structures, called probability boxes or simply p-boxes, capture both variability and imprecision. PBA includes algorithms for efficiently computing with these structures under certain conditions. This paper explores the advantages and limitations of PBA in comparison to traditional decision analysis with sensitivity analysis, in the context of environmentally benign design and manufacture. The example of the selection of an oil filter involves multiple objectives and multiple uncertain parameters. These parameters are known with varying levels of uncertainty, and different assumptions about the dependencies between variables are made. As such, the example problem provides a rich context for exploring the applicability of PBA and sensitivity analysis to making engineering decisions under uncertainty. The results reveal specific advantages and limitations of both methods; the appropriate choice of analysis depends on the exact decision scenario.
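A hand-rolled sketch of the p-box idea under stated assumptions (this is not a full PBA implementation): each quantity is stored as lower and upper quantile bounds at fixed probability levels, an imprecise parameter produces a genuine gap between the bounds, and the bounds for a sum are formed level by level, which is exact under perfect (comonotonic) dependence. Handling arbitrary dependence would require the wider Fréchet bounds, omitted here for brevity.

```python
import numpy as np
from scipy import stats

p = np.linspace(0.005, 0.995, 199)  # probability levels

# Quantity A: precisely known distribution -> degenerate p-box (bounds coincide)
a_lo = a_hi = stats.norm(10.0, 1.0).ppf(p)

# Quantity B: imprecise mean in [4, 5] -> genuine p-box (a gap between bounds)
b_lo = stats.norm(4.0, 0.5).ppf(p)
b_hi = stats.norm(5.0, 0.5).ppf(p)

# Level-by-level addition: exact for the sum under comonotonic dependence
sum_lo, sum_hi = a_lo + b_lo, a_hi + b_hi
print(f"median of A+B lies in [{np.interp(0.5, p, sum_lo):.2f}, "
      f"{np.interp(0.5, p, sum_hi):.2f}]")
```

The interval around the median is the imprecision component; the shape of each bounding curve carries the variability component, which is exactly the separation the paper examines.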


Author(s): Keith J. Beven, Susana Almeida, Willy P. Aspinall, Paul D. Bates, Sarka Blazkova, ...

Abstract. This paper discusses how epistemic uncertainties are considered in a number of different natural hazard areas, including floods, landslides and debris flows, dam safety, droughts, earthquakes, tsunamis, volcanic ash clouds and pyroclastic flows, and wind storms. In each case it is common practice to treat most uncertainties in the form of aleatory probability distributions, but this may lead to an underestimation of the resulting uncertainties in assessing the hazard, consequences, and risk. It is suggested that such analyses might usefully be extended by looking at different scenarios of assumptions about sources of epistemic uncertainty, with a view to reducing the element of surprise in future hazard occurrences. Since every analysis is necessarily conditional on the assumptions made about the nature of sources of epistemic uncertainty, it is also important to follow the guidelines for good practice suggested in the companion Part 2.


2021
Author(s): Uwe Ehret

In this contribution, I will make the case, with examples from hydrology, for information theory as a general language and framework for i) characterizing systems, ii) quantifying the information content in data, iii) evaluating how well models can learn from data, and iv) measuring how well models do in prediction. In particular, I will discuss how information measures can be used to characterize systems by the state-space volume they occupy, their dynamical complexity, and their distance from equilibrium. Likewise, I will discuss how we can measure the information content of data through systematic perturbations, and how much information a model absorbs (or ignores) from data during learning. This can help build hybrid models that optimally combine information in data with general knowledge from physical and other laws, which is currently among the key challenges in machine learning applied to earth science problems.

While I will try my best to convince everybody to take an information perspective henceforth, I will also name the related challenges: data demands, binning choices, estimation of probability distributions from limited data, and issues with excessive data dimensionality.
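As a hedged illustration of point ii) and of the binning challenge named above, the sketch below computes a plug-in Shannon entropy from a histogram of synthetic data and shows how the estimate moves with the number of bins; the lognormal "streamflow" series and all parameters are invented for the example, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
flow = rng.lognormal(mean=2.0, sigma=0.8, size=5_000)  # synthetic skewed series

def shannon_entropy(x, bins):
    """Plug-in entropy estimate (in bits) from a histogram of x."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

for bins in (10, 50, 200):
    print(f"{bins:4d} bins -> H = {shannon_entropy(flow, bins):.3f} bits")
```

The monotone drift of the estimate with bin count is the point: the measured "information content" is conditional on the discretization, which is why binning choices appear on the list of challenges.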


2021
Author(s): Jack B. Soll, Asa B. Palley, Christina A. Rader

Much research on advice taking examines how people revise point estimates given input from others. This work has established that people often egocentrically discount advice. If they were to place more weight on advice, their point estimates would be more accurate. Yet the focus on point estimates and accuracy has resulted in a narrow conception of what it means to heed advice. We distinguish between revisions of point estimates and revisions of attendant probability distributions. Point estimates represent a single best guess; distributions represent the probabilities that people assign to all possible answers. A more complete picture of advice taking is provided by considering revisions of distributions, which reflect changes in both confidence and best guesses. We capture this using a new measure of advice utilization: the influence of advice. We observe that, when input from a high-quality advisor largely agrees with a person’s initial opinion, it engenders little change in one’s point estimate and, hence, little change in accuracy yet significantly increases confidence. This pattern suggests more advice taking than generally suspected. However, it is not necessarily beneficial. Because people are typically overconfident to begin with, receiving advice that agrees with their initial opinion can exacerbate overconfidence. In several experiments, we manipulate advisor quality and measure the extent to which advice agrees with a person’s initial opinion. The results allow us to pinpoint circumstances in which heeding advice is beneficial, improving accuracy or reducing overconfidence, as well as circumstances in which it is harmful, hurting accuracy or exacerbating overconfidence. This paper was accepted by Yuval Rottenstreich, judgment and decision making.
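The sketch below is an illustrative analogue, not the authors' exact measure: it contrasts revision of a point estimate with revision of the full subjective distribution, using total-variation distance as the "influence" of advice. Agreeing advice barely moves the best guess yet substantially tightens the distribution. All distributions and numbers are invented.

```python
import numpy as np

answers = np.arange(100, 201)  # possible answers to a quantity-estimation question

def normed(w):
    return w / w.sum()

prior = normed(np.exp(-0.5 * ((answers - 150) / 15.0) ** 2))  # initial beliefs
post  = normed(np.exp(-0.5 * ((answers - 152) / 8.0) ** 2))   # after agreeing advice

point_shift = abs(answers[np.argmax(post)] - answers[np.argmax(prior)])
influence = 0.5 * np.abs(post - prior).sum()  # total-variation distance

print(f"best-guess shift : {point_shift}")    # tiny change in the point estimate
print(f"influence (TV)   : {influence:.3f}")  # yet a substantial belief revision
```

A point-estimate measure of advice taking would record almost nothing here, while the distributional measure registers the large gain in confidence, which is exactly the pattern the paper highlights.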


Author(s): Nicola Pedroni, Enrico Zio

Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates, …) that are epistemically uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)dependence relationships between epistemically uncertain parameters. When a probabilistic representation of epistemic uncertainty is considered, uncertainty propagation is carried out by a two-dimensional (or double) Monte Carlo (MC) simulation approach; instead, when possibility distributions are used, two approaches are undertaken: the hybrid MC and Fuzzy Interval Analysis (FIA) method, and the MC-based Dempster-Shafer (DS) approach employing Independent Random Sets (IRSs). The objectives are: i) studying the effects of (in)dependence between the epistemically uncertain parameters of the aleatory probability distributions (when a probabilistic/possibilistic representation of epistemic uncertainty is adopted) and ii) studying the effect of the probabilistic/possibilistic representation of epistemic uncertainty (when the state of dependence between the epistemic parameters is defined). The Dependency Bound Convolution (DBC) approach is then undertaken within a hierarchical setting of hybrid (probabilistic and possibilistic) uncertainty propagation, in order to account for all kinds of (possibly unknown) dependences between the random variables. The analyses are carried out with reference to two toy examples, built in such a way as to allow a fair quantitative comparison between the methods and an evaluation of their rationale and appropriateness in relation to risk analysis.
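A minimal sketch of the two-dimensional (double-loop) Monte Carlo scheme mentioned above, under invented hyper-parameters: the outer loop samples an epistemically uncertain failure rate, and the inner loop samples the aleatory failure process conditional on that rate, so epistemic uncertainty shows up as a distribution over the conditional failure probability rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(1)
N_epistemic, N_aleatory = 500, 2_000
mission_time = 1_000.0  # hours, hypothetical

p_fail = np.empty(N_epistemic)
for i in range(N_epistemic):
    lam = rng.lognormal(mean=np.log(1e-3), sigma=0.5)      # outer: epistemic rate
    t = rng.exponential(scale=1.0 / lam, size=N_aleatory)  # inner: aleatory lifetimes
    p_fail[i] = (t < mission_time).mean()                  # conditional failure prob.

# Epistemic uncertainty appears as a spread over the failure probability itself
print(f"P(fail), 5th–95th percentile: "
      f"{np.percentile(p_fail, 5):.3f} to {np.percentile(p_fail, 95):.3f}")
```

Assuming independent epistemic parameters means sampling each from its own distribution as above; the dependence structures studied in the paper would instead couple the outer-loop draws.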


Author(s): Patrick G. Heasler, Scott E. Sanborn, Steven R. Doctor, Michael T. Anderson

The U.S. Nuclear Regulatory Commission (NRC), in cooperation with the nuclear industry, is constructing an improved probabilistic fracture model for piping systems that in the past have not been susceptible to known degradation processes that could lead to pipe rupture. Recent operating experience with primary water stress corrosion cracking (PWSCC) has challenged the prior leak-before-break position, and the improved model has become known as "extremely Low Probability of Rupture" (xLPR). This paper focuses on the xLPR model's treatment of uncertainty for in-service inspection (ISI). In the xLPR model, uncertainty is classified as either aleatory or epistemic, and both types of uncertainty are described with probability distributions. Earlier probabilistic fracture mechanics (PFM) models included aleatory uncertainty but ignored epistemic uncertainty, or attempted to deal with epistemic uncertainty by use of conservative bounds. Thus, inclusion of both types of uncertainty in xLPR should produce more realistic results than the earlier models. This work shows that including epistemic uncertainty in the xLPR ISI module can have a significant effect on rupture probability, though this depends on the specific scenarios being studied. Some simple scenarios are presented to illustrate cases where there is no effect and cases where there is a significant effect on the probability of rupture.
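As a hedged toy illustration of why including epistemic uncertainty can matter (the model form and all numbers are invented, not from the xLPR code): when rupture probability is a steeply nonlinear function of an epistemically uncertain parameter, averaging over that uncertainty can differ from plugging in the best-estimate value by an order of magnitude.

```python
import numpy as np
from scipy import stats

def p_rupture(theta):
    """Toy model: rupture probability rises steeply with a growth parameter."""
    return stats.norm.cdf((theta - 3.0) / 0.2)

theta_best = 2.5                                      # best-estimate parameter value
theta_samples = stats.norm(2.5, 0.3).rvs(100_000, random_state=0)  # epistemic spread

print(f"plug-in estimate : {p_rupture(theta_best):.2e}")
print(f"epistemic average: {p_rupture(theta_samples).mean():.2e}")
```

In a scenario where p_rupture is nearly linear over the epistemic range, the two numbers coincide and epistemic uncertainty has no effect, matching the paper's observation that the impact is scenario-dependent.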

