Subjective Probabilities
Recently Published Documents


TOTAL DOCUMENTS: 254 (FIVE YEARS: 18)
H-INDEX: 32 (FIVE YEARS: 2)

2022 · Vol 165 · pp. 108346
Author(s): Marco Behrendt, Marius Bittner, Liam Comerford, Michael Beer, Jianbing Chen

2021 · Vol 21 (11) · pp. 3509-3517
Author(s): Warner Marzocchi, Jacopo Selva, Thomas H. Jordan

Abstract. The main purpose of this article is to emphasize the importance of clarifying the probabilistic framework adopted for volcanic hazard and eruption forecasting. Eruption forecasting and volcanic hazard analysis seek to quantify the deep uncertainties that pervade the modeling of pre-, syn-, and post-eruptive processes. These uncertainties can be differentiated into three fundamental types: (1) the natural variability of volcanic systems, usually represented as stochastic processes with parameterized distributions (aleatory variability); (2) the uncertainty in our knowledge of how volcanic systems operate and evolve, often represented as subjective probabilities based on expert opinion (epistemic uncertainty); and (3) the possibility that our forecasts are wrong owing to behaviors of volcanic processes about which we are completely ignorant and, hence, cannot quantify in terms of probabilities (ontological error). Here we put forward a probabilistic framework for hazard analysis recently proposed by Marzocchi and Jordan (2014), which unifies the treatment of all three types of uncertainty. Within this framework, an eruption forecasting or volcanic hazard model is said to be complete only if it (a) fully characterizes the epistemic uncertainties in the model's representation of aleatory variability and (b) can be unconditionally tested (in principle) against observations to identify ontological errors. Unconditional testability, which is the key to model validation, hinges on an experimental concept that characterizes hazard events in terms of exchangeable data sequences with well-defined frequencies. We illustrate the application of this unified probabilistic framework by describing experimental concepts for the forecasting of tephra fall from Campi Flegrei. Ultimately, this example may serve as a guide for the application of the same probabilistic framework to other natural hazards.
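
To make the first two uncertainty types concrete, here is a minimal Python sketch (not from the paper; the distributions and numbers are purely illustrative): epistemic uncertainty appears as a distribution over an unknown eruption rate, and aleatory variability as the Poisson scatter given that rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Epistemic uncertainty: the true eruption rate is unknown, so it is
# represented by a (hypothetical) gamma distribution over lambda.
lambdas = rng.gamma(shape=2.0, scale=0.05, size=100_000)  # eruptions per year

# Aleatory variability: given a rate, the eruption count over the next
# 50 years is modeled as Poisson scatter around that rate.
counts = rng.poisson(lam=lambdas * 50.0)

# A "complete" forecast characterizes the epistemic spread rather than
# collapsing it to a single best-guess rate.
p_one_or_more = (counts >= 1).mean()
print(f"P(>=1 eruption in 50 yr), marginalized over epistemic uncertainty: "
      f"{p_one_or_more:.2f}")
```

Ontological error, by contrast, is exactly what such a simulation cannot represent from the inside, which is why the framework demands unconditional testability against observations.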


2021
Author(s): Elliot Gould, Charles T. Gray, Rebecca Groenewegen, Aaron Willcox, David Peter Wilkinson, ...

Structured protocols, such as the IDEA protocol, may be used to elicit expert judgments in the form of subjective probabilities from multiple experts. Judgments from individual experts about a particular phenomenon must therefore be mathematically aggregated into a single prediction. The process of aggregation may be complicated when uncertainty bounds are elicited alongside a judgment, and also when there are several rounds of elicitation. This paper presents the new R package aggreCAT, which provides 22 unique aggregation methods for combining individual judgments into a single probabilistic measure. The aggregation methods were developed as part of the Defense Advanced Research Projects Agency (DARPA) ‘Systematizing Confidence in Open Research and Evidence’ (SCORE) programme, which aims to generate confidence scores, or estimates of ‘claim credibility’, for 3000 research claims from the social and behavioural sciences. We provide several worked examples illustrating the underlying mechanics of the aggregation methods. We also describe a general workflow for using the software in practice, to facilitate uptake of this software for appropriate use cases.
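
To give a sense of what aggregation involves at its simplest, here is a minimal Python sketch of an unweighted linear opinion pool (an arithmetic mean of elicited probabilities, with made-up judgments); this is a generic illustration of the idea, not a port of the aggreCAT API, which is an R package.

```python
# Hypothetical best estimates from five experts for a single claim,
# each elicited as a subjective probability in [0, 1].
judgments = [0.62, 0.55, 0.71, 0.48, 0.66]

# Unweighted linear opinion pool: the arithmetic mean of the judgments.
aggregate = sum(judgments) / len(judgments)
print(f"Aggregated probability: {aggregate:.3f}")  # 0.604
```

The package's 22 methods refine this basic idea, for example by weighting experts or by exploiting the elicited uncertainty bounds, which is where the complications mentioned above arise.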


2021 · Vol 104 (2) · pp. 003685042110096
Author(s): Mohammed A AlKhars

A common technique for eliciting subjective probabilities is to provide a set of mutually exclusive and exhaustive events and ask the assessor to estimate the probabilities of those events. However, such subjective probability estimates are usually subject to a bias known as partition dependence. This study investigates the effect of state-space partitioning and the level of knowledge on subjective probability estimates. The state space is partitioned into full, collapsed, and pruned trees, while knowledge is manipulated into low and high levels. A scenario called “Best Bank Award” was developed, and a 2 × 3 experimental design was employed to explore the effect of the level of knowledge and the partitioning of the state space on subjective probability. A total of 627 professionals participated in the study, and 543 valid responses were used for analysis. The results of a two-way ANOVA with the Tukey HSD test for post hoc analysis indicate a mean probability of 24.2% for the full tree, which is significantly lower than those of the collapsed (35.7%) and pruned (36.3%) trees. Moreover, there is a significant difference in mean probabilities between the low (38.1%) and high (24.9%) knowledge levels. The results support the hypotheses that both the partitioning of the state space and the level of knowledge affect subjective probability estimation. The study demonstrates that, regardless of the level of knowledge, the partition dependence bias is robust; however, subjective probability accuracy improves with more knowledge.
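
One standard account of this bias is anchoring on the ignorance prior: assessors start from 1/K, where K is the number of branches in the partition they are shown, and adjust insufficiently. A minimal Python sketch (the branch counts are hypothetical, not the study's instrument) shows how the anchor shifts with the partition:

```python
# Hypothetical partitions of the same state space for the focal event
# "Bank A wins the award", with the ignorance-prior anchor 1/K that
# each partition suggests.
partitions = {
    "full tree": 4,       # every candidate bank listed as its own branch
    "collapsed tree": 3,  # two rival banks merged into a single branch
    "pruned tree": 3,     # one unlikely rival dropped from the tree
}

for name, k in partitions.items():
    print(f"{name}: ignorance-prior anchor = 1/{k} = {1 / k:.1%}")
```

If the collapsed and pruned trees indeed present three branches, the observed means (35.7% and 36.3%) sit near the 1/3 anchor, just as the full-tree mean (24.2%) sits near 1/4: the signature of partition dependence.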


2021
Author(s): Ilke Aydogan

Prior beliefs and their updating play a crucial role in decisions under uncertainty, and theories about them are well established in classical Bayesianism. Yet they are almost absent from the literature on ambiguous decisions from experience. This paper proposes a new decision model that incorporates the role of prior beliefs, beyond the role of ambiguity attitudes, into the analysis of such decisions. It thereby connects ambiguity theories, popular in economics, with decisions from experience, popular (mostly) in psychology, to the benefit of both. A reanalysis of existing data sets from the literature on decisions from experience shows that the model that incorporates prior beliefs into the estimation of subjective probabilities outperforms the commonly used model that approximates subjective probabilities with observed relative frequencies. Controlling for subjective priors, we obtain more accurate measurements of ambiguity attitudes, and thus a new explanation of the gap between decisions from description and decisions from experience. This paper was accepted by Manel Baucells, decision analysis.
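
The contrast can be made concrete with a textbook Beta-Binomial update (a generic Bayesian sketch, not the paper's estimated model): with only a few draws from an ambiguous source, a posterior mean that folds in a prior differs markedly from the raw relative frequency.

```python
# Suppose 3 gains are observed in 4 draws from an unknown payoff source.
successes, trials = 3, 4

# Common approximation: subjective probability = observed relative frequency.
rel_freq = successes / trials  # 0.75

# Bayesian alternative: fold in a uniform Beta(1, 1) prior (a hypothetical
# choice); the posterior mean shrinks the estimate toward the prior mean.
alpha, beta = 1.0, 1.0
posterior_mean = (alpha + successes) / (alpha + beta + trials)  # 4/6 = 0.667

print(f"relative frequency: {rel_freq:.3f}")
print(f"posterior mean:     {posterior_mean:.3f}")
```

Any gap between the two estimates would otherwise be absorbed into the fitted ambiguity attitudes, which is why controlling for priors changes the measurement.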


2020 · pp. 073401682097882
Author(s): Sean Patrick Roche, Justin T. Pickett, Jonathan Intravia, Andrew J. Thompson

Do people think about offending risk in verbal or numerical terms? Does the elicitation method affect reported subjective probabilities? Rational choice models require potential outcomes (e.g., benefits and costs) to be weighted by their probability of occurrence. Indeed, the subjective likelihood of being apprehended is the central construct in criminological deterrence theory, the so-called certainty principle. Yet the extant literature has measured the construct inconsistently and with little attention to the potential consequences. Using a series of randomized experiments conducted with nationwide samples of American adults (aged 18 and over), this study examines the degree of correspondence between verbal and numeric measures of apprehension risk, assesses the durability of numeric estimates specifically, and attempts to elicit how respondents naturally think about apprehension risk. The findings suggest that laypeople are somewhat inconsistent in their use of both verbal and numeric descriptors of probability, that their numeric estimates of probability are unlikely to be precise or durable, and that many seem to prefer thinking of risk in verbal rather than numeric terms. Researchers should consider including both verbal and numeric measures of probability and should explore alternative measurement strategies, including anchoring vignettes, which have been valuable in standardizing verbal responses in other disciplines.


Author(s): Andrea Verzobio, Ahmed El-Awady, Kumaraswamy Ponnambalam, John Quigley, Daniele Zonta

Bayesian networks support the probabilistic failure analysis of complex systems, e.g., dams and bridges, needed for a better understanding of system reliability and for taking mitigation actions. Bayesian networks represent the interactions among system components graphically, while the quantitative strength of the interrelationships between the variables is measured using conditional probabilities. However, due to a lack of objective data, it often becomes necessary to rely on expert judgment to provide the subjective probabilities that quantify the model. This paper proposes an elicitation process that supports the collection of valid and reliable data with the specific aim of quantifying a Bayesian network, while minimizing the adverse impact of biases. To illustrate how this framework works, it is applied to a real-life case study concerning the safety of the Mountain Chute Dam and Generating Station, located on the Madawaska River in Ontario, Canada.
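
For intuition about how such expert-supplied conditional probabilities propagate, here is a self-contained Python sketch of a toy two-node network; the variables and numbers are hypothetical and are not taken from the Mountain Chute study.

```python
# Toy Bayesian network: Flood -> DamFailure, quantified with
# hypothetical expert-elicited probabilities.
p_flood = 0.05                  # P(Flood)
p_fail_given_flood = 0.30       # P(DamFailure | Flood)
p_fail_given_no_flood = 0.002   # P(DamFailure | no Flood)

# Forward (predictive) reasoning: marginalize over the parent node.
p_fail = (p_fail_given_flood * p_flood
          + p_fail_given_no_flood * (1 - p_flood))
print(f"P(DamFailure) = {p_fail:.4f}")  # 0.0169

# Diagnostic reasoning via Bayes' rule: P(Flood | DamFailure).
p_flood_given_fail = p_fail_given_flood * p_flood / p_fail
print(f"P(Flood | DamFailure) = {p_flood_given_fail:.3f}")  # ~0.888
```

Biased elicitation of any one of these inputs propagates directly into the failure estimate, which is the motivation for a structured elicitation process.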


Author(s): Frank Jackson

Consequentialism says that consequences settle what ought to be done. What does this imply for how we should decide, on a given occasion, what ought to be done in the light of our beliefs about the consequences of the actions available to us, our options? We explore the issues generated by the fact that there is typically substantial uncertainty about the consequences of the actions we must choose between, so we must perforce rely on the subjective probabilities of the possible outcomes of those actions. We distinguish objective "oughts" from expective "oughts" and note the complications that arise with compound actions, that is, actions that have actions as parts.
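
A worked micro-example of the expective reading, with hypothetical numbers: an option's standing is the sum of its possible outcomes' values, each weighted by its subjective probability.

```python
# Hypothetical options, each a list of (subjective probability, value) pairs.
options = {
    "act": [(0.8, 10), (0.2, -50)],  # probably good, small chance of disaster
    "refrain": [(1.0, 0)],           # the certain status quo
}

# Expective "ought": pick the option with the highest expected value.
expected = {name: sum(p * v for p, v in outcomes)
            for name, outcomes in options.items()}
print(expected)                               # {'act': -2.0, 'refrain': 0.0}
print("expective ought:", max(expected, key=expected.get))  # refrain
```

The objective "ought" could still favor acting if, in fact, no disaster would occur; the two notions come apart precisely because the agent must decide under uncertainty.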

