imprecise probabilities
Recently Published Documents

TOTAL DOCUMENTS: 220 (five years: 55)
H-INDEX: 25 (five years: 3)

Author(s): Qiuhan Wang, Mei Cai, Wei Guo

Abstract The increasing frequency and severity of Natech accidents underscore the need to investigate the mechanisms by which these events occur. A cascading disaster chain magnifies the impact of natural hazards as it propagates through critical infrastructures and socio-economic networks. To handle the imprecise probabilities of cascading events in Natech scenarios, this work proposes an improved Bayesian network (BN) combined with evidence theory, which deals with epistemic uncertainty in Natech accidents better than traditional BNs. Effective inference algorithms are developed to propagate system faults through a socio-economic system. The conditional probability table (CPT) of the BN in the traditional probability approach is modified using OR/AND gates to obtain belief mass propagation within the framework of evidence theory. The improved Bayesian network methodology makes it possible to assess the impact and damage of Natech accidents under complex interdependence among accidents and with insufficient data. Finally, a case study of Guangdong province, an area prone to natural disasters, is given: the modified Bayesian network is applied to analyze this area's Natech scenario. After diagnostic and sensitivity analyses of human and natural factors, we are able to locate the key nodes in the cascading disaster chain. The findings can provide useful theoretical support for urban managers of industrial cities seeking to enhance disaster prevention and mitigation capability.
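To make the belief-mass propagation concrete, here is a minimal sketch (not the authors' implementation) of how an OR-gate entry can be evaluated under evidence theory, where each parent node carries a basic mass assignment over {True, False, {True, False}} rather than a single probability; the node names and mass values are illustrative assumptions.

```python
# Illustrative sketch: belief-mass propagation through an OR gate under
# Dempster-Shafer evidence theory, assuming independent parent nodes.
# Each node's basic mass assignment is a dict over the focal elements
# 'T' (occurs), 'F' (does not occur) and 'TF' (unknown / epistemic uncertainty).

def or_gate(m_a, m_b):
    """Combine two parent mass assignments through an OR gate.

    The child is True unless both parents are False; mass that cannot be
    resolved to True or False remains on the ignorance set 'TF'.
    """
    m_true = (m_a['T'] * (m_b['T'] + m_b['F'] + m_b['TF'])
              + m_b['T'] * (m_a['F'] + m_a['TF']))
    m_false = m_a['F'] * m_b['F']
    m_unknown = 1.0 - m_true - m_false   # remaining mass stays on {T, F}
    return {'T': m_true, 'F': m_false, 'TF': m_unknown}

# Hypothetical mass assignments for two triggering events in a Natech chain.
flood   = {'T': 0.30, 'F': 0.60, 'TF': 0.10}
failure = {'T': 0.10, 'F': 0.70, 'TF': 0.20}

cascade = or_gate(flood, failure)
belief       = cascade['T']                  # lower bound on P(cascade)
plausibility = cascade['T'] + cascade['TF']  # upper bound on P(cascade)
print(cascade, belief, plausibility)
```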


2021, Vol 0 (0)
Author(s): Seamus Bradley

Abstract Imprecise probabilities (IP) are an increasingly popular way of reasoning about rational credence. However, they are subject to an apparent failure to display convincing inductive learning. This paper demonstrates that a small modification to the update rule for IP allows us to overcome this problem, albeit at the cost of satisfying only a weaker concept of coherence.
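The abstract does not spell out the modified update rule, so the sketch below only illustrates the standard IP update it starts from, conditioning every prior in a credal set element-wise, and the kind of inductive-learning failure that can arise when the prior set is very wide; the Beta-prior family and the data are hypothetical.

```python
# Illustrative sketch of the standard imprecise-probability update:
# condition every prior in a credal set and collect the resulting posteriors.
# The credal set here is a family of Beta(a, b) priors over a coin's bias;
# the ranges of a and b are hypothetical choices, not from the paper.

def posterior_mean_interval(prior_params, heads, tails):
    """Element-wise Bayesian updating of a set of Beta priors."""
    means = [(a + heads) / (a + b + heads + tails) for a, b in prior_params]
    return min(means), max(means)

data_heads, data_tails = 18, 2   # hypothetical evidence: 18 heads in 20 tosses

# A modest credal set: the posterior interval tightens around the data.
modest = [(a, b) for a in (1, 2, 4) for b in (1, 2, 4)]
print(posterior_mean_interval(modest, data_heads, data_tails))
# -> roughly (0.76, 0.88): the evidence has clearly been learned from

# A near-vacuous credal set: the interval barely moves, which is the kind of
# inductive-learning failure the abstract refers to.
vacuous = [(a, b) for a in (0.01, 1, 1000) for b in (0.01, 1, 1000)]
print(posterior_mean_interval(vacuous, data_heads, data_tails))
# -> roughly (0.02, 1.0): almost no learning despite 20 observations
```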


Author(s): Thomas Augustin

Abstract This chapter aims at surveying and highlighting, in an introductory way, some challenges and major opportunities that a paradigmatic shift to imprecise probabilities could bring to statistical modelling. Working with an informal understanding of imprecise probabilities, we discuss model imprecision and data imprecision as the two main types of imprecision in statistical modelling. We then give a short survey of major developments, methodological questions and applications of imprecise probabilistic models under model imprecision in the context of different schools of inference, and summarize some recent developments in the area of data imprecision.


Author(s): Sifeng Bi, Michael Beer

Abstract This chapter presents the technical route of model updating in the presence of imprecise probabilities. The emphasis is put on the inevitable uncertainties, in both numerical simulations and experimental measurements, which require the updating methodology to be significantly extended from a deterministic to a stochastic sense. This extension means that the model parameters are no longer regarded as unknown-but-fixed values, but as random variables with uncertain distributions, i.e. imprecise probabilities. The objective of stochastic model updating is no longer a single model prediction with maximal fidelity to a single experiment, but rather calibrated distribution coefficients that allow the model predictions to fit the experimental measurements in a probabilistic sense. Uncertainty is incorporated within a Bayesian updating framework by developing a novel uncertainty quantification metric, the Bhattacharyya distance, instead of the typical Euclidean distance. The overall approach is demonstrated by solving the model updating sub-problem of the NASA uncertainty quantification challenge. The demonstration provides a clear comparison between the performance of the Euclidean distance and that of the Bhattacharyya distance, and thus promotes a better understanding of the principle of stochastic model updating: the aim is no longer to determine unknown-but-fixed parameters, but rather to reduce the uncertainty bounds of the model prediction while guaranteeing that the existing experimental data remain enveloped within the updated uncertainty space.
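As a concrete reference point, here is a minimal sketch of the Bhattacharyya distance between two sample sets (e.g. stochastic model predictions versus repeated measurements), estimated from binned approximations of their distributions; the binning scheme and the sample data are illustrative assumptions, not the chapter's implementation.

```python
# Illustrative sketch: Bhattacharyya distance between two sample sets,
# estimated from histogram approximations of their distributions.
# Bin count and data below are hypothetical, not from the chapter.
import numpy as np

def bhattacharyya_distance(x, y, bins=20):
    """D_B = -ln( sum_i sqrt(p_i * q_i) ), with p, q histogram probabilities."""
    edges = np.histogram_bin_edges(np.concatenate([x, y]), bins=bins)
    p, _ = np.histogram(x, bins=edges)
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc) if bc > 0 else np.inf

rng = np.random.default_rng(0)
predictions  = rng.normal(1.0, 0.30, size=1000)   # hypothetical model output
measurements = rng.normal(1.1, 0.25, size=200)    # hypothetical experiments

print(bhattacharyya_distance(predictions, measurements))
# A smaller distance indicates better statistical agreement; in stochastic
# model updating it would drive the calibration of the distribution parameters.
```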


Author(s): Erik Quaeghebeur

Abstract The theory of imprecise probability is a generalization of classical 'precise' probability theory that allows imprecision and indecision to be modeled. This is a practical advantage in situations where a unique precise uncertainty model cannot be justified, for example when only a relatively small amount of data is available to learn the uncertainty model, or when the model's structure cannot be defined uniquely. The tools the theory provides make it possible to draw conclusions and make decisions that correctly reflect the limited information or knowledge available for the uncertainty modeling task. This extra expressivity, however, often implies a higher computational burden. The primary goal of this chapter is to give you the knowledge necessary to read literature that makes use of the theory of imprecise probability. A secondary goal is to provide the insight needed to use imprecise probabilities in your own research. To achieve these goals, we present the essential concepts and techniques of the theory and give a less in-depth overview of the various specific uncertainty models in use. Throughout, examples are used to make things concrete. We build on an assumed basic knowledge of classical probability theory.
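For readers new to the formalism, here is a minimal sketch of the central objects: assuming a finite credal set of probability mass functions, the lower and upper expectations of a gamble are simply the envelope of the precise expectations over that set. All numbers below are hypothetical.

```python
# Illustrative sketch: lower and upper expectations with respect to a
# (finite) credal set, i.e. a set of candidate probability mass functions.
# The credal set and the gamble's payoffs below are hypothetical.

outcomes = ['a', 'b', 'c']

# Three candidate distributions we cannot choose between (the credal set).
credal_set = [
    {'a': 0.5, 'b': 0.3, 'c': 0.2},
    {'a': 0.3, 'b': 0.4, 'c': 0.3},
    {'a': 0.2, 'b': 0.3, 'c': 0.5},
]

# A "gamble": a real-valued payoff for each outcome.
gamble = {'a': 10.0, 'b': -5.0, 'c': 2.0}

def expectation(p, f):
    return sum(p[w] * f[w] for w in outcomes)

values = [expectation(p, gamble) for p in credal_set]
lower, upper = min(values), max(values)   # lower/upper expectation (prevision)
print(lower, upper)

# Lower/upper probability of an event is the special case of a 0-1 gamble.
event_ab = {'a': 1.0, 'b': 1.0, 'c': 0.0}   # the event {a, b}
probs = [expectation(p, event_ab) for p in credal_set]
print(min(probs), max(probs))               # here 0.5 <= P({a, b}) <= 0.8
```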


Author(s): Zied Ben Bouallegue, David S. Richardson

The relative operating characteristic (ROC) curve is a popular diagnostic tool in forecast verification, with the area under the ROC curve (AUC) used as a verification metric measuring the discrimination ability of a forecast. Along with calibration, discrimination is deemed a fundamental attribute of a probabilistic forecast. In ensemble forecast verification in particular, AUC provides a basis for comparing the potential predictive skill of competing forecasts. While this approach is straightforward for forecasts of common events (e.g. probability of precipitation), the interpretation of AUC can turn out to be simplistic or misleading for rare events (e.g. precipitation exceeding some warning criterion). How should we interpret the AUC of ensemble forecasts when focusing on rare events? How can changes in the way probability forecasts are derived from the ensemble affect AUC results? How can we detect a genuine improvement in predictive skill? Based on verification experiments, a critical eye is cast on the interpretation of AUC to answer these questions. In addition to the traditional trapezoidal approximation and the well-known bi-normal fitting model, we discuss a new approach that embraces the concept of imprecise probabilities and relies on the subdivision of the lowest ensemble probability category.
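For context, here is a minimal sketch of the trapezoidal AUC estimate mentioned above, computed from probability forecasts of a binary event with one ROC point per probability threshold; the ensemble size, event base rate and forecast data are hypothetical.

```python
# Illustrative sketch: ROC points and trapezoidal AUC for probability
# forecasts of a binary event, with one ROC point per decision threshold.
# The forecasts and observations below are hypothetical.
import numpy as np

def roc_points(prob_forecasts, observations, thresholds):
    """Hit rate and false-alarm rate for each decision threshold."""
    obs = np.asarray(observations, dtype=bool)
    fcst = np.asarray(prob_forecasts)
    hits, falses = [], []
    for t in thresholds:
        warn = fcst >= t
        hits.append((warn & obs).sum() / max(obs.sum(), 1))
        falses.append((warn & ~obs).sum() / max((~obs).sum(), 1))
    return np.array(falses), np.array(hits)

def auc_trapezoidal(falses, hits):
    """Area under the ROC curve by the trapezoidal rule, endpoints included."""
    f = np.concatenate([[0.0], np.sort(falses), [1.0]])
    h = np.concatenate([[0.0], np.sort(hits), [1.0]])
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(f)))

rng = np.random.default_rng(1)
n_members = 10                                    # hypothetical ensemble size
obs = rng.random(500) < 0.05                      # rare event (~5% base rate)
# Ensemble-derived probabilities: noisy but informative about the event.
probs = np.clip(obs * 0.3 + rng.random(500) * 0.3, 0, 1)
probs = np.round(probs * n_members) / n_members   # discretised to m/10 categories

thresholds = np.arange(1, n_members + 1) / n_members
fa, hr = roc_points(probs, obs, thresholds)
print(auc_trapezoidal(fa, hr))
```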


2021, Vol 0 (0)
Author(s): Marc Andree Weber

Abstract The evidence that we get from peer disagreement is especially problematic from a Bayesian point of view, since the belief revision caused by such evidence cannot be modelled along the lines of Bayesian conditionalisation. This paper explains how exactly this problem arises, which features of peer disagreements are responsible for it, and what lessons should be drawn both for the analysis of peer disagreements and for Bayesian conditionalisation as a model of evidence acquisition. In particular, it points out that the same characteristic of evidence from disagreement that explains the problems with Bayesian conditionalisation also suggests an interpretation of suspension of belief in terms of imprecise probabilities.

