Aggregation of Epistemic Uncertainty in Forms of Possibility and Certainty Factors

Author(s):  
Koichi Yamada ◽  

Uncertainty aggregation is an important form of reasoning for decision making in the real world, which is full of uncertainty. This paper proposes an information-source model for aggregating epistemic uncertainties about truth and discusses uncertainty aggregation in the form of possibility distributions. A new combination rule for possibilities of truth is proposed. The paper then turns to a traditional but seemingly forgotten representation of uncertainty, certainty factors (CFs), and proposes a new interpretation based on possibility theory. CFs have been criticized for lacking a sound mathematical interpretation from the viewpoint of probability. This paper therefore first establishes a theory giving them a sound interpretation using possibility theory, and then examines the aggregation of CFs based on that interpretation and on several combination rules for possibility distributions. The paper proposes several combination rules for CFs with a sound theoretical basis, one of which is exactly the oft-criticized combination rule.
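The "oft-criticized combination" referred to here is the classic MYCIN-style rule for merging two certainty factors. A minimal sketch (the piecewise form is the standard one; the example values are illustrative):

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Classic MYCIN-style combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        # Two confirming sources reinforce each other.
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        # Two disconfirming sources reinforce each other symmetrically.
        return cf1 + cf2 * (1 + cf1)
    # Conflicting evidence: attenuate by the weaker belief.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(combine_cf(0.6, 0.5))   # confirming evidence: ~0.8
print(combine_cf(0.6, -0.4))  # conflicting evidence: ~0.33
```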

2014 ◽  
Vol 8 (1) ◽  
pp. 218-221 ◽  
Author(s):  
Ping Hu ◽  
Zong-yao Wang

We propose a non-monotone line search combination rule for unconstrained optimization problems, establish the corresponding non-monotone line search algorithm, and prove its global convergence. Finally, numerical experiments illustrate the effectiveness of the new non-monotone line search algorithm.
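As a rough illustration of the kind of method described, the following sketches a generic non-monotone Armijo backtracking search in the style of Grippo et al.; the paper's specific combination rule is not reproduced here, and all parameter values are illustrative assumptions:

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, d, history, c=1e-4, beta=0.5, max_iter=50):
    """Non-monotone Armijo backtracking: the sufficient-decrease test compares
    against the max of several recent objective values rather than f(x) alone,
    so the objective is allowed to increase occasionally."""
    f_ref = max(history)         # reference value over recent iterates
    slope = c * (grad(x) @ d)    # scaled directional derivative (negative for descent)
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= f_ref + alpha * slope:
            return alpha
        alpha *= beta            # backtrack
    return alpha

# Minimise f(x) = x^2 from x = 2 along the steepest-descent direction.
f = lambda x: float(x @ x)
grad = lambda x: 2 * x
x = np.array([2.0])
d = -grad(x)
alpha = nonmonotone_armijo(f, grad, x, d, history=[f(x)])
print(alpha, f(x + alpha * d))  # 0.5 0.0
```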


Author(s):  
K. DEMIRLI ◽  
M. MOLHIM ◽  
A. BULGAK

Sonar sensors are widely used in mobile robot applications such as navigation, map building, and localization. The performance of these sensors is affected by environmental phenomena, sensor design, and target characteristics; the readings obtained from them are therefore uncertain. This uncertainty is often modeled using probability theory. However, the probabilistic approach is valid only when the available knowledge is precise, which is not the case for sonar readings. In this paper, the behavior of sonar readings reflected from walls and corners is studied, and new models of angular uncertainty and radial imprecision for sonar readings obtained from corners and walls are proposed. These models are represented using possibility theory, mainly possibility distributions.
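For illustration only, a possibility distribution over the reflection angle within a sonar beam might be modelled as a simple triangular distribution; the triangular shape and the beam half-width below are assumptions, not the models proposed in the paper:

```python
def triangular_possibility(theta: float, center: float, half_width: float) -> float:
    """Possibility degree that the true reflection angle is `theta` (degrees),
    given a beam centred at `center` with the stated half-width: 1 on the beam
    axis, falling linearly to 0 at the beam edges."""
    return max(0.0, 1.0 - abs(theta - center) / half_width)

# A beam centred on 0 degrees with a 22.5-degree half-width:
print(triangular_possibility(0.0, 0.0, 22.5))    # 1.0 (fully possible on-axis)
print(triangular_possibility(11.25, 0.0, 22.5))  # 0.5 (halfway to the edge)
print(triangular_possibility(30.0, 0.0, 22.5))   # 0.0 (outside the beam)
```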


Author(s):  
Sofiia Alpert

The solution of various practical and ecological problems using hyperspectral satellite images usually includes a classification procedure, one of the most difficult and important steps. This work considers and analyzes several image classification methods based on the theory of evidence. Evidence theory can model uncertainty and process imprecise and incomplete information. The following combination rules are considered: the "mixing" (averaging) rule, convolutive x-averaging (c-averaging), and Smets' rule. It is shown that these methods can process data from multiple sources or spectral bands that provide different assessments of the same hypotheses, and that the purpose of aggregating information is to simplify the data, whether it comes from multiple sources or from different spectral bands. Smets' rule, applied in Smets' Transferable Belief Model, is an unnormalized version of Dempster's rule; it also processes imprecise and incomplete data and entails a slightly different formulation of Dempster-Shafer theory. The mixing (averaging) rule is the averaging operation used for probability distributions; it takes basic probability assignments from different sources (spectral bands) and weighs them according to the reliability of the sources. The convolutive x-averaging (c-averaging) rule is a generalization of the average for scalar numbers; it is commutative but not associative, and it can combine any number of basic probability assignments. Examples of the use of these combination rules are also given.
Mixing, convolutive x-averaging (c-averaging), and Smets' combination rule can be applied to the analysis of hyperspectral satellite images, to remote prospecting for minerals and oil, and to solving various environmental and thematic problems.
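The rules discussed above can be sketched concretely. Below, a conjunctive combination with and without normalisation gives Dempster's rule and Smets' unnormalised rule respectively, and `mixing` is the weighted-averaging rule; the two-band water/vegetation example and its mass values are invented for illustration:

```python
from itertools import product

def combine(m1, m2, normalise=True):
    """Conjunctive combination of two basic probability assignments (BPAs),
    each a dict mapping frozenset hypotheses to mass. With normalise=True this
    is Dempster's rule; with normalise=False the conflict mass remains on the
    empty set, as in Smets' unnormalised rule (Transferable Belief Model)."""
    out = {}
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + x * y
    if normalise:
        k = out.pop(frozenset(), 0.0)  # mass on the empty set = conflict
        out = {s: v / (1.0 - k) for s, v in out.items()}
    return out

def mixing(bpas, weights):
    """'Mixing' rule: weighted average of BPAs, with weights summing to 1."""
    out = {}
    for m, w in zip(bpas, weights):
        for s, v in m.items():
            out[s] = out.get(s, 0.0) + w * v
    return out

W, V = frozenset({"water"}), frozenset({"vegetation"})
m1 = {W: 0.7, W | V: 0.3}              # evidence from spectral band 1
m2 = {W: 0.5, V: 0.3, W | V: 0.2}      # evidence from spectral band 2
print(combine(m1, m2))                 # Dempster: mass on W ~ 0.81
print(combine(m1, m2, normalise=False))  # Smets: conflict ~0.21 stays on the empty set
print(mixing([m1, m2], [0.5, 0.5]))    # averaging: mass on W = 0.6
```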


2011 ◽  
Vol 133 (2) ◽  
Author(s):  
Toshihisa Nishioka ◽  
Guangqin Zhou ◽  
Takehiro Fujimoto

In nuclear pressure vessels, multiple surface cracks are often found by regular inspection. To evaluate the integrity of such vessels, ASME B&PV Code Section XI provides flaw combination rules; however, their accuracy has not yet been clarified. For the analysis of interacting multiple semi-elliptical surface cracks, Nishioka and Atluri in 1983 developed the Vijayakumar, Nishioka, and Atluri (VNA) solution finite element alternating method, which is highly accurate and cost-effective. Using this method, the case of two extremely closely located interacting coplanar cracks was analyzed. The numerical results show that B&PV Code Section XI provides a conservative flaw combination rule; the code is thus precisely verified by modern, accurate computational technologies.


Author(s):  
Bostjan Bezensek ◽  
John Sharples ◽  
Mark Wilkes

Multiple adjacent flaws are aligned and combined into simplified flaws for the purpose of flaw assessment in fitness-for-service procedures. Adjacent co-planar flaws are characterised by a bounding semi-elliptical flaw in accordance with flaw combination rules. Approximately 10 years ago the BS7910 and R6 procedures revised their flaw combination rules to allow adjacent co-planar flaws to touch prior to flaw characterisation. An extensive experimental and analytical programme has been under way over the past few years in the UK to examine the conservatism of flaw characterisation for fatigue, ductile tearing and cleavage under the new flaw combination rule. It was found that for fatigue and ductile tearing of flaws in contact the flaw characterisation is conservative, while for cleavage it was found to be non-conservative when failure is accompanied by only small amounts of crack tip plasticity. In this paper an engineering approach is introduced to identify the cases in which flaw characterisation becomes non-conservative for cleavage. The approach relies on comparing the material's capacity for plastic deformation with the geometry of the complex flaw, so that no detailed analyses or extensive material characterisation are required for every examined case. The procedure is developed on the basis of test data, and finite element analyses are used to extend its applicability to other flaws.
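As a toy illustration of the touching-flaws idea described above (not the BS7910/R6 procedure itself, whose interaction criteria are more detailed), two coplanar surface flaws might be characterised as follows; the flaw dimensions are invented:

```python
def characterise(l1, a1, l2, a2, gap):
    """Bounding characterisation of two coplanar surface flaws with surface
    lengths l1, l2 and depths a1, a2 (consistent units), separated by `gap`.
    In this simplified sketch the flaws are combined only once they touch
    (gap <= 0); the combined flaw spans both originals end to end and takes
    the deeper depth."""
    if gap > 0:
        return [(l1, a1), (l2, a2)]            # still treated as separate flaws
    return [(l1 + l2 + gap, max(a1, a2))]      # bounding flaw: full extent, deeper depth

# A 20 mm x 5 mm flaw and a 10 mm x 8 mm flaw that have just touched:
print(characterise(20.0, 5.0, 10.0, 8.0, 0.0))  # [(30.0, 8.0)]
# The same flaws 4 mm apart remain separate under this sketch:
print(characterise(20.0, 5.0, 10.0, 8.0, 4.0))  # [(20.0, 5.0), (10.0, 8.0)]
```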


2014 ◽  
Vol 15 (1) ◽  
pp. 79-116 ◽  
Author(s):  
KIM BAUTERS ◽  
STEVEN SCHOCKAERT ◽  
MARTINE DE COCK ◽  
DIRK VERMEIR

Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot easily be used to reason about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where the weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, PASP allows us to model and reason about uncertain information in an intuitive way. In this paper we present a new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not previously been considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with that of standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
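The reading of rule weights as certainties can be sketched for the simple case of definite (negation-free) rules; this forward-chaining toy, with invented atoms and weights, illustrates the intuition rather than the paper's constraint-based semantics:

```python
def possibilistic_closure(rules):
    """Forward chaining over weighted definite rules (head, body_atoms, weight).
    An atom's certainty (necessity degree) is the best value over all
    derivations, where a derivation through a rule is capped by
    min(rule weight, certainties of the body atoms) -- the reading of
    'weight = certainty of the conclusion when the body holds'."""
    nec = {}
    changed = True
    while changed:
        changed = False
        for head, body, w in rules:
            v = min([w] + [nec.get(b, 0.0) for b in body])
            if v > nec.get(head, 0.0):
                nec[head] = v
                changed = True
    return nec

rules = [
    ("wet", [], 0.9),            # fact, certain to degree 0.9
    ("slippery", ["wet"], 0.7),  # if wet then slippery, certainty 0.7
    ("fall", ["slippery"], 0.4),
]
print(possibilistic_closure(rules))  # certainties weaken along the chain
```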


Author(s):  
HENRI PRADE ◽  
RONALD R. YAGER

This note investigates how various ideas of "expectedness" can be captured in the framework of possibility theory. In particular, we are interested in introducing estimates of the kind of lack of surprise expressed by people when saying "I would not be surprised that…" before an event takes place, or "I knew it" after its realization. In possibility theory, a possibility distribution is supposed to model the relative levels of possibility of mutually exclusive alternatives in a set; equivalently, the alternatives are assumed to be rank-ordered according to their possibility of taking place. Four basic set functions associated with a possibility distribution, including the standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed for the case where only significant parts of the possibility distribution are retained in the evaluation. The case of partially known possibility distributions is also considered, and some potential applications are outlined.
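The two standard set functions mentioned, the possibility and necessity measures, can be computed directly from a possibility distribution; the weather alternatives below are an invented example:

```python
def possibility(pi, event):
    """Pi(A): the max possibility degree over the alternatives in A."""
    return max(pi[w] for w in event)

def necessity(pi, event):
    """N(A) = 1 - Pi(not A): A is certain to the extent that every
    alternative outside A is impossible."""
    rest = [pi[w] for w in pi if w not in event]
    return 1.0 - max(rest) if rest else 1.0

# Rank-ordered, mutually exclusive alternatives for tomorrow's weather:
pi = {"sun": 1.0, "cloud": 0.7, "rain": 0.3}
print(possibility(pi, {"sun", "cloud"}))  # 1.0
print(necessity(pi, {"sun", "cloud"}))    # 1 - Pi({rain}) ~ 0.7
```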

