Interval-Valued Degrees of Belief: Applications of Interval Computations to Expert Systems and Intelligent Control

Author(s):  
Hung T. Nguyen, Vladik Kreinovich, Qiang Zuo

Usually, expert systems use numbers to describe the experts' degrees of belief in their statements. In practice, however, it is difficult to assign an exact numerical value to an expert's degree of belief; at best, we can get an interval of possible values. This fact leads to the use of interval-valued degrees of belief. When intervals are used to describe degrees of belief, interval computations must be used to process them. In this paper, we describe applications of such interval computations to expert systems and to intelligent control.
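As a rough illustration of the idea (a minimal sketch, not the authors' specific formulas), the code below combines interval-valued degrees of belief for "and" and "or" using the endpoint-wise min/max rules common in rule-based expert systems; the class and method names are hypothetical.

```python
# Minimal sketch of interval-valued degrees of belief, assuming the common
# min/max combination rules used in many rule-based expert systems.
# The class and method names are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class BeliefInterval:
    lo: float  # lower bound on the degree of belief, in [0, 1]
    hi: float  # upper bound on the degree of belief, in [0, 1]

    def __post_init__(self):
        if not (0.0 <= self.lo <= self.hi <= 1.0):
            raise ValueError("need 0 <= lo <= hi <= 1")

    def and_(self, other: "BeliefInterval") -> "BeliefInterval":
        # degree of belief in (A and B): apply min to both endpoints
        return BeliefInterval(min(self.lo, other.lo), min(self.hi, other.hi))

    def or_(self, other: "BeliefInterval") -> "BeliefInterval":
        # degree of belief in (A or B): apply max to both endpoints
        return BeliefInterval(max(self.lo, other.lo), max(self.hi, other.hi))

# Example: an expert is 0.6-0.8 sure of premise A and 0.5-0.9 sure of premise B.
a = BeliefInterval(0.6, 0.8)
b = BeliefInterval(0.5, 0.9)
print(a.and_(b))  # BeliefInterval(lo=0.5, hi=0.8)
print(a.or_(b))   # BeliefInterval(lo=0.6, hi=0.9)
```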

1983, Vol. 6 (2), pp. 231-245
Author(s):  
Henry E. Kyburg

There is a tension between normative and descriptive elements in the theory of rational belief. This tension has been reflected in work in psychology and decision theory as well as in philosophy. Canons of rationality should be tailored to what is humanly feasible. But rationality has normative content as well as descriptive content.

A number of issues related to both deductive and inductive logic can be raised. Are there full beliefs – statements that are categorically accepted? Should statements be accepted when they become overwhelmingly probable? What is the structure imposed on these beliefs by rationality? Are they consistent? Are they deductively closed? What parameters, if any, does rational acceptance depend on? How can accepted statements come to be rejected on new evidence?

Should degrees of belief satisfy the probability calculus? Does conformity to the probability calculus exhaust the rational constraints that can be imposed on partial beliefs? With the acquisition of new evidence, should beliefs change in accord with Bayes' theorem? Are decisions made in accord with the principle of maximizing expected utility? Should they be?

A systematic set of answers to these questions is developed on the basis of a probabilistic rule of acceptance and a conception of interval-valued logical probability according to which probabilities are based on known frequencies. This leads to limited deductive closure, a demand for only limited consistency, and the rejection of Bayes' theorem as universally applicable to changes of belief. It also becomes possible, given new evidence, to reject previously accepted statements.
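The abstract does not spell out the formalism. As a rough schematic of the kind of rules involved (my notation, not Kyburg's exact definitions), interval-valued probability based on known frequencies and a probabilistic rule of acceptance can be written as:

```latex
% Schematic only; notation is assumed, not taken from the article.
% The relative frequency of property B in reference class R is known only to
% lie in an interval, and that interval serves as the probability of "a is B"
% for an object a known to belong to R:
\[
  \mathrm{Prob}(B(a)) \;=\; [\,p_{\min},\, p_{\max}\,],
  \qquad \text{where } \mathrm{freq}(B \mid R) \in [\,p_{\min},\, p_{\max}\,].
\]
% Probabilistic rule of acceptance: accept a statement S at level \varepsilon
% when even the lower bound of its probability is high enough:
\[
  \text{accept } S \quad \Longleftrightarrow \quad
  \underline{\mathrm{Prob}}(S) \;\geq\; 1 - \varepsilon .
\]
```

On a rule of this shape, acceptance is driven by the lower endpoint of the interval, which is one way the limited closure and limited consistency mentioned in the abstract can arise.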


Author(s):  
Vladik Kreinovich

In this issue, we continue to publish abstracts and reviews of recent papers on interval methods in knowledge representation. In knowledge representation, intervals are used for two main purposes (a small arithmetic sketch follows this entry):

• to describe durations of events; and
• to describe the uncertainty of measurement results and expert estimates of different quantities; often, we do not know the exact value of a quantity, but we know its lower and upper bounds (e.g., we may not know the exact value of someone's weight, but we may know that this weight is between 140 and 160 pounds).

An important case of this uncertainty occurs in knowledge elicitation, when we ask experts to numerically estimate their degrees of belief in their own statements; in this case, it is often difficult for an expert to estimate this degree of belief precisely, but an expert can often provide us with an interval of possible values. The reviews are collected by Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX 79968, USA, email [email protected]
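To make the interval bookkeeping concrete, here is a minimal interval-arithmetic sketch (the names and the second quantity are illustrative, not from any reviewed paper): quantities known only up to lower and upper bounds are added and scaled endpoint-wise.

```python
# Minimal interval-arithmetic sketch; names and numbers are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # sum of two uncertain quantities: add the corresponding bounds
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, c: float) -> "Interval":
        # multiply by a known non-negative constant
        return Interval(c * self.lo, c * self.hi)

# Example from the text: a weight known only to lie between 140 and 160 pounds.
weight_lb = Interval(140.0, 160.0)
backpack_lb = Interval(10.0, 15.0)      # another uncertain quantity (assumed)
total_lb = weight_lb + backpack_lb      # Interval(lo=150.0, hi=175.0)
total_kg = total_lb.scale(0.45359237)   # convert pounds to kilograms
print(total_lb, total_kg)
```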


Author(s):  
Rani Lill Anjum, Stephen Mumford

The issue of probability enters into science because there can be inconclusive evidence, degrees of belief, and chancy phenomena in the world. This is relevant to Bayesian thinking, for example, which holds that theories should be accepted only tentatively and considered more or less probable in the light of new evidence. Probability can be modelled in a simplified way, for instance by assigning a maximal degree of belief the value 1. A question remains of how well this reflects the reality of epistemic phenomena, which seems to allow cases of more than certainty, i.e. where you would still be certain of something even with less evidence than there is.
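On the simplified modelling mentioned here, degrees of belief are standardly mapped onto the unit interval (an assumption about the intended formalism, not a claim from the book), which is why "more than certainty" has no value left on the scale:

```latex
% Standard normalization of degrees of belief onto [0,1]; certainty sits at
% the top of the scale, so any further evidence cannot raise it.
\[
  \mathrm{bel} : \text{Propositions} \to [0,1], \qquad
  \mathrm{bel}(A) = 1 \ \text{iff the agent is maximally confident in } A .
\]
```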


Author(s):  
Jan Sprenger, Stephan Hartmann

How does Bayesian inference handle the highly idealized nature of many (statistical) models in science? The standard interpretation of probability as degree of belief in the truth of a model does not seem to apply in such cases since all candidate models are most probably wrong. Similarly, it is not clear how chance-credence coordination works for the probabilities generated by a statistical model. We solve these problems by developing a suppositional account of degree of belief where probabilities in scientific modeling are decoupled from our actual (unconditional) degrees of belief. This explains the normative pull of chance-credence coordination in Bayesian inference, uncovers the essentially counterfactual nature of reasoning with Bayesian models, and squares well with our intuitive judgment that statistical models provide “objective” probabilities.
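The abstract only sketches the proposal. As a hedged reconstruction of the kind of coordination at issue (my notation, not the authors'), the suppositional reading treats model-based probabilities as credences conditional on the supposition that the model is adequate, and chance-credence coordination equates those conditional credences with the model's chances:

```latex
% Hedged reconstruction; notation is assumed, not taken from the paper.
% p_M(E): the probability the statistical model M assigns to event E.
% cr(.):  the agent's credence (degree of belief).
\[
  \mathrm{cr}(E \mid M) \;=\; p_M(E)
  \qquad \text{(coordination under the supposition that } M \text{ is adequate).}
\]
```

The unconditional credence cr(E) need not equal p_M(E), since the model M itself is typically an idealization; this is the decoupling the abstract describes.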

