probability logic
Recently Published Documents


TOTAL DOCUMENTS: 110 (FIVE YEARS: 10)
H-INDEX: 16 (FIVE YEARS: 1)

Mathematics, 2021, Vol 9 (12), pp. 1409
Author(s): Marija Boričić Joksimović

We give some simple examples of applying well-known elementary probability inequalities and properties to logical argumentation. A probabilistic version of the hypothetical syllogism inference rule reads as follows: if propositions A, B, C, A→B, and B→C have probabilities a, b, c, r, and s, respectively, then the probability p of A→C satisfies f(a,b,c,r,s)≤p≤g(a,b,c,r,s) for some functions f and g of the given parameters. In this paper, after a short overview of known rules for conjunction and disjunction, we propose probabilized forms of the hypothetical syllogism inference rule, with the best possible bounds for the probability of the conclusion, simultaneously covering the probabilistic versions of both the modus ponens and modus tollens rules, as previously considered by Suppes, Hailperin, and Wagner.
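The kind of bound described here can be checked numerically: treating the eight truth-value assignments to A, B, C as atoms and reading the conditionals as material implications, a small linear program over joint distributions recovers the tightest possible bounds on P(A→C). The sketch below is illustrative only, not the paper's construction; the premise probabilities are assumed values chosen to be consistent (they arise from independent A, B, C).

```python
from itertools import product

import numpy as np
from scipy.optimize import linprog

# Atoms: the eight truth-value assignments to (A, B, C).
atoms = list(product([0, 1], repeat=3))

def indicator(pred):
    # Row vector: 1 on atoms where the formula holds, else 0.
    return [1.0 if pred(A, B, C) else 0.0 for (A, B, C) in atoms]

# Assumed premise probabilities; consistent by construction, since
# independent A, B, C with P(A)=0.8, P(B)=0.7, P(C)=0.6 give
# P(A->B) = 0.2 + 0.8*0.7 = 0.76 and P(B->C) = 0.3 + 0.7*0.6 = 0.72.
a, b, c, r, s = 0.8, 0.7, 0.6, 0.76, 0.72

A_eq = np.array([
    indicator(lambda A, B, C: True),          # total probability = 1
    indicator(lambda A, B, C: A),             # P(A) = a
    indicator(lambda A, B, C: B),             # P(B) = b
    indicator(lambda A, B, C: C),             # P(C) = c
    indicator(lambda A, B, C: (not A) or B),  # P(A->B) = r (material)
    indicator(lambda A, B, C: (not B) or C),  # P(B->C) = s (material)
])
b_eq = np.array([1.0, a, b, c, r, s])

obj = np.array(indicator(lambda A, B, C: (not A) or C))  # P(A->C)
lo = linprog(obj, A_eq=A_eq, b_eq=b_eq).fun              # tightest lower bound
hi = -linprog(-obj, A_eq=A_eq, b_eq=b_eq).fun            # tightest upper bound
print(f"{lo:.4f} <= P(A->C) <= {hi:.4f}")
```

Because the independent distribution itself satisfies all five constraints and gives P(A→C) = 0.2 + 0.8·0.6 = 0.68, the computed interval must contain 0.68.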


2021, pp. 672-687
Author(s): Niki Pfeifer, Giuseppe Sanfilippo

Author(s): M Pourmahdian, R Zoghifard

Abstract This paper provides a model-theoretic analysis of probability (modal) logic ($PL$). It is known that this logic does not enjoy the compactness property. However, it is shown that its sublogic, basic probability logic ($BPL$), does satisfy compactness. Furthermore, by drawing attention to some essential model-theoretic properties of $PL$, a version of the Lindström characterization theorem is investigated: it is verified that probability logic has the maximal expressive power among those abstract logics extending $PL$ that satisfy both the filtration and disjoint unions properties. Finally, by altering the semantics to finitely additive probability models ($\mathcal{F}\mathcal{P}\mathcal{M}$) and introducing a positive sublogic of $PL$ that includes $BPL$, it is proved that this sublogic possesses the compactness property with respect to $\mathcal{F}\mathcal{P}\mathcal{M}$.


2020, Vol 45 (3), pp. 704-707
Author(s): Mark D. Packard, Brent B. Clark

2020, Vol 30 (1), pp. 61-76
Author(s): Sergei Artemov

Abstract Imagine a database, a set of propositions $\varGamma =\{F_1,\ldots ,F_n\}$ with some kind of probability estimates, and let a proposition $X$ logically follow from $\varGamma $. What is the best justified lower bound on the probability of $X$? The traditional approach, e.g. within Adams' probability logic, computes the numeric lower bound for $X$ corresponding to the worst-case scenario. We suggest a more flexible parameterized approach: assume probability events $u_1,u_2,\ldots ,u_n$ that support $\varGamma $ and calculate the aggregated evidence $e(u_1,u_2,\ldots ,u_n)$ for $X$. The probability of $e$ provides a tight lower bound for any situation, not only the worst case. The problem is formalized in a version of justification logic, and the conclusions are supported by corresponding completeness theorems. This approach can handle conflicting and inconsistent data and allows gathering both positive and negative evidence for the same proposition.
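The worst-case baseline that this paper improves on can be made concrete. A standard Adams-style bound (a minimal sketch of the traditional approach, not the paper's justification-logic machinery) says that when $X$ logically follows from the premises, the uncertainty of $X$ is at most the sum of the uncertainties of the premises:

```python
def worst_case_lower_bound(premise_probs):
    """Adams-style bound: if X logically follows from premises with the
    given probabilities, then P(X) >= 1 - sum of premise uncertainties,
    clamped at 0 (the bound is vacuous once uncertainties sum past 1)."""
    uncertainty = sum(1.0 - p for p in premise_probs)
    return max(0.0, 1.0 - uncertainty)

# Three premises at 0.9, 0.95, 0.99: uncertainties 0.1 + 0.05 + 0.01 = 0.16.
print(round(worst_case_lower_bound([0.9, 0.95, 0.99]), 4))  # 0.84
```

Note how quickly the bound degrades with many moderately uncertain premises; the parameterized evidence-aggregation approach described above is designed to give informative bounds beyond this worst case.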


2019, pp. 61-69
Author(s): Miguel López-Astorga

This paper explores possible relations and differences between three kinds of contemporary theories of cognition and language: approaches holding that there is a mental logic, the mental models theory, and frameworks based on probability logic. The exploration proceeds by means of analytic sentences, reviewing how each of these types of theories can deal with them. The conclusions suggest that the three kinds of theories address such sentences in a similar manner, which may mean that there are more links between them than previously thought.


Author(s): Brendan Juba

Standard approaches to probabilistic reasoning require that one possess an explicit model of the distribution in question. But the empirical learning of models of probability distributions from partial observations is a problem for which efficient algorithms are generally not known. In this work we consider the use of bounded-degree fragments of the “sum-of-squares” logic as a probability logic. Prior work has shown that refutability for such fragments can be decided in polynomial time. We propose to use such fragments to decide queries about whether a given probability distribution satisfies a given system of constraints and bounds on expected values. We show that in answering such queries, such constraints and bounds can be implicitly learned from partial observations in polynomial time as well. This logic is known to be capable of deriving many bounds that are useful in probabilistic analysis; we show here that it furthermore captures key polynomial-time fragments of resolution. Thus, these fragments are also quite expressive.

