logical probability
Recently Published Documents


TOTAL DOCUMENTS: 36 (FIVE YEARS: 12)
H-INDEX: 6 (FIVE YEARS: 1)

Axiomathes ◽ 2021 ◽ Author(s): Jan Woleński

Abstract: The problem of induction is among the most controversial issues in the philosophy of science. If induction is understood broadly, it covers every fallible inference, that is, one whose conclusion is not logically entailed by its premises. This paper analyses so-called reductive induction, that is, reasoning in which the premises follow from the conclusion, but the reverse relation does not hold. Two issues are taken into account, namely the definition of reductive inference and its justification. The analysis proposed in the paper employs metalogical tools. The author agrees with the view that a quantitative account of the degree of confirmation of universal theories via logical probability is problematic. However, prospects for a qualitative approach look more promising. The construction of maximally consistent sets makes it possible to distinguish good induction from worthless induction, and shows how to understand induction in a semantic way. A closer analysis of deductivism in the theory of justification shows that it is a hidden inductivism.
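The contrast the abstract trades on can be put compactly. The notation below is mine, a sketch rather than Woleński's own formalism:

```latex
\[
\text{Deduction:}\quad \Gamma \models \varphi
\qquad\qquad
\text{Reductive induction:}\quad \varphi \models \Gamma
\ \text{ and }\ \Gamma \not\models \varphi
\]
```

A standard instance: each observed premise "$a_1$ is $F$", ..., "$a_n$ is $F$" follows from the universal conclusion "all objects are $F$", but the premises taken together do not entail that conclusion.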


2021 ◽ Vol 12 (1) ◽ pp. 131-144 ◽ Author(s): Malvina Ongaro

Abstract: In this paper, I propose an assessment of the interpretation of the mathematical notion of probability that Wittgenstein presents in the TLP (1963: 5.15 – 5.156). I start by presenting his definition of probability as a relation between propositions. I claim that this definition qualifies as a logical interpretation of probability, of the kind defended in the same years by J. M. Keynes. However, Wittgenstein's interpretation seems prima facie to be safe from two standard objections raised against logical probability, i.e. the mysterious nature of the postulated relation and the reliance on Laplace's principle of indifference. I then proceed to evaluate Wittgenstein's idea against three criteria for the adequacy of an interpretation of probability: admissibility, ascertainability, and applicability. While the interpretation is admissible under Kolmogorov's classical axiomatisation, the problem of ascertainability raises a difficult dilemma. Finally, I test the interpretation in its application to three main contexts of use of probabilities. While the application to frequencies remains ungrounded, the application to induction requires some elaboration, and the application to rational belief depends on ascertainability.
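For orientation, the definition under assessment (TLP 5.15) can be stated compactly; the transcription below is mine, not Ongaro's:

```latex
\[
P(s \mid r) \;=\; \frac{T_{rs}}{T_r}
\]
% T_r  : number of truth-grounds of r (rows of the truth table verifying r)
% T_rs : number of truth-grounds of r that are also truth-grounds of s
```

For example, with independent elementary propositions $p$ and $q$, the proposition $p \lor q$ has three truth-grounds, two of which also verify $p$, so $p \lor q$ gives $p$ the probability $2/3$.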


2021 ◽ Vol 17 (2) ◽ pp. 87-96 ◽ Author(s): Gyula Pulay

Reducing greenhouse gas (GHG) emissions, considered the main cause of global warming, is one of the greatest challenges of our time. The implementation of new practices is assisted by the supreme audit institutions, among them the State Audit Office of Hungary (SAO), with advice based on their audits. Auditing is effective when it is carried out in the areas most at risk of missing the objective. The SAO's experts have developed a method for identifying the branches of the national economy most at risk with respect to reducing GHG emissions. This article presents the essence of the method, the logical probability model developed, and the results of the calculations.
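The abstract does not spell the model out, so the sketch below is purely illustrative: a minimal logical probability scoring of branches, with all branch names, failure conditions, and numbers invented for the example rather than taken from the SAO's model.

```python
# Hypothetical sketch: score each economic branch by the probability that it
# misses its GHG-reduction objective, modelled as a logical OR of independent
# failure conditions. Not the SAO's actual model; all inputs are made up.

def p_or(*ps: float) -> float:
    """P(A or B or ...) for independent events: 1 - prod(1 - p_i)."""
    result = 1.0
    for p in ps:
        result *= 1.0 - p
    return 1.0 - result

# Illustrative per-branch probabilities of three failure conditions:
# (no viable technology, no funding, rising demand) -- invented numbers.
branches = {
    "energy":      (0.30, 0.20, 0.40),
    "transport":   (0.50, 0.30, 0.60),
    "agriculture": (0.40, 0.50, 0.20),
}

# A branch is at risk if any one of its failure conditions obtains.
risk = {name: p_or(*ps) for name, ps in branches.items()}

for name, r in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} risk of missing GHG objective: {r:.2f}")
```

Ranking branches by such a score would single out the areas where an audit is most likely to be effective, which is the use the abstract describes.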


Philosophies ◽ 2020 ◽ Vol 5 (4) ◽ pp. 25 ◽ Author(s): Chenguang Lu

Many researchers want to unify probability and logic by defining logical probability or probabilistic logic in a reasonable way. This paper tries to unify statistics and logic so that we can use both statistical probability and logical probability at the same time. For this purpose, the paper proposes the P–T probability framework, which combines Shannon's statistical probability framework for communication, Kolmogorov's probability axioms for logical probability, and Zadeh's membership functions used as truth functions. The two kinds of probabilities are connected by an extended Bayes' theorem, with which we can convert a likelihood function and a truth function into one another. Hence, we can train truth functions (in logic) by sampling distributions (in statistics). The framework was developed in the author's long-term studies on semantic information, statistical learning, and color vision. The paper first proposes the P–T probability framework and explains the different probabilities in it through its applications to semantic information theory. Then, the framework and the semantic information methods are applied to statistical learning, statistical mechanics, hypothesis evaluation (including falsification), confirmation, and Bayesian reasoning. These theoretical applications illustrate the reasonableness and practicability of the framework, which is helpful for interpretable AI; applying it to interpret neural networks requires further study.
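The hinge of the framework is the conversion between the two kinds of probability. The following is my reconstruction of the sort of extended Bayes' relation the abstract describes, based on Lu's semantic information work; treat the exact form as an assumption rather than a quotation from the paper.

```latex
\[
P(x \mid \theta_j) \;=\; \frac{P(x)\, T(\theta_j \mid x)}{T(\theta_j)},
\qquad
T(\theta_j) \;=\; \sum_x P(x)\, T(\theta_j \mid x)
\]
% P(x)              : statistical probability of instance x (from sampling)
% T(\theta_j \mid x): truth (membership) function of hypothesis y_j, in [0,1]
% T(\theta_j)       : logical probability of y_j
% P(x \mid \theta_j): likelihood function recovered from the truth function
```

Read in the other direction, a truth function can be obtained from a likelihood function as $T(\theta_j \mid x) \propto P(x \mid \theta_j)/P(x)$, normalised so that its maximum is 1; this is what lets truth functions be trained from sampling distributions.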


Entropy ◽ 2020 ◽ Vol 22 (4) ◽ pp. 384 ◽ Author(s): Chenguang Lu

After long arguments between positivism and falsificationism, the verification of universal hypotheses was replaced with the confirmation of uncertain major premises. Unfortunately, Hempel proposed the Raven Paradox. Then, Carnap used the increment of logical probability as the confirmation measure. Many confirmation measures have since been proposed. Among them, measure F, proposed by Kemeny and Oppenheim, possesses the symmetries and asymmetries proposed by Eells and Fitelson, the monotonicity proposed by Greco et al., and the normalizing property suggested by many researchers. Based on semantic information theory, a measure b* similar to F is derived from the medical test example. Like the likelihood ratio, measures b* and F indicate only the quality of channels or testing means, not the quality of probability predictions. Furthermore, it is still not easy to use b*, F, or any other measure to clarify the Raven Paradox. For this reason, a measure c*, similar to the correct rate, is derived. Measure c* supports the Nicod Criterion and undermines the Equivalence Condition, and hence can be used to eliminate the Raven Paradox. One example indicates that measures F and b* are helpful for diagnosing Novel Coronavirus infections, whereas the most popular confirmation measures are not. Another example reveals that none of the popular confirmation measures can explain why a black raven confirms "Ravens are black" more strongly than a piece of chalk does. Measures F, b*, and c* indicate that the existence of fewer counterexamples matters more than the existence of more positive examples, and hence they are compatible with Popper's falsification thought.
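Measure F has a standard closed form, so a small worked example is easy to give. The sketch below applies it to a hypothetical binary test of the kind the abstract mentions; the test numbers are illustrative, and the paper's own b* and c* definitions are not reproduced here.

```python
# Kemeny-Oppenheim confirmation measure F for a binary medical test:
#   F(H, E) = [P(E|H) - P(E|~H)] / [P(E|H) + P(E|~H)]
# With E = "test positive" and H = "infected":
#   P(E|H)  = sensitivity, P(E|~H) = 1 - specificity (false-positive rate).
# The numbers below are hypothetical, not taken from the paper.

def kemeny_oppenheim_F(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """F ranges over [-1, 1]; like a likelihood ratio, it reflects the
    quality of the test (channel), not of a probability prediction."""
    return (p_e_given_h - p_e_given_not_h) / (p_e_given_h + p_e_given_not_h)

sensitivity, specificity = 0.90, 0.95            # hypothetical test
F = kemeny_oppenheim_F(sensitivity, 1 - specificity)
print(f"F = {F:.3f}")                            # F = 0.895: strong confirmation
```

Note how F rewards a low false-positive rate: shrinking P(E|~H) toward zero pushes F toward 1, which is one way to see the claim that fewer counterexamples matter more than additional positive examples.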


2020 ◽ Vol 11 (4) ◽ pp. 429-445 ◽ Author(s): Nicholas Danne

To justify inductive inference and vanquish classical skepticisms about human memory, external-world realism, etc., Richard Fumerton proposes his "inferential internalism," an epistemology whereby humans 'see', by Russellian acquaintance, Keynesian probable relations (PRs) between propositions. PRs are a priori necessary relations of logical probability, akin to but not reducible to logical entailments, such that perceiving a PR between one's evidence E and a proposition P of unknown truth value justifies rational belief in P to an objective degree. A recent critic of inferential internalism is Alan Rhoda, who questions its psychological plausibility. Rhoda argues that in order to see necessary relations between propositions E and P, one would need acquaintance with too many propositions at once, since our evidence E is often complex. In this paper, I criticize Rhoda's implausibility objection as too quick. Referencing the causal status effect (CSE) from psychology, I argue that some of the complex features of evidence E contribute to our type-categorizing it as E-type, and thus we do not need to 'see' all of the complex features when we see the PR between E and P. My argument leaves Fumerton's justificatory role for the PR unchanged but enhances its psychological plausibility.

