Question-answer dynamics and confirmation theory in reasoning by representativeness

2021
Author(s):
Mathias Sablé-Meyer
Janek Guerrini
Salvador Mascarenhas

We show that the probabilistic decision-making behavior characteristic of reasoning by representativeness or typicality arises in minimalistic settings lacking many of the features previously thought to be necessary conditions for the phenomenon. Specifically, we develop a version of a classical experiment by Kahneman and Tversky (1973) on base-rate neglect in which participants have full access to the probability distribution, conveyed entirely visually and without reliance on familiar stereotypes, rich descriptions, or individuating information. We argue that the notion of evidential support studied in (Bayesian) confirmation theory offers a good account of our experimental findings, as has been proposed for related data points in the representativeness literature. In a nutshell, when faced with competing alternatives, humans are sometimes less interested in picking the option with the highest probability of being true (the highest posterior probability) than in picking the option best supported by the available evidence. We point out that this theoretical avenue is descriptively powerful but has an as-yet unclear explanatory dimension. Building on approaches to reasoning from linguistic semantics, we propose that the chief trigger of confirmation-theoretic mechanisms in deliberate reasoning is a linguistically motivated tendency to interpret certain experimental setups as intrinsically contrastive, in a way best cashed out by modern linguistic semantic theories of questions. Such questions generate pragmatic pressure to interpret surrounding information as having been meant to help answer them, which naturally gives rise to confirmation-theoretic effects, very plausibly as a byproduct of iterated Bayesian update as proposed by modern Bayesian theories of relevance-based reasoning in pragmatics. Our experiment provides preliminary but tantalizing evidence for this hypothesis: participants displayed significantly more confirmation-theoretic behavior in a condition that highlighted the question-like, contrastive nature of the task.
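To make the contrast between posterior probability and evidential support concrete, here is a toy numerical sketch of our own (the numbers are invented and not taken from the experiment), assuming a simple difference measure of confirmation: the option best supported by the evidence can differ from the most probable option.

```python
# Toy numbers (invented): base rates are fully known, as in the visual setup.
priors = {"H1": 0.9, "H2": 0.1}          # prior probability of each alternative
likelihoods = {"H1": 0.3, "H2": 0.7}     # P(E | H) for an observed piece of evidence E

p_evidence = sum(priors[h] * likelihoods[h] for h in priors)              # P(E) = 0.34
posteriors = {h: priors[h] * likelihoods[h] / p_evidence for h in priors}

# A simple confirmation ("evidential support") measure: d(H, E) = P(H | E) - P(H).
support = {h: posteriors[h] - priors[h] for h in priors}

print(posteriors)  # H1 remains the more probable option (~0.79 vs ~0.21)
print(support)     # but E confirms H2 and disconfirms H1 (+0.106 vs -0.106)
```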

Author(s):  
J. Robert G. Williams

This chapter presents axioms for comparative conditional probability relations. The axioms presented here are more general than usual. Each comparative relation is a weak partial order on pairs of sentences but need not be a complete order relation. The axioms for these comparative relations are probabilistically sound for the broad class of conditional probability functions known as Popper functions. Furthermore, these axioms are probabilistically complete. Arguably, the notion of comparative conditional probability provides a foundation for Bayesian confirmation theory. Bayesian confirmation functions are overly precise probabilistic representations of the more fundamental logic of comparative support. The most important features of evidential support are captured by comparative relationships among argument strengths, realized by the comparative support relations and their logic.
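As an illustration only (our notation and a rough paraphrase, not the chapter's axiom list), the following sketch spells out the intended reading of a comparative conditional probability relation and the sense in which soundness and completeness tie the axioms to Popper functions.

```latex
% Illustrative sketch only (our notation; not the chapter's axiom system).
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

Intended reading: $(A \mid B) \succeq (C \mid D)$ says that $A$ given $B$ is at
least as probable as $C$ given $D$.

Any Popper function $P$ induces such a relation:
\[
  (A \mid B) \succeq_P (C \mid D) \iff P(A \mid B) \ge P(C \mid D).
\]

Soundness: every induced relation $\succeq_P$ satisfies the axioms.
Completeness (roughly): every relation satisfying the axioms agrees with the
orderings induced by some Popper function.

\end{document}
```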


2018
Vol 41
Author(s):
Alex O. Holcombe
Samuel J. Gershman

Zwaan et al. and others discuss the importance of the inevitable differences between a replication experiment and the corresponding original experiment. But these discussions are not informed by a principled, quantitative framework for taking such differences into account. Bayesian confirmation theory provides such a framework. It will not entirely solve the problem, but it will lead to new insights.
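One way to see what such a framework could look like is the following toy sketch (ours, not the commentary's proposal): treat the original study and the replication as successive Bayesian updates, with a hypothetical similarity parameter that discounts how diagnostic the replication is when the two experiments differ.

```python
# Minimal sketch (our illustration): sequential Bayesian updating on an original
# result and a replication, with a hypothetical "similarity" parameter that
# discounts the replication's diagnosticity when the experiments differ.

def update(prior, bayes_factor):
    """Posterior probability of H after evidence carrying the given Bayes factor."""
    odds = prior / (1 - prior) * bayes_factor
    return odds / (1 + odds)

prior = 0.5
bf_original = 6.0       # hypothetical Bayes factor from the original study
bf_replication = 4.0    # hypothetical Bayes factor from the replication
similarity = 0.7        # 1.0 = exact replication, 0.0 = unrelated experiment

p_after_original = update(prior, bf_original)
# Shrink the replication's Bayes factor toward 1 (no evidence) as similarity drops.
bf_effective = 1.0 + similarity * (bf_replication - 1.0)
p_after_both = update(p_after_original, bf_effective)

print(p_after_original, p_after_both)   # ~0.86 after the original, ~0.95 after both
```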


2021
Vol 13 (1)
Author(s):
Bingyin Hu
Anqi Lin
L. Catherine Brinson

The inconsistency of polymer indexing caused by the lack of uniformity in the expression of polymer names is a major challenge for the widespread use of polymer-related data resources: it limits the broad application of materials informatics to innovation across polymer science and polymer-based materials. The current solution of using a variety of different chemical identifiers has proven insufficient to address this challenge and is not intuitive for researchers. This work proposes a multi-algorithm mapping methodology, ChemProps, optimized to solve the polymer indexing issue with an easy-to-update design in both depth and width. A RESTful API enables lightweight data exchange and easy integration across data systems. A weight factor is assigned to each algorithm to generate scores for candidate chemical names; the weights are optimized to maximize the minimum score difference between the ground-truth chemical name and the other candidate names. Ten-fold validation on the 160 training data points is used to prevent overfitting. The resulting set of weight factors achieves 100% accuracy on the 54 test data points, and the weights will evolve as ChemProps grows. With ChemProps, other polymer databases can remove duplicate entries and enable a more accurate “search by SMILES” function by using ChemProps as a common name-to-SMILES translator through API calls. ChemProps is also an excellent tool for auto-populating polymer properties thanks to its easy-to-update design.
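A minimal sketch of the weighted multi-algorithm scoring idea described above (not the actual ChemProps code; the algorithm names, scores, and weights here are hypothetical): each matching algorithm scores every candidate standard name, a weighted sum ranks the candidates, and the top-ranked name is returned.

```python
# Hypothetical sketch of weighted multi-algorithm name resolution (not ChemProps itself).
from typing import Dict

def combined_score(per_algorithm_scores: Dict[str, float],
                   weights: Dict[str, float]) -> float:
    """Weighted sum of the scores the individual algorithms assign to one candidate."""
    return sum(weights[alg] * score for alg, score in per_algorithm_scores.items())

def resolve_name(candidates: Dict[str, Dict[str, float]],
                 weights: Dict[str, float]) -> str:
    """Return the candidate chemical name with the highest combined score."""
    return max(candidates, key=lambda name: combined_score(candidates[name], weights))

# Two candidate standard names for a user-supplied string, scored by three
# notional algorithms (e.g., exact match, fuzzy match, abbreviation expansion).
candidates = {
    "poly(methyl methacrylate)": {"exact": 0.0, "fuzzy": 0.9, "abbrev": 1.0},
    "poly(methyl acrylate)":     {"exact": 0.0, "fuzzy": 0.7, "abbrev": 0.2},
}
weights = {"exact": 2.0, "fuzzy": 1.0, "abbrev": 1.5}  # tuned on labeled training data

print(resolve_name(candidates, weights))   # -> "poly(methyl methacrylate)"
```

In the paper's setup, the weights would be chosen on labeled training data so that the ground-truth name beats every other candidate by as wide a margin as possible (maximizing the minimum score difference).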


Diametros
2020
pp. 1-24
Author(s):
Zoe Hitzig
Jacob Stegenga

We provide a novel articulation of the epistemic peril of p-hacking using three resources from philosophy: predictivism, Bayesian confirmation theory, and model selection theory. We defend a nuanced position on p-hacking: it is sometimes, but not always, epistemically pernicious. Our argument requires a novel understanding of Bayesianism, since a standard criticism of Bayesian confirmation theory is that it cannot represent the influence of biased methods. We then turn to pre-analysis plans, a methodological device used to mitigate p-hacking. Some say that pre-analysis plans are epistemically meritorious while others deny this, and in practice pre-analysis plans are often violated. We resolve this debate with a modest defence of pre-analysis plans. Further, we argue that pre-analysis plans can be epistemically relevant even if the plan is not strictly followed, and we suggest that allowing for flexible pre-analysis plans may be the best available policy option.
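To make the Bayesian worry concrete, here is a small illustration of our own (not the paper's formal treatment), assuming independent analyses: as the number of analyses an analyst tries grows, the likelihood ratio carried by the report "a significant result was found" shrinks toward 1, i.e., toward evidential irrelevance.

```python
# Illustration only (assumes independent analyses): the evidential value of a
# reported significant result collapses as the number of attempted analyses grows.

alpha = 0.05   # significance threshold
power = 0.8    # hypothetical probability of a significant result if H is true

for k in (1, 5, 20):                        # number of analyses attempted
    p_sig_if_false = 1 - (1 - alpha) ** k   # at least one false positive under not-H
    p_sig_if_true = 1 - (1 - power) ** k    # at least one hit under H
    likelihood_ratio = p_sig_if_true / p_sig_if_false
    print(k, round(likelihood_ratio, 2))
# k=1 -> ~16.0 ; k=5 -> ~4.4 ; k=20 -> ~1.6: the report becomes nearly uninformative.
```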


2021
Vol 0 (0)
Author(s):
Andrea Giuseppe Ragno

Synchronic intertheoretic reductions are an important area of research in science. Arguably, the model best able to represent the main relations at work in this kind of scientific reduction is the Nagelian account of reduction, further developed by Schaffner and nowadays known as the generalized version of the Nagel–Schaffner model (GNS). In their 2010 article, Dizadji-Bahmani, Frigg, and Hartmann (DFH) specified the two main desiderata of a reduction à la GNS: confirmation and coherence. DFH first, and later Tešic (2017) more rigorously, analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. The purpose of this article is to analyse and compare the degree of coherence between the two theories involved in the GNS before and after the reduction. To this end, in the first section I will look at the reduction of thermodynamics to statistical mechanics and use it as an example with which to describe the GNS. In the second section, I will introduce three coherence measures, which will then be employed in the comparison. Finally, in the last two sections, I will compare the degrees of coherence between the reducing and the reduced theory before and after the reduction and use a few numerical examples to understand the relation between coherence and confirmation measures.
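For readers unfamiliar with coherence measures, here is an illustrative computation of our own (the abstract does not specify which three measures the article compares) using two standard measures from the coherence literature on a toy joint distribution.

```python
# Illustration only: Shogenji's ratio measure and the Olsson/Glass overlap
# measure, computed on a toy joint distribution over two propositions A and B.

p_a = 0.4                      # P(A)
p_b = 0.5                      # P(B)
p_ab = 0.3                     # P(A and B)
p_a_or_b = p_a + p_b - p_ab    # P(A or B)

shogenji = p_ab / (p_a * p_b)   # > 1 indicates positive relevance between A and B
olsson_glass = p_ab / p_a_or_b  # fraction of probabilistic overlap between A and B

print(round(shogenji, 3), round(olsson_glass, 3))   # 1.5, 0.5
```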

