HOPE-Graph: A Hypothesis Evaluation Service considering News and Causality Knowledge

Author(s):  
Futoshi Iwama ◽  
Miki Enoki ◽  
Sachiko Yoshihama


AERA Open ◽  
2021 ◽  
Vol 7 ◽  
pp. 233285842110285
Author(s):  
Tom Rosman ◽  
Samuel Merk

We investigate in-service teachers’ reasons for trust and distrust in educational research compared to research in general. Building on previous research on a so-called “smart but evil” stereotype regarding educational researchers, three sets of confirmatory hypotheses were preregistered. First, we expected that teachers would emphasize expertise—as compared with benevolence and integrity—as a stronger reason for trust in educational researchers. Moreover, we expected that this pattern would not only apply to educational researchers, but that it would generalize to researchers in general. Furthermore, we hypothesized that the pattern could also be found in the general population. Following a pilot study aiming to establish the validity of our measures (German general population sample; N = 504), hypotheses were tested in an online study with N = 414 randomly sampled German in-service teachers. Using the Bayesian informative hypothesis evaluation framework, we found empirical support for five of our six preregistered hypotheses.
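The "Bayesian informative hypothesis evaluation" framework the authors mention quantifies support for order constraints such as "expertise is rated higher than benevolence and integrity." A minimal sketch of the general encompassing-prior idea (not the authors' exact analysis) is below; the posterior means and variances are purely illustrative assumptions, and the Bayes factor is computed as the posterior proportion of draws satisfying the constraint divided by its prior probability under equally likely orderings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for mean trust ratings of three reasons
# (expertise, benevolence, integrity); means and variances are illustrative.
post = rng.multivariate_normal(
    mean=[4.2, 3.6, 3.7],             # assumed posterior means
    cov=np.diag([0.04, 0.04, 0.04]),  # assumed posterior variances
    size=100_000,
)
expertise, benevolence, integrity = post[:, 0], post[:, 1], post[:, 2]

# Informative hypothesis H1: expertise > benevolence AND expertise > integrity.
# Under an encompassing prior treating all orderings of the three means as
# equally likely, the prior probability that expertise is largest is 1/3.
post_fit = np.mean((expertise > benevolence) & (expertise > integrity))
prior_fit = 1 / 3

# Bayes factor of H1 against the unconstrained model: fit / complexity.
bf_1u = post_fit / prior_fit
print(f"BF_1u = {bf_1u:.2f}")
```

With these assumed posteriors nearly all draws satisfy the constraint, so the Bayes factor approaches its ceiling of 1/prior_fit = 3, the maximum attainable support for a single ordering constraint among three means.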


2002 ◽  
Vol 42 (3) ◽  
pp. 251-277 ◽  
Author(s):  
Rajendra P. Srivastava ◽  
Arnold Wright ◽  
Theodore J. Mock

1992 ◽  
Vol 71 (3_suppl) ◽  
pp. 1091-1104 ◽  
Author(s):  
Peter E. Langford ◽  
Robert Hunting

A total of 480 adolescents and young adults between the ages of 12 and 29 years participated in an experiment in which they were asked to evaluate hypotheses from quantified first-order predicate logic specifying that certain classes of event were necessarily, possibly, or certainly not included within a universe of discourse. Results were used to test a two-stage model of performance on hypothesis evaluation tasks that originated in work on the evaluation of conditionals. Unlike other available models, the two-stage model successfully predicted the range of reply patterns observed. When dealing with very simple hypotheses, subjects in this age range tended not to make use of alternative hypotheses unless these were explicitly or implicitly suggested by the task. This argues against hypothesis complexity as an explanation of the reluctance to use alternative hypotheses when evaluating standard conditionals.


2003 ◽  
Vol 22 (2) ◽  
pp. 219-235 ◽  
Author(s):  
Wendy J. Green ◽  
Ken T. Trotman

To improve auditor judgments, it is first necessary to understand and evaluate what successful auditors do differently from their less successful peers. This study uses a computerized research instrument to examine, in a single experiment, the hypothesis generation, information search, hypothesis evaluation, and final judgment stages of the analytical procedures process. The inclusion of a criterion variable and the ability to search for additional evidence allow the study to examine in which stages of analytical procedures auditors make less-than-optimal judgments. Of the 82 participants, 24 selected the correct cause, 19 never generated the correct cause as a hypothesis, and 39 generated the correct cause as a hypothesis but ultimately did not select it. The incorrect participants fell into two categories: those who incorrectly selected the inherited hypothesis and those who incorrectly selected another self-generated non-error as the cause. The former group showed deficiencies in both information search and hypothesis evaluation compared with the correct group; the latter group showed information search patterns similar to those of the correct participants but inferior hypothesis evaluation. These findings support the suggestion by Asare and Wright (2003) that not only hypothesis generation but also information search and hypothesis evaluation are important.


1983 ◽  
Vol 38 (138) ◽  
pp. 57 ◽  
Author(s):  
Thomas N. Huffman

2020 ◽  
Author(s):  
Erik Brockbank ◽  
Caren Walker

A large body of research has shown that engaging in explanation improves learning across a range of tasks. The act of explaining has been proposed to draw attention and cognitive resources toward evidence that will support a good explanation—information that is broad, abstract, and consistent with prior knowledge—which in turn aids discovery and generalization. However, it remains unclear whether explanation acts on the learning process via improved hypothesis generation, increasing the probability that the correct hypothesis is considered in the first place, or hypothesis evaluation, the appraisal of the correct hypothesis in light of evidence. In the present study, we address this question by separating the hypothesis generation and evaluation processes in a novel category learning task and quantifying the effect of explanation on each process independently. We find that explanation supports the generation of broad and abstract hypotheses but has less effect on the evaluation of hypotheses.


2020 ◽  
Author(s):  
Daniel W. Heck ◽  
Udo Boehm ◽  
Florian Böing-Messing ◽  
Paul-Christian Bürkner ◽  
Koen Derks ◽  
...  

The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
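One of the applications the review discusses, Bayesian evaluation of a point null hypothesis, can be sketched analytically for a binomial model using the Savage–Dickey density ratio with a conjugate Beta prior. The function below is a generic illustration, not code from the paper; the default uniform Beta(1, 1) prior and the example counts are assumptions for demonstration.

```python
from scipy.stats import beta

def bf01_binomial(k: int, n: int, theta0: float = 0.5,
                  a: float = 1.0, b: float = 1.0) -> float:
    """Savage-Dickey Bayes factor for H0: theta = theta0 in a binomial model.

    With a conjugate Beta(a, b) prior, the posterior after k successes in
    n trials is Beta(a + k, b + n - k), and
    BF01 = posterior density at theta0 / prior density at theta0.
    """
    posterior = beta.pdf(theta0, a + k, b + n - k)
    prior = beta.pdf(theta0, a, b)
    return posterior / prior

# 50 successes in 100 trials: the data are consistent with theta = 0.5,
# so BF01 > 1 (moderate evidence for the point null).
print(bf01_binomial(50, 100))
# 70 successes in 100 trials: BF01 << 1 (strong evidence against the null).
print(bf01_binomial(70, 100))
```

Because the Beta family is conjugate here, no sampling is needed; the same ratio-of-densities logic underlies Savage–Dickey implementations in general-purpose Bayes factor software, where the posterior density at the test value is instead estimated from MCMC draws.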


2021 ◽  
Author(s):  
Justin Sulik ◽  
Ryan McKay

Explanations of science denial rooted in individual cognition tend to focus on general trait-like factors such as cognitive style, conspiracist ideation or delusional ideation. However, we argue that this focus typically glosses over the concrete, mechanistic elements of belief formation, such as hypothesis generation, data gathering, or hypothesis evaluation. We show, empirically, that such elements predict variance in science denial not accounted for by cognitive style, even after accounting for social factors such as political ideology. We conclude that a cognitive account of science denial would benefit from the study of complex (i.e., open-ended, multi-stage) problem solving that incorporates these mechanistic elements.

