Students’ Reflections About a Course for Learning Inferential Reasoning Via Simulations

Author(s): Susanne Podworny

2021 · Author(s): Maxwell Adam Levinson, Justin Niestroy, Sadnan Al Manir, Karen Fairchild, Douglas E. Lake, ...

Abstract: Results of computational analyses require transparent disclosure of their supporting resources, while the analyses themselves can often be very large in scale and involve multiple processing steps separated in time. Evidence for the correctness of any analysis should include not only a textual description but also a formal record of the computations which produced the result, including accessible data and software with runtime parameters, environment, and personnel involved. This article describes FAIRSCAPE, a reusable computational framework enabling simplified access to modern scalable cloud-based components. FAIRSCAPE fully implements the FAIR data principles and extends them to provide fully FAIR Evidence, including machine-interpretable provenance of datasets, software, and computations, as metadata for all computed results. The FAIRSCAPE microservices framework creates a complete Evidence Graph for every computational result, including persistent identifiers with metadata, resolvable to the software, computations, and datasets used in the computation, and stores a URI to the root of the graph in the result's metadata. An ontology for Evidence Graphs, EVI (https://w3id.org/EVI), supports inferential reasoning over the evidence. FAIRSCAPE can run nested or disjoint workflows and preserves provenance across them. It can run Apache Spark jobs, scripts, workflows, or user-supplied containers. All objects, including software, are assigned persistent identifiers. All results are annotated with FAIR metadata using the evidence graph model for access, validation, reproducibility, and re-use of archived data and software.
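As a rough sketch of the evidence-graph idea described in this abstract, the example below assembles a minimal JSON-LD-style provenance record linking a computed result to the dataset, software, and computation that produced it, with a minted persistent identifier for each object. The property names and the mint_identifier helper are hypothetical illustrations, not the actual FAIRSCAPE API or EVI ontology terms; only the https://w3id.org/EVI context URL comes from the abstract.

```python
import json
import uuid

def mint_identifier(prefix: str) -> str:
    """Hypothetical stand-in for a persistent-identifier (PID) service."""
    return f"ark:/99999/{prefix}-{uuid.uuid4().hex[:8]}"

# Mint illustrative identifiers for each object in the provenance chain.
dataset_id = mint_identifier("dataset")
software_id = mint_identifier("software")
computation_id = mint_identifier("computation")
result_id = mint_identifier("result")

# Minimal evidence-graph record for one computed result.
# Property names below are illustrative, not actual EVI ontology terms.
evidence_graph = {
    "@context": "https://w3id.org/EVI",  # EVI context URL from the abstract
    "@id": result_id,
    "@type": "Dataset",
    "generatedBy": {
        "@id": computation_id,
        "@type": "Computation",
        "usedSoftware": {"@id": software_id, "@type": "Software"},
        "usedDataset": {"@id": dataset_id, "@type": "Dataset"},
        "runtimeParameters": {"threshold": 0.05},  # hypothetical parameters
    },
}

# The root URI of such a graph would be stored in the result's metadata,
# so the full chain of evidence stays resolvable.
print(json.dumps(evidence_graph, indent=2))
```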


2012 · Vol. 126 (3) · pp. 243-254 · Author(s): Andrew Hill, Emma Collier-Baker, Thomas Suddendorf

2020 · Author(s): Ullrich K. H. Ecker, Stephan Lewandowsky, Matthew Chadwick

Misinformation often continues to influence inferential reasoning after clear and credible corrections are provided; this effect is known as the continued influence effect. It has been theorized that this effect is partly driven by misinformation familiarity. Some researchers have even argued that a correction should avoid repeating the misinformation, as the correction itself could serve to inadvertently enhance misinformation familiarity and may thus backfire, ironically strengthening the very misconception it aims to correct. While previous research has found little evidence of such familiarity backfire effects, there remains one situation where they may yet arise: when correcting entirely novel misinformation, where corrections could serve to spread misinformation to new audiences who had never heard of it before. This article presents three experiments (total N = 1,718) investigating the possibility of familiarity backfire within the context of correcting novel misinformation claims and after a one-week study-test delay. While there was variation across experiments, overall there was substantial evidence against familiarity backfire. Corrections that exposed participants to novel misinformation did not lead to stronger misconceptions compared to a control group never exposed to the false claims or corrections. This suggests that it is safe to repeat misinformation when correcting it, even when the audience might be unfamiliar with the misinformation.


Author(s): Harvey S. Wiener

Mature readers always reach beyond the text they are reading. They know unconsciously how to interact with print, regularly uncovering new meanings and making inferential leaps that connect with other thoughts, ideas, or experiences. As you saw in the last chapter's discussion of inference, a piece of writing almost always means more than it says, and the awake reader constantly fleshes out suggestions, nuances, and implications to enrich the reading experience. In this and the next chapter, I want to talk with you about some high-order inference skills: predicting outcomes, drawing conclusions, and generalizing. These three skills work together because they involve the reader's ability to follow a trail begun but not completed by the words on the page. The three skills all relate to inferential reasoning in that they require readers to evolve meanings derived from the prose. Remember our definition of inference? When we infer, we uncover information that is unstated—hidden, if you will. The information expands upon the writer's words. Using what the writer tells us, we plug into the complex circuitry of ideas by adducing what's not exactly stated in what we're reading. We dig out meanings, shaping and expanding the writer's ideas. Predicting, concluding, and generalizing move us toward wider and deeper meanings in what we read. Let's take them up one at a time.

An engaged reader regularly looks ahead to what will happen next—what will be the next event in a chronological sequence, what will be the next point in a logical progression, what will be the next thread in the analytic fabric the writer is weaving. We base our predictions on prior events or issues in the narrative or analytical sequence. Making correct predictions involves our ability to see causes and effects, stimuli and results, actions and consequences. Your child already knows how to predict outcomes. Right from her earliest days in the crib, she has used important analytical skills instinctively.


Author(s): David A. Schum

In the field of law there is a rich legacy of scholarship and experience regarding the properties, uses, and discovery of evidence in inferential reasoning tasks. Over the centuries our courts have been concerned about characteristics of evidence that seem necessary in order to draw valid and persuasive conclusions from it. Thus, they have been led to consider such matters as the relevance of evidence, the credibility of the sources from which it comes, and the probative or inferential force of evidence. Court trials usually involve inferences about events in the past. The past can never be completely recovered. In addition, evidence about past events is frequently inconclusive, conflicting or contradictory, and often vague or ambiguous. The result is that inferences about past events are necessarily probabilistic in nature. Our courts have also been concerned with whether the interests of fairness require that, on occasion, evidence be ruled inadmissible even though it is relevant and credible. Evidential and inferential issues such as the ones just mentioned are also of concern to philosophers and persons in other disciplines. This entry concerns several evidential issues of particular interest in legal contexts.
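Because the passage stresses that inferences about past events are necessarily probabilistic, a small worked example may help. The sketch below uses Bayes' rule in odds form, with the likelihood ratio standing in for the probative (inferential) force of a single item of evidence; all numbers are invented for illustration and are not drawn from Schum's work or any actual case.

```python
# Bayesian updating in odds form: the likelihood ratio acts as a simple
# numerical stand-in for the probative (inferential) force of evidence.
# All numbers are invented for illustration.

prior_odds = 0.25 / 0.75           # prior odds that the disputed past event H occurred

# One item of inconclusive evidence E about the past event H:
p_e_given_h = 0.6                  # P(E | H is true)
p_e_given_not_h = 0.2              # P(E | H is false)
likelihood_ratio = p_e_given_h / p_e_given_not_h   # probative force of E

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"Likelihood ratio (probative force): {likelihood_ratio:.1f}")
print(f"Posterior probability of the event: {posterior_prob:.2f}")
```

Even with a fairly strong likelihood ratio, the posterior here remains far from certainty, which mirrors the point that legal conclusions drawn from inconclusive evidence stay probabilistic rather than definitive.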


Author(s): Yannick Boddez, Jan De Houwer, Tom Beckers

Chapter 4 describes the inferential reasoning theory of causal learning and discusses how thinking about this theory has evolved in at least two important ways. First, the authors argue that it is useful to decouple the debate about different possible types of mental representations involved in causal learning (e.g., propositional or associative) from the debate about processes involved therein (e.g., inferential reasoning or attention). Second, at the process level, inferential reasoning is embedded within a broad array of mental processes that are all required to provide a full mechanistic account of causal learning. Based on those insights, the authors evaluate five arguments that are often raised against inferential reasoning theory. They conclude that causal learning is best understood as involving the formation and retrieval of propositional representations, both of which depend on multiple cognitive processes (i.e., the multi-process propositional account).
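As a toy illustration of inferential reasoning over propositional representations (not a model proposed by the chapter's authors), the sketch below encodes a blocking-style inference as explicit rules over a record of causal learning trials: if cue A alone already produces the outcome and outcomes are assumed additive, a compound A+B trial gives no new reason to treat B as causal. All cue names and rules are hypothetical.

```python
# Toy propositional-inference sketch, loosely inspired by the blocking design.
from dataclasses import dataclass

@dataclass(frozen=True)
class Trial:
    cues: frozenset      # cues presented together on this trial
    outcome: bool        # whether the outcome occurred

def infer_causal_status(trials, target_cue, additivity_assumed=True):
    """Return 'causal', 'non-causal', or 'unknown' for target_cue
    via simple rule-based reasoning over the trial record."""
    # Rule 1: a cue presented alone and followed by the outcome is causal.
    for t in trials:
        if t.cues == frozenset({target_cue}) and t.outcome:
            return "causal"
    # Rule 2 (blocking inference, given additivity): if the other cues in a
    # compound already produce the outcome on their own, the target adds nothing.
    if additivity_assumed:
        for t in trials:
            if target_cue in t.cues and len(t.cues) > 1 and t.outcome:
                others = t.cues - {target_cue}
                if any(u.cues == others and u.outcome for u in trials):
                    return "non-causal"
    return "unknown"

trials = [
    Trial(frozenset({"A"}), True),        # A alone -> outcome
    Trial(frozenset({"A", "B"}), True),   # A+B compound -> same outcome
]
print(infer_causal_status(trials, "B"))                             # 'non-causal'
print(infer_causal_status(trials, "B", additivity_assumed=False))   # 'unknown'
```

The point of the toy is only that the conclusion about cue B follows from explicit premises (the trial record plus the additivity assumption), which is the sense in which the chapter treats causal learning as inferential rather than purely associative.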


1982 · Vol. 10 (2) · pp. 188-193 · Author(s): C. Donald Morris, John D. Bransford
