hypothesis space
Recently Published Documents

TOTAL DOCUMENTS: 96 (FIVE YEARS: 32)
H-INDEX: 11 (FIVE YEARS: 4)

2022 ◽  
Vol 31 ◽  
pp. 426
Author(s):  
Brian Leahy ◽  
Eimantas Zalnieriunas

When a child acquires her first modal verbs, is she learning how to map words in the language she is learning onto innate concepts of possibility, necessity, and impossibility? Or does she also have to construct modal concepts? If the concepts are constructed, does learning to talk about possibilities play a role in the construction process? Exploring this hypothesis space requires testing children's acquisition of modal vocabulary alongside nonverbal tests of their modal concepts. Here we report a study with 103 children from 4;0 through 7;11 and 24 adults. We argue that the data fit best with the hypothesis that acquisition of modal language and development of modal concepts proceed hand-in-hand. However, more research is needed, especially with 3-year-olds.


Computers ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 140
Author(s):  
Akrivi Krouska ◽  
Christos Troussas ◽  
Cleo Sgouropoulou

This paper presents a novel cognitive diagnostic module incorporated into e-learning software for tutoring the markup language HTML. The system is responsible for detecting learners’ cognitive bugs and delivering personalized guidance. The novelty of this approach is that it is based on Repair theory and incorporates additional features, such as student negligence and test completion times, into its diagnostic mechanism; it also employs a recommender module that suggests optimal learning paths to students based on their misconceptions, using descriptive test feedback and adaptable learning content. Following Repair theory, the diagnostic mechanism uses a library of error-correction rules to explain the cause of the errors the student makes during assessment. This library covers the common errors, thereby defining a hypothesis space; the test items are then expanded so that they fall within this space. Both the system and the cognitive diagnostic tool were evaluated with promising results, showing that they offer a personalized experience to learners.
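As a rough illustration of the diagnostic step described above, the sketch below matches a student's answer against a small library of error-correction rules; the rule names, answers, and feedback strings are invented for illustration and are not taken from the authors' system.

```python
# Minimal sketch of a Repair-theory-style diagnostic step (illustrative only;
# rule names and the matching strategy are assumptions, not the authors' code).

from dataclasses import dataclass

@dataclass
class BugRule:
    name: str            # hypothetical label for a common misconception
    buggy_answer: str    # the answer this misconception would produce
    feedback: str        # descriptive feedback shown to the learner

# A tiny library of error-correction rules for one HTML test item.
RULES = [
    BugRule("unclosed_tag", "<b>bold",
            "Every opening tag needs a closing tag: <b>bold</b>."),
    BugRule("wrong_nesting", "<b><i>text</b></i>",
            "Close inner tags before outer ones."),
]

def diagnose(student_answer: str, correct_answer: str) -> str:
    """Explain an observed error by matching it against the rule library.

    Each rule predicts the answer a known misconception would produce;
    a match identifies the cognitive bug, so the rule library defines
    the hypothesis space of explainable errors.
    """
    if student_answer == correct_answer:
        return "Correct."
    for rule in RULES:
        if student_answer == rule.buggy_answer:
            return f"Detected bug '{rule.name}': {rule.feedback}"
    return "Error outside the hypothesis space; the item may need expansion."

print(diagnose("<b>bold", "<b>bold</b>"))
```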


2021 ◽  
Vol 57 (7) ◽  
pp. 1080-1093
Author(s):  
Angela Jones ◽  
Douglas B. Markant ◽  
Thorsten Pachur ◽  
Alison Gopnik ◽  
Azzurra Ruggeri

2021 ◽  
pp. 63-95
Author(s):  
Lauren Clemens

A major challenge in developing prosodic arguments to support or refute syntactic analyses is to discern when prosody transparently reflects syntax, versus when the correspondence between syntax and prosody is obscured by phonological, architectural, or mapping constraints. In this paper, I use data from Ch'ol (Mayan) and Niuean (Polynesian) to assess the efficacy of using acoustic cues to prosodic constituency as a diagnostic for syntactic structure. I demonstrate how arguments based on prosodic constituency can successfully reduce the hypothesis space available to syntactic analysis. Nonetheless, the insight gained from prosodic constituency can fall short of distinguishing between syntactic accounts, because syntax-prosody non-isomorphisms do arise. This problem can be addressed by using a variety of methodologies in search of converging evidence, e.g. by using syntactic and prosodic argumentation in tandem and by collecting and analyzing more prosodic data in order to better understand the prosodic systems of individual languages.


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 407
Author(s):  
Irene Unceta ◽  
Jordi Nin ◽  
Oriol Pujol

Differential replication is a method for adapting existing machine learning solutions to the demands of highly regulated environments by reusing knowledge from one generation to the next. Copying is a technique that enables differential replication by projecting a given classifier onto a new hypothesis space, in circumstances where access to both the original solution and its training data is limited. The resulting model replicates the original decision behavior while displaying new features and characteristics. In this paper, we apply this approach to a use case in the context of credit scoring, using a private residential mortgage default dataset. We show that differential replication through copying can be exploited to adapt a given solution to the changing demands of a constrained environment such as that of the financial market. In particular, we show how copying can be used to replicate the decision behavior not only of a model, but also of a full pipeline. As a result, we can ensure the decomposability of the attributes used to provide explanations for credit scoring models and reduce the time-to-market of these solutions.
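A rough sketch of the copying step follows, under the assumption (standard in the copying literature) that one samples synthetic points, labels them with the original model, and fits a new model from a different hypothesis space; the models and data below are stand-ins, not the authors' pipeline.

```python
# Minimal sketch of "copying" a classifier into a new hypothesis space
# (illustrative; the sampling scheme and model choices are assumptions).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the original, access-restricted model.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
original = RandomForestClassifier(random_state=0).fit(X, y)

# Copying: sample synthetic points from the input space, label them with
# the original model's predictions (no access to its training data needed),
# and fit a new model from a different, more transparent hypothesis space.
X_synth = rng.uniform(X.min(axis=0), X.max(axis=0), size=(5000, X.shape[1]))
y_synth = original.predict(X_synth)
copy = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_synth, y_synth)

# The copy should replicate the original's decision behavior.
agreement = (copy.predict(X) == original.predict(X)).mean()
print(f"decision agreement with original: {agreement:.2%}")
```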


2021 ◽  
Author(s):  
Andrew Cropper ◽  
Rolf Morel

We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
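To make the generate/test/constrain loop concrete, here is a toy propositional analogue in Python. It sketches only the logic of the loop, with invented examples and a conjunction-of-features hypothesis space; it is not a reproduction of Popper, which learns logic programs via answer set programming and Prolog.

```python
# Toy analogue of learning from failures: generate, test, constrain.
from itertools import combinations

FEATURES = ["a", "b", "c", "d"]

# A hypothesis is a conjunction of required features (a frozenset).
# Fewer conditions = more general: h entails example e iff h <= e.
def entails(h, example):
    return h <= example

POS = [frozenset("abc"), frozenset("abd")]  # positive examples
NEG = [frozenset("bcd"), frozenset("bd")]   # negative examples

def generate():
    """Enumerate the hypothesis space: all conjunctions over FEATURES."""
    for k in range(len(FEATURES) + 1):
        for combo in combinations(FEATURES, k):
            yield frozenset(combo)

pruned = set()
for h in generate():
    if h in pruned:
        continue
    too_general = any(entails(h, e) for e in NEG)
    too_specific = not all(entails(h, e) for e in POS)
    if not too_general and not too_specific:
        print("found:", set(h) or "{}")  # consistent hypothesis
        break
    # Constrain: learn from the failure to prune untested hypotheses.
    for h2 in generate():
        if too_general and h2 <= h:    # prune generalisations (subsets)
            pruned.add(h2)
        if too_specific and h2 >= h:   # prune specialisations (supersets)
            pruned.add(h2)
```

Here the pruning is sound because entailment is monotone in the subset order: any generalisation of a too-general hypothesis still entails the offending negative example, and any specialisation of a too-specific one still misses the uncovered positive.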


2021 ◽  
Author(s):  
Shunkichi Matsumoto

In recent years, quite a few evolutionary psychologists have come to embrace a heuristic interpretation of the discipline. They claim that, no matter how methodologically incomplete, adaptive thinking works fine as a good heuristic that effectively reduces the hypothesis space by generating novel and promising hypotheses that can eventually be empirically tested. The purpose of this article is to elucidate the use of heuristics in evolutionary psychology, thereby clarifying the role adaptive thinking has to play. To that end, two typical heuristic interpretations, Machery’s "bootstrap strategy" and Goldfinch’s heuristically streamlined evolutionary psychology, are examined, with a focus on the relationship between adaptive thinking and heuristics. The article draws two primary conclusions. The first is that the reliability of the heuristic hypothesis generation procedure (in the context of discovery) should count no less than the conclusiveness of the final testing procedure (in the context of justification) in establishing scientific facts; nature does not always get the last word. Philosophy also counts. The second is that adaptive thinking constitutes a core heuristic in evolutionary psychology that provides the discipline with its raison d'être, but this is only possible when adaptive thinking is substantiated with sufficient historical underpinnings.


2021 ◽  
Author(s):  
S. Patsantzis ◽  
S. H. Muggleton

Meta-Interpretive Learners, like most ILP systems, learn by searching for a correct hypothesis in the hypothesis space, the powerset of all constructible clauses. We show how this exponentially-growing search can be replaced by the construction of a Top program: the set of clauses in all correct hypotheses that is itself a correct hypothesis. We give an algorithm for Top program construction and show that it constructs a correct Top program in polynomial time and from a finite number of examples. We implement our algorithm in Prolog as the basis of a new MIL system, Louise, that constructs a Top program and then reduces it by removing redundant clauses. We compare Louise to the state-of-the-art search-based MIL system Metagol in experiments on grid world navigation, graph connectedness and grammar learning datasets and find that Louise improves on Metagol’s predictive accuracy when the hypothesis space and the target theory are both large, or when the hypothesis space does not include a correct hypothesis because of “classification noise” in the form of mislabelled examples. When the hypothesis space or the target theory are small, Louise and Metagol perform equally well.
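The following toy sketch illustrates the idea of Top program construction in a propositional setting: keep every candidate clause that covers some positive and no negative example, take their union as the Top program, then remove redundant clauses. The example data and clause predicates are invented; Louise itself constructs definite clauses by meta-interpretation in Prolog, which this does not reproduce.

```python
# Toy sketch of Top program construction and reduction.

POS = [1, 2, 3, 8]   # positive examples
NEG = [5, 6]         # negative examples

# Candidate clauses, each modelled as a predicate over examples.
CLAUSES = {
    "le_3": lambda x: x <= 3,                # covers 1, 2, 3
    "even": lambda x: x % 2 == 0,            # covers 2, 8 but also 6 -> rejected
    "pow2": lambda x: (x & (x - 1)) == 0,    # covers 1, 2, 8
}

def covers(clause, example):
    return clause(example)

# Step 1: keep every clause that covers at least one positive example and
# no negative example. The union of these clauses is the Top program: it is
# contained in every correct hypothesis and is itself correct, so no search
# over the powerset of clauses is needed.
top = {
    name: c for name, c in CLAUSES.items()
    if any(covers(c, e) for e in POS) and not any(covers(c, e) for e in NEG)
}

# Step 2: reduce the Top program by dropping clauses whose positive
# coverage is already provided by the remaining clauses.
for name in list(top):
    rest = [c for n, c in top.items() if n != name]
    if all(any(covers(c, e) for c in rest) for e in POS):
        del top[name]

print("reduced Top program:", sorted(top))  # here: ['le_3', 'pow2']
```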

