learning in the limit
Recently Published Documents


TOTAL DOCUMENTS

12
(FIVE YEARS 1)

H-INDEX

3
(FIVE YEARS 0)

2014 ◽  
Vol 79 (3) ◽  
pp. 908-927 ◽  
Author(s):  
ACHILLES A. BEROS

Abstract
We consider the arithmetic complexity of index sets of uniformly computably enumerable families learnable under different learning criteria. We determine the exact complexity of these sets for the standard notions of finite learning, learning in the limit, behaviorally correct learning, and anomalous learning in the limit. In proving the $\Sigma_5^0$-completeness result for behaviorally correct learning we prove a result of independent interest: if a uniformly computably enumerable family is not learnable, then for any computable learner there is a $\Delta_2^0$ enumeration witnessing failure.
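The learning criteria named in the abstract all refine Gold's basic model of learning in the limit. As a concrete illustration (a minimal sketch of standard identification by enumeration, not code from the paper; the toy family of finite languages is my own, whereas the paper concerns uniformly c.e. families):

```python
# Gold-style learning in the limit by identification by enumeration.
# The learner conjectures, after each datum, the least index of a language
# consistent with all data seen so far; on any text (enumeration) of a
# language in the family, the conjectures converge to a correct index.

# Toy uniformly given family: index -> language (finite sets, for illustration).
FAMILY = [
    {0},        # L_0
    {0, 1},     # L_1
    {0, 1, 2},  # L_2
]

def learner(data_so_far):
    """Return the least index whose language contains every datum seen."""
    seen = set(data_so_far)
    for i, lang in enumerate(FAMILY):
        if seen <= lang:
            return i
    return None  # no consistent hypothesis in the family

def run_on_text(text):
    """Feed a text to the learner and record the sequence of conjectures."""
    return [learner(text[:t]) for t in range(1, len(text) + 1)]

# On a text for L_1 = {0, 1}, the learner first guesses L_0, then
# converges to L_1 and never changes its mind again.
print(run_on_text([0, 0, 1, 0, 1]))  # [0, 0, 1, 1, 1]
```

Finite learning would additionally forbid any mind change after the first definite conjecture, and behaviorally correct learning would only require the conjectured languages (not the indices) to stabilize, which is why the index sets for these criteria land at different levels of the arithmetic hierarchy.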


Author(s):  
Reginaldo Inojosa da Silva Filho ◽  
Ricardo Luis de Azevedo da Rocha ◽  
Ricardo Henrique Gracini Guiraldelli

2012 ◽  
Vol 457 ◽  
pp. 111-127 ◽  
Author(s):  
Jeffrey Heinz ◽  
Anna Kasprzik ◽  
Timo Kötzing

2009 ◽  
Vol 74 (2) ◽  
pp. 489-516 ◽  
Author(s):  
Lorenzo Carlucci ◽  
John Case ◽  
Sanjay Jain

Abstract
We investigate a new paradigm in the context of learning in the limit, namely, learning correction grammars for classes of computably enumerable (c.e.) languages. Knowing a language may feature a representation of it in terms of two grammars. The second grammar is used to make corrections to the first grammar. Such a pair of grammars can be seen as a single description of (or grammar for) the language. We call such grammars correction grammars. Correction grammars capture the observable fact that people do correct their linguistic utterances during their usual linguistic activities.
We show that learning correction grammars for classes of c.e. languages in the TxtEx-model (i.e., converging to a single correct correction grammar in the limit) is sometimes more powerful than learning ordinary grammars even in the TxtBc-model (where the learner is allowed to converge to infinitely many syntactically distinct but correct conjectures in the limit). For each n ≥ 0, there is a similar learning advantage, again in learning correction grammars for classes of c.e. languages, but where we compare learning correction grammars that make n + 1 corrections to those that make n corrections.
The concept of a correction grammar can be extended into the constructive transfinite, using the idea of counting down from notations for transfinite constructive ordinals. This transfinite extension can also be conceptualized as being about learning Ershov-descriptions for c.e. languages. For u a notation in Kleene's general system (O, <_o) of ordinal notations for constructive ordinals, we introduce the concept of a u-correction grammar, where u is used to bound the number of corrections that the grammar is allowed to make. We prove a general hierarchy result: if u and v are notations for constructive ordinals such that u <_o v, then there are classes of c.e. languages that can be TxtEx-learned by conjecturing v-correction grammars but not by conjecturing u-correction grammars. Surprisingly, we show that, above "ω-many" corrections, it is not possible to strengthen the hierarchy: TxtEx-learning u-correction grammars of classes of c.e. languages, where u is a notation in O for any ordinal, can be simulated by TxtBc-learning w-correction grammars, where w is any notation for the smallest infinite ordinal ω.
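The idea of a pair of grammars, the second correcting the first, can be pictured with finite sets standing in for c.e. sets (a sketch of my reading of the abstract, in the spirit of the Ershov difference hierarchy; the nesting assumption and the evaluator below are illustrative assumptions, not the paper's formalism):

```python
# A 1-correction description of a language L is a pair (base, correction)
# with L = base \ correction: the first grammar over-enumerates, the second
# enumerates the elements to subtract. Iterating gives n-correction
# descriptions. Here we assume a nested chain S0 >= S1 >= ... >= Sn of
# finite sets, where each S_{k+1} corrects (toggles membership in) S_k.

def correction_language(sets):
    """Evaluate a nested chain of correction sets: x belongs to the
    described language iff the largest k with x in sets[k] is even
    (each correction flips x's membership once)."""
    result = set()
    for x in sets[0]:
        last = max(k for k, s in enumerate(sets) if x in s)
        if last % 2 == 0:
            result.add(x)
    return result

# One correction: base grammar enumerates {1..5}, correction removes {2, 4}.
print(correction_language([{1, 2, 3, 4, 5}, {2, 4}]))        # {1, 3, 5}

# Two corrections: a further grammar corrects the correction, restoring 4.
print(correction_language([{1, 2, 3, 4, 5}, {2, 4}, {4}]))   # {1, 3, 4, 5}
```

With c.e. sets in place of finite ones, allowing n corrections corresponds to level n + 1 of this difference hierarchy, which is what the abstract's n-versus-(n + 1) separation and its transfinite extension via ordinal notations are about.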


2000 ◽  
Vol 11 (03) ◽  
pp. 515-524
Author(s):  
TAKESI OKADOME

The paper deals with learning in the limit from positive data. After an introduction and overview of earlier results, we strengthen a result of Sato and Umayahara (1991) by establishing a necessary and sufficient condition for the satisfaction of Angluin's (1980) finite tell-tale condition. Our other two results show that two notions introduced here, the finite net property and the weak finite net property, lead to sufficient conditions for learning in the limit from positive data. Examples not solvable by earlier methods are also given.
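Angluin's finite tell-tale condition mentioned in the abstract can be made concrete by brute force over a small family (a hypothetical family of finite languages of my choosing; the search below is an illustrative sketch, not the paper's construction):

```python
# Angluin (1980): a finite T, a subset of L_i, is a tell-tale for L_i within a
# family if no member L_j of the family satisfies T <= L_j, L_j a proper
# subset of L_i. Learnability from positive data requires (in Angluin's
# indexed setting) that every L_i have such a finite tell-tale.

from itertools import chain, combinations

def subsets(s):
    """All subsets of s, smallest first."""
    s = list(s)
    return (set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

def is_tell_tale(t, li, family):
    """True iff no family member lies strictly between t and li."""
    return all(not (t <= lj and lj < li) for lj in family)

def find_tell_tale(li, family):
    """Smallest-first search for a finite tell-tale of li within family."""
    for t in subsets(li):
        if is_tell_tale(t, li, family):
            return t
    return None

FAMILY = [{0}, {0, 1}, {0, 1, 2}]
for lang in FAMILY:
    print(sorted(lang), "->", sorted(find_tell_tale(lang, FAMILY)))
```

For this chain of languages the tell-tales are {} for {0}, {1} for {0, 1}, and {2} for {0, 1, 2}: each tell-tale rules out every proper sublanguage in the family, which is exactly what lets a learner from positive data avoid overgeneralizing.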

