Formal Learning Theory
Recently Published Documents


TOTAL DOCUMENTS: 24 (FIVE YEARS: 3)

H-INDEX: 6 (FIVE YEARS: 0)

2021 ◽ Vol 3 (1) ◽ pp. 55-67
Author(s): José Antonio Sánchez-García, Eric Flores-Medrano, Lidia Aurora Hernández-Rebollar, Estela Juárez-Ruiz, ...

This article explores the influence of knowledge of formal learning theories on the rest of a mathematics teacher's specialized knowledge. To this end, the MTSK model was chosen to examine specialized knowledge, and APOS theory was chosen as the formal learning theory. The study was carried out with two Mexican upper secondary school teachers who had completed master's degrees and who had used APOS theory in their final degree works. We analyzed the parts of those works devoted to the design of teaching sequences. In the results, we show how the informants' activity designs provide evidence of Pedagogical Content Knowledge, since they mobilize knowledge corresponding to the different subdomains and categories framed within that domain.


2019 ◽ pp. 1-19
Author(s): Francesca Zaffora Blando

Numerous learning tasks can be described as the process of extrapolating patterns from observed data. One of the driving intuitions behind the theory of algorithmic randomness is that randomness amounts to the absence of any effectively detectable patterns: it is thus natural to regard randomness as antithetical to inductive learning. Osherson and Weinstein [11] draw upon the identification of randomness with unlearnability to introduce a learning-theoretic framework (in the spirit of formal learning theory) for modelling algorithmic randomness. They define two success criteria, specifying under what conditions a pattern may be said to have been detected by a computable learning function, and prove that the collections of data sequences on which these criteria cannot be satisfied correspond to the set of weak 1-randoms and the set of weak 2-randoms, respectively. This learning-theoretic approach affords an intuitive perspective on algorithmic randomness, and it invites the question of whether restricting attention to learning-theoretic success criteria comes at an expressivity cost. In other words, is the framework expressive enough to capture most core algorithmic randomness notions and, in particular, Martin-Löf randomness—arguably, the most prominent algorithmic randomness notion in the literature? In this article, we answer the latter question in the affirmative by providing a learning-theoretic characterisation of Martin-Löf randomness. We then show that Schnorr randomness, another central algorithmic randomness notion, also admits a learning-theoretic characterisation in this setting.
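
For readers who want the measure-theoretic definitions that the learning-theoretic characterisations are matched against, the following is a brief background sketch of the standard test-based notions; the notation is conventional and is not taken from the article itself.

A Martin-Löf test is a uniformly computably enumerable sequence $(U_n)_{n \in \mathbb{N}}$ of open subsets of Cantor space $2^{\omega}$ with $\mu(U_n) \le 2^{-n}$ for every $n$, where $\mu$ denotes the uniform (Lebesgue) measure. A sequence $X \in 2^{\omega}$ is Martin-Löf random when it passes every such test:
\[
  X \text{ is Martin-L\"of random} \iff X \notin \bigcap_{n \in \mathbb{N}} U_n \quad \text{for every Martin-L\"of test } (U_n)_{n \in \mathbb{N}}.
\]
Schnorr randomness is defined in the same way, except that the measures $\mu(U_n)$ are additionally required to be uniformly computable.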


Author(s): Daniel Osherson, Scott Weinstein

Author(s): Daniel Osherson, Dick de Jongh, Eric Martin, Scott Weinstein
