trace learning
Recently Published Documents


TOTAL DOCUMENTS: 8 (FIVE YEARS: 2)
H-INDEX: 4 (FIVE YEARS: 1)

ZDM, 2021, Vol 53 (5), pp. 1073-1084
Author(s): Barbara Jaworski, Despina Potari

Abstract: This paper addresses implementation with respect to the professional development (PD) of teachers of mathematics and the educators/didacticians who work with them, through an inquiry-based developmental model. In contrast with a PD model in which educators show, guide or instruct teachers in classroom approaches and mathematical tasks, we present a developmental model in which teachers and educators collaborate to inquire into and develop their own teaching practice. The project Learning Communities in Mathematics (LCM; e.g., Goodchild, Fuglestad and Jaworski, 2013) exemplifies this developmental model. Here we focus on a project, Teaching Better Mathematics (TBM), which extends LCM and implements its developmental model at a larger scale. We trace the implementation process through analysis of data gathered during and after the extended project, including written reflections of key didacticians, minutes from leadership meetings and two versions of the project proposal. In particular, we trace learning and development through an activity theory analysis of the issues, tensions and contradictions experienced through participation in TBM.


2020, Vol 30 (10), pp. 1927-1933.e2
Author(s): Antoine Wystrach, Cornelia Buehlmann, Sebastian Schwarz, Ken Cheng, Paul Graham

2018
Author(s): Daniel M. Navarro, Bedeho M. W. Mender, Hannah E. Smithson, Simon M. Stringer

Abstract: We study a self-organising neural network model of how visual representations in the primate dorsal visual pathway are transformed from an eye-centred to a head-centred frame of reference. The model has previously been shown to robustly develop head-centred output neurons with a standard trace learning rule [1], but only under limited conditions. Specifically, it fails when incorporating visual input neurons with monotonic gain modulation by eye position. Since eye-centred neurons with monotonic gain modulation are so common in the dorsal visual pathway, it is an important challenge to show how efferent synaptic connections from these neurons may self-organise to produce head-centred responses in a subpopulation of postsynaptic neurons. We show for the first time how a variety of modified, yet still biologically plausible, versions of the standard trace learning rule enable the model to perform a coordinate transformation from eye-centred to head-centred reference frames when the visual input neurons have monotonic gain modulation by eye position.
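The "standard trace learning rule" this abstract builds on is not spelled out in the listing; a minimal sketch, assuming a rate-coded neuron whose exponentially decaying activity trace gates an otherwise Hebbian update (in the spirit of Földiák's trace rule), could look like the following. All function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

def trace_learning_step(w, x, y_trace, y, eta=0.01, delta=0.8):
    """One update of a standard trace learning rule (sketch).

    w       : synaptic weight vector of one postsynaptic neuron
    x       : presynaptic (input) firing rates
    y_trace : running trace of past postsynaptic activity
    y       : current postsynaptic firing rate
    eta     : learning rate
    delta   : trace decay factor (delta = 0 recovers a plain Hebbian rule)
    """
    # Exponentially weighted trace of the postsynaptic activity
    y_trace = (1.0 - delta) * y + delta * y_trace
    # Hebbian update gated by the trace rather than the instantaneous rate,
    # so temporally adjacent inputs are bound onto the same output neuron
    w = w + eta * y_trace * x
    # Renormalise to keep the weights bounded (a common stabilisation choice)
    w = w / np.linalg.norm(w)
    return w, y_trace
```

Because the trace carries activity forward in time, inputs that occur in close succession (e.g. the same head-centred location seen under different eye positions) strengthen connections onto the same postsynaptic neuron.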


2015, Vol 7 (1), pp. 26-38
Author(s): Vui Ann Shim, Chris Stephen Naveen Ranjit, Bo Tian, Miaolong Yuan, Huajin Tang

2013, Vol 25 (5), pp. 1261-1276
Author(s): Jasmin Leveillé, Thomas Hannagan

Convolutional models of object recognition achieve invariance to spatial transformations largely because of the use of a suitably defined pooling operator. This operator typically takes the form of a max or average function defined across units tuned to the same feature. As a model of the brain's ventral pathway, where computations are carried out by weighted synaptic connections, such pooling can lead to spatial invariance only if the weights that connect similarly tuned units to a given pooling unit are of approximately equal strengths. How identical weights can be learned in the face of nonuniformly distributed data remains unclear. In this letter, we show how various versions of the trace learning rule can help solve this problem. This allows us in turn to explain previously published results and make recommendations as to the optimal rule for invariance learning.
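The equalisation the letter describes can be illustrated with a toy simulation; the sketch below is my own construction under assumed parameter values, not the letter's actual experiments. A single pooling unit receives from several units tuned to the same feature at different positions, and a trace-gated Hebbian update with weight normalisation is applied while the feature sweeps across positions. Because successive positions fall under a shared activity trace, the initially unequal pooling weights drift toward more uniform strengths.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pos = 5                                 # afferents tuned to one feature at different positions
w = rng.uniform(0.1, 1.0, n_pos)          # initially unequal pooling weights
w /= np.linalg.norm(w)
spread_before = w.max() - w.min()

eta, delta = 0.1, 0.9                     # learning rate and trace decay (assumed values)
y_trace = 0.0
for sweep in range(500):
    # The feature visits every position once per sweep, in random order
    for pos in rng.permutation(n_pos):
        x = np.zeros(n_pos)
        x[pos] = 1.0
        y = float(w @ x)                  # pooling unit's instantaneous response
        y_trace = (1 - delta) * y + delta * y_trace
        w += eta * y_trace * x            # trace-gated Hebbian update
    w /= np.linalg.norm(w)                # keep the weight vector bounded

spread_after = w.max() - w.min()
print(spread_before, spread_after)        # spread shrinks over training
```

The key design point is that each afferent's update is gated by the shared trace rather than by its own weight, so over many randomly ordered sweeps every afferent receives similar potentiation and normalisation compresses the remaining differences.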

