The Development of Working Memory

2020 · Vol 29 (6) · pp. 545-553
Author(s): John P. Spencer

Working memory is a central cognitive system that plays a key role in development, with working memory capacity and speed of processing increasing as children move from infancy through adolescence. Here, I focus on two questions: What neural processes underlie working memory, and how do these processes change over development? Answers to these questions lie in computer simulations of neural-network models that shed light on how development happens. These models open up new avenues for optimizing clinical interventions aimed at boosting the working memory abilities of at-risk infants.
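
For illustration only, here is a minimal Python sketch, not Spencer's published model, of the kind of neural-network simulation referred to above: a one-dimensional dynamic field with local excitation and broad inhibition that holds a self-sustained activation peak (a simple working memory state) after a transient stimulus. All names and parameter values are assumptions chosen for the demo.

```python
# Minimal sketch (assumed parameters, not the article's model): a 1-D dynamic
# neural field that keeps a self-sustained activation peak after a brief input,
# a common way such simulations model working memory maintenance.
import numpy as np

n = 181                        # feature dimension (e.g., location in degrees)
x = np.arange(n)
tau, h = 10.0, -3.0            # time constant and resting level (assumed values)
u = np.full(n, h)              # field activation

def gauss(center, width, amp):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

# Lateral interaction kernel: local excitation plus broad inhibition.
kernel = gauss(n // 2, 5.0, 1.5) - 0.3
kernel = np.roll(kernel, -(n // 2))        # align kernel for circular convolution

def step(u, stim):
    f = 1.0 / (1.0 + np.exp(-u))           # sigmoidal output nonlinearity
    interaction = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel)))
    return u + (-u + h + stim + interaction) / tau

for t in range(600):
    stim = gauss(90, 5.0, 6.0) if t < 100 else 0.0   # transient input, then delay
    u = step(u, stim)

print("self-sustained peak during the delay:", u.max() > 0)
```

In such a sketch, strengthening the excitatory interaction or raising the resting level is one simple way to mimic a developmental increase in how reliably the memory peak is sustained.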

2014 · Vol 57 (3) · pp. 1026-1039
Author(s): Sira Määttä, Marja-Leena Laakso, Asko Tolvanen, Timo Ahonen, Tuija Aro

Purpose: In this article, the authors examine the developmental continuity from prelinguistic communication to kindergarten age in language and working memory capacity.
Method: Following work outlining 6 groups of children with different trajectories of early communication development (ECD; Määttä, Laakso, Tolvanen, Ahonen, & Aro, 2012), the authors examined their later development by psychometric assessment. Ninety-one children first assessed at ages 12–21 months completed a battery of language and working memory tests at age 5;3 (years;months).
Results: Two of the ECD groups previously identified as being at risk for language difficulties continued to show weaker performance at follow-up. Seventy-nine percent of the children with compromised language skills at follow-up were identified on the basis of the ECD groups, but the number of false positives was high. The 2 at-risk groups also differed significantly from the typically developing groups in the measures tapping working memory capacity.
Conclusions: In line with the dimensional view of language impairment, the accumulation of early delays predicted the amount of later difficulties; however, at the individual level, the prediction had rather low specificity. The results imply a strong link between language and working memory and call for further studies examining the early developmental interaction between language and memory.


Author(s): Gualtiero Piccinini

This article discusses the introduction of the concept of computation into cognitive science. Computationalism is usually introduced as an empirical hypothesis that can be disconfirmed. Processing information is surely an important aspect of cognition, so if computation is information processing, then cognition involves computation. Computationalism becomes more significant when it has explanatory power. The most relevant and explanatory notion of computation is the one associated with digital computers. Turing analyzed computation in terms of what are now called Turing machines: simple processors operating on an unbounded tape. Turing argued that any function that can be computed by an algorithm can be computed by a Turing machine. McCulloch and Pitts's account of cognition contains three important aspects: an analogy between neural processes and digital computations, the use of mathematically defined neural networks as models, and an appeal to neurophysiological evidence to support their neural network models. Computationalism involves three accounts of computation: the causal, the semantic, and the mechanistic. Under the causal account, there are mappings between any physical system and at least some computational descriptions. The semantic account may be formulated as a restricted causal account.
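
As an illustrative aside (not part of the article), the analogy between neural processes and digital computation that McCulloch and Pitts drew can be made concrete with binary threshold units; the weights and thresholds below are arbitrary example choices.

```python
# Minimal sketch of McCulloch-Pitts-style threshold units, illustrating the
# analogy between neural processes and digital computation mentioned above.
# Weights and thresholds are illustrative choices, not taken from the article.

def mp_unit(inputs, weights, threshold):
    """Binary threshold neuron: fires (1) iff the weighted input sum reaches threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def AND(x, y): return mp_unit([x, y], [1, 1], 2)
def OR(x, y):  return mp_unit([x, y], [1, 1], 1)
def NOT(x):    return mp_unit([x], [-1], 0)

# Any Boolean function can be built from such units, e.g. XOR from a two-layer net:
def XOR(x, y): return AND(OR(x, y), NOT(AND(x, y)))

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", XOR(x, y))
```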


2008 · Vol 94 (1-3) · pp. 116-124
Author(s): Carolien Thush, Reinout W. Wiers, Susan L. Ames, Jerry L. Grenard, Steve Sussman, ...

2018 · Vol 373 (1740) · pp. 20160513
Author(s): Peter Skorupski, HaDi MaBouDi, Hiruni Samadi Galpayage Dona, Lars Chittka

When counting-like abilities were first described in the honeybee in the mid-1990s, many scholars were sceptical, but such capacities have since been confirmed in a number of paradigms and also in other insect species. Counter to the intuitive notion that counting is a cognitively advanced ability, neural network analyses indicate that it can be mediated by very small neural circuits, and we should therefore perhaps not be surprised that insects and other small-brained animals such as some small fish exhibit such abilities. One outstanding question is how bees actually acquire numerical information. For perception of small numerosities, working-memory capacity may limit the number of items that can be enumerated, but within these limits, numerosity can be evaluated accurately and (at least in primates) in parallel. However, presentation of visual stimuli in parallel does not automatically ensure parallel processing. Recent work on the question of whether bees can see ‘at a glance’ indicates that bees must acquire spatial detail by sequential scanning rather than parallel processing. We explore how this might be tested for a numerosity task in bees and other animals. This article is part of a discussion meeting issue ‘The origins of numerical abilities’.
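
A toy sketch, offered purely as an assumption for illustration rather than as any circuit analysed in the literature above: enumerating items detected one at a time during sequential scanning requires only a single accumulator unit plus a few threshold read-out units, which is one way to see why very small circuits can suffice.

```python
# Toy accumulator circuit (an illustrative assumption): one unit integrates an
# item detector during sequential scanning; threshold read-out units, one per
# numerosity, report how many items were seen.
import numpy as np

def count_by_scanning(detector_trace, max_numerosity=4):
    acc = 0.0
    for d in detector_trace:        # d = 1 when a new item is detected at the current fixation
        acc += d                    # accumulator integrates each detected item
    # Read-out layer: unit k fires if at least k items were accumulated.
    return np.array([int(acc >= k) for k in range(1, max_numerosity + 1)])

print(count_by_scanning([1, 0, 1, 0, 1]))   # three items detected -> [1 1 1 0]
```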


2020
Author(s): Yinghao Li, Robert Kim, Terrence J. Sejnowski

Summary: Recurrent neural network (RNN) models trained to perform cognitive tasks are a useful computational tool for understanding how cortical circuits execute complex computations. However, these models are often composed of units that interact with one another using continuous signals and overlook parameters intrinsic to spiking neurons. Here, we developed a method to directly train not only synaptic-related variables but also membrane-related parameters of a spiking RNN model. Training our model on a wide range of cognitive tasks resulted in diverse yet task-specific synaptic and membrane parameters. We also show that fast membrane time constants and slow synaptic decay dynamics naturally emerge from our model when it is trained on tasks associated with working memory (WM). Further dissecting the optimized parameters revealed that fast membrane properties and slow synaptic dynamics are important for encoding stimuli and WM maintenance, respectively. This approach offers a unique window into how connectivity patterns and intrinsic neuronal properties contribute to complex dynamics in neural populations.
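
A hedged sketch, not the authors' code: one way to make both synaptic weights and per-neuron membrane time constants trainable in a spiking RNN is to declare them all as learnable parameters and pass gradients through the spike nonlinearity with a surrogate derivative. The layer sizes, initializations, threshold, and surrogate function below are assumptions.

```python
# Hedged sketch of a leaky integrate-and-fire RNN in PyTorch with trainable
# synaptic weights and per-neuron membrane time constants; all sizes and
# constants are assumed for illustration.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                          # spike when membrane potential crosses threshold
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2   # smooth surrogate derivative

spike = SurrogateSpike.apply

class SpikingRNN(nn.Module):
    def __init__(self, n_in=2, n_rec=100, n_out=1):
        super().__init__()
        self.w_in = nn.Parameter(0.1 * torch.randn(n_in, n_rec))        # "synaptic" variables
        self.w_rec = nn.Parameter(torch.randn(n_rec, n_rec) / n_rec ** 0.5)
        self.w_out = nn.Parameter(0.1 * torch.randn(n_rec, n_out))
        self.log_tau_m = nn.Parameter(torch.full((n_rec,), 2.0))        # trainable membrane time constants

    def forward(self, x):                                # x: (time, batch, n_in)
        T, B, _ = x.shape
        v = torch.zeros(B, self.w_rec.shape[0])
        s = torch.zeros_like(v)
        out = []
        alpha = torch.exp(-1.0 / torch.exp(self.log_tau_m))             # per-neuron leak factor
        for t in range(T):
            v = alpha * v * (1 - s) + x[t] @ self.w_in + s @ self.w_rec # integrate, reset after a spike
            s = spike(v - 1.0)                           # threshold at 1.0 (assumed)
            out.append(s @ self.w_out)
        return torch.stack(out)

model = SpikingRNN()
y = model(torch.randn(50, 8, 2))                         # 50 time steps, batch of 8 trials
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # would adjust synaptic and membrane parameters jointly
```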


2020 · Vol 117 (19) · pp. 10530-10540
Author(s): Zedong Bi, Changsong Zhou

To maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiseconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems using supervised training of recurrent neural network models. We revealed that neural networks perceive elapsed time through state evolution along stereotypical trajectories, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. Temporal and nontemporal information is coded in mutually orthogonal subspaces, and the state trajectories unfolding over time for different values of nontemporal information are quasi-parallel and isomorphic. This coding geometry facilitates the generalizability of decoding temporal and nontemporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit depending on whether their preferences for nontemporal information are similar or not. We identified four factors that facilitate strong temporal signals in nontiming tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing; it is supported by, and makes predictions about, a number of experimental phenomena.
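
An illustrative analysis sketch, assumed rather than taken from the paper's pipeline: one simple way to quantify whether temporal and nontemporal information occupy mutually orthogonal subspaces is to compare the principal subspaces of condition-averaged and time-averaged population activity. The random array below merely stands in for trial-averaged firing rates from a trained network.

```python
# Assumed analysis sketch: estimate how close to orthogonal the "temporal" and
# "nontemporal" coding subspaces of a population are, given trial-averaged
# firing rates indexed as rates[condition, time, neuron].
import numpy as np

def coding_subspace(X, n_dims=3):
    """Top principal directions (neurons x n_dims) of the mean-centered rows of X."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:n_dims].T

def subspace_overlap(A, B):
    """Mean squared canonical correlation between two orthonormal bases (0 = orthogonal, 1 = identical)."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return float(np.mean(s ** 2))

# Stand-in data: 4 nontemporal conditions, 50 time bins, 120 neurons.
rates = np.random.rand(4, 50, 120)

temporal = coding_subspace(rates.mean(axis=0))     # variance across time, averaged over conditions
nontemporal = coding_subspace(rates.mean(axis=1))  # variance across conditions, averaged over time

print("subspace overlap:", subspace_overlap(temporal, nontemporal))   # near 0 if orthogonal
```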


2019
Author(s): Eli Pollock, Mehrdad Jazayeri

Abstract: Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.

Author Summary: Neurons in the brain form intricate networks that can produce a vast array of activity patterns. To support goal-directed behavior, the brain must adjust the connections between neurons so that network dynamics can perform desirable computations on behaviorally relevant variables. A fundamental goal in computational neuroscience is to provide an understanding of how network connectivity aligns the dynamics in the brain to the dynamics needed to track those variables. Here, we develop a mathematical framework for creating recurrent neural network models that can address this problem. Specifically, we derive a set of linear equations that constrain the connectivity to afford a direct mapping of task-relevant dynamics onto network activity. We demonstrate the utility of this technique by creating and analyzing a set of network models that can perform a simple working memory task. We then extend the approach to show how additional constraints can furnish networks whose dynamics are controlled flexibly by external inputs. Finally, we exploit the flexibility of this technique to explore the robustness and capacity limitations of recurrent networks. This network synthesis method provides a powerful means for generating and validating hypotheses about how task-relevant computations can emerge from network dynamics.
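
A hedged sketch in the spirit of the synthesis idea described above, not the authors' exact derivation: pick the population activity patterns that should lie on a ring-shaped manifold, then solve a linear least-squares problem for the recurrent weights so that those patterns become fixed points of the network dynamics. The network size, tuning curves, and rate dynamics below are assumptions, and stability of the synthesized fixed points is not guaranteed by this simplified construction.

```python
# Hedged sketch (assumed setup): synthesize recurrent weights W so that chosen
# ring-manifold states r*(theta) are fixed points of tau * dr/dt = -r + W*tanh(r).
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 64                                     # neurons, sampled manifold points (assumed sizes)
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)
thetas = np.linspace(0, 2 * np.pi, K, endpoint=False)

def ring_pattern(theta):
    """Von Mises tuning: desired population activity for a remembered angle theta."""
    return 0.8 * np.exp(2.0 * (np.cos(prefs - theta) - 1.0))

R = np.stack([ring_pattern(t) for t in thetas], axis=1)   # N x K target states
Phi = np.tanh(R)
W = R @ np.linalg.pinv(Phi)                        # linear least squares: W * tanh(r*) ~= r*

# Check whether the synthesized network holds a (noisy) memory state on the ring.
tau, dt = 10.0, 1.0
r = ring_pattern(1.3) + 0.05 * rng.standard_normal(N)
for _ in range(500):
    r = r + dt / tau * (-r + W @ np.tanh(r))

decoded = np.angle(np.exp(1j * prefs) @ r)         # population-vector read-out of the stored angle
print("decoded angle after delay:", decoded)       # ideally stays near 1.3; may drift in this toy version
```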

