Low-dimensional dynamics for working memory and time encoding

2020 ◽  
Vol 117 (37) ◽  
pp. 23021-23032 ◽  
Author(s):  
Christopher J. Cueva ◽  
Alex Saez ◽  
Encarni Marcos ◽  
Aldo Genovesio ◽  
Mehrdad Jazayeri ◽  
...  

Our decisions often depend on multiple sensory experiences separated by time delays. The brain can remember these experiences and, simultaneously, estimate the timing between events. To understand the mechanisms underlying working memory and time encoding, we analyze neural activity recorded during delays in four experiments on nonhuman primates. To disambiguate potential mechanisms, we propose two analyses, namely, decoding the passage of time from neural data and computing the cumulative dimensionality of the neural trajectory over time. Time can be decoded with high precision in tasks where timing information is relevant and with lower precision when irrelevant for performing the task. Neural trajectories are always observed to be low-dimensional. In addition, our results further constrain the mechanisms underlying time encoding as we find that the linear “ramping” component of each neuron’s firing rate strongly contributes to the slow timescale variations that make decoding time possible. These constraints rule out working memory models that rely on constant, sustained activity and neural networks with high-dimensional trajectories, like reservoir networks. Instead, recurrent networks trained with backpropagation capture the time-encoding properties and the dimensionality observed in the data.
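The two analyses described above can be sketched in a few lines. The sketch below is illustrative only, not the authors' pipeline: the placeholder data, the logistic-regression time decoder, and the 90%-variance criterion for cumulative dimensionality are assumptions made for this example.

```python
# Minimal sketch (illustrative only, not the authors' exact pipeline) of
# (1) decoding elapsed time from population activity and (2) tracking the
# cumulative dimensionality of the neural trajectory over time.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_timebins, n_neurons = 40, 20, 50
rates = rng.poisson(5.0, size=(n_trials, n_timebins, n_neurons)).astype(float)  # placeholder data

# (1) Decode the passage of time: treat each time bin as a class label and ask
# how well a linear decoder recovers it from the instantaneous population state.
X = rates.reshape(-1, n_neurons)               # (trials * time bins, neurons)
y = np.tile(np.arange(n_timebins), n_trials)   # time-bin label for each sample
time_decoding_accuracy = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, cv=5
).mean()

# (2) Cumulative dimensionality: for each time t, count the principal components
# needed to explain 90% of the variance of the trial-averaged trajectory up to t.
trajectory = rates.mean(axis=0)                # (time bins, neurons)
cumulative_dim = []
for t in range(2, n_timebins + 1):
    segment = trajectory[:t] - trajectory[:t].mean(axis=0)
    sv = np.linalg.svd(segment, compute_uv=False)
    explained = np.cumsum(sv**2) / np.sum(sv**2)
    cumulative_dim.append(int(np.searchsorted(explained, 0.90) + 1))

print(f"time-decoding accuracy: {time_decoding_accuracy:.2f}")
print("cumulative dimensionality over time:", cumulative_dim)
```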

2018 ◽  
Author(s):  
Christopher J. Cueva ◽  
Alex Saez ◽  
Encarni Marcos ◽  
Aldo Genovesio ◽  
Mehrdad Jazayeri ◽  
...  

Our decisions often depend on multiple sensory experiences separated by time delays. The brain can remember these experiences and, simultaneously, estimate the timing between events. To understand the mechanisms underlying working memory and time encoding, we analyze neural activity recorded during delays in four experiments on non-human primates. To disambiguate potential mechanisms, we propose two analyses, namely, decoding the passage of time from neural data and computing the cumulative dimensionality of the neural trajectory over time. Time can be decoded with high precision in tasks where timing information is relevant and with lower precision when irrelevant for performing the task. Neural trajectories are always observed to be low-dimensional. These constraints rule out working memory models that rely on constant, sustained activity and neural networks with high-dimensional trajectories, like reservoir networks. Instead, recurrent networks trained with backpropagation capture the time-encoding properties and the dimensionality observed in the data.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Aishwarya Parthasarathy ◽  
Cheng Tang ◽  
Roger Herikstad ◽  
Loong Fah Cheong ◽  
Shih-Cheng Yen ◽  
...  

Maintenance of working memory is thought to involve the activity of prefrontal neuronal populations with strong recurrent connections. However, it was recently shown that distractors evoke a morphing of the prefrontal population code, even when memories are maintained throughout the delay. How can a morphing code maintain time-invariant memory information? We hypothesized that dynamic prefrontal activity contains time-invariant memory information within a subspace of neural activity. Using an optimization algorithm, we found a low-dimensional subspace that contains time-invariant memory information. This information was reduced in trials where the animals made errors in the task, and was also found in periods of the trial not used to find the subspace. A bump attractor model replicated these properties, and provided predictions that were confirmed in the neural data. Our results suggest that the high-dimensional responses of prefrontal cortex contain subspaces where different types of information can be simultaneously encoded with minimal interference.
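As a toy illustration of the idea (not the optimization algorithm used in the study), one simple stand-in is to take the principal axes of time-averaged condition means as a candidate memory subspace and then check that projections onto it remain condition-specific across the delay; all data below are simulated placeholders.

```python
# Toy sketch (a simple stand-in for the optimization described above): build a
# low-dimensional "memory subspace" from time-averaged condition means and
# check that projections onto it stay condition-specific across time.
import numpy as np

rng = np.random.default_rng(1)
n_conditions, n_timebins, n_neurons = 8, 30, 100
# Placeholder data: condition-specific offsets plus shared time-varying dynamics.
cond_offsets = rng.normal(size=(n_conditions, 1, n_neurons))
dynamics = rng.normal(size=(1, n_timebins, n_neurons))
activity = cond_offsets + dynamics + 0.1 * rng.normal(size=(n_conditions, n_timebins, n_neurons))

# Time-averaged activity per memory condition defines the candidate subspace.
time_avg = activity.mean(axis=1)             # (conditions, neurons)
time_avg -= time_avg.mean(axis=0)
_, _, Vt = np.linalg.svd(time_avg, full_matrices=False)
memory_subspace = Vt[:3]                     # top 3 axes, (3, neurons)

# Project the full trajectories; within this subspace, conditions should stay
# separable at every time bin if the memory code there is time-invariant.
projected = activity @ memory_subspace.T     # (conditions, time bins, 3)
separability = np.array([
    np.linalg.norm(projected[:, t] - projected[:, t].mean(axis=0))
    for t in range(n_timebins)
])
print("per-time-bin condition separability in memory subspace:", separability.round(2))
```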


Author(s):  
Jay Schulkin

This chapter examines the issue of musical sensibility as an instinct as well as the cognitive and neural capabilities that underlie musical expression, including diverse forms of memory. In particular, it considers working memory as an evolutionary trend that expanded our problem solving and social expression. The chapter first provides an overview of the link between musical sensibility and social instincts from an evolutionary perspective before discussing how music is inherently tied to movement and time, along with cognitive events, adaptation, sensory experiences, and emotional sensations. It also describes musical cognition and cognitive motor planning memory as inherent features of musical sensibility, and how musical experience affects the brain.


2021 ◽  
Author(s):  
Javier G. Orlandi ◽  
Mohammad Abdolrahmani ◽  
Ryo Aoki ◽  
Dmitry R. Lyamzin ◽  
Andrea Benucci

Choice information appears in the brain as distributed signals with top-down and bottom-up components that together support decision-making computations. In sensory and associative cortical regions, the presence of choice signals, their strength, and area specificity are known to be elusive and changeable, limiting a cohesive understanding of their computational significance. In this study, examining the mesoscale activity in mouse posterior cortex during a complex visual discrimination task, we found that broadly distributed choice signals defined a decision variable in a low-dimensional embedding space of multi-area activations, particularly along the ventral visual stream. The subspace they defined was near-orthogonal to concurrently represented sensory and motor-related activations, and it was modulated by task difficulty and contextually by the animals’ attention state. To mechanistically relate choice representations to decision-making computations, we trained recurrent neural networks with the animals’ choices and found an equivalent decision variable whose context-dependent dynamics agreed with that of the neural data. In conclusion, our results demonstrated an independent decision variable broadly represented in the posterior cortex, controlled by task features and cognitive demands. Its dynamics reflected decision computations, possibly linked to context-dependent feedback signals used for probabilistic-inference computations in variable animal-environment interactions.
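A hedged sketch of the kind of axis-based analysis this suggests (not the study's actual pipeline): estimate a choice axis and a stimulus axis from trial activity with a linear classifier and check how close to orthogonal the two directions are. The names `trial_activity`, `choices`, and `stimuli`, and the data themselves, are placeholders introduced for this example.

```python
# Illustrative sketch (not the study's pipeline): estimate a "choice axis" and
# a "stimulus axis" from trial activity and measure how close to orthogonal
# they are. Assumes trial_activity of shape (n_trials, n_features).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_features = 300, 60
trial_activity = rng.normal(size=(n_trials, n_features))  # placeholder data
choices = rng.integers(0, 2, size=n_trials)               # per-trial choice labels
stimuli = rng.integers(0, 2, size=n_trials)               # per-trial stimulus labels

def unit_axis(X, labels):
    """Direction in state space that best separates the two label classes."""
    w = LogisticRegression(max_iter=1000).fit(X, labels).coef_.ravel()
    return w / np.linalg.norm(w)

choice_axis = unit_axis(trial_activity, choices)      # decision-variable direction
stimulus_axis = unit_axis(trial_activity, stimuli)    # sensory direction

# Near-orthogonal representations correspond to angles close to 90 degrees.
angle = np.degrees(np.arccos(np.clip(abs(choice_axis @ stimulus_axis), 0.0, 1.0)))
decision_variable = trial_activity @ choice_axis      # scalar decision variable per trial

print(f"angle between choice and stimulus axes: {angle:.1f} deg")
print("per-trial decision variable (first 5):", decision_variable[:5].round(2))
```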


2021 ◽  
Author(s):  
Daniel B. Ehrlich ◽  
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings for how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations and not with traditional sensory models of working memory. In task-optimized recurrent neural networks, we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.
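A toy contrast between the two coding schemes, using a hypothetical match/nonmatch task purely for illustration; the task structure and function names below are assumptions, not taken from the paper.

```python
# Toy illustration (hypothetical task, not the authors' paradigm): contrast a
# "sensory" memory code, which stores the past stimulus itself, with a
# "contingency" code, which stores the mapping from possible upcoming probes
# to the responses they should trigger.
from typing import Dict

STIMULI = ["A", "B"]

def sensory_memory(sample: str) -> str:
    # Stores the past stimulus; the response rule must still be applied later.
    return sample

def contingency_memory(sample: str) -> Dict[str, str]:
    # Stores future behavior as a function of the upcoming probe:
    # respond "match" if the probe equals the remembered sample, else "nonmatch".
    return {probe: ("match" if probe == sample else "nonmatch") for probe in STIMULI}

# At probe time, the contingency code can be read out directly ...
plan = contingency_memory("A")
assert plan["A"] == "match" and plan["B"] == "nonmatch"
# ... whereas the sensory code still requires the comparison to be computed.
remembered = sensory_memory("A")
assert ("match" if "B" == remembered else "nonmatch") == "nonmatch"
```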


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Cheng Tang ◽  
Roger Herikstad ◽  
Aishwarya Parthasarathy ◽  
Camilo Libedinsky ◽  
Shih-Cheng Yen

The lateral prefrontal cortex is involved in the integration of multiple types of information, including working memory and motor preparation. However, it is not known how downstream regions can extract one type of information without interference from the others present in the network. Here, we show that the lateral prefrontal cortex of non-human primates contains two minimally dependent low-dimensional subspaces: one that encodes working memory information, and another that encodes motor preparation information. These subspaces capture all the information about the target in the delay periods, and the information in both subspaces is reduced in error trials. A single population of neurons with mixed selectivity forms both subspaces, but the information is kept largely independent from each other. A bump attractor model with divisive normalization replicates the properties of the neural data. These results provide new insights into neural processing in prefrontal regions.
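One conventional way to quantify how independent two such subspaces are is to compute the principal angles between them. The sketch below, with simulated placeholder epochs and PCA-derived axes, illustrates that idea rather than the authors' exact analysis.

```python
# Illustrative sketch (not the authors' exact analysis): estimate a memory
# subspace and a motor-preparation subspace from different task epochs of the
# same population, then quantify their overlap with principal angles.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(3)
n_conditions, n_neurons, n_dims = 8, 120, 3

def epoch_subspace(epoch_activity, n_dims):
    """Top principal axes of condition-mean activity in one task epoch."""
    centered = epoch_activity - epoch_activity.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:n_dims].T                                    # (neurons, n_dims)

memory_epoch = rng.normal(size=(n_conditions, n_neurons))       # placeholder data
preparation_epoch = rng.normal(size=(n_conditions, n_neurons))  # placeholder data

memory_space = epoch_subspace(memory_epoch, n_dims)
motor_space = epoch_subspace(preparation_epoch, n_dims)

# Angles near 90 degrees indicate minimally dependent (near-orthogonal) subspaces.
angles = np.degrees(subspace_angles(memory_space, motor_space))
print("principal angles between memory and motor subspaces:", angles.round(1))
```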


2019 ◽  
Author(s):  
Cheng Tang ◽  
Roger Herikstad ◽  
Aishwarya Parthasarathy ◽  
Camilo Libedinsky ◽  
Shih-Cheng Yen

The lateral prefrontal cortex is involved in the integration of multiple types of information, including working memory and motor preparation. However, it is not known how downstream regions can extract one type of information without interference from the others present in the network. Here we show that the lateral prefrontal cortex contains two independent low-dimensional subspaces: one that encodes working memory information, and another that encodes motor preparation information. These subspaces capture all the information about the target in the delay periods, and the information in both subspaces is reduced in error trials. A single population of neurons with mixed selectivity forms both subspaces, but the information is kept largely independent from each other. A bump attractor model with divisive normalization replicates the properties of the neural data. These results have implications for the neural mechanisms of cognitive flexibility and capacity limitations.


2020 ◽  
Author(s):  
Yin-Jui Chang ◽  
Yuan-I Chen ◽  
Hsin-Chih Yeh ◽  
Jose M. Carmena ◽  
Samantha R. Santacruz

Fundamental principles underlying computation in multi-scale brain networks illustrate how multiple brain areas and their coordinated activity give rise to complex cognitive functions. Although population brain activity has been studied at the micro- to meso-scale to build connections between dynamical patterns and behaviors, such studies were often conducted at a single length scale and lacked an explanatory theory that identifies the neuronal origin across multiple scales. Here we introduce the NeuroBondGraph Network, a dynamical system incorporating both biologically inspired components and deep learning techniques to capture cross-scale dynamics that can infer and map neural data from multiple scales. We demonstrated that our model is not only 3.5 times more accurate than the popular sphere head model but also extracts more synchronized phase and correlated low-dimensional latent dynamics. We also showed that our methods extend to robustly predicting held-out data across 16 days. Accordingly, the NeuroBondGraph Network opens the door to a comprehensive understanding of brain computation, where network mechanisms of multi-scale communication are critical.

