Task representations in neural networks trained to perform many cognitive tasks

Nature Neuroscience ◽
2019 ◽
Vol 22 (2) ◽  
pp. 297-306 ◽  
Author(s):  
Guangyu Robert Yang ◽  
Madhura R. Joglekar ◽  
H. Francis Song ◽  
William T. Newsome ◽  
Xiao-Jing Wang

iScience ◽
2021 ◽
pp. 103178 ◽
Author(s):  
Yichen Henry Liu ◽  
Junda Zhu ◽  
Christos Constantinidis ◽  
Xin Zhou

2021 ◽  
Author(s):  
Tomoya Nakai ◽  
Shinji Nishimoto

Which part of the brain contributes to our complex cognitive processes? Studies have revealed contributions of the cerebellum and subcortex to higher-order cognitive functions; however, it is unclear whether such functional representations are preserved across the cortex, cerebellum, and subcortex. In this study, we used functional magnetic resonance imaging data from 103 cognitive tasks and constructed three voxel-wise encoding and decoding models independently, using cortical, cerebellar, and subcortical voxels. Representational similarity analysis revealed that the structure of task representations is preserved across the three brain parts. Principal component analysis visualized distinct organizations of abstract cognitive functions in each part of the cerebellum and subcortex. More than 90% of the cognitive tasks were decodable from cerebellar and subcortical activity, even for novel tasks not included in model training. Furthermore, we discovered that the cerebellum and subcortex carry sufficient information to reconstruct activity in the cerebral cortex.
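The preserved structure reported here is assessed with representational similarity analysis. As a rough illustration of that comparison (not the authors' code; the data shapes, names, and use of Spearman correlation are assumptions), one could build a task-by-task dissimilarity matrix per brain part and correlate their upper triangles:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_tasks = 103

# Placeholder task-evoked response patterns (tasks x voxels) per brain part;
# in the study these would come from the fitted voxel-wise models.
responses = {
    "cortex": rng.standard_normal((n_tasks, 5000)),
    "cerebellum": rng.standard_normal((n_tasks, 1200)),
    "subcortex": rng.standard_normal((n_tasks, 800)),
}

def task_rdm(patterns):
    # Representational dissimilarity matrix: 1 - correlation between task patterns.
    return 1.0 - np.corrcoef(patterns)

rdms = {part: task_rdm(p) for part, p in responses.items()}

# Compare representational structure across brain parts (upper triangles only).
iu = np.triu_indices(n_tasks, k=1)
for a, b in [("cortex", "cerebellum"), ("cortex", "subcortex"), ("cerebellum", "subcortex")]:
    rho, _ = spearmanr(rdms[a][iu], rdms[b][iu])
    print(f"RDM similarity, {a} vs {b}: rho = {rho:.3f}")
```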


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1508 ◽
Author(s):  
Kun Zhang ◽  
Yuanjie Zheng ◽  
Xiaobo Deng ◽  
Weikuan Jia ◽  
Jian Lian ◽  
...  

The goal of few-shot learning is to learn quickly from a low-data regime. Structured output tasks such as segmentation are challenging for few-shot learning because their outputs are high-dimensional and statistically dependent. For this problem, we propose improved guided networks and combine them with a fully connected conditional random field (CRF). The guided network extracts task representations from annotated support images through feature fusion and performs fast, accurate inference on new, unannotated query images. By bringing together few-shot learning and fully connected CRFs, our method achieves accurate object segmentation, overcoming the poor localization properties of deep convolutional neural networks, and can quickly adapt to new tasks without further optimization when faced with new data. Our guided network achieves state-of-the-art accuracy in terms of annotation volume and time.
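The abstract does not spell out how the task representation guides query inference. A common construction in guided few-shot segmentation is masked average pooling of support features followed by similarity matching on query features; the sketch below illustrates that general idea under assumed shapes and names (it is not the authors' implementation, and the CRF refinement step is omitted):

```python
import numpy as np

def masked_average_pool(support_feats, support_mask):
    # Pool support features over the annotated foreground to get a task vector.
    # support_feats: (C, H, W) feature map; support_mask: (H, W) binary annotation.
    fg = support_mask.astype(float)
    return (support_feats * fg).sum(axis=(1, 2)) / (fg.sum() + 1e-8)

def guide_query(query_feats, task_rep):
    # Score each query location by cosine similarity to the task representation.
    c, h, w = query_feats.shape
    q = query_feats.reshape(c, -1)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    t = task_rep / (np.linalg.norm(task_rep) + 1e-8)
    return (t @ q).reshape(h, w)

rng = np.random.default_rng(0)
support_feats = rng.standard_normal((64, 32, 32))   # backbone features, support image
support_mask = np.zeros((32, 32))
support_mask[8:20, 8:20] = 1                        # annotated object region
query_feats = rng.standard_normal((64, 32, 32))     # backbone features, query image

task_rep = masked_average_pool(support_feats, support_mask)
coarse_mask = guide_query(query_feats, task_rep) > 0.0
# A fully connected CRF would then sharpen this coarse mask along image edges.
```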


2017 ◽  
Author(s):  
Guangyu Robert Yang ◽  
H. Francis Song ◽  
William T. Newsome ◽  
Xiao-Jing Wang

A neural system has the ability to flexibly perform many tasks, but the underlying mechanisms cannot be elucidated by traditional experimental and modeling studies designed for one task at a time. Here, we trained a single network model to perform 20 cognitive tasks that may involve working memory, decision-making, categorization, and inhibitory control. We found that after training, recurrent units developed into clusters that are functionally specialized for various cognitive processes. We introduce a measure to quantify relationships between single-unit neural representations of tasks, and report five distinct types of such relationships that can be tested experimentally. Surprisingly, our network developed compositionality of task representations, a critical feature for cognitive flexibility, whereby one task can be performed by recombining instructions for other tasks. Finally, we demonstrate how the network could learn multiple tasks sequentially. This work provides a computational platform for investigating neural representations of many cognitive tasks.
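The unit-to-task relationship measure is described only at a high level here. A minimal sketch of one plausible reading (task variance of each unit, normalized per unit and then clustered; the array shapes and the clustering method are assumptions, not the paper's exact procedure):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_units, n_tasks, n_conds, n_time = 50, 20, 16, 30

# Placeholder recurrent-unit activity: (units, tasks, conditions, time);
# in the paper this would come from trials run on the trained network.
activity = rng.random((n_units, n_tasks, n_conds, n_time))

# Task variance: variance across task conditions, averaged over time.
task_var = activity.var(axis=2).mean(axis=2)            # (units, tasks)

# Normalize per unit so clustering reflects relative task selectivity.
task_var /= task_var.max(axis=1, keepdims=True) + 1e-8

# Group units into functionally specialized clusters by their task profiles.
labels = fcluster(linkage(task_var, method="ward"), t=5, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```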


2016 ◽  
Author(s):  
Thomas Miconi

Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both the learning and performance of flexible behavior.
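Concretely, the rule family described here pairs an eligibility trace accumulated during the trial with a reward signal delivered only at the trial's end. The sketch below is a generic node-perturbation variant of that scheme, not Miconi's exact rule (which uses a supralinear function of activity fluctuations); the dynamics, toy objective, and constants are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # recurrent units
w = rng.standard_normal((n, n)) / np.sqrt(n)
eta, r_baseline = 0.01, 0.0               # learning rate, running reward baseline

def run_trial(w, n_steps=200, tau=10.0, noise_amp=0.1):
    # One trial of rate dynamics with exploratory noise; no error signal during it.
    x = np.zeros(n)
    elig = np.zeros((n, n))
    for _ in range(n_steps):
        r = np.tanh(x)
        noise = noise_amp * rng.standard_normal(n)   # exploratory perturbation
        x = x + (-x + w @ r + noise) / tau
        elig += np.outer(noise, r)                   # correlate perturbation with inputs
    reward = -abs(np.tanh(x[0]) - 1.0)               # toy objective: drive unit 0 to 1
    return reward, elig

for trial in range(500):
    reward, elig = run_trial(w)
    # Delayed, phasic update: eligibility gated by reward prediction error.
    w += eta * elig * (reward - r_baseline)
    r_baseline += 0.05 * (reward - r_baseline)       # running average of reward
```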


2020 ◽  
Vol 31 (4) ◽  
pp. 1285-1296 ◽  
Author(s):  
Chaofei Hong ◽  
Xile Wei ◽  
Jiang Wang ◽  
Bin Deng ◽  
Haitao Yu ◽  
...  

2004 ◽  
Vol 16 (3) ◽  
pp. 382-389 ◽  
Author(s):  
Emmanuel Guigon

Unlike most artificial systems, the brain is able to face situations that it has not learned or even encountered before. This ability is not, in general, shared by most neural networks. Here, we show that neural computation based on least-square error learning between populations of intensity-coded neurons can explain the interpolation and extrapolation capacities of the nervous system in sensorimotor and cognitive tasks. We present simulations of function learning experiments, auditory-visual behavior, and visuomotor transformations. The results suggest that induction in human behavior, be it sensorimotor or cognitive, could arise from a common neural associative mechanism.
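As a toy illustration of least-square learning between intensity-coded populations (a sketch of the general mechanism, with made-up tuning curves and target function, not the paper's simulations): encode the input with Gaussian tuning curves, fit readout weights by least squares on a limited training range, then probe the readout both inside that range (interpolation) and beyond it (extrapolation):

```python
import numpy as np

# Population of intensity-coded neurons: Gaussian tuning curves over the input.
centers = np.linspace(-2.0, 2.0, 40)

def population(x, width=0.5):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Learn a target function from examples confined to [-1, 1].
x_train = np.linspace(-1.0, 1.0, 50)
y_train = np.sin(np.pi * x_train)
weights, *_ = np.linalg.lstsq(population(x_train), y_train, rcond=None)

# Probe inside the training range (interpolation) and beyond it (extrapolation).
x_test = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
print(population(x_test) @ weights)
```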

