Neural representation of newly instructed rule identities during early implementation trials

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Hannes Ruge ◽  
Theo AJ Schäfer ◽  
Katharina Zwosta ◽  
Holger Mohr ◽  
Uta Wolfensteller

By following explicit instructions, humans instantaneously get the hang of tasks they have never performed before. We used a specially calibrated multivariate analysis technique to uncover the elusive representational states during the first few implementations of arbitrary rules such as ‘for coffee, press red button’ following their first-time instruction. Distributed activity patterns within the ventrolateral prefrontal cortex (VLPFC) indicated the presence of neural representations specific to individual stimulus-response (S-R) rule identities, preferentially for conditions requiring the memorization of instructed S-R rules for correct performance. Identity-specific representations were detectable starting from the first implementation trial and continued to be present across early implementation trials. The increasingly fluent application of novel rule representations was channelled through increasing cooperation between VLPFC and anterior striatum. These findings inform representational theories on how the prefrontal cortex supports behavioral flexibility specifically by enabling the ad-hoc coding of newly instructed individual rule identities during their first-time implementation.

2019 ◽  
Author(s):  
Hannes Ruge ◽  
Theo A. J. Schäfer ◽  
Katharina Zwosta ◽  
Holger Mohr ◽  
Uta Wolfensteller

By following explicit instructions, humans can instantaneously get the hang of tasks they have never performed before. Here, we used a specially calibrated multivariate analysis technique to uncover the elusive representational states following newly instructed arbitrary behavioural rules such as ‘for coffee, press red button’, while transitioning from ‘knowing what to do’ to ‘actually doing it’. Subtle variation in distributed neural activity patterns reflected rule-specific representations within the ventrolateral prefrontal cortex (VLPFC), confined to instructed stimulus-response learning in contrast to incidental learning involving the same stimuli and responses. VLPFC representations were established right after first-time instruction and remained stable across early implementation trials. Increasingly fluent application of novel rule representations was channelled through increasing cooperation between VLPFC and anterior striatum. These findings inform representational theories on how the prefrontal cortex supports behavioural flexibility by enabling ad-hoc coding of novel task rules without recourse to familiar sub-routines.


2020 ◽  
Author(s):  
David Badre ◽  
Apoorva Bhandari ◽  
Haley Keglovits ◽  
Atsushi Kikumoto

Cognitive control allows us to think and behave flexibly based on our context and goals. At the heart of theories of cognitive control is a control representation that enables the same input to produce different outputs contingent on contextual factors. In this review, we focus on an important property of the control representation’s neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. We will discuss the implications of this trade-off for cognitive control. We will then briefly review current neuroscience findings regarding the dimensionality of control representations in the brain, particularly the prefrontal cortex. We conclude by highlighting open questions and crucial directions for future research.


2017 ◽  
Author(s):  
Apoorva Bhandari ◽  
Christopher Gagne ◽  
David Badre

Understanding the nature and form of prefrontal cortex representations that support flexible behavior is an important open problem in cognitive neuroscience. In humans, multi-voxel pattern analysis (MVPA) of fMRI BOLD measurements has emerged as an important approach for studying neural representations. An implicit, untested assumption underlying many PFC MVPA studies is that the base rate of decoding information from PFC BOLD activity patterns is similar to that of other brain regions. Here we estimate these base rates from a meta-analysis of published MVPA studies and show that the PFC has a significantly lower base rate for decoding than visual sensory cortex. Our results have implications for the design and interpretation of MVPA studies of prefrontal cortex, and raise important questions about its functional organization.


2021 ◽  
Author(s):  
Ze Fu ◽  
Xiaosha Wang ◽  
Xiaoying Wang ◽  
Huichao Yang ◽  
Jiahuan Wang ◽  
...  

A critical way for humans to acquire, represent and communicate information is through language, yet the computational mechanisms through which language contributes to our word meaning representations are poorly understood. We compared three major types of word computation mechanisms derived from a large language corpus (simple co-occurrence, graph-space relations and neural-network-vector-embedding relations) in terms of their association with words’ brain activity patterns, measured in two functional magnetic resonance imaging (fMRI) experiments. Word relations derived from a graph-space representation, and not neural-network-vector-embedding, had unique explanatory power for the neural activity patterns in brain regions that have been shown to be particularly sensitive to language processes, including the anterior temporal lobe (capturing graph-common-neighbors), inferior frontal gyrus, and posterior middle/inferior temporal gyrus (capturing graph-shortest-path). These results were robust across different window sizes and graph sizes and were relatively specific to language inputs. These findings highlight the role of cumulative language inputs in organizing word meaning neural representations and provide a mathematical model to explain how different brain regions capture different types of language-derived information.
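The two graph-space relations named in this abstract (graph-common-neighbors and graph-shortest-path) can be illustrated on a toy co-occurrence graph. This is a minimal sketch under invented assumptions: the corpus sentences, the window size, and the unweighted-graph simplification are placeholders, not the article's actual pipeline, which derives these measures from a large language corpus.

```python
# Sketch: build a word co-occurrence graph, then compute the two
# graph-space relations mentioned above. Corpus and window size are
# invented for illustration; the article uses a large language corpus.
from collections import deque

def cooccurrence_graph(sentences, window=2):
    """Link each word to words appearing within `window` positions before it."""
    graph = {}
    for words in sentences:
        for i, w in enumerate(words):
            for u in words[max(0, i - window):i]:
                if u != w:
                    graph.setdefault(w, set()).add(u)
                    graph.setdefault(u, set()).add(w)
    return graph

def common_neighbors(graph, a, b):
    """Graph-common-neighbors: count of shared neighbors of two words."""
    return len(graph.get(a, set()) & graph.get(b, set()))

def shortest_path(graph, a, b):
    """Graph-shortest-path: breadth-first search for path length, None if unreachable."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

g = cooccurrence_graph([["the", "hot", "coffee"], ["the", "hot", "tea"]])
# 'coffee' and 'tea' share the neighbors 'the' and 'hot',
# and are two hops apart via either shared neighbor.
```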


2019 ◽  
Author(s):  
Katherine L Alfred ◽  
Justin C Hayes ◽  
Rachel Pizzie ◽  
Joshua S. Cetron ◽  
David J. M. Kraemer

Individual differences in patterns of attention and thought can vary so greatly that two individuals presented with the same information may encode distinct representations. When presented with a stimulus to be recalled later, the information an individual encodes is dependent on the features of the stimulus to which one attends. Past studies have shown that, on the group level, verbal and visual information (e.g., words and pictures) are encoded in disparate regions of the brain. However, this account conflates external and internal representational formats, and it also neglects individual differences in attention. In this study, we examined neural and cognitive patterns associated with individual differences in attention to verbal representations—both external and internal. We found that the encoded neural representation of semantic content (meaningful words and pictures) varied as a function of individual differences in verbal attention, independent of the stimulus presentation format. Individuals who demonstrated an attentive bias toward words showed similar multivariate BOLD activity patterns within an a priori speech production network when encoding object names as when encoding pictures of objects. This result indicates that these individuals use a common process to encode meaningful words and pictures. These effects were not found for non-semantic stimuli (pronounceable non-words and nonsense pictures). Importantly, as expected, no individual differences in neural representation were found in a separate network of regions known to process semantic content independent of format. These results highlight inter-individual divergence and convergence in internal representations of encoded semantic content.


2018 ◽  
Vol 30 (10) ◽  
pp. 1473-1498 ◽  
Author(s):  
Apoorva Bhandari ◽  
Christopher Gagne ◽  
David Badre

The prefrontal cortex (PFC) is central to flexible, goal-directed cognition, and understanding its representational code is an important problem in cognitive neuroscience. In humans, multivariate pattern analysis (MVPA) of fMRI blood oxygenation level-dependent (BOLD) measurements has emerged as an important approach for studying neural representations. Many previous studies have implicitly assumed that MVPA of fMRI BOLD is just as effective in decoding information encoded in PFC neural activity as it is in visual cortex. However, MVPA studies of PFC have had mixed success. Here we estimate the base rate of decoding information from PFC BOLD activity patterns from a meta-analysis of published MVPA studies. We show that PFC has a significantly lower base rate (55.4%) than visual areas in occipital (66.6%) and temporal (71.0%) cortices and one that is close to chance levels. Our results have implications for the design and interpretation of MVPA studies of PFC and raise important questions about its functional organization.
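The core of the meta-analytic base-rate estimate described above is pooling reported decoding accuracies by region. A minimal sketch, assuming the simplest unweighted pooling; the accuracy values below are invented placeholders, not the article's data, and the actual meta-analysis also accounts for study-level factors.

```python
# Sketch: pool published decoding accuracies by brain region and
# average them into a per-region base rate. Input values here are
# invented placeholders, not data from the meta-analysis.

def base_rates(studies):
    """studies: list of (region, accuracy) pairs -> mean accuracy per region."""
    totals = {}
    for region, acc in studies:
        totals.setdefault(region, []).append(acc)
    return {region: sum(accs) / len(accs) for region, accs in totals.items()}

rates = base_rates([
    ("PFC", 0.52), ("PFC", 0.58),            # hypothetical PFC studies
    ("occipital", 0.65), ("occipital", 0.68), # hypothetical visual-cortex studies
])
# rates["PFC"] is near chance (0.5); rates["occipital"] is well above it.
```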


2021 ◽  
Vol 7 (15) ◽  
pp. eabd5363
Author(s):  
G. Castegnetti ◽  
M. Zurita ◽  
B. De Martino

Value is often associated with reward, emphasizing its hedonic aspects. However, when circumstances change, value must also change (a compass outvalues gold, if you are lost). How are value representations in the brain reshaped under different behavioral goals? To answer this question, we devised a new task that decouples usefulness from its hedonic attributes, allowing us to study flexible goal-dependent mapping. Here, we show that, unlike sensory cortices, regions in the prefrontal cortex (PFC)—usually associated with value computation—remap their representation of perceptually identical items according to how useful the item has been to achieve a specific goal. Furthermore, we identify a coding scheme in the PFC that represents value regardless of the goal, thus supporting generalization across contexts. Our work questions the dominant view that equates value with reward, showing how a change in goals triggers a reorganization of the neural representation of value, enabling flexible behavior.


2019 ◽  
Vol 375 (1791) ◽  
pp. 20180531 ◽  
Author(s):  
Alona Fyshe

The temporal generalization method (TGM) is a data analysis technique that can be used to test if the brain’s representation for particular stimuli (e.g. sounds, images) is maintained, or if it changes as a function of time (King J-R, Dehaene S. 2014 Characterizing the dynamics of mental representations: the temporal generalization method. Trends Cogn. Sci. 18, 203–210. doi:10.1016/j.tics.2014.01.002). The TGM involves training models to predict the stimuli or condition using a time window from a recording of brain activity, and testing the resulting models at all possible time windows. This is repeated for all possible training windows to create a full matrix of accuracy for every combination of train/test window. The results of a TGM indicate when brain activity patterns are consistent (i.e. the trained model performs well even when tested on a different time window), and when they are inconsistent, allowing us to track neural representations over time. The TGM has been used to study the representation of images and sounds during a variety of tasks, but has been less readily applied to studies of language. Here, we give an overview of the method itself, discuss how the TGM has been used to analyse two studies of language in context and explore how the TGM could be applied to further our understanding of semantic composition. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.
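The train-on-one-window, test-on-all-windows loop described in this abstract can be sketched directly. This is a minimal illustration under invented assumptions: synthetic two-class trials, single-timepoint "windows", and a nearest-centroid classifier stand in for the cross-validated linear classifiers and real MEG/EEG data used in practice.

```python
# Sketch of the temporal generalization method (TGM): fit a classifier
# at each training timepoint and evaluate it at every test timepoint,
# yielding a train-time x test-time accuracy matrix. The data format
# (trial = list of per-timepoint feature vectors) and the
# nearest-centroid classifier are simplifying assumptions.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def tgm(train_trials, train_labels, test_trials, test_labels, n_times):
    """Return acc[t_train][t_test]: accuracy of the model trained at
    timepoint t_train when tested at timepoint t_test."""
    matrix = []
    for t_tr in range(n_times):
        # "Train" at this window: one centroid per class.
        cents = {}
        for lab in sorted(set(train_labels)):
            cents[lab] = centroid(
                [tr[t_tr] for tr, l in zip(train_trials, train_labels) if l == lab])
        row = []
        for t_te in range(n_times):
            # Test the same model at every other window.
            correct = sum(
                min(cents, key=lambda c: dist2(tr[t_te], cents[c])) == lab
                for tr, lab in zip(test_trials, test_labels))
            row.append(correct / len(test_trials))
        matrix.append(row)
    return matrix

# Toy data: two classes that differ only at timepoint 1, so a model
# trained at timepoint 1 should generalize poorly to timepoint 0.
trials = [[[0.0], [0.0]], [[0.0], [0.0]], [[0.0], [1.0]], [[0.0], [1.0]]]
labels = ["A", "A", "B", "B"]
```

A full analysis would also cross-validate (train and test on disjoint trials) rather than reusing the same trials as here.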


2006 ◽  
Vol 18 (5) ◽  
pp. 749-765 ◽  
Author(s):  
Kevin Johnston ◽  
Stefan Everling

Complex behavior often requires the formation of associations between environmental stimuli and motor responses appropriate to those stimuli. Moreover, the appropriate response to a given stimulus may vary depending on environmental context. Stimulus-response associations that are adaptive in one situation may not be in another. The prefrontal cortex (PFC) has been shown to be critical for stimulus-response mapping and the implementation of task context. To investigate the neural representation of sensory-motor associations and task context in the PFC, we recorded the activity of prefrontal neurons in two monkeys while they performed two tasks. The first task was a delayed-match-to-sample task in which monkeys were presented with a sample picture and rewarded for making a saccade to the test picture that matched the sample picture following a delay period. The second task was a conditional visuomotor task in which identical sample pictures were presented. In this task, animals were rewarded for performing either prosaccades or antisaccades following the delay period depending on sample picture identity. PFC neurons showed task selectivity, object selectivity, and combinations of task and object selectivity. These modulations of activity took the form of a reduction in stimulus and delay-related activity, and a pro/anti instruction-based grouping of delay activity in the conditional visuomotor task. These data show that activity in PFC neurons is modulated by experimental context, and that this activity represents the formal demands of the task currently being performed.


Author(s):  
Maddalena Boccia ◽  
Valentina Sulpizio ◽  
Federica Bencivenga ◽  
Cecilia Guariglia ◽  
Gaspare Galati

It is commonly acknowledged that visual imagery and perception rely on the same content-dependent brain areas in the high-level visual cortex (HVC). However, the way in which our brain processes and organizes previously acquired knowledge to allow the generation of mental images is still a matter of debate. Here, we performed a representational similarity analysis of three previous fMRI experiments conducted in our laboratory to characterize the neural representation underlying imagery and perception of objects, buildings and faces and to disclose possible dissimilarities in the neural structure of such representations. To this aim, we built representational dissimilarity matrices (RDMs) by computing multivariate distances between the activity patterns associated with each pair of stimuli in the content-dependent areas of the HVC and HC. We found that spatial information is widely coded in the HVC during perception (i.e. RSC, PPA and OPA) and imagery (OPA and PPA). Also, visual information seems to be coded in both preferred and non-preferred regions of the HVC, supporting a distributed view of encoding. Overall, the present results shed light upon the spatial coding of imagined and perceived exemplars in the HVC.
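The RDM construction this abstract describes (one multivariate distance per stimulus pair) can be sketched compactly. A minimal illustration, assuming correlation distance (1 minus Pearson correlation) as the pairwise measure and invented toy patterns; real RDMs are built from voxel-wise fMRI activity patterns and the distance metric varies by study.

```python
# Sketch: build a representational dissimilarity matrix (RDM) by
# computing 1 - Pearson correlation between the activity patterns of
# every stimulus pair. Patterns below are toy values, not fMRI data.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def rdm(patterns):
    """patterns: one activity pattern (list of responses) per stimulus."""
    n = len(patterns)
    return [[1.0 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

# Identical patterns give dissimilarity 0; perfectly anti-correlated
# patterns give dissimilarity 2.
matrix = rdm([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
```

Comparing such RDMs across conditions (e.g. imagery vs. perception) or regions is what lets representational similarity analysis test whether two representations share structure.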

