The dimensionality of neural representations for control

Author(s):  
David Badre ◽  
Apoorva Bhandari ◽  
Haley Keglovits ◽  
Atsushi Kikumoto

Cognitive control allows us to think and behave flexibly based on our context and goals. At the heart of theories of cognitive control is a control representation that enables the same input to produce different outputs contingent on contextual factors. In this review, we focus on an important property of the control representation’s neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. We will discuss the implications of this trade-off for cognitive control. We will then briefly review current neuroscience findings regarding the dimensionality of control representations in the brain, particularly the prefrontal cortex. We conclude by highlighting open questions and crucial directions for future research.

2021 ◽  
Vol 7 (15) ◽  
pp. eabd5363
Author(s):  
G. Castegnetti ◽  
M. Zurita ◽  
B. De Martino

Value is often associated with reward, emphasizing its hedonic aspects. However, when circumstances change, value must also change (a compass outvalues gold, if you are lost). How are value representations in the brain reshaped under different behavioral goals? To answer this question, we devised a new task that decouples usefulness from its hedonic attributes, allowing us to study flexible goal-dependent mapping. Here, we show that, unlike sensory cortices, regions in the prefrontal cortex (PFC)—usually associated with value computation—remap their representation of perceptually identical items according to how useful the item has been to achieve a specific goal. Furthermore, we identify a coding scheme in the PFC that represents value regardless of the goal, thus supporting generalization across contexts. Our work questions the dominant view that equates value with reward, showing how a change in goals triggers a reorganization of the neural representation of value, enabling flexible behavior.


2020 ◽  
Author(s):  
Sebastian Bobadilla-Suarez ◽  
Olivia Guest ◽  
Bradley C. Love

Abstract Recent work has considered the relationship between value and confidence in both behavior and neural representation. Here we evaluated whether the brain organizes value and confidence signals in a systematic fashion that reflects the overall desirability of options. If so, regions that respond to either increases or decreases in both value and confidence should be widespread. We strongly confirmed these predictions through a model-based fMRI analysis of a mixed gambles task that assessed subjective value (SV) and inverse decision entropy (iDE), which is related to confidence. Purported value areas more strongly signalled iDE than SV, underscoring how intertwined value and confidence are. A gradient tied to the desirability of actions transitioned from positive SV and iDE in ventromedial prefrontal cortex to negative SV and iDE in dorsal medial prefrontal cortex. This alignment of SV and iDE signals could support retrospective evaluation to guide learning and subsequent decisions.


2020 ◽  
Author(s):  
Yaelan Jung ◽  
Dirk B. Walther

Abstract Natural scenes deliver rich sensory information about the world. Decades of research have shown that the scene-selective network in the visual cortex represents various aspects of scenes. It is, however, unknown how such complex scene information is processed beyond the visual cortex, such as in the prefrontal cortex. It is also unknown how task context impacts the process of scene perception, modulating which scene content is represented in the brain. In this study, we investigate these questions using scene images from four natural scene categories, which also depict two global scene properties: temperature (warm or cold) and sound-level (noisy or quiet). Healthy human subjects of both sexes participated in the present fMRI study. Participants viewed scene images under two task conditions: temperature judgment and sound-level judgment. We analyzed how different scene attributes (scene categories, temperature, and sound-level information) are represented across the brain under these task conditions. Our findings show that global scene properties are represented in the brain, especially in the prefrontal cortex, only when they are task-relevant. However, scene categories are represented in the brain, in both the parahippocampal place area and the prefrontal cortex, regardless of task context. These findings suggest that the prefrontal cortex selectively represents scene content according to task demands, but this task selectivity depends on the type of scene content; task modulates neural representations of global scene properties but not of scene categories.


e-Neuroforum ◽  
2018 ◽  
Vol 24 (1) ◽  
pp. A11-A18
Author(s):  
Sabine Windmann ◽  
Grit Hein

Abstract Altruism is a puzzling phenomenon, especially for biology and economics. Why do individuals give away some of the resources they own to others, reducing their own prospects? The answer to this question can be sought at ultimate or proximate levels of explanation. The social neurosciences attempt to specify the brain mechanisms that drive humans to act altruistically, assuming that overtly identical behaviours can be driven by different motives. Research has shown that activations and functional connectivities of the anterior insula and the temporoparietal junction play specific roles in empathic versus strategic forms of altruism, whereas the dorsolateral prefrontal cortex, among other regions, is involved in norm-oriented, punitive forms of altruism. Future studies could focus on the processing of ambiguity and conflict in the pursuit of altruistic intentions.


2021 ◽  
Author(s):  
John Philippe Paulus ◽  
Carlo Vignali ◽  
Marc N Coutanche

Associative inference, the process of drawing novel links between existing knowledge to rapidly integrate associated information, is supported by the hippocampus and neocortex. Within the neocortex, the medial prefrontal cortex (mPFC) has been implicated in the rapid cortical learning of new information that is congruent with an existing framework of knowledge, or schema. How the brain integrates associations to form inferences, and specifically how inferences are represented, is not well understood. In this study, we investigate how the brain uses schemas to facilitate memory integration in an associative inference paradigm (A-B-C-D). We conducted two event-related fMRI experiments in which participants retrieved previously learned direct (AB, BC, CD) and inferred (AC, AD) associations between word pairs for items that are schema congruent or incongruent. Additionally, we investigated how two factors known to affect memory, a delay with sleep and reward, modulate the neural integration of associations within, and between, schemas. Schema congruency was found to benefit the integration of associates, but only when retrieval immediately followed learning. Representational similarity analysis (RSA) revealed that neural patterns of inferred pairs (AC) in the parahippocampal cortex (PHc), mPFC, and posterior hippocampus (posHPC) were more similar to their constituents (AB and BC) when the items were schema congruent, suggesting that schemas facilitate the assimilation of paired items into a single inferred unit containing all associated elements. Furthermore, a delay with sleep, but not reward, impacted the assimilation of inferred pairs. Our findings reveal that the neural representations of overlapping associations are integrated into novel representations through the support of memory schemas.
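The RSA logic described in this abstract, comparing the neural pattern of an inferred pair (AC) to the patterns of its constituent learned pairs (AB, BC), can be sketched with synthetic data. This is a minimal illustration only: the voxel count, noise levels, and use of Pearson correlation as the similarity measure are assumptions for the sketch, not values or choices from the study.

```python
# Minimal synthetic sketch of the RSA comparison described above: is the
# pattern evoked by an inferred pair (AC) more similar to its constituents
# (AB, BC) when items are schema congruent? All data here are simulated;
# a real analysis would use voxel-wise beta estimates from fMRI.
import random
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length pattern vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def constituent_similarity(ac, ab, bc):
    """Mean pattern similarity of an inferred pair to its two constituents."""
    return (pearson(ac, ab) + pearson(ac, bc)) / 2

random.seed(0)
n_voxels = 200
base = [random.gauss(0, 1) for _ in range(n_voxels)]
# Schema-congruent condition: the AC pattern shares structure with AB and BC.
ab = [b + random.gauss(0, 0.5) for b in base]
bc = [b + random.gauss(0, 0.5) for b in base]
ac_congruent = [b + random.gauss(0, 0.5) for b in base]
# Incongruent condition: the AC pattern is unrelated noise.
ac_incongruent = [random.gauss(0, 1) for _ in range(n_voxels)]

print(constituent_similarity(ac_congruent, ab, bc) >
      constituent_similarity(ac_incongruent, ab, bc))
```

In this simulation the congruent AC pattern inherits the shared "base" signal, so its constituent similarity exceeds that of the incongruent pattern, mirroring the direction of the reported effect.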


2020 ◽  
Author(s):  
Levan Bokeria ◽  
Richard Henson ◽  
Robert M Mok

Much of higher cognition involves abstracting away from sensory details and thinking conceptually. How do our brains learn and represent such abstract concepts? Recent work has proposed that neural representations in the medial temporal lobe (MTL), which are involved in spatial navigation, might also support learning of higher-level knowledge structures. These ideas are supported by findings that neural representations in MTL, as well as medial prefrontal cortex (mPFC), are involved in “navigation” of simple two-dimensional spaces of visual stimuli, social spaces and odor spaces. A recent study in the Journal of Neuroscience by Viganò & Piazza (2020) takes this research further by suggesting that entorhinal cortex (EHC) and mPFC are capable of mapping not only sensory spaces, but also abstract semantic spaces. In this opinion piece, we first describe the paradigm and results of the study, as well as the importance of the findings for the field. We then raise several methodological concerns and suggest changes to the paradigm to address these issues. Finally, we discuss potential future research directions including experimental and modelling approaches to tackle outstanding questions in the field.


2021 ◽  
Author(s):  
Rohan Saha ◽  
Jennifer Campbell ◽  
Janet F. Werker ◽  
Alona Fyshe

Infants develop rudimentary language skills and can understand simple words well before their first birthday. This development has been shown primarily using event-related potential (ERP) techniques that find evidence of word comprehension in the infant brain. While these studies validate the presence of semantic representations of words (word meaning) in infants, they do not tell us about the mental processes involved in the manifestation of these semantic representations or about the content of the representations. To this end, we use a decoding approach in which we apply machine learning techniques to electroencephalography (EEG) data to predict the semantic representations of words from the brain activity of infants. We perform multiple analyses to explore word semantic representations in two groups of infants (9-month-olds and 12-month-olds). Our analyses show significantly above-chance decodability of overall word semantics, word animacy, and word phonetics. Participants in both age groups show signs of word comprehension immediately after word onset, marked by our model's significantly above-chance word prediction accuracy. We also observed strong neural representations of word phonetics in the brain data for both age groups, some likely correlated with word decoding accuracy and others not. Lastly, we find that the neural representations of word semantics are similar in both infant age groups. Our results on the decodability of word semantics, phonetics, and animacy give us insights into the evolution of the neural representation of word meaning in infants.
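Decoding analyses of this kind are often scored with a "2 vs. 2" test: predict semantic vectors for two held-out words and check whether the correct pairing of predictions to targets is more similar than the swapped pairing. The sketch below illustrates that evaluation on simulated data; the metric choice (cosine similarity), the word labels, and the noise model are illustrative assumptions, not details taken from this paper.

```python
# Hedged sketch of a "2 vs. 2" decoding evaluation: given semantic vectors
# predicted from brain data for two words, does matching each prediction to
# its own target beat the swapped matching? Data here are simulated.
import random
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def two_vs_two(pred_i, pred_j, true_i, true_j):
    """True if the correct prediction-to-target pairing is more similar
    than the swapped pairing (a 'win' for the decoder)."""
    correct = cosine(pred_i, true_i) + cosine(pred_j, true_j)
    swapped = cosine(pred_i, true_j) + cosine(pred_j, true_i)
    return correct > swapped

random.seed(1)
dim = 50
true_dog = [random.gauss(0, 1) for _ in range(dim)]
true_cup = [random.gauss(0, 1) for _ in range(dim)]
# Simulate a decoder whose predictions are noisy versions of the targets.
pred_dog = [t + random.gauss(0, 0.8) for t in true_dog]
pred_cup = [t + random.gauss(0, 0.8) for t in true_cup]

print(two_vs_two(pred_dog, pred_cup, true_dog, true_cup))
```

Averaging such wins over all held-out word pairs yields a 2-vs-2 accuracy, where 50% is chance; "significantly above chance" in the abstract refers to this kind of comparison against a chance baseline.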


2020 ◽  
Author(s):  
Bart Hartogsveld ◽  
Conny W.E.M. Quaedflieg ◽  
Peter van Ruitenbeek ◽  
Tom Smeets

Binging disorders are characterized by episodes of eating large amounts of food whilst experiencing a loss of control. Recent studies suggest that the underlying causes of these binging disorders consist of a complex system of environmental cues, altered processing of food stimuli, altered behavioral responding, and brain changes. We propose that task-independent volumetric and connectivity changes in the brain are highly related to altered functioning in reward sensitivity, cognitive control, and negative affect, which in turn promotes and conserves binging behavior. Here we review imaging studies and show that volume and connectivity changes in the orbitofrontal cortex, inferior frontal gyrus, medial prefrontal cortex, striatum, insula, and amygdala overlap with the distorted brain activation associated with increased reward sensitivity, decreased cognitive control, and distorted responses to negative affect or stress seen in binging disorders. Future research integrating both task-based and task-independent neuroimaging approaches therefore shows considerable promise in clarifying binging behavior. We provide suggestions for how this integration may guide future research and inform novel brain-based treatment options for binging disorders.


2010 ◽  
Vol 39 (2) ◽  
pp. 205-220 ◽  
Author(s):  
Amanda W. Calkins ◽  
Christen M. Deveney ◽  
Meara L. Weitzman ◽  
Bridget A. Hearon ◽  
Greg J. Siegle ◽  
...  

Background: Recent advances have been made in the application of cognitive training strategies as interventions for mental disorders. One novel approach, cognitive control training (CCT), uses computer-based exercises to chronically increase prefrontal cortex recruitment. Activation of prefrontal control mechanisms has specifically been identified with attenuation of emotional responses. However, it is unclear whether recruitment of prefrontal resources alone is operative in this regard, or whether prefrontal control is important only in the role of explicit emotion regulation. This study examined whether exposure to cognitive tasks before an emotional challenge attenuated the effects of the emotional challenge. Aims: We investigated whether a single training session could alter participants' reactivity to subsequent emotional stimuli on two computer-based tasks, as well as affect ratings made during the study. We hypothesized that individuals performing the Cognitive Control (CC) task, as compared to those performing the Peripheral Vision (PV) comparison task, would (1) report reduced negative affect following the mood induction and the emotion task, (2) exhibit reduced reactivity (defined by lower affective ratings) to negative stimuli during both the reactivity and recovery phases of the emotion task, and (3) show a reduced bias towards threatening information. Method: Fifty-nine healthy participants were randomized to complete CC or PV tasks, underwent a negative mood induction, made valence and arousal ratings for IAPS images, and completed an assessment of attentional bias. Results: A single session of CC did not consistently alter participants' responses to either task. However, performance on the CC tasks was correlated with subsequent ratings of emotional images.
Conclusions: Overall, these results do not support the idea that affective responding is altered by having healthy volunteers engage the prefrontal cortex before an affective task; they are discussed in the context of study design issues and future research directions.


2020 ◽  
Author(s):  
Tomoyasu Horikawa ◽  
Yukiyasu Kamitani

Summary Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex for arbitrary visual instances [1–3], presumably reflecting the person’s visual experience. Previous reconstruction studies have been concerned either with how faithfully stimulus images are reconstructed or with whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped by both stimulus-induced and top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention impacts the neural representation of visual images and their reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features of multiple layers, and then creates images that are consistent with the decoded DNN features [3, 22, 23]. Using a deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled by superimposed images with contrasts biased toward the attended one, comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence.
Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening a new possibility for brain-based communication and creation.

