Influence of cognitive control on semantic representation

2016 ◽  
Author(s):  
Waitsang Keung ◽  
Daniel Osherson ◽  
Jonathan D. Cohen

Abstract
The neural representation of an object can change depending on its context. For instance, a horse may be more similar to a bear than to a dog in terms of size, but more similar to a dog in terms of domesticity. We used behavioral measures of similarity together with representational similarity analysis and functional connectivity of fMRI data in humans to reveal how the neural representation of semantic knowledge can change to match the current goal demands. Here we present evidence that objects similar to each other in a given context are also represented more similarly in the brain, and that these similarity relationships are modulated by context-specific activations in frontal areas.

Significance statement
The judgment of similarity between two objects can differ across contexts. Here we report a study that tested the hypothesis that brain areas associated with task context and cognitive control modulate semantic representations of objects in a task-specific way. We first demonstrate that task instructions affect how objects are represented in the brain. We then show that the expression of these representations is correlated with activity in regions of frontal cortex widely thought to represent context, attention, and control. In addition, we introduce spatial variance as a novel index of representational expression and attentional modulation. This promises to lay the groundwork for more exacting studies of the neural basis of semantics, as well as the dynamics of attentional modulation.
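As a minimal sketch of the representational similarity logic described in this abstract, the example below compares a neural dissimilarity matrix against behavioral similarity judgments collected under two hypothetical task contexts. All arrays are synthetic placeholders, not the authors' data or analysis code.

# Sketch of representational similarity analysis (RSA): correlate a neural
# dissimilarity matrix with behavioral dissimilarity ratings gathered under
# two different task contexts. All inputs below are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_objects, n_voxels = 12, 200
neural_patterns = rng.normal(size=(n_objects, n_voxels))   # one fMRI pattern per object

# Neural representational dissimilarity matrix (correlation distance).
neural_rdm = squareform(pdist(neural_patterns, metric="correlation"))

# Hypothetical behavioral dissimilarity ratings in two contexts (size vs. domesticity).
n_pairs = n_objects * (n_objects - 1) // 2
behavior_rdm_size = squareform(rng.uniform(size=n_pairs))
behavior_rdm_domestic = squareform(rng.uniform(size=n_pairs))

# Compare lower triangles with a rank correlation, one test per context.
tri = np.tril_indices(n_objects, k=-1)
for name, rdm in [("size", behavior_rdm_size), ("domesticity", behavior_rdm_domestic)]:
    rho, p = spearmanr(neural_rdm[tri], rdm[tri])
    print(f"neural ~ behavioral ({name}): rho={rho:+.2f}, p={p:.3f}")

A context effect of the kind reported would show up as a higher rank correlation with the behavioral matrix matching the current task instruction than with the matrix from the other context.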

2020 ◽  
Author(s):  
David Badre ◽  
Apoorva Bhandari ◽  
Haley Keglovits ◽  
Atsushi Kikumoto

Cognitive control allows us to think and behave flexibly based on our context and goals. At the heart of theories of cognitive control is a control representation that enables the same input to produce different outputs contingent on contextual factors. In this review, we focus on an important property of the control representation’s neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. We will discuss the implications of this trade-off for cognitive control. We will then briefly review current neuroscience findings regarding the dimensionality of control representations in the brain, particularly the prefrontal cortex. We conclude by highlighting open questions and crucial directions for future research.
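Representational dimensionality can be made concrete with a standard summary statistic. The sketch below computes the participation ratio of synthetic population activity; it is a generic illustration of the concept, not an analysis from this review.

# Participation ratio: a common scalar estimate of the dimensionality of a
# neural representation, computed from the eigenvalues of the covariance of
# population activity. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_neurons = 500, 100
# Low-dimensional latent structure embedded in high-dimensional activity.
latents = rng.normal(size=(n_trials, 5))
mixing = rng.normal(size=(5, n_neurons))
activity = latents @ mixing + 0.1 * rng.normal(size=(n_trials, n_neurons))

eigvals = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
participation_ratio = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(f"participation ratio = {participation_ratio:.1f} (synthetic data, {n_neurons} neurons)")

A low value relative to the number of neurons indicates a compressed, generalizable code; a value near the number of task conditions or neurons indicates a high-dimensional, highly separable one, which is the trade-off discussed above.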


2017 ◽  
Author(s):  
Cameron Parro ◽  
Matthew L Dixon ◽  
Kalina Christoff

Abstract
Cognitive control mechanisms support the deliberate regulation of thought and behavior based on current goals. Recent work suggests that motivational incentives improve cognitive control, and has begun to elucidate the brain regions that may support this effect. Here, we conducted a quantitative meta-analysis of neuroimaging studies of motivated cognitive control using activation likelihood estimation (ALE) and Neurosynth in order to delineate the brain regions that are consistently activated across studies. The analysis included functional neuroimaging studies that investigated changes in brain activation during cognitive control tasks when reward incentives were present versus absent. The ALE analysis revealed consistent recruitment in regions associated with the frontoparietal control network including the inferior frontal sulcus (IFS) and intraparietal sulcus (IPS), as well as consistent recruitment in regions associated with the salience network including the anterior insula and anterior mid-cingulate cortex (aMCC). A large-scale exploratory meta-analysis using Neurosynth replicated the ALE results, and also identified the caudate nucleus, nucleus accumbens, medial thalamus, inferior frontal junction/premotor cortex (IFJ/PMC), and hippocampus. Finally, we conducted separate ALE analyses to compare recruitment during cue and target periods, which tap into proactive engagement of rule-outcome associations, and the mobilization of appropriate viscero-motor states to execute a response, respectively. We found that largely distinct sets of brain regions are recruited during cue and target periods. Altogether, these findings suggest that flexible interactions between frontoparietal, salience, and dopaminergic midbrain-striatal networks may allow control demands to be precisely tailored based on expected value.
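Activation likelihood estimation models each reported activation peak as a spatial Gaussian and combines the resulting per-study maps into a voxelwise likelihood of activation. The toy one-dimensional sketch below illustrates that core computation with invented coordinates; it is not the ALE implementation used in this meta-analysis.

# Toy 1-D illustration of the activation likelihood estimation (ALE) idea:
# each study's reported peaks are blurred with a Gaussian to form a "modeled
# activation" (MA) map, and maps are combined as the probability that at
# least one study activated each location. Coordinates are invented.
import numpy as np

grid = np.linspace(0, 100, 1001)           # 1-D stand-in for voxel space
sigma = 5.0                                # smoothing kernel width (mm)

study_peaks = [
    [30.0, 32.0],                          # study 1: two nearby peaks
    [31.0],                                # study 2
    [70.0],                                # study 3: a different region
]

def modeled_activation(peaks):
    """Per-study MA map: union of Gaussian blobs centred on each peak."""
    gaussians = [np.exp(-(grid - p) ** 2 / (2 * sigma ** 2)) for p in peaks]
    return 1.0 - np.prod([1.0 - g for g in gaussians], axis=0)

ma_maps = np.array([modeled_activation(p) for p in study_peaks])

# ALE value: probability that at least one study shows activation here.
ale = 1.0 - np.prod(1.0 - ma_maps, axis=0)
print("location of peak ALE value:", grid[np.argmax(ale)])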


2010 ◽  
Vol 104 (5) ◽  
pp. 2831-2849 ◽  
Author(s):  
Michael Campos ◽  
Boris Breznen ◽  
Richard A. Andersen

In the study of the neural basis of sensorimotor transformations, it has become clear that the brain does not always wait to sense external events and afterward select the appropriate responses. If there are predictable regularities in the environment, the brain begins to anticipate the timing of instructional cues and the signals to execute a response, revealing an internal representation of the sequential behavioral states of the task being performed. To investigate neural mechanisms that could represent the sequential states of a task, we recorded neural activity from two oculomotor structures implicated in behavioral timing—the supplementary eye fields (SEF) and the lateral intraparietal area (LIP)—while rhesus monkeys performed a memory-guided saccade task. The neurons of the SEF were found to collectively encode the progression of the task, with individual neurons predicting and/or detecting states or transitions between states. LIP neurons, while also encoding information about the current temporal interval, were limited relative to SEF neurons in two ways. First, LIP neurons tended to be active when the monkey was planning a saccade but not in the precue or intertrial intervals, whereas SEF neurons tended to show activity modulation in all intervals. Second, LIP neurons were more likely to be spatially tuned than SEF neurons. SEF neurons also showed anticipatory activity. The state-selective and anticipatory responses of SEF neurons support two complementary models of behavioral timing, state-dependent and accumulator models, and suggest that each model describes a contribution that the SEF makes to timing at different temporal resolutions.
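The accumulator account of behavioral timing mentioned above can be illustrated with a minimal ramp-to-threshold model. The sketch below is a generic illustration with arbitrary parameters, not the authors' model of SEF activity.

# Minimal accumulator model of interval timing: noisy evidence ramps toward a
# threshold, and the threshold-crossing time serves as the timed interval.
# All parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(2)

dt = 0.001            # seconds per step
drift = 1.0           # mean accumulation rate (a.u. per second)
noise_sd = 0.3        # diffusion noise
threshold = 1.0       # yields ~1 s intervals on average

def one_trial():
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    return t

crossing_times = np.array([one_trial() for _ in range(200)])
print(f"mean timed interval: {crossing_times.mean():.2f} s "
      f"(CV = {crossing_times.std() / crossing_times.mean():.2f})")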


2012 ◽  
Vol 24 (1) ◽  
pp. 133-147 ◽  
Author(s):  
Carin Whitney ◽  
Marie Kirk ◽  
Jamie O'Sullivan ◽  
Matthew A. Lambon Ralph ◽  
Elizabeth Jefferies

To understand the meanings of words and objects, we need to have knowledge about these items themselves plus executive mechanisms that compute and manipulate semantic information in a task-appropriate way. The neural basis for semantic control remains controversial. Neuroimaging studies have focused on the role of the left inferior frontal gyrus (LIFG), whereas neuropsychological research suggests that damage to a widely distributed network elicits impairments of semantic control. There is also debate about the relationship between semantic and executive control more widely. We used TMS in healthy human volunteers to create “virtual lesions” in structures typically damaged in patients with semantic control deficits: LIFG, left posterior middle temporal gyrus (pMTG), and intraparietal sulcus (IPS). The influence of TMS on tasks varying in semantic and nonsemantic control demands was examined for each region within this hypothesized network to gain insights into (i) their functional specialization (i.e., involvement in semantic representation, controlled retrieval, or selection) and (ii) their domain dependence (i.e., semantic or cognitive control). The results revealed that LIFG and pMTG jointly support both the controlled retrieval and selection of semantic knowledge. IPS specifically participates in semantic selection and responds to manipulations of nonsemantic control demands. These observations are consistent with a large-scale semantic control network, as predicted by lesion data, that draws on semantic-specific (LIFG and pMTG) and domain-independent executive components (IPS).


2020 ◽  
Vol 29 (2) ◽  
pp. 126-133 ◽  
Author(s):  
Jordan Grafman ◽  
Irene Cristofori ◽  
Wanting Zhong ◽  
Joseph Bulbulia

Religion’s neural underpinnings have long been a topic of speculation and debate, but an emerging neuroscience of religion is beginning to clarify which regions of the brain integrate moral, ritual, and supernatural religious beliefs with functionally adaptive responses. Here, we review evidence indicating that religious cognition involves a complex interplay among the brain regions underpinning cognitive control, social reasoning, social motivations, and ideological beliefs.


2018 ◽  
Author(s):  
Ehud Vinepinsky ◽  
Lear Cohen ◽  
Shay Perchik ◽  
Ohad Ben-Shahar ◽  
Opher Donchin ◽  
...  

Abstract
As in most animals, the survival of fish depends crucially on navigation in space. This capacity has been documented in numerous behavioral studies that have revealed navigation strategies and the sensory modalities used for navigation. However, virtually nothing is known about how freely swimming fish represent space and locomotion in the brain to enable successful navigation. Using a novel wireless neural recording system, we measured the activity of single neurons in the goldfish lateral pallium, a brain region known to be involved in spatial memory and navigation, while the fish swam freely in a two-dimensional water tank. Four cell types were identified: border cells, head direction cells, speed cells, and conjunctive head direction-by-speed cells. Border cells were active when the fish was near the boundary of the environment. Head direction cells encoded the direction in which the fish was heading. Speed cells encoded only absolute speed, independent of direction, suggestive of an odometry signal. Finally, the conjunctive head direction-by-speed cells represented the velocity of the fish. This study thus sheds light on how information related to navigation is represented in the brain of swimming fish, and addresses the fundamental question of the neural basis of navigation in this diverse group of vertebrates. The similarities between our observations in fish and earlier findings in mammals may indicate that the networks controlling navigation in vertebrates originate from an ancient circuit common across vertebrates.

Summary
Navigation is a fundamental behavioral capacity that facilitates survival in many animal species. Fish are one lineage in which navigation has been explored behaviorally, but it remains unclear how freely swimming fish represent space and locomotion in the brain. This is a key open question in our understanding of navigation in fish and, more generally, of the evolutionary origin of the brain's navigation system. To address this issue, we recorded neuronal signals from the brain of freely swimming goldfish and identified representations of environmental borders and of swimming kinematics in a brain region known to be associated with navigation. Our findings thus provide a glimpse into the building blocks of the neural representation underlying fish navigation. The similarity of the representation in fish to that of mammals may be key evidence of an ancient mechanism preserved across brain evolution.
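A standard way to identify head direction cells of the kind reported here is to bin firing by heading and measure tuning strength with the mean resultant (Rayleigh) vector length. The sketch below applies that logic to synthetic spike and heading data; it is not the authors' analysis pipeline.

# Classify a unit as head-direction-tuned from synthetic data: bin firing by
# heading and summarize tuning strength with the mean resultant vector length.
import numpy as np

rng = np.random.default_rng(3)

n_samples = 20_000
heading = rng.uniform(0, 2 * np.pi, n_samples)              # heading per 20 ms bin (rad)
preferred = np.pi / 3
rate = 2.0 + 8.0 * np.exp(np.cos(heading - preferred) - 1)  # von-Mises-like tuning (Hz)
spikes = rng.poisson(rate * 0.02)                           # counts in 20 ms bins

bins = np.linspace(0, 2 * np.pi, 37)
bin_idx = np.digitize(heading, bins) - 1
tuning = np.array([spikes[bin_idx == b].mean() for b in range(36)])
bin_centres = (bins[:-1] + bins[1:]) / 2

# Mean resultant vector length of the tuning curve (0 = untuned, 1 = perfectly tuned).
r = np.abs(np.sum(tuning * np.exp(1j * bin_centres))) / tuning.sum()
print(f"preferred direction = {np.degrees(bin_centres[np.argmax(tuning)]):.0f} deg, "
      f"Rayleigh vector length = {r:.2f}")

Analogous tuning curves against distance-to-boundary or swimming speed would pick out the border and speed cells described in the abstract.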


2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Tijl Grootswagers ◽  
Ivy Zhou ◽  
Amanda K. Robinson ◽  
Martin N. Hebart ◽  
Thomas A. Carlson

Abstract
The neural basis of object recognition and semantic knowledge has been extensively studied, but the high dimensionality of object space makes it challenging to develop overarching theories of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated, high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research aimed at understanding visual object processing in the human brain.
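A typical use of a dataset like THINGS-EEG is time-resolved decoding of object identity from the EEG epochs. The sketch below illustrates that kind of analysis on synthetic data with an epochs x channels x time layout; the array shapes, names, and injected effect are assumptions, not the released dataset format or analysis code.

# Time-resolved decoding sketch: classify which of two object concepts was
# shown, separately at each time point, from EEG-like epochs. Data are
# synthetic stand-ins with an epochs x channels x time layout.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

n_epochs, n_channels, n_times = 200, 64, 100
X = rng.normal(size=(n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)                 # two object concepts

# Inject a weak class difference from samples 30-60 post-onset.
X[y == 1, :10, 30:60] += 0.3

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy:", accuracy.max().round(2),
      "at sample", int(accuracy.argmax()))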


Author(s):  
Elizabeth Musz ◽  
Sharon L. Thompson-Schill

Semantic memory is composed of one's accumulated world knowledge. This includes stored factual information about real-world objects and animals, which enables one to recognize and interact with the things in one's environment. How is this semantic information organized, and where is it stored in the brain? Newly developed functional magnetic resonance imaging (fMRI) methods have provided exciting and innovative approaches to studying these questions. In particular, several recent fMRI investigations have examined the neural bases of semantic knowledge using similarity-based approaches. In similarity models, data from direct (i.e., neural) and indirect (i.e., subjective, psychological) measurements are interpreted as proximity data that provide information about the relationships among object concepts in an abstract, high-dimensional space. Concepts are encoded as points in this conceptual space, such that the semantic relatedness of two concepts is determined by their distance from one another. Using this approach, neuroimaging studies have offered compelling insights into several open questions about how object concepts are represented in the brain. This chapter briefly describes how similarity spaces are computed from both behavioral data and spatially distributed fMRI activity patterns. It then reviews empirical reports that relate observed neural similarity spaces to various models of semantic similarity. The chapter examines how these methods have both shaped and informed our current understanding of the neural representation of conceptual information about real-world objects.
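The similarity-space approach described here can be sketched in a few lines: compute pairwise dissimilarities between concept-evoked activity patterns, then embed each concept as a point so that inter-point distance approximates dissimilarity. The example below uses synthetic patterns and multidimensional scaling as one common embedding choice; it is illustrative only, and the concept names are placeholders.

# Sketch of the similarity-space approach: treat pairwise dissimilarities
# between concept-evoked fMRI patterns as distances, then embed each concept
# as a point in a low-dimensional space. All data are synthetic placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(5)

concepts = ["horse", "bear", "dog", "hammer", "cup", "chair"]
patterns = rng.normal(size=(len(concepts), 300))      # one fMRI pattern per concept

# Neural representational dissimilarity matrix (correlation distance).
rdm = squareform(pdist(patterns, metric="correlation"))

# Embed concepts as points so that distance approximates dissimilarity.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(rdm)

for name, (x, y) in zip(concepts, embedding):
    print(f"{name:>7s}: ({x:+.2f}, {y:+.2f})")

In the studies reviewed, the neural dissimilarity matrix is compared against behavioral or model-derived similarity spaces rather than visualized in isolation.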


2001 ◽  
Vol 4 (2) ◽  
pp. 101-103
Author(s):  
David W. Green

The papers in this Special Issue focus on the use of neuroimaging techniques to answer questions about the neural representation, processing, and control of two languages. Neuropsychological data from bilingual aphasics remain vital if we are to establish the neural basis of language (see Paradis, 1995), but lesion-deficit studies alone cannot tell us how neural activity relates to ongoing language processing. Modern neuroimaging methods provide a means to do so. There are two broad classes of such methods: electrophysiological methods allow us to answer questions about when a particular process occurs, whereas haemodynamic methods allow us to answer the complementary question of where in the brain such a process is carried out. Before giving a thumbnail sketch of the papers in this Special Issue, I briefly discuss each class of method.


2016 ◽  
Author(s):  
Alona Fyshe ◽  
Gustavo Sudre ◽  
Leila Wehbe ◽  
Nicole Rafidi ◽  
Tom M. Mitchell

Abstract
As a person reads, the brain performs complex operations to create higher-order semantic representations from individual words. While these steps are effortless for competent readers, we are only beginning to understand how the brain performs these actions. Here, we explore semantic composition using magnetoencephalography (MEG) recordings of people reading adjective-noun phrases presented one word at a time. We track the neural representation of semantic information over time, through different brain regions. Our results reveal two novel findings: 1) a neural representation of the adjective is present during noun presentation, but this representation differs from that observed during adjective presentation; and 2) the neural representation of adjective semantics observed during adjective reading is reactivated after phrase reading, with remarkable consistency. We also note that while the semantic representation of the adjective during adjective reading is widely distributed, the later representations are concentrated largely in temporal and frontal areas previously associated with composition. Taken together, these results paint a picture of information flow in the brain as phrases are read and understood.
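The reactivation result described above is the kind of effect typically probed with temporal generalization analysis: train a decoder at one time point and test it at every other. The sketch below illustrates this on synthetic MEG-like data; the array names, dimensions, and injected effect are assumptions, not the authors' data or code.

# Temporal generalization sketch: train a classifier on adjective identity at
# each time point and test it at every other time point. A decoder trained
# during adjective presentation that succeeds at a later test time would
# indicate reactivation of the same representation. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

n_trials, n_sensors, n_times = 120, 50, 60
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                  # two adjective classes

# Inject the same class-dependent pattern early (samples 10-20) and late
# (45-55), mimicking a representation that is later reactivated.
pattern = rng.normal(size=n_sensors)
for window in [slice(10, 20), slice(45, 55)]:
    X[y == 1, :, window] += 0.3 * pattern[:, None]

train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)
tgm = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        tgm[t_train, t_test] = clf.score(X[test, :, t_test], y[test])

print("train-early / test-late accuracy:", tgm[15, 50].round(2))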

