Early and Late Stimulus-Evoked Cortical Hemodynamic Responses Provide Insight into the Neurogenic Nature of Neurovascular Coupling

2011 · Vol 32 (3) · pp. 468-480
Author(s): Aneurin J Kennerley, Sam Harris, Michael Bruyns-Haylett, Luke Boorman, Ying Zheng, et al.

Understanding neurovascular coupling is a prerequisite for the interpretation of results obtained from modern neuroimaging techniques. This study investigated the hemodynamic and neural responses in rat somatosensory cortex elicited by 16-second electrical whisker stimuli. Hemodynamics were measured by optical imaging spectroscopy and neural activity by multichannel electrophysiology. Previous studies have suggested that the whisker-evoked hemodynamic response comprises two mechanisms: a transient ‘backwards’ dilation of the middle cerebral artery, followed by an increase in blood volume localized to the site of neural activity. To distinguish between the mechanisms responsible for these aspects of the response, we presented whisker stimuli during normocapnia (‘control’) and during a high level of hypercapnia. Hypercapnia was used to ‘predilate’ arteries and thus potentially ‘inhibit’ aspects of the response related to the ‘early’ mechanism. Indeed, hemodynamic data suggested that the transient stimulus-evoked response was absent under hypercapnia. However, evoked neural responses were also altered during hypercapnia, and convolution of the neural responses from both the normocapnic and hypercapnic conditions with a canonical impulse response function suggested that neurovascular coupling was similar in both conditions. Although the data did not clearly dissociate early and late vascular responses, they suggest that the neurovascular coupling relationship is neurogenic in origin.
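The linear-coupling test described above amounts to convolving a measured neural time series with a canonical impulse response function and comparing the prediction against the measured hemodynamics. A minimal Python sketch, assuming a gamma-variate kernel and a 10 Hz sampling rate (illustrative values, not those used in the study):

import numpy as np

fs = 10.0                                   # sampling rate in Hz (assumed)
t_kernel = np.arange(0, 25, 1 / fs)         # 25 s of kernel support

def gamma_irf(t, tau=1.0, n=3.0):
    # Gamma-variate impulse response peaking near n * tau seconds (illustrative parameters)
    h = (t / tau) ** n * np.exp(-t / tau)
    return h / h.sum()                      # unit area, so response amplitude is preserved

# Toy "neural" drive: summed evoked activity for a 16 s stimulus beginning at t = 2 s
time = np.arange(0, 40, 1 / fs)
neural = ((time >= 2) & (time < 18)).astype(float)

# Predicted hemodynamics under a fixed linear coupling; repeating this for the
# normocapnic and hypercapnic neural traces tests whether one kernel fits both
predicted_hemodynamics = np.convolve(neural, gamma_irf(t_kernel))[: len(time)]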

2003 · Vol 23 (5) · pp. 546-555
Author(s): John Martindale, John Mayhew, Jason Berwick, Myles Jones, Chris Martin, et al.

This article investigates the relation between stimulus-evoked neural activity and cerebral hemodynamics. Specifically, the hypothesis is tested that hemodynamic responses can be modeled as a linear convolution of experimentally obtained measures of neural activity with a suitable hemodynamic impulse response function. To obtain a range of neural and hemodynamic responses, rat whisker pad was stimulated using brief (≤2 seconds) electrical stimuli consisting of single pulses (0.3 millisecond, 1.2 mA) combined both at different frequencies and in a paired-pulse design. Hemodynamic responses were measured using concurrent optical imaging spectroscopy and laser Doppler flowmetry, whereas neural responses were assessed through current source density analysis of multielectrode recordings from a single barrel. General linear modeling was used to deconvolve the hemodynamic impulse response to a single “neural event” from the hemodynamic and neural responses to stimulation. The model provided an excellent fit to the empirical data. The implications of these results for modeling schemes and for physiologic systems coupling neural and hemodynamic activity are discussed.
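The deconvolution step can be framed as ordinary least squares: the neural time series is arranged into a convolution (Toeplitz) design matrix, and the impulse response is the vector that best maps it onto the hemodynamic signal. A sketch with synthetic data, an assumed sampling rate, and an assumed kernel length (the paper's general linear model includes terms not shown here):

import numpy as np
from scipy.linalg import toeplitz

def deconvolve_irf(neural, hemo, kernel_len):
    # Build the convolution matrix: row i holds neural[i], neural[i-1], ..., so that
    # X @ h is the (truncated) convolution of the neural trace with a kernel h
    first_row = np.zeros(kernel_len)
    first_row[0] = neural[0]
    X = toeplitz(neural, first_row)
    h, *_ = np.linalg.lstsq(X, hemo, rcond=None)   # least-squares impulse response
    return h

# Synthetic example at an assumed 10 Hz sampling rate
fs, duration = 10, 60
t = np.arange(0, duration, 1 / fs)
neural = np.zeros(len(t))
neural[::40] = 1.0                                               # one "neural event" every 4 s
true_irf = np.exp(-np.arange(0, 10, 1 / fs) / 2.0)               # toy ground-truth kernel
hemo = np.convolve(neural, true_irf)[: len(t)] + 0.01 * np.random.randn(len(t))
estimated_irf = deconvolve_irf(neural, hemo, kernel_len=len(true_irf))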


2020
Author(s): Vahid Mehrpour, Travis Meyer, Eero P. Simoncelli, Nicole C. Rust

Memories of the images that we have seen are thought to be reflected in the reduction of neural responses in high-level visual areas such as inferotemporal (IT) cortex, a phenomenon known as repetition suppression (RS). We challenged this hypothesis with a task that required rhesus monkeys to report image familiarity while ignoring variations in contrast, a stimulus attribute that is also known to modulate the overall IT response. The monkeys’ behavior was largely contrast-invariant, contrary to the predictions of the RS encoding scheme, which cannot distinguish changes in familiarity from changes in contrast. However, the monkeys’ behavioral patterns were well predicted by a linearly decodable variant in which the total spike count is corrected for contrast modulation. These results suggest that the IT neural activity pattern that best aligns with single-exposure visual familiarity behavior is not RS but rather “sensory referenced suppression (SRS)”: reductions in IT population response magnitude, corrected for sensory modulation.
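The two read-outs contrasted above can be written as simple linear decoders of the population spike count: one uses the raw total count (RS), the other subtracts the component predicted by stimulus contrast before reading out familiarity (SRS). A sketch in which the regression-based contrast correction and the toy Poisson data are illustrative assumptions, not the paper's fitting procedure:

import numpy as np

def rs_score(spike_counts):
    # Repetition-suppression read-out: lower total population count -> "familiar"
    return -spike_counts.sum(axis=-1)

def srs_score(spike_counts, contrast, slope, intercept):
    # Sensory-referenced suppression: remove the component of the total count
    # predicted by stimulus contrast before reading out familiarity
    total = spike_counts.sum(axis=-1)
    expected_from_contrast = slope * contrast + intercept
    return -(total - expected_from_contrast)

# Toy data: 200 trials x 50 neurons, firing scaled by contrast (illustrative only)
rng = np.random.default_rng(0)
contrast = rng.uniform(0.2, 1.0, size=200)
counts = rng.poisson(20 * contrast[:, None], size=(200, 50))
slope, intercept = np.polyfit(contrast, counts.sum(axis=1), 1)   # contrast model (assumed fit)
raw = rs_score(counts)                                           # confounded by contrast
corrected = srs_score(counts, contrast, slope, intercept)        # contrast-referenced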


2013 · Vol 25 (11) · pp. 1887-1895
Author(s): Katherine E. Powers, Leah H. Somerville, William M. Kelley, Todd F. Heatherton

As a social species, humans are acutely aware of cues that signal inclusionary status. This study characterizes behavioral and neural responses when individuals anticipate social feedback. Across two fMRI studies, participants (n = 42) made social judgments about supposed peers and then received feedback from those individuals. Of particular interest was the neural activity occurring when participants were awaiting social feedback. During this anticipatory period, increased neural activity was observed in the ventral striatum, a central component of the brain's reward circuitry, and dorsomedial pFC, a brain region implicated in mentalizing about others. Individuals high in rejection sensitivity exhibited greater responses in both the ventral striatum and dorsomedial pFC when anticipating positive feedback. These findings provide initial insight into the neural mechanisms involved in anticipating social evaluations as well as the cognitive processes that underlie rejection sensitivity.


2021 · Vol 7 (22) · pp. eabe7547
Author(s): Meenakshi Khosla, Gia H. Ngo, Keith Jamison, Amy Kuceyeski, Mert R. Sabuncu

Naturalistic stimuli, such as movies, activate a substantial portion of the human brain, invoking a response shared across individuals. Encoding models that predict neural responses to arbitrary stimuli can be very useful for studying brain function. However, existing models focus on limited aspects of naturalistic stimuli, ignoring the dynamic interactions of modalities in this inherently context-rich paradigm. Using movie-watching data from the Human Connectome Project, we build group-level models of neural activity that incorporate several inductive biases about neural information processing, including hierarchical processing, temporal assimilation, and auditory-visual interactions. We demonstrate how incorporating these biases leads to remarkable prediction performance across large areas of the cortex, beyond the sensory-specific cortices into multisensory sites and frontal cortex. Furthermore, we illustrate that encoding models learn high-level concepts that generalize to task-bound paradigms. Together, our findings underscore the potential of encoding models as powerful tools for studying brain function in ecologically valid conditions.
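At its core, an encoding model of this kind is a regularized regression from stimulus features to measured responses, evaluated on held-out movie segments. A schematic Python sketch; the random feature matrix stands in for hierarchical audio-visual network activations, and the ridge penalty is an assumed value (the paper's models are deep networks with explicit temporal components, which are not reproduced here):

import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_regions = 800, 200, 512, 300

# Stand-in stimulus features and neural responses (synthetic)
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
W_true = 0.1 * rng.standard_normal((n_feat, n_regions))
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_regions))
Y_test = X_test @ W_true + rng.standard_normal((n_test, n_regions))

# Fit one regularized linear map for all regions, then score held-out predictions
model = Ridge(alpha=10.0).fit(X_train, Y_train)        # alpha is an assumed value
Y_pred = model.predict(X_test)
accuracy = np.array([pearsonr(Y_pred[:, r], Y_test[:, r])[0] for r in range(n_regions)])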


2014 · Vol 112 (6) · pp. 1584-1598
Author(s): Marino Pagan, Nicole C. Rust

The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′).
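The core of the method is a change of basis: a neuron's mean responses across task conditions are projected onto a predefined orthonormal basis, and squared projection weights are pooled into signal-modulation measures. A sketch for a hypothetical 2 x 2 design; the basis, the grouping of components, and the omission of the analytic noise-bias correction are all simplifications of the published method:

import numpy as np

# Orthonormal basis over four conditions (hypothetical 2 x 2 design); rows are basis vectors
B = np.array([[ 1,  1,  1,  1],    # constant (overall firing rate)
              [ 1,  1, -1, -1],    # e.g., visual signal (image identity)
              [ 1, -1,  1, -1],    # e.g., task/context signal
              [ 1, -1, -1,  1]],   # interaction
             dtype=float) / 2.0
assert np.allclose(B @ B.T, np.eye(4))          # rows are orthonormal

def signal_modulation(mean_responses, basis, groups):
    # Project condition-wise mean responses onto the basis and pool squared weights
    w = basis @ mean_responses
    return {name: float(np.sum(w[idx] ** 2)) for name, idx in groups.items()}

groups = {"visual": [1], "task": [2], "interaction": [3]}
neuron = np.array([12.0, 10.5, 7.0, 6.0])       # toy mean firing rates per condition
print(signal_modulation(neuron, B, groups))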


2007 · Vol 33 (2-3) · pp. 433-456
Author(s): Adam J. Kolber

A neurologist with abdominal pain goes to see a gastroenterologist for treatment. The gastroenterologist asks the neurologist where it hurts. The neurologist replies, “In my head, of course.” Indeed, while we can feel pain throughout much of our bodies, pain signals undergo most of their processing in the brain. Using neuroimaging techniques like functional magnetic resonance imaging (“fMRI”) and positron emission tomography (“PET”), researchers have more precisely identified brain regions that enable us to experience physical pain. Certain regions of the brain's cortex, for example, increase in activation when subjects are exposed to painful stimuli. Furthermore, the amount of activation increases with the intensity of the painful stimulus. These findings suggest that we may be able to gain insight into the amount of pain a particular person is experiencing by non-invasively imaging his brain. Such insight could be particularly valuable in the courtroom, where we often have no definitive medical evidence to prove or disprove claims about the existence and extent of pain symptoms.


RSC Advances · 2016 · Vol 6 (1) · pp. 439-447
Author(s): Rui Dou, Shuanglin Li, Yan Shao, Bo Yin, Mingbo Yang

A hierarchical tri-continuous structure is formed and controlled in PVDF/PS/HDPE ternary blends. A very high level of PS continuity (about 80%) is achieved with a PS volume fraction of only 11 vol%.


2007 · Vol 98 (4) · pp. 2089-2098
Author(s): Sean P. MacEvoy, Russell A. Epstein

Complex visual scenes preferentially activate several areas of the human brain, including the parahippocampal place area (PPA), the retrosplenial complex (RSC), and the transverse occipital sulcus (TOS). The sensitivity of neurons in these regions to the retinal position of stimuli is unknown, but could provide insight into their roles in scene perception and navigation. To address this issue, we used functional magnetic resonance imaging (fMRI) to measure neural responses evoked by sequences of scenes and objects confined to either the left or right visual hemifields. We also measured the level of adaptation produced when stimuli were either presented first in one hemifield and then repeated in the opposite hemifield or repeated in the same hemifield. Although overall responses in the PPA, RSC, and TOS tended to be higher for contralateral stimuli than for ipsilateral stimuli, all three regions exhibited position-invariant adaptation, insofar as the magnitude of adaptation did not depend on whether stimuli were repeated in the same or opposite hemifields. In contrast, object-selective regions showed significantly greater adaptation when objects were repeated in the same hemifield. These results suggest that neuronal receptive fields (RFs) in scene-selective regions span the vertical meridian, whereas RFs in object-selective regions do not. The PPA, RSC, and TOS may support scene perception and navigation by maintaining stable representations of large-scale features of the visual environment that are insensitive to the shifts in retinal stimulation that occur frequently during natural vision.
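The position-invariance test reduces to comparing adaptation magnitudes (first-presentation response minus repeated-presentation response) for same-hemifield versus opposite-hemifield repeats. A sketch with synthetic per-subject data; the array layout and the paired t-test are illustrative assumptions rather than the paper's exact statistics:

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
# 16 subjects x 2 repeat conditions: column 0 = repeat in same hemifield,
# column 1 = repeat in opposite hemifield (synthetic response estimates)
first_presentation = rng.normal(1.0, 0.1, size=(16, 2))
repeated_presentation = first_presentation - rng.normal(0.3, 0.05, size=(16, 2))

adaptation = first_presentation - repeated_presentation        # per-subject adaptation magnitude
t_stat, p_val = ttest_rel(adaptation[:, 0], adaptation[:, 1])  # no difference -> position invariance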


2017 · Vol 117 (1) · pp. 388-402
Author(s): Michael A. Cohen, George A. Alvarez, Ken Nakayama, Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY: Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
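The brain/behavior comparison is a representational similarity analysis: pairwise dissimilarities between a region's category response patterns are correlated with search times for the same category pairs. A sketch using random patterns in place of measured fMRI data; the correlation-distance metric and the Spearman correlation are assumed choices:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_categories, n_voxels = 8, 100                                  # 8 categories -> 28 pairs
patterns = rng.standard_normal((n_categories, n_voxels))         # stand-in response patterns
neural_dissimilarity = pdist(patterns, metric="correlation")     # 28 pairwise dissimilarities

search_rt = rng.uniform(0.4, 1.2, size=len(neural_dissimilarity))  # per-pair mean search time (s)
rho, p = spearmanr(neural_dissimilarity, search_rt)
# A negative rho would mean search is slower when the two categories evoke
# more similar neural patterns, i.e., neural similarity predicts behavior.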

