EEG Frequency Tagging to Dissociate the Cortical Responses to Nociceptive and Nonnociceptive Stimuli

2014 ◽  
Vol 26 (10) ◽  
pp. 2262-2274 ◽  
Author(s):  
Elisabeth Colon ◽  
Valéry Legrain ◽  
André Mouraux

Whether the cortical processing of nociceptive input relies on the activity of nociceptive-specific neurons or whether it relies on the activity of neurons also involved in processing nonnociceptive sensory input remains a matter of debate. Here, we combined EEG “frequency tagging” of steady-state evoked potentials (SS-EPs) with an intermodal selective attention paradigm to test whether the cortical processing of nociceptive input relies on nociceptive-specific neuronal populations that can be selectively modulated by top–down attention. Trains of nociceptive and vibrotactile stimuli (Experiment 1) and trains of nociceptive and visual stimuli (Experiment 2) were applied concomitantly to the same hand, thus eliciting nociceptive, vibrotactile, and visual SS-EPs. In each experiment, a target detection task was used to focus attention toward one of the two concurrent streams of sensory input. We found that selectively attending to nociceptive or vibrotactile somatosensory input indistinctly enhances the magnitude of nociceptive and vibrotactile SS-EPs, whereas selectively attending to nociceptive or visual input independently enhances the magnitude of the SS-EP elicited by the attended sensory input. This differential effect indicates that the processing of nociceptive input involves neuronal populations also involved in the processing of touch, but distinct from the neuronal populations involved in vision.
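The frequency-tagging approach described above works because a periodic stimulus train drives a steady-state response at its own presentation frequency, so each sensory stream can be read out at "its" spectral line. A minimal sketch of that readout, on synthetic data (the 7 Hz and 11 Hz tags, sampling rate, and amplitudes below are illustrative assumptions, not values from the study):

```python
import numpy as np

def ssep_amplitude(eeg, fs, f_tag):
    """Spectral amplitude at a tagging frequency via FFT.

    eeg: 1-D signal (one channel, one trial); fs: sampling rate (Hz);
    f_tag: stimulation ("tagging") frequency in Hz.
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) / n       # normalized magnitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - f_tag))        # nearest frequency bin
    return spectrum[idx]

# Synthetic trial: a strong 7 Hz tag, a weaker 11 Hz tag, plus noise.
fs, dur = 250, 10.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = (2.0 * np.sin(2 * np.pi * 7 * t)
       + 1.0 * np.sin(2 * np.pi * 11 * t)
       + rng.normal(0, 0.5, t.size))

a7 = ssep_amplitude(eeg, fs, 7.0)    # responds to the 7 Hz stream
a11 = ssep_amplitude(eeg, fs, 11.0)  # responds to the 11 Hz stream
```

Attention effects of the kind reported in the abstract would then show up as changes in these per-frequency amplitudes across conditions.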

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chloé Stengel ◽  
Marine Vernet ◽  
Julià L. Amengual ◽  
Antoni Valero-Cabré

Correlational evidence in non-human primates has reported increases of fronto-parietal high-beta (22–30 Hz) synchrony during the top-down allocation of visuo-spatial attention. But does inter-regional synchronization at this specific frequency band provide a causal mechanism by which top-down attentional processes facilitate conscious visual perception? To address this question, we analyzed electroencephalographic (EEG) signals from a group of healthy participants who performed a conscious visual detection task while we delivered brief (4 pulses) rhythmic (30 Hz) or random bursts of Transcranial Magnetic Stimulation (TMS) to the right Frontal Eye Field (FEF) prior to the onset of a lateralized target. We report increases of inter-regional synchronization in the high-beta band (25–35 Hz) between the electrode closest to the stimulated region (the right FEF) and right parietal EEG leads, and increases of local inter-trial coherence within the same frequency band over bilateral parietal EEG contacts, both driven by rhythmic but not random TMS patterns. Such increases were accompanied by improvements of conscious visual sensitivity for left visual targets in the rhythmic but not the random TMS condition. These outcomes suggest that high-beta inter-regional synchrony can be modulated non-invasively and that high-beta oscillatory activity across the right dorsal fronto-parietal network may contribute to the facilitation of conscious visual perception. Our work supports future applications of non-invasive brain stimulation to restore impaired visually-guided behaviors by operating on top-down attentional modulatory mechanisms.
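Inter-trial coherence, one of the measures this abstract relies on, quantifies how consistently the phase of an oscillation at a given frequency repeats across trials: 1 means identical phase on every trial, values near 0 mean random phase. A minimal sketch on synthetic trials (the 30 Hz frequency matches the TMS rhythm in the abstract; sampling rate, trial counts, and noise levels are illustrative assumptions):

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """Inter-trial coherence (phase locking across trials) at one frequency.

    trials: array of shape (n_trials, n_samples); fs: sampling rate (Hz).
    Returns a value in [0, 1].
    """
    n_trials, n_samples = trials.shape
    t = np.arange(n_samples) / fs
    carrier = np.exp(-2j * np.pi * freq * t)   # complex demodulation
    coeffs = trials @ carrier                  # one complex value per trial
    phases = np.angle(coeffs)
    return np.abs(np.mean(np.exp(1j * phases)))

# Phase-locked 30 Hz activity (as after rhythmic TMS) vs random-phase trials.
fs, n_samples, n_trials = 500, 250, 40
t = np.arange(n_samples) / fs
rng = np.random.default_rng(1)
locked = np.array([np.sin(2 * np.pi * 30 * t)
                   + rng.normal(0, 1.0, n_samples)
                   for _ in range(n_trials)])
random_phase = np.array([np.sin(2 * np.pi * 30 * t + rng.uniform(0, 2 * np.pi))
                         + rng.normal(0, 1.0, n_samples)
                         for _ in range(n_trials)])

itc_locked = inter_trial_coherence(locked, fs, 30.0)
itc_random = inter_trial_coherence(random_phase, fs, 30.0)
```

In the study's logic, rhythmic TMS should push parietal contacts toward the `itc_locked` regime, while random TMS should leave phases closer to the `itc_random` regime.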


2018 ◽  
Vol 115 (41) ◽  
pp. 10499-10504 ◽  
Author(s):  
Yin Yan ◽  
Li Zhaoping ◽  
Wu Li

Early sensory cortex is better known for representing sensory inputs than for how its responses relate to behavior. Here we explore the behavioral correlates of neuronal responses in primary visual cortex (V1) in a task to detect a uniquely oriented bar—the orientation singleton—in a background of uniformly oriented bars. This singleton is salient or inconspicuous when the orientation contrast between the singleton and background bars is sufficiently large or small, respectively. Using implanted microelectrodes, we measured V1 activities while monkeys were trained to quickly saccade to the singleton. A neuron’s responses to the singleton within its receptive field had an early and a late component, both of which increased with the orientation contrast. The early component started from the outset of neuronal responses; it remained unchanged before and after training on the singleton detection. The late component started ∼40 ms after the early one; it emerged and evolved with practice on the detection task. Training increased the behavioral accuracy and speed of singleton detection and increased the amount of information in the late response component about a singleton’s presence or absence. Furthermore, for a given singleton, faster detection performance was associated with higher V1 responses; training increased this behavioral–neural correlate in the early V1 responses but decreased it in the late V1 responses. Therefore, V1’s early responses are directly linked with behavior and represent the bottom-up saliency signals. Learning strengthens this link, likely serving as the basis for making the detection task more reflexive and less top-down driven.


eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Hannes P Saal ◽  
Michael A Harvey ◽  
Sliman J Bensmaia

The sense of touch comprises multiple sensory channels, each of which conveys characteristic signals during interactions with objects. These neural signals must then be integrated in such a way that behaviorally relevant information about the objects is preserved. To understand the process of integration, we implement a simple computational model that describes how the responses of neurons in somatosensory cortex—recorded from awake, behaving monkeys—are shaped by the peripheral input, reconstructed using simulations of neuronal populations that reproduce natural spiking responses in the nerve with millisecond precision. First, we find that the strength of cortical responses is driven by one population of nerve fibers (rapidly adapting) whereas the timing of cortical responses is shaped by the other (Pacinian). Second, we show that input from these sensory channels is integrated in an optimal fashion that exploits the disparate response behaviors of different fiber types.


2021 ◽  
Author(s):  
Yaxin Liu ◽  
Stella F. Lourenco

Apparent motion is a robust perceptual phenomenon in which observers perceive a stimulus traversing the vacant visual space between two flashed stimuli. Although it is known that the “filling-in” of apparent motion favors the simplest and most economical path, the interpolative computations remain poorly understood. Here, we tested whether the perception of apparent motion is best characterized by Newtonian physics or kinematic geometry. Participants completed a target detection task while Pacman-shaped objects were presented in succession to create the perception of apparent motion. We found that target detection was impaired when apparent motion, as predicted by kinematic geometry, not Newtonian physics, obstructed the target’s location. Our findings shed light on the computations employed by the visual system, suggesting specifically that the “filling-in” perception of apparent motion may be dominated by kinematic geometry, not Newtonian physics.


Author(s):  
Md Abdullah Al Fahim ◽  
Mohammad Maifi Hasan Khan ◽  
Theodore Jensen ◽  
Yusuf Albayram ◽  
Emil Coman ◽  
...  
