Decoding motor imagery and action planning in the early visual cortex: Overlapping but distinct neural mechanisms

NeuroImage ◽  
2020 ◽  
Vol 218 ◽  
pp. 116981 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Jody C. Culham ◽  
Luigi Cattaneo ◽  
Luca Turella
Cortex ◽  
2014 ◽  
Vol 59 ◽  
pp. 1-11 ◽  
Author(s):  
Christianne Jacobs ◽  
Tom A. de Graaf ◽  
Alexander T. Sack

2019 ◽  
Vol 29 (11) ◽  
pp. 4662-4678 ◽  
Author(s):  
Jason P Gallivan ◽  
Craig S Chapman ◽  
Daniel J Gale ◽  
J Randall Flanagan ◽  
Jody C Culham

Abstract The primate visual system contains myriad feedback projections from higher- to lower-order cortical areas, an architecture that has been implicated in the top-down modulation of early visual areas during working memory and attention. Here we tested the hypothesis that these feedback projections also modulate early visual cortical activity during the planning of visually guided actions. We show, across three separate human functional magnetic resonance imaging (fMRI) studies involving object-directed movements, that information related to the motor effector to be used (i.e., limb, eye) and the action goal to be performed (i.e., grasp, reach) can be selectively decoded—prior to movement—from the retinotopic representation of the target object(s) in early visual cortex. We also find that, during the planning of sequential actions involving objects in two different spatial locations, motor-related information can be decoded from both locations in retinotopic cortex. Together, these findings indicate that movement planning selectively modulates early visual cortical activity patterns in an effector-specific, target-centric, and task-dependent manner. These findings offer a neural account of how motor-relevant target features are enhanced during action planning and suggest a possible role for early visual cortex in instituting a sensorimotor estimate of the visual consequences of movement.
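The decoding results described above come from multivoxel pattern classification of pre-movement fMRI activity. A minimal sketch of that kind of analysis, assuming hypothetical trial-wise voxel patterns and a linear classifier cross-validated across scanning runs (the data, ROI size, and classifier settings here are illustrative, not the authors' pipeline):

```python
# Sketch: decode the planned effector (eye vs limb) from simulated
# pre-movement voxel patterns in an early visual cortex ROI, using a
# linear SVM with leave-one-run-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 8, 10, 200
n_trials = n_runs * trials_per_run

# Hypothetical trial-wise activity estimates from the planning epoch:
# rows = trials, columns = voxels in the ROI.
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)   # 0 = eye movement, 1 = limb movement
X[y == 1, :20] += 0.5              # inject a weak class-specific signal
runs = np.repeat(np.arange(n_runs), trials_per_run)

# Cross-validate across runs so train and test trials never share a run.
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Leave-one-run-out grouping matters because trials within a run share scanner noise; pooling them across train and test folds would inflate accuracy.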


2018 ◽  
Author(s):  
Jena Velji-Ibrahim ◽  
J. Douglas Crawford ◽  
Luigi Cattaneo ◽  
Simona Monaco

Abstract The role of the early visual cortex (EVC) has been studied extensively in visual recognition, but far less is known about how action planning influences perceptual representations of objects. We used functional MRI and pattern classification methods to determine whether, during action planning, object features (orientation and location) could be decoded in an action-dependent way and, if so, whether this was due to functional connectivity between visual and higher-level cortical areas. Sixteen participants used their dominant right hand to perform movements (Align or Open Hand) towards one of two oriented objects that were simultaneously presented and placed on either side of a fixation cross. While both movements required aiming toward the target location, only Align movements required participants to precisely adjust hand orientation. We therefore hypothesized that if the representation of object features in the EVC is modulated by the upcoming action, the pre-movement activity pattern could be used to dissociate between object locations in both tasks, but between orientations in the Align task only. We found above-chance decoding accuracy between the two objects for both tasks in the calcarine sulcus, corresponding to the peripheral location of the objects in the visual cortex, suggesting a task-independent (i.e. location) modulation. In contrast, we found significant decoding accuracy between the two objects for Align but not Open Hand movements in the occipital pole, corresponding to central vision, and in dorsal stream areas, suggesting a task-dependent (i.e. orientation) modulation. Psychophysiological interaction analysis indicated stronger functional connectivity during the planning phase of Align than of Open Hand movements between the EVC and sensory-motor areas in the dorsal and ventral visual streams, as well as areas that lie at the interface between the two streams.
These results demonstrate that task-specific preparatory signals modulate activity not only in areas typically known to be involved in perception for action but also in the EVC. Further, our findings suggest that object features relevant for successful action performance are represented in the part of the visual cortex best suited to process visual features in great detail, such as the foveal cortex, even when the objects are viewed in the periphery.
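The psychophysiological interaction (PPI) analysis mentioned above models task-dependent coupling between a seed region and a target region. A minimal sketch of how the PPI regressor is constructed, using simulated timecourses rather than the authors' data or their exact model:

```python
# Sketch: a PPI interaction regressor is the element-wise product of a
# mean-centred seed timecourse (e.g. EVC) and a task regressor coding
# the two conditions (Align = +1, Open Hand = -1).
import numpy as np

rng = np.random.default_rng(1)
n_vols = 240                                # hypothetical number of fMRI volumes

seed = rng.standard_normal(n_vols)          # simulated EVC seed timecourse
task = np.where(rng.random(n_vols) < 0.5, 1.0, -1.0)  # condition regressor

ppi = (seed - seed.mean()) * task           # the interaction regressor

# Regress a target region's timecourse on [seed, task, ppi, intercept]:
# a reliable weight on the ppi column indicates that coupling with the
# seed differs between the two task conditions.
target = rng.standard_normal(n_vols)
X = np.column_stack([seed, task, ppi, np.ones(n_vols)])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print("PPI beta:", beta[2])
```

Including the seed and task main effects alongside the interaction term is what lets the PPI weight reflect a change in coupling rather than a shared main effect.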


2020 ◽  
Vol 31 (1) ◽  
pp. 138-146
Author(s):  
Dean Shmuel ◽  
Sebastian M Frank ◽  
Haggai Sharon ◽  
Yuka Sasaki ◽  
Takeo Watanabe ◽  
...  

Abstract Perception thresholds can improve through repeated practice with visual tasks. Can an already acquired and well-consolidated perceptual skill be noninvasively neuromodulated, revealing the neural mechanisms involved? Here, leveraging the susceptibility of reactivated memories, documented from synaptic to systems levels across learning and memory domains and animal models, we used noninvasive brain stimulation to neuromodulate well-consolidated, reactivated visual perceptual learning and reveal the underlying neural mechanisms. Subjects first encoded and consolidated the visual skill memory by performing daily practice sessions with the task. On a separate day, the consolidated visual memory was briefly reactivated, followed by low-frequency, inhibitory 1 Hz repetitive transcranial magnetic stimulation over early visual cortex, which was individually localized using functional magnetic resonance imaging. Poststimulation perceptual thresholds were measured in a final session. The results show modulation of perceptual thresholds following early visual cortex stimulation, relative to control stimulation. Consistently, resting-state functional connectivity between trained and untrained parts of early visual cortex prior to training predicted the magnitude of perceptual threshold modulation. Together, these results indicate that even previously consolidated human perceptual memories are susceptible to neuromodulation, involving early visual cortical processing. Moreover, the opportunity to noninvasively neuromodulate reactivated perceptual learning may have important clinical implications.
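The perceptual thresholds referred to above are commonly estimated with adaptive staircase procedures. A generic 2-down/1-up staircase sketch run against a simulated observer (the psychometric function, step size, and true threshold here are hypothetical, not taken from this study):

```python
# Sketch: a 2-down/1-up staircase lowers the stimulus level after two
# consecutive correct responses and raises it after each error; it
# converges near the ~70.7%-correct point of the psychometric function.
import math
import random

random.seed(2)

def p_correct(contrast, threshold=0.3, slope=10.0):
    """Simulated 2AFC psychometric function for a hypothetical observer."""
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (contrast - threshold)))

contrast, step = 0.8, 0.05
streak, reversals, last_direction = 0, [], None

for _ in range(200):
    correct = random.random() < p_correct(contrast)
    direction = None
    if correct:
        streak += 1
        if streak == 2:                 # two correct in a row: make it harder
            streak = 0
            direction = "down"
            contrast = max(0.0, contrast - step)
    else:
        streak = 0
        direction = "up"                # one error: make it easier
        contrast += step
    if direction and last_direction and direction != last_direction:
        reversals.append(contrast)      # record staircase reversal points
    if direction:
        last_direction = direction

# Average the final reversal points as the threshold estimate.
estimate = sum(reversals[-8:]) / len(reversals[-8:])
print(f"estimated threshold: {estimate:.2f}")
```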


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.


2012 ◽  
Vol 24 (2) ◽  
pp. 367-377 ◽  
Author(s):  
Vincent van de Ven ◽  
Bert Jans ◽  
Rainer Goebel ◽  
Peter De Weerd

Visual scene perception owes greatly to surface features such as color and brightness. Yet, early visual cortical areas predominantly encode surface boundaries rather than surface interiors. Whether human early visual cortex may nevertheless carry a small signal relevant for surface perception is a topic of debate. We induced brightness changes in a physically constant surface by temporally modulating the luminance of surrounding surfaces in seven human participants. We found that fMRI activity in the V2 representation of the constant surface was in antiphase to luminance changes of surrounding surfaces (i.e., activity was in-phase with perceived brightness changes). Moreover, the amplitude of the antiphase fMRI activity in V2 predicted the strength of illusory brightness perception. We interpret our findings as evidence for a surface-related signal in early visual cortex and discuss the neural mechanisms that may underlie that signal, as well as its possible interaction with properties of the fMRI signal.
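The antiphase relationship described above can be quantified by comparing the Fourier phases of the ROI timecourse and the surround-luminance modulation at the modulation frequency. A minimal sketch with simulated signals (sampling rate, cycle length, and noise level are assumed, not the study's parameters):

```python
# Sketch: estimate the phase difference between an ROI timecourse and a
# periodic luminance modulation at the modulation frequency; a value
# near ±pi indicates an antiphase response.
import numpy as np

t = np.arange(0, 120, 2.0)                  # assumed 2 s sampling, 120 s run
f_mod = 1.0 / 24.0                          # assumed 24 s luminance cycle
rng = np.random.default_rng(3)

luminance = np.sin(2 * np.pi * f_mod * t)             # surround modulation
roi = -luminance + 0.1 * rng.standard_normal(t.size)  # noisy antiphase ROI

k = int(round(f_mod * t.size * 2.0))        # FFT bin of f_mod (dt = 2 s)
phase_diff = np.angle(np.fft.rfft(roi)[k] / np.fft.rfft(luminance)[k])
print(f"phase difference at f_mod: {phase_diff:.2f} rad (antiphase = ±3.14)")
```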


2021 ◽  
Author(s):  
Bei Zhang ◽  
Ralph Weidner ◽  
Fredrik Allenmark ◽  
Sabine Bertleff ◽  
Gereon R. Fink ◽  
...  

Observers can learn the locations where salient distractors appear frequently and thereby reduce potential interference - an effect attributed to better suppression of distractors at frequent locations. But how distractor suppression is implemented in the visual cortex and frontoparietal attention networks remains unclear. We used fMRI and a regional distractor-location learning paradigm (Sauter et al. 2018, 2020) with two types of distractors, defined either in the same dimension as the target (orientation) or in a different dimension (colour), to investigate this issue. fMRI results showed that BOLD signals in early visual cortex were significantly reduced for distractors (as well as targets) occurring at frequent versus rare locations, mirroring the behavioural patterns. This reduction was more robust for same-dimension distractors. Crucially, behavioural interference correlated with distractor-evoked visual activity only for same- (but not different-) dimension distractors. Moreover, with different- (but not same-) dimension distractors, a colour-processing area within the fusiform gyrus was activated more strongly when a colour distractor was present versus absent, and when a distractor occurred at a rare versus a frequent location. These results support statistical learning of frequent distractor locations involving regional suppression in the early visual cortex, and they point to differential neural mechanisms of distractor handling for different- versus same-dimension distractors.

