Processing of Visual Signals for Direct Specification of Motor Targets and for Conceptual Representation of Action Targets in the Dorsal and Ventral Premotor Cortex

2009 · Vol 102 (6) · pp. 3280-3294
Author(s): Tomoko Yamagata, Yoshihisa Nakayama, Jun Tanji, Eiji Hoshi

Previous reports have indicated that the premotor cortex (PM) uses visual information either for direct guidance of limb movements or for indirect specification of action targets at a conceptual level. We explored how visual inputs signaling these two different categories of information are processed by PM neurons. Monkeys performed a delayed reaching task after receiving two different sets of visual instructions, one directly specifying the spatial location of a motor target (a direct spatial-target cue) and the other providing abstract information about the spatial location of a motor target by indicating whether to select the right or left target at a conceptual level (a symbolic action-selection cue). By comparing visual responses of PM neurons to the two sets of visual cues, we found that the conceptual action plan indicated by the symbolic action-selection cue was represented predominantly in dorsal PM (PMd) neurons with a longer latency (150 ms), whereas both PMd and ventral PM (PMv) neurons responded with a shorter latency (90 ms) when the motor target was directly specified by the direct spatial-target cue. We also found that excitatory, but not inhibitory, responses of PM neurons to the direct spatial-target cue showed a contralateral preference. In contrast, responses to the symbolic action-selection cue were either excitatory or inhibitory, with no laterality preference. Taken together, these results suggest that the PM comprises a pair of distinct circuits for visually guided motor behavior: one circuit, linked more strongly with the PMd, carries information for retrieving the action instruction associated with a symbolic cue, and the other, linked with both the PMd and PMv, carries information directly specifying the visuospatial position of a reach target.

2006 · Vol 95 (6) · pp. 3596-3616
Author(s): Eiji Hoshi, Jun Tanji

We examined neuronal activity in the dorsal and ventral premotor cortex (PMd and PMv, respectively) to explore the role of each motor area in processing visual signals for action planning. We recorded neuronal activity while monkeys performed a behavioral task in which two visual instruction cues were given successively with an intervening delay. One cue instructed the location of the target to be reached, and the other indicated which arm was to be used. We found that the properties of neuronal activity in the PMd and PMv differed in many respects. After the first cue was given, PMv responses mostly reflected the spatial position of the visual cue. In contrast, PMd responses also reflected what the visual cue instructed, such as which arm was to be used or which target was to be reached. After the second cue was given, PMv neurons initially responded to the cue's visuospatial features and later reflected what the two visual cues instructed, carrying progressively more information about the target location. In contrast, the majority of PMd neurons responded to the second cue with activity reflecting a combination of the information supplied by the first and second cues. Such activity, already reflecting a forthcoming action, appeared with short latencies (<400 ms) and persisted throughout the delay period. In addition, both the PMv and PMd showed bilateral representation of visuospatial information and of motor-target or effector information. These results further elucidate the functional specialization of the PMd and PMv during the processing of visual information for action planning.


2021
Author(s): Kelvin Vu-Cheung, Edward F Ester, Thomas C Sprague

Visual working memory (WM) enables the maintenance and manipulation of information that is no longer accessible in the visual world. Previous research has identified spatial WM representations in activation patterns in visual, parietal, and frontal cortex. In natural vision, the period between the encoding of information into WM and the time when it is used to guide behavior (the delay period) is rarely "empty," as it is in most of the laboratory experiments cited above. Under naturalistic conditions, eye movements, movement of the individual, and events in the environment produce visual signals that may overwrite or impair the fidelity of WM representations, especially in early sensory cortices. Here, we evaluated the extent to which a brief, irrelevant interrupting visual stimulus presented during a spatial WM delay period impaired behavioral performance and WM representation fidelity, assayed using an image reconstruction technique (inverted encoding model). On each trial, participants (both sexes) viewed two target dots and were immediately post-cued to remember the precise spatial position of one dot. On 50% of trials, a brief interrupter stimulus appeared. Although we observed strong transient univariate visual responses to this distracting stimulus, we saw no change in reconstructed neural WM representations under distraction, nor any change in behavioral performance on a continuous recall task. This suggests that spatial WM representations may be particularly robust to interference from incoming task-irrelevant visual information, perhaps related to their role in guiding movements.
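The image reconstruction technique named above (the inverted encoding model) can be sketched in a few lines. The Python snippet below is an illustrative toy example only: the channel count, cosine-power basis, and data sizes are assumptions for demonstration, not the authors' analysis pipeline. A linear model from spatial channels to voxels is estimated on training trials and then inverted to recover channel response profiles, and hence a remembered position, from held-out activity patterns.

```python
import numpy as np

# Toy inverted encoding model (IEM) sketch for spatial working memory.
# Channel count, tuning shape, and data sizes are illustrative assumptions.

n_channels, n_voxels, n_trials = 8, 50, 200
centers = np.linspace(0, 360, n_channels, endpoint=False)
rng = np.random.default_rng(0)

def channel_responses(angles_deg):
    """Idealized spatial channels: half-rectified cosine raised to a power."""
    d = np.deg2rad(angles_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(d / 2.0), 0.0) ** 7        # trials x channels

# Simulated training data: voxel patterns generated from channel responses.
train_angles = rng.uniform(0, 360, n_trials)
C_train = channel_responses(train_angles)
W_true = rng.normal(size=(n_channels, n_voxels))         # channel -> voxel weights
B_train = C_train @ W_true + rng.normal(0, 0.5, (n_trials, n_voxels))

# Step 1: estimate the encoding weights by least squares (B = C @ W).
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the model on held-out trials to reconstruct channel profiles.
test_angles = rng.uniform(0, 360, 20)
B_test = channel_responses(test_angles) @ W_true + rng.normal(0, 0.5, (20, n_voxels))
C_hat = B_test @ np.linalg.pinv(W_hat)

# The peak of each reconstructed profile estimates the remembered position.
decoded = centers[np.argmax(C_hat, axis=1)]
err = (decoded - test_angles + 180) % 360 - 180           # circular error (deg)
print(np.round(err))  # errors bounded mainly by the coarse 45-degree channel spacing
```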


Perception · 1998 · Vol 27 (1) · pp. 69-86
Author(s): Michel-Ange Amorim, Jack M Loomis, Sergio S Fukusima

An unfamiliar configuration lying in depth and viewed from a distance is typically seen as foreshortened. The hypothesis motivating this research was that a change in an observer's viewpoint, even when the configuration is no longer visible, induces an imaginal updating of the internal representation and thus reduces the degree of foreshortening. In experiment 1, observers attempted to reproduce, in darkness and following ‘viewpoint change’ instructions, configurations defined by three small glowing balls on a table 2 m away. In one condition, observers reproduced the continuously visible configuration using three other glowing balls on a nearer table while imagining standing at the distant table. In the other condition, observers viewed the configuration, it was then removed, and they walked in darkness to the far table and reproduced the configuration there. Even though the observers received no additional information about the stimulus configuration while walking to the table, they were more accurate (showed less foreshortening) than in the other condition. In experiment 2, observers reproduced distant configurations on a nearer table more accurately when doing so from memory than when doing so while viewing the distant stimulus configuration. In experiment 3, observers performed both the real and the imagined perspective change after memorizing the remote configuration. The results of the three experiments indicate that the continued visual presence of the target configuration impedes imagined perspective-change performance and that an actual change in viewpoint does not increase reproduction accuracy substantially over that obtained with an imagined change in viewpoint.


2015 · Vol 27 (7) · pp. 1344-1359
Author(s): Sara Jahfari, Lourens Waldorp, K. Richard Ridderinkhof, H. Steven Scholte

Action selection often requires the transformation of visual information into motor plans. Preventing premature responses may entail the suppression of visual input and/or of prepared muscle activity. This study examined how the quality of visual information affects frontobasal ganglia (BG) routes associated with response selection and inhibition. Human fMRI data were collected during a stop task with visually degraded or intact face stimuli. During go trials, degraded spatial frequency information reduced the speed of information accumulation and response cautiousness. Effective connectivity analysis of the fMRI data showed action selection to emerge through the classic direct and indirect BG pathways, with inputs deriving from both prefrontal and visual regions. When stimuli were degraded, visual and prefrontal regions processing the stimulus information increased connectivity strengths toward the BG, whereas regions evaluating visual scene content or response strategies reduced connectivity toward the BG. Response inhibition during stop trials recruited the indirect and hyperdirect BG pathways, with input from visual and prefrontal regions. Importantly, when stimuli were nondegraded and processed quickly, the optimal stop model contained additional connections from prefrontal to visual cortex. An individual differences analysis revealed that stronger prefrontal-to-visual connectivity covaried with faster inhibition times. Therefore, prefrontal-to-visual cortex connections appear to suppress the fast flow of visual input for the go task, such that the inhibition process can finish before the selection process. These results indicate that response selection and inhibition within the BG emerge through the interplay of top-down adjustments from prefrontal cortex and bottom-up input from sensory cortex.
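The phrase "speed of information accumulation and response cautiousness" is the language of sequential-sampling models of choice reaction time, in which a drift rate and a decision boundary determine speed and accuracy. The sketch below is only a hypothetical illustration of that idea; the simple two-bound diffusion and the parameter values are assumptions and do not reproduce the authors' model fits.

```python
import numpy as np

# Toy accumulation-to-bound (diffusion) simulation illustrating how a lower
# drift rate (degraded stimuli) or a lower bound (reduced response caution)
# changes RT and accuracy. Parameter values are arbitrary illustrations,
# not fitted values from the study.

rng = np.random.default_rng(1)

def simulate(drift, bound, n_trials=2000, dt=0.002, noise=1.0):
    """Return (mean RT in s, proportion correct) for a symmetric two-bound walk."""
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)            # upper bound = correct choice
    return float(np.mean(rts)), float(np.mean(correct))

print("intact stimuli   ", simulate(drift=2.0, bound=1.0))   # fast, accurate
print("degraded stimuli ", simulate(drift=1.0, bound=1.0))   # slower accumulation
print("lower caution    ", simulate(drift=1.0, bound=0.6))   # faster but more errors
```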


1984 · Vol 59 (1) · pp. 227-232
Author(s): Luciano Mecacci, Dario Salmaso

Visual evoked potentials were recorded from 6 adult male subjects in response to single vowels and consonants presented in printed and script forms. Analysis showed that vowels in printed form evoked responses with shorter latency (component P1, at about 133 msec.) and larger amplitude (component P1-N1) than the other letter-typeface combinations. No hemispheric asymmetries were found. The results partially agree with behavioral data on the visual information-processing of letters.


2006 · Vol 95 (2) · pp. 922-931
Author(s): David E. Vaillancourt, Mary A. Mayka, Daniel M. Corcos

The cerebellum, parietal cortex, and premotor cortex are integral to visuomotor processing, but the parameters of visual information that modulate their roles in visuomotor control are less clear. From motor psychophysics, the relation between the frequency of visual feedback and force variability has been identified as nonlinear. We therefore hypothesized that visual feedback frequency would differentially modulate the neural activation related to visuomotor processing in the cerebellum, parietal cortex, and premotor cortex. We used functional magnetic resonance imaging at 3 Tesla to examine visually guided grip force control under frequent and infrequent visual feedback conditions. Control conditions with intermittent visual feedback alone and a control force condition without visual feedback were also examined. As expected, force variability was reduced in the frequent compared with the infrequent condition. Three novel findings were identified. First, infrequent (0.4 Hz) visual feedback did not result in visuomotor activation in the lateral cerebellum (lobule VI/Crus I), whereas frequent (25 Hz) intermittent visual feedback did. This contrasts with the anterior intermediate cerebellum (lobule V/VI), which was consistently active across all force conditions compared with rest. Second, confirming previous observations, the parietal and premotor cortices were active during grip force production with frequent visual feedback; the novel finding was that they were also active during grip force production with infrequent visual feedback. Third, the right inferior parietal lobule, dorsal premotor cortex, and ventral premotor cortex showed greater activation in the frequent than in the infrequent grip force condition. These findings demonstrate that increasing the frequency of visual feedback reduces motor error and differentially modulates the neural activation related to visuomotor processing in the cerebellum, parietal cortex, and premotor cortex.


Text Matters · 2016 · pp. 15-34
Author(s): Agnieszka Łowczanin

This paper reads The Monk by M. G. Lewis in the context of the literary and visual responses to the French Revolution, suggesting that its digestion of the horrors across the Channel is exhibited especially in its depictions of women. Lewis plays with public and domestic representations of femininity, steeped in social expectation and a rich cultural and religious imaginary. The novel’s ambivalence in the representation of femininity draws on the one hand on Catholic symbolism, especially its depictions of the Madonna and the virgin saints, and on the other, on the way the revolutionaries used the body of the queen, Marie Antoinette, to portray the corruption of the royal family. The Monk fictionalizes the ways in which the female body was exposed, both by the Church and by the Revolution, and appropriated to become a highly politicized entity, a tool in ideological argumentation.


2011 · Vol 106 (4) · pp. 1862-1874
Author(s): Jan Churan, Daniel Guitton, Christopher C. Pack

Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or on internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or on visual signals to different degrees in different contexts. Here, we studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey under several different visual stimulus conditions. We found that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually reduced or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available.


2019
Author(s): Evan Cesanek, Fulvio Domini

To perform accurate movements, the sensorimotor system must maintain a delicate calibration of the mapping between visual inputs and motor outputs. Previous work has focused on the mapping between visual inputs and individual locations in egocentric space, but little attention has been paid to the mappings that support interactions with 3D objects. In this study, we investigated sensorimotor adaptation of grasping movements targeting the depth dimension of 3D paraboloid objects. Object depth was specified by separately manipulating binocular disparity (stereo) and texture gradients. At the end of each movement, the fingers closed down on a physical object consistent with one of the two cues, depending on the condition (haptic-for-texture or haptic-for-stereo). Unlike traditional adaptation paradigms, where the relevant spatial properties are determined by a single dimension of visual information, this method enabled us to investigate whether adaptation processes can selectively adjust the influence of different sources of visual information depending on their relationship to physical depth. In two experiments, we found short-term changes in grasp performance consistent with a process of cue-selective adaptation: the slope of the grip aperture with respect to the reliable cue (correlated with physical reality) increased, whereas the slope with respect to the unreliable cue (uncorrelated with physical reality) decreased. In contrast, slope changes did not occur during exposure to a set of stimuli in which both cues remained correlated with physical reality but one was rendered with a constant bias of 10 mm; the grip aperture simply became uniformly larger or smaller, as in standard adaptation paradigms. Overall, these experiments support a model of cue-selective adaptation driven by correlations between error signals and input values (i.e., supervised learning), rather than by mismatches between haptic and visual signals.
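The closing claim, that adaptation is driven by correlations between error signals and input values (supervised learning), can be illustrated with a minimal delta-rule sketch. Everything below is a hypothetical toy model: the linear grip-aperture rule, cue ranges, learning rate, and the haptic-for-stereo setup are assumptions for illustration, not the authors' fitted model. A cue that is uncorrelated with felt depth sees its weight (slope) shrink, while the correlated cue's weight grows.

```python
import numpy as np

# Hypothetical delta-rule sketch of cue-selective adaptation of grip aperture.
# The linear aperture model, cue ranges, learning rate, and haptic-for-stereo
# setup are illustrative assumptions, not the study's fitted model.

rng = np.random.default_rng(2)
safety_margin = 20.0                 # mm added to object depth for a safe grasp
w = np.array([0.5, 0.5])             # initial slopes for [stereo, texture] cues
b = safety_margin                    # intercept
lr = 0.0002                          # learning rate

for trial in range(500):
    physical_depth = rng.uniform(20, 60)     # depth felt at contact (mm)
    stereo = physical_depth                  # reliable cue: matches physical depth
    texture = rng.uniform(20, 60)            # unreliable cue: uncorrelated with depth
    cues = np.array([stereo, texture])

    predicted = b + w @ cues                 # planned grip aperture
    target = physical_depth + safety_margin  # aperture that actually fits the object
    error = target - predicted               # haptic prediction error

    w += lr * error * cues                   # delta rule: error times input value
    b += lr * error

print("final slopes [stereo, texture]:", np.round(w, 2))
# Expected pattern: the stereo slope grows toward ~1 and the texture slope
# shrinks toward ~0, mirroring the reported slope increase for the reliable
# cue and decrease for the unreliable cue.
```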

