The Intraparietal Cortex: Subregions Involved in Fixation, Saccades, and in the Visual and Somatosensory Guidance of Reaching

2001 ◽  
Vol 21 (6) ◽  
pp. 671-682 ◽  
Author(s):  
Georgia G. Gregoriou ◽  
Helen E. Savaki

The functional activity of the intraparietal cortex was mapped with the [14C]deoxyglucose method in monkeys performing fixation of a central visual target, saccades to visual targets, reaching in the light during fixation of a central visual target, and acoustically triggered reaching in the dark while the eyes maintained a straight-ahead direction. Different subregions of intraparietal cortical area 7 were activated by fixation, saccades to visual targets, and acoustically triggered reaching in the dark. Subregions in the ventral part of the intraparietal cortex (around the fundus of the intraparietal sulcus) were activated only during reaching in the light, when visual information was available to guide the moving forelimb. In contrast, subregions in the dorsal part of intraparietal cortical area 5 were activated during reaching in both the light and the dark, conditions in which somatosensory information was the only modality available in common. Thus, visual guidance of reaching is associated with the ventral intraparietal cortex, whereas somatosensory guidance, based on proprioceptive information about the current forelimb position, is associated with dorsal intraparietal area 5.

2007 ◽  
Vol 97 (2) ◽  
pp. 1068-1077 ◽  
Author(s):  
Nikolaos Smyrnis ◽  
Asimakis Mantas ◽  
Ioannis Evdokimidis

In previous studies we observed a pattern of systematic directional errors when humans pointed to memorized visual target locations in two-dimensional (2-D) space. This directional error was also observed in the initial direction of slow movements toward visual targets and of movements to kinesthetically defined targets in 2-D space. In this study we used a perceptual experiment in which subjects decided whether an arrow pointed in the direction of a visual target in 2-D space, and we observed a systematic distortion in direction discrimination known as the “oblique effect”: discrimination was better for cardinal directions than for oblique ones. We then used an equivalent measure of direction discrimination in a task where subjects pointed to memorized visual target locations and showed the presence of a motor oblique effect. Finally, we modeled the oblique effect in both the perceptual and the motor task using a quadratic function. The model successfully predicted the observed differences in direction discrimination in both tasks; furthermore, the model parameter related to the shape of the function did not differ between the motor and the perceptual tasks. We conclude that a similarly distorted representation of target direction underlies both memorized pointing movements and perceptual direction discrimination.
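The abstract's quadratic model can be sketched as follows. The exact functional form and parameter values used by the authors are not given here; this is a hypothetical illustration in which the discrimination threshold grows quadratically with a direction's angular distance from the nearest cardinal axis, so cardinal directions are discriminated best and obliques worst:

```python
def discrimination_threshold(direction_deg, base=1.0, k=0.002):
    """Hypothetical quadratic model of the oblique effect.

    The threshold rises with the squared angular distance from the
    nearest cardinal axis (0, 90, 180, 270 degrees). `base` and `k`
    are assumed parameters, not values from the study.
    """
    dist = min(direction_deg % 90, 90 - direction_deg % 90)
    return base + k * dist ** 2

print(discrimination_threshold(0))   # cardinal direction: 1.0 (best)
print(discrimination_threshold(45))  # oblique direction: 5.05 (worst)
```

Under this form, fitting `base` and `k` separately to perceptual and motor data and comparing the shape parameter `k` mirrors the comparison described in the abstract.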


2021 ◽  
Vol 33 (3) ◽  
pp. 599-603
Author(s):  
Koji Okuda ◽  
Youjirou Ohbatake ◽  
Daisuke Kondo

A major challenge in remote control is the reduction in work efficiency compared with on-board operation. One cause is the lack of information available to the operator, such as perspective, realistic sensation, vibration, and sound; in particular, vestibular/somatosensory information regarding rotation is missing. To clarify how the presence or absence of vestibular/somatosensory rotation input affects the worker's operation, we conducted a basic laboratory experiment on a horizontal turning operation. The experimental results indicate that an appropriate response to rotation can be made with visual information alone; however, the reaction is delayed compared with the case in which vestibular/somatosensory rotation input is provided.


2018 ◽  
Vol 119 (5) ◽  
pp. 1981-1992 ◽  
Author(s):  
Laura Mikula ◽  
Valérie Gaveau ◽  
Laure Pisella ◽  
Aarlenne Z. Khan ◽  
Gunnar Blohm

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights and not on effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis was true. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with high interindividual range but independent of each hand’s specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
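The reliability-based weighting scheme that the study tests can be sketched numerically. This is a generic illustration of Bayes-optimal cue combination (each cue weighted by its inverse variance), not the authors' analysis code; the position and variance values are hypothetical:

```python
def integrate(x_vis, var_vis, x_prop, var_prop):
    """Fuse visual and proprioceptive estimates of hand position.

    Each cue is weighted by its inverse variance (its reliability),
    so the more reliable cue dominates the combined estimate.
    Returns the fused estimate and the weight given to vision.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    estimate = w_vis * x_vis + (1.0 - w_vis) * x_prop
    return estimate, w_vis

# Hypothetical numbers: vision (variance 1.0) is more reliable than
# proprioception (variance 4.0), so vision receives weight 0.8.
est, w = integrate(x_vis=10.0, var_vis=1.0, x_prop=15.0, var_prop=4.0)
print(w)    # 0.8
print(est)  # 11.0
```

Under the reliability hypothesis, `w_vis` should differ between hands whenever their proprioceptive variances differ; the study's finding of consistent weights across hands instead favors a learned, modality-specific weight.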


This chapter presents and discusses the results of our analysis. Regarding the findings of our first research study, the questionnaire on IR, we organize the discussion of results into several sections, namely: research area (1) the interaction between financial and non-financial information, and IR versus other reports; research area (2) the capitals and the value creation process; research area (3) defining integrated reporting; research area (4) IR costs and benefits; research area (5) determinants of integrated reporting; research area (6) recommendations concerning the IIRC framework; research area (7) the industry; research area (8) characteristics of IR information; research area (9) voluntary versus mandatory IR and assurance. The second part of our research presents the results of the SPSS analysis, in which we interpret the data according to its economic and business significance.


Author(s):  
Rolf Ulrich ◽  
Laura Prislan ◽  
Jeff Miller

Abstract The Eriksen flanker task is a classic conflict paradigm for studying the influence of task-irrelevant information on the processing of task-relevant information. In this task, participants respond to a visual target item (e.g., a letter) that is flanked by task-irrelevant items (e.g., other letters). Responses are typically faster and more accurate when the task-irrelevant information is response-congruent with the visual target than when it is incongruent. Several researchers have attributed the origin of this flanker effect to poor selective filtering at a perceptual level (e.g., spotlight models), which subsequently produces response competition at post-perceptual stages. The present study examined whether a flanker-like effect could also be established in a bimodal analog of the flanker task with auditory irrelevant letters and visual target letters, which must be processed along different processing routes. The results of two experiments revealed that a flanker-like effect is also present with bimodal stimuli. In contrast to the unimodal flanker task, however, the effect emerged only when flankers and targets shared the same letter name, not when they were different letters mapped onto the same response. We conclude that auditory flankers can influence the time needed to recognize visual targets but do not directly activate their associated responses.


2001 ◽  
Vol 86 (2) ◽  
pp. 676-691 ◽  
Author(s):  
Jay A. Edelman ◽  
Michael E. Goldberg

Neurons in the intermediate layers of the superior colliculus respond to visual targets and/or discharge immediately before and during saccades. These visual and motor responses have generally been considered independent, with the visual response dependent on the nature of the stimulus, and the saccade-related activity related to the attributes of the saccade, but not to how the saccade was elicited. In these experiments we asked whether saccade-related discharge in the superior colliculus depended on whether the saccade was directed to a visual target. We recorded extracellular activity of neurons in the intermediate layers of the superior colliculus of three rhesus monkeys during saccades in tasks in which we varied the presence or absence of a visual target and the temporal delays between the appearance and disappearance of a target and saccade initiation. Across our sample of neurons (n = 64), discharge was highest when a saccade was made to a still-present visual target, regardless of whether the target had recently appeared or had been present for several hundred milliseconds. Discharge was intermediate when the target had recently disappeared and lowest when the target had never appeared during that trial. These results are consistent with the hypothesis that saccade-related discharge decreases as the time between target disappearance and saccade initiation increases. Saccade velocity was also higher for saccades to visual targets and correlated on a trial-by-trial basis with perisaccadic discharge for many neurons. However, discharge of many neurons was dependent on task but independent of saccade velocity, and across our sample of neurons, saccade velocity was higher for saccades made immediately after target appearance than would be predicted by discharge level. A tighter relationship was found between saccade precision and perisaccadic discharge. These findings suggest that just as the purpose of the saccadic system in primates is to drive the fovea to a visual target, presaccadic motor activity in the superior colliculus is most intense when such a target is actually present. This enhanced activity may itself contribute to the enhanced performance of the saccade system when the saccade is made to a real visual target.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 127-127
Author(s):  
M Desmurget ◽  
Y Rossetti ◽  
C Prablanc

The question of whether movement accuracy is better in the full open-loop condition (FOL, hand never visible) than in the static closed-loop condition (SCL, hand visible only prior to movement onset) remains widely debated. To investigate this controversial question, we studied conditions in which the visual information available to the subject prior to movement onset was strictly controlled. The results of our investigation showed that the accuracy improvement observed when human subjects were allowed to see their hand in the peripheral visual field prior to movement: (1) concerned only the variable errors; (2) did not depend on simultaneous vision of the hand and target (hand and target viewed simultaneously vs sequentially); (3) remained significant when pointing to proprioceptive targets; and (4) was not suppressed when the visual information was temporally (visual presentation for less than 300 ms) or spatially (vision of only the index fingertip) restricted. In addition, dissociating vision and proprioception with wedge prisms showed that a weighted hand position was used to program the hand trajectory. Considered together, these results suggest that: (i) knowledge of the initial upper-limb configuration or position is necessary to accurately plan goal-directed movements; (ii) static proprioceptive receptors are partially ineffective in providing an accurate estimate of the limb posture and/or hand location relative to the body; and (iii) visual and proprioceptive information is not used in an exclusive way, but is combined to furnish an accurate representation of the state of the effector prior to movement.


Science ◽  
2019 ◽  
Vol 363 (6422) ◽  
pp. 64-69 ◽  
Author(s):  
Riccardo Beltramo ◽  
Massimo Scanziani

Visual responses in the cerebral cortex are believed to rely on the geniculate input to the primary visual cortex (V1). Indeed, V1 lesions substantially reduce visual responses throughout the cortex. Visual information enters the cortex also through the superior colliculus (SC), but the function of this input on visual responses in the cortex is less clear. SC lesions affect cortical visual responses less than V1 lesions, and no visual cortical area appears to entirely rely on SC inputs. We show that visual responses in a mouse lateral visual cortical area called the postrhinal cortex are independent of V1 and are abolished upon silencing of the SC. This area outperforms V1 in discriminating moving objects. We thus identify a collicular primary visual cortex that is independent of the geniculo-cortical pathway and is capable of motion discrimination.

