Efference Copy Provides the Eye Position Information Required for Visually Guided Reaching

1998 ◽  
Vol 80 (3) ◽  
pp. 1605-1608 ◽  
Author(s):  
Richard F. Lewis ◽  
Bertrand M. Gaymard ◽  
Rafael J. Tamargo

Lewis, Richard F., Bertrand M. Gaymard, and Rafael J. Tamargo. Efference copy provides the eye position information required for visually guided reaching. J. Neurophysiol. 80: 1605–1608, 1998. The contribution of extraocular muscle (EOM) proprioception to the eye position signal used to transform retinotopic visual information to a craniotopic reference frame remains uncertain. In this study we examined the effects of unilateral and bilateral proprioceptive deafferentation of the EOMs on the accuracy of reaching movements directed to visual targets. No significant changes occurred in the mean accuracy (constant error) or variance (variable error) of pointing after unilateral or bilateral deafferentation. We concluded that in normal animals efference copy provides sufficient information about orbital eye position to code space in craniotopic coordinates.
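For concreteness, here is a minimal Python sketch of how constant and variable pointing errors are conventionally computed from reach endpoints. This illustrates the standard definitions only; it is not the authors' analysis code, and the function name and data are hypothetical.

```python
import numpy as np

def pointing_errors(endpoints, target):
    """endpoints: (n, 2) reach landing points; target: (2,) target location."""
    endpoints = np.asarray(endpoints, float)
    constant_error = endpoints.mean(axis=0) - np.asarray(target, float)  # bias
    variable_error = endpoints.std(axis=0, ddof=1)   # per-axis dispersion
    return constant_error, variable_error

# Example: five simulated reaches (deg) to a target at (10, 0).
ce, ve = pointing_errors([[10.2, 0.1], [9.8, -0.2], [10.1, 0.3],
                          [9.9, 0.0], [10.0, -0.1]], [10.0, 0.0])
```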

2019 ◽  
Vol 122 (5) ◽  
pp. 1909-1917
Author(s):  
Svenja Gremmler ◽  
Markus Lappe

We investigated whether the proprioceptive eye position signal available after a saccadic eye movement is used to estimate the accuracy of the movement. If so, saccadic adaptation, the mechanism that maintains saccade accuracy, could use this signal in much the same way as it uses postsaccadic visual feedback. To manipulate the availability of the proprioceptive eye position signal, we exploited the finding that proprioceptive eye position information builds up gradually after a saccade, over a time interval comparable to typical saccade latencies. We limited the time gaze remained at the saccade landing point by asking participants to make fast return saccades to the fixation point, preempting the use of proprioceptive eye position signals. In five experimental conditions we measured the influence of visual and proprioceptive feedback, together and separately, on the development of adaptation. When visual feedback was unavailable after the saccade, adaptation of the previously shortened saccades was significantly weaker if the use of proprioceptive eye position information was also impaired by fast return saccades. We conclude that adaptation can be driven by proprioceptive eye position feedback. NEW & NOTEWORTHY We show that proprioceptive eye position information is used after a saccade to estimate motor error and adapt saccade control. Previous studies of saccadic adaptation focused on visual feedback about saccade accuracy. A multimodal error signal combining visual and proprioceptive information is likely more robust. Moreover, combining proprioceptive and visual measures of saccade performance can help keep vision, proprioception, and motor control in alignment and produce a coherent representation of space.
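As an illustration of the kind of error-driven adaptation described here, below is a minimal Python sketch of a standard delta-rule gain-update model with an optionally multimodal (visual plus proprioceptive) error signal. The learning rate, weights, and function names are illustrative assumptions, not parameters estimated in this study.

```python
import numpy as np

def adapt_gain(gain, target_ecc, visual_err=None, proprio_err=None,
               w_visual=0.7, w_proprio=0.3, rate=0.1):
    """One trial of delta-rule gain adaptation from post-saccadic error (deg)."""
    errors, weights = [], []
    if visual_err is not None:
        errors.append(visual_err); weights.append(w_visual)
    if proprio_err is not None:
        errors.append(proprio_err); weights.append(w_proprio)
    if not errors:
        return gain                                  # no feedback, no adaptation
    combined = np.average(errors, weights=weights)   # multimodal error signal
    return gain + rate * combined / target_ecc       # amplitude-normalized update

# Simulate gain-decrease adaptation: the target steps back 2 deg during the
# saccade, so the eye repeatedly lands beyond the (shifted) target.
gain, target = 1.0, 10.0
for _ in range(100):
    landing = gain * target
    visual_err = (target - 2.0) - landing            # post-saccadic visual error
    gain = adapt_gain(gain, target, visual_err=visual_err)
# gain converges toward 0.8, i.e., saccades shortened by ~2 deg
```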


2010 ◽  
Vol 104 (6) ◽  
pp. 3494-3509 ◽  
Author(s):  
Barbara Heider ◽  
Anushree Karnik ◽  
Nirmala Ramalingam ◽  
Ralph M. Siegel

Visually guided hand movements in primates require an interconnected network of cortical areas. Single-unit firing rates were recorded from area 7a and dorsal prelunate (DP) neurons of macaque posterior parietal cortex (PPC) during reaching movements to targets at variable locations and under different eye position conditions. In the eye-position-varied task, the reach target was always foveated; thus eye position varied with reach target location. In the retinal-varied task, the monkey reached to targets at variable retinotopic locations while eye position was kept constant at the center. Spatial tuning was examined with respect to temporal (task epoch) and contextual (task condition) aspects, and response fields were compared. The analysis revealed distinct tuning types. The majority of neurons changed their gain field tuning and retinotopic tuning between different phases of the task. Between the onset of visual stimulation and the preparatory phase (before the go signal), about one-half of the neurons altered their firing rates significantly. Spatial response fields during the preparation and initiation epochs were strongly influenced by the task condition (eye position varied vs. retinal varied), supporting a strong role of eye position during visually guided reaching. DP neurons, classically considered visual, showed reach-related modulation similar to that of 7a neurons. This study shows that both area 7a and DP are modulated during reaching behavior in primates. The varied tuning types in both areas suggest distinct populations recruiting different circuits during visually guided reaching.
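A minimal sketch of the classic gain-field idea referenced here: a retinotopic Gaussian response field multiplicatively scaled by a planar function of eye position. This is the textbook model, not the authors' fitted model, and all parameter values are illustrative.

```python
import numpy as np

def gain_field_rate(retinal_pos, eye_pos, pref=(5.0, 0.0), sigma=8.0,
                    gain_slope=(0.02, 0.01), baseline=1.0):
    """Gaussian retinotopic tuning multiplied by a planar eye position gain."""
    rx, ry = retinal_pos
    px, py = pref
    retinal = np.exp(-((rx - px) ** 2 + (ry - py) ** 2) / (2 * sigma ** 2))
    gain = baseline + gain_slope[0] * eye_pos[0] + gain_slope[1] * eye_pos[1]
    return retinal * max(gain, 0.0)   # firing rates cannot be negative

# Same retinal stimulus, two eye positions: the response differs, which is
# what allows downstream areas to recover craniotopic target location.
r_left = gain_field_rate((5.0, 0.0), eye_pos=(-10.0, 0.0))
r_right = gain_field_rate((5.0, 0.0), eye_pos=(10.0, 0.0))
```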


1999 ◽  
Vol 81 (3) ◽  
pp. 1355-1364 ◽  
Author(s):  
Robert J. van Beers ◽  
Anne C. Sittig ◽  
Jan J. Denier van der Gon

Integration of proprioceptive and visual position-information: an experimentally supported model. To localize one’s hand, i.e., to find out its position with respect to the body, humans may use proprioceptive information or visual information or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in what position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information and compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each of these series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical. In the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual information and the proprioceptive information are weighted with direction-dependent weights, the weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model also can explain the unexpectedly small sizes of the variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support this model. The results imply that the CNS has knowledge about the direction-dependent precision of the proprioceptive and visual information.
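The proposed integration rule is, in essence, precision-weighted (maximum-likelihood) cue combination with direction-dependent variances. A minimal Python sketch under that reading, with illustrative numbers rather than the authors' fitted parameters:

```python
import numpy as np

def integrate(x_vision, x_proprio, var_vision, var_proprio):
    """Each argument is a length-2 array of (azimuth, depth) components."""
    # Inverse-variance weighting, applied per direction.
    w_v = (1.0 / var_vision) / (1.0 / var_vision + 1.0 / var_proprio)
    return w_v * x_vision + (1.0 - w_v) * x_proprio

# Vision is precise in azimuth but poor in depth; proprioception the reverse.
x_hat = integrate(np.array([10.0, 30.0]), np.array([12.0, 28.0]),
                  var_vision=np.array([0.5, 4.0]),
                  var_proprio=np.array([4.0, 0.5]))
# The integrated estimate leans on vision for azimuth and on proprioception
# for depth, so in 2D it can lie off the straight line between the two
# unimodal means, as observed in the study.
```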


1982 ◽  
Vol 55 (3) ◽  
pp. 1003-1016 ◽  
Author(s):  
B. L. Day ◽  
C. D. Marsden

The principal question asked is whether, in a visually guided motor task, a subject tracking a known target employs a different strategy of movement from that used when tracking an unknown target. Twenty-two subjects performed a series of 150 visual tracking tasks, each 5 s long. The target-movement patterns used for the first 50 trials were all different, but for the remaining 100 trials they were identical. Subjects, however, were not informed of the repetition until the final 50 trials. When the task was made repetitive, even though the subjects were unaware of the repetition, learning occurred, as evidenced by a progressive reduction in tracking error, although tracking lag remained above the mean reaction time. Once subjects were aware of the repetition, tracking lags often reached zero or even negative values and tracking error dropped further. It is argued that the former learning is confined to subconscious improvement in the intermittent response to visual inspection of tracking error, whereas the latter is achieved by adopting a truly predictive mode of tracking. Further experiments were devised to evaluate the role of visual information in movement control when using the predictive strategy. The main finding was that even when moving predictively, subjects used visual information to regulate motor output, largely to modify the timing of the predictive response so that it synchronized with the stimulus.
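A hypothetical analysis sketch of the two quantities the study tracks, tracking error and tracking lag, with lag estimated from the peak of the target-response cross-correlation. This is illustrative code, not the authors' method.

```python
import numpy as np

def tracking_metrics(target, response, dt):
    """target, response: 1-D position traces sampled every dt seconds."""
    tgt = np.asarray(target, float)
    resp = np.asarray(response, float)
    rms_error = np.sqrt(np.mean((tgt - resp) ** 2))
    # Peak of the cross-correlation of the de-meaned traces gives the lag.
    xcorr = np.correlate(resp - resp.mean(), tgt - tgt.mean(), mode="full")
    lag_samples = np.argmax(xcorr) - (len(tgt) - 1)
    lag_ms = lag_samples * dt * 1000.0   # positive: response trails the target
    return lag_ms, rms_error

# Example: a response that trails a 0.5 Hz target by 200 ms.
dt = 0.01
time = np.arange(0, 5, dt)
target = np.sin(2 * np.pi * 0.5 * time)
response = np.sin(2 * np.pi * 0.5 * (time - 0.2))
lag_ms, rms = tracking_metrics(target, response, dt)   # lag_ms ~= 200
```

A zero or negative lag under this measure corresponds to the truly predictive mode of tracking described above.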


2019 ◽  
Author(s):  
Wee Kiat Lau

Fixation position changes slightly after each blink (Lau & Maus, 2019). We investigated whether these changes affect subsequent saccades, testing whether the oculomotor system uses an internal representation of eye position to plan a saccade. Naïve participants (N = 12) made 10° visually guided (VG) and memory-guided (MG) saccades to a dot target presented to the left or right of fixation. Participants blinked once (blink) or remained fixated (no-blink) before an auditory cue instructed them to saccade to the target. We hypothesized that if participants had access to an eye position signal at the onset of their saccade, blink-induced position shifts should be corrected for. The alternative hypothesis was that, without such an internal eye position signal, blink-induced position shifts should correlate with landing positions. This was not the case for either VG or MG saccades. Saccades started farther forward from fixation for MG than for VG saccades and landed farther backward of the target for MG than for VG saccades. Blinking did not contribute to these positional differences; instead, blinks enlarged saccade amplitudes for both saccade types. MG amplitudes were also smaller than VG amplitudes. We found no correlation between starting and landing errors across saccades; hence, start position changes did not influence saccade landing errors. Our results suggest that, to plan accurate saccades, the oculomotor system uses an internal representation of eye position that is updated after each blink. Although blinking was introduced to increase eye position changes, it influenced neither saccade starting nor landing positions.
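A sketch of the key correlation test, with simulated placeholder data; the variable names and numbers are hypothetical, and only the logic follows the abstract.

```python
import numpy as np

# Illustrative per-trial measurements (deg), standing in for real data.
rng = np.random.default_rng(0)
start_shift = rng.normal(0.0, 0.3, size=200)   # blink-induced start shifts
landing_err = rng.normal(0.0, 0.5, size=200)   # saccade landing errors
r = np.corrcoef(start_shift, landing_err)[0, 1]
# A correlation near zero, as reported, means start position changes were
# compensated, consistent with an internally updated eye position signal.
```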


1975 ◽  
Vol 27 (3) ◽  
pp. 459-465 ◽  
Author(s):  
Brian Craske ◽  
Martin Crawshaw ◽  
Peter Heron

Three experiments are reported on the effects of previous lateral deviation of the eyes. There is a large effect on their subsequent resting position, and a smaller instantaneous effect on voluntary eye centring. Both are in the direction of previous fixation. The latter effect becomes insignificant within 30 s. The treatment produces errors in visually guided reaching away from the previous direction of fixation. The effects are consistent with a change in registered eye position, an effect also produced by exposure to prisms. Despite this similarity, the disturbance to the oculomotor system caused by these two treatments is sharply differentiated by the resting position. Prisms cause subsequent low frequency, high amplitude oscillations of the eyes (Craske and Templeton, 1968), whereas following lateral deviation the mean resting position returns gradually towards the pre-treatment position.


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Ester Martinez-Martin ◽  
Angel P. del Pobil ◽  
Manuela Chessa ◽  
Fabio Solari ◽  
Silvio P. Sabatini

Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biologically inspired approach modeled on cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation, based on local estimation of phase differences through a bank of Gabor filters; and a robotic actuator that performs the corresponding tasks (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.
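Below is a minimal one-dimensional sketch of phase-based disparity estimation with complex Gabor filters, the general technique underlying the disparity module. It illustrates the method only; it is not the authors' implementation, and the filter parameters are assumptions.

```python
import numpy as np

def gabor_kernel(freq, sigma, n=61):
    """Complex 1-D Gabor: Gaussian envelope times a complex carrier."""
    x = np.arange(n) - n // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)

def phase_disparity(left_line, right_line, freq=0.05, sigma=10.0):
    """Disparity (px) from the local phase difference of Gabor responses."""
    g = gabor_kernel(freq, sigma)
    rl = np.convolve(left_line, g, mode="same")   # complex-valued responses
    rr = np.convolve(right_line, g, mode="same")
    dphi = np.angle(rl * np.conj(rr))             # phase difference in (-pi, pi]
    return dphi / (2 * np.pi * freq)              # valid for |d| < 1/(2*freq)

# Example: the right image line is the left one shifted by 3 pixels.
n = np.arange(400)
left = np.sin(2 * np.pi * 0.05 * n)
right = np.roll(left, 3)
d = phase_disparity(left, right)   # ~3 px away from the array edges
```

In a full system such a module runs over a bank of filters at several orientations and scales, and the per-filter estimates are pooled where response energy is high.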


2021 ◽  
Vol 10 (5) ◽  
pp. 1102
Author(s):  
Corina Marilena Cristache ◽  
Mihai Burlibasa ◽  
Ioana Tudor ◽  
Eugenia Eftimie Totu ◽  
Fabrizio Di Francesco ◽  
...  

(1) Background: Prosthetically driven implant positioning is a prerequisite for long-term treatment success. The planned implant position can be transferred to the clinical setting using either static or dynamic guided techniques. The 3D model of the bone and surrounding structures is obtained via cone beam computed tomography (CBCT), and the patient's oral condition can be acquired conventionally and then digitalized using a desktop scanner (partially digital workflow, PDW) or digitally with the aid of an intraoral scanner (fully digital workflow, FDW). The aim of the present randomized clinical trial (RCT) was to compare the accuracy of flapless dental implant insertion in partially edentulous patients with a static surgical template obtained through PDW versus FDW. Patient outcome and time spent from data collection to template manufacturing were also compared. (2) Methods: 66 partially edentulous sites (in 49 patients) were randomly assigned to PDW or FDW for guided implant insertion. Planned and placed implant positions were compared by assessing four deviation parameters: 3D error at the entry point, 3D error at the apex, angular deviation, and vertical deviation at the entry point. (3) Results: A total of 111 implants were inserted. No implant loss during osseointegration and no mechanical or technical complications occurred during the first year after implant loading. The mean error at the entry point was 0.44 mm (FDW) and 0.85 mm (PDW), p ≤ 0.00; at the implant apex, 1.03 mm (FDW) and 1.48 mm (PDW), p ≤ 0.00; the mean angular deviation, 2.12° (FDW) and 2.48° (PDW), p = 0.03; and the mean depth deviation, 0.45 mm (FDW) and 0.68 mm (PDW), p ≤ 0.00. (4) Conclusions: Despite the statistically significant differences between the groups, and within the limits of the present study, both the fully digital and the partially digital workflow are predictable methods for accurate prosthetically driven guided implant insertion.
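For readers unfamiliar with the four deviation parameters, here is a hypothetical geometric sketch of how they can be computed from planned and placed entry/apex coordinates. This is not the trial's planning software, and it assumes the z-axis points along the apico-coronal direction.

```python
import numpy as np

def implant_deviations(entry_plan, apex_plan, entry_placed, apex_placed):
    """All inputs are (x, y, z) in mm; returns the four deviation parameters."""
    ep, ap, eq, aq = (np.asarray(p, float)
                      for p in (entry_plan, apex_plan, entry_placed, apex_placed))
    d_entry = np.linalg.norm(eq - ep)            # 3D error at the entry point
    d_apex = np.linalg.norm(aq - ap)             # 3D error at the apex
    axis_plan, axis_placed = ap - ep, aq - eq    # implant long axes
    cos_ang = np.dot(axis_plan, axis_placed) / (
        np.linalg.norm(axis_plan) * np.linalg.norm(axis_placed))
    angular = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    vertical = (eq - ep)[2]   # signed depth error; assumes z = apical axis
    return d_entry, d_apex, angular, vertical

d_e, d_a, ang, vert = implant_deviations((0, 0, 0), (0, 0, -11),
                                         (0.3, 0.2, 0.4), (0.5, 0.4, -10.5))
```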


2017 ◽  
Vol 372 (1717) ◽  
pp. 20160077 ◽  
Author(s):  
Anna Honkanen ◽  
Esa-Ville Immonen ◽  
Iikka Salmela ◽  
Kyösti Heimonen ◽  
Matti Weckström

Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While much is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about the physiological adaptations of insect photoreceptors for low-light vision. We also discuss major enigmas concerning the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue ‘Vision in dim light’.

