The Critical Role of Head Movements for Spatial Representation During Bumblebees Learning Flight

2021 · Vol 14
Author(s): Charlotte Doussot, Olivier J. N. Bertrand, Martin Egelhaaf

Bumblebees perform complex flight maneuvers around the barely visible entrance of their nest upon their first departures. During these flights, bees learn visual information about the surroundings, possibly including their spatial layout. They rely on this information to return home. Depth information can be derived from the apparent motion of the scenery on the bees' retina. This motion is shaped by the animal's flight and gaze orientation: bees employ a saccadic flight and gaze strategy, in which rapid turns of the head (saccades) alternate with flight segments of apparently constant gaze direction (intersaccades). When the gaze direction is kept relatively constant during intersaccades, the apparent motion carries information about the distance of the animal to environmental objects, i.e., depth in an egocentric reference frame. Alternatively, when the gaze direction rotates around a fixed point in space, the animal perceives the depth structure relative to this pivot point, i.e., in an allocentric reference frame. If the pivot point lies at the nest hole, the information is nest-centric. Here, we investigate in which reference frames bumblebees perceive depth information during their learning flights. By precisely tracking head orientation, we found that the head appears to pivot actively about half of the time. However, only a few of the corresponding pivot points lie close to the nest entrance. Our results indicate that bumblebees perceive visual information in several reference frames when they learn about the surroundings of a behaviorally relevant location.
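The pivot-point idea above lends itself to a simple computation: if the gaze rays from successive frames of a flight segment all pass through one location in space, that location can be recovered as the least-squares intersection of the rays. The sketch below is illustrative only, not the authors' tracking pipeline; the function name and the synthetic data are assumptions.

```python
import numpy as np

def pivot_point(positions, gaze_dirs):
    """Least-squares point closest to all gaze lines.

    positions: (n, 2) head positions; gaze_dirs: (n, 2) gaze directions.
    Gaze line i passes through positions[i] along gaze_dirs[i].
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(positions, gaze_dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)  # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Synthetic flight segment: the head circles a pivot at (1.0, 2.0)
# while the gaze stays locked onto that pivot.
pivot = np.array([1.0, 2.0])
angles = np.linspace(0.2, 1.2, 6)
positions = pivot + 0.5 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
gaze_dirs = pivot - positions  # each gaze ray points at the pivot
print(pivot_point(positions, gaze_dirs))  # ≈ [1. 2.]
```

If the recovered point falls near the nest entrance, the depth information for that segment would be nest-centric in the sense used above.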

2006 · Vol 96 (1) · pp. 352-362
Author(s): Sabine M. Beurze, Stan Van Pelt, W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
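The hand-to-target difference vector described above is a plain subtraction once both signals share a common (here eye-centered) frame; a gain element on either signal then yields a systematic pointing error. A minimal sketch, with assumed function names and gain values rather than the authors' model:

```python
import numpy as np

def movement_vector(target_eye, hand_eye, target_gain=1.0, hand_gain=1.0):
    """Hand-to-target difference vector in eye-centered coordinates.

    Gains scale the eye-centered target and hand signals; values != 1.0
    mimic a gain element that would produce systematic pointing errors.
    """
    return target_gain * np.asarray(target_eye) - hand_gain * np.asarray(hand_eye)

# Unbiased case: the planned vector is simply target minus hand.
v = movement_vector([10.0, 5.0], [2.0, 1.0])
print(v)  # [8. 4.]

# Underweighting the hand signal shifts the endpoint, i.e., a pointing
# error whose direction depends on the initial hand position.
v_err = movement_vector([10.0, 5.0], [2.0, 1.0], hand_gain=0.8)
print(v_err - v)
```

Because the error here depends on where the hand and target sit in eye coordinates, such a scheme reproduces the observed dependence of errors on both initial hand position and gaze direction.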


2013 · Vol 26 (5) · pp. 465-482
Author(s): Michelle L. Cadieux, David I. Shore

Performance on tactile temporal order judgments (TOJs) is impaired when the hands are crossed over the midline. The cause of this effect appears to be tied to the use of an external reference frame, most likely based on visual information. We measured the effect of degrading the external reference frame on the crossed-hands deficit by restricting visual information across three experiments. Experiments 1 and 2 examined three visual conditions (eyes open–lights on, eyes open–lights off, and eyes closed–lights off) while manipulating response demands; no effect of visual condition was seen. In Experiment 3, response demands were altered to be maximally connected to the internal reference frame, and only two visual conditions were tested: eyes open–lights on and eyes closed–lights off. Blindfolded participants showed a reduced crossed-hands deficit. Results are discussed in terms of the time needed to recode stimuli from an internal to an external reference frame and the role of conflict between these two reference frames in causing this effect.


2019
Author(s): Lukas Schneider, Adan-Ulises Dominguez-Vargas, Lydia Gibson, Igor Kagan, Melanie Wilke

Most sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit post-saccadic spatial preference long after saccade execution. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°/0°/15°) in monkeys performing a visually-cued memory saccade task. We found two main types of gaze dependence. First, ∼50% of neurons showed an effect of static gaze direction during initial and post-saccadic fixation. Eccentric gaze preference was more common than straight-ahead. Some of these neurons were not visually-responsive and might be primarily signaling the position of the eyes in the orbit, or coding foveal targets in a head/body/world-centered reference frame. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to post-saccadic target fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in non-retinocentric coordinates.
These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually-guided eye and limb movements.
New & Noteworthy: Work on the pulvinar has focused on eye-centered visuospatial representations, but the position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. Here we show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.


2020 · Vol 123 (1) · pp. 367-391
Author(s): Lukas Schneider, Adan-Ulises Dominguez-Vargas, Lydia Gibson, Igor Kagan, Melanie Wilke

Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. 
NEW & NOTEWORTHY Work on the pulvinar has focused on eye-centered visuospatial representations, but the position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.


1999 · Vol 42 (3) · pp. 526-539
Author(s): Charissa R. Lansing, George W. McConkie

Two experiments were conducted to test the hypothesis that visual information related to segmental versus prosodic aspects of speech is distributed differently on the face of the talker. In the first experiment, eye gaze was monitored for 12 observers with normal hearing. Participants made decisions about segmental and prosodic categories for utterances presented without sound. The first experiment found that observers spend more time looking at and direct more gazes toward the upper part of the talker's face in making decisions about intonation patterns than about the words being spoken. The second experiment tested the Gaze Direction Assumption underlying Experiment 1—that is, that people direct their gaze to the stimulus region containing information required for their task. In this experiment, 18 observers with normal hearing made decisions about segmental and prosodic categories under conditions in which face motion was restricted to selected areas of the face. The results indicate that information in the upper part of the talker's face is more critical for intonation pattern decisions than for decisions about word segments or primary sentence stress, thus supporting the Gaze Direction Assumption. Visual speech perception proficiency requires learning where to direct visual attention for cues related to different aspects of speech.


2011 · Vol 23 (10) · pp. 2983-2993
Author(s): Hans-Otto Karnath, André Mandler, Simon Clavagnier

Different reference frames have been identified to influence neglect behavior. In particular, neglect has been demonstrated to be related to the contralesional side of the subject's body (egocentric reference frame) as well as to the contralesional side of individual objects irrespective of their position relative to the patient (object-based reference frame). There has been discussion about whether this distinction separates neglect into body- and object-based forms. The present experiment aimed to probe possible interactions between object-based and egocentric aspects of spatial neglect. Neglect patients' eye and head movements were recorded while they explored objects at five egocentric positions along the horizontal dimension of space. The patients showed both egocentric and object-based behavior. Most interestingly, data analysis revealed that object-based neglect varied with egocentric position. Although neglect of the objects' left side was strong at contralesional egocentric positions, it ameliorated at more ipsilesional egocentric positions of the objects. The patients showed steep, ramp-shaped patterns of exploration for objects located on the far contralesional side and a broadening of these patterns as the locations of the objects shifted more to the ipsilesional side. The data fitted well with the saliency curves predicted by a model of space representation, which suggests that visual input is represented in two modes simultaneously: in veridical egocentric coordinates and in within-object coordinates.


1999 · Vol 82 (5) · pp. 2833-2838
Author(s): W. P. Medendorp, J.A.M. van Gisbergen, M.W.I.M. Horstink, C.C.A.M. Gielen

We investigated head movements of patients with spasmodic torticollis toward targets in various directions. These patients, whose severe dystonia was reflected in an abnormal resting head position, appeared to retain a Donders'-type strategy for the control of the rotational degrees of freedom of the head. As in normals, rotation vectors, representing head orientation, were confined to a curved surface, which specifies how head torsion depends on gaze direction. The orientation of the surface in body coordinates, which was very stereotyped in normals, was different for patients. The same Donders surface was found for head movements and for stationary head postures, indicating that the same neural mechanism governs its implementation in both tasks. To interpret our results, we propose a conceptual scheme incorporating the basal ganglia, which are thought to be involved in the etiology of torticollis, and an implementation stage for Donders' law.
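A Donders-type surface of the kind described above is commonly summarized by regressing the torsional component of the head's rotation vector on the other two components plus their product (the "twist" term). The sketch below illustrates such a fit on synthetic data; the parameterization and names are assumptions, not the authors' analysis.

```python
import numpy as np

def fit_donders_surface(rot_vecs):
    """Fit torsion as a second-order function of the other two rotation-vector
    components: r_t ≈ a0 + a1*r_v + a2*r_h + a3*r_v*r_h (a curved surface).

    rot_vecs: (n, 3) rotation vectors as (torsion, vertical, horizontal).
    Returns the coefficients (a0, a1, a2, a3).
    """
    t, v, h = rot_vecs[:, 0], rot_vecs[:, 1], rot_vecs[:, 2]
    X = np.column_stack([np.ones_like(v), v, h, v * h])
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return coef

# Synthetic head orientations generated from a known surface, then recovered.
rng = np.random.default_rng(0)
v = rng.uniform(-0.3, 0.3, 200)
h = rng.uniform(-0.3, 0.3, 200)
true = np.array([0.05, -0.1, 0.2, 0.4])  # offset, two slopes, twist
t = true[0] + true[1] * v + true[2] * h + true[3] * v * h
coef = fit_donders_surface(np.column_stack([t, v, h]))
print(np.round(coef, 3))  # ≈ [ 0.05 -0.1   0.2   0.4 ]
```

In such a parameterization, a nonzero offset term would correspond to the abnormal resting head orientation seen in the patients, while the twist term captures the curvature of the surface.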


2021
Author(s): Arash Tavakoli, Vahid Balali, Arsalan Heydarian

Studies have shown that environmental factors affect driving behaviors. For instance, weather conditions and the presence of a passenger have been shown to significantly affect the speed of the driver. Because gaze and head movements are important measures of driving behavior, such metrics can potentially be used to understand the effects of environmental factors on the driver's behavior in real time. In this study, using a naturalistic study platform, videos were collected from six participants over more than four weeks of fully naturalistic driving. Videos of both the participants' faces and the road were cleaned and manually categorized by weather, road type, and passenger conditions. Facial videos were analyzed with OpenFace to retrieve the gaze direction and head movements of the driver. Overall, the results suggest that the gaze direction and head movements of the driver are affected by a combination of environmental factors and individual differences. Specifically, the results show the distracting effect of a passenger on some individuals. In addition, they show that highways and city streets are associated with the greatest distraction of the driver's gaze.


2021 · Vol 11 (1)
Author(s): Kaian Unwalla, Michelle L. Cadieux, David I. Shore

Accurate localization of touch requires the integration of two reference frames—an internal (e.g., anatomical) and an external (e.g., spatial). Using a tactile temporal order judgement task with the hands crossed over the midline, we investigated the integration of these two reference frames. We manipulated the reliability of the visual and vestibular information, both of which contribute to the external reference frame. Visual information was manipulated between experiments (Experiment 1 was done with full vision and Experiment 2 was done while wearing a blindfold). Vestibular information was manipulated in both experiments by having the two groups of participants complete the task both in an upright posture and while lying on their side. Using a Bayesian hierarchical model, we estimated the perceptual weight applied to these reference frames. Having participants lie on their side reduced the weight applied to the external reference frame and produced a smaller deficit; blindfolding resulted in similar reductions. These findings reinforce the importance of the visual system when weighting tactile reference frames, and highlight the importance of the vestibular system in this integration.
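The weighting scheme described above can be caricatured in a few lines: when the hands are crossed, the two reference frames give conflicting answers, and the size of the deficit tracks the weight given to the external frame. This toy model is an illustration only, not the authors' Bayesian hierarchical model; the weight values are made up.

```python
def crossed_hands_conflict(w_external):
    """Toy weighting of two tactile reference frames.

    With hands crossed, the internal (anatomical) frame says 'left hand'
    while the external (spatial) frame says 'right side of space'. The net
    evidence shrinks toward zero as the conflicting external frame gains
    weight, which corresponds to a larger crossed-hands deficit.
    """
    internal, external = +1.0, -1.0  # signed evidence from each frame
    return (1 - w_external) * internal + w_external * external

# Full vision, upright: strong external weight, so the frames nearly cancel.
print(crossed_hands_conflict(0.45))
# Blindfolded or lying down: the external frame is down-weighted, so the
# anatomical signal dominates and the deficit shrinks.
print(crossed_hands_conflict(0.15))
```

Under this caricature, any manipulation that degrades visual or vestibular input lowers `w_external` and therefore predicts a smaller crossed-hands deficit, in line with the findings above.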


eLife · 2020 · Vol 9
Author(s): Angie M Michaiel, Elliott TT Abe, Cristopher M Niell

Many studies of visual processing are conducted in constrained conditions such as head- and gaze-fixation, and therefore less is known about how animals actively acquire visual information in natural contexts. To determine how mice target their gaze during natural behavior, we measured head and bilateral eye movements in mice performing prey capture, an ethological behavior that engages vision. We found that the majority of eye movements are compensatory for head movements, thereby serving to stabilize the visual scene. During movement, however, periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Notably, these saccades do not preferentially target the prey location. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.

