1601 Ocular counter-roll investigated by three-dimensional eye movement recording with a dual scleral search coil in alert monkeys

1996 ◽  
Vol 25 ◽  
pp. S170
Author(s):  
Yasuo Suzuki ◽  
Kikuro Fukushima ◽  
Masamichi Kato
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shotaro Harada ◽  
Takao Imai ◽  
Yasumitsu Takimoto ◽  
Yumi Ohta ◽  
Takashi Sato ◽  
...  

In the interaural direction, translational linear acceleration is loaded during lateral translational movement and gravitational acceleration is loaded during lateral tilting movement. These two types of acceleration induce eye movements via two kinds of otolith-ocular reflexes that compensate for the movement and maintain clear vision: horizontal eye movement during translational movement, and torsional eye movement (torsion) during tilting movement. Although the otolith organs themselves cannot discriminate between the two types of acceleration, the two otolith-ocular reflexes distinguish them effectively. In the current study, we tested whether lateral-eyed mice exhibit both of these otolith-ocular reflexes, and we propose a new index for assessing the otolith-ocular reflex in mice. During lateral translational movement, mice did not show the appropriate horizontal eye movement; instead, they exhibited an unnecessary vertical torsion-like eye movement that compensated for the angle between the body axis and the gravito-inertial acceleration (GIA; the sum of gravity and the inertial force due to movement), interpreting the GIA as gravity. Using the new index, (amplitude of the vertical component of eye movement)/(angle between the body axis and the GIA), the mouse otolith-ocular reflex can be assessed without determining whether it is induced during translational movement or during tilting movement.
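The proposed index is a simple ratio and can be computed directly from a recorded eye trace and the motion profile. The Python sketch below is a minimal illustration only; the function name, the peak-to-peak amplitude definition, and the synthetic signals are assumptions for demonstration, not the authors' analysis code.

    import numpy as np

    def otolith_ocular_index(vertical_eye_deg, lateral_accel_ms2, g=9.81):
        """Index = (amplitude of vertical eye movement) /
                   (angle between body axis and GIA).

        vertical_eye_deg  : 1-D array, vertical eye position (degrees)
        lateral_accel_ms2 : 1-D array, interaural linear acceleration (m/s^2)
        """
        # The GIA is the vector sum of gravity and the inertial force, so its
        # tilt away from the body's gravitational vertical is atan(a / g).
        gia_angle_deg = np.degrees(np.arctan2(lateral_accel_ms2, g))
        # One simple amplitude definition: peak-to-peak over the recording
        # (the paper may define amplitude differently, e.g., by sinusoidal fit).
        return np.ptp(vertical_eye_deg) / np.ptp(gia_angle_deg)

    # Illustrative call with synthetic 1 Hz sinusoidal traces (hypothetical):
    t = np.linspace(0.0, 2.0, 2000)
    accel = 2.0 * np.sin(2 * np.pi * t)      # m/s^2, lateral translation
    eye = 5.0 * np.sin(2 * np.pi * t)        # degrees, fabricated eye trace
    print(otolith_ocular_index(eye, accel))  # ~0.43 for these numbers

An index near 1 would indicate that the vertical eye movement fully compensates for the GIA tilt angle; smaller values indicate partial compensation.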


Author(s):  
Han Collewijn ◽  
Robert M. Steinman ◽  
Casper J. Erkelens ◽  
Zygmunt Pizlo ◽  
Johannes Van Der Steen

1993 ◽  
Vol 2 (1) ◽  
pp. 44-53 ◽  
Author(s):  
Kristinn R. Thorisson

The most common visual feedback technique in teleoperation is the monoscopic video display. As robotic autonomy increases and the human operator takes on the role of a supervisor, three-dimensional information is, in effect, presented by multiple televised two-dimensional (2-D) projections showing the same scene from different angles. To analyze how people use such segmented information to make estimates about three-dimensional (3-D) space, 18 subjects were asked to determine the position of a stationary pointer in space; eye movements and reaction times (RTs) were recorded while either two or three 2-D views were presented simultaneously, each showing the same scene from a different angle. The results revealed that subjects estimated 3-D space using a simple feature-search strategy. Eye movement analysis supported the conclusion that people can efficiently use multiple 2-D projections to estimate 3-D space without mentally reconstructing the scene in three dimensions. The major limiting factor on RT in such situations is the subjects' visual search performance, which in this experiment gave a mean RT of 2270 msec (SD = 468; N = 18). This conclusion was supported by predictions of the Model Human Processor (Card, Moran, & Newell, 1983), which predicted a mean RT of 1820 msec given the general eye movement patterns observed. Single-subject analysis further suggested that in some cases people may base their judgments on a more elaborate 3-D mental model reconstructed from the available 2-D views. In such situations, RTs and visual search patterns closely resemble those found in the mental rotation paradigm (Just & Carpenter, 1976), with RTs in the range of 5-10 sec.
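To make the Model Human Processor prediction concrete, the Python sketch below assembles an RT estimate from the book's nominal processor cycle times. The fixation and comparison counts, and the way the operators are summed, are hypothetical choices made only to illustrate the arithmetic; they are not taken from the paper.

    # Model Human Processor (Card, Moran & Newell, 1983): back-of-envelope RT.
    # Nominal "middle-man" cycle times from the book; sequence is hypothetical.
    TAU_P = 100    # perceptual processor cycle (ms)
    TAU_C = 70     # cognitive processor cycle (ms)
    TAU_M = 70     # motor processor cycle (ms)
    SACCADE = 230  # one eye movement, including planning (ms)

    def predict_rt(n_fixations: int, n_comparisons: int) -> int:
        """Predicted RT: one perceive+decide step per fixation, the saccades
        linking fixations, extra comparison cycles, and a final motor act."""
        perceive_and_decide = n_fixations * (TAU_P + TAU_C)
        eye_movements = (n_fixations - 1) * SACCADE
        comparisons = n_comparisons * TAU_C
        respond = TAU_M
        return perceive_and_decide + eye_movements + comparisons + respond

    # e.g., a 5-fixation search with 3 comparison cycles (hypothetical):
    print(predict_rt(5, 3), "ms")  # -> 2050 ms

Summing serial operator times in this way is how the MHP yields point predictions such as the 1820 msec figure cited above; the actual fixation sequence used in the paper would determine the exact value.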

