Effects of spatial visual information and head motion cues on auditory spatial judgments.

2011 · Vol 129 (4) · pp. 2485-2485
Author(s): Mark A. Ericson, Rachel Weatherless
2021
Author(s): Zezhong Lv, Qing Xu, Klaus Schoeffmann, Simon Parkinson

Eye movement behavior, which underlies visual information acquisition and processing, plays an important role in how humans perform sensorimotor tasks, such as driving, in everyday life. In such tasks, eye movements arise through a specific coordination of head and eye during gaze changes, with head motion typically preceding eye movement. Notably, we believe this coordination in essence reflects a kind of causality. In this paper, we investigate transfer entropy as a quantitative measure of the unidirectional causality from head motion to eye movement. A normalized version of the proposed measure, validated in virtual-reality-based psychophysical studies, behaves very well as a proxy of driving performance, suggesting that quantitative exploitation of head-eye coordination may be an effective behaviometric of sensorimotor activity.
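For readers unfamiliar with the measure, the sketch below shows one plausible way to estimate a normalized transfer entropy from a head-motion series to an eye-movement series. The histogram discretization, the one-sample histories, and the normalization by the conditional entropy H(Y'|Y) are illustrative assumptions, not the authors' exact estimator.

```python
# Minimal sketch of a normalized transfer-entropy estimate TE(head -> eye).
# Binning, history length, and normalization are assumptions for illustration.
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Normalized TE(x -> y) with 1-sample histories, histogram estimator."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y_next, y_past, x_past = yd[1:], yd[:-1], xd[:-1]

    def H(*cols):
        # Joint Shannon entropy (bits) of one or more discrete variables.
        _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    h_cond = H(y_next, y_past) - H(y_past)                          # H(Y'|Y)
    te = h_cond - (H(y_next, y_past, x_past) - H(y_past, x_past))   # TE(X->Y)
    return te / h_cond if h_cond > 0 else 0.0                       # in [0, 1]

# Toy usage: eye velocity loosely follows head velocity two samples later,
# so the head -> eye direction should dominate.
rng = np.random.default_rng(0)
head = np.cumsum(rng.standard_normal(5000))
eye = np.roll(head, 2) + 0.3 * rng.standard_normal(5000)
print(transfer_entropy(head, eye))   # larger
print(transfer_entropy(eye, head))   # smaller
```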


1997 · Vol 59 (7) · pp. 1018-1026
Author(s): Stephen Perrett, William Noble

2011 · Vol 105 (6) · pp. 2989-3001
Author(s): Ryan M. Yoder, Benjamin J. Clark, Joel E. Brown, Mignon V. Lamia, Stephane Valerio, ...

Successful navigation requires a constantly updated neural representation of directional heading, which is conveyed by head direction (HD) cells. The HD signal is predominantly controlled by visual landmarks, but when familiar landmarks are unavailable, self-motion cues are able to control the HD signal via path integration. Previous studies of the relationship between HD cell activity and path integration have been limited to two or more arenas located in the same room, a drawback for interpretation because the same visual cues may have been perceptible across arenas. To address this issue, we tested the relationship between HD cell activity and path integration by recording HD cells while rats navigated within a 14-unit T-maze and in a multiroom maze that consisted of unique arenas that were located in different rooms but connected by a passageway. In the 14-unit T-maze, the HD signal remained relatively stable between the start and goal boxes, with the preferred firing directions usually shifting <45° during maze traversal. In the multiroom maze in light, the preferred firing directions also remained relatively constant between rooms, but with greater variability than in the 14-unit maze. In darkness, HD cell preferred firing directions showed marginally more variability between rooms than in the lighted condition. Overall, the results indicate that self-motion cues are capable of maintaining the HD cell signal in the absence of familiar visual cues, although there are limits to its accuracy. In addition, visual information, even when unfamiliar, can increase the precision of directional perception.
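As a rough illustration of the analysis behind shift figures like "<45°", the sketch below computes a cell's preferred firing direction as a rate-weighted circular mean and measures the wrapped shift between two epochs. The 6° binning and the estimator choice are assumptions for illustration, not the paper's exact method.

```python
# Hypothetical preferred-firing-direction (PFD) shift computation; the
# binning and circular-mean estimator are illustrative assumptions.
import numpy as np

def preferred_direction(angles_deg, rates):
    """Circular mean of head directions weighted by firing rate, in degrees."""
    theta = np.deg2rad(angles_deg)
    return np.rad2deg(np.angle(np.sum(rates * np.exp(1j * theta)))) % 360.0

def pfd_shift(pfd_a, pfd_b):
    """Signed angular difference wrapped to (-180, 180] degrees."""
    return (pfd_b - pfd_a + 180.0) % 360.0 - 180.0

# Toy tuning curves: a cell tuned near 90 deg in the start box whose
# PFD has shifted ~30 deg (< 45 deg) by the time the rat reaches the goal.
bins = np.arange(0, 360, 6)
start_rates = np.exp(3 * np.cos(np.deg2rad(bins - 90)))
goal_rates = np.exp(3 * np.cos(np.deg2rad(bins - 120)))
shift = pfd_shift(preferred_direction(bins, start_rates),
                  preferred_direction(bins, goal_rates))
print(f"PFD shift: {shift:.1f} deg")  # ~30 deg, within the <45 deg criterion
```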


2014 · Vol 8
Author(s): Hirohito M. Kondo, Iwaki Toshima, Daniel Pressnitzer, Makio Kashino

1995 · Vol 7 (3) · pp. 204-208
Author(s): Yasuhito Suenaga

A paradigm for improved human interfaces, called Human Reader, is introduced with reference to computer vision (CV) and computer graphics (CG) research projects on human images at NTT Human Interface Laboratories. CV and CG are regarded as dual problems of visual information processing. Our research includes face recognition; detection of head direction and head motion; lip motion analysis; facial expression analysis; detection of hand and finger positions and movements; 3D head model generation; synchronized acquisition of shape and color; and the rendering of realistic face images with various facial expressions and complex components such as hair.


Sensors · 2021 · Vol 21 (23) · pp. 8079
Author(s): Jose V. Riera, Sergio Casas, Marcos Fernández, Francisco Alonso, Sergio A. Useche

Motion platforms have been widely used in Virtual Reality (VR) systems for decades to simulate motion in virtual environments, and they have several applications in emerging fields such as driving assistance systems, vehicle automation and road risk management. Currently, the development of new immersive VR systems faces unique challenges in meeting user requirements, such as the introduction of high-resolution 360° panoramic images and videos. With this type of visual information, applying the traditional methods of generating motion cues is much more complicated, since the motion properties needed to feed the motion cueing algorithms generally cannot be computed. For this reason, this paper presents a new method for generating non-real-time gravito-inertial cues with motion platforms in a system fed with both computer-generated (simulation-based) images and video imagery. It is a hybrid method: the gravito-inertial cues for which acceleration information is available are generated through a classical approach, by applying physical modeling to a VR scene and passing the result through washout filters, while the cues derived from recorded images and video, which carry no acceleration information, are generated ad hoc in a semi-manual way. The resulting motion cues were then refined according to the contributions of different experts, following a successive-approximation (Wideband Delphi-inspired) method. A subjective evaluation showed that the motion signals refined with this method were perceived as significantly better than the original, non-refined ones. The final system, developed as part of an international road safety education campaign, could be useful for developing further VR-based applications in key fields such as driving assistance, vehicle automation and road crash prevention.
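For context on the classical branch of the method, the sketch below illustrates the washout idea in its simplest form: high-pass filter the scaled vehicle acceleration so the platform renders onsets and then drifts back toward neutral. The first-order RC stages, the 0.5 Hz cutoff, and the scale factor are illustrative assumptions; a production classical washout also includes tilt coordination and rotational channels.

```python
# Minimal sketch of the washout-filter idea (one translational channel only);
# filter order, cutoff, and scaling are illustrative assumptions.
import numpy as np

def highpass(sig, dt, cutoff_hz):
    """Discrete first-order high-pass filter (simple RC form)."""
    alpha = 1.0 / (1.0 + 2.0 * np.pi * cutoff_hz * dt)
    out = np.zeros_like(sig)
    for i in range(1, len(sig)):
        out[i] = alpha * (out[i - 1] + sig[i] - sig[i - 1])
    return out

def washout_position(accel, dt, cutoff_hz=0.5, scale=0.5):
    """Scale and high-pass the vehicle acceleration twice, then integrate
    to a platform displacement command that settles back toward neutral."""
    hp = highpass(highpass(scale * accel, dt, cutoff_hz), dt, cutoff_hz)
    vel = np.cumsum(hp) * dt
    return np.cumsum(vel) * dt

# Toy usage: a sustained 2 m/s^2 braking pulse. The platform reproduces the
# onset cue, then washes out instead of exceeding its travel limits.
dt = 0.01
accel = np.zeros(1000)
accel[100:600] = 2.0
pos = washout_position(accel, dt)
print(f"peak displacement {pos.max():.3f} m, final {pos[-1]:.3f} m")
```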


Author(s): Agnes Wong

The vestibulo-ocular and optokinetic reflexes are the earliest eye movements to appear phylogenetically. The vestibulo-ocular reflex (VOR) stabilizes retinal images during head motion by counter-rotating the eyes at the same speed as the head but in the opposite direction. Information about head motion passes from the vestibular sensors in the inner ear to the VOR circuitry within the brainstem, which computes an appropriate eye velocity command. The eyes, confined in their bony orbits, normally do not change position, and their motion relative to the head is restricted to a change in orientation. The head, however, can change both position and orientation relative to space. Thus, the function of the VOR is to generate eye orientations that best compensate for changes in the position and orientation of the head. Because the drive for this reflex is vestibular rather than visual, it operates even in darkness.

To appreciate the benefits of having our eyes under vestibular and not just visual control, hold a page of text in front of you and oscillate it back and forth horizontally at about two cycles per second. You will find that the text is blurred. However, if you hold the page still and instead oscillate your head at the same rate, you will be able to read the text clearly. This is because when the page moves, only visual information is available. Visual information normally takes about 100 msec to travel from the visual cortices, through a series of brain structures, to the ocular motoneurons that move the eyes; this delay is simply too long for the eyes to keep up with the oscillating page. When the head moves, by contrast, both vestibular and visual information are available. Vestibular information takes only about 7–15 msec to travel from the vestibular sensors, through the brainstem, to the ocular motoneurons. With this short latency, the eyes can easily compensate for the rapid oscillation of the head.

Thus, damage to the vestibular system often causes oscillopsia, an illusion of motion in the stationary environment, especially during head movements.
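Those latency figures invite a quick back-of-the-envelope check (our arithmetic, not the author's): at oscillation frequency f, a feedback delay Δt produces a phase error of 360° · f · Δt, which makes plain why the visual pathway alone cannot stabilize gaze at 2 Hz.

```latex
% Phase error from a feedback delay \Delta t at oscillation frequency f:
\[
\phi = 360^{\circ} \cdot f \cdot \Delta t
\]
% Visual pathway at the 2 Hz oscillation (delay ~100 ms): text badly blurred.
\[
\phi_{\mathrm{visual}} \approx 360^{\circ} \times 2\,\mathrm{Hz} \times 0.1\,\mathrm{s} = 72^{\circ}
\]
% Vestibular pathway (delay ~7--15 ms; take 10 ms): gaze nearly stabilized.
\[
\phi_{\mathrm{vestibular}} \approx 360^{\circ} \times 2\,\mathrm{Hz} \times 0.01\,\mathrm{s} = 7.2^{\circ}
\]
```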

