Effects of Driver’s Head Motion and Visual Information on Perception of Ride Comfort

Author(s):
Kazuhito Kato
Satoshi Kitazaki
Takayuki Sonoda


2021
Author(s):
Zezhong Lv
Qing Xu
Klaus Schoeffmann
Simon Parkinson

Eye movement behavior, which supports visual information acquisition and processing, plays an important role in everyday sensorimotor tasks such as driving. In performing such tasks, gaze changes are produced through a specific coordination of head and eye, with head motion preceding eye movement. We argue that this coordination in essence reflects a causal relationship. In this paper, we investigate transfer entropy to define a quantitative measure of the unidirectional causality from head motion to eye movement. A normalized version of the proposed measure, evaluated in virtual-reality-based psychophysical studies, behaves very well as a proxy of driving performance, suggesting that quantitative exploitation of head-eye coordination may be an effective behaviometric of sensorimotor activity.
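The abstract does not spell out the estimator; as a rough illustration of the kind of measure described, the following Python sketch computes a normalized lag-1 transfer entropy from one signal to another with a plug-in histogram estimator. The bin count, embedding depth, and toy signals are illustrative assumptions, not the authors' implementation.

import numpy as np

def transfer_entropy(x, y, bins=8):
    # Plug-in estimate of TE(x -> y) in bits, normalized by H(y_next | y_now);
    # lag-1 embedding and equal-width binning are simplifying assumptions.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

    def H(*cols):  # joint Shannon entropy (bits) of discrete variables
        _, n = np.unique(np.column_stack(cols), axis=0, return_counts=True)
        p = n / n.sum()
        return -(p * np.log2(p)).sum()

    # TE(x -> y) = H(y_next | y_now) - H(y_next | y_now, x_now),
    # expanded into joint entropies below.
    te = H(y_next, y_now) - H(y_now) - H(y_next, y_now, x_now) + H(y_now, x_now)
    h_cond = H(y_next, y_now) - H(y_now)  # normalizer: H(y_next | y_now)
    return te / h_cond if h_cond > 0 else 0.0

# Toy check: the "eye" signal lags the "head" signal by one sample,
# so TE(head -> eye) should clearly exceed TE(eye -> head).
rng = np.random.default_rng(0)
head = rng.standard_normal(5000)
eye = np.roll(head, 1) + 0.3 * rng.standard_normal(5000)
print(transfer_entropy(head, eye), transfer_entropy(eye, head))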


1995
Vol 7 (3)
pp. 204-208
Author(s):
Yasuhito Suenaga

A paradigm for a better human interface, called Human Reader, is introduced with reference to computer vision (CV) and computer graphics (CG) research projects on human images at NTT Human Interface Laboratories. CV and CG are regarded as dual problems of visual information processing. Our research includes face recognition, detection of head direction and head motion, lip motion analysis, facial expression analysis, detection of hand and finger positions and movements, 3D head model generation, synchronized acquisition of shape and color, and rendering of realistic face images with various facial expressions and complex components such as hair.
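Among the listed topics, detection of head direction lends itself to a compact sketch. Below is a hedged, present-day illustration using facial landmark correspondences and OpenCV's solvePnP; the 3D model points, 2D detections, frame size, and camera intrinsics are all made-up placeholders, and this is not the method developed at NTT.

import numpy as np
import cv2

model_pts = np.float32([      # generic 3D face model points (mm), illustrative
    (0, 0, 0),                # nose tip
    (0, -63, -12),            # chin
    (-43, 32, -26),           # left eye outer corner
    (43, 32, -26),            # right eye outer corner
    (-28, -28, -24),          # left mouth corner
    (28, -28, -24),           # right mouth corner
])
image_pts = np.float32([      # matching 2D landmark detections (px), made up
    (359, 391), (399, 561), (337, 297),
    (513, 301), (345, 465), (453, 469),
])
h, w = 720, 1280              # assumed frame size
cam = np.float32([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]])  # rough intrinsics
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, cam, None)
R, _ = cv2.Rodrigues(rvec)    # rotation matrix relating model to camera frame
print("head direction in camera frame:", R @ np.float32([0, 0, 1]))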


Author(s):  
Agnes Wong

The vestibulo-ocular and optokinetic reflexes are the earliest eye movements to appear phylogenetically. The vestibulo-ocular reflex (VOR) stabilizes retinal images during head motion by counter-rotating the eyes at the same speed as the head but in the opposite direction. Information about head motion passes from the vestibular sensors in the inner ear to the VOR circuitry within the brainstem, which computes an appropriate eye velocity command. The eyes, confined in their bony orbits, normally do not change position, and their motion relative to the head is restricted to a change in orientation. However, the head can change both position and orientation relative to space. Thus, the function of the VOR is to generate eye orientation that best compensates for changes in position and orientation of the head. Because the drive for this reflex is vestibular rather than visual, it operates even in darkness. To appreciate the benefits of having our eyes under vestibular and not just visual control, hold a page of text in front of you, and oscillate it back and forth horizontally at a rate of about two cycles per second. You will find that the text is blurred. However, if you hold the page still and instead oscillate your head at the same rate, you will be able to read the text clearly. This is because when the page moves, only visual information is available. Visual information normally takes about 100 msec to travel from the visual cortices, through a series of brain structures, to the ocular motoneurons that move the eyes. This delay is simply too long for the eyes to keep up with the oscillating page. However, when the head moves, both vestibular and visual information are available. Vestibular information takes only about 7–15 msec to travel from the vestibular sensors, through the brainstem, to the ocular motoneurons. With this short latency, the eyes can easily compensate for the rapid oscillation of the head. Thus, damage to the vestibular system often causes oscillopsia, an illusion of motion in the stationary environment, especially during head movements.
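The latency argument can be made concrete with a toy simulation: an ideal counter-rotation command applied after either a ~100 msec visual-pathway delay or a ~10 msec vestibular-pathway delay, against a 2 Hz head oscillation. This is only a schematic sketch; the signal shape, unity gain, and exact delays are assumptions.

import numpy as np

fs = 1000.0                              # samples per second
t = np.arange(0, 2, 1 / fs)              # two seconds of motion
head_vel = np.cos(2 * np.pi * 2 * t)     # 2 Hz head velocity, arbitrary units

def rms_retinal_slip(latency_s):
    delay = int(latency_s * fs)
    eye_vel = -np.roll(head_vel, delay)  # perfect counter-rotation, delayed
    eye_vel[:delay] = 0.0                # no command before the signal arrives
    slip = head_vel + eye_vel            # residual image motion on the retina
    return np.sqrt(np.mean(slip[delay:] ** 2))

print(rms_retinal_slip(0.100))  # visual pathway: large residual slip (blur)
print(rms_retinal_slip(0.010))  # vestibular pathway: near-complete stabilization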


Interiority
2019
Vol 2 (2)
pp. 213-229
Author(s):
Maria M. C. Sengke
Triandriani Mustikawati

This paper discusses the visual mechanisms of seeing and their significance in experiencing an interior space. The discussion investigates what observers can obtain from the activity of seeing. The aim is to emphasise the role of seeing as a way of constructing the relation between humans and the interior environment. The paper explores the mechanisms of seeing by focusing on two different modes: seeing in a static position from a point of observation, and seeing while moving through a path of observation. An exploration in a hospital setting finds that seeing from a point of observation gives a visual range determined by body, head, and eye movement. This way of seeing produces visual information on interior space that consists of vertical and horizontal fields. Seeing while moving creates a path of observation that gives an optical flow containing dynamic and continuous visual information. Understanding these seeing mechanisms in the interior environment can generate designs with a better human-interior relation.


2009
Vol 23 (2)
pp. 63-76
Author(s):
Silke Paulmann
Sarah Jessen
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. The former component has repeatedly been linked to the processing of physical stimulus properties, while the latter has been linked to more evaluative, “meaning-related” processing. P200 and N300 amplitudes varied systematically with the number of information channels present: the multimodal condition elicited the smallest amplitudes in both components, followed by the bimodal condition, with the largest amplitudes observed in the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing fast and accurate information processing in human communication.
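As a schematic of how such component effects are commonly quantified (not the authors' pipeline), the sketch below takes epoched EEG data and computes mean amplitudes per condition in assumed P200 (150-250 ms) and N300 (250-350 ms) windows; the sampling rate, epoch layout, and placeholder data are illustrative.

import numpy as np

fs = 500                                 # assumed sampling rate (Hz)
times = np.arange(0.0, 0.6, 1.0 / fs)    # epoch: 0-600 ms after stimulus onset

def mean_amplitude(epochs, lo, hi):
    # Mean voltage in a latency window; epochs: (trials, channels, samples).
    win = (times >= lo) & (times < hi)
    return epochs[:, :, win].mean()

rng = np.random.default_rng(1)
for name in ("unimodal", "bimodal", "multimodal"):
    epochs = rng.standard_normal((40, 64, times.size))   # placeholder data
    p200 = mean_amplitude(epochs, 0.150, 0.250)          # assumed P200 window
    n300 = mean_amplitude(epochs, 0.250, 0.350)          # assumed N300 window
    print(f"{name}: P200 {p200:+.2f} uV, N300 {n300:+.2f} uV")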


Author(s):
Weiyu Zhang
Se-Hoon Jeong
Martin Fishbein†

This study investigates how multitasking interacts with the level of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance but also decreased TV recognition. An inverted-U relationship between the degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than with their ability to recognize visual information.
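A minimal sketch of how such a 2 x 3 between-subjects design, including the interaction and the inverted-U (quadratic) trend, might be analyzed; the variable names, cell size, and placeholder data are assumptions, not the study's materials.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 30                                       # assumed participants per cell
rows = []
for multitask in (0, 1):
    for level in (1, 2, 3):                  # low / medium / high content
        # Built-in inverted U under multitasking: recognition peaks at medium.
        mu = 70 - 10 * multitask - (4 * (level - 2) ** 2 if multitask else 0)
        rows += [(multitask, level, mu + rng.normal(0, 8)) for _ in range(n)]
df = pd.DataFrame(rows, columns=["multitask", "level", "recognition"])

model = smf.ols("recognition ~ C(multitask) * C(level)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))       # main effects and interaction

mt = df[df.multitask == 1]
trend = smf.ols("recognition ~ level + I(level**2)", data=mt).fit()
print(trend.params)                          # negative quadratic term -> inverted U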

