Self-Motion Speed Perception of Visual Information in Driving: Modeling and Validation

Author(s): Hirofumi Yotsutsuji, Hideyuki Kita
i-Perception, 2017, Vol. 8 (3), pp. 204166951770820
Author(s): Diederick C. Niehorster, Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter the flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
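The nulling logic above admits a simple arithmetic illustration: an object appears scene-stationary when its retinal motion matches the fraction of the self-motion flow that the visual system subtracts, so the flow parsing gain is the ratio of the nulling retinal motion to the flow that self-motion predicts at the object's location. A minimal sketch with illustrative numbers (the function name and values are hypothetical, not the authors' data):

```python
import numpy as np

def flow_parsing_gain(nulling_retinal_motion, local_flow_from_self_motion):
    """Gain = fraction of the self-motion flow component that is subtracted.

    nulling_retinal_motion: retinal motion (deg/s, 2D vector) given to the
        probe object at the point where it appears scene-stationary.
    local_flow_from_self_motion: retinal flow (deg/s, 2D vector) that pure
        self-motion would produce at the object's location.
    A gain of 1.0 would mean complete subtraction (accurate flow parsing);
    the paper reports gains below unity.
    """
    flow = np.asarray(local_flow_from_self_motion, dtype=float)
    nulled = np.asarray(nulling_retinal_motion, dtype=float)
    # Project the nulling motion onto the flow direction and take the ratio.
    return float(nulled @ flow / (flow @ flow))

# Illustrative numbers only: self-motion predicts 5 deg/s rightward flow at
# the object's location, but the object already looks stationary at 4 deg/s.
print(flow_parsing_gain([4.0, 0.0], [5.0, 0.0]))  # -> 0.8 (incomplete subtraction)
```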


2017, Vol. 30 (1), pp. 65-90
Author(s): Séamas Weech, Nikolaus F. Troje

Studies of the illusory sense of self-motion elicited by a moving visual surround (‘vection’) have revealed key insights about how sensory information is integrated. Vection usually occurs after a delay of several seconds following visual motion onset, whereas self-motion in the natural environment is perceived immediately. It has been suggested that this latency relates to the sensory mismatch between visual and vestibular signals at motion onset. Here, we tested three techniques with the potential to reduce sensory mismatch and thereby shorten vection onset latency: noisy galvanic vestibular stimulation (GVS) and bone-conducted vibration (BCV) at the mastoid processes, and body vibration applied to the lower back. In Experiment 1, we examined vection latency for wide-field visual rotations about the roll axis and applied a burst of stimulation at the start of visual motion. Both GVS and BCV reduced vection latency by two seconds compared to the control condition, whereas body vibration had no effect on latency. In Experiment 2, the visual stimulus rotated about the pitch, roll, or yaw axis, and we found a similar facilitation of vection by both BCV and GVS in each case. In a control experiment, we confirmed that air-conducted sound administered through headphones was not sufficient to reduce vection onset latency. Together, the results suggest that noisy vestibular stimulation facilitates vection, likely due to an upweighting of visual information caused by a reduction in vestibular sensory reliability.
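The proposed mechanism, a reweighting toward vision when vestibular reliability drops, follows from standard inverse-variance (reliability-weighted) cue combination. A minimal sketch under that assumption; the noise values are illustrative, not fitted to these experiments:

```python
def cue_weights(sigma_visual, sigma_vestibular):
    """Reliability (inverse-variance) weights for two cues, as in standard
    maximum-likelihood cue combination."""
    r_vis, r_vest = 1 / sigma_visual**2, 1 / sigma_vestibular**2
    total = r_vis + r_vest
    return r_vis / total, r_vest / total

# Illustrative: adding noise (e.g., noisy GVS or BCV) doubles the vestibular
# sigma, which shifts weight toward the visual self-motion signal.
print(cue_weights(sigma_visual=2.0, sigma_vestibular=1.0))  # vision weight 0.2
print(cue_weights(sigma_visual=2.0, sigma_vestibular=2.0))  # vision weight 0.5
```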


1987, Vol. 31 (2), pp. 263-265
Author(s): George J. Andersen, Brian P. Dyre

An important consideration for some types of flight simulation is that sufficient visual information be provided for a perception of self-motion. A general conclusion of earlier research is that peripheral stimulation (outside a 30 deg. diameter area of the central visual field) is necessary for perceived self-motion to occur. More recently, Andersen and Braunstein (1985) demonstrated that induced self-motion could occur when visual information simulating forward motion of the observer was presented to a limited area of the central visual field. In the present study, the perception of induced roll vection (rotation about the line of sight) from visual stimulation of the central visual field was examined. Subjects viewed computer-generated displays that simulated observer motion relative to a volume of randomly positioned points. Two variables were examined: 1) the presence or absence of a simulated forward motion, and 2) the presence of a 15 deg. or 30 deg. sinusoidal roll motion. It was found that: 1) induced roll vection occurred with stimulation restricted to a 10 deg. diameter area of the central visual field; 2) greater postural instability occurred for displays with a 30 deg. roll as compared to a 15 deg. roll; and 3) significantly greater postural instability occurred along the X-axis (left/right) as compared to the Y-axis (front/back). The implications of this research for flight simulation will be discussed.


2014, Vol. 112 (10), pp. 2470-2480
Author(s): Andre Kaminiarz, Anja Schlack, Klaus-Peter Hoffmann, Markus Lappe, Frank Bremmer

The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task because they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve the heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article, we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
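The distorted flow fields used here can be described with the standard motion-field equations: for an image point (x, y) at depth Z (focal length normalized to 1), observer translation T and eye rotation omega each contribute a retinal velocity, and a simulated eye movement simply adds the rotational term to the radial expansion. A minimal sketch of that textbook construction (not the authors' stimulus code; sign conventions vary across papers):

```python
import numpy as np

def retinal_flow(x, y, Z, T, omega):
    """Retinal velocity of a point at image position (x, y) and depth Z, for
    observer translation T = (Tx, Ty, Tz) and eye rotation
    omega = (wx, wy, wz), following the classic Longuet-Higgins/Prazdny
    formulation with focal length 1."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (x * Tz - Tx) / Z + x * y * wx - (1 + x**2) * wy + y * wz
    v = (y * Tz - Ty) / Z + (1 + y**2) * wx - x * y * wy - x * wz
    return np.array([u, v])

# Pure forward self-motion gives radial expansion from the heading point...
print(retinal_flow(0.1, 0.0, Z=10.0, T=(0, 0, 1), omega=(0, 0, 0)))
# ...while a simulated horizontal eye rotation adds a rotational component
# that distorts the flow pattern, as in the distorted-flow stimuli.
print(retinal_flow(0.1, 0.0, Z=10.0, T=(0, 0, 1), omega=(0, 0.01, 0)))
```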


2011, Vol. 105 (6), pp. 2989-3001
Author(s): Ryan M. Yoder, Benjamin J. Clark, Joel E. Brown, Mignon V. Lamia, Stephane Valerio, ...

Successful navigation requires a constantly updated neural representation of directional heading, which is conveyed by head direction (HD) cells. The HD signal is predominantly controlled by visual landmarks, but when familiar landmarks are unavailable, self-motion cues are able to control the HD signal via path integration. Previous studies of the relationship between HD cell activity and path integration have been limited to two or more arenas located in the same room, a drawback for interpretation because the same visual cues may have been perceptible across arenas. To address this issue, we tested the relationship between HD cell activity and path integration by recording HD cells while rats navigated within a 14-unit T-maze and in a multiroom maze that consisted of unique arenas that were located in different rooms but connected by a passageway. In the 14-unit T-maze, the HD signal remained relatively stable between the start and goal boxes, with the preferred firing directions usually shifting <45° during maze traversal. In the multiroom maze in light, the preferred firing directions also remained relatively constant between rooms, but with greater variability than in the 14-unit maze. In darkness, HD cell preferred firing directions showed marginally more variability between rooms than in the lighted condition. Overall, the results indicate that self-motion cues are capable of maintaining the HD cell signal in the absence of familiar visual cues, although there are limits to its accuracy. In addition, visual information, even when unfamiliar, can increase the precision of directional perception.
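The path-integration account reduces to a one-line update rule: heading is maintained by integrating angular velocity from self-motion cues, so per-sample noise accumulates, consistent with the greater drift observed between rooms in darkness. A minimal sketch under those assumptions (sampling rate and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def integrate_heading(omega, dt=0.02, noise_deg=0.5):
    """Path-integrate heading from angular-velocity samples (deg/s), adding
    Gaussian noise to each sample to mimic imperfect self-motion cues."""
    heading = 0.0
    for w in omega:
        heading += (w + rng.normal(0.0, noise_deg)) * dt
    return heading % 360.0

# A full 360 deg turn sampled 500 times over 10 s: the integrated estimate
# ends near 0/360 but drifts; raising the per-sample noise (darkness, no
# visual correction) makes the accumulated drift larger.
turn = np.full(500, 36.0)  # 36 deg/s for 10 s = 360 deg
print(integrate_heading(turn, noise_deg=0.5))
print(integrate_heading(turn, noise_deg=5.0))
```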


2017, Vol. 118 (3), pp. 1650-1663
Author(s): Jan Churan, Johannes Paul, Steffen Klingenhoefer, Frank Bremmer

In the natural world, self-motion always stimulates several different sensory modalities. Here we investigated the interplay between a visual optic flow stimulus simulating self-motion and a tactile stimulus (air flow resulting from self-motion) while human observers were engaged in a distance reproduction task. We found that adding congruent tactile information (i.e., speed of the air flow and speed of visual motion are directly proportional) to the visual information significantly improves the precision of the actively reproduced distances. This improvement, however, was smaller than predicted for an optimal integration of visual and tactile information. In contrast, incongruent tactile information (i.e., speed of the air flow and speed of visual motion are inversely proportional) did not improve subjects’ precision, indicating that incongruent tactile information and visual information were not integrated. One possible interpretation of the results is a link to properties of neurons in the ventral intraparietal area that have been shown to have spatially and action-congruent receptive fields for visual and tactile stimuli. NEW & NOTEWORTHY: This study shows that tactile and visual information can be integrated to improve the estimates of the parameters of self-motion. This, however, happens only if the two sources of information are congruent, as they are in a natural environment. In contrast, an incongruent tactile stimulus is still used as a source of information about self-motion, but it is not integrated with visual information.
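The "optimal integration" benchmark referenced here is the standard maximum-likelihood prediction: if the visual and tactile estimates have standard deviations sigma_v and sigma_t, the optimally combined estimate has variance sigma_v^2 * sigma_t^2 / (sigma_v^2 + sigma_t^2), which never exceeds that of the better single cue. A minimal sketch with illustrative numbers (not the paper's measurements):

```python
def optimal_combined_sd(sigma_visual, sigma_tactile):
    """Predicted SD of the maximum-likelihood (inverse-variance-weighted)
    combination of two independent cues."""
    var = (sigma_visual**2 * sigma_tactile**2) / (sigma_visual**2 + sigma_tactile**2)
    return var**0.5

# Illustrative: with visual SD 1.0 and tactile SD 1.5, optimal integration
# predicts a combined SD of ~0.83. The paper found an improvement in this
# direction for congruent air flow, but smaller than the optimal prediction.
print(optimal_combined_sd(1.0, 1.5))
```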


2018, Vol. 10 (12), pp. 168781401881896
Author(s): Zhanji Zheng, Zhigang Du, Qiaojun Xiang, Guojun Chen

Speed illusion is the leading contributing factor to traffic accidents in highway tunnels. This study aimed to estimate the influence of visual information at different scales and frequencies on drivers’ visual perception and driving safety in highway tunnels. The speed perception of drivers was measured using the stimulus of subjectively equivalent speeds as an index. Thirty drivers were recruited to conduct a psychophysical experiment on speed perception using a driving simulator. Large-, medium-, and small-scale visual information in a frequency range of 0.1–32 Hz was used in the experimental scene to generate comparison scenes. The results show that high-frequency visual information (2–32 Hz) might lead drivers to overestimate vehicle speed in tunnels, while medium-frequency (0.4–1 Hz) and low-frequency (0.1–0.2 Hz) visual information contributes to speed underestimation. Medium-scale information had the largest speed overestimation effect, followed by large- and small-scale information (with significant differences at 2–8 Hz). Medium-scale visual information below 8 Hz had the lowest dispersion of speed perception. Therefore, the combined use of high-frequency, medium-scale visual information and medium-frequency, large- and small-scale visual information is suggested to reduce the speed illusion of drivers and ensure driving safety.
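The "stimulus of subjectively equivalent speeds" index is the point of subjective equality (PSE) from a psychophysical comparison: the comparison speed judged faster than the test scene 50% of the time, whose offset from the reference speed quantifies over- or underestimation. A minimal sketch of estimating a PSE by fitting a logistic psychometric function (synthetic responses, not the study's data; assumes SciPy is available):

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(speed, pse, slope):
    """Logistic psychometric function: P('comparison looks faster')."""
    return 1.0 / (1.0 + np.exp(-(speed - pse) / slope))

# Synthetic data: comparison speeds (km/h) and the proportion of "faster"
# responses; the fitted PSE relative to the reference speed indicates the
# direction and size of the speed illusion.
speeds = np.array([60, 70, 80, 90, 100, 110], dtype=float)
p_faster = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.98])
(pse, slope), _ = curve_fit(psychometric, speeds, p_faster, p0=[85.0, 5.0])
print(f"PSE ~= {pse:.1f} km/h")  # subjectively equivalent speed
```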


Author(s): Kerstan S. Mork, Patricia R. DeLucia

Head-on collisions result in a substantial number of fatalities. To detect head-on collisions, drivers must effectively judge the direction, or heading, of their own vehicle in relation to the heading of oncoming vehicles. In our previous study, we used computer simulations of self-motion through a traffic scene to measure judgments about whether a head-on collision was imminent. Results suggested that judgments about head-on collisions are affected by both the optical flow information provided by the centerline and the optical flow information provided by the oncoming car. The objective of the current study was to further examine the effect of different components of the optical flow pattern on judgments of head-on collisions. We measured judgments about head-on collisions while manipulating local optical flow from the oncoming car and global optical flow from the background scenery. Our results suggest that visual information about the oncoming car's motion was more effective than visual information about self-motion. The implication is that it may be beneficial for drivers to focus greater attention on information about the oncoming car's motion in order to improve judgments about head-on collisions. Further research is needed to evaluate this possibility.

