Sensory conflict alters visual perception of action capabilities during crossing of a closing gap in virtual reality

2020 ◽  
Vol 73 (12) ◽  
pp. 2309-2316
Author(s):  
Natalie Snyder ◽  
Michael Cinelli

The somatosensory, vestibular, and visual systems contribute to multisensory integration, which facilitates locomotion around obstacles in the environment. Unlike real walking, the joystick-controlled virtual reality (VR) locomotion interface does not preserve congruent sensory input, yet it is commonly used in human behaviour research. Our purpose was to determine whether collision avoidance behaviours were affected during an aperture crossing task when somatosensory and vestibular input were incongruent and only vision was accurate. Participants were 36 young adults who completed a closing-gap aperture crossing task in VR using both real walking and joystick-controlled locomotion. Participants successfully completed the task with both interfaces. The switch point between passable and impassable apertures was larger for joystick-controlled locomotion than for real walking, whereas time-to-contact (TTC) was lower for real walking than for joystick-controlled locomotion. The larger switch point under joystick-controlled locomotion may be attributed to incongruency between visual and non-visual information, causing underestimation of the distance travelled towards the aperture. Future VR applications incorporating dynamically changing gaps can therefore be completed successfully with joystick-controlled locomotion, provided this potential behavioural difference is taken into account. The difference in TTC may be explained by the need to terminate gait in real walking but not in joystick-controlled locomotion. Future VR studies would benefit from programming acceleration and deceleration into joystick-controlled locomotion interfaces.
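For readers unfamiliar with the switch-point measure, a common way to estimate it is to fit a psychometric function to binary pass/avoid decisions across aperture sizes and take its 50% point. The sketch below is illustrative only, not the authors' analysis; the decision data, the aperture-to-shoulder-width ratios, and the constant-closing-rate TTC definition are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """P(attempt to pass) for an aperture of relative size x; x0 is the switch point."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical decisions: 1 = walked through, 0 = avoided
ratios = np.array([0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6])  # aperture / shoulder width
passed = np.array([0, 0, 1, 0, 1, 1, 1, 1])

(x0, k), _ = curve_fit(logistic, ratios, passed, p0=[1.2, 10.0])
print(f"estimated switch point: {x0:.2f} x shoulder width")

# One simple TTC definition for a gap closing at a constant rate:
# time left before the aperture shrinks to the critical (just-passable) size.
def time_to_contact(aperture_m, critical_m, closing_rate_m_per_s):
    return (aperture_m - critical_m) / closing_rate_m_per_s
```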

Perception ◽  
2021 ◽  
Vol 50 (9) ◽  
pp. 783-796
Author(s):  
Lisa P. Y. Lin ◽  
Christopher J. Plack ◽  
Sally A. Linkenauger

The ability to accurately perceive the extent over which one can act is requisite for the successful execution of visually guided actions. Yet, like other outcomes of perceptual-motor experience, our perceived action boundaries are not static, but in constant flux. Hence, the perceptual systems must account for variability in one's action capabilities in order for the perceiver to determine when they are capable of successfully performing an action. Recent work has found that, after reaching with a virtual arm whose length varied between short and long on each reach, individuals set their perceived action boundaries according to their most liberal reaching experience. However, these studies were conducted in virtual reality, and the perceptual systems may handle variability differently in a real-world setting. To test this possibility, we created a modified orthopedic elbow brace that mimics injury to the upper limb by restricting elbow extension via remote control. Participants made reachability judgments after training in which the maximum extent of their reach was either unconstricted, constricted, or variable over several calibration trials. Findings from the current study did not conform to those from virtual reality: participants were more conservative in their reachability estimates after experiencing variability in a real-world setting.
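The contrast between calibration strategies can be made concrete with a toy computation. The reach values below are invented for illustration; they simply show how a "most liberal" (maximum-experience) boundary differs from the more conservative boundary implied by the present findings.

```python
import numpy as np

# Hypothetical maximum reaches (metres) experienced across variable calibration trials
calibration_reaches = np.array([0.55, 0.72, 0.58, 0.70, 0.61])

liberal_boundary = calibration_reaches.max()       # VR finding: longest reach experienced
conservative_boundary = calibration_reaches.min()  # closer to the real-world pattern here
mean_boundary = calibration_reaches.mean()         # an intermediate possibility

print(f"liberal:      {liberal_boundary:.2f} m")
print(f"conservative: {conservative_boundary:.2f} m")
print(f"mean:         {mean_boundary:.2f} m")
```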


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental to developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic, fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach to conducting eye movement research in sport. The possibility of studying gaze behaviour in representative, realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, and by outlining practical considerations related to hardware, software, and data analysis, we hope to guide researchers and practitioners in the use of this approach.
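On the data-analysis side, one of the most common first steps with headset gaze data is event detection. As a minimal sketch, the dispersion-threshold algorithm (I-DT) below classifies fixations from yaw/pitch gaze angles; the thresholds and the assumption of a fixed sample rate are illustrative, not recommendations tied to any particular VR eye tracker.

```python
import numpy as np

def detect_fixations(gaze_deg, sample_rate_hz, max_dispersion_deg=1.0, min_duration_s=0.1):
    """Return (start, end) sample indices of fixations in an (N, 2) array of
    gaze angles (yaw, pitch) in degrees, using the I-DT dispersion criterion."""
    min_samples = int(min_duration_s * sample_rate_hz)
    fixations, i = [], 0

    def dispersion(window):
        # (max - min) summed over both angular axes
        return np.ptp(window[:, 0]) + np.ptp(window[:, 1])

    while i + min_samples <= len(gaze_deg):
        j = i + min_samples
        if dispersion(gaze_deg[i:j]) <= max_dispersion_deg:
            # grow the window while the dispersion criterion still holds
            while j < len(gaze_deg) and dispersion(gaze_deg[i:j + 1]) <= max_dispersion_deg:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```

In VR the same idea is often applied to the combined head-plus-eye gaze direction rather than eye-in-head angles alone, which is one of the practical software choices such research must settle.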


2014 ◽  
Vol 15 (3) ◽  
pp. 271.e1-271.e7 ◽  
Author(s):  
Thibault Deschamps ◽  
François Hug ◽  
Paul W. Hodges ◽  
Kylie Tucker

2015 ◽  
Vol 80 (2) ◽  
pp. 224-234 ◽  
Author(s):  
Yannick Daviaux ◽  
Sylvain Cremoux ◽  
Jessica Tallet ◽  
David Amarantini ◽  
Christophe Cornu ◽  
...  

2021 ◽  
Author(s):  
Mayu Yamada ◽  
Hirono Ohashi ◽  
Koh Hosoda ◽  
Daisuke Kurabayashi ◽  
Shunsuke Shigaki

Most animals survive and thrive because navigation behavior brings them to their destinations. To navigate, animals must integrate information obtained from multisensory inputs and use it to modulate their behavior. In this study, using a virtual reality (VR) system for an insect, we investigated how an adult silkmoth integrates visual and wind direction information during female search behavior (olfactory behavior). In behavioral experiments using the VR system, the silkmoth had the highest navigation success rate when odor, vision, and wind information were provided correctly. However, the search success rate was significantly reduced when the wind direction information provided was inconsistent with the direction actually detected. This indicates that it is important to acquire not only odor information but also wind direction information correctly. Specifically, behavior was modulated by the degree of coincidence between the arrival direction of the odor and the arrival direction of the wind, while posture control (angular velocity control) was modulated by visual information. We mathematically modeled this multisensory modulation of behavior and evaluated the model by simulation. The mathematical model not only reproduced the actual female search behavior of the silkmoth, but also improved search success relative to a conventional odor source search algorithm.
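The silkmoth's stereotyped search is often described as a surge/zigzag/loop state machine triggered by odor hits. The sketch below is a toy version of that idea, not the paper's model: the gains, timings, and the cosine "agreement" term weighting the surge by odor-wind coincidence are all assumed for illustration.

```python
import math

def step_heading(heading, odor_hit, odor_arrival_dir, wind_arrival_dir, t_since_hit):
    """One update of a toy surge/zigzag/loop controller (angles in radians).

    odor_arrival_dir / wind_arrival_dir: directions the odor and wind arrive from.
    """
    if odor_hit:
        # How well do the odor and wind arrival directions agree?
        # (+1 aligned, clipped to 0 when orthogonal or opposed)
        agreement = max(math.cos(odor_arrival_dir - wind_arrival_dir), 0.0)
        # Surge: steer toward the wind arrival direction (upwind), scaled by agreement
        error = math.atan2(math.sin(wind_arrival_dir - heading),
                           math.cos(wind_arrival_dir - heading))
        return heading + 0.5 * agreement * error
    if t_since_hit < 2.0:
        # Zigzag (casting) shortly after losing the plume
        return heading + 0.8 * math.sin(4.0 * t_since_hit)
    # Slow looping once the plume has been lost for a while
    return heading + 0.3
```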


2021 ◽  
Vol 2 ◽  
Author(s):  
Thirsa Huisman ◽  
Axel Ahrens ◽  
Ewen MacDonald

To reproduce realistic audio-visual scenarios in the laboratory, Ambisonics is often used to reproduce a sound field over loudspeakers, while virtual reality (VR) glasses (i.e., a head-mounted display, HMD) present the visual information. Both technologies have been shown to be suitable for research. However, combining them might affect the spatial cues for auditory localization and thus the localization percept. Here, we investigated how VR glasses affect the localization of virtual sound sources on the horizontal plane produced using 1st-, 3rd-, 5th-, or 11th-order Ambisonics, with and without visual information. Results showed that the localization error was larger with 1st-order Ambisonics than with the higher orders, while the differences across the higher orders were small. The physical presence of the VR glasses without visual information increased the perceived lateralization of the auditory stimuli by about 2° on average, especially in the right hemisphere. Presenting visual information about the environment and potential sound sources reduced this HMD-induced shift, but could not fully compensate for it. While localization performance itself was affected by the Ambisonics order, there was no interaction between the Ambisonics order and the effect of the HMD. Thus, the presence of VR glasses can alter acoustic localization when using Ambisonics sound reproduction, but visual information can compensate for most of the effect. As such, most use cases for VR will be unaffected by these shifts in the perceived location of the auditory stimuli.
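For orientation, horizontal-only first-order Ambisonics reduces to a compact encode/decode pair. The sketch below uses the traditional B-format convention (W attenuated by 1/sqrt(2)) and a basic decoder for an equally spaced loudspeaker ring; conventions and normalisations differ between toolkits, so treat this as an assumed minimal example rather than the setup used in the study.

```python
import numpy as np

def encode_fo(signal, azimuth_rad):
    """Encode a mono signal at a horizontal azimuth into B-format (W, X, Y)."""
    w = signal / np.sqrt(2.0)
    x = signal * np.cos(azimuth_rad)
    y = signal * np.sin(azimuth_rad)
    return w, x, y

def decode_ring(w, x, y, n_speakers):
    """Basic decode to n_speakers equally spaced on a horizontal ring."""
    angles = 2.0 * np.pi * np.arange(n_speakers) / n_speakers
    return [(np.sqrt(2.0) * w + 2.0 * (x * np.cos(a) + y * np.sin(a))) / n_speakers
            for a in angles]

# Example: a 1 kHz tone at 30 degrees azimuth, decoded to 8 loudspeakers
fs, dur = 48_000, 0.1
t = np.arange(int(fs * dur)) / fs
tone = np.sin(2 * np.pi * 1000 * t)
feeds = decode_ring(*encode_fo(tone, np.radians(30.0)), n_speakers=8)
```

Higher orders add further spherical-harmonic components, which sharpens each virtual source and is consistent with the larger localization error observed in the 1st-order condition.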


2020 ◽  
pp. 095679762095485
Author(s):  
Mathieu Landry ◽  
Jason Da Silva Castanheira ◽  
Jérôme Sackur ◽  
Amir Raz

Suggestions can cause some individuals to miss or disregard existing visual stimuli, but can they infuse sensory input with nonexistent information? Although several prominent theories of hypnotic suggestion propose that mental imagery can change our perceptual experience, data to support this stance remain sparse. The present study addressed this lacuna, showing how suggesting the presence of physically absent, yet critical, visual information transforms an otherwise difficult task into an easy one. Here, we show how adult participants who are highly susceptible to hypnotic suggestion successfully hallucinated visual occluders on top of moving objects. Our findings support the idea that, at least in some people, suggestions can add perceptual information to sensory input. This observation adds meaningful weight to theoretical, clinical, and applied aspects of the brain and psychological sciences.

