Augmented Cross-modality: Translating the Physiological Responses, Knowledge and Impression to Audio-visual Information in Virtual Reality

2019 ◽  
Vol 2019 (2) ◽  
pp. 60402-1-60402-8 ◽  
Author(s):  
Yutaro Hirao ◽  
Takashi Kawai


2020 ◽ 
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.
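One practical data-analysis step such research typically involves is classifying raw gaze samples into fixations and saccades. The sketch below shows a minimal velocity-threshold (I-VT) classifier operating on unit gaze direction vectors, the kind of signal a VR eye tracker commonly logs; the function names, the 30°/s threshold and the toy samples are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of velocity-threshold (I-VT) fixation detection on unit gaze
# direction vectors, as might be logged by a VR eye tracker. The 30 deg/s
# threshold and all names are illustrative, not taken from the article.
import numpy as np

def angular_speed(gaze_dirs, timestamps):
    """Angular speed (deg/s) between consecutive unit gaze vectors."""
    v1, v2 = gaze_dirs[:-1], gaze_dirs[1:]
    cos_theta = np.clip(np.sum(v1 * v2, axis=1), -1.0, 1.0)
    dtheta = np.degrees(np.arccos(cos_theta))   # angle between successive samples
    dt = np.diff(timestamps)                    # sample intervals in seconds
    return dtheta / dt

def classify_fixation_intervals(gaze_dirs, timestamps, threshold_deg_s=30.0):
    """Label each inter-sample interval as fixation (True) or saccade (False)."""
    speeds = angular_speed(np.asarray(gaze_dirs, dtype=float),
                           np.asarray(timestamps, dtype=float))
    return speeds < threshold_deg_s

# Three samples at 90 Hz: a small drift followed by a large gaze jump.
dirs = [np.array(d) / np.linalg.norm(d)
        for d in ([0, 0, 1], [0.005, 0, 1.0], [0.3, 0, 0.954])]
times = [0.0, 1 / 90, 2 / 90]
print(classify_fixation_intervals(dirs, times))   # roughly [ True False]
```

In a VR study the same logic would simply be applied to the headset-reported gaze vectors after synchronising them with the scene log, which is part of the appeal of running such analyses in a fully controlled virtual environment.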


2021 ◽  
Author(s):  
Mayu Yamada ◽  
Hirono Ohashi ◽  
Koh Hosoda ◽  
Daisuke Kurabayashi ◽  
Shunsuke Shigaki

Most animals survive and thrive thanks to navigation behavior that allows them to reach their destinations. In order to navigate, it is important for animals to integrate information obtained from multisensory inputs and to use that information to modulate their behavior. In this study, using a virtual reality (VR) system for an insect, we investigated how an adult silkmoth integrates visual and wind direction information during female search behavior (olfactory behavior). In behavioral experiments with the VR system, the silkmoth had the highest navigation success rate when odor, vision, and wind information were provided correctly. However, we found that the search success rate was significantly reduced when the wind direction information provided did not match the direction actually detected. This indicates that it is important to acquire not only odor information but also wind direction information correctly. Specifically, behavior was modulated by the degree of coincidence between the direction of arrival of the odor and that of the wind, while posture control (angular velocity control) was modulated by visual information. We mathematically modeled this multisensory modulation of behavior and evaluated the model by simulation. The mathematical model not only reproduced the actual female search behavior of the silkmoth but also improved search success relative to a conventional odor-source search algorithm.
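The abstract does not give the model's equations, so the following is only a toy sketch of the general idea it describes: an agent whose turning response is scaled by how closely the odor arrival direction agrees with the wind direction, and damped by visual (optic-flow) input. The cosine-based coincidence term, the gains and all function names are assumptions made for illustration; they are not the authors' actual model.

```python
# Toy sketch (not the authors' model): the turn command of a moth-like agent is
# scaled by odor-wind directional coincidence and damped by optic flow.
import math

def coincidence(odor_dir_rad, wind_dir_rad):
    """1 when odor and wind arrive from the same direction, 0 when opposite."""
    return 0.5 * (1.0 + math.cos(odor_dir_rad - wind_dir_rad))

def angular_velocity_command(base_turn_rate, odor_dir_rad, wind_dir_rad,
                             optic_flow, visual_gain=0.5):
    """Scale the programmed turn rate by multisensory agreement and visual damping."""
    c = coincidence(odor_dir_rad, wind_dir_rad)
    visual_damping = 1.0 / (1.0 + visual_gain * abs(optic_flow))
    return base_turn_rate * c * visual_damping

# Odor and wind from roughly the same direction -> strong turning response;
# conflicting directions -> suppressed response.
print(angular_velocity_command(1.0, 0.1, 0.0, optic_flow=0.2))       # ~0.91
print(angular_velocity_command(1.0, math.pi, 0.0, optic_flow=0.2))   # ~0.00
```

Embedding a rule of this form in a standard surge-and-cast odor-tracking loop is one plausible way to reproduce the reported result that conflicting wind cues suppress, and visual feedback stabilises, the search behavior.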


2021 ◽  
Vol 2 ◽  
Author(s):  
Thirsa Huisman ◽  
Axel Ahrens ◽  
Ewen MacDonald

To reproduce realistic audio-visual scenarios in the laboratory, Ambisonics is often used to reproduce a sound field over loudspeakers, while virtual reality (VR) glasses are used to present the visual information. Both technologies have been shown to be suitable for research. However, the combination of the two, Ambisonics and VR glasses, might affect the spatial cues for auditory localization and thus the localization percept. Here, we investigated how VR glasses affect the localization of virtual sound sources on the horizontal plane produced using either 1st-, 3rd-, 5th- or 11th-order Ambisonics, with and without visual information. Results showed that with 1st-order Ambisonics the localization error was larger than with the higher orders, while the differences across the higher orders were small. The physical presence of the VR glasses without visual information increased the perceived lateralization of the auditory stimuli by about 2° on average, especially in the right hemisphere. Presenting visual information about the environment and potential sound sources reduced this HMD-induced shift; however, it could not fully compensate for it. While localization performance itself was affected by the Ambisonics order, there was no interaction between the Ambisonics order and the effect of the HMD. Thus, the presence of VR glasses can alter acoustic localization when using Ambisonics sound reproduction, but visual information can compensate for most of the effect. As such, most use cases for VR will be unaffected by these shifts in the perceived location of the auditory stimuli.
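For orientation, the snippet below shows generic first-order Ambisonics encoding of a mono plane-wave source, using ACN channel ordering and SN3D normalization. It is a textbook formulation intended only to illustrate how few spatial components a 1st-order representation carries compared with the higher orders tested; it is not the reproduction chain used in the study, and the signal and angles are arbitrary examples.

```python
# Generic first-order Ambisonics (B-format) encoding of a mono plane-wave source,
# ACN channel ordering (W, Y, Z, X) with SN3D normalisation. Illustrative only;
# not the reproduction setup used in the study.
import numpy as np

def encode_first_order(signal, azimuth_deg, elevation_deg=0.0):
    """Return the four first-order channels (W, Y, Z, X) for a mono signal."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal                                  # omnidirectional component
    y = signal * np.sin(az) * np.cos(el)        # left-right component
    z = signal * np.sin(el)                     # up-down component
    x = signal * np.cos(az) * np.cos(el)        # front-back component
    return np.stack([w, y, z, x])

# A 1 kHz tone encoded 30 degrees to the left on the horizontal plane.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000.0 * t)
bformat = encode_first_order(tone, azimuth_deg=30.0)
print(bformat.shape)   # (4, 48000)
```

Higher orders simply add further spherical-harmonic channels (16 at 3rd order, 36 at 5th, 144 at 11th), which is why localization error drops sharply once the order rises above one.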


Author(s):  
Roberta Etzi ◽  
Siyuan Huang ◽  
Giulia Wally Scurati ◽  
Shilei Lyu ◽  
Francesco Ferrise ◽  
...  

The use of collaborative robots in the manufacturing industry has spread widely in the last decade. To be efficient, human-robot collaboration needs to be properly designed, taking into account also the operator’s psychophysiological reactions. Virtual Reality can be used as a tool to simulate human-robot collaboration in a safe and inexpensive way. Here, we present a virtual collaborative platform in which the human operator and a simulated robot coordinate their actions to accomplish a simple assembly task. In this study, the robot moved either slowly or more quickly in order to assess the effect of its velocity on the human’s responses. Ten participants tested this application using an Oculus Rift head-mounted display; ARTracking cameras and a Kinect system were used to track the operator’s right arm movements and hand gestures, respectively. Performance, user experience, and physiological responses were recorded. The results showed that while the participants’ performance and evaluations varied as a function of the robot’s velocity, no differences were found in the physiological responses. Taken together, these data highlight the relevance of the kinematic aspects of a robot’s motion within human-robot collaboration and provide valuable insights for further developing our virtual human-machine interactive platform.


2002 ◽  
Vol 14 (3) ◽  
pp. 238-244 ◽  
Author(s):  
Akiko Kawaji ◽  
Fumihito Arai ◽  
Toshio Fukuda ◽  

Biomicromanipulation is important to biology and bioengineering, but it remains difficult because the objects handled are very small, are kept in liquid, and are observed through an optical microscope. We are developing a micromanipulation system for anatomical operations on microobjects such as embryos, cells, and microbes. Microscope images are 2-D, so it is difficult to manipulate targets in 3-D space. To improve manipulation work, we proposed a 3-D biomicromanipulation system combined with a virtual reality (VR) space. Here we propose a 3-D modeling method for the object that presents 3-D visual information to the operator and improves the operation environment. In this system, it is still difficult to change the orientation of the microscopic object with the mechanical manipulator. We therefore propose the bioaligner, a microdevice for positional control of the object, and have developed a 2-D bioaligner by microfabrication. Here we describe its fabrication process and a basic rotation experiment with chlorella cells.


2021 ◽  
Vol 7 (3) ◽  
pp. 53-67
Author(s):  
Zoya I. Konnova ◽  
Galina V. Semenova ◽  

Modern society requires specialists who are ready to act in a high-tech professional environment. The use of Augmented Reality (AR) and Virtual Reality (VR) technologies is a key direction for the development of the professional sphere in the near future. The relevance of this study stems from the need to introduce these technologies into foreign language education at universities in order to optimize the formation of students' professional foreign language competence. The purpose of this article is to study and analyze the existing experience of using educational augmented and virtual reality technologies in teaching a foreign language in Russia and abroad. Methodology and methods: since there is little research devoted directly to the experience of implementing AR and VR technologies in teaching a foreign language to university students, a comprehensive research methodology was chosen, combining theoretical analysis of the scientific, pedagogical and methodological literature on the topic with description and analysis of research results. The article analyzes the use of augmented and virtual reality technologies in teaching a foreign language, together with their purpose and functions. It shows how these technologies can be used in the educational environment to visualize learning material, supplement it with visual information accessed by scanning QR codes with smartphones, tablets and other devices, and increase motivation and interest in learning. The advantages and disadvantages of augmented and virtual reality technologies are highlighted. It is concluded that educational AR and VR technologies have great potential for teaching a foreign language at universities, and that many of their shortcomings will be eliminated in the coming years.

