An Olfactory Display to Study the Integration of Vision and Olfaction in a Virtual Reality Environment

Author(s):  
Lorenzo Micaroni ◽  
Marina Carulli ◽  
Francesco Ferrise ◽  
Alberto Gallace ◽  
Monica Bordegoni

The paper describes the design of an innovative virtual reality (VR) system, based on a combination of an olfactory display and a visual display, to be used for investigating the directionality of the sense of olfaction. In particular, it describes the design of an experimental setup for determining to what extent the sense of olfaction is directional, and whether vision prevails over smell when determining the direction of an odor. The experimental setup is based on low-cost VR technologies. Specifically, the system comprises a custom directional olfactory display (OD), a head-mounted display (HMD) to deliver both visual and olfactory cues, and an input device to record subjects' answers. The paper reports the design of the olfactory interface as well as its integration with the overall system.

Author(s):  
Lorenzo Micaroni ◽  
Marina Carulli ◽  
Francesco Ferrise ◽  
Monica Bordegoni ◽  
Alberto Gallace

This research aims to design and develop an innovative system, based on an olfactory display, to be used for investigating the directionality of the sense of olfaction. In particular, it describes the design of an experimental setup for determining to what extent the sense of olfaction is directional, and whether vision prevails over smell when determining the direction of an odor. The experimental setup is based on low-cost Virtual Reality (VR) technologies. Specifically, the system comprises a custom directional olfactory display, an Oculus Rift Head-Mounted Display (HMD) to deliver both visual and olfactory cues, and an input device to record subjects' answers. The VR environment is developed in Unity3D. The paper describes the design of the olfactory interface as well as its integration with the overall system. Finally, the results of the initial testing are reported.


Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 1967
Author(s):  
Chih-Wei Shiu ◽  
Jeanne Chen ◽  
Yu-Chi Chen

Virtual reality is an important technology in the digital media industry, providing a whole new experience for many people. However, its manipulation methods are more difficult than the traditional keyboard and mouse. In this research, we propose a new low-cost online handwriting symbol recognition system that accurately identifies symbols from user actions. The goal was low-cost processing without requiring a server. Experimental results showed an average recognition success rate of 99.8%, with a significantly low average execution time of 0.03395 s. The proposed system is therefore both highly reliable and low-cost, which makes it suitable for applications in real-time environments.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users' perceptions of the VR environment are closely matched; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display to provide a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor to enable the same level of realistic, unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as the main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system enables one HMD user and multiple non-HMD users to participate together in a virtual world; our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.


2012 ◽  
Vol 11 (3) ◽  
pp. 9-17 ◽  
Author(s):  
Sébastien Kuntz ◽  
Ján Cíger

Many professionals and hobbyists would like to build their own immersive virtual reality systems that are inexpensive and take up little space. We offer two examples of such "home-made" systems that use the cheapest hardware possible while maintaining a good level of immersion: the first is based on a projector (VRKit-Wall) and costs around $1,000, while the second is based on a head-mounted display (VRKit-HMD) and costs between €600 and €1,000. We also propose a standardization of these systems in order to enable simple application sharing. Finally, we describe a method to calibrate the stereoscopy of an NVIDIA 3D Vision system.


Author(s):  
Thiago D'Angelo ◽  
Saul Emanuel Delabrida Silva ◽  
Ricardo A. R. Oliveira ◽  
Antonio A. F. Loureiro

Virtual Reality and Augmented Reality head-mounted displays (HMDs) have been emerging in recent years and promise to remain a major topic for years to come. HMDs have been developed for many different purposes, and users can enjoy these technologies for entertainment, work tasks, and many other daily activities. Despite the recent release of many AR and VR HMDs, two major problems are hindering AR HMDs from reaching the mainstream market: extremely high costs and user experience issues. To mitigate these problems, we have developed an AR HMD prototype based on a smartphone and other low-cost materials. The prototype is capable of running eye-tracking algorithms, which can be used to improve user interaction and user experience. To assess our AR HMD prototype, we chose a state-of-the-art method for eye center location from the literature and evaluated its real-time performance on different development boards.


1995 ◽  
Vol 4 (1) ◽  
pp. 1-23 ◽  
Author(s):  
Warren Robinett ◽  
Richard Holloway

The visual display transformation for virtual reality (VR) systems is typically much more complex than the standard viewing transformation discussed in the literature for conventional computer graphics. The process can be represented as a series of transformations, some of which contain parameters that must match the physical configuration of the system hardware and the user's body. Because of the number and complexity of the transformations, a systematic approach and a thorough understanding of the mathematical models involved are essential. This paper presents a complete model for the visual display transformation for a VR system; that is, the series of transformations used to map points from object coordinates to screen coordinates. Virtual objects are typically defined in an object-centered coordinate system (CS), but must be displayed using the screen-centered CSs of the two screens of a head-mounted display (HMD). This particular algorithm for the VR display computation allows multiple users to independently change position, orientation, and scale within the virtual world, allows users to pick up and move virtual objects, uses the measurements from a head tracker to immerse the user in the virtual world, provides an adjustable eye separation for generating two stereoscopic images, uses the off-center perspective projection required by many HMDs, and compensates for the optical distortion introduced by the lenses in an HMD. The implementation of this framework as the core of the UNC VR software is described, and the values of the UNC display parameters are given. We also introduce the vector-quaternion-scalar (VQS) representation for transformations between 3D coordinate systems, which is specifically tailored to the needs of a VR system. The transformations and CSs presented comprise a complete framework for generating the computer-graphic imagery required in a typical VR system. 
The model presented here is deliberately abstract in order to be general purpose; thus, issues of system design and visual perception are not addressed. While the mathematical techniques involved are already well known, there are enough parameters and pitfalls that a detailed description of the entire process should be a useful tool for someone interested in implementing a VR system.
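The VQS representation mentioned above can be made concrete with a small sketch. The following Python code is our own minimal illustration, not the UNC implementation: it represents a transform as a translation vector v, a unit rotation quaternion q, and a uniform scale s, applies it to a point as p' = q(sp)q⁻¹ + v, and composes two transforms so that applying the composite equals applying them in sequence. All function and variable names here are illustrative.

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    """Conjugate (= inverse, for a unit quaternion)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def vqs_apply(t, p):
    """Apply VQS transform t = (v, q, s) to point p: scale, rotate, translate."""
    v, q, s = t
    sp = (0.0, s*p[0], s*p[1], s*p[2])       # scaled point as a pure quaternion
    r = qmul(qmul(q, sp), qconj(q))          # q * (s p) * q^-1
    return (r[1] + v[0], r[2] + v[1], r[3] + v[2])

def vqs_compose(a, b):
    """Composite transform equivalent to applying b first, then a."""
    va, qa, sa = a
    vb, qb, sb = b
    return (vqs_apply(a, vb), qmul(qa, qb), sa*sb)

# Demo: scale by 2, rotate 90 degrees about z, then translate by (1, 1, 1).
half = math.sqrt(0.5)                        # cos(45 deg) = sin(45 deg)
T = ((1.0, 1.0, 1.0), (half, 0.0, 0.0, half), 2.0)
p_out = vqs_apply(T, (1.0, 0.0, 0.0))        # (1, 0, 0) -> (1, 3, 1)
```

Keeping rotation, translation, and scale as separate components, as the paper's VQS representation does, avoids re-extracting them from a 4×4 matrix when, for example, the user rescales the virtual world.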


2021 ◽  
Vol 2 ◽  
Author(s):  
Lorenz S. Neuwirth ◽  
Maxime Ros

Introduction: Students interested in neuroscience surgical applications learn about stereotaxic surgery mostly through textbooks, which introduce the concepts but lack sufficient detail to provide students with applied learning skills related to biomedical research. The present study employed a novel pedagogical approach that used an immersive virtual reality (VR) alternative to teach students stereotaxic surgery procedures from the point of view (POV) of the neuroscientist conducting them. Methods: The study compared a 180° video virtual reality head-mounted display (180° video VR HMD) group and a 3D video computer display group to address the learning gaps created by textbooks that insufficiently teach stereotaxic surgery, by bringing students into the Revinax® Virtual Training Solutions educational instruction platform/technology. Following the VR experience, students were surveyed to determine their ratings of the learning content and comprehension of the material, how it compared to a traditional lecture, an online/hybrid lecture, and YouTube/other video content, and whether they would be interested in such a pedagogical tool. Results: The 180° video VR HMD and the 3D video computer display helped students attend to and learn the material equally well; both improved their self-study, and students would recommend that their college/university invest in this type of pedagogy. Students reported that both interventions increased their rate of learning, their retention of the material, and its translatability. Students equally preferred both interventions over traditional lectures, online/hybrid courses, textbooks, and YouTube/other video content for learning stereotaxic surgery. Conclusion: Students preferred to learn with, and achieved greater learning outcomes from, both the 180° video VR HMD and the 3D video computer display over other pedagogical instructional formats, and thought these would be a more humane alternative for showing how to conduct the stereotaxic surgical procedure without having to unnecessarily use, practice on, and/or demonstrate on an animal. Thus, this pedagogical approach facilitated learning in a manner consistent with the 3Rs of animal research ethics. The 180° video VR HMD and the 3D video computer display can be low-cost and effective pedagogical options for distance/remote learning as we get through the COVID-19 pandemic, or for future alternative online/hybrid classroom instruction to develop skills, reskill, or upskill in neuroscience techniques.


2021 ◽  
Author(s):  
Stanley Mugisha ◽  
Matteo Zoppi ◽  
Rezia Molfino ◽  
Vamsi Guda ◽  
Christine Chevallereau ◽  
...  

Abstract: Among the interfaces used for virtual reality, haptic interfaces allow users to touch a virtual world with their hands. Traditionally, the user's hand touches the end effector of a robotic arm. When there is no contact, the robotic arm is passive; when there is contact, the arm constrains the motion of the user's hand in certain directions. Unfortunately, the passive mode is never completely seamless to the user. Haptic interfaces with intermittent contacts use industrial robots that move towards the user when contact needs to be made. Because the user is immersed via a virtual reality head-mounted display (HMD), they cannot perceive the danger of a collision when they change their area of interest in the virtual environment. The objective of this article is to describe movement strategies that allow the robot to reach the contact zone as quickly as possible while guaranteeing safety. This work uses the concept of predicting the position of the user through their gaze direction and the position of their dominant hand (the one touching the object). A motion generation algorithm is proposed and then applied to a UR5 robot with an HTC Vive tracker system for an industrial application involving the analysis of materials in the interior of a car.
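The abstract does not spell out the prediction algorithm itself. As a rough, hypothetical sketch of the gaze-based part of the idea, one could estimate where contact will occur by intersecting the gaze ray with the plane of the virtual surface being inspected; the paper's actual method, which also incorporates the dominant hand's position, is more involved. All names below are ours.

```python
def predict_contact_point(eye, gaze_dir, plane_point, plane_normal):
    """Estimate the future contact point as the intersection of the gaze ray
    (origin eye, direction gaze_dir) with the plane of the virtual surface.
    This is an illustrative simplification, not the paper's algorithm."""
    denom = sum(g*n for g, n in zip(gaze_dir, plane_normal))
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the surface: no intersection
    d = sum((p - e)*n for p, e, n in zip(plane_point, eye, plane_normal)) / denom
    if d < 0:
        return None  # surface is behind the user
    return tuple(e + d*g for e, g in zip(eye, gaze_dir))

# Hypothetical scene: user at head height 1.6 m looks straight ahead at a
# vertical virtual panel 2 m away; the robot would be sent toward this point.
target = predict_contact_point(
    eye=(0.0, 1.6, 0.0),
    gaze_dir=(0.0, 0.0, 1.0),
    plane_normal=(0.0, 0.0, -1.0),   # panel faces the user
    plane_point=(0.0, 1.0, 2.0))     # any point on the panel
```

In such a scheme the robot could start moving toward the predicted point before the hand arrives, which is what makes intermittent-contact interfaces feel responsive despite the robot's travel time.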


2018 ◽  
Author(s):  
Yoshihito Masuoka ◽  
Hiroyuki Morikawa ◽  
Takashi Kawai ◽  
Toshio Nakagohri

BACKGROUND Virtual reality (VR) technology has started to gain attention as a form of surgical support in medical settings. Likewise, the widespread use of smartphones has resulted in the development of various medical applications; for example, Google Cardboard, which can be used to build simple head-mounted displays (HMDs). However, because of the absence of observed and reported outcomes of the use of three-dimensional (3D) organ models in relevant environments, we have yet to determine the effects of or issues with the use of such VR technology. OBJECTIVE The aim of this paper was to study the issues that arise while observing a 3D model of an organ that is created based on an actual surgical case through the use of a smartphone-based simple HMD. Upon completion, we evaluated and gathered feedback on the performance and usability of the simple observation environment we had created. METHODS We downloaded our data to a smartphone (Galaxy S6; Samsung, Seoul, Korea) and created a simple HMD system using Google Cardboard (Google). A total of 17 medical students performed 2 experiments: an observation conducted by a single observer and another one carried out by multiple observers using a simple HMD. Afterward, they assessed the results by responding to a questionnaire survey. RESULTS We received a largely favorable response in the evaluation of the dissection model, but also a low score because of visually induced motion sickness and eye fatigue. In an introspective report on simultaneous observations made by multiple observers, positive opinions indicated clear image quality and shared understanding, but displeasure caused by visually induced motion sickness, eye fatigue, and hardware problems was also expressed. CONCLUSIONS We established a simple system that enables multiple persons to observe a 3D model. Although the observation conducted by multiple observers was successful, problems likely arose because of poor smartphone performance. 
Therefore, smartphone performance improvement may be a key factor in establishing a low-cost and user-friendly 3D observation environment.

