Design of a Directional Olfactory Display to Study the Integration of Vision and Olfaction

Author(s):  
Lorenzo Micaroni ◽  
Marina Carulli ◽  
Francesco Ferrise ◽  
Monica Bordegoni ◽  
Alberto Gallace

This research aims to design and develop an innovative system, based on an olfactory display, for investigating the directionality of the sense of olfaction. In particular, the paper describes the design of an experimental setup to determine to what extent the sense of olfaction is directional, and whether the sense of vision prevails over that of smell when determining the direction of an odor. The experimental setup is based on low-cost Virtual Reality (VR) technologies. Specifically, the system consists of a custom directional olfactory display, an Oculus Rift Head-Mounted Display (HMD) to deliver both visual and olfactory cues, and an input device to register subjects' answers. The VR environment is developed in Unity3D. The paper describes the design of the olfactory interface as well as its integration with the overall system. Finally, the results of the initial testing are reported.
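The abstract does not detail how the directional display chooses where to emit an odor. A minimal sketch of one plausible scheme: map the angular offset between the user's tracked head yaw and the virtual odor source to one of several nozzles arranged around the nose. The function name, the four-outlet layout, and the degree-based convention are all assumptions for illustration, not the authors' implementation.

```python
def select_outlet(head_yaw_deg, odor_azimuth_deg, n_outlets=4):
    """Pick the display outlet closest to the odor's direction
    relative to the user's current head orientation.

    head_yaw_deg: head yaw reported by the HMD tracker (degrees)
    odor_azimuth_deg: world-frame azimuth of the virtual odor source
    n_outlets: nozzles assumed evenly spaced around the nose
    """
    # Express the odor direction in the head's reference frame
    relative = (odor_azimuth_deg - head_yaw_deg) % 360.0
    # Each outlet covers an equal angular sector; round to the nearest
    sector = 360.0 / n_outlets
    return int((relative + sector / 2.0) // sector) % n_outlets
```

For example, with the head facing the source (`head_yaw_deg == odor_azimuth_deg`) the frontal outlet 0 fires; a source 90° to the user's left or right selects the corresponding side nozzle.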

Author(s):  
Lorenzo Micaroni ◽  
Marina Carulli ◽  
Francesco Ferrise ◽  
Alberto Gallace ◽  
Monica Bordegoni

The paper describes the design of an innovative virtual reality (VR) system, based on a combination of an olfactory display and a visual display, to be used for investigating the directionality of the sense of olfaction. In particular, it describes the design of an experimental setup to determine to what extent the sense of olfaction is directional, and whether the sense of vision prevails over that of smell when determining the direction of an odor. The experimental setup is based on low-cost VR technologies. Specifically, the system consists of a custom directional olfactory display (OD), a head-mounted display (HMD) to deliver both visual and olfactory cues, and an input device to register subjects' answers. The paper reports the design of the olfactory interface as well as its integration with the overall system.


2012 ◽  
Vol 11 (3) ◽  
pp. 9-17 ◽  
Author(s):  
Sébastien Kuntz ◽  
Ján Cíger

Many professionals and hobbyists would like to create their own immersive virtual reality systems at home, cheaply and in little space. We offer two examples of such "home-made" systems that use the cheapest hardware possible while maintaining a good level of immersion: the first is based on a projector (VRKit-Wall) and costs around $1,000, while the second is based on a head-mounted display (VRKit-HMD) and costs between €600 and €1,000. We also propose a standardization of these systems to enable simple application sharing. Finally, we describe a method to calibrate the stereoscopy of an NVIDIA 3D Vision system.


Author(s):  
Thiago D'Angelo ◽  
Saul Emanuel Delabrida Silva ◽  
Ricardo A. R. Oliveira ◽  
Antonio A. F. Loureiro

Virtual Reality (VR) and Augmented Reality (AR) Head-Mounted Displays (HMDs) have emerged in recent years and look set to remain a hot topic. HMDs have been developed for many different purposes, and users can enjoy them for entertainment, work tasks, and many other daily activities. Despite the recent release of many AR and VR HMDs, two major problems are keeping AR HMDs from reaching the mainstream market: extremely high costs and user experience issues. To mitigate these problems, we developed an AR HMD prototype based on a smartphone and other low-cost materials. The prototype is capable of running eye-tracking algorithms, which can be used to improve user interaction and user experience. To assess our AR HMD prototype, we chose a state-of-the-art eye center location method from the literature and evaluated its real-time performance on different development boards.


2021 ◽  
Vol 2 ◽  
Author(s):  
Lorenz S. Neuwirth ◽  
Maxime Ros

Introduction: Students interested in neuroscience surgical applications learn about stereotaxic surgery mostly through textbooks, which introduce the concepts but lack sufficient detail to give students applied learning skills related to biomedical research. The present study employed a novel pedagogical approach that used an immersive virtual reality (VR) alternative to teach students stereotaxic surgery procedures from the point of view (POV) of the neuroscientist conducting the research procedures. Methods: The study compared a 180° video virtual reality head-mounted display (180° video VR HMD) group and a 3D video computer display group to address the learning gaps created by textbooks that insufficiently teach stereotaxic surgery, by bringing students into the Revinax® Virtual Training Solutions educational instruction platform/technology. Following the VR experience, students were surveyed to rate the learning content and their comprehension of the material, how it compared to a traditional lecture, an online/hybrid lecture, and YouTube/other video content, and whether they would be interested in such a pedagogical tool. Results: The 180° video VR HMD and the 3D video computer display helped students attend to and learn the material equally well; both improved their self-study, and students would recommend that their college/university invest in this type of pedagogy. Students reported that both interventions increased their rate of learning, their retention of the material, and its translatability. Students equally preferred both interventions over traditional lectures, online/hybrid courses, textbooks, and YouTube/other video content for learning stereotaxic surgery. Conclusion: Students preferred to learn with, and achieved greater learning outcomes from, both the 180° video VR HMD and the 3D video computer display over other pedagogical instructional formats, and considered them a more humane way to show how to conduct the stereotaxic surgical procedure without unnecessarily using, practicing on, and/or demonstrating on an animal. Thus, this pedagogical approach facilitated learning in a manner consistent with the 3Rs of animal research and ethics. The 180° video VR HMD and the 3D video computer display can be low-cost, effective options for distance/remote learning as we get through the COVID-19 pandemic, or for future online/hybrid classroom instruction to develop, reskill, or upskill abilities in neuroscience techniques.


2021 ◽  
Author(s):  
Stanley Mugisha ◽  
Matteo Zoppi ◽  
Rezia Molfino ◽  
Vamsi Guda ◽  
Christine Chevallereau ◽  
...  

Abstract Among the interfaces used to create virtual reality, haptic interfaces allow users to touch a virtual world with their hands. Traditionally, the user's hand touches the end effector of a robotic arm: when there is no contact, the robotic arm is passive; when there is contact, the arm restricts the mobility of the user's hand in certain directions. Unfortunately, the passive mode is never completely seamless to the user. Intermittent-contact haptic interfaces use industrial robots that move toward the user only when contact needs to be made. Because the user is immersed via a virtual reality Head-Mounted Display (HMD), they cannot perceive the danger of a collision when they change their area of interest in the virtual environment. The objective of this article is to describe movement strategies that bring the robot to the contact zone as fast as possible while guaranteeing safety. This work predicts the position of the user from their gaze direction and the position of their dominant hand (the one touching the object). A motion generation algorithm is proposed and then applied to a UR5 robot with an HTC Vive tracker system for an industrial application involving the analysis of materials in a car interior.
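The abstract combines gaze direction and dominant-hand position to predict where contact will occur, without specifying how. One plausible sketch, not the authors' algorithm: score each candidate contact point on the object by its angular mismatch with the gaze ray plus its distance from the hand, and send the robot toward the lowest-cost point. The function, the cost weights, and the candidate-point representation are all assumptions for illustration.

```python
import math

def predict_contact(head_pos, gaze_dir, hand_pos, candidates,
                    w_gaze=1.0, w_hand=0.5):
    """Rank candidate contact points by gaze alignment and hand
    proximity; the robot would move toward the best-scoring point.
    Positions are (x, y, z) tuples; gaze_dir is a unit vector.
    """
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def norm(v):
        return math.sqrt(sum(c * c for c in v))

    def unit(v):
        n = norm(v)
        return tuple(c / n for c in v)

    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    best, best_cost = None, float("inf")
    for p in candidates:
        to_p = unit(sub(p, head_pos))
        # Angle between the gaze ray and the direction to the candidate
        angle = math.acos(max(-1.0, min(1.0, dot(gaze_dir, to_p))))
        # Weighted sum: prefer gazed-at points that are near the hand
        cost = w_gaze * angle + w_hand * norm(sub(p, hand_pos))
        if cost < best_cost:
            best, best_cost = p, cost
    return best
```

A real intermittent-contact controller would additionally re-evaluate this prediction continuously and enforce the safety constraints the article focuses on (speed limits near the user, collision-free paths).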


2018 ◽  
Author(s):  
Yoshihito Masuoka ◽  
Hiroyuki Morikawa ◽  
Takashi Kawai ◽  
Toshio Nakagohri

BACKGROUND Virtual reality (VR) technology has started to gain attention as a form of surgical support in medical settings. Likewise, the widespread use of smartphones has resulted in the development of various medical applications and of tools such as Google Cardboard, which can be used to build simple head-mounted displays (HMDs). However, because of the absence of observed and reported outcomes of the use of three-dimensional (3D) organ models in relevant environments, we have yet to determine the effects of, or issues with, the use of such VR technology. OBJECTIVE The aim of this paper was to study the issues that arise while observing a 3D model of an organ, created from an actual surgical case, through a smartphone-based simple HMD. We then evaluated and gathered feedback on the performance and usability of the simple observation environment we had created. METHODS We downloaded our data to a smartphone (Galaxy S6; Samsung, Seoul, Korea) and created a simple HMD system using Google Cardboard (Google). A total of 17 medical students performed 2 experiments: an observation conducted by a single observer and another carried out by multiple observers using a simple HMD. Afterward, they assessed the results by responding to a questionnaire survey. RESULTS We received a largely favorable response in the evaluation of the dissection model, but also low scores because of visually induced motion sickness and eye fatigue. In an introspective report on simultaneous observations made by multiple observers, positive opinions indicated clear image quality and shared understanding, but displeasure caused by visually induced motion sickness, eye fatigue, and hardware problems was also expressed. CONCLUSIONS We established a simple system that enables multiple persons to observe a 3D model. Although the observation conducted by multiple observers was successful, problems likely arose because of poor smartphone performance. Therefore, improving smartphone performance may be a key factor in establishing a low-cost and user-friendly 3D observation environment.


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1002
Author(s):  
Georgios Tsaramirsis ◽  
Michail Papoutsidakis ◽  
Morched Derbali ◽  
Fazal Qudus Khan ◽  
Fotis Michailidis

Olfaction can enhance the experience of music, films, computer games, and virtual reality applications. However, this area is less explored than others such as computer graphics and audio. Most advanced olfactory displays are designed for a specific experiment, are hard to modify and extend, are expensive, and/or can deliver only a very limited number of scents. Additionally, current-generation olfactory displays make no decisions about whether and when a scent should be released. This paper proposes a low-cost, easy-to-build, powerful smart olfactory display that can release up to 24 different aromas and allows control of the quantity of aroma released. The display is also capable of absorbing the aroma back, in an attempt to clean the air before releasing a new one. Additionally, the display includes a smart algorithm that decides when to release certain aromas. The device controller application can release scents based on a timer, text in English subtitles, or input from external software. This allows applications such as games to decide when to release a scent, making the display ideal for gaming; the device also supports native connectivity with games built using a game development asset created as part of this project. The project was evaluated by 15 subjects and proved highly accurate when scents were released with a 1.5-minute interval between them.
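One of the triggers described above is text in English subtitles. A minimal sketch of how such a trigger could work, assuming a hand-built keyword table (the keywords and channel numbers below are invented for illustration, not taken from the paper): scan each subtitle line as it is displayed and return the scent channels, 0 through 23, that it should fire.

```python
# Hypothetical keyword-to-channel table; a real deployment would be
# authored per film or game. Channel numbers (0-23) are illustrative.
SCENT_KEYWORDS = {
    "coffee": 3,
    "ocean": 7,
    "forest": 12,
}

def scents_for_subtitle(line):
    """Return the scent channels triggered by one subtitle line."""
    words = line.lower().split()
    return [ch for kw, ch in SCENT_KEYWORDS.items() if kw in words]
```

The same lookup could sit behind the timer and external-software triggers, with the paper's absorption step run between releases to clear the previous aroma.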


2016 ◽  
Vol 40 (3) ◽  
pp. 22-40 ◽  
Author(s):  
Stefania Serafin ◽  
Cumhur Erkut ◽  
Juraj Kojs ◽  
Niels C. Nilsson ◽  
Rolf Nordahl

The rapid development and availability of low-cost technologies have created a wide interest in virtual reality. In the field of computer music, the term “virtual musical instruments” has been used for a long time to describe software simulations, extensions of existing musical instruments, and ways to control them with new interfaces for musical expression. Virtual reality musical instruments (VRMIs) that include a simulated visual component delivered via a head-mounted display or other forms of immersive visualization have not yet received much attention. In this article, we present a field overview of VRMIs from the viewpoint of the performer. We propose nine design guidelines, describe evaluation methods, analyze case studies, and consider future challenges.


2018 ◽  
pp. 698-719
Author(s):  
Thiago D'Angelo ◽  
Saul Emanuel Delabrida Silva ◽  
Ricardo A. R. Oliveira ◽  
Antonio A. F. Loureiro

Virtual Reality (VR) and Augmented Reality (AR) Head-Mounted Displays (HMDs) have emerged in recent years and look set to remain a hot topic. HMDs have been developed for many different purposes, and users can enjoy them for entertainment, work tasks, and many other daily activities. Despite the recent release of many AR and VR HMDs, two major problems are keeping AR HMDs from reaching the mainstream market: extremely high costs and user experience issues. To mitigate these problems, we developed an AR HMD prototype based on a smartphone and other low-cost materials. The prototype is capable of running eye-tracking algorithms, which can be used to improve user interaction and user experience. To assess our AR HMD prototype, we chose a state-of-the-art eye center location method from the literature and evaluated its real-time performance on different development boards.


2019 ◽  
Vol 26 (3) ◽  
pp. 359-370 ◽  
Author(s):  
Maurizio Vertemati ◽  
Simone Cassin ◽  
Francesco Rizzetto ◽  
Angelo Vanzulli ◽  
Marco Elli ◽  
...  

Introduction. With the availability of low-cost head-mounted displays (HMDs), virtual reality environments (VREs) are increasingly being used in medicine for teaching and clinical purposes. Our aim was to develop an interactive, user-friendly VRE for tridimensional visualization of patient-specific organs, establishing a workflow to transfer 3-dimensional (3D) models from imaging datasets to our immersive VRE. Materials and Methods. This original VRE model was built using open-source software and a mobile HMD, the Samsung Gear VR. For its validation, we enrolled 33 volunteers: morphologists (n = 11), trainee surgeons (n = 15), and expert surgeons (n = 7). They tried our VRE and then completed an original 6-item, 5-point Likert-type questionnaire covering the following parameters: ease of use, anatomy comprehension compared with 2D radiological imaging, explanation of anatomical variations, explanation of surgical procedures, preoperative planning, and experience of gastrointestinal/neurological disorders. Results in the 3 groups were statistically compared using analysis of variance. Results. Using cross-sectional medical imaging, the developed VRE allowed a 3D patient-specific abdominal scene to be visualized within 1 hour. Overall, the 6 items were evaluated positively by all groups; only anatomy comprehension differed significantly among the 3 groups. Conclusions. Our approach, based on open-source software and mobile hardware, proved to be a valid and well-appreciated system for visualizing 3D patient-specific models, paving the way for a potential new tool for teaching and preoperative planning.

