Development of a technological support for the treatment of patients suffering from social phobia and agoraphobia using Virtual Reality scenarios

2018 ◽  
Author(s):  
Héctor Alberto Chumpitaz Watanave ◽  
Maria Fernanda Segovia Chacón

This project, a technological support for the treatment of patients suffering from social phobia and agoraphobia using Virtual Reality scenarios, aims to develop an effective solution that reduces the duration of psychological treatment and is economically accessible to psychological treatment centers. Virtual reality exposure therapy is an alternative to the in-vivo and imaginal exposure treatments given to patients suffering from social phobia and agoraphobia in Peru. The project uses only inexpensive virtual reality devices, avoiding the high cost of headsets such as the HTC VIVE or Oculus Rift. In addition, it relies on open-source technologies and a game engine that does not require royalty payments. Finally, to evaluate the benefits of the developed software, trials will be conducted with patients who have these conditions, comparing those who received therapy with the software against those who received imaginal exposure treatment.

Author(s):  
Daniele Regazzoni ◽  
Caterina Rizzi ◽  
Andrea Vitali

The Natural User Interface (NUI), which enables simple and consistent user interaction, represents a meaningful challenge for developing virtual/augmented reality applications. This paper presents a set of guidelines for designing an optimal NUI, as well as a software framework, named FrameworkVR, which encapsulates the rules of the presented guidelines. FrameworkVR supports the development of NUIs for VR/AR applications based on the Oculus Rift, the Leap Motion device, and the VTK open-source library. An example VR application for prosthesis design, developed using FrameworkVR, is also described. Tests have been carried out to validate the approach and the designed NUI, and the results obtained so far are presented and discussed.


2019 ◽  
Vol 16 (2) ◽  
pp. 22-31
Author(s):  
Christian Zabel ◽  
Gernot Heisenberg

Driven by popular products and applications such as the Oculus Rift, Pokémon Go, and the Samsung Gear, virtual reality, augmented reality, and mixed reality are attracting increasing interest. Although the underlying technologies have been in use since the 1990s, broader adoption has only occurred relatively recently. As a result, a rapidly evolving ecosystem for VR and AR has emerged (Berg & Vance, 2017). From a (media) policy perspective, the question is which location factors favor the settlement and agglomeration of these firms. Since their value-creation activities differ markedly from those of classic media products, both in their target markets and in how the product is produced (e.g., heavy use of IT and hardware in product development), one may ask in particular whether VR, MR, and AR companies should be regarded as part of the media industry for the purposes of location policy, and thus respond to a similar degree to the factors that are especially relevant for media companies. This article is the result of a research project commissioned by Mediennetzwerk NRW, a subsidiary of the Film- und Medienstiftung NRW.


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Driver’s gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as it allows drivers’ gaze patterns to be related to their features and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator. One uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. In every processed frame of the route, the modules detect which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two simpler methods based on points that capture this relation, and two based on classifiers such as MLP and SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first a big screen and later the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as the best hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving performance analysis possible, in addition to the immersion and realism provided by the virtual reality experience.
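A minimal sketch of the simpler, point-based idea the abstract describes: store one representative head pose per gaze region and classify each frame by its nearest stored point. The seven region names and the pose values below are illustrative assumptions, not the authors' actual calibration data or implementation.

```python
# Point-based gaze region estimation sketch: classify a head pose
# (yaw, pitch, in degrees) by the nearest per-region calibration point.
import math

def nearest_region(pose, region_points):
    """Return the region whose calibration point is closest to this head pose."""
    return min(region_points, key=lambda r: math.dist(pose, region_points[r]))

# Hypothetical calibration: mean (yaw, pitch) observed while gazing at each
# of seven scene regions (names and values are assumptions for illustration).
region_points = {
    "left_mirror":  (-60.0,   0.0),
    "left_road":    (-25.0,   0.0),
    "center_road":  (  0.0,   0.0),
    "dashboard":    (  0.0, -20.0),
    "rear_mirror":  ( 15.0,  10.0),
    "right_road":   ( 25.0,   0.0),
    "right_mirror": ( 60.0,   0.0),
}

print(nearest_region((-58.2, 1.5), region_points))   # → left_mirror
print(nearest_region((2.0, -18.0), region_points))   # → dashboard
```

The classifier-based variants (MLP, SVM) mentioned in the abstract would replace the nearest-point lookup with a model trained on many (head pose, region) pairs.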


Author(s):  
Aaron Crowson ◽  
Zachary H. Pugh ◽  
Michael Wilkinson ◽  
Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has resulted in an increasing need to represent the physical world while immersed in the virtual one. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, this study investigates how an orientation-type notification aids perception of alerts that manifest outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2016 ◽  
Vol 34 (1) ◽  
pp. 51-82 ◽  
Author(s):  
Manuela Chessa ◽  
Guido Maiello ◽  
Alessia Borsari ◽  
Peter J. Bex

Proceedings ◽  
2020 ◽  
Vol 54 (1) ◽  
pp. 43
Author(s):  
Catarina Sá ◽  
Paulo Veloso Gomes ◽  
António Marques ◽  
António Correia

The application of electroencephalography electrodes in Virtual Reality (VR) glasses allows users to relate cognitive, emotional, and social functions with the exposure to certain stimuli. The development of non-invasive portable devices, coupled with VR, allows for the collection of electroencephalographic data. One of the devices that embraced this new trend is Looxid Link™, a system that adds electroencephalography to HTC VIVE™, VIVE Pro™, VIVE Pro Eye™, or Oculus Rift S™ glasses to create interactive environments using brain signals. This work analyzes the possibility of using the Looxid Link™ device to perceive, evaluate, and monitor the emotions of users exposed to VR.

