Virtual and Augmented Reality Direct Ophthalmoscopy Tool: A Comparison between Interactions Methods

2021 ◽  
Vol 5 (11) ◽  
pp. 66
Author(s):  
Michael Chan ◽  
Alvaro Uribe-Quevedo ◽  
Bill Kapralos ◽  
Michael Jenkin ◽  
Norman Jaimes ◽  
...  

Direct ophthalmoscopy (DO) is a medical procedure whereby a health professional, using a direct ophthalmoscope, examines the eye fundus. DO skills are in decline due to the use of interactive diagnostic equipment and insufficient practice with the direct ophthalmoscope. To address the loss of DO skills, physical and computer-based simulators have been developed to offer additional training. Among the computer-based simulations, virtual and augmented reality (VR and AR, respectively) enable immersive and interactive scenarios with eye fundus conditions that are difficult to replicate in the classroom. VR and AR require 3D user interfaces (3DUIs) to perform the virtual eye examination. Using a combination of between-subjects and within-subjects paradigms with two groups of five participants, this paper builds upon a previous preliminary usability study that compared the HTC Vive controller, the Valve Index controller, and the Microsoft HoloLens 1 hand-gesticulation interaction methods when performing a virtual direct ophthalmoscopy eye examination. The work described in this paper extends our prior work by considering interactions with the Oculus Quest controller and the Oculus Quest hand-tracking system to perform a virtual direct ophthalmoscopy eye examination, allowing us to compare these methods with our prior interaction techniques. Ultimately, this helps us develop a greater understanding of usability effects for virtual DO examinations and virtual reality in general. Although the number of participants was limited given the COVID-19 restrictions (n = 5 for Stage 1, covering the HTC Vive controller, the Valve Index controller, and the Microsoft HoloLens hand gesticulations, and n = 13 for Stage 2, covering the Oculus Quest controller and the Oculus Quest hand tracking), our initial results comparing VR and AR 3D user interactions for direct ophthalmoscopy are consistent with our previous preliminary study: the physical controllers resulted in higher usability scores, while the Oculus Quest's more accurate hand motion capture resulted in higher usability than the Microsoft HoloLens hand gesticulation.
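The comparison above reduces to scoring per-method usability questionnaires across participants. As a hedged illustration only, the sketch below assumes the scores come from the standard 10-item System Usability Scale (SUS); the abstract does not name the instrument, so the scoring function and the example responses are assumptions, not the authors' exact protocol.

```python
# Hypothetical sketch: scoring a 10-item System Usability Scale (SUS)
# questionnaire per interaction method. The SUS instrument is an
# assumption here; the abstract only says "usability scores".

def sus_score(responses):
    """responses: 10 Likert ratings (1-5), item order as in Brooke's SUS.
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        # Positive items contribute (r - 1), negative items (5 - r).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale raw 0-40 score to 0-100

# Example: mean SUS per interaction method (response sheets invented).
methods = {
    "Quest controller": [[5, 2, 4, 1, 5, 2, 4, 2, 5, 1]],
    "Quest hand tracking": [[4, 3, 4, 2, 4, 3, 3, 2, 4, 2]],
}
for name, sheets in methods.items():
    mean = sum(sus_score(s) for s in sheets) / len(sheets)
    print(f"{name}: mean SUS = {mean:.1f}")
```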

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

An increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases for these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research in, for example, the cognitive and educational sciences. While some eye-tracking research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
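For readers unfamiliar with how such accuracy and precision figures are derived, the sketch below shows one standard vector-based computation from gaze direction samples. It is a minimal sketch, not the toolkit's actual R pipeline: the precision definition used here (RMS angular deviation from the mean gaze direction) is one common convention among several, and all names are illustrative.

```python
# Hedged sketch: angular accuracy and precision of gaze estimates,
# computed from gaze direction samples and a known target direction.
# The RMS-from-mean precision definition is an assumption; studies
# also report sample-to-sample RMS or standard deviation.
import numpy as np

def angle_deg(u, v):
    """Angle in degrees between two direction vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def accuracy_precision(gaze_dirs, target_dir):
    """gaze_dirs: (N, 3) gaze direction samples for one fixation target.
    Accuracy: mean angular offset from the target direction.
    Precision: RMS angular deviation from the mean gaze direction."""
    offsets = np.array([angle_deg(g, target_dir) for g in gaze_dirs])
    mean_dir = gaze_dirs.mean(axis=0)
    devs = np.array([angle_deg(g, mean_dir) for g in gaze_dirs])
    return offsets.mean(), np.sqrt((devs ** 2).mean())

# Example with synthetic samples scattered around a fixation target.
rng = np.random.default_rng(0)
target = np.array([0.0, 0.0, 1.0])
samples = target + rng.normal(scale=0.005, size=(200, 3))
acc, prec = accuracy_precision(samples, target)
print(f"accuracy = {acc:.2f} deg, precision = {prec:.2f} deg")
```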


2020 ◽  
Vol 8 (6) ◽  
pp. 4667-4673

Virtual reality, augmented reality, and other immersive environments have gained popularity with the technological advances of the past decade. As they became widely used, human-computer interface design and its design criteria emerged as a challenging task. Virtual and augmented reality offer a wide range of applications, from improving learning and educational experiences to complex industrial and medical operations. Virtual reality is a viable direction for future interface design because it can replace existing generic and complex physical interfaces with an alternative, sensory-relayed form of input, providing a natural and efficient mode of interaction for users. Virtual and augmented reality also reduce the need to develop separate acceptable standards for user interfaces, since they can provide a single, generic interface that accommodates the work setting. In this paper, we investigated various prospective applications for user interaction in virtual and augmented realities and the limitations in the respective domains. The paper outlines how the new era of human-computer interaction is leading to cognition-based communication, and how virtual and augmented realities can be tailored to user needs and address future demands, replacing the need for command-based interaction between humans and computers.


Author(s):  
S. Graceline Jasmine ◽  
L. Jani Anbarasi ◽  
Modigari Narendra ◽  
Benson Edwin Raj

Augmented reality (AR) overlays computer-generated content directly onto real-world objects. This chapter addresses the technological and design frameworks required to create realistic motion-tracking environments, realistic audio, 3D graphical interactions, multimodal sensory integration, and user interfaces and games spanning virtual to augmented reality. Similarly, the portfolio required to build a personal VR or AR application is detailed. Innovative technologies that the virtual and augmented reality industry has committed to, and their development in entertainment, education, training, medicine, and industrial innovation, are explored. AR allows the physical world to be enhanced by incorporating digital information created in real time by computers. A few applications that have used augmented and virtual reality in real-world settings are discussed.


Author(s):  
Paraskevi Papadopoulou ◽  
Kwok Tai Chui ◽  
Linda Daniela ◽  
Miltiadis D. Lytras

Virtual and augmented reality (VR and AR), with their various computer-based simulations and teaching aids, have already begun to transform medical education and training. The use of virtual labs and anatomy lessons, including virtual learning environments (VLEs) for the delivery of lectures and surgical operations, is explored. The purpose of this chapter is to promote the role of VR and AR in the context of medical education as an innovative, effective, and cost-reasonable solution for the provision of better and faster practical training. Overall, this chapter investigates the potential of VLEs in terms of the concepts and principles that allow students to develop a more direct and meaningful experiential understanding of the learning goals and outcomes of their courses and of the practical and transferable skills required. A business model related to cloud-based active learning in medical education and training is proposed, in line with the idea of an Open Agora of Virtual Reality Learning Services.


Proceedings ◽  
2020 ◽  
Vol 54 (1) ◽  
pp. 4
Author(s):  
Aida Vidal-Balea ◽  
Oscar Blanco-Novoa ◽  
Paula Fraga-Lamas ◽  
Miguel Vilar-Montesinos ◽  
Tiago M. Fernández-Caramés

This paper presents the development of a novel Microsoft HoloLens collaborative application that allows shipyard operators to interact with a virtual clutch during its assembly in a real turbine workshop. The Augmented Reality (AR) experience acts as a virtual guide while assembling different parts of a ship. In particular, the proposed application allows operators to position the clutch in a real environment and interact with it. The application also provides documentation for each part of the clutch, showing its blueprints and physical measurements. The proposed application enables collaborative AR experiences, allowing users to visualize the same content and animations at the same time and to interact simultaneously with 3D objects from multiple devices. Furthermore, the application is integrated with an Industrial Internet of Things (IIoT) framework, resulting in an AR-IIoT application that can receive and display real-time sensor data on information panels, as well as trigger actions through actuators by making use of virtual user interfaces.
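The abstract does not name the IIoT transport, so the sketch below is a hedged illustration of the described sensor-to-panel and panel-to-actuator flow using MQTT; the broker address, topic names, payload fields, and actions are all hypothetical, and the paho-mqtt 1.x client API is assumed.

```python
# Hypothetical sketch of the AR-IIoT data flow described above:
# sensor telemetry arrives over MQTT and is mirrored to an AR info
# panel; user interactions publish actuator commands back. Broker,
# topics, and payload fields are assumptions (paho-mqtt 1.x API).
import json
import paho.mqtt.client as mqtt

BROKER = "iiot-broker.local"      # hypothetical broker host
SENSOR_TOPIC = "workshop/clutch/+/telemetry"
ACTUATOR_TOPIC = "workshop/clutch/{part}/command"

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # The real application would update a HoloLens information panel;
    # here we just print the values that panel would display.
    print(f"[panel] {msg.topic}: {reading.get('value')} {reading.get('unit', '')}")

def trigger_actuator(client, part, action):
    """Publish an actuator command when the user presses a virtual button."""
    client.publish(ACTUATOR_TOPIC.format(part=part), json.dumps({"action": action}))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(SENSOR_TOPIC)
trigger_actuator(client, "bearing-3", "lubricate")  # hypothetical action
client.loop_forever()
```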


Author(s):  
Rafael Radkowski ◽  
Sravya Kanunganti

The Microsoft HoloLens is the latest augmented reality (AR)-capable head-mounted display (HMD), with the potential to leverage AR applications in manufacturing and design. Its optical system and embedded tracking capability are superior to those of many precursor HMDs and mitigate several known obstacles such as size, weight, visual quality, and tracking latency. The last of these, imperceptible tracking latency, is an especially convincing factor for people outside the AR community. The onboard tracking allows the HoloLens to populate the physical world with virtual objects and to maintain their positions while the user is moving. Although these capabilities are already convincing, the majority of applications in assembly and design require a precise alignment of virtual objects with physical parts: if a user moves components during a task, the virtual information needs to move along with the physical part to remain semantically correct. Object tracking and automatic registration are required to establish this functionality. This paper introduces an AR system that integrates an external range-camera-based tracking system with the HoloLens. It incorporates two calibration procedures, which are required to register virtual 3D objects with physical components. This AR system can be used for different visualization tasks along the product life cycle, spanning the range from training to decision making, although our major application area is currently manual assembly.
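The paper's two calibration procedures are not reproduced here, but the step underlying any such registration is estimating the rigid transform between two tracker coordinate frames. The code below is a minimal sketch of that step under the standard SVD-based (Kabsch) method, assuming matched 3D point pairs between the range camera's frame and the HoloLens frame; it is not the paper's exact procedure.

```python
# Hedged sketch: estimate the rigid transform (R, t) mapping points
# from one tracker's frame to another from matched 3D correspondences,
# via the standard SVD-based (Kabsch) method. Illustrates the
# registration step only, not the paper's calibration procedures.
import numpy as np

def rigid_transform(src, dst):
    """src, dst: (N, 3) matched points. Returns R (3x3), t (3,)
    minimizing ||R @ src_i + t - dst_i||^2 over all pairs."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known rotation/translation from noisy points.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(20, 3))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.5])
observed = pts @ R_true.T + t_true + rng.normal(scale=1e-3, size=pts.shape)
R, t = rigid_transform(pts, observed)
print(np.allclose(R, R_true, atol=1e-2), np.round(t, 3))
```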


2011 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Judith Kelner ◽  
Luciano Pereira Soares

This special issue of JIS (SBC Journal on 3D Interactive Systems), acknowledging the best papers of the XII Symposium on Virtual and Augmented Reality (SVR 2010), was an important opportunity to present excellent papers in the virtual and augmented reality field. For more than 12 years, SVR has been the most important event on virtual and augmented reality in Brazil; it is conducted by academic professionals who are members of the Brazilian Computer Society (SBC), which has supported the conference for many years. Since this is the first special issue of JIS, we selected papers for the journal that fit the expectations of its readers. These papers were chosen from among the 10 best papers presented at SVR 2010, and they come from different sub-areas of virtual and augmented reality. The developments they introduce present important new proposals and algorithms that may be used by many readers of the journal. The paper "Exploring the Design of Transitional Hybrid User Interfaces", authored by Daniela Trevisan, Felipe Carvalho, Alberto Raposo, Carla Freitas, and Luciana Nedel, discusses how hybrid user interfaces should be managed in order to improve the user's feeling of immersion in a virtual reality system. The second paper, "Expression Cloning Based on Anthropometric Proportions and Deformations by Motion of Spherical Influence Zones", authored by Roberto C. Vieira, Creto Augusto Vidal, and Joaquim B. Cavalcante-Neto, presents methods for mapping human facial expressions to virtual characters; this work was based on anthropometric proportions and geometric manipulations achieved by moving the influence zones of a model with similar characteristics. The third paper, "Extending Brazilian DTV Middleware to Incorporate 3D Technologies", authored by Daniel F. L. Souza, Liliane S. Machado, and Tatiana A. Tavares, explains how proper support for 3D content can be integrated into the new Brazilian digital television standard; the paper presents strategies and examples to support the proposal. We would like to thank all the authors in this process, who extended and adapted their papers as requested by both the submission and reviewing processes.


2021 ◽  
Vol 2 ◽  
Author(s):  
Pornthep Preechayasomboon ◽  
Eric Rombokas

We introduce Haplets, wearable, low-encumbrance, finger-worn, wireless haptic devices that provide vibrotactile feedback for hand-tracking applications in virtual and augmented reality. Haplets are small enough to fit on the back of the fingers and fingernails while leaving the fingertips free for interacting with real-world objects. Through robust physically simulated hands and low-latency wireless communication, Haplets can render haptic feedback in the form of impacts and textures, and supplement the experience with pseudo-haptic illusions. When used in conjunction with handheld tools, such as a pen, Haplets provide haptic feedback for otherwise passive tools in virtual reality, for example emulating friction and pressure sensitivity. We present the hardware design and engineering of Haplets, as well as the software framework for haptic rendering. As an example use case, we present a user study in which Haplets improve the line-width accuracy of a pressure-sensitive pen in a virtual reality drawing task. We also demonstrate Haplets during object manipulation and in a painting and sculpting scenario in virtual reality. At the very least, Haplets can be used as a prototyping platform for haptic feedback in virtual reality.
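The abstract does not detail Haplets' rendering model, so the sketch below shows one common way vibrotactile impact feedback is rendered: a decaying sinusoid whose amplitude scales with the simulated impact velocity. The carrier frequency, decay rate, and velocity scaling are illustrative assumptions, not Haplets' published parameters.

```python
# Hedged sketch: a common vibrotactile rendering model for impacts,
# a decaying sinusoid whose amplitude scales with impact velocity.
# All constants are illustrative guesses, not Haplets' parameters.
import math

SAMPLE_RATE = 8000   # Hz, assumed actuator drive rate
CARRIER_HZ = 170     # near a typical LRA resonance (assumption)
DECAY = 60.0         # 1/s exponential decay rate (assumption)

def impact_waveform(impact_velocity, duration=0.05):
    """Return drive samples in [-1, 1] for one impact transient."""
    amplitude = min(1.0, 0.4 * impact_velocity)  # clamp to actuator range
    n = int(duration * SAMPLE_RATE)
    return [
        amplitude
        * math.exp(-DECAY * (i / SAMPLE_RATE))
        * math.sin(2 * math.pi * CARRIER_HZ * i / SAMPLE_RATE)
        for i in range(n)
    ]

# Example: a firmer impact yields a stronger, equally short transient.
soft = impact_waveform(0.5)   # 0.5 m/s finger-object impact
hard = impact_waveform(2.0)   # 2.0 m/s impact
print(len(soft), max(soft), max(hard))
```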

