Mixed reality as a novel tool for diagnostic and surgical navigation in orthopaedics

Author(s):  
Andrea Teatini ◽  
Rahul P. Kumar ◽  
Ole Jakob Elle ◽  
Ola Wiig

Purpose: This study presents a novel surgical navigation tool developed in a mixed reality environment for orthopaedic surgery. Joint and skeletal deformities affect all age groups and greatly reduce the range of motion of the joints. These deformities are notoriously difficult to diagnose and to correct surgically. Methods: We have developed a surgical tool that integrates surgical instrument tracking and augmented reality through a head-mounted display, allowing the surgeon to visualise bones with the illusion of possessing “X-ray” vision. The studies presented below assess the accuracy of the navigation tool in tracking a location at the tip of the surgical instrument in holographic space. Results: The average accuracy provided by the navigation tool is around 8 mm, and qualitative assessment by the orthopaedic surgeons provided positive feedback on its capabilities for diagnostic use. Conclusions: Further improvements are needed before the navigation tool is accurate enough for surgical applications; however, this new tool has the potential to improve diagnostic accuracy, allow for safer and more precise surgeries, and provide better learning conditions for orthopaedic surgeons in training.
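Illustrative aside (not from the paper): the reported ~8 mm accuracy is, in essence, a mean Euclidean error between where the navigation tool places the instrument tip in holographic space and the tip's ground-truth position. A minimal Python sketch of that computation follows; the function name and the measurements are assumptions made up for illustration.

```python
import numpy as np

def mean_tip_error(reported_tips, ground_truth_tips):
    """Mean and standard deviation of the Euclidean tip error (mm)."""
    reported = np.asarray(reported_tips, dtype=float)
    truth = np.asarray(ground_truth_tips, dtype=float)
    errors = np.linalg.norm(reported - truth, axis=1)  # per-trial error
    return errors.mean(), errors.std()

# Made-up tip positions (mm) in the holographic frame, for illustration only;
# a ~8 mm mean error would match the accuracy reported in the abstract.
reported = [[10.2, 4.1, 120.5], [33.0, 8.9, 98.7]]
truth = [[3.0, 2.0, 118.0], [26.5, 6.0, 95.0]]
print(mean_tip_error(reported, truth))
```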

10.29007/wjjx ◽  
2018 ◽  
Author(s):  
Tomoki Itamiya ◽  
Toshinori Iwai ◽  
Tsuyoshi Kaneko

In surgical navigation, accurately knowing the position of a surgical instrument within the patient's body is very important. Transparent smart glasses are well suited to surgical navigation because the surgeon does not need to move his or her line of sight away from the operative field. We propose a new application software development method that can show a stereoscopic view of highly precise 3D-CG medical models and surgical instruments on transparent smart glasses for surgical navigation. We used Mixed Reality (MR), a concept that goes beyond Augmented Reality (AR), by means of the Microsoft HoloLens. In Mixed Reality, persons, places, and objects from the physical and virtual worlds merge into a blended environment. Unlike competing devices, HoloLens can recognise its surroundings with front-facing cameras and 3D depth sensors; external markers and sensors are not required. Once a 3D-CG medical model is placed in the blended environment, it stays fixed in place and does not move on its own. We can therefore see a stereoscopic view of a precise medical model projected into our surroundings, such as a holographic human. A holographic human appears as if he or she were actually present, a uniquely immersive experience. The holographic human can not only be viewed but also moved by the user's hand gestures, so interactive manipulation is possible. A holographic human and a 3D-CG surgical instrument can be displayed simultaneously in the blended environment, and the movement of the 3D-CG surgical instrument can be linked to the actual surgical instrument in the operating room. In the operating room, the holographic human is superimposed on the actual patient's position. Because this overlap makes the positional relationship between the holographic human and the surgical instruments clear, the system is very useful for surgical navigation. Multiple persons can view the same holographic human at the same time using multiple HoloLenses. We developed the holographic human application software for surgical navigation using Unity and the Vuforia library. A holographic view of a 3D-CG medical model made from an actual patient's CT/MRI image data is possible with our application software development method. A user can build the application software within about five minutes by preparing a 3D-CG medical model file, such as an STL file. Therefore, surgeons, dentists, and clinical staff can create holographic human content easily by themselves. As a result, the method can be used daily for routine medical treatment and education.
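Illustrative aside (not the authors' Unity/Vuforia code): superimposing a patient-derived 3D-CG model onto the actual patient reduces, at its core, to applying a rigid registration transform that maps model (CT/MRI) coordinates into the patient/holographic frame. The Python sketch below shows only that step; the vertices, rotation, and translation are hypothetical values chosen for illustration.

```python
import numpy as np

def rigid_transform(vertices, rotation, translation):
    """Map Nx3 model-space vertices into the patient/holographic frame."""
    return vertices @ rotation.T + translation

# Hypothetical vertices taken from an STL surface model (model/CT frame, mm).
model_vertices = np.array([[0.0, 0.0, 0.0],
                           [10.0, 0.0, 0.0],
                           [0.0, 10.0, 0.0]])

# Hypothetical registration result aligning the model with the patient's position.
rotation = np.eye(3)                            # identity rotation for illustration
translation = np.array([250.0, -40.0, 1200.0])  # offset to the patient (mm)

print(rigid_transform(model_vertices, rotation, translation))
```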


2022 ◽  
Vol 8 (1) ◽  
pp. 7
Author(s):  
Leah Groves ◽  
Natalie Li ◽  
Terry M. Peters ◽  
Elvis C. S. Chen

While ultrasound (US) guidance has been used during central venous catheterization to reduce complications, including the puncturing of arteries, the rate of such problems remains non-negligible. To further reduce complication rates, mixed-reality systems have been proposed as part of the user interface for such procedures. We demonstrate the use of a surgical navigation system that renders a calibrated US image, together with the needle and its trajectory, in a common frame of reference. We compare the effectiveness of this system, with images rendered either on a planar monitor or within a head-mounted display (HMD), to the standard-of-care US-only approach via a phantom-based user study that recruited 31 expert clinicians and 20 medical students. These users performed needle insertions into a phantom under the three modes of visualization. Success rates were significantly improved under HMD-guidance compared to US-guidance, for both expert clinicians (94% vs. 70%) and medical students (70% vs. 25%). Users also positioned their needle more consistently close to the center of the vessel’s lumen under HMD-guidance than under US-guidance. The performance of the clinicians when using the monitor-based system was comparable to US-only guidance, with no significant difference observed across any metric. The results suggest that using an HMD to align the clinician’s visual and motor fields promotes successful needle guidance, highlighting the importance of continued HMD-guidance research.
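Illustrative aside (not the study's own analysis): success rates under two guidance modes can be compared with a 2×2 contingency test such as Fisher's exact test. The counts below are hypothetical, chosen only to roughly match the 94% vs. 70% rates reported for the 31 expert clinicians.

```python
from scipy.stats import fisher_exact

# Rows: guidance mode; columns: [successes, failures]. Hypothetical counts.
hmd_guidance = [29, 2]   # ~94% success out of 31 insertions
us_guidance = [22, 9]    # ~71% success out of 31 insertions

odds_ratio, p_value = fisher_exact([hmd_guidance, us_guidance])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```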


2017 ◽  
Vol 26 (1) ◽  
pp. 16-41 ◽  
Author(s):  
Jonny Collins ◽  
Holger Regenbrecht ◽  
Tobias Langlotz

Virtual and augmented reality, and other forms of mixed reality (MR), have become a focus of attention for companies and researchers. Before they can become successful in the market and in society, MR systems must be able to deliver a convincing, novel experience for their users. By definition, the experience of mixed reality relies on the perceptually successful blending of reality and virtuality. Any MR system therefore has to provide a sensory, and in particular visually, coherent set of stimuli, and issues with visual coherence, that is, a discontinued experience of an MR environment, must be avoided. While it is very easy for a user to detect issues with visual coherence, it is very difficult to design and implement a system that achieves coherence. This article presents a framework and an exemplary implementation for a systematic enquiry into issues with visual coherence and possible solutions to address them. The focus is on head-mounted display-based systems, although the framework also applies to other types of MR systems. Our framework, together with a systematic discussion of tangible issues and solutions for visual coherence, aims to guide developers of mixed reality systems toward better and more effective user experiences.


2020 ◽  
Vol 1 (1) ◽  
pp. 70-80
Author(s):  
Ekerin Oluseye Michael ◽  
Heidi Tan Yeen-Ju ◽  
Neo Tse Kian

Over the years educators have adopted a variety of technologies in a bid to improve student engagement, interest, and understanding of abstract topics taught in the classroom. There has been increasing interest in immersive technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). The ability of VR to bring ideas to life in three-dimensional space, in a way that makes the subject matter easy for students to grasp, makes it one of the most important tools available for education today. A key feature of VR is its ability to provide multi-sensory visuals and virtual interaction to students wearing a head-mounted display, giving them a better learning experience and a stronger connection to the subject matter. Virtual Reality has been used for training in the health sector, the military, and the workplace, as well as for gamification, exploration of sites, and countless other purposes. Given the potential benefits of virtual technology for visualizing abstract concepts in a realistic virtual world, this paper presents a plan to study the use of situated cognition theory as a learning framework in developing an immersive VR application to train and prepare students of Telecommunications Engineering for the workplace. The paper reviews the literature on Virtual Reality in education, offers insight into the motivation behind this research, and outlines the planned methodology for carrying out the research.

