Augmented Reality, Mixed Reality, and Hybrid Approach in Healthcare Simulation: A Systematic Review

2021 ◽  
Vol 11 (5) ◽  
pp. 2338
Author(s):  
Rosanna Maria Viglialoro ◽  
Sara Condino ◽  
Giuseppe Turini ◽  
Marina Carbone ◽  
Vincenzo Ferrari ◽  
...  

Simulation-based medical training is considered an effective tool to acquire and refine technical skills, mitigating the ethical issues of Halsted’s model. This review aims to evaluate the literature on medical simulation techniques based on augmented reality (AR), mixed reality (MR), and hybrid approaches. The research identified 23 articles that meet the inclusion criteria: 43% combine two approaches (MR and hybrid), 22% combine all three, 26% employ only the hybrid approach, and 9% apply only the MR approach. Among the studies reviewed, 22% use commercial simulators, whereas 78% describe custom-made simulators. Each simulator is classified according to its target clinical application: training of surgical tasks (e.g., specific tasks for training in neurosurgery, abdominal surgery, orthopedic surgery, dental surgery, or otorhinolaryngological surgery, as well as generic tasks such as palpation) and education in medicine (e.g., anatomy learning). Additionally, the review assesses the complexity, reusability, and realism of the physical replicas, as well as the portability of the simulators. Finally, we describe whether and how the simulators have been validated. The review highlights that most of the studies do not have a significant sample size and include only a feasibility assessment and preliminary validation; thus, further research is needed to validate existing simulators and to verify whether improvements in performance in a simulated scenario translate into improved performance on real patients.
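The reported breakdown can be cross-checked arithmetically; the sketch below assumes hypothetical raw counts of 10, 5, 6, and 2 articles per category, which reproduce the rounded percentages for the 23 included articles.

```python
# Cross-check of the category breakdown reported in the review.
# The raw counts below are ASSUMPTIONS chosen to reproduce the rounded
# percentages for the 23 included articles; the review reports only percentages.
counts = {
    "MR + hybrid": 10,          # reported as 43%
    "AR + MR + hybrid": 5,      # reported as 22%
    "hybrid only": 6,           # reported as 26%
    "MR only": 2,               # reported as 9%
}

total = sum(counts.values())
assert total == 23  # number of articles meeting the inclusion criteria

for category, n in counts.items():
    print(f"{category}: {n}/{total} = {100 * n / total:.0f}%")
```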

2021 ◽  
Vol 51 (2) ◽  
pp. E8
Author(s):  
Frederick Van Gestel ◽  
Taylor Frantz ◽  
Cédric Vannerom ◽  
Anouk Verhellen ◽  
Anthony G. Gallagher ◽  
...  

OBJECTIVE The traditional freehand technique for external ventricular drain (EVD) placement is the most frequently used, but it remains the primary risk factor for inaccurate drain placement. As this procedure could benefit from image guidance, the authors set out to demonstrate the impact of augmented reality (AR) assistance on the accuracy and learning curve of EVD placement compared with the freehand technique. METHODS Sixteen medical students performed a total of 128 EVD placements on a custom-made phantom head, both before and after receiving a standardized training session. They were guided either by the freehand technique or by AR, which provided an anatomical overlay and tailored guidance for EVD placement through inside-out infrared tracking. The outcome was quantified by the metric accuracy of EVD placement as well as by its clinical quality. RESULTS The mean target error was significantly impacted by either AR (p = 0.003) or training (p = 0.02) in a direct comparison with the untrained freehand performance. Both untrained (11.9 ± 4.5 mm) and trained (12.2 ± 4.7 mm) AR performances were significantly better than the untrained freehand performance (19.9 ± 4.2 mm), which improved after training (13.5 ± 4.7 mm). The quality of EVD placement as assessed by the modified Kakarla scale (mKS) was significantly impacted by AR guidance (p = 0.005) but not by training (p = 0.07). Both untrained and trained AR performances (59.4% mKS grade 1 for both) were significantly better than the untrained freehand performance (25.0% mKS grade 1). Spatial aptitude testing revealed a correlation between perceptual ability and untrained AR-guided performance (r = 0.63). CONCLUSIONS Compared with the freehand technique, AR guidance for EVD placement yielded higher outcome accuracy and quality for procedure novices. With AR, untrained individuals performed as well as trained individuals, indicating that AR guidance not only improved performance but also positively impacted the learning curve. Future efforts will focus on the translation and evaluation of AR for EVD placement in the clinical setting.
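As a hedged illustration of how the reported group means and standard deviations could be compared, the sketch below applies a Welch's t-test from summary statistics; the per-group sample size of 32 placements is an assumption (128 placements split evenly across four conditions), and this is not necessarily the analysis performed in the paper.

```python
# Hedged sketch: comparing untrained freehand vs. untrained AR-guided
# EVD placement using the means/SDs reported in the abstract.
# n = 32 per group is an ASSUMPTION (128 placements over 4 conditions);
# the actual analysis in the paper may differ.
from scipy import stats

n_per_group = 32  # assumed, not reported in the abstract

# Mean target error (mm), reported as mean ± SD
freehand_untrained = (19.9, 4.2)
ar_untrained = (11.9, 4.5)

t, p = stats.ttest_ind_from_stats(
    mean1=freehand_untrained[0], std1=freehand_untrained[1], nobs1=n_per_group,
    mean2=ar_untrained[0], std2=ar_untrained[1], nobs2=n_per_group,
    equal_var=False,  # Welch's t-test
)
print(f"t = {t:.2f}, p = {p:.2g}")
```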


2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous emitting radiations. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, which was overlaid on a real-world luminous environment, until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing the virtual stimulus overlaid on the real world.
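A reduced degree of chromatic adaptation can be illustrated with a generic von Kries / CAT02-style transform in which a degree-of-adaptation factor D controls how fully the adopted white point is discounted; the sketch below uses the standard CAT02 matrix with assumed white points, and is not the model fitted in the study.

```python
# Illustrative von Kries-style chromatic adaptation with an explicit
# degree-of-adaptation factor D (D = 1: complete adaptation,
# D = 0: no adaptation). Generic sketch, not the model used in the study.
import numpy as np

# CAT02 matrix (XYZ -> sharpened cone-like responses)
M_CAT02 = np.array([
    [ 0.7328,  0.4296, -0.1624],
    [-0.7036,  1.6975,  0.0061],
    [ 0.0030,  0.0136,  0.9834],
])

def adapt(xyz, xyz_white_src, xyz_white_dst, D=1.0):
    """Map a stimulus from a source white point to a destination white
    point, with partial adaptation controlled by D."""
    rgb = M_CAT02 @ xyz
    rgb_ws = M_CAT02 @ xyz_white_src
    rgb_wd = M_CAT02 @ xyz_white_dst
    # Von Kries scaling, blended with identity according to D
    scale = D * (rgb_wd / rgb_ws) + (1.0 - D)
    return np.linalg.inv(M_CAT02) @ (scale * rgb)

# Example: a stimulus viewed under a warm (low-CCT) illuminant, adapted
# towards D65 with only partial adaptation (D = 0.6); values are assumed.
xyz_stimulus = np.array([40.0, 43.0, 30.0])
xyz_white_a = np.array([109.85, 100.0, 35.58])      # CIE illuminant A
xyz_white_d65 = np.array([95.047, 100.0, 108.883])  # CIE D65
print(adapt(xyz_stimulus, xyz_white_a, xyz_white_d65, D=0.6))
```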


2018 ◽  
Vol 17 (1) ◽  
pp. 128-140
Author(s):  
Oleksandr Pushkar

The article presents an approach to developing an advertising multimedia product for the promotion or sale of goods or services. Advertising products here include advertising videos, interactive commercials, 3D advertising, virtual and augmented reality, and online stores. Based on the analogy method, a diagram of the process by which the user perceives the advertising multimedia product is presented. The use of a hybrid customer-development approach for updating the multimedia product and accounting for the virtual values of users is substantiated. Scenarios for developing the multimedia product are defined depending on whether the planned goals are achieved. A sequence for multimedia product development is proposed based on the convergence of face-to-face and screen-to-screen approaches.


Author(s):  
Eric S Tvedte ◽  
Mark Gasser ◽  
Benjamin C Sparklin ◽  
Jane Michalski ◽  
Carl E Hjelmen ◽  
...  

Abstract The newest generation of DNA sequencing technology is highlighted by the ability to generate sequence reads hundreds of kilobases in length. Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT) have pioneered competitive long-read platforms, with more recent work focused on improving sequencing throughput and per-base accuracy. We used whole-genome sequencing data produced by three PacBio protocols (Sequel II CLR, Sequel II HiFi, RS II) and two ONT protocols (Rapid Sequencing and Ligation Sequencing) to compare assemblies of the bacterium Escherichia coli and the fruit fly Drosophila ananassae. In both organisms tested, Sequel II assemblies had the highest consensus accuracy, even after accounting for differences in sequencing throughput. ONT and PacBio CLR produced the longest reads compared with PacBio RS II and HiFi, and genome contiguity was highest when assembling these datasets. ONT Rapid Sequencing libraries had the fewest chimeric reads and provided superior quantification of E. coli plasmids versus ligation-based libraries. The quality of assemblies can be enhanced by adopting hybrid approaches that use Illumina libraries for bacterial genome assembly or for polishing eukaryotic genome assemblies, and an ONT-Illumina hybrid approach would be more cost-effective for many users. Genome-wide DNA methylation could be detected using both technologies; however, ONT libraries enabled the identification of a broader range of known E. coli methyltransferase recognition motifs in addition to undocumented D. ananassae motifs. The ideal choice of long-read technology may depend on several factors, including the question or hypothesis under examination. No single technology outperformed the others in all metrics examined.
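The motif analysis mentioned for methylation detection can be illustrated by counting well-known E. coli methyltransferase recognition motifs (Dam GATC and Dcm CCWGG) in an assembled sequence; the sketch below is a generic example and does not reproduce the authors' pipeline.

```python
# Count occurrences of known E. coli methyltransferase recognition motifs
# in an assembled sequence. Generic sketch; not the authors' pipeline.
import re

# IUPAC-aware motifs: Dam (GATC) and Dcm (CCWGG, where W = A or T)
MOTIFS = {
    "Dam GATC": "GATC",
    "Dcm CCWGG": "CC[AT]GG",
}

def count_motifs(sequence: str) -> dict:
    """Count motif hits on the forward strand of a DNA sequence."""
    seq = sequence.upper()
    return {name: len(re.findall(pattern, seq)) for name, pattern in MOTIFS.items()}

if __name__ == "__main__":
    example = "AAGATCGGCCAGGTTGATCCCTTGGAT"  # toy sequence, not real data
    print(count_motifs(example))  # {'Dam GATC': 2, 'Dcm CCWGG': 1}
```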


Author(s):  
Sarah Beadle ◽  
Randall Spain ◽  
Benjamin Goldberg ◽  
Mahdi Ebnali ◽  
Shannon Bailey ◽  
...  

Virtual environments and immersive technologies are growing in popularity for human factors purposes. Whether it is training in a low-risk environment or using simulated environments for testing future automated vehicles, virtual environments show promise for the future of our field. The purpose of this session is to have current human factors practitioners and researchers demonstrate their immersive technologies. This is the eighth iteration of the “Me and My VE” interactive session. Presenters in this session will provide a brief introduction of their virtual reality, augmented reality, or virtual environment work before engaging with attendees in an interactive demonstration period. During this period, the presenters will each have a multimedia display of their immersive technology and will discuss their work and development efforts. The selected demonstrations cover issues of designing immersive interfaces, military and medical training, and using simulation to better understand complex tasks. This includes a mix of government, industry, and academic work. Attendees will be virtually immersed in the technologies and research presented, allowing for interaction with the work being done in this field.


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique requires the robot to be available for programming and not in operation. This means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the digital twin of the robot, using augmented reality technologies. However, this presents the limitation of a lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of the tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact of such haptic feedback on a pick-and-place task involving the wrist of a holographic robot arm, and we found this feedback to be beneficial.
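Ultrasound mid-air haptics of the kind used in this interface typically works by driving a phased array of transducers with per-element phase delays so that their emissions arrive in phase at a focal point in mid-air; the sketch below illustrates that phase calculation under assumed array parameters and is not the actual system described in the paper.

```python
# Illustrative phase-delay calculation for focusing an ultrasound phased
# array at a mid-air focal point (the basic principle behind ultrasound
# mid-air haptics). Array geometry and parameters are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, air at ~20 degrees C
FREQUENCY = 40_000.0     # Hz, typical for airborne ultrasound arrays
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def focus_phases(transducer_xy, focal_point, pitch=0.01):
    """Return the phase (radians) to drive each transducer so that all
    emissions arrive in phase at the focal point."""
    # Transducers lie in the z = 0 plane on a regular grid (pitch in metres)
    positions = np.column_stack([transducer_xy * pitch,
                                 np.zeros(len(transducer_xy))])
    distances = np.linalg.norm(positions - focal_point, axis=1)
    # Negative propagation phase, wrapped to [0, 2*pi)
    return (-2.0 * np.pi * distances / WAVELENGTH) % (2.0 * np.pi)

# Example: 16 x 16 grid of transducers, focus 20 cm above the array centre
grid = np.array([(x, y) for x in range(16) for y in range(16)], dtype=float)
grid -= grid.mean(axis=0)  # centre the array at the origin
phases = focus_phases(grid, focal_point=np.array([0.0, 0.0, 0.20]))
print(phases.shape, phases.min(), phases.max())
```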


2021 ◽  
pp. 1-19
Author(s):  
Eimei Oyama ◽  
Kohei Tokoi ◽  
Ryo Suzuki ◽  
Sousuke Nakamura ◽  
Naoji Shiroma ◽  
...  

2017 ◽  
Vol 26 (1) ◽  
pp. 16-41 ◽  
Author(s):  
Jonny Collins ◽  
Holger Regenbrecht ◽  
Tobias Langlotz

Virtual and augmented reality, and other forms of mixed reality (MR), have become a focus of attention for companies and researchers. Before they can become successful in the market and in society, those MR systems must be able to deliver a convincing, novel experience for the users. By definition, the experience of mixed reality relies on the perceptually successful blending of reality and virtuality. Any MR system has to provide a sensory, and in particular visually coherent, set of stimuli. Therefore, issues with visual coherence, that is, a disrupted experience of an MR environment, must be avoided. While it is very easy for a user to detect issues with visual coherence, it is very difficult to design and implement a system for coherence. This article presents a framework and exemplary implementation of a systematic enquiry into issues with visual coherence and possible solutions to address those issues. The focus is set on head-mounted display-based systems, notwithstanding the framework's applicability to other types of MR systems. Our framework, together with a systematic discussion of tangible issues and solutions for visual coherence, aims to guide developers of mixed reality systems toward better and more effective user experiences.

