Designing for engagement in mixed reality experiences that combine projection mapping and camera-based interaction

2013 ◽  
Vol 25 (2) ◽  
pp. 155-168 ◽  
Author(s):  
Anthony Rowe

2021 ◽  
Author(s):  
Tsukasa Koike ◽  
Taichi Kin ◽  
Shota Tanaka ◽  
Katsuya Sato ◽  
Tatsuya Uchida ◽  
...  

Abstract
BACKGROUND: Image-guided systems improve the safety, functional outcome, and overall survival of neurosurgery but require extensive equipment.
OBJECTIVE: To develop an image-guided surgery system that combines the brain surface photographic texture (BSP-T) captured during surgery with 3-dimensional computer graphics (3DCG) using projection mapping.
METHODS: Patients who underwent initial surgery for brain tumors were prospectively enrolled. The texture of the 3DCG (3DCG-T) was obtained from the 3DCG under conditions similar to those used when capturing the brain surface photographs. The position and orientation at the time of 3DCG-T acquisition served as the reference. The correct position and orientation of the BSP-T were obtained by aligning the BSP-T with the 3DCG-T using normalized mutual information. The BSP-T was then combined with and displayed on the 3DCG using projection mapping. This mixed-reality projection mapping (MRPM) was used prospectively in 15 patients (mean age 46.6 yr; 6 males). The difference between the centerlines of surface blood vessels on the BSP-T and the 3DCG constituted the target registration error (TRE), which was measured in 16 fields of the craniotomy area. The time required for image processing was also measured.
RESULTS: The TRE was measured at 158 locations across the 15 patients, averaging 1.19 ± 0.14 mm (mean ± standard error). The average image processing time was 16.58 min.
CONCLUSION: Our MRPM method does not require extensive equipment and presents patients' anatomy together with medical images in the same coordinate system. It has the potential to improve patient safety.
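The alignment step above hinges on maximizing normalized mutual information (NMI) between the photographic and rendered textures. As an illustrative sketch only (not the authors' implementation; the histogram size and the Studholme form of NMI are assumptions), the metric can be computed from a joint histogram of the two grayscale images; a registration loop would then search over positions and orientations for the transform that maximizes it:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """Studholme NMI, (H(A) + H(B)) / H(A, B); higher means better aligned."""
    # Joint histogram of the two images (e.g., BSP-T vs. 3DCG-T as grayscale).
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    # Entropies, with 0 * log(0) treated as 0.
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (h(px) + h(py)) / h(pxy)

# Toy check with synthetic images: misalignment lowers the score.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
shifted = np.roll(img, 5, axis=1)  # stand-in for a misaligned texture
print(normalized_mutual_information(img, img))      # high (well aligned)
print(normalized_mutual_information(img, shifted))  # lower (misaligned)
```

MI-family metrics are the usual choice for this kind of multimodal registration because a photograph and a rendered texture differ in intensity characteristics; NMI rewards statistical dependence rather than direct intensity agreement.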


2015 ◽  
Vol 75 ◽  
pp. 327-333 ◽  
Author(s):  
Leonardo Rodriguez ◽  
Fabian Quint ◽  
Dominic Gorecky ◽  
David Romero ◽  
Héctor R. Siller

2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
David J. Cote ◽  
Jacob Ruzevick ◽  
Ben A. Strickland ◽  
Gabriel Zada

Author(s):  
Jacqueline A. Towson ◽  
Matthew S. Taylor ◽  
Diana L. Abarca ◽  
Claire Donehower Paul ◽  
Faith Ezekiel-Wilder

Purpose: Communication among allied health professionals, teachers, and family members is a critical skill when addressing the individual needs of patients. Graduate students in speech-language pathology programs often have limited opportunities to practice these skills before or during externship placements. The purpose of this study was to investigate whether a mixed-reality simulator is a viable alternative to traditional role-play scenarios for speech-language pathology graduate students practicing interprofessional communication (IPC) skills when delivering diagnostic information to different stakeholders.
Method: Eighty graduate students (N = 80) completing their third semester in one speech-language pathology program were randomly assigned to one of four conditions: mixed-reality simulation with or without coaching, or role play with or without coaching. Data were collected on students' self-efficacy, on IPC skills pre- and postintervention, and on perceptions of the intervention.
Results: Students in the two coaching groups scored significantly higher than students in the noncoaching groups on observed IPC skills. There were no significant differences in students' self-efficacy. Students' responses on social validity measures showed that both interventions, including coaching, were acceptable and feasible.
Conclusions: Coaching paired with either mixed-reality simulation or role play is a viable method for improving the IPC skills of graduate students in speech-language pathology. These findings are particularly relevant given the recent approval for students to obtain clinical hours in simulated environments.
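The study's 2 × 2 design (simulation vs. role play, crossed with coaching vs. no coaching) implies balanced random assignment of the 80 students to four cells of 20. A minimal sketch of such an allocation, with hypothetical IDs and paraphrased condition labels (not taken from the study's materials):

```python
import random

students = [f"S{i:02d}" for i in range(1, 81)]          # N = 80, hypothetical IDs
conditions = ["MRS + coaching", "MRS, no coaching",
              "Role play + coaching", "Role play, no coaching"]

random.seed(2021)            # any fixed seed gives a reproducible allocation
random.shuffle(students)
groups = {c: students[i::4] for i, c in enumerate(conditions)}  # 20 per cell
for condition, members in groups.items():
    print(condition, len(members))
```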


2018 ◽  
Vol 5 (2) ◽  
pp. 8-11
Author(s):  
Shuangchun Liu ◽  
Surng Gahb Jahng

2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous stimuli emitting radiation. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions with different correlated color temperature (CCT) levels and light levels were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus overlaid on the real-world luminous environment until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing a virtual stimulus overlaid on the real world.
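The reduced degree of chromatic adaptation reported here can be made concrete with the degree-of-adaptation factor D used in CIECAM02-style models. The sketch below is illustrative only (not the authors' model; the stimulus, the warm white point, and D = 0.7 are assumptions) and shows how D blends a CAT02/von Kries transform between no adaptation (D = 0) and complete adaptation (D = 1):

```python
import numpy as np

# CAT02 forward matrix (XYZ -> sharpened cone-like RGB), as used in CIECAM02.
M = np.array([[ 0.7328, 0.4296, -0.1624],
              [-0.7036, 1.6975,  0.0061],
              [ 0.0030, 0.0136,  0.9834]])

def adapt(xyz, white_src, white_dst, D):
    """von Kries-style adaptation; D in [0, 1] is the degree of adaptation.
    The study's result implies D < 1 when viewing AR overlays."""
    rgb, ws, wd = M @ xyz, M @ white_src, M @ white_dst
    gain = D * (wd / ws) + (1.0 - D)   # per-channel gain, blended by D
    return np.linalg.inv(M) @ (gain * rgb)

# Example: a stimulus seen under a warm (~3000 K) ambient, adapted toward D65.
xyz_stim = np.array([40.0, 42.0, 30.0])          # hypothetical stimulus
white_warm = np.array([108.1, 100.0, 39.3])      # approx. XYZ of a ~3000 K white
white_d65 = np.array([95.047, 100.0, 108.883])   # D65 white point
print(adapt(xyz_stim, white_warm, white_d65, D=0.7))   # partial adaptation
print(adapt(xyz_stim, white_warm, white_d65, D=1.0))   # complete adaptation
```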


2017 ◽  
Author(s):  
Dirk Schart ◽  
Nathaly Tschanz