Mobile, real-time, and point-of-care augmented reality is robust, accurate, and feasible: a prospective pilot study

2018 ◽  
Vol 32 (6) ◽  
pp. 2958-2967 ◽  
Author(s):  
Hannes Götz Kenngott ◽  
Anas Amin Preukschas ◽  
Martin Wagner ◽  
Felix Nickel ◽  
Michael Müller ◽  
...  
2007 ◽  
Vol 15 (6) ◽  
pp. 637-641 ◽  
Author(s):  
Roland A. Ammann ◽  
Franziska Zucol ◽  
Christoph Aebi ◽  
Felix K. Niggli ◽  
Thomas Kühne ◽  
...  

2020 ◽  
pp. 147592172097698 ◽  
Author(s):  
Shaohan Wang ◽  
Sakib Ashraf Zargar ◽  
Fuh-Gwo Yuan

A two-stage knowledge-based deep learning algorithm is presented for enabling automated damage detection in real time using augmented reality smart glasses. The first stage of the algorithm identifies damage-prone zones within the region of interest; this requires domain knowledge about both the damage and the structure being inspected. In the second stage, automated damage detection is performed independently within each of the identified zones, starting with the most damage-prone one. For real-time visual inspection enhancement on augmented reality smart glasses, this two-stage approach not only ensures computational feasibility and efficiency but also significantly improves the probability of detection when dealing with structures with complex geometric features. A pilot study is conducted using hands-free Epson BT-300 smart glasses, during which two distinct tasks are performed: first, using a single deep learning model deployed on the augmented reality smart glasses, automatic detection and classification of corrosion/fatigue, the most common cause of failure in high-strength materials, is performed. Then, to highlight the efficacy of the proposed two-stage approach, the more challenging task of defect detection in a multi-joint bolted region is addressed. The pilot study is conducted without any artificial control of external conditions such as acquisition angle and lighting. While automating the visual inspection process is not a new concept for large-scale structures, in most cases assessment of the collected data is performed offline, and the algorithms/techniques used therein cannot be implemented directly on computationally limited devices such as hands-free augmented reality glasses, which could otherwise be used by inspectors in the field for real-time assistance. The proposed approach serves to overcome this bottleneck.
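The two-stage structure described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the names `Zone`, `rank_zones`, and `detect_in_zone` are assumptions, the proneness score stands in for the paper's domain knowledge, and the detector callback stands in for the deployed deep learning model.

```python
# Illustrative sketch of the two-stage inspection pipeline (assumed names/API).
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Zone:
    name: str
    proneness: float                      # domain-knowledge score: higher = more damage prone
    bbox: Tuple[int, int, int, int]       # (x, y, w, h) within the region of interest


def rank_zones(zones: List[Zone]) -> List[Zone]:
    """Stage 1: order candidate zones by damage proneness (domain knowledge)."""
    return sorted(zones, key=lambda z: z.proneness, reverse=True)


def inspect(zones: List[Zone],
            detect_in_zone: Callable[[Zone], bool]) -> List[str]:
    """Stage 2: run the detector zone by zone, most damage-prone first.

    Restricting inference to small, pre-ranked zones is what keeps the
    workload feasible on a computationally limited device such as AR
    smart glasses.
    """
    findings = []
    for zone in rank_zones(zones):
        if detect_in_zone(zone):          # stands in for the deep learning model
            findings.append(zone.name)
    return findings
```

Ordering the zones before detection means the most likely findings surface first, which matters when the inspector is viewing results live through the glasses.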


2021 ◽  
Vol 51 (2) ◽  
pp. E3
Author(s):  
Michael E. Ivan ◽  
Daniel G. Eichberg ◽  
Long Di ◽  
Ashish H. Shah ◽  
Evan M. Luther ◽  
...  

OBJECTIVE Monitor and wand–based neuronavigation stations (MWBNSs) for frameless intraoperative neuronavigation are routinely used in cranial neurosurgery. However, they are temporally and spatially cumbersome: the OR must be arranged around the MWBNS, at least one hand must be used to manipulate the MWBNS wand (interrupting a bimanual surgical technique), and the surgical workflow is interrupted as the surgeon stops to "check the navigation" on a remote monitor. Thus, there is a need for continuous, real-time, hands-free neuronavigation solutions. Augmented reality (AR) is poised to address these issues. The authors present the first reported prospective pilot study investigating the feasibility of using the OpenSight application with an AR head-mounted display to map out the borders of tumors in patients undergoing elective craniotomy for tumor resection, and to compare the degree of correspondence with MWBNS tracing.
METHODS Eleven consecutive patients undergoing elective craniotomy for brain tumor resection were prospectively identified and underwent circumferential tumor border tracing at the time of incision planning by a surgeon wearing HoloLens AR glasses running the commercially available OpenSight application registered to the patient and the preoperative MRI. Then, the same patient underwent circumferential tumor border tracing using the StealthStation S8 MWBNS. Postoperatively, both tumor border tracings were compared by two blinded board-certified neurosurgeons and rated as having an excellent, adequate, or poor degree of correspondence based on a subjective sense of the overlap. Objective overlap area measurements were also determined.
RESULTS Eleven patients undergoing craniotomy were included in the study. Five patient procedures were rated as having an excellent degree of correspondence, 5 an adequate degree, and 1 poor correspondence. Both raters agreed on the rating in all cases. AR tracing was possible in all cases.
CONCLUSIONS In this small pilot study, the authors found that AR was implementable in the workflow of a neurosurgery OR, and was a feasible method of preoperative tumor border identification for incision planning. Future studies are needed to identify strategies to improve and optimize AR accuracy.
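One way to make the study's "objective overlap area measurement" concrete is to score the two tracings with a Dice coefficient over their enclosed pixels. This is a hedged sketch under stated assumptions: the abstract does not specify its metric, and the pixel-set representation, the Dice score, and the rating thresholds in `rate` are all illustrative choices, not the authors'.

```python
# Illustrative overlap scoring between two traced tumor outlines
# (pixel-set representation and Dice metric are assumptions).
from typing import Set, Tuple

Pixel = Tuple[int, int]


def dice_overlap(ar_trace: Set[Pixel], mwbns_trace: Set[Pixel]) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|); 1.0 means identical regions."""
    if not ar_trace and not mwbns_trace:
        return 1.0
    return 2 * len(ar_trace & mwbns_trace) / (len(ar_trace) + len(mwbns_trace))


def rate(dice: float) -> str:
    """Map a Dice score onto the study's three-level scale (thresholds assumed)."""
    if dice >= 0.8:
        return "excellent"
    if dice >= 0.5:
        return "adequate"
    return "poor"
```

A symmetric set-overlap score like this complements the raters' subjective judgment because it is insensitive to which tracing is treated as the reference.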


2013 ◽  
Vol 5 (2) ◽  
pp. 98-102 ◽  
Author(s):  
Hideyuki Suenaga ◽  
Huy Hoang Tran ◽  
Hongen Liao ◽  
Ken Masamune ◽  
Takeyoshi Dohi ◽  
...  

Cureus ◽  
2021 ◽  
Author(s):  
Jagannath Hanumanthappa ◽  
Vamanjore A Naushad ◽  
Osama Mohammed ◽  
Ashok Kumar Ariboyina ◽  
Suresh Babu Chellapandian ◽  
...  
