A Novel Suture Training System for Open Surgery Replicating Procedures Performed by Experts Using Augmented Reality

2021 ◽  
Vol 45 (5) ◽  
Author(s):  
Yuri Nagayo ◽  
Toki Saito ◽  
Hiroshi Oyama

Abstract: The surgical education environment has been changing significantly due to restricted work hours, limited resources, and increasing public concern for safety and quality, leading to the evolution of simulation-based training in surgery. Of the various simulators, low-fidelity simulators are widely used to practice surgical skills such as suturing because they are portable, inexpensive, and easy to use without requiring complicated setups. However, since low-fidelity simulators do not offer any teaching information, trainees practice with them on their own, referring to textbooks or videos, which are insufficient for learning open surgical procedures. This study aimed to develop a new suture training system for open surgery that provides trainees with three-dimensional information on exemplary procedures performed by experts and allows them to observe and imitate the procedures during self-practice. The proposed system consists of a motion capture system for surgical instruments and a three-dimensional replication system that overlays the captured procedures on the surgical field. Motion capture of surgical instruments was achieved inexpensively by using cylindrical augmented reality (AR) markers, and replication of captured procedures was realized by visualizing them three-dimensionally, at the same position and orientation as captured, using an AR device. For subcuticular interrupted suture, it was confirmed that the proposed system enabled users to observe experts’ procedures from any angle and imitate them by manipulating the actual surgical instruments during self-practice. We expect that this training system will contribute to developing a novel surgical training method that enables trainees to learn surgical skills by themselves in the absence of experts.
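To give a concrete sense of the capture stage, the sketch below estimates the 6-DoF pose of fiducial markers attached to an instrument from a single camera frame, using OpenCV's ArUco module (opencv-contrib, pre-4.7 API). This is a minimal illustration under assumed placeholder camera intrinsics, not the authors' cylindrical-marker implementation, and the AR-device replication step is omitted.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics; in practice these come from camera calibration.
camera_matrix = np.array([[900.0, 0.0, 640.0],
                          [0.0, 900.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector_params = cv2.aruco.DetectorParameters_create()

def capture_instrument_pose(frame, marker_length_m=0.01):
    """Detect ArUco markers in a frame and return (id, rvec, tvec) per marker."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict, parameters=detector_params)
    poses = []
    if ids is not None:
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_length_m, camera_matrix, dist_coeffs)
        for marker_id, rvec, tvec in zip(ids.flatten(), rvecs, tvecs):
            poses.append((int(marker_id), rvec.reshape(3), tvec.reshape(3)))
    return poses
```

Logging one such pose list per frame yields the instrument trajectory that the replication stage would replay at the same position and orientation in the AR display.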

Author(s):  
Yu. Shchehelska

This study identifies the main varieties of existing motion capture (mocap) systems that can be used, primarily, to create three-dimensional animation for augmented reality; establishes their specific features; and demonstrates examples of the practical use of certain types of such systems in promotional communications.

The study describes the specific functioning of markerless systems and of all types of marker-based motion capture systems: optical (optically passive and optically active, including «performance capture», as well as hybrid) and non-optical (acoustic, magnetic, mechanical, and inertial).

Two practical promotional cases were analyzed: the American social PR project «Love Has No Labels» and the Japanese commercial brand «ZozoTown» («ZozoSuit»).

It was found that, in the practice of promotional communications, inertial-type mocap systems, even in the presence of full magnetic interference, are used most actively, since they can be deployed directly during mass AR actions, primarily owing to their portability and ability to function in a limited space.

It was also revealed that AR actions using motion capture systems are conducted primarily to create positive word-of-mouth and media resonance, allowing brands to significantly diversify their arsenal of tools for communicating with the target audience and to increase the quality and efficiency of promotional messages, which together boost publicity capital.

Other varieties of mocap systems (with the exception of markerless ones, which work through computer vision) are not used in real time for promotional events, primarily because they are cumbersome. However, they can be employed to create realistic 3D animation for later use in promotional campaigns, projects, and actions based on augmented reality technologies.

Key words: motion capture systems (mocap), augmented reality (AR), promotion, empirical marketing.


2018 ◽  
Vol 3 (1) ◽  
pp. 34-48
Author(s):  
Leelavathi Rajamanickam ◽  
Kate Lam Woon Yee ◽  

The objective of this article is to analyze the field of Augmented Reality (AR), which brings three-dimensional (3D) virtual objects into a real-time 3D environment, in the context of education. Apart from education, medical, manufacturing, entertainment, and military applications have also been explored. This article describes the features of augmented reality systems, including the types of augmented reality and their benefits. Without new teaching approaches, lecturers and students grow bored day by day. This paper summarizes current efforts to address these issues and then discusses future directions and areas for augmented reality in education. A new way of learning and teaching can increase the interest of both parties. This paper provides a starting point for anyone interested in transforming classic approaches to education into technology-enhanced ones that incorporate AR.


2018 ◽  
Vol 25 (4) ◽  
pp. 380-388 ◽  
Author(s):  
Gustavo A. Alonso-Silverio ◽  
Fernando Pérez-Escamirosa ◽  
Raúl Bruno-Sanchez ◽  
José L. Ortiz-Simon ◽  
Roberto Muñoz-Guerrero ◽  
...  

Background. A trainer for online laparoscopic surgical skills assessment based on the performance of experts and nonexperts is presented. The system uses computer vision, augmented reality, and artificial intelligence algorithms, implemented on a Raspberry Pi board with the Python programming language. Methods. Two training tasks were evaluated by the laparoscopic system: transferring and pattern cutting. Computer vision libraries were used to obtain the number of transferred points and the simulated pattern cutting trace by means of tracking of the laparoscopic instrument. An artificial neural network (ANN) was trained to learn from experts’ and nonexperts’ behavior for the pattern cutting task, whereas the assessment of the transferring task was performed using a preestablished threshold. Four expert surgeons in laparoscopic surgery, from hospital “Raymundo Abarca Alarcón,” constituted the experienced class for the ANN. Sixteen trainees (10 medical students and 6 residents) without laparoscopic surgical skills and with limited experience in minimally invasive techniques, from the School of Medicine at Universidad Autónoma de Guerrero, constituted the nonexperienced class. Data from participants performing 5 daily repetitions of each task during 5 days were used to build the ANN. Results. The participants tended to improve their learning curve and dexterity with this laparoscopic training system. The classifier shows a mean accuracy of 90.98% and an area under the receiver operating characteristic curve of 0.93. Moreover, the ANN was able to classify the psychomotor skills of users into 2 classes: experienced or nonexperienced. Conclusion. We constructed and evaluated an affordable laparoscopic trainer system using computer vision, augmented reality, and an artificial intelligence algorithm. The proposed trainer has the potential to increase the self-confidence of trainees and to be applied in programs with limited resources.
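As a rough illustration of the assessment stage, the sketch below trains a small neural network classifier on per-trial motion features to separate experienced from nonexperienced users, in the spirit of the ANN described above. The feature values here are synthetic placeholders; the real features (e.g., task time, path length, speed) would come from the instrument-tracking step, and the network architecture is an assumption rather than the authors' configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic per-trial motion features standing in for tracked-instrument metrics.
X_expert = rng.normal(loc=0.0, scale=1.0, size=(100, 4))
X_novice = rng.normal(loc=1.5, scale=1.0, size=(100, 4))
X = np.vstack([X_expert, X_novice])
y = np.array([1] * 100 + [0] * 100)  # 1 = experienced, 0 = nonexperienced

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)

# Cross-validated accuracy, analogous in spirit to the reported 90.98%.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```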


2014 ◽  
Vol 926-930 ◽  
pp. 1318-1321
Author(s):  
Teng Da Li ◽  
Fang Wen

Motion capture technology records the motion of a moving object with motion sensors and optical tracking equipment, and a computer processes the recorded data to reconstruct the object's three-dimensional motion. Applying motion capture technology to sports training brings training into a scientific, digital stage, and updates to motion capture technology have made it cheaper. This paper studies Kinect-based motion capture technology and its application in basketball training. The method has the advantages of simple data processing, high real-time performance, and low cost.
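Kinect-based capture provides per-frame 3D joint positions for the tracked skeleton, from which training metrics can be derived. As a minimal, hypothetical example (not taken from the paper), the snippet below computes a joint angle from three tracked joints, the kind of feature that could be compared between a trainee's and a reference player's shooting motion.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3D points a-b-c, e.g. shoulder-elbow-wrist."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical Kinect skeleton coordinates in metres for one frame.
shoulder, elbow, wrist = (0.10, 1.40, 2.00), (0.30, 1.20, 2.05), (0.45, 1.35, 2.10)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```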


2019 ◽  
Vol 31 (1) ◽  
pp. 139-146 ◽  
Author(s):  
Camilo A. Molina ◽  
Nicholas Theodore ◽  
A. Karim Ahmed ◽  
Erick M. Westbroek ◽  
Yigal Mirovsky ◽  
...  

OBJECTIVE: Augmented reality (AR) is a novel technology that has the potential to increase the technical feasibility, accuracy, and safety of conventional manual and robotic computer-navigated pedicle insertion methods. Visual data are directly projected to the operator’s retina and overlaid onto the surgical field, thereby removing the requirement to shift attention to a remote display. The objective of this study was to assess the comparative accuracy of AR-assisted pedicle screw insertion in comparison to conventional pedicle screw insertion methods. METHODS: Five cadaveric male torsos were instrumented bilaterally from T6 to L5 for a total of 120 inserted pedicle screws. Postprocedural CT scans were obtained, and screw insertion accuracy was graded by 2 independent neuroradiologists using both the Gertzbein scale (GS) and a combination of that scale and the Heary classification, referred to in this paper as the Heary-Gertzbein scale (HGS). Non-inferiority analysis was performed, comparing the accuracy to freehand, manual computer-navigated, and robotics-assisted computer-navigated insertion accuracy rates reported in the literature. User experience analysis was conducted via a user experience questionnaire filled out by operators after the procedures. RESULTS: The overall screw placement accuracy achieved with the AR system was 96.7% based on the HGS and 94.6% based on the GS. Insertion accuracy was non-inferior to accuracy reported for manual computer-navigated pedicle insertion based on both the GS and the HGS scores. When compared to accuracy reported for robotics-assisted computer-navigated insertion, accuracy achieved with the AR system was found to be non-inferior when assessed with the GS, but superior when assessed with the HGS. Last, accuracy results achieved with the AR system were found to be superior to results obtained with freehand insertion based on both the HGS and the GS scores. Accuracy results were not found to be inferior in any comparison. User experience analysis yielded an “excellent” usability classification. CONCLUSIONS: AR-assisted pedicle screw insertion is a technically feasible and accurate insertion method.
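The non-inferiority comparison against literature rates can be pictured with a simple one-sided test on a single proportion. The sketch below is only a schematic under assumed numbers (116/120 accurate placements, an assumed 95% literature rate, and an assumed 10% non-inferiority margin); the paper's actual statistical procedure and comparator rates may differ.

```python
import math
from scipy.stats import norm

def noninferiority_test(successes, n, p_ref, margin):
    """One-sided non-inferiority test of a proportion against a reference rate.

    H0: p <= p_ref - margin   vs   H1: p > p_ref - margin (normal approximation).
    """
    p_hat = successes / n
    p0 = p_ref - margin
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    return p_hat, z, 1 - norm.cdf(z)

# Assumed figures for illustration only.
p_hat, z, p_value = noninferiority_test(116, 120, p_ref=0.95, margin=0.10)
print(f"p_hat={p_hat:.3f}, z={z:.2f}, one-sided p={p_value:.4f}")
```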


ORL ◽  
2021 ◽  
pp. 1-10
Author(s):  
Claudia Scherl ◽  
Johanna Stratemeier ◽  
Nicole Rotter ◽  
Jürgen Hesser ◽  
Stefan O. Schönberg ◽  
...  

Introduction: Augmented reality can improve planning and execution of surgical procedures. Head-mounted devices such as the HoloLens® (Microsoft, Redmond, WA, USA) are particularly suitable for achieving these aims because they are controlled by hand gestures and enable contactless handling in a sterile environment. Objectives: So far, these systems have not yet found their way into the operating room for surgery of the parotid gland. This study explored the feasibility and accuracy of augmented reality-assisted parotid surgery. Methods: 2D MRI holographic images were created, and 3D holograms were reconstructed from MRI DICOM files and made visible via the HoloLens. 2D MRI slices were scrolled through, 3D images were rotated, and 3D structures were shown and hidden using only hand gestures. The 3D model and the patient were aligned manually. Results: The use of augmented reality with the HoloLens in parotid surgery was feasible. Gestures were recognized correctly. The mean accuracy of superimposition of the holographic model and the patient’s anatomy was 1.3 cm. Highly significant differences were seen in the position error of registration between central and peripheral structures (p = 0.0059), with the lowest deviation of 10.9 mm for central structures and the highest deviation of 19.6 mm for peripheral parts. Conclusion: This pilot study offers a first proof of concept of the clinical feasibility of the HoloLens for parotid tumor surgery. The workflow is not affected, but additional information is provided. Surgical performance could become safer through the navigation-like application of reality-fused 3D holograms, and ergonomics are improved without compromising sterility. Superimposition of the 3D holograms with the surgical field was possible, but further development is necessary to improve the accuracy.
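The superimposition error reported above amounts to distances between corresponding landmarks on the hologram and on the patient after manual alignment. A minimal sketch, with entirely hypothetical coordinates, of how such per-landmark deviations could be computed:

```python
import numpy as np

def registration_errors(hologram_pts, patient_pts):
    """Per-landmark Euclidean deviation (mm) between hologram and patient landmarks."""
    diffs = np.asarray(hologram_pts, dtype=float) - np.asarray(patient_pts, dtype=float)
    return np.linalg.norm(diffs, axis=1)

# Hypothetical landmark coordinates in mm after manual alignment
# (e.g. central vs. increasingly peripheral structures).
hologram = [[0, 0, 0], [40, 10, 5], [80, 25, 12]]
patient  = [[2, 1, 1], [45, 14, 8], [95, 35, 20]]

err = registration_errors(hologram, patient)
print(f"per-landmark deviation (mm): {np.round(err, 1)}, mean: {err.mean():.1f}")
```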


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Joël L. Lavanchy ◽  
Joel Zindel ◽  
Kadir Kirtac ◽  
Isabell Twick ◽  
Enes Hosgor ◽  
...  

Abstract: Surgical skills are associated with clinical outcomes. To improve surgical skills and thereby reduce adverse outcomes, continuous surgical training and feedback are required. Currently, assessment of surgical skills is a manual and time-consuming process that is prone to subjective interpretation. This study aims to automate surgical skill assessment in laparoscopic cholecystectomy videos using machine learning algorithms. To address this, a three-stage machine learning method is proposed: first, a Convolutional Neural Network was trained to identify and localize surgical instruments. Second, motion features were extracted from the detected instrument localizations over time. Third, a linear regression model was trained on the extracted motion features to predict surgical skill. This three-stage modeling approach achieved an accuracy of 87 ± 0.2% in distinguishing good versus poor surgical skill. While the technique cannot yet reliably quantify the degree of surgical skill, it represents an important advance towards automation of surgical skill assessment.
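The second and third stages can be pictured with a short sketch: simple motion features are computed from a tracked instrument centroid and fed to a linear regression model. The feature set, the random placeholder trajectories, and the skill labels below are all assumptions for illustration; the CNN detection stage is not shown.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def motion_features(track):
    """Simple motion features from an instrument centroid track of shape (frames, 2)."""
    track = np.asarray(track, dtype=float)
    steps = np.diff(track, axis=0)
    speed = np.linalg.norm(steps, axis=1)
    return np.array([
        speed.sum(),   # path length
        speed.mean(),  # mean speed per frame
        speed.std(),   # speed variability, a rough smoothness proxy
    ])

rng = np.random.default_rng(0)

# One placeholder trajectory per video and a placeholder scalar skill rating each.
tracks = [np.cumsum(rng.normal(size=(500, 2)), axis=0) for _ in range(20)]
X = np.stack([motion_features(t) for t in tracks])
y = rng.random(20)  # stand-in for expert-assigned skill scores

model = LinearRegression().fit(X, y)
print("predicted skill for first video:", round(float(model.predict(X[:1])[0]), 3))
```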


2019 ◽  
Vol 18 (6) ◽  
pp. e2690 ◽  
Author(s):  
F. Porpiglia ◽  
E. Checcucci ◽  
D. Amparore ◽  
F. Piramide ◽  
P. Verri ◽  
...  
