Surgical Workflow Analysis
Recently Published Documents

Total documents: 14 (five years: 1)
H-index: 6 (five years: 0)

2021, pp. 1-8. Author(s): Danyal Z. Khan, Imanol Luengo, Santiago Barbarisi, Carole Addis, Lucy Culshaw, et al.

OBJECTIVE
Surgical workflow analysis involves systematically breaking down operations into key phases and steps. Automatic analysis of this workflow has potential uses for surgical training, preoperative planning, and outcome prediction. Recent advances in machine learning (ML) and computer vision have allowed accurate automated workflow analysis of operative videos. In this Idea, Development, Exploration, Assessment, Long-term study (IDEAL) stage 0 study, the authors sought to use Touch Surgery for the development and validation of an ML-powered analysis of phases and steps in the endoscopic transsphenoidal approach (eTSA) for pituitary adenoma resection, a first for neurosurgery.

METHODS
The surgical phases and steps of 50 anonymized eTSA operative videos were labeled by expert surgeons. Forty videos were used to train a combined convolutional and recurrent neural network model by Touch Surgery. Ten videos were used for model evaluation (accuracy, F1 score), comparing the phase and step recognition of surgeons to the automatic detection of the ML model.

RESULTS
The longest phase was the sellar phase (median 28 minutes), followed by the nasal phase (median 22 minutes) and the closure phase (median 14 minutes). The longest steps were step 5 (tumor identification and excision, median 17 minutes); step 3 (posterior septectomy and removal of sphenoid septations, median 14 minutes); and step 4 (anterior sellar wall removal, median 10 minutes). There were substantial variations within the recorded procedures in terms of video appearances, step duration, and step order, with only 50% of videos containing all 7 steps performed sequentially in numerical order. Despite this, the model was able to output accurate recognition of surgical phases (91% accuracy, 90% F1 score) and steps (76% accuracy, 75% F1 score).

CONCLUSIONS
In this IDEAL stage 0 study, ML techniques have been developed to automatically analyze operative videos of eTSA pituitary surgery. This technology has previously been shown to be acceptable to neurosurgical teams and patients. ML-based surgical workflow analysis has numerous potential uses, such as education (e.g., automatic indexing of contemporary operative videos for teaching), improved operative efficiency (e.g., orchestrating the entire surgical team to a common workflow), and improved patient outcomes (e.g., comparison of surgical techniques or early detection of adverse events). Future directions include the real-time integration of Touch Surgery into the live operative environment as an IDEAL stage 1 (first-in-human) study, and further development of underpinning ML models using larger data sets.
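The combined convolutional and recurrent architecture described in the abstract can be illustrated with a short sketch: a CNN extracts a feature vector per video frame, and a recurrent layer models the frame order to emit a phase label per time step. This is a minimal sketch assuming PyTorch and torchvision; the backbone, hidden size, and the three phase labels are illustrative assumptions, not the Touch Surgery model.

# Minimal sketch of a combined CNN+RNN phase classifier for operative video.
# Backbone, hidden size, and phase count are illustrative choices.
import torch
import torch.nn as nn
from torchvision import models


class PhaseRecognitionNet(nn.Module):
    def __init__(self, num_phases=3, hidden_size=256):
        super().__init__()
        # ResNet-18 backbone with its classification head removed; in practice
        # pretrained weights would be loaded (weights=None keeps the sketch offline).
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        # Temporal model over per-frame features (ResNet-18 yields 512-d features).
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_phases)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).flatten(1)  # (B*T, 512)
        out, _ = self.rnn(feats.view(b, t, -1))                   # (B, T, hidden)
        return self.classifier(out)                               # (B, T, num_phases)


# Example: per-frame phase predictions for a 16-frame clip, with 3 phases
# (e.g. nasal, sellar, closure).
model = PhaseRecognitionNet(num_phases=3)
clip = torch.randn(1, 16, 3, 224, 224)
print(model(clip).argmax(dim=-1))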



2020, Vol. 34 (S1), pp. 1-1. Author(s): Oleksiy Zaika, Mel Boulton, Roy Eagleson, Sandrine de Ribaupierre


Sensors, 2019, Vol. 19 (10), pp. 2303. Author(s): Vinicius Facco Rodrigues, Rodrigo da Rosa Righi, Cristiano André da Costa, Björn Eskofier, Andreas Maier

The Operating Room (OR) plays an important role in delivering vital medical services to patients in hospitals. Such environments contain several medical devices, equipment, and systems that produce valuable information which can be combined for biomedical and surgical workflow analysis. Given the sensitivity of sensor data in the OR, the middleware that provides data from these sensors has to respect the quality-of-service (QoS) demands of applications, independently of processing and network loads. In an OR middleware, two main bottlenecks can cause QoS problems and, consequently, directly affect user experience: (i) many user applications connecting to the middleware simultaneously; and (ii) a high number of sensors generating information from the environment. Many QoS-aware middlewares have been proposed in other fields; however, to the best of our knowledge, there is no research on this topic for the OR environment. OR environments are crowded with people and equipment, some of it specific to such settings, such as mobile X-ray machines. This article therefore proposes QualiCare, an adaptable middleware model that provides multi-level QoS, improves user experience, and increases hardware utilization for middlewares in OR environments. Our main contributions are a middleware model and an orchestration engine in charge of changing the middleware behavior to guarantee performance. Results demonstrate that adapting middleware parameters on demand reduces network usage and improves resource consumption while maintaining data provisioning.
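The adaptive behaviour described above, an orchestration engine that changes middleware parameters as load varies, can be sketched as a simple rule that maps the current load to a provisioning configuration. This is a minimal sketch in Python; the class names, thresholds, and QoS levels are illustrative assumptions and not the QualiCare implementation.

# Minimal sketch of an orchestration engine that adapts data provisioning to load.
# Thresholds, level names, and parameters are hypothetical.
from dataclasses import dataclass


@dataclass
class MiddlewareConfig:
    publish_interval_ms: int   # how often sensor readings are pushed to clients
    batch_size: int            # readings aggregated per message


class OrchestrationEngine:
    """Picks a middleware configuration from the current load."""

    def __init__(self):
        # Multi-level QoS table: heavier load -> coarser data provisioning.
        self.levels = {
            "low":    MiddlewareConfig(publish_interval_ms=50,  batch_size=1),
            "medium": MiddlewareConfig(publish_interval_ms=200, batch_size=5),
            "high":   MiddlewareConfig(publish_interval_ms=500, batch_size=20),
        }

    def select(self, connected_clients: int, sensor_msgs_per_sec: int) -> MiddlewareConfig:
        load = connected_clients * sensor_msgs_per_sec
        if load < 1_000:
            return self.levels["low"]
        if load < 10_000:
            return self.levels["medium"]
        return self.levels["high"]


engine = OrchestrationEngine()
print(engine.select(connected_clients=4, sensor_msgs_per_sec=100))     # light load
print(engine.select(connected_clients=30, sensor_msgs_per_sec=2_000))  # heavy load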



2019, Vol. 14 (6), pp. 1079-1087. Author(s): Sebastian Bodenstedt, Dominik Rivoir, Alexander Jenke, Martin Wagner, Michael Breucha, et al.


2018, Vol. 4 (1), pp. 415-418. Author(s): Nour Aldeen Jalal, Tamer Abdulnaki Alshirbaji, Knut Möller

Abstract: Surgical workflow analysis in laparoscopic surgery has been studied widely in recent years because of its many applications, such as optimising the schedule of operating rooms (OR) and developing context-aware systems that support the surgical team during an intervention. Surgical phase recognition has been applied to various kinds of laparoscopic procedures, mainly cholecystectomy. Sigmoid resection procedures are considered more complex than cholecystectomy and have not been studied extensively. The focus of this work is therefore phase recognition in sigmoid resection. In this paper, a convolutional neural network (CNN) architecture combined with a hidden Markov model (HMM) was evaluated for phase recognition in sigmoid resection videos. The CNN is an extension of a pretrained model and was fine-tuned for the recognition task. To account for the temporal structure of the phase sequences, the confidences produced by the CNN were fed into an HMM to obtain the final classification. Experimental results show low performance of the proposed method at recognising surgical phases in such complex procedures. The dataset used for the evaluation was therefore also reviewed, and statistics for each phase were generated.
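The temporal smoothing step described above, feeding per-frame CNN confidences into an HMM, can be sketched with Viterbi decoding: a transition matrix that favours staying in the current phase suppresses isolated misclassified frames. This is a minimal sketch assuming NumPy; the transition probabilities and the use of CNN softmax scores as emission likelihoods are simplifying assumptions, not the authors' exact pipeline.

# Minimal sketch: smooth per-frame CNN phase confidences with HMM Viterbi decoding.
import numpy as np


def viterbi(cnn_probs, transition, initial):
    """cnn_probs: (T, K) per-frame class probabilities from the CNN softmax.
    transition: (K, K) phase transition probabilities.
    initial: (K,) prior over the starting phase.
    Returns the most likely phase sequence of length T."""
    T, K = cnn_probs.shape
    log_trans = np.log(transition)
    log_delta = np.log(initial) + np.log(cnn_probs[0])
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + log_trans           # (K, K): from-state x to-state
        backptr[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(cnn_probs[t])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]


# Toy example with 3 phases: transitions strongly favour staying in a phase,
# which suppresses spurious single-frame switches in the CNN output.
K = 3
transition = np.full((K, K), 0.05)
np.fill_diagonal(transition, 0.9)
initial = np.full(K, 1.0 / K)
cnn_probs = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],   # noisy frame
                      [0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.1, 0.7, 0.2]])
print(viterbi(cnn_probs, transition, initial))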



2017, Vol. 2017, pp. 1-17. Author(s): Dinh Tuan Tran, Ryuhei Sakurai, Hirotake Yamazoe, Joo-Ho Lee

In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We capture these working contexts with multiple synchronized cameras that record the surgical workflow. Further, we validate the robustness of our methods with experiments involving up to 12 surgical workflow phases, with an average workflow length of 12.8 minutes. The maximum average accuracy achieved under leave-one-out cross-validation was 84.4%, which we consider a very promising result.
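The observation model described above, an LDA topic model over optical-flow motion features whose per-window topic mixtures then feed an HMM, can be sketched as follows. This is a minimal sketch assuming NumPy and scikit-learn; the window count, motion-word vocabulary, and topic count are illustrative, and random counts stand in for real optical-flow histograms.

# Minimal sketch of the feature side of this approach: quantized optical-flow
# "motion words" per time window, topic-modelled with LDA.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Pretend each time window yields a histogram over 50 quantized motion words
# (in practice these come from clustering optical-flow descriptors per camera).
n_windows, vocab_size = 120, 50
word_counts = rng.poisson(lam=3.0, size=(n_windows, vocab_size))

# Topic model over the motion-word histograms.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
topic_mixtures = lda.fit_transform(word_counts)   # (n_windows, 8), rows sum to 1

# These per-window topic mixtures are the observed values from which an HMM
# over surgical phases would be built (e.g. Viterbi decoding as sketched earlier,
# or hmmlearn with a suitable emission model).
print(topic_mixtures.shape, topic_mixtures[0].round(2))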





Author(s): Ralf Stauder, Aslı Okur, Loïc Peter, Armin Schneider, Michael Kranzfelder, et al.





