surgical workflow
Recently Published Documents


TOTAL DOCUMENTS

138
(FIVE YEARS 72)

H-INDEX

12
(FIVE YEARS 3)

2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Shane Keogh ◽  
Deirdre Laski

Background. Modern surgical research has broadened to include the investigation of surgical workflow. Rigorous analysis of the surgical process has placed a particular focus on distractions. Operating theatres are inherently full of distractions, many not pertinent to the surgical process. Distractions have the potential to increase surgeon stress, operative time, and complications. Our study aims to objectively identify, classify, and quantify distractions during the surgical process. Methods. 46 general surgical procedures were observed within a tertiary Irish hospital between June 2019 and October 2019. An established observational tool was used to apply a structured observation to all operations. Additionally, a nine-point ordinal, behaviourally anchored scoring scale was used to assign an interference level to each distraction. Results. The total operative observation time was 4605 minutes (mean = 100.11 minutes, std. deviation: 45.6 minutes). Overall, 855 intraoperative distractions were coded. On average, 18.58 distractions were coded per operation (std. deviation: 6.649; range: 5–34), with 11.14 distractions occurring per hour. Entering/exiting (n = 380, 42.88%) and case-irrelevant communication (CIC) (n = 251, 28.32%) occurred most frequently. The disruption rate was highest within the first (n = 275, 32%) and fourth operative quartiles (n = 342, 41%). The highest interference levels were observed from equipment issues and procedural interruptions. Anaesthetists initiated CIC more frequently (2.72 per operation) than nurses (1.57) and surgeons (1.17). Conclusion. Our results confirm that distractions are prevalent within the operating theatre and contribute significant interference with surgical workflow. Steps can be taken to reduce their overall prevalence and interference level by drawing upon a systems-based perspective.
However, due to the ubiquitous nature of distractions, surgeons may need to develop skills that help them resume interrupted primary tasks, so as to negate the effects of distraction on surgical outcomes. These data were presented as a conference abstract at the 28th International Congress of the European Association for Endoscopic Surgery (EAES) Virtual Congress, 23–26 June 2020.
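The headline rates in the Results section follow directly from the reported totals; a minimal arithmetic check (all figures taken from the abstract above):

```python
# Totals reported in the abstract
total_minutes = 4605          # total operative observation time
total_distractions = 855      # coded intraoperative distractions
n_operations = 46             # observed general surgical procedures

per_operation = total_distractions / n_operations      # mean distractions per case
per_hour = total_distractions / (total_minutes / 60)   # distraction rate per hour

print(round(per_operation, 2))  # ≈ 18.59 (abstract reports 18.58)
print(round(per_hour, 2))       # ≈ 11.14, matching the abstract
```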


2021 ◽  
Vol 268 ◽  
pp. 318-325
Author(s):  
Hossein Mohamadipanah ◽  
LaDonna Kearse ◽  
Anna Witt ◽  
Brett Wise ◽  
Su Yang ◽  
...  

2021 ◽  
Vol 8 ◽  
Author(s):  
Christian Marzi ◽  
Tom Prinzen ◽  
Julia Haag ◽  
Thomas Klenzner ◽  
Franziska Mathis-Ullrich

Robotic systems for surgery of the inner ear must enable highly precise movement relative to the patient. To allow for suitable collaboration between surgeon and robot, these systems should not interrupt the surgical workflow and should integrate well into existing processes. As the surgical microscope is a standard tool present in almost every microsurgical intervention, and is located close to the situs, it is well suited to be extended by assistive robotic systems, for instance a microscope-mounted laser for ablation. As both patient and microscope are subject to movement during surgery, a well-integrated robotic system must be able to comply with these movements. To solve the problem of on-line registration of an assistance system to the situs, the standard of care often relies on marker-based technologies, which require markers to be rigidly attached to the patient. This not only takes preparation time but also increases the invasiveness of the procedure, and the tracking system's line of sight must not be obstructed. This work aims to utilize the existing imaging system to detect relative movements between the surgical microscope and the patient, with the resulting data used to maintain registration. No artificial markers or landmarks are required; instead, an approach for feature-based tracking with respect to the surgical environment in otology is presented. The images for tracking are obtained from a two-dimensional RGB stream of a surgical microscope. Because of the bony structure of the surgical site, the recorded cochleostomy scene moves nearly rigidly. The goal of the tracking algorithm is to estimate motion from the given image stream alone. After preprocessing, features are detected in two subsequent images and their affine transformation is computed by a random sample consensus (RANSAC) algorithm.
The proposed method provides movement feedback with up to 93.2 μm precision without the need for any additional hardware in the operating room or the attachment of fiducials to the situs. Over long-term tracking, however, error accumulates.
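The core of the tracking step — robustly estimating the affine motion between feature sets from two subsequent frames — can be sketched with a generic RANSAC loop. This is an illustrative reimplementation on synthetic correspondences, not the authors' code; in a real pipeline the point pairs would come from feature detection and matching on the microscope's RGB stream (e.g. with OpenCV):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (2x3 matrix) mapping src points to dst points."""
    A = np.hstack([src, np.ones((len(src), 1))])   # n x 3 design matrix [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M = dst column-wise
    return M.T                                     # 2x3 affine matrix [R | t]

def ransac_affine(src, dst, n_iters=200, tol=2.0, seed=0):
    """Estimate an affine transform robust to outlier matches via RANSAC."""
    rng = np.random.default_rng(seed)
    best_M, best_inliers = None, 0
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample: 3 pairs
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]                  # apply candidate transform
        inliers = int(np.sum(np.linalg.norm(pred - dst, axis=1) < tol))
        if inliers > best_inliers:                         # keep best consensus set
            best_M, best_inliers = M, inliers
    return best_M, best_inliers

# Synthetic example: a small rigid motion (rotation + translation) plus bad matches
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t = np.array([3.0, -1.5])
rng = np.random.default_rng(1)
src = rng.uniform(0, 500, size=(40, 2))        # feature locations in frame k
dst = src @ R.T + t                            # matched locations in frame k+1
dst[:5] += rng.uniform(50, 100, size=(5, 2))   # corrupt 5 matches (outliers)

M, inliers = ransac_affine(src, dst)
print(inliers)   # expected: 35 (the 5 corrupted matches are rejected)
```

The minimal sample size of 3 point pairs is what an affine model requires; the consensus step is what makes the estimate tolerate mismatched features, which matters in a cluttered surgical scene.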


2021 ◽  
Vol 233 (5) ◽  
pp. S194
Author(s):  
Maxwell McMahon ◽  
Katherine Ott ◽  
Jonathan Vacek ◽  
Andrew Hu ◽  
Chris De Boer ◽  
...  

2021 ◽  
pp. 1-8
Author(s):  
Danyal Z. Khan ◽  
Imanol Luengo ◽  
Santiago Barbarisi ◽  
Carole Addis ◽  
Lucy Culshaw ◽  
...  

OBJECTIVE Surgical workflow analysis involves systematically breaking down operations into key phases and steps. Automatic analysis of this workflow has potential uses for surgical training, preoperative planning, and outcome prediction. Recent advances in machine learning (ML) and computer vision have allowed accurate automated workflow analysis of operative videos. In this Idea, Development, Exploration, Assessment, Long-term study (IDEAL) stage 0 study, the authors sought to use Touch Surgery for the development and validation of an ML-powered analysis of phases and steps in the endoscopic transsphenoidal approach (eTSA) for pituitary adenoma resection, a first for neurosurgery. METHODS The surgical phases and steps of 50 anonymized eTSA operative videos were labeled by expert surgeons. Forty videos were used to train a combined convolutional and recurrent neural network model by Touch Surgery. Ten videos were used for model evaluation (accuracy, F1 score), comparing the phase and step recognition of surgeons to the automatic detection of the ML model. RESULTS The longest phase was the sellar phase (median 28 minutes), followed by the nasal phase (median 22 minutes) and the closure phase (median 14 minutes). The longest steps were step 5 (tumor identification and excision, median 17 minutes); step 3 (posterior septectomy and removal of sphenoid septations, median 14 minutes); and step 4 (anterior sellar wall removal, median 10 minutes). There were substantial variations within the recorded procedures in terms of video appearances, step duration, and step order, with only 50% of videos containing all 7 steps performed sequentially in numerical order. Despite this, the model was able to output accurate recognition of surgical phases (91% accuracy, 90% F1 score) and steps (76% accuracy, 75% F1 score). CONCLUSIONS In this IDEAL stage 0 study, ML techniques have been developed to automatically analyze operative videos of eTSA pituitary surgery. 
This technology has previously been shown to be acceptable to neurosurgical teams and patients. ML-based surgical workflow analysis has numerous potential uses—such as education (e.g., automatic indexing of contemporary operative videos for teaching), improved operative efficiency (e.g., orchestrating the entire surgical team to a common workflow), and improved patient outcomes (e.g., comparison of surgical techniques or early detection of adverse events). Future directions include the real-time integration of Touch Surgery into the live operative environment as an IDEAL stage 1 (first-in-human) study, and further development of underpinning ML models using larger data sets.
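The reported accuracy and F1 scores are standard frame-level classification metrics. A minimal sketch of how phase recognition might be scored against expert labels (toy per-frame labels for illustration, not the study's data or the Touch Surgery model):

```python
def accuracy(y_true, y_pred):
    """Fraction of frames whose predicted phase matches the expert label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-phase F1 scores."""
    f1s = []
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy per-frame phase labels for one video (phase names from the abstract)
y_true = ["nasal"] * 4 + ["sellar"] * 6 + ["closure"] * 2
y_pred = ["nasal"] * 3 + ["sellar"] * 7 + ["closure"] * 2  # one boundary frame wrong

print(round(accuracy(y_true, y_pred), 3))  # 0.917 — 11 of 12 frames correct
print(round(macro_f1(y_true, y_pred), 3))  # 0.927
```

Macro-averaging gives short phases (such as closure) equal weight to long ones, which is why accuracy and F1 can diverge when phase durations are as skewed as those reported here.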


2021 ◽  
Vol 23 (Supplement_4) ◽  
pp. iv16-iv17
Author(s):  
Adam Nunn ◽  
Neil Barua

Abstract Aims The use of intraoperative ultrasound (iUS) has been associated with prolonged survival in patients with high-grade glioma. However, iUS remains an under-utilised surgical adjunct in many neurosurgical units due to greater familiarity with CT and MR imaging. Navigated intraoperative ultrasound (NiUS) facilitates co-registration of pre-operative MR imaging with iUS, offering a number of advantages over standard neuronavigation. The aim of this study was to describe our initial experience with NiUS for brain tumour resection in adults. Method We prospectively collected data on patient demographics, tumour location and histology, extent of resection and early post-operative outcome in 9 consecutive patients. Brainlab neuronavigation (Brainlab, Germany) and the BK5000 cranial ultrasound probe (BK Medical, Denmark) were used in all cases. We also collected data on surgical intent and the use of surgical adjuncts including neurophysiology monitoring, DTI and 5-ALA. Results NiUS was used in 9 patients (6 male, 3 female). iUS scans were successfully co-registered in all cases. Histological diagnoses were GBM (7 patients), melanoma (1 patient) and oligodendroglioma (1 patient). NiUS was used in conjunction with the following techniques and adjuncts: awake craniotomy (2 cases), DTI (all cases), neurophysiology monitoring (4 cases) and 5-ALA (7 cases). Gross total resection was achieved in 8 patients. The mean operative time was 4 hours and 7 minutes, which is significantly lower than that reported in a recently published series involving intra-operative MRI. No patient suffered any deterioration in neurological status in the early post-operative period. Conclusion NiUS was rapidly assimilated into our surgical workflow, with successful co-registration in all cases. NiUS was used successfully in conjunction with awake craniotomy, neurophysiology monitoring, DTI and 5-ALA for both enhancing and non-enhancing tumours.
Based on our early experience, we offer learning points on patient positioning, setup of equipment and interpretation of iUS. Further studies are required to determine the impact of NiUS on patient outcomes.

