surgical video
Recently Published Documents


TOTAL DOCUMENTS: 137 (FIVE YEARS: 97)
H-INDEX: 6 (FIVE YEARS: 2)

2021 ◽ Vol. Publish Ahead of Print
Author(s): Guilherme J. Agnoletto, Sandrine Couldwell, Leslie R. Halpern, David R. Adams, William T. Couldwell

2021
Author(s): Vartika Sengar, Vivek B S, Karthik Seemakurthy, Jayavardhana Gubbi, Balamuralidhar P.

2021 ◽ pp. 1-10

OBJECTIVE Experts can assess surgeon skill from surgical video, but the number of expert surgeons available to do so is limited. Automated performance metrics (APMs) are a promising alternative but, to date, have not been generated from operative video in neurosurgery. The authors aimed to evaluate whether video-based APMs can predict task success and blood loss during endonasal endoscopic surgery in a validated cadaveric simulator of internal carotid artery injury.

METHODS Videos of cadaveric simulation trials by 73 neurosurgeons and otorhinolaryngologists were analyzed and manually annotated with bounding boxes to identify the surgical instruments in each frame. APMs in five domains (instrument usage, time-to-phase, instrument disappearance, instrument movement, and instrument interactions) were defined on the basis of expert analysis and task-specific surgical progressions. Bounding-box data on instrument position were then used to generate APMs for each trial. Multivariate linear regression was used to test for associations between APMs and both blood loss and task success (hemorrhage control in less than 5 minutes). The APMs of 93 successful trials were compared with those of 49 unsuccessful trials.

RESULTS In total, 29,151 frames of surgical video were annotated. Successful simulation trials had superior APMs in every domain, including proportionately more time spent with the key instruments in view (p < 0.001) and less time without hemorrhage control (p = 0.002). APMs in all domains improved in subsequent trials after participants received personalized expert instruction. Attending surgeons had superior instrument usage, time-to-phase, and instrument disappearance metrics compared with resident surgeons (p < 0.01). APMs predicted surgeon performance better than surgeon training level or prior experience did. A regression model that included APMs predicted blood loss with an R² of 0.87 (p < 0.001).

CONCLUSIONS Video-based APMs were better predictors of simulation trial success and blood loss than surgeon characteristics such as case volume and attending status. Surgeon educators can use APMs to assess competency, quantify performance, and provide actionable, structured feedback in order to improve patient outcomes. Validation of APMs provides a benchmark for the further development of fully automated video assessment pipelines that use machine learning and computer vision.
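As a rough illustration of the workflow described in this abstract, the sketch below shows how bounding-box annotations might be reduced to a simple APM (the fraction of frames in which a key instrument is in view) and how a set of APMs could feed a multivariate linear regression on blood loss. The field names, the choice of "suction" as the key instrument, and the synthetic trial data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the published pipeline): turn bounding-box
# annotations into one simple APM and regress blood loss on a set of APMs.
import numpy as np
from sklearn.linear_model import LinearRegression

def instrument_visibility_apm(frames, key_instrument="suction"):
    """Fraction of annotated frames in which the key instrument is in view."""
    visible = sum(1 for boxes in frames
                  if any(b["label"] == key_instrument for b in boxes))
    return visible / max(len(frames), 1)

# Toy trial: each frame holds a list of bounding boxes {label, x, y, w, h}.
trial_frames = [
    [{"label": "suction", "x": 120, "y": 80, "w": 40, "h": 30}],
    [{"label": "grasper", "x": 200, "y": 150, "w": 35, "h": 25}],
    [{"label": "suction", "x": 130, "y": 90, "w": 40, "h": 30},
     {"label": "grasper", "x": 210, "y": 140, "w": 35, "h": 25}],
]
print("instrument-visibility APM:", instrument_visibility_apm(trial_frames))

# Multivariate linear regression of blood loss on APMs, analogous in spirit to
# the model reported above; the numbers below are synthetic placeholders.
rng = np.random.default_rng(0)
apms = rng.uniform(0, 1, size=(142, 5))           # 142 trials x 5 APM domains
blood_loss = 500 - 300 * apms[:, 0] + rng.normal(0, 40, 142)
model = LinearRegression().fit(apms, blood_loss)
print("R^2 on the toy data:", round(model.score(apms, blood_loss), 2))
```

In the published study the regression over real APMs reached an R² of 0.87; the toy numbers here only demonstrate the mechanics of going from per-frame annotations to a trial-level prediction.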


Author(s): José Ernesto Chang M., Guilherme Salemi Riechelmann, Sebastián Aníbal Alejandro, Samantha Lorena Paganelli, Evelyn Judith Vela Rojas, ...

ASVIDE ◽ 2021 ◽ pp. 348-348
Author(s): John J. Kelly, Christopher K. Mehta, Christine Herman, Joshua C. Grimm, Brittany J. Cannon, ...

Author(s): Jinglu Zhang, Yinyu Nie, Yao Lyu, Xiaosong Yang, Jian Chang, ...

Abstract

Purpose: Surgical gesture recognition is an essential task for providing intraoperative context-aware assistance and scheduling clinical resources. However, previous methods have limitations in capturing long-range temporal information, and many of them require additional sensors. To address these challenges, we propose a symmetric dilated network, SD-Net, to jointly recognize surgical gestures and assess surgical skill level using only RGB surgical video sequences.

Methods: We use symmetric 1D temporal dilated convolution layers to hierarchically capture gesture cues under different receptive fields, so that features over different time spans can be aggregated. In addition, a self-attention network is bridged in the middle to compute global frame-to-frame relations.

Results: We evaluate our method on the robotic suturing task from the JIGSAWS dataset. On gesture recognition, it outperforms the state of the art by up to ~6 points in frame-wise accuracy and ~8 points in F1@50 score. We also maintain 100% prediction accuracy on the skill assessment task under the LOSO validation scheme.

Conclusion: The results indicate that our architecture obtains representative surgical video features by extensively considering the spatial, temporal, and relational context of the raw video input. Furthermore, the better performance under multi-task learning implies that surgical skill assessment has a complementary effect on the gesture recognition task.
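To make the architecture described in this abstract more concrete, here is a minimal PyTorch sketch of the general idea: temporal 1D convolutions with growing dilation, a self-attention bridge that computes global frame-to-frame relations, and a mirrored (symmetric) dilated stack that emits per-frame gesture logits. The layer counts, channel widths, input feature dimension, and residual connections are assumptions for illustration; this is not the authors' SD-Net implementation.

```python
# Minimal sketch of a symmetric dilated temporal network with a self-attention
# bridge (assumed hyperparameters; not the published SD-Net code).
import torch
import torch.nn as nn

class SymmetricDilatedSketch(nn.Module):
    def __init__(self, in_dim=2048, hid=64, n_classes=10, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, hid, kernel_size=1)
        # Encoder half: dilation grows, widening the temporal receptive field.
        self.enc = nn.ModuleList(
            nn.Conv1d(hid, hid, kernel_size=3, padding=d, dilation=d) for d in dilations)
        # Self-attention bridge for global frame-to-frame relativity.
        self.attn = nn.MultiheadAttention(embed_dim=hid, num_heads=4, batch_first=True)
        # Decoder half: mirrored dilations give the "symmetric" structure.
        self.dec = nn.ModuleList(
            nn.Conv1d(hid, hid, kernel_size=3, padding=d, dilation=d) for d in reversed(dilations))
        self.out = nn.Conv1d(hid, n_classes, kernel_size=1)

    def forward(self, x):                # x: (batch, frames, in_dim) frame features
        h = self.inp(x.transpose(1, 2))  # -> (batch, hid, frames)
        for conv in self.enc:
            h = torch.relu(conv(h)) + h  # residual dilated temporal convolution
        a = h.transpose(1, 2)            # -> (batch, frames, hid) for attention
        a, _ = self.attn(a, a, a)
        h = h + a.transpose(1, 2)
        for conv in self.dec:
            h = torch.relu(conv(h)) + h
        return self.out(h).transpose(1, 2)  # per-frame gesture logits

# Toy usage: 8 clips of 200 frames with 2048-dim per-frame visual features.
logits = SymmetricDilatedSketch()(torch.randn(8, 200, 2048))
print(logits.shape)  # torch.Size([8, 200, 10])
```

A separate head pooled over the sequence could be attached for the skill-classification branch of the multi-task setup; it is omitted here to keep the sketch short.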


2021 ◽ Vol. 5 (2) ◽ pp. V15
Author(s): Robert M. Conway, Nathan C. Tu, Pedrom C. Sioshansi, Dennis I. Bojrab, Jeffrey T. Jacob, ...

Cochlear implantation (CI) has become an option for the treatment of hearing loss after translabyrinthine resection of vestibular schwannoma. This surgical video presents the case of a 67-year-old man who underwent translabyrinthine resection of a vestibular schwannoma with simultaneous CI and closure with a hydroxyapatite (HA) cement cranioplasty. HA cement cranioplasty can be used in place of an abdominal fat graft for closure of translabyrinthine approaches, with a similar efficacy and complication profile. To the authors' knowledge, this is the first reported case of simultaneous CI and translabyrinthine resection of a vestibular schwannoma with HA cement cranioplasty. The video can be found here: https://stream.cadmore.media/r10.3171/2021.7.FOCVID211

