Endo-Depth-and-Motion: Reconstruction and Tracking in Endoscopic Videos using Depth Networks and Photometric Constraints

Author(s):  
David Recasens ◽  
Jose Lamarca ◽  
Jose M. Facil ◽  
J.M.M Montiel ◽  
Javier Civera
2021 ◽  
Vol 71 ◽  
pp. 102058
Author(s):  
Kutsev Bengisu Ozyoruk ◽  
Guliz Irem Gokceler ◽  
Taylor L. Bobrow ◽  
Gulfize Coskun ◽  
Kagan Incetan ◽  
...  

2021 ◽  
Vol 14 ◽  
pp. 263177452199062
Author(s):  
Benjamin Gutierrez Becker ◽  
Filippo Arcadu ◽  
Andreas Thalhammer ◽  
Citlalli Gamez Serna ◽  
Owen Feehan ◽  
...  

Introduction: The Mayo Clinic Endoscopic Subscore is a commonly used grading system to assess the severity of ulcerative colitis. Correctly grading colonoscopies using the Mayo Clinic Endoscopic Subscore is a challenging task, with suboptimal rates of interrater and intrarater variability observed even among experienced and sufficiently trained experts. In recent years, several machine learning algorithms have been proposed in an effort to improve the standardization and reproducibility of Mayo Clinic Endoscopic Subscore grading.

Methods: Here we propose an end-to-end, fully automated system based on deep learning to predict a binary version of the Mayo Clinic Endoscopic Subscore directly from raw colonoscopy videos. Unlike previous studies, the proposed method mimics the assessment performed in practice by a gastroenterologist: it traverses the whole colonoscopy video, identifies visually informative regions, and computes an overall Mayo Clinic Endoscopic Subscore. The proposed deep learning–based system was trained and deployed on raw colonoscopies using Mayo Clinic Endoscopic Subscore ground truth provided only at the colon-section level, without manually selecting the frames that drive the severity scoring of ulcerative colitis.

Results and Conclusion: Our evaluation on 1672 endoscopic videos from a multisite data set drawn from the etrolizumab Phase II Eucalyptus and Phase III Hickory and Laurel clinical trials shows that the proposed methodology can grade endoscopic videos with a high degree of accuracy and robustness (Area Under the Receiver Operating Characteristic Curve = 0.84 for Mayo Clinic Endoscopic Subscore ⩾ 1, 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 2 and 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 3) and with reduced amounts of manual annotation.

Plain language summary: Artificial intelligence can be used to automatically assess full endoscopic videos and estimate the severity of ulcerative colitis. In this work, we present an artificial intelligence algorithm for the automatic grading of ulcerative colitis in full endoscopic videos. Our artificial intelligence models were trained and evaluated on a large and diverse set of colonoscopy videos obtained from concluded clinical trials. We demonstrate not only that artificial intelligence is able to accurately grade full endoscopic videos, but also that using diverse data sets obtained from multiple sites is critical to train robust AI models that could potentially be deployed on real-world data.
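As a rough illustration of the weakly supervised setup described above (frame-level features pooled into a single video-level binary grade), the sketch below assumes a PyTorch ResNet-18 backbone with an attention-style pooling layer; the module names, pooling choice, and binary threshold are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' code) of weakly supervised video-level grading:
# a frame-level CNN scores every frame, attention-style pooling emphasizes
# visually informative frames, and a sigmoid head yields one binary
# Mayo Clinic Endoscopic Subscore decision per video. Names are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models


class VideoMayoGrader(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # frame-level feature extractor
        backbone.fc = nn.Identity()                # keep 512-d frame embeddings
        self.backbone = backbone
        self.attention = nn.Linear(512, 1)         # scores frame informativeness
        self.classifier = nn.Linear(512, 1)        # binary MCES head (e.g. >= 2)

    def forward(self, frames):                     # frames: (num_frames, 3, H, W)
        feats = self.backbone(frames)              # (num_frames, 512)
        weights = torch.softmax(self.attention(feats), dim=0)  # attention over frames
        video_feat = (weights * feats).sum(dim=0)  # weighted average over the video
        return torch.sigmoid(self.classifier(video_feat))      # P(positive grade)


# Example: grade a short clip of 16 frames at 224x224 resolution.
clip = torch.randn(16, 3, 224, 224)
probability = VideoMayoGrader()(clip)
print(f"P(MCES >= threshold) = {probability.item():.3f}")
```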


Author(s):  
Leonardo Tanzi ◽  
Pietro Piazzolla ◽  
Francesco Porpiglia ◽  
Enrico Vezzetti

Abstract

Purpose: The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for in-vivo robot-assisted radical prostatectomy (RARP), improving on the precision of a previously published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure.

Methods: This approach was carried out using a Convolutional Neural Network (CNN) based structure for semantic segmentation and a subsequent elaboration of the obtained output, which produced the parameters needed for attaching the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team's specialists. We then evaluated the best-performing combination of segmentation architecture and neural network backbone and tested the overlay performance.

Results: U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to process almost twice as many operations per second. This segmentation technique outperformed the results of the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications also led to an improvement in the 3D overlay performance, in particular in the Euclidean distance between the predicted and actual model's anchor point, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and in the geodesic distance between the predicted and actual model's rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073).

Conclusion: This work is a further step toward the adoption of DL and AR in the surgery domain. In future works, we will overcome the limits of this approach and further improve every step of the surgical procedure.
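The overlay metrics reported above (Intersection over Union for the segmentation masks and the geodesic distance between predicted and actual rotations) can be computed as in the short NumPy sketch below. It is an illustrative reimplementation under the assumption of boolean masks and 3x3 rotation matrices, not the authors' evaluation code.

```python
# Minimal sketch of the evaluation metrics described above, assuming boolean
# segmentation masks and 3x3 rotation matrices; illustrative only.
import numpy as np


def intersection_over_union(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(intersection / union) if union > 0 else 1.0


def geodesic_distance(r_pred: np.ndarray, r_true: np.ndarray) -> float:
    """Angle (radians) of the relative rotation between two 3x3 rotation matrices."""
    relative = r_pred.T @ r_true
    cos_angle = np.clip((np.trace(relative) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_angle))


# Example: a perfectly overlapping mask and a 10-degree rotation error.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
theta = np.deg2rad(10.0)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
print(intersection_over_union(mask, mask))   # 1.0
print(geodesic_distance(np.eye(3), rot))     # ~0.1745 rad
```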


2018 ◽  
Vol 160 (3) ◽  
pp. 533-539 ◽  
Author(s):  
Steven Coppess ◽  
Reema Padia ◽  
David Horn ◽  
Sanjay R. Parikh ◽  
Andrew Inglis ◽  
...  

Objective: While the Benjamin-Inglis classification system is widely used to categorize laryngeal clefts, it does not clearly differentiate a type 1 cleft from normal anatomy, and there is no widely accepted or validated protocol for systematically evaluating interarytenoid mucosal height. We sought to propose the interarytenoid assessment protocol as a method to standardize the description of the interarytenoid anatomy and to test its reliability.

Study Design: Retrospective review of endoscopic videos.

Setting: Pediatric academic center.

Subjects and Methods: The interarytenoid assessment protocol comprises 4 steps for evaluation of the interarytenoid region relative to known anatomic landmarks in the supraglottis, glottis, and subglottis. Thirty consecutively selected videos of the protocol were reviewed by 4 otolaryngologists. The raters were blinded to identifying information, and the video order was randomized for each review. We assessed protocol completion times and calculated Cohen's linear-weighted κ coefficient between blinded expert raters and with the operating surgeon to evaluate interrater/intrarater reliability.

Results: Median age was 4.9 years (59 months; range, 1 month to 20 years). Median completion time was 144 seconds. Interrater and intrarater reliability showed substantial agreement (interrater κ = 0.71 [95% confidence interval (CI), 0.55-0.87]; intrarater mean κ = 0.70 [95% CI, 0.59-0.92/rater 1, 0.47-0.85/rater 2]; P < .001). Comparing raters to the operating surgeon demonstrated substantial agreement (mean κ = 0.62; 95% CI, 0.31-0.79/rater 1, 0.48-0.89/rater 2; P < .001).

Conclusion: The interarytenoid assessment protocol appears reliable in describing interarytenoid anatomy. Rapid completion times and substantial interrater/intrarater reliability were demonstrated. Incorporation of this protocol may provide important steps toward improved standardization in the anatomic description of the interarytenoid region in pediatric dysphagia.
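Cohen's linearly weighted κ, used above to quantify interrater and intrarater agreement on ordinal ratings, can be computed with scikit-learn as in the following sketch; the example ratings are hypothetical and only demonstrate the call, not study data.

```python
# Minimal sketch of the interrater-reliability computation described above:
# Cohen's linearly weighted kappa between two raters' ordinal assessments.
# The ratings below are hypothetical placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

rater_a = [0, 1, 1, 2, 3, 2, 1, 0, 2, 3]   # hypothetical grades from rater A
rater_b = [0, 1, 2, 2, 3, 2, 1, 1, 2, 3]   # hypothetical grades from rater B, same videos

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Linear-weighted Cohen's kappa = {kappa:.2f}")
```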

