endoscopic videos
Recently Published Documents

TOTAL DOCUMENTS: 62 (last five years: 28)
H-INDEX: 11 (last five years: 4)

2021 · Vol 71 · pp. 102058
Author(s):  
Kutsev Bengisu Ozyoruk ◽  
Guliz Irem Gokceler ◽  
Taylor L. Bobrow ◽  
Gulfize Coskun ◽  
Kagan Incetan ◽  
...  

Author(s):  
Leonardo Tanzi ◽  
Pietro Piazzolla ◽  
Francesco Porpiglia ◽  
Enrico Vezzetti

Abstract
Purpose: The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for in-vivo robot-assisted radical prostatectomy (RARP), improving on the precision of a previously published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure.
Methods: This approach used a Convolutional Neural Network (CNN) based structure for semantic segmentation, followed by an elaboration of the obtained output that produced the parameters needed for attaching the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team's specialists. We then evaluated the best-performing pairing of segmentation architecture and neural network, and tested the overlay performance.
Results: U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to process almost twice as many operations per second. This segmentation technique outperformed the results of the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications also improved the 3D overlay performance, in particular the Euclidean distance between the predicted and actual model's anchor point, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and the geodesic distance between the predicted and actual model's rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073).
Conclusion: This work is a further step toward the adoption of DL and AR in the surgical domain. In future work, we will address the limitations of this approach and further improve every step of the surgical procedure.
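For reference, a minimal sketch of the Intersection over Union metric reported above, assuming the predicted and ground-truth catheter masks are available as NumPy arrays; model inference and mask extraction are outside the scope of this snippet, and the paper's actual implementation may differ.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary masks (e.g. the catheter)."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: count as a perfect match
    return float(np.logical_and(pred, target).sum() / union)

def summarize(ious):
    """Mean and standard deviation (the reported sigma) over per-frame IoUs."""
    arr = np.asarray(ious, dtype=float)
    return float(arr.mean()), float(arr.std())
```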


Author(s):  
Adrian Krenzer ◽  
Kevin Makowski ◽  
Amar Hekalo ◽  
Frank Puppe

A semi-automatic tool for fast and accurate annotation of endoscopic videos, utilizing trained object detection models, is presented. A novel workflow is implemented, and preliminary results suggest that the annotation process is nearly twice as fast with this tool compared to the current state of the art.
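A hedged sketch of how such a detector-assisted annotation loop might look: a trained model proposes boxes for every frame, and the annotator only corrects the drafts instead of drawing each box from scratch. The `detect` callable and the JSON draft format are illustrative assumptions, not the authors' actual tool.

```python
# Hypothetical pre-annotation loop; `detect` stands in for any trained object
# detector returning [{"box": [x1, y1, x2, y2], "label": str, "score": float}].
import json

def preannotate(frames, detect, score_threshold=0.5):
    """Draft annotations for human review rather than fully manual labeling."""
    drafts = []
    for idx, frame in enumerate(frames):
        boxes = [b for b in detect(frame) if b["score"] >= score_threshold]
        drafts.append({"frame": idx, "boxes": boxes, "reviewed": False})
    return drafts

def save_drafts(drafts, path="draft_annotations.json"):
    """Persist drafts so an annotation UI can load and correct them."""
    with open(path, "w") as f:
        json.dump(drafts, f, indent=2)
```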


2021 · Vol 15 (Supplement_1) · pp. S173-S174
Author(s):  
B Gutierrez Becker ◽  
E Giuffrida ◽  
M Mangia ◽  
F Arcadu ◽  
V Whitehill ◽  
...  

Abstract
Background: Endoscopic assessment is a critical procedure to assess mucosal improvement and response to therapy, and is therefore a pivotal component of clinical trial endpoints for IBD. Central scoring of endoscopic videos is challenging and time-consuming. We evaluated the feasibility of using an Artificial Intelligence (AI) algorithm to automatically produce filtered videos in which the non-readable portions are removed, with the aim of accelerating the scoring of endoscopic videos.
Methods: The AI algorithm was based on a Convolutional Neural Network trained to perform a binary classification task: assigning the frames in a colonoscopy video to one of two classes, "readable" or "unreadable." The algorithm was trained using annotations performed by two data scientists (BG, FA). The criteria to consider a frame "readable" were: i) the colon walls were within the field of view; ii) contrast and sharpness of the frame were sufficient to visually inspect the mucosa; and iii) no artifacts completely obstructed the visibility of the mucosa. The frames were extracted randomly from 351 colonoscopy videos of the etrolizumab EUCALYPTUS (NCT01336465) Phase II ulcerative colitis clinical trial. Performance of the AI algorithm was evaluated on colonoscopy videos obtained as part of the etrolizumab HICKORY (NCT02100696) and LAUREL (NCT02165215) Phase III ulcerative colitis clinical trials. Each video was filtered using the AI algorithm, resulting in a shorter video from which the sections considered unreadable by the algorithm were removed. Each of three annotators (EG, MM and MD) was randomly assigned an equal number of AI-filtered videos and raw videos. Each gastroenterologist was tasked to score temporal segments of the video according to the Mayo Clinic Endoscopic Subscore (MCES). Annotations were performed by means of an online annotation platform (Virgo Surgical Video Solutions, Inc).
Results: We measured the time it took the annotators to score raw and AI-filtered videos. We observed a statistically significant reduction (Mann-Whitney U test, p = 0.039) in the median time spent scoring AI-filtered videos (9.51 ± 0.92 minutes) compared to raw videos (10.59 ± 0.94 minutes), with substantial intra-rater agreement when evaluating AI-filtered and raw videos (Cohen's kappa 0.92 and 0.55 for the experienced and junior gastroenterologists, respectively).
Conclusion: Our analysis shows that AI can be used reliably as an assisting tool to automatically remove non-readable time segments from full colonoscopy videos. The proposed algorithm can reduce annotation times in the task of centrally reading colonoscopy videos.
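A minimal sketch of the filtering step described above, assuming a trained frame classifier `p_readable` that returns the probability a frame is readable; re-encoding the kept frames into a shorter video is omitted, and the threshold is an assumption.

```python
def filter_frames(frames, p_readable, threshold=0.5):
    """Keep indices of the frames the binary CNN classifies as 'readable'."""
    return [i for i, frame in enumerate(frames) if p_readable(frame) >= threshold]

def to_segments(kept_indices):
    """Group consecutive kept frames into temporal segments for the short video."""
    segments, start, prev = [], None, None
    for i in kept_indices:
        if start is None:
            start = i
        elif i != prev + 1:  # gap: close the current segment
            segments.append((start, prev))
            start = i
        prev = i
    if start is not None:
        segments.append((start, prev))
    return segments
```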


2021 · Vol 11
Author(s):  
Alberto Paderno ◽  
Cesare Piazza ◽  
Francesca Del Bon ◽  
Davide Lancini ◽  
Stefano Tanagli ◽  
...  

Introduction: Fully convolutional neural networks (FCNN) applied to video analysis are of particular interest in the field of head and neck oncology, given that endoscopic examination is a crucial step in the diagnosis, staging, and follow-up of patients affected by upper aero-digestive tract cancers. The aim of this study was to test FCNN-based methods for semantic segmentation of squamous cell carcinoma (SCC) of the oral cavity (OC) and oropharynx (OP).
Materials and Methods: Two datasets were retrieved from the institutional registry of a tertiary academic hospital, analyzing 34 and 45 NBI endoscopic videos of OC and OP lesions, respectively. The OC dataset was composed of 110 frames, while the OP dataset comprised 116 frames. Three FCNNs (U-Net, U-Net 3, and ResNet) were investigated for segmenting the neoplastic images. FCNN performance was evaluated for each tested network and compared to the gold standard, represented by manual annotation performed by expert clinicians.
Results: For FCNN-based segmentation of the OC dataset, the best results in terms of Dice Similarity Coefficient (Dsc) were achieved by ResNet with 5(×2) blocks and 16 filters, with a median value of 0.6559. For the OP dataset, the best results in terms of Dsc were achieved by ResNet with 4(×2) blocks and 16 filters, with a median value of 0.7603. All tested FCNNs showed very high variance, leading to very low minimum values for all evaluated metrics.
Conclusions: FCNNs have promising potential in the analysis and segmentation of OC and OP video-endoscopic images. All tested FCNN architectures demonstrated satisfactory diagnostic accuracy. Inference times were particularly short, ranging between 14 and 115 ms, showing the possibility of real-time application.
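For clarity, the Dice Similarity Coefficient used above has the standard definition Dsc = 2|A∩B| / (|A| + |B|). A minimal NumPy sketch on binary masks, assuming the predicted and manually annotated masks are boolean arrays; the reported medians would then be `np.median` over per-frame values.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice Similarity Coefficient between predicted and manual masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as a perfect match
    return float(2.0 * np.logical_and(pred, target).sum() / denom)
```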


2021 · Vol 14 · pp. 263177452199062
Author(s):  
Benjamin Gutierrez Becker ◽  
Filippo Arcadu ◽  
Andreas Thalhammer ◽  
Citlalli Gamez Serna ◽  
Owen Feehan ◽  
...  

Introduction: The Mayo Clinic Endoscopic Subscore is a commonly used grading system to assess the severity of ulcerative colitis. Correctly grading colonoscopies using the Mayo Clinic Endoscopic Subscore is a challenging task, with suboptimal rates of interrater and intrarater variability observed even among experienced and sufficiently trained experts. In recent years, several machine learning algorithms have been proposed in an effort to improve the standardization and reproducibility of Mayo Clinic Endoscopic Subscore grading.
Methods: Here we propose an end-to-end fully automated system based on deep learning to predict a binary version of the Mayo Clinic Endoscopic Subscore directly from raw colonoscopy videos. Differently from previous studies, the proposed method mimics the assessment done in practice by a gastroenterologist: traversing the whole colonoscopy video, identifying visually informative regions, and computing an overall Mayo Clinic Endoscopic Subscore. The proposed deep learning-based system has been trained and deployed on raw colonoscopies using Mayo Clinic Endoscopic Subscore ground truth provided only at the colon section level, without manually selecting the frames that drive the severity scoring of ulcerative colitis.
Results and Conclusion: Our evaluation on 1672 endoscopic videos from a multisite data set drawn from the etrolizumab Phase II Eucalyptus and Phase III Hickory and Laurel clinical trials shows that the proposed methodology can grade endoscopic videos with a high degree of accuracy and robustness (Area Under the Receiver Operating Characteristic Curve = 0.84 for Mayo Clinic Endoscopic Subscore ⩾ 1, 0.85 for ⩾ 2, and 0.85 for ⩾ 3) while requiring reduced amounts of manual annotation.
Plain language summary: Artificial intelligence can be used to automatically assess full endoscopic videos and estimate the severity of ulcerative colitis. In this work, we present an artificial intelligence algorithm for the automatic grading of ulcerative colitis in full endoscopic videos. Our artificial intelligence models were trained and evaluated on a large and diverse set of colonoscopy videos obtained from concluded clinical trials. We demonstrate not only that artificial intelligence is able to accurately grade full endoscopic videos, but also that using diverse data sets obtained from multiple sites is critical to train robust AI models that could potentially be deployed on real-world data.
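A hedged sketch of the video-level assessment the method mimics: score frames, gate out visually uninformative ones, and pool the rest into a single score that can be thresholded into the binary MCES. The quality gate and percentile pooling are illustrative assumptions; the actual system is trained end to end from section-level labels.

```python
import numpy as np

def grade_video(frame_scores, frame_quality, quality_threshold=0.5, q=95):
    """Pool frame-level P(MCES >= k) into one video-level score.

    frame_scores and frame_quality are illustrative outputs of two assumed
    frame-level networks; the paper learns this behavior without such labels.
    """
    scores = np.asarray(frame_scores, dtype=float)
    quality = np.asarray(frame_quality, dtype=float)
    informative = scores[quality >= quality_threshold]
    if informative.size == 0:
        return None  # no visually informative frames: abstain
    # Severity is driven by the worst-affected regions, so pool with a high
    # percentile rather than an average over the whole video.
    return float(np.percentile(informative, q))
```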


IEEE Access · 2021 · pp. 1-1
Author(s):  
Pradipta Sasmal ◽  
Avinash Paul ◽  
M.K. Bhuyan ◽  
Yuji Iwahori ◽  
Kunio Kasugai

Author(s):  
David Recasens ◽  
Jose Lamarca ◽  
Jose M. Facil ◽  
J.M.M. Montiel ◽  
Javier Civera
