Diagnosis of ulcerative colitis from endoscopic images based on deep learning

2022 ◽  
Vol 73 ◽  
pp. 103443
Author(s):  
Xudong Luo ◽  
Junhua Zhang ◽  
Zonggui Li ◽  
Ruiqi Yang
Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett’s cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an Artificial Intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett’s cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett’s cancer. Results: The sensitivity, specificity, F1 score and accuracy of the AI system in the differentiation between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, who achieved a sensitivity, specificity, F1 score and accuracy of 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system to the prediction of submucosal invasion in endoscopic images of Barrett’s cancer. The AI system performed on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett’s cancer remains challenging for both experts and AI.
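The four figures reported above all derive from a single 2×2 confusion matrix. As a minimal sketch of how they relate, the hypothetical counts below are chosen to roughly reproduce the AI system's reported values on the 230-image set (the study does not publish the exact confusion matrix, so these counts are illustrative only):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, F1 and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, f1, accuracy

# Hypothetical counts: 122 T1b taken as positives, 108 T1a as negatives.
sens, spec, f1, acc = binary_metrics(tp=94, fp=39, tn=69, fn=28)
```

With these assumed counts the function yields roughly 0.77 sensitivity, 0.64 specificity and 0.71 accuracy, matching the reported figures to two decimals.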


2021 ◽  
Vol 14 ◽  
pp. 263177452199062
Author(s):  
Benjamin Gutierrez Becker ◽  
Filippo Arcadu ◽  
Andreas Thalhammer ◽  
Citlalli Gamez Serna ◽  
Owen Feehan ◽  
...  

Introduction: The Mayo Clinic Endoscopic Subscore is a commonly used grading system to assess the severity of ulcerative colitis. Correctly grading colonoscopies using the Mayo Clinic Endoscopic Subscore is a challenging task, with suboptimal rates of interrater and intrarater variability observed even among experienced and sufficiently trained experts. In recent years, several machine learning algorithms have been proposed in an effort to improve the standardization and reproducibility of Mayo Clinic Endoscopic Subscore grading. Methods: Here we propose an end-to-end, fully automated system based on deep learning to predict a binary version of the Mayo Clinic Endoscopic Subscore directly from raw colonoscopy videos. Differently from previous studies, the proposed method mimics the assessment done in practice by a gastroenterologist: traversing the whole colonoscopy video, identifying visually informative regions and computing an overall Mayo Clinic Endoscopic Subscore. The proposed deep learning–based system was trained and deployed on raw colonoscopies using Mayo Clinic Endoscopic Subscore ground truth provided only at the colon-section level, without manually selecting the frames driving the severity scoring of ulcerative colitis. Results and Conclusion: Our evaluation on 1672 endoscopic videos from a multisite data set drawn from the etrolizumab Phase II Eucalyptus and Phase III Hickory and Laurel clinical trials shows that the proposed methodology can grade endoscopic videos with a high degree of accuracy and robustness (Area Under the Receiver Operating Characteristic Curve = 0.84 for Mayo Clinic Endoscopic Subscore ⩾ 1, 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 2 and 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 3) and with reduced amounts of manual annotation.
Plain language summary: Artificial intelligence can be used to automatically assess full endoscopic videos and estimate the severity of ulcerative colitis. In this work, we present an artificial intelligence algorithm for the automatic grading of ulcerative colitis in full endoscopic videos. Our artificial intelligence models were trained and evaluated on a large and diverse set of colonoscopy videos obtained from concluded clinical trials. We demonstrate not only that artificial intelligence is able to accurately grade full endoscopic videos, but also that using diverse data sets obtained from multiple sites is critical to train robust AI models that could potentially be deployed on real-world data.
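The video-level grading described above hinges on aggregating per-frame predictions while discarding uninformative frames. The sketch below illustrates that general idea only; the function, its inputs and the quality threshold are hypothetical stand-ins, not the authors' actual pipeline:

```python
import numpy as np

def video_score(frame_probs, frame_quality, quality_threshold=0.5):
    """Aggregate per-frame severity probabilities into one video-level score.

    frame_probs   : per-frame probability that severity exceeds the cutoff
    frame_quality : per-frame informativeness score (e.g. blur/occlusion model)
    Returns the mean probability over informative frames, or None if no frame
    passes the quality threshold.
    """
    probs = np.asarray(frame_probs, dtype=float)
    quality = np.asarray(frame_quality, dtype=float)
    informative = quality >= quality_threshold   # keep visually informative frames
    if not informative.any():
        return None                              # no usable frames in this video
    return float(probs[informative].mean())
```

In this simplified form, a video with frame probabilities [0.9, 0.2, 0.8] and quality scores [0.9, 0.1, 0.8] would be scored on the two informative frames only.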


Gut ◽  
2020 ◽  
Vol 69 (10) ◽  
pp. 1778-1786 ◽  
Author(s):  
Peter Bossuyt ◽  
Hiroshi Nakase ◽  
Séverine Vermeire ◽  
Gert de Hertogh ◽  
Tom Eelbode ◽  
...  

Background: The objective evaluation of endoscopic disease activity is key in ulcerative colitis (UC). A composite of endoscopic and histological factors is the goal in UC treatment. We aimed to develop an operator-independent, computer-based tool to determine UC activity based on endoscopic images. Methods: First, we built a computer algorithm using data from 29 consecutive patients with UC and 6 healthy controls (construction cohort). The algorithm (red density: RD) was based on the red channel of the red-green-blue pixel values and pattern recognition from endoscopic images. The algorithm was refined in sequential steps to optimise correlation with endoscopic and histological disease activity. In a second phase, the operating properties were tested in patients with UC flares requiring treatment escalation. To validate the algorithm, we tested the correlation between the RD score and clinical, endoscopic and histological features in a validation cohort. Results: We constructed the algorithm based on the integration of pixel colour data from the redness colour map along with vascular pattern detection. These data were linked with the Robarts histological index (RHI) in a multiple regression analysis. In the construction cohort, RD correlated with RHI (r=0.74, p<0.0001), Mayo endoscopic subscores (r=0.76, p<0.0001) and UC Endoscopic Index of Severity scores (r=0.74, p<0.0001). The RD sensitivity to change had a standardised effect size of 1.16. In the validation set, RD correlated with RHI (r=0.65, p=0.00002). Conclusions: RD provides an objective computer-based score that accurately assesses disease activity in UC. In a validation study, RD correlated with endoscopic and histological disease activity.
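The core ingredient of the RD approach, a score built from the red channel of the RGB pixel values, can be sketched as follows. This is a deliberately minimal illustration of a red-fraction score: the actual RD algorithm additionally uses vascular pattern detection and multiple regression against the Robarts histological index, which are omitted here.

```python
import numpy as np

def redness_score(rgb_image):
    """Mean relative redness of an H x W x 3 uint8 RGB image, in [0, 1].

    A simplified stand-in for a redness colour map: each pixel contributes
    its red fraction R / (R + G + B), and the image score is the mean.
    """
    img = rgb_image.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    total = r + g + b + 1e-8            # avoid division by zero on black pixels
    redness = r / total                  # per-pixel red fraction
    return float(redness.mean())
```

For a neutral grey image the score is 1/3; it rises toward 1 as the mucosa reddens, which is the monotone behaviour a severity-linked redness map needs.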


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Tien-Yu Huang ◽  
Shan-Quan Zhan ◽  
Peng-Jen Chen ◽  
Chih-Wei Yang ◽  
Henry Horng-Shing Lu

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 283
Author(s):  
Xiaoyuan Yu ◽  
Suigu Tang ◽  
Chak Fong Cheang ◽  
Hon Ho Yu ◽  
I Cheong Choi

The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis. The model does not simply replace the role of endoscopists in decision making, because endoscopists are expected to correct false predictions of the diagnosis system when additional supporting information is provided. To help endoscopists improve diagnostic accuracy in identifying the types of lesions, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted type of esophageal lesion. In addition, a mutual attention module is added to the segmentation task to improve its performance in determining the locations of esophageal lesions. The proposed model is evaluated and compared with other deep learning models using a dataset of 1003 endoscopic images, comprising 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show the promising performance of our model, with a high accuracy of 96.76% for classification and a Dice coefficient of 82.47% for segmentation. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists judge esophageal lesions.
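The segmentation result above is reported as a Dice coefficient, the standard overlap measure between a predicted lesion mask and the ground-truth mask. A minimal sketch of that metric (the masks here are hypothetical binary arrays, not the study's data):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-8):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).

    Returns 1.0 for identical non-empty masks and ~0.0 for disjoint ones;
    eps keeps the ratio defined when both masks are empty.
    """
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))
```

A Dice coefficient of 82.47% thus means that, averaged over the test images, the predicted lesion regions overlap the annotated regions at that ratio of twice-the-intersection to total mask area.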


Author(s):  
Kento Takenaka ◽  
Kazuo Ohtsuka ◽  
Toshimitsu Fujii ◽  
Shiori Oshima ◽  
Ryuichi Okamoto ◽  
...  
