Automatic detection and segmentation of adenomatous colorectal polyps during colonoscopy using Mask R-CNN

2020 ◽  
Vol 15 (1) ◽  
pp. 588-596 ◽  
Author(s):  
Jie Meng ◽  
Linyan Xue ◽  
Ying Chang ◽  
Jianguang Zhang ◽  
Shilong Chang ◽  
...  

Abstract Colorectal cancer (CRC) is one of the main alimentary tract malignancies affecting people worldwide. Adenomatous polyps are precursors of CRC, and therefore, preventing the development of these lesions may also prevent subsequent malignancy. However, the adenoma detection rate (ADR), a measure of the ability of a colonoscopist to identify and remove precancerous colorectal polyps, varies significantly among endoscopists. Here, we attempt to use a convolutional neural network (CNN) to generate a unique computer-aided diagnosis (CAD) system by exploring in detail the multiple-scale performance of deep neural networks. We applied this system to 3,375 hand-labeled images from the screening colonoscopies of 1,197 patients; of these images, 3,045 were assigned to the training dataset and 330 to the testing dataset. Each image was labeled simply as showing either an adenomatous or a non-adenomatous polyp. When applied to the testing dataset, our CNN-CAD system achieved a mean average precision of 89.5%. We conclude that the proposed framework could increase the ADR and decrease the incidence of interval CRCs, although further validation through large multicenter trials is required.
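
The abstract gives no implementation details, but as a rough, hedged illustration of the kind of Mask R-CNN inference pipeline such a CNN-CAD system builds on, the sketch below runs torchvision's generic Mask R-CNN (COCO-pretrained, purely as a stand-in for the authors' polyp-trained weights) on a single frame and keeps only high-confidence masks. The file name and the score threshold are assumptions.

```python
# Minimal sketch: generic Mask R-CNN inference on one endoscopic frame.
# The weights, file path and 0.5 thresholds are illustrative assumptions,
# not the authors' actual model or parameters.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("colonoscopy_frame.png").convert("RGB")  # hypothetical file
with torch.no_grad():
    output = model([to_tensor(frame)])[0]

keep = output["scores"] > 0.5            # confidence filter
masks = output["masks"][keep] > 0.5      # binarize the soft instance masks
boxes = output["boxes"][keep]
print(f"{keep.sum().item()} candidate polyp regions retained")
```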

2020 ◽  
Vol 08 (10) ◽  
pp. E1341-E1348
Author(s):  
Yuki Nakajima ◽  
Xin Zhu ◽  
Daiki Nemoto ◽  
Qin Li ◽  
Zhe Guo ◽  
...  

Abstract Background and study aims Colorectal cancers (CRCs) with deep submucosal invasion (T1b) can be metastatic lesions. However, endoscopic images of T1b CRCs resemble those of mucosal CRCs (Tis) or of CRCs with superficial invasion (T1a). The aim of this study was to develop an automatic computer-aided diagnosis (CAD) system to identify T1b CRC based on plain endoscopic images. Patients and methods In two hospitals, 1839 non-magnified plain endoscopic images from 313 CRCs (Tis 134, T1a 46, T1b 56, beyond T1b 37) with sessile morphology were extracted for training. A CAD system was trained with the data augmented by rotation, saturation, resizing, and exposure adjustment. Diagnostic performance was assessed using another dataset including 44 CRCs (Tis 23, T1b 21) from a third hospital. The CAD system generated a probability level for a T1b diagnosis for each image, and a probability level > 0.95 was defined as T1b. Lesions with at least one image with a probability level > 0.95 were regarded as T1b. The primary outcome was specificity. Six physicians separately read the same testing dataset. Results Specificity was 87 % (95 % confidence interval: 66–97) for CAD, 100 % (85–100) for Expert 1, 96 % (78–100) for Expert 2, 61 % (39–80) for both gastroenterology trainees, 48 % (27–69) for Novice 1, and 22 % (7–44) for Novice 2. Significant differences were observed between CAD and both novices (P = 0.013, P = 0.0003). Other diagnostic values of CAD were slightly lower than those of the two experts. Conclusions The specificity of CAD was superior to that of the novices and possibly to that of the gastroenterology trainees, but slightly inferior to that of the experts.
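
The per-lesion decision rule stated above (an image counts as T1b when its predicted probability exceeds 0.95, and a lesion is called T1b if any of its images does) is easy to express in code. The sketch below applies that rule to made-up per-image probabilities and adds a Clopper-Pearson exact 95% CI for specificity; the interval choice is a common convention, not something the abstract specifies.

```python
# Sketch of the per-lesion T1b call plus a Clopper-Pearson 95% CI for
# specificity. The probabilities below are made-up placeholders.
import numpy as np
from scipy.stats import beta

THRESHOLD = 0.95

def lesion_is_t1b(image_probs):
    """A lesion is called T1b if any of its images exceeds the threshold."""
    return max(image_probs) > THRESHOLD

def clopper_pearson(successes, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

# Hypothetical Tis lesions (true negatives for T1b): per-image probabilities.
tis_lesions = [[0.10, 0.40], [0.97, 0.20], [0.60], [0.05, 0.30, 0.11]]
calls = [lesion_is_t1b(p) for p in tis_lesions]
true_negatives = sum(not c for c in calls)
spec = true_negatives / len(tis_lesions)
print(f"specificity = {spec:.2f}, "
      f"95% CI = {clopper_pearson(true_negatives, len(tis_lesions))}")
```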


2021 ◽  
Author(s):  
Scarlet Nazarian ◽  
Ben Glover ◽  
Hutan Ashrafian ◽  
Ara Darzi ◽  
Julian Teare

BACKGROUND Colonoscopy reduces the incidence of colorectal cancer by allowing detection and resection of neoplastic polyps. Evidence shows that many small polyps are missed on a single colonoscopy. AI technologies have been adopted successfully to tackle the issue of missed polyps and as a tool to increase the adenoma detection rate (ADR). OBJECTIVE The aim of this review was to examine the diagnostic accuracy of AI-based technologies in assessing colorectal polyps. METHODS A comprehensive literature search was undertaken using the EMBASE, Medline and Cochrane Library databases. PRISMA guidelines were followed. Studies reporting the use of computer-aided diagnosis for polyp detection or characterisation during colonoscopy were included. Independent proportions and their differences were calculated and pooled through DerSimonian and Laird random-effects modelling. RESULTS A total of 48 studies were included. The meta-analysis showed a significant increase in the pooled polyp detection rate (PDR) in patients with the use of AI for polyp detection during colonoscopy compared with patients who had standard colonoscopy (OR 1.75; 95% CI 1.56-1.96; p = 0.0005). When comparing patients undergoing colonoscopy with the use of AI to those without, there was also a significant increase in ADR (OR 1.53; 95% CI 1.32-1.77; p = 0.0005). CONCLUSIONS With the aid of machine learning, there is potential to improve ADR and consequently reduce the incidence of CRC. The current generation of AI-based systems demonstrates impressive accuracy for the detection and characterisation of colorectal polyps. However, this is an evolving field, and before its adoption into a clinical setting, AI systems must prove their worth to patients and clinicians. CLINICALTRIAL PROSPERO registration: CRD42020169786
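
For readers unfamiliar with the DerSimonian and Laird random-effects model named in the methods, the sketch below pools study-level odds ratios on the log scale using that estimator; the per-study values are invented placeholders, not data from this review.

```python
# Sketch of DerSimonian-Laird random-effects pooling of log odds ratios.
# The per-study ORs and standard errors are invented placeholders.
import numpy as np

log_or = np.log(np.array([1.9, 1.5, 1.7, 2.2]))   # study log odds ratios
se = np.array([0.20, 0.15, 0.25, 0.30])           # their standard errors
v = se ** 2

w_fixed = 1.0 / v
y_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log_or - y_fixed) ** 2)      # Cochran's Q
k = len(log_or)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / c)                 # between-study variance, clamped at 0

w_rand = 1.0 / (v + tau2)
pooled = np.sum(w_rand * log_or) / np.sum(w_rand)
se_pooled = np.sqrt(1.0 / np.sum(w_rand))
ci = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```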


2021 ◽  
Vol 108 (Supplement_3) ◽  
Author(s):  
L F Sánchez Peralta ◽  
J F Ortega Morán ◽  
Cr L Saratxaga ◽  
J B Pagador ◽  
A Picón ◽  
...  

Abstract INTRODUCTION Deep learning techniques have contributed significantly to the field of medical image analysis. In the case of colorectal cancer, they have shown great utility for increasing the adenoma detection rate during colonoscopy, but a common validation methodology is still missing. In this study, we present preliminary efforts towards the definition of a validation framework. MATERIAL AND METHODS Different models based on different backbones and encoder-decoder architectures have been trained with a publicly available dataset that contains white light and NBI colonoscopy videos, with 76 different lesions from colonoscopy procedures in 48 human patients. A computer-aided detection (CADe) demonstrator has been implemented to show the performance of the models. RESULTS The CADe demonstrator shows the areas detected as polyps by overlaying the predicted mask on the endoscopic image. It allows the user to select the video to be analyzed from among those in the test set. Although it only presents basic features such as play, pause, and moving to the next video, it easily loads the model and allows for visualization of the results. The demonstrator is accompanied by a set of metrics to be used depending on the target task: polyp detection, localization and segmentation. CONCLUSIONS The use of this CADe demonstrator, together with a publicly available dataset and predefined metrics, will allow for an easier and fairer comparison of methods. Further work is still required to validate the proposed framework.
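
As a concrete picture of what the demonstrator does when it overlays the predicted mask on the endoscopic image, the sketch below blends a binary polyp mask over a frame with OpenCV and computes two segmentation metrics of the kind the framework proposes (IoU and Dice, chosen here as common examples); the file names and the colour choice are hypothetical, not part of the published demonstrator.

```python
# Sketch: overlay a predicted polyp mask on an endoscopic frame and compute
# IoU / Dice against a reference mask. File names are hypothetical.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                            # BGR endoscopic frame
pred = cv2.imread("pred_mask.png", cv2.IMREAD_GRAYSCALE) > 127
gt = cv2.imread("gt_mask.png", cv2.IMREAD_GRAYSCALE) > 127

overlay = frame.copy()
overlay[pred] = (0, 255, 0)                                # paint predicted polyp area green
blended = cv2.addWeighted(frame, 0.6, overlay, 0.4, 0)
cv2.imwrite("frame_with_mask.png", blended)

intersection = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
iou = intersection / union if union else 1.0
dice = 2 * intersection / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
print(f"IoU = {iou:.3f}, Dice = {dice:.3f}")
```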


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 973
Author(s):  
Valentina Giannini ◽  
Simone Mazzetti ◽  
Giovanni Cappello ◽  
Valeria Maria Doronzio ◽  
Lorenzo Vassallo ◽  
...  

Recently, Computer Aided Diagnosis (CAD) systems have been proposed to help radiologists in detecting and characterizing Prostate Cancer (PCa). However, few studies have evaluated the performance of these systems in a clinical setting, especially when used by non-experienced readers. The main aim of this study is to assess the diagnostic performance of non-experienced readers when reporting assisted by the likelihood map generated by a CAD system, and to compare the results with the unassisted interpretation. Three resident radiologists were asked to review multiparametric MRI of patients with and without PCa, both unassisted and assisted by a CAD system. In both reading sessions, the residents recorded all positive cases, and sensitivity, specificity, and negative and positive predictive values were computed and compared. The dataset comprised 90 patients (45 with at least one clinically significant biopsy-confirmed PCa). Sensitivity increased significantly in the CAD-assisted mode for patients with at least one clinically significant lesion (GS > 6) (68.7% vs. 78.1%, p = 0.018). Overall specificity was not statistically different between the unassisted and assisted sessions (94.8% vs. 89.6%, p = 0.072). The use of the CAD system significantly increases the per-patient sensitivity of inexperienced readers in the detection of clinically significant PCa, without negatively affecting specificity, while significantly reducing the overall reporting time.
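
The abstract reports paired comparisons of reader performance with and without CAD assistance. One standard test for such paired binary outcomes, not necessarily the one the authors used, is McNemar's test on the discordant cases, sketched below with invented counts.

```python
# Sketch: McNemar's test for a paired comparison of reader sensitivity
# with and without CAD assistance. The 2x2 counts are invented placeholders.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: unassisted detected / missed; columns: CAD-assisted detected / missed.
table = [[20, 2],   # detected in both / detected only unassisted
         [5, 18]]   # detected only with CAD / missed in both
result = mcnemar(table, exact=True)
print(f"McNemar p-value = {result.pvalue:.3f}")
```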


Diagnostics ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 694
Author(s):  
Xuejiao Pang ◽  
Zijian Zhao ◽  
Ying Weng

At present, the application of artificial intelligence (AI) based on deep learning in the medical field has become more extensive and suitable for clinical practice compared with traditional machine learning. Applying traditional machine learning approaches to clinical practice is very challenging because medical data often lack well-defined characteristic features. Deep learning methods with self-learning abilities, however, can effectively exploit powerful computing resources to learn intricate and abstract features. Thus, they are promising for the classification and detection of lesions through gastrointestinal endoscopy using a computer-aided diagnosis (CAD) system based on deep learning. This study reviews the research and development of CAD systems based on deep learning that assist doctors in classifying and detecting lesions in the stomach, intestines, and esophagus. It also summarizes the limitations of current methods and presents prospects for future research.


2021 ◽  
Vol 11 (2) ◽  
pp. 760
Author(s):  
Yun-ji Kim ◽  
Hyun Chin Cho ◽  
Hyun-chong Cho

Gastric cancer has a high mortality rate worldwide, but it can be prevented with early detection through regular gastroscopy. Herein, we propose a deep learning-based computer-aided diagnosis (CADx) system applying data augmentation to help doctors classify gastroscopy images as normal or abnormal. To improve the performance of deep learning, a large amount of training data are required. However, the collection of medical data, owing to their nature, is highly expensive and time consuming. Therefore, data were generated through a deep convolutional generative adversarial network (DCGAN), and 25 augmentation policies optimized for the CIFAR-10 dataset were applied through AutoAugment to augment the data. Accordingly, the gastroscopy images were augmented, only high-quality images were selected through an image quality-measurement method, and the images were classified as normal or abnormal through the Xception network. We compared the performance obtained with the original, non-augmented training dataset, the dataset generated through the DCGAN, the dataset augmented through the CIFAR-10 augmentation policies, and the dataset combining the two methods. The dataset combining the two methods delivered the best performance in terms of accuracy (0.851), an improvement of 0.06 over the original training dataset. We confirmed that augmenting data through the DCGAN and the CIFAR-10 augmentation policies is the most suitable approach for the classification model for normal and abnormal gastric endoscopy images. The proposed method not only addresses the scarcity of medical data but also improves the accuracy of gastric disease diagnosis.
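
As a rough sketch of the augmentation-plus-classifier part of this pipeline (leaving the DCGAN generator aside), the code below applies torchvision's AutoAugment with the CIFAR-10 policy and attaches a two-class head to an Xception backbone from the timm library; the use of timm, the input size and all other settings are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: CIFAR-10 AutoAugment policy + Xception backbone for a binary
# normal/abnormal gastroscopy classifier. Library choices and sizes are assumptions.
import timm
import torch
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

train_tf = transforms.Compose([
    transforms.Resize((299, 299)),                  # Xception's usual input size
    AutoAugment(policy=AutoAugmentPolicy.CIFAR10),  # the 25 CIFAR-10 sub-policies
    transforms.ToTensor(),
])

model = timm.create_model("xception", pretrained=False, num_classes=2)

# Smoke test with a random tensor standing in for one augmented gastroscopy image.
dummy = torch.randn(1, 3, 299, 299)
logits = model(dummy)
print(logits.shape)  # torch.Size([1, 2]) -> normal vs. abnormal
```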


2021 ◽  
Vol 160 (6) ◽  
pp. S-376
Author(s):  
Eladio Rodriguez-Diaz ◽  
Gyorgy Baffy ◽ 
Wai-Kit Lo ◽ 
Hiroshi Mashimo ◽  
Aparna Repaka ◽  
Alexander Goldowsky ◽  
...  

Author(s):  
Kamyab Keshtkar

As a relatively high percentage of adenomatous polyps are missed, a computer-aided diagnosis (CAD) tool based on deep learning can aid the endoscopist in diagnosing colorectal polyps or colorectal cancer, in order to decrease the polyp miss rate and prevent colorectal cancer mortality. The convolutional neural network (CNN) is a deep learning method that, over the last decade, has achieved better results in detecting and segmenting specific objects in images than conventional models such as regression, support vector machines, or artificial neural networks. In recent years, based on studies in medical imaging, CNN models have achieved promising results in detecting masses and lesions in various body organs, including colorectal polyps. In this review, the structure and architecture of CNN models, and how colonoscopy images are processed as input and converted to the output, are explained in detail. In most primary studies conducted in the field of colorectal polyp detection and classification, the CNN model has been regarded as a black box, since the calculations performed at the different layers during model training have not been clarified precisely. Furthermore, I discuss the differences between CNN and conventional models, examine how to train a CNN model for diagnosing colorectal polyps or cancer, and evaluate model performance after the training process.
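
To make the input-to-output flow described above concrete, here is a deliberately small PyTorch CNN that maps a colonoscopy image tensor to two class logits (polyp versus no polyp); it is a teaching-sized illustration, not a model from any of the reviewed studies.

```python
# Teaching-sized CNN: colonoscopy image tensor in, two class logits out.
# Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class TinyPolypNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

logits = TinyPolypNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```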


Gut ◽  
2021 ◽  
pp. gutjnl-2020-323799
Author(s):  
Neeraj Narula ◽  
Emily C L Wong ◽  
Jean-Frederic Colombel ◽  
William J Sandborn ◽  
John Kenneth Marshall ◽  
...  

Background and aims The Simple Endoscopic Score for Crohn’s disease (SES-CD) is the primary tool for measurement of mucosal inflammation in clinical trials but lacks prognostic potential. We set out to develop and validate a modified multiplier of the SES-CD (MM-SES-CD), which takes into consideration each individual parameter’s prognostic value for achieving endoscopic remission (ER) while on active therapy. Methods In this post hoc analysis of three CD clinical trial programmes (n=350 patients, baseline SES-CD ≥ 3 with confirmed ulceration), data were pooled and randomly split into a 70% training and 30% testing cohort. The MM-SES-CD was designed using weights for individual parameters as determined by logistic regression modelling, with 1-year ER (SES-CD < 3) being the dependent variable. A cut point score for low and high probability of ER was determined using the maximum Youden index and validated in the testing cohort. Results Baseline ulcer size, extent of ulceration and presence of non-passable strictures had the strongest association with 1-year ER as compared with affected surface area, with differential weighting of individual parameters across disease segments being observed during logistic regression. The MM-SES-CD was generated using this weighted regression model and demonstrated strong discrimination for ER in the training dataset (area under the receiver operating characteristic curve (AUC) 0.83, 95% CI 0.78 to 0.94) and in the testing dataset (AUC 0.82, 95% CI 0.77 to 0.92). In comparison to the MM-SES-CD scoring model, the original SES-CD score lacks accuracy (AUC 0.60, 95% CI 0.55 to 0.65) for predicting the achievement of ER. Conclusions We developed and internally validated the MM-SES-CD as an endoscopic severity assessment tool to predict 1-year ER in patients with CD on active therapy.
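
The weighting-and-cut-point procedure described here (logistic regression of the individual endoscopic parameters against 1-year ER, then a cut point at the maximum Youden index) can be sketched with scikit-learn as follows; the data are synthetic and the code illustrates the procedure, not the MM-SES-CD itself.

```python
# Sketch: weight endoscopic sub-scores by logistic regression and pick a
# cut point at the maximum Youden index. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 350
# Columns stand in for ulcer size, ulcerated surface, affected surface, narrowing
# (in practice, per-segment scores would be used).
X = rng.integers(0, 4, size=(n, 4)).astype(float)
# Synthetic 1-year remission outcome, loosely driven by ulcer size and narrowing.
latent = 1.0 - 0.8 * X[:, 0] - 0.6 * X[:, 3] + rng.normal(0, 1, n)
y = (latent > 0).astype(int)

model = LogisticRegression().fit(X, y)
score = model.predict_proba(X)[:, 1]              # the weighted score

fpr, tpr, thresholds = roc_curve(y, score)
youden = tpr - fpr
cut = thresholds[np.argmax(youden)]               # cut point at maximum Youden index
print(f"AUC = {roc_auc_score(y, score):.2f}, cut point = {cut:.2f}")
```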


2020 ◽  
Vol 10 (6) ◽  
pp. 2104
Author(s):  
Michał Tomaszewski ◽  
Paweł Michalski ◽  
Jakub Osuchowski

This article presents an analysis of the effectiveness of object detection in digital images when only a limited quantity of input data is available. Using a limited set of training data was made possible by developing a detailed scenario of the task, which strictly defined the operating conditions of the detector, in this case a convolutional neural network. The described solution utilizes known architectures of deep neural networks in the process of learning and object detection. The article compares detection results from the most popular deep neural networks while maintaining a limited training set composed of a specific number of selected images from a diagnostic video. The analyzed input material was recorded during an inspection flight conducted along high-voltage lines, and the object detector was built for a power insulator. The main contribution of the presented paper is the evidence that a limited training set (in our case, just 60 training frames) can be used for object detection, assuming an outdoor scenario with low variability of environmental conditions. Deciding which network will generate the best result for such a limited training set is not a trivial task. The conducted research suggests that deep neural networks achieve different levels of effectiveness depending on the amount of training data. The most beneficial results were obtained for two convolutional neural networks: the faster region-based convolutional neural network (Faster R-CNN) and the region-based fully convolutional network (R-FCN). Faster R-CNN reached the highest AP (average precision), at a level of 0.8, for 60 frames. The R-FCN model obtained a lower AP; however, the number of input samples had a significantly weaker influence on its results than in the case of the other CNN models, which, in the authors’ assessment, is a desirable feature for a limited training set.
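
As an illustration of how a detector such as the Faster R-CNN used here can be adapted to a single custom class (a power insulator) before fine-tuning on a small set of frames, the sketch below swaps the box-predictor head in torchvision's implementation; the dataset handling and training loop are omitted, and all settings are assumptions rather than the authors' configuration.

```python
# Sketch: adapt torchvision's Faster R-CNN to one custom class ("insulator")
# ahead of fine-tuning on a small set of frames. Settings are illustrative.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # background + insulator
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the COCO box-predictor head with a two-class head.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# A standard fine-tuning loop over the ~60 annotated frames would follow
# (omitted here); a small learning rate is typical with so little data.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.001, momentum=0.9
)
```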

