Prediction of Submucosal Invasion for Gastric Neoplasms in Endoscopic Images Using Deep-Learning

2020 ◽  
Vol 9 (6) ◽  
pp. 1858
Author(s):  
Bum-Joo Cho ◽  
Chang Seok Bang ◽  
Jae Jun Lee ◽  
Chang Won Seo ◽  
Ju Han Kim

Endoscopic resection is recommended for gastric neoplasms confined to the mucosa or superficial submucosa. The determination of invasion depth is based on gross morphology assessed in endoscopic images, or on endoscopic ultrasound. These methods have limited accuracy and are subject to inter-observer variability. Several studies have developed deep-learning (DL) algorithms that classify the invasion depth of gastric cancers. Nevertheless, these algorithms are intended to be used after a definite diagnosis of gastric cancer, which is not always feasible across the spectrum of gastric neoplasms. This study aimed to establish a DL algorithm for accurately predicting submucosal invasion in endoscopic images of gastric neoplasms. Pre-trained convolutional neural network models were fine-tuned with 2899 white-light endoscopic images. The prediction models were subsequently validated with an external dataset of 206 images. In the internal test, the mean area under the curve for discriminating submucosal invasion was 0.887 (95% confidence interval: 0.849–0.924) with the DenseNet-161 network. In the external test, the mean area under the curve likewise reached 0.887 (0.863–0.910). A clinical simulation showed that 6.7% of the patients who underwent gastrectomy in the external test set would have been correctly identified by the established algorithm as candidates for endoscopic resection, avoiding unnecessary surgery. The established DL algorithm is useful for predicting submucosal invasion in endoscopic images of gastric neoplasms.
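
The abstract does not include code, but the described approach (fine-tuning an ImageNet-pretrained DenseNet-161 for a binary invasion-depth label) can be sketched as below. Dataset paths, hyperparameters, and the training schedule are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: fine-tuning a pretrained DenseNet-161 to classify
# endoscopic images as mucosa-confined vs. submucosa-invaded.
# Paths, batch size, learning rate, and epochs are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load ImageNet weights, then replace the classifier head with 2 outputs.
model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)
model = model.to(device)

# Standard preprocessing for ImageNet-pretrained backbones.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: endoscopy/train/{mucosa,submucosa}/...
train_set = datasets.ImageFolder("endoscopy/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```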

10.2196/25167 ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. e25167
Author(s):  
Chang Seok Bang ◽  
Hyun Lim ◽  
Hae Min Jeong ◽  
Sung Hyeon Hwang

Background In a previous study, we examined the use of deep learning models to classify the invasion depth (mucosa-confined versus submucosa-invaded) of gastric neoplasms using endoscopic images. The external test accuracy reached 77.3%. However, establishing such models is labor-intensive and demands high-performance computing. Automated deep learning (AutoDL) models, which enable fast searching of optimal neural architectures and hyperparameters without complex coding, have been developed. Objective The objective of this study was to establish AutoDL models to classify the invasion depth of gastric neoplasms. Additionally, endoscopist–artificial intelligence interactions were explored. Methods The same 2899 endoscopic images that were employed to establish the previous model were used. A prospective multicenter validation using 206 and 1597 novel images was conducted. The primary outcome was external test accuracy. Neuro-T, Create ML Image Classifier, and AutoML Vision were used to establish the models. Three doctors with different levels of endoscopy expertise were asked to classify the invasion depth of gastric neoplasms for each image without AutoDL support, with faulty AutoDL support, and with best-performance AutoDL support, in that sequence. Results The Neuro-T–based model reached 89.3% (95% CI 85.1%-93.5%) external test accuracy. Regarding model establishment time, Create ML Image Classifier was the fastest at 13 minutes while reaching 82.0% (95% CI 76.8%-87.2%) external test accuracy. While the expert endoscopist's decisions were not influenced by AutoDL, the faulty AutoDL misled the endoscopy trainee and the general physician. However, this was corrected by the support of the best-performance AutoDL model. The trainee gained the most benefit from the AutoDL support. Conclusions AutoDL is deemed useful for the on-site establishment of customized deep learning models. An endoscopist with less experience but at least a certain level of expertise can benefit from AutoDL support.
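
As a quick consistency check, the reported confidence interval around the Neuro-T model's external test accuracy matches a normal-approximation (Wald) interval on the 206-image external test set; the snippet below reproduces it. That the authors used this interval method is an assumption.

```python
# Reproducing 89.3% (95% CI 85.1%-93.5%) as a Wald interval on a
# proportion, with n = 206 external test images per the abstract.
import math

p, n = 0.893, 206
se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # 95% normal-approximation bounds
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # 89.3% (95% CI 85.1%-93.5%)
```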


Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett's cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer in white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett's cancer. Results: The sensitivity, specificity, F1 score, and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73, and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, who reached a sensitivity, specificity, F1 score, and accuracy of 0.63, 0.78, 0.67, and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system for predicting submucosal invasion in endoscopic images of Barrett's cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and a real-life setting. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.
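
For readers comparing the four reported metrics, the sketch below shows how they follow from a binary confusion matrix with T1b (submucosal invasion) as the positive class. The cell counts are back-calculated from the reported sensitivity, specificity, and class sizes (108 T1a, 122 T1b) and are a reconstruction, not the study's published confusion matrix; the small F1 discrepancy (0.74 vs. 0.73) is likely rounding.

```python
# Deriving sensitivity, specificity, F1, and accuracy from a binary
# confusion matrix; counts are reconstructed from the reported metrics.
def binary_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)              # recall on T1b
    specificity = tn / (tn + fp)              # recall on T1a
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, f1, accuracy

# 94/122 T1b detected, 69/108 T1a correctly ruled out.
print(binary_metrics(tp=94, fp=39, tn=69, fn=28))
# -> (0.77, 0.64, 0.74, 0.71)
```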


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Gang Yu ◽  
Kai Sun ◽  
Chao Xu ◽  
Xing-Hua Shi ◽  
Chong Wu ◽  
...  

Machine-assisted pathological recognition has focused on supervised learning (SL), which suffers from a significant annotation bottleneck. We propose a semi-supervised learning (SSL) method based on the mean teacher architecture using 13,111 whole slide images of colorectal cancer from 8803 subjects across 13 independent centers. SSL (~3150 labeled, ~40,950 unlabeled; ~6300 labeled, ~37,800 unlabeled patches) performs significantly better than SL. No significant difference is found between SSL (~6300 labeled, ~37,800 unlabeled) and SL (~44,100 labeled) for patch-level diagnoses (area under the curve (AUC): 0.980 ± 0.014 vs. 0.987 ± 0.008, P value = 0.134) or patient-level diagnoses (AUC: 0.974 ± 0.013 vs. 0.980 ± 0.010, P value = 0.117), which is close to the performance of human pathologists (average AUC: 0.969). Evaluations on 15,000 lung and 294,912 lymph node images also confirm that SSL can achieve performance similar to that of SL with massive annotations. SSL dramatically reduces the annotation workload, which gives it great potential for effectively building expert-level pathological artificial intelligence platforms in practice.
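
The mean teacher architecture named in the abstract pairs a student network with a teacher whose weights are an exponential moving average (EMA) of the student's, adding a consistency loss on unlabeled patches. The sketch below illustrates the idea under stated assumptions: the backbone, EMA decay, loss weight, and the use of random tensors in place of two augmented views are all illustrative, not the paper's configuration.

```python
# Minimal mean teacher sketch: supervised loss on labeled patches plus a
# consistency loss that makes the student agree with an EMA teacher on
# unlabeled patches. Backbone and hyperparameters are assumptions.
import copy
import torch
import torch.nn.functional as F
from torchvision import models

student = models.resnet18(num_classes=2)
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                    # teacher updated by EMA only

optimizer = torch.optim.SGD(student.parameters(), lr=0.01)
ema_decay, consistency_weight = 0.99, 1.0

def train_step(x_labeled, y, x_unl_view_a, x_unl_view_b):
    sup = F.cross_entropy(student(x_labeled), y)     # supervised term
    with torch.no_grad():
        t_prob = F.softmax(teacher(x_unl_view_b), dim=1)
    s_prob = F.softmax(student(x_unl_view_a), dim=1)
    cons = F.mse_loss(s_prob, t_prob)                # consistency term
    loss = sup + consistency_weight * cons
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                            # EMA teacher update
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1 - ema_decay)
    return loss.item()

# Demo step; random tensors stand in for two augmentations of one patch.
x_l, y = torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))
x_a, x_b = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
print("loss:", train_step(x_l, y, x_a, x_b))
```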


2019 ◽  
Vol 98 (11) ◽  
pp. 1234-1238 ◽  
Author(s):  
S. Yamaguchi ◽  
C. Lee ◽  
O. Karaer ◽  
S. Ban ◽  
A. Mine ◽  
...  

A preventive measure for debonding has not been established, although one is highly desirable to improve the survival rate of computer-aided design/computer-aided manufacturing (CAD/CAM) composite resin (CR) crowns. The aim of this study was to assess the usefulness of deep learning with a convolutional neural network (CNN) to predict the debonding probability of CAD/CAM CR crowns from 2-dimensional images captured from 3-dimensional (3D) stereolithography models of a die scanned by a 3D oral scanner. All cases of CAD/CAM CR crowns were manufactured from April 2014 to November 2015 at the Division of Prosthodontics, Osaka University Dental Hospital (Ethical Review Board at Osaka University, approval H27-E11). The data set consisted of a total of 24 cases: 12 trouble-free and 12 debonded, used as known labels. A total of 8,640 images were randomly divided into 6,480 training and validation images and 2,160 test images. Deep learning with a CNN was conducted to develop a learning model to predict the debonding probability. The prediction accuracy, precision, recall, F-measure, receiver operating characteristic, and area under the curve of the learning model were assessed for the test images. The mean calculation time during prediction was also measured for the test images. The prediction accuracy, precision, recall, and F-measure values for the prediction of the debonding probability were 98.5%, 97.0%, 100%, and 0.985, respectively. The mean calculation time was 2 ms/step for the 2,160 test images. The area under the curve was 0.998. Artificial intelligence (AI) technology, that is, the deep learning with a CNN method established in this study, demonstrated considerably good performance in predicting the debonding probability of a CAD/CAM CR crown from 3D stereolithography models of a die scanned from patients.
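
The abstract does not specify the network, so the sketch below is only a generic small CNN of the kind described, mapping a 2-D capture to a debonding probability; every layer size and the input resolution are assumptions.

```python
# Hypothetical small CNN emitting a debonding probability for one
# 2-D capture of a 3D die model. Architecture details are assumptions;
# the paper's exact network is not given in the abstract.
import torch
import torch.nn as nn

class DebondingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        # One logit per image; sigmoid turns it into a probability.
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = DebondingCNN()
prob = model(torch.randn(1, 3, 224, 224))   # dummy 224x224 capture
print(f"debonding probability: {prob.item():.3f}")
```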


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Munetoshi Hinata ◽  
Tetsuo Ushiku

Immune checkpoint inhibitor (ICI) therapy is widely used but effective only in a subset of gastric cancers. Epstein–Barr virus (EBV)-positive and microsatellite instability (MSI) / mismatch repair deficient (dMMR) tumors have been reported to be highly responsive to ICIs. However, detecting these subtypes requires costly techniques, such as immunohistochemistry and molecular testing. In the present study, we constructed a histology-based deep learning model that aimed to screen for this immunotherapy-sensitive subgroup efficiently. We processed whole slide images of 408 cases of gastric adenocarcinoma, comprising 108 EBV, 58 MSI/dMMR, and 242 other subtypes. Numerous images generated by data augmentation of the learning set were used to train convolutional neural networks, establishing an automatic detection platform for the EBV and MSI/dMMR subtypes, and the test sets of images were used to verify the learning outcome. Our model detected the subgroup (EBV + MSI/dMMR tumors) with high accuracy in the test cases, with an area under the curve of 0.947 (0.901–0.992). This result was slightly better than when EBV and MSI/dMMR tumors were detected separately. In an external validation cohort of 244 gastric cancers from The Cancer Genome Atlas database, our model showed a favorable result for detecting the "EBV + MSI/dMMR" subgroup, with an AUC of 0.870 (0.809–0.931). In addition, a visualization of the trained neural network highlighted intraepithelial lymphocytosis as the basis for prediction, suggesting that this feature is a discriminative characteristic shared by EBV and MSI/dMMR tumors. Histology-based deep learning models are expected to serve as economical and less time-consuming alternatives for detecting EBV and MSI/dMMR gastric cancers, which may help to effectively stratify patients who respond to ICIs.
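
Whole-slide models of this kind are typically applied patch-wise, with patch scores then aggregated into a slide-level score before computing the AUC; the sketch below shows that pipeline step. The mean aggregation rule, the scikit-learn evaluation, and the toy numbers are assumptions, not details from the paper.

```python
# Sketch: aggregate patch-level P(EBV or MSI/dMMR) into slide-level
# scores and evaluate with ROC AUC. Aggregation rule and data are
# illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def slide_score(patch_probs):
    return float(np.mean(patch_probs))      # assumed: mean pooling

# Toy cohort: per-slide patch probabilities and true subtype labels
# (1 = EBV or MSI/dMMR, 0 = other).
slides = [rng.uniform(0.5, 0.9, 50), rng.uniform(0.0, 0.4, 50),
          rng.uniform(0.45, 0.85, 50), rng.uniform(0.05, 0.35, 50)]
labels = [1, 0, 1, 0]

scores = [slide_score(p) for p in slides]
print("slide-level AUC:", roc_auc_score(labels, scores))
```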


Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images, with feature extraction for detection and diagnosis. The CT image a human reads is reconstructed from the sinogram, the original raw data acquired by the CT scanner. In this work, unlike conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of the convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method obtained an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared with 0.89 based on the CT image alone. Moreover, a combination of sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
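
The reported gain from combining the two domains (AUC 0.92 vs. 0.91 and 0.89 alone) is the kind of result a simple late fusion of per-candidate probabilities can produce; the sketch below assumes an averaging fusion rule, which may differ from how the authors actually combined the domains.

```python
# Late fusion of sinogram-domain and image-domain nodule probabilities.
# The convex-combination rule and the toy data are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

def fused_auc(p_sinogram, p_ct, y_true, weight=0.5):
    p_fused = weight * p_sinogram + (1 - weight) * p_ct
    return roc_auc_score(y_true, p_fused)

# Toy predictions for 8 nodule candidates (1 = true nodule).
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_sino = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
p_ct = np.array([0.8, 0.3, 0.6, 0.7, 0.2, 0.2, 0.7, 0.4])
print("fused AUC:", fused_auc(p_sino, p_ct, y))
```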


2021 ◽  
Vol 13 (14) ◽  
pp. 2822
Author(s):  
Zhe Lin ◽  
Wenxuan Guo

An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods for stand assessment are labor-intensive and time-consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. These models were trained with two datasets containing 400 and 900 images with variations in plant size and soil background brightness. The performance of these models was assessed with two testing datasets of different dimensions: testing dataset 1 with 300 by 400 pixels and testing dataset 2 with 250 by 1200 pixels. The model validation results showed that the mean average precision (mAP) and average recall (AR) were 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model, with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both the CenterNet and MobileNet models. The results showed that the CenterNet model had a better overall performance for cotton plant detection and counting with 900 training images. The results also indicated that more training images are required when applying object detection models to images with dimensions different from those of the training datasets. The mean absolute percentage error (MAPE), coefficient of determination (R²), and root mean squared error (RMSE) of the cotton plant counts were 0.07%, 0.98, and 0.37, respectively, with testing dataset 1 for the CenterNet model with 900 training images. Both the MobileNet and CenterNet models have the potential to detect and count cotton plants accurately and in a timely manner from high-resolution UAS images at the seedling stage. This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
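
For reference, the three counting metrics reported above are computed from predicted versus ground-truth plant counts as shown below; the counts in the example are illustrative, not the study's data.

```python
# MAPE, R^2, and RMSE for plant-count predictions; example counts are
# illustrative placeholders.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def count_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mape = np.mean(np.abs(y_true - y_pred) / y_true) * 100   # percent
    r2 = r2_score(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    return mape, r2, rmse

print(count_metrics([12, 18, 25, 30], [12, 17, 25, 31]))
```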

