Use of Endoscopic Images in the Prediction of Submucosal Invasion of Gastric Neoplasms: Automated Deep Learning Model Development and Usability Study

10.2196/25167 ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. e25167
Author(s):  
Chang Seok Bang ◽  
Hyun Lim ◽  
Hae Min Jeong ◽  
Sung Hyeon Hwang

Background In a previous study, we examined the use of deep learning models to classify the invasion depth (mucosa-confined versus submucosa-invaded) of gastric neoplasms using endoscopic images. The external test accuracy reached 77.3%. However, model establishment is labor intensive and requires high-performance computing. Automated deep learning (AutoDL) models, which enable fast searching of optimal neural architectures and hyperparameters without complex coding, have been developed. Objective The objective of this study was to establish AutoDL models to classify the invasion depth of gastric neoplasms. Additionally, endoscopist–artificial intelligence interactions were explored. Methods The same 2899 endoscopic images that were employed to establish the previous model were used. A prospective multicenter validation using 206 and 1597 novel images was conducted. The primary outcome was external test accuracy. Neuro-T, Create ML Image Classifier, and AutoML Vision were used in establishing the models. Three doctors with different levels of endoscopy expertise were asked to classify the invasion depth of gastric neoplasms for each image without AutoDL support, with faulty AutoDL support, and with best-performing AutoDL support, in sequence. Results The Neuro-T–based model reached 89.3% (95% CI 85.1%-93.5%) external test accuracy. For model establishment time, Create ML Image Classifier was the fastest at 13 minutes while reaching 82.0% (95% CI 76.8%-87.2%) external test accuracy. While the expert endoscopist's decisions were not influenced by AutoDL, the faulty AutoDL misled the endoscopy trainee and the general physician. However, this was corrected by the support of the best-performing AutoDL model. The trainee gained the most benefit from the AutoDL support. Conclusions AutoDL is deemed useful for the on-site establishment of customized deep learning models. An inexperienced endoscopist with at least a certain level of expertise can benefit from AutoDL support.
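
The commercial AutoDL platforms named above (Neuro-T, Create ML Image Classifier, AutoML Vision) are point-and-click tools, so none of their code is reproduced here. The sketch below only illustrates the underlying idea of automated architecture and hyperparameter search for a binary invasion-depth classifier, using the open-source keras-tuner library as a stand-in; all layer ranges and search settings are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch of automated architecture/hyperparameter search for a binary
# (mucosa-confined vs. submucosa-invaded) image classifier, using keras-tuner
# as an open-source stand-in for the commercial AutoDL tools named in the abstract.
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([keras.Input(shape=(224, 224, 3))])
    # Search over network depth and width (ranges are illustrative assumptions).
    for i in range(hp.Int("conv_blocks", 2, 4)):
        model.add(keras.layers.Conv2D(hp.Int(f"filters_{i}", 32, 128, step=32), 3,
                                      padding="same", activation="relu"))
        model.add(keras.layers.MaxPooling2D())
    model.add(keras.layers.GlobalAveragePooling2D())
    model.add(keras.layers.Dense(hp.Int("dense_units", 64, 256, step=64), activation="relu"))
    model.add(keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer=keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=20,
                        directory="autodl_demo", project_name="invasion_depth")
# tuner.search(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: hypothetical tf.data datasets
# best_model = tuner.get_best_models(num_models=1)[0]
```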

2020 ◽  
Author(s):  
Chang Seok Bang ◽  
Hyun Lim ◽  
Hae Min Jeong ◽  
Sung Hyeon Hwang

BACKGROUND The authors previously examined deep learning models to classify the invasion depth (mucosa-confined vs. submucosa-invaded) of gastric neoplasms using endoscopic images. The external-test accuracy reached 77.3%. However, model establishment is labor intensive and requires high-performance computing. Automated deep learning (AutoDL) tools, which enable fast searching of optimal neural architectures and hyperparameters without complex coding, have been developed. OBJECTIVE To establish AutoDL models for classifying the invasion depth of gastric neoplasms. Additionally, endoscopist-artificial intelligence interactions were explored. METHODS The same 2,899 endoscopic images that were employed to establish the previous model were used. A prospective multicenter validation using 206 and 1,597 novel images was conducted. The primary outcome was external-test accuracy. “Neuro-T,” “Create ML-Image Classifier,” and “AutoML-Vision” were used in establishing the models. Three doctors with different levels of endoscopy expertise classified each image without AutoDL support, with faulty AutoDL support, and with best-performing AutoDL support, in sequence. RESULTS The Neuro-T-based model reached 89.3% (95% confidence interval: 85.1–93.5%) external-test accuracy. For model establishment time, Create ML-Image Classifier was the fastest at 13 minutes while reaching 82% external-test accuracy. The expert endoscopist's decisions were not influenced by AutoDL, whereas the faulty AutoDL misled the endoscopy trainee and the general physician. However, this was corrected by the support of the best-performing AutoDL model. The trainee gained the most benefit from AutoDL support. CONCLUSIONS AutoDL is deemed useful for the on-site establishment of customized deep learning models. An inexperienced endoscopist with at least a certain level of expertise can benefit from AutoDL support.


2020 ◽  
Vol 9 (6) ◽  
pp. 1858
Author(s):  
Bum-Joo Cho ◽  
Chang Seok Bang ◽  
Jae Jun Lee ◽  
Chang Won Seo ◽  
Ju Han Kim

Endoscopic resection is recommended for gastric neoplasms confined to the mucosa or superficial submucosa. The determination of invasion depth is based on gross morphology assessed in endoscopic images or on endoscopic ultrasound. These methods have limited accuracy and are subject to inter-observer variability. Several studies have developed deep learning (DL) algorithms that classify the invasion depth of gastric cancers. Nevertheless, these algorithms are intended to be used after a definite diagnosis of gastric cancer, which is not always feasible across the range of gastric neoplasms. This study aimed to establish a DL algorithm for accurately predicting submucosal invasion in endoscopic images of gastric neoplasms. Pre-trained convolutional neural network models were fine-tuned with 2899 white-light endoscopic images. The prediction models were subsequently validated with an external dataset of 206 images. In the internal test, the mean area under the curve discriminating submucosal invasion was 0.887 (95% confidence interval: 0.849–0.924) with the DenseNet-161 network. In the external test, the mean area under the curve reached 0.887 (0.863–0.910). A clinical simulation showed that 6.7% of patients who underwent gastrectomy in the external test were correctly identified by the established algorithm as candidates for endoscopic resection, avoiding unnecessary surgery. The established DL algorithm is useful for predicting submucosal invasion in endoscopic images of gastric neoplasms.
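
As a rough illustration of the transfer learning setup described above (fine-tuning an ImageNet-pretrained DenseNet-161 for a two-class invasion-depth label), a minimal PyTorch sketch might look like the following; the image size, optimizer, and learning rate are assumptions, not the paper's settings.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained DenseNet-161 for binary
# (mucosa-confined vs. submucosa-invaded) classification. Hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # replace the 1000-class head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One fine-tuning step on a batch of endoscopic images shaped (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random tensors standing in for a real data loader:
# loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```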


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 786
Author(s):  
Daniel M. Lang ◽  
Jan C. Peeken ◽  
Stephanie E. Combs ◽  
Jan J. Wilkens ◽  
Stefan Bartzsch

Infection with the human papillomavirus (HPV) has been identified as a major risk factor for oropharyngeal cancer (OPC). HPV-related OPCs have been shown to be more radiosensitive and to carry a reduced risk of cancer-related death. Hence, histological determination of a patient's HPV status is an essential diagnostic factor. We investigated the ability of deep learning models to detect HPV status from imaging. To overcome the problem of small medical datasets, we used a transfer learning approach. A 3D convolutional network pre-trained on sports video clips was fine-tuned so that the full 3D information in the CT images could be exploited. The video-pretrained model was able to differentiate HPV-positive from HPV-negative cases, with an area under the receiver operating characteristic curve (AUC) of 0.81 on an external test set. In comparison with a 3D convolutional neural network (CNN) trained from scratch and a 2D architecture pre-trained on ImageNet, the video-pretrained model performed best. Deep learning models are capable of CT image-based HPV status determination. Video-based pre-training can improve training on 3D medical data, but further studies are needed for verification.
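
The study's exact video-pretrained architecture is not restated in the abstract, so the sketch below uses torchvision's Kinetics-pretrained r3d_18 as a stand-in to show the general pattern of repurposing a video model for a two-class (HPV-positive vs. HPV-negative) CT task; the input shape and channel-repetition workaround are assumptions for illustration.

```python
# Hedged sketch: repurpose a video-pretrained 3D CNN for binary HPV-status
# prediction from CT volumes. r3d_18 (pre-trained on Kinetics video clips) is a
# stand-in for the sports-video-pretrained network used in the study.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 400-class action head

# The video model expects 3-channel input; repeating a single-channel CT volume
# across channels is one simple workaround, not necessarily the paper's choice.
ct = torch.randn(1, 1, 16, 112, 112)        # (batch, channel, depth, H, W), illustrative size
logits = model(ct.repeat(1, 3, 1, 1, 1))    # -> shape (1, 2)
```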


2021 ◽  
Vol 11 ◽  
Author(s):  
Yubizhuo Wang ◽  
Jiayuan Shao ◽  
Pan Wang ◽  
Lintao Chen ◽  
Mingliang Ying ◽  
...  

Background Our aim was to establish a deep learning radiomics method to preoperatively evaluate regional lymph node (LN) staging for hilar cholangiocarcinoma (HC) patients. Methods and Materials Of the 179 enrolled HC patients, 90 were pathologically diagnosed with lymph node metastasis. Quantitative radiomic features and deep learning features were extracted. An LN metastasis status classifier was developed by integrating a support vector machine, a high-performance deep learning radiomics signature, and three clinical characteristics. An LN metastasis stratification classifier (N1 vs. N2) was also proposed with subgroup analysis. Results The average areas under the receiver operating characteristic curve (AUCs) of the LN metastasis status classifier reached 0.866 in the training cohort and 0.870 in the external test cohorts. Meanwhile, the LN metastasis stratification classifier performed well in predicting the risk of LN metastasis, with an average AUC of 0.946. Conclusions The two classifiers derived from computed tomography images performed well in predicting LN staging in HC and will be reliable evaluation tools to improve decision-making.
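
The classifier described above fuses hand-crafted radiomic features, deep learning features, and clinical variables in a support vector machine. A minimal scikit-learn sketch of that fusion step is shown below; all arrays and feature dimensions are placeholders, not the study's actual data or pipeline.

```python
# Hedged sketch: concatenate radiomic features, deep learning features, and clinical
# variables into one feature vector and train an SVM to predict LN metastasis status.
# All arrays are random placeholders for the study's real data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

n_patients = 179
radiomic = np.random.rand(n_patients, 100)      # quantitative radiomic features
deep     = np.random.rand(n_patients, 256)      # CNN-derived deep features
clinical = np.random.rand(n_patients, 3)        # three clinical characteristics
y        = np.random.randint(0, 2, n_patients)  # LN metastasis status (0/1)

X = np.hstack([radiomic, deep, clinical])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
lnm_risk = clf.predict_proba(X)[:, 1]  # predicted probability of LN metastasis
```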


2020 ◽  
pp. 1-17
Author(s):  
Yanhong Yang ◽  
Fleming Y.M. Lure ◽  
Hengyuan Miao ◽  
Ziqi Zhang ◽  
Stefan Jaeger ◽  
...  

Background: Accurate and rapid diagnosis of coronavirus disease (COVID-19) is crucial for timely quarantine and treatment. Purpose: In this study, a deep learning-based AI model using a ResUNet network was developed to evaluate the performance of radiologists, with and without AI assistance, in distinguishing COVID-19 pneumonia from other pulmonary infections on CT scans. Methods: For model development and validation, a total of 694 cases with 111,066 CT slices were retrospectively collected as training data and independent test data. Among them, 118 were confirmed COVID-19 pneumonia cases and 576 were other pulmonary infections (e.g., tuberculosis, common pneumonia, and non-COVID-19 viral pneumonia). The cases were divided into training and testing datasets. The independent test evaluated and compared the performance of three radiologists with different years of practice experience in distinguishing COVID-19 pneumonia, with and without AI assistance. Results: Our final model achieved an overall test accuracy of 0.914 with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.903; sensitivity and specificity were 0.918 and 0.909, respectively. With the model's assistance, the radiologists' performance in distinguishing COVID-19 from other pulmonary infections improved, with average accuracy rising from 0.941 to 0.951 and sensitivity from 0.895 to 0.942, compared with reading without AI assistance. Conclusion: The deep learning-based AI model developed in this study improved radiologists' performance in distinguishing COVID-19 from other pulmonary infections on chest CT images.
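
The abstract names a ResUNet-based model but does not restate its layers. Purely as an illustration, a residual convolutional block of the kind used in ResUNet-style encoders might look like the following in PyTorch; the channel counts and 1x1 projection are generic choices, not the study's architecture.

```python
# Hedged sketch of a residual conv block as used in ResUNet-style encoders (illustrative only).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )
        # 1x1 projection so the skip connection matches shape when channels or stride change.
        self.skip = (nn.Conv2d(in_ch, out_ch, 1, stride=stride)
                     if (in_ch != out_ch or stride != 1) else nn.Identity())

    def forward(self, x):
        return self.body(x) + self.skip(x)

# x = torch.randn(1, 1, 256, 256)   # single-channel CT slice (illustrative size)
# y = ResidualBlock(1, 64)(x)       # -> (1, 64, 256, 256)
```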


Diagnostics ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 417 ◽  
Author(s):  
Mohammad Farukh Hashmi ◽  
Satyarth Katiyar ◽  
Avinash G Keskar ◽  
Neeraj Dhanraj Bokde ◽  
Zong Woo Geem

Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, even for a trained radiologist, examining chest X-rays is a challenging task, and there is a need to improve diagnostic accuracy. In this work, an efficient model for the detection of pneumonia, trained on digital chest X-ray images, is proposed to aid radiologists in their decision-making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions from state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This is a supervised learning approach in which the network predicts the result based on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to increase the training dataset in a balanced way. The proposed weighted classifier outperforms all the individual models. Finally, the model is evaluated not only in terms of test accuracy but also in terms of AUC score. The final weighted classifier achieves a test accuracy of 98.43% and an AUC score of 99.76 on unseen data from the Guangzhou Women and Children's Medical Center pneumonia dataset. Hence, the proposed model can be used for a quick diagnosis of pneumonia and can aid radiologists in the diagnosis process.
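
The weighted-classifier idea, combining each base model's predicted probabilities with learned weights, can be sketched as below; the weight-fitting method (constrained minimization of validation log-loss) and all arrays are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of a weighted soft-voting classifier: combine per-model predicted
# probabilities with non-negative weights (summing to 1) chosen to minimize log-loss
# on a validation set. The optimization scheme is illustrative, not the paper's.
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

# probs[m] holds model m's predicted pneumonia probabilities on the validation set.
probs = np.random.rand(5, 200)            # 5 base models (ResNet18, Xception, ...), 200 images
y_val = np.random.randint(0, 2, 200)      # validation labels (placeholder)

def loss(w):
    w = np.clip(w, 0, None)
    w = w / w.sum()
    return log_loss(y_val, np.clip(w @ probs, 1e-7, 1 - 1e-7))

res = minimize(loss, x0=np.full(5, 0.2), method="Nelder-Mead")
weights = np.clip(res.x, 0, None)
weights /= weights.sum()
ensemble_probs = weights @ probs          # weighted prediction for each validation image
```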


Medicina ◽  
2021 ◽  
Vol 57 (4) ◽  
pp. 395
Author(s):  
Corina Maria Vasile ◽  
Anca Loredana Udriștoiu ◽  
Alice Elena Ghenea ◽  
Mihaela Popescu ◽  
Cristian Gheonea ◽  
...  

Background and Objectives: Thyroid disorders currently have a high incidence in the worldwide population, so the development of alternative methods for improving the diagnostic process is necessary. Materials and Methods: For this purpose, we developed an ensemble method that fused two deep learning models, one based on a convolutional neural network and the other on transfer learning. For the first model, called 5-CNN, we developed an efficient end-to-end trained model with five convolutional layers, while for the second model the pre-trained VGG-19 architecture was repurposed, optimized, and trained. We trained and validated our models using a dataset of ultrasound images consisting of four types of thyroid images: autoimmune, nodular, micro-nodular, and normal. Results: Excellent results were obtained by the ensemble CNN-VGG method, which outperformed the 5-CNN and VGG-19 models: an overall test accuracy of 97.35%, with an overall specificity of 98.43%, sensitivity of 95.75%, and positive and negative predictive values of 95.41% and 98.05%, respectively. The micro-averaged area under the receiver operating characteristic curves was 0.96. The results were also validated by two physicians: an endocrinologist and a pediatrician. Conclusions: We propose a new deep learning approach for classifying thyroid ultrasound images to assist physicians in the diagnostic process.
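
The micro-averaged AUC reported above is computed across all four classes at once. A short scikit-learn sketch of that evaluation step is given below; the predictions and class probabilities are random placeholders.

```python
# Hedged sketch: compute a micro-averaged ROC AUC for a 4-class ultrasound classifier
# (autoimmune, nodular, micro-nodular, normal). Predictions are random placeholders.
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

y_true = np.random.randint(0, 4, 300)                 # ground-truth class indices
y_score = np.random.dirichlet(np.ones(4), size=300)   # per-class predicted probabilities

y_true_bin = label_binarize(y_true, classes=[0, 1, 2, 3])
micro_auc = roc_auc_score(y_true_bin, y_score, average="micro")
print(f"micro-average AUC: {micro_auc:.2f}")
```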


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Jun Chen ◽  
Lianlian Wu ◽  
Jun Zhang ◽  
Liang Zhang ◽  
Dexin Gong ◽  
...  

Abstract Computed tomography (CT) is the preferred imaging method for diagnosing 2019 novel coronavirus (COVID-19) pneumonia. We aimed to construct a deep learning-based system for detecting COVID-19 pneumonia on high-resolution CT. For model development and validation, 46,096 anonymized images from 106 admitted patients, including 51 patients with laboratory-confirmed COVID-19 pneumonia and 55 control patients with other diseases at Renmin Hospital of Wuhan University, were retrospectively collected. Twenty-seven consecutive prospective patients at Renmin Hospital of Wuhan University were enrolled to compare the efficiency of radiologists in diagnosing COVID-19 pneumonia with that of the model. An external test was conducted at Qianjiang Central Hospital to estimate the system's robustness. The model achieved a per-patient accuracy of 95.24% and a per-image accuracy of 98.85% on the internal retrospective dataset. For the 27 internal prospective patients, the system achieved performance comparable to that of an expert radiologist. On the external dataset, it achieved an accuracy of 96%. With the assistance of the model, the reading time of radiologists was decreased by 65%. The deep learning model showed performance comparable to that of an expert radiologist and greatly improved the efficiency of radiologists in clinical practice.
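
Because the system reports both per-image and per-patient accuracy, slice-level predictions must be aggregated to a patient-level call. One simple aggregation rule (mean probability over a patient's slices with a 0.5 threshold) is sketched below as an assumption; the paper's actual rule is not restated in the abstract.

```python
# Hedged sketch: aggregate per-slice COVID-19 probabilities into a per-patient
# prediction by averaging over each patient's CT slices. The mean-pooling rule and
# 0.5 threshold are illustrative assumptions, not necessarily the study's method.
from collections import defaultdict

def per_patient_predictions(slice_probs, threshold=0.5):
    """slice_probs: list of (patient_id, probability) pairs for individual CT slices."""
    by_patient = defaultdict(list)
    for patient_id, prob in slice_probs:
        by_patient[patient_id].append(prob)
    return {pid: (sum(p) / len(p)) >= threshold for pid, p in by_patient.items()}

# Example: two slices each for two hypothetical patients.
print(per_patient_predictions([("P01", 0.92), ("P01", 0.81), ("P02", 0.10), ("P02", 0.35)]))
# -> {'P01': True, 'P02': False}
```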

