Automated classification of gastric neoplasms in endoscopic images using a convolutional neural network

Endoscopy
2019
Vol 51 (12)
pp. 1121-1129
Author(s):  
Bum-Joo Cho ◽  
Chang Seok Bang ◽  
Se Woo Park ◽  
Young Joo Yang ◽  
Seung In Seo ◽  
...  

Abstract
Background: Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images.
Methods: Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classifying performance of the models was evaluated using a test dataset and a prospective validation dataset.
Results: A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-ResNet-v2 model reached 84.6%. The mean areas under the curve (AUCs) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In prospective validation, the Inception-ResNet-v2 model performed worse than the endoscopist with the best performance (five-category accuracy 76.4% vs. 87.6%; cancer 76.0% vs. 97.5%; neoplasm 73.5% vs. 96.5%; P < 0.001). However, there was no statistically significant difference between the Inception-ResNet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0% vs. 82.0%) or neoplasm (AUC 0.776 vs. 0.865).
Conclusion: The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
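As a rough illustration of the transfer-learning approach described above, the sketch below fine-tunes a pretrained Inception-ResNet-v2 on a five-class folder of endoscopic images using PyTorch and the timm model zoo. It is a minimal sketch under stated assumptions, not the authors' code: the data path, augmentation, batch size, learning rate, and epoch count are all illustrative.

```python
# Minimal sketch: fine-tuning a pretrained Inception-ResNet-v2 for five-category
# classification of endoscopic images (not the authors' actual pipeline).
import torch
import torch.nn as nn
import timm
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

CLASSES = ["advanced_cancer", "early_cancer", "high_grade_dysplasia",
           "low_grade_dysplasia", "non_neoplasm"]

# Inception-style preprocessing (assumed); the paper's exact augmentation is not specified.
train_tf = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Hypothetical folder layout: data/train/<class_name>/*.png
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

# Pretrained backbone with a new 5-way head (transfer learning / fine-tuning).
model = timm.create_model("inception_resnet_v2", pretrained=True, num_classes=len(CLASSES))
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # epoch count is illustrative
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```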

2018
Vol 21 (4)
pp. 653-660
Author(s):  
Toshiaki Hirasawa ◽  
Kazuharu Aoyama ◽  
Tetsuya Tanimoto ◽  
Soichiro Ishihara ◽  
Satoki Shichijo ◽  
...  

2021
Vol 21 (1)
Author(s):  
Xiangyu Tan ◽  
Kexin Li ◽  
Jiucheng Zhang ◽  
Wenzhe Wang ◽  
Bian Wu ◽  
...  

Abstract
Background: The incidence rates of cervical cancer in developing countries have been rising steeply, while the medical resources for prevention, detection, and treatment remain quite limited. Computer-based deep learning methods can achieve high-accuracy, fast cancer screening. Such methods can lead to early diagnosis, effective treatment, and, hopefully, successful prevention of cervical cancer. In this work, we sought to construct a robust deep convolutional neural network (DCNN) model that can assist pathologists in screening for cervical cancer.
Methods: ThinPrep cytologic test (TCT) images diagnosed by pathologists from many collaborating hospitals in different regions were collected. The images were divided into a training dataset (13,775 images), a validation dataset (2301 images), and a test dataset (408,030 images from 290 scanned copies) for training and evaluation of a faster region-based convolutional neural network (Faster R-CNN) system.
Results: The sensitivity and specificity of the proposed cervical cancer screening system were 99.4% and 34.8%, respectively, with an area under the curve (AUC) of 0.67. The model could also distinguish between negative and positive cells. The sensitivity values for atypical squamous cells of undetermined significance (ASCUS), low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL) were 89.3%, 71.5%, and 73.9%, respectively. The system could quickly classify the images and generate a test report in about 3 minutes, reducing the burden on pathologists and saving them valuable time to analyze more complex cases.
Conclusions: In our study, a CNN-based TCT cervical-cancer screening model was established through a retrospective study of multicenter TCT images. The model shows improved speed and accuracy for cervical cancer screening and helps overcome the shortage of medical resources required for this screening.
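To make the detection-based screening approach concrete, the sketch below builds a Faster R-CNN detector on torchvision's reference implementation and turns its per-tile detections into a slide-level positive/negative call. The class list, score threshold, and decision rule are illustrative assumptions, not the authors' system.

```python
# Minimal sketch of a Faster R-CNN abnormal-cell detector for TCT images,
# built on torchvision rather than the authors' system.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Index 0 is background; the lesion categories mirror those reported in the abstract.
CLASSES = ["background", "ASCUS", "LSIL", "HSIL"]

def build_detector(num_classes: int = len(CLASSES)):
    # Start from a COCO-pretrained detector and swap in a new box predictor.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

@torch.no_grad()
def slide_is_positive(model, tiles, score_threshold=0.5):
    """Call a slide positive if any tile contains a confident abnormal-cell box.
    `tiles` is a list of 3xHxW float tensors in [0, 1]; threshold is an assumption."""
    model.eval()
    for tile in tiles:
        (pred,) = model([tile])
        if (pred["scores"] >= score_threshold).any():
            return True
    return False
```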


2019
Vol 3 (2)
Author(s):  
Toru Hirano ◽  
Masayuki Nishide ◽  
Naoki Nonaka ◽  
Jun Seita ◽  
Kosuke Ebina ◽  
...  

Abstract
Objective: The purpose of this research was to develop a deep-learning model to assess radiographic finger joint destruction in rheumatoid arthritis (RA).
Methods: The model comprises two steps: a joint-detection step and a joint-evaluation step. Among 216 radiographs of 108 patients with RA, 186 radiographs were assigned to the training/validation dataset and 30 to the test dataset. In the training/validation dataset, images of proximal interphalangeal (PIP) joints, the interphalangeal (IP) joint of the thumb, and metacarpophalangeal (MCP) joints were manually cropped and scored for joint space narrowing (JSN) and bone erosion by clinicians, and these images were then augmented. As a result, 11,160 images were used to train and validate a deep convolutional neural network for joint evaluation, and 3720 selected images were used to train machine learning for joint detection. These steps were combined into the assessment model for radiographic finger joint destruction. Performance of the model was examined on the test dataset, which was not included in the training/validation process, by comparing the scores assigned by the model and by clinicians.
Results: The model detected PIP joints, the IP joint of the thumb, and MCP joints with a sensitivity of 95.3% and assigned scores for JSN and erosion. Accuracy (percentage of exact agreement) reached 49.3–65.4% for JSN and 70.6–74.1% for erosion. The correlation coefficient between scores by the model and by clinicians per image was 0.72–0.88 for JSN and 0.54–0.75 for erosion.
Conclusion: Image processing with the trained convolutional neural network model shows promise for assessing radiographs in RA.
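A minimal sketch of the two-step cascade described above follows: a joint-detection step that yields crops of PIP, thumb IP, and MCP joints, and a joint-evaluation CNN that scores each crop for JSN and erosion. The backbone, grade ranges, and detector interface are stand-in assumptions, not the published architecture.

```python
# Minimal sketch of the two-step cascade: joint detection, then per-joint scoring.
import torch
import torch.nn as nn
from torchvision import models

N_JSN_GRADES = 5      # assumed JSN grade range (0-4)
N_EROSION_GRADES = 6  # assumed erosion grade range (0-5)

class JointScorer(nn.Module):
    """Shared backbone with two heads: one for the JSN grade, one for the erosion grade."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights="DEFAULT")  # stand-in backbone
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.jsn_head = nn.Linear(in_features, N_JSN_GRADES)
        self.erosion_head = nn.Linear(in_features, N_EROSION_GRADES)

    def forward(self, x):
        feats = self.backbone(x)
        return self.jsn_head(feats), self.erosion_head(feats)

@torch.no_grad()
def assess_radiograph(detector, scorer, radiograph):
    """Step 1: detect joints; step 2: score each detected joint crop.
    `detector` is assumed to yield (crop_tensor, joint_name) pairs."""
    scorer.eval()
    results = []
    for crop, joint_name in detector(radiograph):
        jsn_logits, erosion_logits = scorer(crop.unsqueeze(0))
        results.append({
            "joint": joint_name,
            "jsn": jsn_logits.argmax(1).item(),
            "erosion": erosion_logits.argmax(1).item(),
        })
    return results
```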


2021
Vol 11 (8)
pp. 786
Author(s):  
Chaoyue Chen ◽  
Yisong Cheng ◽  
Jianfeng Xu ◽  
Ting Zhang ◽  
Xin Shu ◽  
...  

The purpose of this study was to determine whether a deep-learning-based assessment system could facilitate preoperative grading of meningioma. This was a retrospective study conducted at two institutions covering 643 patients. The system, designed with a cascade network structure, was developed using deep-learning technology for automatic tumor detection, visual assessment, and grading prediction. Specifically, a modified U-Net convolutional neural network was first established to segment tumor images. The segmentations were then introduced into rendering algorithms for spatial reconstruction and into a DenseNet convolutional neural network for grading prediction. The trained models were integrated as a system, and its robustness was tested on an external dataset from the second institution involving different magnetic resonance imaging platforms. The segmentation model performed well, with a Dice coefficient of 0.920 ± 0.009 in the validation group. With accurately segmented tumor images, the rendering model reconstructed the tumor body in detail and clearly displayed the important intracranial vessels. The DenseNet model also achieved high accuracy, with an area under the curve of 0.918 ± 0.006 and an accuracy of 0.901 ± 0.039 when classifying tumors into low-grade and high-grade meningiomas. Moreover, the system exhibited good performance on the external validation dataset.
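The cascade structure described above can be sketched as a two-stage PyTorch module: a segmentation network produces a tumor mask, and a DenseNet grades the masked image as low- or high-grade. The mask threshold, input layout, and the use of a generic segmenter in place of the authors' modified U-Net are assumptions made for illustration only.

```python
# Minimal sketch of the segmentation-then-grading cascade (not the authors' system).
import torch
import torch.nn as nn
from torchvision import models

class MeningiomaGradingPipeline(nn.Module):
    def __init__(self, segmenter: nn.Module):
        super().__init__()
        # Assumed contract: segmenter takes Nx3xHxW slices and returns Nx1xHxW mask logits.
        self.segmenter = segmenter
        self.grader = models.densenet121(weights=None)
        self.grader.classifier = nn.Linear(self.grader.classifier.in_features, 2)

    def forward(self, mri_slices: torch.Tensor):
        # Step 1: segment the tumor and binarise the predicted mask (0.5 threshold assumed).
        mask = (torch.sigmoid(self.segmenter(mri_slices)) > 0.5).float()
        # Step 2: suppress non-tumor tissue and grade the masked image.
        masked = mri_slices * mask
        logits = self.grader(masked)  # two classes: low-grade vs. high-grade
        return mask, logits
```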

