AUTOMATED CLASSIFICATION OF GASTRIC NEOPLASMS IN ENDOSCOPIC IMAGES USING A CONVOLUTIONAL NEURAL NETWORK

2020
Author(s): CS Bang, BJ Cho, GH Baik
2020
Vol 8 (7), pp. 486-486
Author(s): Gaoshuang Liu, Jie Hua, Zhan Wu, Tianfang Meng, Mengxue Sun, et al.

Stroke
2020
Vol 51 (Suppl_1)
Author(s): Yichuan Liu, Brandon L Hancock, Tri Hoang, Mark R Etherton, Steven J Mocking, et al.

Background: Fundamental advances in stroke care will require pooling imaging phenotype data from multiple centers to complement the current aggregation of genomic, environmental, and clinical information. Sharing clinically acquired MRI data across hospitals is challenging because of the inherent heterogeneity of clinical data: the same MRI series may be labeled differently depending on the vendor and hospital. Furthermore, the de-identification process may remove metadata describing the MRI series, requiring human review. Manually annotating MRI series, however, is not only laborious and slow but also prone to human error. In this work, we present a recurrent convolutional neural network (RCNN) for automated classification of MRI series.

Methods: We randomly selected 1000 subjects from the MRI-GENetics Interface Exploration study and partitioned them into 800 training, 100 validation, and 100 testing subjects. We categorized the MRI series into 24 groups (see Table). The RCNN used a modified AlexNet, pretrained on ImageNet photographs, to extract features from 2D slices. Because clinical MRI series are 3D or 4D, a gated recurrent unit (GRU) network aggregated information across the 2D slices to make the final prediction.

Results: We achieved a classification accuracy (correct/total cases) of 99.8%, 98.5%, and 97.5% on the training, validation, and testing sets, respectively. The average F1-score (percent overlap between predicted and actual cases) over all categories was 99.8%, 98.2%, and 94.4% on the training, validation, and testing sets.

Conclusion: We showed that automated annotation of MRI series by repurposing deep-learning techniques developed for photographic image recognition is feasible. Such methods can facilitate high-throughput curation of MRI data acquired across multiple centers, enable scientifically productive collaboration among researchers, and ultimately enhance big-data stroke research.
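The abstract does not name an implementation framework, so the following is a minimal sketch of the described architecture, assuming PyTorch and torchvision: an ImageNet-pretrained AlexNet extracts features from each 2D slice, and a GRU aggregates the slice features into a single series-level prediction over the 24 groups. The hidden size, pooling, and classification head are illustrative assumptions, not the authors' exact design.

```python
# Sketch of the described RCNN: a pretrained AlexNet extracts per-slice
# features, and a GRU aggregates them across slices into one prediction.
# Hidden size, slice count, and head layout are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class SeriesClassifier(nn.Module):
    def __init__(self, num_classes=24, hidden_size=256):
        super().__init__()
        alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        self.backbone = alexnet.features            # conv feature extractor
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.flat_dim = 256 * 6 * 6
        self.gru = nn.GRU(self.flat_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, n_slices, 3, H, W); each 2D slice replicated to 3 channels
        b, s = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))      # (b*s, 256, h, w)
        feats = self.pool(feats).flatten(1)         # (b*s, flat_dim)
        feats = feats.view(b, s, -1)                # (b, s, flat_dim)
        _, last = self.gru(feats)                   # final hidden state summarizes the series
        return self.head(last.squeeze(0))           # (b, num_classes)

# Example: classify a batch of two series, 20 slices each, at 224x224 pixels.
model = SeriesClassifier()
logits = model(torch.randn(2, 20, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 24])
```

Keeping only the final GRU hidden state reflects the idea in the abstract: an arbitrary-length stack of 2D slices is reduced to one fixed-size representation before classification, so 3D and 4D series of differing lengths can share a single classifier.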


Endoscopy
2019
Vol 51 (12), pp. 1121-1129
Author(s): Bum-Joo Cho, Chang Seok Bang, Se Woo Park, Young Joo Yang, Seung In Seo, et al.

Abstract

Background: Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms on endoscopic images.

Methods: Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classification performance of the models was evaluated on a test dataset and a prospective validation dataset.

Results: A total of 5017 images were collected from 1269 patients, of which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-ResNet-v2 model reached 84.6%. The mean areas under the curve (AUC) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In prospective validation, the Inception-ResNet-v2 model showed lower performance than the endoscopist with the best performance (five-category accuracy 76.4% vs. 87.6%; cancer 76.0% vs. 97.5%; neoplasm 73.5% vs. 96.5%; P < 0.001). However, there was no statistically significant difference between the Inception-ResNet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0% vs. 82.0%) or neoplasm (AUC 0.776 vs. 0.865).

Conclusion: The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
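The abstract reports fine-tuning three pretrained CNNs but does not specify the training pipeline. The snippet below is a minimal sketch of such a fine-tuning loop for the best-reported model, assuming the timm library for an ImageNet-pretrained Inception-ResNet-v2 and a hypothetical folder-per-class dataset layout ("data/train"); the hyperparameters and transforms are illustrative assumptions, not the authors' settings.

```python
# Minimal fine-tuning sketch for the five-category gastric lesion classifier,
# assuming timm for an ImageNet-pretrained Inception-ResNet-v2.
# Dataset path, hyperparameters, and transforms are illustrative assumptions.
import timm
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

CLASSES = ["advanced_gastric_cancer", "early_gastric_cancer",
           "high_grade_dysplasia", "low_grade_dysplasia", "non_neoplasm"]

# Replace the ImageNet head with a 5-way classification head.
model = timm.create_model("inception_resnet_v2", pretrained=True,
                          num_classes=len(CLASSES))

# Hypothetical folder-per-class layout for the endoscopic white-light images.
transform = transforms.Compose([
    transforms.Resize((299, 299)),   # Inception-ResNet-v2 input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Fine-tuning all layers of a pretrained backbone, as sketched here, is the standard transfer-learning setup the abstract describes: the ImageNet features provide a starting point, and the comparatively small endoscopic dataset adapts them to the five lesion categories.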

