Recognizing Ureter and Uterine Artery in Endoscopic Images Using a Convolutional Neural Network

Author(s): Balazs Harangi, Andras Hajdu, Rudolf Lampe, Peter Torok
2020, Vol 8 (7), pp. 486-486
Author(s): Gaoshuang Liu, Jie Hua, Zhan Wu, Tianfang Meng, Mengxue Sun, ...
Endoscopy, 2019, Vol 51 (12), pp. 1121-1129
Author(s): Bum-Joo Cho, Chang Seok Bang, Se Woo Park, Young Joo Yang, Seung In Seo, ...

Abstract

Background: Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images.

Methods: Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high-grade dysplasia, low-grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classification performance of the models was evaluated using a test dataset and a prospective validation dataset.

Results: A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-ResNet-v2 model reached 84.6%. The mean areas under the curve (AUC) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In prospective validation, the Inception-ResNet-v2 model showed lower performance than the best-performing endoscopist (five-category accuracy 76.4% vs. 87.6%; cancer 76.0% vs. 97.5%; neoplasm 73.5% vs. 96.5%; P < 0.001). However, there was no statistically significant difference between the Inception-ResNet-v2 model and the worst-performing endoscopist in the differentiation of gastric cancer (accuracy 76.0% vs. 82.0%) or neoplasm (AUC 0.776 vs. 0.865).

Conclusion: The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
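The "weighted average accuracy" reported for the five-category task can be illustrated with a minimal sketch. The class names below follow the abstract, but the per-class counts and the helper function are invented for illustration; the study's actual per-class results are not given. Weighting each class's accuracy by its share of the test set makes the metric reduce to overall accuracy across all images.

```python
def weighted_average_accuracy(per_class):
    """Average of per-class accuracies, weighted by class prevalence.

    per_class: dict mapping class name -> (n_correct, n_total).
    Each class contributes (n_total / grand_total) * (n_correct / n_total),
    which simplifies to n_correct / grand_total summed over classes.
    """
    grand_total = sum(n_total for _, n_total in per_class.values())
    return sum(
        (n_total / grand_total) * (n_correct / n_total)
        for n_correct, n_total in per_class.values()
    )

# Hypothetical counts for a 812-image, five-category test set
# (the class names match the abstract; the numbers are made up).
per_class = {
    "advanced gastric cancer": (150, 170),
    "early gastric cancer":    (120, 150),
    "high grade dysplasia":    (60, 80),
    "low grade dysplasia":     (90, 112),
    "non-neoplasm":            (260, 300),
}

acc = weighted_average_accuracy(per_class)
print(f"weighted average accuracy: {acc:.3f}")
```

Because the weights are the class prevalences, this equals total correct over total images (680/812 ≈ 0.837 for the invented counts above); other weighting schemes, such as equal weight per class (macro-averaging), would give a different number.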


2018, Vol 21 (4), pp. 653-660
Author(s): Toshiaki Hirasawa, Kazuharu Aoyama, Tetsuya Tanimoto, Soichiro Ishihara, Satoki Shichijo, ...
