A Novel Approach for Increased Convolutional Neural Network Performance in Gastric-cancer Classification using Endoscopic Images

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Sin-ae Lee ◽  
Hyun Chin Cho ◽  
Hyun-chong Cho
Endoscopy ◽  
2019 ◽  
Vol 51 (12) ◽  
pp. 1121-1129 ◽  
Author(s):  
Bum-Joo Cho ◽  
Chang Seok Bang ◽  
Se Woo Park ◽  
Young Joo Yang ◽  
Seung In Seo ◽  
...  

Abstract
Background: Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images.
Methods: Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classifying performance of the models was evaluated using a test dataset and a prospective validation dataset.
Results: A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-ResNet-v2 model reached 84.6%. The mean areas under the curve (AUCs) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In prospective validation, the Inception-ResNet-v2 model showed lower performance than the best-performing endoscopist (five-category accuracy 76.4% vs. 87.6%; cancer 76.0% vs. 97.5%; neoplasm 73.5% vs. 96.5%; P < 0.001). However, there was no statistically significant difference between the Inception-ResNet-v2 model and the worst-performing endoscopist in the differentiation of gastric cancer (accuracy 76.0% vs. 82.0%) or neoplasm (AUC 0.776 vs. 0.865).
Conclusion: The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
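The abstract reports a weighted average accuracy over the five lesion categories. As a minimal sketch (not the paper's code), weighting each class's accuracy by its prevalence in the test set can be computed as follows; the labels and data are hypothetical:

```python
from collections import Counter

def weighted_average_accuracy(y_true, y_pred):
    """Per-class accuracy weighted by each class's share of the test set.

    With prevalence weights this equals overall accuracy; the weighting is
    written out explicitly to make the averaging scheme visible.
    """
    counts = Counter(y_true)
    total = len(y_true)
    acc = 0.0
    for cls, n in counts.items():
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        acc += (n / total) * (correct / n)
    return acc

# Hypothetical labels for the five categories: advanced gastric cancer (AGC),
# early gastric cancer (EGC), high/low grade dysplasia (HGD/LGD), non-neoplasm (NN)
y_true = ["AGC", "AGC", "EGC", "HGD", "LGD", "NN", "NN", "NN"]
y_pred = ["AGC", "EGC", "EGC", "HGD", "LGD", "NN", "NN", "LGD"]
print(round(weighted_average_accuracy(y_true, y_pred), 3))  # 0.75
```

Other weighting schemes (e.g. equal weight per class, i.e. balanced accuracy) would give different numbers; the paper itself specifies only that the average is weighted.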


2018 ◽  
Vol 21 (4) ◽  
pp. 653-660 ◽  
Author(s):  
Toshiaki Hirasawa ◽  
Kazuharu Aoyama ◽  
Tetsuya Tanimoto ◽  
Soichiro Ishihara ◽  
Satoki Shichijo ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Muhammad Aqeel Aslam ◽  
Cuili Xue ◽  
Yunsheng Chen ◽  
Amin Zhang ◽  
Manhua Liu ◽  
...  

Abstract
Deep learning is an emerging tool that is increasingly used for disease diagnosis in medicine, and a new research direction has developed around the detection of early-stage gastric cancer. Computer-aided diagnosis (CAD) systems can help reduce mortality through earlier, more effective detection. In this study, we proposed a new feature-extraction method using a stacked sparse autoencoder to extract discriminative features from unlabeled breath-sample data. A Softmax classifier was then integrated into the proposed feature-extraction method to classify gastric cancer from the breath samples. Specifically, we identified fifty peaks in each spectrum to distinguish early gastric cancer (EGC), advanced gastric cancer (AGC), and healthy persons. The CAD system reduces the distance between input and output by learning the features while preserving the structure of the breath-sample data set. After completion of unsupervised training, the autoencoders were cascaded with the Softmax classifier to form a deep stacked sparse autoencoder neural network. Finally, the developed network was fine-tuned with labeled training data to make the model more reliable and repeatable. The proposed deep stacked sparse autoencoder architecture exhibits excellent results, with an overall accuracy of 98.7% for advanced gastric cancer classification and 97.3% for early gastric cancer detection using breath analysis. Moreover, the developed model produces excellent recall, precision, and F-score values, making it suitable for clinical application.
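The pipeline described above — stacked sparse-autoencoder encoding of spectral peaks, a KL-divergence sparsity penalty during unsupervised training, and a Softmax head for the three classes — can be sketched in NumPy. This is an illustrative toy under assumed dimensions and random weights, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(X, W, b):
    """One autoencoder layer: map inputs to a lower-dimensional hidden code."""
    return sigmoid(X @ W + b)

def kl_sparsity_penalty(H, rho=0.05):
    """KL divergence pushing mean hidden activations toward a target rho.

    This is the standard sparsity term added to the reconstruction loss
    when training a sparse autoencoder; rho=0.05 is an assumed target.
    """
    rho_hat = np.clip(H.mean(axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def softmax_classify(H, V, c):
    """Softmax head on the encoded features: 3 classes (EGC, AGC, healthy)."""
    logits = H @ V + c
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy data: 6 breath samples, 50 peak intensities each (fifty peaks per the abstract);
# layer sizes and weights are made up for illustration.
X = rng.random((6, 50))
W1, b1 = rng.normal(scale=0.1, size=(50, 20)), np.zeros(20)   # autoencoder layer 1
W2, b2 = rng.normal(scale=0.1, size=(20, 10)), np.zeros(10)   # autoencoder layer 2 (stacked)
V, c = rng.normal(scale=0.1, size=(10, 3)), np.zeros(3)       # Softmax head

H = encode(encode(X, W1, b1), W2, b2)   # stacked encoding of the spectra
probs = softmax_classify(H, V, c)       # per-sample class probabilities
penalty = kl_sparsity_penalty(H)        # term added to the unsupervised loss
print(probs.shape)                      # (6, 3); each row sums to 1
```

In the full method, each layer is first trained unsupervised (reconstruction loss plus the sparsity penalty), the layers are then stacked with the Softmax head, and the whole network is fine-tuned end to end on labeled data.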

