Automated detection of cecal intubation with variable bowel preparation using a deep convolutional neural network

2021, Vol 09 (11), pp. E1778-E1784
Author(s): Daniel J. Low, Zhuoqiao Hong, Rishad Khan, Rishi Bansal, Nikko Gimpaya, ...

Abstract Background and study aims Colonoscopy completion reduces post-colonoscopy colorectal cancer. As a result, there have been attempts at implementing artificial intelligence to automate detection of the appendiceal orifice (AO) for quality assurance. However, the utilization of these algorithms has not been demonstrated under suboptimal conditions, including variable bowel preparation. We present an automated computer-assisted method using a deep convolutional neural network to detect the AO irrespective of bowel preparation. Methods A total of 13,222 images (6,663 AO and 1,322 non-AO) were extracted from 35 colonoscopy videos recorded between 2015 and 2018. The images were labelled with Boston Bowel Preparation Scale (BBPS) scores. A total of 11,900 images were used for training/validation and 1,322 for testing. We developed a convolutional neural network (CNN) with a DenseNet architecture pre-trained on ImageNet as a feature extractor and trained a classifier tailored to distinguish AO from non-AO images using binary cross-entropy loss. Results The deep convolutional neural network correctly classified AO and non-AO images with an accuracy of 94 %. The area under the receiver operating characteristic curve of this neural network was 0.98. The sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm were 0.96, 0.92, 0.92, and 0.96, respectively. AO detection was > 95 % regardless of BBPS score, while non-AO detection improved from BBPS 1 (83.95 %) to BBPS 3 (98.28 %). Conclusions A deep convolutional neural network was created that demonstrates excellent discrimination between AO and non-AO images despite variable bowel preparation. This algorithm will require further testing to ascertain its effectiveness in real-time colonoscopy.
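To make the training setup described above concrete, the following is a minimal sketch, not the authors' published code, of an ImageNet-pretrained DenseNet used as a frozen feature extractor with a single-logit head trained under binary cross-entropy; the DenseNet-121 variant, input size, and optimizer settings are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained DenseNet used purely as a feature extractor (weights frozen).
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False

# Replace the 1000-class ImageNet head with a single logit: AO vs. non-AO.
backbone.classifier = nn.Linear(backbone.classifier.in_features, 1)

criterion = nn.BCEWithLogitsLoss()                      # binary cross-entropy on the logit
optimizer = torch.optim.Adam(backbone.classifier.parameters(), lr=1e-3)

def train_step(frames, labels):
    """frames: (N, 3, 224, 224) colonoscopy images; labels: (N,) with 1 = AO, 0 = non-AO."""
    logits = backbone(frames).squeeze(1)
    loss = criterion(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```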

2021, Vol 160 (6), pp. S-376-S-377
Author(s): Daniel J. Low, Zhuoqiao Hong, Sechiv Jugnundan, Anjishnu Mukherjee, Samir C. Grover

2021, Vol 21 (1)
Author(s): Pei Yang, Yong Pi, Tao He, Jiangming Sun, Jianan Wei, ...

Abstract Background 99mTc-pertechnetate thyroid scintigraphy is a valid complementary modality for evaluating thyroid disease in the clinic. Although the image features of thyroid scintigrams are relatively simple, their interpretation still shows only moderate consistency among physicians. We therefore aimed to develop an artificial intelligence (AI) system to automatically classify the four common patterns of thyroid scintigrams. Methods We collected 3087 thyroid scintigrams from center 1 to construct the training dataset (n = 2468) and internal validation dataset (n = 619), and another 302 cases from center 2 as an external validation dataset. Four pre-trained neural networks, ResNet50, DenseNet169, InceptionV3, and InceptionResNetV2, were implemented to construct the AI models. The models were trained separately with transfer learning. We evaluated each model’s performance with the following metrics: accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), recall, precision, and F1-score. Results The overall accuracy of all four pre-trained neural networks in classifying the four common uptake patterns of thyroid scintigrams exceeded 90%, with InceptionV3 standing out from the others: it reached the highest performance, with an overall accuracy of 92.73% for internal validation and 87.75% for external validation. For each category of thyroid scintigrams, the area under the receiver operating characteristic curve (AUC) in internal validation was 0.986 for ‘diffusely increased,’ 0.997 for ‘diffusely decreased,’ 0.998 for ‘focal increased,’ and 0.945 for ‘heterogeneous uptake,’ respectively. The corresponding AUCs in external validation were 0.939, 1.000, 0.974, and 0.915, respectively. Conclusions The deep convolutional neural network-based AI model showed considerable performance in the classification of thyroid scintigrams and may help physicians interpret thyroid scintigrams more consistently and efficiently.
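A hedged sketch of the transfer-learning recipe described above, using torchvision's ImageNet-pretrained InceptionV3 (the best-performing backbone in the abstract) fine-tuned for the four uptake patterns; the input size, auxiliary-loss weight, and optimizer settings are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # diffusely increased / diffusely decreased / focal increased / heterogeneous uptake

# ImageNet-pretrained backbone; both the main and auxiliary heads are re-sized to 4 classes.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

def train_step(images, labels):
    """images: (N, 3, 299, 299) scintigrams; labels: (N,) class indices in [0, 3]."""
    model.train()
    main_logits, aux_logits = model(images)                # InceptionV3 returns both heads in train mode
    loss = criterion(main_logits, labels) + 0.4 * criterion(aux_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```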


2020
Author(s): Sajid Ahmed, Rafsanjani Muhammod, Sheikh Adilina, Zahid Hossain Khan, Swakkhar Shatabda, ...

Abstract Although advancing therapeutic alternatives for treating deadly cancers has gained much attention globally, the primary methods, such as chemotherapy, still have significant downsides and low specificity. Most recently, anticancer peptides (ACPs) have emerged as a potential therapeutic alternative with far fewer negative side effects. However, the identification of ACPs through wet-lab experiments is expensive and time-consuming, so computational methods have emerged as viable alternatives. During the past few years, several computational ACP identification techniques using hand-engineered features have been proposed to solve this problem. In this study, we propose a new multi-headed deep convolutional neural network model, called ACP-MHCNN, for extracting and combining discriminative features from different information sources in an interactive way. Our model extracts sequence, physicochemical, and evolutionary features for ACP identification through simultaneous interaction with different numerical peptide representations while restraining parameter overhead. Rigorous experiments using cross-validation and an independent test dataset show that ACP-MHCNN outperforms other models for anticancer peptide identification by a substantial margin: it exceeds the state-of-the-art model by 6.3%, 8.6%, 3.7%, and 4.0% in accuracy, sensitivity, specificity, and precision, and by 0.20 in MCC. ACP-MHCNN and its relevant codes and datasets are publicly available at: https://github.com/mrzResearchArena/Anticancer-Peptides-CNN.
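The multi-headed design can be illustrated with the following minimal sketch, which is not the authors' implementation (that is available at the linked repository): three parallel 1-D convolutional heads, one per peptide representation, whose pooled features are concatenated and passed to a shared binary classifier. The channel counts, kernel widths, and sequence length are assumptions.

```python
import torch
import torch.nn as nn

class ConvHead(nn.Module):
    """One branch: 1-D convolution over (N, channels, length) followed by global max pooling."""
    def __init__(self, in_channels, out_channels=64, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x)).amax(dim=-1)

class MultiHeadACP(nn.Module):
    def __init__(self, seq_channels=21, phys_channels=10, evo_channels=20):
        super().__init__()
        self.seq_head = ConvHead(seq_channels)    # e.g. one-hot encoded residues
        self.phys_head = ConvHead(phys_channels)  # e.g. physicochemical property scales
        self.evo_head = ConvHead(evo_channels)    # e.g. evolutionary (PSSM-style) profiles
        self.classifier = nn.Sequential(nn.Linear(64 * 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, seq, phys, evo):
        fused = torch.cat([self.seq_head(seq), self.phys_head(phys), self.evo_head(evo)], dim=1)
        return self.classifier(fused).squeeze(1)  # single logit: ACP vs. non-ACP

model = MultiHeadACP()
logits = model(torch.randn(4, 21, 50), torch.randn(4, 10, 50), torch.randn(4, 20, 50))  # dummy peptides
```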


2021, Vol 11 (1)
Author(s): Sajid Ahmed, Rafsanjani Muhammod, Zahid Hossain Khan, Sheikh Adilina, Alok Sharma, ...

Abstract Although advancing therapeutic alternatives for treating deadly cancers has gained much attention globally, the primary methods, such as chemotherapy, still have significant downsides and low specificity. Most recently, anticancer peptides (ACPs) have emerged as a potential therapeutic alternative with far fewer negative side effects. However, the identification of ACPs through wet-lab experiments is expensive and time-consuming, so computational methods have emerged as viable alternatives. During the past few years, several computational ACP identification techniques using hand-engineered features have been proposed to solve this problem. In this study, we propose a new multi-headed deep convolutional neural network model, called ACP-MHCNN, for extracting and combining discriminative features from different information sources in an interactive way. Our model extracts sequence, physicochemical, and evolutionary features for ACP identification using different numerical peptide representations while restraining parameter overhead. Rigorous experiments using cross-validation and an independent test dataset show that ACP-MHCNN outperforms other models for anticancer peptide identification by a substantial margin on our employed benchmarks: it exceeds the state-of-the-art model by 6.3%, 8.6%, 3.7%, and 4.0% in accuracy, sensitivity, specificity, and precision, and by 0.20 in MCC. ACP-MHCNN and its relevant codes and datasets are publicly available at: https://github.com/mrzResearchArena/Anticancer-Peptides-CNN. ACP-MHCNN is also publicly available as an online predictor at: https://anticancer.pythonanywhere.com/.
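For reference, the binary-classification metrics reported above (accuracy, sensitivity, specificity, precision, and MCC) can be computed as in the following illustrative scikit-learn snippet; the labels and predictions are dummy values, not data from the study.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             matthews_corrcoef, precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # dummy ground-truth labels (1 = ACP)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]   # dummy predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "accuracy":    accuracy_score(y_true, y_pred),
    "sensitivity": recall_score(y_true, y_pred),   # TP / (TP + FN)
    "specificity": tn / (tn + fp),                 # TN / (TN + FP)
    "precision":   precision_score(y_true, y_pred),
    "mcc":         matthews_corrcoef(y_true, y_pred),
}
print(metrics)
```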


2020, pp. bjophthalmol-2020-316526
Author(s): Yo-Ping Huang, Haobijam Basanta, Eugene Yu-Chuan Kang, Kuan-Jen Chen, Yih-Shiou Hwang, ...

Background/Aim To automatically detect and classify the early stages of retinopathy of prematurity (ROP) using a deep convolutional neural network (CNN). Methods This retrospective cross-sectional study was conducted in a referral medical centre in Taiwan. Only premature infants with no ROP, stage 1 ROP, or stage 2 ROP were enrolled. Overall, 11 372 retinal fundus images were compiled and split into 10 235 images (90%) for training, 1137 (10%) for validation, and 244 for testing. A deep CNN was implemented to classify images according to the ROP stage. Data were collected from December 17, 2013 to May 24, 2019 and analysed from December 2018 to January 2020. The metrics of sensitivity, specificity, and area under the receiver operating characteristic curve were adopted to evaluate the performance of the algorithm relative to the reference standard diagnosis. Results The model was trained using fivefold cross-validation, yielding an average accuracy of 99.93% ± 0.03 during training and 92.23% ± 1.39 during testing. The sensitivity and specificity scores of the model were 96.14% ± 0.87 and 95.95% ± 0.48, 91.82% ± 2.03 and 94.50% ± 0.71, and 89.81% ± 1.82 and 98.99% ± 0.40 when predicting no ROP versus ROP, stage 1 ROP versus no ROP and stage 2 ROP, and stage 2 ROP versus no ROP and stage 1 ROP, respectively. Conclusions The proposed system can accurately differentiate among early ROP stages and has the potential to help ophthalmologists classify ROP at an early stage.
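The fivefold cross-validation protocol mentioned above can be sketched as follows; this is an illustration using stratified folds, random placeholder features, and a stand-in classifier rather than the authors' CNN pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
features = rng.normal(size=(11372, 32))   # stand-in for per-image features; real input would be fundus images
labels = rng.integers(0, 3, size=11372)   # 0 = no ROP, 1 = stage 1 ROP, 2 = stage 2 ROP (random placeholders)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_acc = []
for train_idx, test_idx in skf.split(features, labels):
    clf = LogisticRegression(max_iter=1000)               # placeholder for the deep CNN
    clf.fit(features[train_idx], labels[train_idx])
    fold_acc.append(accuracy_score(labels[test_idx], clf.predict(features[test_idx])))

print(f"mean accuracy over 5 folds: {np.mean(fold_acc):.3f} +/- {np.std(fold_acc):.3f}")
```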


Endoscopy, 2020
Author(s): Atsuo Yamada, Ryota Niikura, Keita Otani, Tomonori Aoki, Kazuhiko Koike

Abstract Background Although colorectal neoplasms are the most common abnormalities found in colon capsule endoscopy (CCE), no computer-aided detection method is yet available. We developed an artificial intelligence (AI) system that uses deep learning to automatically detect such lesions in CCE images. Methods We trained a deep convolutional neural network system based on a Single Shot MultiBox Detector using 15 933 CCE images of colorectal neoplasms, such as polyps and cancers. We assessed performance by calculating areas under the receiver operating characteristic curves, along with sensitivities, specificities, and accuracies, using an independent test set of 4784 images, including 1850 images of colorectal neoplasms and 2934 normal colon images. Results The area under the curve for detection of colorectal neoplasia by the AI model was 0.902. The sensitivity, specificity, and accuracy were 79.0 %, 87.0 %, and 83.9 %, respectively, at a probability cutoff of 0.348. Conclusions We developed and validated a new AI-based system that automatically detects colorectal neoplasms in CCE images.
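As an illustration of the evaluation described above, the snippet below thresholds per-image neoplasm probabilities at a fixed cutoff (0.348 in the abstract) and reports AUC, sensitivity, specificity, and accuracy; the probabilities are simulated, not outputs of the published detector.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(1850), np.zeros(2934)])            # neoplasm vs. normal images
y_prob = np.clip(rng.normal(0.6, 0.25, 4784) * y_true +
                 rng.normal(0.3, 0.20, 4784) * (1 - y_true), 0, 1)  # simulated detector scores

cutoff = 0.348                                                      # probability cutoff quoted above
y_pred = (y_prob >= cutoff).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print(f"AUC:         {roc_auc_score(y_true, y_prob):.3f}")
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")
print(f"accuracy:    {(tp + tn) / len(y_true):.3f}")
```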


2020, Vol 7
Author(s): Junrong Jiang, Hai Deng, Yumei Xue, Hongtao Liao, Shulin Wu

Background: Left atrial enlargement (LAE) can independently predict the development of a variety of cardiovascular diseases. Objectives: This study sought to develop an artificial intelligence approach for the detection of LAE based on 12-lead electrocardiography (ECG). Methods: The study population came from an epidemiological survey of heart disease in Guangzhou. A total of 3,391 elderly people over 65 years of age who had both a 10-s 12-lead ECG and echocardiography were enrolled in this study. A left atrial (LA) anteroposterior diameter > 40 mm on echocardiography was diagnosed as LAE, and the LA anteroposterior diameter was indexed to body surface area (BSA) to classify LAE into different degrees. A convolutional neural network (CNN) was trained and validated to distinguish LAE from normal ECGs. The performance of the model was evaluated by calculating the area under the curve (AUC), accuracy, sensitivity, specificity, and F1 score. Results: In this study, gender, obesity, hypertension, and valvular heart disease appeared to be related to left atrial enlargement. The AI-enabled ECG identified LAE with an AUC of 0.949 (95% CI: 0.911–0.987). The sensitivity, specificity, accuracy, precision, and F1 score were 84.0%, 92.0%, 88.0%, 91.3%, and 0.875, respectively. Physicians identified LAE with a sensitivity, specificity, accuracy, precision, and F1 score of 38.0%, 84.0%, 61.0%, 70.4%, and 0.494, respectively. In classifying LAE of different degrees, the AUCs for identifying normal, mild LAE, and moderate-severe LAE ECGs were 0.942 (95% CI: 0.903–0.981), 0.951 (95% CI: 0.917–0.987), and 0.998 (95% CI: 0.996–1.00), respectively. The sensitivity, specificity, accuracy, positive predictive value, and F1 score for diagnosing mild LAE were 82.0%, 92.0%, 88.7%, 89.1%, and 0.854, while those for diagnosing moderate-severe LAE were 98.0%, 84.0%, 88.7%, 96.1%, and 0.969, respectively. Conclusions: An AI-enabled ECG acquired during sinus rhythm permits identification of individuals with a high likelihood of LAE. This model requires further refinement and external validation, but it may hold promise for LAE screening.
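A minimal sketch of how a CNN can ingest a 10-s, 12-lead ECG for binary LAE detection is shown below; the architecture, assumed 500 Hz sampling rate, and layer sizes are illustrative assumptions, not the published model.

```python
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    """Toy 1-D CNN for a 10-s, 12-lead ECG assumed to be sampled at 500 Hz (12 x 5000)."""
    def __init__(self, leads=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=15, stride=2, padding=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, 1)     # single logit: LAE vs. normal

    def forward(self, x):                        # x: (N, 12, 5000)
        return self.classifier(self.features(x).flatten(1)).squeeze(1)

model = ECGNet()
logits = model(torch.randn(8, 12, 5000))         # batch of eight dummy 10-s recordings
probs = torch.sigmoid(logits)                    # per-recording probability of LAE
```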


2019, Vol 2019, pp. 1-14
Author(s): Yung-Hui Li, Nai-Ning Yeh, Shih-Jen Chen, Yu-Chien Chung

Diabetic retinopathy (DR) is a complication of long-standing diabetes that is hard to detect in its early stage because it causes only a few symptoms. Nowadays, the diagnosis of DR usually requires digital fundus images as well as images from optical coherence tomography (OCT). Since OCT equipment is very expensive, it would benefit both patients and ophthalmologists if an accurate diagnosis could be made solely from digital fundus images. In this paper, we present a novel algorithm based on a deep convolutional neural network (DCNN). Unlike the traditional DCNN approach, we replace the commonly used max-pooling layers with fractional max-pooling. Two such DCNNs with different numbers of layers are trained to derive more discriminative features for classification. After combining features from the image metadata and the DCNNs, we train a support vector machine (SVM) classifier to learn the underlying boundaries between the class distributions. For the experiments, we used the publicly available DR detection database provided by Kaggle. We used 34,124 training images and 1,000 validation images to build our model and tested it with 53,572 test images. The proposed DR classifier assigns the stages of DR to five categories, labeled with an integer ranging from zero to four. The experimental results show that the proposed method achieves a recognition rate of up to 86.17%, which is higher than previously reported in the literature. In addition to designing a machine learning algorithm, we also develop an app called “Deep Retina.” Equipped with a handheld ophthalmoscope, the average person can take fundus images by themselves and obtain an immediate result calculated by our algorithm. This is beneficial for home care, remote medical care, and self-examination.
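The two key ideas above, fractional max-pooling inside the DCNN and an SVM trained on the fused DCNN-plus-metadata features, can be sketched as follows; all layer sizes, the metadata dimension, and the dummy data are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FMPBlock(nn.Module):
    """Conv -> ReLU -> fractional max-pooling (output spatial size is about 0.7x the input)."""
    def __init__(self, cin, cout):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cin, cout, kernel_size=3, padding=1), nn.ReLU(),
            nn.FractionalMaxPool2d(kernel_size=2, output_ratio=0.7),
        )

    def forward(self, x):
        return self.net(x)

# Untrained toy extractor standing in for the trained DCNNs.
extractor = nn.Sequential(FMPBlock(3, 16), FMPBlock(16, 32), FMPBlock(32, 64),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())

with torch.no_grad():
    fundus = torch.randn(100, 3, 128, 128)       # dummy fundus images
    cnn_feats = extractor(fundus)                 # (100, 64) deep features
meta = torch.rand(100, 4)                         # dummy image metadata
X = torch.cat([cnn_feats, meta], dim=1).numpy()   # fused feature vectors
y = torch.randint(0, 5, (100,)).numpy()           # dummy DR stages 0-4

svm = SVC(kernel="rbf")                           # final classifier on the fused features
svm.fit(X, y)
print(svm.predict(X[:5]))
```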

