Gallbladder Polyp Classification in Ultrasound Images Using an Ensemble Convolutional Neural Network Model

2021, Vol 10 (16), pp. 3585
Author(s): Taewan Kim, Young Hoon Choi, Jin Ho Choi, Sang Hyub Lee, Seungchul Lee, ...

Differential diagnosis of true gallbladder polyps remains a challenging task. This study aimed to differentiate true polyps in ultrasound images using deep learning, especially gallbladder polyps less than 20 mm in size, where clinical distinction is necessary. A total of 501 patients with gallbladder polyp pathology confirmed through cholecystectomy were enrolled from two tertiary hospitals. Abdominal ultrasound images of gallbladder polyps from these patients were analyzed using an ensemble model combining three convolutional neural network (CNN) models and 5-fold cross-validation. Diagnosis of true polyps by the ensemble model trained on ultrasonography images alone achieved an area under the receiver operating characteristic curve (AUC) of 0.8960 and an accuracy of 83.63%. After adding patient age and polyp size information, the diagnostic performance of the ensemble model improved, with a high specificity of 88.35%, an AUC of 0.9082, and an accuracy of 87.61%, outperforming the individual CNN models constituting the ensemble. In the subgroup analysis, the ensemble model showed the best performance for polyps larger than 10 mm, with an AUC of 0.9131. Our proposed ensemble model, which combines three CNN models, classifies gallbladder polyps smaller than 20 mm in ultrasonography images with high accuracy, and its high specificity can help avoid unnecessary cholecystectomy.
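A minimal sketch, assuming PyTorch and torchvision, of the kind of soft-voting ensemble with clinical-feature fusion described above. The choice of three identical ResNet50 backbones and the small fusion head are illustrative assumptions; the abstract does not name the three CNN architectures or the exact fusion mechanism.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_backbone(n_classes: int = 2) -> nn.Module:
    """One ImageNet-pretrained CNN with its head replaced for polyp classification."""
    net = models.resnet50(weights="IMAGENET1K_V1")
    net.fc = nn.Linear(net.fc.in_features, n_classes)
    return net

class PolypEnsemble(nn.Module):
    """Averages the softmax outputs of three CNNs, then fuses age and polyp size."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Stand-ins for the paper's three CNNs (architectures unspecified there).
        self.cnns = nn.ModuleList([make_backbone(n_classes) for _ in range(3)])
        # Small head over [mean CNN probabilities, age, polyp size] -- an assumption.
        self.fusion = nn.Sequential(
            nn.Linear(n_classes + 2, 16), nn.ReLU(), nn.Linear(16, n_classes)
        )

    def forward(self, image: torch.Tensor, age: torch.Tensor, size_mm: torch.Tensor):
        # Soft voting: average class probabilities across the three CNNs.
        probs = torch.stack(
            [cnn(image).softmax(dim=1) for cnn in self.cnns]
        ).mean(dim=0)
        clinical = torch.stack([age, size_mm], dim=1).float()
        return self.fusion(torch.cat([probs, clinical], dim=1))
```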

BMJ Open, 2021, Vol 11 (3), pp. e045120
Author(s): Robert Arntfield, Blake VanBerlo, Thamer Alaifan, Nathan Phelps, Matthew White, ...

Objectives: Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning (DL) techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images.
Design: A convolutional neural network (CNN) was trained on LUS images with B lines of different aetiologies. CNN diagnostic performance, as validated using a 10% data holdback set, was compared with surveyed LUS-competent physicians.
Setting: Two tertiary Canadian hospitals.
Participants: 612 LUS videos (121 381 frames) of B lines from 243 distinct patients with either (1) COVID-19 (COVID), (2) non-COVID acute respiratory distress syndrome (NCOVID), or (3) hydrostatic pulmonary edema (HPE).
Results: The trained CNN performance on the independent dataset showed an ability to discriminate between COVID (area under the receiver operating characteristic curve (AUC) 1.0), NCOVID (AUC 0.934), and HPE (AUC 1.0) pathologies. This was significantly better than physician ability (AUCs of 0.697, 0.704, and 0.967 for the COVID, NCOVID, and HPE classes, respectively), p<0.01.
Conclusions: A DL model can distinguish similar-appearing LUS pathology, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers within ultrasound images could exist, and multicentre research is merited.
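Because many frames come from each of the 243 patients, a holdback split like the 10% set above should be made at the patient level so no patient contributes frames to both partitions. A minimal sketch using scikit-learn's GroupShuffleSplit on toy stand-in arrays; the variable names and the use of GroupShuffleSplit are assumptions, since the paper states only that a 10% holdback set was used.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_frames = 1000                                    # toy stand-in for 121 381 frames
frame_labels = rng.integers(0, 3, size=n_frames)   # 0=COVID, 1=NCOVID, 2=HPE
patient_ids = rng.integers(0, 50, size=n_frames)   # toy stand-in for 243 patients

# Hold back ~10% of frames, grouped by patient to prevent leakage across splits.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.10, random_state=42)
train_idx, holdback_idx = next(
    splitter.split(np.zeros((n_frames, 1)), frame_labels, groups=patient_ids)
)

# No patient appears on both sides of the split.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[holdback_idx])
```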


2021, Vol 11 (1)
Author(s): Heng Ye, Jing Hang, Meimei Zhang, Xiaowei Chen, Xinhua Ye, ...

Abstract: Triple-negative (TN) breast cancer is a subtype of breast cancer that is difficult to detect early and has a poor prognosis. In this paper, 910 benign and 934 malignant (110 TN and 824 non-TN (NTN)) B-mode breast ultrasound images were collected, and a ResNet50 deep convolutional neural network (DCNN) was fine-tuned. The results showed that the averaged areas under the receiver operating characteristic curve (AUC) for discriminating malignant from benign images were 0.9789 (benign vs. TN) and 0.9689 (benign vs. NTN). For discriminating TN from NTN breast cancer, the AUC was 0.9000, the accuracy was 88.89%, the sensitivity was 87.50%, and the specificity was 90.00%. This suggests that a computer-aided system based on the DCNN could be a promising noninvasive clinical tool for ultrasound diagnosis of TN breast cancer.
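A minimal sketch of fine-tuning a pretrained ResNet50 for one of the binary tasks above (e.g. benign vs. TN), assuming PyTorch and torchvision. Freezing the convolutional base and the specific optimizer settings are illustrative assumptions, not the paper's reported training recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")   # ImageNet-pretrained weights
for p in model.parameters():
    p.requires_grad = False                        # freeze the convolutional base
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: benign vs. TN

# Only the new classification head is updated in this sketch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of ultrasound images; returns the loss."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```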


Author(s): Oguz Akbilgic, Liam Butler, Ibrahim Karabayir, Patricia P Chang, Dalane W Kitzman, ...

Abstract
Aims: Heart failure (HF) is a leading cause of death, and early intervention is the key to reducing HF-related morbidity and mortality. This study assesses the utility of electrocardiograms (ECGs) in HF risk prediction.
Methods and results: Data from the baseline visits (1987–89) of the Atherosclerosis Risk in Communities (ARIC) study were used. Incident hospitalized HF events were ascertained by ICD codes. Participants with good-quality baseline ECGs were included; participants with prevalent HF were excluded. An ECG-artificial intelligence (AI) model to predict HF was created as a deep residual convolutional neural network (CNN) utilizing the standard 12-lead ECG. The area under the receiver operating characteristic curve (AUC) was used to evaluate prediction models, including the CNN, light gradient boosting machines (LGBM), and Cox proportional hazards regression. A total of 14 613 participants (45% male, 73% white, mean age ± standard deviation of 54 ± 5) were eligible, of whom 803 (5.5%) developed HF within 10 years from baseline. The CNN utilizing solely the ECG achieved an AUC of 0.756 (0.717–0.795) on the hold-out test data. The ARIC and Framingham Heart Study (FHS) HF risk calculators yielded AUCs of 0.802 (0.750–0.850) and 0.780 (0.740–0.830), respectively. The highest AUC of 0.818 (0.778–0.859) was obtained when the ECG-AI model output, age, gender, race, body mass index, smoking status, prevalent coronary heart disease, diabetes mellitus, systolic blood pressure, and heart rate were used as predictors of HF within LGBM. The ECG-AI model output was the most important predictor of HF.
Conclusions: An ECG-AI model based solely on information extracted from the ECG independently predicts HF with accuracy comparable to the existing FHS and ARIC risk calculators.
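A minimal sketch of the fusion step described above: a LightGBM classifier over the ECG-AI model's output plus the listed clinical covariates. The column names, hyperparameters, and the assumption that covariates are numerically encoded are all illustrative; the paper does not publish this pipeline code.

```python
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical column names for the ten predictors named in the abstract;
# categorical covariates are assumed to be numerically encoded beforehand.
FEATURES = ["ecg_ai_output", "age", "gender", "race", "bmi",
            "smoking_status", "prevalent_chd", "diabetes",
            "systolic_bp", "heart_rate"]

def fit_hf_model(train: pd.DataFrame, test: pd.DataFrame) -> float:
    """Trains LGBM on the fused features and returns the held-out AUC."""
    clf = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
    clf.fit(train[FEATURES], train["incident_hf_10yr"])
    pred = clf.predict_proba(test[FEATURES])[:, 1]
    return roc_auc_score(test["incident_hf_10yr"], pred)
```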


Author(s): Victoria Wu

Introduction: Scoliosis, an excessive curvature of the spine, affects approximately 1 in 1,000 individuals, and mandatory scoliosis screening programs were formerly implemented as a result. Such programs are no longer widely used, as the harms often outweigh the benefits: they cause many adolescents to undergo frequent diagnostic X-ray procedures and the associated radiation exposure. This makes spinal ultrasound an ideal substitute for scoliosis screening, as it does not expose patients to those levels of radiation. Spinal curvature can be accurately computed from the locations of the spinal transverse processes, by measuring the vertebral angle from a reference line [1]. However, ultrasound images are less clear than X-ray images, making it difficult to identify the transverse processes. To overcome this, we employ deep learning using a convolutional neural network, a powerful tool for computer vision and image classification [2].
Method: A total of 2,752 ultrasound images were recorded from a spine phantom to train a convolutional neural network, and a further recording of 747 images was used for testing. All ultrasound images from the scans were segmented manually using the 3D Slicer (www.slicer.org) software. The dataset was then fed through a convolutional neural network: a modified version of GoogLeNet (Inception v1) with 2 linearly stacked inception modules (a minimal sketch of one such module follows this abstract). This network was chosen because it provided a balance between accurate performance and time-efficient computation.
Results: Deep learning classification using the Inception model achieved an accuracy of 84% for the phantom scan.
Conclusion: The classification model performs with considerable accuracy. Better accuracy still needs to be achieved, possibly with more available data and improvements in the classification model.
Acknowledgements: G. Fichtinger is supported as a Canada Research Chair in Computer-Integrated Surgery. This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1 - Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE's Research Software Program.
Figure 1: Ultrasound scan containing a transverse process (left), and ultrasound scan containing no transverse process (right).
Figure 2: Accuracy of classification for training (red) and validation (blue).
References:
[1] Ungi T, King F, Kempston M, Keri Z, Lasso A, Mousavi P, Rudan J, Borschneck DP, Fichtinger G. Spinal Curvature Measurement by Tracked Ultrasound Snapshots. Ultrasound in Medicine and Biology, 40(2):447-54, Feb 2014.
[2] Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25:1097-1105, 2012.
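A minimal sketch, in PyTorch, of a GoogLeNet-style inception module of the kind the modified network above stacks twice: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, concatenated along the channel axis. The channel counts are illustrative assumptions, not the dimensions used in the study.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1, 3x3, 5x5 and pooled branches, concatenated on channels."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, kernel_size=1),
                                nn.Conv2d(16, 24, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 4, kernel_size=1),
                                nn.Conv2d(4, 8, kernel_size=5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 8, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Two linearly stacked modules, as in the modified GoogLeNet described above;
# the second module's input width is the first module's 16+24+8+8 = 56 channels.
stacked = nn.Sequential(InceptionModule(1), InceptionModule(56))
features = stacked(torch.randn(1, 1, 64, 64))   # e.g. one grayscale ultrasound patch
```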


Author(s): Arun Asokan Nair, Mardava Rajugopal Gubbi, Trac Duy Tran, Austin Reiter, Muyinatu A. Lediju Bell

2020, Vol 79 (9), pp. 1189-1193
Author(s): Anders Bossel Holst Christensen, Søren Andreas Just, Jakob Kristian Holm Andersen, Thiusius Rajeeth Savarimuthu

Objectives: We have previously shown that neural network technology can be used for scoring arthritis disease activity in ultrasound images from rheumatoid arthritis (RA) patients, giving scores according to the EULAR-OMERACT grading system. We have now further developed the architecture of this neural network and here present a new cascaded convolutional neural network (CNN) design with even better results. We evaluate the generalisability of this method on unseen data, comparing the CNN with an expert rheumatologist.
Methods: The images were graded by an expert rheumatologist according to the EULAR-OMERACT synovitis scoring system. CNNs were systematically trained to find the best configuration. The algorithms were evaluated on a separate test data set and compared with the gradings of an expert rheumatologist on a per-joint basis using a kappa statistic, and on a per-patient basis using a Wilcoxon signed-rank test.
Results: With 1678 images available for training and 322 images for testing, the model achieved an overall four-class accuracy of 83.9%. On a per-patient level, there was no significant difference between the classifications of the model and of a human expert (p=0.85). Our original CNN had a four-class accuracy of 75.0%.
Conclusions: Using a new network architecture, we have further enhanced the algorithm and have shown strong agreement with an expert rheumatologist on both a per-joint and a per-patient basis. This emphasises the potential of CNNs with this architecture as a strong assistive tool for the objective assessment of disease activity in RA patients.
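A minimal sketch of one plausible cascaded design for four-class EULAR-OMERACT grading: a first CNN separates grade 0 from grades 1-3, and a second CNN grades the remaining images. The abstract does not specify the actual cascade, so the staging, the ResNet18 backbones, and the helper names below are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_cnn(n_out: int) -> nn.Module:
    """A small pretrained backbone with a fresh classification head."""
    net = models.resnet18(weights="IMAGENET1K_V1")
    net.fc = nn.Linear(net.fc.in_features, n_out)
    return net

stage1 = make_cnn(2)   # grade 0 vs. grades 1-3
stage2 = make_cnn(3)   # grade 1 vs. 2 vs. 3

@torch.no_grad()
def grade(image: torch.Tensor) -> torch.Tensor:
    """Returns a predicted EULAR-OMERACT grade 0-3 for each image in the batch."""
    healthy = stage1(image).argmax(dim=1) == 0
    severity = stage2(image).argmax(dim=1) + 1        # maps stage-2 output to 1-3
    return torch.where(healthy, torch.zeros_like(severity), severity)
```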

