Development of a convolutional neural network to differentiate among the etiology of similar appearing pathological B lines on lung ultrasound: a deep learning study

BMJ Open ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. e045120
Author(s):  
Robert Arntfield ◽  
Blake VanBerlo ◽  
Thamer Alaifan ◽  
Nathan Phelps ◽  
Matthew White ◽  
...  

Objectives: Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning (DL) techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images. Design: A convolutional neural network (CNN) was trained on LUS images with B lines of different aetiologies. CNN diagnostic performance, as validated using a 10% data holdback set, was compared with surveyed LUS-competent physicians. Setting: Two tertiary Canadian hospitals. Participants: 612 LUS videos (121 381 frames) of B lines from 243 distinct patients with either (1) COVID-19 (COVID), (2) non-COVID acute respiratory distress syndrome (NCOVID) or (3) hydrostatic pulmonary edema (HPE). Results: The trained CNN performance on the independent dataset showed an ability to discriminate between COVID (area under the receiver operating characteristic curve (AUC) 1.0), NCOVID (AUC 0.934) and HPE (AUC 1.0) pathologies. This was significantly better than physician ability (AUCs of 0.697, 0.704, 0.967 for the COVID, NCOVID and HPE classes, respectively), p<0.01. Conclusions: A DL model can distinguish similar-appearing LUS pathology, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers within ultrasound images could exist and multicentre research is merited.
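The per-class AUCs reported above correspond to a one-vs-rest evaluation of a three-class classifier. A minimal sketch of that style of evaluation, using synthetic stand-in predictions (the labels, scores, and sample count here are illustrative, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated per-frame softmax outputs for three classes; the +2.0 shift
# makes the scores informative so the sketch produces meaningful AUCs.
rng = np.random.default_rng(0)
n = 200
y_true = rng.integers(0, 3, size=n)              # ground-truth class per frame
logits = rng.normal(size=(n, 3))
logits[np.arange(n), y_true] += 2.0
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# One-vs-rest AUC per class, mirroring how per-class AUCs are reported
class_names = ["COVID", "NCOVID", "HPE"]
aucs = {name: roc_auc_score((y_true == c).astype(int), probs[:, c])
        for c, name in enumerate(class_names)}
```

Each class is scored against the rest using its own softmax column, which is the standard way to obtain per-class AUCs from a multiclass model.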



Author(s):  
Victoria Wu

Introduction: Scoliosis, an excessive curvature of the spine, affects approximately 1 in 1,000 individuals. As a result, mandatory scoliosis screening procedures were formerly implemented. Screening programs are no longer widely used because the harms often outweigh the benefits: they cause many adolescents to undergo frequent diagnostic X-ray procedures. This makes spinal ultrasound an ideal substitute for scoliosis screening, as it does not expose patients to radiation. Spinal curvature can be accurately computed from the locations of the spinal transverse processes, by measuring the vertebral angle from a reference line [1]. However, ultrasound images are less clear than X-ray images, making it difficult to identify the transverse processes. To overcome this, we employ deep learning using a convolutional neural network (CNN), a powerful tool for computer vision and image classification [2]. Method: A total of 2,752 ultrasound images were recorded from a spine phantom to train a convolutional neural network. A further recording of 747 images was used for testing. All ultrasound images from the scans were segmented manually using the 3D Slicer (www.slicer.org) software. The dataset was then fed through a convolutional neural network, a modified version of GoogLeNet (Inception v1) with 2 linearly stacked inception modules. This network was chosen because it balances accurate performance with time-efficient computation. Results: Deep learning classification using the Inception model achieved an accuracy of 84% on the phantom scan. Conclusion: The classification model performs with considerable accuracy. Better accuracy needs to be achieved, possibly with more available data and improvements to the classification model. Acknowledgements: G. Fichtinger is supported as a Canada Research Chair in Computer-Integrated Surgery.
This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1 - Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE's Research Software Program.
Figure 1: Ultrasound scan containing a transverse process (left), and ultrasound scan containing no transverse process (right).
Figure 2: Accuracy of classification for training (red) and validation (blue).
References:
[1] Ungi T, King F, Kempston M, Keri Z, Lasso A, Mousavi P, Rudan J, Borschneck DP, Fichtinger G. Spinal Curvature Measurement by Tracked Ultrasound Snapshots. Ultrasound in Medicine and Biology, 40(2):447-54, Feb 2014.
[2] Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25:1097-1105, 2012.
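The network above stacks two GoogLeNet-style inception modules. A minimal PyTorch sketch of such a module and a two-module binary classifier (transverse process present vs. absent); the channel counts, input size, and head are assumptions for illustration, not the authors' configuration:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Simplified Inception v1 module: parallel 1x1, 3x3, and 5x5
    convolutions plus a pooled branch, concatenated along channels."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, 16, 1),
                                nn.Conv2d(16, 24, 3, padding=1))
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 4, 1),
                                nn.Conv2d(4, 8, 5, padding=2))
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 8, 1))
    def forward(self, x):
        # All branches preserve spatial size, so channel concat is valid
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], 1)

# Two linearly stacked modules (output channels: 16+24+8+8 = 56),
# followed by a binary classification head
net = nn.Sequential(
    InceptionModule(1), InceptionModule(56),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(56, 2))
out = net(torch.randn(2, 1, 64, 64))   # batch of 2 grayscale patches
```

The parallel branches let the network respond to structures at several scales at once, the property that made Inception a reasonable fit for spotting transverse processes of varying appearance.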


2020 ◽  
Vol 10 (21) ◽  
pp. 7448
Author(s):  
Jorge Felipe Gaviria ◽  
Alejandra Escalante-Perez ◽  
Juan Camilo Castiblanco ◽  
Nicolas Vergara ◽  
Valentina Parra-Garces ◽  
...  

Real-time automatic identification of audio distress signals in urban areas is a task that, in a smart city, can improve response times in emergency alert systems. The main challenge in this problem lies in finding a model that can accurately recognize this type of signal in the presence of background noise and allows for real-time processing. In this paper, we present the design of a portable, low-cost device for accurate audio distress signal recognition in real urban scenarios based on deep learning models. As real audio distress recordings in urban areas had not previously been collected and made publicly available, we first constructed a database in which audios were recorded in urban areas using a low-cost microphone. Using this database, we trained a deep multi-headed 2D convolutional neural network that processed temporal and frequency features to accurately recognize audio distress signals in noisy environments, with a significant performance improvement over other methods from the literature. We then deployed and assessed the trained convolutional neural network model on a Raspberry Pi that, along with the low-cost microphone, constituted a device for accurate real-time audio recognition. Source code and database are publicly available.
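The abstract describes a multi-headed 2D CNN that fuses temporal and frequency representations before classification. A hedged PyTorch sketch of that two-branch pattern; the input shapes, channel counts, and two-class head are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiHeadAudioNet(nn.Module):
    """Two convolutional heads, one for a frequency (spectrogram) input
    and one for a temporal feature map; embeddings are concatenated
    before the final classifier."""
    def __init__(self, n_classes=2):
        super().__init__()
        def head():
            return nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.freq_head = head()
        self.time_head = head()
        self.classifier = nn.Linear(32, n_classes)  # 16 + 16 features

    def forward(self, spec, temp):
        z = torch.cat([self.freq_head(spec), self.time_head(temp)], dim=1)
        return self.classifier(z)

model = MultiHeadAudioNet()
# The two inputs may have different shapes; adaptive pooling makes
# each head emit a fixed-size embedding regardless
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 32, 128))
```

Fusing the heads at the embedding level lets each branch specialize in its own representation of the sound before the classifier combines them.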


2021 ◽  
Vol 10 (16) ◽  
pp. 3585
Author(s):  
Taewan Kim ◽  
Young Hoon Choi ◽  
Jin Ho Choi ◽  
Sang Hyub Lee ◽  
Seungchul Lee ◽  
...  

Differential diagnosis of true gallbladder polyps remains a challenging task. This study aimed to differentiate true polyps in ultrasound images using deep learning, especially gallbladder polyps less than 20 mm in size, where clinical distinction is necessary. A total of 501 patients with gallbladder polyp pathology confirmed through cholecystectomy were enrolled from two tertiary hospitals. Abdominal ultrasound images of gallbladder polyps from these patients were analyzed using an ensemble model combining three convolutional neural network (CNN) models and 5-fold cross-validation. True polyp diagnosis with the ensemble model that learned using only ultrasonography images achieved an area under the receiver operating characteristic curve (AUC) of 0.8960 and an accuracy of 83.63%. After adding patient age and polyp size information, the diagnostic performance of the ensemble model improved, with a high specificity of 88.35%, an AUC of 0.9082, and an accuracy of 87.61%, outperforming the individual CNN models constituting the ensemble. In the subgroup analysis, the ensemble model showed the best performance, with an AUC of 0.9131, for polyps larger than 10 mm. Our proposed ensemble model, which combines three CNN models, classifies gallbladder polyps of less than 20 mm in ultrasonography images with high accuracy and, given its high specificity, can be useful for avoiding unnecessary cholecystectomy.
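The evaluation above combines three CNN outputs with clinical variables (age, polyp size) under 5-fold cross-validation. A sketch of that fusion-and-validation pattern on synthetic data; the three "CNN scores" are simulated stand-ins, and the logistic-regression fusion is an illustrative choice, not the authors' pipeline:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic cohort: label 1 = true polyp. In the paper, the three
# per-image scores come from three trained CNNs; here they are simulated.
rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, size=n)
cnn_scores = y[:, None] * 0.6 + rng.normal(0, 0.5, (n, 3))
age = 40 + 10 * y + rng.normal(0, 8, n)    # hypothetical ages (years)
size = 8 + 4 * y + rng.normal(0, 3, n)     # hypothetical polyp sizes (mm)
X = np.column_stack([cnn_scores, age, size])

# 5-fold cross-validated AUC for the combined (ensemble + clinical) model
aucs = []
for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
mean_auc = float(np.mean(aucs))
```

Stratified folds keep the class balance stable across splits, which matters when reporting a single averaged AUC for a modest cohort.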


2019 ◽  
Vol 147 (8) ◽  
pp. 2827-2845 ◽  
Author(s):  
David John Gagne II ◽  
Sue Ellen Haupt ◽  
Douglas W. Nychka ◽  
Gregory Thompson

Abstract Deep learning models, such as convolutional neural networks, utilize multiple specialized layers to encode spatial patterns at different scales. In this study, deep learning models are compared with standard machine learning approaches on the task of predicting the probability of severe hail based on upper-air dynamic and thermodynamic fields from a convection-allowing numerical weather prediction model. The data for this study come from patches surrounding storms identified in NCAR convection-allowing ensemble runs from 3 May to 3 June 2016. The machine learning models are trained to predict whether the simulated surface hail size from the Thompson hail size diagnostic exceeds 25 mm over the hour following storm detection. A convolutional neural network is compared with logistic regressions using input variables derived from either the spatial means of each field or principal component analysis. The convolutional neural network statistically significantly outperforms all other methods in terms of Brier skill score and area under the receiver operating characteristic curve. Interpretation of the convolutional neural network through feature importance and feature optimization reveals that the network synthesized information about the environment and storm morphology that is consistent with our understanding of hail growth, including large lapse rates and a wind shear profile that favors wide updrafts. Different neurons in the network also record different storm modes, and the magnitude of the output of those neurons is used to analyze the spatiotemporal distributions of different storm modes in the NCAR ensemble.
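One of the two headline metrics above is the Brier skill score, which measures probabilistic forecast quality relative to a reference forecast. A small NumPy sketch using the standard definition with the climatological base rate as the reference (the data here are illustrative, not the study's):

```python
import numpy as np

def brier_skill_score(y_true, p_forecast):
    """BSS = 1 - BS / BS_ref, where BS is the mean squared error of the
    forecast probabilities and BS_ref is the Brier score of always
    forecasting the climatological base rate. BSS > 0 beats climatology."""
    y_true = np.asarray(y_true, dtype=float)
    p_forecast = np.asarray(p_forecast, dtype=float)
    bs = np.mean((p_forecast - y_true) ** 2)
    base_rate = y_true.mean()
    bs_ref = np.mean((base_rate - y_true) ** 2)
    return 1.0 - bs / bs_ref

# A sharp, well-calibrated forecast scores well above climatology
y = np.array([0, 0, 1, 1, 0, 1, 0, 0])
p = np.array([0.1, 0.2, 0.8, 0.7, 0.3, 0.9, 0.2, 0.1])
bss = brier_skill_score(y, p)   # ≈ 0.824
```

Unlike AUC, the Brier skill score penalizes miscalibrated probabilities, which is why severe-weather studies typically report both.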


Cancers ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1235 ◽  
Author(s):  
Khushboo Munir ◽  
Hassan Elahi ◽  
Afsheen Ayub ◽  
Fabrizio Frezza ◽  
Antonello Rizzi

In this paper, we first describe the basics of the field of cancer diagnosis, which includes the steps of cancer diagnosis followed by the typical classification methods used by doctors, providing readers with a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, with a broad audience in mind, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence and cancer diagnosis are gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), the restricted Boltzmann machine (RBM), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM), the multi-scale convolutional neural network (M-CNN), and the multi-instance learning convolutional neural network (MIL-CNN). For each technique, we provide Python codes, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of state-of-the-art achievements.
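Two of the segmentation-oriented criteria listed above, the dice coefficient and the Jaccard index, are simple set-overlap measures on binary masks. A NumPy sketch (the masks are illustrative; the review's own Python codes are not reproduced here):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard_index(a, b):
    """Jaccard = |A ∩ B| / |A ∪ B|; relates to Dice by J = D / (2 - D)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]])   # predicted tumor mask
true = np.array([[1, 0, 0], [0, 1, 1]])   # ground-truth mask
d = dice_coefficient(pred, true)
j = jaccard_index(pred, true)
```

The two metrics rank segmentations identically (they are monotonically related), but Dice weights the overlap more generously, which is why papers often report both.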


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Heng Ye ◽  
Jing Hang ◽  
Meimei Zhang ◽  
Xiaowei Chen ◽  
Xinhua Ye ◽  
...  

Abstract Triple negative (TN) breast cancer is a subtype of breast cancer that is difficult to detect early and has a poor prognosis. In this paper, 910 benign and 934 malignant (110 TN and 824 NTN) B-mode breast ultrasound images were collected. A ResNet50 deep convolutional neural network was fine-tuned. The results showed that the averaged areas under the receiver operating characteristic curve (AUC) for discriminating malignant from benign images were 0.9789 (benign vs. TN) and 0.9689 (benign vs. NTN). For discriminating TN from NTN breast cancer, the AUC was 0.9000, the accuracy was 88.89%, the sensitivity was 87.5%, and the specificity was 90.00%. This shows that a computer-aided system based on a DCNN is expected to be a promising noninvasive clinical tool for ultrasound diagnosis of TN breast cancer.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Chenyi Lin ◽  
Xuefei Song ◽  
Lunhao Li ◽  
Yinwei Li ◽  
Mengda Jiang ◽  
...  

Abstract Background: This study aimed to establish a deep learning system for detecting the active and inactive phases of thyroid-associated ophthalmopathy (TAO) using magnetic resonance imaging (MRI). This system could provide faster, more accurate, and more objective assessments across populations. Methods: A total of 160 MRI images of patients with TAO, who visited the Ophthalmology Clinic of the Ninth People's Hospital, were retrospectively obtained for this study. Of these, 80% were used for training and validation, and 20% were used for testing. The deep learning system, based on a deep convolutional neural network, was established to distinguish patients in the active phase from those in the inactive phase. The accuracy, precision, sensitivity, specificity, F1 score and area under the receiver operating characteristic curve were analyzed. In addition, a visualization method was applied to explain the operation of the networks. Results: Network A was derived from the Visual Geometry Group (VGG) network. Its accuracy, specificity and sensitivity were 0.863±0.055, 0.896±0.042 and 0.750±0.136, respectively. Because of the recurring vanishing-gradient phenomenon during the training of network A, we added residual neural network (ResNet) components to build network B. After this modification, network B improved the sensitivity (0.821±0.021) while maintaining good accuracy (0.855±0.018) and good specificity (0.865±0.021). Conclusions: The deep convolutional neural network could automatically detect the activity of TAO from MRI images with strong robustness, less subjective judgment, and less measurement error. This system could standardize the diagnostic process and speed up treatment decision making for TAO.
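The fix described above, adding residual components to a plain VGG-style network to combat vanishing gradients, rests on the identity skip connection. A minimal PyTorch sketch of such a block; the channel count and layer arrangement are illustrative, not the study's network B:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Identity-skip block: the input is added back to the convolutional
    path, so gradients can flow through the skip even if the conv path
    saturates, which is what mitigates vanishing gradients."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # skip connection

block = ResidualBlock(8)
y = block(torch.randn(1, 8, 32, 32))  # shape is preserved by the block
```

Because the block preserves shape, it can be dropped between existing stages of a VGG-style stack without touching the rest of the architecture.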


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present a deep-learning convolutional neural network of atomic surface structures that uses atomic and Voronoi polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.

