Using deep learning to extract novel and quantitative imaging features from perfluoropropane contrast, sulphur hexafluoride contrast and non-contrast stress echocardiography images

2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
C Dockerill ◽  
W Woodward ◽  
A McCourt ◽  
A Beqiri ◽  
A Parker ◽  
...  

Abstract

Background: Stress echocardiography has become established as the most widely applied non-invasive imaging test for the diagnosis of coronary artery disease in the UK. However, stress echocardiography has remained largely qualitative rather than quantitative, relying on visual wall motion assessment. For the first time, we have identified and validated quantitative descriptors of cardiac geometry and motion, extracted in an automated way from ultrasound images acquired using contrast agents.

Purpose: To establish whether these novel imaging features can be generated in an automated, quantifiable and reproducible way from images acquired with perfluoropropane contrast, and to investigate how the extracted measures compare with those extracted from sulphur hexafluoride contrast and non-contrast studies.

Methods: 100 patients who received perfluoropropane contrast during their stress echocardiogram were recruited. Their stress echocardiography images were processed through a deep learning algorithm. Novel feature values were recorded, and analysis was repeated for a subset of 10 studies. The automated measures of global longitudinal strain (GLS) and ejection fraction (EF) extracted from these images were compared with values previously extracted from sulphur hexafluoride contrast and non-contrast images using the same software.

Results: A full set of 31 novel imaging features was successfully extracted from 79 studies acquired using the perfluoropropane contrast agent, with a dropout rate of 14% (n=92, 8 incomplete image sets). Repeated analysis in a subset of 10 perfluoropropane cases demonstrated excellent reproducibility of the extracted feature values (R2=1). Automated values of GLS and EF, at both rest (GLS = −16.4±4.8%, EF = 63±13%) and stress (GLS = −17.7±5.8%, EF = 68±11%), were extracted from 83 perfluoropropane studies, with a dropout rate of 16% (n=99; fewer incomplete sets as the short axis view was not required). The ranges of GLS and EF measures extracted from the perfluoropropane images were comparable to those from the other contrast studies (n=222) (Rest GLS = −16.8±5.8%, Rest EF = 63±10%; Stress GLS = −19.1±6.7%, Stress EF = 71±9%) and non-contrast studies (n=86) (Rest GLS = −15.7±5.3%, Rest EF = 57±10%; Stress GLS = −17.3±6.4%, Stress EF = 61±14%).

Conclusions: Novel features and clinically relevant measures were extracted from images acquired using perfluoropropane contrast, for the first time, in a fully automated and reproducible way using a deep learning algorithm. The analysis failure rate and generated measures are comparable to those extracted from images acquired with the other commonly used sulphur hexafluoride contrast agent and from non-contrast stress echocardiography studies. These findings demonstrate that deep learning algorithms can be used for automated quantitative analysis of stress echocardiograms acquired with various contrast agents and in non-contrast studies, to improve stress echocardiography practice.

Funding Acknowledgement: Type of funding source: Private company. Main funding source(s): Lantheus Medical Imaging, Inc.
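The GLS and EF measures reported above follow their standard definitions, which the abstract does not spell out; the sketch below shows those formulas with hypothetical contour lengths and volumes (illustrative values, not data from this study):

```python
def global_longitudinal_strain(l_ed, l_es):
    """Percent change in endocardial contour length from end-diastole (l_ed)
    to end-systole (l_es); shortening during contraction gives a negative value."""
    return (l_es - l_ed) / l_ed * 100.0

def ejection_fraction(edv, esv):
    """Percentage of the end-diastolic volume (edv) ejected by end-systole (esv)."""
    return (edv - esv) / edv * 100.0

# Hypothetical example: contour shortens from 150 mm to 125 mm;
# volume falls from 120 mL to 45 mL over the cardiac cycle.
gls = global_longitudinal_strain(150.0, 125.0)  # ≈ -16.7 %
ef = ejection_fraction(120.0, 45.0)             # 62.5 %
```

Negative GLS values such as the −16.4% reported at rest therefore indicate normal systolic shortening.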

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Wei Zhang ◽  
Yang Wang

This study explored deep learning-based CT imaging features and glucocorticoid treatment in children with asthma and small airway obstruction. A total of 145 eligible hospital patients were included and randomly assigned to receive aerosolized glucocorticoid (n = 45), aerosolized glucocorticoid combined with a bronchodilator (n = 50), or oral steroids (n = 50) for 4 weeks after discharge. Lung function and fractional exhaled nitric oxide (FENO) were measured in each of the three groups, and effective rates were compared to evaluate the clinical efficacy of glucocorticoids administered by different routes, with and without combined medication, in short-term maintenance treatment after an acute exacerbation of asthma. A deep learning algorithm was used for CT image segmentation: each CT image was sent to a workstation for processing, where a convolution operation was performed over every input pixel. After 4 weeks of maintenance treatment, FEF50%, FEF75%, and MMEF75/25 increased significantly and FENO decreased significantly (P < 0.01). In improving FEF50%, FEF75%, MMEF75/25, and FENO, the oral steroid group was the most effective, followed by the combined nebulized inhalation group, with the nebulized glucocorticoid-only group least effective; the differences among the groups were statistically significant (P < 0.05). The accuracy of the artificial intelligence segmentation algorithm was 81%. Oral steroids were more effective than local administration in improving small airway function and airway inflammation. Among inhalation treatments, glucocorticoid combined with a bronchodilator was more effective than single-drug inhalation in improving small airway obstruction and reducing airway inflammation. Deep learning CT imaging is simple and noninvasive, allows intuitive observation of lung changes in asthma with small airway functional obstruction, and has high value for its clinical diagnosis and evaluation.
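The segmentation step described above applies a convolution at every input pixel; a minimal NumPy sketch of that core operation (the toy image and Laplacian-style kernel are illustrative, not the study's actual network):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over every input pixel
    position and take the weighted sum, as in a single CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "CT slice" that is a linear intensity ramp, and a 3x3 Laplacian kernel
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
print(conv2d(image, kernel))  # Laplacian of a linear ramp: all zeros
```

A real segmentation network stacks many such layers with learned kernels and a per-pixel classification head; this shows only the elementary pixel-wise operation.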


2020 ◽  
Author(s):  
S. Duchesne ◽  
D. Gourdeau ◽  
P. Archambault ◽  
C. Chartrand-Lefebvre ◽  
L. Dieumegarde ◽  
...  

ABSTRACT

Background: Decision scores and ethically mindful algorithms are being established to adjudicate mechanical ventilation in the context of potential resource shortages due to the current onslaught of COVID-19 cases. There is a need for a reproducible and objective method to provide quantitative information for those scores.

Purpose: Towards this goal, we present a retrospective study testing the ability of a deep learning algorithm to extract features from chest x-rays (CXR) to track and predict radiological evolution.

Materials and Methods: We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from two open-source datasets (Italian Society for Medical and Interventional Radiology and MILA; last accessed on April 9, 2020). Data collection yielded 60 pairs of sequential CXRs from 40 COVID-19 patients (mean age ± standard deviation: 56 ± 13 years; 23 men, 10 women, seven not reported), each pair categorized as "Worse", "Stable", or "Improved" on the basis of radiological evolution ascertained from images and reports. Receiver operating characteristic (ROC) analyses and Mann-Whitney tests were performed.

Results: On patients from the CheXnet dataset, the area under ROC curves ranged from 0.71 to 0.93 for seven imaging features and one diagnosis. Deep learning features differed significantly between the "Worse" and "Improved" outcome categories for three radiological signs and one diagnosis ("Consolidation", "Lung Lesion", "Pleural effusion" and "Pneumonia"; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between "Worse" and "Improved" cases with 82.7% accuracy.

Conclusion: CXR deep learning features show promise for classifying the disease trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
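The ROC and Mann-Whitney analyses used above are closely related: the area under the ROC curve equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U statistic, normalized). A minimal sketch of that computation (the feature values below are made up, not from the study):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via its Mann-Whitney U interpretation:
    the fraction of positive/negative pairs where the positive case
    scores higher, counting ties as half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Toy deep-learning feature values for "Worse" vs "Improved" CXR pairs
worse = [0.9, 0.8, 0.7, 0.6]
improved = [0.5, 0.4, 0.6, 0.2]
print(roc_auc(worse, improved))  # 0.96875
```

An AUC of 0.5 would mean the feature carries no information about the outcome category; values approaching 1 correspond to the strong separations reported in the abstract.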


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Chia-Yen Lee ◽  
Guan-Lin Chen ◽  
Zhong-Xuan Zhang ◽  
Yi-Hong Chou ◽  
Chih-Chung Hsu

Sonography is currently an effective way to screen for and diagnose cancer because it is convenient and harmless to humans. Traditionally, the lesion boundary is first segmented and then classification is performed to judge whether a tumor is benign or malignant. However, sonograms often contain substantial speckle noise and intensity inhomogeneity. This study proposes a novel benign/malignant tumor classification system, comprising intensity inhomogeneity correction and a stacked denoising autoencoder (SDAE), that is suitable for small datasets. A classifier is established by extracting features through the multilayer training of the SDAE; automatic analysis of imaging features by the deep learning algorithm is applied to image classification, giving the system high efficiency and robust discrimination. Two datasets (one private and one public) are used to train the deep learning models. For each dataset, two groups of test images are compared: the original images and the images after intensity inhomogeneity correction. The results show that applying the deep learning algorithm to sonograms after intensity inhomogeneity correction significantly increases tumor classification accuracy. This study demonstrates the importance of preprocessing to highlight image features before passing them to deep learning models; the resulting classification accuracy is better than when only the original images are used.
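A stacked denoising autoencoder of the kind described above learns each layer by reconstructing clean inputs from corrupted ones, then feeds the learned features to the next layer or a classifier. A minimal single-layer NumPy sketch (tied weights, Gaussian corruption; a toy illustration under those assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_dae(x, hidden, noise_std=0.1, lr=0.05, epochs=300):
    """One denoising-autoencoder layer: corrupt the input with Gaussian
    noise, then learn tied weights that reconstruct the clean input.
    Stacking: feed the encoding of one trained layer into the next."""
    n, d = x.shape
    w = rng.normal(0, 0.1, (d, hidden))
    for _ in range(epochs):
        noisy = x + rng.normal(0, noise_std, x.shape)  # corruption
        h = np.tanh(noisy @ w)                         # encoder
        recon = h @ w.T                                # tied-weight linear decoder
        err = recon - x                                # reconstruct the CLEAN input
        # gradient through decoder path plus encoder path (tied weights)
        grad_w = (h.T @ err).T + noisy.T @ (err @ w * (1 - h ** 2))
        w -= lr * grad_w / n
    return w

# Toy feature vectors standing in for preprocessed sonogram patches
x = rng.normal(size=(64, 8))
w = train_dae(x, hidden=4)
h = np.tanh(x @ w)                    # features for the next layer / classifier
loss = np.mean((h @ w.T - x) ** 2)    # clean-input reconstruction error
```

In a full SDAE, several such layers are trained greedily and the final hidden features feed a supervised classifier, which is then fine-tuned end to end.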


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254997
Author(s):  
Ari Lee ◽  
Min Su Kim ◽  
Sang-Sun Han ◽  
PooGyeon Park ◽  
Chena Lee ◽  
...  

This study aimed to develop a high-performance deep learning algorithm to differentiate Stafne's bone cavity (SBC) from cysts and tumors of the jaw based on images acquired from various panoramic radiographic systems. Data sets included 176 Stafne's bone cavities and 282 odontogenic cysts and tumors of the mandible (98 dentigerous cysts, 91 odontogenic keratocysts, and 93 ameloblastomas) that required surgical removal. Panoramic radiographs were obtained using three different imaging systems. The trained model showed 99.25% accuracy, 98.08% sensitivity, and 100% specificity for SBC classification, with one misclassified SBC case. When traced back with the Grad-CAM and Guided Grad-CAM methods, the algorithm was confirmed to recognize the typical imaging features of SBC in panoramic radiography regardless of the imaging system. The deep learning model for differentiating SBC from odontogenic cysts and tumors showed high performance with images obtained from multiple panoramic systems. The present algorithm is expected to be a useful tool for clinicians, as it diagnoses SBCs on panoramic radiography and thus prevents unnecessary examinations for patients. Additionally, it would support clinicians in deciding on further examinations or referrals to surgeons in cases where even experts are unsure of the diagnosis from panoramic radiography alone.
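Grad-CAM, used above to trace the model's decisions, weights each feature map of the last convolutional layer by the spatially pooled gradient of the class score with respect to that map, sums the weighted maps, and clips negatives. A minimal sketch with toy activations (random arrays standing in for the study's model):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map: weight each feature map by the spatial mean of the
    class-score gradient flowing into it, sum over channels, then apply ReLU."""
    weights = gradients.mean(axis=(1, 2))               # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)   # weighted sum of maps
    return np.maximum(cam, 0)                           # keep positive evidence only

# Toy last-conv activations (channels x H x W) and their class-score gradients
fmaps = np.random.default_rng(1).random((8, 7, 7))
grads = np.random.default_rng(2).random((8, 7, 7)) - 0.5
cam = grad_cam(fmaps, grads)
print(cam.shape)  # (7, 7)
```

The low-resolution map is then upsampled onto the radiograph to show which regions drove the SBC classification; Guided Grad-CAM additionally multiplies in guided-backpropagation gradients for pixel-level detail.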

