Deep Learning-based Detection for COVID-19 from Chest CT using Weak Label

Author(s):  
Chuansheng Zheng ◽  
Xianbo Deng ◽  
Qiang Fu ◽  
Qiang Zhou ◽  
Jiapei Feng ◽  
...  

Abstract: Accurate and rapid diagnosis of COVID-19 suspected cases plays a crucial role in timely quarantine and medical treatment. Developing a deep learning-based model for automatic COVID-19 detection on chest CT is helpful to counter the outbreak of SARS-CoV-2. A weakly supervised deep learning-based software system was developed using 3D CT volumes to detect COVID-19. For each patient, the lung region was segmented using a pre-trained UNet; the segmented 3D lung region was then fed into a 3D deep neural network to predict the probability of COVID-19 infection. 499 CT volumes collected from Dec. 13, 2019, to Jan. 23, 2020, were used for training, and 131 CT volumes collected from Jan. 24, 2020, to Feb. 6, 2020, were used for testing. The deep learning algorithm obtained 0.959 ROC AUC and 0.976 PR AUC. There was an operating point with 0.907 sensitivity and 0.911 specificity on the ROC curve. When using a probability threshold of 0.5 to classify COVID-positive and COVID-negative, the algorithm obtained an accuracy of 0.901, a positive predictive value of 0.840, and a very high negative predictive value of 0.982. The algorithm took only 1.93 seconds to process a single patient's CT volume using a dedicated GPU. Our weakly supervised deep learning model can accurately predict the probability of COVID-19 infection in chest CT volumes without the need to annotate lesions for training. The easily trained, high-performance deep learning algorithm provides a fast way to identify COVID-19 patients, which is beneficial to control the outbreak of SARS-CoV-2. The developed deep learning software is available at https://github.com/sydney0zq/covid-19-detection.
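The operating point above (sensitivity 0.907, specificity 0.911) fixes PPV and NPV only once a disease prevalence is assumed; a minimal sketch of that relationship via Bayes' rule, using an illustrative 30% prevalence rather than the study's actual case mix:

```python
def predictive_values(sens, spec, prev):
    """Derive PPV and NPV from sensitivity, specificity, and prevalence (Bayes' rule)."""
    tp = sens * prev              # expected true-positive fraction
    fp = (1 - spec) * (1 - prev)  # expected false-positive fraction
    fn = (1 - sens) * prev        # expected false-negative fraction
    tn = spec * (1 - prev)        # expected true-negative fraction
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Illustrative: at 30% prevalence, high specificity keeps NPV well above PPV.
ppv, npv = predictive_values(0.907, 0.911, 0.30)
```

At lower prevalence the NPV rises further while the PPV drops, which is why screening tools emphasize a high NPV.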

2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e16572-e16572
Author(s):  
Alexa Meyer ◽  
Nancy Stambler ◽  
Karl Sjöstrand ◽  
Jens Richter ◽  
Mohamad Allaf ◽  
...  

e16572 Background: Previous work has shown that the degree of expression of prostate-specific membrane antigen (PSMA) correlates with prostate cancer (PCa) grade and stage. We evaluated the additive value of a deep learning algorithm (PSMA-AI) applied to a PSMA-targeted small-molecule SPECT/CT imaging agent (99mTc-MIP-1404) to identify men with low-risk PCa who are potential active surveillance candidates. Methods: A secondary analysis of a phase III trial (NCT02615067) of men with PCa who underwent 99mTc-MIP-1404 SPECT/CT was conducted. Patients with a biopsy Gleason score (GS) of ≤6, clinical stage ≤T2, and prostate-specific antigen (PSA) < 10 ng/mL who underwent radical prostatectomy (RP) following SPECT/CT were included in the present analysis. SPECT/CT images were retrospectively analyzed by PSMA-AI, which was developed and locked prior to analysis. PSMA-AI calculated the uptake of 99mTc-MIP-1404 against the background reference (TBR). An automated TBR of 14 was used as the threshold for PSMA-AI calls of positive disease. Multivariable logistic regression analysis was used to develop a base model for identifying men with occult GS ≥7 PCa in the RP specimen. This model included PSA density, % positive biopsy cores, and clinical stage. The diagnostic performance of this model was then compared to a second model that incorporated PSMA-AI calls. Results: In total, 87 patients enrolled in the original trial contributed to the analysis. The base model indicated that PSA density and % positive cores were significantly associated with occult GS ≥7 PCa (p < 0.05), but clinical stage was not (p = 0.23). The predictive ability of the model resulted in an area under the curve (AUC) of 0.73. Upon adding PSMA-AI calls, the AUC increased to 0.77. PSMA-AI calls (p = 0.045), pre-surgery PSA density (p = 0.019), and % positive cores (p < 0.004) remained statistically significant. PSMA-AI calls increased the positive predictive value from 70% to 77% and the negative predictive value from 57% to 74%. Conclusions: The addition of PSMA-AI calls demonstrated a significant improvement over known predictors for identifying men with occult GS ≥7 PCa, who are inappropriate candidates for active surveillance. Clinical trial information: NCT02615067.
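The AUC comparison above (0.73 for the base model vs 0.77 with PSMA-AI calls) is a rank statistic: the probability that a randomly chosen case with occult GS ≥7 disease receives a higher model score than a randomly chosen case without it. A minimal sketch with toy scores (not trial data):

```python
def auc_rank(scores_pos, scores_neg):
    """AUC as the probability a random positive outranks a random negative (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy predicted probabilities: positives = occult GS >=7 at RP, negatives = all others.
auc = auc_rank([0.9, 0.8, 0.6], [0.7, 0.4, 0.3, 0.2])
```

This pairwise formulation makes clear why adding an informative predictor (here, a PSMA-AI call) raises the AUC: it reorders score pairs so that more positives outrank negatives.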


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoungwoo Jang ◽  
Jae Ho Choi ◽  
Namkug Kim ◽  
Jae Suk Chang ◽  
Pil Whan Yoon ◽  
...  

Abstract: Despite being the gold standard for the diagnosis of osteoporosis, dual-energy X-ray absorptiometry (DXA) cannot be widely used as a screening tool for osteoporosis. This study aimed to predict osteoporosis from simple hip radiographs using a deep learning algorithm. A total of 1001 datasets of proximal femur DXA with matched same-side cropped simple hip radiographic images of female patients aged ≥ 55 years were collected. Of these, 504 patients had osteoporosis (T-score ≤ −2.5), and 497 patients did not. The 1001 images were randomly divided into three sets: 800 for training, 100 for validation, and 101 for testing. Based on VGG16 equipped with a nonlocal neural network, we developed a deep neural network (DNN) model. We calculated the confusion matrix; evaluated the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV); and drew the receiver operating characteristic (ROC) curve. A gradient-based class activation map (Grad-CAM) overlapping the original image was also used to visualize the model's behavior. Additionally, we performed external validation using 117 datasets. Our final DNN model showed an overall accuracy of 81.2%, sensitivity of 91.1%, and specificity of 68.9%. The PPV was 78.5%, and the NPV was 86.1%. The area under the ROC curve was 0.867, indicating reasonable performance for screening osteoporosis by simple hip radiography. The external validation set confirmed the model's performance, with an overall accuracy of 71.8% and an AUC of 0.700. All Grad-CAM results from both internal and external validation sets appropriately matched the proximal femur cortex and trabecular patterns of the radiographs. The DNN model could serve as a useful screening tool for easy prediction of osteoporosis in real-world clinical settings.
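All five figures reported above derive from the same 2×2 confusion matrix; a small helper showing the standard definitions, with toy counts for a 101-image test split (illustrative, not the study's actual table):

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on the diseased group
        "specificity": tn / (tn + fp),   # recall on the healthy group
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for a 101-image test set (46 + 12 + 35 + 8 = 101).
m = screening_metrics(tp=46, fp=12, tn=35, fn=8)
```

Note that sensitivity and specificity are prevalence-independent, while accuracy, PPV, and NPV shift with the case mix of the test set.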




10.2196/16443 ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. e16443
Author(s):  
Soonil Kwon ◽  
Joonki Hong ◽  
Eue-Keun Choi ◽  
Byunghwan Lee ◽  
Changhyun Baik ◽  
...  

Background Continuous photoplethysmography (PPG) monitoring with a wearable device may aid the early detection of atrial fibrillation (AF). Objective We aimed to evaluate the diagnostic performance of a ring-type wearable device (CardioTracker, CART), which can detect AF using deep learning analysis of PPG signals. Methods Patients with persistent AF who underwent cardioversion were recruited prospectively. We recorded PPG signals at the finger with CART and a conventional pulse oximeter before and after cardioversion, for 15 min with each instrument. Cardiologists validated the PPG rhythms with simultaneous single-lead electrocardiography. The PPG data were transmitted wirelessly to a smartphone and analyzed with a deep learning algorithm. We also validated the deep learning algorithm in 20 healthy subjects with sinus rhythm (SR). Results In 100 study participants, CART generated a total of 13,038 30-s PPG samples (5850 for SR and 7188 for AF). Using the deep learning algorithm, the diagnostic accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 96.9%, 99.0%, 94.3%, 95.6%, and 98.7%, respectively. Although diagnostic accuracy decreased with shorter sample lengths, accuracy was maintained at 94.7% with 10-s measurements. For SR, specificity decreased with higher variability of peak-to-peak intervals; for AF, however, CART maintained consistent sensitivity regardless of variability. Pulse rates had a lower impact on sensitivity than on specificity. The performance of CART was comparable to that of the conventional device when a proper threshold was used. External validation showed that 94.99% (16,529/17,400) of the PPG samples from the control group were correctly identified as SR. Conclusions A ring-type wearable device with deep learning analysis of PPG signals could accurately diagnose AF without relying on electrocardiography.
With this device, continuous monitoring for AF may be promising in high-risk populations. Trial Registration ClinicalTrials.gov NCT04023188; https://clinicaltrials.gov/ct2/show/NCT04023188
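The 13,038 samples above come from slicing continuous recordings into fixed 30-s windows that are classified independently; a minimal sketch of that windowing step (the 50 Hz sampling rate is an assumption for illustration, not taken from the paper):

```python
def window_samples(signal, fs, win_sec=30):
    """Split a 1-D PPG stream into non-overlapping fixed-length windows; drop any remainder."""
    step = int(fs * win_sec)
    return [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]

# 15 min of recording at a hypothetical 50 Hz -> 30 windows of 30 s each.
stream = [0.0] * (15 * 60 * 50)
windows = window_samples(stream, fs=50)
```

Shorter windows (e.g. 10 s, as in the accuracy ablation above) trade classification accuracy for faster time-to-decision on the device.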


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1385
Author(s):  
Marc Baget-Bernaldiz ◽  
Romero-Aroca Pedro ◽  
Esther Santos-Blanco ◽  
Raul Navarro-Gil ◽  
Aida Valls ◽  
...  

Background: The aim of the present study was to test our deep learning algorithm (DLA) in reading retinographies to detect diabetic retinopathy (DR). Methods: We tested our DLA, built on convolutional neural networks, on 14,186 retinographies from our population and 1200 images extracted from MESSIDOR. The retinal images were graded both by the DLA and independently by four retina specialists. Results of the DLA were compared according to accuracy (ACC), sensitivity (S), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC), distinguishing between identification of any type of DR (any DR) and referable DR (RDR). Results: The results of testing the DLA for identifying any DR in our population were: ACC = 99.75, S = 97.92, SP = 99.91, PPV = 98.92, NPV = 99.82, and AUC = 0.983. When detecting RDR, the results were: ACC = 99.66, S = 96.7, SP = 99.92, PPV = 99.07, NPV = 99.71, and AUC = 0.988. The results of testing the DLA for identifying any DR with MESSIDOR were: ACC = 94.79, S = 97.32, SP = 94.57, PPV = 60.93, NPV = 99.75, and AUC = 0.959. When detecting RDR, the results were: ACC = 98.78, S = 94.64, SP = 99.14, PPV = 90.54, NPV = 99.53, and AUC = 0.968. Conclusions: Our DLA performed well, both in detecting any DR and in classifying eyes with RDR, in a sample of retinographies of type 2 DM patients from our population and in the MESSIDOR database.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Chi-Tung Cheng ◽  
Yirui Wang ◽  
Huan-Wu Chen ◽  
Po-Meng Hsiao ◽  
Chun-Nan Yeh ◽  
...  

Abstract: Pelvic radiographs (PXRs) are essential for detecting proximal femur and pelvis injuries in trauma patients and are a key component of the trauma survey. None of the currently available algorithms can accurately detect all kinds of trauma-related radiographic findings on PXRs. Here, we show that a universal algorithm can detect most types of trauma-related radiographic findings on PXRs. We develop a multiscale deep learning algorithm called PelviXNet, trained with 5204 PXRs with weakly supervised point annotation. PelviXNet yields an area under the receiver operating characteristic curve (AUROC) of 0.973 (95% CI, 0.960–0.983) and an area under the precision-recall curve (AUPRC) of 0.963 (95% CI, 0.948–0.974) in the clinical population test set of 1888 PXRs. The accuracy, sensitivity, and specificity at the cutoff value are 0.924 (95% CI, 0.912–0.936), 0.908 (95% CI, 0.885–0.908), and 0.932 (95% CI, 0.919–0.946), respectively. PelviXNet demonstrates performance comparable to that of radiologists and orthopedic surgeons in detecting pelvic and hip fractures.
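Reporting accuracy, sensitivity, and specificity "at the cutoff value" requires choosing a single threshold on the ROC curve; one common choice (an assumption here, not stated in the abstract) is the point maximizing Youden's J = sensitivity + specificity − 1. A toy sketch:

```python
def best_cutoff(scores, labels, thresholds):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy model scores and ground-truth labels (1 = fracture present), not PelviXNet output.
scores = [0.95, 0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
t, j = best_cutoff(scores, labels, thresholds=[0.25, 0.5, 0.75])
```

Other cutoff rules (e.g. fixing sensitivity at a clinically mandated floor) trade specificity for fewer missed fractures.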

