Detection of Atrial Fibrillation Using a Ring-Type Wearable Device (CardioTracker) and Deep Learning Analysis of Photoplethysmography Signals: Prospective Observational Proof-of-Concept Study (Preprint)

2019 ◽  
Author(s):  
Soonil Kwon ◽  
Joonki Hong ◽  
Eue-Keun Choi ◽  
Byunghwan Lee ◽  
Changhyun Baik ◽  
...  

BACKGROUND Continuous photoplethysmography (PPG) monitoring with a wearable device may aid the early detection of atrial fibrillation (AF). OBJECTIVE We aimed to evaluate the diagnostic performance of a ring-type wearable device (CardioTracker, CART), which can detect AF using deep learning analysis of PPG signals. METHODS Patients with persistent AF who underwent cardioversion were recruited prospectively. We recorded PPG signals at the finger with CART and a conventional pulse oximeter before and after cardioversion for 15 min with each instrument. Cardiologists validated the PPG rhythms with simultaneous single-lead electrocardiography. The PPG data were transmitted to a smartphone wirelessly and analyzed with a deep learning algorithm. We also validated the deep learning algorithm in 20 healthy subjects with sinus rhythm (SR). RESULTS In 100 study participants, CART generated a total of 13,038 30-s PPG samples (5850 for SR and 7188 for AF). Using the deep learning algorithm, the diagnostic accuracy, sensitivity, specificity, positive-predictive value, and negative-predictive value were 96.9%, 99.0%, 94.3%, 95.6%, and 98.7%, respectively. Although the diagnostic accuracy decreased with shorter sample lengths, the accuracy was maintained at 94.7% with 10-s measurements. For SR, the specificity decreased with higher variability of peak-to-peak intervals. However, for AF, CART maintained consistent sensitivity regardless of variability. Pulse rates had a lower impact on sensitivity than on specificity. The performance of CART was comparable to that of the conventional device when using a proper threshold. External validation showed that 94.99% (16,529/17,400) of the PPG samples from the control group were correctly identified as SR. CONCLUSIONS A ring-type wearable device with deep learning analysis of PPG signals could accurately diagnose AF without relying on electrocardiography.
With this device, continuous monitoring for AF may be promising in high-risk populations. CLINICALTRIAL ClinicalTrials.gov NCT04023188; https://clinicaltrials.gov/ct2/show/NCT04023188
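The five figures of merit reported above follow directly from confusion-matrix counts. A minimal sketch (the counts below are toy values for illustration, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute the five metrics reported in the abstract from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true-positive rate: AF samples correctly flagged
        "specificity": tn / (tn + fp),  # true-negative rate: SR samples correctly passed
        "ppv": tp / (tp + fp),          # positive-predictive value
        "npv": tn / (tn + fn),          # negative-predictive value
    }

# Toy counts only (not the study's data):
m = diagnostic_metrics(tp=95, fp=5, tn=90, fn=10)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the AF prevalence in the sample set, which is why the external SR-only validation is reported as a plain proportion.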

10.2196/16443 ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. e16443
Author(s):  
Soonil Kwon ◽  
Joonki Hong ◽  
Eue-Keun Choi ◽  
Byunghwan Lee ◽  
Changhyun Baik ◽  
...  



Author(s):  
Chuansheng Zheng ◽  
Xianbo Deng ◽  
Qiang Fu ◽  
Qiang Zhou ◽  
Jiapei Feng ◽  
...  

Accurate and rapid diagnosis of suspected COVID-19 cases plays a crucial role in timely quarantine and medical treatment. Developing a deep-learning-based model for automatic COVID-19 detection on chest CT can help counter the outbreak of SARS-CoV-2. A weakly supervised deep-learning software system was developed using 3D CT volumes to detect COVID-19. For each patient, the lung region was segmented using a pre-trained UNet; the segmented 3D lung region was then fed into a 3D deep neural network to predict the probability of COVID-19 infection. 499 CT volumes collected from Dec. 13, 2019, to Jan. 23, 2020, were used for training, and 131 CT volumes collected from Jan. 24, 2020, to Feb. 6, 2020, were used for testing. The deep learning algorithm obtained 0.959 ROC AUC and 0.976 PR AUC. There was an operating point with 0.907 sensitivity and 0.911 specificity on the ROC curve. When using a probability threshold of 0.5 to classify COVID-positive and COVID-negative, the algorithm obtained an accuracy of 0.901, a positive predictive value of 0.840, and a very high negative predictive value of 0.982. The algorithm took only 1.93 seconds to process a single patient’s CT volume using a dedicated GPU. Our weakly supervised deep learning model can accurately predict the probability of COVID-19 infection in chest CT volumes without requiring lesion annotations for training. The easily trained, high-performance deep learning algorithm provides a fast way to identify COVID-19 patients, which is beneficial for controlling the outbreak of SARS-CoV-2. The developed deep learning software is available at https://github.com/sydney0zq/covid-19-detection.
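The two-stage pipeline described above (segment the lung, then classify the masked volume) can be sketched as follows; the segmentation and classification functions here are crude numpy stand-ins for the pre-trained UNet and the 3D deep neural network, included only to show how the stages compose:

```python
import numpy as np

def segment_lung(ct_volume):
    # Stand-in for the pre-trained UNet: a crude intensity threshold
    # producing a binary lung mask (illustration only).
    return ct_volume > ct_volume.mean()

def classify_covid(masked_volume):
    # Stand-in for the 3D deep neural network: squashes a summary
    # statistic through a sigmoid to yield a probability in (0, 1).
    return float(1.0 / (1.0 + np.exp(-masked_volume.mean())))

def predict_patient(ct_volume, threshold=0.5):
    """Segment, mask, classify; return (probability, binary call at threshold)."""
    mask = segment_lung(ct_volume)
    masked = ct_volume * mask
    p = classify_covid(masked)
    return p, p >= threshold

p, is_positive = predict_patient(
    np.random.default_rng(1).standard_normal((4, 8, 8)))  # toy "CT volume"
```

The 0.5 threshold matches the operating point the abstract reports for the accuracy/PPV/NPV figures; shifting it trades sensitivity against specificity along the ROC curve.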


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8294
Author(s):  
Chih-Ta Yen ◽  
Jia-Xian Liao ◽  
Yi-Kai Huang

This paper presents a wearable device, worn at the waist, that recognizes six activities of daily living (walking, walking upstairs, walking downstairs, sitting, standing, and lying down) through a deep-learning human activity recognition (HAR) algorithm. The wearable device comprises a single-board computer (SBC) and six-axis sensors. The deep-learning algorithm employs three parallel convolutional neural networks for local feature extraction, with subsequent concatenation to establish feature-fusion models of varying kernel size. By using kernels of different sizes, relevant local features of varying lengths were identified, thereby increasing the accuracy of human activity recognition. For experimental data, the University of California, Irvine (UCI) HAR database and self-recorded data were used separately. The self-recorded data were obtained by having 21 participants wear the device on their waist and perform the six common activities in the laboratory; these data were used to verify the performance of the proposed deep-learning algorithm on the wearable device. The accuracies for the six activities on the UCI dataset and the self-recorded data were 97.49% and 96.27%, respectively, and the accuracies under tenfold cross-validation were 99.56% and 97.46%, respectively. The experimental results successfully verify the proposed convolutional neural network (CNN) architecture, which can be used in rehabilitation assessment for people unable to exercise vigorously.
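The core idea of the architecture, parallel branches with different kernel sizes whose outputs are concatenated into one fused feature vector, can be sketched with plain numpy. The random kernels below are stand-ins for learned convolution filters, and global max-pooling reduces each branch to a fixed-size output before fusion:

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """1-D convolution with 'valid' padding, as one branch's filtering step."""
    return np.convolve(signal, kernel, mode="valid")

def multi_kernel_features(window, kernel_sizes=(3, 5, 7)):
    """Run parallel branches with different kernel sizes over one sensor
    window, global-max-pool each branch, and concatenate (feature fusion).
    Kernels are random stand-ins for learned filters (illustration only)."""
    rng = np.random.default_rng(0)
    feats = []
    for k in kernel_sizes:
        kernel = rng.standard_normal(k)
        feature_map = conv1d_valid(window, kernel)
        feats.append(feature_map.max())   # global max-pooling per branch
    return np.array(feats)                # fused vector: one feature per branch

fused = multi_kernel_features(
    np.random.default_rng(1).standard_normal(128))  # toy six-axis window slice
```

Small kernels respond to short, sharp motion transients; larger kernels capture longer-period patterns, which is why fusing the branches helps separate gait-like activities from static postures.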


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Lifang Sun ◽  
Xi Hu ◽  
Yutao Liu ◽  
Hengyu Cai

This study explored the effect of a convolutional neural network (CNN) algorithm based on deep learning on magnetic resonance imaging (MRI) of brain tumor patients and evaluated the practical value of deep-learning-based MRI image features in the clinical diagnosis and nursing of malignant tumors. A brain tumor MRI image model based on the CNN algorithm was constructed, and 80 patients with brain tumors were selected as study subjects. They were divided into an experimental group (CNN algorithm) and a control group (traditional algorithm), and all patients received whole-process nursing care. The macroscopic characteristics and imaging indexes of the MRI images and the anxiety of patients in the two groups were compared and analyzed, and image quality after nursing was assessed. The results revealed that the MRI characteristics of brain tumors based on the CNN algorithm were clearer and more accurate on fluid-attenuated inversion recovery (FLAIR), T1, T1c, and T2 sequences; the mean accuracy, sensitivity, and specificity were 0.83, 0.84, and 0.83, respectively, a clear advantage over the traditional algorithm ( P < 0.05 ). Patients in the nursing group showed lower anxiety scores and better MRI images than the control group ( P < 0.05 ). Therefore, the deep learning algorithm can analyze the MRI image characteristics of brain tumor patients more accurately than conventional algorithms, showing high sensitivity and specificity and improving the application value of MRI image characteristics in the diagnosis of malignant tumors. In addition, effective nursing for patients undergoing brain tumor MRI can alleviate anxiety and ensure that high-quality MRI images are obtained after the examination.


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e16572-e16572
Author(s):  
Alexa Meyer ◽  
Nancy Stambler ◽  
Karl Sjöstrand ◽  
Jens Richter ◽  
Mohamad Allaf ◽  
...  

Background: Previous work has shown that the degree of expression of prostate-specific membrane antigen (PSMA) correlates with prostate cancer (PCa) grade and stage. We evaluated the additive value of a deep learning algorithm (PSMA-AI) applied to a PSMA-targeted small-molecule SPECT/CT imaging agent (99mTc-MIP-1404) to identify men with low-risk PCa who are potential active surveillance candidates. Methods: A secondary analysis of a phase III trial (NCT02615067) of men with PCa who underwent 99mTc-MIP-1404 SPECT/CT was conducted. Patients with a biopsy Gleason score (GS) of ≤6, clinical stage ≤T2, and prostate specific antigen (PSA) < 10 ng/mL who underwent radical prostatectomy (RP) following SPECT/CT were included in the present analysis. SPECT/CT images were retrospectively analyzed by PSMA-AI, which was developed and locked prior to analysis. PSMA-AI calculated the target-to-background ratio (TBR) of 99mTc-MIP-1404 uptake. An automated TBR threshold of 14 was used for PSMA-AI calls of positive disease. Multivariable logistic regression analysis was used to develop a base model for identifying men with occult GS ≥7 PCa in the RP specimen. This model included PSA density, % positive biopsy cores, and clinical stage. The diagnostic performance of this model was then compared to a second model that incorporated PSMA-AI calls. Results: In total, 87 patients enrolled in the original trial contributed to the analysis. The base model indicated that PSA density and % positive cores were significantly associated with occult GS ≥7 PCa (p < 0.05), but clinical stage was not (p = 0.23). The predictive ability of the model resulted in an area under the curve (AUC) of 0.73. Upon adding PSMA-AI calls, the AUC increased to 0.77. PSMA-AI calls (p = 0.045), pre-surgery PSA density (p = 0.019), and % positive cores (p < 0.004) remained statistically significant.
PSMA-AI calls increased the positive predictive value from 70% to 77% and the negative predictive value from 57% to 74%. Conclusions: The addition of PSMA-AI calls demonstrated a significant improvement over known predictors for identifying men with occult GS ≥7 PCa, who are inappropriate candidates for active surveillance. Clinical trial information: NCT02615067.
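The study's central comparison, whether adding a binary imaging call raises the AUC of an existing risk model, can be illustrated with a rank-based AUC (the probability that a random positive case outscores a random negative one). All data below are hypothetical toy values, not the trial's:

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of positive/negative pairs where the
    positive case scores higher (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical illustration: 1 = occult GS >= 7 found at prostatectomy.
labels = [1, 1, 1, 0, 0, 0]
base = [0.60, 0.40, 0.50, 0.55, 0.30, 0.45]   # toy base-model risk scores
ai_call = [1, 0, 1, 0, 0, 1]                  # toy binary PSMA-AI calls
augmented = [b + 0.2 * a for b, a in zip(base, ai_call)]
```

On these toy numbers the augmented score reorders some positive/negative pairs and the AUC rises, mirroring the abstract's 0.73 to 0.77 improvement in direction only.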


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Yoichiro Yamamoto ◽  
Toyonori Tsuzuki ◽  
Jun Akatsuka ◽  
Masao Ueki ◽  
Hiromu Morikawa ◽  
...  

Deep learning algorithms have been successfully used in medical image classification. As a next stage, technology for acquiring explainable knowledge from medical images is highly desired. Here we show that a deep learning algorithm enables automated acquisition of explainable features from diagnostic-annotation-free histopathology images. We compare the prediction accuracy of prostate cancer recurrence using our algorithm-generated features with that of diagnosis by expert pathologists using established criteria on 13,188 whole-mount pathology images consisting of over 86 billion image patches. Our method not only reveals findings established by humans but also features that have not been recognized, showing higher accuracy than humans in prognostic prediction. Combining our algorithm-generated features with human-established criteria predicts recurrence more accurately than either method alone. We confirm the robustness of our method using external validation datasets including 2276 pathology images. This study opens up fields of machine learning analysis for discovering uncharted knowledge.


Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1246
Author(s):  
Ning Hung ◽  
Andy Kuan-Yu Shih ◽  
Chihung Lin ◽  
Ming-Tse Kuo ◽  
Yih-Shiou Hwang ◽  
...  

In this study, we aimed to develop a deep learning model for identifying bacterial keratitis (BK) and fungal keratitis (FK) by using slit-lamp images. We retrospectively collected slit-lamp images of patients with culture-proven microbial keratitis between 1 January 2010 and 31 December 2019 from two medical centers in Taiwan. We constructed a deep learning algorithm consisting of a segmentation model for cropping cornea images and a classification model that applies different convolutional neural networks (CNNs) to differentiate between FK and BK. The CNNs included DenseNet121, DenseNet161, DenseNet169, DenseNet201, EfficientNetB3, InceptionV3, ResNet101, and ResNet50. The model performance was evaluated and presented as the area under the curve (AUC) of the receiver operating characteristic curves. A gradient-weighted class activation mapping technique was used to plot the heat map of the model. By using 1330 images from 580 patients, the deep learning algorithm achieved the highest average accuracy of 80.0%. Using different CNNs, the diagnostic accuracy for BK ranged from 79.6% to 95.9%, and that for FK ranged from 26.3% to 65.8%. The CNN of DenseNet161 showed the best model performance, with an AUC of 0.85 for both BK and FK. The heat maps revealed that the model was able to identify the corneal infiltrations. The model showed a better diagnostic accuracy than the previously reported diagnostic performance of both general ophthalmologists and corneal specialists.
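The heat maps referenced above come from gradient-weighted class activation mapping (Grad-CAM): channel weights are the globally averaged gradients of the class score with respect to the last convolutional layer's activations, and the map is the ReLU of the weighted channel sum. A minimal numpy sketch, assuming the activations and gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map.
    feature_maps: (C, H, W) activations of the last conv layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those
                  activations (both assumed precomputed by the framework)."""
    weights = gradients.mean(axis=(1, 2))              # global-average-pool grads
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam
```

The normalized map is then upsampled to the input resolution and overlaid on the slit-lamp image, which is how the authors verified that the model attends to the corneal infiltrates rather than background structures.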


Author(s):  
Ning Hung ◽  
Eugene Yu-Chuan Kang ◽  
Andy Guan-Yu Shih ◽  
Chi-Hung Lin ◽  
Ming-Tse Kuo ◽  
...  

In this study, we aimed to develop a deep learning model for identifying bacterial keratitis (BK) and fungal keratitis (FK) by using slit-lamp images. We retrospectively collected slit-lamp images of patients with culture-proven microbial keratitis between January 1, 2010, and December 31, 2019, from two medical centers in Taiwan. We constructed a deep learning algorithm, consisting of a segmentation model for cropping cornea images and a classification model that applies convolutional neural networks to differentiate between FK and BK. The model performance was evaluated and presented as the area under the curve (AUC) of the receiver operating characteristic curves. A gradient-weighted class activation mapping technique was used to plot the heatmap of the model. By using 1330 images from 580 patients, the deep learning algorithm achieved an average diagnostic accuracy of 80.00%. The diagnostic accuracy for BK ranged from 79.59% to 95.91% and that for FK ranged from 26.31% to 63.15%. DenseNet169 showed the best model performance, with an AUC of 0.78 for both BK and FK. The heat maps revealed that the model was able to identify the corneal infiltrations. The model showed better diagnostic accuracy than the previously reported diagnostic performance of both general ophthalmologists and corneal specialists.

