Deep Learning Classification of Unipolar Electrograms in Human Atrial Fibrillation: Application in Focal Source Mapping

2021 ◽  
Vol 12 ◽  
Author(s):  
Shun Liao ◽  
Don Ragot ◽  
Sachin Nayyar ◽  
Adrian Suszko ◽  
Zhaolei Zhang ◽  
...  

Focal sources are potential targets for atrial fibrillation (AF) catheter ablation, but they can be time-consuming and challenging to identify when unipolar electrograms (EGMs) are numerous and complex. Our aim was to apply deep learning (DL) to raw unipolar EGMs in order to automate the detection of putative focal sources. We included 78 patients from the Focal Source and Trigger (FaST) randomized controlled trial, which evaluated the efficacy of adjunctive FaST ablation compared to pulmonary vein isolation alone in reducing AF recurrence. FaST sites were identified based on manual classification of sustained periodic unipolar QS EGMs over 5 s. All periodic unipolar EGMs were divided into training (n = 10,004) and testing (n = 3,180) cohorts. The DL model was developed using a residual convolutional neural network to discriminate between FaST and non-FaST. A gradient-based method was applied to interpret the DL model. DL classified FaST with a receiver operating characteristic area under the curve of 0.904 ± 0.010 (cross-validation) and 0.923 ± 0.003 (testing). At a prespecified sensitivity of 90%, the specificity and accuracy in detecting FaST were 81.9% and 82.5%, respectively. DL performance (sensitivity 78%, specificity 89%) was similar to that of FaST re-classification by cardiologists (sensitivity 78%, specificity 79%). The gradient-based interpretation demonstrated accurate tracking of unipolar QS complexes by select DL convolutional layers. In conclusion, our novel DL model trained on raw unipolar EGMs allowed automated and accurate classification of FaST sites, with performance similar to FaST re-classification by cardiologists. Future application of DL to classify FaST may improve the efficiency of real-time focal source detection for targeted AF ablation therapy.
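The abstract reports specificity and accuracy "at a prespecified sensitivity of 90%", i.e., the operating threshold is chosen on the score axis to hit the sensitivity target and the other metrics are read off there. The paper provides no code; the following is a minimal numpy sketch of that threshold-selection logic, with all function and variable names my own illustrative choices:

```python
import numpy as np

def specificity_at_sensitivity(y_true, scores, target_sens=0.90):
    """Pick the highest classification threshold that reaches the
    target sensitivity, then report specificity and accuracy there."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    # Candidate thresholds: every observed score, highest first.
    for t in np.sort(np.unique(scores))[::-1]:
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        fn = np.sum(~pred & (y_true == 1))
        sens = tp / (tp + fn)
        if sens >= target_sens:
            tn = np.sum(~pred & (y_true == 0))
            fp = np.sum(pred & (y_true == 0))
            spec = tn / (tn + fp)
            acc = (tp + tn) / len(y_true)
            return t, sens, spec, acc
    raise ValueError("target sensitivity not reachable")
```

On a toy set of 5 FaST and 5 non-FaST scores this walks down the ROC curve until sensitivity first clears 90%, then reports the remaining metrics at that single operating point.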

Author(s):  
Alexander M. Zolotarev ◽  
Brian J. Hansen ◽  
Ekaterina A. Ivanova ◽  
Katelynn M. Helfrich ◽  
Ning Li ◽  
...  

Background: Atrial fibrillation (AF) can be maintained by localized intramural reentrant drivers. However, AF driver detection by clinical surface-only multielectrode mapping (MEM) has relied on subjective interpretation of activation maps. We hypothesized that applying machine learning to electrogram frequency spectra could accurately automate driver detection by MEM and add objectivity to the interpretation of MEM findings. Methods: Temporally and spatially stable single AF drivers were mapped simultaneously in explanted human atria (n=11) by subsurface near-infrared optical mapping (NIOM; 0.3 mm² resolution) and 64-electrode MEM (higher density or lower density, with 3 and 9 mm² resolution, respectively). Unipolar MEM and NIOM recordings were processed by Fourier transform analysis into 28,407 total Fourier spectra. Thirty-five machine learning features were extracted from each Fourier spectrum. Results: Targeted driver ablation and NIOM activation maps efficiently defined the center and periphery of AF driver preferential tracks and provided validated annotations for driver versus nondriver electrodes in MEM arrays. Compared with analysis of single-electrogram frequency features, averaging the features from each of the 8 neighboring electrodes significantly improved classification of AF driver electrograms. The classification metrics increased when less strict annotations, including driver-periphery electrodes, were added to the driver-center annotation. Notably, the F1 score for binary classification of the higher-density catheter dataset was significantly higher than that of the lower-density catheter (0.81±0.02 versus 0.66±0.04, P<0.05). The trained algorithm correctly highlighted 86% of driver regions with higher-density but only 80% with lower-density MEM arrays (81% for lower-density and higher-density arrays together).
Conclusions: A machine learning model trained on Fourier spectrum features allows efficient classification of electrogram recordings as AF driver or nondriver against the NIOM gold standard. Future application of this NIOM-validated machine learning approach may improve the accuracy of AF driver detection for targeted ablation treatment in patients.
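The pipeline above rests on two steps: reducing each electrogram to frequency-domain features via the Fourier transform, and averaging each electrode's features with its 8 grid neighbors. The paper's 35 features are not enumerated here, so this numpy sketch uses two generic stand-ins (dominant frequency and spectral entropy); function names, the sampling rate, and the 30 Hz band cap are my own illustrative assumptions:

```python
import numpy as np

def spectral_features(egm, fs=1000.0, fmax=30.0):
    """Magnitude spectrum of one unipolar electrogram segment,
    reduced to a small frequency-domain feature vector."""
    spec = np.abs(np.fft.rfft(egm - egm.mean()))
    freqs = np.fft.rfftfreq(len(egm), d=1.0 / fs)
    band = freqs <= fmax                         # keep physiologic band
    spec, freqs = spec[band], freqs[band]
    p = spec / spec.sum()                        # normalized spectrum
    dominant = freqs[np.argmax(spec)]            # dominant frequency (Hz)
    entropy = -np.sum(p * np.log(p + 1e-12))     # spectral organization
    return np.array([dominant, entropy])

def neighbor_averaged_features(grid_feats, r, c):
    """Average the feature vector of electrode (r, c) with its
    up-to-8 neighbors on the MEM grid (rows x cols x features)."""
    block = grid_feats[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return block.reshape(-1, grid_feats.shape[2]).mean(axis=0)
```

A driver electrode sitting over a stable reentrant track should show a sharper dominant peak (lower entropy) than a passively activated neighbor, which is what the classifier exploits.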


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Jessica Torres-Soto ◽  
Euan A. Ashley

Abstract Wearable devices enable theoretically continuous, longitudinal monitoring of physiological measurements such as step count, energy expenditure, and heart rate. Although the classification of abnormal cardiac rhythms such as atrial fibrillation from wearable devices has great potential, commercial algorithms remain proprietary and tend to focus on heart rate variability derived from green-spectrum LED sensors placed on the wrist, where noise remains an unsolved problem. Here we develop DeepBeat, a multitask deep learning method that jointly assesses signal quality and detects arrhythmia events in wearable photoplethysmography devices for real-time detection of atrial fibrillation. The model is trained on approximately one million simulated, unlabeled physiological signals and fine-tuned on a curated dataset of over 500,000 labeled signals from over 100 individuals and 3 different wearable devices. We demonstrate that, in comparison with a single-task model, our architecture using unsupervised transfer learning through convolutional denoising autoencoders dramatically improves the performance of atrial fibrillation detection, from an F1 score of 0.54 to 0.96. We also include in our evaluation a prospectively derived replication cohort of ambulatory participants, in which the algorithm performed with high sensitivity (0.98), specificity (0.99), and F1 score (0.93). We show that two-stage training can help address the unbalanced-data problem common to biomedical applications, where large-scale, well-annotated datasets are hard to generate due to the expense of manual annotation, data acquisition, and participant privacy.
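The multitask idea is that the signal-quality head gates the arrhythmia head: a noisy wrist-PPG window should not trigger an AF call no matter what the rhythm head says. The DeepBeat code itself is not in the abstract; this is a minimal sketch of that gating plus the F1 metric it is evaluated with, and the quality/probability thresholds are illustrative assumptions:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def quality_gated_predictions(af_prob, quality_prob, q_min=0.5, p_min=0.5):
    """Flag AF only on segments whose estimated signal quality clears
    a minimum; poor-quality PPG windows are treated as non-events."""
    return ((quality_prob >= q_min) & (af_prob >= p_min)).astype(int)
```

In the toy case below the second window has a high AF probability but poor quality, so the gate suppresses it, which is exactly the false-positive mode this architecture targets.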


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Miguel Rodrigo ◽  
Albert J Rogers ◽  
Prasanth Ganesan ◽  
Mahmood Alhusseini ◽  
Justin Xu ◽  
...  

Introduction: It is unclear whether atrial fibrillation (AF) is best identified on intracardiac recordings by varying shape, rapid rate, or extent of irregularity. Prioritizing these features may improve device diagnosis of AF. Hypothesis: AF can be separated from organized atrial flutter (AFL) by electrogram shape, independent of the contributions of rate or regularity. Methods: In 86 patients (25 female, age 65±11 years), we trained a convolutional neural network (CNN) to classify AF or AFL from 64 unipolar electrograms of persistent AF recorded for 60 seconds. In cases labeled as AF, we modified inputs by progressively regularizing (a) electrogram shape, (b) rate, or (c) regularity in timing, to define which change switched the classification to AFL. Results: The CNN provided a c-statistic of 0.95 ± 0.05 to identify AF or AFL in independent test cohorts not used for training, using 10-fold cross validation. Figure A shows AF in which progressive regularization of shape and timing from #1 to #4 flipped the CNN classification to AFL in 45% of cases. EGMs with 100% consistent shape and timing were classified by their cycle length (CL = 1/rate): ~90% as AF for CL < 175 ms and ~80% as AFL for CL of 200-280 ms. Figure B shows sequences simulated from patient-specific AF EGMs that were classified as AF in 91 ± 12% of cases even when regular with a CL of 200-280 ms, demonstrating AF classification based on EGM shape alone. Figure B also illustrates some 'AF-pathognomonic' electrogram shapes in red. Conclusions: AF may be identified by specific EGM shape patterns independent of regularity or rate. Regularity in shape and timing contributes ~45% to AFL classification, and adding CL explains up to 80%. Further studies are required to establish the mechanistic basis and clinical implications of specific AF electrogram shapes.
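The key experimental manipulation here is "progressive regularization": blending an AF recording's beat timing toward a perfectly periodic train, and each beat's waveform toward a mean template, and watching when the CNN's label flips. The study's actual preprocessing is not published in the abstract; this numpy sketch shows one plausible form of that interpolation, with the blending parameter `alpha` and both function names being my own constructions:

```python
import numpy as np

def regularize_timing(activation_times, alpha):
    """Pull each activation toward a perfectly periodic train at the
    mean cycle length; alpha=0 keeps the original irregular timing,
    alpha=1 yields fully regular timing."""
    t = np.asarray(activation_times, dtype=float)
    cl = np.mean(np.diff(t))                       # mean cycle length
    regular = t[0] + cl * np.arange(len(t))        # ideal periodic train
    return (1 - alpha) * t + alpha * regular

def regularize_shape(beats, alpha):
    """Blend every beat's waveform toward the mean beat template
    (rows are beats, columns are samples)."""
    beats = np.asarray(beats, dtype=float)
    template = beats.mean(axis=0)
    return (1 - alpha) * beats + alpha * template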


2021 ◽  
Vol 17 (12) ◽  
pp. e1009613
Author(s):  
Kaitlin E. Frasier

Machine learning algorithms, including recent advances in deep learning, are promising tools for the detection and classification of broadband high-frequency signals in passive acoustic recordings. However, these methods are generally data-hungry, and progress has been limited by the lack of labeled datasets adequate for training and testing. Large quantities of known and as-yet-unidentified broadband signal types mingle in marine recordings, with variability introduced by acoustic propagation, source depths and orientations, and interacting signals. Manual classification of these datasets is unmanageable without in-depth knowledge of the acoustic context of each recording location. A signal classification pipeline is presented that combines unsupervised and supervised learning phases with opportunities for expert oversight to label signals of interest. The method is illustrated with a case study using unsupervised clustering to identify five toothed whale echolocation click types and two anthropogenic signal categories. These categories are used to train a deep network to classify detected signals, either in averaged time bins or as individual detections, in two independent datasets. Bin-level classification achieved higher overall precision (>99%) than click-level classification. However, click-level classification had the advantage of providing a label for every signal and achieved higher overall recall, with overall precision of 92-94%. The results suggest that unsupervised learning is a viable solution for efficiently generating the large, representative training sets needed for applications of deep learning in passive acoustics.
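The unsupervised phase lets an expert label whole clusters of similar click spectra instead of millions of individual detections, and those cluster labels then become the training set for the supervised network. The paper's clustering method is not specified in this abstract; as a stand-in, here is a deliberately tiny k-means over feature vectors (naive first-k initialization, adequate only for a sketch):

```python
import numpy as np

def kmeans(X, k, iters=25):
    """Tiny k-means as a stand-in for the unsupervised phase:
    group detections so an analyst can label clusters, not signals."""
    X = np.asarray(X, dtype=float)
    centers = X[:k].copy()                      # naive init: first k points
    for _ in range(iters):
        # Assign each detection to its nearest cluster center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned detections.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

In production one would use a stronger initialization (k-means++) and a clustering method tolerant of unknown cluster counts, since marine recordings contain "as-yet-unidentified" signal types by definition.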


Author(s):  
Zinah Mohsin Arkah ◽  
Dalya S. Al-Dulaimi ◽  
Ahlam R. Khekan

<p>Skin cancer is among the most dangerous diseases, and its early diagnosis can save many people’s lives. Manual classification methods are time-consuming and costly, so deep learning has been proposed for the automated classification of skin cancer. Although deep learning has shown impressive performance in several medical imaging tasks, it requires a large number of images to perform well. The skin cancer classification task struggles to provide deep learning with sufficient data because of the expensive annotation process and the experts it requires. One of the most common solutions is transfer learning from models pre-trained on the ImageNet dataset. However, the features learned by pre-trained models differ from skin cancer image features. To this end, we introduce a novel transfer learning approach: we first train the ImageNet pre-trained models (VGG, GoogleNet, and ResNet50) on a large number of unlabelled skin cancer images, and then train them on a small number of labeled skin images. Our experimental results show that the proposed method is effective, achieving an accuracy of 84% with ResNet50 when trained directly on the small labeled set and 93.7% when trained with the proposed approach.</p>
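The two-stage recipe above (learn representations from plentiful unlabeled images, then fit a classifier on the few labeled ones) can be illustrated without any neural network at all. In this numpy sketch, PCA stands in for the unsupervised pretraining stage and a nearest-centroid rule stands in for the fine-tuned classifier; everything here is my own simplified analogy, not the authors' pipeline:

```python
import numpy as np

def pretrain_pca(X_unlabeled, n_components=2):
    """Stage 1 stand-in: learn a feature projection from plentiful
    unlabeled images (PCA replaces network pretraining here)."""
    mu = X_unlabeled.mean(axis=0)
    _, _, vt = np.linalg.svd(X_unlabeled - mu, full_matrices=False)
    return mu, vt[:n_components]

def finetune_centroids(X_labeled, y, mu, components):
    """Stage 2 stand-in: fit class centroids in the pretrained
    feature space using the few labeled examples."""
    Z = (X_labeled - mu) @ components.T
    classes = np.unique(y)
    return classes, np.stack([Z[y == c].mean(axis=0) for c in classes])

def predict(X, mu, components, classes, centroids):
    """Classify new images by nearest centroid in feature space."""
    Z = (X - mu) @ components.T
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

The point of the analogy: stage 1 never sees a label, yet the features it learns from in-domain (skin) images make the small labeled set go much further, which is the effect the 84% vs. 93.7% comparison quantifies.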


2020 ◽  
Author(s):  
Víctor Sevillano ◽  
Katherine Holt ◽  
José L. Aznarte

Abstract In palynology, the visual classification of pollen grains from different species is a hard task usually carried out by human operators using microscopes. Many industries, including the medical and pharmaceutical sectors, rely on the accuracy of this manual classification process, which is reported to be around 67%. In this paper, we propose a new method to automatically classify pollen grains using deep learning techniques that improves the correct classification rate on images not previously seen by the models. Our proposal correctly classifies up to 98% of the examples from a dataset with 46 different classes of pollen grains, produced by the Classifynder classification system. This is an unprecedented result that surpasses all previous attempts in both accuracy and the number and difficulty of the taxa under consideration, which include types previously considered indistinguishable.

