Optical Mapping-Validated Machine Learning Improves Atrial Fibrillation Driver Detection by Multi-Electrode Mapping

Author(s):  
Alexander M. Zolotarev ◽  
Brian J. Hansen ◽  
Ekaterina A. Ivanova ◽  
Katelynn M. Helfrich ◽  
Ning Li ◽  
...  

Background: Atrial fibrillation (AF) can be maintained by localized intramural reentrant drivers. However, AF driver detection by clinical surface-only multielectrode mapping (MEM) has relied on subjective interpretation of activation maps. We hypothesized that applying machine learning to electrogram frequency spectra could accurately automate driver detection by MEM and add objectivity to the interpretation of MEM findings. Methods: Temporally and spatially stable single AF drivers were mapped simultaneously in explanted human atria (n=11) by subsurface near-infrared optical mapping (NIOM; 0.3 mm² resolution) and 64-electrode MEM (higher-density or lower-density arrays with 3 and 9 mm² resolution, respectively). Unipolar MEM and NIOM recordings were processed by Fourier transform analysis into 28 407 total Fourier spectra. Thirty-five machine learning features were extracted from each Fourier spectrum. Results: Targeted driver ablation and NIOM activation maps efficiently defined the center and periphery of AF driver preferential tracks and provided validated annotations for driver versus nondriver electrodes in MEM arrays. Compared with analysis of single-electrogram frequency features, averaging the features from each of the 8 neighboring electrodes significantly improved classification of AF driver electrograms. The classification metrics increased when less strict annotation, including driver-periphery electrodes, was added to the driver-center annotation. Notably, the F1-score for binary classification of the higher-density catheter data set was significantly higher than that of the lower-density catheter (0.81±0.02 versus 0.66±0.04, P<0.05). The trained algorithm correctly highlighted 86% of driver regions with higher-density but only 80% with lower-density MEM arrays (81% for lower-density and higher-density arrays together).
Conclusions: The machine learning model pretrained on Fourier spectrum features allows efficient classification of electrogram recordings as AF driver or nondriver compared with the NIOM gold standard. Future application of this NIOM-validated machine learning approach may improve the accuracy of AF driver detection for targeted ablation treatment in patients.
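The abstract's pipeline (Fourier spectrum per electrogram, per-electrode features, then averaging each feature over the 8 neighboring electrodes of the MEM array) can be sketched in a few lines. Everything here is an illustrative stdlib-only reconstruction under assumptions: the dominant-frequency feature, the toy signal, and the grid values are hypothetical, not the paper's 35-feature set or data.

```python
import cmath, math

def spectrum(signal):
    """Magnitude spectrum via a naive DFT (stdlib-only sketch)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(n // 2)]

def dominant_frequency_bin(signal):
    """One illustrative spectral feature: index of the largest non-DC bin."""
    mags = spectrum(signal)
    return max(range(1, len(mags)), key=lambda k: mags[k])

def neighbor_averaged_feature(grid_features, row, col):
    """Average a per-electrode feature over an electrode and its up-to-8
    neighbors, mirroring the neighbor-averaging step on an 8x8 MEM array."""
    rows, cols = len(grid_features), len(grid_features[0])
    vals = [grid_features[r][c]
            for r in range(max(0, row - 1), min(rows, row + 2))
            for c in range(max(0, col - 1), min(cols, col + 2))]
    return sum(vals) / len(vals)

# Toy "electrogram": a 50 Hz sine sampled at 500 Hz for 100 samples
fs, n = 500, 100
sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(n)]
k = dominant_frequency_bin(sig)

# Toy 8x8 feature grid with one outlier electrode smoothed by its neighbors
grid = [[1.0] * 8 for _ in range(8)]
grid[3][3] = 10.0
avg = neighbor_averaged_feature(grid, 3, 3)
print(k * fs / n, avg)  # → 50.0 2.0
```

Smoothing a feature over the 3x3 neighborhood is one plausible reading of why neighbor averaging helps: a driver region spans several electrodes, so pooling suppresses single-electrode noise.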

Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Aleksei Mikhailov ◽  
Brian Hansen ◽  
Matthew Fazio ◽  
Stanislav Zakharkin ◽  
Jichao Zhao ◽  
...  

Introduction: Conventional multielectrode mapping is not sufficient to reveal subsurface intramural activation; thus, atrial fibrillation (AF) driver identification remains challenging. To overcome these limitations we utilized machine learning (ML) to identify AF drivers based on the combination of electrogram (EGM) and 3D structural magnetic resonance imaging (MRI) features. Hypothesis: Detailed analysis of electrogram features, including minor deflections, combined with local structural features, can be used to define AF drivers. Methods: Sustained AF was mapped in coronary-perfused explanted human atria (n=7) with near-infrared optical mapping (NIOM; 0.3–0.9 mm² resolution) and a 64-electrode mapping catheter (3 mm² resolution). Unipolar EGMs were analyzed for multiple features of the steepest negative deflection and the 2nd–4th steepest deflections in multicomponent EGMs. Atria underwent 9.4T MRI (154–180 μm³ resolution) with gadolinium enhancement and histological validation of fibrosis. Both 3D structural and EGM data from NIOM-defined driver and non-driver regions were processed by ML algorithms (LR; PLSDA; GBM; CRF; PSVM; RSVM) using double cross-validation. Results: AF drivers' reentrant tracks were defined by NIOM activation mapping, the gold standard, and confirmed by targeted ablation. The best-performing ML algorithm (PLSDA) correctly classified mapped driver regions with 76.1% accuracy on the testing data. The most important features included sub-endocardial fibrosis, sub-epicardial fiber orientation, local wall thickness, and beat-to-beat variability of multicomponent EGM deflections. Conclusions: The ML models pre-trained on combined EGM and structural features allow efficient classification of AF driver vs non-driver regions defined by the NIOM gold standard. The results suggest that AF driver substrates are formed by a combination of 3D fibrotic structural features that correlate with local EGM characteristics.
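The double (nested) cross-validation scheme mentioned in the methods can be illustrated with a stdlib-only sketch: the outer folds estimate generalization while the inner folds select a model on training data only. The toy threshold "model" and 1-D data are hypothetical stand-ins for the paper's classifiers (LR, PLSDA, etc.).

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and deal them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def accuracy(threshold, data):
    """Fraction of (x, label) pairs where 'x > threshold' matches the label."""
    return sum((x > threshold) == y for x, y in data) / len(data)

def double_cross_validation(data, candidates, k_outer=5, k_inner=4):
    """Outer loop: held-out test folds. Inner loop: pick the hyperparameter
    (here, a decision threshold) using only the outer training portion."""
    scores = []
    folds = kfold_indices(len(data), k_outer)
    for i in range(k_outer):
        test = [data[j] for j in folds[i]]
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        inner = kfold_indices(len(train), k_inner, seed=i)
        best = max(candidates,
                   key=lambda t: sum(accuracy(t, [train[j] for j in f])
                                     for f in inner))
        scores.append(accuracy(best, test))
    return sum(scores) / len(scores)

# Toy 1-D data: the label is simply whether the feature exceeds 0.5
data = [(x / 100, x / 100 > 0.5) for x in range(100)]
acc = double_cross_validation(data, candidates=[0.1, 0.3, 0.5, 0.7])
print(acc)  # → 1.0 (the inner loop recovers the true threshold 0.5)
```

The point of the nesting is that the reported 76.1% testing accuracy is never computed on data that influenced model selection.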


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose: The novel coronavirus disease (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily life, public health and the global economy. Positive cases must be identified as soon as possible to avoid further spread of the disease and to provide swift care to affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that such images provide valuable details on the COVID-19 virus. Advanced artificial intelligence (AI) technologies combined with radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new paradigm for the automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed DarkCovidNet model is designed to provide accurate diagnostics for both binary classification (COVID vs. no findings) and multi-class classification (COVID vs. no findings vs. pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for the binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model used in this research as the classifier was originally developed for the you-only-look-once (YOLO) real-time object detection method. A total of 17 convolutional layers were implemented, with different filters on each layer. This platform can be used by radiologists to verify their initial screening and can also be used to screen patients through the cloud. Design/methodology/approach: This study uses the CNN-based Darknet-19 model as the platform for a real-time object detection system, whose architecture is designed to detect objects in real time.
This study developed the DarkCovidNet model based on the Darknet architecture, with fewer layers and filters. Before discussing the DarkCovidNet model, consider the Darknet architecture and its functionality: the DarkNet-19 architecture typically consists of 19 convolution layers and 5 max-pooling layers. Findings: The work discussed in this paper is used to diagnose various radiology images and to develop a model that can accurately predict or classify the disease. The data set used in this work consists of COVID-19 and non-COVID-19 images taken from various sources. The deep learning model named DarkCovidNet was applied to the data set and showed significant performance in both binary and multi-class classification. The model achieved an average accuracy of 98.97% for the binary detection of COVID-19, whereas the multi-class model achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia. Research limitations/implications: One significant limitation of this work is that a limited number of chest X-ray images were used, while the number of COVID-19 patients is increasing rapidly. In the future, the model will be implemented on a larger data set generated from local hospitals, and its performance on that data will be evaluated. Originality/value: Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A conventional CNN structure includes a convolution layer that extracts characteristics from the input using the filters it applies, a pooling layer that reduces the size for computational efficiency, and a fully connected layer, which is a neural network.
A CNN model is created by integrating one or more of these layers, and its internal parameters are adjusted to accomplish a specific task, such as classification or object recognition.
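The layer arithmetic behind such an architecture (here, 17 convolution layers interleaved with 5 max-pool layers, per the abstract) can be checked with a short sketch. The exact block schedule, kernel sizes and input resolution below are assumptions in the spirit of DarkCovidNet, not the published layer list.

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial size after a convolution: floor((size + 2*pad - kernel)/stride) + 1.
    With 3x3 kernels, stride 1 and padding 1, the size is preserved."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after a 2x2 max-pool with stride 2: halves the size."""
    return (size - kernel) // stride + 1

# Hypothetical schedule: blocks of 3x3 convolutions ("C") separated by
# five 2x2 max-pool layers ("M"), totalling 17 convolutions.
layers = (["C"] * 2 + ["M"] + ["C"] * 3 + ["M"] + ["C"] * 3 + ["M"]
          + ["C"] * 3 + ["M"] + ["C"] * 3 + ["M"] + ["C"] * 3)

size = 256  # assumed square input resolution
for layer in layers:
    size = conv_out(size) if layer == "C" else pool_out(size)

print(layers.count("C"), layers.count("M"), size)  # → 17 5 8
```

Each pooling layer halves the feature map, so five pools take a 256-pixel input down to an 8x8 map before the classification head; the convolutions only change the channel count, not the spatial size, under these settings.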


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7417
Author(s):  
Alex J. Hope ◽  
Utkarsh Vashisth ◽  
Matthew J. Parker ◽  
Andreas B. Ralston ◽  
Joshua M. Roper ◽  
...  

Concussion injuries remain a major public health challenge. A significant unmet clinical need remains for tools that allow related physiological impairments and longer-term health risks to be identified earlier, better quantified, and more easily monitored over time. We address this challenge by combining a head-mounted wearable inertial motion unit (IMU)-based physiological vibration acceleration (“phybrata”) sensor and several candidate machine learning (ML) models. The performance of this solution is assessed for both binary classification of concussion patients and multiclass predictions of specific concussion-related neurophysiological impairments. Results are compared with previously reported approaches to ML-based concussion diagnostics. Using phybrata data from a previously reported concussion study population, four different machine learning models (Support Vector Machine, Random Forest Classifier, Extreme Gradient Boost, and Convolutional Neural Network) are first investigated for binary classification of the test population as healthy vs. concussion (Use Case 1). Results are compared for two different data preprocessing pipelines, Time-Series Averaging (TSA) and Non-Time-Series Feature Extraction (NTS). Next, the three best-performing NTS models are compared in terms of their multiclass prediction performance for specific concussion-related impairments: vestibular, neurological, or both (Use Case 2). For Use Case 1, the NTS model approach outperformed the TSA approach, with the two best algorithms achieving an F1 score of 0.94. For Use Case 2, the NTS Random Forest model achieved the best performance in the testing set, with an F1 score of 0.90, and identified a wider range of relevant phybrata signal features that contributed to impairment classification compared with manual feature inspection and statistical data analysis.
The overall classification performance achieved in the present work exceeds previously reported approaches to ML-based concussion diagnostics using other data sources and ML models. This study also demonstrates the first combination of a wearable IMU-based sensor and ML model that enables both binary classification of concussion patients and multiclass predictions of specific concussion-related neurophysiological impairments.
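The NTS idea, reducing each time series to summary-statistic features before classification, can be sketched as follows. The feature set and the nearest-centroid stand-in classifier are illustrative assumptions, not the study's actual phybrata features or its Random Forest model, and the toy traces are synthetic.

```python
import math

def nts_features(signal):
    """Non-time-series (summary-statistic) features from one trace.
    The choice of features here is illustrative, not the paper's set."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    return [mean, std, rms, peak]

def nearest_centroid(train, x):
    """Stand-in classifier: return the label of the closest class centroid."""
    groups = {}
    for feats, label in train:
        groups.setdefault(label, []).append(feats)
    best = None
    for label, rows in groups.items():
        centroid = [sum(col) / len(col) for col in zip(*rows)]
        d = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if best is None or d < best[0]:
            best = (d, label)
    return best[1]

# Synthetic traces: "healthy" low-amplitude vs "concussion" high-amplitude
healthy = [[0.1 * math.sin(0.3 * t) for t in range(50)] for _ in range(5)]
concussed = [[0.9 * math.sin(0.3 * t) for t in range(50)] for _ in range(5)]
train = ([(nts_features(s), "healthy") for s in healthy]
         + [(nts_features(s), "concussion") for s in concussed])

pred = nearest_centroid(train, nts_features([0.8 * math.sin(0.3 * t)
                                             for t in range(50)]))
print(pred)  # → concussion
```

Collapsing each recording to a fixed-length feature vector is what lets conventional classifiers such as Random Forests operate on variable-length sensor streams.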


Author(s):  
A. Hanel ◽  
H. Klöden ◽  
L. Hoegner ◽  
U. Stilla

Today, cameras mounted in vehicles are used to observe the driver as well as the objects around the vehicle. In this article, an outline of a concept for image-based recognition of dynamic traffic situations is presented. A dynamic traffic situation will be described by road users and their intentions. Images will be taken by a vehicle fleet and aggregated on a server. New machine learning strategies will be applied to these images iteratively as new data arrives on the server. The results of the learning process will be models describing the traffic situation, which will be transmitted back to the recording vehicles. The recognition will be performed as a standalone function in the vehicles and will use the received models. It can be expected that this method will make the detection and classification of objects around the vehicles more reliable. In addition, the prediction of their actions over the next seconds should become possible. As one example of how this concept is used, a method to recognize the illumination situation of a traffic scene is described. This makes it possible to handle the different appearances of objects depending on the illumination of the scene. Different illumination classes are defined to distinguish different illumination situations. Intensity-based features are extracted from the images and used by a classifier to assign an image to an illumination class. The method has been tested on a real data set of daytime and nighttime images, and it can be shown that the illumination class is classified correctly for more than 80% of the images.
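The intensity-feature step of that example can be sketched as follows. The two features, the threshold values and the toy images are hypothetical, chosen only to illustrate assigning an image to an illumination class from intensity statistics.

```python
def intensity_features(image):
    """Simple intensity features from a grayscale image (rows of 0-255
    values): mean brightness and the fraction of dark pixels.
    This feature choice is illustrative, not the paper's."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    dark_fraction = sum(p < 50 for p in pixels) / len(pixels)
    return mean, dark_fraction

def classify_illumination(image, mean_threshold=90, dark_threshold=0.6):
    """Assign an image to an illumination class from its intensity features.
    A real classifier would be trained; fixed thresholds stand in here."""
    mean, dark = intensity_features(image)
    return "night" if mean < mean_threshold or dark > dark_threshold else "day"

# Toy 2x3 grayscale "images"
day_img = [[180, 200, 170], [190, 210, 160]]
night_img = [[20, 35, 10], [15, 40, 25]]
day_pred = classify_illumination(day_img)
night_pred = classify_illumination(night_img)
print(day_pred, night_pred)  # → day night
```

Per-class thresholds like these would in practice be learned from the labeled daytime/nighttime data set rather than fixed by hand.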


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 135703-135719
Author(s):  
Weijun Zhu ◽  
Huanmei Wu ◽  
Miaolei Deng

2017 ◽  
Vol 24 (1) ◽  
pp. 3-37 ◽  
Author(s):  
SANDRA KÜBLER ◽  
CAN LIU ◽  
ZEESHAN ALI SAYYED

We investigate feature selection methods for machine learning approaches in sentiment analysis. More specifically, we use data from the cooking platform Epicurious and attempt to predict ratings for recipes based on user reviews. In machine learning approaches to such tasks, it is common to use word or part-of-speech n-grams. This results in a large set of features, of which only a small subset may be good indicators of sentiment. One of the questions we investigate concerns the extension of feature selection methods from a binary classification setting to a multi-class problem. We show that an inherently multi-class approach, multi-class information gain, outperforms ensembles of binary methods. We also investigate how to mitigate the effects of extreme skewing in our data set by making our features more robust and by using review and recipe sampling. We show that over-sampling is the best method for boosting performance on the minority classes, but it also results in a severe drop in overall accuracy of at least 6 percentage points.
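Multi-class information gain can be sketched directly from its definition: the entropy of the class labels minus the weighted entropy after splitting on a feature. The toy binary features and rating labels below are illustrative, not the paper's n-gram data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Multi-class information gain of one feature: H(labels) minus the
    weighted entropy of the labels after splitting on the feature's values."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy data: three rating classes; feature A partly separates the classes,
# feature B is uncorrelated noise
labels = ["low", "mid", "high", "low", "mid", "high"]
feat_a = [0, 1, 1, 0, 1, 1]   # value 0 isolates the "low" ratings
feat_b = [0, 1, 0, 1, 0, 1]   # independent of the rating

gain_a = information_gain(feat_a, labels)
gain_b = information_gain(feat_b, labels)
print(gain_a > gain_b)  # → True
```

Ranking every n-gram feature by this score and keeping the top-scoring ones is the usual filter-style selection this kind of study compares against binary-method ensembles; note the score handles any number of classes without decomposing the problem.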


2020 ◽  
Vol 10 (6) ◽  
pp. 1999 ◽  
Author(s):  
Milica M. Badža ◽  
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. The improvement of technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for brain tumor classification of three tumor types. The developed network is simpler than existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best result for the 10-fold cross-validation method was obtained with record-wise cross-validation on the augmented data set, and, in that case, the accuracy was 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
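The difference between the two 10-fold variants mentioned above, record-wise versus subject-wise cross-validation, can be sketched as follows. The record structure and subject/slice ids are hypothetical; the point is that subject-wise splitting keeps all of one patient's images in a single fold.

```python
from collections import defaultdict

def record_wise_folds(records, k):
    """Record-wise CV: records are dealt into folds regardless of subject,
    so images from one patient can land in both train and test folds."""
    return [records[i::k] for i in range(k)]

def subject_wise_folds(records, k):
    """Subject-wise CV: all records of a subject stay in one fold, the
    stricter test of generalization to unseen patients."""
    by_subject = defaultdict(list)
    for rec in records:
        by_subject[rec["subject"]].append(rec)
    folds = [[] for _ in range(k)]
    for i, subject in enumerate(sorted(by_subject)):
        folds[i % k].extend(by_subject[subject])
    return folds

# Toy records: 4 subjects with 3 MRI slices each (ids hypothetical)
records = [{"subject": s, "slice": i} for s in "ABCD" for i in range(3)]

folds = subject_wise_folds(records, k=2)
subjects_per_fold = [{r["subject"] for r in f} for f in folds]

rw = record_wise_folds(records, k=2)
rw_overlap = {r["subject"] for r in rw[0]} & {r["subject"] for r in rw[1]}
print(sorted(rw_overlap), subjects_per_fold)
```

Record-wise folds here share every subject between folds, which is one plausible reason record-wise accuracy (96.56% above) can overstate performance relative to a subject-wise evaluation.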

