Enhancing AI-guided STEMI detection algorithms by incorporating higher quality fiducial EKG elements

2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Mehta ◽  
J Avila ◽  
S Niklitschek ◽  
F Fernandez ◽  
C Villagran ◽  
...  

Abstract Background As EKG interpretation paradigms shift to a physician-free milieu, accumulating massive quantities of distilled, pre-processed data becomes a must for machine learning techniques. In our pursuit of reducing ischemic times in STEMI management, we have improved our Artificial Intelligence (AI)-guided diagnostic tool by following a three-step approach: 1) increase accuracy by adding larger clusters of data; 2) increase the breadth of EKG classifications to provide more precise feedback and further refine the inputs, which ultimately yields better and more accurate outputs; and 3) improve the algorithm's ability to discern between cardiovascular entities reflected in the EKG records. Purpose To bolster our algorithm's accuracy and reliability for electrocardiographic STEMI recognition. Methods Dataset: A total of 7,286 12-lead EKG records of 10-second length with a sampling frequency of 500 Hz, obtained from the Latin America Telemedicine Infarct Network from April 2014 to December 2019. This included the following balanced classes: angiographically confirmed STEMI, branch blocks, non-specific ST-T abnormalities, normal, and abnormal (200+ CPT codes, excluding those included in other classes). Labels of each record were manually checked by cardiologists to ensure precision (ground truth). Pre-processing: The first and last 250 samples were discarded to avoid a standardization pulse. An order-5 digital low-pass filter with a 35 Hz cut-off was applied. For each record, the mean was subtracted from each individual lead. Classification: The determined classes were “STEMI” and “Not-STEMI” (a combination of randomly sampled normal, branch block, non-specific ST-T abnormality, and abnormal records – 25% of each subclass). Training & Testing: A 1-D Convolutional Neural Network was trained and tested with a dataset proportion of 90/10, respectively. The last dense layer outputs a probability for each record of being STEMI or Not-STEMI. Additional testing was performed with a subset of the original complete dataset of unconfirmed STEMI. Performance indicators (accuracy, sensitivity, and specificity) were calculated for each model, and results were compared with our previous findings from past experiments. Results Complete STEMI data: Accuracy: 95.9%; Sensitivity: 95.7%; Specificity: 96.5%. Confirmed STEMI: Accuracy: 98.1%; Sensitivity: 98.1%; Specificity: 98.1%. Prior data obtained in our previous experiments are shown below for comparison. Conclusion(s) After the addition of clustered pre-processed data, all performance indicators for STEMI detection increased considerably between both confirmed STEMI datasets. On the other hand, the complete STEMI dataset kept a strong and steady set of performance metrics when compared with past results. These findings not only validate the consistency and reliability of our algorithm but also underscore the importance of creating a pristine dataset for this and any other AI-derived medical tool. Funding Acknowledgement Type of funding source: None
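
As a minimal sketch of the pre-processing steps described above (trimming the first and last 250 samples, an order-5 low-pass filter with a 35 Hz cut-off, and per-lead mean subtraction), assuming Python with NumPy/SciPy, a 500 Hz sampling rate, and a Butterworth filter family (the abstract states only the order and cut-off):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling frequency (Hz), per the abstract

def preprocess_ekg(record: np.ndarray) -> np.ndarray:
    """Pre-process one 12-lead EKG record shaped (n_samples, 12).

    Mirrors the steps described in the abstract: drop the first and
    last 250 samples (possible standardization pulse), apply an
    order-5 low-pass filter with a 35 Hz cut-off, and subtract the
    mean of each lead.
    """
    trimmed = record[250:-250, :]
    # Order-5 Butterworth low-pass at 35 Hz (the Butterworth family is
    # an assumption; the abstract only gives order and cut-off).
    b, a = butter(5, 35, btype="low", fs=FS)
    filtered = filtfilt(b, a, trimmed, axis=0)
    # Remove the per-lead baseline offset.
    return filtered - filtered.mean(axis=0, keepdims=True)
```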

2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Mehta ◽  
S Niklitschek ◽  
F Fernandez ◽  
C Villagran ◽  
J Avila ◽  
...  

Abstract Background EKG interpretation is slowly transitioning to a physician-free, Artificial Intelligence (AI)-driven endeavor. Our continued efforts to innovate follow a carefully laid stepwise approach: 1) create an AI algorithm that accurately identifies STEMI against non-STEMI using a 12-lead EKG; 2) challenge said algorithm by adding different EKG diagnoses to the previous experiment; and now 3) further validate the accuracy and reliability of our algorithm while also improving performance in prehospital and hospital settings. Purpose To provide an accurate, reliable, and cost-effective tool for STEMI detection with the potential to redirect human resources into other clinically relevant tasks. Methods Database: EKG records obtained from the Latin America Telemedicine Infarct Network (Mexico, Colombia, Argentina, and Brazil) from April 2014 to December 2019. Dataset: A total of 11,567 12-lead EKG records of 10-second length with a sampling frequency of 500 Hz, including the following balanced classes: unconfirmed and angiographically confirmed STEMI, branch blocks, non-specific ST-T abnormalities, normal, and abnormal (200+ CPT codes, excluding those included in other classes). The label of each record was manually checked by cardiologists to ensure precision (ground truth). Pre-processing: The first and last 250 samples were discarded as they may contain a standardization pulse. An order-5 digital low-pass filter with a 35 Hz cut-off was applied. For each record, the mean was subtracted from each individual lead. Classification: The determined classes were STEMI (STEMI in different locations of the myocardium – anterior, inferior, and lateral) and Not-STEMI (a combination of randomly sampled normal, branch block, non-specific ST-T abnormality, and abnormal records – 25% of each subclass). Training & Testing: A 1-D Convolutional Neural Network was trained and tested with a dataset proportion of 90/10, respectively. The last dense layer outputs a probability for each record of being STEMI or Not-STEMI. Additional testing was performed with a subset of the original dataset of angiographically confirmed STEMI. Results See Figure Attached – Preliminary STEMI Dataset: Accuracy: 96.4%; Sensitivity: 95.3%; Specificity: 97.4% – Confirmed STEMI Dataset: Accuracy: 97.6%; Sensitivity: 98.1%; Specificity: 97.2%. Conclusions Our results remain consistent with our previous experience. By further increasing the amount and complexity of the data, the performance of the model improves. Future implementations of this technology in clinical settings look promising, not only in performing swift screening and diagnostic steps but also in partaking in complex STEMI management triage. Funding Acknowledgement Type of funding source: None
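
A minimal sketch of the classification stage described above, assuming TensorFlow/Keras; the abstract specifies only a 1-D Convolutional Neural Network whose last dense layer outputs a STEMI probability, so the layer sizes and the input shape (5,000 samples minus the 2 × 250 trimmed ones, 12 leads) are illustrative assumptions:

```python
import tensorflow as tf

# Minimal 1-D CNN sketch for binary STEMI / Not-STEMI classification.
# The architecture below is an assumption for illustration; the
# abstract only states that a 1-D CNN with a final dense layer
# outputs a probability per record.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4500, 12)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(STEMI)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# Training on the 90/10 split would then be, e.g.:
# model.fit(x_train, y_train, validation_split=0.1, epochs=20)
```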


Author(s):  
Jesús García ◽  
Jose Manuel Molina ◽  
Jorge Trincado

This paper presents a methodology for designing sensor fusion parameters using real navigation performance indicators for UAVs based on the PixHawk flight controller and its peripherals. The methodology and the selected performance indicators make it possible to find the best parameters for the fusion system given a particular sensor configuration and a predefined real mission. The selected real platform is described with emphasis on the available sensors and data processing software, and an experimental methodology is proposed to characterize the sensor data fusion output and determine the best choice of parameters, using quality measurements of the tracking output based on performance metrics that do not require ground truth.
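
The abstract does not spell out which ground-truth-free quality metrics are used; as one hedged illustration of the kind of indicator that could score a fused tracking output without reference data, a smoothness measure over the estimated track might look like the following (Python/NumPy; the metric choice and names are assumptions, not the paper's actual indicators):

```python
import numpy as np

def smoothness_score(positions: np.ndarray, dt: float) -> float:
    """Illustrative ground-truth-free quality indicator: RMS of the
    estimated acceleration along a fused position track.

    `positions` is an (N, 3) array of fused position estimates sampled
    every `dt` seconds. Lower values indicate a smoother, less noisy
    fusion output.
    """
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return float(np.sqrt(np.mean(np.sum(acceleration**2, axis=1))))
```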


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 279
Author(s):  
Rafael Padilla ◽  
Wesley L. Passos ◽  
Thadeu L. B. Dias ◽  
Sergio L. Netto ◽  
Eduardo A. B. da Silva

Recent outstanding results of supervised object detection in competitions and challenges are often associated with specific metrics and datasets. The evaluation of such methods applied in different contexts has increased the demand for annotated datasets. Annotation tools represent the location and size of objects in distinct formats, leading to a lack of consensus on the representation. Such a scenario often complicates the comparison of object detection methods. This work alleviates this problem along the following lines: (i) it provides an overview of the most relevant evaluation methods used in object detection competitions, highlighting their peculiarities, differences, and advantages; (ii) it examines the most commonly used annotation formats, showing how different implementations may influence the assessment results; and (iii) it provides a novel open-source toolkit supporting different annotation formats and 15 performance metrics, making it easy for researchers to evaluate the performance of their detection algorithms on most known datasets. In addition, this work proposes a new metric, also included in the toolkit, for evaluating object detection in videos based on the spatio-temporal overlap between the ground-truth and detected bounding boxes.
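
Most of the metrics surveyed in such evaluations build on the Intersection over Union (IoU) between a ground-truth and a detected bounding box; a minimal sketch of that computation (plain Python, corner-format boxes assumed) is:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x_min, y_min, x_max, y_max). IoU is the basic overlap measure
    underlying most object detection metrics such as average precision."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0
```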


Data ◽  
2019 ◽  
Vol 4 (3) ◽  
pp. 127 ◽  
Author(s):  
Lucas Pereira

Datasets are important for researchers to build models and test how they perform, as well as to reproduce the research experiments of others. This data paper presents the NILM Performance Evaluation dataset (NILMPEds), which is aimed primarily at research reproducibility in the field of non-intrusive load monitoring (NILM). This initial release of NILMPEds is dedicated to event detection algorithms and comprises ground-truth data for four test datasets, the specification of 47,950 event detection models, the power events returned by each model in the four test datasets, and the performance of each individual model according to 31 performance metrics.
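
As a hedged illustration of how an event detection model might be scored against ground-truth data of this kind, the sketch below computes three common indicators (precision, recall, F1) using a simple timestamp-tolerance matching rule; both the rule and the names are assumptions for illustration, not NILMPEds' own 31 metrics:

```python
def event_metrics(detected, truth, tol=1.0):
    """Precision, recall, and F1 for power-event detection, matching
    each ground-truth event timestamp to at most one detected event
    within `tol` seconds (greedy matching).
    """
    detected = sorted(detected)
    matched = set()
    tp = 0
    for t in sorted(truth):
        for i, d in enumerate(detected):
            if i not in matched and abs(d - t) <= tol:
                matched.add(i)
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```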


2015 ◽  
Vol E98.C (2) ◽  
pp. 156-161
Author(s):  
Hidenori YUKAWA ◽  
Koji YOSHIDA ◽  
Tomohiro MIZUNO ◽  
Tetsu OWADA ◽  
Moriyasu MIYAZAKI
Keyword(s):  
Ka Band ◽  
Low Pass ◽  

2011 ◽  
Vol 5 (2) ◽  
pp. 155-162
Author(s):  
Jose de Jesus Rubio ◽  
Diana M. Vazquez ◽  
Jaime Pacheco ◽  
Vicente Garcia

Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 328
Author(s):  
Mikulas Huba ◽  
Damir Vrancic

The paper investigates and explains a new, simple analytical tuning of proportional-integral-derivative (PID) controllers. In combination with nth-order series binomial low-pass filters, these controllers are applied to double-integrator-plus-dead-time (DIPDT) plant models. With respect to the use of derivatives, it should be understood that the design of appropriate filters is not only an implementation problem; it is also critical for the resulting performance, robustness, and noise attenuation. To simplify controller commissioning, integrated tuning procedures (ITPs) based on three different concepts of filter delay equivalences are presented. For the simultaneous determination of controller and filter parameters, the design uses the multiple real dominant pole method. The excellent control loop performance in a noisy environment and the specific advantages and disadvantages of the resulting equivalences are discussed. The results show that none of them is globally optimal: each is advantageous only for certain noise levels and the desired degree of filtering.
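
A minimal sketch of the controller structure discussed above, a PID controller in series with an nth-order binomial low-pass filter, assuming Python with SciPy; the numerical gains and time constant are placeholders, not the paper's tuning rules:

```python
import numpy as np
from scipy import signal

def filtered_pid(kp, ki, kd, tf_, n):
    """Series connection of an ideal PID controller
    C(s) = (kd*s^2 + kp*s + ki) / s with an nth-order binomial
    low-pass filter Q_n(s) = 1 / (tf_*s + 1)^n.
    """
    num = [kd, kp, ki]            # kd*s^2 + kp*s + ki
    den = [1.0, 0.0]              # s
    for _ in range(n):
        den = np.polymul(den, [tf_, 1.0])   # multiply by (tf_*s + 1)
    return signal.TransferFunction(num, den)

# Example: inspect the frequency response of one filtered PID
# (placeholder values, for illustration only).
ctrl = filtered_pid(kp=2.0, ki=1.0, kd=0.5, tf_=0.05, n=3)
w, mag, phase = signal.bode(ctrl)
```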


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 563
Author(s):  
Jorge Pérez-Bailón ◽  
Belén Calvo ◽  
Nicolás Medrano

This paper presents a new approach based on the use of a Current Steering (CS) technique for the design of fully integrated Gm–C Low Pass Filters (LPF) with sub-Hz to kHz tunable cut-off frequencies and an enhanced power-area-dynamic range trade-off. The proposed approach has been experimentally validated by two different first-order single-ended LPFs designed in a 0.18 µm CMOS technology powered by a 1.0 V single supply: a folded-OTA based LPF and a mirrored-OTA based LPF. The first one exhibits a constant power consumption of 180 nW at a 100 nA bias current with an active area of 0.00135 mm² and a tunable cutoff frequency that spans over 4 orders of magnitude (~100 mHz–152 Hz @ CL = 50 pF) while preserving a dynamic range greater than 78 dB. The second one exhibits a power consumption of 1.75 µW at 500 nA with an active area of 0.0137 mm² and a tunable cutoff frequency that spans over 5 orders of magnitude (~80 mHz–~1.2 kHz @ CL = 50 pF) while preserving a dynamic range greater than 73 dB. Compared with previously reported filters, this proposal is a competitive solution that satisfies the low-voltage, low-power on-chip constraints, making it a preferable choice for general-purpose reconfigurable front-end sensor interfaces.
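
The wide cut-off tunability follows from the first-order Gm–C relation f_c = gm / (2πC); a small sketch (Python, with illustrative transconductance values not taken from the paper) shows how sweeping gm against the 50 pF load spans several orders of magnitude:

```python
import math

def gmc_cutoff_hz(gm_siemens: float, c_farads: float) -> float:
    """Textbook first-order Gm-C low-pass cut-off: f_c = gm / (2*pi*C)."""
    return gm_siemens / (2 * math.pi * c_farads)

C_LOAD = 50e-12  # 50 pF load, as in the abstract
# Assumed transconductance sweep, for illustration only.
for gm in (50e-12, 50e-9):
    print(f"gm = {gm:.1e} S  ->  f_c = {gmc_cutoff_hz(gm, C_LOAD):.3g} Hz")
```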


Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 37
Author(s):  
Bingsheng Wei ◽  
Martin Barczyk

We consider the problem of vision-based detection and ranging of a target UAV using the video feed from a monocular camera onboard a pursuer UAV. Our previously published work in this area employed a cascade classifier algorithm to locate the target UAV, which was found to perform poorly in complex background scenes. We thus study the replacement of the cascade classifier with newer machine learning-based object detection algorithms. Five candidate algorithms are implemented and quantitatively tested in terms of their efficiency (measured as frames-per-second processing rate), accuracy (measured as the root mean squared error between ground truth and detected location), and consistency (measured as mean average precision) in a variety of flight patterns, backgrounds, and test conditions. Assigning relative weights of 20%, 40%, and 40% to these three criteria, we find that when flying over a white background, the top three performers are YOLO v2 (76.73 out of 100), Faster RCNN v2 (63.65 out of 100), and Tiny YOLO (59.50 out of 100), while over a realistic background, the top three performers are Faster RCNN v2 (54.35 out of 100), SSD MobileNet v1 (51.68 out of 100), and SSD Inception v2 (50.72 out of 100), leading us to recommend Faster RCNN v2. We then provide a roadmap for further work on integrating the object detector into our vision-based UAV tracking system.
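
As a hedged sketch of how the 20%/40%/40% weighting could combine the three criteria into a 0-100 score (Python; the normalization of FPS and RMSE below is an assumption, not necessarily the authors' exact scheme):

```python
def weighted_score(fps, rmse, mean_ap, fps_max, rmse_max):
    """Combine efficiency, accuracy, and consistency into a 0-100 score
    using the 20%/40%/40% weights from the abstract. FPS is normalized
    against the fastest detector, RMSE is inverted against the worst
    case, and mAP is already in [0, 1]."""
    efficiency = fps / fps_max                  # higher is better
    accuracy = 1.0 - min(rmse / rmse_max, 1.0)  # lower error is better
    consistency = mean_ap
    return 100 * (0.2 * efficiency + 0.4 * accuracy + 0.4 * consistency)
```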

