The impact of artifact correction methods of RR series on heart rate variability parameters

2018 ◽  
Vol 124 (3) ◽  
pp. 646-652 ◽  
Author(s):  
Anderson Ivan Rincon Soler ◽  
Luiz Eduardo Virgilio Silva ◽  
Rubens Fazan ◽  
Luiz Otavio Murta

Heart rate variability (HRV) analysis is widely used to investigate the autonomic regulation of the cardiovascular system. HRV is often analyzed using RR time series, which can be affected by different types of artifacts. Although there are several artifact correction methods, no study has compared their performance in actual experimental contexts. This work aimed to evaluate the impact of different artifact correction methods on several HRV parameters. Initially, 36 ECG recordings of control rats or rats with heart failure or hypertension were analyzed to characterize artifact occurrence rates and distributions, to be mimicked in simulations. After a rigorous analysis, only 16 recordings (n = 16) with artifact-free segments of at least 10,000 beats were selected. RR interval losses were then simulated in the artifact-free (reference) time series according to real observations. The correction methods applied to the simulated series were deletion, linear interpolation, cubic spline interpolation, modified moving average window, and nonlinear predictive interpolation (NPI). Linear (time- and frequency-domain) and nonlinear HRV parameters were calculated from the corrupted-then-corrected time series, as well as from the reference series, to evaluate the accuracy of each correction method. Results show that NPI provides the best overall performance. However, several correction approaches, for example the simple deletion procedure, can perform well in some situations, depending on the HRV parameters under consideration. NEW & NOTEWORTHY This work analyzes the performance of correction techniques commonly applied to the missing-beats problem in RR time series. Starting from artifact-free RR series, spurious values were inserted based on actual data from experimental settings. We intend our work to serve as a guide showing how artifacts should be corrected to preserve as much as possible the original heart rate variability properties.
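
A minimal sketch (not the authors' implementation) of three of the compared correction methods, applied to an RR series with two missing beats; RR values are illustrative, gaps are indexed by beat number, and NPI and the moving-average window are omitted for brevity:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rr = np.array([810., 805., 820., np.nan, np.nan, 815., 808., 812.])  # ms
beats = np.arange(len(rr))
missing = np.isnan(rr)

# 1) Deletion: drop the corrupted intervals altogether.
rr_deleted = rr[~missing]

# 2) Linear interpolation across the gap.
rr_linear = rr.copy()
rr_linear[missing] = np.interp(beats[missing], beats[~missing], rr[~missing])

# 3) Cubic spline interpolation across the gap.
rr_spline = rr.copy()
rr_spline[missing] = CubicSpline(beats[~missing], rr[~missing])(beats[missing])
```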

2021 ◽  
Author(s):  
Jesse M. Vance ◽  
Kim Currie ◽  
John Zeldis ◽  
Peter Dillingham ◽  
Cliff S. Law

Abstract. Regularized time series of ocean carbon data are necessary for assessing seasonal dynamics, annual budgets, interannual variability, and long-term trends. There are, however, no standardized methods for imputing gaps in ocean carbon time series, and only limited evaluation of the numerous methods available for constructing uninterrupted time series. A comparative assessment of eight imputation models was performed using data from seven long-term monitoring sites. Multivariate linear regression (MLR), mean imputation, linear interpolation, spline interpolation, Stineman interpolation, Kalman filtering, weighted moving average, and multiple imputation by chained equations (MICE) models were compared using cross-validation to determine error and bias. A bootstrapping approach was employed to determine model sensitivity to varying degrees of data gaps, and secondary time series with artificial gaps were used to evaluate impacts on seasonality and annual summations and to estimate uncertainty. All models were fit to dissolved inorganic carbon (DIC) time series; the MLR and MICE models were also applied to field measurements of temperature and salinity and to remotely sensed chlorophyll, with model coefficients fit for monthly mean conditions. MLR estimated DIC with a mean error of 8.8 µmol kg−1 among the five oceanic sites and 20.0 µmol kg−1 among the two coastal sites. The empirical methods of MLR, MICE, and mean imputation retained observed seasonal cycles over greater amounts and durations of gaps, resulting in lower error in annual budgets and outperforming the other statistical methods. MLR had lower bias and sampling sensitivity than MICE and mean imputation and provided the most robust option for imputing time series with gaps of various durations.
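
A minimal sketch of the MLR imputation idea (synthetic data, assumed variable names): regress DIC on co-measured predictors, then predict DIC where it is missing. The per-month coefficient fitting used in the study is omitted here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 120  # e.g. ten years of monthly samples
temperature = rng.normal(14.0, 3.0, n)
salinity = rng.normal(34.5, 0.2, n)
chlorophyll = rng.lognormal(0.0, 0.5, n)
dic = 2100 - 8 * (temperature - 14) + 30 * (salinity - 34.5) + rng.normal(0, 5, n)
dic[rng.choice(n, 20, replace=False)] = np.nan  # artificial gaps

X = np.column_stack([temperature, salinity, chlorophyll])
gap = np.isnan(dic)
model = LinearRegression().fit(X[~gap], dic[~gap])
dic_filled = dic.copy()
dic_filled[gap] = model.predict(X[gap])  # imputed DIC values
```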


Author(s):  
J. Philip Saul ◽  
Gaetano Valenza

Spontaneous beat-to-beat variations of heart rate (HR) have intrigued scientists and casual observers for centuries; however, it was not until the 1970s that investigators began to apply engineering tools to the analysis of these variations, fostering the field we now know as heart rate variability, or HRV. Since then, the field has exploded to include not only a wide variety of traditional linear time- and frequency-domain applications for the HR signal, but also more complex linear models that incorporate additional physiological parameters such as respiration, arterial blood pressure, central venous pressure, and autonomic nerve signals. Most recently, the field has branched out to address the nonlinear components of many physiological processes, the complexity of the systems being studied, and the important issue of specificity when these tools are applied to individuals. When the impacts of all these developments are combined, it seems likely that the field of HRV will soon begin to realize its potential as an important component of the toolbox used for the diagnosis and therapy of patients in the clinic. This article is part of the theme issue 'Advanced computation in cardiovascular physiology: new challenges and opportunities'.


Author(s):  
Richard McCleary ◽  
David McDowall ◽  
Bradley J. Bartos

The general AutoRegressive Integrated Moving Average (ARIMA) model can be written as the sum of a noise component and an exogenous component. If an exogenous impact is trivially small, the noise component can be identified with the conventional modeling strategy. If the impact is nontrivial or unknown, the sample AutoCorrelation Function (ACF) will be distorted in unknown ways. Although this problem is solved most simply when the time series of interest is long and well behaved, such time series are unfortunately uncommon. The preferred alternative requires that the structure of the intervention be known, allowing the noise function to be identified from the residualized time series. Although few substantive theories specify the "true" structure of the intervention, most specify the dichotomous onset and duration of an impact. Chapter 5 describes this strategy for building an ARIMA intervention model and demonstrates its application to example interventions with abrupt and permanent, gradually accruing, gradually decaying, and complex impacts.
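
A minimal sketch of this strategy (synthetic data, illustrative model orders): encode the intervention as a known exogenous step function, then estimate it jointly with the ARIMA noise component.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n, onset = 200, 120
step = (np.arange(n) >= onset).astype(float)  # abrupt, permanent impact

# Simulate AR(1) noise plus a level shift of +5 at the onset
noise = np.zeros(n)
shocks = rng.normal(0.0, 1.0, n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + shocks[t]
y = 10 + 5 * step + noise

result = SARIMAX(y, exog=step, order=(1, 0, 0), trend="c").fit(disp=False)
print(result.params)  # intercept, estimated impact (~5), AR coefficient, variance
```

Gradually accruing or decaying impacts are modeled analogously by passing the step through a first-order transfer function rather than entering it as a direct regressor.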


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adriana Leal ◽  
Mauro F. Pinto ◽  
Fábio Lopes ◽  
Anna M. Bianchi ◽  
Jorge Henriques ◽  
...  

Abstract. Electrocardiogram (ECG) recordings, lasting hours before epileptic seizures, have been studied in the search for evidence of a preictal interval that follows a normal ECG trace and precedes the seizure's clinical manifestation. The preictal interval has not yet been clinically parametrized. Furthermore, its duration varies across seizures, both among patients and within the same patient. In this study, we performed a heart rate variability (HRV) analysis to investigate the discriminative power of HRV features in identifying the preictal interval. HRV information extracted from the linear time and frequency domains, as well as from nonlinear dynamics, was analysed. We inspected data from 238 temporal lobe seizures recorded from 41 patients with drug-resistant epilepsy in the EPILEPSIAE database. Unsupervised methods were applied to the HRV feature dataset, leading to a new perspective on preictal interval characterization. Distinguishable preictal behaviour was exhibited in 41% of the seizures and in 90% of the patients. Half of the preictal intervals were identified in the 40 min before seizure onset. The results demonstrate the potential of applying clustering methods to HRV features to deepen the current understanding of the preictal state.
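
A minimal sketch (synthetic features, hypothetical feature names; not the study's pipeline) of applying unsupervised clustering to windowed HRV features in search of a distinguishable preictal cluster:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Rows: 5-min windows before seizure onset; columns: e.g. meanNN, SDNN, LF/HF
baseline = rng.normal([800, 50, 1.5], [30, 8, 0.3], size=(40, 3))
preictal = rng.normal([760, 35, 2.2], [30, 8, 0.3], size=(10, 3))
features = np.vstack([baseline, preictal])

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # a cluster concentrated near onset suggests preictal behaviour
```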


2021 ◽  
Vol 11 (8) ◽  
pp. 3561
Author(s):  
Diego Duarte ◽  
Chris Walshaw ◽  
Nadarajah Ramesh

Across the world, healthcare systems are under stress, and this has been hugely exacerbated by the COVID pandemic. Key Performance Indicators (KPIs), usually in the form of time-series data, are used to help manage that stress. Making reliable predictions of these indicators, particularly for emergency departments (EDs), can facilitate acute unit planning, enhance quality of care, and optimise resources. This motivates models that can forecast relevant KPIs, and this paper addresses that need by comparing the Autoregressive Integrated Moving Average (ARIMA) method, a purely statistical model, with Prophet, a decomposable forecasting model based on trend, seasonality, and holiday variables, and with the General Regression Neural Network (GRNN), a machine learning model. The dataset analysed comprises four hourly valued indicators from a UK hospital: Patients in Department; Number of Attendances; Unallocated Patients with a DTA (Decision to Admit); and Medically Fit for Discharge. Typically, the data exhibit regular patterns and seasonal trends and can be impacted by external factors such as the weather or major incidents. The COVID pandemic is an extreme instance of the latter, and the behaviour of the sample data changed dramatically. The capacity to adapt quickly to such changes is crucial, and on this count the GRNN showed better results in both accuracy and reliability.
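
A minimal sketch of a GRNN (all data and parameters illustrative): the prediction is a Gaussian-kernel-weighted average of training targets, used here for a one-step-ahead forecast from lagged KPI values.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma):
    # Squared distances between each query pattern and each training pattern
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma**2))      # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)  # weighted average of targets

rng = np.random.default_rng(3)
hours = np.arange(500)
kpi = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, 500)  # hourly KPI

lags = 24  # use the previous day as the input pattern
X = np.array([kpi[i:i + lags] for i in range(len(kpi) - lags)])
y = kpi[lags:]
forecast = grnn_predict(X[:-1], y[:-1], X[-1:], sigma=5.0)
print(forecast, y[-1])  # predicted vs actual value for the final hour
```

Because a GRNN has no fitted coefficients, it adapts as soon as new patterns enter the training set, which is consistent with the adaptability advantage reported here.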

