Smoothing Facilitates the Detection of Coupled Responses in Psychophysiological Time Series

2000 ◽  
Vol 14 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Joni Kettunen ◽  
Niklas Ravaja ◽  
Liisa Keltikangas-Järvinen

Abstract We examined the use of smoothing to enhance the detection of response coupling from the activity of different response systems. Three different types of moving average smoothers were applied to both simulated interbeat interval (IBI) and electrodermal activity (EDA) time series and to empirical IBI, EDA, and facial electromyography time series. The results indicated that progressive smoothing increased the efficiency of the detection of response coupling but did not increase the probability of Type I error. The power of the smoothing methods depended on the response characteristics. The benefits and use of the smoothing methods to extract information from psychophysiological time series are discussed.
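As a rough illustration of the idea (not the authors' procedure), the Python sketch below applies a simple boxcar moving-average smoother to two simulated, noisy response series that share a slow common component, then compares a zero-lag correlation index of coupling before and after smoothing. The series, window lengths, and coupling index are all illustrative assumptions.

```python
import numpy as np

def moving_average(x, window=5):
    """Simple boxcar moving-average smoother (one of several possible kernels)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.arange(300)
signal = np.sin(2 * np.pi * t / 60)        # shared slow response component
ibi = signal + rng.normal(0, 1.0, t.size)  # noisy "IBI-like" series
eda = signal + rng.normal(0, 1.0, t.size)  # noisy "EDA-like" series

def coupling(a, b):
    """Zero-lag Pearson correlation as a crude index of response coupling."""
    return np.corrcoef(a, b)[0, 1]

print("raw coupling:     ", round(coupling(ibi, eda), 3))
print("smoothed coupling:", round(coupling(moving_average(ibi, 15),
                                           moving_average(eda, 15)), 3))
```

On such simulated data the smoothed series typically show a markedly higher coupling estimate, consistent with the abstract's point that smoothing improves detection efficiency.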

Author(s):  
Mehdi Moradi ◽  
Manuel Montesino-SanMartin ◽  
M. Dolores Ugarte ◽  
Ana F. Militino

Abstract We propose an adaptive-sliding-window approach (LACPD) for the problem of change-point detection in a set of time-ordered observations. The proposed method is combined with sub-sampling techniques to compensate for the lack of enough data near the time series' tails. Through a simulation study, we analyse its behaviour in the presence of an early/middle/late change-point in the mean, and compare its performance with some of the frequently used and recently developed change-point detection methods in terms of power, type I error probability, area under the ROC curve (AUC), absolute bias, variance, and root-mean-square error (RMSE). We conclude that LACPD outperforms other methods by maintaining a low type I error probability. Unlike some other methods, the performance of LACPD does not depend on the time index of change-points, and it generally has lower bias than the alternative methods. Moreover, in terms of variance and RMSE, it outperforms other methods when change-points are close to the time series' tails, whereas it shows a similar (sometimes slightly poorer) performance to other methods when change-points are close to the middle of the time series. Finally, we apply our proposal to two sets of real data: the well-known example of the annual flow of the Nile river at Aswan, Egypt, from 1871 to 1970, and a novel remote sensing application consisting of a 34-year time series of satellite images of the Normalised Difference Vegetation Index in Wadi As-Sirhan valley, Saudi Arabia, from 1986 to 2019. We conclude that LACPD performs well in detecting the presence of a change as well as the time and magnitude of change under real conditions.
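The sketch below illustrates the general sliding-window idea behind change-point detection in the mean (comparing the observations on either side of each candidate point); it is not the authors' LACPD implementation and omits the adaptive window and sub-sampling steps.

```python
import numpy as np

def sliding_window_cp(x, window=20):
    """Scan candidate change-points; at each index compare the means of the
    'window' observations before and after with a two-sample z-like statistic.
    Returns the index with the largest absolute statistic and its value."""
    x = np.asarray(x, dtype=float)
    best_idx, best_stat = None, 0.0
    for i in range(window, len(x) - window):
        left, right = x[i - window:i], x[i:i + window]
        pooled_se = np.sqrt(left.var(ddof=1) / window + right.var(ddof=1) / window)
        stat = (right.mean() - left.mean()) / pooled_se
        if abs(stat) > abs(best_stat):
            best_idx, best_stat = i, stat
    return best_idx, best_stat

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])  # change at t=60
print(sliding_window_cp(series))  # expected near index 60
```

A fixed window, as here, suffers near the series tails where fewer observations are available; the sub-sampling step described in the abstract is aimed at exactly that weakness.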


2020 ◽  
Vol 13 (2) ◽  
pp. 225-232 ◽  
Author(s):  
Mieczysław Szyszkowicz

Abstract In this work, a new technique is proposed to study short-term exposure and adverse health effects. The presented approach uses hierarchical clusters with the following structure: each pair of two sequential days in a year is embedded in that year, giving 183 clusters per year with the embedded structure <year:2 days>. Time-series analysis is conducted using a conditional Poisson regression with the constructed clusters as a stratum. Unmeasured confounders such as seasonal and long-term trends are not modelled but are controlled by the structure of the clusters. The proposed technique is illustrated using four freely accessible databases, which contain complex simulated data; these data are available as compressed R workspace files. Results from the presented methodology applied to the simulated data were very close to the truth. In addition, the case-crossover method with 1-month and 2-week windows, and a conditional Poisson regression on 3-day clusters as a stratum, were also applied to the simulated data. Difficulties (a high type I error rate) were observed for the case-crossover method in the presence of high concurvity in the simulated data. The proposed methods using various forms of a stratum were further applied to the Chicago mortality data. The considered methods often produce different qualitative and quantitative estimates.
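As a minimal sketch of the cluster construction (assuming daily data with placeholder outcome and exposure columns), the following pandas code builds the <year:2 days> stratum labels; the conditional Poisson regression itself, which conditions on each stratum, is not shown here.

```python
import pandas as pd

# Hypothetical daily data: date, event count, and exposure are placeholders.
df = pd.DataFrame({
    "date": pd.date_range("2015-01-01", "2016-12-31", freq="D"),
})
df["count"] = 10   # placeholder outcome (e.g., daily mortality count)
df["pm25"] = 8.0   # placeholder exposure

# <year:2 days> clusters: pair each two sequential days within a year.
df["year"] = df["date"].dt.year
df["pair"] = df.groupby("year").cumcount() // 2
df["stratum"] = df["year"].astype(str) + ":" + df["pair"].astype(str)

# Each year contributes ~183 two-day clusters; seasonal and long-term
# trends are absorbed by the stratum rather than modelled explicitly.
print(df.groupby("year")["pair"].nunique())
```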


2020 ◽  
Author(s):  
Corey Peltier ◽  
Reem Muharib ◽  
April Haas ◽  
Art Dowdy

Single-case research designs (SCRDs) are used to evaluate functional relations between an independent variable and dependent variable(s). SCRDs are frequently used when analyzing data related to autism spectrum disorder (ASD); namely, they allow for empirical evidence in support of practices that improve socially significant outcomes for individuals diagnosed with ASD. To determine a functional relation in SCRDs, a time-series graph is constructed and visual analysts evaluate data patterns. Preliminary evidence suggests that the approach used to scale the ordinate (i.e., y-axis) and the proportion of the x-axis length to the y-axis height (i.e., the data points per x- to y-axis ratio, DPPXYR) affect visual analysts' decisions regarding a functional relation and the magnitude of treatment effect, resulting in an increased likelihood of Type I errors. The purpose of this systematic review was to evaluate all time-series graphs published in the last decade (i.e., 2010-2020) in four premier journals in the field of ASD: Journal of Autism and Developmental Disorders, Research in Autism Spectrum Disorders, Autism, and Focus on Autism and Other Developmental Disabilities. The systematic search yielded 348 articles including 2,675 graphs. We identified large variation across and within types of SCRDs for the standardized X:Y ratio and the DPPXYR. In addition, 73% of graphs fell below a DPPXYR of 0.14, suggesting a heightened risk of Type I errors. A majority of graphs used an appropriate ordinate scaling method that would not increase Type I error rates. Implications for future research and practice are provided.
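For illustration, a small function computing a DPPXYR-style metric is sketched below, reading it as the physical X:Y aspect ratio divided by the number of plotted data points; this reading is an assumption, and the exact definition should be taken from the source literature.

```python
def dppxyr(x_axis_len_cm, y_axis_height_cm, n_data_points):
    """Data points per x- to y-axis ratio (DPPXYR), read here as the
    physical X:Y aspect ratio divided by the number of plotted points.
    This formula is an assumption for illustration; consult the source
    literature for the exact definition."""
    return (x_axis_len_cm / y_axis_height_cm) / n_data_points

# A graph 12 cm wide and 6 cm tall showing 20 data points:
print(round(dppxyr(12, 6, 20), 3))  # 0.1 -> below the 0.14 benchmark
```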


2015 ◽  
Vol 2015 ◽  
pp. 1-6 ◽  
Author(s):  
Ming-Wen An ◽  
Xin Lu ◽  
Daniel J. Sargent ◽  
Sumithra J. Mandrekar

Background. A phase II design with an option for direct assignment (stop randomization and assign all patients to experimental treatment based on interim analysis, IA) for a predefined subgroup was previously proposed. Here, we illustrate the modularity of the direct assignment option by applying it to the setting of two predefined subgroups and testing for separate subgroup main effects. Methods. We power the 2-subgroup direct assignment option design with 1 IA (DAD-1) to test for separate subgroup main effects, with assessment of power to detect an interaction in a post-hoc test. Simulations assessed the statistical properties of this design compared to the 2-subgroup balanced randomized design with 1 IA (BRD-1). Different response rates for treatment/control in subgroup 1 (0.4/0.2) and in subgroup 2 (0.1/0.2, 0.4/0.2) were considered. Results. The 2-subgroup DAD-1 preserves power and type I error rate compared to the 2-subgroup BRD-1, while exhibiting reasonable power in a post-hoc test for interaction. Conclusion. The direct assignment option is a flexible design component that can be incorporated into broader design frameworks, while maintaining desirable statistical properties, clinical appeal, and logistical simplicity.
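As a simplified illustration (ignoring the interim analysis and direct-assignment stage), the Monte Carlo sketch below estimates per-subgroup power and type I error for a two-arm comparison at response rates like those studied (0.4 vs. 0.2, and 0.2 vs. 0.2 under the null); the sample sizes and the two-proportion z-test are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test statistic with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

rng = np.random.default_rng(2)

def power(p_trt, p_ctl, n_per_arm=80, alpha=0.05, reps=5000):
    """Monte Carlo rejection rate for a per-subgroup treatment-vs-control
    comparison (a simplification: no interim analysis or direct assignment)."""
    crit = norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        x1 = rng.binomial(n_per_arm, p_trt)
        x2 = rng.binomial(n_per_arm, p_ctl)
        hits += abs(two_prop_z(x1, n_per_arm, x2, n_per_arm)) > crit
    return hits / reps

print("power, subgroup 1 (0.4 vs 0.2):", power(0.4, 0.2))
print("type I error (0.2 vs 0.2):     ", power(0.2, 0.2))
```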


1996 ◽  
Vol 21 (4) ◽  
pp. 390-404 ◽  
Author(s):  
Bradley E. Huitema ◽  
Joseph W. McKean ◽  
Jinsheng Zhao

The runs test is frequently recommended as a method of testing for nonindependent errors in time-series regression models. A Monte Carlo investigation was carried out to evaluate the empirical properties of this test using (a) several intervention and nonintervention regression models, (b) sample sizes ranging from 12 to 100, (c) three levels of α, (d) directional and nondirectional tests, and (e) 19 levels of autocorrelation among the errors. The results indicate that the runs test yields markedly asymmetrical error rates in the two tails and that neither directional nor nondirectional tests are satisfactory with respect to Type I error, even when the ratio of degrees of freedom to sample size is as high as .98. It is recommended that the test generally not be employed in evaluating the independence of the errors in time-series regression models.
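A minimal version of such a Monte Carlo check can be sketched as follows, applying the runs test to residuals from a mean-only model with AR(1) errors; the full study's regression models, sample sizes, and 19 autocorrelation levels are reduced here to a single illustrative setting.

```python
import numpy as np
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(3)

def runs_rejection_rate(n=50, rho=0.0, alpha=0.05, reps=2000):
    """Empirical rejection rate of the runs test applied to the residuals
    of a mean-only model whose errors follow an AR(1) process."""
    rejections = 0
    for _ in range(reps):
        e = np.empty(n)
        e[0] = rng.normal()
        for t in range(1, n):
            e[t] = rho * e[t - 1] + rng.normal()
        resid = e - e.mean()
        _, pval = runstest_1samp(resid, cutoff=0, correction=True)
        rejections += pval < alpha
    return rejections / reps

print("rho = 0.0:", runs_rejection_rate())        # should be near the nominal 0.05
print("rho = 0.5:", runs_rejection_rate(rho=0.5)) # power against dependent errors
```

A two-tailed rate near nominal here does not contradict the paper's finding: the asymmetry it reports concerns the two tails separately, which this aggregate rate conceals.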


Author(s):  
Asli Kaya ◽  
Fatih Cemrek ◽  
Ozer Ozdemir

COVID-19 is a respiratory disease caused by a novel coronavirus first detected in December 2019. As the number of new cases increases rapidly, pandemic fatigue and public disinterest in different response strategies are creating new challenges for government officials in tackling the pandemic. Therefore, government officials need to fully understand the future dynamics of COVID-19 to develop strategic preparedness and flexible response planning. Given these conditions, in this study the autoregressive integrated moving average (ARIMA) time series model and wavelet neural network (WNN) methods are used to predict the numbers of new cases and new deaths and to draw possible future epidemic scenarios. These two methods were applied to publicly available data on the COVID-19 pandemic for Turkey, Italy, and the United Kingdom. In our analysis, except for the Turkey data, the WNN algorithm outperformed the ARIMA model in terms of forecasting consistency. Our work highlights the promise of wavelet neural networks for making predictions with very few features and a smaller amount of historical data.
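As an illustration of the ARIMA half of the comparison (the WNN method is not sketched), the code below fits an ARIMA model to a hypothetical daily case series and produces a two-week-ahead forecast; the series and the (p, d, q) order are assumptions for demonstration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily new-case counts; the paper instead used publicly
# available COVID-19 data for Turkey, Italy, and the United Kingdom.
rng = np.random.default_rng(4)
cases = pd.Series(np.cumsum(rng.poisson(50, 200)) // 10,
                  index=pd.date_range("2020-03-01", periods=200, freq="D"))

# The (p, d, q) order below is illustrative; in practice it would be chosen
# by inspecting ACF/PACF plots or by information criteria such as AIC.
model = ARIMA(cases, order=(2, 1, 1)).fit()
forecast = model.forecast(steps=14)  # two-week-ahead scenario
print(forecast.round(1))
```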


1994 ◽  
Vol 78 (1) ◽  
pp. 331-336 ◽  
Author(s):  
Bradley E. Huitema ◽  
Joseph W. McKean

Two new tests of the hypothesis of no lag-1 autocorrelation in a time-series process, i.e., H0: ρ1 = 0, are presented for two reduced-bias autocorrelation estimators that were introduced by Huitema and McKean in 1994. The performance of the new tests, ZF1 and tF2, was evaluated using Monte Carlo methods. Both new tests were superior to the conventional Bartlett asymptotic test for small and intermediate N, and tF2 outperformed ZF1 substantially at small and intermediate values of N. The true probability of Type I error associated with the tF2 test is exceedingly close to the nominal value for all values of α (.01, .05, and .10) and N (6 to 500) investigated. The tF2 test is also more powerful against positive values of ρ1 than the Bartlett and ZF1 tests.
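A sketch of the contrast between the conventional Bartlett test and a reduced-bias variant is given below; the exact ZF1 and tF2 statistics differ in detail, and the 1/N adjustment shown is an assumption for illustration only.

```python
import numpy as np

def lag1_autocorr(x):
    """Conventional lag-1 autocorrelation estimator r1."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d ** 2)

def bartlett_z(x):
    """Conventional Bartlett asymptotic test of H0: rho1 = 0, z = r1 * sqrt(N)."""
    return lag1_autocorr(x) * np.sqrt(len(x))

def reduced_bias_z(x):
    """Sketch of a reduced-bias variant: r1 is adjusted upward by 1/N before
    standardization, offsetting the well-known negative small-sample bias of r1.
    The exact Huitema-McKean ZF1 statistic differs in detail; this adjustment
    is an assumption for illustration."""
    n = len(x)
    return (lag1_autocorr(x) + 1.0 / n) * np.sqrt(n)

rng = np.random.default_rng(5)
x = rng.normal(size=30)  # white noise: both tests should rarely reject
print(round(bartlett_z(x), 3), round(reduced_bias_z(x), 3))
```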


Author(s):  
Peter K. Enns ◽  
Carolina Moehlecke ◽  
Christopher Wlezien

Abstract It is fairly well known that proper time series analysis requires that estimated equations be balanced. Numerous scholars mistake this to mean that one cannot mix orders of integration. Previous studies have clarified the distinction between equation balance and having different orders of integration, and shown that mixing orders of integration does not increase the risk of type I error when using the general error correction/autoregressive distributed lag (GECM/ADL) models, so long as equations are balanced (and other modeling assumptions are met). This paper builds on that research to assess the consequences for type II error when employing those models. Specifically, we consider cases where a true relationship exists, the left- and right-hand sides of the equation mix orders of integration, and the equation still is balanced. Using the asymptotic case, we find that the different orders of integration do not preclude identification of the true relationship using the GECM/ADL. We then highlight that estimation is trickier in practice, over finite time, as data sometimes do not reveal the underlying process. But simulations show that even in these cases, researchers will typically draw accurate inferences as long as they select their models based on the observed characteristics of the data and test to be sure that standard model assumptions are met. We conclude by considering the implications for researchers analyzing or conducting simulations with time series data.
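To make the setting concrete, the sketch below simulates an I(1) regressor, a dependent variable that error-corrects toward it, and an OLS-estimated GECM; the data-generating process and coefficient values are illustrative, not those used in the paper's simulations.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 500
x = np.cumsum(rng.normal(size=T))  # x is I(1): a pure random walk
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):              # y error-corrects toward x each period
    y[t] = y[t - 1] + 0.5 * (x[t - 1] - y[t - 1]) + rng.normal()

# GECM: dy_t = a0 + a1*y_{t-1} + b0*dx_t + b1*x_{t-1} + e_t
# The left-hand side dy_t is I(0); the right-hand side mixes I(1) levels,
# yet the equation is balanced because the levels cointegrate.
dy, dx = np.diff(y), np.diff(x)
X = sm.add_constant(np.column_stack([y[:-1], dx, x[:-1]]))
res = sm.OLS(dy, X).fit()
print(res.params.round(3))  # error-correction coefficient should be near -0.5
```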


2003 ◽  
Vol 22 (2) ◽  
pp. 147-170 ◽  
Author(s):  
Robert A Leitch ◽  
Yining Chen

This study analyzes the hypothesis generation and elimination capability of analytical procedures using Structural, Stepwise, Martingale, and ARIMA (Auto Regressive Integrated Moving Average) expectation models. We seed 14 errors into 36 complete sets of actual monthly financial statements from three companies. The bivariate pattern of differences, along with the structure of the accounting, business process, and economic system, are used to analytically determine (hypothesize) the most likely cause of the error. We then test the capability of these expectation models to generate correct hypotheses or to eliminate incorrect hypotheses. Positive and negative testing approaches, founded on multivariate normal theory, are examined. From a hypothesis generation perspective using the positive testing approach, the results indicate that the Structural and Stepwise models yield lower effectiveness risks (type II error) than the ARIMA and Martingale models, with the edge going to the Structural model. From a hypothesis elimination perspective using the negative testing approach, the results indicate that the Structural and Stepwise models yield lower efficiency risks (type I error) than the Martingale and ARIMA models, with the edge going to the Stepwise model. This study provides strong evidence to support the use of the structure of an organization's business processes (McCarthy 1982; Bell et al. 1997), its associated accounting system, and economic structure to build an expectation model. Moreover, the joint consideration of errors is found to be superior to the marginal approach advocated by Kinney (1987). The results presented here have the potential to assist auditors in directing audit efforts.
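As a rough illustration of the expectation-model comparison (covering only the Martingale and ARIMA models, not the Structural or Stepwise models), the sketch below computes both expectations for a hypothetical monthly account series with a seeded misstatement.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly account balances with an error "seeded" in the last month.
rng = np.random.default_rng(7)
balance = pd.Series(1000 + 5 * np.arange(36) + rng.normal(0, 10, 36))
balance.iloc[-1] += 80  # seeded misstatement

history, actual = balance.iloc[:-1], balance.iloc[-1]

# Martingale expectation: next value equals the last observed value.
martingale_exp = history.iloc[-1]

# ARIMA expectation: one-step-ahead forecast from a fitted model
# (the order is chosen here for illustration only).
arima_exp = ARIMA(history, order=(1, 1, 0)).fit().forecast(1).iloc[0]

for name, exp in [("martingale", martingale_exp), ("ARIMA", arima_exp)]:
    diff = actual - exp
    print(f"{name:10s} expectation {exp:8.1f}  difference {diff:+7.1f}")
```

In an analytical-procedures setting, a difference large relative to the expectation's error variance flags the account for investigation; the trend-aware ARIMA expectation isolates the seeded misstatement more cleanly than the martingale's last-value expectation.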

