Comparison of Different Response Time Outlier Exclusion Methods: A Simulation Study

2021 ◽  
Vol 12 ◽  
Author(s):  
Alexander Berger ◽  
Markus Kiefer

In response time (RT) research, RT outliers are typically excluded from statistical analysis to improve the signal-to-noise ratio. Nevertheless, several methods for outlier exclusion exist, which raises the question of how these methods differ with respect to recovering the uncontaminated RT distribution. In the present simulation study, two RT distributions with a given population difference were simulated in each iteration. RTs were replaced by outliers following two different approaches: the first generated outliers at the tails of the distribution; the second inserted outliers overlapping with the genuine RT distribution. We applied ten different outlier exclusion methods and tested how many pairs of distributions differed significantly. Outlier exclusion methods were compared in terms of bias, defined as the deviation of the proportion of significant differences after outlier exclusion from the proportion of significant differences in the uncontaminated samples (before outliers were introduced). Our results showed large differences in bias between the exclusion methods. Some methods showed a high rate of Type-I errors and should therefore clearly not be used. Overall, applying an exclusion method based on z-scores / standard deviations introduced only small biases, while the absence of outlier exclusion showed the largest absolute bias.
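As an illustration of the z-score / standard-deviation family of methods the study favors, here is a minimal sketch (hypothetical, not the authors' code) that excludes RTs lying more than k standard deviations from the sample mean:

```python
def exclude_outliers_z(rts, k=2.0):
    """Return the RTs whose absolute z-score is at most k.

    Note: mean and SD are computed on the contaminated sample, so extreme
    outliers inflate the SD and can mask each other; k is a free choice.
    """
    n = len(rts)
    mean = sum(rts) / n
    sd = (sum((x - mean) ** 2 for x in rts) / (n - 1)) ** 0.5
    return [x for x in rts if abs(x - mean) / sd <= k]

rts = [420, 455, 470, 480, 510, 530, 2200]  # one slow outlier (ms)
cleaned = exclude_outliers_z(rts)            # drops the 2200 ms response
```

The cutoff k (commonly 2, 2.5, or 3) is one of the choices that differentiates the exclusion methods compared in such studies.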

2017 ◽  
Author(s):  
Jesse E D Miller ◽  
Anthony Ives ◽  
Ellen Damschen

1. Plant functional traits are increasingly being used to infer mechanisms of community assembly and to predict global change impacts. Of the several approaches used to analyze trait-environment relationships, one of the most popular is community-weighted means (CWM), in which species trait values are averaged at the site level. Other approaches that do not require averaging are being developed, including multilevel models (MLM, also called generalized linear mixed models). However, the relative strengths and weaknesses of these methods have not been extensively compared.

2. We investigated three statistical models for trait-environment associations: CWM, an MLM in which traits were not included as fixed effects (MLM1), and an MLM with traits as fixed effects (MLM2). We analyzed a real plant community dataset to investigate associations between two traits and one environmental variable. We then analyzed permutations of the dataset to investigate sources of type I errors, and performed a simulation study to compare the statistical power of the methods.

3. In the analysis of real data, CWM gave highly significant associations for both traits, while MLM1 and MLM2 did not. Using P-values derived by simulating data from the fitted MLM2, none of the models gave significant associations, showing that CWM had inflated type I errors (false positives). In the permutation tests, MLM2 performed the best of the three approaches. MLM2 still had inflated type I error rates in some situations, but this could be corrected using bootstrapping. The simulation study showed that MLM2 always had power as good as or better than CWM. These simulations also confirmed the causes of the type I errors seen in the permutation study.

4. The MLM that includes main effects of traits (MLM2) is the best method for identifying trait-environment associations in community assembly, with better type I error control and greater power. Analyses that regress CWMs on continuous environmental variables are not reliable because they are likely to produce type I errors.
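The CWM approach the authors critique is simple to state: for each site, average the species' trait values weighted by their abundances, then regress those means on the environment. A minimal sketch with hypothetical data and names:

```python
def community_weighted_mean(abundances, traits):
    """CWM for one site: abundance-weighted mean of species trait values.

    abundances: dict mapping species -> abundance at this site
    traits:     dict mapping species -> trait value
    """
    total = sum(abundances.values())
    return sum(a * traits[sp] for sp, a in abundances.items()) / total

site = {"sp1": 6, "sp2": 3, "sp3": 1}           # abundances at one site
sla = {"sp1": 10.0, "sp2": 20.0, "sp3": 30.0}   # e.g. specific leaf area
cwm = community_weighted_mean(site, sla)         # (60 + 60 + 30) / 10 = 15.0
```

Because the averaging discards within-site variation, a regression of such CWMs on an environmental variable treats each site as a single exact observation, which is the source of the inflated type I errors the paper documents.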


2015 ◽  
Vol 1 (2) ◽  
pp. 115
Author(s):  
Samih Antoine Azar ◽  
Marybel Nasr

This study examines the ability of financial ratios to predict the financial state of small and medium entities (SMEs) in Lebanon. This financial state can be either one of well-performing loans or one of non-performing loans. An empirical study is conducted using a data analysis of the financial statements of 222 SMEs in Lebanon for the years 2011 and 2012, of which 187 currently have well-performing loans and 35 currently have non-performing loans. Altman Z-scores are calculated, independent-samples t-tests are performed, and models are developed using binary logistic regression. Empirical evidence shows that the Altman Z-scores predict the solvent state of SMEs with well-performing loans well, but are unable to accurately predict the bankruptcy state of SMEs with non-performing loans. The independent-samples t-tests revealed that five financial ratios differ statistically significantly between SMEs with well-performing loans and those with non-performing loans. Finally, a logistic regression model is developed for each year under study, with limited success. In all cases, accuracy results are reported showing the percentage of companies correctly classified as solvent or bankrupt, in addition to the two standard measures of error: Type I errors and Type II errors. Although high accuracy is achieved in correctly classifying non-distressed and distressed firms, the Type I errors are in general relatively large. By contrast, the Type II errors are in general relatively low.
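For context, the Altman Z-score is a weighted sum of five financial ratios. A sketch using the original Altman (1968) coefficients for public manufacturing firms (the paper may apply a variant better suited to private SMEs; the input values here are illustrative):

```python
def altman_z(wc_ta, re_ta, ebit_ta, eq_tl, sales_ta):
    """Original Altman (1968) Z-score.

    wc_ta:    working capital / total assets
    re_ta:    retained earnings / total assets
    ebit_ta:  EBIT / total assets
    eq_tl:    market value of equity / total liabilities
    sales_ta: sales / total assets
    """
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * eq_tl + 1.0 * sales_ta

z = altman_z(0.15, 0.20, 0.10, 1.1, 1.3)  # = 2.75
# Classic cutoffs: z > 2.99 safe, z < 1.81 distressed, in between inconclusive.
```

A firm misclassified as solvent when it later defaults is the Type I error the abstract reports as relatively frequent.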


2000 ◽  
Vol 14 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Joni Kettunen ◽  
Niklas Ravaja ◽  
Liisa Keltikangas-Järvinen

Abstract. We examined the use of smoothing to enhance the detection of response coupling from the activity of different response systems. Three different types of moving average smoothers were applied to both simulated interbeat interval (IBI) and electrodermal activity (EDA) time series and to empirical IBI, EDA, and facial electromyography time series. The results indicated that progressive smoothing increased the efficiency of the detection of response coupling but did not increase the probability of Type I error. The power of the smoothing methods depended on the response characteristics. The benefits and use of the smoothing methods to extract information from psychophysiological time series are discussed.
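A minimal sketch of the simplest smoother in this family, an unweighted centered moving average with a truncated window at the series ends (illustrative, not the authors' implementation):

```python
def moving_average(series, window=3):
    """Centered moving average; the window shrinks at the series boundaries."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        seg = series[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

ibi = [800, 820, 780, 900, 760]      # toy interbeat intervals (ms)
smoothed = moving_average(ibi)        # e.g. smoothed[1] == 800.0
```

Wider windows (or repeated passes) give progressively stronger smoothing.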


Methodology ◽  
2015 ◽  
Vol 11 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Jochen Ranger ◽  
Jörg-Tobias Kuhn

In this manuscript, a new approach to the analysis of person fit is presented that is based on the information matrix test of White (1982). This test can be interpreted as a test of trait stability during the measurement situation. The test approximately follows a χ²-distribution. In small samples, the approximation can be improved by a higher-order expansion. The performance of the test is explored in a simulation study, which suggests that the test adheres well to the nominal Type-I error rate, although it tends to be conservative in very short scales. The power of the test is compared to the power of four alternative tests of person fit. This comparison corroborates that the power of the information matrix test is similar to that of the alternative tests. Advantages and areas of application of the information matrix test are discussed.
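The information matrix test rests on White's (1982) equality: under a correctly specified model, the expected outer product of the score equals the negative expected Hessian of the log-likelihood. A hedged, generic illustration for a Poisson model (not the authors' person-fit statistic, which is built on an item response model):

```python
def im_discrepancy(xs):
    """Sample version of E[score^2] + E[Hessian] for a Poisson model at the MLE.

    For Poisson, score = (x - lam) / lam and Hessian = -x / lam**2, so the
    discrepancy reduces to ((biased) sample variance - mean) / mean**2:
    near zero when the data are Poisson-like, nonzero under misspecification.
    """
    n = len(xs)
    lam = sum(xs) / n                                    # Poisson MLE
    score_sq = sum(((x - lam) / lam) ** 2 for x in xs) / n
    neg_hessian = sum(x / lam ** 2 for x in xs) / n      # equals 1 / lam
    return score_sq - neg_hessian

under = im_discrepancy([3, 4, 3, 4, 3, 4, 3, 4])   # underdispersed -> negative
over = im_discrepancy([0, 0, 0, 14])               # overdispersed  -> positive
```

White's test statistic is a quadratic form in such discrepancies and approximately follows a χ²-distribution, which is the property the person-fit test inherits.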


Methodology ◽  
2015 ◽  
Vol 11 (3) ◽  
pp. 110-115 ◽  
Author(s):  
Rand R. Wilcox ◽  
Jinxia Ma

Abstract. The paper compares methods that allow both within-group and between-group heteroscedasticity when performing all pairwise comparisons of the least squares lines associated with J independent groups. The methods are based on a simple extension of results derived by Johansen (1980) and Welch (1938) in conjunction with the HC3 and HC4 estimators. The probability of one or more Type I errors is controlled using the improvement on the Bonferroni method derived by Hochberg (1988). Results are illustrated using data from the Well Elderly 2 study, which motivated this paper.
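Hochberg's (1988) step-up procedure is the multiplicity control used here. A sketch with hypothetical p-values: compare the largest p-value to α, the second largest to α/2, and so on; as soon as one passes, reject it together with every hypothesis whose p-value is smaller:

```python
def hochberg(pvals, alpha=0.05):
    """Hochberg step-up procedure: returns a rejection flag per p-value."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i], reverse=True)
    reject = [False] * len(pvals)
    for rank, i in enumerate(order):        # rank 0 is the largest p-value
        if pvals[i] <= alpha / (rank + 1):
            for j in order[rank:]:          # this and all smaller p-values
                reject[j] = True
            break
    return reject

flags = hochberg([0.010, 0.030, 0.040, 0.500])  # only 0.010 is rejected
```

With J groups there are J(J-1)/2 pairwise slope comparisons, which is why a step-up method sharper than plain Bonferroni is attractive.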


2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical type I error rates for the cut-offs 2/3 and 4/7 in Table 4, Table 5, and Table 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, and N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases, the binary representation of numeric values in SAS resulted in incorrect categorization due to a numeric representation error in the computed differences. We corrected the simulation by using the round function of SAS in the calculation process, with the same seeds as before. For Table 4, the value for the cut-off 2/3 changes from 0.180323 to 0.153494. For Table 5, the value for the cut-off 4/7 changes from 0.144729 to 0.139626, and the value for the cut-off 2/3 changes from 0.114885 to 0.101773. For Table 6, the value for the cut-off 4/7 changes from 0.125528 to 0.122144, and the value for the cut-off 2/3 changes from 0.099488 to 0.090828. The sentence on p. 141, “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).”, has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. All changes were smaller than 0.03 and do not affect the interpretation of the results or our recommendations.
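The underlying pitfall is generic to binary floating point, not specific to SAS. An analogous illustration in Python (the erratum's fix corresponds to rounding before comparing against the cut-off):

```python
diff = 0.1 + 0.2        # mathematically 0.3, stored as 0.30000000000000004
cutoff = 0.3

naive = diff <= cutoff                         # False: misclassified
safe = round(diff, 10) <= round(cutoff, 10)    # True: rounding first fixes it
```

Values that are mathematically equal to a cut-off can land on either side of it after finite-precision arithmetic, which is exactly the miscategorization the erratum describes.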


2020 ◽  
Vol 39 (3) ◽  
pp. 185-208
Author(s):  
Qiao Xu ◽  
Rachana Kalelkar

SUMMARY This paper examines whether inaccurate going-concern opinions negatively affect the audit office's reputation. Assuming that clients perceive the incidence of going-concern opinion errors as a systematic audit quality concern within the entire audit office, we expect these inaccuracies to impact the audit office market share and dismissal rate. We find that going-concern opinion inaccuracy is negatively associated with the audit office market share and is positively associated with the audit office dismissal rate. Furthermore, we find that the decline in market share and the increase in dismissal rate are primarily associated with Type I errors. Additional analyses reveal that the negative consequence of going-concern opinion inaccuracy is lower for Big 4 audit offices. Finally, we find that the decrease in the audit office market share is explained by the distressed clients' reactions to Type I errors and audit offices' lack of ability to attract new clients.


2016 ◽  
Vol 60 ◽  
pp. 170-184 ◽  
Author(s):  
Mehdi Zarkeshzadeh ◽  
Hadi Zare ◽  
Zainabolhoda Heshmati ◽  
Mehdi Teimouri

2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Awantha Jayasiri ◽  
Raymond G. Gosine ◽  
George K. I. Mann ◽  
Peter McGuire

This paper presents a simulation study of an autonomous underwater vehicle (AUV) navigation system operating in a GPS-denied environment. The AUV navigation method makes use of underwater transponder positioning and requires only one transponder. A multirate unscented Kalman filter is used to determine the AUV orientation and position by fusing high-rate sensor data and low-rate information. The paper also proposes a novel, efficient, and adaptive gradient-based algorithm for plume boundary tracking missions. The algorithm follows a centralized approach and includes path optimization features based on gradient information. The proposed algorithm is implemented in simulation on the AUV-based navigation system, and successful boundary tracking results are obtained.
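The abstract does not give the algorithm's details, but gradient-based boundary (isocontour) tracking can be sketched generically: move tangentially to the concentration gradient while correcting toward the target boundary concentration. Everything below (the plume field, gains, step size) is illustrative, not the paper's method:

```python
import math

def concentration(x, y):
    """Toy radial plume: peak at the origin, decaying outward."""
    return math.exp(-(x * x + y * y))

def gradient(x, y, h=1e-5):
    """Central-difference estimate of the concentration gradient."""
    gx = (concentration(x + h, y) - concentration(x - h, y)) / (2 * h)
    gy = (concentration(x, y + h) - concentration(x, y - h)) / (2 * h)
    return gx, gy

def track_boundary(x, y, c0, steps=200, dt=0.05, k=2.0):
    """Follow the contour concentration == c0 from a nearby start point."""
    for _ in range(steps):
        gx, gy = gradient(x, y)
        norm = math.hypot(gx, gy) or 1e-12
        tx, ty = -gy / norm, gx / norm      # unit tangent to the contour
        err = concentration(x, y) - c0      # +: inside the plume, -: outside
        x += dt * (tx - k * err * gx / norm)
        y += dt * (ty - k * err * gy / norm)
    return x, y

# Start slightly outside the c0 = e^-1 contour (radius 1) and circle it.
fx, fy = track_boundary(1.2, 0.0, math.exp(-1))
```

The paper's contribution layers path optimization and adaptivity on top of this kind of gradient following, with the unscented Kalman filter supplying the position estimates.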

