Dealing with Distributional Assumptions in Preregistered Research

2019 ◽  
Vol 3 ◽  
Author(s):  
Matt N Williams ◽  
Casper Albers

Virtually any inferential statistical analysis relies on distributional assumptions of some kind. The violation of distributional assumptions can result in consequences ranging from small changes to error rates through to substantially biased estimates and parameters fundamentally losing their intended interpretations. Conventionally, researchers have conducted assumption checks after collecting data, and then changed the primary analysis technique if violations of distributional assumptions are observed. An approach to dealing with distributional assumptions that requires decisions to be made contingent on observed data is problematic, however, in preregistered research, where researchers attempt to specify all important analysis decisions prior to collecting data. Limited methodological advice is currently available regarding how to deal with the prospect of distributional assumption violations in preregistered research. In this article, we examine several strategies that researchers could use in preregistrations to reduce the potential impact of distributional assumption violations. We suggest that pre-emptively selecting analysis methods that are as robust as possible to assumption violations, performing planned robustness analyses, and/or supplementing preregistered confirmatory analyses with exploratory checks of distributional assumptions may all be useful strategies. On the other hand, we suggest that prespecifying “decision trees” for selecting data analysis methods based on the distributional characteristics of the data may not be practical in most situations.
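To make the idea of a planned robustness analysis concrete, the sketch below pairs a preregistered primary test with a robust alternative that is run unconditionally alongside it; the specific tests, data, and seed are illustrative assumptions, not taken from the article.

```python
# A minimal sketch (not the authors' procedure) of a preregistered "planned
# robustness analysis": the primary test and a robust alternative are both
# specified in advance and both reported, rather than switching methods after
# inspecting the data. The data and group sizes here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2019)
group_a = rng.normal(loc=0.0, scale=1.0, size=40)   # placeholder data
group_b = rng.normal(loc=0.4, scale=1.5, size=40)   # placeholder data

# Primary (preregistered) analysis: Welch's t-test, chosen pre-emptively
# because it does not assume equal variances.
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)

# Planned robustness analysis: a rank-based test that relaxes the normality
# assumption; reported alongside the primary result regardless of what any
# assumption check would have shown.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Primary (Welch t): t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Robustness (Mann-Whitney U): U = {u_stat:.1f}, p = {u_p:.3f}")
```

Because both analyses are specified before data collection, no analysis decision is made contingent on the observed distributional characteristics of the data.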


2017 ◽  
Vol 9 (33) ◽  
pp. 4783-4789 ◽  
Author(s):  
Samuel Mabbott ◽  
Yun Xu ◽  
Royston Goodacre

The reproducibility of the SERS signal acquired from thin films developed in-house and commercially has been assessed using seven data analysis methods.


2010 ◽  
Vol 58 (2) ◽  
pp. e22-e23 ◽
Author(s):  
Karen A. Monsen ◽  
Karen S. Martin ◽  
Bonnie L Westra

2010 ◽  
Vol 19 (8) ◽  
pp. 996 ◽  
Author(s):  
Philip E. Higuera ◽  
Daniel G. Gavin ◽  
Patrick J. Bartlein ◽  
Douglas J. Hallett

Over the past several decades, high-resolution sediment–charcoal records have been increasingly used to reconstruct local fire history. Data analysis methods usually involve a decomposition that detrends a charcoal series and then applies a threshold value to isolate individual peaks, which are interpreted as fire episodes. Despite the proliferation of these studies, methods have evolved largely in the absence of a thorough statistical framework. We describe eight alternative decomposition models (four detrending methods used with two threshold-determination methods) and evaluate their sensitivity to a set of known parameters integrated into simulated charcoal records. Results indicate that the combination of a globally defined threshold with specific detrending methods can produce strongly biased results, depending on whether or not variance in a charcoal record is stationary through time. These biases are largely eliminated by using a locally defined threshold, which adapts to changes in variability throughout a charcoal record. Applying the alternative decomposition methods to three previously published charcoal records largely supports our conclusions from simulated records. We also present a minimum-count test for empirical records, which reduces the likelihood of false positives when charcoal counts are low. We conclude by discussing how to evaluate when peak detection methods are warranted with a given sediment–charcoal record.
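As an illustration of the decomposition described above, the sketch below detrends a simulated charcoal series with a moving median and flags samples that exceed a locally defined threshold, so the criterion adapts to changes in variability through the record. The window width, percentile cut-off, and simulated data are assumptions for illustration, not the published implementation.

```python
# A minimal sketch (not the published implementation) of charcoal-peak
# detection: detrend the series to estimate low-frequency background, then
# flag samples exceeding a locally defined threshold computed from nearby
# residuals so the criterion tracks changes in variance through the record.
import numpy as np

def detect_peaks(charcoal, window=21, percentile=95):
    """Return peak flags plus the fitted background and local threshold."""
    n = len(charcoal)
    half = window // 2
    background = np.empty(n)
    threshold = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        local = charcoal[lo:hi]
        background[i] = np.median(local)                    # detrending step
        residual = local - background[i]
        threshold[i] = np.percentile(residual, percentile)  # locally defined threshold
    peaks = (charcoal - background) > threshold
    return peaks, background, threshold

# Illustrative use with simulated counts (Poisson noise plus injected spikes
# standing in for fire episodes).
rng = np.random.default_rng(0)
series = rng.poisson(lam=5, size=300).astype(float)
series[[50, 140, 230]] += 30
peaks, background, threshold = detect_peaks(series)
print("Detected peak samples:", np.flatnonzero(peaks))
```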


2014 ◽  
Vol 439 (1) ◽  
pp. 2-27 ◽  
Author(s):  
Anja von der Linden ◽  
Mark T. Allen ◽  
Douglas E. Applegate ◽  
Patrick L. Kelly ◽  
Steven W. Allen ◽  
...  

2018 ◽  
Author(s):  
Anahid Ehtemami ◽  
Rollin Scott ◽  
Shonda Bernadin
