Peer Review #2 of "Comparing multiple comparisons: practical guidance for choosing the best multiple comparisons test (v0.1)"

PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e10387
Author(s):  
Stephen Midway ◽  
Matthew Robertson ◽  
Shane Flinn ◽  
Michael Kaller

Multiple comparisons tests (MCTs) are the statistical tests used to compare groups (treatments), often following a significant effect reported in one of many types of linear models. Owing to a variety of data and statistical considerations, several dozen MCTs have been developed over the decades, ranging from very similar to very different from one another. Many scientific disciplines use MCTs, including >40,000 reports of their use in ecological journals in the last 60 years. Despite the ubiquity and utility of MCTs, several issues remain with their correct use and reporting. In this study, we evaluated 17 different MCTs. We first reviewed the published literature for recommendations on their correct use. Second, we created a simulation that evaluated the performance of nine common MCTs. The tests examined in the simulation were those that often overlap in usage, meaning that the selection of a test based on fit to the data is not unique and that the simulations could inform the selection of one or more tests when a researcher has choices. Based on the literature review, the recommendations are: planned comparisons are overwhelmingly recommended over unplanned comparisons; for planned non-parametric comparisons the Mann-Whitney-Wilcoxon U test is recommended; Scheffé’s S test is recommended for any linear combination of (unplanned) means; Tukey’s HSD and the Bonferroni or Dunn-Sidak tests are recommended for pairwise comparisons of groups; and many other tests exist for particular types of data. All code and data used to generate this paper are available at: https://github.com/stevemidway/MultipleComparisons.
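The pairwise approach the abstract recommends can be sketched in a few lines. This is not the authors' simulation code (which lives in the linked repository); it is a minimal, hypothetical illustration of Bonferroni-corrected pairwise comparisons: every pair of groups is tested, and the family-wise alpha is divided by the number of comparisons. The group names and data are invented for the example.

```python
# Minimal sketch of Bonferroni-corrected pairwise comparisons.
# Groups and values are hypothetical illustration data, not from the paper.
from itertools import combinations
from scipy import stats

groups = {
    "A": [5.1, 4.9, 5.3, 5.0, 5.2],
    "B": [5.8, 6.1, 5.9, 6.0, 5.7],
    "C": [5.0, 5.2, 4.8, 5.1, 4.9],
}

pairs = list(combinations(groups, 2))
alpha = 0.05
# Bonferroni correction: divide the family-wise alpha by the number of tests.
adjusted_alpha = alpha / len(pairs)

for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{g1} vs {g2}: p = {p:.4f} ({verdict} at alpha = {adjusted_alpha:.4f})")
```

A Tukey HSD test (e.g. `statsmodels.stats.multicomp.pairwise_tukeyhsd`) would serve the same pairwise role with somewhat more power; Bonferroni is shown here because the correction itself is one transparent line.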


2020 ◽  
Author(s):  
David Nicol ◽  
Suzanne McCallum

This article takes the view that students generate internal feedback about their own work by comparing it against some external information. Based on this framing, it explores the inner feedback that students generate during peer review when they compare their work with the work of peers and with comments received from peers. The outputs of these comparisons were made explicit by having students write an account of what they learned from them. This allowed us to evaluate the extent to which students’ internal feedback would match the feedback a teacher might provide. Analysis revealed that inner feedback builds up over sequential comparisons and that this, and multiple simultaneous comparisons, resulted in students generating feedback that not only matched the feedback a teacher might provide but went beyond it in powerful and productive ways. The implications are that having students make the internal feedback they generate explicit not only helps them build their self-regulatory abilities but can also decrease teacher workload in providing comments.


Author(s):  
Susan Haack

Appraising the worth of others’ testimony is always complex; appraising the worth of expert testimony is even harder; appraising the worth of expert testimony in a legal context is harder yet. Legal efforts to assess the reliability of expert testimony—I’ll focus on evolving U.S. law governing the admissibility of such testimony—seem far from adequate, offering little effective practical guidance. My purpose in this paper is to think through what might be done to offer courts more real, operational help. The first step is to explain why the legal formulae that have evolved over the years may seem reassuring, but aren’t really of much practical use. The next is to suggest that we might do better not by amending evidentiary rules but by helping judges and attorneys understand what questions they should ask about expert evidence. I focus here on (i) epidemiological testimony, and (ii) the process of peer review.


Author(s):  
Debi A. LaPlante ◽  
Heather M. Gray ◽  
Pat M. Williams ◽  
Sarah E. Nelson

Abstract. Aims: To discuss and review the latest research related to gambling expansion. Method: We completed a literature review and empirical comparison of peer-reviewed findings related to gambling expansion and subsequent gambling-related changes among the population. Results: Although gambling expansion is associated with changes in gambling and gambling-related problems, empirical studies suggest that these effects are mixed and the available literature is limited. For example, the peer-reviewed literature suggests that most post-expansion gambling outcomes (i.e., 22 of 34 possible expansion outcomes; 64.7%) indicate no observable change or a decrease in gambling outcomes, and a minority (i.e., 12 of 34 possible expansion outcomes; 35.3%) indicate an increase in gambling outcomes. Conclusions: Empirical data related to gambling expansion suggest that its effects are more complex than frequently considered; however, evidence-based intervention might help prepare jurisdictions to deal with potential consequences. Jurisdictions can develop and evaluate responsible gambling programs to try to mitigate the impacts of expanded gambling.


1994 ◽  
Vol 92 (4) ◽  
pp. 535-542 ◽  
Author(s):  
Terence M. Murphy ◽  
Jessica M. Utts
