Null and Void? Errors in Meta-analysis on Perceptual Disfluency and Recommendations to Improve Meta-analytical Reproducibility

Author(s):  
Sophia C. Weissgerber ◽  
Matthias Brunmair ◽  
Ralf Rummer

In the 2018 meta-analysis in Educational Psychology Review entitled “Null effects of perceptual disfluency on learning outcomes in a text-based educational context” by Xie, Zhou, and Liu, we identify some errors and inconsistencies in both the methodological approach and the reported results regarding coding and effect sizes. While from a technical point of view the meta-analysis aligns with current meta-analytical guidelines (e.g., PRISMA) and conforms to general meta-analytical requirements (e.g., considering publication bias), it exemplifies certain insufficient practices in the creation and review of meta-analyses. We criticize the lack of transparency and the neglect of open-science practices in the generation and reporting of results, which complicate the evaluation of meta-analytical reproducibility, especially given the flexibility in subjective choices regarding the analytical approach and the flexibility in creating the database. Here we present a framework applicable to pre- and post-publication review for improving the Methods Reproducibility of meta-analyses. Based on considerations of the Transparency and Openness Promotion (TOP) guidelines (Nosek et al., Science 348: 1422–1425, 2015), the Reproducibility Enhancement Principles (REP; Stodden et al., Science 354: 1240–1241, 2016), and recommendations by Lakens et al. (BMC Psychology 4: Article 24, 2016), we outline Computational Reproducibility (Level 1), Computational Verification (Level 2), Analysis Reproducibility (Level 3), and Outcome Reproducibility (Level 4). Applying reproducibility checks to TRANSFER performance as the chosen outcome variable, we found Xie and colleagues' results to be (rather) robust. Yet, regarding RECALL performance and the moderator analysis, the identified problems raise doubts about the credibility of the reported results.
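To make the lowest level concrete: a Level 1 (Computational Reproducibility) check amounts to re-running the synthesis from the coded database and comparing the output against the published estimates. The following minimal Python sketch, with hypothetical effect sizes and variances standing in for a coded effect-size table, recomputes a DerSimonian–Laird random-effects pooled estimate of the kind such a check would verify.

```python
import numpy as np

def dersimonian_laird(g, v):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    g : study effect sizes (e.g., Hedges' g)
    v : their sampling variances
    Returns the pooled effect, its standard error, and tau^2.
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    g_fe = np.sum(w * g) / np.sum(w)             # fixed-effect pooled estimate
    q = np.sum(w * (g - g_fe) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    g_re = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return g_re, se, tau2

# Hypothetical coded values; a real check would read the shared database
# and compare the recomputed estimate against the published one.
g = [0.12, -0.05, 0.30, 0.08]
v = [0.02, 0.015, 0.04, 0.01]
pooled, se, tau2 = dersimonian_laird(g, v)
print(f"pooled g = {pooled:.3f}, "
      f"95% CI [{pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f}], tau2 = {tau2:.3f}")
```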

2018 ◽  
Author(s):  
Gerit Pfuhl ◽  
Jon Grahe

Recent years have seen a revolution in publishing and broad support for open-access publishing. There has been a slower acceptance of and transition to other open-science principles such as open data, open materials, and preregistration. To accelerate the transition and make open science the new standard, the Collaborative Replications and Education Project (CREP; http://osf.io/wfc6u/) was launched in 2013, hosted on the Open Science Framework (osf.io). The OSF project functions like a preprint, collecting partial data within each individual contributor's project. CREP introduces open science at the start of academic research, facilitating student research training in open science and solidifying behavioral science results. The CREP team attempts to achieve this by inviting contributors to replicate one of several studies selected for scientific impact and for suitability for undergraduates to complete during one academic term. Contributors follow clear protocols, with students interacting with a CREP team that reviews the materials and video of the procedure to ensure quality data collection while students learn scientific practices and methods. By combining multiple replications from undergraduates across the globe, the findings can be pooled in a meta-analysis and thus contribute to generalizable and replicable research findings. CREP is careful not to interpret any single result. CREP has recently joined forces with the Psychological Science Accelerator (PsySciAcc), a globally distributed network of psychological laboratories accelerating the accumulation of reliable and generalizable results in the behavioral sciences. The Department of Psychology at UiT is part of the network and has two ongoing CREP studies, maintaining open-science practices early on. In this talk, we will present our experiences of conducting transparent, replicable research, and our experience with preprints from a supervisor and researcher perspective.


2018 ◽  
Author(s):  
Robert Calin-Jageman ◽  
Geoff Cumming

Now published in The American Statistician: https://amstat.tandfonline.com/doi/full/10.1080/00031305.2018.1518266. The "New Statistics" emphasizes effect sizes, confidence intervals, meta-analysis, and the use of open science practices. We present three specific ways in which a New Statistics approach can help improve scientific practice: by reducing over-confidence in small samples, by reducing confirmation bias, and by fostering more cautious judgements of consistency.
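As an illustration of the estimation-focused reporting advocated here (a sketch with simulated data, not an example from the paper), the following Python snippet computes a standardized mean difference with a confidence interval, using the common large-sample variance approximation for Cohen's d:

```python
import numpy as np

def cohens_d_ci(x1, x2):
    """Cohen's d with an approximate 95% confidence interval.

    Uses the common large-sample variance approximation
    v = (n1 + n2)/(n1 * n2) + d^2 / (2 * (n1 + n2)).
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    sp = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1))
                 / (n1 + n2 - 2))                # pooled standard deviation
    d = (x1.mean() - x2.mean()) / sp             # standardized mean difference
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    z = 1.96                                     # normal quantile, 95% interval
    return d, (d - z * np.sqrt(v), d + z * np.sqrt(v))

rng = np.random.default_rng(1)
treatment = rng.normal(0.4, 1.0, 40)   # hypothetical group scores
control = rng.normal(0.0, 1.0, 40)
d, (lo, hi) = cohens_d_ci(treatment, control)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval rather than a bare p value makes the imprecision of a small sample visible at a glance, which is precisely the over-confidence the approach targets.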


2021 ◽  
Vol 5 ◽  
Author(s):  
Tobias Dienlin ◽  
Ye Sun

In their meta-analysis on how privacy concerns and perceived privacy risk are related to online disclosure intention and behavior, Yu et al. (2020) conclude that “the ‘privacy paradox’ phenomenon (...) exists in our research model” (p. 8). In this comment, we contest this conclusion and present evidence and arguments against it. We find five areas of problems: (1) Flawed logic of hypothesis testing; (2) erroneous and implausible results; (3) questionable decision to use only the direct effect of privacy concerns on disclosure behavior as evidence in testing the privacy paradox; (4) overinterpreting results from MASEM; (5) insufficient reporting and lack of transparency. To guide future research, we offer three recommendations: Going beyond mere null hypothesis significance testing, probing alternative theoretical models, and implementing open science practices. While we value this meta-analytic effort, we caution its readers that, contrary to the authors’ claim, it does not offer evidence in support of the privacy paradox.
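The arithmetic behind problem (3) is worth spelling out: in a mediation structure such as concerns -> perceived risk -> disclosure, the total effect equals the direct path plus the indirect path, so a near-zero direct effect alone cannot establish a "paradox". A minimal Python sketch with hypothetical path coefficients (not values from Yu et al., 2020):

```python
# Hypothetical standardized path coefficients for a simple mediation model:
# privacy concerns -> perceived risk -> disclosure, plus a direct path.
a = 0.50         # concerns -> perceived risk
b = -0.30        # perceived risk -> disclosure
c_prime = -0.05  # direct effect of concerns on disclosure

indirect = a * b             # -0.15, carried through perceived risk
total = c_prime + indirect   # -0.20, the overall concerns-disclosure effect
print(f"direct = {c_prime:+.2f}, indirect = {indirect:+.2f}, total = {total:+.2f}")
```

Under these illustrative numbers, a trivially small direct effect coexists with a substantial total effect, so inspecting only the direct path would misstate the concerns-disclosure association.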


2020 ◽  
Author(s):  
Tobias Dienlin ◽  
Ye Sun

In their meta-analysis on how privacy concerns and perceived privacy risks are related to online disclosure intention and behavior, Yu et al. (2020) conclude that “the ‘privacy paradox’ phenomenon [...] exists in our research model” (p. 8). In this comment, we contest this conclusion and present evidence and arguments against it. We find three areas of problems: (1) flawed logic of hypothesis testing; (2) erroneous and implausible results; (3) questionable decision to use only the direct effect of privacy concerns on disclosure behavior as evidence in testing the privacy paradox. In light of these issues and to help guide future research, we propose a research agenda for the privacy paradox. We encourage researchers to (1) go beyond null hypothesis significance testing (NHST), (2) engage in open science practices, (3) refine theoretical explications, (4) consider confounding, mediating, and boundary variables, and (5) improve the rigor of causal inference. Overall, while we value this meta-analytic effort by Yu et al., we caution its readers that, contrary to the authors’ claim, it does not offer evidence in support of the privacy paradox.


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262185
Author(s):  
Noora Taipale ◽  
Laurent Chiotti ◽  
Veerle Rots

Projectile technology is commonly viewed as a significant contributor to past human subsistence and, consequently, to our evolution. Due to the allegedly central role of projectile weapons in the food-getting strategies of Upper Palaeolithic people, typo-technological changes in the European lithic record have often been linked to supposed developments in hunting weaponry. Yet, relatively little reliable functional data is currently available that would aid the detailed reconstruction of past weapon designs. In this paper, we take a use-wear approach to the backed tool assemblages from the Recent and Final Gravettian layers (Levels 3 and 2) of Abri Pataud (Dordogne, France). Our use of strict projectile identification criteria relying on combinations of low and high magnification features and our critical view of the overlap between production and use-related fractures permitted us to confidently identify a large number of used armatures in both collections. By isolating lithic projectiles with the strongest evidence of impact and by recording wear attributes on them in detail, we could establish that the hunting equipment used during the Level 3 occupations involved both lithic weapon tips and composite points armed with lithic inserts. By contrast, the Level 2 assemblage reflects a heavy reliance on composite points in hunting reindeer and other game. Instead of an entirely new weapon design, the Level 2 collection therefore marks a shift in weapon preferences. Using recent faunal data, we discuss the significance of the observed diachronic change from the point of view of prey choice, seasonality, and social organisation of hunting activities. Our analysis shows that to understand their behavioural significance, typo-technological changes in the lithic record must be viewed in the light of functional data and detailed contextual information.


2020 ◽  
Author(s):  
Timon Elmer

The analyses of Quoidbach et al. (2019) indicate that unhappy individuals are more likely to subsequently interact with others. From a theoretical point of view, this finding is contrary to most existing psychological studies on the matter. Motivated by these theoretically surprising findings, this commentary reports re-analyses of the openly available data of Quoidbach et al.'s (2019) study. These re-analyses indicate that a statistically problematic control variable is responsible for the counterintuitive finding. Models reporting raw associations and models including alternative control variables suggest that unhappy individuals are less likely to subsequently interact. To support the transparency and trustworthiness of psychological science, I encourage further open science practices and suggest that raw data and stepwise model results be reported more frequently.
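The general statistical point can be illustrated with a simulation (a generic Python sketch of collider bias, not a re-analysis of the actual experience-sampling data): conditioning on a variable that is itself affected by both predictor and outcome can reverse the sign of an association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
happiness = rng.normal(size=n)
# True data-generating model: happier people interact MORE,
# i.e. unhappy individuals are less likely to interact.
interaction = 0.3 * happiness + rng.normal(size=n)
# "Bad control": a collider influenced by both happiness and interaction.
collider = happiness + interaction + rng.normal(size=n)

def ols_slopes(y, *predictors):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(n), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

raw = ols_slopes(interaction, happiness)[1]
adjusted = ols_slopes(interaction, happiness, collider)[1]
print(f"raw slope: {raw:+.2f}")            # ~ +0.30, the true association
print(f"with collider: {adjusted:+.2f}")   # ~ -0.35, sign reversed
```

The raw model recovers the true positive happiness-interaction link; adding the collider flips the sign, which is the kind of distortion an ill-chosen control variable can introduce.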


2021 ◽  
pp. 174569162098447
Author(s):  
Robert Körner ◽  
Lukas Röseler ◽  
Astrid Schütz

We offer a critical perspective on the meta-analysis by Elkjær et al. (2020) by pointing out three constraints: The first refers to open-science practices, the second addresses the selection of studies, and the third offers a broader theoretical perspective. We argue that preregistration and adherence to the highest standards of conducting meta-analyses are important. Further, we identified several missing studies. Regarding the theoretical perspective, we suggest that it may be useful to tie body positions into the dominance-prestige framework and, on that basis, to distinguish two types of body positions. Such an approach has the potential to account for discrepancies in previous meta-analytical evidence regarding the effects of expansive versus contractive nonverbal displays. Future research may thus be able to provide not only methodological but also theoretical innovations to the field of body positions.


2021 ◽  
Vol 12 ◽  
pp. 7-36
Author(s):  
Piotr Bylica

Presently, naturalistic theism is the dominant position in the debate on the relation between science and religion; it defends the thesis that the conflict between science and religion is only apparent. This version of theism also accepts the naturalist assumptions behind contemporary science and attempts to reformulate the beliefs held within traditional Christian theism in order to present the religious view of reality as not conflicting with the scientific picture of the world. Certain assumptions behind Mark Harris's views on the relations between science and religion can be described as consistent with naturalistic theism. The model of levels of analysis (MLA) helps to analyze the most important themes found within naturalistic theism and to show how these are described in the works of Harris. The model facilitates the identification of the relations between the particular kinds of assumptions behind the position taken, from the point of view of naturalistic theism, in the debate on the relation between science and religion. The most frequently recurring assumptions, which are also important in Harris's writings, include: the general division of epistemic competence, which takes theology (religion) to be competent in dealing with metaphysical issues (Levels 1 and 2) and science to be the only discipline competent to deliver empirical statements describing the processes and entities found within the empirical sphere (Levels 4 and 5); the acceptance of the naturalistic assumptions behind contemporary science (Level 2); and skepticism toward the religious notions of traditional Christian theism that describe supernatural interventions, as well as toward the dualist interpretation of the human soul (Level 3). This leads to the acceptance of purely scientific, naturalistic explanations of events found within the empirical sphere and to skepticism toward the literal meaning of descriptions of empirical events (Level 5) that are not consistent with the anti-interventionist assumptions behind science. Harris's acceptance of naturalistic theism in terms of the relation between science and religion and his use of the techniques of modern biblical scholarship have led him to the ideas of a plurality of meanings and the lack of one definite truth with respect to the specific issues he deals with. From the point of view of the MLA, it is the rejection of the supernaturalistic assumptions of traditional Christian theism and the acceptance of the naturalistic assumptions of science that seem to be the cause of this lack of definite truth in his theological explanations.


2020 ◽  
Author(s):  
David Moreau ◽  
Beau Gamble

Psychology researchers are rapidly adopting open science practices, yet clear guidelines on how to apply these practices to meta-analysis remain lacking. In this tutorial, we describe why open science is important in the context of meta-analysis in psychology, and suggest how to adopt the three main components of open science: preregistration, open materials, and open data. We first describe how to make the preregistration as thorough as possible—and how to handle deviations from the plan. We then focus on creating easy-to-read materials (e.g., search syntax, R scripts) to facilitate reproducibility and bolster the impact of a meta-analysis. Finally, we suggest how to organize data (e.g., literature search results, data extracted from studies) that are easy to share, interpret, and update as new studies emerge. For each step of the meta-analysis, we provide example templates, accompanied by brief video tutorials, and show how to integrate these practices into the Open Science Framework (https://osf.io/q8stz/).
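One way to realize the open-data component (a sketch; the file layout and column names are hypothetical, not the templates from the tutorial) is a tidy, machine-readable extraction table with one row per effect size, paired with a script that recomputes the synthesis whenever a new study row is appended:

```python
import csv
import io
import math

# Hypothetical extraction table: one row per effect size. In practice this
# would live as studies.csv alongside the analysis script in the OSF project.
studies_csv = """study_id,year,n,g,var_g
smith2017,2017,120,0.21,0.034
li2019,2019,88,0.05,0.046
garcia2021,2021,150,-0.10,0.027
"""

rows = list(csv.DictReader(io.StringIO(studies_csv)))
assert all(r["g"] and r["var_g"] for r in rows), "missing effect-size fields"

# Inverse-variance (fixed-effect) pooled estimate, recomputed from the table.
w = [1.0 / float(r["var_g"]) for r in rows]
g = [float(r["g"]) for r in rows]
pooled = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
se = math.sqrt(1.0 / sum(w))
print(f"k = {len(rows)} studies, fixed-effect g = {pooled:.3f} (SE {se:.3f})")
```

Because the analysis reads only from the table, updating the meta-analysis as new studies emerge reduces to appending rows and re-running the script.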

