Excess success in “Don’t count calorie labeling out: Calorie counts on the left side of menu items lead to lower calorie food choices”

2019
Author(s): Gregory Francis, Evelina Thunell

Based on findings from six experiments, Dallas, Liu, and Ubel (2019) concluded that placing calorie labels to the left of menu items influences consumers to choose lower-calorie food options. Contrary to previously reported findings, they suggested that calorie labels do influence food choices, but only when placed to the left, because they are then read first. If true, these findings have important implications for the design of menus and may help address the obesity pandemic. However, an analysis of the reported results indicates that they seem too good to be true. We show that if the effect sizes in Dallas et al. (2019) are representative of the populations, a replication of the six studies (with the same sample sizes) has a probability of only 0.014 of producing uniformly significant outcomes. Such a low success rate suggests that the original findings might be the result of questionable research practices or publication bias. We therefore caution readers and policy makers to be skeptical about the results and conclusions reported by Dallas et al. (2019).
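The "excess success" argument rests on simple probability arithmetic: if each experiment's chance of reaching significance (its power) can be estimated from the reported effect size, then under independence the chance that all six succeed in a replication is the product of those powers. A minimal sketch in Python, with hypothetical power values rather than the paper's actual estimates:

```python
# Excess-success arithmetic: the probability that every study in a set
# replicates significantly is the product of the per-study powers.
# These power values are hypothetical, not taken from the paper.
from math import prod

powers = [0.52, 0.61, 0.48, 0.55, 0.50, 0.58]  # hypothetical per-study power

p_all_significant = prod(powers)
print(f"P(all {len(powers)} studies significant) = {p_all_significant:.3f}")
```

With six studies of middling power, the joint probability quickly drops to a few percent, which is the intuition behind the reported 0.014.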


2019
Vol 5 (1)
Author(s): Gregory Francis, Evelina Thunell

Dong, Huang, and Zhong (2015) report five successful experiments linking brightness perception with the feeling of hopelessness. They argue that a gloomy future is psychologically represented as darkness, not just metaphorically but as an actual perceptual bias. Based on multiple results, they conclude that people who feel hopeless perceive their environment as darker and therefore prefer brighter lighting than controls. Conversely, dim lighting caused participants to feel more hopeless. However, the experiments succeed at a rate much higher than predicted by the magnitude of the reported effects. Based on the reported statistics, the estimated probability of all five experiments being fully successful, if replicated with the same sample sizes, is less than 0.016. This low rate suggests that the original findings are (perhaps unintentionally) the result of questionable research practices or publication bias. Readers should therefore be skeptical about the original results and conclusions. Finally, we discuss how to design future studies to investigate the relationship between hopelessness and brightness.
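The same test again depends on estimating each experiment's power from its reported statistics. A sketch of one such per-experiment estimate, using statsmodels and invented numbers (not Dong et al.'s actual values):

```python
# Sketch: estimate one experiment's replication probability (power) from a
# reported standardized effect size and per-group sample size. The numbers
# here are hypothetical, for illustration only.
from statsmodels.stats.power import TTestIndPower

d = 0.45  # hypothetical Cohen's d for a two-group comparison
n = 40    # hypothetical participants per group

power = TTestIndPower().power(effect_size=d, nobs1=n, alpha=0.05)
print(f"Estimated power for this experiment: {power:.2f}")
# Multiplying such estimates across all five experiments yields the
# probability that a full replication is uniformly successful.
```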


2019
Vol 6 (12)
pp. 190738
Author(s): Jerome Olsen, Johanna Mosen, Martin Voracek, Erich Kirchler

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
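The consistency check the authors ran with Statcheck amounts to recomputing each p-value from its reported test statistic and degrees of freedom and comparing it with the reported p. A simplified sketch with an invented entry:

```python
# Simplified statcheck-style check: recompute p from a reported t and df,
# then flag a mismatch with the reported (rounded) p-value.
# The reported values below are invented for illustration.
from scipy import stats

reported = {"t": 2.10, "df": 48, "p": 0.04}  # hypothetical thesis result

recomputed_p = 2 * stats.t.sf(abs(reported["t"]), reported["df"])  # two-sided
inconsistent = round(recomputed_p, 2) != round(reported["p"], 2)
print(f"recomputed p = {recomputed_p:.3f}; inconsistent: {inconsistent}")
```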


2021
pp. 56-90
Author(s): R. Barker Bausell

The linchpin of both publication bias and irreproducibility involves an exhaustive list of more than a score of individually avoidable questionable research practices (QRPs) supplemented by 10 inane institutional research practices. While these untoward effects on the production of false-positive results are unsettling, a far more entertaining (in a masochistic sort of way) pair of now-famous iconoclastic experiments conducted by Simmons, Nelson, and Simonsohn is presented in which, with the help of only a few well-chosen QRPs, research participants could actually become younger after simply listening to a Beatles song. In addition, surveys designed to estimate the prevalence of these and other QRPs in the published literatures are also described.


2019
Vol 2 (2)
pp. 115-144
Author(s): Evan C. Carter, Felix D. Schönbrodt, Will M. Gervais, Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.
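The core problem these simulations probe is easy to reproduce in miniature: when only significant results are published, a naive average of published effects overshoots the truth. A toy illustration (parameters invented, not the authors' simulation design):

```python
# Toy demonstration of publication bias inflating meta-analytic estimates:
# simulate many small two-group studies, "publish" only the significant
# positive ones, and compare the published mean effect with the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, k = 0.2, 30, 2000  # true effect (in SD units), per-group n, studies

published = []
for _ in range(k):
    treat = rng.normal(true_d, 1, n)  # population SD = 1, so means are in d units
    ctrl = rng.normal(0.0, 1, n)
    t, p = stats.ttest_ind(treat, ctrl)
    if p < 0.05 and t > 0:            # publication filter
        published.append(treat.mean() - ctrl.mean())

print(f"true d = {true_d}; mean published d = {np.mean(published):.2f}")
# Typically prints a published mean of roughly 0.5-0.6, nearly triple the truth.
```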


2021
pp. 074193252199645
Author(s): Bryan G. Cook, Daniel M. Maggin, Rachel E. Robertson

This article introduces a special series of registered reports in Remedial and Special Education. Registered reports are an innovative approach to publishing that aims to increase the credibility of research. Registered reports are provisionally accepted for publication before a study is conducted, based on the importance of the research questions and the rigor of the proposed methods. If provisionally accepted, the journal agrees to publish the study if researchers adhere to accepted plans and report the study appropriately, regardless of study findings. In this article, we describe how registered reports work, review their benefits (e.g., combatting questionable research practices and publication bias, allowing expert reviewers to provide constructive feedback before a study is conducted) and limitations (e.g., requiring additional time and effort, not being applicable to all studies), review the application of registered reports in education and special education, and make recommendations for implementing registered reports in special education.


Author(s): Holly L. Storkel, Frederick J. Gallun

Purpose: This editorial introduces the new registered reports article type for the Journal of Speech, Language, and Hearing Research. The goal of registered reports is to create a structural solution to address issues of publication bias toward results that are unexpected and sensational, questionable research practices that are used to produce novel results, and a peer-review process that occurs at the end of the research process when changes in fundamental design are difficult or impossible to implement. Conclusion: Registered reports can be a positive addition to scientific publications by addressing issues of publication bias, questionable research practices, and the late influence of peer review. This article type does so by requiring reviewers and authors to agree in advance that the experimental design is solid, the questions are interesting, and the results will be publishable regardless of the outcome. This procedure ensures that replication studies and null results make it into the published literature and that authors are not incentivized to alter their analyses based on the results that they obtain. Registered reports represent an ongoing commitment to research integrity and finding structural solutions to structural problems inherent in a research and publishing landscape in which publications are such a high-stakes aspect of individual and institutional success.


2021
pp. 39-55
Author(s): R. Barker Bausell

This chapter explores three empirical concepts (the p-value, the effect size, and statistical power) integral to the avoidance of false-positive scientific findings. Their relationship to reproducibility is explained in a nontechnical manner without formulas or statistical jargon, with p-values and statistical power presented in terms of probabilities from zero to 1.0, the values of most interest to scientists being 0.05 (synonymous with a positive, hence publishable, result) and 0.80 (the most commonly recommended probability that a positive result will be obtained if the hypothesis that generated it is correct and the study is properly designed and conducted). Unfortunately, many scientists circumvent both by artifactually inflating the 0.05 criterion, overstating the available statistical power, and engaging in a number of other questionable research practices. These issues are discussed via statistical models from the genetic and psychological fields and then extended to a number of different p-values, statistical power levels, effect sizes, and prevalences of "true" effects expected to exist in the research literature. Among the basic conclusions of these modeling efforts is that employing more stringent p-values and larger sample sizes constitutes the most effective statistical approach for increasing the reproducibility of published results in all empirically based scientific literatures. This chapter thus lays the necessary foundation for understanding and appreciating the effects of appropriate p-values, sufficient statistical power, realistic effect sizes, and the avoidance of questionable research practices upon the production of reproducible results.
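The chapter's modeling logic can be condensed into the familiar positive-predictive-value arithmetic: among "significant" results, the share that reflect true effects depends on the prevalence of true effects, statistical power, and the significance criterion. A sketch with illustrative numbers (power held fixed for simplicity):

```python
# Positive predictive value of a significant result, given the prevalence of
# true effects, power, and alpha. Numbers are illustrative; in practice,
# tightening alpha also changes power for a fixed sample size.
def ppv(prevalence: float, power: float, alpha: float) -> float:
    true_positives = prevalence * power
    false_positives = (1 - prevalence) * alpha
    return true_positives / (true_positives + false_positives)

for alpha in (0.05, 0.005):
    print(f"alpha = {alpha}: PPV = {ppv(prevalence=0.1, power=0.8, alpha=alpha):.2f}")
# alpha = 0.05 gives PPV ~ 0.64; alpha = 0.005 gives PPV ~ 0.95, illustrating
# why more stringent p-values aid reproducibility.
```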


2018
Author(s): Christopher Brydges

Objectives: Research has found evidence of publication bias, questionable research practices (QRPs), and low statistical power in published psychological journal articles. Isaacowitz's (2018) editorial in the Journals of Gerontology Series B, Psychological Sciences called for investigation of these issues in gerontological research. The current study presents meta-research findings based on published research to explore whether there is evidence of these practices in gerontological research. Method: 14,481 test statistics and p values were extracted from articles published in eight top gerontological psychology journals since 2000. Frequentist and Bayesian caliper tests were used to test for publication bias and QRPs (specifically, p-hacking and incorrect rounding of p values). A z-curve analysis was used to estimate average statistical power across studies. Results: Strong evidence of publication bias was observed, and average statistical power was approximately .70 – below the recommended .80 level. Evidence of p-hacking was mixed. Evidence of incorrect rounding of p values was inconclusive. Discussion: Gerontological research is not immune to publication bias, QRPs, and low statistical power. Researchers, journals, institutions, and funding bodies are encouraged to adopt open and transparent research practices and to use Registered Reports as an alternative article type to minimize publication bias and QRPs and to increase statistical power.
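A caliper test of the kind used here compares how many reported p-values fall just below the .05 threshold against how many fall just above it; absent selective reporting, two equally wide windows should be roughly balanced. A minimal sketch with invented counts:

```python
# Caliper test sketch: under no publication bias, p-values just below and
# just above .05 should be about equally common, so the count just below
# is binomial with p = 0.5. The counts here are invented for illustration.
from scipy import stats

just_below = 120  # hypothetical p-values in (.045, .050]
just_above = 60   # hypothetical p-values in (.050, .055]

result = stats.binomtest(just_below, n=just_below + just_above, p=0.5,
                         alternative="greater")
print(f"caliper test p = {result.pvalue:.4f}")  # small p suggests an excess just below .05
```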

