Questionable Research Practices and Determinants of Their Frequency

2021 ◽  
Author(s):  
Taym Alsalti

Concern has been mounting over the reproducibility of findings in psychology and other empirical sciences. Large-scale replication attempts have found worrying results. The high rate of false findings in the published literature has been partly attributed to scientists' engagement in questionable research practices (QRPs). I discuss reasons for and solutions to this problem. Employing a content analysis of empirical studies published in the years 2007 and 2017, I found a decrease in the prevalence of QRPs over the investigated decade. I subsequently discuss possible explanations for the improvement as well as further potential contributors to the high rate of false findings in science. Most scientists agree that a change towards more open and transparent scientific practice on the part of both scientists and publishers is necessary. Debate exists as to how this should be achieved.
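At its core, the reported decrease in QRP prevalence between 2007 and 2017 is a comparison of two proportions. The sketch below shows one conventional way to test such a difference; the counts are hypothetical placeholders for illustration only, not figures taken from the study.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts, for illustration only: articles coded as showing at
# least one QRP indicator vs. not, in each of the two sampled years.
table = np.array([[45, 55],    # 2007: with indicator, without
                  [30, 70]])   # 2017: with indicator, without
chi2, p, dof, expected = chi2_contingency(table)
prev_2007 = table[0, 0] / table[0].sum()
prev_2017 = table[1, 0] / table[1].sum()
print(f"prevalence 2007 = {prev_2007:.0%}, 2017 = {prev_2017:.0%}, "
      f"chi-square = {chi2:.2f}, p = {p:.3f}")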


Author(s):  
Dave Gelders ◽  
Hans Peeraer ◽  
Jelle Goossens

Purpose – The purpose of this paper is to gain insight into the content, format and evaluation of printed public communication from police officers and governments regarding home burglary prevention in Belgium.
Design/methodology/approach – Content and format are analyzed through a content analysis of 104 printed communication pieces in the Belgian province of Flemish-Brabant in 2005. The evaluation is analyzed through five focus group interviews with professionals and ordinary citizens.
Findings – Police zones differ significantly in their communication efforts. The media mix is not diverse, collaboration between police officers and government information officers is poor, and intermediaries (e.g. architects) are rarely used, culminating in poorly targeted communication.
Research limitations/implications – Only printed communication is analyzed; more large-scale empirical research is desirable.
Practical implications – A richer media mix, more targeted communication, more national communication support, and additional dialogue between, and training of, police officers and communication professionals are advisable.
Originality/value – The paper combines two empirical studies and methods (content analysis and focus group interviews), resulting in a series of recommendations for further inquiry and future action.


2020 ◽  
Author(s):  
Dwight Kravitz ◽  
Stephen Mitroff

Large-scale replication failures have shaken confidence in the social sciences, psychology in particular. Most researchers acknowledge the problem, yet there is widespread debate about the causes and solutions. Using “big data,” the current project demonstrates that unintended consequences of three common questionable research practices (retaining pilot data, adding data after checking for significance, and not publishing null findings) can explain the lion’s share of the replication failures. A massive dataset was randomized to create a true null effect between two conditions, and then these three practices were applied. They produced false discovery rates far greater than 5% (the generally accepted rate), and were strong enough to obscure, or even reverse, the direction of real effects. These demonstrations suggest that much of the replication crisis might be explained by simple, misguided experimental choices. This approach also produces empirically-based corrections to account for these practices when they are unavoidable, providing a viable path forward.
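The core logic of this demonstration (apply a questionable practice to data with no true effect and watch the false discovery rate climb) is easy to reproduce in miniature. The sketch below simulates only one of the three practices, adding data after checking for significance; the function name, sample sizes, and number of simulations are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def optional_stopping_trial(n_start=20, n_add=10, n_max=60, alpha=0.05):
    """One simulated experiment under a true null: start with n_start
    observations per condition and, whenever p >= alpha, add n_add more
    per condition and re-test, until p < alpha or n_max is reached."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            return True              # "significant" despite no real effect
        if len(a) >= n_max:
            return False
        a.extend(rng.normal(size=n_add))
        b.extend(rng.normal(size=n_add))

n_sims = 5000
fdr = sum(optional_stopping_trial() for _ in range(n_sims)) / n_sims
print(f"False discovery rate with optional stopping: {fdr:.3f}")

With these assumed settings, the empirical error rate comes out well above the nominal 5%, and looser stopping rules inflate it further.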


2019 ◽  
Vol 2 (2) ◽  
pp. 115-144 ◽  
Author(s):  
Evan C. Carter ◽  
Felix D. Schönbrodt ◽  
Will M. Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/ .
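The generative side of such a simulation is straightforward to sketch. The snippet below is a minimal illustration with assumed parameters rather than the authors' design: it simulates primary studies of a small true effect, suppresses most non-significant results, and shows how a naive inverse-variance (fixed-effect) meta-analysis overestimates the effect. The correction methods compared in the article would then be applied to data generated like these.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_d, n_studies, n_per_group = 0.2, 200, 30

published_d, published_var = [], []
for _ in range(n_studies):
    a = rng.normal(true_d, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    var_d = 2 / n_per_group + d**2 / (4 * n_per_group)   # approx. sampling variance of d
    p = stats.ttest_ind(a, b).pvalue
    # crude publication bias: significant results are always published,
    # non-significant results only 10% of the time
    if p < 0.05 or rng.random() < 0.10:
        published_d.append(d)
        published_var.append(var_d)

w = 1 / np.array(published_var)
naive = np.sum(w * np.array(published_d)) / np.sum(w)     # fixed-effect estimate
print(f"true d = {true_d}, naive meta-analytic d = {naive:.2f}")   # biased upward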


2021 ◽  
Author(s):  
Phil McAleer ◽  
Helena Paterson

An emphasis on research-led teaching across educational settings has meant that more and more implementations for improving student learning are based on published research. However, the validity and reliability of those implementations are only as strong as the research they are based on. The recent replication crisis, witnessed across various fields of science, including pedagogical research, has called the published research record into question. Along with addressing issues in research practices, changes to publication practices are seen as an important step in ensuring that the evidence-based teaching practices we adopt in our classrooms are fit for purpose. Here we highlight a number of issues within the pedagogical literature that need to be addressed to ensure credible science as a basis for educational interventions, including a lack of replication studies, a positive publication bias, and common questionable research practices. We propose the adoption of Registered Reports as a means of counteracting these issues. By ensuring that the literature we base policy and practice upon is published on the basis of its scientific rigor, and not merely its outcomes, we believe the field will have a stronger basis on which to decide which implementations and approaches to adopt in our classrooms.


2018 ◽  
Author(s):  
Hannah Fraser ◽  
Timothy H. Parker ◽  
Shinichi Nakagawa ◽  
Ashley Barnett ◽  
Fiona Fidler

We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable Research Practices (QRPs), including cherry picking statistically significant results, p hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues that use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p hacking) and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large scale replication studies in psychology and other disciplines. The rates of QRPs found in this study are comparable with the rates seen in psychology, indicating that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.
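For survey estimates like these, it can help to attach a confidence interval to the headline percentages. The sketch below computes a Wilson interval for the cherry-picking rate, using a count back-calculated from the reported 64% of 807 respondents, so the exact count is an approximation rather than a figure from the paper.

from statsmodels.stats.proportion import proportion_confint

n_respondents = 807
n_cherry_pick = round(0.64 * n_respondents)      # approx. count implied by the reported 64%
low, high = proportion_confint(n_cherry_pick, n_respondents, alpha=0.05, method="wilson")
print(f"cherry picking: {n_cherry_pick}/{n_respondents} "
      f"= {n_cherry_pick / n_respondents:.1%} (95% CI {low:.1%} to {high:.1%})")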


2018 ◽  
Author(s):  
Christopher Brydges

Objectives: Research has found evidence of publication bias, questionable research practices (QRPs), and low statistical power in published psychological journal articles. Isaacowitz's (2018) editorial in the Journals of Gerontology Series B, Psychological Sciences called for investigation of these issues in gerontological research. The current study presents meta-research findings exploring whether there is evidence of these practices in gerontological research. Method: 14,481 test statistics and p values were extracted from articles published in eight top gerontological psychology journals since 2000. Frequentist and Bayesian caliper tests were used to test for publication bias and QRPs (specifically, p-hacking and incorrect rounding of p values). A z-curve analysis was used to estimate average statistical power across studies. Results: Strong evidence of publication bias was observed, and average statistical power was approximately .70, below the recommended .80 level. Evidence of p-hacking was mixed, and evidence of incorrect rounding of p values was inconclusive. Discussion: Gerontological research is not immune to publication bias, QRPs, and low statistical power. Researchers, journals, institutions, and funding bodies are encouraged to adopt open and transparent research practices, and to use Registered Reports as an alternative article type, in order to minimize publication bias and QRPs and to increase statistical power.
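A caliper test of the kind described here compares how many reported p values fall in a narrow window just below .05 with how many fall just above it; absent publication bias or p-hacking, the two counts should be roughly equal. The sketch below shows a simple frequentist version with placeholder p values; the study itself used 14,481 extracted statistics plus Bayesian calipers and z-curve, which are not reproduced here.

import numpy as np
from scipy import stats

# Placeholder p values standing in for those extracted from published articles.
p_values = np.array([0.012, 0.048, 0.049, 0.047, 0.051, 0.046, 0.031, 0.044, 0.052, 0.043])

caliper = 0.005                                   # window width on each side of .05
below = np.sum((p_values >= 0.05 - caliper) & (p_values < 0.05))
above = np.sum((p_values > 0.05) & (p_values <= 0.05 + caliper))

# Under no publication bias or p-hacking, p values should be about as likely
# to land just above .05 as just below; a surplus just below is suspicious.
result = stats.binomtest(int(below), int(below + above), p=0.5, alternative="greater")
print(f"just below .05: {below}, just above: {above}, one-sided p = {result.pvalue:.3f}")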


2017 ◽  
Author(s):  
Evan C Carter ◽  
Felix D. Schönbrodt ◽  
Will M Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, much of this work has not been tailored specifically to psychology, so it is not clear which methods work best for data typically seen in our field. Here, we present a comprehensive simulation study to examine how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We created such scenarios by simulating several levels of questionable research practices, publication bias, and heterogeneity, and by using study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change based on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and to conduct large-scale, pre-registered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.
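One family of adjustment methods evaluated in simulation studies like this one regresses observed effect sizes on their precision; in the PET (precision-effect test) variant, the intercept of a weighted regression of effects on standard errors serves as the estimate adjusted for small-study effects. The sketch below illustrates that idea on hypothetical meta-analytic data; it is not the authors' simulation code, which is available at the OSF link above.

import numpy as np
import statsmodels.api as sm

# Hypothetical meta-analytic dataset: observed standardized effects and their
# standard errors (in practice these come from the primary studies).
d  = np.array([0.61, 0.45, 0.52, 0.30, 0.25, 0.38, 0.18, 0.22, 0.15, 0.12])
se = np.array([0.30, 0.28, 0.25, 0.20, 0.18, 0.16, 0.12, 0.10, 0.08, 0.06])

# PET: weighted least squares of effect size on standard error; the intercept
# (the predicted effect at SE = 0) is the estimate adjusted for small-study effects.
X = sm.add_constant(se)
pet = sm.WLS(d, X, weights=1 / se**2).fit()
naive = np.sum(d / se**2) / np.sum(1 / se**2)     # ordinary fixed-effect estimate
print(f"naive estimate: {naive:.2f}, PET-adjusted intercept: {pet.params[0]:.2f}")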


