The Effect of Preregistration on Trust in Empirical Research Findings

Author(s):  
Sarahanne Miranda Field ◽  
Eric-Jan Wagenmakers ◽  
Henk Kiers ◽  
Rink Hoekstra ◽  
Anja Ernst ◽  
...  

The crisis of confidence has undermined the trust that researchers place in the findings of their peers and, more broadly, beliefs about the credibility of research results. Increasing trust in credibly reported research is therefore paramount. Initiatives such as preregistration aim to establish a more trustworthy scientific literature by helping to prevent various questionable research practices. As it stands, however, no empirical evidence exists demonstrating that preregistration does increase trust. Indeed, the objective merits of preregistration lose much of their value if researchers' subjective assessments of preregistration do not align with them. Additionally, the picture may be complicated by a researcher's familiarity with the author of a study, regardless of its preregistration status. The following proposal describes how we aim to test the extent to which preregistration increases participants' trust in reported outcomes. We also aim to assess how familiarity with another researcher might influence trust. We expect that preregistration increases researchers' trust in findings relative to no preregistration, and that registered reporting increases trust more than preregistration alone. We also expect that familiarity influences trust judgments to some extent; however, we do not have specific expectations regarding the nature of this effect, and we therefore include familiarity as an exploratory effect in our analyses. The OSF page for this registered report proposal and its pilot can be found here: http://dx.doi.org/10.17605/OSF.IO/B3K75

2020 ◽  
Vol 7 (4) ◽  
pp. 181351 ◽  
Author(s):  
Sarahanne M. Field ◽  
E.-J. Wagenmakers ◽  
Henk A. L. Kiers ◽  
Rink Hoekstra ◽  
Anja F. Ernst ◽  
...  

The crisis of confidence has undermined the trust that researchers place in the findings of their peers. In order to increase trust in research, initiatives such as preregistration have been suggested, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration does increase perceptions of trust. The picture may be complicated by a researcher's familiarity with the author of the study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in the reported outcomes, and how familiarity with another researcher influences that trust. Contrary to our expectations, we report ambiguous Bayes factors and conclude that we do not have strong evidence with which to answer our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, which led to the exclusion of 68% of the complete datasets and, as a consequence, an underpowered design. We discuss other limitations and confounds which may explain why the findings of the study deviate from those of a previously conducted pilot study. We reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found here: http://dx.doi.org/10.17605/OSF.IO/B3K75
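The report's conclusions rest on Bayes factors that turned out to be ambiguous (values near 1). As a hedged illustration of what such a quantity is, and not the authors' analysis code, the sketch below computes a default JZS Bayes factor for an independent-samples t-test in Python, following Rouder et al. (2009); the t value and group sizes are invented for the example.

```python
# A minimal sketch of a default (JZS) Bayes factor for a two-sample t-test,
# following Rouder et al. (2009). All numbers below are illustrative only.
import numpy as np
from scipy import integrate


def jzs_bf10(t, n1, n2, r=np.sqrt(2) / 2):
    """BF10 for an independent-samples t-test with a Cauchy(0, r) prior on effect size."""
    df = n1 + n2 - 2
    n_eff = n1 * n2 / (n1 + n2)

    def integrand(g):
        return (
            (1 + n_eff * g * r**2) ** -0.5
            * (1 + t**2 / ((1 + n_eff * g * r**2) * df)) ** (-(df + 1) / 2)
            * (2 * np.pi) ** -0.5
            * g ** -1.5
            * np.exp(-1 / (2 * g))
        )

    marginal_h1, _ = integrate.quad(integrand, 0, np.inf)   # marginal likelihood under H1
    marginal_h0 = (1 + t**2 / df) ** (-(df + 1) / 2)          # marginal likelihood under H0
    return marginal_h1 / marginal_h0


# Hypothetical comparison of trust ratings between a "preregistered" and a
# "not preregistered" condition (t and group sizes are made up).
print(f"BF10 = {jzs_bf10(t=1.2, n1=100, n2=109):.2f}")  # BF10 > 1 favours H1, < 1 favours H0
```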


Author(s):  
Hengky Latan ◽  
Charbel Jose Chiappetta Jabbour ◽  
Ana Beatriz Lopes de Sousa Jabbour ◽  
Murad Ali

Academic leaders in management from all over the world have urged further research into the extent and use of questionable research practices (QRPs), including recent calls by the Academy of Management (Shaw, Academy of Management Journal 60(3): 819–822, 2017). In order to provide empirical evidence on the topic of QRPs, this work presents two linked studies. Study 1 determines the level of use of QRPs, based on self-admission rates and estimated prevalence, among business scholars in Indonesia. It was determined that Study 2 would be conducted as a follow-up if the level of QRP use identified in Study 1 was quite high, and this was indeed the case. Study 2 examines the factors that encourage and discourage the use of QRPs in the sample analyzed. The main research findings are as follows: (a) in Study 1, we found the self-admission rates and estimated prevalence of business scholars' involvement in QRPs to be quite high when compared with studies conducted in other countries; (b) in Study 2, we found pressure for publication from universities, fear of rejection of manuscripts, meeting the expectations of reviewers, and available rewards to be the main reasons for the use of QRPs in Indonesia; whereas (c) formal sanctions and prevention efforts are factors that discourage QRPs. Recommendations for stakeholders (in this case, reviewers, editors, funders, supervisors, chancellors and others) are also provided in order to reduce the use of QRPs.


2019 ◽  
Vol 6 (12) ◽  
pp. 190738 ◽  
Author(s):  
Jerome Olsen ◽  
Johanna Mosen ◽  
Martin Voracek ◽  
Erich Kirchler

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
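The consistency check mentioned here (statcheck) recomputes a p-value from the reported test statistic and degrees of freedom and compares it with the reported p; a mismatch that also flips significance counts as a decision error. Below is a minimal Python sketch of that logic for a t-test, using scipy rather than the statcheck R package itself; the reported numbers and the rounding tolerance are illustrative assumptions.

```python
# Recompute a two-tailed p-value from a reported t statistic and degrees of
# freedom, then compare it with the reported p (the core of a statcheck-style
# consistency check). The tolerance is a simple stand-in for statcheck's
# handling of rounding in the reported values.
from scipy import stats


def check_t_report(t, df, p_reported, alpha=0.05, tol=0.005):
    """Return (recomputed p, inconsistent?, decision error?) for a reported t-test."""
    p_computed = 2 * stats.t.sf(abs(t), df)            # two-tailed p from t and df
    inconsistent = abs(p_computed - p_reported) > tol  # beyond plausible rounding
    decision_error = inconsistent and (
        (p_reported < alpha) != (p_computed < alpha)   # reported significance flips
    )
    return p_computed, inconsistent, decision_error


# Hypothetical reported results: t(48) = 2.10 with p = .04 (consistent within
# rounding) and the same statistic with p = .03 (inconsistent, not a decision error).
print(check_t_report(t=2.10, df=48, p_reported=0.04))
print(check_t_report(t=2.10, df=48, p_reported=0.03))
```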


2018 ◽  
Author(s):  
Christopher Brydges

Objectives: Research has found evidence of publication bias, questionable research practices (QRPs), and low statistical power in published psychological journal articles. Isaacowitz's (2018) editorial in the Journals of Gerontology, Series B: Psychological Sciences called for investigation of these issues in gerontological research. The current study presents meta-research findings based on published research to explore whether there is evidence of these practices in gerontological research. Method: 14,481 test statistics and p values were extracted from articles published in eight top gerontological psychology journals since 2000. Frequentist and Bayesian caliper tests were used to test for publication bias and QRPs (specifically, p-hacking and incorrect rounding of p values). A z-curve analysis was used to estimate average statistical power across studies. Results: Strong evidence of publication bias was observed, and average statistical power was approximately .70, below the recommended .80 level. Evidence of p-hacking was mixed, and evidence of incorrect rounding of p values was inconclusive. Discussion: Gerontological research is not immune to publication bias, QRPs, and low statistical power. Researchers, journals, institutions, and funding bodies are encouraged to adopt open and transparent research practices and to use Registered Reports as an alternative article type, in order to minimize publication bias and QRPs and to increase statistical power.
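The caliper test used here for publication bias compares how many test statistics fall just above versus just below the critical value; with no bias the two counts should be about equal, so an excess just above z = 1.96 suggests selective publication. The Python sketch below shows the frequentist version on simulated z values; it is a toy illustration, not the study's data or code.

```python
# Frequentist caliper test for publication bias: count z statistics falling just
# over vs. just under the 1.96 threshold and compare the counts with an exact
# binomial test (simulated data, for illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
z_values = np.abs(rng.normal(loc=1.8, scale=0.8, size=2000))  # stand-in for extracted z values

critical = 1.96
caliper = 0.10                                   # width of the band on each side
over = int(np.sum((z_values > critical) & (z_values <= critical + caliper)))
under = int(np.sum((z_values >= critical - caliper) & (z_values < critical)))

# With no publication bias, a value inside the caliper is equally likely to lie
# on either side of the threshold, so the null proportion is p = 0.5.
result = stats.binomtest(over, n=over + under, p=0.5, alternative="greater")
print(f"just over: {over}, just under: {under}, one-sided p = {result.pvalue:.3f}")
```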


2017 ◽  
Vol 45 (3) ◽  
pp. 99-118
Author(s):  
Marzena Stor

The main goal of the article is to identify the changes that appeared in local subsidiaries of MNCs in Central Europe in the variety of HR knowledge sources and in HR competency structuring and acquisition, as a response to the changing macroeconomic conditions of the last decade. Additionally, the aim is to determine whether such changes have been experienced in Poland and whether they have been similar to those that characterize Central Europe. The empirical research findings come from five Central European countries. The general conclusion is that, in order to face the changing worldwide economic situation, MNCs had to reformulate their strategies of operation, and this was reflected not only in changing practices within the variety of HR knowledge sources and HR competency structuring and acquisition, but also in how these practices are evaluated from the management perspective.


2020 ◽  
Vol 32 (4) ◽  
pp. 183-190
Author(s):  
William Newton Suter

This article focuses on questionable research practices (QRPs) that bias findings and conclusions. QRPs cast doubt on the credibility of research findings in home health and in nursing science in general; to the extent that they are permitted to exist at all, they assault the research integrity of all researchers. Each QRP is defined via bundles of specific research behaviors under unifying labels, including deceptive mirages and phantom sharpshooters, among others. These questionable behaviors are described in ways that enhance understanding and enable careful home health nurse researchers, applying higher standards of scientific rigor, to avoid QRPs. QRPs impede scientific progress by generating false conclusions. They threaten the validity and dependability of scientific research and confuse other researchers who practice rigorous science and maintain integrity. QRPs also clog the literature with studies that cannot be replicated. When researchers engage in QRPs at the expense of rigor, overall trust in the scientific knowledge base erodes.


Author(s):  
Toby Prike

Recent years have seen large changes to research practices within psychology and a variety of other empirical fields in response to the discovery (or rediscovery) of the pervasiveness and potential impact of questionable research practices, coupled with well-publicised failures to replicate published findings. In response to this, and as part of a broader open science movement, a variety of changes to research practice have started to be implemented, such as publicly sharing data, analysis code, and study materials, as well as the preregistration of research questions, study designs, and analysis plans. This chapter outlines the relevance and applicability of these issues to computational modelling, highlighting the importance of good research practices for modelling endeavours, as well as the potential of provenance modelling standards, such as PROV, to help discover and minimise the extent to which modelling is impacted by unreliable research findings from other disciplines.
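PROV expresses provenance as entities, activities, and relations such as "used" and "wasGeneratedBy", which lets a modeller trace a fitted parameter back to the empirical finding it depends on. The sketch below is a minimal example assuming the Python prov package; the entity and activity identifiers are invented for illustration.

```python
# Minimal PROV record: a calibration activity that used a published dataset and
# generated a fitted parameter set (identifiers are purely illustrative).
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

doc.entity("ex:empirical-dataset-2019")       # finding imported from another discipline
doc.entity("ex:fitted-parameters-v1")         # output of the modelling step
doc.activity("ex:model-calibration")          # the modelling activity itself

doc.used("ex:model-calibration", "ex:empirical-dataset-2019")
doc.wasGeneratedBy("ex:fitted-parameters-v1", "ex:model-calibration")

print(doc.get_provn())                        # serialise in PROV-N notation
```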


Akuntabilitas ◽  
2019 ◽  
Vol 12 (1) ◽  
pp. 19-36
Author(s):  
Nur Eny

This study aims to examine whether corporate characteristics and information asymmetry affect earnings management in Indonesia. It uses a meta-analysis approach with 35 samples drawn from internationally and nationally accredited journals as well as proceedings of the Indonesian National Symposium of Accounting. The results reinforce the meta-analysis findings of previous studies in which earnings management is done for different purposes: management's motivation to perform earnings management varies between opportunistic and efficient contract motives. The empirical evidence shows that corporate characteristics are predictors of earnings management, and that cash flow from operations and information asymmetry significantly affect earnings management. This evidence supports several previous meta-analyses in the accounting field in which moderator measurement variables affect the heterogeneity of research findings.
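For readers unfamiliar with the aggregation step, a random-effects meta-analysis pools per-study effect sizes while allowing for between-study heterogeneity. The Python sketch below shows the standard DerSimonian-Laird procedure on Fisher-transformed correlations; the correlations and sample sizes are invented and this is not the study's data or code.

```python
# Random-effects pooling of correlation coefficients (DerSimonian-Laird),
# illustrating the kind of aggregation a meta-analysis of 35 samples performs.
import numpy as np

# Invented per-study correlations (e.g. between information asymmetry and
# earnings management) and their sample sizes.
r = np.array([0.21, 0.35, 0.10, 0.28, 0.17])
n = np.array([120, 85, 210, 60, 150])

z = np.arctanh(r)            # Fisher z-transform of each correlation
v = 1 / (n - 3)              # approximate sampling variance of z
w = 1 / v                    # fixed-effect weights

# DerSimonian-Laird estimate of the between-study variance tau^2.
z_fixed = np.sum(w * z) / np.sum(w)
q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(r) - 1)) / c)

# Random-effects weights and pooled estimate, back-transformed to r.
w_re = 1 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled r = {np.tanh(z_re):.3f}, 95% CI = "
      f"[{np.tanh(z_re - 1.96 * se_re):.3f}, {np.tanh(z_re + 1.96 * se_re):.3f}]")
```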


2021 ◽  
Author(s):  
Jason Chin ◽  
Justin Pickett ◽  
Simine Vazire ◽  
Alex O. Holcombe

Objectives. Questionable research practices (QRPs) lead to incorrect research results and contribute to irreproducibility in science. Researchers and institutions have proposed open science practices (OSPs) to improve the detectability of QRPs and the credibility of science. We examine the prevalence of QRPs and OSPs in criminology, and researchers' opinions of those practices. Methods. We administered an anonymous survey to authors of articles published in criminology journals. Respondents self-reported their own use of 10 QRPs and 5 OSPs. They also estimated the prevalence of use by others, and reported their attitudes toward the practices. Results. QRPs and OSPs are both common in quantitative criminology, about as common as they are in other fields. Criminologists who responded to our survey support using QRPs in some circumstances, but are even more supportive of using OSPs. We did not detect a significant relationship between methodological training and either QRP or OSP use. Support for QRPs is negatively and significantly associated with support for OSPs. Perceived prevalence estimates for some practices resembled a uniform distribution, suggesting criminologists have little knowledge of the proportion of researchers that engage in certain questionable practices. Conclusions. Most quantitative criminologists in our sample use QRPs, and many use multiple QRPs. The substantial prevalence of QRPs raises questions about the validity and reproducibility of published criminological research. We found promising levels of OSP use, albeit at levels lagging behind what researchers endorse. The findings thus suggest that additional reforms are needed to decrease QRP use and increase the use of OSPs.
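Self-admission rates like these are simple proportions, but an interval estimate is needed before comparing them with rates reported in other fields. Below is a minimal Python sketch using a Wilson confidence interval; the counts are invented, not the survey's data.

```python
# Self-admission rate for a single QRP, with a Wilson 95% confidence interval
# (the counts are invented for illustration).
from statsmodels.stats.proportion import proportion_confint

admitted, respondents = 63, 180               # hypothetical survey counts
rate = admitted / respondents
low, high = proportion_confint(admitted, respondents, alpha=0.05, method="wilson")
print(f"self-admission rate = {rate:.1%} (95% CI {low:.1%} to {high:.1%})")
```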


Author(s):  
Hye K. Pae

This chapter reviews the evolution of the linguistic relativity hypothesis and how it came to be dismissed. Opponents of linguistic relativity misinterpreted both the hypothesis itself and the research results. With new interpretations and more scientific research findings, the hypothesis has attracted renewed interest in recent years. Empirical evidence for linguistic relativity is reviewed from the perspective of first-language influences on cognition, including color, motion, number, time, objects, and nonlinguistic representations, and through the prism of cross-linguistic influences. The chapter then moves the discussion from linguistic relativity to an introduction to script relativity, and ends with the claim that, among other factors that can explain cross-linguistic and cross-scriptal influences, script relativity has the greatest competitive plausibility for explaining the consequences of reading.

