The Use of Questionable Research Practices to Survive in Academia Examined With Expert Elicitation, Prior-Data Conflicts, Bayes Factors for Replication Effects, and the Bayes Truth Serum

2021 ◽  
Vol 12 ◽  
Author(s):  
Rens van de Schoot ◽  
Sonja D. Winter ◽  
Elian Griffioen ◽  
Stephan Grimmelikhuijsen ◽  
Ingrid Arts ◽  
...  

The popularity and use of Bayesian methods have increased across many research domains. The current article demonstrates how some less familiar Bayesian methods can be used. Specifically, we applied expert elicitation, testing for prior-data conflicts, the Bayesian Truth Serum, and testing for replication effects via Bayes factors in a series of four studies investigating the use of questionable research practices (QRPs). Scientifically fraudulent or unethical research practices have caused quite a stir in academia and beyond. Improving science starts with educating Ph.D. candidates: the scholars of tomorrow. In four studies involving 765 Ph.D. candidates, we investigated whether Ph.D. candidates can differentiate between ethical and unethical or even fraudulent research practices. We probed their willingness to publish research resulting from such practices and tested whether this willingness is influenced by pressure toward (un)ethical behavior from supervisors or peers. Furthermore, 36 academic leaders (deans, vice-deans, and heads of research) were interviewed and asked to predict how Ph.D. candidates would answer different vignettes. Our study shows, and replicates, that some Ph.D. candidates are willing to publish results derived from even blatantly fraudulent behavior: data fabrication. Additionally, some academic leaders underestimated this behavior, which is alarming. Academic leaders should keep in mind that Ph.D. candidates can be under more pressure than they realize and may be susceptible to using QRPs. As an inspiring example, and to encourage others to make their Bayesian work reproducible, we published the data, annotated scripts, and detailed output on the Open Science Framework (OSF).

2020 ◽  
Vol 7 (4) ◽  
pp. 181351 ◽  
Author(s):  
Sarahanne M. Field ◽  
E.-J. Wagenmakers ◽  
Henk A. L. Kiers ◽  
Rink Hoekstra ◽  
Anja F. Ernst ◽  
...  

The crisis of confidence has undermined the trust that researchers place in the findings of their peers. In order to increase trust in research, initiatives such as preregistration have been suggested, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration does increase perceptions of trust. The picture may be complicated by a researcher's familiarity with the author of the study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in the reported outcomes, and of how familiarity with another researcher influences that trust. Contrary to our expectations, we report ambiguous Bayes factors and conclude that we do not have strong evidence towards answering our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, leading to the exclusion of 68% of complete datasets and, as a consequence, an underpowered design. We discuss other limitations and confounds which may explain why the findings of the study deviate from a previously conducted pilot study. We reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found at http://dx.doi.org/10.17605/OSF.IO/B3K75.


2021 ◽  
Author(s):  
Jason Chin ◽  
Justin T. Pickett ◽  
Simine Vazire ◽  
Alex O. Holcombe

2019 ◽  
Vol 6 (12) ◽  
pp. 190738 ◽  
Author(s):  
Jerome Olsen ◽  
Johanna Mosen ◽  
Martin Voracek ◽  
Erich Kirchler

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of the research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated the utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
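The consistency check described above can be sketched in a few lines: recompute the p-value implied by a reported test statistic and its degrees of freedom, then compare it to the reported p-value. The function below is a minimal illustrative sketch of this kind of check, not Statcheck's actual implementation; the function name, tolerance, and alpha handling are assumptions.

```python
from scipy import stats

def check_t_report(t_value, df, reported_p, two_tailed=True, tol=1e-2, alpha=0.05):
    """Recompute the p-value implied by a reported t statistic and compare it
    to the reported p-value, flagging inconsistencies and decision errors."""
    p = stats.t.sf(abs(t_value), df)  # one-tailed survival probability
    if two_tailed:
        p *= 2
    # Inconsistent: recomputed and reported p-values disagree beyond tolerance
    inconsistent = abs(p - reported_p) > tol
    # Decision error: the inconsistency flips significance at the alpha level
    decision_error = inconsistent and ((p < alpha) != (reported_p < alpha))
    return float(p), bool(inconsistent), bool(decision_error)

# A report of "t(28) = 2.20, p = .04" is internally consistent:
print(check_t_report(2.20, 28, 0.04))
```

The same idea extends to F, chi-square, and correlation tests by swapping in the corresponding distribution.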


2018 ◽  
Author(s):  
Jeff Annis ◽  
Nathan J. Evans

One of the more principled methods of performing model selection is via Bayes factors. However, calculating Bayes factors requires marginal likelihoods, which are integrals over the entire parameter space, making the estimation of Bayes factors for models with more than a few parameters a significant computational challenge. Here, we review two Monte Carlo techniques rarely used in psychology that efficiently and accurately compute marginal likelihoods: thermodynamic integration (TI; Friel & Pettitt, 2008; Lartillot & Philippe, 2006) and steppingstone sampling (SS; Xie, Lewis, Fan, Kuo, & Chen, 2011). The methods are general and can be easily implemented in existing MCMC code; we provide both the details for implementation and associated R code for the interested reader. While Bayesian toolkits implementing standard statistical analyses (e.g., JASP Team, 2017; Morey & Rouder, 2015) often compute Bayes factors for the researcher, those using Bayesian approaches to evaluate cognitive models are usually left to compute Bayes factors themselves. Here, we provide examples of the methods by computing marginal likelihoods for a moderately complex model of choice response time, the Linear Ballistic Accumulator model (Brown & Heathcote, 2008), and compare them to the findings of Evans and Brown (2017), who used a brute-force technique. We then present a derivation of TI and SS within a hierarchical framework, provide results of a model-recovery case study using hierarchical models, and show an application to empirical data. A companion R package is available at the Open Science Framework: https://osf.io/jpnb4.
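The core idea of thermodynamic integration is that the log marginal likelihood equals the integral, over a temperature t from 0 to 1, of the expected log-likelihood under the "power posterior" proportional to p(y|θ)^t p(θ). The sketch below illustrates this in Python on a toy conjugate normal model where the power posterior is available in closed form and the exact answer is known; the model, temperature grid, and sample sizes are illustrative assumptions, not the paper's LBA application or its R package.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (an assumption for illustration): y_i ~ Normal(theta, 1),
# prior theta ~ Normal(0, 1), with n observations.
n = 20
y = rng.normal(0.5, 1.0, size=n)
s = y.sum()

def power_posterior_params(t):
    # Power posterior p(y|theta)^t * p(theta) stays Normal for this model:
    # precision 1 + t*n, mean t*sum(y)/(1 + t*n).
    var = 1.0 / (1.0 + t * n)
    return t * s * var, var

def expected_loglik(t, draws=20_000):
    # Monte Carlo estimate of E[log p(y|theta)] under the power posterior.
    mean, var = power_posterior_params(t)
    theta = rng.normal(mean, np.sqrt(var), size=draws)
    sq = ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)
    return (-0.5 * n * np.log(2 * np.pi) - 0.5 * sq).mean()

# Thermodynamic integration: log Z = integral_0^1 E_t[log p(y|theta)] dt,
# approximated by the trapezoid rule over rungs concentrated near t = 0.
ts = np.linspace(0.0, 1.0, 32) ** 5
Es = np.array([expected_loglik(t) for t in ts])
log_Z_ti = 0.5 * np.sum((Es[1:] + Es[:-1]) * (ts[1:] - ts[:-1]))

# Exact log marginal likelihood for comparison: y ~ Normal(0, I + 11'),
# with det(I + 11') = 1 + n and (I + 11')^{-1} = I - 11'/(1 + n).
log_Z_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1 + n)
               - 0.5 * (y @ y - s**2 / (1 + n)))

print(log_Z_ti, log_Z_exact)
```

In a real application the closed-form power posterior is replaced by MCMC draws at each temperature rung, which is why the method slots naturally into existing MCMC code; steppingstone sampling uses the same rungs but combines ratios of likelihood powers instead of a trapezoid rule.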


BMJ Open ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. e045546
Author(s):  
Henry Douglas Robb ◽  
Gemma Scrimgeour ◽  
Piers R Boshier ◽  
Svetlana Balyasnikova ◽  
Gina Brown ◽  
...  

Introduction
Three-dimensional (3D) reconstruction describes the generation of either virtual or physically printed, anatomically accurate 3D models from two-dimensional medical images. Their implementation has revolutionised medical practice. Within surgery, key applications include growing roles in operative planning and procedures, surgical education and training, as well as patient engagement and education. In comparison to other surgical specialties, oesophagogastric surgery has been slow in its adoption of this technology. Herein the authors outline a scoping review protocol that aims to analyse the current role of 3D modelling in oesophagogastric surgery and highlight any unexplored avenues for future research.

Methods and analysis
The protocol was generated using internationally accepted methodological frameworks. A succinct primary question was devised, and a comprehensive search strategy was developed for key databases (MEDLINE, Embase, Elsevier Scopus and ISI Web of Science). These were searched from their inception to 1 June 2020. Reference lists will be reviewed by hand and grey literature identified using OpenGrey and Grey Literature Report. The protocol was registered to the Open Science Framework (osf.io/ta789). Two independent reviewers will screen titles and abstracts and perform full-text reviews for study selection. There will be no methodological quality assessment, to ensure a full thematic analysis is possible. A data charting tool will be created by the investigatory team. Results will be analysed to generate descriptive numerical tabular results, and a thematic analysis will be performed.

Ethics and dissemination
Ethical approval was not required for the collection and analysis of the published data. The scoping review report will be disseminated through a peer-reviewed publication and international conferences.

Registration details
The scoping review protocol has been registered on the Open Science Framework (https://osf.io/ta789).


2017 ◽  
Vol 48 (6) ◽  
pp. 365-371 ◽  
Author(s):  
Stefan Stürmer ◽  
Aileen Oeberst ◽  
Roman Trötschel ◽  
Oliver Decker

Abstract. Young researchers of today will shape the field in the future. In light of current debates about social psychology's research culture, this exploratory survey assessed early-career researchers' beliefs (N = 88) about the prevalence of questionable research practices (QRPs), potential causes, and open science as a possible solution. While there was relative consensus that outright fraud is an exception, a majority of participants believed that some QRPs are moderately to highly prevalent, which they attributed primarily to academic incentive structures. A majority of participants felt that open science is necessary to improve research practice. They indicated that they would consider some open science recommendations in the future, but they also expressed some reluctance. Limitations and implications of these findings are discussed.


2021 ◽  
Author(s):  
Bradley David McAuliff ◽  
Melanie B. Fessinger ◽  
Anthony Perillo ◽  
Jennifer Torkildson Perillo

As the field of psychology and law begins to embrace more transparent and accessible science, many questions arise about what open science actually is and how to do it. In this chapter, we contextualize this reform by examining fundamental concerns about psychological research—irreproducibility and replication failures, false-positive errors, and questionable research practices—that threaten its validity and credibility. Next, we turn to psychology’s response by reviewing the concept of open science and explaining how to implement specific practices—preregistration, registered reports, open materials/data/code, and open access publishing—designed to make research more transparent and accessible. We conclude by weighing the implications of open science for the field of psychology and law, specifically with respect to how we conduct and evaluate research, as well as how we train the next generation of psychological scientists and share scientific findings in applied settings.


Author(s):  
Hengky Latan ◽  
Charbel Jose Chiappetta Jabbour ◽  
Ana Beatriz Lopes de Sousa Jabbour ◽  
Murad Ali

Abstract
Academic leaders in management from all over the world, including recent calls by Shaw (Academy of Management Journal, 60(3): 819–822, 2017), have urged further research into the extent and use of questionable research practices (QRPs). In order to provide empirical evidence on the topic of QRPs, this work presents two linked studies. Study 1 determines the level of use of QRPs based on self-admission rates and estimated prevalence among business scholars in Indonesia. It was determined in advance that if the level of QRP use identified in Study 1 was quite high, Study 2 would be conducted to follow up on this result, and this was indeed the case. Study 2 examines the factors that encourage and discourage the use of QRPs in the sample analyzed. The main research findings are as follows: (a) in Study 1, we found the self-admission rates and estimated prevalence of business scholars' involvement in QRPs to be quite high when compared with studies conducted in other countries; (b) in Study 2, we found pressure for publication from universities, fear of rejection of manuscripts, meeting the expectations of reviewers, and available rewards to be the main reasons for the use of QRPs in Indonesia; whereas (c) formal sanctions and prevention efforts are factors that discourage QRPs. Recommendations for stakeholders (in this case, reviewers, editors, funders, supervisors, chancellors and others) are also provided in order to reduce the use of QRPs.


2020 ◽  
Author(s):  
Soufian Azouaghe ◽  
Adeyemi Adetula ◽  
Patrick S. Forscher ◽  
Dana Basnight-Brown ◽  
Nihal Ouherrou ◽  
...  

The quality of scientific research is assessed not only by its positive impact on socio-economic development and human well-being, but also by its contribution to the development of valid and reliable scientific knowledge. Thus, researchers, regardless of their scientific discipline, are expected to adopt research practices based on transparency and rigor. However, the history of science and the scientific literature teach us that some scientific results are not systematically reproducible (Ioannidis, 2005). This is what is commonly known as the "replication crisis", which concerns the natural sciences as well as the social sciences, of which psychology is no exception.

Firstly, we aim to address some aspects of the replication crisis and questionable research practices (QRPs). Secondly, we discuss how to involve more labs in Africa in the global research process, especially the Psychological Science Accelerator (PSA). To these ends, we will develop a tutorial for labs in Africa highlighting open science practices. In addition, we emphasize that it is essential to identify African labs' needs, the factors that hinder their participation in the PSA, and the support needed from the Western world. Finally, we discuss how to make psychological science more participatory and inclusive.

