A practical guide for transparency in psychological science

2018 ◽  
Vol 4 (1) ◽  
Author(s):  
Olivier Klein ◽  
Tom E. Hardwicke ◽  
Frederik Aust ◽  
Johannes Breuer ◽  
Henrik Danielsson ◽  
...  

The credibility of scientific claims depends upon the transparency of the research products upon which they are based (e.g., study protocols, data, materials, and analysis scripts). As psychology navigates a period of unprecedented introspection, user-friendly tools and services that support open science have flourished, and there has never been a better time to embrace transparent research practices. However, the plethora of decisions and choices involved can be bewildering. Here we provide a practical guide to help researchers navigate the process of preparing and sharing the products of their research (e.g., choosing a repository, preparing research products for sharing, structuring folders). Being an open scientist means adopting a few straightforward research management practices that lead to less error-prone, reproducible research workflows. Further, this adoption can be piecemeal – each incremental step towards complete transparency adds positive value. Transparent research practices not only improve the efficiency of individual researchers but also enhance the credibility of the knowledge generated by the scientific community.
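
To illustrate the folder-structuring advice, here is a minimal Python sketch that scaffolds a shareable project layout; the folder names are assumptions for illustration, not a structure prescribed by the article:

    # Scaffold a shareable project layout (folder names are illustrative).
    from pathlib import Path

    FOLDERS = [
        "data/raw",        # original data, never edited by hand
        "data/processed",  # data derived from data/raw by scripts
        "materials",       # stimuli, questionnaires, protocols
        "analysis",        # analysis scripts
        "outputs",         # figures and tables generated by scripts
    ]

    def scaffold(root: str = "my-study") -> None:
        for folder in FOLDERS:
            path = Path(root) / folder
            path.mkdir(parents=True, exist_ok=True)
            # An empty README in each folder invites documentation for re-users.
            (path / "README.md").touch()

    if __name__ == "__main__":
        scaffold()

Running such a script once at the start of a project keeps raw data, derived data, materials, and scripts cleanly separated, which makes later sharing nearly effortless.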


2016 ◽  
Author(s):  
Krzysztof J. Gorgolewski ◽  
Russell A. Poldrack

Recent years have seen an increase in alarming signals regarding the lack of replicability in neuroscience, psychology, and related fields. To avoid a widespread crisis in neuroimaging research and a consequent loss of credibility in the public eye, we need to improve how we do science. This article aims to be a practical guide for researchers at any stage of their careers, helping them make their research more reproducible and transparent while minimizing the additional effort required. The guide covers three major topics in open science (data, code, and publications), offers practical advice, and highlights advantages of adopting more open research practices that go beyond improved transparency and reproducibility.
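
In the spirit of the guide's advice on code, a minimal Python sketch of two low-cost reproducibility habits, fixing random seeds and recording the software environment; the file name and recorded fields are assumptions, not the authors' prescriptions:

    # Record the seed and environment so a stochastic analysis can be rerun.
    import json
    import platform
    import random
    import sys

    SEED = 2016  # any fixed value makes stochastic steps repeatable
    random.seed(SEED)

    def record_environment(path: str = "environment.json") -> None:
        """Save interpreter and platform details alongside the results."""
        info = {
            "python": sys.version,
            "platform": platform.platform(),
            "seed": SEED,
        }
        with open(path, "w") as fh:
            json.dump(info, fh, indent=2)

    if __name__ == "__main__":
        record_environment()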


2021 ◽  
Author(s):  
Adam H. Sparks ◽  
Emerson del Ponte ◽  
Kaique S. Alves ◽  
Zachary S. L. Foster ◽  
Niklaus J. Grünwald

Open research practices have been highlighted extensively during the last ten years in many fields of scientific study as essential standards needed to promote transparency and reproducibility of scientific results. Scientific claims can be evaluated only on the basis of how protocols, materials, equipment, and methods were described; how data were collected and prepared; and how analyses were conducted. Openly sharing protocols, data, and computational code is central to current scholarly dissemination and communication, but in many fields, including plant pathology, adoption of these practices has been slow. We randomly selected 300 articles published from 2012 to 2018 across 21 journals representative of the plant pathology discipline and assigned them scores reflecting their openness and reproducibility. We found that most articles did not follow open science protocols and failed to share data or code in a reproducible way. We also propose that the use of open-source tools facilitates reproducible work and analyses, benefiting not just readers but the authors as well. Finally, we provide ideas and tools to promote open, reproducible research practices among plant pathologists.
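
A minimal sketch of the kind of openness scoring described here, with criteria names and example records invented for illustration (the authors' actual rubric may differ):

    # Score each article by counting the open-science criteria it satisfies.
    CRITERIA = ["data_shared", "code_shared", "protocol_described", "open_access"]

    def openness_score(article: dict) -> int:
        """Count how many open-science criteria an article satisfies."""
        return sum(bool(article.get(criterion)) for criterion in CRITERIA)

    articles = [
        {"id": "a1", "data_shared": True, "code_shared": False,
         "protocol_described": True, "open_access": True},
        {"id": "a2", "data_shared": False, "code_shared": False,
         "protocol_described": True, "open_access": False},
    ]

    for art in articles:
        print(art["id"], openness_score(art))  # a1 -> 3, a2 -> 1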


2019 ◽  
Author(s):  
Simon Dennis ◽  
Paul Michael Garrett ◽  
Hyungwook Yim ◽  
Jihun Hamm ◽  
Adam F Osth ◽  
...  

Pervasive internet and sensor technologies promise to revolutionize psychological science. However, the data collected using these technologies are often very personal - indeed, the value of the data is often directly related to how personal they are. At the same time, driven by the replication crisis, there is a sustained push to publish data to open repositories. These movements are in fundamental conflict. In this paper, we propose a way to navigate this issue. We argue that there are significant advantages to be gained by ceding the ownership of data to the participants who generate it. We then provide desiderata for a privacy-preserving platform. In particular, we suggest that researchers should use an interface to perform experiments and run analyses rather than accessing the data directly. We argue that this method not only improves privacy but will also encourage greater compliance with good research practices than is possible with open repositories.
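
A minimal sketch of this interface idea, under the assumption that analyses run inside a data store that returns only aggregates; the class, threshold, and values are hypothetical:

    # Raw records stay inside the store; only aggregate results come back.
    from statistics import mean

    class PrivateStore:
        MIN_GROUP = 5  # refuse queries over groups too small to stay anonymous

        def __init__(self, records: list[dict]):
            self._records = records  # never exposed to the researcher

        def mean_of(self, field: str) -> float:
            values = [r[field] for r in self._records if field in r]
            if len(values) < self.MIN_GROUP:
                raise ValueError("group too small to report")
            return mean(values)

    store = PrivateStore([{"rt": v} for v in (412, 385, 430, 401, 398)])
    print(store.mean_of("rt"))  # 405.2 - an aggregate, not raw data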


2021 ◽  
Author(s):  
Caitlyn A. Hall ◽  
Sheila M. Saia ◽  
Andrea L. Popp ◽  
Nilay Dogulu ◽  
Stanislaus J. Schymanski ◽  
...  

Open, accessible, reusable, and reproducible hydrologic research can have a significant impact on the scientific community and broader society. While more individuals and organizations within the hydrology community are embracing open science practices, technical (e.g., limited coding experience), resource (e.g., open access fees), and social (e.g., fear of being scooped) challenges remain. Furthermore, there are a growing number of constantly evolving open science tools, resources, and initiatives that can seem overwhelming. These challenges and the ever-evolving nature of the open science landscape may seem insurmountable for hydrologists interested in pursuing open science. Therefore, we propose general Open Hydrology Principles to guide individual and community progress toward open science for research and education and the Open Hydrology Practical Guide to improve the accessibility of currently available tools and approaches. We aim to inform and empower hydrologists as they transition to open, accessible, reusable, and reproducible research. We discuss the benefits as well as common open science challenges and how hydrologists can overcome them. The Open Hydrology Principles and Open Hydrology Practical Guide reflect our knowledge of the current state of open hydrology; we recognize that recommendations and suggestions will evolve and expand with emerging open science infrastructures, workflows, and research experiences. Therefore, we encourage hydrologists all over the globe to join in and help advance open science by contributing to the living version of this document and by sharing open hydrology resources in the community-supported repository (https://open-hydrology.github.io).


2019 ◽  
Author(s):  
Brian A. Nosek ◽  
Lucy Ofiesh ◽  
Fielding Grasty ◽  
Nicole Pfeiffer ◽  
David Thomas Mellor ◽  
...  

The Center for Open Science (COS) will create an ECR Data Resource Hub to facilitate rigorous and reproducible research practices such as data sharing and study registration. The Hub will integrate training materials, infrastructure, community engagement, and innovation in research to advance rigorous research skills and behavior across the STEM education research community. The Hub will foster innovation in open and reproducible research practices for the breadth of research activities in education including experimental, observational, longitudinal, and qualitative methods. Finally, the Hub will connect the STEM education research community with neighboring communities to leverage shared insights and knowledge building.


2019 ◽  
Author(s):  
Richard Ramsey

The credibility of psychological science has been questioned recently, due to low levels of reproducibility and the routine use of inadequate research practices (Chambers, 2017; Open Science Collaboration, 2015; Simmons, Nelson, & Simonsohn, 2011). In response, wide-ranging reform to scientific practice has been proposed (e.g., Munafò et al., 2017), which has been dubbed a “credibility revolution” (Vazire, 2018). My aim here is to advocate why and how we should embrace such reform, and discuss the likely implications.


2020 ◽  
Author(s):  
Soufian Azouaghe ◽  
Adeyemi Adetula ◽  
Patrick S. Forscher ◽  
Dana Basnight-Brown ◽  
Nihal Ouherrou ◽  
...  

The quality of scientific research is assessed not only by its positive impact on socio-economic development and human well-being, but also by its contribution to the development of valid and reliable scientific knowledge. Thus, researchers, regardless of their scientific discipline, are expected to adopt research practices based on transparency and rigor. However, the history of science and the scientific literature teach us that a portion of scientific results is not systematically reproducible (Ioannidis, 2005). This is what is commonly known as the "replication crisis", which concerns the natural sciences as well as the social sciences; psychology is no exception.

Firstly, we aim to address some aspects of the replication crisis and Questionable Research Practices (QRPs). Secondly, we discuss how we can involve more labs in Africa in the global research process, especially through the Psychological Science Accelerator (PSA). To these ends, we will develop a tutorial for labs in Africa highlighting open science practices. In addition, we emphasize that it is essential to identify African labs' needs, the factors that hinder their participation in the PSA, and the support needed from the Western world. Finally, we discuss how to make psychological science more participatory and inclusive.


2021 ◽  
Author(s):  
Chelsea Moran ◽  
Alexandra Richard ◽  
Kate Wilson ◽  
Rosemary Twomey ◽  
Adina Coroiu

Background: Questionable research practices (QRPs) have been identified as a driving force of the replication crisis in the field of psychological science. The aim of this study was to assess the frequency of QRP use among psychology students in Canadian universities and to better understand the reasons and motivations for QRP use.

Method: Participants were psychology students attending Canadian universities, recruited via online advertising and university email invitations to complete a bilingual survey. Respondents were asked how often they and others engaged in seven QRPs. They were also asked to estimate the proportion of psychology research affected by each QRP and how acceptable they found each QRP. Data were collected through Likert-scale survey items and open-ended text responses between May 2020 and January 2021 and were analyzed using descriptive statistics and thematic analysis.

Results: In total, 425 psychology students completed the survey. The sample consisted of 40% undergraduate students, 59% graduate students, and 1% post-doctoral fellows. Overall, 64% of participants reported using at least one QRP, while 79% reported having observed others engaging in at least one QRP. The most frequently reported QRPs were p-hacking (46%), not submitting null results for publication (31%), excluding outcome measures (30%), and hypothesizing after the results are known (27%). These QRPs were also the most frequently observed in others, estimated to be the most prevalent in the field, and rated as the most acceptable. Qualitative findings show that students reported pressure to publish as a motivation for their QRP use, with some reporting that certain QRPs are justifiable in some cases (e.g., exploratory research). Students also reported that QRPs contribute to the replication crisis and to publication bias, and offered several alternatives and solutions to engaging in QRPs, such as gaining familiarity with open science practices.

Conclusions: Most Canadian psychology students in this sample report using QRPs, which is unsurprising given that they observe such practices in their research environment and estimate them to be prevalent in the field. In contrast, most students believe that QRPs are not acceptable. These results highlight the need to examine the pedagogical standards and cultural norms in academia that may promote or normalize QRPs in psychological science, in order to improve the quality and replicability of research in this field.
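
For illustration, the kind of descriptive summary reported in the Results can be computed as below; the QRP labels and example responses are invented, not the study's data:

    # Share of respondents endorsing each QRP at least once (illustrative data).
    from collections import Counter

    responses = [
        {"p_hacking": True, "null_not_submitted": True, "harking": False},
        {"p_hacking": False, "null_not_submitted": False, "harking": True},
        {"p_hacking": True, "null_not_submitted": False, "harking": False},
    ]

    counts: Counter = Counter()
    for resp in responses:
        for qrp, used in resp.items():
            counts[qrp] += int(used)

    for qrp, n in counts.most_common():
        print(f"{qrp}: {n / len(responses):.0%} of respondents reported use")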


2021 ◽  
Vol 4 (2) ◽  
pp. 251524592110181
Author(s):  
Manikya Alister ◽  
Raine Vickers-Jones ◽  
David K. Sewell ◽  
Timothy Ballard

Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments and individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.
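
A minimal sketch of recovering latent factors from attribute ratings, in the spirit of (though not identical to) the article's analysis; the simulated ratings and the choice of scikit-learn's FactorAnalysis are assumptions:

    # Fit a 6-factor model to simulated ratings of 76 study attributes.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_participants, n_attributes, n_factors = 200, 76, 6

    # Ratings driven by a few latent factors plus noise.
    loadings = rng.normal(size=(n_factors, n_attributes))
    scores = rng.normal(size=(n_participants, n_factors))
    ratings = scores @ loadings + rng.normal(scale=0.5, size=(n_participants, n_attributes))

    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    fa.fit(ratings)
    print(fa.components_.shape)  # (6, 76): each attribute's loading on each factor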

