An Open Science Workflow for More Credible, Rigorous Research

2021
Author(s):
Katherine S. Corker

Part of what distinguishes science from other ways of knowing is that scientists show their work. Yet when probed, it turns out that much of the process of research is hidden away: in personal files, in undocumented conversations, in point-and-click menus, and so on. In recent years, a movement towards more open science has arisen in psychology. Open science practices capture a broad swath of activities designed to take parts of the research process that were previously known only to a research team and make them more broadly accessible (e.g., open data, open analysis code, pre-registration, open research materials). Such practices increase the value of research by increasing transparency, which may in turn facilitate higher research quality. Plus, open science practices are now required at many journals. This chapter will introduce open science practices and provide plentiful resources for researchers seeking to integrate these practices into their workflow.

Author(s):
Laura Fortunato,
Mark Galassi

Free and open source software (FOSS) is any computer program released under a licence that grants users rights to run the program for any purpose, to study it, to modify it, and to redistribute it in original or modified form. Our aim is to explore the intersection between FOSS and computational reproducibility. We begin by situating FOSS in relation to other ‘open’ initiatives, and specifically open science, open research, and open scholarship. In this context, we argue that anyone who actively contributes to the research process today is a computational researcher, in that they use computers to manage and store information. We then provide a primer to FOSS suitable for anyone concerned with research quality and sustainability—including researchers in any field, as well as support staff, administrators, publishers, funders, and so on. Next, we illustrate how the notions introduced in the primer apply to resources for scientific computing, with reference to the GNU Scientific Library as a case study. We conclude by discussing why the common interpretation of ‘open source’ as ‘open code’ is misplaced, and we use this example to articulate the role of FOSS in research and scholarship today. This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.


2021
Vol 7 (1)
Author(s):
Amanda Kay Montoya,
William Leo Donald Krenzer,
Jessica Louise Fossum

Registered reports are a publication workflow in which the decision to publish is made prior to data collection and analysis and thus cannot depend on the outcome of the study. An increasing number of journals have adopted this mechanism, but previous research suggests that submission rates remain relatively low. We conducted a census of journals publishing registered reports (N = 278) using independent coders to collect information from submission guidelines, with the goal of documenting journals’ early adoption of registered reports. Our results show that the majority of journals adopting registered reports are in psychology, and it typically takes about a year after adoption to publish the first registered report. Still, many journals have not published their first registered report. There is high variability in the impact of journals adopting registered reports. Many journals do not include concrete information about policies that address concerns about registered reports (e.g., exploratory analyses); however, those that do typically allow these practices with some restrictions. Additionally, other open science practices are commonly encouraged or required as part of the registered report process, especially open data and materials. Overall, many journals did not provide information for many of the fields coded by the research team, which could be a barrier to submission for some authors. Though the majority of journals allow authors to be anonymous during the review process, a sizable portion do not, which could also be a barrier to submission. We conclude with future directions and implications for authors of registered reports, journals that have already adopted registered reports, and journals that may consider adopting registered reports in the future.


2021
Author(s):
Eric R. Louderback,
Sally M Gainsbury,
Robert Heirene,
Karen Amichia,
Alessandra Grossman,
...

The replication crisis has stimulated researchers around the world to adopt open science research practices intended to reduce publication bias and improve research quality. Open science practices include study pre-registration, open data, open publication, and avoiding methods that can lead to publication bias and low replication rates. Although gambling studies uses similar research methods to behavioral research fields that have struggled with replication, we know little about the uptake of open science research practices in gambling-focused research. We conducted a scoping review of 500 recent (1/1/2016 – 12/1/2019) studies focused on gambling and problem gambling to examine the use of open science and transparent research practices. Our results showed that only a small percentage of studies used most individual practices: although 54.6% (95% CI: [50.2, 58.9]) of studies used at least one of nine open science practices, the prevalence of each practice was 1.6% for pre-registration (95% CI: [0.8, 3.1]), 3.2% for open data (95% CI: [2.0, 5.1]), 0% for open notebook, 35.2% for open access (95% CI: [31.1, 39.5]), 7.8% for open materials (95% CI: [5.8, 10.5]), 1.4% for open code (95% CI: [0.7, 2.9]), and 15.0% for preprint posting (95% CI: [12.1, 18.4]). In all, 6.4% (95% CI: [4.6, 8.9]) of studies used a power analysis and 2.4% (95% CI: [1.4, 4.2]) were replication studies. Exploratory analyses showed that studies that used any open science practice, and open access in particular, had higher citation counts. We suggest several practical ways to enhance the uptake of open science principles and practices both within gambling studies and in science more broadly.


2020
Vol 36 (3)
pp. 263-279
Author(s):
Isabel Steinhardt

Openness in science and education is increasing in importance within the digital knowledge society. So far, less attention has been paid to teaching Open Science in bachelor’s degree programmes or in qualitative methods courses. The aim of this article is therefore to use a seminar example to explore which Open Science practices can be taught in qualitative research and how digital tools can be involved. The seminar focused on the following practices: open data practices; the practice of using the free and open source tool “Collaborative Online Interpretation”; and the practice of participating, cooperating, collaborating and contributing through participatory technologies and social networks. To learn Open Science practices, the students were involved in a qualitative research project about the “Use of digital technologies for the study and habitus of students”. The study shows that Open Data practices are easy to teach, whereas the use of free and open source tools and of participatory technologies for collaboration, participation, cooperation and contribution is more difficult. In addition, a cultural shift would have to take place within German universities to promote Open Science practices in general.


2021
Author(s):
Tamara Kalandadze,
Sara Ann Hart

The increasing adoption of open science practices in the last decade has been changing the scientific landscape across fields. However, developmental science has been relatively slow to adopt open science practices. To address this issue, we followed the format of Crüwell et al. (2019) and created summaries and an annotated list of informative and actionable resources discussing ten topics in developmental science: open science; reproducibility and replication; open data, materials and code; open access; preregistration; registered reports; replication; incentives; and collaborative developmental science. This article offers researchers and students in developmental science a starting point for understanding how open science intersects with developmental science. After familiarizing themselves with this article, developmental scientists should understand the core tenets of open and reproducible developmental science and feel motivated to begin applying open science practices in their workflow.


Author(s):
Katarzyna Biernacka,
Niels Pinkwart

The relevance of open research data is already acknowledged in many disciplines. Demanded by publishers, funders, and research institutions, the amount of published research data increases every day. In learning analytics, though, it seems that data are not sufficiently published and re-used. This chapter discusses some of the progress that the learning analytics community has made in shifting towards open practices, and it addresses the barriers that researchers in this discipline face. As an introduction, the open science movement and the term itself are explained. The importance of its principles is demonstrated before the main focus is put on open data. The main emphasis, though, lies on the question: why are the advantages of publishing research data not capitalized on in the field of learning analytics? What are the barriers? The authors evaluate these barriers, investigate their causes, and consider some potential ways forward in the form of a toolkit and guidelines.


eLife
2016
Vol 5
Author(s):
Erin C McKiernan,
Philip E Bourne,
C Titus Brown,
Stuart Buck,
Amye Kenall,
...

Open access, open data, open source and other open scholarship practices are growing in popularity and necessity. However, widespread adoption of these practices has not yet been achieved. One reason is that researchers are uncertain about how sharing their work will affect their careers. We review literature demonstrating that open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities. These findings are evidence that open research practices bring significant benefits to researchers relative to more traditional closed practices.


2020
Author(s):
Emily Kate Farran,
Priya Silverstein,
Aminath A. Ameen,
Iliana Misheva,
Camilla Gilmore

Open research is best described as “an umbrella term used to refer to the concepts of openness, transparency, rigor, reproducibility, replicability, and accumulation of knowledge” (Crüwell et al., 2019, p. 3). Although many open research practices have commonly been discussed under the term “open science”, open research applies to all disciplines. If the concept of open research is new to you, it may be difficult to determine how to apply open research practices to your own research. The aim of this document is to provide resources and examples of open research practices that are relevant to your discipline. The document lists case studies of open research per discipline, and resources per discipline (organised as: general, open methods, open data, open output and open education).


2020
Author(s):
Denis Cousineau

Born-open data experiments, in which data are made publicly available automatically as they are collected, are encouraged as part of better open science practice. To be adopted, born-open data practices must be easy to implement. Herein, I introduce a package for E-Prime that automatically saves data files to a GitHub repository. The BornOpenData package for E-Prime works seamlessly and performs the upload as soon as the experiment finishes, so there are no additional steps to perform beyond placing a package call within E-Prime. Because E-Prime files are not standard tab-separated files, I also provide an R function that retrieves the data directly from GitHub into a data frame ready to be analyzed. At this time, there are no standards as to what constitutes an adequate open-access data repository, so I propose a few suggestions that any future born-open data system could follow for easier use by the research community.
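The abstract's retrieval step is provided as an R function; purely as a language-agnostic illustration of the same idea, a minimal Python sketch might look like the following. The URL, file layout, and column names here are hypothetical assumptions for demonstration, not the package's actual format or API:

```python
import csv
import io
import urllib.request


def parse_tsv(text: str) -> list[dict]:
    """Parse tab-separated text (header row first) into a list of
    {column: value} row dictionaries."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return list(reader)


def fetch_born_open_data(raw_url: str) -> list[dict]:
    """Download a tab-separated data file, e.g. from a GitHub 'raw'
    URL, and parse it into rows ready for analysis."""
    with urllib.request.urlopen(raw_url) as resp:
        # utf-8-sig strips a byte-order mark if the file has one
        text = resp.read().decode("utf-8-sig")
    return parse_tsv(text)


# Offline example with an in-memory file (hypothetical columns):
sample = "subject\trt\n1\t532\n2\t641\n"
rows = parse_tsv(sample)
# rows[0] is {"subject": "1", "rt": "532"}
```

In practice `fetch_born_open_data` would be pointed at the raw-content URL of the repository file; separating the network fetch from the parsing keeps the parsing step testable without a connection.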


2022
Author(s):
Bermond Scoggins,
Matthew Peter Robertson

The scientific method is predicated on transparency -- yet the pace at which transparent research practices are being adopted by the scientific community is slow. The replication crisis in psychology showed that published findings employing statistical inference are threatened by undetected errors, data manipulation, and data falsification. To mitigate these problems and bolster research credibility, open data and preregistration have increasingly been adopted in the natural and social sciences. While many political science and international relations journals have committed to implementing these reforms, the extent of open science practices is unknown. We bring large-scale text analysis and machine learning classifiers to bear on the question. Using population-level data -- 93,931 articles across the top 160 political science and IR journals between 2010 and 2021 -- we find that approximately 21% of all statistical inference papers have open data, and 5% of all experiments are preregistered. Despite this shortfall, the example of leading journals in the field shows that change is feasible and can be effected quickly.

