Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices

2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Evan Mayo-Wilson ◽  
Sean Grant ◽  
Lauren Supplee ◽  
Sina Kianersi ◽  
Afsah Amin ◽  
...  

Abstract

Background: The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.

Methods: We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal's policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.

Discussion: The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/.
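
The scoring and reliability steps described in the Methods can be sketched in code. The following is a minimal illustration, not the TRUST Initiative's actual instrument: it assumes each TOP standard is rated on the guidelines' 0-3 implementation levels and that a journal's TOP Factor is the sum of those ratings; the standard names, the top_factor and percent_agreement helpers, and the use of simple percent agreement as the reliability statistic are all hypothetical simplifications.

# Minimal sketch of TOP Factor scoring (assumed: 0-3 level per standard,
# TOP Factor = sum of ratings; standard names are illustrative).
TOP_STANDARDS = [
    "data_citation", "data_transparency", "code_transparency",
    "materials_transparency", "design_analysis_reporting",
    "study_preregistration", "analysis_plan_preregistration", "replication",
]

def top_factor(ratings):
    """Sum a journal's 0-3 ratings across the TOP standards."""
    for standard, level in ratings.items():
        if standard not in TOP_STANDARDS or level not in range(4):
            raise ValueError(f"invalid rating: {standard}={level}")
    return sum(ratings.values())

def percent_agreement(rater_a, rater_b):
    """Fraction of standards on which two independent raters agree exactly."""
    matches = sum(rater_a[s] == rater_b[s] for s in TOP_STANDARDS)
    return matches / len(TOP_STANDARDS)

# Hypothetical ratings of one journal by two independent raters.
rater_a = dict.fromkeys(TOP_STANDARDS, 0) | {"data_transparency": 2, "replication": 1}
rater_b = dict.fromkeys(TOP_STANDARDS, 0) | {"data_transparency": 2}
print(top_factor(rater_a))                   # 3
print(percent_agreement(rater_a, rater_b))   # 0.875

In the protocol itself, ratings would be produced per journal from the three rating instruments (policies, procedures, practices), with reliability assessed across independent raters before scores are reconciled.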


2020 ◽  
Vol 44 (1-2) ◽  
pp. 1-2
Author(s):  
Harrison Dekker ◽  
Amy Riegelman

As guest editors, we are excited to publish this special double issue of IASSIST Quarterly. The topics of reproducibility, replicability, and transparency have been addressed in past issues of IASSIST Quarterly and at the IASSIST conference, but this double issue is entirely focused on them. In recent years, efforts “to improve the credibility of science by advancing transparency, reproducibility, rigor, and ethics in research” have gained momentum in the social sciences (Center for Effective Global Action, 2020). While few question the spirit of the reproducibility and research transparency movement, it faces significant challenges because it goes against the grain of established practice. We believe the data services community is in a unique position to help advance this movement given our data and technical expertise, training and consulting work, international scope, and established role in data management and preservation. As evidence of the movement, several initiatives exist to support research reproducibility infrastructure and data preservation efforts:

Center for Open Science (COS) / Open Science Framework (OSF)[i]
Berkeley Initiative for Transparency in the Social Sciences (BITSS)[ii]
CUrating for REproducibility (CURE)[iii]
Project TIER[iv]
Data Curation Network[v]
UK Reproducibility Network[vi]

While many new initiatives have launched in recent years, the data services community was supporting reproducibility in a variety of ways (e.g., data management, data preservation, metadata standards) through well-established consortia such as the Inter-university Consortium for Political and Social Research (ICPSR) long before “reproducibility crisis” became a commonly used phrase and Ioannidis published the essay “Why most published research findings are false” (Ioannidis, 2005).

The articles in this issue address several important aspects of reproducible research:

Identification of barriers to reproducibility, and solutions to those barriers
Evidence synthesis as it relates to transparent reporting and reproducibility
Reflection on how information professionals, researchers, and librarians perceive the reproducibility crisis and how they can partner to help solve it

The issue begins with “Reproducibility literature analysis”, which examines existing resources and literature to identify barriers to reproducibility and potential solutions. The authors have compiled a comprehensive, annotated list of resources that includes definitions of key concepts pertinent to the reproducibility crisis. The next article addresses data reuse from the perspective of a large research university. The authors examine instances of both successful and failed data reuse and identify best practices for librarians interested in conducting research involving the forms of data commonly collected in an academic library. Systematic reviews are a research approach involving the quantitative and/or qualitative synthesis of data collected through a comprehensive literature review. “Methods reporting that supports reader confidence for systematic reviews in psychology” examines the reproducibility of electronic literature searches reported in psychology systematic reviews. A fundamental challenge in reproducing or replicating computational results is the need for researchers to make available the code used to produce those results. But sharing code, and getting it to run correctly for another user, can present significant technical challenges. In “Reproducibility, preservation, and access to research with ReproZip and ReproServer”, the authors describe open source software that they are developing to address these challenges. Taking a published article and attempting to reproduce its results is an exercise sometimes used in academic courses to highlight the inherent difficulty of the process. The final article in this issue, “ReprohackNL 2019: How libraries can promote research reproducibility through community engagement”, describes an innovative library-based variation on this exercise.

Harrison Dekker, Data Librarian, University of Rhode Island
Amy Riegelman, Social Sciences Librarian, University of Minnesota

References

Center for Effective Global Action (2020) About the Berkeley Initiative for Transparency in the Social Sciences. Available at: https://www.bitss.org/about (accessed 23 June 2020).

Ioannidis, J.P. (2005) ‘Why most published research findings are false’, PLoS Medicine, 2(8), p. e124. doi: https://doi.org/10.1371/journal.pmed.0020124

[i] https://osf.io
[ii] https://www.bitss.org/
[iii] http://cure.web.unc.edu
[iv] https://www.projecttier.org/
[v] https://datacurationnetwork.org/
[vi] https://ukrn.org


2018 ◽  
Author(s):  
Andres Montealegre ◽  
William Jimenez-Leal

According to the social heuristics hypothesis, people intuitively cooperate or defect depending on which behavior is beneficial in their interactions: if cooperation is beneficial, people intuitively cooperate, but if defection is beneficial, they intuitively defect, whereas deliberation promotes defection. Here, we tested two novel predictions regarding the role of trust in the social heuristics hypothesis: first, whether trust promotes intuitive cooperation; second, whether a preference for intuitive or deliberative thinking moderates the effect of trust on cooperation. In addition, we examined whether deciding intuitively promotes cooperation compared to deciding deliberatively. To evaluate these predictions, we conducted a lab study in Colombia and an online study in the United Kingdom (N = 1,066; one study was pre-registered). Unexpectedly, higher trust failed to promote intuitive cooperation, though it did promote cooperation overall. In addition, a preference for intuitive or deliberative thinking failed to moderate the effect of trust on cooperation, although preferring to think intuitively increased cooperation. Moreover, deciding intuitively failed to promote cooperation, and equivalence testing confirmed that this null result reflects the absence of an effect rather than a lack of statistical power (equivalence bounds: d = -0.26 and 0.26). An intuitive cooperation effect emerged when non-compliant participants were excluded, but this effect could be due to selection biases. Taken together, most results failed to support the social heuristics hypothesis. We conclude by discussing implications, future directions, and limitations. The materials, data, and code are available on the Open Science Framework (https://osf.io/939jv/).
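
The equivalence test reported here can be illustrated with two one-sided t-tests (TOST). The sketch below is not the authors' analysis code: the simulated cooperation scores, group sizes, and variable names are hypothetical; only the equivalence bounds of d = ±0.26 come from the abstract.

import numpy as np
from scipy import stats

# Hypothetical data: cooperation scores under intuitive vs. deliberative decisions.
rng = np.random.default_rng(0)
intuitive = rng.normal(0.55, 0.30, 300)
deliberative = rng.normal(0.55, 0.30, 300)

n1, n2 = len(intuitive), len(deliberative)
s_pooled = np.sqrt(((n1 - 1) * intuitive.var(ddof=1)
                    + (n2 - 1) * deliberative.var(ddof=1)) / (n1 + n2 - 2))
delta = 0.26 * s_pooled                 # bounds d = +/-0.26 on the raw scale
se = s_pooled * np.sqrt(1 / n1 + 1 / n2)
df = n1 + n2 - 2
diff = intuitive.mean() - deliberative.mean()

# TOST: reject both one-sided nulls to conclude the effect lies within the bounds.
p_lower = 1 - stats.t.cdf((diff + delta) / se, df)   # H0: diff <= -delta
p_upper = stats.t.cdf((diff - delta) / se, df)       # H0: diff >= +delta
print(f"TOST p = {max(p_lower, p_upper):.4f} (equivalence if p < .05)")

A small TOST p-value supports the abstract's conclusion that the null result reflects an effect genuinely inside the bounds, not merely an underpowered comparison.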


2002 ◽  
Vol 1 (3) ◽  
pp. 215-224 ◽  
Author(s):  
Ken Young ◽  
Deborah Ashby ◽  
Annette Boaz ◽  
Lesley Grayson

There is a growing interest in ‘evidence-based policy making’ in the UK. However, there remains some confusion about what evidence-based policy making actually means. This paper outlines some of the models used to understand how evidence is thought to shape or inform policy, in order to explore the assumptions underlying ‘evidence-based policy making’. By way of example, it considers the process of evidence seeking, and in particular the systematic review as a presumed ‘gold standard’ of the evidence-based policy (EBP) movement. It highlights some of the opportunities and challenges this approach presents for policy research. The final part of the paper outlines some questions of capacity that need to be addressed if the social sciences are to make a more effective contribution to policy debate in Britain.


2015 ◽  
Vol 14 (4) ◽  
pp. 448-491 ◽  
Author(s):  
T. M. Tong ◽  
J. Asare ◽  
E. R. Rwenyagila ◽  
V. Anye ◽  
O. K. Oyewole ◽  
...  

Access to electricity remains a major challenge for the roughly 2 billion people who live in rural and urban off-grid areas on incomes of $1-2/day. Since the cost of connecting these people to the grid is high, there is a need to explore alternative energy solutions for the provision of electricity in such contexts. There is also a need to develop new insights for the formulation of evidence-based policy that could enable strategies to provide electricity to people living in off-grid areas. This paper presents the results of a survey that provides insights for the formulation of evidence-based policy on the adoption of solar lanterns in rural/urban off-grid areas. The two-year questionnaire study was carried out in Mpala Village in the Laikipia district of Kenya. The study identifies the factors that resulted in an adoption rate of 96% and a 14.7% decrease in annual family expenditures. The social and health impacts are also elucidated, and the implications of the results for the formulation of evidence-based solar energy policy in developing countries are discussed.


2002 ◽  
Vol 1 (3) ◽  
pp. 213-214 ◽  
Author(s):  
Miriam E David

The concept of evidence-based policy and practice has many origins, but its relation to the growth of the social sciences is arguably the most important. The use of the social sciences for both understanding and transforming social policies and political systems has come to be assumed, complex and problematic though this may be. The concept is also closely linked with globalisation, technological developments, and the ‘knowledge economy’. Thus the notions of ‘evidence’ and social science research have often been elided with political movements for social and economic change. In other contexts, these notions have been contextualised, so that ‘evidence’ and research are not deemed to be the same. Indeed, it is possible to argue that the notion of legal ‘evidence’ illustrates just how ideological evidence can be: how it can be used to marshal particular arguments and sustain a specific case rather than present it dispassionately.


2011 ◽  
Vol 32 (4) ◽  
pp. 518-546 ◽  
Author(s):  
Ray Pawson ◽  
Geoff Wong ◽  
Lesley Owen

The authors present a case study examining the potential for policies to be “evidence-based.” To what extent is it possible to say that a decision to implement a complex social intervention is warranted on the basis of available empirical data? The case chosen is whether there is sufficient evidence to justify banning smoking in cars carrying children. The numerous assumptions underpinning such legislation are elicited, the weight and validity of evidence for each is appraised, and a mixed picture emerges. Certain propositions seem well supported; others are not yet proven and possibly unknowable. The authors argue that this is the standard predicament of evidence-based policy. Evidence does not come in finite chunks offering certainty and security to policy decisions. Rather, evidence-based policy is an accumulative process in which the data pursue but never quite capture unfolding policy problems. The whole point is the steady conversion of “unknowns” to “knowns.”

