Evaluating implementation of the Transparency and Openness Promotion Guidelines: The TRUST Process for rating journal policies, procedures, and practices

2021
Vol 6 (1)
Author(s):
Evan Mayo-Wilson
Sean Grant
Lauren Supplee
Sina Kianersi
Afsah Amin
...

Abstract

Background: The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.

Methods: We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal’s policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.

Discussion: The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/.
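The abstract mentions determining a journal's TOP Factor score and calculating the reliability of ratings. A minimal sketch of that arithmetic follows, assuming the TOP Factor convention of rating each of the eight modular TOP standards from 0 (no policy) to 3 (strongest level) and summing; the standard names, example ratings, and percent-agreement measure below are illustrative assumptions, not the TRUST rating instruments (those are available at the OSF link above).

```python
# Minimal sketch of TOP Factor scoring and a simple agreement check.
# Standard names, example ratings, and the percent-agreement measure
# are illustrative assumptions, not the TRUST rating instruments.

# The eight modular TOP standards, each rated 0 (no policy) to 3.
TOP_STANDARDS = [
    "data_citation",
    "data_transparency",
    "code_transparency",
    "materials_transparency",
    "design_analysis_reporting",
    "study_preregistration",
    "analysis_plan_preregistration",
    "replication",
]

def top_factor(ratings: dict[str, int]) -> int:
    """Sum the level (0-3) assigned to each TOP standard."""
    for standard, level in ratings.items():
        if standard not in TOP_STANDARDS or not 0 <= level <= 3:
            raise ValueError(f"invalid rating: {standard}={level}")
    return sum(ratings.values())

def percent_agreement(rater_a: dict[str, int], rater_b: dict[str, int]) -> float:
    """Fraction of standards on which two raters assign the same level."""
    matches = sum(rater_a[s] == rater_b[s] for s in TOP_STANDARDS)
    return matches / len(TOP_STANDARDS)

# Hypothetical ratings of one journal's instructions to authors.
rater_a = dict.fromkeys(TOP_STANDARDS, 0) | {"data_transparency": 2, "study_preregistration": 1}
rater_b = dict.fromkeys(TOP_STANDARDS, 0) | {"data_transparency": 2, "study_preregistration": 2}

print(top_factor(rater_a))                  # 3
print(percent_agreement(rater_a, rater_b))  # 0.875
```

Per the abstract, the TRUST instruments rate procedures (submission systems) and practices (published articles) in addition to policies, so a journal would receive parallel ratings rather than the single policy score sketched here.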


2020
Vol 44 (1-2)
pp. 1-2
Author(s):
Harrison Dekker
Amy Riegelman

As guest editors, we are excited to publish this special double issue of IASSIST Quarterly. The topics of reproducibility, replicability, and transparency have been addressed in past issues of IASSIST Quarterly and at the IASSIST conference, but this double issue is entirely focused on these issues. In recent years, efforts “to improve the credibility of science by advancing transparency, reproducibility, rigor, and ethics in research” have gained momentum in the social sciences (Center for Effective Global Action, 2020). While few question the spirit of the reproducibility and research transparency movement, it faces significant challenges because it goes against the grain of established practice. We believe the data services community is in a unique position to help advance this movement given our data and technical expertise, training and consulting work, international scope, established role in data management and preservation, and more. As evidence of the movement, several initiatives exist to support research reproducibility infrastructure and data preservation efforts:

- Center for Open Science (COS) / Open Science Framework (OSF)[i]
- Berkeley Initiative for Transparency in the Social Sciences (BITSS)[ii]
- CUrating for REproducibility (CURE)[iii]
- Project TIER[iv]
- Data Curation Network[v]
- UK Reproducibility Network[vi]

While many new initiatives have launched in recent years, we know that the data services community was supporting reproducibility in a variety of ways (e.g., data management, data preservation, metadata standards) through well-established consortia such as the Inter-university Consortium for Political and Social Research (ICPSR) long before “reproducibility crisis” became a commonly used phrase and Ioannidis published the essay “Why Most Published Research Findings Are False” (Ioannidis, 2005).

The articles in this issue address several very important aspects of reproducible research:

- Identification of barriers to reproducibility and solutions to such barriers
- Evidence synthesis as related to transparent reporting and reproducibility
- Reflection on how information professionals, researchers, and librarians perceive the reproducibility crisis and how they can partner to help solve it

The issue begins with “Reproducibility literature analysis,” which looks at existing resources and literature to identify barriers to reproducibility and potential solutions. The authors have compiled a comprehensive list of resources with annotations that include definitions of key concepts pertinent to the reproducibility crisis. The next article addresses data reuse from the perspective of a large research university. The authors examine instances of both successful and failed data reuse and identify best practices for librarians interested in conducting research involving the common forms of data collected in an academic library. Systematic reviews are a research approach that involves the quantitative and/or qualitative synthesis of data collected through a comprehensive literature review. “Methods reporting that supports reader confidence for systematic reviews in psychology” looks at the reproducibility of electronic literature searches reported in psychology systematic reviews. A fundamental challenge in reproducing or replicating computational results is the need for researchers to make available the code used in producing these results. But sharing code and having it run correctly for another user can present significant technical challenges.
In “Reproducibility, preservation, and access to research with ReproZip and ReproServer,” the authors describe open source software that they are developing to address these challenges. Taking a published article and attempting to reproduce the results is an exercise that is sometimes used in academic courses to highlight the inherent difficulty of the process. The final article in this issue, “ReprohackNL 2019: How libraries can promote research reproducibility through community engagement,” describes an innovative library-based variation on this exercise.

Harrison Dekker, Data Librarian, University of Rhode Island
Amy Riegelman, Social Sciences Librarian, University of Minnesota

References

Center for Effective Global Action (2020) About the Berkeley Initiative for Transparency in the Social Sciences. Available at: https://www.bitss.org/about (accessed 23 June 2020).

Ioannidis, J.P. (2005) ‘Why most published research findings are false’, PLoS Medicine, 2(8), p. e124. doi: https://doi.org/10.1371/journal.pmed.0020124

[i] https://osf.io
[ii] https://www.bitss.org/
[iii] http://cure.web.unc.edu
[iv] https://www.projecttier.org/
[v] https://datacurationnetwork.org/
[vi] https://ukrn.org


2008
Vol 17 (2)
pp. 43-49
Author(s):
James L. Coyle

Abstract The modern clinician is a research consumer. Rehabilitation of oropharyngeal impairments, and prevention of the adverse outcomes of dysphagia, requires the clinician to select interventions for which there is evidence of a reasonable likelihood of a successful, important outcome. The purpose of this paper is to provide strategies for evaluating published research regarding treatment of oropharyngeal dysphagia. The article uses tutorial material and examples to inform and educate practitioners in methods of appraising published research, and it provides and encourages the use of methods for efficiently evaluating the validity and clinical importance of published research. Additionally, it discusses the ethical obligation we, as practitioners, have to use evidence-based treatment selection methods and to measure patient performance during therapy. The reader is provided with tactics for evaluating treatment studies to establish a study's validity and, thereby, objectively select interventions. The importance of avoiding subjective or unsubstantiated claims and using objective methods of generating empirical clinical evidence is emphasized. The ability to evaluate the quality of research provides clinicians with objective intervention selection as an essential component of evidence-based clinical practice.

ASHA Code of Ethics (2003):
- Principle I, Rule F: “Individuals shall fully inform the persons they serve of the nature and possible effects of services rendered and products dispensed…” (p. 2)
- Principle I, Rule G: “Individuals shall evaluate the effectiveness of services rendered and of products dispensed and shall provide services or dispense products only when benefit can reasonably be expected.” (p. 2)
- Principle IV, Rule G: “Individuals shall not provide professional services without exercising independent professional judgment, regardless of referral source or prescription.” (p. 4)


2019
Vol 18 (1)
pp. 1
Author(s):
Antonio Marcos Andrade

In 2005, the Greek researcher John Ioannidis, a professor at Stanford University, published an article in PLOS Medicine entitled “Why most published research findings are false” [1]. One of the pioneers of so-called “meta-science,” the discipline that analyzes the work of other scientists, he assessed whether researchers are respecting the fundamental rules that define good science. At the time, this work was met with great astonishment and indignation by researchers, because it called the credibility of science into question.

For many scientists, this happens because the way knowledge is produced has changed, to the point that it would be almost unrecognizable to the great geniuses of past centuries. In earlier times, data were analyzed in their raw state, and authors went to the academies to reproduce their experiments in front of everyone; this has been lost now that studies are based on six million sheets of data. Another practice that used to guarantee the reliability of findings was that scientists, regardless of their titles and the relevance of their previous discoveries, had to demonstrate their new findings before their peers, who in turn replicated them in their own laboratories before giving credence to the new discovery. Today, however, these guarantees are being forgotten, calling into question the validity of many studies in the health field.

Concerned about the low quality of current research, a group of researchers met in 2017 and produced a manifesto that has just been published in the British Medical Journal, “Evidence Based Medicine Manifesto for Better Health Care” [2]. The document is an initiative to improve the quality of health evidence. It discusses the possible causes of poor scientific reliability and presents some alternatives for correcting the current scenario. According to its authors, the problems are present at the different phases of research:

- Objective-setting phase: useless objectives. Much of what is produced has neither scientific nor clinical impact, because researchers are more interested in producing a large number of articles than in generating knowledge. Almost 85% of studies generate no direct benefit to humanity.
- Study design phase: studies with undersized samples that do not prevent random error, and methods that do not prevent systematic error (bias in sample selection, lack of proper randomization, confounding bias, overly open-ended outcomes). Around 35% of researchers admit to having constructed their methods in a biased manner.
- Data analysis phase: thirty-five percent of researchers admit to inadequate practices when analyzing data. Many acknowledge running several analyses simultaneously and turning those that reach statistical significance into the study's objectives. Journals also share the blame in this process, since papers with positive results are accepted more readily (twice as often) than papers with negative results.
- Review phase: many health-science reviewers have not been trained to recognize potential systematic and random errors in manuscripts.

In short, researchers and scientific journals need to reflect on this. Only then will we have higher-quality evidence, adequate statistical estimates, well-developed critical and analytical thinking, and prevention of the most common cognitive biases in reasoning.
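The data-analysis problem described above (running many analyses simultaneously and promoting the statistically significant ones to study objectives) has simple arithmetic behind it. A minimal sketch, as a generic illustration rather than a calculation from the manifesto itself:

```python
# When k independent tests of true null hypotheses are each run at
# significance level alpha, the chance of at least one false positive is
#   P = 1 - (1 - alpha) ** k
# Generic illustration of the "data analysis phase" problem above,
# not a calculation from the manifesto.
alpha = 0.05
for k in (1, 5, 14, 20):
    p = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests at alpha={alpha}: {p:.0%} chance of a spurious finding")
# 1 test: 5%; 5 tests: 23%; 14 tests: 51%; 20 tests: 64%
```

With just 14 unreported exploratory tests, a spurious “finding” becomes more likely than not, which is why selective reporting of significant analyses so reliably pollutes the literature.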


Author(s):
Petah Atkinson
Marilyn Baird
Karen Adams

Yarning as a research method is grounded in an Aboriginal culturally specified process. Relationality is significant to the Research Yarn; however, it is a missing feature of published research findings. This article aims to address this. The research question was: what can an analysis of Social and Family Yarning tell us about the relationality that underpins a Research Yarn? Participant recruitment occurred using convenience sampling, and data collection involved the Yarning method. Data analysis proceeded in five steps, featuring Collaborative Yarning and Mapping. Commonality existed between researcher and participants, predominantly through experiences of being part of an Aboriginal community, via Aboriginal organisations and Country. This suggests shared explicit and tacit knowledge and the generation of thick data. Researchers should report on their experience with Yarning, the types of Yarning they are using, and the relationality generated from the Social, Family and Research Yarn.


BMJ Open
2020
Vol 10 (11)
pp. e041238
Author(s):
Maxence Ouafik
Laetitia Buret
Jean-Luc Belche
Beatrice Scholtes

Introduction: Men who have sex with men (MSM) are disproportionately affected by a number of health conditions that are associated with violence, stigma, discrimination, poverty, unemployment or poor healthcare access. In recent years, syndemic theory has provided a framework to explore the interactions of these health disparities on the biological and social levels. Research in this field has been increasing for the past 10 years, but methodologies have evolved and sometimes differed from the original concept. The aim of this paper is to provide an overview of the existing literature on syndemic theory applied to MSM in order to identify knowledge gaps, inform future investigations and expand our understanding of the complex interactions between avoidable health conditions in a vulnerable population.

Methods and analysis: The proposed scoping review will follow the methodological framework developed by Arksey and O’Malley, with subsequent enhancements by Levac et al, Colquhoun et al and Peters et al, as well as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews. A systematic search of MEDLINE, PsycInfo, Scopus, the Cochrane Central Register of Controlled Trials and ProQuest Sociological Abstracts will be conducted. Reference lists of the included studies will be hand-searched for additional studies. Screening and data charting will be carried out using DistillerSR. Data collating, summarising and reporting will be performed using R and RStudio. Tabular and graphical summaries will be presented, alongside an evidence map and a descriptive overview of the main results.

Ethics and dissemination: This scoping review does not require ethical approval. Data and code will be made accessible after manuscript submission. Final results will be disseminated through publication in a peer-reviewed journal and through collaboration with grassroots Lesbian, Gay, Bisexual, Transgender, Queer, Intersex and Asexual (LGBTQIA+) organisations.

Registration: This protocol was registered on manuscript submission on the Open Science Framework at the following address: https://osf.io/jwxtd; DOI: 10.17605/OSF.IO/JWXTD.
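Purely as an illustration of the kind of “evidence map” summary the protocol describes, here is a sketch of cross-tabulating charted studies by pairs of co-occurring conditions. It is written in Python rather than the authors' stated tools (DistillerSR for charting, R for summaries), and every field name and record is hypothetical.

```python
# Hypothetical sketch of an evidence-map style summary for a scoping
# review on syndemics: count how often pairs of conditions co-occur
# across the charted studies. All data and field names are invented.
from collections import Counter
from itertools import combinations

charted_studies = [
    {"id": "S1", "conditions": ["HIV", "depression"]},
    {"id": "S2", "conditions": ["HIV", "substance use", "depression"]},
    {"id": "S3", "conditions": ["substance use", "violence"]},
]

pair_counts = Counter()
for study in charted_studies:
    # Each unordered pair of conditions studied together counts once.
    for pair in combinations(sorted(study["conditions"]), 2):
        pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} x {b}: {n} studies")
```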


Author(s):
Lauren H. Supplee
Robert T. Ammerman
Anne K. Duggan
John A. List
Dana Suskind

2018
Vol 28 (03)
pp. 254-257
Author(s):
Sinan Guloksuz
Jim van Os

Abstract We had a long way to go before we felt comfortable even discussing the issues revolving around the concept of ‘schizophrenia’, let alone reckoning on mere semantic revision. In this editorial, we aim to extend our discussion of the reasons behind the slow death of the concept of ‘schizophrenia’ and the benefits of changing the name and embracing a spectrum approach with an umbrella psychosis spectrum disorder (PSD) category (similar to autism spectrum disorder) that goes further than a mere semantic revision. We cover the topic of renaming by providing the most pertinent points, categorised under five domains: reasons, signals, challenges, promises and steps for the change. Admittedly, even a modest revision, such as classifying all psychotic disorder categories under an umbrella category of PSD and abolishing the term schizophrenia, requires careful deliberation and some initial effort, but the revision is well worth it considering the benefits in the long run. Renaming a particular form of mental suffering should be accompanied by a broader debate of the entire diagnosis-evidence-based-practice (EBP)-symptom-reduction model as the normative factor driving the content and organisation of mental health services: a model that may be detached from patients’ needs and reality, overlooks the trans-syndromal structure of mental difficulties, appraises the significance of the technical features over the relational and ritual components of care, and underestimates the lack of EBP group-to-individual generalisability. Individuals may make great strides in attaining well-being by accommodating to living with mental vulnerabilities through building resilience in the social and existential domains. Changing the name and the concept of ‘schizophrenia’, which goes beyond a mere semantic revision, may become the first step in catalysing the process of modernising psychiatric science and services worldwide.


2019
Author(s):
Ineke Wessel
Helen Niemeyer

Adopting Registered Reports is an important step for the European Journal of Psychotraumatology in promoting open science practices in the field of psychotrauma research. However, adopting these practices requires us, as individual researchers, to change our perspective fundamentally. We need to put aside fears of being scooped, adopt a permissive stance towards making mistakes, and accept that null results should be part of the scientific record. Journal policies that reinforce openness and transparency can facilitate such an attitude change in individual researchers.

