Transparent, Reproducible, and Open Science Practices of Published Literature in Dermatology Journals: Cross-Sectional Analysis (Preprint)

2019 ◽  
Author(s):  
J Michael Anderson ◽  
Andrew Niemann ◽  
Austin L Johnson ◽  
Courtney Cook ◽  
Daniel Tritz ◽  
...  

BACKGROUND Reproducible research is a foundational component of scientific advancement, yet little is known regarding the extent of reproducible research within the dermatology literature. OBJECTIVE This study aimed to determine the quality and transparency of the literature in dermatology journals by evaluating the presence of 8 indicators of reproducible and transparent research practices. METHODS Using a cross-sectional study design, we conducted an advanced search of publications in dermatology journals from the National Library of Medicine catalog. Our search included articles published between January 1, 2014, and December 31, 2018. After generating a list of eligible dermatology publications, we searched for full-text PDF versions using Open Access Button, Google Scholar, and PubMed. Publications were analyzed for 8 indicators of reproducibility and transparency—availability of materials, data, analysis scripts, protocol, preregistration, conflict of interest statement, funding statement, and open access—using a pilot-tested Google Form. RESULTS After exclusions, 127 studies with empirical data were included in our analysis. Certain indicators were more poorly reported than others. Most publications (113, 88.9%) did not provide the unmodified, raw data used to make computations, 124 (97.6%) failed to make the complete protocol available, and 126 (99.2%) did not include step-by-step analysis scripts. CONCLUSIONS Our sample of studies published in dermatology journals does not appear to include sufficient detail for the studies to be accurately and successfully reproduced in their entirety. Solutions to increase the quality, reproducibility, and transparency of dermatology research are warranted. More robust reporting of key methodological details, open data sharing, and stricter journal standards regarding disclosure of study materials might help to improve the climate of reproducible research in dermatology.
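The headline figures can be recomputed directly from the counts the abstract reports. Below is a minimal illustrative sketch (not the authors' analysis code; `pct_lacking` is an invented helper). Note that 113/127 rounds to 89.0% rather than the reported 88.9%, likely a difference in rounding convention.

```python
# Illustrative recomputation of the abstract's percentages; not the
# authors' analysis code. Counts of publications lacking each
# indicator are taken from the abstract (denominator n = 127).
def pct_lacking(lacking: int, total: int) -> float:
    """Percentage of publications lacking an indicator, to 1 decimal place."""
    return round(100 * lacking / total, 1)

counts = {"raw data": 113, "complete protocol": 124, "analysis scripts": 126}
for indicator, n_lacking in counts.items():
    print(f"{indicator}: {pct_lacking(n_lacking, 127)}% of 127 studies")
```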

10.2196/16078 ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. e16078


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0244529
Author(s):  
Ksenija Baždarić ◽  
Iva Vrkić ◽  
Evgenia Arh ◽  
Martina Mavrinac ◽  
Maja Gligora Marković ◽  
...  

Attitudes towards open peer review, open data and the use of preprints influence scientists' engagement with those practices, yet validated questionnaires measuring these attitudes are lacking. The goal of our study was to construct and validate such a questionnaire and use it to assess the attitudes of Croatian scientists. We first developed a 21-item questionnaire called Attitudes towards Open data sharing, preprinting, and peer-review (ATOPP), which had a reliable four-factor structure and measured attitudes towards open data, preprint servers, open peer review and open peer review in small scientific communities. We then used the ATOPP to explore the attitudes of Croatian scientists (n = 541) towards these topics and to assess the association of their attitudes with their open science practices and demographic information. Overall, Croatian scientists' attitudes towards these topics were generally neutral, with a median (Md) score of 3.3 out of a maximum of 5. We found no gender (P = 0.995) or field (P = 0.523) differences in their attitudes. However, the attitudes of scientists who had previously engaged in open peer review or preprinting were higher than those of scientists who had not (Md 3.5 vs. 3.3, P<0.001, and Md 3.6 vs. 3.3, P<0.001, respectively). Further research is needed to determine optimal ways of improving scientists' attitudes and increasing their open science practices.
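As a rough illustration of how a 21-item, 5-point Likert instrument such as the ATOPP might be scored, here is a minimal sketch. The item-to-subscale mapping below is hypothetical; the paper defines the actual four-factor structure.

```python
from statistics import median

# Hypothetical scoring sketch for a 21-item, 5-point Likert questionnaire
# such as ATOPP. The item-to-subscale mapping below is invented for
# illustration; the paper defines the real four-factor structure.
SUBSCALES = {
    "open_data": range(0, 6),
    "preprint_servers": range(6, 11),
    "open_peer_review": range(11, 16),
    "opr_small_communities": range(16, 21),
}

def score(responses):
    """responses: 21 answers coded 1 (strongly disagree) .. 5 (strongly agree)."""
    assert len(responses) == 21 and all(1 <= r <= 5 for r in responses)
    scores = {name: sum(responses[i] for i in items) / len(items)
              for name, items in SUBSCALES.items()}
    scores["overall"] = sum(responses) / len(responses)
    return scores

# Median overall score across a (hypothetical) sample of respondents:
sample_scores = [score([3] * 21), score([4] * 21), score([3, 4] * 10 + [3])]
print(median(s["overall"] for s in sample_scores))
```

A respondent who answers 3 ("neutral") to every item scores 3.0 overall, which is the sense in which the reported median of 3.3 reads as "generally neutral".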


BMJ ◽  
2020 ◽  
pp. m2081 ◽  
Author(s):  
Danielle B Rice ◽  
Hana Raffoul ◽  
John P A Ioannidis ◽  
David Moher

Objective: To determine the presence of a set of pre-specified traditional and non-traditional criteria used to assess scientists for promotion and tenure in faculties of biomedical sciences among universities worldwide. Design: Cross-sectional study. Setting: International sample of universities. Participants: 170 randomly selected universities from the Leiden ranking of world universities list. Main outcome measure: Presence of five traditional (for example, number of publications) and seven non-traditional (for example, data sharing) criteria in guidelines for assessing assistant professors, associate professors, and professors and the granting of tenure in institutions with biomedical faculties. Results: A total of 146 institutions had faculties of biomedical sciences, and 92 had eligible guidelines available for review. The traditional criteria of peer reviewed publications, authorship order, journal impact factor, grant funding, and national or international reputation were mentioned in 95% (n=87), 37% (34), 28% (26), 67% (62), and 48% (44) of the guidelines, respectively. Conversely, among non-traditional criteria, only citations (any mention in 26%; n=24) and accommodations for employment leave (37%; 34) were relatively commonly mentioned. Mention of alternative metrics for sharing research (3%; n=3) and data sharing (1%; 1) was rare, and three criteria (publishing in open access mediums, registering research, and adhering to reporting guidelines) were not found in any guidelines reviewed. Among guidelines for assessing promotion to full professor, traditional criteria were more commonly reported than non-traditional criteria (traditional criteria 54.2%, non-traditional items 9.5%; mean difference 44.8%, 95% confidence interval 39.6% to 50.0%; P=0.001). Notable differences were observed across continents in whether guidelines were accessible (Australia 100% (6/6), North America 97% (28/29), Europe 50% (27/54), Asia 58% (29/50), South America 17% (1/6)), with more subtle differences in the use of specific criteria. Conclusions: This study shows that the evaluation of scientists emphasises traditional criteria as opposed to non-traditional criteria. This may reinforce research practices that are known to be problematic while insufficiently supporting the conduct of better quality research and open science. Institutions should consider incentivising non-traditional criteria. Study registration: Open Science Framework (https://osf.io/26ucp/?view_only=b80d2bc7416543639f577c1b8f756e44).
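The reported gap (traditional 54.2% vs non-traditional 9.5%; mean difference 44.8%, 95% CI 39.6% to 50.0%) is a difference between two percentages with a confidence interval. The sketch below shows the analogous normal-approximation (Wald) interval for a difference of two proportions, using hypothetical counts; the paper's own interval was computed from mean percentages across criteria, so this will not reproduce the published numbers exactly.

```python
from math import sqrt

# Normal-approximation (Wald) 95% CI for a difference of two proportions.
# Hypothetical counts: an analogous calculation, not a re-derivation of
# the paper's interval, which was computed from mean percentages across
# multiple criteria.
def diff_of_proportions_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# e.g. 50 of 92 guidelines mentioning a traditional criterion vs 9 of 92
# mentioning a non-traditional one (hypothetical counts):
diff, (lo, hi) = diff_of_proportions_ci(50, 92, 9, 92)
print(f"difference {diff:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```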


Heart ◽  
2020 ◽  
Vol 107 (2) ◽  
pp. 120-126
Author(s):  
J Michael Anderson ◽  
Bryan Wright ◽  
Shelby Rauh ◽  
Daniel Tritz ◽  
Jarryd Horn ◽  
...  

Objectives: It has been suggested that biomedical research is facing a reproducibility issue, yet the extent of reproducible research within the cardiology literature remains unclear. Thus, our main objective was to assess the quality of research published in cardiology journals by assessing for the presence of eight indicators of reproducibility and transparency. Methods: Using a cross-sectional study design, we conducted an advanced search of the National Library of Medicine catalogue for publications in cardiology journals. We included publications published between 1 January 2014 and 31 December 2019. After the initial list of eligible cardiology publications was generated, we searched for full-text PDF versions using Open Access, Google Scholar and PubMed. Using a pilot-tested Google Form, a random sample of 532 publications was assessed for the presence of eight indicators of reproducibility and transparency. Results: A total of 232 eligible publications were included in our final analysis. The majority of publications (224/232, 96.6%) did not provide access to complete and unmodified data sets, 229/232 (98.7%) failed to provide step-by-step analysis scripts and 228/232 (98.3%) did not provide access to complete study protocols. Conclusions: The presentation of studies published in cardiology journals would make reproducing study outcomes challenging, at best. Solutions to increase the reproducibility and transparency of publications in cardiology journals are needed. Moving forward, addressing inadequate sharing of materials, raw data and key methodological details might help to improve the landscape of reproducible research within the field.


2019 ◽  
Author(s):  
J. Michael Anderson ◽  
Bryan Wright ◽  
Daniel Tritz ◽  
Jarryd Horn ◽  
Ian Parker ◽  
...  

Background: The extent of reproducibility in cardiology research remains unclear. Therefore, our main objective was to determine the quality of research published in cardiology journals using eight indicators of reproducibility. Methods: Using a cross-sectional study design, we conducted an advanced search of the National Library of Medicine (NLM) catalog for publications from 2014-2018 in journals pertaining to cardiology. Journals must have been published in the English language and indexed in MEDLINE. Once the initial list of publications from all cardiology journals was obtained, we searched for full-text PDF versions using Open Access, Google Scholar, and PubMed. Studies were analyzed using a pilot-tested Google Form to evaluate the presence of information deemed necessary to reproduce the study in its entirety. Results: After exclusions, we included 132 studies containing empirical data. Of these studies, the majority (126/132, 95.5%) did not provide the raw data collected while conducting the study, 0/132 (0%) provided step-by-step analysis scripts, and 117/132 (88.6%) failed to provide sufficient materials needed to reproduce the study. Conclusions: The presentation of studies published in cardiology journals does not appear to facilitate reproducible research. Considerable improvements to the framework of biomedical science, specifically in the field of cardiology, are necessary. Solutions to increase the reproducibility and transparency of published works in cardiology journals are warranted, including addressing inadequate sharing of materials, raw data, and key methodological details.


2019 ◽  
Author(s):  
Ian A. Fladie ◽  
Tomi Adewumi ◽  
Nam Vo ◽  
Daniel Tritz ◽  
Matt Vassar

Background: Reproducibility is critical to diagnostic accuracy and treatment implementation. Alongside clinical reproducibility, research reproducibility establishes whether the use of identical study materials and methodologies in replication efforts permits researchers to arrive at similar results and conclusions. In this study, we evaluated the nephrology literature for common indicators of transparent and reproducible research. Methods: We searched the National Library of Medicine catalog to identify 36 MEDLINE-indexed, English-language nephrology journals. We randomly sampled 300 publications published between January 1, 2014, and December 31, 2018. In a duplicate and blinded fashion, two investigators screened and extracted data from the 300 publications. Results: Our search yielded 28,835 publications, from which we randomly sampled 300. Of the 300 publications, 152 (50.67%) were publicly available, whereas 143 (47.67%) were restricted behind a paywall and 5 (1.67%) were inaccessible. Of the remaining 295 publications, 123 were excluded because they lacked the empirical data necessary for reproducibility. Of the 172 publications with empirical data, 43 (25%) reported data availability statements, 4 (2.33%) provided analysis scripts, 4 (2.33%) provided links to a protocol, and 10 (5.81%) were preregistered. Conclusion: Our study found that reproducible and transparent research practices are infrequently employed by the nephrology research community. Greater efforts should be made by both funders and journals, the two entities with the greatest ability to influence change. In doing so, an open science culture may eventually become the norm rather than the exception.
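The sampling step the abstract describes (300 publications drawn from a pool of 28,835) can itself be done reproducibly. A minimal sketch, in which the seed and the ID scheme are hypothetical:

```python
import random

# Reproducible random-sampling sketch for the screening step the
# abstract describes: 300 publications drawn from a pool of 28,835.
# The seed and the ID scheme are hypothetical.
pool = [f"PUB{i:05d}" for i in range(28_835)]
rng = random.Random(2018)         # fixed seed so the sample can be re-drawn
sample = rng.sample(pool, k=300)  # sampling without replacement

assert len(sample) == 300 and len(set(sample)) == 300
```

Fixing the seed means two investigators screening "in a duplicate and blinded fashion" can be handed the identical sample.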


2021 ◽  
pp. 089198872110026
Author(s):  
Sivan Klil-Drori ◽  
Natalie Phillips ◽  
Alita Fernandez ◽  
Shelley Solomon ◽  
Adi J. Klil-Drori ◽  
...  

Objective: To compare a telephone version and the full version of the Montreal Cognitive Assessment (MoCA). Methods: Cross-sectional analysis of a prospective study. A 20-point telephone version of the MoCA (Tele-MoCA) was compared with the Full-MoCA and the Mini-Mental State Examination. Results: A total of 140 participants were enrolled. Mean scores for language were significantly lower with the Tele-MoCA than with the Full-MoCA (P = .003). Mean Tele-MoCA scores were significantly higher for participants with more than 12 years of education (P < .001). A cutoff score of 17 for the Tele-MoCA yielded good specificity (82.2%) and negative predictive value (84.4%), while sensitivity was low (18.2%). Conclusions: Remote screening of cognition with the 20-point Tele-MoCA is as specific for identifying normal cognition as the Full-MoCA. This study shows that telephone evaluation is adequate for virtual cognitive screening. Our sample did not allow accurate assessment of the Tele-MoCA's sensitivity in detecting MCI or dementia; further studies with representative populations are needed to establish sensitivity.
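The reported screening metrics all derive from a 2x2 table of test results against reference diagnosis at the chosen cutoff. A minimal sketch with hypothetical cell counts (the abstract reports specificity 82.2%, NPV 84.4% and sensitivity 18.2% at a cutoff of 17; the counts below are invented for illustration and do not reproduce those exact figures):

```python
# Deriving screening metrics from a 2x2 confusion table at a given
# Tele-MoCA cutoff. The cell counts below are hypothetical.
def screening_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # impaired participants correctly flagged
    specificity = tn / (tn + fp)  # cognitively normal correctly cleared
    npv = tn / (tn + fn)          # negative screens that are truly normal
    return sensitivity, specificity, npv

sens, spec, npv = screening_metrics(tp=10, fp=5, fn=10, tn=85)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, NPV {npv:.1%}")
```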


Author(s):  
Angélica Conceição Dias Miranda ◽  
Milton Shintaku ◽  
Simone Machado Firme

SURVEY OF CRITERIA FOR EVALUATION OF REPOSITORY TOOLS ACCORDING TO OPEN SCIENCE Abstract: Repositories have become common in universities and research institutes as a way of offering access to scientific production and thereby giving visibility to the institution. In many cases, however, they are still restricted to the concepts of the open archive and open access movements even as the Open Science Movement is already under discussion, revealing a certain mismatch and calling for studies that support the updating of this important tool. This study therefore examines the requirements involved in the open movements in order to support the technical and technological discussion. It is a bibliographic study that transforms information about these movements into criteria for evaluating repository-creation tools, presenting the implementation of interaction as a new challenge. In the closing considerations, we seek to contribute to a more applied discussion of Open Science, as well as to the adjustment of repositories to this movement. Keywords: Repositories. Evaluation criteria. Open archive. Open access. Open data. Open science.


BMJ Open ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. e049716
Author(s):  
Timothy D Dye ◽  
Monica Barbosu ◽  
Shazia Siddiqi ◽  
José G Pérez Ramos ◽  
Hannah Murphy ◽  
...  

Background: Determinants of COVID-19 vaccine acceptance are complex; how perceptions of the effectiveness of science, healthcare and government impact personal COVID-19 vaccine acceptance is unclear, despite all three domains playing critical roles in the development, funding, provision and distribution of COVID-19 vaccines. Objective: To estimate the impact of perceptions of science, healthcare systems and government, along with sociodemographic, psychosocial and cultural characteristics, on vaccine acceptance. Design: We conducted a global nested analytical cross-sectional study of how perceptions of the healthcare, government and science systems' handling of COVID-19 have impacted vaccine acceptance. Setting: Global Facebook, Instagram and Amazon Mechanical Turk (mTurk) users from 173 countries. Participants: 7411 people aged 18 years or over and able to read English, Spanish, Italian or French. Measurements: We used χ2 analysis and logistic regression-derived adjusted odds ratios (aORs) with 95% CIs to evaluate the relationship between effectiveness perceptions and vaccine acceptance, controlling for other factors. We used natural language processing and thematic analysis to analyse the role of vaccine-related narratives in open-ended explanations of effectiveness. Results: After controlling for confounding, attitude toward science was a strong predictor of vaccine acceptance, more so than other attitudes or demographic, psychosocial or COVID-19-related variables (aOR: 2.1; 95% CI: 1.8 to 2.5). The rationale for science effectiveness was dominated by vaccine narratives, which were uncommon in the other domains. Limitations: This study did not include participants from countries where Facebook and Amazon mTurk are unavailable, and vaccine acceptance reflected intention rather than actual behaviour. Conclusions: Our findings show that vaccine-related issues dominate public perception of science's impact around COVID-19, and this perception of science relates strongly to the decision to obtain vaccination once available.
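An adjusted odds ratio and its 95% CI are obtained by exponentiating a logistic-regression coefficient and its interval bounds. A minimal sketch; the beta and standard error below are hypothetical values chosen only to land near the reported figures (aOR 2.1, 95% CI 1.8 to 2.5):

```python
from math import exp

# Turning a logistic-regression coefficient into an adjusted odds ratio
# with a 95% CI. The beta and standard error are hypothetical values,
# not taken from the paper's model output.
def odds_ratio_ci(beta, se, z=1.96):
    return exp(beta), (exp(beta - z * se), exp(beta + z * se))

aor, (lo, hi) = odds_ratio_ci(beta=0.742, se=0.079)
print(f"aOR {aor:.1f}, 95% CI {lo:.1f} to {hi:.1f}")
```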

