Bibliometric study of ‘overviews of systematic reviews’ of health interventions: evaluation of prevalence, citation and journal impact factor

2021
Author(s):
Carole Lunny
Trish Neelakant
Alyssa Chen
Gavindeep Shinger
Adrienne Stevens
...

Abstract

Background: Overviews synthesizing the results of multiple systematic reviews help inform evidence-based clinical practice. In this first of two companion papers, we evaluate the bibliometrics of ‘overviews of systematic reviews’, including their prevalence, number of citations, and factors associated with citation rates and journal impact factor.

Methods: We searched the MEDLINE, Epistemonikos and Cochrane Library databases. We applied eligibility criteria to identify overviews that: (a) aimed to synthesize systematic reviews, (b) conducted a systematic search, (c) had a full methods section, and (d) examined a health intervention or clinical treatment effect. A multivariate regression was conducted to determine the association between citation density, journal impact factor and six predictor variables of interest.

Results: We found 1218 overviews published from 2000 to 2020, the majority (73%) of which were published in the most recent 5-year period (2016-2020). We extracted data from a selection of these overviews (n=541; 44%) dated from 2000 to 2018. The 541 overviews were published in 307 journals, with the Cochrane Database of Systematic Reviews (8%), PLOS ONE (3%) and the Sao Paulo Medical Journal (2%) being the most prevalent. The majority of overviews (70%) were published in journals with impact factors between 0.05 and 3.97. The average citation count was 90 (SD ±219.7) over 9 years, or 10 citations per overview per year. In multivariate analysis, overviews with a high number of citations and a high journal impact factor tended to have more authors and larger sample sizes, to be open access, and to report a funding source.

Conclusions: We found an 8-fold increase in the number of overviews from 2009 to 2020, equivalent to roughly one overview published per day in 2020. Factors driving this increase include the exponential growth in the number of systematic reviews, the publication of Cochrane guidance on overviews of reviews in 2009, and the publication of the first Cochrane overview in the same year. Our study found a mean citation count of 10 citations per overview per year, with overviews published in journals with a mean impact factor of 4.4. These data indicate that, overall, overviews perform above average for the journals in which they are published. We also found that highly cited overviews in high impact factor journals had group authorship and large sample sizes, were openly accessible, and reported a funding source.
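As a rough illustration of the multivariate regression described in the abstract, the Python sketch below fits an ordinary least squares model of log-transformed citation counts on four of the reported predictors (number of authors, sample size, open access status, and reported funding source). All column names and values are hypothetical; the paper's actual model specification and data are not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extraction dataset: one row per overview.
df = pd.DataFrame({
    "citations":      [4, 120, 35, 0, 88, 310],  # total citations received
    "n_authors":      [2, 9, 5, 1, 6, 12],
    "sample_size":    [5, 40, 18, 3, 25, 60],    # systematic reviews included
    "open_access":    [0, 1, 1, 0, 1, 1],        # 1 = openly accessible
    "funding_stated": [0, 1, 0, 0, 1, 1],        # 1 = funding source reported
})

# Citation counts are heavily right-skewed (the abstract reports SD 219.7
# around a mean of 90), so a log(1 + y) transform of the outcome is a
# common modelling choice.
df["log_citations"] = np.log1p(df["citations"])

model = smf.ols(
    "log_citations ~ n_authors + sample_size + open_access + funding_stated",
    data=df,
).fit()
print(model.summary())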


2021
pp. 1-22
Author(s):
Metin Orbay
Orhan Karamustafaoğlu
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, which differs from other areas. Papers published in Q1 journals have higher average citations and lower uncitedness rates than those in other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from a perfect indicator of the expected citations of a paper, owing to the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013-2018, the average journal impact factor in E&ER increased substantially, from 0.908 to 1.638, which is explained by the field's growth but also by the increase in international collaboration and in the share of papers published open access. Despite their inherent limitations, impact factors and related indicators are a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.
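The correlation and skewness statistics reported above can be reproduced on one's own data with a few lines of SciPy; the sketch below shows the computations with made-up journal-level numbers (not the study's data).

import numpy as np
from scipy.stats import pearsonr, skew

# Made-up indicators for five journals.
impact_factor    = np.array([0.6, 1.1, 1.6, 2.3, 3.0])
self_citation    = np.array([0.25, 0.18, 0.15, 0.10, 0.08])  # self-cite share
median_citedness = np.array([0.4, 0.9, 1.5, 2.1, 2.9])       # citations of the median paper

r_self, _ = pearsonr(impact_factor, self_citation)       # study reports r = -0.184
r_median, _ = pearsonr(impact_factor, median_citedness)  # study reports r = 0.864
print(f"IF vs journal self-citation: r = {r_self:.3f}")
print(f"IF vs median citedness:      r = {r_median:.3f}")

# Skewness of one journal's per-paper citation counts: a few highly cited
# papers inflate the mean, which is why a mean-based indicator such as the
# impact factor overstates the citations a typical paper can expect.
paper_citations = np.array([0, 0, 1, 1, 2, 2, 3, 5, 8, 40])
print(f"citation skewness: {skew(paper_citations):.2f}")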


2020
Vol 13 (3)
pp. 328-333
Author(s):
Sven Kepes
George C. Banks
Sheila K. Keener

Author(s):  
Susie Allard
Ali Andalibi
Patty Baskin
Marilyn Billings
Eric Brown
...  

Following up on recommendations from OSI 2016, this team will dig deeper into the question of developing and recommending new tools to repair or replace the journal impact factor (and/or how it is used), and propose actions the OSI community can take between now and the next meeting. What’s needed? What change is realistic and how will we get there from here?


2016
Vol 1
Author(s):
J. Roberto F. Arruda
Robin Champieux
Colleen Cook
Mary Ellen K. Davis
Richard Gedye
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with particular attention to research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this use should be improved.

OSI2016 Workshop Question: Impact Factors

Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?
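For context on what the JIF actually tracks, the snippet below implements the standard two-year impact factor calculation: citations received in year Y to items published in years Y-1 and Y-2, divided by the number of citable items published in those two years. The figures are a toy example, not data for any real journal.

def journal_impact_factor(cites_to_prev1, cites_to_prev2,
                          items_prev1, items_prev2):
    """JIF for year Y = citations in Y to items from Y-1 and Y-2,
    divided by citable items published in Y-1 and Y-2."""
    return (cites_to_prev1 + cites_to_prev2) / (items_prev1 + items_prev2)

# e.g. 300 citations in 2016 to papers from 2015 and 210 to papers from
# 2014, with 120 and 110 citable items published in those years:
print(journal_impact_factor(300, 210, 120, 110))  # ≈ 2.217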

