How to Become an Informed Research Consumer: Evaluating Journal Impact Factors and Their Alternatives

2020 ◽  
Vol 25 (4) ◽  
pp. 304-307
Author(s):  
Barbora Hoskova ◽  
Courtney A. Colgan ◽  
Betty S. Lai

Approximately two million scientific research articles are published in journals worldwide each year (Altbach & De Wit, 2018). As a result, identifying relevant and high-quality journal articles can be an overwhelming task. Journal impact factors are one metric for assessing the quality of research journals and articles. To help you become a more informed research consumer, this article explores some common questions about journal impact factors. We begin with an explanation of journal impact factors and their origins, followed by critiques of journal impact factors, alternative ways of assessing publication quality, and applications of this information to your work in psychology.
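The JIF referenced throughout these abstracts is a simple ratio. As a minimal sketch (with invented placeholder numbers, not figures from the article), the standard two-year calculation looks like this:

```python
# Minimal sketch of the standard two-year Journal Impact Factor.
# The numbers below are made-up placeholders, not data from the article.

def journal_impact_factor(citations_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """JIF for year Y = citations received in Y to items published in
    Y-1 and Y-2, divided by the number of citable items published in
    Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_prev_two_years / citable_items_prev_two_years

# Example: 600 citations in 2019 to articles from 2017-2018,
# 200 citable items published in 2017-2018 -> JIF of 3.0
print(journal_impact_factor(600, 200))
```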

2012 ◽  
Vol 7 (3) ◽  
pp. 90
Author(s):  
Jason Martin

Objective – To determine which characteristics of a journal’s published articles can be used to predict its journal impact factor (JIF). Design – A retrospective cohort study. Setting – The researchers are located at McMaster University, Hamilton, Ontario, Canada. Subjects – The sample consisted of 1,267 clinical research articles from 103 evidence-based and clinical journals, published in 2005 and indexed in the McMaster University Premium LiteratUre Service (PLUS) database, together with those journals’ JIFs from 2007. Method – The articles were divided 60:40 into a derivation set (760 articles from 99 journals) and a validation set (507 articles from 88 journals). Ten variables that could influence the JIF were identified, and a multiple linear regression was fitted on the derivation set and then applied to the validation set. Main Results – Four variables were significant: the number of databases indexing the journal, the number of authors, the quality of research, and the “newsworthiness” of the journal’s published articles. Conclusion – The quality of research and the newsworthiness of a journal’s articles at the time of publication can predict its journal impact factor with 60% accuracy.
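A hedged sketch of the derivation/validation design summarized above: fit a multiple linear regression of 2007 JIF on candidate predictors in roughly 60% of the sample, then evaluate it on the held-out 40%. The data and column names below are invented stand-ins (only the four significant variables are named in the abstract), so this illustrates the procedure, not the study’s actual model:

```python
# Sketch of a derivation/validation split with a multiple linear regression.
# All data are randomly generated stand-ins for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
journals = pd.DataFrame({
    "n_indexing_databases": rng.integers(1, 12, 103),
    "n_authors": rng.integers(1, 15, 103),
    "research_quality": rng.uniform(0, 1, 103),
    "newsworthiness": rng.uniform(0, 1, 103),
})
# Synthetic outcome loosely driven by the predictors plus noise.
journals["jif_2007"] = (0.3 * journals["n_indexing_databases"]
                        + 2.0 * journals["research_quality"]
                        + rng.normal(0, 0.5, 103))

derivation = journals.sample(frac=0.6, random_state=0)   # ~60% derivation set
validation = journals.drop(derivation.index)             # ~40% validation set

predictors = ["n_indexing_databases", "n_authors",
              "research_quality", "newsworthiness"]
model = LinearRegression().fit(derivation[predictors], derivation["jif_2007"])
print("validation R^2:", r2_score(validation["jif_2007"],
                                  model.predict(validation[predictors])))
```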


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016. It focused on the uses and misuses of the Journal Impact Factor (JIF), particularly in research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and that this retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this use could be improved.

OSI2016 Workshop Question: Impact Factors. Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

Because the citation frequency of a journal reflects how many people have read and acknowledged its articles, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to its discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because its impact factor is lower? Do journal impact factors truly reflect the quality of a journal? What must be kept in mind when we interpret journal impact factors? This commentary article discusses these questions and their answers thoroughly.


2012 ◽  
Vol 33 (1) ◽  
pp. 1-6 ◽  
Author(s):  
Heather L. Barske ◽  
Judith Baumhauer

Background: The quality of research and evidence supporting medical treatments is under scrutiny from the medical profession and the public. This study examined the current quality of research and level of evidence (LOE) of foot and ankle surgery papers published in orthopedic and podiatric medical journals. Methods: Two independent evaluators performed a blinded assessment of all foot and ankle clinical research articles (January 2010 to June 2010) from seven North American orthopedic and podiatric journals. The JBJS-A grading system was used for LOE, and articles were assessed for indicators of study quality. The data were stratified by journal and by the authors’ medical credentials. Results: A total of 245 articles were published; 128 were excluded based on study design, leaving 117 clinical research articles. Seven (6%) were Level I, 14 (12%) Level II, 18 (15%) Level III, and 78 (67%) Level IV. The orthopedic journals published 78 studies on foot and ankle topics. Of the podiatric journals, the Journal of the American Podiatric Medical Association (JAPMA) published 12 clinical studies and the Journal of Foot and Ankle Surgery (JFAS) published 27, of which 21 (78%) were Level IV studies. When the quality of research was examined, few therapeutic studies used validated outcome measures and only 38 of 96 (40%) gathered data prospectively. Thirty (31%) studies used a comparison group. Eighty-seven articles (74%) were authored by an MD and 22 (19%) by a DPM. Conclusion: Foot & Ankle International (FAI) published higher quality studies with a higher LOE than the podiatry journals. Regardless of the journal, MDs produced the majority of published clinical foot and ankle research. Although the quality of some clinical research has improved, this study highlights the need for continued improvement in methodology within the foot and ankle literature.


2003 ◽  
Vol 183 (5) ◽  
pp. 384-397 ◽  
Author(s):  
Eva Jané-Llopis ◽  
Clemens Hosman ◽  
Rachel Jenkins ◽  
Peter Anderson

Background: Worldwide, 340 million people are affected by depression, with high comorbid, social and economic costs. Aims: To identify potential predictors of effect in prevention programmes. Method: A meta-analysis was made of 69 programmes to reduce depression or depressive symptoms. Results: Programmes were effective across different age groups and different levels of risk, and in reducing both risk factors and depressive or psychiatric symptoms, with a weighted mean effect size of 0.22. Programmes with larger effect sizes were multi-component, included competence techniques, had more than eight sessions, had sessions 60–90 min long, had a high quality of research design and were delivered by a health care provider in targeted programmes. Older people benefited from social support, whereas behavioural methods were detrimental. Conclusions: An 11% improvement in depressive symptoms can be achieved through prevention programmes. Single-trial evaluations should ensure high quality of the research design and detailed reporting of results and potential predictors.
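The pooled effect of 0.22 reported above is the kind of figure produced by an inverse-variance weighted mean of per-trial effect sizes. A minimal sketch with invented placeholder values:

```python
# Inverse-variance weighted mean effect size, the usual way a meta-analytic
# pooled effect is obtained. Effect sizes and standard errors are placeholders.
effect_sizes = [0.15, 0.30, 0.22, 0.10]   # per-trial standardized mean differences
std_errors   = [0.08, 0.12, 0.05, 0.09]   # per-trial standard errors

weights = [1 / se ** 2 for se in std_errors]          # inverse-variance weights
pooled = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
print(f"weighted mean effect size: {pooled:.2f}")
```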


Episteme ◽  
2011 ◽  
Vol 8 (2) ◽  
pp. 165-183 ◽  
Author(s):  
Max Albert

Why is the average quality of research in open science so high? The answer seems obvious. Science is highly competitive, and publishing high quality research is the way to rise to the top. Thus, researchers face strong incentives to produce high quality work. However, this is only part of the answer. High quality in science, after all, is what researchers in the relevant field consider to be high quality. Why and how do competing researchers coordinate on common quality standards? I argue that, on the methodological level, science is a dynamic beauty contest.


Author(s):  
Amit Shovon Ray ◽  
M. Parameswaran ◽  
Manmohan Agarwal ◽  
Sunandan Ghosh ◽  
Udaya S. Mishra ◽  
...  

The chapter analyses the quality of research, in terms of both articles and journals, using a quality index. It judges the quality of articles along two dimensions: citations (scholarly impact) and readership, measured as the number of hits an article receives in a simple Google keyword search. The quality of a journal is measured along three dimensions: its presence over time, its presence across space, and its depth. The study covered 21,351 journal articles from 1,006 journals (902 journals from Scopus and 104 from ISID) for the five-year period 2010–14. It emerged that India’s social science research (SSR) contributes more to public debates and policy formulation and relatively less to pushing the frontiers of knowledge for further research.
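The abstract names the index’s dimensions but not its exact construction, so the following is only a hedged sketch of one plausible composite: normalize each journal-level dimension to [0, 1] and average them. The journal names, values and equal weighting are illustrative assumptions, not the chapter’s actual index:

```python
# Sketch of a composite journal quality index: scale each dimension to [0, 1]
# and take the unweighted average. All values are invented for illustration.
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

journals = {
    # journal: (years present, countries contributing, articles per year)
    "Journal A": (40, 25, 120),
    "Journal B": (12, 8, 45),
    "Journal C": (25, 30, 200),
}

names = list(journals)
dims = list(zip(*journals.values()))      # presence over time, over space, depth
scores = [normalize(list(d)) for d in dims]
index = {n: sum(s[i] for s in scores) / len(scores) for i, n in enumerate(names)}
print(index)
```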


F1000Research ◽  
2015 ◽  
Vol 4 ◽  
pp. 66 ◽  
Author(s):  
Catherine Joynson ◽  
Ottoline Leyser

In 2014, the UK-based Nuffield Council on Bioethics carried out a series of engagement activities, including an online survey to which 970 people responded and 15 discussion events at universities around the UK, to explore the culture of research in the UK and its effect on ethical conduct in science and the quality of research. The findings of the project were published in December 2014 and the main points are summarised here. We found that scientists are motivated in their work to find out more about the world and to benefit society, and that they believe collaboration, multidisciplinarity, openness and creativity are important for the production of high-quality science. However, our findings suggest that in some cases the culture of research in higher education institutions does not support or encourage these goals or activities. For example, high levels of competition and perceptions about how scientists are assessed for jobs and funding are reportedly contributing to a loss of creativity in science, less collaboration and poor research practices. The project led to suggestions for action for funding bodies, research institutions, publishers and editors, professional bodies and individual researchers.


2019 ◽  
Author(s):  
Miguel Abambres ◽  
Tiago Ribeiro ◽  
Ana Sousa ◽  
Eva Olivia Leontien Lantsoght

‘If there is one thing every bibliometrician agrees on, it is that you should never use the journal impact factor (JIF) to evaluate research performance for an article or an individual – that is a mortal sin.’ Few sentences define the uses and misuses of the Journal Impact Factor more precisely than this one from Anthony van Raan. This manuscript presents a critical overview of the international use, by governments and institutions, of the JIF and/or journal indexing information to assess individual research quality. Interviews in which Nobel Laureates speak on this matter are partially reproduced in this work. Furthermore, the authors propose complementary and alternative versions of the journal impact factor, named the Complementary (CIF) and Timeless (TIF) Impact Factors, aiming to better assess the average quality of a journal – never of a paper or an author. The idea behind impact factors is not useless; it has just been misused.
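The abstract names the proposed CIF and TIF without defining them, so the sketch below does not implement either. It only illustrates, with invented numbers, how widening the citation window changes a journal-level average, which is the intuition a “timeless” variant plays on. A real JIF also counts only citations received in the census year; citations-to-date are used here for simplicity:

```python
# Illustrative contrast between a two-year-window average and a window-free
# citations-per-article average. This is NOT the authors' CIF or TIF; it is a
# simplified stand-in using invented numbers and citations-to-date.
articles = [
    # (publication year, citations received to date)
    (2014, 40), (2015, 12), (2016, 7), (2017, 3), (2018, 1),
]

def two_year_average(arts, year):
    """Average citations for items published in the two preceding years."""
    window = [c for y, c in arts if y in (year - 1, year - 2)]
    return sum(window) / len(window) if window else 0.0

def all_years_average(arts):
    """Average citations over the journal's whole history, no time window."""
    return sum(c for _, c in arts) / len(arts)

print("two-year-window average for 2019:", two_year_average(articles, 2019))
print("whole-history average:           ", all_years_average(articles))
```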

