Citation counts and journal impact factors do not capture research quality in the behavioral and brain sciences

2019 ◽  
Author(s):  
Michael R Dougherty ◽  
Zachary Horne

Citation data and journal impact factors are important components of faculty dossiers and figure prominently in both promotion decisions and assessments of a researcher's broader societal impact. Although these metrics play a large role in high-stakes decisions, the evidence is mixed regarding whether they are valid proxies for key aspects of research quality. We use data from three large-scale studies to assess whether citation counts and impact factors predict four indicators of research quality: (1) the number of statistical reporting errors in a paper, (2) the evidential value of the reported data, (3) the expected replicability of research findings reported in peer-reviewed journals, and (4) the actual replicability of a given experimental result. Both citation counts and impact factors were weak and inconsistent predictors of research quality, so defined, and were sometimes negatively related to quality. Our findings impugn the validity of citation data and impact factors as indices of research quality and call into question their usefulness in evaluating scientists and their research. In light of these results, we argue that research evaluation should instead focus on the process by which research is conducted and incentivize behaviors that support open, transparent, and reproducible research.
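As a rough illustration of the kind of test this abstract implies (a sketch only, not the authors' actual analysis; both lists below are hypothetical placeholders), one could rank-correlate per-paper citation counts with one of the four quality indicators, such as the number of statistical reporting errors:

```python
# Minimal sketch, not the authors' pipeline: rank-correlate per-paper
# citation counts with counts of statistical reporting errors.
# Both input lists are hypothetical placeholders.
from scipy.stats import spearmanr

citation_counts = [120, 45, 3, 250, 18, 7, 60]
reporting_errors = [2, 0, 1, 4, 0, 3, 1]

rho, p_value = spearmanr(citation_counts, reporting_errors)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```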

2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its contents, academia generally shares the notion that impact factors and citation data signify the quality and importance of a journal to its discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because its impact factor is lower? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary discusses these questions and their answers thoroughly.


2019 ◽  
Author(s):  
Miguel Abambres ◽  
Tiago Ribeiro ◽  
Ana Sousa ◽  
Eva Olivia Leontien Lantsoght

‘If there is one thing every bibliometrician agrees, is that you should never use the journal impact factor (JIF) to evaluate research performance for an article or an individual – that is a mortal sin’. Few sentences define the uses and misuses of the JIF more precisely than this one of Anthony van Raan's. This manuscript presents a critical overview of the international use, by governments and institutions, of the JIF and/or journal indexing information for individual research quality assessment. Excerpts from interviews in which Nobel Laureates speak on this matter are included. Furthermore, the authors propose complementary and alternative versions of the JIF, respectively named the Complementary (CIF) and Timeless (TIF) Impact Factors, which aim to better assess the average quality of a journal – never of a paper or an author. The idea behind impact factors is not useless; it has simply been misused.
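The CIF and TIF formulas are not given in this abstract; for reference, the standard two-year JIF they aim to complement is computed, for a journal in year Y, as:

```latex
\mathrm{JIF}_{Y} =
  \frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}
       {\text{number of citable items published in } Y-1 \text{ and } Y-2}
```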


2007 ◽  
Vol 2 (2) ◽  
pp. 84
Author(s):  
Gaby Haddow

A review of: Duy, Joanna, and Liwen Vaughan. "Can Electronic Journal Usage Data Replace Citation Data as a Measure of Journal Use? An Empirical Examination." The Journal of Academic Librarianship 32.5 (Sept. 2006): 512-17.

Abstract

Objective – To identify valid measures of journal usage by comparing citation data with print and electronic journal use data.

Design – Bibliometric study.

Setting – Large academic library in Canada.

Subjects – Instances of use were collected from 11 print journals of the American Chemical Society (ACS), 9 print journals of the Royal Society of Chemistry (RSC), and electronic journals in chemistry and biochemistry from four publishers – ACS, RSC, Elsevier, and Wiley. ACS, Elsevier, and Wiley journals in chemistry-related subject areas were sampled for Journal Impact Factors and citation data from the Institute for Scientific Information (ISI).

Methods – Journal usage data were collected to determine whether an association existed between: (1) print and electronic journal use; (2) electronic journal use and citations to journals by authors from the university; and (3) electronic journal use and Journal Impact Factors. Between June 2000 and September 2003, library staff recorded the re-shelving of bound volumes and loose issues of 20 journal titles published by the ACS and the RSC. Electronic journal usage data were collected for journals published by ACS, RSC, Elsevier, and Wiley within the ISI-defined chemistry and biochemistry subject areas. Data were drawn from the publishers' Level 1 COUNTER-compliant usage statistics, which equate one instance of use with a user viewing an HTML or PDF full-text article. The period of data collection varied, but at least 2.5 years of data were collected for each publisher. Journal Impact Factors were collected for all ISI chemistry-related journals published by ACS, Elsevier, and Wiley for the year 2001. Library Journal Utilization Reports (purchased from ISI) were used to determine the number of times researchers at the university cited journals in the same set of chemistry-related journals over the period 1998 to 2002; the authors call this "local citation data" (512). The results for electronic journal use were also analysed for correlation with the total number of citations, as reported in the Journal Citation Reports, for each journal in the sample.

Main results – The study found a significant correlation between print journal and electronic journal usage, and a similar correlation between electronic journal usage data and local citation data. No significant association was found between Journal Impact Factors and electronic journal usage data. However, when the total number of citations to the journals (drawn from the Journal Impact Factor calculations in Journal Citation Reports) was analysed against electronic journal use, significant correlations were found for all publishers' journals.

Conclusion – Within the fields of chemistry and biochemistry, electronic journal usage data provided by publishers are as valid a measure of journal usage as print journal re-shelving data. The results of the study indicate this association holds even when print journal subscriptions have ceased. Local citation data (the citations made by researchers at the institution being studied) also provide a valid measure of journal use when compared with electronic journal usage results. Journal Impact Factors should be used with caution when libraries are making journal collection decisions.
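A sketch of the kind of per-journal correlation the study reports (the pairing of usage with local citations is assumed from the abstract; the counts below are hypothetical, and the study's own statistics are not reproduced here):

```python
# Sketch of the study's correlation approach (assumed, not their code):
# pair each journal's electronic full-text views with local citations.
from scipy.stats import pearsonr

downloads = [1520, 340, 88, 2010, 455]      # hypothetical COUNTER views
local_citations = [310, 75, 12, 400, 90]    # hypothetical local cites

r, p = pearsonr(downloads, local_citations)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```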


2016 ◽  
Vol 109 (3) ◽  
pp. 2129-2150 ◽  
Author(s):  
Loet Leydesdorff ◽  
Paul Wouters ◽  
Lutz Bornmann

Bibliometric indicators such as journal impact factors, h-indices, and total citation counts are algorithmic artifacts that can be used in research evaluation and management. These artifacts have no meaning by themselves, but receive their meaning from attributions in institutional practices. We distinguish four main stakeholders in these practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who develop and test indicators; (3) research managers who apply the indicators; and (4) the scientists being evaluated with potentially competing career interests. These different positions may lead to different and sometimes conflicting perspectives on the meaning and value of the indicators. The indicators can thus be considered as boundary objects which are socially constructed in translations among these perspectives. This paper proposes an analytical clarification by listing an informed set of (sometimes unsolved) problems in bibliometrics which can also shed light on the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not used or cannot be used in evaluation practices because they are not transparent for users, cannot be calculated, or are difficult to interpret.
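Of the indicators mentioned, the h-index is the simplest to state algorithmically: the largest h such that h of a researcher's papers have at least h citations each. A minimal sketch of that computation (illustrative only; the example citation counts are hypothetical):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give h = 4.
print(h_index([10, 8, 5, 4, 3]))
```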


2021 ◽  
Vol 6 (1) ◽  
pp. 1-12
Author(s):  
Zao Liu

Although there are bibliometric studies of journals in various fields, the field of family studies remains unexplored. Using the two-year and five-year Journal Impact Factors, the H-index, and the newly revised CiteScore, this paper examines the relationships among these metrics in a bibliometric study of forty-four representative family studies journals. The citation data were drawn from Journal Citation Reports, Scopus, and Google Scholar. The correlation analysis found strong positive relationships among the metrics. Despite the strong correlations, discrepancies in the rank orders of the journals were found. A possible explanation for the most noticeable discrepancy in rankings is provided, and the implications of the study for stakeholders are discussed.
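The rank-order discrepancies the study describes can be made concrete with a small sketch; the journal names and metric values below are hypothetical placeholders, not the study's data:

```python
# Sketch: rank journals under two metrics and report the rank shift.
# Journal names and metric values are hypothetical placeholders.
journals = {
    "Journal A": {"jif2": 3.1, "citescore": 5.8},
    "Journal B": {"jif2": 2.4, "citescore": 6.2},
    "Journal C": {"jif2": 1.9, "citescore": 2.0},
}

def ranks(metric):
    ordered = sorted(journals, key=lambda j: journals[j][metric], reverse=True)
    return {j: i + 1 for i, j in enumerate(ordered)}

jif_ranks, cs_ranks = ranks("jif2"), ranks("citescore")
for j in journals:
    shift = jif_ranks[j] - cs_ranks[j]
    print(f"{j}: JIF rank {jif_ranks[j]}, CiteScore rank {cs_ranks[j]}, shift {shift:+d}")
```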


2012 ◽  
Vol 7 (3) ◽  
pp. 90
Author(s):  
Jason Martin

Objective – To determine which characteristics of a journal's published articles can be used to predict its journal impact factor (JIF).

Design – Retrospective cohort study.

Setting – McMaster University, Hamilton, Ontario, Canada.

Subjects – The sample consisted of 1,267 clinical research articles from 103 evidence-based and clinical journals, published in 2005 and indexed in the McMaster University Premium LiteratUre Service (PLUS) database, together with those journals' JIFs from 2007.

Method – The articles were divided 60:40 into a derivation set (760 articles and 99 journals) and a validation set (507 articles and 88 journals). Ten variables that could influence JIF were developed; a multiple linear regression was run on the derivation set and then applied to the validation set.

Main Results – The four variables found to be significant were the number of databases indexing the journal, the number of authors, the quality of the research, and the "newsworthiness" of the journal's published articles.

Conclusion – The quality of research and newsworthiness at the time of publication of a journal's articles can predict the journal impact factor with 60% accuracy.
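The derivation/validation design can be sketched generically (the study's actual variables and data are not reproduced; the features and coefficients below are synthetic placeholders):

```python
# Generic sketch of a 60:40 derivation/validation regression design
# (not the study's data; features and coefficients are synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_journals = 103                       # mirrors the study's journal count
X = rng.normal(size=(n_journals, 4))   # e.g., indexing databases, authors,
                                       # research quality, newsworthiness
y = X @ np.array([0.5, 0.2, 0.8, 0.6]) + rng.normal(scale=0.5, size=n_journals)

X_der, X_val, y_der, y_val = train_test_split(X, y, test_size=0.4, random_state=0)
model = LinearRegression().fit(X_der, y_der)
print(f"Validation R^2 = {model.score(X_val, y_val):.2f}")
```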


2016 ◽  
pp. 161-172
Author(s):  
Thorsten Gruber

Increasingly, academics have to demonstrate that their research has academic impact. Universities normally use journal rankings and journal impact factors to assess the research impact of individual academics. More recently, citation counts for individual articles and the h-index have also been used to measure academic impact. There are, however, several serious problems with relying on journal rankings, journal impact factors, and citation counts. For example, articles without any impact may be published in highly ranked journals or journals with a high impact factor, whereas articles with high impact may be published in lower-ranked journals or journals with a low impact factor. Citation counts can also be easily gamed and manipulated, and the h-index disadvantages early-career academics. This paper discusses these and several other problems and suggests alternatives such as post-publication peer review and open-access journals.


2020 ◽  
Vol 9 (2) ◽  
Author(s):  
Bùi Thị Bích Lan

In Vietnam, the construction of hydropower projects has contributed significantly to the industrialization and modernization of the country. The areas where hydropower projects are built are mostly inhabited by ethnic minorities, communities that rely primarily on land as a vital source of livelihood security. Given the shortage of productive land in resettlement areas, the orientation for agricultural production is to promote indigenous knowledge combined with increased application of science and technology, shifting from small-scale production practices to large-scale commodity production. However, the research results of this article show that the transition faces many obstacles, such as limited natural resources, traditional production mindsets, and questions about the suitability and effectiveness of models for applying science and technology. When agricultural production does not ensure food security, consequences for people's lives become increasingly evident, including poverty and challenges to preserving cultural identity, maintaining social relations, and protecting resources. This raises the question of the State's role in researching and building appropriate agricultural production models that exploit local strengths and ensure sustainability.

