Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen's work on journal impact and research evaluation

PLoS ONE ◽  
2017 ◽  
Vol 12 (3) ◽  
pp. e0174205 ◽  
Author(s):  
Lin Zhang ◽  
Ronald Rousseau ◽  
Gunnar Sivertsen

Cortex ◽  
2001 ◽  
Vol 37 (4) ◽  
pp. 595-597 ◽  
Author(s):  
Jesús Rey-Rocha ◽  
M. José Martín-Sempere ◽  
Jesús Martínez-Frías ◽  
Fernando López-Vera

2019 ◽  
Author(s):  
Michael R Dougherty ◽  
Zachary Horne

Citation data and journal impact factors are important components of faculty dossiers and figure prominently in both promotion decisions and assessments of a researcher's broader societal impact. Although these metrics play a large role in high-stakes decisions, the evidence is mixed regarding whether they are valid proxies for key aspects of research quality. We use data from three large-scale studies to assess whether citation counts and impact factors predict four indicators of research quality: (1) the number of statistical reporting errors in a paper, (2) the evidential value of the reported data, (3) the expected replicability of research findings reported in peer-reviewed journals, and (4) the actual replicability of a given experimental result. Both citation counts and impact factors were weak and inconsistent predictors of research quality, so defined, and sometimes negatively related to quality. Our findings impugn the validity of citation data and impact factors as indices of research quality and call into question their usefulness in evaluating scientists and their research. In light of these results, we argue that research evaluation should instead focus on the process of how research is conducted and incentivize behaviors that support open, transparent, and reproducible research.
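The abstract above describes testing whether citation metrics predict quality indicators such as replicability. As a purely illustrative sketch (the numbers below are invented placeholders, not the study's data), a rank correlation between citation metrics and a binary replication outcome might be computed like this:

```python
# Illustrative sketch only: correlating citation metrics with a quality
# indicator, in the spirit of the analysis described above.
from scipy.stats import spearmanr

# Hypothetical per-paper records: citation count, journal impact factor,
# and whether the key result replicated (1) or not (0).
citations      = [12, 85, 3, 40, 210, 7, 55, 19]
impact_factors = [2.1, 9.3, 1.4, 4.8, 31.0, 0.9, 6.2, 3.3]
replicated     = [1, 0, 1, 0, 0, 1, 1, 0]

# Rank-based correlation is a common choice for heavily skewed citation data.
rho_cit, p_cit = spearmanr(citations, replicated)
rho_jif, p_jif = spearmanr(impact_factors, replicated)

print(f"citations vs. replication:     rho={rho_cit:.2f} (p={p_cit:.2f})")
print(f"impact factor vs. replication: rho={rho_jif:.2f} (p={p_jif:.2f})")
```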


2016 ◽  
Vol 109 (3) ◽  
pp. 2129-2150 ◽  
Author(s):  
Loet Leydesdorff ◽  
Paul Wouters ◽  
Lutz Bornmann

Bibliometric indicators such as journal impact factors, h-indices, and total citation counts are algorithmic artifacts that can be used in research evaluation and management. These artifacts have no meaning by themselves, but receive their meaning from attributions in institutional practices. We distinguish four main stakeholders in these practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who develop and test indicators; (3) research managers who apply the indicators; and (4) the scientists being evaluated with potentially competing career interests. These different positions may lead to different and sometimes conflicting perspectives on the meaning and value of the indicators. The indicators can thus be considered as boundary objects which are socially constructed in translations among these perspectives. This paper proposes an analytical clarification by listing an informed set of (sometimes unsolved) problems in bibliometrics which can also shed light on the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not used or cannot be used in evaluation practices because they are not transparent for users, cannot be calculated, or are difficult to interpret.
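For readers unfamiliar with the "simple but invalid" indicators the abstract mentions, a minimal sketch of the h-index computation (the largest h such that at least h papers each have at least h citations) could look like the following; the citation counts are invented for illustration:

```python
# Minimal sketch of the h-index: the largest h such that at least h of an
# author's papers have at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```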


2022 ◽  
Author(s):  
Gunnar Sivertsen

The paper focuses on practical advice for the use of bibliometrics in research assessment in the social sciences. Guidelines are presented from three official sources of advice, with a particular focus on individual-level assessments of applications for positions, promotions, and external funding. General problems with applying bibliometrics in evaluations of the social sciences are also discussed, as well as the specific problems with using the Journal Impact Factor and the H-Index. The conclusion is not that bibliometrics should be avoided in research assessment of social scientists. Used with care and competence, bibliometrics can be a valuable extra source of information, but it cannot replace judgement in research evaluation.
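As background for the Journal Impact Factor problems discussed above, here is a minimal sketch of the standard two-year JIF calculation; the figures are placeholders, not data for any real journal:

```python
# Minimal sketch of the standard two-year Journal Impact Factor.
def two_year_jif(citations_to_prev_two_years, citable_items_prev_two_years):
    """Citations received this year to items published in the previous two
    years, divided by the number of citable items published in those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Example: 1,200 citations in 2023 to articles from 2021-2022, and
# 400 citable items published in 2021-2022, give a JIF of 3.0.
print(two_year_jif(1200, 400))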


Author(s):  
Emilio Delgado-López-Cózar ◽  
Ismael Ràfols ◽  
Ernest Abadal

This letter is a call to the Spanish scientific authorities to abandon current research evaluation policies, which are based on an excessive and indiscriminate use of bibliometric indicators for nearly all areas of scientific activity. This narrow evaluation focus is especially applied to assess the individual performance of researchers. To this end, we first describe the contexts in which the journal impact factor (JIF) and other bibliometric indicators are being used. We then consider the toxic effects of this abuse of indicators. Finally, we outline some significant transformations and initiatives being introduced in various academic fields and regions of the world. These international initiatives offer alternatives to bibliometrics that can improve evaluation processes, and we urge political leaders in Spain to adopt and develop them.


2020 ◽  
Vol 18 (1) ◽  
pp. 48-56
Author(s):  
Trung Tran ◽  
Khanh-Linh Hoang ◽  
Viet-Phuong La ◽  
Manh-Toan Ho ◽  
Quan-Hoang Vuong

Universities and funders in many countries have been using the Journal Impact Factor (JIF) as an indicator for research and grant assessment despite its controversial nature as a statistical representation of scientific quality. This study investigates how changes in JIF over the years can affect its role in research evaluation and science management, using JIF data from annual Journal Citation Reports (JCR) to illustrate the changes. The descriptive statistics show an increase in the median JIF for the top 50 journals in the JCR, from 29.300 in 2017 to 33.162 in 2019. Moreover, on average, elite journal families have up to 27 journals in the top 50. The proportion of journals with a JIF lower than 1 shrank by 14.53% over the 2015–2019 period. The findings suggest a potential 'JIF bubble period'; science policymakers, universities, public fund managers, and other stakeholders should pay closer attention to JIF as a criterion for quality assessment to ensure more efficient science management.
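A rough sketch of how descriptive statistics like those reported above could be reproduced from a list of per-journal JIF values; `jifs_2019` below is a small invented placeholder, not the actual JCR 2019 release:

```python
# Sketch of the descriptive statistics described above: median JIF of the
# top 50 journals and the share of journals with a JIF below 1, assuming
# JCR data have been loaded as a list of per-journal impact factors.
import statistics

jifs_2019 = [70.7, 59.1, 45.3, 33.2, 29.8, 17.4, 6.1, 2.3, 0.9, 0.4]  # placeholder values

top_50 = sorted(jifs_2019, reverse=True)[:50]
median_top_50 = statistics.median(top_50)

share_below_1 = sum(1 for jif in jifs_2019 if jif < 1) / len(jifs_2019)

print(f"median JIF of top 50 journals: {median_top_50:.3f}")
print(f"share of journals with JIF < 1: {share_below_1:.1%}")
```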

