WHAT DO EXPERTS KNOW ABOUT FORECASTING JOURNAL QUALITY? A COMPARISON WITH ISI RESEARCH IMPACT IN FINANCE

2013
Vol 08 (01)
pp. 1350005
Author(s):
CHIA-LIN CHANG
MICHAEL MCALEER

Experts possess knowledge and information that are not publicly available. The paper is concerned with forecasting academic journal quality and research impact using a survey of international experts from a national project on ranking academic finance journals in Taiwan. A comparison is made with publicly available bibliometric data, namely the Thomson Reuters ISI Web of Science citations database (hereafter ISI) for the Business–Finance (hereafter Finance) category. The paper analyses the leading international journals in Finance using expert scores and quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences in the expert scores and alternative RAMs, where the RAMs are based on alternative transformations of citations taken from the ISI database. Alternative RAMs may be calculated annually or updated daily to answer the perennial questions as to When, Where and How (frequently) published papers are cited (see Chang et al., 2011a,b,c). The RAMs include the most widely used RAM, namely the classic 2-year impact factor including journal self-citations (2YIF), the 2-year impact factor excluding journal self-citations (2YIF*), the 5-year impact factor including journal self-citations (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance per Paper Online), h-index, PI-BETA (Papers Ignored - By Even The Authors), 2-year Self-citation Threshold Approval Ratings (2Y-STAR), Historical Self-citation Threshold Approval Ratings (H-STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). As data are not available for 5YIF, Article Influence and CAI for 13 of the leading 34 journals considered, 10 RAMs are analysed for 21 highly-cited journals in Finance. The harmonic mean of the ranks of the 10 RAMs for the 34 highly-cited journals is also presented. It is shown that emphasizing the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal impact and influence relative to the Harmonic Mean rankings. A linear regression model is used to forecast expert scores on the basis of RAMs that capture journal impact, journal policy, the number of high-quality papers, and quantitative information about a journal. The robustness of the rankings is also analysed.
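The Harmonic Mean ranking is straightforward to reproduce. Below is a minimal sketch in Python, assuming each journal has already been ranked (1 = best) under each RAM; the journal names and rank values are illustrative placeholders, not data from the paper:

```python
from statistics import harmonic_mean

# Ranks (1 = best) of each journal under several RAMs; the values are
# illustrative placeholders, not the paper's actual data.
ranks = {
    "Journal A": [1, 2, 1, 3, 2],
    "Journal B": [2, 1, 4, 1, 3],
    "Journal C": [3, 3, 2, 2, 1],
}

# Harmonic mean of the ranks for each journal; a low harmonic mean
# indicates consistently strong rankings across the RAMs.
hm = {journal: harmonic_mean(r) for journal, r in ranks.items()}

# Order journals by harmonic mean (ascending = better overall rank).
for journal, score in sorted(hm.items(), key=lambda kv: kv[1]):
    print(f"{journal}: HM of ranks = {score:.2f}")
```

The harmonic mean rewards consistency: a journal ranked first under one RAM but poorly under the rest is pulled down more sharply than it would be under an arithmetic mean of ranks.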

2014
Vol 09 (01)
pp. 1450005
Author(s):
CHIA-LIN CHANG
MICHAEL MCALEER

The paper is concerned with ranking academic journal quality and research impact in Finance, based on the widely-used Thomson Reuters ISI (2013) Web of Science citations database (hereafter ISI). The paper analyses the 89 leading international journals in the ISI category of "Business–Finance" using quantifiable Research Assessment Measures (RAMs). The analysis highlights the similarities and differences in various RAMs, all of which are based on alternative transformations of journal citations and impact. Alternative RAMs may be calculated annually or updated daily to determine how frequently published papers are cited in journals listed in ISI. The RAMs include the classic 2-year impact factor including journal self-citations (2YIF), 2-year impact factor excluding journal self-citations (2YIF*), 5-year impact factor including journal self-citations (5YIF), Immediacy including journal self-citations, Eigenfactor (or Journal Influence), Article Influence (AI), h-index, Papers Ignored - By Even The Authors (PI-BETA), Self-citation Threshold Approval Rating (STAR), 5YD2 (namely, 5YIF divided by 2YIF), Escalating Self Citations (ESC) and Index of Citation Quality (ICQ). The paper calculates the harmonic mean (HM) of the ranks of up to 16 RAMs. It is shown that emphasizing 2YIF to the exclusion of other informative RAMs can lead to a misleading evaluation of journal quality and impact relative to the HM of the ranks. The analysis of the 89 ISI journals in Finance makes it clear that there are three leading journals in Finance, namely Journal of Finance, Journal of Financial Economics and Review of Financial Studies, which form an exclusive club in terms of the RAMs that measure journal quality and impact based on alternative measures of journal citations. The next two journals in Finance in terms of overall quality and impact are Journal of Accounting and Economics and Journal of Monetary Economics. As Accounting does not have a separate classification in ISI, the tables of rankings given in the paper are also used to rank the top three journals in the sub-category of Accounting within the ISI category of "Business–Finance", namely Journal of Accounting and Economics, Accounting Review, and Journal of Accounting Research.
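Several of these RAMs are simple transformations of the underlying impact factors. The sketch below computes two of them: 5YD2 as defined in the abstract (5YIF divided by 2YIF), and IFI, which in this literature is commonly computed as 2YIF divided by 2YIF* (an assumption here, since the abstract does not spell it out); the journal figures are made-up placeholders:

```python
# Derived RAMs from the basic impact factors. 5YD2 = 5YIF / 2YIF
# (as stated in the abstract); IFI = 2YIF / 2YIF* (assumed), so
# IFI >= 1 and higher values signal greater self-citation inflation.
# The figures below are placeholders, not data from the paper.
journals = {
    # name: (2YIF incl. self-citations, 2YIF* excl. self-citations, 5YIF)
    "Journal A": (3.0, 2.5, 4.2),
    "Journal B": (1.8, 1.7, 2.0),
}

for name, (two_yif, two_yif_star, five_yif) in journals.items():
    ifi = two_yif / two_yif_star   # inflation of impact from self-citations
    five_yd2 = five_yif / two_yif  # persistence of citations beyond two years
    print(f"{name}: IFI = {ifi:.2f}, 5YD2 = {five_yd2:.2f}")
```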


2014
Vol 65 (1)
Author(s):
Chia-Lin Chang
Michael McAleer

The paper analyses academic journal quality and impact using quality-weighted citations that are based on the widely-used Thomson Reuters ISI Web of Science citations database (ISI). A recently developed Index of Citations Quality (ICQ), based on quality-weighted citations, is used to analyse the top 276 Economics journals and top 10 Econometrics journals in the ISI Economics category using alternative quantifiable Research Assessment Measures (RAMs). It is shown that ICQ is a useful additional measure to the 2-Year Impact Factor (2YIF) and other well-known RAMs available in ISI for evaluating the impact, quality and ranking of Economics and Econometrics journals, as it contains information that has very low correlations with the information contained in the well-known RAMs. Among other findings, the top Econometrics journals have some of the highest ICQ scores in the ISI category of Economics.
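The claim that ICQ adds information is easy to check on any RAM table by correlating ICQ with the standard measures. A minimal sketch follows, assuming ICQ is computed as Article Influence per unit of 5-year impact factor (a quality-weighted-to-raw ratio inferred from these abstracts, not a formula quoted from the paper); the data are placeholders:

```python
import numpy as np

# Placeholder RAM values per journal: columns are 2YIF, 5YIF and AI.
rams = np.array([
    [3.1, 4.0, 2.5],
    [1.2, 1.5, 0.4],
    [2.0, 2.6, 1.1],
    [0.9, 1.4, 0.6],
])
two_yif, five_yif, ai = rams.T

# Assumed definition: quality-weighted citations (AI) per total
# citation (5YIF); an inference from these abstracts, not a formula
# quoted from the paper.
icq = ai / five_yif

# Pearson correlations of ICQ with the raw RAMs; on real data the
# paper reports such correlations to be very low.
for name, col in [("2YIF", two_yif), ("5YIF", five_yif), ("AI", ai)]:
    r = np.corrcoef(icq, col)[0, 1]
    print(f"corr(ICQ, {name}) = {r:+.2f}")
```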


2016
Vol 42 (4)
pp. 324-337
Author(s):
Chia-Lin Chang
Michael McAleer

Purpose – Both journal self-citations and exchanged citations have the effect of increasing a journal's impact factor, which may be deceptive. The purpose of this paper is to analyse academic journal quality and research impact using quality-weighted citations vs total citations, based on the widely used Thomson Reuters ISI Web of Science citations database (ISI). A new Index of Citations Quality (ICQ) is presented, based on quality-weighted citations.
Design/methodology/approach – The new index is used to analyse the leading 500 journals in both the sciences and social sciences, as well as finance and accounting, using quantifiable Research Assessment Measures (RAMs) that are based on alternative transformations of citations.
Findings – It is shown that ICQ is a useful additional measure to the 2-year impact factor (2YIF) and other well-known RAMs for evaluating the impact, quality and ranking of journals, as it contains information that has very low correlations with the information contained in the well-known RAMs for the sciences, social sciences, finance and accounting.
Practical implications – Journals can, and do, inflate the number of citations through self-citation practices, which may be coercive. Another method for distorting journal impact is for a set of journals to agree to cite each other, that is, to exchange citations. This may be less coercive than self-citation, but it is nonetheless unprofessional and distortionary.
Social implications – The premise underlying the use of citations data is that higher-quality journals generally have a higher number of citations. The impact of citations can be distorted in a number of ways, both consciously and unconsciously.
Originality/value – Regardless of whether self-citations arise through collusive practices, the increase in citations will affect both 2YIF and the 5-year impact factor (5YIF), though not Eigenfactor and Article Influence. A higher ICQ is generally preferred to a lower one. Unlike 5YIF, which is increased by journal self-citations and exchanged citations, and Eigenfactor and Article Influence, both of which are affected by quality-weighted exchanged citations, ICQ is less affected by exchanged citations. In the absence of any empirical evidence to the contrary, 5YIF and AI are assumed to be affected similarly by exchanged citations.
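The mechanics of the self-citation problem are simple to illustrate. Below is a toy calculation, with made-up counts, showing why 2YIF can be inflated by self-citations while 2YIF* cannot:

```python
# Toy illustration of self-citation inflation; all counts are made up.
citable_items = 100   # papers published in the two preceding years
external_cites = 150  # citations received from other journals
self_cites = 50       # citations the journal gives to itself

two_yif = (external_cites + self_cites) / citable_items  # 2.00
two_yif_star = external_cites / citable_items            # 1.50

# Each extra self-citation raises 2YIF by 1 / citable_items but leaves
# 2YIF* unchanged, so the gap between the two is a signal of possible
# gaming, whether coercive or collusive.
print(f"2YIF = {two_yif:.2f}, 2YIF* = {two_yif_star:.2f}")
```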


2021
pp. 016555152110597
Author(s):
Sumeer Gul
Aasif Ahmad Mir
Sheikh Shueb
Nahida Tun Nisa
Salma Nisar

The manuscript processing timeline, a necessary facet of the publishing process, varies from journal to journal, and its influence on journal impact needs to be studied. The current research examines the correlations between the 'Peer Review Metrics' (submission to first editorial decision; submission to first post-review decision; and submission to accept) and the 'Journal Impact Data' (2-year Impact Factor; 5-year Impact Factor; Immediacy Index; Eigenfactor Score; and Article Influence Score). The data for both sets of measures were downloaded from the 'Nature Research' journal-metrics website (https://www.nature.com/nature-portfolio/about/journal-metrics), and correlations were drawn between the 'Peer Review Metrics' and the 'Journal Impact Data'. As the time from 'submission to first editorial decision' decreases, the 'Journal Impact Data' increase, and vice versa; the 'Eigenfactor Score', however, shows no association with the time from 'submission to first editorial decision'. The time from 'submission to first post-review decision' shows no association with any of the 'Journal Impact Data'. As the time from 'submission to acceptance' increases, the 'Journal Impact Data' (2-year Impact Factor, 5-year Impact Factor, Immediacy Index and Article Influence Score) also increase, and as it decreases, so do they; the 'Eigenfactor Score' is again unaffected. The study will act as a ready reference for scholars selecting the most appropriate venues for their scholarly endeavours. Furthermore, the performance and evaluative indicators responsible for a journal's overall research performance can be understood from a micro-analytical view, which will help researchers select appropriate journals for future submissions. Lengthy publication timelines are a serious problem for researchers, who cannot receive timely credit for their work. Since the study validates a relationship between the 'Peer Review Metrics' and the 'Journal Impact Data', its findings will be of great help in making an appropriate choice of journal. The study may also be instructive for journal administrators who advocate speeding up publication by improving particular stages of the publication timeline. The study is the first of its kind to correlate journals' 'Peer Review Metrics' with their 'Journal Impact Data'. Its findings are limited to the data retrieved from the 'Nature Research' journals and cannot be generalised to all journals; the study could be extended to other publishers to generalise the findings. The early-access availability of articles in relation to the 'Peer Review Metrics' and the 'Journal Impact Data' could also be studied.
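The study's core computation is a correlation between each timeline measure and each impact measure. A minimal sketch of one such pairing follows, using Spearman's rank correlation (the choice of coefficient is an assumption here) and placeholder values rather than the Nature Research data:

```python
from scipy.stats import spearmanr

# Placeholder per-journal values in the spirit of the study: days from
# submission to first editorial decision, and 2-year Impact Factor.
days_to_first_decision = [8, 10, 6, 14, 9, 12]
two_year_if = [42.8, 30.5, 50.1, 12.4, 38.0, 20.7]

# Rank correlation between review speed and journal impact; the study
# reports an inverse association for this pairing (faster first
# decisions go with higher impact), except for the Eigenfactor Score.
rho, p_value = spearmanr(days_to_first_decision, two_year_if)
print(f"Spearman rho = {rho:+.2f} (p = {p_value:.3f})")
```

Repeating this over every (timeline measure, impact measure) pairing reproduces the study's full correlation analysis.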


2016
Vol 1
Author(s):
J. Roberto F. Arruda
Robin Champieux
Colleen Cook
Mary Ellen K. Davis
Richard Gedye
...

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations as to how this use could be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2012
Vol 92 (2)
pp. 395-401
Author(s):
David A. Pendlebury
Jonathan Adams

2019
Vol 23 (2)
pp. 47-51
Author(s):
Morwenna Senior
Seena Fazel

Metrics which quantify the impact of a scientist are increasingly incorporated into decisions about how to rate and fund individuals and institutions. Several commonly used metrics, based on journal impact factors and citation counts, have been criticised as they do not reliably predict real-world impact, are highly variable between fields and are vulnerable to gaming. Bibliometrics have been incorporated into systems of research assessment, but these may create flawed incentives, failing to reward research that is validated, reproducible and with wider impacts. A recent proposal for a new standardised citation metric, based on a composite indicator of six measures, has led to an online database of the 100,000 most highly cited scientists in all fields. In this perspective article, we provide an overview and evaluation of this new citation metric as it applies to mental health research. We provide a summary of its findings for psychiatry and psychology, including clustering in certain countries and institutions, and outline some implications for mental health research. We discuss strengths and limitations of this new metric, and how further refinements could align impact metrics more closely with wider goals of scientific research.
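One way to make the idea of a six-measure composite concrete is sketched below; the log-and-scale scheme and the measure names are assumptions for illustration, not the published indicator's exact recipe:

```python
import math

# Sketch of a composite citation indicator in the spirit of the
# six-measure proposal: each raw measure is log-transformed, scaled by
# the cohort maximum, and the six scaled values are summed. The exact
# measures and scaling in the published database may differ.
measures = ["citations", "h_index", "hm_index",
            "cites_single_author", "cites_first_author", "cites_last_author"]

scientists = {
    "A": [12000, 50, 30, 2000, 5000, 6000],
    "B": [8000, 45, 35, 3000, 3500, 2500],
    "C": [20000, 60, 25, 1000, 8000, 9000],
}

# Cohort maxima per measure, used to put each measure on a 0-1 scale.
maxima = [max(vals[i] for vals in scientists.values())
          for i in range(len(measures))]

def composite(vals):
    # log1p dampens the heavy right tail of citation counts.
    return sum(math.log1p(v) / math.log1p(m) for v, m in zip(vals, maxima))

# Rank scientists by the composite score (higher is better).
for name, vals in sorted(scientists.items(), key=lambda kv: -composite(kv[1])):
    print(f"{name}: c-score = {composite(vals):.3f}")
```

Summing scaled components rather than relying on any single count is what the proposal offers as protection against gaming any one metric.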


2019
Vol 26 (5)
pp. 734-742
Author(s):
Rob Law
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its content, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to its discipline. Although this notion is well-entrenched, is it reasonable to deduce that a journal is not of good quality because of its lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary article discusses these questions and their answers thoroughly.


2012
Vol 91 (4)
pp. 329-333
Author(s):
A. Sillet
S. Katsahian
H. Rangé
S. Czernichow
P. Bouchard

We sought to compare the Eigenfactor Score™ journal rank with the journal Impact Factor over five years, and to identify variables that may influence the ranking differences between the two metrics. Datasets were retrieved from the Thomson Reuters® and Eigenfactor Score™ websites. Dentistry was identified as the most specific medical specialty. Variables were retrieved from the selected journals and included in a linear regression model. Among the 46 dental journals included in the analysis, striking variations in ranks were observed according to the metric used. The Bland-Altman plot showed poor agreement between the metrics. The multivariate analysis indicates that the number of original research articles, the number of reviews, self-citations, and citing time may explain the differences between ranks. The Eigenfactor Score™ seems to capture the prestige of a journal better than the Impact Factor. In medicine, bibliometric indicators should focus not only on the overall medical field but also on specialized disciplinary fields. Distinct measures are needed to better describe the scientific impact of specialized medical publications.
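A Bland-Altman comparison of two rankings reduces to per-journal rank differences, their mean (the bias) and the limits of agreement. A minimal sketch with placeholder ranks rather than the study's dental-journal data:

```python
import numpy as np

# Placeholder ranks of the same journals under the Impact Factor and
# the Eigenfactor Score (1 = best); not the study's dental-journal data.
if_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8])
ef_rank = np.array([2, 1, 5, 3, 8, 4, 6, 7])

diff = if_rank - ef_rank         # per-journal disagreement between metrics
mean = (if_rank + ef_rank) / 2   # average rank (the plot's x-axis)

bias = diff.mean()               # systematic offset between the rankings
loa = 1.96 * diff.std(ddof=1)    # 95% limits of agreement

for m, d in zip(mean, diff):
    print(f"mean rank {m:.1f}: difference {int(d):+d}")
print(f"bias = {bias:+.2f}, limits of agreement = +/- {loa:.2f}")
```

Wide limits of agreement relative to the range of ranks, as the study found, indicate that the two metrics are measuring substantially different things.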


2017
Vol 28 (22)
pp. 2941-2944
Author(s):
Sandra L. Schmid

The San Francisco Declaration on Research Assessment (DORA) was penned 5 years ago to articulate best practices for how we communicate and judge our scientific contributions. In particular, it adamantly declared that Journal Impact Factor (JIF) should never be used as a surrogate measure of the quality of individual research contributions, or for hiring, promotion, or funding decisions. Since then, a heightened awareness of the damaging practice of using JIFs as a proxy for the quality of individual papers, and to assess an individual’s or institution’s accomplishments has led to changes in policy and the design and application of best practices to more accurately assess the quality and impact of our research. Herein I summarize the considerable progress made and remaining challenges that must be met to ensure a fair and meritocratic approach to research assessment and the advancement of research.

