Quality assessment of instructions for authors in dental, oral and maxillofacial journals

NEMESIS ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. 18
Author(s):  
Aleksandra Hebda ◽  
Guillaume A Odri ◽  
Raphael Olszewski

Objective: to develop and test the inter-observer reproducibility of an instructions-for-authors quality rating (IAQR) tool measuring the quality of instructions for authors at the journal level, with a view to improving editorial guidelines. Material and methods: the instructions for authors of 75 dental and maxillofacial surgery journals were assessed by two independent observers using an assessment tool inspired by AGREE, with 16 questions and a 1-to-4-point scale per answer. The two observers evaluated the instructions for authors independently and blind to the impact factor of the given journal. Scores obtained with our tool were compared with the “journal impact factor 2013”. Results: the IAQR showed excellent interobserver reproducibility (κ = 0.81) despite a difference in data distribution between observers. There was a weak positive correlation between the IAQR and the “journal impact factor 2013”. Conclusions: the IAQR is a reproducible quality assessment tool at the journal level. It assesses the quality of instructions for authors and is a good starting point for possible improvements of the instructions for authors, especially when it comes to their completeness. Nemesis relevance: 28% of dental and maxillofacial journals might revise their instructions for authors to provide a more up-to-date version.
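The κ = 0.81 reproducibility reported above is an unweighted Cohen's kappa, which corrects two raters' raw agreement for the agreement expected by chance. A minimal sketch of that calculation, using hypothetical 1-4 point scores (not the study's data):

```python
# Illustrative sketch (not the authors' code): unweighted Cohen's kappa for
# two observers rating the same items on a 1-4 scale. All ratings below are
# hypothetical examples, not data from the IAQR study.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters over the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items where the raters gave the same score.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two observers on ten journals:
a = [1, 2, 2, 3, 4, 4, 3, 2, 1, 3]
b = [1, 2, 2, 3, 4, 3, 3, 2, 1, 3]
print(round(cohens_kappa(a, b), 2))  # → 0.86
```

Values above roughly 0.8 are conventionally read as excellent agreement, which is why the study can call κ = 0.81 excellent even though the two observers' score distributions differed.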

2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that instead actions should be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this should be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason — less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 40 (10) ◽  
pp. 1136-1142 ◽  
Author(s):  
Malke Asaad ◽  
Austin Paul Kallarackal ◽  
Jesse Meaike ◽  
Aashish Rajesh ◽  
Rafael U de Azevedo ◽  
...  

Background: Citation skew refers to the unequal distribution of citations to articles published in a particular journal. Objectives: We aimed to assess whether citation skew exists within plastic surgery journals and to determine whether the journal impact factor (JIF) is an accurate indicator of the citation rates of individual articles. Methods: We used Journal Citation Reports to identify all journals within the field of plastic and reconstructive surgery. The number of citations in 2018 for all individual articles published in 2016 and 2017 was abstracted. Results: Thirty-three plastic surgery journals were identified, publishing 9823 articles. The citation distribution showed right skew, with the majority of articles having either 0 or 1 citation (40% and 25%, respectively). A total of 3374 (34%) articles achieved citation rates similar to or higher than their journal’s JIF, whereas 66% of articles failed to achieve a citation rate equal to the JIF. Review articles achieved higher citation rates (median, 2) than original articles (median, 1) (P < 0.0001). Overall, 50% of articles contributed to 93.7% of citations and 12.6% of articles contributed to 50% of citations. A weak positive correlation was found between the number of citations and the JIF (r = 0.327, P < 0.0001). Conclusions: Citation skew exists within plastic surgery journals as in other fields of biomedical science. Most articles did not achieve citation rates equal to the JIF, with a small percentage of articles having a disproportionate influence on citations and the JIF. Therefore, the JIF should not be used to assess the quality and impact of individual scientific work.
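The mechanism behind this finding follows directly from how the two-year JIF is computed: it is a mean citation count, so a right-skewed distribution lets a few highly cited articles pull the JIF above what most articles receive. A minimal sketch with hypothetical citation counts (not the study's data):

```python
# Illustrative sketch (not the study's code): the two-year JIF is the mean
# number of citations in a year to a journal's articles from the previous
# two years. With a right-skewed distribution, a handful of highly cited
# papers raises the mean above most individual articles' citation counts.
# The citation counts below are hypothetical.

def journal_impact_factor(citations):
    """Mean citations per article, e.g. 2018 citations to 2016-2017 articles."""
    return sum(citations) / len(citations)

def share_at_or_above_jif(citations):
    """Fraction of articles whose citation count reaches the journal's JIF."""
    jif = journal_impact_factor(citations)
    return sum(c >= jif for c in citations) / len(citations)

# Hypothetical journal: most articles get 0-1 citations, a few get many.
cites = [0, 0, 0, 0, 1, 1, 1, 2, 6, 19]
print(journal_impact_factor(cites))    # → 3.0
print(share_at_or_above_jif(cites))    # → 0.2 (only 20% reach the JIF)
```

Here two articles supply 25 of 30 citations, so 80% of articles sit below the JIF of 3.0 — the same pattern, in miniature, as the 66% reported above.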


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As a journal’s citation frequency reflects how many people have read and acknowledged its published work, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to the discipline. Although this notion is well-entrenched, is it reasonable to deduce that a journal is not of good quality because of its lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary article discusses these questions and their answers thoroughly.


2017 ◽  
Vol 28 (22) ◽  
pp. 2941-2944 ◽  
Author(s):  
Sandra L. Schmid

The San Francisco Declaration on Research Assessment (DORA) was penned 5 years ago to articulate best practices for how we communicate and judge our scientific contributions. In particular, it adamantly declared that the Journal Impact Factor (JIF) should never be used as a surrogate measure of the quality of individual research contributions, or for hiring, promotion, or funding decisions. Since then, heightened awareness of the damaging practice of using JIFs as a proxy for the quality of individual papers, or as a measure of an individual’s or institution’s accomplishments, has led to changes in policy and to the design and application of best practices for assessing the quality and impact of our research more accurately. Herein I summarize the considerable progress made and the remaining challenges that must be met to ensure a fair and meritocratic approach to research assessment and the advancement of research.


2017 ◽  
Vol 402 (7) ◽  
pp. 1015-1022 ◽  
Author(s):  
Usama Ahmed Ali ◽  
Beata M. M. Reiber ◽  
Joren R. ten Hove ◽  
Pieter C. van der Sluis ◽  
Hein G. Gooszen ◽  
...  

2019 ◽  
Author(s):  
Miguel Abambres ◽  
Tiago Ribeiro ◽  
Ana Sousa ◽  
Eva Olivia Leontien Lantsoght

‘If there is one thing every bibliometrician agrees, is that you should never use the journal impact factor (JIF) to evaluate research performance for an article or an individual – that is a mortal sin’. Few sentences define the uses and misuses of the Journal Impact Factor more precisely than this one by Anthony van Raan. This manuscript presents a critical overview of the international use, by governments and institutions, of the JIF and/or journal indexing information for assessing individual research quality. Interviews given by Nobel Laureates on this matter are partially illustrated in this work. Furthermore, the authors propose complementary and alternative versions of the journal impact factor, respectively named the Complementary (CIF) and Timeless (TIF) Impact Factors, aiming to better assess the average quality of a journal – never of a paper or an author. The idea behind impact factors is not useless; it has simply been misused.


Stroke ◽  
2015 ◽  
Vol 46 (suppl_1) ◽  
Author(s):  
Steven Peters ◽  
Aaron Switzer ◽  
Shivanand Patil ◽  
Cheryl R McCreary ◽  
Martin Dichgans ◽  
...  

Introduction: The quality of reporting of neuroimaging methods in studies of cerebral small vessel disease is unknown. We systematically reviewed studies of MRI white matter hyperintensities (WMH) of vascular origin to determine the frequency of reporting of key aspects of neuroimaging methods, and whether reporting varied by sample size, study design or journal impact factor. Methods: Three raters independently reviewed 100 consecutive papers reporting WMH severity, either as a primary outcome or as a covariate, to abstract 50 study characteristics based on the published STRIVE standards (Wardlaw et al Lancet Neurol 2013). Final determinations were made by consensus. An aggregate quality score (range 0-11) was created by adding one point for the reporting of each of 11 key characteristics (Table). Spearman correlation or the chi-square test, as appropriate, was used to test associations with the quality score. Results: Papers were published between 2009 and 2013 in journals with impact factors ranging from 0.56 to 15.3, with cohort (79%) and case-control (21%) studies represented. Quantitative computational methods were used in 28 studies. MR field strength, MRI sequence types, type of WMH measurement method, blinding and number of raters were reported frequently, but reporting of other characteristics was inconsistent (Table). The quality score was not correlated with journal impact factor, sample size or cohort study design. Conclusions: There is inconsistent reporting of neuroimaging methods in the small vessel disease imaging literature. Increased adherence to published reporting standards, such as the STRIVE criteria, may facilitate more objective peer review of submitted manuscripts and increase the reproducibility of published results. More work is needed to facilitate the adoption of standards and checklists by authors, reviewers and editors.


2018 ◽  
Vol XVI (2) ◽  
pp. 369-388 ◽  
Author(s):  
Aleksandar Racz ◽  
Suzana Marković

Technology-driven changes, with a consequent increase in the online availability and accessibility of journals and papers, are rapidly changing the patterns of academic communication and publishing. The dissemination of important research findings through the academic and scientific community begins with publication in peer-reviewed journals. The aim of this article is to identify, critically evaluate and integrate the findings of relevant, high-quality individual studies addressing trends in the enhancement of the visibility and accessibility of academic publishing in the digital era. The number of citations a paper receives is often used as a measure of its impact and, by extension, of its quality. Many aberrations of citation practice have been reported in attempts to inflate a paper’s impact through manipulation of self-citation, inter-citation and citation cartels. Authors’ avenues for legitimately extending the visibility, awareness and accessibility of their research outputs, thereby raising citations and amplifying their measurable personal impact, have been strongly enhanced by online communication tools such as networking (LinkedIn, ResearchGate, Academia.edu, Google Scholar), sharing (Facebook, blogs, Twitter, Google Plus), media sharing (SlideShare), data sharing (Dryad Digital Repository, Mendeley database, PubMed, PubChem), code sharing, impact tracking, and publishing in Open Access journals. Many studies and review articles in the last decade have examined whether open access articles receive more citations than equivalent subscription (toll access) articles, and most of them conclude that open access articles are highly likely to have a citation advantage over generally equivalent pay-for-access articles in many, if not most, disciplines.
But it remains questionable whether never-cited papers are indeed “worth(less) papers”, and whether the journal impact factor and the number of citations should be considered the only suitable indicators for evaluating the quality of scientists. The phrase “publish or perish”, usually used to describe the pressure in academia to publish academic work rapidly and continually in order to sustain or further one’s career, can now, in the 21st century, be reformulated as “publish, be cited, and maybe you will not perish”.


2012 ◽  
Vol 34 (1) ◽  
pp. 38-41
Author(s):  
Caroline Black

Bibliometrics is the term used to describe various approaches to analysing measures of the use of academic literature, in particular articles in peer-reviewed journals. More broadly, the topic addresses the validity or otherwise of these measures as indicators of the impact, influence or value of the research being reported. These measures, and in particular the journal Impact Factor, are used as evidence for the quality of research, to make decisions about appointments, to judge a journal editor's success, and (it is assumed) to make funding decisions. Until recently, bibliometrics was mainly about citations, but now it is increasingly common to measure online usage, and even tweets, blogging and user star-ratings when assessing the contribution of a published research article.

