Towards more inclusive metrics and open science to measure research assessment in Earth and natural sciences

2022 ◽  
Author(s):  
Olivier Pourret ◽  
Dasapta Erwin Irawan ◽  
Najmeh Shaghaei ◽  
Elenora M. van Rijsingen ◽  
Lonni Besançon

Measures of success and impact in science are built on a system that prioritizes citations and impact factors. These measures are inaccurate and biased against already under-represented groups, and they fail to convey the range of individuals' significant scientific contributions, especially contributions to open science. We argue for a transition away from this out-of-date value system towards one that advances science by promoting diversity, equity, and inclusion. Achieving systemic change will require a concerted effort led by academic leaders and administrators.

PLoS Biology ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. e3001282
Author(s):  
Sarah W. Davies ◽  
Hollie M. Putnam ◽  
Tracy Ainsworth ◽  
Julia K. Baum ◽  
Colleen B. Bove ◽  
...  

Success and impact metrics in science are based on a system that perpetuates sexist and racist “rewards” by prioritizing citations and impact factors. These metrics are flawed and biased against already marginalized groups and fail to accurately capture the breadth of individuals’ meaningful scientific impacts. We advocate shifting this outdated value system to advance science through principles of justice, equity, diversity, and inclusion. We outline pathways for a paradigm shift in scientific values based on multidimensional mentorship and promoting mentee well-being. These actions will require collective efforts supported by academic leaders and administrators to drive essential systemic change.


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with particular attention to research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that instead actions should be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this should be improved.

OSI2016 Workshop Question: Impact Factors. Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 23 (2) ◽  
pp. 47-51
Author(s):  
Morwenna Senior ◽  
Seena Fazel

Metrics which quantify the impact of a scientist are increasingly incorporated into decisions about how to rate and fund individuals and institutions. Several commonly used metrics, based on journal impact factors and citation counts, have been criticised as they do not reliably predict real-world impact, are highly variable between fields and are vulnerable to gaming. Bibliometrics have been incorporated into systems of research assessment but these may create flawed incentives, failing to reward research that is validated, reproducible and with wider impacts. A recent proposal for a new standardised citation metric based on a composite indicator of 6 measures has led to an online database of 100 000 of the most highly cited scientists in all fields. In this perspective article, we provide an overview and evaluation of this new citation metric as it applies to mental health research. We provide a summary of its findings for psychiatry and psychology, including clustering in certain countries and institutions, and outline some implications for mental health research. We discuss strengths and limitations of this new metric, and how further refinements could align impact metrics more closely with wider goals of scientific research.
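The abstract does not spell out how the six measures are combined, but the general construction of such a composite indicator is easy to sketch: rescale each citation measure to a common range and sum the rescaled terms. The short Python sketch below is a minimal illustration under those assumptions; the measure names, the log-based rescaling, and the sample figures are all placeholders, not the published formula.

import math

# Illustrative composite citation indicator of the kind described above: six
# citation-based measures per scientist are rescaled to [0, 1] and summed.
# The measure names, the log(1 + x)/log(1 + max) rescaling, and the sample
# figures are placeholder assumptions, not the published formula.
MEASURES = [
    "total_citations", "h_index", "hm_index",
    "citations_single", "citations_single_first", "citations_single_first_last",
]

def composite_score(scientist, maxima):
    """Sum of rescaled measures; each term lies between 0 and 1."""
    score = 0.0
    for m in MEASURES:
        value = scientist.get(m, 0)
        # log1p damps heavily cited outliers before rescaling by the observed maximum.
        score += math.log1p(value) / math.log1p(maxima[m]) if maxima[m] > 0 else 0.0
    return score

# Toy data for two hypothetical scientists.
sample = [
    {"name": "A", "total_citations": 12000, "h_index": 55, "hm_index": 30,
     "citations_single": 800, "citations_single_first": 3000,
     "citations_single_first_last": 6000},
    {"name": "B", "total_citations": 4000, "h_index": 35, "hm_index": 22,
     "citations_single": 1500, "citations_single_first": 2500,
     "citations_single_first_last": 3200},
]
maxima = {m: max(s.get(m, 0) for s in sample) for m in MEASURES}
for s in sorted(sample, key=lambda x: composite_score(x, maxima), reverse=True):
    print(s["name"], round(composite_score(s, maxima), 3))

In practice a score built this way is only meaningful when the rescaling uses field-specific reference values, since citation rates vary widely between fields, a limitation the abstract itself notes.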


2014 ◽  
Vol 85 (4) ◽  
pp. 429-430 ◽  
Author(s):  
Gregory Welk ◽  
Mark G. Fischman ◽  
Christy Greenleaf ◽  
Louis Harrison ◽  
Lynda Ransdell ◽  
...  

2021 ◽  
Vol 3 (1) ◽  
pp. 71-78
Author(s):  
Heather Joseph

The COVID-19 pandemic highlights the urgent need to strengthen global scientific collaboration, and to ensure the fundamental right to universal access to scientific progress and its applications. Open Science (OS) is central to achieving these goals. It aims to make science accessible, transparent, and effective by providing barrier-free access to scientific publications, data, and infrastructures, along with open software, Open Educational Resources, and open technologies. OS also promotes public trust in science at a time when it has never been more important to do so. Over the past decade, momentum towards the widespread adoption of OS practices has been driven primarily by declarations (e.g., DORA, the Leiden Manifesto). These serve an important role, but for OS to truly take root, researchers must also be fully incentivized and rewarded for its practice. This requires research funders and academic leaders to take the lead in collaborating with researchers to design and implement new incentive structures, and to actively work to socialize these throughout the research ecosystem. The US National Academies of Sciences, Engineering, and Medicine (NASEM) Roundtable on Aligning Research Incentives for OS is one such effort. This paper examines the strategy behind convening the Roundtable, its current participant makeup, focus, and outputs. It also explores how this approach might be expanded and adapted throughout the global OS community.


2017 ◽  
Vol 14 ◽  
pp. 305-312 ◽  
Author(s):  
Susanne Schuck-Zöller ◽  
Jörg Cortekar ◽  
Daniela Jacob

Abstract. Basic research in the natural sciences rests on a long tradition of evaluation. However, since the San Francisco Declaration on Research Assessment (DORA) came out in 2012, there has been intense discussion in the natural sciences, above all amongst researchers and funding agencies in the different fields of applied research and scientific service. This discussion intensified when climate services and other fields that involve users in research and development activities (co-creation) demanded new evaluation methods appropriate to this new research mode. This paper starts by describing a comprehensive and interdisciplinary literature overview of indicators to evaluate the co-creation of knowledge, including the different fields of integrated knowledge production. The authors then harmonize the different elements of evaluation from the literature into an evaluation cascade that scales down from very general evaluation dimensions to tangible assessment methods. They describe evaluation indicators already documented in the literature and include a mixture of different assessment methods for two exemplary criteria. It is shown what can be deduced from existing methodology for climate services, and how climate services can further develop their own specific evaluation methods.


Author(s):  
David Moher ◽  
Lex Bouter ◽  
Sabine Kleinert ◽  
Paul Glasziou ◽  
Mai Har Sham ◽  
...  

The primary goal of research is to advance knowledge. For that knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous and transparent at all stages of design, execution and reporting. Initiatives such as the San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto have led the way in bringing much-needed global attention to the importance of taking a considered, transparent and broad approach to assessing research quality. Since publication in 2012, the DORA principles have been signed by over 1,500 organizations and nearly 15,000 individuals. Despite this significant progress, assessment of researchers still rarely includes considerations related to trustworthiness, rigor and transparency. We have developed the Hong Kong Principles (HKPs) as part of the 6th World Conference on Research Integrity with a specific focus on the need to drive research improvement by ensuring that researchers are explicitly recognized and rewarded (i.e., their careers are advanced) for behavior that leads to trustworthy research. The HKPs have been developed with the idea that their implementation could assist in how researchers are assessed for career advancement, with a view to strengthening research integrity. We present five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognizing all contributions to research and scholarly activity. For each principle we provide a rationale for its inclusion and examples of where these principles are already being adopted.


2021 ◽  
Author(s):  
Katerina Zourou ◽  
Mariana Ziku

The importance of higher education institutions (HEIs) in supporting and promoting open science is highlighted in several EU policies. Among them, the 2017 Report of the Working Group on Education and Skills under Open Science emphasizes the need to shape HE students/next-generation researchers as “open science citizens”. More precisely: “The European Research Area (ERA) should work in closer collaboration with the European Higher Education Area (EHEA) (...), enabling the next generations of researchers to evolve as Open Science citizens (...) New generations of scientists and researchers, as the driving force for innovation and economic growth, are of vital importance to Europe's future competitiveness and leadership” (p. 16).

Our study problematizes the role of HEIs as incubators of the next generation of open science citizens (in terms of HE staff and student skills, curricula and interdisciplinarity), including a niche of cross-disciplinary humanities and natural sciences applied cases, where institutions situated in a broader social context engage citizens in knowledge creation processes through professional-amateur (pro-am) collaborations, and in decision making in diverse populations such as urban, Indigenous or special-needs communities (active citizenship, civic engagement, citizen science).

The study, initiated by the European project CitizenHeritage ("Citizen Science Practices in Cultural Heritage: towards a Sustainable Model in Higher Education", https://www.citizenheritage.eu/), presents the analysis resulting from desktop research and a survey on practices conducted between November 2020 and January 2021. The presentation focuses on a number of registered practices that bridge scientific disciplines in the areas of earth and life sciences, history of science and cultural heritage, producing a substantial, evidence-based review of multimethod research practices of higher education engagement in citizen-enhanced open science.


2014 ◽  
Vol 9 (1) ◽  
pp. 56 ◽  
Author(s):  
Mathew Lee Stone

Objective – To quantify the value of the knowledge that librarianship and information science (LIS) research exports to other subject disciplines. Design – Bibliometric study. Setting – LIS departments in U.K. universities. Subjects – 232 LIS research articles published between 2001 and 2007. Methods – Data from the 2008 U.K. Research Assessment Exercise were checked to identify 405 research articles submitted by 10 selected university departments (out of a total of 21) that submitted research in the LIS category. The Web of Science database was then searched to see how many of these articles had been cited in other articles (n=232). If the citing article was published in a non-LIS journal it was considered a knowledge export. Journals were defined as non-LIS if they had not been assigned the subject category of Information Science & Library Science by the Journal Citation Reports. The journal Impact Factors (IFs) of citing journals were then normalized to measure the value of individual knowledge exports to their respective subject disciplines. This was done by comparing a citing journal’s IF with the median journal IF within that subject category. If the citing journal’s IF was above this median, the citation was considered a valuable knowledge export. Main Results – The sample of LIS research articles produced a total of 1,061 knowledge exports in 444 unique non-LIS journals. These non-LIS journals covered 146 unique subject categories, of which those related to computer science and chemistry/pharmacology cited LIS research with the greatest frequency. Just over three-quarters (n=798) of these citations were considered valuable knowledge exports. A sub-analysis showed that LIS articles published in non-LIS journals were significantly more valuable than the knowledge exports published in LIS journals. Conclusion – The validity of bibliometric studies can be improved by adopting the two methodological innovations presented in this study. The first is to avoid over-estimating the number of knowledge exports by discounting “part exports” (i.e., where the citing journal is assigned to multiple subject categories, one of which is the same as that of the cited reference). The second is to add an extra dimension to the analysis by measuring the value of each knowledge export, taking into account the “normalized” impact factor of the citing journal.
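The normalization step described in the Methods lends itself to a short illustration. The Python sketch below is a minimal version of that idea, using invented journal records and field names rather than the study's data: each citing journal's impact factor is compared against the median impact factor of its subject category, and a citation counts as a valuable knowledge export only when its journal sits above that median.

from statistics import median
from collections import defaultdict

# Sketch of the normalisation described above: a citation counts as a
# "valuable" knowledge export when the citing journal's impact factor
# exceeds the median impact factor of its subject category. The records
# and field names below are illustrative, not those of the original study.

def median_if_by_category(journals):
    """journals: iterable of dicts with 'category' and 'impact_factor' keys."""
    by_category = defaultdict(list)
    for j in journals:
        by_category[j["category"]].append(j["impact_factor"])
    return {cat: median(ifs) for cat, ifs in by_category.items()}

def is_valuable_export(citing_journal, category_medians):
    """True when the citing journal's IF lies above its category median."""
    return citing_journal["impact_factor"] > category_medians[citing_journal["category"]]

# Toy example: three non-LIS journals in two subject categories.
journals = [
    {"name": "J. Comp. X", "category": "Computer Science", "impact_factor": 3.1},
    {"name": "J. Comp. Y", "category": "Computer Science", "impact_factor": 1.4},
    {"name": "Pharm. Z",   "category": "Pharmacology",     "impact_factor": 4.8},
]
medians = median_if_by_category(journals)
print([j["name"] for j in journals if is_valuable_export(j, medians)])

Comparing against the within-category median rather than the raw IF is what makes exports to low-citation and high-citation disciplines comparable, which is the point of the "normalized" measure in the study.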

