The Hong Kong Principles for Assessing Researchers: Fostering Research Integrity

Author(s):  
David Moher ◽  
Lex Bouter ◽  
Sabine Kleinert ◽  
Paul Glasziou ◽  
Mai Har Sham ◽  
...  

The primary goal of research is to advance knowledge. For that knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous, and transparent at all stages of design, execution, and reporting. Initiatives such as the San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto have led the way in bringing much-needed global attention to the importance of taking a considered, transparent, and broad approach to assessing research quality. Since its publication in 2012, DORA has been signed by over 1,500 organizations and nearly 15,000 individuals. Despite this significant progress, assessment of researchers still rarely includes considerations related to trustworthiness, rigor, and transparency. We have developed the Hong Kong Principles (HKPs) as part of the 6th World Conference on Research Integrity, with a specific focus on the need to drive research improvement by ensuring that researchers are explicitly recognized and rewarded (i.e., their careers are advanced) for behavior that leads to trustworthy research. The HKPs have been developed with the idea that their implementation could inform how researchers are assessed for career advancement, with a view to strengthening research integrity. We present five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognizing all contributions to research and scholarly activity. For each principle we provide a rationale for its inclusion and examples of where it is already being adopted.

2021 ◽  
Author(s):  
Janne Pölönen

Finland is among the first countries to have developed a national recommendation on responsible research assessment, published in 2020. The Recommendation for the Responsible Evaluation of a Researcher in Finland provides a set of general principles (transparency, integrity, fairness, competence, and diversity) that apply across 13 recommended good practices covering four aspects of researcher evaluation: A) building the evaluation process; B) evaluation of research; C) diversity of activities; and D) the researcher's role in the evaluation process. The national recommendation was produced by a broad-based working group convened by the Federation of Finnish Learned Societies; implementation, however, must take place at institutions, each of which has its own circumstances, challenges, needs, and goals. The recommendation comes with an implementation plan that includes the development of national-level infrastructures and services to support more qualitative and diverse assessment policies and practices locally. Institutional uptake of the recommendation will be promoted by the forthcoming national policy and executive plan for open scholarship, and tracked across all research-performing organisations as part of a biannual Open Science monitoring exercise starting in 2022.


2017 ◽  
Vol 13 (1) ◽  
pp. 25 ◽  
Author(s):  
Dasapta Erwin Irawan ◽  
Cut Novianti Rachmi ◽  
Hendy Irawan ◽  
Juneman Abraham ◽  
Kustiati Kusno ◽  
...  

A significant development of the open science movement has been witnessed in the last five years, and it could bring a fresh start to Indonesian academia. The objective of this paper is to showcase advances in the open science concept and its implementation that can be adopted to increase impact. We conducted a literature review covering peer-reviewed papers, funding agency websites, open science blogs, and threads on Twitter. We believe the value of research outputs is not limited to a paper in a high-reputation journal. Data are now considered a separate output, as are data management protocols and laboratory notebooks. Publishing research results as preprints is also used to disseminate findings as rapidly as possible. Post-publication peer review has been added to the reviewing system to increase openness, transparency, and objectivity, and it offers credit to the reviewers. We also see the growth of new impact indicators following the San Francisco Declaration on Research Assessment (DORA). More initiatives and technologies have been introduced to make science more open, transparent, and inclusive. With so many developments under way, it is unwise for Indonesian academia to rely solely on the old perception of research outputs and impact indicators.


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Noémie Aubert Bonn ◽  
Wim Pinxten

Abstract

Background: Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate into indicators that can be used for assessment. In the past few years, several groups have expressed their dissatisfaction with the indicators currently used for assessing researchers. But given the lack of agreement on what should constitute success in science, most propositions remain unanswered. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments.

Methods: We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed career, to inquire about success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline in order to grasp the views of interacting and complementary actors in a system setting.

Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multi-factorial, context-dependent, and mutable construct. Success appeared to be an interaction between characteristics of the researcher (Who), research outputs (What), processes (How), and luck. Interviewees noted that current research assessments overvalued outputs but largely ignored the processes deemed essential for research quality and integrity. Interviewees suggested that science needs a diversity of indicators that are transparent, robust, and valid, and that allow a balanced and diverse view of success; that assessment of scientists should not depend blindly on metrics but should also value human input; and that quality should be valued over quantity.

Conclusions: The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short on each of these objectives. Open and transparent inter-actor dialogue is needed to understand what research assessments aim for and how they can best achieve their objectives.

Study registration: osf.io/33v3m


2021 ◽  
Vol 03 ◽  
Author(s):  
Danny Kingsley

The nature of the research endeavour is changing rapidly and requires a wide set of skills beyond the research focus. The delivery of researcher training 'beyond the bench' is shared among different sections of an institution, including the research office, the media office, and the library. In Australia, researcher training in open access, research data management, and other aspects of open science is primarily offered by librarians. But what training do librarians receive in scholarly communication within their librarianship degrees? For a degree in librarianship and information science to be offered, it must be accredited by the Australian Library and Information Association (ALIA), with a curriculum based on ALIA's lists of skills and attributes. However, these lists contain no reference to key open research terms and are almost mutually exclusive with the core competencies in scholarly communication identified by the North American Serials Interest Group and an international Joint Task Force. Over the past decade, teaching by academics in universities has been professionalised through courses and qualifications. Those responsible for researcher training within universities, and the material being offered, should likewise meet an agreed accreditation standard. This paper argues that there is a clear need to develop parallel standards around 'research practice' training for PhD students and Early Career Researchers, and that those delivering this training should be able to demonstrate their skills against these standards. Models for developing accreditation standards are starting to emerge, with the recent launch of the Centre for Academic Research Quality and Improvement in the UK. There are multiple organisations, both grassroots and long-established, that could contribute to this project.


2018 ◽  
Vol 69 (4) ◽  
pp. 183-189 ◽  
Author(s):  
Terje Tüür-Fröhlich

Abstract: A growing number of scholarly societies, journals, institutions, and researchers are protesting against and fighting the "almighty" Journal Impact Factor. The best-known initiative combining protest with recommendations is DORA, the San Francisco Declaration on Research Assessment. The criticism targets the flawed, biased, and intransparent nature of quantitative evaluation procedures and their negative effects on academic staff, especially on early-career researchers and their scholarly development, and in particular the subtle discrimination against the humanities and social sciences. We should not remain uncritically trapped in the metrics paradigm and cheer the flood of new indicators emerging from scientometrics. The slogan "Putting Science into the Assessment of Research" must not be understood in a narrowly scientistic sense. Social phenomena cannot be investigated with the methods of the natural sciences alone. Critique and transformation of the social activities that call themselves "evaluation" require perspectives from the social sciences and the philosophy of science. Evaluation is not a value-neutral undertaking; it is closely bound up with power, domination, and the distribution of resources.


Author(s):  
Mario Pagliaro

In most of the world's countries, scholarship evaluation for tenure and promotion continues to rely on the conventional criteria of publication in high-impact-factor journals and grant funding. Continuing to hire and promote scholars for their achievements in research and in securing research funds puts universities at risk, because students, directly and indirectly through government funds, are the main source of revenue for academic institutions, whereas talented young researchers are those who actually carry out most of the published research. Purposeful scholarship evaluation needs to include all three areas of scholarly activity: research; teaching and mentoring; and service to society. Young scholars seeking tenure and promotion benefit from the practice of open science because it produces better and more impactful results in each of these three areas of scholarship.


2010 ◽  
Vol 16 (4) ◽  
pp. 228 ◽  
Author(s):  
Mike Calver

Only those truly cryptozoic for all of 2010 could have missed the bustle and concern created by the Australian Commonwealth's Excellence in Research for Australia (ERA) initiative (http://www.arc.gov.au/era/default.htm). In common with other national research assessment exercises such as the RAE (UK) and PBRF (New Zealand), ERA is designed to assess research quality within the Australian higher education sector, identifying and rewarding those institutions and departments producing high-quality research. The linkages between achievement, recognition and reward have the potential to shape the research priorities and agendas of institutions and individual researchers.


2017 ◽  
Author(s):  
Etienne P. LeBel ◽  
Derek Michael Berger ◽  
Lorne Campbell ◽  
Timothy Loving

Finkel, Eastwick, and Reis (2016; FER2016) argued that the post-2011 methodological reform movement has focused narrowly on replicability, neglecting other essential goals of research. We agree that multiple scientific goals are essential, but argue that a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure that cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016's position that "bigger samples are generally better, but … that very large samples could have the downside of commandeering resources that would have been better invested in other studies" (abstract). We identify problematic assumptions in FER2016's modifications of our original research-economic model and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender trade-offs. Sufficiently powering studies (i.e., to >80% power) maximizes both research efficiency and confidence in the literature (research quality). Given that we agree with FER2016 on all key open science points, we are eager to see the accelerated cumulative knowledge development about social psychological phenomena that such a sufficiently transparent, powered, and falsifiable approach will generate.
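The power argument above lends itself to a quick worked example. The sketch below is a minimal illustration of the sample-size versus number-of-studies trade-off, not the authors' actual research-economic model; the effect size, alpha level, and participant budget are assumed values chosen purely for demonstration. It uses statsmodels to compute the per-group sample size needed for 80% power in a two-sample design and contrasts that with a 50%-powered alternative.

```python
# Minimal sketch of the power/resource trade-off discussed above.
# Assumptions (not from the paper): a two-sample t-test design,
# Cohen's d = 0.4, alpha = .05, and a fixed budget of 2,000 participants.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 0.4    # assumed effect size, typical of social-psych effects
alpha = 0.05
budget = 2000        # assumed total participants available across studies

# Per-group n required to reach 80% power for this effect size,
# then total n per two-group study.
n_hi = 2 * analysis.solve_power(effect_size=effect_size,
                                alpha=alpha, power=0.80)

# Same calculation for an underpowered (50%) study.
n_lo = 2 * analysis.solve_power(effect_size=effect_size,
                                alpha=alpha, power=0.50)

print(f"n per study at 80% power: {n_hi:.0f}")
print(f"studies possible at 80% power: {budget // n_hi:.0f}")
print(f"n per study at 50% power: {n_lo:.0f}")
print(f"studies possible at 50% power: {budget // n_lo:.0f}")
```

Under these assumptions the fixed budget buys roughly twice as many 50%-powered studies as 80%-powered ones, but each underpowered study misses a true effect half the time; this is the kind of trade-off the improved research-economic model is meant to quantify.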

