Evaluating co-creation of knowledge: from quality criteria and indicators to methods

2017 ◽  
Vol 14 ◽  
pp. 305-312 ◽  
Author(s):  
Susanne Schuck-Zöller ◽  
Jörg Cortekar ◽  
Daniela Jacob

Abstract. Basic research in the natural sciences rests on a long tradition of evaluation. However, since the San Francisco Declaration on Research Assessment (DORA) came out in 2012, there has been intense discussion in the natural sciences, above all amongst researchers and funding agencies in the different fields of applied research and scientific service. This discussion intensified when climate services and other fields that involve users in research and development activities (co-creation) demanded new evaluation methods appropriate to this research mode. This paper starts with a comprehensive, interdisciplinary literature overview of indicators for evaluating the co-creation of knowledge, covering the different fields of integrated knowledge production. The authors then harmonize the different elements of evaluation found in the literature into an evaluation cascade that scales down from very general evaluation dimensions to tangible assessment methods. They describe evaluation indicators already documented and, for two exemplary criteria, combine different assessment methods. The paper shows what climate services can deduce from existing methodology and outlines how climate services can further develop their own specific evaluation methods.
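The evaluation cascade the abstract describes, from general dimensions down to concrete assessment methods, can be pictured as a nested structure. A minimal sketch in Python; all dimension, criterion, indicator, and method names below are illustrative assumptions, not the paper's actual taxonomy:

```python
# Hypothetical evaluation cascade: dimension -> criteria -> indicators -> methods.
# Every name here is an invented example for illustration only.
cascade = {
    "dimension": "process quality",
    "criteria": [
        {
            "criterion": "stakeholder participation",
            "indicators": [
                {
                    "indicator": "share of users involved in problem framing",
                    "methods": ["document analysis", "participant survey"],
                },
            ],
        },
    ],
}

def methods_for(cascade):
    """Collect all assessment methods reachable from one dimension."""
    return sorted({
        method
        for criterion in cascade["criteria"]
        for indicator in criterion["indicators"]
        for method in indicator["methods"]
    })

print(methods_for(cascade))  # ['document analysis', 'participant survey']
```

The point of the cascade is exactly this traversal: an evaluator picks a dimension and can walk down to the tangible methods that operationalize it.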

Author(s):  
Eli Coleman

There is a growing recognition among clinicians that any type of sexual behavior can become pathologically impulsive or compulsive. There is considerable debate about the terminology for this condition, its diagnostic criteria, assessment methods and treatment approaches. In the absence of clear consensus, clinicians are struggling with how to help the many men and women who suffer from this type of problem and seek help. This chapter reviews the author’s assessment and treatment approach. Clinicians will need to keep abreast of the literature as new research evolves and follow the continued debate around this controversial area.


2018 ◽  
Vol 69 (4) ◽  
pp. 183-189 ◽  
Author(s):  
Terje Tüür-Fröhlich

Abstract. A growing number of scientific societies, journals, institutions and researchers are protesting against and fighting the “almighty” Journal Impact Factor. The best-known initiative of protest and recommendations is DORA, The San Francisco Declaration on Research Assessment. The criticism targets the flawed, biased and intransparent nature of quantitative evaluation procedures and their negative effects on scientific staff, especially on early-career researchers and their scientific development, and in particular the subtle discrimination against the humanities and social sciences. We should not remain uncritically trapped in the metrics paradigm and cheer the flood of new indicators from scientometrics. The slogan “Putting Science into the Assessment of Research” must not be understood in a scientistically reduced sense. Social phenomena cannot be studied with natural-science methods alone. Critique and transformation of the social activities that call themselves “evaluation” require perspectives from the social sciences and the philosophy of science. Evaluation is not a value-neutral enterprise; it is closely tied to power, domination and the distribution of resources.


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that instead actions should be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations for how this should be improved.

OSI2016 Workshop Question: Impact Factors. Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason—less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean?
What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?
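For readers unfamiliar with the metric under discussion: the standard two-year JIF for year Y is the number of citations received in Y by items the journal published in Y−1 and Y−2, divided by the number of citable items published in those two years. A short sketch; the journal and all numbers are invented toy data:

```python
def journal_impact_factor(citations_by_year, citable_items_by_year, year):
    """Two-year Journal Impact Factor for `year`: citations received in
    `year` to items published in the two preceding years, divided by the
    number of citable items published in those two years."""
    window = (year - 1, year - 2)
    cites = sum(citations_by_year[year].get(y, 0) for y in window)
    items = sum(citable_items_by_year[y] for y in window)
    return cites / items

# Toy numbers for a hypothetical journal:
citations = {2016: {2015: 150, 2014: 90}}   # citations received in 2016, by cited year
items = {2015: 60, 2014: 40}                # citable items published per year
print(journal_impact_factor(citations, items, 2016))  # (150 + 90) / (60 + 40) = 2.4
```

The simplicity of the calculation is part of the report’s point: a single journal-level average says little about the quality of any individual article or researcher.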


Author(s):  
Vivin Ayu Lestari

E-government is an effort to utilize information and communication technology, especially the internet, to improve the quality of public services, and is generally implemented as a web-based application. Usability is one of the important quality criteria for the success of a web application. In this study we developed a framework for usability evaluation in e-government consisting of eight stages: (1) determining the evaluation objectives, (2) determining the usability aspects, (3) determining the usability metrics, (4) selecting candidate usability evaluation methods, (5) determining the criteria the methods must meet, (6) evaluating the methods, (7) selecting the methods and constructing the instrument, and (8) evaluating usability. Applying this framework in an e-finance case study resulted in two methods being used: user testing and questionnaires. The usability evaluation of e-finance using the proposed framework yields levels of effectiveness, efficiency, and user satisfaction of 96%, 92%, and 70, respectively, and identifies 16 usability problems relating to the effectiveness and efficiency aspects.
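The three reported measures can be computed from user-testing and questionnaire data. A hedged sketch using common ISO 9241-11-style definitions and the widely used System Usability Scale for satisfaction; the paper’s exact formulas may differ, and all input numbers are invented:

```python
# Common usability measures; the study's actual formulas are assumptions here.
def effectiveness(tasks_completed, tasks_attempted):
    """Share of tasks completed successfully, in percent."""
    return 100.0 * tasks_completed / tasks_attempted

def efficiency(productive_time, total_time):
    """Share of task time spent productively, in percent."""
    return 100.0 * productive_time / total_time

def sus_score(item_scores):
    """System Usability Scale (0-100) from ten 1-5 Likert items:
    odd-numbered items (positive wording) contribute score - 1,
    even-numbered items (negative wording) contribute 5 - score."""
    contributions = [
        (s - 1) if i % 2 == 0 else (5 - s)
        for i, s in enumerate(item_scores)
    ]
    return sum(contributions) * 2.5

print(effectiveness(24, 25))  # 96.0
print(efficiency(46, 50))     # 92.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Note that a SUS-style satisfaction score is on a 0-100 scale rather than a percentage, which would explain why the abstract reports "70" without a percent sign.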


2014 ◽  
Vol 1073-1076 ◽  
pp. 2734-2739
Author(s):  
Zhi Yong Tian ◽  
Feng Zheng

Research on order quantity plays an important role in logistics and supply chain (SC) management, whether the objective is traditional economic performance or low carbon. The paper briefly summarizes the research framework of the economic order quantity (EOQ), and introduces and reviews the new research field of carbon footprint order quantity (COQ). Compared with EOQ research, it finds that research on COQ is just beginning and that its assumptions still correspond to the “square root” era of EOQ a century ago. Based on related literature, the paper analyzes the effect of low carbon on the economy, especially the factors influencing order quantity, and points to important market forces affected by low carbon that the current COQ literature ignores. The paper then proposes a basic research approach for COQ. Finally, it provides several important topics for further COQ research.
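The “square root” era the abstract alludes to is the classic Wilson EOQ formula, Q* = sqrt(2DK/h). A minimal sketch with invented example numbers:

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Classic Wilson economic order quantity ("square root" formula):
    Q* = sqrt(2 * D * K / h), where D is annual demand in units,
    K the fixed cost per order, and h the holding cost per unit per year."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Toy example: 1200 units/year demand, $50 per order, $3/unit/year holding.
print(eoq(annual_demand=1200, order_cost=50, holding_cost=3))  # 200.0
```

COQ research, as the paper notes, so far largely reuses this kind of assumption set, replacing or augmenting the cost terms with carbon-footprint terms.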


Author(s):  
Thomas Mandl

Automatic quality assessment of Web pages is needed to complement human information work in the current situation of information overload. Several systems for this task have been developed and evaluated. Automatic quality assessments are most often based on the features of a Web page itself or on external information. Promising results have been achieved by systems that learn to associate human judgments with Web page features. Automatic evaluation of Internet resources according to various quality criteria is a new research field emerging from several disciplines. This chapter presents the most prominent systems and prototypes implemented so far and analyzes the knowledge sources exploited for these approaches.


Author(s):  
Sunil Chaudhary ◽  
Eleni Berki ◽  
Linfeng Li ◽  
Juri Valtanen

Public awareness is a significant factor in the battle against online identity theft (phishing). Advancing public readiness can be a strategic protection mechanism for citizens' vulnerability and privacy. Further, an effective research strategy against phishing combines increased social awareness with software quality and social computing. The latter will decrease the number of phishing victims and improve information systems quality. First, the authors discuss recent research results on software quality criteria used for the design of anti-phishing technologies. Second, it is argued that the dynamics of social surroundings affect citizens' trust and can compromise social security. Third, the authors outline basic research needs and strategic steps to be taken for the timely protection of citizens. Last, the authors propose strategic research directions for improving information systems total quality management through international collaborative research, focusing on: i) increasing social awareness; ii) predicting information phishing attempts; iii) adopting social computing approaches.

