Quality Assessment in Co-developing Climate Services in Norway and the Netherlands

2021 ◽  
Vol 3 ◽  
Author(s):  
Scott Bremer ◽  
Arjan Wardekker ◽  
Elisabeth Schøyen Jensen ◽  
Jeroen P. van der Sluijs

Climate services, and research on climate services, have developed in tandem over the past 20 years, with quality assessment a central issue for orienting both practitioners and researchers. However, quality assessment is becoming more complex as the field evolves, the range and types of climate services expand, and there is an increasing appeal to co-production of climate services. Scholars describe climate services as emerging from complex knowledge systems, where information moves through institutions and actors attribute various qualities to these services. Seeing climate services' qualities as derived from and activated in knowledge systems, we argue for comprehensive assessment conducted with an extended peer community of actors from the system: co-evaluation. Drawing inspiration from Knowledge Quality Assessment and post-normal science traditions, we develop the Co-QA assessment framework: a checklist-based framework for the co-creation of criteria to assess the quality of climate services. The Co-QA framework is a deliberation support tool for critical dialogue on the quality of climate services within a co-construction collective. It provides a novel, structured, and comprehensive way to engage an extended peer community in the process of quality assessment of climate services. We demonstrate how we tested the Co-QA, through interviews, focus groups, and desktop research, in two co-production processes of innovative climate services: an ex post evaluation of the "Klimathon" in Bergen, Norway, and an ex ante evaluation for designing place-based climate services in Dordrecht, the Netherlands. These cases reveal the challenges of assessing climate services in complex knowledge systems, where many concerns cannot be captured in straightforward metrics. They also show the utility of the Co-QA in facilitating co-evaluation.
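
To make the checklist idea concrete, here is a minimal sketch of how a Co-QA-style checklist of co-created criteria and stakeholder ratings could be represented; the criterion names, actors, and data structure are illustrative assumptions, not the framework's actual contents.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One co-created quality criterion for a climate service."""
    name: str
    description: str
    ratings: dict[str, str] = field(default_factory=dict)  # actor -> qualitative rating

@dataclass
class CoQAChecklist:
    """A checklist of criteria negotiated by an extended peer community (hypothetical structure)."""
    service: str
    criteria: list[Criterion] = field(default_factory=list)

    def add_rating(self, criterion_name: str, actor: str, rating: str) -> None:
        """Record one actor's qualitative judgement on a named criterion."""
        for c in self.criteria:
            if c.name == criterion_name:
                c.ratings[actor] = rating
                return
        raise KeyError(f"Unknown criterion: {criterion_name}")

# Example: criterion and actor names are placeholders, not from the paper.
checklist = CoQAChecklist(service="Klimathon ex post evaluation")
checklist.criteria.append(Criterion("salience", "Relevance to users' decisions"))
checklist.add_rating("salience", "municipal planner", "high")
```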

2021 ◽  
Author(s):  
Marina Baldissera Pacchetti ◽  
Suraje Dessai ◽  
Seamus Bradley ◽  
David A. Stainforth

The kind of long-term regional climate information that is increasingly important for making adaptation decisions varies in temporal and spatial resolution, and this information is usually derived from Global Climate Models (GCMs). However, information about future changes in regional climate also comes with high degrees of uncertainty, an important element of the information given the high decision stakes of climate change adaptation.

Given these considerations, Baldissera Pacchetti et al. (in press) have proposed a framework for evaluating the quality of regional climate information intended to inform decision making. Evaluating the quality of this information is particularly important for information that is passed on to decision makers in the form of climate services. The framework has five dimensions along which quality can be assessed: diversity, completeness, theory, adequacy for purpose, and transparency.

Here, we critically evaluate this framework by applying it to one example of climate information for adaptation: the UK Climate Projections of 2018 (UKCP18). There are two main motivations for the choice of UKCP18. First, this product embodies some of the main modelling strategies that drive the field of climate science today. For example, the land projections produced by UKCP18 provide probabilistic uncertainty assessments using multi-model and perturbed-physics ensembles (MME and PPE), use locally developed GCMs and models from the international Coupled Model Intercomparison Project (CMIP), perform dynamical downscaling to produce information at the regional scale, and refine this information further with convection-permitting models. Second, the earlier version of the UK Climate Projections (UKCP09) received criticism from philosophers of science. The quality assessment framework proposed by Baldissera Pacchetti et al. partly aims to reveal whether the pitfalls philosophers identified in UKCP09 persist in UKCP18.

We apply the quality assessment framework to four strands of the UKCP18 land projections and illustrate whether, and to what extent, each of these strands satisfies the quality dimensions of the framework. Where appropriate, we show whether quality varies depending on the variable of interest within a particular strand or across strands. For example, the theory dimension highlights that epistemic quality is better satisfied for estimates of variables that depend on thermodynamic principles (e.g., global average temperature) than for those that depend on fluid-dynamical theory (e.g., precipitation) (see, e.g., Risbey and O'Kane 2011), independently of the strand under assessment. We conclude that, for those dimensions that can be evaluated, UKCP18 is not sufficiently epistemically reliable to provide high-quality information for all of the products provided.
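
As an illustration of how such a dimension-by-strand assessment could be recorded, the sketch below tabulates qualitative verdicts for the four UKCP18 land-projection strands along the five dimensions. The structure and the example verdict are our illustrative assumptions, not the authors' instrument or findings.

```python
# The five quality dimensions of Baldissera Pacchetti et al.
DIMENSIONS = ("diversity", "completeness", "theory",
              "adequacy for purpose", "transparency")

# The four UKCP18 land-projection strands; verdicts filled in per dimension.
assessments: dict[str, dict[str, str]] = {
    "probabilistic projections": {},
    "global (60 km)": {},
    "regional (12 km)": {},
    "local (2.2 km)": {},
}

def record(strand: str, dimension: str, verdict: str) -> None:
    """Store a qualitative judgement, guarding against unknown dimensions."""
    if dimension not in DIMENSIONS:
        raise ValueError(f"Unknown dimension: {dimension}")
    assessments[strand][dimension] = verdict

# Placeholder verdict paraphrasing the theory-dimension example above:
record("global (60 km)", "theory",
       "stronger for thermodynamic variables (temperature) than fluid-dynamical ones (precipitation)")
```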


Author(s):  
Svetlana V. Savkina

The article presents the results of testing a comprehensive methodology for assessing the quality of electronic book exhibitions (EBE). The author describes the design of an expert system that allows EBE assessment to be carried out without the direct participation of experts, and compares assessments produced by human experts with those produced by the expert system.
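
As a rough illustration of expert-free automated scoring, the sketch below uses a weighted checklist of exhibition attributes; the attributes, weights, and scale are hypothetical and heavily simplified relative to a real expert system.

```python
# Hypothetical attributes and weights; not the article's actual criteria.
WEIGHTS = {"content": 0.4, "design": 0.3, "navigation": 0.2, "accessibility": 0.1}

def score_exhibition(marks: dict[str, float]) -> float:
    """Weighted sum of per-attribute marks on an assumed 0-10 scale."""
    return sum(WEIGHTS[attr] * marks.get(attr, 0.0) for attr in WEIGHTS)

print(score_exhibition({"content": 8, "design": 7, "navigation": 9, "accessibility": 6}))  # 7.7
```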


Author(s):  
В.Г. Антоненко ◽  
Н.В. Шилова ◽  
Е.Н. Лукаш ◽  
Э.Р. Бабкеева ◽  
В.Н. Малахов

We report the results of an expert quality assessment of cytogenetic investigations (preparation of cytogenetic slides and chromosomal analysis) carried out in laboratories of the Russian Federation within the "FSVOK" interlaboratory comparison scheme in 2018-2019. The most common causes of unsatisfactory assessment results and possible ways to improve the quality of cytogenetic investigations are discussed.


2017 ◽  
pp. 139-145
Author(s):  
R. I. Hamidullin ◽  
L. B. Senkevich

We study the quality of cost-estimate documentation developed at all stages of large construction projects in the oil and gas industry. The main problems that arise in construction organizations are identified. We analyze the choice of an appropriate mathematical-modelling methodology for the business process under investigation, with the aim of improving budget calculations, assessing the quality of estimates, and establishing criteria for the automation of design-estimate work.


2015 ◽  
pp. 95-103 ◽  
Author(s):  
Dirk P. Vermeulen

Technological beet quality has always been important to processors of sugar beet. An investigation into the development of beet quality in the Netherlands since 1980 shows that it has improved significantly. The internal quality parameters traditionally determined in the beet laboratory, i.e. sugar content, Na, K, and α-amino N, all show an improving trend over the years. In the factories, better beet quality has led to lower lime consumption in juice purification and significantly higher thick-juice purity. In 2013, Suiker Unie introduced serial analysis of the glucose content in beet brei as part of the routine quality assessment of the beet. The invert sugar content is subsequently calculated from the glucose content using a new correlation. The background, the trial phase, and the first experiences with the glucose analyzer are discussed.


Author(s):  
Jacob Stegenga

Medical scientists employ ‘quality assessment tools’ to assess evidence from medical research, especially from randomized trials. These tools are designed to take into account methodological details of studies, including randomization, subject allocation concealment, and other features of studies deemed relevant to minimizing bias. There are dozens of such tools available. They differ widely from each other, and empirical studies show that they have low inter-rater reliability and low inter-tool reliability. This is an instance of a more general problem called here the underdetermination of evidential significance. Disagreements about the quality of evidence can be due to different—but in principle equally good—weightings of the methodological features that constitute quality assessment tools. Thus, the malleability of empirical research in medicine is deep: in addition to the malleability of first-order empirical methods, such as randomized trials, there is malleability in the tools used to evaluate first-order methods.
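
The underdetermination point lends itself to a toy worked example: two tools that weight the same methodological features differently, both defensibly, reach different verdicts about the same trial. All features, weights, and thresholds below are invented for illustration; they are not drawn from any real assessment tool.

```python
# A trial's methodological profile: 1.0 = feature present, 0.0 = absent.
features = {"randomization": 1.0, "allocation_concealment": 0.0, "blinding": 1.0}

# Two in-principle equally defensible weightings of the same features.
tool_a = {"randomization": 0.5, "allocation_concealment": 0.3, "blinding": 0.2}
tool_b = {"randomization": 0.2, "allocation_concealment": 0.6, "blinding": 0.2}

def quality(weights: dict[str, float]) -> float:
    """Weighted quality score of the trial under one tool's weighting."""
    return sum(weights[f] * features[f] for f in features)

print(quality(tool_a))  # 0.7 -> counts as "high quality" under tool A
print(quality(tool_b))  # 0.4 -> counts as "low quality" under tool B
```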


2021 ◽  
Vol 8 ◽  
pp. 100177
Author(s):  
Stephanie Popping ◽  
Meaghan Kall ◽  
Brooke E. Nichols ◽  
Evelien Stempher ◽  
Lisbeth Versteegh ◽  
...  

2021 ◽  
Vol 11 (6) ◽  
pp. 2666
Author(s):  
Hafiz Muhammad Usama Hassan Alvi ◽  
Muhammad Shahid Farid ◽  
Muhammad Hassan Khan ◽  
Marcin Grzegorzek

Emerging 3D-related technologies such as augmented reality, virtual reality, mixed reality, and stereoscopy have seen remarkable growth due to their numerous applications in the entertainment, gaming, and electromedical industries. In particular, 3D television (3DTV) and free-viewpoint television (FTV) enhance viewers' television experience by providing immersion. They need an infinite number of views to provide full parallax to the viewer, which is not practical due to various financial and technological constraints. Therefore, novel 3D views are generated from a set of available views and their depth maps using depth-image-based rendering (DIBR) techniques. The quality of a DIBR-synthesized image may be compromised for several reasons, e.g., inaccurate depth estimation. Since depth is central to this application, inaccuracies in depth maps lead to different textural and structural distortions that degrade the quality of the generated image and result in a poor quality of experience (QoE). Therefore, quality assessment of DIBR-generated images is essential to guarantee a satisfactory QoE. This paper estimates the quality of DIBR-synthesized images and proposes a novel 3D objective image quality metric. The proposed algorithm measures both textural and structural distortions in the DIBR image by exploiting contrast sensitivity and the Hausdorff distance, respectively. The two measures are combined to estimate an overall quality score. Experimental evaluations performed on the benchmark MCL-3D dataset show that the proposed metric is reliable and accurate, and performs better than existing 2D and 3D quality assessment metrics.
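
The general recipe described above (a textural term based on contrast sensitivity and a structural term based on the Hausdorff distance, combined into one score) can be sketched as follows. The edge detector, contrast proxy, threshold, and combination weight `alpha` are our illustrative assumptions, not the authors' exact metric.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def edge_points(img: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Coordinates of strong-gradient pixels: a crude edge map."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return np.argwhere(mag > thresh * mag.max())

def structural_distortion(ref: np.ndarray, syn: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two images' edge maps."""
    e_ref, e_syn = edge_points(ref), edge_points(syn)
    return max(directed_hausdorff(e_ref, e_syn)[0],
               directed_hausdorff(e_syn, e_ref)[0])

def textural_distortion(ref: np.ndarray, syn: np.ndarray) -> float:
    """Absolute difference of RMS contrast, a simple contrast-sensitivity proxy."""
    def rms_contrast(img: np.ndarray) -> float:
        img = img.astype(float)
        mu = img.mean()
        return float(np.sqrt(((img - mu) ** 2).mean()) / (mu + 1e-8))
    return abs(rms_contrast(ref) - rms_contrast(syn))

def dibr_quality(ref: np.ndarray, syn: np.ndarray, alpha: float = 0.5) -> float:
    """Overall distortion score (higher = worse); alpha balances the two terms."""
    return (alpha * structural_distortion(ref, syn)
            + (1 - alpha) * textural_distortion(ref, syn))
```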


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3279
Author(s):  
Maria Habib ◽  
Mohammad Faris ◽  
Raneem Qaddoura ◽  
Manal Alomari ◽  
Alaa Alomari ◽  
...  

Maintaining a high quality of conversation between doctors and patients is essential in telehealth services, where efficient and competent communication is important to promote patient health. Assessing the quality of medical conversations is often handled through human auditory-perceptual evaluation. Typically, trained experts are needed for such tasks, as they follow systematic evaluation criteria. However, the rapid daily increase in consultations makes this evaluation process inefficient and impractical. This paper investigates the automation of the quality assessment process for patient-doctor voice-based conversations in a telehealth service using a deep-learning-based classification model. The data consist of audio recordings obtained from Altibbi, a digital health platform that provides telemedicine and telehealth services in the Middle East and North Africa (MENA). The objective is to assist Altibbi's operations team in evaluating the provided consultations in an automated manner. The proposed model is developed using three sets of features: features extracted at the signal level, at the transcript level, and at the combined signal and transcript levels. At the signal level, various statistical and spectral measures are calculated to characterize the spectral envelope of the speech recordings. At the transcript level, a pre-trained embedding model is utilized to capture the semantic and contextual features of the textual information. Additionally, the hybrid of the signal and transcript levels is explored and analyzed. The designed classification model relies on stacked layers of deep neural networks and convolutional neural networks. Evaluation results show that the model achieved a higher level of precision than the manual evaluation approach followed by Altibbi's operations team.
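
A hedged sketch of the hybrid architecture described above: a CNN branch over signal-level spectral features concatenated with a dense branch over a pre-trained transcript embedding, followed by stacked dense layers. Feature shapes, layer sizes, and the number of quality classes are assumptions, not Altibbi's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed shapes: MFCC-style time-frequency matrix, sentence-embedding size,
# and number of quality classes; all illustrative.
N_MFCC, N_FRAMES, EMBED_DIM, N_CLASSES = 40, 300, 768, 3

# Signal branch: convolutions over the spectral feature matrix.
sig_in = layers.Input(shape=(N_MFCC, N_FRAMES, 1), name="signal_features")
x = layers.Conv2D(32, 3, activation="relu")(sig_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Transcript branch: a fixed-size embedding from a pre-trained text model.
txt_in = layers.Input(shape=(EMBED_DIM,), name="transcript_embedding")
y = layers.Dense(128, activation="relu")(txt_in)

# Hybrid: concatenate both feature levels, then stacked dense layers.
z = layers.concatenate([x, y])
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(N_CLASSES, activation="softmax", name="quality_class")(z)

model = Model(inputs=[sig_in, txt_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```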

