Dependable computing system evaluation criteria: SQUALE proposal

Author(s):  
Y. Deswarte
2007 ◽  
Vol 90 (12) ◽  
pp. 35-46

Author(s):  
Yoshimitsu Yanagawa ◽  
Takuya Takahara ◽  
Takahide Mizuno ◽  
Hirobumi Saito

2016 ◽  
Vol 16 (4) ◽  
pp. 14-20 ◽  
Author(s):  
Ralph Renger

This article describes how system evaluation theory (SET) guided the evaluation of cardiac care response system efficiency in seven rural areas of the United States. Specifically, the article focuses on the approach and methods used to evaluate system feedback mechanisms, one key factor affecting system efficiency. Mixed methods were applied to evaluate five criteria of system feedback efficiency: frequency, timeliness, credibility, specificity, and relevance. Examples from the cardiac care response system evaluation are used to illustrate each of the evaluation criteria. The discussion contrasts the role of the evaluator in system versus program evaluation, notes the post-hoc support for the SET system attributes affecting system efficiency, and offers additional considerations for evaluating system feedback mechanisms.


2018 ◽  
Vol 13 (03) ◽  
pp. 626-638 ◽  
Author(s):  
Shoukat H. Qari ◽  
Hussain R. Yusuf ◽  
Samuel L. Groseclose ◽  
Mary R. Leinhos ◽  
Eric G. Carbone

ABSTRACT
Objectives: The US Centers for Disease Control and Prevention (CDC)-funded Preparedness and Emergency Response Research Centers (PERRCs) conducted research from 2008 to 2015 aimed at improving the complex public health emergency preparedness and response (PHEPR) system. This paper summarizes PERRC studies that addressed the development and assessment of criteria for evaluating PHEPR and metrics for measuring its efficiency and effectiveness.
Methods: We reviewed 171 PERRC publications indexed in PubMed between 2009 and 2016. These publications derived from 34 PERRC research projects. We identified publications that addressed the development or assessment of criteria and metrics pertaining to PHEPR systems and describe the evaluation methods used and tools developed, the system domains evaluated, and the metrics developed or assessed.
Results: We identified 29 publications from 12 of the 34 PERRC projects that addressed PHEPR system evaluation criteria and metrics. We grouped each study into 1 of 3 system domains, based on the metrics developed or assessed: (1) organizational characteristics (n = 9), (2) emergency response performance (n = 12), and (3) workforce capacity or capability (n = 8). These studies addressed PHEPR system activities including responses to the 2009 H1N1 pandemic and the 2011 tsunami, as well as emergency exercise performance, situational awareness, and workforce willingness to respond. Both PHEPR system process and outcome metrics were developed or assessed by PERRC studies.
Conclusions: PERRC researchers developed and evaluated a range of PHEPR system evaluation criteria and metrics that should be considered by system partners interested in assessing the efficiency and effectiveness of their activities. Nonetheless, the monitoring and measurement problem in PHEPR is far from solved. Lack of standard measures that are readily obtained or computed at local levels remains a challenge for the public health preparedness field.
(Disaster Med Public Health Preparedness. 2019;13:626-638)

