Comments on Country Program Evaluations

2017 ◽  
pp. 96-98


2014 ◽  
Vol 58 (3) ◽  
pp. 262-277
Author(s):  
Jeanne Maree Allen ◽  
Julie Rimes

This article reports on ways in which one Australian independent school seeks to develop and sustain best practice and academic integrity in its programs through a system of ongoing program evaluation, involving a systematic, cyclical appraisal of the school’s suite of six faculties. A number of different evaluation methods have been and continue to be used, each developed to best suit the particular program under evaluation. In order to gain an understanding of the effectiveness of this process, we conducted a study into participants’ perceptions of the strengths and weaknesses of the four program evaluations undertaken between 2009 and 2011. Drawing on documentary analysis of the evaluation reports and analysis of questionnaire data from the study participants, a number of findings were generated. These findings are provided and discussed, together with suggestions about ways in which the conceptualisation and conduct of school program evaluations might be enhanced.


2016 ◽  
Vol 31 (2) ◽  
pp. 285-319 ◽  
Author(s):  
Kathleen A. Fox ◽  
John A. Shjarback

While some attention has been paid to “what works” to reduce crime, little is known about the effectiveness of programs designed to reduce victimization. This study systematically reviews 83 program evaluations to identify what works to (a) reduce victimization, (b) enhance beliefs/attitudes about victims, and (c) improve knowledge/awareness of victimization issues. Evidence-based findings are organized around 4 major forms of victimization, including bullying, intimate partner violence, sexual abuse, and other general forms of victimization. Determining whether certain types of programs can reduce the risk of victimization has important implications for improving people’s quality of life. Based on our findings, we offer several promising directions for the next generation of research on evaluating victimization programs. The goal of this study is to improve the strength of future program evaluations, replications, and other systematic reviews as researchers and practitioners continue to learn what works to reduce victimization.


2015 ◽  
Vol 130 (3) ◽  
pp. 1117-1165 ◽  
Author(s):  
Hunt Allcott

Abstract
“Site selection bias” can occur when the probability that a program is adopted or evaluated is correlated with its impacts. I test for site selection bias in the context of the Opower energy conservation programs, using 111 randomized control trials involving 8.6 million households across the United States. Predictions based on rich microdata from the first 10 replications substantially overstate efficacy in the next 101 sites. Several mechanisms caused this positive selection. For example, utilities in more environmentalist areas are more likely to adopt the program, and their customers are more responsive to the treatment. Also, because utilities initially target treatment at higher-usage consumer subpopulations, efficacy drops as the program is later expanded. The results illustrate how program evaluations can still give systematically biased out-of-sample predictions, even after many replications.


1989 ◽  
Vol 17 (2) ◽  
pp. 103-114 ◽  
Author(s):  
William G. Doerner ◽  
John C. Speir ◽  
Benjamin S. Wright

2017 ◽  
Vol 17 (2) ◽  
pp. 11-19 ◽  
Author(s):  
Alison Rogers ◽  
Madeleine Bower ◽  
Cathy Malla ◽  
Sharon Manhire ◽  
Deborah Rhodes

Evaluation is understood to be important for ensuring programs and organisations are effective and relevant. Evaluation findings, however, can be inappropriate or of little use if those who have an in-depth understanding of the context are not involved in guidance, direction or implementation. The Fred Hollows Foundation's Indigenous Australia Program (IAP), with more than half of its employees identifying as Aboriginal and/or Torres Strait Islander, has developed a cultural protocol for evaluation to strengthen the quality of its program evaluations, whether they are carried out by internal staff or external evaluators. The development of the protocol was initiated after an evaluation capacity-building appraisal identified the potential benefits of increased external support to undertake evaluation activities, and the requirement for this external support to be provided in a culturally appropriate manner. The protocol was developed by combining IAP's experience and knowledge with contemporary evaluation and research approaches, particularly those developed for use in cross-cultural settings, with the aim of producing a meaningful and locally relevant resource. The protocol aims to assist staff and external evaluators to ensure that evaluation activities are undertaken with the appropriate respect for, and participation of, Aboriginal and Torres Strait Islander individuals and communities. Consistent with IAP principles, those involved in the process of developing the protocol sought to ensure that engagement between staff, evaluators and evaluation participants occurs in culturally appropriate ways. IAP believes that the protocol will contribute to stronger evaluation practices, deeper understanding and, thus, more useful outcomes. This article describes the process of engaging IAP staff with contextual evidence and the literature around cultural protocols to create a meaningful tool that is useful in our particular context.
The process of development described will be useful for: organisations undertaking initiatives that source external evaluators; internal evaluators engaging with external expertise; or evaluators linking with organisations working in a cross-cultural setting.


2006 ◽  
Vol 11 (1) ◽  
pp. 57-62 ◽  
Author(s):  
Mary Jo Kreitzer ◽  
Lixin Zhang ◽  
Michelle J. Trotter

Health professionals have jobs that are inherently stressful and most have had little opportunity or encouragement to focus on self-care. Over the past 10 years, professional development programs such as the “Courage to Teach” have been developed for teachers in primary and secondary schools. Reported outcomes include personal and professional growth, increased satisfaction and well-being, and renewed passion and commitment for teaching. Based on this model of transformational professional development, a program was developed for health professionals, the Inner Life Renewal Program. Four cohorts of health professionals have completed the program. This brief report provides descriptive information regarding the structure, format, and process of the program and evaluative data based on program evaluations and participant interviews. Outcomes reported by participants include an increase in self-awareness, improved listening skills and relationships with colleagues, and an increased ability to manage or cope with stress.


2016 ◽  
Vol 8 (3) ◽  
pp. 384-389 ◽  
Author(s):  
Kathryn M. Andolsek ◽  
Rhea F. Fortune ◽  
Alisa Nagler ◽  
Chrystal Stancil ◽  
Catherine Kuhn ◽  
...  

ABSTRACT
Background  The Accreditation Council for Graduate Medical Education (ACGME) requires programs to engage annually in program evaluation and improvement.
Objective  We assessed the value of creating educational competency committees (ECCs) that use successful elements of 2 established processes: institutional special reviews and institutional oversight of annual program evaluations.
Methods  The ECCs used a template to review programs' annual program evaluations. Results were aggregated into an institutional dashboard. We calculated the costs, sensitivity, specificity, and predictive value by comparing programs required to have a special review with those that had ACGME citations, requests for a progress report, or a data-prompted site visit. We assessed the value for professional development through a participant survey.
Results  Thirty-two ECCs involving more than 100 individuals reviewed 237 annual program evaluations over a 3-year period. The ECCs required less time than internal reviews. The ECCs rated 2 to 8 programs (2.4%–9.8%) as “noncompliant.” One to 13 programs (1.2%–14.6%) had opportunities for improvement identified. Institutional improvements were recognized using the dashboard. Zero to 13 programs (0%–16%) were required to have special reviews. The sensitivity of the decision to have a special review was 83% to 100%; specificity was 89% to 93%; and negative predictive value was 99% to 100%. The total cost was $280 per program. Of the ECC members, 86% to 95% reported their participation enhanced their professional development, and 60% to 95% believed the ECC benefited their program.
Conclusions  Educational competency committees facilitated the identification of institution-wide needs, highlighted innovation and best practices, and enhanced professional development. The cost, sensitivity, specificity, and predictive value indicated good value.

