Evaluation Capacity Building
Recently Published Documents


TOTAL DOCUMENTS: 99 (five years: 23)

H-INDEX: 10 (five years: 2)

2021 · pp. 109821402096318 · Author(s): Kristen Rohanna

Evaluation practices continue to evolve, particularly in areas related to formative, participatory, and improvement approaches. Improvement science is one such evaluative practice. Its strength is that it embraces the knowledge and experience of stakeholders and frontline workers, who are often tasked with leading improvement activities in their organizations. However, very little guidance exists on how to develop this crucial improvement capacity. The evaluation capacity building literature has the potential to fill this gap. This multiple-methods case study follows a networked improvement community's first year in a public education setting, as network leaders sought to build capacity using Preskill and Boyle's multidisciplinary model as their guiding framework. The purpose of the study was to better understand how to build improvement science capacity, along with what facilitates implementation and what learnings prove beneficial. The article ends by reconceptualizing and extending Preskill and Boyle's model for improvement science networks.


2021 · Vol 7 · pp. 71-95 · Author(s): Elena F. Moretti

This article describes a research project focused on evaluation capacity building and internal evaluation practice in a small sample of early learning services in Aotearoa New Zealand. Poor evaluation practice in this context has persisted for several decades, and capacity building attempts have had limited impact. Multiple methods were used to gather data on the factors and conditions that motivated successful evaluation capacity building and internal evaluation practice in five unusually high-performing early learning services. The early learning sector context is described and discussed in relation to existing research on evaluation capacity building in organisations. This is followed by a brief overview of the research methodology, with the majority of the article devoted to findings and areas for future exploration and research. Quotes from the research participants illustrate their views, and the views of the wider early learning sector, on evaluation matters. Findings suggest that motivation is hindered by a widespread view of internal evaluation as overly demanding and minimally valuable. In addition, some features of the Aotearoa New Zealand early learning context mean that accountability factors are not effective motivators for evaluation capacity building. Early learning service staff are more motivated to engage in evaluation by factors and conditions related to their understandings of personal capability, guidance and support strategies, and the alignment of internal evaluation processes with positive outcomes for children. Given the limited sample size and scope of this study, the strength of agreement among participants, particularly considering the variation in their early learning service contexts, supports the validity of the findings. Understanding what motivates evaluation capacity building in this context will contribute to discussions of organisational evaluation, internal evaluation, social-sector evaluation, and evaluation capacity building.


2021 · Vol 20 (3) · pp. 368-381 · Author(s): Susanne Buehrer, Evanthia Kalpazidou Schmidt, Dorottya Rigler, Rachel Palmen

Evaluation cultures and evaluation capacity building vary greatly across the European Union. Western European countries such as Austria, Germany, Denmark, and Sweden are regarded as leaders in evaluation: they have built up well-established evaluation cultures and carry out systematic evaluations of programmes and institutions. In contrast, Central and Eastern European (CEE) countries are still working to establish evaluation practices and to further develop their evaluation cultures. In Hungary, for example, there is no established research and innovation evaluation practice, let alone one that specifically considers gender equality in research and innovation evaluations, with the exception of research and innovation programmes financed by the EU Structural Funds. Based on the results of a Horizon 2020 project, we apply a context-sensitive evaluation concept in Hungary that enables programme owners and evaluators to develop a tailor-made design and impact model for their planned or ongoing gender equality interventions. The development of this evaluation approach was based on a thorough analysis of the literature and on 19 case studies, building on documentary analysis and semi-structured interviews. The article shows that this approach is also applicable in countries whose overall evaluation culture is still catching up. The approach has two distinctive features: first, the evaluation is context-sensitive; second, it makes it possible not only to capture effects on gender equality itself but also to anticipate effects on research and innovation. Such effects can include, for example, a stronger orientation of research towards societal needs, which makes the approach particularly interesting for private companies.


2021 · pp. 109821402110305 · Author(s): Gretchen S. Clarke, Elizabeth B. Douglas, Marnie J. House, Kristen E.G. Hudgins, Sofia Campos, ...

This article describes our experience of conducting a 5-year, culturally responsive evaluation of a federal program with Indigenous communities. It describes how we adapted tenets of "participatory evaluation models" to ensure cultural relevance and empowerment, and it offers recommendations for evaluators engaged in similar efforts. The evaluation engaged stakeholders through a Steering Committee and an Evaluation Working Group in designing and implementing the evaluation. That engagement brought attention to Indigenous cultural values in developing a program logic model and medicine wheel, and in gathering local perspectives through storytelling to build understanding of community traditions. Our ongoing assessment of program grantees' needs shaped our approach to evaluation capacity building and the development of a diverse array of experiential learning opportunities and user-friendly tools and resources. We present practical strategies, drawn from lessons learned during the evaluation design and implementation phases of our project, that may be useful to other evaluators.


2021 · Vol 2021 (169) · pp. 97-116 · Author(s): Monica Hargraves, Jane Buckley, Jennifer Brown Urban, Miriam R. Linver, Lisa M. Chauveron, ...

2021 · Vol 2021 (169) · pp. 79-95 · Author(s): Lisa M. Chauveron, Jennifer Brown Urban, Satabdi Samtani, Milira Cox, Leslie Moorman, ...

2021 · pp. 109821402091721 · Author(s): Lisa M. Chauveron, Satabdi Samtani, Megan G. Groner, Jennifer Brown Urban, Miriam R. Linver

Although experts agree that including diverse stakeholders enhances quality and equity in evaluation design and implementation, diverse voices are often omitted. Such omissions are particularly antithetical to the principles of youth character development; evaluations of these programs should strive to include voices from a range of social, economic, community, and demographic perspectives. One innovative national evaluation capacity building initiative, the Partnerships for Advancing Character Program Evaluation (PACE) project, paired practitioners from community-based youth programs with evaluation professionals to enhance stakeholders' roles in evaluation. PACE promoted stakeholder identification and inclusion through group exercises, partnership work, and coaching sessions. Using a mixed methods design with interviews, retrospective pretest-posttest surveys, and observational data, the triangulated data addressed the inclusion of diverse stakeholders in the evaluation process, diverse perspectives on program performance, and the connection of diverse input to evaluation design. Postprogram findings indicate that participants included more varied and diverse stakeholder perspectives in all three areas. Implications for programs and evaluations are discussed.


2020 · pp. 0193841X2097624 · Author(s): Lily Zandniapour, Mary Hyde

The Social Innovation Fund (SIF), a program of the Corporation for National and Community Service that received funding from 2010 to 2016, is one of a set of tiered-evidence initiatives designed and implemented at the federal level during President Obama's administration. The initiative had two key objectives: (1) invest in promising interventions that address social and community challenges and grow their impact, and (2) invest in evaluation and capacity building to support the development and use of rigorous evidence that measures the effectiveness of each funded intervention (i.e., to "move the evidence needle") and informs decision making. When put through a robust national impact evaluation of its own, the SIF proved successful in strengthening and sustaining its implementing partners' capacity to conduct rigorous evaluations. It also spurred high-quality local evaluations that are building knowledge and a body of evidence across the supported program models to inform practice. The SIF's evaluation technical assistance program was critical to this success, and its design and approach therefore hold interesting lessons for the larger field. This article discusses the structure and key features of the SIF as a grant-making model; its evaluation requirements; its embedded approach and process for evaluation capacity building and the delivery of technical assistance; the tools and resources it generated to support its goals; the evidence supporting its success; and how those lessons can inform other organizations and initiatives.

