evaluation capacity
Recently Published Documents


TOTAL DOCUMENTS: 207 (five years: 37)
H-INDEX: 18 (five years: 2)

PLoS ONE · 2022 · Vol 17 (1) · pp. e0262125
Author(s): Rochelle Tobin, Gemma Crawford, Jonathan Hallett, Bruce Richard Maycock, Roanna Lobo

Introduction: Public health policy and practice are strengthened by the application of quality evidence to decision making. However, there is limited understanding of how initiatives that support the generation and use of evidence in public health are operationalised. This study examines factors that support the internal functioning of a partnership, the Western Australian Sexual Health and Blood-borne Virus Applied Research and Evaluation Network (SiREN). SiREN aims to build research and evaluation capacity and increase evidence-informed decision making in a public health context.

Methods: This study was informed by systems concepts. It developed a causal loop diagram, a type of qualitative system model that illustrates the factors influencing the internal operation of SiREN. The causal loop diagram was developed through an iterative and participatory process with SiREN staff and management (n = 9) via in-depth semi-structured interviews (n = 4), workshops (n = 2), and meetings (n = 6).

Results: Findings identified critical factors that affected the functioning of SiREN. Central to SiREN's ability to meet its aims was its capacity to adapt within a dynamic system. Adaptation was facilitated by the flow of knowledge between SiREN and system stakeholders and by the expertise of the team. SiREN demonstrated credibility and capability, supporting the development of new partnerships and the strengthening of existing ones. This improved SiREN's ability to secure new funding and enhanced its sustainability and growth. SiREN actively balanced divergent stakeholder interests to increase sustainability.

Conclusion: The collaborative development of the diagram facilitated a shared understanding of SiREN. Adaptability was central to SiREN achieving its aims. Monitoring the ability of public health programs to adapt to the needs of the systems in which they work is important for evaluating effectiveness. The detailed analysis of the structure of SiREN and how this affects its operation provides practical insights for those interested in establishing a similar project.
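A causal loop diagram of the kind described above can be represented as a directed graph whose edges carry a polarity (+1 when two factors move in the same direction, -1 when they move in opposite directions); multiplying the polarities around a closed path tells you whether a feedback loop is reinforcing or balancing. The sketch below illustrates the idea only; the factor names are illustrative placeholders, not the variables from the SiREN study.

```python
# A causal loop diagram (CLD) as a signed directed graph.
# Edge polarity: +1 = same-direction influence, -1 = opposite-direction.
# Factor names are hypothetical, loosely echoing themes in the abstract.
edges = {
    ("knowledge_flow", "adaptation"): +1,
    ("adaptation", "credibility"): +1,
    ("credibility", "partnerships"): +1,
    ("partnerships", "funding"): +1,
    ("funding", "sustainability"): +1,
    ("sustainability", "knowledge_flow"): +1,
}

def loop_polarity(cycle):
    """Multiply edge polarities around a closed path.
    A positive product marks a reinforcing (R) loop,
    a negative product a balancing (B) loop."""
    product = 1
    # Pair each node with its successor, wrapping back to the start.
    for src, dst in zip(cycle, cycle[1:] + cycle[:1]):
        product *= edges[(src, dst)]
    return "reinforcing" if product > 0 else "balancing"

cycle = ["knowledge_flow", "adaptation", "credibility",
         "partnerships", "funding", "sustainability"]
print(loop_polarity(cycle))  # reinforcing
```

Every edge in this toy loop is positive, so the loop is reinforcing: growth in any one factor feeds back to amplify itself. A single -1 edge (say, stakeholder tension dampening adaptation) would flip the product and make the loop balancing.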


2021 · pp. 109821402096318
Author(s): Kristen Rohanna

Evaluation practices are continuing to evolve, particularly in areas related to formative, participatory, and improvement approaches. Improvement science is one such practice. Its strength is that it embraces the knowledge and experience of stakeholders and frontline workers, who are often tasked with leading improvement activities in their organizations. However, very little guidance exists on how to develop crucial improvement capacity. The evaluation capacity building literature has the potential to fill this gap. This multiple-methods case study follows a networked improvement community's first year in a public education setting, as network leaders sought to build capacity by adopting Preskill and Boyle's multidisciplinary model as the guiding framework. The purpose of this study was to better understand how to build improvement science capacity, what facilitates implementation, and what beneficial learnings emerge. The article ends by reconceptualizing and extending Preskill and Boyle's model for improvement science networks.


2021 · Vol 7 · pp. 71-95
Author(s): Elena F. Moretti

This article describes a research project focused on evaluation capacity building and internal evaluation practice in a small sample of early learning services in Aotearoa New Zealand. Poor evaluation practice in this context has persisted for several decades, and capacity building attempts have had limited impact. Multiple methods were used to gather data on the factors and conditions that motivated successful evaluation capacity building and internal evaluation practice in five unusually high-performing early learning services. The early learning sector context is described and discussed in relation to existing research on evaluation capacity building in organisations. This is followed by a brief overview of the research methodology, with the majority of the article devoted to findings and areas for future exploration and research. Quotes from the research participants are used to illustrate their views, and the views of the wider early learning sector, on evaluation matters. Findings suggest that motivation is hindered by a widespread view of internal evaluation as overly demanding and minimally valuable. In addition, some features of the Aotearoa New Zealand early learning context mean that accountability factors are not effective motivators for evaluation capacity building. Early learning service staff are more motivated to engage in evaluation by factors and conditions related to their understandings of personal capability, guidance and support strategies, and the alignment of internal evaluation processes with positive outcomes for children. The strength of agreement within the limited sample size and scope of this study, particularly considering the variation in the research participants' early learning service contexts, supports the validity of the findings. Understandings of what motivates evaluation capacity building in this context will contribute to discussions of organisational evaluation, internal evaluation, social-sector evaluation, and evaluation capacity building.


2021 · Vol 16 (4) · pp. 52-69
Author(s): Barry A. Garst, James Pann, Tiffany Berry, Gretchen Biesecker, Jason Spector, et al.

Youth-serving organizations seek effective and cost-efficient solutions to build evidence and advance their impact. Common challenges include choosing data systems or assessments, budgeting and planning for third-party studies, and refining measurement and outcomes when programs expand or change. Evaluation advisory boards (EABs) are a low-cost way to add evaluation capacity and can be mutually beneficial to both youth-serving organizations and evaluation experts. Previous research suggests that EABs may encourage meaningful use of data, support internal evaluators, and facilitate difficult conversations among stakeholders. However, there are very few examples of successful EABs in practice. This paper shares the perspectives of EAB members and organizational evaluation leaders from a large national after-school program, After-School All-Stars (ASAS), including (a) a description of the benefits of EABs, (b) how EABs may be especially helpful within the context of the COVID-19 pandemic, and (c) examples of youth-serving organizations' EABs. The experiences and lessons learned by ASAS and its EAB are generalizable to other non-profit youth development programs. Recommendations for structuring EABs based on organizational goals are provided.


2021 · Vol 2021 (170) · pp. 101-111
Author(s): KaYing Vang, Marah Moore, Claire Nicklin

2021 · Vol 36 (1)
Author(s): Btissam El Hassar, Cheryl Poth, Rebecca Gokiert, Okan Bulut

Organizations are required to evaluate their programs for both learning and accountability purposes, which has increased the need to build their internal evaluation capacity. A remaining challenge is access to tools that lead to valid evidence supporting internal capacity development. The authors share practical insights from the development and use of the Evaluation Capacity Needs Assessment tool and framework, and implications for using its data to make concrete decisions within Canadian contexts. The article refers to validity evidence generated from factor analyses and structural equation modelling and describes how applying the framework can identify individual and organizational evaluation capacity strengths and gaps, concluding with practice considerations and future directions for this work.


2021 · Vol 16 (1) · pp. 100-125
Author(s): Elizabeth Sparks, Michelle Molina, Natalie Shepp, Fiona Davey

Active engagement of youth participants in the evaluation process is an increasingly sought-after method, but the field can still benefit from new approaches that ease the implementation of youth participatory evaluation. Meaningful youth engagement in the evaluation process is particularly advantageous under the 4-H thriving model because of its potential to contribute to positive youth development, foster relationship building, enhance evaluation capacity, and improve program quality through improved evaluations. This program sought to engage youth actively in the evaluation process by breaking it into clear and manageable steps: evaluation design, data collection, data interpretation and analysis, reporting results, and outlining programmatic change. To achieve this aim, program staff designed the Evaluation Skill-a-Thon, a set of self-paced, experiential evaluation activities at various stations through which youth participants rotate. Actively involving youth participants in the evaluation process using the Skill-a-Thon model resulted in youth being able to identify and design programmatic changes, increased participation and response rates in other evaluations, and several youth participants gaining an interest in evaluation and going on to design evaluations in later years. The Evaluation Skill-a-Thon holds promise for actively engaging youth participants in the entire evaluation process, for ease of implementation, and for increasing evaluation capacity.


2021
Provides an overview of the work undertaken by the Office of Evaluation and Oversight (OVE) throughout 2020. It summarizes the evaluations completed, highlights lessons learned, and describes evaluation capacity initiatives and dissemination efforts aimed at facilitating institutional learning and fostering accountability and transparency. It also outlines OVE's upcoming work program.

