Evaluability Assessment
Recently Published Documents

TOTAL DOCUMENTS: 74 (last five years: 4)
H-INDEX: 12 (last five years: 1)

2020, Vol 20 (4), pp. 212-228
Author(s): Kwadwo Adusei-Asante, Elaine Bennett, Wendy Simpson, Sharlene Hindmarsh, Beth Harvey, et al.

Evaluability assessment focuses on the readiness of organisations to carry out evaluations. Scholars argue that evaluability assessment should examine internal evaluation systems and tools and their capacity to measure programmes and services reliably and credibly. Even so, literature offering best-practice guidelines for evaluability assessment in the not-for-profit sector appears to be scarce. We begin to fill this gap by presenting lessons learned from our 2018/2019 review of the evaluation practice and culture of Ngala, a Western Australian organisation. The Service Model and Outcomes Measurement Audit project assessed the extent to which service models within Ngala aligned with the organisation’s standardised service model and individual service contracts, as well as the consistency of outcomes, data collection and reporting practices. Insights obtained from the project and their implications for evaluability assessment practice are discussed.


Evaluation, 2019, Vol 25 (3), pp. 349-365
Author(s): Richard Brunner, Peter Craig, Nick Watson

Evaluation is essential to understand whether and how policies and other interventions work, why they sometimes fail, and whether they represent a good use of resources. Evaluability assessment (EA) is a means of collaboratively planning and designing evaluations, seeking to ensure they generate relevant and robust evidence that supports decision-making and contributes to the wider evidence base. This article reports on the context, the process undertaken and evidence from participants in an EA facilitated with public service workers involved in implementing a complex, area-based community improvement initiative. This is a novel context in which to conduct an EA. We show how the process allows practitioners at all levels to identify activities for evaluation and co-produce the theory of change developed through the EA. This enables evaluation recommendations to be developed that are relevant to the implementation of the programme, and which take account of available data and resources for evaluation.


2018
Author(s): Rosemary Davidson, Gurch Randhawa, Stephanie Cash

BACKGROUND: There is extensive literature on the methodology of evaluation research and on the development and evaluation of complex interventions, but little guidance on the formative stages before evaluation or on how to work with partner organizations that wish to have their provision evaluated. It is important to be able to identify suitable projects for evaluation from a range of provision and to describe the steps required to set up an evaluation, often with academic institutions working in partnership with external organizations. However, research evaluating programs or interventions rarely discusses these stages.

OBJECTIVE: This study aimed to extend work on evaluability assessment and pre-evaluation planning by proposing an 8-Step Scoping Framework that enables the appraisal of multiple programs in order to identify interventions suitable for evaluation. It also aimed to add to the literature on evaluability assessment and more recent evaluation guidance by describing the processes involved in working with partner organizations.

METHODS: This paper documents the steps required to identify multiple complex interventions suitable for process and outcome evaluation. The steps were developed iteratively, working alongside staff in a local government organization, to build an evidence base demonstrating which interventions improve children’s outcomes. The process of identifying suitable programs for evaluation, and thereby establishing the pre-evaluation steps, was tested using all Flying Start provision.

RESULTS: The 8-Step Scoping Framework was described using the example of the local government organization Flying Start to illustrate how each step contributes to finding projects suitable for process and outcome evaluation: (1) formulating overarching key questions that encompass all programs offered by an organization; (2) gaining an in-depth understanding of the work and provision of an organization and engaging staff; (3) completing a data template per project/program offered; (4) assessing the robustness/validity of data across all programs; (5) deciding on projects suitable for evaluation and those requiring additional data; (6) negotiating with chosen project leads, both within and outside the organization; (7) developing individual project evaluation protocols; and (8) applying for ethical approval from the university and partner organization.

CONCLUSIONS: This paper describes the processes involved in identifying suitable projects for evaluation. It adds to the existing literature on assessing specific programs for evaluation, and to guidance for conducting evaluations, by establishing the formative steps required to identify suitable programs from a range of provision. The scoping framework is particularly relevant to academic partners and organizations tasked with delivering evidence-based services designed to meet local needs. The steps have been described in the context of early years provision but can be applied to a range of community-based evaluations or, more generally, to cases where an academic partner works with external stakeholders to identify projects suitable for academic evaluation.
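The eight steps above form a procedural checklist rather than an algorithm, but a short sketch can make the appraisal workflow concrete. In the Python sketch below, the step descriptions are paraphrased from the abstract; the CandidateProgram class, its methods, and the example program name are illustrative assumptions rather than anything specified by the framework's authors.

# Minimal, illustrative sketch of the 8-Step Scoping Framework summarised above.
# Step descriptions are paraphrased from the abstract; the class, its bookkeeping,
# and the example program name are hypothetical conveniences for tracking
# appraisal progress, not part of the published method.
from dataclasses import dataclass, field

SCOPING_STEPS = [
    "Formulate overarching key questions encompassing all programs",
    "Gain an in-depth understanding of the organization's work and engage staff",
    "Complete a data template per project/program offered",
    "Assess the robustness/validity of data across all programs",
    "Decide on projects suitable for evaluation and those needing more data",
    "Negotiate with chosen project leads, within and outside the organization",
    "Develop individual project evaluation protocols",
    "Apply for ethical approval from the university and partner organization",
]


@dataclass
class CandidateProgram:
    """Tracks how far one program has progressed through the scoping steps."""
    name: str
    completed: set = field(default_factory=set)  # indices into SCOPING_STEPS

    def complete(self, step_index: int) -> None:
        # Mark a step as done, rejecting indices outside the eight steps.
        if not 0 <= step_index < len(SCOPING_STEPS):
            raise ValueError(f"unknown step index: {step_index}")
        self.completed.add(step_index)

    def next_step(self) -> str | None:
        """Return the first outstanding step, or None once all eight are done."""
        for i, step in enumerate(SCOPING_STEPS):
            if i not in self.completed:
                return f"Step {i + 1}: {step}"
        return None


# Example usage with a hypothetical program name:
program = CandidateProgram("Parenting support drop-in")
program.complete(0)
program.complete(1)
print(program.next_step())  # Step 3: Complete a data template per project/program offered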


2017, Vol 62 (10), pp. 3185-3200
Author(s): Abigail Henson

Over the last decade, criminal justice scholars have increasingly endorsed “evidence-based practices”; however, some criminologists have voiced concerns over the varied methodological rigor of evaluation research, differing definitions of evidence, and the lack of critical exploration of why programs may be (in)effective. This article argues that evaluability assessments (EAs) can address these concerns. Through a case study of an EA conducted on a prison-based fatherhood program, it demonstrates how the EA approach leads to a more precise understanding of outcome operationalization and allows for the democratization of research, which is particularly important in a carceral setting.

