American Journal of Evaluation
Latest Publications

Published by Sage Publications
ISSN: 1557-0878, 1098-2140

2022 ◽  
pp. 109821402110079
Jennifer J. Esala ◽  
Liz Sweitzer ◽  
Craig Higson-Smith ◽  
Kirsten L. Anderson

Advocacy evaluation has emerged in the past 20 years as a specialized area of evaluation practice. We offer a review of existing peer-reviewed literature and draw attention to the scarcity of scholarly work on human rights advocacy evaluation in the Global South. The lack of published material in this area is concerning, given the urgent need for human rights advocacy in the Global South and the difficulties of conducting advocacy in contexts in which fundamental human rights are often poorly protected. Based on the review of the literature and our professional experiences in human rights advocacy evaluation in the Global South, we identify themes in the literature that are especially salient in the Global South and warrant more attention. We also offer critical reflections on content areas not addressed in the existing literature and conclude with suggestions for how activists, evaluators, and other stakeholders can contribute to the development of a field of practice that is responsive to the global challenge of advocacy evaluation.

2022 ◽  
pp. 109821402199192
Roni Ellington ◽  
Clara B. Barajas ◽  
Amy Drahota ◽  
Cristian Meghea ◽  
Heatherlun Uphold ◽  

Over the last few decades, there has been an increase in the number of large federally funded transdisciplinary programs and initiatives. Scholars have identified a need to develop frameworks, methodologies, and tools to evaluate the effectiveness of these large collaborative initiatives, providing precise ways to understand and assess their operations, community and academic partner collaboration, scientific and community research dissemination, and cost-effectiveness. Unfortunately, there has been limited research on methodologies and frameworks that can be used to evaluate large initiatives. This study presents a framework for evaluating the Flint Center for Health Equity Solutions (FCHES), a National Institute on Minority Health and Health Disparities (NIMHD)-funded Transdisciplinary Collaborative Center (TCC) for health disparities research. We summarize the FCHES evaluation framework and evaluation questions and present findings and lessons learned from the Year-2 evaluation of the Center.

2022 ◽  
pp. 109821402097548
Charles S. Reichardt

Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between what happened after the program was implemented and what would have happened if the program had not been implemented, but everything else had been the same. Such a definition is often said to be linked to the use of quantitative methods, but it can be used just as effectively with qualitative methods. To demonstrate its broad applicability in both qualitative and quantitative research, I show how the counterfactual definition undergirds seven common approaches to assessing effects. It is not clear that any alternative definition is as generally applicable as the counterfactual one.
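The counterfactual definition reduces to a simple difference. A minimal sketch, using hypothetical outcome values invented purely for illustration:

```python
# Counterfactual definition of a program effect:
#   effect = outcome observed with the program
#          - outcome that would have occurred without it,
# holding everything else the same. All numbers below are hypothetical.

def program_effect(outcome_with_program, outcome_without_program):
    """Difference between the observed outcome and the counterfactual one."""
    return outcome_with_program - outcome_without_program

# Hypothetical example: participants in a job-training program earn
# 42,000 on average; the estimate of what they would have earned
# absent the program is 37,500.
effect = program_effect(42_000, 37_500)
print(effect)  # 4500
```

The definitional step is the subtraction; the hard evaluative work, whether quantitative or qualitative, lies in credibly estimating the counterfactual term.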

2022 ◽  
pp. 109821402110416
Caitlin Howley ◽  
Johnavae Campbell ◽  
Kimberly Cowley ◽  
Kimberly Cook

In this article, we reflect on our experience applying a framework for evaluating systems change to an evaluation of a statewide West Virginia alliance funded by the National Science Foundation (NSF) to improve the early persistence of rural, first-generation, and other underrepresented minority science, technology, engineering, and mathematics (STEM) students in their programs of study. We begin with a description of the project and then discuss the two pillars around which we have built our evaluation of this project. Next, we present the challenge we confronted (despite the utility of our two pillars) in identifying and analyzing systems change, as well as the literature we consulted as we considered how to address this difficulty. Finally, we describe the framework we applied and examine how it helped us and where we still faced quandaries. Ultimately, this reflection serves two key purposes: 1) to consider a few of the challenges of measuring changes in systems and 2) to discuss our experience applying one framework to address these issues.

2022 ◽  
pp. 109821402092778
Elizabeth Tipton

Practitioners and policymakers often want estimates of the effect of an intervention for their local community, e.g., region, state, or county. Ideally, all of these multiple population average treatment effect (ATE) estimates would be taken into account in the design of a single randomized trial. To date, however, methods for selecting samples that generalize the sample ATE address only the case of a single target population. In this paper, I provide a framework for sample selection in the multiple-population case, including three compromise allocations. I situate the methods in an example and conclude with a discussion of the implications for the design of randomized evaluations more generally.
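One way to picture a compromise allocation is to balance each stratum's share of the sample across several target populations at once. The sketch below, with invented shares, simply averages stratum shares with equal weight per population; it illustrates the general idea only and is not one of the three compromise allocations developed in the article:

```python
# Hypothetical sketch of a "compromise" sample allocation across multiple
# target populations. Each population is described by the share of its
# units falling in each stratum; the compromise averages those shares and
# allocates the trial sample proportionally.

def compromise_allocation(stratum_shares_by_population, n_sample):
    """stratum_shares_by_population: rows = populations, columns = strata,
    each row summing to 1. Returns a per-stratum sample allocation."""
    n_pops = len(stratum_shares_by_population)
    n_strata = len(stratum_shares_by_population[0])
    allocation = []
    for s in range(n_strata):
        # Equal-weight average of this stratum's share across populations.
        avg_share = sum(pop[s] for pop in stratum_shares_by_population) / n_pops
        allocation.append(round(avg_share * n_sample))
    return allocation

# Three hypothetical target states, three school strata:
shares = [[0.5, 0.3, 0.2],
          [0.2, 0.5, 0.3],
          [0.2, 0.3, 0.5]]
print(compromise_allocation(shares, 300))  # [90, 110, 100]
```

A single trial sampled this way serves no one population perfectly, but keeps every stratum represented well enough that each population-specific ATE can be estimated from the same study.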

2022 ◽  
pp. 109821402094330
Wendy Chan

Over the past ten years, propensity score methods have made an important contribution to improving generalizations from studies that do not select samples randomly from a population of inference. These methods require assumptions, however, and recent work has considered bounding approaches that provide a range of treatment impact estimates consistent with the observable data. An important limitation of bound estimates is that they can be uninformatively wide, which has motivated research on the use of propensity score stratification to narrow bounds. This article assesses how distributional overlap in propensity scores affects the ability of stratification to tighten bounds. Using the results of two simulation studies and two case studies, I evaluate the relationship between distributional overlap and precision gain and discuss the implications when propensity score stratification is used to improve precision in the bounding framework.
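The intuition for why overlap matters can be shown with a toy calculation: strata that overlap the study sample contribute data-based point estimates, while non-overlapping strata contribute only worst-case bounds, so the aggregate bound width grows with the non-overlapping share of the population. The function and numbers below are hypothetical and do not reproduce the article's estimators:

```python
# Toy illustration: bounds on a population average treatment effect when
# some propensity-score strata have no sampled units. Effects are assumed
# bounded in [-1, 1]; all weights and estimates are invented.

def pate_bounds(strata):
    """strata: list of (weight, estimate_or_None); None means the stratum
    has no overlap with the sample. Returns (lower, upper) bounds."""
    lo = hi = 0.0
    for weight, est in strata:
        if est is None:       # no sampled units: worst-case bound
            lo += weight * -1.0
            hi += weight * 1.0
        else:                 # overlap: use the stratum estimate
            lo += weight * est
            hi += weight * est
    return lo, hi

# Coarse stratification: 40% of the population lacks overlap (width 0.8).
print(pate_bounds([(0.6, 0.20), (0.4, None)]))
# Finer stratification recovers overlap for more of the population
# (only 10% unbounded, so the width shrinks to 0.2).
print(pate_bounds([(0.6, 0.20), (0.3, 0.10), (0.1, None)]))
```

Under this stylized setup, tightening is driven entirely by how much of the population the strata with overlap can cover, which is the role the article examines for distributional overlap.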

2021 ◽  
pp. 109821402096318
Kristen Rohanna

Evaluation practices are continuing to evolve, particularly in areas related to formative, participatory, and improvement approaches. Improvement science is one such evaluative practice. Its strength is that it seeks to embrace the knowledge and experience of stakeholders and frontline workers, who are often tasked with leading improvement activities in their organizations. However, very little guidance exists on how to develop crucial improvement capacity. The evaluation capacity building literature has the potential to fill this gap. This multiple-methods case study follows a networked improvement community’s first year in a public education setting, as network leaders sought to build capacity by using Preskill and Boyle’s multidisciplinary model as their guiding framework. The purpose of this study was to better understand how to build improvement science capacity, along with what facilitates implementation and beneficial learning. This article ends by reconceptualizing and extending Preskill and Boyle’s model to improvement science networks.

2021 ◽  
pp. 109821402098392
Tiffany L. S. Tovey ◽  
Gary J. Skolits

The purpose of this study was to determine professional evaluators’ perceptions of reflective practice (RP) and the extent and manner in which they engage in RP behaviors. Nineteen evaluators with 10 or more years of experience in the evaluation field were interviewed to explore evaluators’ understanding and practice of RP. Findings suggest that RP is a process of self- and contextual awareness, involving thinking and questioning, and individual and group meaning-making, focused on facilitating growth in the form of learning and improvement. The roles of individual and collaborative reflection, as well as reflection in- and on-action, are also discussed. Findings support a call for further refinement of our understanding of RP in evaluation practice. Evaluators seeking to be better reflective practitioners should be competent in facilitation and interpersonal skills, and should budget the time needed for RP in their evaluation work.

2021 ◽  
pp. 109821402110029
Carlomagno C. Panlilio ◽  
Lisa Famularo ◽  
Jessica Masters ◽  
Sarah Dore ◽  
Nicole Verdiglione ◽  

Knowledge tests used to evaluate child protection training program effectiveness for early childhood education providers may suffer from threats to construct validity given the contextual variability inherent within state-specific regulations around mandated reporting requirements. Unfortunately, guidance on instrument revision that accounts for such state-specific mandated reporting requirements is lacking across research on evaluation practices. This study, therefore, explored how collection and integration of validity evidence using a mixed methods framework can guide the instrument revision process to arrive at a more valid program outcome measure.
