Evaluation practice
Recently Published Documents

Total documents: 359 (last five years: 102)
H-index: 23 (last five years: 2)
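For readers unfamiliar with the h-index reported above: a body of work has h-index h when h of its documents have each been cited at least h times. Below is a minimal illustrative sketch of the computation in Python; the function name and the sample citation counts are our own invention, not data from this page.

```python
# Illustrative sketch only: how an h-index such as the "23" above is computed.
# The citation counts below are invented sample data, not taken from this page.

def h_index(citations: list[int]) -> int:
    """Return the largest h such that at least h papers have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # rank-many papers each have at least rank citations
        else:
            break
    return h

if __name__ == "__main__":
    sample = [50, 30, 22, 15, 8, 4, 4, 1, 0]
    print(h_index(sample))  # -> 5: five papers have at least 5 citations each
```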

2022, pp. 109821402110079
Author(s): Jennifer J. Esala, Liz Sweitzer, Craig Higson-Smith, Kirsten L. Anderson

Advocacy evaluation has emerged in the past 20 years as a specialized area of evaluation practice. We offer a review of existing peer-reviewed literature and draw attention to the scarcity of scholarly work on human rights advocacy evaluation in the Global South. The lack of published material in this area is concerning, given the urgent need for human rights advocacy in the Global South and the difficulties of conducting advocacy in contexts in which fundamental human rights are often poorly protected. Based on the review of the literature and our professional experiences in human rights advocacy evaluation in the Global South, we identify themes in the literature that are especially salient in the Global South and warrant more attention. We also offer critical reflections on content areas not addressed in the existing literature and conclude with suggestions as to how activists, evaluators, and other stakeholders can contribute to the development of a field of practice that is responsive to the global challenge of advocacy evaluation.


Author(s): Biagio Aragona

Only with a more informed knowledge of the different types of big data and their possible uses, limits, and advantages will sociology truly benefit from these empirical bases. In this article, starting from a classification of the various types of big data, some areas of use in social research are described, highlighting critical issues and ethical problems. The limits are tied to fundamental questions about the quality of big data. Another key issue concerns access. A further methodological aspect to bear in mind is that digital data on the web should be considered unobtrusive, and such covert research methods have challenged the ethical evaluation practice established in most research institutions: informed consent. Digital ethics guidelines cannot be universal and fixed once and for all.


Evaluation, 2022, pp. 135638902110646
Author(s): Denise E. De Souza

Pawson and Tilley’s acknowledgment that programs are embedded in multiple social systems has gained little traction in realist synthesis and evaluation practice. Instead, a practice has emerged that focuses on fairly closed systems, explaining how programs do and do not work. This article negotiates the boundaries of knowledge pertinent to program design and evaluation from a realist perspective. It highlights critical realism as another possible response to systems thinking in evaluation. Moving one level up from the program, it theorizes about the social structures, mechanisms, and causes operating in the complex system within which an education-to-work program is nested. Three implications of the approach are highlighted: it foregrounds the relational nature of social, psychological, and programmatic structures and mechanisms; it enables policymakers to develop a broader understanding of the structures needed to support a program; and it enables program architects to ascertain how a planned program might assimilate and adapt to social structures and mechanisms already established in a context.


Author(s): Jindra Cekan, Susan Legro

The purpose of this research was to explore how public donors and lenders evaluate the sustainability of environmental and other sectoral development interventions. Specifically, the aim was to examine if, how, and how well post-project sustainability is evaluated in donor-funded climate change mitigation (CCM) projects, including the evaluability of these projects. We assessed the robustness of current evaluation practice with respect to results after project exit, particularly the sustainability of outcomes and long-term impact. We also explored methods that could reduce uncertainty about the achievement of results, using data from two pools of CCM projects funded by the Global Environment Facility (GEF).


2021, pp. 109821402098392
Author(s): Tiffany L. S. Tovey, Gary J. Skolits

The purpose of this study was to determine professional evaluators’ perceptions of reflective practice (RP) and the extent and manner in which they engage in RP behaviors. Nineteen evaluators with 10 or more years of experience in the evaluation field were interviewed to explore the understanding and practice of RP in evaluation. Findings suggest that RP is a process of self- and contextual awareness, involving thinking and questioning as well as individual and group meaning-making, focused on facilitating growth in the form of learning and improvement. The roles of individual and collaborative reflection, as well as of reflection in- and on-action, are also discussed. Findings support a call for further refinement of our understanding of RP in evaluation practice. Evaluators seeking to become better reflective practitioners should be competent in facilitation and interpersonal skills, and should budget the time needed for RP in their evaluation work.


2021, Vol 7, pp. 71-95
Author(s): Elena F. Moretti

This article describes a research project focused on evaluation capacity building and internal evaluation practice in a small sample of early learning services in Aotearoa New Zealand. Poor evaluation practice in this context has persisted for several decades, and capacity-building attempts have had limited impact. Multiple methods were used to gather data on the factors and conditions that motivated successful evaluation capacity building and internal evaluation practice in five unusually high-performing early learning services. The early learning sector context is described and discussed in relation to existing research on evaluation capacity building in organisations. This is followed by a brief overview of the research methodology, with the majority of the article devoted to findings and areas for future exploration and research. Quotes from the research participants are used to illustrate their views, and the views of the wider early learning sector, on evaluation matters. Findings suggest that motivation is hindered by a widespread view of internal evaluation as overly demanding and minimally valuable. In addition, some features of the Aotearoa New Zealand early learning context mean that accountability factors are not effective motivators for evaluation capacity building. Early learning service staff are more motivated to engage in evaluation by factors and conditions related to their understandings of personal capability, guidance and support strategies, and the alignment of internal evaluation processes with positive outcomes for children. The strength of agreement within the limited sample size and scope of this study, particularly given the variation in the research participants’ early learning service contexts, supports the validity of the findings. Understanding what motivates evaluation capacity building in this context will contribute to discussions of organisational evaluation, internal evaluation, social-sector evaluation, and evaluation capacity building.


2021, pp. 1035719X2110530
Author(s): Kathryn Erskine, Matt Healey

This paper details disruption and innovation in digital evaluation practice at Movember as a result of the COVID-19 pandemic. The paper examines a men’s digital health intervention (DHI), Movember Conversations, and the product pivot that was necessary to ensure it could respond to the pandemic. The paper focuses on the implications of the pivot for the evaluation and on how the evaluation was adapted to the exigencies of COVID-19. It details the redesign of the evaluation to ensure that methods wrapped around the modified product and could deliver real-time, practical insights. The paper seeks to fill knowledge gaps in the DHI evaluation space and outlines four key principles that support evaluation redesign in an agile setting: a user-centred approach to evaluation design, proportionate data collection, mixed (and flexible) methodologies, and agile evaluation reporting. The paper concludes with key lessons and reflections from the evaluators about what worked at Movember, to support other evaluators planning digital evaluations.


Author(s): Jori Hall

Cultural competence is a complex and contested notion. Yet it remains integral to working with difference in the context of evaluation practice. Given this status, the field’s commitment to cultural competence prompts the need for further interrogation and reconsideration. Accordingly, this paper explores the establishment and conceptualization of cultural competence. Potential challenges to cultural competence are also examined. In light of these challenges, an alternative framework is offered based on the philosophy of Emmanuel Levinas. This work aims to support the evaluation community’s ability to work with cultural diversity, a vital aspect of evaluation practice.


BMJ Open, 2021, Vol 11 (10), pp. e055304
Author(s): Bobby Lee Maher, Jillian Guthrie, Elizabeth Ann Sturgiss, Margaret Cargo, Raymond Lovett

Introduction: Indigenist evaluation is emergent in Australia; its premise is that evaluations are undertaken for, by, and with Indigenous people. This provides opportunities to develop new models and approaches. Exploring a collective capability approach could be one way to inform an Indigenist evaluation methodology. Collective capability suggests that a base of skills and knowledges exists, and that when these assets come together, empowerment and agency emerge. However, collective capability requires defining, as it is not common terminology in population health or evaluation. Our aim is to define the concept of collective capability in Indigenist evaluation in Australia from an Australian Indigenous standpoint.

Methods and analysis: A modified Rodgers’ evolutionary concept analysis will be used to define collective capability in an Australian Indigenous evaluation context and to systematically review and synthesise the literature. Approximately 20 qualitative interviews with Aboriginal and Torres Strait Islander knowledge holders will clarify the meaning of collective capability and inform appropriate search strategy terms, with a consensus process then used to code the literature. We will then systematically collate, synthesise, and analyse the literature to identify exemplars or models of collective capability.

Ethics and dissemination: The protocol has approval from the Australian Institute of Aboriginal and Torres Strait Islander Studies Ethics Committee, approval no. EO239-20210114. All knowledge holders will provide written consent to participate in the research. This protocol provides a process for developing a concept and will form the basis of a new framework and assessment tool for Indigenist evaluation practice. The concept analysis will establish definitions, characteristics, and attributes of collective capability. Findings will be disseminated through a peer-reviewed journal, conference presentations, the project advisory group, the Thiitu Tharrmay reference group, and Aboriginal and Torres Strait Islander community partners supporting the project.

