What works for poor farmers? Insights from South Africa’s national policy evaluations

2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Sarah A. Chapman ◽  
Katherine Tjasink ◽  
Johann Louw

Background: Growing numbers of developing countries are investing in National Evaluation Systems (NESs). A key question is whether these have the potential to bring about meaningful policy change, and if so, what evaluation approaches are appropriate to support reflection and learning throughout the change process. Objectives: We describe the efforts of commissioned external evaluators in developing an evaluation approach to help critically assess the efficacy of some of the most important policies and programmes of the past two decades aimed at supporting South African farmers. Method: We present the diagnostic evaluation approach we developed. The approach guides evaluation end users through a series of logical steps to help make sense of an existing evidence base in relation to the root problems addressed and the specific needs of the target populations. No additional evaluation data were collected. Participating groups included government representatives, academics and representatives from non-governmental organisations and national associations supporting emerging farmers. Results: Our main evaluation findings relate to a lack of policy coherence in key areas, most notably extension and advisory services, and microfinance and grants. This was characterised by: (1) an absence of a common understanding of policies and objectives; (2) overly ambitious objectives often not directly linked to the policy frameworks; (3) a lack of logical connections between target groups and interventions; and (4) inadequate identification, selection, targeting and retention of beneficiaries. Conclusion: The diagnostic evaluation allowed for uniquely cross-cutting and interactive engagement with a complex evidence base. The evaluation process shed light on new evaluation review methods that might work to support an NES.

Author(s):  
Mari Räkköläinen ◽  
Anu Saxén

Finland was the first country in the world to conduct a comprehensive evaluation of its national implementation of Agenda 2030. The purpose of the evaluation was to support efficient implementation of the agenda by producing information on the nation’s sustainability work for all administrative branches. The evaluation results are used to support policy coherence and long-term sustainable development activities. The evaluation produced concrete recommendations on future directions for sustainable development policy and also proposed future evaluation approaches. In this chapter, the authors present the evaluation approach and discuss the key results and their usage. They identify the essential elements that made the evaluation useful in contributing to national progress on sustainable development policy. The Agenda 2030 evaluation approach was developmentally oriented and conducted in a highly participatory manner. The authors reflect on the evaluative lessons learned and on future options. They encourage an even stronger emphasis on learning throughout the evaluation process in policy-level evaluations, and special attention to the usefulness of evaluation results already at the evaluation design stage. Designing inclusive evaluation processes is a crucial precondition for evidence-informed learning and decision making in promoting transformative policy in the country context.


2020 ◽  
Vol 15 ◽  
Author(s):  
Sara El-Metwally ◽  
Eslam Hamouda ◽  
Mayada Tarek

The assembly evaluation process is the starting step towards meaningful downstream data analysis. We need to know how much accurate information an assembled sequence contains before proceeding to any data analysis stage. Four basic metrics are targeted by different assembly evaluation tools: contiguity, accuracy, completeness, and contamination. Some tools evaluate these metrics by comparing the assembly results to a closely related reference. Others utilize different types of heuristics to overcome the absence of a guiding reference, such as the consistency between assembly results and sequencing reads. In this paper, we discuss the assembly evaluation process as a core stage in any sequence assembly pipeline and present a roadmap that is followed by most assembly evaluation tools to assess the different metrics. We highlight the challenges that currently exist in assembly evaluation tools and summarize their technical and practical details to help end-users choose the best tool for their working scenarios. To address the similarities and differences among assembly assessment tools, including their evaluation approaches, metrics, comprehensiveness, limitations, usability and how the evaluated results are presented to the end-user, we provide a practical example of evaluating Velvet assembly results for the S. aureus dataset from the GAGE competition. A GitHub repository (https://github.com/SaraEl-Metwally/Assembly-Evaluation-Tools) provides the detailed evaluation results along with the command-line parameters used to generate them.
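As a concrete illustration of one of the contiguity statistics such tools typically report, the short sketch below (an illustrative example, not code from the paper or its repository) computes N50, the contig length at which contigs of that length or longer cover at least half of the total assembly size.

```python
# Minimal sketch (assumed example, not from the paper): computing N50,
# a standard contiguity metric reported by most assembly evaluation tools.
def n50(contig_lengths):
    """Return the N50: the length L such that contigs of length >= L
    cover at least half of the total assembly size."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy assembly of five contigs; half of the 1500 bp total is reached
# once the 500 bp and 400 bp contigs are counted, so N50 = 400.
print(n50([100, 200, 300, 400, 500]))  # -> 400
```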


2021 ◽  
Vol 9 (4) ◽  
pp. 388
Author(s):  
Huu Phu Nguyen ◽  
Jeong Cheol Park ◽  
Mengmeng Han ◽  
Chien Ming Wang ◽  
Nagi Abdussamie ◽  
...  

Wave attenuation performance is the prime consideration when designing any floating breakwater. In a 2D hydrodynamic analysis of a floating breakwater, wave attenuation performance is evaluated by the transmission coefficient, defined as the ratio of the transmitted wave height to the incident wave height. For a 3D breakwater, some researchers have still adopted this evaluation approach, taking the transmitted wave height at a single surface point, while others have used the mean transmission coefficient within a surface area. This paper first examines the rationality of these two evaluation approaches via verified numerical simulations of 3D heave-only floating breakwaters in regular and irregular waves. A new index, a representative transmission coefficient, is then presented to allow easy comparison of the wave attenuation performances of different 3D floating breakwater designs.
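To make the two existing conventions concrete, the hedged sketch below contrasts a single-point transmission coefficient with an area-averaged one, using invented wave-height values rather than data from the study; the authors' representative transmission coefficient is a separately defined new index and is not reproduced here.

```python
import numpy as np

# Illustrative sketch with made-up numbers: transmission coefficient
# Kt = Ht / Hi evaluated at a single surface point, versus the mean Kt
# over a grid of surface points leeward of a 3D floating breakwater.
incident_height = 2.0                      # incident wave height Hi (m)
transmitted_grid = np.array([              # transmitted wave heights Ht (m)
    [0.8, 0.9, 1.0],                       # sampled on a surface-point grid
    [0.7, 0.8, 0.9],                       # behind the breakwater
    [0.9, 1.0, 1.1],
])

kt_point = transmitted_grid[1, 1] / incident_height         # single-point Kt
kt_area_mean = (transmitted_grid / incident_height).mean()  # area-averaged Kt

print(f"point Kt = {kt_point:.2f}, area-mean Kt = {kt_area_mean:.2f}")
```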


Evaluation ◽  
2017 ◽  
Vol 23 (3) ◽  
pp. 294-311 ◽  
Author(s):  
Boru Douthwaite ◽  
John Mayne ◽  
Cynthia McDougall ◽  
Rodrigo Paz-Ybarnegaray

There is a growing recognition that programs that seek to change people’s lives are intervening in complex systems, which puts a particular set of requirements on program monitoring and evaluation. Developing complexity-aware program monitoring and evaluation systems within existing organizations is difficult because they challenge traditional orthodoxy. Little has been written about the practical experience of doing so. This article describes the development of a complexity-aware evaluation approach in the CGIAR Research Program on Aquatic Agricultural Systems. We outline the design and methods used, including trend lines, panel data, after-action reviews, building and testing theories of change, outcome evidencing and realist synthesis. We identify and describe a set of design principles for developing complexity-aware program monitoring and evaluation. Finally, we discuss important lessons and recommendations for other programs facing similar challenges. These include developing evaluation designs that meet both learning and accountability requirements; making evaluation a part of a program’s overall approach to achieving impact; and ensuring evaluation cumulatively builds useful theory as to how different types of program trigger change in different contexts.


2018 ◽  
Vol 32 (6) ◽  
pp. 405-417 ◽  
Author(s):  
Catherine Brentnall ◽  
Iván Diego Rodríguez ◽  
Nigel Culkin

The purpose of this article is to explore the effectiveness of entrepreneurship education (EE) programmes through the lens of realist evaluation (RE). The interest of the authoring team – a practitioner–academic mix with professional experience including developing EE in primary and secondary schools – lies with EE competitions, a type of intervention recommended for and delivered to students and pupils of all ages. RE is a theory-driven philosophy, methodology and adaptable logic of enquiry with which to conceptualize and analyse such programmes. In this study, we undertake an act of ‘organized scepticism’, as described by evidence-based policy academic Ray Pawson, to identify and question the declared outcomes of EE competitions in European policy over a 10-year period. However, our contribution goes beyond the application of an evaluation approach that is novel to EE. We argue that, while education generally, and EE specifically, appears committed to emulating ‘gold standard’ scientific evaluation approaches (e.g. randomized controlled trials, systematic review and meta-analysis), the field of evidence-based policymaking has moved on. Now, alternative methodological strategies are being embraced and RE in particular has evolved as an approach which better aligns knowledge production with the reality of complex, socially contingent programmes. By using this approach, we not only establish that education and psychology theories challenge the outcomes of EE competitions declared in policy, but also demonstrate the wider relevance of RE to the appraisal and refinement of the theorizing and practice of entrepreneurship programmes and interventions.


2017 ◽  
Vol 8 (1) ◽  
pp. 1
Author(s):  
Dwi Deswary

This study aims to determine (1) the results of the input evaluation; (2) the results of the process evaluation; and (3) the results of the product evaluation of the implementation of Act 12 of 2012 in the Education Management Study Program, Postgraduate Program, State University of Jakarta (UNJ). The research method was a qualitative evaluation approach. Data were collected by analysing the KKNI-based curriculum documents to determine the success of the implementation stages carried out in the Postgraduate Study Program of Education Management. The data were analysed descriptively, and each research finding was interpreted qualitatively. Interpretation was carried out through the following stages: (1) data collection; (2) data reduction; and (3) data display. Based on the results of the input evaluation performed on the curriculum documents, the preparation of the KKNI-based curriculum of the Education Management Postgraduate Program is supported by a clear legal basis and a formulation of goals and objectives. Regarding supporting resources for curriculum development, the Study Program has analysed curriculum outcome data and has planned programs and implementation strategies. In the process evaluation, learning strategies are divided into two approaches, namely direct and indirect approaches. In the product evaluation, the results are geared more towards the program’s achievement in implementing KKNI-based curriculum policies in the Postgraduate Study Program of Education Management within the predetermined time frame, namely achievement during the short-term period (1 to 2 years).


1986 ◽  
Vol 53 (1) ◽  
pp. 31-35 ◽  
Author(s):  
Chris Lloyd

The Forensic Unit of the Alberta Hospital Edmonton has moved from evaluating the performance of a client in a work setting by observation to providing a comprehensive database on the client through the use of a work history, interest screening and commercial work evaluation systems. A standardized approach to evaluation has enabled the occupational therapists to develop a unique treatment programme for the individual client as a result of the evaluation process, and has provided reliable data for returning the client to competitive employment.


Author(s):  
Seung Youn (Yonnie) Chyung ◽  
Stacey E. Olachea ◽  
Colleen Olson ◽  
Ben Davis

The College Advisory Program offered by Total Vision Soccer Club aims to provide young players with the opportunity to learn how to navigate the collegiate recruiting process, market themselves to college coaches, and increase their exposure to potential colleges and universities. A team of external evaluators (the authors of this chapter) conducted a formative evaluation to determine what the program needs to do to reach its goal. Following a systemic evaluation process, the evaluation team investigated five dimensions of the program and collected data by reviewing various program materials and conducting surveys and interviews with players and their parents, upstream stakeholders, and downstream impactees. By triangulating the multiple sources of data, the team concluded that most program dimensions were rated as mediocre, although the program had several strengths. The team provided evidence-based recommendations for improving the quality of the program.


Author(s):  
Richard E. Scott

E-Health continues to be implemented despite continued demonstration that it lacks value. Specific guidance regarding research approaches and methodologies would be beneficial: identifying and adopting a single model or framework for any one ‘entity’ (healthcare organisation, sub-national region, country, etc.) allows the evidence base to accumulate more rapidly and interventions to be compared more meaningfully. This paper describes a simple and systematic approach to e-health evaluation in a real-world setting, which can be applied by an evaluation team and raises the quality of e-health evaluations. The framework guides and advises users on evaluation approaches at different stages of e-health development and implementation. Termed ‘Pragmatic Evaluation’, the approach has five principles that unfold in stages, respecting the collective need for timely, policy-relevant, yet meticulous research.


Author(s):  
Maria de Fátima Queiroz Vieira Turnell ◽  
José Eustáquio Rangel de Queiroz ◽  
Danilo de Sousa Ferreira

This chapter presents a method for the evaluation of user interfaces for mobile applications. The method is based upon an approach that combines user opinion, standard conformity assessment, and user performance measurement. It focuses on the evaluation settings and techniques employed in the evaluation process, while offering a comparison between the laboratory evaluation and field evaluation approaches. The method’s presentation and the evaluation comparison will be supported by a discussion of the results obtained from the method’s application to a case study involving a Personal Digital Assistant (PDA). This chapter argues that the experience gained from evaluating conventional user interfaces can be applied to the world of mobile technology.

