The six-sphere framework: A practical tool for assessing monitoring and evaluation systems

2017 ◽  
Vol 5 (1) ◽  
Author(s):  
Kieron D. Crawley

Background: Successful evaluation capacity development (ECD) at regional, national and institutional levels has been built on a sound understanding of the opportunities and constraints in establishing and sustaining a monitoring and evaluation system. Diagnostics are one of the tools that ECD agents can use to better understand the nature of the ECD environment. Conventional diagnostics have typically focused on issues related to technical capacity and the ‘bridging of the gap’ between evaluation supply and demand. In so doing, they risk overlooking the more subtle organisational and environmental factors that lie outside the conventional diagnostic lens. Method: As a result of programming and dialogue carried out by the Centre for Learning on Evaluation and Results Anglophone Africa, engaging with government planners, evaluators, civil society groups and voluntary organisations, the author has developed a modified diagnostic tool that extends the scope of conventional analysis. Results: This article outlines the six-sphere framework, which can be used to extend the scope of such diagnostics to include considerations of the political environment, trust and collaboration between key stakeholders, and the principles and values that underpin the whole system. The framework employs a graphic device that allows the capture and organisation of structural knowledge relating to the ECD environment. Conclusion: The article describes the framework in relation to other organisational development tools and gives some examples of how it can be used to make sense of the ECD environment. It highlights the potential of the framework to contribute to a more nuanced understanding of the ECD environment using a structured diagnostic approach and to move beyond conventional supply and demand models.

2020 ◽  
Vol 8 (1) ◽  
Author(s):  
Takunda Chirau ◽  
Caitlin Blaser Mapitsa ◽  
Matodzi Amisi ◽  
Banele Masilela ◽  
Ayabulela Dlakavu

Background: African governments increasingly need evidence for policy-informed decision-making, budgeting and programming. National evaluation systems (NESs) are being set up across Africa, together with the processes and other monitoring and evaluation (M&E) infrastructure needed for efficient and effective functioning. Objectives: This article seeks to document comparative developments in the growth of these systems in Anglophone African countries and to provide an understanding of these systems for capacity-development interventions in these countries. It also aims to contribute to the public debate on the development of national M&E systems, the institutionalisation of evaluation, and the use of M&E evidence in the larger African context. Methods: This article uses four key dimensions as the conceptual framework of a national monitoring and evaluation system: M&E systems in the executive; the functioning of parliamentary M&E systems; the professionalisation of evaluation; and the existence of an enabling environment. A questionnaire was used to collect information on the key dimensions from government and non-governmental personnel. The 2018 Mo Ibrahim index was used to collect information on the enabling environment. Results: Findings indicate that all the systems have stakeholders with different roles and contexts and are designed according to the state architecture, prevailing resources and capacities. Conclusions: This article concludes that the findings can be used as different entry points for developing and strengthening M&E capacities in the countries studied.


2020 ◽  
Vol 10 (4) ◽  
pp. 6109-6115
Author(s):  
M. N. Mleke ◽  
M. A. Dida

Monitoring and evaluation systems are used by organizations and governments to measure and track progress and to evaluate the outcomes of projects. Organizations can improve their performance, effectiveness and project results by strengthening their monitoring and evaluation systems. Moreover, various studies reveal the need for information and communication technology systems in monitoring and evaluation activities. Despite the advantages of such tools, most organizations do not employ computerized monitoring and evaluation systems because of their cost and limited expertise, while those that have such systems lack a systematic mechanism for alerts on project progress. Currently, the Ministry of Health, Community Development, Gender, Elderly, and Children of Tanzania monitors and evaluates its projects manually, facing the risks and consequences of delayed project completion. In this study, the evolutionary prototyping approach was used to develop the proposed system. This study describes the development of a web-based monitoring and evaluation system that aims to solve these monitoring and evaluation challenges, simplify work, generate quality data and support timely project implementation. The developed system was tested and evaluated against the users’ requirements and was accepted for deployment at the Ministry of Health.


2018 ◽  
Vol 6 (1) ◽  
Author(s):  
Ian Goldman ◽  
Albert Byamugisha ◽  
Abdoulaye Gounou ◽  
Laila R. Smith ◽  
Stanley Ntakumba ◽  
...  

Background: Evaluation is not widespread in Africa, particularly evaluations instigated by governments rather than donors. However, since 2007 an important policy experiment has been emerging in South Africa, Benin and Uganda, which have all implemented national evaluation systems. These three countries, along with the Centre for Learning on Evaluation and Results (CLEAR) Anglophone Africa and the African Development Bank, are partners in a pioneering African partnership called Twende Mbele, funded by the United Kingdom’s Department for International Development (DFID) and the Hewlett Foundation, which aims to jointly strengthen monitoring and evaluation (M&E) systems and to work with other countries to develop M&E capacity and share experiences. Objectives: This article documents the experience of these three countries and summarises the progress made in deepening and widening their national evaluation systems, as well as some of the cross-cutting lessons emerging at an early stage of the Twende Mbele partnership. Method: The article draws on reports from each of the countries, as well as work undertaken for the evaluation of the South African national evaluation system. Results and conclusions: Initial lessons include the importance of a central unit to drive the evaluation system, developing a national evaluation policy, prioritising evaluations through an evaluation agenda or plan, and taking evaluation to subnational levels. The countries are exploring the role of non-state actors, and there are increasing moves to involve Parliament. Key challenges include the difficulty of embedding a learning approach in government, capacity issues and ensuring follow-up. These lessons are being used to support other countries seeking to establish national evaluation systems, such as Ghana, Kenya and Niger.


Author(s):  
Nancy C. Edwards ◽  
Barbara L. Riley ◽  
Cameron D. Willis

This chapter examines characteristics of and approaches to scaling-up innovations and programs, with illustrations from the field of cancer control. It summarizes definitions of scale-up, emphasizing the introduction of innovations with demonstrated effectiveness and the aims of scale-up: improving coverage and equitable access to the innovation(s) and its intended benefits. The chapter proposes a typology to help guide scaling-up activities. The typology includes five dimensions: the object of scale-up, how this object may be adapted, horizontal and vertical directions for scale-up, linear and nonlinear pathways for scale-up, and factors influencing scale-up. Featuring examples of tobacco control and human papillomavirus vaccination, the typology is applied and key scaling-up actions are described, including media campaigns, engaging key stakeholders, mobilizing political support, and investing in a monitoring and evaluation system. Systemic challenges to scale-up are discussed. Future priorities for research on scaling up cancer control initiatives are proposed.


Author(s):  
Takunda J Chirau ◽  
Caitlin Blaser-Mapitsa ◽  
Matodzi M Amisi

Background: African countries are developing their monitoring and evaluation policies to systematise, structure and institutionalise evaluations and the use of evaluative evidence across the government sector. The pace at which evaluations are institutionalised and systematised across African governments is relatively slow. Aims and objectives: This article offers a comparative analysis of Africa’s national evaluation policy landscape. It looks at the policies of Zimbabwe, South Africa, Nigeria, Kenya (not adopted) and Uganda. To achieve this aim, we unpack the different characteristics of the national evaluation policies, emerging lessons for countries that wish to develop a national evaluation policy, and key challenges faced by countries with regard to evaluation policy development and implementation. The article draws on both a desktop review and the action research approaches of the Centre for Learning on Evaluation and Results Anglophone Africa in building national evaluation systems across the region. The approach has included peer learning and the co-creation of knowledge around public sector evaluation systems. Key conclusions: The national evaluation policies reviewed share certain common features in terms of purpose and composition. They are also struggling with common issues of institutionalising the evaluation system across the public sector. However, there are variations in the countries’ guiding governance frameworks at a national level that shape the nature and content of policies, as well as the ways in which the policies themselves are expected to guide the use of evaluative evidence for decision-making, policymaking and programming.

Key messages:
- Peer-to-peer learning is important for sharing experiences on developing national evaluation policy.
- Countries should develop their policies in line with their state architecture, context and relevance to their needs.
- Policies necessitate new ways of thinking about the practice of monitoring and evaluation.
- This article fills an important empirical lacuna on evidence use and policy development in Africa.


2019 ◽  
Vol 7 (1) ◽  
Author(s):  
Ian Goldman ◽  
Carol N. Deliwe ◽  
Stephen Taylor ◽  
Zeenat Ishmail ◽  
Laila Smith ◽  
...  

Background: South Africa has pioneered national evaluation systems (NESs), along with Canada, Mexico, Colombia, Chile, Uganda and Benin. South Africa’s National Evaluation Policy Framework (NEPF) was approved by Cabinet in November 2011, and an evaluation of the NES started in September 2016. Objectives: The purpose of the evaluation was to assess whether the NES had had an impact on the programmes and policies evaluated, the departments involved and other key stakeholders, and to determine how the system needs to be strengthened. Method: The evaluation used a theory-based approach, including international benchmarking, five national and four provincial case studies, 112 key informant interviews, a survey with 86 responses and a cost-benefit analysis of a sample of evaluations. Results: Since 2011, 67 national evaluations have been completed or are underway within the NES, covering over $10 billion of government expenditure. Seven of South Africa’s nine provinces have provincial evaluation plans, and 68 of 155 national and provincial departments have departmental evaluation plans. Hence, the system has spread widely, but there are issues of quality and of the time it takes to do evaluations. Use was difficult to assess, but the case studies suggest that instrumental and process use were widespread. There appears to be a high return on evaluations, of between R7 and R10 per rand invested. Conclusion: The NES evaluation’s recommendations on strengthening the system ranged from legislation to strengthen the mandate and greater resources for the NES, to strengthening capacity development, communication and the tracking of use.


Author(s):  
Mary Kay Gugerty ◽  
Dean Karlan

Monitoring and evaluation systems rarely begin as right fits; instead, they evolve over time, often to meet the demands of internal learning, external accountability, and a given stage of program development. This case follows Invisible Children Uganda as it formalizes its monitoring and evaluation system in response to increased visibility, the demands of traditional donors, and an internal desire to understand impact. Readers will consider how Invisible Children’s first logical framework—a rough equivalent of a theory of change—lays the foundation for a right-fit monitoring and evaluation system. Readers will also analyze the broader challenges of commissioning high-quality impact evaluations and the importance of clearly defining them.


Author(s):  
Godfrey Joseph Masawe ◽  
Juliana Isanzu

Worldwide, public- and private-sector organisations are investing in creating an enabling environment for the monitoring and evaluation of projects conducted within their organisations, with the aim of increasing transparency, strengthening accountability and improving organisational performance. The study aims to determine the role of monitoring and evaluation systems in organisational performance and to examine how resource monitoring and evaluation affect organisational performance, taking the Tanzania Airport Authority (TAA) as a case. The study deployed a descriptive quantitative research design and used Slovin’s formula to obtain a sample size of 187 from a target population of 350 TAA employees located in Ilala Municipality; purposive sampling was used. Questionnaires and document review were used as the research instruments for collecting the data needed in this study. Multiple regression analysis was used to test the relationship between monitoring and evaluation systems and organisational performance. The study concluded that monitoring and evaluation systems improve organisational performance in Tanzania and recommended that any organisation, public or private, continuously improve its monitoring and evaluation system in order to reach organisational goals. Monthly time-series data from January 2014 to December 2019 were used.
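The sample-size step above follows Slovin’s formula, n = N / (1 + N·e²). A minimal sketch of the calculation, assuming the conventional 5% margin of error (the abstract does not state the margin used):

```python
import math

def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Target population of 350 TAA employees at a 5% margin of error
print(slovin(350, 0.05))  # → 187
```

With N = 350 and e = 0.05 this gives 350 / 1.875 ≈ 186.7, which rounds up to the 187 respondents reported in the study.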


2013 ◽  
Vol 1 (1) ◽  
Author(s):  
Stephen Porter ◽  
Ian Goldman

When decision-makers want to use evidence from monitoring and evaluation (M&E) systems to assist them in making choices, there is a demand for M&E. When there is great capacity to supply M&E information but low capacity to demand quality evidence, there is a mismatch between supply and demand. In this context, as Picciotto (2009) observed, ‘monitoring masquerades as evaluation’. This article applies this observation, using six case studies of African M&E systems, by asking: what evidence is there that African governments are developing stronger endogenous demand for evidence generated from M&E systems? The argument presented here is that demand for evidence is increasing, leading to further development of M&E systems, with monitoring being dominant. As part of this dominance there are attempts to align monitoring systems to emerging local demand, whilst donor demands are still important in several countries. There is also evidence of increasing demand through government-led evaluation systems in South Africa, Uganda and Benin. One of the main issues this article notes is that the M&E systems are not yet conceptualised within a reform effort to introduce a comprehensive results-based orientation to the public services of these countries. Results concepts are not yet consistently applied throughout the M&E systems in the case countries. In addition, the results-based notions that are applied appear to be generating perverse incentives that reinforce upward compliance and contrôle, to the detriment of more developmental uses of M&E evidence.

