The Effects of Monitoring and Evaluation Systems on Organizational Performance: A Case of Tanzania Airport Authority-Dar es Salaam, Tanzania

Author(s):  
Godfrey Joseph Masawe ◽  
Juliana Isanzu

Worldwide, public- and private-sector organizations are putting greater effort into promoting an environment conducive to the monitoring and evaluation of projects conducted within them, for the purpose of increasing transparency, strengthening accountability, and improving organizational performance. The study aims to determine the role of monitoring and evaluation systems in organizational performance and to examine how resource monitoring and evaluation affect organizational performance, taking the Tanzania Airport Authority (TAA) as a case. The study deployed a descriptive quantitative research design and used Slovin's formula to obtain a sample size of 187 from a target population of 350 TAA employees located in Ilala Municipality; purposive sampling was used. Questionnaires and document review were used as the research instruments for collecting the data needed in this study. The study used a multiple regression model to test the relationship between monitoring and evaluation systems and organizational performance. The study concluded that monitoring and evaluation systems improve organizational performance in Tanzania, and recommended that any organization, public or private, should continually improve its monitoring and evaluation system in order to reach its organizational goals.
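The reported sample size follows from Slovin's formula, n = N / (1 + N·e²). A minimal sketch of the calculation, assuming the conventional 5% margin of error (the abstract does not state which margin was used):

```python
import math

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# With the study's target population of 350 TAA employees and an assumed e = 0.05:
print(slovin_sample_size(350, 0.05))  # → 187
```

With N = 350 and e = 0.05 the formula gives 350 / 1.875 ≈ 186.7, which rounds up to the 187 respondents reported in the study.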

2020 ◽  
Vol 8 (1) ◽  
Author(s):  
Takunda Chirau ◽  
Caitlin Blaser Mapitsa ◽  
Matodzi Amisi ◽  
Banele Masilela ◽  
Ayabulela Dlakavu

Background: Governments increasingly need evidence for policy-informed decision-making, budgeting and programming. National evaluation systems (NESs) are being set up across Africa, together with the processes and other monitoring and evaluation (M&E) infrastructure needed for efficient and effective functioning.
Objectives: This article seeks to document comparative developments in the growth of national evaluation systems in Anglophone African countries, and to provide an understanding of these systems to inform capacity-development interventions in these countries. It also aims to contribute to the public debate on the development of national M&E systems, the institutionalisation of evaluation, and the use of M&E evidence in the larger African context.
Methods: This article uses four key dimensions as the conceptual framework of a national monitoring and evaluation system: M&E systems in the executive; the functioning of parliamentary M&E systems; the professionalisation of evaluation; and the existence of an enabling environment. A questionnaire was used to collect information on these dimensions from government and non-governmental personnel. The 2018 Mo Ibrahim index was used to collect information on the enabling environment.
Results: Findings indicate that all systems have stakeholders with different roles and contexts, and are designed according to the state architecture and the prevailing resources and capacities.
Conclusions: This article concludes that the findings can be used as different entry points for developing and strengthening M&E capacities in the countries studied.


2017 ◽  
Vol 5 (1) ◽  
Author(s):  
Kieron D. Crawley

Background: Successful evaluation capacity development (ECD) at regional, national and institutional levels has been built on a sound understanding of the opportunities and constraints in establishing and sustaining a monitoring and evaluation system. Diagnostics are one of the tools that ECD agents can use to better understand the nature of the ECD environment. Conventional diagnostics have typically focused on issues related to technical capacity and the ‘bridging of the gap’ between evaluation supply and demand. In so doing, they risk overlooking the more subtle organisational and environmental factors that lie outside the conventional diagnostic lens.
Method: Through programming and dialogue carried out by the Centre for Learning on Evaluation and Results Anglophone Africa with government planners, evaluators, civil society groups and voluntary organisations, the author has developed a modified diagnostic tool that extends the scope of conventional analysis.
Results: This article outlines the six-sphere framework, which can be used to extend the scope of such diagnostics to include considerations of the political environment, trust and collaboration between key stakeholders, and the principles and values that underpin the whole system. The framework employs a graphic device that allows the capture and organisation of structural knowledge relating to the ECD environment.
Conclusion: The article describes the framework in relation to other organisational development tools and gives some examples of how it can be used to make sense of the ECD environment. It highlights the potential of the framework to contribute to a more nuanced understanding of the ECD environment through a structured diagnostic approach, moving beyond conventional supply-and-demand models.


Evaluation ◽  
2018 ◽  
Vol 24 (1) ◽  
pp. 26-41 ◽  
Author(s):  
Estelle Raimondo

Evaluations do not take place in a vacuum. Evaluation systems are embedded within organizations; they shape and are shaped by organizational norms, processes, and behaviors. In International Organizations, evaluation systems are ubiquitous. Yet, little is known about how they “function,” namely how they are used, how they contribute to organizational performance, and how they influence actors’ behaviors. These are empirical questions that cannot be solved without a robust theoretical grounding, which is currently absent from the existing evaluation literature. This article seeks to bridge some of the identified gaps by weaving together insights from evaluation theory and international organization sociology into a unifying framework of factors. The article then demonstrates how the framework can be used to empirically study the relative power and dysfunction of evaluation systems within International Organizations. A forthcoming connected contribution will illustrate such empirical inquiry through the case of the World Bank’s project-level evaluation system.


2020 ◽  
Vol 10 (4) ◽  
pp. 6109-6115
Author(s):  
M. N. Mleke ◽  
M. A. Dida

Monitoring and evaluation systems are used by organizations and governments to measure and track progress and to evaluate the outcomes of projects. Organizations can improve their performance, effectiveness, and project success rates by strengthening their monitoring and evaluation systems. Moreover, various studies reveal the need for information and communication technology systems in monitoring and evaluation activities. Despite the advantages of such tools, most organizations do not employ computerized monitoring and evaluation systems due to their cost and limited expertise, while those that have such systems lack a systematic mechanism for alerting on projects' progress. Currently, the Ministry of Health, Community Development, Gender, Elderly, and Children of Tanzania monitors and evaluates its projects manually, facing the risks and consequences of delayed project completion. This study describes the development of a web-based monitoring and evaluation system that aims to solve these monitoring and evaluation challenges, simplify work, generate quality data, and support timely, successful project implementation; an evolutionary prototyping approach was used to develop the proposed system. The developed system was tested and evaluated against the users' requirements and was accepted for deployment at the Ministry of Health.


2018 ◽  
Vol 6 (1) ◽  
Author(s):  
Ian Goldman ◽  
Albert Byamugisha ◽  
Abdoulaye Gounou ◽  
Laila R. Smith ◽  
Stanley Ntakumba ◽  
...  

Background: Evaluation is not widespread in Africa, particularly evaluations instigated by governments rather than donors. Since 2007, however, an important policy experiment has been emerging in South Africa, Benin and Uganda, all of which have implemented national evaluation systems. These three countries, along with the Centre for Learning on Evaluation and Results (CLEAR) Anglophone Africa and the African Development Bank, are partners in a pioneering African partnership called Twende Mbele, funded by the United Kingdom's Department for International Development (DFID) and the Hewlett Foundation, which aims to jointly strengthen monitoring and evaluation (M&E) systems and to work with other countries to develop M&E capacity and share experiences.
Objectives: This article documents the experience of these three countries and summarises the progress made in deepening and widening their national evaluation systems, as well as some of the cross-cutting lessons emerging at an early stage of the Twende Mbele partnership.
Method: The article draws on reports from each of the countries, as well as work undertaken for the evaluation of the South African national evaluation system.
Results and conclusions: Initial lessons include the importance of a central unit to drive the evaluation system, developing a national evaluation policy, prioritising evaluations through an evaluation agenda or plan, and taking evaluation to subnational levels. The countries are exploring the role of non-state actors, and there are increasing moves to involve Parliament. Key challenges include the difficulty of embedding a learning approach in government, capacity issues, and ensuring follow-up. These lessons are being used to support other countries seeking to establish national evaluation systems, such as Ghana, Kenya and Niger.


Author(s):  
Takunda J Chirau ◽  
Caitlin Blaser-Mapitsa ◽  
Matodzi M Amisi

Background: African countries are developing their monitoring and evaluation policies to systematise, structure and institutionalise evaluations and the use of evaluative evidence across the government sector. The pace at which evaluations are institutionalised and systematised across African governments remains relatively slow.
Aims and objectives: This article offers a comparative analysis of Africa's national evaluation policy landscape, looking at the policies of Zimbabwe, South Africa, Nigeria, Kenya (not adopted) and Uganda. To achieve this aim, we unpack the different characteristics of the national evaluation policies, emerging lessons for countries that wish to develop a national evaluation policy, and key challenges faced by countries with regard to evaluation policy development and implementation. The article draws on both a desktop review and the action research approaches of the Centre for Learning on Evaluation and Results Anglophone Africa to build national evaluation systems across the region. The approach has included peer learning and the co-creation of knowledge around public sector evaluation systems.
Key conclusions: The national evaluation policies reviewed share certain common features in terms of purpose and composition. They also struggle with common issues in institutionalising the evaluation system across the public sector. However, there are variations in the countries' guiding governance frameworks at a national level that shape the nature and content of the policies, as well as the ways in which the policies themselves are expected to guide the use of evaluative evidence for decision-making, policymaking and programming.
Key messages:
- Peer-to-peer learning is important for sharing experiences on developing national evaluation policy.
- Countries should develop their policies in line with their state architecture, context and relevance to their needs.
- Policies necessitate new ways of thinking about the practice of monitoring and evaluation.
- This article fills an important empirical lacuna on evidence use and policy development in Africa.


Author(s):  
Mary Kay Gugerty ◽  
Dean Karlan

Monitoring and evaluation systems rarely begin as right fits; instead, they evolve over time, often to meet the demands of internal learning, external accountability, and a given stage of program development. This case follows Invisible Children Uganda as it formalizes its monitoring and evaluation system in response to increased visibility, the demands of traditional donors, and an internal desire to understand impact. Readers will consider how Invisible Children’s first logical framework—a rough equivalent of a theory of change—lays the foundation for a right-fit monitoring and evaluation system. Readers will also analyze the broader challenges of commissioning high-quality impact evaluations and the importance of clearly defining them.


Author(s):  
Mary Kay Gugerty ◽  
Dean Karlan

A theory of change can build consensus on a program’s vision and guide the development of a right-fit monitoring and evaluation system. This case examines how the Uganda-based youth empowerment NGO Educate! used the theory of change process to clearly define its intended impact and decide how to measure it. After analyzing the process Educate! used to develop its theory of change, readers will be able to discuss the value of gathering internal perspectives and conducting field research to develop a theory of change. Readers will also assess how successive iterations of the theory of change provide clarity on program design and objectives and determine whether the final theory of change is sufficient to design a monitoring and evaluation plan that adheres to CART principles.

