A Web-based Monitoring and Evaluation System for Government Projects in Tanzania: The Case of Ministry of Health

2020, Vol 10 (4), pp. 6109-6115
Author(s): M. N. Mleke, M. A. Dida

Monitoring and evaluation systems are used by organizations and governments to measure and track progress and to evaluate the outcomes of projects. By strengthening their monitoring and evaluation systems, organizations can improve their performance, effectiveness, and project results. Moreover, various studies reveal the need for information and communication technology systems in monitoring and evaluation activities. Despite the advantages of such tools, most organizations do not employ computerized monitoring and evaluation systems because of their cost and limited expertise, while those that do have such systems often lack a systematic mechanism for alerting stakeholders to projects' progress. Currently, the Ministry of Health, Community Development, Gender, Elderly, and Children of Tanzania monitors and evaluates its projects manually, facing the risks and consequences of delayed project completion. This study describes the development of a web-based monitoring and evaluation system, built using an evolutionary prototyping approach, that aims to address these monitoring and evaluation challenges, simplify work, generate quality data, and support timely and successful project implementation. The developed system was tested and evaluated against user requirements and was accepted for deployment at the Ministry of Health.
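As an illustration of the kind of systematic progress-alert mechanism the abstract refers to (but does not specify), the following is a minimal sketch; all class names, fields, and thresholds are hypothetical and not taken from the described system. It flags milestones that are past their due date and not yet complete, the basic check a web-based M&E backend could run on a schedule.

from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    project: str             # project name
    name: str                # milestone name
    due: date                # planned completion date
    percent_complete: float  # reported progress, 0-100

def overdue_alerts(milestones, today=None):
    """Return alert messages for milestones past their due date and still unfinished."""
    today = today or date.today()
    alerts = []
    for m in milestones:
        if m.due < today and m.percent_complete < 100:
            alerts.append(
                f"ALERT: '{m.name}' ({m.project}) is overdue: due {m.due.isoformat()}, "
                f"{m.percent_complete:.0f}% complete"
            )
    return alerts

# Example run on hypothetical data: prints one alert for the unfinished, overdue milestone.
if __name__ == "__main__":
    sample = [
        Milestone("Clinic upgrade", "Foundation works", date(2020, 3, 31), 60.0),
        Milestone("Clinic upgrade", "Procurement", date(2020, 2, 28), 100.0),
    ]
    for message in overdue_alerts(sample, today=date(2020, 5, 1)):
        print(message)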

Author(s): Mary Kay Gugerty, Dean Karlan

Without high-quality data, even the best-designed monitoring and evaluation systems will collapse. Chapter 7 introduces some of the basics of collecting high-quality data and discusses how to address challenges that frequently arise. High-quality data must be clearly defined and must have an indicator that validly and reliably measures the intended concept. The chapter then explains how to avoid common biases and measurement errors such as anchoring, social desirability bias, the experimenter demand effect, unclear wording, long recall periods, and translation context. It then guides organizations on how to find indicators, test data collection instruments, manage surveys, and train staff appropriately for data collection and entry.


DOI: 10.28945/2465, 2002
Author(s): Aziz Deraman, Syahrul Fahmi, Mohamad Naim Yaakub, Abdul Aziz Jemain

This paper presents a case study of the Malaysian technical education system. The Technical and Vocational Education Department (TVED) is designated to prepare a skilled and capable technical workforce for Malaysia in order to meet the goals of Vision 2020. For that reason, a web-based management support system is proposed for TVED to support its planning, management and decision-making activities. e-BME is a system for education monitoring and evaluation that works by establishing internal and external efficiency indicators. e-BME would receive input mainly from Technical and Vocational Education (TVE) schools and graduates. The system generates four types of reports: Management, Financial, Research and Planning. TVED could use these reports in its policy and decision-making activities. The system promotes faster data collection, higher integrity of generated information, and a systematic channel for the distribution of reports.


2020, Vol 8 (1)
Author(s): Takunda Chirau, Caitlin Blaser Mapitsa, Matodzi Amisi, Banele Masilela, Ayabulela Dlakavu

Background: Evidence is needed for policy-informed decision-making, budgeting and programming. National evaluation systems (NESs) are being set up across Africa, together with the processes and other monitoring and evaluation (M&E) infrastructure required for their efficient and effective functioning. Objectives: This article seeks to document comparative developments in the growth of these systems in Anglophone African countries and to provide an understanding of them to inform capacity-development interventions in these countries. It also aims to contribute to the public debate on the development of national M&E systems, the institutionalisation of evaluation, and the use of M&E evidence in the larger African context. Methods: The article uses four key dimensions as the conceptual framework of a national monitoring and evaluation system: M&E systems in the executive; the functioning of parliamentary M&E systems; the professionalisation of evaluation; and the existence of an enabling environment. A questionnaire based on these dimensions was used to collect information from government and non-governmental personnel, and the 2018 Mo Ibrahim index was used to collect information on the enabling environment. Results: Findings indicate that all systems have stakeholders with different roles and contexts and are designed according to the state architecture, prevailing resources and capacities. Conclusions: These findings can be used as different entry points for developing and strengthening M&E capacities in the countries studied.


2017, Vol 5 (1)
Author(s): Kieron D. Crawley

Background: Successful evaluation capacity development (ECD) at regional, national and institutional levels has been built on a sound understanding of the opportunities and constraints in establishing and sustaining a monitoring and evaluation system. Diagnostics are one of the tools that ECD agents can use to better understand the nature of the ECD environment. Conventional diagnostics have typically focused on issues related to technical capacity and the ‘bridging of the gap’ between evaluation supply and demand. In so doing, they risk overlooking the more subtle organisational and environmental factors that lie outside the conventional diagnostic lens. Method: Drawing on programming and dialogue carried out by the Centre for Learning on Evaluation and Results Anglophone Africa with government planners, evaluators, civil society groups and voluntary organisations, the author has developed a modified diagnostic tool that extends the scope of conventional analysis. Results: This article outlines the six-sphere framework, which can be used to extend the scope of such diagnostics to include considerations of the political environment, trust and collaboration between key stakeholders, and the principles and values that underpin the whole system. The framework employs a graphic device that allows the capture and organisation of structural knowledge relating to the ECD environment. Conclusion: The article describes the framework in relation to other organisational development tools and gives examples of how it can be used to make sense of the ECD environment. It highlights the potential of the framework to contribute to a more nuanced understanding of the ECD environment through a structured diagnostic approach and to move beyond conventional supply-and-demand models.


Author(s): Roos Keja, Kathrin Knodel

Information and communication technologies for development (ICT4D) are seen to have great potential for boosting democratization processes all over the world by giving people access to information and thereby empowering them to demand more accountability and transparency from authorities. Based on ethnographic research in Togo and Rwanda on an SMS-based citizen monitoring and evaluation system, this article argues that focusing on access to information is too narrow a view. We show that it is crucial to take into account the respective socio-political backgrounds, such as levels of mistrust or existing social hierarchies, in which mobile phone usage takes on varied and ambiguous meanings. These dynamics can pose a challenge to the successful implementation of ICT4D projects aimed at political empowerment. By addressing these often overlooked issues, we offer explanations for the gap between ICT4D assumptions and people’s lifeworlds in Togo and Rwanda.


2018, Vol 6 (1)
Author(s): Ian Goldman, Albert Byamugisha, Abdoulaye Gounou, Laila R. Smith, Stanley Ntakumba, ...

Background: Evaluation is not widespread in Africa, particularly evaluations instigated by governments rather than donors. However, since 2007 an important policy experiment has been emerging in South Africa, Benin and Uganda, all of which have implemented national evaluation systems. These three countries, along with the Centre for Learning on Evaluation and Results (CLEAR) Anglophone Africa and the African Development Bank, are partners in a pioneering African partnership called Twende Mbele, funded by the United Kingdom’s Department for International Development (DFID) and the Hewlett Foundation, which aims to jointly strengthen monitoring and evaluation (M&E) systems and to work with other countries to develop M&E capacity and share experiences. Objectives: This article documents the experience of these three countries and summarises the progress made in deepening and widening their national evaluation systems, as well as some of the cross-cutting lessons emerging at an early stage of the Twende Mbele partnership. Method: The article draws on reports from each of the countries, as well as work undertaken for the evaluation of the South African national evaluation system. Results and conclusions: Initial lessons include the importance of a central unit to drive the evaluation system, developing a national evaluation policy, prioritising evaluations through an evaluation agenda or plan, and taking evaluation to subnational levels. The countries are exploring the role of non-state actors, and there are increasing moves to involve Parliament. Key challenges include the difficulty of establishing a learning approach in government, capacity constraints, and ensuring follow-up. These lessons are being used to support other countries seeking to establish national evaluation systems, such as Ghana, Kenya and Niger.


Author(s): Arif Budiarto, Muhammad Fitra Kacamarga, Teddy Suparyanto, Shinta Purnamasari, Rezzy Eko Caraka, ...

Author(s): Takunda J. Chirau, Caitlin Blaser-Mapitsa, Matodzi M. Amisi

Background: African countries are developing their monitoring and evaluation policies to systematise, structure and institutionalise evaluations and the use of evaluative evidence across the government sector. The pace at which evaluations are institutionalised and systematised across African governments is, however, progressing relatively slowly. Aims and objectives: This article offers a comparative analysis of Africa’s national evaluation policy landscape, looking at the policies of Zimbabwe, South Africa, Nigeria, Kenya (not adopted) and Uganda. To achieve this aim, we unpack the different characteristics of the national evaluation policies, emerging lessons for countries that wish to develop a national evaluation policy, and the key challenges countries face in evaluation policy development and implementation. The article draws on both a desktop review and action research approaches from the Centre for Learning on Evaluation and Results Anglophone Africa to build national evaluation systems across the region. The approach has included peer learning and co-creation of knowledge around public sector evaluation systems. Key conclusions: The national evaluation policies reviewed share certain common features in terms of purpose and composition. They also struggle with common issues in institutionalising the evaluation system across the public sector. However, there are variations in the countries’ guiding governance frameworks at a national level that shape the nature and content of the policies, as well as the ways in which the policies themselves are expected to guide the use of evaluative evidence for decision-making, policymaking and programming.
Key messages:
- Peer-to-peer learning is important for sharing experiences on developing national evaluation policy.
- Countries should develop their policies in line with their state architecture, context and relevance to their needs.
- Policies necessitate new ways of thinking about the practice of monitoring and evaluation.
- This article fills an important empirical lacuna on evidence use and policy development in Africa.


Author(s): Mary Kay Gugerty, Dean Karlan

Monitoring and evaluation systems rarely begin as right fits; instead, they evolve over time, often to meet the demands of internal learning, external accountability, and a given stage of program development. This case follows Invisible Children Uganda as it formalizes its monitoring and evaluation system in response to increased visibility, the demands of traditional donors, and an internal desire to understand impact. Readers will consider how Invisible Children’s first logical framework—a rough equivalent of a theory of change—lays the foundation for a right-fit monitoring and evaluation system. Readers will also analyze the broader challenges of commissioning high-quality impact evaluations and the importance of clearly defining them.


Author(s): Godfrey Joseph Masawe, Juliana Isanzu

Worldwide, public and private sector organizations put considerable effort into promoting a good environment for monitoring and evaluating the projects conducted within them, in order to increase transparency, strengthen accountability, and improve organizational performance. This study aims to determine the role of monitoring and evaluation systems in organizational performance and to examine how resource monitoring and evaluation affect organizational performance, taking the Tanzania Airports Authority (TAA) as a case. The study deployed a descriptive quantitative research design and used Slovin's formula to obtain a sample size of 187 from a target population of 350 TAA employees located in Ilala Municipality, selected through purposive sampling. Questionnaires and document reviews were used as the research instruments for collecting the data needed in this study. Multiple regression analysis models were used to test the relationship between monitoring and evaluation systems and organizational performance. The study concluded that monitoring and evaluation systems improve organizational performance in Tanzania and recommended that any organization, whether private or public, continuously improve its monitoring and evaluation system in order to reach its organizational goals. For this purpose, a monthly set of data from January 2014 to December 2019 was considered.
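For reference, the reported sample size follows from Slovin's formula, n = N / (1 + N·e²). Assuming the conventional 5% margin of error (e = 0.05; the abstract does not state the value used), n = 350 / (1 + 350 × 0.05²) = 350 / 1.875 ≈ 186.7, which rounds up to the 187 respondents reported.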

