From Data to Action: Integrating Program Evaluation and Program Improvement

Author(s):  
Thomas J. Chapel ◽  
Kim Seechuk
Author(s):  
Shelley Anne Doucet ◽  
Diane MacKenzie ◽  
Elaine Loney ◽  
Anne Godden-Webster ◽  
Heidi Lauckner ◽  
...  

Background: The Dalhousie Health Mentors Program (DHMP) is a community-based, pre-licensure interprofessional education initiative that aims to prepare health professional students for collaborative practice in the care of patients with chronic conditions. This program evaluation explores the students’ 1) learning and plans to incorporate skills into future practice; 2) ratings of program content, delivery, and assignments; 3) perspectives of curricular factors that inadvertently acted as barriers to learning; and 4) program improvement suggestions. Methods: All students (N = 745) from the 16 participating health programs were invited to complete an online mixed-methods program evaluation survey at the conclusion of the 2012–2013 DHMP. A total of 295 students (40% response rate) responded to the Likert-type questions, which were analyzed using descriptive and non-parametric statistics. Of these students, 204 (69%) provided responses to 10 open-ended questions, which were analyzed thematically. Findings: While the majority of respondents agreed that they achieved the DHMP learning objectives, the mixed-methods approach identified curriculum integration, team composition, and effectiveness of learning assignments as factors that unintentionally acted as barriers to learning, along with three key student recommendations for program improvement. Conclusions: Educators and program planners need to be aware that even well-intended learning activities may result in unintended experiences that hamper interprofessional learning.
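
The abstract above describes Likert-type items analyzed with descriptive and non-parametric statistics. As a minimal sketch of what that kind of analysis might look like (the study's actual data and scripts are not published; the responses, group labels, and the choice of the Mann-Whitney U test below are illustrative assumptions):

```python
# Hypothetical sketch: descriptive + non-parametric analysis of Likert items,
# in the spirit of the methods described above. All response values are invented.
import numpy as np
from scipy import stats

# 1 = strongly disagree ... 5 = strongly agree (invented responses)
group_a = np.array([4, 5, 3, 4, 4, 5, 2, 4, 5, 3])  # e.g., one health program
group_b = np.array([3, 2, 4, 3, 3, 2, 4, 3, 2, 3])  # e.g., another program

# Descriptive summary: medians and IQRs, since Likert responses are ordinal
for name, g in [("A", group_a), ("B", group_b)]:
    iqr = np.percentile(g, 75) - np.percentile(g, 25)
    print(f"group {name}: median={np.median(g):.1f}, IQR={iqr:.1f}")

# Mann-Whitney U: a non-parametric test suited to ordinal data
u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.3f}")
```

Medians and interquartile ranges are reported rather than means because ordinal Likert responses do not support interval-scale arithmetic.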


2015 ◽  
Vol 30 (12) ◽  
pp. 1743 ◽  
Author(s):  
Hyun Bae Yoon ◽  
Jwa-Seop Shin ◽  
Seung-Hee Lee ◽  
Do-Hwan Kim ◽  
Jinyoung Hwang ◽  
...  

2014 ◽  
Vol 6 (1) ◽  
pp. 133-138 ◽  
Author(s):  
Aditee P. Narayan ◽  
Shari A. Whicker ◽  
Betty B. Staples ◽  
Jack Bookman ◽  
Kathleen W. Bartlett ◽  
...  

Abstract: Background: Program evaluation is important for assessing the effectiveness of the residency curriculum. Limited resources are available, however, and curriculum evaluation processes must be sustainable and well integrated into program improvement efforts. Intervention: We describe the pediatric Clinical Skills Fair, an innovative method for evaluating the effectiveness of a residency curriculum through assessment of trainees in 2 domains: medical knowledge/patient care and procedure. Each year from 2008 to 2011, trainees completed the Clinical Skills Fair twice: at the start of the year as rising interns in postgraduate year (PGY)-1 (R1s) and again at the end of the year as rising residents in PGY-2 (R2s), allowing each cohort to be assessed on how well the curriculum prepared them to meet the intern goals and objectives. Results: Participants were 48 R1s and 47 R2s. In the medical knowledge/patient care domain, intern scores improved from 48% to 65% correct (P < .001). Significant improvement was demonstrated in the following subdomains: jaundice (41% to 65% correct; P < .001), fever (67% to 94% correct; P < .001), and asthma (43% to 62% correct; P = .002). No significant change was noted within the arrhythmia subdomain. There was significant improvement in the procedure domain for all interns (χ² = 32.82, P < .001). Conclusions: The Clinical Skills Fair is a readily implemented and sustainable method for our residency program curriculum assessment. Its feasibility may allow other programs to assess their curriculum and track the impact of programmatic changes; it may be particularly useful for program evaluation committees.
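
The procedure-domain result above is reported as a chi-square statistic. A hedged sketch of how such a pre/post comparison could be computed, assuming a 2×2 table of pass/fail counts (the counts below are invented for illustration, not the study's data):

```python
# Hypothetical sketch: a pre/post comparison like the procedure-domain result
# above, run as a chi-square test on a 2x2 table of proficiency counts.
# The counts are invented; the study reports only the resulting statistic.
from scipy.stats import chi2_contingency

#                pass  fail
table = [[18, 30],   # R1s (start of intern year)
         [41,  6]]   # R2s (end of intern year)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")
```

If the same individuals were tested at both time points, a paired test such as McNemar's would also be a reasonable choice; the abstract reports a chi-square, so that is what is sketched here.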


Author(s):  
Dewa Gede Hendra Divayana ◽  
Baso Intang Sappaile ◽  
I Gusti Ngurah Pujawan ◽  
I Ketut Dibia ◽  
Luh Artaningsih ◽  
...  

<span lang="EN-GB">One of various types of application program which could facilitate human activities is known as expert system application program. The concept of the expert system of the application program so important that it requires to be shared to other people by developing a course program called expert system conducted in particular by an institution which specialized in the field of information technology. This course program of expert system which is offered to the students need to be designed in terms of good instructional process to improve its effectiveness. In fact, not all existing course programs conducted based on expert system could run more effectively than it was expected in order to achieve higher quality of learning outcomes. This could be explained as a numerous problems occurred during the process of implementation. In order to discover the extend of program effectivity in the expert system of the course, a through program evaluation is required to be conducted. One model called CSE-UCLA will be recommended in making deeper instructional program evaluation in relation to expert system of course program. Through this model the instructional affectivity of application program of expert system could be measured by involving system assessment, program planning, program</span><span lang="EN-GB"> implementation, program improvement, and program certification.</span>


2018 ◽  
Vol 32 (3) ◽  
Author(s):  
Robert P. Shepherd

Abstract: Various forms of assurance are being demanded by different constituencies in the federal public administration. One form of assurance is financial accountability, and spending reviews are an essential input to processes that contribute to federal budgetary and expenditure management decisions. Program evaluation has also been an important contributor, but the federal evaluation function may be over-extended in that role. It may be time to consider removing this responsibility and attaching it to other functions, thereby allowing evaluation to focus on what it does best: contributing to program improvement, including effectiveness. This would also mean a shift in evaluation culture toward learning, rather than accountability.


2020 ◽  
Vol 34 (3) ◽  
pp. 797-804 ◽  
Author(s):  
Carlos Trombetta ◽  
Michelle Capdeville ◽  
Prakash A. Patel ◽  
Jared W. Feinman ◽  
Lourdes AL-Ghofaily ◽  
...  

2000 ◽  
Vol 25 (3) ◽  
pp. 26-31
Author(s):  
Moira A. Fallon

The purpose of this article is to present one model's approach to program evaluation of early intervention programs. The model presented requires implementation by a trained program evaluator and utilises clear and simple data collection methods. The model is based on measures of parental and staff satisfaction resulting in qualitative and quantitative information. Such flexible and accurate measures are necessary for stakeholders to use in making practical policy decisions for program improvement.
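
The model above pairs quantitative satisfaction ratings with qualitative feedback from parents and staff. A minimal sketch of that kind of simple data collection and summary, assuming invented field names and records:

```python
# Hypothetical sketch of the simple data collection the model calls for:
# a numeric satisfaction rating paired with a free-text comment, summarized
# quantitatively by role while comments are kept for qualitative review.
from statistics import mean

responses = [
    {"role": "parent", "rating": 5, "comment": "Staff kept us well informed."},
    {"role": "parent", "rating": 3, "comment": "Scheduling was difficult."},
    {"role": "staff",  "rating": 4, "comment": "Clear goals for each child."},
]

# Quantitative side: mean satisfaction per stakeholder group
for role in ("parent", "staff"):
    ratings = [r["rating"] for r in responses if r["role"] == role]
    print(f"{role}: n={len(ratings)}, mean satisfaction={mean(ratings):.2f}")

# Qualitative side: comments retained verbatim for thematic review
for r in responses:
    print(f'[{r["role"]}] {r["comment"]}')
```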

