Advancing Evidence-Based Practice Through Program Evaluation
Published by Oxford University Press (ISBN 9780190609108, 9780190609115)

Author(s): Julie Q. Morrison, Anna L. Harms

This chapter consists of three case studies that illustrate how the evaluation approaches, methods, techniques, and tools presented in Chapters 1 to 5 can be translated into practice. The first case study describes an evaluation of the Dyslexia Pilot Project, a statewide multi-tier system of supports (MTSS) initiative targeting early literacy. In this evaluation, special attention was paid to evaluating the cost-effectiveness of serving students in kindergarten to grade 2 proactively. The second case study features the use of single-case designs and corresponding summary statistics to evaluate the collective impact of more than 500 academic and behavioral interventions provided within an MTSS framework as part of the annual statewide evaluation of the Ohio Internship Program in School Psychology. The third case study focuses on efforts to evaluate the fidelity of implementation for teacher teams’ use of a five-step process for data-based decision making and instructional planning.


Author(s): Julie Q. Morrison, Anna L. Harms

The objective of this chapter is to describe the evaluation methods, techniques, and tools involved in measuring implementation fidelity. Implementation fidelity is defined, and the distinction between process evaluation and formative evaluation is described. The body of knowledge known as implementation research is introduced, and the National Implementation Research Network’s Implementation Drivers Framework and implementation stages are presented. The use of manuals, protocols, checklists, and practice profiles to support high-fidelity practitioner behaviors is illustrated. Research-validated tools for assessing implementation fidelity within a multi-tiered system of supports initiative are highlighted. The chapter concludes with a discussion of the use of established criteria for judging implementation fidelity and the critical importance of linking measures of implementation fidelity to intended student outcomes.


Author(s): Julie Q. Morrison, Anna L. Harms

The objective of this chapter is to provide the school-based professional with a concise introduction to program evaluation. Program evaluation is defined, and the characteristics distinguishing program evaluation from research are described. An overview of the most prominent evaluation approaches relevant to a school-based context is provided, and the distinction between formative and summative evaluation is presented. Throughout the chapter, evaluation approaches, purposes, and foci are presented as they relate to the evaluation of a multi-tier system of supports (MTSS) framework. Finally, the distinction between an internal and an external evaluator is outlined in terms of the relative advantages and disadvantages of each, and the role of the school-based professional as an internal or external evaluator is illustrated given the current emphasis in education on results-driven accountability.


Author(s): Julie Q. Morrison, Anna L. Harms

The objective of Chapter 4 is to provide practical guidelines for developing evaluation plans. Practical tips for writing measurable goals and objectives are provided first, followed by guidance for developing a logic model based on the program’s theory of change. Guidance is also provided for writing evaluation questions and linking those questions to evaluation methods. Useful templates for designing and communicating evaluation plans, roles, responsibilities, and milestones are presented. This chapter on developing an evaluation plan incorporates content and concepts from the previous chapters on program evaluation approaches (Chapter 1), evaluating program implementation (Chapter 2), and evaluating teacher professional learning opportunities (Chapter 3).


Author(s): Julie Q. Morrison, Anna L. Harms

Professional learning in the form of training and coaching/consultation is central to supporting evidence-based practices in schools. The objective of this chapter is to review what is known about effective strategies to support adult learning and to compare frameworks for evaluating professional learning for educators. Five critical levels have been identified for evaluating educator professional learning: (a) participants’ reactions and perceptions of satisfaction, (b) participants’ learning, (c) organization support and change, (d) participants’ use of new knowledge and skills, and (e) student learning outcomes. Other topics discussed include evaluation approaches, methods, and tools for assessing the implementation and impact of teacher professional learning opportunities in a multi-tiered system of supports initiative.


Author(s): Julie Q. Morrison, Anna L. Harms

The topic of data visualization and the effective communication of evaluation findings has received considerable attention in recent years. The objective of Chapter 5 is to present practical strategies for communicating findings through data visualization and report writing that maximize the impact of the evaluation. The chapter highlights the use of web-based data systems and data dashboards for communicating implementation and outcome data relative to a multi-tiered system of supports initiative. Practical guidelines are provided for developing communication plans, maintaining a data-driven focus in communications, and linking current results to proposed actions. Special attention is paid to the use of targeted, tailored communication as informed by diffusion theory.

