Facilitators and Barriers to Adopting Evidence-Based Physical Education in Elementary Schools

2011 · Vol 8 (s1) · pp. S17–S25
Author(s): Monica A.F. Lounsbery, Thomas L. McKenzie, Stewart Trost, Nicole J. Smith

Background: Evidence-based physical education (EBPE) programs have increased physical activity (PA) by as much as 18%, yet widespread adoption has not occurred. Understanding school-level facilitators of and barriers to PE should prove useful to EBPE dissemination efforts. Methods: Pairs of principals and PE teachers from 154 schools (75 Adopters and 79 Non-Adopters) in 34 states completed questionnaires. Differences between Adopter and Non-Adopter schools were tested using t tests or Wilcoxon signed-rank tests and chi-square analyses. Results: Principals and teachers reported distinct roles in PE curriculum adoption decision making, but few viewed themselves as very involved in program evaluation. Teachers in Adopter schools were more satisfied with PE program outcomes and had greater involvement in teacher evaluation and program decision making. Compared with teachers, principals were generally more satisfied with their school's PE program outcomes and did not share teachers' perceptions of PE barriers. However, principals also demonstrated a general lack of familiarity with their PE programs. Conclusions: To facilitate EBPE adoption, dissemination efforts should target both principals and PE teachers. Increasing principals' knowledge may be instrumental in addressing some teacher-perceived barriers to PE. Strategic advocacy efforts, including targeting policies that require PE program evaluation, are needed.
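The group comparisons named in the Methods can be sketched in a few lines of Python. The data below are simulated and the measures (satisfaction scores, an involvement item) are hypothetical stand-ins for the study's actual questionnaire items; only the group sizes (75 and 79 schools) follow the abstract.

```python
# Minimal sketch of the Adopter vs. Non-Adopter comparisons described
# above; all data are simulated, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous outcome (e.g., teacher satisfaction, 1-5 scale)
# for 75 Adopter and 79 Non-Adopter schools.
adopter = rng.normal(4.0, 0.8, size=75)
non_adopter = rng.normal(3.5, 0.8, size=79)
t_stat, t_p = stats.ttest_ind(adopter, non_adopter)
print(f"t test: t = {t_stat:.2f}, p = {t_p:.3f}")

# Wilcoxon signed-rank test for paired ratings within a school
# (e.g., principal vs. teacher satisfaction).
principal = rng.normal(4.2, 0.7, size=75)
teacher = principal - rng.normal(0.3, 0.5, size=75)
w_stat, w_p = stats.wilcoxon(principal, teacher)
print(f"Wilcoxon: W = {w_stat:.1f}, p = {w_p:.3f}")

# Chi-square test for a categorical item (e.g., teacher involvement in
# program decision making, yes/no, by adoption status).
contingency = np.array([[60, 15],   # Adopter: yes, no
                        [40, 39]])  # Non-Adopter: yes, no
chi2, chi_p, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square({dof}) = {chi2:.2f}, p = {chi_p:.3f}")
```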

2016 · Vol 41 (5) · pp. 407–435
Author(s): Christopher S. Horne

Background: Government and private funders increasingly require social service providers to adopt program models deemed "evidence based," particularly as defined by evidence-based program registries, such as the What Works Clearinghouse and the National Registry of Evidence-Based Programs and Practices. These registries summarize the evidence about programs' effectiveness, giving near-exclusive priority to evidence from experimental-design evaluations. The registries' goal is to aid decision making about program replication, but critics suspect the emphasis on evidence from experimental-design evaluations, while ensuring strong internal validity, may inadvertently undermine that goal, which requires strong external validity as well. Objective: The objective of this study is to determine the extent to which the registries' reports provide information about context-specific program implementation factors that affect program outcomes and would thus support decision making about program replication and adaptation. Method: A research-derived rubric was used to rate the extent of context-specific reporting in the population of seven major registries' evidence summaries (N = 55) for youth development programs. Findings: Nearly all (91%) of the reports provide context-specific information about program participants, but far fewer provide context-specific information about implementation fidelity and other variations in program implementation (55%), the program's environment (37%), costs (27%), quality assurance measures (22%), implementing agencies (19%), or staff (15%). Conclusion: Evidence-based program registries provide insufficient information to guide context-sensitive decision making about program replication and adaptation. Registries should supplement their evidence base with nonexperimental evaluations and revise their methodological screens and synthesis-writing protocols to prioritize reporting, by both evaluators and the registries themselves, of context-specific implementation factors that affect program outcomes.
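The reported percentages come from a rubric-based tally across 55 evidence summaries. A purely illustrative sketch of such a tally follows; the dimension names track the abstract, but the report entries and the tallying code are invented, not the study's instrument.

```python
# Hypothetical sketch of a rubric tally: record which context
# dimensions each registry report covers, then compute coverage rates.
from collections import Counter

DIMENSIONS = ["participants", "implementation", "environment",
              "costs", "quality_assurance", "agencies", "staff"]

# Each report is represented as the set of dimensions its evidence
# summary addresses (invented examples; the study rated N = 55 reports).
reports = [
    {"participants", "implementation"},
    {"participants", "environment", "costs"},
    {"participants"},
]

counts = Counter(dim for report in reports for dim in report)
for dim in DIMENSIONS:
    pct = 100 * counts[dim] / len(reports)
    print(f"{dim:>18}: {pct:5.1f}% of reports")
```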


2007 · Vol 2 (1) · pp. 32
Author(s): Gillian Byrne

As libraries and librarians move toward evidence-based decision making, the volume of data generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence-based practice (EBP), both to design and analyze one's own research correctly and to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques, including chi-square, correlation, and analysis of variance (ANOVA).
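For concreteness, here is a minimal sketch of the three techniques the article names, applied with scipy to invented library data; the variables (gate counts, circulation, session lengths) are illustrative assumptions, not drawn from the article.

```python
# Chi-square, correlation, and one-way ANOVA on simulated library data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Chi-square: is resource format (print/ebook) independent of user group?
observed = np.array([[120, 80],    # undergraduates: print, ebook
                     [60, 140]])   # faculty: print, ebook
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")

# Correlation: do daily gate counts track circulation?
gate_counts = rng.poisson(500, size=30)
circulation = gate_counts * 0.4 + rng.normal(0, 20, size=30)
r, p_r = stats.pearsonr(gate_counts, circulation)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")

# One-way ANOVA: does mean session length differ across three branches?
branch_a = rng.normal(25, 5, size=40)
branch_b = rng.normal(28, 5, size=40)
branch_c = rng.normal(22, 5, size=40)
f_stat, p_f = stats.f_oneway(branch_a, branch_b, branch_c)
print(f"F = {f_stat:.2f}, p = {p_f:.4f}")
```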


2007 · Vol 2 (1) · pp. 118–123
Author(s): James P. Marshall, Brian J. Higginbotham, Victor W. Harris, Thomas R. Lee

The importance of program evaluation for decision making, accountability, and sustainability is examined in this article. Pros and cons of traditional pretest-posttest and posttest-then-retrospective-pretest methodologies are discussed. A case study of Utah’s 4-H mentoring program using a posttest-then-retrospective-pretest design is presented. Furthermore, it is argued that the posttest-then-retrospective-pretest design is a valid, efficient, and cost-effective way to assess program outcomes and impacts.
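A brief sketch may help contrast the two designs. The ratings below are simulated on a 1–5 scale; the tendency of retrospective pretests to run lower than traditional pretests (response-shift bias, a common rationale for the design) is built into the simulation as an assumption, not taken from the Utah 4-H study.

```python
# Simulated comparison of a traditional pretest-posttest design with a
# posttest-then-retrospective-pretest design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 50

# Traditional design: pretest collected before the program begins.
pretest = rng.normal(3.0, 0.6, size=n)
posttest = pretest + rng.normal(0.5, 0.4, size=n)

# Retrospective design: after the program, participants rate where they
# are now (posttest) and, looking back, where they were before. With
# response-shift bias, retrospective pretests often come out lower.
retro_pretest = pretest - rng.normal(0.3, 0.3, size=n)

# Paired t tests estimate program impact under each design.
t_trad, p_trad = stats.ttest_rel(posttest, pretest)
t_retro, p_retro = stats.ttest_rel(posttest, retro_pretest)
print(f"traditional:   t = {t_trad:.2f}, p = {p_trad:.4f}")
print(f"retrospective: t = {t_retro:.2f}, p = {p_retro:.4f}")
```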


Evaluation · 2005 · Vol 11 (1) · pp. 95–109
Author(s): William R. Shadish, Salvador Chacón-Moscoso, Julio Sánchez-Meca

2020 · Vol 43
Author(s): Valerie F. Reyna, David A. Broniatowski

Gilead et al. offer a thoughtful and much-needed treatment of abstraction. However, their account does not build on an extensive literature on abstraction, representational diversity, neurocognition, and psychopathology that provides important constraints and alternative evidence-based conceptions. We draw on conceptions from software engineering and socio-technical systems engineering, and on fuzzy-trace theory, a neurocognitive theory with abstract representations of gist at its core.


2011 · Vol 20 (4) · pp. 121–123
Author(s): Jeri A. Logemann

Evidence-based practice requires astute clinicians to blend their best clinical judgment with the best available external evidence and the patient's own values and expectations. During clinical decision making we sometimes weight one of these elements more heavily than the others, though it is never wise to do so, and sometimes factors we are unaware of produce unanticipated clinical outcomes. We may also feel strongly about one clinical method or another; ideally such convictions are founded in evidence, but some are not. The sound use of evidence is the best way to navigate the debates within our field of practice.

