Assessing the STEM landscape: the current instructional climate survey and the evidence-based instructional practices adoption scale

Author(s):  
R. Eric Landrum ◽  
Karen Viskupic ◽  
Susan E. Shadle ◽  
Doug Bullock
2011 ◽  
Vol 35 (2) ◽  
pp. 220-225
Author(s):  
Ian Dempsey

In Volume 35, Issue 1 of the Australasian Journal of Special Education, Carter, Stephenson, and Strnadová (2011) replicated a study by Burns and Ysseldyke (2009). In Carter et al.'s study, 194 Australian special educators rated the extent to which they used eight instructional practices: applied behaviour analysis, direct instruction, formative evaluation, mnemonic strategies, modality training, perceptual-motor training, psycholinguistic training, and social skills training. On the basis of past meta-analytic research, the first four of these practices had moderate to high effect sizes (and were regarded by the authors as more desirable techniques), while the final four had low effect sizes. Carter et al. found that although the Australian teachers used some desirable strategies relatively frequently, they also frequently used some less desirable practices, and they therefore argued that desirable instructional practices should be encouraged at the expense of less effective ones. While these results are of interest, they also have the potential to mislead readers, and later sections of the current article examine these potential misconceptions.


2018 ◽  
Vol 34 (1) ◽  
pp. 3-14 ◽  
Author(s):  
Victoria F. Knight ◽  
Heartley B. Huber ◽  
Emily M. Kuntz ◽  
Erik W. Carter ◽  
A. Pablo Juarez

Improving educational outcomes for students with autism and intellectual disability requires delivering services and supports marked by evidence-based practices. We surveyed 535 special educators of students with autism and/or intellectual disability about (a) their implementation of 26 instructional practices, (b) their recent access to training and resources on those practices, (c) the factors they consider when deciding which practices to use, (d) the importance they place on various instructional areas (e.g., social skills, reading), and (e) their preparedness to provide that instruction. Although teachers reported implementing a wide range of evidence-based instructional practices, their recent access to training and resources was fairly limited. Special educators identified a constellation of factors informing their instructional decision making, placing emphasis on student needs and professional judgment. When considering instructional areas, a gap was evident between ratings of importance and preparedness. We address implications for strengthening professional development pathways and offer recommendations for future research.


Author(s):  
Barbara Fink Chorzempa ◽  
Michael D. Smith ◽  
Jane M. Sileo

Within their teacher preparation courses and field experiences, preservice teachers are introduced to numerous instructional practices, not all of which are considered research-based. For this reason, instruction in how to evaluate the effectiveness of one’s practices is essential, but it is often a lacking component of initial certification programs. In this article, a flexible, problem-solving model for collecting and reflecting on practice-based evidence (PBE) is described. The model, utilized in a graduate program in Special Education, was designed to assist teacher candidates in evaluating the effectiveness of the practices they implement to optimize students’ learning outcomes. Implications for practice in the K-12 environment are also provided.


2017 ◽  
Vol 16 (1) ◽  
pp. rm1 ◽  
Author(s):  
Marilyne Stains ◽  
Trisha Vickrey

The discipline-based education research (DBER) community has been invested in the research and development of evidence-based instructional practices (EBIPs) for decades. Unfortunately, investigations of the impact of EBIPs on student outcomes typically do not characterize instructors’ adherence to an EBIP, often assuming that implementation was as intended by developers. The validity of such findings is compromised, since positive or negative outcomes can be incorrectly attributed to an EBIP when other factors impacting implementation are often present. This methodological flaw can be overcome by developing measures to determine the fidelity of implementation (FOI) of an intervention, a construct extensively studied in other fields, such as healthcare. Unfortunately, few frameworks to measure FOI in educational settings exist, which likely contributes to a lack of FOI constructs in most impact studies of EBIPs in DBER. In this Essay, we leverage the FOI literature presented in other fields to propose an appropriate framework for FOI within the context of DBER. We describe how this framework enhances the validity of EBIP impact studies and provide methodological guidelines for how it should be integrated in such studies. Finally, we demonstrate the application of our framework to peer instruction, a commonly researched EBIP within the DBER community.


2013 ◽  
Vol 79 (2) ◽  
pp. 147-161
Author(s):  
Garnett J. Smith ◽  
Matthew M. Schmidt ◽  
Patricia J. Edelen-Smith ◽  
Bryan G. Cook

A tension exists between educational practitioners and researchers, which is often attributed to their dichotomous and oftentimes polarizing professional ideologies or Discourse communities. When determining what works in education, researchers tend to emphasize evidence-based practices (EBPs) supported by research that is rigorous and internally valid, whereas practitioners tend to value practice-based evidence (PBE) that is relevant and externally valid. The authors argue that these separate mindsets stem from the classical view of research as being either rigorous or relevant. In his canonical Pasteur's Quadrant, Stokes (1997) proposed that rigor and relevance are complementary notions that, when merged, further the production, translation, and implementation of instructional practices that are both rigorous (i.e., evidence-based) and relevant (i.e., practice-based). The authors propose educational design research (EDR) and communities of practice (CoPs) as frameworks through which to realize the promise of Pasteur's quadrant.


Author(s):  
Mary-Kate Sableski ◽  
Catherine A. Rosemary ◽  
Kathryn Kinnucan-Welsch

This chapter describes use of a metacognitive tool to facilitate teacher reflection in an online graduate reading practicum course. The Teacher Learning Instrument (TLI) is a tool designed to support the evidence-based practice of reflection on teaching through collaborative inquiry. The purpose of using the TLI in an online reading practicum course is to facilitate candidates' reflections on teaching struggling readers in one-to-one intervention settings with the goal of refining instruction to improve students' reading ability. The analysis of the assignment data associated with use of the TLI demonstrates the potential of the TLI to inform a collaborative, reflective process among practicing teachers within the context of a practicum course, addressing the requirements of Standard 7. The reflective process and sharing of insights among colleagues make literacy instructional practices visible for close examination in an online environment and thus exposes the complexity inherent in the effective teaching of struggling readers.
