A Systematic Review of Instructional Comparisons in Single-Case Research

2019 ◽  
pp. 074193251985505 ◽  
Author(s):  
Jennifer R. Ledford ◽  
Kate T. Chazin ◽  
Kari L. Gagnon ◽  
Anne K. Lord ◽  
Virginia R. Turner ◽  
...  

Comparison studies conducted to determine which instructional interventions are most efficient for teaching discrete behaviors to individuals with disabilities are potentially valuable, although some threats to internal validity may be more likely in these studies. Studies included in this review typically met common internal validity standards, such as reliability measurement, but often did not include controls specific to comparison designs. Comparisons often included young children with autism and were frequently conducted by researchers in self-contained classroom settings. Systematic instruction was effective in nearly all comparisons, although many included undifferentiated data (i.e., both interventions were equally effective), and within-participant replications were often inconsistent (i.e., outcomes varied across comparisons for a single participant). Results suggest implementers should conduct high-fidelity instruction with corrective and instructive feedback and should choose intervention variations based on participant preference. We recommend researchers include control sets or time-lagged introductions, counterbalance behavior sets, and measure differential acquisition over time.
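The recommendation to counterbalance behavior sets across interventions could be sketched as follows. This is a hypothetical illustration, not the authors' procedure; the function name `counterbalance` and the set/intervention labels are assumptions:

```python
# Hypothetical sketch: counterbalancing two behavior sets across two
# interventions so that the set-to-intervention assignment alternates
# across participants (labels are illustrative, not from the review).
from itertools import cycle

def counterbalance(participants, behavior_sets=("Set A", "Set B"),
                   interventions=("Intervention 1", "Intervention 2")):
    """Assign behavior sets to interventions, alternating order per participant."""
    orders = cycle([behavior_sets, behavior_sets[::-1]])
    assignments = {}
    for participant, order in zip(participants, orders):
        assignments[participant] = dict(zip(interventions, order))
    return assignments

plan = counterbalance(["P1", "P2", "P3", "P4"])
# plan["P1"] maps Intervention 1 -> Set A; plan["P2"] maps Intervention 1 -> Set B
```

Alternating which set is taught under which intervention helps separate intervention effects from differences in set difficulty, one of the comparison-specific controls the review found often missing.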

Author(s):  
Jennifer R. Ledford ◽  
Erin E. Barton ◽  
Katherine E. Severini ◽  
Kathleen N. Zimmerman

The overarching purpose of this article is to provide an introduction to the use of rigorous single-case research designs (SCRDs) in special education and related fields. The authors first discuss basic design types and research questions that can be answered with SCRDs, examine threats to internal validity and potential ways to control for and detect common threats, and provide guidelines for selecting specific designs. Next, contemporary standards regarding rigor, measurement, description, and outcomes are presented. Finally, the authors discuss data analytic techniques, differentiating rigor, positive outcomes, functional relations, and magnitude of effects.



2017 ◽  
Vol 23 (2) ◽  
pp. 206-225 ◽  
Author(s):  
Kevin M Roessger ◽  
Arie Greenleaf ◽  
Chad Hoggan

To overcome situational hurdles when researching transformative learning in adults, we outline a research approach using single-case research designs and smartphone data collection apps. This approach allows researchers to better understand learners’ current lived experiences and determine the effects of transformative learning interventions on demonstrable outcomes. We first discuss data collection apps and their features. We then describe how they can be integrated into single-case research designs to make causal inferences about a learning intervention’s effects when limited by researcher access and learner retrospective reporting. Design controls for internal validity threats and visual and statistical data analysis are then discussed. Throughout, we highlight applications to transformative learning and conclude by discussing the approach’s potential limitations.
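One simple form of the statistical analysis mentioned above is a phase-level summary of the repeatedly measured outcome. The sketch below is a hypothetical illustration (the `level_change` function and data values are assumptions, not from the article), assuming one rating per session tagged with its phase:

```python
# Hypothetical sketch: summarizing repeated (e.g., smartphone-collected)
# ratings by phase to quantify level change between baseline and intervention.
from statistics import mean

def level_change(sessions):
    """sessions: list of (phase, score) tuples in chronological order."""
    by_phase = {}
    for phase, score in sessions:
        by_phase.setdefault(phase, []).append(score)
    return {phase: mean(scores) for phase, scores in by_phase.items()}

data = [("baseline", 2), ("baseline", 3), ("intervention", 6), ("intervention", 7)]
level_change(data)  # {"baseline": 2.5, "intervention": 6.5}
```

A shift in phase means is only one piece of evidence; in single-case logic it would be weighed alongside trend, variability, and replication across participants before inferring an intervention effect.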


2017 ◽  
Vol 39 (1) ◽  
pp. 71-90 ◽  
Author(s):  
Jennifer R. Ledford

Randomization of large numbers of participants to different treatment groups is often neither a feasible nor a preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the treatment(s) of interest. SCDs are particularly relevant when a dependent variable of interest can be measured repeatedly over time across two conditions (e.g., baseline and intervention). Rather than randomizing large numbers of participants, SCD researchers use careful and prescribed ordering of experimental conditions, which allows them to improve internal validity by ruling out alternative explanations for behavior change. This article describes SCD logic, control of threats to internal validity, the use of randomization and counterbalancing, and data analysis in the context of single case research.
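The randomization and counterbalancing of condition order described above could be sketched as follows. This is a minimal, hypothetical illustration (the function name, block structure, and condition labels are assumptions): each block presents every condition once in shuffled order, and no condition repeats across block boundaries:

```python
# Hypothetical sketch: a randomized, blocked session order for comparing
# conditions within one participant, with no condition repeated on
# consecutive sessions (labels are illustrative).
import random

def randomized_order(conditions=("baseline", "intervention"), blocks=5, seed=0):
    """Return a session order: each block contains every condition once, shuffled."""
    rng = random.Random(seed)
    order = []
    for _ in range(blocks):
        block = list(conditions)
        rng.shuffle(block)
        # reshuffle if the block would repeat the last condition of the prior block
        while order and block[0] == order[-1]:
            rng.shuffle(block)
        order.extend(block)
    return order
```

Blocked randomization of this kind keeps exposure to each condition balanced while preventing long runs of one condition, one way ordering can be prescribed to rule out alternative explanations for behavior change.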


2019 ◽  
pp. 014544551986705
Author(s):  
Jennifer Ninci

Practitioners frequently use single-case data for decision-making related to behavioral programming and progress monitoring. Visual analysis is an important and primary tool for reporting results of graphed single-case data because it provides immediate, contextualized information. Criticisms exist concerning the objectivity and reliability of the visual analysis process. When practitioners are equipped with knowledge about single-case designs, including threats and safeguards to internal validity, they can make technically accurate conclusions and reliable data-based decisions with relative ease. This paper summarizes single-case experimental design and considerations for professionals to improve the accuracy and reliability of judgments made from single-case data. This paper can also help practitioners to appropriately incorporate single-case research design applications in their practice.
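One widely known quantitative supplement to visual analysis is the percentage of non-overlapping data (PND). The sketch below is an illustration of that general index, not a procedure from this paper, and assumes higher scores indicate improvement:

```python
# Sketch of percentage of non-overlapping data (PND): the percent of
# intervention-phase points exceeding the highest baseline point
# (assumes higher scores indicate improvement).
def pnd(baseline, intervention):
    """Percent of intervention points above the baseline maximum."""
    ceiling = max(baseline)
    above = sum(1 for x in intervention if x > ceiling)
    return 100.0 * above / len(intervention)

pnd([2, 3, 2, 4], [5, 6, 4, 7, 8])  # 80.0: four of five points exceed the max of 4
```

Nonoverlap indices are easy to compute and report, but they ignore trend and variability, so they are best read alongside, not instead of, the contextualized judgments visual analysis provides.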


2016 ◽  
Vol 97 (10) ◽  
pp. e106
Author(s):  
Robyn Tate ◽  
Linda Sigmundsdottir ◽  
Janet Doubleday ◽  
Ulrike Rosenkoetter ◽  
Donna Wakim ◽  
...  

1994 ◽  
Vol 3 (7) ◽  
pp. 316-324 ◽  
Author(s):  
Linda M Proudfoot ◽  
Elizabeth S Farmer ◽  
Jean B McIntosh

2016 ◽  
Vol 45 (4) ◽  
pp. 379-399 ◽  
Author(s):  
Denise A. Soares ◽  
Judith R. Harrison ◽  
Kimberly J. Vannest ◽  
Susan S. McClelland
