Comparing the Effects of Two Reading Interventions Using a Randomized Alternating Treatment Design

2019 ◽  
Vol 86 (4) ◽  
pp. 355-373
Author(s):  
Youjia Hua ◽  
Michelle Hinzman ◽  
Chengan Yuan ◽  
Kinga Balint Langel

An emerging body of research suggests that incorporating randomization schemes into single-case research designs strengthens internal validity and data evaluation. The purpose of this study was to test the utility and feasibility of a randomized alternating-treatment design in an investigation that compared the combined effects of vocabulary instruction and paraphrasing strategies on the expository comprehension of six students with reading difficulties. We analyzed the data using three types of randomization tests as well as visual analysis. The visual analysis and randomization tests confirmed the additional benefit of vocabulary instruction on expository comprehension for one student; however, the effect was not replicated across the other five students. We found that proper randomization schemes can improve both the internal validity and the data analysis strategies of the alternating-treatment design.
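The logic of a randomization test for an alternating-treatments design can be illustrated with a small sketch. The scores and condition labels below are hypothetical, not data from the study; the technique is the general one: under the null hypothesis the condition labels are exchangeable, so the observed statistic is compared against its value under every admissible reassignment of labels.

```python
import itertools

# Hypothetical session scores (percent correct), illustrative only.
# "VP" = vocabulary instruction + paraphrasing strategy; "P" = paraphrasing alone.
scores = [55, 70, 52, 68, 60, 74, 58, 72, 62, 75]
labels = ["P", "VP", "P", "VP", "P", "VP", "P", "VP", "P", "VP"]

def mean_diff(scores, labels):
    """Mean VP score minus mean P score."""
    vp = [s for s, l in zip(scores, labels) if l == "VP"]
    p = [s for s, l in zip(scores, labels) if l == "P"]
    return sum(vp) / len(vp) - sum(p) / len(p)

observed = mean_diff(scores, labels)

# Randomization test: recompute the statistic for every assignment of five
# "VP" sessions out of ten, and count assignments at least as extreme as the
# one actually observed (one-sided p-value).
n = len(scores)
extreme = total = 0
for vp_positions in itertools.combinations(range(n), n // 2):
    perm = ["VP" if i in vp_positions else "P" for i in range(n)]
    total += 1
    if mean_diff(scores, perm) >= observed:
        extreme += 1

p_value = extreme / total
```

With these illustrative numbers the observed split is the most extreme of the 252 possible assignments, so the one-sided p-value is 1/252; in a real design the set of permutations should match the randomization scheme actually used to order the sessions.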

Author(s):  
Jennifer R. Ledford ◽  
Erin E. Barton ◽  
Katherine E. Severini ◽  
Kathleen N. Zimmerman

The overarching purpose of this article is to provide an introduction to the use of rigorous single-case research designs (SCRDs) in special education and related fields. The authors first discuss basic design types and the research questions that can be answered with SCRDs, examine threats to internal validity and potential ways to control for and detect common threats, and provide guidelines for selecting specific designs. Next, contemporary standards regarding rigor, measurement, description, and outcomes are presented. Finally, the authors discuss data-analytic techniques, differentiating rigor, positive outcomes, functional relations, and magnitude of effects.


2017 ◽  
Vol 23 (2) ◽  
pp. 206-225 ◽  
Author(s):  
Kevin M Roessger ◽  
Arie Greenleaf ◽  
Chad Hoggan

To overcome situational hurdles when researching transformative learning in adults, we outline a research approach using single-case research designs and smartphone data collection apps. This approach allows researchers to better understand learners’ current lived experiences and determine the effects of transformative learning interventions on demonstrable outcomes. We first discuss data collection apps and their features. We then describe how they can be integrated into single-case research designs to make causal inferences about a learning intervention’s effects when limited by researcher access and learner retrospective reporting. Design controls for internal validity threats and visual and statistical data analysis are then discussed. Throughout, we highlight applications to transformative learning and conclude by discussing the approach’s potential limitations.


2017 ◽  
Vol 39 (1) ◽  
pp. 71-90 ◽  
Author(s):  
Jennifer R. Ledford

Randomization of large numbers of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single-case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the treatment(s) of interest. SCDs are particularly relevant when a dependent variable of interest can be measured repeatedly over time across two conditions (e.g., baseline and intervention). Rather than randomizing large numbers of participants, SCD researchers use careful, prescribed ordering of experimental conditions, which allows them to improve internal validity by ruling out alternative explanations for behavior change. This article describes SCD logic, control of threats to internal validity, the use of randomization and counterbalancing, and data analysis in the context of single-case research.
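Randomized ordering of conditions is often restricted rather than fully random. A minimal sketch, assuming a common convention (equal sessions per condition and no more than two consecutive sessions of the same condition) rather than any rule taken from this article:

```python
import random

def randomized_order(conditions=("A", "B"), sessions_per=5, max_run=2, seed=None):
    """Randomly order sessions with equal counts per condition, rejecting
    any ordering that contains a run longer than max_run."""
    rng = random.Random(seed)
    pool = [c for c in conditions for _ in range(sessions_per)]
    while True:
        rng.shuffle(pool)
        # A run longer than max_run means some (max_run + 1)-length window
        # contains only one distinct condition.
        has_long_run = any(
            len(set(pool[i:i + max_run + 1])) == 1
            for i in range(len(pool) - max_run)
        )
        if not has_long_run:
            return list(pool)

order = randomized_order(seed=42)
```

Restricting run length preserves the rapid alternation that lets the design separate condition effects from time-linked threats such as maturation, while the random component supports randomization-test analysis of the resulting data.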


2019 ◽  
Article 014544551986705
Author(s):  
Jennifer Ninci

Practitioners frequently use single-case data for decision-making related to behavioral programming and progress monitoring. Visual analysis is an important and primary tool for reporting results of graphed single-case data because it provides immediate, contextualized information. Criticisms exist concerning the objectivity and reliability of the visual analysis process. When practitioners are equipped with knowledge about single-case designs, including threats to internal validity and safeguards against them, they can make technically accurate conclusions and reliable data-based decisions with relative ease. This paper summarizes single-case experimental designs and offers considerations for professionals to improve the accuracy and reliability of judgments made from single-case data. This paper can also help practitioners to appropriately incorporate single-case research design applications in their practice.


2019 ◽  
Author(s):  
Rumen Manolov

The lack of consensus regarding the most appropriate techniques for analyzing single-case experimental design data requires justifying the choice of any specific analytical option. The current text mentions some of the arguments provided by methodologists and statisticians in favor of several analytical techniques. Additionally, a small-scale literature review is performed in order to explore whether and how applied researchers justify the analytical choices they make. The review suggests that certain practices are not sufficiently explained. In order to improve the reporting of data-analytic decisions, it is proposed to choose and justify the analytical approach prior to gathering the data. As a possible justification for the data analysis plan, we propose using the expected data pattern as a basis (specifically, expectations about an improving baseline trend and about the immediate or progressive nature of the intervention effect). Although there are multiple alternatives for single-case data analysis, the current text focuses on visual analysis and multilevel models and illustrates an application of these analytical options with real data. User-friendly software is also developed.
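Why the expected data pattern matters for the analysis plan can be shown with a small sketch on hypothetical data (not the data from the cited text): when an improving baseline trend is anticipated, the trend should be fitted and projected forward so that an immediate (level-change) effect can be separated from a progressive (slope-change) effect.

```python
# Hypothetical single-case data for one participant.
baseline = [10, 11, 11, 12, 13]       # phase A, sessions 1-5: improving trend
intervention = [18, 19, 21, 22, 24]   # phase B, sessions 6-10

def ols_line(xs, ys):
    """Closed-form simple linear regression; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

b0, b1 = ols_line(range(1, 6), baseline)
i0, i1 = ols_line(range(6, 11), intervention)

# Immediate effect: first intervention point minus the baseline trend
# projected to session 6. Ignoring the baseline trend would overstate it.
immediate = intervention[0] - (b0 + b1 * 6)

# Progressive effect: change in slope between phases.
progressive = i1 - b1
```

The same level- and slope-change decomposition is what a multilevel model estimates across participants, with case-level random effects capturing between-participant variability.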


1994 ◽  
Vol 3 (7) ◽  
pp. 316-324 ◽  
Author(s):  
Linda M Proudfoot ◽  
Elizabeth S Farmer ◽  
Jean B McIntosh

2014 ◽  
Vol 23 (1) ◽  
pp. 15-26 ◽  
Author(s):  
Shelley L. Bredin-Oja ◽  
Marc E. Fey

Purpose: The purpose of this study was to determine whether children in the early stage of combining words are more likely to respond to imitation prompts that are telegraphic than to prompts that are grammatically complete, and whether they produce obligatory grammatical morphemes more reliably in response to grammatically complete imitation prompts than to telegraphic prompts.
Method: Five children between 30 and 51 months of age with language delay participated in a single-case alternating treatment design with 14 sessions split between a grammatical and a telegraphic condition. Alternating orders of the 14 sessions were randomly assigned to each child. Children were given 15 prompts to imitate a semantic relation that was either grammatically complete or telegraphic.
Results: No differences between conditions were found for the number of responses that contained a semantic relation. In contrast, 3 of the 5 children produced significantly more grammatical morphemes when presented with grammatically complete imitation prompts. Two children did not include a function word in either condition.
Conclusion: Providing a telegraphic prompt to imitate does not offer any advantage as an intervention technique. Children are just as likely to respond to a grammatically complete imitation prompt. Further, including function words encourages children who are developmentally ready to imitate them.


2011 ◽  
Vol 12 (1) ◽  
pp. 1-21 ◽  
Author(s):  
Robyn L. Tate ◽  
Jacinta Douglas

In this special article we describe a number of reporting guidelines endorsed by the CONSORT (Consolidated Standards of Reporting Trials) group for a range of research designs that commonly appear in scientific journals: systematic reviews, clinical trials with and without randomisation, observational studies, n-of-1 (or single-case experimental design) trials, and diagnostic studies. We also consider reporting guidelines for studies using qualitative methodology. In addition to reporting guidelines, we present method quality rating scales, which aim to measure the risk of bias that threatens the internal validity of a study. For authors, the advantages of reporting guidelines and method quality rating scales include the provision of a structure by which to improve the clarity and transparency of report writing; for reviewers and readers, they provide a method by which to critically appraise an article. Brain Impairment endorses these reporting guidelines and applies them within the review process for submissions to the journal.

