Proportional Reasoning Interventions in Special Education Synthesis Coding Protocol

Author(s): Gena Nelson

The purpose of this document is to provide readers with the coding protocol that the authors used to code nine group-design and single-case design intervention studies focused on proportional reasoning interventions for students (grades 5-9) with learning disabilities (LD) or mathematics difficulty (MD). The studies yielded intervention effects ranging from g = −0.10 to 1.87 and from Tau-U = 0.88 to 1.00. We coded all of the studies for variables in the following categories: study information, intervention features, dependent measures, participant demographics, LD and MD criteria and definitions, instructional content, study results, and quality indicators for group and single-case design. The study quality indicator coding portion of this protocol was adapted from Gersten et al. (2005) and Horner et al. (2005). This code book contains variable names, code options, and code definitions. The mean interrater reliability across all codes using this protocol was 91% (range across categories = 82%–96%). The publication associated with this coding protocol is Nelson et al. (2020).
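
As a rough illustration of the interrater reliability statistic reported above, the following Python sketch computes simple item-by-item percent agreement between two coders. The variable names and code values are hypothetical and are not drawn from the synthesis; the protocol itself reports only the resulting percentages.

```python
# Minimal sketch of item-by-item percent agreement between two coders,
# one common way interrater reliability is reported for coding protocols.
# The code lists below are hypothetical examples, not data from the synthesis.

def percent_agreement(coder_a, coder_b):
    """Return the percent of items on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must rate the same number of items.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Example: two coders coding a hypothetical 'design type' variable for ten studies.
coder_a = ["group", "SCD", "SCD", "group", "SCD", "group", "SCD", "SCD", "group", "SCD"]
coder_b = ["group", "SCD", "group", "group", "SCD", "group", "SCD", "SCD", "group", "SCD"]
print(f"Agreement: {percent_agreement(coder_a, coder_b):.0f}%")  # 90%
```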

2021, Vol. 11(2), pp. 76
Author(s): Chao-Ying Joanne Peng, Li-Ting Chen

Because an outcome behavior is observed repeatedly in N-of-1 or single-case design (SCD) intervention studies, missing scores are inevitable in such studies. Approximately 21% of SCD articles published in five reputable journals between 2015 and 2019 exhibited evidence of missing scores. Missing rates varied by design, with the highest rate (24%) found in multiple baseline/probe designs. Missing scores complicate data analysis, and inappropriate treatments of missing scores threaten internal validity and weaken the generalizability of intervention effects reported in SCD research. In this paper, we comprehensively review nine methods for treating missing SCD data: the available data method, six single-imputation methods, and two model-based methods. The strengths, weaknesses, assumptions, and examples of these methods are summarized. The available data method and three single-imputation methods are further demonstrated in assessing an intervention effect at the class and student levels. Assessment results are interpreted in terms of effect sizes, statistical significance, and visual analysis of data. Differences in results among the four methods are noted and discussed. The extensive review of problems caused by missing scores and possible treatments should empower researchers and practitioners to account for missing scores effectively and to support evidence-based interventions vigorously. The paper concludes with a discussion of contingencies for implementing the nine methods and practical strategies for managing missing scores in single-case intervention studies.
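
To make two of the simpler treatments concrete, here is a brief Python sketch of last observation carried forward and phase-mean imputation applied to a hypothetical single-case data series. These are generic illustrations of the kind of single-imputation methods reviewed, not the authors' analysis code or data.

```python
# Illustrative sketch of two simple single-imputation treatments for a
# single-case data series with missing observations (None).

def impute_locf(series):
    """Last observation carried forward: replace a missing score with the
    most recent observed score; a score missing before any observation stays missing."""
    out, last = [], None
    for y in series:
        if y is None:
            out.append(last)
        else:
            out.append(y)
            last = y
    return out

def impute_phase_mean(series):
    """Replace each missing score with the mean of the observed scores in the
    same phase (here the whole series stands in for a single phase)."""
    observed = [y for y in series if y is not None]
    mean = sum(observed) / len(observed)
    return [mean if y is None else y for y in series]

baseline = [4, 5, None, 4, 6]          # hypothetical baseline phase with one missing score
print(impute_locf(baseline))           # [4, 5, 5, 4, 6]
print(impute_phase_mean(baseline))     # [4, 5, 4.75, 4, 6]
```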


2011, Vol. 36(4), pp. 210-218
Author(s): Matthew K. Burns

The current study demonstrates how conceptual and procedural knowledge can be used as a heuristic to better understand student math difficulties, develop interventions, and lay the groundwork for future research. Math interventions were implemented with two elementary students using a nonexperimental single-case design. One student demonstrated acceptable conceptual understanding but low procedural knowledge, and the other student demonstrated low conceptual understanding in addition to difficulties with procedural knowledge. The mismatched intervention (e.g., a procedural intervention for a student who needed a conceptual intervention) was implemented first for both students, followed by the matched intervention. The intervention identified as most appropriate was more effective than the mismatched intervention for both students: the mean percentage of nonoverlapping data was 100% for the matched intervention and 16.5% for the mismatched intervention. Suggestions for future research are included.
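
For readers unfamiliar with the metric, the following Python sketch shows how a percentage of nonoverlapping data (PND) is computed from baseline and intervention phases. The data values are hypothetical and only loosely mirror the pattern reported above.

```python
# Minimal sketch of the percentage of nonoverlapping data (PND) for a
# single-case design. Data values are hypothetical, for illustration only.

def pnd(baseline, intervention, higher_is_better=True):
    """Percent of intervention-phase points that exceed (or, if lower scores
    are better, fall below) the most extreme baseline point."""
    if higher_is_better:
        threshold = max(baseline)
        nonoverlapping = sum(y > threshold for y in intervention)
    else:
        threshold = min(baseline)
        nonoverlapping = sum(y < threshold for y in intervention)
    return 100.0 * nonoverlapping / len(intervention)

baseline = [20, 25, 22, 24]             # hypothetical baseline scores
matched = [30, 35, 38, 40, 42, 45]      # hypothetical matched-intervention phase
mismatched = [23, 26, 24, 22, 25, 24]   # hypothetical mismatched-intervention phase
print(pnd(baseline, matched))      # 100.0
print(pnd(baseline, mismatched))   # ~16.7 (1 of 6 points above the baseline maximum)
```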


2021, pp. 074193252199645
Author(s): Bryan G. Cook, Austin H. Johnson, Daniel M. Maggin, William J. Therrien, Erin E. Barton, ...

Research indicating many study results do not replicate has raised questions about the credibility of science and prompted concerns about a potential reproducibility crisis. Moreover, most published research is not freely accessible, which limits the potential impact of science. Open science, which aims to make the research process more open and reproducible, has been proposed as one approach to increase the credibility and impact of scientific research. Although relatively little attention has been paid to open science in relation to single-case design, we propose that open-science practices can be applied to enhance the credibility and impact of single-case design research. In this article, we discuss how open-science practices align with other recent developments in single-case design research, describe four prominent open-science practices (i.e., preregistration, registered reports, data and materials sharing, and open access), and discuss potential benefits and limitations of each practice for single-case design.


2020, Vol. 51(1), pp. 165-175
Author(s): Lindsey A. Peters-Sanders, Elizabeth S. Kelley, Christa Haring Biel, Keri Madsen, Xigrid Soto, ...

Purpose: This study evaluated the effects of an automated, small-group intervention designed to teach preschoolers challenging vocabulary words. Previous studies have provided evidence of efficacy. In this study, we evaluated the effects of the program after doubling the number of words taught from 2 to 4 words per book. Method: Seventeen preschool children listened to 1 prerecorded book per week for 9 weeks. Each storybook had embedded, interactive lessons for 4 target vocabulary words. Each lesson provided repeated exposures to words and their definitions, child-friendly contexts, and multiple opportunities for children to respond verbally to instructional prompts. Participants were asked to define the weekly targeted vocabulary before and after intervention. A repeated acquisition single-case design was used to examine the effects of the books and embedded lessons on learning of target vocabulary words. Results: Treatment effects were observed for all children across many of the books. A learning gain of at least 2 points (i.e., 1 word) was replicated for 74.5% of the 149 books tested across the 17 participants. On average, children learned to define 47% of the target vocabulary words (17 out of 36). Conclusions: Results support including 4 challenging words per book, as children learned substantially more words than in previous studies in which fewer words were taught per book. Within an iterative development process, the results of the current study take us 1 step closer to creating an optimal vocabulary intervention that supports the language development of at-risk children.
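
As a purely hypothetical illustration of how the per-book replication criterion above could be tallied, consider the following Python sketch; the pre/post scores are invented and do not come from the study.

```python
# Hypothetical sketch of tallying a replication criterion of a gain of at
# least 2 points (i.e., one word) from pre- to posttest for each book.

def replication_rate(pre_post_pairs, min_gain=2):
    """Proportion of books whose posttest score exceeds the pretest score
    by at least `min_gain` points."""
    replicated = sum((post - pre) >= min_gain for pre, post in pre_post_pairs)
    return replicated / len(pre_post_pairs)

# One child's pre/post definition scores for nine weekly books (invented values).
books = [(0, 4), (1, 2), (0, 3), (2, 2), (0, 6), (1, 5), (0, 1), (2, 6), (0, 4)]
print(f"{replication_rate(books):.1%} of books showed a gain of 2+ points")
```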


2020, Vol. 63(12), pp. 4148-4161
Author(s): Christine S.-Y. Ng, Stephanie F. Stokes, Mary Alt

Purpose: We report on a replicated single-case design study that measured the feasibility of an expressive vocabulary intervention for three Cantonese-speaking toddlers with small expressive lexicons relative to their age. The aim was to assess the cross-cultural and cross-linguistic feasibility of an intervention method developed for English-speaking children. Method: A nonconcurrent multiple-baseline design was used with four baseline data points and 16 intervention sessions per participant. The intervention design incorporated implicit learning principles, a high treatment dosage, and control of the phonological neighborhood density of the stimuli. The children (24–39 months) attended 7–9 weeks of twice-weekly input-based treatment in which no explicit verbal production was required from the child. Each target word was provided as input a minimum of 64 times across at least two intervention sessions. Treatment feasibility was measured by comparing how many of the target and control words each child produced across the intervention period, and parent-reported expressive vocabulary checklists were completed to compare pre- and postintervention spoken vocabulary size. An omnibus effect size for the treatment effect on the number of target and control words produced across time was calculated using Kendall's Tau. Results: There was a significant treatment effect for target words learned in intervention relative to baselines, and all children produced significantly more target than control words across the intervention period. The effect of phonological neighborhood density on expressive word production could not be evaluated because two of the three children learned all target words. Conclusion: The results provide cross-cultural evidence of the feasibility of an intervention model that incorporated a high-dosage, cross-situational statistical learning paradigm to teach spoken word production to children with small expressive lexicons.
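
The following Python sketch illustrates, with hypothetical data, how a Kendall's tau effect size of the kind described above can be computed between session number and the count of target words a child produced; it is not the authors' analysis script.

```python
# Illustrative Kendall's tau between session number and the number of target
# words produced, in the spirit of the omnibus effect size reported here.
# The scores below are hypothetical.

from scipy.stats import kendalltau

sessions = list(range(1, 21))  # 4 baseline points + 16 intervention sessions
target_words_produced = [0, 0, 0, 0, 1, 1, 2, 3, 3, 4, 5, 6, 6, 8, 9, 10, 11, 13, 14, 16]

tau, p_value = kendalltau(sessions, target_words_produced)
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.4f}")
```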

