Assessing Consistency in Single-Case Alternation Designs

2020 ◽  
pp. 014544552092399
Author(s):  
Rumen Manolov ◽  
René Tanious ◽  
Tamal Kumar De ◽  
Patrick Onghena

Consistency is one of the crucial single-case data aspects that are expected to be assessed visually when evaluating the presence of an intervention effect. Complementing visual inspection, there have been recent proposals for quantifying the consistency of data patterns in similar phases and the consistency of effects for reversal, multiple-baseline, and changing criterion designs. The current text continues this line of research by focusing on alternation designs using block randomization. Specifically, three types of consistency are discussed: consistency of superiority of one condition over another, consistency of the average level across blocks, and consistency in the magnitude of the effect across blocks. The focus is put especially on the latter type of consistency, which is quantified by partitioning the variance into a portion attributed to the intervention, a portion attributed to the blocking factor, and a residual (including the interaction between the intervention and the blocks). Several illustrations with real and fictitious data are provided in order to make clear the meaning of the proposed quantification. Moreover, specific graphical representations are recommended for complementing the numerical assessment of consistency. A freely available, user-friendly webpage has been developed for implementing the proposal.
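The variance partitioning described above can be illustrated with a minimal sketch using fictitious block-randomized data (rows are blocks, columns are the two conditions). This is a generic two-way sums-of-squares decomposition, not the authors' exact formulas:

```python
import numpy as np

# Fictitious block-randomized alternation data:
# rows = blocks, columns = conditions (A = baseline, B = intervention).
scores = np.array([
    [3.0, 7.0],
    [4.0, 8.0],
    [2.0, 7.5],
    [3.5, 6.5],
])

grand = scores.mean()
cond_means = scores.mean(axis=0)   # one mean per condition
block_means = scores.mean(axis=1)  # one mean per block
n_blocks, n_conds = scores.shape

ss_total = ((scores - grand) ** 2).sum()
ss_cond = n_blocks * ((cond_means - grand) ** 2).sum()
ss_block = n_conds * ((block_means - grand) ** 2).sum()
# Residual includes the condition-by-block interaction.
ss_resid = ss_total - ss_cond - ss_block

# Share of variance attributed to the intervention: values near 1
# suggest a consistent effect magnitude across blocks.
print(round(ss_cond / ss_total, 3))  # → 0.908
```

A large residual share relative to the intervention share would signal that the effect magnitude varies across blocks, i.e., low consistency of effect.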


2020 ◽  
pp. 152574012091542
Author(s):  
Beatriz de Diego-Lázaro ◽  
María Adelaida Restrepo

This study examined the effects of a 9-week auditory intervention on the auditory skills of five children with hearing loss who experienced prolonged auditory deprivation before receiving hearing aids. A single-case multiple baseline design across participants was used to examine the effect of the intervention on the children’s auditory skills using a weekly probe. The analyses showed a moderate to high intervention effect for three out of five participants. Children demonstrated gains in detection, discrimination, and identification, and Participant 5 also showed gains in sentence comprehension. Findings provide preliminary support for offering auditory intervention to children with hearing loss who are late-identified and aided in the presence of residual hearing.


2021 ◽  
Vol 11 (2) ◽  
pp. 76
Author(s):  
Chao-Ying Joanne Peng ◽  
Li-Ting Chen

Due to repeated observations of an outcome behavior in N-of-1 or single-case design (SCD) intervention studies, the occurrence of missing scores is inevitable in such studies. Approximately 21% of SCD articles published in five reputable journals between 2015 and 2019 exhibited evidence of missing scores. Missing rates varied by design, with the highest rate (24%) found in multiple baseline/probe designs. Missing scores cause difficulties in data analysis, and inappropriate treatment of missing scores leads to consequences that threaten internal validity and weaken the generalizability of intervention effects reported in SCD research. In this paper, we comprehensively review nine methods for treating missing SCD data: the available data method, six single imputation methods, and two model-based methods. The strengths, weaknesses, assumptions, and examples of these methods are summarized. The available data method and three single imputation methods are further demonstrated in assessing an intervention effect at the class and student levels. Assessment results are interpreted in terms of effect sizes, statistical significance, and visual analysis of data. Differences in results among the four methods are noted and discussed. The extensive review of problems caused by missing scores and possible treatments should empower researchers and practitioners to account for missing scores effectively and to support evidence-based interventions vigorously. The paper concludes with a discussion of contingencies for implementing the nine methods and practical strategies for managing missing scores in single-case intervention studies.
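The contrast between the available data method and simple single imputations can be sketched on a short fictitious series. These are generic illustrations of three common approaches (listwise deletion, mean imputation, linear interpolation), not the paper's exact nine procedures:

```python
import numpy as np

# Fictitious single-case series with two missing sessions (np.nan).
y = np.array([4.0, 5.0, np.nan, 6.0, np.nan, 8.0])

# 1) Available data method: analyze only the observed sessions.
observed = y[~np.isnan(y)]

# 2) Mean imputation: fill each gap with the mean of observed scores.
mean_imputed = np.where(np.isnan(y), np.nanmean(y), y)

# 3) Linear interpolation between neighboring observed sessions.
idx = np.arange(len(y))
interpolated = y.copy()
interpolated[np.isnan(y)] = np.interp(
    idx[np.isnan(y)], idx[~np.isnan(y)], observed
)

print(observed)      # → [4. 5. 6. 8.]
print(mean_imputed)  # gaps filled with 5.75
print(interpolated)  # gaps filled with 5.5 and 7.0
```

Even on this toy series, the three treatments yield different session values, which is why the choice of method can change effect sizes and visual-analysis conclusions downstream.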


2021 ◽  
Author(s):  
Rumen Manolov ◽  
René Tanious ◽  
Belén Fernández

In science in general and, therefore, in the context of single-case experimental designs (SCED), the replication of intervention effects within and across participants is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether (or to what extent) an effect has been replicated are scarce in the general context of the behavioral sciences, and practically nonexistent in the SCED context. We propose an extension of the modified Brinley plot for assessing how many of the effects replicate. In order to make this assessment possible, a definition of replication is suggested on the basis of expert judgment rather than purely statistical criteria. The definition of replication and its graphical representation are justified, presenting their strengths and limitations, and illustrated with real data. User-friendly software is made available for automatically obtaining the graphical representation.


2019 ◽  
pp. 014544551988288 ◽  
Author(s):  
René Tanious ◽  
Rumen Manolov ◽  
Patrick Onghena

Quality standards for single-case experimental designs (SCEDs) recommend inspecting six data aspects: level, trend, variability, overlap, immediacy, and consistency of data patterns. The data aspect consistency has long been neglected by visual and statistical analysts of SCEDs despite its importance for inferring a causal relationship. However, recently a first quantification has been proposed in the context of A-B-A-B designs, called CONsistency of DAta Patterns (CONDAP). In the current paper, we extend the existing CONDAP measure for assessing consistency in designs with more than two successive A-B elements (e.g., A-B-A-B-A-B), multiple baseline designs, and changing criterion designs. We illustrate each quantification with published research.


2020 ◽  
pp. 014544552098296
Author(s):  
Rumen Manolov ◽  
René Tanious

The current text deals with the assessment of consistency of data features from experimentally similar phases and consistency of effects in single-case experimental designs. Although consistency is frequently mentioned as a critical feature, few quantifications have been proposed so far, namely under the acronyms CONDAP (consistency of data patterns in similar phases) and CONEFF (consistency of effects). Whereas CONDAP allows assessing the consistency of data patterns, the proposals made here focus on the consistency of data features such as level, trend, and variability, as represented by summary measures (mean, ordinary least squares slope, and standard deviation, respectively). The assessment of consistency of effect is also made in terms of these three data features, while also including the study of the consistency of an immediate effect (if expected). The summary measures are represented as points on a modified Brinley plot and their similarity is assessed via quantifications of distance. Both absolute and relative measures of consistency are proposed: the former expressed in the same measurement units as the outcome variable and the latter as a percentage. Illustrations with real data sets (multiple baseline, ABAB, and alternating treatments designs) show the wide applicability of the proposals. We developed a user-friendly website to offer both the graphical representations and the quantifications.
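The distance-based idea can be sketched with fictitious multiple-baseline data, where each participant contributes one (baseline mean, intervention mean) point as on a modified Brinley plot. The quantification below (mean pairwise distance between effects, in absolute units and as a percentage of the mean effect) is an assumption-laden illustration of the general approach, not necessarily the exact measures proposed by the authors:

```python
import numpy as np

# Fictitious data: each row is one participant's
# (baseline mean, intervention mean) pair -- one point on a
# modified Brinley plot.
points = np.array([
    [2.0, 7.0],
    [3.0, 7.5],
    [2.5, 8.0],
])

# Effect per participant: mean difference (intervention - baseline).
effects = points[:, 1] - points[:, 0]

# Absolute consistency: mean pairwise distance between effects,
# in the same units as the outcome (smaller = more consistent).
pairwise = np.abs(effects[:, None] - effects[None, :])
abs_consistency = pairwise[np.triu_indices(len(effects), k=1)].mean()

# Relative consistency: the same distance as a percentage of the
# mean effect.
rel_consistency = 100 * abs_consistency / effects.mean()

print(round(abs_consistency, 2))  # → 0.67 (outcome units)
print(round(rel_consistency, 1))  # → 13.3 (percent)
```

The same template applies to other summary measures: replacing the means with ordinary least squares slopes or standard deviations yields consistency assessments for trend and variability, respectively.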


2019 ◽  
Author(s):  
Rumen Manolov

The lack of consensus regarding the most appropriate analytical techniques for single-case experimental designs data requires justifying the choice of any specific analytical option. The current text mentions some of the arguments, provided by methodologists and statisticians, in favor of several analytical techniques. Additionally, a small-scale literature review is performed in order to explore whether and how applied researchers justify the analytical choices that they make. The review suggests that certain practices are not sufficiently explained. In order to improve the reporting of data analytical decisions, it is proposed to choose and justify the data analytical approach prior to gathering the data. As a possible justification for the data analysis plan, we propose using the expected data pattern as a basis (specifically, the expectation about an improving baseline trend and about the immediate or progressive nature of the intervention effect). Although there are multiple alternatives for single-case data analysis, the current text focuses on visual analysis and multilevel models and illustrates an application of these analytical options with real data. User-friendly software is also developed.


2020 ◽  
Author(s):  
Da-Wei Zhang ◽  
Stuart J. Johnstone ◽  
Hui Li ◽  
Xiangsheng Li ◽  
Li Sun

The current study used behavioral and electroencephalographic measures to compare the transferability of cognitive training (CT), neurofeedback training (NFT), and CT combined with NFT in children with AD/HD. Following a multiple-baseline single-case experimental design, twelve children were randomized to a training condition. Each child completed a baseline phase, followed by an intervention phase consisting of 20 sessions of at-home training. Tau-U analysis and standardized visual analysis were adopted to detect effects. CT improved inhibitory function, and NFT was associated with improved alpha activity and working memory. The combined condition, which involved a reduced 'dose' of both CT and NFT, did not show any improvements. None of the three conditions alleviated AD/HD symptoms. While CT and NFT may have near transfer effects, considering the lack of improvement in symptoms, this study does not support CT and NFT on their own as a treatment for children with AD/HD.
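The core of the Tau-U analysis mentioned above is a nonoverlap comparison of all baseline–intervention data pairs. A minimal sketch of this basic Tau component on fictitious scores follows (Tau-U proper additionally corrects for baseline trend, which is omitted here):

```python
import numpy as np

# Fictitious baseline (A) and intervention (B) scores for one child,
# where higher scores indicate improvement.
A = np.array([3, 4, 3, 5])
B = np.array([6, 5, 7, 8, 6])

# Compare every A-B pair: improvements minus deteriorations,
# scaled by the total number of pairs (ties contribute zero).
diff = B[None, :] - A[:, None]  # all nA x nB pairwise contrasts
tau = (np.sum(diff > 0) - np.sum(diff < 0)) / diff.size

print(round(tau, 3))  # → 0.95
```

Values near 1 indicate near-complete nonoverlap between phases (a strong effect), values near 0 indicate chance-level overlap, and negative values indicate deterioration.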

