Estimating effect size with respect to variance in baseline to treatment phases of single-case experimental designs: A Bayesian simulation study

2020 ◽  
Vol 14 (1-2) ◽  
pp. 69-81
Author(s):  
Lucy Barnard-Brak ◽  
Laci Watkins ◽  
David Richman


2019 ◽  
Vol 44 (4) ◽  
pp. 518-551 ◽  
Author(s):  
René Tanious ◽  
Tamal Kumar De ◽  
Bart Michiels ◽  
Wim Van den Noortgate ◽  
Patrick Onghena

Previous research has introduced several effect size measures (ESMs) to quantify data aspects of single-case experimental designs (SCEDs): level, trend, variability, overlap, and immediacy. In the current article, we extend the existing literature by introducing two methods for quantifying consistency in single-case A-B-A-B phase designs. The first method assesses the consistency of data patterns across phases implementing the same condition, called CONsistency of DAta Patterns (CONDAP). The second measure assesses the consistency of the five other data aspects when changing from baseline to experimental phase, called CONsistency of the EFFects (CONEFF). We illustrate the calculation of both measures for four A-B-A-B phase designs from published literature and demonstrate how CONDAP and CONEFF can supplement visual analysis of SCED data. Finally, we discuss directions for future research.
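The five data aspects named above (level, trend, variability, overlap, and immediacy) each admit simple quantitative summaries. The sketch below uses common textbook formulas for one A-B phase pair as a rough illustration only; these are not the CONDAP or CONEFF computations defined in the article.

```python
import statistics as st

def phase_aspects(a, b):
    """Textbook-style summaries of the five data aspects for one A-B
    phase pair. Illustrative formulas only, NOT the CONDAP/CONEFF
    computations from the article."""
    def slope(y):
        # Ordinary least-squares slope of the data against session number.
        n = len(y)
        mx, my = (n - 1) / 2, st.fmean(y)
        sxx = sum((i - mx) ** 2 for i in range(n))
        return sum((i - mx) * (yi - my) for i, yi in enumerate(y)) / sxx

    return {
        # Level: difference in phase means.
        "level_change": st.fmean(b) - st.fmean(a),
        # Trend: difference in within-phase slopes.
        "trend_change": slope(b) - slope(a),
        # Variability: ratio of phase standard deviations.
        "variability_ratio": st.pstdev(b) / st.pstdev(a),
        # Overlap: share of (A, B) pairs with a higher B value.
        "overlap": sum(bi > ai for ai in a for bi in b) / (len(a) * len(b)),
        # Immediacy: mean of first 3 B points minus mean of last 3 A points.
        "immediacy": st.fmean(b[:3]) - st.fmean(a[-3:]),
    }
```

Consistency measures such as CONDAP then ask how similar these within-phase patterns are across phases implementing the same condition.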


2020 ◽  
Vol 14 (1-2) ◽  
pp. 28-51 ◽  
Author(s):  
Mariola Moeyaert ◽  
Diana Akhmedjanova ◽  
John Ferron ◽  
S. Natasha Beretvas ◽  
Wim Van den Noortgate

2017 ◽  
Vol 60 (6S) ◽  
pp. 1739-1751 ◽  
Author(s):  
Julie L. Wambaugh ◽  
Christina Nessler ◽  
Sandra Wright ◽  
Shannon C. Mauszycki ◽  
Catharine DeLong ◽  
...  

Purpose: The purpose of this investigation was to compare the effects of schedule of practice (i.e., blocked vs. random) on outcomes of Sound Production Treatment (SPT; Wambaugh, Kalinyak-Fliszar, West, & Doyle, 1998) for speakers with chronic acquired apraxia of speech and aphasia.

Method: A combination of group and single-case experimental designs was used. Twenty participants each received SPT administered with randomized stimuli presentation (SPT-R) and SPT applied with blocked stimuli presentation (SPT-B). Treatment effects were examined with respect to accuracy of articulation as measured in treated and untreated experimental words produced during probes.

Results: All participants demonstrated improved articulation of treated items with both practice schedules. Effect sizes were calculated to estimate the magnitude of change for treated and untreated items by treatment condition. No significant differences were found between SPT-R and SPT-B with respect to effect size. Percent change over the highest baseline performance was also calculated to provide a clinically relevant indication of improvement. Change scores associated with SPT-R were significantly higher than those for SPT-B for treated items but not for untreated items.

Conclusion: SPT can result in improved articulation regardless of schedule of practice. However, SPT-R may result in greater gains for treated items.

Supplemental Materials: https://doi.org/10.23641/asha.5116831
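The "percent change over the highest baseline performance" statistic has a straightforward form: how far the treatment-phase level rises above the best baseline probe, expressed as a percentage of that probe. A minimal sketch; using the treatment-phase mean as the level is an assumption, since the abstract does not specify which treatment probes enter the numerator.

```python
def percent_change(baseline, treatment):
    """Percent change over the highest baseline performance.
    Treatment level is taken as the phase mean (an assumption;
    the abstract does not specify the exact probe-averaging choice)."""
    highest = max(baseline)
    treatment_level = sum(treatment) / len(treatment)
    return 100.0 * (treatment_level - highest) / highest
```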


2021 ◽  
Author(s):  
Orhan Aydin ◽  
René Tanious

Visual analysis and nonoverlap-based effect sizes are the predominant methods for analyzing single-case experimental designs (SCEDs). Although popular, these analytical methods have certain limitations. In this study, a new effect size calculation model for SCEDs, named performance criteria-based effect size (PCES), is proposed to address the limitations of four nonoverlap-based effect size measures that are widely accepted in the literature and blend well with visual analysis. In the field test of PCES, actual data from published studies were utilized, and the relationship between PCES, visual analysis, and the four nonoverlap-based methods was examined. In determining the data to be used in the field test, 1,012 tiers (AB phases) were identified from the four journals with the highest frequency of SCED studies, published between 2015 and 2019. The findings revealed a weak to moderate relationship between PCES and the nonoverlap-based methods, owing to PCES's focus on performance criteria. Although PCES has some weaknesses, it promises to address the issues that can arise with nonoverlap-based methods, to use quantitative data to determine whether socially significant changes in behavior have occurred, and to complement visual analysis.
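Nonoverlap of All Pairs (NAP) is a typical example of the nonoverlap-based measures discussed here (whether it is among the study's four is not stated in the abstract). A minimal sketch of its standard computation, the share of all baseline-treatment data-point comparisons showing improvement, with ties counted as half:

```python
def nap(a, b, increase_is_improvement=True):
    """Nonoverlap of All Pairs (NAP): share of all (A, B) data-point
    comparisons that show improvement, counting ties as half a win."""
    if not increase_is_improvement:
        # For behaviors targeted for reduction, flip the direction.
        a, b = [-x for x in a], [-x for x in b]
    wins = sum(1 for ai in a for bi in b if bi > ai)
    ties = sum(1 for ai in a for bi in b if bi == ai)
    return (wins + 0.5 * ties) / (len(a) * len(b))
```

A performance-criteria-based measure like PCES instead judges data points against a clinically meaningful criterion, which is why its values correlate only weakly to moderately with pairwise-overlap counts like this one.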


2020 ◽  
Author(s):  
Orhan Aydin

To date, several effect size measurement methods have been proposed to determine the effect sizes of single-case experimental designs (SCEDs) based on probability, means, or overlap. All of these methods have considerable limitations. In this study, a new effect size calculation model for SCEDs, named performance criteria-based effect size (PCES), is proposed to address the limitations of four nonoverlap-based effect size measures that are widely accepted in the literature and blend well with visual analysis. In the field test of PCES, real data from published studies were utilized, and the relationship between PCES, visual analysis, and the four nonoverlap-based methods was examined. In determining the data to be used in the field test, 1,012 tiers (AB phases) were identified from issues of the four journals with the highest frequency of SCED studies published in the preceding five years. The findings revealed a weak to moderate relationship between PCES and the nonoverlap-based methods, owing to PCES's focus on performance criteria. Although PCES has some weaknesses, it shows promise in addressing the issues that can arise with nonoverlap-based methods, in using quantitative data to determine whether socially important changes in behavior have occurred, and in complementing visual analysis.


Healthcare ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 143
Author(s):  
René Tanious ◽  
Patrick Onghena

Health problems are often idiosyncratic in nature and therefore require individualized diagnosis and treatment. In this paper, we show how single-case experimental designs (SCEDs) can meet the requirement to find and evaluate individually tailored treatments. We give a basic introduction to the methodology of SCEDs and provide an overview of the available design options. For each design, we show how an element of randomization can be incorporated to increase the internal and statistical conclusion validity and how the obtained data can be analyzed using visual tools, effect size measures, and randomization inference. We illustrate each design and data analysis technique using applied data sets from the healthcare literature.
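The randomization-inference idea mentioned above can be illustrated with the simplest case: an AB design whose intervention start point is selected at random from the admissible session numbers. A sketch only; the mean-difference test statistic and the minimum phase length are illustrative choices, not taken from the paper's examples.

```python
import statistics as st

def ab_randomization_test(data, observed_start, min_phase=3):
    """Randomization test for an AB design whose intervention start
    point was chosen at random. Statistic and min_phase are
    illustrative choices, not taken from the paper."""
    def stat(start):  # mean(B) - mean(A) for a given start of phase B
        return st.fmean(data[start:]) - st.fmean(data[:start])

    observed = stat(observed_start)
    # All start points the randomization scheme could have produced,
    # leaving at least min_phase observations in each phase.
    starts = range(min_phase, len(data) - min_phase + 1)
    # One-sided p-value: share of admissible starts whose statistic is
    # at least as extreme as the observed one.
    p = sum(1 for s in starts if stat(s) >= observed) / len(starts)
    return observed, p
```

The p-value is exact under the design's own randomization: if the treatment has no effect, every admissible start point was equally likely to produce the most extreme statistic.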


2018 ◽  
Author(s):  
Prathiba Natesan ◽  
Smita Mehta

Single-case experimental designs (SCEDs) have become an indispensable methodology where randomized controlled trials may be impossible or even inappropriate. However, the nature of SCED data presents challenges for both visual and statistical analyses. Small sample sizes, autocorrelation, data types, and design types render many parametric statistical analyses and maximum likelihood approaches ineffective. The presence of autocorrelation decreases interrater reliability in visual analysis. The purpose of the present study is to demonstrate a newly developed model, the Bayesian unknown change-point (BUCP) model, which overcomes all of the above-mentioned data-analytic challenges. This is the first study to formulate and demonstrate a rate ratio effect size for autocorrelated data, which had remained an open question in SCED research until now. This expository study also compares and contrasts the results from the BUCP model with visual analysis, and the rate ratio effect size with the nonoverlap of all pairs (NAP) effect size. Data from a comprehensive behavioral intervention are used for the demonstration.
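The estimand behind a rate ratio effect size is simply the treatment-phase mean count over the baseline-phase mean count. The BUCP model estimates this from posterior draws while modeling autocorrelation and the unknown change point; the plain sample version below is only a rough illustration of the quantity being estimated.

```python
import statistics as st

def rate_ratio(a, b):
    """Sample rate ratio: treatment-phase mean count over baseline-phase
    mean count. The BUCP model instead estimates this from posterior
    draws while modeling autocorrelation; this plain sample version is
    only an illustration of the estimand."""
    return st.fmean(b) / st.fmean(a)
```

A value of 2.0 would mean the behavior occurred at twice the baseline rate during treatment; NAP, by contrast, summarizes pairwise overlap rather than the size of the rate change.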


2013 ◽  
Vol 82 (3) ◽  
pp. 358-374 ◽  
Author(s):  
Maaike Ugille ◽  
Mariola Moeyaert ◽  
S. Natasha Beretvas ◽  
John M. Ferron ◽  
Wim Van den Noortgate
