Integrating Visual and Bayesian Statistical Analyses in Single Case Experimental Research to Evaluate the Effectiveness and Magnitude of a Comprehensive Behavioral Intervention

2018 ◽  
Author(s):  
Prathiba Natesan ◽  
Smita Mehta

Single case experimental designs (SCEDs) have become an indispensable methodology where randomized controlled trials may be impossible or even inappropriate. However, the nature of SCED data presents challenges for both visual and statistical analyses. Small sample sizes, autocorrelation, data types, and design types render many parametric statistical analyses and maximum likelihood approaches ineffective. The presence of autocorrelation also decreases interrater reliability in visual analysis. The purpose of the present study is to demonstrate the newly developed Bayesian unknown change-point (BUCP) model, which overcomes all of the above-mentioned data-analytic challenges. This is the first study to formulate and demonstrate a rate ratio effect size for autocorrelated data, which had remained an open question in SCED research until now. This expository study also compares and contrasts the results of the BUCP model with visual analysis, and the rate ratio effect size with the nonoverlap of all pairs (NAP) effect size. Data from a comprehensive behavioral intervention are used for the demonstration.
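As a point of reference for the NAP comparison mentioned above, nonoverlap of all pairs is simple to compute directly. The sketch below is a minimal illustration on hypothetical data, assuming higher scores indicate improvement; it is not the authors' BUCP implementation, which requires full Bayesian estimation of the change-point model.

```python
import numpy as np

def nap(baseline, treatment):
    """Nonoverlap of All Pairs (NAP): the share of all baseline-treatment
    pairs in which the treatment observation is higher, with ties counted
    as half. Assumes higher values indicate improvement."""
    a = np.asarray(baseline, dtype=float)
    b = np.asarray(treatment, dtype=float)
    wins = (a[:, None] < b[None, :]).sum()   # treatment point exceeds baseline point
    ties = (a[:, None] == b[None, :]).sum()
    return (wins + 0.5 * ties) / (a.size * b.size)

# Hypothetical data for one A-B phase pair
print(nap([3, 4, 2, 5], [6, 8, 7, 9]))  # 1.0 -> complete nonoverlap
```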

2020 ◽  
pp. 019874292093070 ◽  
Author(s):  
Prathiba Natesan Batley ◽  
Smita Shukla Mehta ◽  
John H. Hitchcock

Single case experimental design (SCED) is an indispensable methodology for evaluating intervention efficacy. Despite long-standing success with using visual analysis to evaluate SCED data, this method has limited utility for conducting meta-analyses. This is critical because meta-analyses, more than evidence derived from individual SCEDs, should drive practice and policy in behavioral disorders. Even when analyzing data from individual studies, there is merit to using multiple analytic methods, since statistical analyses in SCED can be challenging given small sample sizes and autocorrelated data. These complexities are exacerbated with count data, which are common in SCEDs. Bayesian methods can be used to develop new statistical procedures that may address these challenges. The purpose of the present study was to formulate a within-subject Bayesian rate ratio effect size (BRR) for autocorrelated count data that obviates the need for small-sample corrections. This effect size is the first step toward building a between-subject rate ratio that can be used for meta-analyses. We illustrate this within-subject effect size using real data from an ABAB design and provide code for practitioners who may want to compute BRR.
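The authors supply code with the article; the snippet below is only a simplified stand-in that conveys the idea of a Bayesian rate ratio for count data. It uses a conjugate Gamma-Poisson model with illustrative priors and, unlike the published BRR, ignores autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def posterior_rate_samples(counts, a0=0.5, b0=0.5, n_draws=10_000):
    """Posterior draws of a Poisson rate under a Gamma(a0, b0) prior:
    the posterior is Gamma(a0 + sum(y), b0 + len(y)). Observations are
    treated as independent here, unlike in the autocorrelated BRR model."""
    y = np.asarray(counts)
    return rng.gamma(a0 + y.sum(), 1.0 / (b0 + y.size), size=n_draws)

# Hypothetical problem-behavior counts from an ABAB design,
# pooling the two baseline (A) phases and the two treatment (B) phases
baseline = [7, 9, 8, 10, 7, 8]
treatment = [4, 3, 2, 3, 2, 1]

rr = posterior_rate_samples(treatment) / posterior_rate_samples(baseline)
print(f"posterior median rate ratio: {np.median(rr):.2f}")
print("95% credible interval:", np.percentile(rr, [2.5, 97.5]).round(2))
```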


Methodology ◽  
2010 ◽  
Vol 6 (2) ◽  
pp. 49-58 ◽  
Author(s):  
Rumen Manolov ◽  
Antonio Solanas ◽  
David Leiva

Effect size indices are indispensable for carrying out meta-analyses and can also be seen as an alternative for making decisions about the effectiveness of a treatment in an individual applied study. The desirable features of procedures for quantifying the magnitude of an intervention effect include educational/clinical meaningfulness, ease of calculation, insensitivity to autocorrelation, and low false-alarm and miss rates. Three effect size indices related to visual analysis are compared according to the aforementioned criteria. The comparison is made by means of data sets with known parameters: degree of serial dependence, presence or absence of general trend, and changes in level and/or slope. The percentage of nonoverlapping data showed the highest discrimination between data sets with and without an intervention effect. When autocorrelation or trend is present, the percentage of data points exceeding the median may be a better option for quantifying the effectiveness of a psychological treatment.
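For reference, the two indices the abstract singles out are easy to compute. The sketch below is a minimal illustration on hypothetical data, assuming higher values indicate improvement (for a reductive target behavior, the inequalities would be reversed).

```python
import numpy as np

def pnd(baseline, treatment):
    """Percentage of Nonoverlapping Data: share of treatment points
    exceeding the single best (here: highest) baseline point."""
    return 100 * np.mean(np.asarray(treatment) > np.max(baseline))

def pem(baseline, treatment):
    """Percentage of data points Exceeding the baseline Median,
    with ties counted as half a point."""
    t = np.asarray(treatment)
    med = np.median(baseline)
    return 100 * (np.sum(t > med) + 0.5 * np.sum(t == med)) / t.size

a, b = [2, 3, 5, 4], [6, 7, 5, 8, 9]
print(pnd(a, b))  # 80.0  -> the overlapping point 5 is penalized
print(pem(a, b))  # 100.0 -> all treatment points exceed the baseline median
```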


2021 ◽  
Vol 11 ◽  
Author(s):  
Prathiba Natesan Batley ◽  
Ratna Nandakumar ◽  
Jayme M. Palka ◽  
Pragya Shrestha

Recently, there has been increased interest in developing statistical methodologies for analyzing single case experimental design (SCED) data to supplement visual analysis. Some of these are simulation-driven, such as Bayesian methods, which can compensate for the small sample sizes that are a main challenge of SCEDs. The present study compared two simulation-driven approaches, the Bayesian unknown change-point (BUCP) model and simulation modeling analysis (SMA), on three real datasets that exhibit "clear" immediacy, "unclear" immediacy, and delayed effects. Although SMA estimates can be used to answer some aspects of the functional relationship between the independent and outcome variables, they cannot address immediacy or provide an effect size estimate that accounts for autocorrelation, as required by the What Works Clearinghouse (WWC) Standards. BUCP overcomes these drawbacks of SMA. In the final analysis, it is recommended that both visual and statistical analyses be conducted for a thorough analysis of SCEDs.
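To convey what "unknown change-point" means in practice, here is a deliberately simplified, single change-point illustration on hypothetical count data. It marginalizes a conjugate Gamma-Poisson model over every possible change-point location; the full BUCP model additionally handles autocorrelation and is estimated with MCMC, so treat this only as a sketch of the idea.

```python
import numpy as np
from scipy.special import gammaln

def changepoint_posterior(y, a=0.5, b=0.5):
    """Posterior over the location of a single change point in a count
    series: one Poisson rate before and one after the change, each with
    a conjugate Gamma(a, b) prior, and a uniform prior over locations.
    Factorial terms are identical across locations and cancel."""
    y = np.asarray(y)

    def seg_log_marginal(seg):
        S, n = seg.sum(), seg.size
        return a * np.log(b) - gammaln(a) + gammaln(a + S) - (a + S) * np.log(b + n)

    logp = np.array([seg_log_marginal(y[:k]) + seg_log_marginal(y[k:])
                     for k in range(1, y.size)])
    p = np.exp(logp - logp.max())
    return p / p.sum()  # p[k-1] = Pr(change occurs after observation k)

y = [8, 9, 7, 10, 8, 3, 2, 4, 2, 3]       # hypothetical A-B series
print(changepoint_posterior(y).round(3))  # mass concentrates after observation 5
```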


2019 ◽  
Vol 44 (4) ◽  
pp. 518-551 ◽  
Author(s):  
René Tanious ◽  
Tamal Kumar De ◽  
Bart Michiels ◽  
Wim Van den Noortgate ◽  
Patrick Onghena

Previous research has introduced several effect size measures (ESMs) to quantify data aspects of single-case experimental designs (SCEDs): level, trend, variability, overlap, and immediacy. In the current article, we extend the existing literature by introducing two methods for quantifying consistency in single-case A-B-A-B phase designs. The first method, CONsistency of DAta Patterns (CONDAP), assesses the consistency of data patterns across phases implementing the same condition. The second measure, CONsistency of the EFFects (CONEFF), assesses the consistency of the five other data aspects when changing from the baseline to the experimental phase. We illustrate the calculation of both measures for four A-B-A-B phase designs from the published literature and demonstrate how CONDAP and CONEFF can supplement visual analysis of SCED data. Finally, we discuss directions for future research.
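The abstract does not reproduce the CONDAP and CONEFF formulas, so the sketch below is purely an illustration of the underlying idea, not the authors' definition: it scores the consistency of two same-condition phases as the mean absolute session-by-session difference, truncated to the shorter phase.

```python
import numpy as np

def phase_consistency(phase1, phase2):
    """Illustrative consistency score (not the published CONDAP): mean
    absolute session-by-session difference between two phases that
    implement the same condition. Smaller values = more consistent."""
    n = min(len(phase1), len(phase2))
    p1 = np.asarray(phase1[:n], dtype=float)
    p2 = np.asarray(phase2[:n], dtype=float)
    return np.mean(np.abs(p1 - p2))

# Hypothetical A-B-A-B data: compare the two baseline (A) phases
print(phase_consistency([2, 3, 2, 4], [3, 3, 2, 5]))  # 0.5
```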


2009 ◽  
Vol 31 (4) ◽  
pp. 500-506 ◽  
Author(s):  
Robert Slavin ◽  
Dewi Smith

Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of the Best Evidence Encyclopedia. As predicted, there was a significant negative correlation between sample size and effect size. The differences in effect sizes between small and large experiments were much greater than those between randomized and matched experiments. Explanations for the effects of sample size on effect size are discussed.
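The core analysis is straightforward to reproduce on any collection of study summaries. The snippet below runs it on hypothetical (sample size, effect size) pairs, not on the Best Evidence Encyclopedia data.

```python
import numpy as np

# Hypothetical (sample size, effect size) pairs from a set of studies
n = np.array([40, 60, 120, 250, 500, 1200, 3000])
d = np.array([0.45, 0.38, 0.30, 0.22, 0.15, 0.12, 0.08])

r = np.corrcoef(n, d)[0, 1]
r_log = np.corrcoef(np.log(n), d)[0, 1]  # log scale tames the skew in n
print(f"r = {r:.2f}, r(log n) = {r_log:.2f}")  # both strongly negative
```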


2018 ◽  
Vol 43 (3) ◽  
pp. 361-388 ◽  
Author(s):  
Katie Wolfe ◽  
Tammiee S. Dickenson ◽  
Bridget Miller ◽  
Kathleen V. McGrath

A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; the Hedges, Pustejovsky, and Shadish (HPS) effect size; and the between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
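Of the four statistics compared, the nonoverlap-based ones lend themselves to a short sketch. The snippet below computes the basic A-versus-B Tau, the nonoverlap core of Tau-U without the baseline-trend correction, on hypothetical data and assuming higher values indicate improvement.

```python
import numpy as np

def tau_ab(baseline, treatment):
    """Basic Tau for an A-B comparison: (improving pairs - deteriorating
    pairs) / all pairs. This is Tau-U's nonoverlap core; the published
    Tau-U additionally corrects for baseline trend."""
    a = np.asarray(baseline, dtype=float)[:, None]
    b = np.asarray(treatment, dtype=float)[None, :]
    return float((b > a).sum() - (b < a).sum()) / (a.size * b.size)

print(tau_ab([2, 4, 3, 5], [6, 5, 7, 8]))  # 0.9375 -> strong nonoverlap
```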


2019 ◽  
Author(s):  
Stefan Lenz ◽  
Moritz Hess ◽  
Harald Binder

Deep Boltzmann machines (DBMs) are models for unsupervised learning in the field of artificial intelligence, promising to be useful for dimensionality reduction and pattern detection in clinical and genomic data. Multimodal and partitioned DBMs alleviate the problem of small sample sizes and make it possible to combine different input data types in one DBM model. We present the package "BoltzmannMachines" for the Julia programming language, which makes this model class available for practical use in working with biomedical data.

Availability: Notebook with example data: http://github.com/stefan-m-lenz/BMs4BInf2019; Julia package: http://github.com/stefan-m-lenz/BoltzmannMachines.jl


2021 ◽  
Author(s):  
Orhan Aydin ◽  
René Tanious

Visual analysis and nonoverlap-based effect sizes are predominantly used in analyzing single case experimental designs (SCEDs). Although they are popular analytical methods for SCEDs, they have certain limitations. In this study, a new effect size calculation model for SCEDs, named the performance criteria-based effect size (PCES), is proposed in light of the limitations of four nonoverlap-based effect size measures that are widely accepted in the literature and blend well with visual analysis. In a field test of PCES, actual data from published studies were utilized, and the relationship between PCES, visual analysis, and the four nonoverlap-based methods was examined. To determine the data for the field test, 1,012 tiers (AB phases) were identified from the four journals with the highest frequency of SCED studies published between 2015 and 2019. The findings revealed a weak to moderate relationship between PCES and the nonoverlap-based methods, owing to PCES's focus on performance criteria. Although PCES has some weaknesses, it promises to eliminate several sources of error in nonoverlap-based methods, to use quantitative data in determining socially significant changes in behavior, and to complement visual analysis.

