Assessing Functional Relations in Single-Case Designs

2014 · Vol 38 (6) · pp. 878-913
Author(s): Rumen Manolov, Vicenta Sierra, Antonio Solanas, Juan Botella

In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of which stimuli and reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications include comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns in terms of the presence and type of effect and comprise ABAB and multiple-baseline designs. Although none of the techniques is completely flawless in terms of detecting a functional relation only when it is present and not when it is absent, an option based on projecting the split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to use as a weight the inverse of the data variability measure used in the quantification assessing the functional relation. We offer easy-to-use code for open-source software implementing some of the quantifications.
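The split-middle projection with exploratory-data-analysis variability bands singled out in this abstract can be sketched roughly as follows. This is a minimal Python illustration, not the authors' published code; the function names, the box-plot-style 1.5 × IQR band, and the sample values are assumptions for demonstration only.

```python
import numpy as np

def split_middle_trend(baseline):
    """Fit a split-middle trend line through the medians of the first and
    second halves of the baseline phase."""
    x = np.arange(len(baseline))
    half = len(baseline) // 2
    x1, y1 = np.median(x[:half]), np.median(baseline[:half])
    x2, y2 = np.median(x[half:]), np.median(baseline[half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def beyond_projected_band(baseline, treatment, k=1.5):
    """Project the split-middle trend into the treatment phase and flag
    treatment points falling outside a variability band built from the
    interquartile range of the baseline residuals (box-plot-style fences)."""
    slope, intercept = split_middle_trend(baseline)
    residuals = baseline - (slope * np.arange(len(baseline)) + intercept)
    band = k * (np.percentile(residuals, 75) - np.percentile(residuals, 25))
    x_treat = np.arange(len(baseline), len(baseline) + len(treatment))
    projected = slope * x_treat + intercept
    return np.abs(treatment - projected) > band

# Hypothetical data: a stable baseline followed by a clear level increase.
baseline = np.array([3.0, 4.0, 3.5, 4.5, 4.0, 5.0])
treatment = np.array([7.0, 8.0, 9.0, 8.5, 9.5, 10.0])
print(beyond_projected_band(baseline, treatment))  # all points flagged
```

Treatment points falling outside the band projected from the baseline trend are the kind of evidence such a quantification counts toward a functional relation.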

2009 · Vol 12 (2) · pp. 823-832
Author(s): Verônica M. Ximenes, Rumen Manolov, Antonio Solanas, Vicenç Quera

Visual inspection remains the most frequently applied method for detecting treatment effects in single-case designs. The advantages and limitations of visual inference are discussed here in relation to other procedures for assessing intervention effectiveness. The first part of the paper reviews previous research on visual analysis, paying special attention to the validation of visual analysts' decisions, inter-judge agreement, and false alarm and omission rates. The most relevant factors affecting visual inspection (i.e., effect size, autocorrelation, data variability, and analysts' expertise) are highlighted and incorporated into an empirical simulation study with the aim of providing further evidence about the reliability of visual analysis. Our results concur with previous studies that have reported a relationship between serial dependence and increased Type I error rates. Participants with greater experience appeared to be more conservative and used more consistent criteria when assessing graphed data. Nonetheless, the decisions made by both professionals and students did not sufficiently match the simulated data features, and we also found low intra-judge agreement, suggesting that visual inspection should be complemented by other methods when assessing treatment effectiveness.
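To make the simulation setup concrete, here is a rough sketch of how autocorrelated two-phase (AB) series of the kind used in such studies can be generated, and of how serial dependence inflates false alarms under a naive decision rule. The AR(1) model, the fixed threshold of 1.0, and all parameter values are illustrative assumptions, not the study's actual data-generating procedure.

```python
import numpy as np

def simulate_ab_series(n_a=10, n_b=10, effect=0.0, phi=0.3, rng=None):
    """Generate one AB series with lag-one autocorrelation phi and an
    optional level change added to the B (intervention) phase."""
    if rng is None:
        rng = np.random.default_rng()
    e = rng.normal(size=n_a + n_b)
    y = np.empty(n_a + n_b)
    y[0] = e[0]
    for t in range(1, len(y)):
        y[t] = phi * y[t - 1] + e[t]   # AR(1) serial dependence
    y[n_a:] += effect                  # level change in the treatment phase
    return y[:n_a], y[n_a:]

# How often does a naive mean-difference rule flag a non-existent effect?
for phi in (0.0, 0.6):
    rng = np.random.default_rng(1)
    hits = [np.mean(b) - np.mean(a) > 1.0   # crude threshold, illustration only
            for a, b in (simulate_ab_series(phi=phi, rng=rng) for _ in range(2000))]
    print(f"phi={phi}: false-alarm rate {np.mean(hits):.3f}")
```

With positive autocorrelation the same decision threshold produces noticeably more false alarms, which mirrors the Type I inflation reported for visual analysts.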


2018 · Vol 40 (3) · pp. 131-149
Author(s): Elizabeth A. Stevens, Sunyoung Park, Sharon Vaughn

This systematic review examines the effects of summarizing and main idea interventions on the reading comprehension outcomes of struggling readers in Grades 3 through 12. A comprehensive search identified 30 studies published in peer-reviewed journals between 1978 and 2016. Studies included struggling reader participants in Grades 3 through 12; targeted summarizing or main idea instruction; used an experimental, quasi-experimental, or single-case design; and included a reading comprehension outcome. A meta-analysis of 23 group design studies resulted in a statistically significant mean effect of 0.97. Group size, number of sessions, grade level, and publication year did not moderate treatment effect. Visual analysis of six single-case designs yielded strong evidence for retell measures and a range of evidence for short-answer comprehension measures. Findings suggest that main idea and summarizing instruction may improve struggling readers' main idea identification and reading comprehension. Limitations include the lack of standardized measures and unreported or inconsistent descriptions of the counterfactual.


2020
Author(s): Marc J. Lanovaz, Jordan D. Bailey

Since the start of the 21st century, few advances have had as far-reaching consequences in science as the widespread adoption of artificial neural networks in fields as diverse as fundamental physics, clinical medicine, and social networking. In behavior analysis, one promising area for the adoption of neural networks involves the analysis of single-case graphs. However, few behavior analysts have any training in the use of these methods, which may limit progress in this area. The purpose of our tutorial is to address this issue by providing a step-by-step description of how to use artificial neural networks to improve the analysis of single-case graphs. To this end, we trained a new model using simulated data to analyze multiple-baseline graphs and compared its outcomes to those of visual analysis on a previously published dataset. In addition to showing that artificial neural networks may outperform visual analysis, the tutorial provides information to facilitate the replication and extension of this line of work to other datasets and designs.
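As an illustration of the general idea only (not the authors' model, architecture, or data-generating procedure), a small feed-forward network can be trained on simulated AB series labeled as showing or not showing an effect. The sketch below uses scikit-learn's MLPClassifier with invented simulation parameters.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate_series(effect, n_a=5, n_b=5):
    """One AB series: random noise in baseline, optional level shift in treatment."""
    y = rng.normal(size=n_a + n_b)
    y[n_a:] += effect
    return y

# Label 1 = simulated effect present, 0 = absent (purely illustrative simulation).
effects = rng.choice([0.0, 2.0], size=2000)
X = np.array([simulate_series(e) for e in effects])
y = (effects > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The same pattern (simulate graphs with known effects, train a classifier, evaluate on held-out or previously published data) underlies the workflow the tutorial describes for multiple-baseline graphs.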


Methodology · 2010 · Vol 6 (2) · pp. 49-58
Author(s): Rumen Manolov, Antonio Solanas, David Leiva

Effect size indices are indispensable for carrying out meta-analyses and can also be seen as an alternative for making decisions about the effectiveness of a treatment in an individual applied study. The desirable features of procedures for quantifying the magnitude of an intervention effect include educational/clinical meaningfulness, ease of calculation, insensitivity to autocorrelation, and low false alarm and miss rates. Three effect size indices related to visual analysis are compared according to the aforementioned criteria. The comparison is made by means of data sets with known parameters: degree of serial dependence, presence or absence of general trend, and changes in level and/or in slope. The percentage of nonoverlapping data (PND) showed the highest discrimination between data sets with and without an intervention effect. In cases where autocorrelation or trend is present, the percentage of data points exceeding the median (PEM) may be a better option for quantifying the effectiveness of a psychological treatment.
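For reference, the two indices named here have simple standard definitions. The sketch below is an illustrative implementation assuming an expected increase in the target behavior, not the code used in the study; the data values are hypothetical.

```python
import numpy as np

def pnd(baseline, treatment, expected_increase=True):
    """Percentage of nonoverlapping data: share of treatment points beyond
    the most extreme baseline point in the expected direction."""
    bound = np.max(baseline) if expected_increase else np.min(baseline)
    beyond = treatment > bound if expected_increase else treatment < bound
    return 100 * np.mean(beyond)

def pem(baseline, treatment, expected_increase=True):
    """Percentage of treatment data points exceeding the baseline median."""
    median = np.median(baseline)
    beyond = treatment > median if expected_increase else treatment < median
    return 100 * np.mean(beyond)

# Hypothetical data for illustration.
baseline = np.array([2, 3, 4, 3, 5])
treatment = np.array([5, 6, 7, 6, 8])
print(pnd(baseline, treatment), pem(baseline, treatment))  # 80.0 100.0
```

Because PEM compares treatment points with the baseline median rather than the baseline extreme, a single outlying baseline observation does not cap the index, which is one reason it can behave better when trend or autocorrelation is present.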


2017 · Vol 38 (6) · pp. 387-393
Author(s): Rumen Manolov, Georgina Guilera, Antonio Solanas

The current text comments on three systematic reviews published in the special section Issues and Advances in the Systematic Review of Single-Case Research: An Update and Exemplars. The commentary addresses the need to combine the assessment of the methodological quality of the studies included in systematic reviews, the assessment of the presence of functional relations via visual analysis following objective rules, and the quantification of the magnitude of effects, providing meaningful information. Although the exemplars were not required to follow specific guidelines for conduct and reporting, we applied an existing methodological quality checklist for systematic reviews and meta-analyses. Finally, we point to specific signs of progress in performing systematic reviews of single-case design studies, as identified in the three exemplars, and we also suggest some issues requiring further research and discussion.


2017 · Vol 43 (1) · pp. 115-131
Author(s): Marc J. Lanovaz, Patrick Cardinal, Mary Francis

Although visual inspection remains common in the analysis of single-case designs, the lack of agreement between raters is an issue that may seriously compromise its validity. Thus, the purpose of our study was to develop and examine the properties of a simple structured criterion to supplement the visual analysis of alternating-treatment designs. To this end, we generated simulated data sets with varying numbers of points, numbers of conditions, effect sizes, and autocorrelations, and then measured the Type I error rates and power produced by the visual structured criterion (VSC) and by permutation analyses. We also validated the Type I error rate results using nonsimulated data. Overall, our results indicate that using the VSC as a supplement for the analysis of systematically alternating-treatment designs with at least five points per condition generally provides adequate control over Type I error rates and sufficient power to detect most behavior changes.
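The permutation analysis used as a comparator can be sketched as a label-shuffling test on the condition means. Note that this unrestricted shuffling is an illustrative simplification (a randomization test for a systematically alternating design would typically restrict the admissible permutations), and the VSC's structured visual rules are not reproduced here; the data values are hypothetical.

```python
import numpy as np

def permutation_test(cond_a, cond_b, n_perm=10000, rng=None):
    """Approximate two-sided permutation test on the difference between
    condition means, shuffling condition labels across sessions."""
    if rng is None:
        rng = np.random.default_rng()
    pooled = np.concatenate([cond_a, cond_b])
    n_a = len(cond_a)
    observed = abs(np.mean(cond_a) - np.mean(cond_b))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += abs(np.mean(perm[:n_a]) - np.mean(perm[n_a:])) >= observed
    return count / n_perm

# Hypothetical alternating-treatment data (five sessions per condition).
cond_a = np.array([4.0, 5.0, 6.0, 5.0, 6.0])
cond_b = np.array([2.0, 3.0, 2.0, 3.0, 2.0])
print(permutation_test(cond_a, cond_b, rng=np.random.default_rng(0)))
```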


2021 · pp. 105381512110550
Author(s): Mollie J. Todt, Erin E. Barton, Jennifer R. Ledford, Gabriela N. Robinson, Emma B. Skiba

Researchers have identified effective instructional strategies for teaching peer imitation, including embedded classroom-based interventions. However, there is a dearth of strategies that have been effective for teaching generalization of imitation skills to novel contexts. Building on previous research, we examined the use of progressive time delay to increase peer imitation in the context of a play activity for four preschoolers with disabilities. We conducted preference and reinforcer assessments to identify effective reinforcers for each child prior to intervention. We conducted a multiple-baseline-across-participants design meeting contemporary single-case design standards and used visual analysis to identify a functional relation: the intervention package was associated with an increase in the participants' peer imitation in training contexts. The intervention also led to levels of peer imitation comparable to those of typically developing peers, as measured by a normative peer sample, and to generalization to novel contexts.


Author(s): Julie Q. Morrison, Anna L. Harms

This chapter consists of three case studies that illustrate how the evaluation approaches, methods, techniques, and tools presented in Chapters 1 to 5 can be translated into practice. The first case study describes an evaluation of the Dyslexia Pilot Project, a statewide multi-tier system of supports (MTSS) initiative targeting early literacy. In this evaluation, special attention was paid to evaluating the cost-effectiveness of serving students in kindergarten to grade 2 proactively. The second case study features the use of single-case designs and corresponding summary statistics to evaluate the collective impact of more than 500 academic and behavioral interventions provided within an MTSS framework as part of the annual statewide evaluation of the Ohio Internship Program in School Psychology. The third case study focuses on efforts to evaluate the fidelity of implementation for teacher teams' use of a five-step process for data-based decision making and instructional planning.


2020 · Vol 43 (4) · pp. 209-225
Author(s): Leslie Ann Bross, Jason C. Travers, Howard P. Wills, Jonathan M. Huffman, Emma K. Watson, ...

This single-case design study evaluated the effects of a video modeling (VM) intervention on the customer service skills of five young adults with autism spectrum disorder (ASD). Verbalizations of greeting, service, and closing phrases contextualized to community employment settings were the target behaviors. A systematic approach to visual analysis indicated the presence of a functional relation for all participants. Coworkers, job coaches, and supervisors successfully applied the VM intervention during the generalization condition. Maintenance probes conducted at 2 and 4 weeks indicated that most customer service skills were maintained. Results indicated that VM was also effective in enhancing the quality of interactions with customers. Implications for research and practice related to the competitive employment of young adults with ASD are discussed.


2019 · Vol 54 (2) · pp. 113-125
Author(s): Hedda Meadan, Moon Y. Chung, Michelle M. Sands, Melinda R. Snodgrass

Teaching caregivers to support their young children’s language development is recommended as an effective early language intervention, and caregiver-implemented interventions are recognized as evidence-based. However, as the natural change agents for training and coaching caregivers, early intervention (EI) service providers are in need of professional development to effectively coach caregivers to use interventions with their child. The purpose of this study was to examine the Coaching Caregivers Professional Development program (CoCare PD) in which researchers train and coach EI service providers via telepractice in caregiver coaching, a set of skills useful in nurturing partnerships with families to support caregivers’ use of evidence-based practices with their young children with disabilities. A single-case research study across four EI service providers was conducted and findings support a functional relation between training and coaching EI service providers via telepractice and providers’ use of coaching practices with families on their caseload.

