Simple dissociations for a higher-powered neuropsychology

2017 ◽  
Author(s):  
Robert D McIntosh

Neuropsychological dissociations are often investigated at the level of the single case, and formal criteria exist for detecting dissociations and sub-classifying them into ‘classical’ and ‘strong’ types. These criteria require a patient to show a frank deficit on one task (for a classical dissociation) or both tasks (for a strong dissociation), and a significantly extreme difference between tasks. I propose that only the significant between-task difference is logically necessary, and that if this simple criterion is met, the patient should be said to show a dissociation. Using Monte Carlo simulations, I demonstrate that this simplification increases the power to detect dissociations across a range of practically relevant conditions, whilst retaining excellent control over Type I error. Additional testing for frank deficits on the individual tasks provides further qualifying information, but using these outcomes to sub-classify dissociations as classical or strong may be too uncertain to guide theoretical inferences. I suggest that we should instead characterise the strength of the dissociation using a more continuous index, such as the effect size of the simple between-task difference.
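To make the simulation logic concrete, here is a minimal Monte Carlo sketch in Python of the kind of check described above. It is not the paper's procedure: it simply applies a Crawford-Howell-style single-case t-test to control-referenced between-task difference scores and estimates the false-positive rate when no true dissociation exists; the control sample size, between-task correlation, and alpha level are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def dissociation_p(case, controls):
    """Two-sided p-value for the case's task 1 minus task 2 difference vs. controls."""
    z = (np.vstack([case, controls]) - controls.mean(0)) / controls.std(0, ddof=1)
    d_case, d_ctrl = z[0, 0] - z[0, 1], z[1:, 0] - z[1:, 1]
    n = len(d_ctrl)
    t = (d_case - d_ctrl.mean()) / (d_ctrl.std(ddof=1) * np.sqrt((n + 1) / n))
    return 2 * stats.t.sf(abs(t), df=n - 1)

def false_positive_rate(n_controls=20, rho=0.5, n_sims=5000, alpha=0.05):
    cov = [[1, rho], [rho, 1]]
    hits = 0
    for _ in range(n_sims):
        sample = rng.multivariate_normal([0, 0], cov, size=n_controls + 1)
        hits += dissociation_p(sample[0], sample[1:]) < alpha
    return hits / n_sims

# With the case drawn from the same population as the controls, the estimate
# should sit near (possibly slightly above) the nominal alpha of .05.
print(false_positive_rate())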

2019 ◽  
pp. 014544551986021 ◽  
Author(s):  
Antonia R. Giannakakos ◽  
Marc J. Lanovaz

Single-case experimental designs often require extended baselines or the withdrawal of treatment, which may not be feasible or ethical in some practical settings. The quasi-experimental AB design is a potential alternative, but more research is needed on its validity. The purpose of our study was to examine the validity of using nonoverlap measures of effect size to detect changes in AB designs using simulated data. In our analyses, we determined thresholds for three effect size measures beyond which the Type I error rate would remain below 0.05 and then examined whether using these thresholds would provide sufficient power. Overall, our analyses show that some effect size measures may provide adequate control over Type I error rate and sufficient power when analyzing data from AB designs. In sum, our results suggest that practitioners may use quasi-experimental AB designs in combination with effect size to rigorously assess progress in practice.
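As a rough illustration of the approach (not the study's code), the Python sketch below computes one common nonoverlap measure, nonoverlap of all pairs (NAP), for an AB series and flags a change when it exceeds a threshold; the threshold value and data are invented for the example rather than taken from the study.

import numpy as np

def nap(baseline, treatment, increase_expected=True):
    """Nonoverlap of all pairs: share of (A, B) pairs showing improvement; ties count 0.5."""
    a = np.asarray(baseline, float)
    b = np.asarray(treatment, float)
    diffs = b[None, :] - a[:, None]      # every treatment point minus every baseline point
    if not increase_expected:
        diffs = -diffs
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

baseline = [3, 4, 3, 5, 4]
treatment = [6, 7, 5, 8, 7, 9]
THRESHOLD = 0.93                          # hypothetical cut-off, not one derived in the study
score = nap(baseline, treatment)
print(score, score >= THRESHOLD)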


2020 ◽  
Author(s):  
Marc Lanovaz

Lanovaz et al. (2020) recently found that machine learning algorithms may adequately control Type I error rate while maintaining power when analyzing single-case graphs. However, the study limited most of its analyses to simulated datasets. To replicate and extend this study, we applied the four machine learning models developed by Lanovaz et al. (2020) to a previously published nonsimulated dataset. On average, the four models produced lower proportions of false positives than well-established methods to analyze AB graphs (i.e., the dual-criteria and conservative dual-criteria methods). These results support the use of machine learning to analyze single-case graphs, but further replication by an independent research team using educational and clinical data remains necessary.
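For readers unfamiliar with the comparison methods, the following Python sketch illustrates a conservative dual-criteria-style check under stated assumptions (an expected increase, and a binomial cut-off used as an approximation to the published criterion tables); it is not the code used in the study.

import numpy as np
from scipy import stats

def cdc_flag(baseline, treatment, shift_sd=0.25, alpha=0.05):
    """Flag a behavior change when enough treatment points exceed both shifted baseline lines."""
    a, b = np.asarray(baseline, float), np.asarray(treatment, float)
    x_a = np.arange(len(a))
    x_b = np.arange(len(a), len(a) + len(b))
    slope, intercept = np.polyfit(x_a, a, 1)              # baseline trend line
    lift = shift_sd * a.std(ddof=1)                       # conservative upward shift
    mean_line = a.mean() + lift
    trend_line = intercept + slope * x_b + lift
    above = np.sum((b > mean_line) & (b > trend_line))
    # smallest count of points unlikely to land above both lines by chance (p = .5)
    required = int(stats.binom.ppf(1 - alpha, len(b), 0.5)) + 1
    return above >= required

print(cdc_flag([3, 4, 3, 5, 4], [7, 8, 7, 9, 8, 10]))     # True for this illustrative series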


2020 ◽  
Vol 18 (1) ◽  
pp. 2-20
Author(s):  
Joel R. Levin ◽  
John M. Ferron ◽  
Boris S. Gafurov

This article details an arduous 20-year journey to develop a statistically viable two-phase (AB) single-case, two-independent-samples randomization test procedure. The test is designed to compare the effectiveness of two different interventions that are randomly assigned to cases. In contrast to the unsatisfactory simulation results produced by an earlier proposed randomization test, the present test consistently exhibited acceptable Type I error control under various design and effect-type configurations, while at the same time possessing adequate power to detect moderately sized intervention-difference effects. Selected issues, applications, and a multiple-baseline extension of the two-sample test are discussed.
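The sketch below illustrates the general idea of a between-case randomization test in Python; it is an assumption-laden simplification (each case's effect summarized as its B-phase minus A-phase mean difference, and group labels re-randomized freely), not the authors' exact procedure.

import numpy as np

rng = np.random.default_rng(7)

def ab_effect(case):
    """Summarize one case's AB series as its B-phase mean minus A-phase mean."""
    a, b = case
    return np.mean(b) - np.mean(a)

def randomization_test(cases_x, cases_y, n_perm=10000):
    effects = np.array([ab_effect(c) for c in cases_x + cases_y])
    n_x = len(cases_x)
    observed = effects[:n_x].mean() - effects[n_x:].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(effects)                   # re-randomize group labels
        count += abs(perm[:n_x].mean() - perm[n_x:].mean()) >= abs(observed)
    return observed, count / n_perm                       # group difference and two-sided p

# each case is (baseline data, treatment data); the two groups received different interventions
cases_x = [([2, 3, 2], [6, 7, 6, 8]), ([4, 4, 5], [9, 8, 9])]
cases_y = [([3, 2, 3], [4, 5, 4, 5]), ([5, 4, 4], [6, 6, 5])]
print(randomization_test(cases_x, cases_y))

With only two cases per group the permutation distribution is very coarse; a realistic application would involve more cases.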


2021 ◽  
Author(s):  
Marc J Lanovaz ◽  
Rachel Primiani

Researchers and practitioners often use single-case designs (SCDs), or n-of-1 trials, to develop and validate novel treatments. Standards and guidelines have been published to provide guidance on how to implement SCDs, but many of their recommendations are not derived from the research literature. For example, one of these recommendations suggests that researchers and practitioners should wait for baseline stability prior to introducing an independent variable. However, this recommendation is not strongly supported by empirical evidence. To address this issue, we used a Monte Carlo simulation to generate a total of 480,000 AB graphs with fixed, response-guided, and random baseline lengths. Then, our analyses compared the Type I error rate and power produced by two methods of analysis: the conservative dual-criteria method (a structured visual aid) and a support vector classifier (a model derived from machine learning). The conservative dual-criteria method produced more power when using response-guided decision-making (i.e., waiting for stability) with negligible effects on Type I error rate. In contrast, waiting for stability did not reduce decision-making errors with the support vector classifier. Our findings question the necessity of waiting for baseline stability when using SCDs with machine learning, but the study must be replicated with other designs to support our results.
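The following Python sketch shows one way to generate the three baseline-length schemes compared above; the stability rule, length ranges, and effect model are illustrative assumptions rather than the study's specifications.

import numpy as np

rng = np.random.default_rng(3)

def fixed_baseline(n=5):
    return rng.normal(0, 1, n)

def random_baseline(min_n=3, max_n=10):
    return rng.normal(0, 1, rng.integers(min_n, max_n + 1))

def response_guided_baseline(min_n=3, max_n=15, window=3, tol=1.0):
    """Extend the baseline until the last `window` points fall within `tol` of each other."""
    data = list(rng.normal(0, 1, min_n))
    while len(data) < max_n:
        last = data[-window:]
        if max(last) - min(last) <= tol:      # simple stability rule (assumed)
            break
        data.append(rng.normal(0, 1))
    return np.array(data)

def ab_graph(baseline_fn, effect=1.0, n_treatment=5):
    a = baseline_fn()
    b = rng.normal(effect, 1, n_treatment)    # level shift when effect > 0
    return a, b

print(ab_graph(response_guided_baseline))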


2021 ◽  
Author(s):  
Marc J Lanovaz ◽  
Kieva Hranchuk

Behavior analysts commonly use visual inspection to analyze single-case graphs, but studies on its reliability have produced mixed results. To examine this issue, we compared the Type I error rate and power of visual inspection with a novel approach, machine learning. Five expert visual raters analyzed 1,024 simulated AB graphs, which differed in the number of points per phase, autocorrelation, trend, variability, and effect size. The ratings were compared to those obtained by the conservative dual-criteria method and two models derived from machine learning. On average, visual raters agreed with each other on only 73% of graphs. In contrast, both models derived from machine learning showed the best balance between Type I error rate and power while producing more consistent results across different graph characteristics. The results suggest that machine learning may support researchers and practitioners in making fewer errors when analyzing single-case graphs, but further replications remain necessary.
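To illustrate the machine learning side of this comparison, the Python sketch below trains a support vector classifier on simple summary features of simulated AB graphs; the features, effect sizes, and training setup are assumptions for demonstration, not the models evaluated in the study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(11)

def simulate_graph(effect, n_per_phase=5):
    a = rng.normal(0, 1, n_per_phase)
    b = rng.normal(effect, 1, n_per_phase)
    return a, b

def features(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    smd = (b.mean() - a.mean()) / pooled           # standardized mean difference
    nonoverlap = np.mean(b[:, None] > a[None, :])  # share of improving pairs
    return [smd, nonoverlap]

X, y = [], []
for _ in range(2000):
    effect = rng.choice([0.0, 1.5])                # graphs without vs. with a change
    a, b = simulate_graph(effect)
    X.append(features(a, b))
    y.append(int(effect > 0))

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))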


2017 ◽  
Vol 43 (1) ◽  
pp. 115-131 ◽  
Author(s):  
Marc J. Lanovaz ◽  
Patrick Cardinal ◽  
Mary Francis

Although visual inspection remains common in the analysis of single-case designs, the lack of agreement between raters is an issue that may seriously compromise its validity. Thus, the purpose of our study was to develop and examine the properties of a simple structured criterion to supplement the visual analysis of alternating-treatment designs. To this end, we generated simulated data sets with varying numbers of points, numbers of conditions, effect sizes, and autocorrelations, and then measured the Type I error rates and power produced by the visual structured criterion (VSC) and permutation analyses. We also validated the results for Type I error rates using nonsimulated data. Overall, our results indicate that using the VSC as a supplement for the analysis of systematically alternating-treatment designs with at least five points per condition generally provides adequate control over Type I error rates and sufficient power to detect most behavior changes.
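As a generic illustration of the permutation analyses mentioned above (not the VSC itself), the Python sketch below shuffles condition labels across sessions of an alternating-treatments data set to test the observed between-condition mean difference; the data and the unrestricted shuffling scheme are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(5)

def permutation_p(values, labels, n_perm=10000):
    """Two-sided permutation p-value for the between-condition mean difference."""
    values, labels = np.asarray(values, float), np.asarray(labels)
    observed = values[labels == 1].mean() - values[labels == 0].mean()
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(labels)
        diff = values[shuffled == 1].mean() - values[shuffled == 0].mean()
        count += abs(diff) >= abs(observed)
    return count / n_perm

# sessions alternate systematically between condition 0 and condition 1
values = [4, 7, 3, 8, 5, 9, 4, 8, 3, 7]
labels = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(permutation_p(values, labels))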


2021 ◽  
Author(s):  
Marc J Lanovaz

Although single-case designs are a cornerstone of the science of behavior analysis, researchers and practitioners often rely on tradition and consensus-based guidelines, rather than empirical evidence, when making decisions about these designs. One approach to develop empirically-based guidelines is to use Monte Carlo simulations for validation, but behavior analysts are not necessarily trained to apply this type of methodology. Therefore, the purpose of our technical article is to walk the reader through conducting Monte Carlo simulations to examine the accuracy, Type I error rate, and power of a visual aid for AB graphs using R code. Additionally, the tutorial provides code to replicate the procedures with single-case experimental designs as well as with the Python programming language. Overall, a broader adoption of Monte Carlo simulations to validate guidelines should lead to an improvement in how researchers and practitioners use single-case designs.
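A minimal Python analogue of the kind of Monte Carlo loop the tutorial walks through is sketched below; the decision rule and simulation settings are placeholders, not the visual aid examined in the article.

import numpy as np

rng = np.random.default_rng(42)

def analysis_rule(a, b):
    """Placeholder rule: flag a change when the B-phase mean exceeds the A-phase mean by one baseline SD."""
    return b.mean() - a.mean() > a.std(ddof=1)

def monte_carlo(effect, n_a=5, n_b=5, n_sims=10000):
    flags = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n_a)
        b = rng.normal(effect, 1, n_b)
        flags += analysis_rule(a, b)
    return flags / n_sims

print("Type I error rate:", monte_carlo(effect=0.0))   # no true change
print("Power (2 SD shift):", monte_carlo(effect=2.0))  # clear change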


2021 ◽  
pp. 13-24
Author(s):  
Jürgen Wilbert ◽  
Jannis Bosch ◽  
Timo Lüke

Analysis of data from single-case intervention studies commonly involves visual analysis. Previous research indicates that visual analysis may suffer from low reliability and high error rates. We investigated the reliability and validity of visual analysis and explored to what extent data trends affect judgments. We conducted a within-subject experiment in which 186 teacher-education students visually analyzed specifically constructed single-case graphs that included either an intervention effect, a trend effect, both effects, or no effect. Participants identified intervention effects in 75% of the graphs, regardless of the existence of a trend. Type I error rates were low (5%) in graphs without a trend but increased fivefold (25%) for graphs with a trend. Inter- and intra-rater reliability was low, particularly when a trend was present in the data.
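The four graph types can be mocked up as follows; this Python sketch uses invented effect magnitudes and series lengths, not the materials from the experiment.

import numpy as np

rng = np.random.default_rng(9)

def make_graph(level_effect=0.0, trend=0.0, n_a=10, n_b=10):
    """Return session index, outcome, and phase indicator for one constructed graph."""
    x = np.arange(n_a + n_b)
    phase_b = (x >= n_a).astype(float)
    y = rng.normal(0, 1, n_a + n_b) + trend * x + level_effect * phase_b
    return x, y, phase_b

graphs = {
    "no effect": make_graph(),
    "intervention only": make_graph(level_effect=2.0),
    "trend only": make_graph(trend=0.15),
    "intervention plus trend": make_graph(level_effect=2.0, trend=0.15),
}
print(list(graphs))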


2016 ◽  
Vol 41 (4) ◽  
pp. 427-467 ◽  
Author(s):  
Kevin R. Tarlow

Measuring treatment effects when an individual’s pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or due to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: Its values are inflated and not bound between −1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
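The Python sketch below conveys the spirit of a baseline-corrected Tau computation in the sense described above (robustly detrend using a Theil-Sen fit to the baseline, then correlate the corrected scores with phase); it is a simplified illustration, not Tarlow's published calculator, and the data are invented.

import numpy as np
from scipy import stats

def baseline_corrected_tau(baseline, treatment):
    """Detrend with a Theil-Sen fit to the baseline, then correlate corrected scores with phase."""
    a, b = np.asarray(baseline, float), np.asarray(treatment, float)
    x_a = np.arange(len(a))
    x_all = np.arange(len(a) + len(b))
    y_all = np.concatenate([a, b])
    phase = np.concatenate([np.zeros(len(a)), np.ones(len(b))])
    slope, intercept, _, _ = stats.theilslopes(a, x_a)   # robust baseline trend
    corrected = y_all - (intercept + slope * x_all)      # remove the projected trend
    tau, p = stats.kendalltau(phase, corrected)
    return tau, p

# a level shift well beyond the projected baseline trend yields a large positive tau
print(baseline_corrected_tau([2, 3, 4, 4, 5], [9, 10, 11, 11, 12]))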

