Justifying the data analytical choice in single case research in relation to the expected data pattern

2019 ◽  
Author(s):  
Rumen Manolov

The lack of consensus regarding the most appropriate analytical techniques for single-case experimental design data requires justifying the choice of any specific analytical option. The current text reviews some of the arguments, offered by methodologists and statisticians, in favor of several analytical techniques. Additionally, a small-scale literature review is performed in order to explore whether and how applied researchers justify the analytical choices that they make. The review suggests that certain practices are not sufficiently explained. In order to improve the reporting of data analytical decisions, it is proposed to choose and justify the data analytical approach prior to gathering the data. As a possible justification for the data analysis plan, we propose using the expected data pattern as a basis (specifically, the expectation about an improving baseline trend and about the immediate or progressive nature of the intervention effect). Although there are multiple alternatives for single-case data analysis, the current text focuses on visual analysis and multilevel models and illustrates an application of these analytical options with real data. User-friendly software is also developed.
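The distinction between an immediate and a progressive intervention effect can be made concrete with a piecewise regression, in which a phase indicator captures an immediate level change and a time-since-intervention term captures a gradual slope change. The following is a minimal sketch using hypothetical AB-phase data; it is not the article's real data or its software:

```python
import numpy as np

# Hypothetical AB-phase data: 8 baseline and 10 intervention sessions.
baseline = np.array([3, 4, 3, 5, 4, 4, 5, 4], dtype=float)
intervention = np.array([7, 8, 8, 9, 9, 10, 10, 11, 11, 12], dtype=float)
y = np.concatenate([baseline, intervention])

n_a, n_b = len(baseline), len(intervention)
time = np.arange(n_a + n_b)                       # session number
phase = (time >= n_a).astype(float)               # 1 during intervention
time_since = np.where(phase == 1, time - n_a, 0)  # sessions since onset

# Columns: intercept, baseline trend, immediate level change, slope change.
X = np.column_stack([np.ones_like(time, dtype=float), time, phase, time_since])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_trend, b_level, b_slope = coefs
print(f"baseline trend: {b_trend:.2f}")
print(f"immediate level change: {b_level:.2f}")
print(f"slope change (progressive effect): {b_slope:.2f}")
```

A large `b_level` with a near-zero `b_slope` would match the expectation of an immediate effect; the reverse pattern would match a progressive one, and a non-zero `b_trend` flags the improving-baseline concern mentioned above.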

2014 ◽  
Vol 50 (1) ◽  
pp. 18-26 ◽  
Author(s):  
Eun Kyeng Baek ◽  
Merlande Petit-Bois ◽  
Wim Van den Noortgate ◽  
S. Natasha Beretvas ◽  
John M. Ferron

2019 ◽  
Vol 86 (4) ◽  
pp. 355-373
Author(s):  
Youjia Hua ◽  
Michelle Hinzman ◽  
Chengan Yuan ◽  
Kinga Balint Langel

An emerging body of research suggests that incorporating randomization schemes into single-case research designs strengthens internal validity and data evaluation. The purpose of this study was to test the utility and feasibility of a randomized alternating-treatments design in an investigation that compared the combined effects of vocabulary instruction and paraphrasing strategies on the expository comprehension of six students with reading difficulties. We analyzed the data using three types of randomization tests as well as visual analysis. The visual analysis and randomization tests confirmed the additional benefit of vocabulary instruction on expository comprehension for one student. However, the effects were not replicated across the other five students. We found that proper randomization schemes can improve both the internal validity and the data analysis strategies of the alternating-treatments design.
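The logic behind a randomization test for an alternating-treatments comparison can be sketched briefly: the observed between-condition difference is compared against the distribution of differences over all admissible random assignments of conditions to sessions. The data below are hypothetical, and the unrestricted 5-vs-5 scheme is an assumption; the study's actual randomization scheme would constrain which assignments are admissible:

```python
import itertools
import numpy as np

# Hypothetical session scores from an alternating-treatments comparison:
# condition A = combined instruction, condition B = comparison condition.
scores = np.array([7, 4, 8, 5, 9, 5, 8, 6, 9, 6], dtype=float)
labels = np.array(list("ABABABABAB"))

def mean_diff(scores, labels):
    return scores[labels == "A"].mean() - scores[labels == "B"].mean()

observed = mean_diff(scores, labels)

# Randomization distribution: every assignment of 5 A's and 5 B's
# to the 10 sessions.
count = 0
assignments = list(itertools.combinations(range(len(scores)), 5))
for a_idx in assignments:
    perm = np.array(["B"] * len(scores))
    perm[list(a_idx)] = "A"
    if mean_diff(scores, perm) >= observed:
        count += 1

p_value = count / len(assignments)
print(f"observed difference: {observed:.2f}, p = {p_value:.3f}")
```

The p-value is simply the proportion of admissible assignments yielding a difference at least as large as the one observed, which is what makes the test's validity rest on the randomization actually used.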


2019 ◽  
Vol 12 (2) ◽  
pp. 491-502 ◽  
Author(s):  
Katie Wolfe ◽  
Erin E. Barton ◽  
Hedda Meadan

2012 ◽  
Vol 37 (1) ◽  
pp. 62-89 ◽  
Author(s):  
Dawn H. Davis ◽  
Phill Gagné ◽  
Laura D. Fredrick ◽  
Paul A. Alberto ◽  
Rebecca E. Waugh ◽  
...  

2019 ◽  
Author(s):  
Rumen Manolov ◽  
John M. Ferron

In the context of single-case experimental designs, replication is crucial. On the one hand, replication of the basic effect within a study is necessary for demonstrating experimental control. On the other hand, replication across studies is required for establishing the generality of the intervention effect. Moreover, the “replicability crisis” provides a more general context that further emphasizes the need for assessing consistency in replications. In the current text, we focus on the replication of effects within a study and specifically discuss the consistency of effects. Our proposal for assessing the consistency of effects relies on one of the promising data analytical techniques: multilevel models, also known as hierarchical linear models or mixed-effects models. One option is to check, for each case in a multiple-baseline design, whether the confidence interval for the individual treatment effect excludes zero. This is relevant for assessing whether the effect is replicated as being non-null. However, we consider it more relevant and informative to assess, for each case, whether the confidence interval for the random effects includes zero (i.e., whether the fixed effect estimate is a plausible value for each individual effect). This is relevant for assessing whether the effect is consistent in size, with the additional requirement that the fixed effect itself is different from zero. The proposal for assessing consistency is illustrated with real data and is implemented in free user-friendly software.
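The two checks described in this abstract can be illustrated in deliberately simplified form: given per-case effect estimates and their standard errors, ask for each case whether the interval excludes zero (the effect is replicated as non-null) and whether it contains the overall effect estimate (the effect is consistent in size). The numbers below are hypothetical, and the precision-weighted average is only a stand-in for the fixed effect that an actual multilevel model would estimate:

```python
import numpy as np

# Hypothetical per-case effect estimates and standard errors from a
# multiple-baseline design with four cases.
effects = np.array([4.2, 3.8, 4.4, 1.1])
ses = np.array([0.6, 0.7, 0.5, 0.6])

# Precision-weighted average as a stand-in for the fixed effect.
weights = 1.0 / ses**2
fixed = np.sum(weights * effects) / np.sum(weights)

z = 1.96  # 95% interval
for i, (b, se) in enumerate(zip(effects, ses), start=1):
    lo, hi = b - z * se, b + z * se
    non_null = not (lo <= 0.0 <= hi)    # effect replicated as non-null?
    consistent = lo <= fixed <= hi      # fixed effect plausible for this case?
    print(f"case {i}: CI=({lo:.2f}, {hi:.2f}) "
          f"non-null={non_null} consistent={consistent}")
```

In this toy example the fourth case fails both checks: its interval includes zero and excludes the overall estimate, which is exactly the kind of inconsistency the proposed procedure is meant to surface.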


2017 ◽  
Vol 38 (6) ◽  
pp. 387-393 ◽  
Author(s):  
Rumen Manolov ◽  
Georgina Guilera ◽  
Antonio Solanas

The current text comments on three systematic reviews published in the special section Issues and Advances in the Systematic Review of Single-Case Research: An Update and Exemplars. The commentary addresses the need to combine the assessment of the methodological quality of the studies included in systematic reviews, the assessment of the presence of functional relations via visual analysis following objective rules, and the quantification of the magnitude of effects in a way that provides meaningful information. Although the exemplars were not required to follow specific guidelines for conduct and reporting, we applied an existing methodological quality checklist for systematic reviews and meta-analyses. Finally, we point to specific signs of advance in the field of systematic reviews of single-case design studies, as identified in the three exemplars, and we also suggest some issues requiring further research and discussion.


2018 ◽  
Vol 43 (3) ◽  
pp. 361-388 ◽  
Author(s):  
Katie Wolfe ◽  
Tammiee S. Dickenson ◽  
Bridget Miller ◽  
Kathleen V. McGrath

A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; the Hedges, Pustejovsky, and Shadish (HPS) effect size; and the between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
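For orientation, the simplest member of the Tau-U family (A-versus-B nonoverlap Tau, without the baseline-trend correction that the full Tau-U adds) can be computed directly from all pairwise comparisons between phases. A minimal sketch with hypothetical data:

```python
def tau_ab(baseline, treatment):
    """A-vs-B nonoverlap Tau: compare every baseline point with every
    treatment point; (improving pairs - deteriorating pairs) / all pairs."""
    pos = sum(t > b for b in baseline for t in treatment)
    neg = sum(t < b for b in baseline for t in treatment)
    return (pos - neg) / (len(baseline) * len(treatment))

# Hypothetical AB data with partial overlap between phases.
baseline = [3, 5, 4, 6, 4]
treatment = [6, 7, 9, 8, 10, 9]
print(f"Tau = {tau_ab(baseline, treatment):.2f}")
```

Because the raw statistic is continuous while visual analysis often yields a yes/no judgment about a functional relation, any comparison between the two requires a dichotomization step, which is where the weaker agreement reported above arises.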


2021 ◽  
pp. 002246692110472
Author(s):  
Kelli A. Sanderson ◽  
Samantha E. Goldman ◽  
Amanda Rojas

The purpose of this systematic review and meta-analysis was to identify and synthesize single-case research examining interventions used to increase the quantity and/or quality of participation by adolescents with disabilities during Individualized Education Program (IEP) meetings. For studies meeting quality indicators, we used visual analysis, Tau-U, and standardized mean difference to synthesize outcomes, including maintenance and generalization of effects. We identified seven studies examining quality of participation and eight studies examining quantity of participation that met our inclusion criteria; however, only three studies from each group met quality standards. Overall, interventions positively influenced student contributions at IEP meetings. When measured, increased quantity and quality of participation maintained over time and generalized to real IEP meetings. Implications for research and practice are discussed.


2021 ◽  
Author(s):  
André Fuchs ◽  
Swapnil Kharche ◽  
Matthias Wächter ◽  
Joachim Peinke

We present a user-friendly open-source Matlab package for stochastic data analysis. This package enables users to perform a standard analysis of a given timeseries, such as scaling analysis of structure functions and energy spectral density, estimation of correlation functions, or investigation of the PDFs of increments, including Castaing fits. The package can also be used to extract the stochastic equations describing scale-dependent processes, such as the cascade process in turbulent flows, through Fokker-Planck equations and concepts of non-equilibrium stochastic thermodynamics. This stochastic treatment of scale-dependent processes has the potential to offer a new way to link fluctuation theorems of non-equilibrium stochastic thermodynamics to extreme events (small-scale intermittency, structures of rogue waves).

The development of this user-friendly package greatly enhances the practicability and availability of this method, which allows a comprehensive statistical description of the complexity of time series. It can also be used by researchers outside the field of turbulence for the analysis of data with turbulence-like complexity, including ocean gravity waves, stock prices, and inertial particles in direct numerical simulations. Support is available at github.com/andre-fuchs-uni-oldenburg/OPEN_FPE_IFT, where questions can be posted and generally receive quick responses from the authors.

This package was developed by the research group Turbulence, Wind energy and Stochastics (TWiSt) at the Carl von Ossietzky University of Oldenburg. We acknowledge funding by the Volkswagen Foundation.
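To give a flavor of the scale-dependent quantities the package automates, increments and structure functions of a timeseries can be computed in a few lines. The sketch below uses a random-walk surrogate rather than real turbulence data and is not drawn from the Matlab package itself:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy surrogate timeseries standing in for turbulence data.
u = np.cumsum(rng.standard_normal(10_000))

def structure_function(u, scale, order):
    """q-th order structure function: mean of |increment|**q at a given lag."""
    increments = u[scale:] - u[:-scale]
    return np.mean(np.abs(increments) ** order)

scales = [1, 2, 4, 8, 16]
s2 = [structure_function(u, r, 2) for r in scales]
for r, val in zip(scales, s2):
    print(f"scale {r:2d}: S2 = {val:.2f}")
```

How such moments scale with the lag is the starting point of the analysis; for this Brownian surrogate the second-order structure function grows linearly with the scale, whereas turbulence data would show the anomalous scaling that the package's Fokker-Planck machinery targets.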

