Assessing Consistency in Single-Case Data Features Using Modified Brinley Plots

2020 ◽  
pp. 014544552098296
Author(s):  
Rumen Manolov ◽  
René Tanious

The current text deals with the assessment of consistency of data features from experimentally similar phases and consistency of effects in single-case experimental designs. Although consistency is frequently mentioned as a critical feature, few quantifications have been proposed so far: namely, under the acronyms CONDAP (consistency of data patterns in similar phases) and CONEFF (consistency of effects). Whereas CONDAP allows assessing the consistency of data patterns, the proposals made here focus on the consistency of data features such as level, trend, and variability, as represented by summary measures (mean, ordinary least squares slope, and standard deviation, respectively). The assessment of consistency of effect is also made in terms of these three data features, while also including the study of the consistency of an immediate effect (if expected). The summary measures are represented as points on a modified Brinley plot and their similarity is assessed via quantifications of distance. Both absolute and relative measures of consistency are proposed: the former expressed in the same measurement units as the outcome variable and the latter as a percentage. Illustrations with real data sets (multiple baseline, ABAB, and alternating treatments designs) show the wide applicability of the proposals. We developed a user-friendly website to offer both the graphical representations and the quantifications.
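
A minimal sketch of the kind of computation described above, using fictitious phase data: each phase is summarized by its mean, OLS slope, and standard deviation, and consistency is quantified as the mean pairwise distance between the summaries of experimentally similar phases. The relative (percentage) scaling by the observed data range is our assumption, not necessarily the authors' exact formula.

```python
# Sketch: per-phase summary measures and distance-based consistency.
import numpy as np
from itertools import combinations

def phase_summary(y):
    """Mean, OLS slope on session number, and SD of one phase."""
    t = np.arange(len(y))
    slope = np.polyfit(t, y, 1)[0]
    return np.mean(y), slope, np.std(y, ddof=1)

def consistency(phases):
    """Mean pairwise distance between phase summaries (lower = more consistent)."""
    summaries = np.array([phase_summary(p) for p in phases])
    dists = [np.abs(a - b) for a, b in combinations(summaries, 2)]
    absolute = np.mean(dists, axis=0)            # per feature: mean, slope, SD
    data_range = np.ptp(np.concatenate(phases))  # scaling choice is ours
    relative = 100 * absolute / data_range       # as a percentage
    return absolute, relative

# Example: two baseline phases of an ABAB design (fictitious data)
A1 = np.array([5.0, 6.0, 5.0, 7.0])
A2 = np.array([6.0, 5.0, 6.0, 6.0])
abs_c, rel_c = consistency([A1, A2])
print(abs_c, rel_c)
```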


2019 ◽  
Author(s):  
Rumen Manolov

The lack of consensus regarding the most appropriate analytical techniques for single-case experimental design data requires justifying the choice of any specific analytical option. The current text mentions some of the arguments, provided by methodologists and statisticians, in favor of several analytical techniques. Additionally, a small-scale literature review is performed in order to explore if and how applied researchers justify the analytical choices that they make. The review suggests that certain practices are not sufficiently explained. In order to improve the reporting of data analytical decisions, it is proposed to choose and justify the data analytical approach prior to gathering the data. As a possible justification for the data analysis plan, we propose using as a basis the expected data pattern (specifically, the expectation about an improving baseline trend and about the immediate or progressive nature of the intervention effect). Although there are multiple alternatives for single-case data analysis, the current text focuses on visual analysis and multilevel models and illustrates an application of these analytical options with real data. User-friendly software is also provided.
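
As a hedged illustration of one of the two analytical options mentioned (multilevel models), the sketch below fits a two-level model with participants as the clustering unit. The simulated multiple-baseline data and the simple random-intercept specification are our assumptions; the authors' own specification may differ (e.g., by modelling trend or autocorrelation).

```python
# Sketch: a two-level model for simulated multiple-baseline data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for case in range(3):                         # three participants, staggered starts
    u = rng.normal(0, 0.8)                    # case-specific intercept deviation
    for session in range(20):
        phase = int(session >= 8 + 2 * case)  # 0 = baseline, 1 = intervention
        y = 5 + u + 3 * phase + rng.normal(0, 1)
        rows.append(dict(case=case, session=session, phase=phase, y=y))
data = pd.DataFrame(rows)

# Random intercept per case; the fixed effect of phase is the average effect
model = smf.mixedlm("y ~ phase", data, groups=data["case"]).fit()
print(model.summary())
```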


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Xiangfei Chen ◽  
David Trafimow ◽  
Tonghui Wang ◽  
Tingting Tong ◽  
Cong Wang

Purpose: The authors derive the necessary mathematics, provide computer simulations, provide links to free and user-friendly computer programs, and analyze real data sets.

Design/methodology/approach: Cohen's d, which indexes the difference in means in standard deviation units, is the most popular effect size measure in the social sciences and economics. Not surprisingly, researchers have developed statistical procedures for estimating the sample sizes needed to have a desirable probability of rejecting the null hypothesis given assumed values for Cohen's d, or for estimating the sample sizes needed to have a desirable probability of obtaining a confidence interval of a specified width. However, for researchers interested in using the sample Cohen's d to estimate the population value, these are insufficient. Therefore, it would be useful to have a procedure for obtaining the sample sizes needed to be confident that the sample Cohen's d is close to the population parameter the researcher wishes to estimate: an expansion of the a priori procedure (APP). The authors derive the necessary mathematics, provide computer simulations and links to free and user-friendly computer programs, and analyze real data sets to illustrate the main results.

Findings: The paper answers two questions. The precision question: how close do I want my sample Cohen's d to be to the population value? The confidence question: what probability do I want of being within the specified distance?

Originality/value: To the best of the authors' knowledge, this is the first paper to estimate Cohen's effect size using the APP method. The online computing packages make it convenient for researchers and practitioners to apply.
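
A simulation sketch of the APP logic described above, under our own choices of population d, precision f, and candidate sample sizes: for each per-group n, it estimates the probability that the sample Cohen's d lands within f of the population value.

```python
# Sketch: how often does the sample Cohen's d fall within f of the true d?
import numpy as np

def coverage(n, pop_d=0.5, f=0.2, reps=20_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(pop_d, 1, size=(reps, n))   # group 1, true mean = pop_d
    y = rng.normal(0.0, 1, size=(reps, n))     # group 2, true mean = 0
    sp = np.sqrt((x.var(axis=1, ddof=1) + y.var(axis=1, ddof=1)) / 2)
    d = (x.mean(axis=1) - y.mean(axis=1)) / sp # sample Cohen's d per replication
    return np.mean(np.abs(d - pop_d) <= f)

for n in (25, 50, 100, 200):                   # candidate per-group sample sizes
    print(n, round(coverage(n), 3))
```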


2017 ◽  
Vol 18 (2) ◽  
pp. 0233 ◽  
Author(s):  
Hassan S Bakouch ◽  
Sanku Dey ◽  
Pedro Luiz Ramos ◽  
Francisco Louzada

In this paper, we consider different estimation methods for the unknown parameters of the binomial-exponential 2 distribution. First, we briefly describe different frequentist approaches, namely the method of moments, modified moments, ordinary least-squares estimation, weighted least-squares estimation, percentile, maximum product of spacings, Cramér-von Mises type minimum distance, Anderson-Darling, and right-tail Anderson-Darling, and compare them using extensive numerical simulations. We apply our proposed methodology to three real data sets related to the total monthly rainfall during April, May and September at São Carlos, Brazil.
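
As an illustration of one of the listed methods, here is a generic maximum product of spacings (MPS) sketch. To avoid transcribing the binomial-exponential 2 CDF incorrectly, an exponential CDF is used as a placeholder; the BE2 CDF would be swapped in to follow the paper.

```python
# Sketch: maximum product of spacings, generic over any parametric CDF.
import numpy as np
from scipy.optimize import minimize

def mps_fit(data, cdf, theta0):
    """Maximize the mean log-spacing of the fitted CDF at the sorted data."""
    x = np.sort(data)
    def neg_log_spacing(theta):
        u = np.concatenate(([0.0], cdf(x, theta), [1.0]))
        sp = np.diff(u)
        if np.any(sp <= 0):          # invalid parameters give zero spacings
            return np.inf
        return -np.mean(np.log(sp))
    return minimize(neg_log_spacing, theta0, method="Nelder-Mead")

# Placeholder model: exponential with rate theta[0] (stand-in for the BE2 CDF)
exp_cdf = lambda x, theta: 1 - np.exp(-theta[0] * x)
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=200)    # true rate = 0.5
print(mps_fit(sample, exp_cdf, theta0=[1.0]).x)  # estimate near 0.5
```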


2020 ◽  
pp. 014544552092399
Author(s):  
Rumen Manolov ◽  
René Tanious ◽  
Tamal Kumar De ◽  
Patrick Onghena

Consistency is one of the crucial single-case data aspects that are expected to be assessed visually when evaluating the presence of an intervention effect. Complementing visual inspection, there have been recent proposals for quantifying the consistency of data patterns in similar phases and the consistency of effects for reversal, multiple-baseline, and changing-criterion designs. The current text continues this line of research by focusing on alternation designs using block randomization. Specifically, three types of consistency are discussed: consistency of the superiority of one condition over another, consistency of the average level across blocks, and consistency in the magnitude of the effect across blocks. The focus is placed especially on the latter type of consistency, which is quantified by partitioning the variance into a part attributed to the intervention, a part attributed to the blocking factor, and a residual (including the interaction between the intervention and the blocks). Several illustrations with real and fictitious data are provided in order to clarify the meaning of the proposed quantification. Moreover, specific graphical representations are recommended for complementing the numerical assessment of consistency. A freely available, user-friendly webpage has been developed for implementing the proposal.
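
A toy sketch of the variance-partitioning idea, with fictitious data laid out as blocks by conditions: the total sum of squares is split into an intervention part, a block part, and a residual that absorbs the block-by-intervention interaction.

```python
# Sketch: two-way sum-of-squares decomposition, one observation per cell.
import numpy as np

# rows = blocks, columns = conditions (A = baseline, B = intervention)
y = np.array([[4.0, 7.0],
              [5.0, 8.0],
              [4.5, 9.0],
              [5.5, 7.5]])
grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
ss_cond = y.shape[0] * ((y.mean(axis=0) - grand) ** 2).sum()   # intervention
ss_block = y.shape[1] * ((y.mean(axis=1) - grand) ** 2).sum()  # blocks
ss_resid = ss_total - ss_cond - ss_block                       # incl. interaction

# Report each component as a percentage of the total variance
parts = dict(intervention=ss_cond, blocks=ss_block, residual=ss_resid)
print({k: round(100 * v / ss_total, 1) for k, v in parts.items()})
```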


2021 ◽  
Author(s):  
Rumen Manolov ◽  
René Tanious ◽  
Belén Fernández

In science in general and, therefore, in the context of single-case experimental designs (SCED), the replication of intervention effects within and across participants is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether (or to what extent) an effect has been replicated are scarce in the general context of the behavioral sciences and practically nonexistent in the SCED context. We propose an extension of the modified Brinley plot for assessing how many of the effects replicate. In order to make this assessment possible, a definition of replication is suggested on the basis of expert judgment rather than purely statistical criteria. The definition of replication and its graphical representation are justified, their strengths and limitations presented, and their use illustrated with real data. User-friendly software is made available for automatically obtaining the graphical representation.
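
A hedged sketch of how such a replication count might look: each case's baseline mean is plotted against its intervention mean (as in a modified Brinley plot), and effects beyond an expert-set minimum change are counted as replicated. The tolerance value below is an arbitrary placeholder rather than the authors' criterion.

```python
# Sketch: counting replicated effects against an expert-set minimum change.
import numpy as np
import matplotlib.pyplot as plt

baseline_means = np.array([5.0, 6.2, 4.8, 5.5])    # fictitious, one per case
treatment_means = np.array([8.1, 6.5, 7.9, 8.4])
min_change = 1.5                                   # expert judgment, placeholder

replicated = (treatment_means - baseline_means) >= min_change
print(f"{replicated.sum()} of {len(replicated)} effects replicated")

plt.scatter(baseline_means, treatment_means)
lims = [4, 10]
plt.plot(lims, lims, "k--", label="no change")                       # diagonal
plt.plot(lims, [l + min_change for l in lims], "r:", label="minimum change")
plt.xlabel("Baseline mean"); plt.ylabel("Intervention mean")
plt.legend(); plt.show()
```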


2017 ◽  
Vol 19 (1) ◽  
pp. 18-32 ◽  
Author(s):  
Rumen Manolov ◽  
Antonio Solanas

Single-case experimental designs meeting evidence standards are useful for identifying empirically supported practices. Part of the research process entails data analysis, which can be performed both visually and numerically. In the current text, we discuss several statistical techniques, focusing on the descriptive quantifications that they provide on aspects such as overlap, difference in level, and difference in slope. In both cases, the numerical results are interpreted in light of the characteristics of the data as identified via visual inspection. Two previously published data sets from patients with traumatic brain injury are re-analysed, illustrating several analytical options and the data patterns for which each of these analytical techniques is especially useful, considering their assumptions and limitations. In order to make the current review maximally informative for applied researchers, we point to free user-friendly web applications of the analytical techniques. Moreover, we offer up-to-date references to potentially useful analytical techniques not illustrated in the article. Finally, we point to some analytical challenges and offer tentative recommendations about how to deal with them.
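
As a concrete example of one descriptive overlap quantification of the kind discussed (not necessarily one of the techniques illustrated in the article), the sketch below computes nonoverlap of all pairs (NAP): the proportion of baseline-intervention pairs in which the intervention observation shows improvement, with ties counting half.

```python
# Sketch: nonoverlap of all pairs (NAP) for one baseline-intervention contrast.
import numpy as np

def nap(baseline, intervention, higher_is_better=True):
    a, b = np.asarray(baseline), np.asarray(intervention)
    if not higher_is_better:
        a, b = -a, -b
    diff = b[None, :] - a[:, None]                 # all pairwise comparisons
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / diff.size

# Fictitious data; values near 1 indicate little overlap between phases
print(nap([3, 4, 3, 5], [6, 5, 7, 6]))             # ~0.97 here
```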


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Musfiqur Sazal ◽  
Vitalii Stebliankin ◽  
Kalai Mathee ◽  
Changwon Yoo ◽  
Giri Narasimhan

Causal inference in biomedical research allows us to shift the paradigm from investigating associational relationships to causal ones. Inferring causal relationships can help in understanding the inner workings of biological processes. Association patterns can be coincidental and may lead to wrong conclusions about causality in complex systems. Microbiomes are highly complex, diverse, and dynamic environments. Microbes are key players in human health and disease. Hence, knowledge of the critical causal relationships among the entities in a microbiome, and of the impact of internal and external factors on microbial abundance and interactions, is essential for understanding disease mechanisms and making appropriate treatment recommendations. In this paper, we employ causal inference techniques to understand causal relationships between various entities in a microbiome, and to use the resulting causal network to make useful computations. We introduce a novel pipeline for microbiome analysis, which includes adding an outcome or “disease” variable and then computing the causal network, referred to as a “disease network”, with the goal of identifying disease-relevant causal factors from the microbiome. Interventional techniques are then applied to the resulting network, allowing us to compute a measure called the causal effect of one or more microbial taxa on the outcome variable or the condition of interest. Finally, we propose a measure called causal influence that quantifies the total influence exerted by a microbial taxon on the rest of the microbiome. Our pipeline is robust, sensitive, different from traditional approaches, and able to predict interventional effects without any controlled experiments. The pipeline can be used to identify potential eubiotic and dysbiotic microbial taxa in a microbiome. We validate our results using synthetic data sets and previously published results on real data sets.
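
A minimal sketch of the kind of interventional computation described, using backdoor adjustment on a toy three-node network (confounder → taxon → disease, with the confounder also affecting disease). The pipeline in the paper learns the network from microbiome data, whereas the structure and simulated data here are our assumptions.

```python
# Sketch: causal effect of a taxon on an outcome via backdoor adjustment.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 50_000
conf = rng.integers(0, 2, n)                          # e.g., diet (binary)
taxon = rng.binomial(1, 0.3 + 0.4 * conf)             # taxon abundance high/low
disease = rng.binomial(1, 0.1 + 0.3 * taxon + 0.2 * conf)
df = pd.DataFrame(dict(conf=conf, taxon=taxon, disease=disease))

# P(disease | do(taxon = t)) = sum_c P(disease | taxon = t, conf = c) P(conf = c)
def do_effect(df, t):
    p = 0.0
    for c, w in df["conf"].value_counts(normalize=True).items():
        sub = df[(df.taxon == t) & (df.conf == c)]
        p += w * sub["disease"].mean()
    return p

ate = do_effect(df, 1) - do_effect(df, 0)
print(f"causal effect of taxon on disease: {ate:.3f}")  # ~0.30 by construction
```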

