A Proposal for the Assessment of Replication of Effects in Single-Case Experimental Designs

2021 ◽  
Author(s):  
Rumen Manolov ◽  
René Tanious ◽  
Belén Fernández

In science in general and, consequently, in the context of single-case experimental designs (SCEDs), the replication of intervention effects within and across participants is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether (or to what extent) an effect has been replicated are scarce in the behavioral sciences in general and practically nonexistent in the SCED context. We propose an extension of the modified Brinley plot for assessing how many of the effects replicate. To make this assessment possible, a definition of replication is suggested, based on expert judgment rather than on purely statistical criteria. The definition of replication and its graphical representation are justified, their strengths and limitations presented, and their use illustrated with real data. User-friendly software is made available for automatically obtaining the graphical representation.
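
As a rough illustration of the idea, the sketch below builds a modified Brinley plot in Python: each case is summarized by its baseline and intervention phase means, the diagonal marks "no change," and an effect is counted as replicated when its point falls within a tolerance band around the mean change line. The data, the choice of the mean as summary measure, and the tolerance value are illustrative assumptions, not the article's own criteria.

```python
# A sketch of a modified Brinley plot for judging replication across cases.
# Assumptions (not the article's criteria): each case is summarized by its
# phase means, and an effect "replicates" when its point falls within an
# expert-chosen tolerance band around the mean change line.
import numpy as np
import matplotlib.pyplot as plt

baseline = np.array([12.0, 15.5, 10.2, 14.1])   # baseline phase means per case
treatment = np.array([6.1, 8.0, 9.8, 7.4])      # intervention phase means
tol = 2.0                                       # hypothetical expert tolerance

mean_change = np.mean(treatment - baseline)
lo = min(baseline.min(), treatment.min()) - 1
hi = max(baseline.max(), treatment.max()) + 1
lims = np.array([lo, hi])

fig, ax = plt.subplots()
ax.scatter(baseline, treatment, zorder=3)
ax.plot(lims, lims, "k--", label="no change (y = x)")
ax.plot(lims, lims + mean_change, label="mean effect")
ax.fill_between(lims, lims + mean_change - tol, lims + mean_change + tol,
                alpha=0.2, label="replication band")
replicated = np.abs((treatment - baseline) - mean_change) <= tol
ax.set_xlabel("Baseline mean")
ax.set_ylabel("Intervention mean")
ax.set_title(f"{int(replicated.sum())} of {len(baseline)} effects replicate")
ax.legend()
plt.show()
```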

2019 ◽  
Author(s):  
Rumen Manolov

The lack of consensus regarding the most appropriate analytical techniques for single-case experimental design data requires justifying the choice of any specific analytical option. The current text reviews some of the arguments, provided by methodologists and statisticians, in favor of several analytical techniques. Additionally, a small-scale literature review is performed in order to explore whether and how applied researchers justify the analytical choices that they make. The review suggests that certain practices are not sufficiently explained. In order to improve the reporting of data analytical decisions, it is proposed to choose and justify the data analytical approach prior to gathering the data. As a possible justification for the data analysis plan, we propose using the expected data pattern as a basis (specifically, the expectation about an improving baseline trend and about the immediate or progressive nature of the intervention effect). Although there are multiple alternatives for single-case data analysis, the current text focuses on visual analysis and multilevel models and illustrates an application of these analytical options with real data. User-friendly software is also developed.
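
By way of illustration, the sketch below shows how an expected data pattern could be written into a pre-specified analysis plan: two competing ordinary least squares specifications, one encoding an immediate level change on top of a baseline trend and the other a progressive (slope) change. The data and variable names are hypothetical; the article itself illustrates visual analysis and multilevel models.

```python
# Sketch of translating an expected data pattern into a pre-specified
# analysis: two ordinary least squares specifications, one allowing a
# baseline trend plus an immediate level change, the other a progressive
# (slope) change. Data and names are illustrative only.
import numpy as np
import statsmodels.api as sm

y = np.array([3, 4, 4, 5, 6, 9, 11, 12, 14, 15], dtype=float)
t = np.arange(len(y), dtype=float)            # session number
phase = (t >= 5).astype(float)                # 0 = baseline, 1 = intervention
since = np.where(phase == 1, t - 5, 0.0)      # sessions since phase change

X_immediate = sm.add_constant(np.column_stack([t, phase]))           # level change
X_progressive = sm.add_constant(np.column_stack([t, phase, since]))  # slope change

for label, X in [("immediate", X_immediate), ("progressive", X_progressive)]:
    fit = sm.OLS(y, X).fit()
    print(f"{label}: coefficients={np.round(fit.params, 2)}, AIC={fit.aic:.1f}")
```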


2019 ◽  
Author(s):  
Rumen Manolov ◽  
John M. Ferron

In the context of single-case experimental designs, replication is crucial. On the one hand, the replication of the basic effect within a study is necessary for demonstrating experimental control. On the other hand, replication across studies is required for establishing the generality of the intervention effect. Moreover, the "replicability crisis" provides a broader context that further emphasizes the need to assess consistency across replications. In the current text, we focus on the replication of effects within a study and specifically discuss the consistency of effects. Our proposal for assessing the consistency of effects relies on one of the promising data analytical techniques: multilevel models, also known as hierarchical linear models or mixed effects models. One option is to check, for each case in a multiple-baseline design, whether the confidence interval for the individual treatment effect excludes zero. This is relevant for assessing whether the effect is replicated as being non-null. However, we consider it more relevant and informative to assess, for each case, whether the confidence interval for the random effect includes zero (i.e., whether the fixed effect estimate is a plausible value for each individual effect). This is relevant for assessing whether the effect is consistent in size, with the additional requirement that the fixed effect itself is different from zero. The proposal for assessing consistency is illustrated with real data and implemented in free user-friendly software.
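
A minimal sketch of this kind of check, under assumptions, follows: a two-level model is fitted to simulated multiple-baseline data, and each case's conditional (posterior) standard deviation is used to judge (a) whether the individual effect is non-null and (b) whether the fixed effect is a plausible value for that case. The simulated values, column names, and the use of the conditional standard deviation as an approximate standard error are all assumptions; the article's exact procedure may differ.

```python
# Sketch of a per-case consistency check in a two-level model fitted to
# simulated multiple-baseline data. The conditional SD of the random
# effect is used as an approximate standard error for both checks.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for case in range(4):
    effect = -4 + rng.normal(0, 0.8)          # case-specific treatment effect
    start = 8 + 2 * case                      # staggered intervention start
    for session in range(20):
        phase = int(session >= start)
        rows.append({"case": case, "phase": phase,
                     "y": 10 + effect * phase + rng.normal(0, 1)})
data = pd.DataFrame(rows)

fit = smf.mixedlm("y ~ phase", data, groups=data["case"],
                  re_formula="~phase").fit()
fixed = fit.fe_params["phase"]

for case, re in fit.random_effects.items():
    u = re["phase"]                                        # deviation (BLUP)
    sd = np.sqrt(fit.random_effects_cov[case].loc["phase", "phase"])
    individual = fixed + u
    non_null = not (individual - 1.96 * sd <= 0 <= individual + 1.96 * sd)
    consistent = (u - 1.96 * sd) <= 0 <= (u + 1.96 * sd)
    print(f"case {case}: effect={individual:.2f}, "
          f"non-null={non_null}, consistent with fixed effect={consistent}")
```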


2020 ◽  
pp. 014544552092399
Author(s):  
Rumen Manolov ◽  
René Tanious ◽  
Tamal Kumar De ◽  
Patrick Onghena

Consistency is one of the crucial single-case data aspects expected to be assessed visually when evaluating the presence of an intervention effect. Complementing visual inspection, there have been recent proposals for quantifying the consistency of data patterns in similar phases and the consistency of effects for reversal, multiple-baseline, and changing-criterion designs. The current text continues this line of research by focusing on alternation designs using block randomization. Specifically, three types of consistency are discussed: consistency of the superiority of one condition over another, consistency of the average level across blocks, and consistency in the magnitude of the effect across blocks. The focus is especially on the latter type of consistency, which is quantified by partitioning the variance into a part attributed to the intervention, a part attributed to the blocking factor, and a residual (including the interaction between the intervention and the blocks). Several illustrations with real and fictitious data are provided in order to clarify the meaning of the proposed quantification. Moreover, specific graphical representations are recommended for complementing the numerical assessment of consistency. A freely available user-friendly webpage implements the proposal.
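
The sketch below illustrates, with fictitious numbers, the kind of variance partition just described for a block-randomized alternation design with one observation per condition within each block: total variability is split into parts attributable to the condition, to the blocks, and a residual absorbing the condition-by-block interaction. A small residual share would indicate an effect whose magnitude is consistent across blocks.

```python
# Variance partition for a block-randomized alternation design (fictitious
# data): rows are blocks, columns are conditions A and B, one observation
# per cell. The residual sum of squares includes the condition-by-block
# interaction, since there is no replication within cells.
import numpy as np

scores = np.array([[7.0, 3.0],
                   [8.0, 4.0],
                   [6.0, 4.5],
                   [9.0, 3.5]])
grand = scores.mean()
n_blocks, n_cond = scores.shape

ss_total = ((scores - grand) ** 2).sum()
ss_cond = n_blocks * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_block = n_cond * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_resid = ss_total - ss_cond - ss_block   # includes condition x block

for name, ss in [("condition", ss_cond), ("block", ss_block),
                 ("residual", ss_resid)]:
    print(f"{name}: {ss:.2f} ({100 * ss / ss_total:.1f}% of total)")
```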


2020 ◽  
Author(s):  
Orhan Aydin

To date, several effect size measures have been proposed for single-case experimental designs (SCEDs), based on probability, means, or overlap. All of these methods have considerable limitations. In this study, a new effect size model for SCEDs, named performance criteria-based effect size (PCES), is proposed in light of the limitations of four nonoverlap-based effect size measures that are widely accepted in the literature and blend well with visual analysis. In the field test of PCES, real data from published studies were used, and the relationship between PCES, visual analysis, and the four nonoverlap-based methods was examined. In determining the data for the field test, 1,012 tiers (AB phases) were identified from the issues, published in the last five years, of the four journals that most frequently publish SCED studies. The findings revealed a weak-to-moderate relationship between PCES and the nonoverlap-based methods, owing to its focus on performance criteria. Although PCES has some weaknesses, it was found to be promising for eliminating the cases that may create problems for nonoverlap-based methods, for using quantitative data to determine the presence of socially important changes in behavior, and for complementing visual analysis.
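
The abstract does not reproduce the PCES formula, so as background the sketch below computes one widely used nonoverlap index, NAP (nonoverlap of all pairs): the proportion of all baseline-intervention pairs in which the intervention point shows improvement, with ties counted as half. Whether NAP is among the four measures compared in the article is not stated in the abstract; the data are illustrative.

```python
# Nonoverlap of All Pairs (NAP) for a single AB tier: the share of
# baseline-intervention pairs in which the intervention point improves
# on the baseline point; ties count as half a pair.
def nap(baseline, intervention, increase_is_improvement=True):
    pairs = 0.0
    for b in baseline:
        for t in intervention:
            diff = t - b if increase_is_improvement else b - t
            if diff > 0:
                pairs += 1.0
            elif diff == 0:
                pairs += 0.5
    return pairs / (len(baseline) * len(intervention))

print(nap([2, 3, 3, 4], [5, 6, 4, 7]))  # -> 0.97 (15.5 of 16 pairs)
```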


2017 ◽  
Vol 19 (1) ◽  
pp. 18-32 ◽  
Author(s):  
Rumen Manolov ◽  
Antonio Solanas

Single-case experimental designs meeting evidence standards are useful for identifying empirically supported practices. Part of the research process entails data analysis, which can be performed both visually and numerically. In the current text, we discuss several statistical techniques, focusing on the descriptive quantifications that they provide for aspects such as overlap and differences in level and in slope. Two previously published data sets from patients with traumatic brain injury are re-analysed, illustrating several analytical options and the data patterns for which each technique is especially useful, considering its assumptions and limitations. In both cases, the numerical results are interpreted in light of the characteristics of the data as identified via visual inspection. In order to make the current review maximally informative for applied researchers, we point to free user-friendly web applications of the analytical techniques. Moreover, we offer up-to-date references for potentially useful analytical techniques not illustrated in the article. Finally, we point to some analytical challenges and offer tentative recommendations about how to deal with them.
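
As a pointer to what such descriptive quantifications look like, the sketch below computes a level difference (phase means), a slope difference (per-phase ordinary least squares trends), and one overlap index, PND (percentage of intervention points exceeding the best baseline point). The data and the choice of PND are illustrative, not taken from the article.

```python
# Descriptive quantifications for one AB comparison: level change,
# slope change, and PND as an overlap index (assumes an increase in the
# target behavior is the desired direction).
import numpy as np

baseline = np.array([4.0, 5.0, 4.5, 6.0, 5.5])
treatment = np.array([7.0, 8.5, 9.0, 10.0, 11.0])

level_change = treatment.mean() - baseline.mean()
slope_a = np.polyfit(np.arange(len(baseline)), baseline, 1)[0]
slope_b = np.polyfit(np.arange(len(treatment)), treatment, 1)[0]
pnd = 100 * np.mean(treatment > baseline.max())

print(f"level change: {level_change:.2f}")
print(f"slope change: {slope_b - slope_a:.2f}")
print(f"PND: {pnd:.0f}%")
```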


2020 ◽  
pp. 014544552098296
Author(s):  
Rumen Manolov ◽  
René Tanious

The current text deals with the assessment of the consistency of data features from experimentally similar phases and the consistency of effects in single-case experimental designs. Although consistency is frequently mentioned as a critical feature, few quantifications have been proposed so far, namely under the acronyms CONDAP (consistency of data patterns in similar phases) and CONEFF (consistency of effects). Whereas CONDAP allows assessing the consistency of data patterns, the proposals made here focus on the consistency of data features such as level, trend, and variability, as represented by summary measures (the mean, the ordinary least squares slope, and the standard deviation, respectively). The assessment of the consistency of effects is also made in terms of these three data features, while also including the study of the consistency of an immediate effect (if expected). The summary measures are represented as points on a modified Brinley plot, and their similarity is assessed via quantifications of distance. Both absolute and relative measures of consistency are proposed: the former expressed in the same measurement units as the outcome variable and the latter as a percentage. Illustrations with real data sets (multiple-baseline, ABAB, and alternating treatments designs) show the wide applicability of the proposals. We developed a user-friendly website offering both the graphical representations and the quantifications.
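
A sketch, under assumptions, of what distance-based consistency indices could look like for one data feature (the phase mean) across experimentally similar phases (e.g., A1 and A2 of an ABAB design) follows. The absolute version is expressed in the units of the outcome and the relative version as a percentage of the observed data range; the article's exact formulas may differ, and the data are fictitious.

```python
# Distance-based consistency of phase means across similar phases: each
# case contributes an (A1 mean, A2 mean) point on a modified Brinley
# plot, where perfect consistency would put every point on y = x.
import numpy as np

a1_means = np.array([10.0, 12.0, 9.5])   # per-case means, first A phase
a2_means = np.array([11.0, 11.5, 10.5])  # per-case means, second A phase

abs_distance = np.abs(a1_means - a2_means).mean()
data_range = (max(a1_means.max(), a2_means.max())
              - min(a1_means.min(), a2_means.min()))
rel_distance = 100 * abs_distance / data_range

print(f"absolute inconsistency: {abs_distance:.2f} units")
print(f"relative inconsistency: {rel_distance:.1f}% of the data range")
```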


Marketing ZFP ◽  
2019 ◽  
Vol 41 (4) ◽  
pp. 21-32
Author(s):  
Dirk Temme ◽  
Sarah Jensen

Missing values are ubiquitous in empirical marketing research. If missing data are not dealt with properly, the result can be a loss of statistical power and distorted parameter estimates. While traditional approaches for handling missing data (e.g., listwise deletion) are still widely used, researchers can nowadays choose among various advanced techniques such as multiple imputation or full-information maximum likelihood estimation. Given the available software, using these modern missing data methods does not pose a major obstacle. Still, their application requires a sound understanding of their prerequisites and limitations, as well as a deeper understanding of the processes that have led to the missing values in an empirical study. This article is Part 1 of two. It first introduces Rubin's classical definition of missing data mechanisms and an alternative, variable-based taxonomy that provides a graphical representation. Second, it presents a selection of visualization tools, available in different R packages, for describing and exploring missing data structures.
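
The article surveys R packages; as a rough Python analogue, a first descriptive look at a missing data structure can be obtained with pandas alone, as sketched below. The variables and values are illustrative, not taken from the article.

```python
# First look at a missing data structure: per-variable missingness rates,
# frequency of each missingness pattern, and a crude check of whether
# missingness in one variable relates to the values of another.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [34, np.nan, 41, 29, np.nan, 52],
    "income": [52_000, 48_000, np.nan, np.nan, 61_000, 57_000],
    "brand":  ["A", "B", "B", np.nan, "A", "A"],
})

print(df.isna().mean())           # share of missing values per variable
print(df.isna().value_counts())   # frequency of each missingness pattern
print(df.groupby(df["income"].isna())["age"].mean())  # MAR-style hint
```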


1996 ◽  
Vol 33 (9) ◽  
pp. 101-108 ◽  
Author(s):  
Agnès Saget ◽  
Ghassan Chebbo ◽  
Jean-Luc Bertrand-Krajewski

The first flush phenomenon in urban wet weather discharges is presently a controversial subject. Scientists do not agree on its existence, nor on its influence on the sizing of treatment works. These disagreements mainly result from the unclear definition of the phenomenon. The objective of this article is first to provide a simple and clear definition of the first flush and then to apply it to real data in order to obtain results about its frequency of occurrence. The data originate from the French database on the quality of urban wet weather discharges. We use 80 events from 7 separately sewered basins and 117 events from 7 combined sewered basins. The main result is that the first flush phenomenon is very rare; in any case, too rare to serve as the basis of a treatment strategy against the pollution generated by urban wet weather discharges.
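
One common way to operationalize the first flush is through the dimensionless mass-volume curve, sketched below: cumulative pollutant load is plotted against cumulative runoff volume, and an event is flagged when an early fraction of the volume carries a disproportionate share of the load. The 30%/80% cutoff used here is one criterion found in the literature and is only an assumption; the article's own definition may differ, and the event data are fictitious.

```python
# Mass-volume M(V) check for one storm event: does the first 30% of the
# runoff volume convey at least 80% of the pollutant load?
import numpy as np

volume = np.array([5.0, 8.0, 10.0, 12.0, 9.0, 6.0])  # per-step runoff volume
mass = np.array([4.0, 5.0, 3.0, 2.0, 1.0, 0.5])      # per-step pollutant mass

cum_v = np.cumsum(volume) / volume.sum()
cum_m = np.cumsum(mass) / mass.sum()

m_at_30 = np.interp(0.30, cum_v, cum_m)  # load share at 30% of the volume
print(f"first 30% of volume carries {100 * m_at_30:.0f}% of the load")
print("first flush (30/80 criterion):", m_at_30 >= 0.80)
```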


2021 ◽  
Vol 16 (5) ◽  
pp. 1186-1216
Author(s):  
Nikola Simkova ◽  
Zdenek Smutny

Resolving disputes out of court through computer-mediated communication is usually easier, faster, and cheaper than filing an action in court. Artificial intelligence and law (AI & Law) research has gained importance in this area. The article presents the design of the E-NeGotiAtion method for assisted negotiation in business-to-business (B2B) relationships, which uses a genetic algorithm to select the most appropriate solution(s). The aim of the article is to present how the method is designed and to contribute to knowledge on online dispute resolution (ODR), with a focus on B2B relationships. The evaluation of the method consisted of an embedded single-case study in which participants from two countries simulated the realities of negotiation between companies. For comparison, traditional negotiation via e-mail was also conducted. The evaluation confirms that the proposed E-NeGotiAtion method quickly arrives at solutions approaching the optimum that both sides can accept and, just as importantly, that the method facilitates negotiation with the partner and produces a trusted result. The evaluation also demonstrates that the proposed method is economically efficient for the parties to the dispute compared with negotiation via e-mail. For a more complicated task with five or more products, the E-NeGotiAtion method is significantly more suitable than negotiation via e-mail for achieving a resolution that favors one side or the other as little as possible. In conclusion, the proposed method fulfills the dual task of ODR: it resolves disputes and builds confidence.
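
To make the genetic-algorithm component concrete, the sketch below evolves a five-issue agreement whose fitness rewards outcomes that are jointly good and balanced between the sides. The utility functions, weights, encoding, and GA settings are all hypothetical; this is not the article's E-NeGotiAtion implementation, only a minimal illustration of the search idea.

```python
# Minimal genetic algorithm searching for a balanced multi-issue agreement.
# Each gene is a price level 0..10 for one product; each side weights the
# issues differently, so the problem is not zero-sum.
import random

N_ISSUES = 5
W_BUYER = [0.4, 0.1, 0.3, 0.1, 0.1]    # hypothetical issue weights
W_SELLER = [0.1, 0.4, 0.1, 0.3, 0.1]

def utility_buyer(sol):
    return sum(w * (10 - g) for w, g in zip(W_BUYER, sol)) / 10

def utility_seller(sol):
    return sum(w * g for w, g in zip(W_SELLER, sol)) / 10

def fitness(sol):
    # reward agreements that are jointly good and balanced between sides
    u_b, u_s = utility_buyer(sol), utility_seller(sol)
    return u_b + u_s - abs(u_b - u_s)

def evolve(pop_size=40, generations=100, mut_rate=0.1):
    pop = [[random.randint(0, 10) for _ in range(N_ISSUES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, N_ISSUES)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mut_rate:       # point mutation
                child[random.randrange(N_ISSUES)] = random.randint(0, 10)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("agreement:", best,
      "buyer:", round(utility_buyer(best), 2),
      "seller:", round(utility_seller(best), 2))
```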

