Analytical Options for Single-Case Experimental Designs: Review and Application to Brain Impairment

2017, Vol. 19(1), pp. 18-32
Author(s): Rumen Manolov, Antonio Solanas

Single-case experimental designs meeting evidence standards are useful for identifying empirically supported practices. Part of the research process entails data analysis, which can be performed both visually and numerically. In the current text, we discuss several statistical techniques, focusing on the descriptive quantifications that they provide of aspects such as overlap and differences in level and in slope. In both cases, the numerical results are interpreted in light of the characteristics of the data as identified via visual inspection. Two previously published data sets from patients with traumatic brain injury are re-analysed, illustrating several analytical options and the data patterns for which each of these analytical techniques is especially useful, considering their assumptions and limitations. In order to make the current review maximally informative for applied researchers, we point to free, user-friendly web applications of the analytical techniques. Moreover, we offer up-to-date references to potentially useful analytical techniques not illustrated in the article. Finally, we point to some analytical challenges and offer tentative recommendations about how to deal with them.
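As a rough illustration of the kind of descriptive quantifications discussed above (overlap and differences in level and slope), the following sketch computes a nonoverlap index and phase differences in mean level and OLS slope for a hypothetical two-phase (A-B) data set. The data and function names are illustrative only and do not reproduce the authors' implementation.

```python
import numpy as np

def nonoverlap_all_pairs(baseline, intervention):
    """Proportion of (A, B) pairs in which the intervention value exceeds
    the baseline value (ties counted as half), akin to a nonoverlap index."""
    a = np.asarray(baseline, dtype=float)
    b = np.asarray(intervention, dtype=float)
    greater = np.sum(b[:, None] > a[None, :])
    ties = np.sum(b[:, None] == a[None, :])
    return (greater + 0.5 * ties) / (len(a) * len(b))

def ols_slope(y):
    """Ordinary least squares slope of the scores regressed on measurement time."""
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

# Hypothetical baseline (A) and intervention (B) phase scores.
A = [3, 4, 3, 5, 4]
B = [6, 7, 7, 8, 9]

print("Nonoverlap (NAP-like):", nonoverlap_all_pairs(A, B))
print("Difference in level (mean B - mean A):", np.mean(B) - np.mean(A))
print("Difference in slope (B - A):", ols_slope(B) - ols_slope(A))
```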

2019
Author(s): Rumen Manolov

The lack of consensus regarding the most appropriate analytical techniques for data from single-case experimental designs requires justifying the choice of any specific analytical option. The current text mentions some of the arguments, provided by methodologists and statisticians, in favor of several analytical techniques. Additionally, a small-scale literature review is performed in order to explore whether and how applied researchers justify the analytical choices that they make. The review suggests that certain practices are not sufficiently explained. In order to improve the reporting of data analytical decisions, it is proposed to choose and justify the data analytical approach prior to gathering the data. As a possible justification for the data analysis plan, we propose using the expected data pattern as a basis (specifically, expectations about an improving baseline trend and about the immediate or progressive nature of the intervention effect). Although there are multiple alternatives for single-case data analysis, the current text focuses on visual analysis and multilevel models and illustrates an application of these analytical options with real data. User-friendly software is also developed.
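As a minimal sketch of one of the analytical options mentioned (a multilevel model for data from several participants), the following uses statsmodels' MixedLM on invented data. The specification shown (random intercepts, a phase dummy for an immediate change in level, and a common time trend) is only one of many possibilities and is not the authors' exact formulation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(1, 5):                  # four hypothetical participants
    for t in range(20):                  # 20 measurement occasions each
        phase = int(t >= 10)             # 0 = baseline (A), 1 = intervention (B)
        score = 3 + 0.1 * t + 2.5 * phase + 0.3 * pid + rng.normal(0, 0.5)
        rows.append({"id": pid, "time": t, "phase": phase, "score": score})
df = pd.DataFrame(rows)

# Random-intercept model: the phase dummy captures an immediate change in level,
# "time" a common linear trend. Random slopes or phase-specific trends could be
# added depending on the expected data pattern (e.g., a progressive effect).
model = smf.mixedlm("score ~ time + phase", data=df, groups=df["id"])
result = model.fit()
print(result.summary())
```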


2021
Author(s): Rumen Manolov, René Tanious, Belén Fernández

In science in general and, therefore, in the context of single-case experimental designs (SCED), the replication of intervention effects within and across participants is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether (or to what extent) an effect has been replicated are scarce in the behavioral sciences in general and practically nonexistent in the SCED context. We propose an extension of the modified Brinley plot for assessing how many of the effects replicate. In order to make this assessment possible, a definition of replication is suggested on the basis of expert judgment rather than purely statistical criteria. The definition of replication and its graphical representation are justified, presenting their strengths and limitations, and illustrated with real data. User-friendly software is made available for automatically obtaining the graphical representation.
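To make the idea of counting replicated effects concrete, here is a small hypothetical sketch in which each participant's A-B mean difference is compared against an expert-chosen minimal relevant effect and a tolerance band around the median effect. The threshold values and the decision rule are illustrative assumptions, not the definition of replication proposed in the article.

```python
import numpy as np

# Hypothetical per-participant effects (mean B - mean A) and expert-set values.
effects = np.array([2.1, 1.8, 2.4, 0.3, 2.0])
minimal_relevant_effect = 1.0   # smallest change the expert considers meaningful
tolerance = 1.0                 # how close effects must be to count as consistent

meaningful = effects >= minimal_relevant_effect
within_band = np.abs(effects - np.median(effects)) <= tolerance
replicated = meaningful & within_band

print("Effects judged replicated:", replicated.sum(), "of", len(effects))
```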


2021
Author(s): Rumen Manolov, René Tanious, Belén Fernández

The data gathered via single-case experimental designs usually yield more than one effect size quantifying the difference between a baseline condition (A) and an intervention condition (B). The effect sizes resulting from these A-B comparisons have a nested structure: there can be several effect sizes for the same participant, and one or several effect sizes per participant when a study includes several participants. There is no single optimal way to quantitatively aggregate these effect sizes within a study without making assumptions. Thus, in the current text, we propose depicting several possible means of the effect sizes, weighted and unweighted, at different levels. Specifically, we propose extending modified Brinley plots so that they can be used for performing a sensitivity analysis, in order to explore the degree to which the conclusions about intervention effectiveness would vary according to how the aggregate value is computed. We focus on exploratory and descriptive quantifications and plots, in order to avoid inferential tools requiring parametric assumptions or sophisticated analytical techniques that may not be fully and correctly understood. The proposals are illustrated with previously published behavioral data and implemented in a user-friendly, freely available website.
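The following sketch illustrates, with invented numbers, three of the aggregation choices a sensitivity analysis could compare: an unweighted mean across all A-B effect sizes, a mean weighted by the number of measurements in each comparison, and a two-step mean that first averages within each participant. The weighting scheme is only one possibility and is not presented as the authors' recommendation.

```python
import numpy as np

# Hypothetical A-B effect sizes grouped by participant, each paired with the
# number of measurements contributing to that comparison (a possible weight).
effects = {
    "P1": [(2.0, 12), (1.6, 10)],   # (effect size, measurements)
    "P2": [(0.8, 14)],
    "P3": [(1.2, 9), (1.5, 11), (1.1, 8)],
}

all_es = [es for pairs in effects.values() for es, _ in pairs]
all_w = [w for pairs in effects.values() for _, w in pairs]

unweighted = np.mean(all_es)                  # every A-B comparison counts equally
weighted = np.average(all_es, weights=all_w)  # longer comparisons count more
per_participant = [np.mean([es for es, _ in pairs]) for pairs in effects.values()]
two_step = np.mean(per_participant)           # every participant counts equally

print(f"Unweighted mean:        {unweighted:.2f}")
print(f"Weighted mean:          {weighted:.2f}")
print(f"Participant-level mean: {two_step:.2f}")
```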


Healthcare, 2019, Vol. 7(4), pp. 143
Author(s): René Tanious, Patrick Onghena

Health problems are often idiosyncratic in nature and therefore require individualized diagnosis and treatment. In this paper, we show how single-case experimental designs (SCEDs) can meet the requirement to find and evaluate individually tailored treatments. We give a basic introduction to the methodology of SCEDs and provide an overview of the available design options. For each design, we show how an element of randomization can be incorporated to increase the internal and statistical conclusion validity and how the obtained data can be analyzed using visual tools, effect size measures, and randomization inference. We illustrate each design and data analysis technique using applied data sets from the healthcare literature.
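As a hedged illustration of the randomization inference mentioned above, the sketch below runs a simple randomization test for an AB phase design in which the intervention start point is assumed to have been chosen at random from a set of admissible start points. The data, the admissible set, and the choice of the mean difference as test statistic are illustrative assumptions rather than a reproduction of the paper's worked examples.

```python
import numpy as np

def ab_randomization_test(scores, actual_start, possible_starts):
    """Randomization test for an AB design with a randomly chosen start point.

    The test statistic is the mean difference between intervention (B) and
    baseline (A); the p-value is the proportion of admissible start points
    yielding a statistic at least as large as the observed one.
    """
    scores = np.asarray(scores, dtype=float)

    def stat(start):
        return scores[start:].mean() - scores[:start].mean()

    observed = stat(actual_start)
    reference = [stat(s) for s in possible_starts]
    p_value = np.mean([r >= observed for r in reference])
    return observed, p_value

# Hypothetical outcome series: the intervention actually started at occasion 9,
# drawn at random from occasions 5 through 14.
scores = [3, 4, 3, 5, 4, 4, 5, 3, 4, 7, 8, 7, 9, 8, 9, 8, 9, 10, 9, 10]
observed, p = ab_randomization_test(scores, actual_start=9, possible_starts=range(5, 15))
print(f"Observed mean difference: {observed:.2f}, randomization p-value: {p:.3f}")
```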


2020, pp. 014544552098296
Author(s): Rumen Manolov, René Tanious

The current text deals with the assessment of consistency of data features from experimentally similar phases and consistency of effects in single-case experimental designs. Although consistency is frequently mentioned as a critical feature, few quantifications have been proposed so far, namely under the acronyms CONDAP (consistency of data patterns in similar phases) and CONEFF (consistency of effects). Whereas CONDAP allows the assessment of consistency of data patterns, the proposals made here focus on the consistency of data features such as level, trend, and variability, as represented by summary measures (mean, ordinary least squares slope, and standard deviation, respectively). The assessment of consistency of effects is made in terms of these same three data features, while also including the study of the consistency of an immediate effect (if expected). The summary measures are represented as points on a modified Brinley plot, and their similarity is assessed via quantifications of distance. Both absolute and relative measures of consistency are proposed: the former expressed in the same measurement units as the outcome variable and the latter as a percentage. Illustrations with real data sets (multiple baseline, ABAB, and alternating treatments designs) show the wide applicability of the proposals. We developed a user-friendly website to offer both the graphical representations and the quantifications.
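To illustrate the type of quantification described, without reproducing the authors' exact formulas, the sketch below summarizes two experimentally similar baseline phases by their mean, OLS slope, and standard deviation, and expresses the discrepancy in each feature both in the original measurement units (absolute) and as a percentage of a reference value (relative). All numbers, and the choice of the pooled mean as reference, are invented for the example.

```python
import numpy as np

def phase_summaries(y):
    """Mean level, OLS slope on time, and standard deviation of one phase."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    return {"level": y.mean(), "slope": np.polyfit(t, y, 1)[0], "sd": y.std(ddof=1)}

# Two hypothetical baseline phases from an ABAB design.
a1 = [4, 5, 4, 6, 5]
a2 = [5, 4, 6, 5, 6]
s1, s2 = phase_summaries(a1), phase_summaries(a2)

# Reference value for expressing relative (percentage) discrepancies;
# the pooled mean level is used here purely for illustration.
reference = np.mean(a1 + a2)

for feature in ("level", "slope", "sd"):
    absolute = abs(s1[feature] - s2[feature])   # same units as the outcome
    relative = 100 * absolute / reference       # percentage of the reference
    print(f"{feature}: absolute = {absolute:.2f}, relative = {relative:.1f}%")
```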


Author(s): Kathleen Gerson, Sarah Damaske

Qualitative interviewing is one of the most widely used methods in social research, but it is arguably the least well understood. To address that gap, this book offers a theoretically rigorous, empirically rich, and user-friendly set of strategies for conceiving and conducting interview-based research. Much more than a how-to manual, the book shows why depth interviewing is an indispensable method for discovering and explaining the social world—shedding light on the hidden patterns and dynamics that take place within institutions, social contexts, relationships, and individual experiences. It offers a step-by-step guide through every stage in the research process, from initially formulating a question to developing arguments and presenting the results. To do this, the book shows how to develop a research question, decide on and find an appropriate sample, construct an interview guide, conduct probing and theoretically focused interviews, and systematically analyze the complex material that depth interviews provide—all in the service of finding and presenting important new empirical discoveries and theoretical insights. The book also lays out the ever-present but rarely discussed challenges that interviewers routinely encounter and then presents grounded, thoughtful ways to respond to them. By addressing the most heated debates about the scientific status of qualitative methods, the book demonstrates how depth interviewing makes unique and essential contributions to the research enterprise. With an emphasis on the integral relationship between carefully crafted research and theory building, the book offers a compelling vision for what the “interviewing imagination” can and should be.


Genetics, 1997, Vol. 147(4), pp. 1855-1861
Author(s): Montgomery Slatkin, Bruce Rannala

A theory is developed that provides the sampling distribution of low-frequency alleles at a single locus under the assumption that each allele is the result of a unique mutation. The number of copies of each allele is assumed to follow a linear birth-death process with sampling. If the population is of constant size, standard results from the theory of birth-death processes show that the distribution of the number of copies of each allele is logarithmic and that the joint distribution of the numbers of copies of k alleles found in a sample of size n follows the Ewens sampling distribution. If the population from which the sample was obtained was increasing in size, if there are different selective classes of alleles, or if there are differences in penetrance among alleles, the Ewens distribution no longer applies. Likelihood functions for a given set of observations are obtained under different alternative hypotheses. These results are applied to published data from the BRCA1 locus (associated with early-onset breast cancer) and the factor VIII locus (associated with hemophilia A) in humans. In both cases, the sampling distribution of alleles allows rejection of the null hypothesis, but relatively small deviations from the null model can account for the data. In particular, roughly the same population growth rate appears consistent with both data sets.
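As a hedged numerical illustration of the constant-population-size case described above, the sketch below evaluates the logarithmic (log-series) distribution for the number of copies of an allele and the log-likelihood of a hypothetical sample of allele copy counts. The parameter value and the data are invented, and the sketch does not reproduce the authors' likelihood functions for growing populations, selective classes, or differences in penetrance.

```python
import numpy as np

def log_series_pmf(k, theta):
    """Logarithmic (log-series) distribution:
    P(K = k) = -theta**k / (k * ln(1 - theta)), for k = 1, 2, ... and 0 < theta < 1."""
    k = np.asarray(k, dtype=float)
    return -(theta ** k) / (k * np.log(1.0 - theta))

# Hypothetical numbers of copies observed for each of several rare alleles.
copies = np.array([1, 1, 2, 1, 3, 1, 2])
theta = 0.7  # illustrative parameter value only

log_likelihood = np.sum(np.log(log_series_pmf(copies, theta)))
print("Log-likelihood under the log-series model:", round(log_likelihood, 3))
```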


2021, Vol. 4(1), pp. 251524592092800
Author(s): Erin M. Buchanan, Sarah E. Crain, Ari L. Cunningham, Hannah R. Johnson, Hannah Stash, ...

As researchers embrace open and transparent data sharing, they will need to provide information about their data that effectively helps others understand their data sets’ contents. Without proper documentation, data stored in online repositories such as OSF will often be rendered unfindable and unreadable by other researchers and indexing search engines. Data dictionaries and codebooks provide a wealth of information about variables, data collection, and other important facets of a data set. This information, called metadata, provides key insights into how the data might be further used in research and facilitates search-engine indexing to reach a broader audience of interested parties. This Tutorial first explains terminology and standards relevant to data dictionaries and codebooks. Accompanying information on OSF presents a guided workflow of the entire process from source data (e.g., survey answers on Qualtrics) to an openly shared data set accompanied by a data dictionary or codebook that follows an agreed-upon standard. Finally, we discuss freely available Web applications to assist this process of ensuring that psychology data are findable, accessible, interoperable, and reusable.
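As a small illustration of the kind of variable-level metadata a data dictionary records, the sketch below derives a minimal dictionary from a pandas DataFrame and writes it to JSON. The field names follow no particular standard, the file name is arbitrary, and the workflow is not the one documented in the Tutorial's accompanying OSF materials.

```python
import json
import pandas as pd

# Hypothetical survey export (e.g., downloaded responses).
df = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "age": [24, 31, 28],
    "condition": ["control", "treatment", "treatment"],
})

# Variable descriptions would normally be written by the researcher.
descriptions = {
    "participant_id": "Anonymous participant identifier",
    "age": "Self-reported age in years",
    "condition": "Experimental condition assignment",
}

dictionary = [
    {
        "variable": col,
        "type": str(df[col].dtype),
        "description": descriptions.get(col, ""),
        "example": df[col].iloc[0],
    }
    for col in df.columns
]

with open("data_dictionary.json", "w") as f:
    json.dump(dictionary, f, indent=2, default=str)

print(json.dumps(dictionary, indent=2, default=str))
```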

