Confocal 3D DNA Cytometry: Assessment of Required Coefficient of Variation by Computer Simulation

2004 ◽  
Vol 26 (3) ◽  
pp. 93-99
Author(s):  
Lennert S. Ploeger ◽  
Jeroen A.M. Beliën ◽  
Neal M. Poulin ◽  
William Grizzle ◽  
Paul J. van Diest

Background: Confocal Laser Scanning Microscopy (CLSM) provides the opportunity to perform 3D DNA content measurements on intact cells in thick histological sections. So far, sample size has been limited by the time-consuming nature of the technology. Since the power of DNA histograms to resolve different stemlines depends on both the sample size and the coefficient of variation (CV) of histogram peaks, interpretation of 3D CLSM DNA histograms might be hampered by both a small sample size and a large CV. The aim of this study was to analyze the required CV for 3D CLSM DNA histograms given a realistic sample size. Methods: By computer simulation, virtual histograms were composed for sample sizes of 20000, 10000, 5000, 1000, and 273 cells and CVs of 30, 25, 20, 15, 10 and 5%. By visual inspection, the histogram quality with respect to resolution of G0/1 and G2/M peaks of a diploid stemline was assessed. Results: As expected, the interpretability of DNA histograms deteriorated with decreasing sample sizes and higher CVs. For CVs of 15% and lower, a clearly bimodal peak pattern with well-distinguishable G0/1 and G2/M peaks was still seen at a sample size of 273 cells, which is our current average sample size with 3D CLSM DNA cytometry. Conclusions: For unambiguous interpretation of DNA histograms obtained using 3D CLSM, a CV of at most 15% is tolerable at currently achievable sample sizes. To resolve smaller near-diploid stemlines, a CV of 10% or better should be aimed for. With currently available 3D imaging technology, this CV is achievable.
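The simulation approach described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' code: it models the G0/1 and G2/M peaks of a diploid stemline as Gaussians centred on 2C and 4C with the stated CV, and the 15% G2/M fraction is an assumed placeholder.

```python
import numpy as np

def simulate_dna_histogram(n_cells, cv, g2m_fraction=0.15, seed=None):
    """Simulate integrated DNA content values for a diploid stemline.

    G0/1 nuclei centre on 2C and G2/M nuclei on 4C; both peaks are modelled
    as Gaussians whose standard deviation equals cv * peak position.  The
    15% G2/M fraction is an illustrative assumption, not a value from the
    study.
    """
    rng = np.random.default_rng(seed)
    n_g2m = int(round(n_cells * g2m_fraction))
    n_g01 = n_cells - n_g2m
    g01 = rng.normal(loc=2.0, scale=cv * 2.0, size=n_g01)
    g2m = rng.normal(loc=4.0, scale=cv * 4.0, size=n_g2m)
    return np.concatenate([g01, g2m])

# Smallest sample size and a 15% CV from the simulated grid.
values = simulate_dna_histogram(n_cells=273, cv=0.15, seed=1)
counts, edges = np.histogram(values, bins=64, range=(0.0, 6.0))
```

Visual inspection of such histograms across the grid of sample sizes and CVs reproduces the kind of assessment the study describes.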

2013 ◽  
Vol 113 (1) ◽  
pp. 221-224 ◽  
Author(s):  
David R. Johnson ◽  
Lauren K. Bachan

In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
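To make the statistical point concrete, a hedged back-of-the-envelope power calculation is shown below, assuming an even split of the 58 couples into two groups of 29 and a two-sided independent-samples t-test; both the split and the test are assumptions for illustration, not details from the original article.

```python
from statsmodels.stats.power import TTestIndPower

# Smallest standardized mean difference detectable with 80% power at
# alpha = 0.05, assuming two groups of 29 marriages each (an assumption).
d_min = TTestIndPower().solve_power(effect_size=None, nobs1=29,
                                    alpha=0.05, power=0.80,
                                    ratio=1.0, alternative='two-sided')
print(f"Minimum detectable effect: d = {d_min:.2f}")  # roughly d = 0.75
```

Under these assumptions, only fairly large group differences would be detectable, which is the core of the objection.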


Author(s):  
Emilie Laurin ◽  
Julia Bradshaw ◽  
Laura Hawley ◽  
Ian A. Gardner ◽  
Kyle A. Garver ◽  
...  

Proper sample size must be considered when designing infectious-agent prevalence studies for mixed-stock fisheries, because bias and uncertainty complicate interpretation of apparent (test) prevalence estimates. Sample sizes vary between stocks and are often smaller than expected during wild-salmonid surveys. Our case example of 2010-2016 survey data of Sockeye salmon (Oncorhynchus nerka) from different stocks of origin in British Columbia, Canada, illustrated the effect of sample size on apparent-prevalence interpretation. Molecular testing (viral RNA RT-qPCR) for infectious hematopoietic necrosis virus (IHNv) revealed large differences in apparent prevalence across wild salmon stocks (much higher from Chilko Lake) and sampling locations (freshwater or marine), indicating effects of both stock and host life stage. Ten of the 13 marine non-Chilko stock-years with IHNv-positive results had small sample sizes (< 30 samples per stock-year), which, with imperfect diagnostic tests (particularly lower diagnostic sensitivity), could lead to inaccurate apparent-prevalence estimation. When calculating sample size for expected apparent prevalence using different approaches, smaller sample sizes often led to decreased confidence in apparent-prevalence results and decreased power to detect a true difference from a reference value.
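As a sketch of why small stock-year samples and imperfect test sensitivity complicate apparent-prevalence interpretation, the snippet below applies the standard Rogan-Gladen correction together with a simple confidence interval. The sensitivity and specificity values are placeholders, not the published characteristics of the IHNv RT-qPCR assay, and the positive count is hypothetical.

```python
import math

def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Estimate true prevalence from apparent (test) prevalence."""
    est = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)

def wald_ci(p, n, z=1.96):
    """Approximate 95% confidence interval for a proportion."""
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(p - half, 0.0), min(p + half, 1.0)

# Illustrative stock-year: 3 IHNv-positive results out of 25 fish sampled,
# with assumed diagnostic sensitivity 0.85 and specificity 0.99.
apparent = 3 / 25
print(rogan_gladen(apparent, 0.85, 0.99))
print(wald_ci(apparent, 25))  # interval is wide at this sample size
```

With fewer than 30 samples per stock-year, the interval around the apparent prevalence remains wide, and the correction for imperfect sensitivity shifts the point estimate appreciably.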


2005 ◽  
Vol 27 (4) ◽  
pp. 225-230
Author(s):  
Lennert S. Ploeger ◽  
André Huisman ◽  
Jurryt van der Gugten ◽  
Dionne M. van der Giezen ◽  
Jeroen A. M. Beliën ◽  
...  

Background: DNA cytometry is a powerful method for measuring genomic instability. Standard approaches that measure the DNA content of isolated cells may introduce selection bias and do not allow interpretation of genomic instability in the context of the tissue. Confocal Laser Scanning Microscopy (CLSM) provides the opportunity to perform 3D DNA content measurements on intact cells in thick histological sections. Because the technique is technically challenging and time consuming, only a small number of usually manually selected nuclei have been analyzed in previous studies, precluding wide clinical evaluation. The aim of this study was to describe the conditions for accurate and fast 3D CLSM cytometry with a minimum of user interaction, in order to arrive at sufficient throughput for pilot clinical applications. Methods: Nuclear DNA in 14 μm thick tissue sections of normal liver and adrenal gland was stained with either YOYO-1 iodide or TO-PRO-3 iodide. Different pre-treatment strategies were evaluated: boiling in citrate buffer (pH 6.0) followed by RNase application for 1 or 18 hours, or hydrolysis. The image stacks obtained with CLSM at microscope magnifications of ×40 or ×100 were analyzed off-line using in-house developed software for semi-automated 3D fluorescence quantitation. To avoid sectioned nuclei, the top and bottom of the stacks were identified from ZX and YZ projections. As a measure of histogram quality, the coefficient of variation (CV) of the diploid peak was assessed. Results: The lowest CV (10.3%) was achieved with a protocol without boiling, with 1 hour of RNase treatment and TO-PRO-3 iodide staining, and final image recording at ×60 or ×100 magnification. A sample size of 300 nuclei was generally achievable. By filtering the set of automatically segmented nuclei based on volume, size and shape, followed by interactive removal of the few remaining faulty objects, a single measurement was completely analyzed in approximately 3 hours. Conclusions: The described methodology makes it possible to obtain a largely unbiased sample of nuclei in thick tissue sections for 3D DNA cytometry by confocal laser scanning microscopy within an acceptable time frame for pilot clinical applications, and with a CV small enough to resolve smaller near-diploid stemlines. This provides a suitable method for 3D DNA ploidy assessment of rare cells selected on the basis of morphologic characteristics and of clinical samples that are too small to prepare adequate cell suspensions.
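The histogram-quality criterion used above, the CV of the diploid peak, can be computed along the following lines. This is a minimal sketch rather than the in-house software described in the abstract; the mode-window gating of G0/1 nuclei is a simplifying assumption.

```python
import numpy as np

def diploid_peak_cv(integrated_fluorescence, window=0.25):
    """Percent CV of the G0/1 peak from integrated nuclear fluorescence.

    Nuclei are gated within +/- `window` (fractional) of the histogram
    mode; this gating rule is an assumption made for illustration.
    """
    values = np.asarray(integrated_fluorescence, dtype=float)
    counts, edges = np.histogram(values, bins=128)
    peak_bin = int(np.argmax(counts))
    mode = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
    gated = values[np.abs(values - mode) <= window * mode]
    return 100.0 * gated.std(ddof=1) / gated.mean()

# Example with ~300 simulated diploid nuclei at a 10% CV.
rng = np.random.default_rng(0)
print(diploid_peak_cv(rng.normal(2.0, 0.2, 300)))
```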


2021 ◽  
Author(s):  
Metin Bulus

A recent systematic review of experimental studies conducted in Turkey between 2010 and 2020 reported that small sample sizes had been a significant drawback (Bulus and Koyuncu, 2021). A small fraction of the studies were small-scale true experiments (subjects randomized into treatment and control groups). The remaining studies consisted of quasi-experiments (subjects in treatment and control groups were matched on a pretest or other covariates) and weak experiments (neither randomized nor matched, but with a control group). They had average sample sizes below 70 across different domains and outcomes. These small sample sizes imply a strong (and perhaps erroneous) assumption about the minimum relevant effect size (MRES) of an intervention before an experiment is conducted; that is, that a standardized intervention effect of Cohen's d < 0.50 is not relevant to education policy or practice. Thus, an introduction to sample size determination for pretest-posttest simple experimental designs is warranted. This study describes the nuts and bolts of sample size determination, derives expressions for optimal design under differential cost per treatment and control unit, provides convenient tables to guide sample size decisions for MRES values in the range 0.20 ≤ Cohen's d ≤ 0.50, and describes the relevant software along with illustrations.
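A hedged sketch of the kind of calculation the study systematizes is given below. It uses the textbook normal-approximation formula for a covariate-adjusted (pretest-posttest) two-group design rather than the paper's own derivations or cost-optimal expressions; the pretest-posttest correlation of 0.60 is an illustrative assumption.

```python
import math
from scipy.stats import norm

def n_per_group(d, rho=0.0, alpha=0.05, power=0.80):
    """Per-group sample size for detecting Cohen's d in a two-group
    pretest-posttest design with pretest-posttest correlation rho,
    using n = 2 * (z_{1-alpha/2} + z_{power})^2 * (1 - rho^2) / d^2."""
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    return math.ceil(2.0 * z ** 2 * (1.0 - rho ** 2) / d ** 2)

# MRES values spanning the range covered by the tables in the abstract.
for d in (0.20, 0.35, 0.50):
    print(d, n_per_group(d, rho=0.60))
```

The required sample size grows rapidly as the MRES shrinks toward d = 0.20, which is the practical reason an implicit MRES of d < 0.50 sits behind the small samples reported in the review.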


2020 ◽  
Author(s):  
Chia-Lung Shih ◽  
Te-Yu Hung

Background: A small sample size (n < 30 per treatment group) is usually enrolled to investigate differences in efficacy between treatments for knee osteoarthritis (OA). The objective of this study was to use simulation to compare the power of four statistical methods for the analysis of small samples when detecting differences in efficacy between two treatments for knee OA. Methods: A total of 10,000 replicates of 5 sample sizes (n = 10, 15, 20, 25, and 30 per group) were generated based on previously reported measures of treatment efficacy. Four statistical methods were used to compare differences in efficacy between treatments: the two-sample t-test (t-test), the Mann-Whitney U-test (M-W test), the Kolmogorov-Smirnov test (K-S test), and the permutation test (perm-test). Results: The bias of the simulated parameter means showed a decreasing trend with sample size, but the CV% of the simulated parameter means varied with sample size for all parameters. For the largest sample size (n = 30), the CV% reached a small level (< 20%) for almost all parameters, but the bias did not. Among the non-parametric tests for the analysis of small samples, the perm-test had the highest statistical power, and its false positive rate was not affected by sample size. However, the power of the perm-test did not reach a high value (80%) even at the largest sample size (n = 30). Conclusion: The perm-test is suggested for the analysis of small samples when comparing differences in efficacy between two treatments for knee OA.
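A condensed version of this simulation design can be written as follows. The effect-size parameters are placeholders rather than the knee-OA efficacy measures used in the study, and the replicate count is kept small so the example runs quickly.

```python
import numpy as np
from scipy import stats

def power_per_test(n, mean1, mean2, sd, n_rep=200, alpha=0.05, seed=1):
    """Estimate the power of four two-group tests by Monte Carlo simulation.

    Group means and SD are illustrative placeholders, not the efficacy
    measures from the knee-OA literature used in the original study.
    """
    rng = np.random.default_rng(seed)
    hits = {"t": 0, "mw": 0, "ks": 0, "perm": 0}
    for _ in range(n_rep):
        a = rng.normal(mean1, sd, n)
        b = rng.normal(mean2, sd, n)
        hits["t"] += stats.ttest_ind(a, b).pvalue < alpha
        hits["mw"] += stats.mannwhitneyu(a, b).pvalue < alpha
        hits["ks"] += stats.ks_2samp(a, b).pvalue < alpha
        perm = stats.permutation_test(
            (a, b), lambda x, y: np.mean(x) - np.mean(y),
            n_resamples=999, vectorized=False)
        hits["perm"] += perm.pvalue < alpha
    return {k: v / n_rep for k, v in hits.items()}

print(power_per_test(n=20, mean1=0.0, mean2=0.8, sd=1.0))
```

Repeating such runs across n = 10 to 30 per group and comparing the resulting rejection rates mirrors the comparison reported in the abstract.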


1999 ◽  
Vol 26 (1) ◽  
pp. 39-44 ◽  
Author(s):  
T. B. Whitaker ◽  
F. G. Giesbrecht ◽  
W. M. Hagler

Loose shelled kernels (LSK) are a defined grade component of farmers stock peanuts and represented, on average, 33.3% of the total aflatoxin mass and 7.7% of the kernel mass among the 120 farmers stock peanut lots studied. The functional relationship between aflatoxin in LSK taken from 2-kg test samples and aflatoxin in farmers stock peanut lots was determined to be linear with zero intercept and a slope of 0.297. The correlation between aflatoxin in LSK and aflatoxin in the lot was 0.844, which suggests that LSK taken from large test samples can be used to estimate the aflatoxin concentration in a farmers stock lot. Using only LSK allows large test samples to be used to estimate the lot concentration, since LSK can easily be screened from a large test sample. If LSK account for 7.7% of the lot kernel mass, a 50-kg sample will yield about 3.9 kg of LSK, which can easily be prepared for aflatoxin analysis. Increasing the test sample size from 2 to 50 kg reduced the coefficient of variation associated with estimating a lot containing 100 parts per billion (ppb) aflatoxin from 114% to 23%. As an example, a farmers stock aflatoxin sampling plan with dual tolerances (10 and 100 ppb) that classified lots into three categories was evaluated for two test sample sizes (2 and 50 kg). The effect of increasing test sample size from 2 to 50 kg on the number of lots classified into each of the three categories was demonstrated when measuring aflatoxin only in LSK.
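The reported regression and LSK mass fraction translate into a couple of one-line helpers; the direction of the regression (lot concentration predicted from LSK concentration) is inferred from the abstract, and the 350 ppb LSK reading below is a hypothetical input.

```python
def lot_aflatoxin_from_lsk(lsk_ppb, slope=0.297):
    """Estimate lot aflatoxin (ppb) from the LSK aflatoxin concentration,
    using the zero-intercept regression reported in the study."""
    return slope * lsk_ppb

def expected_lsk_mass(sample_kg, lsk_fraction=0.077):
    """Mass of loose shelled kernels screened from a test sample."""
    return sample_kg * lsk_fraction

print(lot_aflatoxin_from_lsk(350.0))  # hypothetical LSK reading, ppb
print(expected_lsk_mass(50.0))        # about 3.9 kg, as stated above
```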


Genetics ◽  
1984 ◽  
Vol 108 (4) ◽  
pp. 1035-1045
Author(s):  
F P Doerder ◽  
S L Diblasi

The compound nature of the macronucleus of Tetrahymena thermophila presents multiple opportunities for recombination between genes on the same macronuclear chromosome. Such recombinants should be detectable through their assortment at subsequent amitotic macronuclear divisions. Thus, a macronucleus that is initially AB/ab should produce recombinant assortees of the genotypes Ab/aB. Computer simulation shows that, when recombination occurs two or fewer times per cell cycle, recombinant assortees are produced at experimentally measurable frequencies of less than 40%. At higher recombination frequencies, linked genes appear to assort independently. The simulations also show that recombination during macronuclear development can be distinguished from recombination in subsequent cell cycles only if the first appearance of recombinant assortees is 100 or more fissions after conjugation. The use of macronuclear recombination and assortment as a means of mapping macronuclear genes is severely constrained by the large variances in assortment outcomes; with the small sample sizes achievable experimentally, such mapping is impossible.
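A deliberately simplified sketch of the assortment process being simulated is given below, assuming a macronuclear copy number of about 45 and random partition of duplicated copies at each amitotic fission; it is not the authors' simulation and it ignores macronuclear development and copy-number regulation.

```python
import random

PLOIDY = 45  # approximate macronuclear copy number in T. thermophila

def amitotic_cycle(copies, recomb_per_cycle, rng):
    """One cell cycle: recombination, duplication, random partition.

    Each copy is a tuple of alleles at two linked loci.  This is a
    deliberately simplified model, not the authors' simulation code.
    """
    # Recombination: swap second-locus alleles between two random copies.
    for _ in range(recomb_per_cycle):
        i, j = rng.sample(range(len(copies)), 2)
        (a1, b1), (a2, b2) = copies[i], copies[j]
        copies[i], copies[j] = (a1, b2), (a2, b1)
    # Duplication followed by a random draw of one daughter's copies.
    doubled = copies * 2
    rng.shuffle(doubled)
    return doubled[:PLOIDY]

rng = random.Random(0)
cell = [("A", "B")] * (PLOIDY // 2) + [("a", "b")] * (PLOIDY - PLOIDY // 2)
for fission in range(100):
    cell = amitotic_cycle(cell, recomb_per_cycle=1, rng=rng)
recombinants = sum(c in {("A", "b"), ("a", "B")} for c in cell)
print(f"Recombinant copies after 100 fissions: {recombinants}/{PLOIDY}")
```

Running many such lineages and recording when recombinant assortees first appear gives the kind of variance in outcomes that the abstract identifies as the obstacle to mapping.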


2018 ◽  
Author(s):  
Stephan Geuter ◽  
Guanghao Qi ◽  
Robert C. Welsh ◽  
Tor D. Wager ◽  
Martin A. Lindquist

Multi-subject functional magnetic resonance imaging (fMRI) analysis is often concerned with determining whether there exists a significant population-wide ‘activation’ in a comparison between two or more conditions. Typically this is assessed by testing the average value of a contrast of parameter estimates (COPE) against zero in a general linear model (GLM) analysis. In this work we investigate several aspects of this type of analysis. First, we study the effects of sample size on the sensitivity and reliability of the group analysis, allowing us to evaluate the ability of small-sample studies to effectively capture population-level effects of interest. Second, we assess the difference in sensitivity and reliability when using volumetric or surface-based data. Third, we investigate potential biases in estimating effect sizes as a function of sample size. To perform this analysis we utilize the task-based fMRI data from the 500-subject release of the Human Connectome Project (HCP). We treat the complete collection of subjects (N = 491) as our population of interest, and perform a single-subject analysis on each subject in the population. We investigate the ability to recover population-level effects using a subset of the population and standard analytical techniques. Our study shows that sample sizes of 40 are generally able to detect regions with high effect sizes (Cohen’s d > 0.8), while sample sizes closer to 80 are required to reliably recover regions with medium effect sizes (0.5 < d < 0.8). We find little difference in results when using volumetric or surface-based data with respect to standard mass-univariate group analysis. Finally, we conclude that special care is needed when estimating effect sizes, particularly for small sample sizes.
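The subsampling logic can be illustrated with a short sketch on synthetic data; the array shapes, the uncorrected alpha and the effect sizes are assumptions standing in for the HCP contrasts and the thresholding actually used in the paper.

```python
import numpy as np
from scipy import stats

def recovery_rate(cope, sample_size, n_draws=100, alpha=0.001, seed=0):
    """Fraction of random draws in which each voxel's group effect is
    significant in a one-sample t-test of COPE values against zero.

    `cope` is an (n_subjects, n_voxels) array; thresholding with an
    uncorrected alpha is a simplification of the published analysis.
    """
    rng = np.random.default_rng(seed)
    n_subjects = cope.shape[0]
    hits = np.zeros(cope.shape[1])
    for _ in range(n_draws):
        idx = rng.choice(n_subjects, size=sample_size, replace=False)
        t, p = stats.ttest_1samp(cope[idx], popmean=0.0, axis=0)
        hits += p < alpha
    return hits / n_draws

# Synthetic "population": 491 subjects, two voxels with d = 0.8 and d = 0.5.
pop = np.random.default_rng(1).normal([0.8, 0.5], 1.0, size=(491, 2))
print(recovery_rate(pop, sample_size=40))
```

Even on synthetic data, the high-effect voxel is recovered far more consistently at n = 40 than the medium-effect one, in line with the pattern reported above.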


2017 ◽  
Author(s):  
Xiao Chen ◽  
Bin Lu ◽  
Chao-Gan Yan

Concerns have been raised regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, for widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability / replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, and 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliability, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80, i.e., 40 per group) not only yielded low power (sensitivity < 2%), but also decreased the likelihood that significant results reflect “true” effects (PPV < 0.26) for sex differences. Our findings have implications for the selection of multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility.
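The PPV argument can be reproduced qualitatively with the standard formula relating power, the alpha level and the prior probability that a tested effect is real; the prior of 0.2 used below is an illustrative assumption, so the numbers differ from the PPV < 0.26 reported above.

```python
def positive_predictive_value(power, alpha=0.05, prior=0.2):
    """PPV of a significant finding given power, alpha and the prior
    probability that a tested effect is real.  The default prior of 0.2
    is an illustrative assumption, not a value from the study."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

# With the < 2% sensitivity reported for very small samples, most
# "significant" findings would not reflect true effects.
print(positive_predictive_value(power=0.02))  # low PPV
print(positive_predictive_value(power=0.80))  # PPV rises with power
```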

