Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans

2012, Vol. 102 (5), pp. 531-538
Author(s): R. Shah, S.P. Worner, R.B. Chapman

Abstract: Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection requires at least one (≥1) resistant individual to be present in a sample before management strategies are initiated. Resistance documentation, on the other hand, attempts to estimate the frequency of resistant individuals across the entire population (detecting ≥90% of them). A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans in detecting resistant individuals and documenting their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, whereas the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, the sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1% resistance frequency) and 50 (10% and 20%); when resistant individuals were patchily distributed and sampled systematically, the sample sizes required were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 were required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at frequencies of 10% and 20%, respectively; when resistant individuals were patchily distributed and sampled systematically, sample sizes of 3000 and 1500, respectively, were necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is therefore recommended for insecticide resistance detection and subsequent documentation.
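As a minimal sketch of the binomial calculation that underlies resistance detection under simple random sampling: if resistant individuals occur independently at frequency p, the probability that a sample of size n contains at least one is 1 − (1 − p)^n. The function name below is illustrative, and the resulting values are close to, but not identical to, the simulation-based figures reported in the abstract.

```python
import math

def detection_sample_size(freq, prob_detect=0.95):
    """Smallest simple-random-sample size n such that the sample contains at
    least one resistant individual with probability >= prob_detect, assuming
    resistant individuals occur independently at frequency freq:
        P(at least one) = 1 - (1 - freq)**n
        =>  n >= log(1 - prob_detect) / log(1 - freq)
    """
    return math.ceil(math.log(1.0 - prob_detect) / math.log(1.0 - freq))

for freq in (0.01, 0.10, 0.20):
    print(f"frequency {freq:.0%}: n = {detection_sample_size(freq)}")
```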

Author(s): Maram Salem, Zeinab Amin, Moshira Ismail

This paper presents Bayesian reliability sampling plans for the Weibull distribution based on progressively Type-II censored data with binomial removals. The sampling plans are constructed using a decision-theoretic approach, with a dependent bivariate nonconjugate prior. The total cost of a sampling plan consists of sampling, time, rejection, and acceptance costs. The decision rule is based on the Bayes estimator of the survival function. Lindley's approximation is used to obtain Bayes estimates of the survival function under the quadratic and LINEX loss functions; however, it performs poorly with small sample sizes. A Metropolis-within-Gibbs Markov Chain Monte Carlo (MCMC) algorithm shows significantly improved performance compared with Lindley's approximation. We use simulation studies to evaluate the Bayes risk and determine the optimal sampling plans for different sample sizes, observed numbers of failures, binomial removal probabilities, and minimum acceptable reliability.
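A minimal sketch of how Bayes estimates of the Weibull survival function are formed from posterior draws (such as Metropolis-within-Gibbs output) under the two loss functions named above: the posterior mean under quadratic loss, and −(1/a)·log E[exp(−a·S(t))] under LINEX loss with parameter a. The shape/scale parameterisation and the fake posterior draws are placeholders; the paper's dependent bivariate prior and progressive Type-II censoring are not implemented here.

```python
import numpy as np

def weibull_survival(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t / scale)**shape)."""
    return np.exp(-(t / scale) ** shape)

def bayes_estimates_survival(t, shape_draws, scale_draws, a=1.0):
    """Bayes estimators of S(t) from posterior draws of the Weibull parameters.

    - Quadratic (squared-error) loss: posterior mean of S(t).
    - LINEX loss with parameter a:    -(1/a) * log E[exp(-a * S(t))].
    """
    s_draws = weibull_survival(t, np.asarray(shape_draws), np.asarray(scale_draws))
    s_quad = s_draws.mean()
    s_linex = -np.log(np.mean(np.exp(-a * s_draws))) / a
    return s_quad, s_linex

# Illustrative use with fake draws standing in for real MCMC output.
rng = np.random.default_rng(0)
shape_draws = rng.normal(1.5, 0.1, 5000)
scale_draws = rng.normal(2.0, 0.2, 5000)
print(bayes_estimates_survival(t=1.0, shape_draws=shape_draws, scale_draws=scale_draws, a=2.0))
```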


2018
Author(s): Christopher Chabris, Patrick Ryan Heck, Jaclyn Mandart, Daniel Jacob Benjamin, Daniel J. Simons

Williams and Bargh (2008) reported that holding a hot cup of coffee caused participants to judge a person’s personality as warmer, and that holding a therapeutic heat pad caused participants to choose rewards for other people rather than for themselves. These experiments featured large effects (r = .28 and .31), small sample sizes (41 and 53 participants), and barely statistically significant results. We attempted to replicate both experiments in field settings with more than triple the sample sizes (128 and 177) and double-blind procedures, but found near-zero effects (r = –.03 and .02). In both cases, Bayesian analyses suggest there is substantially more evidence for the null hypothesis of no effect than for the original physical warmth priming hypothesis.
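The abstract does not state which Bayesian analysis was used. As a rough, illustrative way to quantify evidence for the null from only a correlation and a sample size, the sketch below uses the BIC approximation to the Bayes factor for a correlation (a standard shortcut, not necessarily the authors' method); the function name is ours.

```python
import math

def bf01_from_correlation(r, n):
    """BIC approximation to the Bayes factor in favour of the null (no effect)
    over a linear-effect model: BF01 ~= exp((BIC_effect - BIC_null) / 2),
    which for a Pearson correlation reduces to sqrt(n) * (1 - r**2)**(n / 2).
    Values well above 1 indicate evidence for the null.
    """
    return math.sqrt(n) * (1.0 - r ** 2) ** (n / 2.0)

# Replication results reported above (near-zero effects, larger samples).
for r, n in [(-0.03, 128), (0.02, 177)]:
    print(f"r = {r:+.2f}, n = {n}: BF01 ~= {bf01_from_correlation(r, n):.1f}")
```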


2021, Vol. 15 (1)
Author(s): Weitong Cui, Huaru Xue, Lei Wei, Jinghua Jin, Xuewen Tian, ...

Abstract Background: RNA sequencing (RNA-Seq) has been widely applied in oncology for monitoring transcriptome changes. However, the emerging problem that high variation of gene expression levels caused by tumor heterogeneity may affect the reproducibility of differential expression (DE) results has rarely been studied. Here, we investigated the reproducibility of DE results for any given number of biological replicates between 3 and 24 and explored why a great many differentially expressed genes (DEGs) were not reproducible. Results: Our findings demonstrate that poor reproducibility of DE results exists not only for small sample sizes, but also for relatively large sample sizes. Quite a few of the DEGs detected are specific to the samples in use, rather than genuinely differentially expressed under different conditions. Poor reproducibility of DE results is mainly caused by high variation of gene expression levels for the same gene in different samples. Even though biological variation may account for much of the high variation of gene expression levels, the effect of outlier count data also needs to be treated seriously, as outlier data severely interfere with DE analysis. Conclusions: High heterogeneity exists not only in tumor tissue samples of each cancer type studied, but also in normal samples. High heterogeneity leads to poor reproducibility of DEGs, undermining generalization of differential expression results. Therefore, it is necessary to use large sample sizes (at least 10 if possible) in RNA-Seq experimental designs to reduce the impact of biological variability, and DE results should be interpreted cautiously unless soundly validated.
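A minimal sketch of one generic way to measure the reproducibility of DEG calls across replicate subsets (not necessarily the authors' pipeline): draw disjoint subsamples of the same condition, run a DE analysis on each, and compute the overlap of the resulting gene lists. Here `run_de` is a hypothetical placeholder for a real DE tool such as DESeq2 or edgeR.

```python
import random

def jaccard(genes_a, genes_b):
    """Overlap between two DEG lists: |A ∩ B| / |A ∪ B|."""
    a, b = set(genes_a), set(genes_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def disjoint_subsample_pairs(sample_ids, k, n_repeats, seed=0):
    """Yield repeated pairs of disjoint k-replicate subsets from one condition."""
    rng = random.Random(seed)
    for _ in range(n_repeats):
        picked = rng.sample(sample_ids, 2 * k)
        yield picked[:k], picked[k:]

# Usage sketch: run_de() stands in for an actual DE pipeline run on each
# subsample against the same control group; it is not implemented here.
# overlaps = [jaccard(run_de(sub_a, controls), run_de(sub_b, controls))
#             for sub_a, sub_b in disjoint_subsample_pairs(tumor_ids, k=5, n_repeats=20)]
```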


Forests, 2021, Vol. 12 (6), p. 772
Author(s): Bryce Frank, Vicente J. Monleon

The estimation of the sampling variance of point estimators under two-dimensional systematic sampling designs remains a challenge, and several alternative variance estimators have been proposed in the past few decades. In this work, we compared six alternative variance estimators under Horvitz-Thompson (HT) and post-stratification (PS) point estimation regimes. We subsampled a multitude of species-specific forest attributes from a large, spatially balanced national forest inventory to compare the variance estimators. A variance estimator that assumes a simple random sampling design exhibited positive relative bias under both HT and PS point estimation regimes, ranging from 1.23 to 1.88 and from 1.11 to 1.78 for HT and PS, respectively. Alternative estimators reduced this positive bias, with relative biases ranging from 1.01 to 1.66 and from 0.90 to 1.64 for HT and PS, respectively. The alternative estimators generally obtained improved efficiencies under both HT and PS, with relative efficiency values ranging from 0.68 to 1.28 and from 0.68 to 1.39, respectively. We identified two estimators as promising alternatives that provide clear improvements over the simple random sampling estimator for a wide variety of attributes and under both HT and PS estimation regimes.
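To illustrate the contrast, a minimal one-dimensional sketch is shown below: the variance estimator that assumes simple random sampling versus a local successive-difference estimator, a common difference-based alternative for systematic samples. The paper works with two-dimensional designs and HT/PS estimation, which this simplified sketch does not reproduce.

```python
import numpy as np

def var_srs(y):
    """Variance of the sample mean assuming simple random sampling: s^2 / n."""
    y = np.asarray(y, dtype=float)
    return y.var(ddof=1) / y.size

def var_successive_difference(y):
    """Local (successive-difference) estimator for a one-dimensional systematic
    sample: sum of squared first differences / (2 * n * (n - 1)).
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    return np.sum(np.diff(y) ** 2) / (2.0 * n * (n - 1))

# On a spatially trending attribute, the SRS estimator tends to overstate the
# variance of a systematic-sample mean, while the difference-based estimator
# discounts the smooth trend.
y = np.linspace(0, 10, 50) + np.random.default_rng(1).normal(0, 0.5, 50)
print(var_srs(y), var_successive_difference(y))
```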


2021, Vol. 11 (1)
Author(s): Florent Le Borgne, Arthur Chatton, Maxime Léger, Rémi Lenain, Yohann Foucher

Abstract: In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that is able to deal with small samples. We evaluated the performances of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in the two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner performed well for drawing causal inferences, even from small sample sizes.
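A minimal sketch of G-computation for a binary exposure and binary outcome, with a stacked ("super learner"-style) outcome model: fit a model for P(Y=1 | A, W), predict each subject's outcome probability in the two counterfactual worlds (A=1 and A=0), and average the difference. The scikit-learn learners below are illustrative choices, not the ones used in the paper.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def g_computation_risk_difference(W, A, Y, outcome_model):
    """G-computation for binary exposure A and binary outcome Y.

    1. Fit an outcome model for P(Y=1 | A, W).
    2. Predict each subject's outcome probability under A=1 and A=0.
    3. Average and take the difference: the marginal risk difference.
    """
    W, A, Y = np.asarray(W), np.asarray(A), np.asarray(Y)
    outcome_model.fit(np.column_stack([A, W]), Y)
    p1 = outcome_model.predict_proba(np.column_stack([np.ones_like(A), W]))[:, 1]
    p0 = outcome_model.predict_proba(np.column_stack([np.zeros_like(A), W]))[:, 1]
    return p1.mean() - p0.mean()

# An illustrative stacked outcome model.
learner = StackingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("gbt", GradientBoostingClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
# risk_diff = g_computation_risk_difference(W, A, Y, outcome_model=learner)
```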


Author(s): Kathryn Rayson, Louise Waddington, Dougal Julian Hare

Abstract Background: Cognitive behavioural therapy (CBT) is in high demand due to its strong evidence base and cost effectiveness. To ensure CBT is delivered as intended in research, training and practice, fidelity assessment is needed. Fidelity is commonly measured by assessors rating treatment sessions using CBT competence scales (CCSs). Aims: The current review assessed the quality of the literature examining the measurement properties of CCSs and makes recommendations for future research, training and practice. Method: Medline, PsycINFO, Scopus and Web of Science databases were systematically searched to identify relevant peer-reviewed, English-language studies from 1980 onwards. Relevant studies were those primarily examining the measurement properties of CCSs used to assess adult 1:1 CBT treatment sessions. The quality of studies was assessed using a novel tool created for this study, after which a narrative synthesis is presented. Results: Ten studies met inclusion criteria, most of which were assessed as being of 'fair' methodological quality, primarily due to small sample sizes. Construct validity and responsiveness definitions were applied inconsistently across the studies, leading to confusion over what was being measured. Conclusions: Although CCSs are widely used, careful attention must be paid to the quality of research exploring their measurement properties. Consistent definitions of measurement properties, consensus about adequate sample sizes and improved reporting of individual properties are required to ensure the quality of future research.

