The Effectiveness of Increasing Sample Size to Mitigate the Influence of Population Characteristics in Haphazard Sampling

2001 ◽  
Vol 20 (1) ◽  
pp. 169-185 ◽  
Author(s):  
Thomas W. Hall ◽  
Terri L. Herron ◽  
Bethane Jo Pierce ◽  
Terry J. Witt

Over 40 years ago, both Deming (1954) and Arkin (1957) expressed concerns that the composition of samples chosen through haphazard selection may be unrepresentative due to the presence of unintended selection biases. To mitigate this problem, some experts in the field of audit sampling recommend increasing sample sizes by up to 100 percent when utilizing haphazard selection. To examine the effectiveness of this recommendation, 142 participants selected haphazard samples from two populations. The compositions of these samples were then analyzed to determine whether certain population elements were overrepresented, and whether the extent of overrepresentation declined as sample size increased. Analyses disclosed that certain population elements were overrepresented in the samples. Moreover, increasing sample size produced no statistically significant change in the composition of samples from one population, while in the second population it produced a statistically significant but minor reduction in overrepresentation. These results suggest that individuals may be incapable of complying with audit guidelines requiring that haphazard sample selections be made without regard to the observable physical features of population elements, and they cast doubt on the effectiveness of using larger sample sizes to mitigate the problem. Given these findings, standard-setting bodies should reconsider the conditions under which haphazard sampling is sanctioned as a reliable audit tool.

2003 ◽  
Vol 78 (4) ◽  
pp. 983-1002 ◽  
Author(s):  
Randal J. Elder ◽  
Robert D. Allen

This study examines changes in auditor risk assessments and sample size decisions based on information gathered from three large accounting firms for audits during 1994 and 1999. The five-year interval between data collection periods allows us to measure changes in risk assessments and sample sizes between the two periods. Auditors relied on controls and assessed inherent risk below the maximum on most audits, and were more likely to do so in the later period, consistent with a trend of lower risk assessment levels. Average sample sizes declined between 1994 and 1999 for the firms that had larger sample sizes in the earlier period. Overall, we find a significant relationship between inherent risk assessments and sample sizes, but this relationship is stronger in the earlier period and is not significant for all firms, especially in the later period. We find limited evidence of a relationship between control risk and sample sizes.


2018 ◽  
Vol 10 (11) ◽  
pp. 123
Author(s):  
Alberto Cargnelutti Filho ◽  
Cleiton Antonio Wartha ◽  
Jéssica Andiara Kleinpaul ◽  
Ismael Mario Marcio Neu ◽  
Daniela Lixinski Silveira

The aim of this study was to determine the sample size (i.e., number of plants) required to estimate the mean and median of canola (Brassica napus L.) traits of the Hyola 61, Hyola 76, and Hyola 433 hybrids at specified precision levels. At 124 days after sowing, 225 plants of each hybrid were randomly collected. In each plant, morphological (plant height) and productive traits (number of siliques, fresh matter of siliques, fresh matter of aerial part without siliques, fresh matter of aerial part, dry matter of siliques, dry matter of aerial part without siliques, and dry matter of aerial part) were measured. For each trait, measures of central tendency, variability, skewness, and kurtosis were calculated. Sample size was determined by resampling with replacement, using 10,000 resamples. The sample size required to estimate measures of central tendency (mean and median) varies between traits and hybrids. Productive traits required larger sample sizes than morphological traits, and larger sample sizes are required for Hyola 433, Hyola 61, and Hyola 76, in that order. To estimate the mean of canola traits of the Hyola 61, Hyola 76, and Hyola 433 hybrids with a 95% confidence interval amplitude equal to 30% of the estimated mean, 208 plants are required, whereas 661 plants are necessary to estimate the median with the same precision.
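
A minimal sketch of the resampling logic described above: estimate how the amplitude of a 95% bootstrap confidence interval shrinks with sample size, for both the mean and the median. The data here are simulated placeholders, not the canola measurements, and the 30%-of-the-mean target is taken from the abstract.

```python
# Bootstrap estimate of 95% CI amplitude (as % of the mean) vs. sample size.
# The gamma population below is an illustrative stand-in for 225 plants.
import numpy as np

rng = np.random.default_rng(42)
population = rng.gamma(shape=2.0, scale=50.0, size=225)

def ci_amplitude(stat, n, resamples=10_000):
    """Bootstrap 95% CI amplitude of `stat`, as a % of the estimated mean."""
    estimates = np.array([
        stat(rng.choice(population, size=n, replace=True))
        for _ in range(resamples)
    ])
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return 100.0 * (hi - lo) / population.mean()

for n in (25, 50, 100, 208):
    print(f"n={n:4d}  mean CI amplitude: {ci_amplitude(np.mean, n):5.1f}%  "
          f"median CI amplitude: {ci_amplitude(np.median, n):5.1f}%")
```

Because sample medians of skewed data vary more than sample means, the median typically needs a larger n to reach the same amplitude, consistent with the 208 vs. 661 plants reported above.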


2021 ◽  
pp. 0148558X2110642
Author(s):  
Thomas W. Hall ◽  
Lucas A. Hoogduin ◽  
Bethane Jo Pierce ◽  
Jeffrey J. Tsay

Despite technological advances in accounting systems and audit techniques, sampling remains a commonly used audit tool. For critical estimation applications involving low error rate populations, stratified mean-per-unit sampling (SMPU) has the unique advantage of producing trustworthy confidence intervals. However, SMPU is less efficient than other classical sampling techniques because it requires a larger sample size to achieve comparable precision. To address this weakness, we investigated how SMPU efficiency can be improved via three key design choices: (a) stratum boundary selection method, (b) number of sampling strata, and (c) minimum stratum sample size. Our tests disclosed that SMPU efficiency varies significantly with stratum boundary selection method. An iterative search-based method yielded the best efficiency, followed by the Dalenius–Hodges and Equal-Value-Per-Stratum methods. We also found that variations in Dalenius–Hodges implementation procedures yielded meaningful differences in efficiency. Regardless of boundary selection method, increasing the number of sampling strata beyond levels recommended in the professional literature yielded significant improvements in SMPU efficiency. Although a minor factor, smaller values of minimum stratum sample size were found to yield better SMPU efficiency. Based on these findings, suggestions for improving SMPU efficiency are provided. We also present the first known equations for planning the number of sampling strata given various application-specific parameters.
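
The Dalenius–Hodges boundary-selection method compared above is the textbook cumulative square-root-of-frequency (cum √f) rule. Below is a generic sketch of that rule, not the authors' iterative search procedure; the number of histogram classes is an implementation choice, and such variations are exactly the kind the study found to affect efficiency.

```python
# Generic Dalenius-Hodges cum sqrt(f) rule for stratum boundaries.
import numpy as np

def dalenius_hodges_boundaries(values, n_strata, n_classes=100):
    """Return n_strata - 1 boundaries via the cumulative sqrt(f) rule."""
    freq, edges = np.histogram(values, bins=n_classes)
    cum_sqrt_f = np.cumsum(np.sqrt(freq))
    # Place boundaries at equal intervals on the cumulative sqrt(f) scale.
    targets = cum_sqrt_f[-1] * np.arange(1, n_strata) / n_strata
    idx = np.searchsorted(cum_sqrt_f, targets)
    return edges[idx + 1]  # upper class edges at the target positions

# Example: a skewed monetary population split into four strata.
rng = np.random.default_rng(0)
book_values = rng.lognormal(mean=6.0, sigma=1.2, size=5_000)
print(dalenius_hodges_boundaries(book_values, n_strata=4))
```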


Author(s):  
Derek Stephens ◽  
Diana J. Schwerha

The purpose of this study was to determine whether safety professionals can use an ergonomic intervention costing calculator, which integrates performance and quality data into the costing matrix, to increase communication and improve decision making for the company. The sample comprised nine participants: four safety managers, four EHS managers, and one HR generalist. Results showed that all participants found the calculator very useful and well integrated, and that it increased communication across the company. The mean System Usability Scale (SUS) score was 82, which rates the software as acceptable for use. Recommendations from this study include adding features to the calculator, increasing its awareness and availability, and conducting further analysis with larger sample sizes. Limitations include the small sample size and the limited number of interventions tested.
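
For context on the score of 82 reported above, the SUS is scored with a standard formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to a 0–100 range. The response vector below is illustrative only.

```python
# Standard System Usability Scale (SUS) scoring for one respondent.
def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 Likert responses."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd: r-1; even: 5-r
    return total * 2.5

print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # -> 87.5
```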


2001 ◽  
Vol 20 (1) ◽  
pp. 81-96 ◽  
Author(s):  
William F. Messier ◽  
Steven J. Kachelmeier ◽  
Kevan L. Jensen

The American Institute of Certified Public Accountants has recently set forth significant revisions in its nonstatistical audit sample-size decision aid (AICPA 1999). In a controlled setting involving 149 experienced auditors, we test the effects of the new guidance on auditors' sample-size judgments, extending Kachelmeier and Messier's (1990) (KM) investigation of a previous AICPA (1983) decision aid. We find that the current decision aid results in significantly smaller sample sizes than the previous aid. Further, auditors continue to “work backward” in their choice of decision aid inputs, resulting in sample sizes that are more intuitively acceptable. An optional supplemental worksheet added to the AICPA's guidance to assist the auditor in specifying tolerable misstatement generates a marginal increase in sample sizes, but does not eliminate the working-backward phenomenon. However, the supplemental worksheet significantly reduces sample size variability. Additional findings update the conclusions in KM by showing that the excess of decision-aided sample sizes over intuitive sample sizes in their study no longer applies. A final extension addresses a limitation in KM by showing that sample-size judgments are not sensitive to the variation of population size as a separate treatment factor. Overall, this study directs focus to an improved understanding of nonstatistical sampling judgments, which are of increasing importance in the contemporary audit environment.
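
The aid itself is tabular and its exact factors are in the AICPA guidance; as a loose, hedged sketch of the general form such nonstatistical aids take, sample size rises with the ratio of the population's recorded amount to tolerable misstatement, scaled by an assurance factor tied to risk assessments. The function and numbers below are illustrative assumptions, not the 1999 aid, but they show how "working backward" is possible: raising tolerable misstatement shrinks the computed sample toward an intuitively acceptable size.

```python
# Illustrative form of a nonstatistical sample-size decision aid
# (placeholder assurance factors, not the AICPA's actual tables).
def nonstatistical_sample_size(book_value, tolerable_misstatement,
                               assurance_factor):
    return round((book_value / tolerable_misstatement) * assurance_factor)

# An auditor targeting n = 50 can reach it by raising tolerable
# misstatement rather than accepting the larger computed sample.
print(nonstatistical_sample_size(4_000_000, 200_000, 3.0))  # -> 60
print(nonstatistical_sample_size(4_000_000, 240_000, 3.0))  # -> 50
```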


2017 ◽  
Author(s):  
Benjamin O. Turner ◽  
Erick J. Paul ◽  
Michael B. Miller ◽  
Aron K. Barbey

Despite a growing body of research suggesting that task-based functional magnetic resonance imaging (fMRI) studies often suffer from a lack of statistical power due to too-small samples, the proliferation of such underpowered studies continues unabated. Using large independent samples across eleven distinct tasks, we demonstrate the impact of sample size on replicability, assessed at different levels of analysis relevant to fMRI researchers. We find that the degree of replicability for typical sample sizes is modest and that even sample sizes much larger than typical (e.g., N = 100) produce results that fall well short of perfect replicability. Thus, our results join the existing line of work advocating for larger sample sizes. Moreover, because we test sample sizes over a fairly large range and use intuitive metrics of replicability, our hope is that our results are more understandable and convincing to researchers who may have found previous results advocating for larger samples inaccessible.
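
An illustrative simulation, not the authors' fMRI pipeline, of why replicability rises only slowly with N: for a modest true effect, estimate the probability that two independent samples of size N both reach p < .05, a simple test-retest notion of replication. The effect size and alpha are assumptions chosen for illustration.

```python
# Probability that two independent "studies" of size N both detect
# a true one-sample effect of d = 0.3 at p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def replication_rate(n, effect=0.3, sims=2_000):
    hits = 0
    for _ in range(sims):
        significant = [
            stats.ttest_1samp(rng.normal(effect, 1.0, n), 0.0).pvalue < 0.05
            for _ in range(2)  # two independent replications
        ]
        hits += all(significant)
    return hits / sims

for n in (20, 50, 100, 200):
    print(f"N={n:3d}  P(both studies significant) ~ {replication_rate(n):.2f}")
```

Even at N = 100, joint significance across two runs remains well below 1.0, echoing the abstract's point that typical and even larger-than-typical samples fall short of perfect replicability.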


Author(s):  
Uģis Kagainis

The morphology of Oribatida and similar little-known groups of organisms varies considerably, which complicates morphological analysis (e.g., species descriptions). Qualitative analyses have mostly been carried out on small numbers of individuals (n < 25). There is a lack of studies examining how that variation changes in relation to sample size, and insufficient discussion of whether qualitative or quantitative analysis is more appropriate for describing morphological variability. A total of 500 adult Carabodes subarcticus Trägårdh, 1902 (Oribatida) were collected from a local population. Six qualitative and six quantitative traits were characterised using light microscopy and scanning electron microscopy. The relationships between the sample size of different subsamples (n < 500) and morphological variation were examined using randomised selection (10,000 replicates) and calculation of the percentage of cases in which the size values fell within a certain distance (less than 10%, 25%, or 50%) of the range of the reference population (n = 500). Qualitative traits were significantly less variable than quantitative traits due to the binomial distribution of the obtained data; they were therefore less comparable and less interpretable for describing morphological variability. When sample size was small (n < 25), the observed variability was within 10% of the reference population's range in only 2 to 15% of cases. Larger sample sizes resulted in size ranges that approached those of the reference population. Quantitative characterisation and the use of relatively larger sample sizes could thus improve species descriptions by characterising morphological variability more precisely and objectively.
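
A sketch of the randomised-subsampling analysis described above: for subsamples of increasing size n, estimate how often the subsample's range of a quantitative trait falls within 10% of the full reference sample's range (n = 500). The trait values are simulated stand-ins, and the coverage criterion is a simplified reading of the abstract.

```python
# How often does a subsample's trait range come within 10% of the
# reference population's range, as a function of subsample size?
import numpy as np

rng = np.random.default_rng(7)
reference = rng.normal(350.0, 25.0, 500)   # stand-in quantitative trait
ref_range = reference.max() - reference.min()

def coverage(n, replicates=10_000, tolerance=0.10):
    ok = 0
    for _ in range(replicates):
        sub = rng.choice(reference, size=n, replace=False)
        ok += (ref_range - (sub.max() - sub.min())) <= tolerance * ref_range
    return 100.0 * ok / replicates

for n in (10, 25, 100, 250):
    print(f"n={n:3d}  within 10% of reference range in {coverage(n):5.1f}% of cases")
```

Small subsamples systematically understate the population's range, which is the mechanism behind the abstract's 2–15% figure for n < 25.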


1990 ◽  
Vol 29 (03) ◽  
pp. 243-246 ◽  
Author(s):  
M. A. A. Moussa

Various approaches are considered for adjusting clinical trial size for patient noncompliance. Such approaches model the effect of noncompliance through comparison of either two survival distributions or two simple proportions. Models that allow noncompliance and event rates to vary between time intervals are also considered. The approach that models the noncompliance adjustment on the basis of survival functions is conservative and hence requires a larger sample size. The model to be selected for noncompliance adjustment depends upon the available estimates of noncompliance and event rate patterns.
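
For the simple-proportions case, the standard textbook dilution adjustment (consistent with, though not taken verbatim from, the paper) is: if fractions d1 and d0 of the treatment and control arms cross over, the intention-to-treat effect shrinks by a factor of (1 − d1 − d0), so the required sample size inflates by 1/(1 − d1 − d0)².

```python
# Textbook noncompliance inflation for a two-arm trial with simple
# proportions: n_adj = n / (1 - dropout - dropin)**2.
def adjusted_sample_size(n_per_arm, dropin_rate, dropout_rate):
    dilution = 1.0 - dropout_rate - dropin_rate
    return round(n_per_arm / dilution ** 2)

# 200 patients per arm, 10% dropout and 5% drop-in:
print(adjusted_sample_size(200, dropin_rate=0.05, dropout_rate=0.10))  # -> 277
```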


2020 ◽  
Vol 26 (2) ◽  
pp. 218-227
Author(s):  
Yi-Hang Chiu ◽  
Chia-Yueh Hsu ◽  
Mong-Liang Lu ◽  
Chun-Hsin Chen

Background: Clozapine has been used in treatment-resistant patients with schizophrenia. However, only 40% of patients with treatment-resistant schizophrenia respond to clozapine. Many augmentation strategies have been proposed to treat clozapine-resistant patients, but the results are inconclusive. In this review, we examine papers dealing with augmentation strategies in the treatment of clozapine-resistant patients with schizophrenia. Method: We reviewed randomized, double-blind, placebo- or sham-controlled trials (RCTs) for clozapine-resistant patients with schizophrenia in the Embase, PsycINFO, Cochrane, and PubMed databases from January 1990 to June 2019. Results: Antipsychotics, antidepressants, mood stabilizers, brain stimulation (such as electroconvulsive therapy (ECT) and repetitive transcranial magnetic stimulation), and other strategies were used as augmentation in clozapine-resistant patients with schizophrenia. Apart from memantine, supported by two RCTs, and cognitive behavior therapy, supported by two studies, all other effective augmentations, including sulpiride, ziprasidone, duloxetine, mirtazapine, ECT, sodium benzoate, ginkgo biloba, and minocycline, had only one RCT with a limited sample size. Conclusion: No definite effective augmentation strategy was found for clozapine-resistant patients. Some potential strategies with beneficial effects on psychopathology need further study with larger sample sizes to support their efficacy.


2021 ◽  
Vol 11 (3) ◽  
pp. 234
Author(s):  
Abigail R. Basson ◽  
Fabio Cominelli ◽  
Alexander Rodriguez-Palacios

Poor study reproducibility is a concern in translational research. As a solution, it is often recommended to increase the sample size (N), i.e., add more subjects to experiments. The goal of this study was to examine and visualize data multimodality (data with more than one peak/mode) as a cause of study irreproducibility. To emulate the repetition of studies and random sampling of study subjects, we used various methods of random number generation based on preclinical published disease outcome data from human gut microbiota-transplantation rodent studies (e.g., intestinal inflammation; univariate/continuous). We began with unimodal distributions (one mode; Gaussian and binomial) to generate random numbers, and showed that increasing N does not reproducibly identify statistical differences when group comparisons are repeatedly simulated. We then used multimodal distributions (more than one mode; Markov chain Monte Carlo methods of random sampling) to simulate similar multimodal datasets A and B (t-test p = 0.95; N = 100,000), and confirmed that increasing N does not improve the reproducibility of statistical results or the direction of the effects. Data visualization with violin plots of categorical random-data simulations with five integer categories/five groups illustrated how multimodality leads to irreproducibility. Re-analysis of data from a human clinical trial that used maltodextrin as a dietary placebo revealed multimodal responses between human groups and after placebo consumption. In conclusion, increasing N does not necessarily ensure reproducible statistical findings across repeated simulations, due to randomness and multimodality. We clarify how to quantify, visualize, and address disease-data multimodality in research. Data visualization could facilitate study designs focused on disease subtypes/modes, helping to understand person-to-person differences and personalized medicine.
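
A toy simulation (an assumption-laden illustration, not the authors' code) of the abstract's central point: when two groups are repeatedly drawn from the same two-mode mixture, the fraction of repeats declaring a "significant" difference hovers near the nominal 5% at every N, so contradictory significant findings keep appearing no matter how large the sample gets.

```python
# Repeated two-group comparisons on identically distributed bimodal
# outcomes: increasing N does not drive chance "significant" results
# to zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def mixture_sample(n):
    """Two-mode outcome, e.g., responders vs. non-responders."""
    modes = rng.choice([0.0, 4.0], size=n)  # which mode each subject sits in
    return rng.normal(loc=modes, scale=0.5)

for n in (20, 200, 2_000):
    pvals = [stats.ttest_ind(mixture_sample(n), mixture_sample(n)).pvalue
             for _ in range(1_000)]
    frac_sig = np.mean(np.array(pvals) < 0.05)
    print(f"N per group={n:5d}  fraction of 'significant' repeats: {frac_sig:.2f}")
```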

