Incremental Sampling
Recently Published Documents

Total documents: 48 (last five years: 8)
H-index: 10 (last five years: 1)
2021 · Vol 299 · pp. 113599
Author(s): Alexis López, Kent Sorenson, Jeffrey Bamer, Randa Chichakli, Thomas Boivin, et al.

2021
Author(s): Jay Clausen, Samuel Beal, Thomas Georgian, Kevin Gardner, Thomas Douglas, et al.

Metallic residues are distributed heterogeneously onto small-arms range soils from projectile fragmentation upon impact with a target or berm backstop. Incremental Sampling Methodology (ISM) can address the spatially heterogeneous contamination of surface soils on small-arms ranges, but representative kilogram-sized ISM subsamples are affected by the range of metallic residue particle sizes in the sample. This study compares the precision and concentrations of metals in a small-arms range soil sample processed by a puck mill, ring and puck mill, ball mill, and mortar and pestle prior to analysis. The ball mill, puck mill, and puck and ring mill produced acceptable relative standard deviations of less than 15% for the anthropogenic metals of interest (lead (Pb), antimony (Sb), copper (Cu), and zinc (Zn)), with the ball mill exhibiting the greatest precision for Pb, Cu, and Zn. Relative standard deviations for the mortar and pestle, without milling, were considerably higher (40% to >100%) for the anthropogenic metals. Median anthropogenic metal concentrations varied by more than 40% between milling methods, with the greatest concentrations produced by the puck mill, followed by the puck and ring mill and then the ball mill. Metal concentrations were also dependent on milling time, with concentrations stabilizing for the puck mill by 300 s but still increasing for the ball mill over 20 h. Differences in metal concentrations were not directly related to the surface area of the milled sample. Overall, the tested milling methods were successful in producing reproducible data for soils containing metallic residues. However, the effects of milling type and time on concentrations require consideration in environmental investigations.
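The precision criterion referenced above is the relative standard deviation (RSD = standard deviation / mean) across replicate subsamples. The following minimal sketch shows how such a criterion might be evaluated; the concentration values and the 15% threshold location in code are illustrative, not data from the study:

```python
import statistics

# Hypothetical Pb concentrations (mg/kg) from replicate ISM subsamples
# after each sample-preparation method; values are illustrative only.
replicates = {
    "ball mill":         [1210, 1195, 1230, 1205, 1188],
    "puck mill":         [1350, 1290, 1410, 1330, 1375],
    "mortar and pestle": [980, 2150, 640, 1720, 1300],
}

for method, conc in replicates.items():
    mean = statistics.mean(conc)
    rsd = 100 * statistics.stdev(conc) / mean  # relative standard deviation, %
    verdict = "acceptable (<15%)" if rsd < 15 else "poor"
    print(f"{method:17s} mean={mean:7.1f} mg/kg  RSD={rsd:5.1f}%  {verdict}")
```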


2021
Author(s): Elizabeth Corriveau, Jay Clausen

Historically, researchers studying contaminated sites have used grab sampling to collect soil samples. However, this methodology can introduce error into the analysis because it does not account for the wide variation in contaminant concentrations across a site's soils. An alternative method is the Incremental Sampling Methodology (ISM), which previous studies have shown captures the true concentration of contaminants over an area more accurately, even in heterogeneous soils. This report describes the methods and materials used with ISM to collect soil samples, specifically for the purpose of mapping subsurface contamination from site activities. The field data presented indicate that ISM is a promising methodology for collecting subsurface soil samples containing contaminants of concern, including metals and semivolatile organic compounds (SVOCs), for analysis. Ultimately, this study found ISM to be useful for supplying information to assist in the decisions needed for remediation activities.
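The advantage of compositing many increments over taking a few grab samples can be illustrated with a toy simulation. The sketch below assumes a hypothetical lognormal concentration field (a common stand-in for heterogeneous contamination); it is not drawn from the report's field data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heterogeneous site: lognormal concentrations (mg/kg), so a
# handful of grab samples can badly miss the area-wide mean.
site = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)
true_mean = site.mean()

def grab_result(n_grabs=5):
    """Average of a few discrete grab samples."""
    return rng.choice(site, size=n_grabs).mean()

def ism_result(n_increments=50):
    """One ISM result: many increments composited into a single sample."""
    return rng.choice(site, size=n_increments).mean()

grabs = [grab_result() for _ in range(1000)]
isms = [ism_result() for _ in range(1000)]

print(f"true mean            : {true_mean:.1f} mg/kg")
print(f"grab (5 points)      : {np.mean(grabs):.1f} +/- {np.std(grabs):.1f}")
print(f"ISM (50 increments)  : {np.mean(isms):.1f} +/- {np.std(isms):.1f}")
```

Both estimators are unbiased in this toy model, but the ISM composite has a much smaller spread around the true mean, which is the sense in which ISM "more accurately captures the true concentration."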


Author(s): Harsha Gangammanavar, Yifan Liu, Suvrajeet Sen

Stochastic decomposition (SD) has been a computationally effective approach to solve large-scale stochastic programming (SP) problems arising in practical applications. By using incremental sampling, this approach is designed to discover an appropriate sample size for a given SP instance, thus precluding the need for either scenario reduction or arbitrary sample sizes to create sample average approximations (SAA). When compared with the solutions obtained using the SAA procedure, SD provides solutions of similar quality in far less computational time using ordinarily available computational resources. However, previous versions of SD were not applicable to problems with randomness in second-stage cost coefficients. In this paper, we extend its capabilities by relaxing this assumption on cost coefficients in the second stage. In addition to the algorithmic enhancements necessary to achieve this, we also present the details of implementing these extensions, which preserve the computational edge of SD. Finally, we illustrate the computational results obtained from the latest implementation of SD on a variety of test instances generated for problems from the literature. We compare these results with those obtained from the regularized L-shaped method applied to the SAA function of these problems with different sample sizes.
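The core idea of incremental sampling, growing the sample until the solution stabilizes rather than fixing an arbitrary SAA sample size up front, can be sketched on a toy two-stage problem. The snippet below is only an illustration of that principle under a hypothetical newsvendor model with an invented stopping rule; it is not the SD algorithm itself, which builds regularized cutting-plane approximations rather than re-solving a brute-force SAA:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-stage problem (newsvendor): choose order x before demand d is known.
# cost(x, d) = c*x - p*min(x, d); randomness enters only the second stage.
c, p = 1.0, 3.0

def saa_solution(demands):
    """Minimize the sample-average cost by brute force over candidate orders."""
    xs = np.linspace(0, 200, 401)
    costs = [np.mean(c * x - p * np.minimum(x, demands)) for x in xs]
    return xs[int(np.argmin(costs))]

# Incremental sampling: add scenarios in batches until the SAA solution
# stops moving (crude stopping rule, for illustration only).
demands = rng.gamma(shape=4.0, scale=25.0, size=25)
prev_x = saa_solution(demands)
while True:
    demands = np.concatenate([demands, rng.gamma(4.0, 25.0, size=25)])
    x = saa_solution(demands)
    if abs(x - prev_x) < 1.0 or len(demands) >= 2000:
        break
    prev_x = x

print(f"sample size used: {len(demands)}, order quantity: {x:.1f}")
```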


Author(s): Pamela Reinagel

After an experiment has been completed and analyzed, a trend may be observed that is “not quite significant”. Sometimes in this situation, researchers incrementally grow their sample size N in an effort to achieve statistical significance. This is especially tempting in situations when samples are very costly or time-consuming to collect, such that collecting an entirely new sample larger than N (the statistically sanctioned alternative) would be prohibitive. Such post-hoc sampling or “N-hacking” is condemned, however, because it leads to an excess of false positive results. Here Monte-Carlo simulations are used to show why and how incremental sampling causes false positives, but also to challenge the claim that it necessarily produces alarmingly high false positive rates. In a parameter regime that would be representative of practice in many research fields, simulations show that the inflation of the false positive rate is modest and easily bounded. But the effect on false positive rate is only half the story. What many researchers really want to know is the effect N-hacking would have on the likelihood that a positive result is a real effect that will be replicable: the positive predictive value (PPV). This question has not been considered in the reproducibility literature. The answer depends on the effect size and the prior probability of an effect. Although in practice these values are not known, simulations show that for a wide range of values, the PPV of results obtained by N-hacking is in fact higher than that of non-incremented experiments of the same sample size and statistical power. This is because the increase in false positives is more than offset by the increase in true positives. Therefore in many situations, adding a few samples to shore up a nearly-significant result is in fact statistically beneficial. In conclusion, if samples are added after an initial hypothesis test this should be disclosed, and if a p value is reported it should be corrected. But, contrary to widespread belief, collecting additional samples to resolve a borderline p value is not invalid, and can confer previously unappreciated advantages for efficiency and positive predictive value.
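A Monte Carlo simulation in the spirit of the abstract can make the false-positive inflation concrete. The sketch below uses illustrative parameters of my own choosing (initial n = 20 per group, a 0.05–0.10 “not quite significant” window, batches of 5 added up to n = 40); these are not the paper's settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def run_experiment(n0=20, n_add=5, max_n=40, window=(0.05, 0.10), n_hack=True):
    """One two-sample experiment under the null (no true effect).

    If the initial p value lands in the 'not quite significant' window,
    add n_add samples per group and retest, up to max_n per group.
    """
    n = n0
    a, b = rng.normal(size=n), rng.normal(size=n)
    p = stats.ttest_ind(a, b).pvalue
    while n_hack and window[0] < p < window[1] and n < max_n:
        a = np.concatenate([a, rng.normal(size=n_add)])
        b = np.concatenate([b, rng.normal(size=n_add)])
        n += n_add
        p = stats.ttest_ind(a, b).pvalue
    return p < 0.05

reps = 20_000
fpr_fixed = np.mean([run_experiment(n_hack=False) for _ in range(reps)])
fpr_hack = np.mean([run_experiment(n_hack=True) for _ in range(reps)])
print(f"false positive rate, fixed N : {fpr_fixed:.3f}")
print(f"false positive rate, N-hacked: {fpr_hack:.3f}")
```

In this regime the N-hacked false positive rate exceeds the nominal 0.05 only modestly, consistent with the abstract's claim that the inflation is bounded rather than alarming.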


2019 · Vol 4 (5) · pp. e001810
Author(s): Chalapati Rao

Information on cause-specific mortality from civil registration and vital statistics (CRVS) systems is essential for health policy and epidemiological research. Currently, there are critical gaps in the international availability of timely and reliable mortality data, which limits planned progress towards the UN Sustainable Development Goals. This article describes an evidence-based strategic approach for strengthening mortality data from CRVS systems. National mortality data availability scores from the Global Burden of Disease study were used to group countries into those with adequate, partial or negligible mortality data. These were further categorised by geographical region and population size, which showed that there were shortcomings in availability of mortality data in approximately two-thirds of all countries. Existing frameworks for evaluating design and functional status of mortality components of CRVS systems were reviewed to identify themes and topics for assessment. Detailed national programme assessments can be used to investigate systemic issues that are likely to affect death reporting, cause of death ascertainment and data management. Assessment findings can guide interventions to strengthen system performance. The strategic national approach should be customised according to data availability and population size and supported by human and institutional capacity building. Countries with larger populations should use an incremental sampling approach to strengthen CRVS systems and use interim data for mortality estimation. Periodic data quality evaluation is required to monitor system performance and scale up interventions. A comprehensive implementation and operations research programme should be concurrently launched to evaluate the feasibility, success and sustainability of system strengthening activities.


10.29007/h4p9 · 2018
Author(s): Shubham Sharma, Rahul Gupta, Subhajit Roy, Kuldeep S. Meel

Uniform sampling has drawn diverse applications in programming languages and software engineering, such as constrained-random verification (CRV), constrained fuzzing, and bug synthesis. The effectiveness of these applications depends on the uniformity of the test stimuli generated from a given set of constraints. Despite significant progress over the past few years, the performance of state-of-the-art techniques still falls short of that of the heuristic methods employed in industry, which sacrifice either uniformity or scalability when generating stimuli. In this paper, we propose a new approach to uniform generation that builds on recent progress in knowledge compilation. The primary contribution of this paper is marrying knowledge compilation with uniform sampling: our algorithm, KUS, employs state-of-the-art knowledge compilers to first compile constraints into d-DNNF form and then generates samples by making two passes over the compiled representation. We show that KUS is able to significantly outperform existing state-of-the-art algorithms, SPUR and UniGen2, by up to 3 orders of magnitude in terms of runtime while achieving a geometric speedup of 1.7× and 8.3× over SPUR and UniGen2 respectively. Also, KUS achieves a lower PAR-2 score, around 0.82× that of SPUR and 0.38× that of UniGen2. Furthermore, KUS achieves speedups of up to 3 orders of magnitude for incremental sampling. The distribution generated by KUS is statistically indistinguishable from that generated by an ideal uniform sampler. Moreover, KUS is almost oblivious to the number of samples requested.
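The two-pass scheme over the compiled representation can be sketched as follows: a bottom-up pass annotates each d-DNNF node with its model count, and a top-down pass then draws a sample by choosing OR branches with probability proportional to their counts. The snippet below is a simplified illustration assuming a smooth d-DNNF encoded as toy tuples; it omits free-variable handling and is not the KUS implementation:

```python
import random

# Toy smooth d-DNNF nodes: ('lit', v) for a literal (negative = negated),
# ('and', children) for a decomposable conjunction,
# ('or', children) for a deterministic disjunction.

def count(node, memo=None):
    """Pass 1: model count of each node (determinism + smoothness assumed)."""
    memo = {} if memo is None else memo
    key = id(node)
    if key in memo:
        return memo[key]
    kind = node[0]
    if kind == 'lit':
        c = 1
    elif kind == 'and':
        c = 1
        for ch in node[1]:
            c *= count(ch, memo)
    else:  # 'or': disjoint branches over the same variables, so the sum is exact
        c = sum(count(ch, memo) for ch in node[1])
    memo[key] = c
    return c

def sample(node, assignment):
    """Pass 2: walk top-down, picking OR branches proportionally to their counts."""
    kind = node[0]
    if kind == 'lit':
        assignment[abs(node[1])] = node[1] > 0
    elif kind == 'and':
        for ch in node[1]:
            sample(ch, assignment)
    else:  # 'or'
        weights = [count(ch) for ch in node[1]]
        (branch,) = random.choices(node[1], weights=weights, k=1)
        sample(branch, assignment)

# Example: (x1 AND x2) OR (NOT x1 AND x2) -- deterministic on x1, smooth in {x1, x2}
f = ('or', [('and', [('lit', 1), ('lit', 2)]),
            ('and', [('lit', -1), ('lit', 2)])])
a = {}
sample(f, a)
print(a)  # {1: True, 2: True} or {1: False, 2: True}, each with probability 1/2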

