Guided Visibility Sampling++

Author(s):  
Thomas Koch ◽  
Michael Wimmer

Visibility computation is a common problem in the field of computer graphics. Examples include occlusion culling, where hidden parts of the scene are culled away, and global illumination simulations, which rely on the mutual visibility of pairs of points to compute lighting. In this paper, an aggressive from-region visibility technique called Guided Visibility Sampling++ (GVS++) is presented. The proposed technique improves on the Guided Visibility Sampling algorithm through refined sampling strategies, achieving low error rates on various scenes while being over four orders of magnitude faster than the original CPU-based Guided Visibility Sampling implementation. We present sampling strategies that adaptively compute sample locations and use ray casting to determine the set of triangles visible from a flat or volumetric rectangular region in space. This set is called a potentially visible set (PVS). Starting from initial random sampling, subsequent exploration phases progressively grow an intermediate solution, and a termination criterion decides when the PVS search stops. A modern implementation using the Vulkan graphics API and RTX ray tracing is discussed. Furthermore, we show optimizations that make the implementation over 20 times faster than a naive implementation.
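The abstract outlines the core loop of from-region visibility sampling: cast rays originating in the view cell, record the first triangle each ray hits, and collect those triangles into the PVS. The following minimal Python sketch covers only that initial random-sampling phase under simple assumptions; it is not the authors' GVS++ implementation, and names such as sample_pvs and ray_triangle are illustrative, not from the paper. The adaptive exploration phases, termination criterion and Vulkan/RTX implementation are omitted.

```python
# Minimal sketch of from-region visibility sampling: cast random rays from an
# axis-aligned box view cell and collect the IDs of the triangles hit first.
# Brute-force intersection is used for clarity (no acceleration structure).
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle intersection; returns hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def sample_pvs(cell_min, cell_max, triangles, n_rays=10000, rng=None):
    """Estimate a PVS for a box view cell by purely random ray sampling."""
    rng = rng or np.random.default_rng(0)
    pvs = set()
    for _ in range(n_rays):
        o = rng.uniform(cell_min, cell_max)           # random origin in the cell
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                        # uniform random direction
        best_t, best_id = np.inf, None
        for tri_id, (v0, v1, v2) in enumerate(triangles):
            t = ray_triangle(o, d, v0, v1, v2)
            if t is not None and t < best_t:
                best_t, best_id = t, tri_id
        if best_id is not None:
            pvs.add(best_id)                          # closest hit is visible
    return pvs
```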

Author(s):  
Augusto Hernandez-Solis ◽  
Christian Ekberg ◽  
Arvid Ødegård Jensen ◽  
Christophe Demazière ◽  
Ulf Bredolt

In recent years, more realistic safety analyses of nuclear reactors have been based on best-estimate (BE) computer codes. Because their predictions are unavoidably affected by conceptual, aleatory and experimental sources of uncertainty, an uncertainty analysis is needed if useful conclusions are to be obtained from BE codes. In this paper, statistical uncertainty analyses of cross-section-averaged void fraction calculations with the POLCA-T system code, based on the BWR Full-Size Fine-Mesh Bundle Test (BFBT) benchmark, are presented by means of two different sampling strategies: Latin Hypercube Sampling (LHS) and Simple Random Sampling (SRS). LHS has the property of densely stratifying across the range of each input probability distribution, allowing much better coverage of the input uncertainties than SRS. The aim here is to compare both uncertainty analyses on the prediction of the BWR assembly axial void profile in steady state, and on the transient void fraction prediction at a certain axial level during a simulated re-circulation pump trip scenario. It is shown that the replicated void fraction mean (in either steady-state or transient conditions) has less variability when using LHS than SRS for the same number of calculations (i.e. the same input-space sample size), even if the resulting void fraction axial profiles are non-monotonic. It is also shown that the void fraction uncertainty limits achieved with SRS by running 458 calculations (the sample size required to cover 95% of 8 uncertain input parameters with 95% confidence) are matched by LHS with only 100 calculations. These are clear indications of the advantages of using LHS.
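To illustrate why LHS produces less variable replicated means than SRS at the same sample size, here is a small, self-contained Python sketch. The response function is a generic stand-in, not the POLCA-T void-fraction model, and the eight uniform inputs are purely illustrative assumptions.

```python
# Compare Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS)
# on [0, 1)^k using a placeholder response in place of a code calculation.
import numpy as np

def srs(n, k, rng):
    """n simple random samples of k independent U(0,1) inputs."""
    return rng.random((n, k))

def lhs(n, k, rng):
    """n Latin Hypercube samples: each input range is split into n equal
    strata and every stratum is sampled exactly once per dimension."""
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n   # one point per stratum
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])                 # decouple dimensions
    return u

def response(x):
    # placeholder "code output" (e.g. a void fraction) as a function of inputs
    return 0.4 + 0.3 * x[:, 0] - 0.1 * x[:, 1] ** 2 + 0.05 * x.sum(axis=1)

rng = np.random.default_rng(1)
n, k, reps = 100, 8, 200
means_srs = [response(srs(n, k, rng)).mean() for _ in range(reps)]
means_lhs = [response(lhs(n, k, rng)).mean() for _ in range(reps)]
print("replicated-mean std, SRS:", np.std(means_srs))
print("replicated-mean std, LHS:", np.std(means_lhs))     # typically smaller
```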


2003 ◽  
Vol 10 (1) ◽  
pp. 63
Author(s):  
Stuart E. Marsh ◽  
Thomas K. Park ◽  
Barbara A. Eiswerth ◽  
Mohamud H. Farah ◽  
Douglas S. Rautenkranz ◽  
...  

This article discusses the sampling scheme employed by the Six Cities project to ensure that all areas of habitation have a chance of being selected, that we know what that chance is, and that we are able to critically evaluate the sampling strategy after it has been carried out. A weighting strategy that is slightly different from one used only to do research is therefore employed. The article describes a procedure for generating two kinds of random sample points for areas of change and of no change. Finally, a few simple rules for incorporating socioeconomic, demographic, and other relevant information into the sampling frame without introducing bias into the sample are discussed.
Key words: sampling strategies; random sampling; sampling bias; local knowledge; Six Cities project; remote sensing; urban areas in Africa
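As a rough illustration of drawing random sample points separately for areas of change and of no change, with design weights that keep later estimates unbiased, the following hypothetical Python sketch samples pixel locations from a boolean change mask. The actual Six Cities procedure and weighting scheme are more elaborate than this.

```python
# Stratified random point selection over a classified raster: "change" and
# "no-change" pixels form two strata, each sampled at random, with design
# weights equal to the inverse inclusion probability.
import numpy as np

def sample_points(change_mask, n_change, n_stable, rng=None):
    """Draw pixel coordinates from each stratum of a boolean change mask."""
    rng = rng or np.random.default_rng(0)
    change_idx = np.argwhere(change_mask)          # (row, col) of change pixels
    stable_idx = np.argwhere(~change_mask)
    pts_change = change_idx[rng.choice(len(change_idx), n_change, replace=False)]
    pts_stable = stable_idx[rng.choice(len(stable_idx), n_stable, replace=False)]
    w_change = len(change_idx) / n_change          # stratum size / sample size
    w_stable = len(stable_idx) / n_stable
    return (pts_change, w_change), (pts_stable, w_stable)

# toy example: a 100 x 100 scene with a 20 x 20 block of change
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
(change_pts, wc), (stable_pts, ws) = sample_points(mask, 25, 75)
print(len(change_pts), wc, len(stable_pts), ws)
```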


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Padmi Nagirikandalage ◽  
Arnaz Binsardi ◽  
Kaouther Kooli

Purpose
This paper aims to investigate how professionals such as accountants, auditors, senior civil servants and academics perceive the use of audit sampling strategies adopted to increase the detection rates of fraud and corruption within the public sector in Africa. It also examines the respondents' perceived values regarding the reasons for committing fraud, the types of fraud and corruption, and the aspects of audit sampling strategies used to tackle fraud.

Design/methodology/approach
This research uses non-parametric statistics and logistic regression to analyse the respondents' opinions regarding the state of fraud and corruption in Africa (particularly in Tunisia and non-Tunisian countries), the common factors behind people committing fraud, including the types of fraud and corruption, and the respondents' opinions on the use of audit sampling strategies (non-random and random) to examine instances of fraud and corruption.

Findings
The findings indicate that most respondents prefer to use non-probabilistic audit sampling rather than more robust sampling strategies such as random sampling and systematic random sampling to detect fraud and corruption. In addition, although there are some minor statistical differences between the countries in terms of the respondents' perceived values on skimming fraud and on the use of audit random sampling to tackle rampant corruption in Africa, the overall findings indicate that opinions do not differ significantly between the respondents from Tunisia and other countries in terms of the types of fraud, the reasons for committing fraud and the audit sampling strategies used to investigate fraud.

Research limitations/implications
This research serves as an analytical exploratory study to instigate further audit sampling research to combat rampant fraud and corruption in the public sector in Africa.

Originality/value
There are few, if any, studies investigating the application of audit sampling strategies in African countries, particularly the application of random and non-random audit sampling strategies to detect fraudulent activities and corruption. Correspondingly, this research carries strategic implications for accountants and auditors seeking to detect fraudulent activities and corruption in Africa.
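For reference, the two probabilistic selection schemes contrasted with non-probabilistic sampling in the abstract can be sketched in a few lines of Python; the transaction identifiers and sample sizes below are hypothetical.

```python
# Two probabilistic audit-selection approaches: simple random sampling and
# systematic random sampling of transaction records.
import random

def simple_random_sample(records, n, seed=0):
    """Select n records uniformly at random without replacement."""
    rng = random.Random(seed)
    return rng.sample(records, n)

def systematic_random_sample(records, n, seed=0):
    """Select every k-th record after a random start, with k = N // n."""
    rng = random.Random(seed)
    k = len(records) // n
    start = rng.randrange(k)
    return records[start::k][:n]

transactions = [f"TXN-{i:05d}" for i in range(1, 10001)]   # hypothetical ledger
print(simple_random_sample(transactions, 5))
print(systematic_random_sample(transactions, 5))
```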


1993 ◽  
Vol 43 (1-2) ◽  
pp. 65-74
Author(s):  
N. Mukhopadhyay ◽  
S. Chattopadhyay

Sequential and multistage sampling strategies via simple random sampling without replacement are proposed for simultaneously estimating several proportions in a finite population. Various asymptotic first-order properties are addressed, and some limited moderate-sample performance results are also included. AMS (1980) Subject Classification: Primary 62L99; Secondary 62L12.
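The paper's sequential and multistage procedures are not reproduced here, but a minimal Python sketch of the underlying building block, estimating a single proportion from a simple random sample drawn without replacement with the finite-population correction, may help fix ideas. The population and sample size are toy assumptions.

```python
# Estimate a proportion under simple random sampling without replacement,
# applying the finite-population correction to the variance estimate.
import math
import random

def estimate_proportion(population, n, seed=0):
    """Return (p_hat, standard_error) for a 0/1 attribute under SRSWOR."""
    rng = random.Random(seed)
    sample = rng.sample(population, n)          # drawn without replacement
    N = len(population)
    p_hat = sum(sample) / n
    fpc = (N - n) / (N - 1)                     # finite-population correction
    se = math.sqrt(fpc * p_hat * (1 - p_hat) / n)
    return p_hat, se

# toy finite population: 10,000 units, 30% of which carry the attribute
pop = [1] * 3000 + [0] * 7000
print(estimate_proportion(pop, 400))
```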


2004 ◽  
Vol 78 (1) ◽  
pp. 03-11 ◽  
Author(s):  
R. M. Lewis ◽  
B. Grundy ◽  
L. A. Kuehn

With an increase in the number of candidate genes for important traits in livestock, effective strategies for incorporating such genes into selection programmes are increasingly important. Those strategies depend in part on the frequency of a favoured allele in a population. Since comprehensive genotyping of a population is seldom possible, we investigate the consequences of sampling strategies on the reliability of the gene frequency estimate for a bi-allelic locus. Even within a subpopulation or line, often only a proportion of individuals will be genotype tested. However, through segregation analysis, probable genotypes can be assigned to individuals that themselves were not tested, using known genotypes of relatives and a starting (presumed) gene frequency. The value of these probable genotypes for estimating gene frequency was considered. A subpopulation or line was stochastically simulated and sampled at random, over a cluster of years, or by favouring a particular genotype. Each line was simulated (replicated) 1000 times. The reliability of the gene frequency estimates depended on the sampling strategy used. With random sampling, even when only a small proportion of a line was genotyped (0.10), the gene frequency of the population was well estimated from the across-line mean. When information on probable genotypes of untested individuals was combined with known genotypes, the between-line variance in gene frequency was estimated well; including probable genotypes overcame problems of statistical sampling. When the sampling strategy favoured a particular genotype, unsurprisingly the estimate of gene frequency was biased towards the favoured allele. Using probable genotypes lessened the bias, but the estimate of gene frequency still reflected the sampling strategy rather than the true population frequency. When sampling was confined to a few clustered years, the estimation of gene frequency was biased for the generations preceding the sampling event, particularly when the presumed starting gene frequency differed from the true population gene frequency. The potential risks of basing inferences about a population on a potentially biased sample are discussed.
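A simplified, hypothetical Python simulation can illustrate the headline effect: genotyping a random fraction of a line recovers the allele frequency well, whereas sampling that favours carriers of an allele biases the estimate upwards. This sketch assumes Hardy-Weinberg genotype frequencies and omits the paper's pedigree-based segregation analysis and probable genotypes.

```python
# Compare a gene-frequency estimate at a bi-allelic locus under random
# sampling versus sampling that favours carriers of the favoured allele.
import numpy as np

rng = np.random.default_rng(2)
true_p = 0.3                                   # true frequency of allele A
N = 2000                                       # line size
# genotypes coded as the number of copies of allele A (0, 1 or 2)
genotypes = rng.binomial(2, true_p, size=N)

def allele_freq(sample):
    return sample.sum() / (2 * len(sample))

# random sampling: genotype 10% of the line
random_idx = rng.choice(N, size=N // 10, replace=False)
# biased sampling: carriers of allele A are 3 times as likely to be genotyped
w = np.where(genotypes > 0, 3.0, 1.0)
biased_idx = rng.choice(N, size=N // 10, replace=False, p=w / w.sum())

print("true frequency:    ", true_p)
print("random sampling:   ", allele_freq(genotypes[random_idx]))
print("favoured-genotype: ", allele_freq(genotypes[biased_idx]))   # biased upward
```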


Crop Science ◽  
2001 ◽  
Vol 41 (1) ◽  
pp. 241-246 ◽  
Author(s):  
C. Grenier ◽  
P. Hamon ◽  
P.J. Bramel‐Cox
