importance sampling techniques
Recently Published Documents


TOTAL DOCUMENTS: 41 (five years: 2)
H-INDEX: 11 (five years: 0)

2021 ◽ Vol 7 ◽ Author(s): Wouter Dillen, Geert Lombaert, Mattias Schevenels

Metaheuristic optimization algorithms are strongly present in the literature on discrete optimization. They typically (1) use stochastic operators, making each run unique, and (2) have algorithmic control parameters with an unpredictable impact on convergence. Although both (1) and (2) affect algorithm performance, the effect of the control parameters is mostly disregarded in the literature on structural optimization, making it difficult to formulate general conclusions. In this article, a new method is presented to assess the performance of a metaheuristic algorithm in relation to its control parameter values. A Monte Carlo simulation is conducted in which several independent runs of the algorithm are performed with random control parameter values. In each run, a measure of performance is recorded. The resulting dataset is then restricted to the best-performing runs. The frequency with which each parameter value occurs in this subset reveals which values are responsible for good performance. Importance sampling techniques are used to ensure that inferences from the simulation are sufficiently accurate. The new performance assessment method is demonstrated for the genetic algorithm in MATLAB R2018b, applied to seven common structural optimization test problems, where it successfully detects unimportant parameters (for the problems at hand) while identifying well-performing values for the important parameters. For two of the test problems, a better solution is found than the best reported so far in the literature.
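As an illustration, the assessment procedure described above can be sketched as follows. The `toy_run` objective, the parameter names (`pop_size`, `mutation_rate`), and their candidate values are hypothetical stand-ins for a real metaheuristic, and the importance-sampling correction the authors use to sharpen the inference is omitted; this shows only the basic Monte Carlo selection step:

```python
import random
from collections import Counter

def toy_run(pop_size, mutation_rate, rng):
    # Hypothetical stand-in for one metaheuristic run: returns the
    # best objective value found (lower is better). The outcome
    # depends strongly on mutation_rate and not at all on pop_size,
    # mimicking an important and an unimportant control parameter.
    penalty = abs(mutation_rate - 0.1) * 10.0
    return penalty + rng.gauss(0.0, 0.5)

def assess_parameters(n_runs=2000, keep_frac=0.1, seed=0):
    rng = random.Random(seed)
    runs = []
    for _ in range(n_runs):
        # Draw random control parameter values for this run.
        params = {
            "pop_size": rng.choice([20, 50, 100, 200]),
            "mutation_rate": rng.choice([0.01, 0.05, 0.1, 0.2, 0.5]),
        }
        score = toy_run(params["pop_size"], params["mutation_rate"], rng)
        runs.append((score, params))
    # Keep only the best-performing fraction of runs.
    runs.sort(key=lambda t: t[0])
    best = [p for _, p in runs[: int(n_runs * keep_frac)]]
    # The frequency of each value among the best runs reveals which
    # values drive good performance; a near-flat histogram flags an
    # unimportant parameter.
    return {name: Counter(p[name] for p in best) for name in best[0]}
```

Running `assess_parameters()` on this toy objective concentrates the `mutation_rate` histogram around 0.1 while leaving `pop_size` roughly uniform, which is the signature the method looks for.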


2017 ◽ Author(s): François Rousset, Champak Reddy Beeravolu, Raphaël Leblois

Abstract
Likelihood methods are being developed for inference of migration rates and past demographic changes from population genetic data. We survey an approach to such inference using sequential importance sampling techniques derived from coalescent and diffusion theory. The consistent application and assessment of this approach has required the re-implementation of methods often considered in the context of computer experiments, in particular Kriging, which is used as a smoothing technique to infer a likelihood surface from likelihoods estimated at various points in parameter space, as well as a reconsideration of methods for sampling the parameter space appropriately for such inference. We illustrate the performance and application of the whole tool chain on simulated and actual data, and highlight desirable developments in terms of data types and biological scenarios.

Keywords: demographic history, coalescent processes, importance sampling, genetic polymorphism
AMS 2000 subject classifications: 92D10, 62M05, 65C05
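The Kriging smoothing step mentioned above can be sketched in miniature. This is simple kriging (zero-mean Gaussian-process interpolation) with a squared-exponential kernel over a one-dimensional parameter axis; the kernel, length scale, and data are illustrative assumptions, not the authors' implementation:

```python
import math

def rbf(a, b, length=0.5):
    # Squared-exponential covariance between two parameter points.
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krige(xs, ys, x0, nugget=1e-8):
    # Simple kriging predictor at x0: weights w solve K w = k(x0),
    # where K is the covariance matrix of the observed points and
    # k(x0) the covariances between x0 and each observed point.
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = solve(K, [rbf(xs[i], x0) for i in range(n)])
    return sum(w[i] * ys[i] for i in range(n))
```

With estimated log-likelihoods `ys` at parameter points `xs`, `krige(xs, ys, x0)` returns a smoothed surface value at any `x0`, passing (up to the nugget) through the observed estimates.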


2013 ◽ Vol 32 (2) ◽ pp. 101 ◽ Author(s): Christian Lantuéjoul

A Boolean model is a union of independent objects (compact random subsets) located at Poisson points. Two algorithms are proposed for simulating a Boolean model in a bounded domain. The first applies only to stationary models: it generates the objects prior to their Poisson locations, and two examples illustrate its applicability. The second applies to both stationary and non-stationary models: it generates the Poisson points prior to the objects, and its practical implementation difficulties are discussed. Both algorithms are based on importance sampling techniques, and the generated objects are weighted.
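A minimal sketch of the unweighted stationary case may help fix ideas: a Boolean model of random disks simulated in a bounded window, with the window dilated by the maximum radius as an edge correction so that disks hitting the window from outside are not missed. The disk shape, intensity, and radius law are assumptions for illustration; the importance-sampling weighting of the article's algorithms is not reproduced:

```python
import math
import random

def poisson(mean, rng):
    # Knuth-style Poisson sampler (fine for moderate means).
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_boolean_disks(lam, rmax, width, height, rng):
    # Stationary Boolean model of disks observed in the window
    # [0, width] x [0, height]. Germs form a Poisson process of
    # intensity lam; each germ carries an independent disk of
    # radius Uniform(0, rmax). Centers are drawn in the window
    # dilated by rmax (edge correction).
    area = (width + 2 * rmax) * (height + 2 * rmax)
    n = poisson(lam * area, rng)
    disks = []
    for _ in range(n):
        x = rng.uniform(-rmax, width + rmax)
        y = rng.uniform(-rmax, height + rmax)
        r = rng.uniform(0.0, rmax)
        disks.append((x, y, r))
    return disks

def covered(px, py, disks):
    # Is the point (px, py) covered by the union of the disks?
    return any((px - x) ** 2 + (py - y) ** 2 <= r * r
               for x, y, r in disks)
```

A useful sanity check is the coverage probability of a point, which for a Boolean model is 1 - exp(-lam * E[disk area]) = 1 - exp(-lam * pi * rmax^2 / 3) for uniform radii; Monte Carlo estimates from this sketch agree with that value.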


2013 ◽ Vol 25 (2) ◽ pp. 418-449 ◽ Author(s): Matthew T. Harrison

Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples: testing differences across conditions using permutation tests, and testing pairwise synchrony and precise lagged correlation between many simultaneously recorded spike trains using interval jitter.
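For context, the baseline computation that importance sampling accelerates can be sketched as a plain two-sample permutation test on spike counts. The data and the add-one p-value correction are illustrative; this naive version is the kind of costly resampling the article speeds up:

```python
import random

def permutation_pvalue(x, y, n_perm=10000, rng=None):
    # Two-sample permutation test for a difference in mean firing
    # rate between conditions x and y (lists of spike counts).
    # Returns a two-sided p-value.
    rng = rng or random.Random(0)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n, m = len(x), len(y)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / m)
        if stat >= observed:
            extreme += 1
    # Add-one correction keeps the Monte Carlo p-value valid
    # (never exactly zero).
    return (extreme + 1) / (n_perm + 1)
```

Correcting for many such tests pushes the interesting p-values far into the tail, where naive resampling needs enormous `n_perm`; importance sampling concentrates the permutations in that tail instead.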

