Looking Beyond Selection Probabilities: Adaptation of the χ2 Measure for the Performance Analysis of Selection Methods in GAs

2001 ◽  
Vol 9 (2) ◽  
pp. 243-256 ◽  
Author(s):  
Thomas Schell ◽  
Stefan Wegenkittl

Viewing the selection process in a genetic algorithm as a two-step procedure consisting of the assignment of selection probabilities and the sampling according to this distribution, we employ the χ2 measure as a tool for the analysis of the stochastic properties of the sampling. We are thereby able to compare different selection schemes even when their probability distributions coincide. Introducing a new sampling algorithm with adjustable accuracy and employing two-level test designs enables us to further reveal the intrinsic correlation structures of well-known sampling algorithms. Our methods apply well to integral methods like tournament selection and can be automated.
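
As a minimal illustration of the underlying idea (not the authors' two-level test design or their adjustable-accuracy sampler), a one-level χ2 statistic compares the observed selection counts produced by a sampling algorithm against the assigned selection probabilities:

```python
import numpy as np
from scipy import stats

def chi2_sampling_test(probs, counts):
    """Chi-square goodness-of-fit of observed selection counts
    against the assigned selection probabilities."""
    probs = np.asarray(probs, dtype=float)
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    expected = n * probs
    chi2 = ((counts - expected) ** 2 / expected).sum()
    # Under the null (perfect i.i.d. sampling), chi2 is approximately
    # chi-square distributed with k-1 degrees of freedom.
    pvalue = stats.chi2.sf(chi2, df=len(probs) - 1)
    return chi2, pvalue

# Example: roulette-wheel style sampling of 10 individuals, 10,000 draws.
rng = np.random.default_rng(0)
fitness = rng.random(10)
probs = fitness / fitness.sum()
draws = rng.choice(10, size=10_000, p=probs)
counts = np.bincount(draws, minlength=10)
print(chi2_sampling_test(probs, counts))
```

A sampler with built-in correlations (e.g. stochastic universal sampling) can match the probabilities exactly while producing count fluctuations far smaller than i.i.d. sampling would, which is precisely what a χ2-based test can detect.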

2020 ◽  
Vol 498 (3) ◽  
pp. 4492-4502 ◽  
Author(s):  
Rory J E Smith ◽  
Gregory Ashton ◽  
Avi Vajpeyi ◽  
Colm Talbot

ABSTRACT Understanding the properties of transient gravitational waves (GWs) and their sources is of broad interest in physics and astronomy. Bayesian inference is the standard framework for astrophysical measurement in transient GW astronomy. Usually, stochastic sampling algorithms are used to estimate posterior probability distributions over the parameter spaces of models describing experimental data. The most physically accurate models typically come with a large computational overhead, which can render data analysis extremely time consuming, or possibly even prohibitive. In some cases, highly specialized optimizations can mitigate these issues, though they can be difficult to implement, as well as to generalize to arbitrary models of the data. Here, we investigate an accurate, flexible, and scalable method for astrophysical inference: parallelized nested sampling. The reduction in the wall time of inference scales almost linearly with the number of parallel processes running on a high-performance computing cluster. By utilizing a pool of several hundred or thousand CPUs in a high-performance cluster, the large wall times of many astrophysical inferences can be alleviated while simultaneously ensuring that any GW signal model can be used ‘out of the box’, i.e. without additional optimization or approximation. Our method will be useful to both the LIGO-Virgo-KAGRA collaborations and the wider scientific community performing astrophysical analyses on GWs. An implementation is available in the open source gravitational-wave inference library pBilby (parallel bilby).
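
pBilby itself is driven through its own command-line tools and MPI on a cluster, so the following is only a sketch of the core idea: nested sampling in which the expensive likelihood evaluations are farmed out to a pool of worker processes. It uses dynesty's pool interface and a toy Gaussian likelihood standing in for a costly GW model:

```python
import multiprocessing as mp
import numpy as np
import dynesty

NDIM = 3

def log_likelihood(theta):
    # Toy Gaussian likelihood standing in for an expensive GW model.
    return -0.5 * np.sum(theta ** 2)

def prior_transform(u):
    # Map the unit cube to a uniform prior on [-10, 10]^NDIM.
    return 20.0 * u - 10.0

if __name__ == "__main__":
    with mp.Pool(8) as pool:
        sampler = dynesty.NestedSampler(
            log_likelihood, prior_transform, NDIM,
            nlive=500, pool=pool, queue_size=8)
        sampler.run_nested()
    print(sampler.results.logz[-1])  # log-evidence estimate
```

Because likelihood evaluations dominate the cost, throughput grows almost linearly with the pool size until the per-iteration serial bookkeeping becomes the bottleneck, which is the scaling behaviour the abstract reports.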


Biometrika ◽  
2019 ◽  
Vol 107 (1) ◽  
pp. 1-23 ◽  
Author(s):  
D B Dunson ◽  
J E Johndrow

Summary In a 1970 Biometrika paper, W. K. Hastings developed a broad class of Markov chain algorithms for sampling from probability distributions that are difficult to sample from directly. The algorithm draws a candidate value from a proposal distribution and accepts the candidate with a probability that can be computed using only the unnormalized density of the target distribution, allowing one to sample from distributions known only up to a constant of proportionality. The stationary distribution of the corresponding Markov chain is the target distribution one is attempting to sample from. The Hastings algorithm generalizes the Metropolis algorithm to allow a much broader class of proposal distributions instead of just symmetric cases. An important class of applications for the Hastings algorithm corresponds to sampling from Bayesian posterior distributions, which have densities given by a prior density multiplied by a likelihood function and divided by a normalizing constant equal to the marginal likelihood. The marginal likelihood is typically intractable, presenting a fundamental barrier to implementation in Bayesian statistics. This barrier can be overcome by Markov chain Monte Carlo sampling algorithms. Amazingly, even after 50 years, the majority of algorithms used in practice today involve the Hastings algorithm. This article provides a brief celebration of the continuing impact of this ingenious algorithm on the 50th anniversary of its publication.
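
The core of the algorithm fits in a few lines. The sketch below is a minimal random-walk variant with a symmetric Gaussian proposal, for which Hastings' acceptance ratio reduces to the original Metropolis ratio; note that only the unnormalized target density is required:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian
    proposal (the Hastings ratio reduces to the Metropolis ratio)."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        proposal = x + step * rng.standard_normal(x.size)
        logp_prop = log_target(proposal)
        # Accept with probability min(1, pi(y)/pi(x)); the unknown
        # normalizing constant cancels in the ratio.
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        chain[i] = x
    return chain

# Sample from an unnormalized standard bivariate normal.
chain = metropolis_hastings(lambda x: -0.5 * x @ x, np.zeros(2), 5000)
print(chain.mean(axis=0), chain.std(axis=0))
```

In the Bayesian setting, `log_target` is simply the log-prior plus the log-likelihood; the intractable marginal likelihood never appears, which is exactly the barrier the Hastings algorithm removes.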


2013 ◽  
Vol 2013 ◽  
pp. 1-17 ◽  
Author(s):  
Milan Narandžić ◽  
Christian Schneider ◽  
Wim Kotterman ◽  
Reiner S. Thomä

Starting from the premise that stochastic properties of a radio environment can be abstracted by defining scenarios, a generic MIMO channel model is built by the WINNER project. The parameter space of the WINNER model is, among others, described by normal probability distributions and correlation coefficients that provide a suitable space for scenario comparison. The possibility to quantify the distance between reference scenarios and measurements enables objective comparison and classification of measurements into scenario classes. In this paper we approximate the WINNER scenarios with multivariate normal distributions and then use the mean Kullback-Leibler divergence to quantify their divergence. The results show that the WINNER scenario groups (A, B, C, and D) or propagation classes (LoS, OLoS, and NLoS) do not necessarily ensure minimum separation within the groups/classes. Instead, the following grouping minimizes intragroup distances: (i) indoor-to-outdoor and outdoor-to-indoor scenarios (A2, B4, and C4), (ii) macrocell configurations for suburban, urban, and rural scenarios (C1, C2, and D1), and (iii) indoor/hotspot/microcellular scenarios (A1, B3, and B1). The computation of the divergence between Ilmenau and Dresden measurements and WINNER scenarios confirms that the parameters of the C2 scenario are a proper reference for a large variety of urban macrocell environments.
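
For reference, the Kullback-Leibler divergence between two multivariate normals has a closed form, and a symmetrized divergence can be built from it. A minimal sketch, assuming the paper's "mean" divergence is the average of the two directed divergences:

```python
import numpy as np

def kl_mvn(mu0, cov0, mu1, cov1):
    """KL divergence KL(N0 || N1) between two multivariate normals."""
    k = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff - k + logdet1 - logdet0)

def mean_kl(mu0, cov0, mu1, cov1):
    """Symmetrized (mean) KL divergence between two scenarios."""
    return 0.5 * (kl_mvn(mu0, cov0, mu1, cov1)
                  + kl_mvn(mu1, cov1, mu0, cov0))

# Toy comparison of two "scenarios" in a 2-parameter space.
mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.ones(2), 2.0 * np.eye(2)
print(mean_kl(mu_a, cov_a, mu_b, cov_b))
```

Symmetrization matters here because KL divergence is asymmetric, whereas a distance used to group scenarios should not depend on which scenario is taken as the reference.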


2021 ◽  
Author(s):  
Hsing-Jui Wang ◽  
Soohyun Yang ◽  
Ralf Merz ◽  
Stefano Basso

Heavy-tailed probability distributions of streamflow are frequently observed in river basins. They indicate sizable odds of extreme events in these catchments and thus signal the existence of enhanced hydrological perils. Notwithstanding their relevance for characterizing the hydrological hazard of river basins, identifying specific mechanisms which promote the emergence of heavy-tailed flow distributions has proved challenging due to the complex hydrological response of such dynamical systems exposed to highly variable rainfall inputs.

In this study we combine a continuous hydrological model grounded on the geomorphological theory of the hydrologic response with archetypical descriptions of the spatial and temporal distributions of rainfall inputs and catchment attributes to investigate physical mechanisms and stochastic features leading to the emergence of heavy tails.

In the model, soil moisture dynamics driven by the water balance in the root zone trigger superficial and subsurface runoff contributions, which are routed to the catchment outlet by means of a representation of transport by travel time distributions. The framework enables a parsimonious distributed description of hydrological processes, suitably considered with their stochastic character, and is thus fit for the goal of investigating manifold mechanisms promoting heavy-tailed streamflow distributions.

A set of archetypical spatial and temporal variabilities of rainfall inputs and catchment attributes (e.g., localized versus uniform rainfall in the catchment, lumped versus distributed catchment attributes, mainly upstream versus downstream source areas, high versus low rainfall frequency) are finally imposed in the model and their capability (or not) to affect the tail of the streamflow distribution is investigated.

The proposed framework provides a way to disentangle physical attributes of river catchments and stochastic properties of hydroclimatic variables which control the emergence of heavy-tailed streamflow distributions and thus identify the key drivers of the inherent hydrological hazard of river basins.
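
To make the setup concrete, here is a toy caricature of this class of model (not the authors' geomorphology-based framework): stochastic rainfall drives a root-zone storage that overflows into runoff, which is routed to the outlet by convolution with a travel time distribution, after which the upper tail of the flow distribution can be inspected:

```python
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(1)
days = 3650

# Stochastic rainfall forcing: wet days with exponential depths (mm).
wet = rng.random(days) < 0.3                      # rainfall frequency
rain = np.where(wet, rng.exponential(10.0, days), 0.0)

# Runoff is triggered only when root-zone storage exceeds capacity
# (a crude stand-in for the water balance in the root zone).
storage, capacity, et = 0.0, 50.0, 2.0
runoff = np.zeros(days)
for t in range(days):
    storage = max(storage + rain[t] - et, 0.0)
    if storage > capacity:
        runoff[t], storage = storage - capacity, capacity

# Route runoff to the outlet with a gamma-shaped travel time pdf.
ttd = stats.gamma(a=2.0, scale=1.5).pdf(np.arange(30))
ttd /= ttd.sum()
streamflow = signal.fftconvolve(runoff, ttd)[:days]

# Inspect the upper tail of the resulting flow distribution.
print(np.percentile(streamflow[streamflow > 0], [50, 90, 99, 99.9]))
```

Varying the assumed rainfall frequency, storage capacity, or travel time distribution in such a sketch mimics, in miniature, the archetypical configurations the study imposes to test which ones fatten the tail.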


Author(s):  
Mohammad Shehab ◽  
Ahamad Tajudin Khader

Background: The Cuckoo Search Algorithm (CSA) was introduced by Yang and Deb in 2009. It is considered one of the most successful metaheuristic algorithms across various fields. However, the original CSA uses random selection, which gives the best solutions no greater chance of being selected and leads to a loss of diversity. Methods: In this paper, the Modified Cuckoo Search Algorithm (MCSA) is proposed to enhance the performance of CSA on unconstrained optimization problems. MCSA replaces the default selection scheme of CSA (i.e. random selection) with tournament selection, so MCSA increases the probability of better results and avoids premature convergence. A set of benchmark functions is used to evaluate the performance of MCSA. Results: The experimental results showed that MCSA outperformed standard CSA and existing methods from the literature. Conclusion: MCSA preserves diversity through the tournament selection scheme, because it gives all solutions the opportunity to participate in the selection process.
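
For concreteness, the swapped-in operator might look as follows. This is only a sketch: the tournament size and the minimization convention are assumptions, not details taken from the paper:

```python
import random

def tournament_select(population, fitness, k=2):
    """Tournament selection: pick the fittest of k randomly sampled
    candidates (minimization assumed), so better nests are favoured
    while every nest still has a chance to compete."""
    contestants = random.sample(range(len(population)), k)
    best = min(contestants, key=lambda i: fitness[i])
    return population[best]

def random_select(population):
    """Original CSA scheme: every nest is equally likely."""
    return random.choice(population)
```

Raising `k` strengthens the bias toward good solutions; `k=2` (binary tournament) is the mildest setting and is often used precisely because it improves selection pressure without collapsing diversity.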


2020 ◽  
Vol 77 (1) ◽  
pp. 125-131 ◽  
Author(s):  
Helena Norberg ◽  
Ellinor Bergdahl ◽  
Karin Hellström Ängerud ◽  
Krister Lindmark

Abstract Purpose To develop a model for systematic introduction and to test its feasibility in a chronic disease population. We also investigated how the approach was received by the patients. Methods and results The systematic introduction approach is a seven-step procedure: step 1, define a few main criteria; step 2, primary scan patients with the one or two main criteria using computerized medical records/databases/clinical registries; step 3, identify patients applying the other predefined criteria; step 4, evaluate if any examinations/laboratory test updates are required; step 5, summon identified patients to the clinic with an information letter; step 6, discuss treatment with the patient and prescribe if appropriate; and step 7, follow up on initiated therapy and evaluate the applied process. The model was tested in a case study during introduction of the new drug sacubitril-valsartan in a heart failure population. In total, 76 out of 1924 patients were identified to be eligible for sacubitril-valsartan and summoned to the clinic to discuss treatment. Patient experiences with the approach were investigated in an interview study with general inductive approach using qualitative content analysis. This resulted in three final categories: a good approach, role of the information letter, and trust in care. Conclusions The systematic introduction approach ensures that strict criteria are used in the selection process and that a treatment can be implemented in eligible patients within a specified population with limited resources and time. The model was effective in our case study and maintained the patient’s confidence in healthcare.
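
Steps 2 and 3 amount to filtering computerized records, first on one or two main criteria and then on the remaining predefined ones. A hypothetical sketch of that filtering (the record fields and thresholds below are invented for illustration and are not the study's actual criteria):

```python
from dataclasses import dataclass

@dataclass
class Patient:
    # Hypothetical fields for illustration only.
    patient_id: str
    diagnosis: str
    ejection_fraction: float
    on_acei_or_arb: bool

def primary_scan(records, max_ef=35.0):
    """Step 2: coarse scan on one or two main criteria
    over medical records/databases/clinical registries."""
    return [p for p in records
            if p.diagnosis == "heart failure"
            and p.ejection_fraction <= max_ef]

def apply_remaining_criteria(candidates):
    """Step 3: narrow down using the other predefined criteria."""
    return [p for p in candidates if p.on_acei_or_arb]
```

The point of the two-stage filter is efficiency: the cheap scan on main criteria shrinks the population before the more detailed criteria, which may need chart review, are applied.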


2020 ◽  
Author(s):  
Helena Norberg ◽  
Ellinor Bergdahl ◽  
Karin Hellström Ängerud ◽  
Krister Lindmark

Abstract Background When novel treatments prove to be better than conventional therapy there is a need for effective introduction, both for cost-effectiveness and to minimize patient suffering. Effective implementation is a challenge in patients with chronic diseases who may be treated both in primary care and/or specialist clinics. Studies have shown that introduction is often unnecessarily delayed. Our aims were to develop a model for systematic introduction and to test the feasibility of this model on a new treatment for a chronic disease. We also investigated how such an approach would be received by the patients. Methods The systematic introduction approach is a seven-step procedure: Step 1 – define a few main criteria for the specific therapy; Step 2 – primary scan patients with the one or two main criteria using computerised medical records, databases or clinical registries; Step 3 – identify patients applying the other predefined criteria; Step 4 – evaluate if any examinations or laboratory test updates are required; Step 5 – summon identified patients to the clinic with an information letter; Step 6 – discuss treatment with the patient and prescribe to appropriate patients; Step 7 – follow up on initiated therapy and evaluate the applied process. The model was tested in a case study during introduction of the new drug sacubitril-valsartan in a heart failure population. Patient experiences with the approach were investigated in an interview study with general inductive approach using qualitative content analysis. Results By applying the systematic introduction approach in the case study, 76 out of 1924 patients were identified to be eligible for sacubitril-valsartan and summoned to the clinic to discuss treatment. The content analysis resulted in three final categories of how the model was received by the patients: A good approach, Role of the information letter, and Trust in care. Conclusions The systematic introduction approach ensures that strict criteria are used in the selection process and that a treatment can be implemented in eligible patients within a specified population with limited resources and time. The model was effective in our case study and maintained the patient’s confidence in health care.


2019 ◽  
Vol 2 (2) ◽  
pp. 72 ◽
Author(s):  
Retno Dewi Anissa ◽  
Wayan Firdaus Mahmudy ◽  
Agus Wahyu Widodo

Food scarcity presents many problems, one of which is poor rice quality. Rice production can therefore be enhanced through an optimal fertiliser composition, and a genetic algorithm is used to optimise this composition at a more affordable price. The genetic algorithm operates on a real-coded chromosome representation. Reproduction uses one-cut-point crossover and random mutation, while selection applies binary tournament selection to each chromosome. The test results showed that the optimum is obtained with a population size of 10, a crossover rate of 0.9, and a mutation rate of 0.1; over 10 generations, the best fitness value generated is 1,603.
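
The operators named above are standard and might be sketched as follows. The fitness function and gene bounds are problem-specific and omitted here; maximization is assumed:

```python
import random

def one_cut_point_crossover(a, b):
    """Swap the gene tails of two real-coded chromosomes
    after a single random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def random_mutation(chromo, low, high):
    """Replace one randomly chosen gene with a new real value
    drawn uniformly from the allowed range."""
    child = chromo[:]
    child[random.randrange(len(child))] = random.uniform(low, high)
    return child

def binary_tournament(pop, fitness):
    """Keep the fitter of two randomly drawn chromosomes
    (maximization assumed)."""
    i, j = random.sample(range(len(pop)), 2)
    return pop[i] if fitness(pop[i]) >= fitness(pop[j]) else pop[j]
```

With the reported settings, each generation would apply crossover to about 90% of pairings and mutation to about 10% of offspring, with binary tournaments choosing the parents.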


2021 ◽  
Author(s):  
Huayang Xie

This thesis presents an analysis of the selection process in tree-based Genetic Programming (GP), covering the optimisation of both parent and offspring selection, and provides a detailed understanding of selection and guidance on how to improve GP search effectively and efficiently.

The first part of the thesis provides models and visualisations to analyse selection behaviour in standard tournament selection, clarifies several issues in standard tournament selection, and presents a novel solution to automatically and dynamically optimise parent selection pressure. The fitness evaluation cost of parent selection is then addressed and some cost-saving algorithms introduced. In addition, the feasibility of using good predecessor programs to increase parent selection efficiency is analysed.

The second part of the thesis analyses the impact of offspring selection pressure on the overall GP search performance. The fitness evaluation cost of offspring selection is then addressed, with investigation of some heuristics to efficiently locate good offspring by constraining crossover point selection structurally through the analysis of the characteristics of good crossover events.

The main outcomes of the thesis are three new algorithms and four observations: 1) a clustering tournament selection method is developed to automatically and dynamically tune parent selection pressure; 2) a passive evaluation algorithm is introduced to reduce parent fitness evaluation cost for standard tournament selection with small tournament sizes; 3) a heuristic population clustering algorithm is developed to reduce parent fitness evaluation cost while taking advantage of clustering tournament selection and avoiding the tournament size limitation; 4) population size has little impact on parent selection pressure, so tournament size can be configured independently of population size, and different sampling replacement strategies have little impact on selection behaviour in standard tournament selection; 5) premature convergence occurs more often when stochastic elements are removed from both parent and offspring selection processes; 6) good crossover events show a strong preference for whole program trees and, less strongly, for single-node or small subtrees at the bottom of parent program trees; 7) the ability of standard GP crossover to generate good offspring is far below what was expected.
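
Observation 4 is easy to check empirically: under tournament selection, the expected loss of diversity depends on the tournament size but hardly on the population size. The sketch below is an illustrative experiment (sampling contestants with replacement), not an algorithm from the thesis:

```python
import numpy as np

def loss_of_diversity(pop_size, tour_size, trials=200, rng=None):
    """Estimate the fraction of a fitness-ranked population that is
    never selected as a parent after pop_size tournaments.
    Lower index means fitter individual; contestants are sampled
    with replacement."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total = 0.0
    for _ in range(trials):
        winners = {rng.integers(0, pop_size, size=tour_size).min()
                   for _ in range(pop_size)}
        total += 1.0 - len(winners) / pop_size
    return total / trials

for n in (50, 200, 800):
    print(n, round(loss_of_diversity(n, tour_size=4), 3))
# Prints roughly the same value for every population size,
# consistent with selection pressure being governed by tournament
# size rather than population size.
```

Swapping the with-replacement draw for a without-replacement one changes the estimate only marginally, in line with the thesis's second half of observation 4.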

