How many marker loci are necessary? Analysis of dominant marker data sets using two popular population genetic algorithms

2013 ◽  
pp. n/a-n/a ◽  
Author(s):  
Michael F. Nelson ◽  
Neil O. Anderson
2010 ◽  
Vol 2010 ◽  
pp. 1-8 ◽  
Author(s):  
F. A. Aravanopoulos

Clonal identification in forestry may employ different means, each with unique advantages. A comparative evaluation of different approaches is reported, using nine quantitative leaf morphometric parameters, 15 variable codominant (isoenzyme) loci, and 15 variable dominant (RAPD) loci. All clones presented unique multilocus isoenzyme genotypes, and 86% presented unique multilocus RAPD genotypes. Quantitative, isoenzyme, and molecular data were subjected to principal component analysis, the latter two data sets after vector transformation. Most of the variability (quantitative 99%, isoenzyme 72.5%, RAPD 89%) was accounted for by the first three axes. This study has shown that: (1) individual quantitative parameters were inefficient for clonal identification; (2) multilocus clonal identification was successful; (3) dominant markers were more polymorphic than codominant ones (1.5 variable loci per enzyme system vs. 7.5 variable RAPD loci per primer); (4) 15 codominant marker loci could identify about 2.8 times more individuals than 15 dominant ones, but this advantage is surpassed when 42 dominant loci are employed; (5) multivariate analysis of morphological, codominant, and dominant genetic data could not discriminate at the clonal level. It was concluded that, owing to the higher number of loci available, dominant markers perform better than codominant ones, despite the higher informativeness of the latter.
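The trade-off in point (4) can be illustrated with a back-of-the-envelope count. This is an idealized sketch only: the paper's 2.8x and 42-loci figures come from probability-of-identity calculations that depend on allele frequencies, which this deliberately ignores.

```python
import math

# Idealized count of distinguishable multilocus classes for biallelic
# markers: a codominant locus resolves 3 genotypes (AA, Aa, aa), a
# dominant locus only 2 phenotypes (band present / absent).

def multilocus_classes(n_loci, classes_per_locus):
    """Number of distinguishable multilocus genotype classes."""
    return classes_per_locus ** n_loci

codominant = multilocus_classes(15, 3)  # AA / Aa / aa all resolved
dominant = multilocus_classes(15, 2)    # band present / absent only

print(codominant / dominant)            # 3**15 / 2**15 = 1.5**15, ~438

# How many dominant loci match the resolution of k codominant loci
# in this idealized count?
k = 15
print(math.ceil(k * math.log(3, 2)))    # 24
```

Real discriminating power is lower than these raw counts because genotype classes are not equally frequent, which is why empirical studies like the one above need more than the idealized number of dominant loci.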


Genetics ◽  
2004 ◽  
Vol 166 (4) ◽  
pp. 1963-1979 ◽  
Author(s):  
Jinliang Wang

Abstract Likelihood methods have been developed to partition individuals in a sample into full-sib and half-sib families using genetic marker data without parental information. They invariably make the critical assumption that marker data are free of genotyping errors and mutations and are thus completely reliable in inferring sibships. Unfortunately, however, this assumption is rarely tenable for virtually all kinds of genetic markers in practical use and, if violated, can severely bias sibship estimates as shown by simulations in this article. I propose a new likelihood method with simple and robust models of typing error incorporated into it. Simulations show that the new method can be used to infer full- and half-sibships accurately from marker data with a high error rate and to identify typing errors at each locus in each reconstructed sib family. The new method also improves previous ones by adopting a fresh iterative procedure for updating allele frequencies with reconstructed sibships taken into account, by allowing for the use of parental information, and by using efficient algorithms for calculating the likelihood function and searching for the maximum-likelihood configuration. It is tested extensively on simulated data with a varying number of marker loci, different rates of typing errors, and various sample sizes and family structures and applied to two empirical data sets to demonstrate its usefulness.
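A minimal sketch of the kind of mistyping model the abstract describes. The uniform-error form below is an assumption for illustration, not Wang's exact formulation: with probability e the recorded genotype is replaced by one of the other possible genotypes at random.

```python
# Simple, robust typing-error model (assumed form, for illustration).

def obs_prob(observed, true, genotypes, e):
    """P(observed | true) under a uniform mistyping model."""
    if observed == true:
        return 1.0 - e
    return e / (len(genotypes) - 1)

def likelihood(observed, prior, genotypes, e):
    """Marginal likelihood of an observed genotype, summing over the
    unknown true genotype weighted by its prior (e.g. the Mendelian
    expectation under a sibship hypothesis)."""
    return sum(prior[g] * obs_prob(observed, g, genotypes, e)
               for g in genotypes)

genotypes = ["AA", "Aa", "aa"]
sibship_prior = {"AA": 0.25, "Aa": 0.5, "aa": 0.25}  # offspring of Aa x Aa
print(likelihood("AA", sibship_prior, genotypes, 0.0))   # 0.25, no errors

# The key robustness property: a genotype that is Mendelian-incompatible
# with the hypothesis gets a small nonzero likelihood instead of zero,
# so one typing error no longer vetoes an otherwise well-supported sibship.
incompatible = {"AA": 0.0, "Aa": 0.0, "aa": 1.0}
print(likelihood("AA", incompatible, genotypes, 0.05))   # 0.025, not 0
```

Without such a model, a single mistyped locus forces the likelihood of the true sibship configuration to zero, which is the bias mechanism the simulations in the article demonstrate.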


Genetics ◽  
2003 ◽  
Vol 163 (3) ◽  
pp. 1177-1191 ◽  
Author(s):  
Gregory A Wilson ◽  
Bruce Rannala

Abstract A new Bayesian method that uses individual multilocus genotypes to estimate rates of recent immigration (over the last several generations) among populations is presented. The method also estimates the posterior probability distributions of individual immigrant ancestries, population allele frequencies, population inbreeding coefficients, and other parameters of potential interest. The method is implemented in a computer program that relies on Markov chain Monte Carlo techniques to carry out the estimation of posterior probabilities. The program can be used with allozyme, microsatellite, RFLP, SNP, and other kinds of genotype data. We relax several assumptions of earlier methods for detecting recent immigrants using genotype data; most significantly, we allow genotype frequencies to deviate from Hardy-Weinberg equilibrium proportions within populations. The program is demonstrated by applying it to two recently published microsatellite data sets for populations of the plant species Centaurea corymbosa and the gray wolf species Canis lupus. A computer simulation study suggests that the program can provide highly accurate estimates of migration rates and individual migrant ancestries, given sufficient genetic differentiation among populations and sufficient numbers of marker loci.
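The Markov chain Monte Carlo machinery such a program relies on can be sketched in miniature. This is a toy model with a single allele frequency and a uniform prior, not the paper's full migration model; here the exact posterior is known (a Beta distribution), so the sampler can be checked against it.

```python
import math
import random

def log_post(p, n_A, n):
    """Unnormalized log posterior of allele frequency p given n_A copies
    of allele A out of n sampled, with a uniform prior on (0, 1)."""
    if not (0.0 < p < 1.0):
        return -math.inf
    return n_A * math.log(p) + (n - n_A) * math.log(1.0 - p)

def metropolis(n_A, n, steps=20000, seed=1):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal."""
    random.seed(seed)
    p, lp = 0.5, log_post(0.5, n_A, n)
    samples = []
    for _ in range(steps):
        q = p + random.gauss(0.0, 0.1)
        lq = log_post(q, n_A, n)
        # accept improvements always, worse proposals with prob exp(lq - lp)
        if lq >= lp or random.random() < math.exp(lq - lp):
            p, lp = q, lq
        samples.append(p)
    return samples[steps // 2:]            # discard burn-in

s = metropolis(n_A=30, n=100)
print(sum(s) / len(s))   # near the exact posterior mean 31/102 ~ 0.304
```

The real program samples many more quantities jointly (migration rates, ancestries, inbreeding coefficients), but each update follows this same accept/reject pattern.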


2014 ◽  
Vol 643 ◽  
pp. 237-242 ◽  
Author(s):  
Tahari Abdou El Karim ◽  
Bendakmousse Abdeslam ◽  
Ait Aoudia Samy

Image registration is a very important task in image processing. In medical imaging, it is used to compare the anatomical structures of two or more images taken at different times, for example to track the evolution of a disease. Intensity-based techniques are widely used in multi-modal registration: to obtain the best registration, a cost function expressing the similarity between the images is maximized, so the registration problem reduces to the optimization of a cost function. We propose to use neighborhood meta-heuristics (tabu search, simulated annealing) and a population-based meta-heuristic (genetic algorithms). An evaluation step is necessary to estimate the quality of the registration obtained. In this paper we present some results of medical image registration.
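A hypothetical one-dimensional stand-in for this setup: register a "moving" signal against a "fixed" one by maximizing a similarity score (negative mean squared difference) with simulated annealing. Real multi-modal registration optimizes richer metrics (e.g. mutual information) over 2-D/3-D transforms, but the optimization skeleton is the same.

```python
import math
import random

def similarity(fixed, moving, shift):
    """Negative mean squared difference over the overlap after shifting."""
    n = len(fixed)
    pairs = [(fixed[i], moving[i - shift])
             for i in range(n) if 0 <= i - shift < n]
    return -sum((a - b) ** 2 for a, b in pairs) / len(pairs)

def anneal(fixed, moving, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Search integer shifts; accept worse moves with Boltzmann probability."""
    random.seed(seed)
    shift = best = 0
    cur = best_val = similarity(fixed, moving, shift)
    t = t0
    for _ in range(steps):
        cand = max(-50, min(50, shift + random.choice([-2, -1, 1, 2])))
        val = similarity(fixed, moving, cand)
        if val > cur or random.random() < math.exp((val - cur) / t):
            shift, cur = cand, val
            if cur > best_val:
                best, best_val = shift, cur
        t *= cooling                       # geometric cooling schedule
    return best

# A smooth feature displaced by 5 samples; the true shift is 5.
fixed = [math.exp(-((i - 100) / 20.0) ** 2) for i in range(200)]
moving = [math.exp(-((i - 95) / 20.0) ** 2) for i in range(200)]
print(anneal(fixed, moving))
```

Tabu search would replace the Boltzmann acceptance with a memory of recently visited shifts, and a genetic algorithm would evolve a population of candidate transforms; all three only ever query the same similarity function.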


Author(s):  
Thomas Bäck

In section 1.1.3 it was clarified that a variety of different, more or less drastic changes of the genome are summarized under the term mutation by geneticists and evolutionary biologists. Several kinds of mutation events are possible, ranging from single base-pair changes to genomic mutations. The phenotypic effect of genotypic mutations, however, can hardly be predicted from knowledge of the genotypic change. In general, advantageous mutations have a relatively small effect on the phenotype, i.e., their expression does not deviate very much (in phenotype space) from the expression of the unmutated genotype ([Fut90], p. 85). More drastic phenotypic changes are usually lethal or die out due to a reduced capability of reproduction. The discussion of the extent to which evolution based on phenotypic macro-mutations in the sense of “hopeful monsters” is important in facilitating speciation is still ongoing (such macro-mutations have been observed and classified for the fruit fly Drosophila melanogaster; see [Got89], p. 286). Actually, only a few data sets are available to assess the phylogenetic significance of macro-mutations completely, but small phenotypic effects of mutation are clearly observed to be predominant. This is the main argument justifying the use of normally distributed mutations with expectation zero in Evolutionary Programming and Evolution Strategies. It reflects the emphasis of both algorithms on modeling phenotypic rather than genotypic change. The model of mutation is quite different in Genetic Algorithms, where bit-reversal events (see section 2.3.2), corresponding to single base-pair mutations in biological reality, implement a model of evolution on the basis of genotypic changes. As observed in nature, the mutation rate used in Genetic Algorithms is very small (cf. section 2.3.2).
In contrast to the biological model, it is neither variable by external influences nor controlled (at least partially) by the genotype itself (cf. section 1.1.3). Holland defined the role of mutation in Genetic Algorithms to be a secondary one, of little importance in comparison to crossover (see [Hol75], p. 111): . . . Summing up: Mutation is a “background” operator, assuring that the crossover operator has a full range of alleles so that the adaptive plan is not trapped on local optima. . . .
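The contrast between the two mutation models can be sketched side by side (parameter values here are illustrative, not prescriptions from the text):

```python
import random

def gaussian_mutation(x, sigma=0.1):
    """ES/EP-style mutation: add N(0, sigma^2) noise to each real-valued
    gene, so small phenotypic changes are the most probable outcome."""
    return [xi + random.gauss(0.0, sigma) for xi in x]

def bitflip_mutation(bits, rate=0.001):
    """GA-style mutation: reverse each bit independently with a small
    probability, analogous to single base-pair changes."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

random.seed(0)
print(gaussian_mutation([1.0, 2.0, 3.0]))      # small perturbations

parent = [0, 1, 0, 1] * 250                    # 1000-bit genome
child = bitflip_mutation(parent)
print(sum(c != p for c, p in zip(child, parent)))  # ~1 flip expected
```

The Gaussian operator expresses the "small phenotypic effects dominate" argument directly in its sampling distribution, while the bit-flip operator encodes Holland's view of mutation as a rare background event that merely keeps alleles available for crossover.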


Author(s):  
M. Akbarizadeh ◽  
A. Daghbandan ◽  
M. Yaghoobi

Coagulation-flocculation is one of the most important parts of the water treatment process. Traditionally, the optimum pre-coagulant dosage is determined by jar tests in the laboratory. However, jar tests are time-consuming, expensive, and poorly adaptive to real-time changes in raw water quality. Soft computing can be used to overcome these limitations. In this paper, multi-objective evolutionary Pareto-optimal design of a GMDH-type neural network has been used for modeling and predicting the optimum polyelectrolyte dosage in the Rasht WTP, Guilan, Iran, using input-output data sets. Multi-objective uniform-diversity genetic algorithms (MUGA) are used for Pareto optimization of the GMDH networks. For this modeling, the experimental data were divided into training and test sections, and the predicted values were compared with the experimental values to estimate the performance of the GMDH network. Multi-objective genetic algorithms (MOGA) are then used for optimization of the parameters influencing pre-coagulant (polyelectrolyte) dosage.
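The Pareto-dominance test at the core of multi-objective genetic algorithms such as MUGA and MOGA can be sketched generically. The candidate scores below are made up for illustration: models rated by (prediction error, complexity), both to be minimized.

```python
# Generic Pareto-front extraction (illustrative, not the study's data).

def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

models = [(0.10, 5), (0.08, 9), (0.20, 2), (0.09, 9), (0.15, 2)]
print(pareto_front(models))   # [(0.1, 5), (0.08, 9), (0.15, 2)]
```

The surviving set is the trade-off curve a multi-objective GA presents to the designer: no member can improve one objective without worsening the other, which is exactly the sense in which the GMDH network designs above are "Pareto optimal".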


Author(s):  
Christopher Hammond ◽  
Cameron J. Turner

Non-Uniform Rational B-Splines (NURBS) curves have long been used to model 1D and 2D data because they are efficient to calculate, easy to manipulate, and capable of handling discontinuities and drastic changes in the general topology of the data. However, the user must assist in defining the control points, weights, knots and an order for the curve in order to fit the curve to the data. This paper uses a modified Genetic Algorithm (GA) to choose and manipulate these various parameters to produce a NURBS curve that minimizes the error between the data and the curve and also minimizes the time it takes the algorithm to compute the solution. The algorithm is tested on several 1D trial data sets and the results are explained. Then, several general difficulties for this application of the GA are explained and possible methods for overcoming them are discussed.
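For reference, evaluating a NURBS curve from its control points, weights, knots, and order follows the standard Cox-de Boor construction. This is a generic textbook sketch (with illustrative parameter values), not the paper's implementation; it is the kind of routine the GA's error measure would call repeatedly.

```python
def basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, knots, p):
    """Rational (weighted) combination of 2D control points at parameter u."""
    terms = [basis(i, p, u, knots) * w for i, w in enumerate(weights)]
    denom = sum(terms)
    x = sum(t * cx for t, (cx, _) in zip(terms, ctrl)) / denom
    y = sum(t * cy for t, (_, cy) in zip(terms, ctrl)) / denom
    return x, y

# Quadratic curve (p=2), three control points, clamped knot vector;
# with all weights equal this reduces to a Bezier arc.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
weights = [1.0, 1.0, 1.0]
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(nurbs_point(0.5, ctrl, weights, knots, 2))   # (1.0, 1.0)
```

Note the half-open interval in the degree-0 case: the curve's right endpoint u = 1 needs special handling, a standard Cox-de Boor wrinkle. A GA like the one in this paper treats `ctrl`, `weights`, `knots`, and `p` as the chromosome and scores each candidate by the error between such evaluated points and the data.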

