Construction of a Genetic Linkage Map in Tetraploid Species Using Molecular Markers

Genetics ◽  
2001 ◽  
Vol 157 (3) ◽  
pp. 1369-1385 ◽  
Author(s):  
Z W Luo ◽  
C A Hackett ◽  
J E Bradshaw ◽  
J W McNicol ◽  
D Milbourne

Abstract This article presents methodology for the construction of a linkage map in an autotetraploid species, using either codominant or dominant molecular markers scored on two parents and their full-sib progeny. The steps of the analysis are as follows: identification of parental genotypes from the parental and offspring phenotypes; testing for independent segregation of markers; partition of markers into linkage groups using cluster analysis; maximum-likelihood estimation of the phase, recombination frequency, and LOD score for all pairs of markers in the same linkage group using the EM algorithm; ordering the markers and estimating distances between them; and reconstructing their linkage phases. The information from different marker configurations about the recombination frequency is examined and found to vary considerably, depending on the number of different alleles, the number of alleles shared by the parents, and the phase of the markers. The methods are applied to a simulated data set and to a small set of SSR and AFLP markers scored in a full-sib population of tetraploid potato.
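The pairwise EM estimation for tetraploid marker configurations is too involved to reproduce here, but the middle steps of the pipeline (pairwise recombination-frequency and LOD estimation, then grouping at a LOD threshold) can be illustrated with a much simpler, hedged Python sketch for fully informative backcross-type markers. The data layout and function names are hypothetical, and this is not the authors' tetraploid EM.

```python
import numpy as np
from itertools import combinations

def pairwise_lod(gametes_a, gametes_b):
    """Recombination fraction and LOD for two fully informative markers.

    gametes_a, gametes_b: 0/1 codes of the parental allele each offspring
    inherited at the two markers (backcross-type data).
    """
    a, b = np.asarray(gametes_a), np.asarray(gametes_b)
    n = len(a)
    recombinants = int(np.sum(a != b))
    r = np.clip(recombinants / n, 1e-6, 1 - 1e-6)   # avoid log(0)
    # LOD: likelihood at r-hat versus independence (r = 0.5)
    lod = (recombinants * np.log10(r) +
           (n - recombinants) * np.log10(1 - r) +
           n * np.log10(2))
    return r, lod

def linkage_groups(marker_data, lod_threshold=3.0):
    """Single-linkage clustering of markers into linkage groups."""
    names = list(marker_data)
    groups = {m: {m} for m in names}
    for a, b in combinations(names, 2):
        _, lod = pairwise_lod(marker_data[a], marker_data[b])
        if lod >= lod_threshold and groups[a] is not groups[b]:
            merged = groups[a] | groups[b]
            for m in merged:
                groups[m] = merged
    return {frozenset(g) for g in groups.values()}

# usage with hypothetical 0/1 gamete codes for four markers
rng = np.random.default_rng(0)
base = rng.integers(0, 2, 200)
data = {"M1": base,
        "M2": np.where(rng.random(200) < 0.1, 1 - base, base),   # tightly linked to M1
        "M3": rng.integers(0, 2, 200),                           # unlinked
        "M4": np.where(rng.random(200) < 0.2, 1 - base, base)}
print(linkage_groups(data))
```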

Author(s):  
Valentin Raileanu

The article briefly describes the history and fields of application of extreme value theory, including climatology. The data format, the Generalized Extreme Value (GEV) distribution fitted to block maxima, the Generalized Pareto (GP) distribution fitted under the Peaks Over Threshold (POT) approach, and the associated analysis methods are presented. The distribution parameters are estimated with the Maximum Likelihood Estimation (MLE) method. Installation of the free R software, the minimum set of required commands, and the in2extRemes graphical user interface (GUI) package are described. As an example, the results of a GEV analysis of a simulated data set in in2extRemes are presented.
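The article itself works in R with the in2extRemes GUI; as a rough Python analogue of the block-maxima/MLE workflow it describes (using scipy rather than the tools named in the article, and a simulated series as stand-in data), one might write:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated daily series (hypothetical), split into 50 yearly blocks of 365 values
daily = rng.gumbel(loc=20.0, scale=5.0, size=365 * 50)
block_maxima = daily.reshape(50, 365).max(axis=1)

# Maximum likelihood fit of the GEV to the block maxima.
# Note: scipy's shape parameter c = -xi (opposite sign convention to in2extRemes).
c, loc, scale = stats.genextreme.fit(block_maxima)
print(f"shape xi = {-c:.3f}, location = {loc:.3f}, scale = {scale:.3f}")

# 100-block (e.g. 100-year) return level: the 1 - 1/100 quantile of the fitted GEV
print("100-year return level:", stats.genextreme.ppf(1 - 1/100, c, loc, scale))
```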


1997 ◽  
Vol 70 (3) ◽  
pp. 237-250 ◽  
Author(s):  
C. MALIEPAARD ◽  
J. JANSEN ◽  
J. W. VAN OOIJEN

Linkage analysis and map construction using molecular markers are far more complicated in full-sib families of outbreeding plant species than in progenies derived from homozygous parents. Markers may vary in the number of segregating alleles. One or both parents may be heterozygous, markers may be dominant or codominant, and usually the linkage phases of marker pairs are unknown. Because of these differences, marker pairs provide different amounts of information for the estimation of recombination frequencies and the linkage phases of the markers in the two parents, and usually these have to be estimated simultaneously. In this paper we present a complete overview of all possible configurations of marker pairs segregating in full-sib families. Maximum likelihood estimators for the recombination frequency and LOD score formulas are presented for all cases. Statistical properties of the estimators are studied analytically and by simulation. Specific problems of dominant markers, in particular with respect to the probability of detecting linkage, the probability of obtaining zero estimates, and the ability to distinguish linkage phase combinations, and consequences for mapping studies in outbred progenies are discussed.
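As a hedged illustration of the simplest configuration discussed, a marker pair that is fully informative in one parent and homozygous in the other (pseudo-testcross), the recombination frequency, linkage phase, and LOD score can be obtained by maximizing the binomial likelihood under both phase codings with r constrained to [0, 0.5]. The sketch below covers only this one configuration, not the full catalogue of cases treated in the paper.

```python
import numpy as np

def _binom_ll(r, k, n):
    """Binomial log-likelihood of k recombinant gametes out of n."""
    return k * np.log(r) + (n - k) * np.log(1 - r)

def estimate_r_and_phase(k, n):
    """MLE of recombination frequency and linkage phase for a marker pair that
    is fully informative in one parent (pseudo-testcross configuration).

    k: offspring counted as recombinant under the *coupling* coding; under
    repulsion the same offspring are the non-recombinants. r is constrained
    to [0, 0.5], so the phase is the coding with the larger maximum likelihood.
    """
    r_c = min(max(k / n, 1e-9), 0.5)            # coupling estimate
    r_r = min(max((n - k) / n, 1e-9), 0.5)      # repulsion estimate
    ll_c = _binom_ll(r_c, k, n)
    ll_r = _binom_ll(r_r, n - k, n)
    phase, r_hat, ll = (("coupling", r_c, ll_c) if ll_c >= ll_r
                        else ("repulsion", r_r, ll_r))
    lod = (ll - _binom_ll(0.5, k, n)) / np.log(10)   # against free recombination
    return phase, r_hat, lod

# e.g. 12 coupling-coded recombinants among 100 offspring
print(estimate_r_and_phase(12, 100))
```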


2019 ◽  
Vol 20 (1) ◽  
pp. 9-29 ◽  
Author(s):  
Mirko Signorelli ◽  
Ernst C. Wit

Until recently, data on populations of networks were rare. With the advancement of automatic monitoring devices and the growing social and scientific interest in networks, however, such data have become more widely available. From sociological experiments involving cognitive social structures to fMRI scans revealing large-scale brain networks of groups of patients, there is a growing awareness that we urgently need tools to analyse populations of networks and, in particular, to model the variation between networks due to covariates. We propose a model-based clustering method based on mixtures of generalized linear (mixed) models that can be employed to describe the joint distribution of a population of networks in a parsimonious manner and to identify subpopulations of networks that share certain topological properties of interest (degree distribution, community structure, effect of covariates on the presence of an edge, etc.). Maximum likelihood estimation for the proposed model can be efficiently carried out with an implementation of the EM algorithm. We assess the performance of this method on simulated data and conclude with an example application on advice networks in a small business.
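A minimal sketch of the EM idea behind the method, for a drastically simplified mixture in which each cluster is reduced to a single edge probability rather than a full generalized linear (mixed) model; the data layout (edge and dyad counts per network) and all names are hypothetical.

```python
import numpy as np

def em_network_mixture(edge_counts, dyad_counts, K, n_iter=200, seed=0):
    """EM for a toy mixture over a population of networks: cluster k has a
    single edge probability p_k, and network i contributes a binomial
    likelihood for edge_counts[i] edges out of dyad_counts[i] dyads.
    (A drastic simplification of the paper's mixture of GLMs.)"""
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                   # mixing weights
    p = rng.uniform(0.05, 0.95, size=K)        # cluster edge probabilities
    for _ in range(n_iter):
        # E-step: responsibilities from binomial log-likelihoods
        log_lik = (edge_counts[:, None] * np.log(p) +
                   (dyad_counts[:, None] - edge_counts[:, None]) * np.log(1 - p))
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and edge probabilities
        pi = resp.mean(axis=0)
        p = (resp * edge_counts[:, None]).sum(axis=0) / \
            (resp * dyad_counts[:, None]).sum(axis=0)
        p = np.clip(p, 1e-6, 1 - 1e-6)
    return pi, p, resp

# hypothetical usage: 60 networks with 45 dyads each (10 nodes), two clusters
rng = np.random.default_rng(0)
dyads = np.full(60, 45)
edges = rng.binomial(45, np.where(np.arange(60) < 30, 0.15, 0.55))
print(em_network_mixture(edges, dyads, K=2)[:2])
```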


Author(s):  
A. S. Ogunsanya ◽  
E. E. E. Akarawak ◽  
W. B. Yahya

In this paper, we compare different parameter estimation methods for the two-parameter Weibull-Rayleigh Distribution (W-RD), namely Maximum Likelihood Estimation (MLE), the Least Squares Estimation (LSE) method, and three quartile-based estimators. Two of the quartile methods have been applied in the literature, while the third (Q1-M) is introduced in this work. The methods are applied to simulated data and compared using the error, the Mean Square Error (MSE), and the Total Deviation (TD), also known as the Sum of Absolute Error Estimates (SAEE). The analytical results show that all of the estimation methods perform satisfactorily on data sets from the Weibull-Rayleigh distribution, with the degree of accuracy determined by the sample size. The proposed quartile method (Q1-M) has the smallest Total Deviation and MSE. In addition, the quartile methods outperform MLE on the simulated data, and the proposed Q1-M method has the added advantage of being simpler to use than MLE.
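The W-RD density is not reproduced in the abstract, so as a hedged stand-in the same contrast between MLE and a quartile-type estimator can be sketched for a plain two-parameter Weibull in Python; the quantile equations below are for the ordinary Weibull, not the authors' Q1-M estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shape_true, scale_true = 2.5, 3.0
x = scale_true * rng.weibull(shape_true, size=500)

# Maximum likelihood estimate (location fixed at 0 for a two-parameter fit)
shape_mle, _, scale_mle = stats.weibull_min.fit(x, floc=0)

# Quartile-type estimator: solve the two Weibull quantile equations
# x_p = scale * (-log(1 - p))**(1/shape) at p = 0.25 and p = 0.75.
q1, q3 = np.quantile(x, [0.25, 0.75])
shape_q = (np.log(-np.log(0.25)) - np.log(-np.log(0.75))) / (np.log(q3) - np.log(q1))
scale_q = q1 / (-np.log(0.75)) ** (1 / shape_q)

print(f"MLE      : shape={shape_mle:.3f}, scale={scale_mle:.3f}")
print(f"Quartiles: shape={shape_q:.3f}, scale={scale_q:.3f}")
```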


2021 ◽  
Vol 19 (1) ◽  
pp. 2-17
Author(s):  
Gyan Prakash

In the present study, the Pareto model is considered as the model from which the observations arise, and its parameters are estimated using a Bayesian approach. The properties of the Bayes estimators of the unknown parameters are studied under different asymmetric loss functions and a hybrid censoring pattern, and their risks are compared. The properties of maximum likelihood estimation and of the approximate confidence length are also investigated under hybrid censoring. The performance of the procedures is illustrated using simulated data obtained with the Metropolis-Hastings algorithm and a real data set.
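A minimal Python sketch of a Metropolis-Hastings sampler for the Pareto shape parameter, assuming complete (uncensored) data, a known scale, and a vague gamma prior; the hybrid-censored likelihood and the asymmetric loss functions studied in the paper are not reproduced.

```python
import numpy as np

def mh_pareto_shape(x, x_m, n_draws=5000, prop_sd=0.2, seed=0):
    """Random-walk Metropolis-Hastings for the Pareto shape alpha with known
    scale x_m, complete data, and a vague Gamma(1, 0.01) prior.
    A simplified stand-in for the hybrid-censored scheme used in the paper."""
    rng = np.random.default_rng(seed)
    n = len(x)
    s = np.sum(np.log(x / x_m))                 # sufficient statistic

    def log_post(alpha):
        if alpha <= 0:
            return -np.inf
        # log-likelihood + Gamma(1, 0.01) log-prior, up to constants
        return n * np.log(alpha) - alpha * s - 0.01 * alpha

    draws = np.empty(n_draws)
    alpha = 1.0
    for i in range(n_draws):
        prop = alpha + prop_sd * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(alpha):
            alpha = prop
        draws[i] = alpha
    return draws[n_draws // 5:]                 # drop burn-in

# usage with hypothetical data: X = x_m * U**(-1/alpha) is Pareto(alpha, x_m)
rng = np.random.default_rng(1)
x = 2.0 * rng.uniform(size=300) ** (-1 / 3.0)   # Pareto(alpha=3, x_m=2)
print("posterior mean of alpha:", mh_pareto_shape(x, 2.0).mean())
```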


1993 ◽  
Vol 43 (3-4) ◽  
pp. 199-212 ◽  
Author(s):  
Anup Dewanji ◽  
D. Dhar

In many practical situations involving parallel systems, the failure time of a component is not observable unless it is the last one to fail, thereby causing the failure of the system. Based on observations of the lifetime of the system, nonparametric estimation of the lifetime distributions of the components is considered. For a parallel system with two components, a competing-risks framework is developed and an EM-type algorithm for maximum likelihood estimation is obtained. The method is illustrated with a simulated data set.
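As a hedged parametric simplification of this setting (two exponential components rather than the paper's nonparametric treatment, and direct numerical maximization rather than the EM-type algorithm), the component rates can be recovered from the system lifetime and the identity of the last-failing component:

```python
import numpy as np
from scipy.optimize import minimize

def fit_parallel_exponential(t, last):
    """ML estimation of the rates of two exponential components in a parallel
    system when only the system lifetime t and the identity of the component
    that failed last (last = 0 or 1) are observed."""
    t = np.asarray(t, float)
    last = np.asarray(last, int)

    def neg_loglik(log_rates):
        lam = np.exp(log_rates)
        lam_last = lam[last]            # rate of the component that failed last
        lam_other = lam[1 - last]       # rate of the earlier-failing component
        ll = (np.log(lam_last) - lam_last * t +
              np.log1p(-np.exp(-lam_other * t)))
        return -np.sum(ll)

    res = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
    return np.exp(res.x)

# hypothetical simulated data with true rates 0.5 and 1.5
rng = np.random.default_rng(2)
c = rng.exponential(scale=[1 / 0.5, 1 / 1.5], size=(400, 2))
t, last = c.max(axis=1), c.argmax(axis=1)
print(fit_parallel_exponential(t, last))
```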


Author(s):  
M D MacNeil ◽  
J W Buchanan ◽  
M L Spangler ◽  
E Hay

Abstract The objective of this study was to evaluate the effects of various data structures on the genetic evaluation for the binary phenotype of reproductive success. The data were simulated based on an existing pedigree and an underlying fertility phenotype with a heritability of 0.10. A data set of complete observations was generated for all cows. This data set was then modified to mimic the culling of cows when they first failed to reproduce; cows having a missing observation at either their second or fifth opportunity to reproduce, as if they had been selected as donors for embryo transfer; and the censoring of records following the sixth opportunity to reproduce, as in a cull-for-age strategy. The data were analyzed using a third-order polynomial random regression model. The EBV of interest for each animal was the sum of the age-specific EBV over the first 10 observations (reproductive success at ages 2-11). Thus, the EBV might be interpreted as the genetic expectation of the number of calves produced when a female is given ten opportunities to calve. Culling open cows resulted in the EBV for 3-year-old cows being reduced from 8.27 ± 0.03 when open cows were retained to 7.60 ± 0.02 when they were culled. The magnitude of this effect decreased as cows grew older when they first failed to reproduce and were subsequently culled. Cows that did not fail over the 11 years of simulated data had an EBV of 9.43 ± 0.01 and 9.35 ± 0.01 based on analyses of the complete data and of the data in which cows that failed to reproduce were culled, respectively. Cows that had a missing observation for their second record had a significantly reduced EBV, but the corresponding effect at the fifth record was negligible. The current study illustrates that culling and management decisions, particularly those that impact the beginning of the trajectory of sustained reproductive success, can influence both the magnitude and the accuracy of the resulting EBV.
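As a hedged sketch of how an animal's total EBV follows from its random-regression coefficients: ages 2-11 are mapped onto a standardized scale, the third-order polynomial is evaluated at each age, and the age-specific values are summed. The Legendre basis and the coefficients below are assumptions for illustration only, not taken from the study.

```python
import numpy as np
from numpy.polynomial import legendre

def total_ebv(coeffs, ages=np.arange(2, 12)):
    """Sum of age-specific EBVs from third-order random-regression coefficients.

    coeffs: the animal's four random-regression coefficients (orders 0..3).
    Ages 2..11 are mapped to [-1, 1] and evaluated with Legendre polynomials;
    the basis is an assumption, since the abstract does not specify it."""
    x = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1
    age_ebv = legendre.legval(x, coeffs)     # EBV at each age
    return age_ebv.sum()

# hypothetical coefficients for one animal
print(total_ebv(np.array([0.9, 0.05, -0.02, 0.01])))
```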


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592095492
Author(s):  
Marco Del Giudice ◽  
Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
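A toy specification-curve sketch on simulated data: a small grid of (deliberately debatable) outlier rules and covariate choices is treated as the set of specifications, the effect of interest is re-estimated under each one, and the estimates are then inspected across the whole grid. All choices and names are hypothetical.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 300
covariate = rng.normal(size=n)
predictor = rng.normal(size=n)
outcome = 0.3 * predictor + 0.5 * covariate + rng.normal(size=n)

# Hypothetical analytic choices treated as (possibly) arbitrary specifications
outlier_rules = {"none": np.inf, "3sd": 3.0, "2.5sd": 2.5}
covariate_sets = {"without_covariate": False, "with_covariate": True}

estimates = {}
for (o_name, cut), (c_name, use_cov) in product(outlier_rules.items(),
                                                covariate_sets.items()):
    keep = np.abs(outcome - outcome.mean()) <= cut * outcome.std()
    X = [np.ones(keep.sum()), predictor[keep]]
    if use_cov:
        X.append(covariate[keep])
    beta, *_ = np.linalg.lstsq(np.column_stack(X), outcome[keep], rcond=None)
    estimates[(o_name, c_name)] = beta[1]      # coefficient of the predictor

# "Specification curve": effect estimates sorted across all specifications
for spec, b in sorted(estimates.items(), key=lambda kv: kv[1]):
    print(spec, round(b, 3))
```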


2008 ◽  
Vol 20 (5) ◽  
pp. 1211-1238 ◽  
Author(s):  
Gaby Schneider

Oscillatory correlograms are widely used to study neuronal activity that shows a joint periodic rhythm. In most cases, the statistical analysis of cross-correlation histograms (CCH) features is based on the null model of independent processes, and the resulting conclusions about the underlying processes remain qualitative. Therefore, we propose a spike train model for synchronous oscillatory firing activity that directly links characteristics of the CCH to parameters of the underlying processes. The model focuses particularly on asymmetric central peaks, which differ in slope and width on the two sides. Asymmetric peaks can be associated with phase offsets in the (sub-) millisecond range. These spatiotemporal firing patterns can be highly consistent across units yet invisible in the underlying processes. The proposed model includes a single temporal parameter that accounts for this peak asymmetry. The model provides approaches for the analysis of oscillatory correlograms, taking into account dependencies and nonstationarities in the underlying processes. In particular, the auto- and the cross-correlogram can be investigated in a joint analysis because they depend on the same spike train parameters. Particular temporal interactions such as the degree to which different units synchronize in a common oscillatory rhythm can also be investigated. The analysis is demonstrated by application to a simulated data set.
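A minimal sketch of computing a cross-correlation histogram from two spike trains that share a rhythm with a small phase offset; it produces a shifted central peak but does not implement the paper's model for asymmetric peak shapes. The rhythm, jitter, and offset values are hypothetical.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0):
    """Cross-correlation histogram (CCH): counts of spike pairs from two units
    as a function of the time lag between them, at bin_ms resolution."""
    lags = []
    for t in spikes_a:
        d = spikes_b - t
        lags.extend(d[np.abs(d) <= max_lag_ms])
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(lags, bins=edges)
    return edges[:-1] + bin_ms / 2, counts

# Hypothetical oscillatory units: unit B lags unit A by ~2 ms within each cycle
rng = np.random.default_rng(4)
cycle_starts = np.arange(0, 10_000, 25.0)                  # ~40 Hz rhythm, in ms
spikes_a = np.sort(cycle_starts + rng.normal(0, 2.0, cycle_starts.size))
spikes_b = np.sort(cycle_starts + 2.0 + rng.normal(0, 4.0, cycle_starts.size))

centers, counts = cross_correlogram(spikes_a, spikes_b)
print("CCH peak near lag (ms):", centers[np.argmax(counts)])
```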


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, we need to take this feature of the data into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set, whose underlying stochastic process is to some extent interdependent with the former, to improve the efficiency of the estimators for the relevant parameters of the model. The Vector AutoRegressive (VAR) model has proved to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) model under a monotone missing-data pattern, and the precision of the estimators is also derived. We then compare the bivariate modelling scheme with its univariate counterpart; more precisely, the univariate data set with missing observations is modelled by an AutoRegressive Moving Average, ARMA(2,1), model. We also analyse the behaviour of the first-order AutoRegressive model, AR(1), because of its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) model is preferable to those derived in the univariate context.
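A minimal complete-data sketch of VAR(1) estimation in Python (conditional maximum likelihood, equivalent to multivariate least squares); the paper's estimators for the monotone missing-data pattern are not reproduced here.

```python
import numpy as np

def fit_var1(y):
    """Conditional maximum likelihood (equivalently multivariate least squares)
    for a bivariate VAR(1):  y_t = c + A y_{t-1} + e_t,  e_t ~ N(0, Sigma).
    Complete-data version only."""
    y = np.asarray(y, float)                   # shape (T, 2)
    Y, X = y[1:], np.column_stack([np.ones(len(y) - 1), y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # rows: intercept, lag coefficients
    resid = Y - X @ B
    sigma = resid.T @ resid / len(Y)
    return B[0], B[1:].T, sigma                # c, A, Sigma

# hypothetical simulated series
rng = np.random.default_rng(5)
A_true = np.array([[0.6, 0.2], [0.1, 0.5]])
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 1, 2)
print(fit_var1(y))
```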

