Up hill, down dale: quantitative genetics of curvaceous traits

2005 ◽  
Vol 360 (1459) ◽  
pp. 1443-1455 ◽  
Author(s):  
Karin Meyer ◽  
Mark Kirkpatrick

‘Repeated’ measurements for a trait and individual, taken along some continuous scale such as time, can be thought of as representing points on a curve, where both means and covariances along the trajectory can change, gradually and continually. Such traits are commonly referred to as ‘function-valued’ (FV) traits. This review shows that standard quantitative genetic concepts extend readily to FV traits, with individual statistics, such as estimated breeding values and selection response, replaced by corresponding curves, modelled by respective functions. Covariance functions are introduced as the FV equivalent to matrices of covariances. Considering the class of functions represented by a regression on the continuous covariable, FV traits can be analysed within the linear mixed model framework commonly employed in quantitative genetics, giving rise to the so-called random regression model. Estimation of covariance functions, either indirectly from estimated covariances or directly from the data using restricted maximum likelihood or Bayesian analysis, is considered. It is shown that direct estimation of the leading principal components of covariance functions is feasible and advantageous. Extensions to multi-dimensional analyses are discussed.
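As an illustrative sketch of the covariance-function idea (the toy coefficient matrix below is made up, not from the review): with a Legendre-polynomial regression basis phi(t), a fitted coefficient matrix K induces the covariance function G(t1, t2) = phi(t1)' K phi(t2), so covariances between any pair of points on the trajectory fall out of a small matrix product.

```python
import numpy as np

def legendre_basis(t, order, t_min=0.0, t_max=1.0):
    """Evaluate Legendre polynomials up to `order` at times t, after
    mapping [t_min, t_max] onto [-1, 1]."""
    x = 2.0 * (np.asarray(t, float) - t_min) / (t_max - t_min) - 1.0
    P = [np.ones_like(x), x]   # P0 = 1, P1 = x
    for k in range(1, order):  # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        P.append(((2 * k + 1) * x * P[k] - k * P[k - 1]) / (k + 1))
    return np.stack(P[: order + 1], axis=-1)

def covariance_function(K, t1, t2):
    """G(t1, t2) = phi(t1)' K phi(t2): covariance between any two ages."""
    order = K.shape[0] - 1
    return legendre_basis(t1, order) @ K @ legendre_basis(t2, order).T

K = np.array([[2.0, 0.5],        # toy coefficient matrix; in practice
              [0.5, 1.0]])       # estimated by REML or Bayesian analysis
ages = np.linspace(0.0, 1.0, 5)
G = covariance_function(K, ages, ages)
print(G.shape, bool(np.allclose(G, G.T)))  # (5, 5) True
```

Evaluating the function on a grid of ages recovers a conventional covariance matrix, which is how a low-order coefficient matrix can summarize covariances along the whole trajectory.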

2020 ◽  
Vol 110 (10) ◽  
pp. 1623-1631
Author(s):  
Karyn L. Reeves ◽  
Clayton R. Forknall ◽  
Alison M. Kelly ◽  
Kirsty J. Owen ◽  
Joshua Fanning ◽  
...  

The root lesion nematode (RLN) species Pratylenchus thornei and P. neglectus are widely distributed within cropping regions of Australia and have been shown to limit grain production. Field experiments that compare the performance of cultivars in the presence of RLNs investigate management options for growers by identifying cultivars with resistance (limiting nematode reproduction) and tolerance (yielding well in the presence of nematodes). A novel experimental design approach for RLN experiments is proposed in which the observed RLN density, measured prior to sowing, is used to condition the randomization of cultivars to field plots. This approach ensured that all cultivars were exposed to consistent ranges of RLN densities, so that valid assessments of relative cultivar tolerance and resistance could be derived. Using data from a field experiment designed with the conditioned randomization approach and conducted at Formartin, Australia, tolerance and resistance were analyzed in a linear mixed model framework. Yield response curves were derived using a random regression approach, and curves modeling the change in RLN densities between sowing and harvest were fitted using splines to account for nonlinearity. Groups of cultivars sharing similar resistance levels could be identified. Comparing the slopes of yield response curves for cultivars in the same resistance class revealed differing tolerance levels among cultivars with equivalent exposure to both presowing and postharvest RLN densities. As such, the proposed design and analysis approach allowed tolerance to be assessed independently of resistance.
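A minimal sketch of the conditioned-randomization idea (the blocking rule and all numbers are illustrative assumptions, not the authors' exact algorithm): order the plots by their presowing RLN counts, then randomize the full cultivar set within each consecutive density block, so every cultivar spans the observed density range.

```python
import random

def conditioned_randomization(rln_density, cultivars, seed=1):
    """Order plots by presowing RLN density and randomize the full set of
    cultivars within each consecutive density block, so every cultivar is
    exposed to the whole observed range of densities."""
    rng = random.Random(seed)
    plots = sorted(range(len(rln_density)), key=lambda i: rln_density[i])
    layout = {}
    for start in range(0, len(plots), len(cultivars)):
        block = plots[start:start + len(cultivars)]
        for plot, cultivar in zip(block, rng.sample(cultivars, len(block))):
            layout[plot] = cultivar
    return layout

densities = [120, 15, 300, 45, 80, 500, 10, 220]   # hypothetical counts
layout = conditioned_randomization(densities, ["A", "B", "C", "D"])
print(sorted(layout.items()))
```

Each cultivar appears exactly once in the low-density block and once in the high-density block, which is what makes tolerance comparable across cultivars at matched nematode exposures.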


2021 ◽  
Article 0272989X2110038
Author(s):  
Felix Achana ◽  
Daniel Gallacher ◽  
Raymond Oppong ◽  
Sungwook Kim ◽  
Stavros Petrou ◽  
...  

Economic evaluations conducted alongside randomized controlled trials are a popular vehicle for generating high-quality evidence on the incremental cost-effectiveness of competing health care interventions. Typically, in these studies, resource use (and, by extension, economic costs) and clinical (or preference-based health) outcome data are collected prospectively for trial participants to estimate the joint distribution of incremental costs and incremental benefits associated with the intervention. In this article, we extend the generalized linear mixed-model framework to enable simultaneous modeling of multiple outcomes of mixed data types, such as those typically encountered in trial-based economic evaluations, taking into account the correlation of outcomes due to repeated measurements on the same individual and other clustering effects. We provide new wrapper functions to estimate the models in Stata and R by maximum and restricted maximum quasi-likelihood and compare the performance of the new routines with alternative implementations across a range of statistical programming packages. Empirical applications using observed and simulated data from clinical trials suggest that the new methods produce results broadly similar to those from Stata’s merlin and gsem commands and a Bayesian implementation in WinBUGS. We highlight that, although these empirical applications focus primarily on trial-based economic evaluations, the new methods can be generalized to other health economic investigations characterized by multivariate hierarchical data structures.
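To make the data structure concrete, here is a toy simulation (an assumption-laden sketch, not the authors' code or wrapper functions): patient-level cost and QALY outcomes share a centre-level random effect, which is exactly the clustering a joint mixed model must accommodate; the crude arm-difference estimates below ignore that clustering.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(n_centres=25, n_per=40, delta_cost=150.0, delta_qaly=0.05):
    """Toy multicentre trial: per-patient cost and QALY outcomes that share
    a centre-level random effect u, inducing within-centre correlation.
    Columns: centre, treatment arm, cost, QALY."""
    rows = []
    for c in range(n_centres):
        u = rng.normal()                       # shared centre effect
        for _ in range(n_per):
            trt = int(rng.integers(0, 2))
            cost = 1000 + delta_cost * trt + 120 * u + rng.normal(0, 200)
            qaly = 0.70 + delta_qaly * trt + 0.02 * u + rng.normal(0, 0.05)
            rows.append((c, trt, cost, qaly))
    return np.array(rows)

data = simulate_trial()
trt = data[:, 1] == 1
inc_cost = data[trt, 2].mean() - data[~trt, 2].mean()   # incremental cost
inc_qaly = data[trt, 3].mean() - data[~trt, 3].mean()   # incremental QALYs
print(round(float(inc_cost), 1), round(float(inc_qaly), 3))
```

A joint mixed-model analysis would model both outcomes simultaneously with the centre effect, tightening the joint distribution of incremental costs and benefits relative to these naive differences.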


2015 ◽  
Author(s):  
Sang Hong Lee ◽  
Julius van der Werf

We have developed an algorithm for genetic analysis of complex traits using genome-wide SNPs in a linear mixed model framework. Compared to current standard REML software based on the mixed model equations, our method can be more than 1000 times faster. The advantage is largest when there is only a single genetic covariance structure. The method is particularly useful for multivariate analysis, including multi-trait models and random regression models for studying reaction norms. We applied the proposed method to publicly available mouse and human data and discuss its advantages and limitations.
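The reported speed-up is consistent with the standard spectral trick, sketched below under illustrative assumptions (toy data; ML rather than full REML profiling, for brevity): eigendecompose the genomic relationship matrix once, rotate the phenotypes, and every likelihood evaluation on the variance-ratio grid then costs only O(n).

```python
import numpy as np

def profile_loglik(h2, s, y_rot):
    """Log-likelihood (up to a constant) of the variance ratio h2 after
    rotating by the GRM's eigenvectors: h2*G + (1-h2)*I is diagonal in
    that basis, so each evaluation is O(n)."""
    d = h2 * s + (1.0 - h2)            # eigenvalues of rotated covariance
    sigma2 = np.mean(y_rot**2 / d)     # profiled-out total variance
    return -0.5 * (np.sum(np.log(d)) + len(y_rot) * np.log(sigma2))

rng = np.random.default_rng(7)
n, m = 500, 200
Z = rng.standard_normal((n, m))
G = Z @ Z.T / m                        # toy genomic relationship matrix
s, U = np.linalg.eigh(G)               # one-off O(n^3) decomposition
g = U @ (np.sqrt(np.maximum(s, 0.0)) * rng.standard_normal(n))
y = np.sqrt(0.6) * g + np.sqrt(0.4) * rng.standard_normal(n)  # h2 = 0.6
y_rot = U.T @ y                        # rotate once, reuse for every h2
grid = np.linspace(0.01, 0.99, 99)
h2_hat = grid[np.argmax([profile_loglik(h, s, y_rot) for h in grid])]
print(round(float(h2_hat), 2))
```

Because the rotation is done once, a grid or Newton search over the variance ratio avoids re-solving the mixed model equations at every iteration, which is where the large speed gains for a single genetic covariance structure come from.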


Author(s):  
Ying Zhang ◽  
Yuxin Song ◽  
Jin Gao ◽  
Hengyu Zhang ◽  
Ning Yang ◽  
...  

A hierarchical random regression model (Hi-RRM) was extended into a genome-wide association analysis for longitudinal data, which significantly reduced the dimensionality of repeated measurements. The Hi-RRM first models the phenotypic trajectory of each individual using an RRM and then associates the phenotypic regression coefficients with genetic markers using a multivariate linear mixed model (mvLMM). By spectral decomposition of the genomic relationship and regression covariance matrices, the mvLMM is transformed into a multiple linear regression, improving computing efficiency when implementing mvLMM associations in the efficient mixed-model association expedited (EMMAX) framework. Compared with existing RRM-based association analyses, the statistical utility of Hi-RRM was demonstrated by simulation experiments. The method was also applied to identify quantitative trait nucleotides controlling the growth pattern of egg weight in poultry data.
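A sketch of the first Hi-RRM stage (illustrative only; the paper's basis functions and model details may differ): each individual's repeated records are regressed on polynomials of time, and the fitted coefficients become low-dimensional derived phenotypes passed on to the association stage.

```python
import numpy as np

rng = np.random.default_rng(3)

def stage_one_coefficients(times, records, order=2):
    """Hi-RRM stage 1 (sketch): regress each individual's longitudinal
    records on polynomials of time; the fitted coefficients are the
    low-dimensional phenotypes for the marker-association stage."""
    X = np.vander(times, order + 1, increasing=True)   # 1, t, t^2, ...
    coef, *_ = np.linalg.lstsq(X, records.T, rcond=None)
    return coef.T                # one coefficient vector per individual

times = np.linspace(0.0, 1.0, 8)                     # e.g. weeks of lay
true = rng.normal([50.0, 8.0, -3.0], 1.0, (100, 3))  # per-bird curves
records = true @ np.vander(times, 3, increasing=True).T
records += rng.normal(0.0, 0.5, records.shape)       # measurement noise
coef = stage_one_coefficients(times, records)
print(coef.shape)  # (100, 3): intercept, slope, curvature per individual
```

Replacing eight repeated measurements per individual by three regression coefficients is the dimensionality reduction that makes the second-stage mvLMM association tractable.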


2019 ◽  
Vol 65 (5) ◽  
pp. 593-601
Author(s):  
James A Westfall ◽  
Megan B E Westfall ◽  
KaDonna C Randolph

Tree crown ratio is useful in various applications such as prediction of tree mortality probabilities, growth potential, and fire behavior. Crown ratio is commonly assessed in two ways: (1) compacted crown ratio (CCR; lower branches visually moved upwards to fill missing foliage gaps) and (2) uncompacted crown ratio (UNCR; no adjustment for missing foliage). The national forest inventory of the United States measures CCR on all trees, whereas UNCR is assessed on only a subset of trees. Models for 27 species groups are presented to predict UNCR for the northern United States. The model formulation is consistent with those developed for other US regions while also accounting for the presence of repeated measurements and heterogeneous variance in a mixed-model framework. Ignoring random-effects parameters, fit index values ranged from 0.43 to 0.78 and root mean squared error spanned 0.08–0.15; considerable improvements in both goodness-of-fit statistics were realized by including the random effects. UNCR predictions agreed closely with models developed for the southern United States, whereas comparisons with models used in Forest Vegetation Simulator variants indicated poor association. The models provide additional analytical flexibility for using the breadth of northern region data in applications where UNCR is the appropriate crown characteristic.
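As a toy illustration of this bounded-response, repeated-measures setting (the model form and coefficients here are assumptions, not the published models): relating UNCR to CCR on the logit scale keeps predictions in (0, 1), with a plot-level random intercept inducing the within-plot correlation that a mixed-model fit must account for.

```python
import numpy as np

rng = np.random.default_rng(11)
logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1 / (1 + np.exp(-x))

# hypothetical generating model: UNCR related to CCR on the logit scale,
# with a plot-level random intercept u creating within-plot correlation
n_plots, n_trees = 40, 12
u = rng.normal(0, 0.15, n_plots)
ccr = rng.uniform(0.2, 0.8, (n_plots, n_trees))
uncr = expit(0.3 + 1.1 * logit(ccr) + u[:, None]
             + rng.normal(0, 0.1, ccr.shape))

# pooled fixed-effects fit on the logit scale; a full analysis would also
# estimate the plot variance and heterogeneous residuals by REML
X = np.column_stack([np.ones(ccr.size), logit(ccr).ravel()])
beta, *_ = np.linalg.lstsq(X, logit(uncr).ravel(), rcond=None)
print(np.round(beta, 2))  # roughly recovers the assumed 0.3 and 1.1
```

The logit transform is one simple way to keep crown-ratio predictions bounded; the pooled fit ignores the plot effect, which is precisely what the random-effects terms in the published models improve upon.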


2021 ◽  
Author(s):  
Lukas Roth ◽  
María Xosé Rodríguez-Álvarez ◽  
Fred van Eeuwijk ◽  
Hans-Peter Piepho ◽  
Andreas Hund

Decision-making in breeding increasingly depends on the ability to capture and predict crop responses to changing environmental factors. Advances in crop modeling as well as high-throughput field phenotyping (HTFP) hold promise to provide such insights. Processing HTFP data is an interdisciplinary task that requires broad knowledge of experimental design, measurement techniques, feature extraction, dynamic trait modeling, and prediction of genotypic values using statistical models. To get an overview of sources of variation in HTFP, we develop a general plot-level model for repeated measurements. Based on this model, we propose a seamless stage-wise process that allows estimated means and variances to be carried forward from stage to stage and approximates the gold standard of a single-stage analysis. The process builds on the extraction of three intermediate trait categories: (1) timing of key stages, (2) quantities at defined time points or periods, and (3) dose-response curves. In a first stage, these intermediate traits are extracted from time series of low-level traits (e.g., canopy height) using P-splines and the quarter of maximum elongation rate (QMER) method, as well as final height percentiles. In a second and third stage, the extracted traits are further processed in a stage-wise linear mixed model analysis. Using a wheat canopy growth simulation to generate canopy height time series, we demonstrate the suitability of the stage-wise process for traits of the first two categories. Results indicate that, for the first stage, the P-spline/QMER method was more robust than the percentile method. In the subsequent two-stage linear mixed model processing, weighting the second and third stages with error variance estimates from the previous stages improved the root mean squared error. We conclude that processing phenomics data in stages is a feasible approach if appropriate weighting is used throughout. P-splines in combination with the QMER method are suitable tools to extract the timing of key stages and quantities at defined time points from HTFP data.
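A minimal sketch of the stage-one extraction idea (a Whittaker-style difference penalty stands in for a full B-spline P-spline; all settings are illustrative): smooth the canopy height series, then take the window where the growth rate exceeds a quarter of its maximum, in the spirit of the QMER method.

```python
import numpy as np

def whittaker_smooth(y, lam=10.0):
    """Difference-penalised smoother (a simplified stand-in for a P-spline):
    minimises ||y - f||^2 + lam * ||D2 f||^2 for second differences D2."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2,
                           np.asarray(y, float))

def qmer_window(t, height, lam=10.0):
    """QMER idea: smooth the height series, then report when the growth
    rate first and last exceeds a quarter of its maximum."""
    rate = np.gradient(whittaker_smooth(height, lam), t)
    above = np.where(rate >= rate.max() / 4.0)[0]
    return t[above[0]], t[above[-1]]

t = np.linspace(0.0, 60.0, 61)                     # days after sowing
height = 0.8 / (1.0 + np.exp(-(t - 30.0) / 5.0))   # logistic canopy growth
start, stop = qmer_window(t, height + 0.005 * np.sin(t))
print(start < 30.0 < stop)  # elongation window brackets the inflection
```

The two endpoints are "timing of key stages" traits; smoothing before differentiating is what keeps the rate threshold robust to measurement noise in the raw time series.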


2019 ◽  
Vol 11 (24) ◽  
pp. 2897 ◽  
Author(s):  
Yuhui Zheng ◽  
Feiyang Wu ◽  
Hiuk Jae Shim ◽  
Le Sun

Hyperspectral unmixing is a key preprocessing technique for hyperspectral image analysis. To further improve unmixing performance, in this paper a nonlocal low-rank prior is integrated with spatial smoothness and spectral collaborative sparsity constraints for unmixing hyperspectral data. The proposed method is based on the fact that hyperspectral images exhibit self-similarity in a nonlocal sense and smoothness in a local sense. To exploit the spatial self-similarity, nonlocal cubic patches are grouped together to compose a low-rank matrix. Then, within the linear mixed model framework, a nuclear-norm constraint is imposed on the abundance matrix of these similar patches to enforce the low-rank property. In addition, local spatial information and spectral characteristics are taken into account by introducing total variation (TV) regularization and collaborative sparse terms, respectively. Finally, experiments on two simulated data sets and two real data sets show that the proposed algorithm outperforms other state-of-the-art algorithms.
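The nuclear-norm step can be sketched as singular-value thresholding applied to a matrix of abundances stacked from nonlocally similar patches (an illustrative fragment with synthetic data, not the full unmixing algorithm):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the nuclear
    norm, which enforces the low-rank prior on a group of patches."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

rng = np.random.default_rng(5)
# rows = abundance vectors from nonlocally similar patches: the group is
# nearly low-rank, and observation noise makes it numerically full rank
clean = rng.random((30, 2)) @ rng.random((2, 8))     # rank-2 structure
noisy = clean + 0.02 * rng.standard_normal((30, 8))
denoised = svt(noisy, tau=0.5)
print(np.linalg.matrix_rank(noisy), np.linalg.matrix_rank(denoised))
```

Thresholding the singular values collapses the noise-inflated rank back toward the shared low-rank structure of the patch group, which is the effect the nuclear-norm term contributes inside the full optimization alongside the TV and collaborative-sparsity terms.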

