compound symmetry
Recently Published Documents


TOTAL DOCUMENTS: 30 (FIVE YEARS: 10)

H-INDEX: 5 (FIVE YEARS: 1)

2021, Vol 58 (1), pp. 69-79
Author(s): Anna Szczepańska-Álvarez, Bogna Zawieja, Adolfo Álvarez

Summary: In this paper we present properties of an algorithm for determining the maximum likelihood estimators of the covariance matrix when two processes jointly affect the observations, one of which is partially modeled by a compound symmetry structure. We also perform a simulation study of the properties of an iteratively determined estimator of the covariance matrix.
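For reference, a compound symmetry covariance matrix for $p$ repeated measures has the standard textbook form (the paper's exact parameterization may differ):

$$\Sigma_{\mathrm{CS}} = \sigma^2\left[(1-\rho)\,I_p + \rho\,J_p\right],$$

where $I_p$ is the $p \times p$ identity matrix, $J_p$ is the $p \times p$ matrix of ones, and $\rho \in \left(-\tfrac{1}{p-1},\, 1\right)$ ensures positive definiteness; all variances are equal and every pair of measurements shares the common covariance $\sigma^2\rho$.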


Author(s): Nonvikan Karl-Augustt Alahassa, Alejandro Murua

We have built a Shallow Gibbs Network model as a Random Gibbs Network Forest that reaches the performance of a multilayer feedforward neural network with fewer parameters and fewer backpropagation iterations. To make this possible, we propose a novel optimization framework for our Bayesian shallow network, called the Double Backpropagation Scheme (DBS), which can also fit the data perfectly given an appropriate learning rate, and which is convergent and universally applicable to any Bayesian neural network problem. The contribution of this model is broad. First, it integrates the advantages of the Potts model, a very rich random-partition model, which we have also modified to propose a Complete Shrinkage version based on agglomerative clustering techniques. The model also takes advantage of Gibbs fields for the structure of its weight precision matrix, mainly through Markov random fields, and ultimately has five (5) variant structures: the Full-Gibbs, the Sparse-Gibbs, the Between-layer Sparse Gibbs (B-Sparse Gibbs for short), the Compound Symmetry Gibbs (CS-Gibbs for short), and the Sparse Compound Symmetry Gibbs (Sparse-CS-Gibbs) models. The Full-Gibbs structure mainly recalls fully connected models, while the other structures show how the model's complexity can be reduced through sparsity and parsimony. All of these models were evaluated on the Mulan project multivariate regression datasets, and the results are of interest in that the different structures reach different values of Mean Squared Error (MSE) and Relative Root Mean Squared Error (RRMSE). For the Shallow Gibbs Network model, we have found the perfect learning framework: the $(l_1, \boldsymbol{\zeta}, \epsilon_{dbs})$-DBS configuration, which combines the Universal Approximation Theorem and the DBS optimization with the (dist)-Nearest Neighbor-(h)-Taylor Series-Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model (itself a combination of a nearest-neighbor search for a good train-test association, the Taylor approximation theorem, and the multivariate interpolation method). It indicates that, with an appropriate number $l_1$ of neurons on the hidden layer, an optimal number $\zeta$ of DBS updates, an optimal DBS learning rate $\epsilon_{dbs}$, an optimal distance $dist_{opt}$ for the nearest-neighbor search in the training dataset for each test point $x_i^{\mathrm{test}}$, and an optimal order $h_{opt}$ of the Taylor approximation for the Perfect Multivariate Interpolation (dist-NN-(h)-TS-PMI) model once the DBS has overfitted the training dataset, the training and test errors converge to zero (0).
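As a rough illustration of the compound symmetry idea behind the CS-Gibbs variant (a hypothetical sketch; the abstract does not specify the authors' exact construction, and the function name is ours), a weight precision matrix with a compound symmetry structure could be built as follows:

import numpy as np

def compound_symmetry_precision(p, sigma2=1.0, rho=0.3):
    # Compound symmetry covariance: sigma2 * ((1 - rho) * I_p + rho * J_p).
    # rho must lie in (-1 / (p - 1), 1) for positive definiteness.
    cov = sigma2 * ((1.0 - rho) * np.eye(p) + rho * np.ones((p, p)))
    # The precision (inverse covariance) matrix parameterizes a Gaussian
    # prior over a layer's weights; only two free parameters are needed.
    return np.linalg.inv(cov)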


2020, Vol 110 (12), pp. 1908-1922
Author(s): F. Dalla Lana, P. A. Paul, R. Minyo, P. Thomison, L. V. Madden

Trials were conducted to quantify the stability (i.e., the lack of G × E interaction) of 15 maize hybrids with respect to Gibberella ear rot (GER; caused by Fusarium graminearum) and deoxynivalenol (DON) contamination of grain across 30 Ohio environments (3 years × 10 locations). In each environment, one plot of each hybrid was planted and 10 ears per plot were inoculated via the silk channel. GER severity (proportion of ear area diseased) and DON contamination of grain (ppm) were quantified. Multiple rank-based methods, including Kendall’s concordance coefficient (W) and Piepho’s U, were used to quantify hybrid stability. The results provided insufficient evidence of crossover G × E interaction of ranks, with W greater than zero for GER (W = 0.28) and DON (W = 0.26), and U not statistically significant for either variable (P > 0.20). Linear mixed models (LMMs) were also used to quantify hybrid stability, accounting for crossover or noncrossover G × E interaction of transformed observed data. Based on information criteria and likelihood ratio tests for the GER and DON response variables, the models with more complex variance-covariance structures (heterogeneous compound symmetry and factor-analytic) provided a better fit than the model with the simpler compound symmetry structure, indicating that one or more hybrids differed in stability. Overall, hybrids were stable based on the rank-based methods, which indicated a lack of crossover G × E interaction, but the LMMs identified a few hybrids that were sensitive to environment. Resistant hybrids were generally more stable than susceptible hybrids.
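Kendall's coefficient of concordance W, the first rank-based statistic mentioned above, can be computed from an environments-by-hybrids score matrix roughly as follows (a minimal sketch ignoring tie corrections; the function name and interface are illustrative, not taken from the paper):

import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    # scores: (m environments) x (n hybrids) matrix of GER severity or DON values.
    m, n = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)  # rank hybrids within each environment
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()   # spread of the rank sums
    # W = 1 means the environments rank the hybrids identically; W = 0 means no agreement.
    return 12.0 * s / (m ** 2 * (n ** 3 - n))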


PLoS ONE, 2020, Vol 15 (11), pp. e0242705
Author(s): Igor Ferreira Coelho, Marco Antônio Peixoto, Jeniffer Santana Pinto Coelho Evangelista, Rodrigo Silva Alves, Suellen Sales, ...

An efficient and informative statistical method to analyze genotype-by-environment interaction (GxE) is needed in maize breeding programs. Thus, the objective of this study was to compare the effectiveness of multiple-trait models (MTM), random regression models (RRM), and compound symmetry models (CSM) in the analysis of multi-environment trials (MET) in maize breeding. For this, a data set of 84 maize hybrids evaluated across four environments for the trait grain yield (GY) was used. Variance components were estimated by restricted maximum likelihood (REML), and genetic values were predicted by best linear unbiased prediction (BLUP). The best-fit MTM, RRM, and CSM were identified by the Akaike information criterion (AIC), and the significance of the genetic effects was tested using the likelihood ratio test (LRT). Genetic gains were predicted considering four selection intensities (5, 10, 15, and 20 hybrids). The selected MTM, RRM, and CSM all fitted heterogeneous residual variances, and for the RRM the genetic effects were modeled by Legendre polynomials of order two. Genetic variability among the maize hybrids was assessed for GY. In general, estimates of broad-sense heritability, selective accuracy, and predicted selection gains were slightly higher when obtained using MTM and RRM. Thus, considering the criterion of parsimony and the possibility of predicting genetic values of hybrids for untested environments, RRM is the preferable approach for analyzing MET in maize breeding.
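The model-selection step described above rests on two standard quantities, sketched generically below (not the authors' code; note also that likelihood ratio tests of variance components on a boundary are often referred to mixtures of chi-square distributions rather than the plain chi-square used here):

from scipy.stats import chi2

def aic(loglik, n_params):
    # Akaike information criterion: smaller values indicate a better fit.
    return 2.0 * n_params - 2.0 * loglik

def likelihood_ratio_test(loglik_reduced, loglik_full, df_diff):
    # Twice the log-likelihood gain of the richer model, compared against
    # a chi-square with df_diff degrees of freedom.
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df_diff)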


2019, Vol 56 (2), pp. 165-181
Author(s): Adam Mieldzioc, Monika Mokrzycka, Aneta Sawikowska

Summary: Modern chromatography relies largely on gas chromatography coupled with mass spectrometry (GC–MS). For a data set concerning the drought resistance of barley, the problem of characterizing a covariance structure is investigated using two methods, the first based on the Frobenius norm and the second on the entropy loss function. For the four covariance structures considered (compound symmetry, tridiagonal and pentadiagonal Toeplitz, and first-order autoregression), the Frobenius norm indicates the compound symmetry matrix and first-order autoregression as the most relevant, whilst the entropy loss function gives a slight indication in favor of the compound symmetry structure.
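The two discrepancy measures can be sketched as follows, for a sample covariance matrix S and a structured candidate Sigma (a minimal illustration; fitting the structure parameters themselves is not shown):

import numpy as np

def frobenius_distance(S, Sigma):
    # Frobenius norm of the difference between the sample covariance
    # and the structured candidate: smaller means a closer fit.
    return np.linalg.norm(S - Sigma, ord="fro")

def entropy_loss(S, Sigma):
    # Stein's entropy loss: tr(Sigma^{-1} S) - log det(Sigma^{-1} S) - p.
    p = S.shape[0]
    A = np.linalg.solve(Sigma, S)      # Sigma^{-1} S without an explicit inverse
    sign, logdet = np.linalg.slogdet(A)
    return np.trace(A) - logdet - p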


Mathematics, 2019, Vol 7 (4), pp. 378
Author(s): Chalikias

In this paper, we construct optimal two-treatment repeated measurement designs for estimating direct effects, and we examine the case of compound symmetry dependence. We present the model and the design that minimizes the variance of the estimated difference between the two treatments. The optimal designs with dependent observations under a compound symmetry model are the same as in the case of independent observations.
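A sketch of the intuition (ours, not the paper's proof): for within-subject observations with compound symmetry covariance $\Sigma = \sigma^2[(1-\rho)I_n + \rho J_n]$ and any contrast vector $c$ with $c^\top \mathbf{1} = 0$,

$$c^\top \Sigma c = \sigma^2\left[(1-\rho)\,c^\top c + \rho\,(c^\top \mathbf{1})^2\right] = \sigma^2(1-\rho)\,c^\top c,$$

which is proportional to the variance $\sigma^2 c^\top c$ obtained under independence; a design that minimizes the variance of the estimated treatment difference under independence therefore also minimizes it under compound symmetry.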

