information matrices
Recently Published Documents


TOTAL DOCUMENTS

83
(FIVE YEARS 18)

H-INDEX

14
(FIVE YEARS 1)

2021 ◽  
Vol 11 (1) ◽  
pp. 1
Author(s):  
Oluwole A Nuga ◽  
Abba Zakirai Abdulhamid ◽  
Shobanke Emmanuel Omobola Kayode

This study examines design preference in Completely Randomized (CR) split-plot experiments involving a random whole-plot factor effect and a fixed sub-plot factor effect. Much previous work on optimally designing split-plot experiments assumed only factors with fixed levels; cases where interest lies in random factors have received little attention. These problems resemble the optimal design of experiments for the fixed parameters of non-linear models, because the solution relies on the unknown parameters. Designs from a Design Space (DS) containing an exhaustive list of balanced designs for a fixed sample size were compared for optimality using the product of the determinants of the derived information matrices of the Maximum Likelihood (ML) estimators corresponding to the random and fixed effects in the model. Different variance-component configurations, in which the variances of the factor effects are larger than the error variance, were used empirically for the comparisons. The results reveal that the D-optimal designs are those in which the number of whole-plot factor levels exceeds the number of replicates within each whole-plot level.
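As a minimal sketch of the kind of determinant-based comparison described above (using a simplified fixed-effects analogue, not the paper's split-plot model), candidate designs with the same run size can be ranked by the D-criterion det(X'X), the determinant of the information matrix of the least-squares estimator:

```python
# Simplified illustration of D-optimal design comparison: candidate
# designs are ranked by det(X'X), the determinant of the information
# matrix.  The designs below are illustrative, not from the paper.
import numpy as np

def d_criterion(X):
    """Determinant of the information matrix X'X for design matrix X."""
    return np.linalg.det(X.T @ X)

# Two candidate two-factor designs with the same run size (n = 4):
# a full 2x2 factorial vs. a design that replicates one factor setting.
full_factorial = np.array([[1, -1, -1],
                           [1, -1,  1],
                           [1,  1, -1],
                           [1,  1,  1]], dtype=float)
replicated     = np.array([[1, -1, -1],
                           [1, -1, -1],
                           [1,  1,  1],
                           [1,  1,  1]], dtype=float)

scores = {"full factorial": d_criterion(full_factorial),
          "replicated":     d_criterion(replicated)}
best = max(scores, key=scores.get)
print(best, scores)  # the confounded "replicated" design scores det = 0
```

The replicated design confounds the two factor columns, so its information matrix is singular and its D-criterion is zero; the orthogonal factorial wins.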


Author(s):  
Aaron Z. Goldberg ◽  
José L. Romero ◽  
Ángel S. Sanz ◽  
Luis L. Sánchez-Soto

Quantum Fisher information matrices (QFIMs) are fundamental to estimation theory: they encode the ultimate limit for the sensitivity with which a set of parameters can be estimated using a given probe. Since the limit invokes the inverse of a QFIM, an immediate question is what to do with singular QFIMs. Moreover, the QFIM may be discontinuous, forcing one away from the paradigm of regular statistical models. These questions of nonregular quantum statistical models are present in both single- and multiparameter estimation. Geometrically, singular QFIMs occur when the curvature of the metric vanishes in one or more directions in the space of probability distributions, while QFIMs have discontinuities when the density matrix has parameter-dependent rank. We present a nuanced discussion of how to deal with each of these scenarios, stressing the physical implications of singular QFIMs and the ensuing ramifications for quantum metrology.
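A classical toy analogue (not from the paper) makes the singular case concrete: if two parameters enter the model only through their sum, the Fisher information matrix acquires a zero eigenvalue and cannot be inverted, because one direction in parameter space is not identifiable:

```python
# Classical toy analogue of a singular (Q)FIM: for x ~ N(theta1 + theta2, 1)
# the two parameters appear only through their sum, so the Fisher
# information matrix J'J has a zero eigenvalue -- the difference
# theta1 - theta2 cannot be estimated from the data.
import numpy as np

# Jacobian of the mean with respect to (theta1, theta2):
J = np.array([[1.0, 1.0]])
fisher = J.T @ J               # 2x2 Fisher information matrix

eigenvalues = np.linalg.eigvalsh(fisher)
print(eigenvalues)             # one eigenvalue is (numerically) zero
```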


2021 ◽  
Author(s):  
◽  
Nuovella Williams

<p>The advent of new technology for extracting genetic information from tissue samples has increased the availability of suitable data for finding genes controlling complex traits in plants, animals and humans. Quantitative trait locus (QTL) analysis relies on statistical methods to interpret genetic data in the presence of phenotype data and possibly other factors, such as environmental factors. The goal is both to detect the presence of QTL with significant effects on trait value and to estimate their locations on the genome relative to those of known markers. This thesis reviews commonly used statistical techniques for QTL mapping in experimental populations. Regression and likelihood methods are discussed. The mixture-modelling approach to QTL mapping is explored in some detail. This thesis presents new matrix formulas for exact and convenient calculation of both the observed and Fisher information matrices in the context of multinomial mixtures of univariate normal distributions. An extension to composite interval mapping is proposed, together with a hypothesis testing strategy robust enough to detect existing QTL in the presence of slight deviations from model assumptions while reducing false detections.</p>
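The observed information matrix the thesis derives in closed form can be illustrated numerically: for a toy two-component normal mixture (equal weights and unit variances, an assumption made here for brevity), it is the negative Hessian of the log-likelihood, approximated below by central finite differences:

```python
# Hedged numerical illustration: the observed information matrix of a
# two-component normal mixture, computed as the negative Hessian of the
# log-likelihood by central finite differences (the thesis gives exact
# matrix formulas; this sketch only checks the idea numerically).
import numpy as np

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])

def loglik(theta):
    """Equal-weight, unit-variance normal mixture log-likelihood in (mu1, mu2)."""
    mu1, mu2 = theta
    f = 0.5 * np.exp(-0.5 * (data - mu1) ** 2) + \
        0.5 * np.exp(-0.5 * (data - mu2) ** 2)
    return np.sum(np.log(f / np.sqrt(2 * np.pi)))

def observed_information(theta, h=1e-4):
    """Negative Hessian of the log-likelihood, by central differences."""
    theta = np.asarray(theta, dtype=float)
    p = len(theta)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            e_i, e_j = np.eye(p)[i] * h, np.eye(p)[j] * h
            H[i, j] = (loglik(theta + e_i + e_j) - loglik(theta + e_i - e_j)
                       - loglik(theta - e_i + e_j) + loglik(theta - e_i - e_j)) / (4 * h * h)
    return -H

info = observed_information([0.0, 3.0])
print(info)  # symmetric and positive definite near the true parameters
```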




2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Maria Iannario ◽  
Anna Clara Monti ◽  
Pietro Scalera

Abstract The choice of the number m of response categories is a crucial issue in the categorization of a continuous response. The paper exploits the property of Proportional Odds Models which makes it possible to generate ordinal responses with different numbers of categories from the same underlying variable. It investigates the asymptotic efficiency of the estimators of the regression coefficients and the accuracy of the derived inferential procedures as m varies. The analysis is based on models with closed-form information matrices, so that the asymptotic efficiency can be evaluated analytically without the need for simulations. The paper proves that a finer categorization augments the information content of the data, and consequently shows that the asymptotic efficiency and the power of the tests on the regression coefficients increase with m. The impact on the efficiency of the estimators of the loss of information produced by merging categories is also considered, highlighting its risks, especially when performed in its extreme form of dichotomization. Furthermore, the appropriate value of m for various sample sizes is explored, pointing out that a large number of categories can offset the limited information of a small sample through better data quality. Finally, two case studies, on the quality of life of chemotherapy patients and on the perception of pain, based on discretized continuous scales, illustrate the main findings of the paper.
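The categorization device the paper exploits can be sketched as follows: under a latent-variable view of the proportional odds model, ordinal responses with any number of categories m arise by thresholding the same continuous variable (the logistic latent draw and quantile cutpoints below are illustrative assumptions, not the paper's setup):

```python
# Sketch: ordinal responses with different m generated from the same
# underlying continuous variable by thresholding at cutpoints.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.logistic(loc=0.0, scale=1.0, size=1000)  # latent continuous response

def categorize(y, m):
    """Discretize a continuous response into m ordered categories 1..m
    using equally spaced quantile cutpoints."""
    cuts = np.quantile(y, np.linspace(0, 1, m + 1)[1:-1])
    return np.digitize(y, cuts) + 1

y3 = categorize(latent, 3)   # coarse categorization
y7 = categorize(latent, 7)   # finer categorization of the same variable
print(sorted(set(y3)), sorted(set(y7)))
```

Merging categories (smaller m) discards ordering information about the latent variable, which is the efficiency loss the paper quantifies.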


Computation ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 94
Author(s):  
Monika Arora ◽  
N. Rao Chaganty

Count data with excessive zeros are ubiquitous in healthcare, medical, and scientific studies. Numerous articles show how to fit Poisson and other models that account for the excessive zeros. However, in many situations, besides zero, the frequency of another count k tends to be higher in the data. The zero- and k-inflated Poisson (ZkIP) model is appropriate in such situations. The ZkIP distribution is essentially a mixture of a Poisson distribution and degenerate distributions at the points zero and k. In this article, we study the fundamental properties of this mixture distribution. Using a stochastic representation, we provide details for obtaining parameter estimates of the ZkIP regression model using the Expectation–Maximization (EM) algorithm for a given data set. We derive the standard errors of the EM estimates by computing the complete, missing, and observed data information matrices. We present analyses of two real-life data sets using the methods outlined in the paper.
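The three-component mixture can be sketched directly (parameter names `pi0`, `pik`, `lam` are illustrative, not the paper's notation), together with the E-step ingredient of the EM algorithm:

```python
# Sketch of the ZkIP probability mass function: a mixture of point
# masses at 0 and k with a Poisson(lam) component.
import math

def zkip_pmf(y, lam, pi0, pik, k):
    """P(Y = y) under the zero- and k-inflated Poisson model."""
    poisson = math.exp(-lam) * lam**y / math.factorial(y)
    p = (1.0 - pi0 - pik) * poisson
    if y == 0:
        p += pi0           # extra mass at zero
    if y == k:
        p += pik           # extra mass at k
    return p

# The probabilities sum to (numerically) one over a wide support:
total = sum(zkip_pmf(y, lam=2.0, pi0=0.2, pik=0.1, k=3) for y in range(50))
print(total)

# E-step ingredient of the EM algorithm: posterior probability that an
# observed zero came from the degenerate component at zero.
post0 = 0.2 / zkip_pmf(0, lam=2.0, pi0=0.2, pik=0.1, k=3)
print(post0)
```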


2021 ◽  
Vol 2021 (3) ◽  
pp. 41-49
Author(s):  
Khayriddin Nazarov ◽  

This paper investigates methods for solving synthesis problems for the physical principles of operation of mechatronic modules in intelligent robotic systems. It addresses the specifics of synthesizing these physical principles using predicate models, list models, and information matrices of physical and technical effects. It presents an interpretation of the components of the system model for synthesizing the physical principles of operation of mechatronic modules in intelligent robotic and automatic control systems, and considers the features of a synthesis algorithm based on list models of physical and technical effects.


2021 ◽  
pp. 1-33
Author(s):  
Chudamani Poudyal

Abstract The primary objective of this work is to develop two estimation procedures – the maximum likelihood estimator (MLE) and the method of trimmed moments (MTM) – for the mean and variance of lognormal insurance payment severity data sets affected by different loss control mechanisms, for example, truncation (due to deductibles), censoring (due to policy limits), and scaling (due to coinsurance proportions), in the insurance and financial industries. Maximum likelihood estimating equations for both payment-per-payment and payment-per-loss data sets are derived, and these can be solved readily by any existing iterative numerical method. The asymptotic distributions of those estimators are established via Fisher information matrices. Further, with the goal of balancing efficiency and robustness and of removing point masses at certain data points, we develop dynamic MTM estimation procedures for lognormal claim severity models under the above-mentioned transformed data scenarios. The asymptotic distributional properties of those MTM estimators, and their comparison with the corresponding MLEs, are established along with extensive simulation studies. Purely for illustrative purposes, numerical examples based on 1500 US indemnity losses are provided to illustrate the practical performance of the established results.
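The truncated-likelihood construction can be sketched as follows (not the paper's code; the deductible `d`, the toy data, and the crude grid search standing in for an iterative solver are all assumptions made for illustration):

```python
# Illustrative sketch: negative log-likelihood for lognormal claim
# severities left-truncated at a deductible d, i.e. only losses
# exceeding d are observed, so each density is renormalized by P(X > d).
import math
from statistics import NormalDist

def negloglik(params, data, d):
    """-log L for lognormal(mu, sigma) observations truncated below at d."""
    mu, sigma = params
    if sigma <= 0:
        return float("inf")
    tail = 1.0 - NormalDist(mu, sigma).cdf(math.log(d))   # P(X > d)
    nll = 0.0
    for x in data:
        logpdf = (-math.log(x * sigma * math.sqrt(2 * math.pi))
                  - (math.log(x) - mu) ** 2 / (2 * sigma ** 2))
        nll -= logpdf - math.log(tail)   # truncated density f(x) / P(X > d)
    return nll

data = [1500.0, 2200.0, 3100.0, 5000.0, 800.0]   # toy losses above d
# Crude grid search over (mu, sigma), a stand-in for an iterative solver:
best = min(((negloglik((m / 10, s / 10), data, d=500.0), (m / 10, s / 10))
            for m in range(60, 100) for s in range(5, 30)))
print(best)
```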


2021 ◽  
Vol 3 (1) ◽  
pp. 123-167
Author(s):  
Lars Hillebrand ◽  
David Biesner ◽  
Christian Bauckhage ◽  
Rafet Sifa

Unsupervised topic extraction is a vital step in automatically extracting concise contentual information from large text corpora. Existing topic extraction methods lack the capability of linking relations between these topics, which would further help text understanding. We therefore propose utilizing the Decomposition into Directional Components (DEDICOM) algorithm, which provides a uniquely interpretable matrix factorization for symmetric and asymmetric square matrices and tensors. We constrain DEDICOM to row-stochasticity and non-negativity in order to factorize pointwise mutual information matrices and tensors of text corpora. We identify latent topic clusters and their relations within the vocabulary and simultaneously learn interpretable word embeddings. Further, we introduce multiple methods based on alternating gradient descent to efficiently train constrained DEDICOM algorithms. We evaluate the qualitative topic modeling and word embedding performance of our proposed methods on several datasets, including a novel New York Times news dataset, and demonstrate how the DEDICOM algorithm provides deeper text analysis than competing matrix factorization approaches.
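A minimal sketch of the pointwise mutual information (PMI) matrix that such a factorization operates on, built from document-level co-occurrence counts in a toy corpus (the corpus and the co-occurrence definition are illustrative assumptions, not the paper's pipeline):

```python
# Build a symmetric PMI matrix from word co-occurrences in a toy corpus;
# symmetry makes it a suitable input for symmetric DEDICOM.
import numpy as np
from collections import Counter
from itertools import combinations

corpus = [["topic", "model", "text"],
          ["topic", "model", "corpus"],
          ["word", "embedding", "text"]]

vocab = sorted({w for doc in corpus for w in doc})
idx = {w: i for i, w in enumerate(vocab)}

pair_counts = Counter()
word_counts = Counter()
for doc in corpus:
    word_counts.update(doc)
    for a, b in combinations(sorted(set(doc)), 2):
        pair_counts[(a, b)] += 1

n_pairs = sum(pair_counts.values())
n_words = sum(word_counts.values())
pmi = np.zeros((len(vocab), len(vocab)))
for (a, b), c in pair_counts.items():
    p_ab = c / n_pairs
    p_a, p_b = word_counts[a] / n_words, word_counts[b] / n_words
    val = np.log(p_ab / (p_a * p_b))         # pointwise mutual information
    pmi[idx[a], idx[b]] = pmi[idx[b], idx[a]] = val

print(np.allclose(pmi, pmi.T))  # symmetric by construction
```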


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1439
Author(s):  
Guillermo Martínez-Flórez ◽  
Víctor Leiva ◽  
Emilio Gómez-Déniz ◽  
Carolina Marchant

In this paper, we consider skew-normal distributions for constructing a new distribution which allows us to model proportions and rates with zero/one inflation as an alternative to the inflated beta distributions. The new distribution is a mixture of a Bernoulli distribution, which explains the zero/one excess, and a censored skew-normal distribution for the continuous variable. The maximum likelihood method is used for parameter estimation. Observed and expected Fisher information matrices are derived to conduct likelihood-based inference for this new type of skew-normal distribution. Given the flexibility of the new distributions, we are able to show, in real data scenarios, the good performance of our proposal.
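The mixture structure can be sketched by simulation (a hedged illustration: a plain censored normal stands in for the paper's censored skew-normal, and all parameter values are assumptions):

```python
# Sketch of sampling zero/one-inflated proportions: a Bernoulli
# component produces the zero/one excess, and a censored continuous
# component (plain normal here, clipped to [0, 1]) covers the rest.
import numpy as np

rng = np.random.default_rng(1)

def sample_inflated(n, p_degenerate=0.3, p_one=0.5, mu=0.4, sigma=0.3):
    """Draw n proportions with zero/one inflation."""
    out = np.empty(n)
    degenerate = rng.random(n) < p_degenerate        # Bernoulli excess part
    out[degenerate] = (rng.random(degenerate.sum()) < p_one).astype(float)
    z = rng.normal(mu, sigma, size=(~degenerate).sum())
    out[~degenerate] = np.clip(z, 0.0, 1.0)          # censoring at 0 and 1
    return out

x = sample_inflated(10_000)
print((x == 0).mean(), (x == 1).mean())  # visible point masses at 0 and 1
```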

