Grouping corrections and maximum likelihood equations

Author(s):  
D. V. Lindley

1. Any mention of the word ‘grouping’ immediately brings to a statistician's mind the Sheppard corrections. These are usually used to make inferences about the underlying ungrouped population from observations made on the grouped population, but it is important to realize that, as stated and proved, they have nothing to do with sampling or inference and are merely expressions for the moments of one population in terms of the moments of another population derived from it. They can only be used for the inference problem when allied to the method of moments. This method, as formulated by K. Pearson, consists in taking as the estimate θ* of the population parameter θ the same function of the sample moments mi that θ is of the population moments μi, each mi being an estimate of the corresponding μi. If the population is grouped, the mi are estimates of the grouped population moments, so to apply Pearson's method we require θ as a function of the grouped moments. This can be done, since θ is known as a function of the μi and the μi are known, by the corrections, as functions of the grouped moments. So far as I am aware, the use of the Sheppard corrections with any other inference method has not been examined except for the normal curve, even when that method, applied to the continuous population, yields an estimate which is a sample moment.
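As a concrete illustration of how the corrections feed into Pearson's method, the sketch below applies Sheppard's second-moment correction (subtracting h²/12, with h the class width) to grouped sample moments before forming moment estimates for a normal population; the bin midpoints, counts and class width are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Minimal sketch: Sheppard's correction for the second moment, used with
# Pearson's method of moments.  Midpoints, counts and the class width h
# are illustrative assumptions.
midpoints = np.array([1.5, 2.5, 3.5, 4.5, 5.5])   # class midpoints
counts    = np.array([  4,  18,  30,  19,   5])   # observed frequencies
h = 1.0                                           # common class width

n = counts.sum()
m1_grouped = np.sum(counts * midpoints) / n                      # grouped sample mean
m2_grouped = np.sum(counts * (midpoints - m1_grouped) ** 2) / n  # grouped 2nd central moment

# Sheppard's correction: mu_2 = m2_grouped - h**2 / 12 (the first moment needs none).
m2_corrected = m2_grouped - h ** 2 / 12

# Method-of-moments estimates for a normal population, theta = (mu, sigma^2):
mu_hat, sigma2_hat = m1_grouped, m2_corrected
print(mu_hat, sigma2_hat)
```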

2020 ◽  
Author(s):  
Jianhao Peng ◽  
Ullas V. Chembazhi ◽  
Sushant Bangru ◽  
Ian M. Traniello ◽  
Auinash Kalsotra ◽  
...  

Motivation: With the use of single-cell RNA sequencing (scRNA-Seq) technologies, it is now possible to acquire gene expression data for each individual cell in samples containing up to millions of cells. These cells can be further grouped into different states along an inferred cell differentiation path, states that are potentially characterized by similar, but distinct enough, gene regulatory networks (GRNs). Hence, it would be desirable for scRNA-Seq GRN inference methods to capture the GRN dynamics across cell states. However, current GRN inference methods produce a single GRN per input dataset (or independent GRNs per cell state), failing to capture these regulatory dynamics.
Results: We propose a novel single-cell GRN inference method, named SimiC, that jointly infers the GRNs corresponding to each state. SimiC models GRN inference as a LASSO optimization problem with an added similarity constraint, on the GRNs associated with contiguous cell states, that captures the inter-cell-state homogeneity. We show, on mouse hepatocyte single-cell data generated after partial hepatectomy, that, contrary to previous GRN methods for scRNA-Seq data, SimiC is able to capture the transcription factor (TF) dynamics across liver regeneration, as well as the cell-level behavior of the regulatory program of each TF across cell states. In addition, on a honey bee scRNA-Seq experiment, SimiC captures the increased heterogeneity of cells in whole-brain tissue with respect to a regional analysis, and the TFs associated specifically with each sequenced tissue.
Availability: SimiC is written in Python and includes an R API. It can be downloaded from https://github.com/jianhao2016/
Contact: [email protected], [email protected]
Supplementary information: Supplementary data are available at the code repository.
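A conceptual sketch of the kind of objective described above, not the SimiC implementation itself: each cell state gets its own LASSO regression of target genes on transcription factors, with a quadratic penalty tying the weight matrices of contiguous states together. The matrix shapes, penalty weights and the plain proximal-gradient (ISTA) solver are assumptions made for illustration.

```python
import numpy as np

def soft_threshold(W, t):
    """Proximal operator of t * ||W||_1 (element-wise soft thresholding)."""
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def fit_joint_grns(X_list, Y_list, lam_l1=0.1, lam_sim=1.0, step=1e-3, iters=2000):
    """X_list[c]: TF expression (cells x TFs) in state c; Y_list[c]: target genes."""
    C = len(X_list)
    W = [np.zeros((X_list[c].shape[1], Y_list[c].shape[1])) for c in range(C)]
    for _ in range(iters):
        for c in range(C):
            # Gradient of the smooth part: squared loss + similarity to neighbouring states.
            grad = X_list[c].T @ (X_list[c] @ W[c] - Y_list[c])
            if c > 0:
                grad += lam_sim * (W[c] - W[c - 1])
            if c < C - 1:
                grad += lam_sim * (W[c] - W[c + 1])
            # Proximal (ISTA) step handles the L1 penalty.
            W[c] = soft_threshold(W[c] - step * grad, step * lam_l1)
    return W

# Toy usage: 3 contiguous cell states, 50 cells, 10 TFs, 20 target genes each.
rng = np.random.default_rng(0)
Xs = [rng.normal(size=(50, 10)) for _ in range(3)]
Ys = [rng.normal(size=(50, 20)) for _ in range(3)]
W_states = fit_joint_grns(Xs, Ys)
```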


2013 ◽  
Vol 5 (8) ◽  
pp. 394-400 ◽  
Author(s):  
Hasna Fadhila ◽  
Nora Amelda Rizal

Value at Risk (VaR) is a tool for predicting the greatest loss expected at a given confidence level over a period of time. Historical-simulation VaR produces reliable values because it is based on historical data and reflects the skewness of the observed data, which is why VaR is widely used by investors to determine the risk to be faced on their investments. To calculate VaR it is convenient to use maximum likelihood, which is well established for estimation from historical data, is also available for estimating nonlinear models, and is a mathematical function that can approximate returns. From the maximum likelihood function with a normal distribution, the normal curve can be drawn for a one-tailed test. This research calculates Value at Risk using maximum likelihood, and the resulting normal curve is compared with the return data of each bank (Bank Mandiri, Bank BRI and Bank BNI). Empirical results show that Bank BNI in 2009, Bank BRI in 2010 and Bank BNI in 2011 had the lowest VaR values by historical simulation in their respective years. It is concluded that the maximum likelihood method for estimating VaR is appropriate in certain respects when compared with the normal curve.
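A minimal sketch of the comparison described above, assuming a simulated daily return series and a 99% confidence level: historical-simulation VaR is the empirical tail quantile, while the parametric VaR uses the normal maximum-likelihood estimates of the mean and standard deviation.

```python
import numpy as np
from scipy.stats import norm

# Illustrative daily return series and confidence level (assumptions, not bank data).
rng = np.random.default_rng(1)
returns = rng.normal(loc=0.0005, scale=0.02, size=1000)
alpha = 0.99

# (i) Historical simulation: VaR is the empirical lower-tail quantile, reported as a loss.
var_hist = -np.quantile(returns, 1 - alpha)

# (ii) Parametric VaR: the normal MLEs are the sample mean and the 1/n standard deviation.
mu_hat = returns.mean()
sigma_hat = returns.std(ddof=0)
var_normal = -(mu_hat + sigma_hat * norm.ppf(1 - alpha))

print(f"historical VaR: {var_hist:.4f}, normal-MLE VaR: {var_normal:.4f}")
```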


2021 ◽  
Vol 1 (1) ◽  
pp. 34-37
Author(s):  
Kannadasan Karuppaiah ◽  
Vinoth Raman

This study derives parameter estimation for the truncated form of a continuous distribution comparable to the Erlang truncated exponential distribution. The shape and scale parameters determine the properties of the whole distribution. The approximation is useful in making the mathematical calculation easy to understand for non-mathematicians and non-statisticians. Explicit mathematical derivations are given for several properties: the method of moments, skewness, kurtosis, mean and variance, the maximum likelihood function, and reliability analysis. We compare the ratio and regression estimators empirically on the basis of bias and coefficient of variation.
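For illustration only, the sketch below shows numerical maximum-likelihood estimation and the reliability (survival) function for a generic right-truncated exponential distribution; it is not the paper's exact model, and the truncation point and simulated sample are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Exponential distribution right-truncated at T:
#   f(x) = lam * exp(-lam*x) / (1 - exp(-lam*T)),  0 < x < T.
T = 5.0
rng = np.random.default_rng(2)
raw = rng.exponential(scale=1 / 0.8, size=5000)
x = raw[raw < T][:500]                      # crude truncated sample

def neg_log_lik(lam):
    if lam <= 0:
        return np.inf
    return -(len(x) * np.log(lam) - lam * x.sum()
             - len(x) * np.log1p(-np.exp(-lam * T)))

lam_hat = minimize_scalar(neg_log_lik, bounds=(1e-6, 50), method="bounded").x

def reliability(t, lam=lam_hat):
    """Survival function of the fitted truncated model on (0, T)."""
    return (np.exp(-lam * t) - np.exp(-lam * T)) / (1 - np.exp(-lam * T))

print(lam_hat, reliability(1.0))
```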


2020 ◽  
Vol 3 (2) ◽  
pp. 12-25
Author(s):  
Simon Sium ◽  
Rama Shanker

This study proposes and examines a zero-truncated discrete Akash distribution and obtains its probability- and moment-generating functions. Its moments and moment-based statistical constants, including the coefficient of variation, skewness, kurtosis, and the index of dispersion, are also presented. Parameter estimation is discussed using both the method of moments and maximum likelihood. Applications of the distribution are illustrated through three examples of real datasets, which demonstrate that the zero-truncated discrete Akash distribution gives a better fit than several other zero-truncated discrete distributions.
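A generic sketch of the zero-truncation construction and its maximum-likelihood fit; the base distribution here is a Poisson purely for illustration, since the discrete Akash pmf is not reproduced from the paper, and the count data are assumed.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

counts = np.array([1, 1, 1, 2, 2, 3, 1, 4, 2, 1, 5, 2, 3, 1, 1])   # x >= 1 data (assumed)

def zt_log_pmf(x, theta):
    """log of p(x) / (1 - p(0)) for a zero-truncated base pmf (Poisson here)."""
    return poisson.logpmf(x, theta) - np.log1p(-poisson.pmf(0, theta))

def neg_log_lik(theta):
    if theta <= 0:
        return np.inf
    return -zt_log_pmf(counts, theta).sum()

theta_hat = minimize_scalar(neg_log_lik, bounds=(1e-6, 50), method="bounded").x
print(theta_hat)
```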


2018 ◽  
Vol 41 (1) ◽  
pp. 87-108 ◽  
Author(s):  
Maha Ahmad Omair ◽  
Fatimah E AlMuhayfith ◽  
Abdulhamid A Alzaid

A new bivariate model is introduced by compounding negative binomial and geometric distributions. Distributional properties, including the joint, marginal and conditional distributions, are discussed. Expressions for the product moments, covariance and correlation coefficient are obtained. Some properties such as ordering, unimodality, monotonicity and self-decomposability are studied. Parameter estimators using the method of moments and maximum likelihood are derived. Applications to traffic accident data are illustrated.
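The sketch below illustrates only the moment-matching step, using a univariate negative binomial marginal (mean r(1-p)/p, variance r(1-p)/p²); the paper's bivariate compound model has its own moment equations, which are not reproduced here, and the simulated counts are an assumption.

```python
import numpy as np

# Simulated accident-style counts (assumed), then method-of-moments estimates
# for a negative binomial: p = mean/var, r = mean^2 / (var - mean), valid when var > mean.
rng = np.random.default_rng(3)
x = rng.negative_binomial(n=4, p=0.4, size=2000)

m, v = x.mean(), x.var(ddof=0)
p_hat = m / v
r_hat = m * m / (v - m)
print(r_hat, p_hat)
```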


2017 ◽  
Vol 9 (1) ◽  
pp. 224
Author(s):  
Peterson Owusu Junior ◽  
Carl H. Korkpoe

The four-parameter generalised lambda distribution (GLD) provides the flexibility required to describe the key moments of any distribution, in contrast to the normal distribution, which characterises a distribution with only two moments. As markets have become increasingly nervous, the inadequacies of the normal distribution in correctly capturing tail events and fully describing the entire distribution of market returns have been laid bare. The focus of this paper is to compare the generalised method of moments (GMM) and maximum likelihood estimation (MLE) as methods for fitting the GLD to JSE All Share Index returns data. Using the Kolmogorov-Smirnov distance goodness-of-fit statistic and the quantile-quantile plot, we demonstrate that GMM is the appropriate method for the GLD to completely describe the measures of central tendency and dispersion while additionally capturing the risk dimensions of skewness and kurtosis of the return distribution. These measures are very important to any investor in the equity markets.
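A rough sketch of the four-parameter GLD quantile function in the common RS parameterisation, fitted here by simple quantile matching rather than the paper's GMM or MLE procedures; the simulated return series, starting values and the crude Q-Q-style fit check are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def gld_quantile(u, l1, l2, l3, l4):
    """RS-parameterised GLD quantile function Q(u) = l1 + (u**l3 - (1-u)**l4) / l2."""
    return l1 + (u ** l3 - (1.0 - u) ** l4) / l2

# Heavy-tailed stand-in for index returns (assumed data, not the JSE series).
rng = np.random.default_rng(4)
returns = rng.standard_t(df=5, size=2000) * 0.01

u = (np.arange(1, len(returns) + 1) - 0.5) / len(returns)
emp_q = np.sort(returns)

def loss(params):
    l1, l2, l3, l4 = params
    if l2 == 0:
        return np.inf
    return np.mean((gld_quantile(u, l1, l2, l3, l4) - emp_q) ** 2)

fit = minimize(loss, x0=[0.0, 100.0, 0.1, 0.1], method="Nelder-Mead")
l1, l2, l3, l4 = fit.x

# Crude goodness-of-fit check in the spirit of a Q-Q comparison:
max_quantile_gap = np.max(np.abs(gld_quantile(u, l1, l2, l3, l4) - emp_q))
print(fit.x, max_quantile_gap)
```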

