Estimation and testing of regression disturbances based on modified likelihood functions

1998 ◽  
Vol 71 (1-2) ◽  
pp. 75-92 ◽  
Author(s):  
Mizan R. Laskar ◽  
Maxwell L. King


2014 ◽
Vol 70 (a1) ◽  
pp. C319-C319
Author(s):  
Randy Read ◽  
Paul Adams ◽  
Airlie McCoy

In translational noncrystallographic symmetry (tNCS), two or more copies of a component are present in a similar orientation in the asymmetric unit of the crystal. This causes systematic modulations of the intensities in the diffraction pattern, leading to problems with methods that assume, either implicitly or explicitly, that the distribution of intensities is a function only of resolution. To characterize the statistical effects of tNCS accurately, it is necessary to determine the translation relating the copies, any small rotational differences in their orientations, and the size of random coordinate differences caused by conformational differences. An algorithm has been developed to estimate these parameters and refine their values against a likelihood function. By accounting for the statistical effects of tNCS, it is possible to unmask the competing statistical effects of twinning and tNCS and to more robustly assess the crystal for the presence of twinning. Modified likelihood functions that account for the statistical effects of tNCS have been developed for use in molecular replacement and implemented in Phaser. With the use of these new targets, it is now possible to solve structures that eluded earlier versions of the program. Pseudosymmetry and space group ambiguities often accompany tNCS, but the new version of Phaser is less likely to fall into the traps that these set.
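
As a schematic illustration of the intensity modulation (a simplified two-copy case related by a pure translation t; the notation is ours, not the paper's): if both copies contribute a common molecular transform G, then

F(\mathbf{h}) = G(\mathbf{h})\,\bigl(1 + e^{2\pi i\,\mathbf{h}\cdot\mathbf{t}}\bigr),
\qquad
\bigl\langle I(\mathbf{h}) \bigr\rangle \;\propto\; 2\,\bigl\langle |G(\mathbf{h})|^{2} \bigr\rangle \bigl(1 + D(\mathbf{h})\cos 2\pi\,\mathbf{h}\cdot\mathbf{t}\bigr),

where the factor D(\mathbf{h}) \le 1 damps the interference term to account for small rotational and random coordinate differences between the copies (D = 1 for identical, exactly translated copies). It is this resolution- and direction-dependent modulation that violates the usual assumption that intensity statistics depend on resolution alone.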


Author(s):  
Russell Cheng

This chapter examines the well-known Box-Cox method, which transforms a sample of non-normal observations into approximately normal form. Two non-standard aspects are highlighted. First, the likelihood of the transformed sample has an unbounded maximum, so that the maximum likelihood estimate is not consistent. The usually suggested remedy is to assume grouped data, so that the sample becomes multinomial. An alternative method is described that uses a modified likelihood similar to the spacings function; this eliminates the infinite-likelihood problem. The second problem is that the power transform used in the Box-Cox method is left-bounded, so that the transformed observations cannot be exactly normal. This biases estimates of observational probabilities in an uncertain way. Moreover, the distributions fitted to the observations are not necessarily unimodal. A simple remedy is to assume that the transformed observations have a left-bounded distribution, such as the exponential; this is discussed in detail, and a numerical example is given.
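
A minimal sketch of the standard Box-Cox profile likelihood that the chapter takes as its starting point (scipy implements only this unmodified version; the spacings-like modified likelihood discussed in the chapter would have to be coded separately, and the sample here is illustrative):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.7, size=200)   # positive, right-skewed sample

# Profile log-likelihood of the Box-Cox parameter lambda over a grid
lams = np.linspace(-1.0, 2.0, 121)
llf = np.array([stats.boxcox_llf(lam, y) for lam in lams])
lam_grid_hat = lams[llf.argmax()]

# scipy can also return the maximizing lambda directly
y_transformed, lam_mle = stats.boxcox(y)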


Geosciences ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 150
Author(s):  
Nilgün Güdük ◽  
Miguel de la Varga ◽  
Janne Kaukolinna ◽  
Florian Wellmann

Structural geological models are widely used to represent relevant geological interfaces and property distributions in the subsurface. Considering the inherent uncertainty of these models, the non-uniqueness of geophysical inverse problems, and the growing availability of data, there is a need for methods that integrate different types of data consistently and consider the uncertainties quantitatively. Probabilistic inference provides a suitable tool for this purpose. Using a Bayesian framework, geological modeling can be considered as an integral part of the inversion and thereby naturally constrain geophysical inversion procedures. This integration prevents geologically unrealistic results and provides the opportunity to include geological and geophysical information in the inversion. This information can be from different sources and is added to the framework through likelihood functions. We applied this methodology to the structurally complex Kevitsa deposit in Finland. We started with an interpretation-based 3D geological model and defined the uncertainties in our geological model through probability density functions. Airborne magnetic data and geological interpretations of borehole data were used to define geophysical and geological likelihoods, respectively. The geophysical data were linked to the uncertain structural parameters through the rock properties. The result of the inverse problem is an ensemble of model realizations. These structural models and their uncertainties are visualized using information entropy, which allows for quantitative analysis. Our results show that with our methodology we can use well-defined likelihood functions to add meaningful information to our initial model without requiring a computationally heavy full-grid inversion, that discrepancies between model and data are spotted more easily, and that the complementary strengths of different types of data can be integrated into one framework.
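
A minimal sketch of the information-entropy step used to visualize the ensemble (array shapes, names, and the unit count are hypothetical, not taken from the paper):

import numpy as np

def cell_entropy(ensemble, n_units):
    """Per-cell Shannon entropy of lithology IDs across an ensemble.

    ensemble: integer array of shape (n_models, n_cells), one row per
    posterior realization of the structural model.
    """
    H = np.zeros(ensemble.shape[1])
    for u in range(n_units):
        p = (ensemble == u).mean(axis=0)   # empirical probability of unit u
        nz = p > 0                         # convention: 0 * log 0 = 0
        H[nz] -= p[nz] * np.log(p[nz])
    return H  # zero where all realizations agree, larger where they disagree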


2021 ◽  
pp. 1-25
Author(s):  
Yu-Chin Hsu ◽  
Ji-Liang Shiu

Under a Mundlak-type correlated random effect (CRE) specification, we first show that the average likelihood of a parametric nonlinear panel data model is the convolution of the conditional distribution of the model and the distribution of the unobserved heterogeneity. Hence, the distribution of the unobserved heterogeneity can be recovered by means of a Fourier transformation without imposing a distributional assumption on the CRE specification. We subsequently construct a semiparametric family of average likelihood functions of observables by combining the conditional distribution of the model and the recovered distribution of the unobserved heterogeneity, and show that the parameters in the nonlinear panel data model and in the CRE specification are identifiable. Based on the identification result, we propose a sieve maximum likelihood estimator. Compared with the conventional parametric CRE approaches, the advantage of our method is that it is not subject to misspecification on the distribution of the CRE. Furthermore, we show that the average partial effects are identifiable and extend our results to dynamic nonlinear panel data models.
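
A schematic rendering of the deconvolution argument (notation simplified; c denotes the unobserved heterogeneity with density h): the average likelihood of the observables is

f(y \mid x) = \int g(y \mid x, c)\, h(c)\, dc,

and once the Mundlak-type reparameterization turns the right-hand side into a convolution in c, taking Fourier transforms gives \mathcal{F}[f] = \mathcal{F}[g] \cdot \mathcal{F}[h], so that

h = \mathcal{F}^{-1}\bigl[\, \mathcal{F}[f] / \mathcal{F}[g] \,\bigr]

recovers the distribution of the heterogeneity without a parametric assumption on h.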


Genetics ◽  
1997 ◽  
Vol 147 (4) ◽  
pp. 1855-1861 ◽  
Author(s):  
Montgomery Slatkin ◽  
Bruce Rannala

A theory is developed that provides the sampling distribution of low-frequency alleles at a single locus under the assumption that each allele is the result of a unique mutation. The number of copies of each allele is assumed to follow a linear birth-death process with sampling. If the population is of constant size, standard results from the theory of birth-death processes show that the distribution of the number of copies of each allele is logarithmic and that the joint distribution of the numbers of copies of k alleles found in a sample of size n follows the Ewens sampling distribution. If the population from which the sample was obtained was increasing in size, if there are different selective classes of alleles, or if there are differences in penetrance among alleles, the Ewens distribution no longer applies. Likelihood functions for a given set of observations are obtained under different alternative hypotheses. These results are applied to published data from the BRCA1 locus (associated with early-onset breast cancer) and the factor VIII locus (associated with hemophilia A) in humans. In both cases, the sampling distribution of alleles allows rejection of the null hypothesis, but relatively small deviations from the null model can account for the data. In particular, roughly the same population growth rate appears consistent with both data sets.
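
For reference, the two standard distributions invoked here (textbook forms, not restated from the paper): the logarithmic distribution of copy numbers,

P(X = j) = \frac{-\alpha^{j}}{j \ln(1 - \alpha)}, \qquad j = 1, 2, \dots, \quad 0 < \alpha < 1,

and the Ewens sampling formula for the configuration (a_1, \dots, a_n), where a_j is the number of alleles represented j times in a sample of n gene copies:

\Pr(a_1, \dots, a_n) = \frac{n!}{\theta(\theta + 1) \cdots (\theta + n - 1)} \prod_{j=1}^{n} \frac{(\theta / j)^{a_j}}{a_j!}.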


Biometrika ◽  
1990 ◽  
Vol 77 (4) ◽  
pp. 897
Author(s):  
Bryan Langholz ◽  
Duncan Thomas ◽  
Tsunjen Chen ◽  
Phillip Rhodes

2018 ◽  
Vol 30 (11) ◽  
pp. 3072-3094 ◽  
Author(s):  
Hongqiao Wang ◽  
Jinglai Li

We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP)-based method to approximate the joint distribution of the unknown parameters and the data, built on recent work (Kandasamy, Schneider, & Póczos, 2015). In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate. We then provide an adaptive algorithm to construct such an approximation, where an active learning method is used to choose the design points. With numerical examples, we illustrate that the proposed method has competitive performance against existing approaches for Bayesian computation.
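
A minimal sketch of the general idea, a GP surrogate for an expensive log-likelihood refined by active learning, using scikit-learn (the toy target and the maximum-variance acquisition rule are illustrative simplifications, not the paper's exact algorithm):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def log_like(theta):
    # Stand-in for a computationally intensive likelihood evaluation
    return -0.5 * np.sum((theta - 1.0) ** 2) / 0.3 ** 2

rng = np.random.default_rng(1)
lo, hi = np.array([-2.0, -2.0]), np.array([4.0, 4.0])
X = rng.uniform(lo, hi, size=(10, 2))               # initial design points
y = np.array([log_like(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True)
for _ in range(30):                                 # active-learning loop
    gp.fit(X, y)
    cand = rng.uniform(lo, hi, size=(500, 2))       # random candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(sd)]                     # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.append(y, log_like(x_new))

# The exponentiated GP posterior mean then serves as a cheap stand-in for the
# likelihood surface when evaluating the (unnormalized) joint density.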

