A New Double Truncated Generalized Gamma Model with Some Applications

2021 ◽  
Vol 2021 ◽  
pp. 1-27
Author(s):  
Awad A. Bakery ◽  
Wael Zakaria ◽  
OM Kalthum S. K. Mohamed

The generalized Gamma model has been applied in a variety of research fields, including reliability engineering and lifetime analysis. Its support, however, is unbounded, whereas in many applications the data occupy a bounded range. A new five-parameter bounded generalized Gamma model is presented in this paper; it contains as special cases the bounded Weibull model with four parameters, the bounded Gamma model with four parameters, the bounded generalized Gaussian model with three parameters, the bounded exponential model with three parameters, and the bounded Rayleigh model with two parameters. The bounded support area allows a great deal of versatility in fitting various shapes of observed data. Numerous properties of the proposed distribution are deduced, including explicit expressions for the moments, quantiles, mode, moment generating function, mean, variance, mean residual lifespan, entropies, skewness, kurtosis, hazard function, survival function, rth order statistic, and median distributions. The distribution has hazard rates that are monotonically increasing or decreasing, bathtub-shaped, or upside-down bathtub-shaped. We use the Newton-Raphson approach to find the model parameters that maximize the log-likelihood function, and some of the parameters have a closed iterative form. Six real data sets and six simulated data sets were analyzed to demonstrate how the proposed model performs in practice. We illustrate why the model is more stable and less affected by sample size. Additionally, the suggested model is very accurate for wavelet histogram fitting of images and sounds.
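
As a rough illustration of the estimation step described above, the sketch below fits a doubly truncated Gamma by numerical maximization of the log-likelihood. The two-parameter baseline, the fixed truncation bounds, and the use of a general-purpose optimizer in place of a hand-rolled Newton-Raphson iteration are assumptions for illustration, not the authors' five-parameter model.

```python
# A minimal sketch, assuming a two-parameter Gamma truncated to known bounds
# (a, b); scipy.optimize stands in for the Newton-Raphson step in the paper.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
a, b = 0.5, 5.0                                   # assumed truncation bounds
shape_true, scale_true = 2.0, 1.0

# Simulate from Gamma(2, 1) truncated to [a, b] by rejection.
x = stats.gamma.rvs(shape_true, scale=scale_true, size=20_000, random_state=rng)
x = x[(x >= a) & (x <= b)][:2_000]

def neg_loglik(theta):
    shape, scale = np.exp(theta)                  # log-parametrization keeps both positive
    dist = stats.gamma(shape, scale=scale)
    norm = dist.cdf(b) - dist.cdf(a)              # probability mass inside the bounds
    return -(dist.logpdf(x).sum() - x.size * np.log(norm))

res = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="BFGS")
print("MLE (shape, scale):", np.exp(res.x))
```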

2016 ◽  
Author(s):  
Kassian Kobert ◽  
Alexandros Stamatakis ◽  
Tomáš Flouri

The phylogenetic likelihood function is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence time estimation. Given the alignment, a tree, and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted to improve run-time and, using appropriate data structures, to reduce memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 10-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the phylogenetic likelihood function currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation.
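
The core observation, that identical inputs yield identical conditional likelihoods and therefore need to be evaluated only once, can be illustrated at the coarse level of whole alignment columns (site-pattern compression). The sketch below is only that coarse illustration under assumed toy data; the authors' method detects repeats at the finer subtree level.

```python
# A minimal sketch, assuming a toy alignment and a placeholder per-column score;
# identical columns are computed once and weighted by their multiplicity.
import numpy as np

alignment = np.array([list("ACGTAC"),
                      list("ACGTAC"),
                      list("ACGAAC")])            # 3 taxa x 6 sites

columns = ["".join(alignment[:, j]) for j in range(alignment.shape[1])]
unique_cols, counts = np.unique(columns, return_counts=True)

def site_log_likelihood(col):
    # Placeholder: a real implementation would run Felsenstein's pruning
    # algorithm over the tree for this column.
    return -float(len(set(col)))

per_pattern = np.array([site_log_likelihood(c) for c in unique_cols])
total_loglik = float(np.dot(counts, per_pattern))  # weight by multiplicity
print(f"{len(columns)} sites, {len(unique_cols)} unique patterns, "
      f"log-likelihood = {total_loglik}")
```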


2015 ◽  
Vol 11 (A29A) ◽  
pp. 205-207
Author(s):  
Philip C. Gregory

Abstract A new apodized Keplerian model is proposed for the analysis of precision radial velocity (RV) data to model both planetary and stellar activity (SA) induced RV signals. A symmetrical Gaussian apodization function with unknown width and center can distinguish planetary signals from SA signals on the basis of the width of the apodization function. The general model for m apodized Keplerian signals also includes a linear regression term between RV and the stellar activity diagnostic ln(R'HK), as well as an extra Gaussian noise term with unknown standard deviation. The model parameters are explored using a Bayesian fusion MCMC code. A differential version of the Generalized Lomb-Scargle periodogram provides an additional way of distinguishing SA signals and helps guide the choice of new periods. Sample results are reported for a recent international RV blind challenge which included multiple state-of-the-art simulated data sets supported by a variety of stellar activity diagnostics.
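
As a rough sketch of the apodization idea, the snippet below multiplies a Keplerian signal by a symmetric Gaussian window of unknown centre and width; a circular orbit (plain sinusoid) stands in for the full Keplerian, and the ln(R'HK) regression and extra noise terms are omitted. All parameter values are illustrative assumptions.

```python
# A minimal sketch, assuming a circular orbit; a planetary signal corresponds
# to an apodization width much longer than the data span, while an activity
# signal has a comparatively short width.
import numpy as np

def apodized_keplerian(t, K, period, phase, t0, width):
    keplerian = K * np.sin(2 * np.pi * t / period + phase)   # circular-orbit RV
    apodization = np.exp(-0.5 * ((t - t0) / width) ** 2)     # Gaussian window
    return apodization * keplerian

t = np.linspace(0.0, 400.0, 1000)                            # days
rv_planet = apodized_keplerian(t, K=3.0, period=40.0, phase=0.3,
                               t0=200.0, width=1e6)          # width >> data span
rv_activity = apodized_keplerian(t, K=3.0, period=25.0, phase=1.1,
                                 t0=150.0, width=60.0)       # short-lived signal
print(rv_planet[:3], rv_activity[:3])
```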


2020 ◽  
Vol 69 (5) ◽  
pp. 973-986 ◽  
Author(s):  
Joëlle Barido-Sottani ◽  
Timothy G Vaughan ◽  
Tanja Stadler

Abstract Heterogeneous populations can lead to important differences in birth and death rates across a phylogeny. Taking this heterogeneity into account is necessary to obtain accurate estimates of the underlying population dynamics. We present a new multitype birth–death model (MTBD) that can estimate lineage-specific birth and death rates. This corresponds to estimating lineage-dependent speciation and extinction rates for species phylogenies, and lineage-dependent transmission and recovery rates for pathogen transmission trees. In contrast with previous models, we do not presume to know the trait driving the rate differences, nor do we prohibit the same rates from appearing in different parts of the phylogeny. Using simulated data sets, we show that the MTBD model can reliably infer the presence of multiple evolutionary regimes, their positions in the tree, and the birth and death rates associated with each. We also present a reanalysis of two empirical data sets and compare the results obtained by MTBD and by the existing software BAMM. We compare two implementations of the model, one exact and one approximate (assuming that no rate changes occur in the extinct parts of the tree), and show that the approximation only slightly affects results. The MTBD model is implemented as a package in the Bayesian inference software BEAST 2 and allows joint inference of the phylogeny and the model parameters.[Birth–death; lineage-specific rates; multi-type model.]
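
To make the multitype idea concrete, the toy simulation below tracks lineage counts in a two-type birth-death process with lineage-specific rates and random type changes; the rates are illustrative assumptions, and the actual MTBD implementation additionally handles sampling and reconstruction of the tree itself.

```python
# A minimal Gillespie-style sketch, assuming arbitrary rates; it tracks only
# the number of lineages of each type, not the full phylogeny.
import numpy as np

rng = np.random.default_rng(1)
birth = np.array([1.0, 2.0])          # per-lineage birth rates for types 0 and 1
death = np.array([0.5, 0.5])          # per-lineage death rates
switch = 0.1                          # per-lineage type-change rate

counts, t, t_end = np.array([1, 0]), 0.0, 10.0
while t < t_end and counts.sum() > 0:
    rates = np.concatenate([birth * counts, death * counts,
                            [switch * counts.sum()]])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    if t >= t_end:
        break
    event = rng.choice(rates.size, p=rates / total)
    if event < 2:                     # birth in type `event`
        counts[event] += 1
    elif event < 4:                   # death in type `event - 2`
        counts[event - 2] -= 1
    else:                             # a uniformly chosen lineage switches type
        src = rng.choice(2, p=counts / counts.sum())
        counts[src] -= 1
        counts[1 - src] += 1
print("lineage counts per type:", counts)
```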


2021 ◽  
Author(s):  
Gah-Yi Ban ◽  
N. Bora Keskin

We consider a seller who can dynamically adjust the price of a product at the individual customer level by utilizing information about customers’ characteristics encoded as a d-dimensional feature vector. We assume a personalized demand model whose parameters depend on s out of the d features. The seller initially does not know the relationship between the customer features and the product demand but learns this through sales observations over a selling horizon of T periods. We prove that the seller’s expected regret, that is, the revenue loss against a clairvoyant who knows the underlying demand relationship, is at least of order [Formula: see text] under any admissible policy. We then design a near-optimal pricing policy for a semiclairvoyant seller (who knows which s of the d features are in the demand model) who achieves an expected regret of order [Formula: see text]. We extend this policy to a more realistic setting, where the seller does not know the true demand predictors, and show that this policy has an expected regret of order [Formula: see text], which is also near-optimal. Finally, we test our theory on simulated data and on a data set from an online auto loan company in the United States. On both data sets, our experimentation-based pricing policy is superior to intuitive and/or widely practiced customized pricing methods, such as myopic pricing and segment-then-optimize policies. Furthermore, our policy improves upon the loan company’s historical pricing decisions by 47% in expected revenue over a six-month period. This paper was accepted by Noah Gans, stochastic models and simulation.
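
The sketch below gives a generic flavor of learning-while-pricing with customer features: a stylized linear demand model is estimated by least squares and prices are set greedily with a small, shrinking amount of experimentation. The demand form, the exploration schedule, and all constants are assumptions for illustration; this is not the authors' policy or its regret guarantee.

```python
# A minimal sketch, assuming demand d = a'x + (b'x) * price + noise and a
# simple explore-then-dither pricing rule.
import numpy as np

rng = np.random.default_rng(2)
d_dim, T = 5, 2000
a_true = rng.normal(2.0, 0.5, d_dim)
b_true = -np.abs(rng.normal(1.0, 0.2, d_dim))     # demand decreases in price

theta_hat, Z, y = np.zeros(2 * d_dim), [], []
for t in range(1, T + 1):
    x = np.abs(rng.normal(1.0, 0.3, d_dim))       # customer feature vector
    a_hat, b_hat = theta_hat[:d_dim], theta_hat[d_dim:]
    if t <= 50 or b_hat @ x >= 0:                 # price sensitivity not yet learned
        price = rng.uniform(0.5, 3.0)             # pure experimentation
    else:
        greedy = -(a_hat @ x) / (2 * (b_hat @ x)) # revenue-maximizing price estimate
        price = max(greedy + rng.normal(0, t ** -0.25), 0.1)  # shrinking dither
    demand = a_true @ x + (b_true @ x) * price + rng.normal(0, 0.1)
    Z.append(np.concatenate([x, price * x]))      # regressors for (a, b)
    y.append(demand)
    theta_hat = np.linalg.lstsq(np.array(Z), np.array(y), rcond=None)[0]
print("estimation error:", np.linalg.norm(theta_hat - np.concatenate([a_true, b_true])))
```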


Author(s):  
Wassila Nissas ◽  
Soufiane Gasmi

In the reliability literature, maintenance efficiency is usually treated as a fixed value. Since repairable systems are subject to different degrees and types of repair, it is more appropriate to regard maintenance efficiency as a random variable. This paper is devoted to the statistical study of a general hybrid model for repairable systems working under imperfect maintenance. For both the failure intensity improvement and the virtual age reduction of the system, maintenance efficiency is assumed to be random with an exponential probability density function. The likelihood function of this model is provided, and the model parameters are estimated by the maximum likelihood procedure. The obtained results were tested on and applied to simulated and real data sets. To construct confidence intervals, the bias-corrected and accelerated bootstrap method was used.
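
As a small illustration of the interval-construction step, the sketch below computes a bias-corrected and accelerated (BCa) bootstrap confidence interval for the rate of an exponentially distributed sample, which stands in for the random maintenance-efficiency parameter; the full hybrid imperfect-maintenance likelihood is not reproduced here.

```python
# A minimal sketch, assuming an exponential sample as a stand-in for the
# maintenance-efficiency observations; scipy's BCa bootstrap builds the interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
efficiencies = rng.exponential(scale=0.5, size=200)    # simulated efficiencies

def rate_mle(sample, axis=-1):
    return 1.0 / np.mean(sample, axis=axis)            # MLE of the exponential rate

res = stats.bootstrap((efficiencies,), rate_mle, method="BCa",
                      confidence_level=0.95, random_state=rng)
print("95% BCa interval for the rate:", res.confidence_interval)
```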


2021 ◽  
Author(s):  
Arthur Zwaenepoel ◽  
Yves Van de Peer

Abstract Phylogenetic models of gene family evolution based on birth-death processes (BDPs) provide an awkward fit to comparative genomic data sets. A central assumption of these models is a constant per-gene loss rate within any particular family. Because of the possibility of partial functional redundancy among gene family members, gene loss dynamics are, however, likely to depend on the number of genes in a family, and different variations of commonly employed BDP models indeed suggest this is the case. We propose a simple two-type branching process model to better approximate the stochastic evolution of gene families by gene duplication and loss, and perform Bayesian statistical inference of model parameters in a phylogenetic context. We evaluate the statistical methods using simulated data sets and apply the model to gene family data for Drosophila, yeasts, and primates, providing new quantitative insights into the long-term maintenance of duplicated genes.
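
To give a flavor of a two-type duplication-loss process, the toy discrete-time simulation below lets newly created duplicates (type 2) be lost at a higher rate than established genes (type 1), with duplicates becoming established at some rate. The type semantics and all rates are illustrative assumptions and do not reproduce the authors' model or its phylogenetic inference machinery.

```python
# A minimal sketch, assuming small time steps dt and arbitrary rates:
# lam = duplication, mu1/mu2 = loss of established/recent genes, nu = settlement.
import numpy as np

rng = np.random.default_rng(4)
lam, mu1, mu2, nu, dt, t_end = 0.2, 0.05, 0.8, 0.1, 0.01, 20.0

n1, n2 = 5, 0                                   # start with five established genes
for _ in range(int(t_end / dt)):
    births = rng.poisson(lam * (n1 + n2) * dt)  # new duplicates enter as type 2
    deaths1 = rng.binomial(n1, min(mu1 * dt, 1.0))
    deaths2 = rng.binomial(n2, min(mu2 * dt, 1.0))
    settled = rng.binomial(n2 - deaths2, min(nu * dt, 1.0))  # type 2 -> type 1
    n1 = n1 - deaths1 + settled
    n2 = n2 - deaths2 - settled + births
print("established genes:", n1, "recent duplicates:", n2)
```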


2007 ◽  
Vol 2007 ◽  
pp. 1-11 ◽  
Author(s):  
C. D. Lai ◽  
Michael B. C. Khoo ◽  
K. Muralidharan ◽  
M. Xie

A generalized Weibull model that allows instantaneous or early failures is modified so that the model can be expressed as a mixture of the uniform distribution and the Weibull distribution. Properties of the resulting distribution are derived; in particular, the probability density function, survival function, and the hazard rate function are obtained. Some selected plots of these functions are also presented. An R script was written to fit the model parameters. An application of the modified model is illustrated.
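
The mixture structure described above lends itself to a direct numerical sketch: with some probability an early failure occurs uniformly on a short interval, otherwise the lifetime is Weibull. The sketch below evaluates the resulting density, survival, and hazard functions; the uniform support and all parameter values are illustrative assumptions rather than fitted values.

```python
# A minimal sketch, assuming a mixture weight p, a uniform early-failure window
# (0, c), and a Weibull(shape k, scale s) component.
import numpy as np
from scipy import stats

p, c, k, s = 0.1, 0.5, 1.8, 3.0
uniform, weibull = stats.uniform(0, c), stats.weibull_min(k, scale=s)

def pdf(x):
    return p * uniform.pdf(x) + (1 - p) * weibull.pdf(x)

def survival(x):
    return p * uniform.sf(x) + (1 - p) * weibull.sf(x)

def hazard(x):
    return pdf(x) / survival(x)                 # ratio of density to survival

x = np.linspace(0.01, 8.0, 200)
print("hazard near zero:", hazard(x[0]), " hazard in the tail:", hazard(x[-1]))
```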


2019 ◽  
Vol 52 (3) ◽  
pp. 397-423
Author(s):  
Luc Steinbuch ◽  
Thomas G. Orton ◽  
Dick J. Brus

Abstract Area-to-point kriging (ATPK) is a geostatistical method for creating high-resolution raster maps using data of the variable of interest with a much lower resolution. The data set of areal means is often considerably smaller (< 50 observations) than data sets conventionally dealt with in geostatistical analyses. In contemporary ATPK methods, uncertainty in the variogram parameters is not accounted for in the prediction; this issue can be overcome by applying ATPK in a Bayesian framework. Commonly in Bayesian statistics, posterior distributions of model parameters and posterior predictive distributions are approximated by Markov chain Monte Carlo sampling from the posterior, which can be computationally expensive. Therefore, a partly analytical solution is implemented in this paper, in order to (i) explore the impact of the prior distribution on predictions and prediction variances, (ii) investigate whether certain aspects of uncertainty can be disregarded, simplifying the necessary computations, and (iii) test the impact of various model misspecifications. Several approaches using simulated data, aggregated real-world point data, and a case study on aggregated crop yields in Burkina Faso are compared. The prior distribution is found to have minimal impact on the disaggregated predictions. In most cases with known short-range behaviour, an approach that disregards uncertainty in the variogram distance parameter gives a reasonable assessment of prediction uncertainty. However, some severe effects of model misspecification in terms of overly conservative or optimistic prediction uncertainties are found, highlighting the importance of model choice or integration into ATPK.


2018 ◽  
Vol 7 (4) ◽  
pp. 57 ◽  
Author(s):  
Jehhan. A. Almamy ◽  
Mohamed Ibrahim ◽  
M. S. Eliwa ◽  
Saeed Al-mualim ◽  
Haitham M. Yousof

In this work, we study the two-parameter Odd Lindley Weibull lifetime model. This distribution is motivated by the wide use of the Weibull model in many applied areas and also by the fact that this new generalization provides more flexibility in analyzing real data. The Odd Lindley Weibull density function can be written as a linear combination of exponentiated Weibull densities. We derive explicit expressions for the ordinary and incomplete moments, moments of the (reversed) residual life, generating functions, and order statistics. We discuss the maximum likelihood estimation of the model parameters. We assess the performance of the maximum likelihood estimators in terms of biases, variances, and mean squared errors by means of a simulation study. The usefulness of the new model is illustrated by means of two real data sets, for which it provides consistently better fits than other competitive models. For the glass fibres data, the Odd Lindley Weibull model outperforms the Weibull, exponential Weibull, Kumaraswamy Weibull, beta Weibull, and three-parameter Odd Lindley Weibull models, so it is a good alternative to these models; for the time-to-failure data, it outperforms the Weibull, Lindley Weibull, transmuted complementary Weibull geometric, and beta Weibull models, so it is a good alternative to these models as well.
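
The simulation study mentioned above (biases, variances, and mean squared errors of the maximum likelihood estimators) follows a standard recipe that the sketch below illustrates with a plain two-parameter Weibull as a stand-in, since the Odd Lindley Weibull density is not reproduced here; the sample size and parameter values are assumptions.

```python
# A minimal sketch of an MLE simulation study: repeatedly simulate, fit, and
# summarize bias, variance, and MSE of the estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
shape_true, scale_true, n, reps = 1.5, 2.0, 100, 500

estimates = np.empty((reps, 2))
for r in range(reps):
    sample = stats.weibull_min.rvs(shape_true, scale=scale_true, size=n,
                                   random_state=rng)
    shape_hat, _, scale_hat = stats.weibull_min.fit(sample, floc=0)
    estimates[r] = shape_hat, scale_hat

bias = estimates.mean(axis=0) - np.array([shape_true, scale_true])
variance = estimates.var(axis=0)
mse = bias ** 2 + variance
print("bias:", bias, "variance:", variance, "MSE:", mse)
```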


2014 ◽  
Vol 10 (S306) ◽  
pp. 90-93
Author(s):  
Michael Vespe

Abstract In the statistical framework of likelihood-free inference, the posterior distribution of model parameters is explored via simulation rather than direct evaluation of the likelihood function, permitting inference in situations where this function is analytically intractable. We consider the problem of estimating cosmological parameters using measurements of the weak gravitational lensing of galaxies; specifically, we propose the use of a likelihood-free approach to investigate the posterior distribution of some parameters in the ΛCDM model upon observing a large number of sheared galaxies. The choice of summary statistic used when comparing observed data and simulated data in the likelihood-free inference framework is critical, so we work toward a principled method of choosing the summary statistic, aiming for dimension reduction while seeking a statistic that is as close as possible to being sufficient for the parameters of interest.
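
The basic likelihood-free (rejection ABC) loop that the abstract refers to can be written in a few lines: draw parameters from the prior, simulate data, compare a summary statistic with the observed one, and keep draws that fall within a tolerance. The toy Gaussian model, summary statistic, and tolerance below are assumptions standing in for the lensing simulator and the principled summary choice discussed above.

```python
# A minimal rejection-ABC sketch, assuming a Gaussian toy model with unknown
# mean; the sample mean is the (assumed) summary statistic.
import numpy as np

rng = np.random.default_rng(6)
observed = rng.normal(0.8, 1.0, size=500)           # pretend observed data
obs_summary = observed.mean()                       # chosen summary statistic

accepted = []
for _ in range(20_000):
    theta = rng.uniform(-2.0, 2.0)                  # draw from the prior
    simulated = rng.normal(theta, 1.0, size=observed.size)
    if abs(simulated.mean() - obs_summary) < 0.05:  # tolerance epsilon
        accepted.append(theta)

accepted = np.array(accepted)
print(f"{accepted.size} draws accepted, posterior mean ~ {accepted.mean():.3f}")
```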

