The Cause Specific Hazard Quantile Function

2018 ◽  
Vol 48 (1) ◽  
pp. 56-69
Author(s):  
Sankaran Paduthol ◽  
Isha Dewan ◽  
Dileep Kumar M

In this paper, we discuss the modeling and analysis of competing risks data using the quantile function. We introduce and study the cause specific hazard quantile function. We present competing risks models using various functional forms for the cause specific hazard quantile functions. A non-parametric estimator of the cause specific hazard quantile function is derived, and its asymptotic properties are studied. Simulation studies are carried out to assess the performance of the estimator. Finally, we apply the proposed procedure to real-life data sets.
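For context, the hazard quantile function is commonly defined in the quantile-based reliability literature as H(u) = f(Q(u))/(1 − u). A minimal numerical sketch for a single (Weibull) cause, assuming this textbook definition rather than the authors' non-parametric estimator:

```python
import numpy as np

# Hazard quantile function H(u) = f(Q(u)) / (1 - u), illustrated for a
# Weibull(shape k, scale s) "cause". This is a textbook construction,
# not the paper's estimator.
def weibull_quantile(u, k, s):
    return s * (-np.log(1.0 - u)) ** (1.0 / k)

def weibull_pdf(x, k, s):
    return (k / s) * (x / s) ** (k - 1) * np.exp(-((x / s) ** k))

def hazard_quantile(u, k, s):
    q = weibull_quantile(u, k, s)
    return weibull_pdf(q, k, s) / (1.0 - u)

# Sanity check: for the Weibull, H(u) equals the hazard rate
# (k/s) * (Q(u)/s)**(k-1) evaluated at the u-th quantile.
u = np.linspace(0.05, 0.95, 10)
direct = (2.0 / 1.5) * (weibull_quantile(u, 2.0, 1.5) / 1.5) ** (2.0 - 1.0)
assert np.allclose(hazard_quantile(u, 2.0, 1.5), direct)
```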

Author(s):  
Adebisi Ade Ogunde ◽  
Gbenga Adelekan Olalude ◽  
Donatus Osaretin Omosigho

In this paper we introduce the Gompertz Gumbel II (GG II) distribution, which generalizes the Gumbel II distribution. The new distribution is a flexible exponential-type distribution that can be used to model real-life data with varying degrees of asymmetry. Unlike the Gumbel II distribution, which exhibits a monotone decreasing failure rate, the new distribution is useful for modeling unimodal (bathtub-shaped) failure rates, which sometimes characterize real-life data. Structural properties of the new distribution, namely the density function, hazard function, moments, quantile function, moment generating function, order statistics, stochastic ordering, and Rényi entropy, are obtained. For the main formulas related to our model, we present numerical studies that illustrate the practicality of computational implementation using statistical software. We also present a Monte Carlo simulation study to evaluate the performance of the maximum likelihood estimators for the GG II model. Three life data sets are used for applications in order to illustrate the flexibility of the new model.
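The Gumbel type-II baseline has CDF F(x) = exp(−a x⁻ᵇ), so its quantile function is available in closed form and supports inverse-transform sampling. A sketch of the baseline only (the GG II quantile function itself is not reproduced here, and the parameterization is illustrative):

```python
import numpy as np

# Gumbel type-II baseline: F(x) = exp(-a * x**(-b)), x > 0, a, b > 0.
# Inverting gives the quantile function Q(u) = (a / (-log u))**(1/b).
def gumbel2_cdf(x, a, b):
    return np.exp(-a * x ** (-b))

def gumbel2_quantile(u, a, b):
    return (a / (-np.log(u))) ** (1.0 / b)

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
sample = gumbel2_quantile(u, a=1.0, b=2.0)  # inverse-transform sampling

# Round trip, and an empirical check at the median.
assert np.allclose(gumbel2_cdf(gumbel2_quantile(0.3, 1.0, 2.0), 1.0, 2.0), 0.3)
assert abs(np.mean(sample <= gumbel2_quantile(0.5, 1.0, 2.0)) - 0.5) < 0.02
```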


2020 ◽  
Vol 9 (6) ◽  
pp. 90
Author(s):  
A. A. Ogunde ◽  
S. T. Fayose ◽  
B. Ajayi ◽  
D. O. Omosigho

In this work, we introduce a new generalization of the Inverted Weibull distribution, called the alpha power Extended Inverted Weibull distribution, using the alpha power transformation method. This approach adds an extra parameter to the baseline distribution. The statistical properties of this distribution, including the mean, variance, coefficient of variation, quantile function, median, ordinary and incomplete moments, skewness, kurtosis, moment and moment generating functions, reliability analysis, Lorenz and Bonferroni curves, Rényi entropy and order statistics, are studied. We consider the method of maximum likelihood for estimating the model parameters, and the observed information matrix is derived. A simulation study and three real-life data sets are presented to demonstrate the effectiveness of the new model.
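The alpha power transformation is usually written F(x) = (α^G(x) − 1)/(α − 1) for α > 0, α ≠ 1, where G is the baseline CDF. A sketch applying it to an inverted Weibull baseline; the parameter names are illustrative, not necessarily the authors':

```python
import numpy as np

# Alpha power transformation of a baseline CDF G:
#   F(x) = (alpha**G(x) - 1) / (alpha - 1),  alpha > 0, alpha != 1.
# Baseline here: inverted Weibull, G(x) = exp(-(s / x)**beta), x > 0.
def inv_weibull_cdf(x, s, beta):
    return np.exp(-((s / x) ** beta))

def apt_cdf(x, alpha, s, beta):
    g = inv_weibull_cdf(x, s, beta)
    return (alpha ** g - 1.0) / (alpha - 1.0)

x = np.linspace(0.1, 20.0, 500)
F = apt_cdf(x, alpha=3.0, s=1.0, beta=1.5)

# A valid CDF on the support: strictly increasing from ~0 to ~1.
assert np.all(np.diff(F) > 0)
assert F[0] < 0.05 and F[-1] > 0.95
```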


2021 ◽  
Vol 3 (2) ◽  
pp. 81-94
Author(s):  
Sule Ibrahim ◽  
Sani Ibrahim Doguwa ◽  
Audu Isah ◽  
Haruna, M. Jibril

Many statisticians have developed and proposed new distributions by extending existing ones. The distributions are extended by adding one or more parameters to the baseline distributions to make them more flexible in fitting different kinds of data. In this study, a new four-parameter lifetime distribution called the Topp Leone Kumaraswamy Lomax distribution was introduced using a family of distributions proposed in the literature. Some mathematical properties of the distribution, such as the moments, moment generating function, quantile function, survival, hazard, reversed hazard and odds functions, were presented. The estimation of the parameters by the maximum likelihood method was discussed. Three real-life data sets, representing the failure times of the air conditioning system of an airplane, the remission times (in months) of a random sample of one hundred and twenty-eight (128) bladder cancer patients, and Alumina (Al2O3) data, were used to show the fit and flexibility of the new distribution over some lifetime distributions in the literature. The results showed that the new distribution provides a better fit to the three data sets considered.
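The Lomax baseline has CDF F(x) = 1 − (1 + x/s)⁻ᵃ with a closed-form quantile function, which is what makes quantile-based properties of such extended families tractable. A sketch of the baseline only (the Topp Leone and Kumaraswamy layers are omitted):

```python
import numpy as np

# Lomax baseline used by the new family: F(x) = 1 - (1 + x/s)**(-a).
# Inverting gives Q(u) = s * ((1 - u)**(-1/a) - 1).
def lomax_cdf(x, a, s):
    return 1.0 - (1.0 + x / s) ** (-a)

def lomax_quantile(u, a, s):
    return s * ((1.0 - u) ** (-1.0 / a) - 1.0)

# Round trip: F(Q(u)) = u across the unit interval.
u = np.linspace(0.01, 0.99, 99)
assert np.allclose(lomax_cdf(lomax_quantile(u, 2.0, 1.5), 2.0, 1.5), u)
```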


Author(s):  
Oseghale O. I. ◽  
Akomolafe A. A. ◽  
Gayawan E.

This work focuses on the four-parameter Exponentiated Cubic Transmuted Weibull distribution, which finds application mostly in reliability analysis, especially for data that are non-monotone and bimodal. Structural properties such as the moments, moment generating function, quantile function, Rényi entropy, and order statistics were investigated. The maximum likelihood estimation technique was used to estimate the parameters of the distribution. Application to two real-life data sets shows the applicability of the distribution in modeling real data.


2013 ◽  
Vol 4 (2) ◽  
Author(s):  
Yan-Xia Lin ◽  
Phillip Wise

This paper considers the scenario in which all data entries in a confidentialised unit record file are masked by multiplicative noise, regardless of whether the unit records are sensitive and regardless of whether the masked variables are dependent or independent variables in the underlying regression analysis. A technique is introduced for estimating the parameters of a regression model, originally fitted to the unmasked data, from the masked data. Several simulation studies and a real-life data application are presented.
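The moment-correction idea behind such estimators can be illustrated for simple linear regression with unit-mean multiplicative noise of known second moment. This is a minimal sketch under those assumptions, not the authors' exact technique:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
a_true, b_true = 2.0, 3.0

x = rng.normal(5.0, 2.0, n)
y = a_true + b_true * x + rng.normal(0.0, 1.0, n)

# Mask both variables with independent multiplicative noise of mean 1.
u = rng.uniform(0.8, 1.2, n)
v = rng.uniform(0.8, 1.2, n)
Eu2 = 1.0 + 0.4 ** 2 / 12.0        # E[u^2] = 1 + Var(u), known to the analyst
xm, ym = x * u, y * v              # released (masked) data

# Naive slope from masked data is attenuated, because Var(xm) > Var(x).
naive = np.cov(xm, ym)[0, 1] / np.var(xm)

# Moment correction: Cov(xm, ym) = Cov(x, y) for unit-mean independent
# noise, and Var(x) = E[xm^2] / E[u^2] - E[xm]^2.
var_x = np.mean(xm ** 2) / Eu2 - np.mean(xm) ** 2
corrected = np.cov(xm, ym)[0, 1] / var_x

assert abs(corrected - b_true) < abs(naive - b_true)
assert abs(corrected - b_true) < 0.05
```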


2008 ◽  
pp. 1231-1249
Author(s):  
Jaehoon Kim ◽  
Seong Park

Much of the research regarding streaming data has focused only on real-time querying and analysis of the recent portion of a data stream that fits in memory. However, as data stream mining, or tracking of past data streams, is often required, it becomes necessary to store large volumes of streaming data in stable storage. Moreover, as stable storage has restricted capacity, past data streams must be summarized. The summarization must be performed periodically because streaming data flows continuously, quickly, and endlessly. Therefore, in this paper, we propose an efficient periodic summarization method with flexible storage allocation. It improves the overall estimation error by flexibly adjusting the size of the summarized data for each local time section. Additionally, as the processing overhead of compression and the disk I/O cost of decompression can be important factors for quick summarization, we also consider setting the proper size of the data stream to be summarized at a time. Experimental results with artificial as well as real-life data show that our flexible approach is more efficient than the existing fixed approach.
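The flexible-allocation idea can be sketched as follows: each time section receives a histogram budget proportional to its variability, under a fixed total budget. The variability proxy (standard deviation) and parameter names are illustrative, not the paper's actual error measure:

```python
import numpy as np

# Flexible storage allocation: sections with higher variability get more
# histogram buckets, subject to a fixed total budget and a floor.
def allocate_buckets(sections, total_buckets, min_buckets=2):
    spread = np.array([np.std(s) for s in sections])
    weights = spread / spread.sum()
    return np.maximum(min_buckets,
                      np.round(weights * total_buckets).astype(int))

def summarize(section, n_buckets):
    counts, edges = np.histogram(section, bins=n_buckets)
    return counts, edges

rng = np.random.default_rng(1)
sections = [rng.normal(0, sd, 1000) for sd in (0.5, 1.0, 4.0)]
alloc = allocate_buckets(sections, total_buckets=60)

# The most variable section receives the most buckets, and each summary
# still accounts for every point in its section.
assert alloc[2] == alloc.max()
summaries = [summarize(s, b) for s, b in zip(sections, alloc)]
assert all(counts.sum() == 1000 for counts, _ in summaries)
```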


2008 ◽  
Vol 20 (4) ◽  
pp. 1042-1064
Author(s):  
Maciej Pedzisz ◽  
Danilo P. Mandic

A homomorphic feedforward network (HFFN) for nonlinear adaptive filtering is introduced. This is achieved by a two-layer feedforward architecture with an exponential hidden layer and logarithmic preprocessing step. This way, the overall input-output relationship can be seen as a generalized Volterra model, or as a bank of homomorphic filters. Gradient-based learning for this architecture is introduced, together with some practical issues related to the choice of optimal learning parameters and weight initialization. The performance and convergence speed are verified by analysis and extensive simulations. For rigor, the simulations are conducted on artificial and real-life data, and the performances are compared against those obtained by a sigmoidal feedforward network (FFN) with identical topology. The proposed HFFN proved to be a viable alternative to FFNs, especially in the critical case of online learning on small- and medium-scale data sets.
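The logarithmic-preprocessing / exponential-hidden-layer structure can be sketched as a forward pass. Since exp(Σⱼ wᵢⱼ log xⱼ) = Πⱼ xⱼ^wᵢⱼ, each hidden unit is a monomial in the inputs, so the output is a weighted sum of monomials (a Volterra-like model). The weights below are illustrative:

```python
import numpy as np

# HFFN-style forward pass: log preprocessing, exponential hidden units,
# linear output layer. Hidden unit i computes prod_j x_j ** W1[i, j].
def hffn_forward(x, W1, w2):
    hidden = np.exp(np.log(x) @ W1.T)   # shape: (n_samples, n_hidden)
    return hidden @ w2

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=(4, 3))   # log preprocessing needs x > 0
W1 = np.array([[1.0, 0.0, 0.0],          # unit 0 passes x1 through
               [1.0, 1.0, 0.0]])         # unit 1 forms the product x1 * x2
w2 = np.array([2.0, 3.0])

out = hffn_forward(x, W1, w2)
expected = 2.0 * x[:, 0] + 3.0 * x[:, 0] * x[:, 1]
assert np.allclose(out, expected)
```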


Author(s):  
SANGHAMITRA BANDYOPADHYAY ◽  
UJJWAL MAULIK ◽  
MALAY KUMAR PAKHIRA

An efficient partitional clustering technique, called SAKM-clustering, that integrates the power of simulated annealing for obtaining minimum energy configuration, and the searching capability of K-means algorithm is proposed in this article. The clustering methodology is used to search for appropriate clusters in multidimensional feature space such that a similarity metric of the resulting clusters is optimized. Data points are redistributed among the clusters probabilistically, so that points that are farther away from the cluster center have higher probabilities of migrating to other clusters than those which are closer to it. The superiority of the SAKM-clustering algorithm over the widely used K-means algorithm is extensively demonstrated for artificial and real life data sets.
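The probabilistic redistribution step can be sketched with a softmax over negative squared distances, annealed toward the greedy nearest-centre rule as the temperature drops. The schedule, initialization, and probability formula here are illustrative, not the authors' exact algorithm:

```python
import numpy as np

# SA-flavoured K-means sketch: points are reassigned at random with
# probabilities softmax(-dist**2 / T), so distant points are more likely
# to migrate; as T -> 0 this approaches the usual nearest-centre rule.
def sa_kmeans(X, k, T0=5.0, cooling=0.8, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[:: len(X) // k][:k].copy()  # simple strided init
    T = T0
    for _ in range(iters):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        logits = -d2 / max(T, 1e-12)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        labels = np.array([rng.choice(k, p=row) for row in p])
        for j in range(k):                  # K-means centre update
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
        T *= cooling                        # annealing schedule
    return centres, labels

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centres, labels = sa_kmeans(X, k=2)

# The two well-separated blobs should end up in different clusters.
assert labels[0] != labels[50]
assert np.all(labels[:50] == labels[0]) and np.all(labels[50:] == labels[50])
```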


Author(s):  
Mohamed Ibrahim Mohamed ◽  
Laba Handique ◽  
Subrata Chakraborty ◽  
Nadeem Shafique Butt ◽  
Haitham M. Yousof

In this article an attempt is made to introduce a new extension of the Fréchet model, called the Xgamma Fréchet model. Some of its properties are derived. The estimation of the parameters via different estimation methods is discussed. The performance of the proposed estimation methods is investigated through simulations as well as real-life data sets. The potentiality of the proposed model is established through the modelling of two real-life data sets. The results show a clear preference for the proposed model compared to several known competing ones.


2020 ◽  
Author(s):  
Hanjie Shen ◽  
Jong-Hyeon Jeong ◽  
Loren K Mell

In this article, we propose a Proportional Relative Hazards (PRH) model to differentiate subjects according to their risk for a primary event relative to competing events. The model estimates effects on the baseline ratio of the hazard for a primary event, or set of primary events, relative to the hazard for a competing event, or set of competing events (ω+ ratio). An analogous model is presented to estimate effects on the baseline ratio of the hazard for a primary event (or set of events) relative to the hazard for all events (ω ratio). A weighted regression method is introduced, along with practical presentation of risk-stratification using the PRH model in breast and head and neck cancer data sets.
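Under constant cause-specific hazards, the ω ratio reduces to the probability that a subject's first event is the primary one, which allows a toy simulation of the quantities being modeled (this illustrates the ratios only, not the PRH regression itself):

```python
import numpy as np

# Toy illustration of the omega ratios under constant cause-specific
# hazards: omega  = h_primary / h_all      = lam1 / (lam1 + lam2),
#          omega+ = h_primary / h_competing = lam1 / lam2.
# With exponential latent times, the first event is primary with
# probability lam1 / (lam1 + lam2), so omega is estimable from counts.
rng = np.random.default_rng(7)
lam1, lam2 = 0.3, 0.1            # primary vs competing event hazards
n = 100_000
t1 = rng.exponential(1 / lam1, n)
t2 = rng.exponential(1 / lam2, n)
primary_first = t1 < t2

omega_hat = primary_first.mean()              # estimates lam1 / (lam1 + lam2)
omega_plus_hat = omega_hat / (1 - omega_hat)  # estimates lam1 / lam2

assert abs(omega_hat - lam1 / (lam1 + lam2)) < 0.01
assert abs(omega_plus_hat - lam1 / lam2) < 0.1
```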

