ON THE ESTIMATION OF SURVIVAL FUNCTION UNDER RANDOM CENSORSHIP

2002 ◽  
Vol 31 (6) ◽  
pp. 961-975 ◽  
Author(s):  
Agnieszka Rossa
1992 ◽  
Vol 42 (1-2) ◽  
pp. 75-86 ◽  
Author(s):  
D. Dhar

The paper deals with the estimation of the survival function of a particular random variable of interest in the proportional hazard model of random censorship, under the condition that the data are randomly censored by k independent variables. Estimators are constructed using results from Abdushukurov (1984), Cheng and Lin (1984), Ebrahimi (1985) and Kaplan and Meier (1958). The asymptotic behaviour of all these estimators is investigated. Numerical results are provided to calculate the efficiencies of the ACL and Ebrahimi estimators in comparison with the classical Kaplan-Meier (1958) estimator.
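The product-limit construction underlying the Kaplan-Meier (1958) estimator referenced above can be sketched as follows. This is an illustrative Python implementation, not the paper's code; the function name and the toy data are ours. At each distinct event time, the current survival estimate is multiplied by one minus the fraction of at-risk subjects who fail there, while censored observations only shrink the risk set:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier (1958) product-limit estimate of the survival function S(t).

    times  : observed times (event or censoring)
    events : 1 if the event was observed, 0 if right-censored
    Returns a list of (event time, S(t)) pairs.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)

    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):        # distinct event times
        d = np.sum((times == t) & (events == 1))   # events observed at t
        r = np.sum(times >= t)                     # number at risk just before t
        s *= 1.0 - d / r                           # product-limit update
        surv.append((float(t), s))
    return surv

# Toy data: five subjects, two of them right-censored (event flag 0)
est = kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 0])
# -> approximately [(1.0, 0.8), (2.0, 0.6), (3.0, 0.3)]
```

Note how the censored observation at time 2 does not trigger a downward step but still counts in the risk set for the step at time 2, which is exactly what distinguishes the product-limit estimate from the naive empirical survival function.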


2020 ◽  
Vol 72 (2) ◽  
pp. 111-121
Author(s):  
Abdurakhim Akhmedovich Abdushukurov ◽  
Rustamjon Sobitkhonovich Muradov

At present there are several approaches to the estimation of survival functions of vectors of lifetimes. However, some of these estimators are either inconsistent or not fully defined over the range of joint survival functions, and are therefore not applicable in practice. In this article, we consider three types of estimates (of exponential-hazard, product-limit, and relative-risk power structures) for the bivariate survival function, obtained by replacing the number of summands in the empirical estimates with a sequence of Poisson random variables. It is shown that these estimates are asymptotically equivalent. AMS 2000 subject classification: 62N01.
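The Poissonization device mentioned in the abstract, replacing a fixed number of summands with a Poisson random count, can be illustrated in the simplest univariate case. This is a hedged sketch under our own assumptions (a univariate empirical survival function with Exp(1) lifetimes), not the authors' bivariate estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

def poissonized_survival(draw, n, t):
    """Empirical survival estimate in which the fixed sample size n is
    replaced by a Poisson(n) random count N (the Poissonization device;
    illustrative univariate sketch, not the paper's bivariate estimators).

    draw : callable k -> array of k i.i.d. lifetimes
    """
    N = rng.poisson(n)       # random number of summands
    if N == 0:
        return 1.0           # empty sample: no observed failures by time t
    x = draw(N)              # draw N lifetimes
    return float(np.mean(x > t))

# Exp(1) lifetimes: the true survival function is S(t) = exp(-t),
# so the estimate at t = 1 should be close to exp(-1) ~ 0.368
est = poissonized_survival(lambda k: rng.exponential(1.0, k), 5000, 1.0)
```

Because N concentrates around n, the Poissonized estimate behaves like the ordinary empirical estimate for large n, which is the intuition behind the asymptotic equivalence the article establishes for the three bivariate structures.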


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 518
Author(s):  
Osamu Komori ◽  
Shinto Eguchi

Clustering is a major unsupervised learning technique and is widely applied in data mining and statistical data analysis. Typical examples include k-means, fuzzy c-means, and Gaussian mixture models, which are categorized as hard, soft, and model-based clustering, respectively. We propose a new clustering method, called Pareto clustering, based on the Kolmogorov-Nagumo average, which is defined via the survival function of the Pareto distribution. The proposed algorithm subsumes all the aforementioned clusterings as well as maximum-entropy clustering. We introduce a probabilistic framework for the proposed method, in which the underlying distribution that yields consistency is discussed. We build a minorization-maximization algorithm to estimate the parameters of Pareto clustering. We compare its performance with existing methods in simulation studies and benchmark dataset analyses to demonstrate its practical utility.
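The Kolmogorov-Nagumo (quasi-arithmetic) average on which Pareto clustering is built has the general form phi^{-1}(mean(phi(x_i))) for an invertible generator phi; the paper's specific choice of phi comes from the Pareto survival function. A minimal sketch of the general form, with two standard generator choices of our own picking rather than the paper's:

```python
import numpy as np

def kn_average(x, phi, phi_inv):
    """Kolmogorov-Nagumo (quasi-arithmetic) average: phi^{-1}(mean(phi(x_i))).

    phi     : monotone generator function
    phi_inv : its inverse
    """
    x = np.asarray(x, dtype=float)
    return phi_inv(np.mean(phi(x)))

# phi = identity recovers the ordinary arithmetic mean
am = kn_average([1, 2, 3], lambda v: v, lambda v: v)   # -> 2.0

# phi = log gives the geometric mean
gm = kn_average([1, 4, 16], np.log, np.exp)            # -> 4.0 (up to float error)
```

Different generators interpolate between averaging behaviours, which is how a single Kolmogorov-Nagumo formulation can subsume hard, soft, model-based, and maximum-entropy clustering as the abstract states.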


Risks ◽  
2018 ◽  
Vol 6 (3) ◽  
pp. 91 ◽  
Author(s):  
Riccardo Gatto

In this article we introduce the stability analysis of a compound sum: it consists of computing the standardized variation of the survival function of the sum that results from an infinitesimal perturbation of the common distribution of the summands. Stability analysis is complementary to classical sensitivity analysis, which consists of computing the derivative of an important model indicator with respect to a model parameter. We obtain a computational formula for this stability from the saddlepoint approximation. We apply the formula to the compound Poisson insurer loss with gamma-distributed individual claim amounts and to the compound geometric loss with Weibull-distributed individual claim amounts.
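A brute-force Monte Carlo estimate of the survival function of a compound Poisson sum provides a useful baseline against which an analytic saddlepoint approximation such as the paper's can be checked. This is an illustrative sketch with arbitrary parameter values, not the paper's formula:

```python
import numpy as np

rng = np.random.default_rng(1)

def compound_poisson_survival(t, lam, draw_claim, n_sim=50_000):
    """Monte Carlo estimate of P(S > t) for the compound sum
    S = X_1 + ... + X_N, with N ~ Poisson(lam) and X_i i.i.d. claims
    (illustrative check; the paper derives a saddlepoint approximation).

    draw_claim : callable k -> array of k i.i.d. claim amounts
    """
    counts = rng.poisson(lam, size=n_sim)            # claim counts per scenario
    totals = np.array([draw_claim(k).sum() if k else 0.0 for k in counts])
    return float(np.mean(totals > t))

# Compound Poisson insurer loss with Gamma(shape=2, scale=1) claims, lam = 3:
# E[S] = lam * E[X] = 6, so the survival probability at t = 10 is a tail value
p = compound_poisson_survival(10.0, 3.0, lambda k: rng.gamma(2.0, 1.0, k))
```

Simulation like this scales poorly for the far tail, which is precisely where saddlepoint approximations shine: they deliver accurate relative error deep in the tail at negligible cost.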

